Calculate Speed Unit Conversions | TheCalculatorKing.com
All-in-One Speed Converter
This all-in-one speed converter lets you calculate all speed units at once. Convert between speed units including foot per hour, foot per minute, foot per second, kilometer per hour, kilometer per
minute, kilometer per second, knot (std), knot (UK), mach, meter per hour, meter per minute, meter per second, mile per hour, mile per minute, mile per second, speed of light, speed of sound, yard
per hour, yard per minute, and yard per second.
All Speed Converters
The speed converters below provide more detail about converting between the individual speed units. Each one includes a definition of the individual speed units, step-by-step instructions on
performing the conversion, conversion examples, together with conversion charts and other visualisations.
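As a quick, illustrative sketch of how any such converter works (not taken from the site itself), speeds can be converted by first normalising to meters per second; the factors below are the standard definitions of each unit:

```python
# Convert between speed units via meters per second as the common base.
# Each factor is the standard definition of that unit expressed in m/s.
TO_MS = {
    "m/s": 1.0,
    "km/h": 1000.0 / 3600.0,
    "mph": 0.44704,           # 1609.344 m per mile, 3600 s per hour
    "ft/s": 0.3048,
    "knot": 1852.0 / 3600.0,  # one nautical mile (1852 m) per hour
}

def convert(value, from_unit, to_unit):
    """Convert a speed from one supported unit to another."""
    return value * TO_MS[from_unit] / TO_MS[to_unit]

print(convert(100, "km/h", "mph"))   # roughly 62.1
print(convert(30, "knot", "km/h"))   # roughly 55.6
```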
What Speed Units Are Supported?
Name | Symbol | Measurement System | Description
foot per second | ft/s | United States customary / Imperial system | Foot per second (ft/s) is a unit of linear velocity measurement. It represents the rate at which an object or particle travels a distance of one foot in one second. Foot per second is commonly used to quantify the speed or velocity of objects or the flow rate of fluids.
foot per minute | ft/m | United States customary / Imperial system | Feet per minute (ft/min) is a unit of linear velocity measurement. It represents the rate at which an object or particle travels a distance of one foot in one minute. Feet per minute is used to quantify slower speeds or rates of movement over longer time intervals.
foot per hour | ft/h | United States customary / Imperial system | Feet per hour (ft/hr) is a unit of linear velocity measurement. It represents the rate at which an object or particle travels a distance of one foot in one hour. Feet per hour is used to quantify extremely slow speeds or rates of movement over extended time intervals.
kilometer per second | km/s | International System of Units (SI) / Metric System | Kilometers per second (km/s) is a unit of linear velocity measurement. It represents the rate at which an object or particle travels a distance of one kilometer in one second. Kilometers per second is commonly used to quantify high speeds or rapid rates of movement.
kilometer per minute | km/m | International System of Units (SI) / Metric System | Kilometers per minute (km/min) is a unit of linear velocity measurement. It represents the rate at which an object or particle travels a distance of one kilometer in one minute. Kilometers per minute is used to quantify speeds or rates of movement over relatively short time intervals.
kilometer per hour | km/h | International System of Units (SI) / Metric System | Kilometers per hour (km/h) is a unit of linear velocity measurement. It represents the rate at which an object or particle travels a distance of one kilometer in one hour. Kilometers per hour is commonly used to quantify speeds or rates of movement over longer time intervals.
knot (std) | kn, kt | Non-SI (International) | A knot is a unit of speed typically used in maritime and aviation contexts. It represents one nautical mile per hour, where a nautical mile is defined as one minute of latitude on a navigational chart. Knots are commonly used to measure the speed of ships, boats, and aircraft.
knot (UK) | kn, kt | Non-SI (International) | Prior to 1970, the knot unit of speed was defined in the UK in terms of the imperial measurement system. The knot was defined as the speed of one nautical mile per hour (nm/h), where a nautical mile was defined as 6,080 feet, which is equivalent to 1,853.184 meters or approximately 1.151 miles.
mach | Ma | Non-SI (International) | Mach is a unit of speed used to measure the velocity of an object relative to the speed of sound in the surrounding medium. It is named after Ernst Mach, an Austrian physicist. Mach 1 represents the speed of sound, so when an object is traveling at Mach 2, it is moving twice as fast as the speed of sound.
meter per second | m/s | International System of Units (SI) / Metric System | Meters per second is a unit of speed that represents the distance traveled in meters divided by the time taken in seconds. It is a fundamental unit of speed in the International System of Units (SI).
meter per minute | m/min | International System of Units (SI) / Metric System | Meters per minute is a unit of speed that represents the distance traveled in meters divided by the time taken in minutes. It is derived from the fundamental unit of length, the meter, and the unit of time, the minute.
meter per hour | m/h | International System of Units (SI) / Metric System | Meter per hour is a unit of speed that represents the distance traveled in meters divided by the time taken in hours. It is derived from the fundamental unit of length, the meter, and the unit of time, the hour.
mile per second | mi/s | United States customary / Imperial system | Mile per second is a unit of speed that represents the distance traveled in miles divided by the time taken in seconds. It is a relatively high-speed measurement used to describe rapid velocities.
mile per minute | mi/m | United States customary / Imperial system | Mile per minute is a unit of speed that represents the distance traveled in miles divided by the time taken in minutes. It is derived from the unit of length, the mile, and the unit of time, the minute.
mile per hour | mph | United States customary / Imperial system | Mile per hour (abbreviated as mph) is a unit of speed commonly used in countries that follow the imperial system of measurement, such as the United States and the United Kingdom. It measures the distance traveled in miles divided by the time taken in hours.
yard per second | yd/s | United States customary / Imperial system | Yard per second is a unit of speed that measures the distance traveled in yards divided by the time taken in seconds. It is primarily used in countries that follow the imperial system of measurement, such as the United States and the United Kingdom.
yard per minute | yd/m | United States customary / Imperial system | Yard per minute is a unit of speed that measures the distance traveled in yards divided by the time taken in minutes. It is primarily used in countries that follow the imperial system of measurement, such as the United States and the United Kingdom.
yard per hour | yd/h | United States customary / Imperial system | Yard per hour is a unit of speed that measures the distance traveled in yards divided by the time taken in hours. It is primarily used in countries that follow the imperial system of measurement, such as the United States and the United Kingdom.
speed of light | c | SI Defining Constant | The speed of light is a fundamental constant in physics that represents the maximum speed at which information or energy can travel through space. In a vacuum, the speed of light is approximately 299,792,458 meters per second (or about 186,282 miles per second).
speed of sound | c | Other | The speed of sound is the rate at which sound waves propagate through a medium, such as air, water, or solids. It represents the speed at which disturbances or vibrations in the medium travel and carry sound energy. The speed of sound can vary depending on the properties of the medium.
ME1303 Gas Dynamics and Jet Propulsion Questions Bank 2014
Anna University, Chennai
Sub. Code/Name: ME1303 Gas Dynamics and Jet Propulsion Year/Sem: III/V
PART- A (2 Marks)
1) State the difference between compressible fluid and incompressible fluid ?
2) Define stagnation pressure?
3) Express the stagnation enthalpy in terms of static enthalpy and velocity of flow?
4) Explain Mach cone and Mach angle?
5) Define adiabatic process?
6) Define Mach number?
7) Define zone of action and zone of silence?
8) Define closed and open system?
9) What is the difference between intensive and extensive properties?
10) Distinguish between Mach wave and normal shock?
Part - B (16 Marks)
1) Derive the energy equation
a²/(γ - 1) + c²/2 = c²max/2 = a0²/(γ - 1) = h0
stating the assumptions used. An air jet (γ = 1.4, R = 287 J/Kg K) at 400 K has sonic velocity. Determine:
1. Velocity of sound at 400 K (2)
2. Velocity of sound at the stagnation conditions (4)
3. Maximum velocity of the jet (4)
4. Stagnation enthalpy (4)
5. Crocco number (2)
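The following Python sketch is not part of the original question bank; it shows one way to check the numbers for question 1, assuming a perfect gas and the adiabatic energy equation written above:

```python
from math import sqrt

gamma, R, T = 1.4, 287.0, 400.0             # air jet data from the question

a = sqrt(gamma * R * T)                     # velocity of sound at 400 K
c = a                                       # jet is stated to be at sonic velocity (M = 1)
T0 = T * (1 + (gamma - 1) / 2)              # stagnation temperature for M = 1
a0 = sqrt(gamma * R * T0)                   # velocity of sound at stagnation conditions
c_max = a0 * sqrt(2 / (gamma - 1))          # maximum possible jet velocity
cp = gamma * R / (gamma - 1)
h0 = cp * T0                                # stagnation enthalpy, J/kg
crocco = c / c_max                          # Crocco number

print(a, a0, c_max, h0, crocco)
```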
2) The pressure, temperature and Mach number at the entry of a flow passage are 2.45 bar, 26.5˚C
and 1.4 respectively. If the exit Mach number is 2.5, determine for adiabatic flow
of a perfect gas (γ = 1.3, R = 0.469 KJ/Kg K). (16)
3) Air (γ = 1.4, R = 287.43 J/Kg K) enters a straight axisymmetric duct at 300 K, 3.45 bar and
150 m/s and leaves it at 277 K, 500 cm². Assuming adiabatic flow, determine:
1. stagnation temperature, (4)
2. maximum velocity, (4)
3. mass flow rate, and, (4)
4. Area of cross-section at exit. (4)
4) An aircraft flies at 800 Km/hr at an altitude of 10,000 meters (T=223.15 K, P=0.264 bar). The air is reversibly compressed in an inlet diffuser. If the Mach number at the exit of the diffuser is
0.36 determine (a) entry Mach number and (b) velocity, pressure and temperature of air at
diffuser exit. (16)
5) Air (Cp = 1.05 KJ/Kg K, γ = 1.38) at p1 = 3 × 10⁵ N/m² and T1 = 500 K flows with a velocity of
200 m/s in a 30 cm diameter duct. Calculate mass flow rate, stagnation temperature, Mach number, and Stagnation pressure values assuming the flow as compressible and
incompressible. (16)
6) (a) What is the effect of Mach number on compressibility? Prove that for
γ = 1.4, (p0 - p) / (½ ρ c²) = 1 + M²/4 + M⁴/40 + ... (8)
(b) Show that for sonic flow the deviation between the compressible and incompressible
flow values of the pressure coefficient of a perfect gas (γ = 1.4) is about 27.5 per cent. (8)
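Not part of the original bank: a small Python check of the series in part (a), using the exact isentropic relation p0/p = (1 + ((γ - 1)/2) M²)^(γ/(γ - 1)); at M = 1 the compressible value exceeds the incompressible value of 1 by roughly 27.5 per cent, which is the result part (b) asks for.

```python
def pressure_coefficient(M, gamma=1.4):
    """Compressible value of (p0 - p) / (0.5 * rho * c**2) for a perfect gas."""
    p0_over_p = (1 + (gamma - 1) / 2 * M**2) ** (gamma / (gamma - 1))
    # dynamic head 0.5*rho*c**2 equals (gamma/2) * M**2 * p for a perfect gas
    return (p0_over_p - 1) / (gamma / 2 * M**2)

def series_approx(M):
    """First terms of the series 1 + M^2/4 + M^4/40 + ..."""
    return 1 + M**2 / 4 + M**4 / 40

for M in (0.2, 0.5, 1.0):
    print(M, pressure_coefficient(M), series_approx(M))
```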
7) Air at stagnation condition has a temperature of 800 K. Determine the stagnation velocity of
sound and the maximum possible fluid velocity. What is the velocity of sound when
the flow velocity is at half the maximum velocity? (16)
8) Air flows through a duct. The pressure and temperature at station one are 0.7 bar and 300 C. At a second station the pressure is 0.5 bar. Calculate the temperature
and density at the second station. Assume the flow to be isentropic. (16)
Unit -2
PART- A (2 Marks)
1) Differentiate Adiabatic and Isentropic process.
2) Differentiate nozzle and diffuser?
3) What is Impulse function?
4) Differentiate between adiabatic flow and diabatic flow?
5) State the expression for dA/A as a function of Mach number?
6) Give the expression for T/To and T/T* for isentropic flow through variable area in terms of
Mach number?
7) Draw the variation of Mach number along the length of a convergent divergent duct when it acts as a (a) Nozzle (b) Diffuser (c) Venturi
8) What is choked flow through a nozzle?
9) What type of nozzle is used for sonic flow and supersonic flow?
10) When does the maximum mass flow occur for an isentropic flow with variable area?
Part - B (16 Marks)
1) Air flowing in a duct has a velocity of 300 m/s, pressure 1.0 bar and temperature 290 K. Taking γ = 1.4 and R = 287 J/Kg K, determine:
1) Stagnation pressure and temperature, (4)
2) Velocity of sound in the dynamic and stagnation conditions, (6)
3) Stagnation pressure assuming constant density. (6)
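A Python sketch (not from the original bank) of how the stagnation quantities in question 1 can be computed, assuming perfect-gas isentropic relations:

```python
from math import sqrt

gamma, R = 1.4, 287.0
c, p, T = 300.0, 1.0e5, 290.0                 # velocity m/s, pressure Pa, temperature K

a = sqrt(gamma * R * T)                       # velocity of sound at the dynamic condition
M = c / a
T0 = T * (1 + (gamma - 1) / 2 * M**2)         # stagnation temperature
p0 = p * (T0 / T) ** (gamma / (gamma - 1))    # stagnation pressure (compressible)
a0 = sqrt(gamma * R * T0)                     # velocity of sound at stagnation conditions

rho = p / (R * T)
p0_incompressible = p + 0.5 * rho * c**2      # stagnation pressure assuming constant density

print(M, T0, p0, a0, p0_incompressible)
```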
2) A conical diffuser has entry and exit diameters of 15 cm and 30 cm respectively.
The pressure, temperature and velocity of air at entry are 0.69 bar, 340 K and
180 m/s respectively. Determine
1) The exit pressure, (4)
2) The exit velocity and (6)
3) The force exerted on the diffuser walls. (6)
Assume isentropic flow, γ =1.4, Cp =1.00 KJ Kg-K.
3) A nozzle in a wind tunnel gives a test –section Mach number of 2.0 .Air enters the nozzle from a large reservoir at 0.69 bar and 310 k .The cross –sectional area of the throat is
1000cm².Determine the following quantities for the tunnel for one dimensional isentropic flow
1) Pressures, temperature and velocities at the throat and test sections, (4)
2) Area of cross- sectional of the test section , (4)
3) Mass flow rate, (4)
4) Power rate required to drive the compressor.
4) Air is discharged from a reservoir at Po =6.91bar and To =325˚c through a nozzle to an exit pressure of 0.98 bar .If the flow rate is 3600Kg/hr determine for isentropic flow:
5) A supersonic wind tunnel settling chamber expands air or Freon-21 through a nozzle from a pressure of 10 bar to 4 bar in the test section. Calculate the stagnation temperature to be
maintained in the settling chamber to obtain a velocity of
500 m/s in the test section for Air, Cp = 1.025 KJ/Kg K, Cv = 0.735 KJ/Kg K; Freon-21, Cp = 0.785 KJ/Kg K, Cv = 0.675 KJ/Kg K.
What is the test section Mach number in each case? (16)
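Not part of the original bank: one way to set question 5 up in Python, assuming isentropic expansion from the settling chamber, with γ = Cp/Cv and R = Cp - Cv taken from the data given for each gas:

```python
from math import sqrt

def settling_chamber(cp, cv, p_ratio, c_test):
    """Return (T0, T_test, M_test) for isentropic expansion to the test section.

    cp, cv in J/kg K, p_ratio = p0 / p_test, c_test = required test-section velocity in m/s.
    Uses cp*T0 = cp*T + c**2/2 together with T/T0 = (p/p0)**((gamma-1)/gamma).
    """
    gamma = cp / cv
    R = cp - cv
    T0 = (c_test**2 / (2 * cp)) / (1 - p_ratio ** (-(gamma - 1) / gamma))
    T = T0 * p_ratio ** (-(gamma - 1) / gamma)
    M = c_test / sqrt(gamma * R * T)
    return T0, T, M

print(settling_chamber(1025.0, 735.0, 10.0 / 4.0, 500.0))   # air data from the question
print(settling_chamber(785.0, 675.0, 10.0 / 4.0, 500.0))    # Freon-21 data from the question
```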
6) Derive the following relations for one dimensional isentropic flow:
dA/A = (dp / ρc²) (1 - M²) (8)
p*/p = [2/(γ + 1) + ((γ - 1)/(γ + 1)) M²]^(γ/(γ - 1)) (8)
7) Air flowing in a duct has a velocity of 300 m/s, pressure 1.0 bar and temperature
290 k.
Taking γ=1.4 and R =287J/Kg K determines:
1)Stagnation pressure and temperature, (4)
2)Velocity of sound in the dynamic and stagnation conditions (6)
3) Stagnation pressure assuming constant density. (6)
8) A conical diffuser has entry and exit diameters of 15 cm and 30cm respectively
The pressure ,temperature and velocity of air at entry are 0.69bar,340 k and
180 m/s respectively. Determine
1) The exit pressure, (4)
2)The exit velocity and (6)
3) The force exerted on the diffuser walls. Assume isentropic flow, γ =1.4,Cp =1.00 KJ Kg-K.
9) A nozzle in a wind tunnel gives a test –section Mach number of 2.0 .Air enters the nozzle
from a large reservoir at 0.69 bar and 310 k .The cross –sectional area of the throat is 1000cm².
Determine the following quantities for the tunnel for one dimensional isentropic flow:
1)Pressures, temperature and velocities at the throat and test sections, (4)
2)Area of cross- sectional of the test section , (4)
3)Mass flow rate, (4)
4) Power rate required to drive the compressor. (4)
10) Air is discharged from a reservoir at Po =6.91bar and To =325˚c through a nozzle to an exit pressure of 0.98 bar .If the flow rate is 3600Kg/hr determine for isentropic flow:
11) A supersonic wind tunnel settling chamber expands air or Freon-21 through a nozzle
from a pressure of 10 bar to 4 bar in the test section. Calculate the stagnation
temperature to be maintained in the settling chamber to obtain a velocity of 500 m/s in the test section for Air, Cp = 1.025 KJ/Kg K, Cv = 0.735 KJ/Kg K; Freon-21, Cp = 0.785 KJ/Kg K, Cv = 0.675 KJ/Kg K.
What is the test section Mach number in each case? (16)
12) Derive the following relations for one dimensional isentropic flow:
dA/A = (dp / ρc²) (1 - M²) (8)
p*/p = [2/(γ + 1) + ((γ - 1)/(γ + 1)) M²]^(γ/(γ - 1)) (8)
Unit -3
PART-A (2 Marks)
1) What are the assumptions made for Fanno flow?
2) Differentiate Fanno flow and Rayleigh flow?
3) Explain choking in Fanno flow?
4) Explain the difference between Fanno flow and Isothermal flow?
5) Write down the ratio of velocities between any two sections in terms of their Mach number in a fanno flow?
6) Write down the ratio of density between any two sections in terms of their Mach number in a fanno flow?
7) What are the three equation governing Fanno flow?
8) Give the expression to find increase in entropy for Fanno flow?
9) Give two practical examples where the Fanno flow occurs?
10) What is Rayleigh line and Fanno line?
PART B (16MARKS)
1) A circular duct passes 8.25 Kg/s of air at an exit Mach number of 0.5. The entry pressure and temperature are 3.45 bar and 38˚C respectively and the coefficient of friction is 0.005. If the Mach number
at entry is 0.15, determine:
I. The diameter of the duct , (2)
II. Length of the duct, (4)
III. Pressure and temperature at the exit, (4)
IV. Stagnation pressure loss, and (4)
V. Verify the exit Mach number through exit velocity and temperature. (2)
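Not part of the original bank: a sketch of the Fanno friction function that questions 1 and 2 rely on, assuming adiabatic flow of a perfect gas in a constant-area duct and taking the given coefficient of friction as the f appearing in 4 f Lmax / D:

```python
from math import log

def fanno_4fLmax_over_D(M, gamma=1.4):
    """Fanno function 4*f*Lmax/D: scaled duct length needed to reach M = 1 from Mach M."""
    return ((1 - M**2) / (gamma * M**2)
            + (gamma + 1) / (2 * gamma)
            * log((gamma + 1) * M**2 / (2 + (gamma - 1) * M**2)))

# Duct length between two subsonic Mach numbers, e.g. question 1 (M = 0.15 to M = 0.5):
f = 0.005     # coefficient of friction from the question
D = 1.0       # placeholder; in the problem the diameter follows from the mass flow rate
L = (fanno_4fLmax_over_D(0.15) - fanno_4fLmax_over_D(0.5)) * D / (4 * f)
print(L)      # length corresponding to the assumed D
```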
2) A gas (γ = 1.3, R = 0.287 KJ/KgK) at p1 = 1 bar, T1 = 400 K enters a 30 cm diameter duct at
a Mach number of 2.0. A normal shock occurs at a Mach number of 1.5 and the exit Mach number is 1.0. If the mean value of the friction factor is 0.003, determine:
1) Lengths of the duct upstream and downstream of the shock wave, (6)
2) Mass flow rate of the gas, and (4)
3) Change of entropy upstream of the shock, across the shock and
downstream of the shock. (6)
3) Air enters a long circular duct (d =12.5cm,f=0.0045) at a Mach number 0.5, pressure 3.0 bar
and temperature 312 K. If the flow is isothermal throughout the duct determine (a) the
length of the duct required to change the Mach number to 0.7,(b) pressure and temperature of air at M =0.7 (c) the length of the duct required to attain limiting Mach number, and
(d) State of air at the limiting Mach number. Compare these values with those obtained in
adiabatic flow. (16)
4) A convergent-divergent nozzle is provided with a pipe of constant cross-section at its exit; the exit diameter of the nozzle and that of the pipe is 40 cm. The mean coefficient of friction for the
pipe is 0.0025. Stagnation pressure and temperature of air at the nozzle entry are 12 bar and 600 K. The flow is isentropic in the nozzle and adiabatic in the pipe. The Mach numbers at the entry and
exit of the pipe are 1.8 and 1.0 respectively. Determine:
a) The length of the pipe , (4)
b) Diameter of the nozzle throat, and (6)
c) Pressure and temperature at the pipe exit. (6)
5) Show that the upper and lower branches of a Fanno curve represent subsonic and supersonic flows respectively. Prove that at the maximum entropy point Mach number is unity and all processes
approach this point .How would the state of a gas in a flow change from
the supersonic to subsonic branch ? (16)
Flow in constant area ducts with heat transfer(Rayleigh flow)
6) The Mach number at the exit of a combustion chamber is 0.9. The ratio of stagnation temperature at exit and entry is 3.74. If the pressure and temperature of the gas at exit are 2.5 bar and
1000˚C respectively determine (a) Mach number, pressure and temperature of the gas at entry, (b) the heat supplied per kg of the gas and (c) the maximum heat that can be supplied. Take γ = 1.3,
Cp = 1.218 KJ/KgK (16)
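Not part of the original bank: the Rayleigh-flow stagnation-temperature ratio used by questions 6 to 8, assuming frictionless flow with heat addition in a constant-area duct; the entry Mach number can be recovered with a simple numerical scan:

```python
def rayleigh_T0_ratio(M, gamma=1.3):
    """T0 / T0* for Rayleigh flow (heat addition in a constant-area, frictionless duct)."""
    return ((gamma + 1) * M**2 * (2 + (gamma - 1) * M**2)) / (1 + gamma * M**2) ** 2

# Question 6: exit Mach number 0.9 and T0_exit / T0_entry = 3.74 with gamma = 1.3.
# The entry Mach number satisfies rayleigh_T0_ratio(M1) = rayleigh_T0_ratio(0.9) / 3.74.
target = rayleigh_T0_ratio(0.9) / 3.74
M1 = min((m / 1000 for m in range(1, 1000)),
         key=lambda m: abs(rayleigh_T0_ratio(m) - target))
print(M1)
```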
7) The conditions of a gas in a combuster at entry are: P1=0.343bar, T1 = 310K, C1= 60m/s.
Determine the Mach number, pressure ,temperature and velocity at the exit if the increase in stagnation enthalpy of the gas between entry and exit is 1172.5KJ/Kg.
Take Cp=1.005KJ/KgK, γ =1.4 (16)
8) A combustion chamber in a gas turbine plant receives air at 350 K ,0.55bar and 75 m/s .The air – fuel ratio is 29 and the calorific value of the fuel is 41.87 MJ/Kg .Taking γ=1.4 and R =0.287 KJ/
kg K for the gas determine.
a) The initial and final Mach numbers, (4)
b) Final pressure ,temperature and velocity of the gas, (4)
c) Percent stagnation pressure loss in the combustion chamber , and (4)
d) The maximum stagnation temperature attainable. (4)
9) Obtain an equation representing the Rayleigh line . Draw Rayleigh lines on the h-s and
p-v planes for two different values of the mass flux. Show that the slope of the Rayleigh line
on the p-v plane is (dp/dv)R = - ρ² c² (16)
Unit -4
PART-A (2 Marks)
1) What is meant by a shock wave?
2) What is meant by a normal shock?
3) What is oblique shock?
4) Define strength of shock wave?
5) What are applications of moving shock wave?
6) Shock waves cannot develop in subsonic flow? Why?
7) Define compression and rarefaction shock? Is the latter possible?
8) State the necessary conditions for a normal shock to occur in compressible flow?
9) Give the difference between normal and oblique shock?
10) What properties change across a normal shock?
Part - B (16 Marks)
Flow with normal shock
1) The state of a gas (γ = 1.3, R = 0.469 KJ/Kg K) upstream of a normal shock is given by the following data:
Mx = 2.5, px = 2 bar, Tx = 275 K. Calculate the Mach number, pressure, temperature and velocity of the gas downstream of the shock; check the calculated values with those given in the gas tables. (16)
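Not part of the original bank: the standard normal-shock jump relations needed for question 1, written as a small Python function so the results can be cross-checked against gas tables as the question asks:

```python
from math import sqrt

def normal_shock(Mx, gamma):
    """Downstream Mach number and static pressure / temperature ratios across a normal shock."""
    My2 = (1 + (gamma - 1) / 2 * Mx**2) / (gamma * Mx**2 - (gamma - 1) / 2)
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (Mx**2 - 1)
    T_ratio = ((2 * gamma * Mx**2 - (gamma - 1)) * ((gamma - 1) * Mx**2 + 2)
               / ((gamma + 1)**2 * Mx**2))
    return sqrt(My2), p_ratio, T_ratio

gamma, R = 1.3, 469.0                 # gas properties from question 1
My, p21, T21 = normal_shock(2.5, gamma)
px, Tx = 2.0e5, 275.0                 # upstream state in Pa and K
py, Ty = px * p21, Tx * T21
cy = My * sqrt(gamma * R * Ty)        # downstream gas velocity
print(My, py, Ty, cy)
```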
2) The ratio of the exit to entry area in a subsonic diffuser is 4.0. The Mach number of a jet of air approaching the diffuser at p0 = 1.013 bar, T = 290 K is 2.2. There is a standing normal
shock wave just outside the diffuser entry. The flow in the diffuser is isentropic. Determine
at the exit of the diffuser:
1. Mach number , (4)
2. Temperature, and (4)
3. Pressure (4)
4. What is the stagnation pressure loss between the initial and final states of the flow ? (4)
3) The velocity of a normal shock wave moving into stagnant air (p=1.0 bar, t=17˚C) is 500 m/s .
If the area of cross- section of the duct is constant determine (a) pressure (b) temperature (c) velocity of air (d) stagnation temperature and (e) the Mach number imparted upstream of the wave
front. (16)
4) The following data refers to a supersonic wind tunnel: Nozzle throat area =200cm²
Test section cross- section =337.5cm²
Working fluid: air (γ = 1.4, R = 0.287 KJ/Kg K)
Determine the test section Mach number and the diffuser throat area if a normal
shock is located in the test section. (16)
5) A supersonic diffuser for air (γ =1.4) has an area ratio of 0.416 with an inlet Mach number of
2.4 (design value). Determine the exit Mach number and the design value of the pressure ratio across the diffuser for isentropic flow. At an off- design value of the inlet Mach number
(2.7) a normal shock occurs inside the diffuser .Determine the upstream Mach number and area ratio at the section where the shock occurs, diffuser efficiency and the pressure ratio
across the diffuser. Depict graphically the static pressure distribution at off design. (16)
6) Starting from the energy equation for flow through a normal shock, obtain the Prandtl-Meyer relation Cx Cy = a*², i.e., M*x M*y = 1. (16)
Flow with oblique shock waves:
7) Air approaches a symmetrical wedge (δ =15˚) at a Mach number of 2.0.Determine for the strong and weak waves (a) wave angle (b) pressure ratio (c) density ratio,
(d) Temperature ratio and (e) downstream Mach number Verify these values using
Gas tables for normal shocks. (16)
8) A gas (γ = 1.3) at p1 = 345 mbar, T1 = 350 K and M1 = 1.5 is to be isentropically expanded
to 138 mbar. Determine (a) the deflection angle, (b) final Mach number and (c) the temperature
of the gas. (16)
9) A jet of air at Mach number of 2.5 is deflected inwards at the corner of a curved wall.The wave angle at the corner is 60˚.Determine the deflection angle of the wall, pressure
and temperature ratios and final Mach number. (16)
10) Derive the Rankine –Hugoniot relation for an oblique shock
ρ2/ρ1 = [ ((γ + 1)/(γ - 1)) (p2/p1) + 1 ] / [ ((γ + 1)/(γ - 1)) + (p2/p1) ]
Compare graphically the variation of density ratio with the initial Mach number in isentropic flow
and flow with oblique shock. (16)
11) The Mach number at the exit of a combustion chamber is 0.9. The ratio of stagnation
temperature at exit and entry is 3.74.If the pressure and temperature of a gas at exit are
2.5 bar and 1000˚C respectively determine (a) Mach number ,pressure and temperature of the gas at entry,(b) the heat supplied per Kg of the gas and (c) the maximum heat that
can be supplied.
Take γ =1.3 and Cp =1.218 KJ/Kg K (16)
12) The conditions of a gas in a combuster at entry are: P1=0.343 bar,T1= 310K ,C1=60m/s Determine the Mach number ,pressure, temperature and velocity at the exit if the increase in stagnation
enthalpy of the gas between entry and exit is 1172.5KJ/Kg.
Take Cp=1.005KJ/kg, γ =1.4. (16)
13) A combustion chamber in a gas turbine plant receives air at 350 K , 0.55 bar and 75m/s.
The air –fuel ratio is 29 and the calorific value of the fuel is 41.87 MJ/Kg. Taking γ =1.4 and R =0.287 KJ/Kg K for the gas determine:
a) The initial and final Mach number, (4)
b) Final pressure, temperature and velocity of the gas, (4)
c) Percent stagnation pressure loss in the combustion chamber and (4),
d) The maximum stagnation temperature attainable. (4)
14) Obtain an equation representing the Rayleigh line. Draw Rayleigh lines on the h-s and p-v planes for two different values of the mass flux.
Show that the slope of the Rayleigh line on the p-v plane is (dp/dv)R = - ρ² c² (16)
Unit -5
PART- A (2 Marks)
1) Differentiate jet propulsion and rocket propulsion (or) differentiate between air breathing and rocket propulsion?
2) What is monopropellant? Give one example for that?
3) What is bipropellant?
4) Classify the rocket engines based on sources of energy employed?
5) What is specific impulse of a rocket?
6) Define specific propellant consumption?
7) What is weight flow co-efficient?
8) What is IWR?
9) What is thrust co-efficient?
10) Define propulsive efficiency?
Part - B (16 Marks)
1) A turboprop engine operates at an altitude of 3000 meters above mean sea level and an aircraft speed of 525 Kmph. The data for the engine is given below
Inlet diffuser efficiency =0.875
Compressor efficiency =0.790
Velocity of air at compressor entry =90m/s
Properties of air: γ =1.4, Cp =1.005 KJ/kg K (16)
2) The diameter of the propeller of an aircraft is 2.5m; It flies at a speed of 500Kmph at an altitude of 8000m. For a flight to jet speed ratio of 0.75 determine (a) the flow rate of air
through the propeller, (b) thrust produced (c) specific thrust, (d) specific impulse and
(e) The thrust power. (16)
3) An aircraft flies at 960Kmph. One of its turbojet engines takes in 40 kg/s of air and expands the gases to the ambient pressure .The air –fuel ratio is 50 and the lower calorific value of the fuel
is 43 MJ/Kg .For maximum thrust power determine (a)jet velocity (b) thrust (c) specific thrust
(d) Thrust power (e) propulsive, thermal and overall efficiencies and (f) TSFC (16)
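Not part of the original bank: a Python sketch of question 3, assuming the usual result that thrust power is maximised when the jet velocity is twice the flight velocity, and approximating thrust as (air plus fuel flow) times jet velocity minus air flow times flight velocity:

```python
u = 960 / 3.6              # flight speed, m/s
ma = 40.0                  # air mass flow, kg/s
far = 1.0 / 50.0           # fuel-air ratio (air-fuel ratio is 50)
CV = 43.0e6                # lower calorific value, J/kg of fuel

cj = 2 * u                                      # jet velocity for maximum thrust power
mf = ma * far                                   # fuel flow, kg/s
F = (ma + mf) * cj - ma * u                     # thrust, N
thrust_power = F * u                            # W
eta_p = 2 * u / (u + cj)                        # propulsive efficiency (fuel flow neglected)
eta_th = ((ma + mf) * cj**2 / 2 - ma * u**2 / 2) / (mf * CV)   # thermal efficiency
eta_o = eta_p * eta_th                          # overall efficiency
TSFC = mf / F                                   # kg of fuel per newton of thrust per second
print(cj, F, thrust_power, eta_p, eta_th, eta_o, TSFC)
```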
4) A turbo jet engine propels an aircraft at a Mach number of 0.8 in level flight at an altitude of 10 km.
The data for the engine is given below:
Stagnation temperature at the turbine inlet =1200K
Stagnation temperature rise through the compressor =175 K
Calorific value of the fuel =43 MJ/Kg
Compressor efficiency =0.75
Combustion chamber efficiency =0.975
Turbine efficiency =0.81
Mechanical efficiency of the power transmission between turbine and compressor =0.98
Exhaust nozzle efficiency=0.97
Specific impulse =25 seconds
Assuming the same properties for air and combustion gases calculate
Fuel –air ratio, (2)
Compressor pressure ratio, (4)
Turbine pressure ratio, (4)
Exhaust nozzles pressure ratio ,and (4)
Mach number of exhaust jet (2)
5) A ramjet engine operates at M=1.5 at an altitude of 6500m.The diameter of the inlet diffuser at entry is 50cm and the stagnation temperature at the nozzle entry is 1600K.The calorific value
of the fuel used is 40MJ/Kg .The properties of the combustion gases are same as those of
air (γ = 1.4, R = 287 J/Kg K). The velocity of air at the diffuser exit is negligible. Determine:
(a) the efficiency of the ideal cycle, (b) flight speed, (c) air flow rate, (d) diffuser pressure ratio,
(e) fuel-air ratio, (f) nozzle pressure ratio, (g) nozzle jet Mach number, (h) propulsive efficiency
(i) and thrust. Assume the following values: ηD = 0.90, ηB = 0.98, ηj = 0.96.
Stagnation pressure loss in the combustion chamber = 0.002 p02. (16)
6) A rocket flies at 10,080 Kmph with an effective exhaust jet velocity of 1400 m/s and a propellant flow rate of 5.0 Kg/s. If the heat of reaction of the propellants is 6500 KJ/Kg of the
propellant mixture, determine:
a) Propulsion efficiency and propulsion power, (6)
b) Engine output and thermal efficiency ,and (6)
c) Overall efficiency. (4)
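Not part of the original bank: the usual rocket-performance relations for the problem above, assuming the standard definition of rocket propulsive efficiency 2σ/(1 + σ²) with σ = u/cj:

```python
u = 10080 / 3.6           # flight speed, m/s
cj = 1400.0               # effective jet velocity, m/s
mdot = 5.0                # propellant flow rate, kg/s
q_r = 6.5e6               # heat of reaction, J/kg of propellant mixture

F = mdot * cj                                        # thrust, N
propulsive_power = F * u                             # W
engine_output = F * u + 0.5 * mdot * (cj - u) ** 2   # thrust power plus exhaust kinetic energy loss
sigma = u / cj
eta_p = 2 * sigma / (1 + sigma ** 2)                 # rocket propulsive efficiency
eta_th = engine_output / (mdot * q_r)                # thermal efficiency
eta_o = eta_p * eta_th                               # overall efficiency
print(F, propulsive_power, engine_output, eta_p, eta_th, eta_o)
```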
7) Determine the maximum velocity of a rocket and the altitude attained from the following data: Mass ratio =0.15
Burn out time =75s
Effective jet velocity =2500m/s
What are the values of the velocity and altitude losses due to gravity? Ignore drag and
Assume vertical trajectory. (16)
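Not part of the original bank: a sketch for the problem above using the rocket (Tsiolkovsky) equation with a simple gravity-loss term, taking the given mass ratio as burnout mass divided by initial mass and assuming constant g, a vertical trajectory and no drag:

```python
from math import log

MR = 0.15          # mass ratio, taken here as (mass at burnout) / (initial mass)
tb = 75.0          # burnout time, s
cj = 2500.0        # effective jet velocity, m/s
g = 9.81

v_ideal = cj * log(1 / MR)     # velocity gain with no gravity (Tsiolkovsky equation)
v_loss = g * tb                # velocity lost to gravity over the burn
v_max = v_ideal - v_loss       # maximum velocity at burnout

# Coasting altitude gained after burnout; the powered-phase altitude additionally
# requires integrating the velocity history over the burn.
h_coast = v_max ** 2 / (2 * g)
print(v_ideal, v_loss, v_max, h_coast)
```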
8) A missile has a maximum flight speed to jet speed ratio of 0.2105 and specific impulse equal to 203.88 seconds .Determine for a burn out time of 8 seconds
a) Effective jet velocity (4)
b) Mass ratio and propellant mass fractions (4)
c) Maximum flight speed, and (4)
d) Altitude gain during powered and coasting flights (4)
9) Calculate the orbital and escape velocities of a rocket at mean sea level and an altitude of 300 km from the following data:
Radius of earth at mean sea level = 6341.6 Km (16)
Acceleration due to gravity at mean sea level =9.809 m/s²
10) Explain with neat sketches the principle of operation of:
1. turbo fan engine and (8)
2. ram jet engine (8)
11) Explain the construction and operation of a ramjet engine and derive an expression for the
ideal efficiency. (16)
12) Explain the construction and operation of a solid propellant rocket engine. Also name any
four solid propellants and state their advantages and disadvantages. (16)
13) What are the advantages and disadvantages of liquid propellants compared to solid propellants? (16)
14) Discuss in detail the various propellants used in solid fuel rockets and liquid fuel system
.Also sketch the propellant feed-system for a liquid propellant rocket motor. (16)
15) Briefly explain the construction and working of:
A. Rocket engine (6) B. Ramjet engine (6) C. Pulsejet engine (4)
16) With the help of a neat sketch describe the working of a ramjet engine. Depict the
various thermodynamic process occurring in it on h-s diagram. What is the effect of
flight Mach number on its efficiency? (16)
17) Explain with a neat sketch the working of a turbo-pump feed system used in a liquid propellant rocket. (16)
\documentclass{chapman} %%% copy Sweave.sty definitions %%% keeps `sweave' from adding `\usepackage{Sweave}': DO NOT REMOVE %\usepackage{Sweave} \RequirePackage[T1]{fontenc} \RequirePackage
{graphicx,ae,fancyvrb} \IfFileExists{upquote.sty}{\RequirePackage{upquote}}{} \usepackage{relsize} \DefineVerbatimEnvironment{Sinput}{Verbatim}{} \DefineVerbatimEnvironment{Soutput}{Verbatim}
{fontfamily=courier, fontshape=it, fontsize=\relsize{-1}} \DefineVerbatimEnvironment{Scode}{Verbatim}{} \newenvironment{Schunk}{}{} %%% environment for raw output \newcommand{\SchunkRaw}{\
renewenvironment{Schunk}{}{} \DefineVerbatimEnvironment{Soutput}{Verbatim}{fontfamily=courier, fontshape=it, fontsize=\small} \rawSinput } %%% environment for labeled output \newcommand{\nextcaption}
{} \newcommand{\SchunkLabel}{ \renewenvironment{Schunk}{\begin{figure}[ht] }{\caption{\nextcaption} \end{figure} } \DefineVerbatimEnvironment{Sinput}{Verbatim}{frame = topline} \
DefineVerbatimEnvironment{Soutput}{Verbatim}{frame = bottomline, samepage = true, fontfamily=courier, fontshape=it, fontsize=\relsize{-1}} } %%% S code with line numbers \DefineVerbatimEnvironment
{Sinput} {Verbatim} { %% numbers=left } \newcommand{\numberSinput}{ \DefineVerbatimEnvironment{Sinput}{Verbatim}{numbers=left} } \newcommand{\rawSinput}{ \DefineVerbatimEnvironment{Sinput}{Verbatim}
{} } %%% R / System symbols \newcommand{\R}{\textsf{R}} \newcommand{\rR}{{R}} \renewcommand{\S}{\textsf{S}} \newcommand{\SPLUS}{\textsf{S-PLUS}} \newcommand{\rSPLUS}{{S-PLUS}} \newcommand{\SPSS}{\
textsf{SPSS}} \newcommand{\EXCEL}{\textsf{Excel}} \newcommand{\ACCESS}{\textsf{Access}} \newcommand{\SQL}{\textsf{SQL}} %%\newcommand{\Rpackage}[1]{\hbox{\rm\textit{#1}}} %%\newcommand{\Robject}[1]{\
hbox{\rm\texttt{#1}}} %%\newcommand{\Rclass}[1]{\hbox{\rm\textit{#1}}} %%\newcommand{\Rcmd}[1]{\hbox{\rm\texttt{#1}}} \newcommand{\Rpackage}[1]{\index{#1 package@{\fontseries{b}\selectfont #1}
package} {\fontseries{b}\selectfont #1}} \newcommand{\rpackage}[1]{{\fontseries{b}\selectfont #1}} \newcommand{\Robject}[1]{\texttt{#1}} \newcommand{\Rclass}[1]{\index{#1 class@\textit{#1} class}\
textit{#1}} \newcommand{\Rcmd}[1]{\index{#1 function@\texttt{#1} function}\texttt{#1}} \newcommand{\Roperator}[1]{\texttt{#1}} \newcommand{\Rarg}[1]{\texttt{#1}} \newcommand{\Rlevel}[1]{\texttt{#1}}
%%% other symbols \newcommand{\file}[1]{\hbox{\rm\texttt{#1}}} %%\newcommand{\stress}[1]{\index{#1}\textit{#1}} \newcommand{\stress}[1]{\textit{#1}} \newcommand{\booktitle}[1]{\textit{#1}} %%' %%%
Math symbols \usepackage{amstext} \usepackage{amsmath} \newcommand{\E}{\mathsf{E}} \newcommand{\Var}{\mathsf{Var}} \newcommand{\Cov}{\mathsf{Cov}} \newcommand{\Cor}{\mathsf{Cor}} \newcommand{\x}{\
mathbf{x}} \newcommand{\y}{\mathbf{y}} \renewcommand{\a}{\mathbf{a}} \newcommand{\W}{\mathbf{W}} \newcommand{\C}{\mathbf{C}} \renewcommand{\H}{\mathbf{H}} \newcommand{\X}{\mathbf{X}} \newcommand{\B}
{\mathbf{B}} \newcommand{\V}{\mathbf{V}} \newcommand{\I}{\mathbf{I}} \newcommand{\D}{\mathbf{D}} \newcommand{\bS}{\mathbf{S}} \newcommand{\N}{\mathcal{N}} \renewcommand{\L}{L} \renewcommand{\P}{\
mathsf{P}} \newcommand{\K}{\mathbf{K}} \newcommand{\m}{\mathbf{m}} \newcommand{\argmin}{\operatorname{argmin}\displaylimits} \newcommand{\argmax}{\operatorname{argmax}\displaylimits} \newcommand{\bx}
{\mathbf{x}} \newcommand{\bbeta}{\mathbf{\beta}} %%% links \usepackage{hyperref} \hypersetup{% pdftitle = {A Handbook of Statistical Analyses Using R (3rd Edition)}, pdfsubject = {Book}, pdfauthor =
{Torsten Hothorn and Brian S. Everitt}, colorlinks = {black}, linkcolor = {black}, citecolor = {black}, urlcolor = {black}, hyperindex = {true}, linktocpage = {true}, } %%% captions & tables %% :
conflics with figure definition in chapman.cls %%\usepackage[format=hang,margin=10pt,labelfont=bf]{caption} %% \usepackage{longtable} \usepackage[figuresright]{rotating} %%% R symbol in chapter 1 \
usepackage{wrapfig} %%% Bibliography \usepackage[round,comma]{natbib} \renewcommand{\refname}{References \addcontentsline{toc}{chapter}{References}} \citeindexfalse %%% texi2dvi complains that \
newblock is undefined, hm... \def\newblock{\hskip .11em plus .33em minus .07em} %%% Example sections \newcounter{exercise}[chapter] \setcounter{exercise}{0} \newcommand{\exercise}{\stepcounter
{exercise} \item{Ex.~\arabic{chapter}.\arabic{exercise} }} %% URLs \newcommand{\curl}[1]{\begin{center} \url{#1} \end{center}} %%% for manual corrections %\renewcommand{\baselinestretch}{2} %%% plot
sizes \setkeys{Gin}{width=0.95\textwidth} %%% color \usepackage{color} %%% hyphenations \hyphenation{drop-out} \hyphenation{mar-gi-nal} %%% new bidirectional quotes need \usepackage[utf8]{inputenc} %
\usepackage{setspace} \definecolor{sidebox_todo}{rgb}{1,1,0.2} \newcommand{\todo}[1]{ \hspace{0pt}% \marginpar{% \fcolorbox{black}{sidebox_todo}{% \parbox{\marginparwidth} { \raggedright\sffamily\
footnotesize{TODO: #1}% } }% } } \begin{document} %% Title page \title{A Handbook of Statistical Analyses Using \R{} --- 3rd Edition} \author{Torsten Hothorn and Brian S. Everitt} \maketitle %%\
VignetteIndexEntry{Chapter Missing Values} %%\VignetteDepends{mice} \setcounter{chapter}{15} \SweaveOpts{prefix.string=figures/HSAUR,eps=FALSE,keep.source=TRUE} <>= rm(list = ls()) s <- search()[-1]
s <- s[-match(c("package:base", "package:stats", "package:graphics", "package:grDevices", "package:utils", "package:datasets", "package:methods", "Autoloads"), s)] if (length(s) > 0) sapply(s,
detach, character.only = TRUE) if (!file.exists("tables")) dir.create("tables") if (!file.exists("figures")) dir.create("figures") set.seed(290875) options(prompt = "R> ", continue = "+ ", width =
63, # digits = 4, show.signif.stars = FALSE, SweaveHooks = list(leftpar = function() par(mai = par("mai") * c(1, 1.05, 1, 1)), bigleftpar = function() par(mai = par("mai") * c(1, 1.7, 1, 1))))
HSAURpkg <- require("HSAUR3") if (!HSAURpkg) stop("cannot load package ", sQuote("HSAUR3")) rm(HSAURpkg) ### hm, R-2.4.0 --vanilla seems to need this a <- Sys.setlocale("LC_ALL", "C") ### book <-
TRUE refs <- cbind(c("AItR", "DAGD", "SI", "CI", "ANOVA", "MLR", "GLM", "DE", "RP", "GAM", "SA", "ALDI", "ALDII", "SIMC", "MA", "PCA", "MDS", "CA"), 1:18) ch <- function(x) { ch <- refs[which(refs
[,1] == x),] if (book) { return(paste("Chapter~\\\\ref{", ch[1], "}", sep = "")) } else { return(paste("Chapter~", ch[2], sep = "")) } } if (file.exists("deparse.R")) source("deparse.R") setHook
(packageEvent("lattice", "attach"), function(...) { lattice.options(default.theme = function() standard.theme("pdf", color = FALSE)) }) @ \pagestyle{headings} <>= book <- FALSE @ \chapter[Missing
Values]{Missing Values: Lowering Blood Pressure During Surgery \label{MV}} \section{Introduction} \index{Blood pressure} It is sometimes necessary to lower a patient's blood pressure during surgery,
using a hypotensive drug. Such drugs are administered continuously during the relevant phase of the operation; because the duration of this phase varies so does the total amount of drug administered.
Patients also vary in the extent to which the drugs succeed in lowering blood pressure. The sooner the blood pressure rises again to normal after the drug is discontinued, the better. The data in
Table~\ref{MV-bp-tab} \citep[a missing-value version of the data presented by][]{HSAUR:RobertsonArmitage1959} relate to a particular hypotensive drug and give the time in minutes before the patient's
systolic blood pressure returned to 100mm of mercury (the recovery time), the logarithm (base 10) of the dose of drug in milligrams, and the average systolic blood pressure achieved while the drug
was being administered. The question of interest is how is the recovery time related to the other two variables? For some patients the recovery time was not recorded and the missing values are
indicated as NA in Table~\ref{MV-bp-tab}. <>= data("bp", package = "HSAUR3") toLatex(HSAURtable(bp), pcol = 2, caption = paste("Blood pressure data."), label = "MV-bp-tab") @ \section{Analyzing
Multiply Imputed Data} \label{MI:ana} From the analysis of each data set we need to look at the estimates of the quantity of interest, say $Q$, and the variance of the estimates. We let $\hat{Q}_i$
be the estimate from the $i$th data set and $S_i$ its corresponding variance. The combined estimate of the quantity of interest is \begin{eqnarray*} \bar{Q} = \frac{1}{m}\sum_{i = 1}^m \hat{Q}_i. \
end{eqnarray*} To find the combined variance involves first calculating the within-imputation variance, \begin{eqnarray*} \bar{S} = \frac{1}{m}\sum_{i = 1}^m S_i \end{eqnarray*} followed by the
between-imputation variance, \begin{eqnarray*} B = \frac{1}{m - 1} \sum_{i = 1}^m (\hat{Q}_i - \bar{Q})^2 \end{eqnarray*} then the required total variance can now be found from \begin{eqnarray*} T =
\bar{S} + (1 + m^{-1}) B \end{eqnarray*} This total variance is made up of two components; the first which preserves the natural variability, $\bar{S}$, is simply the average of the variance
estimates for each imputed data set and is analogous to the variance that would be suitable if we did not need to account for missing data; the second component, $B$, estimates uncertainty caused by
missing data by measuring how the point estimates vary from data set to data set. More explanation of how the formula for $T$ arises is given in \cite{HSAUR:vanBuuren2012}. The overall standard error
is simply the square root of $T$. A significance test for $Q$ and a confidence interval is found from the usual test statistic, ($Q-$ hypothesized value of $Q$)/$\sqrt{T}$, the value of which is
referred to a Student's $t$-distribution. The question arises however as to what is the appropriate value for the degrees of freedom of the test, say $v_0$? \cite{HSAUR:Rubin1987} suggests that the
answer to this question is given by; \begin{eqnarray*} v_0 = (m - 1) (1 + 1/r^2) \end{eqnarray*} where \begin{eqnarray*} r = \frac{B + B / m}{\bar{S}} \end{eqnarray*} But \cite
{HSAUR:BarnardRubin1999} noted that using this value of $v_0$ can produce values that are larger than the degrees of freedom in the complete data, a result which they considered `clearly
inappropriate'. Consequently they developed an adapted version that does not lead to the same problem. Barnard and Rubin's revised value for the degrees of freedom of the $t$-test in which we are
interested is $v_1$ given by; \begin{eqnarray*} v_1 = \frac{v_0 v_2}{v_0 + v_2} \end{eqnarray*} where \begin{eqnarray*} v_2 = \frac{n(n-1)(1 - \lambda)}{n + 2} \end{eqnarray*} and \begin{eqnarray*} \
lambda = \frac{r}{\sqrt{r^2 + 1}}. \end{eqnarray*} The quantity $v_1$ is always less than or equal to the degrees of freedom of the test applied to the hypothetically complete data. \citep[For more
details see][]{HSAUR:vanBuuren2012}. \index{Imputation|)} \section{Analysis Using \R{}} To begin we shall analyze the blood pressure data in Table~\ref{MV-bp-tab} using the complete-case approach,
i.e., by simply removing the data for patients where the recovery time is missing. To begin we might simply count the number of missing values using the sapply function as follows: <>= sapply(bp,
function(x) sum(is.na(x))) @ So there are ten missing values of recovery time but no missing values amongst the other two variables. Now we use the \Rcmd{summary} function to look at some basic
statistics of the complete data for recovery time: <>= summary(bp$recovtime, na.rm = TRUE) @ And next we can calculate the complete data estimate of the standard deviation of recover time <>= sd
(bp$recovtime, na.rm = TRUE) @ The final numerical results we might be interested in are the correlations of recovery time with blood pressure and of recovery time with logdose. These can be found as
follows: <>= with(bp, cor(bloodp, recovtime, use = "complete.obs")) with(bp, cor(logdose, recovtime, use = "complete.obs")) @ And a useful graphic of the data is a scatterplot matrix which we can
construct using \Rcmd{pairs}. The scatterplot matrix is given in Figure~\ref{MV-bp-pairs-cc}. \begin{figure} \begin{center} <>= layout(matrix(1:3, nrow = 1)) plot(bloodp ~ logdose, data = bp) plot
(recovtime ~ bloodp, data = bp) plot(recovtime ~ logdose, data = bp) @ \caption{Scatterplots of the complete cases of the \Robject{bp} data. \label{MV-bp-pairs-cc}} \end{center} \end{figure} To
investigate how recovery time is related to blood pressure and logdose we might begin by fitting a multiple linear regression model (see Chapter~\ref{MLR}). The relevant command and the summary of
the results is shown in Figure~\ref{MV-bp-lm-cc}. Note that this summary output reports that ten observations with missing values were removed prior to the analysis; this is default for many models
in \R. \renewcommand{\nextcaption}{\R{} output of the complete-case linear model for the \Robject{bp} data. \label{MV-bp-lm-cc}} \SchunkLabel <>= summary(lm(recovtime ~ bloodp + logdose, data = bp))
@ \SchunkRaw Now let us see what happens when we impute the missing values of the recovery time variable simply by the mean of the complete case; for this we will use the \Rpackage{mice} \citep
{PKG:mice} package; <>= library("mice") @ We begin by creating a new data set, \Robject{imp}, which will contain the three variables log-dose, blood pressure, and recovery time with the missing
values in the latter replaced by the mean recovery time of the complete cases; <>= imp <- mice(bp, method = "mean", m = 1, maxit = 1) @ So now we can find the summary statistics of recovery time to
compare with those given previously <>= with(imp, summary(recovtime)) @ Making the comparison we see that only the values of the first and third quantile and the median have changed. The minimum and
maximum values are the same and so, of course, is the mean. But of more interest is what happens to the sample standard deviation; its value for the imputed data can be found using: <>= with(imp, sd
(recovtime)) @ The value for the imputed data, $\Sexpr{round(with(imp, sd(recovtime))[["analyses"]][[1]], 2)}$ is, as we would expect, lower than that for the complete data, $\Sexpr{round(with(bp, sd
(recovtime, na.rm = TRUE)), 2)}$. What about the correlations? <>= with(imp, cor(bloodp, recovtime)) with(imp, cor(logdose, recovtime)) @ The correlations of blood pression and recovery time are very
similar before ($\Sexpr{round(with(bp, cor(bloodp, recovtime, use = "complete.obs")), 2)}$) after ($\Sexpr{round(with(imp, cor(bloodp, recovtime))[["analyses"]][[1]], 2)}$) imputation. For log-dose,
imputation changes the correlation from $\Sexpr{round(with(bp, cor(logdose, recovtime, use = "complete.obs")), 2)}$ to $\Sexpr{round(with(imp, cor(logdose, recovtime))[["analyses"]][[1]], 2)}$. The
scatterplot of the imputed data is found as given by the code displayed with Figure~\ref{MV-bp-pairs-imp}. For mean imputation, the imputed value of the recovery time is constant for all observations
and so they appear as a series of points along the value of the mean value of the observed recovery times namely, $\Sexpr{round(with(bp, mean(recovtime, na.rm = TRUE)), 2)}$. \begin{figure} \begin
{center} <>= layout(matrix(1:2, nrow = 1)) plot(recovtime ~ bloodp, data = complete(imp), pch = is.na(bp$recovtime) + 1) plot(recovtime ~ logdose, data = complete(imp), pch = is.na(bp$recovtime) + 1)
legend("topleft", pch = 1:2, bty = "n", legend = c("original", "imputed")) @ \caption{Scatterplots of the imputed \Robject{bp} data. Imputed observations are depicted as triangles. \label
{MV-bp-pairs-imp}} \end{center} \end{figure} \renewcommand{\nextcaption}{\R{} output of the mean imputation linear model for the \Robject{bp} data. \label{MV-bp-lm-imp}} \SchunkLabel <>= with(imp,
summary(lm(recovtime ~ bloodp + logdose))) @ \SchunkRaw Comparison of the multiple linear regression results in Figure~\ref{MV-bp-lm-imp} with those in Figure~\ref{MV-bp-lm-cc} show some interesting
differences, for example, the standard errors of the regression coefficients are somewhat lower for the mean imputed data but the conclusions drawn from the results in each table would be broadly
similar. \index{Predictive mean matching} The single imputation of a sample mean is not to be recommended and so we will move on to using a more sophisticated multiple imputation procedure know as \
stress{predictive mean matching}. The method is described in detail in \cite{HSAUR:vanBuuren2012} who considers it both easy-to-use and versatile. And imputations outside the observed data range will
not occur so that problems with meaningless imputations, for example, a negative recovery time, will not occur. The method is labeled \Robject{pmm} in the \Rpackage{mice} package and here we will
apply it to the blood pressure data with $m = 10$ (we need to fix the seed in order to make the result reproducible): <>= imp_ppm <- mice(bp, m = 10, method = "pmm", print = FALSE, seed = 1) @ The
scatterplot of the imputed data is found as given by the code displayed with Figure~\ref{MV-bp-pairs-imp-mice}. We only show the imputed recovery times from the first iteration ($m = 1$).The imputed
recovery times now take different values. \begin{figure} \begin{center} <>= layout(matrix(1:2, nrow = 1)) plot(recovtime ~ bloodp, data = complete(imp_ppm), pch = is.na(bp$recovtime) + 1) plot
(recovtime ~ logdose, data = complete(imp_ppm), pch = is.na(bp$recovtime) + 1) legend("topleft", pch = 1:2, bty = "n", legend = c("original", "imputed")) @ \caption{Scatterplots of the multiple
imputed \Robject{bp} data (first iteration). Imputed observations are depicted as triangles. \label{MV-bp-pairs-imp-mice}} \end{center} \end{figure} From the resulting object we can compute the mean
and standard deviations of recovery time for each of the $m = 10$ iterations. We first extract these numbers from the \Robject{analyses} element of the returned object, convert this list to a vector,
and use the \Rcmd{summary} function to compute the usual summary statistics: <>= summary(unlist(with(imp_ppm, mean(recovtime))$analyses)) summary(unlist(with(imp_ppm, sd(recovtime))$analyses)) @ We
do the same with the correlations as follows <>= summary(unlist(with(imp_ppm, cor(bloodp, recovtime))$analyses)) summary(unlist(with(imp_ppm, cor(logdose, recovtime))$analyses)) @ The estimate of the
mean of the blood pressure data from the multiply imputed results is $\Sexpr{round(mean(unlist(with(imp_ppm, mean(recovtime))$analyses)) , 2)}$, very similar to the values found previously. Similarly
the estimate of the standard deviation of the data is $\Sexpr{round(mean(unlist(with(imp_ppm, sd(recovtime))$analyses)) , 2)}$ which lies between the complete data estimate and the \emph
{mean-imputed} value. The two correlation estimates are also very close to the previous values. The variation in the estimates of mean, standard deviation, and correlations across the ten imputation
is relatively small apart from that for the correlation between log-dose and recovery time -- here there is considerable variation in the values for the ten imputations. Finally, we will fit a linear
model to each of the imputed samples and then find the summary statistics for the ten sets of regression coefficients: the results are given in Figure~\ref{MV-bp-lm-cc-mice}: <>= fit <- with(imp_ppm,
lm(recovtime ~ bloodp + logdose)) @ \renewcommand{\nextcaption}{\R{} output of the multiple imputed linear model for the \Robject{bp} data. \label{MV-bp-lm-cc-mice}} \SchunkLabel <>= summary(pool
(fit)) @ \SchunkRaw The result for blood pressure is similar to the previous complete data and mean-imputed results with the regression coefficient for this variable being highly significant $(p = \
Sexpr{round(summary(pool(fit))["bloodp", 5], 3)})$. But the result for log dose differs from those found previously; for the multiply imputed data the regression coefficient for log dose is not
significant at the $5\%$ level $(p = \Sexpr{round(summary(pool(fit))["logdose", 5], 3)})$ whereas in both of the previous two analyses it was significant. This finding reflects the greater variation
of the value of the correlation between log dose and recovery time in the ten imputations noted above. (Remember that the standard errors in Figure~\ref{MV-bp-lm-cc-mice} computed by \Rcmd{pool}
arise from the formulae given in Section~\ref{MI:ana}.) Now suppose we wish to test the hypothesis that in the population from which the sample data in Table~\ref{MV-bp-tab} arises a mean recovery
time of $27$ minutes. We will test this hypothesis in the usual way using Student's t-test applied to the complete-data, the singly imputed data, and the multiply imputed data: <>= with(bp, t.test
(recovtime, mu = 27)) with(imp, t.test(recovtime, mu = 27))$analyses[[1]] @ For the multiply imputed data we need to use the \Rcmd{lm} function to get the equivalent of the $t$-test by modeling
recovery time minus $27$ with an intercept only and testing for zero intercept. So the code needed is: <>= fit <- with(imp_ppm, lm(I(recovtime - 27) ~ 1)) summary(pool(fit)) @ Looking at the results
of the three analyses we see that the complete-case analysis fails to reject the hypothesis at the $5\%$ level whereas the other two analyses lead to results that are statistically significant at the
level. This simple (and perhaps rather artificial) example demonstrates that different conclusions can be reached by the different approaches. \section{Summary of Findings} The estimated standard
deviation of the blood pressure is lower when computed from the mean-imputed data than from the complete data. The corresponding value from the multiply imputed data lies between these two values.
The estimate of the mean from the multiply imputed data is very similar to the value obtained in the complete data analysis. (The value from the singly imputed data is, of course, the same as from
the complete data.) The estimates of the correlations between blood pressure and recovery time and log dose and recovery time are very similar in all three analyses but the variation in the latter
across the ten multiple imputations is considerable and this results in the regression coefficient for log dose being less significant than in the other two analyses. Testing the hypothesis that the
population mean of recovery time is $27$ minutes using complete-case analysis leads to a different conclusion than is arrived at by the two multiple imputations approaches. \section{Final Comments}
Missing values are an ever-present possibility in all types of studies although everything possible should be done to avoid them. But when data contain missing values multiple imputation can be used
to provide valid inferences for parameter estimates from the incomplete data. If carefully handled, multiple imputation can cope with missing data in all types of variables. In this chapter we have
given only a brief account of dealing with missing values; a detailed account is available in the issue of \stress{Statistical Methods in Medical Research entitled Multiple Imputation: Current
Perspectives} (Volume 16, Number 3, 2007) and in \cite{HSAUR:vanBuuren2012}. \section*{Exercises} \begin{description} \exercise The data in Table~\ref{MI-UStemp-tab} give the lowest temperatures (in
Fahrenheit) recorded in various months for cities in the US; missing values are indicated by NA. Calculate the correlation matrix of the data using \begin{enumerate} \item the complete-case approach,
\item the available-data approach, and \item a multiple-imputation approach. \end{enumerate} Find the principal components of the data using each of three correlation matrices and plot the cities in
the space of the first two components of each solution. <>= data("UStemp", package = "HSAUR3") toLatex(HSAURtable(UStemp), caption = "Lowest temperatures in Fahrenheit recorded in various months for
cities in the US.", label = "MI-UStemp-tab", rownames = TRUE) @ \exercise Find $95\%$ confidence intervals for the population means of the lowest temperature in each month using \begin{enumerate} \
item the complete-case approach, \item the mean value imputation, and \item a multiple-imputation approach. \end{enumerate} \exercise Find the correlation matrix for the four months in Table~\ref
{MI-UStemp-tab} using complete-case analysis, listwise deletion, and multiple imputation. \end{description} %%\bibliographystyle{LaTeXBibTeX/refstyle} %%\bibliography{LaTeXBibTeX/HSAUR} \end{document}
Linux Terminal Basics
The graphical user interface (GUI) is useful and easy to use for all types of computer users, and we all use it every day to get things done. However, computing history also gives us
command-line instructions to do the same things, and the Linux terminal is one such command-line interface. Let us take a few examples where one beats the other.
Why command line ?
Suppose you wanted to rename 3 files by including their date of creation. You can easily do that in the GUI. What if you had to do the same for 100 files? Would you still be happy to do it via the GUI,
selecting each file, right-clicking, renaming and typing?
Command-line tools give us the option to do such tasks with single-line instructions. Both the command line and the GUI have their pros and cons. It is up to us to identify the ideal tool for the scenario and use it.
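For example, a single shell loop (a sketch; it uses each file's last-modification date, since Linux filesystems do not generally expose a creation date, and the date format can be adjusted) renames every .txt file in the current folder:
# Append each file's modification date (YYYY-MM-DD) to its name
$ for f in *.txt; do mv "$f" "${f%.txt}-$(date -r "$f" +%F).txt"; done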
The command-line tool in Linux is often called a terminal. When it comes to the professional skill set of any programmer/machine learning/artificial intelligence enthusiast, he/she needs to have at
least some familiarity with command-line usage. In this blog post, we will see the minimum required basics that will help someone in this technology space.
Some examples of terminals include Python shell, command prompt in Windows, Console in a browser where you can execute Javascript, Database terminals. So we are already familiar with the terminals in
one way or other. A historical look at the terminal usage shows some interesting aspects.
Although these days a terminal is a piece of software, it originally meant a physical piece of hardware for interfacing with the computer. The computing device and the hardware interface were often
separate. The computing part was too big to fit on a desk, so the user's immediate interaction was with physical hardware, such as a typewriter-like device, which was used to instruct the computer.
A similar theme today is that most AI/ML engines run in the cloud, with our laptop acting as an interface to the cloud. Another situation is when you
have to leverage a high-performance or parallel computing facility: you might be remotely instructing the supercomputing facility from your laptop.
When it comes to Linux, the terminal is a piece of software via which we can give commands to the operating system and get tasks done. Some key plus points of the command-line interface compared
to a GUI are:
• Programmable
• Efficient in terms of Bandwidth
• You can work remotely
The second and third points are obvious when we think about it in terms of using a cloud service or supercomputing facility.
Now let us get started with actual terminal usage.
Linux Terminal
Launching Terminal in Ubuntu
We can launch the terminal in Ubuntu by pressing Control+ALT+T. There is also the GUI way of doing this. But the whole point of learning terminal is to realize its use-cases and power. So we will
stick to the keyboard shortcut way.
Keep in mind that Unix and Linux programmers over the years have written many different shell programs, like the Bourne shell, C shell, Korn shell, and Bash. The one we will be using is the Bash shell.
Upon opening the terminal we get a window with a prompt. The general form of the prompt is 'user@host$'. This means the user is logged into the host computer and the shell is sitting at the home directory, ready to receive instructions.
Basic Commands and Their Usage
Zoom In and Out: Press Control and + to make fonts larger, Control and – to make fonts smaller
Date: The command date gives the present date and time in the system.
sajil@sajil-KY652AA-ACJ-CQ3050IL:~$ date
Wed Nov 25 17:02:04 IST 2015
Mathematical Expression: We can evaluate mathematical expressions in the terminal, just like a calculator.
sajil@sajil-KY652AA-ACJ-CQ3050IL:~$ expr 10 + 21
31
Printing Messages: The 'echo' command outputs a custom message to the terminal. This feature is especially useful when writing bash script files (a file containing a set of bash commands to accomplish a relatively complex task).
sajil@sajil-KY652AA-ACJ-CQ3050IL:~$ echo Namaste
Namaste
user@host~$ echo Have a good day
Have a good day
Clearing Terminal: At any point, we can clear the terminal screen using the clear command.
sajil@sajil-KY652AA-ACJ-CQ3050IL:~$ clear
Present Working Directory: To know the present working location we can use the pwd command.
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ pwd
# ~ denotes home directory
$ cd ~
# . denotes the current directory
$ cd .
Navigating File Structures and Folders
Listing Files and Directories: To list out all files and folders in a directory, the ls command is used.
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ ls
Activity files
Here we can see that, in addition to the PDF and IPython files, it also lists the folders.
List with details: list files with additional details like file size, owner, modification date and time, permissions, etc.
# General format is
$ ls -<flag> <filename/directory>
# List files with details
$ ls -l
# 1st column: permissions
# 2nd column: number of hard links (1 for a regular file, 2 or more for a directory)
# 3rd column: owner
# 4th column: group
# 5th column: size
# 6th-8th columns: modification time
# 9th column: name
# To list all files including hidden files
$ ls -a
# Show folders with a slash and executables with a star
$ ls -F
# sort by size (big to small)
$ ls -S
# Sort by time (from new to old)
$ ls -t
# Combine arguments
$ ls -la
# -h : human readable
# -r : reverse order
Changing Directories: This could be the most frequent command you are ever going to use. To move inside a directory you can type:
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ cd documents
To move one step up to the parent folder, type cd<space>.. To jump to the root folder, type cd<space>/
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ cd ..
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ cd /
Additional change directory commands:
# To home folder
$ cd ~
# To last used directory
$ cd -
Directory and File Management
# To create a directory
$ mkdir yourfoldername
# To remove/delete a file
$ rm filename
# Remove directory recursively
# rm -<flag> <directory path>
# -r: recursively (Remove directory and their contents)
# -i: interactively (Ask to confirm if it before removing a file)
# -v: verbose (Show the removing progress)
# -f: force (Remove without asking)
$ rm -r foldername
# Copy a file from source to destination
# cp -<flag> <source> <destination>
# -r: recursively
# -i: interactively
# -v: verbose
$ cp source destination
# Copy whole folder
$ cp -r sourcedirectory destinationdirectory
# To cut and paste to another location
$ mv source destination
# To create shortcut
$ ln -s file shortcutname
To include a space in file or folder names: the sequence '\<space>' (a backslash followed by a space) is used to escape an actual space in the names of files or folders.
$ mkdir commandline\ workshop
# To create an empty file
$ touch new.txt
# Logging command line outputs to files
$ echo hello > new.txt
# Display/concatenate files and print on the standard output
$ cat filename
# Display contents of a file page-wise (press q to exit, up/down keys to scroll)
$ less filename
$ more filename
# Display first k lines (10 lines by default) of a file
$ head filename
$ head -2 filename
# Display last k lines(10 lines by default) of a file
$ tail filename
$ tail -2 filename
# Command line editor
$ nano filename
# Sort lines in a file
$ sort -<flag> <filename>
# Flag options
# -k col1,col2 : Specify a key to do the sorting; col1 and col2 indicate the starting and ending field indices
# -t: specify the delimiter
# -n: sort based on numerical value (default: alphabetically)
# -r: reverse order
# -u: only unique lines
# -b: Ignore blanks at the start of the line
# -o: specify the output file (default is the standard output)
Auto-completion: Another very useful feature of the terminal is auto-completion. Suppose you want to navigate into a set of nested folders. You don't have to type the full names of all the files and folders. Simply type the first 2-3 letters and press Tab. If there is only one matching file or folder starting with those letters, the terminal completes the name for you. Otherwise, it lists all the matches; with that information you can add one more letter and press Tab again to complete the name.
Accessing previous commands: The up and down arrow keys let you recall past commands, so if you want to run one again with minor modifications, just pick it from the history and edit it, together with the auto-completion feature.
Moving Files: You can move files from one location to another using the ‘mv’ command. The usage is simply mv<space>Source<space>Destination.
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ mv *.txt ../
Here I am moving all text files from the present directory to its parent directory. The '*' here matches any sequence of characters, so the source represents all files with a '.txt' extension. The destination '../' refers to the parent directory, one level above the current location, and all these files are moved there.
History: If you want to get a history of all commands used so far, you can use the history command.
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ history
File Permissions
The change mode (chmod) command is used to alter file permission settings. The general numeric format is chmod ugo filename (u: user, g: group, o: others), where the values of u, g, and o each range from 0 to 7.
$ chmod [user/group/others/all] [+-] [permission] <file/directory name>
Read Write Execute Decimal Explanation
0 0 0 0 No permissions
0 0 1 1 Execute only
0 1 0 2 Write only
0 1 1 3 Write & Execute
1 0 0 4 Read only
1 0 1 5 Read & Execute
1 1 0 6 Read & Write
1 1 1 7 Full permissions
# r : read, w : write, x : executable
# You can read and write
$ chmod 600 filename
# You can read, write, and execute (e.g. scripts)
$ chmod 700 filename
# You can read, write, and execute, and everyone else can read and execute (e.g. programs to share)
$ chmod 755 filename
# Alternate (symbolic) way to set all permissions (read, write, execute) for everyone
$ chmod a+rwx filename   # equivalent to chmod 777 filename
RegEx Use
You can use wildcard patterns (shell globs, which are simpler than full regular expressions) to select files by name.
# Concatenates files whose name starts with chr
$ cat chr*
# List details of only PDF files
$ ls *.pdf
# List files that contains chr in filename
$ ls *chr*
System Commands
Run as Admin: In situations where you want to run code with admin privileges, you can prepend with the keyword ‘sudo’. The terminal will ask for a password before executing it.
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ sudo snap install vlc
Command Documentation: To check the manual/documentation for any command, prepend the keyword 'man' to that command.
sajil@sajil-KY652AA-ACJ- CQ3050IL:~$ man ls
# To show current date and time
$ date
# To show current user logged in
$ whoami
# Show disk memory usage
$ df
# Show directory space usage
$ du
Listing out the structure of files and folders: We know that we can organize files inside folders or subfolders according to our convenience. The command 'tree' displays this hierarchical structure in a visual form.
sajil@sajil-KY652AA-ACJ-CQ3050IL:~/a$ tree
├── 1
└── 2
    ├── 3
    ├── 4
    └── 5
        └── 6
6 directories, 0 files
Creating and deleting folders: Folders/directories can be created or deleted using ‘mkdir’ and ‘rmdir’ commands. The other most frequent commands are changing directory and exiting from a directory.
# Creating Directories
sajil@sajil-KY652AA-ACJ-CQ3050IL:~/a$ mkdir directoryname
# Changing Directories
sajil@sajil-KY652AA-ACJ-CQ3050IL:~/a$ cd directoryname
# Removing Directories
sajil@sajil-KY652AA-ACJ-CQ3050IL:~/a$ rmdir directoryname
# Exiting from current Directory
sajil@sajil-KY652AA-ACJ-CQ3050IL:~/a$ cd ..
Logging Output
Saving Output of a Command to Text File: We can log the output messages given by a command to a text file using a ‘>’ character. The usage is as follows.
# > : output to the file (if file exists, it will overwrite it)
# >>: append the output to the end of a file (if file doesn’t exist, it will create the file)
$ ls > outputlog.txt
Here the command 'ls' lists all files and folders in the current directory, and the listing is written into a text file named 'outputlog.txt'.
Alias: You can give alternate names to specific commands
# $alias<space><customname>="<actualcommand>"
$ alias listout="ls"
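Note that an alias defined this way only lasts for the current session. A quick sketch of making it permanent (assuming Bash and the default ~/.bashrc setup on Ubuntu):
# Append the alias to ~/.bashrc so it is defined in every new terminal
$ echo 'alias listout="ls"' >> ~/.bashrc
# Reload the file in the current session
$ source ~/.bashrc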
Process Management
# Display system usages and running programs (press q to exit)
$ top
# Display currently active process
$ ps
# Stop a particular process with process id
$ kill pid
# Check Web Connection
$ ping google.com
# Download file from URL
$ wget url
$ curl url
# Remote log in
$ ssh user@host
# Remote log in with specific port
$ ssh -p port user@host
# Download and Upload SCP
# To upload file to remote server
$ scp <local file> username@hoffman2.idre.ucla.edu:<path>
# To download file from remote server to local computer
$ scp username@hoffman2.idre.ucla.edu:<path> .
# Exit or logout
$ exit
$ logout
File Compression
# Compressing files
$ gzip file.txt
# Uncompressing files
$ gzip -d file.txt.gz
$ unzip text.zip
# Create a tar archive named file.tar containing the given files or folders
$ tar cf outputfile_or_folder.tar filename_or_foldername
# Extract tar files
$ tar xf file.tar
# Create tar file with Gzip
$ tar czf zipfile.tar.gz inputfile
# Extract tar file with Gzip
$ tar xzf file.tar.gz
Searching Files
# Listing specific type of files
$ ls *.pdf
# Finding files
$ find . -name "*.txt"
Combine multiple commands together
$ command1 | command2 | command3
# The command1’s output will be redirected to command2’s input, and so on
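# As a quick illustration (assuming there are .txt files in the current directory),
# count how many .txt files are present by chaining three commands
$ ls | grep "\.txt$" | wc -l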
# Highlight matching text in color
$ echo Hello | grep --color el
# Split a stream of arguments into groups (here 3 per line) with xargs
$ echo a b c d e f | xargs -n 3
Bash Scripting
Creating a Bash Script: We can combine many of these commands into a bash script. This is done by writing the commands in a file with the extension '.sh' and changing its permissions to make it executable. To run the file we call its name prefixed with './'.
$ gedit new.sh
echo first command
echo second command
echo third command
$ chmod a+x new.sh
# or chmod 755 new.sh
$ ./new.sh > outputlog.txt
Here the first command invokes the built-in text editor in Ubuntu to create a text file with the '.sh' extension. In the next step, a few simple echo print commands are written into that file. The 'chmod' command and its permission setting make the file executable. The last step runs the executable file and logs its terminal output to another text file.
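As a sketch tying back to the renaming example from the introduction, a short script like the following could append each file's last-modification date to its name. It assumes GNU date (as shipped with Ubuntu) and operates on .txt files; adjust the pattern as needed:
#!/bin/bash
# rename_with_date.sh - append the last-modification date to every .txt file
for f in *.txt; do
    d=$(date -r "$f" +%Y-%m-%d)    # GNU date: -r FILE prints the file's modification time
    mv "$f" "${f%.txt}_$d.txt"
done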
Exiting From the Terminal: Finally, to exit from the terminal, simply type 'exit'.
Concluding Thoughts
In this post, we have seen the most frequent commands a developer or programmer is likely to encounter while working in the terminal. Combining two or more of such commands can make many tasks easier that would otherwise be harder to do in a GUI. Using such commands creatively, and practicing them, is crucial to being productive in your workspace. I hope these examples get you started with the first steps, help you realize the terminal's power, and save you a lot of time on mundane tasks.
{"url":"https://intuitivetutorial.com/2021/06/11/linux-terminal-basics/","timestamp":"2024-11-07T10:47:48Z","content_type":"text/html","content_length":"99666","record_id":"<urn:uuid:5b671ba5-8854-4295-9462-dace1c08e34e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00619.warc.gz"}
Lesson 12
Polynomial Division (Part 1)
Lesson Narrative
This is the first of two lessons whose purpose is to introduce students to polynomial division, focusing specifically on dividing by linear factors. Up until this point, students have added, subtracted, and multiplied polynomials, but any type of division has been restricted to rewriting quadratics as the product of two linear factors.
In this lesson, students begin by dividing a 3rd degree polynomial by \((x-1)\) using their understanding of the distributive property. It is important to note that while the diagrams used in this
lesson and the next are useful for staying organized and providing a structure to think through the division, they are not required for polynomial division, and some students may reason about the
division in different ways.
For example, when dividing \(2x^2+7x+6\) by \((x+2)\), students can think backwards to figure out what \((x+2)\) would have to be multiplied by in order to get \(2x^2+7x+6\). They can reason that the
\(x\) term from \((x+2)\) would need to be multiplied by \(2x\) to yield \(2x^2\). Then, since the whole polynomial \((x+2)\) is being multiplied by \(2x\), this would also result in a \(4x\). The
polynomial we’re trying to get has the term \(7x\), so \(3x\) must be added to the \(4x\) from the previous step. This means that \((x+2)\) must be multiplied by 3. Again, the 2 in \((x+2)\) is also
multiplied by this factor, so we also get a constant term of 6. This matches the constant term of \(2x^2+7x+6\), so we know that \((x+2)\) divides it evenly. Looking back at the terms we multiplied
by at each step, we can conclude that \(2x^2+7x+6=(x+2)(2x+3)\). A diagram like the ones shown in this lesson is a compact way of keeping track of this reasoning.
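As a quick check of this reasoning, multiplying the factors back together gives \((x+2)(2x+3)=2x^2+3x+4x+6=2x^2+7x+6\), which matches the polynomial we started with.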
Regardless of the division strategy used, the important takeaway for students is that when the division works out with no extra terms, we prove by example that \((x-1)\) is a linear factor of the
polynomial. Building on their previous work, students make a sketch of the original 3rd degree polynomial after they rewrite it as three linear factors.
Dividing polynomials can be confusing at first to students, since the number of terms involved means there are a lot of pieces students need to keep an eye on. An important part of this lesson is students sharing their reasoning for how they deduced each term of the division (MP3).
Learning Goals
Teacher Facing
• Calculate the result of polynomial division using a diagram.
• Identify factors of polynomials using division.
Student Facing
• Let’s learn a way to divide polynomials.
Student Facing
• I can divide one polynomial by another.
CCSS Standards
Building On
Building Towards
Additional Resources
Google Slides For access, consult one of our IM Certified Partners.
PowerPoint Slides For access, consult one of our IM Certified Partners. | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/3/2/12/preparation.html","timestamp":"2024-11-07T23:33:19Z","content_type":"text/html","content_length":"79636","record_id":"<urn:uuid:5c529b6c-229e-4822-8cdf-0209671754dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00295.warc.gz"} |
I have written a lot about distance estimated ray marching using OpenGL shaders on this blog.
But one of the things I have always left out is how to setup the camera and perspective projection in OpenGL. The traditional way to do this is by using functions such as ‘gluLookAt’ and
‘gluPerspective’. But things become more complicated if you want to combine ray marched shader graphics with the traditional OpenGL polygons. And if you are using modern OpenGL (the ‘core’ context),
there is no matrix stack and no ‘gluLookAt’ functions. This post goes through the math necessary to combine raytraced and polygon graphics in shaders. I have seen several people implement
this, but I couldn’t find a thorough description of how to derive the math.
Here is the rendering pipeline we will be using:
It is important to point out, that in modern OpenGL there is no such thing as a model, view, or projection matrix. The green part on the diagram above is completely programmable, and it is possible
to do whatever you like there. Only the part after the green box of the diagram (starting with clip coordinates) is fixed by the graphics card. But the goal here is to precisely match the convention
of the fixed-function OpenGL pipeline matrices and the GLU functions gluLookAt and gluPerspective, so we will stick to the conventional model, view, and projection matrix terminology.
The object coords are the raw coordinates, for instance as specified in VBO buffers. This is the vertices of an 3D object in its local coordinate system. The next step is to position and orient the
3D object in the scene. This is accomplished by applying the model matrix, that transform the object coordinates to global world coordinates. The model transformation will be different for the
different objects that are placed in the scene.
The camera transformation
The next step is to transform the world coordinates into camera or eye space. Now, neither old nor modern OpenGL has any special support for implementing a camera. Instead the conventional
gluPerspective always assumes an origin-centered camera facing the negative z-direction, with an up-vector in the positive y-direction. So, in order to implement a generic, movable camera, we
instead find a camera-view matrix, and then apply the inverse transformation to our world coordinates – i.e. instead of moving/rotating the camera, we apply the opposite transformation to the world.
Personally, I prefer using a camera specified using a forward, up, and right vector, and a position. It is easy to understand, and the only problem is that you need to keep the vectors orthogonal at
all times. So we will use a camera identical to the one implemented in gluLookAt.
The camera-view matrix is then of the form:
\begin{pmatrix}
r.x & u.x & -f.x & p.x \\
r.y & u.y & -f.y & p.y \\
r.z & u.z & -f.z & p.z \\
0 & 0 & 0 & 1
\end{pmatrix}
where r=right, u=up, f=forward, and p is the position in world coordinates. R, u, and f must be normalized and orthogonal.
Which gives an inverse of the form:
\begin{pmatrix}
r.x & r.y & r.z & q.x \\
u.x & u.y & u.z & q.y \\
-f.x & -f.y & -f.z & q.z \\
0 & 0 & 0 & 1
\end{pmatrix}
By multiplying the matrices together and requiring the result is the identity matrix, the following relations between p and q can be established:
q.x = -dot(r,p), q.y = -dot(u,p), q.z = dot(f,p)
p = -vec3(vec4(q,0)*modelView);
As may be seen, the translation part (q) of this matrix is the position of the camera expressed in the R,u, and f coordinate system.
Now, per default, the OpenGL shaders use a column-major representation of matrices, in which the data is stored sequentially as a series of columns (notice, that this can be changed by specifying
‘layout (row_major) uniform;’ in the shader). So creating the model-view matrix as an array on the CPU side looks like this:
float[] values = new float[] {
r[0], u[0], -f[0], 0,
r[1], u[1], -f[1], 0,
r[2], u[2], -f[2], 0,
q[0], q[1], q[2], 1};
Don’t confuse this with the original camera-transformation: it is the inverse camera-transformation, represented in column-major format.
The Projection Transformation
The gluPerspective transformation uses the following matrix to transform from eye coordinates to clip coordinates:
\begin{pmatrix}
f/aspect & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & \frac{(zF+zN)}{(zN-zF)} & \frac{(2*zF*zN)}{(zN-zF)} \\
0 & 0 & -1 & 0
\end{pmatrix}
where ‘f’ is cotangent(fovY/2) and ‘aspect’ is the width to height ratio of the output window.
(If you want to understand the form of this matrix, try this link)
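For reference, here is a sketch of how this matrix could be filled into a column-major array on the CPU side, mirroring the model-view code shown earlier. This is not taken from any particular codebase; fovY (in degrees), aspect, zN and zF are assumed to be floats that are already known:
float f = (float) (1.0 / Math.tan(Math.toRadians(fovY) / 2.0)); // cotangent(fovY/2)
float[] projection = new float[] {
    f/aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (zF+zN)/(zN-zF), -1,
    0, 0, (2*zF*zN)/(zN-zF), 0 };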
Since we are going to raytrace the view frustum, consider what happens when we transform a direction of the form (x,y,-1,0) from eye space to clip coordinates and further to normalized device
coordinates. Since the clip.w in this case will be 1, the x and y part of the NDC will be:
ndc.xy = (x*f/aspect, y*f)
Since normalized device coordinates range from [-1;1], this means that when we ray trace our frustrum, our ray direction (in eye space) must be in the range:
eyeX = [-aspect/f ; aspect/f]
eyeY = [-1/f ; 1/f]
eyeZ = -1
where 1/f = tangent(fovY/2).
We now have the necessary ingredients to set up our raytracing shaders.
The polygon shaders
But let us start with the polygon shaders. In order to draw the polygons, we need to apply the model, view, and projection transformations to the object space vertices:
gl_Position = projection * modelView * vertex;
Notice, that we premultiply the model and view matrix on the CPU side. We don’t need them individually on the GPU side. If you wonder why we don’t combine the projection matrix as well, it is because
we want to use the modelView to transform the normals as well:
eyeSpaceNormal = mat3(modelView) * objectSpaceNormal;
Notice that, in general, normals transform differently from positions. They should be multiplied by the inverse of the transposed 3×3 part of the modelView matrix. But if we only do uniform scaling and
rotations, the above will work, since the rotational part of matrix is orthogonal, and the uniform scaling does not matter if we normalize our normals. But if you do non-uniform scaling in the model
matrix, the above will not work.
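For reference, a sketch of the fully general normal transform (this assumes GLSL 1.40 or later, where inverse() is available for matrices) would be:
// correct even with non-uniform scaling in the model matrix
eyeSpaceNormal = normalize(transpose(inverse(mat3(modelView))) * objectSpaceNormal);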
The raytracer shaders
The raytracing must be done in world coordinates. So in the vertex shader for the raytracer, we need to figure out the eye position and ray direction (both in world coordinates) for each pixel. Assume
that we render a quad, with the vertices ranging from [-1,-1] to [1,1].
The eye position can be easily found from the formula found under ‘the camera transformation’:
eye = -(modelView[3].xyz)*mat3(modelView);
Similarly, by transforming the ranges we found above from eye to world space, we get that:
dir = vec3(vertex.x*fov_y_scale*aspect, vertex.y*fov_y_scale, -1.0)*mat3(modelView);
where fov_y_scale = tangent(fovY/2) is a uniform calculated on the CPU side.
Normally, OpenGL takes care of filling the z-buffer. But for raytracing, we have to do it manually, which can be done by writing to gl_FragDepth. Now, the ray tracing takes place in world
coordinates: we are tracing from the eye position and into the camera-forward direction (mixed with camera-up and camera-right). But we need the z-coordinate of the hit position in eye coordinates.
The raytracing is of the form:
vec3 hit = p + rayDirection * distance; // hit in world coords
Converting the hit point to eye coordinates gives (the p and q terms cancel):
eyeHitZ = -distance * dot(rayDirection, cameraForward);
which in clip coordinates becomes:
clip.z = [(zF+zN)/(zN-zF)]*eyeHitZ + (2*zF*zN)/(zN-zF);
clip.w = -eyeHitZ;
Making the perspective divide, we arrive at normalized device coordinates:
ndcDepth = ((zF+zN) + (2*zF*zN)/eyeHitZ)/(zF-zN)
The ncdDepth is in the interval [-1;1]. The last step that remains is to convert into window coordinates. Here the depth value is mapped onto an interval determined by the gl_DepthRange.near and
gl_DepthRange.far parameters (usually these are just 0 and 1). So finally we arrive at the following:
gl_FragDepth =((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
Putting the pieces together, we arrive at the following for the ray tracing vertex shader:
void main(void)
{
    gl_Position = vertex;
    // camera position in world coordinates (see 'the camera transformation' above)
    eye = -(modelView[3].xyz)*mat3(modelView);
    // per-pixel ray direction: frustum direction in eye space, rotated into world space
    dir = vec3(vertex.x*fov_y_scale*aspect, vertex.y*fov_y_scale, -1.0)*mat3(modelView);
    cameraForward = vec3(0.0, 0.0, -1.0)*mat3(modelView);
}
and this code for the fragment shader:
void main (void)
{
    vec3 rayDirection = normalize(dir);
    float distance;   // filled in by trace()
    vec3 color;       // filled in by trace()
    trace(eye, rayDirection, distance, color);
    fragColor = color;
    float eyeHitZ = -distance * dot(cameraForward, rayDirection);
    float ndcDepth = ((zFar+zNear) + (2.0*zFar*zNear)/eyeHitZ) / (zFar-zNear);
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth)
        + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
}
The above is of course just snippets. I’m currently experimenting with a Java/JOGL implementation of the above (Github repo), with some more complete code.
Optimizing GLSL Code
By making selected variables constant at compile time, some 3D fractals render more than four times faster. Support for easily locking variables has been added to Fragmentarium.
Some time ago, I became aware that the raytracer in Fragmentarium was somewhat slower than both Fractal Labs and Boxplorer for similar systems – this was somewhat puzzling since the DE raycasting
technique is pretty much the same. After a bit of investigation, I realized that my standard raytracer had grown slower and slower, as new features had been added (e.g. reflections, hard shadows, and
floor planes) – even if the features were turned off!
One way to speed up GLSL code, is by marking some variables constant at compile-time. This way the compiler may optimize code (e.g. unroll loops) and remove unused code (e.g. if hard shadows are
disabled). The drawback is that changing these constant variables requires that the GLSL code is compiled again.
It turned out that this does have a great impact on some systems. For instance for the ‘Dodecahedron.frag’, take a look at the following render times:
No constants: 1.4 fps (1.0x)
Constant rotation matrices : 3.4 fps (2.4x)
Constant rotation matrices + Anti-alias + DetailAO: 5.6 fps (4.0x)
All 38 parameters (except camera): 6.1 fps (4.4x)
The fractal rotation matrices are the matrices used inside the DE-loop. Without the constant declarations, they must be calculated from scratch for each pixel, even though they are identical for all
pixels. Doing the calculation at compile-time gives a notable speedup of 2.4x (notice that another approach would be to calculate such frame constants in the vertex shader and pass them to the pixel
shader as ‘varying’ variables. But according to this post this is – surprisingly – not very effective).
The next speedup – from the ‘Anti-alias’ and ‘DetailAO’ variables – is more subtle. It is difficult to see from the code why these two variables should have such impact. And in fact, it turns out
that combinations of other variables will amount in the same speedup. But these speedups are not additive! Even if you make all variables constants, the framerate only increases slightly above 5.6
fps. It is not clear why this happens, but I have a guess: it seems that when the complexity is lowered below a certain threshold, the shader code execution speed increases sharply. My guess is that
for complex code, the shader runs out of free registers and needs to perform calculations using a slower kind of memory storage.
Interestingly, the ‘iterations’ variable offers no speedup – even though the compiler must be able to unroll the principal DE loop, there is no measurable improvement by doing it.
Finally, the compile time is also greatly reduced when making variables constant. For the ‘Dodecahedron.frag’ code, the compile time is ~2000ms with no constants. By making most variables constant,
the compile time is lowered to around ~335ms on my system.
Locking in Fragmentarium.
In Fragmentarium variables can be locked (made compile-time constant) by clicking the padlock next to them. Locked variables appear with a yellow padlock next to them. When a variable is locked, any
changes to it will first be executed when the system is compiled (by pressing ‘build’). Locked variables, which have been changed, will appear with a yellow background until the system is compiled,
and the changes are executed.
Notice, that whole parameter groups may be locked, by using the buttons at the bottom.
The ‘AntiAlias’ and ‘DetailAO’ variables are locked. The ‘DetailAO’ has been changed, but the changes are not executed yet (the yellow background). The ‘BoundingSphere’ variable has a grey
background, because it has keyboard focus: its value can be finetuned using the arrow keys (up/down controls step size, left/right changes value).
In a fragment, a user variable can be marked as locked by default, by adding a ‘locked’ keyword to it:
uniform float Scale; slider[-5.00,2.0,4.00] Locked
Some variables can not be locked – e.g. the camera settings. It is possible to mark such variables by the ‘NotLockable’ keyword:
uniform vec3 Eye; slider[(-50,-50,-50),(0,0,-10),(50,50,50)] NotLockable
The same goes for presets. Here the locking mode can be stated, if it is different from the default locking mode:
#preset SomeName
AntiAlias = 1 NotLocked
Detail = -2.81064 Locked
Offset = 1,1,1
Locking will be part of Fragmentarium v0.9, which will be released soon.
GPU versus CPU for pixel graphics
After having gained a bit of experience with GPU shader programming during my Fragmentarium development, a natural question to ask is: how fast are these GPU’s?
This is not an easy question to answer, and it depends on the specific application. But I will try to give an answer for the kind of systems that I’m interested in: pixel graphics systems, where each
pixel can be calculated independently of the others, such as raytraced 3D fractals.
Lets take my desktop computer, a fairly standard desktop machine, as an example. It is equipped with Nvidia Geforce 9800GT GPU @ 1.5 GHz, and a Intel Core 2 Quad Q8200 @ 2.33GHz.
How many processing unit are there?
Number of processing units (CPU): 4 CPU cores
Number of processing units (GPU): 112 Shader units
Based on these numbers, we might expect the GPU to be a factor of 28x faster than the CPU. Of course, this totally ignores the efficiency and operating speed of the processing units. Let's try
looking at the processing power in terms of maximum number of floating-point operations per second instead:
Theoretical Peak Performance
Both Intel and Nvidia list the GFLOPS (billion floating point operations per second) rating for their products. Intel’s list can be found here, and Nvidia’s here. For my system, I found the following
Performance (CPU): 37.3 GFLOPS
Performance (GPU): 504 GFLOPS
Based on these numbers, we might expect the GPU to be a factor of 14x faster than the CPU. But what do these numbers really mean, and can they be compared? It turns out that these numbers are
obtained by multiplying the processor frequency by the maximum number of instructions per clock cycle.
For the CPU, we have four cores. Now, when Intel calculate their numbers, they do it based on the special 128-bit SSE registers on every modern Pentium derived CPU. These extensions make it possible
to handle two double precision floating point, or four single precision floating point numbers per clock cycle. And in fact there exists a special instruction – the MAD, or Multiply-Add, instruction
– which allows for two arithmetic operations per clock cycle on each element in the SSE registers. This means Intel assumes 4 (cores) x 2 (double precision floats) x 2 (MAD instructions) = 16
instructions per clock cycle. This gives the theoretical peak performance stated above:
Performance (CPU): 2.33 GHz * 4 * 2 * 2 = 37.3 GFLOPS (double precision floats)
What about the GPU? Here we have 112 independent processing units. On the GPU architecture an even more benchmarking-friendly instruction exists: the MAD+MUL which combines two multiplies and one
addition in a single clock cycle. This means Nvidia assumes 112 (cores) * 3 (MAD+MUL instructions) = 336 instructions per clock cycle. Combining this with a stated processing frequency of 1.5 GHz, we
arrive at the number stated earlier:
Performance (GPU): 1.5 GHz * 112 * 3 = 504 GFLOPS (single precision floats)
But wait… Nvidia's numbers are for single precision floats – the Geforce 8800GT does not even support double precision floats. So for a fair comparison we should double Intel's number, since the SSE
extensions allow four simultaneous single precision numbers to be processed instead of two double precision floats. This way we get:
Performance (CPU): 2.33 GHz * 4 * 4 * 2 = 74.6 GFLOPS (single precision floats)
Now, using this as a guideline, we would expect my GPU to be a factor of 6.8x faster than my CPU. But we have some pretty big assumptions here: for instance, not many CPU programmers would write
SSE-optimized code – and is a modern C++ compiler powerful enough to automatically take advantage of them anyway? And how often is the GPU able to use the three operation MUL+MAD instruction?
A real-world experiment
To find out I wrote a simple 2D Mandelbrot system and benchmarked it on the CPU and GPU. This is really the kind of computational tasks that I’m interested in: it is trivial to parallelize and is not
memory-intensive, and the majority of executed code will be floating point arithmetics. I did not try to optimize the C++ code, because I wanted to see if the compiler was able to perform some SSE
optimization for me. Here are the execution times:
13941 ms – CPU single precision (x87)
13941 ms – CPU double precision (x87)
10535 ms – CPU single precision (SSE)
11367 ms – CPU double precision (SSE)
424 ms – GPU single precision
(These numbers have some caveats – I did perform the tests multiple times and discarded the first few runs, but the CPU code was only single-threaded – so I assumed the numbers would scale perfectly
and divided the execution times by four. Also, I verified by checking the generated assembly code, that SSE instructions indeed were used for the core Mandelbrot loop, when they were enabled.).
There are a couple of things to notice here: first, there is no difference between single and double precision on the CPU. This is as could be expected for the x87 compiled code (since the x87
defaults to 80-bit precision anyway), but for the SSE version, we would expect a doubling in speed. As can be seen, the SSE code is really not much more efficient than the x87 code – which
strongly suggests that the compiler (here Visual Studio C++ 2008) is not very good at optimizing for SSE.
So for this example we got a factor of 25x speedup by using the GPU instead of the CPU.
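For reference, the per-pixel loop being benchmarked looks roughly like the following GLSL fragment shader. This is only a minimal sketch, not the code actually used in the test, and the uniform and varying names are made up for illustration:
uniform vec2 center;   // assumed inputs, not the original variable names
uniform float scale;
varying vec2 coord;    // pixel coordinate in [-1,1]

void main() {
    vec2 c = center + coord*scale;
    vec2 z = vec2(0.0);
    int i;
    for (i = 0; i < 256; i++) {
        // z = z*z + c in complex arithmetic
        z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;
        if (dot(z, z) > 4.0) break;
    }
    gl_FragColor = vec4(vec3(float(i)/256.0), 1.0);
}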
“Measured” GFLOPS
Another question is how this example compares to the theoretical peak performance. By using Nvidia's Cg SDK I was able to get the GPU assembly code. Since I could now count the number of instructions
in the main loop, and I knew how many iterations were performed, I was able to calculate the actual number of floating point operations per second:
GPU: 211 (Mandel)GFLOPS
CPU: 8.4 (Mandel)GFLOPS*
(*The CPU number was obtained by assuming the number of instructions in the core loop was the same as for the GPU: in reality, the CPU disassembly showed that the simple loop was partially unrolled
to more than 200 lines of very complex assembly code.)
Compared to the theoretical maximum numbers of 504 GFLOPS and 74.6 GFLOPS respectively, this shows the GPU is much closer to its theoretical limit than the CPU.
GPU Caps Viewer – OpenCL
A second test was performed using the GPU Caps Viewer. This particular application includes a 4D Quaternion Julia fractal demo in OpenCL. This is interesting since OpenCL is a heterogeneous platform
– it can be compiled to both CPU and GPU. And since Intel just released an alpha version of their OpenCL SDK, I could compare it to Nvidia’s SDK.
The results were interesting:
Intel OpenCL: ~5 fps
Nvidia OpenCL: ~35 fps
(The FPS did vary through the animation, so these numbers are not very accurate. There were no dedicated benchmark mode.)
This suggests that Intel's OpenCL compiler is actually able to take advantage of the SSE instructions and produce highly optimized output. Either that, or Nvidia's OpenCL implementation is not very
efficient (which is not likely).
The OpenCL based benchmark showed my GPU to be approximately 7x faster than my CPU, which is exactly what is predicted by comparing the theoretical GFLOPS values (for single precision).
For normal code written in a high-level language like C or GLSL (multithreaded, single precision, and without explicit SSE instructions) the computational power is roughly equivalent to the number of
cores or shader units. For my system this makes the GPU a factor of 25x faster.
Even though the CPU cores have a higher operating frequency and in principle could execute more instructions via their SSE registers, this does not seem to be fully utilized (and in fact, compiling with
and without SSE optimization did not make a significant difference, even for this very simple example).
The OpenCL example tells another story: here the measured performance was proportional to the theoretical GFLOPS ratings. This is interesting since this indicate, that OpenCL could also be
interesting for CPU-applications.
One thing to bear in mind is, that the examples tested here (the Mandelbrot and 4D Quaternion Julia) are very well-suited for GPU execution. For more complex code, with conditional branching, double
precision floating point operations, and non-coalesced memory access, the CPU is much more efficient than the GPU. So for a desktop computer such as mine, a factor of 25x is probably the best you can
hope for (and it is indeed a very impressive speedup for any kind of code).
It is also important to remember that GPU’s are not magical devices. They perform operations with a theoretical peak performance typically 5-15 times larger than a CPU. So whenever you see these
1000x speed up claims (e.g. some of the CUDA showcases), it is probably just an indication of a poor CPU implementation.
But even though the performance of GPU’s may be somewhat exaggerated you can still get a tremendous speedup. And GPU interfaces such as GLSL shaders are really simple to use: you do not need to deal
explicitly with threads, you have built-in vectors and matrices, and you can compile GLSL code dynamically, during run-time. All features which makes GPU programming nearly ideal for exploring pixel
graphic systems.
Creating a Raytracer for Structure Synth (Part II)
When I decided to implement the raytracer in Structure Synth, I figured it would be an easy task – after all, it should be quite simple to trace rays from a camera and check if they intersect the
geometry in the scene.
And it turned out, that it actually is quite simple – but it did not produce very convincing pictures. The Phong-based lighting and hard shadows are really not much better than what you can achieve
in OpenGL (although the spheres are rounder). So I figured out that what I wanted was some softer qualities to the images. In particular, I have always liked the Ambient Occlusion and Depth-of-field
in Sunflow. One way to achieve this is by shooting a lot of rays for each pixel (so-called distributed raytracing). But this is obviously slow.
So I decided to try to choose a smaller subset of samples for estimating the ambient occlusion, and then do some intelligent interpolation between these points in screen space. The way I did this was
to create several screen buffers (depth, object hit, normal) and then sample at regions with high variations in these buffers (for instance at every object boundary). Then followed the non-trivial
task of interpolating between the sampled pixels (which were not uniformly distributed). I had an idea that I could solve this by relaxation (essentially iterative smoothing of the AO screen buffer,
while keeping the chosen samples fixed) – the same way the Laplace equation can be numerically solved.
While this worked, it had a number of drawbacks: choosing the condition for where to sample was tricky, the smoothing required many steps to converge, and the approach could not be easily
multi-threaded. But the worst problem was that it was difficult to combine with other stuff, such as anti-alias and depth-of-field calculations, so artifacts would show up in the final image.
I also played around with screen based depth-of-field. Again I thought it would be easy to apply a Gaussian blur based on the z-buffer depth (of course you have to prevent background objects from
blurring the foreground, which complicates things a bit). But once again, it turned out that creating a Gaussian filter for each particular depth actually gets quite slow. Of course you can bin the
depths, and reuse the Gaussian filters from a cache, but this approach got complicated, and the images still displayed artifacts. And a screen based method will always have limitations: for instance,
the blur from an object hidden behind another object will never be visible, because the object is not part of the screen buffers.
So in the end, I ended up discarding all the hacks, and settled for the much more satisfying solution of simply using a lot of rays for each pixel.
This may sound very slow: after all you need multiple rays for anti-alias, multiple rays for depth-of-field, multiple rays for ambient occlusion, for reflections, and so forth, which means you might
end up with a combinatorial explosion of rays per pixel. But in practice there is a nice shortcut: instead of trying all combinations, just choose some random samples from all the possible combinations.
This works remarkably well. You can simulate all these complex phenomena with a reasonable number of rays. And you can use more clever sampling strategies in order to reduce the noise (I use
stratified sampling in Structure Synth). The only drawback is, that you need a bit of book-keeping to prepare your stratified samples (between threads) and ensure you don’t get coherence between the
different dimensions you sample.
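As an illustration of the idea – a minimal sketch only, not the actual Structure Synth code – jittered, stratified sample positions for a pixel can be generated like this:
#include <cstdlib>
#include <utility>
#include <vector>

// one uniform random number in [0,1)
double uniform01() { return rand() / (RAND_MAX + 1.0); }

// n x n stratified (jittered) samples covering the unit square:
// one random sample inside each stratum instead of n*n fully random samples
std::vector<std::pair<double, double> > stratifiedSamples(int n) {
    std::vector<std::pair<double, double> > samples;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            samples.push_back(std::make_pair((i + uniform01()) / n,
                                             (j + uniform01()) / n));
    return samples;
}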
Another issue was how to accelerate the ray-object intersections. This is a crucial part of all raytracers: if you need to check your rays against every single object in the scene, the renders will
be extremely slow – the rendering time will be proportional to the number of objects. On the other hand spatial acceleration structures are often able to render a scene in a time proportional to the
logarithm of the number of objects.
For the raytracer in Structure Synth I chose to use a uniform grid (aka voxel stepping). This turned out to be a very bad choice. The uniform grid works very well, when the geometry is evenly
distributed in the scene. But for recursive systems, objects in a scene often appear at very different scales, making the cells in the grid very unevenly populated.
Another example of this is, that I often include a ground plane in my Structure Synth scenes (by using a flat box, such as “{ s 1000 1000 0.1 } box”). But this will completely kill the performance of
the uniform grid – most objects will end up in the same cell in the grid, and the acceleration structure gets useless. So in general, for generative systems with different scales, the use of a
uniform grid is a bad choice.
Now, that is a lot of stuff that didn't work out well. So what is working?
As of now the raytracer in Structure Synth provides a nice foundation for things to come. I've gotten the multi-threaded part set up correctly, which includes a system for coordinating stratified
samples. Each thread have its own (partial) screen space buffer, which means I can do progressive rendering. This also makes it possible to implement more complex filtering (where the filtered
samples may contribute to more than one pixel – in which case the raytracer is not embarrassingly parallel anymore).
What is missing?
Materials. As of now there is only very limited control of materials, and things like transparency don't work very well.
Filtering. As I mentioned above, the multi-threaded renderer supports working with filters, but I haven’t included any filters in the latest release. My first experiments (with a Gaussian filter)
were not particularly successful.
Lighting. As of now the only option is a single, white, point-like light source casting hard shadows. This rarely produces nice pictures.
In the next post I’ll talk a bit about what I’m planning for future versions of the raytracers.
Cinder – Creative Coding in C++
Cinder is a new C++ library for creative applications. It is free, open-source, and cross-platform (Windows, Mac, iPhone/iPad, but no Linux). Think of it as Processing, but in C++.
Cinder offers classes for image processing, matrix, quaternion, spline and vector math, but also more general stuff like XML, HTTP, IO, and 2D Graphics.
The more generic stuff is implemented via third-party libraries, such as TinyXML, Cairo, AntTweakBar (a simple GUI), Boost (smart pointers and threads) and system libraries (QuickTime, Cocoa,
DirectAudio, OpenGL) – certainly an ambitious range of technologies and uses.
Their examples are impressive, especially some of the demos by Robert Hodgin (flight404):
Cymatic Ferrofluid by flight404 (be sure to watch the videos).
Robert Hodgin has also created a very nice Cinder tutorial, which guides you through the creation of a quite spectacular particle effect.
Finally, it should be noted that openFrameworks offers related functionality, also based on C++.
Assorted Links
Generative Music Software
Adam M. Smith has begun working on cfml – a context-free music language. It is a Context-Free Design Grammar – for music. I’m very interested in how this develops.
A graphical representation of cfml output (original here)
Cfml is implemented as an Impromptu library. Impromptu is a live coding environment, based on the Scheme language, and has existed since 2005. Andrew Sorensen, the developer of Impromptu, has created
some of the most impressive examples of live coding I have seen. In particular, the last example, inspired by Keith Jarrett’s Sun Bear Concerts, is really impressive. (I might be slightly biased
here, since I believe that Jarrett’s solo piano concerts – especially the Köln Concert and the Sun Bear Concerts – rank among the best music ever made).
Finally, Supercollider 140 is a selection of audio pieces all created in Supercollider in 140 characters or less. An interesting example of using restrictions to spur creativity. Another example is
the 200 char Processing sketch contest.
Free Indie Game Development
This month also saw the release of the Unreal Development Kit, basically a version of the Unreal Engine 3, that is free for non-commercial use. This is great news for amateur game developers, but for
me, the big question was whether this could be used as a powerful platform for generative art or live demos. I downloaded the kit and played around with it for a while, but while the 3D engine is
stunning, UDK seems very geared towards graphical development (I certainly do not want to draw my programs, and the built-in UnrealScript does not impress me either).
In related news, the basic version of Unity 2.6 is now also free. The main focus of Unity is also game development, but from a generative art / live demo perspective it holds greater promise. Unity
offers an advanced graphics engine with user-scriptable shaders, integrated PhysX physics engine, and 3D audio.
Unity's development architecture is also very solid: scripts are written in (JIT-compiled) JavaScript, and components can be written in C# (using Mono, the open-source .NET implementation). Using a dynamic scripting language such as JavaScript to control a more rigid body of classes written in a stricter, statically typed environment, such as C#, is a good way to manage complex software. All Mozilla software – including Firefox – is built using this model (JavaScript + XPCOM C++ components), and newer platforms, such as Microsoft's Silverlight, also use it (JavaScript + C# components).
I made a few tests with Unity, and it is simple to control and instance even pretty complex structures. I considered writing a simple Structure Synth viewer using Unity, but was unfortunately put a
bit off, when I discovered that Screen Space Ambient Occlusion and Full Screen Post-Processing Effects are not part of the free basic edition. The iPhone version of the Unity engine is not free
either, but that is probably as could be expected.
It will be interesting to see if Unity will be picked up by the Generative Art community.
SIGGRAPH Asia
Finally two papers presented at SIGGRAPH Asia 2009 should be noted:
Shadow Art creates objects which cast three different shadows.
Sketch2Photo creates realistic photo-montages from freehand sketches annotated with text labels.
Random Colors, Color Pools, and Dual Mersenne Twister Goodness.
I’ve implemented a random color scheme in Structure Synth, using a new ‘color random’ specifier.
But what exactly is a random color? My first attempt was to use the HSV color model and choose a random hue, with full brightness and saturation.
This produces colors like this:
Most of my Nabla pictures used this color scheme. It produces some very strong colors.
Then I tried the RGB model using 3 random numbers, one for each color channel, which creates colors like these:
But what about greyscale colors:
I decided that it was necessary to be able to switch between different color schemes.
So I created a new ‘set colorpool’ command. Besides the color schemes above (‘set colorpool randomhue’, ‘set colorpool randomrgb’, and ‘set colorpool greyscale’) I created two additional color schemes.
One where you specify a list of colors:
(For this image the command was: “set colorpool list:orange,white,white,white,white,white,white,grey”. As is evident it is possible to repeat a given color, to emphasize its occurrence in the image.)
And on where you specify an image which is used to sample colors from:
The command used for the above image was: “set colorpool image:001.PNG”. Whenever a random color is requested (by the ‘random color’ operator), the program will sample a random point from the
specified image and use the color of this pixel. This is a quite powerful command, making it possible to imitate the color tonality of another picture.
Now this is all good. But I realized that there are some problems with this approach.
The problem is that geometry and the colors draw numbers from the same random number generator (the C-standard library ‘rand()’ function).
This means that changing the color scheme changes the geometry (since the color schemes use a different number of random numbers for each color – randomhue uses 1 random number per color, the image
sampling uses two (X and Y) random numbers per color, the randomrgb uses three).
This is not acceptable, since you'll want to change the color schemes without changing the geometry. Another problem is that the C-standard library 'rand' function is not platform independent – so even
if you specify an EisenScript together with an initial random seed, you will not get the same structure on different platforms.
I solved this by implementing new random generators in Structure Synth. I now use two independent Mersenne Twister random number generators, so that I have two random streams – one for geometry and
one for colors.
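In modern C++ the same idea could be sketched with the standard library's Mersenne Twister (Structure Synth ships its own implementation, so the names and seeds below are purely illustrative):
#include <random>

unsigned int seed = 42;              // illustrative seed
std::mt19937 geometryRng(seed);      // consumed only by geometry/rule choices
std::mt19937 colorRng(seed + 1);     // consumed only by 'random color'

double nextColorValue() {
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(colorRng);           // does not disturb the geometry stream
}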
The Second Coming of JavaScript
Some months ago, John Resig created processing.js – an impressive JavaScript port of processing, which draws its output on a ‘canvas’ element entirely client-side inside your browser (at least if
your web-browser is Firefox 3 or a recent nightly build of WebKit, that is).
Now Context Free (the original inspiration for Structure Synth) has been ported to JavaScript too: Aza Raskin has created ContextFree.js (Source here).
JavaScript has undergone a tremendous evolution. From creating cheesy ‘onMouseOver’ effects for buttons on web pages to being the ‘glue’ binding together complex applications like Firefox or Songbird
(the Mozilla application frameworks works by stringing together C++ components with JavaScript). Likewise Microsoft chose to build their Silverlight technology on .NET components which can be
controlled by JavaScript in the browser.
And of course the ActionScript in Adobe Flash is also JavaScript. Adobe (and/or Macromedia) has put a lot of effort into creating fast JavaScript implementations – most notably their Tamarin virtual
machine and Just-In-Time compiler, which in theory should make JavaScript almost as fast as native code – or at least comparable to other JIT compiled languages such as Java and the .NET languages.
Tamarin is open-sourced, and will eventually make it into Firefox 4.
Finally, while the Tamarin virtual machine was built to execute (and JIT) bytecode originating from JavaScript, other languages may target Tamarin as well. Adobe has demonstrated the possibility of
compiling standard C programs into Tamarin parseable byte-code (their demo included Quake, a Nintendo emulator, and several languages like Python and Ruby).
So perhaps a future version of Structure Synth could be running as C++ compiled into Tamarin bytecode in a Flash application…
Underground code
The Demo Scene never ceases to amaze me. The technical quality of these demos is amazing – complex 3D scenes rendered in real time, procedural textures, real-time sound synthesis, and incredibly small file sizes.
Recently I stumbled upon demoscene.tv which features recorded videos (flash video) of many of the best demos. Of course part of the fun is actually running these demos, to be amazed that they are
indeed real-time, but sadly my laptop is not geared towards either CPU- or GPU-intensive activities.
A few selected demos:
4K Should Be Enough For Everyone
Kindernoiser (yep, weird name) is a 4096 byte demo of 3D Julia sets. For comparison, the HTML for this page is close to 30 KB.
If you do not have a powerful graphics card, try the video linked to below. | {"url":"https://blog.hvidtfeldts.net/index.php/category/programming/","timestamp":"2024-11-05T18:53:54Z","content_type":"text/html","content_length":"79828","record_id":"<urn:uuid:7495cd27-b702-4363-9966-90ab001925bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00509.warc.gz"} |
Investigation of the space charge field reduction factor in a klystron in the nonlinear mode
Computer simulation of the electronic processes is used to calculate the space charge field reduction factor for a bunched electron beam in the nonlinear mode of operation of a klystron. Compared
with the linear mode of operation, the reduction factor is found to depend on the initial velocity modulation, the value of space charge, the spatial coordinates, and the transit angle. With
increasing transit angle, the reduction factor increases for bunched electrons and decreases for dispersed electrons.
Radiotekhnika i Elektronika
Pub Date:
March 1982
- Electron Beams;
- Electron Bunching;
- Klystrons;
- Nonlinear Equations;
- Space Charge;
- Coefficients;
- Computerized Simulation;
- Electron Trajectories;
- Electronics and Electrical Engineering
Dijkstra's Algorithm: Shortest Paths for Weighted Graph
In this article I show how to implement Dijkstra's algorithm to show the shortest route by train through several American cities.
Single Source Shortest Path
Today we're going to take a look at another Single Source Shortest Path (SSSP) algorithm: Dijkstra's Algorithm. Edsger Dijkstra invented this algorithm in the late 1950s in order to find the shortest
flights between cities in Europe. It's been inferred that this algorithm, or a variation of it, is what powers many common driving-directions programs like Google Maps and other GPS direction
finders. It is also extremely popular in robotics for route planning and obstacle avoidance, and in the video game development field for NPC pathfinding.
Relationship to other Algorithms
Dijkstra's Algorithm is very similar to other commonly used graph algorithms. One in particular, Prim's Minimum Spanning Tree Algorithm, functions almost identically, except it builds a minimum spanning tree
that includes all of the vertices in the graph. A* search is an outgrowth of Dijkstra's Algorithm which incorporates a "heuristic" that guides the algorithm towards a possibly even better solution. All
of these algorithms can be viewed as adaptations of Best First Search, sometimes called Priority First Search. In artificial intelligence it is sometimes called "Uniform Cost Search."
Priority Queues
When we implemented Breadth First Search we were able to get away with using a regular First In First Out queue. For this algorithm we need to use a priority queue. A priority queue organizes output
based on a weighted rank. Priority Queues can be implemented using a variety of data structures, the most common of which is a heap.
One of the simplest ways to implement a priority queue is via a sorted linked list. Keep in mind that the choice of data structure for both the adjacency list, and the priority queue can have a very
big impact on the running time of this algorithm. One of the asymptotically fastest known implementations, discovered by Fredman and Tarjan in the 1980s, incorporates a Fibonacci Heap.
Priority Queues can either be "Min Priority" or "Max Priority" depending on if the application at hand considers a higher value or lower value a better priority. For our purposes we will be using a
min priority queue.
Where a normal queue only has two main operations, push() and pop(), our priority queue includes a third operation: update() which is used to change the priority of items already in the queue.
Because this is an article on Dijkstra's Algorithm, and not priority queues, I'm not going to go into the implementation details of my priority queue. The code for the priority queue used in this
example is available on my github at:
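For readers who want something self-contained, here is a minimal sketch (mine, not the author's github code) of a min-priority queue matching the interface used in the functions below — push(item, priority), pop(), update(item, priority) and empty(). It uses an unsorted std::list with a linear scan, so pop() and update() are O(n); a real implementation would use a heap.

#include <list>
#include <utility>

template <typename T>
class pq {
    std::list<std::pair<T, int> > items;          // (item, priority) pairs
public:
    bool empty() const { return items.empty(); }
    void push(const T& item, int priority) {
        items.push_back(std::make_pair(item, priority));
    }
    void update(const T& item, int priority) {    // change an item's priority, or insert it if absent
        for (auto& p : items)
            if (p.first == item) { p.second = priority; return; }
        push(item, priority);
    }
    T pop() {                                     // remove and return the minimum-priority item
        auto best = items.begin();
        for (auto it = items.begin(); it != items.end(); ++it)
            if (it->second < best->second) best = it;
        T result = best->first;
        items.erase(best);
        return result;
    }
};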
The Graph
One of the inputs to our algorithm is, of course, the graph. I went over the implementation details of using the Standard Template Library for representing graphs in a previous article. I've expanded
upon that implementation here. We're using strings for vertex labels, we're also keeping a list of all vertex names alongside our edge list, and lastly, I'm using a struct for representing edges
instead of std::pair in the adjacency list. All of these changes lend themselves to both cleaner-looking code and nicer-looking graph and algorithm output. Let's take a look at the improved graph class:
#include <iostream>
#include <list>
#include <map>
#include <string>
#include <algorithm>   // std::reverse, used by showPath() further down
#include <climits>     // INT_MAX, used by dijkstraShortestPath() further down
using namespace std;

class Graph {
    public:
        typedef struct _edge {
            string vertex;
            int weight;
            _edge(string v, int w) : vertex(v), weight(w) { }
        } edge;

        string graphName;
        list<string> verts;
        map<string, list<edge> > adjList;

        void addEdge(string v, string u, int w);
        void showAdjList();
        bool findVert(string v);
        Graph(string N) : graphName(N) { }
};

bool Graph::findVert(string v)
{
    for (auto vert : verts)
        if (vert == v)
            return true;
    return false;
}

void Graph::addEdge(string v, string u, int w)
{
    if (!findVert(v))
        verts.push_back(v);              // record each vertex label the first time it appears
    if (!findVert(u))
        verts.push_back(u);
    adjList[v].push_back(edge(u, w));    // undirected graph: store the edge in both adjacency lists
    adjList[u].push_back(edge(v, w));
}

void Graph::showAdjList()
{
    for (auto vert : verts)
    {
        cout<<vert<<": ";
        for (auto adj : adjList[vert])
            cout<<adj.vertex<<", ";
        cout<<endl;
    }
}
Overall, not too much different from the implementation covered in my other graphing articles. findVert() is a simple sequential search of the vertex list, a feature I still think should be included
in the STL list API.
Dijkstra's Algorithm
With our priority queue and graph in place, we're ready to tackle the implementation of Dijkstra's Algorithm. One of the features of Dijkstra's Algorithm that sets it apart from both Depth First and
Breadth First Search is that in this case we DO allow multiple visits to a vertex that we've already seen; in fact these multiple visits are the crux of how this algorithm works. If we find that
going through a vertex to a vertex we've already been to lowers the cost of travel to that vertex thus far, then we have essentially found a "short cut" on our shortest path and thus will want to
incorporate it in said path. If we didn't allow visits to vertices we've already visited, we wouldn't be able to do this. So instead of using an std::map<vertex, bool> for tracking visitation, we will
use an std::map<vertex, int> for tracking distance. Initially, we are going to set the distance for each vertex to an arbitrarily high value, in our case INT_MAX. Aside from these changes, things
progress very similarly to a normal Breadth First Search:
void dijkstraShortestPath(Graph& G, string v, string u)
{
    bool found = false;
    map<string, int> dist;        //Distance so far for each vertex
    map<string, string> camefrom; //previous vertex
    pq<string> s;                 //priority queue
    for (auto l : G.verts)
        dist[l] = INT_MAX;
    string current = v;
    s.push(current, 0);
    dist[current] = 0;
    camefrom[current] = current;
    while (!s.empty())
    {
        current = s.pop();
        cout<<"Current: "<<current<<endl;
        if (current == u)
        {
            found = true;
            break;               // early exit: we've reached the goal vertex
        }
        for (auto child : G.adjList[current])
        {
            if (dist[child.vertex] > dist[current] + child.weight)
            {
                dist[child.vertex] = dist[current] + child.weight;
                s.update(child.vertex, dist[child.vertex]);
                camefrom[child.vertex] = current;
            }
        }
    }
    cout<<"Search Over.\n";
    if (found)
    {
        cout<<"A Path Exists...\n";
        showPath(camefrom, dist, v, u);
    }
}
We also need a function for recreating the path taken:
void showPath(std::map<string, string> camefrom, std::map<string, int> dist, string v, string u)
{
    list<string> path;
    string crawl = u;
    path.push_back(crawl + "(" + to_string(dist[crawl]) + " hours)");
    while (crawl != v)
    {
        crawl = camefrom[crawl];
        path.push_back(crawl + "(" + to_string(dist[crawl]) + " hours)");
    }
    reverse(path.begin(), path.end());
    cout<<"Travel Time: "<<dist[u]<<" hours."<<endl;
    cout<<"Path: \n";
    for (auto p : path)
        cout<<p<<endl;
}
Now it's time to test our work. I decided to use train travel between U.S. cities to see if our algorithm would give us real-world results. Let's see how we did:
Driver program to build input graph:
int main(int argc, char *argv[])
{
    Graph G("United States Inter-City Passenger Rail Road");
    G.addEdge("New York", "Philidelphia", 2);
    G.addEdge("New York", "Boston", 2);
    G.addEdge("New York", "Washington D.C.", 4);
    G.addEdge("Philidelphia", "Washington D.C.", 2);
    G.addEdge("Washington D.C.", "Chicago", 18);
    G.addEdge("Washington D.C.", "Atlanta", 8);
    G.addEdge("Chicago", "Atlanta", 15);
    G.addEdge("Chicago", "Denver", 13);
    G.addEdge("Chicago", "New Orleans", 16);
    G.addEdge("New Orleans", "Houston", 11);
    G.addEdge("Denver", "Los Angeles", 20);
    G.addEdge("Houston", "Los Angeles", 22);
    G.addEdge("Los Angeles", "San Francisco", 8);
    G.addEdge("San Francisco", "Denver", 19);
    cout<<"\nDijkstras Algorithm: \n";
    dijkstraShortestPath(G, "New York", "New Orleans");
    return 0;
}
Dijkstras Algorithm:
Current: New York
Current: Philidelphia
Current: Washington D.C.
Current: Atlanta
Current: Chicago
Current: Denver
Current: New Orleans
Search Over.
A Path Exists...
Travel Time: 38 hours.
New York(0 hours)
Washington D.C.(4 hours)
Chicago(22 hours)
New Orleans(38 hours)
As you can see from the output of the algorithm, two things are clear: it gives us a very realistic/accurate route and travel time, and, importantly, the route given makes sense. The algorithm
considered routing our train from D.C. to Atlanta, but decided that D.C. to Chicago was the better choice. Why? Because D.C. -> Chicago -> New Orleans was faster than D.C. -> Atlanta -> New Orleans!
Nicely done.
In this implementation we included a test for determining if we've reached our goal city or not. This is called the "Early Exit" technique. This test can be removed to yield what's called the Single
Source All Paths variant.
Dijkstra's Algorithm is a powerful and useful algorithm, and it's another tool we can now add to our tool box.
The full code for the examples shown at this website can be found on my github:
For a full implementation of both BFS and Dijkstras SPA in C: | {"url":"http://maxgcoding.com/dijkstra","timestamp":"2024-11-03T05:38:17Z","content_type":"text/html","content_length":"18987","record_id":"<urn:uuid:b2be22cd-6452-46e2-a3e4-c0ae9e258ccc>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00308.warc.gz"} |
Problem Solving with Quadratic Equation MCQ [PDF] Quiz Questions Answers | Problem Solving with Quadratic Equation MCQs App Download & e-Book
Class 7 Math Online Tests
Problem Solving with Quadratic Equation MCQ (Multiple Choice Questions) PDF Download
The Problem solving with quadratic equation Multiple Choice Questions (MCQ Quiz) with Answers PDF (Problem Solving with Quadratic Equation MCQ PDF e-Book) download to practice Grade 7 Math Tests.
Study Expansion and Factorization of Algebraic Expressions Multiple Choice Questions and Answers (MCQs), Problem Solving with Quadratic Equation quiz answers PDF to learn online certificate courses.
The Problem Solving with Quadratic Equation MCQ App Download: Free learning app for solving quadratic equation by factorization, factorization of quadratic expression, factorization using algebraic
identities, expansion of algebraic expression test prep for online learning.
The MCQ: The whole number such that twice of its square added to itself gives 10; "Problem Solving with Quadratic Equation" App Download (Free) with answers: 5; 2; 8; 10; to learn online certificate
courses. Solve Expansion and Factorization of Algebraic Expressions Quiz Questions, download Apple eBook (Free Sample) for distance learning.
Problem Solving with Quadratic Equation MCQ (PDF) Questions Answers Download
MCQ 1:
The two positive numbers that differ by 5, and the square of whose sum is 169, are
1. 2,4
2. 5,6
3. 4,9
4. 3,7
MCQ 2:
The whole number such that twice its square added to itself gives 10 is
1. 5
2. 2
3. 8
4. 10
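As an illustration (this worked solution is added here and is not part of the original quiz): let the whole number be x. "Twice its square added to itself gives 10" means 2x² + x = 10, i.e. 2x² + x − 10 = 0, which factors as (2x + 5)(x − 2) = 0. The only whole-number root is x = 2, so option 2 is the answer.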
MCQ 3:
The perimeter of a rectangle is 20 cm and its area is 24 cm²; the length and breadth are
1. 6cm and 4cm
2. 8cm and 6cm
3. 12cm and 10cm
4. 7cm and 5cm
Class 7 Math Practice Tests
Problem Solving with Quadratic Equation Learning App: Free Download Android & iOS
The App: Problem Solving with Quadratic Equation MCQs App to learn Problem Solving with Quadratic Equation Textbook, 7th Grade Math MCQ App, and 8th Grade Math MCQ App. The "Problem Solving with
Quadratic Equation" App to free download iOS & Android Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with | {"url":"https://mcqlearn.com/math/g7/problem-solving-with-quadratic-equation-mcqs.php","timestamp":"2024-11-03T03:17:46Z","content_type":"text/html","content_length":"70678","record_id":"<urn:uuid:344759a4-c901-493c-b84c-beab87ec2633>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00105.warc.gz"} |
Conditionally replace one
PIC Microcontroller Program Flow Method: Conditionally replace one value with another
See also:
• compcon.htm for information about comparisons and conditionals in theory.
• math/radix/ for replacing a nybble in W (or in some register) with its ASCII value, or vice versa.
[FIXME: rather than having many copies of each routine, one for each register that you need to adjust, point FSR at the register and use a single routine that uses FSR fsr.htm ]
If w='a' then replace it with 'x', if it is something else, leave it alone.
Nikolai Golovchenko says It can be done like this:
xorlw 'a' ;compare to 'a', w' = w ^ 'a'
btfss STATUS, Z
movlw 'x'^'a' ;if not equal, w = 'x' ^ 'a'
xorlw 'a' ;if w was equal to 'a', restore it
;if w was unequal to 'a',
; w = 'x' ^ 'a' ^ 'a'= 'x'
From http://www.myke.com/basic.htm
Here's a compare and swap
movf X, w
subwf Y, w ; Is Y >= X?
btfsc STATUS, C ; If Carry Set, Yes
goto $ + 2 ; Don't Swap
addwf X, f ; Else, X = X + (Y - X)
subwf Y, f ; Y = Y - (Y - X)
This can be used to, for example, Convert ASCII to Upper Case
Code to find the minimum or maximum value of 2 or more values.
Code to force a variable to the maximum value if it exceeds that value.
Also known as ``saturating arithmetic''. Very important in audio filters.
"clipping" or "limiting": If the value _x is "too big", clip it to the maximum value. If the value _x is "too small", clip it to the minimum value. Otherwise leave _x alone.
min_limit equ 5
max_limit equ H'F1'
; clip _x to make sure it doesn't exceed the limits.
; unsigned_max8:
; _x_new := max( _x_initial, min_limit )
; from Anders Jansson
movlw min_limit
subwf _x, W
skpc ; btfss STATUS, C
subwf _x, F
; unsigned_min8:
; _x_new := min( _x_initial, max_limit )
; from Anders Jansson
movlw max_limit
subwf _x, W
skpnc ; btfsc STATUS, C
subwf _x, F
; now we can be sure that (min_limit <= x) and (x <= max_limit).
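For reference, the same clipping behaviour written out in C/C++ (an added illustration only — the original page is all PIC assembly, and the names below are just placeholders):

#include <cstdint>

// Clamp an unsigned 8-bit value x into the range [min_limit, max_limit].
std::uint8_t clamp_u8(std::uint8_t x, std::uint8_t min_limit, std::uint8_t max_limit)
{
    if (x < min_limit) x = min_limit;   // too small: clip up to the minimum
    if (x > max_limit) x = max_limit;   // too big: clip down to the maximum
    return x;
}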
More routines:
; WARNING: untested code. Does this really work ?
; Change register R0 to maximum of (R0, limit), where limit is passed in w.
; Works when both registers are 8 bit unsigned.
; (What about when both are signed ?)
; results: R0new = max(R0initial, w);
; w is unchanged.
; by David Cary
subwf R0,f
btfsc R0,7
clrf R0
addwf R0
; R0new = min(R0initial, w);
; w is unchanged
subwf R0,f
btfss R0,7
clrf R0
addwf R0
; Example: Force R0 to stay in the range 8..0x19
movlw 8
call unsigned_max8
movlw 0x19
call unsigned_min8
; Change signed 8 bit register R0 to maximum of (R0, 0).
btfsc R0,7 ; skip if positive
clrf R0
16-bit saturating arithmetic
This code adds a signed 8 bit value "reading" to a signed 16 bit running sum "total" (useful for the "I" part of PID control). If the result overflows 16 bits, it properly saturates to max_int or min_int.
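A high-level C++ sketch of the same saturating add (added for illustration; note that, per its own comments, the assembly below saturates the negative side to 0x8001 rather than 0x8000):

#include <cstdint>

std::int16_t saturating_add(std::int16_t total, std::int8_t reading)
{
    std::int32_t sum = static_cast<std::int32_t>(total) + reading;  // widen so the sum cannot overflow
    if (sum > INT16_MAX) return INT16_MAX;   // clamp to 0x7FFF
    if (sum < INT16_MIN) return INT16_MIN;   // clamp to 0x8000
    return static_cast<std::int16_t>(sum);
}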
; code by jspaarg 2005-05-25
; as posted to http://forum.microchip.com/tm.asp?m=108810&mpage=2
; minor changes by David Cary
; warning: untested code.
reading res 1
total_L res 1
temp res 1
total_H res 1
; sign-extend reading into temp
clrf temp
btfss reading, 7
decf temp,f
; now "temp" holds 0xFF if reading was negative, 0 if reading was positive
; semi-normal 16 bit sum
; http://techref.massmind.org/techref/microchip/math/add/index.htm
movf reading,w
addwf total_L
incf temp,f
; now temp = 0 for no change, +1 to increment, or 0xFF to decrement.
movf temp,w
addwf total_H
; if you are sure that total_H can never overflow,
; then we are already done here.
; Check for overflow
; If you sure that you can never exceed limits, then you don't need to check.
; (special shortcut that only works because we are adding 8 bits to 16 bits --
; -- won't work for adding 16 bits to 16 bits)
; only 2 ways to overflow:
; (a) total_H was already the positive number 0x7F, and we incremented it
; with a positive number (temp = +1), resulting in 0x80 and C=0. --> need to saturate to total = 0x7FFF.
; However, if total_H was already the negative number 0x80, and we "added" reading=0,
; we get the same result: 0x80 and C=0; we *don't* want to saturate.
; (b) total_H was already the negative number 0x80, and we "decremented" it
; with a negative number (temp = 0xFF), resulting in 0x7F and C=1. --> need to saturate to total = 0x8000 (or total = 0x8001 would probably be OK).
; However, if total_H was already the positive number 0x7F (total = 0x7F01), and we "added" reading = -1,
; we get the same result: 0x7F and C=1; we *don't* want to saturate.
; Check for overflow.
; If you have an overflow,
; force the total to the appropriate value
; (0x7FFF for max pos, 0x8000 for max neg).
; An overflow happened if
; (a) temp now equals +1, and the result total_H = 0x80, or
; (b) temp now equals -1 (0xFF), and the result total_H = 0x7F.
; check_for_underflow:
; if ( (total_H == 0x7F) and (temp == -1) ) then ForceNeg;
movlw 0x7F
xorwf total_H,W
goto check_for_overflow
movlw H'FF'
xorwf temp,W
; ForceNeg:
movlw 0x80
movwf total_H
movlw 0x01
movwf total_L
; if ( (total_H == 0x80) and (temp == +1) ) then ForcePos;
movlw 0x80
xorwf total_H,W
movlw 1
xorwf temp,W
; ForcePos:
movlw 0x7f
movwf total_H
movlw 0xff
movwf total_L
; WARNING: untested code. Does this really work ?
; Take absolute value of signed 8 bit register R0:
; R0 = abs(R0);
btfsc R0,7
decf R0
btfsc R0,7
comf R0
; Bug: when R0 is the "wierd" value, 0x80, -128,
; this function returns +127, (the most positive
; value that can be represented as a signed
; 8 bit value) not +128 (since that cannot
; be represented as a signed 8 bit value).
; warning: untested
; Obsolete ?
; from Peter Peres 2001-04-14
; Fmax = max(Fmax, INDF); // update running maximum so far
movf INDF,w
subwf Fmax,w ;; Fmax - INDF < 0 ? C = 0
movf INDF,w
btfss STATUS,C
movwf Fmax
; warning: untested
; Obsolete ?
; from Peter Peres 2001-04-14
; Fmin = min(Fmin, INDF)
; // update running maximum so far
movf Fmin,w
subwf INDF,w ;
; INDF - Fmin < 0 ? C = 0
movf INDF,w
btfss STATUS,C
movwf Fmin
; warning: untested
; from Andy Warren 2001-04-14
; Fmax = max(Fmax, INDF)
; // update running maximum so far
MOVF Fmax,W ; W = INDF - Fmax.
SUBWF INDF,W ; C = ( Fmax <= INDF ).
SKPNC ;IF Fmax <= INDF,
ADDWF Fmax,f ; then Fmax := (Fmax + INDF - Fmax) = INDF.
; warning: untested
; from Andy Warren 2001-04-14
; Fmin = min(Fmin, INDF)
; // update running maximum so far
MOVF Fmin,W ; W = INDF - Fmin.
SUBWF INDF,W ; C = ( Fmin <= INDF ).
SKPC ;IF INDF < Fmin,
ADDWF Fmin,f ; then Fmin := (Fmin + INDF - Fmin) = INDF.
Keywords: (if anyone looks for this code using a search engine) compare comparing 16-bit signed integer integers DS1820 DS18B20 DALLAS thermometer
I wrote this code after spending several hours on the net looking for a readymade example. I wanted to build a thermometer based on the Dallas DS1820, and have a minimum-maximum reading. This
requires a signed-16-bit (2's complement) subtraction. Finally I got this (not so optimized):
;Compare 16-bit signed integer (2's complement) to minimum & maximum.
;First byte is LSB (as in the DS1820 thermometer), second byte is MSB
;compare to MINIMUM:
btfsc zeroMinMaxFlag
goto setMin
movf temperature,w
subwf min,w ;subtract LSB (low byte)
movf temperature+1,w
btfss STATUS,C
addlw 1
subwf min+1,w ;subtract MSB (hi byte)
andlw b'10000000' ;test result's sign bit
btfss STATUS,Z
goto skipSetMin
movf temperature,w
movwf min
movf temperature+1,w
movwf min+1
;compare to MAXIMUM:
btfsc zeroMinMaxFlag
goto setMax
movf temperature,w
subwf max,w ;subtract LSB (low byte)
movf temperature+1,w
btfss STATUS,C
addlw 1
subwf max+1,w ;subtract MSB (hi byte)
andlw b'10000000' ;test result's sign bit
btfsc STATUS,Z
goto skipSetMax
movf temperature,w
movwf max
movf temperature+1,w
movwf max+1
James Newton replies: Thank you!
Thank you, whoever you are, this looks useful. (Did you mean to post this anonymously, or do you want us to name you in the credits?) -- DavidCary
See also: PIC Microcontroller Comparison Math Methods
• Thanks!
I was looking for exactly this code, building my Min/Max thermometer with DS18B20.
Attn spammers: All posts are reviewed before being made visible to anyone other than the poster. | {"url":"http://massmind.org/techref/microchip/condrepl.htm","timestamp":"2024-11-14T11:40:40Z","content_type":"text/html","content_length":"27080","record_id":"<urn:uuid:8dd71024-9d11-425e-9ad3-a9a6620a6e10>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00695.warc.gz"} |
American Mathematical Society
In this paper we prove some identities, conjectured by Lewis, for the rank and crank of partitions concerning the moduli $4$ and $8$. These identities are similar to Dyson's identities for the rank
modulo $5$ and $7$ which give a combinatorial interpretation to Ramanujan's partition congruences. For this, we use multisection of series and some of the results that Watson established for the
third order mock theta functions.
References
G. E. Andrews and D. Hickerson, Ramanujan's "lost" notebook. VII: The sixth order mock theta functions, preprint.
F. J. Dyson, Some guesses in the theory of partitions, Eureka (Cambridge) 8 (1944), 10-15.
—, Combinatorial interpretations of Ramanujan's partition congruences, Ramanujan Revisited: Proc. of the Centenary Conference (Univ. of Illinois at Urbana-Champaign, June 1-5, 1987), Academic Press, San Diego, 1988.
—, The crank of partitions $\bmod 8$, $9$ and $10$, Trans. Amer. Math. Soc. 322 (1990), 803-821.
S. Ramanujan, Some properties of $p(n)$, the number of partitions of $n$, Paper 25 of Collected Papers of S. Ramanujan, Cambridge Univ. Press, London and New York, 1927; reprinted: Chelsea, New York, 1962.
N. Santa-Gadea, On the rank and the crank moduli $8$, $9$ and $12$, Ph.D. thesis, Penn State Univ., 1990.
Similar Articles
• Retrieve articles in Transactions of the American Mathematical Society with MSC: 11P83, 05A17
• Retrieve articles in all journals with MSC: 11P83, 05A17
Bibliographic Information
• © Copyright 1994 American Mathematical Society
• Journal: Trans. Amer. Math. Soc. 341 (1994), 449-465
• MSC: Primary 11P83; Secondary 05A17
• DOI: https://doi.org/10.1090/S0002-9947-1994-1136545-0
• MathSciNet review: 1136545 | {"url":"https://www.ams.org/journals/tran/1994-341-01/S0002-9947-1994-1136545-0/?active=current","timestamp":"2024-11-04T14:45:19Z","content_type":"text/html","content_length":"65474","record_id":"<urn:uuid:8d767ca3-2fe8-4f55-bb66-a812355aa886>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00575.warc.gz"} |
Understanding The Equation Of A Line: Y=mx+b
Understanding the Equation of a Line: y=mx+b
When it comes to understanding linear equations, the formula y=mx+b is fundamental. It's the bread and butter of algebra, providing a straightforward way to describe a straight line. This
equation might look simple at first glance, but it carries a lot of information about the relationship between two variables. In this article, we'll break down the components of this equation,
explore how it's used, and dive into why it's so important in mathematics and beyond.
The Basics: What Do y, m, x, and b Stand For?
Understanding ‘y’ and ‘x’
In the equation y=mx+b, ‘y’ and ‘x’ are variables. These variables represent the coordinates of any point on the line. Specifically, ‘x’ is the independent variable, which means you can choose any
value for ‘x,’ and ‘y’ is the dependent variable, meaning its value depends on what you choose for ‘x.’
For example, if you set x to 1, you can use the equation to find the corresponding value of y. This relationship between x and y is the essence of what makes this equation powerful – it allows you to
predict one value based on the other.
The Slope ‘m’
The ‘m’ in the equation represents the slope of the line. The slope tells us how steep the line is and the direction it goes. A positive slope means the line is rising as it moves from left to right,
while a negative slope means the line is falling. The slope is calculated as the ratio of the rise (the change in y) over the run (the change in x).
For instance, if you have a slope of 2, it means for every unit increase in x, y increases by 2 units. Understanding the slope is crucial because it gives insight into the rate of change between the two variables.
The Y-Intercept ‘b’
The ‘b’ in the equation is the y-intercept. This is the point where the line crosses the y-axis. When x is zero, y equals b. The y-intercept provides a starting point for the line on the graph. It
tells you the value of y when x is zero, which can be particularly useful in various applications, from economics to physics.
For example, if the y-intercept is 5, this means that when x is zero, y is 5. This starting point helps to anchor the line on the graph, making it easier to plot and understand.
Plotting the Line: From Equation to Graph
Setting Up the Graph
To plot the line represented by y=mx+b, you first need to set up a graph with an x-axis and a y-axis. Label each axis and decide on a scale. It’s essential to choose a scale that allows you to
accurately plot the points you’ll calculate from the equation.
Finding Points
The next step is to find points on the line. Start by choosing a value for x and use the equation to find the corresponding y value. Plot this point on the graph. Repeat this process with different
values of x to get a few points. Typically, you’ll want at least two points, but more points can help ensure accuracy.
For instance, if your equation is y = 2x + 3, you can start with x = 0. Plugging this into the equation gives y = 3. So, one point is (0, 3). Next, try x = 1. Plugging this in gives y = 5, so another
point is (1, 5). Continue this process to get a few points, then draw a line through them.
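To make the process concrete, here is a small snippet (an added illustration, not from the original article) that tabulates a few points on the example line y = 2x + 3:

#include <iostream>

int main() {
    double m = 2.0, b = 3.0;                           // slope and y-intercept of y = 2x + 3
    for (int x = 0; x <= 4; ++x) {
        double y = m * x + b;                          // the line equation y = mx + b
        std::cout << "(" << x << ", " << y << ")\n";   // prints (0, 3), (1, 5), (2, 7), ...
    }
    return 0;
}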
Drawing the Line
Once you have your points plotted, draw a line through them. Make sure the line extends across the entire graph to show the relationship between x and y clearly. This line represents all the
solutions to the equation y=mx+b. Every point on this line is a solution to the equation, meaning if you plug in the x and y coordinates of any point on the line into the equation, it will hold true.
Applications of y=mx+b in Real Life
Economics and Business
In economics and business, the equation y=mx+b is used to model relationships between different variables. For example, in a simple linear cost model, y could represent the total cost, x could be the
number of units produced, m could be the variable cost per unit, and b could be the fixed cost.
This model helps businesses predict their total costs based on the number of units they plan to produce. By understanding the slope and y-intercept, businesses can make informed decisions about
production levels, pricing, and profitability.
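For example (with purely illustrative numbers), if the fixed cost is b = $500 and the variable cost is m = $3 per unit, then producing x = 200 units gives a predicted total cost of y = 3(200) + 500 = $1,100.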
Physics and Engineering
In physics and engineering, this equation is used to describe motion and forces. For instance, in kinematics, the position of an object moving at a constant velocity can be described by the equation
y=mx+b, where y is the position, m is the velocity, x is time, and b is the initial position.
This simple linear model is a powerful tool for predicting future positions of moving objects. It’s fundamental in designing systems and understanding physical phenomena.
Everyday Life
Even in everyday life, you can see the principles of y=mx+b in action. For example, consider the relationship between time spent studying and test scores. If there’s a linear relationship, you could
use this equation to predict test scores based on study time.
This relationship helps students understand the importance of their study habits and how they directly impact their performance. By recognizing and applying this equation, students can better manage
their time and improve their results.
Common Misconceptions and Challenges
Misunderstanding the Slope
One common misconception is misunderstanding what the slope represents. Some might think a slope of 2 means you go up 2 units for every 2 units you go over, but actually, it means you go up 2 units
for every 1 unit you go over. Understanding the correct interpretation of the slope is crucial for accurately using the equation.
Confusing the Y-Intercept
Another challenge is confusing the y-intercept. It’s easy to mix up the y-intercept with other points on the graph. Remember, the y-intercept is specifically where the line crosses the y-axis, which
happens when x is zero. Keeping this in mind helps in correctly plotting and interpreting the graph.
Graphing Mistakes
Graphing errors are also common, especially if the scale is not chosen properly or points are plotted inaccurately. Ensuring that your graph is correctly scaled and your points are accurately plotted
is essential for a clear and correct representation of the equation.
Conclusion: The Power of y=mx+b
The equation y=mx+b is more than just a formula; it’s a powerful tool that helps us understand and predict relationships between variables. Whether in business, science, or everyday life, this
equation provides a clear and straightforward way to model and analyze linear relationships.
By breaking down the components of the equation, understanding how to plot it on a graph, and recognizing its real-life applications, we gain valuable insights into the world around us.
Despite its simplicity, y=mx+b opens the door to deeper mathematical concepts and practical problem-solving skills that are essential in various fields.
You Ma Also Read | {"url":"https://hownewsinsider.com/ymxb/","timestamp":"2024-11-10T15:46:09Z","content_type":"text/html","content_length":"241578","record_id":"<urn:uuid:ce179ac4-8c91-4428-bf89-c9b550970ab3>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00252.warc.gz"} |
Regression Techniques
Regression Techniques By Their Machine Learning Families
Several Machine Learning (ML) algorithms and families are out there, and it's really interesting to know which algorithms come under each ML family. Having knowledge of the various techniques
can also speed up your modelling process.
In this post, I tried to classify all possible regression techniques according to their ML families. The following chart contains the name of each regression technique with its R package and reference.
Kindly follow my blog and stay tuned for more advanced post on machine learning modelling. Thank you!
An extensive experimental survey of regression methods. Neural Networks.
6 comments:
1. Useful cheat sheet on ML regression methods.
Thank You.
2. I was reading some of your content on this website and I conceive this internet site is really informative ! Keep on putting up. Admond Lee
1. Thank you!
3. I found your this post while searching for some related information on blog search...Its a good post..keep posting and update the information.
Ciencia de Datos
4. I learn some new stuff from it too, thanks for sharing your information.Bomber Jackets
5. I recently came across your blog and have been reading along. I thought I would leave my first comment. I don’t know what to say except that I have enjoyed reading. gratis | {"url":"https://manisha-sirsat.blogspot.com/2019/04/regression-techniques.html","timestamp":"2024-11-10T11:54:44Z","content_type":"text/html","content_length":"84537","record_id":"<urn:uuid:c8e1480e-071b-460a-8687-c7f15792ff70>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00316.warc.gz"} |
reversible computation
A (perhaps) surprising fact is that the concepts of entropy as used in thermodynamics and information theory are connected. If you want to store information in a system, it must be in an orderly
state (with low entropy), or the patterns you are imprinting in it will be lost in thermal noise.
If you want to lower the entropy of a system (delete information from it), you must dissipate heat to the outside, or the second law of thermodynamics would be violated. More precisely, for each bit
you delete, at least kT ln 2 joules must be dissipated, where k is Boltzmann's constant and T is the temperature.
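For a sense of scale (this worked number is an added illustration): at room temperature, T ≈ 300 K, the minimum dissipation is kT ln 2 ≈ (1.38 × 10⁻²³ J/K)(300 K)(0.693) ≈ 2.9 × 10⁻²¹ joules per erased bit — many orders of magnitude below what real hardware dissipates per operation.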
Normal computers delete one bit of information for each logic operation they carry out. That does not matter much, because they give off much more heat than the thermodynamic limit anyway. However,
as computers get more efficient (perhaps using nanotechnology), it could become a serious problem.
That has created interest in computing designs that do not delete their input, but only permute it in some way, so that the answer can be read off. Such a computer could, in principle, use
arbitrarily little energy to perform a calculation. That could have consequences for cryptography, because thermodynamical arguments are often made to demonstrate, for example, that keys of a certain
length cannot be broken by brute force | {"url":"https://everything2.com/title/reversible+computation","timestamp":"2024-11-13T19:45:55Z","content_type":"text/html","content_length":"32124","record_id":"<urn:uuid:9d013c99-14a6-4eb5-888a-0b1ecfc3b19e>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00377.warc.gz"} |
Re: [Numpy-discussion] numpy.mean still broken for large float32arrays
24 Jul 2014 24 Jul '14
8:42 p.m.
Inaccurate and utterly wrong are subjective. If You want To Be sufficiently strict, floating point calculations are almost always 'utterly wrong'. Granted, It would Be Nice if the docs specified the
algorithm used. But numpy does not produce anything different than what a standard c loop or c++ std lib func would. This isn't a bug report, but rather a feature request. That said, support for
fancy reduction algorithms would certainly be nice, if implementing it in numpy in a coherent manner is feasible. -----Original Message----- From: "Joseph Martinot-Lagarde"
<joseph.martinot-lagarde@m4x.org> Sent: 24-7-2014 20:04 To: "numpy-discussion@scipy.org" <numpy-discussion@scipy.org> Subject: Re: [Numpy-discussion] numpy.mean still broken for large
float32arrays Le 24/07/2014 12:55, Thomas Unterthiner a écrit :
I don't agree. The problem is that I expect `mean` to do something reasonable. The documentation mentions that the results can be "inaccurate", which is a huge understatement: the results can be
utterly wrong. That is not reasonable. At the very least, a warning should be issued in cases where the dtype might not be appropriate.
Maybe the problem is the documentation, then. If this is a common error, it could be explicitly documented in the function documentation. _______________________________________________
NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion | {"url":"https://mail.python.org/archives/list/numpy-discussion@python.org/message/6KTBHHTEYO4N6VM5UJHTS72QMGDXGTLZ/","timestamp":"2024-11-13T17:30:47Z","content_type":"text/html","content_length":"14341","record_id":"<urn:uuid:c442a344-4ed4-4289-8a8c-b932d49dcb2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00222.warc.gz"} |
Plotting 2D Graphs | R Programming | Bottom Science
Plotting 2D Graphs | R Programming
Previous > Introduction to Plotting & Applications | R Programming
Now let’s have a look at some 2-Dimensional plots:
As mentioned earlier, these plots are made with the help of R software.
1. BARPLOT
A bar plot, also known as a bar chart, shows bars of different values and hence of different heights to depict the relationship between a numerical variable and some sort of categorical data.
Syntax of Bar plot in R:
2. BOXPLOT
A box plot is a simple way of representing statistical data in which a rectangle is drawn to represent the second and third quartiles, usually with a vertical line inside to indicate the median. The
lower and upper quartiles are shown as horizontal lines on either side of the rectangle.
Syntax of Boxplot in R:
3. SCATTERPLOT
This plot is used to visualize the relationship between two continuous variables. Since the data points appear scattered across the graph, it is called a scatter plot. It uses cartesian
coordinates to display the values.
Syntax of Scatterplot in R:
ggplot(titanic, aes(x = age, y = fare)) +
  geom_point()
4. 2D PIE CHART
It is used to represent values in the form of slices of a circle in different colours. We label the slices and represent those numbers in the chart.
2D PIE CHART
Syntax of 2D Pie chart in R:
fig <- plot_ly(titanic, labels = ~names, values = ~fare, type = 'pie')
5. 2D DENSITY PLOT
A related visualization to the histogram is a density plot. A density plot is a smoothed version of the histogram. It uses a kernel density estimate to point out the probability density function of
the variable.
Syntax of Density Plot in R:
ggplot(data, aes(x=x, y=y))+
stat_density_2d(aes(fill = ..level..),geom=”polygon”) | {"url":"https://www.bottomscience.com/plotting-2d-graphs-r-programming/","timestamp":"2024-11-13T18:18:14Z","content_type":"text/html","content_length":"85783","record_id":"<urn:uuid:60502f15-5dba-4dab-b061-2be3e4b9baf0>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00838.warc.gz"} |
Price-to-Book (PB) Ratio: Meaning, Formula, and Example.
Price-to-Book Ratio: Meaning and Formula
What is book value of equity?
The book value of equity is the portion of a company's assets that is owned by shareholders. This figure is calculated by subtracting the company's liabilities from its total assets. The book value of equity can be used to measure the financial health of a company, as well as its potential value to shareholders.
How do you calculate book value and market value?
Book value is calculated by subtracting the total liabilities from the total assets of a company. This will give you the book value of the company. To calculate the market value, you will need to take the book value and add or subtract the market value of the assets and liabilities.
How is book ratio calculated?
The book ratio is calculated by dividing the book value of a company's assets by the book value of its liabilities. The book value of assets is the accounting value of a company's assets, which is the original cost of the assets minus any depreciation that has been incurred. The book value of liabilities is the accounting value of a company's liabilities, which is the original amount of the liabilities minus any payments that have been made.
Why do banks price to book value?
Banks use the book value of their assets as the starting point for pricing because it is the most accurate reflection of the true value of the bank's holdings. By contrast, the market value of assets can be affected by a number of factors that have nothing to do with the underlying value of the asset, such as market speculation or the general level of interest rates.
The book value of assets is also a better predictor of future cash flows than the market value, since it is based on the historical cost of the assets rather than current market conditions. This is
especially important for banks, since their business model is based on borrowing money at one interest rate and lending it out at a higher rate.
In summary, banks price their assets at book value because it is the most accurate reflection of the true value of the bank's holdings, and it is a better predictor of future cash flows than the
market value.
How do you calculate the PB ratio of a portfolio?
PB Ratio = Price of Portfolio / Book Value of Portfolio
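As an illustration (numbers invented for this example), a portfolio with a current market price of $120,000 and a book value of $100,000 has a PB ratio of 120,000 / 100,000 = 1.2.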
The price of the portfolio is the current market value of all the securities in the portfolio. The book value of the portfolio is the original value of all the securities in the portfolio, plus any
income that has been reinvested, minus any losses. | {"url":"https://www.infocomm.ky/price-to-book-pb-ratio-meaning-formula-and-example/","timestamp":"2024-11-11T21:20:56Z","content_type":"text/html","content_length":"40002","record_id":"<urn:uuid:29aa61d3-524b-4659-a311-00437349f546>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00761.warc.gz"} |
The power of multimedia: Combining point-to-point and multiaccess networks
In this paper we introduce a new network model called a multimedia network. It combines the point-to-point message passing network and the multiaccess channel. To benefit from the combination we
design algorithms which consist of two stages: a local stage which utilizes the parallelism of the point-to-point network and a global stage which utilizes the broadcast capability of the multiaccess
channel. As a reasonable approach, one wishes to balance the complexities of the two stages by obtaining an efficient partition of the network into O(√n) connected components each of radius O(√n). To
this end we present efficient deterministic and randomized partitioning algorithms. The deterministic algorithm runs in O(√n log^∗ n) time and O(m + n log n log^∗ n) messages, where n and m are the
number of nodes and number of point-to-point links in the network. The randomized algorithm runs in the same time, but sends only O(m+ n log^∗ n) messages. The partitioning algorithms are then used
to obtain: (1) O(√n log n log^∗ n) time deterministic and O(√n log^∗ n) time randomized algorithms for computing global sensitive functions, and (2) O(√n log n) time deterministic algorithm for
computing a minimum spanning tree. Ω(n) time lower bounds for computing global sensitive functions in both point-to-point and multiaccess networks are given, thus showing that the multimedia
network is more powerful than both its separate components. Furthermore, we prove an Ω(√n) time lower bound for multimedia networks, thus leaving a small gap between our upper and lower bounds.
Publication series
Name Proceedings of the Annual ACM Symposium on Principles of Distributed Computing
Volume Part F130192
Conference 7th Annual ACM Symposium on Principles of Distributed Computing, PODC 1988
Country/Territory Canada
City Toronto
Period 15/08/88 → 17/08/88
Funders Funder number
Applied Mathematical Sciences
IBM Research Division
U.S. Department of Energy DE-AC02 76ERO3077
Office of Energy Research and Development
Dive into the research topics of 'The power of multimedia: Combining point-to-point and multiaccess networks'. Together they form a unique fingerprint. | {"url":"https://cris.tau.ac.il/en/publications/the-power-of-multimedia-combining-point-to-point-and-multiaccess--2","timestamp":"2024-11-08T17:26:49Z","content_type":"text/html","content_length":"54855","record_id":"<urn:uuid:f6eea04b-c8f9-4dfe-be9a-5756dbcaee0e>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00623.warc.gz"} |
Radix Sort
" Radix sort is an integer sorting algorithm that sorts data with integer keys by grouping the keys by individual digits that share the same significant position and value (place value). Radix sort
uses counting sort as a subroutine to sort an array of numbers." - [brilliant.org]
Base Numbering Systems
The value of different positions in a number increases by a multiplier of 10 in increasing positions. This means that a digit ‘8’ in the rightmost place of a number is equal to the value 8, but that
same digit when shifted left one position (i.e., in 80) is equal to 10 * 8. If you shift it again one position you get 800, which is 10 * 10 * 8.
This is where it’s useful to incorporate the shorthand of exponential notation. It’s important to note that 100 is equal to 1. Each position corresponds to a different exponent of 10.
So why 10? It’s a consequence of how many digits are in our alphabet for numbering. Since we have 10 digits (0-9) we can count all the way up to 9 before we need to use a different position. This
system that we used is called base-10 because of that.
Sorting By Radix
So how does a radix sort use this base numbering system to sort integers? First, there are two different kinds of radix sort: most significant digit, or MSD, and least significant digit, or LSD.
Both radix sorts organize the input list into ten “buckets”, one for each digit. The numbers are placed into the buckets based on the MSD (left-most digit) or LSD (right-most digit). For example, the
number 2367 would be placed into the bucket “2” for MSD and into “7” for LSD.
This bucketing process is repeated over and over again until all digits in the longest number have been considered. The order within buckets for each iteration is preserved. For example, the numbers
23, 25 and 126 are placed in the “3”, “5”, and “6” buckets for an initial LSD bucketing. On the second iteration of the algorithm, they are all placed into the “2” bucket, but the order is preserved
as 23, 25, 126.
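Here is a compact sketch of an LSD radix sort along these lines (my illustration, not code from the original post); each pass buckets the numbers by one decimal digit and preserves the order within buckets:

#include <algorithm>
#include <array>
#include <vector>

void radix_sort_lsd(std::vector<unsigned>& a) {
    if (a.empty()) return;
    unsigned maxval = *std::max_element(a.begin(), a.end());
    for (unsigned long long place = 1; maxval / place > 0; place *= 10) {
        std::array<std::vector<unsigned>, 10> buckets;
        for (unsigned x : a)
            buckets[(x / place) % 10].push_back(x);    // digit at the current place value
        a.clear();
        for (const auto& b : buckets)                  // concatenate buckets 0..9, order preserved
            a.insert(a.end(), b.begin(), b.end());
    }
}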
Radix Sort Performance
Radix sort is not a comparison sort, which makes its performance a little difficult to compare to most other comparison-based sorts. Consider a list of length n. For each iteration of the algorithm, we are deciding which bucket to place
each of the n entries into.
How many iterations do we have? Remember that we continue iterating until we examine each digit. This means we need to iterate for how ever many digits we have. We’ll call this average number of
digits the word-size or w.
This means the complexity of radix sort is O(wn). Assuming the length of the list is much larger than the number of digits, we can consider w a constant factor and this can be reduced to O(n).
[Image credits to inductivestep.org]
For further actions, you may consider blocking this person and/or reporting abuse | {"url":"https://dev.to/wdiep10/radix-sort-11ec","timestamp":"2024-11-14T22:06:23Z","content_type":"text/html","content_length":"63315","record_id":"<urn:uuid:abe53814-0129-4d46-af28-018b3de44c37>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00428.warc.gz"} |
Permutation Groups
Schedule for: 16w5087 - Permutation Groups
Beginning on Sunday, November 13 and ending Friday November 18, 2016
All times in Banff, Alberta time, MST (UTC-7).
Sunday, November 13
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
Dinner ↓
17:30 - 19:30 A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))
Monday, November 14
Breakfast ↓
- Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
- Introduction and Welcome by BIRS Station Manager (TCPL 201)
Pierre-Emmanuel Caprace: Boundary 2-transitive automorphism groups of trees ↓
09:00 - 09:50
This talk is concerned with locally compact groups admitting a continuous 2-transitive action on a compact space. Two fundamental classes of such are exhaustively understood: the finite 2-transitive groups and the 2-transitive Lie groups. The groups defined in the title form a third such class on which we shall focus. A basic feature is that all groups in that class have a simple socle. A key challenge is to determine which simple groups occur in this way. We will survey characterizations and classification results obtained recently in that direction.
(TCPL 201)
Gabriel Verret: Vertex-primitive graphs having vertices with almost equal neighbourhoods, and vertex-primitive graphs of valency 5 ↓
10:00 - 10:25
A graph is vertex-primitive if its automorphism group does not preserve any nontrivial partition of its vertex-set. It is an easy exercise to prove that (apart from some trivial exceptions) a vertex-primitive graph cannot have distinct vertices with equal neighbourhoods. I will discuss some results about vertex-primitive graphs having two vertices with "almost" equal neighbourhoods, and how these results were used to answer a question of Araújo and Cameron about synchronising permutation groups. These results were also the motivation for a recent classification of vertex-primitive graphs of valency 5. (Graphs of valency at most 4 had previously been classified.) I will describe this classification, some of the issues that arose in the proof, and the connection with the previous problem.
(TCPL 201)
- Coffee (TCPL 201)
Michael Aschbacher: The Palfy-Pudlak question for exceptional groups ↓
I'll discuss a theorem showing that no large member of a certain class of lattices is an overgroup lattice in an exceptional group of Lie type. The proof involves a question about pairs $(M_1,M_2)$ of maximal subgroups of a finite group $G$ such that the overgroup lattice in $G$ of $M_1\cap M_2$ is a 2-simplex.
(TCPL 201)
- Lunch (Vistas Dining Room)
Guided Tour of The Banff Centre ↓
- Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
Group Photo ↓
Meet in the foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
(TCPL Foyer)
- Coffee Break (TCPL Foyer)
Joanna Fawcett: Partial linear spaces with a primitive affine automorphism group of rank 3 ↓
17:00 - 17:30
A partial linear space consists of a non-empty set of points P and a collection of subsets of P called lines such that each pair of points lies on at most one line, and each line contains at least two points. A partial linear space is proper if it is not a linear space or a graph. In this talk, we will consider some recent progress on classifying the finite proper partial linear spaces with a primitive affine automorphism group of rank 3.
(TCPL 201)
David Craven: Lie-primitive subgroups of exceptional algebraic groups: Their classification so far ↓
I will give a brief overview of (as far as I know) the current state of the putative classification of all Lie-primitive subgroups of the exceptional algebraic groups, and its implications for the maximal subgroup structure of the finite (almost simple) exceptional groups of Lie type.
(TCPL 201)
Dinner ↓
- A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
Tuesday, November 15
- Breakfast (Vistas Dining Room)
Csaba Schneider: Permutation groups and cartesian decompositions ↓
09:00 - 09:50
Intransitive and imprimitive permutation groups preserve disjoint union decompositions and are routinely studied by considering their actions on invariant partitions. I would like to present a similar approach to the study of permutation groups that preserve cartesian product decompositions. Such groups occur naturally in the various versions of the O'Nan-Scott Theorem, and also in combinatorial applications, such as groups of automorphisms of Hamming graphs. Much of the theory I present is valid for arbitrary permutation groups. However, combining this theory with the classification of finite simple groups leads to surprisingly detailed descriptions of finite groups that act on cartesian products. The results I present were obtained in collaboration with Robert Baddeley and Cheryl Praeger.
(TCPL 201)
Phillip Wesolek: Commensurated subgroups of finitely generated branch groups ↓
09:55 - 10:30
We first recall a completion operation which takes as input a group with a commensurated subgroup and outputs a locally compact group. This operation allows one to study finitely generated groups via locally compact groups and vice versa. We apply this completion to study the compelling class of finitely generated branch groups. In particular, we show every commensurated subgroup of a just infinite finitely generated branch group is either finite or of finite index.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Zoé Chatzidakis: A new invariant for difference fields ↓
11:00 - 11:25
If $(K,f)$ is a difference field, and $a$ is a finite tuple in some difference field extending $K$, such that $f(a) \in K(a)^{alg}$, then we define $dd(a/K) = \lim [K(f^k(a),a):K(a)]^{1/k}$, the distant degree of $a$ over $K$. This is an invariant of the difference field extension $K(a)^{alg}/K$. We show that there is some $b$ in the difference field generated by $a$ over $K$, which is equi-algebraic with $a$ over $K$, and such that $dd(a/K)=[K(f(b),b):K(b)]$, i.e.: for every $k>0$, $f(b) \in K(b,f^k(b))$. Viewing $Aut(K(a)^{alg}/K)$ as a locally compact group, this result is connected to results of Willis on scales of automorphisms of locally compact totally disconnected groups. I will make explicit the correspondence between the two sets of results.
(TCPL 201)
Pham Tiep: Non-abelian anti-concentration inequalities ↓
11:30 - 12:15
In 1943, Littlewood and Offord proved the first anti-concentration result for sums of independent random variables. Their result has since then been strengthened and generalized by generations of researchers, with applications in several areas of mathematics. In this talk, we will discuss the first non-abelian analogue of the Littlewood-Offord result, a sharp anti-concentration inequality for products of independent random variables. This is joint work with Van H. Vu.
(TCPL 201)
- Lunch (Vistas Dining Room)
- Coffee Break (TCPL Foyer)
Alastair Litterick: Reductive Subgroups of Reductive Groups ↓
17:00 - 17:30
The subgroup structure of reductive groups has been intensively studied since at least the 1950s, when Dynkin classified maximal connected subgroups of complex reductive groups. One of the main outstanding problems in the theory is to classify so-called "non-completely-reducible" reductive subgroups. Recent joint work with Adam Thomas has achieved such a classification for subgroups of exceptional simple groups, when the characteristic is 'good'. When the characteristic is bad, the theory becomes much more delicate, and we will discuss some of the problems arising, including understanding 'non-abelian cohomology sets' which arise.
(TCPL 201)
Jacqui Ramagge: Flat groups and graphs (Subtitle: The unreasonable connectedness of mathematics) ↓
Given a totally disconnected, locally compact group $G$, Möller gave a graph-theoretic characterisation for subgroups tidy for $x\in G$. We consider a corresponding result for flat subgroups of $G$.
(TCPL 201)
- Dinner (Vistas Dining Room)
Wednesday, November 16
- Breakfast (Vistas Dining Room)
Luke Morgan: Semiprimitive groups - a classification theorem (sort of) ↓
09:00 - 09:50
A transitive permutation group is called semiprimitive if each normal subgroup is transitive or semiregular. This large class of groups includes the classes of primitive, quasiprimitive, innately transitive and Frobenius groups. Apart from being a generalisation of these important classes of permutation groups, motivation to study this class came from problems in abstract algebra and in algebraic graph theory. A barrier to their study has been the lack of any apparent structure and the prevalence of wild examples. In this talk I will report on joint work with Michael Giudici in which we brought some clarity to the study of this class of groups. We found that there is strong structure to a semiprimitive group, although not as precise as the O'Nan-Scott Theorem for primitive groups, and there is rough structure that explains how semiprimitive groups are built from innately transitive groups. Along the way I'll mention plenty of examples and, time permitting, some applications of this theory to the motivating problems.
(TCPL 201)
Tim Burness: Generating simple groups and their subgroups ↓
09:55 - 10:30
It is well known that every finite simple group can be generated by two elements and this leads to a wide range of problems that have been the focus of intensive research in recent years, such as random generation, (2,3)-generation and so on. In this talk I will report on recent joint work with Martin Liebeck and Aner Shalev on similar problems for subgroups of (almost) simple groups, focussing on maximal and second maximal subgroups. We prove that every maximal subgroup of a simple group can be generated by four elements (this is best possible) and we show that the problem of determining a bound on the number of generators for second maximal subgroups depends on a formidable open problem in Number Theory. Time permitting, we will also present some related results on random generation and subgroup growth.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Dugald MacPherson: Locally compact permutation groups, and maximal-closed permutation groups ↓
10:55 - 11:30
The full symmetric group S on a countably infinite set X carries a natural topology, the topology of pointwise convergence. I will discuss joint work in progress with Cheryl Praeger and Simon Smith, and some open questions, on multiply transitive locally compact subgroups of S, on maximal-closed subgroups of S, and on subgroups of S which are maximal subject to being subdegree-finite.
(TCPL 201)
Ilaria Castellano: Rational discrete first degree cohomology for totally disconnected locally compact groups ↓
For a topological group G several cohomology theories have been introduced
and studied in the past. In many cases the main motivation was to obtain an
interpretation of the low-dimensional
cohomology groups in analogy to discrete
groups. The aim of this talk is first to give interpretations of the first degree rational discrete cohomology functor $\mathrm{dH}^1(G,-)$ introduced
11:30 in [3], where $G$ is a totally disconnected locally compact (=t.d.l.c.) group. Secondly, it will be shown how these interpretations can be used to prove several results about t.d.l.c. groups in
- analogy with the discrete case. Namely, we prove that a non-trivial splitting of a compactly generated t.d.l.c. group can be detected by knowing a single cohomology group in analogy to [1,
12:00 Theorem IV 6.10]. As a consequence, we characterize a compactly presented t.d.l.c. group of rational discrete cohomological dimension 1 to be a fundamental group of a finite graph of profinite
groups in analogy to [2, Theorem 1.1].
1. Dicks, W., and Dunwoody, M. J. Groups Acting on Graphs. Vol. 17. Cambridge University Press, 1989.
2. Dunwoody, M. J. Accessibility and groups of cohomological dimension one. Proceedings of the London Mathematical Society 3.2 (1979): 193-215.
3. Castellano, I., and Weigel, Th. Rational discrete cohomology for totally disconnected locally compact groups. Journal of Algebra 453 (2016): 101-159.
(TCPL 201)
- Lunch (Vistas Dining Room)
- Free Afternoon (Banff National Park)
- Dinner (Vistas Dining Room)
Thursday, November 17
- Breakfast (Vistas Dining Room)
Michael Giudici: $s$-arc-transitive digraphs ↓
09:00 An $s$-arc in a digraph $\Gamma$ is a sequence $v_0,v_1,\ldots,v_s$ of vertices such that for each $i$ the pair $(v_i,v_{i+1})$ is an arc of $\Gamma$. There are several important differences
- between the study of $s$-arc-transitive graphs and $s$-arc transitive digraphs. For example, there are no 8-arc-transitive graphs of valency at least 8, while for every positive integer $s$
09:35 there are infinitely many digraphs of valency at least three that are $s$-arc-transitive but not $(s+1)$-arc transitive. In this talk I will discuss a solution to an old question of Cheryl
Praeger about the existence of vertex-primitive 2-arc-transitive digraphs. This is joint work with Cai Heng Li and Binzhou Xia.
(TCPL 201)
Simon Smith: The structure of infinite primitive permutation groups ↓
09:40 This talk is about infinite permutation groups that satisfy a finiteness condition: all orbits of point stabilizers are finite. Such groups are called ${\em subdegree-finite}$. Subdegree-finite
- permutation groups are the natural permutation representations of totally disconnected and locally compact topological groups.
In this talk I'll present a recent result which describes the
10:15 structure of all subdegree-finite primitive permutation groups. It is akin to the seminal O'Nan--Scott Theorem for finite primitive permutation groups.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Alejandra Garrido: Maximal subgroups of groups of intermediate growth ↓
Studying the primitive actions of a group corresponds to studying its maximal subgroups. In the case where the group is countably infinite, one of the first questions one can ask is whether
10:45 there are any primitive actions on infinite sets; that is, whether there are any maximal subgroups of infinite index. The study of maximal subgroups of countably infinite groups has so far
mainly concerned classes of groups where the Tits Alternative holds (every subgroup is either virtually soluble or contains a free subgroup), and in the cases where there are maximal subgroups of
11:15 infinite index, there are uncountably many of them. It is natural to investigate this question for groups of intermediate growth, for instance, some groups of automorphisms of rooted trees. I will
report on some recent joint work with Dominik Francoeur where we show that some groups of intermediate growth have exactly countably many maximal subgroups of infinite index.
(TCPL 201)
Yoav Segev: On algebras generated by idempotents ↓
11:20 Idempotents ($e^2=e$) in (nonassociative commutative) algebras (e.g., Jordan algebras) behave sometimes like involutions in a group. Indeed, in some (very) interesting cases one can associate
- an involutive automorphism of the algebra to an idempotent. This topic is related to Griess algebras (and the Monster group), Majorana algebras, Axial algebras (and the Fischer groups) and
11:50 Miyamoto involutions. We explore mostly algebras generated by two idempotents, having in mind groups, and an extension of the results to algebras generated by finitely many idempotents (when
possible). Joint work with Louis Rowen.
(TCPL 201)
- Lunch (Vistas Dining Room)
- Coffee Break (TCPL Foyer)
Pierre Simon: $AGL_n(Q)$ and $PGL_n(Q)$ are maximal-closed ↓
- In joint work with Itay Kaplan, we show that $AGL_n(Q)$ ($n>1$) and $PGL_n(Q)$ ($n>2$) are maximal amongst closed proper subgroups of the infinite symmetric group. I will present this result
17:20 and its proof which relies on Adeleke and Macpherson's classification of infinite Jordan groups. I will also mention some open questions.
(TCPL 201)
Colin Reid: Chief series in locally compact groups ↓
I will be talking about joint work with Phillip Wesolek. A chief factor of a topological group $G$ is a factor $K/L$, where $K$ and $L$ are closed normal subgroups such that no closed normal
17:30 subgroup of $G$ lies strictly between $K$ and $L$. We show that a compactly generated locally compact group admits an 'essentially chief series', that is, a finite normal series in which each
- of the factors is compact, discrete or a chief factor. In the totally disconnected case, the proof is based on the fact that $G$ acts vertex-transitively on a locally finite connected graph
18:00 with compact open stabilizers. I will also indicate why totally disconnected chief factors can have a complicated normal subgroup structure as groups in their own right, in contrast to
semisimple groups.
(TCPL 201)
- Dinner (Vistas Dining Room)
Friday, November 18
- Breakfast (Vistas Dining Room)
Andre Nies: Complexity questions for classes of closed subgroups of the group of permutations of N ↓
The closed subgroups of the group of permutations of N coincide with the automorphism groups of countable structures. We consider classes of such groups, such as being oligomorphic, or being
09:00 topologically finitely generated. We study their complexity in the sense of descriptive set theory. If the class is Borel, we next consider the complexity of the topological isomorphism
- relation. For instance, the classes of locally compact and of oligomorphic groups are both Borel. In either case, the isomorphism relation is bounded in complexity by the problem of deciding
09:35 whether two countable graphs are isomorphic. In the first case, and even restricted to compact (i.e. profinite separable) groups, this upper bound is known to be sharp. The upper bounds have
been obtained independently by Rosendal and Zielinski (arXiv, Oct. 2016). This is joint work with A. Kechris and K. Tent.
(TCPL 201)
Inna Capdeboscq: Generation and Presentations: Kac-Moody groups' perspective. ↓
- In this talk we discuss results about generation and presentations of Kac-Moody groups over finite fields and their consequences for some Chevalley groups. This talk is partly based on recent
10:15 work with A. Lubotzky and B. Remy, and with D. Rumynin.
(TCPL 201)
- Coffee Break (TCPL Foyer)
Gunter Malle: On the number of modular characters in a block ↓
- We propose a conjectural upper bound for the number of irreducible Brauer characters in a $p$-block of a finite group, argue that it holds for $p$-solvable groups, give some further evidence in
11:15 the case of quasi-simple groups and discuss some reductions. This is joint work with G. Robinson.
(TCPL 201)
Checkout by Noon ↓
- 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the
12:00 guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
- Lunch from 11:30 to 13:30 (Vistas Dining Room) | {"url":"http://webfiles.birs.ca/events/2016/5-day-workshops/16w5087/schedule","timestamp":"2024-11-14T22:02:46Z","content_type":"application/xhtml+xml","content_length":"46053","record_id":"<urn:uuid:31dc214f-23c3-4619-be63-ac404b285e42>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00101.warc.gz"} |
Lougovski, Pavel (2004): Quantum state engineering and reconstruction in cavity QED: An analytical approach. Dissertation, LMU München: Faculty of Physics
The models of a strongly-driven micromaser and a one-atom laser are developed. Their analytical solutions are obtained by means of phase space techniques. It is shown how to exploit the model of a
one-atom laser for simultaneous generation and monitoring of the decoherence of the atom-field "Schrödinger cat" states. Similar machinery applied to the problem of the generation of the
maximally-entangled states of two atoms placed inside an optical cavity permits its analytical solution. The steady-state solution of the problem exhibits a structure in which the two-atom
maximally-entangled state correlates with the vacuum state of the cavity. As a consequence, it is demonstrated that the atomic maximally-entangled state, depending on a coupling regime, can be
produced via a single or a sequence of no-photon measurements. The question of the implementation of a quantum memory device using a dispersive interaction between the collective internal ground
state of an atomic ensemble and two orthogonal modes of a cavity is addressed. The problem of quantum state reconstruction in the context of cavity quantum electrodynamics is considered. The optimal
operational definition of the Wigner function of a cavity field is worked out. It is based on the Fresnel transform of the atomic inversion of a probe atom. The general integral transformation for
the Wigner function reconstruction of a particle in an arbitrary symmetric potential is derived.
Item Type: Theses (Dissertation, LMU Munich)
Subjects: 500 Natural sciences and mathematics
500 Natural sciences and mathematics > 530 Physics
Faculties: Faculty of Physics
Language: English
Date of oral examination: 23. September 2004
1. Referee: Walther, Herbert
MD5 Checksum of the PDF-file: 0e75561399212f7adab6abba7ac2182d
Signature of the printed copy: 0001/UMC 14029
ID Code: 2638
Deposited On: 06. Oct 2004
Last Modified: 24. Oct 2020 11:17 | {"url":"https://edoc.ub.uni-muenchen.de/2638/","timestamp":"2024-11-12T12:38:20Z","content_type":"application/xhtml+xml","content_length":"26458","record_id":"<urn:uuid:42dbd564-497f-47ad-a651-020e24e1b8bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00724.warc.gz"} |
Mach number and plasma beta dependence of the ion temperature perpendicular to the external magnetic field in the transition region of perpendicular collisionless shocks
Ion temperature anisotropy is a common feature for (quasi-)perpendicular collisionless shocks. By using two-dimensional full particle simulations, it is shown that the ion temperature component
perpendicular to the shock magnetic field at the shock foot region is proportional to the square of the Alfvén Mach number divided by the plasma beta. This result is also explained by a simple
analytical argument in which the reflected ions get energy from an upstream plasma flow. By comparing our analytic and numerical results, it is also confirmed that the fraction of the reflected ions
hardly depends on the plasma beta and the Alfvén Mach number when the square of the Alfvén Mach number divided by the plasma beta is larger than about 20.
In various kinds of solar-terrestrial, astrophysical, and laboratory plasmas, collisionless shocks are ubiquitous, at which the upstream kinetic energy of the supersonic plasma flow dissipates into
downstream energy of thermal ions and electrons, waves (turbulence), and nonthermal particles.^1,2 Despite various kinds of studies, detailed processes of shock dissipation remain to be clarified.
For example, we do not fully understand how energies are partitioned between downstream thermal electrons and ions although their total pressure can be simply predicted by the fluid Rankine-Hugoniot conditions.
For supercritical (quasi-)perpendicular shocks, a fraction of the incoming ions can be specularly reflected toward the upstream region but gyrates back to the shock front.^3–10 Such
reflected-gyrating ions can gain energy from the motional electric field of the upstream plasma flow and contribute to the increase of the ion temperature component perpendicular to the local
magnetic field. Consequently, large temperature anisotropy arises at the shock foot, exciting waves through ion temperature anisotropy instability, which is responsible for the shock ripples.^11,12
Electron preheating at the foot also takes place under some conditions.^3,13,14 The ripple further dissipates ions, increasing ion parallel temperature, and even electron acceleration occurs.^15 In
the downstream region, the ion distribution is no longer gyrotropic, and its structures are smoothed out by collisionless gyrophase mixing, resulting in downstream ion heating.^16–21 In order to
understand such a multistep dissipation process across the shock front, it is important to estimate the initial ion temperature component perpendicular to the shock magnetic field at the foot region.
In the present study, using the two-dimensional full particle simulation of low-Mach-number, perpendicular, rippled, and collisionless shocks, we study the ion perpendicular temperature at the shock
foot region. We show, for the first time, that it is proportional to the square of the Alfvén Mach number divided by the plasma beta or the square of the sonic Mach number, which is consistent with
the analytical scaling relation.^5
We perform two-dimensional (2D) simulations of perpendicular ($\theta_{Bn} = 90^{\circ}$) collisionless shocks by using a standard particle-in-cell code.^22 As in our previous works,^15,23,24 the shock is excited
by the “relaxation” between a supersonic and a subsonic plasma flow moving in the same direction. The initial state consists of two regions separated by a discontinuity. Both regions have spatially
uniform distributions of electrons and ions with different bulk flow velocities, temperatures, and densities, and they have a uniform perpendicular magnetic field with a different strength. The
simulation domain is taken in the x-y plane and an in-plane shock magnetic field (B[y0]) is assumed. We apply a uniform external electric field E[z0] = u[x1]B[y01]/c (=u[x2]B[y02]/c) in both upstream
and downstream regions, so that both electrons and ions drift along the x axis. Here, u[x] is the bulk flow velocity, and subscripts “1” and “2” denote “upstream” and “downstream,” respectively. At
the left (right) boundary of the simulation domain in the x direction, we inject plasmas with the same quantities as those in the initial upstream (downstream) region. We use absorbing boundaries to
suppress nonphysical reflection of electromagnetic waves at both ends of the simulation domain in the x direction,^25 while the periodic boundaries are imposed in the y direction.
In the present study, we show results of six simulation runs (A–F) with different upstream conditions. We summarize in Table I the upstream plasma parameters, such as the bulk flow velocity u[x1] and
the ratio of the electron plasma frequency to the electron cyclotron frequency ω[pe1]/ω[ce1]. For all runs, we fix the ion-to-electron mass ratio as m[i]/m[e] = 256, and set v[te1]/c = 0.1, where v[
te1] and c are the electron thermal velocity upstream and the speed of light, respectively, and subscripts “i” and “e” represent “ion” and “electron,” respectively. Then, we obtain the plasma beta
$\beta_1 = 2\,(v_{te1}/c)^2(\omega_{pe1}/\omega_{ce1})^2$ and $u_{x1}/V_{A1} = (m_i/m_e)^{1/2}(v_{te1}/c)(\omega_{pe1}/\omega_{ce1})(u_{x1}/v_{te1})$, where V[A1] is the upstream Alfvén velocity. It is assumed that
the electrons and ions have the same plasma beta, β[e1] = β[i1] = β[1], and the same isotropic temperature, T[e1] = T[i1]. Here, temperatures and thermal velocities are related as $T_{e1} = m_e v_{te1}^2$ and
$T_{i1} = m_i v_{ti1}^2$. For given upstream frequencies ω[pe1] and ω[ce1], the initial upstream number density $n_1 \equiv m_e \omega_{pe1}^2/4\pi e^2$ and the initial magnetic field strength B[y01] = m[e]ω[ce1]/e are derived. Then, the initial downstream parameters are determined
by solving the shock jump conditions (Rankine-Hugoniot conditions) for a magnetized two-fluid isotropic plasma consisting of electrons and ions,^26 assuming T[i2]/T[e2] = 8.0.
TABLE I.
            Upstream parameters                                        Results
Run   u[x1]/v[te1]   ω[pe1]/ω[ce1]   β[1]   u[x1]/V[A1]   M[A]   M[A]^2/β[1]   ⟨T[i⊥]max⟩/T[i1]   T[i⊥]max/T[i1]
A     1.875          2               0.08   6             5.2    334           183                160–205
B     0.9375         4               0.32   6             6.5    132           63.9               53.9–75.4
C     0.46875        8               1.28   6             6.1    28.6          14.9               13.4–17.1
D     1.25           2               0.08   4             4.6    265           101                86.5–123
E     0.625          4               0.32   4             4.7    70.5          30.0               26.2–35.5
F     0.3125         8               1.28   4             3.9    12.5          5.01               4.80–5.20
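As a quick consistency check, the derived upstream columns of Table I (β[1] and u[x1]/V[A1]) follow directly from the listed inputs through the relations given above. The short Python sketch below is ours (the function name and hard-coded defaults simply mirror the fixed simulation parameters) and is not part of the simulation code:

    import math

    def upstream_params(ux1_over_vte1, wpe_over_wce, vte1_over_c=0.1, mi_over_me=256):
        """Return (beta_1, u_x1/V_A1) from the fixed upstream parameters."""
        beta1 = 2.0 * vte1_over_c**2 * wpe_over_wce**2
        ux1_over_VA1 = math.sqrt(mi_over_me) * vte1_over_c * wpe_over_wce * ux1_over_vte1
        return beta1, ux1_over_VA1

    # Run A of Table I: u_x1/v_te1 = 1.875, omega_pe1/omega_ce1 = 2
    print(upstream_params(1.875, 2))   # beta_1 = 0.08, u_x1/V_A1 = 6, as listed for Run A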
The grid spacing and the time step of the present simulation runs are set to be Δx = Δy ≡ Δ = λ[De1] and cΔt/Δ = 0.5, respectively, where λ[De] is the electron Debye length. The total size of the
simulation domain is 32l[i1] × 6l[i1], where $l_{i1} = c/\omega_{pi1} = (m_i/m_e)^{1/2}(c/v_{te1})\lambda_{De1}$ is the ion inertial length of the upstream plasma. We used 25 pairs of electrons and ions per cell in the upstream
region and 64 pairs of electrons and ions per cell in the downstream region at the initial state.
The ion temperature component perpendicular to the shock magnetic field is approximated by the arithmetic mean of ion temperatures in the x and z directions, that is, T[i⊥] = (T[x] + T[z])/2. In Fig.
1, we show spatiotemporal evolution of B[y] (left panel) and T[i⊥] (right panel) for Run D. Here, both of them are averaged over the y direction. The initial unphysical disturbance disappears, and
the growth of the shock ripples ceases by ω[ci1]t = 7.0, after which the shock structure appears to be in a quasi-steady state. Unlike in one-dimensional simulations, quasi-periodic reformation is
not clearly observed, although the front oscillation is still visible. Hence, we analyze the data after ω[ci1]t = 7.0 for all runs.
As shown in Fig. 1, the shock front moves leftward in our simulation frame. Using the spatiotemporal diagram of B[y], we measured the shock velocity v[sh] and obtained the Alfvén Mach number, M[A]
= (u[x1] − v[sh])/V[A1], in the shock-rest frame. The results for all six runs are shown in Table I.
We extract a representative value of T[i⊥] in the transition layer for each run as follows. First, at each time step we take a snapshot of T[i⊥] averaged over the y direction and find its maximum
value, $T_{i\perp}^{\max}$. As shown in Fig. 2, T[i⊥] has a maximum at the foot region; this holds at any epoch. The value of $T_{i\perp}^{\max}$ changes with time. We then take the time mean of
$T_{i\perp}^{\max}$ over 7 ≤ ω[ci1]t ≤ 12. For Run D, for example, the average value is $T_{i\perp}^{\max}/T_{i1} = 101$, and the maximum and minimum values are 123 and 86.5, respectively. In the
same way, the average, maximum, and minimum values of $T_{i\perp}^{\max}$ are obtained for the other runs. The results are summarized in Table I. In Fig. 3, $T_{i\perp}^{\max}/T_{i1}$ is shown as a
function of M[A]^2/β[1]. The simulation results lie close to the line $T_{i\perp}^{\max}/T_{i1} \approx 0.5 \times M_A^2/\beta_1$.
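This scaling can be read off Table I directly; the throwaway script below (ours, not part of the simulation code) prints the ratio of the measured ⟨T[i⊥]max⟩/T[i1] to M[A]^2/β[1] for the six runs, i.e. the same numbers plotted in Fig. 3:

    # (M_A^2/beta_1, <T_iperp_max>/T_i1) pairs for Runs A-F, taken from Table I
    runs = {"A": (334, 183), "B": (132, 63.9), "C": (28.6, 14.9),
            "D": (265, 101), "E": (70.5, 30.0), "F": (12.5, 5.01)}
    for name, (x, y) in runs.items():
        print(name, round(y / x, 2))   # ratios fall between about 0.38 and 0.55

All six runs give ratios between roughly 0.4 and 0.55, consistent with the single slope of about 0.5 quoted above.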
We derive a simple analytical formula for the ion perpendicular temperature at the shock foot. Although similar formulae have already been derived,^5–7 our final equation, Eq. (11), is in
excellent agreement with our simulation results. In our simulation frame, the shock front moves at a velocity v[sh], and the upstream and downstream bulk velocities are typically u[x1] and u[x2],
respectively. The incoming ions are adiabatically heated at the shock foot. They have a perpendicular temperature $[C^{\gamma-1}/(\gamma-1)]\,m_i v_{ti1}^2$, where C ∼ (n[i,f]/n[i1]) is the compression factor, and γ
and n[i,f] are the adiabatic index (γ = 2 in our case) and the typical value of the density at the foot, respectively. Since they are also decelerated due to mass flux conservation, their bulk
velocity measured in the rest frame of the shock front becomes u[(in)] ≈ $v'_{sh}/C$, where $v'_{sh} = u_{x1} - v_{sh} = M_A V_{A1}$. Next, a part of them is reflected. The bulk velocity of the reflected ions in the shock
rest frame is u[(ref)] ≈ $-v'_{sh}/C$.^4 Hence, the velocity difference at the shock foot between the incoming and the reflected ions is given below:
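With u[(in)] and u[(ref)] as given above, Eq. (1) presumably reads

    $\Delta u = u^{(\mathrm{in})} - u^{(\mathrm{ref})} \approx 2\,v'_{sh}/C. \qquad (1)$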
A large fraction of energy (per ion), (m[i]/2)|Δu|^2, is consumed for increasing the ion perpendicular temperature.
Here, we consider a simple analytical model to estimate T[i⊥] at the foot region, whose ion distribution function is written as
where the first and the second terms in the rhs describe the incoming and reflected components, respectively, and the number density N[(k)], bulk velocity u[(k)], and temperature T[(k)] are
respectively. A subscript (k) = (in), (ref) denotes each component. Then, it is natural to approximate T[i⊥] as
where N[tot] and $ū$ are given by
respectively. When we introduce the fraction of the reflected ions as r = N[(ref)]/N[tot], then we get
and $ū$ = (1 − r)u[(in)] + ru[(ref)]. Assuming T[(in)] = T[(ref)] as in one of our previous simulation studies^27 and eliminating ū, we can rewrite Eq. (9) as
Using Eq. (1) together with T[(in)] = CT[i1] and $m_i v'^{2}_{sh}/T_{i1} = 2 M_A^2/\beta_1$, we finally obtain
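In terms of the quantities above, and consistent with the coefficients quoted in the next paragraph, Eq. (11) presumably takes the form

    $T_{i\perp}/T_{i1} = C + \bigl[\,8\,r(1-r)/C^{2}\,\bigr]\, M_A^{2}/\beta_1. \qquad (11)$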
The first term in the rhs of Eq. (11) is important for the case of low M[A]^2/β[1] only. The compression factor C is slightly larger than unity for such shocks. For large M[A]^2/β[1], the second term
dominates the rhs of Eq. (11); hence, we can explain our numerical result, $T_{i\perp}^{\max}/T_{i1} \propto M_A^2/\beta_1$, shown in Fig. 3, if the factor 8r(1 − r)/C^2 in Eq. (11) hardly depends on M[A]^2/β[1]. The fraction
of the reflected ions is typically r ≈ 0.3 and varies from 0.2 to 0.4 during nonstationary processes at the shock front. On the other hand, the value of C is less variable and ranges between 1.0 and
1.1. Then, one can see 8r(1 − r)/C^2 = 1.1–1.9. This is a factor of a few larger than estimated from Fig. 3. Indeed, our analytical formula, shown in Eq. (11), gives the upper bound of T[i⊥] because
the free energy m[i](Δu)^2 goes not only to T[i⊥] but also to the thermal energy of reflected ions and waves excited in the shock transition layer.^28 In practice, one can see that T[i⊥] becomes
larger if T[(ref)] > T[(in)] [see Eq. (9)]. Note that the fraction of the reflected ions, r, estimated in the previous analytical works^4,8,29 is slightly smaller than that obtained from our simulations.
In the present study, we focus on T[i⊥] only. On the other hand, the parallel component of ion temperature (T[i∥] ≈ T[y]) is much smaller at the foot region. Therefore, the total ion temperature, T[i
] = (T[x] + T[y] + T[z])/3 = (2T[i⊥] + T[i∥])/3, is approximated as T[i] ≈ (2/3)T[i⊥].
Using two-dimensional full particle simulations, we have shown that ion perpendicular temperature at the foot of supercritical perpendicular collisionless shocks is proportional to M[A]^2/β[1] or the
square of the sonic Mach number. This fact will give us a simple estimate of the energy partition between downstream thermal ions and electrons although further study is necessary. The ion heating at
the foot region of (quasi-)perpendicular shocks has been extensively investigated by many authors mainly using spacecraft observations,^5,9 (semi-)analytical studies,^4,5 and one-dimensional hybrid
simulations.^7 In this paper, we have extended such studies by using two-dimensional full particle simulations which can better capture various kinetic effects including wave excitation and plasma
heating in the direction tangential to the shock front. More specifically, we have demonstrated in this paper that our analytical scaling relation, as shown in Eq. (11), is in excellent agreement
with two-dimensional full particle simulations of rippled shocks. This result also indicates that the dependence of the fraction of ion reflection on the plasma beta and the Alfvén Mach number is
small when $M_A^2/\beta$ is larger than about 20, which is consistent with our simulation results.
The authors would like to thank Yutaka Ohira for helpful comments. Computer simulations were performed on the CIDAS supercomputer system at the Institute for Space-Earth Environmental Research in
Nagoya University under the joint research program. This work was partly supported by JSPS KAKENHI, Grant Nos. 15K05088, 18H01232 (R.Y.), 2628704, and 19H01868 (T.U.).
R. A.
Physics of Collisionless Shocks
New York
Collisionless Shocks in Space Plasmas
Cambridge University Press
L. C.
Plasma Phys.
M. M.
Phys. Fluids
S. J.
, and
J. T.
J. Geophys. Res.
, (
J. T.
A. E.
Collisionless Shocks in the Heliosphere: Reviews of Current Research
American Geophysical Union
Washington, DC
), Vol. 35, p.
W. P.
, and
S. J.
J. Geophys. Res.
, (
W. P.
S. J.
Planet. Space Sci.
A. L.
C. W.
, and
J. Geophys. Res.
, (
Phys. Fluids B
K. B.
J. Geophys. Res.
, (
R. E.
Ann. Geophys.
E. L. M.
O. V.
F. S.
S. D.
, and
Geophys. Res. Lett.
, (
I. J.
S. J.
K. A.
R. E.
S. A.
M. I.
E. R.
D. J.
G. P.
et al.,
J. Geophys. Res.
, (
, and
Astrophys. J.
J. Geophys. Res.
, (
Geophys. Res. Lett.
, (
J. Plasma Phys.
Phys. Plasmas
C. T.
, and
J. Geophys. Res.
, (
J. Geophys. Res.
, (
, and
Comput. Phys. Commun.
, “
Particle simulation of a perpendicular collisionless shock: A shock-rest-frame model
Earth Planets Space
, and
Astrophys. J.
, and
Comput. Phys. Commun.
P. D.
Planet. Space Sci.
, and
Phys. Plasmas
, and
Phys. Plasmas
, and
J. Geophys. Res.
, ( | {"url":"https://pubs.aip.org/aip/adv/article/9/12/125010/990616/Mach-number-and-plasma-beta-dependence-of-the-ion","timestamp":"2024-11-10T12:50:44Z","content_type":"text/html","content_length":"242430","record_id":"<urn:uuid:bbd8f372-e17a-4807-98a7-8d7a35f6762f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00873.warc.gz"} |
Blog - SAT Quantum
Try the following SAT math practice question (No Calculator) that tests your understanding of factorization and algebraic identities.
Question(No Calculator):
Which of the following must be a factor of both $x^4-y^4$ and $x^3y – x y^3$ ?
I. $x + y$
II. $x^2 – y^2$
III. $(x-y)^2$
A. $\quad \textrm{II only}$
B. $\quad \textrm{I and II only}$
C. $\quad \textrm{I and III only}$
D. $\quad \textrm{I, II, and III}$
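A quick symbolic check (a throwaway sympy snippet, not part of the original post) factors both expressions:

    import sympy as sp

    x, y = sp.symbols('x y')
    print(sp.factor(x**4 - y**4))       # (x - y)*(x + y)*(x**2 + y**2)
    print(sp.factor(x**3*y - x*y**3))   # x*y*(x - y)*(x + y)

Both factorizations contain $(x+y)$ and $(x-y)$, hence also $x^2-y^2$, but neither contains $(x-y)^2$, which points to choice B.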
SAT math practice question: Circles in coordinate plane
Try the following SAT math practice question (Calculator Permitted) that tests your understanding of circles in the coordinate plane.
Question(Calculator Permitted):
In the $xy-$plane, point $A$ has coordinates $(3, 3)$ and point $B$ has coordinates $(-3, 3)$. Which of the following points is on the circle with center $A$ and radius $\overline{AB}$?
A. $\quad (-9, 3)$
B. $\quad (-3, -3)$
C. $\quad (0, 3)$
D. $\quad (3, 9)$
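As a quick numerical check (a throwaway script, not part of the original post): the radius is the length of $\overline{AB}$, which is 6, so we simply test which choice lies at that distance from $A$:

    import math

    A, B = (3, 3), (-3, 3)
    r = math.dist(A, B)                      # radius = length of segment AB = 6
    choices = {"A": (-9, 3), "B": (-3, -3), "C": (0, 3), "D": (3, 9)}
    for label, point in choices.items():
        print(label, math.isclose(math.dist(A, point), r))

Only $(3, 9)$ is at distance 6 from the center $(3, 3)$, i.e. choice D.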
Video Explanation
SAT math practice question: Arithmetic Sequences
Try the following SAT math practice question (Calculator Permitted) that tests your understanding of arithmetic sequences.
Question(Calculator Permitted):
Aaron is planning to save money every Sunday of the week to buy a car. He will save $\$100$ on the first Sunday of the year. Each Sunday after the first, he will save $\$40$ more than he saved on the
preceding Sunday. Which of the following is an expression for the amount of dollars that Aaron would save on the $n$th Sunday of his savings effort?
A. $\quad 80n+20$
B. $\quad 40n+100$
C. $\quad 40n+80$
D. $\quad 40n+60$
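A quick check (a throwaway script, not part of the original post): the savings form an arithmetic sequence with first term 100 and common difference 40, so the $n$th term is $100 + 40(n-1)$; comparing with the candidate closed form $40n + 60$:

    def savings(n):
        return 100 + 40 * (n - 1)   # first Sunday: 100; each later Sunday: 40 more

    for n in range(1, 6):
        print(n, savings(n), 40 * n + 60)   # the two columns agree for every n

The closed form $40n + 60$ reproduces the sequence, i.e. choice D.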
Video Explanation | {"url":"https://satquantum.com/blog/page/7/","timestamp":"2024-11-11T18:18:46Z","content_type":"text/html","content_length":"56302","record_id":"<urn:uuid:6ebb6b3c-24ce-41fa-acf8-b135a2b2d524>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00394.warc.gz"} |
87 research outputs found
In the common linear regression model we consider the problem of designing experiments for estimating the slope of the expected response in a regression. We discuss locally optimal designs, where the
experimenter is only interested in the slope at a particular point, and standardized minimax optimal designs, which could be used if precise estimation of the slope over a given region is required.
General results on the number of support points of locally optimal designs are derived if the regression functions form a Chebyshev system. For polynomial regression and Fourier regression models of
arbitrary degree the optimal designs for estimating the slope of the regression are determined explicitly for many cases of practical interest. --locally optimal design,standardized minimax optimal
design,estimating derivatives,polynomial regression,Fourier regression
n.a. --Trigonometric polynomial,extremal polynomial,Chebyshev system,extremal problem
In the common Fourier regression model we investigate the optimal design problem for estimating pairs of the coefficients, where the explanatory variable varies in the interval [−π, π]. L-optimal
designs are considered and for many important cases L-optimal designs can be found explicitly, where the complexity of the solution depends on the degree of the trigonometric regression model and the
order of the terms for which the pair of the coefficients has to be estimated. --L-optimal designs,Fourier regression models,parameter subsets,equivalence theorem
This paper concerns locally optimal experimental designs for non-linear regression models. It is based on the functional approach introduced in (Melas, 1978). In this approach locally optimal design
points and weights are studied as implicitly given functions of the nonlinear parameters included in the model. Representing these functions in a Taylor series enables analytical solution of the
optimal design problem for many nonlinear models. A wide class of such models is here introduced. It includes, in particular, the three-parameter logistic distribution, hyperexponential and rational
models. For these models we construct the analytical solution and use it for studying the efficiency of locally optimal designs. As a criterion of optimality the well known D-criterion is considered
In this paper we consider the problem of constructing $T$-optimal discriminating designs for Fourier regression models. We provide explicit solutions of the optimal design problem for discriminating
between two Fourier regression models, which differ by at most three trigonometric functions. In general, the $T$-optimal discriminating design depends in a complicated way on the parameters of the
larger model, and for special configurations of the parameters $T$-optimal discriminating designs can be found analytically. Moreover, we also study this dependence in the remaining cases by
calculating the optimal designs numerically. In particular, it is demonstrated that $D$- and $D_s$-optimal designs have rather low efficiencies with respect to the $T$-optimality criterion.Comment:
Keywords and Phrases: T-optimal design; model discrimination; linear optimality criteria; Chebyshev polynomial, trigonometric models AMS subject classification: 62K0
This paper concerns locally optimal experimental designs for non-linear regression models. It is based on the functional approach introduced in (Melas, 1978). In this approach locally optimal
design points and weights are studied as implicitly given functions of the nonlinear parameters included in the model. Representing these functions in a Taylor series enables analytical solution of
the optimal design problem for many nonlinear models. A wide class of such models is here introduced. It includes, in particular, three parameters logistic distribution, hyperexponential and
rational models. For these models we construct the analytical solution and use it for studying the efficiency of locally optimal designs. As a criterion of optimality the well known D-criterion is
considered. --nonlinear regression,experimental designs,locally optimal designs,functional approach,three parameters logistic distribution,hyperexponential models,rational models,D-criterion,implicit
function theorem
For a broad class of nonlinear regression models we investigate the local E- and c-optimal design problem. It is demonstrated that in many cases the optimal designs with respect to these optimality
criteria are supported at the Chebyshev points, which are the local extrema of the equi-oscillating best approximation of the function f_0\equiv 0 by a normalized linear combination of the regression
functions in the corresponding linearized model. The class of models includes rational, logistic and exponential models and for the rational regression models the E- and c-optimal design problem is
solved explicitly in many cases.Comment: Published at http://dx.doi.org/10.1214/009053604000000382 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
This paper considers the problem of constructing optimal discriminating experimental designs for competing regression models on the basis of the T-optimality criterion introduced by Atkinson and
Fedorov [Biometrika 62 (1975) 57-70]. T-optimal designs depend on unknown model parameters and it is demonstrated that these designs are sensitive with respect to misspecification. As a solution to
this problem we propose a Bayesian and standardized maximin approach to construct robust and efficient discriminating designs on the basis of the T-optimality criterion. It is shown that the
corresponding Bayesian and standardized maximin optimality criteria are closely related to linear optimality criteria. For the problem of discriminating between two polynomial regression models which
differ in the degree by two the robust T-optimal discriminating designs can be found explicitly. The results are illustrated in several examples.Comment: Published in at http://dx.doi.org/10.1214/
13-AOS1117 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org
The problem of constructing Bayesian optimal discriminating designs for a class of regression models with respect to the T-optimality criterion introduced by Atkinson and Fedorov (1975a) is
considered. It is demonstrated that the discretization of the integral with respect to the prior distribution leads to locally T-optimal design problems with a large number of model comparisons. Current
methods for constructing discrimination designs can only deal with a few comparisons, but the discretization of the Bayesian prior easily yields discrimination design problems for more than 100
competing models. A new efficient method is developed to deal with problems of this
type. It combines some features of the classical exchange type algorithm with the gradient methods. Convergence is proved and it is demonstrated that the new method can find Bayesian optimal
discriminating designs in situations where all currently available procedures fail.Comment: 25 pages, 3 figure | {"url":"https://core.ac.uk/search/?q=authors%3A(Melas%2C%20Viatcheslav%20B.)","timestamp":"2024-11-09T05:12:46Z","content_type":"text/html","content_length":"136183","record_id":"<urn:uuid:782f2e05-1a63-40fc-bb21-b293b1176c50>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00480.warc.gz"} |
Reuleaux Triangle Calculator
The Reuleaux triangle is the area of intersection of three equal circles with centers at the vertices of an equilateral triangle and radii equal to its side. The non-smooth closed curve bounding this figure
is also called the Reuleaux triangle. It is the simplest curve of constant width. Its three-dimensional analogue is the Reuleaux tetrahedron. Enter one known value, then click the “Calculate” button.
l = a π / 3
p = a π
S = ( π – √3 ) a2 / 2 | {"url":"https://calculators.vip/relo-triangle-calculator/","timestamp":"2024-11-06T23:57:43Z","content_type":"text/html","content_length":"36762","record_id":"<urn:uuid:237676d5-1e74-4991-a64c-7c29a2902ddd>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00324.warc.gz"} |
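For reference, the three quantities above can be computed directly from the width a. A small Python sketch (the variable names mirror the calculator's symbols; reading l as the length of one circular arc is our interpretation of its notation):

    import math

    def reuleaux_triangle(a):
        """Arc length of one side (l), perimeter (p), and area (S) of a Reuleaux triangle of width a."""
        l = a * math.pi / 3
        p = a * math.pi
        S = (math.pi - math.sqrt(3)) * a**2 / 2
        return l, p, S

    print(reuleaux_triangle(1.0))   # -> (1.047..., 3.141..., 0.704...)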
College Tutoring San Jose, CA | MathTowne Tutoring
College tutoring – Math Review For College Students
We provide the highest-rated tutoring service to college students in the San Jose Bay Area.
College PRECALCULUS Tutoring
For college students enrolled in a precalculus course such as:
Or similar courses at other institutions in the Bay Area or elsewhere, here are the general topics a Precalculus class will cover:
Precalculus Curriculum
• Solve and graph polynomial functions
• Understand Arithmetic and Geometric Sequences and Series
• Investigate Rational Functions
• Solve Exponential and Logarithmic Functions
• Understand basic Linear Algebra including vectors and Matrices
• Cover more Probability and Statistics
• Understand Conics: Parabola, Ellipse, and Hyperbola
• Learn Trigonometric Identities and functions
• Understand trigonometric functions and their inverses
• Law of sines and cosines
College Calculus For Business Majors
For business majors enrolled in calculus courses such as:
Or similar courses at other institutions, here are the general topics a Calculus class will cover:
Calculus Curriculum
• Study the properties and characteristics of functions and their graphs.
• Learn about the limits of functions and how they relate to continuity.
• Study the concept of derivatives and their various applications.
• Solve related rates and optimization problems.
• Study antiderivatives, the fundamental theorem of calculus, and techniques for integration.
• Learn practical applications of integrals.
Business Calculus Vs Calculus
Business calculus is a branch of calculus that is tailored to address problems and concepts that are relevant to business and economics. As such, it differs somewhat from the more general calculus
curriculum in that it focuses on applying calculus concepts to practical problems in business.
Business Calculus
The problems covered are specific to business and economics, rather than purely theoretical or abstract, and include content such as:
• Marginal analysis: Analyze the marginal behavior of functions such as cost, profit, and revenue.
• Business applications of integration e.g. predicting future trends in sales and production.
• Risk probabilities and statistics.
• Applications of natural logarithms and exponential functions, such as compound interest and elasticity of demand.
Some topics in regular calculus that are usually absent in business calculus classes include:
• Trigonometric functions.
• Advanced integration techniques, such as integration by parts, trigonometric substitution, and partial fraction decomposition.
• Vector calculus, including vector fields, line integrals, surface integrals, and the theorems of Green, Stokes, and Gauss.
College Calculus For Life Science Majors
For life science majors (biology, neuroscience, environmental science, etc.) enrolled in calculus courses such as:
Or similar courses at other institutions. Calculus for life science majors typically focuses on using calculus to model and analyze biological phenomena, such as growth, decay, and movement. The
regular calculus curriculum on derivatives and integrals will be covered but some key differences of calculus for life science majors include:
Calculus For Life Sciences
• Emphasis on Applications: Applying calculus concepts and techniques to solve problems in the life sciences, rather than exploring the theory and abstract concepts of calculus.
• Biological Context: Includes examples from the life sciences, such as modeling population growth or analyzing enzyme kinetics.
• Less Emphasis on Rigor: Less emphasis on the rigorous proofs and mathematical theory that are emphasized in regular calculus courses. More focus on a practical understanding of calculus concepts
and techniques that are relevant to the life sciences.
• More Emphasis on Visualization: More emphasis on visualization techniques, such as graphing and drawing diagrams, to help students better understand the concepts.
College Calculus For Math, Physics, & Engineering Majors
For math, physics, or engineering majors enrolled in calculus courses such as:
Or similar courses at other institutions. Other than the general curriculum of derivatives and integrals in a regular calculus course, math, physics, or engineering majors will likely learn
additional topics such as:
Additional Topics in Calculus
• Vector calculus (including multivariable calculus, gradient, curl, and divergence)
• Fourier series and transforms
• Numerical methods for solving calculus problems
• Laplace transforms
• Applications in physics and engineering, such as mechanics, thermodynamics, and electromagnetism
• Real analysis
• Topics in abstract algebra, topology, and geometry
• Advanced topics in differential equations (stability theory, chaos theory)
For students taking courses such as:
Or similar courses at other institutions, here are the general topics a college statistics course will cover:
Statistics Curriculum
• Descriptive statistics (measures of central tendency, measures of dispersion)
• Probability distributions (discrete and continuous)
• Sampling distributions (mean, variance, standard deviation)
• Confidence intervals (for means, proportions, and differences in means)
• Hypothesis testing (null and alternative hypotheses, p-values, type I and type II errors)
• Inference for means and proportions (using t-tests, z-tests, and chi-square tests)
• Correlation and regression analysis (linear regression, multiple regression)
• Analysis of variance (ANOVA)
• Nonparametric methods (Kruskal-Wallis test, Wilcoxon rank-sum test, chi-square goodness-of-fit test)
College Tutoring: Math Review
Math Review
Whether you are struggling with basic concepts or need to review more advanced topics, our math tutors are here to provide personalized support and guidance. We understand that math can be daunting,
and our goal is to help you gain the confidence and skills you need to succeed in your classes. Our tutors are experienced and knowledgeable, and they are dedicated to helping you achieve your goals
in a stress-free, supportive environment. We are here to help you succeed, so don’t hesitate to reach out for assistance with your math review needs.
Common Challenges in College Math Classes
College students often face different challenges when it comes to math compared to their high school years, such as:
1. Increased Difficulty: College-level math courses are often more challenging and rigorous than high school courses, which can be overwhelming for students who were able to succeed in high school
math classes without much effort.
2. Pacing and Time Management: College math courses typically move at a faster pace than high school courses, and students are expected to be more self-directed in their learning. This can be
difficult for students who struggle with time management or who need more time to process the material.
3. Higher Expectations: College math courses often have higher expectations for students in terms of their understanding of the material, ability to solve complex problems, and ability to
communicate mathematical concepts effectively.
4. Lack of Support: College students may not have the same level of support for math as they did in high school. They may have fewer opportunities to meet with their instructor outside of class, and
tutoring resources may be limited.
5. New Learning Environment: College students may need to adjust to a new learning environment, including larger class sizes, more independent learning, and a different teaching style. Students who
are used to a more structured and supportive learning environment in high school often find adjusting difficult.
If you recognize any of these common problems, you’ll likely benefit from finding a private tutor to work with you.
Precalculus, Calculus, & Statistics Help
Comprehensive Math and Stats Tutoring
Our comprehensive tutoring services are designed to cover all areas of precalculus, calculus, and statistics. We provide support for topics such as algebra, functions, limits, differentiation,
integration, probability, and statistical inference. Our tutors are experienced in helping students of all levels, whether you’re just starting out or preparing for advanced coursework.
Math Made Simple
We understand that math can be an intimidating subject for many students. That’s why our tutors strive to make these topics simple and approachable. We use real-world examples and practical
applications to help you understand the concepts in a way that is relatable and easy to grasp.
How our College Tutoring Service Works
Our initial assessment allows us to understand your strengths and weaknesses in math, and to create a personalized tutoring plan that meets your unique needs. During the assessment, we will work with
you to identify the specific areas of math where you need the most help, and we will develop a plan of action to help you overcome any challenges.
Whether you need help with a difficult subject, a specific assignment, or test preparation, our tutors have the experience and knowledge to help you succeed.
We offer flexible and convenient tutoring options to fit your busy schedule. Our tutoring services are available both online and in person, and we can work around your availability to make sure you
get the help you need when you need it.
I need College Tutoring For Math
Frequently Asked Questions
What math classes are required for a college degree?
The specific math classes required for a college degree can vary depending on the major and the school. However, most colleges and universities require students to complete at least one semester of
college-level math, often in the form of calculus or statistics.
How often should I meet with a math tutor?
The frequency of math tutoring sessions can vary depending on your needs and schedule. Some students may benefit from weekly sessions, while others may only need occasional help before exams or on
specific assignments.
What should I bring to a math tutoring session?
Before your math tutoring session, it is a good idea to review any relevant class materials, notes, or homework assignments. You may also want to bring a calculator, a notebook or paper for taking
notes, and any specific questions or areas you want to focus on during the session.
How can I make the most of my math tutoring sessions?
To make the most of your math tutoring sessions, come prepared with questions and areas you want to focus on. Be open to feedback and guidance from your tutor and be willing to work hard and practice
outside of sessions. Finally, communicate with your tutor about your progress and any areas where you are still struggling. | {"url":"https://mathtowne.com/college-tutoring/","timestamp":"2024-11-14T22:05:34Z","content_type":"text/html","content_length":"195294","record_id":"<urn:uuid:ba1293dc-0e0e-4313-83cb-a73953c8c9b7>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00512.warc.gz"} |
Sensitivity of locally computed values
Sensitivity of locally computed values
If you traverse the tree for the case of Machine Learning or Statistical Analysis, you may come across the question: "Are the locally computed values to be exchanged sensitive?". This
question might seem vague or unclear. Let us consider an extreme example of local sensitivity to outline the reasoning behind this question.
Let us assume that two parties wish to calculate collaborative statistics on their data. Let us also assume that each party is aware of the number of samples the other party owns. Let us finally
assume that party A only owns 2 samples. Then, we can immediately realize that if party A shares with party B the local mean and variance of a feature they own, then party B will be able to fully
reconstruct that feature for both samples owned by party A.
As mentioned, this is an extreme example and in practice the risk of freely exchanging locally computed values is nuanced and often hard to quantify. That being said, various recent research papers
[references] have focused on reconstructing data from local model updates, even in far less extreme examples of local sensitivity. As a result, any party participating in data sharing needs to
consider such dangers and carefully plan the protocol of sharing intermediate values that may leak much more information than immediately obvious. | {"url":"https://tno-pet-explorer.online/guide/decision-tree-guide/sensitivity-of-locally-computed-values/","timestamp":"2024-11-07T20:05:24Z","content_type":"text/html","content_length":"12771","record_id":"<urn:uuid:3631fce0-e695-42b8-8009-b08bd2d15281>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00631.warc.gz"} |
Theoretical Modelling in Lecce Unit
The theoretical-computational activity in Lecce is focused on four different research lines:
1) development of density-functional theory, 2) application of DFT methods to model nanosystems, 3) computational (quantum) plasmonics and 4) modelling of MEMS devices.
Development of Density Functional Theory Methods
We develop Density Functional Theory (DFT) exchange-correlation (XC) and kinetic energy (KE) functionals. First principles Kohn-Sham (KS) calculations allow a reliable prediction of electronic and
optical properties of materials. However, the final accuracy depends on the approximation made for the XC functional. We work at the meta Generalized Gradient Approximation (meta-GGA) level of
theory, which allows us to obtain very high accuracy with low computational costs, and with orbital-dependent functionals to treat the exchange and the second-order correlation exactly. While the
accuracy of KS-DFT depends on the approximations for the XC energy as a functional of the electronic density, orbital-free (OF) and subsystem-DFT calculations require even more essential
approximations for the non-interacting KE functional. In OF-DFT the total KE needs to be approximated, whereas in subsystem-DFT only its non-additive part (i.e. the difference between the total
interacting system and the sum of its non-interacting subsystems) is required. In both cases the development of accurate KE density functionals is a big challenge in DFT. We are developing semilocal
KE functionals which employ the Laplacian of the electronic density as an input ingredient.
Collaborations: L. A. Constantin (IIT), P. Gori-Giorgi (Amsterdam), P. Cortona (Paris), I. Grabowski (Torun, Poland), J.M. Pitarke (San-Sebastian, Spain)
Software: TURBOMOLE (development), PROFESS (development), JELLCODE (in-house developed)
S. Vuckovic, P. Gori-Giorgi, F. Della Sala, and E. Fabiano Restoring Size Consistency of Approximate Functionals Constructed from the Adiabatic Connection, J. Phys. Chem. Lett. 9, 3137 (2018)
L. A. Constantin, E. Fabiano, F. Della Sala Nonlocal kinetic energy functional from the jellium-with-gap model: Applications to orbital-free density functional theory, Phys. Rev. B 97, 205137 (2018)
L.A Constantin, E. Fabiano, F. Della Sala, Modified fourth-order kinetic energy gradient expansion with hartree potential-dependent coefficients, J. Chem. Theory Comput. 13, 4228 (2017)
F. Della Sala, E. Fabiano, L. A. Constantin Kinetic-energy-density dependent semilocal exchange-correlation functionals, Int. J. Quant. Chem. 116, 1641 (2016)
Contact person: Fabio Della Sala, Eduardo Fabiano
Modeling of advanced organic materials for energy and biology
We apply computational electronic structure methods, in particular (time-dependent) density functional theory, to investigate the basic properties of molecular materials used in biological and
optoelectronic applications. Modeling is, in fact, extremely important to determine the fundamental electronic properties (charge distributions and energetic properties), the spatial arrangement
(formation of aggregates and/or complexes), and the interaction of different molecular species in complex systems.
Moreover, for optoelectronic applications, a basic characterization of the fundamental optical properties of the material (excited states, absorption and emission spectra) is required. These studies
are conducted in close collaboration with experimental groups and are designed to provide a better understanding of the experimental data and a detailed knowledge of the physical and chemical
phenomena underlying the materials’ behaviour.
Software: TURBOMOLE
A. L. Capodilupo, L. De Marco, G. A. Corrente, R. Giannuzzi, E. Fabiano, A. Cardone, G. Gigli, G. Ciccarella Synthesis and characterization of a new series of dibenzofulvene based organic dyes for
DSSCs, Dyes and Pigments 130, 79 (2016);
A. L. Capodilupo, E. Fabiano, L. De Marco, G. Ciccarella, G. Gigli, C. Martinelli, A.Cardone, [1] Benzothieno [3, 2-b] benzothiophene-based organic dyes for dye-sensitized solar cells, J. Org. Chem.
81, 3235 (2016);
F. Di Maria et al Improving the Property–Function Tuning Range of Thiophene Materials via Facile Synthesis of Oligo/Polythiophene‐S‐Oxides and Mixed Oligo/Polythiophene‐S‐Oxides/Oligo/
Polythiophene‐S,S‐Dioxides, Adv. Funct. Mater. 26, 6970(2016)
Contact Person: Eduardo Fabiano
We study the optical properties of metallic nanostructures using the classical methods (discrete dipole approximation) and non-local approaches (quantum hydrodynamics, time-dependent density
functional theory). The former allow the modeling of large systems, the latter are required when characteristic distances are below few nanometers. We investigated noble metal nanoparticles with
different shapes and their interactions with localized dipolar sources.
Collaborations: C. Ciraci (IIT), M. Yurkin (Novosibirsk, Russia)
Software: ADDA, COMSOL, JELLCODE (in-house developed)
M. Khalid, F. Della Sala, C. Ciracì, Optical properties of plasmonic core-shell nanomatryoshkas: a quantum hydrodynamic analysis, Opt. Express 26, 17322 (2018)
A. Camposeo, R. Jurga, M. Moffa, A. Portone, F. Cardarelli, F. Della Sala, C. Ciracì, D. Pisignano Nanowire‐Intensified Metal‐Enhanced Fluorescence in Hybrid Polymer‐Plasmonic Electrospun Filaments,
Small 14, 1800187 (2018)
C. Ciraci, F. Della Sala, Quantum hydrodynamic theory for plasmonics: Impact of the electron density tail, Phys. Rev. B 93, 205405 (2016)
Contact Person: Fabio Della Sala
Finite Element Method (FEM) simulations for MEMS Energy Harvesting devices
The Finite Element Method (FEM) simulations activity at CNR-IMM supports the design of MEMS devices for low power consumption and Energy Harvesting applications. Coupled physics phenomena are
modelled and solved by using Comsol Multiphysics®. Coupled thermal and electrical analysis allows to maximize the performance of thermoelectric generators (TEGs), based on planar and vertical
thermopiles, by optimizing the design of modules and the package of devices. FEM simulations are also implemented to evaluate the deflection and stress distribution in different-shaped floating
membranes of capacitive pressure sensors, where small or large deformations in the model can be considered. About the piezoelectric devices, the piezoelectricity is combined with solid mechanics and
electrostatics equations and physics features for a comprehensive modelling of generation behaviour. Piezoelectric coupling can be on stress-charge or strain-charge form. FEM analysis is used to
study low temperature plasmas and to model dielectric barrier discharge in embedded system of sensor-DBD actuator for detection and control of flow separation in aeronautical applications.
COMSOL Multiphysics, MEMS PRO suite, L-Edit, SolidWorks
Francioso, L., De Pascali, C., Sglavo, V., Grazioli, A., Masieri, M., Siciliano, P. Modelling, fabrication and experimental testing of an heat sink free wearable thermoelectric generator (2017)
Energy Conversion and Management, 145, pp. 204-213
Malucelli, G., Fioravanti, A., Francioso, L., De Pascali, C., Signore, M.A., Carotta, M.C., Bonanno, A., Duraccio, D. Preparation and characterization of UV-cured composite films containing ZnO
nanostructures: Effect of filler geometric features on piezoelectric response(2017) Progress in Organic Coatings, 109, pp. 45-54.
Veri, C., Francioso, L., Pasca, M., De Pascali, C., Siciliano, P., D'Amico, S. An 80 mV startup voltage fully electrical DC-DC converter for flexible thermoelectric generators (2016) IEEE Sensors
Journal, 16 (8), art. no. 7184928, pp. 2735-2745
Contact person: Luca Francioso
High Performance Computing Center
First-principles DFT calculations require High-Performance Computing. At IMM-Lecce the following main computational resources are available:
• cluster HPXC4000 with 16 nodes (128 cores Opteron, 512GB ram) and switch Infiniband;
• cluster Clustervision with 24 nodes (224 cores Intel, 992 GB ram) switch Infiniband, 24 TB parallel storage;
• cluster HP with 4 nodes (96 cores Opteron, 256 GB ram);
• server IBM with 24 cores and 102GB ram. | {"url":"https://container.imm.cnr.it/articles/theoretical-modelling-lecce-unit","timestamp":"2024-11-06T01:09:09Z","content_type":"text/html","content_length":"69584","record_id":"<urn:uuid:523e7762-c9d0-4bec-b4af-0b82c0d52b0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00875.warc.gz"} |
What does "User MIP start did not produce a new incumbent solution" mean?
In the output log (typically gurobi.log), you might see the message
User MIP start did not produce a new incumbent solution
This message is printed when Gurobi rejects a provided MIP start. There are several possible reasons for this:
• Gurobi already has a solution that is at least as good as the provided MIP start.
• The provided MIP start is infeasible for the model. In this case there might be an additional message similar to the following to give you some hint:
User MIP start violates constraint c1 by 2934.000000000
• There are numerical issues, e.g., the MIP start is only almost feasible or only feasible within tolerances.
In the last two cases Gurobi may reject the MIP start at first but then may go back and try the MIP start again after presolve. In the log you will then see:
Another try with MIP start
If you want to diagnose an infeasible MIP start, you can try fixing the variables in the model to their values in your MIP start (by setting both their lower and upper bound attributes to the MIP start value).
If the resulting MIP model is infeasible, you can then compute an IIS on this model to get additional information that might help to identify the cause of the infeasibility.
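For concreteness, the fixing-and-IIS recipe above might look roughly like the following sketch using the Gurobi Java API (the model file name, the hard-coded start values, and the output file name are placeholders for illustration; adapt them to wherever your MIP start actually comes from):

import gurobi.*;   // package name of the classic Gurobi Java API (recent releases use com.gurobi.gurobi)
import java.util.HashMap;
import java.util.Map;

public class DiagnoseMipStart {
    public static void main(String[] args) throws GRBException {
        // Hypothetical MIP start, keyed by variable name; the values here are placeholders.
        Map<String, Double> mipStart = new HashMap<>();
        mipStart.put("x0", 1.0);
        mipStart.put("x1", 0.0);

        GRBEnv env = new GRBEnv();
        GRBModel model = new GRBModel(env, "model.lp");   // placeholder model file

        // Fix every started variable by setting both bounds to its MIP-start value.
        for (GRBVar v : model.getVars()) {
            Double val = mipStart.get(v.get(GRB.StringAttr.VarName));
            if (val != null) {
                v.set(GRB.DoubleAttr.LB, val);
                v.set(GRB.DoubleAttr.UB, val);
            }
        }

        model.optimize();
        if (model.get(GRB.IntAttr.Status) == GRB.Status.INFEASIBLE) {
            // The fixed model is infeasible, so the MIP start itself cannot be feasible;
            // the IIS points at a small set of conflicting constraints and bounds.
            model.computeIIS();
            model.write("mip_start_conflict.ilp");
        }

        model.dispose();
        env.dispose();
    }
}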
| {"url":"https://support.gurobi.com/hc/en-us/articles/15185996954897-What-does-User-MIP-start-did-not-produce-a-new-incumbent-solution-mean","timestamp":"2024-11-07T20:19:10Z","content_type":"text/html","content_length":"31711","record_id":"<urn:uuid:3a04c8eb-2d67-4ee1-a225-ecc47a99b323>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00717.warc.gz"}
Rotational temperature
Hi there, I have a question that I'm not sure how to go about solving:
I've been given a series of transitions in the microwave spectrum of ^31P^14N and have assigned these J[initial] and J[final] quantum numbers, calculated the bond length etc.
The next part says that when ^31P^14N is observed in the very cold environment of interstellar space by microwave spectroscopy, the second and third lines have equal intensity, and asks what the
rotational temperature of the molecule in this environment would be. Any help would be greatly appreciated. | {"url":"https://www.chemicalforums.com/index.php?topic=79092.0;prev_next=prev","timestamp":"2024-11-14T15:34:28Z","content_type":"text/html","content_length":"49456","record_id":"<urn:uuid:751e9680-0c23-44c5-92b7-2aed712e53fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00721.warc.gz"} |
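A sketch of one possible route for the question above (assumptions: the first observed line is the J = 0 → 1 transition, so the second and third lines arise from the J = 1 and J = 2 lower levels, and the line intensities are taken proportional to the Boltzmann populations of those lower levels):

N_J \propto (2J+1)\, e^{-hcB\,J(J+1)/k_B T}

Setting the intensities of the second and third lines equal (N_1 = N_2):

3\, e^{-2hcB/k_B T} = 5\, e^{-6hcB/k_B T}
\quad\Rightarrow\quad
\ln\frac{5}{3} = \frac{4hcB}{k_B T}
\quad\Rightarrow\quad
T = \frac{4hcB}{k_B \ln(5/3)}

Here B is the rotational constant (in cm^{-1}) already determined from the assigned transitions.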
COS Function in Google Sheets Explained (Definition, Syntax, How to Use It, Examples)
This guide covers everything you need to know about the Google Sheets COS function, including its definition, syntax, use cases, and how to use it.
What is the COS Function? How Does It Work?
The COS function in Google Sheets returns the cosine of a given angle provided in radians. This function is pivotal in trigonometry and other mathematical fields, enabling users to calculate the
cosine without resorting to manual computation or external resources.
To understand how the COS function works, it’s important to familiarize yourself with the concept of cosine in trigonometry. In a right-angled triangle, the cosine of an angle is the ratio of the
length of the adjacent side to the length of the hypotenuse. So, if you’ve got the angle in radians, the COS function can expediently calculate this ratio for you.
Note that the angle must be in radians, not in degrees. If your angle is in degrees, you will have to convert it to radians before using it in the COS function. The RADIANS function can be used for
this conversion.
The COS function works with a single argument – the angle in radians. You can input this angle directly, like COS(1), or reference a cell containing the angle, like COS(A2). Alternatively, you can
use other functions to calculate the angle, such as the PI function, and use that as your argument, like COS(PI()).
The COS function can be used in a wide variety of applications, such as calculating the coordinates of points in a circle or modeling periodic phenomena in physics and engineering. It’s an invaluable
function for anyone who regularly works with mathematical calculations in Google Sheets.
COS Syntax
The syntax and arguments for the function are as follows:
=COS(angle)
The COS function in Google Sheets accepts one argument:
• angle: This is the angle for which you want to find the cosine, expressed in radians. The argument must be a numeric value, and it can be a direct number, a cell reference containing the number,
or the result of another function that yields a numeric result.
Usage notes related to the syntax and arguments:
• The COS function in Google Sheets expects the angle to be provided in radians. If you have an angle in degrees, you need to convert it to radians before using it in the COS function. Google
Sheets provides a RADIANS function for this purpose.
• The COS function will return a #VALUE! error if the angle argument is non-numeric.
• The result of the COS function will be a numeric value between -1 and 1, inclusive.
• The COS function is case-insensitive, meaning you can use it as COS, cos, Cos, etc.
• The COS function can be used as a part of a larger formula or function in Google Sheets as long as the argument provided to it adheres to the rules mentioned above.
• The COS function in Google Sheets is a mathematical function and follows the standard mathematical rules for the cosine function.
Examples of How to Use the COS Function
Here are some practical examples to help you understand how to use the COS function in Google Sheets.
Example #1: Basic COS Function
Let’s start with a simple example. Suppose you want to find the cosine of 45 degrees.
In cell A1, type 45. Then in cell B1, type the formula =COS(RADIANS(A1)).
The RADIANS function is used to convert degrees to radians because the COS function in Google Sheets uses radians. Press Enter, and you will get the result 0.707106781, which is the cosine of 45 degrees.
Example #2: Using COS Function in a Formula
The COS function can also be used in a formula. Let’s say you want to find the length of the adjacent side of a right-angled triangle, given the hypotenuse length and the angle.
In cell A1, type the hypotenuse length, let’s say 10. In cell B1, type the angle in degrees, let’s say 60. Then in cell C1, type the formula =A1*COS(RADIANS(B1)). Press Enter, and you will get the
length of the adjacent side, which is 5.
Example #3: Using COS Function with PI Function
You can use the COS function with the PI function to find the cosine of multiples of π.
For example, if you want to find the cosine of π/4, in cell A1, type the formula =COS(PI()/4). Press Enter, and you will get the result 0.707106781, which is the cosine of π/4.
Example #4: Using COS Function with Other Trigonometric Functions
The COS function can be used with other trigonometric functions.
For example, if you want to prove the Pythagorean identity sin²θ + cos²θ = 1 for a certain angle, you can do so in Google Sheets. In cell A1, type the angle in degrees, let’s say 30. Then in cell B1,
type the formula =SIN(RADIANS(A1))^2+COS(RADIANS(A1))^2.
Press Enter to get the result 1, proving the Pythagorean identity.
Why Is COS Not Working? Troubleshooting Common Errors
If you are having trouble with the COS function in Google Sheets, there are several common errors you might encounter. Knowing how to identify these errors, understanding their causes, and learning
how to fix them can help you use the COS function more effectively.
#VALUE! Error
Cause: This error generally occurs when the function’s input argument is non-numeric. Since the COS function only works with numeric values, any text or string value will cause the function to return
the #VALUE! error.
Solution: Check the cell reference or the argument that you are using in the COS function. Make sure that it is a numeric value. If it’s a cell reference, ensure it contains a number. If you’re
directly inputting the value, confirm that you’re entering a number, not text.
#DIV/0! Error
Cause: While this error is less common with the COS function, it can still occur in certain situations. The #DIV/0! error typically happens when a formula tries to divide a number by zero.
Solution: Since the COS function does not involve division, this error is likely due to an issue with other parts of your formula. Review the entire formula to identify any divisions by zero and
correct them.
#NUM! Error
Cause: This error happens when the function’s input argument is outside the function’s acceptable range. For the COS function, the argument should be a real number.
Solution: Check the argument you are using in the COS function to ensure it is a real number. If it is a cell reference, ensure the cell contains a real number.
#N/A Error
Cause: This error typically appears when Google Sheets cannot find the input or value you referenced in your formula.
Solution: Check your formula to ensure all references point to the correct cells and ranges. If you’re using a direct input, verify that the value is valid for the COS function.
#REF! Error
Cause: The #REF! error usually occurs when a formula references a cell that does not exist. This can happen if you delete a row, column, or cell that your formula is referencing.
Solution: Review your formula to ensure all cell references are correct. If you have deleted a row, column, or cell, you may need to adjust your formula or undo the deletion.
These are some of the most common errors you might encounter while using the COS function in Google Sheets. By understanding their causes and knowing how to solve them, you can ensure that your COS
function works correctly.
Using COS With Other Google Sheets Functions
Combining the COS function with other Google Sheets functions can help you perform more complex calculations and analysis. Here are a few examples:
With SUM
Usage: The SUM function adds all the numbers in a range of cells. You can use it with the COS function to find the sum of the cosine values of a range of numbers.
Example: Suppose you want to find the sum of the cosine values of the numbers from 1 to 3. You can use the following formula:
=SUM(COS(1), COS(2), COS(3))
This will return the sum of the cosine values of 1, 2, and 3.
With AVERAGE
Usage: The AVERAGE function calculates the average of a range of numbers. You can use it with the COS function to find the average of the cosine values of a range of numbers.
Example: Suppose you want to find the average of the cosine values of the numbers from 1 to 3. You can use the following formula:
=AVERAGE(COS(1), COS(2), COS(3))
This will return the average of the cosine values of 1, 2, and 3.
With PI
Usage: The PI function returns the value of Pi. You can use it with the COS function to find the cosine of Pi.
Example: Suppose you want to find the cosine of Pi. You can use the following formula:
=COS(PI())
This will return the cosine of Pi, which is -1.
With RADIANS
Usage: The RADIANS function converts degrees to radians. You can use it with the COS function to find the cosine of an angle given in degrees.
Example: Suppose you want to find the cosine of 45 degrees. You can use the following formula:
=COS(RADIANS(45))
This will return the cosine of 45 degrees, which is approximately 0.7071.
For more details on the COS function, check out the official documentation at the Google Docs Editors Help Center. | {"url":"https://spreadsheetdaddy.com/google-sheets/functions/cos","timestamp":"2024-11-15T00:23:21Z","content_type":"text/html","content_length":"224936","record_id":"<urn:uuid:d4ce44a6-e40d-49fc-b105-65f915395452>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00332.warc.gz"} |
Class DefaultMutableTreeNode
□ javax.swing.tree.DefaultMutableTreeNode
All Implemented Interfaces:
Serializable, Cloneable, MutableTreeNode, TreeNode
Direct Known Subclasses:
public class DefaultMutableTreeNode
extends Object
implements Cloneable, MutableTreeNode, Serializable
A DefaultMutableTreeNode is a general-purpose node in a tree data structure. For examples of using default mutable tree nodes, see How to Use Trees in The Java Tutorial.
A tree node may have at most one parent and 0 or more children. DefaultMutableTreeNode provides operations for examining and modifying a node's parent and children and also operations for
examining the tree that the node is a part of. A node's tree is the set of all nodes that can be reached by starting at the node and following all the possible links to parents and children. A
node with no parent is the root of its tree; a node with no children is a leaf. A tree may consist of many subtrees, each node acting as the root for its own subtree.
This class provides enumerations for efficiently traversing a tree or subtree in various orders or for following the path between two nodes. A DefaultMutableTreeNode may also hold a reference to
a user object, the use of which is left to the user. Asking a DefaultMutableTreeNode for its string representation with toString() returns the string representation of its user object.
This is not a thread safe class. If you intend to use a DefaultMutableTreeNode (or a tree of TreeNodes) in more than one thread, you need to do your own synchronizing. A good convention to adopt is synchronizing on the root node of a tree.
While DefaultMutableTreeNode implements the MutableTreeNode interface and will allow you to add in any implementation of MutableTreeNode, not all of the methods in DefaultMutableTreeNode will be applicable to all MutableTreeNode implementations. Especially with some of the enumerations that are provided, using some of these methods assumes the DefaultMutableTreeNode contains only DefaultMutableTreeNode instances. All of the TreeNode/MutableTreeNode methods will behave as defined no matter what implementations are added.
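As a quick orientation before the reference tables below, here is a small usage sketch; the tree contents are invented for illustration, while the calls are the ones documented on this page:

import java.util.Enumeration;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.TreeNode;

public class DefaultMutableTreeNodeDemo {
    public static void main(String[] args) {
        // Each node wraps a String user object; toString() delegates to it.
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("book");
        DefaultMutableTreeNode chapter1 = new DefaultMutableTreeNode("chapter 1");
        DefaultMutableTreeNode chapter2 = new DefaultMutableTreeNode("chapter 2");
        root.add(chapter1);
        root.add(chapter2);
        chapter1.add(new DefaultMutableTreeNode("section 1.1"));
        chapter1.add(new DefaultMutableTreeNode("section 1.2"));

        // Preorder traversal visits: book, chapter 1, section 1.1, section 1.2, chapter 2.
        Enumeration<TreeNode> e = root.preorderEnumeration();
        while (e.hasMoreElements()) {
            DefaultMutableTreeNode node = (DefaultMutableTreeNode) e.nextElement();
            // getLevel() is the distance from the root, handy for indentation.
            System.out.println(node.getLevel() + ": " + node);
        }

        System.out.println("leaves: " + root.getLeafCount());  // 3
        System.out.println("depth: " + root.getDepth());       // 2
    }
}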
Warning: Serialized objects of this class will not be compatible with future Swing releases. The current serialization support is appropriate for short term storage or RMI between applications
running the same version of Swing. As of 1.4, support for long term storage of all JavaBeans™ has been added to the java.beans package. Please see XMLEncoder.
See Also:
□ Constructor Summary
Constructor Description
DefaultMutableTreeNode() Creates a tree node that has no parent and no children, but which allows children.
DefaultMutableTreeNode(Object userObject) Creates a tree node with no parent, no children, but which allows children, and initializes it with the specified user object.
DefaultMutableTreeNode(Object userObject, boolean allowsChildren) Creates a tree node with no parent, no children, initialized with the specified user object, and that allows children only if specified.
□ Method Summary
All Methods Instance Methods Concrete Methods
Modifier and Type Method Description
void add(MutableTreeNode newChild) Removes newChild from its parent and makes it a child of this node by adding it to the end of this node's child array.
Enumeration<TreeNode> breadthFirstEnumeration() Creates and returns an enumeration that traverses the subtree rooted at this node in breadth-first order.
Enumeration<TreeNode> children() Creates and returns a forward-order enumeration of this node's children.
Object clone() Overridden to make clone public.
Enumeration<TreeNode> depthFirstEnumeration() Creates and returns an enumeration that traverses the subtree rooted at this node in depth-first order.
boolean getAllowsChildren() Returns true if this node is allowed to have children.
TreeNode getChildAfter(TreeNode aChild) Returns the child in this node's child array that immediately follows aChild, which must be a child of this node.
TreeNode getChildAt(int index) Returns the child at the specified index in this node's child array.
TreeNode getChildBefore(TreeNode aChild) Returns the child in this node's child array that immediately precedes aChild, which must be a child of this node.
int getChildCount() Returns the number of children of this node.
int getDepth() Returns the depth of the tree rooted at this node -- the longest distance from this node to a leaf.
TreeNode getFirstChild() Returns this node's first child.
DefaultMutableTreeNode getFirstLeaf() Finds and returns the first leaf that is a descendant of this node -- either this node or its first child's first leaf.
int getIndex(TreeNode aChild) Returns the index of the specified child in this node's child array.
TreeNode getLastChild() Returns this node's last child.
DefaultMutableTreeNode getLastLeaf() Finds and returns the last leaf that is a descendant of this node -- either this node or its last child's last leaf.
int getLeafCount() Returns the total number of leaves that are descendants of this node.
int getLevel() Returns the number of levels above this node -- the distance from the root to this node.
DefaultMutableTreeNode getNextLeaf() Returns the leaf after this node or null if this node is the last leaf in the tree.
DefaultMutableTreeNode getNextNode() Returns the node that follows this node in a preorder traversal of this node's tree.
DefaultMutableTreeNode getNextSibling() Returns the next sibling of this node in the parent's children array.
TreeNode getParent() Returns this node's parent or null if this node has no parent.
TreeNode[] getPath() Returns the path from the root, to get to this node.
protected TreeNode[] getPathToRoot(TreeNode aNode, int depth) Builds the parents of node up to and including the root node, where the original node is the last element in the returned array.
DefaultMutableTreeNode getPreviousLeaf() Returns the leaf before this node or null if this node is the first leaf in the tree.
DefaultMutableTreeNode getPreviousNode() Returns the node that precedes this node in a preorder traversal of this node's tree.
DefaultMutableTreeNode getPreviousSibling() Returns the previous sibling of this node in the parent's children array.
TreeNode getRoot() Returns the root of the tree that contains this node.
TreeNode getSharedAncestor(DefaultMutableTreeNode aNode) Returns the nearest common ancestor to this node and aNode.
int getSiblingCount() Returns the number of siblings of this node.
Object getUserObject() Returns this node's user object.
Object[] getUserObjectPath() Returns the user object path, from the root, to get to this node.
void insert(MutableTreeNode newChild, int childIndex) Removes newChild from its present parent (if it has a parent), sets the child's parent to this node, and then adds the child to this node's child array at index childIndex.
boolean isLeaf() Returns true if this node has no children.
boolean isNodeAncestor(TreeNode anotherNode) Returns true if anotherNode is an ancestor of this node -- if it is this node, this node's parent, or an ancestor of this node's parent.
boolean isNodeChild(TreeNode aNode) Returns true if aNode is a child of this node.
boolean isNodeDescendant(DefaultMutableTreeNode anotherNode) Returns true if anotherNode is a descendant of this node -- if it is this node, one of this node's children, or a descendant of one of this node's children.
boolean isNodeRelated(DefaultMutableTreeNode aNode) Returns true if and only if aNode is in the same tree as this node.
boolean isNodeSibling(TreeNode anotherNode) Returns true if anotherNode is a sibling of (has the same parent as) this node.
boolean isRoot() Returns true if this node is the root of the tree.
Enumeration<TreeNode> pathFromAncestorEnumeration(TreeNode ancestor) Creates and returns an enumeration that follows the path from ancestor to this node.
Enumeration<TreeNode> postorderEnumeration() Creates and returns an enumeration that traverses the subtree rooted at this node in postorder.
Enumeration<TreeNode> preorderEnumeration() Creates and returns an enumeration that traverses the subtree rooted at this node in preorder.
void remove(int childIndex) Removes the child at the specified index from this node's children and sets that node's parent to null.
void remove(MutableTreeNode aChild) Removes aChild from this node's child array, giving it a null parent.
void removeAllChildren() Removes all of this node's children, setting their parents to null.
void removeFromParent() Removes the subtree rooted at this node from the tree, giving this node a null parent.
void setAllowsChildren(boolean allows) Determines whether or not this node is allowed to have children.
void setParent(MutableTreeNode newParent) Sets this node's parent to newParent but does not change the parent's child array.
void setUserObject(Object userObject) Sets the user object for this node to userObject.
String toString() Returns the result of sending toString() to this node's user object, or the empty string if the node has no user object.
□ Field Detail
☆ EMPTY_ENUMERATION
public static final Enumeration<TreeNode> EMPTY_ENUMERATION
An enumeration that is always empty. This is used when an enumeration of a leaf node's children is requested.
☆ parent
protected MutableTreeNode parent
this node's parent, or null if this node has no parent
☆ children
protected Vector<TreeNode> children
array of children, may be null if this node has no children
☆ userObject
protected transient Object userObject
optional user object
☆ allowsChildren
protected boolean allowsChildren
true if the node is able to have children
□ Constructor Detail
☆ DefaultMutableTreeNode
public DefaultMutableTreeNode()
Creates a tree node that has no parent and no children, but which allows children.
☆ DefaultMutableTreeNode
public DefaultMutableTreeNode(Object userObject)
Creates a tree node with no parent, no children, but which allows children, and initializes it with the specified user object.
userObject - an Object provided by the user that constitutes the node's data
☆ DefaultMutableTreeNode
public DefaultMutableTreeNode(Object userObject,
boolean allowsChildren)
Creates a tree node with no parent, no children, initialized with the specified user object, and that allows children only if specified.
userObject - an Object provided by the user that constitutes the node's data
allowsChildren - if true, the node is allowed to have child nodes -- otherwise, it is always a leaf node
□ Method Detail
☆ insert
public void insert(MutableTreeNode newChild,
int childIndex)
Removes newChild from its present parent (if it has a parent), sets the child's parent to this node, and then adds the child to this node's child array at index childIndex. newChild must
not be null and must not be an ancestor of this node.
Specified by:
insert in interface MutableTreeNode
newChild - the MutableTreeNode to insert under this node
childIndex - the index in this node's child array where this node is to be inserted
ArrayIndexOutOfBoundsException - if childIndex is out of bounds
IllegalArgumentException - if newChild is null or is an ancestor of this node
IllegalStateException - if this node does not allow children
See Also:
☆ remove
public void remove(int childIndex)
Removes the child at the specified index from this node's children and sets that node's parent to null. The child node to remove must be a MutableTreeNode.
Specified by:
remove in interface MutableTreeNode
childIndex - the index in this node's child array of the child to remove
ArrayIndexOutOfBoundsException - if childIndex is out of bounds
☆ setParent
public void setParent(MutableTreeNode newParent)
Sets this node's parent to newParent but does not change the parent's child array. This method is called from insert() and remove() to reassign a child's parent, it should not be messaged
from anywhere else.
Specified by:
setParent in interface MutableTreeNode
newParent - this node's new parent
☆ getParent
public TreeNode getParent()
Returns this node's parent or null if this node has no parent.
Specified by:
getParent in interface TreeNode
this node's parent TreeNode, or null if this node has no parent
☆ getChildAt
public TreeNode getChildAt(int index)
Returns the child at the specified index in this node's child array.
Specified by:
getChildAt in interface TreeNode
index - an index into this node's child array
the TreeNode in this node's child array at the specified index
ArrayIndexOutOfBoundsException - if index is out of bounds
☆ getChildCount
public int getChildCount()
Returns the number of children of this node.
Specified by:
getChildCount in interface TreeNode
an int giving the number of children of this node
☆ getIndex
public int getIndex(TreeNode aChild)
Returns the index of the specified child in this node's child array. If the specified node is not a child of this node, returns -1. This method performs a linear search and is O(n) where
n is the number of children.
Specified by:
getIndex in interface TreeNode
aChild - the TreeNode to search for among this node's children
an int giving the index of the node in this node's child array, or -1 if the specified node is a not a child of this node
IllegalArgumentException - if aChild is null
☆ children
public Enumeration<TreeNode> children()
Creates and returns a forward-order enumeration of this node's children. Modifying this node's child array invalidates any child enumerations created before the modification.
Specified by:
children in interface TreeNode
an Enumeration of this node's children
☆ setAllowsChildren
public void setAllowsChildren(boolean allows)
Determines whether or not this node is allowed to have children. If allows is false, all of this node's children are removed.
Note: By default, a node allows children.
allows - true if this node is allowed to have children
☆ getAllowsChildren
public boolean getAllowsChildren()
Returns true if this node is allowed to have children.
Specified by:
getAllowsChildren in interface TreeNode
true if this node allows children, else false
☆ setUserObject
public void setUserObject(Object userObject)
Sets the user object for this node to userObject.
Specified by:
setUserObject in interface MutableTreeNode
userObject - the Object that constitutes this node's user-specified data
See Also:
getUserObject(), toString()
☆ removeFromParent
public void removeFromParent()
Removes the subtree rooted at this node from the tree, giving this node a null parent. Does nothing if this node is the root of its tree.
Specified by:
removeFromParent in interface MutableTreeNode
☆ remove
public void remove(MutableTreeNode aChild)
Removes aChild from this node's child array, giving it a null parent.
Specified by:
remove in interface MutableTreeNode
aChild - a child of this node to remove
IllegalArgumentException - if aChild is null or is not a child of this node
☆ removeAllChildren
public void removeAllChildren()
Removes all of this node's children, setting their parents to null. If this node has no children, this method does nothing.
☆ add
public void add(MutableTreeNode newChild)
Removes newChild from its parent and makes it a child of this node by adding it to the end of this node's child array.
See Also:
☆ isNodeAncestor
public boolean isNodeAncestor(TreeNode anotherNode)
Returns true if anotherNode is an ancestor of this node -- if it is this node, this node's parent, or an ancestor of this node's parent. (Note that a node is considered an ancestor of
itself.) If anotherNode is null, this method returns false. This operation is at worst O(h) where h is the distance from the root to this node.
anotherNode - node to test as an ancestor of this node
true if this node is a descendant of anotherNode
See Also:
isNodeDescendant(javax.swing.tree.DefaultMutableTreeNode), getSharedAncestor(javax.swing.tree.DefaultMutableTreeNode)
☆ isNodeDescendant
public boolean isNodeDescendant(DefaultMutableTreeNode anotherNode)
Returns true if anotherNode is a descendant of this node -- if it is this node, one of this node's children, or a descendant of one of this node's children. Note that a node is considered
a descendant of itself. If anotherNode is null, returns false. This operation is at worst O(h) where h is the distance from the root to anotherNode.
anotherNode - node to test as descendant of this node
true if this node is an ancestor of anotherNode
See Also:
isNodeAncestor(javax.swing.tree.TreeNode), getSharedAncestor(javax.swing.tree.DefaultMutableTreeNode)
☆ getSharedAncestor
public TreeNode getSharedAncestor(DefaultMutableTreeNode aNode)
Returns the nearest common ancestor to this node and aNode. Returns null, if no such ancestor exists -- if this node and aNode are in different trees or if aNode is null. A node is
considered an ancestor of itself.
aNode - node to find common ancestor with
nearest ancestor common to this node and aNode, or null if none
See Also:
isNodeAncestor(javax.swing.tree.TreeNode), isNodeDescendant(javax.swing.tree.DefaultMutableTreeNode)
☆ isNodeRelated
public boolean isNodeRelated(DefaultMutableTreeNode aNode)
Returns true if and only if aNode is in the same tree as this node. Returns false if aNode is null.
aNode - node to find common ancestor with
true if aNode is in the same tree as this node; false if aNode is null
See Also:
getSharedAncestor(javax.swing.tree.DefaultMutableTreeNode), getRoot()
☆ getDepth
public int getDepth()
Returns the depth of the tree rooted at this node -- the longest distance from this node to a leaf. If this node has no children, returns 0. This operation is much more expensive than
getLevel() because it must effectively traverse the entire tree rooted at this node.
the depth of the tree whose root is this node
See Also:
☆ getLevel
public int getLevel()
Returns the number of levels above this node -- the distance from the root to this node. If this node is the root, returns 0.
the number of levels above this node
See Also:
☆ getPath
public TreeNode[] getPath()
Returns the path from the root, to get to this node. The last element in the path is this node.
an array of TreeNode objects giving the path, where the first element in the path is the root and the last element is this node.
☆ getPathToRoot
protected TreeNode[] getPathToRoot(TreeNode aNode,
int depth)
Builds the parents of node up to and including the root node, where the original node is the last element in the returned array. The length of the returned array gives the node's depth in
the tree.
aNode - the TreeNode to get the path for
depth - an int giving the number of steps already taken towards the root (on recursive calls), used to size the returned array
an array of TreeNodes giving the path from the root to the specified node
☆ getUserObjectPath
public Object[] getUserObjectPath()
Returns the user object path, from the root, to get to this node. If some of the TreeNodes in the path have null user objects, the returned path will contain nulls.
the user object path, from the root, to get to this node
☆ getRoot
public TreeNode getRoot()
Returns the root of the tree that contains this node. The root is the ancestor with a null parent.
the root of the tree that contains this node
See Also:
☆ isRoot
public boolean isRoot()
Returns true if this node is the root of the tree. The root is the only node in the tree with a null parent; every tree has exactly one root.
true if this node is the root of its tree
☆ getNextNode
public DefaultMutableTreeNode getNextNode()
Returns the node that follows this node in a preorder traversal of this node's tree. Returns null if this node is the last node of the traversal. This is an inefficient way to traverse
the entire tree; use an enumeration, instead.
the node that follows this node in a preorder traversal, or null if this node is last
See Also:
☆ getPreviousNode
public DefaultMutableTreeNode getPreviousNode()
Returns the node that precedes this node in a preorder traversal of this node's tree. Returns null if this node is the first node of the traversal -- the root of the tree. This is an
inefficient way to traverse the entire tree; use an enumeration, instead.
the node that precedes this node in a preorder traversal, or null if this node is the first
See Also:
☆ preorderEnumeration
public Enumeration<TreeNode> preorderEnumeration()
Creates and returns an enumeration that traverses the subtree rooted at this node in preorder. The first node returned by the enumeration's nextElement() method is this node.
Modifying the tree by inserting, removing, or moving a node invalidates any enumerations created before the modification.
an enumeration for traversing the tree in preorder
See Also:
☆ postorderEnumeration
public Enumeration<TreeNode> postorderEnumeration()
Creates and returns an enumeration that traverses the subtree rooted at this node in postorder. The first node returned by the enumeration's nextElement() method is the leftmost leaf. This is the same as a depth-first traversal.
Modifying the tree by inserting, removing, or moving a node invalidates any enumerations created before the modification.
an enumeration for traversing the tree in postorder
See Also:
depthFirstEnumeration(), preorderEnumeration()
☆ breadthFirstEnumeration
public Enumeration<TreeNode> breadthFirstEnumeration()
Creates and returns an enumeration that traverses the subtree rooted at this node in breadth-first order. The first node returned by the enumeration's nextElement() method is this node.
Modifying the tree by inserting, removing, or moving a node invalidates any enumerations created before the modification.
an enumeration for traversing the tree in breadth-first order
See Also:
☆ depthFirstEnumeration
public Enumeration<TreeNode> depthFirstEnumeration()
Creates and returns an enumeration that traverses the subtree rooted at this node in depth-first order. The first node returned by the enumeration's nextElement() method is the leftmost leaf. This is the same as a postorder traversal.
Modifying the tree by inserting, removing, or moving a node invalidates any enumerations created before the modification.
an enumeration for traversing the tree in depth-first order
See Also:
breadthFirstEnumeration(), postorderEnumeration()
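To make the four traversal orders concrete, here is a small sketch (the tree contents are invented; the expected orders follow directly from the definitions above):

import java.util.Enumeration;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.TreeNode;

public class TraversalOrderDemo {
    public static void main(String[] args) {
        // root has children A and B; A has children A1 and A2.
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("root");
        DefaultMutableTreeNode a = new DefaultMutableTreeNode("A");
        DefaultMutableTreeNode b = new DefaultMutableTreeNode("B");
        root.add(a);
        root.add(b);
        a.add(new DefaultMutableTreeNode("A1"));
        a.add(new DefaultMutableTreeNode("A2"));

        // Expected orders for this tree:
        //   preorderEnumeration():     root A A1 A2 B
        //   postorderEnumeration():    A1 A2 A B root   (identical to depthFirstEnumeration())
        //   breadthFirstEnumeration(): root A B A1 A2
        print("preorder:      ", root.preorderEnumeration());
        print("postorder:     ", root.postorderEnumeration());
        print("breadth-first: ", root.breadthFirstEnumeration());
    }

    private static void print(String label, Enumeration<TreeNode> e) {
        StringBuilder sb = new StringBuilder(label);
        while (e.hasMoreElements()) {
            sb.append(e.nextElement()).append(' ');
        }
        System.out.println(sb);
    }
}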
☆ pathFromAncestorEnumeration
public Enumeration<TreeNode> pathFromAncestorEnumeration(TreeNode ancestor)
Creates and returns an enumeration that follows the path from ancestor to this node. The enumeration's nextElement() method first returns ancestor, then the child of ancestor that is an ancestor of this node, and so on, and finally returns this node. Creation of the enumeration is O(m) where m is the number of nodes between this node and ancestor, inclusive. Each nextElement() message is O(1).
Modifying the tree by inserting, removing, or moving a node invalidates any enumerations created before the modification.
ancestor - the node to start enumeration from
an enumeration for following the path from an ancestor of this node to this one
IllegalArgumentException - if ancestor is not an ancestor of this node
See Also:
isNodeAncestor(javax.swing.tree.TreeNode), isNodeDescendant(javax.swing.tree.DefaultMutableTreeNode)
☆ isNodeChild
public boolean isNodeChild(TreeNode aNode)
Returns true if aNode is a child of this node. If aNode is null, this method returns false.
aNode - the node to test as a child of this node
true if aNode is a child of this node; false if aNode is null
☆ getFirstChild
public TreeNode getFirstChild()
Returns this node's first child. If this node has no children, throws NoSuchElementException.
the first child of this node
NoSuchElementException - if this node has no children
☆ getLastChild
public TreeNode getLastChild()
Returns this node's last child. If this node has no children, throws NoSuchElementException.
the last child of this node
NoSuchElementException - if this node has no children
☆ getChildAfter
public TreeNode getChildAfter(TreeNode aChild)
Returns the child in this node's child array that immediately follows aChild, which must be a child of this node. If aChild is the last child, returns null. This method performs a linear
search of this node's children for aChild and is O(n) where n is the number of children; to traverse the entire array of children, use an enumeration instead.
aChild - the child node to look for next child after it
the child of this node that immediately follows aChild
IllegalArgumentException - if aChild is null or is not a child of this node
See Also:
☆ getChildBefore
public TreeNode getChildBefore(TreeNode aChild)
Returns the child in this node's child array that immediately precedes aChild, which must be a child of this node. If aChild is the first child, returns null. This method performs a
linear search of this node's children for aChild and is O(n) where n is the number of children.
aChild - the child node to look for previous child before it
the child of this node that immediately precedes aChild
IllegalArgumentException - if aChild is null or is not a child of this node
☆ isNodeSibling
public boolean isNodeSibling(TreeNode anotherNode)
Returns true if anotherNode is a sibling of (has the same parent as) this node. A node is its own sibling. If anotherNode is null, returns false.
anotherNode - node to test as sibling of this node
true if anotherNode is a sibling of this node
☆ getSiblingCount
public int getSiblingCount()
Returns the number of siblings of this node. A node is its own sibling (if it has no parent or no siblings, this method returns 1).
the number of siblings of this node
☆ getNextSibling
public DefaultMutableTreeNode getNextSibling()
Returns the next sibling of this node in the parent's children array. Returns null if this node has no parent or is the parent's last child. This method performs a linear search that is O
(n) where n is the number of children; to traverse the entire array, use the parent's child enumeration instead.
the sibling of this node that immediately follows this node
See Also:
☆ getPreviousSibling
public DefaultMutableTreeNode getPreviousSibling()
Returns the previous sibling of this node in the parent's children array. Returns null if this node has no parent or is the parent's first child. This method performs a linear search that
is O(n) where n is the number of children.
the sibling of this node that immediately precedes this node
☆ isLeaf
public boolean isLeaf()
Returns true if this node has no children. To distinguish between nodes that have no children and nodes that cannot have children (e.g. to distinguish files from empty directories), use this method in conjunction with getAllowsChildren().
Specified by:
isLeaf in interface TreeNode
true if this node has no children
See Also:
☆ getFirstLeaf
public DefaultMutableTreeNode getFirstLeaf()
Finds and returns the first leaf that is a descendant of this node -- either this node or its first child's first leaf. Returns this node if it is a leaf.
the first leaf in the subtree rooted at this node
See Also:
isLeaf(), isNodeDescendant(javax.swing.tree.DefaultMutableTreeNode)
☆ getLastLeaf
public DefaultMutableTreeNode getLastLeaf()
Finds and returns the last leaf that is a descendant of this node -- either this node or its last child's last leaf. Returns this node if it is a leaf.
the last leaf in the subtree rooted at this node
See Also:
isLeaf(), isNodeDescendant(javax.swing.tree.DefaultMutableTreeNode)
☆ getNextLeaf
public DefaultMutableTreeNode getNextLeaf()
Returns the leaf after this node or null if this node is the last leaf in the tree.
In this implementation of the MutableTreeNode interface, this operation is very inefficient. In order to determine the next node, this method first performs a linear search in the parent's
child-list in order to find the current node.
That implementation makes the operation suitable for short traversals from a known position. But to traverse all of the leaves in the tree, you should use depthFirstEnumeration to
enumerate the nodes in the tree and use isLeaf on each node to determine which are leaves.
returns the next leaf past this node
See Also:
depthFirstEnumeration(), isLeaf()
☆ getPreviousLeaf
public DefaultMutableTreeNode getPreviousLeaf()
Returns the leaf before this node or null if this node is the first leaf in the tree.
In this implementation of the MutableTreeNode interface, this operation is very inefficient. In order to determine the previous node, this method first performs a linear search in the
parent's child-list in order to find the current node.
That implementation makes the operation suitable for short traversals from a known position. But to traverse all of the leaves in the tree, you should use depthFirstEnumeration to
enumerate the nodes in the tree and use isLeaf on each node to determine which are leaves.
returns the leaf before this node
See Also:
depthFirstEnumeration(), isLeaf()
☆ getLeafCount
public int getLeafCount()
Returns the total number of leaves that are descendants of this node. If this node is a leaf, returns 1. This method is O(n) where n is the number of descendants of this node.
the number of leaves beneath this node
See Also:
☆ toString
public String toString()
Returns the result of sending toString() to this node's user object, or the empty string if the node has no user object.
toString in class Object
a string representation of the object.
See Also:
☆ clone
public Object clone()
Overridden to make clone public. Returns a shallow copy of this node; the new node has no parent or children and has a reference to the same user object, if any.
clone in class Object
a copy of this node
See Also: | {"url":"http://nw.tsuda.ac.jp/doc/jdk-11.0.12-docs/api/java.desktop/javax/swing/tree/DefaultMutableTreeNode.html","timestamp":"2024-11-05T23:32:01Z","content_type":"text/html","content_length":"98300","record_id":"<urn:uuid:7296b4a3-1a8b-4549-b113-fb2574b59488>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00238.warc.gz"} |
TopCoder Solution SRM483-D2-1000 Statement for BestApproximationDiv2 C,C++, Java, Js and Python
Problem Statement for BestApproximationDiv2
Problem link- https://community.topcoder.com/stat?c=problem_statement&pm=11073&rd=14236
Problem Statement
Elly is not a big fan of mathematical constants. Most of them are infinite, and therefore hard to memorize. Fractions, on the other hand, are often easier to remember and can provide good
approximations. For example, 22/7 = 3.1428... is one way to approximate Pi. Unfortunately, it's only accurate to 2 places after the decimal point, so it doesn't help at all. A slightly
better example is 355/113 = 3.14159292... which is correct up to 6 digits after the decimal point.
Elly understands that working with infinite decimal fractions is going to be very difficult, so she first wants to find a good way to approximate floating point numbers with decimal
representations that are finite. Your task is to help her in this mission. You will be given a String number containing the decimal representation of a non-negative fraction strictly less
than 1 (possibly with trailing zeros). More precisely, number will be formatted "0.dd...d" (quotes for clarity) where each d is a decimal digit ('0'-'9') and the number of d's is between 1
and 48, inclusive.
Given a fraction F = A/B, where 0 <= A < B, its quality of approximation with respect to number is calculated as follows:
• Let S be the decimal fraction (infinite or finite) representation of F.
• Let N be the number of digits after the decimal point in number. If number has trailing zeros, all of them are considered to be significant and are counted towards N.
• If S is infinite or the number of digits after the decimal point in S is greater than N, only consider the first N decimals after the decimal point in S. Truncate the rest of the digits
without performing any kind of rounding.
• If the number of digits after the decimal point in S is less than N, append trailing zeroes to the right side until there are exactly N digits after the decimal point.
• The quality of approximation is the number of digits in the longest common prefix of S and number. The longest common prefix of two numbers is the longest string which is a prefix of the
decimal representations of both numbers with no extra leading zeroes. For example, "3.14" is the longest common prefix of 3.1428 and 3.1415.
Elly doesn't like long approximations either, so you are only allowed to use fractions where the denominator is less than or equal to maxDen. Find an approximation F = A/B of number such
that 1 <= B <= maxDen, 0 <= A < B, and the quality of approximation is maximized. Return a String formatted "A/B has X exact digits" (quotes for clarity) where A/B is the approximation you
have found and X is its quality. If there are several such approximations, choose the one with the smallest denominator among all of them. If there is still a tie, choose the one among those
with the smallest numerator.
Class: BestApproximationDiv2
Method: findFraction
Parameters: int, String
Returns: String
Method signature: String findFraction(int maxDen, String number)
(be sure your method is public)
- maxDen will be between 1 and 100,000, inclusive.
- number will contain between 3 and 50 characters, inclusive.
- number will consist of a digit '0', followed by a period ('.'), followed by one or more digits ('0'-'9').
Returns: "1/7 has 3 exact digits"
3 plus the current approximation yields an approximation of Pi.
Returns: "0/1 has 1 exact digits"
Not a lot of options here.
Returns: "10/81 has 8 exact digits"
Returns: "3/7 has 3 exact digits"
This one can be represented in more than one way. Be sure to choose the one with the lowest denominator.
Note that 21/50 is an even more accurate approximation (it is, in fact, exact), but it has the same number of matching digits (all three) and has a greater denominator.
Returns: "21/50 has 4 exact digits"
All trailing zeros in number are significant.
Returns: "16/113 has 7 exact digits"
A better approximation for the decimal part of Pi.
Code Examples
#1 Code Example with C++ Programming
Code - C++ Programming
#include <string>
#include <sstream>
using namespace std;

class BestApproximationDiv2 {
    // Counts the matching digits of a/b against num ("0.dd...d"):
    // 1 for the leading "0", plus one for every decimal of a/b that equals the corresponding decimal of num.
    int div(int a, int b, const string &num) {
        int ret = 1;
        for (int i = 2; i < (int)num.length(); ++i) {
            int t = a * 10 - b * (num[i] - '0');
            if (t >= b || t < 0)
                break;          // the next decimal of a/b differs from num's decimal
            ++ret;
            a = t;
        }
        return ret;
    }

public:
    string findFraction(int maxDen, string number) {
        stringstream ss(number);
        long double nnum;
        ss >> nnum;
        int mx = -1, numR = 0, denR = 1;
        // Denominators and numerators are scanned from large to small, so the ">=" comparisons
        // make the smallest denominator (and then the smallest numerator) win ties.
        for (int den = maxDen, num, cur; den > 0; --den) {
            num = den * nnum + 1;
            cur = div(num, den, number);
            if (cur >= mx) {
                mx = cur;
                numR = num;
                denR = den;
            }
            num -= 1;
            cur = div(num, den, number);
            if (cur >= mx) {
                mx = cur;
                numR = num;
                denR = den;
            }
            num -= 1;
            if (num >= 0) {
                cur = div(num, den, number);
                if (cur >= mx) {
                    mx = cur;
                    numR = num;
                    denR = den;
                }
            }
        }
        string s = to_string(numR) + "/" + to_string(denR) + " has " + to_string(mx) + " exact digits";
        return s;
    }
};
"1/7 has 3 exact digits"
TopCoder Solution SRM483-D2-1000 Statement for BestApproximationDiv2 C,C++, Java, Js and Python ,SRM483-D2-1000,TopCoder Solution | {"url":"https://devsenv.com/example/topcoder-solution-srm483-d2-1000-statement-for-bestapproximationdiv2-c,c++,-java,-js-and-python","timestamp":"2024-11-14T14:56:18Z","content_type":"text/html","content_length":"77939","record_id":"<urn:uuid:dbed0299-6f56-4166-ba84-f616fe052efb>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00661.warc.gz"} |
This information is part of the Modelica Standard Library maintained by the Modelica Association.
Modelica has long been used to model advanced nonlinear control systems. In particular, Modelica allows a semi-automatic treatment of inverse nonlinear plant models. In the fundamental article (Looye et al. 2005, see Literature or Download) this approach is described and several controller structures are presented that utilize an inverse plant model in the controller. This approach is attractive because it results in a systematic procedure to design a controller for the whole operating range of a plant. This is in contrast to standard controller design techniques, which usually design a linear controller for a plant model that is linearized at a specific operating point; the operating range of such controllers is therefore inherently limited.
Up to Modelica 3.2, controllers with inverse plant models could only be defined as continuous-time systems. Via the export mechanism of a Modelica tool they could be exported with solvers embedded in the code and then used as sampled-data systems in other environments. However, it was not possible to re-import the sampled-data system to Modelica.
The synchronous features of Modelica 3.3 together with the Modelica.Clocked library now offer completely new possibilities, so that the inverse model can be designed and evaluated as a sampled-data system within Modelica and a Modelica simulation environment. This approach is demonstrated with a simple example using a nonlinear plant model of a mixing unit (Föllinger O. (1998): Nichtlineare Regelungen I, Oldenbourg Verlag, 8. Auflage, page 279) and utilizing this plant model as a nonlinear feed-forward controller according to (Looye et al. 2005):
A substance A is flowing continuously into a mixing reactor. Due to a catalyst, the substance reacts and splits into several base substances that are continuously removed. The reaction generates
energy and therefore the reactor is cooled with a cooling medium. The cooling temperature T_c(t) in [K] is the primary actuation signal. Substance A is described by its concentration c(t) in [mol/l]
and its temperature T(t) in [K]. The concentration c(t) is the signal to be primarily controlled and the temperature T(t) is the signal that is measured. These equations are collected together in
input/output block Utilities.ComponentsMixingUnit.MixingUnit.
The design of the control system proceeds now in the following steps:
Inverting a model usually means that equations need to be symbolically differentiated and that higher derivatives of the inputs are needed (that are usually not available). One approach is to filter
the inputs, so that a Modelica tool can determine the derivatives of the filtered input from the filter states. The minimum needed filter order is determined by first inverting the continuous-time
plant model from the variable to be primarily controlled (here: "c") to the actuator input (here: "T_c"). This is performed with the help of block Modelica.Blocks.Math.InverseBlockConstraints that
allows connecting an external input to an output in the pre-filter design block Utilities.ComponentsMixingUnit.FilterOrder:
Translating this model will generate the continuous-time inverse plant model. However, a Modelica tool will give an error message that it has to differentiate the model, but this requires the second
derivative of the external input c_ref and this derivative is not available. The conclusion is that a low pass filter of at least second order has to be connected between c_ref and c, for example
Modelica.Blocks.Continuous.Filter. Only filter types should be used that do not have "vibrations" in the time domain for a step input. Therefore, parameter analogFilter of the component should be
selected as CriticalDamping (= only real poles), or Bessel (= nearly no vibrations, but steeper frequency response as CriticalDamping). The cut-off frequency f_cut is manually selected by simulations
of the closed loop system. In the example, a CriticalDamping filter of third order (the third order is selected to get smoother signals) and a cut-off frequency of 1/300 Hz is used.
Design of Controller
The controller for the mixing unit is shown in the diagram layer of block at hand, as well as in the following figure:
It consists of the filter discussed above. The input to the filter is the reference concentration which is filtered by the low pass filter. The output of the filter is used as input to the
concentration c in the inverse plant model. This model computes the desired cooling temperature T_c (which is used as desired cooling temperature at the output of the controller) and the desired
temperature T (which is used as desired value for the feedback controller). This part of the control system is the "feed-forward" part that computes the desired actuator signal. As feedback
controller a simple P-Controller with one gain is used.
This controller could be defined as continuous-time system in previous Modelica versions. However, with Modelica 3.3 it is now also possible to define the controller as sampled data system. For this,
the two inputs are sampled (sample1 and sample2) and the actuator output is hold (hold1). The controller partition is then associated with a periodic clock (via sample2) that has a sample period of 1
s and a solverMethod = "ExplicitEuler". Since the controller partition is a continuous-time system, it is discretized and solved with an explicit Euler method at every clock tick (by integrating from
the previous to the actual time instant of the clock).
Simulation Results
The controller works perfectly if the same parameters are used for the plant and the inverse plant model (the concentration then follows the filtered reference concentration perfectly). Changing all parameters of the
inverse plant model by 50 % (with the exception of parameter e, since the plant is very sensitive to it) still results in a reasonable control behavior, as shown in the next two figures:
The green curve in the upper window is the (clocked) output of the filter, that is, the desired concentration. The red curve in the upper window is the concentration of model mixingUnit, which
is the concentration in the plant. Obviously, the concentration follows the desired one reasonably well. By using a more involved feedback controller, the control error could be substantially reduced. | {"url":"https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Clocked.Examples.Systems.ControlledMixingUnit.html","timestamp":"2024-11-09T06:23:04Z","content_type":"text/html","content_length":"48387","record_id":"<urn:uuid:d9401547-db45-4768-8cba-79ea52676326>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00084.warc.gz"}
Exponential lower bound for static semi-algebraic proofs
Semi-algebraic proof systems were introduced in [1] as extensions of Lovász-Schrijver proof systems [2,3]. These systems are very strong; in particular, they have short proofs of Tseitin's tautologies, the pigeonhole principle, the symmetric knapsack problem and the clique-coloring tautologies [1]. In this paper we study static versions of these systems. We prove an exponential lower bound on the length of proofs in one such system. The same bound for two tree-like (dynamic) systems follows. The proof is based on a lower bound on the "Boolean degree" of Positivstellensatz Calculus refutations of the symmetric knapsack problem.
Original language English
Title of host publication Automata, Languages and Programming - 29th International Colloquium, ICALP 2002, Proceedings
Editors Peter Widmayer, Stephan Eidenbenz, Francisco Triguero, Rafael Morales, Ricardo Conejo, Matthew Hennessy
Pages 257-268
Number of pages 12
State Published - 2002
Externally published Yes
Event 29th International Colloquium on Automata, Languages, and Programming, ICALP 2002 - Malaga, Spain
Duration: 8 Jul 2002 → 13 Jul 2002
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 2380 LNCS
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Conference 29th International Colloquium on Automata, Languages, and Programming, ICALP 2002
Country/Territory Spain
City Malaga
Period 8/07/02 → 13/07/02
| {"url":"https://cris.ariel.ac.il/en/publications/exponential-lower-bound-for-static-semi-algebraic-proofs","timestamp":"2024-11-02T05:40:32Z","content_type":"text/html","content_length":"55338","record_id":"<urn:uuid:7ae1dd22-9d49-403d-9709-dce225d4a0c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00203.warc.gz"}
C. Letellier & J. Maquet,
Analyse de dynamiques chaotiques : une nouvelle approche de l’activité solaire
Bulletin de la Société Française de Physique, 154, 10-13, 2006. Online
Abstract: Since the processes involved in plasma physics are very often nonlinear, it is natural that this discipline can now be approached within the framework of the theory of nonlinear dynamical systems. More than a set of new analysis techniques that would simply be added to the existing ones, chaos theory offers a new way of looking at dynamical behaviours: apparently irregular fluctuations turn out to be ordered in phase space, changes in behaviour are associated with bifurcations, and models that are relatively simple with respect to the physics involved can be obtained. We give here a brief overview of these techniques in two different contexts of plasma physics.
C. Letellier,
Estimating the Shannon entropy: recurrence plots versus symbolic dynamics
Physical Review Letters, 96, 254102, 2006. Online
Abstract: Recurrence plots were first introduced to quantify the recurrence properties of chaotic dynamics. A few years later, the recurrence quantification analysis was introduced to transform
graphical representations into statistical analysis. Among the different measures introduced, a Shannon entropy was found to be correlated with the inverse of the largest Lyapunov exponent. The
discrepancy between this and the usual interpretation of a Shannon entropy is solved here by using a new definition - still based on the recurrence plots - and it is verified that this new
definition is correlated with the largest Lyapunov exponent, as expected from the Pesin conjecture. A comparison with a Shannon entropy computed from symbolic dynamics is also provided.
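As a rough illustration of the symbolic-dynamics side of this comparison (not the recurrence-plot-based definition proposed in the paper), the Shannon entropy of a symbolic sequence can be estimated from the relative frequencies of its symbols. A minimal Python sketch, with a hypothetical symbolic sequence:

from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Estimate the Shannon entropy (in bits per symbol) of a symbolic sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# hypothetical symbolic dynamics obtained from a partition of phase space
sequence = "0110101101001011010"
print(shannon_entropy(sequence))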
C. Letellier, J. Maquet, L. A. Aguirre & R. Gilmore
Evidence for low dimensional chaos in the sunspot cycles,
Astronomy & Astrophysics, 449, 379-387, 2006. Online
Abstract: Sunspot cycles are widely used for investigating solar activity. In 1953 Bracewell argued that it is sometimes desirable to introduce the inversion of the magnetic field polarity, and
that can be done with a sign change at the beginning of each cycle. It will be shown in this paper that, for topological reasons, this so-called Bracewell index is inappropriate and that the
symmetry must be introduced in a more rigorous way by a coordinate transformation. The resulting symmetric dynamics is then favourably compared with a symmetrized phase portrait reconstructed
from the
G. F. V. Amaral, C. Letellier & L. A. Aguirre,
Piecewise affine models of chaotic attractors: the case of the Rössler system,
Chaos, 16, 013115, 2006. Online
Abstract: This paper proposes a procedure by which it is possible to synthesize Rössler and Lorenz dynamics by means of only two affine linear systems and an abrupt switching law. Comparison of
different (valid) switching laws suggests that parameters of such a law behave as co-dimension one bifurcation parameters that can be changed to produce various dynamical regimes equivalent to
those observed with the original systems. Topological analysis is used to characterize the resulting attractors and to compare them with the original attractors. The paper provides guidelines
that are helpful to synthesize other chaotic dynamics by means of switching affine linear systems.
C. Letellier, L. A. Aguirre & J. Maquet,
How the choice of the observable may influence the analysis of non linear dynamical systems,
Communications in Nonlinear Science and Numerical Simulation, 11 (5), 555-576, 2006. Online
Abstract: A great number of techniques developed for studying nonlinear dynamical systems start with the embedding, in a
C. Letellier, E. Roulin & O. E. Rössler,
Inequivalent topologies of chaos in simple equations,
Chaos, Solitons & Fractals, 28, 337-360, 2006. Online
Abstract: In the 1970s, one of us introduced a few simple sets of ordinary differential equations as examples showing different types of chaos. Most of them are now more or less forgotten with the
exception of the so-called Rössler system published in Physics Letters A, 57 (5), 397-398, 1976. In the present paper, we review most of the original systems and classify them using the tools of
modern topological analysis, that is, using the templates and the bounding tori recently introduced by Tsankov and Gilmore in Physical Review Letters, 91 (13), 134104, 2003. Thus, examples of
inequivalent topologies of chaotic attractors are provided in modern spirit. | {"url":"http://www.atomosyd.net/spip.php%3Farticle1+inurl:/local/cache-vignettes/L51xH120/-/W3C/DTD/overlib/skelato/skelato/skelato/skelato/illustra/dist/javascript/spip.php?article17","timestamp":"2024-11-06T08:58:19Z","content_type":"text/html","content_length":"20038","record_id":"<urn:uuid:94472187-7a57-41c4-9cca-5526abcba16b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00422.warc.gz"} |
Price to Cash Flow (P/CF) (2024)
What is Price to Cash Flow?
The Price to Cash Flow Ratio (P/CF) evaluates the valuation of a company’s stock by comparing its share price to the amount of operating cash flow produced.
Unlike the price-to-earnings ratio (P/E), the P/CF ratio removes the impact of non-cash items such as depreciation & amortization (D&A), which makes the metric less prone to manipulation via
discretionary accounting decisions.
How to Calculate Price to Cash Flow Ratio (P/CF)?
The price to cash flow ratio (P/CF) is a common method used to assess the market valuation of publicly-traded companies, or more specifically, to decide if a company is undervalued or overvalued.
The P/CF ratio formula compares the equity value (i.e. market capitalization) of a company to its operating cash flows.
• Market Capitalization: The market capitalization (or “market cap”) is calculated by multiplying the latest closing share price by the total number of diluted shares outstanding.
• Cash Flow from Operations (CFO): While operating cash flow typically refers to cash from operations from the cash flow statement (CFS), other variations of levered cash flow metrics could be
used, instead. On the cash from operations (CFO) section of the CFS, the starting line item is net income, which is adjusted for non-cash items like D&A and changes in net working capital (NWC).
In short, the P/CF represents the amount that investors are currently willing to pay for each dollar of operating cash flow generated by the company.
Learn More → Valuation Multiple
Price to Cash Flow Ratio Formula (P/CF)
The formula for P/CF is simply the market capitalization divided by the operating cash flows of the company.
Price to Cash Flow (P/CF) = Market Capitalization ÷ Cash Flow from Operations
Alternatively, P/CF can be calculated on a per-share basis, in which the latest closing share price is divided by the operating cash flow per share.
Price-to-Cash Flow (P/CF) = Share Price ÷ Operating Cash Flow Per Share
To calculate the operating cash flow per share, there are two financial metrics required:
1. Cash from Operations (CFO): The company’s annual operating cash flow.
2. Total Diluted Shares Outstanding: The total number of total outstanding shares, inclusive of the effect of potentially dilutive securities like options and convertible debt.
By dividing the two figures, we arrive at the operating cash flow on a per-share basis, which must be done to match the numerator (i.e. the market share price).
Note that the share price used in the formula must reflect a "normalized" share price, i.e. one free of abnormal share price movements that temporarily affect the current market valuation.
Otherwise, the P/CF will be skewed by one-time, non-recurring events (e.g. leaked news of a potential acquisition).
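As a small illustration (not part of the original article), the two equivalent forms of the ratio can be computed as follows; all input figures are hypothetical:

def price_to_cash_flow(market_cap, cfo):
    # P/CF = market capitalization / cash flow from operations
    return market_cap / cfo

def price_to_cash_flow_per_share(share_price, cfo, diluted_shares):
    # equivalent per-share form: share price / operating cash flow per share
    return share_price / (cfo / diluted_shares)

share_price = 30.00        # latest closing share price
diluted_shares = 100e6     # total diluted shares outstanding
cfo = 240e6                # cash flow from operations

market_cap = share_price * diluted_shares
print(price_to_cash_flow(market_cap, cfo))                             # 12.5
print(price_to_cash_flow_per_share(share_price, cfo, diluted_shares))  # 12.5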
What is a Good Price to Cash Flow Ratio?
The P/CF is most useful for evaluating companies that have positive operating cash flow but are not profitable on an accrual accounting basis due to non-cash charges.
In other words, a company could have negative net income yet be profitable (in terms of generating positive cash flows) after non-cash expenses are added back.
Following the adjustments to net income, which is the purpose of the top section of the cash flow statement, we can get a much better sense of the company’s profitability.
Regarding the general rules for interpreting the P/CF ratio:
• Low P/CF Ratio: The company’s shares could potentially be undervalued by the market – but further analysis is required.
• High P/CF Ratio: The company’s share price could potentially be overvalued by the market, but again, there might be a particular reason as to why the company is trading at a higher valuation than
peer companies. Further analysis is still required.
Price to Cash Flow (P/CF) vs. Price to Earnings (P/E)
Equity analysts and investors often prefer the P/CF ratio over the price-to-earnings (P/E) since accounting profits – the net earnings of a company – can be manipulated more easily than operating
cash flow.
Hence, certain analysts prefer the P/CF ratio over the P/E ratio, since they view P/CF as a more accurate depiction of a company’s earnings.
P/CF is especially useful for companies with positive free cash flow, which we are defining as cash from operations (CFO), but are not profitable at the net income line because of substantial
non-cash charges.
Non-cash charges are added back to the cash flow statement in the cash from operations section to reflect that they are not actual outflows of cash. For example, depreciation is added back because
the real outflow of cash occurred on the date of the capital expenditure (Capex).
To comply with accrual accounting rules, the purchase of fixed assets must be spread across the useful life of the asset. The issue, however, is that the useful life assumption can be discretionary
and thereby creates the opportunity for misleading accounting practices.
Either way, both the P/CF and P/E ratios are used widely among retail investors, primarily for their convenience and ease of calculating.
P/CF Ratio Limitations
• Capital Expenditure (Capex): The main limitation of the P/CF ratio is the fact that capital expenditures (Capex) are not removed from operating cash flow. Considering the significant impact Capex
has on the cash flows of a company, the ratio of a company can be skewed by the exclusion of Capex.
• Limited or Lack of Profitability: Next, similar to the P/E ratio, the P/CF ratio cannot be used for truly unprofitable companies, even after adjusting for non-cash expenses. In such scenarios,
the P/CF will not be meaningful and other revenue-based metrics such as the price-to-sales multiple would be more useful.
• Early-Stage Companies: Further, for companies in their very early stages of development, high P/CF ratios are going to be the norm, and comparisons to mature companies in different stages in
their lifecycles will not be too informative. High-growth companies are mostly valued based on their future growth prospects and the potential to someday become more profitable once growth slows
down. Depending on the industry, the average P/CF will be different, although a lower ratio is generally considered to be a sign that the company is relatively undervalued.
Price to Cash Flow Calculator (P/CF)
We’ll now move to a modeling exercise, which you can access by filling out the form below.
1. P/CF Ratio Model Assumptions
In our example scenario, we have two companies that we’ll refer to as “Company A” and “Company B”.
For both companies, we’ll be using the following financial assumptions:
• Latest Closing Share Price = $30.00
• Total Diluted Shares Outstanding = 100m
From those two assumptions, we can calculate the market capitalization of both companies by multiplying the share price and diluted share count.
• Market Capitalization = $30.00 × 100m = $3bn
As for the next step, we’ll calculate the denominator using the following operating assumptions:
• Net Income = $250m
• Depreciation & Amortization (D&A):
• Company A D&A = $10m
• Company B D&A = $85m
• Increase in Net Working Capital (NWC) = –$20m
Based on the stated assumptions above, the only difference between the two companies is the D&A amount ($10m vs. $85m).
In effect, cash from operations (CFO) for Company A is equal to $240m while CFO is $315m for Company B.
2. Price to Cash Flow Calculation Example
At this point, we have the required data points to calculate the P/CF ratio.
But to see the benefit of the P/CF ratio over the P/E ratio, we’ll first calculate the P/E ratio by dividing the market capitalization by net income.
• Price to Earnings Ratio (P/E) = $3bn ÷ $250m = 12.0x
Then, we’ll calculate the P/CF ratio by dividing the market capitalization by cash from operations (CFO), as opposed to net income.
• Company A – Price-to-Cash Flow Ratio (P/CF) = $3bn ÷ $240m = 12.5x
• Company B – Price-to-Cash Flow Ratio (P/CF) = $3bn ÷ $315m = 9.5x
To confirm our calculation is done correctly, we can use the share price approach to check our P/CF ratios.
Upon dividing the latest closing share price by the operating cash flow per share, we get 12.5x and 9.5x for Company A and Company B once again.
For either company, the P/E ratio comes out to 12.0x, but the P/CF is 12.5x for Company A while being 9.5x for Company B.
The difference is caused by the non-cash add-back of depreciation and amortization (D&A).
In closing, the more the net income of a company varies from its cash from operations (CFO), the more insightful the price to cash flow (P/CF) ratio will be.
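A short sketch (not from the source) that reproduces the figures above, with all amounts in $ millions:

net_income = 250
nwc_increase = 20                  # increase in net working capital (a use of cash)
market_cap = 30.00 * 100           # $3,000m, i.e. $3bn

for name, d_and_a in [("Company A", 10), ("Company B", 85)]:
    cfo = net_income + d_and_a - nwc_increase   # add back D&A, subtract the NWC increase
    print(name,
          "CFO =", cfo,
          "P/E =", round(market_cap / net_income, 1),
          "P/CF =", round(market_cap / cfo, 1))
# Company A: CFO = 240, P/E = 12.0, P/CF = 12.5
# Company B: CFO = 315, P/E = 12.0, P/CF = 9.5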
Price to Cash Flow (P/CF) Ratio
The Price to Cash Flow (P/CF) ratio is a key valuation metric used to assess the market valuation of a company's stock by comparing its share price to the amount of operating cash flow produced.
Unlike the price-to-earnings ratio (P/E), the P/CF ratio removes the impact of non-cash items such as depreciation & amortization (D&A), making it less prone to manipulation via discretionary
accounting decisions [[1]].
Calculation of Price to Cash Flow Ratio
The P/CF ratio is calculated by dividing the market capitalization by the operating cash flows of the company. It can also be calculated on a per-share basis by dividing the latest closing share
price by the operating cash flow per share [[2]].
To calculate the operating cash flow per share, the following financial metrics are required:
• Cash from Operations (CFO): The company’s annual operating cash flow.
• Total Diluted Shares Outstanding: The total number of total outstanding shares, inclusive of the effect of potentially dilutive securities like options and convertible debt [[3]].
Interpreting the P/CF Ratio
The P/CF ratio is most useful for evaluating companies that have positive operating cash flow but are not profitable on an accrual accounting basis due to non-cash charges. A low P/CF ratio may
indicate that the company's shares could potentially be undervalued by the market, while a high P/CF ratio may suggest that the company's share price could potentially be overvalued [[4]].
Price to Cash Flow (P/CF) vs. Price to Earnings (P/E)
Equity analysts and investors often prefer the P/CF ratio over the P/E ratio, as accounting profits can be manipulated more easily than operating cash flow. The P/CF ratio is especially useful for
companies with positive free cash flow but are not profitable at the net income line due to substantial non-cash charges [[5]].
Limitations of the P/CF Ratio
The main limitations of the P/CF ratio include the fact that capital expenditures (Capex) are not removed from operating cash flow, and it cannot be used for truly unprofitable companies.
Additionally, for companies in their early stages of development, high P/CF ratios are expected, and comparisons to mature companies may not be informative [[6]].
Price to Cash Flow Calculation Example
In a hypothetical scenario involving two companies, the P/CF ratio is calculated by dividing the market capitalization by the operating cash flows of each company. The difference in the P/CF ratios
between the two companies is caused by the non-cash add-back of depreciation and amortization (D&A) [[7]].
In summary, the Price to Cash Flow (P/CF) ratio is a valuable tool for evaluating the market valuation of companies, especially those with positive operating cash flow but non-cash charges impacting
their profitability. It provides insights into a company's valuation that may not be captured by traditional metrics like the Price to Earnings (P/E) ratio. | {"url":"https://haolit.sbs/article/price-to-cash-flow-p-cf","timestamp":"2024-11-08T17:37:58Z","content_type":"text/html","content_length":"76774","record_id":"<urn:uuid:036c30c9-cd3a-4cd8-b7e8-19038b2050a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00734.warc.gz"} |
Cone Shape Pattern
Instructions: 1. Open the printable file by clicking the image or the link below the image. 2. Print out the file on A4 or letter size cardstock. You will need a PDF reader to view the file; the
template makes a cone.
The easiest way to make a cone is to start with a semicircle, then overlap the straight edges until they form a cone shape. If you want to get more specific, however, you should start with a circle
and lay out a full-scale cone pattern cutting template. If the common disc method isn't to your liking, you can make a cone shape starting with a paper triangle: take one of the far corners and
roll it (the paper-folding method). There are many different ways to lay out a cone pattern; this is just one of them.
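If you prefer to lay out the pattern from measurements, the flat pattern of a cone is a circular sector whose radius equals the slant height and whose angle makes the arc length equal to the base circumference. A small sketch of that standard geometry (an addition for illustration, not part of the original instructions):

import math

def cone_pattern(base_radius, height):
    """Return (sector_radius, sector_angle_in_degrees) for the flat pattern of a cone."""
    slant = math.hypot(base_radius, height)   # sector radius = slant height of the cone
    angle = 360.0 * base_radius / slant       # arc length of the sector = base circumference
    return slant, angle

# example: a cone with a 5 cm base radius and 12 cm height
print(cone_pattern(5, 12))   # (13.0, about 138.5 degrees)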
Related Post: | {"url":"https://time.ocr.org.uk/en/cone-shape-pattern.html","timestamp":"2024-11-11T19:29:11Z","content_type":"text/html","content_length":"27515","record_id":"<urn:uuid:f84bb32c-8d6d-4961-bd89-0945bc1717fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00515.warc.gz"} |
PROC GENMOD: Type 1 Analysis :: SAS/STAT(R) 9.22 User's Guide
A Type 1 analysis consists of fitting a sequence of models, beginning with a simple model with only an intercept term, and continuing through a model of specified complexity, fitting one additional
effect on each step. Likelihood ratio statistics—that is, twice the difference of the log likelihoods—are computed between successive models. This type of analysis is sometimes called an analysis of
deviance since, if the dispersion parameter is held fixed for all models, it is equivalent to computing differences of scaled deviances. The asymptotic distribution of the likelihood ratio
statistics, under the hypothesis that the additional parameters included in the model are equal to 0, is a chi-square with degrees of freedom equal to the difference in the number of parameters
estimated in the successive models. Thus, these statistics can be used in a test of hypothesis of the significance of each additional term fit.
This type of analysis is not available for GEE models, since the deviance is not computed for this type of model.
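Outside of SAS, the same analysis-of-deviance idea can be sketched with nested GLMs: fit models of increasing complexity, take twice the difference of the log likelihoods, and compare it to a chi-square distribution. A rough Python illustration using hypothetical Poisson data (this is not PROC GENMOD output):

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
y = rng.poisson(np.exp(0.3 + 0.5 * x1))            # hypothetical response

# sequence of nested designs: intercept only, then + x1, then + x2
designs = [np.ones((100, 1)),
           np.column_stack([np.ones(100), x1]),
           np.column_stack([np.ones(100), x1, x2])]

previous = None
for X in designs:
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    if previous is not None:
        lr = 2 * (fit.llf - previous.llf)           # likelihood ratio statistic
        df = X.shape[1] - previous.params.shape[0]  # number of added parameters
        print(f"LR = {lr:.3f}, df = {df}, p = {stats.chi2.sf(lr, df):.4f}")
    previous = fit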
The dispersion parameter can be estimated from a maximal model by the deviance or Pearson's chi-square divided by degrees of freedom, as discussed in the section Goodness of Fit, and this value can
then be used in all models. An alternative is to consider the dispersion to be an additional unknown parameter for each model and estimate it by maximum likelihood on each step. By default, PROC
GENMOD estimates scale by maximum likelihood at each step.
A table of likelihood ratio statistics is produced, along with associated p-values.
If you specify either the SCALE=DEVIANCE or the SCALE=PEARSON option in the MODEL statement, the dispersion parameter is estimated using the deviance or Pearson's chi-square statistic, and F
statistics are computed in addition to the chi-square statistics. See the section F Statistics for a definition of these statistics.
This Type 1 analysis has the general property that the results depend on the order in which the terms of the model are fitted. The terms are fitted in the order in which they are specified in the
MODEL statement. | {"url":"http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/statug_genmod_sect040.htm","timestamp":"2024-11-07T16:08:09Z","content_type":"application/xhtml+xml","content_length":"12622","record_id":"<urn:uuid:58001617-5a39-40e3-974c-82e6d413a444>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00383.warc.gz"} |
Template for Backtracking
What is backtracking?
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the
solutions, and abandons a candidate (“backtracks”) as soon as it determines that the candidate cannot possibly be completed to a valid solution.
Essentially, backtracking is a depth-first search (DFS) on a tree of possible solutions.
Backtracking template
For a general backtracking, we need to write a base case for the end condition, a loop for the choices, and a recursive call.
def backtrack(path, choices):
    if end_condition(path):
        result.append(path[:])    # record a copy of the completed path
        return
    for choice in choices:
        if is_valid(path, choice):
            path.append(choice)
            backtrack(path, choices)
            path.pop()            # undo the choice (backtrack)
You can remove the path.append(choice) and path.pop() calls if you just add the choice to the path in the recursive call:
    backtrack(path + [choice], choices)
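As a concrete (made-up) instantiation of the template, the hooks end_condition and is_valid can be filled in to enumerate every bit string of length 3:

result = []

def end_condition(path):
    return len(path) == 3            # a solution is any path of length 3

def is_valid(path, choice):
    return True                      # every choice is allowed in this toy problem

def backtrack(path, choices):
    if end_condition(path):
        result.append(path[:])
        return
    for choice in choices:
        if is_valid(path, choice):
            backtrack(path + [choice], choices)

backtrack([], [0, 1])
print(result)                        # eight lists, from [0, 0, 0] to [1, 1, 1]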
There are many variations of backtracking, such as the subset sum problem, n-queens problem, and sudoku.
We can modify the template to fit different problems.
For permutations, we need to add a visited set to keep track of the visited elements.
def backtrack(path, choices, visited):
    if end_condition(path):
        result.append(path[:])
        return
    for choice in choices:
        if choice not in visited and is_valid(path, choice):
            visited.add(choice)
            backtrack(path + [choice], choices, visited)
            visited.remove(choice)   # undo the choice
For combinations, we need to add a start index to avoid duplicates.
def backtrack(path, choices, start):
    if end_condition(path):
        result.append(path[:])
        return
    for i in range(start, len(choices)):
        if is_valid(path, choices[i]):
            backtrack(path + [choices[i]], choices, i + 1)   # only look at later elements
Duplicated input
For duplicated input, we need to sort the input and skip the duplicated elements.
def backtrack(path, choices, start):
    if end_condition(path):
        result.append(path[:])
        return
    for i in range(start, len(choices)):
        if i > start and choices[i] == choices[i - 1]:
            continue   # skip duplicates at the same tree depth (choices must be sorted)
        if is_valid(path, choices[i]):
            backtrack(path + [choices[i]], choices, i + 1)
Multiple solutions on one path (Subsets)
For subsets, we need to add the result to the result list, and we do not need to check the end condition.
def backtrack(path, choices, start, result):
    result.append(path[:])   # every prefix along the path is a valid subset
    for i in range(start, len(choices)):
        if is_valid(path, choices[i]):
            backtrack(path + [choices[i]], choices, i + 1, result) | {"url":"https://mikelxk.com/blog/backtrack-template","timestamp":"2024-11-14T08:02:44Z","content_type":"text/html","content_length":"15792","record_id":"<urn:uuid:38460b2a-c4cd-43a8-a74d-ddab0971bb48>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00445.warc.gz"}
Machine Learning: Evolutionary Algorithms
• Machine Learning: Evolutionary Algorithms
Machine Learning: Evolutionary Algorithms
Please register via moodle: https://moodle.ruhr-uni-bochum.de/m/course/view.php?id=47234
Moodle is often very slow, in that case use the following links:
Please also register in the Moodle course, which will be used for announcements and for exam registration.
Evolutionary Algorithms are randomized optimization methods. They are inspired by principles of biological evolution but applied in a technical context to the solution of mathematical or technical
optimization problems.
These population-based methods apply the principles of inheritance, variation, and the "survival of the fittest" to candidate solutions. The resulting search heuristics are applicable to a wide
variety of application problems. They are conceptually relatively simple and often easy to implement. Their analysis requires elaborate tools and quickly becomes intractable. Evolutionary search is
often applied to the approximate solution of hard optimization tasks for which efficient problem-specific solvers are not available.
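As an illustration of these principles (a minimal sketch, not course material), a (1+1) evolutionary algorithm maximizing the number of ones in a bit string looks roughly like this in Python:

import random

def one_plus_one_ea(n=20, generations=300):
    """A (1+1) EA on the OneMax problem: maximize the number of ones."""
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        # variation: flip each bit independently with probability 1/n
        child = [1 - bit if random.random() < 1.0 / n else bit for bit in parent]
        # selection ("survival of the fittest"): keep the better of parent and child
        if sum(child) >= sum(parent):
            parent = child
    return parent

best = one_plus_one_ea()
print(sum(best), "ones out of", len(best))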
The course starts out with a basic model of an evolutionary algorithm. Departing from this model students will learn about various aspects of evolutionary optimization on discrete and continuous
search spaces, from which a systematic taxonomy of modular components will be developed.
The course applies the flipped classroom format. Students work through the relevant lecture material at home; most of the material is provided in the form of videos. All lecture material is in
English. The presence time is dedicated to a practical session. Most of the practical session is filled with exercises, many of which involve programming tasks. We will use the Python programming
language. All exercises are distributed in the form of Jupyter notebooks.
Lecturer (+49) 234-32-25558 tobias.glasmachers@ini.rub.de NB 3/27
Course type
6 CP
Winter Term 2022/2023
Takes place every week on Wednesday from 10:00 to 14:00 in room IA 0/158-79.
First appointment is on 12.10.2022
Last appointment is on 01.02.2023
The course is designed for Master students of the Angewandte Informatik and Medizinphysik programs, but all students with a mathematical or technical background, e.g., studying natural science or
engineering topics, should have the necessary background.
Participants must be familiar with linear algebra and elementary probability theory. For example, students should be well acquainted with the following terms:
• vector, basis, linear function, linear map, matrix
• norm, inner product, orthogonal
• probability, distribution, density, quantile
• normal distribution, expectation, variance, covariance
More advanced mathematical material will be introduced as needed.
The course is taught in a flipped classroom format. Most of the presence time is dedicated to practical exercises, many of which involve programming. The course is taught online via Zoom.
The Institut für Neuroinformatik (INI) is a central research unit of the Ruhr-Universität Bochum. We aim to understand the fundamental principles through which organisms generate behavior and
cognition while linked to their environments through sensory systems and while acting in those environments through effector systems. Inspired by our insights into such natural cognitive systems, we
seek new solutions to problems of information processing in artificial cognitive systems. We draw from a variety of disciplines that include experimental approaches from psychology and
neurophysiology as well as theoretical approaches from physics, mathematics, electrical engineering and applied computer science, in particular machine learning, artificial intelligence, and computer
Universitätsstr. 150, Building NB, Room 3/32
D-44801 Bochum, Germany
Tel: (+49) 234 32-28967
Fax: (+49) 234 32-14210 | {"url":"https://www.ini.rub.de/teaching/courses/machine_learning_evolutionary_algorithms_winter_term_2022/","timestamp":"2024-11-10T22:43:19Z","content_type":"text/html","content_length":"43998","record_id":"<urn:uuid:f89cdfd6-475d-4bce-b212-659cfbcecc76>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00001.warc.gz"} |
The gain G of a certain microwave dish antenna can be expressed as a function of angle by the equation. Plot this gain function on a polar plot, with the title “Antenna Gain vs” in boldface. - Tejsumeru
The gain G of a certain microwave dish antenna can be expressed as a function of angle by the equation. Plot this gain function on a polar plot, with the title “Antenna Gain vs” in boldface.
• MATLAB, Tutorials
The gain G of a certain microwave dish antenna can be expressed as a function of angle by the equation G(θ) = |sinc 4θ| for −π/2 ≤ θ ≤ π/2,
where θ is measured in radians from the boresight of the dish, and sinc x = sin x / x. Plot this gain function on a polar plot, with the title "Antenna Gain vs θ" in boldface.
clear all;
close all;
theta = -pi/2:pi/30:pi/2;
y = abs(sin(4.*theta)./(4.*theta));   % gain G = |sinc(4*theta)|
y(~isfinite(y)) = 1;                  % sinc(0) = 1, handle the 0/0 at theta = 0
polarplot(theta, y);                  % polar plot of the gain (use polar(theta, y) on older releases)
title("\bf Antenna Gain vs \theta")
Leave a Reply Cancel reply | {"url":"https://tejsumeru.com/2021/04/17/the-gain-g-of-a-certain-microwave-dish-antenna-can-be-expressed-as-a-function-of-angle-by-the-equation-plot-this-gain-function-on-a-polar-plot-with-the-title-antenna-gain-vs-in-bol/","timestamp":"2024-11-06T02:21:39Z","content_type":"text/html","content_length":"88729","record_id":"<urn:uuid:8fe783e3-d6ad-46a5-af92-b57362b5f01b>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00497.warc.gz"} |
BITMEDU7 - Editorial
Problem Link - Finding least significant, most significant bit
Problem Statement
• Write a program to input an integer N.
• Print the position of the most significant bit, and print the least significant bit.
The code idea is to determine two key properties of the input integer N: the position of the most significant bit (MSB) and the value of the least significant bit (LSB).
1. Least Significant Bit (LSB): This is obtained by performing a bitwise AND operation between N and 1. If N is odd, the LSB will be 1, and if it is even, the LSB will be 0.
2. Most Significant Bit (MSB): The position of the MSB is found by repeatedly dividing N by 2 until N becomes 0, while keeping track of the number of divisions. The position of the MSB is then the
count of divisions minus one, because bit positions are counted from 0 while the divisions are counted from 1, as sketched below.
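A minimal Python sketch of this approach (an illustration, not the official editorial code):

def lsb_and_msb(n):
    lsb = n & 1                      # least significant bit via bitwise AND with 1
    count, temp = 0, n
    while temp > 0:                  # repeatedly halve n, counting the divisions
        temp //= 2
        count += 1
    return lsb, count - 1            # MSB position is the division count minus one

print(lsb_and_msb(10))               # (0, 3): 10 is 1010 in binary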
Time Complexity
The time complexity of the algorithm is O(log N), as it involves repeatedly dividing N by 2.
Space Complexity
The space complexity of the algorithm is O(1), since it uses a constant amount of space for variables. | {"url":"https://discuss.codechef.com/t/bitmedu7-editorial/120915","timestamp":"2024-11-15T01:08:16Z","content_type":"text/html","content_length":"14731","record_id":"<urn:uuid:025db8ea-f8be-489e-95cb-035fd9635afd>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00132.warc.gz"} |
Math, Grade 6, Expressions, Peer Review
Make Connections
Performance Task
Ways of Thinking: Make Connections
Take notes about how your classmates corrected the work and their conclusions about the most common errors that Emma and Chen made.
As your classmates present, ask questions such as:
• What strategy did you use to find the student errors?
• How could you tell if the work was correct or incorrect?
• How does your explanation or diagram clearly show what the student did wrong?
• How did what you have learned about expressions and the distributive property help you evaluate the student work?
• How did you identify the most common error that each student made?
• What advice would you give each student about how to deal with problems of this type in the future? | {"url":"https://openspace.infohio.org/courseware/lesson/2129/student/?section=6","timestamp":"2024-11-11T04:56:24Z","content_type":"text/html","content_length":"32089","record_id":"<urn:uuid:c3ab1420-e4d7-413e-b1ae-a97d5b88c5b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00589.warc.gz"} |
This document is Copyright © 2021 by the LibreOffice Documentation Team. Contributors are listed below. You may distribute it and/or modify it under the terms of either the GNU General Public License
(https://www.gnu.org/licenses/gpl.html), version 3 or later, or the Creative Commons Attribution License (https//creativecommons.org/licenses/by/4.0/), version 4.0 or later.
All trademarks within this guide belong to their legitimate owners.
Jean Hollis Weber Kees Kriek
Peter Schofield Hazel Russman Laurent Balland-Poirier
Rafael Lima Winston Min Tjong Kees Kriek
Jean Hollis Weber John A Smith Martin Saffron
Regina Henschel Christian Kühl Florian Reisinger
Gisbert Friege (Dmaths) Jochen Schiffers Frédéric Parrenin
Bernard Siaud Steve Fanning Olivier Hallot
Roman Kuznetsov Dave Barton
Please direct any comments or suggestions about this document to the Documentation Team’s mailing list: documentation@global.libreoffice.org
Everything you send to a mailing list, including your email address and any other personal information that is written in the message, is publicly archived and cannot be deleted.
Publication date and software version
Published June 2021. Based on LibreOffice 7.1 Community.
Other versions of LibreOffice may differ in appearance and functionality.
Using LibreOffice on macOS
Some keystrokes and menu items are different on macOS from those used in Windows and Linux. The table below gives some common substitutions for the instructions in this document. For a detailed list,
see the application Help.
Windows or Linux | macOS equivalent | Effect
Tools > Options menu selection | LibreOffice > Preferences | Access setup options
Right-click | Control+click or right-click, depending on computer setup | Open a context menu
Ctrl (Control) | ⌘ (Command) | Used with other keys
F11 | ⌘+T | Open the Styles deck in the Sidebar
Math is a formula editor module included with LibreOffice that allows you to create or edit formulas (equations) in a symbolic form, within LibreOffice documents or as stand-alone objects. Example
formulas are shown below:
$\frac{df(x)}{dx} = \ln(x) + \tan^{-1}(x^2)$ or $\mathrm{NH_3 + H_2O \rightleftharpoons NH_4^+ + OH^-}$
The Formula Editor in Math uses a markup language to represent formulas. This markup language is designed to be easily read wherever possible. For example, a over b using markup language produces the
fraction $\frac{a}{b}$ when used in a formula.
Getting started
Using the Formula Editor, you can create a formula as a separate document or file for a formula library, or insert formulas directly into a document using LibreOffice Writer, Calc, Impress, or Draw.
Formulas as separate documents or files
To create a formula as a separate document or file, use one of the following methods to open an empty formula document in LibreOffice Math (Figure 1):
• On the Menu bar, go to File > New > Formula.
• On the Standard toolbar, click the triangle to the right of the New icon and select Formula.
• In the Start Center, click Math Formula.
• From within LibreOffice Math, use the keyboard shortcut Ctrl+N.
• You can also launch Math from the command line using libreoffice --math
Figure 1: An empty formula document in Math
As you enter the markup language in the Formula Editor, the formula will appear in the Preview window during and after input of the markup language. The Elements Dock to the left of the Preview
window may also appear, if these have been selected in View on the Menu bar. Figure 2 illustrates how to enable the Elements dock in Math. For more information on creating formulas, see “Creating
formulas” below.
Figure 2: Enabling the Elements Dock
Formulas in LibreOffice documents
To insert a formula into a LibreOffice document, open the document in Writer, Calc, Draw, or Impress. The LibreOffice module you are using affects how you position the cursor to insert the formula.
• In Writer, click in the paragraph where you want to insert the formula.
• In Calc, click in the spreadsheet cell where you want to insert the formula.
• In Draw and Impress, the formula is inserted into the center of the drawing or slide.
Then, go to Insert > Object > Formula on the Menu bar to open the Formula Editor. Alternatively, go to Insert > Object > OLE Object on the Menu bar to open the Insert OLE Object dialog, then select
Create new option, choose the Object Type “LibreOffice Formula” and then click OK to open the Formula Editor. The Elements Dock to the left of the Preview window and/or the Elements dialog as a
floating dialog may also appear, if these have been selected in View on the Menu bar.
Figure 3 shows an example Writer document with the formula box selected ready for a formula to be entered.
When you have completed entering the markup language for your formula, close the Formula Editor by pressing the Esc key or by clicking an area outside the formula in your document. Double-clicking on
the formula object in the document will open the Formula Editor again so that you can edit the formula.
Formulas are inserted as OLE objects into documents. You can change how the object is placed within the document, as with any OLE object. For more information on OLE objects, see “Formulas in Writer”
below, “Formulas in Calc, Draw, and Impress” below, and the user guides for Writer, Calc, Draw, and Impress.
Figure 3: Empty formula in a Writer document
If you frequently insert formulas into documents, it is recommended to add the Formula button to the Standard toolbar or create a keyboard shortcut. See “Customization” below for more information.
Creating formulas
You can insert a formula using one of the following methods:
• In the Elements dock, select a category in the drop-down list, then a symbol.
• Right-click in the Formula Editor and select a category, then a symbol in the context menu.
• Enter markup language directly in the Formula Editor.
Using the Elements dock or the context menus to insert a formula provides a convenient way to learn the markup language used by Math.
When using the Elements dock, it is recommended to have Extended tips selected in the LibreOffice Options. This will help you identify the categories and symbols you want to use in the formula. Go to
Tools > Options > LibreOffice > General and select Extended tips in the Help section.
Elements dock
The Elements dock can be used when entering formula data.
1) Go to View > Elements on the Menu bar to open the Elements dock (Figure 4).
2) Select the category you want to use in your formula in the drop-down list at the top of the Elements Dock.
3) Select the symbol you want to use in the formula in the Elements Dock. The symbols that are available change according to the selected category.
4) After choosing one of the symbols in the Elements Dock, the Formula Editor will be updated with the Markup notation of the selected symbol.
The Elements dock can either be a floating dialog, as shown in Figure 4, or positioned to the left of the Formula Editor, as shown in Figure 1 and Figure 3.
The Elements Dock also provides an Examples category which provides example formulas to use as a starting point for your formula or equation.
Context menu
The Formula Editor also provides a context menu to access categories and symbols when creating a formula. Right-click in the Formula Editor to open the context menu. Select a category and then select
the markup example that you want to use in the sub-context menu. An example is shown in Figure 5.
The Elements Dock and the context menu contain only the most common commands that are used in formulas. To insert other symbols and commands not listed in the Elements Dock and context menu, you have
to enter them manually using the markup language. For a complete list of commands and symbols available in Math, see Appendix A, Commands Reference, in the Math Guide.
Figure 5: Context menu in Formula Editor
Markup language
Markup language is entered directly into the Formula Editor. For example, typing the markup language 5 times 4 into the Formula Editor creates the simple formula $5×4$ . If you are experienced in
using markup language, it can be the quickest way to enter a formula. Table 1 shows some examples of using markup language to enter commands. For a full list of commands that can be used in the
Formula Editor, see Appendix A, Commands Reference, in the Math Guide.
Table 1: Example commands using markup language
Display | Command
$a = b$ | a = b
$\sqrt{a}$ | sqrt {a}
$a^2$ | a^2
$a_n$ | a_n
$\int f(x)\, dx$ | int f(x) dx
$\sum a_n$ | sum a_n
$a \leq b$ | a <= b
$\infty$ | infinity
$a \times b$ | a times b
$x \cdot y$ | x cdot y
Greek characters
Using markup language
Greek characters are commonly used in formulas, but they cannot be entered into a formula using the Elements dock or the context menu. Use the English names of Greek characters in markup language
when entering Greek characters into a formula. See Appendix A, Commands Reference, in the Math Guide for a list of characters that can be entered using markup language.
• For a lowercase Greek character, type a percentage % sign, then type the character name in lowercase using the English name. For example, typing %lambda creates the Greek character $\lambda$.
• For an UPPERCASE Greek character, type a percentage % sign, then type the character name in UPPERCASE using the English name. For example, typing %LAMBDA creates the Greek character $\Lambda$.
• For an italic Greek character, type a percentage % sign followed by the i character, then the English name of the Greek character in lower or UPPER case. For example, typing %iTHETA creates the
italic Greek character $\mathit{\Theta}$.
Symbols dialog
Greek characters can also be entered into a formula using the Symbols dialog.
1) Make sure your cursor is in the correct position in the Formula Editor.
2) Go to Tools > Symbols on the Menu bar, or click the Symbols icon in the Tools toolbar, to open the Symbols dialog (Figure 6).
3) Select Greek in the Symbol set drop-down list. For italic characters, select iGreek in the drop-down list.
4) Double-click the Greek character you want to insert or select it and click Insert. When selected, the name of the character is shown below the symbol list.
5) Click Close when you have finished entering Greek characters into your formula.
Formula examples
The simple formula $5×4$ can be created using LibreOffice Math as follows:
1) Make sure your cursor is flashing in the Formula Editor, then select the category Unary/Binary Operators and symbol Multiplication using one of the following methods:
• In the Elements Dock, select Unary/Binary Operators in the drop-down list and then select the Multiplication icon
• Right-click in the Formula Editor and select Unary/Binary Operators > a times b in the context menu.
• Using markup language, enter 5 times 4 in the Formula Editor.
The first two methods place the formula text <?> times <?> in the Formula Editor, and the multiplication symbol between two placeholders appears in the document. For the third method, using
markup language in the Formula Editor places the formula $5×4$ directly into the document and there is no need to carry out the following steps.
2) Select the first placeholder <?> before the word times in the Formula Editor and replace it with the character 5. The formula in the document updates automatically.
3) Select the second placeholder <?> after the word times in the Formula Editor and replace it with the character 4. The formula in the document updates automatically.
To move forward from one placeholder to the next placeholder in a formula, press the F4 key. To move backward from one placeholder to the previous placeholder in a formula, use the key combination Shift+F4.
If necessary, you can prevent a formula in a document from updating automatically. Go to View on the Menu bar and deselect AutoUpdate display. To then manually update a formula, press the F9 key or
select View > Update on the Menu bar.
You want to enter the formula $\pi \simeq 3.14159$ where the value of pi is rounded to 5 decimal places. You know the name of the Greek character (pi), but do not know the markup associated with
the Is Similar Or Equal symbol $\simeq$.
1) Make sure your cursor is flashing in the Formula Editor.
2) Enter %pi in the Formula Editor to enter the Greek character for pi (π).
3) Select the category Relations and symbol Is Similar Or Equal using one of the following methods:
• In the Elements dock, select Relations in the drop-down list and then select the Is Similar Or Equal icon
• Right-click in the Formula Editor and select Relations > a simeq b in the context menu.
4) Delete the first placeholder <?> before the word simeq in the Formula Editor.
5) Select the second placeholder <?> after the word simeq in the Formula Editor and replace it with the characters 3.14159. The formula $\pi \simeq 3.14159$ now appears in the document.
Editing formulas
How you edit a formula and switch into formula editing mode depends on whether the formula is in Math or another LibreOffice component.
1) In Math, double-click on a formula element in the formula that appears in the Preview window to select the formula element in the Formula Editor, or directly select a formula element in the
Formula Editor.
2) In Writer, Calc, Impress, or Draw, double-click on the formula, or right-click on the formula and select Edit in the context menu, to open the Formula Editor and enter editing mode. The cursor is
positioned at the start of the formula in the Formula Editor.
If you cannot select a formula element using the cursor, click on the Formula Cursor icon in the Tools toolbar to activate the formula cursor.
3) Select the formula element you want to change using one of the following methods:
• Click on the formula element in the Preview window, positioning the cursor at the beginning of the formula element in the Formula Editor, then select the formula element in the Formula Editor.
• Double-click on the formula element in the Preview window to select the formula element in the Formula Editor.
• Position the cursor in the Formula Editor at the formula element you want to edit, then select that formula element.
• Double-click directly on the formula element in the Formula Editor to select it.
4) Make your changes to the formula element you have selected.
5) Go to View > Update on the Menu bar, or press the F9 key, or click on the Update icon on the Tools toolbar to update the formula in the Preview window or your document.
6) In Math, save your changes to the formula after editing.
7) In Writer, Calc, Impress, or Draw, click anywhere in the document away from the formula to leave editing mode, then save the document to save your changes to the formula.
Formula layout
This section provides some advice on how to lay out complex formulas in Math or in your LibreOffice document.
Using braces
LibreOffice Math knows nothing about order of operations within a formula. You need to use braces (curly brackets) to define the order of operations. The following examples show how braces can be
used in a formula.
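For instance (an illustrative example, not reproduced from the guide's own table): the markup 2 over x + 1 is read as $\frac{2}{x} + 1$, because over only takes the items immediately next to it, whereas 2 over {x + 1} groups the denominator with braces and produces $\frac{2}{x+1}$.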
Brackets (parentheses) and matrices
If you want to use a matrix in a formula, you have to use the matrix command. Below is a simple example of a 2 x 2 matrix.
matrix { a # b ## c # d }   $\begin{matrix} a & b \\ c & d \end{matrix}$
In matrices, rows are separated by two hashes (##) and entries within each row are separated by one hash (#).
Normally, when you use brackets within a matrix, the brackets do not scale as the matrix increases in size. The example below shows a formula where the parentheses do not scale to the size of the
resulting matrix.
( matrix { a # b ## c # d } )   $(\begin{matrix} a & b \\ c & d \end{matrix})$
To overcome this problem of brackets in a matrix, LibreOffice Math provides scalable brackets that grow in size to match the size of the matrix. The commands left( and right) have to be used to
create scalable brackets around a matrix. The following example shows how to create a matrix with scalable parentheses.
left( matrix { a # b ## c # d } right)   $\left(\begin{matrix} a & b \\ c & d \end{matrix}\right)$
Scalable brackets can also be used with any element of a formula, such as fraction, square root, and so on.
If you want to create a matrix where some values are empty, you can use the grave accent (`) so that Math will put a small space in that position, as shown in the example below:
left( matrix { 1 # 2 # 3 ## 4 # ` # 6 } right)   $\left(\begin{matrix} 1 & 2 & 3 \\ 4 & & 6 \end{matrix}\right)$
Use the commands left[ and right] to obtain square brackets. A list of all brackets available within Math can be found in Appendix A, Commands Reference, of the Math Guide.
If you want all brackets to be scalable, go to Format > Spacing on the Menu bar to open the Spacing dialog. Click on Category, select Brackets in the drop-down list, and then select the option Scale
all brackets.
Unpaired brackets
When using brackets in a formula, Math expects that for every opening bracket there will be a closing one. If you forget to add a closing bracket, Math places an inverted question mark next to where
the closing bracket should have been placed. For example, lbrace a; b will result in an inverted question mark in the output because the right bracket rbrace is missing.
This inverted question mark disappears when all the brackets are paired. The previous example could be fixed to lbrace a; b rbrace, resulting in $\left\{a;b\right\}$ . However, there are cases where
an unpaired bracket is necessary and for that you have the following options.
Non scalable brackets
A backslash \ is placed before a non scalable bracket to indicate that the subsequent character should not be regarded as a bracket, but rather as a literal character.
For example, the unpaired brackets in the formula [ a; b [ would result in an inverted question mark because Math expects that [ will be closed by ]. To fix the error, you can use the backslash and
insert \[ a; b \[ into the Formula Editor to obtain $[\,a; b\,[$ as the result.
Scalable brackets
To create unpaired scalable brackets or braces in a formula, the markup commands left, right, and none can be used.
abs x = left lbrace stack {x "for" x >= 0 # -x "for" x < 0} right none $|x|={xforx≥0−xforx<0$
Recognizing functions
In the basic installation of Math, Math outputs functions in normal characters and variables in italic characters. However, if Math fails to recognize a function, you can tell Math that you have just
entered a function. Entering the markup command func before a function forces Math to recognize the following text as a function and to display it in normal characters.
For a full list of functions within Math, see Appendix A, Commands Reference, in the Math Guide.
Some Math functions have to be followed by a number or a variable. If these are missing, Math places an inverted question mark where the missing number or variable should be. To remove the inverted
question mark and correct the formula, you have to enter a number, a variable, or a pair of empty brackets as a placeholder.
You can navigate through errors in a formula using the F3 key to move to the next error or the key combination Shift+F3 to move to the previous error.
Formulas over multiple lines
Suppose you want to create a formula that requires more than one line, for example $\begin{array}{c}x=3\\ y=1\end{array}$ . Your first reaction would be to press the Enter key. However, if you do
that, the markup language in the Formula Editor goes to a new line, but the resulting formula does not have two lines. To add a new line to the formula you need to use the markup command newline.
Markup Language | Resulting Formula
x = 3
y = 1 | $x = 3 \quad y = 1$ (both statements stay on one line)
x = 3 newline y = 1 | $\begin{array}{l} x = 3 \\ y = 1 \end{array}$
Spacing within formulas
Spacing between the elements in a formula is not set by using space characters in the markup language. If you want to add spaces into your formula, use one of the following options:
• Grave ` to add a small space.
• Tilde ~ for a large space.
• Add space characters between quotes “ ”. These spaces will be considered as text.
Any spaces at the end of a line in the markup language are ignored by default. For more information, see “Customization” below.
Adding limits to sum/integral commands
The sum and int commands, used for summations and integrals respectively, can take the parameters from and to if you want to set the lower and upper limits. The parameters from and to can be used
singly or together as shown by the following examples. For more information on the sum and integral commands, see Appendix A, Commands Reference, in the Math Guide.
Markup Language Resulting Formula
sum from k = 1 to n a_k 	$\sum_{k=1}^{n} a_k$
sum to infinity 2^{-n} 	$\sum^{\infty} 2^{-n}$
sum from{ i=1 } to{ n } sum from{ j=1; i <> j } to{ m } x_ij 	$\sum_{i=1}^{n} \sum_{j=1;\, i \ne j}^{m} x_{ij}$
int from 0 to x f(t) dt 	$\int_{0}^{x} f(t)\, dt$
int_0^x f(t) dt 	$\int_{0}^{x} f(t)\, dt$
int from Re f 	$\int_{\Re} f$
Writing derivatives
When writing derivatives, you have to tell Math that it is a fraction by using the over command. The over command is combined with the character d for a total derivative or the partial command for a
partial derivative to achieve the effect of a derivative. Braces {} are used in each side of the elements to surround them and make the derivative, as shown by the following examples.
Markup Language Resulting Formula
{df} over {dx} 	$\frac{df}{dx}$
{partial f} over {partial y} 	$\frac{\partial f}{\partial y}$
{partial^2 f} over {partial t^2} 	$\frac{\partial^2 f}{\partial t^2}$
To write function names with primes, as is normal in school notation, you must first add the symbols to the catalog. See “Customization” below for more information.
Markup language characters as normal characters
Characters that are used as controls in markup language cannot be entered directly as normal characters. These characters are: %, {, }, &, |, _, ^ and ". For example, you cannot write 2% = 0.02 in
markup language and expect the same characters to appear in your formula. To overcome this limitation in markup language, use one of the following methods:
• Use double quotes to mark that character as text, for example 2"%"= 0.02 will appear in your formula as $2\text{%}=0.02$. However, this method cannot be used for the double-quote character
itself, see “Text in formulas” below.
• Add the character to the Math Catalog, for example the double quote character.
• Use commands, for example lbrace and rbrace give you literal braces $\left\{\right\}$.
The Special Characters dialog used by other LibreOffice modules is not available in Math. If you are going to regularly require special characters in Math, then it is recommended to add the
characters to the Math Catalog; see “Customization” below for more information.
Text in formulas
To include text in a formula, you have to enclose any text in double-quotes, for example x " for " x >= 0 in markup language will create the formula $x \text{ for } x \ge 0$. All characters, except double
quotes, can be used in text.
However, if you require double quotes in formula text, then you have to create the text with double quotes in LibreOffice Writer, then copy and paste the text into the Formula Editor, as shown in
Figure 7.
Figure 7: Example of double quotes in formula text
The font used for text in a formula will be the default font that has been set in the Fonts dialog. For more information on how to change the fonts used in formulas, see “Changing formula appearance” below.
By default, text alignment is left-justified in formulas. For more information on how to change text alignment, see “Adjusting formula alignment” below.
Formatting text in formulas
Formatting commands are not interpreted within text used in formulas. If you want to use formatting commands within formula text, then you must break up the text using double quotes in the Formula Editor.
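For example (a sketch, not an example from the guide), the markup "The word " bold "bold" " is emphasized" displays only the word bold in bold type, because the formatting command sits between the quoted text segments rather than inside them.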
Aligning formulas using equals sign
LibreOffice Math does not have a command for aligning formulas on a particular character. However, you can use a matrix to align formulas on a character and this character is normally the equals sign
(=). In addition, you can use the markup commands alignr, alignl and alignc to set the alignment of each value inside the matrix to the right, left or center, respectively.
matrix{ alignr x+y # {}={} # alignl 2 ## alignr x # {}={} # alignl 2-y } 	$\begin{array}{rcl} x+y &=& 2 \\ x &=& 2-y \end{array}$
The empty braces each side of the equals sign are necessary because the equals sign is a binary operator and requires an expression on each side. You can use spaces, or ` or ~ characters each side of
the equals sign, but braces are recommended as they are easier to see within the markup language.
You can reduce the spacing on each side of the equals sign if you change the inter-column spacing of the matrix. See “Adjusting formula spacing” below for more information.
Changing formula appearance
This section describes how to change the font or font size in a selected formula and how to change the default font or font size.
If you have already inserted formulas into your document and you change the default font or font size, only formulas inserted after the change in default font or font size will use the new default
settings. You have to individually change the font or font size of formulas already inserted if you want these formulas to use the same font or font size as the default settings.
The extension “Formatting of all Math formulas” allows you to change font name and font size for all or only for selected formulas in a document. You can download it and read the installation and
usage instructions here: https://extensions.libreoffice.org/extensions/show/formatting-of-all-math-formulas
Formula font size
Current formula font size
To change the font size used for a formula already inserted in Math or another LibreOffice component:
1) Click in the markup language in the Formula Editor.
2) Go to Format > Font size on the Menu bar to open the Font Sizes dialog (Figure 8).
3) Select a different font size using the Base size spinner or type a new font size in the Base size box.
4) Click OK to save your changes and close the dialog. An example result when you change font size is shown below.
Figure 8: Font Sizes dialog
Default formula font size
To change the default font size used for all formulas in Math or another LibreOffice component:
1) Before inserting any formulas in your document, go to Format > Font size on the Menu bar to open the Font Sizes dialog (Figure 8).
2) Select a different font size using the Base size spinner or type a new font size in the Base size box.
3) Click Default and confirm your changes to the base size font. Any formulas created from this point on will use the new base size font for formulas.
4) Click OK to save your changes and close the Font Sizes dialog.
Font size options
The Font Sizes dialog (Figure 8) specifies the font sizes for a formula. Select a base size and all elements of the formula will be scaled in relation to this base.
• Base size – all elements of a formula are proportionally scaled to the base size. To change the base size, select or type in the desired point (pt) size. You can also use other units of measure
or other metrics, which are then automatically converted to points. For example, if you enter 1in or 1”, Math converts the value to 72 pt.
• Relative Sizes – in this section, you can determine the relative sizes for each type of element with reference to the base size.
• Default – click this button to save any changes as a default for all new formulas. A confirmation message appears.
Formula fonts
Current formula fonts
To change the fonts used for the current formula in Math or another LibreOffice component:
1) Click in the markup language in the Formula Editor.
2) Go to Format > Fonts on the Menu bar to open the Fonts dialog (Figure 9).
3) Select a new font for each of the various options in the drop-down lists.
4) If the font you want to use does not appear in the drop-down list, click Modify and select the option in the context menu to open a fonts dialog. Select the font you want to use and click OK to
add it to the drop-down list for that option.
5) Click OK to save your changes and close the Fonts dialog.
Default formula fonts
To change the default fonts used for all formulas in Math or another LibreOffice component:
1) Before inserting any formulas in your document, go to Format > Fonts on the Menu bar to open the Fonts dialog (Figure 9).
2) Select a new font for each of the various options in the drop-down lists.
3) If the font you want to use does not appear in the drop-down list, click Modify and select the option in the context menu to open a fonts dialog. Select the font you want to use and click OK to
add it to the drop-down list for that option.
4) Click Default and confirm your changes to the fonts. Any formulas created from this point on will use the new font for formulas.
5) Click OK to save your changes and close the Fonts dialog.
Formula font options
Defines the fonts that can be applied to formula elements.
• Formula Fonts – defines the fonts used for the variables, functions, numbers and inserted text that form the elements of a formula.
• Custom Fonts – in this section of the Fonts dialog (Figure 9), fonts are defined which format other text components in a formula. The three basic fonts Serif, Sans, and Fixed are available. Other
fonts can be added to each standard installed basic font using the Modify button. Every font installed on a computer system is available for use.
• Modify – click one of the options in the drop-down menu to access the Fonts dialog, where the font and attributes can be defined for the respective formula and for custom fonts.
• Default – click this button to save any changes as a default for all new formulas. A confirmation message appears.
When a new font is selected for a formula, the old font remains in the list alongside the new one and can be selected again.
Variables should be written in italics, so make sure that the Italic option is selected for the font you want to use. For all other elements, use the basic form of a font. The style can be easily
altered in the formula itself by using the commands italic or bold to set these characteristics and nitalic or nbold to unset them.
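For example (an illustrative sketch), bold x displays a bold x, italic "text" italicizes the quoted text, and nitalic y removes the default italics from the variable y.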
Adjusting formula spacing
Use the Spacing dialog (Figure 10) to determine the spacing between formula elements. The spacing is specified as a percentage in relation to the defined base size for font sizes.
Figure 10: Spacing dialog
Current formula spacing
To change the spacing used for the current formula in Math or another LibreOffice component:
1) Click in the markup language in the Formula Editor.
2) Go to Format > Spacing on the Menu bar to open the Spacing dialog (Figure 10).
3) Click Category and select one of the options in the drop-down list. The options in the Spacing dialog change according to the category selected.
4) Enter new values for the spacing category and click OK.
5) Check the result in your formula. If not to your satisfaction, repeat the above steps.
Default formula spacing
To change the default spacing used for all formulas in Math or another LibreOffice component:
1) Before inserting any formulas in your document, go to Format > Spacing on the Menu bar to open the Spacing dialog (Figure 10).
2) Click Category and select one of the options in the drop-down list. The options in the Spacing dialog change according to the category selected.
3) Enter new values for the spacing category.
4) Click Default and confirm your changes to the formula spacing. Any formulas created from this point on will use the new spacing for formulas.
5) Click OK to save your changes and close the Spacing dialog.
If you have already inserted formulas into your document and you change the spacing, only formulas inserted after the change in spacing will use the new default settings. You have to individually
change the spacing of formulas already inserted if you want these formulas to use the same spacing as the default settings.
Spacing options
Use Category in the Spacing dialog (Figure 10) to determine the formula element for which you would like to specify the spacing. The appearance of the dialog depends on the selected category. A
preview window shows you which spacing is modified through the respective boxes.
• Category – pressing this button allows you to select the category for which you would like to change the spacing.
• Spacing – defines the spacing between variables and operators, between lines, and between root signs and radicals.
• Indexes – defines the spacing for superscript and subscript indexes.
• Fractions – defines the spacing between the fraction bar and the numerator or denominator.
• Fraction Bars – defines the excess length and line weight of the fraction bar.
• Limits – defines the spacing between the sum symbol and the limit conditions.
• Brackets – defines the spacing between brackets and the content.
• Excess size (left/right) – determines the vertical distance between the upper edge of the contents and the upper end of the brackets.
• Spacing – determines the horizontal distance between the contents and the upper end of the brackets.
• Scale all brackets – scales all types of brackets. If you then enter (a over b) in the Formula Editor, the brackets will surround the whole height of the argument. You normally achieve this
effect by entering left (a over b right).
• Excess size – adjusts the percentage excess size. At 0% the brackets are set so that they surround the argument at the same height. The higher the entered value is, the larger the vertical gap
between the contents of the brackets and the external border of the brackets. The field can only be used in combination with Scale all brackets.
• Line spacing – determines the spacing between matrix elements in a row.
• Column spacing – determines the spacing between matrix elements in a column.
• Primary height – defines the height of the symbols in relation to the baseline.
• Minimum spacing – determines the minimum distance between a symbol and variable.
• Excess size – determines the height from the variable to the operator upper edge.
• Spacing – determines the horizontal distance between operators and variables.
• Borders – adds a border to a formula. This option is particularly useful if you want to integrate the formula into a text file in Writer. When making settings, make sure that you do not use 0 as
a size as this creates viewing problems for text that surrounds the insertion point.
• Preview Field – displays a preview of the current selection.
• Default – saves any changes as default settings for all new formulas. A confirmation dialog will appear before saving these changes.
Adjusting formula alignment
The alignment settings determine how formula elements located above one another are aligned horizontally relative to each other.
It is not possible to align formulas on a particular character and formula alignment does not apply to text elements. Text elements are always aligned left.
Independent of using formula alignment given below, it is possible to align formulas using the commands alignl, alignc and alignr. These commands also work for text elements.
Current formula alignment
To change the alignment used for the current formula in Math or another LibreOffice component:
1) Click in the markup language in the Formula Editor.
2) Go to Format > Alignment on the Menu bar to open the Alignment dialog (Figure 11).
3) Select either Left, Centered, or Right for horizontal alignment.
4) Click OK and check the result in your formula. If not to your satisfaction, repeat the above steps.
Figure 11: Alignment dialog
Regardless of the alignment option selected in the Alignment dialog, it is possible to align sections of a formula using the commands alignl, alignc and alignr. For example, they can be useful to
align formulas in matrices. These commands also work for text elements.
Default formula alignment
To change the default alignment used for all formulas in Math or another LibreOffice component:
1) Before inserting any formulas in your document, go to Format > Alignment on the Menu bar to open the Alignment dialog (Figure 11).
2) Select either Left, Centered, or Right for horizontal alignment.
3) Click Default and confirm your changes to the formula alignment. Any formulas created from this point on will use the new alignment for formulas.
4) Click OK and check the result in your formula. If not to your satisfaction, repeat the above steps.
If you have already inserted formulas into a document and you change the formula alignment, only formulas inserted after the change in alignment will use the new default settings. You have to
individually change the alignment of formulas already inserted if you want these formulas to use the same alignment as the default settings.
Changing formula color
Character color
Formula color for the characters used in a formula is changed by using the command color in the markup language. This command only works on the formula element immediately after the color name. For
example, entering the markup language color red 5 times 4 gives the result $5×4$. Note that only the number 5 was colored red.
To change the color of the whole formula, you have to enclose the whole formula within brackets. For example, entering the markup language color red {5 times 4} gives the result $5×4$.
Math now has full support for named HTML colors. Some of them have been added to the Attributes section of the Elements dock (Figure 4). For information on the colors available in Math, see Appendix
A, Commands Reference, in the Math Guide.
Applying colors using RGB values
Alternatively, it is possible to use custom colors defined using RGB (Red, Green and Blue) values ranging from 0 to 255. This is done by using the color rgb R G B markup command, where R, G and B
correspond to the Red, Green and Blue values of the desired color.
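For example (with illustrative values), color rgb 0 0 255 {a^2 + b^2} displays the expression a² + b² in blue, since 0 0 255 corresponds to pure blue.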
Background color
It is not possible to select a background color for formulas in LibreOffice Math. The background color for a formula is by default the same color as the document or frame that the formula has been
inserted into. In LibreOffice documents, you can use object properties to change the background color for a formula. See “Background and borders” below.
Formula library
If you regularly insert the same formulas into your documents, you can create a formula library using formulas that you have created using the Formula Editor. Individual formulas can be saved as
separate files using the ODF format for formulas with the file suffix of .odf, or in MathML format with the file suffix of .mml.
You can use either LibreOffice Math, Writer, Calc, Draw, or Impress to create formulas and build up your formula library.
Using Math
1) Create a folder on your computer to contain your formulas and give the folder a memorable name, for example Formula Library.
2) In LibreOffice, go to File > New > Formula on the Menu bar, or click on Math Formula in the opening splash screen to open Math and create your formula using the Formula Editor. See “Formulas as
separate documents or files” above for more information.
3) Go to File > Save As on the Menu bar or use the keyboard shortcut Ctrl+Shift+S to open a Save As dialog.
4) Navigate to the folder you have created for your formula library.
5) Type a memorable name for the formula in the File name box.
6) Select in the File type drop-down list either ODF Formula (.odf) or MathML 2.0 (.mml) as the file type for the formula.
7) Click Save to save the formula and close the Save As dialog.
Using Writer, Calc, Draw, or Impress
1) Create a folder on your computer to contain your formulas and give the folder a memorable name, for example Formula Library.
2) Open a document using Writer, Calc, Draw, or Impress.
3) Go to Insert > Object > Formula on the Menu bar to open the Formula Editor and create your formula. See “Formulas in LibreOffice documents” above for more information.
4) Right-click on the formula object and select Save Copy as in the context menu to open a Save As dialog.
5) Navigate to the folder you have created for your formula library.
6) Type a memorable name for the formula in the File name box.
7) Select in the File type drop-down list either ODF Formula (.odf) or MathML 2.0 (.mml) as the file type for the formula.
8) Click Save to save the formula and close the Save As dialog.
Using your formula library
You cannot insert a formula from your library into a document by dragging and dropping using the mouse, nor by using Insert > File on the Menu bar. You must insert the formula as an OLE object.
1) Open the document in Writer, Calc, Draw, or Impress.
2) Go to Insert > Object > OLE Object on the Menu bar to open the Insert OLE Object dialog.
3) Select the option Create from file.
4) Click Search to open a file browser dialog.
5) Navigate to the folder you have created for your formula library.
6) Select the formula you want to insert and click Open, or double-click on the formula.
7) Click OK to insert the formula as an OLE object in the document and close the dialog.
Formulas in Writer
When a formula is inserted into a document, LibreOffice Writer inserts the formula into a frame and treats the formula as an OLE object. Double-clicking on an inserted formula will open the Formula
Editor in LibreOffice Math allowing you to edit the formula.
This section explains what options you can change for each individual formula within a Writer document. Please refer to the chapters on styles in the Writer Guide for information on how to change the
default settings for frame styles for OLE objects.
Automatic formula numbering
Automatic numbering of formulas for cross reference purposes can only be carried out in LibreOffice Writer. The easiest way to add numbered formulas in sequence is to use the AutoText entry fn (for 'formula numbered') using the following steps:
1) Start a new line in your document.
2) Type fn and then press the F3 key. A two column table with no borders is inserted into your document with the left column containing a sample formula and the right column containing a reference
number, as shown below.
3) Delete the sample formula and insert your formula as an object in the left column.
4) Alternatively, you can first insert your formula into the document, then carry out Steps 1 and 2 above replacing the sample formula with your formula.
Cross referencing
1) Click in your document where you want the cross reference to appear.
2) Go to Insert > Cross-reference on the Menu bar to open the Fields dialog (Figure 12).
3) Click on the Cross-references tab, then select Text in the Type section.
4) In the Selection section, select the formula number you want to refer to.
5) In the Insert reference to section, select Reference and click Insert.
6) When you have finished creating cross references, click Close to close the Fields dialog.
Figure 12: Fields dialog – Cross-references tab
To insert the cross reference number without parentheses, select Numbering instead of Reference in the Insert reference to section.
If you want to use square parentheses instead of round ones, or if you want the cross reference number to be separated from the formula by tabs instead of using a table, then refer to the chapter on
automatic text in the Writer Guide.
Anchoring formulas
A formula is treated as an object within Writer and its default anchoring is To character within a paragraph when it is inserted into a document. To change the anchoring of a formula object:
1) Right-click on the selected formula object and select Anchor in the context menu.
2) Select a new anchoring option in the context sub-menu. The anchoring positions available are To Page, To Paragraph, To Character, or As Character.
3) Alternatively, right-click on the selected formula object and select Properties in the context menu, or go to Format > Frame and Object > Properties on the Menu bar to open the Object dialog (
Figure 13).
4) Make sure the Type tab is selected and select a new anchoring position from the Anchor section in the upper right of the tab.
5) Click OK to save your changes and close the Object dialog.
Figure 13: Object dialog – Type tab with Anchor options
The anchoring options are not available in the Object dialog when you are making changes to the options available for frame styles. For more information on how to modify frame styles, please refer to
the chapters on styles in the Writer Guide.
Vertical alignment
The normal default setting for vertical alignment for formula objects is to use the text base line as a reference. This default setting can be changed by modifying the Formula frame style; see the
chapters on styles in the Writer Guide for more information.
To change the vertical alignment position of an individual formula object (assuming that the As character anchoring option is selected):
1) Right-click on the selected formula object and select Properties in the context menu, or go to Format > Frame and Object > Properties to open the Object dialog (Figure 13).
2) Make sure the Type tab is selected and select a new alignment position in the drop-down list in the Position section. The vertical alignment options available are Top, Bottom, Center, or From bottom.
3) If necessary, type in the text box a plus or minus value for vertical alignment. This option is only available if From bottom vertical alignment has been selected.
4) Select the type of text alignment in the drop-down list in the Position section. The text alignment options available are Base line, Character, and Row.
5) Click OK to save your changes and close the Object dialog.
If the Position section in the Object dialog is grayed out and not available, then go to Tools > Options > LibreOffice Writer > Formatting Aids and uncheck the option Math baseline alignment. This
setting is stored with the document and applies to all formulas within it. Any new documents created will also use this setting for Math baseline alignment.
Object spacing
A formula object, when inserted into a Writer document, has spacing each side of the formula object. The default value used for spacing is set within the frame style for formula objects and can be
changed by modifying the Formula frame style, see the chapters on styles in the Writer Guide for more information.
You can individually adjust the spacing for each formula object within your document as follows:
1) Create your formula in your Writer document.
2) Right-click on your selected formula object and select Properties in the context menu, or go to Format > Frame and Object > Properties on the Menu bar to open the Object dialog.
3) Click on the Wrap tab to open the Wrap page in the Object dialog (Figure 14).
4) In the Spacing section, enter the spacing value for Left, Right, Top and Bottom spacing.
5) Click OK to save your changes and close the Object dialog.
Figure 14: Object dialog – Wrap tab
Text mode
In large formulas placed within a line of text, the formula elements can often be higher than the text height. Therefore, to make large formulas easier to read, it is recommended to always insert
large formulas into a separate paragraph of their own so that it is separated from text.
However, if it is necessary to place a large formula within a line of text, double-click on the formula to open the Formula Editor and then go to Format > Text Mode on the Menu bar. The Formula
Editor will try to shrink the formula to fit the text height. The numerators and denominators of fractions are shrunk, and the limits of integrals and sums are placed beside the integral/sum sign, as
shown in the following example.
Background and borders
The default setting for background (area fill) and borders for formula objects is set by the formula frame style. To change this default setting for formula frame style, refer to the chapters on
styles in the Writer Guide. However, for individual formulas in your document, you can change the background and borders.
The size of the frame that a formula is placed in when inserted into a document cannot be changed. The frame size for a formula object depends on the setting of the formula font size, see “Formula
font size” above for more information.
1) In your document, select the formula where you want to change the background.
2) Right-click on the formula and select Properties in the context menu, or go to Format > Frame and Object > Properties on the Menu bar to open the Object dialog.
3) Click on the Area tab and use the buttons to select the type of fill you want to use for your formula (Figure 15).
4) Select the options you want to use for your formula background. The options change depending on the type of fill selected.
5) Click OK to save your changes and close the Object dialog.
Figure 15: Object dialog – Area tab
1) In your document, select the formula where you want to change the borders.
2) Right-click on the formula and select Properties in the context menu, or go to Format > Frame and Object > Properties on the Menu bar to open the Object dialog.
3) Click on the Borders tab and select the options you want to use for your formula borders (Figure 16).
4) Click OK to save your changes and close the Object dialog.
Figure 16: Object dialog – Borders tab
Quick insertion of formulas
If you know the markup language for your formula, you can quickly insert it into your Writer document without opening the Formula Editor as follows:
1) Enter the formula markup language into your document at the position where you want the formula to be placed.
2) Select the markup language.
3) Go to Insert > Object on the Menu bar and select Formula to create a formula from the selected markup language.
4) Alternatively you can use the key combination Ctrl + Insert to open the “Insert OLE Object” dialog and then select Formula.
Formulas in Calc, Draw, and Impress
In Calc, Draw, and Impress, formulas are inserted as OLE objects without any background (area fill) or borders. Each formula object is inserted into a spreadsheet, drawing, or slide as follows:
• In Calc, formulas are inserted into a selected cell in a spreadsheet with no style assigned to the formula object.
• In Draw and Impress, formulas are inserted into a central position on your drawing or slide and, by default, are assigned the drawing object style Object with no fill and no line. For more
information on how to modify or assign drawing object styles, see the Draw Guide or the Impress Guide.
Anchoring formulas
A formula object can be anchored into a spreadsheet as To Page (default setting), or as To Cell. To change the anchoring type of formulas in a Calc spreadsheet:
1) Select the formula object in the spreadsheet.
2) Right-click on the formula and select Anchor > To Page or To Cell in the context menu
3) Alternatively, go to Format > Anchor on the Menu bar and select To Page or To Cell.
If you insert a formula into a Calc spreadsheet and it appears out of scale, you can fix it by right-clicking the formula object and then selecting the Original Size option in the context menu.
Draw and Impress
When a formula is inserted into a drawing or slide, it is inserted as a floating OLE object and is not anchored to any particular position in a drawing or slide.
Formula object properties
Formula objects in Calc, Draw, and Impress can be modified just like any other object that has been placed in your spreadsheet, drawing, or presentation, with the exception of formula object size and
changing the format of any text within a formula. For more information on how to change object properties, see the Calc Guide, Draw Guide and Impress Guide.
The following points will help you select which dialog to use if you want to change the properties of formula objects.
• For formula backgrounds, use the various options in the tabs of the Area dialog.
• For formula borders, use the various options in the Line dialog. Note that formula borders are separate from cell borders in a Calc spreadsheet.
• To accurately re-position a formula object, use the various options in tabs of the Position and Size dialog.
• In Draw and Impress, you can arrange, align, group, flip, convert, break, combine, and edit points of formula objects.
• You cannot change the text attributes of a formula object. The text used in a formula is set when you create the formula in the Formula Editor.
• Formula object size is set by the formula font size when the formula is created in the Formula Editor. The formula object size is protected in the Position and Size dialog, but this can be
deselected if you wish. However, this is not recommended as resizing a formula object using the Position and Size dialog could lead to distortion of a formula making it difficult to read.
Formulas in charts
A chart in a Calc spreadsheet is itself an OLE object, therefore, you cannot use the Formula Editor to create and insert a formula directly into a chart. However, you can create both the Chart and
Math objects separately and later copy and paste the Math formula into the Chart object:
1) Create the chart using LibreOffice Calc. For a complete reference on how to create charts, see Chapter 3 in the Calc Guide.
2) Click at any cell in your spreadsheet so that the Chart is no longer selected.
3) Insert a Math Formula object by clicking Insert > Object > Formula.
4) Type the desired formula into the Formula Editor.
5) After editing the formula, select the Math Formula object and press Ctrl+C to copy the Formula object to the clipboard.
6) Double-click the chart object to start editing the chart and press Ctrl+V to paste the Formula object into the chart.
7) Now you can position the object anywhere you want inside the chart.
If you want to change the formula at a later date, then you must repeat the whole process of creating, copying, and pasting the Formula object into the chart.
Customization
This section explains how you can customize Math to suit the way you create formulas for use in LibreOffice documents. Also, refer to Chapter 14, Customizing LibreOffice, for more general information
on how to customize LibreOffice.
Floating dialogs
The Formula Editor and Elements dock can cover a large part of your document. To help create more space and/or allow you to move either the Formula Editor or Elements dock out of the way, you can
turn both of them into floating dialogs.
1) Position the cursor on the frame.
2) Hold down the Ctrl key and double-click. This turns the Formula Editor into the Commands dialog (Figure 17) and the Elements dock into the Elements dialog (Figure 18).
Figure 17: Commands dialog
Figure 18: Elements dialog
To return the Commands dialog and Elements dialog back to their default positions:
1) Position the cursor on the frame of the dialog, NOT the title bar at the top of the dialog.
2) Hold down the Ctrl key and double-click.
Adding keyboard shortcuts
You can add keyboard shortcuts to LibreOffice to make creating documents much easier and to match your workflow. See Chapter 14, Customizing LibreOffice, for instructions.
Catalog customization
If you regularly use a symbol that is not available in Math, you can add it to the Symbols dialog (Figure 19) by opening the Edit Symbols dialog (Figure 20).
Using the Edit Symbols dialog you can add symbols to a symbol set, edit symbol sets, or modify symbol notations. You can also define new symbol sets, assign names to symbols, or modify existing
symbol sets.
Adding symbols
1) Go to Tools > Symbols on the Menu bar or click on the Symbols icon on the Tools toolbar to open the Symbols dialog.
2) Click the Edit button to open the Edit Symbols dialog.
3) Select a font in the Font drop-down list.
4) Select a symbol character that you want to add in the preview box. You may have to scroll down in the preview box to locate the symbol you want to use. The small right preview box displays the
new symbol.
5) In the Symbol box, type a memorable name for the symbol you are adding.
6) In the Symbol set box, select a symbol set in the drop-down list to add your new symbol to, or type a new name to create a new symbol set for your new symbol.
7) If required, select a font style in the Style drop-down list – Standard, Italic, Bold, or Bold, Italic.
8) Click Add, then click OK to close the Edit Symbols dialog. The new symbol and, if created, new symbol set are now available for use.
Figure 19: Symbols dialog
Figure 20: The Edit Symbols dialog
When a new symbol is added to the catalog, you can type a percentage sign (%) followed by the new name into the markup language in the Formula Editor and your new symbol will appear in your formula.
Remember that symbol names are case sensitive, for example, %prime is a different symbol to %Prime.
There are numerous free fonts available that contain several symbols if you cannot find a symbol to use in the fonts already installed on your computer. For example, the STIX font was developed
specially for writing mathematical and technical texts. Also, the DejaVu and Lucida fonts have a wide range of symbols that you can use.
When a document is saved, only those user-defined symbols that actually occur in the document are stored with it. Sometimes it is useful to embed all the user-defined symbols, so that when the document is transferred to another computer it can be edited by another person. Go to Tools > Options > LibreOffice Math > Settings and uncheck the option Embed only used symbols (smaller file size). This setting is only available when you are working with LibreOffice Math.
Editing symbols
Modifying symbol names
You can change the name of a symbol as follows:
1) Select the symbol name you want to change in the Old symbol drop-down list. The symbol appears in the left preview pane at the bottom of the Edit Symbols dialog (Figure 20).
2) Type a new name for the symbol in the Symbol text box, or select a new name in the Symbol drop-down list. The new symbol name appears above the right preview pane at the bottom of the Edit
Symbols dialog.
3) Click Modify and the symbol name is changed.
4) Click OK to close the Edit Symbols dialog.
Moving symbols
You can move a symbol from one symbol set to another as follows:
1) In the Old symbol set drop-down list, select the symbol set where the symbol you want to move is located.
2) Select the symbol name you want to move in the Old symbol drop-down list. The symbol appears in the left preview pane at the bottom of the Edit Symbols dialog (Figure 20).
3) In the Symbol set drop-down list, select the symbol set that you want to move the symbol to. The new symbol set name appears below the right preview pane at the bottom of the Edit Symbols dialog.
4) Click Modify and the symbol is moved to the new symbol set.
5) Click OK to close the Edit Symbols dialog.
Deleting symbols
You can delete a symbol from a symbol set as follows:
1) In the Old symbol set drop-down list, select the symbol set from which you want to delete the symbol.
2) Select the symbol name you want to delete in the Old symbol drop-down list. The symbol appears in the left preview pane at the bottom of the Edit Symbols dialog (Figure 20).
3) Click Delete and the symbol is deleted from the symbol set without any confirmation.
4) Click OK to close the Edit Symbols dialog.
The only way you can delete a symbol set is by deleting all of the symbols in that set. When you delete the last symbol from a set, the set is also deleted.
Options for editing symbols
For details of the fields in the Edit Symbols dialog (Figure 20), please refer to the Math Guide.
Formula spacing
The grave accent (`) inserts an additional small space and the tilde (~) inserts an additional large space into formulas. However, in the basic installation of LibreOffice, these symbols are ignored
when they occur at the end of a formula. If you are working with running text in a formula, it may be necessary to include spacing at the end of formulas as well. This customization is only required
when you are working with a Math document and is not required when you are inserting a formula into another LibreOffice component.
To add spacing at the end of a formula in Math, go to Tools > Options > LibreOffice Math > Settings on the Menu bar and uncheck Ignore ~ and ` at the end of the line in the Miscellaneous Options section.
Extensions
If you create formulas frequently in your documents, you can customize LibreOffice by adding extensions that are designed to help you create formulas. Extensions are easily installed using the
Extension Manager. For more information on how to install extensions, see Chapter 14, Customizing LibreOffice.
A commonly used extension is Formatting of All Math Formulas. It allows you to format all Math formulas in your Writer, Calc, Draw or Impress document. With it you can change the font names and font
sizes of all formulas in your document. For more information on this extension, go to https://extensions.libreoffice.org/extensions/show/formatting-of-all-math-formulas.
Exporting and Importing
MathML format
In addition to exporting documents as PDFs, LibreOffice offers the possibility of saving formulas in the MathML format. This allows you or another person to insert formulas into documents that were
created in other software, for example, Microsoft Office or an internet browser.
Some internet browsers do not fully support the MathML format and your formula may not display correctly.
If you are working on a Math document, go to File > Save as on the Menu bar or use the keyboard combination Ctrl+Shift+S to open the Save as dialog. Select MathML in the list of available file
formats in File type: to save your formula as MathML.
If you are working in another LibreOffice module, right-click on the formula object and select Save copy as in the context menu to open the Save as dialog. Select MathML in the list of available file
formats in “File type” to save your formula object as MathML.
In Math you can also import MathML formulas. Use Tools > Import MathML from clipboard on the Menu bar.
Microsoft file formats
To control how formulas in Microsoft format are imported and exported using LibreOffice, go to Tools > Options > Load/Save > Microsoft Office on the Menu bar and select or deselect the options for
MathType to LibreOffice Math or reverse.
• [L]: Load and convert the object
• [S]: Convert and save the object
[L]: Load and convert the object
Select this option if Microsoft OLE objects are to be converted into the specified LibreOffice OLE object when a Microsoft document is opened in LibreOffice. For formulas, any embedded MathType
objects must not exceed the MathType 3.1 specifications to be successfully loaded and converted. Information on MathType format can be found on the website http://www.dessci.com/en.
If a document containing OMML formulas has been saved in .docx format and then converted to the older .doc format, then any OMML objects are converted into graphics, which will be displayed in
LibreOffice as graphics.
[S]: Convert and save the object
Select this option if LibreOffice OLE objects are to be converted and saved in Microsoft file format. LibreOffice converts any formulas into a format that can be read and modified by Microsoft
Equation Editor and MathType.
When this option is not selected, the formula is treated as an OLE object on conversion into a .doc format and remains linked to LibreOffice. A double-click on the object in Microsoft Office will
attempt to launch LibreOffice.
Mathematical objects in Chrono
This documentation component focuses on Chrono mathematical functions and classes. These concepts are quite ubiquitous in the rest of the Chrono API and they are discussed also in the Linear algebra
API documentation.
Linear algebra
Handling vectors and matrices is a recurring theme throughout the Chrono API. Chrono uses Eigen3 for representing all matrices (dense and sparse) and vectors.
Dense matrices in Chrono are templated by the scalar type and have row-major storage order. All Chrono matrix and vector types below are simply aliases to appropriate Eigen matrix types; see Linear
algebra and the ChMatrix.h header.
The dedicated 3D and 2D vector classes, used to represent 3D vectors in space and 2D vectors in a plane, respectively, are not Eigen types nor derived from Eigen matrices.
Matrices are indexed starting from 0, with (row,column) indexing:
\[ \mathbf{A}=\left[ \begin{array}{cccc} a_{0,0} & a_{0,1} & a_{0,2} & \dots \\ a_{1,0} & a_{1,1} & a_{1,2} & \dots \\ a_{2,0} & \dots & \dots & \dots \\ a_{n_{rows}-1,0} & \dots & \dots & a_{n_{rows}-1,n_{cols}-1} \end{array} \right] \]
There are many matrix and vector specializations and some of their basic features are outlined next.
Dynamic size matrices. Use ChMatrixDynamic to create a matrix with generic size, say 12 rows x 4 columns. ChMatrixDynamic is templated by the scalar type, with double as the default.
Fixed size matrices. Use ChMatrixNM to create matrices that do not need to be resized and whose size is known at compile-time.
From the Eigen documentation:
When should one use fixed sizes and when should one prefer dynamic sizes? The simple answer is: use fixed sizes for very small sizes where you can, and use dynamic sizes for larger sizes or where you
have to. For small sizes, especially for sizes smaller than (roughly) 16, using fixed sizes is hugely beneficial to performance, as it allows Eigen to avoid dynamic memory allocation and to unroll loops.
The limitation of using fixed sizes, of course, is that this is only possible when you know the sizes at compile time. Also, for large enough sizes, say for sizes greater than (roughly) 32, the
performance benefit of using fixed sizes becomes negligible. Worse, trying to create a very large matrix using fixed sizes inside a function could result in a stack overflow, since Eigen will try to
allocate the array automatically as a local variable, and this is normally done on the stack. Finally, depending on circumstances, Eigen can also be more aggressive trying to vectorize (use SIMD
instructions) when dynamic sizes are used.
3x3 fixed size matrices. Use ChMatrix33 to create 3x3 matrices, which are mostly used to represent rotation matrices and 3D inertia tensors.
ChMatrix33 is templated by the scalar type (with double as the default). This matrix type is derived from a 3x3 fixed-size Eigen matrix with row-major storage and offers several dedicated
constructors and methods for coordinate and rotation operations.
Dynamic size column vectors. Use ChVectorDynamic to create a column vector (one-column matrix) with a generic number of rows.
Fixed size column vectors. Use ChVectorN to create a column vector with fixed length (known at compile time).
Row vectors. Use ChRowVectorDynamic and ChRowVectorN to create row vectors (one-row matrices) with dynamic size and fixed size, respectively.
In addition to the above types, specialized 3x4, 4x3, and 4x4 matrices used in multibody formalism are defined in ChMatrixMBD.h.
Basic operations with matrices
Consult the Eigen API for all matrix and vector arithmetic operations, block operations, and linear system solution.
demo_CH_linalg.cpp illustrates basic operations with matrices.
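As a minimal sketch (not taken from the demo; the header path is assumed to follow the usual Chrono layout and double is the default scalar type), typical usage looks like this:
// Minimal usage sketch (assumed header path; double is the default scalar type)
#include "chrono/core/ChMatrix.h"
#include <iostream>
using namespace chrono;
void linalg_example() {
    ChMatrixDynamic<> A(4, 3);          // 4x3 dense matrix, dynamic size
    A.setRandom();                      // the full Eigen API is available
    ChMatrixNM<double, 3, 3> B;         // fixed-size 3x3 matrix
    B.setIdentity();
    ChVectorDynamic<> v(3);             // column vector with 3 entries
    v << 1.0, 2.0, 3.0;
    ChVectorDynamic<> w = A * (B * v);  // standard Eigen arithmetic
    std::cout << "w = " << w.transpose() << std::endl;
    A.block(0, 0, 2, 2).setZero();      // block operations also come from Eigen
}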
Function objects
ChFunction objects are used in many places in Chrono::Engine to represent y=f(x) functions, for example when introducing prescribed displacements in a linear actuator.
These functions are scalar,
\[ x \in \mathbb{R} \rightarrow y \in \mathbb{R} \]
and there are a predefined number of them that are ready to use, such as sine, cosine, constant, etc. If the predefined ones are not enough, the user can implement his custom function by inheriting
from the base ChFunction class.
See ChFunction for API details and a list of subclasses.
Example 1
ChFunctionRamp f_ramp;
f_ramp.SetAngularCoeff(0.1); // set angular coefficient;
f_ramp.SetStartVal(0.4); // set y value for x=0;
// Evaluate y=f(x) function at a given x value, using GetVal() :
double y = f_ramp.GetVal(10);
// Evaluate derivative df(x)/dx at a given x value, using GetDer() :
double ydx = f_ramp.GetDer(10);
std::cout << " ChFunctionRamp at x=0: y=" << y << " dy/dx=" << ydx << std::endl;
Example 2
Save values of a sine ChFunction into a file.
ChFunctionSine f_sine;
f_sine.SetAmplitude(2); // set amplitude;
f_sine.SetFrequency(1.5); // set frequency;
std::ofstream file_f_sine ("f_sine_out.dat");
// Evaluate y=f(x) function along 100 x points, and its derivatives,
// and save to file (later it can be loaded, for example, in Matlab)
for (int i = 0; i < 100; i++) {
    double x = (double)i / 50.0;
    double y = f_sine.GetVal(x);
    double ydx = f_sine.GetDer(x);
    double ydxdx = f_sine.GetDer2(x);
    file_f_sine << x << " " << y << " " << ydx << " " << ydxdx << std::endl;
}
Example 3
Define a custom function.
The following class will be used as an example of how you can create custom functions based on the ChFunction interface.
There is at least one mandatory member function to implement: GetVal.
Note that the base class implements a default computation of derivatives GetDer() and GetDer2() by using a numerical differentiation, however if you know the analytical expression of derivatives, you
can override the base GetDer() and GetDer2() too, for higher precision.
// First, define a custom class inherited from ChFunction
class ChFunctionMyTest : public ChFunction {
  public:
    ChFunction* new_Duplicate() { return new ChFunctionMyTest; }
    double GetVal(double x) { return cos(x); }  // just for test: simple cosine
};

ChFunctionMyTest f_test;
std::ofstream file_f_test("f_test_out.dat");

// Evaluate y=f(x) function along 100 x points, and its derivatives,
// and save to file (later it can be loaded, for example, in Matlab)
for (int i = 0; i < 100; i++) {
    double x = (double)i / 50.0;
    double y = f_test.GetVal(x);
    double ydx = f_test.GetDer(x);
    double ydxdx = f_test.GetDer2(x);
    file_f_test << x << " " << y << " " << ydx << " " << ydxdx << std::endl;
}
Quadrature
Quadrature is an operation that computes integrals, as when computing areas and volumes.
The following code shows how to use the Gauss-Legendre quadrature to compute the integral of a function
\( f: \mathbb{R} \mapsto \mathbb{R} \) over a 1D interval, or \( f: \mathbb{R}^2 \mapsto \mathbb{R}\) over a 2D interval or \( f: \mathbb{R}^3 \mapsto \mathbb{R}\) over a 3D interval:
\[ F_{1D}=\int^a_b f(x) dx \]
\[ F_{2D}=\int^{a_y}_{b_y}\int^{a_x}_{b_x} f(x,y) dx dy \]
\[ F_{3D}=\int^{a_z}_{b_z}\int^{a_y}_{b_y}\int^{a_x}_{b_x} f(x,y,z) dx dy dz \]
If the function is polynomial of degree N and the quadrature is of order N, the result is exact, otherwise it is approximate (using large N improves quality but remember that this type of integration
is often used where N in the range 1..10 suffices, otherwise other integration methods might be better).
For N less than 10, the quadrature uses precomputed coefficients for maximum performance.
// Define a y=f(x) function by inheriting ChIntegrable1D:
class MySine1d : public ChIntegrable1D<double> {
  public:
    void Evaluate(double& result, const double x) {
        result = sin(x);
    }
};

// Create an object from the function class
MySine1d mfx;

// Invoke 6th order Gauss-Legendre quadrature on 0..PI interval:
double qresult;
ChQuadrature::Integrate1D<double>(qresult, mfx, 0, CH_PI, 6);
std::cout << "Quadrature 1d result: " << qresult << " (analytic solution: 2.0)" << std::endl;
// Other quadrature tests, this time in 2D
class MySine2d : public ChIntegrable2D<double> {
  public:
    void Evaluate(double& result, const double x, const double y) { result = sin(x); }
};

MySine2d mfx2d;
ChQuadrature::Integrate2D<double>(qresult, mfx2d, 0, CH_PI, -1, 1, 6);
std::cout << "Quadrature 2d result: " << qresult << " (analytic solution: 4.0)" << std::endl;
Note that thanks to templating, one can also integrate m-dimensional (vectorial, tensorial) functions \( \mathbf{f}: \mathbb{R}^n \mapsto \mathbb{R}^m \) , for example:
\[ \mathbf{F}=\int^{a_y}_{b_y}\int^{a_x}_{b_x} \mathbf{f}(x,y) dx dy \quad \mathbf{F} \in \mathbb{R}^2 \]
class MySine2dM : public ChIntegrable2D<ChMatrixNM<double, 2, 1>> {
  public:
    void Evaluate(ChMatrixNM<double, 2, 1>& result, const double x, const double y) {
        result(0) = x * y;
        result(1) = 0.5 * y * y;
    }
};

MySine2dM mfx2dM;
ChMatrixNM<double, 2, 1> resultM;
ChQuadrature::Integrate2D<ChMatrixNM<double, 2, 1>>(resultM, mfx2dM, 0, 1, 0, 3, 6);
std::cout << "Quadrature 2d matrix result: " << resultM << " (analytic solution: 2.25, 4.5)" << std::endl;
The type aliases referenced above are defined in ChMatrix.h:
• ChMatrixDynamic – Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>: dense matrix with dynamic size (unknown at compile time) and row-major storage.
• ChMatrixNM – Eigen::Matrix<T, M, N, Eigen::RowMajor>: dense matrix with fixed size (known at compile time) and row-major storage.
• ChVectorDynamic – Eigen::Matrix<T, Eigen::Dynamic, 1, Eigen::ColMajor>: column vector with dynamic size.
• ChVectorN – Eigen::Matrix<T, N, 1>: column vector with fixed size (known at compile time).
Exercise: How many numbers are equal?
Course of Raku / Essentials / Conditional checks / Exercises
How many numbers are equal?
Create a program that takes three integer numbers from the user and tells how many of them are equal. The program should be able to work with both positive and negative numbers.
A possible run of the program is shown below:
$ raku how-many-equal-numbers.raku
Number 1: 3
Number 2: -4
Number 3: 3
If all three numbers are different, the program prints 0.
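One possible solution (a sketch, not the official course solution) reads the three integers with prompt and counts how many are equal:
my $a = prompt('Number 1: ').Int;
my $b = prompt('Number 2: ').Int;
my $c = prompt('Number 3: ').Int;

if $a == $b == $c {
    say 3;          # all three numbers are equal
}
elsif $a == $b or $a == $c or $b == $c {
    say 2;          # exactly two numbers are equal
}
else {
    say 0;          # all three numbers are different
}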
U.S. Code of Federal Regulations
§ 60.4213 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI internal combustion engine with a displacement of greater than or equal to 30 liters per cylinder?
Owners and operators of stationary CI ICE with a displacement of greater than or equal to 30 liters per cylinder must conduct performance tests according to paragraphs (a) through (f) of this section.
(a) Each performance test must be conducted according to the requirements in § 60.8 and under the specific conditions that this subpart specifies in table 7. The test must be conducted within 10
percent of 100 percent peak (or the highest achievable) load.
(b) You may not conduct performance tests during periods of startup, shutdown, or malfunction, as specified in § 60.8(c).
(c) You must conduct three separate test runs for each performance test required in this section, as specified in § 60.8(f). Each test run must last at least 1 hour.
(d) To determine compliance with the percent reduction requirement, you must follow the requirements as specified in paragraphs (d)(1) through (3) of this section.
(1) You must use Equation 2 of this section to determine compliance with the percent reduction requirement:
R = ((Ci − Co) / Ci) × 100   (Eq. 2)
Where:
Ci = concentration of NOX or PM at the control device inlet,
Co = concentration of NOX or PM at the control device outlet, and
R = percent reduction of NOX or PM emissions.
(2) You must normalize the NOX or PM concentrations at the inlet and outlet of the control device to a dry basis and to 15 percent oxygen (O2) using Equation 3 of this section, or an equivalent percent carbon dioxide (CO2) using the procedures described in paragraph (d)(3) of this section.
Cadj = Cd × 5.9 / (20.9 − %O2)   (Eq. 3)
Where:
Cadj = Calculated NOX or PM concentration adjusted to 15 percent O2.
Cd = Measured concentration of NOX or PM, uncorrected.
5.9 = 20.9 percent O2 − 15 percent O2, the defined O2 correction value, percent.
%O2 = Measured O2 concentration, dry basis, percent.
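As an illustration (values chosen arbitrarily, not taken from the rule): a measured concentration Cd of 100 ppm at a measured O2 concentration of 10 percent gives Cadj = 100 × 5.9 / (20.9 − 10) ≈ 54 ppm.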
(3) If pollutant concentrations are to be corrected to 15 percent O2 and CO2 concentration is measured in lieu of O2 concentration measurement, a CO2 correction factor is needed. Calculate the CO2
correction factor as described in paragraphs (d)(3)(i) through (iii) of this section.
(i) Calculate the fuel-specific Fo value for the fuel burned during the test using values obtained from Method 19, Section 5.2, and the following equation:
Fo = 0.209 × Fd / Fc   (Eq. 4)
Where:
Fo = Fuel factor based on the ratio of O2 volume to the ultimate CO2 volume produced by the fuel at zero percent excess air.
0.209 = Fraction of air that is O2, percent/100.
Fd = Ratio of the volume of dry effluent gas to the gross calorific value of the fuel from Method 19, dsm³/J (dscf/10⁶ Btu).
Fc = Ratio of the volume of CO2 produced to the gross calorific value of the fuel from Method 19, dsm³/J (dscf/10⁶ Btu).
(ii) Calculate the CO2 correction factor for correcting measurement data to 15 percent O2, as follows:
XCO2 = 5.9 / Fo
Where: XCO2 = CO2 correction factor, percent. 5.9 = 20.9 percent O2−15 percent O2, the defined O2 correction value, percent.
(iii) Calculate the NOX and PM gas concentrations adjusted to 15 percent O2 using CO2 as follows:
Cadj = Cd × (XCO2 / %CO2)
Where: Cadj = Calculated NOX or PM concentration adjusted to 15 percent O2. Cd = Measured concentration of NOX or PM, uncorrected. %CO2 = Measured CO2 concentration, dry basis, percent.
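For orientation only, the corrections in paragraphs (d)(1) through (d)(3) can be expressed as a short script. The formulas used here are the ones reconstructed above from the variable definitions; they are illustrative and should be verified against the official regulatory text and the referenced test methods before any compliance use.

# Illustrative sketch of the percent-reduction and 15-percent-O2 corrections.
def percent_reduction(c_inlet, c_outlet):
    # Equation 2: percent reduction of NOX or PM across the control device
    return (c_inlet - c_outlet) / c_inlet * 100.0

def correct_to_15_pct_o2(c_measured, pct_o2):
    # Equation 3: adjust a dry-basis concentration to 15 percent O2
    return c_measured * 5.9 / (20.9 - pct_o2)

def fuel_factor_fo(fd, fc):
    # Paragraph (d)(3)(i): fuel-specific Fo from Method 19 Fd and Fc values
    return 0.209 * fd / fc

def correct_to_15_pct_o2_from_co2(c_measured, pct_co2, fd, fc):
    # Paragraphs (d)(3)(ii)-(iii): correction when CO2 is measured instead of O2
    x_co2 = 5.9 / fuel_factor_fo(fd, fc)   # CO2 correction factor, percent
    return c_measured * x_co2 / pct_co2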
(e) To determine compliance with the NOX mass per unit output emission limitation, convert the concentration of NOX in the engine exhaust using Equation 7 of this section:
ER = (Cd × 1.912x10−3 × Q × T) / KW-hour (Equation 7)
Where: ER = Emission rate in grams per KW-hour. Cd = Measured NOX concentration in ppm. 1.912x10−3 = Conversion constant for ppm NOX to grams per standard cubic meter at 25 degrees Celsius. Q = Stack
gas volumetric flow rate, in standard cubic meter per hour. T = Time of test run, in hours. KW-hour = Brake work of the engine, in KW-hour.
(f) To determine compliance with the PM mass per unit output emission limitation, convert the concentration of PM in the engine exhaust using Equation 8 of this section:
ER = (Cadj × Q × T) / KW-hour (Equation 8)
Where: ER = Emission rate in grams per KW-hour. Cadj = Calculated PM concentration in grams per standard cubic meter. Q = Stack gas volumetric flow rate, in standard cubic meter per hour. T = Time of
test run, in hours. KW-hour = Energy output of the engine, in KW. [71 FR 39172, July 11, 2006, as amended at 76 FR 37971, June 28, 2011] | {"url":"https://old.govregs.com/regulations/title40_chapterI-i7_part60_subpartIIII_subjgrp279_section60.4213","timestamp":"2024-11-09T22:10:43Z","content_type":"text/html","content_length":"20272","record_id":"<urn:uuid:bc7c98a5-f7ee-4a0b-bec1-f3856b5ac7f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00386.warc.gz"} |
Under the Hood: Formula One Elo Ratings
Over the last several years, Elo ratings have expanded beyond chess to become a popular framework for calculating power ratings for teams in a number of professional sports leagues, as well as for international teams. Elo provides an excellent framework for initial analysis of teams or players, especially when the competition involves events, moves, or strategy which are difficult to quantify at a fine
granularity. Some sports for which there are Elo ratings -- such as basketball and baseball -- have been analyzed in more detail, and we now have a better understanding of how to evaluate both teams
and individuals.
And some websites just like to churn out Elo rating
systems because they say "Elo" around the office
the same way Kristian Nairn says "Hodor" at work.
However there are many sports -- notably hockey, soccer, and football -- which are more difficult to pick apart. Football and soccer have more players than basketball or baseball. Soccer and hockey
don't really have solidly defined possessions in the same way as baseball or basketball. At a first approximation, the outcome of a game -- and its score -- are what we have to go on.
This is where Elo comes in.
Elo is well suited for these situations for a handful of reasons. Elo rating systems
• only need to know who won and who lost;
• can provide win probabilities prior to a game; and
• work well in a (relatively) closed system in which there are a fixed number of teams or players.
Formula One
In this regard, Elo ratings are a good framework for approaching Formula One. The question is how to structure the system and choose certain parameters to handle the nature of Formula One. Elo rating
systems have a few parameters to suss out, including
• the number of initial points each participant gets;
• the magnitude of points transferred from losers to winner (the K-factor); and
• the potential separation of teams or players into divisions.
Formula One is also potentially vulnerable to
Elo inflation
. This occurs when marginal players or teams overperform for long enough to be promoted into the professional tier, lose repeatedly (thereby dumping their points into the pool at the top tier) and
then retire without having claimed any points. Over time this leads to a situation in which top players or teams have significantly higher ratings than their equally-talented counterparts from
previous years. It becomes difficult -- if not impossible -- to compare players or teams between eras.
The specific application of Elo to Formula One is this:
every race is a round-robin tournament in which you compete with every other driver
. Therefore if you finish third in a field of 18 cars, you are said to have tallied 15 wins (the cars behind you) and two losses (the cars in front of you). Furthermore, because the data set does not
indicate if DNFs are due to crashes (presumably partially the driver's fault) or a mechanical failure (presumably the car's fault),
drivers that do not finish the race are treated as if they did not compete at all.
In the Elo system, the K-factor controls the magnitude of the swing between winners and losers. A large K-factor incorporates new information rapidly, while a small K-factor resists overreacting to
new results. The
World Chess Federation
uses a range of factors depending on the experience of the players; new players use a factor of 40, while experienced players can use a factor as small as 10. The
NBA Elo model at FiveThirtyEight
uses a fixed K-factor of 20, which incorporates new results at a relatively high rate.
Ultimately we settled on a modified version of a K-factor scale formerly used by the U.S. Chess Federation:
K = Adj[Type] * max((800 / Ne), 16)
In this formulation, Ne is the total number of races in which a driver has participated. There is an effective floor on the K-factor of 16. Effectively this means that during the first two-and-a-half seasons of a driver's
Formula One career they will have a relatively large K-factor.
On top of the experience adjustment, there is a multiplier based on the type of result: was this a qualifying session, or a full race? The race adjustment is simply 1.0, whereas the multiplier for
qualifying is 0.1. Qualifying involves only a handful of laps, but has the advantage of reliably including every car in the field. This is especially important when we go back more than twenty years,
when anywhere from 20-40% of the cars would fail to finish due to mechanical failures.
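As a rough illustration, the K-factor rule above translates directly into code. The race (1.0) and qualifying (0.1) multipliers are the ones quoted in the text; the guard against a zero race count is our own addition.

# Sketch of the experience- and session-adjusted K-factor described above.
ADJ = {"race": 1.0, "qualifying": 0.1}

def k_factor(races_entered, session_type="race"):
    n_e = max(races_entered, 1)            # guard for a driver's debut event
    return ADJ[session_type] * max(800 / n_e, 16)

# A driver with 10 starts gets K = 80 for a race (800/10); a veteran with
# 100 starts sits at the floor of 16.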
Once we have a K-factor for each driver, we can adjust the ratings of each driver based on the results of a race or full qualifying session. For each pair of drivers a and b, we can calculate the expected probability that a would defeat (or finish in front of) b. (The details of this model can be found on the Elo wiki page.) For each driver we can calculate these odds and then sum them up. This gives us the expected score, while the result gives us the actual score. A player's rating is adjusted as follows:
R'[a] = R[a] + K(S[a] - E[a])
Let's say that a driver with a rating of 1200 races against drivers with ratings of 1000, 800, and 600. The sum of the individual win probabilities would be 2.638. If that driver has a K-factor of
16 and finishes first (i.e., wins 3.0 contests), their new rating would be
1200 + 16(3.0 - 2.638) = ~1206
However if the driver with a rating of 600 (and similar experience) wins, their new score would be
600 + 16(3.0 - 0.362) = ~642
Sharp-eyed readers may have noticed that this rebalancing is only point-neutral if all drivers have the same K-factor. However this is effectively never the case, as there will always be drivers in
the point-swing-heavy early portions of their careers. In order to get everything to balance out, we calculate an adjustment ratio for each race. That is, we take the sum of the total points of all
drivers before the race and divide it by the sum of the total points of all drivers after the race. We then multiply each driver's post-race rating by this ratio, preserving the total number of
points in the pool after each race.
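Pulling those pieces together, one race (or qualifying) update looks roughly like the sketch below: each driver's expected score is the sum of pairwise Elo win probabilities against the other classified finishers, the rating moves by K times (actual minus expected), and the post-race ratings are rescaled so the participants' point pool is unchanged. The data structures and names here are ours, not the original model's.

def expected_score(r_a, r_b):
    # Probability that a driver rated r_a finishes ahead of one rated r_b
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_race(ratings, k, results):
    # ratings: driver -> Elo rating; k: driver -> K-factor
    # results: list of (driver, finishing_position) for classified finishers
    drivers = [d for d, _ in results]
    position = dict(results)
    pool_before = sum(ratings[d] for d in drivers)

    updated = dict(ratings)
    for d in drivers:
        expected = sum(expected_score(ratings[d], ratings[o]) for o in drivers if o != d)
        actual = sum(1.0 for o in drivers if o != d and position[d] < position[o])
        updated[d] = ratings[d] + k[d] * (actual - expected)

    # Rescale so the total points among the participants are conserved even
    # when drivers carry different K-factors.
    ratio = pool_before / sum(updated[d] for d in drivers)
    for d in drivers:
        updated[d] *= ratio
    return updated

With the numbers from the example above (a 1200-rated driver with K = 16 beating drivers rated 1000, 800, and 600), the expected score comes out to about 2.638 and the winner's new rating to roughly 1206 before rescaling.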
Elo Inflation
However we still run into the situation where young drivers are brought into Formula One for a season (or even a few races), fail miserably, dump all their points into the pool, and exit. There are
two main ways we fight inflation:
1. Revert each driver back to the mean slightly at the end of each year; and
2. Use a smaller, fixed K-factor for new drivers for a number of races.
The first approach I blatantly
borrowed from the FiveThirtyEight NBA Elo ratings.
This forces everyone back to the mean slightly each season, preventing long-term inflation.
The second approach limits the rate at which new drivers can bleed points into the pool until they've been around for a little while. Currently both the K-factor and the minimum number of races are
set at 16, meaning that for a driver's first season they're somewhat capped in how many points they can lose. Note that because of how Elo works, though, excellent rookie drivers such as Villeneuve
or Hamilton can still climb the ranks at the same rate as experienced drivers.
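In code, the two anti-inflation measures are small helpers like the ones below. The article does not state the reversion fraction or the target mean, so the 1500 mean and one-quarter reversion here are placeholders; the rookie cap of a fixed K of 16 for the first 16 races matches the text, and ADJ is the session multiplier from the K-factor sketch earlier.

ROOKIE_K, ROOKIE_RACES = 16, 16

def season_end_reversion(rating, mean=1500.0, fraction=0.25):
    # Pull every driver part of the way back toward the mean after each season.
    return rating + fraction * (mean - rating)

def capped_k(races_entered, session_type="race"):
    # New drivers use a small fixed K until they have ROOKIE_RACES starts,
    # then fall back to the experience-based rule sketched earlier.
    if races_entered < ROOKIE_RACES:
        return ADJ[session_type] * ROOKIE_K
    return ADJ[session_type] * max(800 / races_entered, 16)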
Putting it all together
In the end, after crunching 36 years of Formula One data ... we'll get to some real fun stuff tomorrow. But for now here are some numbers you can use as a reference point when looking at long-term
Elo scores:
1700 Legendary season
1600 Title contender
1500 Regular podiums
1250 Regular points
1000 Average
With that in mind, let the ratings begin! | {"url":"https://www.tfgridiron.com/2016/03/under-hood-formula-one-elo-ratings.html","timestamp":"2024-11-11T13:24:09Z","content_type":"application/xhtml+xml","content_length":"104972","record_id":"<urn:uuid:09f92926-91e8-47c0-89a5-d14c5ac32dea>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00325.warc.gz"} |
A UNIVERSAL CURE FOR SEPSIS – AN EPIDEMIC IN DISGUISE!
Robert O Young DSc, PhD, Naturopathic Practitioner
Where Does All Sickness and Disease Begin?
Answer: In the largest organ of the human body!
What is the largest organ of the human body?
Where does sepsis begin?
Answer: The same place cancer, kidney disease, lung disease, Alzheimer's, lupus, paralysis, diabetes, heart disease, stroke, and all the other 5000 diseases - in the Interstitium that flows through
every organ, gland and tissue and touches every human and animal body cell - all 70 trillion!
Legend - See below
Organ Cells - The cells at the boundary of the ISF, which supplies the substances to keep it alive.
Gap Junction - It provides cell bonding for multicellular organisms to maintain the body form and structure.
Sensor - This membrane receptor controls the chemical signal and cellular response to and from the cell.
Proteoglycan/Hyaluronic Acid - This large molecule provides viscosity to support the structure.
Protein - This is the nutrient going to the cells.
Acidic Debris - This is the waste draining from the cells.
Lymphatic Vessel - The vessel that carries away some of the debris.
Arteriole - The small branch of the artery carrying oxygenated blood.
Venule - The small branch of the vein carrying de-oxygenated blood.
Capillary - Brings electrons to the cells for energy.
Oxygen - It is released from the capillary to ISF by concentration gradient.
Carbon Dioxide - It becomes HCO3- (bicarbonate, which is a blood pH buffering agent, regulating it within small range of 7.35-7.45 pH - death of the cell at 7.0) in water, reabsorbed into drainages
by concentration gradient.
Interstitial Fluid (ISF) - The liquid component of the Interstitium.
Reticular Fiber - The fibers crosslink to form a fine meshwork to support soft tissues.
Elastic Fiber - Bundle of elastic proteins to prevent deformation.
Fibroblast - Cell that synthesizes the extracellular matrix and collagen.
Adipose Cell - This fat cell can be derived from undifferentiated fibroblast.
Collagen Fiber - This structural protein is the main component of connective tissue
Cells in the Interstitium - The largest organ of the human body.
Closer examination of the ISF reveals that it contains a mixture of nutrient and acidic waste indicating a primitive origin of the organization. This property is a hallmark of sea water which
contains ions of sodium for the transport of electrons to the cells for energy. www.drrobertyoung.com
The Largest Organ of the Human Body - The Interstitium
Every year it is estimated that more than 30 million people worldwide suffer from potentially preventable systemic acidosis, or what is called Sepsis, leading to over 6 million deaths (1). The burden of
systemic acidosis or sepsis is highest in low and middle-income countries, where the acid loads from lifestyle and diet on the interstitial fluid compartments of the Interstitium become excessive and
severe (the pH of the interstitial fluids drops below 7.2, where normal pH is 7.365). It is estimated that 3 million newborns and 1.2 million children suffer from sepsis globally every year (2).
Why is Sepsis or Systemic Acidosis an Epidemic?
An individual who is sick or injured and not being treated for decompensated acidosis of the Interstitial Fluids of the Interstitium and the blood, buildup excess dietary, metabolic or cellular
degenerative acids, in the Interstitium which causes cellular breakdown that leads to Sepsis and then death. Sepsis is suggested in the medical literature to be caused by an outside infection. This
so-called infection can only be a contributing factor and not causative. My research, in association with Dr. Galina Migalko, MD, has revealed that Sepsis is an acidic condition of the interstitial
fluids of the Interstitium, leading to severe decompensated acidosis and the degeneration of the body cells that make up our organs and glands that sustain life. In addition, acids held in the
compartments of the Interstitium can eventually effect the pH of the blood plasma causing erythrocytosis and leukocytosis and eventual death. Therefore, Sepsis is an outfectious condition of the body
cells (giving birth to bacteria and/or yeast) and the body’s inability to remove an excessive buildup of acidity in the Interstitium and the blood, through the four channels of elimination, including
urination, respiration, defecation or perspiration.
In addition, our research and findings suggest, using high-tech EMF electrode testing of the biochemistry of the body fluids, including the pH of the interstitial fluids of the Interstitium, shows
that Sepsis results from an individual’s hyper-inflammatory or hyper-acidic state of the interstitial fluids of the Interstiium and then the blood plasma, created from an acidic lifestyle, diet,
metabolism, environment, stress, and injury, all contributing factors.
Currently, Sepsis has a very low Standard of Care in hospitals and hospital-induced outfections that increase the risk of Systemic Sepsis of the Interstitial Fluids of the Interstitium and then the
blood plasma.
Read the entire story by clicking here - https://www.dailymail.co.uk/health/article-6332445/Boy-two-arms-legs-amputated-mystery-infection-baffled-doctors.html
The cause is directly related to the administration of dextrose, glucose, antibiotic IV’s and acidic hospital food and drink. In response to the low standard of care in hospitals we will share in
this article our non-invasive, non-radioactive high-tech medical equipment for testing the chemistry of the blood, interstitial fluids, and intracellular fluids, thereby validating the severe
decompensated acidosis, the efficacy of treatment for restoring and balancing the ideal chemistry and alkaline pH at 7.365 and for preventing and treating Systemic Sepsis successfully of the
Interstitial Fluids of the Interstitium and Blood Plasma.
To examine the biochemistry of the interstitial fluids of the Interstitium, representing 80 percent of the extracellular fluids compared to the biochemistry of the blood plasma, representing 20
percent of the extracellular fluids, to validate that Sepsis is caused by a declining pH of the interstitial fluids leading to decompensated acidosis of the Interstitium and the human blood.
Data was drawn from the use of a new technology used by NASA for non-invasive and non-radioactive testing of the chemistry, including the pH, of the blood, interstitial fluids of the Interstitium and
the intracellular fluids.
All patients tested were in hypercalcemia of the interstitial fluids, chronic bone and muscle loss, and decompensated acidosis of the interstitial fluids of the Interstitium with a pH near or below
7.2. After administration of an alkalizing dose of sodium and potassium bicarbonate rectally and/or by IV, the chemistry, including the pH of the interstitial fluids are restored within their normal
alkaline range (7.365) resulting in the successful reversal and cure of systemic acidosis or Sepsis, preventing the loss of life.
[2] Jui, Jonathan; et al. (American College of Emergency Physicians) (2011). "Ch. 146: Septic Shock". In Tintinalli, Judith E.; Stapczynski, J. Stephan; Ma, O. John; Cline, David M.; Cydulka, Rita
K.; Meckler, Garth D. (eds.). Tintinalli's Emergency Medicine: A Comprehensive Study Guide (7th ed.). New York: McGraw-Hill. pp. 1003–14. Archived from the original on 15 January 2014. Retrieved 11
December 2012 – via AccessMedicine.
[4] Singer M, Deutschman CS, Seymour CW, Shankar-Hari M, Annane D, Bauer M, Bellomo R, Bernard GR, Chiche JD, Coopersmith CM, Hotchkiss RS, Levy MM, Marshall JC, Martin GS, Opal SM, Rubenfeld GD, van
der Poll T, Vincent JL, Angus DC (February 2016). "The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3)". JAMA. 315 (8): 801–10. doi:10.1001/jama.2016.0287. PMC
4968574. PMID 26903338.
[8] Dellinger RP, Levy MM, Rhodes A, Annane D, Gerlach H, Opal SM, Sevransky JE, Sprung CL, Douglas IS, Jaeschke R, Osborn TM, Nunnally ME, Townsend SR, Reinhart K, Kleinpell RM, Angus DC, Deutschman
CS, Machado FR, Rubenfeld GD, Webb SA, Beale RJ, Vincent JL, Moreno R (February 2013). "Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012"
(PDF). Critical Care Medicine. 41 (2): 580–637. doi:10.1097/CCM.0b013e31827e83af. PMID 23353941. Archived from the original (PDF) on 2 February 2015.
[10] Poltorak A, Smirnova I, He X, Liu MY, Van Huffel C, McNally O, Birdwell D, Alejos E, Silva M, Du X, Thompson P, Chan EK, Ledesma J, Roe B, Clifton S, Vogel SN, Beutler B (September 1998).
"Genetic and physical mapping of the Lps locus: identification of the toll-4 receptor as a candidate gene in the critical region". Blood Cells, Molecules & Diseases. 24 (3): 340–55. doi:10.1006/
bcmd.1998.0201. PMID 10087992. | {"url":"https://www.drrobertyoung.com/post/a-universal-cure-for-sepsis-an-epidemic-in-disguise","timestamp":"2024-11-02T07:55:23Z","content_type":"text/html","content_length":"1050492","record_id":"<urn:uuid:60f6773d-60a0-45df-a41c-150682f95620>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00070.warc.gz"} |
Stereo Disparity Using Semi-Global Block Matching
This example shows how to compute disparity between left and right stereo camera images using the Semi-Global Block Matching algorithm. This algorithm is suitable for implementation on an FPGA.
Distance estimation is an important measurement for applications in Automated Driving and Robotics. A cost-effective way of performing distance estimation is by using stereo camera vision. With a
stereo camera, depth can be inferred from point correspondences using triangulation. Depth at any given point can be computed if the disparity at that point is known. Disparity measures the
displacement of a point between two images. The higher the disparity, the closer the object.
This example computes disparity using the Semi-Global Block Matching (SGBM) method, similar to the disparity (Computer Vision Toolbox) function. The SGBM method is an intensity-based approach and
generates a dense and smooth disparity map for good 3D reconstruction. However, it is highly compute-intensive and requires hardware acceleration using FPGAs or GPUs to obtain real-time performance.
The example model presented here is FPGA-hardware compatible, and can therefore provide real-time performance.
Disparity estimation algorithms fall into two broad categories: local methods and global methods. Local methods evaluate one pixel at a time, considering only neighboring pixels. Global methods
consider information that is available in the whole image. Local methods are poor at detecting sudden depth variation and occlusions, and hence global methods are preferred. Semi-global matching uses
information from neighboring pixels in multiple directions to calculate the disparity of a pixel. Analysis in multiple directions results in a lot of computation. Instead of using the whole image,
the disparity of a pixel can be calculated by considering a smaller block of pixels for ease of computation. Thus, the Semi-Global Block Matching (SGBM) algorithm uses block-based cost matching that
is smoothed by path-wise information from multiple directions.
Using the block-based approach, this algorithm estimates approximate disparity of a pixel in the left image from the same pixel in the right image. More information about Stereo Vision is available
here. Before going into the algorithm and implementation details, two important parameters need to be understood: Disparity Levels and Number of Directions.
Disparity Levels: Disparity levels is a parameter used to define the search space for matching. As shown in figure below, the algorithm searches for each pixel in the Left Image from among D pixels
in the Right Image. The D values generated are D disparity levels for a pixel in Left Image. The first D columns of Left Image are unused because the corresponding pixels in Right Image are not
available for comparison. In the figure, w represents the width of the image and h is the height of the image. For a given image resolution, increasing the disparity level reduces the minimum
distance to detect depth. Increasing the disparity level also increases the computation load of the algorithm. At a given disparity level, increasing the image resolution increases the minimum
distance to detect depth. Increasing the image resolution also increases the accuracy of depth estimation. The number of disparity levels are proportional to the input image resolution for detection
of objects at the same depth. This example supports disparity levels from 8 to 128 (both values inclusive). The explanation of the algorithm refers to 64 disparity levels. The models provided in this
example can accept input images of any resolution.
Number of Directions: In the SGBM algorithm, to optimize the cost function, the input image is considered from multiple directions. In general, accuracy of disparity result improves with increase in
number of directions. This example analyzes five directions: left-to-right (A1), top-left-to-bottom-right (A2), top-to-bottom (A3), top-right-to-bottom-left (A4), and right-to-left (A5).
SGBM Algorithm
The SGBM algorithm takes a pair of rectified left and right images as input. The pixel data from the raw images may not have identical vertical coordinates because of slight variations in camera
positions. Images need to be rectified before performing stereo matching to make all epi-polar lines parallel to the horizontal axis and match vertical coordinates of each corresponding pixel. For
more details on rectification, please see rectifyStereoImages (Computer Vision Toolbox) function. The figure shows a block diagram of the SGBM algorithm, using five directions.
The SGBM algorithm implementation has three major modules: Matching Cost Calculation, Directional Cost Calculation and Post-processing.
Many methods have been explored in the literature for computing matching cost. This example implementation uses the census transform as explained in [2]. This module can be divided into two steps:
Center-Symmetric Census Transform (CSCT) of left and right images and Hamming Distance computation. First, the model computes the CSCT on each of the left and right images using a sliding window. For
a given pixel, a 9-by-7 pixel window is considered around it. CSCT for the center pixel in that window is estimated by comparing the value of each pixel with its corresponding center-symmetric
counterpart in the window. If the pixel value is larger than its corresponding center-symmetric pixel, the result is 1, otherwise the result is 0. The figure shows an example 9-by-7 window. The
center pixel number is 31. The 0th pixel is compared to the 62nd pixel (blue), the 1st pixel is compared to the 61st pixel (red), and so on, to generate 31 results. Each result is a single-bit output,
and the result of the whole window is arranged as a 31-bit number. This 31-bit number is the CSCT output for each pixel in both images.
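To make the transform concrete, here is a small NumPy sketch of the center-symmetric census transform (plain software, not the HDL implementation in the model). The bit ordering of the 31-bit descriptor is arbitrary here; the Simulink model packs the comparison bits in its own order.

import numpy as np

def csct(image, win_h=9, win_w=7):
    # Center-symmetric census transform: for each pixel, compare the first
    # (win_h*win_w)//2 window pixels with their point-symmetric partners and
    # pack the 1-bit results into an integer descriptor (31 bits for 9-by-7).
    img = image.astype(np.int32)
    h, w = img.shape
    pad_h, pad_w = win_h // 2, win_w // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), mode="constant")
    descriptor = np.zeros((h, w), dtype=np.uint64)
    for k in range((win_h * win_w) // 2):
        dy, dx = divmod(k, win_w)
        a = padded[dy:dy + h, dx:dx + w]                       # k-th window pixel
        b = padded[win_h - 1 - dy:win_h - 1 - dy + h,
                   win_w - 1 - dx:win_w - 1 - dx + w]          # symmetric partner
        descriptor |= (a > b).astype(np.uint64) << np.uint64(k)
    return descriptor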
In the Hamming Distance module, the CSCT outputs of the left and right images are pixel-wise XOR'd and set bits are counted to generate the matching cost for each disparity level. To generate D
disparity levels, D pixel-wise Hamming distance computation blocks are used. The matching cost for D disparity levels at a given pixel position, p, in the left image is computed by computing the
Hamming distance with (p to D+p) pixel positions in the right image. The matching cost, C(p,d), is computed at each pixel position, p, for each disparity level, d. The matching cost is not computed
for pixel positions corresponding to the first D columns of the left image.
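The matching cost volume then follows by XOR-ing left and right descriptors at each candidate disparity and counting the set bits. Again this is a software-only sketch for illustration; it reuses the csct descriptors from the snippet above.

def popcount(a):
    # Number of set bits per element (each descriptor uses at most 31 bits).
    as_bytes = a.astype(np.uint32).view(np.uint8).reshape(a.shape + (4,))
    return np.unpackbits(as_bytes, axis=-1).sum(axis=-1)

def matching_cost(census_left, census_right, num_disp=64):
    # cost[y, x, d] = Hamming distance between left pixel (y, x) and right
    # pixel (y, x - d); the first num_disp columns stay at a sentinel value,
    # mirroring the unused first D columns described above.
    h, w = census_left.shape
    cost = np.full((h, w, num_disp), 255, dtype=np.uint8)
    for d in range(num_disp):
        xored = census_left[:, d:] ^ census_right[:, :w - d]
        cost[:, d:, d] = popcount(xored)
    return cost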
The second module of SGBM algorithm is directional cost estimation. In general, due to noise, the matching cost result is ambiguous and some wrong matches could have lower cost than correct ones.
Therefore additional constraints are required to increase smoothness by penalizing changes of neighboring disparities. This constraint is realized by aggregating 1-D minimum cost paths from multiple
directions. It is represented by the aggregated cost from r directions at each pixel position, S(p,d), as given by
S(p,d) = Σ_r L_r(p,d)
The 1-D minimum cost path for a given direction, L_r(p,d), is computed as
L_r(p,d) = C(p,d) + min( L_r(p−r,d), L_r(p−r,d−1) + P1, L_r(p−r,d+1) + P1, min_i L_r(p−r,i) + P2 ) − min_k L_r(p−r,k)
where p−r is the previous pixel along the path and P1, P2 are smoothness penalties.
As mentioned earlier, this example uses five directions for disparity computation. Propagation in each direction is independent. The resulting disparities at each level from each direction are
aggregated for each pixel. Total cost is the sum of the cost calculated for each direction.
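Written out in software, the propagation along one direction (left to right, A1) looks like the sketch below. The smoothness penalties P1 and P2 are tuning parameters whose values are not given here, so the defaults are placeholders.

def aggregate_left_to_right(cost, p1=8, p2=32):
    # One SGM path: L(p,d) = C(p,d) + min(L(q,d), L(q,d-1)+P1, L(q,d+1)+P1,
    # min_k L(q,k)+P2) - min_k L(q,k), with q the previous pixel on the path.
    h, w, n_disp = cost.shape
    big = np.iinfo(np.int32).max // 4
    aggregated = np.zeros((h, w, n_disp), dtype=np.int32)
    for y in range(h):
        prev = cost[y, 0].astype(np.int64)
        aggregated[y, 0] = prev
        for x in range(1, w):
            prev_min = prev.min()
            down = np.concatenate(([big], prev[:-1])) + p1    # L(q, d-1) + P1
            up = np.concatenate((prev[1:], [big])) + p1       # L(q, d+1) + P1
            best = np.minimum.reduce([prev, down, up,
                                      np.full(n_disp, prev_min + p2)])
            prev = cost[y, x].astype(np.int64) + best - prev_min
            aggregated[y, x] = prev
    return aggregated

The total cost S(p,d) is then the element-wise sum of the directional volumes (five of them in this example).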
The third module of SGBM algorithm is Post-processing. This module has three steps: minimum cost index calculation, interpolation, and a uniqueness function. Minimum cost index calculation finds the
index corresponding to the minimum cost for a given pixel. Sub-pixel quadratic interpolation is applied on the index to resolve disparities at the sub-pixel level. The uniqueness function ensures
reliability of the computed minimum disparity. A higher value of the uniqueness threshold marks more disparities unreliable. As a last step, the negative disparity values are invalidated and replaced
with -1.
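A software sketch of the post-processing stage is shown below. The sub-pixel step is the usual quadratic (parabola) fit around the minimum; the uniqueness test written here is an assumed ratio check, since the page does not spell out the exact rule the model uses.

def disparity_from_costs(total_cost, uniqueness_pct=5):
    # Pick the minimum-cost disparity per pixel, refine it with quadratic
    # interpolation, and mark unreliable or invalid results as -1.
    h, w, n_disp = total_cost.shape
    disparity = np.full((h, w), -1.0, dtype=np.float32)
    for y in range(h):
        for x in range(w):
            c = total_cost[y, x].astype(np.float64)
            d = int(np.argmin(c))
            # Assumed uniqueness check: the best cost must beat every
            # non-adjacent cost by uniqueness_pct percent.
            others = np.delete(c, slice(max(d - 1, 0), min(d + 2, n_disp)))
            if others.size and others.min() * 100 <= c[d] * (100 + uniqueness_pct):
                continue
            offset = 0.0
            if 0 < d < n_disp - 1:
                denom = c[d - 1] - 2.0 * c[d] + c[d + 1]
                if denom > 0:
                    offset = 0.5 * (c[d - 1] - c[d + 1]) / denom
            disparity[y, x] = d + offset
    return disparity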
HDL Implementation
The figure below shows the overview of the example model. The blocks leftImage and rightImage import a stereo image pair as input to the algorithm. In the Input subsystem, the Frame To Pixels block
converts input images from the leftImage and rightImage blocks to a pixel stream and accompanying control signals in a pixelcontrol bus. The pixel stream is passed as input to the SGBMHDLAlgorithm
subsystem which contains three computation modules described above: matching cost calculation, directional cost calculation, and post-processing. The output of the SGBMHDLAlgorithm subsystem is a
disparity value pixel stream. In the Output subsystem, the Pixels To Frame block converts the output to a matrix disparity map. The disparity map is displayed using the Video Viewer block.
modelname = 'SGBMDisparityExample';
Matching Cost Calculation
The matching cost calculation is again separated into two parts: CSCT computation and Hamming distance calculation. CSCT is calculated on each 9-by-7 pixel window by aligning each group of pixels for
comparison using Tapped Delay (Simulink) blocks, For Each Subsystem (Simulink) blocks and buffers. The input pixels are padded with zeros to allow CSCT computation for the corner pixels. The
resulting stream of pixels is passed to ctLogic subsystem. Figure below shows ctLogic subsystem which uses the Tapped Delay block to generate a group of pixels. The pixels are buffered for imgColSize
cycles, where imgColSize is the number of pixels in an image line. A group of pixels that is aligned for comparison is generated from each row. The For Each block and Logical Operator block replicate
the comparison logic for each pixel of the input vector size. To implement a 9-by-7 window, the model uses four such For Each blocks. The result generated by each For Each block is a vector which is
further concatenated to form a vector of size 31-bits. After Bit Concat (HDL Coder) is used, the output data type is uint5. CSCT and zero-padding operations are performed separately on the left and
right input images and the results are passed to the Hamming Distance subsystem.
In the Hamming Distance subsystem, the 65th result of the left CSCT is XOR'd with the 65th to 2nd results of the right CSCT. The set bits are counted to obtain Hamming distance. This distance must be
calculated for each disparity level. The right CSCT result is passed to the enabledTappedDelay subsystem to generate a group of pixels which is then XOR'd with the left CSCT result using For Each
block. The For Each block also counts the set bits in the result. The For Each block replicates the Hamming distance calculation for each disparity level. The result is a vector, with 64 disparity
levels corresponding to each pixel. This vector is the Matching Cost, and it is passed to the Directional Cost subsystem.
Directional Cost Calculation
The Directional Cost subsystem computes disparity at each pixel in multiple directions. The five directions used in the example are left-to-right (A1), top-left-to-bottom-right (A2), top-to-bottom
(A3), top-right-to-bottom-left (A4), and right-to-left (A5). As the cost aggregation at each pixel in each direction is independent of each other, all five directions are implemented concurrently.
Each directional analysis is investigating the previous cost value with respect to the current cost value. The value of previous cost required to compute the current cost for each pixel depends on
the direction under consideration. The figure below shows the position of the previous cost with respect to the current cost under computation, for all five directions.
In the figure above, the blue box indicates the position of the current pixel for which current cost values are computed. The red box indicates the position of the previous cost values to be used for
current cost computation. For A1, the current cost becomes the previous cost value for the next computation when traversing from left to right. Thus, the current cost value should be immediately fed
back to compute the next current cost, as described in [3]. For A2, when traversing from left to right, current cost value should be used as the previous cost value after imgColSize+1 cycles. Current
cost values are hence buffered for cycles equal to imgColSize+1 and then fed back to compute the next current cost.
Similarly, for A3 and A4, the current cost values are buffered for cycles equal to imgColSize and imgColSize-1, respectively. However, for A5, when traversing from left to right, the previous cost
value is not available. Thus, the direction of traversal to compute A5 is reversed. This adjustment is done by reversing the input pixels of each row. The current cost value then becomes the previous
cost value for the next current cost computation, similar to A1.
The 1-D minimum cost path computes the current cost at d disparity position, using the Matching Cost value, the previous cost values at disparities d-1, d, and d+1, and the minimum of the previous
cost values. The figure below shows the minimum cost path subsystem, which computes the current cost at a disparity position for a pixel.
The For Each block is used to replicate the minimum cost path calculation for each disparity level, for each direction. The figure below shows the implementation of A1 for 64 disparity levels. As
shown in the figure, 64 minimum cost path calculations are generated as represented by minCostPath subsystem. The matching cost is an input from the Hamming Distance subsystem. The current cost
computed by the minCostPath subsystem is immediately fed back to itself as the previous cost values, for the next current cost computation. Thus, values for prevCost_d are now available. Values for
prevCost_d-1 are obtained by shifting the 1st to 63rd fed-back values to the 2nd to 64th positions. The d-1 subsystem contains a Selector (Simulink) block that shifts the position of the values, and
fills in zero at the 1st position.
Similarly, values for prevCost_d+1 position are obtained by shifting the 2nd to 64th feedback values to the 1st to 63rd position and inserting a zero at the 64th position. The current cost computed
is also passed to the min block to compute the minimum value from the current cost values. This value is fed back to the minPrevCost input of the minCostPath subsystem. The next current cost is then
computed by using the current cost values, acting as previous cost values, in the next cycle for A1. Since the minimum cost of disparity levels from the previous set is immediately needed for the
current set, this feedback path is the critical path in the design.
The current cost computations for A2, A3, and A4 are implemented in the same manner. Since the current cost value is not immediately required for these directions, there is a buffer in both feedback
paths. This buffer prevents this feedback path from being the critical path in the design. The figure below shows the A3 implementation with a buffer in the feedback paths.
The current cost calculation for A5 has additional logic to reverse the rows at input and again reverse the rows at output to match the pixel positions for the total cost calculation. A single buffer
of imgColSize cycles achieves this reversal. Since all directions are calculated concurrently, the time required to reverse the rows must be compensated for on the other paths. Delay equivalent to
2*imgColSize cycles is introduced in the other four directions. To optimize resources, instead of buffering 64 values of matching cost for each pixel, the 31-bit result of CSCT is buffered. A
separate Hamming Distance module is then required to compute matching cost for A5. This design reduces on-chip memory usage. The rows are reversed after the CSCT computation and matching cost is
calculated using a separate Hamming Distance module that provides the Matching Cost input to A5. Also, dataAligner subsystem is used to remove data discontinuity for each row before passing it to
Hamming Distance subsystems. This helps easy synchronization of data at time of aggregation. The current cost obtained from all five directions at each pixel are aggregated to obtain the total cost
at each pixel. The total cost is passed to the Post-processing subsystem.
In the post-processing subsystem, the index of the minimum cost is calculated at each pixel position from 64 disparity levels by using Min blocks in a tree architecture. The index value obtained is
the disparity of each pixel. Along with minimum cost index computation, the minimum cost value at the computed index, and the cost values at index-1 and index+1 are also computed. The
Minimum_Cost_Index subsystem implements a tree architecture to compute a minimum value from 128 values. The 64 disparity values are padded with 64 more values to make a vector of 128 values, and the minimum value is computed from this vector. If a vector with 128 values is already available, no padding is applied and the vector is passed directly to the minimum value calculation.
Variant Subsystem (Simulink) is used to select between logic using variant subsystem variables. Sub-pixel quadratic interpolation is then applied to the index to resolve disparity at sub-pixel level.
Also, a uniqueness function is applied to the index calculated by min blocks, to ensure reliable disparity results. As a last step, invalid disparities are identified and replaced with -1.
Model Parameters
The model presented here takes disparity levels and uniqueness threshold as input parameters as shown in figure. Disparity levels is an integer value from 8 to 128 with the default value of 64.
Higher value of disparity level reduces the minimum distance detected. Also, for larger input image size larger disparity level helps better detection of depth of object. The uniqueness threshold
must be a positive integer value, between 0 and 100 with a typical range from 5 to 15. Lower value of uniqueness threshold marks more disparities reliable. The default value of uniqueness threshold
is 5.
Simulation and Results
The example model can be simulated by specifying a path for the input images in the leftImage and rightImage blocks. The example uses sample images of size 640-by-480 pixels. The figure shows a
sample input image and the calculated disparity map. The model exports these calculated disparities and a corresponding valid signal to the MATLAB workspace, using variable names dispMap and
dispMapValid respectively. The output disparity map is 576-by-480 pixels, since the first 64 columns are unused in the disparity computation. The unused pixels are padded with 0 in Output subsystem
to generate output image of size 640-by-480 as shown in Video Viewer. A disparity map with colorbar is generated using the commands shown below. Higher disparity values in the result indicate that
the object is nearer to the camera and lower disparity values indicate farther objects.
dispMapValid = find(dispMapValid == 1);                  % indices of valid output samples
disparityMap = (reshape(dispMap(dispMapValid(1:imgRowSize*imgColSize),:),imgColSize,imgRowSize))'; % keep one frame of valid pixels and reshape to image size
figure(); imagesc(disparityMap);                         % display the disparity map as an image
title('Disparity Map');
colormap jet; colorbar;                                  % higher (warmer) values indicate closer objects
The example model is compatible with HDL code generation. You must have an HDL Coder™ license to generate HDL code. The design was synthesized for the Intel® Arria® 10 GX (115S2F45I1SG) FPGA. The table
below shows resource utilization for three disparity levels at different image resolutions. Considering one pair of stereo input images as a frame, the algorithm throughput is estimated by finding the
number of clock cycles required for processing the current frame before the arrival of the next frame. The core algorithm throughput, without the overhead of buffering input and output data, is the maximum
operating frequency divided by the minimum number of cycles required between input frames. For example, for 128 disparity levels and 1280-by-720 image resolution, the minimum number of cycles to process the input frame
is 938,857 clock cycles/frame. The maximum operating frequency obtained for the algorithm with 128 disparity levels is 61.69 MHz, so the core algorithm throughput is computed as 65 frames per second.
% ==================================================================================
% |Disparity Levels || 64 || 96 || 128 |
% ==================================================================================
% |Input Image Resolution || 640 x 480 || 960 x 540 || 1280 x 720 |
% |ALM Utilization || 45,613 (11%) || 64,225 (15%) || 85,194 (20%) |
% |Total Registers || 49,232 || 64,361 || 85,564 |
% |Total Block Memory Bits || 3,137,312 (6%) || 4,599,744 (9%) || 11,527,360 (21%) |
% |Total RAM Blocks || 264 (10%) || 409 (16%) || 741 (28%) |
% |Total DSP Blocks || 65 (4%) || 97 (6%) || 129 (8%) |
% ==================================================================================
[1] Hirschmuller H., Accurate and Efficient Stereo Processing by Semi-Global Matching and Mutual Information, International Conference on Computer Vision and Pattern Recognition, 2005.
[2] Spangenberg R., Langner T., and Rojas R., Weighted Semi-Global matching and Center-Symmetric Census Transform for Robust Driver Assistance, Computer Analysis of Images and Patterns, 2013.
[3] Gehrig S., Eberli F., and Meyer T., A Real-Time Low-Power Stereo Vision Engine Using Semi-Global Matching, International Conference on Computer Vision System, 2009. | {"url":"https://de.mathworks.com/help/visionhdl/ug/stereoscopic-disparity.html","timestamp":"2024-11-11T20:33:48Z","content_type":"text/html","content_length":"101327","record_id":"<urn:uuid:f826d0a8-d17e-4078-b09d-37204698d2d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00745.warc.gz"} |
Scientists Are Scared of Math Too
It’s not just artistic types that find math a turn-off and try to avoid having to deal with it – scientists do the same. A new study by University of Bristol biologists shows that if a piece of
research is packed full of mathematical equations, other scientists tend to ignore it. Scientific articles laden with equations on every page are seldom referred to by other scientists, they found.
Indeed, the most math-heavy articles are only half as likely to be referenced as those with little or none.
The ideal way to solve the problem, clearly, would be to improve the math education of science graduates so that they could better understand all those pesky equations. Another would be for people
writing papers to remember the possible shortcomings of their readers and explain their assumptions and workings more thoroughly. | {"url":"https://metanexus.net/scientists-are-scared-math-too/","timestamp":"2024-11-08T20:40:37Z","content_type":"text/html","content_length":"92949","record_id":"<urn:uuid:ec4e1de8-867a-4315-a873-a27ca2eeab5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00000.warc.gz"} |
Anatomy Drawing Lessons
Draw The Shear Force And Bending Moment Diagrams
Draw The Shear Force And Bending Moment Diagrams - This page will walk you through what shear forces and bending moments are, why they are useful, the procedure for drawing the diagrams and some
other key aspects as well. Web draw the shearing force and bending moment diagrams for the cantilever beam subjected to a uniformly distributed load in its entire length, as shown in figure 4.5a.
Web shear force and bending moment diagrams are powerful graphical methods that are used to analyze a beam under loading, which can cause failure by, for example, bending the beam to an excessive amount. Web shear force and bending moment diagrams are analytical tools used in conjunction with structural analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam.
There is a long way and a quick way to do them. Web shear force and bending moment are examples of internal forces that are induced in a structure when loads are applied to that structure. Web the shear force and the bending moment usually vary continuously along the length of the beam. (1) normal stress that is caused by bending. Shear and bending moment diagrams.
Draw the axial force, shearing force, and bending moment diagram for the structure, noting the sign conventions discussed in. The internal forces give rise to two kinds of stresses on a transverse
section of a beam: Web this video explains how to draw shear force diagram and bending moment diagram with easy steps for a simply supported beam loaded with a concentrated load. This page will walk
you through what shear forces and bending moments are, why they are useful, the procedure for drawing the diagrams and some other keys aspects as well. Web treat the section as a free body:
Web steps to construct shear force and bending moment diagrams. First, compute the reactions at the support. Web to create the moment diagram for a shaft, we will use the following process. Web 8.4.1
shear and bending moment diagrams. Web sketch the shear force and bending moment diagrams and find the position and magnitude of maximum bending moment.
Web treat the section as a free body: Web the shear force and the bending moment usually vary continuously along the length of the beam. Web our calculator generates the reactions, shear force
diagrams (sfd), bending moment diagrams (bmd), deflection, and stress of a cantilever beam or simply supported beam. Web write shear and moment equations for the beams in.
First, compute the reactions at the support. There is a long way and a quick way to do them. Web write shear and moment equations for the beams in the following problems. Web draw the shearing force
and bending moment diagrams for the cantilever beam subjected to a uniformly distributed load in its entire length, as shown in figure 4.5a..
The internal forces give rise to two kinds of stresses on a transverse section of a beam: Web in this post we’ll give you a thorough introduction to shear forces, bending moments and how to draw
shear and moment diagrams with worked examples. Web shear force and bending moment are examples of internal forces that are induced in a structure.
By bending the beam to an excessive amount. Web learn to draw shear force and moment diagrams using 2 methods, step by step. (1) normal stress that is caused by. Web sketch the shear force and
bending moment diagrams and find the position and magnitude of maximum bending moment. In each problem, let x be the distance measured from left.
Web our calculator generates the reactions, shear force diagrams (sfd), bending moment diagrams (bmd), deflection, and stress of a cantilever beam or simply supported beam. Web shear force and
bending moment diagrams are powerful graphical methods that are used to analyze a beam under loading. Solve for all external forces and moments, create a free body diagram, and create the.
First, compute the reactions at the support. Web shear force and bending moment are examples of interanl forces that are induced in a structure when loads are applied to that structure. Also, draw
shear and moment diagrams, specifying values at all change of loading positions and at. Web 8.4.1 shear and bending moment diagrams. Lined up below the shear diagram,.
Web learn to draw shear force and moment diagrams using 2 methods, step by step. (1) normal stress that is caused by. Web compute the principal values of the shearing force and the bending moment at
the segment where the section lies. Web steps to construct shear force and bending moment diagrams. Web our calculator generates the reactions, shear force.
First, compute the reactions at the support. Web being able to draw shear force diagrams (sfd) and bending moment diagrams (bmd) is a critical skill for any student studying statics, mechanics of
materials, or structural engineering. Draw the axial force, shearing force, and bending moment diagram for the structure, noting the sign conventions discussed in. Web write shear and moment.
In each problem, let x be the distance measured from left end of the beam. Web draw the shearing force and bending moment diagrams for the cantilever beam subjected to a uniformly distributed load in
its entire length, as shown in figure 4.5a. Web shear and moment diagrams are graphs which show the internal shear and bending moment plotted along.
Draw The Shear Force And Bending Moment Diagrams - Web steps to construct shear force and bending moment diagrams. Web draw the shearing force and bending moment diagrams for the cantilever beam
subjected to a uniformly distributed load in its entire length, as shown in figure 4.5a. Web treat the section as a free body: In each problem, let x be the distance measured from left end of the
beam. (1) normal stress that is caused by. Web compute the principal values of the shearing force and the bending moment at the segment where the section lies. Web shear and moment diagrams are
graphs which show the internal shear and bending moment plotted along the length of the beam. Web shear force and bending moment diagrams are analytical tools used in conjunction with structural
analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam. Web learn to draw shear force and
moment diagrams using 2 methods, step by step. Shear and moment diagrams are graphs which show the internal shear and bending moment plotted along the length of the beam.
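None of the snippets above actually show the computation, so here is a small numerical sketch for one standard case, a simply supported beam of span L carrying a uniformly distributed load w. It uses the textbook relations V(x) = R_A - w*x and M(x) = R_A*x - w*x^2/2 with reactions R_A = R_B = w*L/2; the span and load values are arbitrary example numbers.

import numpy as np

L_span, w_load = 6.0, 10.0        # example values: span in m, UDL in kN/m
R_A = w_load * L_span / 2.0       # left reaction by symmetry

x = np.linspace(0.0, L_span, 200)
V = R_A - w_load * x              # shear force diagram (kN)
M = R_A * x - w_load * x**2 / 2   # bending moment diagram (kN*m)

print(f"max shear  = {V.max():.1f} kN at the supports")
print(f"max moment = {M.max():.1f} kN*m at midspan (w*L^2/8 = {w_load * L_span**2 / 8:.1f})")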
Web compute the principal values of the shearing force and the bending moment at the segment where the section lies. There is a long way and a quick way to do them. Web learn the fundamentals of
shear force and bending moment diagrams, understand different loading scenarios on a beam, and discover how to draw these diagrams accurately. Web draw the shearing force and bending moment diagrams
for the cantilever beam subjected to a uniformly distributed load in its entire length, as shown in figure 4.5a. Shear and moment diagrams are graphs which show the internal shear and bending moment
plotted along the length of the beam.
Web draw the shearing force and bending moment diagrams for the cantilever beam subjected to a uniformly distributed load in its entire length, as shown in figure 4.5a. (1) normal stress that is
caused by. Loading tends to cause failure in two main ways: Web shear and moment diagrams are graphs which show the internal shear and bending moment plotted along the length of the beam.
Web learn to draw shear force and moment diagrams using 2 methods, step by step. Also, draw shear and moment diagrams, specifying values at all change of loading positions and at. Web in this post
we’ll give you a thorough introduction to shear forces, bending moments and how to draw shear and moment diagrams with worked examples.
Web write shear and moment equations for the beams in the following problems. Web 8.4.1 shear and bending moment diagrams. Web shear force and bending moment diagrams are analytical tools used in
conjunction with structural analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam.
Web This Video Explains How To Draw Shear Force Diagram And Bending Moment Diagram With Easy Steps For A Simply Supported Beam Loaded With A Concentrated Load.
Web shear force and bending moment diagrams are powerful graphical methods that are used to analyze a beam under loading. Loading tends to cause failure in two main ways: There is a long way and a
quick way to do them. Shear and bending moment diagrams.
They Allow Us To See Where The Maximum Loads Occur So That We Can Optimize The Design To Prevent Failures And Reduce The Overall Weight And Cost Of The Structure.
Web learn the fundamentals of shear force and bending moment diagrams, understand different loading scenarios on a beam, and discover how to draw these diagrams accurately. The internal forces give
rise to two kinds of stresses on a transverse section of a beam: Web draw the shearing force and bending moment diagrams for the cantilever beam subjected to a uniformly distributed load in its
entire length, as shown in figure 4.5a. David roylance department of materials science and engineering massachusetts institute of technology cambridge, ma 02139 november 15, 2000.
Web To Create The Moment Diagram For A Shaft, We Will Use The Following Process.
Web in this post we’ll give you a thorough introduction to shear forces, bending moments and how to draw shear and moment diagrams with worked examples. They allow us to see where the maximum loads
occur so that we can optimize the design to prevent failures and reduce the overall weight and cost of the structure. Draw a free body diagram of the beam with global coordinates (x) calculate the
reaction forces using equilibrium equations ( ∑ forces = 0 and ∑ moments = 0 ) cut beam to reveal internal forces and moments*. (1) normal stress that is caused by.
Web Shear Force And Bending Moment Diagrams Are Analytical Tools Used In Conjunction With Structural Analysis To Help Perform Structural Design By Determining The Value Of Shear Forces And Bending
Moments At A Given Point Of A Structural Element Such As A Beam.
Web the first step in calculating these quantities and their spatial variation consists of constructing shear and bending moment diagrams, \(v(x)\) and \(m(x)\), which are the internal shearing
forces and bending moments induced in. Web compute the principal values of the shearing force and the bending moment at the segment where the section lies. Web draw the shearing force and bending
moment diagrams for the cantilever beam subjected to a uniformly distributed load in its entire length, as shown in figure 4.5a. Draw the axial force, shearing force, and bending moment diagram for
the structure, noting the sign conventions discussed in. | {"url":"https://revivalportal.goodwood.com/art/anatomy-drawing-lessons/draw-the-shear-force-and-bending-moment-diagrams.html","timestamp":"2024-11-10T17:58:32Z","content_type":"text/html","content_length":"37876","record_id":"<urn:uuid:8dfdc1e7-2aa7-4ff9-a0b6-99b71352b52f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00009.warc.gz"} |
A ball with a mass of 4 kg and velocity of 5 m/s collides with a second ball with a mass of 9 kg and velocity of - 4 m/s. If 40% of the kinetic energy is lost, what are the final velocities of the balls? | HIX Tutor
A ball with a mass of #4 kg # and velocity of #5 m/s# collides with a second ball with a mass of #9 kg# and velocity of #- 4 m/s#. If #40%# of the kinetic energy is lost, what are the final
velocities of the balls?
Answer 1
This is a conservation of momentum question combined with the given kinetic energy loss.
We know that #m_1v_1+m_2v_2=m_1v_1^'+m_2v_2^'# (momentum is conserved).
We also know that 40% of the kinetic energy is lost, so the kinetic energy after the collision is 60% of the kinetic energy before it. Use #KE_1=1/2m_1v_1^2+1/2m_2v_2^2# to find the initial KE. Then #KE_2=0.6KE_1#, and solving the momentum and energy equations together gives the final velocities.
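A quick numerical check of that outline, assuming a one-dimensional collision and keeping the solution branch in which the balls separate rather than pass through each other:

import math

m1, u1 = 4.0, 5.0          # kg, m/s
m2, u2 = 9.0, -4.0

p = m1 * u1 + m2 * u2                                     # conserved momentum = -16 kg*m/s
ke_final = 0.6 * (0.5 * m1 * u1**2 + 0.5 * m2 * u2**2)    # 60% of 122 J = 73.2 J

# Substituting v1 = (p - m2*v2)/m1 into the energy equation gives a quadratic in v2.
a = 0.5 * m2 * (m1 + m2) / m1
b = -m2 * p / m1
c = 0.5 * p**2 / m1 - ke_final
for sign in (+1, -1):
    v2 = (-b + sign * math.sqrt(b**2 - 4 * a * c)) / (2 * a)
    v1 = (p - m2 * v2) / m1
    print(f"v1 = {v1:+.2f} m/s, v2 = {v2:+.2f} m/s")
# The physically sensible branch is roughly v1 = -5.91 m/s and v2 = +0.85 m/s.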
| {"url":"https://tutor.hix.ai/question/a-ball-with-a-mass-of-4-kg-and-velocity-of-5-m-s-collides-with-a-second-ball-wit-2-8f9af8b555","timestamp":"2024-11-07T22:43:36Z","content_type":"text/html","content_length":"580124","record_id":"<urn:uuid:c5109711-705c-4a29-845f-3e4584bdc88a>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00116.warc.gz"} |
Content Knowledge
IPTS Standard 1 | Content Knowledge
The competent teacher understands the central concepts, methods of inquiry, and structures of the discipline(s) and creates learning experiences that make the content meaningful to all students.
Wyzant Tutoring Website -- Math Resource Section
During the summer of 2010, I worked for a tutoring website that connects students to tutors all over the nation. My job was to oversee the development of the math help section as a resource for both
students and teachers. I personally created the Calculus section and made modifications to the Algebra and Pre-Calculus sections. The lessons are designed to be easy to follow, informative, and
interactive. This section of the website is a great resource for learning both step by step procedural knowledge, but also understanding the big ideas behind each concept. In the Calculus section, I
give a brief description of the history of calculus, how it relates to other big math topics, and how we use calculus in various applications. All of these lessons are detailed with constructed
images and practice problems for the visitor to test their comprehension.
The development of this math resource section requires a deep understanding of mathematics content. To construct the lessons, I made sure to use multiple representations of the topics and explain the
overall ideas behind the procedures. The lessons are designed so that students can easily see the connections between mathematical ideas and take away some new knowledge about the math that they may
have not seen before from their math textbooks. For instance, in the
Quadratic Equations section of Algebra, I present the lesson so that the reader discovers how quadratic equations can be constructed from linear equations, a connection that is not usually taught in Algebra classes.
Before starting each lesson, there is a brief description of what terms and ideas you should know before moving on. At the end of each section, there may be an interactive review to test for
comprehension, and a summary of the main ideas covered. My lessons also have motivations for the material and address common misconceptions that students may have in their math classes. An important
aspect of understanding the math content is the structures behind it, and in my lessons I explain not only the how, but the why behind each concept. This way, the content becomes more meaningful to
the students.
A crucial aspect of being a competent teacher is having deep knowledge and understanding of your content area. As a math teacher, I have a passion for the mathematics and continue to explore these
topics as I teach them to my students. Because of this, I plan to have my students engage in their learning, pose their own problems, make conjectures and discoveries, be allowed to be wrong, find inspiration, and question. I will not use my knowledge of the mathematics simply to tell my students what is right and wrong, but rather to help them ponder and be comfortable with uncertainty. Out of this, the
students can feel the satisfaction of discovering knowledge that is new to them, which can be a truly meaningful learning experience. I plan to build an honest intellectual relationship with my
students. They will be given ample opportunities to become active and creative mathematical thinkers. Since I believe I have a deep level of knowledge and passion for mathematics, I will have the
ability to be open, to talk and listen to my students, and to share excitement and a love of learning. | {"url":"https://www.zaneranney.com/content-knowledge.html","timestamp":"2024-11-14T02:25:39Z","content_type":"text/html","content_length":"27075","record_id":"<urn:uuid:213ebda0-143a-4cc8-8e86-11e0d8675d9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00335.warc.gz"} |
Solving Polynomial Equations Using Rational Root Theorem Worksheet - Equations Worksheets
Solving Polynomial Equations Using Rational Root Theorem Worksheet
Solving Polynomial Equations Using Rational Root Theorem Worksheet – Expressions and Equations Worksheets are designed to aid children in learning faster and more efficiently. These worksheets are
interactive and questions based on the sequence of operations. These worksheets make it simple for children to master complex concepts and basic concepts quickly. These PDF resources are completely
free to download and could be used by your child to practise maths equations. These resources are useful to students in the 5th-8th grades.
Get Free Solving Polynomial Equations Using Rational Root Theorem Worksheet
These worksheets are for students in the 5th to 8th grades. The two-step word problems are created with fractions or decimals. Each worksheet contains ten problems, and they are available as online or print resources. These worksheets are a great way to practice rearranging equations, and they assist students with understanding the concepts of equality and inverse operations.
These worksheets can be used by fifth through eighth graders. They are perfect for students struggling to compute percentages. There are three types of questions you can choose from: you can choose to solve single-step problems that contain whole numbers or decimal numbers, or to employ word-based techniques for fractions and decimals. Each page is comprised of 10 equations. The Equations Worksheets are suggested for students in the 5th through 8th grade.
These worksheets can be a wonderful resource for practicing fraction calculations and other aspects of algebra. There are many different kinds of problems that you can solve with these worksheets. It
is possible to select the one that is word-based, numerical, or a mixture of both. It is essential to pick the problem type, because every challenge will be unique. Each page will have ten challenges
which makes them an excellent aid for students who are in 5th-8th grade.
These worksheets are designed to teach students about the relationship between variables as well as numbers. They allow students to solve polynomial equations and discover how to use equations to
solve problems in everyday life. These worksheets are a fantastic way to get to know more about equations and expressions. They will assist you in learning about the different kinds of mathematical
issues and the different types of symbols used to describe them.
These worksheets are beneficial for students in their first grade. These worksheets will help them learn how to graph and solve equations. These worksheets are perfect for learning about polynomial
variables. They will also help you learn how to factor and simplify them. There are many worksheets that can be used to aid children in learning equations. Working on the worksheet yourself is the
best method to get a grasp of equations.
There are a variety of worksheets that can be used to help you understand quadratic equations, covering several levels of difficulty. These worksheets are designed to let you practice solving problems up to the fourth degree. Once you have completed a certain amount of work, you can move on to other kinds of equations, or continue working on problems at the same level. You could, for instance, work the same problem in a different form.
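As a reminder of the idea behind these worksheets, the Rational Root Theorem says that any rational root p/q (in lowest terms) of a polynomial with integer coefficients must have p dividing the constant term and q dividing the leading coefficient. A small, self-contained Python sketch (the example polynomial is mine, not taken from the worksheets) that enumerates and tests the candidates:

from fractions import Fraction

def rational_root_candidates(coeffs):
    """coeffs are integer coefficients, highest degree first, e.g. 2x^3 - 3x^2 - 11x + 6 -> [2, -3, -11, 6]."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    def divisors(n):
        n = abs(n)
        return {d for d in range(1, n + 1) if n % d == 0}
    return {sign * Fraction(p, q)
            for p in divisors(a_0) for q in divisors(a_n) for sign in (1, -1)}

def poly(coeffs, x):
    # Horner's rule evaluation of the polynomial at x.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

coeffs = [2, -3, -11, 6]   # hypothetical example polynomial: 2x^3 - 3x^2 - 11x + 6
roots = [r for r in rational_root_candidates(coeffs) if poly(coeffs, r) == 0]
print([str(r) for r in sorted(roots)])   # ['-2', '1/2', '3']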
Leave a Comment | {"url":"https://www.equationsworksheets.net/solving-polynomial-equations-using-rational-root-theorem-worksheet/","timestamp":"2024-11-09T06:56:31Z","content_type":"text/html","content_length":"63416","record_id":"<urn:uuid:7e711fd4-5726-4883-a9b6-b7c2cc7d0cf4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00878.warc.gz"} |
Quantitative Aptitude Quiz For SBI Clerk Prelims 2021- 16th May
Q1. A train can cross a platform of 100 m length completely in 12 seconds while covers a platform of double of its length in 21 seconds. Find the speed of the train? (in m/s)
(a) 12
(c) 18
(d) 20
(e) 24
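A quick sanity check of Q1 (reading "a platform of double of its length" as a platform twice the train's own length, which is an assumption on my part): with train length L and speed v, (L + 100)/v = 12 and 3L/v = 21, so L = 7v and v = 20 m/s. The same arithmetic in Python:

# (L + 100)/v = 12 and 3L/v = 21  =>  L = 7*v  =>  7*v + 100 = 12*v
v = 100 / (12 - 7)
print(v)  # 20.0 m/s, i.e. option (d)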
Q2. The circumference of two circles is 132 m and 176 m respectively. What is difference between the area of larger circle and smaller circle ? (in m²)
(a) 1052
(b) 1128
(c) 1258
(d) 1078
(e) 1528
Q3. Diameter of a cylindrical jar is increased by 25%. By what percent must the height be decreased so that there is no change in its volume?
(a) 18%
(b) 25%
(c) 32%
(d) 36%
(e) None of these
Q4. If speed of boat in upstream is double than the speed of current and speed of boat in still water is 27 km/hr. Then find the time taken by boat to travel 54 km downstream? (in hour)
(a) 1.5
(b) 1.8
(c) 2.5
(d) 1.2
(e) 2
Q5. The letters of the word PROMISE are to be arranged so that three vowels should not come together. Find the number of arrangements.
(a) 4470
(b) 4320
(c) 3792
(d) 4200
(e) 4450
Q6. Train A is 180 metre long, while another train B is 240 metre long. Train A has a speed of 30 kmph and train B’s speed is 40 kmph, if the trains move in opposite directions then in what time will
Train A pass Train B completely?
(a) 21 seconds
(b) 21.6 seconds
(c) 26.1 seconds
(d) 26 seconds
(e) 16 seconds
Q7. A train covers certain distance between two places at a uniform speed. If the train moved 10 kmph faster, it would take 2 hours less, and if the train were slower by 10 kmph, it would take 3
hours more than the scheduled time. Find the distance covered by the train.
(a) 300 km
(b) 600 km
(c) 800 km
(d) 1200 km
(e) None of these
Q8. How many five-letters containing 2 vowels and 3 consonants can be formed using the letters of the word EQUALITY so that 2 vowels occur together?
(a) 1260
(b) 1000
(c) 1150
(d) 1152
(e) None of these
Q10. A man can row 15 kmph in still water and it takes him 75 minutes to row to a place and back if the speed of current is 3 Kmph, then how far is the place?
(a) 9 km
(b) 6 km
(c) 12 km
(d) 15 km
(e) 13.5 km
Q11. There are 5 multiple choice questions in an examination. How many sequences of answers are possible, if the first three questions have 4 choices each and the next two have 6 choices each?
(a) 2804
(b) 3456
(c) 7776
(d) 2304
(e) 1024
Q12. What will be the cost of fencing the rectangular field whose area is 486 sq. m. if the cost of fencing is Rs 11 per meter and length of the field is 50% more than the breadth of the field.
(a) Rs 1100
(b) Rs 990
(c) Rs 880
(d) Rs 770
(e) Rs. 660
Q13. Ratio of upstream speed to downstream speed is 1 : 11. If speed of boat is 30 km/hr. Find the distance covered in upstream in 5 hours ? (in km)
(a) 66
(b) 55
(c) 25
(d) 30
(e) 40
Q14. What will be the difference between simple interest and compound interest at 10% p.a. on the sum of Rs 10,000 after 4 years?
(a) 628
(b) 541
(c) 640
(d) 540
(e) 641
Q15. Mohan has two sons named Ram and Karan. The ratio of age of Mohan and Ram is 5 : 2 and that of Ram and Karan is 8 : 5. Also, Ram is 6 years elder than Karan. Find the ratio of their ages after
10 years.
(a) 23 : 10 : 4
(b) 25 : 16 : 9
(c) 25 : 13 : 10
(d) 50 : 26 : 19
(e) 25 : 16 : 10
Q16. Pipe A and Pipe B can fill a cistern together in 18 minutes. Pipe B is 50% more efficient than pipe A. Find the capacity of cistern, if it is given that pipe ‘A’ fills the cistern at speed of 6
(a) 150 l
(b) 225 l
(c) 240 l
(d) 180 l
(e) 270 l
Q20. A sum of money of Rs. 8000 is invested at rate of 40% p.a. for 1 year. Find CI if rate is compounded quarterly.
(a) Rs. 3600
(b) Rs. 3712.8
(c) Rs. 3625.18
(d) Rs. 3576.8
(e) Rs. 4200
Practice More Questions of Quantitative Aptitude for Competitive Exams: | {"url":"https://www.bankersadda.com/quantitative-aptitude-quiz-for-sbi-clerk-prelims-2021-16th-may/","timestamp":"2024-11-08T23:40:24Z","content_type":"text/html","content_length":"614800","record_id":"<urn:uuid:b2da3ed4-5333-45de-9f17-8e43ab2f360c>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00438.warc.gz"} |
Turing Machine
--Warren Schudy 17:56, 1 January 2008 (CST) This page was actually defining Turing Completeness, not Turing Machine. A Turing Machine is a specific computational model. A Turing Complete machine is
any machine that can simulate a Turing Machine. Wikipedia's article on Deterministic Turing Machine is pretty good, so I suggest that we import it. I just ended the existing article a bit to avoid
confusing readers in the interim.
Related machine and concepts
Was Alan Turing's bicycle his Touring Machine? Is there an Italian Gran Turing version?
Should Turing Test link from here? Howard C. Berkowitz
There was a microcomputer around 1980 that was marketed as a "Grand Turing Machine". Z-80 or some such CPU. Sandy Harris 08:56, 9 August 2010 (UTC)
Article name?
Should this be at "Turing Machine" since it is a proper noun? Or "Turing machine" to suit CZ conventions? I just created the latter as a redirect here; that's fine by me, but it seemed worth asking.
Both forms have many links. Sandy Harris 08:56, 9 August 2010 (UTC)
I think that it is usually "Turing machine". But I'll check (not immediately). --Peter Schmitt 09:33, 9 August 2010 (UTC)
Can it write to the tape?
I thought the machine was allowed to change the current character, but our current description does not mention that. Am I wrong? Does it matter? Sandy Harris 10:06, 9 August 2010 (UTC)
You are right, of course. The possible actions are writing a character (overwriting the previous one), or moving left, or moving right, or stopping (precisely one of these four options.) --Peter
Schmitt 11:52, 9 August 2010 (UTC) | {"url":"https://citizendium.org/wiki/Talk:Turing_Machine","timestamp":"2024-11-03T12:42:23Z","content_type":"text/html","content_length":"36753","record_id":"<urn:uuid:0ad87729-4376-4dd1-8d30-a37f0999e037>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00136.warc.gz"} |
How Preschoolers Learn to Compare Numbers - Kate Snow - Homeschool Math Help
How Preschoolers Learn to Compare Numbers
Preschoolers need to progress through 3 stages of understanding to learn how to compare numbers. Learn all 3 stages so you can help your child develop a deep understanding of numbers.
One of the things that’s hardest about teaching math is figuring out how to break concepts down into small, manageable chunks. So many math concepts that seem obvious to adults are actually very
multi-layered and complex for kids who are approaching them for the first time.
Comparing numbers is one of these challenging concepts for young children. While it’s obvious to adults that 8 is more than 7, it takes a long time (and a lot of hands-on experience) for children to
understand that numbers that come later in the counting sequence are greater than the numbers that come before.
In this excerpt from my preschool homeschool math curriculum, Preschool Math at Home, you’ll learn the skills your preschooler needs to develop as she learns to compare numbers.
Comparing numbers and quantities: Which has more?
Learning to compare quantities helps preschoolers begin to make sense of the relationships between numbers: seven is one less than eight, but it is one more than six. Your child will build on these
relationships in kindergarten addition and subtraction. For example, a kindergartner might use her knowledge that eight is one more than seven to solve 7 + 1.
Preschoolers already understand the concept of more and less informally, especially if they feel that someone else is getting “more” and they are getting “less”! Even without any instruction, most
are able to compare quantities if only small amounts are involved, or if two quantities are very different from each other visually. For example, in the pictures below, your child could probably tell
right away which box of cars has more cars, or which plate of cookies has fewer cookies.
But comparing larger quantities (or quantities that look about equal) is much more difficult. To learn to make these more difficult comparisons, preschoolers need instruction and lots of
practice. For example, in the picture [below], most young children would find it very difficult to tell which bag has more marbles.
Preschoolers who are just learning to compare larger quantities begin by matching the objects one-by-one to see which group has more.
Then, as they gain more experience with comparing, children learn that they can use counting to compare quantities: “There are seven striped marbles and eight plain marbles. Eight comes after seven,
so eight marbles is more than seven marbles.”
Check out my book, Preschool Math at Home, for more simple, playful math activities to give your preschooler a great start in math.
Give your preschooler a great foundation in math–
in just 5 minutes a day!
3 thoughts on “How Preschoolers Learn to Compare Numbers”
1. I love this! Thanks for breaking it down to the individual skills! So helpful. Curious what you’d recommend for the 6 year old who is right between the Preschool and Kindergarten skill set. How
do you recommend filling in the gaps? Thanks a bunch!
2. Glad you found it helpful, Kara! For a six-year-old who’s in-between, I’d go ahead and begin a kindergarten curriculum. Most kindergarten programs start at the beginning with simple number work,
so he should be able to jump right in.
Happy Math!
3. Tnx this was helpful for my school assignments
Leave a Comment | {"url":"https://kateshomeschoolmath.com/how-preschoolers-compare/","timestamp":"2024-11-11T08:18:58Z","content_type":"text/html","content_length":"60494","record_id":"<urn:uuid:a0c7926e-bd27-4a20-8017-18610ffe621c>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00410.warc.gz"} |
What is: Null Space
What is Null Space?
The concept of null space, also known as the kernel of a matrix, is a fundamental topic in linear algebra, particularly in the fields of statistics, data analysis, and data science. The null space of
a matrix $A$ is defined as the set of all vectors $x$ such that when $A$ is multiplied by $x$, the result is the zero vector. Mathematically, this can be expressed as $Ax = 0$.
Understanding null space is crucial for various applications, including solving linear equations, dimensionality reduction, and understanding the properties of linear transformations.
Mathematical Definition of Null Space
Formally, if $A$ is an $m \times n$ matrix, the null space of $A$, denoted as $N(A)$, is given by the equation $N(A) = \{ x \in \mathbb{R}^n : Ax = 0 \}$. This means that the null space consists of all vectors in $\mathbb{R}^n$ that, when transformed by the matrix $A$, yield the zero vector in $\mathbb{R}^m$. The null space is a vector subspace of $\mathbb{R}^n$ and can provide insights into the solutions of the homogeneous system of equations represented by $Ax = 0$.
Geometric Interpretation of Null Space
Geometrically, the null space can be visualized as the set of directions in which the transformation represented by the matrix $A$ collapses points to the origin. If the null space contains only
the zero vector, it indicates that the transformation is injective (one-to-one), meaning that no two distinct vectors in the domain are mapped to the same point in the codomain. Conversely, if the
null space contains non-zero vectors, it signifies that the transformation is not injective, leading to multiple vectors being mapped to the same point.
Dimension of Null Space
The dimension of the null space, known as the nullity of the matrix $A$, is a critical aspect of linear algebra. It can be determined using the Rank-Nullity Theorem, which states that for any $m \times n$ matrix $A$, the following relationship holds: $\text{rank}(A) + \text{nullity}(A) = n$. Here, the rank of $A$ refers to the dimension of the column space, while the nullity indicates
the dimension of the null space. This theorem provides a powerful tool for understanding the structure of linear transformations and the solutions to linear systems.
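A quick numerical illustration of the theorem (the matrix is an arbitrary example, not from the glossary):

import numpy as np

# A 3x4 matrix whose third and fourth columns are combinations of the first two,
# so the columns are linearly dependent and the null space is non-trivial.
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 1.],
              [1., 1., 3., 2.]])

n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank
print(rank, nullity, rank + nullity == n)   # 2 2 True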
Applications of Null Space in Data Science
In data science, the null space plays a significant role in various applications, including dimensionality reduction techniques such as Principal Component Analysis (PCA). By identifying the null
space of a data matrix, data scientists can determine the directions in which the data varies the least, allowing for effective reduction of dimensionality while preserving essential information.
Additionally, understanding the null space is crucial for regularization techniques in machine learning, where it helps to mitigate overfitting by constraining the solution space.
Null Space and Linear Independence
The relationship between null space and linear independence is another important aspect to consider. A set of vectors is said to be linearly independent if no vector in the set can be expressed as a
linear combination of the others. The presence of a non-trivial null space indicates that there are dependencies among the columns of the matrix $A$. In practical terms, this means that some
features in a dataset may be redundant, and identifying these dependencies can lead to more efficient models and better insights.
Computing the Null Space
To compute the null space of a matrix, one typically employs methods such as Gaussian elimination or the reduced row echelon form (RREF). By transforming the matrix into RREF, it becomes easier to
identify the free variables that correspond to the null space. The solution set can then be expressed in parametric form, providing a clear representation of the null space. Software tools such as
MATLAB, Python (with NumPy or SciPy), and R offer built-in functions to facilitate these computations, making it accessible for practitioners in data analysis and statistics.
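For example, with SciPy this is a one-liner (a minimal sketch; the matrix below is an arbitrary example):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1, so the null space has dimension 2

N = null_space(A)                     # columns form an orthonormal basis of N(A)
print(N.shape)                        # (3, 2)
print(np.allclose(A @ N, 0))          # True: A x = 0 for every basis vector x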
Null Space in the Context of Linear Transformations
When considering linear transformations, the null space provides valuable insights into the behavior of the transformation. If a linear transformation maps a vector space to a lower-dimensional
space, the null space can help identify the dimensions that are effectively “lost” during the transformation. This understanding is particularly relevant in fields such as computer vision and image
processing, where transformations may lead to loss of information, and recognizing the null space can aid in reconstruction and recovery of the original data.
The null space is a vital concept in linear algebra with far-reaching implications in statistics, data analysis, and data science. Its mathematical definition, geometric interpretation, and
applications in various domains underscore its importance in understanding linear systems and transformations. By exploring the null space, practitioners can gain deeper insights into their data and
develop more robust analytical models. | {"url":"https://statisticseasily.com/glossario/what-is-null-space/","timestamp":"2024-11-04T07:49:27Z","content_type":"text/html","content_length":"138293","record_id":"<urn:uuid:208471e2-f7df-40f1-b10d-db50c191d161>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00541.warc.gz"} |
Formula for Calculating percentage in rows
Please below is a dummy example and I need the formula for finding the population percentage so that I can apply the same formula to other rows by clicking and dragging to auto-fill.
Countries Total #of Men Total # of Women Total # of Children Total Population Population in %
Canada 345000 456000 567000 1368000 what is the %?
Best Answer
• Hi @daoppong,
Just create columns of % with formulas in each column as below screenshot:
% Men : =[Total #of Men]@row / [Total Population]@row
% Women : =[Total # of Women]@row / [Total Population]@row
% Children: =[Total # of Children]@row / [Total Population]@row
Gia Thinh Technology - Smartsheet Solution Partner.
• Thank you, Gia. Very helpful. Issue resolved.
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/89197/formula-for-calculating-percentage-in-rows","timestamp":"2024-11-13T13:20:42Z","content_type":"text/html","content_length":"406261","record_id":"<urn:uuid:07c317a6-1cf9-4990-9bc9-c424cabe2bfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00502.warc.gz"} |
Raman Scattering
First of all, the pronunciation is RAman, not raMAN. Raman scattering is a type of inelastic scattering of light by molecules. Compton scattering (see The Nature of Light) is inelastic scattering of
X-rays by electrons. Upon learning of Compton’s discovery (for which Compton received the 1928 Nobel Prize in Physics), C. V. Raman thought there should be a similar effect at visible wavelengths and
set out to find it. He succeeded and subsequently received the 1930 Nobel Prize for the discovery and explanation of what is now called Raman scattering or the Raman effect. (Incidentally, Raman
received his bachelor’s degree in physics at 16, graduating at the head of his class, and he published his first paper at 18. The Raman family produced a number of scientists, including Raman’s nephew
Subrahmanyan Chandrasekhar, who received the Nobel in 1983. The family seems to have had good genes for science.)
The physics of Raman scattering is quite complicated, but it can be conceptualized as follows. Incident light can excite a molecule in its ground state to a higher “virtual” energy level, which then immediately decays back to a lower level accompanied by the emission of light. If the decay returns the molecule to its initial state, the scattering is elastic and is called Rayleigh scattering. If the decay is to a molecular vibrational level above the ground state, then the emitted light has a longer wavelength (lower energy) than the incident light: this is Raman scattering. (See The Physics of Absorption for a discussion of electronic, vibrational, and rotational energy levels.) Raman scattering, often called Raman spectroscopy, is widely used in chemistry as a way to study the vibrational energy levels of molecules. In those applications, the incident light usually comes from a laser. In the ocean, the molecule of interest is water, and the exciting
light can be either sunlight or a laser.
The wavelength shift for Raman scatter by water is exceptionally large, corresponding to a wavenumber (1/wavelength) shift of about $3400\ \mathrm{cm}^{-1}$, which at visible wavelengths is many tens to more than a hundred nanometers. The time scale for Raman scattering (i.e., the time for the molecular excitation and subsequent decay) is roughly $10^{-13}$ to $10^{-12}$ seconds. This is much faster than the time scale of fluorescence by chlorophyll or CDOM molecules, which is on the order of $10^{-9}\ \mathrm{s}$ or longer. Thus Raman scatter can be thought of as an almost instantaneous scattering process, rather than as the absorption of light followed much later by the emission of new light.
Quantities Defining Raman Scattering
The quantities needed to compute Raman scatter contributions to the radiance are
• the Raman scattering coefficient $b_\mathrm{R}(\lambda')$, with units of $\mathrm{m}^{-1}$
• the Raman wavelength distribution function $f_\mathrm{R}(\lambda',\lambda)$, with units of $\mathrm{nm}^{-1}$
• the Raman scattering phase function $\tilde{\beta}_\mathrm{R}(\psi)$, with units of $\mathrm{sr}^{-1}$
The next sections discuss each of these quantities in turn.
The Raman scattering coefficient
The Raman scattering coefficient $b_\mathrm{R}(\lambda')$ tells how much of the irradiance at the excitation wavelength $\lambda'$ scatters into all emission wavelengths $\lambda > \lambda'$, per unit of distance traveled by the excitation irradiance. The most recently published values of $b_\mathrm{R}(488\ \mathrm{nm})$ for water are $(2.7 \pm 0.2) \times 10^{-4}\ \mathrm{m}^{-1}$ (Bartlett et al. (1998)) and $2.4 \times 10^{-4}\ \mathrm{m}^{-1}$ (Desiderio (2000)). (The current version (6.0) of the HydroLight radiative transfer model uses $b_\mathrm{R}(488\ \mathrm{nm}) = 2.6 \times 10^{-4}\ \mathrm{m}^{-1}$ as the default value.)
Various values for the wavelength dependence of $b_\mathrm{R}$ can be found in the literature. Bartlett et al. (1998) reviewed the wavelength dependence of the Raman scattering coefficient in detail and found, based on their measurements, a wavelength dependence of $\lambda^{-4.8 \pm 0.3}$ for calculations performed in energy units (as in HydroLight). In terms of the excitation wavelength, Bartlett et al. found $b_\mathrm{R}(\lambda') = b_\mathrm{R}(488)\,(488/\lambda')^{5.5 \pm 0.4}$ for energy computations. (The current version of HydroLight uses $b_\mathrm{R}(\lambda') = b_\mathrm{R}(488)\,(488/\lambda')^{5.5}$ as the default.) For calculations in terms of photon numbers (as in a Monte Carlo simulation), Bartlett et al. found wavelength dependencies of $(\lambda')^{-5.3 \pm 0.3}$ or $\lambda^{-4.6 \pm 0.3}$.
The Raman wavelength distribution function
The Raman wavelength distribution function $f_\mathrm{R}(\lambda',\lambda)$ relates the excitation and emission wavelengths, i.e., what wavelengths $\lambda$ receive the scattered spectral irradiance for a given excitation wavelength $\lambda'$ or, conversely, what wavelengths $\lambda'$ excite a given emission wavelength $\lambda$. (The term “wavelength distribution function” is non-standard, but descriptive, so that is what was used in Light and Water and again here.) The function $f_\mathrm{R}(\lambda',\lambda)$ is most conveniently described in terms of the corresponding wavenumber distribution function $f_\mathrm{R}(\kappa'')$, where $\kappa''$ is the wavenumber shift, expressed in units of $\mathrm{cm}^{-1}$. This follows because the Raman-scattered light undergoes a frequency shift that is determined by the type of molecule and is independent of the incident frequency. The wavenumber $\kappa$ in $\mathrm{cm}^{-1}$ is related to the wavelength $\lambda$ in nm by $\kappa = 10^7/\lambda$, and to the frequency $\nu$ by $\kappa = \nu/c$, where $c$ is the speed of light. (The $10^7$ factor converts nanometers to centimeters.)
According to Walrafen (1967), the shape of $f_\mathrm{R}(\kappa'')$ for water is given by a sum of four Gaussian functions:

$$f_\mathrm{R}(\kappa'') = \left[\left(\frac{\pi}{4\ln 2}\right)^{1/2}\sum_{i=1}^{4} A_i\right]^{-1} \sum_{j=1}^{4} \frac{A_j}{\Delta\kappa_j}\,\exp\!\left[-4\ln 2\,\frac{(\kappa'' - \kappa_j)^2}{\Delta\kappa_j^2}\right] \quad (\mathrm{cm}), \qquad (1)$$

where
• $\kappa''$ is the wavenumber shift of the Raman-scattered light, relative to the wavenumber $\kappa'$ of the incident light, in $\mathrm{cm}^{-1}$
• $\kappa_j$ is the center of the $j^{\text{th}}$ Gaussian function, in $\mathrm{cm}^{-1}$
• $\Delta\kappa_j$ is the full width at half maximum of the $j^{\text{th}}$ Gaussian function, in $\mathrm{cm}^{-1}$
• $A_j$ is the nondimensional weight of the $j^{\text{th}}$ Gaussian function.
The values of $A_j$, $\kappa_j$, and $\Delta\kappa_j$ for pure water at a temperature of 25 deg C are given in Table 1. Figure 1 shows $f_\mathrm{R}(\kappa'')$ evaluated for the water parameter values of Table 1. The function shows a peak and a shoulder, which result from the sums of the four Gaussians seen in Eq. (1). For water, the wavenumber shift is roughly $3400\ \mathrm{cm}^{-1}$.
Table 1. Gaussian band parameters for pure water at 25 deg C.

  $j$    $A_j$    $\kappa_j$ ($\mathrm{cm}^{-1}$)    $\Delta\kappa_j$ ($\mathrm{cm}^{-1}$)
  1      0.41     3250                               210
  2      0.39     3425                               175
  3      0.10     3530                               140
  4      0.10     3625                               140
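As an illustration, here is a short, independent Python sketch (not code from the original source) that evaluates Eq. (1) with the Table 1 parameters and checks that the function integrates to one, as a probability density should:

import numpy as np

# Table 1 parameters (pure water, 25 deg C)
A  = np.array([0.41, 0.39, 0.10, 0.10])      # weights (nondimensional)
k0 = np.array([3250., 3425., 3530., 3625.])  # band centers, cm^-1
dk = np.array([210., 175., 140., 140.])      # full widths at half maximum, cm^-1

def f_R(kpp):
    """Wavenumber redistribution function f_R(kappa'') of Eq. (1); kpp in cm^-1."""
    kpp = np.atleast_1d(kpp)[:, None]
    norm = np.sqrt(np.pi / (4.0 * np.log(2.0))) * A.sum()
    g = (A / dk) * np.exp(-4.0 * np.log(2.0) * (kpp - k0)**2 / dk**2)
    return g.sum(axis=1) / norm

kpp = np.linspace(2800.0, 4000.0, 2401)
print(f_R(3400.0))                           # value near the peak of the emission band
print(np.sum(f_R(kpp)) * (kpp[1] - kpp[0]))  # ~1: the function integrates to one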
Consider, for example, incident light with $\lambda' = 500\ \mathrm{nm}$, which corresponds to $\kappa' = 20000\ \mathrm{cm}^{-1}$. Figure 1 shows that this light, if Raman scattered, will be shifted by roughly $3400\ \mathrm{cm}^{-1}$ to $\kappa = 16600\ \mathrm{cm}^{-1}$, which corresponds to $\lambda \approx 602\ \mathrm{nm}$.
The function $f_\mathrm{R}(\kappa'')$ can be interpreted as a probability density function giving the probability that light of any incident wavenumber $\kappa' = 10^7/\lambda'$, if Raman scattered, will be scattered to a wavenumber

$$\kappa = \kappa' - \kappa''. \qquad (2)$$
The function $f_\mathrm{R}(\kappa'')$ satisfies the normalization condition

$$\int_0^{\kappa'} f_\mathrm{R}(\kappa'')\,d\kappa'' = 1, \qquad (3)$$

as is required of any probability distribution function. The integration limits above come from observing that as $\lambda \to \infty$ then the wavenumber $\kappa'' \to \kappa'$, and as $\lambda \to \lambda'$ then $\kappa'' \to 0$. A change of variables from $\kappa''$ to $\lambda$ in Eq. (3) leads to the corresponding wavelength redistribution function $f_\mathrm{R}(\lambda' \to \lambda)$. Thus
$$\int_0^{\kappa'} f_\mathrm{R}(\kappa'')\,d\kappa'' = \int_{\lambda'}^{\infty} f_\mathrm{R}\!\left(\frac{10^7}{\lambda''}\right)\frac{d\kappa''}{d\lambda}\,d\lambda = \int_{\lambda'}^{\infty} f_\mathrm{R}\!\left[10^7\left(\frac{1}{\lambda'} - \frac{1}{\lambda}\right)\right]\frac{10^7}{\lambda^2}\,d\lambda \equiv \int_{\lambda'}^{\infty} f_\mathrm{R}(\lambda' \to \lambda)\,d\lambda = 1,$$
where the wavelengths are in nanometers. In the last equation, we have identified the function
$$f_\mathrm{R}(\lambda',\lambda) \equiv \begin{cases} \dfrac{10^7}{\lambda^2}\,f_\mathrm{R}\!\left(\dfrac{10^7}{\lambda''}\right) = \dfrac{10^7}{\lambda^2}\,f_\mathrm{R}\!\left[10^7\left(\dfrac{1}{\lambda'} - \dfrac{1}{\lambda}\right)\right] & \text{if } \lambda' < \lambda \\[1ex] 0 & \text{if } \lambda' \ge \lambda \end{cases} \qquad (4)$$

as being the desired Raman wavelength redistribution function, with units of $\mathrm{nm}^{-1}$.
In $f_\mathrm{R}(\lambda',\lambda)$ we can fix the incident wavelength $\lambda'$ and plot the corresponding emission wavelengths. We can also fix the emission wavelength $\lambda$ and use the function to see where the light emitted at $\lambda$ comes from. Both options are seen in Fig. 2.
Figure 3 shows $f_\mathrm{R}(\lambda',\lambda)$ for four values of the incident wavelength $\lambda'$. These plots show that as the incident wavelength $\lambda'$ increases, the emission band becomes broader and the shift from $\lambda'$ to $\lambda$ becomes larger. Excitation at 400 nm gives emission centered at roughly 463 nm, a shift of 63 nm, but excitation at 550 nm gives emission centered at around 677 nm, a shift of 127 nm.
Equation (2) can be rewritten as

$$\lambda = \frac{10^7}{10^7/\lambda' - 3400}$$

and used to compute the approximate center of the emission wavelength band for a given excitation wavelength. Figure 4 shows the result.
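A small Python helper implementing this relation (assuming the nominal 3400 cm^-1 shift) reproduces the shifts quoted above:

def emission_center(lam_ex_nm, shift_cm1=3400.0):
    """Approximate center of the Raman emission band for excitation at lam_ex_nm."""
    return 1e7 / (1e7 / lam_ex_nm - shift_cm1)

for lam in (400.0, 488.0, 500.0, 550.0):
    print(lam, round(emission_center(lam), 1))
# 400 -> ~463 nm, 488 -> ~585 nm, 500 -> ~602 nm, 550 -> ~677 nm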
The Raman phase function
The Raman phase function $\tilde{\beta}_\mathrm{R}(\psi)$ gives the angular distribution of the Raman scattered radiance. This function (averaging over all polarization states) is given by

$$\tilde{\beta}_\mathrm{R}(\psi) = \frac{3}{16\pi}\,\frac{1 + 3\rho}{1 + 2\rho}\left[1 + \left(\frac{1 - \rho}{1 + 3\rho}\right)\cos^2\psi\right],$$
where $\psi$ is the scattering angle between the direction of the incident and scattered radiance, and $\rho$ is the depolarization factor. The value of $\rho$ depends on the wavenumber shift $\kappa''$ (Ge et al. (1993), their Fig. 2). For a value of $\kappa'' = 3400\ \mathrm{cm}^{-1}$, $\rho \approx 0.18$, in which case the phase function is

$$\tilde{\beta}_\mathrm{R}(\psi) = 0.068\,(1 + 0.53\cos^2\psi).$$
This phase function is similar in shape to the phase function for elastic scattering by pure water.
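As a quick consistency check, the following independent sketch evaluates this phase function and verifies that it integrates to one over all solid angles, as any phase function must:

import numpy as np

def raman_phase_function(psi, rho=0.18):
    """Raman phase function beta~_R(psi) in sr^-1 (psi in radians)."""
    return (3.0 / (16.0 * np.pi)) * (1 + 3*rho) / (1 + 2*rho) * \
           (1 + ((1 - rho) / (1 + 3*rho)) * np.cos(psi)**2)

# Integrate over all solid angles: 2*pi * int_0^pi beta~(psi) sin(psi) dpsi
psi = np.linspace(0.0, np.pi, 10001)
integrand = raman_phase_function(psi) * 2.0 * np.pi * np.sin(psi)
print((integrand[:-1] * np.diff(psi)).sum())   # ~1.0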
Incorporation of Raman Scatter into Radiative Transfer Calculations
We now have the pieces needed to define the volume scattering function for Raman scattering, $\beta_\mathrm{R}(\xi' \to \xi; \lambda' \to \lambda)$, where $\xi'$ and $\xi$ represent the incident and final directions of the light. This VSF specifies the strength of the Raman scattering via the Raman scattering coefficient $b_\mathrm{R}(\lambda')$, the wavelength distribution of the scattered light via the wavelength redistribution function $f_\mathrm{R}(\lambda',\lambda)$, and its angular distribution relative to the direction of the incident light via the Raman phase function $\tilde{\beta}_\mathrm{R}(\psi)$. Thus we have

$$\beta_\mathrm{R}(\xi' \to \xi; \lambda' \to \lambda) = b_\mathrm{R}(\lambda')\,f_\mathrm{R}(\lambda',\lambda)\,\tilde{\beta}_\mathrm{R}(\psi) \qquad (\mathrm{m}^{-1}\,\mathrm{nm}^{-1}\,\mathrm{sr}^{-1}). \qquad (5)$$
Raman scatter is incorporated into unpolarized radiative transfer calculations as a source term (See Eq. 3 of The Scalar Radiative Transfer Equation):
$$\begin{aligned} \cos\theta\,\frac{dL(z,\theta,\phi,\lambda)}{dz} = {}& -c(z,\lambda)\,L(z,\theta,\phi,\lambda) \\ &+ \int_0^{2\pi}\!\!\int_0^{\pi} \beta(z;\theta',\phi' \to \theta,\phi;\lambda)\,L(z,\theta',\phi',\lambda)\,\sin\theta'\,d\theta'\,d\phi' \\ &+ \int_0^{\lambda}\!\!\int_0^{2\pi}\!\!\int_0^{\pi} \beta_\mathrm{R}(\theta',\phi' \to \theta,\phi;\lambda' \to \lambda)\,L(z,\theta',\phi',\lambda')\,\sin\theta'\,d\theta'\,d\phi'\,d\lambda'. \end{aligned} \qquad (6)$$
The scattering angle $\psi$ is given by $\psi = \cos^{-1}(\xi' \cdot \xi)$, which can be computed from $\theta',\phi',\theta,\phi$ via Eq. (5) of the Geometry page.
Note that this formidable equation cannot be solved at just the wavelength $\lambda$ of interest; it must be solved at all wavelengths $\lambda' < \lambda$ that contribute Raman-scattered radiance to the radiance at $\lambda$.
Examples of Raman Effects
Effect on remote-sensing reflectance
Figure 5 shows the effect of Raman scattered radiance on the remote-sensing reflectance for chlorophyll values of $Chl = 0.02$ and $2\ \mathrm{mg\ m^{-3}}$ as simulated by HydroLight. The HydroLight runs used a bio-optical model for Case 1 water for which the absorption and scattering properties of the water are determined only by the chlorophyll concentration. The water was homogeneous and infinitely deep. The Sun was at a 30 deg zenith angle in a clear sky; the wind speed was $5\ \mathrm{m\ s^{-1}}$. Identical runs were made with and without Raman scatter included. It is seen that for the very clear water with $Chl = 0.02$, $R_\mathrm{rs}$ is as much as 22% higher, but for the water with $Chl = 2$, Raman scattering increases $R_\mathrm{rs}$ by at most 5%. The Raman effect decreases but still can be significant in higher chlorophyll waters or in turbid Case 2 waters. Although difficult to see in Fig. 5, the Raman effect does not “turn on” until about 340 nm even though the HydroLight run started at 300 nm. This is because the start of the emission band for excitation near 300 nm is around 340 nm (recall Fig. 4).
Effect on upwelling plane irradiance
Although Raman scattering does not have a large effect on $R_\mathrm{rs}$ except in the clearest water, it can be the dominant source of light at red wavelengths at depths where absorption by water has removed most of the incoming sunlight. This is illustrated in Fig. 6. HydroLight was run for Case 1 water with a chlorophyll concentration of $Chl = 0.5\ \mathrm{mg\ m^{-3}}$, which is typical of open ocean water. The Sun was at a 30 deg zenith angle in a clear sky. The run started at 300 nm, so that Raman effects would be present at wavelengths greater than 340 nm. At 400 and 500 nm, the contribution by Raman scattered light to the upwelling irradiance is almost unnoticeable. This is because at these wavelengths elastically scattered solar radiance is the main contributor to the upwelling irradiance. At 580 nm, water absorption ($a_w(580) = 0.09\ \mathrm{m}^{-1}$) is beginning to filter out enough of the solar radiance that the Raman contribution, which comes from wavelengths around 485 nm (see Fig. 4), where the light penetrates well to depth, is becoming the main contribution to $E_u(z,580)$. At 600 nm, water absorption ($a_w(600) = 0.22\ \mathrm{m}^{-1}$) has removed almost all of the solar light below about 20 m. Below 20 m, almost all of $E_u(z,600)$ comes from Raman scattered light that originates from wavelengths around 500 nm, where sunlight penetrates well to depth. It should be noted that below about 40 m, the depth rate of decay of $E_u(z,600)$ (i.e., $K_u(z,600)$) is almost the same as the rate of decay of the irradiance at 500 nm. This is because the light at 500 nm is the source of the light at 600 nm.
It was the measurement of unexpected upwelling irradiance $E_u(z,\lambda)$ at depths below 50 m and wavelengths greater than 520 nm that led Sugihara et al. (1984) to suggest that the unexpected upwelling irradiance came from downwelling irradiance at blue-green wavelengths being Raman scattered. A number of subsequent studies (e.g., Stavn and Weidemann (1988), Marshall and Smith (1990)) confirmed this hypothesis. Many studies since have examined Raman effects on ocean light fields. For example, Kattawar and Xu (1992) and Ge et al. (1993) studied the filling in of Fraunhofer lines in underwater light by Raman scattering.
Interpretation of Raman emission profiles
Another interesting example of the contribution of Raman scattering to in-water radiances is seen in Fig. 7. This plot shows the downwelling (zenith-viewing) radiance generated from a HydroLight run
using its option for simulating LIDAR illumination at one wavelength in an otherwise black sky. The inputs were as follows:
• The incident (LIDAR) irradiance was $E_d(\text{direct beam}) = 1\ \mathrm{W\ m^{-2}}$ at 488 nm; the sky was otherwise black.
• The chlorophyll concentration was $Chl = 0.05\ \mathrm{mg\ m^{-3}}$ for Case 1 water
• The water was infinitely deep, with Eq. (6) solved down to 50 m
• The output was at 560 to 610 nm at 1 nm resolution
The curves in Fig. 7 are explained as follows. At depth 0, just below the sea surface, the only contribution to $L_d(0,\lambda)$ is upwelling radiance that is reflected back downward by the sea surface. By 5 m depth, there is now enough water above the simulated measurement instrument that the water column is generating significant downwelling radiance. Note that the emission has the same shape as the emission function seen in Fig. 2, namely a peak near 585 nm with a shoulder at about 580 nm. As the depth increases to 10 and then 20 m, the magnitude of $L_d(z,\lambda)$ increases, but the shape of the emission begins to flatten between 580 and 585 nm. By 30 m, the magnitude of $L_d(z,\lambda)$ has decreased because of the decrease in the radiance penetrating to this depth from the water between the surface and 30 m, and the magnitude continues to decrease as the depth becomes greater. However, the shape of the emission at 30 m shows almost the same magnitudes at 580 and 585 nm, and at 40 and 50 m, the peak emission is actually greater at 580 than at 585 nm. This may seem strange because the shape of the Raman emission function seen in Fig. 2 is the same for all depths.
This “reversal” of the “peak-shoulder” shape of the emission is a consequence of the difference in absorption across the emission wavelengths. The total absorption (water plus phytoplankton) increases by a factor of three (from 0.071 to $0.221\ \mathrm{m}^{-1}$) between 570 and 600 nm, and by 22% (from 0.091 to $0.111\ \mathrm{m}^{-1}$) between 580 and 585 nm. These rapidly increasing absorption values change the shape of the local (at each depth) emission function when integrated over depth to obtain the total $L_d(z,\lambda)$, which has contributions from all depths. Simply stated, the higher absorption at the 585 nm peak lets relatively less of the radiance emitted above a given depth reach the measurement depth than for the shoulder at 580 nm, so the 585 nm peak appears smaller relative to the shoulder than what is seen for the emission function of Fig. 2.
The claim that the change in shape with depth of the Raman emission is due to wavelength-dependent absorption can be verified as follows. An “artificial water” IOP data file was created with the IOP values between 570 and 600 nm set to the values at 570 nm. The water IOPs are then the same over the entire range of emission wavelengths seen in Fig. 7. The resulting Raman $L_d(z,\lambda)$ spectra are seen in Fig. 8. Now, without the wavelength-dependent absorption, the shape of $L_d(z,\lambda)$ does not change with depth and, indeed, looks exactly like the shape of the emission function seen in Fig. 2. The magnitude of the $L_d(z,\lambda)$ curves is greater than before because the absorption is less. Of course, if the HydroLight run had been made at 5 or 10 nm resolution, then the shape of the emission band would not have been resolved. However, the total Raman-scattered power would have been the same, but spread over the wider bands.
These simulations show that the interpretation of Raman-scattered spectra can be complicated because of IOP effects at both the excitation and emission wavelengths, even in the simplest possible case
of excitation at one wavelength in an otherwise black sky. The situation becomes even more complicated for solar-stimulated Raman scatter because many excitation wavelengths can contribute to a range
of emission wavelengths, and everything blurs together in a non-obvious fashion.
Temperature and Salinity Dependence
The Walrafen data presented in Table 1 were determined for pure water at a temperature of 25 deg C. There is, however, a small but significant dependence of the shape of the emission curve seen in Fig. 1 on temperature and salinity. This dependence is shown in Fig. 9. The upper panel shows the shape of the emission curve as a function of temperature for a salinity of 15 PSU, and the lower panel shows the dependence on salinity for a temperature of 25 deg C. Artlett and Pask (2017) have shown in laboratory measurements that these differences can be used to simultaneously determine temperature and salinity with RMS errors of $\pm 0.7$ deg C and $\pm 1.4$ PSU, and they present the design for a three-channel Raman spectrometer that excites at 532 nm and measures the emission at three bands. The emission bands would be used to form band ratios, from which temperature and salinity can be extracted.
Loading Conversation | {"url":"https://www.oceanopticsbook.info/view/scattering/level-2/raman-scattering","timestamp":"2024-11-02T02:43:55Z","content_type":"text/html","content_length":"146546","record_id":"<urn:uuid:cecd0e6c-4604-433d-bec0-2180244955df>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00146.warc.gz"} |
Why (N-1) vs N [Long post] | Stephen R. Martin, PhD
Why (N-1) vs N [Long post]
I have a lot of students ask why we use (n – 1) vs (n) when computing variance. There are of course canned answers, like “Using N-1 unbiases the estimate” or “Using N-1 inflates the estimate, and
it’s more accurate” or “Using N results in an underestimate”.
These are all true, but they’re deeply unsatisfying. I remember learning the more precise reason in my first semester of my PhD program, but the question came up again and I decided to write it
out myself as a reference. This will be fairly basic stuff for many, I’m sure, but for the sake of my students and my future self, here it is:
First, distinguish between an estimator and its properties in expectation.
The estimator is some method with a defined goal that produces the best estimate given the goals.
Let’s say we are estimating the $\mu$ and $\sigma$ parameters to a normal distribution.
Let’s further say that we want a point estimate, and we’re not Bayesians (for now…), nor do we have any penalties.
One estimator is the maximum likelihood estimator, for obtaining the maximum likelihood estimate (MLE).
The MLE is the estimate that maximizes the joint probability density of the observed data, assuming some probability distribution underlies the sampling distribution.
In our case, we have the normal distribution, which looks like:
$$p(x | \mu, \sigma) = \sigma^{-1}(2\pi)^{-.5}e^{-(2\sigma^2)^{-1}(x - \mu)^2}$$
When we have more than one x – A vector of x called $X$ (capitalized), and if we assume independence, then the joint density is:
$$p(X | \mu, \sigma) = \prod_{i = 1}^N \sigma^{-1}(2\pi)^{-.5}e^{-(2\sigma^2)^{-1}(x_i - \mu)^2}$$
This is a pain to compute with, and the log-likelihood is much easier.
$$f(X | \mu, \sigma) = -N\log(\sigma) - .5N\log(2\pi) - (2\sigma^2)^{-1}\sum_{i = 1}^N (x_i - \mu)^2$$
Now, using this log-likelihood, we just need to see which value of $\mu$ and which value of $\sigma$ maximize the function output, the log-likelihood.
Because the log function is a monotonic function, maximizing the log-likelihood is the same as maximizing the likelihood.
To do this, we need to take the derivative of the log-likelihood with respect to each parameter, one at a time. Ignoring the notation for partial derivatives for the moment, the derivatives are as follows:

$$\begin{aligned}
f(X|\mu,\sigma) &= \ldots -(2\sigma^2)^{-1}\sum(x_i^2 - 2x_i\mu + \mu^2) \\
&= \ldots -(2\sigma^2)^{-1} \left( \sum x_i^2 - 2\mu\sum x_i + N\mu^2 \right) \\
&= \ldots - \frac{\sum x_i^2}{2\sigma^2} + \frac{\mu}{\sigma^2}\sum x_i - N\mu^2(2\sigma^2)^{-1} \\
\frac{d\,f(X|\mu,\sigma)}{d\mu} &= 0 + \frac{\sum x_i}{\sigma^2} - N\mu(\sigma^2)^{-1} \\
0 &= \frac{d\,f(X|\mu,\sigma)}{d\mu} \\
0 &= \sum x_i - N\mu \\
\mu &= \frac{\sum x_i}{N}
\end{aligned}$$
I’ll call that $\hat\mu$, since it’s an estimate that maximizes the likelihood of the observations, and not the “true” $\mu$ parameter value.
Onto $\sigma$:
$$\begin{aligned}
f(X | \mu, \sigma) &= -N\log(\sigma) - .5N\log(2\pi) - (2\sigma^2)^{-1}\sum_{i = 1}^N (x_i - \mu)^2 \\
\frac{d\,f}{d\sigma} &= -N\sigma^{-1} + \sigma^{-3}\sum(x_i - \mu)^2 \\
0 &= \sigma^{-1}\left(\sigma^{-2}\sum(x_i - \mu)^2 - N\right) \\
\sigma^{-2}\sum(x_i - \mu)^2 &= N \\
\sigma^2 &= \frac{\sum(x_i - \mu)^2}{N}
\end{aligned}$$
I’ll call that $\hat\sigma^2$, since it’s an estimate that maximizes the likelihood of the observations, and not the “true” $\sigma$ parameter value.
So now we have two estimators, derived from maximum likelihood goals:
$$\hat\mu = \frac{\sum x_i}{N}, \qquad \hat\sigma^2 = \frac{\sum (x_i - \hat\mu)^2}{N}$$
Now the question, what are the expected values of these parameter estimates, and are they equal to the desired parameter values?
Let me just take a shortcut and say the expected value of $\hat\mu$ is indeed $\mu$.
There is variance of the $\hat\mu$ estimate around $\mu$, derived as follows.
Assume $E(x_i) = \mu$, $E(\sum x_i) = \sum E(x_i)$, $E(cX) = cE(X)$, which are some basic expectation rules, and the rest follows.
$$\begin{aligned}
E(\hat\mu - \mu)^2 &= E(\hat\mu^2 - 2\hat\mu\mu + \mu^2) \\
&= E(\hat\mu^2) - 2\mu E(\hat\mu) + \mu^2 \\
&= E\left(\frac{(\sum x_i)^2}{N^2}\right) - 2\mu E\left(\frac{\sum x_i}{N}\right) + \mu^2 \\
&= \frac{1}{N^2}E\left((\sum x_i)^2\right) - \mu^2 \\
&= \frac{1}{N^2}E\left(\sum (x_i - \mu + \mu) \right)^2 - \mu^2 \\
&= \frac{1}{N^2}E\left(\sum(x_i - \mu) + N\mu \right)^2 - \mu^2 \\
&= \frac{1}{N^2} E\left( \left(\sum (x_i - \mu)\right)^2 + 2N\mu\sum(x_i - \mu) + N^2\mu^2 \right) - \mu^2 \\
&= \frac{1}{N^2} (N\sigma^2 + 0 + N^2\mu^2) - \mu^2 \\
&= \frac{\sigma^2}{N} = \sigma^2_\mu
\end{aligned}$$
Voila, the expected variance (VAR) of the $\hat\mu$ estimator, defined as $E(\hat\mu - \mu)^2$, is equal to $\frac{\sigma^2}{N}$.
Just as our textbooks tell us.
With that out of the way, what is the expected value of $\hat\sigma^2$?
$$\begin{aligned}
\hat\sigma^2 &= \frac{\sum(x_i - \hat\mu)^2}{N} \\
E(\hat\sigma^2) &= \frac{1}{N}E\left(\sum((x_i - \mu) - (\hat\mu - \mu))^2\right) \\
&= \frac{1}{N}E\left(\sum (x_i - \mu)^2 - 2\sum(x_i - \mu)(\hat\mu - \mu) + \sum(\hat\mu - \mu)^2\right) \\
&= \frac{1}{N}\left(N\sigma^2 - 2\sum E(x_i - \mu)(\hat\mu - \mu) + N\sigma^2_\mu \right) \\
&= \frac{1}{N}\left(N\sigma^2 - 2\sum E(\hat\mu x_i - \mu x_i - \mu\hat\mu + \mu^2) + N\sigma^2_\mu \right) \\
&= \sigma^2 + \sigma^2_\mu - \frac{2}{N}E(N\mu^2 - N\mu\hat\mu - N\mu\hat\mu + N\hat\mu^2) \\
&= \sigma^2 + \sigma^2_\mu - 2E(\hat\mu - \mu)^2 \\
&= \sigma^2 - \sigma^2_\mu \\
E(\hat\sigma^2) &= \sigma^2 - \frac{\sigma^2}{N} = \sigma^2\left(1 - \frac{1}{N}\right) = \sigma^2\left(\frac{N-1}{N}\right) \\
\sigma^2 &= \hat\sigma^2\frac{N}{N-1}
\end{aligned}$$
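A quick numpy simulation (my own illustration, not part of the original post) makes the bias concrete: with N = 5 draws per sample, the divide-by-N estimator averages to about $\sigma^2(N-1)/N$, while the divide-by-(N-1) version averages to $\sigma^2$.

import numpy as np

rng = np.random.default_rng(0)
N, reps, mu, sigma = 5, 200_000, 0.0, 2.0

x = rng.normal(mu, sigma, size=(reps, N))
xbar = x.mean(axis=1, keepdims=True)

mle      = ((x - xbar) ** 2).sum(axis=1) / N        # divide by N
unbiased = ((x - xbar) ** 2).sum(axis=1) / (N - 1)  # divide by N - 1

print(sigma**2 * (N - 1) / N)   # 3.2 <- what the MLE is expected to give
print(mle.mean())               # ~3.2
print(unbiased.mean())          # ~4.0 = sigma^2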
1. $E(\hat\sigma^2) = \sigma^2 - \sigma^2_\mu$ is really interesting. It’s saying the expected value of our estimate of the variance is equal to the true variance, minus the error variance of the
mean estimator. Conversely, you can say that the true variance is equal to the observed variance plus the error variance in the mean estimate. That already should intuitively suggest to you that
the reason the MLE for $\sigma$ is underestimated is because it fails to account for the variability in the mean used to compute the sample variance. Scores not only vary around the mean, but the
mean varies around the parameter, and by neglecting that latter variability, we underestimate the population variance.
From an ordinary least squares perspective, you can think of it this way: The mean is the value for which the average squared deviation from it is the smallest. Already, the mean is minimizing
the variability. But because the mean does not equal the population parameter, that variability will be minimized around the incorrect point.
2. The $\frac{N}{N-1}$ adjustment merely alters the estimated value so that the expected value will equal the true value. It does so basically by indirectly adding back some error variance due to the estimation of the mean itself. And if that fraction doesn’t look familiar to you, maybe this helps: $\hat\sigma^2 \frac{N}{N-1} = \frac{N\sum(x_i - \hat\mu)^2}{N(N-1)} = \frac{\sum (x_i - \hat\mu)^2}{N-1}$.
And that’s basically it. I probably have minor mistakes in the above, as this was all hastily written. But that’s the gist – Using (N-1) as the divisor ultimately comes from the fact that our
estimator is not expected to equal the true variance parameter. The estimated sample variance is off by a factor of $\frac{N-1}{N}$, asymptotically, so we just adjust it so the expected value with
the adjustment does equal the true variance. | {"url":"https://srmart.in/n-1-vs-n-long-post/","timestamp":"2024-11-09T20:12:43Z","content_type":"text/html","content_length":"88295","record_id":"<urn:uuid:cdb4d578-d520-4d94-99c6-460f53c3a092>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00793.warc.gz"} |
A model of HIV-1 pathogenesis that includes an intracellular delay
Model Status
Model Structure
Reaction schematic for the model
Catherine Lloyd, Auckland Bioengineering Institute, The University of Auckland
This CellML model represents the general model from the paper, based on equation 1. The CellML model runs in both COR and
OpenCell and the units are consistent. The model simulation output looks reasonable but we are unsure as to whether or not it recreates the results of the published model as there are no obvious
figures for simple comparison. ABSTRACT: Mathematical modeling combined with experimental measurements have yielded important insights into HIV-1 pathogenesis. For example, data from experiments in
which HIV-infected patients are given potent antiretroviral drugs that perturb the infection process have been used to estimate kinetic parameters underlying HIV infection. Many of the models used to
analyze data have assumed drug treatments to be completely efficacious and that upon infection a cell instantly begins producing virus. We consider a model that allows for less than perfect drug
effects and which includes a delay in the initiation of virus production. We present detailed analysis of this delay differential equation model and compare the results to a model without delay. Our
analysis shows that when drug efficacy is less than 100%, as may be the case in vivo, the predicted rate of decline in plasma virus concentration depends on three factors: the death rate of virus
producing cells, the efficacy of therapy, and the length of the delay. Thus, previous estimates of infected cell loss rates can be improved upon by considering more realistic models of viral
infection. The original paper reference is cited below: A model of HIV-1 pathogenesis that includes an intracellular delay, Patrick W. Nelson, James D. Murray, and Alan S. Perelson, 2000,
Mathematical Biosciences, 163, 201-215. PubMed ID: 10701304 A schematic diagram showing the cascade of events triggered by the binding of a HIV-1 virus particle to a receptor on a target T-cell. | {"url":"https://models.cellml.org/workspace/nelson_murray_perelson_2000/@@rawfile/5bf656acbd7c2608436eed3fbb5acb23296d077e/nelson_murray_perelson_2000_general.cellml","timestamp":"2024-11-12T10:21:22Z","content_type":"application/cellml+xml","content_length":"25028","record_id":"<urn:uuid:6df28ee0-d720-424d-ab5d-072b2a34e107>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00365.warc.gz"} |
Project description
Keras Attention Mechanism
Many-to-one attention mechanism for Keras.
pip install attention
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.models import load_model, Model
from attention import Attention
def main():
    # Dummy data. There is nothing to learn in this example.
    num_samples, time_steps, input_dim, output_dim = 100, 10, 1, 1
    data_x = np.random.uniform(size=(num_samples, time_steps, input_dim))
    data_y = np.random.uniform(size=(num_samples, output_dim))

    # Define/compile the model.
    model_input = Input(shape=(time_steps, input_dim))
    x = LSTM(64, return_sequences=True)(model_input)
    x = Attention(units=32)(x)
    x = Dense(1)(x)
    model = Model(model_input, x)
    model.compile(loss='mae', optimizer='adam')

    # Train.
    model.fit(data_x, data_y, epochs=10)

    # Test save/reload model: save first so the reload below has a file to read.
    pred1 = model.predict(data_x)
    model.save('test_model.h5')
    model_h5 = load_model('test_model.h5', custom_objects={'Attention': Attention})
    pred2 = model_h5.predict(data_x)
    np.testing.assert_almost_equal(pred1, pred2)


if __name__ == '__main__':
    main()
Other Examples
Browse examples.
Install the requirements before running the examples: pip install -r examples/examples-requirements.txt.
IMDB Dataset
In this experiment, we demonstrate that using attention yields a higher accuracy on the IMDB dataset. We consider two LSTM networks: one with this attention layer and the other one with a fully
connected layer. Both have the same number of parameters for a fair comparison (250K).
Here are the results on 10 runs. For every run, we record the max accuracy on the test set for 10 epochs.
Measure No Attention (250K params) Attention (250K params)
MAX Accuracy 88.22 88.76
AVG Accuracy 87.02 87.62
STDDEV Accuracy 0.18 0.14
As expected, there is a boost in accuracy for the model with attention. It also reduces the variability between the runs, which is something nice to have.
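For reference, a minimal sketch of what such an IMDB model could look like with this layer is shown below. It is not the repository's exact benchmark script, and the vocabulary size, sequence length and layer widths are assumptions that are not tuned to the 250K-parameter budget mentioned above.

from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
from tensorflow.keras.models import Model
from attention import Attention

max_features, max_len = 20000, 200  # assumed settings
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

inputs = Input(shape=(max_len,))
h = Embedding(max_features, 32)(inputs)
h = LSTM(64, return_sequences=True)(h)   # sequence output feeds the attention layer
h = Attention(units=32)(h)
outputs = Dense(1, activation='sigmoid')(h)
model = Model(inputs, outputs)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=128)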
Adding two numbers
Let's consider the task of adding two numbers that come right after some delimiters (0 in this case):
x = [1, 2, 3, 0, 4, 5, 6, 0, 7, 8]. Result is y = 4 + 7 = 11.
The attention is expected to be the highest after the delimiters. An overview of the training is shown below, where the top represents the attention map and the bottom the ground truth. As the
training progresses, the model learns the task and the attention map converges to the ground truth.
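One plausible way to generate data for this task is sketched below (the repository's own generator may differ): the two 0 delimiters are placed at random, non-adjacent positions and the target is the sum of the two values that immediately follow them.

import numpy as np

def make_add_task(num_samples=10000, seq_len=10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(1, 10, size=(num_samples, seq_len))
    y = np.empty((num_samples, 1))
    for i in range(num_samples):
        while True:
            d1, d2 = sorted(rng.choice(seq_len - 1, size=2, replace=False))
            if d2 - d1 >= 2:              # keep the two delimiters non-adjacent
                break
        x[i, d1] = 0
        x[i, d2] = 0
        y[i, 0] = x[i, d1 + 1] + x[i, d2 + 1]
    # shape (num_samples, seq_len, 1) for the RNN, targets (num_samples, 1)
    return x[..., None].astype('float32'), y.astype('float32')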
Finding max of a sequence
We consider many 1D sequences of the same length. The task is to find the maximum of each sequence.
We give the full sequence processed by the RNN layer to the attention layer. We expect the attention layer to focus on the maximum of each sequence.
After a few epochs, the attention layer converges perfectly to what we expected.
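A minimal sketch of this task, reusing the same layer stack as the README example (the sequence length and layer sizes below are arbitrary choices):

import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model
from attention import Attention

seq_len = 20
x = np.random.uniform(size=(10000, seq_len, 1))
y = x.max(axis=1)                                # target: the maximum of each sequence

inputs = Input(shape=(seq_len, 1))
h = LSTM(64, return_sequences=True)(inputs)      # the full sequence is passed to the attention layer
h = Attention(units=32)(h)
outputs = Dense(1)(h)
model = Model(inputs, outputs)
model.compile(loss='mae', optimizer='adam')
model.fit(x, y, epochs=5, validation_split=0.1)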
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Hashes for attention-5.0.0.tar.gz
Algorithm Hash digest
SHA256 dec0734c8de45be9b15765b4b2fd5c952484246a8bbfa4953b81951948402b8e
MD5 31df6e2f394bbb8499b1b6d37718e8a6
BLAKE2b-256 c33f4f821fbcf4c401ec43b549b67d12bf5dd00eb4545378c336b09a17bdd9f3
Hashes for attention-5.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 5605b4b2fb5780f161b525819d94ebdf05ccf5aa5febbd70eeb9c6e9eea239bd
MD5 3178864cc0d20c1e7180fce6967f11c1
BLAKE2b-256 5559e43b191c104ba7f5f289acd11921511838fbab273c1164b954203cf8d966 | {"url":"https://pypi.org/project/attention/","timestamp":"2024-11-09T12:25:14Z","content_type":"text/html","content_length":"54701","record_id":"<urn:uuid:c15c792e-bc22-4d9b-9efb-b590dc95bb51>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00142.warc.gz"} |
Reading the content in a sinusoidal wave
Making sense of the equation
The equation for the displacement of an elastic string undergoing sinusoidal oscillation is
$$y(x,t) = A \sin{(kx-ωt)}$$
This is an anchor equation. It can serve as the basis for understanding a lot of interesting phenomena, including beats, interference, and spectral analysis.
What does it all mean? Let's make sense of each part of it.
1. What are we talking about? The expression $y(x,t)$ means that we are finding the $y$ displacement of the bit of string that is labeled by its position $x$ at a time $t$. By labeling the
displacement as $y$ and the position along the string $x$ we imply that the wave is a transverse wave. More generally we could write: $f(x,t) = A \sin{(kx-ωt)}$ where $f$ indicates the displacement
of the bit of string away from its equilibrium position. The tricky part is that it oscillates in both time and space.
2. What's $A$? Since we know that the function "sin" only oscillates between 1 and -1 and is dimensionless, multiplying it by $A$ means that $y$ will oscillate between the values $A$ and $-A$. So we
can interpret $A$ as the amplitude of the oscillation.
3. What's $k$? The constant $k$ was introduced to make units come out right. But it will have implications. Since $k$ is about how fast the argument of sine changes with $x$, let's choose a fixed
value of $t$, say for convenience at $t = 0$. Then our function is $A \sin{kx}$. It looks like the figure below.
How does this change as $x$ changes? We know that the sine goes through a full oscillation when its argument (in radians) changes by $2π$ (say, from 0 up to $2π$). If $kx$ changes by $2π$, then $x$
must change by $2π/k$. Therefore, when $x$ changes by $2π/k$ our function goes through one full oscillation. The spatial distance for one full oscillation is called the wavelength, $λ$, and $k$ is
called the wave number. Therefore
$$k = 2π/λ$$
and it has dimensionality $[k] =$ 1/L. Choosing $k$ selects how fast the wave will be changing in space.
4. What's $ω$? This was put in to give correct units to what happens when time changes. So, as in part 3, this time we consider a fixed $x$ position, say for convenience $x = 0$. When we look at the
time variation of a particular bead, we get something proportional to $\sin{ωt}$. It looks like the figure below.
We've put a little clock to indicate that the axis is a time measurement.
We know that the sine goes through a full oscillation when its argument (in radians) changes by $2π$ (say, from 0 up to $2π$). If the argument of sin, $ωt$ changes by $2π$, then $t$ must change by
$2π/ω$. Therefore, when $t$ changes by $2π/ω$, the sine goes through one full oscillation. The time for a bit of the string to go through one full oscillation is called the period, $T$. Therefore
$$ω = 2π/T.$$
The inverse of the period is also a convenient variable, the frequency, $f$. It's measured in inverse seconds (cycles per second) which is called Hertz. This gives the equations
$$f = 1/T = ω/2π.$$
It's easy to see that this is right via unit conversion including the units of the angles. The frequency $f$ is in cycles/sec, the angular velocity $ω$ is in radians/sec and 1 cycle = $2π$ radians.
So multiplying $ω$ by 1 = (1 cycle)/(2π radians) converts the units from radians/sec to cycles/sec.
Relating the frequency and the wavelength
We've related the frequency and the wavelength to our parameters $ω$ and $k$, but we have a relationship between them: $ω = kv_0$. What does that tell us about the frequency, wavelength, and period?
If we express $ω$ and $k$ in terms of frequency and wavelength in this relation, we get
$$ω = kv_0$$
Expressing $\omega$ and $k$ in terms of $f$ and $\lambda$, we get
$$(2πf) = (2π/λ)v_0$$
so the $2\pi$s cancel and we can get
$$fλ = v_0.$$
So the product of the frequency and the wavelength is the wave speed. This makes more sense if we express it in terms of the period. Since $f = 1/T$, we get
$$λ = v_0T.$$
This relation makes good sense: If we wiggle a string in our hand to generate a wave, in the time we go through one full oscillation (the period, $T$) a full wiggle will have run out onto the string.
In that time the start of the wiggle has already moved along the string for a time $T$. Since the wiggle is moving with a speed $v_0$, the wavelength, $λ$, must be just $v_0T$.
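A quick numerical check of these relations (illustrative only; the values of $A$, $λ$ and $T$ below are arbitrary):

import numpy as np

A, lam, T = 0.02, 0.5, 0.1       # amplitude (m), wavelength (m), period (s)
k = 2 * np.pi / lam              # wave number, k = 2π/λ
w = 2 * np.pi / T                # angular frequency, ω = 2π/T
f = 1 / T                        # frequency (Hz), f = 1/T

print(w / k, f * lam, lam / T)   # all three give the wave speed v0 = 5.0 m/s

x, t = 0.3, 0.25
print(A * np.sin(k * x - w * t)) # displacement of the string at (x, t)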
Joe Redish 3/31/12 , Wolfgang Losert 4/6/13
Article 699
Last Modified: March 30, 2022 | {"url":"https://www.compadre.org/nexusph/course/view.cfm?ID=699","timestamp":"2024-11-05T07:52:13Z","content_type":"text/html","content_length":"17570","record_id":"<urn:uuid:c000d625-259a-455d-9473-1cb93d88e8e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00747.warc.gz"} |
Printable Maths Puzzles For 8 Year Olds - Printable Crossword Puzzles
Printable Maths Puzzles For 8 Year Olds – printable maths puzzles for 8 year olds. Who does not know about Printable Maths Puzzles For 8 Year Olds? This medium is traditionally used to teach words and numbers. In almost any part of the world it will be familiar to most people. At the very least, people have probably seen one in class; others may have come across one somewhere else.
For most people, then, this is probably nothing new. The medium is widely used in teaching and learning activities. Still, there are a few things worth knowing about the crossword puzzle. Interested in learning more? Let's look at the information below.
What you should Find out about Printable Maths Puzzles For 8 Year Olds
Let’s remember the storage where you can find this multimedia. University may be a position where kids will likely look at it. For instance, when children are understanding a vocabulary, they want
numerous fun routines. Nicely, Printable Maths Puzzles For 8 Year Olds can be one of your actions. Here is the method that you remedy the puzzles.
Inside a crossword puzzle, you will notice a lot of letters that are positioned in length. They could not are as a way. In reality, you will get to view several words. But, there are always
directions of the things words that you have to see in the puzzle. A list could have more than 5 words to discover. This will depend in the puzzle maker, though.
In case you are the one who make it, you are able to choose how numerous phrases the children must discover. Those phrases might be written above, alongside, or beneath the puzzle. Additionally,
Printable Maths Puzzles For 8 Year Olds are mostly in square shape. Square is most frequent condition to use. You have to have ever seen no less than one, do not you?
By now you have probably recalled plenty of memories connected with this puzzle, right? As for using it in teaching and learning activities, vocabulary study is not the only subject that uses this medium; it can easily be used in other subjects as well.
One more example: it can be used in a science lesson about the planets in the galaxy. The names of the planets can be written down to help children find them in the puzzle. It is an exciting activity for them.
It is also not a difficult task to prepare, and of course it can be put to uses outside the field of education as well. To make Printable Maths Puzzles For 8 Year Olds, the first option is to make them yourself. It is not hard at all to put one together on your own.
Another option is to use a crossword puzzle generator. There are many free websites and free programs that make the job easier. They let you set up the puzzle simply by entering the words you want, and there you are: your crossword puzzle is ready to use.
It is very easy to make your own Printable Maths Puzzles For 8 Year Olds, right? You do not need to invest much time and energy when a generator does the work for you. Printable
Maths Puzzles For 8 Year Olds | {"url":"https://crosswordpuzzles-printable.com/printable-maths-puzzles-for-8-year-olds/","timestamp":"2024-11-03T23:13:48Z","content_type":"text/html","content_length":"53598","record_id":"<urn:uuid:4746f1ee-4be5-4171-8c8c-55bad51f91d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00167.warc.gz"} |
Multiple Associative Container
Category: containers Component type: concept
A Multiple Associative Container is an Associative Container in which there may be more than one element with the same key. That is, it is an Associative Container that does not have the restrictions
of a Unique Associative Container.
Refinement of
Associative Container
Associated types
None, except for those defined by Associative Container
X A type that is a model of Multiple Associative Container
a Object of type X
t Object of type X::value_type
k Object of type X::key_type
p, q Object of type X::iterator
Valid expressions
In addition to the expressions defined in Associative Container, the following expressions must be valid.
│ Name │ Expression │ Type requirements │Return type│
│Range constructor│X(i, j) │i and j are Input Iterators whose value type is convertible to T [1] │ │
│ │X a(i, j); │ │ │
│Insert element │a.insert(t) │ │X::iterator│
│Insert range │a.insert(i, j)│i and j are Input Iterators whose value type is convertible to X::value_type. │void │
Expression semantics
│ Name │Expression│ Precondition │ Semantics │ Postcondition │
│Range │X(i, j) │[i,j) is a valid│Creates an associative container that contains all elements in the range [i,j). │size() is equal to the distance from i to j. Each element in [i, │
│constructor│X a(i, j);│range. │ │j) is present in the container. │
│Insert │a.insert │ │Inserts t into a. │The size of a is incremented by 1. The value of a.count(t) is │
│element │(t) │ │ │incremented by 1. │
│Insert │a.insert │[i, j) is a │Equivalent to a.insert(t) for each object t that is pointed to by an iterator in the range│The size of a is incremented by j - i. │
│range │(i, j) │valid range. │[i, j). Each element is inserted into a. │ │
Complexity guarantees
Average complexity for insert element is at most logarithmic.
Average complexity for insert range is at most O(N * log(size() + N)), where N is j - i.
[1] At present (early 1998), not all compilers support "member templates". If your compiler supports member templates then i and j may be of any type that conforms to the Input Iterator requirements.
If your compiler does not yet support member templates, however, then i and j must be of type const T* or of type X::const_iterator.
See also
Associative Container, Unique Associative Container, Unique Sorted Associative Container, Multiple Sorted Associative Container
Copyright © 1999 Silicon Graphics, Inc. All Rights Reserved. TrademarkInformation | {"url":"http://seanborman.com/STL_doc/MultipleAssociativeContainer.html","timestamp":"2024-11-07T13:12:43Z","content_type":"text/html","content_length":"7510","record_id":"<urn:uuid:70d9c1a3-ce77-4033-95d9-4f6336df3aa0>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00619.warc.gz"} |
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x^2. The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. The input variable is sometimes referred to as the argument of the function.
The above text is a snippet from Wikipedia: Function (mathematics)
and as such is available under the Creative Commons Attribution/Share-Alike License.
1. What something does or is used for.
2. A professional or official position.
3. An official or social occasion.
4. A relation where one thing is dependent on another for its existence, value, or significance.
5. A relation in which each element of the domain is associated with exactly one element of the codomain.
6. A routine that receives zero or more arguments and may return a result.
7. The physiological activity of an organ or body part.
8. The characteristic behavior of a chemical compound.
9. The role of a social practice in the continued existence of the group.
1. to have a function
2. to carry on a function; to be in action
The above text is a snippet from Wiktionary: function
and as such is available under the Creative Commons Attribution/Share-Alike License.
Need help with a clue?
Try your search in the crossword dictionary! | {"url":"https://crosswordnexus.com/word/FUNCTION","timestamp":"2024-11-07T07:29:42Z","content_type":"application/xhtml+xml","content_length":"11206","record_id":"<urn:uuid:0db724a6-ead4-4901-bfae-ff7a67503435>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00362.warc.gz"} |
E-Journal № 1 2005
0 Aspects of a problem of energy security of Republic Moldova
Authors: G. Duca, V.Postolati, E.Bikova
Abstract: The key positions and methodological aspects of the energy security problem and the system of indicators are described, the state of the power complex is assessed, and recommendations for raising energy security are offered.
Keywords: power complex, energy security, threats, indicators, scale of crisis, threshold values, economic security.
1 The Integrated software and informational system “KROS”
Authors: Gonchariuk, N.V., Makarov S.F., Mihailov A.L., Ltd “Grosmeister”, Moscow, Russian Federation
Abstract: The article describes the functions of the information system "KROS", which consists of 2 subsystems, "BOS" (block of dispatcher's schemes) and "BARS". The "BARS" complex serves to organize a base of calculation circuits of power pools and regions of different levels and sizes. After an equivalent or pooled circuit is constructed, new calculation circuits of two types (search network and equivalence network) are created and used.
Keywords: electrical power system, equivalent circuit construction, calculation circuit, database.
2 Calculation of the optimum thickness of thermo-insulation for collectors of sunlight installations
Authors: Ermuratski V. Institute of Power Engineering of the ASM
Abstract: The paper considers the problem of calculating the optimum thickness of thermo-insulation for solar collectors and heat accumulators. A simplified calculation model and a technique based on estimating the efficiency of investment projects are offered. It is shown that calculations with the simplified model, which does not take financial flows into account, yield overestimated values of the insulation thickness.
Keywords: thermo-insulation, solar collector.
3 Control law design for the fuel consumption system of steam drum boiler taking into consideration the economy of energy resources
Authors: Juravliov A.A., Sit M.L., Poponova O.B., Sit B.M., Institute of Power Engineering of the ASM
Abstract: A cascade control system is examined. The main features of the system are a PI controller in the external control loop, with a functional component of the external-loop error signal introduced, and, in the internal control loop, negative feedback of the error between the prescribed steam flow rate and the measured steam flow rate at the boiler outlet.
Keywords: control system, PI-controller.
4 Design of the control law of the steam drum boiler level taking into consideration energy economy
Authors: Juravliov A.A., Sit M.L., Poponova O.B., Sit B.M. Institute of Power Engineering of the Academy of Sciences of Moldova
Abstract: A cascade system for maintaining the level in the steam boiler drum was investigated. The main features of the system are its variable structure, which changes depending on the load variation relative to its previous value, and the use of a model of the drum level as a function of the feedwater flow rate.
Keywords: cascade control system, steam drum boiler.
5 Decoupling Interconnector
Authors: Kalinin L., S. Chebotari, D.Zaitsev Institute of Power Engineering of the ASM
Abstract: Active power flow in transmission lines can be stabilized using the Interphase Power Controller (IPC). It is a series connected controller consisting of two impedances, one inductive and
one capacitive, subjected to phase shifted voltages. The paper addresses the principal aspects of IPC as power flow stabilizer, explains the basic theory and operating characteristics with the
methodical approach to cost evaluation of the components.
Keywords: active power flow, Interphase Power Controller, power flow stabilizer, methodical approach, cost evaluation of the components.
6 Minimization of dispersive effects in the difference scheme telegraphic equations
Authors: I.Andros, V.Berzan Institute of Power Engineering of the Academy of Sciences of Moldova
Abstract: The paper deals with a difference scheme for the system of telegraph equations, which describes the diffusion of potential and current waves in a long line, with the Earth considered as the return line. Using the first differential approximation, an approximation of the differential equations was obtained in which the influence of the dissipative and dispersive terms is minimized.
Keywords: system of telegraph equations, potential’s and current’s waves diffusion in a long line, differential approximation, minimized influence of the dissipative and dispersing terms.
7 Measurement methods and interpretation algorithms for the determination of the remaining lifetime of the electrical insulation
Authors: F.Engster S.C. ELECTRICA Muntenia Sud S.A , Bucharest, Romania
Abstract: The paper presents a set of on-line and off-line measuring methods for the dielectric parameters of electrical insulation, as well as a method of interpreting the results aimed at determining the occurrence of damage and establishing its speed of evolution. These results finally lead to the determination of the lifetime under certain imposed safety conditions. The interpretation of the measurement results is based on analytical algorithms which also allow calculation of the index of correlation between the real results and the mathematical interpolation. A comparative analysis between different measuring and interpretation methods is performed, and certain events that occurred during the measurements, including their causes, are considered. The analytical methods have been refined during dielectric measurements performed over about 25 years at some 140 turbo and hydro power plants. Finally, a measurement program is proposed which will allow correlation of the on-line and off-line dielectric measurements, thus providing a reliable, high-accuracy technology for estimating the remaining lifetime of the electrical insulation.
Keywords: measuring methods for the dielectric parameters of the electric insulation.
8 Definition of a matrix of the generalized parameters asymmetrical multiphase transmission lines
Authors: Suslov V. Institute of Power Engineering of the ASM, Kishinau, Republic of Moldova
Abstract: A simple algorithm, without the introduction of wave characteristics, for determining the matrix of generalized parameters of asymmetrical multiphase transmission lines is offered. Determination of the parameter matrix is based on the matrix of primary specific parameters of the line and a simple iterative procedure. The number of iterations of the procedure is determined by a prescribed error in satisfying the resulting matrix relation between separate blocks of the determined matrix. This error is closely related to the accuracy of the determined matrix.
Keywords: algorithm of definition of a matrix, transmission lines.
9 The account of sagging of wires at definition of specific potential factors of air High-Voltage Power Transmission Lines
Authors: Suslov V. Institute of Power Engineering of the Academy of Sciences of Moldova
Abstract: An approximate, but more exact than usually accepted, way of accounting for the sag of wires when determining the specific potential factors of overhead high-voltage power transmission lines is shown. The technique for obtaining the analytical expressions is given. For comparison, the traditional expressions for the specific potential factors are also given, and the connection between the proposed and the traditional analytical expressions is shown. The proposed analytical expressions are not difficult to program on a personal computer of any class and, in addition, they allow the error of the traditional expressions to be estimated by determining the specific potential factors in both ways in parallel.
Keywords: account of sagging of wires, specific potential factors air High-Voltage Power Transmission Lines, analytical expressions. | {"url":"https://mail.nufarul.com/en/contents/elektronnyij-zhurnal-n-1-2005","timestamp":"2024-11-14T05:17:04Z","content_type":"text/html","content_length":"20037","record_id":"<urn:uuid:47e80eb8-cd3a-41f7-9a1d-9c9c6525b0ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00225.warc.gz"} |
Alaska Highway Gas Pipeline Project (AHGPP) Database Location Reference System
Hrvoje Lukatela
Paper presented at the 50th Annual Meeting of the American Society of Photogrammetry,
Washington, D.C. 11 - 16 March 1984
A multi-user geo-based information system was implemented as a surveying, geotechnical, biological, cadastral and pipeline design data repository for the Alaska Highway Gas Pipeline Project.
Location reference software was designed as a general-purpose, high precision, global coverage system, and integrated with a commercial pseudorelational database package. Locations were described
by a set of ellipsoid related planar cells, and their internal coordinates, derived by a pseudo-stereographic ellipsoid to plane projection. Input and output location data can be given or
generated in any commonly used plane surveying coordinate system. Retrieval is based on the KP range, alignment sheet, proximity to other items or any area polygon. Database resolution surpasses
the accuracy of the reference line field survey. Processing and storage utilization are highly efficient, an important consideration on a large mainframe computer complex.
A significant amount of work has been done in recent years in the area of the design and implementation of geo-based computerised systems combining high data volumes and large area coverage. While
all such systems share many similarities, an important distinction can be made based on the spatial analysis usually performed on the data. If only the general distribution and correlation of
different entities is of interest, as often is the case in e.g. computerised natural resource inventories, the positional precision requirements are relatively low. Such systems are therefore usually
based on a large coverage plane projection (e.g. UTM), sometimes using a small rectangular grid cell identifier as the only location descriptor. All spatial analysis is carried out by completely (but
safely) ignoring the fact that plane coordinate relationships are at best only approximations of the actual location geometry.
On the other hand, systems in e.g. cadastral, control network inventory, large civil engineering and defence applications require the ability to derive geometric relationships from location
descriptions on the database with the same level of precision as that normally obtained by field measurement techniques and instruments employed by those disciplines. If the area coverage is
continental or even global, differences in plane and true spatial geometry can no longer be ignored. At this point several options are available:
• Implement the database using large coverage plane projection coordinates, and selectively calculate and apply corrections to the derived geometric data.
• Implement the database using only the ellipsoid coordinates, and derive directly all geometric data required. No corrections will be necessary, but the geometry processing, data volume and
indexing scheme might require more computer resources than available or justifiable for a given application.
• Implement the database with a combination of ellipsoid and plane location descriptors, so that different tasks are performed using different mechanisms, with the goal of overall optimization in
the usage of computer resources in data storage, searching and geometronical processing.
It is interesting to note that the first of these options describes quite well the approach used for location description and processing in manual (pre-computer) geo-based systems. The trade-off
between the coverage and precision in most such systems (UTM 1958 is a good example) is resolved by adopting (by current standards) extremely high discrepancy tolerance. In consequence, corrections
have to be calculated and applied much more often than anticipated when the projection was developed and the system implemented. Regardless of which of the above options are taken, some data
partitioning and indexing will be required to make location based retrievals fast and efficient. How this is best done depends to a large degree on the type of user interface functions, volume,
distribution and composition of the data, as well as on the performance characteristics of the computing hardware and software devices available.
Throughout this paper, the term "location reference system" is used to identify a set of data items and software components which make possible location based retrieval of information, and support
required geometronical processing.
Location Reference System Architecture
The major functional requiremets which affected the design of this location reference system were:
• The application system will be implemented on a mainframe computer, using as much as possible already available communication network and database software. Effective integration in the database
package must be possible.
• Location data retrieval will be based on: pipeline section; alignment sheet; proximity to reference line or other data items; or any other conveniently defined area polygon. Location elements
should be stored with at least the same precision as that of the combined field traversing measurements and adjustment.
• Since an on-line database access is required, data storage volume and access cost should be minimized. Optimization should be biased towards the retrieval rather than the initial load of data.
• UTM coordinates will be given as location description for field surveyed items, and must be generated for some output reports and medium scale plotting. It should be possible to expand this
requirement with other commonly used plane survey coordinate systems.
• Reference line distance accumulation (Kilometer-posts, KP) can be defined either along the centre-line or the R-O-W boundary line, slope or horizontal. KP/offset system should be kept as a
simple, low precision way to reference data. In particular, KP values produced by a series of pipeline traverses and marked in the field should be recognized by the system.
• Programming of the location reference system routines must be kept as simple as possible in order to meet the pipeline design database implementation schedule.
The system is based on a hierarchical set of location descriptors: cell identifiers, ellipsoid coordinates, pseudo -stereographic ellipsoid to plane projection, common plane survey mapping system,
and KP/offset pipeline references. Different sets of location elements are by definition only different numerical representations of one positional geometry, and can be dynamically transformed from
one format into another. Redundant location elements are checked for consistency upon entry to the system, and then discarded, to be re-constituted when required. The system uses spatial, ellipsoid
related set of the data partitioning cells. Each cell record contains a list of its neighbours. Two substantially different cell arrangement schemes are possible:
• System assigned (and dynamically re-assigned) pattern of cells. Two sets of criteria are used by the system in this assignment process: optimum projection distortion tolerance for balance between
plane and ellipsoid geometry calculations, and optimum cell population size for data retrieval efficiency.
• User assigned, orderly cell pattern. Optimization decisions mentioned above will in this case have to be made a priori and will remain frozen in a given implementation.
The cell identifier and coordinates are the only location descriptors that need to be retained on the system. They are used for all local spatial manipulations, including data searching. Global
problems are solved using ellipsoid coordinates. Location based data searching is performed by first finding the cell and solving the proximity or similar conditions for the population of that cell.
If further data is required, a list of its neighbours must be traversed. A cell is rejected if no part of it can contribute to the search. Otherwise, the coordinates of its population are dynamically
transformed into the local coordinate system of the cell in which the exercise was initiated, and search criteria are evaluated. The system is therefore capable of performing both quick and simple
manipulations of data using plane coordinates, as well as high-precision ellipsoid geometry calculations required to match or use directly field measured values.
Ellipsoid Coordinates and Geometry
A discussion of discrepancies between a digital model using an ellipsoid of rotation as a reference surface and the true location of the objects on the surface of the Earth is out of the scope of
this paper. We will only make the assumption (valid for any large and complex civil engineering system) that the derived spatial values, based on rigid ellipsoid geometry solutions, satisfy the
precision reqiurements of all of the database geometronical application disciplines.
Ellipsoid geometry has been a subject of intensive study with the purpose of finding ways to simplify manual calculations by accepting assumptions and derivation procedures yielding only the minimum
of required accuracy. It is safe to say that practical considerations driving this research have been transformed considerably with the introduction of automatic computing devices. The benefit
derived by reducing the number of elementary arithmetic operations has decreased substantially, while the cost of increase in calculational accuracy is no longer proportional (or extraproportional)
to the number of significant digits carried - it is increasing in discrete, computing device dependent, steps. Since the loss of accuracy almost always increases with the "size" of the problem (e.g.
length of intersecting lines), different classical solutions are indicated depending on some external, individual problem (data!) dependent criteria. Such a decision requires insight difficult to
replicate in a computerised procedure. For this, "self-adjusting" algorithms are necessary, capable of producing maximum accuracy that can be delivered by the internal numerical resolution, yet
simple and efficient when only short lines are involved. Iterative algorithms, (as opposed to expansions to required accuracy) are usually best capable of achieving this quality.
The methodology employed for construction of these iterative ellipsoid geometry algorithms is illustrated by example - a solution of an ellipsoid intersection problem: given are two points (P1, P2),
with known positions; and their respective tangential plane azimuths to a third point (Pn, point of intersection). Ellipsoid coordinates of the Pn are to be determined. A few terms will be defined to
simplify the description of the procedure:
Line of sight
is a given point tangential plane projection of the vector from a given to the unknown point.
Intersection plane
is a plane containing line of sight and ellipsoid normal in a given point.
Intersection line
is a straight line common to both intersection planes.
Para-centre
is a point (on any straight line), closest to the coordinate origin.
Para-radius
is a distance from the para-centre.
The procedure will first check for valid intersection data. Two conditions must be satisfied: first, intersection planes must not be parallel or coincident, and second, first of the two intersection
points reached by moving along the intersection ellipse in the direction of the azimuth starting at each given point must be the same. Checking those conditions is simplified by the fact that an
azimuth is internally kept as two direction cosines in its respective tangential plane, which is in turn stated by the ellipsoid coordinates (normal!) of a given point. Both intersection plane normal
equation coefficients can be found by a simple rotation of the line of sight vectors and their free terms, by using given point coordinates.
The equations of two intersecting planes and of the ellipsoid of rotation form a system whose roots are the cartesian coordinates of the two points of intersection. The solution is possible by any
number of numerical methods, and transforming the result to the ellipsoid normal form presents no problem. Even simpler, more direct solution is possible, by finding an approximate normal through Pn,
and iterating its components using the meridian ellipse productions until ellipsoid geometry is satisfied, observing in the process intersection conditions as well as the elevation of Pn. The
iteration steps are as follows:
1. Obtain the intersection line in a normalized parametric equation form and find the intersection line para-centre coordinates.
2. Assume the normal to be collinear with the intersection line, and find the normal para-radius using meridian ellipse geometry productions.
3. Set the initial intersection line parameter equal to the para-radius obtained in the previous step. (Intersection point elevation can be taken into account in this and all subsequent steps
involving para-radius.)
4. Start next iteration step by finding a new position of the intersection point using the intersection line para-centre and parametric equations.
5. Recalculate normal components using the provisional coordinates of the intersection point. Check for change since the previous step and exit if none.
6. Use normal direction cosines to find new normal para-radius. Compare it with previous step and update the intersection line para-radius by a corresponding amount. Repeat from step (4).
Note that the meridian ellipse productions employed are closed, and the precision depends only on the criterion used in step (5). The iteration is so fast, even on lines only one order of magnitude
below the ellipsoid semi-axes, that a "fuzziness threshold" of the floating point representation can be used as this criterion.
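As an illustration of step (1), the sketch below computes the intersection line of the two intersection planes together with its para-centre. It is a modern re-statement in Python, not the original routine, and the iterative adjustment of steps (2) to (6) is omitted.

import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    # Planes are given in normal form, n . p = d. Returns the unit direction of
    # the intersection line and its para-centre, i.e. the point on the line
    # closest to the coordinate origin.
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    u = np.cross(n1, n2)
    length = np.linalg.norm(u)
    if length == 0.0:
        raise ValueError("intersection planes are parallel or coincident")
    u = u / length
    # The para-centre lies in the span of the two plane normals: p0 = a*n1 + b*n2.
    g = np.array([[n1 @ n1, n1 @ n2],
                  [n1 @ n2, n2 @ n2]])
    a, b = np.linalg.solve(g, np.array([d1, d2], float))
    return u, a * n1 + b * n2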
The most common method of describing the location on (or slightly above or below) the ellipsoid surface is a numerical description of a direction of a normal through the location point. This can be
done by specifying angular values relative to the equator and reference meridian plane (latitude and longitude), or using a vector component (direction cosine) form. The main advantage of the
traditional form is its direct correspondence to navigational and similar observations, but it is extremly ill-suited for computerised calculations. With an angular value as a source item, most
geometry calculations will require expensive, multiple evaluation of trigonometric functions. If direction cosines are used, this will not be the case, and the formulae of ellipsoid geometry will
become symmetrical and much easier for programming. In addition, any numerical collapse will be directly related to the definition of the problem, and never to the method used. With 64 bit floating
point arithmetic and data variables, a comfortable margin of several orders of magnitude can be maintained over the precision of any field measurements. While one more element must be kept for
direction cosines (three plus elevation, versus 2+1 for latitude and longitude) this provides a simple check for a valid normal element data: if the vector is normalized after each manipulation, the
sum of the squares of the components will equal one. The most important consideration remains the type of calculational algorithms utilized: conventional ellipsoid geometry derivations typically
employ angular values and trigonometric functions, while direction cosines are used by closed iterative procedures and ellipsoid to plane geometric projection routines.
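A small sketch of the two representations discussed above (geodetic latitude/longitude versus direction cosines of the normal); the re-normalization in the inverse routine doubles as the validity check mentioned in the text.

import numpy as np

def to_direction_cosines(lat_deg, lon_deg):
    # Direction cosines of the ellipsoid normal for a geodetic latitude/longitude.
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_latitude_longitude(n):
    # Inverse transformation; the sum of squares of a valid normal is one, so
    # re-normalizing here is both a check and a clean-up.
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return np.degrees(np.arcsin(n[2])), np.degrees(np.arctan2(n[1], n[0]))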
Pseudo-stereographic Ellipsoid to Plane Projection
As mentioned before, plane coordinates used by the system may require frequent and high volume transformations to the ellipsoid and back. Problems associated with the transfer of positions from
ellipsoid to the plane and vice versa can vary from trivial to fairly complex. This complexity depends on a number of factors: the definition of the mapping function; the format in which the location
is given for both ellipsoid and plane systems; the precision to be maintained in a single transfer; and the acceptable numerical deterioration in a case of multiple transfers.
The mapping selected for this global (ellipsoid) to cell (plane) coordinate transformation can best be described as an ellipsoid distortion of a spherical stereographic perspective projection. It
provides extremely fast and and precise (including multiple transfer) mapping algorithms. Given the population related restrictions on the cell size, projection distortions are likely to be
negligible for all local geometry problems. Ellipsoid coordinates of the cell centre point are at the same time coefficients of a normal form of the projection plane equation. In order to avoid the
necessity of the "sea level" reduction of the field measured distances, the average elevation of the terrain covered by the cell can be incorporated into the free term of the projection plane
equation. The projection centre is located on the centre-point normal, at twice the depth below the surface of the ellipsoid of the geometric mean of the radii of curvature along the meridian and
prime vertical.
Ellipsoid coordinates of the centre -point, in addition to the elevation (if used as described above), are the only values required as the cell dependent projection parameters. Depending on the
relative efficiency of data retrieval versus recalculation, and whether most transformations are performed individually or in a series, coordinates of the projection centre, para-radius of the
projection centre normal and the free term of the centre-point prime vertical equation can be calculated when a cell is generated, and re-used in all ellipsoid to plane and reverse transformations
involving the cell.
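The sketch below is one plausible reading of the projection just described, treating the surface locally as a sphere whose radius is the geometric mean of the two radii of curvature at the centre point; the production formulation may have differed in detail.

import numpy as np

def make_cell_projection(p0, n0, mean_radius):
    # p0: cartesian coordinates of the cell centre point; n0: unit normal there
    # (direction cosines); mean_radius: geometric mean of the radii of curvature.
    p0, n0 = np.asarray(p0, float), np.asarray(n0, float)
    q = p0 - 2.0 * mean_radius * n0           # perspective (projection) centre
    east = np.cross([0.0, 0.0, 1.0], n0)      # local axes in the projection plane
    east = east / np.linalg.norm(east)        # (degenerate exactly at the poles)
    north = np.cross(n0, east)

    def to_plane(p):
        # Intersect the ray from the projection centre through p with the
        # tangent plane at p0, then express the result in the local axes.
        p = np.asarray(p, float)
        t = (n0 @ (p0 - q)) / (n0 @ (p - q))
        d = q + t * (p - q) - p0
        return np.array([east @ d, north @ d])

    return to_plane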
A scaling effect takes place in ellipsoid to plane transformations: ceteris paribus, fewer digits are required to describe the location than if the ellipsoid coordinates are used. Specifically, if
cell diameters are in the order of up to ten kilometers, a 32 bit floating point representation will be in the order of millimeters. While somewhat below the precision of the 64 bit ellipsoid
coordinates, discrepancies will not propagate across the cell boundaries. This is an important consideration for implementations on systems with a significant difference in efficiency between "short"
and "long" floating point arithmetic.
Universal Transverse Mercator Projection (UTM 1958)
UTM coordinates are, because of their widespread use and regulatory support, frequently used for interchange of location data. Unfortunately, several properties make them particularly unsuitable for
computer system usage (most of the following comments would apply to all large coverage plane survey coordinate systems):
• Because of the high maximum distortion, only the most crude geometry processing can be performed using simple plane coordinate manipulations.
• Large systems will likely extend over more than one UTM zone. If that happens, the advantage of a single, continuous set of coordinates is lost. Transformation function domain is not global.
• "Long" floating point is normally required to store and manipulate coordinates. This is a special disadvantage where multiple coordinates are required to describe an entity.
• The transfer of coordinates between the UTM plane and ellipsoid is a rather involved and inefficient calculation due to the numerical nature of mapping algorithms (Taylor's expansion of
Cauchy-Riemann differential equations). Most of the advantages of a conformal projection are lost when field measurements consist of a balanced set of directions and distances.
To provide the ability for external data interchange, the system must include a set of transformation routines, guaranteeing sub-millimeter precision even on the edges of a 6 degree zone in
mid-latitudes. UTM coordinates presented to the system are immediately transformed into ellipsoid format. If an output request is made for numerical UTM coordinates, reverse transformation takes
place. Direct orthogonal transformation of cell coordinates into UTM grid of the output page is used for plotting, as it is well within the resolution of even large scale plots.
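For comparison, a modern equivalent of such transformation routines can be written in a few lines with the pyproj package (obviously not available to the original system); the WGS84-based UTM zone 8N used here is an assumption for illustration, as the project itself would have worked on a different datum.

from pyproj import Transformer

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32608", always_xy=True)  # lon/lat -> UTM zone 8N
to_geo = Transformer.from_crs("EPSG:32608", "EPSG:4326", always_xy=True)  # and back

easting, northing = to_utm.transform(-135.05, 60.72)   # a point in the Whitehorse area
lon, lat = to_geo.transform(easting, northing)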
Linear (Kilometer-post) Reference
KP/offset is an often used method of describing the location on any "linear" engineering system. Cumulative distance and offset values are relative to a convenient reference line, usually centre line
or R-O-W boundary line. Distances are sometimes accumulated and marked in the field ("original" KP) as the line is surveyed. There are some problems inherent with the KP/offset usage: the most
significant resulting from the fact that whatever is selected as the reference line, it can, and probably will, change its length in the design and the construction phases of the project. Updating
these values presents no problem, but references to "original" field values can become invalid or ambiguous.
All cells that cover the reference line contain an ordered, doubly linked list of points representing a section of the reference line within the cell. Each item in the list contains point
identification, cell coordinates, and original, field KP label. Each cell also contains true KP values at its boundary. Mapping from cell coordinates to KP/offset system is defined as an orthogonal
projection to the closest reference line segment. List of local coordinates of the reference line segments can therefore be considered as a set of "mapping parameters" for cell to KP/offset and
reverse transformations in the same manner as the coordinates of the centre-point are for cell to global transformation. Inquiry references using "original" values only are resolved by a
bi-directional search along the line, from a point with the same "true" KP, until a segment with appropriate "original" KP range is found.
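A minimal sketch of this mapping (illustrative only; the production record layout and the handling of end-of-line cases were certainly more elaborate): the point is orthogonally projected onto the nearest reference-line segment, the KP is interpolated along that segment, and the offset is signed by the side of the line.

import numpy as np

def kp_offset(point, line_xy, line_kp):
    # line_xy: ordered (N, 2) cell coordinates of the reference-line points;
    # line_kp: true cumulative distance (KP) at each of those points.
    p = np.asarray(point, float)
    line_xy = np.asarray(line_xy, float)
    best = None
    for a, b, kp_a in zip(line_xy[:-1], line_xy[1:], line_kp[:-1]):
        ab = b - a
        t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
        foot = a + t * ab
        dist = np.linalg.norm(p - foot)
        if best is None or dist < best[0]:
            side = np.sign(ab[0] * (p - a)[1] - ab[1] * (p - a)[0])  # left/right of the line
            best = (dist, kp_a + t * np.linalg.norm(ab), side)
    dist, kp, side = best
    return kp, side * dist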
AHGPP Implementation
The Alaska Highway Gas Pipeline Project is a natural gas transportation system designed for the primary purpose of transporting Alaskan natural gas from Prudhoe Bay on the North Slope of Alaska,
south across western Canada, to markets in California and the mid-western United States.The Canadian portion of the project is comprised of 3285 km of pipeline. Due to the complexity of ambiance,
primarily in the Yukon region of the AHGPP, large quantities of engineering, geotechnical, regulatory, biological, cadastral, pipeline design and other site-specific information had to be assembled
and continuously referred to in the process of designing the pipeline. A computerised database system was implemented to provide a repository for the most volatile and widely required sub-set of that information.
Implementation Environment - Hardware
The system was implemented on a large corporate multiple mainframe computer complex, IBM 360/370 architecture, with buffered character CRT display terminals, remote (batch) printers, and an
electrostatic plotter at the central location. Implementation of the system on a dedicated CAD computer was considered, but two problems made mainframe implementation more attractive: capital cost
associated with the acquisition of dedicated hardware, and inability of existing CAD systems to cost effectively serve large number of simultaneous users, including those at remote locations. On-line
mainframe access from remote locations was provided via CTR TTY terminals, 1200 baud dial-up modems and an async/SNA hardware protocol converter. The same lines could be used with portable hardcopy
terminals to retrieve batch generated reports. Plot output was available only at the central site.
Further development, if and when initiated, will be based on a remote microcomputer. All graphical output would be generated on the workstation, and data maintenance for items other than global
geometry would be done there. Mainframe functions would be reduced to the database retention and block update control processing, probably on a cell/discipline/user oriented authorization scheme.
Implementation Environment - Software
The system is implemented using ADABAS database management system. It is a multiple inverted file system, intended primarily for implementation of administrative and commercial systems. Like most
similar products, it suffers from one serious impediment when used for comprehensive financial or engineering models: inability to deal with the floating point numeric data items. The problem was
solved either by storing the value in fixed decimal format or by storing a floating point value as a character (byte) variable, and leaving its manipulation to an external programme. There was one
more reason for storing cell coordinates in the decimal format. One of the ADABAS support software components is a dedicated on-line interpreter and compiler. While the language was particularly
ill-suited for any kind of list processing or modelling programming (poor data structure handling capabilities, no workspace management functions, etc.) it provided very effective data retrieval and
index maintenance functions, and was therefore used for all on-line database interaction programmes. All geometry required for data retrieval and proximity solutions could therefore be handled
without incurring the overhead of run-time external module invocation.
Since the system must operate in a multiple project environment, a project identifier is appended to each record on file and all location processing is normally carried out only within the project.
Alignment sheets are an already established way of location partitioning of the pipeline data, and their size (2x7 Km) provides an excellent compromise between total number of cells on the system and
individual cell population size. Data is further identified by cell coordinates (multiple in case of line or area items), and KP point or range. Some of these items are incorporated in the database
index structures. Programmes are available to process location descriptors upon entry of an item to the database, manipulate data in the process of pipeline design where locations are changed, or
retrieve data through on-line interactions or printed or plotted hardcopy output.
Each record representing a location specific entity on the database contains the following data items:
• Project identifier
• Entity class (record type) identifier
• Cell identifier
• True cumulative distance along the reference line (True KP)
• Cell coordinates (multiple for a linear or an area item)
In addition, some records include parameter values which are used by programmes whenever location data is manipulated:
Project record:
• Type and parameters of external mapping projection (including ellipsoid values) used on the project (e.g. UTM, Lambert, etc)
• Reference line entity and method of KP distance accumulation
Cell (Alignment sheet) record:
• Approximate average elevation
• True KP of up-stream and down-stream sheet matchline
• Coordinates of logical (non -overlapping) cell boundaries
• Ellipsoid coordinates of the cell coordinate system origin
• Ellipsoid to plane transformation parameters
• List of pointers to neighbouring cells
Reference line point record:
• Elevation
• Internal pointers to up-stream and down-stream point record
• Field marked cumulative distance (Original KP)
Three inverted lists (indexes) comprising location description data are maintained on the system. Arguments of these lists are concatenated location description items, as follows:
Cell index:
• Project identifier
• Entity class identifier
• Cell identifier
Start KP index:
• Project identifier
• Entity class identifier
• True start KP
End KP index:
• Project identifier
• Entity class identifier
• True end KP
The last index structure contains only linear and area items which occur as contiguous segments along the line. It makes possible bi-directional traversing of item records. Unlike the neighbouring
cell pointer list in the cell record, this index structure is maintained automatically by the inverted list update mechanism. (The conceptual design of both the pipeline database and the location
reference components remained independent of the particular package, and any multiple index data access method or file management system could have been used with similar effectiveness.)
The main programming component of the location reference system is a library of PL1 subroutines, performing elementary tasks, usually on single problem data (e.g. conversion of coordinates between
ellipsoid and UTM, ellipsoid and cell, and cell and KP/offset systems; closed iterative solutions of common ellipsoid geometry propositions; planar geometry problems and transformations and data
Plotting is implemented by batch executing database traverse programmes, which produce high-level, device-independent flat plot files. These are subsequently reprocessed by a graphical
post-processor generating electrostatic plotter output. At this point, one of the two types of files containing cell transformation parameters is used. The first contains a copy of cell centre
points, and provides the ability to transform cell coordinates on plot file to a UTM defined output page. Second contains cell sets of reference line points and their page coordinates digitized from
alignment sheets based (where these are used) on non-ortho mosaics. In this case, a separate orthogonal transformation will be calculated for each reference line segment, and used for all items in
its KP range.
The first step in establishing new project data is the entry of skeleton cell records and the location of reference line points. Cumulative distances along the reference line and cell projection parameters are next calculated and stored on the database. Any other project data can now be entered, described by location either using the external mapping system, KP/offset references, or a definition of proximity geometry relative to items already on the system. As each item is entered, inverted lists are maintained automatically by the database mechanism, making the item instantly part of inquiry traverses of project data. In case of a change of a reference line (pipeline re-routing), one or more cells are locked out from the active system, and updated positions of reference line stations are entered. The total population of the cell is then traversed, and its KP references are re-calculated and updated. The rest of the project is then adjusted by adding or subtracting the required amount from all KP references down-stream from the revision area.
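A hypothetical sketch (not part of the PL1 subroutine library described above; station coordinates, units, and function names are assumptions made for illustration) of how true cumulative KP and a KP/offset reference could be computed from reference line points:

```python
import math

def cumulative_kp(stations):
    """stations: list of (x, y) cell coordinates in metres, ordered up-stream to
    down-stream. Returns the true cumulative distance (KP, in km) at each station."""
    kp = [0.0]
    for (x0, y0), (x1, y1) in zip(stations, stations[1:]):
        kp.append(kp[-1] + math.hypot(x1 - x0, y1 - y0) / 1000.0)
    return kp

def kp_offset(stations, kp, point):
    """Project a point onto the reference line and return (KP, signed offset)
    of the nearest segment, derived purely from planar cell coordinates."""
    px, py = point
    best = None
    for i, ((x0, y0), (x1, y1)) in enumerate(zip(stations, stations[1:])):
        dx, dy = x1 - x0, y1 - y0
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0.0:
            continue                                  # skip degenerate (duplicate) stations
        t = max(0.0, min(1.0, ((px - x0) * dx + (py - y0) * dy) / seg_len2))
        cx, cy = x0 + t * dx, y0 + t * dy             # closest point on this segment
        dist = math.hypot(px - cx, py - cy)
        if best is None or dist < best[0]:
            side = math.copysign(1.0, dx * (py - y0) - dy * (px - x0))
            best = (dist, kp[i] + t * math.sqrt(seg_len2) / 1000.0, side * dist)
    return best[1], best[2]
```

After a re-routing, re-running `cumulative_kp` over the updated stations of the affected cell and shifting all down-stream KP values by the resulting length difference mirrors the adjustment procedure described above.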
Major software components must be developed and integrated into generally available database packages if they are to be used with success for geo-based applications. Once this is achieved,
significant benefits can be derived from implementation on a mainframe complex: simultaneous on-line service of an extensive user base; large data volume capability; economical remote access and capacity expansion; and utilization of the existing corporate capital investment and of the software engineering and construction skill base.
Bomford, G. 1975, ed. Geodesy, London, Oxford University Press
Broughton, C. 1980, COSINE, a Land-Surveying Database Application: Paper presented to the 1980 S2K User Group meeting, Toronto
Maggio, R.C., Baker, R.D., Harris, M.K. 1983, A Geographic Data Base for Texas Pecan: Photogrammetric Engineering and Remote Sensing, Vol 49, pp.47-52
Radlinski, W.A. 1977, Modern Land Data Systems - A National Objective: Photogrammetric Engineering and Remote Sensing, Vol 43, pp.887-890
Tobey, W.M. 1928, Geodesy, Geodetic Survey of Canada, Publication No. 11
Wehde, M. 1982, Grid Cell Size in Relation to Errors in Maps and Inventories Produced by Computerized Map Processing: Photogrammetric Engineering and Remote Sensing, Vol 48, pp.1289-1298
---, 1982 rev., The Alaska Highway Gas Pipeline Project - Project Overview, Foothills Pipe Lines (Yukon) Ltd., Calgary | {"url":"https://www.lukatela.com/hrvoje/papers/ahgpp.html","timestamp":"2024-11-06T02:34:46Z","content_type":"text/html","content_length":"33839","record_id":"<urn:uuid:58fc3a5a-acd9-40ae-b7c8-5d74572446b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00384.warc.gz"} |
The Tetragonal Crystal System (Part Seven)
(Alternative name : Quadratic System)
back to Part One
The Tetragonal-disphenoidic Class
(= Sphenoidic Tetartohedric) 4*
When two hemihedrics are applied to a holohedric Form we get a so called tetartohedric Form.
This is the case with the Forms of this Class.
We obtain these Forms when we subject each holohedric Form to first a sphenoidic hemihedric (= suppression of the equatorial mirror plane and of the two vertical mirror planes that go through the
horizontal crystallographic axes) and then subject the result to trapezohedric hemihedric (= suppression of all mirror planes).
The symmetry content of this Class is :
• One vertical 4-fold roto-inversion axis (4*).
Derivation of the Forms of the Tetragonal-disphenoidic Class
The Forms of this Class will be found when we subject the holohedric Forms to the two mentioned hemihedrics.
Recall that the holohedric Forms were the following :
Ditetragonal Bipyramid
Ditetragonal Prism
Basic Pinacoid
Applying the sphenoidic hemihedric to these Forms resulted in the following Forms (See Part Two) :
The holohedric Protopyramid gave the sphenoidic hemihedric sphenoid (also called disphenoid).
The holohedric Deuteropyramid gave the sphenoidic hemihedric type II tetragonal bipyramid.
The holohedric Ditetragonal Pyramid gave the sphenoidic hemihedric tetragonal scalenohedron.
The holohedric Protoprism gave the sphenoidic hemihedric type I tetragonal prism.
The holohedric Deuteroprism gave the sphenoidic hemihedric type II tetragonal prism.
The holohedric Ditetragonal Prism gave the sphenoidic hemihedric ditetragonal prism.
The holohedric Basic Pinacoid gave the sphenoidic hemihedric basic pinacoid.
These sphenoidic Forms will now be subjected to the trapezohedric hemihedric. The resulting Forms are then the Forms of this Class (The Tetragonal-disphenoidic Class).
The sphenoidic sphenoid (disphenoid) will then lose its remaining mirror planes, namely the ones that bisect the angles between the horizontal crystallographic axes. The result is again a type I sphenoid because the mirror planes to be suppressed are perpendicular to the faces. See Figure 1.
Figure 1. From the sphenoidic Type I Sphenoid is derived a Tetartohedric Type I Sphenoid.
The sphenoidic hemihedric type II tetragonal bipyramid does change its shape when the second hemihedric (the trapezohedric hemihedric) is applied to it. It becomes a type II tetragonal sphenoid (a deuterosphenoid). See Figure 2.
Figure 2. Applying the trapezohedric hemihedric to the sphenoidic hemihedric Type II Tetragonal Bipyramid does lead to a change in external shape. Its vertical mirror planes -- one is indicated by A B C -- disappear, as indicated by the coloring of the right image; this causes the change in shape, as shown in the next Figure.
The actual construction of the resulting type II sphenoid (from the right image of Figure 2) is given in the Figures 2a and 2b.
Figure 2a. Construction of a Type II Sphenoid (Deuterosphenoid) from the Type II Tetragonal Bipyramid, by suppressing the yellow faces and extending the brown faces till they meet.
Figure 2b. The resulting Sphenoid (the other Sphenoid can be derived from the yellow faces of the tetragonal bipyramid).
The sphenoidic hemihedric tetragonal scalenohedron yields a tritosphenoid (= type III sphenoid) by virtue of losing its mirror planes. See Figure 3.
Figure 3. The Tetragonal Scalenohedron becomes a Tritosphenoid when it is subjected to trapezohedric hemihedric.
The sphenoidic hemihedric type I tetragonal prism, gives, when subjected to the trapezohedric hemihedric, a tetartohedric type I tetragonal prism. See Figure 4.
Figure 4. The white 'faces' of the developing sphenoidic Protoprism (1) are suppressed while its brown faces extend till they meet, recovering the original faces of the protoprism (2), but resulting
in a prism with the symmetry indicated in 1, the sphenoidic Type I Tetragonal Prism. Next (in applying the trapezohedric hemihedric to it) the remaining vertical mirror planes are removed (3), and,
because of that, the white 'faces' are removed while the brown faces are allowed to extend themselves till they meet, resulting in a tetartohedric Type I Tetragonal Prism (4). The same prism is
obtained when the brown 'faces' are removed while the white 'faces' are allowed to extend themselves till they meet.
The sphenoidic hemihedric type II tetragonal prism does not change its external shape when subjected to the trapezohedric hemihedric. We obtain the tetartohedric type II tetragonal prism. See Figure 5.
Figure 5. The Deuteroprism-undergone-sphenoidic-hemihedric is subjected to trapezohedric hemihedric which means that it looses its vertical mirror planes lying between the axial planes (i.e. the
vertical planes each of which containing a horizontal crystallographic axis). We imagine the white 'faces' to disappear while the brown faces extend themselves till they meet, recovering the
tetragonal prism but one with lower symmetry, the Tetartohedric Type II Prism.
The sphenoidic hemihedric ditetragonal prism becomes a tritoprism ( = type III tetragonal prism). See Figure 6.
Figure 6. The sphenoidic hemihedric Ditetragonal Prism loses the mirror planes that go between the horizontal crystallographic axes. Two possible tetartohedral tetragonal Tritoprisms result, one
from the (four) red faces, and one from the (four) grey faces.
The sphenoidic hemihedric basic pinacoid remains a basic pinacoid. See Figure 7.
Figure 7. Derivation of the tetartohedric Basic Pinacoid from the sphenoidic hemihedric Basic Pinacoid. By suppressing the 'faces' of one color and letting the 'faces' of the other color extend, in the right image, we get this Pinacoid. It consists of two parallel horizontal faces. As the figure shows, these faces are not mirror reflections of each other.
This concludes our derivation of all the Forms of the Tetragonal-disphenoidic Crystal Class by the merohedric approach.
These Forms can engage in combinations with each other, and thus it is possible that several such Forms appear in one and the same crystal.
We will now derive those same Forms by subjecting the basic faces (compatible with the Tetragonal Crystal System) one by one to the symmetry operations of the present Class (the
Tetragonal-disphenoidic Crystal Class).
Recall that the basic faces were the following :
• a : a : c
• a : ~a : c
• a : na : mc
• a : a : ~c
• a : ~a : ~c
• a : na : ~c
• ~a : ~a : c
The only symmetry element of this Class is the 4-fold roto-inversion axis (4*).
Because the Class does not possess any mirror plane, nor any 2-fold rotation axis, the motifs, which figure in the stereographic projections, are themselves NOT symmetrical, i.e. each motif does not itself possess any mirror plane. We could represent such a motif as a comma. In real crystals the motifs are faces, and these are not each in themselves symmetrical, which is detectable by means of the shape and orientation of etch pits on their surface, or by means of other physical features.
When indeed the motifs (faces) are not themselves symmetrical, the actions of the 4-fold roto-inversion axis (4*) do not imply any other symmetries, thus no mirror planes or 2-fold rotation axes
appear. As the only available symmetry element in this Class the 4-fold roto-inversion axis can be depicted as in the next Figure, in which the asymmetry of the motifs is indicated.
Figure 8. The four-fold roto-inversion axis as the only symmetry element of the Tetragonal-disphenoidic Crystal Class.
The action of this axis is a rotation by 90° about the axis followed by an inversion through a point on that axis, but recall that its action can also (and totally equivalently) be described by a rotation of 90° about the axis followed by a reflection with respect to a plane perpendicular to that axis.
The stereogram of the symmetry elements of the present Class is :
Figure 9. Stereographic projection of the symmetry elements and of all the faces of the general Form. Red dots upper faces, small red circles lower faces. The two straight dashed lines do not
represent symmetry elements but represent the two horizontal crystallographic axes.
The face a : a : c is the unit face of the Tetragonal Crystal System. When we subject this face to the actions of the 4-fold roto-inversion axis (4*), a type I sphenoid (also called a disphenoid) (Figure 1, right image) is finally generated. We develop its stereographic projection in four consecutive steps: we start with (the position of) the face a : a : c, then we subject it to the action of the 4* axis, which means that we rotate it 90° clockwise about the main crystallographic axis and then invert it through the origin of the system of crystallographic axes. On the result we apply the action of that axis again, and on the lastly obtained result we apply it once more. We have then generated the stereogram of the Form: starting from the initial face, we construct a configuration of faces that complies with the symmetry elements of the present Class, which means that the configuration (as a whole) possesses a 4* axis and no other symmetry element whatsoever.
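To make the stepwise action of the 4* axis concrete, the following short Python sketch (an illustration added here; the rotation sense and the use of the pole (1, 1, 1) to stand in for the face a : a : c are assumptions) generates the four faces of the Type I Sphenoid by repeated application of the roto-inversion:

```python
import numpy as np

def roto_inversion_4bar(v):
    """Rotate 90 degrees about the vertical (c) axis, then invert through the origin."""
    rot = np.array([[0, -1, 0],
                    [1,  0, 0],
                    [0,  0, 1]], dtype=float)
    return -(rot @ v)

face = np.array([1.0, 1.0, 1.0])      # stands in for the face a : a : c
faces = [face]
for _ in range(3):                     # three further applications close the orbit
    faces.append(roto_inversion_4bar(faces[-1]))

for f in faces:
    print(f)   # (1,1,1), (1,-1,-1), (-1,-1,1), (-1,1,-1): two upper and two lower faces
```

The resulting orbit consists of two upper and two lower faces, which is exactly the face configuration of a sphenoid.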
This same stepwise procedure we will also deploy in the generation of the stereogram of all the other Forms of this Class.
Figure 10. Developing the stereogram of the Tetartohedric Type I Tetragonal Sphenoid from the face a : a : c.
Red dots are upper faces, small red circles are lower faces. The dashed straight lines do not represent symmetry elements. The horizontal and vertical (as seen in the Figure) dashed lines represent
the two horizontal crystallographic axes, while the diagonal ones (as seen in the Figure) are just visual aids.
The face a : ~a : c behaves as follows under the operation of the 4* axis, generating a tetartohedric deuterosphenoid (= tetartohedric type II sphenoid) :
Figure 11. Developing the stereogram of the Tetartohedric Type II Tetragonal Sphenoid from the face a : ~a : c.
The face a : na : mc yields a tritosphenoid (= type III tetragonal sphenoid) :
Figure 12. Developing the stereogram of the tetartohedric Tetragonal Tritosphenoid from the face a : na : mc.
The face a : a : ~c yields a tetartohedric type I tetragonal prism :
Figure 13. Developing the stereogram of the tetartohedric Type I Tetragonal Prism from the face a : a : ~c.
Because the faces are vertical there is no difference between an upper face (red dot) and a lower face (small red circle). So we can symbolize the faces of the prism just as red dots, like the next
Figure shows.
Figure 14. Stereogram of the tetartohedric Type I Tetragonal Prism.
The face a : ~a : ~c yields a tetartohedric type II tetragonal prism :
Figure 15. Developing the stereogram of the tetartohedric Type II Tetragonal Prism from the face a : ~a : ~c .
Also in this case there is no difference between upper and lower faces. So the final stereogram is as is depicted in Figure 16.
Figure 16. Stereogram of the Tetartohedric Type II Tetragonal Prism.
The face a : na : ~c yields the tetartohedric tetragonal tritoprism (= type III prism) :
Figure 17. Developing the tetartohedric Tetragonal Tritoprism from the face a : na : ~c. Also in this case there is no difference between upper and lower faces, so the final stereogram of this Form
is as is depicted in Figure 18.
Figure 18. Stereogram of the tetartohedric Tetragonal Tritoprism (= type III tetragonal prism).
Finally the face ~a : ~a : c yields again a basic pinacoid, namely a tetartohedric basic pinacoid :
Figure 19. Developing the tetartohedric Basic Pinacoid from the face ~a : ~a : c. Rotation of this face about the main crystallographic axis has no effect, but its consecutive inversion through the
origin of the system of crystallographic axes does produce a second face parallel to the initial one. So the result is an upper face and a lower face which coincide on the projection plane of the
stereographic projection.
This concludes the derivation of all the Forms of the Tetragonal-disphenoidic Crystal Class, and also concludes our exposition of the Tetragonal Crystal System.
For the next Crystal System, the Hexagonal Crystal System, click HERE. | {"url":"http://metafysica.nl/tetragonal_7.html","timestamp":"2024-11-14T08:00:04Z","content_type":"text/html","content_length":"17502","record_id":"<urn:uuid:0174a2e3-b280-47eb-87b5-5d68567f9bc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00490.warc.gz"} |
Anonymization Procedures for Tabular Data: An Explanatory Technical and Legal Synthesis
Technology Campus Vilshofen, Deggendorf Institute of Technology, 94474 Vilshofen an der Donau, Germany
Faculty of Law, University of Augsburg, 86159 Augsburg, Germany
Author to whom correspondence should be addressed.
Submission received: 2 August 2023 / Revised: 22 August 2023 / Accepted: 29 August 2023 / Published: 1 September 2023
In the European Union, Data Controllers and Data Processors, who work with personal data, have to comply with the General Data Protection Regulation and other applicable laws. This affects the
storing and processing of personal data. But some data processing in data mining or statistical analyses does not require any personal reference to the data. Thus, personal context can be removed.
For these use cases, to comply with applicable laws, any existing personal information has to be removed by applying the so-called anonymization. However, anonymization should maintain data utility.
Therefore, the concept of anonymization is a double-edged sword with an intrinsic trade-off: privacy enforcement vs. utility preservation. The former might not be entirely guaranteed when anonymized
data are published as Open Data. In theory and practice, there exist diverse approaches to conduct and score anonymization. This explanatory synthesis discusses the technical perspectives on the
anonymization of tabular data with a special emphasis on the European Union’s legal base. The studied methods for conducting anonymization, and scoring the anonymization procedure and the resulting
anonymity are explained in unifying terminology. The examined methods and scores cover both categorical and numerical data. The examined scores involve data utility, information preservation, and
privacy models. In practice-relevant examples, methods and scores are experimentally tested on records from the UCI Machine Learning Repository’s “Census Income (Adult)” dataset.
1. Introduction
Working with personal data is a highly risky task. Personal data have to be protected not only in sensitive sectors like health and finance. Personal data can occur in a vast variety of forms.
Nevertheless, in practice, personal data are often stored in structured tabular datasets, and this work focuses on tabular datasets as objects of study.
Violating the regulations in force, such as the General Data Protection Regulation (GDPR) by the European Union (EU), can lead to severe penalties. More importantly, from an ethical perspective, data
leakage can cause irreversible and irreparable damage.
However, removing personal information, i.e., called anonymizing, is a challenging task that comes with a trade-off. On the one hand, after anonymizing, no personal references should be possible.
This can only be achieved by manipulating or even deleting data. On the other hand, the data utility should be maintained. Hereby, we refer to “data utility” as any measure to rate how useful data
are for given tasks.
Furthermore, anonymization is highly task-dependent, and due to the lack of specialized Open Data, Data Controllers and Data Processors cannot rely on prior experience.
In the following, this article looks at the anonymization of tabular data from the legal perspective of the GDPR. We describe practice-relevant anonymization terms, methods, and scores for tabular
data in a technical manner while enforcing common terminology and explaining the legal setting for anonymizing tabular data.
This explanatory synthesis aims to distill and organize the wealth of information from a multitude of versatile sources in the context of anonymizing tabular data. We aim to bring the information
into a clear and structured format to grasp the key concepts, trends, and current ambiguities. Our approach seeks to ensure both comparability and broad applicability, focusing on achieving general
validity in practical use cases.
The main contributions of this review paper can be summarized as follows:
• Terminology and taxonomy establishment of anonymization methods for tabular data:
This review introduces a unifying terminology for anonymization methods specific to tabular data. Furthermore, the paper presents a novel taxonomy that categorizes these methods, providing a
structured framework that enhances clarity and organization within tabular data anonymization.
• Comprehensive summary of information loss, utility loss, and privacy metrics in the context of anonymizing tabular data:
By conducting an extensive exploration, this paper offers a comprehensive overview of methods used to quantitatively assess the impact of anonymization on information and utility in tabular data.
By providing an overview of the so-called privacy models, along with precise definitions aligned with the established terminology, the paper reviews and explains the trade-offs between privacy
protection and data utility, with special attention to the Curse of Dimensionality. This contribution facilitates a deeper understanding of the complex interplay between anonymization and the
quality of tabular data.
• Integration of anonymization of tabular data with legal considerations and risk assessments:
Last but not least, this review bridges the gap between technical practices and legal considerations by analyzing how state-of-the-art anonymization methods align with case law and legislation.
By elucidating the connection between anonymization techniques and the legal context, the paper provides valuable insights into the regulatory landscape surrounding tabular data anonymization.
This integration of technical insights with legal implications is essential for researchers, practitioners, and policymakers alike, contributing to a more holistic approach to data anonymization.
The paper conducts a risk assessment for privacy metrics and discusses present issues regarding implementing anonymization procedures for tabular data. Further, it examines possible gaps in the
interplay of legislation and research from both technical and legal perspectives. Based on the limited sources of literature and case law, conclusions on the evaluation of the procedures were
summarized and were partially drawn using deduction.
In summary, these three main contributions collectively provide interdisciplinary insights for assessing data quality impact and promote a well-informed integration of technical and legal aspects in
the domain of tabular data anonymization.
2. Background
This article does not consider the anonymization of graph data or unstructured data, where high dimensionality adds additional constraints [
]. We solely focus on tabular data that can be extracted from relational databases. Due to their reliability and widespread tools, relational databases are used in a wide range of applications across
various industries. Thus, anonymizing tabular data in relational databases is a practice-relevant task. In this matter, protecting privacy is the main goal. Further, it facilitates the development of
new applications with the possible publishing of Open Data.
We only consider data that have string or atomic data types, e.g., Boolean, integer, character, and float, as attribute data types. From a conceptual point of view, we only distinguish between
categorical and numerical attributes, which can be reduced to the data types of string and float in implementations. Characters and integers might be typecast, respectively. We define records as
single entries in the database. Individuals might be related to more than one record. This happens when records are created by user events, such as purchase records. Though we relate to relational
databases and their taxonomy, to emphasize the anonymization task, instead of using the term “primary key”, we use the term Direct Identifier. Instead of talking about a “super key”, we say
Quasi-Identifier (QI). A QI refers to a set of attributes where the attributes are not identifiers by themselves, but together as a whole might enable the unique identification of records in a
database. The QI denotes the characteristics on which linking can be enforced [
]. The QI contains the attributes that are likely to appear in other known datasets, and in the context of privacy models, there is the assumption that a data holder can identify attributes in their
private data that may also appear in external information and thus can accurately identify the QI [
]. Further, by considering the same attribute values of a QI, the dataset of records is split into disjunct subsets that form equivalence classes. In the following, we call these equivalence classes
groups. If a group consists of $k \in \mathbb{N}$ entries, we call the group a $k$-group. Besides Direct Identifiers and (potentially more than one) QIs, there are the so-called Sensitive Attributes (SAs), which, importantly, should not be assignable to individuals after applying anonymization. In Section 4, we give the mathematical setting for the data to study. In contrast to pseudonymization, where re-identification is possible but is not within the scope of this article, anonymization does not
allow Direct Identifiers at all. For this reason, in anonymization, removing Direct Identifiers is always the first step to take (e.g., in [
]). For the sake of simplicity, we assume that this step is already performed and define the data model on top of it. For the sake of consistency and comparability, throughout the article, we use the
Adult dataset from the UCI Machine Learning Repository [
] (“Census income” dataset) for visualizing examples.
3. Related Work
Related work can be categorized into several categories depending on the data format, the perspective (technical or legal), and the use case. The first listed works take a technical perspective and
deal with different data types and use cases in anonymization.
The survey [
] by Abdul Majeed et al. gives a comprehensive overview of anonymization techniques used in privacy-preserving data publishing (PPDP) and divides them into the anonymization of graphs and tabular
data. Although anonymization techniques for tabular data are presented, the focus of the survey is on graph data in the context of social media. The survey concludes that privacy guidelines must be
considered not only at the anonymization level, but in all stages, such as collection, preprocessing, anonymization, sharing, and analysis.
In the literature, most often, the approaches to anonymization are context-sensitive.
Another example is [
], where the authors discuss anonymizing Public Participation Geographic Information System (PPGIS) data by first identifying privacy concerns, referring to the European GDPR as the legal guideline.
The authors claim to have reached a satisfactory level of anonymization after applying generalization to non-spatial attributes and perturbations to primary personal spatial data.
Also in [
], by Olatunji et al., anonymization methods for relational and graph data are the focus but with an emphasis on the medical field. Further, in addition to the various anonymization methods,
an overview of various attack methods and tools used in the field of anonymization is given. The evaluation is focused on two main objectives, which are performed on the Medical Information Mart for
Intensive Care (MIMIC-III) dataset anonymized with the ARX data anonymization tool [
]. In the anonymization procedure, the differences in the accuracy of the predictions between anonymized data and de-anonymized data are shown. In this use case, generalization has less impact on
accuracy than suppression, and it is not necessary to anonymize all attributes but only specific ones.
Again—considering anonymization procedures—in [
], Jakob et al. present a data anonymization pipeline for publishing an anonymized dataset based on COVID-19 records. The goal is to provide anonymized data to the public promptly after publication,
while protecting the dataset consisting of 16 attributes against various attacks. The pipeline itself is tailored to one dataset. All Direct Identifiers were removed, and the remaining variables were
evaluated using [
] to determine whether they had to be classified as QIs or not.
In [
], the authors examine privacy threats in data analytics and briefly list privacy preservation techniques. Additionally, they propose a new privacy preservation technique using a data lake for
unstructured data.
In the literature review in [
], the authors list 13 tools and their anonymization techniques. They identify Open Source anonymization tools for tabular data and give a short summary for each tool. Also, they give an overview of
which privacy model is supported by which tool. However, they focus on a literature review and do not give in-depth evaluations of the tools. Last but not least, they derive recommendations for tools
to use for anonymizing phenotype datasets with different properties and in different contexts in the area of biology. Besides anonymization methods, some of the literature focuses on the scoring of
anonymity and privacy.
In the survey [
], the authors list system user privacy metrics. They list over 80 privacy metrics and categorize into different privacy aspects. Further, they highlight the individuality of single scenarios and
present a method for how to choose privacy metrics based on questions that help to choose privacy metrics for a given scenario. Whereas the authors unify and simplify the metric notation when
possible, they do not focus on the use case of tabular data and do not describe anonymization methods for tabular data (in a unifying manner). Further, they do not consider the legal perspective.
The following works take a legal perspective but do not fill the gap between legal and technical requirements. The legal understanding is not congruent with technology development, and there are
different definitions of identifiable and non-identifiable data in different countries.
In [
], the authors discuss different levels of anonymization of tabular health data in the jurisdictions of the US, EU, and Switzerland. They call for legislation that respects technological advances and
provides clearer legal certainty. They propose a move towards fine-grained legal definition and classification of re-identification steps. In the technical analysis, the paper considers only two
anonymization methods, removal of Direct Identifiers and perturbation, and gives a schematic overview of classification for levels of data anonymization. The data are classified into identifying
data, pseudonymized data, pseudo-anonymized data, aggregated data, (irreversibly) anonymized data, and anonymous data.
In [
], the authors consider the even more opaque regulations regarding anonymizing unstructured data, such as text documents or images. They examine the identifiability test in Recital 26 to understand
which conditions must be met for the anonymization of unstructured data. Further, they examine both approaches that will be discussed in Section 6.3 and Section 6.4.
From a conceptual perspective, in [
], the authors call for a paradigm shift from anonymization towards transparency, accountability, and intervenability, because full anonymization, in many cases, is non-feasible to implement,
and solely relying on anonymization often leads to undesired results.
In summary, it can be seen that there is an increasing demand for practical anonymization solutions due to the rising number of privacy data breaches and the increasing number of data. With the
establishment of new processing paradigms, the relevance of user data anonymization will continue to increase. However, current approaches need significant improvement, and there is a need to develop
new practical approaches that enable the balancing act between privacy and utility.
4. Technical Perspective
The following model omits the existence of Direct Identifiers and just deals with one QI and several SAs. Furthermore, to make the setting comprehensible, we use the terms table, database, and dataset interchangeably. Let $D = \{R_1, R_2, \ldots, R_n\}$ be a database modeled as a multiset with $n \in \mathbb{N}$ not necessarily distinct records, where $R_i \in A_1 \times A_2 \times \ldots \times A_r \times A_{r+1} \times \ldots \times A_{r+t}$, $i = 1, \ldots, n$, are database entries composed of attribute values; $r \in \mathbb{N}$ is the number of attributes that are part of the QI; $t \in \mathbb{N}_0$ is the number of non-QI attributes; $A_j$, $j = 1, \ldots, r+t$, is the set of possible attribute values of the attribute indexed by $j$; and the first $r$ attributes represent the QI. In the following, let $|\cdot|$ denote the cardinality of a set, and more specifically, let $|D|$ denote the number of distinct records in database $D$. As several records can potentially be assigned to one individual, $n$ records correspond to $m \le n$ individuals with QI attributes $\{U_1, U_2, \ldots, U_m\}$, where $U_i \in A_1 \times A_2 \times \ldots \times A_r$, $i = 1, \ldots, m$. We assume that the given data are preprocessed and that each record can be assigned to only one individual, i.e., $|D| = m = n$. Further, let $SA \subseteq \{A_1, \ldots, A_{r+t}\}$ denote the SAs as a subset of all attributes. For the sake of simplicity, in the article, without loss of generality, we restrict the numerical attributes to $A_i \subset \mathbb{R}$ and the categorical attributes to $A_i \subset \mathbb{N}$, $i = 1, \ldots, r+t$. Let $R_i$, $i \in \{1, \ldots, n\}$, denote the $i$-th entry and $R_i(j)$, $j \in \{1, \ldots, r+t\}$, the value of the $j$-th attribute of the $i$-th entry in the database. Figure 1 visualizes the data structure to be studied.
Before scoring certain levels of anonymity for a dataset with personal data, we give an overview of common anonymization methods. We aim to cover relevant methods for tabular data in as detailed a
manner as necessary. We are aware that not all methods are described in detail and that research is being carried out on newer approaches. However, in this article, we focus on the most important
methods that are state-of-the-art and/or common practice. Some anonymization methods use the information given by the QI. In that case, it is important to note that there might be more than one QI
(super key) in a database, and often, several choices of QI have to be considered to score anonymization. For the sake of simplicity and because the following definitions do not limit the use of
multiple QIs, where needed, we use a fixed set of attributes as a single QI. In the following, we categorize anonymization methods in seven categories (Sections 4.1, 4.2, 4.3, 4.4, 4.5, 4.6 and 4.7), where not all are necessarily based on QIs. The considered methods are given in the taxonomy in Figure 2. This taxonomy represents a hierarchical structure that classifies anonymization methods into different levels of categories and subcategories, reflecting their relationships.
4.1. Eliminating Direct Identifiers
Direct Identifiers are attributes that allow for the immediate re-identification of data entries. Therefore, due to the GDPR definition of anonymization, removing the Direct Identifier is compulsory
and usually the first step in any anonymization of personal tabular data. Direct Identifiers, often referred to as IDs, do not usually contain valuable information and can simply be removed. A more
detailed description can be found in the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule [
] by the United States Department of Health and Human Services, which specifies a Safe Harbor method that requires certain Direct Identifiers of individuals to be removed. The 18 Direct Identifiers
that are required to be removed according to the Safe Harbor method can be found in Table 1. To the best of our knowledge, there is no EU counterpart to the HIPAA.
4.2. Generalization
In generalization, the level of detail is coarsened. As a result, given the attributes of individuals, re-identification in the dataset should be impossible. Further, generalization limits the
possibility of finding correlations between different attribute columns and datasets. This also makes it difficult to combine and assign records to an individual. There are several types of
generalizations, such as subtree generalization, full-domain generalization, unrestricted subtree generalization, cell generalization, and multi-dimensional generalization [
]. Generalization for categorical attributes can be defined as follows (cf. [ ]): Let $\bar{A}_j \subseteq \mathcal{P}(A_j)$ be a set of subsets of $A_j$. A mapping
$$g : A_1 \times \ldots \times A_r \to \bar{A}_1 \times \ldots \times \bar{A}_r$$
is called a record generalization if and only if for any record's QI $(b_1, \ldots, b_r) \in A_1 \times \ldots \times A_r$ and $(B_1, \ldots, B_r) := g(b_1, \ldots, b_r) \in \bar{A}_1 \times \ldots \times \bar{A}_r$, it holds that $b_j \in B_j$, $j = 1, \ldots, r$. Let $g_i : A_1 \times \ldots \times A_r \to \bar{A}_1 \times \ldots \times \bar{A}_r$, $i = 1, \ldots, n$, be record generalizations. With $\bar{R}_i := g_i(R_i)$, $i = 1, \ldots, n$, we call $g(D) := \{\bar{R}_1, \ldots, \bar{R}_n\}$ a generalization of database $D$. The trivial generalization of an attribute is defined as
$$g : A \to \bar{A}, \quad b \mapsto \{b\}.$$
Often, generalization is achieved by generalizing attribute values by replacing parts of the value with a special character, for example, “∗”.
Generalization is sometimes also named recoding and can be categorized according to the strategies used [
]. There is a classification in global or local recoding. Global recoding refers to the process of mapping a chosen value to the same generalized value or value set across all records in the dataset.
In contrast, local recoding allows the same value to be mapped to different generalized values in each anonymized group. For the sake of simplicity, we use the word generalization instead of
recoding. Generalization offers flexibility in data anonymization, but it also requires more careful consideration to ensure that the privacy of individuals is still protected. Further, there is the
classification into single- and multi-dimensional generalizations. Here, single-dimensional generalization involves mapping each attribute individually.
$$g : A_1 \times \ldots \times A_r \to \bar{A}_1 \times \ldots \times \bar{A}_r.$$
In contrast, multi-dimensional generalization involves mapping the Cartesian Product of multiple attributes,
$$g : A_1 \times \ldots \times A_r \to \bar{B}_1 \times \ldots \times \bar{B}_s, \quad s < r,$$
where $\bar{B}_i$, $i = 1, \ldots, s$, is a set in $\{A_1, \ldots, A_r\}$ or is a Cartesian Product of sets $A_{k_1} \times \ldots \times A_{k_l}$, $1 < l \le r$. When dealing with numerical attributes, generalization can be implemented using discretization, where attribute values are discretized into same-length intervals. The approach is also referred to as value-class membership [ ]. Let $L \in \mathbb{N}$ be the interval size. Then, discretization can be defined as
$$g : \mathbb{R} \to \{[a, b) \mid a, b \in \mathbb{R},\ a < b\}, \quad \lambda \mapsto I,$$
where $g$ maps the real number $\lambda$ to the half-open real interval
$$I = [lower, upper) := \left[ \left\lfloor \tfrac{\lambda}{L} \right\rfloor L,\ \left\lfloor \tfrac{\lambda}{L} \right\rfloor L + L \right),$$
which has length $L$, and $\lfloor \cdot \rfloor$ represents the floor function, which rounds down to the nearest integer. If one wants to discretize to tenths or even smaller decimal places, one can multiply $L$ and the attribute values in the corresponding column with $10, 100, \ldots$ before applying discretization and with the multiplicative inverse of $10, 100, \ldots$
after applying discretization. In practice, due to the often vast possibilities of generalizing tabular data, a generalization strategy has to be found. Note that data consisting of categorical and
numerical attributes can incorporate different generalizations for different attributes and different database entries (see the single- and multi-dimensional generalization equations above). An example of applying generalization and discretization to the Adult dataset is given in Figure 3.
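As an illustration of the discretization $g$ defined above, the following minimal Python sketch (attribute values and the interval length $L = 10$ are chosen for illustration only) maps numerical values to half-open intervals:

```python
import math

def discretize(value, L):
    lower = math.floor(value / L) * L
    return (lower, lower + L)          # half-open interval [lower, upper)

# Example: generalizing an 'age'-like attribute into intervals of length 10.
ages = [17, 25, 38, 41, 63]
print([discretize(a, 10) for a in ages])
# [(10, 20), (20, 30), (30, 40), (40, 50), (60, 70)]
```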
4.3. Suppression
Suppression (or Data Masking) can be defined as a special type of generalization [
]. To be specific, suppression using generalization, resp. total generalization, can be achieved by applying
$$g(b_1, \ldots, b_r) = (\bar{b}_1, \ldots, \bar{b}_r), \quad \bar{b}_j \in \{b_j, *\},$$
for every database record $(b_1, \ldots, b_r) \in A_1 \times \ldots \times A_r$, where $* := A_j$ resp. $* := \emptyset$ when suppressing categorical attribute values. To suppress numerical attribute values, we can define $* := f(A_j)$ with $f : \mathbb{R} \to \mathbb{R}$, where $f$ is a statistical function such as mean, sum, variance, standard deviation, median, mode, min, and max. An example of suppression is given in Figure 4.
Another concept of suppression is tuple suppression, which can be used to deal with outliers. Thereby, given a positive $k \in \mathbb{N}$ for the desired $k$-anonymity, the database entries in groups with less than $k$ entries are deleted [ ].
4.4. Permutation
With permutation, the order of an individual QI's attribute values within a column is swapped. Mathematically, a permutation is defined as a bijective function that maps a finite set to itself. Let
$$\sigma : \{1, 2, \ldots, n\}^n \to \{1, 2, \ldots, n\}^n, \quad (i_1, i_2, \ldots, i_n) \mapsto (\sigma(i_1), \sigma(i_2), \ldots, \sigma(i_n))$$
be a permutation of record indices. Considering only column $j$ of the records of a database, we define a column permutation as
$$\pi : A_j^n \to A_j^n, \quad \big(R_i(j)\big)_{i=1,\ldots,n} \mapsto \big(R_{\sigma(i)}(j)\big)_{i=1,\ldots,n}.$$
This reassigns attribute values among records, potentially breaking important relationships among attributes. This can result in a subsequent deterioration of analyses where the relationships are relevant. An example of column permutation is given in Figure 5.
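A minimal Python sketch of a column permutation (assuming the dataset is held in a pandas DataFrame; the column names are illustrative and not prescribed by the method above):

```python
import numpy as np
import pandas as pd

def permute_column(df: pd.DataFrame, column: str, seed: int = 0) -> pd.DataFrame:
    """Return a copy of df in which the values of one column are randomly reordered."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    out[column] = rng.permutation(out[column].to_numpy())
    return out

df = pd.DataFrame({"age": [25, 38, 41],
                   "occupation": ["Sales", "Craft-repair", "Tech-support"]})
print(permute_column(df, "occupation"))
```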
4.5. Perturbation
In perturbation, additive or multiplicative noise is applied to the original data. However, without a careful choice of noise, there is the possibility that utility is hampered. On the contrary,
especially in the case of outliers, applying noise might not be enough to ensure privacy after anonymization achieved using perturbation. Perturbation is mainly applied to SAs. In [
], the perturbation approaches provide modified values for SAs. The authors consider two methods for modifying SAs without using information about QIs. Besides value-class membership or
discretization, which is here explained in generalization (Section 4.2), the authors use value distortion as a method for privacy preservation in data mining. Hereby, for every attribute value $R_i(j)$, $i = 1, \ldots, n$, in an attribute column $j$, the value $R_i(j)$ is replaced with the value $R_i(j) + \rho$, where $\rho \in \mathbb{R}$ is additive noise drawn from a random variable with continuous uniform distribution $r \sim U(-a, a)$, $a > 0$, or with normal distribution $r \sim \mathcal{N}(\mu, \sigma)$ with mean $\mu = 0$ and standard deviation $\sigma > 0$.
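The value distortion just described can be sketched as follows (a hedged illustration; the noise parameters and the column content are arbitrary choices, not values from the cited work):

```python
import numpy as np

def perturb(values, kind="uniform", a=5.0, sigma=2.0, seed=0):
    """Add uniform U(-a, a) or normal N(0, sigma) noise to a numerical SA column."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    if kind == "uniform":
        noise = rng.uniform(-a, a, size=values.shape)
    else:
        noise = rng.normal(0.0, sigma, size=values.shape)
    return values + noise

print(perturb([40.0, 52.0, 31.0], kind="normal"))
```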
Probability distribution-based methods might also be referred to as perturbation. However, because these methods replace the original data as a whole, we list these approaches in Synthetic Data (Section 4.7). The same applies to dimensionality reduction-based anonymization methods, which we also list in Synthetic Data. Section 4.6 studies a more sophisticated field of perturbations, namely, Differential Privacy (DP), which is the state of the art in privacy-preserving ML.
4.6. Differential Privacy
Differential Privacy, introduced by Cynthia Dwork in [
], is a mathematical technique that allows for the meaningful analysis of data while preserving the privacy of individuals in a dataset. The idea is to add random noise to data in such a way that—as
it is the goal in anonymization—no inferences can be made about personal and sensitive data. DP is implemented in different variants depending on the use case, where anonymization is only a sub-task
in a vast variety of use cases. Generally, there is a division into local [
] and global DP [
]. The local DP model does not require any assumptions about the server, whereas the global DP model is a central privacy model that assumes the existence of a trusted server. As a result,
the processing frameworks for global and local DP differ significantly. However, the definition of local DP can be embedded into the definition of global DP as a special case where the number of
database records equals one.
Common techniques to implement local DP are the Laplace and Exponential mechanisms [
], and Randomized Response [ ].
In the context of global DP, there are novel output-specific variants of DP for ML training processes, where ML models are applied to sensitive data and model weights are manipulated in order to
preclude successful membership or attribute inference attacks. For example, in Differentially Private Stochastic Gradient Descent (DP-SGD) [
], instead of adding noise to the data themselves, gradients (i.e., the multi-variable derivative of the loss function with respect to the weight parameters) are manipulated to obtain
privacy-preserving Neural Network models. Adapting the training process is also referred to as private training. Whereas private training only adjusts the training process and leads to private
predictions, private prediction itself is a DP technique to prevent privacy violations by limiting the amount of information about the training data that can be obtained from a series of model
predictions. Whereas private training operates on model parameters, private prediction perturbs model outputs [
]. The privacy models $k$-anonymity, $\ell$-diversity, and $t$-closeness rely on deterministic mechanisms and can be calculated given the database and a QI. On the contrary, global DP does not depend only on the QI but also on the whole database and a randomized mechanism $M$ in connection with a data-driven algorithm, such as database queries, statistical analysis, or ML algorithms. The most basic definition of the so-called $(\epsilon, \delta)$-DP includes the definition of a randomized algorithm, probability simplex, and the distance between two databases based on the $\ell_1$-norm of the difference of histograms. This definition of DP requires that for every pair of "neighbouring" databases $X, Y$ (given as histograms), it is extremely unlikely that, ex post facto, the observed value $M(X)$ resp. $M(Y)$ is much more or much less likely to be generated when the input database is $X$ than when the input database is $Y$ [ ]. Two databases are called neighbors if the resulting histograms $x, y \in \{0, 1\}^{|\mathcal{X}|}$ only differ in at most one record, where in our setting, $x_i, y_i \in \{0, 1\}$, $i = 1, \ldots, |\mathcal{X}|$, is the number of non-duplicate records with the same type in $D$, and $\mathcal{X} \supseteq D$ is the record "universe". More in detail, $(\epsilon, \delta)$-DP for a randomized algorithm $M$ with domain $\{0, 1\}^{|\mathcal{X}|}$ is defined by the inequality below, where $\epsilon > 0$, $\delta \ge 0$ are privacy constraints. For all $S \subseteq \mathrm{range}(M)$ (subsets of the possible outputs of $M$) and $x, y \in \{0, 1\}^{|\mathcal{X}|}$ such that $\|x - y\|_1 \le 1$, we have
$$Pr[M(X) \in S] \le e^{\epsilon}\, Pr[M(Y) \in S] + \delta.$$
The smaller the value of the so-called privacy budget $\epsilon$, the stronger the privacy guarantee. Additionally, the parameter $\delta$ is a small constant term that is usually set to a very small value to ensure that the formula holds with high probability. In summary, DP guarantees that the output of a randomized algorithm does not
reveal much about any individual in the dataset, even if an adversary has access to all other records in the database. There are promising approaches, such as in [
], where the authors propose a general and scalable approach for differentially private synthetic data generation that also works for tabular data.
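As an illustration of one of the local DP building blocks mentioned above, the following sketch applies the Laplace mechanism to a counting query (sensitivity 1); the value of $\epsilon$, the predicate, and the attribute name are illustrative assumptions, and this is not the cited authors' implementation:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, seed=None):
    """Return a noisy count satisfying epsilon-DP for a counting query:
    true count + Laplace noise with scale sensitivity/epsilon (sensitivity = 1)."""
    rng = np.random.default_rng(seed)
    true_count = sum(1 for record in data if predicate(record))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: noisy number of records with capital-loss > 0 (attribute name illustrative).
records = [{"capital-loss": 0}, {"capital-loss": 1887}, {"capital-loss": 0}]
print(laplace_count(records, lambda r: r["capital-loss"] > 0, epsilon=0.5))
```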
4.7. Synthetic Data
Whereas the above approaches directly manipulate dataset entries, with synthetic approaches, new data are generated based on extracted and representative information from the original data. For the
sake of simplicity, the following synthetic approaches to generate data are only described for numerical data. However, by using a reasonable coding method (such as one-hot encoding), categorical
data might be converted into numerical data, and vice versa.
In [
], to improve anonymization using generalization for $k$-anonymity, the so-called condensation method was introduced. The approach is related to probability distribution-based perturbation methods. Thereby, the resulting numerical attribute values closely
match the statistical characteristics of the original attribute values, including inter-attribute correlations (second order) and mean values (first order). Condensation does not require hierarchical
domain generalizations and fits both static data (static condensation) and dynamic data streams (dynamic condensation). In summary, this approach condenses records into groups of predefined size,
where each group maintains a certain level of statistical information (mean, covariance). The authors test the accuracy of a simple $k$-Nearest Neighbor classifier on different labeled datasets and show that condensation allows for high levels of privacy without noticeably compromising classification accuracy. Further, the authors
find that by using static condensation for anonymization, in many cases, even better classification accuracy can be achieved. This is because the implied removal of anomalies cancels out the negative
impact of adding noise. In summary, condensation produces synthetic data by creating a new perturbed dataset with similar dataset characteristics. The mentioned paper states the corresponding
algorithm to calculate statically condensed group statistics: first-order and second-order sum per attribute and total number of records. Afterwards, given the calculated group statistics,
by building the covariance matrix of attributes for every group, the eigenvectors and eigenvalues of the covariance matrix can be calculated using eigendecomposition. To construct new data,
the authors assume that the data within each group is independently and uniformly distributed along each eigenvector with a variance equal to the corresponding eigenvalue.
Another approach to improving privacy preservation when creating synthetic data is to bind Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA) with DP [
]. In the paper, considering the PCA-based approach, a perturbed covariance matrix (real and symmetric) is decomposed into eigenvalues and eigenvectors, and Laplace noise is applied on the resulting
eigenvectors to generate noisy data. The introduced differential PCA-based privacy-preserving data publishing mechanism satisfies $\epsilon$-Differential Privacy and yields better utility in comparison to the Laplace and Exponential mechanisms, even when having the same privacy budget.
In [
], the authors propose a sparsified Singular Value Decomposition (SVD) for data distortion to protect privacy. Given the dataset—often a sparse—matrix $D \in \mathbb{R}^{n \times m}$, the SVD of $D$ is $D =: U \Sigma V^T$, where $U$ is an $n \times n$ orthonormal matrix; $\Sigma := \mathrm{diag}[\sigma_1, \sigma_2, \ldots, \sigma_s]$, $\sigma_i \ge 0$, $1 \le i \le s$, $s := \min\{m, n\}$, is an $n \times m$ diagonal matrix whose non-negative diagonal entries are in descending order; and $V^T$ is an $m \times m$ orthonormal matrix. Due to the property of descending variation in $\sigma_1, \ldots, \sigma_s$, data can be compressed to lower dimensionality while preserving utility. This is achieved by using only the first $1 \le d \le s$ columns of $U$, the $d \times d$ upper left submatrix of $\Sigma$, and the first $d$ rows from $V^T$, denoted $U_d$, $\Sigma_d$, $V_d^T$. The matrix $D_d := U_d \Sigma_d V_d^T$ to represent $D$ can be interpreted as a reduced dataset of $D$ that can be used for mining on the original dataset, $D$. In contrast to SVD, in sparsified SVD, entries in $U_d$ and $V_d^T$ that are below a threshold are set to zero to obtain a sparsified data matrix $\bar{D}_d$. By thresholding values in $U_d$ and $V_d^T$ to zero and by dropping less important features in $\Sigma$, data are distorted, which makes it harder to estimate values and records in $D$. However, the most important features are kept. Therefore, the approach aims to maintain the utility of the original dataset, $D$.
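A minimal numerical sketch of the sparsified-SVD idea (the rank $d$, the threshold, and the toy matrix are illustrative assumptions; this is not the cited authors' code):

```python
import numpy as np

def sparsified_svd(D, d, threshold):
    """Keep the top d singular triplets, zero out small entries of U_d and V_d^T,
    and rebuild a distorted stand-in for the original data matrix D."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Ud, Sd, Vtd = U[:, :d], np.diag(s[:d]), Vt[:d, :]
    Ud = np.where(np.abs(Ud) < threshold, 0.0, Ud)      # sparsify U_d
    Vtd = np.where(np.abs(Vtd) < threshold, 0.0, Vtd)   # sparsify V_d^T
    return Ud @ Sd @ Vtd

D = np.array([[39., 2174.], [50., 0.], [38., 0.], [53., 0.]])   # toy numerical attributes
print(sparsified_svd(D, d=1, threshold=0.01))
```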
Overall, from a technical perspective, when considering eigenvector-based approaches to generate synthetic data, a numerically stable algorithm including suitable matrix preprocessing for the
eigenvalue problem at hand has to be selected. Last but not least, eigenvector-based approaches can also help mitigate the Curse of Dimensionality in data anonymization [
]. The Curse of Dimensionality and its relation to anonymization methods are explained in more detail in Section 5.5.
More recent generative ML models that are often based on deep learning can effectively create synthetic and anonymous data. Generative models aim to approximate a real-world joint probability
distribution, such that the original dataset only represents samples pulled from the learned distribution. One common use case of generative models is to fix class imbalances or to apply domain
transfer. However, generative approaches can also be used to generate anonymous data. Importantly, considering privacy preservation, the generated data should not allow for (membership/attribute)
inferences about specific training data. When it comes to tabular data, in [
], the authors create synthetic tabular data by adapting a Generative Adversarial Network (GAN) that incorporates a Long Short-Term Memory (LSTM) Neural Network in the generator and a Fully Connected
Neural Network in the discriminator. Other examples for synthetic tabular data based on GANs can be found in the papers [
]. However, just considering a generative ML model by itself does not imply the privacy preservation of training data. Therefore, generative ML might be combined with DP as a potential way out [
]. This again also applies to tabular data; c.f. [
5. Utility vs. Privacy
In anonymization, there is always the trade-off of removing information vs. keeping utility. In the literature, two main concepts are used to model the change in utility when applying anonymization: information loss (Section 5.1) and utility loss (Section 5.2). To give an overview, we categorize and list the anonymization scores studied in Section 5 in Table 2. In the following subsections, we explain the measurements and methods in greater detail. Further, we give insights into the phenomenon of the so-called Curse of Dimensionality in the context of anonymizing tabular data.
5.1. Information Loss
Conditional entropy assesses the amount of information that is lost with anonymization in terms of generalization and suppression of categorical attributes. In [
], the authors study the problem of achieving $k$-anonymity using generalization and suppression with minimal loss of information. As a solution to the problem, they prove that the stated problem is NP-hard and present an algorithm with an approximation guarantee of $O(\ln k)$ for $k$-anonymity. The calculation of information loss based on entropy builds on probability distributions for each of the attributes. Let $X_j$ denote the categorical value of attribute $A_j$, $j = 1, \ldots, r$, in a randomly selected record from a dataset $D$ consisting of only categorical data. Then, for $a \in A_j$, $j \in \{1, 2, \ldots, r\}$,
$$Pr[X_j = a] := \frac{|\{1 \le i \le n : R_i(j) = a\}|}{n}.$$
Let $B_j \subseteq A_j$. Then, the conditional entropy of $X_j$ given $B_j$ is defined as follows:
$$H(X_j \mid B_j) := -\sum_{b_j \in B_j} Pr[X_j = b_j \mid X_j \in B_j] \log_2 Pr[X_j = b_j \mid X_j \in B_j].$$
Loosely speaking, conditional entropy measures the average amount of uncertainty in $X_j$ given the knowledge that $X_j$ takes values from $B_j$. Given $g(D) = \{\bar{R}_1, \bar{R}_2, \ldots, \bar{R}_n\}$, a generalization of $D$, the entropy measure of the loss of information caused by generalizing $D$ to $g(D)$ is defined as
$$\Pi_e(D, g(D)) := \sum_{i=1}^{n} \sum_{j=1}^{r} H(X_j \mid \bar{R}_i(j)).$$
If $\bar{R}_i$, $i \in \{1, \ldots, n\}$, is no generalization at all, i.e., $|\bar{R}_i(j)| = 1$, we have $H(X_j \mid \bar{R}_i(j)) = 0$, and there is no uncertainty. On the other hand, if $\bar{R}_i(j) = A_j$, there is maximal uncertainty. An example of entropy information loss is given in Figure 6.
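For a single categorical attribute, the entropy measure above can be sketched as follows (toy values for an education-like attribute; an illustration only, not the cited implementation):

```python
import math
from collections import Counter

def conditional_entropy(column, B):
    """H(X | B): uncertainty of the original value given that it lies in the set B,
    with probabilities estimated from the original attribute column."""
    counts = Counter(column)
    total_in_B = sum(counts[b] for b in B)
    h = 0.0
    for b in B:
        if counts[b] == 0:
            continue
        p = counts[b] / total_in_B
        h -= p * math.log2(p)
    return h

column = ["Bachelors", "Bachelors", "HS-grad", "Masters"]        # original attribute values
generalized = [{"Bachelors", "Masters"}, {"Bachelors", "Masters"},
               {"HS-grad"}, {"Bachelors", "Masters"}]            # one generalized cell per record
print(sum(conditional_entropy(column, B) for B in generalized))  # entropy loss for this attribute
```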
In [
], the authors also use other variants of entropy measures, namely, the so-called monotone entropy measure and non-uniform entropy measure, with different characteristics. However, the authors claim
that the entropy measure is a more appropriate measure when it comes to privacy.
Given a dataset $D = \{R_i \mid i = 1, \ldots, n\}$ consisting of only numerical attributes and a discretization $g(D) = \{\bar{R}_i(j) \mid i = 1, \ldots, n,\ j = 1, \ldots, r\}$, the information loss on a per-attribute basis can be calculated with the following formula [ ]:
$$\Pi(D, g(D)) := \frac{1}{n \cdot r} \sum_{i=1}^{n} \sum_{j=1}^{r} \frac{upper_{ij} - lower_{ij}}{\max_j - \min_j},$$
where $upper_{ij}$ and $lower_{ij}$ are the upper and lower bounds of the generalized attribute value interval $\bar{R}_i(j)$, and $\min_j := \min_{i=1,\ldots,n} \{R_i(j)\}$ and $\max_j := \max_{i=1,\ldots,n} \{R_i(j)\}$, i.e., the minimum and maximum attribute values before generalization.
Based on condensation (Section 4.7) for $k$-anonymity, in [ ], the so-called relative condensation loss is defined to score information loss in anonymization. Given anonymized tabular data $\bar{D}$, the relative condensation loss is group-wise-defined and represents a minimum level of information loss. For $g \in groups$, where $groups$ are the groups of anonymized data $\bar{D}$,
$$L(g) := \frac{\max_{\bar{R}_i, \bar{R}_k \in g,\ i \neq k} \|\bar{R}_i - \bar{R}_k\|_2}{\max_{\bar{R}_i, \bar{R}_k \in \bar{D},\ i \neq k} \|\bar{R}_i - \bar{R}_k\|_2} \in (0, 1],$$
where $\|\cdot\|_2$ denotes the 2-norm and the anonymized entries $\bar{R}_i \in \mathbb{R}^d$, $i = 1, \ldots, n$, are quantified as real vectors of dimension $d \in \mathbb{N}$, $d \ge r$. The different values of $L(g)$ for the different $g \in groups$ can be aggregated ($avg$, $\max$, …) to a total information loss $L(\bar{D})$. Last but not least, in [ ], the authors use the average Euclidean distance to measure information loss:
$$IL(D, g(D)) := \frac{1}{n} \sum_{i=1}^{n} dist(R_i, \bar{R}_i),$$
where $dist$ defines the Euclidean distance between data records. Note that in the case of non-real-valued attributes in the dataset, the records have to be vectorized before applying $dist$. An example of numerical information loss is given in Figure 7.
If there is a mixture of categorical and numerical attributes in D, the summands of the combined sum have to be weighted accordingly. Relative condensation loss can be used for both categorical and
numerical data by defining feature embeddings for categorical data.
5.2. Utility Loss
As mentioned above, the entropy measures can only process categorical attributes; they lack the capability to deal with numerical data. By designing a utility loss that can deal with both categorical and numerical attribute values, we can overcome this downside. In [], the authors quantify utility by calculating the distance between the relative frequency distributions of each data attribute in the original data and the sanitized data. The distance is based on the Earth Mover’s Distance (EMD). Further, -test statistics can be utilized to examine whether significant differences exist between variables in the original and the anonymized data []. Another method to score the utility of anonymization that can be used for evaluations is the average size of groups [],
$groupAVG(D) := \frac{|D|}{|groups|},$
or the normalized average equivalence class size metric [], defined by the formula
$C_{AVG}(D) := \frac{|D|}{|groups| \cdot k},$
or the commonly used, so-called discernibility metric, which scores the number of database entries that are indistinguishable from each other [] and penalizes large group sizes,
$C_{DM}(D) := \sum_{group \in groups} |group|^2.$
The listed group-size-based metrics $groupAVG$, $C_{AVG}$, and $C_{DM}$ should be minimized to maintain utility while aiming for $k$-anonymity with $k$ greater than or equal to a predefined positive integer.
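The group-size-based metrics are straightforward to compute once the equivalence classes are known. The following sketch, with hypothetical QI columns and records, shows one possible implementation of $groupAVG$, $C_{AVG}$, and $C_{DM}$:

```python
# Illustrative sketch: group-size-based utility metrics for a k-anonymized table.
from collections import Counter

def group_sizes(records, qi_columns):
    return list(Counter(tuple(rec[c] for c in qi_columns) for rec in records).values())

def group_avg(records, qi_columns):
    sizes = group_sizes(records, qi_columns)
    return len(records) / len(sizes)                      # |D| / |groups|

def c_avg(records, qi_columns, k):
    sizes = group_sizes(records, qi_columns)
    return len(records) / (len(sizes) * k)                # |D| / (|groups| * k)

def c_dm(records, qi_columns):
    return sum(s ** 2 for s in group_sizes(records, qi_columns))   # sum of squared group sizes

records = [
    {"age-band": "20-30", "zip": "13***"},
    {"age-band": "20-30", "zip": "13***"},
    {"age-band": "30-40", "zip": "14***"},
    {"age-band": "30-40", "zip": "14***"},
    {"age-band": "30-40", "zip": "14***"},
]
qi = ["age-band", "zip"]
print(group_avg(records, qi), c_avg(records, qi, k=2), c_dm(records, qi))  # 2.5 1.25 13
```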
Taking into account record suppression (
Section 4.3
), the proportion of suppressed records in the total number of records before anonymization can also be used to measure the loss of utility. However, applying record suppression to obtain $k$-anonymity enlarges group sizes and thus the group-size-based metrics.
In contrast to the above approaches, when the context is known in advance, there is the possibility to measure the data utility by scoring the output of ML algorithms that use anonymized data for
training. For example, in [
], anonymized labeled data are scored by calculating the $F$-measure after applying $k$-Nearest Neighbor to classify molecules that are given as numerical attributes. Considering the Adult dataset, in [], the authors apply different ML algorithms ($k$-Nearest Neighbor, Random Forest, Adaptive Boosting, Gradient Tree Boosting) to anonymized data. However, they only apply record suppression for anonymization. In the following, we call this type of score ML utility.
5.3. Privacy Models
There are common models to determine if records in a dataset can be re-identified. Yet, the models have weaknesses that can potentially be exploited by attackers. In the following, we solely focus on
the definitions and give examples. In
Section 6.7
, we list the models’ weaknesses and embed the definitions in a legal context.
5.3.1. k-Anonymity
The so-called $k$-anonymity, first introduced in [] for $k \in \mathbb{N}^+$, $k \le n$, is a dataset property for anonymization that considers a QI. If the attributes of the QI for each record in the dataset are identical to those of at least $k - 1$ other records in the dataset, the dataset is called $k$-anonymous. When having $k$-anonymity, groups consist of at least $k$ records. Technically, $k$-anonymity is defined by
$k := \min_{group \in groups} |group|.$
To give an example, Figure 8 shows a database $D$, where the four attributes education, education-num, capital-loss, native-country build a QI and another attribute is an SA. In Figure 8, generalization and discretization are applied, affecting the attributes education, education-num, native-country in such a way that at least two records in the table always have the same QI, leading to $k$-anonymity with $k = 2$. To be precise, the data are split into two groups: $\{R_1, R_2, R_5, R_6\}$ and $\{R_3, R_4\}$.
The privacy metric of $k$-anonymity might be combined with different metrics. For example, the authors in [] introduce the so-called Mondrian multi-dimensional $k$-anonymity as a multi-dimensional generalization model for $k$-anonymity. The paper proposes a greedy metric approximation algorithm that offers flexibility and incorporates general-purpose metrics such as the discernibility metric or the normalized average equivalence class size metric (Section 5.2).
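For illustration, $k$ can be computed directly from the definition as the size of the smallest equivalence class. The sketch below uses hypothetical records and QI columns loosely modeled on the Adult dataset:

```python
# Illustrative sketch: computing k for a table given a choice of QI columns.
from collections import Counter

def k_anonymity(records, qi_columns):
    """records: list of dicts; qi_columns: attribute names forming the QI.
    Returns k, the size of the smallest equivalence class."""
    groups = Counter(tuple(rec[c] for c in qi_columns) for rec in records)
    return min(groups.values())

records = [
    {"education": "Bachelors", "native-country": "North-America", "occupation": "Sales"},
    {"education": "Bachelors", "native-country": "North-America", "occupation": "Tech-support"},
    {"education": "HS-grad",   "native-country": "Europe",        "occupation": "Craft-repair"},
    {"education": "HS-grad",   "native-country": "Europe",        "occupation": "Sales"},
]
print(k_anonymity(records, ["education", "native-country"]))  # -> 2
```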
5.3.2. l-Diversity
$l$-Diversity, introduced in [], a second common model for anonymization, considers SAs and gives additional privacy protection to $k$-anonymity. Again, it considers groups of records with the same QI. When having distinct $l$-diversity with $l \in \mathbb{N}^+$, $l \le n$, each group has at least $l$ different attribute values for every SA. Therefore, it is not possible to assign a single attribute value to all records of a group, and group membership does not imply assigning a unique SA to a person. Utilizing $l$-diversity for scoring anonymity can be challenging, as it depends on the variety of values an SA can have. Technically, $l$-diversity is defined as
$l := \min_{group \in groups} |\{R(j) \mid R \in group\}|,$
where $j \in \{1, \ldots, r+t\}$ denotes the column index of the SA. Given the example at the bottom of Figure 8 and its SA, in every group, all values of the SA are diverse, and each group consists of two records. Therefore, we have $l$-diversity with $l = 2$. For a different SA, there would be $l$-diversity with $l = 1$.
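Analogously, distinct $l$-diversity can be evaluated by counting the distinct SA values per equivalence class; the records and the choice of SA below are hypothetical:

```python
# Illustrative sketch: distinct l-diversity of a table for one SA, given QI columns.
from collections import defaultdict

def l_diversity(records, qi_columns, sa_column):
    """Returns l, the minimum number of distinct SA values over all QI groups."""
    groups = defaultdict(set)
    for rec in records:
        key = tuple(rec[c] for c in qi_columns)
        groups[key].add(rec[sa_column])
    return min(len(values) for values in groups.values())

records = [
    {"education": "Bachelors", "native-country": "North-America", "occupation": "Sales"},
    {"education": "Bachelors", "native-country": "North-America", "occupation": "Tech-support"},
    {"education": "HS-grad",   "native-country": "Europe",        "occupation": "Craft-repair"},
    {"education": "HS-grad",   "native-country": "Europe",        "occupation": "Sales"},
]
print(l_diversity(records, ["education", "native-country"], "occupation"))  # -> 2
```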
5.3.3. t-Closeness
$t$-Closeness [] again takes into account SA values. Whereas $l$-diversity considers the variety of SA values in single groups, $t$-closeness checks the granularity of SA values in a single group in comparison to the overall value distribution in the dataset. A group is said to have $t$-closeness if the EMD between the relative frequency distribution of an SA in this group and the relative frequency distribution of the attribute in the whole dataset is no more than a threshold $t > 0$. A dataset is said to have $t$-closeness if all equivalence classes have $t$-closeness. Originally, the authors considered the EMD for this purpose (for comparison, see Section 5.2). The distance is calculated differently for integer, numerical, and categorical attributes. Given a dataset $D$ with an SA at index $s \in \{1, \ldots, r+t\}$, the $t$-closeness of the dataset is defined as
$t(D) := \max_{group \in groups} EMD(P, Q_{group}),$
where the following apply:
• $D$ is the dataset;
• $P$ is the relative frequency distribution of all attribute values in the column of the SA in dataset $D$;
• $Q_{group}$ is the relative frequency distribution of all attribute values in the column of the SA within $group$, which is an equivalence class of dataset $D$ obtained by a given QI;
• $EMD(P, Q)$ is the EMD between two relative frequency distributions and depends on the attributes’ value type.
Given two ordered relative frequency distributions $P$ and $Q$ of integer values, the ordered EMD is defined as follows:
$EMD(P, Q) := \frac{1}{o-1} \sum_{i=0}^{o-1} \Big| \sum_{j=0}^{i} (P - Q)_j \Big|,$
where the following apply:
• $o$ is the number of distinct integer attribute values in the SA column;
• $P$ and $Q$ are two relative frequency distributions as histograms (integers are ordered in ascending order).
Given two ordered relative frequency distributions $P$ and $Q$ of categorical values, the equal EMD is defined as follows:
$EMD(P, Q) := \frac{1}{2} \sum_{i=0}^{o-1} |(P - Q)_i|,$
where the following apply:
• $o$ is the number of distinct categorical attribute values in the SA column;
• $P$ and $Q$ are two relative frequency distributions as histograms.
Given the example at the bottom of Figure 8 and the sensitive integer attribute, there would be $t$-closeness with $t = 0.2$, due to $EMD(P_1, Q) = 0.1$ and $EMD(P_2, Q) = 0.2$, where $P_1$ is the orange group with four records and $P_2$ is the yellow group with two records.
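The following sketch evaluates $t$-closeness for an integer-valued SA using the ordered EMD; the toy table, the QI columns, and the SA are hypothetical, and the sketch assumes the SA domain contains at least two distinct values:

```python
# Illustrative sketch: t-closeness via the ordered EMD between each group's
# SA distribution and the overall SA distribution.
from collections import Counter, defaultdict

def distribution(values, domain):
    counts = Counter(values)
    return [counts[v] / len(values) for v in domain]

def ordered_emd(P, Q):
    """Ordered EMD for histograms over the same ascending integer domain."""
    o, cum, total = len(P), 0.0, 0.0
    for p, q in zip(P, Q):
        cum += p - q
        total += abs(cum)
    return total / (o - 1)

def t_closeness(records, qi_columns, sa_column):
    domain = sorted({rec[sa_column] for rec in records})
    overall = distribution([rec[sa_column] for rec in records], domain)
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[c] for c in qi_columns)].append(rec[sa_column])
    return max(ordered_emd(distribution(vals, domain), overall)
               for vals in groups.values())

records = [
    {"zip": "13***", "age-band": "20-30", "income": 1},
    {"zip": "13***", "age-band": "20-30", "income": 2},
    {"zip": "14***", "age-band": "30-40", "income": 2},
    {"zip": "14***", "age-band": "30-40", "income": 3},
]
print(t_closeness(records, ["zip", "age-band"], "income"))
```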
5.4. Re-Identification Risk Quantification
Besides information loss, utility scoring, and privacy models, there is a fourth important method to score anonymization, namely, quantifying the probability of re-identification risk. Privacy models
can only be calculated given the anonymized tabular dataset, and information loss and utility scores evaluate the application of anonymization regarding utility preservation. Re-identification risk
can be calculated given an anonymized dataset plus an individual’s attribute value(s) as background knowledge. The re-identification risk method particularly takes into account the very realistic
danger of the so-called inference attacks. For example, in [
], the authors define a score that incorporates the uniqueness, uniformity, and correlation of attribute values. They quantify the re-identification risk by calculating a joint probability of the
non-uniqueness and non-uniformity of records. From a technical perspective, the re-identification risk is modeled as a Markov process. We adapt the definition of the probability ($PR$) of re-identifying a record $R$ to our setting assuming a unit record dataset $D$, i.e., not having event data. Further, we restrict the definition to attributes that are part of the QI, i.e., to the first $r$ attributes in the dataset. We define the probability ($PR$) of re-identifying a record $R$ given its attribute values at indices $J \subseteq \{1, \ldots, r\}$ as follows:
$PR(R(J)) := 1.0 - PP(R(J)) \cdot n,$
where $n$ is the total number of records in the dataset and $PP(R(J))$ is the privacy probability of non-re-identifying record $R$ in dataset $D$ with a subset of attribute values of record $R$ at attribute indices $J$, i.e., $R(J)$. $PP$ is calculated by utilizing the Markov Model risk score. Without loss of generality, we re-index the ordered set of attribute values $\{R(1), \ldots, R(r)\}$, define the ordered set $\{R(2), \ldots, R(m)\} := \{R(1), \ldots, R(r)\} \setminus R(J)$, and let $R(1) := R(J)$. Then, the privacy probability of non-re-identifying record $R$ in dataset $D$ with a subset of attribute values of record $R$ at attribute indices $J$ is defined as
$PP(R(J)) := P(R(J)) \cdot \big(1 - P(R \mid R(J))\big) \cdot \prod_{1 \le j \le m-1} P\big(R(j+1) \mid R(j)\big)\big(1 - P(R \mid R(j+1))\big),$
where the following apply:
• $P(R(J)) := \Pr[X_j = R(j), j \in J]$;
• $P(R \mid R(J)) := \Pr[X_i = R(i), i \notin J \mid X_j = R(j), j \in J]$;
• $P(R(j+1) \mid R(j)) := \Pr[X_{j+1} = R(j+1) \mid X_j = R(j)]$;
• $P(R \mid R(j+1)) := \Pr[X_i = R(i), i \notin J \mid X_{j+1} = R(j+1)]$.
Calculating the average $PR$ for all records in the dataset yields
$PR(D, J) := \frac{1}{n} \sum_{i=1}^{n} PR(R_i(J)).$
Considering the dataset given in Figure 9 as an example, given the attribute value “Bachelors” for education in dataset record $R_1$, the probability of re-identifying the record is $PR(R_1(\{1\})) = 0.9$. The calculation of the start probability, i.e., attribute uniqueness, $P(R_1(\{1\})) \approx 0.386$, is equivalent to the re-identification-risk score, $RIR$, which is efficiently calculated with CSIRO’s R4 tool []. Given the attribute value “HS-grad” for education in dataset record $R_3$, the probability of re-identifying this record is the highest, as $PR(R_3(\{1\})) = 1.0$, and the RIR score is $P(R_3(\{1\})) = 1.0$. Whereas the RIR score does not depend on the order of attributes, $PR$ depends on the attribute indices and also takes into account inter-attribute relations. Besides the average probability of re-identifying records, the paper [] describes the minimum, maximum, median, and marketer re-identification risk based on the calculated $PR$ values of all dataset records to score the re-identification risk of a dataset.
5.5. Curse of Dimensionality
The phenomenon of the Curse of Dimensionality, first mentioned in [] in the context of linear equations, refers to the increase in computational complexity and requirements for data analysis as the number of variables (dimensions/attributes) grows. This increase
makes it more and more difficult to find optimal solutions for high-dimensional problems. Considering anonymization, most privacy models on multivariate tabular data lead to poor utility if enforced
on datasets with many attributes [
]. Aggarwal has already shown in [
] that large-sized QIs lead to difficult anonymization, having previously presented condensation [
] (described in
Section 4.7
) as a synthetic approach to anonymization to achieve $k$-anonymity. Besides showing the openness to inference attacks in terms of probability when having high-dimensional data, in an experimental analysis, it is visualized that anonymizing high-dimensional
data, even for only 2-anonymity, leads to unacceptable information loss. However, high-dimensional data potentially have inter-attribute correlations that—despite the theoretic Curse of
Dimensionality—can be used to better anonymize them in terms of utility preservation. Therefore, to overcome the Curse of Dimensionality in anonymization, in the so-called Vertical Fragmentation, the
data are first partitioned into disjoint sets of correlating attributes and subsequently anonymized and assembled after the anonymization step [
]. This approach is method-agnostic, as it can be used with all anonymization methods described in
Section 4
. Given the attributes $A_1, \ldots, A_{r+t}$, a vertical fragmentation $\mathcal{F}$ of the attributes is a partitioning of the attributes into fragments $\mathcal{F} = \{F_1, \ldots, F_f\}$ with $\forall i \in \{1, \ldots, f\}: F_i \subseteq \{A_1, \ldots, A_{r+t}\}$, $F_i \cap F_j = \emptyset$ for $i \ne j$, and $\bigcup_{i=1,\ldots,f} F_i = \{A_1, \ldots, A_{r+t}\}$, where $i, j \in \{1, \ldots, f\}$. Considering the single fragments, groups can be formed and $k$-anonymity calculated. However, there are a vast number of possibilities for vertical fragmentation depending on the number of attributes. Therefore, a systematic vertical fragmentation that takes into account inter-attribute correlations and utility after anonymization has to be chosen. The approach in [] focuses on classification problems and attempts to maximize the amount of non-redundant information contained in single fragments while also striving for high utility of fragments to conduct the classification task. The authors propose the so-called Fragmentation Minimum Redundancy Maximum Relevance (FMRMR) metric to head toward beneficial fragmentation. In the following, let $F_j$, $j = 1, \ldots, |F|$, denote the indexed attributes of fragment $F$, and let $A_C$ be the class attribute in the database. The “supervised” FMRMR metric is calculated with the formula
$FMRMR(\mathcal{F}) := \sum_{F \in \mathcal{F}} \left( V_F - W_F \right),$
where
$V_F := \frac{1}{|F|} \sum_{j=1}^{|F|} I(A_C, F_j)$
is the total mutual information between the attributes and the class attribute $A_C$ in fragment $F$ of fragmentation $\mathcal{F}$, and
$W_F := \frac{1}{|F|^2} \sum_{k=1}^{|F|} \sum_{j=1}^{|F|} I(F_k, F_j)$
is the total pairwise mutual information between the attributes in fragment $F$ of fragmentation $\mathcal{F}$. The formula []
$I(A_k, A_j) := \sum_{a_k \in R(k)} \sum_{a_j \in R(j)} \Pr[X_k = a_k, X_j = a_j] \log_2 \frac{\Pr[X_k = a_k, X_j = a_j]}{\Pr[X_k = a_k] \Pr[X_j = a_j]}$
defines the mutual information between attributes $A_k$ and $A_j$, where $X_k, X_j$ are discrete random variables as defined in Section 5.1 and the joint probability distribution is defined as
$\Pr[X_k = a, X_j = b] := \frac{|\{1 \le i \le n : R_i(k) = a, R_i(j) = b\}|}{n},$
where $a \in R(k)$ and $b \in R(j)$ are values of the corresponding columns. Note that if $X_k$ and $X_j$ are independent random variables, we have $I(A_k, A_j) = 0$, and the columns are non-redundant.
With the FMRMR metric, the fragment utility for the classification task at hand is maximized (via the relevance term $V_F$) while the mutual information and redundancy of attributes inside the fragment are minimized (via the redundancy term $W_F$). Above, we described the procedure in the context of a supervised application. However, vertical fragmentation can also be used in the context of an unsupervised application by adding one or more common attributes to the single fragments to enforce correspondence between fragments. Therefore, when having an unsupervised task at hand, an “unsupervised” FMRMR metric might be defined by adapting the formula above:
$uFMRMR(\mathcal{F}_{ext}) := -\sum_{F_e \in \mathcal{F}_{ext}} W_{F_e},$
where $\mathcal{F}_{ext} := \{F_{e_1}, \ldots, F_{e_f}\}$ is obtained from fragmentation $\mathcal{F} = \{F_1, \ldots, F_f\}$ by adding one or more common attribute(s) $A \subset \{A_1, \ldots, A_{r+t}\}$ to each fragment: $\forall i = 1, \ldots, f: F_{e_i} := F_i \cup A$.
To sum up, the vertical fragmentation approach aims to alleviate the negative effects of the Curse of Dimensionality. By choosing suitable discrete or continuous probability distributions depending
on the given data, after possibly necessary preprocessing like discretizing values, the approach can be used in principle for both categorical and numerical data.
Figure 10
visualizes the mutual information of all attribute pairs of the Adult dataset in a symmetric matrix.
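A pairwise mutual information matrix such as the one visualized in Figure 10 can be estimated directly from the empirical joint distributions. The sketch below uses two hypothetical categorical columns; for the Adult dataset, the same function would be applied to each pair of (discretized) columns:

```python
# Illustrative sketch: empirical mutual information I(A_k, A_j) between two
# categorical columns, and a symmetric matrix over all column pairs.
import math
from collections import Counter

def mutual_information(col_k, col_j):
    n = len(col_k)
    pk, pj = Counter(col_k), Counter(col_j)
    pkj = Counter(zip(col_k, col_j))
    mi = 0.0
    for (a, b), c in pkj.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((pk[a] / n) * (pj[b] / n)))
    return mi

columns = {
    "education":  ["Bachelors", "HS-grad", "Bachelors", "HS-grad"],
    "occupation": ["Sales", "Craft-repair", "Tech-support", "Craft-repair"],
}
names = list(columns)
matrix = [[mutual_information(columns[a], columns[b]) for b in names] for a in names]
print(names, matrix)
```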
The Curse of Dimensionality also occurs in DP. For example, in [
], the authors state that Randomized Response suffers from the Curse of Dimensionality. There is a trade-off between applying Randomized Response to single attributes and applying Randomized Response
to a set of attributes simultaneously. Depending on the number of records, the latter might lead to poor utility of the estimated distribution of the original data, and applying Randomized Response
to single attributes implies a poor estimated joint distribution of the original data. The authors propose an algorithm to cluster attributes with high mutual dependencies and apply Randomized
Response to single clusters jointly. Their measure of dependency between two attributes $A_k, A_j$ is based on the absolute value of the Pearson Correlation $|Corr(A_k, A_j)|$ and Cramér’s V Statistic $V(A_k, A_j)$. In Randomized Response, $|Corr(A_k, A_j)|$ can be calculated given discretized numerical attributes $A_k, A_j$, and Cramér’s V Statistic $V(A_k, A_j)$ can be calculated given categorical attributes $A_k, A_j$ that have no ordering. In their experimental results, they empirically evaluate the phenomenon on the multivariate Adult dataset.
The Pearson Correlation of attributes $A_j, A_k$ is defined as
$|Corr(A_k, A_j)| := \frac{\left| \sum_{i=1}^{n} \big(A_j(i) - \bar{A_j}\big)\big(A_k(i) - \bar{A_k}\big) \right|}{\sqrt{\sum_{i=1}^{n} \big(A_j(i) - \bar{A_j}\big)^2}\, \sqrt{\sum_{i=1}^{n} \big(A_k(i) - \bar{A_k}\big)^2}},$
where $\bar{A_j}$ and $\bar{A_k}$ denote the mean values of attributes $A_j$ and $A_k$.
Let $r_j$ be the number of categories of attribute $A_j$ and $r_k$ be the number of categories of attribute $A_k$. In the scope of the following formula, let $\{1, \ldots, r_j\}$ be the set of categories of attribute $A_j$ and $\{1, \ldots, r_k\}$ be the set of categories of attribute $A_k$. Then, Cramér’s V Statistic of attributes $A_j, A_k$ is defined as
$V_{jk} = \sqrt{\frac{\chi_{jk}^2 / n}{\min(r_j - 1, r_k - 1)}},$
where
$\chi_{jk}^2 := \sum_{p=1}^{r_j} \sum_{q=1}^{r_k} \frac{\left(n \cdot \Pr[X_j = p, X_k = q] - n \cdot \Pr[X_j = p] \cdot \Pr[X_k = q]\right)^2}{n \cdot \Pr[X_j = p] \cdot \Pr[X_k = q]}$
is the chi-squared independence statistic.
Figure 11 shows an example where the absolute value of the Pearson Correlation and Cramér’s V Statistic are calculated for numerical and categorical attributes, respectively, in the Adult dataset.
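Both dependency measures can be computed directly from the raw columns. The sketch below uses hypothetical numerical and categorical columns and implements the formulas above:

```python
# Illustrative sketch: absolute Pearson correlation for numerical columns and
# Cramér's V for categorical columns, as used to cluster dependent attributes.
import math
from collections import Counter

def abs_pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return abs(cov / (sx * sy))

def cramers_v(x, y):
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    chi2 = 0.0
    for a in px:
        for b in py:
            expected = px[a] * py[b] / n          # n * Pr[X=a] * Pr[Y=b]
            observed = pxy.get((a, b), 0)         # n * Pr[X=a, Y=b]
            chi2 += (observed - expected) ** 2 / expected
    return math.sqrt((chi2 / n) / min(len(px) - 1, len(py) - 1))

print(abs_pearson([25, 38, 52, 61], [20, 35, 45, 40]))
print(cramers_v(["Bachelors", "HS-grad", "Bachelors", "HS-grad"],
                ["White-collar", "Blue-collar", "White-collar", "Blue-collar"]))
```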
6. Legal Perspective
Section 4 and Section 5
have presented technical procedures, and the consequences of the anonymization of tabular datasets have been worked out. To comply with the legal requirement for anonymization in the EU, especially
concerning the GDPR, the legal basis and prerequisites must be elaborated. Based on this, conclusions about the legally secure and robust anonymization of tabular data can be drawn. In general, the
legal literature on anonymization is not restricted to structured data.
However, the literature discussed in this review can be straightforwardly related to tabular data but not to unstructured data.
Firstly, we look at the legal aspects of data anonymization in general. The legal framework and requirements for handling anonymized data are analyzed. Subsequently, the problem of anonymizing
tabular data is addressed, and existing legislation, analyzed. Particular attention is paid to the GDPR, which must be interpreted as the legal basis for this problem. Furthermore, different
approaches to anonymizing data are considered. Especially, the absolute and relative theories of anonymization are discussed, and the different legal interpretations are highlighted. Lastly, an
evaluation of the privacy models is carried out with an individual evaluation of the k-anonymity, l-diversity, and t-closeness privacy models, which serve as common approaches to anonymizing tabular
data. Relevant factors such as the effectiveness and security of anonymization techniques are considered.
6.1. Synopsis of the Problem
When publishing data, the GDPR sets the framework and requirements for lawful publication. The aim of this law is to protect the individual’s right to informational self-determination, i.e., the
individual’s own influence on the dissemination and collection of personal data is to be preserved [].
The European GDPR refers in its scope exclusively to personal data. This means that all data that cannot be traced back to an identifiable person fall outside the scope of protection and are
generally available as Open Data. Despite the considerable importance of the distinction between personal reference and anonymity, the GDPR does not regulate this but merely presupposes the concept
of anonymity as a counterpart to personal data.
According to Art. 4 (1) GDPR “personal data means any information relating to an identified or identifiable natural person (’data subject’); an identifiable natural person is one who can be
identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier, or to one or more factors specific to
the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person”. In this context, the Article 29 Data Protection Working Party stated, in Opinion 4/2007
WP 136, that identification is normally achieved using particular pieces of information, which are called “identifiers” [
]. They are distinguished as “direct” and “indirect” identifiers.
Thereby—in the context of tabular data—in our terminology, a “direct” identifier refers to a Direct Identifier and an “indirect” identifier refers to an attribute that is part of a QI. A person may be
directly identified by name, whereas they may be identified indirectly by a telephone number, car registration, or by a combination of significant criteria, which allows them to be recognized by
narrowing down the group to which they belong (age, occupation, place of residence) [].
Particularly with regard to Indirect Identifiers, the issue arises when a reference to a person still exists. Some characteristics are so unique that someone can be identified with no effort
(“present Prime Minister of Spain”), but a combination of several different details may also be specific enough to narrow it down to one person, especially if someone has access to additional
information [
]. According to this, sufficient anonymization only exists if this personal reference is removed and is not traceable [].
Hereby, it should be pointed out that in [
], “[…] it is not necessary for the information to be considered as personal data that it is contained in a structured database or file”. However, the given examples mostly refer to structured data,
as they are given in tabular datasets.
Further, as Recital 26 to the GDPR states, “the principles of data protection should therefore not apply to anonymous information, namely, information, which does not relate to an identified or
identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable”.
Anonymization occurs when personal data are changed in such a way that the person behind them can no longer be identified by personal and factual circumstances [
]. This also applies to the remaining or otherwise related datasets in their entirety [
]. The complexity of anonymity, therefore, lies in the definition, which is difficult to delimit and determine, of which datasets have which attributes that are sufficiently related to a person. This
can only be performed with an intensive examination of the type and scope of the existing data and the data to be anonymized [].
To obtain meaningful Open Data, a careful and difficult balance between sufficient information and effective anonymization to protect data subjects is necessary. Basically, anonymization must be
distinguished from pseudonymization, which is essentially characterized by the fact that the data and persons can be identified again by using a code or key [
]. So far, pseudonymization has been considered insufficient and treated as personal data [
]. However, the European General Court (EGC) recently ruled that under certain circumstances, pseudonymous data may not fall under the scope of the GDPR if the data recipient lacks means for
re-identification. The critical factor is whether the recipient has access to the decryption key or can obtain it. If not, the data are not considered personal data and thus do not fall under the
GDPR [].
6.2. Recital 26
Recital 26 to the GDPR further demands “to ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs
of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments”.
In determining the relevant knowledge and means, Recital 26, therefore, requires a risk analysis to evaluate the likelihood of the risk of re-identification. In this analysis, an objective standard
must be used, and in principle, a purely abstract standard of measurement must be applied, not the subjective interests and motivation for the use of such data. Under certain circumstances, however,
these must also be included in the assessment criteria [].
The risk of re-identification must, therefore, be assessed on a case-by-case basis. However, the interpretation of these requirements and the extent to which the available knowledge and means of
third parties are to be taken into account are controversial. In this context, the spectrum of opinions is divided with regard to the requirements for the feasibility of establishing a connection to
a person. It is questionable whether it depends on the respective Data Controller (relative personal reference) or whether anybody can establish the personal reference (absolute personal reference) [].
6.3. Absolute Personal Reference/Zero-Risk Approach
The absolute approach shows two main considerations. On the one hand, it is about the group of people who must be considered potential de-anonymizers. The other is the re-identification risk that
still exists due to the means available to this group of people. According to the absolute personal reference approach, a person becomes identifiable if anybody at all can re-establish the personal
reference. All means available to this third party must be deliberated over. Hence, this approach can only be met if all anonymization is fully and completely irreversible and the capability of
de-anonymization is eliminated [
]. In this regard, it is sometimes demanded that the original and thus still personal data records are deleted after anonymization has been implemented [
]. This refers to Tuple Suppression, which is explained in
Section 4.3
. According to this approach, they are still personal data when a Data Controller does not delete the original data and hands over the anonymized dataset [
]. Accordingly, all possibilities for reversing the anonymization process must be taken into contemplation. This also includes illegal means of obtaining special knowledge as well as potential
infringements of professional confidentiality [].
To a greater extent, nevertheless, such a scale should not be required and is simply not feasible, according to the state of the art [
]. This also reflects both the telos of the law and the wording of Recital 26. Recital 26 states that “all the means reasonably likely to be used” should be deliberated. Hence, the GDPR does not
consider all and every possibility of de-anonymization. It more likely supports a risk-based approach to it, which must be evaluated on the basis of the circumstances of the individual case.
Furthermore, following this absolute approach would mean that most data must still be considered personal, making true anonymization practically impossible. The main issue lies in the fact that there
can never be complete certainty that no one else possesses additional knowledge or data that could potentially lead to re-identification [].
6.4. Relative Personal Reference/Risk-Based Approach
The relative approach also exhibits two considerations that run parallel to those of the absolute approach. First, the circle of persons who need to be focused on is tighter. Secondly, the relative
approach acknowledges a certain risk of de-identification [
]. Moreover, when dealing with Open Data, the choice between the relative and absolute approaches becomes largely inconsequential. The very nature of Open Data dictates that they should be accessible
to a broad and diverse audience, opening the data to virtually anybody interested in utilizing them. As a result, the practical reality of Open Data means that considerations must extend to any
potential data recipient, since they all have access to the shared data. Therefore, it is necessary to consider anybody as a potential de-anonymizer. The absolute and relative approaches thus lead to
the same result. However, a key distinction between the relative approach and the absolute one emerges concerning the treatment of re-identification risk. While the absolute approach aspires to
eliminate any possibility of re-identification, the relative approach recognizes that a certain level of re-identification risk may persist. The decisive factor is then the assessment of the risk and
the inclusion of risk factors.
6.5. Tightened Relative Personal Reference of the EU’s Court of Justice
The EU’s Court of Justice (ECJ) developed a conciliatory, relative approach to establishing the reference to persons in the context of a preliminary ruling in 2016. In this respect, the ECJ dealt
with the question of the extent to which the knowledge and means of third parties should be included in accordance with Recital 26, so we are referring to anonymized data. The decisive issue was
whether dynamic IP addresses constitute personal data. The crucial question was which conditions must be met for a Data Controller to “reasonably” have access to the data held by a third party [
]. As General Advocate Sanchez pointed out, Recital 26 does not refer to any means that may be used by anybody but constrains these means to “likely reasonably to use”.
Therefore, a risk-based approach is more in line with the wording. Third parties are persons to whom any person may reasonably turn to obtain additional data or knowledge for the purpose of
identification. After all, the General Advocate set forth that “otherwise […] it would be virtually impossible to discriminate between the various means, since it would always be possible to imagine
the hypothetical contingency of a third party who, no matter how inaccessible to the [data controller], could—now or in the future—have additional relevant data to assist in the identification of a
[person]” [].
This restriction of the absolute theory and tightening of the relative theory have been endorsed by the ECJ. In this respect, the absolute theory is limited to the extent that additional knowledge,
which can only be gained using illegal methods or is practically impossible on account of the fact that it requires a disproportionate effort in terms of time, cost, and manpower [
]. Thus, the risk of identification appears to be negligible [].
The relative approach, on the other hand, is tightened to the effect that they are still to be considered personal data if there are legal means that can be used to obtain additional knowledge from a
third party that can enable the identification of a person [
]. However, the extent to which such legal means are available and whether it is reasonable to expect them to be used remains an open question. This concretization work is, therefore, incumbent on
the national courts [].
6.6. Evaluation Standards for the Risk Assessment of the Techniques
The Art. 29 Data Protection Working Party sets out various criteria for assessing the risk of individuals being identifiable or determinable when personal data are anonymized. The individual risk
groups are merely a framework for evaluating the risk of identification. These principles should always be applied to the individual case and require a thorough evaluation. According to the idea of
the data protection authority, Data Controllers should submit a final risk evaluation to the relevant authority. This is recommended as a general concept that a Data Controller drafts for his
existing and expected datasets.
The first aspect of risks is singling out individuals from datasets [
]. The initial point is anonymized data records that have been generalized, for example. The aim of a legally secure anonymization process is to form these groups on such a scale that an individual
assignment of attributes to a single person is no longer possible [
]. This is to be achieved by ensuring that the combined group has several identical attributes. The danger of singling out, therefore, exists within small group formations as well as with extreme
attributes, since these are easier to assign. If persons in group formations still have unique characteristics of attributes, this favors classification. In order to prevent singling out, an
appropriately large number of similar attributes must be chosen based on the evaluation of the individual case and the dataset. In this evaluation process, special attention should be paid to
preserving the information content [
]. Consequently, if the $k$-groups become too large, the information value can be reduced or falsified. Therefore, the information content of the dataset should always be taken into account, as this can result in data
being rendered unrecognizable or falsified. In this way, the Data Controller can maintain the information content of other attributes and still guarantee anonymity.
The second risk factor relates to the linkability of data [
]. In relation to an anonymous dataset, this must be considered in combination with two individually anonymous datasets. If a Data Controller publishes several anonymized datasets, these must also
preserve anonymity in their entirety. If individual persons can be determined from the combination of these two datasets, because individual attributes can now be linked together, the data are still
to be considered personal [
]. In this respect, this approach has substantial uncertainty. It is questionable, and not yet clarified, which data are to be considered for this purpose. Certainly, the entirety of the publication
is to be taken into account, but it is debatable whether data already published by third parties are also to be included [
]. Or, what probably leads to the widest extension, whether third parties have data at their disposal with which a linkage leads to the identifiability of individuals. Again, the jurisprudence of the
ECJ can be used, that is, only additional knowledge that can be obtained by legal means is taken into consideration.
The last criterion set by the Art. 29 Data Protection Working Party is the so-called inference [
]. This is the most difficult requirement to circumvent. Basically, it means that conclusions can be drawn from datasets for the entirety of persons. In view of the challenges of anonymization, it
rather demands that no conclusions that could be used to infer an individual person can be drawn from the published dataset. Here, too, there is a lack of concreteness in differentiation from
singling out. However, reference attributes are probably more limited to the individual dataset from which assumptions could be drawn.
In the further outlook, each anonymization concept and method is, therefore, examined with regard to these three risk factors [
]. Other aspects may also be included as risks in the evaluation, so the standard for these three aspects from the perspective of the “motivated intruder” must always be set. This “motivated intruder
test” is intended to test the anonymization carried out for its stability and, as above, is based on the individual case. The motivation of the intruder is inevitably measured according to the value
and information content of the dataset.
6.7. Legal Evaluation
This subsection conducts a legal evaluation by embedding technical terms such as privacy models in a legal context.
6.7.1. Identifiers, Quasi-Identifiers, and Sensitive Attributes
In the process of anonymization using the individual models, the QIs are to be determined and evaluated. For example, these might include dates of specific events (death, birth, discharge from a
hospital, etc.), postal codes, sex, ethnicity, etc. [
]. One can orient oneself towards an assessment system that evaluates and assesses the attributes. This should essentially identify all SAs, also in the sense of the GDPR. For this purpose, all
variables are listed and evaluated within the framework of three case groups. The assessment ranges from low (1) to medium (2) to high (3). The first category for the individual variables is
“replication”, in which the information is assessed according to how consistently it appears in connection with a person. A low score is given to measured blood pressure, while a high score is given
to a person’s date of birth. The second group is concerned with the “availability” of the information. The decisive factor, here, is how available this information or variable is for third parties to
re-identify. As already shown above, the ECJ’s standard also affects this assessment as to how far-reaching additional knowledge is to be taken into account. Therefore, the laboratory values of a
person are difficult to obtain, whereas, as in the example of the “Breyer” case, the person behind an IP address can certainly be obtained by legal means if there is a legitimate interest. This
should also be considered for public registers, such as the land registry. The last category concerns “distinguishability”, according to which it is possible to assess how people can be distinguished
from each other by means of individual values. For example, a ZIP code with a complete reproduction is to be classified as higher than one with a shortened reproduction [].
6.7.2. k-Anonymity
The privacy model $k$-anonymity, which is defined in
Section 5.3.1
, ensures that given a QI, each record is indistinguishable from at least
$k − 1$
other records, making it more difficult for attackers to identify individuals by their attributes [
]. The degree of privacy protection depends on the quality and quantity of attributes in the dataset and the choice of $k$. The larger $k$, the larger its group, and the more securely an individual is protected from re-identification.
Singling out within a k-group is made more difficult by the fact that all individuals have the same QI and are indistinguishable based on them, such that individuals can hide behind the k-group.
However, Data Processors must also consider the risk of attribute disclosure, where an attacker can infer sensitive information about an individual even if they cannot directly re-identify them. This
may still be possible with linkability and inference. Linkability of records may still be possible, because the probability of $1 / k$ with small k is sufficient to make correlations about affected
individuals among records in a k-group.
Another deficit of the $k$-anonymity model is that attacks using inference techniques are not ruled out []. If all $k$ individuals belong to the same group and it is known to which group an individual belongs, it is very easy to determine the value of a property. Attackers are able to extract information from the dataset and make inferences about the affected individuals, whether it is included in the dataset or not.
Therefore, the question of whether this model alone ensures compliance with the anonymization requirement of the GDPR is largely answered in the negative. To achieve robust anonymization, additional models such as l-diversity or t-closeness can be used.
Nevertheless, the model is used in anonymization applications because it provides the basic structure for anonymization when values are not to be corrupted, as is the case with perturbation. The LEOSS cohort study [] uses an anonymization pipeline built on $k$ equal to 11 by applying the ARX tool []. Thus, they follow the recommendation of the Art. 29 Data Protection Working Party (WP216) [], which evaluates a $k$-value less than or equal to 10 as insufficient. The $k$-value depends, among other things, on the number of aggregated attributes [] used in a QI. In the NAPKON study, the qualitative analysis of the attributes included in the dataset was controlled for the risk of linkage or selection by reducing the uniqueness of the combinations of the variables age, sex, quarter and year of diagnosis, and cohort [].
6.7.3. l-Diversity
The privacy model $l$-diversity, which is defined in Section 5.3.2, was introduced as an extension of $k$-anonymity to compensate for one of its major shortcomings: the failure to account for the distribution of SAs within each group of $k$-indistinguishable individuals []. This deficiency can lead to the disclosure of SAs resulting from the merging into $k$-groups. The advancement aims to ensure that deterministic attacks using inference techniques are no longer possible by guaranteeing that the individual attributes in each equivalence class have at least $l$ different values, so that attackers always face significant uncertainty about a particular affected individual [].
Thus, the evaluation in [] shows two different shortcomings of $l$-diversity when the $l$ values for each SA are not well represented. A similarity attack can be performed when the SAs fulfill the criterion of $l$-diversity but are semantically similar. Despite meeting the requirement of $l$-diversity, it is possible to learn that someone has cancer when every attribute value is a specific form of cancer. An attack on skewness can be made when the overall distribution is skewed. Then, $l$-diversity cannot prevent attribute disclosure. This is the case when the distribution of attribute values in a dataset consists predominantly of one of two possible values and a $k$-group has the other value except for one entry. This allows assumptions to be derived about this group that an attacker can use.
Despite possible protection from inference techniques, linkability may still be possible even with diversification, because this risk remains in k-anonymity settings. Only the risk of singling out can be prevented when implementing l-diversity as an extension of k-anonymity; l-diversity only processes the SAs, which were initially unaffected. Unlike k-anonymity, there is no recommendation from WP216 for a threshold of l.
This privacy model is suitable for protecting data from attacks using inference techniques when the values are well distributed and represented. However, it should be noted that this technique cannot prevent information leakage if the attribute values within a group are inconsistently distributed, have low bandwidth, or are semantically similar. Eventually, the concept of $l$-diversity provides room for attacks using inference techniques [].
6.7.4. t-Closeness
The privacy model $t$-closeness, which is defined in Section 5.3.3, deals with a new measure of security and complements $l$-diversity []. It takes into account the unavoidable gain in knowledge of an attacker when considering all SA values in the entire dataset. $t$-Closeness represents a measure of minimal knowledge gain that results from considering a generalized $k$-group compared with the entire dataset. This also means that any group of individuals, indistinguishable on the basis of the QI, behind which a person is anonymized, can hardly be distinguished from any other group with respect to their SA values by the $t$-closeness-defined measure. Thus, a person’s data are better protected in their anonymizing group than was the case with $l$-diversity, since this group hardly reveals more information than the entire distribution.
In the specific case where the attribute values within a group are non-uniformly distributed, have a narrow range of values, or are semantically similar, an approach known as $t$-closeness is applied. This represents a further improvement in anonymization using generalization and consists of a procedure in which the data are partitioned into groups in such a way that the original distribution of the attribute values in the original dataset is reproduced as far as possible []. However, WP216 has not given any recommendation for the $t$-value, so it depends on case-by-case consideration. One approach would be to incrementally tighten the $t$-value if re-identification by an attacker with the current value is still possible. With $t$-closeness, a dataset processed with $k$-anonymity is improved regarding the risk of inference; this was implemented in the LEOSS cohort study [] with a specific value of $t$.
Nevertheless, data anonymized using k-anonymity and t-closeness are still vulnerable to inference techniques and have to be reviewed case by case. Whereas in k-anonymity and l-diversity, large values mean better privacy, in t-closeness, small values mean better privacy.
6.7.5. Differential Privacy
DP, which is defined in
Section 4.6
, applied as a randomized process, manipulates data in such a way that the direct link between data and the data subject can be removed [
]. There are several mechanisms that satisfy the defined anonymity criterion and are applicable to different types of data. The method ensures the protection of individual data by modifying the
results by adding random noise. This can limit a potential attacker’s ability to draw conclusions about the attribute value of a single data point, even if they know all the attribute values of the
other data points. By adding random noise, the influence of a single data point on the statistical result is hidden [
]. With regard to the risk criteria, it can be seen that singling out can be prevented under certain circumstances. Linking and inference can still be possible with multiple applications and are thus
dependent on the so-called privacy budget, which refers to parameter $\varepsilon$ (Section 4.6).
6.7.6. Synthetic Data
As explained in
Section 4.7
, synthetic approaches can be used as a workaround to anonymize tabular data. Artificially generated synthetic data retain the statistical characteristics of the original data. This process can
involve utilizing a machine learning model that comprehends the structure and statistical distribution of the original data to create synthetic data. Preserving the statistical properties of the
original data is vital, as it enables data analysts to derive significant insights from the synthetic data, treating them as if they were drawn directly from the original dataset. To introduce a
diverse range of data, the generation process may incorporate a certain level of unrelated randomness into synthetic data [].
Synthetic data can help to ensure that an individual’s records are not singled out or linked. However, if an adversary knows of the presence of a person in the original dataset, even if that person
cannot be individualized, sensitive inferences such as attribute disclosure may still be possible, as shown in [
]. Moreover, machine learning models can be exposed to privacy attacks by the so-called Membership Inference Attacks or Model Inversion Attacks [].
6.7.7. Risk Assessment Overview
Based on the findings in Section 6.7.2, Section 6.7.3, Section 6.7.4, Section 6.7.5, and Section 6.7.6, Table 3
gives an overview of risk assessments of the discussed privacy models and privacy-enhancing technologies for anonymizing tabular data. We only rate with respect to the attack scenarios that are
described by the Art. 29 Data Protection Working Party: singling out, linkability, and inference.
7. Discussion
In our exploration of anonymization methods and scores for tabular data, several open questions and issues remain.
Foremost is the uncertainty surrounding the choice of QIs and thresholds for privacy models. A fundamental challenge is the inability to make a priori assumptions about the knowledge an adversary
possesses regarding records in tabular data. Often, there is a vast array of potential QIs that could be exploited, which goes hand in hand with the lack of context understanding.
This issue is further complicated by the fact that the privacy models adopted only cover specific scenarios, leaving room for specific attack scenarios to succeed.
Further, to maximize privacy protection, we may compromise the data utility. A potential solution might be found in combining different anonymization methods, each addressing specific weaknesses. For
instance, use-case-specific DP can be applied to provide an additional layer of security. However, implementation details and the actual compatibility of methods are yet to be thoroughly studied. As
an example, the interaction between t-closeness and group formation has shown that the elimination of group records to achieve certain t-closeness, k-anonymity, and l-diversity can unintentionally
lead to higher t. This can potentially compromise the achieved anonymization.
Moreover, the structure and composition of the dataset themselves pose a challenge. Often, SAs are the target variables, thereby making their concealment problematic. Privacy models, such as $l$-diversity, depend on the number of attribute values for the SA, meaning that the effectiveness of the method varies based on the characteristics of the dataset. When it comes to anonymizing
high-dimensional tabular data, as described in
Section 5.5
, one also has to deal with the Curse of Dimensionality.
Anonymizing the Adult dataset into $k$-anonymity with $k > 10$ still yields comparable utility for different ML models, but this is data- and task-dependent, and DP might additionally be applied in model inference [].
As Wagner et al. [
] have recommended, a selection of multiple metrics to cover multiple aspects of privacy should be pursued. This approach allows for more robust privacy protection, minimizing the chances of
oversights and weaknesses.
The implementation of these privacy protection measures presents its own set of challenges. To begin with, different types of data, such as categorical and numerical, necessitate different
approaches. Some attributes might even possess dual characteristics, complicating the anonymization procedure. Different possible definitions and ways of implementing these methods add to the
complexity. Privacy models must also be adapted to data types, with a clear understanding of the differences between integers and floating-point numbers, or categorical versus numerical data types.
Additionally, applying these methods often involves a trial-and-error process. Multi-stage anonymization is a potential strategy that might yield better results, though the complexity and difficulty
of execution cannot be underestimated. For example, achieving certain $k$-anonymity using generalization and suppression with minimal loss of information [
] is an NP-hard problem. This implies that execution time could be exponential in the worst-case scenarios—a factor that needs to be tested and considered in the implementation phase.
Last but not least, the context of data—whether they are fixed or streaming—poses another challenge. Privacy protection measures for streaming or online data may require a different approach,
considering the time and space complexity involved.
Future research should focus on addressing these issues, providing a more comprehensive and effective solution to data anonymization of tabular data.
8. Conclusions
In conclusion, this article has examined the technical and legal considerations of data anonymization and explored different approaches to solving this problem.
From the legal perspective, based on our analysis and legal evaluation, the following conclusions can be drawn. The risk-based approach, in alignment with the ECJ case law in the “Breyer” case,
highlights the importance of considering legally obtainable additional knowledge when assessing the acceptable re-identification risk. This approach enhances the understanding of data anonymity by
taking into account relevant information that can potentially lead to re-identification. Due to the missing legal requirements for robust anonymization, a recommendation for $k$-anonymity with $k$ greater than 10 was made by the Article 29 Data Protection Working Party in WP216 []. Prior to implementing $k$-anonymity, it is crucial to identify the QIs using the evaluation table and the provided evaluation system. Furthermore, the opinion suggests the use of $t$-closeness. Similarly, there are no legal requirements at this point to ensure legally compliant anonymization. Only in [] was a specific $t$-value considered to be a high level of privacy protection. However, since the risk-based approach is based on individual-case assessment, it must be considered that these values should not be
considered universally applicable. The ongoing uncertainty makes anonymization still a challenging endeavor. In addition, it is important to note that for anonymized data, future consideration of the
EU Data Governance Act, particularly in relation to data rooms and the security of such data, becomes crucial. The Data Governance Act aims to establish a framework for secure and responsible data
sharing that ensures data protection and governance in data rooms.
Future research and advancements in the field should continue to explore the legal and technical aspects of data anonymization, taking into account evolving legislation, court rulings, and emerging
best practices. By staying abreast of these developments and adhering to appropriate standards, a data-driven environment that respects privacy, safeguards personal information, and promotes
responsible data sharing practices can be fostered.
Anonymization procedures can support the creation of Open Data. Similar to Open Source, Open Data represent an economically and socially relevant concept. For example, it is part of the digital strategy of the current federal government in Germany and of the Open Data strategy of the previous one. However, a challenge may be that under the current European regulations, in the near future, all
data might be classified as personal data as a result of moving forward into a data-driven world. In [
], this is named the Law of Everything. The reason for this is the widely defined rules on data protection and the definition of the terms “information” and “personal data” by the GDPR. This is
accelerated by the rapid advances in technology, which enable ever greater interpretability of data as well as the increased collection of information in real time. The Law of Everything is an
approach with a worthy goal but not one that can be implemented sustainably with current procedures.
Author Contributions
Conceptualization, R.A.; Methodology, R.A., J.F., J.G. and E.M.; Software, R.A.; Validation, R.A., J.F. and M.H.; Formal analysis, R.A.; Investigation, R.A. and J.F.; Resources, R.A., J.F., J.G. and
E.M.; Data curation, R.A.; Writing—original draft preparation, R.A., J.G. and E.M.; Writing—review and editing, R.A., J.F., M.H., B.B. and M.S.; Visualization, R.A. and J.F.; Supervision, M.H., B.B.
and M.S.; Project administration, M.H., B.B. and M.S.; Funding acquisition, M.H. and M.S. All authors have read and agreed to the published version of the manuscript.
The research project EAsyAnon (“Verbundprojekt: Empfehlungs- und Auditsystem zur Anonymisierung”, funding indicator: 16KISA128K) is funded by the European Union under the umbrella of the funding
guideline “Forschungsnetzwerk Anonymisierung für eine sichere Datennutzung” from the German Federal Ministry of Education and Research (BMBF).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The dataset Adult used in this study for experiments is openly available to download from the UCI Machine Learning Repository. Data can be found at https://archive-beta.ics.uci.edu/dataset/2/adult (accessed on 15 May 2023).
Conflicts of Interest
The authors declare no conflict of interest.
The following abbreviations are used in this manuscript:
DP Differential Privacy
DP-SGD Differentially Private Stochastic Gradient Descent
ECJ European Court of Justice
EGC European General Court
EU European Union
FMRMR Fragmentation Minimum Redundancy Maximum Relevance
GAN Generative Adversarial Network
GDPR General Data Protection Regulation
HIPAA Health Insurance Portability and Accountability Act
LDA Linear Discriminant Analysis
LSTM Long Short-Term Memory
MIMIC-III Medical Information Mart for Intensive Care
PCA Principal Component Analysis
PPDP Privacy-preserving data publishing
PPGIS Public Participation Geographic Information System
QI Quasi-Identifier
SA Sensitive Attribute
SVD Singular Value Decomposition
References
1. Weitzenboeck, E.M.; Lison, P.; Cyndecka, M.; Langford, M. The GDPR and unstructured data: Is anonymization possible? Int. Data Priv. Law 2022, 12, 184–206. [Google Scholar] [CrossRef]
2. Samarati, P.; Sweeney, L. Protecting privacy when disclosing information: K-anonymity and its enforcement through generalization and suppression. In Proceedings of the IEEE Symposium on Security
and Privacy, Oakland, CA, USA, 3–6 May 1998; pp. 1–19. [Google Scholar]
3. Sweeney, L. K-Anonymity: A Model for Protecting Privacy. Int. J. Uncertain. Fuzziness-Knowl.-Based Syst. 2002, 10, 557–570. [Google Scholar] [CrossRef]
4. Ford, E.; Tyler, R.; Johnston, N.; Spencer-Hughes, V.; Evans, G.; Elsom, J.; Madzvamuse, A.; Clay, J.; Gilchrist, K.; Rees-Roberts, M. Challenges Encountered and Lessons Learned when Using a
Novel Anonymised Linked Dataset of Health and Social Care Records for Public Health Intelligence: The Sussex Integrated Dataset. Information 2023, 14, 106. [Google Scholar] [CrossRef]
5. Becker, B.; Kohavi, R. Adult. UCI Machine Learning Repository. 1996. Available online: https://archive-beta.ics.uci.edu/dataset/2/adult (accessed on 15 May 2023).
6. Majeed, A.; Lee, S. Anonymization Techniques for Privacy Preserving Data Publishing: A Comprehensive Survey. IEEE Access 2021, 9, 8512–8545. [Google Scholar] [CrossRef]
7. Hasanzadeh, K.; Kajosaari, A.; Häggman, D.; Kyttä, M. A context sensitive approach to anonymizing public participation GIS data: From development to the assessment of anonymization effects on
data quality. Comput. Environ. Urban Syst. 2020, 83, 101513. [Google Scholar] [CrossRef]
8. Olatunji, I.E.; Rauch, J.; Katzensteiner, M.; Khosla, M. A review of anonymization for healthcare data. In Big Data; Mary Ann Liebert, Inc.: New Rochelle, NY, USA, 2022. [Google Scholar]
9. Prasser, F.; Kohlmayer, F. Putting statistical disclosure control into practice: The ARX data anonymization tool. In Medical Data Privacy Handbook; Springer: Cham, Switzerland, 2015; pp. 111–148.
[Google Scholar]
10. Jakob, C.E.M.; Kohlmayer, F.; Meurers, T.; Vehreschild, J.J.; Prasser, F. Design and evaluation of a data anonymization pipeline to promote Open Science on COVID-19. Sci. Data 2020, 7, 435. [
Google Scholar] [CrossRef]
11. Malin, B.; Loukides, G.; Benitez, K.; Clayton, E.W. Identifiability in biobanks: Models, measures, and mitigation strategies. Hum. Genet. 2011, 130, 383–392. [Google Scholar] [CrossRef]
12. Ram Mohan Rao, P.; Murali Krishna, S.; Siva Kumar, A. Privacy preservation techniques in big data analytics: A survey. J. Big Data 2018, 5, 33. [Google Scholar] [CrossRef]
13. Haber, A.C.; Sax, U.; Prasser, F.; the NFDI4Health Consortium. Open tools for quantitative anonymization of tabular phenotype data: Literature review. Briefings Bioinform. 2022, 23, bbac440. [
Google Scholar] [CrossRef]
14. Wagner, I.; Eckhoff, D. Technical Privacy Metrics. ACM Comput. Surv. 2018, 51, 1–38. [Google Scholar] [CrossRef]
15. Vokinger, K.; Stekhoven, D.; Krauthammer, M. Lost in Anonymization—A Data Anonymization Reference Classification Merging Legal and Technical Considerations. J. Law Med. Ethics 2020, 48, 228–231.
[Google Scholar] [CrossRef] [PubMed]
16. Zibuschka, J.; Kurowski, S.; Roßnagel, H.; Schunck, C.H.; Zimmermann, C. Anonymization Is Dead—Long Live Privacy. In Proceedings of the Open Identity Summit 2019, Garmisch-Partenkirchen, Germany,
28–29 March 2019; Roßnagel, H., Wagner, S., Hühnlein, D., Eds.; Gesellschaft für Informatik: Bonn, Germany, 2019; pp. 71–82. [Google Scholar]
17. Rights (OCR), Office for Civil. Methods for De-Identification of PHI. HHS.gov. 2012. Available online: https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/
index.html (accessed on 21 July 2023).
18. Gionis, A.; Tassa, T. k-Anonymization with Minimal Loss of Information. IEEE Trans. Knowl. Data Eng. 2009, 21, 206–219. [Google Scholar] [CrossRef]
19. Terrovitis, M.; Mamoulis, N.; Kalnis, P. Local and global recoding methods for anonymizing set-valued data. VLDB J. 2011, 20, 83–106. [Google Scholar] [CrossRef]
20. Agrawal, R.; Srikant, R. Privacy-Preserving Data Mining. In Proceedings of the SIGMOD ’00: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, TX, USA,
16–18 May 2000; Association for Computing Machinery: New York, NY, USA, 2000; pp. 439–450. [Google Scholar] [CrossRef]
21. Bayardo, R.; Agrawal, R. Data privacy through optimal k-anonymization. In Proceedings of the 21st International Conference on Data Engineering (ICDE’05), Tokyo, Japan, 5–8 April 2005; pp.
217–228. [Google Scholar] [CrossRef]
22. Dwork, C. Differential Privacy. In Automata, Languages and Programming, Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, Part II (ICALP 2006), Venice,
Italy, 10–14 July 2006; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4052, pp. 1–12. [Google Scholar]
23. Wang, T.; Zhang, X.; Feng, J.; Yang, X. A Comprehensive Survey on Local Differential Privacy toward Data Statistics and Analysis. Sensors 2020, 20, 7030. [Google Scholar] [CrossRef]
24. Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 2014, 9, 211–407. [Google Scholar] [CrossRef]
25. Wang, Y.; Wu, X.; Hu, D. Using Randomized Response for Differential Privacy Preserving Data Collection. In Proceedings of the EDBT/ICDT Workshops, Bordeaux, France, 15 March 2016. [Google Scholar]
26. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and
Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar] [CrossRef]
27. van der Maaten, L.; Hannun, A.Y. The Trade-Offs of Private Prediction. arXiv 2020, arXiv:2007.05089. [Google Scholar]
28. McKenna, R.; Miklau, G.; Sheldon, D. Winning the NIST Contest: A scalable and general approach to differentially private synthetic data. arXiv 2021, arXiv:2108.04978. [Google Scholar] [CrossRef]
29. Aggarwal, C.C.; Yu, P.S. A condensation approach to privacy preserving data mining. In Advances in Database Technology-EDBT 2004, Proceedings of the International Conference on Extending Database
Technology, Crete, Greece, 14–18 March 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 183–199. [Google Scholar]
30. Jiang, X.; Ji, Z.; Wang, S.; Mohammed, N.; Cheng, S.; Ohno-Machado, L. Differential-Private Data Publishing Through Component Analysis. Trans. Data Priv. 2013, 6, 19–34. [Google Scholar]
31. Xu, S.; Zhang, J.; Han, D.; Wang, J. Singular value decomposition based data distortion strategy for privacy protection. Knowl. Inf. Syst. 2006, 10, 383–397. [Google Scholar] [CrossRef]
32. Soria-Comas, J.; Domingo-Ferrer, J. Mitigating the Curse of Dimensionality in Data Anonymization. In Proceedings of the Modeling Decisions for Artificial Intelligence: 16th International
Conference, MDAI 2019, Milan, Italy, 4–6 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 346–355. [Google Scholar]
33. Xu, L.; Veeramachaneni, K. Synthesizing Tabular Data using Generative Adversarial Networks. arXiv 2018, arXiv:1811.11264. [Google Scholar]
34. Park, N.; Mohammadi, M.; Gorde, K.; Jajodia, S.; Park, H.; Kim, Y. Data Synthesis based on Generative Adversarial Networks. arXiv 2018, arXiv:1806.03384. [Google Scholar] [CrossRef]
35. Xu, L.; Skoularidou, M.; Cuesta-Infante, A.; Veeramachaneni, K. Modeling Tabular data using Conditional GAN. arXiv 2019, arXiv:1907.00503. [Google Scholar]
36. Xie, L.; Lin, K.; Wang, S.; Wang, F.; Zhou, J. Differentially Private Generative Adversarial Network. arXiv 2018, arXiv:1802.06739. [Google Scholar]
37. Kunar, A.; Birke, R.; Zhao, Z.; Chen, L. DTGAN: Differential Private Training for Tabular GANs. arXiv 2021, arXiv:2107.02521. [Google Scholar]
38. Zakerzadeh, H.; Aggrawal, C.C.; Barker, K. Towards Breaking the Curse of Dimensionality for High-Dimensional Privacy. In Proceedings of the 2014 SIAM International Conference on Data Mining,
Philadelphia, PA, USA, 24–26 April 2014. [Google Scholar]
39. Aggarwal, C.C. On K-Anonymity and the Curse of Dimensionality. In Proceedings of the VLDB ’05: 31st International Conference on Very Large Data Bases, Trondheim, Norway, 30 August–2 September
2005; pp. 901–909. [Google Scholar]
40. Salas, J.; Torra, V. A General Algorithm for k-anonymity on Dynamic Databases. In Proceedings of the DPM/CBT@ESORICS, Barcelona, Spain, 6–7 September 2018. [Google Scholar]
41. Xu, J.; Wang, W.; Pei, J.; Wang, X.; Shi, B.; Fu, A. Utility-based anonymization for privacy preservation with less information loss. SIGKDD Explor. 2006, 8, 21–30. [Google Scholar] [CrossRef]
42. LeFevre, K.; DeWitt, D.; Ramakrishnan, R. Mondrian Multidimensional K-Anonymity. In Proceedings of the 22nd International Conference on Data Engineering (ICDE’06), Atlanta, GA, USA, 3–8 April
2006; p. 25. [Google Scholar] [CrossRef]
43. Elabd, E.; Abd elkader, H.; Mubarak, A.A. L—Diversity-Based Semantic Anonymaztion for Data Publishing. Int. J. Inf. Technol. Comput. Sci. 2015, 7, 1–7. [Google Scholar] [CrossRef]
44. Wang, X.; Chou, J.K.; Chen, W.; Guan, H.; Chen, W.; Lao, T.; Ma, K.L. A Utility-Aware Visual Approach for Anonymizing Multi-Attribute Tabular Data. IEEE Trans. Vis. Comput. Graph. 2018, 24,
351–360. [Google Scholar] [CrossRef]
45. Machanavajjhala, A.; Gehrke, J.; Kifer, D.; Venkitasubramaniam, M. L-diversity: Privacy beyond k-anonymity. In Proceedings of the 22nd International Conference on Data Engineering (ICDE’06),
Atlanta, GA, USA, 3–8 April 2006; p. 24. [Google Scholar] [CrossRef]
46. Li, N.; Li, T.; Venkatasubramanian, S. t-Closeness: Privacy Beyond k-Anonymity and l-Diversity. In Proceedings of the 2007 IEEE 23rd International Conference on Data Engineering, Istanbul,
Turkey, 15 April 2006–20 April 2007; pp. 106–115. [Google Scholar] [CrossRef]
47. Vatsalan, D.; Rakotoarivelo, T.; Bhaskar, R.; Tyler, P.; Ladjal, D. Privacy risk quantification in education data using Markov model. Br. J. Educ. Technol. 2022, 53, 804–821. [Google Scholar]
48. Díaz, J.S.P.; García, Á.L. Comparison of machine learning models applied on anonymized data with different techniques. arXiv 2023, arXiv:2305.07415. [Google Scholar]
49. CSIRO. Metrics and Frameworks for Privacy Risk Assessments, CSIRO: Canberra, Australia, Adopted on 12 July 2021. 2021. Available online: https://www.csiro.au/en/research/technology-space/cyber/
Metrics-and-frameworks-for-privacy-risk-assessments (accessed on 4 June 2023).
50. Bellman, R. Dynamic Programming, 1st ed.; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
51. Ding, C.; Peng, H. Minimum redundancy feature selection from microarray gene expression data. In Proceedings of the 2003 IEEE Bioinformatics Conference. CSB2003, Stanford, CA, USA, 11–14 August
2003; pp. 523–528. [Google Scholar] [CrossRef]
52. Domingo-Ferrer, J.; Soria-Comas, J. Multi-Dimensional Randomized Response. arXiv 2020, arXiv:2010.10881. [Google Scholar]
53. Kühling, J.; Buchner, B. (Eds.) Datenschutz-Grundverordnung BDSG: Kommentar, 3rd ed.; C.H.Beck: Bayern, Germany, 2020. [Google Scholar]
54. Article 29 Data Protection Working Party. Opinion 4/2007 on the Concept of Personal Data, WP136, Adopted on 20 June 2007. 2007. Available online: https://ec.europa.eu/justice/article-29/
documentation/opinion-recommendation/files/2007/wp136en.pdf (accessed on 5 May 2023).
55. Auer-Reinsdorff, A.; Conrad, I. (Eds.) Früher unter dem Titel: Beck’sches Mandats-Handbuch IT-Recht. In Handbuch IT-und Datenschutzrecht, 2nd ed.; C.H.Beck: Bayern, Germany, 2016. [Google Scholar]
56. Paal, B.P.; Pauly, D.A.; Ernst, S. Datenschutz-Grundverordnung, Bundesdatenschutzgesetz; C.H.Beck: Bayern, Germany, 2021. [Google Scholar]
57. Specht, L.; Mantz, R. Handbuch europäisches und deutsches Datenschutzrecht. In Bereichsspezifischer Datenschutz in Privatwirtschaft und öffentlichem Sektor; C.H.Beck: München, Germany, 2019. [
Google Scholar]
58. Case T-557/20; Single Resolution Board v European Data Protection Supervisor. ECLI:EU:T:2023:219. Official Journal of the European Union: Brussel, Belgium, 2023. Available online: https://
eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62020TA0557 (accessed on 1 July 2023).
59. Groos, D.; van Veen, E.B. Anonymised data and the rule of law. Eur. Data Prot. L. Rev. 2020, 6, 498. [Google Scholar] [CrossRef]
60. Finck, M.; Pallas, F. They who must not be identified—distinguishing personal from non-personal data under the GDPR. Int. Data Priv. Law 2020, 10, 11–36. [Google Scholar] [CrossRef]
61. Article 29 Data Protection Working Party. Opinion 5/2014 on Anonymisation Techniques; WP216, Adopted on 10 April 2014; Directorate-General for Justice and Consumers: Brussel, Belgium, 2014;
Available online: https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf (accessed on 1 July 2023).
62. Bergt, M. Die Bestimmbarkeit als Grundproblem des Datenschutzrechts—Überblick über den Theorienstreit und Lösungsvorschlag. Z. Datenschutz 2015, 365, 345–396. [Google Scholar]
63. Burkert, C.; Federrath, H.; Marx, M.; Schwarz, M. Positionspapier zur Anonymisierung unter der DSGVO unter Besonderer Berücksichtigung der TK-Branche. Konsultationsverfahren des BfDI. 10 February
2020. Available online: https://www.bfdi.bund.de/SharedDocs/Downloads/DE/Konsultationsverfahren/1_Anonymisierung/Positionspapier-Anonymisierung.html (accessed on 11 May 2023).
64. Case C-582/14; Patrick Breyer v Bundesrepublik Deutschland. ECLI:EU:C:2016:779. Court of Justice of the European Union: Brussel, Belgium, 2016. Available online: https://eur-lex.europa.eu/
legal-content/EN/TXT/PDF/?uri=CELEX:62014CJ0582 (accessed on 1 July 2023).
65. Schwartmann, R.; Jaspers, A.; Lepperhoff, N.; Weiß, S.; Meier, M. Practice Guide to Anonymising Personal Data; Foundation for Data Protection, Leipzig 2022. Available online: https://
stiftungdatenschutz.org/fileadmin/Redaktion/Dokumente/Anonymisierung_personenbezogener_Daten/SDS_Practice_Guide_to_Anonymising-Web-EN.pdf (accessed on 10 June 2023).
66. Bischoff, C. Pseudonymisierung und Anonymisierung von personenbezogenen Forschungsdaten im Rahmen klinischer Prüfungen von Arzneimitteln (Teil I)-Gesetzliche Anforderungen. Pharma Recht 2020, 6,
309–388. [Google Scholar]
67. Simitis, S.; Hornung, G.; Spiecker gen. Döhmann, I. Datenschutzrecht: DSGVO mit BDSG; Nomos: Baden-Baden, Germany, 2019; Volume 1. [Google Scholar]
68. Csányi, G.M.; Nagy, D.; Vági, R.; Vadász, J.P.; Orosz, T. Challenges and Open Problems of Legal Document Anonymization. Symmetry 2021, 13, 1490. [Google Scholar] [CrossRef]
69. Koll, C.E.; Hopff, S.M.; Meurers, T.; Lee, C.H.; Kohls, M.; Stellbrink, C.; Thibeault, C.; Reinke, L.; Steinbrecher, S.; Schreiber, S.; et al. Statistical biases due to anonymization evaluated in
an open clinical dataset from COVID-19 patients. Sci. Data 2022, 9, 776. [Google Scholar] [CrossRef]
70. Dewes, A. Verfahren zur Anonymisierung und Pseudonymisierung von Daten. In Datenwirtschaft und Datentechnologie: Wie aus Daten Wert Entsteht; Springer: Berlin/Heidelberg, Germany, 2022; pp.
183–201. [Google Scholar] [CrossRef]
71. Giomi, M.; Boenisch, F.; Wehmeyer, C.; Tasnádi, B. A Unified Framework for Quantifying Privacy Risk in Synthetic Data. arXiv 2022, arXiv:2211.10459. [Google Scholar] [CrossRef]
72. López, C.A.F. On the legal nature of synthetic data. In Proceedings of the NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research, New Orleans, LA, USA, 2 December 2022. [Google Scholar]
73. Veale, M.; Binns, R.; Edwards, L. Algorithms that Remember: Model Inversion Attacks and Data Protection Law. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2018, 376, 20180083. [Google Scholar]
74. Purtova, N. The law of everything. Broad concept of personal data and future of EU data protection law. Law Innov. Technol. 2018, 10, 40–81. [Google Scholar] [CrossRef]
Figure 1. The considered data model. The first r attributes form a QI. All attributes indexed from 1 to $r + t$ are potentially SAs. The considered data model does not contain Direct Identifiers.
Figure 3. Example. Visualizing both generalization and discretization by projecting the first six records of Adult on the columns age and education. In the categorical attribute column education, the
attribute values “Bachelors” and “Masters” are summarized to a set with both values. In the numerical attribute column age, the values for age are discretized in intervals of size 10.
Figure 4. Example. Visualizing suppression of the numerical attribute column fnlwgt (final weight: number of units in the target population that the responding record represents) by replacing every
column value with the mean value of all column values. Visualizing suppression of the categorical attribute column marital-status by replacing the values with ∗, which denotes all possible values or
the empty set.
Figure 5. Example. Visualizing permutation of the column occupation in the cutout of the first six rows in the Adult dataset. The attached indices point out the change in order by applying
permutation. No attribute values are deleted, but the ordering inside the column is very likely destroyed.
Figure 6. Example. Entropy information loss when generalizing the column education of the cutout of the first six rows in the Adult dataset. In generalization (I), we obtain $\Pi_e(D, g(D)) \approx 3.25$, which means lower information loss than in generalization (II), where $\Pi_e(D, g(D)) \approx 5.25$.
Figure 7. Example. Numerical information loss when generalizing the column age of the cutout of the first six rows in the Adult dataset. In generalization (I), we obtain $\Pi(D, g(D)) \approx 0.36$ and $IL(D, g(D)) \approx 3.33$, which means higher information loss than in generalization (II), where $\Pi(D, g(D)) = 0.16$ and $IL(D, g(D)) \approx 1.17$. In this example, to apply $ID$, intervals are vectorized by calculating the mean of the minimum and maximum values.
Figure 8. Example. The first six rows of the Adult dataset, where the blue-background attributes education, education-num, capital-loss, native-country define a QI (artificially chosen as the QI for demonstration purposes only). Column sorting can be applied to fit the data scheme (Figure 1). The transformed six-row database fulfills $k$-anonymity with $k = 2$, whereas before discretization and generalization of the respective columns, the groups had a minimum group size of one. The background colors (orange and yellow) visualize group correspondence, where the attributes in the chosen QI are identical for every record in the group.
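As a practical illustration of this group-size condition, the following minimal sketch (not part of the original paper; the file name and column labels are assumptions about a locally available Adult CSV) counts the smallest group of records sharing identical values on the chosen QI, which is the quantity that must reach k for k-anonymity to hold.

```python
# Minimal sketch: smallest group size over a chosen QI, checked with pandas.
# Assumes a local "adult.csv" with the usual Adult column names (illustrative).
import pandas as pd

df = pd.read_csv("adult.csv")

# Quasi-identifier used for demonstration, as in the figure caption.
qi = ["education", "education-num", "capital-loss", "native-country"]

# Size of each group of records that share identical QI values.
group_sizes = df.groupby(qi).size()

k = group_sizes.min()
print(f"Smallest QI group size: {k}")  # k-anonymity holds for this k
```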
Figure 9. Example. Projecting the first six rows of the Adult set on the attributes education, sex, hours-per-week. The $PR$ score assumes that attribute values are known and subsequently calculates the risk of re-identifying a single record (in the case of unit record data). Having knowledge about different values of the attribute education (yellow or orange, respectively) leads to different privacy probabilities of re-identifying a record (record $R_1$ or $R_3$, respectively).
Figure 10. Example. Considering the Adult dataset as an example, this dataset can be used for the supervised training of a machine learning algorithm to classify persons having income ≤USD 50 K. The categorical attributes education and education-num contain high mutual information ($I(A_{education}, A_{education-num}) \approx 2.93$) and might be part of different fragments, whereas the categorical attributes race and sex do not contain high mutual information ($I(A_{race}, A_{sex}) \approx 0.01$) and can be part of the same fragment in vertical fragmentation. The calculated mutual information values are based on the training dataset (without the test data) of the Adult dataset. The matrix is symmetric because the mutual information function is symmetric. The values are rounded to two decimal places.
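For readers who want to reproduce such a value, the sketch below estimates the mutual information of two categorical columns from their empirical joint distribution. It is an illustration rather than the authors' code; the file name is assumed, and the logarithm base (base 2 here) is an assumption, so the result may not match 2.93 exactly.

```python
# Minimal sketch: empirical mutual information between two categorical columns.
# Not the authors' implementation; log base 2 and the file name are assumptions.
import numpy as np
import pandas as pd

def mutual_information(a: pd.Series, b: pd.Series) -> float:
    joint = pd.crosstab(a, b, normalize=True).to_numpy()  # joint probabilities
    pa = joint.sum(axis=1, keepdims=True)                 # marginal of a
    pb = joint.sum(axis=0, keepdims=True)                 # marginal of b
    nz = joint > 0                                        # avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

df = pd.read_csv("adult.csv")  # illustrative file name
print(mutual_information(df["education"], df["education-num"]))
```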
Figure 11. Example. Absolute values of Pearson Correlation coefficients and Cramér's V Statistic coefficients in the Adult dataset. Both matrices are symmetric. The values are rounded to two decimal places.
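Both coefficient types shown in this figure can be approximated with standard tools. The sketch below is illustrative rather than the paper's implementation, with assumed column names: it computes an absolute Pearson correlation for a numeric pair and Cramér's V for a categorical pair via the chi-squared statistic.

```python
# Minimal sketch: |Pearson correlation| for numeric columns and Cramér's V
# for categorical columns. Illustrative only; column names assume the Adult dataset.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    table = pd.crosstab(a, b)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

df = pd.read_csv("adult.csv")
print(abs(df["age"].corr(df["hours-per-week"])))     # Pearson, numeric pair
print(cramers_v(df["education"], df["occupation"]))  # Cramér's V, categorical pair
```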
No. Direct Identifier No. Direct Identifier
1 Names 10 Social security numbers
2 All geographic subdivisions smaller than a state 11 IP addresses
3 All elements of dates (except year) directly related to an individual 12 Medical record numbers
4 Telephone numbers 13 Biometric identifiers, including finger and voice prints
5 Vehicle identifiers and serial numbers 14 Health plan beneficiary numbers
6 Fax numbers 15 Full-face photographs and any comparable images
7 Device identifiers and serial numbers 16 Account numbers
8 Email addresses 17 Any other unique identifier
9 URLs 18 Certificate/license numbers
Table 2. Overview of information losses, utility losses/measurements, and privacy models when applying anonymization methods to tabular data.
Measurement / Method
Information loss: Conditional entropy [18]; Monotone entropy [18]; Non-uniform entropy [18]; Information loss on a per-attribute basis [38]; Relative condensation loss [39]; Euclidean distance [40]
Utility loss: Average group size [41]; Normalized average equivalence class size metric [42]; Discernibility metric [21,42,43]; Proportion of suppressed records; ML utility; Earth Mover Distance [44]; z-Test statistics [7]
Privacy models: k-Anonymity [3]; Mondrian multi-dimensional k-anonymity [42]; l-Diversity [45]; t-Closeness [46]; Privacy probability of non-re-identification [47]
Table 3. Risk assessment for anonymization methods of tabular data. (1): Risk depends on chosen k. (2): It does not take into account similarity attacks. (3): Based on k-anonymity. (4): Risk depends on value distribution of Sensitive Attributes. (5): Risk depends on privacy budget. (6): Might be combined with DP. +: The method can be considered a strategy to defend against the attack scenario. −: The method cannot solely be considered a defense strategy against the attack scenario.
Singling Out / Linkability / Inference
k-Anonymity: + / −(1) / −(2)
l-Diversity: +(3) / −(1,3) / +(2,4)
t-Closeness: +(3) / −(1,3) / +(2,4)
DP: + / +(5) / +(5)
Synthetic data: + / + / −(6)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Aufschläger, R.; Folz, J.; März, E.; Guggumos, J.; Heigl, M.; Buchner, B.; Schramm, M. Anonymization Procedures for Tabular Data: An Explanatory Technical and Legal Synthesis. Information 2023, 14,
487. https://doi.org/10.3390/info14090487
AMA Style
Aufschläger R, Folz J, März E, Guggumos J, Heigl M, Buchner B, Schramm M. Anonymization Procedures for Tabular Data: An Explanatory Technical and Legal Synthesis. Information. 2023; 14(9):487. https:
Chicago/Turabian Style
Aufschläger, Robert, Jakob Folz, Elena März, Johann Guggumos, Michael Heigl, Benedikt Buchner, and Martin Schramm. 2023. "Anonymization Procedures for Tabular Data: An Explanatory Technical and Legal
Synthesis" Information 14, no. 9: 487. https://doi.org/10.3390/info14090487
Article Metrics | {"url":"https://www.mdpi.com/2078-2489/14/9/487","timestamp":"2024-11-14T22:28:50Z","content_type":"text/html","content_length":"714452","record_id":"<urn:uuid:b15b2154-cbd4-4a77-b51a-084f1fa9adff>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00248.warc.gz"} |
How Many Decimeters Is 2296 Inches?
2296 inches in decimeters
How many decimeters in 2296 inches?
2296 inches equals 583.184 decimeters
Unit Converter
Conversion formula
The conversion factor from inches to decimeters is 0.254, which means that 1 inch is equal to 0.254 decimeters:
1 in = 0.254 dm
To convert 2296 inches into decimeters we have to multiply 2296 by the conversion factor in order to get the length amount from inches to decimeters. We can also form a simple proportion to calculate
the result:
1 in → 0.254 dm
2296 in → L(dm)
Solve the above proportion to obtain the length L in decimeters:
L(dm) = 2296 in × 0.254 dm/in
L(dm) = 583.184 dm
The final result is:
2296 in → 583.184 dm
We conclude that 2296 inches is equivalent to 583.184 decimeters:
2296 inches = 583.184 decimeters
Alternative conversion
We can also convert by utilizing the inverse value of the conversion factor. In this case 1 decimeter is equal to 0.0017147246838048 × 2296 inches.
Another way is saying that 2296 inches is equal to 1 ÷ 0.0017147246838048 decimeters.
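If you prefer to script the conversion, a small example using the same 0.254 factor is shown below (illustrative code, not part of this converter).

```python
# Inches <-> decimeters using the 0.254 dm-per-inch factor from above.
DM_PER_INCH = 0.254

def inches_to_decimeters(inches: float) -> float:
    return inches * DM_PER_INCH

def decimeters_to_inches(dm: float) -> float:
    return dm / DM_PER_INCH

print(inches_to_decimeters(2296))      # 583.184
print(1 / inches_to_decimeters(2296))  # ~0.0017147, the inverse factor used above
```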
Approximate result
For practical purposes we can round our final result to an approximate numerical value. We can say that two thousand two hundred ninety-six inches is approximately five hundred eighty-three point one
eight four decimeters:
2296 in ≅ 583.184 dm
An alternative is also that one decimeter is approximately zero point zero zero two times two thousand two hundred ninety-six inches.
Conversion table
inches to decimeters chart
For quick reference purposes, below is the conversion table you can use to convert from inches to decimeters | {"url":"https://convertoctopus.com/2296-inches-to-decimeters","timestamp":"2024-11-05T00:27:20Z","content_type":"text/html","content_length":"33352","record_id":"<urn:uuid:4bf5b2f5-7cd5-4736-b129-5ccc5b4b466e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00070.warc.gz"} |
How to Calculate the Age of Someone Born in 2002 - Cuantos Anos Tienes
How to Calculate the Age of Someone Born in 2002
Calculating someone’s age should be a straightforward task, yet it often trips up those unaccustomed to simple arithmetic or those who overthink the process. In this opinion piece, I want to
demystify the calculation and stress the importance of keeping it simple.
The Basic Calculation
First and foremost, understanding the basic calculation is essential. To determine the age of someone born in 2002, you subtract the birth year from the current year.
For instance, in 2024, the calculation would be:
2024 − 2002 = 22
Thus, someone born in 2002 would turn 22 years old in 2024.
Birthdays and Partial Years
Things get slightly more complex when considering whether the person has had their birthday yet in the current year. If today’s date is before the person’s birthday:
• They are still one year younger than the basic calculation suggests.
• For example, if their birthday is in September and today is April 2024, they would still be 21 years old until their birthday occurs.
Conversely, if today’s date is on or after their birthday, the basic calculation holds true.
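If you would rather let a computer apply this rule, here is a small example (the dates are illustrative) that subtracts the birth year and then subtracts one more year when the birthday has not yet occurred.

```python
# Age = current year - birth year, minus 1 if the birthday hasn't happened yet.
from datetime import date

def age(birthday: date, today: date) -> int:
    years = today.year - birthday.year
    had_birthday = (today.month, today.day) >= (birthday.month, birthday.day)
    return years - (0 if had_birthday else 1)

print(age(date(2002, 9, 15), date(2024, 4, 1)))   # 21 (birthday not reached yet)
print(age(date(2002, 9, 15), date(2024, 10, 1)))  # 22
```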
Why It Matters
Accurately calculating age is important for various reasons:
• Legal Implications: Age determines voting rights, drinking age, and eligibility for various social services.
• Professional Context: Age can affect job eligibility, retirement benefits, and insurance policies.
• Personal Milestones: Knowing someone’s exact age helps in planning celebrations and acknowledging life milestones.
Simplicity Over Complexity
Overcomplicating the process can lead to errors and misunderstandings. It’s easy to get caught up in unnecessary steps or advanced mathematical techniques when the simplest method is often the most
Technology vs. Manual Calculation
With the advent of technology, many rely on apps and online calculators to determine age. While convenient, these tools can fail, leading to incorrect outputs if data is entered incorrectly.
Therefore, understanding the manual calculation process remains valuable.
In conclusion, calculating the age of someone born in 2002—or any year—is straightforward when approached correctly. Subtract the birth year from the current year, and adjust for whether they’ve had
their birthday yet. This simple method ensures accuracy and helps avoid the pitfalls of overcomplication.
So, next time you need to figure out someone’s age, remember to keep it simple, and you’ll never go wrong. | {"url":"https://cuantosanostienes.com/how-to-calculate-the-age-of-someone-born-in-2002/","timestamp":"2024-11-12T03:38:29Z","content_type":"text/html","content_length":"107228","record_id":"<urn:uuid:7cbda617-e98f-41f7-8f50-78cf7002eca3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00086.warc.gz"} |
The Phase Vocoder Transform - Christian Yost
The Phase Vocoder Transform
Christian Yost●February 12, 2019
1 Introduction
I would like to look at the phase vocoder in a fairly ``abstract'' way today. The purpose of this is to discuss a method for measuring the quality of various phase vocoder algorithms, and building
off a proposed measure used in [2]. There will be a bit of time spent in the domain of continuous mathematics, thus defining a phase vocoder function or map rather than an algorithm. We will be using
geometric visualizations when possible while pointing out certain group theory similarities for those interested. To start we will make a claim, set forth properties, explore these properties, and
amend our original claim based on this exploration. After going through all of this, we should have some new ideas about the phase vocoder and feel sufficiently confident in the given measurement for
phase vocoder quality as being a good one.
2 Theory
Let's start by laying out some notation.
2.1 Notation
$\alpha = $ time modification factor
$\beta = $ frequency modification factor
$x(t) = $ analysis/input time domain signal
$y(t) = $ synthesis/output time domain signal
$X(\omega) = $ analysis/input frequency domain signal
$Y(\omega) = $ synthesis/output frequency domain signal
When the discussion is more theoretical, we will refer to $x(t)$ as simply $x$, per [1]. We will see why later on.
2.2 Definitions
The phase vocoder map, $PV(x,\alpha,\beta)$ is defined in the following equation
$$PV\big(x,\alpha,\beta\big) = \int_{-\infty}^{\infty}\big|X\big(\frac{\omega}{\beta},\alpha t\big)\big|\cdot e^{i\phi_{pv}(\omega,t)}d\omega = y(t)$$
where the phase vocoder phase function $\phi_{pv}(t)$ is
$$\phi_{pv}(\omega, t) = \angle X(\frac{\omega}{\beta},0) + \int_{0}^{t} \frac{d}{dt}\big[\phi(\omega,t)\big] dt = \angle X(\frac{\omega}{\beta},0) + \phi(\omega,t)$$
In equation (3), initially integrating the derivative itself may seem a bit strange, but here we are simply trying to give a continuous representation to the discrete algorithm we already know and
love. The important bit in equation (3) is that the phase offset is set to the initial phase of the properly scaled frequency information of the input signal.
3 The Weeds
The phase vocoder acts like a map between two sets, the elements of which are signals. These signals have finite energy and can be understood as vectors in a Hilbert Space $\mathbb{H}$, known as a
signal space, as elaborated on in [1]. When referring to the vector as a whole, $x(t)$ will be referred to as simply $x$. The domain is the singleton set of the input signal $x(t)$ itself: $\{ x \}$.
The co-domain, or range, is the set of all signals $y(t)$ such that $y = PV(x,\alpha,\beta)$ and $\alpha, \beta \in \mathbb{R} - \{0\}$. In total,
$$ PV: \mathcal{X} \mapsto \mathcal{Y}$$
$$ \text{such that } \mathcal{X} = \{(x,\alpha,\beta) | x \in \mathbb{H} \ \text{and } (\alpha,\beta) \in \mathbb{R}^{2} \} $$
$$\text{and } y \in \mathbb{H} $$
We see here that the input signal $x$ acts like a generator of the co-domain $\mathcal{Y}$. When $\alpha = 1$ and $\beta = 1$, $PV$ acts like the identity map since $PV(x,1,1) = x$.
Let's look at the characteristics of the input signal which we are modifying (duration and pitch) and see how information in the input signal is related to information in the output. First, let's
consider the time characteristics of the signal. Let the duration of $x(t)$ be $N$. Since $x(t)$ is a signal of finite energy, we will assume that $N$ is finite (although this is not necessarily the
case). We can pretty easily intuit the effect of time modification $\alpha$: the amplitude envelope of the entire signal is modified such that the relationship of any portion of the amplitude
envelope in the original signal is preserved in the output signal. For example, if there is an amplitude swell that lasts a quarter of the length of the input signal, if we want to stretch that
signal by a factor of $\alpha = 2$, that same swell would still occupy a quarter of the output signal. Consequently the duration of our output signal is defined as $\frac{N}{\alpha}$. We see the
length of the input signal and output signal are linearly related.
Furthermore, consider the frequency characteristics of the output signal. Once again let's start with a geometric visualization: the short term spectrum for any part of the original signal. If at
some point in time we observe a frequency $f_{0}$ and its first harmonic $2f_{0}$, then for any frequency modification $\beta$ we still want it to be the case that when $f_{0}$ is mapped to the
output $f_{0}^{*}$, that $2f_{0}$ is mapped to $2f_{0}^{*}$. Thus, for a frequency modification factor of $\beta$, we want the frequency domain characteristics of $Y(\omega)$ to be that of $X(\frac{\
omega}{\beta})$. Once again the frequency information of the input signal and output signal are linearly related.
We have defined the possible values for $\alpha$ and $\beta$ pretty broadly at the outset of this section - all real numbers. Let's think about what that means, look at curious cases, and decide how
this defines the type of map the phase vocoder is in the following sections.
3.1 Negative Time Modification Factors
Perhaps what we have said so far seems reasonable for positive modification factors, but the reader might have raised an eyebrow or two when thinking about negative ones. Here let's dig into these
negative modification factors and see how it affects our notion of a phase vocoder map.
Let's start with time since that is an easier parameter to modify ``negatively''. A time modification factor of $\alpha = -1$ results in the original signal but ``reversed''. In other words all the original time evolution characteristics, such as amplitude envelopes and event timing, are backwards. Furthermore, if we break it down to a Fourier view, this corresponds to conjugation of the original spectrum. The latter observation is demonstrated in Figure 1.
From here we can simply decompose any time modification factor $- \alpha$ as $-1 \cdot \alpha$ and approach idea conceptually by starting with a time modification of $\alpha$ (which we presumably
already understand) and then account for the $-1$ by multiplying the imaginary component by $-1$. Given these observations, negative time modifications seem fairly approachable, and well within the
realm of what we can physically intuit.
3.2 Negative Frequency Modification Factors
When we move to thinking about negative frequency modifications, it may at first seem a bit more difficult to grasp. While the geometric representation we had in the case of negative time
modification was enough in the previous section - amplitude envelopes across time - here our geometric representation - amplitude envelopes across frequency - is initially not as robust. Instead, the
answer lies in how the Fourier Transform itself operates to generate real signals from complex ones. However, by employing similar tricks we will see that it is well within the realm of what we can intuit.
By starting with what we know, let our frequency modification factor be $\beta = 2$. Given the following spectrum on the left, we want to produce the subsequent frequency domain data on the right, as
shown in Figure 2. This is again defined in our expression $Y(\omega) = X(\frac{\omega}{\beta})$.
Here we see that the information is stretched by a factor of 2 - simple enough. However, how do we extend this to include a negative frequency modification, or in other words, a negative stretch? To
start, let's note that Figure 2 isn't giving us the ``full'' spectrum, only the first half. Let's look at all the frequency domain data the Fourier Transform gives us in Figure 3.
And in fact, another common, and perhaps more useful visualization for our purposes, is shown in Figure 4
In Figure 4 the notion of negative frequencies is better represented, and perhaps we start to see how this will relate to the topic at hand. Let's start with the simple observation of what negative
frequencies even are: complex conjugates of the ``first half'' of the spectrum. For a fourier coefficient $X(\omega) = |X|e^{i\theta}$, its complex conjugate is expressed as $\overline{X(\omega)} = |
X|e^{-i\theta}$. Here we will note that if we take a 3D complex view of these frequencies, the conjugates of every positive frequency rotate in the opposite direction of its positive counterpart.
This is a consequence of negating the imaginary component. Now let's consider the fourier coefficients which the phase vocoder maps them to given a frequency modification of $\beta > 0$. Let
$$ x(t) = \int_{-\infty}^{\infty}|X(\omega)|e^{-i\theta}d\omega $$
such that
$$ |X(\omega)| = 0 \ \forall \ \omega \notin \{k,-k\} \ \text{where } |X(k)| = |X(-k)| \neq 0$$
then it follows that
$$ PV(x(t),1,\beta) = y(t)$$
$$|Y(\omega)| = 0 \ \forall \omega \notin \{\beta\cdot k,\beta\cdot-k\} \ \text{such that } |Y(\beta \cdot k)| = |Y(\beta \cdot -k)| = |X(k)|$$
Keeping all this in mind, now consider a frequency modification of $-\beta$. Once again let's take the approach of viewing this modification as having two parts: a positive frequency shift, and then
reversing it via the negative part (-1). Using the equation (5), notice the frequencies present in our output signal:
$$\{- \beta \cdot k,-\beta \cdot -k\} = \{\beta \cdot -k,\beta \cdot k\}$$
These are the same frequencies for a frequency domain modification of $\beta$! This is due to the symmetry of the the spectrum around $\omega = 0$, since we need the negative frequencies to produces
a real signal from complex data. Therefore we see that
$$ PV(x(t),\alpha,\beta) = PV(x(t),\alpha,-\beta)$$
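A quick numerical sanity check of the symmetry this argument rests on (a minimal NumPy sketch, independent of any particular phase vocoder implementation): for a real signal the spectrum is conjugate-symmetric, so the bins at $\omega$ and $-\omega$ always carry the same magnitude, which is why scaling the frequency axis by $\beta$ or by $-\beta$ lands energy on the same set of bins.

```python
# Minimal sketch: conjugate symmetry of a real signal's spectrum, which is why
# frequency scalings of beta and -beta populate the same set of bins.
import numpy as np

n = np.arange(1024)
x = np.cos(2 * np.pi * 0.07 * n) + 0.5 * np.sin(2 * np.pi * 0.19 * n)  # real signal

X = np.fft.fft(x)

# X[k] equals the complex conjugate of X[N-k], so |X| is symmetric about omega = 0.
print(np.allclose(X[1:], np.conj(X[1:][::-1])))          # True
print(np.allclose(np.abs(X[1:]), np.abs(X[1:][::-1])))   # True: same magnitudes
```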
3.3 Curious Observation
It might have come across your mind what a frequency or time modification of zero represents in a phase vocoder context. Let's consider this case starting off with time. As we stated earlier, the
total time of the input signal and the output are linearly related. So for a time modification factor of $\alpha = 0$, the phase vocoder compresses all of the data in the original signal down to a
single point! Interestingly, it seems that $PV(x,0,\beta)$ maps $x$ to an impulse signal. Thus we will say that
$$PV(x,0,\beta) = \delta_{x}(n)$$
In some sense we see that this is some link to impulse decomposition [3]. However this is more of a tangent than relevant information.
Furthermore, what does a frequency modification factor of $\beta = 0$ represent? Think back to our geometric conceptualization of frequency modification. We saw that $\beta$ translates the frequency
information at a given point to another point in the spectrum. So where is information translated for $\beta = 0$? To frequency `0' or DC! So for $|x| = N$, $PV(x,\alpha, 0)$ generates a signal of
length $\alpha N$ that is equal to $DC$ at every point. Thus we will say that
$$PV(x,\alpha,0) = DC_{x}$$
Finally, if we think about what both of these mean together, we see that $PV(x,0,0)$ gives us a scalar that is equal to the average energy, or DC offset, $DC$, of the input signal $x$.
3.4 Putting it all together
So what kind of map is the phase vocoder? Or perhaps more importantly: what kind of map do we want the phase vocoder to be? We probably want the PV to be the best kind of map there is: bijective! However, given some of the properties we laid out in section 3.2 and 3.3, it seems that our original definition of $(\alpha,\beta) \in \mathbb{R}^{2}$ will not produce a function that is one-to-one and onto. Let's look at how we can tweak this to result in a map we are happy with.
We saw in equation (6) that a frequency modification of $\beta$ and $-\beta$ produce an output with the same frequency information. Therefore by defining $\beta \in \mathbb{R}$ the frequency
information is not injective, and consequently neither is the PV map as a whole. Furthermore, from section 3.3 we see that for $\beta = 0$, $PV$ is not injective, since multiple input signals $x$ can
have the same DC offset. For example, let $x_{1} = sin(t)$ and $x_{2} = cos(t)$. We see that it is the case that
$$PV(x_{1},\alpha,0) = PV(x_{2},\alpha,0)$$
Moreover, we also see that $PV(x,\alpha,0)$ is the same for all $\alpha$ - as laid out in the beginning of this article. Thus $\beta = 0$ fails a second time in terms of injectivity and therefore
must be excluded from the domain. So in order to have bijective frequency information, we must define $\beta \in \mathbb{R}_{> 0}$.
In terms of time modification, we see that negative values for $\alpha$ actually produce unique output signals. Therefore, unlike is the case for $\beta$, we can include them in the domain for a
bijective map. However, once again, as pointed out in 3.3, the result of $PV(x,0,\beta)$ completely destroys the original time information and it cannot be recovered, thus violating injectivity. So
if we want to define a bijective phase vocoder map, $\alpha = 0$ must be excluded from the domain. In other words $\alpha \in \mathbb{R} - \{0\}$.
Given these restrictions on $\alpha$ and $\beta$, we see that an output signal of a certain duration and frequency relationship to the input is only possible with a unique $(\alpha_{0},\beta_{0}) \in
\mathbb{R}-\{0\} \times \mathbb{R}_{>0}$. Because of this we will say that the phase vocoder map is one-to-one and onto, in other words a bijection. The reason why defining a
bijective phase vocoder map is desirable in the first place is that there must exist an inverse mapping as a consequence of this bijectivity. We will define the inverse phase vocoder map in equation
$$PV^{-1}(x,\alpha,\beta) = PV(y,\frac{1}{\alpha},\frac{1}{\beta})$$
Because $\alpha, \beta \neq 0$, we know that $PV$ is a well-behaved and well-defined mapping.
In terms of negative time modifications and the inverse PV operation, since there is a $-1$ in both $\alpha$ and $\frac{1}{\alpha}$ in equation (9), when we go from our original signal, to the PV
signal, then back to the original signal, there is a phase shift of $\pi$ at each mapping, so a total phase shift of $2\pi$. This gives us the original sinusoid we started off with. Additionally,
since we see that the phase information is taken care of, our amplitude envelopes which were initially flipped as a result of the $-1$, are then again flipped in $PV^{-1}$ thus resulting in their
original orientation. Again if we take the group theory perspective this acts like a cyclic subgroup $\{ R_{0},R_{180} \}$ of a dihedral group. This last bit seems relevant since dihedral groups are
understood in a geometric sense, and here we are conceptualizing the amplitude.
After exploring some of the properties initially set forth, we have amended our original statement - that the phase vocoder acts like a map - to a more powerful and useful conclusion - this map is bijective. The fact that the phase vocoder acts like a bijective map is important because it tells us that the signals we are generating from an initial input $x$ are unique for each ordered pair $(\alpha,\beta) \in \mathbb{R} - \{0\} \times \mathbb{R}_{>0}$. Because of this, we can be confident that operations performed on $y \in \mathcal{Y}$ are reflective of the phase vocoder map $PV$ itself, and not some other curious circumstances which were overlooked.
In the world of continuous mathematics and analog signals the inverse phase vocoder returns the exact input signal which we originally gave it. However, when working in the world of DSP, certain
``phasey'' artifacts arise in the output phase vocoder signal as a result of spectral leakage and sinusoids of varying frequency. Thus, when we perform the inverse phase vocoder in a digital signal
processing context, our signal we get back isn't identical to the one we originally started with. We will see how to use these artifacts to judge the effectiveness of a phase vocoder algorithm.
4 Back to DSP
The properties we just laid out are a bit different when we reenter the digital signal realm. Specifically our frequency modification is no longer injective because of aliasing. They are now cyclic,
and for a sampling frequency of $f_{s}$ we redefine
$$\beta^{*} \equiv \beta \mod f_{s}$$
Furthermore, our choice of $\alpha$ is restricted such that
$$\alpha \times N \in [1,\infty)$$
since we can't have a signal less than $1$ sample, and we have limited data storage. However, it is near impossible to think of a phase vocoder application that exceeds these bounds, so we will
assume these restrictions are met from here on. We will use this notion of the phase vocoder transform as a measure for the resolution of our DSP phase vocoders.
4.1 Quality Measurement
Laroche and Dolson give us the following Consistency Measure in [2] to quantify the effectiveness of a phase vocoder algorithm.
$$D_{M} = \frac{\sum_{u = 1}^{P}\sum_{k=0}^{N-1}\big[ | Z(t_{s}^{u},\omega_{k}) | - | X(t_{s}^{u},\omega_{k}) | \big]^{2}}{\sum_{u = 1}^{P}\sum_{k=0}^{N-1} | X(t_{s}^{u},\omega_{k}) |^{2}}$$
This has been slightly modified from the original version: now we are comparing the twice modified signal, $z(t) = PV^{-1}(x(t),\alpha,\beta)$, to the original input, $x(t)$. $D_{M}$ is comparing the
squared energy added by the phase vocoder algorithm, to the squared energy of the original signal. If perfect reconstruction is achieved, $D_{M} = 0$. In the following section, we will look at the
consistency measure of the Identity Phase Locking algorithm which Laroche and Dolson proposed in [2].
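A direct transcription of this measure into NumPy might look as follows; this is a sketch rather than the article's linked MATLAB code, and it assumes the two magnitude spectrograms are computed on the same analysis grid.

```python
# Minimal sketch of the consistency measure D_M: squared magnitude error of the
# reconstructed STFT relative to the squared energy of the original STFT.
# X_mag and Z_mag are magnitude spectrograms of shape (frames, bins) taken on the
# same analysis time instants; this is not the article's MATLAB code.
import numpy as np

def consistency_measure(X_mag: np.ndarray, Z_mag: np.ndarray) -> float:
    num = np.sum((np.abs(Z_mag) - np.abs(X_mag)) ** 2)
    den = np.sum(np.abs(X_mag) ** 2)
    return float(num / den)

# Toy check: identical spectrograms give D_M = 0 (perfect reconstruction).
X_mag = np.random.default_rng(0).random((100, 2049))
print(consistency_measure(X_mag, X_mag))  # 0.0
```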
4.2 MATLAB Results
The linked MATLAB code performs this forward and inverse phase vocoder operation, and compares the resultant signal with the original one using the consistency measure $D_{M}$. We see these results
in Figure 1 and Table 1 where our FFT size is $4096$ with a hop factor of $4$. In light of the aforementioned injectivity failure, $\beta = 1$ in order to avoid aliasing and subsequent distortion.
In both phase vocoder reconstructions, we see that a fair amount of energy is added by the phase vocoder. However, we should note that the energy added is exaggerated in our $D_{M}$, since we are
performing the discrete phase vocoder algorithm twice, and in the second instance taking in an already phasey signal as input. We see that the Identity Phase Locking algorithm consistently
outperforms the classic phase vocoder algorithm in terms of $D_{M}$, as also shown in [2].
5 Conclusion
The reader is directed to my phase vocoder master patch for a Max/MSP real-time interactive approach, in order to get a more concrete sense of the PV as a function of three variables. This
investigation has brought us through some fringe areas for DSP (Hilbert spaces, group theory, continuous mathematics) in order to give us confidence in using the idea of a phase vocoder transform to
judge the quality of a given algorithm. Not only does it give us a slightly different application of the consistency measure $D_{M}$, but we have thought about the phase vocoder and some of the ideas
behind it in perhaps a new way: continuously. This is always a healthy practice, as we continue to move through more complete and physical understandings of the powerful ideas employed by digital
signal processing.
[1] Robert G. Gallager. Signal Space Concepts. https://pdfs.semanticscholar.org/53f7/0f5b6dc734802cf23624d7912a79f752d102.pdf
[2] Mark Dolson Jean Laroche. Improved Phase Vocoder Time-Scale Modification of
Audio. IEEE, 7(3):323–332, 1999. http://www.cmap.polytechnique.fr/~bacry/MVA/getpapers.php?file=phase_vocoder.pdf&type=pdf
[3] Steven W. Smith. The Scientist and Engineer’s Guide to Digital Signal Processing. California Technical Publishing, San Diego, California, 1997.
| {"url":"https://www.dsprelated.com/showarticle/1229.php","timestamp":"2024-11-12T18:24:39Z","content_type":"text/html","content_length":"81856","record_id":"<urn:uuid:78c543d1-c7cc-44b6-9e6d-edcd97186d8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00179.warc.gz"}
Research & Innovation - Research in Mathematics
Learn more about Research in Mathematics at the Morganton Departmental Opportunities Fair on
Wednesday, September 18 at 4:00-5:30 PM at the Academic Commons Stairs!
About Research in Mathematics
Research in Math is a semester-long course in which students collaborate on open-ended research problems on topics in higher level mathematics. This is an application-based course.
What would I do in the program?
Students will spend the semester understanding and working on open problems in mathematics. Throughout the semester, students will also engage with professional research mathematicians, practice
reading higher level mathematical research papers, and hone their formal proof writing skills. At the end of the semester, students will package and present their original results.
How do I know this program is a good fit for me?
Students in this course should have a very strong interest in mathematics, and have some experience with proof-writing. Students should also enjoy working collaboratively and exchanging mathematical
ideas with others.
What projects have past / current students worked on?
• The Match Game
• A Game on Graphs
• 𝒕-tone 𝒌-colorings of a Graph
• Positive Triangle Game
Application Deadline
September 26,
12:00 PM (Noon)
(with optional second semester continuation)
Course Information
MA4510 & MA4512, Academic Year
Reed Hubbard, NCSSM Morganton Instructor of Mathematics
Reed Hubbard joined the NCSSM community in July 2022 as an Instructor of Mathematics. Hailing from North Little Rock, he attended the Arkansas School for Mathematics, Sciences, and the Arts (an Arkansan version of NCSSM) before pursuing a math degree at the University of Arkansas. For graduate school, he found his way to North Carolina, where he developed a love for the outdoors, student
engagement, and mathematics at UNC-Chapel Hill. His mathematical interests include linear algebra, topology, and geometry. Recreationally, he enjoys cooking, reading, and spending time in the natural
beauty of western North Carolina.
Hannah Schwartz, NCSSM Morganton Instructor of Mathematics
Hannah is thrilled to be an instructor of mathematics at NCSSM-Morganton. She graduated with her PhD from Bryn Mawr College, where she researched low dimensional topology. She then traveled to
Germany for a research position at the Max Planck Institute of Math, followed by a second postdoc at Princeton University. There, she began to develop courses where students could explore higher
mathematics in unexpected and creative ways, regardless of their background. Her favorites included a course for Princeton freshmen on the mathematics behind circus acts, and a course on mathematics
in the courtroom for incarcerated students earning their BA from Rutgers. Now, she is happy to be part of the NCSSM community where she enjoys sharing her love of mathematics, especially the aspects
of her research that involve visualizing knots, surfaces, and 4-dimensional spaces! She spends her time outside of work sitting on her front porch, cooking, hiking, gardening, and exploring with her
three beautiful dogs and partner Jason.
MA4510 Research in Mathematics
Prerequisite(s): MA4500 AND Research Program Application, MA4330 AND Research Program Application, or permission of the Dean of Math
Corequisite(s): None
Graduation Requirements Met: One STEM credit OR One Mathematics credit
Schedule Requirements Met: One of five courses required each semester
Meeting Times: Three periods per week and a lab
This course is designed for students who have completed calculus and would like to work on a research team investigating an unsolved problem in mathematics. Since the research questions usually arise
from the fields of graph theory and complex systems, students are encouraged to complete MA4500 Graph Theory with REX Math and MA4230 Introduction to Complex Systems prior to enrolling or to have
completed comparable coursework in 9th or 10th grade.
MA4512 Research in Mathematics II
Prerequisite(s): MA4510 Research in Mathematics I OR MA4520 Advanced Mathematical Topics I OR MA4522 Advanced Mathematics Topics II
Corequisite(s): None
Graduation Requirements Met: One STEM credit OR One Mathematics credit
Schedule Requirements Met: One of five courses required each semester
Meeting Times: Three periods per week and a lab
This course continues the project begun in MA4510. Students write a formal paper presenting the background of the problem and any prior results found by other researchers. The students' results are
then presented in standard mathematical form with all necessary detail in the proofs and corollaries presented. If the students' results warrant, the paper may be submitted for publication. | {"url":"https://research-innovation.ncssm.edu/departmental-programs/morganton-campus/research-in-mathematics","timestamp":"2024-11-03T12:55:38Z","content_type":"text/html","content_length":"228673","record_id":"<urn:uuid:b5020df5-47b5-46de-9a28-c7c07baf7b3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00384.warc.gz"} |
Problem of the Week: Missing Data - Statistics.com: Data Science, Analytics & Statistics Courses
Problem of the Week: Missing Data
Question: You have a supervised learning task with 30 predictors, in which 5% of the observations are missing. The missing data are randomly distributed across variables and records. If your
strategy for coping with missing data is to drop records with missing data, what proportion of the records will be dropped? Is the assumption of random distribution reasonable?
Answer: The probability that the first variable in a record will be missing is 0.05, so the probability that it will be present is 0.95. The probability that the second variable will be present is,
likewise, 0.95. The probability that the first and second variables will both be present is 0.95 * 0.95 or 0.9025. The probability that the first, second, and third variables will all be present is
0.95 * 0.95 * 0.95 = 0.8574. And so on. The probability that all 30 variables will be present is 0.95^30 = 0.215 or 21.5%, meaning that there is a 78.5% probability that at least one variable will be
missing, and the record must be omitted. If each record has a 78.5% chance of being omitted, then, on average, 78.5% of the records will be dropped from the analysis.
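The same arithmetic in a few lines of Python (illustrative, not part of the original answer):

```python
# Probability a record survives: every one of the 30 predictors must be present.
p_missing = 0.05
n_predictors = 30

p_complete = (1 - p_missing) ** n_predictors
print(p_complete)      # ~0.215: about 21.5% of records are kept
print(1 - p_complete)  # ~0.785: about 78.5% of records are dropped
```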
The assumption of random distribution is not very reasonable. Typically, missingness is concentrated in a limited number of variables and records. If just one or two variables have a lot of missing
values, they can be omitted from the analysis. If a subset of records is missing a lot of values, this is often an indicator that there is something different about those records. In either case, a
derived variable that flags whether a record has data for the variable can have predictive power in a modeling task. | {"url":"https://www.statistics.com/problem-of-the-week/","timestamp":"2024-11-09T03:44:21Z","content_type":"text/html","content_length":"67039","record_id":"<urn:uuid:53369598-73a7-494e-99fe-88257faa4fe2>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00509.warc.gz"} |
3.4 Area and Circumference 1 Circle A circle is a plane figure that consists of all points that lie the same distance from a fixed point. The fixed point. - ppt download
| {"url":"https://slideplayer.com/slide/5959994/","timestamp":"2024-11-03T19:34:22Z","content_type":"text/html","content_length":"149785","record_id":"<urn:uuid:eb9fe5cf-6482-40e6-a829-ae7dc6ebe668>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00215.warc.gz"}
fit_growth: Fitting microbial growth in biogrowth: Modelling of Population Growth
This function provides a top-level interface for fitting growth models to data describing the variation of the population size through time, either under constant or dynamic environment conditions.
See below for details on the calculations.
fit_growth(
  fit_data,
  model_keys,
  start,
  known,
  environment = "constant",
  algorithm = "regression",
  approach = "single",
  env_conditions = NULL,
  niter = NULL,
  ...,
  check = TRUE,
  logbase_mu = logbase_logN,
  logbase_logN = 10,
  formula = logN ~ time
)
fit_data: observed microbial growth. The format varies depending on the type of model fit. See the relevant sections (and examples) below for details.
model_keys: a named list assigning equations for the primary and secondary models. See the relevant sections (and examples) below for details.
start: a named numeric vector assigning initial guesses to the model parameters to estimate from the data. See relevant section (and examples) below for details.
known: named numeric vector of fixed model parameters, using the same conventions as for "start".
environment: type of environment. Either "constant" (default) or "dynamic" (see below for details on the calculations for each condition).
algorithm: either "regression" (default; Levenberg-Marquardt algorithm) or "MCMC" (Adaptive Monte Carlo algorithm).
approach: approach for model fitting. Either "single" (the model is fitted to a unique experiment) or "global" (the model is fitted to several dynamic experiments).
env_conditions: Tibble describing the variation of the environmental conditions for dynamic experiments. See the relevant sections (and examples) below for details. Ignored for environment="constant".
niter: number of iterations of the MCMC algorithm. Ignored when algorithm!="MCMC".
...: Additional arguments for modFit().
check: Whether to check the validity of the models. TRUE by default.
logbase_mu: Base of the logarithm the growth rate is referred to. By default, the same as logbase_logN. See vignette about units for details.
logbase_logN: Base of the logarithm for the population size. By default, 10 (i.e. log10). See vignette about units for details.
formula: An object of class "formula" defining the names of the x and y variables in the data. logN ~ time as a default.
If approach="single, an instance of GrowthFit. If approach="multiple", an instance of GlobalGrowthFit
Please check the help pages of each class for additional information.
When environment="constant", the functions fits a primary growth model to the population size observed during an experiment. In this case, the data has to be a tibble (or data.frame) with two
logN: the logarithm of the observed population size Nonetheless, the names of the columns can be modified with the formula argument.
The model equation is defined through the model_keys argument. It must include an entry named "primary" assigned to a model. Valid model keys can be retrieved by calling primary_model_data().
The model is fitted by non-linear regression (using modFit()). This algorithm needs initial guesses for every model parameter. These are defined as a named numeric vector. The names must be valid
parameter keys, which can be retrieved using primary_model_data() (see example below). Apart from that, any model parameter can be fixed using the "known" argument. This is a named numeric vector, with
the same conventions as "start".
When environment="constant" and approach="single", a dynamic growth model combining the Baranyi primary growth model with the gamma approach for the effect of the environmental conditions on the
growth rate is fitted to an experiment gathered under dynamic conditions. In this case, the data is similar to fitting under constant conditions: a tibble (or data.frame) with two columns:
logN: the logarithm of the observed population size Note that these default names can be changed using the formula argument.
The values of the experimental conditions during the experiment are defined using the "env_conditions" argument. It is a tibble (or data.frame) with one column named ("time") defining the elapsed
time. Note that this default name can be modified using the formula argument of the function. The tibble needs to have as many additional columns as environmental conditions included in the model,
providing the values of the environmental conditions.
The model equations are defined through the model_keys argument. It must be a named list where the names match the column names of "env_conditions" and the values are model keys. These can be
retrieved using secondary_model_data().
The model can be fitted using regression (modFit()) or an adaptive Monte Carlo algorithm (modMCMC()). Both algorithms require initial guesses for every model parameter to fit. These are defined
through the named numeric vector "start". Each parameter must be named as factor+"_"+parameter, where factor is the name of the environmental factor defined in "model_keys". The parameter is a valid
key that can be retrieved from secondary_model_data(). For instance, parameter Xmin for the factor temperature would be defined as "temperature_xmin".
Note that the argument ... allows passing additional arguments to the fitting functions.
When environment="constant" and approach="global", fit_growth tries to find the vector of model parameters that best describe the observations of several growth experiments.
The input requirements are very similar to the case when approach="single". The models (equations, initial guesses, known parameters, algorithms...) are identical. The only difference is that
"fit_data" must be a list, where each element describes the results of an experiment (using the same conventions as when approach="single"). In a similar fashion, "env_conditions" must be a list
describing the values of the environmental factors during each experiment. Although it is not mandatory, it is recommended that the elements of both lists are named. Otherwise, the function assigns
automatically-generated names, and matches them by order.
## Example 1 - Fitting a primary model --------------------------------------
## A dummy dataset describing the variation of the population size
my_data <- data.frame(time = c(0, 25, 50, 75, 100), logN = c(2, 2.5, 7, 8, 8))
## A list of model keys can be gathered from primary_model_data()
## The primary model is defined as a list
models <- list(primary = "Baranyi")
## The keys of the model parameters can also be gathered from primary_model_data
primary_model_data("Baranyi")$pars
## Any model parameter can be fixed
known <- c(mu = .2)
## The remaining parameters need initial guesses
start <- c(logNmax = 8, lambda = 25, logN0 = 2)
primary_fit <- fit_growth(my_data, models, start, known, environment = "constant")
## The instance of FitIsoGrowth includes several useful methods
print(primary_fit)
plot(primary_fit)
coef(primary_fit)
summary(primary_fit)
## time_to_size can be used to calculate the time for some concentration
time_to_size(primary_fit, 4)

## Example 2 - Fitting under dynamic conditions ------------------------------
## We will use the example data included in the package
data("example_dynamic_growth")
## And the example environmental conditions (temperature & aw)
data("example_env_conditions")
## Valid keys for secondary models can be retrieved from secondary_model_data()
## We need to assign a model equation (secondary model) to each environmental factor
sec_models <- list(temperature = "CPM", aw = "CPM")
## The keys of the model parameters can be gathered from the same function
secondary_model_data("CPM")$pars
## Any model parameter (of the primary or secondary models) can be fixed
known_pars <- list(Nmax = 1e4,  # Primary model
                   N0 = 1e0, Q0 = 1e-3,  # Initial values of the primary model
                   mu_opt = 4,  # mu_opt of the gamma model
                   temperature_n = 1,  # Secondary model for temperature
                   aw_xmax = 1, aw_xmin = .9, aw_n = 1  # Secondary model for water activity
                   )
## The rest need initial guesses (you know, regression)
my_start <- list(temperature_xmin = 25, temperature_xopt = 35, temperature_xmax = 40, aw_xopt = .95)
## We can now fit the model
dynamic_fit <- fit_growth(example_dynamic_growth, sec_models, my_start, known_pars,
                          environment = "dynamic",
                          env_conditions = example_env_conditions
                          )
## The instance of FitDynamicGrowth has several S3 methods
plot(dynamic_fit, add_factor = "temperature")
summary(dynamic_fit)
## We can use time_to_size to calculate the time required to reach a given size
time_to_size(dynamic_fit, 3)

## Example 3 - Fitting under dynamic conditions using MCMC -------------------
## We can reuse most of the arguments from the previous example
## We just need to define the algorithm and the number of iterations
set.seed(12421)
MCMC_fit <- fit_growth(example_dynamic_growth, sec_models, my_start, known_pars,
                       environment = "dynamic",
                       env_conditions = example_env_conditions,
                       algorithm = "MCMC",
                       niter = 1000
                       )
## The instance of FitDynamicGrowthMCMC has several S3 methods
plot(MCMC_fit, add_factor = "aw")
summary(MCMC_fit)
## We can use time_to_size to calculate the time required to reach a given size
time_to_size(MCMC_fit, 3)
## It can also make growth predictions including uncertainty
uncertain_growth <- predictMCMC(MCMC_fit, seq(0, 10, length = 1000), example_env_conditions, niter = 1000)
## The instance of MCMCgrowth includes several nice S3 methods
plot(uncertain_growth)
print(uncertain_growth)
## time_to_size can calculate the time to reach some count
time_to_size(uncertain_growth, 2)
time_to_size(uncertain_growth, 2, type = "distribution")

## Example 4 - Fitting a unique model to several dynamic experiments --------
## We will use the data included in the package
data("multiple_counts")
data("multiple_conditions")
## We need to assign a model equation for each environmental factor
sec_models <- list(temperature = "CPM", pH = "CPM")
## Any model parameter (of the primary or secondary models) can be fixed
known_pars <- list(Nmax = 1e8, N0 = 1e0, Q0 = 1e-3,
                   temperature_n = 2, temperature_xmin = 20, temperature_xmax = 35,
                   pH_n = 2, pH_xmin = 5.5, pH_xmax = 7.5, pH_xopt = 6.5)
## The rest need initial guesses
my_start <- list(mu_opt = .8, temperature_xopt = 30)
## We can now fit the model
global_fit <- fit_growth(multiple_counts, sec_models, my_start, known_pars,
                         environment = "dynamic",
                         algorithm = "regression",
                         approach = "global",
                         env_conditions = multiple_conditions
                         )
## The instance of FitMultipleDynamicGrowth has nice S3 methods
plot(global_fit)
summary(global_fit)
print(global_fit)
## We can use time_to_size to calculate the time to reach a given size
time_to_size(global_fit, 4.5)

## Example 5 - MCMC fitting a unique model to several dynamic experiments ---
## Again, we can re-use all the arguments from the previous example
## We just need to define the right algorithm and the number of iterations
## On top of that, we will also pass upper and lower bounds to modMCMC
set.seed(12421)
global_MCMC <- fit_growth(multiple_counts, sec_models, my_start, known_pars,
                          environment = "dynamic",
                          algorithm = "MCMC",
                          approach = "global",
                          env_conditions = multiple_conditions,
                          niter = 1000,
                          lower = c(.2, 29),  # lower limits of the model parameters
                          upper = c(.8, 34)   # upper limits of the model parameters
                          )
## The instance of FitMultipleDynamicGrowthMCMC has nice S3 methods
plot(global_MCMC)
summary(global_MCMC)
print(global_MCMC)
## We can use time_to_size to calculate the time to reach a given size
time_to_size(global_MCMC, 3)
## It can also be used to make model predictions with parameter uncertainty
uncertain_prediction <- predictMCMC(global_MCMC, seq(0, 50, length = 1000), multiple_conditions[[1]], niter = 100)
## The instance of MCMCgrowth includes several nice S3 methods
plot(uncertain_prediction)
print(uncertain_prediction)
## time_to_size can calculate the time to reach some count
time_to_size(uncertain_prediction, 2)
time_to_size(uncertain_prediction, 2, type = "distribution")
| {"url":"https://rdrr.io/cran/biogrowth/man/fit_growth.html","timestamp":"2024-11-04T12:08:01Z","content_type":"text/html","content_length":"50662","record_id":"<urn:uuid:d9803698-0b0d-458f-b897-c358b10f8d67>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00689.warc.gz"}
Assignment 1 - Binary Search Practice | Jovian
In this assignment, you'll apply and practice the following concepts covered during the first lesson:
• Understand and solve a problem systematically
• Implement linear search and analyze it
• Optimize the solution using binary search
• Ask questions and help others on the forum | {"url":"https://jovian.ai/learn/data-structures-and-algorithms-in-python/assignment/assignment-1-binary-search-practice","timestamp":"2024-11-13T11:53:39Z","content_type":"text/html","content_length":"38064","record_id":"<urn:uuid:de38f368-e050-40b4-821a-bdeb183d70d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00645.warc.gz"} |
Tracking of Model Performance Over Time
Now, suppose you conclude, based on the previously mentioned criteria, that the model is not stable. What will you do then? Let’s hear from Hindol on that.
Let’s go back to the telecom churn example. If you recall, the model was built using data from 2014. Now suppose, you are tracking its performance over time, and that ends up giving you the following
So, the first time, when the model’s Gini dropped to 0.72, you avoided building a new model. Basically, you just recalibrated, i.e. updated the coefficients of the variables. That resulted in a
slight increase of Gini. However, the next time Gini dropped to a low value, i.e. 0.71, we just rebuilt the model, i.e. got new sample data, performed data prep, etc. and built the entire model. | {"url":"https://www.internetknowledgehub.com/tracking-of-model-performance-over-time/","timestamp":"2024-11-09T23:10:19Z","content_type":"text/html","content_length":"78225","record_id":"<urn:uuid:3dd9cbc1-7230-41c9-8a9a-f83c6a5b8df8>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00803.warc.gz"} |
Pathfind --- Introduction ---
Pathfind is a mathematical game on a problem which is not without real importance: you have a certain number of points in the plane, and you must find the shortest path passing through all these
points.
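As a small illustration of the underlying problem (not part of the WIMS applet itself), the sketch below links random points with a greedy nearest-neighbour heuristic in R; for a handful of points the true shortest path could instead be found by checking every visiting order:

set.seed(1)
pts <- matrix(runif(10), ncol = 2)               # 5 random points in the unit square
path <- 1; remaining <- 2:nrow(pts)
while (length(remaining) > 0) {
  last <- pts[path[length(path)], ]
  d <- sqrt(colSums((t(pts[remaining, , drop = FALSE]) - last)^2))   # distances to unvisited points
  nxt <- remaining[which.min(d)]
  path <- c(path, nxt)
  remaining <- setdiff(remaining, nxt)
}
path                                             # visiting order (a heuristic, not guaranteed optimal)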
Description: link points by a shortest path. interactive exercises, online calculators and plotters, mathematical recreation and games
Keywords: interactive mathematics, interactive math, server side interactivity, geometry, combinatorics, distance, path, circuit | {"url":"http://www.designmaths.net/wims/wims.cgi?lang=en&+module=H5%2Fgeometry%2Fpathfind.en","timestamp":"2024-11-02T19:05:18Z","content_type":"text/html","content_length":"4990","record_id":"<urn:uuid:feb96f43-f87c-4894-b2a2-743edec8db65>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00270.warc.gz"}
Universitat de Girona. Departament d’Informàtica i Matemàtica Aplicada
Martín Fernández, Josep Antoni
Pierotti, Michele E. R.
Barceló i Vidal, Carles
The variety of resources that a population exploits is known as the “niche width”. A particular population has a narrow niche if only few kinds of the available resources are exploited by its
members. When the individuals of a population exploit many different resources, then the population has a wide niche. From this point of view it seems that the niche is a property of the population
as a whole. However, it is well known that many apparently generalist populations are in fact composed of individual specialists, that is, members that use only small subsets of the population’s
niche. This approach justifies the definition of indices to measure the individual-level resource specialization. Although this kind of analysis could be applied to any niche variation: oviposition
sites, habitat, etc., we focus the discussion on the analysis of diet data. To measure species niche breadth, the frequency distribution of the species' resource use is compared
with that of all available resources. When a measure of individual specialization is considered, one should compare the population's total diet with the individual use. In
particular, the total niche width of a population should be compared with its two components: within and between-individual variation. In this sense, in the literature several indices of
intrapopulation niche variation are proposed. Our goal is to describe, compare and evaluate four of the most relevant indices applied in ecology. In this work we point out how these techniques could
be developed in a compositional framework, particularly when these indices are applied to discrete diet data [e.g. the frequency of different prey specimens in the diet].
Universitat de Girona. Departament d’Informàtica i Matemàtica Aplicada
All rights reserved
Estadística matemàtica -- Congressos
Mathematical statistics -- Congresses
Anàlisi multivariable -- Congressos
Multivariate analysis -- Congresses
Ecologia -- Mètodes estadístics -- Congressos
Ecology -- Statistical methods -- Congresses
Biologia -- Mètodes estadístics -- Congressos
Biology -- Statistical methods -- Congresses
Biologia de poblacions -- Mètodes estadístics -- Congressos
Population biology -- Statistical methods -- Congresses
Examining Indices of Individual-level Resource Specialization | {"url":"http://dugi.udg.edu/item/http:@@@@hdl.handle.net@@2072@@299071","timestamp":"2024-11-14T07:34:41Z","content_type":"application/xhtml+xml","content_length":"33923","record_id":"<urn:uuid:8d5a5133-a6ad-4231-a435-92d086e90ac3>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00053.warc.gz"} |
Chisquare Test, Different Types and its Application using R
Chi-Square Test
The chi-square statistic is represented by χ^2. The tests associated with this particular statistic are used when your variables are at the nominal and ordinal levels of measurement – that is, when
your data is categorical. Briefly, chi-square tests provide a means of determining whether a set of observed frequencies deviate significantly from a set of expected frequencies.
Chi-square can be used at both univariate and bivariate levels. In its univariate form –the analysis of a single variable – it is associated with the ‘goodness of fit’. Goodness of fit is used to
determine whether sample data are consistent with a hypothesized distribution.
When used for bivariate analysis – the analysis of two variables in conjunction with one another – it is called the chi-square test of association, or the chi-square test of independence, and
sometimes the chi-square test of homogeneity.
Chi-square test of independence
The chi-square test is applied when you have two categorical variables from a single population and it evaluates whether there is a significant association between the categories of the two
The chi-square test of independence is used to analyze the frequency table (i.e. contingency table) formed by two categorical variables.
2 x 2 Contingency Table
There are several types of chi square tests depending on the way the data was collected and the hypothesis being tested. We’ll begin with the simplest case: a 2 x 2 contingency table. If we set the 2
x 2 table to the general notation, using the letters a, b, c, and d to denote the contents of the cells, then we would have the following table:
General notation for a 2 x 2 contingency table.
For a 2 x 2 contingency table the Chi-Square statistic is calculated by the formula: χ² = n(ad − bc)² / [(a+b)(c+d)(a+c)(b+d)], where n = a + b + c + d is the total number of observations.
This calculated Chi-square statistic is compared to the critical value (obtained from statistical tables) with degrees of freedom df = (r−1) × (c−1) at p = 0.05, where 'r' is the number of rows
and 'c' is the number of columns in the contingency table.
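As a quick illustration of the formula and the degrees-of-freedom rule, here is a short R sketch with made-up cell counts (the counts are illustrative values, not data from this article); chisq.test() is called with correct = FALSE only so that it matches the hand calculation:

obs <- matrix(c(20, 25, 30, 25), nrow = 2)     # row 1 = (a, b) = (20, 30); row 2 = (c, d) = (25, 25)
a <- obs[1, 1]; b <- obs[1, 2]; cc <- obs[2, 1]; d <- obs[2, 2]   # cc instead of c, to avoid masking R's c()
n <- sum(obs)
chi_sq <- n * (a * d - b * cc)^2 / ((a + b) * (cc + d) * (a + cc) * (b + d))
df <- (2 - 1) * (2 - 1)                        # = 1 for a 2 x 2 table
p_value <- pchisq(chi_sq, df, lower.tail = FALSE)
chi_sq; p_value                                # about 1.01 and 0.31 for these illustrative counts
chisq.test(obs, correct = FALSE)               # the built-in test gives the same Pearson statistic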
Chi-square test of significance
Chi-square test examines whether rows and columns of a contingency table are statistically significantly associated.
• Null hypothesis (H0): There is no association between the two variables. That means the row and the column variables of the contingency table are independent.
• Alternative hypothesis (H1): There is an association between the two variables. That means the row and column variables are dependent. For each cell of the table, we have to calculate the
expected value under null hypothesis.
The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.
If the calculated Chi-square statistic is greater than the Chi-Square table value, we will reject the null hypothesis then we must conclude that the row and the column variables are related to each
other. This implies that they are significantly associated.
If Chi-square statistic value is as large as, say 20, it would indicate a substantial difference between our observed values and our expected values. A Chi-square statistic value is zero, on other
hand, indicates that the observed frequencies exactly match the expected frequencies. The value of Chi-square can never be negative because the differences between the observed and expected
frequencies are always squared.
Chi-square Test in R
In R the Chisq.test () function is used to test the association between two categorical variables.
EG:- The Cars93 data set from the MASS library contains data on the sale of different types of cars in the USA in the year 1993. Using this dataset, we need to test whether the type of Airbags and
the type of car sold have any significant relationship between them. If an association is observed, then we can estimate which types of cars can sell better with what types of air bags.
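The R code for this example is not reproduced in the article; the sketch below shows the call that would typically produce the figures quoted next (it assumes the MASS package is installed and uses the AirBags and Type columns of Cars93; verify the exact statistic and p-value against your own run):

library(MASS)                               # Cars93 lives in the MASS package
tbl <- table(Cars93$AirBags, Cars93$Type)   # contingency table of air-bag type vs car type
chisq.test(tbl)                             # Pearson chi-square test of independence
                                            # (R may warn that some expected counts are small)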
We have a chi-squared value of 33.0009 and a p-value of 0.0002723. Since we get a p-value less than the significance level of 0.05, we reject the null hypothesis and conclude that the two variables
Airbags and Type have a significant relationship.
Chi Square Goodness of Fit (One Sample Test)
A chi-square goodness of fit test – sometimes called the chi-square one-sample test – can help us to do this as it tells us whether there is a difference between what was actually observed in our
results and what we might expect to observe by chance. So, the test is able to determine whether a single categorical variable fits a theoretical distribution or not. It enables us to make an
assessment of whether the frequencies across the categories of the variable are likely to distribute according to random variation or something more meaningful.
For the chi-square goodness of fit test to be useful, a number of assumptions first need to be met. The assumptions for the chi-square test of association are the same as they are for the chi-square
goodness of fit test:
As an absolute requirement, your data must satisfy the following conditions:
• The variable must be either nominal or ordinal and the data represented as counts/frequencies.
• Each count is independent. That is, one person or observation should not contribute more than once to the table and the total count of your scores should not be more than your sample size: one
person = one count.
If your data do not satisfy these conditions then it is not possible to use the test and it should not be used. However, your data should also typically conform to the following:
• None of the expected frequencies in the cells should be less than 5.
• The sample size should be at least 20 – but more is better.
If the data in your sample does not satisfy these two criteria, the test becomes unreliable. That is, any inferences that you may make about your data have a significantly higher likelihood of error.
In such instances of low sample size or very low expected frequencies, it has been repeatedly demonstrated by statisticians that the chi-square statistic becomes inflated and no longer provides a
useful summary of the data. If your expected frequencies are less than 5, it is probably worth considering collapsing your data into bigger categories or using a different test.
Chi-square Goodness of Test in R
Eg:- In R, the survey data set from the MASS package has a Smoke column that records each student's smoking habit. Suppose the campus smoking statistics are as below. Determine whether the sample
data in survey supports it at a 0.05 significance level.
Heavy Never Occas Regul
4.5% 79.5% 8.5% 7.5%
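A hedged sketch of the corresponding goodness-of-fit call (the survey data set comes from the MASS package; the hypothesised proportions must be supplied in the same order as the factor levels, here Heavy, Never, Occas, Regul):

library(MASS)
observed <- table(survey$Smoke)                 # observed counts per smoking category
campus_probs <- c(0.045, 0.795, 0.085, 0.075)   # hypothesised proportions, same order as the levels
chisq.test(observed, p = campus_probs)          # chi-square goodness-of-fit test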
As the p-value 0.991 is greater than the 0.05 significance level, we do not reject the null hypothesis that the sample data in survey supports the campus-wide smoking statistics.
Fisher’s exact test
The Fisher exact test was proposed in the mid-1930s almost simultaneously by Fisher, Irwin, and Yates. Fisher's exact test is a statistical significance test used in the analysis of contingency tables for two
nominal variables and you want to see whether the proportions of one variable are different depending on the value of the other variable.
Fisher’s exact test is more accurate than the chi-squared test of independence when the expected numbers are small. When one of the expected values (note: not the observed values) in a 2 × 2 table is
less than 5, and especially when it is less than 1, then Yates’ correction can be improved upon. In such cases the Fisher exact test is a better choice than the Chi-square.
The Fisher Exact test is generally used in one tailed tests. However, it can be used as a two tailed test as well.
• Null hypothesis (H0): The null hypothesis is that the relative proportions of one variable are independent of the second variable; in other words, the proportions at one variable are the same for
different values of the second variable. (H0: p1 = p2)
• Alternative hypothesis (H1): The relative proportions of one variable are dependent of the second variable. The alternative hypothesis can be either left-tailed (p1 < p2), right-tailed (p1 > p2),
or two-tailed (p1 ≠ p2).
A data set which is called an “R×C table,” where R is the number of rows and C is the number of columns. If the columns represent the study group and the rows represent the outcome, then the null
hypothesis could be interpreted as the probability of having a particular outcome not being influenced by the study group, and the test evaluates whether the two study groups differ in the
proportions with each outcome. An important assumption for all of the methods outlined, including Fisher’s Exact test, is that the binary data are independent. If the proportions are correlated then
more advanced techniques should be applied.
Fishers Exact Test in R
The function fisher.test() is used to perform Fisher's exact test when the sample size is small, to avoid using an approximation that is known to be unreliable for small samples.
Eg:-Consider a trial comparing the performance of two boxers. Each of the boxers undertook the trial eight times and the number of successful trials was recorded. The hypothesis under investigation
in this experiment is that the performance of the two boxers is similar. If the first boxer was only successful on one trial and the second boxer was successful on four of the eight trials then can
we discriminate between their performances?
The data is set up in a matrix:
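The matrix setup and the test call are not shown in the text; a plausible reconstruction is sketched below (the 1-of-8 and 4-of-8 success counts come from the description above, while the row/column layout and labels are assumptions):

boxers <- matrix(c(1, 7, 4, 4), nrow = 2,
                 dimnames = list(c("Success", "Failure"), c("Boxer 1", "Boxer 2")))
boxers                 # column 1: boxer 1 (1 success, 7 failures); column 2: boxer 2 (4 and 4)
fisher.test(boxers)    # two-sided Fisher's exact test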
The p-value calculated for the test does not provide any evidence against the assumption of independence. In this example the association between rows and columns is not considered statistically
significant, which means that we cannot confidently claim any difference in performance between the two boxers. So we fail to reject the null hypothesis.
About Kavitha:
Kavitha P.S. is an MCA. Currently she is working as a Senior Analyst Intern with NikhilGuru Consulting Analytics Service LLP, Bangalore. She previously worked for 5 years with UST-Global, Trivandrum.
1 Comment on "Chisquare Test, Different Types and its Application using R"
1. Thanks for sharing this blog. I enjoyed while reading this blog. Your presentation skills are Superb. I bookmarked your site for further blogs. Keep sharing more blogs. | {"url":"https://dataanalyticsedge.com/2016/12/23/chisquare-test-different-types-and-its-application-using-r/","timestamp":"2024-11-07T19:57:21Z","content_type":"text/html","content_length":"116105","record_id":"<urn:uuid:baf5f232-0b6f-4078-a0ec-5440c97accfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00180.warc.gz"} |
mp_arc 00-487
00-487 Xin-Chu Fu, Weiping Lu, Peter Ashwin and Jinqiao Duan
Symbolic Representations of Iterated Maps (76K, Latex) Dec 5, 00
Abstract , Paper (src), View paper (auto. generated ps), Index of related papers
Abstract. This paper presents a general and systematic discussion of various symbolic representations of iterated maps through subshifts. We give a unified model for all continuous maps on a
metric space, by representing a map through a general subshift over usually an uncountable alphabet. It is shown that at most the second order representation is enough for a continuous map. In
particular, it is shown that the dynamics of one-dimensional continuous maps to a great extent can be transformed to the study of subshift structure of a general symbolic dynamics system. By
introducing distillations, partial representations of some general continuous maps are obtained. Finally, partitions and representations of a class of discontinuous maps, piecewise continuous
maps are discussed, and as examples, a representation of the Gauss map via a full shift over a countable alphabet and representations of interval exchange transformations as subshifts of infinite
type are given.
Files: 00-487.src( 00-487.keywords , duan.tex ) | {"url":"http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=00-487","timestamp":"2024-11-06T18:57:29Z","content_type":"text/html","content_length":"2249","record_id":"<urn:uuid:d7112814-5528-4c92-aeeb-8dbb04bf2d1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00781.warc.gz"} |
Mastering Python Functions: Built-in and User-Defined with Practical Examples
Python functions are the backbone of efficient and reusable code. Whether you're a beginner or an experienced developer, understanding both built-in and user-defined functions is crucial for writing
clean and effective Python programs. In this comprehensive guide, we'll delve into the essentials of Python functions, explore built-in functions, create user-defined functions, and demonstrate their
applications with practical examples. #2Articles1Week, #Hashnode.
🔥Introduction to Python Functions
Functions in Python are blocks of reusable code that perform specific tasks. They help in organizing code, making it more readable, and reducing redundancy. Python offers a plethora of built-in
functions, and it also allows developers to create their own functions tailored to their specific needs.
Key Benefits of Using Functions:
• Reusability: Write once, use multiple times.
• Modularity: Break down complex problems into manageable chunks.
• Maintainability: Easier to debug and update code.
• Readability: Clear structure enhances understanding.
Built-in Functions in Python
Python comes with a rich set of built-in functions that perform common tasks, saving developers from reinventing the wheel. Some of the most frequently used built-in functions include:
• print(): Outputs data to the console.
• max(): Returns the largest item in an iterable or the largest of two or more arguments.
• min(): Returns the smallest item.
• len(): Returns the length of an object.
• sum(): Sums up the items of an iterable.
• reversed(): Returns a reversed iterator.
• upper(): Converts a string to uppercase.
Example: Using the max() Function
numbers = [10, 20, 30, 40, 50]
largest = max(numbers)
print(f"The largest number is {largest}.")
# output = The largest number is 50.
In this example, the max() function efficiently finds the largest number in the list.
User-Defined Functions
While built-in functions are powerful, creating your own functions allows you to tailor functionality to your specific needs. User-defined functions enhance code flexibility and reusability.
Defining Simple Functions
A simple function performs a basic task without any parameters.
Example: Greeting Function
def say_hello():
    print("Hello, Shubham!")

say_hello()

# output = Hello, Shubham!
This function, say_hello(), prints a greeting message when called.
Functions with Arguments
Functions can accept parameters, making them more dynamic and versatile.
Example: Greeting with Arguments
def say_hello_args(name="Shubham"):
print(f"Hello, {name}!")
# output =
Hello, Shubham!
Hello, Ajay!
Hello, Sachin!
Here, say_hello_args accepts a name parameter, allowing personalized greetings.
Returning Values from Functions
Functions can return values using the return statement, enabling further manipulation of the result.
Example: Summation Function
def test_sum(num1, num2):
    return num1 + num2

result = test_sum(5, 6)
print(result)

# output = 11
The test_sum function adds two numbers and returns the result, which is then printed.
Advanced Function Concepts
Using *args and **kwargs
*args and **kwargs allow functions to accept an arbitrary number of arguments, enhancing their flexibility.
Example: Function with *args
def print_args(*args):
for arg in args:
print(arg, end=" ")
print_args("Shubham", 23, 44, 2.9)
# output = Shubham 23 44 2.9
This function prints all provided arguments, regardless of their number.
Example: Function with **kwargs
def print_kwargs(**kwargs):
for key, value in kwargs.items():
print(f"{key}: {value}")
print_kwargs(name="Shubham", age=23, score=44)
# output =
name: Shubham
age: 23
score: 44
Nested Functions and Scope
Functions can be nested within other functions, and understanding scope is vital for variable accessibility.
Example: Nested Functions and Scope
def outer_function():
    a = 10
    local_var = 34  # local to outer_function (not used below)
    print("Hello from the outer function!")
    print(f"a = {a}")

    def inner_function():
        print("Hello from the inner function!")

    inner_function()  # call the nested function

outer_function()
# output =
Hello from the outer function!
a = 10
Hello from the inner function!
In this example, inner_function is defined within outer_function, demonstrating nested functions and variable scope.
Practical Examples
Let's apply our knowledge of functions to solve real-world problems.
Leap Year Checker
Determining whether a year is a leap year involves specific conditions. We'll create a function to perform this check.
Leap Year Rules:
• A year is a leap year if it is divisible by 4.
• However, if the year is divisible by 100, it is not a leap year, unless:
• The year is also divisible by 400.
Example: Leap Year Function
def is_leap_year(year):
if (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0):
return True
return False
year = int(input("Enter a year to check if it's a leap year: "))
if is_leap_year(year):
    print(f"{year} is a Leap Year.")
else:
    print(f"{year} is not a Leap Year.")
# output =
Enter a year to check if it's a leap year: 2024
2024 is a Leap Year.
This function accurately determines leap years based on the defined rules.
Factorial Calculator
Calculating the factorial of a number is a common programming exercise. We'll implement this using a function.
Factorial Definition: The factorial of a non-negative integer n is the product of all positive integers less than or equal to n.
Example: Factorial Function
def factorial(n):
result = 1
for i in range(n, 0, -1):
result *= i
return result
fact = int(input("Enter a number to calculate its factorial: "))
print(f"Factorial of {fact} is {factorial(fact)}.")
# output =
Enter a number to calculate its factorial: 5
Factorial of 5 is 120.
This function efficiently computes the factorial of a given number using a loop.
Understanding and effectively utilizing Python functions is fundamental to writing efficient and maintainable code. Built-in functions offer powerful tools for common tasks, while user-defined
functions provide the flexibility to address specific needs. By mastering concepts such as arguments, *args and **kwargs, nested functions, and scope, you can enhance your programming skills and build
robust Python applications.
Happy Coding ❤ | {"url":"https://ishubh.hashnode.dev/python-functions-built-in-and-user-defined","timestamp":"2024-11-07T12:18:09Z","content_type":"text/html","content_length":"214722","record_id":"<urn:uuid:002eda15-007c-42c4-acfb-e047a06505f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00478.warc.gz"} |
Bjorn’s Corner: Sustainable Air Transport. Part 41. VTOL mission calculations. - Leeham News and Analysis
Bjorn’s Corner: Sustainable Air Transport. Part 41. VTOL mission calculations.
October 14, 2022, ©. Leeham News: Last week, we defined the phases of an eVTOL mission that shall show us the typical range and endurance of the eVTOLs of a hybrid vectored thrust/lift and cruise
eVTOL, similar to a Vertical VX4, Figure 1.
Several parts of the energy consumption calculations are complex, and surprisingly it’s not the vertical parts. We go through why and how we calculate the energy consumed for the mission.
eVTOL mission calculations
The typical mission we defined last week is shown in Figure 2, with some additional data regarding each phase.
Here is how we fly the mission phases, their energy consumption calculations, and problem areas;
1. Vertical takeoff to 300ft: The vertical takeoff for an eVTOL is straightforward to calculate. You use the disc loading formula to calculate the power needed at the aircraft level, times the hover
time of 20 seconds. Then, you get the battery energy draw by multiplying it with the chain’s efficiency losses. You have losses in the battery (internal resistance that generates heat), the
inverter, the electric motors, and the rotor. Chain efficiency is 70% during hover. It includes blocking losses for the rotor booms. It would be wrong to calculate with a brand-new battery as
this is seldom the case in the eVTOLs life (brand new and after battery refurbishment/renewal). We chose the 90% State Of Charge (SOC), which is the midpoint SOC between a new battery’s full
charge level at 100% SOC and the full charge level of a worn-down battery at 80% SOC which is normally the point at which an operator decides to refurbish/renew the battery.
2. The transition to forward flight at constant altitude: The transition from zero speed to a deeply stalled eVTOL, then to the stall region, and finally, forward speed of 1.3 times stall speed (the
usual definition of margin kept to stall for normal flight) is the most tricky phase of the mission. It’s, therefore, done in horizontal flight. The acceleration from zero speed to 1.3 times
stall speed is done with 0.2G’s acceleration, which gives the transition time. We assume the clean stall speed to be 80kts. The average chain efficiency is 72%.
3. The climb to Top Of Climb (TOC) at a cruise altitude of 8,000ft for long missions or 5,000ft for 20-minute missions is done at 110kts. Chain efficiency is 73%. See below for the calculation
problematic of climb, cruise, and descent energy consumption.
4. Cruise at 130kts to Top Of Descent (TOD). Projects talk of “speeds up to 200mph” ( 170kts). But the eVTOL will be energy constrained, and the more SOC that can be left in the battery at the end
of the mission, the shorter the recharging time and less wear on the battery. We, therefore, use 130kts as cruise speed. Chain efficiency is 75%.
5. At TOD, we descend with cruise speed at a sink rate of 800ft/min to 2,000ft, where the approach procedure to the landing pad commences. We assume a flight from a downtown heliport to a feeder
airport helipad. You will be required to follow a specific approach procedure also in VFR conditions; therefore, we assume a 3° approach at 130kts to Decision Height (DH) which is 600ft. From
there, it’s either a flight to transition/vertical land or a divert to an alternate in IFR below minimum conditions. Descent and approach efficiency 76%
6. Speed and efficiency data for transition and vertical as before, with the vertical land taking 45 seconds. The power drawn from the battery is high at hover, and it shall be finished at a SOC
level where the battery can deliver this C-rate. We assume a minimum SOC of 10%, getting us 80% SOC to use in the mission.
7. Mission reserves as discussed; 20 minutes in VFR conditions, alternate plus 30 minutes flight for IFR conditions. If we need an alternate, we look at the effects of a 30nm, 60nm, and 100nm alternate distance.
Climb, Cruise, and Descent energy consumption
If it wasn’t for the influence on the eVTOL drag of the 4+4 rotors, the calculation of energy consumption in these phases would be straightforward. Thrust is vehicle drag with the added effect of a
change in potential energy at climb and descent.
As Energy is Force times Distance, if we know the thrust used, we can calculate energy consumption per nautical mile.
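As a rough illustration of that statement (not Bjorn's actual model), the sketch below computes cruise energy per nautical mile in R; the mass and lift-to-drag ratio are assumptions for illustration only, while the 75% chain efficiency comes from the phase list above:

mass_kg  <- 3175          # assumed eVTOL mass (certification limit mentioned in the comments)
L_over_D <- 13            # assumed cruise lift-to-drag ratio
eff      <- 0.75          # battery-to-thrust chain efficiency in cruise (from the text)
drag_N   <- mass_kg * 9.81 / L_over_D      # thrust needed in level cruise equals drag
energy_J_per_nm   <- drag_N * 1852 / eff   # energy = force x distance, corrected for chain losses
energy_kWh_per_nm <- energy_J_per_nm / 3.6e6
energy_kWh_per_nm                          # roughly 1.6 kWh per nautical mile with these assumptions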
Calculating the basic parasitic and induced drag of a vehicle like ours is straightforward. But then we have the influence of the rotors. We have four tractor rotors that behave like the propellers
of the C-130, Figure 3. A swirling propwash will impinge on the boom, wing, and fixed rear rotors.
The added drag is difficult to assess, especially what happens around the rear fixed rotors. We use drag data I have from prop aircraft with similar configurations to estimate this drag delta. This
is where projects can have some surprises.
Next week we look at the resulting energy consumption for the different phases and what it means for the range of the eVTOL in different operational scenarios.
10 Comments on “Bjorn’s Corner: Sustainable Air Transport. Part 41. VTOL mission calculations.”
Would be nice to see payload/range in pure aircraft mode with required field lengths as well as mixed operation of vertical city helipad start but horizontal landing at city airport UAM runway.
• Sure, we can do that.
□ A nomagram can be a beautifully clear and powerful thing but they can take a while to create. I imagine a payload versus range graph with several curves for different battery mass fractions
or MTOW?
Maybe vertical take-off and transition can be skipped, to save battery energy.
https://youtu.be/b1_ADVlUZ2Q?t=21. Not sure if all passengers would be ready for this & installing launch equipment on rooftops & helipads might be challenging.
Joking aside I’m curious to see what the energy calculations & estimations Bjorn will be adding up will do to required battery capacity for a 400kg payload for such a vehicle.. Investors should be
• You can do the numbers and add the benefits from: 1) Using the altitude loss allowed from the helipad to gain speed 2) Using a high voltage power cord for the first 100′ of the T-O, designing
some engines with dual-voltage motors like +/-270VDC and 5-15kV AC (higher voltage = smaller engine for same power). 3) T-O against the wind direction by rotating the helipad. Ideally these
helipads should be at the end of multi lane highways in the cities allowing for a straight climb out and easier emergency landings. We will see what EASA/FAA suggests for city UAM helipads.
I understand the calcs for preliminary and ballpark, but in the end just like Boeing or Airbus, you just test it and measure the fuel flows (battery use) in this case and get the real world data.
Your design should give you some margin to add batteries to achieve a goal (or accept your goal is too much and reduce it but that of course depends on any customers and contracts if any)
• The designs are weight constrained. The max limit for an eVTOL is 3175kg, and all the manufacturers are struggling with getting a large enough fraction of that to energy (battery system). So it’s
not a matter of adding battery modules unless something else gives, like passengers. In a fuel aircraft, you have a trade space between payload and fuel as you can fly a full house but then only
fill the tanks too, say, 60%. With batteries, they are always at 100% mass: at idle, takeoff, and landing. No flexibility.
□ Well, in order to trade ‘fuel’ for payload, one could have removable battery modules.
Another point is, with conventional Pt 23 airplanes, critical performance data like takeoff and climb are required to be calculated with ‘minimum’ (i. e. ready for overhaul) engines. Cruise
can be done with average engine performance.
☆ Thanks, but the engine, in this case, the electric motors have negligible deterioration; it’s the energy source that degrades. For takeoff an 80% SOC battery has no influence. For landing
the allocation including the cutoff for hover (10% SOC) is the same, it’s the range that suffers (the only flexible part of the mission). I will include the effect of an 80% battery, good
○ Wouldn’t an 80% SOC have implications for peak power delivery (and thus take-off)? It’s not only total capacity that degrades in batteries but also maximum current. | {"url":"https://leehamnews.com/2022/10/14/bjorns-corner-sustainable-air-transport-part-40-vtol-mission-calculations/?utm_source=dlvr.it&utm_medium=twitter","timestamp":"2024-11-07T09:53:12Z","content_type":"text/html","content_length":"117415","record_id":"<urn:uuid:6057e739-4a25-4210-896d-5b3cbd990546>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00042.warc.gz"} |
Fast trained neural network - Progress, Inc.Fast trained neural network
Boris Zlotin; Ivan Nehrishnyi, Sr; Ivan Nehrishnyi, Jr & Vladimir Proseanic
The paper is dedicated to the basic architecture of the artificial neural network of a new type – the Progressive Artificial Neural Network (PANN) – and its new training algorithm. The PANN
architecture and its algorithm when applied together provide a significantly higher training speed than the known types of artificial neural networks and methods of their training. The results of
testing the proposed network and its comparison with existing networks are given. For those who are interested in independent testing, information is provided allowing to download one of the variants
of the proposed network.
1. Theoretical basis of fast neural network training
The artificial neural network (ANN) was mathematically confirmed as a universal approximating device in 1969 in [1]. At the same time, one of the major ANN limitations was revealed. As is known, each
synapse in ANN has one synaptic weight. ANN training is performed by calculating and correcting these weights for the training image. For each subsequent training image, the same weights must be
corrected again. Therefore, training is conducted through a large number of iterations. As the training volume increases, training time can grow exponentially.
In [2] and [3], a new neural network design is proposed, which differs from existing ANN by the fact there is a plurality of corrective weights at each synapse, and the corrective weights are
selected by a special device (distributor) depending on the value of the input signal.
Figure 1 shows a well-known artificial formal neuron, which includes a summing device and an activation device, and Figure 2 shows a new artificial neuron, called p-neuron, proposed in [2] and [3].
In p-neuron, the signal from the input device is sent to the distributor, which estimates the value of the signal, refers it to one of the value intervals, and appropriately assigns a corrective
weight corresponding to this signal. Figure 2 shows that a signal with a value corresponding to the value interval 3 selects the correcting weight d3.
A new neural network has a classical neural architecture with p-neurons used in place of formal neurons. Figure 3 shows ANN with classical formal neurons, and Fig. 4 – a p-network with proposed
2. Training of the p-network
The P-network training, which that is shown in Fig. 4 differs significantly from the training of a classic ANN. Due to the presence of a plurality of corrective weights at each synapse, different
input signals activate different weights. Input signals of the same value activate the same weights.
P-network training includes the following steps:
1. Input signals are sent to the distributors. Distributors activate weights at the synapses, depending on the value of the input signals. With other input signals, other weights at the synapses are
activated. Values of these weights are sent to the neurons to which the synapses are connected.
2. Neurons form their output signals as a sum of corrective weights received by a given neuron
∑[n] = ∑[𝑖,𝑑,𝑛] 𝑊[𝑖,𝑑,𝑛]
∑[n] – Neuron input signal;
i – Corrective weight input index, which determines the signal input;
d – Corrective weight interval index, which determines the value interval for the given signal;
n – Corrective weight neuron index, which determines the neuron that received the signal;
W[i,d,n] – Corrective weight value;
3. Comparison of the received neuron output signals with the predefined desirable output signals and generation of correction signals for the group correction of corrective weights.
Where the group correction is a modification of the activated corrective weights associated with a given neuron, with each weight changed by the same value or multiplied by the same factor.
Below are two exemplary and non-limiting variants of the formation and use of the group correction signals:
Variant #1 – Formation and application of correction signals based on the difference between desirable output signals and obtained output sums, as follows:
Calculation of the equal correction value ∆[n] for all corrective weights contributing into the neuron n according to the equation:
∆[n] = (O[n]-∑[n])/ S
O[n] – Desirable output signal corresponding to the neuron output sum ∑[n];
S – Number of synapses connected to the neuron n.
Variant #2 – Formation and application of corrective signals based on a ratio of desirable output signals versus obtained output sums as follows:
Calculation of the equal correction value ∆[n] for all corrective weights contributing into the neuron n according to the equation:
∆[n]= O[n]/∑[n]
4. Correction of all weights connected to the given neuron. According to the first variant, Δ[n] is added to the current weight value. In the second variant, the current weight value is multiplied
by Δ[n]. This nullifies the training error for the current image for a given neuron. In other words, instead of the gradient descent used in classical neural networks, which requires a large
number of iterations, p-network provides a radical correction of the weights in one step.
5. Repeat steps 2 through 5 for all images. This completes the first training epoch. The corrected weights obtained during the network training with the first images of the epoch can be corrected by
its training with the next images, and thus, training with previous images during one epoch may deteriorate. If the desirable accuracy is not achieved after the first training epoch, several more
epochs can be carried out.
This simple method for calculating weight corrections does not require iterative processes and provides very fast completion of the training epochs. The one-step correction of all active
weights by the entire amount of the training error drastically reduces the number of epochs necessary for training.
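To make the one-step correction concrete, here is a minimal sketch of a single p-neuron in R (the authors' own implementation is in Python and is not reproduced here); the input scaling to [0, 1], the interval count, and the use of Variant #1 (additive correction) are choices made only for illustration:

n_inputs <- 3; n_intervals <- 5
W <- matrix(0, nrow = n_inputs, ncol = n_intervals)       # corrective weights, one row per synapse
edges <- seq(0, 1, length.out = n_intervals + 1)          # value intervals used by the distributor
distributor <- function(x) findInterval(x, edges, all.inside = TRUE)

x <- c(0.1, 0.5, 0.9); target <- 6                        # one training image and its desired output
d <- distributor(x)                                       # interval (weight) selected for each input
idx <- cbind(seq_len(n_inputs), d)                        # active weight per synapse
out <- sum(W[idx])                                        # neuron output sum
W[idx] <- W[idx] + (target - out) / n_inputs              # one-step group correction (Variant #1)
sum(W[idx])                                               # now reproduces the target exactly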
3. Software implementation of a fast training neural network
The described p-network has been implemented in software using an object-oriented language. Fig. 5 represents the software implementation of the p-network in the Unified Modeling Language (UML).
The UML model in Fig.5 shows the generated software objects, their relationships, as well as functions and parameters of these objects. In more detail, these steps are shown in Figures 6 –12,
• Fig. 6 – general sequence of p-network formation;
• Fig. 7 – analysis process, which allows to prepare data necessary for p-network formation;
• Fig. 8 – input signal processing, which makes it possible for the p-network to interact with the input data during its training and operation;
• Fig. 9 – formation of neuron units, including a neuron and a synapse with corrective weights, which provides p-network training and operation;
• Fig. 10 – creation of synapses with corrective weights.
Within the process the following classes of objects are formed:
• PNet;
• InputSignal;
• NeuronUnit;
• Synapse.
The formed class of objects NeuronUnit includes:
• Array of objects of the Synapse class;
• Neuron – a variable, in which adding is provided during training process;
• Calculator – a variable, in which the value of the expected sum is placed and where the calculations of the training corrections are being made.
The class NeuronUnit provides network training,including:
• Formation of neuron sums;
• Assignment of values of expected sums;
• Calculation of corrections;
• Introduction of corrections into corrective weights.
The formed class of objects Synapse includes:
• Array of corrective weights;
• Pointer directed to the synapse-related input.
The class Synapse provides the following functions:
• Initiation of corrective weights;
• Factors by weights multiplication;
• Weight correction.
The formed class of objects InputSignal includes:
• Array of pointers to the synapses connected with a given input;
• Variable where the value of input signal is placed;
• Values of potential minimum and maximum input signal;
• Number of intervals;
• Width of an interval.
The class InputSignal provides the following functions:
• Formation of the network structure, including:
□ Adding and removing links between an input and synapses;
□ Assignment of the number of intervals for the synapses of the given input.
• Assignment of values of parameters for the minimum and maximum input signal;
• Contribution into the network operation:
□ Setting the input signal;
□ Setting correction factors.
The formed class of objects PNet includes the array of object classes:
The class PNet provides the following functions:
• Specifies the number of objects in the class InputSignal;
• Specifies the number of objects in the class NeuronUnit;
• Provides group request of functions of the objects NeuronUnit and InputSignal.
In the training process, operation cycles are formed in which:
• The output of the neuron is formed, which is equal to zero prior to the start of the cycle. All the synapses contributing to the given NeuronUnit are reviewed, wherein for each synapse:
□ Distributor forms an array of correction factors based on the input signal.
□ All the weights coming to this synapse are reviewed, wherein for each weight the following operations are executed:
☆ Multiplication of the weight value by the corresponding coefficient C[i,d,n];
☆ The result of multiplication is added to the output sum of the neuron.
• Correction value ∆[n] is calculated;
• The result of multiplication of the correction value ∆[n] by the coefficient C[i,d,n] is calculated (∆[n] × C[i,d,n]);
• All the synapses contributing to the given NeuronUnit are reviewed, wherein for each synapse:
□ All the weights coming to the synapse are reviewed, and each weight value is changed by the corresponding correction value.
• Fig. 11 – training process of a single neural unit in detail.
• Fig. 12 – general process of p-network training.
4. Test results
The experimental PANN built according to the above algorithm was implemented in Python.
The comparison was made between a Deep Learning ANN (DNN) based on the advanced Google TensorFlow technology and PANN. The results of the comparison, based on the standard statistical IRIS test, showed the following:
a. Training Error: PANN performance is comparable to the best results of the tested DNN;
b. Training Speed: PANN speed is at least 3,000 times higher than DNN.
The results of the tests are shown in the screenshot below.
As can be seen, with the same accuracy, PANN training speed is 1 ms vs 22,141 ms for DNN.
The results are obtained on the computer with the following parameters:
• Windows 7 Home Premium
• Processor: Intel® Core™ i5-337U CPU @ 1.80 GHz
• System type: 64-bit Operating System
It is apparent that the training time will differ on a different computer and with a different processor load.
Those who are interested in independent testing or in the non-commercial use of the new network can download one of its variants and user instructions here.
Discussion and conclusions
The tests have confirmed the theoretical expectations of a radical acceleration of network training by eliminating the need for iterative calculations. They also revealed additional benefits:
• Drastic decrease in the number of network training epochs necessary for the predefined accuracy of results. In some cases, the entire training was completed in 2-3 epochs, in some cases – in
dozens of epochs. This provided additional reduction in training time.
• The p-network has no need for an activation function, which is required in classical neural networks. Removing this function further increased the training speed.
• PANN and the proposed training method can significantly accelerate the operation of various specialized neural networks, such as Hopfield and Kohonen networks, Boltzmann machine and adaptive
networks. The network and its training method can be used in recognition, clustering and classification, prediction, associative information search, and the like, with additional partial network
training in real time.
In addition, PANN application makes it possible to:
• Increase the computing power of computers, through the use of PANN-based computing blocks and cache memory systems.
• Save computing resources and reduce energy consumption.
• Create Large Databases with high performance, speed and reliability.
| {"url":"https://progress.ai/fast-trained-neural-network/","timestamp":"2024-11-07T14:00:58Z","content_type":"text/html","content_length":"84896","record_id":"<urn:uuid:bab2d608-04b0-4e74-bfbe-4c306d26e45e>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00178.warc.gz"}
OpenStax College Physics for AP® Courses, Chapter 6, Problem 22 (Problems & Exercises)
A mother pushes her child on a swing so that his speed is 9.00 m/s at the lowest point of his path. The swing is suspended 2.00 m above the child's center of mass. (a) What is the magnitude of the
centripetal acceleration of the child at the low point? (b) What is the magnitude of the force the child exerts on the seat if his mass is 18.0 kg? (c) What is unreasonable about these results? (d)
Which premises are unreasonable or inconsistent?
The question is licensed under CC BY 4.0.
Final Answer
a. $40.5 \textrm{ m/s}^2$
b. $905 \textrm{ N}$
c. The force exerted by the swing is 5.1 times the weight of the child. This is excessive.
d. The swing speed is overstated.
Solution video
OpenStax College Physics for AP® Courses, Chapter 6, Problem 22 (Problems & Exercises)
Video Transcript
This is College Physics Answers with Shaun Dychko. This child on a swing is traveling with a tangential velocity of 9.00 meters per second at the bottom of the swing, allegedly, and their center of
mass is 2.00 meters from the pivot here at the top of the swing. So finding their centripetal acceleration, the formula is the linear speed squared divided by the radius. So that's 9.00 meters per
second squared divided by 2.00 meters which is 40.5 meters per second squared. In part (b), we are asked what is the force the child applies on the swing and the magnitude of that force is gonna be
the same as the normal force because these are Newton's third law pairs so the normal force upwards on the child is of equal magnitude to the force downwards on the swing which I haven't drawn here
but a dot sort of on the swing itself going down this would be the force on the seat of the swing. Okay! So Newton's second law says that the force upwards which is the normal force minus the force
downwards which is gravity equals mass times acceleration. In this case, we put a subscript c on the acceleration just because it's a circular motion scenario but it is Newton's second law just the
same, you don't have to write the c. Okay but let's put it back! So in order to answer our question, we find the normal force and that would be the force exerted on the seat. So normal force is mass
times acceleration plus force of gravity moving this term to the right hand side by adding it to both sides and the force of gravity is mass times acceleration due to gravity. So we can factor out
this common factor m and then plug in numbers. So the normal force is 18.0 kilograms times 40.5 meters per second squared that we found in part (a) plus 9.8 meters per second squared which is a force
of 905 newtons. Now in order to decide whether that's a reasonable number or not, we have to compare it to something relevant: let's compare it to the child's weight. So we'll take that normal
force—905 newtons— divide it by the child's weight which is 18.0 kilograms times 9.80 newtons per kilogram which is 5.1 and it seems to me that 5.1 is excessive. So the force exerted by the swing is
5.1 times the child's weight—that's too much. Then the swing speed is overstated; it's not reasonable to presume that they are going 9.00 meters per second. | {"url":"https://collegephysicsanswers.com/openstax-solutions/mother-pushes-her-child-swing-so-his-speed-900-ms-lowest-point-his-path-swing-0","timestamp":"2024-11-08T18:57:24Z","content_type":"text/html","content_length":"147960","record_id":"<urn:uuid:257e1d43-4ef1-4e11-8b11-f17828c5bc84>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00775.warc.gz"} |
Yuanyuan Shi - ECE171B (2022 Fall)
ECE171B Linear Control SysteM Theory
2022 Fall
• This course provides an introduction to systems and control in physical, biological, engineering, information, financial, and social sciences, etc. Key themes include linear system modeling,
linear system stability, eigenvalue placement, state feedback controller design, LQR, observability, output feedback controller, and an introduction to reinforcement learning (and its connection
to control theory), etc. It includes both the practical and theoretical aspects of the topic.
Topics Include
• Review of Linear Algebra and ODE
• System Modeling
• Linearization and Linear Time-Invariant Systems
• Linear System Stability
• Lyapunov Stability and Lyapunov Equation
• Reachability
• Eigenvalue Placement
• State Feedback Control
• Optimal Control: Linear Quadratic Regulator
• Observability and State Estimation
• Output Feedback Control
• Introduction to Reinforcement Learning and Connection to Control
• Lecture 1: Introduction and Course Logistics [Lecture 1]
• Lecture 2: Linear Algebra and ODEs review [Lecture 2]
• Lecture 3: System Modeling I [Lecture 3]
• Lecture 11: Eigenvalue Placement [Lecture 11]
• Lecture 12: Linear Quadratic Regulator (LQR) [Lecture 12]
• Lecture 13: Midterm Review
• Lecture 14: Observability and State Estimation [Lecture 14]
• Lecture 15: Output Feedback Control [Lecture 15]
• Lecture 16: Kalman Filter [Lecture 16]
• Lecture 17: Reinforcement Learning and Connection with Control [Lecture 17]
• Lecture 18: Reinforcement Learning II [Lecture 18]
• Lecture 19: Reinforcement Learning III [Lecture 19]
• Lecture 20: Final Exam Review [Lecture Note]
Karl J. Astrom and Richard M. Murray, Feedback Systems: An Introduction for Scientists and Engineers (Second Edition), Princeton University Press. Available at: https://fbswiki.org/wiki/index.php/
Acknowledgment: much of the class material is based on Prof. Na Li's course ES 155: Systems and Control at Harvard University, and Prof. Richard Murray's course at Caltech, CDS 101/
110 Introduction to Control Systems. | {"url":"https://yyshi.eng.ucsd.edu/teaching/ece171b-2022-fall","timestamp":"2024-11-07T12:01:51Z","content_type":"text/html","content_length":"141877","record_id":"<urn:uuid:76aef3fe-f873-48c0-9750-4501c246d8eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00727.warc.gz"} |
Every object in a box is either a sphere or a cube, and every object
Question Stats:
84% 16% (01:15) based on 2391 sessions
Every object in a box is either a sphere or a cube, and every object in the box is either red or green. How many objects are in the box?
(1) There are six cubes and 5 green objects in the box.
(2) There are two red spheres in the box.
Target question: How many objects are in the box? Given: Every object in a box is either a sphere or a cube, and every object in the box is either red or green.
We can solve this using the
Double Matrix Method
This technique can be used for most questions featuring a population in which each member has two characteristics associated with it (aka overlapping sets questions).
Here, we have a population of objects, and the two characteristics are:
- sphere or cube
- red or green
So, we can set up our matrix as follows:
From here, I'll jump straight to . . .
Statements 1 and 2 COMBINED
When we combine the statements, we get the following matrix:
There are several scenarios that satisfy BOTH statements. Here are two:
Case a: the total number of objects = 3 + 3 + 2 + 2 = 10. So, the answer to the target question is: there are 10 objects in the box.
Case b: the total number of objects = 5 + 1 + 2 + 4 = 12. So, the answer to the target question is: there are 12 objects in the box.
Since we cannot answer the target question with certainty, the combined statements are NOT SUFFICIENT.
Answer: E
This question type is
on the GMAT, so be sure to master the technique.
To learn more about the Double Matrix Method, watch this video:
Here's a practice question too: | {"url":"https://gmatclub.com/forum/every-object-in-a-box-is-either-a-sphere-or-a-cube-and-every-object-243453.html#p1876753","timestamp":"2024-11-10T03:08:05Z","content_type":"application/xhtml+xml","content_length":"1022930","record_id":"<urn:uuid:38cc989a-6716-4883-a0bb-699aff040146>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00200.warc.gz"} |
NCERT Solutions for Class 10 Maths Chapter 15 Probability Exercise 15
By Mr Ahmad
NCERT Solutions for Class 10 Maths Chapter 15 Probability Exercise 15 are prepared by specialised, experienced mathematics teachers. Maths is one of the most important subjects for the board exam, and with the help of these chapter-wise NCERT solutions and a little practice you can score very good marks in your board exam. They also help students build a foundation for the upcoming Classes 11 and 12. Students can also check the Important Questions with solutions for Classes 9 to 12.
Class 10 Maths Chapter 15 Probability Exercise 15 contains a total of 2 exercises with 30 questions based on the concept of theoretical probability. Check the previous chapter – NCERT Solutions for Class 10 Maths Chapter 14 Statistics Exercise 14
Important Formulas –
1. The theoretical probability (also called classical probability) of an event E, written as P(E), is defined as
P(E) = (Number of outcomes favourable to E) / (Number of all possible outcomes of the experiment)
2. Let n be the total number of trials. The empirical probability of an event E happening is given by
P(E) = (Number of trials in which the event happened) / n
(i) Experiment : An operation which can produce some well defined outcomes is known as experiment.
(ii) Trial : Performing an experiment is called a trial.
(iii) Equally likely outcomes : Outcomes of trial are equally likely if there is no reason to accept one in preference to the others.
(iv) Sample space : The set of all possible outcomes of an experiment is called sample space.
(v) Elementary event : An event having only one outcome of the experiment.
Note that the sum of probabilities of all the elementary events of an experiment is 1.
Probability – A Theoretical Approach(Classical Probability)
If an event 'A' can happen in 'm' ways and does not happen in 'n' ways, then the probability of occurrence of event 'A', denoted by P(A), is given by
P(A) = m / (m + n) = (Number of favourable outcomes) / (Total number of equally likely outcomes)
Probability of Impossible and Sure Events
The probability of an event which is impossible to occur is 0 and such an event is called an impossible event, i.e., for an impossible event I, P(I) = 0.
The probability of an event which is sure or certain to occur is 1 and such an event is called sure event or certain event.
i.e., for a sure or certain event S, P(S) = 1.
Exercise 15.1
Question 1.
Complete the following statements:
(i) Probability of an event E + Probability of the event ‘not E’ = ………
(ii) The probability of an event that cannot happen is ……… Such an event is called ………
(iii) The probability of an event that is certain to happen is ………. Such an event is called ………
(iv) The sum of the probabilities of all the elementary events of an experiment is ………..
(v) The probability of an event is greater than or equal to …………. and less than or equal to ………..
Question 2.
Which of the following experiments have equally likely outcomes? Explain.
(i) A driver attempts to start a car. The car starts or does not start.
(ii) A player attempts to shoot a basketball. She/he shoots or misses the shot.
(iii) A trial is made to answer a true-false question. The answer is right or wrong.
(iv) A baby is born. It is a boy or a girl.
Question 3.
Why is tossing a coin considered to be a fair way of deciding which team should get the bail at the beginning of a football game?
Question 4.
Which of the following cannot be the probability of an event?
(A) 2/3
(B) -1.5
(C) 15%
(D) 0.7
Question 5.
If P (E) = 0.05, what is the probability of ‘not E’?
Question 6.
A bag contains lemon flavoured candies only. Malini takes out one candy without looking into the bag. What is the probability that she takes out
(i) an orange flavoured candy?
(ii) a lemon flavoured candy?
Question 7.
It is given that in a group of 3 students, the probability of 2 students not having the same birthday is 0.992. What is the probability that the 2 students have the same birthday?
Question 8.
A bag contains 3 red balls and 5 black balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is
(i) red?
(ii) not red?
Question 9.
A box contains 5 red marbles, 8 white marbles and 4 green marbles. One marble is taken out of the box at random. What is the probability that the marble taken out will be
(i) red?
(ii) white?
(iii) not green?
Question 10.
A piggy bank contains hundred 50 p coins, fifty ₹ 1 coins, twenty ₹ 2 coins and ten ₹ 5 coins. If it is equally likely that one of the coins will fall out when the bank is turned upside down, what is
the probability that the coin
(i) will be a 50 p coin?
(ii) will not be a ₹ 5 coin?
Question 11.
Gopi buys a fish from a shop for his aquarium. The shopkeeper takes out one fish at random from a tank containing 5 male fish and 8 female fish (see figure). What is the probability that the fish
taken out is a male fish?
Question 12.
A game of chance consists of spinning an arrow which comes to rest pointing at one of the numbers 1, 2, 3, 4, 5, 6, 7, 8 (see figure.), and these are equally likely outcomes. What is the probability
that it will point at
(i) 8?
(ii) an odd number?
(iii) a number greater than 2?
(iv) a number less than 9?
Question 13.
A die is thrown once. Find the probability of getting
(i) a prime number
(ii) a number lying between 2 and 6
(ill) an odd number
Question 14.
One card is drawn from a well-shuffled deck of 52 cards. Find the probability of getting
(i) a king of red colour
(ii) a face card
(iii) a red face card
(iv) the jack of hearts
(v) a spade
(vi) the queen of diamonds
Question 15.
Five cards – the ten, jack, queen, king and ace of diamonds, are well shuffled with their face downwards. One card is then picked up at random.
(i) What is the probability that the card is the queen?
(ii) If the queen is drawn and put aside, what is the probability that the second card picked up is
(a) an ace?
(b) a queen?
Question 16.
12 defective pens are accidentally mixed with 132 good ones. It is not possible to just look at a pen and tell whether or not it is defective. One pen is taken out at random from this lot. Determine
the probability that the pen taken out is a good one.
Question 17.
(i) A lot of 20 bulbs contain 4 defective ones. One bulb is drawn at random from the lot. What is the probability that this bulb is defective?
(ii) Suppose the bulb drawn in (i) is not defective and is not replaced. Now one bulb is drawn at random from the rest. What is the probability that this bulb is not defective?
Question 18.
A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears
(i) a two digit number.
(ii) a perfect square number.
(iii) a number divisible by 5.
Question 19.
A child has a die whose six faces show the letters as given below:
The die is thrown once. What is the probability of getting
(i) A?
(ii) D?
Question 20.
Suppose you drop a die at random on the rectangular region shown in figure. What is the probability that it will land inside the circle with diameter 1 m?
Question 21.
A lot consists of 144 ball pens of which 20 are defective and the others are good. Nuri will buy a pen if it is good, but will not buy if it is defective. The shopkeeper draws one pen at random and
gives it to her. What is the probability that
(i) she will buy it?
(ii) she will not buy it?
Question 22.
Two dice, one blue and one grey, are thrown at the same time. Now
(i) Complete the following table:
(ii) A student argues that there are 11 possible outcomes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. Therefore, each of them has a probability 1/11. Do you agree with this argument? Justify your answer.
Question 23.
A game consists of tossing a one rupee coin 3 times and noting its outcome each time. Hanif wins if all the tosses give the same result, i.e. three heads or three tails, and loses otherwise.
Calculate the probability that Hanif will lose the game.
Question 24.
A die is thrown twice. What is the probability that
(i) 5 will not come up either time?
(ii) 5 will come up at least once?
[Hint: Throwing a die twice and throwing two dice simultaneously are treated as the same experiment.]
Question 25.
Which of the following arguments are correct and which are not correct? Give reasons for your answer.
(i) If two coins are tossed simultaneously there are three possible outcomes- two heads, two tails or one of each. Therefore, for each of these outcomes, the probability is 1/3.
(ii) If a die is thrown, there are two possible outcomes- an odd number or an even number. Therefore, the probability of getting an odd number is 1/2.
Exercise 15.2
Question 1.
Two customers Shyam and Ekta are visiting a particular shop in the same week (Tuesday to Saturday). Each is equally likely to visit the shop on any day as on another day. What is the probability that
both will visit the shop on
(i) the same day?
(ii) consecutive days?
(iii) different days?
Question 2.
A die is numbered in such a way that its faces show the number 1, 2, 2, 3, 3, 6. It is thrown two times and the total score in two throws is noted. Complete the following table which gives a few
values of the total score on the two throws:
What is the probability that the total score is at least 6?
(i) even
(ii) 6
(iii) at least 6
Question 3.
A bag contains 5 red balls and some blue balls. If the probability of drawing a blue ball is doubles that of a red ball, determine the number of blue balls in the bag.
Question 4.
A box contains 12 balls out of which x are black. If one ball is drawn at random from the box, what is the probability that it will be a black ball? If 6 more black balls are put in the box, the
probability of drawing a black ball is now double of what it was before. Find x.
Question 5.
A jar contains 24 marbles, some are green and others are blue. If a marble is drawn at random from the jar, the probability that it is green is 2/3. Find the number of blue balls in the jar.
| {"url":"https://oxfordclasses.com/ncert-solutions-for-class-10-maths-chapter-15-probability-exercise-15/","timestamp":"2024-11-04T05:55:11Z","content_type":"text/html","content_length":"220704","record_id":"<urn:uuid:4ad338d7-83d2-4882-9bd7-823fb246a369>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00888.warc.gz"}
Sergei Artemov
9^th Estonian Winter School in Computer Science (EWSCS)
IX Eesti Arvutiteaduse Talvekool (EATTK)
Palmse, Estonia
February 29 - March 5, 2004
Sergei Artemov
Graduate Center
City University of New York
Proof Polynomials
Basic knowledge of undergraduate logic.
According to Brouwer, the truth in intuitionistic logic means provability. On this basis Heyting and Kolmogorov introduced an informal Brouwer-Heyting-Kolmogorov (BHK) semantics for intuitionistic
logic. The ideas of BHK led to a discovery of computational semantics of intuitionistic logic, in particular, realizability semantics and Curry-Howard isomorphism of natural derivations and typed
lambda-terms. However, despite many efforts the original semantics of intuitionistic logic as logic of proofs did not meet, until recently, an exact mathematical formulation.
Gödel in 1933 suggested a mechanism based on modal logic S4 connecting classical provability (represented as S4-modality) to intuitionistic logic. This did not solve the BHK problem, since S4 itself
was left without an exact provability model. In 1938 Gödel suggested using the original BHK format of proof carrying formulas to build a provability model of S4. This Gödel's program was accomplished
in 1995 when proof polynomials and the Logic of Proofs (LP) were discovered, shown to enjoy a natural provability semantics, and shown to be capable of realizing all S4-modalities by proof polynomials. The
Logic of Proofs became both an explicit counterpart of modal logic and a reflexive combinatory logic (reflexive lambda-calculus), thus providing a uniform mathematical model of knowledge and computation.
In this course we will discuss several applications of the Logic of Proofs. In the areas of lambda-calculi and typed theories, LP brings in a much richer system of types capable of iterating the type
assignment, in particular, referential types. In the context of typed programming languages LP provides a theory of data types containing programs. In the area of knowledge representation LP allows
us to approach classical logical omniscience problem, which addresses a failure of the traditional logic to distinguish between given facts, like logical axioms, and knowledge which can be only
obtained after a long derivation process. The proof carrying formulas of LP naturally make this distinction. In foundations of formal verification the LP idea of explicit reflection gives a new
mechanism for extending a verification system which guarantees its provable stability.
Course Outline
Lectures 1-2: Explicit tradition in logic and its impact to Computer Science. Classical and intuitionistic proof systems. Operational reading of logical connectives. The problem of finding the
Brouwer-Heyting-Kolmogorov (BHK) provability semantics for intuitionistic logic. Kripke semantics. Gentzen proof systems for classical and intuitionistic logics. Modal logic: time vs knowledge
semantics. Basic modal systems: K,K4,S4,S5,GL. Kripke models for modal logic. Gentzen proof systems for modal logics. Completeness and Normalization Theorems. Gödel's embedding of intuitionistic
logic to S4. The problem of the intended provability semantics for S4.
Lectures 3-4: Natural derivations in intuitionistic logic. Transformation of a Gentzen style proof into a Natural Derivation. Transformation of natural deduction trees into Gentzen style derivations.
Abstract data types, examples. Simple types as implication only formulas. Set theoretical (functional) semantics of simple types. Simply typed lambda-calculus. Curry-Howard isomorphism of natural
derivations and typed lambda-terms. Simply typed combinatory terms calculus a.k.a. the simply typed combinatory logic. Computational and provability meaning of combinators. Emulated
lambda-abstraction and beta-reduction in combinatory logic.
Lectures 5-6: Proof Polynomials and proof-carrying formulas. Logic of Proofs, its provability semantics. Internalization Property. Logic of Proofs vs. Combinatory Logic, polymorphism. Logic of Proofs
vs modal logic, realization theorem. Solution to Gödel's provability problem, two models of provability. Proof polynomials as BHK proofs. Reflexive combinatory logic, its computational semantics and
provability semantics. Areas of applications.
Course materials
• Proof polynomials. Slides for Lectures 1-2. [ps]
• Proof polynomials. Slides for Lectures 3-4. [ps]
• Proof polynomials. Slides for Lectures 5-6. [ps]
Background reading
• K. Gödel, Eine Interpretation des intuitionistischen Aussagenkalkuls, Ergebnisse Math. Colloq., Bd. 4, S. 39-40, 1933.
• K. Gödel, Vortrag bei Zilsel (1938), in S. Feferman, ed., Kurt Gödel Collected Works: Volume III, Oxford University Press, 1995
• A. S. Troelstra and H.Schwichtenberg, Basic Proof Theory, Cambridge University Press, 1996.
• D. van Dalen, Logic and Structure, 3rd ed., Springer-Verlag, 1994.
• J.-Y. Girard, Y. Lafont, P. Taylor, Proofs and Types, Cambridge University Press, 1989.
• S. Artemov, Explicit provability and constructive semantics, Bulletin of Symbolic Logic, v. 7, n. 1, pp. 1-36, 2001. [ps]
• J. Alt and S. Artemov, Reflective lambda-calculus, in R. Kahle, P. Schroeder-Heister, R. F. Stärk, eds., Proc. of Int. Seminar on Proof Theory in Computer Science, PTCS 2001, Lecture Notes in
Computer Science, v. 2183, pp. 22-37, Springer-Verlag, 2001. [ps]
• S. Artemov, Unified semantics for modality and lambda-terms via proof polynomials, in K. Vermeulen and A. Copestake, eds., Algebras, Diagrams and Decisions in Language, Logic and Computation,
CSLI Publications, 2002. [ps]
About the Lecturer
URL: http://www.cs.gc.cuny.edu/~sartemov/ | {"url":"https://cs.ioc.ee/yik/schools/win2004/artemov.php","timestamp":"2024-11-08T18:19:16Z","content_type":"text/html","content_length":"7956","record_id":"<urn:uuid:4f42bff4-f85f-4daf-b3f5-85333cf840bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00731.warc.gz"} |
Fatigue Damage of Joint Welds under Variable Amplitude Load
Investigation of Fatigue Damage to Welded Joints under Variable Amplitude Loading Spectra
Yan-Hui Zhang and S J Maddox
Structural Integrity Technology Group
TWI Limited
Granta Park
Great Abington
CB1 6AL, UK
Paper published in International Journal of Fatigue 2008, vol. 31, Issue 1, January 2009. pp.138-152.
The paper presents the results of the investigation on the effect of loading spectra with different mean stresses on the validity of Miner's rule, and the effect of stresses below the constant
amplitude fatigue limit (CAFL) on the fatigue performance of two types of weld joint. In support of understanding the mechanism for any deficiency to Miner's rule, fracture mechanics analysis was
carried out by measuring and predicting the crack growth in specimens tested under both constant amplitude and variable amplitude loading. The experimental results showed that, although Miner's rule
would predict the same fatigue life for each type of specimens tested under the same spectrum, in fact the actual value of Σ(n/N) at failure strongly depended on the sequence applied. The influence
of the loading sequence on Σ(n/N) was in agreement with that on crack growth rates. The deficiency in Miner's rule was attributed primarily to the stress interaction effects resulting from the type
of loading sequence used. The experimental results also showed that, under certain circumstances, stress ranges well below the fatigue limit were found to be as damaging as implied by the S-N curve
extrapolated beyond the CAFL without changing the slope. The value of the minimum fully damaging stress range was found to depend on the basic fatigue strength of the weld joint.
Key words:
Variable amplitude loading, Miner's rule, welded joint, fatigue limit, crack growth.
Nomenclature and definitions
Block length N [L] : Total number of cycles in a variable amplitude loading block.
CA: Constant amplitude.
Constant amplitude fatigue limit (CAFL): Fatigue strength under constant amplitude loading corresponding to infinite fatigue life or a number of cycles large enough to be considered infinite by a
design code.
Equivalent (constant amplitude) stress range ΔS [eq] : For a particular number of cycles to failure, ΔS[eq] is the constant amplitude stress range which, according to Miner's linear cumulative damage
rule, is equivalent in terms of fatigue damage to a variable amplitude stress spectrum.
Fully damaging stress range: Stress range that is as damaging as implied by the CA S-N curve extrapolated beyond the CAFL without changing the slope.
Loading block: The stress history between successive applications of the peak stress in the spectrum.
Miner's rule: Fatigue failure under variable amplitude loading corresponds to the following:
Σ(n[i]/N[i]) = 1, with the sum taken over i = 1 to k
Where n [1] , n [2] , etc are the numbers of cycles at applied stress ranges ΔS [1] , ΔS [2] , etc and N [1] , N [2] , etc are the corresponding numbers of cycles to failure under constant amplitude
loading at those stress ranges, and k is the number of stress range levels.
p [i] : ΔS[i]/ΔS[max], relative stress range in a spectrum.
R: Stress ratio (= S[min]/S [max] ).
ΔS: Stress range
ΔS [i] : The ith stress range in a spectrum.
ΔS [max] : The maximum (peak) stress range in a spectrum.
ΔS' [min] : The minimum fully damaging stress range in a spectrum, below which stresses are not as damaging as implied by the CA S-N curve extrapolated beyond the CAFL without a slope change.
S-N curve: Relation between applied stress range and life in cycles to failure under constant amplitude loading. For welded joints it has the general form ΔS^mN=C, where C and m are material constants.
Sequence A: A loading spectrum with all stresses cycling down from a constant maximum tensile stress
Sequence B: A loading spectrum with all stresses cycling around a constant mean stress
Sequence C: A loading spectrum with all stresses cycling up from a constant minimum stress
Spectrum irregularity factor, I: The number of positive mean crossings divided by the total number of cycles in one block.
VAM: Variable amplitude.
Wide band spectrum or loading: This describes a loading history with an irregularity factor significantly less than 1.0.
1 Introduction
In service the great majority of structures and components are subjected to stresses of variable amplitude (VA). The fatigue design of welded joints in such structures is based on fatigue data
obtained under constant amplitude (CA) loading, used in conjunction with a cumulative damage rule to estimate the damage introduced by cycles of various magnitudes in the service stress history. The
most widely used is Miner's linear cumulative damage rule^[1], which states that the following should be satisfied in fatigue design:
Σ(n[i]/N[i]) ≤ 1, with the sum taken over i = 1 to k     [1]
where n [1] , n [2] , etc are the numbers of cycles corresponding to applied stress ranges ΔS [1] , ΔS [2] , etc expected in the life of the structure and N [1] , N [2] , etc are the corresponding
numbers of cycles to failure under CA loading at those stress ranges, and k is the number of stress range levels. Miner's rule suggests that any structure with Σ(n/N) <1.0 is safe for operation.
An implicit assumption in Miner's rule is that the fatigue damage due to the application of a particular stress cycle in a VA load sequence is exactly the same as that due to the same stress cycle
under CA loading. However, there is extensive evidence^[2-7] to suggest that VA stress cycles could be more damaging than the same stress cycles under CA loading, with the result that Miner's rule
can be unsafe (ie Σ(n/N) <1.0 at failure) under certain circumstances. Gurney has investigated the fatigue performance of welded joints under VA loading extensively, as summarised recently.^[6] His
test results suggested that Miner's rule was generally correct or conservative when the spectrum block length was sufficiently long (N[L] >1,000 cycles) and the mean stresses for all cycles were
comparable with that used in the CA tests, which were mostly performed at R=0 or -1. However, he found that Miner's rule tended to be unsafe in the following circumstances, even when assuming that
all stresses below the CAFL were fully damaging (ie as implied by the S-N curve extrapolated beyond the CAFL without changing the slope, see below):
• Short block length, typically less than 100 cycles
• High mean stresses in the spectrum.
However, the latter may have been a reflection of the type of test specimen used - plates with longitudinal fillet welded edge attachments (see Figure 1a). The fatigue performance under CA loading
was found to exhibit mean stress dependence^[5] but the VA tests carried out with variable stress ratio were assessed on the basis of the CA S-N curve obtained at a constant R (often R=0).
There are also doubts^[5, 8-12] about the method of treating stresses below the constant amplitude fatigue limit (CAFL). The most widely used assumption is that their damaging effect can be
represented by the CA S-N curve extrapolated beyond the CAFL, widely assumed to correspond to N=10^7 cycles, at a shallower slope, typically m = 5 instead of 3. However, there is evidence^[5,10] that
in fact they are more damaging than this. For example, it was reported^[5] that the use of a 2-slope S-N curve with the slope change at a stress range above 10.1N/mm^2 (corresponding to about 5.5x10^
8 cycles) is potentially unsafe, particularly for loading spectra containing large numbers of small stress ranges. This was especially true for fully tensile loading.
When applying Miner's rule in the design of welded structures the CA fatigue strength is generally represented by a single S-N curve, expressed in terms of the applied stress range regardless of mean
stress, eg BS 7608.^[13] This is to allow for the inevitable presence of high tensile residual stresses, which are assumed to produce conditions equivalent to the most severe high applied tensile
mean stress conditions under either CA or VA loading. However, in view of the apparent influence of applied mean stress reported, this may be an over-simplification.
Thus, the objectives of this study were to continue Gurney's work and investigate the effects of loading spectra with different mean stresses on both the validity of Miner's rule and the damaging
effect of stresses below the CAFL.
2 Approach
Three variable amplitude loading sequences, all based on the same spectrum (with respect to stress ranges, corresponding number of cycles at each stress range and block length) but with different
mean stresses, were tested to investigate the effect of mean stress on fatigue performance under VA loading. They were:
• Sequence A, stresses cycling down from a constant high tensile stress, in which case the mean stress increased with decrease in stress range;
• Sequence B, stresses cycling about a constant mean stress;
• Sequence C, stresses cycling up from a constant minimum stress, in which case the mean stress increased with increase in stress range.
The fatigue tests were performed on two types of fillet welded joint in steel with different fatigue strengths. CA S-N curves were established for each by testing with the maximum stress held
constant, as with Sequence A. In this way it was anticipated that the VA test results obtained under Sequence A would provide a unique opportunity to evaluate the accuracy of Miner's rule by
eliminating any possible effect of mean stress.
A spectrum with a so called concave-up stress distribution, in which fatigue damage from small stresses was predominant, was derived to investigate the effect of small stresses. Using the same
approach as Gurney^[5], it was anticipated that by successively adding progressively smaller stress ranges, Miner's sum would be significantly increased when a non-damaging stress range was
approached. On the other hand, if the lowest stress in a spectrum produced fatigue damage consistent with the CA S-N curve extrapolated beyond the CAFL without a slope change, Miner's sum would be
expected to be almost constant.
In support of understanding the mechanisms responsible for any deficiency in Miner's rule, fracture mechanics analysis was carried out by measuring and predicting the crack growth in specimens tested
under both CA and VA loading.
3 Test specimens
Two types of specimen were used, designated G and F, as shown in Figure 1. They were manufactured from two grades of carbon manganese structural steel to BS 4360, one from Grade 50D and the other
Grade 50B. The chemical compositions and mechanical properties of the parent materials are given in Tables 1 and 2.
Table 1 Chemical composition of the parent materials
Element | Type G specimen (BS 4360 Grade 50D)*, Batch 1 | Type G specimen, Batch 2 | Type F specimen (BS 4360 Grade 50B)
C 0.14 0.16 0.17
Mn 1.35 1.35 1.36
Si 0.39 0.33 0.26
S 0.012 0.029 0.002
P 0.012 0.022 0.031
Cr 0.022 0.12 0.017
Ni 0.021 0.087 0.016
Al 0.046 0.025 0.029
Cu 0.019 0.017 0.011
Nb 0.026 0.003 <0.002
Ti 0.003 0.002 0.004
N 0.0056 0.0065
*: from^[5].
Table 2 Mechanical properties of parent materials
Mechanical properties | Type G specimen*, Batch 1 | Type G specimen, Batch 2 | Type F specimen
Yield stress, N/mm ^2 399 386 418
Tensile strength, N/mm ^2 541 509 554
Elongation 34% 27% 34%
*: from^[5].
The type G specimen, Figure 1a, consisted of a 12mm thick plate with longitudinal attachments fillet welded to each edge, while the type F specimen involved a 12.5mm plate with longitudinal
attachments fillet welded on each surface, Figure 1b. The former specimens were from the batch tested by Gurney^[5] and the latter were from the batch tested by Maddox.^[14] In terms of fatigue
strength, they are designated as Class G and F respectively^[13]; hence the present designations. Both types of specimen were symmetrical and could be subjected to axial loading without any
significant secondary bending.
Fig.1. Details of test specimens and the locations where residual stresses were measured:
a) Longitudinal non-load-carrying attachments on plate edges (type G specimen);
b) Longitudinal non-load-carrying fillet welded joints (type F specimen)
c) Locations where residual stresses were measured
All the specimens were fabricated so that the direction of stressing was parallel to the rolling direction. The fillet welds were made in the flat position in two runs with the stop-start positions
in the middle of the attachments so as to avoid, as far as possible, the effects of end craters. The fillet welds were carried around the ends of the attachments in the type F specimens, but not in
the type G specimens. All specimens were tested in the as-welded condition.
4 Experimental details
4.1 Fatigue testing
The test programme involved both CA and VA amplitude loading tests on each type of specimen. All the specimens in the testing programme were subjected to axial loading in servo-hydraulic fatigue
testing machines at testing frequencies in the range 3 to 8Hz.
Apart from specimen F-14, all the CA tests were performed under a constant maximum tensile stress of 280N/mm^2, about 0.7 x yield strength for the type G specimen and 0.67 x yield strength for type F
specimen. For specimen F-14, the maximum stress was held constant at the lower value of 135N/mm^2.
As noted previously, some of the VA tests also involved cycling down from a constant tensile stress, while others involved cycling about a constant tensile mean stress and others cycling up from a
constant tensile minimum stress, as detailed in the next section.
Fatigue tests continued until complete failure of the specimen. Examples of fatigue failures in each type of specimen are shown in Figure 2. The failure modes were the same under CA and VA loading.
In the type G specimens (Figure 2a) fatigue cracks initiated at one or more of the weld ends, propagated through the plate thickness and finally propagated across the plate width as edge cracks. In
the type F specimens (Figure 2b) fatigue cracks initiated at one or more of the weld toes at the ends of the attachments. They then propagated through the plate thickness, adopting semi-elliptical
shapes, and finally grew to failure across the width of the main plate.
Fig.2. Examples of fatigue failures in specimen types G and F
a) Type G
b) Type F
4.2 Variable amplitude loading spectrum
To investigate the effect of small stress ranges, a loading spectrum with a concave-up shape in the plot of relative stress range, p [i] , the ratio of ith stress range to the maximum stress range in
a spectrum, against exceedence and a reasonably long block length, N[L], was derived. The block length in a given test depends on the minimum value of p [i] adopted. The length of the basic spectrum,
in which the lowest value of p [i] was 0.04, was ~2x10^5 cycles. However, it was shorter for those cases when low p [i] values were omitted. The stress distribution was derived in such a way that
small stresses made a significant contribution to fatigue damage. Details of the stress ranges and the corresponding numbers of cycles in a block are shown in Table 3 and the distribution is plotted
in Figure 3. The spectrum used by Gurney^[5] is also shown for comparison. As will be seen, both spectra exhibit concave-up shapes, but the present spectrum is longer than Gurney's. The relative
fatigue damage, defined as the ratio of the fatigue damage at a stress level ΔS [i] to the fatigue damage at the maximum stress range, n[i]ΔS[i]^m / (n[max]ΔS[max]^m), where m was assumed to be 3, as for the BS 7608[13]
Class F and G design curves, gradually increased with decreasing stress range, Figure 4. In comparison with Gurney's spectrum, included in Figure 4, this should make the results of the present VA
tests more sensitive to the inclusion of small stresses in the spectrum.
Table 3 Details of concave-up spectrum derived for the VA tests
Relative stress range, p [i] Stress range, N/mm ^2 Cycles Exceedence
1.00 210.0 1 1
0.90 189.0 3 4
0.80 168.0 6 10
0.70 147.0 12 21
0.60 126.0 23 44
0.50 105.0 48 92
0.40 84.0 109 202
0.30 63.0 296 498
0.25 52.5 544 1,042
0.20 42.0 1,125 2,167
0.15 31.5 2,815 4,982
0.10 21.0 9,500 14,482
0.06 12.6 43,981 58,463
0.04 8.4 148,438 206,901
Fig.3. Concave-up spectrum used in the present VA tests. The spectrum used by Gurney^[5] is also included for comparison. The dashed lines indicate the linear stress distribution for each spectrum.
Fig.4. Comparison of the relative fatigue damage (n[i]ΔS[i]^m / (n[max]ΔS[max]^m)) in the current stress distribution with that in the distribution used by Gurney.^[5]
A maximum stress range of 210N/mm^2 was used in all VA loading tests. Initially, a VA test was conducted using a spectrum with the minimum stress range above the estimated CAFL, to ensure that all
stress cycles were damaging. As in BS 7608, the CAFL was defined as the fatigue strength corresponding to 10^7 cycles on the S-N curves generated from the CA tests on the two types of specimen.
Smaller stress ranges were then gradually added in the subsequent VA tests to establish the minimum stress range, ΔS' [min] , that was still contributing to fatigue damage in line with that expected
according to the CA S-N curve extrapolated beyond the CAFL without a slope change. The minimum p [i] in the spectra tested ranged from 0.15 to 0.04 for type G specimen and from 0.25 to 0.1 for the
type F specimen.
For the same basic stress distribution (identical number of cycles for each stress range) described above, the three types of loading sequence used to investigate the effect of mean stress were
applied as follows:
Sequence A - stresses cycling down from a constant maximum stress of 280N/mm^2;
Sequence B - stresses cycling at a constant mean stress of 175N/mm^2;
Sequence C - stresses cycling up from a constant minimum stress of 70N/mm^2.
The resulting maximum and minimum stresses for each p [i] value are shown schematically in Figure 5. As indicated, the maximum stress and maximum stress range was the same in every case. One
specimen, F-13, was tested under a variant of Spectrum A in which the maximum stress was reduced to 147N/mm^2 in order to investigate the effect of mean stress under this sequence.
Fig.5. Schematic illustration showing the difference in maximum and minimum stresses for the three sequences used in the VA tests.
The VA tests were performed in computer-controlled testing machines that were programmed to apply the stress cycles in each block in a random order. This was achieved by selecting p [i] values using
a random number generator. When the whole of the first block had been applied, the process started again and subsequent blocks were applied in the same random order. This process was repeated until
the specimen failed. Examples of stress - time histories for each type of sequence are shown in Figure 6.
Fig.6. Examples showing the three loading sequences
a) Cycling-down from a constant maximum stress;
b) Cycling at a constant mean stress;
c) Cycling-up from a constant minimum stress
4.3 Crack initiation monitoring and growth measurements
To support a fracture mechanics analysis of the test results, crack initiation and growth were monitored in many type F specimens under both CA and VA loading. As seen in Figure 2b, the cracks in
these specimens propagated from a weld toe through the plate thickness with a semi-elliptical shape. The crack depth a and surface length 2c were measured using a combination of visual inspection and
the alternating current potential drop (ACPD) method.
The visual inspection was aided by the application of soap solution and a magnifier to detect fatigue-induced cracking and to monitor its propagation. When applied to specimens while they were being
tested under cyclic loading, this method was able to detect a crack length of ~2mm. However, ACPD was able to detect evidence of fatigue cracking before the soap solution method. As seen in Figure 2b
, the soap solution usually left marks on the fracture surface so that the crack depth corresponding to a measured surface length at a known endurance could be measured after failure. Additional a
-growth data were also obtained by ACPD.
4.4 Residual stress measurements
Neglect of the effect of mean stress in fatigue design codes is based on the assumption that high tensile residual stresses are always present in welded structures. To confirm this for the present
specimens, and to investigate any change of residual stresses during cyclic loading, residual stresses close to the plate surface were measured in both types of specimen. The hole drilling method was
used, with the holes located 5mm from either the attachment end (type G specimen) or the weld toe (type F specimen), as indicated in Figure 1. Measurements were made at four locations in each specimen.
In addition, two more measurements were made in a type F specimen after it had been subjected to 10 blocks of cycles under VA loading, which was less than 1% of the endurance of that specimen.
5 Test results
5.1 Residual stress measurements
The results of the residual stress measurements are presented in Table 4. These confirmed that high tensile residual stresses acting parallel to the direction in which the specimens were to be
fatigue loaded were present in both types of specimen near the areas where crack initiation was expected to occur. The residual stresses were relatively higher in the type G specimen, approaching the
yield strength of the parent metal, while they were about 67% of yield in the type F specimen.
Table 4 Results of residual stress measurements
Specimen Yield strength, N/mm ^2 Residual stress, N/mm ^2
Location 1 Location 2 Location 3 Location 4 Average
G-02, as-received 386 - 399 455 324 296 404 370
F-02, as-received 418 302 237 300 285 281
F-08, 10 blocks under VA loading* 68 84 - - 76
*: Residual stresses were measured after the specimen was tested under VA loading for ten blocks. The total fatigue life of this specimen was 1,147 blocks.
Under the fatigue loading, part of the residual stresses quickly relaxed. The average residual stress in the type F specimen, tested under Sequence A, was reduced by 73% after the specimen had been
tested for <1% of the total life. This was in agreement with other work^[15] for a similar welded joint containing yield magnitude residual stress. In that case, about 80% of the residual stresses
relaxed under the first application of the maximum tensile stress in the spectrum, which corresponded to about 57% of yield. Similar results were also reported by others.^[16,17]
Although the fatigue loading reduced the residual stresses considerably, those remaining were still significant with respect to small stress ranges in the spectrum. The average residual stress from
the two measurements after 10 blocks of VA loading noted above was 76N/mm^2. It is expected that a similar level would remain for the rest of the life since residual stress relaxation occurs mainly
in the first application of the maximum stress.^[15,18] A remaining residual stress of 70N/mm^2 would result in actual stress ratios of 0.74 and 0.63 for applied stress ranges of 21 and 31.5N/mm^2
(corresponding to p [i] =0.10 and 0.15) respectively.
5.2 Constant amplitude tests
The results obtained from the type G specimens, together with a result obtained by Gurney from the same batch of specimens that happened to be tested with the same S [max] ^[5], are given in Table 5
and plotted in Figure 7. Also shown is the best-fit S-N curve:
ΔS^2.728 N=1.183x10^11 [2]
Table 5 Constant amplitude test results for the type G specimen, S[max]=280N/mm ^2
Specimen No. Stress range, N/mm ^2 Stress ratio R Cycles to failure
G-01 80 0.71 736,000
G-02 120 0.57 237,000
G-03 55 0.80 2,240,000
G-04 65 0.77 1,350,000
871* 280 0.0 26,500
*: from^[5].
Fig.7. Constant amplitude test results for the type G specimen, tested at a constant maximum stress of 280N/mm^2. The test results obtained at R=0^[6] were virtually on the BS 7608 Class G mean S-N curve.
It may be noted that the test results obtained at R=0 from this batch of specimens^[6] happened to lie virtually on the BS 7608 Class G mean S-N curve, as shown in Figure 7. The present results
obtained at low stress ranges are lower than those obtained at R=0. This will be seen to have a significant effect on Σ(n/N) when the VA test results are evaluated.
Again five results were used to establish the CA S-N curve for the type F specimens, four from the present tests, obtained under a constant maximum stress of 280N/mm^2, and a fifth from tests carried
out by Maddox^[14] at a maximum stress of 267N/mm^2. These results are presented in Table 6 and plotted in Figure 8 together with the best-fit S-N curve:
ΔS^3.072 N=1.312x10^12 [3]
Table 6 Constant amplitude test results for the type F specimen
Specimen No. Maximum stress, N/mm ^2 Stress range, N/mm ^2 Stress ratio R Cycles to failure
F-01 280 90 0.71 1,378,500
F-02 280 120 0.57 546,600
F-11 280 65 0.77 3,271,600
F12 280 140 0.50 360,400
SN3-31* 267 240 0.1 60,100
F-14 135 65 0.52 3,866,500
Fig.8. Constant amplitude results for the type F specimen, tested at a constant maximum stress of ~280N/mm^2.
Figure 8 also includes the experimental data obtained from the same batch of specimens but at R=0^[14], as well as the BS 7608 Class F mean curve, for comparison. It will be seen that the mean stress
had little effect on the fatigue strength of the type F specimen investigated, as was also found for a similar type of specimen.^[19] This was also confirmed by comparison of the results obtained
from specimens F-14 and F-11, tested at the same stress range but different maximum stresses. The fatigue life was only slightly higher for specimen F-14 tested at S[max]=135N/mm^2 when compared with
specimen F-11 tested at S[max]=280N/mm^2, a difference that was partly due to the longer crack growth length before failure under the lower maximum stress in F-14.
5.3 Variable amplitude tests
5.3.1 Method used to analyse VA test results
The VA test results were analysed in terms of Miner's rule to check its validity and to examine the effect of stresses below the CAFL. The CAFL was assumed to correspond to a fatigue endurance of 10^
7 cycles, in accordance with the recommendation of BS 7608. For each type of specimen, three S-N curves based on the mean curves fitted to the CA data were used to determine N in the calculation of Σ(n/N) (a calculation sketch follows the list below):
• a single curve without any slope change;
• a bi-linear curve with a slope change from m to (m+2) at the CAFL;
• a single curve cut off at the CAFL so that all stresses below it were assumed to be non-damaging.
5.3.2 Type G specimen
The test results are summarised in Table 7 where Σ(n/N) values at failure were calculated using Eq. (2), the CA S-N curve obtained under a constant maximum stress of 280N/mm^2. As will be seen:
• Miner's rule was significantly non-conservative for all tests under Sequence A, even when an S-N curve without a slope change was used in calculating Σ(n/N). It should be noted that, at the same
stress range, both CA and VA tests had the same maximum and minimum stresses. This ruled out any possibility of a mean stress effect. Hence these non-conservative results must have resulted from
a form of stress interaction under spectrum loading, whereby some stress cycles become more damaging than expected on the basis of their effect under CA loading.
• Small stresses below the fatigue limit were still 'fully damaging'. The effect of small stresses can be seen by comparing the Σ(n/N) values calculated using the single-slope S-N curve with those
determined using either the bi-linear curve or the single curve with a cut-off at the fatigue limit. If the smallest stress range in a spectrum does not produce fatigue damage, the number of
blocks to failure would be the same as that for a spectrum without this stress range. By comparing both the number of blocks to failure and the Miner's rule damage sums for the two spectra with
minimum p [i] =0.06 and p [i] =0.04 (corresponding to a stress range of 8.4N/mm^2), it can be concluded that stress ranges as low as 8.4N/mm^2 (only 27% of the assumed fatigue limit at 10^7
cycles) were still fully damaging.
• For the specimens tested under Sequence B (constant mean stress of 175N/mm^2), Miner's rule was still non-conservative, even when the S-N curve without a slope change was used. However, the Σ(n/N) value was significantly greater than those obtained under Sequence A.
• For the specimens tested under Sequence C (constant minimum stress of 70N/mm^2), Miner's rule was very conservative even when the S-N curve with a cut-off at the assumed fatigue limit was used,
which gave Σ(n/N) =3.38.
Table 7 VA test results for the type G specimen
Specimen No | Minimum p[i] | Sequence | Block length, cycles | Minimum stress range, N/mm^2 | Cycles to failure | Number of blocks to failure | Σ(n/N): no slope change^a / bi-linear^b / cut-off at fatigue limit^c
G-06 0.15 A 4,982 31.5 1.37x10 ^6 275 0.43 0.43 0.43
G-07 A 3.16x10 ^6 218 0.41 0.38 0.34
G-10 0.10 C 14,482 21.0 3.12x10 ^7 2,152 4.08 3.70 3.38
G-11 B 5.71x10 ^6 394 0.75 0.68 0.62
G-08 0.06 A 58,463 12.6 1.13x10 ^7 212 0.48 0.38 0.33
G-09 0.04 A 206,901 8.4 3.66x10 ^7 181 0.49 0.33 0.28
a. ΔS^2.728N=1.183x10^11, the mean S-N curve obtained under a constant maximum stress of 280N/mm^2.
b. Slope change from m to (m+2) at 10^7 cycles.
c. Cut-off at the fatigue limit, which was 31N/mm^2 corresponding to an endurance of 10^7 cycles.
These characteristics of the results are more evident in Figure 9, which shows them plotted in terms of the equivalent CA stress range. This is the CA stress range which, according to Miner's rule,
is equivalent in terms of fatigue damage to a VA stress spectrum. It relates to the CA S-N curve for the detail under consideration as follows:
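In the standard Miner's rule form assumed here, ΔS[eq] = [Σ(n[i] x ΔS[i]^m) / Σn[i]]^(1/m), with n[i] cycles applied at stress range ΔS[i],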
where m is the slope of the CA S-N curve, that is 2.728 in the present case. If an experimental result lies on or above the CA S-N curve it means that Miner's rule gave an accurate or safe estimate
of the actual fatigue life. From Figure 9, the above observations regarding the effect of the loading sequence on Miner's rule can be readily seen. The result obtained from the test with a minimum
stress range of 8.4N/mm^2 in the spectrum agrees exactly with the line extrapolated from the results of those tests with higher minimum stresses, indicating that a stress range as low as 8.4N/mm^2
was still fully damaging under Sequence A.
Fig.9. Comparison of the VA test results with the CA S-N curve (ΔS^2.728 N = 1.183x10^11) obtained with S[max]=280N/mm^2 for the type G specimen, expressed in terms of the equivalent stress range
The opportunity has also been taken to compare the present results with those obtained from the same batch of specimens by Gurney.^[5] Gurney evaluated his results in terms of the CA S-N curve
obtained with R=0. As seen earlier this was less steep and slightly higher than the present curve obtained with S [max] =280N/mm^2. Gurney obtained consistently low Σ(n/N) values at failure, ranging
from 0.49 to 0.74 for the tests with the peak stress range at R=0 and from 0.44 to 0.78 for the tests with the peak stress range at R=-1, and concluded that Miner's rule was significantly
non-conservative for the conditions investigated. Clearly, re-analysis of his results using the present lower S-N curve will result in higher Σ(n/N) values. In fact, the range of Σ(n/N) increases to
0.75-0.95 for the VA tests with the peak stress range at R=0, and to 0.73-1.28 for the VA tests with the peak stress range at R=-1, indicating that Miner's rule is more reasonable. This is also
evident from Figure 9. Compared with the very small amount of scatter in the CA results (Figure 7), Gurney's VA data still indicate that Miner's rule is generally non-conservative but not by as much
as he found using the CA S-N curve obtained at R=0.
In practice, the CA S-N curves given in design codes, for example BS 7608, would be used in fatigue design. Therefore, Σ(n/N) was also calculated using both the mean and design curves given in BS
7608 to examine the accuracy of Miner's rule. The results are presented in Table 8. As will be seen:
• Since the present CA results fell below the Class G mean S-N curve the non-conservatism was increased when this was used to apply Miner's rule to specimens tested under Sequences A and B. Under
Sequence A the Miner's rule damage sums were as low as 0.26.
• For the specimen tested under Sequence C, it was still very conservative.
• Even when the design curve was used, the Miner's rule damage sums for all tests under Sequence A were still significantly less than unity. The average life would only be about half that predicted
by the BS 7608 design curve (with a slope change at 10^7 cycles).
Table 8 VA test results for the type G specimen - further analysis using the BS 7608 curves for the Class G details
Specimen No | Minimum p[i] | Sequence | Σ(n/N) based on the mean curve: no slope change^a / bi-linear^b / cut-off at fatigue limit^c | Σ(n/N) based on the design curve: no slope change^d / bi-linear^e / cut-off at fatigue limit^f
G-06 0.15 A 0.28 0.27 0.24 0.64 0.64 0.64
G-07 A 0.26 0.22 0.19 0.59 0.55 0.51
G-10 0.10 C 2.55 2.21 1.88 5.78 5.41 5.02
G-11 B 0.47 0.40 0.35 1.06 0.99 0.92
G-08 0.06 A 0.28 0.22 0.19 0.64 0.55 0.49
G-09 0.04 A 0.27 0.19 0.16 0.61 0.47 0.42
a. ΔS^3.0N=5.66x10^11, the Class G mean curve given in BS 7608.
b. Slope change from 3.0 to 5.0 at a fatigue limit of 38N/mm^2 (corresponding to 10^7 cycles).
c. Cut-off at the fatigue limit of 38N/mm^2.
d. ΔS^3.0N=2.50x10^11, the Class G design curve given in BS 7608.
e. Slope change from 3.0 to 5.0 at a fatigue limit of 29N/mm^2 for the design curve.
f. Cut-off at the fatigue limit, which was 29N/mm^2 corresponding to an endurance of 10^7 cycles.
5.3.3 Type F specimen
The VA test results, as well as the calculated Miner's sums using the experimentally determined mean S-N curve (Eq. (3)), are presented in Table 9. As with the type G specimen results, the VA results
are also plotted in terms of the equivalent CA stress range in Figure 10. In general, the fatigue test results displayed similar characteristics to those obtained from type G specimens but there were
also some significant differences, as follows:
• Miner's rule was non-conservative for the specimens tested under Sequence A, but less so than in type G specimens.
• Stresses as low as 31.5N/mm^2 appeared to be fully damaging. This stress is ~70% of the fatigue limit, assumed to be 45.3N/mm^2 at 10^7 cycles according to Eq. (3), slightly higher than the Class
F design fatigue limit of 40N/mm^2. However, when the minimum stress range in the spectrum was reduced to 21N/mm^2 (p [i] =0.1), about 46% of the assumed CAFL, it was not fully effective, as
judged by comparing the number of blocks to failure with that from the test with the slightly higher minimum p [i] value of 0.15.
• If ΔS'[min]/CAFL is defined as the relative fatigue limit, below which stress ranges are no longer 'fully damaging', it appears that both the absolute value of ΔS'[min] and ΔS'[min]/CAFL x 100%
decreased with decrease in basic fatigue performance. Thus, in the case of the type G specimen, the values were 8.4 N/mm^2 and 27% respectively, as compared with 31.5N/mm^2 and 70% for the higher
fatigue performance type F specimen.
• Miner's rule was slightly non-conservative for the specimens tested under Sequence B, but again less so than for the type G specimens.
• Miner's rule was conservative for the specimens tested under Sequence C, although again the deviation from Miner's rule was not as great as that seen in type G specimens.
Table 9 VA test results for the type F specimen
Specimen No. | Minimum p[i] | Sequence | Block length, cycles | Minimum stress range, N/mm^2 | Cycles to failure | Number of blocks to failure | Σ(n/N): no slope change^a / bi-linear^b / bi-linear^c / cut-off at fatigue limit^d
F-03 A 1.10x10 ^6 1,053 0.46 0.46 0.46 0.46
F-15 0.25 1,042 52.5 1.50x10 ^6 1,441 0.63 0.63 0.63 0.63
F-09 C 3.81x10 ^6 3,661 1.59 1.59 1.59 1.59
F-10 B 1.87x10 ^6 1,799 0.78 0.78 0.78 0.78
F-04 A 2.21x10 ^6 1,021 0.53 0.51 0.53 0.44
F-13 0.20 A ^e 2,167 42.0 2.51x10 ^6 1,158 0.60 0.58 0.60 0.50
F-06 C 5.62x10 ^6 2,592 1.34 1.30 1.30 1.13
F-07 B 3.92x10 ^6 1,808 0.94 0.91 0.91 0.79
F-05 0.15 A 4,982 31.5 4.10x10 ^6 822 0.50 0.45 0.50 0.36
F-08 0.10 A 14,482 21.0 1.66x10 ^7 1,147 0.79 0.64 0.74 0.50
a. ΔS^3.072N=1.312x10^12, the mean S-N curve obtained under a constant maximum stress of 280N/mm^2.
b. Slope change from 3.072 to 5.072 at N=10^7cycles on CA S-N curve.
c. Slope change from 3.072 to 5.072 at N=3.3x10^7 cycles on CA S-N curve (corresponding to assumed CAFL of 31.5N/mm^2)
d. Cut-off at the fatigue limit, which was 46.3N/mm^2 corresponding to an endurance of 10^7 cycles.
e. The maximum stress for this test was kept at 147N/mm^2, different from other tests where the maximum stress was 280N/mm^2.
Fig.10. Comparison of the VA test results with the CA S-N curve (ΔS^3.072N =1.312x10^12) for the type F specimen, expressed in terms of the equivalent stress range, and the S-N curve predicted by
fracture mechanics (ΔK[th]=63N/mm^3/2)
To investigate the possible effect of the magnitude of the maximum stress used in Sequence A, specimen F-13 was tested under the same conditions as those for specimen F-04 except that the maximum
stress was reduced from 280 to 147N/mm^2, so that the largest stress cycle ranged between 147 and -63N/mm^2. Compared with the result of specimen F-04, the Miner's rule damage sum was increased slightly,
from 0.53 to 0.60 when the S-N curve without a slope change was used. This is still non-conservative, but it suggests a small influence of the maximum stress in Sequence A on Σ(n/N). However, it will
be noted that Σ(n/N) varied from 0.46 to 0.63 for specimens F-03 and F-15, both tested under Sequence A with the same maximum stress of 280N/mm^2. Thus, the apparent effect of the magnitude of the
maximum stress could simply reflect scatter in fatigue lives.
The test result for specimen F-13 also implies that the applied mean stress was not significant in comparison with the effect of the sequence type. For each stress range (p [i] value), the mean
stress in the spectrum for specimen F-13 was less than that in Sequence B, and even less than that in Sequence C for p [i] > ~0.37, see Figure 5. However, the Σ(n/N) value from the test on specimen
F-13 was significantly lower than those from the specimens tested under both Sequences B and C.
A possible consequence of the above observation that stress ranges below the conventional CAFL (corresponding to N = 10^7 cycles on the CA S-N curve) appear to have been fully damaging is that the
current method of accounting for the damaging effect of such stresses when applying Miner's rule is wrong. As noted earlier, in BS 7608 this is to assume a bi-linear S-N curve that changes slope from
m to m+2 at the CAFL corresponding to N = 10^7 cycles. The present results suggest that this approach may underestimate the damaging effect of stresses below the CAFL. To investigate this, Σ(n/N)
values for the present VA results were calculated using two bi-linear versions of the present mean CA S-N curve for the type F specimens. The first changed slope at the conventional CAFL,
corresponding to N = 10^7 cycles, while the second changed slope at an assumed CAFL of 31.5 N/mm^2, corresponding to N = 3.3 x 10^7 cycles. The results are included in Table 9. Referring particularly
to the results obtained under Sequence A, it will be seen that the largest difference in Σ(n/N) between the two bi-linear curves was 0.1 (0.64 against 0.74, corresponding to the spectrum with minimum
p [i] =0.10), well within the data scatter range, regardless of the choice of bi-linear S-N curve. Thus, again basic scatter in the fatigue test results masks the possible influence of a variable.
Nevertheless, any influence is clearly very small and it cannot be concluded from the present results that the current method of defining the bi-linear S-N curve with the slope change at 10^7 cycles
is wrong.
Σ(n/N) values were also calculated using the BS 7608 Class F mean and design S-N curves. The results are presented in Table 10. Since the CA results from the present specimens fell significantly
below the BS 7608 Class F mean curve, it is not surprising to see that Miner's rule was even more non-conservative in this case. Indeed, the Miner's rule damage sums were less than unity even for the
tests under Sequence C. However, the rule was conservative for tests performed under Sequences B and C when Miner's rule was applied using the design curve, but still slightly non-conservative for
Sequence A. This suggests that the current BS 7608 design rules for this type of joint need to be reviewed.
Table 10 VA test results for the type F specimen - further analysis by using the BS 7608 curves for the Class F details
Specimen No | Minimum p[i] | Sequence | Σ(n/N) based on the mean curve: no slope change^a / bi-linear^b / cut-off at fatigue limit^c | Σ(n/N) based on the design curve: no slope change^d / bi-linear^e / cut-off at fatigue limit^f
F-03 A 0.25 0.25 0.20 0.69 0.69 0.69
F-15 0.25 0.34 0.34 0.28 0.94 0.94 0.94
F-09 C 0.87 0.85 0.71 2.39 2.39 2.39
F-10 B 0.43 0.42 0.35 1.18 1.18 1.18
F-04 A 0.29 0.27 0.20 0.80 0.80 0.80
F-13 0.20 A ^g 0.33 0.30 0.22 0.91 0.91 0.91
F-06 C 0.74 0.68 0.50 2.04 2.04 2.04
F-07 B 0.52 0.47 0.35 1.42 1.42 1.42
F-05 0.15 A 0.28 0.23 0.16 0.76 0.72 0.65
F-08 0.10 A 0.45 0.33 0.22 1.22 1.05 0.90
a. ΔS^3.0N=1.726x10^12, the Class F mean curve given in BS 7608.
b. Slope change from 3.0 to 5.0 at 10^7 cycles for the above mean curve.
c. Cut-off at the fatigue limit, which was 56N/mm^2 corresponding to an endurance of 10^7 cycles.
d. ΔS^3.0N=6.30x10^11, the Class F design curve given in BS 7608.
e. Slope change from 3.0 to 5.0 at 10^7 cycles for the above design curve.
f. Cut-off at the fatigue limit, which was 40N/mm^2 corresponding to an endurance of 10^7 cycles.
g. The maximum stress for this test was kept at 147N/mm^2, different from other tests where the maximum stress was 280N/mm^2.
The various observations about the validity of Miner's rule and the damaging effect of stresses below the CAFL are also evident from Figure 10, which shows the VA test results expressed in terms of ΔS[eq] in comparison with the CA results. The point made earlier about scatter in the Sequence A results masking any difference between the results obtained with S [max] = 280 or 147N/mm^2 is also
evident. Similarly, the graph highlights the fact that the specimen tested under Sequence A with the lowest equivalent stress range performed proportionately better than those tested at higher
stresses, suggesting that the lowest stress range (21N/mm^2) was not fully damaging.
It is interesting to note that the only two specimens tested under Sequence B (with ΔS [min] = 52.5 or 42N/mm^2) also showed a similar trend, but occurring at a higher stress range. Although the data
were very limited, it is speculated that the minimum fully effective stress range ΔS' [min] would be higher under Sequence B than that under Sequence A. This would mean that ΔS' [min] is dependent on
the loading-sequence as well as the basic fatigue performance of the weld detail. Further work is required to confirm this speculation.
6 Fracture mechanics analysis
6.1 Approach
The crack growth measurements made on type F specimens tested under both CA and VA loading were used to develop a fracture mechanics model for calculating fatigue lives. This entailed first defining
the fracture mechanics parameter ΔK, the stress intensity factor range, for the observed cracks. Then, the relationship between observed rate of crack growth (da/dN) measured under CA loading and ΔK
was integrated between an initial crack size and the crack size at failure along the lines detailed in BS 7910^[20] to calculate the progress of fatigue cracks and the fatigue lives of specimens
tested under VA loading. That relationship was assumed to adopt the usual form:
da/dN = A(ΔK)^m
where A and m are material constants.
Most measurements were of the surface crack lengths and it is this measure of crack size that is compared with that calculated using fracture mechanics. In all the fracture mechanics calculations,
the crack size first recorded for each specimen was used as the starting point in the integration and the calculated life was compared with the actual life remaining after that initial crack was recorded.
6.2 Stress intensity factor
As illustrated in Figure 2b), the case under consideration is a semi-elliptical crack at the toe of a fillet weld. For such cases:
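In the form generally used for such cases in BS 7910 (assumed here), ΔK = M[k] x Y x ΔS x √(πa), with ΔS the applied stress range and a the crack depth,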
where M [k] is a function of the stress concentration effect of the weld detail, Y is a function of the crack depth to plate thickness ratio a/B and the crack aspect ratio a/2c, where 2c is the
surface crack length.
The solution for Y for semi-elliptical surface cracks in BS 7910 was used in the present analysis. Specific values of the crack aspect ratio a/2c were based on examinations of the fracture surfaces
of the tested specimens. These showed that, in the early stages of crack growth, the crack aspect ratio was ~0.25. This was in agreement with other observations in a similar type of specimen.^[3] It
gradually increased with increasing crack size and was about 0.3 when a crack just became a through-thickness crack.
The magnification factor, M [k] , due to the stress concentration effect of the joint geometry is defined as^[26]:
M[k] = K[in plate with weld]/K [in plate without weld]
M [k] quantifies the change in stress intensity factor as a result of the surface discontinuity at the weld toe. M [k] decreases sharply with increasing distance from the weld toe in the thickness
direction and usually reaches unity at crack depths of typically 30% of plate thickness.
The M [k] solution^[3] derived on the basis of 3D finite element analysis of the same type of welded joint, using a model with a weld toe radius of 0.16mm, was adopted. This gave:
M[k]=0.845(a/B)^-0.316 for a/B ≤ 0.1
M[k]=0.853(a/B)^-0.312 for a/B > 0.1
These expressions provide M [k] only at the deepest point of the crack. Crack growth in this direction also depends on the crack aspect ratio a/2c. In the present calculations of crack growth,
initially the M [k] value at the tips of the crack at the plate surface, M [kc] , was assumed to be identical to the M [k] value in depth, M [ka] , at a = 0.1mm since this represents a good
approximation^[21] and the predicted crack aspect ratio also agreed well with that observed experimentally. However, the stress concentration effect of the welded joint decreases as the crack grows
across the plate width away from the end of the stiffener. To reflect this observation, the M [kc] values were varied, decreasing from an initial value of about 3.9 (corresponding to M [ka] at a =
0.1mm) to M [kc] = 1.0 when the crack had just grown beyond the weld (about 32mm long) in front of the attachment. At and beyond this crack length it was assumed that the effect of the weld on
surface crack growth could be neglected.
6.3 Determination of fatigue crack growth relationship
Crack growth was monitored in CA specimens F-11 and F-12, tested under a maximum stress of 280N/mm^2 but with different stress ranges, and specimen F-14, tested at the same stress range as specimen
F-11 but with the lower maximum stress of 135N/mm^2. The observed crack growth behaviour was in good agreement with fracture mechanics calculations when they were based on the following fatigue crack
growth relationships, independently of the mean stress:
da/dN = 0, when ΔK≤ ΔK [th] = 63N/mm ^3/2
da/dN = 2.1x10^-13 ΔK^3, when ΔK > 63N/mm^3/2
The assumed threshold stress intensity factor range, ΔK [th] , is the lower bound value in BS 7910. The crack growth rate was slightly lower than that corresponding to the simplified mean growth rate
given in BS 7910 for R>0.5. An example of the good agreement between actual and calculated surface crack growth is shown in Figure 11. It will be seen that the crack growth data did not exhibit
smooth curves. This was not surprising because the crack initially grew along the curved weld toe and then grew away from the weld at the attachment end. Another relevant factor could be the
decreasing magnitude of the residual stresses with increasing distance from the centreline of the attachment plate.
Fig.11. Comparison between the measured and predicted fatigue crack growths in specimen F-14, tested under a constant amplitude stress range of 65N/mm^2 and at a maximum stress of 135N/mm ^2
6.4 Fracture mechanics fatigue life calculations
The above fracture mechanics model (including K solution, M [k] factor and the crack growth rate) was also used to calculate the total fatigue endurance of the type F specimen. By assuming an average
initial flaw depth of 0.15mm,^[22] a crack length 2c=0.6mm (a/2c=0.25), and that fatigue endurance was controlled by fatigue crack propagation only, the calculated fatigue endurance S-N curve for the
type F specimen is included in Figure 10. It will be seen that it agreed very well with the experimental data. This supported the fracture mechanics model and also implied that the fatigue endurance
of this type of specimen was predominantly controlled by a crack propagation process, not crack initiation.
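A minimal sketch of this kind of calculation is given below (in Python). The plate thickness B, the constant geometry factor Y = 1 and the final crack depth are illustrative assumptions; in the actual analysis Y followed the BS 7910 solution for semi-elliptical surface cracks and M[kc] was varied as described above.

import math

A_PARIS, M_PARIS, DK_TH = 2.1e-13, 3.0, 63.0  # growth law and threshold from Section 6.3 (N, mm units)

def m_k(a, B):
    # Weld toe magnification factor from the expressions in Section 6.2, floored at unity.
    r = a / B
    mk = 0.845 * r ** -0.316 if r <= 0.1 else 0.853 * r ** -0.312
    return max(mk, 1.0)

def delta_k(stress_range, a, B, Y=1.0):
    # ΔK = Mk x Y x ΔS x sqrt(pi a), in N/mm^3/2 with a in mm and ΔS in N/mm^2.
    return m_k(a, B) * Y * stress_range * math.sqrt(math.pi * a)

def propagation_life(stress_range, a0=0.15, B=13.0, da=0.005):
    # Integrate da/dN from the assumed initial flaw depth to through-thickness.
    a, cycles = a0, 0.0
    while a < B:
        dk = delta_k(stress_range, a, B)
        if dk <= DK_TH:
            return float("inf")  # below the threshold: no growth predicted
        cycles += da / (A_PARIS * dk ** M_PARIS)
        a += da
    return cycles

# With the assumed B = 13 mm, m_k(0.1, 13.0) is about 3.9, in line with the initial
# M[ka] value quoted above. Example endurance at a CA stress range of 65 N/mm^2:
print(propagation_life(65.0))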
Turning to the specimens tested under VA loading, fatigue crack growth was measured in five specimens under Sequence A (F-04, F-05, F-08, F-13 and F-15), two under Sequence B (F-07 and F-10) and two
under Sequence C (F-06 and F-09). It was found that crack growth under spectrum loading could not be predicted accurately using the crack growth rate relationship obtained under CA loading. This
significantly and modestly under-estimated the crack growth rates observed under Sequences A and B respectively, and over-estimated that under Sequence C. Examples are shown in Figure 12. To predict
the crack growth accurately, the crack growth rate parameter A in Eq. 5 would need to be in the ranges 2.5x10^-13 to 5.5x10^-13 for Sequence A, 1.9x10^-13 to 2.6x10^-13 for Sequence B and 1.4x10^-13
to 1.8x10^-13 for Sequence C. The ratio of the A value for CA to the average A value for each loading sequence varied, from 0.52 for Sequence A, to 0.93 for Sequence B and to 1.31 for Sequence C.
These agree well with the Miner's rule damage sums obtained under the three Sequences. In other words, the use of fracture mechanics crack growth analysis to calculate the fatigue lives of the type F
specimens under VA loading would produce essentially the same errors as calculations based on Miner's rule. This implies that the factors responsible for the deviations between the actual lives and
those calculated by Miner's rule were mainly related to crack growth, rather than to crack initiation as suggested by others. ^[23,24]
Fig.12. Comparison between the measured and predicted fatigue crack growths in specimens F-15, F-10 and F-09. They were tested under VA loading with the same minimum p [i] of 0.25, and under Sequence
A, Sequence B and Sequence C, respectively
7 Discussion
7.1 Validity of Miner's rule
The present work is relevant with respect to two basic assumptions made in the fatigue design of welded joints. Firstly, the fatigue strength of welded joints is dependent only on stress range and
the effect of mean stress can be ignored.^[25,26] This is based on the assumption that high tensile residual stresses, up to the yield strength of the base metal, will be present in welded joints.
Because of this, an applied stress is considered to be superimposed on such residual stress to give an effective stress of the same range but cycling down from tensile. Secondly, there is no
interaction between applied stress cycles, so the fatigue damage due to the application of a particular stress cycle in a VA load sequence is exactly the same as that due to the same stress cycle
under CA loading, which is the basis of Miner's linear cumulative damage rule.
According to the above two assumptions, it is expected that:
• The CA S-N curves obtained at R=-1, R=0 and at a constant maximum stress would be the same for the type G specimen.
• The fatigue endurance of specimens F-04 and F-13, which were tested under the same spectrum but with different maximum stress, would be similar.
• The specimens tested under Sequences A, B and C would have similar fatigue endurance when the minimum p [i] value in each sequence was the same.
• The average value of Σ(n/N) for each sequence would be about 1.0.
However, the results in the present work were different from the above expectations, suggesting the breakdown of the two basic assumptions under certain circumstances. It is acknowledged that a
larger CA database, even from the same batch of specimens, could well exhibit more scatter and so lend support to the conclusion that Miner's rule was reasonably accurate. However, there is less
doubt about the accuracy of the rule when considering just the present results.
It should be noted that, for each stress range, the minimum and maximum stresses were the same between the CA and the VA under Sequence A. This ruled out any possible effect of mean stress between
the CA and VA loading on fatigue endurance. The test results suggested that some form of interaction between the applied stresses was the major factor contributing to the difference in Miner's rule
damage sums obtained for the three sequences.
7.2 Stress interaction - underloading effect
Although stress interaction under VA loading has been well recognised, much of the work in this area has focused on the tensile overloading effect whereby the effective stress intensity factor range,
ΔK [eff] , for the following lower stress ranges is smaller than the applied range due to the crack closure effect, resulting in reduced crack growth rates when compared with those under CA loading. The
results of the specimens tested under Sequence C in the current investigation could be explained by this mechanism. However, work on the effect of underloading, where the absolute magnitude of unloading is significantly greater than that in the subsequent cycles and causes an increase in the rate of crack growth under those cycles, is comparatively limited.^[27-29]
Fleck^[27] conducted CA and VA fatigue crack growth tests on specimens of two plain materials: steel and aluminium alloy. These involved a VA spectrum with two stress ranges but the same maximum
stress: one major cycle followed by different numbers of smaller stress range cycles, n. The ratio of the major to the minor stress range, Φ, varied. All tests were carried out at high stress ratios
(R=0.5 and 0.75) in a ΔK range between 14-30MPa√m. A parameter called the acceleration factor, Γ, defined as the ratio of the measured growth rate per block to that predicted by a linear summation of
the CA crack growth response, was used to indicate the stress interaction effect. The maximum Γ value was found to be 1.79 which corresponded to n =~10 and Φ=~0.5. In the present investigation, the
ratio of the maximum crack growth rate under Sequence A to the crack growth rate under CA was 2.6, greater than the maximum value obtained by Fleck.
Enhanced crack propagation rates due to underloading in two aluminium alloys were also reported recently.^[28] The spectrum involved two stress ranges with identical maximum stress. The acceleration
in crack growth rate was found to be about 30% in one Al alloy but up to 1200% in another, depending on the number of small cycles between underloads, as well as other loading conditions. However,
the very high effect seen in the second alloy was due to a distinct change in crack growth mechanism associated with the particular microstructure of the alloy, which is unlikely to be relevant to
common weldable metals.
The detrimental effect on fatigue performance of underloading has also been observed in welded specimens. In an early study, Gurney^[6] investigated the fatigue performance of the type G specimen
under three simple spectra, all containing a major cycle followed by two smaller cycles. The characteristics of the smaller stress cycles varied, one having the same maximum stress as the major cycle
(referred to by Gurney as type C), one having the same minimum stress as the major cycle (type B) and the third having the same mean stress as the major cycle (type A). It was found that the
specimens tested under type C spectrum always produced the lowest fatigue endurance. Later, Gurney^[6] investigated the same type of specimen in a high strength steel under two spectra with identical
stress distributions (number of cycles at each stress range was the same) and block length (10^4 cycles), one under a constant maximum stress of 500N/mm^2 and the other at a constant minimum stress
of zero (R=0). He found that the fatigue endurance of the specimens tested under the former conditions were all significantly less than those tested under the latter conditions.
Unlike overloading, the mechanisms for enhanced crack growth due to underloading are not well established. One possible mechanism could be related to the introduction of tensile residual stresses
around the crack tip.^[30] This mechanism may be applicable for plain materials but questionable for welded joints since welding-induced tensile residual stresses are present in welds. Other possible
mechanisms proposed include: reduced ductility due to strain hardening of material ahead of the crack tip from the major cycles;^[27] minor stresses following an under-load experiencing a higher
tensile mean stress at the crack tip than that occurring under CA loading;^[27] increased ΔK[eff] due to reduced crack opening stress intensity factor, K[op].^[28,29] It may be claimed that the
higher Σ(n/N) value of specimen F-13, when compared with specimen F-04, was associated with this mechanism. However, with reduced maximum stress applied to F-13, more of the residual tensile stress
induced by welding would be expected to remain after loading. As a result, the actual stress range could still be effectively fully tensile even under an applied compressive stress of 63N/mm^2 for
this specimen.
Relating this to the results of crack growth measurements in both CA and VA tests, it seems certain that, for specimens with the same crack size loaded with the same stress range, the effective
stress intensity factor ranges for CA and VA loading are different. Furthermore, the same is true for the three VA loading sequences, and this led to different crack growth rates for the different
loading conditions.
7.3 Constant amplitude S-N curve used with Miner's rule
A possible reason for Σ(n/N) values below unity is that crack closure is inhibited under some VA loading regardless of the applied stress ratio or mean stress, such that cyclic stresses in such VA
loading spectra produce fatigue damage associated with high tensile mean stress CA loading conditions. Support for this comes from the re-analysis of VA test results obtained from type G specimens by
Gurney^[5] at R=0 and R=-1 using the present CA data obtained with S[max] = 280 N/mm^2. In spite of the presence of high tensile residual stresses, the present CA S-N curve was lower than that obtained by Gurney at R=0, which he had used when applying Miner's rule, and re-analysis with the lower curve improved the accuracy of Miner's rule considerably. It may be noted that a dependence of fatigue endurance on mean stress for some weld details
has also been reported elsewhere^[5,17,31-33], even when high tensile residual stresses were known or expected to be present in the welded joints.^[5,17]
Under VA loading, the actual mean stresses (a combination of residual and applied stresses) for small stress cycles are expected to be lower than those under CA loading because of the relaxation of
residual stresses under the peak stress in the spectrum. This would have yielded Σ(n/N)>1.0, but all tests under Sequences A and B gave values less than 1.0. This suggests that stress interaction
effects associated with the type of loading sequence are more significant than the mean stress effect.
7.4 Small stresses in spectrum
Under Sequence A loading, the lowest stress range that was found to be 'fully damaging', defined as ΔS'[min], was found to be as low as 8.4N/mm^2 for the type G specimen. This was lower than the
value of 10.1N/mm^2 found by Gurney^[5] for the same type of specimen under a spectrum with the peak stress range applied at R=0. The stress range of 8.4N/mm^2 corresponds to a fatigue endurance of
3.5x10^8 cycles on the present type G specimen S-N curve, significantly greater than the endurance of 10^7 cycles at which a slope change from m to (m+2) is recommended in BS 7608 to allow for
fatigue damage due to stresses below the CAFL under VA loading. A consequence was that the conventional bi-linear S-N curve with the slope change at N=10^7 cycles under-estimated the damaging effect
of stresses below the CAFL and was therefore non-conservative.
As noted previously, the relative fatigue limit, ΔS'[min]/CAFL, increased with increase in the basic fatigue strength of the weld detail. Thus, for the type F specimen, ΔS'[min] appeared to be ~31.5N/mm^2, which corresponded to a fatigue endurance of 3.3x10^7 cycles. In spite of this, allowing for basic scatter in the fatigue test results, it was found that the damaging effect of stress ranges
between 31.5 N/mm^2 and the conventional CAFL, corresponding to N = 10^7 cycles, of 45.3 N/mm^2 was still accounted for satisfactorily when applying Miner's rule by the mean CA S-N curve (slope m)
extrapolated beyond 10^7 cycles at the shallower slope of m+2, as widely recommended. In other words, in contrast to the results for type G specimens, the present type F specimen results obtained
under Sequence A support the current design method for accounting for the damage due to stresses below the CAFL. However, this conclusion is based on very limited data and clearly there is the need
for further test results, particularly for loading spectra with a greater proportion of low stresses than those used here. It will also be recalled that published VA data, some obtained from type F
specimens [e.g., 2,3] have shown that the bi-linear S-N curve with the slope change at 10^7 cycles can seriously under-estimate the damaging effect of stresses below the CAFL.
None of the tests performed under Sequences B and C included sufficient low stresses to establish the best approach for assessing their damaging effect when applying Miner's rule.
7.5 Implication of the current work to fatigue design
The present work suggests that Miner's rule could be non-conservative under spectrum loading where cycling down from a constant maximum stress is predominant. On the other hand, Miner's rule can be
unduly conservative in a spectrum where most cycles involve cycling-up from a constant minimum stress. Although loading spectra like Sequence A are primarily chosen to simulate the severe conditions
that are thought to exist in welded joints containing high tensile residual stresses, they do represent the actual loading sequence for some engineering structures. As summarised by Fleck,^[27] a
number of engineering components can be subjected to periodic underloading, such as gas storage vessels, gas turbine blades, railway lines and aircraft wings. A simple example for producing the
Sequence A spectrum could be a combination of the following two loadings: a constant high load to keep the maximum stress constant and cyclic loads, similar to Sequence C spectrum, loading in a
direction opposite to the constant load. Furthermore, some loading spectra that do not involve cycling down from a constant high tensile stress may also exhibit under-loading behaviour. One such
example is the wide band spectrum^[5] but the present Sequence B is another. Therefore, due consideration of the type of variable amplitude loading spectrum involved is required when Miner's rule is
used in fatigue design.
With regard to current design guidance, if a service loading spectrum is similar to Sequence A or predominantly involves stresses cycling down from a fixed tensile level, the present and other test
results suggest that the Miner's rule damage sum should not take a value greater than 0.4, i.e. Σ(n/N) ≤ 0.4. Furthermore, the CA S-N curve, to be used in the calculation of Σ(n/N), should be
established using the same maximum stress as that in the spectrum. In this respect the evidence from the present study, particularly in the case of the type F weld detail, suggests that the current
design S-N curves in BS 7608 may not be low enough to allow for cycling down from a high fixed tensile stress. Clearly, this needs further investigation, including consideration of simply very high
mean stress or stress ratio conditions. Finally, with regard to the assumption to be made about the damaging effect of stresses below the CAFL, it may be necessary to draw a distinction between weld
details depending on their basic fatigue strengths. On the basis of the present test results for the relatively low fatigue strength type G specimen, it is recommended that Class G details are
assessed on the basis of the CA S-N curve extrapolated beyond the CAFL without a slope change. In contrast, the current approach of extrapolating the CA S-N curve beyond the CAFL, assumed to
correspond to N = 10^7 cycles, at a shallower slope (m changing to m+2) could be suitable for Class F and higher details. However, since evidence for this from the present project and published work
is very limited, for critical cases the CA S-N curve extrapolated beyond the CAFL without a slope change is still recommended.
Such stringent measures should not be required for spectra of the types that do not produce significant crack growth acceleration (such as Sequence B) or encourage crack growth retardation (such as
Sequence C), or perhaps for higher fatigue strength weld details. However, insufficient relevant experimental evidence is available at this stage to make specific recommendations.
8 Conclusions
On the basis of fatigue tests performed on two types of welded specimen, one corresponding to BS 7608 design Class G and the other to Class F, under both constant and variable amplitude loading with
three different sequences, the following conclusions can be drawn:
• The CA fatigue performance of the type G specimen decreased with increase in applied mean stress, but this was not the case for the type F specimen.
• Residual stress measurements confirmed the presence of high tensile residual stresses in regions near crack initiation sites for both types of specimen. They were reduced significantly under
fatigue loading for a small proportion of the fatigue life.
• For the same basic VA load spectrum, Σ(n/N) at failure depended on the sequence applied. Miner's rule was substantially non-conservative (Σ(n/N) < 1 at failure) for all tests under Sequence A,
with values down to around 0.4 for both specimen types. It was modestly non-conservative (Σ(n/N) ~ 0.8) for tests under Sequence B but conservative (Σ(n/N) > 1.3) for Sequence C.
• The above findings agreed very well with the results from crack growth measurements. Crack growth under spectrum loading could not be predicted accurately using the crack growth rates obtained
under constant amplitude loading. They significantly underestimated the crack growth in specimens tested under Sequence A, marginally underestimated that under Sequence B and overestimated that
under Sequence C, in all cases in similar proportions to the Σ(n/N) values at failure.
• There were strong indications that the most significant effect on fatigue behaviour of the type of loading was stress interaction, whereby high stresses caused significant crack growth
acceleration from subsequent lower stresses under Sequence A, moderate crack growth acceleration under Sequence B and crack growth retardation under Sequence C. Variations in the applied mean or
maximum stress levels were of secondary importance.
• From tests performed under Sequence A, stress ranges well below the fatigue limit were found to be as damaging as implied by the CA S-N curve extrapolated beyond the CAFL without changing the
slope. The value of the minimum fully damaging stress range, referred to as ΔS'[min], depended on the basic fatigue strength of the welded joint, being lower for the Class G detail (≤ 8.4N/mm^2)
than the Class F (≤ 31.5 N/mm^2).
• For loading conditions similar to Sequence A, or other spectra that are expected to cause crack growth acceleration from stress interaction, the present results indicate that Miner's rule should
be applied assuming Σ(n/N) ≤ 0.4 at the end of the required life and in conjunction with the CA S-N curve extrapolated beyond the CAFL without a slope change. Limited evidence suggested that the
less stringent bi-linear S-N curve with the slope change from m to m+2 at 10^7 cycles may be suitable for assessing Class F and higher details, but further work is required to confirm this.
It should be noted that the conclusions drawn in the present work were based on a limited number of tests. Verification is required by conducting tests covering different maximum stress ranges and peak stresses in a spectrum, different welded joint types, and different types of spectra (e.g. wide band).
9 Acknowledgements
This work was supported by the Industrial Members of TWI. In addition, the authors would like to thank the staff of the Fatigue Laboratory for carrying out the experimental work.
10 References | {"url":"https://www.twi-global.com/technical-knowledge/published-papers/investigation-of-fatigue-damage-to-welded-joints-under-variable-amplitude-loading-spectra-january-2008","timestamp":"2024-11-03T20:34:46Z","content_type":"text/html","content_length":"246184","record_id":"<urn:uuid:6061a68b-3e37-4972-8fb8-8da682475ef2>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00181.warc.gz"} |
Multiplication - Test Questions and Answers
Being able to do quick multiplication is useful not just when taking aptitude tests but also in normal day-to-day life. You will not always have a calculator next to you, so learning the skill of quick multiplication is valuable. It also keeps your brain healthy.
These test questions will allow you to practice your multiplication skills which will only improve over time the more you practice.
Answers: Question 1 - Option D; Question 2 - Option B; Question 3 - Option A; Question 4 - Option D; Question 5 - Option B. (The questions themselves are presented as images.)
Questions or comments? Please discuss below. | {"url":"http://www.theonlinetestcentre.com/multiplication-quiz4.html","timestamp":"2024-11-02T20:16:49Z","content_type":"application/xhtml+xml","content_length":"30015","record_id":"<urn:uuid:d9420bf2-a608-4e56-acda-1fb0adf45082>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00234.warc.gz"} |
Advances and applications in continuous location and related problems
Universidad de Granada
Universidad de Granada. Programa de Doctorado en Matemáticas
Continuous location
Date of defence
Bibliographic reference
Gázquez Torres, Ricardo. Advances and applications in continuous location and related problems. Granada: Universidad de Granada, 2022. [http://hdl.handle.net/10481/75448]
Tesis Univ. Granada.
This thesis focuses on the family of continuous location problems. A location problem arises whenever the question of where to locate something is raised. This kind of problem belongs to Location Science, one of the research areas of Operations Research that has seen the greatest development since the 1960s. The discipline is mainly defined by facility location problems, which consist of finding the optimal locations for a set of facilities with respect to a set of demand nodes and a given objective function. Location problems can be classified in many ways; one classification uses the location space as the classifier. In a discrete location problem, facilities can be located at a finite set of potential sites; in a continuous location problem, the location space is the whole space where the problem is defined and there are infinitely many positions available; and when a network is considered as the location space, facilities can be located at the nodes or on the arcs of the network. The choice of space is determined by the real application of the problem. The discrete case is usually used to locate physical services such as schools or ATMs; the continuous one when the location can be
more flexible, as in routers or sensors; and the network when the elements to be located are used in applications with networks such as bus stops or gas stations. | {"url":"https://digibug.ugr.es/handle/10481/75448","timestamp":"2024-11-03T21:45:57Z","content_type":"text/html","content_length":"26296","record_id":"<urn:uuid:d61a34a9-146f-4fa2-982c-1c0c4daa9194>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00228.warc.gz"} |
In a fish tank, there are 24 goldfish, 2 angel fish, and 5 guppies. If a fish is selected at random, what is the probability that it is a goldfish or an angel fish? | HIX Tutor
In a fish tank, there are 24 goldfish, 2 angel fish, and 5 guppies. If a fish is selected at random, what is the probability that it is a goldfish or an angel fish?
Answer 1
Goldfish and angelfish are mutually exclusive, so simply add together the respective probabilities.
P(goldfish ) #= 24/(24+2+5)=24/31#
P(angelfish ) #= 2/(24+2+5)=2/31#
P(goldfish or angelfish ) #=24/31+2/31=26/31#
Answer 2
To find the probability of selecting either a goldfish or an angel fish from the tank, we add the probabilities of selecting each type of fish.
Total number of goldfish + angel fish = 24 (goldfish) + 2 (angel fish) = 26
Total number of fish in the tank = 24 (goldfish) + 2 (angel fish) + 5 (guppies) = 31
Probability of selecting a goldfish or an angel fish = (Number of goldfish + Number of angel fish) / Total number of fish Probability = (24 + 2) / 31 Probability = 26 / 31
Therefore, the probability of selecting either a goldfish or an angel fish from the tank is 26/31.
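As a quick check (illustrative only), the same value can be reproduced with Python's fractions module:

from fractions import Fraction
goldfish, angelfish, guppies = 24, 2, 5
print(Fraction(goldfish + angelfish, goldfish + angelfish + guppies))  # prints 26/31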
| {"url":"https://tutor.hix.ai/question/in-a-fish-tank-there-are-24-goldfish-2-angel-fish-and-5-guppies-if-a-fish-is-sel-96d27237d5","timestamp":"2024-11-11T03:40:03Z","content_type":"text/html","content_length":"579460","record_id":"<urn:uuid:f1d2e283-c78b-414e-913d-f6ade2d60744>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00569.warc.gz"}
What exactly is A Numerical Expression In Math
There is some misunderstanding in college in terms of what’s reflection in math.
Even though math itself is math, the way students find out math is extremely distinctive from what’s generally taught in school. In this report I am going to look at what reflection in math is.
Mathematics is all about creating connections. When a student has the ability to connect a series of patterns together, they’re going to then have the ability to make sense of their suggestions.
Often the idea is complicated but after a pattern is connected, the student will discover that it tends to make much more sense. This really is among the extra crucial places of math that a student
must be able to grasp early on.
What is usually a numerical expression in math? The whole concept behind what’s a numerical expression in math is always to relate one thing to a quantity. By way of example, if a student is asked
what their average is, they would consider it as “the number of students that have this grade”. They could be missing the connection between it and a number. For this reason, in math they use graphing calculators to do their work.
The very same is often stated for the word themselves, they could be utilised to express a number of distinctive approaches. Whilst some will concentrate on how you can relate a word or phrase to a
quantity, other people are a lot more concerned with ways to evaluate two words within the similar context. To this end, mathematicians use what exactly is recognized as rank or ordinal comparison to
produce confident that the two words are comparable. The persons who use this are identified as authorities within the field.
What is a numerical expression in math could be described as the process of being able to compare two things. This is an extremely basic aspect of how a student learns the basic principles of math. They will be using these ideas in the future for much more complicated concepts, just as people do in day-to-day life. The first time a student encounters a complex idea they ought to have a way to compare their ideas so that they can make sense of what they are trying to say.
Some of the issues that students face when learning what is a numerical expression in math are things that I think all students should understand. These difficulties stem from the fact that there are so many terms used in mathematics. Many people who have been in college for a while have almost certainly come across the term ‘aspect ratio’.
What is a numerical expression in math is definitely the process of figuring out a ratio involving a set of numbers. In case you had been asking them to perform this they would must know what the
notion was. The most beneficial strategy to explain it really is to take a plate and try and obtain the ratio with the width from the plate for the height.
Once they determine the ratio, they then use a ruler to determine an area ratio. It is possible to locate this out by searching in the width in the plate and how it relates for the location in the
plate. They’re able to even use tools like these within a graphing calculator to establish the area of the plate.
What is a numerical expression in math would be the procedure of comparing all of those numbers and ratios with each other. At this point they are able to form the relationship between these things. This is where factors like the location on the plate come in.
What is a numerical expression in math is the process of connecting all of those things with each other. They are able to combine a number together with the width of the plate to find out the area.
They will combine a number using the height of the plate to find out the height. These items are referred to as ratios.
What is a numerical expression in math is the procedure of applying one of these ratios to see the relationship between the two. This is the fundamental idea that teachers really should be teaching their students at a young age, and they should be doing it as part of their teaching methods.
CSCI111 Chapter 10: Program 7 - Median
This function works with the median of a list of numbers. The median of a list is the middle observation
in the list. For example, for the list of numbers:
1, 3, 4, 7, 8, 10, 11
The median value is 7. There are three numbers above 7 and three numbers below 7.
For the list
1, 3, 4, 7, 8, 10, 11, 13
there are two middle values, 7 and 8. Then the median would be (7+8)/2 = 7.5.
So to find the median of a list we do the following.
1. Sort the list
2. Determine the number of items in the list
3. If the number in the list is odd, then the median is the middle value. If the number of numbers in
the list is even, then the median is the average of the middle two observations.
Write a function called middle(llist) that takes a list as the argument and returns the median value.
There are many ways to do this. I used a recursive function combined with the sort and pop commands,
but you may find another way.
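For illustration, one possible sketch (an example only, written with sorted() and indexing rather than the recursive sort-and-pop approach mentioned above) is:

def middle(llist):
    # Work on a sorted copy so the caller's list is not modified.
    values = sorted(llist)
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        # Odd number of items: the single middle value is the median.
        return values[mid]
    # Even number of items: average the two middle values.
    return (values[mid - 1] + values[mid]) / 2

For the examples above, middle([1, 3, 4, 7, 8, 10, 11]) returns 7 and middle([1, 3, 4, 7, 8, 10, 11, 13]) returns 7.5.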
Requirements for this program are as follows:
1. Your program must include a comment header with Author, Assignment (Program 7), Description,
and Due Date
2. Your function may not use any of Python’s pre-built median functions.
3. Your function should use list methods and operations to find the median.
4. Your program that you submit must be a script in a file named program7 WillM.py where you
replace WillM with your first name and last initial.
5. Submit just that program on Blackboard under Chapter 10, Program 7.
This assignment has been answered 2 times in private sessions. | {"url":"https://codifytutor.com/marketplace/csci111-chapter-10-program-7-median-0ec18236-9141-447f-b49d-779a24d5c7e6","timestamp":"2024-11-02T12:53:51Z","content_type":"text/html","content_length":"21422","record_id":"<urn:uuid:7e81c8f2-5c74-4377-9d6c-a16a317c2a0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00479.warc.gz"} |