content
stringlengths
86
994k
meta
stringlengths
288
619
How To Use T In Python - Techinima.com How To Use T In Python In this tutorial, we will learn how to use the Transpose function (T) in Python with the help of the popular library, NumPy. The transpose function is used to change the orientation of a given matrix or ndarray. Specifically, it reverses the axes of the input array, significantly helping with linear algebra and data manipulation tasks. Step 1: Installing NumPy To use transpose in Python, we need to have the NumPy library installed. If you don’t have it installed already, you can install it using pip: Step 2: Importing NumPy and Creating an Array After installing NumPy, you need to import it and create the input array, which can be a list or a multi-dimensional array. For this example, let’s create a 2×3 array: import numpy as np # Create a 2x3 array arr = np.array([[1, 2, 3], [4, 5, 6]]) Step 3: Using the T Attribute Now we can use the T attribute to obtain the transpose of the created array. The code below demonstrates its usage: 2 # Get the transpose of the array 3 transpose_arr = arr.T 4 # Print the original and transposed arrays 5 print("Original Array:") 6 print(arr) 7 print("Transposed Array:") 8 print(transpose_arr) When the code is executed, you will see the original 2×3 array and its Transpose displayed as a 3×2 array: Original Array: [[1 2 3] [4 5 6]] Transposed Array: [[1 4] [2 5] [3 6]] Full Code Example 3 import numpy as np 4 # Create a 2x3 array 5 arr = np.array([[1, 2, 3], 6 [4, 5, 6]]) 7 # Get the transpose of the array 8 transpose_arr = arr.T 9 # Print the original and transposed arrays 10 print("Original Array:") 11 print(arr) 12 print("Transposed Array:") 13 print(transpose_arr) This simple example demonstrated how to use the T attribute in Python to obtain the transpose of a given array using NumPy. This technique can be very useful when working with linear algebra and manipulating data in a variety of applications. The Transpose function (T) in Python can be a handy tool when dealing with multi-dimensional arrays in machine learning, data analysis, or linear algebra tasks. Utilizing the NumPy library, transposing arrays becomes a straightforward task. Now you know how to implement it in your projects and take advantage of its capabilities.
{"url":"https://techinima.com/python/use-t-in-python/","timestamp":"2024-11-07T06:40:10Z","content_type":"text/html","content_length":"51184","record_id":"<urn:uuid:a977794c-b9a4-4b1e-8552-74c2a43a7fcf>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00115.warc.gz"}
Performance Analysis of Electrolytic Capacitors - Xuansn CapacitorPerformance Analysis of Electrolytic Capacitors Performance Analysis of Electrolytic Capacitors Most of today’s electrolytic capacitors are used in power frequency or high-frequency rectification and filtering of power electronic circuits, power supply bypass with high ripple current, etc. Therefore, in addition to the traditional performance of capacitors: rated voltage, capacitance, loss factor, leakage current, modern power electronics technology and future radio frequency power electronics technology have put forward new performance requirements for electrolytic capacitors, mainly including ripple current endurance, Equivalent series resistance, equivalent series inductance and other parameters.Therefore, the performance analysis of electrolytic capacitors is the main content of this chapter. 1 Performance Analysis of Electrolytic Capacitors– Equivalent circuit of electrolytic capacitor Electrolytic capacitors can be represented by different equivalent circuits under different working conditions. The equivalent circuit that can better reflect the characteristics of aluminum electrolytic capacitors is shown in Figure 1-1 R1, R, R3, C1, C2, L and VD in Figure 1-1a are respectively the resistance of the electrode and the terminal, the resistance of the electrolyte, the insulation resistance of the oxide film medium (after being damaged by the manufacturing process), and the anode foil. capacitance, the original oxide film capacitance of the cathode foil, the inductance caused by the electrodes and lead terminals, and the diode indicating the polarity of the anodized film. Therefore, a reverse voltage of an electrolytic capacitor exceeding 1.5V will cause a large leakage current, much like a diode conducting forward. In this case, the electrolysis effect will produce hydrogen, which will increase the internal pressure and burst the pressure relief device. At the same time, the reverse voltage will also destroy the aluminum oxide film, causing the electrolytic capacitor’s withstand voltage to drop sharply until it fails. This is why electrolytic capacitors cannot be used with reverse polarity. The original oxide film of the cathode foil is very thin, with very small withstand voltage, and little remains under the action of negative polarity voltage. Therefore, the capacitance C2 of the original oxide film of the cathode foil can be regarded as a short circuit. The equivalent circuit for general applications mostly uses a simplified equivalent circuit, that is, combining R1 and R2 in Figure 1-1a, combining C1 and C2, ignoring R3: (the leakage current is very small) and VD (no reverse voltage is applied in normal applications) , a commonly used equivalent circuit is obtained, as shown in Figure 1-1b. In Figure 1-1b, neither RESR nor L is expected to exist in the capacitor. They are parasitic parameters of the aluminum electrolytic capacitor. The parasitic parameters of the aluminum electrolytic capacitor have a great impact on its performance. The above is about the performance analysis of electrolytic capacitors—equivalent circuit of electrolytic capacitors 2 Performance Analysis of Electrolytic Capacitors – Equivalent series resistance and its characteristics Begin by analyzing the Equivalent Series Resistance (ESR). For the convenience of analysis, Figure 1-1 can be simplified into an equivalent circuit of a capacitor and ESR in series, as shown in Figure 1-2. Among them, the resistance of the electrolyte is the main part of ESR. Low ESR aluminum electrolytic capacitors actually use low resistivity electrolytes. The measurement conditions of ESR are to use a maximum AC signal voltage with an effective value of 1V and a 120Hz power supply with no forward bias voltage in an environment of 25°C to power an equivalent series circuit of aluminum electrolytic capacitors, and measure the resistance in the circuit. Measurement Figure 1-2 Simplified equivalent circuit of electrolytic capacitor For aluminum electrolytic capacitors used in general applications, most manufacturers do not provide ESR data, but they do provide this data for low-ESR aluminum electrolytic capacitors used in switching power supplies or pin-type aluminum electrolytic capacitors with relatively large capacitance. The main reason why most aluminum electrolytic capacitor manufacturers do not provide ESR data is: compared to capacitors of other media, the ESR of aluminum electrolytic capacitors is too large. For example, the ESR of a 1uF/16V ordinary aluminum electrolytic capacitor is generally around 20Ω; the ESR of a 100F aluminum electrolytic capacitor is between 1.5~2Ω. Just imagine, writing such data in the data sheet will definitely affect the user’s confidence in using aluminum electrolytic capacitors. Therefore, in a sense, the application of aluminum electrolytic capacitors is a helpless choice. In the application of switching power supply, it is often found that when ordinary aluminum electrolytic capacitors are used, the output voltage ripple and spike suppression effect is very poor. The main reason is that the ESR of conventional aluminum electrolytic capacitors is “too large”. In high-frequency applications, electrolytic capacitors behave like resistors to AC circuits. Therefore, to obtain a better high-frequency filtering effect, the ESR of the filter capacitor should be reduced as much as possible, that is, a low-ESR aluminum electrolytic capacitor should be selected. The ESR of low-ESR aluminum electrolytic capacitors can generally be an order of magnitude or more lower than that of ordinary aluminum electrolytic capacitors. In order to obtain low ESR aluminum electrolytic capacitors, a low resistivity electrolyte is used. If it is also necessary to reduce the equivalent series inductance, adopt low parasitic inductance measures in the winding process and electrode lead-out of the aluminum electrolytic capacitor 2.1 Performance Analysis of Electrolytic Capacitors-Equivalent series resistance ESR is the circuit equivalent of the heating effect produced when AC current flows through an electrolytic capacitor. Why not a parallel equivalent resistance? Any capacitor will have a parallel equivalent resistance, that is, the impact of leakage current on the circuit. Heating caused by leakage current is mainly related to the capacitor terminal voltage and has nothing to do with the flowing AC current. The heat generated by the electrolytic capacitor caused by AC current flowing through it is ESR from the circuit equivalent point of view, whether it is the volume resistance of the electrolyte or the equivalent resistance of the lattice vibration caused by the aluminum oxide film under the action of AC current, which results in heat generation. The equivalent series resistance of an aluminum electrolytic capacitor can be divided into two parts: when the aluminum oxide film flows through AC current, the polar lattice of the solid aluminum oxide film vibrates with the AC electric field to produce a heating electrical effect, and the equivalent resistance part of the electrolyte. These two types of equivalent series resistance have different causes, so the resistance characteristics will have different effects in different frequency bands. 2.2 Performance Analysis of Electrolytic Capacitors-ESR frequency characteristics Unlike the resistance characteristics of metals, the ESR of aluminum electrolytic capacitors changes with frequency. Its changing pattern is shown in Figure 1-3. As can be seen from Figure 1-3, the ESR frequency characteristics of electrolytic capacitors with different rated voltages are different, but the general trend is the same, that is, as the frequency increases, the ESR value decreases. The higher the rated voltage, the ESR decreases. The frequency changes are more obvious. Considering that most electrolytic capacitors are used for rectification and filtering, even in half-wave rectification, the ripple current frequency flowing into the electrolytic capacitor is 50Hz. Therefore, the frequency starting point of the ESR frequency characteristic is 50Hz and the highest frequency is 100kHz. In the range of 50~200Hz, the ESR changes relatively drastically. It can be considered that the ESR of the aluminum oxide film decreases sharply as the ripple current frequency increases. Figure 1-3 RIFA’s PEF356 series electrolytic capacitor ESR frequency characteristic curve As the ripple current frequency increases, the ESR value of the aluminum oxide film decreases significantly, its proportion in the total ESR of the aluminum electrolytic capacitor gradually decreases, and the proportion of ESR in the electrolyte part becomes larger and larger. Therefore, as the frequency increases, the decreasing trend of electrolytic capacitor ESR slows down. When the frequency is above 10kHz, the ESR of the electrolyte part is close to the ESR of the electrolytic capacitor, so that the ESR of the electrolytic capacitor no longer changes with frequency. Figure 1-3 also shows that the ESR ratio generated by the aluminum oxide film of high-voltage electrolytic capacitors is much larger than that of low-voltage electrolytic capacitors, causing the ESR of high-voltage electrolytic capacitors to change drastically in the low frequency band. 2.3 Performance Analysis of Electrolytic Capacitors-ESR temperature characteristics The ESR of an electrolytic capacitor changes with temperature, and its temperature characteristic curve is shown in Figure 1-4. The above is about the performance analysis of electrolytic capacitors—equivalent series resistance and its characteristics 3 Performance Analysis of Electrolytic Capacitors-equivalent series inductance Various electronic components and even the wires themselves have parasitic inductance, and electrolytic capacitors are no exception. The parasitic inductance of the electrolytic capacitor is connected in series with the ESR and the ideal capacitor in the capacitor equivalent circuit, so the parasitic inductance of the capacitor is also called the equivalent series inductance (ESL). Since electrolytic capacitors are wound, wound inductance cannot be avoided, and the inductance produced by electrolytic capacitors is much larger than non-inductive film capacitors with gold-sprayed The parasitic inductance of an electrolytic capacitor with multiple groups of conductor bars is lower than that of a single group of conductor bars or lead-pin electrolytic capacitors. The parasitic inductance of electrolytic capacitors with “middle” guide pins or “middle” guide bars is lower than that of electrolytic capacitors with offset guide bars or offset guide pins. The reason is that the current in the center guide pins flows to both sides of the filter, which will Offset part of the winding inductance. Multiple guide bars reduce the filtering length between the two guide bars, and the aluminum foil flowing from the guide bar to both sides of the guide bar will also offset part of the winding inductance. The ESL of electrolytic capacitors using the negative electrode extension process is lower than that of electrolytic capacitors without negative electrode extension. The reason is that all negative electrode parts are “short-circuited” with the negative electrode foil by using negative electrode extension. Although the short-circuit effect is not ideal, it is still a short circuit. If the entire positive foil is also “shorted”, the ESL of the wound electrolytic capacitor will be greatly reduced. The resonant frequency of an electrolytic capacitor with a capacitance of 100μF may increase to 100kHz. ESL affects the current distribution on the aluminum foil. The longer the aluminum foil, the greater its impact. The narrower the aluminum foil, the greater its impact. The higher the ripple current frequency, the greater its impact. Figure 1-5 shows the current distribution diagram of the aluminum foil of the electrolytic capacitor. The upper picture of Figure 1-5a shows the position of the aluminum foil and the guide pin. The guide pin is located in the middle of the aluminum foil. The lower picture of Figure 1-5a shows the distribution of current on the aluminum foil. Obviously, the current in the contact part between the guide pin and the aluminum foil is the largest, and the current on each side is half of the current flowing into the capacitor. As the foil gets farther and farther away from the guide pin, the current becomes smaller and smaller until the foil terminal current drops to zero. If the guide pin needs to be set on one side of the aluminum foil, as shown in Figure 1-5b, it is obvious that the current distribution on both sides of the guide pin is different. The current on the long side of the aluminum foil is large, and the current on the short side of the aluminum foil is small. Obviously, when the guide pin is in the middle of the aluminum foil, the performance of the capacitor is the best. If the guide pin deviates from the center of the aluminum foil, the performance of the capacitor will decrease. The farther the guide pin is from the middle of the aluminum foil, the worse the performance. The performance of the capacitor is best when the guide pin is located at one end of the aluminum foil. Difference Since electrolytic capacitors are wound, winding inductance will inevitably occur. The aluminum foil current affected by ESL will be more concentrated near the guide pin or guide bar, and the higher the frequency, the more serious it will be. The reason is that high-frequency ripple current flows through the ESL, producing a voltage drop on the ESL, and this voltage drop U will hinder the current flow to the aluminum foil at the far end of the guide pin or guide bar. The voltage drop is For example, when the ESL is 10nH and the frequency is 100kHz, an impedance of 6.28mΩ will be produced. What needs to be clear is that the ESR of a 1000μF high-frequency low-resistance electrolytic capacitor is only 30mΩ, while the ESL of a 1000μF electrolytic capacitor is much more than 10nH! When the guide pin deviates greatly from the center of the aluminum foil, under normal (120Hz) test conditions, the capacitance, leakage current, loss factor, ESR and other parameters are not much different from those of the capacitor with the guide pin located in the center of the aluminum foil, and they are all good products. However, there will be a big difference in the high temperature load life test, and the high frequency characteristics are also different. Therefore, in practical applications, users should try to avoid products with guide pins far away from the center of the aluminum foil. The above is the performance analysis of electrolytic capacitors—equivalent series inductance 4 Performance Analysis of Electrolytic Capacitors-impedance frequency characteristics of electrolytic capacitors 4.1 Performance Analysis of Electrolytic Capacitors-Performance Analysis of Electrolytic Capacitors-The relationship between the impedance of lead-pin electrolytic capacitors, frequency and The resistance of the electrolyte is the main part of the ESR of the aluminum electrolytic capacitor, and the resistivity of most electrolytes decreases as the temperature increases, so the ESR of the aluminum electrolytic capacitor also decreases as the temperature increases. The relationship between impedance frequency characteristics and temperature of a certain 100μF/63V and 47μF/350V aluminum electrolytic capacitor is shown in Figure 1-6. The figure shows the impedance frequency characteristics at typical temperatures from -40 to 85°C. The lowest value of each curve in Figure 1-6 can be considered as the ESR value. It can be seen from the figure that the ESR of the 100μF/63V aluminum electrolytic capacitor is close to 1.5Ω at -40℃ and drops to 0.5 at -25℃. Ω, dropped to 0.1Ω at 0℃, 0.05Ω at room temperature of 20℃, and the ESR at the maximum operating temperature of 85℃ is the lowest, 0.04Ω; the ESR of the 47μF/350V aluminum electrolytic capacitor at -40℃ is close to 6Ω , dropped to 3Ω at -25℃, 1.2Ω at 0℃, 0.4Ω at room temperature 20℃, and the lowest ESR at the maximum operating temperature of 85℃, 0.06Ω. It can be seen that the ESR decreases by 35% to 50% from 20°C to 85°C, but the ESR increases significantly at low temperatures, and the ESR increases by about an order of magnitude from -40°C to 0°C. From the highest operating temperature to the lowest operating temperature, the ESR increases by 40 to 100 times. Typically, the ESR of aluminum electrolytic capacitors changes relatively little with frequency. ESR values range from 0.002Ω for large capacitance bolt terminal aluminum electrolytic capacitors to 20 to 30Ω for very small capacitance leaded aluminum electrolytic capacitors. The resonant frequency of the electrolytic capacitor shown in Figure 1-6a is about 70kHz, and the LC resonant frequency formula is The 100μF/63V aluminum electrolytic capacitor is a lead-pin electrolytic capacitor, and its ESL can be calculated according to formula (1-1) to be approximately 45nH. ESL can also be found using the slope of the inductive section of the electrolytic capacitor. It is also a 100μF/63V lead-pin aluminum electrolytic capacitor. The impedance corresponding to 400kHz is about 0.1Ω, the impedance corresponding to 4MHz is about 1Ω, and the impedance corresponding to 40MHz is about 10Ω. Every time the frequency increases by one order of magnitude, the impedance also increases by one order of magnitude. Since the capacitive reactance of 100μF is very different from the inductive reactance in the range of 4 to 40MHz, it can be approximately considered that the curve in this frequency band is the curve of inductance and ESR. That is to say, the impedance at 40MHz frequency is about 11Ω, and the ESR is about 0.04Ω. According to the impedance relationship between inductor and resistor in series have to Also based on The results of estimating ESL using the resonant frequency or estimating the ESL using the slope of the inductive section of the impedance frequency characteristic curve are very close. The corresponding impedance of the 47μF/350V aluminum electrolytic capacitor in Figure 1-6b is 20Ω at a frequency of 60MHz. According to equation (1-4), it can be concluded that the ESL is 4.2 The relationship between the impedance of pin-type electrolytic capacitors, frequency and temperature The impedance frequency characteristic curve of the pin type electrolytic capacitor B43504 is shown in Figure 1-7. The resonant frequency point and inductance curve part cannot be seen in Figure 1-7, so the ESL of the electrolytic capacitor cannot be obtained using the impedance frequency characteristic curve. You can read the datasheet to see the ESL of this series of electrolytic capacitors, which is about 20nH. 4.3 The relationship between the impedance of bolt-type electrolytic capacitors, frequency and temperature Take the bolt-type electrolytic capacitor B43560 as an example. It is given in the data sheet: an electrolytic capacitor with a diameter of 51.6mm has an ESL of approximately 15nH, and an electrolytic capacitor with a diameter greater than or equal to 64.3mm has an ESL of approximately 20nH. If ESL is not given in the data sheet, it needs to be calculated from the impedance frequency characteristic curve. The impedance frequency characteristic curve of B43560 is shown in Figure 1-8. The ESL of the 15000μF/350V and 6000μF/450V electrolytic capacitors in Figure 1-8 is basically the same. The corresponding impedance at 1MHz frequency is about 0.1Ω, and the corresponding ESL is 3900μF/350V electrolytic capacitor. The ESL of the 3900μF/350V electrolytic capacitor is basically the same, and the corresponding impedance at 1MHz frequency is The impedance is about 0.08Ω, and the corresponding ESL is The ESL of the 3900μF/350V electrolytic capacitor is basically the same. The corresponding impedance at 1MHz frequency is about 0.08Ω, and the corresponding ESL is Obviously for large electrolytic capacitors, the ESL of 16nH and 13nH is definitely very low, even lower than the ESL of film capacitors of the same size! Since the capacitance of the 15000μF/350V electrolytic capacitor is significantly larger than that of the 6000μF/450V electrolytic capacitor, although the ESL of the two electrolytic capacitors is basically the same, the resonant frequency of the former (about 100kHz) is significantly lower than the resonant frequency of the latter (200~250kHz ), judging from these two curves, the traditional saying that large capacitors filter low frequencies and small capacitors filter high frequencies is inappropriate for the current level of components. A 15000μF electrolytic capacitor has the same impedance as a 6000μF electrolytic capacitor in the high frequency range, so the filtering performance is also the same. 4.4 Impedance frequency characteristics of electrolytic capacitors 4.4.1 Analysis of impedance frequency characteristics From Figure 1-1, it can be seen that the aluminum electrolytic capacitor is equivalent to an RLC series circuit. From this, the impedance frequency characteristics of the aluminum electrolytic capacitor can be obtained, as shown in Figure 1-6. As can be seen from Figure 1-6: below the resonant frequency, the impedance is dominated by the capacitive reactance of the capacitor. Aluminum electrolytic capacitors with poor frequency characteristics or large capacitance aluminum electrolytic capacitors can only maintain up to 20kHz or even more in this frequency band. At low frequencies, aluminum electrolytic capacitors with good frequency characteristics or low capacitance aluminum electrolytic capacitors can reach or exceed frequencies of 100kHz or even higher. In this frequency band, as the frequency increases, the capacitive reactance decreases and the inductive reactance increases. Since the capacitive reactance accounts for the main component, the impedance of the aluminum electrolytic capacitor decreases with the frequency in this frequency band. As the frequency increases, the capacitive reactance decreases and the inductive reactance increases. The frequency when the capacitive reactance is equal to the inductive reactance and cancel each other out is the resonant frequency of the aluminum electrolytic capacitor. At this time, the impedance is the lowest and only ESR remains. If the ESR is zero, the impedance at this time is also zero. Since the ESR of aluminum electrolytic capacitors is relatively high, the sum of capacitive reactance and inductive reactance will be lower than the ESR over a relatively wide frequency band, making the impedance frequency characteristic curve of aluminum electrolytic capacitors relatively flat over a relatively wide frequency band. At this time, the aluminum electrolytic capacitor is only equivalent to a resistance for AC. As the frequency continues to rise, the inductive reactance begins to be greater than the capacitive reactance. When the inductive reactance is close to the ESR, the impedance frequency characteristic curve begins to rise and becomes inductive. From this frequency onwards, the capacitor is actually an inductor! Due to the manufacturing process, the greater the capacitance, the greater the parasitic inductance, the lower the resonant frequency (in fact, the increase in capacitance will directly lead to a decrease in the resonant frequency), and the lower the frequency at which the capacitor becomes inductive. This is why in some literature on filtering, there is often a saying that “large capacitors filter low frequencies and small capacitors filter high frequencies” 4.4.2 Impedance The impedance of an aluminum electrolytic capacitor is actually the sum of capacitive reactance, ESR and inductive reactance in the equivalent circuit in Figure 1-1. The relationship between impedance Z and capacitive reactance, ESR, and inductive reactance is 4.4.3 Measurement of impedance The Z of the aluminum electrolytic capacitor is tested under the equivalent series circuit at 25°C under a measuring bridge powered by a variable frequency power supply with an AC signal voltage of tunable between 10Hz and 100kHz with an effective value of 1V. Impedance measurements are mainly typical characteristic curves and low temperature limit measurements. For low temperature impedance measurement, place the capacitor in the temperature control box and set the low temperature limit to ±2°C. The impedance is measured at a frequency of (120±5) Hz using any suitable method providing an accuracy of ±2.5%. As soon as possible after the temperature has stabilized, the measurement should be made with the lowest possible AC measurement voltage so that it does not cause the capacitor to heat up. The capacitor is assumed to be thermally stable if two consecutive measurements taken 15 minutes apart show no change 4.4.4 Impedance temperature characteristics As can be seen from Figure 1-6, the capacitive impedance temperature characteristics of aluminum electrolytic capacitors and the inductive reactance temperature characteristics due to parasitic inductance basically do not change with temperature. The impedance temperature characteristics only change greatly with temperature when ESR plays a major role. , which is determined by the temperature characteristics of the resistivity of the electrolyte 4.4.5 Equivalent series inductance The equivalent series inductance is relatively independent of frequency and temperature, with typical values ranging from 2 to 8nH (surface mount), 10 to 30nH (radial lead), and 20nH (axial lead). These inductance values vary with the number of lead-out electrode positions and the lead-out method. 4.4.6 Resonant frequency The resonant frequency is the frequency when the capacitive reactance 1/(2πfC) is equal to the inductive reactance 1/(2πfL). This is because the capacitive reactance and the inductive reactance are 180° out of phase, the two reactances cancel out, and the remaining impedance is pure resistance, which is equal to the ESR at this frequency. For aluminum electrolytic capacitors, the typical value of the resonant frequency should be much higher than the rectification and filtering frequency of the 120Hz capacitor. The resonant frequency of current aluminum electrolytic capacitors is generally above 20kHz, which is sufficient for power frequency rectification and filtering, but for high frequency The filtered resonant frequency can reach more than 100kHz. 4.5 Performance Analysis of Electrolytic Capacitors-Summary Electrolytic capacitors are used as filter components, and their capacitive reactance, ESL, and ESR determine the filtering effect of electrolytic capacitors. In the low frequency band, the capacitive reactance of the electrolytic capacitor is much greater than ESR and ESL. At this time, the capacitive reactance of the electrolytic capacitor determines the filtering effect. In this state, sufficient capacitance is required to ensure a low enough capacitive reactance. As the frequency increases, the capacitive reactance decreases. When the capacitive reactance of the electrolytic capacitor is close to the ESR or even lower than the ESR, the filtering effect will depend on the ESR. Since the ESR of electrolytic capacitors is relatively large, the frequency range in which ESR determines the filtering effect may be relatively wide. The larger the ESR, the wider the frequency band that affects the filtering effect. This is why high-frequency, low-resistance electrolytic capacitors are needed. If it is also low resistance in the low frequency band, it can effectively increase the ESR under 120Hz conditions, reduce the loss factor, and increase the ripple current tolerance at 120Hz. As the frequency continues to increase, the inductive reactance of the electrolytic capacitor ESL increases to a point that cannot be ignored, and the filtering effect will depend on the ESL. Therefore, the lower the ESL, the better the high-frequency filtering effect. Using multiple sets of conductor strips to lead out electrodes at appropriate locations on the aluminum foil will partially or mostly offset the winding parasitic inductance. Using a negative extension will significantly reduce the parasitic inductance of the electrolytic capacitor. The actual negative electrode of the electrolytic capacitor is the electrolyte. The ESR of the low-frequency band of the electrolytic capacitor is mainly the equivalent ESR of the aluminum oxide film when alternating current is applied. As the frequency increases, the equivalent ESR of the aluminum oxide film decreases, and the equivalent ESR of the electrolyte begins to take effect. Since the electrolyte is ionically conductive, the equivalent ESR will change with temperature. The lower the temperature, the greater the equivalent ESR. The higher the temperature (not exceeding the maximum ambient temperature in the data sheet), the lower the equivalent ESR. The highest temperature and the lowest temperature are The temperature electrolyte equivalent ESR differs by more than an order of magnitude. The performance of liquid electrolytic capacitors at low temperatures (close to the minimum operating temperature in the datasheet) is poor relative to room temperature The above is about the performance analysis of electrolytic capacitors—impedance frequency characteristics of electrolytic capacitors 5 Performance Analysis of Electrolytic Capacitors -Ripple Current Withstanding Capability 5.1 Performance Analysis of Electrolytic Capacitors-Origin of ripple current withstanding capability The initial electrolytic capacitors were used in general electronic circuits. In this application state, the AC current flowing through the electrolytic capacitors was low. At this time, it seemed unnecessary to consider the current carrying capacity of the electrolytic capacitors. Therefore, the data sheets of early electrolytic capacitors There is no such data. Electrolytes with higher resistivity in boric acid are also acceptable. When electrolytic capacitors are mainly used in power electronic circuits such as switching power supplies, the working status of electrolytic capacitors has changed. The most important change is that the unit capacitance current flowing through the electrolytic capacitor surges, causing the “internal resistance” of the electrolytic capacitor to heat up, eventually leading to early failure of the electrolytic capacitor, or even failure very quickly. Therefore, the electrolytic capacitor has the ability to withstand ripple current. Referred to as ripple current. The term ripple current is transplanted from ripple voltage. After the alternating current is rectified and filtered, the voltage becomes a relatively stable direct current, but there is still an alternating current component superimposed on the direct current voltage, and the waveform is like ripples on the water surface. For voltage, this calm “water surface” is the DC voltage component, and the “ripple” is the AC component superimposed on the DC voltage, that is, the ripple voltage. The same is true for current. The AC current output by rectification is called ripple current. For the DC load end, this ripple current is not required or even allowed, so the ripple current must “all” flow into the electrolytic capacitor. Therefore, the effective current value capability of the electrolytic capacitor is called ripple current. 5.2 Ripple current withstanding capability The AC ripple current flowing through the aluminum electrolytic capacitor will cause losses in its ESR and cause the aluminum electrolytic capacitor to heat up. This heating limit limits the ripple current to the rated ripple current value. It is defined as the maximum ripple current value that can ensure the rated life time of the aluminum electrolytic capacitor at the highest operating temperature. For aluminum electrolytic capacitors used in general applications, most aluminum electrolytic capacitor manufacturers do not give rated ripple current data. For low ESR aluminum electrolytic capacitors used in switching power supplies or pin-type aluminum electrolytic capacitors with larger capacitance, they give this data. In fact, the ripple current that aluminum electrolytic capacitors can withstand is relatively low. The first impression of the ripple current value that general-purpose aluminum electrolytic capacitors can withstand is that it is too low. Fortunately, most applications do not require very high requirements. ripple current 5.3 Definition of rated ripple current At the highest ambient temperature, the current flowing through the electrolytic capacitor causes the center temperature of the electrolytic capacitor core to rise. The ripple current value corresponding to the rated temperature rise is the rated ripple current. Electrolytic capacitors with different maximum temperatures have different temperature rises. For example, an electrolytic capacitor with a maximum temperature of 105°C has a rated ripple current of 10°C, while an electrolytic capacitor with a maximum temperature of 105°C has a rated ripple current of 10°C. The corresponding temperature rise is 5°C. This different result is because the maximum operating temperature of the 85°C electrolyte should generally not exceed 95°C, while the maximum operating temperature of the 105°C electrolyte should generally not exceed 110°C. Based on the above habits, even if a 105°C electrolytic capacitor is manufactured with a 125°C electrolyte, the rated ripple current temperature rise is still 5°C. The temperature rise under the rated ripple current of electrolytic capacitors with a maximum temperature of 125°C or 130°C or even 144°C or 150°C is generally limited to 5°C. To sum up, the essence of the rated ripple current parameter is the limitation of the heating of the electrolytic capacitor core caused by the ESR of the ripple current flowing through the electrolytic capacitor 5.4 Ripple current frequency characteristics As can be seen from Figure 1-6, the ESR of the electrolytic capacitor changes with frequency. From the same perspective as the heating of electrolytic capacitors, as the frequency increases, the ESR decreases. If the ripple current remains unchanged, the loss of the electrolytic capacitor itself decreases, and the temperature rise decreases accordingly. At this time, if you want to achieve the same temperature rise, you can increase the ripple current so that the loss caused by the ripple current on the ESR of the electrolytic capacitor is the same as that under the “standard conditions” of 120Hz. The corresponding multiplication factor is the frequency of the ripple current. Conversion factor. The frequency conversion coefficient of the ripple current of a certain electrolytic capacitor is shown in Table 1-1. Table 1-1 Frequency conversion coefficient of ripple current of an electrolytic capacitor At the same frequency: the smaller the capacitance, the smaller the conversion coefficient obtained; the lower the frequency, the smaller the conversion coefficient obtained. The pattern is consistent with ESR increasing as frequency decreases. Similarly, the lower the frequency band, the faster the ESR of the electrolytic capacitor increases, so the rated ripple current in this frequency band attenuates faster. Table 1-1 does not list the attenuation degree of the ripple current at the 50Hz frequency. As can be seen from Figure 1.2, the rated ripple current attenuates more obviously. If the frequency continues to be reduced, there will be little corresponding rated ripple current left. Therefore, electrolytic capacitors need to try their best to avoid the existence of ultra-low frequency ripple The frequency conversion factor of the ripple current of the electrolytic capacitor can also be based on 120Hz. Table 1-2 shows the conversion coefficient of the ripple current frequency of an electrolytic capacitor based on 120Hz as the test standard. Table 1-2 Frequency conversion coefficient of ripple current of an electrolytic capacitor based on 120Hz At the same frequency: the smaller the capacitance, the greater the conversion coefficient obtained; the lower the frequency, the smaller the conversion coefficient obtained. This is consistent with ESR increasing as frequency decreases. Similarly, the lower the frequency band, the faster the ESR of the electrolytic capacitor increases, so the rated ripple current in this frequency band increases faster. It should be noted that the current conversion factor based on 100kHz is generally feasible. However, for lead-pin electrolytic capacitors, the ripple current frequency conversion coefficients calculated back to 1kHz, 10kHz, and 100kHz based on 120Hz are not necessarily correct. It is very likely that the following situation will occur: under the condition of 100kHz, multiply the ripple current at the 120Hz reference frequency by the frequency conversion factor, and it will be concluded that the ripple current life of the electrolytic capacitor under the converted 100kHz condition is shorter than that under the 120Hz reference ripple current condition. life span. The reason is that the electrolytic capacitor has winding parasitic inductance. This parasitic inductance will quickly attenuate the current of the aluminum foil at the far end of the guide pin or guide bar, causing the current to be too concentrated near the guide pin or guide bar, resulting in Aluminum foil is severely overcurrent. 5.5 Ripple current temperature characteristics It can be seen from Table 1-3 that as the core temperature of the electrolytic capacitor increases, the ESR of the electrolytic capacitor decreases. For the same temperature rise, the rated ripple current can increase. The degree of increase is the temperature conversion coefficient of the ripple current. Most small electrolytic capacitors do not give a temperature conversion factor for ripple The temperature conversion coefficient given for the electrolytic capacitor (105℃/4000h) shown in Table 1-3 is based on 85℃. The reason is that the rated ripple current of this electrolytic capacitor is given at 85℃. It can be seen from the table that as the ambient temperature decreases, the temperature conversion coefficient of the ripple current of this electrolytic capacitor increases. At 45°C, it reaches 2 times that of 85°C and 4 times that of 105°C! Table 1-3 Temperature conversion coefficient of ripple current of lead-pin electrolytic capacitor (CDE361R series) Table 1-4 Temperature conversion coefficient of ripple current of pin-type electrolytic capacitor (CDE380 series) Table 1-5 Temperature conversion coefficient of ripple current of bolt-type electrolytic capacitor (CDE520C series) 5.6 Nature of rated ripple current The rated ripple current is actually the limit of the temperature rise caused by the ripple current flowing through the electrolytic capacitor. This limitation is mainly due to the service life of the electrolytic capacitor. If the ripple current flowing through the electrolytic capacitor exceeds the rated ripple current, the electrolytic capacitor can still work normally to a certain extent, but the life will be shortened. The greater the ripple current, the shorter the life, and the life is shortened much faster. higher than the rate at which the ripple current increases. Continuing to increase the ripple current can cause the electrolytic capacitor to heat up severely in a short period of time and generate gas in the electrolytic capacitor. Under such working conditions, the gas pressure inside the electrolytic capacitor is getting higher and higher, which will eventually burst the explosion-proof valve of the electrolytic capacitor. In mild cases, the electrolytic capacitor will appear as a convex bottom. In severe cases, the explosion-proof valve of the electrolytic capacitor will be opened wide and spray out. Electrolyte, that is, bursting pulp, will be accompanied by broken capacitor paper when the electrolyte is sprayed out. The most serious situation is an electrical spark between the guide pin or guide bar and the aluminum foil, resulting in implosion (only one of the causes of implosion) The above is the performance analysis of electrolytic capacitors—ripple current endurance 6 Performance Analysis of Electrolytic Capacitors-relationship between life and temperature and ripple current 6.1 Performance Analysis of Electrolytic Capacitors-The relationship between the service life of lead-pin electrolytic capacitors, temperature and ripple current The actual service life of the electrolytic capacitor is related to the working conditions of the electrolytic capacitor, mainly related to temperature and ripple current. Electrolytic capacitors with different packages, rated voltages, and capacitances have different relationships between life, temperature, and ripple current, as shown in Figure 1-9. The ordinate in the figure is the ratio of the actual ripple current to the rated ripple current when the ambient temperature (or case temperature) is 105°C. The abscissa is the ambient temperature. The case temperature curve of this electrolytic capacitor is basically the same as the ambient temperature curve. Figure 1-9 shows a low-voltage, high-frequency, low-resistance electrolytic capacitor with diameters of 10mm (105℃/5000h) and 12.5mm (105℃/7000h) with same-side leads. The shaded area in the figure is a prohibited working area. Once the electrolytic capacitor enters this area, there is no guarantee whether the electrolytic capacitor will fail. Comparing the curves in Figure 5-9a and b, the life of small-diameter electrolytic capacitors (5000h) is shorter than that of large-diameter electrolytic capacitors (7000h). The reason is that large-diameter electrolytic capacitors contain more electrolyte than small-diameter electrolytic capacitors. It can also be seen that when the ambient temperature is 40°C, the effective limit ripple current of a 10mm diameter electrolytic capacitor is greater than that of a 12.5mm diameter electrolytic capacitor. The former is about 3.2 times the rated ripple current, and the latter is about 2.75 times the rated ripple current. Under the rated ripple current condition, when the temperature drops by 25°C, the life span is extended to 8 times the original value. For every 8°C drop in ambient temperature, the life span is Taking the 12.5mm product as an example, when the ambient temperature of the electrolytic capacitor is 65°C, the life span under the rated ripple current state is 200,000 hours; when the ripple current rises to about 1.7 times the rated ripple current, the life span drops to 10 10,000 h; while the ripple current at the 50,000 h life span only requires 2.2 times the rated ripple current; the ultimate ripple current is approximately 2.35 times the rated ripple current! The life span at this time should be reduced to 7000h. It can be seen that as the ripple current increases, the life of the electrolytic capacitor decreases under the same temperature conditions. When the ripple current rises to more than 1.5 times the rated ripple current, the life of the electrolytic capacitor shortens faster and faster until it exceeds the limit. Ripple current may cause convex bottom or even explosion. 6.2 The relationship between the service life of axial leaded electrolytic capacitors, temperature and ripple current The advent of axial lead type electrolytic capacitors preceded the same side lead (lead pin type) electrolytic capacitors. Generally speaking, axial lead electrolytic capacitors perform better than same-side lead electrolytic capacitors, but same-side lead electrolytic capacitors are more suitable for automated mass production and are cheaper. The relationship between the life of the B36697 series electrolytic capacitors, temperature and ripple current is shown in Figure 1-10. Since B43697 is a high-voltage electrolytic capacitor, its limit overcurrent multiple is lower than the limit overcurrent multiple given in Figure 1-9. As can be seen from Figure 1-10, for high-voltage electrolytic capacitors, appropriately reducing the operating voltage can effectively extend its life. Even a 6.6% reduction will extend the life from the rated life of 4000h to 11500h. 6.3 The relationship between the service life of automotive grade electrolytic capacitors, temperature and ripple current Since automotive-grade electronic devices need to work between -40 and 120°C, fuel-engine vehicles require that the maximum operating temperature of automotive-grade electrolytic capacitors needs to be 125°C or even 150°C. Figure 1-11 shows the relationship between the life of the B3693 series of automotive grade electrolytic capacitors, temperature and ripple current. This is a low-voltage, high-frequency, low-resistance electrolytic capacitor. The ripple current limit curve (the boundary between the shaded and non-shaded areas) is similar to Figure 1-7. Figure 1-11a shows the relationship between life and ambient temperature and ripple current. Figure 1-11b shows the relationship between life and case temperature and ripple current. The latter has a slightly larger ripple current limit than the former, reaching the rated ripple current. Nearly 3.3 times, an increase of about 10%. At the same time, the equal life curve in Figure 1-11b becomes steeper. Even if the ripple current is the same, the life span will be longer as measured by the case temperature. 6.4 Relationship between the service life of pin-type electrolytic capacitors, temperature and ripple current Figure 1-12 shows the relationship between life and temperature and ripple current of the B43504 series of pin-type electrolytic capacitors. As can be seen from Figure 1-12, the isothermal life curve is relatively steep, which may be the result of the extension of the negative electrode improving the thermal conductivity, and the limit ripple current is close to 3.5 times the rated ripple current. 6.5 Relationship between the service life of bolt-type electrolytic capacitors, temperature and ripple current Figure 1-13 shows the relationship between the service life of the bolt-type electrolytic capacitor B43560 series and the temperature and ripple current. The above is the performance analysis of electrolytic capacitors—the relationship between life, temperature and ripple current 7 Performance Analysis of Electrolytic Capacitors- Thermal Effect of ESR and Thermal Resistance of Aluminum Electrolytic Capacitors The ripple current flowing through the ESR of the aluminum electrolytic capacitor will produce a power loss of p=i^2R ESRand cause the aluminum electrolytic capacitor to heat up. Compared with power semiconductor devices, aluminum electrolytic capacitors have very poor heat dissipation capabilities, so the internal temperature of aluminum electrolytic capacitors with slight power consumption will increase significantly, thereby reducing the service life of aluminum electrolytic capacitors. Therefore, in addition to understanding the impact of the ESR of aluminum electrolytic capacitors on circuit operation, we must also pay attention to the heat dissipation ability of aluminum electrolytic capacitors, that is, the thermal resistance. In large aluminum electrolytic capacitors, a large ripple current usually flows, causing the aluminum electrolytic capacitor to heat up. In order to dissipate the heat generated by the aluminum electrolytic capacitor, the temperature of the aluminum electrolytic capacitor core pack will be higher than the temperature of the shell, that is, the temperature from the shell to the core pack of the aluminum electrolytic capacitor increases. This temperature rise is very important and determines the working status and life of the aluminum electrolytic capacitor. If you know the heat dissipation capacity (thermal resistance) of the aluminum electrolytic capacitor from the core pack to the shell, it is easy to calculate whether the temperature of the aluminum electrolytic capacitor core pack is within the desired value or design requirement range through the measured ripple current and shell temperature. The packaging forms of aluminum electrolytic capacitors are mainly divided into three categories: bolt type (large aluminum electrolytic capacitors or high ripple current aluminum electrolytic capacitors), pin type (medium aluminum electrolytic capacitors or medium ripple current aluminum electrolytic capacitors), lead type and Surface mount (small aluminum electrolytic capacitors or low ripple current aluminum electrolytic capacitors). Tables 5-6 to 5-9 respectively show the thermal resistance data of bolt-type aluminum electrolytic capacitors produced by CDE Company, the thermal resistance data of bolt-type aluminum electrolytic capacitors produced by RIFA Company, and the thermal resistance data of pin-type aluminum electrolytic capacitors produced by CDE Company. Thermal resistance data of pin-type aluminum electrolytic capacitors produced by The above is about the performance analysis of electrolytic capacitors—The thermal effect of ESR and the thermal resistance of aluminum electrolytic capacitors Table 1-6 Thermal resistance data of bolt-type aluminum electrolytic capacitors produced by CDE Company (unit: ℃/W) Note: The metal base in the table is to fix the aluminum electrolytic capacitor on a radiator or metal chassis with heat dissipation capability. Table 1-7 Thermal resistance data of bolt-type aluminum electrolytic capacitors produced by RIFA (Unit: °C/W) Note: Rthhs in the table is the thermal resistance from the aluminum electrolytic capacitor shell to the radiator. Table 1-8 Thermal resistance data of pin-type aluminum electrolytic capacitors produced by CDE (unit: ℃/W) Another effective way to reduce the thermal resistance from the core to the shell of an aluminum electrolytic capacitor is to “sit” the negative aluminum foil of the aluminum electrolytic capacitor directly on the shell to increase the thermal conductivity and reduce the thermal resistance from the core to the shell. From the above analysis, we can see that for products produced by different aluminum electrolytic capacitor manufacturers, products with the same appearance or similar shells have basically the same thermal resistance from the shell to the environment, but the thermal resistance from the core package to the shell is quite different. If only considered from a quality perspective, the thermal resistance from the core package to the shell of the aluminum electrolytic capacitor reflects the application quality of the aluminum electrolytic capacitor. Among the many aluminum electrolytic capacitor manufacturers in the world, few can provide the thermal resistance of their aluminum electrolytic capacitors. Some do not have this data (such as many domestic aluminum electrolytic capacitor manufacturers), while others require technical confidentiality. The size and rated ripple current of leaded aluminum electrolytic capacitors are very small, and they can usually meet the requirements of designing electronic circuits without considering thermal Performance Analysis of Electrolytic Capacitors-—mainly including ripple current tolerance, equivalent series resistance, equivalent series inductance and other parameters….To learn more about electrolytic capacitors and products, please click:www.xuanxcapacitors.com Share This Story, Choose Your Platform!
{"url":"https://solidcapacitor.com/performance-analysis-of-electrolytic-capacitors/","timestamp":"2024-11-13T21:33:49Z","content_type":"text/html","content_length":"191719","record_id":"<urn:uuid:fd0f9d83-37ef-4882-bc2d-27b56824a3e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00813.warc.gz"}
Chapter 3 - Regression Models When performing time series analysis, models are mathematical or statistical representations that describe the relationship between variables over time. They can serve as an abstraction of real-world processes, enabling scientists and data analysts to better understand, interpret and predict data. Def. 2.1 - Model A model in time series analysis is a formal mathematical framework that is used to describe how a variable (or set of variables) evolve over time. 2.1 Introduction To Models ❌ The following chapters in this syllabus will cover various types of models, which will be explained in detail. Different types of models allow to capture different underlying patterns, trends and relationships within the time series data. These models can be broadly categorized on their structure, underlying assumptions and the nature of time series they are suited for. Below is a short overview of common model types, some of which will be introduced in the current and later chapters. Naïve & Simple Models The easiest models to understand are those that do not require complex mathematical representations or computations. They often serve as a useful baseline or benchmark in comparison to more traditional or complex models. • The Naïve model assumes that the next value will be the exact same as the last observed value in the series. Mathematically, an estimator (see 2.2) can be represented as: • The naïve model can be expanded to the Seasonal Naïve model, which is very similar but has one big advantage; it takes into account seasonality. This is achieved by forecasting the next value as the same as the last observed value in the corresponding seasons. In case of monthly data, the following estimator can be used: • Finally, the Simple Average (SA) model forecasts the next value in the time series based on the average of all past observations, which leads to the following estimator: Smoothing Models Some time series data may contain noise from external sources. In other cases, it can be beneficial to capture certain trends within a time series. This is where smoothing models are useful, since they smooth out the data, removing noise in the process. Due to their accumulative nature, they also posses the capability of capturing trends over time. • The Moving Average (MA) model calculates the average of the past observations and uses that as a forecast for the next value. This bears similarity to the previously discussed simple average model, the only difference being that only the past observations are taken into account, instead of all observations. The moving average estimator can be written mathematically as: • The Exponential Smoothing (ES) model uses a weighted average of past observations, where more recent observations are given a higher weight (and thus contribute more to the estimate for the next value). There exist several types of exponential smoothing models such as the Simple Exponential Smoothing (SES) model, which is often used in case no clear trend or seasonal pattern is visible. Holt’s linear trend model extends the SES model by adding the capability of capturing linear trends. Finally, the Holt-Winter (sometimes referred to as triple exponential smoothing) model further extends this by accounting for seasonality. Regression Models Regression models aim to explain the relationship between the dependent variable (response, in time series analysis often the data dimension) and one or more independent variables (predictors, in time series analysis often the temporal dimension). • The Linear Regression model assumes a linear relationship between the dependent variable and one or more independent variables. An estimator can be constructed as follows: • Autoregressive Distributed Lag (ARDL) models combine lagged values of the dependent variable (as in autoregressive models) and independent variables (as in regressive models) to model the time Autoregressive Models As briefly mentioned in the first chapter, autoregressive models use past values of the time series to predict or forecast future values. This means that such models assume that the current values are dependent on previous values in the time series. • Autoregressive (AR) models forecast the value of the response as a linear combination of the response at past points in time. This is represented by the following estimator: • Autoregressive Moving Average (ARMA) models take regular AR models one step further by combining them with the MA model. This allows to capture both the autoregressive nature of the time series and remove noise in the process. An ARMA estimator can be expressed as: • ARMA models can be further extended to (seasonal) autoregressive integrated moving average ((S)ARIMA) models. ARIMA models include a differencing step to remove trends. SARIMA models are a further extension that specifically aim at capturing seasonality. State-Space & Structural Models State-space and structural models provide a flexible framework for modeling complex time series data by representing the series with latent (unobserved) state variables. • The Kalman Filter is a state-space model that uses a recursive algorithm to estimate the hidden state variables of a time series. It is often used for filtering and forecasting in dynamic • The Unobserved Components (UC) model decomposes a series into several components that constitute to the series, such that they provide relevant information. In the case of time series, this often includes decomposition into trend, seasonal and irregular components. Multivariate Models In some cases, time series data consists of more than one data dimension. In these cases, it is necessary to deal with more than one time series at once, allowing for interactions between different time series. • Vector Autoregressive (VAR) models extend AR models to multiple time series. Each variable in the system is modeled as a linear function of past values of all variables in the system. An estimator for the VAR model can be expressed as follows: • Vector Error Correction (VECM) models are a variation of VAR models used when time series are cointegrated (i.e. meaning they share a long-term equilibrium relation). This is often reserved for econometric analysis. Machine-Learning Models In recent years, machine learning methods have become a popular choice in time series forecasting, often outperforming traditional models in some applications. • Random forest & Decision tree models are tree-based methods that can be used for time series forecasting. This is particularly the case when there are complex non-linear interactions between • Gradient Boosting methods such as “XGBoost” or “LightGBM” have been successfully applied to time series forecasting tasks. • Neural networks are also quite successful when it comes to time series forecasting. In particular, Recurrent Neural Networks (RNNs) such as LSTM and GRU models are specifically designed to handle (originally textual) sequential data. This allows to capture longer-term dependencies. Other types of neural networks such as Convolutional Neural Networks (CNNs) can be applied to time series data to capture both spatial and temporal dependencies. Non-linear Models Non-linear models are used when a time series exhibits a non-linearity. These models can be used to capture these specific dynamics. • Threshold Autoregressive (TAR) models are models where the AR process switches between different regimes, depending on whether the past values are above or below a certain threshold. A smooth version, called a smooth transition autoregressive (STAR) model is a generalization of a TAR model where the transition between regimes is smooth rather than abrupt. • General Autoregressive Conditional Heteroskedasticity (GARCH) models are used for modeling time series with a changing variance over time. They are commonly applied in financial data to capture volatility clustering. Other Model Types Long-Memory and Fractional models can be used when a time series exhibits long-term dependencies that are not captured well by traditional short-term models such as ARIMA models. A general ARIMA model can be extended to ARFIMA (where F stands for “fractionally”). These allow for fractional differencing, making it suitable for time series with long-range dependencies. Furthermore, hybrid models combine different types of models to leverage the strengths of each approach. For example, combining ARIMA with machine learning models (such as ARIMA-LSTM) allows for capturing both linear and non-linear patterns in the data. Example 2.1 - Basic Forecasting Models This example shows the closing price of the Microsoft stock price over the period of one month. Different models were then used to make predictions for the closing price of this stock. Noteworthy is the moving average, which seems to be constant. This is due to the fact that no new moving average can be computed with “predicted” data, since this would dampen the overall prediction over time to a stable value. It is also interesting to see that, due to the volatility of stock market data, none of these models do a good job of accurately predicting what the stock price will do. 2.2 The Linear Regression Model Linear regression involves modeling a dependent variable as a linear combination of one or more predictors (independent variables). In time series data, the goal is often to understand the trend, seasonality and potential cyclical patterns of the time-dependent variable or to forecast future values. In a linear regression model for time series, the dependent variable at time is expressed as: • is the value of the dependent variable at time . • is the predictor variable at time . • is the intercept of the regression line, representing the baseline level of when all the predictors evaluate to zero. • is the (slope) coefficient for the predictor variable, which measures the impact of on . • is the error term at time , which is often assumed to be a random variable. This term can be omitted if unnecessary, but is often assumed to have zero mean and constant variance: Types of Linear Regression Models in TSA In time series analysis often two types of linear regression models are considered; the trend-based regression model and explanatory variable models: • Trend-based Regression Models are used to identify long-term directional movements in the time series data. Common trends include linear, quadratic or exponential trends. For example, to model a linear trend, the following model can be used with being the time index: • Explanatory Variable Models (also known as Multiple Regression) are models where the dependent variable is explained by other time-dependent variables. For example, in predicting sales, economic indicators such as consumer confidence or seasonal indicators (e.g. month or quarterly) are used as independent variables. Assumptions for Linear Regression in TSA When using a linear regression model in time series analysis, it is important to review several assumptions to ensure the validity and reliability of the model and its estimated parameters. • The relationship between the predictors and the dependent variable is assumed to be linear. This is the linearity assumption. • The error terms should be uncorrelated over time. This can be quite problematic, especially in time series, since consecutive values are often autocorrelated. This is the independence assumption. • The variance of the errors should also be constant over time, an assumption known as homoscedasticity. • Finally, the error terms should be normally distributed. This is of particular importance for inference of the regression coefficients. This assumption is the normality of errors assumption. Violating one of these assumptions can lean to biased estimates, incorrect predictions and unreliable inference, impacting the predictive power of the model. 2.2.1 Estimation of Model Parameters To obtain a linear regression model that can be used to make predictions of forecast future values, it is necessary to estimate the model parameters. In this particular case, the model parameters are • The slope coefficients for all predictors To obtain the intercept and slope coefficients, data samples can be used. The data samples are pairs of both the sample and the predictors that were used to obtain that sample : Here, we assume that the following linear relation holds, such that the linearity constraint is satisfied: The error term is assumed to be distributed as white noise. This means the error terms are i.i.d. with zero mean and are uncorrelated over time. This satisfies both the independence and normality of errors contraint. If the errors are drawn from any distribution where the variance is constant over time, the homoscedasticity constraint is further satisfied, validating all assumptions necessary for a linear regression model. Typically, the intercept and slope coefficients are estimated using the (ordinary) least squares (OLS) estimator described in the previous chapter. Remember that this estimator uses the following relation to estimate a parameter: In this particular case we let . This allows the OLS estimator to choose the values of the intercept and slope coefficients that minimize the value of the expression. This results in fitted values , such that It is important to note that in most cases, the resulting values for the intercept and slope coefficients will yield a linear relation where, if plotted in a -dimensional graph, the resulting line will not intercept any of the data samples. The OLS estimator obtains a line that minimizes the distance to each of these points. This results in so called residuals for each data sample. A residual is the difference between the actual data sample for the sample and the predicted value. Example 2.2 - Linear Regression In this example simple linear regression between one predictor and one dependent variable is shown. The first image contains the fake data, together with two random regression lines . It is clear that the two random regression lines do not capture the trend observed in the data very well. The trend observable in the upper regression line is too strong, while the trend in the lower regression line seems to follow the actual trend of the data. However, the intercept of the lower regression line is too low compared to the actual data. After applying the OLS estimator, the result is the actual regression line with intercept and slope coefficient . This results in the following regression line that fits the actual data much better! Performing a residual analysis on the residuals can give an insight in why the regression line obtained with the OLS estimator is better. To compare, the sum of squared residuals (SSR) is used: This results in the following SSR scores: Regression Line SSR Note that this is the lowest obtainable score, since the OLS estimator results in the parameters that minimize this sum. This means that a better score cannot be obtained using the OLS estimator and that the resulting regression line is the best one obtainable using a linear regression model. Matrix Notation The regression model can also written in matrix notation. In this notation a matrix containing all the values for the predictor variables and a value of 1 for all intercepts is constructed. The vector contains all the values of the dependent variable, obtained from the sample data. We can now rewrite the linear regression model in matrix notation. • is an vector of observed responses (the sample data). • is an matrix of predictors, with each row representing an observation and each column a predictor variable. • is the vector of unknown parameter that have to be estimated (the intercept and slope coefficients). • is an vector of random errors, which are (typically) assumed to be i.i.d. with mean 0 and variance , i.e. to satisfy the assumptions for a linear regression model. It is then possible to show that Covariance Matrix of The covariance matrix of the estimated parameters is denoted as and quantifies the uncertainty or variability of the estimated coefficients. It is given by the following expression: It can further be shown that • is the variance of the error term . If the errors have more variability, then the estimates of are less precise, leading to a larger covariance matrix (in terms of the magnitude of the values) • is the inverse of the matrix formed by taking the product of with itself. This component enables to capture information about the predictor variables. It might be unclear why this covariance matrix matters in the first place. The covariance matrix serves several purposes, especially in the context of statistical inference. • The diagonal elements represent the variances of each estimated parameter . These values can indicate how precise each parameter estimate is. Smaller variances imply more precise estimates, while larger variances indicate a greater uncertainty. • The off-diagonal elements represent the covariances between different parameter estimates. They provide an indication on how different parameter estimates are linearly related. If the predictors are highly correlated, the covariances will be high, leading to less reliable estimates (a phenomenon known as multicollinearity).
{"url":"https://www.matthias-kovacic.dev/chapter-3--regression-models","timestamp":"2024-11-05T04:30:34Z","content_type":"text/html","content_length":"268431","record_id":"<urn:uuid:fc3a1b32-4a10-49f3-8f47-64d68250b4cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00248.warc.gz"}
Remark 3.3.8.3. If $f: X \rightarrow S$ is a Kan fibration of simplicial sets, then every vertex $s \in S$ determines a Kan complex $X_{s} = \{ s\} \times _{S} X$. One can think of the construction $s \mapsto X_{s}$ as supplying a map from $S$ to the “space” of all Kan complexes. Roughly speaking, one can think of Theorem 3.3.8.1 as asserting that this “space” itself behaves like a Kan complex. We will return to this idea in §5.6.
{"url":"https://kerodon.net/tag/00ZU","timestamp":"2024-11-12T21:53:53Z","content_type":"text/html","content_length":"9801","record_id":"<urn:uuid:2050347c-675f-4e41-ab01-48dcad8608bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00708.warc.gz"}
Feature Elimination using RFE You built your first model in the previous segment. Based on the summary statistics, you inferred that many of the variables might be insignificant and hence, you need to do some feature elimination. Since the number of features is huge, let’s first start off with an automated feature selection technique (RFE) and then move to manual feature elimination (using p-values and VIFs) – this is exactly the same process that you did in linear regression. So let’s start off with the automatic feature selection technique – RFE. Let’s summarise the steps you just performed one by one. First, you imported the logistic regression library from sklearn and created a logistic regression object using. Then you run an RFE on the dataset using the same command as you did in linear regression. In this case, we choose to select 15 features first (15 is, of course, an arbitrary number). You can see that RFE has eliminated certain features such as ‘MonthlyCharges’, ‘Partner’, ‘Dependents’, etc. We decided to go ahead with this model but since we are also interested in the statistics, we take the columns selected by RFE and use them to build a model using statsmodels using: Here, you use the GLM (Generalized Linear Models) method of the library statsmodels. ‘Binomial()’ in the ‘family’ argument tells statsmodels that it needs to fit a logit curve to a binomial data (i.e. in which the target will have just two classes, here ‘Churn’ and ‘Non-Churn’). Now, recall that the logistic regression curve gives you the probabilities of churning and not churning. You can get these probabilities by simply using the ‘predict’ function as shown in the Since the logistic curve gives you just the probabilities and not the actual classification of ‘Churn’ and ‘Non-Churn’, you need to find a threshold probability to classify customers as ‘churn’ and ‘non-churn’. Here, we choose 0.5 as an arbitrary cutoff wherein if the probability of a particular customer churning is less than 0.5, you’d classify it as ‘Non-Churn’ and if it’s greater than 0.5, you’d classify it as ‘Churn’. The choice of 0.5 is completely arbitrary at this stage and you’ll learn how to find the optimal cutoff in ‘Model Evaluation’, but for now, we’ll move forward with 0.5 as the cutoff. Coming Up In the next segment, you will learn how to calculate the accuracy of the fitted logistic regression curve.
{"url":"https://www.internetknowledgehub.com/feature-elimination-using-rfe/","timestamp":"2024-11-08T04:34:09Z","content_type":"text/html","content_length":"80658","record_id":"<urn:uuid:445c0bbf-09d0-4770-945b-6d5acf06224f>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00835.warc.gz"}
What is the Gross Margin Formula? Gross Margin Formula The formula to calculate Gross Margin, often expressed as a percentage, is: \(\text{Gross Margin} = \frac{\text{Total Revenue} – \text{Cost of Goods Sold}}{\text{Total Revenue}} \times 100\% \) • Total Revenue is the total amount of money received from the sale of goods or services. • Cost of Goods Sold (COGS) includes all the direct costs attributable to the production of those goods or services sold by the company. This typically includes direct labor costs and direct materials costs. In this formula, you first subtract the COGS from the Total Revenue to get the Gross Profit. Then you divide the Gross Profit by the Total Revenue to get the Gross Margin Ratio. Multiplying by 100% expresses this ratio as a percentage. The Gross Margin represents the percent of total sales revenue that the company retains after incurring the direct costs associated with producing the goods and services it sells. A higher percentage indicates the company is retaining more on each dollar of sales to cover its non-production costs and/or generate profits. Example of the Gross Margin Formula Let’s take an example of a hypothetical company, “Best Bikes,” which manufactures and sells bicycles: Suppose in the last fiscal year, Best Bikes had total revenue (total sales) of $2,000,000. The Cost of Goods Sold (COGS), which includes the direct costs for materials and labor used in creating the bicycles, was $1,200,000. We can use the gross margin formula to calculate the gross margin: • Calculate the Gross Profit: $2,000,000 (Total Revenue) – $1,200,000 (COGS) = $800,000 (Gross Profit) • Calculate the Gross Margin: ($800,000 (Gross Profit) / $2,000,000 (Total Revenue)) x 100% = 40% So, Best Bikes’ gross margin is 40%. This means that for every dollar Best Bikes earns from selling bicycles, it retains $0.40 after covering the direct costs associated with producing the bicycles. This remaining amount can be used to cover other business expenses like overhead costs, R&D, marketing expenses, or it can contribute to net profit.
{"url":"https://www.superfastcpa.com/what-is-the-gross-margin-formula/","timestamp":"2024-11-13T22:33:43Z","content_type":"text/html","content_length":"396294","record_id":"<urn:uuid:4544a3d3-352c-4fa5-9f59-311f0e5b5651>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00185.warc.gz"}
CECAM - Entanglement in Strongly Correlated Systems The study of topological phases of matter has recently experienced a tremendous intensification, with much progress on both the experimental as well as theoretical side. Most notably are the newly discovered topological insulators (or superconductors), which combines physics from the quantum Hall effect and graphene. Currently, most of the interesting physics in topological insulators emerges from combining non-interacting band theory with the notion of topology, which has led to some spectacular results. Most of the developments in the field of topological insulators, has focussed on the effects of the topological properties, without taking the electron interactions into account. While giving rise to very interesting physics, combining the topological effects with electron interactions, will most certainly lead to many interesting discoveries. The fractional quantum Hall effect is a good example of where this interplay indeed has led to very exciting new physics. The theoretical prediction of (non-abelian) Majorana particles in topological insulators and closely related systems [1] has recently boosted the quest for the discovery of emergent non-abelian particles [2] beyond the realm of the quantum Hall effect [3]. In parallel to the developments in condensed matter physics, tremendous progress has been made in the field of cold atomic systems [4]. Such systems are extremely versatile, because of their tunability, and there are several proposals to use exploit the properties cold atomic gasses offer. Amongst these are the realization of interesting model lattice systems, which are known to exhibit interesting topological phases, such as Kitaev's honeycomb model [5], to name an interesting example. In addition, there are several proposals, to use atomic gasses, to emulate non-abelian gauge-potentials [6]. Success in this direction, in particular in combination with `traditional' condensed matter physics, would open up a whole new realm of interesting topological phases of matter. During the recent years, the field of topological phases has been boosted by the possible application to quantum computing [7]. Topological quantum computation solves by construction, the problem of local decoherence. Implementing topological quantum computation [8] in realistic experimental systems is one of the grail of the community. Numerical simulations with theoretical guidance have provided enormous insights on these complex many-body systems - quantum Monte Carlo simulations of cold atoms [9], density matrix renormalisation group [10] and tensor network studies [11] of topological spin liquids or exact diagonalizations of non-Abelian strongly interacting anyons [12] to cite only a few. From the above, it should be clear that the field of topological phases in condensed matter physics is an active field, where theoretical (both analytic approaches and simulations) and experimental progress go hand in hand. It is therefor important to have a regular platform, where physicist with different backgrounds - numerical, theoretical or experimental - but with the common interest of topological phases of matter, can report and discuss the recent developments in the field.
{"url":"https://www.cecam.org/workshop-details/entanglement-in-strongly-correlated-systems-67","timestamp":"2024-11-02T06:23:02Z","content_type":"text/html","content_length":"64560","record_id":"<urn:uuid:78ac35ac-f815-4149-af9c-fe0bcab481ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00002.warc.gz"}
Graduate Courses Catalog ME 501 Advanced Engineering Mathematics I (3+0+0) 3 (İleri Mühendislik Matematiği I) Systems of linear equations; linear vector spaces; theory of matrices and the eigenvalue problem; multivariable differential calculus; ordinary differential equations; vectors in R^3; vector field theory, Fourier series and Fourier transform; Laplace transform; calculus of variations. ME 502 Advanced Engineering Mathematics II (3+0+0) 3 (İleri Mühendislik Matematiği II) Partial differential equations; Laplace, diffusion, and wave equations; Bessel and Legendre functions; integral equations; functions of a complex variable; conformal mapping; complex integral calculus; series expansion and residue theorem. ME 503 Mechanics of Continua I (4+0+0)I or II:4 (Sürekli Ortamlar Mekaniği I) Vectors, matrix algebra, tensor analysis. Deformation and strain tensors. Length, angle, area and volume changes. Kinematics of motion, mass, momentum, moment of momentum, and energy. Fundamental axioms of mechanics. Stress; thermodynamics of continuous media. Constitutive equations; ideally elastic solids. Stokesian fluids. ME 511 Principles of Materials Science and Engineering (4+0+0)I or II:4 (Malzeme Bilimi ve Mühendisliği Prensipleri) Atomic bonding and crystal structure; imperfections in crystals; x-ray and electron diffraction; thermodynamics of crystals; kinetics; transport in materials; phase transformations; annealing processes; deformation and fracture of materials; examples of technological materials. ME 512 Principles of Manufacturing Processes (3+0+0)I or II:3 (İmalat Süreçleri Esasları) Fundamentals of production and processing of metallic, ceramic and polymeric materials. Manufacturing processes based on heating/cooling. Casting techniques. Near net shape processes. Principles of metal forming. Thermomechanical treatment. Surface modification. ME 521 Engineering Design (3+0+0)I or II:3 (Mühendislik Tasarımı) Nature and properties of materials; advanced topics of strength of materials; analysis of composite, honeycomb and reinforced materials; pressure vessel design; residual stresses, thermal stresses; failure theories, beyond the elastic range; buckling; shock; impact and inertia. ME 523 Elasticity (3+0+0)I or II:3 Cartesian tensor notation. Analysis of strain, components and compatibility of strain. Analysis of stress; definitions and components of stress; equations of equilibrium. Constitutive equations, generalized Hook’s law; governing equations of elasticity. Plane strain and plane stress; problems, some examples of 2-D problems of elasticity. Energy principles. Sample problems of applied ME 530 Advanced Dynamics (3+0+0)I or II:3 (İleri Dinamik) Kinematics of rigid body motion. Coordinate transformations. Rigid body dynamics. Euler’s equations of motion. Eulerian angles. Motion under no force. Lagrange equations and their first integrals. Hamilton’s equations. Applications to mechanical engineering systems. ME 537 State Space Control Theory (3+0+0)I or II:3 (Durum Uzayı Kontrol Kuramı) State space representation of systems. Dynamic response from state equations. Stability, controlability and observability. Canonical forms, control with state feedback. Pole placement. Observer-based controllers. Reference input tracking. Introduction to optimal control and Lyapunov stability. Example applications. ME 551 Advanced Fluid Mechanics (4+0+0)I:4 (İleri Akışkanlar Mekaniği) Dynamics of motion, constitutive equations. lncompressible flows; potential flows, wing theory; waves. Compressible flows; thermodynamics of flow; two dimensional potential flows, theory of small perturbations; shock waves. Viscous flows; some exact and approximate solutions of Navier-Stokes equations. ME 561 Conduction Heat Transfer (3+0+0) 3 (İletim ile Isı Transferi) Steady and unsteady heat conduction involving various boundary conditions. Methods of formulation. Analytical solutions and approximate methods. ME 579 Graduate Seminar (0+1+0) 0 (Lisansüstü Seminer) The widening of students’ perspectives and awareness of topics of interest to mechanical engineers through seminars offered by faculty, guest speakers and graduate students ME 581, 582, 583, 584, 585, 586, 587, 588, 589 Special Topics (3+0+0) 3 (Özel Konular) Special Topics of current interest in mechanical engineering selected to suit the individual interests of the students and faculty in the department. The course is designed to give the student of advanced level an opportunity to learn about the most recent advances in the field of mechanical engineering. ME 591, 592, 593, 594, 595, 596 Special Studies (3+0+0) 3 (Özel Çalışmalar) Study of special subjects not covered in other courses at the graduate level. ME 597, 598 Mechanical Engineering Seminars (1+0+0) 1 (Makina Mühendisliği Seminerleri) Subjects and speakers to be arranged. ME 599 Guided Research (0+4+0) 0 (Yönlendirilmiş Çalışmalar) Research in the field of Mechanical Engineering, supervised by faculty. ME 602 Mechanics of Continua II (3+0+0)I or II:3 (Sürekli Ortamlar Mekaniği II) Constitutive equations; thermomechanical materials, elastic materials. Stokesian fluids. Elasticity, fluid dynamics, thermoelasticity, visco-elasticity. Linear and nonlinear physical interactions in continuous media. Selected problems of practical importance in engineering disciplines. ME 610 Finite Elements (3+0+0) 3 (Sonlu Elemanlar) Strong and weak statements of boundary value problems. The concept of finite element discretization and finite element interpolatory schemes. The isoparametric concept. Programming techniques for numerically integrated finite elements. Implementation of finite element model and solution methods. Preprocessing and postprocessing. Time-stepping algorithms and their implementation approximation errors in the finite element method and error analysis. ME 613 Deformation of Engineering Materials (3+0+0):3 (Mühendislik Malzemelerinin Sekil Degistirmesi) Fundamentals of the mechanical behavior of materials. Elements of dislocation theory. Plastic deformation of crystalline materials. The relationship between microstructure and mechanical behavior at ambient and elevated temperatures ME 614 Materials Processing (3+0+0):3 (Malzeme Üretimi) Control of microstructure and alteration of material properties. Heat treatment of steel. Precipating hardening. Shape memory alloys. Processing of electronic and magnetic materials. Processing of glasses. Powder metallurgy. ME 618 Mechanical Behavior of Materials (3+0+0)I or II:3 (Malzemelerin Mekanik Davranisi) Treatment of elastic, plastic and creep deformation under steady and cyclic loads. Emphasis on approximate solutions which enable the prediction of service performance from simple tests. Failure due to fatigue, creep rupture and plastic instability. Treatment of fracture from engineering point of view. ME 620 Fracture (3+0+0)I or II:3 Stress analysis of cracked members; applications of linear elastic fracture mechanics; experimental determination of fracture toughness; microstructure aspects of fracture toughness. Fracture prediction beyond linear elastic range: the transition temperature approach, crack opening displacement, J-integral. Fatigue crack initiation, propagation and stress corrosion cracking. ME 622 Advanced Vibrations (3+0+0) 3 (Ileri Titresimler) Vibratory response of multi-degree-of-freedom systems, matrix formulation, concepts of impedance, frequency response, and complex mode shapes. Nonlinear vibrations, parametric resonance. Vibration of elastic bodies. Modal analysis. ME 625 Optimum Structural Design (3+0+0) 3 (En İyi Yapısal Tasarım ) Basic concepts of design optimization: classical techniques in structural optimization (differential calculus, variational calculus, Lagrange multipliers); Karush-Kuhn-Tucker conditions. Application of linear and nonlinear programming to structural problems. Advanced topics in structural optimization. ME 626 Mechanics of Composite Materials (3+0+0):3 (Kompozit Malzemelerin Mekaniği) Types of composite materials; matrix materials, thermosets, thermoplastics, fiber materials. Effective moduli; rule of mixtures. Constitutive relation for anisotropic materials. Laminates; constitutive relations, transformation equations. Strength and failure criteria. Classical theory of laminated plates; governing relations, higher order theories, energy methods. Cylindrical bending and vibration of laminated plates. ME 631 Engineering Analysis (4+0+0)I or II:4 (Mühendislik Analizi) Planning and design of project of a comprehensive character requiring the correlation of principles and procedures drawn from a variety of areas in engineering and related branches of science. ME 632 Approximate Solution Techniques (3+0+0)I or II:3 (Yaklaşık Çözüm Yöntemleri) Method of weighted residuals; boundary value, eigenvalue and initial value problems in heat and mass transfer. Application to fluid mechanics, chemical reaction systems, convective instability problems. Variational principles in heat and mass transfer. Convergence and error bounds. ME 634 Robotics (3+0+0) 3 (Robot Sistemleri) Fundamental aspects of robotics and type of robots. Rotation matrices. Homogeneous transformations. Direct kinematics. Inverse kinematics. Jacobean matrix. Dynamic force analysis via Newton-Euler formulation. Motion equations via Lagrangian formulation. Trajectory planning. Control methods of manipulators. ME 636 System Modeling and Identification (3+0+0) 3 (Sistem Modelleme ve Tanımlama) Systems and models. Modeling of complex systems. Lagrange equations. Bond graphs. System identification. Estimation from transient response. Spectra and frequency functions. Least squares estimation. Parameter estimation in dynamic models. Model validation. ME 641 Wave Propagation (3+0+0)I or II:3 (Dalga Yayılması) Basic equations of elastodynamics, methods of solutions. Navier's equations. Selected problems in one and two space dimensions. impact problems, explosion, reflection, refraction, Rayleigh surface waves, and various other selected problems of practical importance in diverse engineering disciplines. ME 652 Viscous Flow Theory (3+0+0)I or II:3 (Viskoz Akış Kuramı) Equation of Motion for Viscous flow Exact solutions of Navier-Stokes equations. Creeping flow: Stokes and Oseen solutions, lubrication theory. Boundary layer theory: similar solutions, approximate methods of solution, computer methods of solution, stability, turbulent boundary layers. Introduction to three-dimensional compressible boundary layer flows. ME 653 Turbulent Flow Theory (3+0+0)I or II:3 (Türbülanslı Akış Kuramı) Basic concepts. Scales of time, velocity, space. Time averaging of fundamental equations.Turbulent flow theories and models. Dynamics of turbulence. Turbulent pipe, boundary layer and force shear flows turbulent transport. Statistical description of turbulence. Spectral dynamics. ME 654 Gas Dynamics (3+0+0)I or II:3 (Gaz Dinamigi) Basic equation of compressible flow. Wave propagation in compressible media. One dimensional compressible flow. Equations of motion for multidimensional flow. Methods for solution. Oblique shock. Introduction to hypersonic flow. Introduction to rarefied gas dynamics. ME 655 Advanced Turbine Design (3+0+0) 3 (İleri Türbin Tasarımı) Review of gas dynamics and thermodynamics. Velocity triangles. Two dimensional flow in turbine stages. Turbine cascades. Calculation of design point efficiency of turbine stages using cascade data. Potential flow and methods of solution. Three dimensional design of turbines. Radial equilibrium theory. Off-design performance. Introduction to turbine cooling. ME 656 Computational Fluid Dynamics (3+0+0) 3 (Sayısal Akışkanlar Dinamiği) Fundamentals of computational fluid dynamics and high performance computing: basic flow models; grid generation; discretization techniques. Analysis of linear and nonlinear systems; algorithm development; convective-diffusive systems; turbulence modeling; combustion modeling. Prerequisite: ME 551 ME 660 Advanced Thermodynamics (3+0+0) 3 (İleri Termodinamik) An advanced study of the first and second laws of thermodynamics and their application to engineering systems and flow processes. Equilibrium conditions. Thermodynamic potentials; systems of variable mass. Chemical equilibrium and thermodynamics of chemical reactions. Emphasis is placed on the relationship of thermodynamics to the broad fields of engineering and applied science. ME 662 Convective Heat Transfer (3+0+0)I or II:3 (Taşınım ile Isı Transferi) Basic equations of fluid flow. Differential and integral equations of the boundary layer. Forced convection in internal and external laminar flows. Momentum-heat transfer analogies for turbulent flow. Natural convection. ME 663 Radiation Heat Transfer (3+0+0)I or II:3 (Işınım ile Isı Transferi) Basic laws of thermal radiation. Radiation properties of solids and liquids. Exchange of thermal radiation between surfaces separated by transparent media; non-gray and non-diffuse surfaces. Gas radiation in enclosures. Radiation combined with conduction and/or convection. ME 664 Two-Phase Heat Transfer (3+0+0)I or II:3 (İki Fazlı Isı Transferi) Nucleation and bubble growth in boiling. Pool boiling heat transfer. Critical heat flux. Film boiling. Kinematics and dynamics of adiabatic two-phase flow. Two phase flow with boiling and/or evaporation. Stability of two-phase flows. Condensation. ME 681, 682, 683, 684, 685, 686, 687, 688, 689 Special Topics (3+0+0) 3 (Özel Konular) Advanced special topics of current interest in mechanical engineering selected to suit the individual interests of the students and faculty in the department. The course is designed to give the student of advanced level an opportunity to learn about the most recent advances in the field of mechanical engineering. ME 690 M.S. Thesis (Yüksek Lisans Tezi) ME 691, 692, 693, 694, 695, 696 Special Studies (3+0+0) 3 (Özel Çalışmalar) Study of special subjects not covered in other courses at the graduate level. ME 697, 698 Mechanical Engineering Seminars (1+0+0) 1 (Makina Mühendisliği Seminerleri) Subjects and speakers to be arranged. ME 699 Guided Research (2+0+4) 4 (Yönlendirilmiş Araştırmalar) Research in the field of Mechanical Engineering, by arrangement with members of the faculty; guidance of doctoral students towards the preparation and presentation of a research proposal. ME 69A Guided Research II (0+4+0) 0 (Yönlendirilmiş Çalışmalar II) Continued research in the field of Mechanical Engineering, supervised by faculty; preparation and presentation of a research proposal. ME 69B Guided Research III (0+4+0) 0 (Yönlendirilmiş Çalışmalar III) Continued research in the field of Mechanical Engineering, supervised by faculty; preparation and presentation of a research proposal. ME 69C Guided Research IV (0+4+0) 0 (Yönlendirilmiş Çalışmalar IV) Continued research in the field of Mechanical Engineering, supervised by faculty; preparation and presentation of a research proposal. ME 69D Guided Research V (0+4+0) 0 (Yönlendirilmiş Çalışmalar V) Continued research in the field of Mechanical Engineering, supervised by faculty; preparation and presentation of a research proposal. ME 790 Ph.D. Thesis (Doktora Tezi)
{"url":"http://www.me.boun.edu.tr/?q=content/graduate-courses-catalog","timestamp":"2024-11-12T14:07:51Z","content_type":"text/html","content_length":"43426","record_id":"<urn:uuid:e4a5e776-6149-4471-8d62-e21f9f4e92dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00400.warc.gz"}
Publications by Year: 2019 Androulidakis I, Skandalis G A Baum-Connes conjecture for singular foliations . Annals of K-theory [Internet]. 2019;4(4):561-620. Publisher's VersionAbstract We consider singular foliations whose holonomy groupoid may be nicely decomposed using Lie groupoids (of unequal dimension). We show that the Baum-Connes conjecture can be formulated in this setting. This conjecture is shown to hold under assumptions of amenability. We examine several examples that can be described in this way and make explicit computations of their K-theory.
{"url":"http://scholar.uoa.gr/iandroul/publications/year/2019","timestamp":"2024-11-14T17:45:54Z","content_type":"text/html","content_length":"23358","record_id":"<urn:uuid:919004bc-2dbe-4ea9-a48b-1fffd1d54967>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00621.warc.gz"}
Slava Krushkal (University of Virginia), Triangle Topology Seminar - Department of Mathematics Slava Krushkal (University of Virginia), Triangle Topology Seminar October 2, 2017 @ 4:30 pm - 5:30 pm Location: SAS 1102 at North Carolina State University, 4:30 pm Title: Flow and Yamada polynomials, planar triangulations, and TQFT Abstract: In the 1960s Tutte observed that the value of the chromatic polynomial of planar triangulations at (golden ratio +1) obeys a number of remarkable properties. In this talk I will explain how TQFT gives rise to a conceptual framework for studying planar triangulations. I will discuss several extensions of Tutte’s results and applications to the structure of the chromatic and flow polynomials of graphs, and the Yamada polynomial of spatial graphs. This talk is based on joint works with Ian Agol and with Paul Fendley.
{"url":"https://math.unc.edu/event/slava-krushkal-university-of-virginia-triangle-topology-seminar/","timestamp":"2024-11-02T12:29:18Z","content_type":"text/html","content_length":"110870","record_id":"<urn:uuid:c4103e92-6324-4dff-93e2-2a0f3fb8d307>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00648.warc.gz"}
Finding Trig Ratios Using the Unit Circle Notes | Math = Love Finding Trig Ratios Using the Unit Circle Notes I put together these finding trig ratios using the unit circle notes for my trigonometry students to write in their interactive notebooks. We completed these notes after gluing in a copy of the unit circle and a chart of trig ratios in the first quadrant. More Activities and Resources for Teaching the Unit Circle
{"url":"https://mathequalslove.net/finding-trig-ratios-using-the-unit-circle-notes/","timestamp":"2024-11-08T11:04:08Z","content_type":"text/html","content_length":"237009","record_id":"<urn:uuid:bbe31f13-cea9-43bf-8293-24867c5557e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00282.warc.gz"}
Frank Solutions for Class 10 Maths Chapter 14 Symmetry avail free PDF Frank Solutions for Class 10 Maths Chapter 14 Symmetry are provided here. In order to obtain a good academic score in Mathematics, the important thing to be done by the students is to solve the questions of each and every exercise. Frank Solutions for Class 10 Maths are prepared by a team of experts who work with their full potential to help the students excel in their exams. Step-by-step solutions to each question are given in simple, understandable language for the students to grasp concepts effortlessly. Chapter 14 – Symmetry, a figure is said to have line symmetry, if, by folding the figure along a line, the left and right parts of it coincide exactly. The line is called the line (or axis) of the symmetry of the figure. Access Answers to Frank Solutions for Class 10 Maths Chapter 14 Symmetry 1. Construct an isosceles triangle whose equal sides are 7 cm each and the base side is 5 cm. Draw all its lines of symmetry. Steps of construction: 1. Draw a line segment QR = 5 cm. 2. With Q as a centre and radius 7 cm, draw an arc. 3. With R as a centre and radius 7 cm, draw another arc, cutting the previous arc at P. 4. Join PQ and PR. Then, ΔPQR is the required isosceles triangle. 5. Now, draw an angle bisector with P as the centre and meeting QR at S. 6. PS is the perpendicular bisector of QR, and PQ is equal to PR. Therefore, PS is the line of symmetry. Isosceles triangle has only one line of symmetry. 2. Construct a triangle ABC in which each side measures 5.8. Draw all the possible lines of symmetry. Steps of construction: 1. Draw a line segment AC = 5.8 cm. 2. With A as a centre and radius 5.8 cm, draw an arc. 3. With C as a centre and radius 5.8 cm, draw another arc, cutting the previous arc at B. 4. Join AB and CB. Then, ΔABC is the required equilateral triangle. 5. Now, draw angle bisectors with B as the centre and meeting AC at P, with A as the centre and meeting BC at Q, with C as the centre and meeting AB at R. 6. Therefore, BP, AQ, CR are the line of symmetry. 3. Construct a parallelogram PQRS in which QR = 5.4 cm, SR = 6.0 cm and ∠Q = 60^o. Draw its lines of symmetry, if possible. Steps of construction: 1. Draw a line segment QR = 5.4 cm. 2. At Q, draw a ray making an angle of 60^o with QR 3. Along a ray, set off QP = 6 cm. 4. With P as a centre and radius 5.4 cm, draw an arc. 5. With R as a centre and radius 6 cm, draw another arc, cutting the previous arc at S. 6. Join PS and RS. Then, PQRS is the required parallelogram. 7. So, QS and PR intersect each other at O. Therefore, there is no line of symmetry in parallelogram PQRS. 4. Construct a square of side 4.8 cm and draw all its lines of symmetry. Steps of construction: 1. Draw a line segment PQ = 4.8 cm. 2. At P and Q draw perpendiculars PM and QN. 3. With P as centre and radius equal to 4.8 cm, cut PM at S. 4. With Q as centre and radius equal to 4.8 cm, cut QN at R. 5. Join RS, so PQRS is the required square. 6. Now, join the diagonals of square PR and QS. 7. Then, draw perpendicular bisectors of PQ and PS. Therefore, the diagonals and perpendicular bisectors are the lines of symmetry of square PQRS. 5. Construct a regular hexagon of side = 3.8 cm and draw all its lines of symmetry. Steps of construction: 1. Draw a line segment LM = 3.8 cm. 2. At L and M, draw a rays making an angle of 120^o each, then cut off LQ and MN for 3.8 cm. 3. At N and F, draw a rays making an angle of 120^o each, then cut off NO and QP for 3.8 cm. 4. Join OP, so LMNOPQ is the required regular polygon. 5. Now, join the diagonals of regular hexagon LO, MP and NQ. 6. Then, draw perpendicular bisectors of LM, NO and PQ. Therefore, the diagonals and perpendicular bisectors are the lines of symmetry of regular hexagon LMNOPQ. 6. Construct a rhombus ABCD with AB = 5 cm and AC = 8 cm. Draw it lines of symmetry. Steps of construction: 1. Draw a line segment AB = 5 cm. 2. With B as a centre and radius 5 cm, draw an arc. 3. With A as a centre and radius 8 cm, draw another arc, cutting the previous arc at C. 4. Join AC and BC, then we get ΔABC the isosceles triangle. 5. Again with A as a centre and radius 5 cm, draw an arc. 6. With C as a centre and radius 5 cm, draw another arc, cutting the previous arc at D. 7. Join AD and CD, then we get ABCD the required rhombus. 8. Now, join the diagonal of rhombus BD. Therefore, the diagonals are the lines of symmetry of rhombus ABCD. 7. Construct an isosceles right-angled triangle, having hypotenuse = 8 cm. Draw its lines of symmetry. Steps of construction: 1. Draw a line segment BC = 8 cm. 2. Then draw its perpendicular bisector, which intersects BC at D. 3. With D as a centre and BD or CD radius, draw a semi-circle. 4. Now produce the perpendicular bisector of BC, which intersects the circle at A 5. Join AB and AC, so ΔABC is the required isosceles right angled triangle. Therefore, perpendicular bisector hypotenuse BC is the lines of symmetry of isosceles right angled triangle. 8. Construct a ΔABC in which BA = BC = 6 cm and AC = 4.5 cm. Taking AC as line of symmetry, obtain a point D to form a quadrilateral ABCD. Name the figure ABCD. Steps of construction: 1. Draw a line segment AC = 4.5 cm. 2. With A as a centre and radius 6 cm, draw an arc. 3. With C as a centre and radius 6 cm, draw another arc cutting the previous arc at B. 4. Join AB and BC, Then, ΔABC is the isosceles triangle. As per the condition given in the question, 5. Taking AC as line of symmetry. 6. With A as a centre and radius 6 cm, draw an arc. 7. With C as a centre and radius 6 cm, draw another arc cutting the previous arc at D. 8. Join AD and CD. Therefore, ABCD is the required quadrilateral i.e. rhombus. 9. Construct a ΔPQR in which ∠R = 90^o, PQ = 5.2 cm and QR = 2.6 cm. Complete the figure taking PR as the line of symmetry and name the figure. Steps of construction: 1. Draw a line segment QS = 2.6 cm. 2. With Q as a centre and radius 5.2 cm, draw an arc. 3. At R draw a perpendicular to QR to meet at P. 4. Join PQ, so PQR is the required triangle. As per the condition given in the question, 5. Taking PR as the line of symmetry. 6. Now, produce QR to S i.e. RS = 2.6 cm 7. With Q as a centre and radius 5.2 cm, draw an arc at p. 8. Join PS, so PRS is the triangle. Therefore, ΔPQS is the required triangle, and also it is an equilateral triangle. 10. Take a graph paper and mark the points A(2, 0), B(2, 8) and C(5, 4) on it. Taking AB as the line of symmetry, obtain and write the co-ordinates of point D. Complete the quadrilateral ABCD and give its geometrical name. Steps for marking the points on graph: 1. As per the given data plot the points A (2, 0), B(2, 8) and C(5, 4) on the graph. 2. Join AB and BC. 3. Condition given the question, taking AB as the line of symmetry. 4. So, point D symmetrical about AB is a point with vertices x = -1 and y = 4 (because from point A to C in vertices x there are 3 units and in y there are 4 units) 5. Now plot point D(-1, 4) 6. Join BD. Therefore, the obtained figure is an arrow. 11. Take a graph paper and mark the points P(2, 1), Q(7, 1) and R(7, 5). Taking QR as the line of symmetry, obtain and write the co-ordinates of point S. Steps for marking the points on graph: 1. As per the given data plot the points P (2, 1), Q(7, 1) and R(7, 5) on the graph. 2. Join PR and PQ. 3. Condition given the question, taking QR as the line of symmetry. 4. So, point S symmetrical about QR is a point with vertices x = 12 and y = 1 (because from point Q to P in vertices x there are 5 units and in y there are 1 unit) 5. Now plot point S(12, 1) 6. Join SQ and SR. Therefore, the obtained figure is an isosceles triangle. 12. A(8, 2) and B(6, 4) are the vertices of a figure which is symmetrical about x = 6 and y = 2. Complete the figure and give the geometrical name of the figure. Steps for marking the points on graph: 1. As per the given data plot the points A (8, 2) and B(6, 8) on the graph. 2. Then plot point M whose vertices are x = 6 and y = 2. 3. Condition given the question, taking P as the point of symmetry. 4. So, point symmetric to A(8, 2) in the line x = 6 is C(4, 2) 5. Point symmetric to B(6, 4) in the line y = 2 is D(6, 0) 6. Now join AP, PC, BP and PD By using the distance formula, AD = √((8 – 6)^2 + (2 – 0)^2) = √(2^2 + 2^2) = √(4 + 4) = √8 Then, AB = √((8 – 6)^2 + (2 – 4)^2) = √(2^2 + (-2^2)) = √(4 + 4) = √8 So, from Pythagoras theorem BD^2 = AD^2 + AB^2 4^2 = (√8)^2 + (√8)^2 16 = 8 + 8 16 = 16 Therefore, ∠BAD = 90^o Hence, it is clear that AB = BC = CD = DA, AC and BD bisect each other at right angles, so ABCD is a square. 13. A(2, 2) and B(5, 5) are the vertices of a figure which is symmetrical about x – axis. Complete the figure and give its geometrical name. Steps for marking the points on graph: 1. As per the given data plot the points A (2, 2) and B(5, 8) on the graph. 2. Condition given the question, a figure which is symmetrical about x – axis. 3. So, point symmetric to A(2, 2) in the line x – axis is C(2, -2) 4. Point symmetric to B(5, 5) in the line y = 2 is D(5, -5) 5. Now join AB, AC, CD and BD Therefore, the obtained figure is a trapezium. 14. A(4, 1), B(2, 3) and C(5, 6) are the vertices of a figure which is symmetrical about x = 7. Complete the figure and give the geometrical name of the figure if any. Steps for marking the points on graph: 1. As per the given data plot the points A (4, 1), B(2, 3) and C(5, 6) on the graph. 2. Condition given the question, a figure which is symmetrical about x = 7. 3. So, point symmetric to A(4, 1) about x = 7 is D(10, 1) 4. Point symmetric to B(2, 3) about x = 7 is E(12, 3) 5. Point symmetric to C(5, 6) about x = 7 is F(9, 6) 5. Now join AB, AC, BC, AD, DE, DF, EF and CF Therefore, the obtained figure is a trapezium ADCF with two equal scalene triangles i.e. ΔABC and ΔDEF are attached to it. 15. In each of the following figures, the line of symmetry has been drawn with a dotted line. Identify the corresponding sides and the corresponding angles about the line of symmetry. In the given figure, The corresponding sides about the line of symmetry is, PS = SR, PQ = QR Then, corresponding angles bout line of symmetry is ∠SPQ = ∠SRQ In the given figure, The corresponding sides about the line of symmetry is, AB = AD, BC = CD Then, corresponding angles bout line of symmetry is ∠ABC = ∠ADC In the given figure, The corresponding sides about the line of symmetry is, AB = BC, AD = DC Then, corresponding angles bout line of symmetry is ∠DAB = ∠DCB In the given figure, The corresponding sides about the line of symmetry is, PQ = PU, QR = UT Then, corresponding angles bout line of symmetry is ∠PQR = ∠PUT, ∠QRT = ∠UTR Leave a Comment
{"url":"https://byjus.com/frank-solutions/class-10-maths-chapter-14/","timestamp":"2024-11-04T04:55:12Z","content_type":"text/html","content_length":"587015","record_id":"<urn:uuid:fc7c7255-8a8b-4109-a8ab-bfb8927a7696>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00892.warc.gz"}
Inconsistent Conditional Formatting Hey Community! I'm wondering if someone can help me out with why conditional formatting is not giving me consistent results. As shown below, I would like to see red text when the Unbilled column is not zero. One section is returning red text for $0.00 and another section is leaving it black (which is what I want). The 3 cells circled have the exact same formula in each section. When I pick my criteria from a list, I'm given multiple $0.00 options...and the more I work on the sheet, the more the list of zero dollars grows. Any suggestions?? Best Answers • Ah-ha! Figured it out! I increased the number of decimal places and that value is not actually $0! I'll have to change my criteria to greater than 1 and another for less than -1. Thanks to you both for being a sounding board to help me work through it! • That's what I was thinking. Glad you got it sorted. You could also wrap each formula in a ROUND function and specify 2 decimal places so you don't have to worry about back-end data and can leave it at $0.00. • @Amanda M what happens when you change the font color of the top zeros to black? Is your zero column format set to Currency $? (Highlight the column and click the $ in the icon menu at the top) Also, in the first step of your conditional format, click the link that says "define custom criteria" and set it to "is not equal to" and enter 0 in the field. I haven't seen the duplicated zeros in the list before. Smartsheet Tips and Tricks for Beginners and Advanced on LinkedIn and YouTube Come Say Hello! • That's how I have it set up...Currency column is set, and criteria is not equal to zero. And the font colour is automatic...so black. I've tried moving the conditional formatting priority up and it doesn't make a difference either. *I should note that I'm not a beginner user, I just have a beginner profile because I started with a new company recently ;) • @Amanda M Thanks for the note! I'm thinking the other Conditional Formats are overriding. (i.e. the one above says if Hierarchy = 4, the entire row is black font. ) Have you tried moving it to the very top of Conditional Formatting? Smartsheet Tips and Tricks for Beginners and Advanced on LinkedIn and YouTube Come Say Hello! • I tried that too. Moving right to the top doesn't help either. The grouping of cells are literally exactly the same (formula, hierarchy, etc) Oh wait - the only difference is that the top ones showing red are $0.00 because of numbers being subtracted to get the value. The ones showing black are a result of equation $0.00-$0.00 That seems to be consistent with the rest of the page as well. • Does your formula have a +"" or something similar? This would convert the number into text which Cond Format would consider as not equal to zero. Other than Cond Formatting step on itself, which you tested, I'm at a loss. Smartsheet Tips and Tricks for Beginners and Advanced on LinkedIn and YouTube Come Say Hello! • All the "x" & "z" values below utilize an IFERROR formula that returns "" with an error. But it's the same formula in the top group as the bottom (from original post) so I'm not sure why there would be a difference in formatting. The same happens in this group, and it only seems to be if there is a mathematically equation involved. @Andrée Starå @Paul Newcome any thoughts? • What is the exact formula you are using? • For "x" - the bottom 2 values (530,581.51 & 33,161.34) are =IFERROR([Total Sell]2 * [*Delivered/Installed Payment]# * [% to Invoice (based on completion)]@row * [% of Material cost]@row, "") and the top value is =IFERROR(SUM(CHILDREN()), "") For "y" = the top value is =SUM(CHILDREN()) and the bottom 2 yellow cells are text entries. If I put a $0 in there it doesn't change the outcome. For "z" - the bottom 2 values are =IFERROR([Total to Invoice]@row - [Actual Invoiced]@row, "") and the top is =[Total to Invoice]@row - [Actual Invoiced] I removed all the IFERRORs and it's still producing the same result. • Ah-ha! Figured it out! I increased the number of decimal places and that value is not actually $0! I'll have to change my criteria to greater than 1 and another for less than -1. Thanks to you both for being a sounding board to help me work through it! • I'll have to put that one in my knowledge bank for sure! Glad you got it figured out. Smartsheet Tips and Tricks for Beginners and Advanced on LinkedIn and YouTube Come Say Hello! • That's what I was thinking. Glad you got it sorted. You could also wrap each formula in a ROUND function and specify 2 decimal places so you don't have to worry about back-end data and can leave it at $0.00.
{"url":"https://community.smartsheet.com/discussion/98468/inconsistent-conditional-formatting","timestamp":"2024-11-12T12:09:03Z","content_type":"text/html","content_length":"457044","record_id":"<urn:uuid:9ca2be66-916e-4259-a57f-254534ee07ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00836.warc.gz"}
Convert 0.24 decimeters to inches 0.24 (Decimeters, dm) - unit for measure distances, lengths, heights and widths in Metric Units. One decimeter is equal to 3.93700787402 inches. On this page we consider in detail all variants for convert 0.24 decimeters to inches and all opportunities how to convert decimeters with usage comprehensive examples, related charts and conversion tables for decimeters. Here you will find all the ways for calculating and converting decimeters in inches and back. If you want to know how many inches are in 0.24 decimeters you can obtain the answer in several ways: • convert 0.24 decimeters using the online conversion on this page; • calculate 0.24 decimeters using calculator InchPro from our software collection for offline converting units; • apply arithmetic calculations and conversions for 0.24 decimeters outlined in this article. To indicate decimeters, we will use the abbreviation "dm" to indicate inches we will use the abbreviation "in". All options to convert "dm" to "in" we will look more detail in individual topics below. So, we're starting explore all avenues of transformation zero point two four decimeters and conversions between decimeters and inches. Convert 0.24 dm to inches by online conversion To convert 0.24 decimeters into inches we consider using the online converter on the web page. The online converter has very simple interface and will help us quickly convert our decimeters. The online decimeter converter has an adaptive shape for different devices and therefore for monitors it looks like the left and right input fields but on tablets and mobile phones it looks like the top and bottom input fields. If you want convert any decimeter values, you must only enter required value in the left (or top) input field and automatically you get the result in the right (or bottom) field. Under each field you see a more detailed result of the calculation and the coefficient of 3.93700787402 which is used in the calculations. The big green string, under the input fields - "0.24 Decimeters = 0.944881889764 Inches" further enhances and shows final result of the conversion. Calculator for converting units of measurement works symmetrically in both directions. If you enter any value in any field, you will get the result in the opposite field. Clicking on the arrow icons between the input fields you can swap the fields and perform other calculations. We are all made for easily converting any values between decimeters and inches. If you came to this page, you already see the result work of the online calculator. In the left (or top) field you see the value of 0.24 "dm" on the right (or bottom) box you see the value of result is equal to 0.944881889764 "in". Write briefly: 0.24 "dm" = 0.944881889764 "in" Convert 0.24 dm in inches by conversion tables We have briefly reviewed how to use the unit Converter on this page, but this is only part of the features of the page service. We made an interesting possibility to compute all possible values for units of measure in the lower tables. These tables are used to convert basic units of measurement: Metric conversion chart, US Survey conversion chart, International conversion chart, Astronomical conversion chart. Please, find these 4 tables at the bottom of this page they have the headers: • All conversions of 0.24 decimeters in the Metric System Units • All conversions of 0.24 decimeters in the US Survey Units • All conversions of 0.24 decimeters in the International Units • All conversions of 0.24 decimeters in the Astronomical Units If you enter a test number in any field of web calculator (field of decimeters or inches it doesn't matter), for example 0.24 as it is now, you not only get the result in 0.944881889764 inches but also a huge list of computed values for all unit types in the lower tables. Without doing your own search and making the transition to other pages of the website, you can use our conversion tables to calculate all the possible results for main units. Try delete and again entering into the calculator a value of 0.24 decimeters and you will see that all the conversion results in the lower tables will are recalculated for 0.24 (dm). The calculated data in the conversions tables change dynamically and all transformations are performed synchronously with converting decimeters in the page How many inches are in 0.24 decimeters? To answer this question, we start with a brief definition of decimeter and inch, and their purpose. The decimeter and inch units of length which can be converted one to another using a conversion factor which is equal to 3.93700787402. This coefficient answers the question how many inches are equivalent to one decimeter. The value of this multiplier determines the basic value to calculate all other lengths, sizes and other transformations for these units (decimeter and inch), it is enough to know the value, i.e. to remember that 1 decimeter = 3.93700787402 (in). Knowing the number of inches in one decimeter by simple multiplication we can calculate any values. Let's do a simple calculation using the multiplication: 0.24 (dm) × 3.93700787402 = 0.944881889764 (in) Thus it is seen that after multiplying by the coefficient we get the following relationship: 0.24 Decimeters = 0.944881889764 Inches How much is 0.24 decimeters in inches? We have already seen how to convert these two values and how change decimeters to inches. So in summary, you can write all possible results that have the same meaning. 0.24 decimeters to inches = 0.944881889764″ (in) 0.24 dm in inches = 0.944881889764″ (in) 0.24 dm into inches = 0.944881889764″ (in) 0.24 dm in = 0.944881889764″ (in) 0.24 dm is 0.944881889764″ inches zero point two four decimeters = 0.944881889764 inches For a detailed reviewing of similar numbers, visit next pages: How to convert 0.24 decimeters into inches? All rules and methods. To convert 0.24 decimeters into inches we can use many ways: • calculation using the formula; • calculation using the proportions; • calculation using the online converter of the current page; • calculation using the offline calculator "InchPro Decimal". Calculating 0.24 dm to inches formula for lengths and values. In the calculations for decimeters and inches, we will use the formula presented below that would quickly get the desired result. Y (dm) × 3.93700787402 = X (in) Y - value of decimeters X - result in inches That is, you need to remember that 1 decimeter is equal 3.93700787402 inches, and when converting decimeters just multiply the number of decimeters (in this case 0.24 decimeters) by a factor of 3.93700787402. For example, we transform the set of values 0.24 dm, 1.24 dm, 2.24 dm, 3.24 dm, 4.24 dm into inches and get the result in the following examples: 0.24 (dm) × 3.93700787402 = 0.944881889764 (in) 1.24 (dm) × 3.93700787402 = 4.88188976378 (in) 2.24 (dm) × 3.93700787402 = 8.8188976378 (in) 3.24 (dm) × 3.93700787402 = 12.7559055118 (in) 4.24 (dm) × 3.93700787402 = 16.6929133858 (in) In all variants we multiplied the all decimeters in range from 0.24 (dm) to 4.24 (dm) with the same ratio of 3.93700787402 and got the correct results in calculations. The calculation using mathematical proportions to convert 0.24 decimeters into inches To calculate the proportions you need to know the reference value in inches for 1 decimeter and according to the rules of arithmetic we can calculate any value in inches for any length in decimeters. See the next examples. We form the proportion for 3 values our decimeters 0.24 dm, 1.24 dm, 2.24 dm and calculate results values in inches: 1 (dm) — 3.93700787402 (in) 0.24 (dm) — X (in) Solve the above proportion for X to obtain: X = 0.24(dm) × 3.93700787402(in) ÷ 1(dm) = 0.944881889764(in) 1 (dm) — 3.93700787402 (in) 1.24 (dm) — X (in) Solve the above proportion for X to obtain: X = 1.24(dm) × 3.93700787402(in) ÷ 1(dm) = 4.88188976378(in) 1 (dm) — 3.93700787402 (in) 2.24 (dm) — X (in) Solve the above proportion for X to obtain: X = 2.24(dm) × 3.93700787402(in) ÷ 1(dm) = 8.8188976378(in) All proportions used reference value 1 decimeter = 3.93700787402 inches Calculation of values using decimeter online calculator on the page You can use our basic universal online converter on the current web page and convert any your length dimensions and distances between decimeters and inches in any directions free and fast. Currently, the field for decimeters contains the number 0.24 (dm) you can change it. Just enter any number into field for decimeters (for example any value from our set: 1.24, 2.24, 3.24, 4.24 decimeters or any other value) and get the fast result into field for inches. How to use the decimeter online calculator you can more detail read at this link manual for the calculator. For example, we take the 16 values into decimeters and will try to calculate the result values in inches. Also, we will use the web calculator (you can find it at the top of this page). In the set up a table in the left margin we write the value in decimeters in the right margin you see the values that you should obtain after calculation. You can check it right now, without leaving the site and make sure that the calculator works correctly and quickly. In all calculations, we used the ratio 3.93700787402 which helps us to get the desired values of computation results in inches. Please, see the results in the next table: Example of Work Decimeter Online Calculator with Calculation Results Decimeters Table Factor Inches 4720551 (dm) × 3.93700787402 = 18584846.4567 (in) 4720552 (dm) × 3.93700787402 = 18584850.3937 (in) 4720553 (dm) × 3.93700787402 = 18584854.3307 (in) 4720554 (dm) × 3.93700787402 = 18584858.2677 (in) 4720555 (dm) × 3.93700787402 = 18584862.2047 (in) 4720556 (dm) × 3.93700787402 = 18584866.1417 (in) 4720557 (dm) × 3.93700787402 = 18584870.0787 (in) 4720558 (dm) × 3.93700787402 = 18584874.0157 (in) 4720559 (dm) × 3.93700787402 = 18584877.9528 (in) 4720560 (dm) × 3.93700787402 = 18584881.8898 (in) 4720561 (dm) × 3.93700787402 = 18584885.8268 (in) 4720562 (dm) × 3.93700787402 = 18584889.7638 (in) 4720563 (dm) × 3.93700787402 = 18584893.7008 (in) 4720564 (dm) × 3.93700787402 = 18584897.6378 (in) 4720565 (dm) × 3.93700787402 = 18584901.5748 (in) 4720566 (dm) × 3.93700787402 = 18584905.5118 (in) Convert 0.24 decimeters with the use of calculator "InchPro Decimal" We're briefly describe the possibility for using our calculator for converting 0.24 decimeters. The calculator allows you to convert any value for lengths and distances not only in decimeters but also for all other units. Our conversion tables which we mentioned earlier are also included in the logic operation of the calculator and all these calculations you can get in one application if you download and install the software on your computer. Converter easily converts 0.24 "dm" for you in offline mode. All the details of the work of this application for conversion of the heights and widths, lengths, sizes and distances described in decimeters or other units of measurement you will find in menu "Software" of this site or by the link: InchPro Decimal. Please, also see the screenshots for acquaintance. Visual charts conversion of 0.24 decimeters. Many people can hardly imagine the relationship between decimeter and inch. In this picture, you can clearly see the ratio of these quantities to understand them in real life. The ratio of the lengths of the segments is retained on screens with any resolution as for large monitors as well as for small mobile devices. The graphical representation of scales for comparing values. The graph shows the relative values the decimeters in the form of rectangular segments of different lengths and colors. As well as the visual representation of 0.24 (dm) with the reference value in The graphs of the relationship between decimeters and inches are expressed in the following colours: • Green is the original length or distance in decimeters; • Blue color is the scale in decimeters; • Yellow color is the scale in inches. The scale may increase or decrease depending on the current number value on the page. The diagram shows the ratio between decimeters and inches for the same lengths and magnitude (see charts of the blue and yellow colors). Thu 14 Nov 2024
{"url":"http://inchpro.com/conversion/0.24-decimeters-to-inches/","timestamp":"2024-11-14T11:49:57Z","content_type":"text/html","content_length":"173929","record_id":"<urn:uuid:4e918c25-2e14-4b1e-8a13-e821374271e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00267.warc.gz"}
Shaping Tomorrows World The Shapingtomorrowsworld.org blog experienced unexpected and unresolvable technical difficulties on its original host several months ago. We have recreated the blog on this new host, using the WordPress software, which will provide a stable platform from here on. In the process, it has also been integrated with www.cogsciWA.com, Stephan Lewandowsky’s academic home page. All original posts published on Shapingtomorrowsworld.org will gradually be ported to the new host and will appear below over the next few weeks or months. Unfortunately, it would be prohibitively time consuming to transfer comments from the original content, so they will not become available here in the future. Blogging will resume once all the development and transfer tasks have been completed. Thank you for your patience until then. Familiarity-based processing in the continued influence of misinformation Advertisements in the U.S. for Listerine mouthwash falsely claimed that the product helped prevent or reduce the severity of colds and sore throats for more than 50 years. After a prolonged legal battle, the Federal Trade Commission directed Listerine to mount an advertising campaign to correct these deceptive claims. For 16 months, the company ran a $10,000,000 ad campaign in which the cold-related claims about Listerine were retracted. The campaign was only partially successful, with more than half of consumers continuing to report that the product’s presumed medicinal effects were a key factor in their purchasing decision. This real-life example from the 1970s underscores the difficulties of debunking of misinformation. Even a large budget, prime time TV exposure, and numerous repetitions may be insufficient to alter the public’s belief in false information. But this does not mean that debunking of misinformation is necessarily impossible: On the contrary, there are some known techniques that can help make a correction more effective, which John Cook and I summarized in the Debunking Handbook some time ago. One of the recommendations in the Debunking Handbook was to avoid the so-called Familiarity Backfire Effect. As we noted in the Handbook: To debunk a myth, you often have to mention it – otherwise, how will people know what you’re talking about? However, this makes people more familiar with the myth and hence more likely to accept it as true. Does this mean debunking a myth might actually reinforce it in people’s minds? To test for this backfire effect, people were shown a flyer that debunked common myths about flu vaccines. Afterwards, they were asked to separate the myths from the facts. When asked immediately after reading the flyer, people successfully identified the myths. However, when queried 30 minutes after reading the flyer, some people actually scored worse after reading the flyer. The debunking reinforced the myths. We therefore went on to recommend that communicators should avoid mentioning the myth altogether while correcting it. When seeking to counter misinformation, the best approach is to focus on the facts you wish to communicate.” Instead of saying “it’s a myth that vaccines cause autism” it is better to accurately state that “vaccinations save many lives.” Two recent studies published by Ullrich Ecker and colleagues and I took another empirical look at this familiarity backfire effect. In a nutshell, we did not find the effect in our experiments, which instead showed that repeating a myth while correcting it need not be harmful. The two studies have been discussed in previous posts by Tania Lombrozo here and by Briony Swire here. These results raise two related questions: First, is the cautionary discussion of the familiarity backfire effect in the Debunking Handbook wrong? Second, is the recommendation not to repeat a myth during debunking wrong? It would be premature to answer those questions in the affirmative. To understand why, we need to consider the theory that underlies the familiarity backfire effect. We discussed the effect in the Debunking Handbook not just because it had been shown to exist, but also because its existence is compatible with a lot of our knowledge about how memory works. A common assumption in memory research is that there are two separate classes of memory retrieval processes, known as strategic and automatic, respectively. Strategic memory processes allow for the controlled recollection of the information’s contextual details. Similar to the meta-data of a computer file, contextual details include information about the information itself, such as the information’s spatiotemporal context of encoding, its source, and its veracity. In contrast, automatic processes provide a fast indication of memory strength or familiarity of the information but little else. Automatic retrieval processes can therefore contribute to the continued influence of misinformation in two related ways. First, people’s truth judgments about a statement are known to be influenced by its familiarity. It follows that false information might be accepted as true just because it seems familiar. Second, when corrected misinformation is automatically retrieved from memory without any accompanying contextual details, it might mistakenly be considered true. To illustrate, many researchers have suggested that when information in memory is retracted, a “negation tag” is linked to the original memory representation (e.g., “Listerine alleviates cold symptoms–NOT TRUE”). If retrieval relies only on automatic processes, those processes might retrieve the claim without also retrieving the attached negation tag. If strategic memory processes are not engaged, familiar claims may be mistakenly judged to be true even after they have been corrected (and the person has diligently updated their memory). In support of this idea, misinformation effects have been found to be reduced when conditions encourage reliance on strategic processing. In an earlier study, Ullrich Ecker and colleagues and I found that presenting participants with a pre-exposure warning detailing the continued-influence effect greatly reduced reliance on misinformation, and was as effective as providing a factual alternative. We suggested that those warnings not only allowed individuals to more effectively tag misinformation as false at encoding, but also boosted later recall of the “negation tag” because people were more likely to engage strategic processes. It follows that the recent studies showing the absence of a familiarity backfire effect should be examined more closely to see if the data contain a signature of the presumed familiarity process. If that signature could be detected, then the studies might best be interpreted as showing that the familiarity backfire effect is not always observed but that there are theoretical reasons to expect it to occur in some circumstances. Note that this is quite different from showing that the familiarity backfire effect does not exist. It turns out that the study by Swire, Ecker, and Lewandowsky contains fairly strong evidence for the presence of the presumed familiarity process. The figure below shows one aspect of this evidence (it replicates across measures and experiments): facts after they were affirmed across three retention intervals. It can be seen that affirmation raises belief above the pre-intervention baseline (the dotted line), and that the increase in belief ratings is remarkably stable over time (we can ignore the ‘brief’ vs. ‘detailed’ manipulation as it clearly had little effect). The bars on the right show the same measure for myths after they were rebutted. Again, the rebuttal was effective because it lowered belief far below the pre-intervention baseline. However, unlike the effect on facts, the effects of the correction obviously wore off over time: After a week, people’s belief in the myths had increased considerably compared to the low level immediately after the correction had been received. In other words, misinformation began to be “re-believed” after a week, whereas newly-acquired fact belief remained stable during that time. This asymmetrical forgetting of the intervention is arguably a signature of the familiarity process. As we note in the paper: “In the case of an affirmed fact, it does not matter if an individual relies on the recollection of the affirmation or on the boosted familiarity of the factual statement—familiarity and recollection operate in unison and lead to the individual assuming the item to be true. However, in the case of a retracted myth, recollection of the retraction will support the statement’s correct rejection, whereas the myth’s boosted familiarity will foster its false acceptance as true, as familiarity and recollection stand in opposition.” The asymmetry of forgetting therefore suggests that the familiarity of the material persists whereas the strategic component of memory retrieval becomes less effective over time: The strategic component is required to retrieve the negation tag and “unbelieve” a myth, whereas familiarity is sufficient to believe affirmed facts even when strategic processes have become less effective. A clear implication of this interpretation is that under conditions in which strategic processes are even less effective, a familiarity-based backfire effect may well emerge. In a nutshell, our recent studies show that familiarity-based responding is important but that it can be overridden by a correction when strategic processing remains intact. What our studies do not show is that the familiarity backfire effect will never be observed. Although our experiments showed that familiarity does not lead to a backfire effect in some cases where strategic processes are compromised (i.e. in older adults, when little detail is provided about why the misconception is incorrect, after a long period of time between encoding the retraction and recalling it), our studies do not address whether familiarity backfire effects occur in other circumstances. For example, strategic memory processes are also compromised when people pay little attention during encoding of the correction because they are otherwise occupied (e.g., driving a car while listening to a corrective message on the radio). Those expectations remain to be examined by experimentation. A recent as-yet unpublished study by Gordon Pennycook and colleagues reports results that point in the direction of a familiarity-backfire effect. In their study, participants were shown “fake news” headlines that were sometimes accompanied by warnings that they were false (“disputed by independent fact checkers”). At a later stage in their study, participants rated fake news headlines that had been identified as false and were presented a second time as more accurate than novel fake news headlines. In other words, the familiarity afforded by repetition outweighed the effect of the warnings about the veracity of the content. In light of these latest results, which point to a fluid balance between familiarity-based processing and strategic processing, what should be our recommendations to communicators who are tasked to debunk misinformation? The picture below shows the recommendations from the Debunking Handbook annotated in light of the recent results: Qualifying the Familiarity Backfire Effect With ‘fake news’ fast becoming a global issue, the ability to effectively correct inaccurate information has never been more pertinent. Unfortunately, the task of correcting misinformation is far from trivial. In many instances, corrections are only partially effective and people often continue to rely on outdated information. This is known as the continued-influence effect, and it has been of great interest for cognitive psychology, political science, and communication studies. One recommendation that has emerged from the literature regarding optimal misinformation correction, is that it is best to avoid repeating the initial misconception. This is due to the risk of the repetition of the misconception increasing its familiarity. For example, truthfully stating that “playing Mozart to an infant will not boost its IQ” mentions both “Mozart” and “boosting IQ”. This makes the link between the two concepts more familiar, even though the actual statement is aiming to correct the myth. The potential problem that arises from increased familiarity is that people are often more likely to think that familiar information is true. The Familiarity Backfire Effect Some reports even suggest that the increased familiarity associated with a correction can be so detrimental, that it causes a familiarity backfire effect. A backfire effect is when a correction ironically increases an individual’s belief in the original misconception. For example, if people were more likely to think that listening to Mozart can boost a child’s IQ after they had been told that this was false (in comparison to their belief levels prior to the correction), this would be considered a backfire effect. However, scientific evidence for this phenomenon has been fairly thin on the ground. In fact, the most cited example of the familiarity backfire effect is from an unpublished manuscript. The manuscript reports an experiment that corrects misinformation regarding vaccines. We know that corrective attempts for misinformation about contentious subjects is likely to backfire because people’s worldview is being challenged. It is therefore unclear whether the backfire effect described in the manuscript arose solely due to familiarity. Ullrich Ecker, Stephan Lewandowsky, and I designed a set of experiments to see how detrimental familiarity really is to the updating of beliefs. We focused on many different topics to avoid confounding worldview backfire effects and familiarity backfire effects. The experiments were reported in an article that recently appeared in the Journal of Experimental Psychology: Learning, Memory, and Cognition. Memory Retrieval The experiments were based upon what we know about how memory works. It is a common assumption that there are two types of memory retrieval; strategic and automatic. Strategic memory allows you to remember details such as where or when you learnt the information, and whether information is true or false. However, strategic memory takes effort and may therefore fail when you are distracted or cannot expend effort for other reasons. Automatic memory retrieval, by contrast, does not take effort and is based largely on a perception of familiarity. For example, you will have little difficulty recognizing that the word “Mozart” occurred in this post based on its sense of familiarity alone, whereas you might have to engage in more strategic retrieval to recall in what context it appeared. You are less likely to be able to use strategic memory retrieval (and are more likely to rely on automatic memory retrieval) if you are (1) an older adult (2) you are not provided with enough detail about why the information is incorrect, and (3) there is a long period of time between the correction presentation and when you are asked to remember it. Given that we know under what circumstances people are more likely to use automatic processes, we can manipulate the extent to which people rely on familiarity when they are updating their beliefs. Experiment 1: Young Adults We presented 100 undergraduate students with 20 familiar myths (for example, “Ostriches hide their head in the sand when frightened”), and 20 facts (for example, “Dogs shouldn’t eat chocolate”). We asked participants to rate how much they believed each item on a scale between 0-10. We then informed them as to what was true and what was false. We did this by either briefly stating “this is true/ false”, or by giving participants a short evidence-based blurb as to why the information was true or false. The participants then re-rated their belief either immediately, after half an hour, or after one-week. If a familiarity backfire effect were to be elicited in an undergraduate population, we would expect it to occur in cases where they had brief corrections and/or after a long delay. For example, the figure below shows what a backfire effect could hypothetically look like. Belief levels in the myth after correction would be greater than people’s belief prior to the correction: We did not find this to be the case—the figure below shows the pattern of results that we actually found. Note that the dotted horizontal line refers to the belief prior to the correction. Any bar that falls below that line therefore represents successful memory updating. As belief levels after the correction consistently remained below belief prior to the correction, participants updated their beliefs in the right direction. Familiarity did play a role in that both brief corrections and a long delay led to corrections being less effective, but there was no evidence for a familiarity-based backfire effect. As previously noted, age is also known to be a factor in whether people can use strategic memory processes. It is therefore possible that the familiarity backfire effect only exists in older adult populations. This idea was tested in our second experiment. Experiment 2: Older adults We asked 124 older adults over the age of 50 to participant in a very similar study to Experiment 1. The only change we made to the Experiment 1 design, was that we added a three-week delay condition to maximize the chance of eliciting a familiarity backfire effect. The results of this study can be seen below: Belief after the correction remained well below belief levels prior to the correction. Even under circumstances most conducive to a familiarity backfire effect (i.e. after a long period of time, with a correction that provided little detail, in older adults), we failed to find it. However, we again found that familiarity impacted the efficacy of the correction: People were better at sustaining their belief over time if they were middle aged participants under the age of 65, in comparison to those over the age of 65. It might be a relief for fact-checkers and news accuracy experts that repeating the misinformation when correcting it will most likely not make people’s belief worse. This is also good news as correcting information without mentioning the original misconception can be extremely challenging. In fact, there is evidence to suggest that repeating the misconception prior to the correction can be beneficial to belief updating, as we showed in a previous post. One concern that stems from our data is that people over the age of 65 are potentially worse than middle-aged and younger adults at sustaining their post-correction belief that myths are inaccurate. It is therefore even more important that we not only understand the underlying mechanisms of why people continue to believe in inaccurate information, but develop techniques to facilitate evidence-based updating, so that all sectors of the population can be on the same page as to what is fact and what is fiction. So where do our results leave the familiarity backfire effect? Should we avoid repeating the myth when correcting it? Can we confidently use the phrase “playing Mozart does not boost an infant’s IQ” without ramifications? Those issues will be taken up in the next post. Can Repeating False Information Help People Remember True Information? Last Saturday, a powerful earthquake struck the Philippines. It was first reported as having a magnitude of 7.2; this was later corrected to 6.8. Last Friday, a wharf collapsed in Gloucester Harbor in Massachusetts. It was first reported as a wharf belonging to Cape Ann Ice, but later identified as a wharf used by Channel Fish. Last Thursday, President Trump announced plans regarding NAFTA. He originally claimed that he would withdraw from the agreement entirely, but later indicated plans to renegotiate. Corrections and retractions are common — not only in the news, but also in science and in everyday life. Sometimes it’s as simple as correcting a careless mistake; other cases involve new information that leads to a reinterpretation of the evidence and the rejection of some prior assumption. We discover that the complaint wasn’t made by our neighbor after all, or that the purported link between vaccines and autism was based on deliberate fraud. The trouble is that initial beliefs are sometimes hard to dislodge. Dozens of studies in experimental psychology have identified a phenomenon known as the continued influence effect: Even after misinformation is retracted, many people continue to treat it as true. In other words, it has a continued influence on their thinking. When misinformation concerns something like the safety of vaccines or the perpetrators behind some atrocity, getting it wrong can be personally and societally consequential. That’s one reason why psychologists have been eager to understand precisely what drives the continued influence effect, and what kinds of corrections are most likely to be effective. A new paper by Ullrich Ecker, Joshua Hogan and Stephan Lewandowsky, forthcoming in the Journal of Applied Research in Memory and Cognition, takes up one important question regarding the correction of misinformation: Is it better to explicitly state and retract the false claim, or is it better to avoid repeating something false, and instead simply state what’s now believed to be true? Both possibilities are suggested by prior research. On the one hand, repeating a false claim could make it more familiar. Familiarity, in turn, could be mistaken for fact, or at least the suspicion that there’s something to the (false) claim. Weeks after reading a brochure about vaccine safety, for example, there might be something familiar about the idea that vaccines are associated with autism, but you might not remember precisely what was claimed, and in particular that the association was refuted. On the other hand, there’s evidence that explicitly articulating a misconception can facilitate the process of updating one’s beliefs. For instance, some approaches to student learning emphasize the value of engaging with students’ initial (mistaken) beliefs as a precursor to conceptual change. Perhaps drawing attention to a false belief is a good way to assimilate the new information in a way that replaces, rather than merely co-exists with, the initial misinformation. Given these competing possibilities, Ecker and his colleagues designed an experiment in which 60 university undergraduates read a series of scenarios that were written as pairs of news stories, half of which involved a retraction in the second story of some misinformation stated in the first story. The crucial variation was in how the retraction occurred: by merely stating the new claim; by implying that the new claim revised a prior claim (but without stating what the prior claim was); or by including both the initial claim and the new claim that superseded it. To measure the “continued influence” of the initial misinformation, participants were asked a series of questions relevant to that aspect of the news story. The researchers found that people’s reasoning often showed an influence of the initial, retracted claim, thus replicating prior work. However, they also found that this influence was most pronounced when the new claim was simply stated, and least pronounced when the retraction included both the initial claim and the new claim that superseded it. At least for these scenarios, the most effective retractions were those that repeated the initial misinformation. The study’s authors are cautious about making strong recommendations on the basis of this single result. For instance, they still suggest that unnecessary repetitions of misinformation should be avoided; if someone doesn’t already believe the misinformation, repeating it could do more harm than good. It’s also important to know how robust these findings are to different kinds of (mis)information and different ways in which it is presented. One important factor could be time. Does it matter if the retraction follows the initial information almost immediately, versus after a long delay? Moreover, it could be that the retraction that’s most effective for the few minutes after it’s been read doesn’t have the most staying power as weeks and months go by. These caveats aside, the new results offer an important qualification to prior recommendations concerning misinformation and its correction, some of which encouraged educators and communicators to avoid repeating false claims. At least sometimes, there may be value in repeating misinformation alongside the alternative we now consider to be true. This post was originally published at NPR’s 13.7: Cosmos & Culture page. It is reposted here as a first post in a series of three posts on recent research on misinformation by Ulli Ecker, Stephan Lewandowsky, and colleagues. The next post reports a recent study, with Briony Swire as first author, that takes a further look at whether repeating a myth during its debunking is always harmful. Constraining the social discount rate by consideration of uncertainties An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, reported a simulation experiment that sought to put constraints on the social discount rate for climate economics. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and In the previous three posts • I first outlined the basics of the discounting problem and highlighted the importance of the discount rate in climate economics. • In the second post, I discussed the ethical considerations and value judgments that are relevant to determining the discount rate within a prescriptive Ramsay framework. • In the third post I explained how unresolvable difference between different value judgments can be “integrated out” by a process known as gamma discounting. Those three posts provided us with the background needed to understand the simulation experiment that formed the core of our paper. Basic Procedure The goal of our simulation experiment was to explore different sources of uncertainty that are relevant to decision making in climate economics. In particular, we wanted to constrain the social discount rate, ρ, within a prescriptive framework embodied by the Ramsay rule: ρ = δ + η × g. As explained earlier, the parameters d and h represent ethical considerations relating to pure time preference and inequality aversion, respectively. The anticipated future economic growth is represented by g. To derive candidate discount rates from this framework we therefore need estimates of future economic growth. We obtained these estimates of g in our experiment by projecting global warming till the end of the century using a climate model (a simple emulator), and converting that warming into a marginal effect on baseline economic growth through an empirical model of the temperature-growth relationship reported by Marshall Burke, Solomon Hsiang and Edward Miguel in 2015. Their model is shown in the figure below: It can be seen that, controlling for all other variables, economic productivity is maximal at an annual average temperature of around 13°C, with temperatures below or above that leading to a reduction in economic output. This descriptive model has been shown to be quite robust and we relied on it to convert warming forecasts to economic growth rates. Experimental Design We projected economic growth as a function of three variables that are the source of considerable uncertainty: the sensitivity of the climate to carbon emissions, the emissions trajectory that results from our policy choices, and the socio-economic development pathway that the world is following. We formed all possible combinations of those three variables to examine their effect on projected global growth. The figure below shows our experimental design. • We fixed climate sensitivity at a constant mean but varied the uncertainty of that sensitivity, expressed as its standard deviation in 6 steps from 0.26°C to 1.66°C • We employed the climate forcings (i.e., the imbalance of incoming and outgoing energy that results from atmospheric greenhouse gases) provided by several of the IPCC’s Representative Concentration Pathways (RCPs). Specifically, we used RCP 2.6, RCP 4.5, RCP 6.0, and RCP 8.5 for the period 2000 through 2100. These RCPs span the range from aggressive mitigation and limiting global temperature rises to approximately 2°C (RCP 2.6), to continued business as usual and extensive warming (RCP 8.5). • We compared two Shared Socio-Economic Pathways (SSPs). SSPs form the basis of the IPCC’s projections of future global development in Working Group 3. We employed two scenarios, SSP3 and SSP5. SSP3 assumes low baseline growth and slow global income convergence between rich and poor countries; SSP5 assumes high baseline growth and fast global income convergence. Our experiment thus consisted of 48 cells, obtained by fully crossing 6 levels of uncertainty about climate sensitivity with 4 RCPs and 2 SSPs. For each cell, 1,000 simulation replications were performed by sampling a realization of climate sensitivity from the appropriate distribution. For each realization, global temperatures were projected to the end of the century and the economic effects of climate change were derived by considering the relevant SSP in conjunction with the empirical model relating temperature to economic production. Cumulative average growth rates for the remainder of the century were then computed across the 1,000 replications in each cell of the design. These 48 projected global economic trajectories to the end of the century, each of which represented the expectation under one set of experimental conditions, were then converted into candidate social discount rates. At this stage the ethical considerations (top left of the above figure; see my previous post here for a discussion) were applied to each trajectory, by combining each of the 48 projected global economic growth rates (g) with four combinations of η and δ. Specifically, we used values for η and δ obtained by a recent expert survey, such that δ was either 0% or 3.15% with probability 65% and 35%, respectively, and η was 0.5 or 2.2 with equal probability. This yielded a final set of 192 candidate discount rates across all combinations of experimental variables which were then integrated via gamma discounting into a single certainty-equivalent declining discount rate. I explained gamma discounting in a previous post, and you may wish to re-read that if the process is not clear to you. Although the experiment was quite complex—after all, we explored 3 sources of scientific, socio-economic, and policy uncertainty plus 2 sources of ethical ambiguity!—the crucial results are quite straightforward and consist of a single declining discount rate that is integrated across all those sources of ambiguity and uncertainty. The figure below shows the main result (the article itself contains lots more but we skip over those data here). The solid black line represents the (spot) certainty-equivalent declining discount rate that applies at any given point in time. For example, if we are concerned with a damage cost that falls due in 2050, then we would discount that cost at 3%. If we are worried about damages at the end of the century, then we would discount that cost by less than 2%. The figure also shows various previous estimates of declining discount rates that were derived by different means but all based on the underlying principle of gamma discounting. Our approach differs from those precedents in two important ways: First, we explicitly consider the major (if not most) sources of uncertainty and ambiguity, and we encompass their effects via gamma discounting. Second, our approach explicitly models the impact of future climate change on economic production. When the likely impact of climate change on the global economy is considered, a more rapid decline of the discount rate is observed than in previous work. By 2070, our estimates of the spot rate dips below the other past benchmark estimates in the above figure. It should be noted that our results mesh well with the median long-run social discount rate elicited from experts. We consider this article to provide a proof of concept, with much further exploration remaining to be performed. We take up some of those open issues and the limitations of our work in the article There is one clear message from our work: uncertainty is no reason to delay climate mitigation. Quite on the contrary, our extensive exploration of uncertainty yielded a lower discount rate (form around 2070 onward) than existing proposals. This lower discount rate translates into a considerable increase in the social cost of carbon emissions, and hence even greater impetus to mitigate climate change. One caveat to our conclusion is that our discounting model assumes that things can be done only now or never. This makes sense in many situations when individuals or firms are confronted with a choice about a potential project. However, there are limitations to this approach. To take an extreme example, suppose we knew that the precise value of climate sensitivity would be revealed to us by some miraculous process in exactly a year’s time. In that it case, it would not be impossible that we might decide to wait that year to learn the precise climate sensitivity before acting. A possible alternative approach that stretches the decision path over time involves so-called real options models. Real options analyses account for the sequential nature and path dependence of choice processes. We flag this alternative briefly, but it remains for future work to apply it to climate economics in a more systematic fashion. Harnessing uncertainty: A single certainty-equivalent social discount rate An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, reported a simulation experiment that sought to put constraints on the social discount rate for climate economics. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and In the previous two posts, I first outlined the basics of the discounting problem and highlighted the importance of the discount rate in climate economics. In the second post, I discussed the ethical considerations and value judgments that are relevant to determining the discount rate within a prescriptive Ramsay framework. I showed that those ethical considerations can yield unresolvable ambiguity: different people have different values, and sometimes those values cannot be reconciled. Fortunately, in the discounting context, we can “integrate out” those ambiguities by a process known as gamma discounting. This is the topic of the remainder of this post. A final post explains our simulation experiment and the results. Gamma Discounting We know from the last post that in a recent survey of 200 experts, Moritz Drupp and colleagues found that the distribution of expert responses was closely approximated by setting δ to zero with 65% probability and setting it to 3.15 with 35% probability. (If you don’t know what d refers to, please read the previous post first.) It turns out that recent work in economics has proposed a way to resolve such ambiguity or uncertainty. This process is known as gamma discounting. In a nutshell, instead of averaging the candidate discount rates, the process averages the discounted future values for each candidate rate. The table below illustrates gamma discounting using an example provided by Ken Arrow and colleagues. The table shows discounted values of $1,000 at various times t in the future for three different candidate discount rates (namely, 1%, 4%, and 7%). For example, if the rate is 4%, then the discounted value of $1,000 after 50 years is $135.34, and so on. So how do we deal with the uncertainty about the discount rate? Suppose we assume that the rate is either 1% or 7% with equal probability, then 50 years from now our $1,000 can be worth either $606.53 or $30.20 (also with equal probability). It follows that the average of those two uncertain values represents the probability-weighted expectation for our $1,000, which 50 years from now is ($30.20 + $606.53)/2 = $318.36. These averages are called the “mean expected present values” and are shown in the column labeled MEV. They form the basis of our final computation. The ratio between successive MEVs yields a single certainty-equivalent discount rate (columns labeled CE-DDR) for any given point in time. For example, the MEV at t = 50 is $318.36, and the MEV at t = 51 is $314.33. The ratio between those successive values, $318.37/$314.33 = 1.0128 = 1.28%, provides the CE-DDR at time t = 50, known as the “forward” rate, and those values are shown in the second-to-last column of the table. Several important points can be made about that column: First, there is only one column. No matter how many candidate discount rates we started out with, and what their probability weighting might be, we end up with a single certainty-equivalent discount rate that can be applied with 100% certainty, but that has embedded within it the uncertainty we started out with. Second, the discount rate is not constant: as can be seen in the table, it starts out at nearly 4% and converges towards 1% after 100 years or more. The discount rate is therefore declining over time. (In the limit, when time heads towards infinity, the discount rate converges towards the smallest candidate rate being considered. The choice of lowest candidate rate is therefore crucial, although this primarily affects the distant future.) Finally, the second-to-last column captures the slope of the declining discount rate function between times t and t + 1. Those forward values, however, cannot be used to discount an amount from the present to time t—for example, the MEV at time t = 50 cannot be obtained by discounting $1,000 at a rate of 1.28% over 50 years. Instead, to obtain a rate that covers all of those 50 years, we need a different certainty-equivalent discount rate that is also declining but generally has higher values. This rate is called the “spot” certainty-equivalent declining discount rate and is shown in the final column. If we apply the “spot” rate to our present-day $1,000 for time t, it will yield the MEVs for that time shown in the table. For example, $1,000 discounted at 2.32% over 50 years (i.e., 1000/1.0232^50) yields the MEV of $318 (± rounding error). To summarize: We start out by being uncertain about the discount rate. For example, we don’t know whether to set the pure time preference to zero or to permit a value greater than that. We apply gamma discounting and this uncertainty has “disappeared”. Of course, it hasn’t really disappeared, it has just been taken into account in the final certainty-equivalent discount rate. But for all practical intents and purposes, we now have a single number that we can apply in economic decision making. In the first post, we considered the implications of the discount rate if climate change were to cause $5 trillion (i.e., $5,000,000,000,000) in damages by the end of the century. We noted that the present discounted cost could be as large as $2.2 trillion (discount at 1%) or as little as $18 billon (at 7%). If we assume that 1% or 7% are equally likely to be “correct”, then from the table above we can obtain a certainty-equivalent spot rate of somewhere below 2% (the end of the century is 83 years away, but that’s reasonably close to the 100 years that yield a spot rate of 1.71%). It follows that to avert $5 trillion in damages, it would be economically advisable to expend in excess of $1 trillion now on climate mitigation even if we are uncertain about which discount rate to Combining sources of uncertainty This post explained the basic idea behind gamma discounting. We now have a mathematical platform to convert ambiguity and uncertainty about the discount rate into a single certainty-equivalent discount rate. The beauty of gamma discounting is that it is theoretically particularly firmly grounded when the candidate discount rates (the first three columns in the above table) arise from irreducible heterogeneity among expert opinions rather than from random variation about an imprecise estimate. Different ethical positions about inequality aversion (η; see previous post) and pure time preference (δ) are clear instances of such irreducible heterogeneity. In our simulation experiment, we considered the uncertainty about three other relevant variables as similar cases of irreducible heterogeneity; namely, uncertainty about climate sensitivity, uncertainty about emissions policy, and uncertainty about future global development. To briefly foreshadow the final post, we conducted a simulation experiment that forecast economic growth till the end of the century under all possible combinations of those variables. We then applied gamma discounting as in the table above to extract a single certainty-equivalent declining discount rate that policy makers can apply in the knowledge that a broad range of uncertainties has been considered. We must discount, but how much? An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, reported a simulation experiment that sought to put constraints on the social discount rate for climate economics. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and In a previous post, I outlined the basics of the discounting problem and highlighted the importance of the discount rate in climate economics. In the remainder of this post, I will outline the ethical considerations and value judgments that are relevant to determining the discount rate. Because those considerations may yield unresolvable ambiguity, they must be “integrated out” by a process known as gamma discounting. This will be explained in the next post. A further post will explain our simulation experiment and the results. Considerations about the social discount rate in climate economics When individuals or businesses make decisions about investments, they tend to use the prevailing market interest rates to set the discount rate. This approach, known as the descriptive approach to setting the discount rate, makes perfect sense for short or medium-term time horizons when the costs and benefits of a project involve the same people and the same markets. The approach is called descriptive because the discount rate correctly describes how society actually discounts, as determined by the markets. An alternative approach, called the prescriptive approach, prefers to estimate the social discount rate directly from its primitives rather than using market rates of interest. In this context, the discount rate is usually called the social discount rate because it applies not to individuals or firms but to society overall. The approach is called prescriptive because it imposes a rate on social planners that is, at least in part, based on value judgments. There are a number of arguments that support the prescriptive approach. For example, many economists and philosophers would argue that we cannot discount with respect to future generations. That is, present-day decision makers should not endorse policies that inevitably disadvantage future generations, who have no power to resist or retaliate against present-day decisions. In addition, those most affected by climate change—the poor, often in developing countries—do not influence market interest rates. This arguably places a burden on governments to take a wider ethical perspective than investors who trade in financial markets. Our article therefore took the prescriptive approach to setting the discount rate, consistent with governmental recommendations in the UK and much of Europe. (Although US authorities have generally preferred descriptive approaches to intergenerational discounting.) The prescriptive approach is conventionally understood within the framework of the Ramsay rule: ρ = δ + η × g. It can be seen that the social discount rate, ρ, results from two distinct components: A component known as the “pure time preference”, encapsulated by δ, and a component that combines the expected average annual real economic growth rate, g, with a parameter η that turns out to capture people’s inequality aversion. (It also does other things but here we focus on inequality aversion). The pure time preference is simply our impatience: It’s our impulse that $50 today is “worth more” than $51 in a month, even though the accrual during this delay would correspond to a whopping annual interest rate of nearly 27%. The rationale for inclusion of the growth rate is that growing wealth makes a given cost for future generations more bearable than it appears to us now, in the same way that $100 is valued a great deal more by a poor student than by a billionaire. Within the Ramsey framework we thus have to consider three quantities to determine the social discount rate: Future economic growth, inequality aversion, and pure time preference. Future growth rates can be estimated by economic modeling—and that is precisely what we did in our article, and I will describe the details of that in the next post. Determination of the other two quantities, by contrast, involves ethical value judgments that are necessarily subjective. (Given the inter-generational context, we ignore the possibility of estimating η and δ from asset markets and behavioral experiments, respectively.) To illustrate the ethical implications I focus on δ, the pure time preference. I will ignore issues surrounding η for simplicity. It has been argued that it is ethically indefensible for δ to be greater than zero, as it would embody “a clear violation of the attitude of impartiality that is foundational to ethics”. That is, we should not disadvantage future generations simply because we happen to have been born before them. If one wanted to treat future generations equally to ours, as most people seem prepared to do, one would therefore want to constrain δ to be zero—and indeed, in the U.K.’s influential Stern report, δ was set to (near) zero for that reason. However, the seemingly attractive idea of treating all generations equally by setting δ to zero entails some unnerving consequences. In general, the lower the discount rate, the more future consumption (or cost) matters and hence the more we should set aside for the benefit of future generations. Partha Dasgupta computed the mathematically implied savings rate when δ is set to the value recommended in the Stern report and found it to be 97%. That is, out of $100 we currently own, we may only consume $3, with the remainder being tucked away for the benefit of our children. Our children, in turn, would also only be allowed to spend $3 of their considerably greater wealth, with the remainder being passed on to their children, and so on. An implication of d being near zero therefore is the impoverishment of each current generation for the benefit of the succeeding one! And it doesn’t stop there: low discounting, although it may appear benign in the climate context, has dangerous implications elsewhere. As William Nordhaus put it: “Countries might start wars today because of the possibility of nuclear proliferation a century ahead; or because of a potential adverse shift in the balance of power two centuries ahead; or because of speculative futuristic technologies three centuries ahead. It is not clear how long the globe could survive the calculations and machinations of zero-discount-rate military strategists.” So what is the “correct” value of δ? We don’t know. But we do know that in a recent survey of 200 experts, Moritz Drupp and colleagues found that the distribution of expert responses was closely approximated by setting δ to zero with 65% probability and setting it to 3.15 with 35% probability. So now what? Do we make policy decisions based on majority rule? Or based on the average of the two sets of expert opinions? Or do we decide that experts are no good and that we should ask Bruce at the pub? The next post presents a solution to this dilemma known as gamma discounting. The future is certainly uncertain The future is uncertain. So how do we best cope with this uncertainty? Nowhere is this question more acute than in the climate arena where today’s policy decisions have an impact on people centuries The existence of scientific uncertainty has often been used in support of arguments that climate mitigation is unnecessary or too costly. Those arguments are flawed because, if anything, greater uncertainty about the future evolution of the climate should compel us to act with even greater urgency than if there were no (or less) uncertainty. I published two articles that sketched out this analysis a few years ago, and in earlier posts I explained their underlying logic and mathematics in some detail here, here, and here. Climate scientist Michael Mann also made this point during his recent Congressional testimony to the House Committee on Science, Space, and Technology. In a nutshell, uncertainty is not your friend but a Dutch uncle advising you to roll up your sleeves and start working towards climate mitigation. Our initial work was not the final word on the matter, but it stimulated follow-up research by an economist from the UK, Mark Freeman, who together with colleagues Gernot Wagner and Richard Zeckhauser from Harvard’s Kennedy School, published a more extensive mathematical analysis of the problem that came to roughly the same conclusions. One limitation of our existing work on uncertainty has been that we were unable to say anything that was specifically policy relevant. That is, although we could make a strong case for mitigation and against “business as usual”, we were unable to specify how much mitigation would be appropriate on the basis of our work to date. An article that just appeared in the journal Global and Planetary Change, authored by me and Mark Freeman and Michael Mann, tackled this problem. The article is entitled Harnessing the uncertainty monster: Putting quantitative constraints on the intergenerational social discount rate, and it does just that: In a nutshell, it shows how a single, policy-relevant certainty-equivalent declining social discount rate can be computed from consideration of a large number of sources of uncertainty and ambiguity. I have written a series of posts that unpack this rather dense summary statement of our article and that will appear here during the next few days: • In the remainder of this post, I describe the basics of discounting. • The next post describes the ethical considerations that enter into setting of the discount rate. • A further post explains how uncertainty about the proper discount rate can be “integrated out” to yield a single certainty-equivalent declining discount rate. • A final post explains our simulation experiment and the results. Discounting the future[1] We value the present more than the future. When given the choice, very few people would prefer to wait a month to receive $51 if the alternative were to receive $50 today, even though the accrual during this delay would correspond to a whopping annual interest rate of nearly 27%. This entrenched preference for the present, and the discounting of the future it entails, appears to be an immutable aspect not just of human cognition but of organisms more generally. When given the choice between a smaller reward now or a larger reward later, most animals generally prefer the immediate reward. In humans, decisions relating to the present involve regions of the brain (viz. limbic and paralimbic cortical structures) that are also consistently implicated in impulsive behavior and cravings such as heroin addiction, whereas decisions that pertain to the future involve brain regions (viz. lateral prefrontal and parietal areas) known to support deliberative processing and numerical Our strong preference for immediate rewards may therefore reflect the proverbial “reptilian brain,” which competes with our “rational brain” that is telling us to consider and plan for the future. However, that does not mean that discounting is irrational: On the contrary, discounting is a standard aspect of inter-temporal decision making in economics. Whenever costs and benefits of projects are evaluated, the comparison must be adjusted by the delay between current costs and future benefits (or vice versa). This is done by setting an interest rate known as the discount rate. The discount rate is at the same time both quite simple and surprisingly nuanced. For now, let’s focus on its simplicity and introduce it with the following example: Suppose you are faced with the decision whether to attend university now, thereby incurring tuition costs and deferring earned income, or to enter the job market straight away. Ignoring all non-economic variables (not recommended in reality!), this decision boils down to figuring out whether the cost of tuition and deferred income will be recouped in the future by the higher income you are likely to earn with a university degree than without one. (A peer-reviewed paper that works this out in detail can be found here.) Economists often use the prevailing market interest rates to make inter-temporal decisions of this type. To illustrate, let’s suppose the prevailing annual interest rate is 3%. Let’s furthermore suppose you are trying to decide whether to service your car engine now, because you have a pretty good idea that if you didn’t, you’d incur a repair bill of $100 in a year’s time. Now here is the crucial feature of discounting: If you had a $100 now and invested it at 3%, then you could pay the damage in a year’s time and pocket $3 profit. Or conversely, the damage of $100 in a year’s time is only “worth” a little over $97 today (because $97.09 invested at 3% would be $100 in a year). Thus, an economist might argue that you should get your car serviced now only if the cost is less than $97—any more than that, and you’d be better off investing the money and using it to pay off the future repair bill. This trivial example illustrates the discount rate: it is simply the interest rate you would accrue on current costs (or benefits) until some future point in time when the benefits (or costs) come Determining the discount rate for personal decisions, such as whether to service your car or attend university, is relatively straightforward because we have a very good historical record of the prevailing interest rates and may extrapolate those to the medium-term future with some confidence. Enter climate change. The situation changes dramatically when inter-temporal decisions cross generational boundaries and extend into the distant future. Today’s policy decisions with respect to climate change will affect people who have not yet been born, and whom today’s decision makers will never meet. The extended temporal horizon renders the setting of the discount rate ever more important and tricky. To illustrate, suppose climate change will cause $5 trillion (i.e., $5,000,000,000,000) in damages by the end of the century. At a discount rate of 1%, this would be “worth” $2.2 trillion today—a whopping amount, to be sure, but still less than half the value at the end of the century. At a discount rate of 3%, this damage would be “worth” around $430 billion today—considerably less than at 1%. Incidentally, $430 billion is a little over two thirds of the U.S. military budget. At a discount rate of 7%, finally, the damage in today’s dollars would be only $18 billion, an amount equivalent to the foreign investment in Vietnam during 2016. Seemingly slight variations in the discount rate can thus make future climate damages appear either very large (two-thirds of the Pentagon budget or more) or small (Vietnam is a pretty small economy). Taking mitigative action is more compelling in the former case than the latter. The choice of an appropriate discount rate for climate economics has therefore been hotly contested in policy, economics, and ethics. This debate has failed to yield a consensual value, with some scholars proposing that the discount rate for climate change should be negative and others permitting rates of 5% or more. In the next post, I discuss the ethical and economic considerations that typically enter into setting the discount rate. [1] Parts of this section are derived from a post I previously published at https://featuredcontent.psychonomic.org/2016/04/26/when-todays-grass-is-greener-than-tomorrows-gold/ A post-Shakespearean farce for a post-fact political age Never before has so much deception unraveled so quickly and with so little shame. Within a few hours of the EU referendum, the major planks of the Leave campaign had evaporated. We learned that the additional £350,000,000 that could be spent on the NHS every week, if we only left the EU, never existed. After all the fear mongering and denigration of immigrants, we learned that withdrawal from the EU would not reduce immigration. And we are currently learning the hard way that Leaving is economically disastrous: Although today’s downgrade in the UK’s credit rating may sound like a distant and hardly-relevant rumbling, we may care that we will now receive less of an annuity for our retirement. Perhaps that explains why the Leave Campaign diligently wiped its webpage of any content, leaving behind the message “thank you” but no record of their promises. (Don’t worry, it’s been archived. After all, the Leave campaign was the evil twin of climate denial, and so the reality-based community knows how to prevent things going down the memory hole.) We now get to watch in fascination (and terror?) how another “hockey-stick” graph is unfolding in front of our very eyes: Even arctic ice doesn’t melt that fast. While the “Project Fear” of the Remain campaign may now turn out to have been a “Project Understatement”, we should briefly summarize the activities of the various main actors in this unfolding “ Project Farce”: • The soon-to-be-no-more Prime Minister has stopped tweeting about huggable heroes and addressed Parliament today and said something that contained a lot of words. If you want to know what he really said, you need to read this. • The still-completely-in-charge Chancellor assured everyone this morning that Great Britain is Great again and that we should keep calm and carry on with being unemployed or deprived (but quietly). A few hours later, and innumerable Footsie and Dow points further south, the UK’s credit rating was downgraded. (After the markets in London closed, so as not to upset anyone). • Leave operative Michael Gove likened the experts, who predicted the current economic fallout, to Nazis before the referendum. He has not been heard from since Friday morning. (Please report any sightings if he has left the bunker for air or check his investments). • The leader of the Leave campaign, who may yet become our next Prime Minister, played cricket all weekend before he reassuringly reappeared in the Telegraph this morning, pronouncing the economy to be in good health and guaranteeing us access to the EU’s free market without any bothersome human rights legislation. He gets paid £5,000 for each of those columns, so he is bound to be back next week. A Shakespearean tragedy in a world of post-fact politics. Actually, no. Shakespeare’s tragedies, like their Greek counterparts, included evil villains who cunningly conspired to bring down kings and empires. The villains of the Brexit tragedy are not evil and cunning. Their banal evil arises from an infantile recklessness that gave them license to turn the future of this country, Europe, and the world economy into a Riot Club frat-boy tussle. Unchecked by the jingoist tabloids, their abject recklessness turned a decision of grave consequence into a platform for dull patriotic cheer-leading. Who are the adults in this post-Shakespearean farce? • President Obama (are you missing him already?) sent his Secretary of State, John Kerry, to Europe to exercise some adult supervision. Perhaps pointedly, Kerry stopped in Europe before visiting London, the home of special relationships. • Angela Merkel directly addressed the 48% of Britons who wanted to avoid this mess, and has generally struck a balance between giving the UK time to re-constitute itself and insisting on speedy action to commence divorce proceedings for the sake of the world economy. • Nicola Sturgeon, the First Minister of Scotland, continued to calmly clarify that the Riot Club frat boys did not have license to tear down Scotland as well. • The columnists and the millions who will not abandon Europe to the deceptions of demagogues. How will this “Project Farce” play out in the end? No one knows, but there is one ghoul that is emerging from the fog. Taking leadership of the country now, and being the one who pushes the irreversible button of Article 50 to commence separation from the EU, must surely be the most poisoned chalice of recent history. Does the UK have a government? It has now been 12 hours or more, since at the close of the first day of trading after the UK’s referendum vote to leave the EU, more than $2 trillion had been wiped off the stock markets around the world. This response is pretty much as it was expected by the preponderance of national and international experts, whom a leading “Leave” campaigner likened to the Nazis in the closing days of the During those 12 hours, on what should be a relatively quiet Saturday in summer, a number of remarkable things have transpired: One thing that has been remarkably absent from this list of events, as of 1pm Saturday, is any mention or appearance of any sort of a government of the United Kingdom. We have not heard from the currently-former Prime Minister, nor from a future-possible Prime Minister. Does the UK even have a government at this most crucial time of its history during my life time? Events are unfolding on a millisecond time scale, all the world’s market analysts are tracing events waiting to pounce when the markets open on Monday, and the UK government has gone AWOL. Driven by demagogues and arsonists, the UK ignored all the experts and all the facts and, to its own horror, set off a global crisis and a national recession on Thursday. On Friday the pound Sterling suffered a record loss of value and the stock markets worldwide lost $2,000,000,000,000. Also on Friday, the Leave campaign revealed itself to be the scam that it was. On Saturday, the arsonists and demagogues are nowhere to be seen, while the frat boys in the Tory party are trying to figure out what to do next. Eventually the adults will have to clean up the mess. Upated 2:10pm: Now this from the defense secretary, Michael Fallon: “The prime minister goes on, the government goes on until the autumn, until there is a new leader and a new government. We’ll remain at our posts and we have a big agenda. We were elected only a year ago and we’ve set out fresh legislation, which we’re taking through parliament at the moment. Cabinet is meeting on Monday. We were all elected just a year ago on a big programme of continuing to move the economy forward, creating more jobs, a programme of social reform, and investment in defence which you can see today.” Oh dear. Seriously? Updated 2:33pm: London has a government too. Mayor Khan came out strongly, declaring that #LondonisOpen We also have a video message from the currently-still-not-quite-former Prime Minister about celebrating Gay Pride. This is actually the second tweet of the day by No 10. I missed the first one because it was about huggable heroes and did not show up in my news feed. Apologies to the huggables.
{"url":"http://www.shapingtomorrowsworld.org/","timestamp":"2024-11-02T03:11:57Z","content_type":"text/html","content_length":"202490","record_id":"<urn:uuid:f5047b29-de44-4e5c-9822-baf68cf0877e>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00611.warc.gz"}
Share Results of Econometric Modeler App Session This example shows how to share the results of an Econometric Modeler app session by: • Exporting time series and model variables to the MATLAB^® Workspace • Generating MATLAB plain text and live functions to use outside the app • Generating a report of your activities on time series and estimated models During the session, the example transforms and plots data, runs statistical tests, and estimates a multiplicative seasonal ARIMA model. The data set Data_Airline.mat contains monthly counts of airline passengers. Import Data into Econometric Modeler At the command line, load the Data_Airline.mat data set. At the command line, open the Econometric Modeler app. Alternatively, open the app from the apps gallery (see Econometric Modeler). Import DataTimeTable into the app: 1. On the Econometric Modeler tab, in the Import section, click the Import button . 2. In the Import Data dialog box, in the Import? column, select the check box for the DataTimeTable variable. 3. Click Import. The variable PSSG appears in the Time Series pane, its value appears in the Preview pane, and its time series plot appears in the Time Series Plot(PSSG) figure window. The series exhibits a seasonal trend, serial correlation, and possible exponential growth. For an interactive analysis of serial correlation, see Detect Serial Correlation Using Econometric Modeler Stabilize Series Address the exponential trend by applying the log transform to PSSG. 1. In the Time Series pane, select PSSG. 2. On the Econometric Modeler tab, in the Transforms section, click Log. The transformed variable PSSGLog appears in the Time Series pane, and its time series plot appears in the Time Series Plot(PSSGLog) figure window. The exponential growth appears to be removed from the series. Address the seasonal trend by applying the 12th order seasonal difference. With PSSGLog selected in the Time Series pane, on the Econometric Modeler tab, in the Transforms section, set Seasonal to 12. Then, click Seasonal. The transformed variable PSSGLogSeasonalDiff appears in the Time Series pane, and its time series plot appears in the Time Series Plot(PSSGLogSeasonalDiff) figure window. The transformed series appears to have a unit root. Test the null hypothesis that PSSGLogSeasonalDiff has a unit root by using the Augmented Dickey-Fuller test. Specify that the alternative is an AR(0) model, then test again specifying an AR(1) model. Adjust the significance level to 0.025 to maintain a total significance level of 0.05. 1. With PSSGLogSeasonalDiff selected in the Time Series pane, on the Econometric Modeler tab, in the Tests section, click New Test > Augmented Dickey-Fuller Test. 2. On the ADF tab, in the Parameters section, set Significance Level to 0.025. 3. In the Tests section, click Run Test. 4. In the Parameters section, set Number of Lags to 1. 5. In the Tests section, click Run Test. The test results appear in the Results table of the ADF(PSSGLogSeasonalDiff) document. Both tests fail to reject the null hypothesis that the series is a unit root process. Address the unit root by applying the first difference to PSSGLogSeasonalDiff. With PSSGLogSeasonalDiff selected in the Time Series pane, click the Econometric Modeler tab. Then, in the Transforms section, click Difference. The transformed variable PSSGLogSeasonalDiffDiff appears in the Time Series pane, and its time series plot appears in the Time Series Plot(PSSGLogSeasonalDiffDiff) figure window. In the Time Series pane, rename the PSSGLogSeasonalDiffDiff variable by clicking it twice to select its name and PSSGStable. The app updates the names of all documents associated with the transformed series. Identify Model for Series Determine the lag structure for a conditional mean model of the data by plotting the sample autocorrelation function (ACF) and partial autocorrelation function (PACF). 1. With PSSGStable selected in the Time Series pane, click the Plots tab, then click ACF. 2. Show the first 50 lags of the ACF. On the ACF tab, set Number of Lags to 50. 3. Click the Plots tab, then click PACF. 4. Show the first 50 lags of the PACF. On the PACF tab, set Number of Lags to 50. 5. Drag the ACF(PSSGStable) figure window above the PACF(PSSGStable) figure window. According to [1], the autocorrelations in the ACF and PACF suggest that the following SARIMA(0,1,1)×(0,1,1)[12] model is appropriate for PSSGLog. $\left(1-L\right)\left(1-{L}^{12}\right){y}_{t}=\left(1+{\theta }_{1}L\right)\left(1+{\Theta }_{12}{L}^{12}\right){\epsilon }_{t}.$ Close all figure windows. Specify and Estimate SARIMA Model Specify the SARIMA(0,1,1)×(0,1,1)[12] model. 1. In the Time Series pane, select the PSSGLog time series. 2. On the Econometric Modeler tab, in the Models section, click the arrow to display the models gallery. 3. In the models gallery, in the ARMA/ARIMA Models section, click SARIMA. 4. In the SARIMA Model Parameters dialog box, on the Lag Order tab: □ Nonseasonal section 1. Set Degrees of Integration to 1. 2. Set Moving Average Order to 1. 3. Clear the Include Constant Term check box. □ Seasonal section 1. Set Period to 12 to indicate monthly data. 2. Set Moving Average Order to 1. 3. Select the Include Seasonal Difference check box. 5. Click Estimate. The model variable SARIMA_PSSGLog appears in the Models pane, its value appears in the Preview pane, and its estimation summary appears in the Model Summary(SARIMA_PSSGLog) document. Export Variables to Workspace Export PSSGLog, PSSGStable, and SARIMA_PSSGLog to the MATLAB Workspace. 1. On the Econometric Modeler tab, in the Export section, click . 2. In the Export Variables dialog box, select the Select check boxes for the PSSGLog and PSSGStable time series, and the SARIMA_PSSGLog model (if necessary). The app automatically selects the check boxes for all variables that are highlighted in the Time Series and Models panes. 3. Click Export. At the command line, list all variables in the workspace. Name Size Bytes Class Attributes Data 144x1 1152 double DataTable 144x2 3525 table DataTimeTable 144x1 3311 timetable Description 22x54 2376 char PSSGLog 144x1 1152 double PSSGStable 144x1 1152 double SARIMA_PSSGLog 1x1 7963 arima dates 144x1 1152 double series 1x1 162 cell The contents of Data_Airline.mat, the numeric vectors PSSGLog and PSSGStable, and the estimated arima model object SARIMA_PSSGLog are variables in the workspace. Forecast the next three years (36 months) of log airline passenger counts using SARIMA_PSSGLog. Specify the PSSGLog as presample data. numObs = 36; fPSSG = forecast(SARIMA_PSSGLog,numObs,'Y0',PSSGLog); Plot the passenger counts and the forecasts. fh = DataTimeTable.Time(end) + calmonths(1:numObs); hold on legend('Airline Passenger Counts','Forecasted Counts',... title('Monthly Airline Passenger Counts, 1949-1963') ylabel('Passenger counts') hold off Generate Plain Text Function from App Session Generate a MATLAB function for use outside the app. The function returns the estimated model SARIMA_PSSGLog given DataTimeTable. 1. In the Models pane of the app, select the SARIMA_PSSGLog model. 2. On the Econometric Modeler tab, in the Export section, click Export > Generate Function. The MATLAB Editor opens and contains a function named modelTimeSeries. The function accepts DataTimeTable (the variable you imported in this session), transforms data, and returns the estimated SARIMA(0,1,1)×(0,1,1)[12] model SARIMA_PSSGLog. 3. On the Editor tab, click Save > Save. 4. Save the function to your current folder by clicking Save in the Select File for Save As dialog box. At the command line, estimate the SARIMA(0,1,1)×(0,1,1)[12] model by passing DataTimeTable to modelTimeSeries. Name the model SARIMA_PSSGLog2. Compare the estimated model to SARIMA_PSSGLog. SARIMA_PSSGLog2 = modelTimeSeries(DataTimeTable); ARIMA(0,1,1) Model Seasonally Integrated with Seasonal MA(12) (Gaussian Distribution) Effective Sample Size: 144 Number of Estimated Parameters: 3 LogLikelihood: 276.198 AIC: -546.397 BIC: -537.488 Value StandardError TStatistic PValue _________ _____________ __________ __________ Constant 0 0 NaN NaN MA{1} -0.37716 0.066794 -5.6466 1.6364e-08 SMA{12} -0.57238 0.085439 -6.6992 2.0952e-11 Variance 0.0012634 0.00012395 10.193 2.1406e-24 ARIMA(0,1,1) Model Seasonally Integrated with Seasonal MA(12) (Gaussian Distribution) Effective Sample Size: 144 Number of Estimated Parameters: 3 LogLikelihood: 276.198 AIC: -546.397 BIC: -537.488 Value StandardError TStatistic PValue _________ _____________ __________ __________ Constant 0 0 NaN NaN MA{1} -0.37716 0.066794 -5.6466 1.6364e-08 SMA{12} -0.57238 0.085439 -6.6992 2.0952e-11 Variance 0.0012634 0.00012395 10.193 2.1406e-24 As expected, the models are identical. Generate Live Function from App Session Unlike a plain text function, a live function contains formatted text and equations that you can modify by using the Live Editor. Generate a live function for use outside the app. The function returns the estimated model SARIMA_PSSGLog given DataTimeTable. 1. In the Models pane of the app, select the SARIMA_PSSGLog model. 2. On the Econometric Modeler tab, in the Export section, click Export > Generate Live Function. The Live Editor opens and contains a function named modelTimeSeries. The function accepts DataTimeTable (the variable you imported in this session), transforms data, and returns the estimated SARIMA(0,1,1)×(0,1,1)[12] model SARIMA_PSSGLog. 3. To ensure the function does not shadow the M-file function, change the name of the function to modelTimeSeriesMLX. 4. On the Live Editor tab, in the File section, click Save > Save. 5. Save the function to your current folder by clicking Save in the Select File for Save As dialog box. At the command line, estimate the SARIMA(0,1,1)×(0,1,1)[12] model by passing DataTimeTable to modelTimeSeriesMLX. Name the model SARIMA_PSSGLog2. Compare the estimated model to SARIMA_PSSGLog. SARIMA_PSSGLog2 = modelTimeSeriesMLX(DataTimeTable); ARIMA(0,1,1) Model Seasonally Integrated with Seasonal MA(12) (Gaussian Distribution) Effective Sample Size: 144 Number of Estimated Parameters: 3 LogLikelihood: 276.198 AIC: -546.397 BIC: -537.488 Value StandardError TStatistic PValue _________ _____________ __________ __________ Constant 0 0 NaN MA{1} -0.37716 0.066794 -5.6466 1.6364e-08 SMA{12} -0.57238 0.085439 -6.6992 2.0952e-11 Variance 0.0012634 0.00012395 10.193 2.1406e-24 ARIMA(0,1,1) Model Seasonally Integrated with Seasonal MA(12) (Gaussian Distribution) Effective Sample Size: 144 Number of Estimated Parameters: 3 LogLikelihood: 276.198 AIC: -546.397 BIC: -537.488 Value StandardError TStatistic PValue _________ _____________ __________ __________ Constant 0 0 NaN NaN MA{1} -0.37716 0.066794 -5.6466 1.6364e-08 SMA{12} -0.57238 0.085439 -6.6992 2.0952e-11 Variance 0.0012634 0.00012395 10.193 2.1406e-24 As expected, the models are identical. Generate Report Generate a PDF report of all your actions on the PSSGLog and PSSGStable time series, and the SARIMA_PSSGLog model. 1. On the Econometric Modeler tab, in the Export section, click Export > Generate Report. 2. In the Select Variables for Report dialog box, select the Select check boxes for the PSSGLog and PSSGStable time series, and the SARIMA_PSSGLog model (if necessary). The app automatically selects the check boxes for all variables that are highlighted in the Time Series and Models panes. 3. Click OK. 4. In the Select File to Write dialog box, navigate to the C:\MyData folder. 5. In the File name box, type SARIMAReport. 6. Click Save. The app publishes the code required to create PSSGLog, PSSGStable, and SARIMA_PSSGLog in the PDF C:\MyData\SARIMAReport.pdf. The report includes: • A title page and table of contents • Plots that include the selected time series • Descriptions of transformations applied to the selected time series • Results of statistical tests conducted on the selected time series • Estimation summaries of the selected models [1] Box, George E. P., Gwilym M. Jenkins, and Gregory C. Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994. See Also Related Topics
{"url":"https://kr.mathworks.com/help/econ/share-results-of-econometric-modeler-session.html","timestamp":"2024-11-03T22:24:54Z","content_type":"text/html","content_length":"105810","record_id":"<urn:uuid:f623f024-bd4b-4058-8ab4-eb7ad23fc7eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00073.warc.gz"}
zk-SNARK: Succinct Non-interactive ARgument of Knowledge • zkSNARK rollup for privacy (although rollup typically does not require zk, only SNARK) • confidential transactions (ZCash, Tornado Cash, IronFish) • private dapps code (Aleo) • prove solvency in transaction (e.g. prove that I have certain about of property) • prove transaction compliance to regulation (e.g. prove transaction is under 1000 dollars) • zk taxes Study Resources and Current State Study Resource Linear Algebra: Strang's Book, 3B1B Abstract Algebra: A first course in abstract algebra by Fraleigh and Algebra chapter 0 by Aluffi. Fourrier transform: 3B1B Cryptography: Introduction to Mathematical Cryptography by Hoffstei, then Dan Boneh’s courses in Coursera ZK: ZK Whiteboard sessions, then Plonk implementations, gnark and arkworks codebase too. If you already a pro, you might start with proving BrainFuck Study Resources: please read them in order To understand math and this paper: - If you like courses: you need Crypto I CS255 - Coursera prereques and maybe Stanford CS355 and Number Theory. - Modern Zero Knowledge Cryptography - 4 week course on ZK - Berkeley Course on ZK and recording - Or Graduate Course on Applied Cryptography - You can also use ZCash Documentation or this repository - Or even a course by ZK-DAO (login as To learn Halo2 Circuit Proving System: Zero Knowledge Proof — A Guide to Halo2 Source Code To learn Plonk2 Circuit Proving System: To learn Noir Language: To learn Circom: To learn RISC Zero zkVM: Documentation and Examples To solve puzzles and learn arkworks: ZKHack Resource To learn quantization Current State Firstly, this repository summarizes projects related to ZKML. Secondly, this repository is everything about ZK. (Not necessarily about ML.) Other ZK ML Projects: Other ZK Projects: • RISC Zero zkVM • zkML ads: don't give your data to Google/Youtube recommending system while enjoy things they recommended based on your data by zk Yuki. Trusted Execution Environment (TEE). Provide proofs alongside recommendations and/or run the recommendations directly on-chain by Nazih Kalo • zkEmail by Ayush, proof of me received an email containing partial hidden information. Other Encryption Projects: • Fully Homomorphic Encryption (FHE) is a type of encryption that allows computations to be performed on encrypted data without the need to decrypt the data first. And both will produce the same result. But you don’t get computational integrity/verifiability with FHE afaik like you do with SNARKs/STARKs. Concrete-ML is a library for FHE ML, and they are working on verifiability. But since both zk and FHE have a 1000x slowdown factor, the combination will be very slow. (FHE gives you privacy, ZK gives your verifiability. Combine both could replace SGX) □ reduce ML models to Numpy circuits □ transpile Numpy circuits to FHE circuits and auto-optimize all crypto params □ compile FHE circuits to whatever hardware accelerator you want to use for best performance □ they use MLIR framework • Software Guard Extensions (SGX) is a set of CPU instructions and hardware enhancements by Intel. It protect specific memory region when executing code on sensitive data so that data can't be accessed by outside (other programs or even OS) • Ocean Protocol: allow publishing, selling, and training on encrypted data on chain. The Foundations Arithmetic Circuits Arithmetic circuits: a function $C : \mathbb{F}^n \to \mathbb{F}$ • fix a finite field $\mathbb{F} = \{0, ..., p-1\}$ for some prime $p > 2$ • fix a set of modular operation on $F$ • $|C|$ denotes the number of gates in the circuit You can think of arithmetic circuits as a general computer circuit with one output that has integer range $[0, p-1]$ for some prime. (since practical computers are intrinsically modular due to boolean-SAT: a problem asking given boolean circuit $C(...)$, whether there exists a set of $x_i$ boolean input, such that $C(x_1, x_2, ..., x_n) = 0$. Note that boolean-SAT is NP-complete. But if I an the prover who obtain a set of witness $x_i$ that satisfies $C$, I wish that I am able to convince the verifier that $C$ is satisfiable without giving out my $x_i$ witness. All polynomial-time computation can be captured by Poly-Arithmetic Circuits • must be deterministic • must be pure Argument System Argument System: • given statement $x \in \mathbb{F}^n$ • given witness $w \in \mathbb{F}^m$ • prover knows $x, w$ • verifier knows $x$ • prover needs to convince verifier that $(\exists w)(C(x, w) = 0)$, basically the circuit $C(x, \cdot)$ is satisfiable. Without ZK, the prover need only simply send $w$ to the verifier to convince the verifier that $C$ is satisfiable. Interactive Argument System: prover and verifier can have many rounds of interaction to raise the confidence of the verifier. Non-interactive Argument System: prover send only one thing once to verifier. Non-interactive Argument System is needed with many verifiers. Example: rollup server would be crowded with challenges from nodes all over the world if using interactive argument system. Properties of zk-SNARK Completeness: if a prover has correct $w$ that satisfies $C$, then verifier will be convinced. • $C$: modular SAT circuit, publically avaliable • $x$: argument • $w$: satisfying string, the witness • $\pi$: proof, either real or faked (\forall x, w)(C(x, w) = 0 \implies Pr\{V(x, P(x, w)) = \text{ accept}\} = 1) Argument of Knowledge: • $V \text{ accept} \implies P \text{ "knows" } w$ • $P \text{ "not know" } w \implies Pr\{V(x, \pi) = \text{ accept}\} < \epsilon$ Prover Knows: means $w$ can be "extracted" from the prover. Formally, if there exist a forger who is able to convince the verifier, then there exist an algorithm called "extractor" who can use the forger as a blackbox to get $w$. So the forger isn't really a forger. Formal Definition of Knowing Above definition assumes computational bounded adversary, so it is called "argument of knowledge", not a "proof of knowledge". Zero Knowledge: $(x, \pi)$ "reveals nothing" about $w$. Learn Nothing: there exist an efficient algorithm "simulator" that generates a proof $\pi$ given $C, S_p, S_v$ (without using $w$) such that the simulated $\pi$ is in the distribution of real proof $\pi$. Formal Definition of Learning Nothing Note that the simulator can choose $S_p, S_v$ freely but a real prover is given $S_p$ from initialization. Succinct: with a constant security parameter $\lambda$ • the length of proof is short: $\pi$ is in $O(\log|C| + \lambda)$ • verifier is fast: $V(x, \pi)$ run in $O(|x| + \log|C| + \lambda)$ // TODO: some source says that \pi need to be in log(|w|), which I don't know if it is correct. I don't have intuition on relation between circuit size and the number of input digits. If you observe "Succinct" requirement, then you may discover that $\log|C|$ is not achievable since verifier at least need to read $C$ in $O(|C|)$ for correctness reason. But the verifier is allowed to pre-process $C$ and summarize $C$ to $S_v$ that has length $O(\log|C|)$. So the verifier can look like: $V(S_v, x, \pi)$. A trivial argument system: just sending $w$ will break • zero knowledge • succinct length • succinct computation Pre-Processing and Initialization In order to satisfy succinct requirement, preprocessing technique is needed to summarize the circuit $C$ to the size of $\log|C|$. There are a few techniques on how to do this: in general we the preprocess algorithm $S$ takes in a circuit, and generate tuple $(S_p, S_v)$ a summary for the prover and verifier. Trusted Setup Per-circuit: 1. network admin choose a random secret string $\lambda$ for every $C$ 2. admin run $S(\lambda, C)$ to produce $(S_p, S_v)$ 3. admin destroy $\lambda$. Non-admin believe in faith that admin destroyed $\lambda$ and act non-maliciously. If $\lambda$ leaked, anybody can prove false statement. So classically, when layer2 rollup chains starts, founders will gather people to run setups. Each person run their own algorithm and hope at least one of them throw away the secret. Trusted but Updatable: 1. network admin choose a random secret string $\lambda$ for every $C$ 2. admin run $S_{init}(\lambda)$ to produce $U$ 3. admin destroy $\lambda$. Non-admin believe in faith that admin destroyed $\lambda$ and act non-maliciously. If $\lambda$ leaked, anybody can prove false statement. 4. Everybody can run $S_{pre}(U, C)$ to produce $(S_p, S_v)$ Here is Vitalik's Blog on How Trusted Setup Worked. More zk-SNARK circuits and constraints require longer setup time. ZCash setup took about 24 hours on a fast machine, and requires a 97G download and 49G upload. Rollup requires more than 260 million constraints, so it must compute $2^{28}$ powers of tau. Transparent: much more costly than above two methods, but does not rely on trusted admin and requires no secret data. Types of SNARKs Types of SNARKs (partial list) Types of SNARKs (partial list) with exact time Kilian'92, Micali'94: succinct transparent arguments for PCP, but with impractical prover time GGPR'13, Groth'16: quasi-linear prover time, constant size proof, trusted, per-circuit setup (most application use Groth'16) Sonic'19, Marlin'19, Plonk'19: universal updatable DARK'19, Halo'19, STARK, ...: no trusted setup (transparent) • STARK: proof against quantum computer, widely used in commercial Typically, people use Groth'16: • size of $\pi$ is around 200 bytes • $S_p$ is quiet large, but $S_v$ is small (good for blockchain miners) • verification time: 3ms • but require trusted setup • Plonk/Marlin's method doubled the size of $\pi$ and verification time, but enable updatable/universal setup SNARKs Software System SNARKs Software System: each green arrow is an algorithm. The computational intense part is to produce the proof. and plus Noir, CirC(build your own DSL) for high-level language. and plus PIL for low-level language. Example: We want to show that $SHA256(w) = x$ We first write in higher language ZoKrates: // domain-specific language def main(field x[2], private field w) -> (field): h = sha256packed(w) h[0] == x[0] // check first 128 bits h[1] == x[1] // check last 128 bits return 1 Then we compile above to R1CS program (arithmetic circuit representation) over $\mathbb{F}$ Whiteboard Sessions by Dan Boneh Promises of ZK Asked by Justin: The person asking for the result doesn't want to disclose their input (e.g. their personal health information) and the model provider does not want to disclose their model weights (e.g. if their model is proprietary). This is possible. Reply by Jason: Yes, you could combine MPC+ZK to do that. If part of the model can be public there might be a way to play some games with recursion / composition, but it would probably leak Reply by Elisey: i guess it can be combined in the following way. let’s say there are two models: first model is run in users device. second on behalf of provider. one should be really careful when designing such system to avoid leaking information. two main vectors of leakage: • straight up reversible way to obfuscate the data • big enough output vector ( so people can not be rainbow table all the possible inputs) Current Issues of ZKML Ryan | Modulus Labs: In terms of verifying e.g. that a model is robust to extraction attacks, data poisoning, adversarial attacks, etc., it's a totally different story (and folks have posited that this may not even be possible when working in such high-dimensional feature spaces) -- I'm unfortunately behind on SoTA AI literature for this subfield in particular, but folks are certainly increasingly interested in robustness and explainability as a research topic. "It's possible to extract model weights just by querying the model on certain inputs, so even if you have an ML black box it's not going to be private in many respects, especially not towards the I think that it's a very interesting question to apply a lot of the already existing work on "model compression" and "knowledge distillation" so that ML models can produce a witness that is more closely compatible with provers than simply naively compiling a NN to a circuit Non-linear layers like Softmax are harder to implement in proofs. Hey, right now the state of art in ZKML is in its very early stages, nothing that you can just plug an play for even mid sized models, let alone big models. Remaining Questions // QUESTION: what is Recursive zkSNARKs? (https://github.com/lyronctk/zator Proving the execution of arbitrarily deep neural networks with recursive SNARKs.) // QUESTION: I don't quiet understand this conversation: https://t.me/zkmlcommunity/812 and https://t.me/zkmlcommunity/825 // QUESTION: Read this paper: https://eprint.iacr.org/2022/1508 "Indeed, the case that we consider — where Alice’s program is large — is extremely well motivated: the program P could be an ML model with billions of painstakingly learned parameters. // TODO: read this when you have sufficient knowledge: https://polybase.xyz/blog/polybase-zk-database-polygon-miden // TODO: not sure what these two guys are discussing: https://t.me/zkmlcommunity/1314 // TODO: watch the following 1. OpenZL: Middleware and Open Standards for the Next Generation of zkApps, Brandon Gomes, Manta Network 2. Ring VRFs from zero knowledge continuations, Jeff Burdges, Web3 Foundation 3. A Zero-Knowledge circuit for the Lurk language, Eduardo Morais, Protocol Labs 4. Arkworks: A Rust Ecosystem for zkSNARKs, Pratush Mishra, Aleo/UPenn 5. Hardware acceleration of ZKP, Bowen Huang, Cysic 6. Tutorial: Poseidon in ECLAIR, Todd Norton, Manta Network 7. Considering Plonky2, Sebastien La Duca 8. Multi-level IR and its utility in ZK, Brian Retford, RISC0 9. On Interoperability of Crypto Compute Environments, Wei Dai 10. CycloneNTT: Improving Twiddle Access for Number Theoretic Transforms, Rahul Maganti, Jump Crypto https://www.crowdcast.io/c/manta-openzl(or https://www.crowdcast.io/c/manta-openzl/kkomP) // QUESTION: I don't know about this paper: https://eprint.iacr.org/2022/957 as part of this website: https://www.ulvetanna.io/news/introducing-ulvetanna // QUESTION: I don't understand SNARKs utilization in BFT to reduce complexitiy: https://arxiv.org/pdf/2205.11652v2.pdf // QUESTION: Halo2 FRI Gadget // QUESTION: Don't understand this image
{"url":"https://kokecacao.me/page/Post/zk-SNARK.md","timestamp":"2024-11-12T17:03:04Z","content_type":"text/html","content_length":"41080","record_id":"<urn:uuid:62ee707b-d29a-4dd1-b267-267eb2bfe1d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00821.warc.gz"}
Algorithmic Advances In Riemannian Geometry And Applications : For Machine Learning, Computer Vision, Statistics, And Optimization [PDF] [6o5oba831ei0] This book presents a selection of the most recent algorithmic advances in Riemannian geometry in the context of machine learning, statistics, optimization, computer vision, and related fields.<span class='showMoreLessContentElement' style='display: none;'> <span class='showMoreLessControlElement'>�<span class="control-link">Read more... Advances in Computer Vision and Pattern Recognition Hà Quang Minh Vittorio Murino Editors Algorithmic Advances in Riemannian Geometry and Applications For Machine Learning, Computer Vision, Statistics, and Optimization Advances in Computer Vision and Pattern Recognition Founding editor Sameer Singh, Rail Vision, Castle Donington, UK Series editor Sing Bing Kang, Microsoft Research, Redmond, WA, USA Advisory Board Horst Bischof, Graz University of Technology, Austria Richard Bowden, University of Surrey, Guildford, UK Sven Dickinson, University of Toronto, ON, Canada Jiaya Jia, The Chinese University of Hong Kong, Hong Kong Kyoung Mu Lee, Seoul National University, South Korea Yoichi Sato, The University of Tokyo, Japan Bernt Schiele, Max Planck Institute for Computer Science, Saarbrücken, Germany Stan Sclaroff, Boston University, MA, USA More information about this series at http://www.springer.com/series/4205 Hà Quang Minh Vittorio Murino • Algorithmic Advances in Riemannian Geometry and Applications For Machine Learning, Computer Vision, Statistics, and Optimization Editors Hà Quang Minh Pattern Analysis and Computer Vision Istituto Italiano di Tecnologia Genoa Italy Vittorio Murino Pattern Analysis and Computer Vision Istituto Italiano di Tecnologia Genoa Italy ISSN 2191-6586 ISSN 2191-6594 (electronic) Advances in Computer Vision and Pattern Recognition ISBN 978-3-319-45025-4 ISBN 978-3-319-45026-1 (eBook) DOI 10.1007/978-3-319-45026-1 Library of Congress Control Number: 2016948260 © Springer International Publishing Switzerland 2016 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. Printed on acid-free paper This Springer imprint is published by Springer Nature The registered company is Springer International Publishing AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Overview and Goals The theme of this volume is the application of the rich and powerful theories and techniques of Riemannian geometry to the problems in machine learning, statistics, optimization, computer vision, and related fields. Traditional machine learning and data analysis methods often assume that the input data can be represented by vectors in Euclidean space. While this assumption has worked well for many applications, researchers have increasingly realized that if the data is intrinsically non-Euclidean, ignoring this geometrical structure can lead to suboptimal results. In the existing literature, there are two common approaches for exploiting data geometry when the data is assumed to lie on a Riemannian manifold. In the first direction, often referred to as manifold learning, the data is assumed to lie on an unknown Riemannian manifold and the structure of this manifold is exploited through the training data, either labeled or unlabeled. Examples of manifold learning techniques include manifold regularization via the graph Laplacian, locally linear embedding, and isometric mapping. In the second direction, which is gaining increasing importance and success, the Riemannian manifold representing the input data is assumed to be known explicitly. Some manifolds that have been widely used for data representation are the manifold of symmetric, positive definite matrices, the Grassmannian manifold of subspaces of a vector space, and the Kendall manifold of shapes. When the manifold is known, the full power of the mathematical theory of Riemannian geometry can be exploited in both the formulation of algorithms as well as their theoretical analysis. Successful applications of this approach are numerous and range from brain imaging, kernel learning, and low rank matrix completion, to computer vision tasks such as object detection and tracking. This volume focuses on the latter research direction. The forthcoming chapters were written by researchers currently active in the fields. Overall, the book describes some of the latest algorithmic advances using Riemannian geometry, both theoretically and computationally, with potentially large impact on many research areas in these fields. The volume targets a broad audience, consisting of Ph.D. students and researchers in machine learning, statistics, optimization, computer vision, and related fields. Acknowledgments We wish to thank all the authors for contributing some of their latest works to the volume. We also wish to thank Pierre-Antoine Absil, Gregory Chirikjian, Mark Girolami, Pavan Turaga, Bart Vandereycken, and Baba Vemuri, for their help in reviewing the manuscript. Finally, we wish to thank Simon Rees and the Springer editing team for helping us bring this volume to fruition. Genoa, Italy June 2016 Hà Quang Minh Vittorio Murino 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miaomiao Zhang and P. Thomas Fletcher 2 Sampling Constrained Probability Distributions Using Spherical Augmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shiwei Lan and Babak Shahbaba 3 Geometric Optimization in Machine Learning . . . . . . . . . . . . . . . . . . Suvrit Sra and Reshad Hosseini 4 Positive Definite Matrices: Data Representation and Applications to Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anoop Cherian and Suvrit Sra 5 From Covariance Matrices to Covariance Operators: Data Representation from Finite to Infinite-Dimensional Settings . . . . . . . 115 Hà Quang Minh and Vittorio Murino 6 Dictionary Learning on Grassmann Manifolds . . . . . . . . . . . . . . . . . 145 Mehrtash Harandi, Richard Hartley, Mathieu Salzmann and Jochen Trumpf 7 Regression on Lie Groups and Its Application to Affine Motion Tracking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 Fatih Porikli 8 An Elastic Riemannian Framework for Shape Analysis of Curves and Tree-Like Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 Adam Duncan, Zhengwu Zhang and Anuj Srivastava Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 Anoop Cherian ARC Centre of Excellence for Robotic Vision, Australian National University, Canberra, Australia Adam Duncan Department of Statistics, Florida State University, Tallahassee, FL, USA P. Thomas Fletcher University of Utah, Salt Lake City, UT, USA Mehrtash Harandi College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia Richard Hartley College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia Reshad Hosseini School of ECE, College of Engineering, University of Tehran, Tehran, Iran Shiwei Lan Department of Statistics, University of Warwick, Coventry, UK Hà Quang Minh Pattern Analysis and Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Genoa, Italy Vittorio Murino Pattern Analysis and Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Genoa, Italy Fatih Porikli Australian National Univeristy, Canberra, Australia; Data61/CSIRO, Eveleigh, Australia Mathieu Salzmann CVLab, EPFL, Lausanne, Switzerland Babak Shahbaba Department of Statistics and Department of Computer Science, University of California, Irvine, CA, USA Suvrit Sra Laboratory for Information & Decision Systems Massachusetts Institute of Technology, Cambridge, MA, USA Anuj Srivastava Department of Statistics, Florida State University, Tallahassee, FL, USA Jochen Trumpf College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia Miaomiao Zhang Massachusetts Institute of Technology, Cambridge, MA, USA Zhengwu Zhang SAMSI, Research Triangle Park, Durham, NC, USA Themes of the Volume The aim of this book is to present some of the most recent algorithmic advances in Riemannian geometry in the context of machine learning, statistics, optimization, computer vision, and related fields. The unifying theme of the different chapters in the book is the exploitation of the geometry of data using the mathematical machinery of Riemannian geometry. As demonstrated by all the subsequent chapters, when the data is intrinsically non-Euclidean, the utilization of this geometrical information can lead to better algorithms that can capture more accurately the structures inherent in the data, leading ultimately to better empirical performance. This book is not intended to be an encyclopedic compilation of the applications of Riemannian geometry. Instead, it focuses on several important research directions that are currently actively pursued by researchers in the field. These include statistical modeling and analysis on manifolds, optimization on manifolds, Riemannian manifolds and kernel methods, and dictionary learning and sparse coding on manifolds. We now describe these topics in more detail, noting that a particular chapter in the book may cover more than one of them. 1. Statistical modeling and analysis on manifolds. This research direction seeks to develop theories and techniques for statistical modeling and analysis on Riemannian manifolds by utilizing the intrinsic geometry of the underlying manifolds. This theme is covered by several chapters in the book. First, it is considered in the chapter by Lan and Shahbaba, which develops Hamiltonian Monte Carlo on the sphere, using spherical geometry, for sampling constrained probability distributions. Second, it is covered in the chapter by Zhang and Fletcher, which develops Bayesian models for shape analysis, with shape variability being represented as random variables on the manifold of diffeomorphic transformations. Statistical shape analysis is also the theme of the chapter by Duncan et al., which presents a Riemannian framework for curves in Euclidean and Hilbert spaces and for tree-like structures. Finally, the chapter by xi Porikli treats regression on the matrix Lie group of affine transformations, with applications in computer vision. 2. Optimization on manifolds. This research direction is concerned with the generalization of the theories and algorithms for optimization in Euclidean space to the manifold setting. This is the theme of the chapter by Sra and Hosseini, which deals with optimization on the manifold of symmetric, positive definite (SPD) matrices by exploiting its geometric structure. From an application perspective, this theme is considered in the chapter by Cherian and Sra for the problem of Riemannian Dictionary Learning and Sparse Coding. 3. Riemannian manifolds and kernel methods. Kernel methods are among the most powerful paradigms in machine learning and its applications. However, most of the kernels employed in the literature are defined on Euclidean space and applying them directly to data that lies on a nonlinear manifold generally leads to suboptimal results. In order to capture the manifold structure in the data, it is necessary to define kernels on the manifold itself, using a notion of distance that captures its intrinsic geometry. This theme is pursued in the chapter by Harandi et al, which considers kernels defined on Grassmann manifolds, and the chapter by Minh and Murino, which discusses kernels defined on the manifold of SPD matrices. Moreover, the interplay between kernels and Riemannian manifolds can also go in the direction of kernels giving rise to manifolds. In the chapter by Minh and Murino, it is shown how a positive definite kernel gives rise to covariance operators, which lie on the infinite-dimensional manifold of positive definite operators, on which another kernel can be defined, leading to a two-layer kernel machine. 4. Dictionary learning and sparse coding on manifolds. This research direction seeks to generalize the algorithms for dictionary learning and sparse coding on Euclidean space to the Riemannian manifold setting by utilizing the intrinsic geometry of the underlying manifold. This is the theme of the chapter by Cherian and Sra, which considers dictionary learning and sparse coding on the manifold of SPD matrices, and the chapter by Harandi et al, which considers this problem on the Grassmann manifolds. Organization of the Volume We now give a summary of the chapters in the book, in order of appearance. The chapter by Zhang and Fletcher is titled Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms. In this chapter, the authors present two recently introduced Bayesian models for the statistical analysis of anatomical shapes through diffeomorphic transformations on the image domain. The first model, namely Bayesian diffeomorphic atlas building, is a probabilistic formulation for computing an atlas, or template image, that is most representative of a set of input images. In this model, the distance between images is measured via an energy function, whose regularization term is defined using a Riemannian metric on the manifold of diffeomorphisms. The model parameters are estimated via Monte Carlo expectation maximization (EM) algorithm, with the E step carried out via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. The mathematical formulation is accompanied by numerical examples involving atlas building for 3D images. The second model, namely Bayesian principal geodesic analysis (BPGA), generalizes the Bayesian formulation of principal component analysis (PCA) to the manifold of diffeomorphisms. Using experimental results on the task of reconstructing 3D brain MRI, the authors demonstrate that BPGA results in a much more compact representation compared with both linear PCA and tangent PCA. The chapter by Lan and Shahbaba is titled Sampling Constrained Probability Distributions using Spherical Augmentation. In this chapter, the authors present their recently introduced approach, namely spherical augmentation, for sampling from constrained probability distributions. In this approach, the constrained domain, defined by norm or functional constraints, is mapped to a sphere in an augmented space. Sampling algorithms then generate new proposals on the sphere, using the spherical metric, which satisfy the required constraints when mapped back to the original space. The authors use this approach to obtain several novel Monte Carlo sampling algorithms, namely spherical Hamiltonian Monte Carlo and spherical Lagrangian Monte Carlo. The mathematical formulation is accompanied by many numerical examples, including Bayesian Lasso, Bayesian bridge regression, and latent Dirichlet allocation (LDA) for topic modeling, tested in particular on the Wikipedia corpus, among others. The chapter by Sra and Hosseini is titled Geometric Optimization in Machine Learning. In this chapter, the authors report some of the most recent algorithmic developments in solving optimization problems on the manifold of positive definite matrices. Two key mathematical concepts involved are geodesic convexity, which is the generalization of Euclidean convexity to the Riemannian manifold setting, and Thompson nonexpansivity, for a class of nonconvex functions that are not necessarily geodesically convex. Together, these concepts enable the global optimization of many functions that are nonconvex in the Euclidean sense. In particular, the authors exploit geodesic convexity in the problem of fitting Gaussian mixture models (GMMs), leading to an algorithm with substantial improvement in performance compared to the classical expectation minimization (EM) algorithm. The chapter by Cherian and Sra is titled Positive Definite Matrices: Data Representation and Applications to Computer Vision. In this chapter, the authors consider positive definite matrices in the form of covariance descriptors, a powerful data representation paradigm in computer vision. In particular, the authors present their recent approach on Riemannian dictionary learning and sparse coding on the manifold of positive definite matrices, using the affine-invariant Riemannian metric and the Riemannian conjugate gradient algorithm. Using experimental results involving face recognition, person re-identification, and 3D object recognition, the authors demonstrate that the Riemannian approach performs substantially better than its Euclidean counterpart. The chapter by Minh and Murino is titled From Covariance Matrices to Covariance Operators: Data Representation from Finite to Infinite-Dimensional Settings. In this chapter, the authors report on the recent generalization of the covariance matrix representation for images to the infinite-dimensional setting, using covariance operators in reproducing kernel Hilbert spaces (RKHS). In particular, the authors describe the generalizations of the affine-invariant Riemannian and Log-Euclidean distances between positive definite matrices to the infinite-dimensional manifold of positive definite operators. In the case of RKHS covariance operators, these distances admit closed form expressions via the Gram matrices corresponding to the reproducing kernels. The mathematical formulation is accompanied by numerical examples in image classification, demonstrating that the infinite-dimensional framework substantially outperforms its finite-dimensional counterpart. The chapter by Porikli is titled Regression on Lie Groups and Its Application to Affine Motion Tracking. In this chapter, the author treats regression on matrix Lie groups, which are used in most of the transformations in computer vision, such as affine motions and rotations. The proposed formulation goes beyond the typical Euclidean approximation in the literature by providing a solution consistent with the underlying topology. The mathematical formulation is accompanied by numerical examples in a fundamental computer vision task, namely affine motion tracking. The chapter by Harandi, Hartley, Salzmann, and Trumpf, is titled Dictionary Learning on Grassmann Manifolds. In this chapter, the authors present their recent work on dictionary learning and sparse coding on the Grassmann manifolds of subspaces of Euclidean space. In particular, the authors propose to embed Grassmann manifolds into reproducing kernel Hilbert spaces (RKHS) by defining a family of positive definite kernels on these manifolds. Thus, the problems of dictionary learning and sparse coding on the Grassmann manifolds are transformed into the corresponding ones in kernel spaces, which can be solved efficiently. The mathematical formulation is accompanied by numerical examples in action recognition in videos. The chapter by Duncan, Zhang, and Srivastava is titled An Elastic Riemannian Framework for Shape Analysis of Curves and Tree-like Structures. In this chapter, the authors present a Riemannian framework for shape analysis that is invariant under the action of re-parametrization for three different types of objects. The chapter begins with the elastic Riemannian metric framework for shape analysis of Euclidean curves using square-root velocity functions. This framework is then extended to trajectories in Hilbert spaces and tree-like structures, which are treated as composite trajectories in Hilbert spaces. The mathematical formulation is accompanied by numerical examples involving planar shapes and neuron morphology, using digital 3D neuronal reconstructions. Hà Quang Minh Vittorio Murino Chapter 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms Miaomiao Zhang and P. Thomas Fletcher Abstract In this chapter, we present Bayesian models for diffeomorphic shape variability in populations of images. The first model is a probabilistic formulation of the image atlas construction problem, which seeks to compute an atlas image most representative of a set of input images. The second model adds diffeomorphic modes of shape variation, or principal geodesics. Both of these models represent shape variability as random variables on the manifold of diffeomorphic transformations. We define a Gaussian prior distribution for diffeomorphic transformations using the inner product in the tangent space to the diffeomorphism group. We develop a Monte Carlo Expectation Maximization (MCEM) algorithm for the Bayesian inference, due to the lack of closed-form solutions, where the expectation step is approximated via Hamiltonian Monte Carlo (HMC) sampling of diffeomorphisms. The resulting inference produces estimates of the image atlas, principal geodesic modes of variation, and model parameters. We show that the advantage of the Bayesian formulation is that it provides a principled way to estimate both the regularization parameter of the diffeomorphic transformations and the intrinsic dimensionality of the input data. 1.1 Introduction The key philosophy in computational anatomy is to quantify anatomical shapes in images through diffeomorphic transformations of the image domain. These diffeomorphic transformations are smooth, bijective mappings with smooth inverses, which guarantee topological consistency between images, e.g., no folding or tearing. An elegant mathematical formulation for estimating diffeomorphism transformations between images is Large Deformation Diffeomorphic Metric Mapping (LDDMM), M. Zhang (B) Massachusetts Institute of Technology, Cambridge, MA, USA e-mail: [email protected] P.T. Fletcher University of Utah, Salt Lake City, UT, USA e-mail: [email protected] © Springer International Publishing Switzerland 2016 H.Q. Minh and V. Murino (eds.), Algorithmic Advances in Riemannian Geometry and Applications, Advances in Computer Vision and Pattern Recognition, DOI 10.1007/978-3-319-45026-1_1 M. Zhang and P.T. Fletcher proposed by Beg et al. [6]. In this setting, the space of diffeomorphisms of an image domain forms an infinite-dimensional Lie group equipped with a right-invariant Riemannian metric. This gives rise to a variational principle that expresses the optimal registration between two images as a geodesic flow. Computing a representative image for a population, or atlas, is an important first step in shape analysis, with the goal of finding a common coordinate system for comparisons across individuals. Principal modes of variation, represented in the tangent space of the diffeomorphism group, provide second-order statistics of shape variability. Many diffeomorphic shape models of images are set up as optimization problems, rather than as statistical inference problems. These approaches (1) perform poorly in the presence of noises or outliers, (2) have free parameters that are difficult and time consuming to select, even for experienced users, and (3) do not provide probabilistic conclusions about the data. To address these problems, this chapter presents a Bayesian framework of statistical shape analysis including diffeomorphic atlas building and principal geodesic analysis of diffeomorphisms. Previous methods often formulate diffeomorphic atlas building as a maximum a posterior (MAP) optimization problem [15, 16, 29] with an image matching likelihood of the atlas and each input image, as well as a prior arising from a metric on an infinite-dimensional manifold of diffeomorphisms that encourages smooth deformations. The regularity of smoothness is typically controlled by parameters describing the metric on the tangent space of the diffeomorphism group. However, Allassonniére et al. [1] pointed out that the common mode approximation scheme performs poorly under image noise, even for a simple 1D template estimation problem where the transformations are discrete shifts. Besides this, the regularity parameters were specified in an ad hoc manner rather than being estimated directly from data, due to the fact that the log posterior of the metric parameters does not have a closed form and is computationally problematic to solve using direct optimization. To overcome these disadvantages, several works [14, 20, 21, 28] have then developed Bayesian formulations of the atlas estimation problem in a small deformation setting. They estimated the atlas by marginalizing over the posterior distribution of image transformations using a Monte Carlo sampling procedure. Furthermore inference of the level of regularization in nonrigid registration by a Bayesian model were developed in [3, 23, 24]. Image atlas is a mean point estimation and it does not encode the group shape variability. Extracting an efficient low-dimensional, second-order statistics from highdimensional diffeomorphisms is critical to improve the power and interpretability of further statical analysis. A standard way to analyze manifold data variability is principal geodesic analysis (PGA) [11, 12], which generalizes principal component analysis (PCA) in Euclidean space to nonlinear manifolds. PGA estimates lower dimensional geodesic subspaces by minimizing the sum-of-squared geodesic distance to the data. Based on this work, exact solutions to PGA were developed in [22, 26]. To give a probabilistic interpretation of these geodesic analysis, Zhang and Fletcher [33] developed a latent variable model for PGA that provides a probabilistic framework for factor analysis on finite-dimensional manifolds called PPGA. Previous methods [13, 19, 27] performed the dimensionality reduction after the fact, 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms i.e., as a PCA of diffeomorphisms in the tangent space as a second stage after the estimation step. All these models have two major disadvantages. First, they do not explicitly optimize the fit of the principal modes to the data intrinsically in the space of diffeomorphisms, which results in a suboptimal fit to the data. Second, the number of dimensions to use is selected manually rather than inferred directly from the data. A related Bayesian model of PCA (BPCA) [7] for Euclidean data was proposed to automatically learn the dimension of the latent space from data by including a sparsity-inducing prior on each component of the factor matrix. This linear factor analysis model, however, is not applicable to nonlinear diffeomorphic transformations. The main difficulty of generalizing the model definition of BPCA to generic manifolds is the lack of an explicit formulation for the normalizing constant of distributions on manifolds. The following sections present an integrated framework for two recently introduced statistical shape models, Bayesian diffeomorphic atlas building [36] and Bayesian principal geodesic analysis (BPGA) of diffeomorphisms [34, 35]. We first introduce a posterior distribution of diffeomorphisms, followed by methods for MCMC sampling of diffeomorphisms, as well as parameter estimation via marginalization of the diffeomorphic transformations. There has been some work on stochastic flows of diffeomorphisms [8], which are Brownian motions, i.e., small perturbations integrated along a time-dependent flow. A similar idea of using Gaussian Process prior on the stochastic velocity field to generate random transformations is developed in [31]. This chapter focuses on a different prior distribution on the tangent space of initial velocity fields rather than on the entire time-dependent flow, which leads to random geodesics in the space of diffeomorphisms. Other prior distributions can also be adapted to this Bayesian framework such as control points parameterization of diffeomorphisms [2]. We next describe the BPGA model, which treats the dimensionality reduction step as a probabilistic inference problem on discrete images, and an inference procedure that jointly estimates the image atlas and principal geodesic modes of variation. This model goes beyond the PPGA algorithm by introducing automatic dimensionality reduction and extending from finite-dimensional manifolds to the infinite-dimensional case of diffeomorphic image registration. 1.2 Mathematical Background In this section, we briefly review the basic concepts of diffeomorphisms and their applications in image analysis. 1.2.1 Space of Diffeomorphisms Consider a d-dimensional torus Ω = Rd /Zd as the domain in which we define images. A diffeomorphism is a bijective smooth invertible mapping φ : Ω → Ω M. Zhang and P.T. Fletcher Fig. 1.1 Space of diffeomorphisms with its smooth inverse φ −1 . We denote by Diff(Ω) the space of diffeomorphisms whose derivatives of all orders exist and are square-integrable. This space forms an infinite-dimensional Lie group. The Lie algebra of Diff(Ω) is the tangent space at the identity transform, V = Tid Diff(Ω), and consists of all vector fields equipped with the Lie bracket operation. Most of the computations w.r.t. the diffeomorphism group are done in the space of the Lie algebra because it is a linear vector space. Given a flow of time-varying velocity field, vt : [0, 1] → V , we generate a flow of diffeomorphisms, t → φt ∈ Diff(Ω) (see Fig. 1.1), as a solution to the ordinary differential equation, dφt (x) = vt ◦ φt (x). (1.1) dt Note that we use subscripts for the time variable, i.e., vt (x) = v(t, x), and φt (x) = φ(t, x). 1.2.2 Metrics on Diffeomorphisms A distance metric provides a way to measure the difference between diffeomorphisms, which forms the mathematical foundation for statistical analysis of diffeomorphisms such as the Fréchet mean, variability quantification, and regression. We first define an inner product on the tangent space of diffeomorphisms at identity, V = Te Diff(Ω), where e ∈ Diff(Ω) is the identity transformation. This inner product is of the form v, wV = (Lv(x), w(x))dx, Ω for v, w ∈ V , and a symmetric, positive-definite differential operator L : V → V ∗ , mapping to the dual space, V ∗ . The dual to the vector v is a momentum, m ∈ V ∗ , such that m = Lv or v = Km, where K is the inverse of L. In this chapter, we use 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms Fig. 1.2 Metrics on diffeomorphisms L = −αΔ + β, where Δ is the discrete Laplacian, and α and β are positive numbers. A major advantage of using Laplacian is that it is a diagonal matrix in the Fourier space, thus makes the inverse of L much easier to compute. Next we define a right-invariant metric as an inner product at any other point φ ∈ Diff(Ω), by pulling back the velocities at φ to the identity by right composition. For v, w ∈ Tφ Diff(Ω), the right-invariant metric is given by v, wTφ Diff(Ω) = v ◦ φ −1 , w ◦ φ −1 V . A geodesic curve {φt } ∈ Diff(Ω), illustrated in Fig. 1.2, is a flow of diffeomorphisms that minimizes the energy E(φt ) = 0 2 dφt −1 dt ◦ φt dt, V and it is characterized by the Euler–Poincaré equations (EPDiff) [5, 17], ∂v = −K (Dv)T m + Dm v + m div v , ∂t where D denotes the Jacobian matrix and div is the divergence operator. Given an initial velocity, v0 ∈ V , at t = 0, the EPDiff equation (1.2) can be integrated forward in time, resulting in a time-varying velocity vt : [0, 1] → V , which itself is subsequently integrated in time by the rule (1.1) to arrive at the geodesic path, φt ∈ Diff(Ω). This process is known as geodesic shooting. 1.2.3 Diffeomorphic Atlas Building with LDDMM Before introducing the diffeomorphic atlas building problem in the setting of LDDMM [6] with geodesic shooting [25, 30, 32], we first review a distance M. Zhang and P.T. Fletcher between pairwise images. Consider images I0 , I1 ∈ L 2 (Ω, R) as square-integrable functions defined on a domain Ω, we compute the diffeomorphic matching from a source image I0 and a target image I1 by minimizing an energy function of sum-ofsquared distance function plus regularization term as E(v0 , I0 , I1 ) = 1 1 I0 ◦ φ1−1 − I1 2L2 + v0 2V , 2σ 2 2 where σ 2 represents image noise variance. When the energy above is minimized over all initial velocities, it yields a squared distance metric between the two input images, i.e., d(I0 , I1 )2 = min E (v0 , I0 , I1 ). v0 ∈V Using this distance metric between images, we find the Fréchet mean of a group of images J 1 , . . . , J N ∈ L 2 (Ω, R) and the initial velocities {v0k ∈ L 2 ([0, 1], V )}k=1...N that minimize the distances function, i.e., N 1 d(I, J k )2 . arg min I,v0k N Putting (1.3) and (1.4) together, we have E(v0k , I) = N 1 I ◦ (φ k )−1 − J k 2 2 + 1 vk 2 , 1 L 2 2σ 2 0 V where the deformation φ1k is defined in (1.1) as the integral flow of v0k with φ0k = Id. Because the distance function between images is itself a minimization problem, the atlas estimation is typically done by alternating between the minimization to find the optimal v0k and the update of the atlas, I. Note that for notation simplicity, we denote v0k as vk and φ1k as φ k in the following 1.3 A Bayesian Model for Atlas Building For a continuous domain Ω ⊂ Rn , direct interpretation of (1.3) as a negative log posterior is problematic, as the image match term would be akin to isotropic Gaussian noise in the infinite-dimensional Hilbert space L 2 (Ω, R). This is not a well-defined probability distribution as it has an infinite measure. More appropriately, we can instead consider our input images, J k , and our atlas image, I, to be measured on a discretized grid, Ω ⊂ Zn . That is, images are elements of the finite-dimensional Euclidean space l2 (Ω, R). We will also consider velocity fields vk and the resulting 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms diffeomorphisms φ k to be defined on the discrete grid, Ω. Now our noise model is i.i.d. Gaussian noise at each image voxel, with the likelihood given by p(J | v , I) = k 1 (2π )M/2 σ M I ◦ (φ k )−1 − J k 2 , exp − 2σ 2 where M is the number of voxels, and the norm inside the exponent is the Euclidean norm of l2 (Ω, R). The negative log prior on the vk is a discretized version of the squared Hilbert space norm above. Now consider L to be a discrete, self-adjoint, positive-definite differential operator on the domain Ω. The prior on each vk is given by a multivariate Gaussian, 1 (Lvk , vk ) , (1.7) exp − p (vk ) = M 1 2 (2π ) 2 |L −1 | 2 where |L| is the determinant of L. In the sequel, we could put noninformative priors on θ = (α, σ 2 , I) and jointly marginalize them out with vk . Instead, we simply treat θ to be parameters that we wish to estimate. We fix β to a small number to ensure that the L operator is nonsingular. Putting together the likelihood (1.6) and prior (1.7), we arrive at the log joint posterior for the diffeomorphisms, via initial velocities, vk , as log N N 1 k k N MN log σ p vk | J k ; θ = log |L| − (Lv , v ) − 2 2 2 k=1 N 1 − I ◦ (φ k )−1 − J k 2 + const. 2σ 2 1.4 Estimation of Model Parameters We now present an algorithm for estimating the parameters, θ , of the probabilistic image atlas model specified in the previous section. These parameters include the image atlas, I, the smoothness level, or metric parameter, α, and the variance of the image noise, σ 2 . We treat the vk , i.e., the initial velocities of the image diffeomorphisms, as latent random variables with log posterior given by (1.8). This requires integration over the latent variables, which is intractable in closed form. We thus develop a Hamiltonian Monte Carlo procedure for sampling vk from the posterior and use this in a Monte Carlo Expectation Maximization algorithm to estimate θ . It consists of two main steps: 1. E-step We draw a sample of size S from the posterior distribution (1.8) using HMC with the current estimate of the parameters, θ (i) . Let vkj , j = 1, . . . , S, denote the jth point in M. Zhang and P.T. Fletcher this sample for the kth velocity field. The sample mean is taken to approximate the Q function, (i) Q(θ | θ ) = Evk |J k ;θ (i) log p v | J ; θ k N S 1 log p vkj | J k ; θ . S j=1 2. M-step Update the parameters by maximizing Q(θ | θ (i) ). The maximization is closed form in I and σ 2 , and a one-dimensional gradient ascent in α. Image Matching Gradient In our HMC sampling procedure, we will need to compute gradients, with respect to initial momenta, of the diffeomorphic image matching problem in (1.3), for matching the atlas I to an input image J k . Following the optimal control theory approach in [30], we add Lagrange multipliers to constrain the diffeomorphism φ k (t) to be a geodesic path. This is done by introducing time-dependent adjoint variables, m, ˆ Iˆ and vˆ , and writing the augmented energy, ˜ 0 ) =E(Km0 , I, J k )+ E(m 1 1 1 ˆ I˙ + ∇I · vdt + m, ˆ m ˙ + ad∗v mdt + I, ˆv, m − Lvdt, 0 where E is the diffeomorphic image matching energy from (1.3), and the other terms correspond to Lagrange multipliers enforcing: (a) the geodesic constraint, which comes from the EPDiff equation (1.2), (b) the image transport equation, I˙ = −∇I · v, and c) the constraint that m = Lv, respectively. The optimality conditions for m, I, v are given by the following time-dependent system of ODEs, termed the adjoint equations: ˙ˆ + adv m ˆ + vˆ = 0, −m ˆ = 0, −I˙ˆ − ∇ · (Iv) ˆ − Lˆv = 0, −ad∗mˆ m + I∇I 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms subject to initial conditions m(1) ˆ = 0, 1 ˆ I(1) = 2 (I(1) − J k ). σ Finally, after integrating these adjoint equations backwards in time to t = 0, the gradient of E˜ with respect to the initial momenta is ˆ 0. ∇m0 E˜ = Km0 − m 1.4.1 Hamiltonian Monte Carlo (HMC) Sampling Hamiltonian Monte Carlo [9] is a powerful MCMC sampling methodology that is applicable to a wide array of continuous probability distributions. It utilizes Hamiltonian dynamics as a Markov transition probability and efficiently explores the space of a target distribution. The integration through state space results in more efficient, global moves, while it also uses gradient information of the log probability density to sample from higher probability regions. In this section, we derive a HMC sampling method to draw a random sample from the posterior distribution of our latent variables, vk , the initial velocities defining the diffeomorphic image transformations from the atlas to the data. To sample from a pdf f (x) using HMC, one first sets up a Hamiltonian H(x, μ) = U(x) + V (μ), consisting of a “potential energy,” U(x) = − log f (x), and a “kinetic energy”, V (μ) = − log g(μ). Here g(μ) is some proposal distribution (typically isotropic Gaussian) on an auxiliary momentum variable, μ. An initial random momentum μ is drawn from the density g(μ). Starting from the current point x and initial random momentum μ, the Hamiltonian system is integrated forward in time to produce a candidate point, x˜ , along with the corresponding forward-integrated momentum, μ. ˜ The candidate point x˜ is accepted as a new point in the sample with probability P(accept) = min(1, exp(−U(˜x ) − V (μ) ˜ + U(x) + V (μ)). This acceptance–rejection method is guaranteed to converge to the desired density f (x) under fairly general regularity assumptions on f and g. In our model, to sample vk from the posterior in (1.8), we equivalently sample k m from the dual momenta, using vk = Kmk , so we define our potential energy as U(mk ) = − log p(mk |J k ; θ ). We use the prior distribution on the dual momenta as our proposal density, in other words, we use p(Kμ) defined as in (1.7), taking care to include the appropriate change-of-variables. As shown in [18], the form of the kinetic energy can be chosen to enforce certain conditions in the sampling. In our work, we define V (μ) = (μ, Kμ), which helps to enforce that the velocity samples be smooth vector fields via application of the low-pass filter K. This gives us the M. Zhang and P.T. Fletcher following Hamiltonian system to integrate in the HMC: dmk ∂H = = Kμ, dt ∂μ ∂H dμ ˜ = − k = −∇mk E, dt ∂m where the last term comes from the gradient defined in (1.10). As is standard practice in HMC, we use a “leap-frog” integration scheme, which better conserves the Hamiltonian and results in high acceptance rates. 1.4.2 The Maximization Step We now derive the M-step for updating the parameters θ = (α, σ 2 , I) by maximizing the HMC approximation of the Q function, which is given in (1.9). This turns out to be a closed-form update for the noise variance σ 2 and the atlas I, and a simple one-dimensional gradient ascent for α. From (1.9), it is easy to derive the closed-form update for σ 2 as σ2 = S N 1 I ◦ (φ kj )−1 − J k 2 . MNS j=1 For updating the atlas image I, we set the derivative of the Q function approximation which with respect to I to zero. The solution for I gives a closed-form update, S N I= k kj kj k=1 J ◦ φ |Dφ | . S N kj j=1 k=1 |Dφ | The gradient ascent over α requires that we take the derivative of the metric L = −αΔ + βI, with respect to α. We do this in the Fourier domain, where the discrete Laplacian is a diagonal operator. For a 3D grid, the coefficients Axyz of the discrete Laplacian at coordinate (x, y, z) in the Fourier domain is Axyz 2π y 2π z 2π x + cos + cos + 6, = −2 cos W −1 H −1 D−1 where W , H, D are the dimension of each direction. Hence, the determinant of the operator L is |L| = Axyz α + β. x,y,z 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms The gradient of the HMC approximated Q function, with respect to α, is S N A 1 xyz − −Δvkj , vkj . ∇α Q(θ | θ (i) ) ≈ 2 j=1 A α+β x,y,z xyz k=1 1.5 Bayesian Principal Geodesic Analysis Before introducing our BPGA model for diffeomorphisms, we first review BPCA [7] for Euclidean data. The main idea of BPCA is to formulate a generative latent variable model for PCA that automatically selects the appropriate dimensionality of the model. Note that since our main goal is to quantify the data variability in this section, we fix the regularity parameter α estimated as described in Sect. 1.3. Consider a set y of n-dimensional Euclidean random variables {yj }j=1,...,N ∈ Rn , the relationship between each variable yj and its corresponding q-dimensional (q < n) latent variable xj is (1.12) yj = μ + Bxj + ε, where μ is the mean of dataset {yj }, xj is conventionally defined as a random variable generated from N(0, I), B is an n × q factor matrix that relates xj and yj , and ε ∼ N(0, σ 2 I) represents error. This definition gives a data likelihood as yj − μ − Bxj 2 p(y | x; B, μ, σ ) ∝ exp − . 2σ 2 j=1 2 To automatically select the principal components from data, BPCA includes a Gaussian prior over each column of B, which is known as an automatic relevance determination (ARD) prior. Each such Gaussian has an independent variance associated with a precision hyperparameter γi , so that q γi n/2 1 T p(B | γ ) = exp − γi Bi Bi , 2π 2 i=1 where Bi denotes the ith column of B. The inference of BPCA is an EM algorithm that iteratively estimates the model parameters. At each iteration the value of γi is approximated by γi = Bni 2 , using the current estimate of Bi . This induces sparsity by driving the corresponding component Bi to zero. More specifically, if γi is large, Bi will be effectively removed in the latent space. This arises naturally because the larger γi is, the lower the probability of Bi will be. Notice that the columns of B define the principal subspace of standard PCA, therefore, inducing sparsity on B has the same effect as removing irrelevant dimensions in the principal subspace. M. Zhang and P.T. Fletcher 1.5.1 Probability Model We formulate the random initial velocity for the kth individual as vk = Wx k , where W is a matrix with q columns of principal initial velocities, and x k ∈ Rq is a latent variable that lies in a low-dimensional space, with 1 k 2 p(x | W ) ∝ exp − Wx V . 2 k Compared to BPCA, the difference of this latent variable prior is incorporating W as a conditional probability, which guarantees smoothness of the geodesic shooting path. Our noise model is based on the assumption of i.i.d. Gaussian at each image voxel, much like [16, 19, 36]. This can be varied under different conditions, for instance, spatially dependent model for highly correlated noise data. In this chapter, we will focus on the commonly used and simple Gaussian noise model, with the likelihood given by p(J | I, σ , x ) = k 1 (2π )M/2 σ M I ◦ (φ k )−1 − J k 2 2 L exp − . 2σ 2 The prior on W is a sparsity prior that suppresses the small principal initial velocity to zero. This prior is analogous to the hierarchical sparsity prior proposed by [10], with the difference that we use the natural Hilbert space norm for the velocity. The prior is based on Laplacian distribution, a widely used and exploited way to achieve sparse estimation. It presses the irrelevant or redundant components exactly to zero. As first introduced by [4], the Laplace distribution is equivalent to the marginal distribution of a hierarchical-Bayes model: a Gaussian prior with zero mean and exponentially distributed variances. Let i denote the ith principal component of W . We define each component Wi as a random variable with the hierarchical model distribution p(Wi | τi ) ∼ N(0, τi ), γ i , p(τi | γi ) ∼ Exp 2 After integrating out τi , we have the marginalized distribution as p(Wi | γi ) = 0 √ p(Wi | τi )p(τi | γi )dτi = √ γi exp − γi Wi 1 , 2 which is a Laplacian distribution with scale parameter γi /2. The degree of sparsity is controlled by the hyperparameter γi on the l1 penalty. However, the sparsity parameter is specified in an ad hoc manner. As in [10], an effective model is proposed to remove 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms Fig. 1.3 Graphical representation of BPGA for the kth subject J k γi by adopting a Jeffreys’ noninformative hyperprior as p(τi ) ∼ 1/τi . This has the advantages that (1) the improper hyperprior is scale-invariant, and (2) the model is parameter-free. Using this hierarchical sparsity prior on the columns of W for the automatic mode selection, we formulate the problem as q N Wi 2V 1 Wx k 2 − p(W , x | τ ) ∝ exp − V 2 2τi i=1 1 p(τ ) ∝ , τ where x = [x 1 , . . . , x k ], τ = [τ1 , . . . , τq ]. We will later integrate out the latent variable τ using expectation maximization. We can express our model for the kth subject using the graphical representation shown in Fig. 1.3. 1.5.2 Inference Due to the high computational cost of treating θ = {I, σ 2 } as random variables and sampling them, we use MAP estimation to determine θ as model parameters. After defining the likelihood (1.14) and prior (1.15) in the previous section, we now arrive at the joint posterior for BPGA as N p W , x, τ | J k ; θ ∝ k=1 p(J k | x k , θ ) p(x k | W ) p(W |τ ) p(τ ). In order to treat the W , x k and τ as latent random variables with the log posterior given by (1.16), we would ideally integrate out the latent variables, which are intractable in closed form for W , x k . Instead, we develop an expectation maximization algorithm to compute a closed-form solution to integrate out τ first, and then use a mode approximation for W , x k to the posterior distribution. It contains two alternating steps: M. Zhang and P.T. Fletcher E-step Using the current estimate of the parameters θˆ , we compute the expectation Q of the complete log posterior of (1.16) with respect to the latent variables τ as ˆ) ∝ − ˆ W Q(W , x k , θ | θ, N 1 I ◦ (φ k )−1 − J k 2 2 − MN log σ L 2 2σ 2 k=1 N k=1 q k 2 Wi 2V Wx − . V ˆ i 2V 2 W Note that we use the same approach to integrate out τ in [10]. Details are in Appendix A. M-step: Gradient Ascent for W, xk We introduce a gradient ascent scheme to estimate W , x k , and θ = (I, σ 2 ) simultaneously. We compute the gradient with respect to the initial momentum mk of the diffeomorphic image matching problem in (1.5), and then apply the chain rule to obtain the gradient term w.r.t. W and x k . Following the same derivation in (1.10), we obtain the gradient of Q with respect to W after applying the chain rule as ∇W Q = − K(mk − K m ˆ k )(x k )T − W Λ, where Λ is a diagonal matrix with diagonal element 1 2 ˆ Wi . The gradient with respect to x k is ˆ k ). ∇xk Q = −W T K(mk − K m Closed-Form Solution for θ We now derive the maximization for updating the parameters θ . This turns out to be a closed-form update for the atlas I, noise variance σ 2 . For updating I and σ 2 , we set the derivative of the expectation with respect to I, σ 2 to zero. The solution for I, σ 2 gives an update N I= N J k ◦ φ k |Dφ k | 1 I ◦ (φ k )−1 − J k 2 2 . , σ2 = N L k MN k=1 |Dφ | k=1 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms 1.6 Results In this section, we first demonstrate the effectiveness of our proposed model and MCEM estimation routine using both 2D synthetic data and real 3D MRI brain data. Because we have a generative model, we can forward simulate a random sample of images from a distribution with known parameters θ = (α, σ 2 , I). Then, in the next subsection, we test if we can recover those parameters using our MCEM algorithm. Figure 1.4 illustrates this process. We simulated a 2D synthetic dataset starting from an atlas image, I, of a binary circle with resolution 100 × 100. We then generated 20 smooth initial velocity fields from the prior distribution, p(vk ), defined in (1.15), setting α = 0.025 and β = 0.001. Deformed circle images were constructed by shooting the initial velocities by the EPDiff equations and transforming the atlas by the resulting diffeomorphisms, φ k . Finally, we added i.i.d. Gaussian noise according to our likelihood model (1.14). We used a standard deviation of σ = 0.05, which corresponds to an SNR of 20 (which is more noise than typical structural MRI). Parameter Estimation on Synthetic Data In our estimation procedure, we initialized α with 0.002 for noise free, and 0.01 for noise corrupted images. The step size of 0.005 for leap-frog integration is used in HMC with 10 units of time discretization in integration of EPDiff equations. Figure 1.5 compares the true atlas and estimated atlases in the clean and noisy case. Figure 1.6 shows the convergence graph for α and σ estimation by using 100 samples. It shows that our method recovers the model parameters fairly well. However, the iterative mode approximation algorithm does not recover the α parameter as nicely as our method. In the noisy case, the mode approximation algorithm estimates α as Fig. 1.4 Simulating synthetic 2D data from the generative diffeomorphism model. From left to right the ground truth template image, random diffeomorphisms from the prior model, deformed images, and final noise corrupted images M. Zhang and P.T. Fletcher Fig. 1.5 Atlas estimation results. Left ground truth template. Center estimated template from noise-free dataset. Right estimated template from noise-corrupted dataset Fig. 1.6 Estimation of α, σ . Left α estimation. Right σ estimation. In our MCEM method, final estimated α and σ for noise free data are 0.028, 0.01, and for noise data are 0.026, 0.0501. Compared with max-max method, for the noise data, estimated α and σ are 0.0152, 0.052 0.0152, which is far from the ground truth value of 0.025. This is compared with our estimation of 0.026. In addition, in the noise-free example, the mode approximation algorithm blows up due to the σ dropping close to 0, thus making the image match term numerically too high and the geodesic shooting unstable. Atlas Building on 3D Brain Images To demonstrate the effectiveness of our method on the real data, we apply our MCEM atlas estimation algorithm to a set of brain MRI from ten healthy subjects. The MRI have resolution 108 × 128 × 108 and are skull stripped, intensity normalized, and co-registered with rigid transforms. We set the initial α = 0.01, β = 0.001 with 15 time steps. Due to the massive computational cost of sampling in the highdimensional image space, we implement a message passing interface (MPI) parallel programming on a GPU cluster. The entire inference costs approximately a week to complete. The left side (the first five columns) of Fig. 1.7 shows coronal and axial view of the 3D MRI used as input. The right side shows the initialization (greyscale average of the input images), followed by the final atlas estimated by our method. The final atlas estimate correctly aligns the anatomy of the input images, producing a sharper average image. The algorithm also jointly estimated the smoothness parameter to be α = 0.028 and the image noise standard deviation to be σ = 0.031. 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms Fig. 1.7 Left (the first five columns) coronal and axial view of the input 3D MRIs. Right (a) initial greyscale average of the input images. Right (b) final atlas estimated by our MCEM estimation Image Matching Accuracy Finally, we demonstrate that another benefit of our HMC sampling methodology is improved performance in the standard image registration problem under large deformation shooting. Rather than using a direct gradient descent to solve the image registration problem, we instead can find the posterior mean of the model (1.16), where for image matching we fix the “atlas,” I, as the source image and have just one target image, I1 . The stochastic behavior in the sampling helps to get out of local minima, where the direct gradient descent can get stuck. We compared our proposed method with direct gradient descent image registration by geodesic shooting from [30]. We used the authors’ uTIlzReg package for geodesic shooting, which is available freely online. For the comparison, we registered the image pair shown in the first two panels of Fig. 1.8, which requires a large deformation. The source and Fig. 1.8 The first two images from left to right are the source and target image, respectively. Third is the matched image obtained by geodesic shooting method using [30]. Last image is the matched image from our MCEM method M. Zhang and P.T. Fletcher target images are 50 × 50. We used α = 0.02, β = 0.001 for smoothing kernel, and h = 40 time steps between t = 0 and t = 1. Note that we only want to compare the image matching here, so we fix the α and σ parameters. Figure 1.8 demonstrates the results of the direct geodesic shooting registration with our HMC posterior mean. It shows that the geodesic shooting method gets stuck in a local minima and cannot make it to the target image even with a large number of time steps (h = 60) in the time discretization (we tried several time discretizations up to 60, and none worked). Though our method did not match perfectly in the tip of the “C,” it still recovers the full shape while retaining a diffeomorphic transformation. Principal Geodesics Estimation on OASIS Brain Data We then demonstrate the effectiveness of BPGA model by applying it to a set of brain magnetic resonance images (MRI) from the 3D OASIS brain database. The data consists of MRI from 130 subjects between the age of 60 and 95. The MRI have a resolution of 128 × 128 × 128 with an image spacing of 1.0 × 1.0 × 1.0 mm3 and are skull stripped, intensity normalized, and co-registered with rigid transforms. To set the parameters in L operator, we did initial step of estimating α = 0.01 using the Bayesian atlas building procedure introduced in Sect. 1.3. We used 15 time steps in geodesic shooting and initialize the template I as the average of image intensities, with W as the matrix of principal components from tangent PCA (TPCA) [19]. The proposed BPGA model automatically determined that the latent dimensionality of the data was 15. Figure 1.9 displays the automatically estimated modes, i = 1, 2, of the brain MRI variation. We forward shoot the constructed atlas, I, by the estimated principal momentum ai Wi along the geodesics. For the purpose of visualization, we demonstrate the brain variation from the atlas by ai = −3, −1.5, 0, 1.5, 3. We also show the log determinant of Jacobians at ai = 3, with red representing regions of expansion and blue representing regions of contraction. The first mode of variation clearly shows that ventricle size change is a dominant source of variability in brain shape. Our algorithm also jointly estimated the image noise standard deviation parameter as σ = 0.04. Image Reconstruction Accuracy We validated the ability of our BPGA model to compactly represent the space of brain variation by testing how well it can reconstruct unseen images. After estimating the principal initial velocity and parameters from the training subjects above, we used these estimates to reconstruct another 20 testing subjects from the same OASIS database that were not included in the training. We then measured the discrepancy between the reconstructed images and the testing images. Note that our reconstruction only used the first 15 principal modes, which were automatically selected by our algorithm. We use the first fifteen dimensions to compare our model with linear PCA (LPCA) on image intensities and TPCA. Examples of the reconstructed images and their error maps from these models are shown in Figs. 1.10 and 1.11. Table 1.1 shows the comparison of the reconstruction accuracy as measured by the average and standard Fig. 1.9 Top to bottom axial, coronal, and sagittal views of shooting the atlas by the first and second principal modes. Left to right BPGA model of image variation evaluated at ai = −3, −1.5, 0, 1.5, 3, and log determinant of Jacobians at ai = 3 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms 19 M. Zhang and P.T. Fletcher (a) Observed (b) LPCA (c) TPCA (d) BPGA Fig. 1.10 Left to right original data, reconstruction by LPCA, TPCA, and BPGA (a) LPCA (b) TPCA (c) BPGA Fig. 1.11 Left to right absolute value of reconstruction error map by LPCA, TPCA, and BPGA Table 1.1 Comparison of mean squared reconstruction error between LPCA, TPCA, and BPGA models. Average and standard deviation over 20 test images LPCA TPCA BPGA Average MSE Std of MSE 4.2 × 10−2 1.25 × 10−2 3.4 × 10−2 4.8 × 10−3 2.8 × 10−2 4.2 × 10−3 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms 0.065 reconstruction error 0.055 0.05 0.045 0.04 0.035 0.03 0.025 0.02 15 20 25 30 number of principal modes Fig. 1.12 Averaged mean squared reconstruction error with different number of principal modes by LPCA, TPCA, and BPGA over 20 test images deviation of the mean squared error (MSE). The table indicates that our model outperforms both LPCA and TPCA in the diffeomorphic setting. We also display the reconstruction error with increasing number of principal modes. Figure 1.12 shows that TPCA requires approximately 32 principal modes, more than twice as much as our model does, to achieve the same level of reconstruction accuracy. LPCA cannot match the BPGA reconstruction accuracy with even 40 principal modes. This reflects that our model BPGA gains a more compact representation than TPCA and LPCA. Acknowledgments This work was supported by NIH Grant R01 MH084795, NIH Grant 5R01EB007688, and NSF CAREER Grant 1054057. References 1. S. Allassonnière, Y. Amit, A. Trouvé, Toward a coherent statistical framework for dense deformable template estimation. J. R. Stat. Soc. Ser. B 69, 3–29 (2007) 2. S. Allassonniere, S. Durrleman, E. Kuhn, Bayesian mixed effect atlas estimation with a diffeomorphic deformation model. SIAM J. Imaging Sci. 8(3), 1367–1395 (2015) 3. S. Allassonnière, E. Kuhn, Stochastic algorithm for parameter estimation for dense deformable template mixture model. ESAIM-PS 14, 382–408 (2010) 4. D.F. Andrews, C.L. Mallows, Scale mixtures of normal distributions. J. R. Stat. Soc. Ser. B (Methodological) 36, 99–102 (1974) 5. V.I. Arnol’d, Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l’hydrodynamique des fluides parfaits. Ann. Inst. Fourier 16, 319–361 (1966) 6. M.F. Beg, M.I. Miller, A. Trouvé, L. Younes, Computing large deformation metric mappings via geodesic flows of diffeomorphisms. Int. J. Comput. Vis. 61(2), 139–157 M. Zhang and P.T. Fletcher 7. C.M. Bishop, Bayesian PCA, in Advances in neural information processing systems (MIT press, Cambridge, 1999), pp. 382–388 8. A. Budhiraja, P. Dupuis, V. Maroulas, Large deviations for stochastic flows of diffeomorphisms. Bernoulli 16, 234–257 (2010) 9. S. Duane, A. Kennedy, B. Pendleton, D. Roweth, Hybrid Monte Carlo. Phys. Lett. B 195, 216–222 (1987) 10. M.A.T. Figueiredo, Adaptive sparseness for supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1150–1159 (2003) 11. P.T. Fletcher, C. Lu, S. Joshi, Statistics of shape via principal geodesic analysis on Lie groups, in Computer Vision and Pattern Recognition (IEEE Computer Society, Washington, DC, 2003), pp. 95–101 12. P.T. Fletcher, C. Lu, S.M. Pizer, S. Joshi, Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Trans. Med. Imaging 23(8), 995–1005 (2004) 13. P. Gori, O. Colliot, Y. Worbe, L. Marrakchi-Kacem, S. Lecomte, C. Poupon, A. Hartmann, N. Ayache, S. Durrleman, Bayesian atlas estimation for the variability analysis of shape complexes, in Medical Image Computing and Computer-Assisted Intervention, vol. 8149 (Springer, Heidelberg, 2013). pp. 267–274 14. J.E. Iglesias, M.R. Sabuncu, K. Van Leemput, ADNI, Incorporating parameter uncertainty in Bayesian segmentation models: application to hippocampal subfield volumetry, in MICCAI, (Springer, Heidelberg, 2012) 15. S. Joshi, B. Davis, M. Jomier, G. Gerig, Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage 23(Supplement 1), 151–160 (2004) 16. J. Ma, M.I. Miller, A. Trouvé, L. Younes, Bayesian template estimation in computational anatomy. NeuroImage 42, 252–261 (2008) 17. M.I. Miller, A. Trouvé, L. Younes, Geodesic shooting for computational anatomy. J. Math. Imaging Vis. 24(2), 209–228 (2006) 18. R.M. Neal, MCMC using Hamiltonian dynamics. Handb. Markov Chain Monte Carlo 2, 113– 162 (2011) 19. A. Qiu, L. Younes, M.I. Miller, Principal component based diffeomorphic surface mapping. IEEE Trans. Med. Imaging 31(2), 302–311 (2012) 20. P. Risholm, S. Pieper, E. Samset, W.M. Wells, Summarizing and visualizing uncertainty in non-rigid registration, in MICCAI (Springer, Heidelberg, 2010) 21. P. Risholm, E. Samset, W.M. Wells, Bayesian estimation of deformation and elastic parameters in non-rigid registration, in WBIR (Springer, Heidelberg, 2010) 22. S. Said, N. Courty, N. Le Bihan, S.J. Sangwine, Exact principal geodesic analysis for data on SO(3), in Proceedings of the 15th European Signal Processing Conference (2007). pp. 1700–1705 23. I.J.A. Simpson, M.J. Cardoso, M. Modat, D.M. Cash, M.W. Woolrich, J.L.R. Andersson, J.A. Schnabel, S. Ourselin, Alzheimers Disease Neuroimaging Initiative et al., Probabilistic nonlinear registration with spatially adaptive regularisation. Med. Image Anal. 26(1), 203–216 (2015) 24. I.J.A. Simpson, A.S. Julia, R.G. Adrian, L.R.A. Jesper, W.W. Mark, Probabilistic inference of regularisation in non-rigid registration. NeuroImage 59, 2438–2451 (2012) 25. N. Singh, J. Hinkle, S. Joshi, P. Thomas Fletcher, A vector momenta formulation of diffeomorphisms for improved geodesic regression and atlas construction, in International Symposium on Biomedial Imaging (ISBI), April 2013 26. S. Sommer, F. Lauze, S. Hauberg, M. Nielsen, Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations, in Proceedings of the European Conference on Computer Vision (2010). pp. 43–56 27. M. Vaillant, M.I. Miller, L. Younes, A. Trouvé, Statistics on diffeomorphisms via tangent space representations. NeuroImage 23, S161–S169 (2004) 28. K. Van Leemput, Encoding probabilistic brain atlases using Bayesian inference. IEEE Trans. Med. Imaging 28, 822–837 (2009) 1 Bayesian Statistical Shape Analysis on the Manifold of Diffeomorphisms 29. F.-X. Vialard, L. Risser, D. Holm, D. Rueckert, Diffeomorphic atlas estimation using Kärcher mean and geodesic shooting on volumetric images, in MIUA (2011) 30. F.-X. Vialard, L. Risser, D. Rueckert, C.J. Cotter, Diffeomorphic 3d image registration via geodesic shooting using an efficient adjoint calculation. Int. J. Comput. Vis. 97, 229–241 (2012) 31. D. Wassermann, M. Toews, M. Niethammer, W. Wells III, Probabilistic diffeomorphic registration: representing uncertainty, in Biomedical Image Registration (Springer, Switzerland, 2014). pp. 72–82 32. L. Younes, F. Arrate, M.I. Miller, Evolutions equations in computational anatomy. NeuroImage 45(1S1), 40–50 (2009) 33. M. Zhang, P.T. Fletcher, Probabilistic principal geodesic analysis, in Advances in Neural Information Processing Systems (2013). pp. 1178–1186 34. M. Zhang, P.T. Fletcher, Bayesian principal geodesic analysis in diffeomorphic image registration, in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014 (Springer, Heidelberg, 2014). pp. 121–128 35. M. Zhang, P.T. Fletcher, Bayesian principal geodesic analysis for estimating intrinsic diffeomorphic image variability. Med. Image Anal. 25, 37–44 (2015) 36. M. Zhang, N. Singh, P.T. Fletcher, Bayesian estimation of regularization and atlas building in diffeomorphic image registration, in Information Processing in Medical Imaging (Springer, Heidelberg, 2013). pp. 37–48 Chapter 2 Sampling Constrained Probability Distributions Using Spherical Augmentation Shiwei Lan and Babak Shahbaba Abstract Statistical models with constrained probability distributions are abundant in machine learning. Some examples include regression models with norm constraints (e.g., Lasso), probit, many copula models, and latent Dirichlet allocation (LDA). Bayesian inference involving probability distributions confined to constrained domains could be quite challenging for commonly used sampling algorithms. In this work, we propose a novel augmentation technique that handles a wide range of constraints by mapping the constrained domain to a sphere in the augmented space. By moving freely on the surface of this sphere, sampling algorithms handle constraints implicitly and generate proposals that remain within boundaries when mapped back to the original space. Our proposed method, called Spherical Augmentation, provides a mathematically natural and computationally efficient framework for sampling from constrained probability distributions. We show the advantages of our method over state-of-the-art sampling algorithms, such as exact Hamiltonian Monte Carlo, using several examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian bridge regression, reconstruction of quantized stationary Gaussian process, and LDA for topic modeling. 2.1 Introduction Many commonly used statistical models in Bayesian analysis involve high dimensional probability distributions confined to constrained domains. Some examples include regression models with norm constraints (e.g., Lasso), probit, many copula models, and latent Dirichlet allocation (LDA). Very often, the resulting models are S. Lan (B) Department of Statistics, University of Warwick, Coventry CV4 7AL, UK e-mail: [email protected] B. Shahbaba Department of Statistics and Department of Computer Science, University of California, Irvine, CA 92697, USA e-mail: [email protected] © Springer International Publishing Switzerland 2016 H.Q. Minh and V. Murino (eds.), Algorithmic Advances in Riemannian Geometry and Applications, Advances in Computer Vision and Pattern Recognition, DOI 10.1007/978-3-319-45026-1_2 S. Lan and B. Shahbaba intractable and the imposed constraints add another layer of complexity. Therefore, in these problems simulating samples for Monte Carlo estimation is quite challenging [12, 40, 41, 47, 57]. Although the literature on improving the efficiency of computational methods for Bayesian inference is quite extensive see, for example, [1–3, 5, 7, 9, 11, 14, 15, 18, 20, 22, 25–28, 32, 33, 35, 37, 39, 42–46, 51–53, 56, 63– 65, 67], these methods do not directly address the complications due to constrained target distributions. When dealing with such distributions, MCMC algorithms typically evaluate each proposal to ensure it is within the boundaries imposed by the constraints. Computationally, this is quite inefficient, especially in high dimensional problems where proposals are very likely to miss the constrained domain. Alternatively, one could map the original domain to the entire Euclidean space to remove the boundaries [49]. This approach too is computationally inefficient since the sampler needs to explore a much larger space than needed. In this chapter, we study a novel technique, Spherical Augmentation (SA), for a family of MCMC algorithms to handle a specific class of constraints. SA was recently proposed by [34] for Hamiltonian Monte Carlo (HMC) [23, 46] to handle constraints involving norm inequalities (Fig. 2.1). SA method augments the parameter space and maps the constrained domain to a sphere in the augmented space. The sampling algorithm explores the surface of this sphere. This way, it handles constraints implicitly and generates proposals that remain within boundaries when mapped back to the original space. Here, we generalize the work of [34] to handle a broader class of constraints, which are still convertible to norm constraints. We will also discuss an improved version of Spherical HMC [34] designed specifically for box-type constraints. Additionally, we will show how SA can be applied to Lagrangian Monte Carlo [35] with application to LDA problems and sampling from probability simplex. There have been some related works recently. Reference [46] discusses modifying standard HMC such that the sampler bounces off the boundaries by letting the poten- q = Inf q = 0.5 q = 0.1 Fig. 2.1 q-Norm constraints 2 Sampling Constrained Probability Distributions … tial energy go to infinity for parameter values that violate the constraints. This creates “energy walls” at boundaries. This approach, henceforth called Wall HMC, has limited applications and tends to be computationally inefficient, because the frequency of hitting and bouncing increases exponentially as dimension grows. Reference [47] follow the idea of Wall HMC and propose an exact HMC algorithm specifically for truncated Gaussian distributions with constraints. Reference [12] on the other hand propose a modified version of HMC for handling holonomic constraint c(θ ) = 0 by using Lagrange multipliers. Reference [13] discuss an alternative approach for situations where constrained domains can be identified as sub-manifolds. In particular, Spherical HMC [34] is motivated by [13] but removes the requirement of embedding, thus it is applicable to more general settings. All these methods provide interesting solutions for specific types of constraints. In contrast, our proposed method offers a general and efficient framework applicable to a wide range of problems. The chapter is structured as follows. Before presenting our methods, in Sect. 2.2 we provide a brief overview of HMC and one of its variants, namely, Lagrangian Monte Carlo (LMC) [35]. We then present the underlying idea of Spherical Augmentation, first for two simple cases, ball type (Sect. 2.3.1) and box type (Sect. 2.3.2) constraints, then for more general q-norm type constraints (Sect. 2.3.3), as well as some functional constraints (Sect. 2.3.4). In Sect. 2.4, we apply the SA technique to HMC (Sect. 2.4.2) and LMC (Sect. 2.4.3) for sampling from constrained target distributions. We evaluate our proposed methods using simulated and real data in Sect. 2.5. Finally, Sect. 2.6 is devoted to discussion and future directions. 2.2 Preliminaries 2.2.1 Hamiltonian Monte Carlo HMC improves upon random walk Metropolis (RWM) by proposing states that are distant from the current state, but nevertheless accepted with high probability. These distant proposals are found by numerically simulating Hamiltonian dynamics, whose state space consists of its position, denoted by the vector θ , and its momentum, denoted by the vector p. Our objective is to sample from the continuous probability distribution of θ with the density function f (θ ). It is common to assume that the fictitious momentum variable p ∼ N (0, M), where M is a symmetric, positivedefinite matrix known as the mass matrix, often set to the identity matrix I for convenience. In this Hamiltonian dynamics, the potential energy, U (θ), is defined as minus the log density of θ (plus any constant), that is U (θ) := − log f (θ ); the kinetic energy, K (p) for the auxiliary momentum variable p is set to be minus the log density of p (plus any constant). Then the total energy of the system, Hamiltonian function, is defined as their sum, H (θ , p) = U (θ ) + K (p) (2.1) S. Lan and B. Shahbaba Given the Hamiltonian H (θ , p), the system of (θ, p) evolves according to the following Hamilton’s equations θ˙ p˙ = ∇p H (θ , p) = − ∇θ H (θ , p) = M−1 p = − ∇θ U (θ ) In practice when the analytical solution to Hamilton’s equations is not available, we need to numerically solve these equations by discretizing them, using some small time step ε. For the sake of accuracy and stability, a numerical method called leapfrog is commonly used to approximate the Hamilton’s equations [46]. We usually solve the system for L steps, with some step size, ε, to propose a new state in the Metropolis algorithm, and accept or reject it according to the Metropolis acceptance probability. [See [46], for more discussions]. 2.2.2 Lagrangian Monte Carlo Although HMC explores the target distribution more efficiently than RWM, it does not fully exploit the geometric properties of the parameter space. Reference [28] propose Riemannian HMC (RHMC), which adapts to the local Riemannian geometry of the target distribution by using a position-specific mass matrix M = G(θ). More specifically, they set G(θ) to the Fisher information matrix. In this chapter, we mainly use the spherical metric instead, to serve the purpose of constraint handling. The proposed method can be viewed as an extension to this approach since it explores the geometry of sphere. Following the argument of [4, 28] define Hamiltonian dynamics on the Riemannian manifold endowed with metric G(θ). With the non-flat metic, the momentum vector becomes p|θ ∼ N (0, G(θ )) and the Hamiltonian is therefore defined as follows: 1 1 H (θ, p) = φ(θ) + pT G(θ)−1 p, φ(θ) := U (θ ) + log det G(θ) 2 2 Unfortunately the resulting Riemannian manifold Hamiltonian dynamics becomes nonseparable since it contains products of θ and p, and the numerical integrator, generalized leapfrog, is an implicit scheme that involves time-consuming fixed-point iterations. Reference [35] proposes to change the variables p → v := G(θ)−1 p and define an explicit integrator for RHMC by using the following equivalent Lagrangian dynamics: θ˙ = v − vT Γ (θ)v − G(θ)−1 ∇θ φ(θ) 2 Sampling Constrained Probability Distributions … where the velocity v|θ ∼ N (0, G(θ )−1 ). Here, Γ (θ) is the Christoffel symbols derived from G(θ). The proposed explicit integrator is time reversible but not volume preserving. Based on the change of variables theorem, one can adjust the acceptance probability with Jacobian determinant to satisfy the detailed balance condition. The resulting algorithm, Lagrangian Monte Carlo (LMC), is shown to be computationally more efficient than RHMC [See [35] for more details]. Throughout this chapter, we express the kinetic energy K in terms of velocity, v, instead of momentum, p [9, 35]. 2.3 Spherical Augmentation In this section, we introduce the Spherical Augmentation technique for handling norm constraints implicitly. We start with two simple constraints: ball type (2-norm) and box type (∞-norm). Then, we generalize the methodology to arbitrary q-norm type constraints for q > 0. Finally, we discuss some functional constraints that can be reduced to norm constraints. 2.3.1 Ball Type Constraints Consider probability confined to the D-dimensional unit ball B0D (1) := distributions D 2 {θ ∈ R D : θ 2 = i=1 θi ≤ 1}. The constraint is given by restricting the 2-norm of parameters: θ 2 ≤ 1. θD+1 θ A B ~ θ =(θ,θD+1) θD+1=(1−||θ||2)0.5 Fig. 2.2 Transforming the unit ball B0D (1) to the sphere S D S. Lan and B. Shahbaba The idea of SA is to augment the original D-dimensional manifold of unit ball B0D (1) to a hypersphere S D := {θ˜ ∈ R D+1 : θ˜ 2 = 1} in a (D + 1)-dimensional space. This can be done by adding an auxiliary variable θ D+1 to the original D ˜ parameter θ ∈ B0 (1) to form an extended parameter θ = (θ, θ D+1 ) such that θ D+1 = 1 − θ 22 . Next, we identify the lower hemisphere S−D with the upper hemisphere S+D by ignoring the sign of θ D+1 . This way, the domain of the target distribution is changed from the unit ball B0D (1) to the D-dimensional sphere, S D := {θ˜ ∈ R D+1 : θ˜ 2 = 1}, through the following transformation: TB→S : B0D (1) S±D , 2 ˜ θ → θ = θ, ± 1 − θ 2 which can also be recognized as the coordinate map from the Euclidean coordinate chart {θ , B0D (1)} to the manifold S D . ˜ using a sampling algorithm (e.g., HMC) defined on After collecting samples {θ} the sphere, S D , we discard the last component θ D+1 and obtain the samples {θ} that automatically satisfy the constraint θ 2 ≤ 1. Note that the sign of θ D+1 does not affect our Monte Carlo estimates. However, after applying the above transformation, we need to adjust our estimates according to the change of variables theorem as follows: [58] dθ B dθ S ˜ f (θ)dθ B = f (θ ) (2.6) c dθ Sc B0D (1) S+D dθ B where dθ = |θ D+1 | as shown in Corollary 2.1 in Appendix “Canonical Metric Sc in Cartesian Coordinates”. Here, dθ B and dθ Sc are volume elements under the Euclidean metric and the canonical spherical metric, respectively. With the above transformation (2.5), the resulting sampler is defined and moves freely on S D while implicitly handling the constraints imposed on the original parameters. As illustrated in Fig. 2.2, the boundary of the constraint, i.e., θ 2 = 1, corresponds to the equator on the sphere S D . Therefore, as the sampler moves on the sphere, e.g., from A to B, passing across the equator from one hemisphere to the other translates to “bouncing back” off the boundary in the original parameter space. 2.3.2 Box-Type Constraints Many constraints are given by both lower and upper bounds. Here we focus on a special case that defines a hyper-rectangle R0D := [0, π ] D−1 × [0, 2π ); other box type constraints can be transformed to this hyper-rectangle. This constrained domain can be mapped to the unit ball B0D (1) and thus reduces to the ball type constraint discussed in Sect. 2.3.1. However, a more natural approach is to use spherical coordinates, which directly maps the hyperrectangle R0D to the sphere S D , 2 Sampling Constrained Probability Distributions … TR0 →S : d−1 cos(θd ) i=1 sin(θi ), d < D + 1 −→ S , θ → x, xd = D d = D+1 i=1 sin(θi ), (2.7) D Therefore, we use {θ, R0D } as the spherical coordinate chart for the manifold S D . Instead of being appended with an extra dimension as in Sect. 2.3.1, here θ ∈ R D is treated as the spherical coordinates of the point x ∈ R D+1 with x2 = 1. After obtaining samples {x} on the sphere S D , we transform them back to {θ} in the original constrained domain R0D using the following inverse mapping of (2.7): TS →R0 : S R0D , x → θ , θd = ⎧ ⎪ ⎨arccot √ ⎪ ⎩2 arccot xd , d 1− i=1 xi2 2 2 x D+1 +x D x D+1
{"url":"https://vdoc.pub/documents/algorithmic-advances-in-riemannian-geometry-and-applications-for-machine-learning-computer-vision-statistics-and-optimization-6o5oba831ei0","timestamp":"2024-11-12T06:47:20Z","content_type":"text/html","content_length":"128126","record_id":"<urn:uuid:f7c0ce68-4946-42ec-a2c7-a94bc957ec4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00291.warc.gz"}
Sparse weights using structural pruning | TensorFlow Model Optimization Structural pruning weights from your model to make it sparse in specific pattern can accelerate model inference time with appropriate HW supports. This tutorial shows you how to: • Define and train a model on the mnist dataset with a specific structural sparsity • Convert the pruned model to tflite format • Visualize structure of the pruned weights For a general overview of the pruning technique for the model optimization, see the pruning overview. For tutorial on general weight pruning, see Pruning in Keras. Structural pruning of weights Structural pruning systematically zeroes out model weights at the beginning of the training process. You apply this pruning techniques to regular blocks of weights to speed up inference on supporting HWs, for example: grouping weights in the model by blocks of four and zeroing out two of those weights in each block, known as a 2 by 4 reduction. This technique applies only to the last dimension of the weight tensor for the model that is converted by TensorFlow Lite. For example, Conv2D layer weights in TensorFlow Lite have the structure [channel_out, height, width, channel_in] and Dense layer weights have the structure [channel_out, channel_in]. The sparsity pattern is applied to the weights in the last dimension: channel_in. Compare to the random sparsity, the structured sparsity generally has lower accuracy due to restrictive structure, however, it can reduce inference time significantly on the supported hardware. Pruning can be applied to a model together with other model compression techniques for better compression rate. See quantization and clustering examples in collaborative optimization technique for more details. Prepare your development environment and data. pip install -q tensorflow pip install -q tensorflow-model-optimization pip install -q matplotlib import tensorflow as tf from tensorflow import keras import tensorflow_model_optimization as tfmot prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude Download and normalize image data from the MNIST dataset # Load MNIST dataset. mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 and 1. train_images = train_images / 255.0 test_images = test_images / 255.0 Define structural pruning parameters Define parameters for pruning and specify the type of structural pruning. Set the parameters for pruning to (2, 4). These settings mean that in a block of four elements, at least two with the lowest magnitude are set to zero. You don't have to set the pruning_schedule parameter. By default, the pruning mask is defined at the first step and it is not updated during the training. pruning_params_2_by_4 = { 'sparsity_m_by_n': (2, 4), Define parameters for random pruning with the target sparsity of 50%. pruning_params_sparsity_0_5 = { 'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(target_sparsity=0.5, Define the model architecture and specify which layers to prune. Structural pruning is applied based on the layers of the model you select. In the example below, we prune only some of the layers. We prune the second Conv2D layer and the first Dense layer. Notice that the first Conv2D layer cannot be pruned structurally. To be pruned structurally, it should have more than one input channels. Instead, we prune the first Conv2D layer with random pruning. model = keras.Sequential([ 32, 5, padding='same', activation='relu', input_shape=(28, 28, 1), keras.layers.MaxPooling2D((2, 2), (2, 2), padding='same'), 64, 5, padding='same', keras.layers.MaxPooling2D((2, 2), (2, 2), padding='same'), 1024, activation='relu', 2024-03-09 12&colon;19&colon;11.497336&colon; E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc&colon;282] failed call to cuInit&colon; CUDA_ERROR_NO_DEVICE&colon; no CUDA-capable device is detected Model&colon; "sequential" Layer (type) Output Shape Param # prune_low_magnitude_prunin (None, 28, 28, 32) 1634 g_sparsity_0_5 (PruneLowMa max_pooling2d (MaxPooling2 (None, 14, 14, 32) 0 prune_low_magnitude_struct (None, 14, 14, 64) 102466 ural_pruning (PruneLowMagn batch_normalization (Batch (None, 14, 14, 64) 256 re_lu (ReLU) (None, 14, 14, 64) 0 max_pooling2d_1 (MaxPoolin (None, 7, 7, 64) 0 flatten (Flatten) (None, 3136) 0 prune_low_magnitude_struct (None, 1024) 6423554 ural_pruning_dense (PruneL dropout (Dropout) (None, 1024) 0 dense (Dense) (None, 10) 10250 Total params&colon; 6538160 (24.94 MB) Trainable params&colon; 3274762 (12.49 MB) Non-trainable params&colon; 3263398 (12.45 MB) Train and evaluate the model. batch_size = 128 epochs = 2 _, pruned_model_accuracy = model.evaluate(test_images, test_labels, verbose=0) print('Pruned test accuracy:', pruned_model_accuracy) Pruned test accuracy&colon; 0.9897000193595886 Remove the pruning wrapper so that it is not included in the model when you convert it to TensorFlow Lite format. model = tfmot.sparsity.keras.strip_pruning(model) Convert model to tflite format import tempfile converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() _, tflite_file = tempfile.mkstemp('.tflite') print('Saved converted pruned model to:', tflite_file) with open(tflite_file, 'wb') as f: INFO&colon;tensorflow&colon;Assets written to&colon; /tmpfs/tmp/tmp04kvq4rj/assets INFO&colon;tensorflow&colon;Assets written to&colon; /tmpfs/tmp/tmp04kvq4rj/assets Saved converted pruned model to&colon; /tmpfs/tmp/tmp218fgsbq.tflite WARNING&colon; All log messages before absl&colon;&colon;InitializeLog() is called are written to STDERR W0000 00&colon;00&colon;1709986802.425001 13320 tf_tfl_flatbuffer_helpers.cc&colon;390] Ignored output_format. W0000 00&colon;00&colon;1709986802.425052 13320 tf_tfl_flatbuffer_helpers.cc&colon;393] Ignored drop_control_dependency. Visualize and check weights Now visualize the structure of weights in the Dense layer pruned with 2 by 4 sparsity. Extract the weights from the tflite file. # Load tflite file with the created pruned model interpreter = tf.lite.Interpreter(model_path=tflite_file, experimental_preserve_all_tensors=True) details = interpreter.get_tensor_details() # Weights of the dense layer that has been pruned. tensor_name = 'structural_pruning_dense/MatMul' detail = [x for x in details if tensor_name in x["name"]] # We need the first layer. tensor_data = interpreter.tensor(detail[0]["index"])() To verify that we selected the correct layer that has been pruned, print the shape of the weight tensor. print(f"Shape of Dense layer is {tensor_data.shape}") Shape of Dense layer is (1, 1024) Now we visualize the structure for a small subset of the weight tensor. The structure of the weight tensor is sparse in the last dimension, using the (2,4) pattern: two elements out of four are zeros. To make the visualization more clear, we replace all non-zero values with ones. import matplotlib.pyplot as plt import numpy as np # The value 24 is chosen for convenience. width = height = 24 subset_values_to_display = tensor_data[0:height, 0:width] val_ones = np.ones([height, width]) val_zeros = np.zeros([height, width]) subset_values_to_display = np.where(abs(subset_values_to_display) > 0, val_ones, val_zeros) Define the auxiliary function to draw separation lines to see the structure clearly. def plot_separation_lines(height, width): block_size = [1, 4] # Add separation lines to the figure. num_hlines = int((height - 1) / block_size[0]) num_vlines = int((width - 1) / block_size[1]) line_y_pos = [y * block_size[0] for y in range(1, num_hlines + 1)] line_x_pos = [x * block_size[1] for x in range(1, num_vlines + 1)] for y_pos in line_y_pos: plt.plot([-0.5, width], [y_pos - 0.5 , y_pos - 0.5], color='w') for x_pos in line_x_pos: plt.plot([x_pos - 0.5, x_pos - 0.5], [-0.5, height], color='w') Now visualize the subset of the weight tensor. plot_separation_lines(height, width) plt.title("Structural pruning for Dense layer") Visualize weights for the Conv2D layer. The structural sparsity is applied in the last channel, similar to the Dense layer. Only the second Conv2D layer is structurally pruned as pointed out above. # Get weights of the convolutional layer that has been pruned with 2 by 4 sparsity. op_details = interpreter._get_ops_details() op_name = 'CONV_2D' op_detail = [x for x in op_details if op_name in x["op_name"]] tensor_data = interpreter.tensor(op_detail[1]["inputs"][1])() print(f"Shape of the weight tensor is {tensor_data.shape}") Shape of the weight tensor is (64, 5, 5, 32) Similar to the weights of Dense layer, the last dimension of the kernel has a (2, 4) structure. weights_to_display = tf.reshape(tensor_data, [tf.reduce_prod(tensor_data.shape[:-1]), -1]) weights_to_display = weights_to_display[0:width, 0:height] val_ones = np.ones([height, width]) val_zeros = np.zeros([height, width]) subset_values_to_display = np.where(abs(weights_to_display) > 1e-9, val_ones, val_zeros) plot_separation_lines(height, width) plt.title("Structurally pruned weights for Conv2D layer") Let's see how those randomly pruned weights look. We extract them and display a subset of the weight tensor. # Get weights of the convolutional layer that has been pruned with random pruning. tensor_name = 'pruning_sparsity_0_5/Conv2D' detail = [x for x in details if tensor_name in x["name"]] tensor_data = interpreter.tensor(detail[0]["index"])() print(f"Shape of the weight tensor is {tensor_data.shape}") Shape of the weight tensor is (32, 5, 5, 1) weights_to_display = tf.reshape(tensor_data, [tensor_data.shape[0],tf.reduce_prod(tensor_data.shape[1:])]) weights_to_display = weights_to_display[0:width, 0:height] val_ones = np.ones([height, width]) val_zeros = np.zeros([height, width]) subset_values_to_display = np.where(abs(weights_to_display) > 0, val_ones, val_zeros) plot_separation_lines(height, width) plt.title("Unstructed pruned weights for Conv2D layer") The TensorFlow Model Optimization Toolkit includes a python script that can be used to check whether which layers in the model from the given tflite file have the structurally pruned weights: check_sparsity_m_by_n.py. The following command demonstrates how to use this tool to check for 2 by 4 sparsity in a specific model. python3 ./tensorflow_model_optimization/python/core/sparsity/keras/tools/check_sparsity_m_by_n.py --model_tflite=pruned_model.tflite --m_by_n=2,4 python3&colon; can't open file '/tmpfs/src/temp/tensorflow_model_optimization/g3doc/guide/pruning/./tensorflow_model_optimization/python/core/sparsity/keras/tools/check_sparsity_m_by_n.py'&colon; [Errno 2] No such file or directory
{"url":"https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_sparsity_2_by_4?authuser=0","timestamp":"2024-11-04T20:32:46Z","content_type":"text/html","content_length":"167276","record_id":"<urn:uuid:35319e7e-9536-49d8-a697-1e4f3250d0d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00003.warc.gz"}
Next: MAXIMUM DOMATIC PARTITION Up: Covering and Partitioning Previous: MINIMUM VERTEX COVER &nbsp Index • INSTANCE: Graph • SOLUTION: A dominating set for G, i.e., a subset • MEASURE: Cardinality of the dominating set, i.e., • Good News: Approximable within INIMUM SET COVER [276]. • Bad News: Not approximable within c > 0 [423]. • Comment: Equivalent to MINIMUM SET COVER under L-reduction [282] and [63]. See MINIMUM SET COVER for more comments. Not approximable within 153]. Complete for the class of 305]. Admits a PTAS for planar graphs [53] and for unit disk graphs [264]. Variation in which the degree of G is bounded by a constant B is APX-complete for 393] and is approximable within INIMUM SET COVER. The bad news hold also for bipartite graphs and split graphs (observation). If the dominating set is restricted to be connected the problem is approximable within 208]. • Garey and Johnson: GT2 Next: MAXIMUM DOMATIC PARTITION Up: Covering and Partitioning Previous: MINIMUM VERTEX COVER &nbsp Index Viggo Kann
{"url":"https://www.csc.kth.se/~viggo/wwwcompendium/node11.html","timestamp":"2024-11-11T08:34:56Z","content_type":"text/html","content_length":"7661","record_id":"<urn:uuid:fcafe62b-48c4-4aa1-958b-23ba1b2c329f>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00115.warc.gz"}
rpact 4.1.0 New features • The new function getSimulationCounts() can be used to perform power simulations for clinical trials with negative binomial distributed count data. The function returns the simulated power, stopping probabilities, conditional power, and expected sample size for testing mean rates for negative binomial distributed event numbers in the two treatment groups testing situation. • The functions getDesignGroupSequential(), getDesignInverseNormal(), and getDesignFisher() now support the argument directionUpper to specify the direction of the alternative for one-sided testing early at the design phase, see enhancement #26 • getSampleSizeCounts() and getPowerCounts() output boundary values also on the treatment effect scale, see enhancement #40 • The fetch() and obtain() functions can be used to extract multiple parameters from an rpact result object and support various output formats Improvements, issues, and changes • Usage of pipe-operators improved • Analysis progress messages are only displayed when R is used interactively • Manual use of kable() for rpact result objects marked as deprecated, as the formatting and display will be handled automatically by rpact • The order of all summary entries has been revised and optimized • Minimum version of suggested package ggplot2 changed from 2.2.0 to 3.2.0 • Issues #41, #44, #46, and #47 fixed • When analyzing with a two-sided test, an issue with the calculation of the conditional rejection probability was fixed • Bug is fixed: directionUpper = FALSE has no influence in simulation for testing rates in one-sample situation rpact 4.0.0 New features • All reference classes in the package have been replaced by R6 classes. This change brings significant advantages, including improved performance, more flexible and cleaner object-oriented programming, and enhanced encapsulation of methods and properties. The transition to R6 classes allows for more efficient memory management and faster execution, making the package more robust and scalable. Additionally, R6 classes provide a more intuitive and user-friendly interface for developers, facilitating the creation and maintenance of complex data structures and workflows. • Extension of the function getPerformanceScore() for sample size recalculation rules to the setting of binary endpoints according to Bokelmann et al. (2024) • The getSimulationMultiArmMeans(), getSimulationMultiArmRates(), and getSimulationMultiArmSurvival() functions now support an enhanced selectArmsFunction argument. Previously, only effectVector and stage were allowed as arguments. Now, users can optionally utilize additional arguments for more powerful custom function implementations, including conditionalPower, conditionalCriticalValue, plannedSubjects/plannedEvents, allocationRatioPlanned, selectedArms, thetaH1 (for means and survival), stDevH1 (for means), overallEffects, and for rates additionally: piTreatmentsH1, piControlH1, overallRates, and overallRatesControl. • Same as above for getSimulationEnrichmentMeans(), getSimulationEnrichmentRates(), and getSimulationEnrichmentSurvival(). Specifically, support for population selection with selectPopulationsFunction argument based on predictive/posterior probabilities added (see #32) • The fetch() and obtain() functions can be used to extract a single parameter from an rpact result object, which is useful for writing pipe-operator linked commands Improvements, issues, and changes • Issues #25, #35, and #36 fixed • Minor improvements rpact 3.5.1 • The internal fields .parameterNames and .parameterFormatFunctions were removed from all rpact result objects in favor of a more efficient solution • Issues #15, #16, #17, #19, and #23 fixed • Fixed inconsistent naming of variables and class fields (issue #21) □ getSampleSizeSurvival() / getPowerSurvival(): ☆ Field eventsPerStage replaced by cumulativeEventsPerStage ☆ Field singleEventsPerStage added □ getSimulationSurvival(): ☆ Field eventsPerStage replaced by singleEventsPerStage ☆ Field overallEventsPerStage replaced by cumulativeEventsPerStage □ getSimulationMultiArmSurvival(): ☆ Field eventsPerStage replaced by cumulativeEventsPerStage ☆ Field singleNumberOfEventsPerStage replaced by singleEventsPerArmAndStage ☆ Field singleEventsPerStage added □ getSimulationEnrichmentSurvival(): ☆ field singleNumberOfEventsPerStage replaced by singleEventsPerSubsetAndStage • Test coverage CI/CD pipeline activated with the assistance of GitHub Actions, which runs covr and uploads the results to codecov.io • Minor improvements rpact 3.5.0 New features • The new functions getSampleSizeCounts() and getPowerCounts() can be used to perform sample size calculations and the assessment of test characteristics for clinical trials with negative binomial distributed count data. This is possible for fixed sample size and group sequential designs. For the latter, the methodology described in Muetze et al. (2019) is implemented. These functions can also be used to perform blinded sample size reassessments according to Friede and Schmidli (2010). Improvements, issues, and changes • Original Fortran 77 code of AS 251 included into the package, see functions mvnprd, mvstud, as251Normal, and as251StudentT • R package mnormt dependency has been removed • Argument theta can be used for plotting of sample size and power results • Pipe operator usage improved • Shiny app link changed to https://rpact.shinyapps.io/cloud • Several minor improvements rpact 3.4.0 New features • The new function getPerformanceScore() calculates the conditional performance score, its sub-scores and components according to Herrmann et al. (2020) for a given simulation result from a two-stage design • allocationRatioPlanned for simulating multi-arm and enrichment designs can be a vector of length kMax, the number of stages • getObjectRCode() (short: rcmd()): with the new arguments pipeOperator and output many new output variants can be specified, e.g., the native R pipe operator or the magrittr pipe operator can be • Generic function knitr::knit_print for all result objects implemented and automatic code chunk option results = 'asis' activated Improvements, issues, and changes • Improved speed of numerical computation of group sequential designs and test characteristics • Multivariate t distribution restricted to df <= 500 because of erroneous results in mnormt package otherwise. For df > 500, multivariate normal distribution is used • Performance of cumulative distribution function and survival function plot improved • Test coverage extended and improved • Descriptions for all class fields added • Renamed field omega to chi in class TrialDesignPlanSurvival • Several minor improvements rpact 3.3.4 • Rcpp sugar function sapply removed from C++ code to stop deprecated warnings on r-devel-linux-x86_64-fedora-clang • Minor improvements rpact 3.3.3 • allocationRatioPlanned for simulating means and rates for a two treatment groups design can be a vector of length kMax, the number of stages • calcSubjectsFunction can be used in C++ version for simulating means and rates • calcEventsFunction added in getSimulationSurvival() • getPerformanceScore() added: calculates the performance score for simulation means results (1 and 2 groups; 2 stages) • Performance of simulation rates improved for 1 and 2 groups (by translating from R to C++) • Performance of simulation means improved for 1 and 2 groups • Two-sided O’Brien and Fleming beta-spending function corrected • Issue in plot type 5 for sample size means and rates fixed • Added dependency on R >= 3.6.0 • Minor improvements rpact 3.3.2 • Design objects can be piped into getDataset() to enable pipe syntax for analysis, e.g., getDesignGroupSequential() |> getDataset(dataMeans) |> getAnalysisResults() • Performance of simulation means improved for 1 and 2 groups (by translating from R to C++) • Total test time was cut in half by improving simulation performance and enabling parallel testing • SystemRequirements: C++11 added to DESCRIPTION to enable C++ 11 compilation on R 3.x • Minor improvements rpact 3.3.1 • Help pages improved • Parameter betaAdjustment can also be used in getDesignInverseNormal() • subsets removed from result of getWideFormat() for non-enrichment datasets • Summary of enrichment survival simulation results improved • Parameter populations in getSimulationEnrichmentMeans(), getSimulationEnrichmentRates(), and getSimulationEnrichmentSurvival() has been removed since it is always derived from effectList • Bug fixed in getSimulationEnrichmentRates() for calculated non-integer number of subjects • Futility probabilities and futility bounds corrected for two-sided beta-spending function approach • getRawData(): the resulting data.frame now contains the correct stopStage and lastObservationTime (formerly observationTime) • deltaWT is provided with three decimal points for typeOfDesign = “WToptimum” • Generic as.data.frame functions improved • testthat version changed to edition 3 • The rpact source code has been published on GitHub and the bug report link has been changed to https://github.com/rpact-com/rpact/issues • Minor improvements rpact 3.3.0 New features • Two-sided beta-spending approach with binding and non-binding futility bounds • Delayed response utility added in design specification Improvements, issues, and changes • getSimulationMultiArmSurvival(): single stage treatment arm specific event numbers account for selection procedure • User defined selection function can be used in getSimulationEnrichmentRates() and getSimulationEnrichmentSurvival() • Design summary extended by information of getDesignCharacteristics() • getSimulationSurvival(): the result object now contains the new parameter overallEventsPerStage, which contains the values previously given in eventsPerStage (it was “cumulative” by mistake); eventsPerStage contains now the non-cumulative values as expected • Minor improvements rpact 3.2.3 • Performance of group sequential and Fisher’s combination test designs improved • ‘register’ storage class specifier removed from C++ sources • Minor improvements rpact 3.2.2 • Performance of group sequential and Fisher’s combination test designs improved (by translating from R to C++) • Numerical issue in analysis time calculation for survival design in specific cases resolved • The internally used minimum quantile function value was changed from stats::qnorm(1e-323) to stats::qnorm(1e-100) • Unit tests extended • Minor improvements rpact 3.2.1 • C++ warning “using integer absolute value function ‘abs’ when argument is of floating point type” under r-devel-linux-x86_64-debian-clang removed • getDataset: support of emmeans result objects as input improved • getAnalysisResults(): issue with zero values in the argument ‘userAlphaSpending’ fixed • Minor improvements rpact 3.2.0 New features • Simulation tools for enrichment design testing means, rates, and hazard ratios: function getSimulationEnrichmentMeans(), getSimulationEnrichmentRates(), getSimulationEnrichmentSurvival() available for simulation of enrichment designs; note that this is a novel implementation, hence experimental • getDesignGroupSequential() / getDesignInverseNormal(): new typeOfDesign = “noEarlyEfficacy” added Improvements, issues, and changes • getSimulationSurvival(): bug fixed for accruallIntensity = 0 at some accrual intervals • For observed conditional power, standardized theta not truncated to 0 any more in getSimulationMultiArmMeans(), getSimulationMultiArmRates(), and getSimulationMultiArmSurvival() • Conditional power calculation for analysis rates takes into account differently the null value of condErrorRate • Function testPackage(): a problem with downloading full set of unit tests under Debian/Linux has been fixed • Generic function kable() improved: optional knitr::kable arguments enabled, e.g., format • In print and summary output, “overall” renamed to “cumulative” if means, stDevs, or rate are calculated over stages rather than stage-wise • getDataset: support of emmeans result objects as input improved • Numerical accuracy of qnorm() calculations improved • Analysis enrichment results now support the generic function as.data.frame() • Naming of the stage results parameters in the print output improved • New example data added: “rawDataTwoArmNormal” • Issue in summary fixed: earlyStop and rejectPerStage were no longer displayed • Minor improvements rpact 3.1.1 • Performance of two-sided Pampallona & Tsiatis design improved • 12 example datasets added • Sample sizes in plots now have the same format as in print output; format can be changed using setOutputFormat() • getDataset supports emmeans result objects as input • Print output of simulation results improved • Added dependency on R >= 3.5.0 because serialized objects in serialize/load version 3 cannot be read in older versions of R • Plot label interface for configuration via the rpact Shiny app implemented • Minor improvements rpact 3.1.0 New features • Analysis tools for enrichment design testing means, rates, and hazard ratios: function getAnalysisResults() generalized for enrichment designs; function getDataset() generalized for entering stratified data; manual extended for enrichment designs • Automatic boundary recalculations during the trial for analysis with alpha spending approach, including under- and over-running: setup via the optional parameters ‘maxInformation’ and ‘informationEpsilon’ in function getAnalysisResults() • The new function getObjectRCode() (short: rcmd()) returns the original R command which produced any rpact result object, including all dependencies • getWideFormat() and getLongFormat() return a dataset object in wide format (unstacked) or long format (narrow, stacked) • Generic function kable() returns the output of an rpact result object formatted in Markdown. • Generic function t() returns the transpose of an rpact result object Improvements, issues, and changes • New argument ‘plotSettings’ added to all plot functions • Summary for design, simulation, and analysis unified and extended • Issue in getDesignFisher() fixed: getDesignFisher(method = "noInteraction", kMax = 3) and getDesignFisher(method = "noInteraction") produced different results • ‘normalApproximation’ default value changed to TRUE for multi-arm analysis of rates • Repeated p-values: in search algorithm, upper bound of significance level corrected when considering binding futility bounds • testPackage(): the default call is now running only a small subset of all available unit tests; with the new argument ‘connection’ the owners of the rpact validation documentation can enter a ‘token’ and a ‘secret’ to get full access to all unit tests • Scaling of grid plots improved • Minor improvements rpact 3.0.4 • Beta-spending function approach with binding futility bounds • Pampallona & Tsiatis design with binding and non-binding futility bounds • Argument ‘accrualIntensityType’ added to getSampleSizeSurvival(), getSimulationSurvival(), getNumberOfSubjects(), and getEventProbabilities() • Specification of Weibull survival times possible through definition of hazard rates or medians in simulation tool • Minor improvements rpact 3.0.3 • New utility functions getParameterCaption() and getParameterName() implemented • Design parameters added to simulation print output • Generic function as.matrix() improved for several result objects • Issue in getAvailablePlotTypes() for sample size and power results fixed • Issue for getDesignFisher(kMax = 1) in getSimulationMultiArm...() fixed • getSimulationMultiArmSurvival(): correlation of log-rank statistics revised and improved • getSimulationMultiArmMeans(): name of the first effectMeasure option “effectDifference” changed to “effectEstimate” • getSimulation[MultiArm][Means/Rates/Survival](): argument ‘showStatistics’ now works correctly and is consistently FALSE by default for multi-arm and non-multi-arm • getSimulation[MultiArm]Survival(): generic function summary() improved • getAnalysisResults(): generic function summary() improved • getAccrualTime(): improved and new argument ‘accrualIntensityType’ added • Header text added to design summaries • getSampleSizeSurvival(): field ‘studyDurationH1’ in result object was replaced by ‘studyDuration’, i.e., ‘studyDurationH1’ is deprecated and will be removed in future versions • Minor changes in the inline help and manual • Minor improvements rpact 3.0.2 • getSimulationMultiArmSurvival(): plannedEvents redefined as overall events over treatment arms • getStageResults(): element overallPooledStDevs added; print output improved • Unit tests improved: test coverage and references to the functional specification optimized • plot type 13 of getSampleSizeSurvival() with user defined lambdas with different lengths: issue fixed • Minor improvements rpact 3.0.1 • Vignette “rpact: Getting Started” included into the package • New summary output option “rpact.summary.width” added • Generic function summary() improved for several result objects • Result output of function testPackage() improved • getSimulationMultiArm[Means/Rates/Survival](): stage index corrected for user defined calcSubjectsFunction or calcEventsFunction • getSimulationMultiArmRates(): adjustment for identical simulated rates to account for ties • getSimulationMultiArmSurvival(): corrected correlation of test statistics • Output formatting improved • Minor improvements rpact 3.0.0 New features • Simulation tools for multi-arm design testing means, rates, and hazard ratios • Analysis tools for multi-arm design testing means, rates, and hazard ratios • getSimulationRates(): exact versions for testing a rate (one-sample case) and equality of rates (two-sample case) • getDataset: multi-arm datasets for means, rates, and survival data • Analysis of fixed designs • Summary for analysis and simulation result objects newly implemented • Summary for most rpact result objects substantially improved and enhanced • getEventProbabilities(): plot of result object • getNumberOfSubjects(): plot of result object • Visual comparison of two designs: plot(design1, design2) • Functions setOutputFormat and getOutputFormat implemented: definition of user defined output formats • getSimulationMeans(): thetaH1 and stDevH1 can be specified for assessment of sample size recalculation (replaces thetaStandardized) • getSimulationSurvival(): separate p-values added to the aggregated simulation data for Fisher designs • getSimulationMeans(), getSimulationRates(): Cumulated number of subjects integrated in getData object • getSimulation[MultiArm][Means/Rates/Survival](): new logical argument ‘showStatistics’ added • Example datasets (csv files) added to the package • plot type “all”: plot all available plots of an object in one step using plot(x, type = "all") • plot type improved: ‘type’ now can be a vector, e.g., plot(x, type = c(1, 3)) • plot(x, grid = 1): new plot argument ‘grid’ enables the plotting of 2 or more plots in one graphic Improvements, issues, and changes • getAnalysisResults(): list output implemented analogous to the output of all other rpact objects • getAnalysisResults(): the following stage result arguments were removed from result object because they were redundant: effectSizes, testStatistics, and pValues. Please use the ‘.stageResults’ object to access them, e.g., results$.stageResults$effectSizes • getAnalysisResults(): the following design arguments were removed from result object because they were redundant: stages, informationRates, criticalValues, futilityBounds, alphaSpent, and stageLevels. Please use the ‘.design’ object to access them, e.g., results$.design$informationRates • Optional argument ‘stage’ removed from functions getConditionalPower, getConditionalRejectionProbabilities, getFinalPValue, getRepeatedPValues, and getTestActions • Function testPackage improved, e.g., results will be displayed now on screen • Help system renewed and approved, e.g., help for corresponding generic functions (e.g., plot) linked where applicable • Function getPiecewiseSurvivalTime improved: pi1 and pi2 will not be calculated any longer for lambda- or median-based definitions; eventTime only required for pi-based definitions • plot(x, showSource = TRUE) improved for all rpact result objects x • Performance of plotting analysis results of Fisher designs improved • getSimulationRates(): issue for futility stopping for Fisher’s combination test fixed • getSimulationSurvival(): issue for expected number of events fixed • getSimulationSurvival(): if eventsNotAchieved > 0, rejection/futility rate and analysis time is estimated for valid simulation runs • getSimulationSurvival(): output improved for lambda1/median1/hazardRatio with length > 1 • getSampleSizeSurvival(): calculation of the maximum number of subjects given the provided argument ‘followUpTime’ improved • getPiecewiseSurvivalTime(): delayed response via list-based piecewiseSurvivalTime definition enabled • getAccrualTime() / getSimulationSurvival(): issue with the calculation of absolute accrual intensity by given relative accrual intensity fixed • getRawData(): issue for multiple pi1 solved • Implementation of the generic function ‘names’ improved • Test coverage improved: lots of new unit tests added • License information in the DESCRIPTION file corrected: changed from GPL-3 to LGPL-3 • Minor improvements rpact 2.0.6 • Boundaries on effect scale for testing means now accounts for the unknown variance case • getAnalysisSurvival(): calculation of stage wise results not more in getStageResults • getStageResults(): the calculation of ‘effectSizes’ for survival data and thetaH0 != 1 was corrected • getDataset() of survival data: issue with the internal storage of log ranks fixed • Sample size plot: issue for kMax = 1 fixed • getSampleSizeSurvival() with piecewise survival time: issue with calculation of ‘maxNumberOfSubjects’ for given ‘followUpTime’ fixed • Internal Shiny app interface improved • Minor improvements rpact 2.0.5 • Assumed median survival time: get[SampleSize/Power/Simulation]Survival now support direct input of arguments ‘median1’ and ‘median2’ • Output of generic function summary() improved • Plot type 5 of getPower[…] and getSimulation[…] objects improved • Output of getSampleSizeSurvival() with given maxNumberOfSubjects improved • Output of get[SampleSize/Power]Survival() for Kappa != 1 improved • Assert function for minNumberOfSubjectsPerStage corrected for undefined conditionalPower • Two-sided boundaries on effect scale in survival design improved • Error in summary() for getDesign[...]() fixed • Other minor improvements rpact 2.0.4 • Incorrect output of function summary() fixed for getSampleSize[...]() and getPower[...]() • as.data.frame: default value of argument ‘niceColumnNamesEnabled’ changed from TRUE to FALSE rpact 2.0.3 New features • Plot function for Fisher design implemented • Generic function summary() implemented for getDesign[...](), getSampleSize[...](), getPower[...](), and getSimulation[...]() results: a simple boundary summary will be displayed Improvements, issues, and changes • Generic function as.data.frame improved for getDesign[...](), getSampleSize[...](), getPower[...](), and getSimulation[...]() results • Output of getStageResults() improved • Improvements for Shiny app compatibility and better Shiny app performance • Repeated p-values are no longer calculated for typeOfDesign = “WToptimum” • Piecewise survival time improved for numeric definition: median and pi will not be calculated and displayed any longer • Plot: legend title and tick mark positioning improved; optional arguments xlim and ylim implemented • Sample size/power: usage of argument ‘twoSidedPower’ optimized • Performance of function rpwexp/getPiecewiseExponentialRandomNumbers improved (special thanks to Marcel Wolbers for his example code) • For group sequential designs a warning will be displayed if information rates from design not according to data information • Format for output of standard deviation optimized rpact 2.0.2 • Minor corrections in the inline help • Labeling of lower and upper critical values (effect scale) reverted • Simulation for Fisher’s combination test corrected • Parameter minNumberOfAdditionalEventsPerStage renamed to minNumberOfEventsPerStage • Parameter maxNumberOfAdditionalEventsPerStage renamed to maxNumberOfEventsPerStage • Parameter minNumberOfAdditionalSubjectsPerStage renamed to minNumberOfSubjectsPerStage • Parameter maxNumberOfAdditionalSubjectsPerStage renamed to maxNumberOfSubjectsPerStage • Output of function getAccrualTime() improved • Validation of arguments maxNumberOfIterations, allocation1, and allocation2 added: check for positive integer • Function getSampleSizeSurvival() improved: numeric search for accrualTime if followUpTime is given • Default value improved for analysis tools: if no effect was specified for conditional power calculation, the observed effect is selected • Fixed: function getDataset produced an error if only one log-rank value and one event was defined • Number of subjects per treatment arm are provided in output of simulation survival if allocation ratio != 1 • Function getSimulationSurvival improved: first value of minNumberOfEventsPerStage and maxNumberOfEventsPerStage must be NA or equal to first value of plannedSubjects rpact 2.0.1 • Function base::isFALSE replaced to guarantee R 3.4.x compatibility • C++ compiler warning on r-devel-linux-x86_64-debian-clang system removed • C++ compiler error on r-patched-solaris-x86 system fixed rpact 2.0.0 New features • Power calculation at given or adapted sample size for means, rates and survival data • Sample size and power calculation for survival trials with piecewise accrual time and intensity • Sample size and power calculation for survival trials with exponential survival time, piecewise exponential survival time and survival times that follow a Weibull distribution • Simulation tool for survival trials; our simulator is very fast because it was implemented with C++. Adaptive event number recalculations based on conditional power can be assessed • Simulation tool for designs with continuous and binary endpoints. Adaptive sample size recalculations based on conditional power can be assessed • Comprehensive and unified tool for performing sample size calculation for fixed sample size design • Enhanced plot functionalities Improvements, issues, and changes • Fisher design, analysis of means or rates, conditional rejection probabilities (CRP): calculation issue fixed for stage > 2 • Call of getSampleSize[Means/Rates/Survival] without design argument implemented • For all set.seed() calls ‘kind’ and ‘normal.kind’ were specified as follows: kind = “Mersenne-Twister”, normal.kind = “Inversion” • Minor code optimizations, e.g. ‘return()’ replaced by ‘return(invisible())’ if reasonable • Bug in readDatasets() fixed: variable names ‘group’ and ‘groups’ are now accepted • “Overall reject per stage” and “Overall futility per stage” renamed to “Overall reject” and “Overall futility”, respectively (also variable names) • Labels “events..” and “..patients..” consistently changed to “# events..” and “# patients…”, respectively • Output format for ‘allocationRatioPlanned’ specified • Method ‘show’ of class ‘ParameterSet’ expanded: R Markdown output features implemented • getSampleSizeSurvival(): argument ‘maxNumberOfPatients’ was renamed in ‘maxNumberOfSubjects’ • Result output, inline help and documentation: the word ‘patient’ was replaced by ‘subject’ • Variables ‘numberOfSubjectsGroup1’ and ‘numberOfSubjectsGroup2’ were renamed to ‘numberOfSubjects1’ and ‘numberOfSubjects1’ • Final p-values for two-sided test (group sequential, inverse normal, and Fisher combination test) available • Upper and lower boundaries on effect scale for testing rates in two samples rpact 1.0.0
{"url":"https://cran.case.edu/web/packages/rpact/news/news.html","timestamp":"2024-11-05T03:23:28Z","content_type":"application/xhtml+xml","content_length":"37375","record_id":"<urn:uuid:02422f42-47e6-4990-829a-6943968d9523>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00375.warc.gz"}
dask.array.bitwise_not(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature]) = <ufunc 'invert'>¶ This docstring was copied from numpy.invert. Some inconsistencies with the Dask version may exist. Compute bit-wise inversion, or bit-wise NOT, element-wise. Computes the bit-wise NOT of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator ~. For signed integer inputs, the bit-wise NOT of the absolute value is returned. In a two’s-complement system, this operation effectively flips all the bits, resulting in a representation that corresponds to the negative of the input plus one. This is the most common method of representing signed integers on computers [1]. A N-bit two’s-complement system can represent every integer in the range \(-2^{N-1}\) to \(+2^{N-1}-1\). Only integer and boolean types are handled. outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. wherearray_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. For other keyword-only arguments, see the ufunc docs. outndarray or scalar Result. This is a scalar if x is a scalar. numpy.bitwise_not is an alias for invert: >>> np.bitwise_not is np.invert Wikipedia, “Two’s complement”, https://en.wikipedia.org/wiki/Two’s_complement We’ve seen that 13 is represented by 00001101. The invert or bit-wise NOT of 13 is then: >>> x = np.invert(np.array(13, dtype=np.uint8)) >>> x >>> np.binary_repr(x, width=8) The result depends on the bit-width: >>> x = np.invert(np.array(13, dtype=np.uint16)) >>> x >>> np.binary_repr(x, width=16) When using signed integer types, the result is the bit-wise NOT of the unsigned type, interpreted as a signed integer: >>> np.invert(np.array([13], dtype=np.int8)) array([-14], dtype=int8) >>> np.binary_repr(-14, width=8) Booleans are accepted as well: >>> np.invert(np.array([True, False])) array([False, True]) The ~ operator can be used as a shorthand for np.invert on ndarrays. >>> x1 = np.array([True, False]) >>> ~x1 array([False, True])
{"url":"https://docs.dask.org/en/stable/generated/dask.array.bitwise_not.html","timestamp":"2024-11-08T12:14:02Z","content_type":"text/html","content_length":"39220","record_id":"<urn:uuid:9c8306bd-7650-4072-a7f2-d048604288ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00703.warc.gz"}
Calculating Refinance Mortgage Payment Enter the specifics about your current mortgage, along with your current appraised value, new loan term, rate and closing costs. This will determine how much. Number of payments over the loan's lifetime: Multiply the number of years in your loan term by 12 (the number of months in a year) to get the number of payments. How can refinancing lower my monthly mortgage payment? · Lock in a lower interest rate - The higher your interest rate, the more you pay for your mortgage, both. Use this simple refinance calculator to compare your existing mortgage and see how much you could save by refinancing. See your monthly and lifetime savings. $31k*. *There is NO WARRANTY, ACTUAL OR IMPLIED, for the accuracy of this information. The Refinance Calculator provides an estimate of only the principal and. Tips for Using the Cash-Out Calculator · Your home's current market value — an estimate of the amount it would sell for in the current real estate market · Your. To qualify for a refinance, take a look at your debt-to-income ratio. The new monthly mortgage payment shouldn't be more than 30% of your monthly income. To. Free calculator to plan the refinancing of loans by comparing existing and refinanced loans side by side, with options for cash out, mortgage points. Calculate your estimated monthly mortgage payments and potential savings with our easy-to-use refinance calculator. If you like what you see, apply online. Refinancing is estimated to lower your monthly payment by $ and save you $56, in total interest. Your break-even point is approximately 16 months. The calculator takes into account your interest rate, length of the loan, the amount of time you plan to stay in your home, origination and closing costs and. Use this refinance calculator to see if refinancing your mortgage is right for you. Calculate estimated monthly payments and rate options for a variety of. Monthly savings is the amount you can save each month by refinancing your mortgage at a lower interest rate. You can calculate this by subtracting your new. This Refinance Calculator makes it easy to determine your potential savings from refinancing your mortgage. It lets you takes into account such things as. For loans secured with less than 20% down, PMI is estimated at % of your loan balance each year. Monthly PMI is calculated by multiplying your starting loan. Enter the specifics about your current mortgage, along with your current appraised value, new loan term, rate and closing costs. This will determine how much. Award Winning Calculator determines if Refinancing makes sense using live mortgages and real data. Find out now exactly how much you can save or cash out. Refinancing a mortgage? Bankrate's refinance calculator is an easy-to-use tool that helps estimate how much you could save by refinancing. A mortgage calculator that displays refinancing options for lowering monthly mortgage payments. How does the refinance calculator work? · Current interest rate–this is the rate on your current loan. · Current principal and interest payment–the amount you. Interested in refinancing to a lower rate or lower monthly payment? With NerdWallet's free refinance calculator, you can calculate your new monthly payment. Use this calculator to estimate how much it will cost you to refinance your home loan. To calculate the value of refinancing your home, compare the monthly payment of your current loan to the proposed payment on the new loan. Then use an. Use our cash-out refinance calculator to help you determine how much you can cash out and what your new mortgage payment will be after refinancing. Your total estimated refinancing costs will be: $6, · Loan Info · Choose a term length · Taxes & Insurance · Origination Fees · Other Settlement Services. If you take out a year fixed rate mortgage, this means: n = 30 years x 12 months per year, or payments. Our simple mortgage calculator with taxes and. Ready to see how much you can save on your monthly mortgage with a refinance? Use our free calculators to run the numbers. Refinancing comes with closing costs, like when you originally took out your mortgage. A refinance costs about $5,, on average, but it varies depending on. What is the loan term? The length of the loan will determine how much total interest you pay on your mortgage. If you have lived in your home for five to Determine what you could pay each month by using this mortgage calculator to calculate estimated monthly payments and rate options for a variety of loan. Use this mortgage refinance calculator to estimate the costs and savings of refinancing and compare the potential savings with your current mortgage. PNC's mortgage refinance calculator can help estimate how much you can save by refinancing your mortgage & determine if now is the right time to refinance. Simply enter your current loan details into our mortgage refinancing calculator and the projected details of your new loan. Our refi calculator will estimate. Get started by estimating how much you will save using our mortgage refinance calculator. Input your current home value and the refinance amount to view the. Can Too Many Bananas Cause Leg Cramps | Holy Grail Investments
{"url":"https://stornik.ru/gainers-losers/calculating-refinance-mortgage-payment.php","timestamp":"2024-11-05T09:52:34Z","content_type":"text/html","content_length":"12243","record_id":"<urn:uuid:9fe68f57-935c-4ed8-8e39-94377947c4a2>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00340.warc.gz"}
Our users: The Algebrator software helped me very much. I thought the step by step solving of equations was the most helpful. It was easy to use and easy to understand. I would definitely recommend this to anyone. Thanks, Annie Hines Natalie Olive, MO Math has never been easy for me to grasp but this program makes it easy to understand. Thanks! B.C., Malta-EU As a single mom attending college, I found that I did not have much time for my daughter when I was struggling over my Algebra homework. I tried algebra help books, which only made me more confused. I considered a tutor, but they were just simply to expensive. The Algebrator software was far less expensive, and walked me through each problem step by step. Thank you for creating a great product. Michael, OH I recommend this program to every student that comes in my class. Since I started this, I have noticed a dramatic improvement. Tami Garleff, MI This is the best software I have come across in the education field. Dania J. Guth, KS Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among Search phrases used on 2013-12-07: • rational expression problem solving • simple inverse operations worksheets • math dictionary 6-9th grade • "summation rules" tutorial • understanding algebra word problems • how to use log on ti-89 • ks2 reading scales worksheets • chemistry addison wesley practice worksheet answers • perimeter with fractions worksheet • finding the next root in algebra problems • square roots formula in visual basic • division worksheet ks2 bbc • math calculator radical index • help with square roots for kids! • Linear Algebra Questions • maths gcse algebra fractions simplify help • program code to solve 10 nonlinear equations simultaneously on matlab? • simplifying complex fractions with variables using lcd • Factoring Expressions Online Calculators • formula to find the square root • algebraator • the worlds hardest math equation • solution to nonlinear differential equation • solve linear system • solving "two variable" "absolute value equations" • cubic root problem solving • how to solve first order partial differential equations • free online Algebra, Structure and Method Prentice Hall Book 2 • rational expressions/Ti-84 • terms in algebraic expressions • 7th grade math formula chart • mixed number to decimal • ged pratice exam memory • how calculate: log base 2 • online polar graphing calculator • geometry review sheet • how is the saxon teacher algebra 2 tutor program • lessons on algebra 1 grade 8 • examples of math trivia • algebraic equations flow chart • solve for slope with ti 88 • factoring polynomials with a cubed term • solver boolean algebra • practice problems in abstract algebra and solutions • glencoe algebra 1 free online book • maryland sample ged algebra questions • find y-intercept on graphing calc • square root formulas • free aptitude paper download • answers to absolute equations calculator • simplify radical expressions calculator variable free online • solving an second order ordinary differential equation • homework solver with explanation free • how do you simplify a cube • find the lcd calculator • laplace s domain ti89 • free download of mats formulas of class x • free intermediate algebra or dummies logarithms and ph formulas • plotting a parabola in mastering physics • calculate square root excel • coordinates +ks2 +worksheet • ronald larson free download calculus • graphing algebra quandrants basics • Foundations for Algebra: Year 2 homework answers • solving simultaneous equations in excel • iowa test pre algebra sample • what is the best algebra software for solving all types of homework • convert square root • rational expression calculator • fun activities with negative and positive integers • algebra formulas and real life • answers to algebra 1 McDougal Littell Mathematics workbook • convert fractions to decimal in TI 89 • multiplying and dividing decimal test • polar coordinates free worksheets • scale worksheet grade 6 • iowa algebra aptitude test example questions • ALGEBRATOR FOR VISTA • fractions +find lcd • printable third grade math • formula for fatorization • Free online help for solving word problems • maths on factoring year 8 • long hand calculator • polynomial equation examples • science second grade appti question and answer • products of a chemical equation calculator • solve linear systems by adding or subtracting • TI graphing calculator online • solve algebra equations • proportions worksheets • substitution method • ti-84 software fractions • free secondary 2 mathematics worksheet • evaluating exponent expressions • ticalc quadtratic • equation for a curved line • balancing equations calculator • square root finder
{"url":"https://mathworkorange.com/math-help-calculator/trigonometry/least-common-multiple-with.html","timestamp":"2024-11-03T04:17:31Z","content_type":"text/html","content_length":"88202","record_id":"<urn:uuid:61670d29-1d05-4652-b462-dfae1253e2fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00789.warc.gz"}
Formulation in compressibleInterFoam Originally Posted by Thanks Richard, but I think we're talking at cross-purposes. In the source codes (compressibleInterFoam) it is indeed if (dgdt[celli] > 0.0 && alpha1[celli] > 0.0) Sp[celli] -= dgdt[celli]*alpha1[celli]; Su[celli] += dgdt[celli]*alpha1[celli]; So it has nothing to do with your haste. For this one, I wasn't wondering about where dgdt is evaluated. Rather I was wondering about the origin and the meaning of the peace of code quoted above. Sp += foo; Su -= foo; looks "wrong" for me, because Su must be something like Su~Sp*X, where X is the field we're solving for. If alpha1 would have dimensions() different from (0 0 0 0 0 0 0), this wouldn't work. But since alpha1 is dimless, it works. I hope now you understand my confusion. The quoted code seems to be wrong, but it also seems to perform right. Until this point, I can follow your derivation, but I'm still missing the last step to derive Sp & Su for (dgdt[celli] > 0.0 && alpha1[celli] > 0.0) Best regards, Ok now I understand, I think you appear to have overlooked the factor "alpha1" that will multiply 'Sp' in fvm::Sp (Sp, alpha1) cf. MULESTemplates.C where the matrix equation to be solved is: is the following: fvScalarMatrix psiConvectionDiffusion fvm::ddt(rho, psi) + fv::gaussConvectionScheme<scalar>(mesh, phi, UDs).fvmDiv(phi, psi) //- fv::gaussLaplacianScheme<scalar, scalar>(mesh, CDs, snGrads) //.fvmLaplacian(Dpsif, psi) - fvm::Sp(Sp, psi) - Su alpha1 * alpha2 * dgdt = ( alpha1 - alpha1^2 ) *dgdt so that if dgdt> 0 we get Sp[celli] -= dgdt[celli]*alpha1[celli]; // "-alpha1^2 *dgdt " contribution Su[celli] += dgdt[celli]*alpha1[celli]; // alpha1 * dgdt contribution and where we remember to extract the factor alpha1 from Sp, this is inserted later in the implicit term "fvm::Sp(Sp, alpha1)". Check out the programmer's manual for reference to fvm::Sp(..). Good luck, Richard K.
{"url":"https://www.cfd-online.com/Forums/openfoam-solving/70965-formulation-compressibleinterfoam-2.html","timestamp":"2024-11-14T22:10:35Z","content_type":"application/xhtml+xml","content_length":"180154","record_id":"<urn:uuid:525a4fd8-7151-4994-9de2-7fbca1cf0d46>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00558.warc.gz"}
Defining an Algorithm Defining an Algorithm - Part 2 At the end of my Defining an Algorithm article, I said that my next instalment would be to go through the example of a mathematical implementation of Euclid's algorithm that is at the end of the Algorithms section in the first chapter of The Art of Computer Programming - Volume 1. And that is exactly what I am going to do. If you've not already read the previous instalment, I suggest that you do that first in order to fully understand the example we are describing in this article. So, let's get straight to it. The next line in the chapter, after our last article left off, states the following; Algorithm E may, for example, be formalized in these terms as follows: Let Q be the set of all singletons (n), all ordered pairs (m, n), and all ordered quadruples (m, n, r, 1), (m, n, r, 2), and (m, n, p, 3), where m, n, and p are positive integers and r is a nonnegative integer. Let I be the subset of all pairs (m, n) and let Ω be the subset of all singletons (n). Now, first things first, I am going to presume you understood the workings and purpose of Algorithm E (Euclid's algorithm) explained to us earlier in the chapter. If you haven't already, this would be a good time to go back and familiarize yourself with it. So, as we know from the previous article Q represents a collection of all possible states of our algorithm, including our inputs, outputs and everything between the two. Q contains I and Ω, which are out inputs and outputs respectively. All Knuth is doing at this point is declaring what variables (in groups) represent the various stages of Euclid's Algorithm. As we know Euclid's Algorithm takes two positive integers, which are represented together as the pair (m, n), as our input. When it's done it gives us a single integer, the greatest common divisor of m and n, which Knuth declares as being represented by the singleton (n). So what do we have left unaccounted for at this point? Well, we have the quadruples (m, n, r, 1), (m, n, r, 2), and (m, n, p, 3). They represent the steps in between, the states of our variables, during the process of working out our largest common divisor. Then the difficult bit, the computational rule is defined. f((m, n)) = (m, n, 0, 1); f((n)) = (n); f((m, n, r, 1)) = (m, n, remainder of m divided by n, 2); f((m, n, r, 2)) = (n) if r = 0, (m, n, r, 3) otherwise; f((m, n, p, 3)) = (n, p, p, 1). When I first looked at the above, I remember thinking that I would never understand it. The only way I managed to finally get to grips with it was by attempting to use the functions to get an expected result. Once I've described how it all works and what it means I implore you to try and use it to work through a few examples yourself until it sticks. Human beings learn through doing, not just through reading. Each statement, ending in a semicolon (' ; '), is a definition of what the function will do with certain arguments. Each argument is of course one of the states of the computation, and as we know from Knuth's formal definition we pass our states back into our function to get the next state, until we get our end results or output (which will not change if passed back into the function). Our first statement f((m, n)) = (m, n, 0, 1); defines what our function (or computation rule) does with the initial input m & n. So this is where we start, and from looking at it we can see that, taking our input ((m, n) pair from I) our function converts it to one of the quadruples (one of the intermediate states). Nothing has been done to m and n, so they remain the same, but now we also have 0 and 1. The 0 is just filler, and the 1 seems to represent the step to be performed next. So, now we have a quadruple which needs to be passed back into our function to get the next state. At this point, take a look at the formula above, and look for what our function would do to our result. If we pass our 0 into the variable r then the statement on the next line down fits the bill perfectly: f((m, n, r, 1)) = (m, n, remainder of m divided by n, 2);. You can see at this stage that we've done the division to find the remainder and now, we have a new intermediate quadruple, which can be passed through our function again using the definition on the third line of the formula. The third line statement checks whether the remainder calculated in our last step is 0, if so it converts our quadruple into the singleton (n). If that is the case then that is the end of our process. Passing the singleton (n) back into the function will not change it. (n) is our output (the largest common divisor). If our remainder is not 0 then we get another quadruple, which is passed into the statement on the last line of the formula. All this does is re-arrange the variables to pass them back to our function as f((m, n, r, 1)), and we start again. The re-arranging of the variables is essentially the replacing of m with our old n value, and then replacing n with the remainder (as per our original definition of Euclid's Algorithm). Now at this point, in order for it to actually sink in, I think it's prudent to go through an actual example with our example algorithm. Imagine we want to find the largest common divisor of our pair (6, 4). This is our input (m, n). Passing this into our input gives us f((6, 4)) = (6, 4, 0, 1);. This is our second line in the formula. Pass our new quadruple to the third line and we get f((6, 4, 0, 1)) = (6, 4, 2, 2); where the first 2 is the remainder of 6/4. So now we pass it back in again and get f((6, 4, 2, 2)) = (6, 4, 2, 3); because the remainder was not 0. And lastly, it get passed to the bottom line in the formula and we get f((6, 4, 2, 3)) = (4, 2, 2, 1);. Now we have re-arranged our variables so that m <- n and n <- r and passed it back to the second line in the formula. And now we go around again. Now we do f((4, 2, 2, 1)) = (4, 2, 0, 2) because 0 is the remainder of 4/2, and then f((4, 2, 0, 2)) = (2) because our remainder was 0 and we therefore have our answer which is 2. This all looks a little something like this; f((6, 4)) = (6, 4, 0, 1); f((6, 4, 0, 1)) = (6, 4, 2, 2); f((6, 4, 2, 2)) = (6, 4, 2, 3); f((6, 4, 2, 3)) = (4, 2, 2, 1); f((4, 2, 2, 1)) = (4, 2, 0, 2); f((4, 2, 0, 2)) = (2); f((2)) = (2); It's worth restating at this point, that the last number in the quadruples indicates how the function will work on it next. Before we move onto the next bit, sit down and work through a few of your own examples. If you think you've got it already, try working through something much more complicated than starting with 6 and 4. I think that's enough for this instalment. Next, Knuth mentions that this system doesn't restrict defined Algorithms to strictly effective ones, where there are a finite number of steps or where the steps can always be performed by a human being. He then goes onto describe a second implementation of Algorithm E, using Markov algorithms that doesn't have this issue. In our next instalment, I will be describing what those are and how they apply to the next few paragraphs. I hope this was helpful. Thanks for reading. If you have any criticism, corrections, objections, or (Gods forbid) praise just drop me a comment under the article, and until my next article, happy
{"url":"https://www.rudikershaw.com/articles/computationalmethod2","timestamp":"2024-11-08T07:36:21Z","content_type":"text/html","content_length":"13868","record_id":"<urn:uuid:bca5830b-affa-412f-a7c6-bb3068c0d80b>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00665.warc.gz"}
Hungarian Algorithm Introduction & Python Implementation In this article, I will introduce how to use Hungarian Method to resolve the linear assignment problem and provide my personal Python code solution. So… What is the linear assignment problem? The linear assignment problem represents the need to maximize the available resources (or minimize the expenditure) with limited resources. For instance, below is a 2D matrix, where each row represents a different supplier, and each column represents the cost of employing them to produce a particular product. Each supplier can only specialize in the production of one of these products. In other words, only one element can be selected for each column and row in the matrix, and the sum of the selected elements must be minimized (minimized cost expense). The cost of producing different goods by different producers: Indeed, this is a simple example. By trying out the possible combinations, we can see that the smallest sum is 13, so supplier A supplies Bubble Tea, supplier B supplies milk tea, and supplier C supplies Fruit Tea. However, such attempts do not follow a clear rule and become inefficient when applied to large tasks. Therefore, the next section will introduce step by step the Hungarian algorithm, which can be applied to the linear assignment problem. Hungarian Algorithm & Python Code Step by Step In this section, we will show how to use the Hungarian algorithm to solve linear assignment problems and find the minimum combinations in the matrix. Of course, the Hungarian algorithm can also be used to find the maximum combination. Step 0. Prepare Operations First, an N by N matrix is generated to be used for the Hungarian algorithm (Here, we use a 5 by 5 square matrix as an example). import numpy as np cost_matrix = np.random.randint(10,size=(5, 5)) print(f"The cost matrix is:\n", cost_matrix) The above code randomly generates a 5x5 cost matrix of integers between 0 and 10. If we want to find the maximum sum, we could do the opposite. The matrix to be solved is regarded as the profit matrix, and the maximum value in the matrix is set as the common price of all goods. The cost matrix is obtained by subtracting the profit matrix from the maximum value. Finally, the cost matrix is substituted into the Hungarian algorithm to obtain the minimized combination and then remapped back to the profit matrix to obtain the maximized sum value and composition result. import numpy as np #The matrix who you want to find the maximum sum profit_matrix = np.random.randint(10,size=(5, 5)) #Using the maximum value of the profit_matrix to get the corresponding cost_matrix max_value = np.max(profit_matrix) #Using the cost matrix to find which positions are the answer cost_matrix = max_value - profit_matrix print(f"The profit matrix is:\n", profit_matrix, f"\nThe cost matrix is:\n", cost_matrix) The above code randomly generates a 5x5 profit matrix of integers between 0 and 10 and generate a corresponding cost matrix By following the steps above, you can randomly generate either the cost matrix or the profit matrix. Next, we will move into the introduction of the Hungarian algorithm, and for the sake of illustration, the following sections will be illustrated using the cost matrix shown below. We will use the Hungarian algorithm to solve the linear assignment problem of the cost matrix and find the corresponding minimum sum. Example cost matrix: Step 1. Every column and every row subtract its internal minimum First, every column and every row must subtract its internal minimum. After subtracting the minimum, the cost matrix will look like this. Cost matrix after step 1: And the current code is like this: import numpy as np def hungarian_algorithm(mat): dim = mat.shape[0] cur_mat = mat #Step 1 - Every column and every row subtract its internal minimum for row_num in range(mat.shape[0]): cur_mat[row_num] = cur_mat[row_num] - np.min(cur_mat[row_num]) for col_num in range(mat.shape[1]): cur_mat[:,col_num] = cur_mat[:,col_num] - np.min(cur_mat[:,col_num]) def main(): #The matrix who you want to find the maximum sum cost_matrix = np.array([[7, 6, 2, 9, 2], [6, 2, 1, 3, 9], [5, 6, 8, 9, 5], [6, 8, 5, 8, 6], [9, 5, 6, 4, 7]]) ans_pos = hungarian_algorithm(cost_matrix.copy()) if __name__ == '__main__': Step 2.1. Min_zero_row Function Implementation At first, we need to find the row with the fewest zero elements. So, we can convert the previous matrix to the boolean matrix(0 → True, Others → False). Transform matrix to boolean matrix: import numpy as np #Transform the matrix to boolean matrix(0 = True, others = False) zero_bool_mat = (cur_mat == 0) Corresponding Boolean matrix: Therefore, we can use the “min_zero_row” function to find the corresponding row. # zero_mat = boolean, mark_zero = blank list def min_zero_row(zero_mat, mark_zero): The function can be splitted into two steps: #1 The function is used to find the row which containing the fewest 0. #2 Select the zero number on the row, and then marked the element corresponding row and column as False #Find the row min_row = [99999, -1] for row_num in range(zero_mat.shape[0]): if np.sum(zero_mat[row_num] == True) > 0 and min_row[0] > np.sum(zero_mat[row_num] == True): min_row = [np.sum(zero_mat[row_num] == True), row_num] The row which contains the least 0: Third, mark any 0 elements on the corresponding row and clean up its row and column (converts elements on the Boolean matrix to False). The coordinates of the element are stored in mark_zero. # zero_mat = boolean matrix, mark_zero = blank list def min_zero_row(zero_mat, mark_zero): The function can be splitted into two steps: #1 The function is used to find the row which containing the fewest 0. #2 Select the zero number on the row, and then marked the element corresponding row and column as False #Find the row min_row = [99999, -1] for row_num in range(zero_mat.shape[0]): if np.sum(zero_mat[row_num] == True) > 0 and min_row[0] > np.sum(zero_mat[row_num] == True): min_row = [np.sum(zero_mat[row_num] == True), row_num] # Marked the specific row and column as False zero_index = np.where(zero_mat[min_row[1]] == True)[0][0] mark_zero.append((min_row[1], zero_index)) zero_mat[min_row[1], :] = False zero_mat[:, zero_index] = False Hence, the boolean matrix will look like this: The boolean matrix after the first process. The fourth row has been changed to all False. The process is repeated several times until the elements in the boolean matrix are all False. The below picture shows the order in which they are marked. The possible answer composition: Step 2.2. Mark_matrix Function Implementation After getting Zero_mat from the step 2–1, we can check it and mark the matrix according to certain rules. The whole rule can be broken down into several steps: 1. Mark rows that do not contain marked 0 elements and store row indexes in the non_marked_row 2. Search non_marked_row element, and find out if there are any unmarked 0 elements in the corresponding column 3. Store the column indexes in the marked_cols 4. Compare the column indexes stored in marked_zero and marked_cols 5. If a matching column index exists, the corresponding row_index is saved to non_marked_rows 6. Next, the row indexes that are not in non_marked_row are stored in marked_rows Finally, the whole mark_matrx function is finished and then returns marked_zero, marked_rows, marked_cols. At this point, we will be able to decide the result based on the return information. def mark_matrix(mat): Finding the returning possible solutions for LAP problem. #Transform the matrix to boolean matrix(0 = True, others = False) cur_mat = mat zero_bool_mat = (cur_mat == 0) zero_bool_mat_copy = zero_bool_mat.copy() #Recording possible answer positions by marked_zero marked_zero = [] while (True in zero_bool_mat_copy): min_zero_row(zero_bool_mat_copy, marked_zero) #Recording the row and column indexes seperately. marked_zero_row = [] marked_zero_col = [] for i in range(len(marked_zero)): #step 2-2-1 non_marked_row = list(set(range(cur_mat.shape[0])) - set(marked_zero_row)) marked_cols = [] check_switch = True while check_switch: check_switch = False for i in range(len(non_marked_row)): row_array = zero_bool_mat[non_marked_row[i], :] for j in range(row_array.shape[0]): #step 2-2-2 if row_array[j] == True and j not in marked_cols: #step 2-2-3 check_switch = True for row_num, col_num in marked_zero: #step 2-2-4 if row_num not in non_marked_row and col_num in marked_cols: #step 2-2-5 check_switch = True #step 2-2-6 marked_rows = list(set(range(mat.shape[0])) - set(non_marked_row)) return(marked_zero, marked_rows, marked_cols) If we use the example cost matrix, the corresponding marked_zero, marked_rows, and marked_cols are as follows: 1. marked_zero: [(3, 2), (0, 4), (1, 1), (2, 0), (4, 3)] 2. marked_rows: [0, 1, 2, 3, 4] 3. marked_cols: [] Step 3. Identify the Result At this step, if the sum of the lengths of marked_rows and marked_cols is equal to the length of the cost matrix, it means that the solution of the linear assignment problem has been found successfully, and marked_zero stores the solution coordinates. Fortunately, in the example matrix, we find the answer on the first try. Therefore, we can skip to step 5 and calculate the solution. However, everything is hardly plain sailing. Most of the time, we will not find the solution on the first try, such as the following matrix: After Step 1 & 2, the corresponding matrix, marked_rows, and marked_cols are as follows: The sum of the lengths of Marked_Rows and Marked_Cols is 4 (less than 5). Apparently, the sum of the lengths is less than the length of the matrix. At this time, we need to go into Step 4 to adjust the matrix. def hungarian_algorithm(mat): dim = mat.shape[0] cur_mat = mat #Step 1 - Every column and every row subtract its internal minimum for row_num in range(mat.shape[0]): cur_mat[row_num] = cur_mat[row_num] - np.min(cur_mat[row_num]) for col_num in range(mat.shape[1]): cur_mat[:,col_num] = cur_mat[:,col_num] - np.min(cur_mat[:,col_num]) zero_count = 0 while zero_count < dim: #Step 2 & 3 ans_pos, marked_rows, marked_cols = mark_matrix(cur_mat) zero_count = len(marked_rows) + len(marked_cols) if zero_count < dim: cur_mat = adjust_matrix(cur_mat, marked_rows, marked_cols) return ans_pos Step 4. Adjust Matrix In Step 4, we're going to put the matrix after Step 1 into the Adjust_Matrix function. Taking the latter matrix in Step 3 as an example, the matrix to be modified in Adjust_Matrix is: The whole function can be separated into three steps: 1. Find the minimum value for an element that is not in marked_rows and not in marked_cols. Hence, we can find the minimum value is 1. def adjust_matrix(mat, cover_rows, cover_cols): cur_mat = mat non_zero_element = [] #Step 4-1 Find the minimum value for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: min_num = min(non_zero_element) 2. Subtract the elements which not in marked_rows nor marked_cols from the minimum values obtained in the previous step. def adjust_matrix(mat, cover_rows, cover_cols): cur_mat = mat non_zero_element = [] #Step 4-1 for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: min_num = min(non_zero_element) for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: cur_mat[row, i] = cur_mat[row, i] - min_num 3. Add the element in marked_rows, which is also in marked_cols, to the minimum value obtained by Step 4–1. def adjust_matrix(mat, cover_rows, cover_cols): cur_mat = mat non_zero_element = [] #Step 4-1 for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: min_num = min(non_zero_element) #Step 4-2 for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: cur_mat[row, i] = cur_mat[row, i] - min_num #Step 4-3 for row in range(len(cover_rows)): for col in range(len(cover_cols)): cur_mat[cover_rows[row], cover_cols[col]] = cur_mat[cover_rows[row], cover_cols[col]] + min_num return cur_mat Return the adjusted matrix and repeat Step 2 and Step 3 until the conditions satisfy the requirement of entering Step 5. Step 5. Calculate the Answer Using the element composition stored in marked_zero, the minimum and maximum values of the linear assignment problem can be calculated. The minimum composition of the assigned matrix and the minimum sum is 18. The maximum composition of the assigned matrix and the maximum sum is 43. The code of the Answer_Calculator function is as follows: def ans_calculation(mat, pos): total = 0 ans_mat = np.zeros((mat.shape[0], mat.shape[1])) for i in range(len(pos)): total += mat[pos[i][0], pos[i][1]] ans_mat[pos[i][0], pos[i][1]] = mat[pos[i][0], pos[i][1]] return total, ans_mat The complete code is as follows: import numpy as np def min_zero_row(zero_mat, mark_zero): The function can be splitted into two steps: #1 The function is used to find the row which containing the fewest 0. #2 Select the zero number on the row, and then marked the element corresponding row and column as False #Find the row min_row = [99999, -1] for row_num in range(zero_mat.shape[0]): if np.sum(zero_mat[row_num] == True) > 0 and min_row[0] > np.sum(zero_mat[row_num] == True): min_row = [np.sum(zero_mat[row_num] == True), row_num] # Marked the specific row and column as False zero_index = np.where(zero_mat[min_row[1]] == True)[0][0] mark_zero.append((min_row[1], zero_index)) zero_mat[min_row[1], :] = False zero_mat[:, zero_index] = False def mark_matrix(mat): Finding the returning possible solutions for LAP problem. #Transform the matrix to boolean matrix(0 = True, others = False) cur_mat = mat zero_bool_mat = (cur_mat == 0) zero_bool_mat_copy = zero_bool_mat.copy() #Recording possible answer positions by marked_zero marked_zero = [] while (True in zero_bool_mat_copy): min_zero_row(zero_bool_mat_copy, marked_zero) #Recording the row and column positions seperately. marked_zero_row = [] marked_zero_col = [] for i in range(len(marked_zero)): #Step 2-2-1 non_marked_row = list(set(range(cur_mat.shape[0])) - set(marked_zero_row)) marked_cols = [] check_switch = True while check_switch: check_switch = False for i in range(len(non_marked_row)): row_array = zero_bool_mat[non_marked_row[i], :] for j in range(row_array.shape[0]): #Step 2-2-2 if row_array[j] == True and j not in marked_cols: #Step 2-2-3 check_switch = True for row_num, col_num in marked_zero: #Step 2-2-4 if row_num not in non_marked_row and col_num in marked_cols: #Step 2-2-5 check_switch = True #Step 2-2-6 marked_rows = list(set(range(mat.shape[0])) - set(non_marked_row)) return(marked_zero, marked_rows, marked_cols) def adjust_matrix(mat, cover_rows, cover_cols): cur_mat = mat non_zero_element = [] #Step 4-1 for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: min_num = min(non_zero_element) #Step 4-2 for row in range(len(cur_mat)): if row not in cover_rows: for i in range(len(cur_mat[row])): if i not in cover_cols: cur_mat[row, i] = cur_mat[row, i] - min_num #Step 4-3 for row in range(len(cover_rows)): for col in range(len(cover_cols)): cur_mat[cover_rows[row], cover_cols[col]] = cur_mat[cover_rows[row], cover_cols[col]] + min_num return cur_mat def hungarian_algorithm(mat): dim = mat.shape[0] cur_mat = mat #Step 1 - Every column and every row subtract its internal minimum for row_num in range(mat.shape[0]): cur_mat[row_num] = cur_mat[row_num] - np.min(cur_mat[row_num]) for col_num in range(mat.shape[1]): cur_mat[:,col_num] = cur_mat[:,col_num] - np.min(cur_mat[:,col_num]) zero_count = 0 while zero_count < dim: #Step 2 & 3 ans_pos, marked_rows, marked_cols = mark_matrix(cur_mat) zero_count = len(marked_rows) + len(marked_cols) if zero_count < dim: cur_mat = adjust_matrix(cur_mat, marked_rows, marked_cols) return ans_pos def ans_calculation(mat, pos): total = 0 ans_mat = np.zeros((mat.shape[0], mat.shape[1])) for i in range(len(pos)): total += mat[pos[i][0], pos[i][1]] ans_mat[pos[i][0], pos[i][1]] = mat[pos[i][0], pos[i][1]] return total, ans_mat def main(): '''Hungarian Algorithm: Finding the minimum value in linear assignment problem. Therefore, we can find the minimum value set in net matrix by using Hungarian Algorithm. In other words, the maximum value and elements set in cost matrix are available.''' #The matrix who you want to find the minimum sum cost_matrix = np.array([[7, 6, 2, 9, 2], [6, 2, 1, 3, 9], [5, 6, 8, 9, 5], [6, 8, 5, 8, 6], [9, 5, 6, 4, 7]]) ans_pos = hungarian_algorithm(cost_matrix.copy())#Get the element position. ans, ans_mat = ans_calculation(cost_matrix, ans_pos)#Get the minimum or maximum value and corresponding matrix. #Show the result print(f"Linear Assignment problem result: {ans:.0f}\n{ans_mat}") #If you want to find the maximum value, using the code as follows: #Using maximum value in the cost_matrix and cost_matrix to get net_matrix profit_matrix = np.array([[7, 6, 2, 9, 2], [6, 2, 1, 3, 9], [5, 6, 8, 9, 5], [6, 8, 5, 8, 6], [9, 5, 6, 4, 7]]) max_value = np.max(profit_matrix) cost_matrix = max_value - profit_matrix ans_pos = hungarian_algorithm(cost_matrix.copy())#Get the element position. ans, ans_mat = ans_calculation(profit_matrix, ans_pos)#Get the minimum or maximum value and corresponding matrix. #Show the result print(f"Linear Assignment problem result: {ans:.0f}\n{ans_mat}") if __name__ == '__main__':
{"url":"https://plainenglish.io/blog/hungarian-algorithm-introduction-python-implementation-93e7c0890e15","timestamp":"2024-11-08T03:05:31Z","content_type":"text/html","content_length":"151213","record_id":"<urn:uuid:ae837a75-9c75-4dba-a0ee-0b167651fa68>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00864.warc.gz"}
Lucky Palindromes: when do prime factors of palindromes make palindromes? 3627 Views 10 Replies 23 Total Likes Lucky Palindromes: when do prime factors of palindromes make palindromes? 10 Replies There is one construction method that leads to the majority of lucky palindromes so far. Take a prime whose digits are all ones or zeroes, with at most 9 ones, such that its digit reversal is also prime. Their product will not generate any carries, and is a (doubly lucky) palindrome, such as 121121010242121121383121121242010121121 == 11001000011000001011*11010000011000010011 There can also be a two and up to five ones, that just fits (note the central 9): 10201112123150905132121110201 == 100001001010201*102010100100001 And then there are the repunit primes. Combined with 11 they form palindromes of the form 122...221. 12222222222222222221 == 1111111111111111111*11 == 11*1111111111111111111 This happens to be the only lucky palindrome of length 20. I'll wrap up my search and report back with a followup post. Why do you exclude palindromes with a single (multiple) prime factor? they make for some pretty patterns, and in particular, a lucky palindrome that is a square has a prime factor that is itself a palindrome, like this 17 digit example: 12323244744232321 == 111010111 111010111 which is also a double lucky palindrome, of course. There are higher powers, as well: 343 == 7 7 7 14641 == 11 11 11 11 Great question. Because there is no asymmetry. It is easy to get a palindrome from 2 or more identical palindromes. But it is much harder for not-identical non-palindromic factors to line up to form a palindrome. I am just curious about these not-trivial cases, if that makes sense. ...with a single (multiple) prime factor I see what you mean, those a interesting too. Do you see how frequent they are? Do you have an impression they are sensibly more frequent than cases with multiple distinct factors ? There are not that many. There are just two examples with 17 digits, 10022212521222001 == 100111001 100111001 12323244744232321 == 111010111 111010111 none with 18 digits (there's just a single, very lucky, palindrome with 18 digits) and none so far with 19 digits (search still ongoing): I actually agree, Roman, single multiple prime factor are also very nice. When you done with your search I'd love to see the sequences complete with those numbers and the optimized code. Thank you very much for looking into this. The search can be optimized, and I have been running one for the last two hours or so. There are no lucky palindromes with 16 digits. There are a couple with 17 digits, and I just found one with 18 294507705507705492 == 2231118981118981 11 3 2 2 but no more than five factors so far. Wow, Roman, great, thank you! It would be really interesting to see the optimized code. "...no lucky palindromes with 16 digits" -- do you mean in both ascending and descending order? 294507705507705492 == 223111898111898111322 -- remarkable! And again 5 factors tops - very interesting. I wonder if 7-factors even possible. do you mean in both ascending and descending order? both, I factor the number only once, and then test for both ascending and descending at the same time. They are just a Reverse[] apart. Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use
{"url":"https://community.wolfram.com/groups/-/m/t/2961701","timestamp":"2024-11-06T04:41:10Z","content_type":"text/html","content_length":"144064","record_id":"<urn:uuid:1f4ccefb-c811-404d-aa3b-2d5a43ec9f9f>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00033.warc.gz"}
Published on 23 Mar 2004 Tags #Statistics A series of measurements $x_i = x_1, \dots, x_n$ is a one dimensional list or array which is by nature very space inefficient to store. A histogram is a two dimensional data structure that can be configured to a custom trade off between space and accuracy. The values are sorted into bucket according to their size. There are two properties controlling the trade off that a histogram represents: • Granularity $g$. This is the width of the individual buckets. It controls how much accuracy is lost due to rounding. • Buckets $m$. The number of buckets that the histogram consists of. It denotes the maximum value that can be recorded in the histogram. To construct the histogram, each individual value $x_i$ is assigned to a bucket $b$: $b = \lfloor \frac{x_i}{g}\rfloor$. Although the original series of measurements cannot be reconstructed, an approximation can be generated from the histogram: 1. Calculate the value that a bucket corresponds to: $x_b = b*g$ 2. The value $x_b$ has to be inserted zero or more times corresponding to the number of values in the bucket. Due to the fact that each bucket of a histogram contains an absolute number (i.e. the number of measurements of the corresponding magnitude), it is very useful for visualizing and analyzing the values in a series of measurements. Outliers can be easily identified by looking for buckets with an exceptionally high or low number of values. NOTE: A histogram is not suitable for comparing two or more series of measurements because of its absolute nature. Distributions are a better alternative.
{"url":"https://dille.name/blog/2004/03/23/histograms/","timestamp":"2024-11-12T13:33:03Z","content_type":"text/html","content_length":"9639","record_id":"<urn:uuid:1f0a6c1d-fda7-4643-b404-ba1dc4dbc408>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00268.warc.gz"}
Application of Online Optimization Supporting Problem-Solving & Decision-Making Smart Agriculture Platform [sagp-cu][1]: Maximizing tree production The target is to optimize the number of planted trees per acre such that the yield is maximized. [sagp-cu][2]: Maximizing tree production with more contraints The target is to optimize the number of planted trees per acre such that the yield profit is maximized. [sagp-cu][3]: Cropping plan optimization The target is to optimize the the area of yields such that the profit is maximized. [sagp-cu][4]: Maximizing garden shop revenue The target is to optimize the number of gardening mixture and potting mixture packages such that the income is maximized. [sagp-cu][5]: Minimizing the cost of fruit drink How much of each fruit drink should the catere use to obtain the required composition at minimum cost? Smart Business Platform Resource Allocation Market Economy [sbup-po][1]: Product-mix problem in furniture industry How many tables and chairs should the manufacturer produce in order to maximize the overall profit? [sbup-po][2]: Optimizing a production planning model Minimize overall production cost while Maximize utilization of working hours. [sbup-po][3]: Meet demand of production in the penalty minimization How many products each type does the manufacturer need to produce to minimize the penalty? Resource Allocation Market Economy Smart Communication Platform Resource Allocation Massive MIMO UAV Communication Resource Allocation [scop-ra][1]: Antenna selection To be updated! [scop-ra][2]: Power allocation How much power should we allocate to this BS serving each user for maximizing the total date rate? [scop-ra][3]: Maximizing throughput Determine tower location that minimize total distance from new tower to existing towers. [scop-ra][4]: Energy Efficiency To be updated! Massive MIMO UAV Communication Smart Design Platform Material design Multi-objective Design Material design [sdep-md][1]: A curtain material trim loss problem Determine the production plan that minimizes the curtain material trim loss. [sdep-md][2]: Maximizing the volume of a box How to maximize the volume of the box with the constraints of wall and floor? [sdep-md][3]: Minimizing bounding box for floor plan How to minimize the bounding box for floor design plan with the constraints of legth and width of floor? [sdep-md][4]: A design of water pipeline using GP The target is to find the dimension of container such that the cost of material for designing the container is minimized. [sdep-md][5]: A design of box for storing using GP The task for designers is how to design an optimal box with the aim of cost minimization. [sdep-md][6]: A design of closed tank using GP What are the optimal sizes of the tank to minimize the material cost? [sdep-md][7]: Design Box The target is to find the dimension of container such that the cost of material for designing the container is minimized. [sdep-md][8]: Design Closed Box The target is to find the dimension of box such that the volume of designed box is maximized. [sdep-md][9]: Design Garden The target is to find the dimension of garden such that the cost of material for designing the garden is minimized. [sdep-md][10]: Design Can (Soda can) The target is to find the cheapest cost to ensure the standard volume of can. [sdep-md][11]: Design Tank The target is to find the cheapest cost to ensure the volume of tank. Multi-objective Design Smart Digital Platform Artificial Intelligence Data Analysis Digital Signal Processing Artificial Intelligence Data Analysis Digital Signal Processing Smart Environmental Platform Disaster Relief Green Communication Green Environment Mission-critical Communication Disaster Relief Green Communication Green Environment Mission-critical Communication Smart Healthcare Platform Health Services Resource Allocation Health Services Resource Allocation Smart Industry Platform Control Process Supply Chain Control Process [sinp-cp][1]: Minimizing energy loss using GP How to maximize the rate of energy supplied to the industry and obtain the optimum? [sinp-cp][2]: Minimizing hot-rolling process using GP How to minimize the cost in the hot-rolling process? [sinp-cp][3]: Maximizing the profit for producing allocation How to maximize profit by allocating the optimal number of products to suitable facilities? [sinp-cp][4]: Maximizing the engine power using GP How to maximize the engine power? Supply Chain Smart Transportation Platform Smart Logistic Resource Allocation Smart Routing Smart Logistic [stsp-lg][1]: Minimize the number of trucks for package transportation How many trucks is the minimum number needed for shipping all packages from the warehouse to three areas within a given number of days? [stsp-lg][2]: Minimize transportation time in Logistic What is the minimum transportation time for shipping all packages from the warehouse to three areas with a given number of trucks in company? [stsp-lg][3]: Choosing locations for new plants Find the locations for 3 new assembly plants to minimize the delivery cost. [stsp-lg][4]: Airport taxiway optimization Designing a smart schedule to minimize taxi distance can not only reduce the moving duration of airplanes, but also reduce the cost of energy consumption. Resource Allocation Smart Routing Mathematical Programming Linear Programming Quadratic Programming Conic Programming Geometric Programming Semidefinite Programming Structure Nonconvex Programming Linear Programming [magp-lp][1]: Linear programming Linear programming with scalar variables. [magp-lp][2]: Linear programming (vector) Linear programming with vector variables. [magp-lp][3]: Linear programming (affine) Linear programming with scalar variables and affine functions. [magp-lp][4]: Linear programming (fraction) Linear programming with a fractional objective of affine functions. [magp-lp][5]: Linear programming (norm) Linear programming with norm-l1 function. [magp-lp][6]: Linear programming (norm) Linear programming with norm-infinity function. [magp-lp][7]: Linear programming (inner product) Linear programming with inner product of LP problem. [magp-lp][8]: Linear programming (fraction) Linear programming with a fractional constraint. [magp-lp][9]: Linear programming (exponent) Linear programming with an exponential constraint. [magp-lp][10]: Linear programming (norm) Linear programming with a norm constraint. Quadratic Programming Conic Programming Geometric Programming Semidefinite Programming Structure Nonconvex Programming Your ideas are precious for ODMO development to make great decisions for our life. ODMO Team
{"url":"https://www.opda-odmo.com/","timestamp":"2024-11-02T21:14:58Z","content_type":"text/html","content_length":"459527","record_id":"<urn:uuid:9c70a589-9335-461e-bb35-9a16e6a3ac2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00601.warc.gz"}
5.4 Fitting the Empty Model Course Outline • segmentGetting Started (Don't Skip This Part) • segmentStatistics and Data Science: A Modeling Approach • segmentPART I: EXPLORING VARIATION • segmentChapter 1 - Welcome to Statistics: A Modeling Approach • segmentChapter 2 - Understanding Data • segmentChapter 3 - Examining Distributions • segmentChapter 4 - Explaining Variation • segmentPART II: MODELING VARIATION • segmentChapter 5 - A Simple Model □ 5.4 Fitting the Empty Model • segmentChapter 6 - Quantifying Error • segmentChapter 7 - Adding an Explanatory Variable to the Model • segmentChapter 8 - Models with a Quantitative Explanatory Variable • segmentPART III: EVALUATING MODELS • segmentChapter 9 - The Logic of Inference • segmentChapter 10 - Model Comparison with F • segmentChapter 11 - Parameter Estimation and Confidence Intervals • segmentChapter 12 - What You Have Learned • segmentFinishing Up (Don't Skip This Part!) • segmentResources list High School / Advanced Statistics and Data Science I (ABC) 5.4 Fitting the Empty Model The simple model we have started with—using the mean to model the distribution of a quantitative variable—is sometimes called the empty model or null model. Note that it’s empty because it doesn’t have any explanatory variables in it yet. Our empty model does not explain any variation; it merely reveals the variation in the outcome variable that could be explained. If the mean is our model, then fitting the model to data simply means calculating the mean of the distribution. Let’s think this through in the context of students’ thumb lengths. We will use a tiny data set, which we’ve put in a data frame called TinyFingers. StudentID Thumb The whole data set is just six observations. Make a histogram of the distribution of six thumb lengths (Thumb). Add in a blue line to show where the mean is. require(coursekata) TinyFingers <- data.frame( StudentID = 1:6, Thumb = c(56, 60, 61, 63, 64, 68) ) # modify this to save favstats for Thumb length tinythumb_stats <- # modify this to draw a vline representing the mean in "blue" gf_histogram(~Thumb, data = TinyFingers) %>% gf_vline() tinythumb_stats <- favstats(~Thumb, data = TinyFingers) gf_histogram(~Thumb, data = TinyFingers) %>% gf_vline (xintercept = ~mean, color = "blue", data = tinythumb_stats) ex() %>% { check_object(., "tinythumb_stats") %>% check_equal() check_function(., "gf_histogram") %>% check_result() %>% check_equal() check_or(., check_function(., "gf_vline") %>% check_result() %>% check_equal(), override_solution(., "tinythumb_stats <- favstats(~Thumb, data = TinyFingers); gf_histogram(~Thumb, data = TinyFingers) %>% gf_vline(xintercept = ~tinythumb_stats$mean, color = 'blue')") %>% check_function(., "gf_vline") %>% check_result() %>% check_equal() ) } CK Code: ch5-6 It’s easy to fit the empty model—it’s just the mean (62 in this case). But later you will learn to fit more complex models to your data. We are going to teach you a way of fitting models in R that you can use now for fitting the empty model, but that will also work later for fitting more complex models. The R function we are going to use is lm(), which stands for “linear model.” (We’ll say more about why it’s called that in a later chapter.) Here’s the code we use to fit the empty model, followed by the output. lm(Thumb ~ NULL, data = TinyFingers) lm(formula = Thumb ~ NULL, data = TinyFingers) Although the output seems a little strange, with words like “Coefficients” and “Intercept,” it does give you back the mean of the distribution (62), as expected. Thus, this function finds the best-fitting number for our model. The word “NULL” is another word for “empty” (as in “empty model.”) It will be helpful to save the results of this model fit in an R object. Here’s code that uses lm() to fit the empty model, then saves the results in an R object called Tiny_empty_model: Tiny_empty_model <- lm(Thumb ~ NULL, data = TinyFingers) If you want to see what the model estimates are after running this code, you can just type the name of the object where you saved the model: lm(formula = Thumb ~ NULL, data = TinyFingers) We seem to be making a big deal about having calculated the mean of six numbers! But trust us, it will make more sense once you see where we go with it. One point is worth making now, however. Remember, the goal of statistics is to understand the DGP. The mean of the data distribution gives us our best estimate of the mean of the population that results from the DGP. It may not be a very good estimate—after all, it is only based on a small amount of data—but it’s the best one we can come up with based on the available data. It also is an unbiased estimate, meaning that it is just as likely to be too high as it is too low. Now that you have fit the empty model to the tiny set of data, use lm() to fit the empty model to our full data set, Fingers. Modify the code below to create a histogram of Thumb; draw a vertical line where the mean is; fit the empty model; and save the model to an R object called empty_model. require(coursekata) # modify this to fit the empty model of Thumb empty_model <- # this prints the best-fitting number empty_model # save the favstats for Thumb (this is helpful for drawing a line) Thumb_stats <- # make a histogram of Thumb and draw the line for the mean gf_histogram() %>% gf_vline(xintercept = ) # modify this to fit the empty model of Thumb empty_model <- lm(Thumb ~ NULL, data = Fingers) # this prints the best-fitting number empty_model # save the favstats for Thumb (this is helpful for drawing a line) Thumb_stats <- favstats(~Thumb, data = Fingers) # make a histogram of Thumb and draw the line for the mean gf_histogram(~Thumb, data = Fingers) %>% gf_vline(xintercept = ~mean, data = Thumb_stats) ex() %>% { check_object(., "empty_model") %>% check_equal() check_object (., "Thumb_stats") %>% check_equal() check_function(., "gf_histogram") %>% { check_arg(., "object") %>% check_equal() check_arg(., "data") %>% check_equal() } check_function(., "gf_vline") %>% { check_arg(., "xintercept") %>% check_equal() check_arg(., "data") %>% check_equal() } } CK Code: ch5-7 lm(formula = Thumb ~ NULL, data = Fingers) A common mistake when trying to add a line with data saved from favstats is to use the wrong data frame. The mean and median and other statistics are saved in the Thumb_stats data frame - not in the Fingers data frame! In a single function, the data frame needs to actually contain the variable you are trying to use. The gf_histogram function uses the Thumb variable which is in the Fingers data frame. But you can chain on a different function that uses a totally different data frame. That’s why we can chain on the gf_vline function that uses the mean (which is a variable) in the Thumb_stats data frame.
{"url":"https://coursekata.org/preview/book/2bf8f9f7-fd3d-4c5f-a740-5130f528a1b6/lesson/8/3","timestamp":"2024-11-05T12:40:49Z","content_type":"text/html","content_length":"93613","record_id":"<urn:uuid:3bc78041-9782-4d10-a7d9-152f44935231>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00301.warc.gz"}
Counting Sort So far, we’ve looked at sorting algorithms that fall under the comparison category of sorts. These algorithms work to find out which element is bigger by comparing each element with one another: In this world, the fastest algorithms run at around O(n log n) time. Now, what if I told you that sorting algorithms can run much faster than this? What if I told you that we can sort values in linear or O(n) time? Well, buckle up! Today, we are going to look at another category of sorting algorithms that can run faster than the quickest comparison-based sorting algorithms. We are going to look at integer-based (aka non-comparative) sorting algorithms where they can run as fast as O(n) under certain conditions. We will start our look at one of the most famous non-comparative sorting algorithms, the counting sort How Counting Sort Works The best way to learn how counting sort works is to walk through an example. As with all great sorting problems, we start first with an unsorted collection of data: The makeup of our unsorted data is important. For counting sort, our data needs to be: 1. Made up of positive numbers (0 and greater) 2. The range between the smallest and the largest number needs to be fairly small, around the same order of magnitude as the input size In looking at our input array, we can see that both of these characteristics are represented. Every value is a positive number, and the range of our numbers goes only from 0 to 7. We will see in a few moments why both of these characteristics are important for our particular implementation of counting sort. Step 1: Finding the Maximum Value With our input array all squared away, the first step is for us to find the largest value in our input array: This is a common linear time operation, and most programming languages provide helper methods to make this work with little effort. Step 2: Creating a Count of all Values Next, we want to create a running count of how often each value in our input occurs. In looking at our numbers, we can see that 0 appears twice, 1 appears twice, 2 appears twice, 3 appears once, and so on. We want to calculate this in a more scalable way, and one we can do this is by first creating a new array called count whose keys represent each unique value from our input array: What I've just said sounds complicated, but let's break this down and simplify it. The maximum value in our input array is 7. This means that in our count array, the highest key will have a value of 7 as well. What exactly does this mean? This means something like this: In the world of arrays, the key is the index position. When the largest value in our input array is 7, and we know that array index positions always start with 0, all of our input values will be represented somewhere between 0 and 7. Our main work is to create an array whose size is one greater than the maximum value to allow for an index position of 7 to exist: What we now have is a count array made up of eight empty items. The index positions of this array map to values that could exist in in our input array for the index positions start from 0 and go all the way to the maximum value of 7. From here, it is a counting exercise. When we encounter each value from our input array, we go to the corresponding entry in our count array and increment the value by 1. At the end of this, our count array will look as follows: Each index position corresponds to a value in our input array, and each value at the index position is nothing more than a count of the number of times this index position appeared as a value in By looking at the final state of our count array, there is something else we can observe. We can sorta reconstruct what a sorted version of our input array will look like, but this is an illusion. It's not what we want. If you are curious as to why, the callout below goes into more detail. Wait...are we done sorting our input array? With this information about the count of each value, believe it or not, we have what it takes to construct our sorted array...kind of! Our count array index positions already go from 0 to 7 (in perfect increasing order) and capture the full range of values from our input. We can also see that 0 appears twice, 1 appears once, 2 appears twice, 3 appears once, 4 appears twice, 5 appears once, 6 doesn't appear at all, and 7 appears just once. All we need to do is build up our sorted output array with this information: There is a big catch here. If all we are ever going to sort are integer values with no regard for stability, then yes, we are done. For real-world scenarios, that’s too limiting. What we will typically sort will include more than just integer values. We have also seen that stability, the property where the initial order of similar items is preserved in the final order, is also important. So, no...we are not done! Step 3: Calculating the Final Position If we take stock of what we have right now, we have our count array and our input array. Our input array is...just sitting there with its unsorted values happily loitering. The count array stores a count of the number of times each distinctive element from the input array appears. Each distinctive element is represented as the index position in our count array. What we are going to do next is evolve our count array to better describe the sorted position each item will ultimately be in. In the earlier note, we visually looked at the arrangement of each item and where it will land. Let’s go ahead and formalize it by storing the cumulative sum (or prefix sum) at each array position instead of the count value. This will involve us taking each array item, starting at index position 1 (aka the second item), and adding its value with the value in the previous array item: What the count values now show is no longer the count. Instead, they show where each corresponding value from the input array (as specified by the count array’s index positions) will need to appear in the sorted output array. If we translate each value into index positions, it will look as follows: Because index positions start at 0 and our cumulative sum values start at 1, the numbers are offset by one when we look at the output array to determine the final index positions of our sorted Step 4: Placing the Elements in Order We are now at our final step. Here, we will take our unsorted elements from our input array, map them with the locations we calculated in our count array (in the previous step), and fill our output array with the correct values in their sorted locations. This is relatively easy, but it is tedious with a handful of steps. We'll walk through the steps together very slowly. This is our current starting point: Our input array will be the primary driver of all of the remaining steps we are about to see, and we will be iterating through all the values in our input array in reverse. There is a reason for that, and we’ll talk about that a bit later. First, let’s start at the end where we have our 2 value: We use this 2 value as the key (aka the index position in an array) to find the corresponding value in our count array: The value at the position indicated by the 2 value is 6. This 6 defines the position our last item in the input array (the one with the 2 value) will be when sorted. Because the 6 position needs to account for 0-based indexing, we reduce it by one and to find the index position 5 in our output array: Now that we have found the sorted position our 2 value will be in, we can go ahead and place it in there: There is one more step remaining. In our count array, we will decrement by 1 the stored value at index position 2 to indicate that we have already used it. In our case, the stored value of 6 will now become a 5: This is to ensure that any future attempts to find out where to place a value of 2 will not create a collision and overwrite a previously sorted value. When we encounter a 2 in our input array again, this time we will see that its sorted position is at 5, so we will place it at index position 4 instead. We’ll get to these shortly when we encounter our two adjacent 0 values in the input array, so don't worry too much about this detail just yet. Ok, phew! What we saw was a very slow and detailed look at how we figure out how to place an unsorted item in the correct sorted position. We are going to speed things up a bit and repeat these similar steps. Next, we go down our input array to the next item, which has a value of 7. We know that 7 has a count value of 11, and this means its index position in our output array will be one less than that, which is 10: We also decrement the value of 11 in our count array to 10 to account for any future instances of 7 (if any!) finding a new location. Going down our input, our next value is 4: This 4 value maps to position 9 in our count array, and this means that 4 goes into index position 8 (9 minus 1) in our output array. As our last sub-step, we decrement the value of 9 to 8 in our count array. Next up we have a 0 value, and the steps we saw earlier repeat: Moving along, we have another 0 value. When we repeat the steps, the state of our three arrays will be as follows: Because this is the second time the 0 value is being placed, we can see the importance of decrementing the position value in our count array after placing our item. When we encountered 0 for the first time a few steps ago, the corresponding count value was 2. After placing that first 0, we decremented the count value to 1. This ensured when we encountered 0 again in this step, because the corresponding position was now 1, we were able to place it at index position 0 in our output array and not have to deal with a collision. At this point, you should have a good understanding of how each subsequent item from our input array will go through the count array and then be properly placed in the output array. At the end of all of these steps, the final state of our three arrays will be as follows: Of course, the only detail we really care about is what the output array shows, and what we see there is a properly sorted array. Counting Sort Implementation Now that we’ve seen how counting sort works using words and pictures, let’s go one level deeper and look at a JavaScript implementation of counting sort: function countingSort(input) { // Find the maximum element in the array const max = Math.max(...input); // Create a count array to store the frequency of each element const count = new Array(max + 1).fill(0); // Count the occurrences of each element for (const num of input) { // Calculate prefix sum to store the position of each element in the sorted array for (let i = 1; i <= max; i++) { count[i] += count[i - 1]; // Create an output array to store the sorted elements const output = new Array(input.length); // Place elements in the output array based on counts for (let i = input.length - 1; i >= 0; i--) { output[count[input[i]] - 1] = input[i]; // Return the sorted array return output; // Example! let unsortedArray = [5, 2, 1, 3, 4, 1, 0, 0, 4, 7, 2]; let sortedArray = countingSort(unsortedArray); console.log(sortedArray); // 0, 0, 1, 1, 2, 2, 3, 4, 4, 5, 7 Take a moment to walk through what this code is doing. There are a few things to notice here: 1. Notice how all of the code maps neatly to the various counting sort steps we looked at earlier. 2. We are doing no comparisons between values at any point here! This further underlines the point that counting sort is not a comparative sort at all. The trickiest part of the counting sort implementation is making sense of the various array-related accesses we are doing. We have our input array, count array, and the output array. We take numbers from one, add it to the other, do some more adding, and then place a value into the correct location in the end. Compared to iterative functions or recursive calls that we’ve seen with our comparison-based sorting algorithms, I’ll take a bunch of loops and array accesses any day of the week. Maybe that is just me! 😅 Iterating Backwards == Ensuring Stability A sorta strange thing we glossed over is where we iterate through our input array in reverse order at the last step when placing our unsorted items into the correct sorted position in our output This visual maps to the following loop in our code where we can see we are looping in reverse: // Place elements in the output array based on counts for (let i = input.length - 1; i >= 0; i--) { output[count[input[i]] - 1] = input[i]; The reason for iterating backward is to ensure stability. Counting sort is a stable sorting algorithm where it preserves the relative order of equal elements. By iterating backward through the original array, we ensure that elements with the same value are placed in the output array in the same order they appeared in the input array. This is crucial for maintaining stability, especially when dealing with elements that have the same key or value. Let's say we didn't iterate backwards, and we have an input where where we have multiple instances of the same value. If we were to iterate forward through the array, the elements would be placed in reverse order in the output array, potentially breaking stability. By iterating backwards, we guarantee that elements with the same value are placed in the correct order in the output array, preserving the stability of the sorting process. Performance Characteristics We started off by saying that counting sort runs in linear or O(n) time...under certain conditions. The following table is a quick summary of the best-case, average-case, and worst-case scenarios: Scenario Time Complexity Memory Complexity Best case O(n + k) O(n + k) Worst case O(n + k) O(n + k) Average case O(n + k) O(n + k) This looks very nice, but there is something subtle here that glosses over a lot of variation. As you can guess, by saying under certain conditions to qualify the linear-ness of counting sort, you know there is a catch. The catch is that the range of values we are sorting needs to be around the same size as the total number of values in our input. Diving into Time Complexity Taking our example from earlier, the largest number in our input was 7: This means the range of numbers our sort will need to deal with goes from 0 to 7 for a total of 8 values. A good proxy for this is the size of count array, for we know its size will mimic the range of numbers from input as well. We will get more precise and give a label to both our input size and the range of numbers. Our input size will be designated as n, and the range of numbers in our input will be designated as k: When we tie this with how counting sort works, we know that we loop through our input and count arrays a few times to calculate the final position of each unsorted item in our output array. Running a loop only a few times is not a major problem, so we can approximate counting sort's running time by saying O(n + k). This results in a linear running time only if n and k are similar. If the range of k is very large, for example, it is on the order of n^2, then O(n + k) is no longer linear. It will be the max of whatever value is the largest, and it would be that the running time is O(n^2) because k is n^2. We can see this play out in the following example: The value of n here would be 7 because we are trying to sort 7 values. What would the value for k be? Because k corresponds to the range of numbers in our input, it would be the maximum value...which is 1,390,502! The implications of this can be seen in our count array: Our count array will have 1,390,503 entries whose index positions go from 0 to 1,390,502. No matter how we look at it, that is a massive array for just 7 numbers. This isn’t very efficient both from the time it will take to iterate through all of the items as well as the amount of memory we need relative to the size of the initial data we are trying to sort. With n being 7 and k being 1,390,502, k is something around n^7. This would mean that O(n + k) would be O(n^7). That’s not very fast at all. It is far slower than some of our worst performing comparison-based sorting algorithms. Tying this all together, the relationship between n (size of the input) and k (range of numbers in the input) is what determines whether counting sort runs in linear time or not. For this reason, counting sort works best when the range of numbers we are dealing with is limited and within a narrow range. This is a detail that we'll always see called out to clarify why counting sort isn't always the fastest algorithm nor the default choice for all sorting situations. Understanding the Space Complexity Similar to the performance, the space complexity for counting sort is also O(n + k). Counting sort requires additional space proportional to the size of the input array (n) plus the range of the input values (k). For large ranges, such as the example we saw a few moments ago, the additional memory required to deal with the count array could become very impractical. Speaking of space, because of how counting sort is implemented, it isn’t considered an in-place sorting algorithm. It requires additional space proportional to the range of input values (k) for storing intermediate counts. In-place sorting algorithms modify the input array without requiring additional space, which is optimal from a memory usage point of view. Counting sort has a lot of things going for it, but it also has some issues that we need to keep in mind. In terms of what it is great at: • Really really fast: Counting sort operates in linear time O(n+k), where n is the number of elements and k is the range of the input values, making it highly efficient for datasets with a limited • Stability FTW: Counting sort preserves the relative order of equal elements, ensuring stability in sorting, which is essential in various applications. • Simple implementation: Counting sort has a straightforward implementation made up of loops and arrays, making it easy to understand and implement, even for those new to sorting algorithms. In terms of what makes counting sort a bit meh: • Limited applicability: Unless we do a lot of special work, counting sort is restricted to sorting arrays of non-negative integers, limiting its use in scenarios involving other data types or negative numbers. For sorting arbitrary types of data "out of the box", a comparative sorting algorithm would be your best bet. • Space complexity: Counting sort requires additional space proportional to the range of input values, which can be a drawback when dealing with large ranges in memory-constrained environments. This is especially true if the range of input values is significantly larger than the number of elements (k >> n), counting sort's space requirements may become impractical, leading to...inefficiency! 💀 Not all non-comparative sorting algorithms exhibit some of these same problems. There are better implementations. In the future, we'll look at one such improved implementation, the Radix Sort. Here is how counting sort stacks up with all of the other sorting algorithms we have seen so far: Just a final word before we wrap up. If you have a question and/or want to be part of a friendly, collaborative community of over 220k other developers like yourself, post on the forums for a quick
{"url":"https://www.kirupa.com/data_structures_algorithms/counting_sort.htm","timestamp":"2024-11-08T22:33:59Z","content_type":"text/html","content_length":"119368","record_id":"<urn:uuid:db03c9d1-dbcc-4047-bb3e-0490bdf0cae7>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00475.warc.gz"}
[Solved] Find the union of each of the following pairs of sets:... | Filo Find the union of each of the following pairs of sets: (iii) is a natural number and multiple of 3} and is a natural number less than 5} (iv) is a natural number and B={x: x is a natural number and Not the question you're searching for? + Ask your question Was this solution helpful? Found 5 tutors discussing this question Discuss this question LIVE for FREE 15 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions from Mathematics (NCERT) Practice questions from Mathematics (NCERT) View more Practice more questions from Sets Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Find the union of each of the following pairs of sets: Question Text (iii) is a natural number and multiple of 3} and is a natural number less than 5} (iv) is a natural number and B={x: x is a natural number and Updated On Dec 4, 2022 Topic Sets Subject Mathematics Class Class 11 Answer Type Text solution:1 Video solution: 1 Upvotes 74 Avg. Video Duration 2 min
{"url":"https://askfilo.com/math-question-answers/find-the-union-of-each-of-the-following-pairs-of-siio","timestamp":"2024-11-12T06:21:40Z","content_type":"text/html","content_length":"532654","record_id":"<urn:uuid:d85008d9-a165-41da-ab5d-5ae2b414ba59>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00655.warc.gz"}
ribbon(Z) plots the columns of Z as three-dimensional ribbons of uniform width, where y-coordinates range from 1 to the number of rows in Z. Ribbons advance along the x-axis centered at unit ribbon(Y,Z) plots three-dimensional ribbons at the locations specified by Y. ribbon(Y,Z,width) specifies the width of the ribbons. ribbon(___,Name=Value) sets properties of the ribbon plot using one or more name-value arguments. For example you can specify the color and transparency of the ribbons. For a list of properties, see Surface Properties. (since R2024b) ribbon(ax,___) plots into the axes specified by ax instead of the current axes (gca). The ax option can precede any of the input argument combinations in the previous syntaxes. s = ribbon(___) returns a vector of Surface objects with one object per ribbon. Use s to modify properties of the plot after creating it. For a list of properties, see Surface Properties. Create Ribbon Plot Create a plot with five ribbons at increasing heights. First, create a 5-by-5 matrix with elements corresponding to ribbon heights. Z = 4×5 Each column of Z represents one ribbon, plotted at a constant x-coordinate corresponding to the column number and with y-coordinates corresponding to the row numbers of Z. Specify Ribbon Locations Create a 5-by-5 matrix with the magic function. Z = 5×5 Create a ribbon plot of the matrix and specify the y-coordinates so each ribbon is centered at 0. Y = [-2 -1 0 1 2]; Create Ribbons with Different y-Coordinates Plot three ribbons at different locations along the y-axis. Specify the y-coordinates of the ribbons as a matrix Y that is the same size as Z, the matrix of ribbon heights. Each column of Y corresponds to one ribbon. Y = [1 2 3; 2 3 4; 3 4 5; 4 5 6]; Z = Y; Specify Ribbon Width Create a ribbon plot and set the width of each ribbon to 30% of the total space available. Z = magic(5); Y = [-2 -1 0 1 2]; Modify Ribbon Plot Appearance Create a ribbon plot and specify an output argument. The output is a vector of five Surface objects, where each object corresponds to one ribbon. Z = magic(5); Y = [-2 -1 0 1 2]; s = ribbon(Y,Z) s = 5x1 Surface array: Highlight the first ribbon by changing the EdgeColor and LineWidth properties of the corresponding Surface object. s(1).EdgeColor = "yellow"; s(1).LineWidth = 3; Specify Ribbon Plot Colormap Create a ribbon plot with 30 ribbons and a colorbar. t = linspace(0,2*pi,30); x = sin(t)'; y = cos(t); cbar = colorbar; cbar.Label.String= "Ribbon Number"; Change the ribbon colors using the colormap function. ribbon maps the x-coordinates of the ribbons to colors in the colormap linearly. Input Arguments Z — z-coordinates numeric vector | numeric matrix z-coordinates that represent ribbon heights, specified as a numeric vector or numeric matrix. • If Z is a vector, ribbon creates a single ribbon regardless of whether Z is a row or column vector. • If Z is a matrix, ribbon creates one ribbon for each column. Ribbons advance along the x-axis centered at unit intervals, where x-coordinates range from 1 to the number of columns in Z. Y — y-coordinates numeric vector | numeric matrix y-coordinates, specified as a numeric vector or numeric matrix. The size of Z determines the possible sizes of Y: • If Z is a vector, Y must be a vector of the same size as Z. ribbon plots a single ribbon at X = 1 using the data in Y and Z. • If Z is a matrix, Y can be a row or column vector with a length equal to the number of rows in Z, or a matrix of the same size as Z. ribbon plots a ribbon for each column of Z using the data in Y and Z. If Y is a vector, each ribbon has the same y-coordinate. width — Ribbon width 0.75 (default) | numeric scalar Ribbon width, specified as a numeric scalar representing a percentage of the total space available for each ribbon. • If width < 1, the ribbon width occupies that fraction of the allocated space. • If width = 1, the ribbons touch one another, leaving no space between them when viewed down the z-axis. • If width > 1, the ribbons overlap and can intersect. For example, the default value of 0.75 means the ribbon width is 75% of the total space available for the ribbon, with 12.5% of empty space on each side. ax — Target axes Axes object Target axes, specified as an Axes object. If you do not specify the axes, MATLAB^® plots into the current axes or it creates an Axes object if one does not exist. Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Example: ribbon([1 2 3; 1 2 3],FaceColor="red") creates a red ribbon plot. The properties listed here are only a subset. For a full list, see Surface Properties. Extended Capabilities GPU Arrays Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. The ribbon function supports GPU array input with these usage notes and limitations: • This function accepts GPU arrays, but does not run on a GPU. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). Distributed Arrays Partition large arrays across the combined memory of your cluster using Parallel Computing Toolbox™. Usage notes and limitations: • This function operates on distributed arrays, but executes in the client MATLAB. For more information, see Run MATLAB Functions with Distributed Arrays (Parallel Computing Toolbox). Version History Introduced before R2006a R2024b: Control appearance and behavior with name-value arguments Control the appearance and behavior of ribbon plots by specifying name-value arguments. Previously, ribbon did not support name-value arguments. See Also
{"url":"https://de.mathworks.com/help/matlab/ref/ribbon.html","timestamp":"2024-11-13T16:10:24Z","content_type":"text/html","content_length":"146432","record_id":"<urn:uuid:7ae9cc57-647d-4a8a-9c14-fe95c92a97c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00724.warc.gz"}
3.2 F) Converting Decimals into Percentages – Edexcel GCSE Maths Higher We are able to convert decimals into percentages by multiplying our decimal by 100; we do the opposite to what we were doing when we were converting percentages into decimals, which was divide by Example 1 What is 0.78 as a percentage? We multiply 0.78 by 100 to get the value as a percentage. When we are multiplying by 100, we move the decimal point two places towards the right. This means that 0.78 as a percentage is 78%. Example 2 What is 0.053 as a percentage? We obtain our percentage by multiplying by 100, which gives us 5.3%. Example 3 What is 5.932 as a percentage? We multiply 5.932 by 100 to give us 593.2%. Final Comment A decimal that is positive and less than 1 will give you a percentage that is less than 100%. We saw in our first example that 0.78 was less than 1 and it was equal to 78%, which is less than 100%. A decimal or integer that is greater than 1 will give you a percentage that is greater than 100%. We saw this in our final example where we saw 5.932 was converted into 593.2%.
{"url":"https://www.elevise.co.uk/g-e-m-h-32-f.html","timestamp":"2024-11-03T20:20:38Z","content_type":"text/html","content_length":"91812","record_id":"<urn:uuid:ac9cba2d-ffc9-463c-b596-59dcd0811976>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00733.warc.gz"}
Transmission Lines Applications in PSpice Transmission lines are used to propagate digital and analog signals, and as frequency domain components in microwave design. This app note illustrates the steps and issues involved in modeling and analyzing transmission lines in PSpice. Transmission lines are used for varied applications, including: • Power transmission line • Telephone lines • Traces on Printed Circuit Boards • Traces on Multi-Chip Modules • Traces on Integrated Circuit Packages OrCAD PSpice contains distributed and lumped lossy transmission lines that can help to improve the reliability of many applications. For analog and digital circuits, there is a need to examine signal quality for a printed circuit board and cables in a system. For analog circuits, the frequency response of circuits with transmission lines can be analyzed. It is the purpose of this article to examine the steps and issues involved in modeling and analyzing transmission lines in PSpice. Applications Flowchart The analysis of transmission line nets requires multiple steps. These steps are given in the following flowchart: Figure 1. Analysis flowchart for transmission line nets This article provides information for the two center blocks, by discussing relevant devices and models in PSpice, along with specific modeling techniques and examples. This section presents the basic concepts of characteristic impedance and propagation delay, and reflections and crosstalk. • Characteristic impedance, Z0 The characteristic impedance of a transmission line is the ratio of the voltage to the current. For a uniform line, it is invariant with respect to time and position on the line: If R and G are zero, the characteristic impedance will not depend on frequency, and reduces to The attenuation constant is the real part of the propagation constant and is important when losses must be considered. The propagation delay is the reciprocal of the phase velocity multiplied by the length of the transmission line: where c is the speed of light, and er is the relative dielectric constant. For a uniform, lossless transmission line. Table 1. Delay And Dielectric Constants For Some Transmission Lines When a voltage step is traveling down a uniform impedance transmission line, and then encounters an abrupt change in impedance, a portion of the incident energy is “reflected” back. The amount of energy reflected depends on the degree of impedance mismatch. The voltage reflection coefficient, (Z1-Z0)/(Z1+Z0) is a measure of this mismatch: Figure 2. Impedance reflection across a boundary Crosstalk is undesired energy imparted to a transmission line due to signals in adjacent lines. Crosstalk magnitude is dependent on risetime, signal line geometry and net configuration (type of terminations, etc.). Quantitatively, this energy results from mutual inductances and capacitances in the Telegrapher’s Equations: Crosstalk is often discussed in terms of forward and backward crosstalk coefficients. These are determined by the ratio of mutual capacitance to self-capacitance, and mutual inductance to self-inductance. If a disturbing line j is coupled to a victim line i that is terminated in its characteristic impedance, these coupling coefficients are These expressions are valid for loose couplings (KB < 0.25). It is clear that crosstalk can be decreased by decreasing mutual coupling Cij and Lij, or by increasing the coupling to ground. Figure 3 indicates two signal lines in close proximity that are capacitively coupled (CM) and inductively coupled (LM). Both lines have the same characteristic impedance, Z0, and are fully terminated to avoid reflections. One line is “active” and transmits a pulse, while the other is “passive”. At the source end of the passive line, the current due to CM and the current due to LM are additive. These summed currents produce a voltage drop of the same polarity as the source voltage, termed “Near End” or “Backward” crosstalk (it travels the opposite direction to the source pulse). At the far end of the passive line, the current due to CM and the current due to LM are of opposite polarity producing “Far End” or “Forward” crosstalk. Figure 3. Two parallel coupled transmission lines. L=length. Figure 4. Near and far end crosstalk resulting from a step input on an adjacent line • “Long “ vs. “Short” Lines Defining the point at which an interconnect should be treated as a transmission line and hence reflection analysis applied has no consensus of opinion. A rule of thumb is when the delay from one end to the other is greater than risetime/2, the line is considered electrically long. If the delay is less than risetime/2, the line is electrically short. Hence, the following guidelines: • Short line: 4 > tr/Td > 2 “Lumped” and “short” lines may always be modeled by lumped circuits. The topic of the next section is to decide how to best model an electrically “long” line. Lumped and Distributed Transmission Lines Ideal and Lossy Transmission Lines Transmission lines that are lossless, that is R=G=0, are termed ideal transmission lines. This is valid if attenuation and skin effect are either negligible or not of concern for the signal frequencies being analyzed. For real lines, the series resistance is not quite zero, and the phase velocity is slightly dependent on the applied frequency. These non-idealities result in attenuation and dispersion. Attenuation results in a reduction in signal amplitude, which may be a function of frequency. Dispersion results from the propagation velocity being different for the various frequencies. These effects can cause the frequency components of a signal to be quite different at the far end of the line, compared to the source. The fast rise and fall times of the input signals can be reduced and become “rounded”. It should be noted that there is a theoretical condition where there is attenuation without dispersion, when R/L = G/C. This is normally not of practical significance. An attenuation vs frequency curve is often provided by cable manufacturers to show susceptibility to these effects: Figure 5 – Attenuation vs. Frequency for a 100 meter lossy coax cable Quantitatively, attenuation is the real part of the complex propagation constant At high frequencies the real part is The following sections discuss the physical causes of line loss, skin effect, dielectric loss, and proximity effect. Skin effect results from the fact that currents tend to concentrate on the conductor surface. Current density continuously increases from the conductor center to its surface. For classical skin effects, the penetration depth is given by where K=1/sqrt(p*m*s), m=magnetic permeability of the conducting material expressed in henries per unit length, and s=conductivity of the conducting material. For SI units and for a copper conductor s=5.85x107 (ohm-meter)-1 and m=4p×10-7 (H/meter). The skin effect reduces the equivalent conductor cross-sectional area, which causes the effective resistance per unit length to increase with increasing frequency. Dielectric losses result from leakage currents through the dielectric material, which causes an increase in the shunt conductance per unit length. This results in signal attenuation. For frequencies below 250 MHz, this loss can usually be neglected. Skin effect losses will dominate up through RF frequencies. This is a current density redistribution in a conductor due to the mutual repulsion (or attraction) to currents flowing in nearby conductors. This current density redistribution reduces the effective cross-sectional area of the conductor, thereby increasing the series resistance. No general rules of thumb have been proposed due to its complicated nature. This effect is a function of the conductor diameters, separation of conductors, and frequency. Influence of Loss Effects on Primary Line Parameters • Resistance Per Unit Length, R For coaxial lines, skin effect losses dominate and the resistance per unit length is described by At high frequencies Rdc can often be neglected • Inductance Per Unit Length, L It has been shown for 2-wire lines (twisted pair, parallel wire) that, as the frequency is increased, the skin effect and proximity effect cause a slight reduction in the effective per-unit length self-inductance of the line. This frequency effect can often be neglected in models, and can lead to non-causality. • Capacitance Per Unit Length, C This depends primarily on the dielectric constant of the insulating medium and the geometry of the conductor. Capacitance per unit length is constant over a wide range of frequencies for most dielectrics, such as Polyethylene. • Conductance Per Unit Length, G If the loss tangent is available, G may be modeled by use of Where C is capacitance per unit length, w is the angular frequency, and tanj is a dielectric material coefficient known as the “loss tangent”. • Obtaining R and G from the attenuation vs. frequency curve It is often necessary to obtain R and G simultaneously from the attenuation vs. frequency curve. The following methods have been used successfully: 1: Using two points at the lower and upper bound of the signal frequencies of interest, simultaneously solve two equations of the form: 2: Attribute 90% of the loss to R, and 10% of the loss to G [3]. The rational for this is that most of the loss at high frequencies comes from the resistance of the center conductor. Specific examples of modeling the attenuation curve are given later. Both ideal and lossy transmission lines may be modeled as either distributed or lumped. The internal transmission line device in PSpice is distributed, but often a lumped macromodel can be used to advantage. For this reason both are provided in OrCAD’s libraries. The following are guidelines for model selection. Model Selection Networks of transmission lines will typically span multiple categories. Each transmission line should be modeled as simply as possible, for faster simulation speed. Table 2. Model Selection For Different Rise Time And Primary Line Constants. Distributed (T and TLOSSY) OrCAD provides both ideal and lossy distributed models: The parameters of the ideal transmission line are: • Z0 – Characteristic Impedance • TD – Transmission Delay The parameters of the lossy transmission line are: • R - Resistance Per Unit Length • G - (Shunt) Conductance Per Unit Length • C - Self Capacitance Per Unit Length • L - Self Inductance Per Unit Length • LEN – Length Of Transmission Line Lumped RC models (TLURCx) An RC line is a special case where R/L is large (or the series inductance is small). The simplest model for an RC line is a capacitor and a resistor. Figure 6. Single RC Lump. OrCAD provides the following RC lumped models in the transmission line library: Table 3. RC Lumped models. For TLURC64, the value of R for one lump is R*LENGTH/64. A distributed RC line can be realized with a cascade of T-sections, each T like that of figure 6. If the number of sections were infinite, the governing equations would be These combine to produce the diffusion equation The “Elmore delay” of this line is td=0.5 R CL2 + R CL*L + Rd*CL + Rd CL, where Rd is the driver resistance, CL is the load capacitance, and L is the line length. PSpice can be used to predict this delay for heterogeneous networks. Lumped RLCG models (TLUMPx) When wL/R>>1 or wC/G>>1, the per unit length inductance should be included in the lumped model. Figure 7: Lumped RLCG including series inductance OrCAD provides the following RLCG lumped models in the transmission line library Table 4. RLC Lumped models. Transmission Line Couplings Transmission lines may be coupled to study the effects of mutual inductive and capacitive coupling, such as crosstalk. It is possible to use both a distributed and a lumped model for these Systems of coupled transmission lines can be described by their capacitance and inductance matrices. The elements of the capacitance matrix C are defined by Cij gives the charge induced on the ith conductor when conductor j is set to a potential of 1 Volt, and all other conductors are grounded. The diagonal elements of C are related to the capacitance of the ith conductor to ground by the following formula Off diagonal elements are the mutual capacitances for conductors i and j. Terms of the inductance matrix L are described by Lij gives the flux between the ith conductor and the ground plan when conductor j carries 1.0 Amp, and all other conductors are floating. Off diagonal terms are mutual inductances. In PSpice, the mutual parameters of a coupled transmission line structure are: LM - Mutual inductance between adjacent tlines per unit length CM - Mutual capacitance between adjacent tlines per unit length The methods of [1] and [2] are used to decouple the transmission line parameters, subject to the following assumptions: 1. All of the line parameters, C, L, R, G, CM, and LM, must be the same for all of the lines in the set. 2. Coupling is modeled across adjacent lines only. Edge effects are neglected as a result of the first limitation. The following models are available for coupled transmission lines: Table 5. Transmission line coupling parts. In some cases it is desirable to use a lumped circuit to model coupling. Here we present a symmetric, coupled, 3 conductor, lumped model: * symmetric coupled 3 conductor lumped .subckt C3L in1 in2 in3 out1 out2 out3 +params: len=1 r=0 l=1 c=1 lm=1 cm=1 * first conductor r1 in1 1 {len*r+1u} l1 1 2 {len*l/2} c1 2 0 {len*c} l2 2 out1 {len*l/2} * second conductor r3 in2 3 {len*r+1u} l3 3 4 {len*l/2} c2 4 0 {len*c} l4 4 out2 {len*l/2} * third conductor r5 in3 5 {len*r+1u} l5 5 6 {len*l/2} c3 6 0 {len*c} l6 6 out3 {len*l/2} *mutual couplings k1 l1 l3 {lm/l} k2 l2 l4 {lm/l} k3 l3 l5 {lm/l} k4 l4 l6 {lm/l} k5 l1 l5 {lm/l} k6 l2 l6 {lm/l} c4 2 4 {len*cm} c5 4 6 {len*cm} c6 2 6 {len*cm} This model can be extrapolated to two, four, and five conductors. To be able to decouple the inductance and capacitance matrices, LM < L and CM < C. Large values of LM can lead to a negative eigenvalue when decoupling the matrix. Rules Of Thumb For Choosing Between Lumped And Distributed Types • For short transmission lines the distributed model can slow down the simulation by imposing a maximum time step of Td/2. For each line where Tr>Td/2, consider using a lumped model. Also, a large number of lumps required can slow down the simulation - use largest lump size that still gives accurate results • Asymmetric coupled lines should be simulated using lumped models due to assumptions in the model [1] and [2]. • RC (high loss tlines) must be simulated as lumped circuits. Library Models And Modeling This section provides an overview of available models and presents modeling techniques and considerations. Library Models OrCAD's libraries (PSpice A/D and PSpice) contain lossy transmission lines for coax and twisted pair (frequency domain analysis only), as well as other distributed and lumped macromodels. Coax Modeling A simple for formula for the characteristic impedance of coax is Where d1 is the diameter of the inner conductor, and d2 is the diameter of the inside surface of the shield. The propagation delay is For coaxial lines, the primary loss is from the skin effect. The resistance per unit length becomes: Where 0 < m < 1. For coax inductance and capacitance per unit length can be treated as frequency independent. Conductance per unit length is Where C is capacitance per unit length, w is the angular frequency, and tanf is a dielectric material coefficient (“loss tangent”). The angle f is called the dielectric loss angle. This angle is usually quite small (<0.005 radians) for most dielectrics up to RF frequencies. The following is an example model of RG6A/U coax from OrCAD’s libraries. Note that R and G use the Laplace variable ‘s’ to model attenuation as a function of frequency: * Model parameter units are as follows: * len: meters * r: Ohms/meter * l: Henries/meter * g: Mhos/meter * c: Farads/meter * Z0(Ohms) vp(%) F1(MHz) Loss1(dB/100Ft) F2(MHz) Loss2(dB/100Ft) * RG6A/U 75 66 100 2.9 1000 11 .model RG6A/U TRN (r={59.5022u*sqrt(2*s)} l=379.050n + g={0.0428900p*abs(s)} c=67.3867p) An alternate version of the model is obtained by using the FREQ attribute on the RG6A/U part to use a specified frequency to evaluate R and G. This can have some advantages in transient analysis. * Subckt version uses fixed frequency, frq, to model simple lossy line * Near end hi * | Near end lo * | | Far end hi * | | | Far end lo * | | | | .subckt RG6A/U A1 A2 B1 B2 params: frq=100Meg len=1 .param PI2 {3.141592654*2} .model RG6A/U TRN (r={59.5022u*sqrt(PI2*frq)} l=379.050n + g={0.0428900p*PI2*frq} c=67.3867p) t A1 A2 B1 B2 rg6a/u len={len} • Modeling R and G at high frequencies Attenuation vs. frequency data is generally available to ~1GHz for coax cable. At frequencies above ~1MHz, R and G have the following dependencies on frequency: Here is complex frequency (the Laplace variable). Modeling Attenuation In Mathcad: The following is a Mathcad program which fits the loss parameters R and G to two points of the Attenuation vs. Frequency curve: Enter attenuation of 100' of cable in dB at f1. attn1 :=2.9 Enter characteristic impedance of cable. Enter frequency @ attn1 Enter attenuation of 100' of cable in dB at f2. attn2 :=11 Enter frequency @ attn2 F1 in radians/sec F1 in radians/sec Attenuation factor for f1 in nepers/meter Attenuation factor for f2 in nepers/meter alpha1 = 0.0042559231 alpha2 = 0.0108141844 * r and g are both functions of frequency, and are computed using the method * described in "Transmission Lines" by Robert Chipman, McGraw-Hill, 1968, * pp 65-66. r is assumed to increase in proportion to the square root * of frequency, while g varies in direct proportion to frequency. A high * frequency relationship for the attenuation factor is: * alpha = ((r / z0) + (g * z0)) / 2, * and r and g can be found by selecting values of alpha at two frequencies * (100 MHz and 1 GHz are used here) and solving two simultaneous equations: * alpha1 = (.5 / z0) * r1 + (.5 * z0) * g1 * alpha2 = (.5 / z0) * sqrt(w2 / w1) * r1 + (.5 * z0) * (w2 / w1) * g1 * The alpha's are converted to units of nepers per meter, and the frequencies * (w1 and w2) are in units of radians per second. Kramer's rule gives: r1 = 0.6963951914 g1 = -0.0000103123 * Then the frequency-dependent expressions for r and g are: The transmission line parameters derived are then Twinax and Shielded Twisted Pair (STP) Modeling Twisted shielded pair (STP) or “twinax” is recommended for differential transmission systems at high frequencies or in noisy environments. Although it has superior noise rejection, the proximity of the shield increases the distributed capacitance which significantly attenuates the signal. • Modeling L, C, LM, and CM To model STP for differential driving, a multiconductor transmission line is needed. • OrCAD library parts T2COUPLED or KCOUPLE2 may be used. • In addition to the lossy tline parameters R, L, G, C, you will need LM and CM, the mutual inductance and capacitance between adjacent tlines per unit length A simple for formula for the characteristic impedance of two parallel wires is Where d is the diameter of the conductor, and s is the separation between wire centers. The propagation delay is For multiconductor (crosstalk) simulations, it is important to obtain the conductor to conductor coupling parameters LM and CM. Here are suggestions for obtaining these parameters: • Contact cable vendor for mutual capacitance and inductance data. • Measure the capacitance and inductance matrices in the lab. • Use a 2-D field solver, such as the code provided in [5]. Sometimes odd and even mode impedances are provided rather than the inductance and capacitance matrices. The even and odd mode characteristic impedances are related to L, C, LM and CM in the following way: Zoe = SQRT((L + e*LM)/(C - e*CM)) Zoo = SQRT((L + o*LM)/(C - o*CM)) Tde = SQRT((L + e*LM)(C - e*CM)) Tdo = SQRT((L + o*LM)(C - o*CM)) Zoe and Zoo are the even and odd mode impedances, respectively, and Tde and Tdo are the corresponding delays. The coefficients e and o are the even and odd mode eigenvalues of the matrix [L][C], and come out to e=SQRT(2)/2 and o=-SQRT(2)/2 for two symmetric lines. L, C, Lm and Cm are found by solving the 4 equations above in terms of Zoe, Zoo, Tde, and Tdo. • Modeling R and G at high frequencies Twinax attenuation curves have an a + b*sqrt(f) frequency dependence, similar to coax. The same method as suggested for coax may be used to model R and G for twinax. Unshielded Twisted Pair (UTP) Modeling Unshielded twisted pair (UTP) can not be used at higher frequencies, as can STP. A simple for formula for the characteristic impedance of two parallel wires is Where d is the diameter of the conductor, and s is the separation between wire centers. The propagation delay is For UTP, the inductance in the region below ~500KHz can vary slightly with frequency. The distributed lossy transmission line model allows R and G to depend on frequency, but not L. The best solution is to pick the value of L for the frequencies of interest. OrCAD's transmission line library contains four UTP models: Table 6. UTP models in the transmission line library. Note: These models can be used for transient analysis by setting FREQ=<signal frequency>. This will allow PSpice to use the R, G, and L values corresponding to <signal frequency>. For use with AC sweep, set FREQ=<nothing>. • Modeling R and G at mid-range frequencies Attenuation curve data is generally not available above ~16MHz for UTP cable. At these mid-range frequencies, attenuation does not always obey a square root dependence on frequency. Here is a suggested method to model UTP attenuation: • Obtain the frequency dependent R, G vs. frequency curves from the cable vendor. Use a linear least square fitting routine to fit more points of the attenuation curve to a polynomial of the form Where s is complex frequency. • If only the attenuation data is available, follow the method used in [3], which is valid above ~500 KHz. Assume that 90% of the loss is due to skin effect (the R parameter), and 10% due to dielectric loss (the G parameter). Then, R=0.9*(2Z0)*attenuation, and G=0.1*(2/Z0)*attenuation. Note that this model will considerably overestimate the loss at low frequencies. Geometry Parameterized Models Another way to model a transmission line is by describing its physical dimensions, and relative dielectric constant. There are empirical equations derived for many popular transmission line geometries [6]. The functions supported by PSpice’s analog behavioral modeling expressions allow models to be created for a large variety of geometries, including coaxial, paired, coplanar, microstrip, stripline, inverted microstrip, and low order modes of waveguides. Coupled lines may also be parameterized by their geometry. Two of the most common types are microstrip and stripline. Figure 8. Microstrip configuration. Simple formula valid for 0.1<w/h<2.0: Here h is the height above ground, w is the trace width, and t is the line thickness. • Stripline Configuration Figure 9. Stripline configuration. Simple formula: Here h is the separation between grounds, w is the trace width, and t is the line thickness. A library of geometry parametrized models is available and can be downloaded from ftp.microsim.com/tech_support/tlinean.zip. Ribbon Cable Multiconductor Modeling This type of cable is most used for single ended data transmission, is familiar to designers and can consist of any number of conductors. Ribbon cable is very flexible because it is narrow, and can fit in thin spaces where round multiconductor cable would not. Grounds can be used as “barriers” between asynchronous signals. A simple for formula for the characteristic impedance of two parallel wires is Where d is the diameter of the conductor, and s is the separation between wire centers. The propagation delay is For multiconductor (crosstalk) simulations, it is important to obtain the conductor to conductor coupling parameters LM and CM. Here are suggestions for obtaining these parameters: • Contact cable vendor for mutual capacitance and inductance data. • Measure the capacitance and inductance matrices in the lab. • Use a 2-D field solver, such as the code provided in [5]. Round Multiconductor Cable Considerations Round multiconductor cable is generally discouraged for high-speed applications because the worst case amount of crosstalk that may occur varies from cable to cable. Yet, statistical analysis of crosstalk can be performed for cables meeting tight manufacturing specifications. If a known minimum distance between two sensitive conductors along with a maximum parallel length can be determined, then the resulting crosstalk can be studied. Best bets for obtaining the inductance and capacitances matrices are: • Measure the capacitance and inductance matrices in the lab. • Use a 2-D field solver, such as the wire-separation approximation 2-D code provided in [5]. Signal Quality Analysis of an ECL System Clock This example serves to illustrate the process of modeling and simulating a high-speed system clock net, by applying the steps outlined in the flowchart. A ECL system clock must pass through multiple PCBs (including backplanes), and a 30 ft cable. It is desired that ribbon cable be used, but if necessary twinax can be used (although it is more Figure 11. ECL system clock spanning multiple PCBs. STEP 1: Determine driving frequencies and technology constraints. We are using ECL100K technology, which has the following device characteristics: • Normal logic swing is about 800 millivolts. Voh=-0.9 volts Vol = -1.7 volts High level, 125 millivolts Low level, 125 millivolts Although these are guaranteed minimums, each is generally better by about 75 millivolts. • Normally, Vcc is grounded and Vee is tied to –5.2 volts. • Typical gate delay is 1 ns • Ouput Impedance is typically 5 ohms in both high and low state • Gate input impedance is typically 50 K-ohms • Gate input capacitance is typically 3-5 pF. • Gate output capacitance is typically 2-5 pF • Ouput pulldown resistors are not included on chip • Maximum recommended open line length for microstrip configuration │ │Fanout=1│Fanout=2│Fanout=4│Fanout=8│ │ │ │ │ │ │ │Z0 (ohms) │(3.3pF) │(6.6pF) │(13.2pF)│(26.4pF)│ │ │ │ │ │ │ │ │MAX IN. │MAX IN. │MAX IN. │MAX IN. │ │50 │1.6 │1.1 │0.7 │0.6 │ │68 │1.4 │0.8 │0.5 │0.4 │ │75 │1.3 │0.8 │0.4 │0.3 │ Table 7. Maximum open line length for ECL 100K for microstrip. In practice, there is a tradeoff between use of terminations and lowering power dissipation. Thus, terminations are not perfect in the clock path. STEP 2: Decide how to model the net. The data path involves single-ended signals on the PCBs, and a differential signal through the 30 ft ribbon cable. The following are also suggested by the schematic: • A typical model for ECL 100K is needed. • A multiconductor model is needed for the ribbon cable, since it will be driven differentially. • The PCB transmission lines must be characterized. • The board to board and board to cable connectors must be characterized. These models should account for ground pin locations for a later crosstalk simulation. STEP 3: Create Models. • ECL100K driver and receiver These are included in OrCAD’s standard libraries. • Shielded ribbon cable with adjacent differential signals The specifications for the ribbon cable are: Wire radius (mils) = 7.5 Insulation thickness (mils) = 10.0 Relative dielectric constant of insulation = 3.5 Adjacent wire separation (mils) = 50.0 The L and C matrices are obtained from the 2-D code in [3]. L11=0.74850 uH/m L12=0.5077 uH/m L22=0.74850 uH/m C11=37.432 pF/m C12=18.716 pF/m C22=37.432 pF/m Attenuation curves have been fitted to R and G: R={851u*sqrt(2*s)} ohms/m G={-0.340p*abs(s)} siemens-m • The alternate twinax cable has the following specifications: L11=L22=332.730 nH/m C11=C22=57.02 pF/m L12=253.85 nH/m C12=45.131 pF/m R={241.315u*sqrt(2*s)} ohms/m G={-0.140442p*abs(s)} siemens-m This cable is better matched to the PCBs, but is more expensive. The ECL clock will be routed on an outer layer of a PCB (microstrip) with the following properties: An impedance of 75+/-10 ohms will be used. The propagation delay is 0.16 ns/in. Simulations should investigate the effect of the upper ends of the tolerance range. • D-SUB connector with 2:1 signal to ground pin arrangement Figure 11. Signal/Ground arrangement in D-SUB connectors; S=signal, G=ground. We use a 2-D finite-element field solver to obtain the capacitance and inductance matrix: L11=2.97 nH L12=0.98 nH L22=2.91 nH C11=0.122 pF C12=0.0314 pF C22=0.122 pF An 8 Lump RLCG model can be used such as TLUMP8. For simulation of crosstalk we can use KCOUPLE2 or a lumped coupled model. STEP 4: Simulate Net. Run a 150 ns transient analysis for the circuit of figure nn. STEP 5: Compare results to design specifications. It is extremely important for a system clock to meet the specifications for Vil,max and Vih,min at all receiver inputs. If a “glitch” were to exceed these voltages, we risk the possibility of data Figure 12. Voltage margins The Probe plot (figure 13) shows that Vih,min=-1.165mV and Vil,max=-1.475mVare never exceeded, and the clock edges are monotonic through the transition region. The differential input voltage at the end of the 30 ft cable is 382mV worst case, which exceeds the required minimum 300mV. Figure 14. OrCAD Probe plot showing clock signals at the inputs of the receivers Advanced Topics Resolution of the Impulse Response For frequency independent R and G, OrCAD PSpice calculates closed form impulse responses [1]. But, when R and G are Laplace expressions, a numerical impulse response is calculated using FFT. The minimum number of points for the FFT is 256 and the maximum is 65536. Decreasing RELTOL will increase this frequency resolution. Here are some points to keep in mind: • Large G/C and R/L ratios (especially > ~1E8) can give inaccurate results, and the simulation may diverge. If one of these ratio exceeds 1E10, PSpice will issue a warning for this condition: WARNING – G/C for T_T1 is 1e+015. Results may be inaccurate for G/C > 1e10. For these high loss lines, use a lumped modeling approach as discussed previously. • The time step size will decrease as you increase the R/L or G/C ratio (for highly lossy lines). For these cases the controlling factor for time steps becomes the time resolution of the impulse responses used in convolutions for the loss branches of the transmission lines. • To increase resolution for fast inputs set the Print Step to Tr/10. • When Laplace expressions are used for R and G, numerical impulse responses are computed based on FFT. The number of points used, and hence the resolution varies between 256 and 65536. PSpice tries to estimate the best resolution for the simulation, by examining the transmission line model and length, RELTOL, the Print Step, the Step Ceiling, and the Final Time. There is, however, a tradeoff between resolution and simulation time. The number of points can be increased by setting a smaller Print Step, which increases the resolution of the impulse response. Setting the Step Ceiling smaller will also increase the resolution, but may also impose an unnecessarily small maximum time step during the transient analysis. Essentially the reason for this “user control” is that when the impulse response is computed for a lossy tline, it has no knowledge of the input that will be applied during the transient analysis. Example: 10 m of RG58/U coax is used for a 60ns and 600ns simulation. The impulse response is ~4% non-causal, but the results are still valid. The 60ns simulation shows good resolution at far end of line, however, the 600ns simulation requires setting RELTOL=0.0001 to obtain the same good resolution. Non-Causality And The Numerical Impulse Response PSpice uses a numerical convolution to obtain the impulse responses for lossy transmission lines with frequency dependent losses. It is an assumption fundamental to the convolution method [1] that the impulse response is a causal function of time. Unfortunately, not all Laplace expressions have causal impulse responses. • What to do when PSpice reports that the impulse response is non-causal? If an impulse response is partially non-causal, PSpice will write a message to the output file: WARNING -- 10.9038 percent of T_T1 impulse response is non-causal. WARNING -- It should be delayed by at least 4.86374e-013 sec. Non-causality more than a few percent can lead to highly inaccurate results, depending on what feature of the impulse response has been lost through truncating values for t<0. The following are guidelines for improving the simulations. • Try adding phase delay to the Laplace expression by multiplying by exp(-s*<tdelay>), where <tdelay> is the recommended delay in the output file. • Use a Laplace expression of the form a+b*sqrt(s). sqrt(s) is fundamentally non-causal, but is has a known phase of 45 degrees. Larger b/L ratios lead to higher amounts of non-causality (L= inductance). Check if calculations will allow this ratio to be reduced. • Larger ratios of <final time>/<propagation delay> increase the degree of non-causality due to the limited resolution of the FFT used to compute the impulse response. Try to run shorter simulations to simulate with frequency dependent loss expressions, then run longer simulations with single frequency loss expressions. • Lastly, consider using a constant value for R and G, by using values corresponding to the driving frequencies of interest in the simulation. For sinusoidal signals, this is a simple matter. For digital (pulse) inputs, use 2/Tr for the frequency. If loss must be considered at multiple frequencies, run a simulation with each value to find the worst case results. [1] Roychowdhury, J.S. and D.O. Pederson, “Efficient Transient Simulation of Lossy Interconnect”, 28^th ACM/IEEE Design Automation Conference, 1991, pp. 740-745. [2] Tripathi, V.K., and J.B. Rettig, “A SPICE Mode for Multiple Coupled Microstrips and other Transmission Lines”, IEEE MTT-S Digest, 1985, pp. 703-706. [3] Banzhaf, W., “Simulating Lossy Transmission Lines With PSpice”, RF Design, January 1993, pp. 25-27. [4] Cooper Industries, Belden Wire and Cable Master Catalog, 1996. [5] Paul , C.R., Analysis of Multiconductor Transmission Lines, Wiley Series in Microwave and Optical Engineering, K. Chang (Ed.), John Wiley & Sons, Inc., 1994. [6] Wadell, B.C., Transmission Line Design Handbook, Artech House, 1991. [7] National Semiconductor, F100K ECL 300 Series Databook and Design Guide, 1992. [8] Johnson , H.W. and M. Graham, High-Speed Digital Design: A Handbook of Black Magic”, Prentice Hall PTR, 1993. [9] Deutsch et. al., A., “High-speed signal propagation on lossy transmission lines”, IBM J. Res. Dev., Vol. 34, No. 4, pp. 601-615, July 1990. © Copyright 2016 Cadence Design Systems, Inc. All rights reserved. Cadence, the Cadence logo, and Spectre are registered trademarks of Cadence Design Systems, Inc. All others are properties of their respective holders. 1766 12/13 CY/DM/PDF
{"url":"https://www.pspice.com/resources/application-notes/transmission-lines-applications-pspice","timestamp":"2024-11-08T07:50:54Z","content_type":"text/html","content_length":"91985","record_id":"<urn:uuid:81950ab4-0407-4cfe-ab52-e80d5b5b527b>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00013.warc.gz"}
How To Use A Rangefinder Scope - Gun Goals You are planning a hunting trip with your buddy and think you have everything you need. Your rifle with a nice scope on it, rangefinder, binoculars, compass, plus all of the other necessary items in your day pack, or camping pack depending on how long and far you plan to trek. It seems like a lot of optics to tote around. Maybe you should consider a rangefinder scope. Even if you still plan to take a rangefinder with you, a range-finding scope is a useful tool in your arsenal and convenient for estimating target distance. All it takes is learning how to go from a basic aiming point to a BDC reticle scope. With a little practice and some math, you will soon know how to use a rangefinder scope. The BDC Reticle If you are used to a simple cross-hairs aiming point, then you may find that a BDC or Bullet Drop Compensator reticle takes some getting used to. Many people actually prefer the BDC for longer distance hunting as they have proven to be quite useful for range estimating. It does not work the same as a range finder where it reads and calculates the distance for you. Instead, the reticle is set up like a graph. With some simple math formulation, you will know how to elevate your rifle accordingly to hit target points at certain distances. Normally, a BDC reticle is set up with a number of horizontal hash marks on the verticle line, as opposed to just the simple cross-hairs. These marks represent different distances, so if zeroed in properly, you should be able to adjust where your bullet lands by elevating to the proper hash mark at the desired distance. A Christmas tree reticle is an example of this. Many times, you will also see horizontal hash marks on the vertical line. These compensate for wind at certain distances. Keep in mind that whether your scope is set up in MOA or MIL dots will make a bit of a difference. MIL dots are much more accurate and precise, but require more calculation, whereas MOA is a bit easier, but not quite as precise. Calculating your distance properly is going to depend very heavily on what power your scope is at also. Most BDC reticles are tested at a very specific environmental condition, with a specific caliber and load weight and at a specific zoom power, and only work correctly at these exact conditions. Most are rated optimally for around the 300-yard range, though most long-distance hunters still prefer them for even longer yardages than this. If you have a Nikon scope or another scope that offers some kind of a ballistics program, you probably have access to a bullet drop calculator of some sort. There are also charts you can get to give you a rough idea. Otherwise, there is a formula for figuring out your distance depending on whether your scope is in Mil dots or MOA. The formula to remember is WERM, or Width Equals Range x (MOA or Mil) Measured. So finding the Range would be your target width in inches divided by MOA or Mil-dots on your reticle, then multiply that by a special constant number. The MOA constant is 95.5, but some people use 100 just to make things easier, though a little less accurate. The Mil constant is 27.8. An example of this is if your standard deer chest is 18 inches wide and takes up 4 Mil-dots on your reticle, then per the formula: 18/4=4.5 x27.8= 125.1 yards away. Knowing this, you can now aim the correct aiming point per the distance and get a more accurate shot at your target. You can find charts to calculate your windage as well. The first thing to determine your wind if you don’t have a wind reading device is to remember the basic rules of mirage and how they pertain to windspeed. You should be able to tell which direction the wind is blowing from, and remember that the wind will blow a projectile in the direction of travel. Once you know approximate wind speed, you can determine how far to move your aiming point left or right. The formulas are different based on the bullet you use, so it’s nice to have a wind reading device like a Kestrel wind meter. Pros And Cons Like anything else, even rangefinder scopes have cons as well as pros. Obvious pros would be that you can estimate the distance and compensate for bullet drop or wind with a range-finding scope, which is very handy in itself, but even better when you don’t have a hunting partner to range your target for you. Many long-distance hunters do prefer DBC reticles over others, but there are cons to be considered as well. One of the big ones is that to accurately estimate your bullet drop, you have to use the exact elemental conditions, magnification, bullet caliber, load weight, and barrel as was used to test and set the range points. If you want to use a different load weight, you will probably be off or have to find a tweak in your ranging calculations to put your points on accurate again. You can’t change the magnification at the risk of being off. There are just many factors to consider. Hopefully, you now have a little more insight on how to use a rangefinder scope and what you need to know to operate one properly. They should not be scary or intimidating, just a little more to know. Once you have slain your first deer with one of these scopes, you will probably always be sure to have one at the ready. As with anything, becoming really good with a rangefinder scope requires practice and learning your scope at different ranges and with different environmental factors. So get out there and do some If you enjoyed this article you’ll probably like these:
{"url":"https://gungoals.com/how-to-use-a-rangefinder-scope/","timestamp":"2024-11-14T04:33:26Z","content_type":"text/html","content_length":"150260","record_id":"<urn:uuid:7e018166-5bc4-4235-bff1-786c27715ec3>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00736.warc.gz"}
Front Teeth–to–Carina Distance in Children Undergoing Cardiac Catheterization Knowledge of normal front teeth-to-carina distance (FT-C) might prevent accidental bronchial intubation. The aim of the current study was to measure FT-C and to examine whether the Morgan formula for oral intubation depth, i.e., endotracheal tube (ETT) position at front teeth (cm) = 0.10 x height (cm) + 5, gives appropriate guidance when intubating children of different ages. FT-C was measured in 170 infants and children, aged 1 day to 19 yr, undergoing cardiac catheterization. FT-C was obtained as the sum of the ETT length at the upper front teeth/dental ridge and the distance from the ETT tip to the carina. The latter measure was taken from an anterior-posterior chest x-ray. There was close linear correlation between FT-C and height: FT-C (cm) = 0.12 x height (cm) + 5.2, R = 0.98. The linear correlation coefficients (R) for FT-C versus weight and age were 0.78 and 0.91, respectively. If the Morgan formula had been used for intubation, the ETT tip would have been at 90 +/- 4% of FT-C. No patient would have been bronchially intubated, but the ETT tip would have been less than 0.5 cm from the carina in 13 infants. FT-C can be well predicted from the height/length of the child. The Morgan formula provides good guidance for intubation in children but can result in a distal ETT tip position in small infants. Careful auscultation is necessary to ensure correct tube position. TRACHEAL intubation is usually guided by direct visualization, and the endotracheal tube (ETT) is advanced until an appropriate depth marking is at the level of the vocal cords. Still, too-distal ETT placement is not uncommon, especially if the intubation is performed by less experienced practitioners.1In absence of radiographic confirmation, knowledge of normal upper front teeth–to–carina distance (FT-C) might be helpful in preventing bronchial intubation. We therefore measured FT-C in children of different ages. A second objective of the study was to examine whether the guideline for intubation depth suggested by Morgan and Steward2for children older than 4 yr—ETT position at upper front teeth (cm) = 0. 1 × height (cm) + 5 (the Morgan formula)—is also useful in young children and Materials and Methods Orally intubated children undergoing cardiac catheterization during anesthesia with or without muscle paralysis were included in the study. All patients were mechanically ventilated with pressure-controlled ventilation set at a peak end-inspiratory pressure of 11–23 cm H^2O, a rate of 13–45/min, a positive end-expiratory pressure of 3–5 cm, and a fraction of inspired oxygen of 0.21–1.0. The patients were supine and faced straight upward or 15°–45° laterally. The length mark of the ETT (Mallinckrodt Inc., Hazelwood, MO) at the upper front teeth/dental ridge (A) was recorded, and the distance from the ETT tip to the carina (B) was measured on an anterior–posterior chest x-ray, using the outer diameter of the ETT as reference (fig. 1). FT-C was calculated as A + B. If the resolution of the printed image did not allow precise identification of the structures, the higher resolution cine images were reviewed for guidance. Weight, height, and age were obtained from the patient’s chart. The study was approved by the institutional review board at Children’s Hospital and Regional Medical Center, Seattle, Washington, and the requirement for written informed consent was waived. Because the outer diameter of the endotracheal tube was used as reference for the B measurement, parallax errors due to x-ray “spread” were negligible,3but parallax errors due to the trachea not being exactly perpendicular to the anterior–posterior plane might be important. To estimate the latter error, the angle between the trachea and the horizontal plane was measured in 20 patients, aged 7 days to 10 yr, in whom lateral images had been obtained as part of the catheterization procedure. The angle between the trachea and the horizontal plane was 4°–26° (median, 16°). If the largest angle (26°) had been present in all study patients, it would have resulted in an underestimation of FT-C of 0.1–2.7% (median, 1.2%). No correction was made for this in the data. Statistical Analysis Statistical analysis was performed with Stata/SE 9.0 software (Stata Corporation, College Station, TX). Unless otherwise indicated, data in the text are mean ± SD. Using the method of least sum of squares, linear regression equations and 95% prediction interval bands were calculated for FT-C versus height, weight, and age, respectively. Multiple regression analysis was used to assess whether adding factors for sex, or a diagnosis associated with unusual airway features, would improve the linear regression model. P values less than 0.05 were considered statistically significant. Measurements were obtained in 170 patients aged 1 day to 19 yr. Sixty-two patients were younger than 1 yr, and 26 were younger than 1 month. Patient characteristics for all patients and for the infant subgroup are summarized in table 1. A cuffed ETT was present during 162 of the measurements. When compared with standard growth charts,∥36% and 45% of the patients were below the 25th percentile for height and weight, respectively. The deviation from normal growth was most noticeable in the infant subgroup, where 60% were below the 25th percentile for height and 66% were below the 25th percentile for weight. In children aged 1–19 yr, the corresponding figures were 27% for height and 31% for weight. No patient had severe dental, head, or neck abnormalities or markedly elevated abdominal pressure, but 6 patients had Down syndrome, 1 had DiGeorge syndrome, 1 had Goldenhar syndrome, 1 had velocardiofacial syndrome, 1 had CHARGE syndrome, and 1 had a history of premature birth at 30 weeks of gestation. Multiple regression analysis did not indicate that these 11 patients deviated from the rest of the group, and their measurements were therefore included in the regression analysis, but they have been assigned different symbols in figures 2 and 3. For the whole group (fig. 2), FT-C versus height and FT-C versus age were best described by linear regression equations, whereas the best fit model for the FT-C–versus –weight relation was a power equation: FT-C = 7.76 × weight^0.28, R ^2= 0.96. The closest correlation was obtained for FT-C versus height: FT-C (cm) = 0.12 × height + 5.2, R ^2= 0.98. Adding sex as an independent variable to the linear multiple regression model did not affect the FT-C–versus –weight or the FT-C–versus –age model but gave a small improvement in the FT-C–versus –height model (P < 0.05). Sex differences were small (for a given height, FT-C was only 1.4–3.3% greater in boys than in girls), and the regression lines for boys and girls combined are therefore presented in figure 2. In the infant subgroup (fig. 3), all three relations were best described by linear regression equations. The closest correlation was again obtained for FT-C versus height (fig. 3A). There was no sex difference in FT-C in the infant subgroup. If the ETT had been placed according to the Morgan formula, the ETT tip would have been at 90 ± 4% (range, 79–100%) of FT-C. No patient would have been bronchially intubated if the formula had been used, but in 14 patients (12 infants younger than 3 months, one 5-month-old infant, and one 6-yr-old girl), the formula would have resulted in an ETT tip–to–carina distance of less than 0.5 cm. The key finding of this study is that FT-C can be predicted from patient height. Although FT-C is also closely correlated to weight and age, the FT-C–versus –weight relation is not well described by a linear equation, and prediction interval bands are wider for both this relation and FT-C versus age (figs. 2 and 3). The current study group probably reflects the normal variation in a pediatric cardiac catheterization setting. The 1- to 19-yr-old patients did not differ much from standard growth curves, but many infants were small for age. It is therefore possible that different relations would have been obtained between FT-C and weight, and between FT-C and age, if the study group had consisted of healthy children, rather than children with cardiac disease. We propose, however, that the observed FT-C–versus –height relation is valid for the general population, because the length of the airway seems to increase in direct proportion to the length of the individual, and there is no indication that the presence of cardiac disease affects airway growth and the growth of the individual disproportionately. In six patients who returned for new examination within 6–12 months after the primary measurements, the FT-C/height ratios were thus 0.19–0.18, 0.21–0.20, 0.20–0.22, 0.19–0.20, 0.20–0.20, and 0.16–0.16. That the FT-C/height ratio remains rather constant during growth agrees with the fact that the FT-C–versus –height equation obtained in the infant subgroup and the equation obtained in all patients give similar FT-C estimates for infants. The corresponding FT-C estimates would be 11.1 and 11.2 cm for a 50-cm infant and 14.1 and 13.6 cm for a 70-cm infant, respectively. For practical purposes, the FT-C–versus –height equation for the whole group (fig. 2A) can therefore be used in infants as well. The finding that airway length is best correlated to the length of the individual is in agreement with previous studies in both children2,4and adults.5,6 Eagle5measured FT-C in adult patients and found that the expected FT-C values would be 21 and 26 cm in 140- and 180-cm individuals, respectively, which are close to the corresponding values of 22 and 27 cm given by the FT-C–versus –height equation in figure 2A. Many formulas have been suggested for estimating intubation depth in children.2,7,8,9The one we have found most reliable clinically was suggested by Morgan and Steward in 1982.2They estimated the distance from the incisors to mid-trachea in children aged 4–16 yr by combining measures of upper airway distances, taken from radiographs in 206 children, and lower airway distances, measured during rigid bronchoscopy in 50 children. The current findings (fig. 2A) suggest that the Morgan formula is also a useful guide when intubating younger children. The average distance from the ETT tip to the carina would have been approximately 1 cm in a 50-cm baby and 2 cm in a 100-cm child, had the Morgan formula been used. In some small infants, however, it can result in a distal ETT tip position ( fig. 3A). Clinically, we have therefore used a modified version of the formula in infants younger than 3 months: ETT length at front teeth/dental ridge (cm) = 0.10 × height (cm) + 4.10Lau et al. 7 have proposed another formula: ETT length (cm) = 0.5 × weight (kg) + 8. In the infants younger than 3 months included in the current study, the ETT tip–to–carina distance would have been 1.0–3.2 cm if the modified Morgan formula had been used and 0.5–3.3 cm if the Lau formula had been used. Age and weight information is sometimes more accessible in the operating room than height. Age- or weight-based linear formulas can give adequate guidance if the age/weight range is limited, but they are less reliable over greater age and weight ranges, and they are clearly useless in adults because the length of the airway does not automatically increase with increasing age or weight. For similar reasons, they are also less reliable in obese children and in children who have lost weight or stopped growing because of chronic disease. Bronchial intubation can have serious consequences in patients with cardiac disease, and the current FT-C data were therefore used to assess two commonly recommended formulas: ETT length (cm) = 0.5 × age (yr) + 12; and ETT length (cm) = 0.2 × weight (kg) + 12.11,12If applied only in children aged 3–14 yr (n = 74), as is usually suggested, the age formula would have resulted in bronchial intubation in 1 of 74 patients, and the position of the ETT tip would have been at 81 ± 6% of FT-C (range, 69–104%). The weight formula would have placed the tube closer to the carina at 91 ± 8% of FT-C (range, 77–115%), and 6 of 74 would have been bronchially intubated. A lower age limit of 2–3 yr is indeed appropriate for the age- and weight-based formulas; if applied in all 170 patients in our study, these formulas would have resulted in bronchial intubation in 43 and 74 patients, respectively. In contrast, no patient would have been bronchially intubated if the Morgan formula had been used. Although an ETT placed at the distance given by the Morgan formula rarely needs to be repositioned, a formula based on height cannot be expected to give useful guidance in patients with disproportional length-growth, e.g. , patients with chondrodysplasia, and adjustments may have to be made in patients with severe facial or dental abnormalities. Also, it should be noted that interventions can change the ETT tip–to–carina distance. Böttcher-Haberzeth et al. measured the changes in carina position during laparoscopy, and found that 20° head-down tilt combined with capnoperitoneum resulted in a cranial displacement of the carina by 1.2 + 0.11 × age in centimeters.13Had the Morgan formula been used for ETT positioning, such a change would have resulted in bronchial intubation in 54% of our patients. Consequently, sole reliance on a specific formula is not advisable, and careful auscultation is necessary to ensure appropriate position. Some methodologic issues should be noted. One limitation of our study is thus that the FT-C–versus –weight and FT-C–versus –age relations for the infant subgroup might not reflect the relation in the general population. Second, FT-C will depend on how the distance is measured. Using the ETT itself as measuring device, as was done in the current study, will likely give different but perhaps more clinically relevant values than those obtained by rigid or fiberoptic bronchoscopy. The definition of intubation depth is also important. Measuring the ETT length at the upper front teeth, as suggested by Morgan and Steward,2will give somewhat greater but probably more reproducible values than measuring the ETT length at the corner of the mouth. No correction was made for parallax errors, but, as outlined above, the resultant underestimation is small. Although the position of the carina normally varies little with ventilation, our measurements were not timed with the ventilatory cycle, and the ventilatory pressures varied considerably, especially in infants. This might be one explanation for the greater FT-C variation observed in the infant subgroup (fig. 3). Finally, the head position varied: Some patients faced straight upward, whereas others had their head turned to the side. It is unlikely that this had an important effect on the FT-C measurements.14It was appreciated, however, that flexion and extension of the neck results in a more caudal and cranial ETT tip position, respectively,15and that extension also increases the tracheal length,16and care was therefore taken to place the neck in a neutral anterior/posterior position. In summary, there was a close relation between the front teeth–to–carina distance and the length/height of the child. The Morgan formula provides good guidance for intubation in children but can result in a distal ETT tip position in small infants. The authors thank Thomas K. Jones, M.D. (Professor, Division of Pediatric Cardiology, Department of Pediatrics, Children’s Hospital and Regional Medical Center, Seattle, Washington), Jack C. Salerno, M.D. (Assistant Professor, Division of Pediatric Cardiology, Department of Pediatrics, Children’s Hospital and Regional Medical Center), and Terrence U. Chun, M.D. (Assistant Professor, Division of Pediatric Cardiology, Department of Pediatrics, Children’s Hospital and Regional Medical Center), for assistance in obtaining x-ray printouts; and Do Peterson, M.S. (Biostatistician, Office of Biostatistical Services, Children’s Hospital and Regional Medical Center), for assistance in performing the statistical analysis. Orf J, Thomas SH, Ahmed W, Wiebe L, Chamberlin P, Wedel SK, Houck C: Appropriateness of endotracheal tube size and insertion depth in children undergoing air medical transport. Pediatr Emerg Care 2000; 16:321–7 Morgan GAR, Steward DJ: Linear airway dimensions in children: Including those with cleft palate. Can Anaesth Soc J 1982; 29:1–8 Wells TR, Landing BH, Padua EM: The question of parallax-effect on radiographic assessment of short trachea in infants and children. Pediatr Radiol 1991; 21:490–3 Griscom NT, Wohl MEB, Fenton T: Dimensions of the trachea to age 6 years related to height. Pediatr Pulmonol 1989; 6:186–90 Eagle CCP: The relationship between a person’s height and appropriate endotracheal tube length. Anaesth Intensive Care 1992; 20:156–60 Cherng C-H, Wong C-S, Hsu C-H, Ho S-T: Airway length in adults: Estimation of the optimal endotracheal tube length for orotracheal intubation. J Clin Anesth 2002; 14:271–4 Lau N, Playfor SD, Rashid A, Dhanarass M: New formulae for predicting tracheal tube length. Pediatr Anesth 2006; 16:1238–43 Phipps LM, Thomas NJ, Gilmore RK, Raymond JA, Bittner TR, Orr RA, Robertson CL: Prospective assessment of guidelines for determining appropriate depth of endotracheal tube placement in children. Pediatr Crit Care Med 2005; 6:519–22 Weiss M, Gerber AC, Dullenkopf A: Appropriate placement of intubation depth marks in a new cuffed paediatric tracheal tube. Br J Anaesth 2005; 94:80–7 Frei FJ, Erb T, Jonmarker C, Sümpelmann R, Werner O: Kinderanästhesie, 3rd edition. Heidelberg, Springer, 2004, p 139 Wheeler M, Cóte CJ, Todres ID: Pediatric airway, A Practice of Anesthesia for Infants and Children, 3rd edition. Edited by Cóte CJ, Todres ID, Goudsouzian NG, Ryan JF. Philadelphia, WB Saunders, 2001, p 92Cóte CJ, Todres ID, Goudsouzian NG, Ryan JF WB Saunders Motoyama EK, Gronert BJ, Fine GF: Induction of anesthesia and maintenance of the airway in infants and children, Smith’s Anesthesia for Infants and children, 7th edition. Edited by Motoyama EK, Davis PJ. Philadelphia, Mosby Elsevier, 2006, p 337Motoyama EK, Davis PJ Mosby Elsevier Böttcher-Haberzeth S, Dullenkopf A, Gitzelmann CA, Weiss M: Tracheal tube tip displacement during laparoscopy in children. Anaesthesia 2007; 62:131–4 Olufolabi AJ, Charlton GA, Spargo PM: Effect of head posture on tracheal tube position in children. Anaesthesia 2004; 59:1069–72 Weiss M, Knirsch W, Kretschmar O, Dullenkopf A, Tomaske M, Balmer C, Stutz K, Gerber AC, Berger F: Tracheal tube-tip displacement in children during head-neck movement: A radiological assessment. Br J Anaesth 2006; 96:486–91 Jin-Hee K, Ro Y-J, Seong-Won M, Chong-Soo K, Seong-Deok K, Lee JH, Jae-Hyon B: Elongation of the trachea during neck extension in children: Implications of the safety of endotracheal tubes. Anesth Analg 2005; 101:974–7
{"url":"https://pubs.asahq.org/anesthesiology/article/108/6/1004/8411/Front-Teeth-to-Carina-Distance-in-Children","timestamp":"2024-11-02T15:38:35Z","content_type":"text/html","content_length":"161141","record_id":"<urn:uuid:ba9f6297-6b23-40b7-ba00-2b0f160c4d55>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00412.warc.gz"}
Sample Distribution This article is part of the EconHelp Tutoring Wiki In statistics, a sampling distribution or finite-sample distribution is the distribution of a given statistic based on a random sample of size n. It may be considered as the distribution of the statistic for all possible samples from the same population of a given size. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, and the sample size used. For example, consider a normal population with mean μ and variance σ². Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean for each sample — this statistic is called the sample mean. Each sample has its own average value, and the distribution of these averages is called the “sampling distribution of the sample mean”. This distribution is normal since the underlying population is normal. This is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are frequently more complicated, and oftentimes they don’t even exist in closed-form. In such cases the sampling distributions may be approximated through Monte-Carlo simulations, bootstrap method, or asymptotic distribution theory. The standard deviation of the sampling distribution of the statistic is referred to as the standard error of that quantity. For the case where the statistic is the sample mean, the standard error is: ${\displaystyle \sigma _{\bar {X}}={\frac {\sigma }{\sqrt {n}}}}$ where σ is the standard deviation of the population distribution of that quantity and n is the size (number of items) in the sample. A very important implication of this formula is that you must quadruple the sample size (4×) to achieve half (1/2) the measurement error. When designing statistical studies where cost is a factor, this may have a factor in understanding cost-benefit tradeoffs. Alternatively, consider the sample median from the same population. It has a different sampling distribution which is generally not normal (but may be close under certain circumstances).
{"url":"https://wiki.ubc.ca/Sample_Distribution","timestamp":"2024-11-05T09:59:51Z","content_type":"text/html","content_length":"24421","record_id":"<urn:uuid:836b6fd8-a474-4246-825b-6710af16d16d>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00874.warc.gz"}
Definition of RATIONALLY : having reason or understanding : relating to, based on, or agreeable to reason : reasonable a rational explanation rational behavior : involving only multiplication, division, addition, and subtraction and only a finite number of times : relating to, consisting of, or being one or more rational numbers a rational root of an equation Examples of rational in a Sentence Adjective human beings are rational creatures insisted there was a rational explanation for the strange creaking noises and that there were no such things as ghosts Recent Examples on the Web Anti-intellectualism is much more than just the rejection of the scientific method or rational thought. —Matt Motta, Scientific American, 29 Oct. 2024 Military command structures are, by definition, authoritarian, but their effectiveness depends on accurate information and rational discussion among civilian and military leaders. —Nina Turner, Newsweek, 29 Oct. 2024 For instance, in one group, collect all rationals that, when squared, are less than 2; in the other, put all rationals whose squares are greater than 2. —Jordana Cepelewicz, Quanta Magazine, 21 June 2024 At the time the database had a little over 3 million elliptic curves over the rationals. —Lyndie Chiou, Quanta Magazine, 5 Mar. 2024 See all Example Sentences for rational Last Updated: - Updated example sentences
{"url":"https://www.merriam-webster.com/dictionary/rationally","timestamp":"2024-11-11T04:00:33Z","content_type":"text/html","content_length":"711274","record_id":"<urn:uuid:fa02708e-5b31-406c-bf41-95e21ab48ec4>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00007.warc.gz"}
Quantum Algorithms And Their Applications | Thane Ritchie Quantum Computing is a revolutionary technology based on quantum-physical realities and will change the landscape of the world significantly with the computational power and performance that surpasses classical machines. Moreover, what gives this promise a stronghold are the quantum algorithms that utilize superposition and entanglement amongst other principles in quantum theory to solve tasks that are beyond classical machines to compute. The range of application of quantum algorithms is quite vast and definitely spans areas including cryptography and optimization, machine learning and material science. This paper will focus on some important quantum algorithms as well as their application areas and the level of effect that these algorithms may have in various sectors of the Shor’s Algorithm: How Classical Cryptography is Tamed It is one of the widely circulated facts that almost everyone interested in quantum computing, Let us start with one of the best-known quantum algorithms roughly referred to as Shor’s Algorithm. It was designed by Peter Shor for integer factorization back in 1994. Thus, the task of large composites is now easy with Shor’s algorithm, for the reason being, this algorithm performed the task of factoring into primes significantly faster than any classical algorithm in existence. What this means for the fundamental aspects of cryptography especially, for the public key systems such as RSA is staggering since factoring based systems are used to safeguard communication made over the internet. Application Impact Cryptography Potential to break RSA and similar cryptosystems Cybersecurity Necessitates development of quantum-resistant encryption National Security Drives secure encryption for governmental use Shor’s Algorithm in Cryptography Conventional computing systems, if given adequate time, can bypass advanced encryption keys, which may explain why a more powerful quantum computer armed with Shor’s Algorithm can break through in a few short hours. This development nurtured strong work on quantum secure algorithm design and the emergence of industries and nations seeking to secure critical information from future quantum tide. Understanding Shor’s Algorithm • Cryptography: Shatters RSA and other public key encryption systems. • Cybersecurity: Stresses the importance of post quantum encryption. • National Security: Enables quantum secure encryption mechanisms for government communication. Oversized Grover’s Algorithm: Reducing a time interrupt in searching Grover’s Algorithm is a quantum search algorithm created by Lov Grover in 1996. This algorithm accomplishes a database unsorted search in a quadratic time. In a relative context searching in an unsorted go get database will cost a time in a linear form O(N) which imply should there be N number the entries a search is likely to take N steps. Grover’s Algorithm has, however, managed to decrease the mean value of the time taken in searches to about the square root of N . This in turn indicates great speed improvements have been realised. Grover’s Algorithm is far from operating at an exponential speedup, but it does find itself used in a fair amount of areas that involve search operations such as data-mining as well as chemical simulations. It is also the case that Grover’s Algorithm is less effective than Shor’s in this area, yet it can potentially reduce the strength of symmetric encryption by an entire two fold if the brute-forcing is improved…. which brings up a danger in cryptographic systems. Applications of Grover’s Algorithm include but are not limited to: • Database search that makes it more efficient when requiring information from a nonendent or unsorted database • Optimization for the purpose of attaining faster optimal solutions, this applies in ai and ml as well. • Requires a symmetric encryption to beef up security. Quoting Thane Ritchie, he said “Quantum algorithms like Grover’s are not only about speed improvements; they actually change our perception regarding how to deal with various problems on the largest scale, including modern challenges of cryptography and AI. The acceleration gives new possibilities for previously impossible tasks to be solved.” Quantum Approximate Optimization Algorithm (QAOA): Addressing optimization problems that are overly complex Logistics finance and algorithms for machine learning tend to have complex optimisation problems which the quantum approximate optimisation algorithm was designed for. For combinational optimisation problems he classic methods and quantum computations approximately solved using the combination of both. Best suited for practical instances where there is no feasible option of finding a solution in exact result but an approximate one would suffice. As an example, QAOA is useful for solving the problem of the traveling salesman, which is to find the shortest tour in which the salesman visits all the cities he is given once and returns to his starting point, which is a problem that increases in difficulty exponentially with the increase of the number of cities. With the use of QAOA, quantum computers can tackle such challenges efficiently thereby cutting down the time taken by industries to integrate and enhance their resource allocation, supply chain and even financial portfolio management. Applications of QAOA • Logistics: Improving supply chains and routing. • Finance: Management and optimization of a financial portfolio. • Machine Learning: Speeding up the process of training and parameter adjustment of complex models. Field Application Benefit Logistics Optimizing supply chain routing Reduces time and cost in logistics operations Finance Portfolio optimization and risk management Improves decision-making in investment strategies Machine Learning Parameter tuning and model training Accelerates model optimization in AI applications QAOA in Optimization Variational Quantum Eigensolver (VQE): Changing Material Science and Chemistry VQE or the Variational Quantum Eigensolver is an algorithm designed to determine the lowest energy configurations of a molecular system and is therefore critical for quantum chemistry and material science computations. The VQE certainly enables quantum computers to model the interaction between molecules and in magnitude enable precise control of reactions during the simulation which is of importance in the development and predicting properties of new materials. These types of simulations are beyond the reach of classical computers and this is because as the number of atoms involved increases complexity increases at an exponential rate. VQE might also prove to be important in the construction of pharmaceutical products by providing insight on the relationship of molecules within a drug which will accelerate the invention of drugs by pharmaceutical industries. In the same way, it can be useful in producing next generation energy storage systems such as more efficient batteries or broadly efficient solar panels. Applications of VQE • Drug development: Virtual simulation of drug molecules to shorten the period between drug launching and drug developmental process. • Drug development: Virtual simulation of drug molecules to shorten the period between drug launching and drug. • Energy: Making batteries and devices specific to other forms of energy. Quantum Machine Learning (QML): Facilitating AI Uses The confluence of quantum computer and machine learning has given rise to a new paradigm which holds promise for improving data analytical processes and model training in making learning machines. Besides, Quantum computers are predicted to speed up the training of models of ML and other tasks requiring massive data due to their ability to perform formal operations on large amounts of data and to solve large computations. To improve model classification, QSVM and quantum neural networks through quantum computation are being proposed for use. QML applications may change how diagnostics are done and how financial models are developed. While any industry that currently relies on being data driven will be able to conduct it on a less time basis. Applications of Quantum Machine Learning • healthcare: Increase the speed of diagnosis and effectiveness of personalized treatment programs. • insurance and finance: Better risk evaluation and enhancement of fraud monitoring. • Marketing: Better customer targeting and advertising strategies. Field Application Benefit Healthcare Diagnostics and treatment planning Faster, more personalized patient care Finance Fraud detection and risk assessment Improved security and investment decisions Marketing Customer segmentation and recommendation systems More targeted, effective marketing strategies Quantum Machine Learning Applications Obstacles and Future Promise of Quantum Algorithms The algorithms of course are promising but there are still hurdles to be crossed in building the hardware that is necessary for a working quantum computer. Quantum decoherence, or loss of quantum information, constrains the duration for which quantum information can be held, making it hard to execute complex quantum processing. Secondly, several quantum algorithms are still experimental and have yet to be implemented on a commercial scale due to the lack of sufficient advancements in quantum hardware. However, the prospects of quantum hardware becoming a reality make it possible for the algorithms to become practical in the future and therefore address major challenges in industries requiring greater processing power. Quantum algorithms, which range from encryption breaking and logistics optimization, or even AI and materials science improvement, are shifting orbit of the computational possibilities. Shor’s and Grover’s algorithms are already illustrative of the potential of quantum computing to revolutionize the domain of cyber security, whereas more recent algorithms such as QAOA or VQE are targeted at optimization and simulation problems in the real world. As the technological landscape of quantum computing improves, these algorithms will reshape complex approaches in many sectors.
{"url":"https://thane-ritchie.com/quantum-algorithms-and-their-applications/","timestamp":"2024-11-03T12:14:07Z","content_type":"text/html","content_length":"74972","record_id":"<urn:uuid:f8adf76c-4260-4e25-852a-08296a5b99dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00711.warc.gz"}
Spherically symmetric but variable charge density A spherical v... | Filo Spherically symmetric but variable charge density A spherical volume of radius contains a non-uniform charge density which varies with distance from the centre as , where is a positive constant. Find the electric field at a distance from the centre of the sphere for (a) and (b) . Also plot a graph showing the variation of electric field with . Not the question you're searching for? + Ask your question Here is the solution, Was this solution helpful? Found 6 tutors discussing this question Discuss this question LIVE 13 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice questions from Physics for JEE Main and Advanced Electrostatics and Current Electricity (McGraw Hill) View more Practice more questions from Electric Charges and Fields View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Physics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Spherically symmetric but variable charge density A spherical volume of radius contains a non-uniform charge density which varies with distance from the centre as , where is a positive Text constant. Find the electric field at a distance from the centre of the sphere for (a) and (b) . Also plot a graph showing the variation of electric field with . Topic Electric Charges and Fields Subject Physics Class Class 12 Answer Text solution:1 Upvotes 142
{"url":"https://askfilo.com/physics-question-answers/spherically-symmetric-but-variable-charge-densitya-spherical-volume-of-radius-a","timestamp":"2024-11-03T16:11:37Z","content_type":"text/html","content_length":"278702","record_id":"<urn:uuid:18d4cc9c-ab3c-4750-89e7-f832c7542cd9>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00270.warc.gz"}
creating ssh keys using ssh-keygen and copying to server - SillyCodes creating ssh keys using ssh-keygen and copying to server Creating SSH keys using ssh-keygen command on Linux : We can create SSH key using ssh-keygen command in CentOS and ubuntu Linux system. Please use the following command to create your SSH private and public keys Once you enter the above command, It will ask you for the keyphrase, Keyphrase adds an additional layer of security but if you want to make SSH or SCP through scripts then it is a good idea to leave it blank. Here i’m not using any keyphrase for simplicity. Sample Output : 1 root@ubuntu:/home/venkatesh/Desktop/python/parse# ssh-keygen -t rsa 2 Generating public/private rsa key pair. 3 Enter file in which to save the key (/root/.ssh/id_rsa): 4 /root/.ssh/id_rsa already exists. 5 Overwrite (y/n)? y 6 Enter passphrase (empty for no passphrase): 7 Enter same passphrase again: 8 Your identification has been saved in /root/.ssh/id_rsa. 9 Your public key has been saved in /root/.ssh/id_rsa.pub. 10 The key fingerprint is: 11 :321:dsa:219 root@ubuntu 12 The key's randomart image is: 13 +--[ RSA 2048]----+ 14 | | 15 | | 16 | | 17 | | 18 | S o E| 19 | .oo. = o+| 20 | .o.... Oo+| 21 | ..oo. = ==| 22 | o+ =-.| 23 +-----------------+ 24 root@ubuntu:/home/venkatesh/Desktop/python/parse# You can see your Public key and private key under your home directory ex : /root/.ssh folder 1 root@ubuntu:/home/venkatesh/Desktop/python/parse# ls /root/.ssh/id_rsa.pub 2 /root/.ssh/id_rsa.pub Moving SSH key’s to Server using ssh-copy-id command: Run the following command to move your SSH key’s to your server. 1 ssh-copy-id -i /root/.ssh/id_rsa username@serverAddress.com Sample output : 1 root@ubuntu:/home/venkatesh/Desktop/python/parse# ssh-copy-id -i /root/.ssh/id_rsa root@test.sillycodes.com 2 /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed 3 /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys 4 root@test.sillycodes.com's password: 6 Number of key(s) added: 1 8 Now try logging into the machine, with: "ssh 'root@test.sillycodes.com'" 9 and check to make sure that only the key(s) you wanted were added. Testing the SSH key: Now, let’s try to log into the server using SSH. 1 ssh root@yourserver.com -p Port_number Sample Output: 1 root@ubuntu:/home/venkatesh/Desktop/python/parse# ssh root@test.sillycodes.com 2 Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-83-generic x86_64) 4 * Documentation: https://help.ubuntu.com 5 * Management: https://landscape.canonical.com 6 * Support: https://ubuntu.com/advantage 8 Get cloud support with Ubuntu Advantage Cloud Guest: 9 http://www.ubuntu.com/business/services/cloud 11 82 packages can be updated. 12 0 updates are security updates. 14 *** System restart required *** 15 Last login: Wed Oct 11 20:11:41 2017 from 183.83.77.58 That’s it your successfully created SSH key’s and uploaded it into server. Leave a ReplyCancel reply
{"url":"https://sillycodes.com/ssh-key-generation-using-ssh-keygen-copying-server/","timestamp":"2024-11-04T23:53:56Z","content_type":"text/html","content_length":"293430","record_id":"<urn:uuid:0c0593d7-468f-4d18-8d9b-81cd9125be9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00179.warc.gz"}
How to Perform Hypothesis Testing The following are the main steps in hypothesis testing: 1. State the hypothesis and the alternative to the hypothesis 2. Identify the appropriate test statistic and its distribution. Ensure that any assumptions about the data are met (stationarity, normality, etc.) 3. Specify the significance level, $\alpha$ 4. From $\alpha$ and the distribution compute the 'critical value'. 5. Collect the data and calculate the test statistic 6. Compare test statistic with critical value and decide whether to accept or reject the hypothesis. First we state the hypothesis that we wish to test. We do this by identifying a null hypothesis and an alternative hypothesis. The null hypothesis, $H_0$, is the one that we want to test, while the alternative hypothesis, $H_A$, is the hypothesis that is accepted in the case where $H_0$ is rejected. Let's say that we want to test whether the mean return of Microsoft stock is positive. The parameter that we are testing is denoted by $\theta$ and the proposed value of the parameter is denoted by $ \theta_0$, which in this case is equal to $0$. So we say that our $H_0$ is $\theta = \theta_0$, that the returns are negative, and our $H_A$ is $\theta \neq \theta_0$. Including this formation, there are three possible ways to formulate null and alternative hypotheses: 1. $H_0: \theta = \theta_0$ versus $H_A: \theta \neq \theta_0$ (A "not equal to" alternative hypothesis) 2. $H_0: \theta \leq \theta_0$ versus $H_A: \theta > \theta_0$ (A "greater than" alternative hypothesis) 3. $H_0: \theta \geq \theta_0$ versus $H_A: \theta < \theta_0$ (A "less than" alternative hypothesis) In this case, where we are testing the returns of MSFT, $\theta = \mu_{MSFT}$, representing the stock's mean returns. Since we are testing whether the returns are positive or negative, we have that $ \theta_0 = 0$. Our example follows the first formulation of a hypothesis test. This is a two-sided hypothesis test (or two-tailed hypothesis test). The second and third formulations are examples of a one-sided hypothesis test (or one-tailed hypothesis test). With a one-sided test, we reject the null in favor of the alternative only if the data indivates that $\theta$ is repectively greater than or less than $\theta_0$. A two-sided test rejects the null in favor of the alternative if the data indicates that $\theta$ is either greater or less than $\theta_0$. So if we were to write out our hypothesis for MSFT in more qualitative terms, we would have: \begin{eqnarray} H_0 &:& \text{The mean return on Microsoft stock is $0$}\\ H_A &:& \text{The mean return on Microsoft stock is not $0$} \end{eqnarray} When forming a hypothesis test, the null and alternative hypothesis must be complementary to each other. Between them they must cover all values of $\theta$. Regardless of the type of hypothesis test we are performing, we always test the null hypothesis as if $\theta = \theta_0$. In the case of either of the one-tailed tests, this will still provide more than enough evidence for us to make a decision. For example, if $H_0: \theta \leq 0$, $H_A: \theta > 0$, and we have enough evidence to reject $H_0: \theta = 0$ in favor of $H_A: \theta > 0$, then this holds true for all values less than $0$ as well. The most common type of hypothesis test is the two-tailed, "not equal to", hypothesis test, because it presents a neutral view. The one-tailed hypothesis tests are less neutral than the "not equal to" test, reflecting the thoughts of the tester. One-tailed tests are often used to test "hoped for" results or results that the testers have a prior idea about.
{"url":"https://notebook.community/quantopian/research_public/notebooks/lectures/Hypothesis_Testing/notebook","timestamp":"2024-11-15T02:31:07Z","content_type":"text/html","content_length":"554554","record_id":"<urn:uuid:1c62740e-63c4-4c80-97ef-b0b3443a2bf9>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00716.warc.gz"}
Positive and Negative Numbers Calculator Help kids understand positive and negative numbers with this simple and easy calculator! Enter Numbers and Operation How to Use the Positive and Negative Numbers Calculator? Simply enter any two positive or negative numbers, choose the desired operation (Addition, Subtraction, Multiplication, or Division), and click "Calculate" to see the result. What is the Positive and Negative Numbers Calculator? A Positive and Negative Numbers Calculator for Kids is a simple, child-friendly tool designed to help kids learn about and practice operations with positive and negative numbers. This calculator is very useful and simple for introducing young learners kids to integers, helping them understand how to add, subtract, multiply, and divide numbers with different signs. How Positive and Negative Numbers Calculator Works and Example Input: Kids can enter both positive and negative numbers using a plus (+) or minus (-) sign before the number. Operations: It provides options for basic arithmetic operations—addition, subtraction, multiplication, and division. Many calculators for kids will have buttons like “+,” “−,” “×,” and “÷” so they can choose which operation to apply. Suppose a child wants to solve -5 + 3. Enter Numbers and Select Operation: • Enter "-5" in the first box and "3" in the second box. • Choose the addition operation (+). • Calculate and Display Result: The calculator will display the result, which is "-2." Here are some more sample example: Addition Example: • Problem: −7+4 • Solution: Moving 4 units to the right from -7 brings you to -3. • Result: −7+4=−3 Subtraction Example: • Problem: −3−2 • Solution: Moving 2 units to the left from -3 brings you to -5. • Result: −3−2=−5 Multiplication Example: • Problem: −3×4 • Solution: When multiplying a negative by a positive, the result is negative. • Result: −3×4=−12 Division Example: • Problem: −12÷3 • Solution: When dividing a negative by a positive, the result is negative. • Result: −12÷3=−4 What is the Positive and Negative Numbers Calculator Formula? This calculator uses basic arithmetic formulas based on the operation selected by the user, helping kids learn to add, subtract, multiply, and divide numbers with both positive and negative values. Benefits of Positive and Negative Numbers Calculator! • Helps kids understand positive and negative numbers. • Improves their arithmetic skills. • Provides instant results to reinforce learning. Frequently Asked Questions (FAQ) Q: Can kids use this calculator to learn basic math? A: Yes, it's designed to help kids understand and work with positive and negative numbers. Q: What is the purpose of this calculator? A: It helps kids learn the effects of operations with positive and negative numbers.
{"url":"https://www.theusalist.com/2024/10/positive-and-negative-numbers-calculator.html","timestamp":"2024-11-03T15:13:17Z","content_type":"application/xhtml+xml","content_length":"213953","record_id":"<urn:uuid:b1cc757f-857f-41d6-b895-b37c2ddb3693>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00396.warc.gz"}
Optimal Control Strategy for a Fully Determined HIV Model Intelligent Control and Automation, 2010, 1, 15-19 doi:10.4236/ica.201.11002 Published Online August 2010 (http://www.SciRP.org/journal/ica) Copyright © 2010 SciRes. ICA Optimal Control Strategy for a Fully Determined HIV Model Mohammad Shirazian1, Mohammad Hadi Farahi1,2 1Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran 2The Center of Excellence in Modeling and Computations in Linear and Nonlinear Systems Ferdowsi University of Mashhad, Mashhad, Iran E-mail: mo.shirazian@stu-mail.um.ac.ir, farahi@math.um.ac.ir Received January 25, 2010; revised March 21, 2010; accepted June 22, 2010 This paper shows how mathematical methods can be implemented to formulate guidelines for clinical testing and monitoring of HIV/AIDS disease. First, a mathematical model for HIV infection is presented which the measurement of the CD4+T cells and the viral load counts are needed to estimate all its parameters. Next, through an analysis of model properties, the minimal number of measurement samples is obtained. In the sequel, the effect of Reverse Transcriptase enzyme Inhibitor (RTI) on HIV progression is demonstrated by using a control function. Also the total cost of treatment by this kind of drugs has been minimized. The nu- merical results are obtained by a numerical method in discretization issue, called AVK. Keywords: HIV/AIDS, Mathematical Modeling, System Identification, Control Theory, Immunotherapy 1. Introduction Despite tremendous effort for mathematical modeling of HIV/AIDS (for example, see [1-4]), estimation of model parameters has not been attended a lot. For example, in [2,5,6], only the virus clearance rate and the death rate of infected CD4+T cells have been estimated. The impor- tance of parameter estimation in models, is due to pre- dicting “set-points” in the early infection stage for mak- ing the desired treatment decisions (See [7]). One of the objectives of this paper is presenting a re- alistic model, i.e. the basic model of HIV, and estimating all its parameters. It is necessary to mention that one can identify all of the model parameters by using measured output (For more details see [4]). Another objective is to add a control function to the identified basic model which plays the role of reverse transcriptase enzyme inhibitor drug in disease progres- In the sequel, the optimal control model of HIV will be solved by a method in discretization issue, called Numerical results are obtained using mathematical softwares, LINGO and MATLAB. 2. Translating Biological Knowledge to Ordinary Differential Equations (ODE) To make ODE’s from biological knowledge, first we need some syntax. For example, if we denote the count of uninfected and infected CD4+T helper cells, with a and b, respectively, the syntax “0a” can be used to present this biological descriptions: “Uninfected CD4+T cells die” and the syntax “ab bb” can present: “The reaction between two infected and uninfected CD4+T cells produces two infected CD4+T cells”. Now, for translating these syntaxes to the corresponding ODE’s, we use Mass action law. This law says: “The rate of change of products is proportional to the product of re- actants concentration”. So if the syntax “ab c” is obtained, according to the mass action law, we can write , for k > 0, where dc dt is denoted by c . Two other reactions in the previous syntax is dying a and b reactants, while producing c. So we have also these two ODE's as: akab and bkab , for k > 0. Fi- nally, the desired ODE, corresponding to the syntax “ab c ” is M. SHIRAZIAN ET AL. Copyright © 2010 SciRes. ICA Obviously, the rate of change of a product is the sum of changes from all reactions. 3. HIV Basic Model The target cells of HIV infection are lymphocyte helper cells, specially CD4+T cells. These cells become infected and begin to produce free virions. The main fact about HIV infection, is reducing the count of CD4+T cells, which have an essential role in protecting body against different pathogens. So it is important to understand the dynamics of CD4+T cell count as a function of time. In HIV infection basic model, three groups of molecules are considered; Uninfected CD4+T cells (T), infected CD4+T cells (I) and viral load (V). Biological descriptions, tran- slation to reactions and corresponding ODE’s are pre- sented in Table 1. Now, according to Table 1 and Section 2, the com- plete ODE model, which is sum of contributions from all reactions, is as follows: TsdT TV 4. Properties of HIV Basic Model There are two advantages to show the virous propagation in HIV disease, by the basic model (1). 1) From medical point of view, one important subject is the relative steady viral level during the asymptomatic stage of an HIV infection. This level is called “set-point”. When body reaches this level, immune system develops HIV antibodies and begins to attempt to fight the virus. The higher the viral load of the set point, the faster the virus will progress to full blown AIDS (See [8]). It can be shown that set-point is the amount of V, in the equilibrium of virus depicted by the model (1), that is ks d 2) It can be seen that a model of such a simple nature is able to adequately reflect the disease progression from the initial infection to an asymptomatic stage after the set-point is reached (See [9]). 5. Estimation of Models Parameters Using In this section, our aim is to estimate all parameters of HIV basic model (1). Clinically all six variables in model (1), can be measured. Since the cost of quantifying the infected cells is much higher, we are going to omit vari- able I, initially. For this, let 1 yT and 2 . After some calculations, model (1) can be changed to: The vector defines a one-to-one map for 0 and c . Therefore the identification of the original parameters of (1) is equivalent to the identification of It is known that for most HIV patients, 0 (See [7]). In this case, the following inverse map can be defined: Table 1. HIV basic model interactions. Biological description Translation to reactionsReaction rate Translation to ODE CD4+T cells production 0T CD4+T cells natural death 0T d TdT CD4+T cells become infected by virus TV IV Infected CD4+T cells death 0I Virus replication in infected CD4+T cells IV k VkI Virus natural death 0 V c VcV M. SHIRAZIAN ET AL. Copyright © 2010 SciRes. ICA . (4) Since there are three unknown parameters in each of Equation (2) and (3), it is necessary to generate at least two other equations based on each of them. This will be achieved by differentiating (2) and (3) more times, and produce upper derivatives of 1 y and 2 y. So one can concludes that at least four measurements of 1 y, CD4+T cell count, and five measurements of 2 y, viral load, are needed for a complete determination of model (1) pa- rameters (See [7]). Assume that the following measurements are avail- By discretization of Equations (2) and (3), and substi- tuting the approximated values of first derivative of 1 and the first and second derivatives of 2 y, we found that yyy i ii i yy yyy dd d Or in matrix form, we have yyy yy yyy d Similar matrix form can be obtained from (6). Thus, the variables i , i = 1, 2, ..., 6 and then from (4), all of the basic model parameters can be calculated. As an example, we considered the basic model (1), where the following estimated parameters are as Xia [7]. 7, 0.007,0.00000042163, 0.0999,0.2,90.67 . Table 2. Available measurements for the count of CD4+T cells and viral load. Time (t) CD4+T cell count (1 y) Viral load (2 t 0 y 0 y 1 y 2 tdd d y 3 tdd dd – 4 The solution of model (1) for [0,1000]t, with the initial values 01000T , 00I and 07000V, can be determined using the well-known numerical methods like RK4. The graphs of the propagation of healthy CD4+T cells, infected CD4+T cells and virous loads, re- spectively, are shown in Figure 1. 6. HIV Infection Optimal Control Model There are three convenient groups of drugs for AIDS retroviral therapy; Reverse transcriptase, Protease, and Integrase enzyme inhibitors. In this section, we study the role of reverse transcriptase inhibitors. The main action of this kind of drugs is preventing uninfected lymphocyte cells, to be infected by viral load. According to Table 1, Figure 1. The solution of basic model of HIV, model (1). M. SHIRAZIAN ET AL. Copyright © 2010 SciRes. ICA this action is equivalent to the reaction TV IV. So we control the first equation to prevent the transmis- sion of uninfected cells to infected ones. This control function is called ()ut, where 0()1ut. The most drug efficiency is in the case 1u which means CD4+T cells are not infected by viral load anymore. At the other side, 0u is the case which the drug does not change the disease progression. By above argument, the control system is as: (1 ), (1 ), TsdT TV u ITVu I Using [10], consider the objective functional to be de- fined as: TuTtut dt where 110 . Our goal is maximizing the objective functional (9) subject to the control system (8); that is, maximizing the total count of CD4+T cells and minimiz- ing the costs of treatment by applying some RTI drugs. The solution of this optimal control problem should be calculated by numerical methods. We have used a special discretization method, called AVK. For a detailed explanation of this method, see [11]. In AVK method, for solving the optimal control prob- Min,, , xu gxtuttdt (10) Subject to: ,,, , the following steps should be applied: Step 1. Form the total error function 1 E as: Exuxt fxtuttdt Step 2. Combine the total error function with the ob- jective functional (10) as follows: Min, , subjectto :, where nonnegative numbers 1 and 2 are two given weights and 12 Step 3. In order to control the error, add the following to the optimal control problem in Step 2. So the modified optimal control problem (10)-(11) can be formulated as: Min, , subjectto : Step 4. Calculate ut by minimizing the optimal control problem (13) using discretization method. For example, if the norm function ., is norm 1, then one can solve the following optimization problem: Min ,,,, subject to : ,, iiii iii xtx xt where 0f , 0i , () xt, () and 1 () ii xxt h for 0,1,., 1in and n Step 5. By the means of () ut for every i t, from (11), it is easy to find () t, for any i, 0,1,., 1 We use this technique to solve the control problem (8) with the objective functional (9). The parameters used in the basic control model (8) are exactly as (7). Assume that the treatment begins when CD4+T cells reach their minimum count, in the absence of drug. According to Figure 1, (129) 363T is the mini- mum count of CD4+T cells. So the treatment interval is [129, 1000] day. Also, note that by Figure 1, at t = 129, we have (129) 57I and (129) 28860V. Now, we divide [129, 1000] into n parts with length h. The discretization form of (14) is: Max 2 subjectto:,,0,01,0,1, 2,..., 363, 57,28860 iii i TsdT TVu TIVu in M. SHIRAZIAN ET AL. Copyright © 2010 SciRes. ICA Figure 1. The solution of optimal control problem (8)-(9), using AVK method. where assumed 12 The results of this optimization problem which ob- tained by LINGO and MATLAB softwares for 200n and 6 , are depicted in Figure 2. 7. Conclusions In this paper, the parameter of the basic model of HIV/ AIDS is estimated only by measurement of the CD4+T cells and the viral load count. Since the suggested mod- els for HIV, or infectious diseases like consumption, cholera, influenza and etc., have unknown parameters which should be estimated, one can use the proposed method in this paper to estimate the parameters of such One of the most important kinds of drug treatments for HIV immunotherapy is assumed. One can investigate the effects of other drugs, like Protease enzyme inhibitors in preventing AIDS progression. In these cases, one can use the described discretization method for solving such op- timal control problems. 8. References [1] D. Covert and D. Kirschner, “Revisiting Early Models of the Host-Pathogen Interactions in HIV Infection,” Com- ments Theoretical Biology, Vol. 5, No. 6, 2000, pp. 383- [2] M. A. Nowak and R.M. May, “Virus Dynamics: Mathe- matical Principles of Immunology and Virology,” Oxford University Press, New York, 2000. [3] A. S. Perelson and P. W. Nelson, “Mathematical Analysis of HIV-1 Dynamics in Vivo,” SIAM Review, Vol. 41, No. 1, 1999, pp. 3-44. [4] W.-Y. Tan and H. Wu, “Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with In- tervention,” World Scientific, Singapore, 2005. [5] M. A. Nowak and C. R. M. Bangham, “Population Dy- namics of Immune Responses to Persistent Viruses,” Sci- ence, Vol. 272, No. 5758, 1996, pp. 74-79. [6] X. Wei, S. K. Ghosh, M. E. Taylor, V. A. Johnson, E. A. Emini, P. Deutsch and J. D. Lifson, “Viral Dynamics in HIV-1 Infection”, Nature, Vol. 273, No. 6510, 1995, pp. [7] X. Xia, “Estimation of HIV/AIDS parameters,” Auto- matica, Vol. 39, No. 11, 2003, pp. 1983-1988. [8] R. Pattman, M. Snow, P. Handy, K. N. Sankar and B. Elawad, “Oxford Handbook of Genitourinary Medicine, HIV and AIDS,” Oxford University Press, USA, 2005. [9] X. Xia, “Modelling of HIV Infection: Vaccine Readiness, Drug Effectiveness and Therapeutical Failures,” Journal of Process Control, Vol. 17, No. 3, 2007, pp. 253-260. [10] K. R. Fister and S. Lenhart, “Optimizing Chemotherapy in an HIV Model,” Journal of Differential Equations, Vol. 1998, No. 32, 1998, pp. 1-12. [11] K. P. Badakhshan and A. V. Kamyad, “Numerical Solu- tion of Nonlinear Optimal Control Problems Using Non- linear Programming,” Applied Mathematics and Compu- tation, Vol. 187, No. 2, 2007, pp. 1511-1519.
{"url":"https://file.scirp.org/Html/2485.html","timestamp":"2024-11-14T10:28:34Z","content_type":"text/html","content_length":"107169","record_id":"<urn:uuid:13e239c9-dc48-46fc-91a0-6fe556941300>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00066.warc.gz"}
metric and imperial Metric measurements Metric measurements of area are widely used around the world due to their simplicity and consistency. The metric system employs the square meter (m²) as the base unit for measuring area. This unit is derived from the meter, which is the base unit for measuring length. The square meter represents the area of a square with sides measuring one meter. For larger areas, metric prefixes are used to indicate multiples or fractions of the square meter. For example, the hectare (ha) is commonly used to measure land areas. One hectare is equal to 10,000 square meters, or 100 meters by 100 meters. This unit is often used in agriculture and urban planning. For smaller areas, the square meter can be divided into smaller units using metric prefixes. The most commonly used subunits are square centimeters (cm²) which is equal to a ten thousandth of a square meter, and square millimeters (mm²) which is one millionth. These units are often used in scientific and engineering applications where precise measurements are required. Imperial / American measurements Imperial or American measurements of area are commonly used in the United States and a few other countries that have not adopted the metric system. These measurements are based on the foot, which is divided into inches, and the yard, which is composed of three feet. The most commonly used units of area in this system are square inches, square feet, square yards, and acres. The Old English word "acre" means field and it was generally considered the area that could be ploughed in a day using a yoke or oxen. The square inch is used for small-scale measurements, such as the size of a postage stamp or the area of a small object. The square foot is a larger unit and is commonly used to measure the area of rooms in houses or apartments. The square yard is even larger and is often used to measure the area of outdoor spaces, such as lawns or gardens. Finally, the acre is the largest unit of area in this system and is commonly used to measure the size of large plots of land, such as farms or estates. While the metric system is widely used around the world, Imperial or American measurements of area continue to be used in certain contexts, particularly in the United States and sometimes in the United Kingdom. Converting between metric and imperial area measurements To convert between metric and imperial area measurements, it is important to know the conversion factors for the specific units being used. For example, when converting square meters to square feet, the conversion factor is 1 square meter equals approximately 10.764 square feet. To convert square feet to square meters, the conversion factor is approximately 0.093 square meters per square foot. By multiplying or dividing the given area by the appropriate conversion factor, one can easily convert between the two systems. If you are converting the "square" versions of length units such as square meters to square feet, you can use the conversion factor for meters to feet and square this factor. For example, if I wish to convert 3 square meters to square feet and I know that there are 3.28 feet in a meter I can square this value to get the square meters to square feet conversion factor: 3.28² = 10.76. Now I can use this factor to convert 3 square meters into square feet: 3m² * 10.76 = 32.28ft². Alternatively, you can just use our area converters on this site or download our metric conversion app onto your
{"url":"https://www.metric-conversions.org/area-conversion.htm","timestamp":"2024-11-07T16:51:34Z","content_type":"text/html","content_length":"58471","record_id":"<urn:uuid:8faa2fde-8974-4eeb-8008-6fc907f14bf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00531.warc.gz"}
Pi - Brian R. Bondy What is Pi? Pi is the ratio of a circle’s circumference to its diameter. This magical number appears almost everywhere in every type of math. Pi cannot be written as the ratio of two numbers and is therefore an irrational number. Pi is also a transcendental number, which means that it is not the root of any polynomial equation with rational coefficients. Many people make a hobby of trying to memorize as much of Pi as they can. The current record is held by Akira Haraguchi, who has memorized Pi to over 83 thousand digits. Pi Clubs There are even clubs out there for people who have memorized part of Pi. You can join the Pi 100-club and the Pi 1000-club Download the digits of Pi, Phi, and e Download the first 10 thousand digits of Pi Download the first 1052 digits of Phi Download the first 956 digits of e PiMemorize - A free application for Windows I made a Windows application that you can use to help memorize Pi. The newer version 1.1 will also allow you to memorize Phi and e. The application is also known as PhiMemorize and eMemorize. Click here to download the Windows application PiMemorize - A free application for Windows Phone 7 I also made a similar Pi Memorize application for Windows Phone 7 (WP7). You can read about that WP7 app here.
{"url":"https://brianbondy.com/math/pi","timestamp":"2024-11-07T00:30:18Z","content_type":"text/html","content_length":"4984","record_id":"<urn:uuid:b3700679-1d68-46c6-87c6-ce27cf352af5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00812.warc.gz"}
Convert inches to meters Convert inch to meter Converting inches into meters (in to meters) is a piece of cake with our inches to meters length conversion calculator. Input your value in the box with inch selected and get your answer instantly. You can change the units by selecting from the dropdown list. The inch is represented by the letters in or a double straight quotation mark (“). For example, 5 inches is written as 5 in or 5”. Inch is one of the smaller units of length measurement in the British imperial and the United States customary system of measurement. This estimation is thought to have derived from the width of the human thumb. The word ‘inch’ was derived from the Roman word "uncia" (twelfth). In the past, there have been multiple standards regarding the exact measurement of an inch. However, with the adoption of the international yard during the 1950s and 1960s, the inch has been based on the metric system and defined as exactly 25.4 mm. So it is quite easy to convert centimeter to inch or in to cm. According to the international scales, there are 2.54 centimeters in an inch. So inversely, an inch is equivalent to 1/2.54 centimeters. For inch to foot conversion, note that one foot is equal to 12 inches. So, while doing ft to in conversion just divide the inch digits by 12 since an inch is equivalent to 1/12 feet. For in to yard conversion, an inch is equivalent to 1/36 yard. Inch was used in the Textus Roffensis in 1120. It is the only surviving manuscript from the Laws of Æthelberht (which dates to the early 7th century). It is the earliest known reference to the inch in England. King Edward II of England proclaimed in 1324 that one inch was equivalent to three barley grains put end to end lengthwise. But as the barley grains' size could not be fixed, there was a possibility of having error in standardizing inch by this defintion. Actually before the establishment of the international yard and pound, numerous definitions were used for inch. In 1912, US and UK followed different definitions of inch measurement. Carl Edvard Johansson then reached a compromise. He produced gauge blocks with a nominal size of 25.4 mm that were accurate to within a few parts per million of both official standards. Because of the popularity of Johansson's blocks, several countries began using 25.4 mm as the inch between 1930 and 1964. It became known as the industrial inch. And this has since become the new official measurement of the inch. Inch is mostly used in countries following the FPS (foot-pound-second) system of units. It is sometimes used for measuring small dimensions such as mobile display or electronic parts. The meter is represented by the symbol ‘m’. Meter is the base unit of length in the SI system (International System of Units). The term meter was originally derived from the Greek words "metreo" (verb form) and "metron" (noun form) meaning The length of the path that light takes in the vacuum in 1/299792458 seconds was the official definition of a meter from 1983 to 2019. As of 2019, the meter has been redefined as the length of the path traveled by light in a vacuum during a time interval of 1/299792458 of a second where a caesium hyperfine transition frequency defines the second. Hence, the difference between the old and new definitions of meter is the addition of the definition of the second. However, interestingly the definition of the meter fixes the speed of light in a vacuum to the exact value of 299,792,458 m/s. Meters can be changed to other SI units using prefix multipliers based on powers of ten. For example, a meter consists of 100 centimeters, and one thousand meters equals a kilometer. To convert m to in (meter to inches), you will need to multiplu the meter by 39.37 since a meter equals 39.37 inches. Let's talk about large unites and how meter looks in comparison to them. 1609 meters is equivalent to a statute mile. The French first used the unit meter in the late 1700s. On March 30, 1791, the French Academy of Sciences specified the length of a meter. However, it was officially defined in 1799. In 1983, the meter was defined as it is measured now. But in 2019, the addition of the definition of second was added to it to reducec uncertainity. The English word 'metre' was first used at the beginning of 1797. It was first measured with an interferometer by Albert A. Michelson in 1893. Meter is used for length, height, width and thickness measurement of objects in countries that follow SI units. However, in the states it is not used commonly. How to convert inches to meters? How many meters are in 1 inch? About 0.0254 meters. So to convert inches to meters, you have to divide inch value by 39.370079 or multiply it by 0.0254. 1 in = 0.0254 m 39.370079 in = 1 m Inch to Meter conversion table Inch Meter 0.01 in 0.000254 m 0.1 in 0.00254 m 1 in 0.0254 m 2 in 0.0508 m 3 in 0.0762 m 5 in 0.127 m 10 in 0.254 m 20 in 0.508 m 100 in 2.54 m 1000 in 25.4 m
{"url":"https://www.measurementof.com/length-converter/inches-to-meters","timestamp":"2024-11-01T22:13:30Z","content_type":"text/html","content_length":"475367","record_id":"<urn:uuid:85e337f3-1f9c-43d3-92a1-9836c1244668>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00285.warc.gz"}
UPSC ESE 2014 Electronics & Telecommunication Engineering ECE Syllabus Indian Engineering Services Examination (IES / ESE) Paper I Syllabus 1. Materials and Components Structure and properties of Electrical Engineering materials; Conductors, Semiconductors and Insulators, magnetic, Ferroelectric, Piezoelectric, Ceramic, Optical and Super-conducting materials. Passive components and characteristics Resistors, Capacitors and Inductors; Ferrities, Quartz crystal Ceramic resonators, Electromagnetic an Electromechanical components. 2. Physical Electronics, Electron Devices and ICs Electrons and holes in semiconductors, Carrier Statistics, Mechanism of current flow in a semiconductor, Hall effect; Junction theory; Different types of diodes and their characteristics; Bipolar Junction transistor; Field effect transistors; Power switching devices like SCRs, CTOs, power MOSFETs; Basics of ICs - bipolar, MOS and CMOS types; basic of Opto Electronics. 3. Signals and Systems Classification of signals and systems: System modelling in terms of differential and difference equations; State variable representation; Fourier series; Fourier representation; Fourier series; Fourier transforms and their application to system analysis; Laplace transforms and their application to system analysis; Convolution and superposition integrals and their applications; Z-transforms and their applications to the analysis and characterisation of discrete time systems; Random signals and probability, Correlation functions; Spectral density; Response of linear system to random 4. Network theory Network analysis techniques; Network theorems, transient response, steady state sinusoidal response; Network graphs and their applications in network analysis; Tellegen’s theorem. Two port networks; Z, Y, h and transmission parameters. Combination of two ports, analysis of common two ports. Network functions : parts of network functions, obtaining a network function from a given part. Transmission criteria : delay and rise time, Elmore’s and other definitions effect of cascading. Elements of network synthesis. 5. Electromagnetic Theory Analysis of electrostatic and magnetostatic fields; Laplace’s and Piossons’s equations; Boundary value problems and their solutions; Maxwell’s equations; application to wave propagation in bounded and unbounded media; Transmission lines : basic theory, standing waves, matching applications, misconstrue lines; Basics of wave guides and resonators; Elements of antenna theory. 6. Electronic Measurements and instrumentation Basic concepts, standards and error analysis; Measurements of basic electrical quantities and parameters; Electronic measuring instruments and their principles of working : analog and digital, comparison, characteristics, application. Transducers; Electronic measurements of non electrical quantities like temperature, pressure, humidity etc; basics of telemetry for industrial use. Paper II Syllabus 1. Analog Electronic Circuits Transistor biasing and stabilization. Small signal analysis. Power amplifiers. Frequency response. Wide banding techniques. Feedback amplifiers. Tuned amplifiers. Oscillators. Rectifiers and power supplies. Op Amp PLL, other linear integrated circuits and applications. Pulse shaping circuits and waveform generators. 2. Digital Electronic Circuits Transistor as a switching element; Boolean algebra, simplification of Boolean functions, Karnaguh map and applications; IC Logic gates and their characteristics; IC logic families : DTL, TTL, ECL, NMOS, PMOS and CMOS gates and their comparison; Combinational logic Circuits; Half adder, Full adder; Digital comparator; Multiplexer Demultiplexer; ROM and their applications. Flip flops. R-S, J.K, D and T flip-flops; Different types of counters and registers Waveform generators. A/D and D/A converters. Semiconductor memories. 3. Control Systems Transient and steady state response of control systems; Effect of feedback on stability and sensitivity; Root locus techniques; Frequency response analysis. Concepts of gain and phase margins: Constant-M and Constant-N Nichol’s Chart; Approximation of transient response from Constant-N Nichol’s Chart; Approximation of transient response from closed loop frequency response; Design of Control Systems, Compensators; Industrial controllers. 4. Communication Systems Basic information theory; Modulation and detection in analogue and digital systems; Sampling and data reconstructions; Quantization & coding; Time division and frequency division multiplexing; Equalization; Optical Communication : in free space & fiber optic; Propagation of signals oat HF, VHF, UHF and microwave frequency; Satellite Communication. 5. Microwave Engineering Microwave Tubes and solid state devices, Microwave generation and amplifiers, Waveguides and other Microwave Components and Circuits, Misconstrue circuits, Microwave Antennas, Microwave Measurements, Masers, lasers; Microwave propagation. Microwave Communication Systems terrestrial and Satellite based. 6. Computer Engineering Number Systems. Data representation; Programming; Elements of a high level programming language PASCAL/C; Use of basic data structures; Fundamentals of computer architecture; Processor design; Control unit design; Memory organisation, I/o System Organisation. Microprocessors : Architecture and instruction set of Microprocessors 8085 and 8086, Assembly language Programming. Microprocessor Based system design : typical examples. Personal computers and their typical uses. Disclaimer : The syllabus mentioned here is as mentioned in UPSC Advertisement of the respective examination. This website will not be responsible in case of any changes and discrepancies are observed in syllabus. Kindly refer the Examination Advertisement as published by UPSC for final information. Other Question Papers :
{"url":"https://www.mdurohtak.in/2014/03/upsc-ese-2014-ece-syllabus.html","timestamp":"2024-11-04T23:37:25Z","content_type":"application/xhtml+xml","content_length":"301113","record_id":"<urn:uuid:c4f0efa1-d0dc-4998-97bf-7e0fc5da9c3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00518.warc.gz"}
Go to the source code of this file. subroutine dorg2r (M, N, K, A, LDA, TAU, WORK, INFO) DORG2R generates all or part of the orthogonal matrix Q from a QR factorization determined by sgeqrf (unblocked algorithm). Function/Subroutine Documentation subroutine dorg2r ( integer M, integer N, integer K, double precision, dimension( lda, * ) A, integer LDA, double precision, dimension( * ) TAU, double precision, dimension( * ) WORK, integer INFO DORG2R generates all or part of the orthogonal matrix Q from a QR factorization determined by sgeqrf (unblocked algorithm). Download DORG2R + dependencies [TGZ] [ZIP] [TXT] DORG2R generates an m by n real matrix Q with orthonormal columns, which is defined as the first n columns of a product of k elementary reflectors of order m Q = H(1) H(2) . . . H(k) as returned by DGEQRF. M is INTEGER [in] M The number of rows of the matrix Q. M >= 0. N is INTEGER [in] N The number of columns of the matrix Q. M >= N >= 0. K is INTEGER [in] K The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. A is DOUBLE PRECISION array, dimension (LDA,N) On entry, the i-th column must contain the vector which defines the elementary reflector H(i), for i = 1,2,...,k, as [in,out] A returned by DGEQRF in the first k columns of its array argument A. On exit, the m-by-n matrix Q. LDA is INTEGER [in] LDA The first dimension of the array A. LDA >= max(1,M). TAU is DOUBLE PRECISION array, dimension (K) [in] TAU TAU(i) must contain the scalar factor of the elementary reflector H(i), as returned by DGEQRF. [out] WORK WORK is DOUBLE PRECISION array, dimension (N) INFO is INTEGER [out] INFO = 0: successful exit < 0: if INFO = -i, the i-th argument has an illegal value Univ. of Tennessee Univ. of California Berkeley Univ. of Colorado Denver NAG Ltd. September 2012 Definition at line 115 of file dorg2r.f.
{"url":"https://netlib.org/lapack/explore-html-3.4.2/df/db2/dorg2r_8f.html","timestamp":"2024-11-11T14:41:16Z","content_type":"application/xhtml+xml","content_length":"12556","record_id":"<urn:uuid:59e6f91d-7807-4f39-87a0-b9747d378329>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00882.warc.gz"}
The Influence of Temperature on the Bulk Settling of Cohesive Sediment in Still Water with the Lattice Boltzmann Method CCCC-FHDI Engineering Co., Ltd., Guangzhou 510230, China College of Harbor, Coastal and Offshore Engineering, Hohai University, Nanjing 210098, China State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin University, Tianjin 300072, China Key Laboratory of Coastal Disaster and Defence (Hohai University), Ministry of Education, Nanjing 210098, China Authors to whom correspondence should be addressed. Submission received: 9 April 2019 / Revised: 27 April 2019 / Accepted: 1 May 2019 / Published: 5 May 2019 Flocculation is very common and significant for cohesive sediment in coastal areas, and the influence of temperature on it cannot be neglected. The Lattice Boltzmann Method (LBM), combined with the extended Derjaguin‒Landau‒Verwey‒Overbeek (XDLVO) theory, which considers the micro-interaction forces between particles, was applied to simulate the settling and flocculation processes of cohesive sediment under various temperature conditions. The floc size, floc volume, suspended sediment concentration (SSC), and settling velocities were analyzed. The analyses revealed that with increasing temperature, both the mean floc diameter and floc volume grew, while the maximum floc diameter initially increased and then slightly decreased with its peak at 10 °C. During settling, the SSC change rate was exponentially related to the SSC, with an optimal fitting index of 0.3. The LBM sediment settling velocity was also compared with some formulas and physical model tests; the comparison results consistently demonstrated that the LBM was reasonable for modeling the bulk settling of cohesive sediment. Further discussions illustrated that the cohesive sediment is more difficult to flocculate at low temperatures due to the low aggregation frequency, while at high temperatures, some large flocs broke easily due to the effect of the short-distance force and macro force. 1. Introduction Cohesive sediment is an important type of sediment in the near shore zone. As a result of its own physical properties and external environments, cohesive sediment can form flocs easily. Temperature is one of many factors that influence flocculation [ ]. Particularly in middle- and high-latitude areas, the water temperature varies drastically: it may be close to 0 °C in winter and exceed 30 °C in summer [ Thus far, much research has been conducted to study the influence of temperature on cohesive sediment. Lau [ ] studied temperature’s effects on the settling velocity and deposition of cohesive sediment in an annular channel of distilled water that was housed in a temperature-controlled chamber. He claimed that, as temperature increased, the repulsive forces between particles increased while attractive forces remained the same, which resulted in a lower degree of deposition and a smaller effective settling velocity. However, his results were completely opposite to the still water experimental conclusions of Owen [ ], whose test was conducted in a settling tube in still water, and showed that the influence of temperature on water viscosity was the main cause for sediment velocity variation. Jiang et al. [ ] conducted a few indoor cylinder tests in flowing water, salt water and quiescent water, with clay samples (mean diameter of 17 μm) taken from the Changjiang Estuary, to study the influence of temperature on mud particle deposition and flocculation. Jiang’s study results supported the viewpoint that temperature was the major influencing factor on mud deposition. For water temperatures less than 25 °C, the deposition was limited as particles were preferentially deposited as single grains, while when it exceeded 25 °C, flocculation occurred, and the settling rate increased rapidly. Wan et al. [ ] conducted some tests in temperature-controllable and autostirring settling columns, and the results revealed that increasing the temperature had a positive effect on floc velocity, and the velocity with higher suspended sediment concentration (SSC) was much greater than that with lower SSC, whereas this impact was negligible when the SSC was over 8 g/L. Other studies on this subject via field observations have also been conducted and were mainly reflected by the day‒night or seasonal variation [ ]. Most of the studies deemed that the temperature affected the sediment properties of settling, aggregation, or deposition through changing the organism. Dickhudt [ ] measured the sediment erodibility in York River from April 2006 to October 2007, where the erodibility was low in summer and fall and high in winter and spring, with the highest erosion rate in May. Lee et al. [ ] analyzed some clay with a primary size of approximately 3 μm along the Belgian coastal zone and revealed that a low turbulent shear and/or temperature increase concurred with increasing median diameters. Andersen and Pejrup [ ] observed a marked seasonality with respect to equivalent settling diameter in situ at the Lister Dyb tidal area of Kongsmark, Denmark. They found that small diameters were observed during winter and early spring, whereas higher values were seen in other seasons. Xia et al. [ ] sampled suspended sediment in the Pearl River Estuary, China in January and July, and in their experiments the settling velocities and effective densities differed extensively between the two seasons. Guo and He [ ] also focused on in situ suspended sediment flocculation in the Yangtze River, and their results showed some differences in settling velocities owing to temperature but could not avoid other factors such as salinity and SSC. In addition, several researchers [ ] have studied the temperature effect through theoretical analysis. They can be summarized into two main viewpoints: One explanation was that the rising temperature weakened the short-distance force between particles; and the other held a viewpoint that the temperature mainly effected flocculation via water viscosity, that is, the increase in temperature decreases the water viscosity (1.00 × 10 at 20 °C, 1.31 × 10 at 10 °C and 0.80 × 10 at 30 °C) and accelerates the sediment settling velocity according to Stokes’s Law [ ]. Most properties of the flocculation process can usually be deduced from the flocs’ size and their effective density. With the increasing development of computer technology, mathematical models have emerged as one of the most important methods to study the settlement and flocculation of cohesive sediment, and some researchers have relied on the theory of fractal dimensions to explain the relationship between floc size and floc effective density [ ] and conducted other relevant works [ ] by computer. However, most of the models did not take into account the effect of temperature. The mathematical model can eliminate the interference of other factors, and, hence, the influential mechanism can be deeply studied. Qiao et al. [ ] studied the mechanism of temperature effects on the flocculation process of two cohesive sediment particles via the Lattice Boltzmann Method (LBM). Most studies on the temperature effect on cohesive sediment settling and flocculation were conducted based on field observations, physical model tests, or theory, whereas works using mathematical models are still inadequate, especially for the effect of temperature on bulk settling. Also, the temperature’s individual influence mechanism remains unclear. The LBM is widely used for studying the settling process of suspension systems [ ]. Therefore, as an extension of Qiao et al.’s [ ] work and as the innovation of the present study, the LBM is used to simulate the bulk settling and flocculation process of cohesive sediment under different temperature conditions. The simulation results were analyzed to reveal the influence of temperature and its mechanism. The paper is organized as follows: the numerical approach is described in Section 2 ; the computational conditions are obtained in Section 3 ; in Section 4 , the effects of the temperature on the flocculation and settling of illite are indicated; a discussion and the shortcomings of this research are included in Section 5 ; and we present our conclusions in Section 6 2. Methods and Model 2.1. Lattice Boltzmann Method The LBM is a relatively new numerical technique for modeling physical system responses. The LBM originated from Lattice Gas Automata. The basic concept of the LBM is to represent fluid as a particle distribution function located at each lattice node. Fluid particles move to neighboring nodes at discrete time steps, colliding with other fluid particles. In the LBM approximation, the fluid is described by a density distribution function ), which describes the particle status at a lattice location at time with the discrete velocity . The Boltzmann equation is used to solve the collision-induced evolution of the fluid particle, and the equation can be written as: $∂ f i ∂ t + e i ⋅ ∇ f i ( x , t ) = Ω i ( f i ) ,$ where the subscript represents the directions in which the particle may move. The D3Q19 topology [ ] is used in this study, which is a three-dimensional cubic lattice with 19 velocity directions, as illustrated in Figure 1 . Ω ) is the collision operator; in Nguyen and Ladd’s model [ ], Ω ) can be written as follows: $Ω i ( f i ) = Ω i ( f i e q ) + ∑ j l i j f j n e q ,$ is the local equilibrium function, and is the non-equilibrium function, i.e., $f j n e q = f j − f j e q$ . The hydrodynamic parameters, such as the mass density and momentum , are the function of the distribution function and discrete velocity , which can be written as follows: $ρ = ∑ i f i ρ u = ∑ i f i e i .$ The density and momentum should satisfy the mass conservation and momentum conservation. A suitable equilibrium distribution form of the D3Q19 topology can be written as follows [ $f i e q = a e i [ ρ + e i ⋅ ρ u c s 2 + ρ u u : ( e i e i − c s 2 I ) 2 c s 4 ] ,$ $c s = c l 2 / 3$ is the speed of sound; $c l = Δ x / Δ t$ is the lattice speed, in which Δ is the lattice space size and Δ is the time step; and equals 0 ( = 0), 1( = 1, 2, …, 6) and = 7, 8, …, 18). The coefficients of the three speeds $a e i$ are 0 ( = 0), 1/18 ( = 1, 2, …, 6) and 1/36 ( = 7, 8, …, 18). are the matrix elements of the linearized collision operator, which must satisfy the following eigenvalue equations: $∑ i l i j = 0 , ∑ i e i l i j = 0 , ∑ i e i e i ¯ l i j = λ e j e j ¯ , ∑ i e i 2 l i j = λ v e j 2 ,$ $e i e i ¯$ is the traceless part of $e i e i$ . The first two equations result from the conservation of mass and momentum, and the last two equations describe the isotropic relaxation of the stress tensor. The eigenvalues are related to the shear viscosity and bulk viscosity , respectively, which are within the range of −2 < < 0, in which $η = − ρ c s 2 ( 1 / λ + 1 / 2 )$ $η ν = − ρ c s 2 [ 2 / ( 3 λ ν ) + 1 / 3 ]$ A three-parameter collision operator is used in the present study. The post-collision distribution can be written in the same form as $f i * = a e i [ ρ + e i ⋅ ρ u c s 2 + ( ρ u u + ∏ n e q , * ) : ( e i e i − c s 2 I ) 2 c s 4 ] ,$ $∏ n e q , * = ( 1 + λ ) ∏ n e q ¯ + 1 3 ( 1 + λ v ) ( ∏ n e q : I ) I$ $∏ n e q = ∏ − ∏ e q$ is the non-equilibrium second moment, in which $∏ e q = ∑ i e i e i f i e q = ρ c s 2 I + ρ u u$ Considering an externally imposed force density , the time evolution of the LBM includes an additional contribution $F i ( x , t )$ $f i ( x + e i Δ t , t + Δ t ) = f i ( x , t ) + Ω i [ f ( x , t ) ] + F i ( x , t ) .$ The LBM is well-suited for the specific problem of modeling solid particle suspensions because of its ability to solve particles movement with arbitrary shapes and complex geometries [ 2.2. The Extended Derjaguin–Landau–Verwey–Overbeek Theory The extended Derjaguin–Landau–Verwey–Overbeek (XDLVO) theory on the interactions of particles can be used to explain the flocculation of cohesive sediment particles [ ]. According to this theory, there are three short-distance forces acting between particles in aqueous environments ( Figure 2 ), i.e., the Lifshitz‒van der Waals attractive force , the electrostatic double-layer repulsive force , and the Lewis acid‒base force . Each of them is the negative derivative of the corresponding potential to the distance. Among them, temperature only affects the electrostatic double-layer repulsive force by changing the Debye . For more details, please refer to [ The electrostatic double-layer repulsive force between spherical particles with radii of can be written as follows [ $F EL i − j = 4 R 1 R 2 R 1 + R 2 π ε 0 ε r κ ψ 0 2 exp ( − κ h i j ) ,$ − ( ) is the net distance between spheres, and is the distance between sphere centers; is the dielectric permittivity in a vacuum with a value of 8.854 × 10 is the relative dielectric constant, and for water, it is 78.5; is the surface potential of particles, which is very sensitive to salinity and pH but not sensitive to temperature when the temperature is below approximately 150 °C [ is the Debye length, related to temperature, cation valence, and ion concentration, which can be written as follows: $κ = ( 2 e 0 2 N A c z / ( ε 0 ε r k T ) ) 1 / 2 ,$ $e 0$ is the element charge 1.6 × 10 $N A$ is Avogadro’s number 6.022 × 10 is the Boltzmann constant 1.38 × 10 is the absolute temperature with a unit of K; is the cation concentration with a unit of mol/L; and represents the cation valence and is dimensionless. 2.3. Criterion Distance of Flocculation Yang et al. [ ] mentioned that the aggregation sign of cohesive sediment should be the contact of the sliding surface. They suggested that the anions are distributed on the surface of particles, and adsorb the cations in the media. The zone with cations can be separated into two layers, the inner layer with a high density of cations, which is called the absorbed layer, and the outer layer with less cations, which is called the diffuse layer, as illustrated in Figure 3 . Yang et al. [ ] consider that when the surface distance is less than twice the thickness of the slipping layer, the particles are wrapped by a common slipping layer, and they form a new or enlarge an old floc. The electric potential decays exponentially at large distance with decay length given by the Debye length , with on the particle surface and at the distance . Both can be measured in experiments, thus the slipping layer thickness can be calculated as follows: $δ = 1 κ ln 1 + e x p ( z e ψ T / 2 k ) 1 − e x p ( z e ψ T / 2 k ) ,$ is the slipping layer thickness when the potential equals the zeta-potential . From Equations (9) and (10), a rising temperature will thicken . Thus, twice the value of was taken as the criterion distance for flocculation. 3. Computational Conditions Four cases were set up to study the influence of temperature on the bulk settlement of cohesive sediment. The temperatures were 5 °C, 10 °C, 20 °C, and 30 °C in the four cases. The domain calculation area was 0.288 mm × 6 mm × 0.240 mm, and the space precision was 2 μm, thus a total of 144 lu × 3000 lu × 120 lu (lattice unit). The surroundings were defined as periodic boundaries, and the upper and lower boundaries were set as solid no-slip boundaries. The initial sediment concentration was set to 1.3 kg/m (volume concentration of 0.049%). Eight hundred sediment particles with diameters from 5 μm to 10 μm were scatted randomly in the whole calculation area. The calculation time step was set as 10 s, and the total time was 20 s. In all cases, the cation was selected as Na , with a valence of +1 and a concentration of 0.085 mol/L or salinity of 5 ppt. Illite was considered to be the clay mineral with a surface potential of −27.22 mV [ ] as illite is widely distributed and its chemical and physical properties that needed in this study are easy to obtain as they are studied more extensively. The sediment density was = 2650 kg/m . The water density was = 1000 kg/m , without considering the influence of salinity and temperature as the error of the density at the values adopted will be less than 4% according to the sea water state equation proposed by UNESCO [ ]. The other parameters are listed in Table 1 4. Results 4.1. Floc Size and Floc Volume Floc size and volume are two of the floc properties that are easy to obtain from the numerical results. Figure 4 a shows the time series of the maximum and mean floc sizes. The final results of them and the floc volume are shown in Figure 4 b. Floc volume is a sum of the volumes of all the flocs, and indicates the content of floc in the suspended column. During the same settling period, both the floc sizes and floc volume increased with increasing temperature. At the end of the simulation, in the cases of 5 °C, 10 °C, 20 °C, and 30 °C, the maximum floc sizes were 20.0 μm, 24.0 μm, 22.3 μm, and 23.3 μm, respectively, with the peak value at 10 °C. In the cases of 20 °C and 30 °C, the maximum floc sizes fluctuated slightly in the latter phase ( Figure 4 The mean floc size increased slightly but more stably than the maximum size did, from 15.2 μm at 5 °C to 15.5 μm at 10 °C, 15.8 μm at 20 °C, and 15.9 μm at 30 °C. The floc volume increased more obviously than the floc size. They were 3.8% at 5 °C, 4.0% at 10 °C, 4.3% at 20 °C, and 5.5% at 30 °C ( Figure 4 4.2. Settling and Flocculation Process The microscopic process of settling and flocculation might explain the preceding phenomenon. This process can be easily given out via the numerical simulations. Three particles numbered #681, #689, and #721 were selected to illustrate the settling process. In all four cases, they had formed or almost formed a floc. The corresponding diameters for particles #681, #689, and #721 were 6.00 μm, 7.16 μm, and 9.36 μm, respectively. The distance between the primary particles and their velocities at different temperatures are plotted in Figure 5 At the beginning, the large particle falling fast was on top of the small one with a low speed, resulting in a diminishing net distance between them before they met, as is common in all cases. At a low temperature (5 °C, Figure 5 a), the three primary particles cannot get close enough to form a new floc, and fell individually. The velocities of the particles increased when they collided (12.5 s to 17.5 s), but finally they readjusted to their initial speed with the increasing distance between each other. At a medium temperature (10 °C and 20 °C, Figure 5 a,b), the three particles formed a floc and settled together with a common speed higher than that of each primary particle; however, the time when the floc formed was earlier at 20 °C than that at 10 °C. In the case of high temperature (30 °C, Figure 5 d), the small particle (#681) was first captured by the medium one (#689), forming an unstable floc, which was destroyed by the collision of the large particle (#721), and then flocculated with the large particle, leaving the medium particle settling individually with a low speed. 4.3. Suspended Sediment Concentration Figure 6 illustrates the variation in SSC for each case every 5 s. It is shown that, in all four cases, SSCs above 4.0 mm continued to decline, while those at the bottom layers were increasing. They remained relatively stable in the 4.0–5.0 mm layer, but a small difference can be seen between the cases: the SSC in this layer went up slightly in the 5 °C case, but had a slight reduction in the 30 °C case, and stayed nearly constant in the other two cases. Chen and Shao [ ] revealed that the SSC change rate in still water was generally fitted to a first-order equation as follows: is the attenuation coefficient, in which a large value indicates a faster incline. The time series curves of relative SSC (ratio of ) at time to initial ) of water depths shallower than 4.5 mm, in which layer the SSC is somewhat unchanged as shown in Figure 6 , are draw in Figure 7 a. As shown in Figure 7 a, the fitting results of Equation (11) (the dash lines) were less superior in these cases, therefore, some trials of indexes of in the right side had been done. As a result, 0.3 was the best fitted index (the solid lines in Figure 7 a), thus yielding the following formula: $d C d t = − k c C 0.3 .$ The time when the SSC reduced to half of the initial SSC was taken as the half-life settlement period . From Equation (12), the half-life period can be written as follows: $t 1 / 2 = − c 0 0.7 0.7 k c ( ( 1 / 2 ) 0.7 − 1 ) .$ and half-life period can be solved from the fitting curve in Figure 7 a and they are shown in Figure 7 b. The temperature increase had a positive effect on and a negative effect on indicating a faster change in SSC at high temperatures than at low temperatures. 4.4. Sediment Settling Velocity In numerical simulations, the bulk velocities can be statistically analyzed from microscope by averaging the velocities of individual particles and flocs, which are the directional output of the simulation, with a weight of each volume; however, they are commonly difficult to measure directly in physical experiments. As a result, the bulk velocity is often calculated from the SSC half-life and the settling distance [ ]. The bulk velocity can be expressed simply as follows: $u ( h ) ¯ = H / ( 2 t 1 / 2 ) ,$ $u ( H ) ¯$ is the time-averaged (from 0 to ) bulk settling velocity at water depth , in which depth the SSC changes slightly during settling, and that is why 4.5 mm (4.0–5.0 mm) is chosen in Figure 7 a. The derivation of Equation (14) can refer to You’s work [ ]. The water depth and time in Equation (14) can be obtained easily through macroscopic observation. Table 2 , velocity 1 summarized the bulk velocity calculated from Equation (14) with the parameters in Figure 7 b. The time-averaged statistical velocities during the period of 0– in the 4.5 mm layer (4.0–5.0 mm) were listed in Table 2 , as velocity 2. The results of the two methods were almost identical, with a small relative error of approximately 2%. Equation (14) can be taken as the connection between the microscopic statistical methods and physical test methods. The floc velocities are also compared with the data of Xia et al. [ ], Khelifa and Hill [ ], Guo and He [ ], Dyer and Manning [ ], and Manning et al. [ ] and the formulas of Manning et al. [ ], Khelifa and Hill [ ] and Winterwerp [ ] in Figure 8 a. The primary particle diameter in the simulations ranged from 5 μm to 10 μm, and the simulated floc settling velocities lie between Winterwerp’s [ ] lines with primary particle sizes of 1 μm and 20 μm, and most of them between Khelifa and Hill’s [ ] lines with the same primary sizes. They also are between Manning et al.’s [ ] lines with an effective density of 160 kg/m and 1600 kg/m , but closer to the line of 1600 kg/m , probably due to the small size of the flocs, so they have a larger effective density [ ]. The simulation results overlap with the in situ observation data of Xia et al. [ ] for July 1999 and January 2000. It should be noted that the simulation velocities are each floc’s speed, while that of other studies are bulk velocity, or statistical macro data of suspended Figure 8 b illustrates the variation of average floc velocities in different temperature conditions, including the observational data from Xia et al. [ ] and Guo and He [ ], the experimental data from Wan et al. [ ], and the simulation results of LBM. The settling speed differs greatly owing to factors other than temperature, such as SSC, labeled in the legend; however, all the studies reveal a trend of the settling velocity increasing with the water temperature. The velocity change rate with temperature of this numerical result is 0.0382 mm/s/°C—larger than the rate of Wan et al.’s [ ] physical experiment results of low SSC (0.0248 mm/s/°C), smaller than that of Wan et al.’s [ ] high SSC (0.0513 mm/s/°C) and Xia et al.’s [ ] observation results (0.1031 mm/s/°C), and very close to Guo and He’s [ ] observation data (0.0364 mm/s/°C). 5. Discussion The above results indicate that the LBM is reasonable for using in modeling bulk settling velocity and that some collisions led to aggregation, while others did not, or, on the contrary, to disaggregation. On referring to Kim and Stolzenbach’s work [ ], the ratio of the global collision number to the total particle number is defined as the collision frequency , the ratio of collisions that created new flocs or enlarged original flocs to the total collisions is the capture frequency , and the product of the two ratios is the aggregation frequency . The three parameters were analyzed and are illustrated in Figure 9 As shown in Figure 9 a, with increasing temperature, continued to grow up, while began to decline slightly after its peak at about 20 °C after an initial increase. Sterling et al. [ ] stated that the collision frequency was due to three factors: Brownian motion, turbulence shear, and differential settling. In this simulation, the sediment particles were much larger than the molecule and thus the Brownian motion could be ignored; the turbulence effect was omissible in still water; therefore, the main factor was the differential settling. Figure 9 b showed a positive correlation between and settling velocity, which was consistent with the viewpoint of Sterling et al. [ in the simulations ranges from 2.1% to 4.2%, very close to the results of Kim and Stolzenbach [ ] (from 2.96% to 3.20%). The potential reasons for the difference between the two results might lie in Kim and Stolzenbach’s [ ] presumption that their model neglected the repulsive colloidal interaction, which is very sensitive to temperature (in Section 2.2 ) but was not mentioned in their study, so they might not take the temperature into account in their work. When the distance between two particles was less than 25 nm, the three short-distance forces were the most important. Among the forces, only the electrostatic double-layer repulsion force is related to the temperature. From Figure 10 a, the repulsive force was higher at a high temperature or a short distance between two particles. Additionally, the rising temperature thickened the slipping layer ( Table 1 ). Taken together, at a distance of twice the slipping layer thickness (the distance value of the left point of each curve in Figure 10 ), the particles must overcome a stronger repulsion force before flocculation at high temperature than at low temperature. Thus, the higher the temperature, the greater the capture frequency, as noted by Qiao et al.’s simulation of two primary particles [ However, the situation was different when considering multiple particles. Also, taking the above three particles in the 30 °C case as an example, when the large particle (#721) met the floc formed by the other two particles, this particle seized the small one (#681) and a new floc was formed, with particle #689 settling individually. This collision produced a new floc, but disaggregated an old floc, giving no change in the number of flocs and particles; thus, remained unchanged, while in the 20 °C case this collision made the larger as it enlarged the floc. From the aspect of force, when a third particle was involved, the difference between the two double-layer forces acting on particle #681, , was the same in magnitude as the particle gravity ( Figure 10 b), and the macroscopic force, including the gravity and hydrodynamic force, was highlighted. The higher the temperature, the more obvious the macro forces were. This phenomenon reveals that, at high temperature, few large flocs broke into small ones because of collision, resulting in a fluctuation of time series and a decrease in the maximum floc size, as illustrated in Figure 4 a. However, the mean floc size and total floc volume did not decrease (in Figure 4 b) because the large flocs that were collapsed by collision had a very small proportion and were only decreased in size but rarely dispersed into primary particles. A detailed force analysis of a large floc is more convincing but more complex when it is comprised of more primary particles because of its continuous changing spatial statuses and complicated interactive forces. Although the analysis of the floc of three primary particles is limited, it illustrates the flocculation formation and breakage at different temperatures. From the analysis, it is certain that, at high temperature, the macroscopic force becomes more obvious due to the small difference of short-distance forces between particles. The above analysis qualitatively explains why the fracture frequency increases at high temperatures, which is the main cause of the decrease in η[cap]. However, the reason is different for the low-temperature case, where the large repulsion force makes it more difficult for the particles to get close enough to form flocs, resulting in a small η[cap]. The above results could be explained by the present LBM model, but there is still an obvious shortage in that the flocs in each case were not sufficient in quantity and size. This shortage is because the LBM requires a large number of grids to describe the solid‒fluid boundary and micro properties of each sediment particle; thus, the computational cost of the LBM is extremely high. Each case in this study took three months on a supercomputer with 288 CPUs. A higher computational cost is expected if the sediment bulk properties are described in more detail. That is the reason for the less convincing results of the present study. Therefore, subsequent studies will involve further optimizing the calculation method and case design for better simulation results. In a natural environment, changes in other factors, such as water salinity, turbulence, and SSC, are unavoidable, so their influences on flocculation will be further studied by LBM in the future. 6. Conclusions The bulk settling and flocculation process of cohesive sediment containing primary particles of sizes of 510.0 μm in still water at various temperatures was simulated via the LBM. The floc size and volume were analyzed, and the effect of temperature on bulk settling was studied based on the macroscopic SSC change and microscopic statistics of particle and floc settling velocity. The difference of the properties at different temperatures was explained by the formation of flocs, aggregation frequency, and forces between particles. The following conclusions were obtained: The mean floc size and floc volume increased with increasing temperature. The maximum floc size initially increased and then decreased slightly with its peak at 10 °C and trough at 5 °C. The floc was not easily formed at low temperature but was unstable and cracked easily at high temperature. The aggregation process, aggregation frequency and forces between particles can be explained by the above. At low temperatures, the collision frequency η[glo] and capture frequency η[cap] were low, which meant the floc was not easily formed; at high temperatures, the large flocs were easily broken as the weighting of the macro force increased to have the same magnitude as the short-distance force. During settling, the SSC time series curves fit well with the equation $d C / d t = − k c C 0.3$, from which the settlement half-life period and bulk setting velocity were deduced. Increasing the temperature had a negative effect on the settlement half-life, indicating a faster SSC incline at high temperatures than at low temperatures. The macroscopic bulk velocity derived from the SSC change agreed well with the microscopic statistical settling velocity of each particle and floc. Both velocities agreed well with the existing physical test results, on-site observation data, and formulas, indicating that the LBM is a reasonable choice for simulating cohesive sediment bulk settling. Author Contributions Conceptualization, J.-f.Z. and Q.-h.Z.; methodology, J.-f.Z. and Q.-h.Z.; software, J.-f.Z.; validation, G.-q.Q. and J.-f.Z.; formal analysis, G.-q.Q., J.-f.Z. and W.-b.F.; investigation, G.-q.Q. and J.-f.Z.; resources, G.-q.Q., J.-f.Z. and Q.-h.Z.; data curation, G.-q.Q.; writing—original draft preparation, G.-q.Q.; writing—review and editing, G.-q.Q., J.-f.Z. and W.-b.F.; supervision, J.-f.Z., Q.-h.Z., X.F., W.-b.F. and Y.-c.L.; project administration, X.F., W.-b.F. and Y.-c.L.; funding acquisition, J.-f.Z., Q.-h.Z., X.F. and W.-b.F. This study was funded by the National Key Research and Development Program of China (Grant No. 2017YFC1404200), the National Natural Science Foundation of China (Grant No. 51579171, 51679161 & 51709091), the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (Grant No. 51621092), the Open Funds from State Key Laboratory of Satellite Ocean Environment Dynamics (SOED) (No. SOED1609), Jiangsu Natural Science Foundation of China or Natural Science Foundation of Jiangsu Province (No. BK20170874), and the Fundamental Research Funds for the Central Universities (No. 2017B00514). We thank the National Supercomputer Center in Tianjin for their supply of the CPUs and A.J.C. Ladd for his original LBM code. Conflicts of Interest The authors declare no conflict of interest. 1. Winterwerp, J.C. On the flocculation and settling velocity of estuarine mud. Cont. Shelf Res. 2002, 22, 1339–1360. [Google Scholar] [CrossRef] 2. Mietta, F.; Chassagne, C.; Manning, A.J.; Winterwerp, J.C. Influence of shear rate, organic matter content, ph and salinity on mud flocculation. Ocean Dyn. 2009, 59, 751–763. [Google Scholar] [ 3. Ha, H.K.; Maa, J.P.Y. Effects of suspended sediment concentration and turbulence on settling velocity of cohesive sediment. Geosci. J. 2010, 14, 163–171. [Google Scholar] [CrossRef] 4. Jiang, G.; Zhou, H.; Ruan, W.; Yao, S.; Zhang, Z.; Zhao, L. Influence of water temperature on mud particle deposition—Laboratory tests. J. Coast. Res. 2004, 20, 59–66. [Google Scholar] 5. Etemad-shahidi, A.; Shahkolahi, A.; Liu, W.C. Modeling of hydrodynamics and cohesive sediment processes in an estuarine system: Study case in Danshui river. Environ. Model. Assess. 2010, 15, 261–271. [Google Scholar] [CrossRef] 6. Wan, Y.; Wu, H.; Roelvink, D.; Gu, F. Experimental study on fall velocity of fine sediment in the Yangtze Estuary, China. Ocean Eng. 2015, 103, 180–187. [Google Scholar] [CrossRef] 7. Dickhudt, P.J.; Friedrichs, C.T.; Schaffner, L.C.; Sanford, L.P. Spatial and temporal variation in cohesive sediment erodibility in the York River estuary, eastern USA: A biologically influenced equilibrium modified by seasonal deposition. Mar. Geol. 2009, 267, 128–140. [Google Scholar] [CrossRef] 8. Lau, Y.L. Temperature effect on settling velocity and deposition of cohesive sediments. J. Hydraul. Res. 1994, 32, 41–51. [Google Scholar] [CrossRef] 9. Owen, M.W. The Effect of Temperature on the Settling Velocities of an Estuary Mud; Hydraulics Research Station Report, No. INT 106; Hydraulics Research Station: Wallingford, UK, 1972. [Google 10. Lee, B.J.; Fettweis, M.; Toorman, E.; Molz, F.J. Multimodality of a particle size distribution of cohesive suspended particulate matters in a coastal zone. J. Geophys. Res. Oceans. 2012, 117, C03014. [Google Scholar] [CrossRef] 11. Andersen, T.J.; Pejrup, M. Biological mediation of the settling velocity of bed material eroded from an intertidal mudflat, the Danish Wadden Sea. Estuar. Coast. Shelf Sci. 2002, 54, 737–745. [ Google Scholar] [CrossRef] 12. Xia, X.M.; Li, Y.; Yang, H.; Wu, C.Y.; Sing, T.H.; Pong, H.K. Observations on the size and settling velocity distributions of suspended sediment in the Pearl River Estuary, China. Cont. Shelf Res. 2004, 24, 1809–1826. [Google Scholar] [CrossRef] 13. Guo, L.; He, Q. Freshwater flocculation of suspended sediments in the Yangtze River, china. Ocean Dyn. 2011, 61, 371–386. [Google Scholar] [CrossRef] 14. Winterwerp, J.C.; Van Kesteren, W.G.M. Introduction to the Physics of Cohesive Sediment in the Marine Environment; Elsevier: Amsterdam, The Netherlands, 2004; ISBN 0-444-51553-4. [Google Scholar] 15. Grabowski, R.C.; Droppo, I.G.; Wharton, G. Erodibility of cohesive sediment: The importance of sediment properties. Earth Sci. Rev. 2011, 105, 101–120. [Google Scholar] [CrossRef] 16. Khelifa, A.; Hill, P.S. Models for effective density and settling velocity of flocs. J. Hydraul. Res. 2006, 44, 390–401. [Google Scholar] [CrossRef] 17. Manning, A.J.; Dyer, K.R. Mass settling flux of fine sediments in Northern European estuaries: Measurements and predictions. Mar. Geol. 2007, 245, 107–122. [Google Scholar] [CrossRef] 18. Baugh, J.V.; Manning, A.J. An assessment of a new settling velocity parameterisation for cohesive sediment transport modeling. Cont. Shelf Res. 2007, 27, 1835–1855. [Google Scholar] [CrossRef] 19. Markussen, T.N.; Andersen, T.J. A simple method for calculating in situ floc settling velocities based on effective density functions. Mar. Geol. 2013, 344, 10–18. [Google Scholar] [CrossRef] 20. Zhang, J.; Zhang, Q. Hydrodynamics of fractal flocs during settling. J. Hydrodyn. Ser. B 2009, 21, 347–351. [Google Scholar] [CrossRef] 21. Tang, S.; Preece, J.M.; McFarlane, C.M.; Zhang, Z. Fractal morphology and breakage of DLCA and RLCA aggregates. J. Colloid Interface Sci. 2000, 221, 114–123. [Google Scholar] [CrossRef] 22. Weber-Shirk, M.L.; Lion, L.W. Flocculation model and collision potential for reactors with flows characterized by high Peclet numbers. Water Res. 2010, 44, 5180–5187. [Google Scholar] [CrossRef] 23. Maerz, J.; Verney, R.; Kai, W.; Feudel, U. Modeling flocculation processes: Intercomparison of a size class-based model and a distribution-based model. Cont. Shelf Res. 2011, 31, S84–S93. [Google Scholar] [CrossRef] 24. Yang, Z.; Yang, H.; Jiang, Z.; Huang, X.; Li, H.; Li, A.; Cheng, R. A new method for calculation of flocculation kinetics combining Smoluchowski model with fractal theory. Colloids Surf. A Physicochem. Eng. Asp. 2013, 423, 11–19. [Google Scholar] [CrossRef] 25. Zhang, J.F.; Maa, P.Y.; Zhang, Q.H.; Shen, X.T. Direct numerical simulations of collision efficiency of cohesive sediments. Estuar. Coast. Shelf Sci. 2016, 178, 92–100. [Google Scholar] [CrossRef 26. Qiao, G.Q.; Zhang, J.F.; Zhang, Q.H. Study on the influence of temperature to cohesive sediment flocculation. J. Sediment Res. 2017, 42, 35–40. (In Chinese) [Google Scholar] 27. Ladd, A.J.C. Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 1. Theoretical foundation. J. Fluid Mech. 1994, 271, 285–310. [Google Scholar] [CrossRef] 28. Ladd, A.J.C. Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 2. Numerical results. J. Fluid Mech. 1994, 271, 311–339. [Google Scholar] [CrossRef] 29. Ladd, A.J.C.; Verberg, R. Lattice-Boltzmann simulations of particle-fluid suspensions. J. Stat. Phys. 2001, 104, 1191–1251. [Google Scholar] [CrossRef] 30. Nguyen, N.Q.; Ladd, A.J. Lubrication corrections for Lattice-Boltzmann simulations of particle suspensions. Phys. Rev. E. 2002, 66, 046708. [Google Scholar] [CrossRef] 31. Zhang, J.F.; Zhang, Q.H. Lattice Boltzmann simulation of the flocculation process of cohesive sediment due to differential settling. Cont. Shelf Res. 2009, 31, S94–S105. [Google Scholar] [ 32. Zhang, J.F.; Zhang, Q.H.; Qiao, G.Q. A lattice Boltzmann model for the non-equilibrium flocculation of cohesive sediments in turbulent flow. Comput. Math. Appl. 2014, 67, 381–392. [Google Scholar 33. Van Oss, C.J. Interfacial Forces in Aqueous Media; CRC Press: Boca Raton, FL, USA, 1994; ISBN 978-1420015768. [Google Scholar] 34. Hoek, E.M.V.; Agarwal, G.K. Extended DLVO interactions between spherical particles and rough surfaces. J. Colloid Interface Sci. 2006, 298, 50–58. [Google Scholar] [CrossRef] [PubMed] 35. Qiao, G.Q.; Zhang, Q.H.; Zhang, J.F.; Cheng, H.J.; Lu, Z. Lattice Boltzmann model of cohesive sediment flocculation simulation based on the XDLVO theory. J. Tianjin Univ. 2013, 46, 232–238. (In Chinese) [Google Scholar] 36. Vinogradov, J.; Jackson, M.D. Zeta potential in intact natural sandstones at elevated temperatures. Geophys. Res. Lett. 2015, 42, 6287–6294. [Google Scholar] [CrossRef] 37. Yang, T.S.; Xiong, X.Z.; Zhan, X.L.; Yang, M.Q. The study on slipping water layers of cohesive sediment particles. J. Hydraul. Eng. 2002, 33, 20–26. (In Chinese) [Google Scholar] 38. Sondi, I.; Bišćan, J.; Pravdić, V. Electrokinetics of pure clay minerals revisited. J. Colloid Interface Sci. 1996, 178, 514–522. [Google Scholar] [CrossRef] 39. Tenth Report of the Joint Panel on Oceanographic Tables and Standards, Sidney, B.C. Canada, 1–5 September 1980; UNESCO Technical Papers in Marine Science No. 36; UNESCO: Paris, France, 1981. 40. Chen, H.S.; Shao, M.A. Effect of NaCl concentration on dynamic model of fine sediment flocculation and settling in still water. J. Hydraul. Eng. 2002, 8, 63–67. (In Chinese) [Google Scholar] 41. You, Z.J. The effect of suspended sediment concentration on the settling velocity of cohesive sediment in quiescent water. Ocean Eng. 2004, 31, 1955–1965. [Google Scholar] [CrossRef] 42. Dyer, K.R.; Manning, A.J. Observation of the size, settling velocity and effective density of flocs, and their fractal dimensions. J. Sea. Res. 1999, 41, 87–95. [Google Scholar] [CrossRef] 43. Manning, A.J.; Langston, W.J.; Jonas, P.J.C. A review of sediment dynamics in the Severn Estuary: Influence of flocculation. Mar. Pollut. Bull. 2010, 61, 37–51. [Google Scholar] [CrossRef] 44. Winterwerp, J.C. A Simple Model for Turbulence Induced Flocculation of Cohesive Sediment. J. Hydraul. Res. 1998, 36, 309–326. [Google Scholar] [CrossRef] 45. Kim, A.S.; Stolzenbach, K.D. Aggregate formation and collision efficiency in differential settling. J. Colloid Interface Sci. 2004, 271, 110–119. [Google Scholar] [CrossRef] [PubMed] 46. Sterling, M.C., Jr.; Bonner, J.S.; Ernest, A.N.S.; Page, C.A.; Autenrieth, R.L. Application of fractal flocculation and vertical transport model to aquatic sol-sediment systems. Water Res. 2005, 39, 1818–1830. [Google Scholar] [CrossRef] Figure 1. The possible velocity directions in the D3Q19 topology. The speed of discrete velocity |e[i]| equals 0 when particle keeps its original position after a time step, equals 1 when it moves to the faces of the cubic and $2$ to the edges of the cubic. Figure 2. The XDLVO potentials of different net distances h[ij] between particles. The Lifshitz‒van der Waals potential is always attractive; the electrostatic double-layer potential is repulsive and the Lewis acid‒base potential’s sign depends on the properties of colloids. XDLVO potential is the summation of the aforementioned three potentials, resulting in potential wells and potential barriers. Particles whose interactive forces conquer the potential barrier can form a stable floc. Figure 3. Illustration of slipping layer thickness, equaling the thickness of the electrostatic double layer in Yang et al.’s study [ ]. This zone is comprised of an absorbed layer and a diffuse layer. The slipping layer thickness can be calculated by the potentials on the particle surface and at the distance as the potential decays exponentially. When the particles are close enough and wrapped by a common slipping layer, they are considered as forming a new or enlarging an old floc. Figure 4. Floc properties of each case. (a) Time histories of maximum floc size (solid symbols) and mean floc size (hollow symbols) for the 5 °C, 10 °C, 20 °C, and 30 °C cases; (b) The maximum floc diameter, mean floc diameter (the left axis, with unit of μm) and floc volume (the right axis, with unit of 10^−15 m^3) at the end of the simulation. In (b), the thin solid line and dashed line represent the line fit of maximum and mean floc diameters, respectively, and the thick solid line is the exponential fit of floc volume. Figure 5. Time series of particle velocities and particles’ net distance, i.e., in Equation (8) and r[ij] − R[i] − R[j] Figure 3 . Solid symbols are net distances, and hollow symbols mean velocities. 1 represents the distance between particles #681 and #689; 2 represents that between particles #689 and #721; 3 represents that between particles #681 and #721; 2, and 3 represent the settling velocities of particles #681, #689, and #721, respectively. The left axes are for the distance, and the right ones are for the velocities. ( = 5 °C; ( = 10 °C; ( = 20 °C; and ( = 30 °C. Figure 6. SSC of each water depth in (a) 5 °C, (b) 10 °C, (c) 20 °C, and (d) 30 °C. The symbols between two labeled depths represent the SSC between those two depths. The SSC can be statistically obtained by the sediment weight in this zone. It was generally stable at depth of 4.5 mm, but there were some small differences between the cases. Figure 7. The SSC time series and its relative parameters. (a) The time series curves of relative SSC (ratio of c(t) at time t to initial c[0]) of a water system shallower than 4.5 mm; (b) The k[c] in Equation (12) and t[1/2] in Equation (13). In (a), the symbols are the experiment results, the solid lines and dashed lines are the fitting curves for n = 0.3 and n = 1.0, respectively. In (b), the solid square and hollow triangles symbols are t[1/2] and k[c], both of which are deduced from the solid curves in (a). The solid line and dashed line in (b) are the linear fittings for t[1/2] (t [1/2] = −0.380T + 21.366) and k[c] (k[c] = 0.120T + 2.661). Figure 8. The settling velocities of flocs of different diameter and temperature in many studies. ( ) Floc diameter and settling velocities from the observational data of Xia et al. [ ], Khelifa and Hill [ ], Guo and He [ ], Dyer and Manning [ ], and Manning et al. [ ]; theory results of Manning et al. [ ], Winterwerp [ ], and Khelifa and Hill [ ]; and the simulation results of LBM under temperature conditions ranging from 5 °C to 30 °C; ( ) floc velocities and temperature from the observational data of Xia et al. [ ], Wan et al. [ ], Guo and He [ ], and this simulation’s results. Figure 9. The change of Collision frequency η[glo], capture frequency η[cap] and aggregation frequency E[agg] with (a) temperature and (b) settling velocity. Figure 10. Comparisons of F[EL] and the gravity of the primary particle with diameters of 9.36 μm and efficient densities ρ[s]-ρ of 1650 kg/m^3. (a) The electrostatic double layer force between particles #681 and #689, F^EL[681–689] of different particle net distances at temperatures of 5 °C, 10 °C, 20 °C, and 30 °C; (b) the difference in electrostatic double-layer forces acting on particle #681, F^EL[681–689]–F^EL[681–721] with different particle net distances in the cases of 5 °C, 10 °C, 20 °C, and 30 °C. Case ID #1 #2 #3 #4 Temperature/(°C) 5 10 20 30 2δ/(nm) 18.7 19.1 20.0 20.7 Water viscosity ν/(10^−6 m^2 s^−1) 1.52 1.31 1.00 0.80 Table 2. Bulk settling velocity of different methods (velocity 1 v[1] is derived from H and t[1/2]; velocity 2 v[2] is the statistical result of all particles and flocs at water depths from 4.0 mm to 5.0 mm; ABS(1-2) |v[1]−v[2]| means the absolute difference between velocity 1 and velocity 2; |v[1]−v[2]|/(|v[1]+v[2]|/2) represents the relative difference between velocity 1 and velocity 2). Temperature/(°C) 5 10 20 30 (1) velocity 1 v[1]/(mm/s) 0.099 0.117 0.155 0.189 (2) velocity 2 v[2]/(mm/s) 0.101 0.117 0.152 0.185 (3) ABS(1-2) |v[1]−v[2]|/(mm/s) 0.002 0.000 0.003 0.004 (4) |v[1]−v[2]|/(|v[1]+v[2]|/2)/(%) 2.0% 0.0% 2.0% 2.1% © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/ Share and Cite MDPI and ACS Style Qiao, G.-q.; Zhang, J.-f.; Zhang, Q.-h.; Feng, X.; Lu, Y.-c.; Feng, W.-b. The Influence of Temperature on the Bulk Settling of Cohesive Sediment in Still Water with the Lattice Boltzmann Method. Water 2019, 11, 945. https://doi.org/10.3390/w11050945 AMA Style Qiao G-q, Zhang J-f, Zhang Q-h, Feng X, Lu Y-c, Feng W-b. The Influence of Temperature on the Bulk Settling of Cohesive Sediment in Still Water with the Lattice Boltzmann Method. Water. 2019; 11 (5):945. https://doi.org/10.3390/w11050945 Chicago/Turabian Style Qiao, Guang-quan, Jin-feng Zhang, Qing-he Zhang, Xi Feng, Yong-chang Lu, and Wei-bing Feng. 2019. "The Influence of Temperature on the Bulk Settling of Cohesive Sediment in Still Water with the Lattice Boltzmann Method" Water 11, no. 5: 945. https://doi.org/10.3390/w11050945 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2073-4441/11/5/945","timestamp":"2024-11-11T06:29:30Z","content_type":"text/html","content_length":"494432","record_id":"<urn:uuid:0c019e50-81e9-4911-b061-49ac36e188f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00865.warc.gz"}
What our customers say... Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences: The most thing that I like about Algebrator software, is that I can save the expressions in a file, so I can save my homework on the computer, and print it for the teacher whenever he asked for it, and it looks much prettier than my hand writing. Margaret, CA The complete explanations, a practical approach, low price and good assignments make it my best professional tutor. Richard Straton, OH Can I simply tell you how wonderful you are? May seem like a simple thing to you, but you just restored my faith in mankind (no small thing). Thank you for your kind and speedy response. T.G., Florida All in all I like the program. M.V., Texas I liked the detailed, clearly explained step by step process that Algebrator uses. I'm able to go into my class and follow along with the teacher; it's amazing! Natalie Olive, MO Search phrases used on 2012-12-29: Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among • Introductory Algebra Word Questions Worksheets • with unlike denominators calculator on online • Math worksheets Measurments • how to use TI-83 Plus cube root • graph cube roots • Free Algebra 1 Problem Solver Online • long division with polynomials solver • linear equation online calculator • answers for Mcdougal Littell Algebra 2 book • EXAMPLE OF GREATEST COMMON FACTOR.JAVA • simultaneous solver • multiply that has math poem • trig chart • Aptitude Test Sample paper • holt + algebra 2 + lesson • square root game interactive • mathematical elementary algebra • ti-89 log base( • solving for the square root • advanced algebra tutor • sixth grade algebra • algebra tiles, grade 8, simplify • "maximum principle" "neumann" "parabolic" problem • Natural Squares Calculator • free algebra solver • powerpoint conceptual physics • combinations on TI 83 calculator • "system of equations calculators" • quadratic equation for a hyperbola • Prentice hall learning strategies online algebra 2 • free mathworksheets 8th grade printable • third grade math practice sheets for simplest form • worksheets for adding and subtracting negative numbers • graph hyperbolas on ti 84 plus SE • 6th grade integers • cubed polynomials • integers worksheet free • simplifying complex irrational expressions • algebra tutoring software • how to solve partial differential equation • algebra tutorial on binomial expansions • 8th grade practice math worksheet • holt middle school math course 1 answers to 6-3 • substitution method (pre algebra) • fractions simplfying • factoring algebra • how to cheat ti-84 • saxon math paper • writing simultaneous equations in matrix form • list of sample questions multiplying integers • TI-83 cubic function • ks2 english reading test papers for free to download • Balancing Equations Calculator • physic-calculate force • math lessons graphing translations • ALGEBRA FOR DUMMIES ONLINE • polynomial solve online • free algebraic calculator • glencoe mathematics ebook • third order polynomial solve • understanding college algebra • help me with my homework least common multiple • complex radical expressions calculator • free online xy graph paper • highest common fraction tut • Hard Linear equations • algebra 2 glencoe/ mcgraw-hill student edition factoring • calculate next number in sequence online • 6 grade math volume conversion formula • perfect square worksheets • glencoe math tests • "free taks worksheets" • Rewrite your second order equation as a system of first order equations. • Calculate Wronskian • +algrebra expressions, equations, inequalities • linear nonlinear online worksheets • fraction on muber line in order from least to greatest • number sequences for the gmat • online calculator with pie • how to convert a circle into a square for dummies • "algorithm analysis" for dummies ebook free download • copy answer algebra 1 practice workbook • long division of high term polynomials calculator • number grid rule - maths coursework year 11 • hot to graph hyperbolas on the ti-83 plus • eigenvalue write program TI-83 plus • root solver • algebra calculator games • permutations and combinations made easy • standard deviation with decimals • online cubic graphing • online summation calculator • factor polynomial functions problem solver • Conceptual Physics Answers
{"url":"https://softmath.com/math-book-answers/adding-exponents/algebra-inequalities-free.html","timestamp":"2024-11-10T19:08:50Z","content_type":"text/html","content_length":"35786","record_id":"<urn:uuid:aa1824c0-1ba0-4b3d-97ba-f9fdc91e04d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00226.warc.gz"}
Internet Encyclopedia of Philosophy Inconsistent Mathematics Inconsistent mathematics is the study of commonplace mathematical objects, like sets, numbers, and functions, where some contradictions are allowed. Tools from formal logic are used to make sure any contradictions are contained and that the overall theories remain coherent. Inconsistent mathematics began as a response to the set theoretic and semantic paradoxes such as Russell’s Paradox and the Liar Paradox—the response being that these are interesting facts to study rather than problems to solve—and has so far been of interest primarily to logicians and philosophers. More recently, though, the techniques of inconsistent mathematics have been extended into wider mathematical fields, such as vector spaces and topology, to study inconsistent structure for its own sake. To be precise, a mathematical theory is a collection of sentences, the theorems, which are deduced through logical proofs. A contradiction is a sentence together with its negation, and a theory is inconsistent if it includes a contradiction. Inconsistent mathematics considers inconsistent theories. As a result, inconsistent mathematics requires careful attention to logic. In classical logic, a contradiction is always absurd: a contradiction implies everything. A theory containing every sentence is trivial. Classical logic therefore makes nonsense of inconsistency and is inappropriate for inconsistent mathematics. Classical logic predicts that the inconsistent has no structure. A paraconsistent logic guides proofs so that contradictions do not necessarily lead to triviality. With a paraconsistent logic, mathematical theories can be both inconsistent and interesting. This article discusses inconsistent mathematics as an active research program, with some of its history, philosophy, results and open questions. Table of Contents 1. Introduction Inconsistent mathematics arose as an independent discipline in the twentieth century, as the result of advances in formal logic. In the nineteenth century, a great deal of extra emphasis was placed on formal rigor in proofs, because various confusions and contradictions had appeared in the analysis of real numbers. To remedy the situation required examining the inner workings of mathematical arguments in full detail. Mathematics had always been conducted through step-by-step proofs, but formal logic was intended to exert an extra degree of control over the proofs, to ensure that all and only the desired results would obtain. Various reconstructions of mathematical reasoning were advanced. One proposal was classical logic, pioneered by Giuseppe Peano, Gottlob Frege, and Bertrand Russell. Another was paraconsistent logic, arising out of the ideas of Jan Łukasiewicz and N. A. Vasil’év around 1910, and first realized in full by Jaśkowski in 1948. The first to suggest paraconsistency as a ground for inconsistent mathematics was Newton da Costa in Brazil in 1958. Since then, his school has carried on a study of paraconsistent mathematics. Another school, centered in Australia and most associated with the name of Graham Priest, has been active since the 1970s. Priest and Richard Routley have forwarded the thesis that some inconsistent theories are not only interesting, but true; this is dialetheism. Like any branch of mathematics, inconsistent mathematics is the study of abstract structures using proofs. Paraconsistent logic offers an unusually exacting proof guide that makes sure inconsistency does not get out of hand. Paraconsistency is not a magic wand or panacea. It is a methodology for hard work. Paraconsistency only helps us from getting lost, or falling into holes, when navigating through rough terrain. a. An Example Consider a collection of objects. The collection has some size, the number of objects in the collection. Now consider all the ways that these objects could be recombined. For instance, if we are considering the collection {a, b}, then we have four possible recombinations: just a, just b, both a and b, or neither a nor b. In general, if a collection has κ members, it has 2^κ recombinations. It is a theorem from the nineteenth century that, even if the collections in question are infinitely large, still κ < 2^κ, that is, the number of recombinations is always strictly larger than the number of objects in the original collection. This is Georg Cantor’s theorem. Now consider the collection of all objects, the universe, V. This collection has some size, |V|, and quite clearly, being by definition the collection of everything, this size is the absolutely largest size any collection can be. (Any collection is contained in the universe by definition, and so is no bigger than the universe.) By Cantor’s theorem, though, the number of recombinations of all the objects exceeds the original number of objects. So the size of the recombinations is both larger than, and cannot be larger than, the universe, This is Cantor’s paradox. Inconsistent mathematics is unique in that, if rigorously argued, Cantor’s paradox is a theorem. 2. Background a. Motivations There are at least two reasons to take an interest in inconsistent mathematics, which roughly fall under the headings of pure and applied. The pure reason is to study structure for its own sake. Whether or not it has anything to do with physics, for example, Reimann geometry is beautiful. If the ideas displayed in inconsistent mathematics are rich and elegant and support unexpected developments that make deep connections, then people will study it. G. H. Hardy’s A Mathematician’s Apology (1940) makes a stirring case that pure mathematics is inherently worth doing, and inconsistent mathematics provides some panoramic views not available anywhere else. The applied reasons derive from a longstanding project at the foundations of mathematics. Around 1900, David Hilbert proposed a program to ensure mathematical security. Hilbert wanted: • to formalize all mathematical reasoning into an exact notation with algorithmic rules; • to provide axioms for all mathematical theories, such that no contradictions are provable (consistency), and all true facts are provable (completeness). Hilbert’s program was (in part) a response to a series of conceptual crises and responses from ancient Greece through Issac Newton and G. W. Leibniz (see section 6 below) to Cantor. Each crisis arose due to the imposition of some objects that did not behave well in the theories of the day—most dramatically in Russell’s paradox, which seems to be about logic itself. The inconsistency would not have been such trouble, except the logic employed at that time was explosive: From a contradiction, anything at all can be proved, so Russell’s paradox was a disaster. In 1931, Kurt Gödel’s theorems showed that consistency is incompatible with completeness, that any complete foundation for mathematics will be inconsistent. Hilbert’s program as stated is dead, and with it even more ambitious projects like Frege-Russell logicism. The failure of completeness was hard to understand. Hilbert and many others had felt that any mathematical question should be amenable to a mathematical answer. The motive to inconsistency, then, is that an inconsistent theory can be complete. In light of Gödel’s result, an inconsistent foundation for mathematics is the only remaining candidate for completeness. b. Perspectives There are different ways to view the place of inconsistent mathematics, ranging from the ideological to the pragmatic. The most extreme view is that inconsistent mathematics is a rival to, or replacement for, classical consistent mathematics. This seems to have been Routley’s intent. Routley wanted to perfect an “ultramodal universal logic,” which would be a flexible and powerful reasoning tool applicable to all subjects and in all situations. Routley argued that some subjects and situations are intractably inconsistent, and so the universal logic would be paraconsistent. He wanted such a logic to underly not only set theory and arithmetic, but metaphysics, ecology and economics. (For example, Routley and Meyer [1976] suggest that our economic woes are caused by using classical logic in economic theory.) Rotuley (1980, p.927) writes: There are whole mathematical cities that have been closed off and partially abandoned because of the outbreak of isolated contradictions. They have become like modern restorations of ancient cities, mostly just patched up ruins visited by tourists. In order to sustain the ultramodal challenge to classical logic it will have to be shown that even though leading features of classical logic and theories have been rejected, … by going ultramodal one does not lose great chunks of the modern mathematical megalopolis. … The strong ultramodal claim—not so far vindicated—is the expectedly brash one: we can do everything you can do, only better, and we can do more. A more restrained, but still unorthodox, view is of inconsistency as a non-revisionary extension of classical theory. There is nothing wrong with the classical picture of mathematics, says a proponent of this position, except if we think that the classical picture exhausts all there is to know. A useful analogy is the extension of the rational numbers by the irrational numbers, to get the real numbers. Rational numbers are not wrong; they are just not all the numbers. This moderate line is found in Priest’s work. As articulated by da Costa (1974, p.498): It would be as interesting to study the inconsistent systems as, for instance, the non-euclidean geometries: we would obtain a better idea of the nature of certain paradoxes, could have a better insight on the connections amongst the various logical principles necessary to obtain determinate results, etc. In a similar vein, Chris Mortensen argues that many important questions about mathematics are deeper than consistency or completeness. A third view is even more open-minded. This is to see all theories (within some basic constraints) as genuine, interesting and useful for different purposes. Jc Beall and Greg Restall have articulated a version of this view at length, which they call logical pluralism. c. Methods There are at least two ways to go about mathematical research in this field. The first is axiomatic. The second is model theoretic. The axiomatic approach is very pure. We pick some axioms and inference rules, some starting assumptions and a logic, and try to prove some theorems, with the aim of producing something on the model of Euclid, or Russell and A. N. Whitehead’s Principia Mathematica. This would be a way of obtaining results in inconsistent mathematics independently, as if we were discovering mathematics for the first time. On the axiomatic approach there is no requirement that the same theorems as classical mathematics be proved. The hardest work goes into choosing a logic that is weak enough to be paraconsistent, but strong enough to get results, and formulating the definitions and starting assumptions in a way that is compatible with the logic. Little work has so far been done using axiomatics. By far more attention has been given to the model theoretic approach, because it allows inconsistent theories to “ride on the backs” of already developed consistent theories. The idea here is to build up models—domains of discourse, along with some relations between the objects in the domain, and an interpretation—and to read off facts about the attached theory. A way to do this is to take a model from classical mathematics, and to tinker with the interpretation, as in collapsed models of arithmetic (section 5 below). The model theoretic approach shows how different logics interact with different mathematical structures. Mortensen has followed through on this in a wide array of subjects, from the differential calculus to vector spaces to topology to category theory, always asking: Under what conditions is identity well-behaved? Let Φ(a) be some sentence about an object a. Mortensen’s question is, if a = b holds in a theory, then is it the case that Φ(a) exactly when Φ(b)? It turns out that the answer to this question is extremely sensitive to small changes in logic and interpretations, and the answer can often be “no.” Most of the results obtained to date have been through the model theoretic approach, which has the advantage of maintaining a connection with classical mathematics. The model theory approach has the same disadvantage, since it is unlikely that radically new or robustly inconsistent ideas will arise from always beginning at classical ideas. d. Proofs It is often thought that inconsistent mathematics faces a grave problem. A very common mathematical proof technique is reductio ad absurdum. The concern, then, is that if contradictions are not absurd—a fortiori, if a theory has contradictions in it—then reductio is not possible. How can mathematics be done without the most common sort of indirect proof? The key to working inconsistent mathematics is its logic. Much hinges on which paraconsistent logic we are using. For instance, in da Costa’s systems, if a proposition is marked as “consistent,” then reductio is allowed. Similarly, in most relevance logics, contraposition holds. And so forth. The reader is recommended to the bibliography for information on paraconsistent logic. Independently of logic, the following may help. In classical logic, all contradictions are absurd; in a paraconsistent logic this is not so. But some things are absurd nevertheless. Classically, contradiction and absurdity play the same role, of being a rejection device, a reason to rule out some possibility. In inconsistent mathematics, there are still rejection devices. Anything that leads to a trivial theory is to be rejected. More, suppose we are doing arithmetic and hypothesize that Φ. But we find that Φ has as a consequence that j=k for every number j, k. Now, we are looking for interesting inconsistent structure. This may not be full triviality, but 0 = 1 is nonsense. Reject Φ. There are many consistent structures that mathematicians do not, and will never, investigate, not by force of pure logic but because they are not interesting. Inconsistent mathematicians, irrespective of formal proof procedures, do the same. 3. Geometry Intuitively, M. C. Escher’s “Ascending, Descending” is a picture of an impossible structure—a staircase that, if you walked continuously along it, you would be going both up and down at the same time. Such a staircase may be called impossible. The structure as a whole seems to present us with an inconsistent situation; formally, defining down as not up, then a person walking the staircase would be going up and not up, at the same time, in the same way, a contradiction. Nevertheless, the picture is coherent and interesting. What sorts of mathematical properties does it have? The answers to this and more would be the start of an inconsistent geometry. So far, the study has focused on the impossible pictures themselves. A systematic study of these pictures is being carried out by the Adelaide school. Two main results have been obtained. First, Bruno Ernst conjectured that one cannot rotate an impossible picture. This was refuted in 1999 by Mortensen; later, Quigley designed computer simulations of rotating impossible Necker cubes. Second, all impossible pictures have been given a preliminary classification of four basic forms: Necker cubes, Reutersvärd triangles, Schuster pipes or fork, and Ernst stairs. It is thought that these forms exhaust the universe of impossible pictures. If so, an important step towards a fuller geometry will have been taken, since, for example, a central theme in surface geometry is to classify surfaces as either convex, flat, or concave. Most recently, Mortensen and Leishman (2009) have characterized Necker cubes, including chains of Neckers, using linear algebra. Otherwise, algebraic and analytic methods have not yet been applied in the same way they have been in classical geometry. Inconsistent equational expressions are not at the point where a robust answer can be given to questions of length, area, volume etc. On the other hand, as the Adelaide school is showing, the ancient Greeks do not have a monopoly on basic “circles drawn in sand” geometric discoveries. 4. Set Theory Set theory is one of the most investigated areas in inconsistent mathematics, perhaps because there is the most consensus that the theories under study might be true. It is here we have perhaps the most important theorem for inconsistent mathematics, Ross Brady’s (2006) proof that inconsistent set theory is non-trivial. Set theory begins with two basic assumptions, about the existence and uniqueness of sets: • A set is any collection of objects all sharing some property Φ; • Sets with exactly the same members are identical. These are the principles of comprehension (a.k.a. abstraction) and extensionality, respectively. In symbols, x ∈ {z : Φ(z)} ↔ Φ(x); x = y ↔ ∀z (z ∈ x ↔ z ∈ y). Again, these assumptions seem true. When the first assumption, the principle of comprehension, was proved to have inconsistent consequences, this was felt to be highly paradoxical. The inconsistent mathematician asserts that a theory implying an inconsistency is not automatically equivalent to a theory being wrong. Newton da Costa was the first to develop an openly inconsistent set theory in the 1960s, based on Alonzo Church’s set theory with a universal set, or what is similar, W. V. O. Quine’s new foundations. In this system, axioms like those of standard set theory are assumed, along with the existence of a Russell set R = {x : x ∉ x} and a universal set V = {x : x = x}. Da Costa has defined “russell relations” and extended this foundation to model theory, arithmetic and analysis. Note that V ∈ V, since V = V. This shows that some sets are self-membered. This also means that V ≠ R, by the axiom of extensionality. On the other hand, in perhaps the first truly combinatorial theorem of inconsistent mathematics, Arruda and Batens (1982) proved where ∪R is the union of R, the set of all the members of members of R. This says that every set is a member of a non-self-membered set. The Arruda-Batens result was obtained with a very weak logic, and shows that there are real set theoretical theorems to be learned about inconsistent objects. Arruda further showed that where P (X) denotes all the subsets of X and ⊆ is the subset relation. Routley, meanwhile, in 1977 took up his own dialetheic logic and used it on a full comprehension principle. Routley went as far as to allow a comprehension principle where the set being defined could appear in its own definition. A more mundane example of a set appearing in its own defining condition could be the set of “critics who only criticize each other.” One of Routley’s examples is the ultimate inconsistent set, x ∈ Z ↔ x ∉ Z. Routley indicated that the usual axioms of classical set theory can be proven as theorems—including a version of the axiom of choice—and began work towards a full reconstruction of Cantorian set The crucial step in the development of Routley’s set theory came in 1989 when Brady adapted an idea from 1971 to produce a model of dialetheic set theory, showing that it is not trivial. Brady proves that there is a model in which all the axioms and consequences of set theory are true, including some contradictions like Russell’s, but in which some sentences are not true. By the soundness of the semantics, then, some sentences are not provable, and the theory is decidedly paraconsistent. Since then Brady has considerably refined and expanded his result. A stream of papers considering models for paraconsistent set theory has been coming out of Europe as well. Olivier Esser has determined under what conditions the axiom of choice is true, for example. See Hinnion and Libert (2008) for an opening into this work. Classical set theory, it is well known, cannot answer some fundamental questions about infinity, Cantor’s continuum hypothesis being the most famous. The theory is incomplete, just as Gödel predicted it would be. Inconsistent set theory, on the other hand, appears to be able to answer some of these questions. For instance, consider a large cardinal hypothesis, that there are cardinals λ such that for any κ < λ, also 2^κ < λ. The existence of large cardinals is undecidable by classical set theory. But recall the universe, as we did in the introduction (section 1), and its size |V|. Almost obviously, |V| is such large a cardinal, just because everything is smaller than it. Taking the full sweep of sets into account, the hypothesis is true. Set theory is the lingua franca of mathematics and the home of mathematical study of infinity. Since Zeno’s paradoxes it has been obvious that there is something paradoxical about infinity. Since Russell’s paradox, it has been obvious that there is something paradoxical about set theory. So a rigorously developed paraconsistent set theory serves two purposes. First, it provides a reliable (inconsistent) foundation for mathematics, at least in the sense of providing the basic toolkit for expressing mathematical ideas. Second, the mathematics of infinity can be refined to cover the inconsistent cases like Cantor’s paradox, and cases that have yet to be considered. See the references for what has been done in inconsistent set theory so far; what can be still be done in remains one of the discipline’s most exciting open questions. 5. Arithmetic An inconsistent arithmetic may be considered an alternative or variant on the standard theory, like a non-euclidean geometry. Like set theory, though, there are some who think that an inconsistent arithmetic may be true, for the following reason. Gödel, in 1931, found a true sentence G about numbers such that, if G can be decided by arithmetic, then arithmetic is inconsistent. This means that any consistent theory of numbers will always be an incomplete fragment of the whole truth about numbers. Gödel’s second incompleteness theorem states that, if arithmetic is consistent, then that very fact is unprovable in arithmetic. Gödel’s incompleteness theorems state that all consistent theories are terminally unable to process everything that we know is true about the numbers. Priest has argued in a series of papers that this means that the whole truth about numbers is inconsistent. The standard axioms of arithmetic are Peano’s, and their consequences—the standard theory of arithmetic—is called P A. The standard model of arithmetic is N = {0, 1, 2, …}, zero and its successors. N is a model of arithmetic because it makes all the right sentences true. In 1934 Skolem noticed that there are other (consistent) models that make all the same sentences true, but have a different shape—namely, the non-standard models include blocks of objects after all the standard members of N. The consistent non-standard models are all extensions of the standard model, models containing extra objects. Inconsistent models of arithmetic are the natural dual, where the standard model is itself an extension of a more basic structure, which also makes all the right sentences true. Part of this idea goes back to C. F. Gauss, who first introduced the idea of a modular arithmetic, like that we use to tell the time on analog clocks: On a clock face, 11 + 2 = 1, since the hands of the clock revolve around 12. In this case we say that 11 + 2 is congruent to 1 modulo 12. An important discovery in the late 19th century was that arithmetic facts are reducible to facts about a successor relation starting from a base element. In modular arithmetic, a successor function is wrapped around itself. Gauss no doubt saw this as a useful technical device. Inconsistent number theorists have considered taking such congruences much more seriously. Inconsistent arithmetic was first investigated by Robert Meyer in the 1970’s. There he took the paraconsistent logic R and added to it axioms governing successor, addition, multiplication, and induction, giving the system R#. In 1975 Meyer proved that his arithemtic is non-trivial, because R# has models. Most notably, R# has finite models with a two element domain {0, 1}, with the successor function moving in a very tight circle over the elements. Such models make all the theorems of R# true, but keep equations like 0 = 1 just false. The importance of such finite models is just this: The models can be represented within the theory itself, showing that a paraconsistent arithmetic can prove its own non-triviality. In the case of Meyer’s arithemetic, R# has a finitary consistency proof, formalizable in R#. Thus, in non-classical contexts, Gödel’s second incompleteness theorem loses its bite. Since 1976 relevance logicians have studied the relationship between R# and PA. Their hope was that R# contains PA as a subtheory and could replace PA as a stronger, more genuine arithmetic. The outcome of that project for our purposes is the development of inconsistent models of arithmetic. Following Dunn, Meyer, Mortensen, and Friedman, these models have now been extensively studied by Priest, who bases his work not on the relevant logic R but on the more flexible logic LP. Priest has found inconsistent arithmetic to have an elegant general structure. Rather than describe the details, here is an intuitive example. We imagine the standard model of arithmetic, up to an inconsistent element n = n + 1. This n is suspected to be a very, very large number, “without physical reality or psychological meaning.” Depending on your tastes, it is the greatest finite number or the least inconsistent number. We further imagine that for j, k > n, we have j=k. If in the classical model j≠ k, then this is true too; hence we have an inconsistency, j=k and j≠ k. Any fact true of numbers greater than n are true of n, too, because after n, all numbers are identical to n. No facts from the consistent model are lost. This technique gives a collapsed model of arithmetic. Let T be all the sentences in the language of arithmetic that are true of N; then let T(n) similarly be all the sentences true of the numbers up to n, an inconsistent number theory. Since T(n) does not contradict T about any numbers below n, if n > 0 then T(n) is non-trivial. (It does not prove 0 = 1, for instance.) The sentences of T(n) are representable in T(n), and its language contains a truth predicate for T(n). The theory can prove itself sound. The Gödel sentence for T(n) is provable in T(n), as is its negation, so the theory is inconsistent. Yet as Meyer proved, the non-triviality of T(n) can be established in T(n) by a finite procedure. Most striking with respect to Hilbert’s program, there is a way, in principle, to figure out for any arithmetic sentence Φ whether or not Φ holds, just by checking all the numbers up to n. This means that T(n) is decidable, and that there must be axioms guaranteed to deliver every truth about the collapsed model. This means that an inconsistent arithmetic is coherent and complete. 6. Analysis Newton and Leibniz independently developed the calculus in the 17th century. They presented ingenious solutions to outstanding problems (rates of change, areas under curves) using infinitesimally small quantities. Consider a curve and a tangent to the curve. Where the tangent line and the curve intersect can be though of as a point. If the curve is the trajectory of some object in motion, this point is an instant of change. But a bit of thought shows that it must be a little more than a point—otherwise, as a measure a rate of change, there would be no change at all, any more than a photograph is in motion. There must be some smudge. On the other hand, the instant must be less than any finite quantity, because there are infinitely many such instants. An infinitesimal would respect both these concerns, and with these provided, a circle could be construed as infinitely many infinitesimal tangent segments. Infinitesimals were essential, not only for building up the conceptual steps to inventing calculus, but in getting the right answers. Yet it was pointed out, most famously by Bishop George Berkeley, that infinitesimals were poorly understood and were being used inconsistently in equations. Calculus in its original form was outright inconsistent. Here is an example. Suppose we are differentiating the polynomial f(x) =ax^2+bx+c. Using the original definition of a derivative, In the example, ε is an infinitesimal. It marks a small but non-trivial neighborhood around x, and can be divided by, so it is not zero. Nevertheless, by the end ε has simply disappeared. This example suggests that paraconsistent logic is more than a useful technical device. The example shows that Leibniz was reasoning with contradictory information, and yet did not infer everything. On the contrary, he got the right answer. Nor is this an isolated incident. Mathematicians seem able to sort through “noise” and derive interesting truths, even out of contradictory data sets. To capture this, Brown and Priest (2004) have developed a method they call “chunk and permeate” to model reasoning in the early calculus. The idea is to take all the information, including say ε = 0 and ε ≠ 0, and break it into smaller chunks. Each chunk is consistent, without conflicting information, and one can reason using classical logic inside of a chunk. Then a permeation relation is defined which controls the information flow between chunks. As long as the permeation relation is carefully defined, conclusions reached in one chunk can flow to another chunk and enter into reasoning chains there. Brown and Priest propose this as a model, or rational reconstruction, of what Newton and Leibniz were doing. Another, more direct tack for inconsistent mathematics is to work with infinitesimal numbers themselves. There are classical theories of infinitesimals due to Abraham Robinson (the hyperreals), and J. H. Conway (the surreals). Mortensen has worked with differential equations using hyperreals. Another approach is from category theory. Tiny line segments (“linelets”) of length ϵ are considered, such that ϵ^2 = 0 but it is not the case that ϵ = 0. In this theory, it is also not the case that ϵ ≠ 0, so the logical law of excluded middle fails. The category theory approach is the most like inconsistent mathematics, then, since it involves a change in the logic. However, the most obvious way to use linelets with paraconsistent logics, to say that both ϵ = 0 and ϵ ≠ 0 are true, means we are dividing by 0 and so is probably too coarse to work. In general the concept of continuity is rich for inconsistent developments. Moments of change, the flow of time, and the very boundaries that separate objects have all been considered from the standpoint of inconsistent mathematics. 7. Computer Science The questions posed by David Hilbert can be stated in very modern language: Is there a computer program to decide, for any arithmetic statement, whether or not the statement can be proven? Is there a program to decide, for any arithmetic statement, whether or not the statement is true? We have already seen that Gödel’s theorems devastated Hilbert’s program, answering these questions in the negative. However, we also saw that inconsistent arithmetic overcomes Gödel’s results and can give a positive answer to these questions. It is natural to extend these ideas into computer science. Hilbert’s program demands certain algorithms—a step-by-step procedure that can be carried out without insight or creativity. A Turing machine runs programs, some of which halt after a finite number of steps, and some of which keep running forever. Is there a program E that can tell us in advance whether a given program will halt or not? If there is, then consider the program E*, which exists if E does by defining it as follows. When considering some program x, E* halts if and only if x keeps running when given input x. Then E* halts for E* if and only if E* does not halt for E*, which implies a contradiction. Turing concluded that there is no E*, and so there is no E—that there cannot be a general decision procedure. Any program that can decide in advance the behavior of all other programs will be inconsistent. A paraconsistent system can occasionally produce contradictions as an output, while its procedure remains completely deterministic. (It is not that the machine occasionally does and does not produce an output.) There is, in principle, no reason a decision program cannot exist. Richard Sylvan identifies as a central idea of paraconsistent computability theory the development of machines “to compute diagonal functions that are classically regarded as uncomputable.” He discusses a number of rich possibilities for a non-classical approach to algorithms, including a fixed-point result on the set of all algorithmic functions, and a prototype for dialetheic machines. Important results have been obtained by the paraconsistent school in Brazil—da Costa and Doria in 1994, and Agudelo and Carnielli in 2006. Like quantum computation, though, at present the theory of paraconsistent machines outstrips the hardware. Machines that can compute more than Turing machines await advances in physics. 8. References and Further Reading a. Further Reading Priest’s In Contradiction (2006) is the best place to start. The second edition contains material on set theory, continuity, and inconsistent arithmetic (summarizing material previously published in papers). A critique of inconsistent arithmetic is in Shapiro (2002). Franz Berto’s book, How to Sell a Contradiction (2007), is harder to find, but also an excellent and perhaps more gentle Some of da Costa’s paraconsistent mathematics is summarized in the interesting collection Frontiers of Paraconsistency (2000)—the proceedings of a world congress on paraconsistency edited by Batens et al. More details are in Jacquette’s Philosophy of Logic (2007) handbook; Beall’s paper in that volume covers issues about truth and inconsistency. Those wanting more advanced mathematical topics should consult Mortensen’s Inconsistent Mathematics (1995). For impossible geometry, his recent pair of papers with Leishman are a promising advance. His school’s website is well worth a visit. Brady’s Universal Logic (2006) is the most worked-out paraconsistent set theory to date, but not for the faint of heart. If you can find it, read Routley’s seminal paper, “Ultralogic as Universal?”, reprinted as an appendix to his magnum opus, Exploring Meinong’s Jungle (1980). Before too much confusion arises, note that Richard Routley and Richard Sylvan, whose posthumous work is collected by Hyde and Priest in Sociative Logics and their Applications (2000), in a selfless feat of inconsistency, are the same For the how-to of paraconsistent logics, consult both the entry on relevance and paraconsistency in Gabbay & Günthner’s Handbook of Philosophical Logic volume 6 (2002), or Priest’s textbook An Introduction to Non-Classical Logic (2008). For paraconsistent logic and its philosophy more generally see Routley, Priest and Norman’s 1989 edited collection. The collection The Law of Non-Contradiction (Priest et al. 2004) discusses the philosophy of paraconsistency, as does Priest’s Doubt Truth be a Liar (2006). For the broader philosophical issues associated with inconsistent mathematics, especially in applications (for example, consequences for realism and antirealism debates), see Mortensen (2009a) and Colyvan (2009). b. References • Arruda, A. I. & Batens, D. (1982). “Russell’s set versus the universal set in paraconsistent set theory.” Logique et Analyse, 25, pp. 121-133. • Batens, D., Mortensen, C. , Priest, G., & van Bendegem, J-P., eds. (2000). Frontiers of Paraconsistent Logic. Kluwer Academic Publishers. • Berto, Francesco (2007). How to Sell a Contradiction. Studies in Logic v. 6. College Publications. • Brady, Ross (2006). Universal Logic. CSLI Publications. • Brown, Bryson & Priest, G. (2004). “Chunk and permeate i: the infinitesimal calculus.” Journal of Philosophical Logic, 33, pp. 379–88. • Colyvan, Mark (2008). “The ontological commitments of inconsistent theories.” Philosophical Studies, 141(1):115 – 23, October. • Colyvan, Mark (2009). “Applying Inconsistent Mathematics,” in O. Bueno and Ø. Linnebo (eds.), New Waves in Philosophy of Mathematics, Palgrave MacMillan, pp. 160-72 • da Costa, Newton C. A. (1974). “On the theory of inconsistent formal systems.” Notre Dame Journal of Formal Logic, 15, pp. 497– 510. • da Costa, Newton C. A. (2000). Paraconsistent mathematics. In Batens et al. 2000, pp. 165–180. • da Costa, Newton C.A., Krause, D´ecio & Bueno, Ot´avio (2007). “Paraconsistent logics and paraconsistency.” In Jacquette 2007, pp. 791 – 912. • Gabbay, Dov M. & Günthner, F. eds. (2002). Handbook of Philosophical Logic, 2nd Edition, volume 6, Kluwer. • Hinnion,Roland & Libert, Thierry (2008). “Topological models for extensional partial set theory.” Notre Dame Journal of Formal Logic, 49(1). • Hyde, Dominic & Priest, G., eds. (2000). Sociative Logics and their Applications: Essays by the Late Richard Sylvan. Ashgate. • Jacquette, Dale, ed. (2007). Philosophy of Logic. Elsevier: North Holland. • Libert, Thierry (2004). “Models for paraconsistent set theory.” Journal of Applied Logic, 3. • Mortensen, Chris (1995). Inconsistent Mathematics. Kluwer Academic Publishers. • Mortensen, Chris (2009a). “Inconsistent mathematics: Some philosophical implications.” In A.D. Irvine, ed., Handbook of the Philosophy of Science Volume 9: Philosophy of Mathematics. North • Mortensen, Chris (2009b). “Linear algebra representation of necker cubes II: The routley functor and necker chains.” Australasian Journal of Logic, 7. • Mortensen, Chris & Leishman, Steve (2009). “Linear algebra representation of necker cubes I: The crazy crate.” Australasian Journal of Logic, 7. • Priest, Graham, Beall, J.C. & Armour-Garb, B., eds. (2004). The Law of Non-Contradiction. Oxford: Clarendon Press. • Priest, Graham (1994). “Is arithmetic consistent?” Mind, 103. • Priest, Graham (2000). “Inconsistent models of arithmetic, II: The general case.” Journal of Symbolic Logic, 65, pp. 1519–29. • Priest, Graham (2002). “Paraconsistent logic.” In Gabbay and Günthner, eds. 2002, pp. 287–394. • Priest, Graham (2006a). Doubt Truth Be A Liar. Oxford University Press. • Priest, Graham (2006b). In Contradiction: A Study of the Transconsistent. Oxford University Press. second edition. • Priest, Graham (2008). An Introduction to Non-Classical Logic. Cambridge University Press, second edition. • Priest, Graham, Routley, R. & Norman, J. eds. (1989). Paraconsistent Logic: Essays on the Inconsistent. Philosophia Verlag. • Routley, Richard (1977). “Ultralogic as universal?” Relevance Logic Newsletter, 2, pp. 51–89. Reprinted in Routley 1980. • Routley, Richard (1980). “Exploring Meinong’s Jungle and Beyond.” Philosophy Department, RSSS, Australian National University, 1980. Interim Edition, Departmental Monograph number 3. • Routley, Richard & Meyer, R. K. (1976). “Dialectical logic, classical logic and the consistency of the world.” Studies in Soviet Thought, 16, pp. 1–25. • Shapiro, Stewart (2002). “Incompleteness and inconsistency.” Mind, 111, pp. 817 – 832. Author Information Zach Weber Email: zweber@unimelb.edu.au University of Sydney, University of Melbourne
{"url":"https://iep.utm.edu/math-inc/","timestamp":"2024-11-08T12:03:21Z","content_type":"text/html","content_length":"67476","record_id":"<urn:uuid:43a720be-97d4-4dad-a534-99ca921291c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00875.warc.gz"}
Body mass index cut offs to define thinness in children and adolescents: international survey Body mass index cut offs to define thinness in children and adolescents: international survey BMJ 2007 335 doi: https://doi.org/10.1136/bmj.39238.399444.55 (Published 26 July 2007) Cite this as: BMJ 2007;335:194 1. Correspondence to: T J Cole tim.cole{at}ich.ucl.ac.uk Objective To determine cut offs to define thinness in children and adolescents, based on body mass index at age 18 years. Design International survey of six large nationally representative cross sectional studies on growth. Setting Brazil, Great Britain, Hong Kong, the Netherlands, Singapore, and the United States. Subjects 97876 males and 94851 females from birth to 25 years. Main outcome measure Body mass index (BMI, weight/height^2). Results The World Health Organization defines grade 2 thinness in adults as BMI <17. This same cut off, applied to the six datasets at age 18 years, gave mean BMI close to a z score of −2 and 80% of the median. Thus it matches existing criteria for wasting in children based on weight for height. For each dataset, centile curves were drawn to pass through the cut off of BMI 17 at 18 years. The resulting curves were averaged to provide age and sex specific cut-off points from 2-18 years. Similar cut offs were derived based on BMI 16 and 18.5 at 18 years, together providing definitions of thinness grades 1, 2, and 3 in children and adolescents consistent with the WHO adult definitions. Conclusions The proposed cut-off points should help to provide internationally comparable prevalence rates of thinness in children and adolescents. Much has been written about the epidemic of child obesity1 but malnutrition—meaning undernutrition—in infants, children, and adolescents poses a considerably larger public health problem internationally,2 3 4 5 and in the developed world anorexia nervosa is the third most common chronic condition of adolescence.6 Obesity and malnutrition represent opposite extremes on the spectrum of adiposity, and both are routinely quantified in terms of weight and height relative to the child's age.7 Yet the classification of malnutrition in later childhood and adolescence is currently unsatisfactory because of the lack of suitable cut offs for international use.8 Fifty years ago Gomez introduced his malnutrition classification of weight below a specified percentage of median weight for the child's age.9 This included three components: a measurement, a reference for age adjustment, and a set of cut offs.10 Later Seoane and Latham proposed splitting weight for age into weight for height and height for age,11 allowing underweight to be defined as wasting or stunting, or both.12 Subsequently Waterlow et al recommended the use of z scores for the definitions of underweight, wasting, and stunting, with the cut offs defined in terms of standard deviations (SDs) below the median rather than as percentages of the median.13 This ensures that the false positive screening rate is constant across age as applied to the reference population.10 In 1983 the World Health Organization (WHO) formally recognised the US National Center for Health Statistics (NCHS) classification14 as the international reference15 and has used it since to classify children as underweight, wasted, or stunted, each based on a cut off of −2 z scores.16 Wasting in particular is assessed with the NCHS/WHO weight for height reference, which compares the child's weight to the average weight of children of the same height.17 This ignores the child's age, which allows nutritional status to be assessed when age is not known. It also assumes that, on average, children of a given height weigh the same whatever their age; in infancy and adolescence, however, the weight-height relation depends on age.18 19 This can be seen by considering the index weight/height^p, where the height power p is allowed to vary with age. The index is adjusted for age and sex by dividing it by the same ratio based on median weight and height for the child's age and sex.7 For a weight for height index such as NCHS, the value of p is the ratio of the percentage growth rates in weight and height at each age, so it is largest when weight is growing fastest relative to height—that is, in infancy and adolescence when p is 3 or more as against 1.5 in mid-childhood.18 In later adolescence, as weight growth continues after height growth has stopped, p increases to infinity and height adjustment becomes impossible. This is an important general limitation of weight for height references in that they cannot be used in adolescence.18 20 For this reason the NCHS weight for height reference was truncated at age 10 in girls and 11.5 in boys.14 The weight/height^p index can alternatively be adjusted for height for age, where p is chosen to make the index uncorrelated with height among children at each age. This leads to a different pattern of p changing with age, with p=2 in infancy, rising to 3 in adolescence and then falling back to 2 in adulthood.19 21 22 23 Cole suggested fixing p at 2—that is, the body mass index (BMI).21 This is now used throughout infancy, childhood, adolescence, and adulthood. BMI has been used since the 1960s to assess obesity in adults24 25 and more recently in children.26 27 Many countries now have their own national reference centile charts for BMI for age.28 29 30 International BMI cut offs for child overweight and obesity, based on data from six countries, have been developed.31 The WHO 1995 expert committee16 endorsed the use of BMI for assessing thinness in adolescence, based on the BMI reference data from Must et al,32 and the recently published 2006 WHO growth standard also includes BMI for children aged 0-5 years.33 However, this is insufficient for international use because the BMI cut offs from Must et al were based on US data from the early 1970s and the WHO standard is restricted in age. Thus there are no valid BMI cut offs for assessing underweight or wasting in adolescents or children over 5 years. The international BMI cut offs for child overweight and obesity cover the age range 2-18 years and are based on the adult cut offs of 25 and 30 at 18 years.31 They have been widely used, with over 1100 citations in the seven years since publication. It would be logical to produce BMI cut offs for underweight using the same principle. However, underweight does not have the same meaning in adults and children. In adults, underweight or thinness indicates low BMI, whereas in children underweight is low weight for age and wasting is low weight for height.16 We have extended the adult term of thinness to children, meaning low BMI for age. Subjects and data We used the same methods as those used by the International Obesity TaskForce (IOTF) for the international overweight and obesity cut offs.31 We obtained BMI data from nationally representative surveys of children in six high and middle income countries: Brazil, Great Britain, Hong Kong, the Netherlands, Singapore, and the United States (table 1)⇓.31 Each survey had over 20000 subjects aged 6-18 years, and height and weight were measured with standard methods and quality control measures to minimise measurement error. Four of the datasets came from one-off surveys, while the British and US data were pooled from surveys collected over a period of time. The US data came from the national health examination surveys II and III, and the national health and nutrition examination surveys (NHANES) I and II, while for comparison Must et al used NHANES I data for their BMI reference.32 The Brazilian and US surveys used multi-stage sampling designs, and their data were analysed with survey weights. A total of 192727 subjects were involved, 97876 males and 94851 females from birth to 25 years (table 1).⇓ LMS method We analysed each dataset using the LMS method, which summarises the distribution of BMI by age and sex in terms of three curves called L (lambda), M (mu) and S (sigma).40 The M curve is median BMI by age, the S curve is the coefficient of variation of BMI, and the L curve expresses the skewness of the BMI distribution in terms of the Box-Cox power needed to transform the data to near normality. Any required BMI centile curve is defined in terms of the L, M, and S curves as follows: where z is the z score corresponding to the required centile (for example, z=0 gives the median M or z=0.67 gives the 75th centile) and the values of L, M, and S vary with age and sex. The reverse process, of converting a child's BMI to a z score, involves the equation: z=((BMI/M)^L −1)/(L×S) where the values of L, M, and S are for the child's age and sex. Note that the ratio BMI/M in the second equation, multiplied by 100, corresponds to BMI expressed as a percentage of the median (BMI%). So BMI% and z are linked in a way that depends on the variability S and skewness L, which in turn depend on age. Conventionally a BMI centile chart is based on a prespecified set of centiles (for example, 3rd, 10th, 25th, 50th, 75th, 90th, 97th)30 or z scores (−2 to +2 in increments of two thirds of a z score). 41 Here by contrast, quasi-centile curves are constructed to pass through a given BMI cut off at a given age (we chose 18 as it was the oldest age with data available in all six datasets). To do this the required BMI is substituted into the second equation and the corresponding z score obtained, by using L, M, and S values by sex for age 18 specific for the dataset. This z score is then substituted into the first equation and defines the required curve by age. We constructed centile curves of this form for each of the six datasets separately and then averaged the curves by age. The result is a single curve, based on all six datasets, that passes through the specified cut off at age 18. This exercise was repeated for each sex and for each of several distinct BMI cut offs at age 18. Choice of cut offs at age 18 The international cut offs for overweight and obesity were based on the widely accepted adult BMI values of 25 and 30.31 These values are related to health, indicating points on the BMI spectrum where risk increases appreciably, and are widely used.25 Health related cut offs for thinness in adults also exist, but there is less consensus in their use. WHO defines thinness grades 1, 2, and 3 as BMI below 18.5, 17, and 1616 42; the malnutrition universal screening tool of the British Association for Parental and Enteral Nutrition (BAPEN) scores 1 and 2 for BMI below 20 and 18.5, respectively43; and the WHO ICD-10 criteria for anorexia nervosa include BMI below 17.5 or weight below 85% of expected weight for height.44 45 In children, the diagnostic criteria for anorexia nervosa use BMI below the 5th or 10th centile, corresponding to −1.6 or −1.3 SD (z scores), to define underweight,46 47 while the criteria for malnutrition, based on weight for height rather than BMI, use the graded WHO cut offs of −1, −2, and −3 SD, corresponding roughly to 90%, 80%, and 70% of expected weight for height.16 48 Anomalously the WHO Expert Committee16 defined thinness in adolescence as BMI below the 5th centile rather than below −2 SD, probably because the NHANES I reference did not provide −2 SD cut offs. At age 18 the 5th centiles in Must et al were 17.5 for males and 16.7 for females, reflecting US youth in the early 1970s.32 An important question here is which cut off is the more appropriate, the 5th centile or −2 SD. WHO recommended the −2 SD criterion back in 1977,13 while the 5th centile was a pragmatic alternative at a time when a −2 SD BMI cut off was not available. For this reason we feel that −2 SD is the more appropriate cut off to use. On this basis, the simplest way to transfer the child cut offs from weight for height to BMI is to treat the two −2 SD cut offs as equivalent. Weight for height is weight adjusted for height while BMI for age is weight adjusted for height and age. So if weight for height were independent of age, as it is at certain ages,18 then the two cut offs would coincide. At other ages the variability in BMI is theoretically slightly less than for weight for height, as variability caused by age is adjusted for. Against that, the height adjustment for BMI is imperfect later in childhood,19 so on balance the variability is likely to be similar for the two indices. Thus the optimal cut off for our purposes would be a value of BMI at age 18 that coincided with a previously published adult cut off and which was also close to a child BMI cut off of −2 SD. But this introduces ambiguity as the z score corresponding to a given cut off will depend on the growth reference used. Here we use the six datasets as internal references to test the alternative cut offs. We also investigate the relation between z score and BMI%. Table 2⇓ gives BMI z scores and centiles corresponding to various published BMI cut offs at age 18, averaged across the six datasets, where the centiles correspond to the sex averaged z scores. In general the results are similar for boys and girls, and the cut offs range from the 0.6th to the 16th centile. BMI 18.5 is on the 16th centile and approximates to a z score of −1, while BMI 17 is on the 3rd centile and close to z score −2, and hence is near optimal for our purposes. Table 3⇓ looks at the BMI cut off of 17 in z score terms by dataset. The four Western countries are close to z score −2.0 in females and −2.1 in males, while the data from Hong Kong and Singapore are near to −1.4. The centiles indicate the prevalence of thinness in each country at age 18 when the survey was done, at which time the East Asian children were appreciably thinner. Figure 1⇓ shows the separate thinness curves for BMI 17 at age 18 by country and sex. Within each graph the country curves are largely superimposed and more so for girls than boys. Looking at the individual countries, Brazil is relatively low in both sexes while Hong Kong is high in boys, and for Singapore the boys' curve stands out at age 6. The BMI cut off of 17 is not only near to z score −2, it is also the WHO definition of thinness grade 2 in adults. Thus the WHO classification provides a bridge between child and adult, in that a young person with BMI 17 at age 18 is both a borderline thin adult (grade 2) and a borderline thin child (z score −2). For this reason we propose to use the cut off of 17 plus the other two WHO cut offs of 18.5 and 16 as the basis for our classification. Figure 2⇓ shows the thinness curves by country for BMI 18.5 at age 18, where the agreement between datasets is closer than for figure 1. Singapore is again anomalous at age 6, notably in the boys, probably because of the absence of data below this age. Figure 3⇓ shows the same curves for a cut off of BMI 16, where the agreement between countries is noticeably poorer, particularly in the boys, reflecting the greater extrapolation into the tails of the BMI distributions. Figure 4⇓ shows the composite curves for cut offs 16, 17, and 18.5, obtained by averaging the individual curves in figures 1, 2, and 3. To avoid a discontinuity at age 6 we smoothed the mean values with and without Singapore between ages 6 and 8. Table 4⇓ gives the values of the curves by exact half year from 2 to 18 years, and values for intermediate ages can be obtained by interpolation. Table 5⇓ shows the relation between BMI% and BMI z score at different ages, averaged across the datasets by sex, where the centiles correspond to the sex averaged z scores. Up to 6 years a z score of −2 corresponds to BMI 85% of the median, while from 14 years the same z score matches BMI 80%. This shift with age is caused largely by the sharp increase in variability in BMI that occurs between 6 and 12 years. The plot of the coefficient of variation of BMI (the S curve) against age by country31 shows it clearly, where all six countries follow the same pattern of an early plateau then a rise, then a later plateau. We propose that a BMI of 17 at age 18 is a suitable cut off to use as the basis for an international definition of thinness in children and adolescents. Three different criteria lead to this conclusion: BMI 17 is the WHO grade 2 cut off for thinness in adults16; BMI 17 at age 18 corresponds to a mean z score of −2 using our data (table 2); and, again with our data, BMI 17 at age 18 is 80% of the median (table 5).⇑ The latter two criteria mean that in childhood the new cut off will be similar in z score and % of the median terms to those used before, notably the WHO definition of wasting—that is, weight for height below −2 SD or 80% of the median. WHO defines thinness in adolescents16 as BMI below the smoothed 5th centile for age from Must et al cut offs that at age 18 are 17.5 for males and 16.7 for females.32 For comparison a cut off of 17 applied to our US data in table 3 (four surveys including NHANES I) corresponds to the 1st centile in boys and 2nd centile in girls. Using a cut off nearer the 2nd than the 5th centile seems reasonable in that WHO, which has always used a −2 SD cut off, opted for the 5th BMI centile of the Must et al reference only because there was no alternative. Using 17 as the cut off would unify the two WHO definitions of thinness, for adults and adolescents, while extending its use to children too. We have tried to avoid potential confusion between the terms “wasting” and “underweight” in children by adopting the term “thinness,” which WHO uses to mean low BMI in adults and adolescents. We extend the definition to include low BMI for age in children, linked to the adult definition through the fulcrum of BMI 17 at age 18. It is important to recognise, however, that thinness is not simply the opposite of fatness—a low BMI is more strongly correlated with lean mass than fat mass.49 Pelletier and Frongillo emphasise that most mortality related to malnutrition occurs with mild or moderate malnutrition3 so there is a need to distinguish between grades of malnutrition. In addition to our primary cut off of 17 we propose two secondary cut offs: 18.5, long used by WHO in adult studies42 and for grade 1 thinness,16 and 16, used for grade 3 thinness. Thus our three cut offs correspond to the WHO graded definition of thinness. Surprisingly, given its key role in the assessment of malnutrition, weight for height is poorer than weight for age or mid-arm circumference for predicting mortality.2 Pelletier's review summarises eight studies that compare anthropometric indicators for predicting mortality and shows that weight for height is consistently the least effective.2 Pelletier suggests that increased measurement error may explain this, but other possibilities are the use of weight for height rather than BMI and unsuitable cut offs. The three BMI cut offs proposed here allow this to be tested. The recent publication of the WHO child growth standard33 is likely in time to have a major impact on the growth assessment of young children. The centiles on the WHO BMI chart overlap with our proposed cut offs between 2 and 5 years, and figure 5 shows how the two compare (including the BMI 25 and 30 cut offs31).⇑ The BMI 17 cut off lies between the WHO −1 and −2 SD curves and corresponds to the 5-7th WHO centile, somewhat higher than the 3rd centile seen in table 2.⇑ This reflects the fact that the −2 SD curve is lower in the WHO standard than in our data (table 1)⇑ and other references such as the CDC 2000 reference.50 51 Thus far there is no advice from WHO about how to use the BMI chart for assessment of malnutrition. Limitations and strengths The key assumption of our analysis is that the cut offs have the same meaning irrespective of age, sex, and country. This is inevitably a simplification of a complex situation. The choice of 18 as the crossover age between child and adult is not ideal as BMI increases after this age faster in males than females. Age 20 would have been better, but some of our datasets lacked data at that age. In the datasets extending to age 20, BMI 17 at age 18 corresponded to BMI 17.7 in males and 17.2 in females at age 20 (fig 1),⇑ a slight male excess.52 Adjustment for this would make the cut offs for the two sexes closer together at young ages but further apart after age 10. Overall the male cut offs are slightly more extreme (table 2).⇑ Also the two East Asian countries (Hong Kong and Singapore) have appreciably higher prevalences of thinness than the other countries (table 3), which arise from their greater variability in BMI.31⇑ The lack of an adjustment for puberty is another limitation of the cut offs. BMI is known to be higher in more mature individuals of the same age,8 53 and delayed puberty is associated with thinness, 54 which an adjustment for pubertal stage might avoid. Such an adjustment is statistically complex but should be considered in the application of the cut offs to populations with delayed puberty, just as with linear growth. Finally, BMI is based on weight and does not differentiate between fat mass and lean mass, therefore it is an imperfect measure of either adiposity or leanness. In children it correlates with fat mass more strongly at the upper end of the adiposity spectrum (where fat mass makes up a larger proportion of weight) than at the lower end.49 So in thin children BMI is a better predictor of lean mass than fat mass. We believe that none of these differences invalidates the underlying principle of the cut offs, which is to provide a simple yet “good enough” tool to compare prevalences across populations that are inevitably heterogeneous. As with any screening tool its sensitivity and specificity need testing in the field. The main strength of the cut offs is their ability to compare rates of prevalence of thinness across countries, regions, and time. The cut offs avoid the conventional concept of a reference population in that they include data from several disparate populations, so they are at the same time representative of several countries and of none. This duality increases the perceived generalisability of the cut offs, even though they clearly cannot be universally representative. Instead a fixed BMI in adulthood acts as the reference point. A side effect of this is that because there is no reference, there are also no underlying z scores—individuals can be classified only relative to the cut offs in figure 4 and table 4.⇑⇑ Also, the cut offs are resilient to the possible addition of other datasets to the reference because of the way they are constructed. Adding, for example, an African dataset would have no effect on the cut offs at age 18 because of the standardisation to that age and only a modest effect at other ages because of dilution with the existing datasets, depending of course on the shape of the new BMI centile curve compared with the existing cut offs. The cut offs provide a classification of thinness for public health purposes, while BMI centiles have a valuable role to play in the clinical management of individuals, where the changes in BMI over time can be judged relative to the BMI centile chart. They are two different but complementary ways of assessing BMI. Implications for practice and policy We have previously described the international cut offs for overweight and obesity31 and discussed several issues about their use and interpretation that are just as relevant for cut offs for thinness. One issue not discussed there is the best way to report prevalence rates. The category of overweight, for example, can be defined either as the proportion of children with BMI beyond the overweight cut off or the proportion with BMI between the overweight and obesity cut offs. So the overweight group either does or does not include the obese group, and quite often papers fail to indicate which definition was used, beyond or between. Our preference is between, so that, for example, grade 1 thinness implies a BMI between the 17 and <18.5 cut offs. To this end we have developed a Microsoft Excel add-in module called lmsGrowth,55 which (inter alia) converts BMI to an ordered grade by interpolating to the child's exact age. The module codes normal weight (between the 18.5 and <25 cut offs) as 0 and overweight (25 to <30) and obesity (≥30) as +1 and +2, respectively, while thinness grades 1, 2, and 3 are coded as −1 (17 to <18.5), −2 (16 to <17), and −3 (<16). Finally, we emphasise that these cut offs need to be tested against new data; they are offered as a way forward and not as a definitive statement. But we hope they will prove helpful in providing a unified definition of thinness in children and adolescents based on thinness in adults. They can also be used in conjunction with the corresponding international definition of overweight and obesity. What is already known on this topic • Malnutrition in children and adolescents is a serious public health concern • It is better assessed as thinness (low body mass index for age) than as wasting (low weight for height) • There are no suitable thinness cut offs for this age group What this study adds • A new graded definition of thinness in childhood and adolescence is proposed, based on pooled international data for BMI and linked to the WHO recommended adult cut off points of 16, 17, and 18.5 at age 18 • The thinness cut off linked to 17 is close to the wasting cut off based on −2 z scores • The new definitions should encourage direct comparison of trends in child and adolescent thinness worldwide • We thank Carlos Monteiro (Brazil), Sophie Leung (Hong Kong), Machteld Roede (Netherlands), and Uma Rajan (Singapore) for allowing us access to their data. • Contributors: TJC and DN had the original idea. TJC did most of the statistical analyses, wrote the first draft, and is guarantor. KMF did further analyses of the US data. DN provided expertise on eating disorders and AAJ provided expertise on malnutrition. All authors participated in the discussion and interpretation of the results and contributed to the final paper. • Funding: Research at the UCL Institute of Child Health and Great Ormond Street Hospital for Children NHS Trust benefits from funding from the NHS Executive. TJC is supported by a Medical Research Council programme grant. • Competing interests: None declared. • Ethical approval: Not required. View Abstract
{"url":"https://www.bmj.com/content/335/7612/194?ijkey=1e76341931f259ff6d48b8d4a5aa57c14e84d27b&keytype2=tf_ipsecsha","timestamp":"2024-11-14T11:02:36Z","content_type":"text/html","content_length":"238647","record_id":"<urn:uuid:bf7d93ea-6971-44a0-92cd-58f80baa8cb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00532.warc.gz"}
Eberly College of Science Bachelor of Science in Mathematics Options for the Bachelor of Science Degree Actuarial Option The goal of this option is to train students to enter the actuarial science profession. Actuaries are in great demand in the insurance and other related businesses. The courses required in this option are intended to prepare students to pass one or more of a sequence of demanding examinations administered by a national organization, the Society of Actuaries. The key mathematical areas are probability, statistics, and advanced courses in insurance and in operations research. Applied and Industrial Option The goal of this option is to train students in the areas of applied and industrial mathematics. This option will prepare students to use mathematics to solve problems arising in industry and will also prepare students for graduate study in applied mathematics. The main mathematical tools needed are analysis, differential equations, numerical analysis and computing, probability and statistics, matrix theory, and mathematical modeling. Computational Option The goal of this option is to train students in the areas of mathematics most relevant to computers. These include the mathematical tools needed for analyzing algorithms as well as those mathematical problem-solving methods that can be implemented on computers. The main areas needed are numerical analysis, matrix theory, differential equations, statistics, combinatorics, and linear programming. General Option The goal of this option is to provide a way for students to construct, within limits, their own curricula. It thus allows room in the major for a student interested in an unusual area of application or one with an unusual range of interests within mathematics. The option requires at least one course at the 400 level in each of the areas of analysis, algebra, and applied mathematics. An approved sequence of 12 credits is required which consists of courses in an area related to mathematics. Graduate Option This goal of this option is to prepare students for graduate study in mathematics. The required courses include those that graduate mathematics departments normally expect their incoming students to have completed Systems Analysis Option The goal of this option is to train students to apply mathematics toward the solution of problems in business, economics, and the social and behavioral sciences. The main mathematical tools needed are matrix theory, linear programming, and statistics. An approved sequence of 12 credits is required which consists of courses in an area related to mathematics. Information for All Options General Education Requirements
{"url":"https://science.psu.edu/math/undergraduate/math-major/bs","timestamp":"2024-11-05T09:51:19Z","content_type":"text/html","content_length":"112923","record_id":"<urn:uuid:341335b3-abfe-4e7c-bfcc-82127759f50f>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00024.warc.gz"}
Max Klimm (Technische Universität Berlin): Complexity and Parametric Computation of Equilibria in Atomic Splittable Congestion Games Lecture by Max Klimm (Technische Universität Berlin): Complexity and Parametric Computation of Equilibria in Atomic Splittable Congestion Games We settle the complexity of computing an equilibrium in atomic splittable congestion games with player-specific affine cost functions as we show that the computation is PPAD-complete. To prove that the problem is contained in PPAD, we develop a homotopy method that traces an equilibrium for varying flow demands of the players. A key technique for this method is to describe the evolution of the equilibrium locally by a novel block Laplacian matrix where each entry of the Laplacian is a Laplacian again. These insights give rise to a path following formulation eventually putting the problem into PPAD. For the PPAD—hardness, we reduce from computing an approximate equilibrium for bimatrix win-lose games. As a byproduct of our analyse, we obtain that also computing a multi-class Wardrop equilibrium with class-dependent affine cost functions is PPAD-complete as well. As another byproduct, we obtain an algorithm that computes a continuum of equilibria parametrised by the players’ flow demand. For games with player-independent costs, this yields an output-polynomial algorithm. (Joint work with Philipp Warode) Time & Location May 10, 2021 | 02:15 PM
{"url":"http://www.facetsofcomplexity.de/monday/20210510-L-Klimm.html","timestamp":"2024-11-07T07:29:33Z","content_type":"text/html","content_length":"16676","record_id":"<urn:uuid:21d89db8-0c0b-4b3f-8ae3-fa222599f6f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00608.warc.gz"}
Highlight Rows with 'No' in Column C - Excel Formula for Python In this tutorial, we will learn how to write an Excel formula in Python that highlights rows with 'no' in column C. This formula can be useful when working with data in Excel and you want to visually identify rows that meet a certain condition. We will use the IF function in Excel to check if the value in column C is 'no' and return TRUE if it is, indicating that the row should be highlighted. If the value is not 'no', the formula will return FALSE, indicating that the row should not be highlighted. To implement this formula, we will use Python's pandas library, which provides powerful data manipulation and analysis tools. We will create a new column in the DataFrame called 'Highlight' and use the apply method to apply the formula to each row. Finally, we will use conditional formatting to highlight the rows where the 'Highlight' column is TRUE. Let's look at an example to better understand how the formula works. Suppose we have a DataFrame with a column C containing the following values: Applying the formula to each row would result in a new column 'Highlight' with the following values: C Highlight yes FALSE no TRUE no TRUE yes FALSE no TRUE yes FALSE Based on the 'Highlight' column, we can apply conditional formatting to highlight the rows where the value is TRUE, indicating that the value in column C is 'no'. In conclusion, by using the Excel formula =IF($C1='no', TRUE, FALSE) in Python, we can easily highlight rows with 'no' in column C. This can be done using the pandas library and applying the formula to each row of the DataFrame. Conditional formatting can then be used to visually highlight the rows that meet the condition. This technique can be helpful when working with large datasets and wanting to quickly identify specific rows based on a certain criteria. An Excel formula =IF($C1="no", TRUE, FALSE) Formula Explanation This formula uses the IF function to check if the value in column C is "no". If the value is "no", it returns TRUE, indicating that the row should be highlighted. If the value is not "no", it returns FALSE, indicating that the row should not be highlighted. Step-by-step explanation 1. The formula starts with the IF function, which has three arguments: the logical test, the value if true, and the value if false. 2. The logical test is $C1="no", where $C1 represents the value in column C of the current row. The dollar sign ($) before the C locks the column reference, so when the formula is copied to other cells, the column reference remains the same. 3. If the logical test is true (i.e., the value in column C is "no"), the formula returns TRUE. 4. If the logical test is false (i.e., the value in column C is not "no"), the formula returns FALSE. 5. The result of the formula is used to determine whether to highlight the row or not. This can be achieved by applying conditional formatting to the range of cells where the formula is applied. For example, if we have the following data in column C: | C | | | | yes | | no | | no | | yes | | no | | yes | Applying the formula =IF($C1="no", TRUE, FALSE) to each row would result in the following: | C | Result | | | | | yes | FALSE | | no | TRUE | | no | TRUE | | yes | FALSE | | no | TRUE | | yes | FALSE | Based on the result, you can apply conditional formatting to highlight the rows where the result is TRUE, indicating that the value in column C is "no".
{"url":"https://codepal.ai/excel-formula-generator/query/5v6tJJMV/excel-formula-highlight-rows-with-no-in-column-c","timestamp":"2024-11-08T11:38:49Z","content_type":"text/html","content_length":"98759","record_id":"<urn:uuid:0a35a52f-6efa-4c48-8c07-7ed358abb2b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00477.warc.gz"}
Simulation and Report hi all, I'm new and i'm learning Scan & solve, I have to do a simulation of a solar panel under stress to see if it breaks or not. In my model I put a force directed toward the outer edge (-z). 1) How do I see if the panel will break with a load applied? In my report (attached) what it means to: Minimum Maximum Danger Level (von Mises) 4.07145e-05 Criterion Limit Exceeded Danger Level (Max. Shear Stress) 8.94331e-05 Criterion Limit Exceeded Danger Level (Rankine) 1.8392e-05 Criterion Limit Exceeded Danger Level (Coulomb Mohr) 3.11072e-05 Criterion Limit Exceeded Danger Level (Modified Mohr) 1.8392e-05 Criterion Limit Exceeded
{"url":"https://www.scan-and-solve.com/forum/topics/simulation-and-report?page=1&commentId=6083097%3AComment%3A16702&x=1#6083097Comment16702","timestamp":"2024-11-11T11:51:02Z","content_type":"text/html","content_length":"53617","record_id":"<urn:uuid:c07e6311-889c-4534-87ea-ac24bf9256fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00717.warc.gz"}
Rounding to the Nearest Hundred Million Calculator - Online Tool 1. How to use rounding to the nearest hundred million calculator? Simply give the value in the input field and select round to hundred million and click the calculate button and get the answer instantaneously. 2. What is the best tool to round the nearest hundred million? The best tool to round off the number to the nearest hundred million is Rounding to the Nearest Hundred Million Calculator available in Roundingcalculator.guru. 3. Round this number 38736363663 to the nearest of hundred million? 38736363663 rounding to the nearest hundred million is 38700000000.
{"url":"https://roundingcalculator.guru/rounding-to-the-nearest-hundred-million-calculator/","timestamp":"2024-11-10T17:19:40Z","content_type":"text/html","content_length":"43139","record_id":"<urn:uuid:7b7356aa-1acf-4e46-8d5a-9ff6e9aac2c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00847.warc.gz"}
Power flow calculations using Inverse Dynamics Dear Biomch-L readers, I have completed many 3D multi-segment rigid body models of the human body to calculate the joint (F*v) and muscle powers (M*w) during human movement. I am familiar with Winter's work who emphasised the usefulness of this approach in biomechanics. He believed that a typical power analysis would show the distinct patterns of energy generation and absorption by the muscles, and this could have significant implications for training and conditioning. Winter's calculations are relatively simple. In his book "Biomechanics and Motor Control of Human Movement (2nd Ed, 1990), he intially stated that muscle power is the produce to net muscle moment (M) and angular velocity (w) yielding the formula P = Mw, where P is power in watts. This power could be therefore positive or negative depending on whether the muscles were performing a positive rate of change or negative change of work. Hence the terms muscle power generation and muscle power absorption were conceptualised mechanically. A little later on Winter says that the aforementioned formula should be modified to include the angular velocities of the adjacent segments in order to partition the transfer component so that w is repaced by (w1 - w2) and the muscle power equation now becomes P=M(w1-w2), where if w1 and w2 have the same polarity, the rate of transfer is the lesser of the two power components. In Zajac et al. (2002). Biomechanics and muscle coordination of human walking. Part 1: Introduction to concepts, power transfer, dynamics and simulations. Gait and Posture 16 (2002), 215-232, states the following: (i) any one muscle may effect the acceleration and power of ALL body segments because of dynamic coupling. (ii)the net power instantaneously delivered by a muscle to either the segment of origin or insertion must be found from COUPLED EQUATIONS OF MOTION and cannot be found from the dot product of its force vector at the origin (insertion) with the velocity vector of the origin (insertion) OR from the dot product of the net joint moment vector with the segment angular velocity vector. The reason is that the effects of the contributions of net joint moment to the joint intersegmental forces and the muscle contributions to joint intersegmental forces are not (iii) It is often erroneously stated that or inferred that a muscle delivers power to or absorbs power from only the segments to which it attaches. (iv) This error (iii) seems to arise because of the lack of recognition that the terms in the coupled dynamic equations are correct for computing muscle power to the entire sustem, but incorrect when used separately to find the net contribution to the segments to which they attach. Therefore, I have the following questions that I would like resolved: (a) From the inverse dynamics solution of two or more coupled rigid bodies, what would the value of just the net joint torque multiplied the corresponding segment angular velocity compute (i.e. just P = Tw)? Anything meaningful? (b) If the net joint torque (from inverse dynamics) multiplied by the difference in angular velocities of the adjacent segments was calculated would this satifactorily give the values of THE TOTAL SUM OF active muscle power flows in or out of the segment (i.e. P=T(w2-w1)). Is this always necessary or could sometimes P=Tw used? (c) Can joint muscle power or absorption be calculated accurately using Winter's approach using the joint torques found from an inverse dynamics solution? Is this what Winter meant or did he mean as in (ii) above? Is the methodology for calculating power flows correct in Winter? (d) Can the power flow equations easily applied to the 3D case since power is a scalar quantity? (e) How should power flows into or out of a segment be described taking into account statement (iv) by Zajac et al. Does this mean that P=T(w2-w1) give the net joint power which represents the SUMMED power by the net joint moment to/from ALL the segments? Your replies would be greatly appreciated, and a summary of replies posted. Thank you. Rene Ferdinands Department of Physics & Electronic Engieering University of Waikato New Zealand To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl For information and archives: http://isb.ri.ccf.org/biomch-l
{"url":"https://biomch-l.isbweb.org/forum/biomch-l-forums/biomch-l-1988-2010/14403-power-flow-calculations-using-inverse-dynamics","timestamp":"2024-11-08T05:42:02Z","content_type":"application/xhtml+xml","content_length":"54430","record_id":"<urn:uuid:2a6384e0-fd78-40e3-9646-d026af13b343>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00181.warc.gz"}
Fraction Word Problems Worksheets - 15 Worksheets.com Fraction Word Problems Worksheets About These 15 Worksheets These worksheets will help students practice and improve their understanding of fractions by solving real-world problems and scenarios that involve fractional quantities. These worksheets provide context and relevance to fractions, a fundamental concept in mathematics. Fraction word problems require students to apply their knowledge of fractions, operations such as addition, subtraction, multiplication, and division of fractions, and problem-solving skills to solve practical problems. Fraction word problems cover a wide range of topics and complexity levels, making them suitable for different grade levels and skill levels. They can involve tasks such as adding and subtracting fractions to find quantities, multiplying fractions to calculate proportions, dividing fractions to distribute resources, or solving real-life situations involving fractions, percentages, and ratios. These problems encourage students to interpret written information, identify relevant fractional values, and perform the necessary operations to find solutions. By working through fraction word problems, students not only reinforce their understanding of fractions but also develop critical thinking and mathematical reasoning skills. They learn to break down complex problems into smaller, manageable parts, apply fraction concepts to real-world scenarios, and communicate their solutions effectively. Fraction word problem worksheets are a valuable tool for educators to assess students’ comprehension of fractions and their ability to apply these concepts in practical contexts. These worksheets can be used in the classroom, as homework assignments, or as part of standardized test preparation. They help students develop confidence in their ability to work with fractions and prepare them for more advanced mathematical concepts that build upon fraction skills. What Are Fraction Word Problems? Fraction word problems are mathematical problems that involve fractions in a real-life or contextual setting. In these problems, you are presented with a situation or scenario that requires you to work with fractions to find a solution. Fraction word problems are designed to test your understanding of fraction concepts and operations, as well as your ability to apply them in practical Fraction word problems can involve various operations, such as addition, subtraction, multiplication, and division of fractions. They may require you to find a fraction of a whole, compare fractions, convert between fractions and decimals, simplify fractions, or solve ratio and proportion problems. Here are a few examples of fraction word problems: Problem – Mary ate 3/4 of a pizza. If the pizza had 8 slices in total, how many slices did Mary eat? Solution – To find the number of slices Mary ate, you multiply the total number of slices by the fraction representing the portion she ate – 8 slices × 3/4 = 6 slices. Therefore, Mary ate 6 slices of Problem – John has 5/8 of a cup of flour, and the recipe requires 3/4 of a cup. How much more flour does John need? Solution – To find the additional amount of flour John needs, you subtract the amount he already has from the required amount – 3/4 – 5/8 = 6/8 – 5/8 = 1/8. Therefore, John needs an additional 1/8 of a cup of flour. Problem – In a bag of marbles, 3/5 are blue and the rest are red. If there are 20 blue marbles, how many marbles are in total? Solution – To find the total number of marbles, you can set up a proportion using the fraction of blue marbles – 3/5 = 20/x. Cross-multiplying, you get 3x = 100. Solving for x, you find x = 100/3. Therefore, there are approximately 33.33 marbles in total. When solving fraction word problems, it is important to read the problem carefully, identify the key information, and determine the appropriate operations or strategies to solve the problem. You may need to simplify fractions, find common denominators, convert between mixed numbers and improper fractions, or apply other fraction concepts based on the specific problem. When in the Real World Would We Need to Solve Fraction Word Problems? Cooking and Baking – Recipes often involve fractions, such as measuring ingredients in fractions of cups or using fractional proportions. You may need to adjust a recipe based on the number of servings or convert between different units of measurement using fractions. Shopping and Discounts – Fraction word problems can arise when calculating discounts, sales prices, or comparing prices. For example, determining the final price after a certain percentage discount involves working with fractions. Measurement and Construction – In fields such as carpentry, woodworking, or interior design, you may encounter fraction word problems when measuring and cutting materials. This can include dividing a length into equal fractional parts or calculating the amount of material needed for a project. Finance and Money Management – Fractional concepts are relevant when dealing with financial calculations. For instance, understanding interest rates, calculating percentages, or dividing expenses among multiple individuals or accounts involves working with fractions. Food and Nutrition – Fraction word problems can be found in nutritional calculations, such as determining the proportion of macronutrients in a meal or calculating calorie intake based on fractional serving sizes. Medicine and Health – Medical dosages are often expressed as fractions or ratios, requiring accurate understanding and calculation of fractional quantities. For example, calculating medication doses based on body weight or determining the correct dilution of a solution involves working with fractions. Manufacturing and Production – Fraction word problems are encountered in production processes, such as calculating yield, determining the proportion of defective products, or estimating resource Sports and Fitness – In sports and fitness contexts, fraction word problems may arise when calculating sports statistics, understanding percentages in game performances, or tracking progress based on fractional improvements. Travel and Navigation – Fraction word problems can be relevant in travel situations, such as calculating distances, travel times, or determining fractional parts of a trip. Art and Design – Fractional concepts are essential in art and design, such as dividing a canvas into fractional parts or calculating the proportions and dimensions of an artwork. What Types of Jobs Involve Fractions? Architects and Civil Engineers – Professionals in these fields often need to work with fractional measurements when designing structures, calculating dimensions, or dividing spaces. Chefs and Bakers – Culinary professionals frequently use fractions when measuring ingredients, adjusting recipes, or scaling up or down the quantity of a dish. Construction Workers and Contractors – Fractional measurements are essential in construction for tasks like cutting materials, determining proportions, or calculating dimensions for building Pharmacists and Pharmacy Technicians – In the pharmaceutical industry, fractions are used to measure and compound medications accurately, especially when dealing with precise dosages. Tailors and Fashion Designers – Professionals in the fashion industry work with fractions to measure and adjust patterns, calculate fabric quantities, and ensure precise garment fitting. Woodworkers and Cabinetmakers – Fractional measurements are crucial in woodworking for tasks such as cutting lumber, joining pieces, or creating precise dimensions for furniture and cabinets. Electricians and Plumbers – Professionals in these trades use fractions when working with pipes, wiring, or electrical circuits, requiring accurate measurements and calculations. Machinists and Manufacturing Technicians – Precision manufacturing often involves working with fractions to measure tolerances, set machine parameters, or determine part dimensions. Landscapers and Gardeners – Fractional calculations are necessary for tasks like measuring areas, determining seed or plant quantities, or adjusting irrigation systems. Quantity Surveyors – Quantity surveyors estimate and calculate materials, costs, and quantities in construction projects, which often involve working with fractions. Financial Analysts and Accountants – Fractional calculations are common in finance and accounting, such as calculating interest rates, analyzing financial ratios, or determining percentage changes. Math Teachers and Tutors – Educators who teach mathematics, especially at the elementary or middle school level, help students understand and work with fractions.
{"url":"https://15worksheets.com/worksheet-category/fraction/","timestamp":"2024-11-08T02:56:25Z","content_type":"text/html","content_length":"135603","record_id":"<urn:uuid:b455f8ea-e721-49b3-9852-b34d78bc1c17>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00722.warc.gz"}
Franca, Andre (2016): Quantum many-body effects in gravity and Bosonic theories. Dissertation, LMU München: Faculty of Physics Preview 1MB Many-body quantum effects play a crucial role in many domains of physics, from condensed matter to black-hole evaporation. The fundamental interest and difficulty in studying this class of systems is the fact that their effective coupling constant become rescaled by the number of particles involved $g= \alpha N$, and thus we observe a breakdown of perturbation theory even for small values of the $\ttt$ coupling constant. We will study three very different systems which share the property that their behaviour is dominated by non-perturbative effects. \\ The strong CP problem - the problem of why the $\theta$ angle of QCD is so small - can be solved by the Peccei-Quinn mechanism, which promotes the $\theta$ angle to a physical particle, the axion. The essence of the PQ mechanism is that the coupling will generate a mass gap, and thus the expectation value of the axion will vanish at the vacuum. It has been suggested that topological effects in gravity can spoil the axion solution. By using the dual formulation of the Peccei-Quinn mechanism, we are able to show that even in the presence of such dangerous contributions from gravity, the presence of light neutrinos can stabilize the axion potential. This effect also puts an upper bound on the lightest neutrino mass.\\ We know that at high energies, gravitational scattering is dominated by black-hole formation. The typical size of black-holes is a growing function of the total center-of-mass energy involved in the scattering process. In the asymptotic future, these black-holes will decay into Hawking radiation, which has a typical wave-length of the size of the black-hole. Thus high energy gravitational scattering is dominated by low energy out states. It has been suggested that gravity is self-complete due to this effect, and that furthermore, there is a class of bosonic theories which can also be self-complete due to the formation of large classical field configurations: UV completion by Classicalization. \\ We explore the idea of Classicalization versus Wilsonian UV completion in derivatively coupled scalars. We seek to answer the following question: how does the theory decide which road to take at high energies? We find out that the information about the high energy behaviour of the theory is encoded in the sign of the quartic derivative coupling. There is one sign that allows for a consistent Wilsonian UV-completion, and another sign that contains continuous classical field configurations for localized sources. \\ In the third part of the thesis we explore non-perturbative properties of black holes. We consider the model proposed by Dvali and Gomez where black holes are described as Bose-Einstein condensates of $N$ gravitons. These gravitons are weakly interacting, however their collective coupling constant puts them exactly at the critical point of a quantum phase transition $\alpha N = 1$. We focus on a toy model which captures some of the features of information storage and processing of black holes. The carriers of information and entropy are the Bogoliubov modes, which we are able to map to pseuo-Goldstone bosons of a broken SU(2) symmetry. At the quantum phase transition this gap becomes $1/N$, which implies that the cost of information storage disappears in the $\Ninf$ limit. Furthermore, the storage capacity and lifetime of the modes increases with $N$, becoming infinite in the $\Ninf$ limit.\\ The attractive Bose gas which we considered is integrable in 1+1d. All the eigenstates of the system can be constructed using the Bethe ansatz, which transforms the Hamiltonian eigenvalue problem into a set of algebraic equations - the Bethe equations - for $N$ parameters which play the role of generalize momenta. While the ground state and excitation spectrum are known in the repulsive regime, in the attractive case the system becomes more complicated due to the appearance of bound states. In order to solve the Bethe equations, we restrict ourselves to the $\Ninf$ limit and transform the algebraic equations into a constrained integral equation. By solving this integral equation, we are able to study the phase transition from the point of view of the Bethe ansatz. We observe that the phase transition happens precisely when the constraint is saturated, and manifests itself as a change in the functional form of the density of momenta. Furthermore, we are able to show that the ground state of this system can be mapped to the saddle-point equation of 2-dimensional Yang--Mills on a sphere, with a gauge group U(N). Item Type: Theses (Dissertation, LMU Munich) Subjects: 500 Natural sciences and mathematics 500 Natural sciences and mathematics > 530 Physics Faculties: Faculty of Physics Language: English Date of oral examination: 13. July 2016 1. Referee: Dvali, Georgi MD5 Checksum of the PDF-file: 822ef17bc7fd51285b825397ccd521c7 Signature of the printed copy: 0001/UMC 24344 ID Code: 19956 Deposited On: 22. Dec 2016 14:01 Last Modified: 23. Oct 2020 20:07
{"url":"https://edoc.ub.uni-muenchen.de/19956/","timestamp":"2024-11-04T14:20:58Z","content_type":"application/xhtml+xml","content_length":"38581","record_id":"<urn:uuid:9131ec1b-12dd-41d4-b3ba-cf99351b6b6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00819.warc.gz"}
PRE11-C. Do not conclude macro definitions with a semicolon Macros are frequently used to make source code more readable. Macro definitions, regardless of whether they expand to a single or multiple statements, should not conclude with a semicolon. (See PRE10-C. Wrap multistatement macros in a do-while loop.) If required, the semicolon should be included following the macro expansion. Inadvertently inserting a semicolon at the end of the macro definition can unexpectedly change the control flow of the program. Another way to avoid this problem is to prefer inline or static functions over function-like macros. (See also PRE00-C. Prefer inline or static functions to function-like macros.) In general, the programmer should ensure that there is no semicolon at the end of a macro definition. The responsibility for having a semicolon where needed during the use of such a macro should be delegated to the person invoking the macro. Noncompliant Code Example This noncompliant code example creates a macro definition for a for loop in the program. A for loop should require braces, even if it contains only a single body statement. (See EXP19-C. Use braces for the body of an if, for, or while statement.) This macro takes an integer argument, which is the number of times the loop should run. The programmer has inserted a semicolon at the end of the macro definition by mistake. #define FOR_LOOP(n) for(i=0; i<(n); i++); int i; puts("Inside for loop\n"); The programmer expects to get the following output from the code: Inside for loop Inside for loop Inside for loop But because of the semicolon at the end of the macro definition, the for loop in the program has a null statement, so the statement "Inside for loop" gets printed just once. Essentially, the semicolon at the end of the macro definition changes the program control flow. Although this example might not actually be used in code, it shows the effect a semicolon in a macro definition can have. Compliant Solution The compliant solution is to write the macro definition without the semicolon at the end, leaving the decision whether or not to have a semicolon up to the person who is using the macro: #define FOR_LOOP(n) for(i=0; i<(n); i++) int i; puts("Inside for loop\n"); Noncompliant Code Example In this noncompliant code example, the programmer defines a macro that increments the value of the first argument, x, by 1 and modulates it with the value of the second argument, max: #define INCREMOD(x, max) ((x) = ((x) + 1) % (max)); int index = 0; int value; value = INCREMOD(index, 10) + 2; /* ... */ In this case, the programmer intends to increment index and then use that as a value by adding 2 to it. Unfortunately, the value is equal to the incremented value of index because of the semicolon present at the end of the macro. The + 2; is treated as a separate statement by the compiler. The user will not get any compilation errors. If the user has not enabled warnings while compiling, the effect of the semicolon in the macro cannot be detected at an early stage. Compliant Solution The compliant solution is to write the macro definition without the semicolon at the end, leaving the decision whether or not to have a semicolon up to the person who is using the macro: #define INCREMOD(x, max) ((x) = ((x) + 1) % (max)) Compliant Solution This compliant solution uses an inline function as recommended by PRE00-C. Prefer inline or static functions to function-like macros. inline int incremod(int *x, int max) {*x = (*x + 1) % max;} Risk Assessment Using a semicolon at the end of a macro definition can result in the change of program control flow and thus unintended program behavior. Recommendation Severity Likelihood Remediation Cost Priority Level PRE11-C Medium Probable Low P12 L1 Automated Detection Related Vulnerabilities Search for vulnerabilities resulting from the violation of this rule on the CERT website. Related Guidelines 11 Comments Ruchi, just two comments: □ This rule is specifically about macros that contain single statements (not expressions or multi-statement blocks). It should say so, both in the title and discussion. □ The rule should acknowledge other rules (eg PRE00-C & PRE10-C) that discuss similar issues. I think "Mitigation Strategies" section is redundant and can be removed. The info is useful, but it technically is not a 'mitigation' (eg a workaround for insecure code), it is rather a design constraint (how to write code correctly). So I moved the paragraph to the intro and wordsmithed it a bit. This recommendation applies to multi-statement macros for the same reason it does to single-statement ones. Is there any reason not to extend it to include all macros? If not, what is the process for changing the title of a recommendation or rule? Thanks for changing the title, Martin. #define FOR_LOOP(n) for(i=0; i<(n); i++) int i; puts("Inside for loop\n"); IMHO this is a bad coding practice example. And giving this example, though it achieves right result, is defeating the purpose of secure coding practice. Loop in macros should be allowed when they do definite task e.g. #define CALL_func_n_TIMES(n, func) while(--n){ func(); } #define PRINT_this_n_TIMES(n, this) while(--n){ printf("%s\n", this); } Though, PRINT_this_n_TIMES is itself bad, since it's going to print only strings, macros should be generic. Also, in for example, I think you will need to supply 'i' too like FOR_LOOP(n, i), and call macro appropriately. Because, macro can be in entirely different header file, and in that case if you don't mention 'i' explicitly, you will be creating side-effects that you might not understand until your program crashes or does something wrong. Take case: a lot of declarations in some function int i = 0; // I am going to use this i for counting something. very important. // but I am not going to mention this in comments, because I know // what I am doing. ... // a lot of things you do and count. now, unfortunately you call FOR_LOOP(3) ... // and BOOM, you just changed the number of things I counted. now, i == 3. ... // keep on going doing things and counting and incrementing i. return i // perhaps. Make decision based on returned value. // KA-BOOM wrong output, though you did all the right things. So if you use i in FOR_LOOP(n, i) it makes more sense, as things will be explicit. The FOR_LOOP example isn't a very good one. It exists only to demonstrate the point about the terminating semicolon. Most (but not all) uses of macros can be replaced by better alternatives, and the FOR_LOOP macro is a case in point -- the for statement is perfectly adequate and would be far more appropriate. That said, making the loop control variable an argument to the macro would solve the problem you point out. An even better solution would be to declare the variable directly in the for statement, like so: #define FOR_LOOP(i, n) for (int i=0; i<(n); i++) The advantage of this approach is that it limits the scope of the variable to the for statement alone. #define INCREMENT(x, max) ((x) = ((x) + 1) % (max)) Shouldn't it be an inline function? Plus, this is not actual "increment". Yes, as the description of this guideline suggests: Another way to avoid this problem is to prefer inline or static functions over function-like macros. I've addressed these issues. It is possible to violate the recommendations (like PRE00-C) and still have secure code, so I did not change that compliant solution. But I did add one that uses an inline function, as Martin suggests.
{"url":"https://wiki.sei.cmu.edu/confluence/display/c/PRE11-C.+Do+not+conclude+macro+definitions+with+a+semicolon?focusedCommentId=88018653","timestamp":"2024-11-09T04:36:36Z","content_type":"text/html","content_length":"106885","record_id":"<urn:uuid:2137d3b2-9352-48ef-816c-cde42d572092>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00187.warc.gz"}
Chapter 1 : साझेदारी लेखांकन MP BOARD - RIARGO COMMERCE CLASSES MP Board Class 12th Rathi Book Solutions : Share Capital Course Description Lesson of Conetnt Course Description Course Description: Chapter 1 : Accouting for Partnership – Class 12th MP Board Welcome to the Accouting for Partnership Class 12th MP board Course , specifically designed for Class 12th students of the MP Board. This course is crafted to provide detailed solutions to the exercises and problems found in the Share Capital chapter of the Rathi textbook, enabling students to tackle exam questions with confidence and accuracy. Course Objectives: 1. Provide Comprehensive Solutions: Offer detailed, step-by-step solutions to all exercises and problems in the chapter of the textbook. 2. Enhance Problem-Solving Skills: Help students develop the skills to approach and solve complex share capital problems efficiently. 3. Prepare for Exams: Equip students with the knowledge and techniques to excel in their board exams by focusing on accurate and precise solutions. Course Highlights: – Detailed Solutions: Each problem from the chapter is solved in a clear and detailed manner, ensuring students understand each step. – Exam-Oriented Approach: Focus on solving problems in a way that aligns with exam expectations and scoring patterns. – Interactive Sessions: Regular interactive sessions where students can ask questions and clarify their doubts about specific solutions. – Regular Assessments: Periodic assessments to test understanding and retention of the problem-solving techniques covered. – Supplementary Resources: Access to additional practice problems, previous years’ solved papers, and model answers to enhance preparation. – Conceptual Notes – Practice Mannual Prepared by Subject Expert – Objective Video Lecture and Notes – Previous Year Examination Solution Video and Notes – Online MCQ Test Series for Self Evaluation Join MP Board exam Lesson of Conetnt
{"url":"https://riargo.com/courses/chapter-1-%e0%a4%b8%e0%a4%be%e0%a4%9d%e0%a5%87%e0%a4%a6%e0%a4%be%e0%a4%b0%e0%a5%80-%e0%a4%b2%e0%a5%87%e0%a4%96%e0%a4%be%e0%a4%82%e0%a4%95%e0%a4%a8-mp-board/","timestamp":"2024-11-07T02:32:49Z","content_type":"text/html","content_length":"440150","record_id":"<urn:uuid:d66bee9d-a567-4eca-a474-18da7c1d5de3>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00793.warc.gz"}
Sage Potash Downside Variance | SAGE.V V SAGE Stock 0.23 0.03 15.00% Sage Potash downside-variance technical analysis lookup allows you to check this and other technical indicators for Sage Potash Corp or any other equities. You can select from a set of available technical indicators by clicking on the link to the right. Please note, not all equities are covered by this module due to inconsistencies in global equity categorizations and data normalization technicques. Please check also Equity Screeners to view more equity screening tools Sage Potash Corp has current Downside Variance of 64.32. Downside Variance (or DV) is measured by target semi-variance and is termed downside volatility. It is expressed in percentages and therefore allows for rankings in the same way as variance. One way to view downside volatility is the annualized variance of returns below the target. SUM(RET DEV)^2 Downside Variance = = 64.32 SUM = Summation notation RET DEV = Actual returns deviation over selected period N(ER) = Number of points with returns less than expected return for the period Sage Potash Downside Variance Peers Comparison Sage Downside Variance Relative To Other Indicators Sage Potash Corp is currently regarded as top stock in downside variance category among its peers. It is currently under evaluation in maximum drawdown category among its peers reporting about of Maximum Drawdown per Downside Variance. The ratio of Maximum Drawdown to Downside Variance for Sage Potash Corp is roughly Downside Variance is the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures at an exponential rate. This is consistent with observations made on the behavior of individual decision-making under. Compare Sage Potash to Peers Predict Sage Potash Thematic Opportunities Explore Investment Opportunities
{"url":"https://www.macroaxis.com/invest/technicalIndicator/SAGE.V/Downside-Variance","timestamp":"2024-11-09T03:24:18Z","content_type":"text/html","content_length":"237092","record_id":"<urn:uuid:9a3ae95b-d58e-4f37-949d-9442cb0276f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00851.warc.gz"}
GMAT Math - What Kind of Math is in the Quantitative Section? This post was updated in 2024 for the new GMAT. What’s the biggest secret to GMAT math success? It’s simple! Identify and study the correct quantitative concepts, strategize for problem solving, and leave rote memorization at home. The GMAT Quantitative section gives you 45 minutes to answer 21 questions. It’s an adaptive test, meaning that correct answers lead to questions of increasing difficulty. Incorrect answers have the opposite effect. Don’t let that worry you though! This is just how the test finds your math ability level. Here’s a bit of relief! You’ll never encounter questions that require more than a basic high school understanding of quantitative concepts. Generally speaking, the GMAT Quant section tests your abilities to analyze and problem-solve rather than any advanced knowledge of mathematics. What kind of math is on the GMAT? There is only one type of GMAT math question: Problem Solving. Problem Solving questions are likely already very familiar to you. They require you to just work out the question and choose the correct final answer. The Three GMAT Math Areas The quantitative knowledge necessary to ace the GMAT consists of basic high school mathematics. • Arithmetic: Number sense, operations on numbers, etc. • Algebra: Basic manipulation of expressions and solving equations • Word Problems/Applications: Word problems test arithmetic or algebra. But, remember the emphasis is on critical reasoning and understanding how to apply what you know from other areas of math. Here is just a small sample of Magoosh video lessons with helpful GMAT Quant tips and strategies related to the four math areas: Curious about the hardest concepts and questions you’ll encounter on the GMAT? Check out this video! GMAT Math Tips and Quant Practice Problems Now let’s talk about what you can do to improve your GMAT Math score! Here are a few helpful GMAT quant tips, followed by practice problems and detailed solutions, to get you right on track toward a higher score. Tip #1 — Rely on your Critical Reasoning; Not Deep Knowledge GMAT Quant problems test your ability to analyze data and draw conclusions, not advanced mathematical ability. As a result, the test can actually be very challenging for high-achieving students. You may have progressed through Calculus and beyond, but if you don’t have enough practice solving logical puzzles or real-world problems, then you’ll need to study up! Tip #2 — Arithmetic Questions: Use Your Number Sense The key to solving quantitative arithmetic questions is to rely on your number sense and avoid common pitfalls. It takes 1 pound of flour to make \(y\) cakes. The price of flour is \(w\) dollars for \(x\) pounds. In terms of \(w\), \(x\) and \(y\), what is the dollar cost of the flour required to make 1 cake? Click here for the answer! This is a typical problem dealing with units and ratios. Let’s use our number sense to quickly tackle this one. First, the fact that the price of flour is \(w\) dollars per \(x\) pounds, means that whatever the final answer is, the \(w\) and \(x\) need to be on opposite parts of the fraction. That’s because \ (w\) per \(x\) means \(w/x\). So either that, or its reciprocal will be in your final answer. So that narrows it down to just two choices without much work! Either \(\frac{xy}{w}\) or \(\frac{w}{xy}\). Finally, the question is asking for the cost of making one cake. So let’s what happens if we allow \(y\) to vary. Suppose \(y\) is small, like \(y=1\). Then it takes a whole pound of flour to make just 1 cake. But if \(y\) is larger, say \(y=4\), then that same one pound of flour goes much further, bringing the overall cost down per cake. As \(y\) increases, the cost per cake has to decrease. That tells you immediately that \(y\) must be on the bottom of the fraction (in order to get that kind of inverse relationship). Answer: \(\frac{w}{xy}\) See, that wasn’t too hard, right? There are certainly other ways to work this kind of problem out. If you want to see more on this topic, here’s an excellent refresher for GMAT Quant: Rates and Tip #3 — Algebra Problems: Try Backsolving or Picking Numbers Common strategies for algebra problems include backsolving and picking numbers. These techniques make it possible to solve a problem without actually solving it. In other words, you can avoid some of the heavy lifting of algebra if you can leverage the answer choices to your favor. Backsolving works by using the answer choices to work backwards. Often this means plugging in each numerical answer choice into given equations, but it can also sometimes be useful when the answers themselves are equations. Line \(k\) is in the rectangular coordinate system. If the \(x\)-intercept of \(k\) is \(–2\), and the \(y\)-intercept is 3, which of the following is an equation of line \(k\)? Picking numbers is precisely that! It’s when you pick values for some or all of the variables in a problem, and work the problem with your choices. This often requires you to plug in your numbers into answer choices or Data Sufficiency statements to help eliminate choices. If \(3xm + 2ym − 2yn − 3xn = 0\) and \(m ≠ n\), then what is the value of \(y\) in terms of \(x\)? Click here for the answer! Want to avoid the algebra? Let’s pick some convenient numbers for the variables. Keep in mind that \(m \neq n\). So, let’s start with \(m=2\) and \(n=1\). Plugging those into the given equation, we \(6x + 4y – 2y – 3x = 0\), which simplifies to: \(3x + 2y = 0\) Now we could even plug in a number for \(x\) and work out \(y\) from that (to compare with the answer choices), but there’s no need on such a simple equation. \(2y = -3x \implies y = \frac{-3x}{2}\) Answer: \(-\frac{3x}{2}\) Tip #4 — Word Problems: Don’t Get Lost! Word problems tend to overlap with the other categories. These kinds of problems test your ability to assess a given situation, set up proper steps, choose the correct mathematical tools to solve the problem, and finally to obtain the best answer. It’s crucial that you don’t get lost. When you read a long word problem, jot some things down as you go. Pay attention to constants and constraints given in the problem. And identify your goal. When a large municipal water tank is empty, it takes a Type JQ pump, working alone, 72 hours to fill the tank, whereas as Type JT pump, working alone, would take only 18 hours to fill the tank completely. If the tank starts at half full, how long would it take two Type JQ pumps and a Type JT pump, all three pumps working together, to fill the tank? Click here for the answer! There’s a lot to keep track of here, and some info is just not that important. For example, you don’t need to know that one pump is a “JQ” and other other is a “JT” pump, just that there are two types and they run at different rates. They could have been called “A” and “B” or “1” and “2” for all we care. But it is a good idea to jot down “JQ” and “JT” on your scratch paper to start organizing the rest of the data. The JQ pump fills the tank in 72 hours. How much water is that? We don’t know. But you can say it’s 1 tank worth. So write “1 tank in 72 hrs.” in your JQ column. Similarly, put “1 tank in 18 hrs.” in your JT column. Now, it goes on to ask about filling up a half-full tank. So, alone the JQ would take 36 hours. But we have two JQ’s, which by themselves would cut that fill time to 18 hours. Finally, the trickiest part, what happens when you add in the JT? By itself, it takes 9 hours to fill half the tank. Let’s bring in our number sense. Every time unit, the JQ’s are going to fill only half as much water as the JT, because the JT is pumping twice as fast. When the tank fills, two-thirds of the water was pumped in by the JT, and only a third of it by the two JT pumps. So either way you look at it, 6 hours are needed — either one third of 18 hours, or 2/3 of 9 hours. Answer: 6 Struggling to finish the quantitative section within the time limit? Learn about GMAT Timing Strategy in our ultimate pacing guide! Wrapping it All Up So now you know what topics to expect on the GMAT Math section! A few final words of advice: Know your fundamentals. Don’t try to do everything in your head. Instead, write out your scratch work during the test. Lastly, be sure to get in plenty of practice, and learn from your mistakes. If you need to brush up on your math or strengthen your Quant skills, Magoosh GMAT can help! Get high-quality, affordable test prep with over 200 video lessons and 800 practice questions. Try us for free with a 1-week trial! Happy studying! Leave a Comment Please leave any questions or suggestions in the comments, we try our best to respond within a few days! Your email address will not be published. 7 responses to “GMAT Math – What Kind of Math is in the Quantitative Section?” Shouldn’t adding the frequency percentages of all the sections add up to 100? Based on this graph word problems, integer properties and arithmetic and algebra account for more than 100%. Is it because multiple concepts can be used in each question? Hi Sandeep, Yes, that’s exactly it! Many GMAT questions defy categorization into a single concept, but we hope that this chart helps students prioritize information in their studies. Great, unambiguous, and unapologetic information. Really helpful. Thanks! Could you break these down by the topic areas that are assigned to the Magoosh practice problems? When studying and doing practice problems it would be great to be able to focus on the high frequency, but the categories for selecting practice problems don’t match the above. Hi Garrett! I see that you’re a Premium student, so I’ve forwarded your message to our Test Prep Experts. You’ll receive an answer back about your question in a different message. Hi Prep Expert, could you send me this breakdown too to assist in preparation? Hi Amery! Although you can get a private email response from our Test Prep Experts on this, I thought I should give a public response as the author of this post. 🙂 Unfortunately, it’s not quite possible to select categories in Magoosh GAMT Custom Practice that focus on all of the specific topics in the breakdown above. One problem is that there is a lot of overlap between the different topics. A geometry problem can also test algebra, for instance. And of course, word problems can cover any of the other topics. This is why the percentage frequencies in the tables above add up to well over 100%. Because of this, custom practice simply can’t capture every overlapping category as its own discrete category for the purposes of setting up practice sessions, although we do have some categories that matchup correnctly (algebra, percents and ratios, probability, etc…). Now, here’s the good news: the overall mix of Magoosh GMAT questions does match the concept mix seen above. So if you yjust to a general custom practice session, in which you are doing all topics instead of requesting specific ones, you’ll get a set of questions that is in line with the breakdown above. (Provided the set isn’t too short. Obviously, if you do just five or 10 questions per session, you may or may not hit on all the major GMAT math
{"url":"https://magoosh.com/gmat/what-kind-of-math-is-on-the-gmat-breakdown-of-quant-concepts-by-frequency/","timestamp":"2024-11-06T20:42:03Z","content_type":"text/html","content_length":"227693","record_id":"<urn:uuid:2099439d-151d-40a5-a49f-789a9c96714f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00203.warc.gz"}
Memory and speed of processing in general gifted and excelling in mathematics students The present study examined the memory and speed of processing abilities associated with general giftedness (G) and excellence in mathematics (E). The research involved four groups of 16-18 years old participants varying in levels of G and E. 160 participants were tested on a battery of three memory tests and five speed of processing tests. Working-memory was found to be related to both G and E factors. The results reveal that G factor is related to high short term memory and that E factor is associated with high visual-spatial memory. Gifted students who excel in mathematics (G-E group) outperformed in all speed of processing tasks compared to the other three participants groups. The results can contribute to the theoretical knowledge about similarities and differences in memory and speed of processing abilities in G and E groups. Original language English Title of host publication The Proceedings of the Eighth Conference of the European Society for Research in Mathematics Education - CERME-8 Pages 1146-1155 State Published - 2013 Dive into the research topics of 'Memory and speed of processing in general gifted and excelling in mathematics students'. Together they form a unique fingerprint.
{"url":"https://cris.haifa.ac.il/en/publications/memory-and-speed-of-processing-in-general-gifted-and-excelling-in","timestamp":"2024-11-09T19:49:21Z","content_type":"text/html","content_length":"50389","record_id":"<urn:uuid:11f6c734-0074-4676-94ac-67a0fd93a8be>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00638.warc.gz"}
10 Best Math Puns That Will Make Your Students Laugh More students are exploring ways to make mathematical concepts fun for all ages. Make math interesting and fun by finding good math puns to share. Math doesn’t have to be a boring subject. There are many ways to incorporate math in a fun way including by telling jokes. You can find many puns online by kids and adults with many pretty hilarious. While students may hate math, using jokes may help others be comfortable with the concept. Here are some tips and suggests for good math puns to consider sharing with others along with tips on where to find appropriate options based on grade level by this service PimpMyPaper. Why Math Puns Are Fun As seen as another way to poke fun at math, using puns designed to look at math in a different way may encourage others to work through their struggles. There are different math puns shared by students in all academic levels from grade school through college. Sometimes people find them helpful when trying to remember how to solve a problem or how to check their answers. Many find it interesting to read related jokes just to see how people get creative with different math concepts. You may learn something new or different about how to approach problem solving when coming across the right joke. Other than that, they are fun to read and add humor to a subject known for its complexity. How the Best Math Puns Help You Remember Math Sometimes it’s the corny jokes that help jog your memory about certain math concepts. Some jokes just have that effect on people where they remember something funny or amusing while trying to solve a problem. Students often remember details of something when paired with humor or something important. The idea is to focus on the math concept with a portion of the joke standing out. List of Funny Math Puns Get an idea of what makes a math joke funny by reviewing different jokes available online. You may know someone that knows a few jokes offhand. You’ll learn there are different jokes for kids and adults while varying based on the type of math used to make things humorous. To get an idea of what laughs you can have with math here are a few jokes to consider. 1. Zero said to eight nice belt. 2. A roamin’ numeral is a number that can’t stay still. 3. Parallel lines have one thing in common; they never meet. 4. Count Dracula is probably the only monster good at math. 5. Woo a math teacher with an acute angle. 6. A mathematician will stop at nothing to avoid negative numbers. 7. Huddle in a 90 degree corner in a room to stay warm. 8. A farmer counted 296 chickens and rounded up with 300. 9. Math teachers enjoy parks because of their natural logs. 10. Use imaginary numbers to do math in your head. Sharing Math Jokes Puns with Others What are reasons why people like to share math jokes puns with other people? Sometimes they give others a good laugh when they need it. For others they can be good conversation starters to pass the time. You can cheer up a friend or help someone take their mind off of something with a good joke. Many like to share jokes online via social media or by text message. Many jokes about math are easy to remember making them something you can share fast and easy with others. Of course, that is if you find something you feel worthy of spending to encourage a good laugh. With math puns, funny situations help take the stress away from doing difficult forms of math. The right joke changes the mood and improves likelihood of understanding how to complete related math. Some seek jokes to get ideas for creating their own. Others may use them as a learning tool to help students understand related math concepts for better understanding. While there are many jokes about math to laugh at, the best ones vary based on interests of those that hear them often. Experiment with puns to see which options get the best laughs.
{"url":"https://www.clemsonemergingscholars.org/10-best-math-puns-that-will-make-your-students-laugh.html","timestamp":"2024-11-11T22:34:35Z","content_type":"text/html","content_length":"36361","record_id":"<urn:uuid:662df775-043a-4460-a30b-92b2828033eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00234.warc.gz"}
Cracking 256-bit RSA Keys - Surprisingly Simple! As it’s been making the rounds recently, I wanted to try my hand at cracking 256-bit RSA keys. Cracking 256-bit RSA – Introduction If you haven’t seen the video yet, Crown Sterling cracked a 256-bit RSA key in front of a live audience in 50 seconds. I wasn’t sure how impressive this was originally, and I wanted to try it out myself. For more information about RSA, and the math behind it, you can always check out the Wikipedia article. Additionally, for another example of the math behind RSA, and cracking it, I recommend the following post. All of that said, I’m no cryptographer, so this was more an attempt to see how easy it was for me to crack these keys. I’m in no way making any claims about anyone else’s research, or whether something is invalid or fake. This is also a fitting example of verifying claims yourself where possible though! Private Key Generation First, I generated a 256-bit RSA private key using OpenSSL root@kali:~/rsa256# openssl genrsa -out private.pem 256 Generating RSA private key, 256 bit long modulus e is 65537 (0x10001) Next, I printed the actual private key, as well as the modulus (n) root@kali:~/rsa256# openssl rsa -in private.pem -modulus writing RSA key -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- I also generated the public key, as this is what I would be attacking. root@kali:~/rsa256# openssl rsa -in private.pem -outform PEM -pubout -out public.pem writing RSA key As you can see, the exponent and modulus are the same for the public key and the private key. root@kali:~/rsa256# openssl rsa -pubin -in public.pem -text -noout Public-Key: (256 bit) Exponent: 65537 (0x10001) root@kali:~/rsa256# openssl rsa -pubin -in public.pem -modulus -noout Finally, I converted the modulus from hex to an integer using ‘bc’, as this is the input I will use for cracking it. root@kali:~/rsa256# echo "ibase=16;B19D0C77A45D2A8FD9B9EEE42BEBE6CE0F0AF88B5FF529982D2D52257412A507" | bc Cracking the Key First, to crack the key, I created a 16 CPU DigitalOcean droplet. I verified the processors by checking cpuinfo after the machine booted. root@ubuntu-c-16-nyc3-01:~/rsa256# cat /proc/cpuinfo | grep "model name" model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz model name : Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz With the machine up and running, I installed make, cmake, and g++ to compile any tools that I might use. Next, I found a tool that implemented a Number Field Sieve called msieve. First, I cloned the GitHub repository. root@ubuntu-c-16-nyc3-01:~/rsa256# git clone https://github.com/radii/msieve Cloning into 'msieve'... remote: Enumerating objects: 1504, done. remote: Total 1504 (delta 0), reused 0 (delta 0), pack-reused 1504 Receiving objects: 100% (1504/1504), 634.68 KiB | 15.87 MiB/s, done. Resolving deltas: 100% (1132/1132), done. Next, I had to edit the makefile, so that the architecture matched the CPUs that I was using (Skylake). root@ubuntu-c-16-nyc3-01:~/rsa256/msieve# cat Makefile | grep "march" # gcc with basic optimization (-march flag could OPT_FLAGS = -O3 -fomit-frame-pointer -march=skylake -DNDEBUG #OPT_FLAGS = -O3 -fomit-frame-pointer -march=k8 -DNDEBUG $(CC) $(CFLAGS) -march=pentium2 -DBLOCK_KB=64 -DCPU_PENTIUM2 \ $(CC) $(CFLAGS) -march=pentium3 -DBLOCK_KB=64 -DCPU_PENTIUM3 \ $(CC) $(CFLAGS) -march=pentium4 -DBLOCK_KB=64 -DCPU_PENTIUM4 \ $(CC) $(CFLAGS) -march=pentium-m -DBLOCK_KB=32 -DCPU_PENTIUM_M \ $(CC) $(CFLAGS) -march=prescott -DBLOCK_KB=32 -DCPU_CORE \ $(CC) $(CFLAGS) -march=athlon -DBLOCK_KB=64 -DCPU_ATHLON \ $(CC) $(CFLAGS) -march=athlon-xp -DBLOCK_KB=64 -DCPU_ATHLON_XP \ $(CC) $(CFLAGS) -march=k8 -DBLOCK_KB=64 -DCPU_OPTERON \ $(CC) $(CFLAGS) -march=nocona -DBLOCK_KB=64 -DCPU_PENTIUM4 \ $(CC) $(CFLAGS) -march=nocona -DBLOCK_KB=32 -DCPU_CORE \ $(CC) $(CFLAGS) -march=k8 -DBLOCK_KB=64 -DCPU_OPTERON \ root@ubuntu-c-16-nyc3-01:~/rsa256/msieve# make x86_64 gcc -D_FILE_OFFSET_BITS=64 -O3 -fomit-frame-pointer -march=skylake -DNDEBUG -Wall -W -I. -Iinclude -Ignfs -Ignfs/poly -Ignfs/poly/stage1 -c -o common/filter/clique.o common/filter/clique.c After that change, I was able to successfully compile the application and view the help. root@ubuntu-c-16-nyc3-01:~/rsa256/msieve# ./msieve --help Msieve v. 1.46 usage: ./msieve [options] [one_number] numbers starting with '0' are treated as octal, numbers starting with '0x' are treated as hexadecimal -s save intermediate results to instead of the default msieve.dat -l append log information to instead of the default msieve.log -i read one or more integers to factor from (default worktodo.ini) instead of from the command line -m manual mode: enter numbers via standard input -mb hint for number of megabytes of memory for postprocessing (set automatically if unspec- ified or zero) -q quiet: do not generate any log information, only print any factors found -d deadline: if still sieving after minutes, shut down gracefully (default off) -r stop sieving after finding relations -p run at idle priority -v verbose: write log information to screen as well as to logfile -t use at most threads elliptic curve options: -e perform 'deep' ECM, seek factors > 15 digits quadratic sieve options: -c client: only perform sieving number field sieve options: -n use the number field sieve (80+ digits only; performs all NFS tasks in order) -nf read from / write to NFS factor base file instead of the default msieve.fb -np [X,Y] perform only NFS polynomial selection; if specified, only cover leading coefficients in the range from X to Y inclusive -np1 [X,Y] perform stage 1 of NFS polynomial selection; if specified, only cover leading coefficients in the range from X to Y inclusive -np2 perform stage 2 of NFS polynomial selection -ns [X,Y] perform only NFS sieving; if specified, handle sieve lines X to Y inclusive -nc perform only NFS combining (all phases) -nc1 [X,Y] perform only NFS filtering. Filtering will track ideals >= X (determined automatically if 0 or unspecified) and will only use the first Y relations (or all relations, if 0 or unspecified) -nc2 perform only NFS linear algebra -ncr perform only NFS linear algebra, restarting from a previous checkpoint -nc3 [X,Y] perform only NFS square root (compute dependency numbers X through Y, 1<=X<=Y<=64) With msieve running, I timed it using the q and n flags, as that seemed basic and straightforward. root@ubuntu-c-16-nyc3-01:~/rsa256/msieve# time ./msieve -q -n 80336855234907714168477675917972994189398342031083238074132216291031761724679 prp39: 275778021469467750604832321873164071587 prp39: 291309854232898176366046870573797527117 real 2m44.099s user 2m43.996s sys 0m0.092s While two minutes and forty-four seconds wasn't bad from a random program, I was hoping that I could do it faster! Cracking 256-bit RSA Keys - Docker Images I decided to try a few Docker images, to see if any of them could give me a lower time. First, I installed Docker to my droplet. Next, I found an image titled rsacrack, which sounded perfect I pulled down the image to my droplet. root@ubuntu-c-16-nyc3-01:~# docker pull b4den/rsacrack Using default tag: latest latest: Pulling from b4den/rsacrack 5667fdb72017: Pull complete d83811f270d5: Pull complete ee671aafb583: Pull complete 7fc152dfb3a6: Pull complete b83d8b6245c4: Pull complete 36cb8498468a: Pull complete 22aaf58c64c7: Pull complete fb0d231b82e1: Pull complete fc922e878061: Pull complete 84cf9426007f: Pull complete 897258e3faf3: Pull complete 6dc87ced1f13: Pull complete 71db44ed3509: Pull complete b266cfc4f7eb: Pull complete e62159d0a47e: Pull complete 8fc467efa1ec: Pull complete d4b519fd0423: Pull complete da3c0b30b580: Extracting [==================================================>] 3.126kB/3.126kB 365f1efd6c17: Download complete fa0aa39ead30: Download complete 97567ded379a: Download complete ca8390f60942: Download complete 54e964f3a0f6: Download complete 878a41964830: Download complete 85beee49cceb: Download complete Once I finally figured out how the image worked, I was able to run and time it. Also, it gave me a bit more information about the cracked prime, as well as generating a new private key for me. (after a few attempts, I was able to time it and run it) root@ubuntu-c-16-nyc3-01:~# time docker run b4den/rsacrack 80336855234907714168477675917972994189398342031083238074132216291031761724679 [*] pubkey.e: 65537 [*] pubkey.n: 80336855234907714168477675917972994189398342031083238074132216291031761724679 [*] Key looks like 256 bits [*] Using cadonfs to compute primes [*] results are: [u'275778021469467750604832321873164071587', u'291309854232898176366046870573797527117', 65537L] [*] Key extraction done. -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- real 1m13.985s user 0m0.038s sys 0m0.025s As you can see, it only took one minute and fourteen seconds this time, which was a huge improvement! Note that the RSA private key is a bit different from the one I generated earlier, even though the modulus is the same. I also wanted to try cado-nfs, for no particular reason. That said, I was unable to get the boost libraries to properly work on my Ubuntu droplet. I was able to find a cado-nfs Docker image, so that seemed like the next best choice. First, I pulled down the image. root@ubuntu-c-16-nyc3-01:~# docker pull cyrilbouvier/cado-nfs.py Using default tag: latest latest: Pulling from cyrilbouvier/cado-nfs.py When running this image, a lot seemed to be going on, but it eventually printed the same primes as everything else. root@ubuntu-c-16-nyc3-01:~# time docker run cyrilbouvier/cado-nfs.py 80336855234907714168477675917972994189398342031083238074132216291031761724679 Info:root: Using default parameter file /cado-nfs/share/cado-nfs-2.2.1/factor/params.c75 Info:root: No database exists yet Info:root: Created temporary directory /tmp/cado.33dpijhe Info:Database: Opened connection to database /tmp/cado.33dpijhe/c75.db Info:root: Set tasks.threads=16 based on detected physical cpus Info:root: tasks.polyselect.threads = 2 Info:root: tasks.sieve.las.threads = 2 Info:root: slaves.scriptpath is /cado-nfs/bin Info:root: Command line parameters: /cado-nfs/bin/cado-nfs.py 80336855234907714168477675917972994189398342031083238074132216291031761724679 Info:root: If this computation gets interrupted, it can be resumed with /cado-nfs/bin/cado-nfs.py /tmp/cado.33dpijhe/c75.parameters_snapshot.0 Info:Linear Algebra: Total cpu/real time for bwc: 16.08/0.000250816 Info:Linear Algebra: Aggregate statistics: Info:Linear Algebra: Krylov: WCT time 2.07 Info:Linear Algebra: Lingen CPU time 4.26, WCT time 0.51 Info:Linear Algebra: Mksol: WCT time 1.62 Info:Quadratic Characters: Total cpu/real time for characters: 0.81/0.234679 Info:Square Root: Total cpu/real time for sqrt: 28.48/2.57451 Info:HTTP server: Shutting down HTTP server Info:Complete Factorization: Total cpu/elapsed time for entire factorization: 324.42/70.4059 Info:root: Cleaning up computation data in /tmp/cado.33dpijhe real 1m11.840s user 0m0.040s sys 0m0.095s As you can see, this brought the time down even further, and in the realm of under one minute! In the end, I only needed this droplet for around 20 minutes, so it only cost me ~$0.16 to crack my key. Verifying the Crack While all these applications had the same output, I wanted to verify that I was actually successful. First, I used Python to calculate the original modulus from my derived factors. root@kali:~/rsa256# python -c 'print (275778021469467750604832321873164071587*291309854232898176366046870573797527117)' Next, I encrypted a secret message with my public key. Note that the message needs to be shorter than the original key, so I only had 32 bytes. root@kali:~/rsa256# cat plaintext.txt Secret message! root@kali:~/rsa256# openssl rsautl -encrypt -pubin -inkey public.pem -in plaintext.txt -out encrypted.txt root@kali:~/rsa256# cat encrypted.txt I saved the output of the rsacrack Docker image as my cracked private key. root@kali:~/rsa256# cat cracked.pem -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- Using this private key, I was still able to decrypt the message, even though it was different than my original private key! root@kali:~/rsa256# openssl rsautl -decrypt -inkey cracked.pem -in encrypted.txt -out decrypted.txt root@kali:~/rsa256# cat decrypted.txt Secret message! Private Key Generation While I was able to decrypt my secret message using the cracked private key, I was unsure how this key was generated. In this case, I found a StackOverflow post on how to generate an RSA key using specific input numbers. First, I created a small Python script (that I'll share below) to create a configuration file for asn1parse. root@kali:~/rsa256# python genKey.py root@kali:~/rsa256# python genKey.py > gen.conf Using this configuration file, I generated another private key using these values. root@kali:~/rsa256# openssl asn1parse -genconf gen.conf -out newkey.der 0:d=0 hl=3 l= 169 cons: SEQUENCE 3:d=1 hl=2 l= 1 prim: INTEGER :00 6:d=1 hl=2 l= 33 prim: INTEGER :B19D0C77A45D2A8FD9B9EEE42BEBE6CE0F0AF88B5FF529982D2D52257412A507 41:d=1 hl=2 l= 3 prim: INTEGER :010001 46:d=1 hl=2 l= 32 prim: INTEGER :3982F5F783B45B44CB2686B1417B9877548863440DFBDED42320CC8E06403979 80:d=1 hl=2 l= 17 prim: INTEGER :CF78EA3A80E059EBBF43FDC3986B72A3 99:d=1 hl=2 l= 17 prim: INTEGER :DB283CB4F59DEA588D5C9FE0BE96CE4D 118:d=1 hl=2 l= 16 prim: INTEGER :3B612B00A5841098657C8B33A0FB17AB 136:d=1 hl=2 l= 16 prim: INTEGER :650A250AED1E9437A55CE9DD0D21AC15 154:d=1 hl=2 l= 16 prim: INTEGER :6D48E6E2379B0C58F6B7CEDD7C441DCB root@kali:~/rsa256# openssl rsa -in newkey.der -inform der -text -check Private-Key: (256 bit) publicExponent: 65537 (0x10001) RSA key ok writing RSA key -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- While this private key is different from either of my two earlier ones, at least I know that I generated it myself. Using this generated RSA key, I was still able to decrypt my secret message! root@kali:~/rsa256# openssl rsautl -decrypt -inkey generated.pem -in encrypted.txt -out decrypted-gen.txt root@kali:~/rsa256# cat decrypted-gen.txt Secret message! Genkey Code You can find the code for my genkey.py script below. For more information on RSA cracking, as well as where I got the egcd and modinv methods from, then you should check out this post def egcd(a, b): if a == 0: return (b, 0, 1) g, y, x = egcd(b % a, a) return (g, x - (b // a) * y, y) def modinv(a, m): gcd, x, y = egcd(a, m) if gcd != 1: return None # modular inverse does not exist return x % m def main(): p = 275778021469467750604832321873164071587 # edit with factor #1 q = 291309854232898176366046870573797527117 # edit with factor #2 n = 80336855234907714168477675917972994189398342031083238074132216291031761724679 # edit with modulus e = 65537 # edit if exponent is different phi = (p -1)*(q-1) d = modinv(e,phi) dp = modinv(e,(p-1)) dq = modinv(e,(q-1)) qi = modinv(q,p) print "asn1=SEQUENCE:rsa_key" print "" print "[rsa_key]" print "version=INTEGER:0" print "modulus=INTEGER:" + str(n) print "pubExp=INTEGER:" + str(e) print "privExp=INTEGER:" + str(d) print "p=INTEGER:" + str(p) print "q=INTEGER:" + str(q) print "e1=INTEGER:" + str(dp) print "e2=INTEGER:" + str(dq) print "coeff=INTEGER:" + str(qi) if __name__ == "__main__": As usual, you can find the code and updates in my GitHub repository as well. Cracking 256-bit RSA - Conclusion This was a fun exercise, and it was much faster than I expected to do the cracking. I could see a CTF using this as a challenge in the future, so it helps to know how to perform this attack. Finally, for some more examples of cracking and math, check out Rob's Twitter thread. Ray Doyle is an avid pentester/security enthusiast/beer connoisseur who has worked in IT for almost 16 years now. From building machines and the software on them, to breaking into them and tearing it all down; he’s done it all. To show for it, he has obtained an OSCE, OSCP, eCPPT, GXPN, eWPT, eWPTX, SLAE, eMAPT, Security+, ICAgile CP, ITIL v3 Foundation, and even a sabermetrics certification! He currently serves as a Senior Staff Adversarial Engineer for Avalara, and his previous position was a Principal Penetration Testing Consultant for Secureworks. This page contains links to products that I may receive compensation from at no additional cost to you. View my Affiliate Disclosure page here. As an Amazon Associate, I earn from qualifying 12 Comments 1. So, all of our server keys can be cracked ? Like ssh keys and stuff ? What should we use it now ? □ 256-bit RSA is actually rarely used, so you should be fine. For example, when you generate an SSH key, it uses RSA-2048 by default, which is infeasible to crack at this time! 2. If you really can find the private key, so gind my private key. This is my public key. □ Hi, I have no proof that it is your public key, and I covered how to do this in the post! That said, feel free to give it a try against your own personal keys if you’d like. 3. This is a fantastic walk-through! Great job and thank you for sharing! 4. Does this mean Bitcoin wallets are no longer secure? □ Not at all! Bitcoin actually uses AES-256-CBC which is far more secure than this. 5. Hi Ray, your work is amazing. Unfortunately the video is not available anymore. How can I see it ? □ Thanks! Eh, they probably pulled it down because it made them look bad haha. Was literally just them on stage making a big deal about cracking 256-bit RSA keys (and slowly). 6. Thanks alot , you made my day sir □ You’re welcome, glad that it helped This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://www.doyler.net/security-not-included/cracking-256-bit-rsa-keys","timestamp":"2024-11-14T19:46:31Z","content_type":"text/html","content_length":"241976","record_id":"<urn:uuid:5b0665c4-ab54-40b1-9ef9-3073d1ceb580>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00054.warc.gz"}
The 90-9-1 Rule in Reality Dr. Michael Wu, Ph.D. is Lithium's Principal Scientist of Analytics, digging into the complex dynamics of social interaction and online communities. He's a regular blogger on the Lithosphere and previously wrote in the Analytic Science blog. You can follow him on Twitter at mich8elwu. If you've ever managed a community you've probably heard of the "90-9-1 rule". If you have observed a community closely, you have probably seen it in action. Soon after a community launches, users begin to participate, but each user participates at a different rate. The minute difference in participation levels is accentuated over time, leading to a small number of hyper-contributors in the community who produce most of the community content. The 90-9-1 rule simply states that: • 90% of all users are lurkers. They read, search, navigate, and observe, but don't contribute • 9% of all users contribute occasionally • 1% of all users participate a lot and account for most of the content in the community But how real is this rule? Do all communities follow this rule consistently? If not, how far off is the deviation? Is the proportion really 90:9:1, or is it more like 70:25:5, or 80:19.99:0.01? Let's find out... Lithium has accumulated over 10 years of user participation data across 200+ communities, so we can address this question empirically with rigorous statistics. Rather than complicating the issue with the lurkers, I choose to analyze only the contributors (i.e. the 9% occasional-contributors and the 1% hyper-contributors). The proportion between these two groups of participants should be 9:1 or equivalently 90:10 according to the 90-9-1 rule. The 9-1 Part of the 90-9-1 Rule So the 90-9-1 rule excluding the lurkers says that: • 90% of the contributors (which is 9% of all users) are occasional-contributors. • 10% of the contributors (which is 1% of all users) are hyper-contributors, who generate most of the community content. What does the data tell us? On average, the top 10% of contributors (the hyper-contributors) generate 55.95% of the community content, and the rest of the 90% (the occasional-contributors) produces the remaining 44.05% of the content. With my statistician hat, you know I can't possibly be satisfied with just the average! So I plotted the distribution of content contributed by occasional-contributors versus the hyper-contributors across all communities. The standard deviation is 13.02%. Please note: The reason you only see 143 communities here, is because I've excluded communities that are less than 3 months old (these communities are too young that their participation dynamics are not stable enough for the analysis). As you can see from the data, the hyper-contributors can contribute anywhere from about 30% to nearly 90% of the community content with an average of 55.95%. This is certainly a substantial percentage (considering the fact that it is generated by only 10% of the contributors), so the 90-9-1 rule "sort of" holds. But, to be rigorous, it depends on what do you mean by "most" of the community content. If "most" meant at least 30% of the community content, then the 9-1 part of the 90-9-1 rule holds for 99.30% of our communities. If you meant at least 40% of the community content, then 89.51% of our communities satisfy this rule. But if "most" meant at least 50% of the community content, then only 65.73% of our communities are described by this rule. Turning the Problem Around This gives us a convenient spot to turn the problem around and look at the 90-9-1 rule from another perspective. We can define rigorously what "most" means (e.g. at least 30% of the community content), then calculate the fraction of contributors who generated these content and treat them as the hyper-contributors. We can then compare and see how far off we are from the expected ratio of Averaging across 143 communities, we see that if we define "most of the community content" to be "at least 30% of the total content," then the fraction of participants who contributed this amount ranges from 0.32% to 5.14% with an average of 2.73%. That means, on average, hyper-contributors consist of roughly 2.73% of the contributing population, so the remaining 97.27% of the participants are occasional contributors. And the ratio of hyper- to occasional-contributors is about 97:3, far from the expected value of 9:1. If instead, we define "most" to be "at least 40%" of total content, then we get roughly 5.07% hyper-contributors on average across 143 communities. Now the ratio of hyper- to occasional-contributors is about 19:1, which is closer but still quite far off the expected ratio of 9:1. If we defined "most" to be "at least 50%" of the total content, then the group that contributed this amount (which qualifies them to be hyper-contributors) is about 9.35% of the participants. This gives us a ratio that is much closer to the expected value of 9:1 on average. However, the variability is also very large. Even under this simple criterion of contributing at least 50%, the fraction of participants who contributed this amount may vary from less than 1% to ~18% of the participants. That means the ratio between hyper- and occasional-contributors may be anywhere from 99:1 to about So is 90-9-1 a hard and fast rule? Definitely not! Not even the 9-1 part of it. But it is certainly a great rule of thumb when looking at or explaining community data. And it tells us that participation in communities is highly skewed and unequal, and there is a small fraction of hyper-contributors who produce a substantial amount of the community contents. Next time I am going to start to dive deeper into the contribution level of the hyper-contributors, your community's real superusers. Updated 5 months ago Version 9.0 Joined September 04, 2008 Release Notes Review release notes and updates for all of our products! Select a label to browse by product or resource type.
{"url":"https://community.khoros.com/blog/release-notes/the-90-9-1-rule-in-reality/5463","timestamp":"2024-11-04T18:40:12Z","content_type":"text/html","content_length":"298006","record_id":"<urn:uuid:67b6a0ac-318e-4abf-be64-d928d38327cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00130.warc.gz"}
Why Did We Ever Think the First Programming Language Didn't Matter?Why Did We Ever Think the First Programming Language Didn't Matter? Credit: University of Michigan I recently had two CS colleagues (from different schools) make the claim that we shouldn’t worry about the programming language in the first course because it just doesn’t matter. They each believed that we should focus on the CS learning outcomes. The belief is that if students learn the concepts well, then the students can simply apply it to whatever language they learn next. Ben Shneiderman and Richard Mayer described it this way in 1979 (see link here): Learning a first language requires development of both semantic concepts and specific syntactic knowledge, while learning a second language involves learning only a new syntax, assuming the same semantic structures are retained. Learning a second language with radically different semantics (i.e., underlying basic concepts) such as LISP or MICRO-PLANNER may be as hard or harder than learning a first language. The empirical evidence I know suggests that the learning a second language isn’t easy for most learners. While there is typically transfer from one programming language to another, it’s not seamless. Tshukudu and Cutts have been studying what transfers and what doesn’t when students move between Python and Java (see paper here). David Weintrop studied students moving from block-based to text programming. Yes, there was transfer, but learning was slowed when they shifted modality (see blog post here). The first programming language is particularly important when we think about programming for other-than-CS majors. Students want to learn what’s valued in their desired community of practice. If a student wants to become a data scientist, R or Python makes a lot more sense than learning C. A computational artist might be motivated to learn Processing, but might not see a lot of relevance for learning MATLAB. Not everyone who learns programming today wants or needs the ability to switch languages as easily as computer scientists do. I’ve been thinking recently about this question from the other side. Why did anyone ever think that the first programming language didn’t matter? I have a hypothesis that this belief once was true, when the field was younger. When CS curricula were first defined in the late 1960’s, there was an emphasis on the mathematical underpinnings of programming. Nathan Ensmenger describes it in "The Computer Boys Take Over" as part of increasing the perceived professionalization of computer science. So students who entered computer science in the early days typically had a stronger mathematical background than the average students learning to code today. This is obvious when you consider that the average has shifted. Programming was mostly taught to undergraduates in the 1970’s, and today, there are probably more K-12 students learning coding in the US than there were undergraduate CS majors in the 1970’s. CS education was developed assuming a stronger mathematics background than we can assume today. Here’s my hypothesis: The transfer that Shneiderman and Mayer saw wasn’t from one language to another. It was transfer of different forms of the same mathematics. If we teach the semantics of the programming language based on mathematics students already know, then a new syntax is just a new formalism for the mathematics. Math has always been taught, "Now let’s try it this way." Mathematicians love to explore the same idea in different formalisms or with different approaches. Check out the Pythagorean theorem page in Wikipedia — there are more than a half-dozen proofs described. We’re conceptualizing the problem wrong if we think only about the programming knowledge transferring. For students with a strong mathematical background, the first programming language and future programming languages are just different notations for things the students already knows. That explains why what Shneiderman and Mayer are describing used to be true. But I don’t think it is anymore. But what if the coding learner doesn’t know a lot of mathematics already? What if it’s a sixth grader who struggles with math class and is now taking his first CS class? What if it’s a graphic designer who is trying to script PhotoShop but avoided math classes doesn’t think of themselves as a programmer (see Brian Dorn’s work)? I predict that transfer to Python or MATLAB is going to be a lot harder than what Shneiderman and Mayer were describing. What if the coding learner is a "conversational programmer" (see paper here) who wants to be able to talk to programmers about their tasks but doesn’t actually want to develop software? The modern coding learner is a lot different than the ones in the 1970’s. We don’t have to make programming about mathematics. We know that most people using Scratch are telling stories without a lot of math (see paper here). Conversational programmers struggle to find resources that help them learn because so many of them require a focus on the logic and mathematics (see paper here), but we are developing approaches to help conversational programmers learn without the math (see paper here). We might be able to teach a lot more people about programming if we don’t expect students to know mathematics first, which we may have been able to expect 40+ years ago. If I’m right, there are implications for researchers and for teachers. For researchers, if you’re looking for transfer and not measuring mathematical prior knowledge, you may be missing a critical explanation for any transfer you’re seeing. For teachers, be aware of accidentally teaching programming-as-math. If your students are struggling, maybe you’re relying too much on mathematical knowledge that isn’t there. Mark Guzdial is professor of electrical engineering and computer science in the College of Engineering, and professor of information in the School of Information, of the University of Michigan. Stephen Siegel Maybe you can teach more people to program if you downplay math, but they will be better programmers if they learn and use math. Leslie Lamport argues this view in an essay on teaching concurrency (http://lamport.azurewebsites.net/pubs/pubs.html#teaching-concurrency) and in a recent interview (https://learning.acm.org/bytecast/ep16-leslie-lamport). I recommend giving these a read and listen. Mark Guzdial Hi Stephen, Thank you for your comment, but I think you missed the point. MOST of the people who are studying programming today do not want to be better programmers. Most people who learn to program are K12 students, end-user programmers, or conversational programmers. Professional programmers and computer scientists are just one slice. Yes, they likely need math to be good at concurrent programming. Not everyone is going to do concurrent programming. Chris Smith Hi Mark, I do agree with your article at some level. It's always important to focus on the actual goal in education. If the actual goal isn't to teach mathematics, then requiring students to learn mathematics can only be justified by absolute necessity, and it's definitely not necessary. That said, are we really sure that learning mathematics isn't the actual goal? I don't mean learning how to solve math problems, but rather I mean mathematics in the same broader sense that you seem to use it here. (Obviously, knowing the quadratic formula doesn't help with programming at all; but being able to think formally by naming abstractions absolutely does!) I'd say that if learning coding has any value at all, it's because it's an indirect route to learning mathematics, and indeed to learning a different kind of mathematics - the expressive, communicative, modeling-heavy sort of mathematics that's so hard to capture in a classroom setting - that makes it so valuable. That also strikes me as precisely the skill set needed to become a "conversational programmer", where the goal is explicitly to understand abstractions and ideas in the way programmers think about them rather than merely to write working code. Does that mean the first programming language doesn't matter? No! I agree with you there. But the case for this sounds less like "coding shouldn't require math", and more like "mathematics can sometimes be learned as a result of coding, not as a prerequisite". Then the two qualities to look for in a beginning programming language are (a) pedagogical appropriateness, such as discoverability and empowerment of students to develop self-efficacy, and (b) easy of the path from that programming language to understanding abstraction, mathematical communication, and expressive modeling. Scratch, for instance, does great on criterion (a), and mediocre (but not terrible!) on criterion (b). Most other languages we have are mediocre at best on (a), and honestly not much better on (b) either. But that frames the search, at least, and in my biased opinion, highlights the value of approaches like Bootstrap or CodeWorld that are specifically designed to help students develop mathematical communication and modeling skills. The case of R or Python for data science is ultimately just not that interesting. Sure, if you are teaching a population of students who want to apply a specific tool, you should teach them that tool. But this doesn't apply at all to the conversational programmers you refer to, nor most K-12 students. If you don't want to be a programmer, then that industry motivation is out the window. The more important concern is authenticity in the student's mind, and there are lots of approaches to that besides just picking a poor learning tool (like R, for instance) just because it's commonly used. The argument for choosing a first language based on likelihood of professional applications is by and large just as weak as it always was. Mark Guzdial Hi Chris, Maybe for some people, learning programming is a way into learning mathematics. For all the reasons you list, programming can be a terrific context for learning mathematics. This is an area of active debate right now -- why are we teaching programming at all in K-12? See https://www.educationnext.org/computer-science-for-all-as-new-subject-spreads-debates-flare/ But there are reasons for using programming in education that have nothing to do mathematics. Computing is a powerful tool, and programming is a way of harnessing and directing that power. In our work with social studies teachers, they want to use computing for data visualizations in history class. Computing is necessary for dealing with hundreds of years of data. Programming is a great way to define the visualizations you want. The history teachers with whom we work really don't want to deal with mathematics. So, we're exploring alternative ways of thinking about programming: https:// Based on Katie Cunningham's work with conversational programmers (https://computinged.wordpress.com/2021/06/21/katie-cunninghams-purpose-first-programming-glass-box-scaffolding-for-learning-to-code /), I don't think that they think about their need as mathematics. I suggest that it's the math-iness that most turns them off to traditional programming classes. I support and encourage those who want to use programming for teaching mathematics. Let's not limit programming's role in education to just mathematics, though. Displaying all 4 comments
{"url":"https://acmwebvm01.acm.org/blogs/blog-cacm/253393-why-did-we-ever-think-the-first-programming-language-didnt-matter/fulltext","timestamp":"2024-11-05T13:59:23Z","content_type":"text/html","content_length":"34217","record_id":"<urn:uuid:e8720bd1-07e0-4f54-a879-cbd901c9a639>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00679.warc.gz"}
Minimum frequency spacing for having orthogonal sinusoidals - DSP LOG Minimum frequency spacing for having orthogonal sinusoidals In this post, the objective is to figure out the minimum separation between two sinusoidals having frequencies $f_1$, $f_2$ of duration $T$each to be orthogonal. Let the phase difference between the sinusoidals is $\phi$ where $\phi$ can take any value from $0$ to $2\pi$ (Refer Example 4.3 [DIG-COMM-SKLAR] For the two sinuosidals to be orthogonal, $\int_0^Tcos(2\pi f_1 t+\phi)cos(2 \pi f_2t)dt = 0$ Integrating and applying the limits, the above equation simplifies to (thanks to the detailed simplification in Example 4.3 [DIG-COMM-SKLAR] $cos(\phi)\left[\frac{sin(2\pi (f_1+f_2)T)}{2\pi(f_1+f_2)} + \frac{sin(2\pi (f_1-f_2)T)}{2\pi(f_1-f_2)} \right] \\ + sin(\phi)\left[\frac{cos(2\pi (f_1+f_2)T)-1}{2\pi(f_1+f_2)} + \frac{cos(2\pi (f_1-f_2)T)-1}{2\pi(f_1-f_2)} \right]=0$. $sin(n\pi)=0$ and $cos(2n\pi)=1$ where $n$ is an integer. Let as assume that $(f_1+f_2)T$ is an integer. Then two terms in the above equation vanishes as $sin(\2\pi (f_1+f_2)T)=0$ and $cos(\2\pi (f_1+f_2)T)=1$. The above equation simplifies to, $cos(\phi)\frac{sin(2\pi (f_1-f_2)T)}{2\pi(f_1-f_2)} + sin(\phi)\frac{cos(2\pi (f_1-f_2)T)-1}{2\pi(f_1-f_2)} =0$. For an arbitrary value of $\phi$from $0$ to $2\pi$ In a such a case, for the above equation to be zero, then the cosine term to be equal to 1 and the sine term need to be equal to 0 for making the above equation zero. To satisfy that requirement, need to have, $2\pi (f_1-f_2)T=2n\pi$ Ofcourse, the minimum value of $n$is 1, then For $\phi$= $0$ When $\phi$= $0$, then cosine term in the equation is already zero. To make the eqution 0, the sine term need to be equal to be zero. To satisfy that requirement, need to have, $2\pi (f_1-f_2)T=n\pi$ Ofcourse, the minimum value of $n$is 1, then % Simple Matlab/Octave code % Minimum frequency seperation between two sinusoidals T = 1; fs = 100; t = 0:1/fs:T; t = t(1:end-1); % with random phase f1 = 1; f2 = 2; phi = 2*pi*rand; % uniformly distributed from 0 tp 2pi s1 = cos(2*pi*f1*t+phi); s2 = cos(2*pi*f2*t); sum_with_phi_random = sum(s1.*s2) % with zero phase difference f3 = 3/4; f4 = 5/4; s3 = cos(2*pi*f3*t); s4 = cos(2*pi*f4*t); sum_with_phi_zero = sum(s3.*s4) close all hold on title(‘Minimum frequency seperation for random phase’) grid on hold on title(‘Minimum frequency seperation for zero phase’) grid on Figure: Two sinusoidals with frequency difference = 1/T Figure: Two sinusoidals with frequency difference = 1/2T 1. When the phase difference between two sinuosidals is not known, then the minimum frequency separation between them is $\frac{1}{T}$ for the sinusoidals to be orthogonal. 2. When the phase difference between two sinuosidals is zero, then the minimum frequency separation between them is $\frac{1}{2T}$ for the sinusoidals to be orthogonal. 3. In the above Matlab code snippet, with $\frac{1}{2T}$seperation, the sum of the product of two sinusoidals is only nearly equal to zero (and not zero). Need to think more and revert. [DIG-COMM-SKLAR] Digital Communications: Fundamentals and Applications (2nd Edition), Bernard Sklar Hope this helps, 8 thoughts on “Minimum frequency spacing for having orthogonal sinusoidals” 1. 3. In the above Matlab code snippet, with seperation, the sum of the product of two sinusoidals is only nearly equal to zero (and not zero). Need to think more and revert. I think it is because that the Matlab sum function cannot present the real meaning of integrate, (the area of the curve of the product of two sinusoidals). sum_with_phi_random = sum(s1.*s2) should be add the factor 1/fs, as below: sum_with_phi_random = sum(s1.*s2)*1/fs 2. one more question it was mentioned that for what ever value of phi form 0 to 2pi the cos term should be one and the sin term is zero.. But it is not necessary… check out for phi = 135degress and the cos term and sin term as 1 then even the equation goes to zero….. 3. everything is ok. But, what about the condition (f1+f2)T is integer that is assumed in the derivation??? what about if (f1+f2)T is not integer??? 1. @ram: I did not do the detailed analysis, but I think (f1+f2)T being an integer is required for saying that two sinusoidals with zero phase difference to be orthogonal is f1-f2 = 1/2T 4. everything is ok. But, what about the condition (f1+f2)T is integer that is assumed in the derivation??? what about if (f1+f2)T is not integer??? 5. @amr alaa: Thanks. You may subscribe to the RSS-over-email or RSS-feed 🙂 6. thanks very helpful 7. explanation about OFDM is very usefull who new for OFDM.thanks krishna. with regards rajesh neelakandan
{"url":"https://dsplog.com/2007/12/31/minimum-frequency-spacing-for-having-orthogonal-sinusoidals/","timestamp":"2024-11-01T19:30:19Z","content_type":"text/html","content_length":"107421","record_id":"<urn:uuid:f8cbb606-05bd-4f58-aa00-c67c234016d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00233.warc.gz"}
Data types allow us to categorize and handle different types of values that we can use in our programs. Each data type has specific characteristics and behaviors that help us to perform operations and data manipulations effectively. If you want to learn more Basic Data Types Booleans (bool) Booleans in Python (bool) are truth values that represent boolean logic. They can be True (true) or False (false) and are used in logical expressions and flow control. For example, in an if statement, it evaluates whether a condition is true or false. true = True false = False Integers (int) Integers in Python (int) are numbers without a decimal point, both positive and negative. They can be of any length and are used to represent whole numbers. Some examples of integers are 5, -10, 1000, etc. integer_number = 10 Floating Point Numbers (float) Floating point numbers in Python (float) are numbers with a decimal point. They can represent both integers and fractions. Some examples of floating point numbers are 3.14, -0.5, 2.71828, etc. float_number = 3.14 Strings (str) Strings in Python (str) are sequences of characters enclosed in single quotes (') or double quotes ("). They can contain letters, numbers, symbols, and spaces. Some examples of strings are "Hello", 'Python', "123", etc. string_text = "Hello, world!" Composite Data Types Lists (list) Lists in Python (list) are ordered and mutable collections of elements. They can contain any type of data, including numbers, strings, nested lists, etc. The elements of a list are indexed and can be accessed and modified using indexes. my_list = [1, 2, 3, 4, 5] Tuples (tuple) Tuples in Python (tuple) are ordered and immutable collections of elements. Like lists, they can contain any type of data. The main difference is that the elements of a tuple cannot be modified once the tuple has been created. my_tuple = (10, 20, 30, 40, 50) Dictionaries (dict) Dictionaries in Python (dict) are unordered collections of key-value pairs. Each item in a dictionary has a unique key that is used to access the corresponding value. They are very useful for representing structured and related data. my_dictionary = {"name": "John", "age": 30, "city": "Madrid"} Sets (set) Sets in Python (set) are unordered collections with no duplicate elements. They are used for mathematical set operations such as union, intersection, difference, etc. my_set = {1, 2, 3, 4, 5}
{"url":"https://www.luisllamas.es/en/python-data-types/","timestamp":"2024-11-08T14:33:59Z","content_type":"text/html","content_length":"37517","record_id":"<urn:uuid:5bbf7e19-703f-4680-b6b0-1556a3e1404c>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00119.warc.gz"}
Outsourced Verifiable Computation Cloud computing has become an indispensable tool in the IT industry. The ability to offload heavy computation is especially useful as the world transitions from Personal Computers to smartphones and tablets. And to top this off, the cost of cloud computing has seen a dramatic drop over the years. Services such as DigitalOcean now allow a user to spin up a server for as less as $5/month. Today, even the most secretive computations are being offloaded to a remote server. With such dependence on cloud computing, we need to take a moment to think about the adverse effects in store for us if a particular remote server begins to malfunction and provides incorrect results. And this is not as uncommon as one would expect. A remote server might provide incorrect results due to several reasons; from corrupted data sent as input to simple misconfiguration of server. So how do we ensure that the results sent back by a server are valid? Well… One simple solution to this would be to recompute the results locally and verify that the result is valid. But as you’ve probably guessed, this defeats the purpose of offloading computation. So what we’re looking for, is a protocol where verification of computation is inexpensive in comparison to actually running the computation. This is what Verifiable Computation is all about. Well there’s good news and bad news. The bad news is that the solution isn’t really straightforward and can get pretty mathematical. But the good news is, all of this is extremely cool and definitely worth your time. So I’ve decided to break this article down into several parts. In this one, we will go through a rough overview on how the process works and pick up on more details in the upcoming Overview of the solution A typical problem in this domain can be split into three steps. Let’s walk through a rather simplified example which we will later extend. The first task at hand is to rewrite the following program in a way that is easier to represent mathematically. Cryptography mainly deals with transformation on numbers and having a ASCII text file with code doesn’t really give us much to work with. We therefore attempt to encode the execution of this function in a way that is easier to work with. There are several ways one can go about doing this, from boolean / arithmetic circuits to constraint systems. We’ll be covering this in future articles so it’s completely alright if you treat them as buzz words for now. Just to recall, the XOR gate has a high output when both the inputs differ from each other. That is, A != B . The following is the truth table of the XOR gate. XOR between two bits A and B can also be rewritten as a function as shown As discussed, we’ll be converting our program into something else which is easier to work with. In our case, we’re going to convert it into an arithmetic circuit as shown below. If you follow the input wires to each gate, you’ll notice that the circuit does the same thing as the equation above. So now, the Prover converts the program into this circuit and runs the input x provided by the Client and returns the output of the execution along with a “certificate”. What does this certificate look like? The certificate for this execution is a **valid assignment to the input wires** . So if the Client provides input as *(A = 1, B = 0),* the Prover returns output O = 1 along with a certificate that could look like (A = 1, B = 0, Z1 = 1, Z2 = 0, O = 1). So all that the Verifier has to do is plug in the values into the circuit and check if this is a valid assignment to the circuit. However, this is rather impractical. For any program with decent complexity, the size of the certificate is going to be extremely large. In addition to this, the Verifier has to basically rerun the program to ensure that every variable in the certificate has the right value. Therefore, we further encode this circuit into a polynomial. Now I’m sure you are wondering “Since when is converting a function to a polynomial a good idea? ” But bear with me. Polynomials have quite a few useful properties that we’d like to make use of (more on this later). And how do we encode a function to a polynomial? Consider the following function. For now, let’s not bother ourselves with the construction of this function and only focus on a key attribute of this function. Which is, *L = 0* if and only if (A, B, Z1, Z2, O) **form a valid assignment to the arithmetic circuit we constructed above** . How? For L(t) to equate to 0, we require: Take a few minutes here. Refer to the circuit diagram that we drew earlier and see as to why the three conditions must be satisfied. Any function L constructed using an invalid assignment, will lead to a non zero polynomial . So what we’ve effectively done is converted the problem of “Checking if a program was executed correctly “ to “Check if a polynomial is a zero polynomial”. So far in this article, we’ve seen a very general outline of the steps involved in any VC (Verifiable Computation) protocol. The last few years have seen a number of different protocols that are both secure and practical (somewhat). These protocols can be divided into two categories; 1. Proof based protocols 2. Argument based protocols Proof based protocols are interactive in nature (require multiple rounds of back and forth communication between prover and verifier for a verifier to be convinced about the validity of a proof). A “proof” is stringent in terms of completeness of the protocol. They assume a powerful super-polynomial prover and therefore the protocols tend to be impractical for real world use cases. Argument based protocols could be both interactive and non-interactive (zero rounds of back and forth communication. Prover provides a “certificate” after running the computation and this “certificate” is able to convince any verifier without requiring any communication with the prover). An “argument” refers to a less-stringent definition of a proof which assumes that the prover to be in polynomial time at best. We’ll focus the upcoming parts on a subset of Argument based protocols which are able to construct SNARK proofs which stands for “Succinct Non-Interactive Argument of Knowledge ”. Let’s break this Succinct Proofs which are succinct allow verifiers to verify in a couple of milliseconds. That is, the size of the proof “certificate” is a few hundred bytes even for a verification of a function with millions of instructions. Non-Interactive As discussed earlier, non-interactive proofs do not require back and forth communication between prover and verifier. Once the “certificate” is generated by the prover, any verifier can verify if the prover has honestly run the computation Argument of Knowledge SNARKs are argument based proofs. Argument based proofs are efficient (we’ll see more of this in the later parts) There are several protocols which are able to generate SNARK proofs. From the next article, we’ll discuss one of these in detail along with an example. Conclusion That’s all for now! I’ll be uploading the second part of this series soon enough. Stay tuned for the next article if you found this one useful. Thank you for reading! Written by Akshay Pai
{"url":"https://delta.nitt.edu/blogs/outsourced-verifiable-computation","timestamp":"2024-11-14T07:05:27Z","content_type":"text/html","content_length":"32276","record_id":"<urn:uuid:9a7a1dad-d0fb-4556-8331-fe4350749763>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00463.warc.gz"}
Resistance of a cell Capacity, nominal voltage and internal resistance are the basic parameters determining the performance of the lithium ion cell. The two parameters capacity,C and voltage,V define the energy (E=C x V) of the cell, while the power depends on the internal resistance of the cell. For a power cell the resistance has to be of the order of milli ohms or lesser. The resistance depends on various factors and it changes with SOC, temperature of operation, the applied current and the cell age. There are several ways to measure the resistance of a lithium-ion cell: 1. AC Internal resistance measurement: This method is a standard way to measure resistance and it involves applying a small AC voltage (Vac) to the cell and measuring the resulting AC current (Iac). The internal resistance of the cell can then be calculated using Ohm's law (ACIR = Vac/Iac). This method is generally accurate but requires specialized equipment which can generate ac frequency signal of 1000Hz and 100 mA. The signal applied is instantaneous and it can be assumed that neither the SOC of the cell doesn't change nor the heating of cell occur in this process. 2. DC load test: In this method, a pulse signal or step change is required to observe change in voltage and current, before and after step. The cell's resistance can then be calculated using Ohm's law, if Vi and Ii is the initial current before step and after step it is Vf and If, then DCIR = (Vi – Vf ) / (Ii – If). The applied step change in current can be a step up in current, which is a charge pulse, or it can be a step down in current, which is a discharge pulse. This method is relatively simple and can be done with a cycler, but the results may not be as accurate as the SOC or state of the cell changes when current is applied. 3. Electrochemical impedance spectroscopy (EIS): This method involves applying a small AC voltage with a large frequency spectrum (typically 1 mHz to 100 kHz) to the cell and measuring the resulting AC current. The cell's impedance (which is related to resistance) can be calculated from the measurements. EIS can provide detailed information about the cell's electrochemical processes and is often used in research and development. However, it requires specialized equipment and can be time-consuming.
{"url":"https://www.batteryshortcut.com/2022/12/resistance-of-cell.html","timestamp":"2024-11-09T17:14:46Z","content_type":"application/xhtml+xml","content_length":"122425","record_id":"<urn:uuid:5704b9ac-5351-4afe-997b-2f3ca6189718>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00784.warc.gz"}
Are Mortgages Simple Interest and Compounded Monthly? People seem to be fascinated with how mortgages are calculated and paid off, but when it comes down to it, there’s nothing too mind-blowing happening. Each month, a portion of principal and interest are paid off as mortgage payments are made. Over time, the loan balance is reduced, as is the total amount of interest due. Mortgages Are Simple Interest • Simple interest means it’s not compounded • So you don’t pay interest on top of interest • What you owe in interest is pre-determined on a home loan • And paid over the life of the loan Here in the United States, mortgages use simple interest, meaning it is not compounded. So there is no interest paid on interest that is added onto the outstanding mortgage balance each month. Conversely, think of an everyday saving account that offers you compounding interest. If you have a balance of $1,000 and an interest rate of 1%, you’d actually earn more than 1% in the first year because that earned interest is compounded either daily or monthly. Put another way, you earn interest on your interest each day or month, which allows your money to grow more quickly. Mortgages don’t do that because the total amount of interest due is already calculated beforehand and can be displayed via an mortgage amortization schedule. For example, a $300,000 mortgage set at 4% on a 30-year fixed mortgage will have total interest due of $215,610 over the life of the loan. We know this beforehand because mortgages are amortized. Each month, the combined principal and interest payment will be exactly the same, but the composition of the payment will change. In month one, you’ll pay $432.25 in principal and $1,000 in interest for a total of $1,432.25. In month 360, you’ll pay the same $1,432.25, but only about $5 of that amount will go toward interest because the outstanding loan balance will be so small at that time. At no point would you pay interest on top of interest. Extra Payments Compound Principal • If you make extra mortgage payments • Your principal payment can compound • In the sense that a lower outstanding balance • Will lower each subsequent interest payment However, if you paid an extra $100 each month on top of your required mortgage payment, the principal portion would start compounding. In month one, you’d pay $1,532.25, with $1,000 going toward interest and $532.25 going toward the principal balance. This wouldn’t provide any extra benefit in the first month because you’d simply be paying $100 extra to get $100 more off your principal balance. However, in month two the total interest due would be calculated based on an outstanding balance that is $100 lower. And because payments don’t change on a mortgage, even more money would go toward the principal balance. The second payment would be $998.23 in interest and $534.02 in principal. Meanwhile, those making the standard monthly payment with no extra amount paid would pay $998.56 in interest and $433.69 in principal. That’s more than a $100 difference, $100.33 to be exact. And over time, this gap will widen. In month 60, the principal payment would be $121.70 higher on the loan where you’re paying an extra $100 per month. So the benefit of paying extra increases more and more over the life of the loan and eventually allows the mortgage to be repaid early. Are Mortgages Compounded Monthly? • Most mortgages don’t compound interest • But they are calculated monthly • Meaning the interest due for the month prior • Will be the same whether you pay early or late within the grace period As noted, traditional mortgages don’t compound interest, so there is no compounding monthly or otherwise. However, they are calculated monthly, meaning you can figure out the total amount of interest due by multiplying the outstanding loan amount by the interest rate and dividing by 12. Using our example from above, $300,000 multiplied by 4% and divided by 12 months would be $1,000. That represents the interest portion of the payment only. The $432.15 in principal is the remaining portion, and it lowers the outstanding balance to $299,567.75. In month two, the same equation is used, this time multiplying $299,567.75 by 4% and dividing by 12 months. That yields total interest of $998.56. And because the monthly payment is fixed and does not change, that must mean the principal portion of the payment rises. Sure enough, it’s a slightly higher $433.69. In other words, the interest due for the prior month is calculated on a monthly, not daily basis. This means it doesn’t matter when you pay your mortgage, as long as it is within the grace period. Generally, mortgage lenders allow you to pay the prior month’s mortgage payment by the 15th of the month with no penalty, even if the payment is technically due on the first of the month. Because interest isn’t accrued daily, but rather monthly, it doesn’t matter if you pay on the first or the 15th. As long as the payment is made on time, the same amount of interest will be due, and the same amount of principal will be paid off. To complicate matters, because the mortgage industry does that really well, there are so-called “simple interest mortgages” that calculate interest on a daily basis. Instead of calculating the amount of interest due by dividing by 12 (months), you divide by days (365) instead. These types of mortgages are not the norm, but if you happen to have one, the day you pay your mortgage will matter because interest is calculated every single day, even on leap years. That could make paying even a day later more expensive. But as mentioned, most mortgages are calculated monthly so it shouldn’t be an issue for many people. Tip: HELOCs are calculated daily as opposed to monthly because the outstanding balance can fluctuate as new draws are taken or paid back. Neg Ams Are the Only Mortgages That Compound Interest • There is one exception to the rule • A negative amortization loan such as the option ARM • It can compound interest if you make the minimum payment option • Which is less than the total amount of interest due each month To tie up some loose ends, there is one type of mortgage that compounds interest, and it too isn’t very common these days. The once very popular option arm, or pick-a-pay mortgage, which features negative amortization, allows for compounding interest. It does so because borrowers are allowed to pay less than the total amount of interest due for the month, which adds any shortfall to the outstanding loan balance. This means the borrower pays interest on top of interest in subsequent months if they don’t pay the full amount of interest due. The banks are happy to let it ride, but the borrower is the one who pays for the convenience. Again, these mortgages are pretty much a thing of the past, but it’s one good example of a mortgage with compounding interest. In summary, for most individuals their mortgage will be simple interest that is calculated monthly. That means no new interest will be added to the loan balance and all calculations will be made on a monthly basis, so paying early or late in the month should have no effect, as long as payment is received by the due date (or within the grace period). (photo: Jayel Aheram) Latest posts by Colin Robertson (see all) 14 thoughts on “Are Mortgages Simple Interest and Compounded Monthly?” What happens to the amount of interest charged per payment on a conventional mortgage loan (in the US) if more than 12 mortgage payments are made in one year? Can a mortgage company charge what they are referring to as “prepaid interest”? Prepaid interest is something you sometimes pay at closing depending on when you close in the month. If you make more than 12 payments in a year it would just increase the amount of interest you could possibly write off during that year. I made 17 mortgage payments in 2018. Each time my mortgage company charged me a rate equivalent to 1/12th of my 6.4% contracted APR. Did they overcharge me interest in 2018? On my last 2018 statement they reported arpx. $2085 in year to date interest paid. Yet, on my 1098 for 2018 they reported only $1551 paid. Can they do that legally? How are penalties applied to a mortgage when payments are late and the mortgage is in arrears? Hi Teresa, It may depend on the loan servicer, but typically there’s a late fee initially and if beyond 30 days late interest may begin to accrue on the delinquent amount as well until brought current. So is there a set amount of interest agreed to be paid at the onset of the mortgage and no matter how many extra payments I make I am still stuck paying the full amount of interest? Or by making extra payments am I paying less interest over the course of the loan, in which case wouldn’t that be re-compounding? There is no set amount if you pay more than originally scheduled – in that case you can reduce your interest expense based on what you pay extra. This will actually accomplish two things – reduce your total interest expense and shorten your loan term. But it will not lower your subsequent monthly payments, those will not change by making extra payments. I think if you make the extra regular payments, then you’ll be stuck with the full interest. But if you make extra principal payments, then that would lower the interest paid because the interest is based on the current principal balance. So if you’re doing the regular payments and it says your next payment isn’t due for x months, you’re taking on the full amount if you continue. I think they made a mistake in their math here: “The second payment would be $998.23 in interest and $534.02 in principal. Meanwhile, those making the standard monthly payment with no extra amount paid would pay $998.56 in interest and $433.69 in principal.” Actually, they would pay 998.56 in interest and 533.69 in principal no? If I am correct, then the next sentence is wrong as well which says: “That’s more than a $100 difference, $100.33 to be exact.” But I get their excitement. It’s correct. The borrower making the standard monthly payment (not the $100 extra) would pay $998.56 in interest and $433.69 in principal for month two. Meanwhile, the borrower paying $100 extra would have a second payment of $998.23 in interest and $534.02 in principal. I’m not sure how much, if any, of this is right. I think they compound monthly. The payment is calculated, if I understand correctly, in such a way that by paying that fixed amount each month (for a fixed interest loan, not ARM, I mean), you will retire the loan in 30 years (or whatever the term is). My recollection is that it’s compounded monthly. But you are definitely not on the hook for the whole interest amount if you make extra payments against the principal. Suppose you paid the loan off with lottery winnings three months in. Unless you have some kind of prepayment penalty baked in, they can’t make you pay interest that you didn’t accrue, because it was paid off. You may or may not have in the agreement to be required to make the monthly payment; in my case if you paid extra against the principal they would consider you “paying ahead of schedule” and they could tell you “oh, you don’t need to make a payment until X date (because it was in their best interest for me to let the interest build back up, of course)”. The idea that you’re on the hook for the full amount of interest even if you pay off principal early is, I think, insane–I would be shocked if anyone’s terms were really like that. Prepayment penalty aside, you can always pay a home loan off early and avoid any interest that hasn’t yet accrued. This is why mortgages are paid in arrears, because you can’t pay interest until it accrues. Are there any 30 yr mortgages where the interest portion is fixed amount per month instead of tilting? So that borrower pays a set principal and interest, the same each month throughout life of Only way I could see interest portion being the same each month is if it’s an interest-only loan where the principal balance never changes.
{"url":"https://www.thetruthaboutmortgage.com/are-mortgages-simple-interest-and-compounded-monthly/","timestamp":"2024-11-14T00:59:18Z","content_type":"text/html","content_length":"272007","record_id":"<urn:uuid:13d6d4b5-c863-407e-9280-7c305d2b83be>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00644.warc.gz"}
Dialethic Dialectic - 3 Quarks Daily Dialethic Dialectic by Carl Pierer Historically, formal logic and Hegel's philosophy's relation has been dominated by antipathy. Classical logic, developing from Aristotelian logic to the Frege/Russell logic of the 20^th century, has largely rejected Hegel because of his overt embrace of contradictions. Hegel, vice-versa, has not been too charitable to the formal logic of his day. In the second half of the 20^th century, however, formal logic has developed massively and in various directions. One of these, paraconsistent logics, have attempted to accommodate contradictions. Classical logic is anathema to contradictions, due to the explosion principle, a.k.a. ex falso quodlibet. A sketch of this principle is the following: since the classical or is non-exclusive, if we start with a true proposition A, the disjunct A or B is true for any proposition B. So, if we have A&~A, we get that A is true and hence A or B is true. But since ~A is true, too, from A or B we get that B must be true. Hence anything follows from a contradiction, or so the classical (and subsequently the Frege/Russell logic) claims. So contradictions seem to be a rather bad thing. Now, paraconsistent logics deny this explosion principle. There are different ways of doing this, but we will stick with Priest's way in his (Priest, 1989). His is a dialethic interpretation, meaning that he thinks there are sentences that are both true and false. This has some interesting consequences. Note, first of all, that this does not mean that all sentences are true and false. Most importantly, most classical notions are indeed preserve. So, we have, for propositions A and B: • ~A is true implies A is false • A is true implies ~A is false • A&B is true only if A is true and B is true • A&B is false only if at least one of A or B is false These are quite orthodox. Now, of course, on the dialethic point of view, A could be both true and false, and suppose B is true. Then A&B is both true and false. Next, we need the notion of logical consequence, which Priest defines also quite classically: A is a logical truth just if A is (at least) true under all assignments of values. A is a logical consequence of B just if every assignment of values that make B (at least) true makes A (at least) true.(Priest, 1989) What does this mean exactly? Well, let A = P or ~P. Now, suppose P is true. Then ~P is false. Hence A is true. Suppose then that P is false. Then ~P is true. Hence A is true. (This again is classical). Suppose finally that P is both true and false. Then ~P is both true and false. So A is both true and false. This shows that for all assignments of truth values (true, false, both) A is (at least) true. Hence A (the law of the excluded middle) is a logical truth, even for this dialethic logic. Where things get interesting, however, is with the notion of logical consequence. We run into trouble if we attempt to show that anything can be deduced from a contradiction. To see why, let's proceed in the same manner as before: Suppose we start with a proposition A which is both true and false. We can then form the disjunct A or B. Since A is both true and false, ~A is also true and false. In particular, we have that ~A is true. At this point, however, we run into a problem. We cannot deduce that B is true. For everything so far mentioned can be true, and B could still take the value false. Hence, by the definition of logical consequence, B is not a logical consequence of A&~A. In the article under discussion, Priest uses this sketch of a dialethic logic to argue that Hegel can consistently be read in a dialethic fashion. That is, that Hegel does indeed claim that there are true contradictions. While Hegel, according to Priest, is quite clear about this, subsequent interpreters have attempted to interpret Hegel's use of "contradiction" in a non-logical sense. Priest wishes to show that Hegel means what he says, i.e. logical contradictions. With this sketch of a more powerful formal logic developed above, we can see that there is at least no immediate (logical) objection to proceeding in this way. Priest, then gives an example how dialethic logic could be applied to dialectics. Consider movement. For Hegel, for an object b to be in a state of motion means to be at position s at time t. Yet, this is not all. If it were, then b 's being in motion would be no different from its being at rest. Instead, Hegel suggest, b is also not at position s at time t. Hence, to be in motion is both to be and not to be at a point at a certain time. So, the difference between rest and movement is that the sentence "b is at s at time t" is true only (i.e. consistent) when b is at rest. It is true and false (i.e. paraconsistent) when b is moving. We can see here the nice link that Priest makes with Hegel. Classical formal logic does have its place in the static, in the consistent. However, as soon as change enters the picture it becomes necessary to think paraconsistently. Of course, many objections can be raised to this characterisation. An important one is that this brand of formal logic seems quite happy with the fact that this is merely an external contradiction. What Priest means with this is that "b is at s at time t" and "b is not at s at time t" are perfectly meaningful in their own right and can be asserted independently. For the dialectician, however, the contradictions of interest are internal. That is, they are provoked by the concepts themselves. They are not accidental and depend on each other. Priest has quite a nifty reply to this. First, he introduces a logical operator ^, which turns a proposition into a noun phrase. What this means is that if A is the proposition "Mozart is a great composer", ^A would be "Mozart's being a great composer". Priest notes in particular that ^A is an object. While it is not entirely clear what he means by an object, it seems that it conforms more or less with the mathematical notion of an object. He writes: "Which object it denotes we may assume very little about". Furthermore, Priest needs the T-scheme: ^A is true if and only if A is true. There is some intuitive sense as to why this should be true and for the present purposes we shall contend with that. Secondly, identity statements (of the form a = b) will play a special role. Priest points out that they behave in quite the conventional way, with the law of identity (a = a) holding and the substitutivity of identicals being preserved. Of course, a statement like a = b can be true, false or both, like all statements for the dialectician. Note that Priest suggests it is always true that ^A ≠ ^~A. Since ^A can be thought of as an object and ^~A its opposite (for example "Mozart's being a great composer" and "Mozart's not being a great composer"), and since an object is not the same as its opposite, Priest considers this a natural requirement. Now, this allows us to illustrate the formal rendition of the intrinsic contradiction. If we suppose that A stands for "b is at s at time t", then Priest claims "we may take the instantaneous contradiction produced in a state of motion to be that the body's being in a certain place is its not being in that place, ^A = ^~A". Now however, this gives rise to the contradiction "A&~A" in the following way: Since we have the law of the excluded middle (A or ~A), we may assume without loss of generality that A is true. Hence, by the T-scheme, we have ^A is true. Since ^A = ^~A, by the substitutivity of identicals, we have that ^~A is true and hence – T-scheme again – that ~A is true. Therefore A&~A is true. We hence see that the kind of contradiction we have here is of the form (a = b) & (a ≠ b). The former is the contradiction of motion ^A = ^~A. The latter is always true, as was mentioned before. Priest thinks that this captures the more intimate relation of the internal contradiction: (…) the poles of a dialectical contradiction must have a tighter relation than mere extensional conjunction. For the poles of the identity in difference (a = b) & (a ≠ b), a and b are actually identical with (though different from) each other; (dialectical) identity is therefore the relationship between the poles of dialectical contradiction. (Priest, 1989) Of course, there are several ways to attack this proposal and to reject it. First of all, the dialethic claim of true contradictions is outrageous to many. What kind of truth, exactly, are we talking about here? Priest addresses this point in particular in his book Doubt Truth To Be A Liar. Secondly, dialectics is often framed a logic of concepts. How does the dialethic interpretation deal with this? (Ficara, 2013) replies to this. Yet, instead of going into the nitty-gritty of the debate, I would like to sketch why such an approach is interesting and why it would be good to understand its limitations. There are three main First, attempting to give a formal interpretation of Hegel allows to open a dialog. People feeling ill at ease with Hegel's use of contradictions can come to see that not all hell breaks loose and that the view need not amount to trivialism (where everything is true). Conversely, scepticism towards formal logic and mathematical thinking could be redeemed in the eyes of dialecticians. But this is more of a hope and perhaps not even the most interesting aspect. Second, while it is all fine and good to suggest that when Hegel criticised formal logic he only meant the consistent type, it would be interesting to see in what way dialethic logic can render the movement that dialectic describes. Formal logic (whether consistent or paraconsistent) seems to fix states, through definitions, through formalisation, thereby denying their internal development. To formulate it provocatively: to understand if this poses a deeper obstruction (deeper than the consistent/inconsistent divide) would help appreciate whether there is a need for a non-formal thinking. Third, Hegel's dialectic has subsequently found political interpretations in such thinkers as Marx, Engels, or Adorno. If his dialectic can be rendered in dialethic logic, how about the political ones? Understanding how politics enters into formalisation would allow us to appreciate if indeed there is a link between the kind of formal logic and the kind of politics it allows to express. Of course, all of these are big questions that are beyond the scope of the present essay. However, I hope that this sketch of Priest's argument helps to appreciate that formal logic and dialectic thinking need not be opposed. Indeed, that exploring the way in which they might work together, gives rise to many deep questions. Ficara, E. (2013). Dialectic and Dialethism. History and Philosophy of Logic, 35-52. Priest, G. (1989). Dialectic and Dialethic. Science and Society, 388-415.
{"url":"https://3quarksdaily.com/3quarksdaily/2018/02/dialethic-dialectic.html","timestamp":"2024-11-09T13:18:19Z","content_type":"text/html","content_length":"61704","record_id":"<urn:uuid:18a1cbcf-4d82-4d3d-bfd1-c8ca5a5b9a7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00345.warc.gz"}
Temperatures: Measure Thrice At Least…Before Reporting Once G. Raymond Peacock Temperatures.com, Inc. Southampton, PA 18966 Ph: 215-436-9730 www.temperatures.comThis email address is being protected from spambots. You need JavaScript enabled to view it. A measurement of any kind has error. Carpenters know this well and they often measure twice before cutting once. Common tape measures have resolutions down to 1/16th of an inch, but errors of as much as 1/2″ are not uncommon. The best measurement is a statistical assessment of the result of repeated measurements. Measurements always have accompanying uncertainties that can be quantified and reported by measuring more than once. This paper explains terminology and important statistics to help understand the basics of measurements, with an emphasis on infrared temperature, and the several key influences Nature and Man have on the process of dealing with them. If you have read the first modern standard on non-contact temperature sensor (radiation thermometer) measurement, ASTM E1256-15, you learned that even in a calibration lab it recommends the average of at least three measurements of blackbody source temperature in verifying an infrared thermometer’s calibration. Thus, measure thrice, report once is a more reliable approach to getting the best practical result. “Lies, damned lies, and statistics” is a phrase describing the persuasive power of numbers. But anytime one makes a measurement, of any parameter or variable, one is entering the world of numbers and statistics. To try to avoid them is not only impossible, it is foolish. If you report temperatures or temperature difference, better take care to do it correctly. Why are statistics involved? It’s like the old carpenter’s adage: “Measure twice, cut once”. Statistics is the way to understand and correct results for measurement errors rather than to pretend they don’t exist. Every measurement has an error and it takes at least three tries to get some idea of its size. So the carpenter’s rule should, in truth, state: “Measure at least thrice before cutting Thermographers, Meteorologists and Metrologists all face the same issues with measurements in reporting Temperatures, Weather properties and Calibrations, respectively. They all do, or are well-advised to, use statistics in a professional, but not difficult, way. However, this is getting ahead of the story I want to tell; the one about how you get to the point of using statistics, and the why and how. “Infrared Camera Accuracy and Uncertainty” is the title of a recent online article by FLIR Systems (http://www.flir.eu/science/blog/details/?ID=74935) in an effort to help users better understand some of the terminology around measurement errors, but it didn’t delve very far into the statistics beyond a few formulae. My news website reported on it along with some additional resources at http://www.tempsensornews.com/generic-temp-sensors/infrared-camera-accuracy-and- uncertainty/ The original article is on the European FLIR Blog at: http://www.flir.eu/science/blog/details/?ID=74935. Looking back, I realize that 14 years ago, I attempted a similar explanation at the 2003 IR/INFO (Reference 1). It covered more ground than the FLIR article, but tried to do too much, I now realize. In my opinion, neither article is really simple. The topic is complex. As a former mentor of mine used to say: “If you break down a complicated subject into basic components and explain those well, then things get easier to understand.” So, here goes another attempt to really make this very important topic easier to understand by considering first some fundamental pieces. Some Fundamentals: A Review Laboratory measurements quantifying the calibration uncertainty of a thermal imager involve pointing the camera at a calibrated, uniform blackbody source and recording/reading the output temperatures over a period of time. The test is repeated at different source temperatures and the differences in measured versus standard temperatures are measured, errors reported and lab uncertainty calculated for each calibration point or for the entire series. There are various methods. Uncertainty is a measure of the dispersion of the errors of the individual measurements. Lab calibration personnel follow prescribed standards and procedures to produce a calibration report or match a lab requirement for calibration uncertainty. To emphasize: individual measurements have errors and the dispersion of the errors is quantified as the measurement uncertainty, or the likely region in which the true measured temperature lies. This is true in a calibration lab and in the “Real World”, but it is more difficult in the “Real World” as I will explain. The terminology properly used states that the uncertainty in a measurement result is a numerical value plus or minus some variation for a set of confidence limits. A typical uncertainty statement, say for a calibration certificate would look like: The Calibration Uncertainty of this device at a temperature of 212 Degs F is +/- 1 Deg F with a confidence limit of 95%. This may sound foreign to some, so here are a few terms that are worth describing in more detail before going further. First, Accuracy: Accuracy is a term that describes how closely a measuring device comes to a standard, but not in numbers. So, someone who states that the accuracy of their measurement device is, say, 2% or 2 degrees, is not being precise, to be precise. They mix apples and oranges. “Accuracy” is a qualitative or descriptive term, expressed in words, like: “That instrument is in the 10 Degrees F accuracy category”. “Error” is the amount by which an individual measurement departs from some norm. It is quantitative. It is expressed numerically, like: The error at 100 Degrees F is +3 Degrees. “Uncertainty” is quantitative and is defined as follows (Reference 1): “A parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand (VIM Ref 4); the range assigned to a measurement result that describes, within a defined level of confidence, the limits expected to contain the true measurement result. Uncertainty is a quantified expression of measurement reliability.” Uncertainty is a property of each measurement, a measure of the dispersion of the errors of an instrument under certain conditions. Errors are variable and most easily measured in the calibration or standards laboratory, traceable to the International Temperature Scale of 1990 (ITS-90), or a later standard. Every temperature value on that scale has established uncertainty values also. One can learn more about ITS-90 details by visiting the website of the International Bureau of Weights and Measures, called BIPM, at http://bipm.org. The book, “Traceable Temperatures” has been my “Go To” reference since the early 1990’s when it was first published (Reference 2). I highly recommend the latest Edition of this book for anyone who is serious about measuring temperature by any means, contact or non- contact. It helps greatly in understanding the properties of many different types of temperature sensors, including Radiation Thermometry. Plus, it has a great introduction to both temperature and uncertainty in measurement in measuring it. There are, in fact, two fundamentally different uncertainties that affect a measuring device’s results. They are: Calibration Uncertainty and the Application Uncertainty. The overall measuring uncertainty that a Thermographer in the field must deal with is their combined effects. The statistically correct method of combining uncertainties is described nicely in the FLIR article as the Root Sum of the Squares (RSS) combination of errors, or, Total Uncertainty squared = Calibration (C) Uncertainty squared + Application (A) Uncertainty squared Or Total U^2 = CU^2 + AU^2 Total U^2 = CU^2 + AU^2 = CU^2 + IU^2 + ExU^2 There are many examples of the methods used to describe measurement uncertainty; some that I know are listed in the References section at the end of this paper. One very useful reference is the free online white paper from Beamex entitled Calibration uncertainty for non-mathematicians (Reference 3). Yet another, a paid download at the SPIE Digital Library, is an excellent little book by Dr. Peter Saunders of New Zealand’s National Measurement Institute, Radiation Thermometry, Fundamentals and Applications in the Petrochemical Industry (Reference 4). Not only is this a very thorough coverage of the principles of Radiation Thermometry, the same technology that a thermal imaging camera uses to produce temperature readings, it provides a series of worked uncertainty examples in radiation thermometry field measurements. So, why and how do we get to “Measurement Uncertainty”? And, furthermore, what is it really and why haven’t we heard about it before? The two basic calculations to quantify the dispersion of a series of measurements in the field, and calculate the uncertainty, are the Mean (M) of the measurements followed by calculation of their Standard Deviation (SD). The mean is the sum of the individual measurements divided by the number of measurements. Graph by JRBrown – Own work, Public Domain, But the mean is not enough to quantify a sample set of measurements. The graph above shows two sets of data with the same mean value, but widely different spread, or variation, in the data. The standard deviation is the square root of the variance of the measurements and, in turn, the variance is the average of the squared differences from the Mean. The basic uncertainty for a random, or Normal distribution of the results is expressed as U = +/- SD. This yields the uncertainty with a confidence limit of 68% for a very large set of measurement data points. In such a case the true value, within a confidence of 68%, lies between the average value and plus or minus one standard deviation. The expanded uncertainty of the results is expressed as U = +/- k*SD, where k is the coverage factor of 1, 2, or 3. The k values stand for confidence limits of: 1 for 68%, 2 for 95% and 4 for 99.7%, for a random or Normal distribution of the measurements. Then, once the Application Uncertainty is determined, it must be combined, as described above, with the Calibration Uncertainty to correctly calculate the overall uncertainty. The key to doing it correctly is that both the major uncertainty components are at the same level of confidence. That latter requirement makes it extremely difficult to calculate – with some certainty – if the manufacturer’s literature, or supplied calibration certificate, does not specify the calibration uncertainty of its products. The international effort to improve manufacturing product Quality resulted in ISO 2000 standards. It also resulted in the development of The Guide for the Use of Terminology in Measurements (GUM) (Reference 5) and the ISO 17025 standards a little later that have been adopted worldwide by most Standards authorities, including ASTM. In the past 25 years, they have become the international and national norms for reporting and describing measurement results in science and technology. This terminology is probably new to most readers, but it is the terminology that those who are serious about measurement and truly understand it, use. The first error is described in the Traceable Calibration Certificate for the device, usually provided by the device supplier, and often beyond the capability of the user. Although, it is not too difficult to periodically verify that a device has not shifted in calibration beyond the maker’s specification. The Application, or Use Error is another story. That’s where the “Measure Thrice” (at least) comes in and where one has to work a little harder to quantify the dispersion of measurement results. Measuring a very large number of data points is not easy in the field, but one or two are not enough, especially if one hopes to produce reliable results. Application Uncertainty: Measuring Thrice (or more) The best way to find the measurement uncertainty in an application is to take a series of measurements, if possible and required, on one or more locations on the object area of interest. If you stick with three measurements, you add the three results and divide by 3 to get the average result. Note, too, that if you take more measurements, the SD of the results gets smaller by the square root of the number of measurements. You need to calculate the measurements’ standard deviation, as described above. This alternately can be done by using a basic scientific calculator; all have built-in features for both quantities. So too, do spreadsheets such as MS Excel and Apple Numbers. First, the more measurements that you take, most times, helps reduce your standard deviation, since the coverage factors described above are for a normal distribution of results about the mean value. See the outer curve on the graph here. Then, the assumption that most make, is that the variations in measurement results are based on a randomness of the temperature variations we measure. NOTE: If you do not see variations in repeated measurements of your objects’ temperatures, then your measurement device is insensitive to them and you have the equivalent situation of using a yardstick to measure the diameter of a thread. In such a case you have no idea of its actual diameter or the variations in it. If you do see measurable variations in the temperature and they are within your measuring temperature limits, then you have a correct starting point. Your measured temperatures are what statisticians call samples of a population with n elements or values. If the distribution of the values are random, then your samples’ parameters of mean and SD values will be related to the mean and SD of the This problem has been studied many times in the past by many mathematicians. It depends on the number of degrees of freedom in a given measurement, or the number of measurements. If there are n measurements, there are n-1 degrees of freedom. This is where the details begin to get complicated and one has to get into more detailed statistics and something called the student’s t-distribution and statistics. Student’s t- distribution – plots of 1, 2, 3, 4, 5,10, 20, 50 and an Infinite Degrees of Freedom (or 2, 3, 4, 5, 11, 21, 51) and an infinite number of samples, or the Normal Distribution (limiting distribution) The t-distribution plays a role in a number of widely used statistical analyses, but here in this graph, one can see the differences between the Student’s t-distribution and the Normal, or random, The same graph used above and shown here again shows the probability that the standard deviation of a sample will fall within the limits shown by the curve. The inner curves are for “small” samples, typically less than 50. Above 50 the Normal distribution is implied since it is so close to it. The easiest way to find the probability associated with less than 51 samples is to use the t Distribution Calculator, a free tool provided by Stat Trek online at http://stattrek.com/online-calculator The consequence of all this statistics talk is to show that taking more samples (making more measurements) improved the calculation of the Uncertainty by reducing the size of the standard deviation times the coverage factor for a particular confidence interval. It shows that in the case of three measurements, the k factors to be used in the above formulas for calculating the uncertainties change the coverage factors as follows: • For 68% confidence intervals, 2 samples will have a SD multiplier of 1.8, for 3 samples, it’s about 1.3, 5 samples it’s 1.1 and 50 samples it’s 1.0. • For 95% confidence intervals, 2 samples will have a SD multiplier of 12.7, for 3 samples, it’s about 4.3, for 5 samples it’s 2.8 and for 50 samples it’s 2.0. • For 99.5% confidence intervals, 2 samples will have a SD multiplier of 127.3, for 3 samples, it’s 14.1, for 5 samples, it’s 5.6 and for 50 samples, it’s 3.0. Bottom Line: measuring thrice over twice is a big improvement in reducing the amount of measurement uncertainty when one seeks the best confidence in the results. Measuring even more samples reduces Uncertainties even more. To Recap: 1. Multiple measurements in the field are required to determine temperature and temperature difference averages and variation, with confidence limits, so as to be able to state the results of your measurements in statistically correct terms. 2. A full statement of results must include the calibration uncertainty using the same confidence limits. 3. It’s not very difficult to do the above, but it takes some understanding and care. The proper professional way to report measured temperatures is to use the technical terms and calculations agreed upon internationally among measurement professionals of all nations and their agreed vocabulary. It has been written about and widely publicized for more than 15 years. There are numerous free and paid resources to help thermographers learn how to use them. In this paper I have only discussed what are called Type A Uncertainties. There are also Type B and then some way to combine them. Suffice it to say, the dominant problem in the field one is usually determining are Type A ones. Additional resources to learn more about Type B Uncertainties and the Student’s t- distribution table are contained in the References Section. 1. Confidence Limits in Temperature Measurements, G. Raymond Peacock, IR/INFO 2003 2. Temperatures (An Introduction to Temperatures Measurement and Calibration), Second Edition, J. V. Nicholas & D. R. White, John Wiley & Sons, LTD, 2001 3. Calibration uncertainty for non-mathematicians, Free download at: http://resources.beamex.com/calibration-uncertainty-for-non-mathematicians. (This online white paper discusses the basics of uncertainty in measurement and calibration. The paper is designed for people who are not mathematicians or metrology experts, but rather to people who are planning and making the practical measurements and calibrations in industrial applications.) 4. Radiation Thermometry, Fundamentals and Applications in the Petrochemical Industry, Peter Saunders, SPIE (tutorial texts in optical engineering) 2007. (Available online at http://spie.org/ 5. GUM: Guide to the Expression of Uncertainty in Measurement is a free download online at: http://www.bipm.org/en/publications/guides/gum.html 6. NIST/SEMATECH e-Handbook of Statistical Methods, https://www.nist.gov/programs-projects/nistsematech-engineering-statistics- handbook, (A free Web-based book written to help scientists and engineers incorporate statistical methods into their work as efficiently as possible.) NIST (USA) 7. Introduction to Measurement Uncertainty, NPL e-Learning Program is a paid set of online videos (intro video is free) at: http://www.npl.co.uk/commercial- services/products-and-services/training/ 8. Essentials of expressing measurement uncertainty, http://physics.nist.gov/cuu/Uncertainty/index.html, NIST (USA). 9. NPL’s Beginner’s Guide to Temperature, http://www.npl.co.uk/educate-explore/factsheets/temperature/temperature-(poster), National Physical Laboratory, Last Updated: 7 Aug 2015 10. The kelvin, https://www.youtube.com/watch,v=PzoxxNefCUw&index=3&list=PLBB61840785E D0B1D, and http://www.npl.co.uk/reference/measurement-units/si-base-units/the- kelvin, National Physical 11. VIM: International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM 3rd edition) – JCGM 200:2012 (JCGM 200:2008 with minor corrections) http://www.bipm.org/utils/ 12. Calibration and R&R Practices for Reliable Temperature Measurements, G. Raymond Peacock, IR/INFO 2005 13. The Role of Standards & Calibration in IR Thermography, G. Raymond Peacock, IR/INFO 2011 14. Procedure for calibration and verification of the main characteristics of thermographic instruments- (Confirmed 2008-10-31) is online as a free downloadable pdf document. (www.oiml.org/
{"url":"https://irinfo.org/articles/05-01-2017-peacock","timestamp":"2024-11-03T02:49:38Z","content_type":"text/html","content_length":"62011","record_id":"<urn:uuid:6acfdcd2-7ba1-4f59-91bc-1cc8ec33580a>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00059.warc.gz"}
Sciencemadness Discussion Board - Science equipment looking for a home - Powered by XMB 1.9.11 12 13 14 15 .. 31 repo1030 posted on 26-1-2014 at 17:45 Posts: 7 I ordered a 24/40 distillation kit from Dr. Bob several days ago and it arrived yesterday (1-25-14). Registered: 17-2-2013 Each piece was neatly wrapped in bubble wrap and carefully packed into the shipping box. Everything arrived in pristine condition and I couldn't be happier. In addition to the glassware, I also ordered 2 ring stands, 3 large 3-finger clamps, and 3 boss-heads (which I was not charged for). I also received free rubber gloves, plastic/glass vials, and a Member Is Offline handful of disposable pipettes. All of which always come in handy during lab sessions. Mood: All of this plus shipping...$160! Cold! On top of all of this, his communication with me on any questions I had were answered in a timely manner. I would definitely order from him again ( and I'm sure I will in the near future). So if you're reading this thread and are hesitant on ordering from him, do yourself a favor and send him a message. You won't regret it. bfesser Thread Topped 26-1-2014 at 18:11 Resident Wikipedian Dech92 posted on 31-1-2014 at 11:34 Posts: 1 Registered: 18-1-2014 Location: Ontario Hi Bob, if you find the time could you make a updated spread sheet? Member Is Offline No Mood quantumchromodynamics posted on 1-2-2014 at 05:50 Hazard to Self Posts: 67 Registered: 25-9-2013 a thumbs up for bob Location: with much determination, Reporting > $200 shipment from Dr. Bob. Each object individually bubble wrapped. Related lots individually boxed. Individual boxes double boxed into shipping box. Variety of nowhere in particluar distillation glass is exceptional. I have never seen such massive thick walled vacuum flasks before. 2L separation funnel is outstanding! Communicates well. Prices very reasonable. Useful extra things added to shipment. Surprise! Member Is Offline tired but still Dr.Bob posted on 1-2-2014 at 19:29 International Hazard Posts: 2724 I am running out a few things, out of most 24/40 round bottom flasks, out of most beakers, and a few other things. I still have some grad cyls, filter flasks, Buchners and frit funnels, and lots of 14/2- glassware. I have almost gathered together another complete 14/20 kit, if anyone is interested in a 14/20 setup, I can make up one with almost Registered: 26-1-2011 everything, only missing larger flasks, but I have a few left up to 250ml. I can do a complete kit with a mix of 10-12 flasks (1-3 necks in 10-250 ml), distillation adapters, vacuum inlets, claisen adapter, condenser, gas inlets, stoppers, and more for $80 or up, depending on what you want. I still have lots of 24/40 adapters and fittings, just Location: USA - NC running out of larger rbfs. I also just found 2 more sealed packages of 100 2.5 x 7.5 cm TLC plates which I will sell for $50 plus postage (about $5-7 within the US, ask me for other countries). I also have found more silica gel cartridges, in several sizes and styles, if anyone has a use for them (free samples with any purchase). I will mention that Member Is Offline postage rates just went up yet again, and they raised parcel post rates a fair bit yet again, so please be aware that almost every package will be a few dollars more now than Thanks to everyone for their patience, I was sick for a little while, but doing better now. No Mood Attachment: Glassware Inventory.xls (70kB) This file has been downloaded 902 times PS, if anyone is interested in some scratch and dent glassware, I have a large number of filter flasks, erlenmeyers, and other glassware that have minor chips and scratches, etching, or dirt that don't make them unusable, but do make them less ideal for sure. I am working on a list/sale of them, but for sure I have several 125, 250, and 500 ml filter flasks with chipped tubulations (the tube sticking out to attach the hose to - I think), etc. i am willing to sell them for just a little over the postage to ship them, especially if someone can fire polish the rough edges. I also have a couple of nice items that need some real glassblower repair if anyone knows a glassblower I would be happy to sell cheap. [Edited on 2-2-2014 by Dr.Bob] SigurdMunn posted on 4-2-2014 at 13:58 Posts: 1 Dean and Stark Apparatus Registered: 4-2-2014 Location: USA Does anyone have a Dean and Stark apparatus (or something similar) they're trying to let go? Amazon etc. doesn't seem to have much of a selection under $125 and I don't really know where to look for cheap equipment. Gracias! Member Is Offline No Mood zenosx posted on 4-2-2014 at 14:08 Hazard to Others Posts: 188 Just received my second order from the good Dr. and couldn't be happier. Professionally wrapped, no breakages, extra little goodies here and there and good prices. Registered: 7-7-2012 Thanks again Dr.! Location: East TN / Near Oak Ridge Member Is Offline A question that sometimes drives me hazy: am I or are the others crazy? Albert Einstein Awaiting Results.... Dr.Bob posted on 5-2-2014 at 04:53 International Hazard Posts: 2724 Registered: 26-1-2011 Quote: Originally posted by SigurdMunn Does anyone have a Dean and Stark apparatus (or something similar) they're trying to let go? Amazon etc. doesn't seem to have much of a selection under $125 and I don't really Location: USA - NC know where to look for cheap equipment. Gracias! Member Is Offline I have them in 24/40 still, for $20 plus shipping. Will that work? No Mood UnintentionalChaos posted on 6-2-2014 at 10:39 International Hazard Posts: 1454 I just got my third order from Dr. Bob. I swear, every time I order something, it's like Christmas again. I think there are more extras in here than the stuff I paid for. I Registered: 9-12-2006 couldn't be happier. Everything is, as always, extremely well packed. Location: Mars Member Is Offline Department of Redundancy Department - Now with paperwork! 'In organic synthesis, we call decomposition products "crap", however this is not a IUPAC approved nomenclature.' -Nicodem Dr.Bob posted on 12-2-2014 at 14:33 International Hazard Posts: 2724 Registered: 26-1-2011 Here are a few more things i have found recently. There are some flat bottomed round boiling flasks. I have lots of the 24/40 250 ml ones ($4 each), a few 250 ml w/o joints and a few each of the 500 ml ones without joints. I have a few of the west condenser in 24/40 ($20 each), plus some various Liebig condensers ($20, $25, and $30 for baby, momma and Location: USA - NC papa). I also have some drying pistol pieces, but only the one shown, I am missing the other half ($10 ea or make me an offer). And lastly a liquid/liquid extractor in 24/40 ($40) shown alone or with the matching condenser ($25) or both for $60. I have better files I can email people if they send me an email address. Member Is Offline No Mood HeYBrO posted on 20-2-2014 at 02:19 Hazard to Others Posts: 289 Registered: 6-12-2013 I received my order from the famous Dr.Bob today. He said it would take 3-4 weeks got here in about 2.5! Everything is as he said, plus I got HEAPS of vials extra!! Excellent communication, glassware, prices and postage! Highly recommend! Location: 'straya Member Is Offline Oscilllator posted on 20-2-2014 at 02:48 National Hazard Posts: 659 Registered: 8-10-2012 Order arrived promptly and well packed. Pleasure doing business! If I happen to find myself with some spare money, this is definitely where I will go. Location: The aqueous Member Is Offline No Mood quantumchromodynamics posted on 28-2-2014 at 17:25 Hazard to Self Posts: 67 Registered: 25-9-2013 another good shipment from dr. bob Location: with much determination, At this point I hope everybody feels safe ordering from Dr.Bob, but I still wanted to say, ordering from Dr. Bob is a pleasure. He takes his stuff seriously, the packing is nowhere in particluar great, he does not quibble about silly bull shit, and he is very helpful with questions. Don't order any glass from e-bay if Dr. Bob can help you. You are just wasting time and money with e-bay if Dr. Bob can assist. Member Is Offline tired but still Dr.Bob posted on 28-2-2014 at 20:23 International Hazard Posts: 2724 Thanks to everyone, I have been busy lately. Recently I found some more 14/20 glassware and have tried to put together a couple more 14/20 kits, plus lots of extra pieces. Here are some photos below. The kits would each be $100, but I would be happy to add a few flasks or other small items to them at no additional cost. One has a lot of Kontes glass, Registered: 26-1-2011 the second one is mostly Ace glassware. Location: USA - NC The coil condensers would add $20, and I have extra one Liebig with a longer bottom joint, but it would work with most flasks, just sticking in further, only $15. I have a few short paths left, normal ones are $60, vacuum jacketed ones are $100. Then a bunch of odd pieces, make me an offer or ask. These would be good pieces to add to a kit. Then a Member Is Offline small sample of some multineck flasks (most listed in the spreadsheet) and some distillation heads, starting at $15 for the simple ones, and up. But I would offer some discounts on extras for any kit purchased as well. Same discounts for any purchase of multiple flasks. I have round, oval, peal, and other flasks, in 1, 2, and 3 neck versions Mood: from 25 to 250 ml size in 14/20. No Mood [Edited on 1-3-2014 by Dr.Bob] Mr_Magnesium posted on 28-2-2014 at 22:26 Hazard to Self Posts: 60 Registered: 4-8-2013 Any Australians/West coast looking to purchase glassware? Location: \rooted/ Hoping to find a glassware buddy to cut shipping costs Member Is Offline No Mood numos posted on 3-3-2014 at 19:32 Hazard to Others Posts: 269 Registered: 22-2-2014 Location: Pasadena My package arrived last Saturday, two days after I paid for it! Normally I don't give feedback but this is well deserved. Everything as described, nothing broken, and a bunch of extra goodies that made opening the box a delightful experience. Definitely going to buy from Dr. Bob again. Member Is Offline No Mood Dr.Bob posted on 22-3-2014 at 19:17 International Hazard Posts: 2724 Registered: 26-1-2011 I found some more Pipettors recently, most are Gilson Pipetman type, but a few others as well, in 2, 10, 20, 100, 200, and 1000 ul size. I have been asking $50 each, but if Location: USA - NC anyone wants a set of 3 I can sell any three for $100 plus a few tips, and six (one of each size) for $200, plus the fancy rotating rack and a good supply of tips. If you want other brands, or other types, please let me know, and I can see what I have. I am working on photos of more of them, but this is what the Gilsons look like. Member Is Offline No Mood HeYBrO posted on 23-3-2014 at 02:07 Hazard to Others Posts: 289 Registered: 6-12-2013 Did you find any 10/18 non-Hg thermometers too ? Location: 'straya Member Is Offline Dr.Bob posted on 24-3-2014 at 18:14 International Hazard Posts: 2724 Registered: 26-1-2011 Location: USA - NC I have not found any more non-Hg therms recently. I wish I had some, but there are places that sell them, some of which are listed in here, if you search around. They come on Ebay sometimes, just have to watch it every week. And the Chinese ship them anywhere, as they don't seem to understand postal regs, and no one seems to stop them. Member Is Offline No Mood Dr.Bob posted on 29-3-2014 at 08:45 International Hazard Posts: 2724 Registered: 26-1-2011 Here is another set of pipettors from Eppendorf, 10, 20, 100, 200, and 1000 ul. It does come with a lovely stand (not pictured). I'll do the entire set for $175 and throw in some sample tips as well. Location: USA - NC I do have some other various loose pipettors from 2 - 1000 ul, which I would sell for $50 each, Gilson, Eppendorf, Rainin, Finn, etc Member Is Offline No Mood Dr.Bob posted on 29-3-2014 at 09:32 International Hazard Posts: 2724 I finally found another box of larger flasks. First priority on these goes to people who had asked for a particular size first when I was out, but if they don't get back to me, then I will sell them to the first person who asks. Registered: 26-1-2011 First is a 3L 24/40 one neck rbf, in good condition, no scratches or chips, asking $20 for it. Location: USA - NC Next is a 3L 3 neck, center is 34/45 joint, both sides are 24/40, asking $40 for it. Member Is Offline Then a 2L 3 neck with center 29/42, both sides are 24/40 joints, asking $30 for it. Then I have a couple Pyrex brand 2L rbfs, 24/40 one neck, in good condition, asking $15 for each. (no photo) No Mood Next, a 300 ml 3 neck, center is 24/40 sides are 19/29 joints, asking $10. (no photo but I can provide one) Lastly, I have three 250 ml equilibrated addition funnels left now that are complete with stopcocks and working well. They are $60 each, if interested, please tell me which one you prefer. I do also have some non-equilibrated ones if anyone is interested, for $30. copperastic posted on 29-3-2014 at 13:20 Hazard to Others Posts: 158 Registered: 15-3-2014 Dr.Bob could you possibly gather some stuff for a distillation kit then see if i want it? Location: In your Thanks Member Is Offline The Volatile Chemist posted on 30-3-2014 at 04:49 International Hazard Posts: 1981 Registered: 22-3-2014 How long do you think you'll be doing this? In a year to a year and a half, I'll have more leeway with my parents to buying things online that aren't on specific sites (eg. Location: 'Stil' in Amazon, E-bay). Just curious. the lab... Member Is Offline Dr.Bob posted on 30-3-2014 at 11:38 International Hazard Posts: 2724 Registered: 26-1-2011 Unfortunately, I am hopeful to be done with most of this within the next year or so. My goal is to sell the stuff left from my friends business, although there seem to be a Location: USA - NC growing number of businesses going out in the US, so I have seen a lot of surplus stuff around the area. But if you want something specific, I can always list it on Ebay, and I have a small number of things on Amazon (they only allow most sellers to list things that have a UPC code or are already in their system, so that is not many science things). Member Is Offline Ebay just adds more fees, so it is only cost effective on smaller or cheaper items, unless you like higher prices. They add 15% fees on top of the paypal and other fees, and that is on the item AND the postage, so that drives the cost up to over 20% on most items. No Mood Chemosynthesis posted on 31-3-2014 at 20:42 International Hazard Posts: 1071 Registered: 26-9-2013 Placed a $300 order with Dr. Bob, which has been an absolute pleasure thusfar. Will let everyone the good news when it comes into my possession. Member Is Offline No Mood mmmmpie posted on 31-3-2014 at 22:39 Posts: 1 Registered: 31-3-2014 Location: UK Have you still got a 14/20 kit available and what sort of postage costs would it be to the UK? Member Is Offline Always Happy 12 13 14 15 .. 31
{"url":"http://www.sciencemadness.org/talk/viewthread.php?tid=15667&page=13#pid320979","timestamp":"2024-11-05T09:50:46Z","content_type":"application/xhtml+xml","content_length":"90099","record_id":"<urn:uuid:b56fc7c5-2e37-4815-a10c-41baf5093307>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00576.warc.gz"}
Practical Questions on Accounting Standard 9 : Revenue Recognition - Indian AccountingPractical Questions on Accounting Standard 9 : Revenue Recognition For better understanding of AS -9 Revenue recognition, it is advised to go through these practical question answers. These selective questions and answers for accounting standard definitely going to help you for better grasp AS-9 fundamentals. 1. Arjun Ltd. sold farm equipments through its dealers. One of the conditions at the time of sale is, payment of consideration in 14 days and in the event of delay interest is chargeable @ 15% per annum. The Company has not realized interest from the dealers in the past. However, for the year ended 31.3.2006, it wants to recognise interest due on the balances due from dealers. The amount is ascertained at Rs. 9 lakhs. Decide whether the income by way of interest from dealers is eligible for recognition as per AS 9. Answer : As per AS 9 “Revenue Recognition”, where the ability to assess the ultimate collection with reasonable certainty is lacking at the time of raising any claim, the revenue recognition is postponed to the extent of uncertainty inverted. In such cases, the revenue is recognized only when it is reasonably certain that the ultimate collection will be made. In this case, the company never realized interest for the delayed payments make by the dealers. Hence, it has to recognize the interest only if the ultimate collection is certain. The interest income hence is not to be recognized. 2. Y Ltd. used certain resources of X Ltd. In return X Ltd. receives Rs. 10 lakhs and Rs. 15 lakhs as interest and royalties respectively, from Y Ltd. during the year 2007 –2008. State on what basis X Ltd. should recognize their revenue, as per AS 9. Answer : As per AS 9 on ‘Revenue Recognition’, interest of Rs.10 lakhs received in the year 2007-2008 should be recognized on the time basis, whereas royalty of Rs. 15 lakhs received in the same year should be recognized on accrual basis as per the terms of relevant agreement. 3. According to Accounting Standard 9, when revenue from sales should be recognised? Answer : As per para 11 of AS 9 ‘Revenue Recognition’, revenue from sales should be recognized only when requirements as to performance are satisfied provided that at the time of performance it is not unreasonable to expect ultimate collection. These requirements can be given as follows: • The seller of goods has transferred to the buyer the property in the goods for a price or all significant risks and rewards of ownership have been transferred to the buyer and the seller retains no effective control of the goods transferred to a degree usually associated with ownership; and • No significant uncertainty exists regarding the amount of the consideration that will be derived from the sale of the goods. 4. M/s. Sea Ltd. recognized Rs. 5.00 lakhs, on accrual basis, income from dividend during the year 2010-11, on shares of the face value of Rs. 25.00 lakhs held by it in Rock Ltd. as at 31st March, 2011. Rock Ltd. proposed dividend @ 20% on 10th April, 2011. However, dividend was declared on 30th June, 2011. Please state with reference to relevant Accounting Standard, whether the treatment accorded by Sea Ltd. is in order. Answer : Para 8.4 of AS 9 “Revenue Recognition” states that dividend from investments in shares are not recognized in the statement of Profit and Loss until the right to receive dividend is established. In the given case, the dividend is proposed on 10th April, 2011, while it was declared on 30th June, 2011. Hence, the right to receive dividend is established on 30th June, 2011 only. Therefore, on applying the provisions stated in the standard, income from dividend on shares should be recognized by Sea Ltd. in the financial year 2011-2012 only. Therefore, the recognition of income from dividend of Rs. 5 lakhs, on accrual basis, in the financial year 2010-11 is not in accordance with AS 9. 5. M/s. Moon Ltd. sold goods worth Rs. 6,50,000 to Mr. Star. Mr. Star asked for a trade discount amounting to Rs. 53,000 and same was agreed to by M/s. Moon Ltd. The sale was effected and goods were dispatched. On receipt of goods, Mr. Star has found that goods worth Rs. 67,000 are defective. Mr. Star returned defective goods to M/s. Moon Ltd. And made payment due amounting to Rs. 5,30,000. The accountant of M/s. Moon Ltd. Booked the sale for Rs. 5,30,000. Discuss the contention of the accountant with reference to Accounting Standard (AS) 9. Answer : As per AS 9 ‘Revenue Recognition’, revenue is the gross inflow of cash, receivable or other consideration arising in the course of the ordinary activities of an enterprise from the sale of goods. However, trade discounts and volume rebates given in the ordinary course of business should be deducted in determining revenue. Revenue from sales should be recognized at the time of transfer of significant risks and rewards. If the delivery of the sales is not subject to approval from customers, then the transfer of significant risks and rewards would take place when the sale is affected and goods are dispatched. In the given case, if trade discounts allowed by M/s. Moon Ltd. are given in the ordinary course of business, M/s. Moon Ltd. should record the sales at Rs. 5,97,000 (i.e. Rs. 6,50,000 – Rs. 53,000) and goods returned worth Rs. 67,000 are to be recorded in the form of sales return. However, when trade discount allowed by M/S. Moon Ltd. is not in the ordinary course of business, M/s. Moon Ltd. should record the sales at gross value of Rs. 6,50,000. Discount of Rs. 53,000 in price and return of goods worth Rs. 67,000 are to be adjusted by suitable provisions. M/s Moon Ltd. might have sent the credit note of Rs. 1,20,000 to Mr. Star to account for these adjustments. In both the cases, the contention of the accountant to book the sales for Rs. 5,30,000 is not 6. The Board of Directors of X Ltd. decided on 31.3.2011 to increase sale price of certain items of goods sold retrospectively from 1st January, 2011. As a result of this decision the company has to receive ` 5 lakhs from its customers in respect of sales made from 1.1.2011 to 31.3.2011. But the Company’s Accountant was reluctant to make-up his mind. You are asked to offer your suggestion. Answer : As per para 10 of AS 9 ‘Revenue Recognition’, the additional revenue on account of increase in sales price with retrospective effect, as decided by Board of Directors of X Ltd., of ` 5 lakhs to be recognised as income for financial year 2010-11, only if the company is able to assess the ultimate collection with reasonable certainty. If at the time of raising of any claim it is unreasonable to expect ultimate collection, revenue recognition should be postponed.
{"url":"https://www.indianaccounting.in/2017/11/as9-practical-questions.html","timestamp":"2024-11-09T03:38:39Z","content_type":"application/xhtml+xml","content_length":"119456","record_id":"<urn:uuid:10b836d8-8c40-4d15-aee1-6669972c0eaa>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00405.warc.gz"}
Foundations of Chemical and Biological Engineering I 40 Separable Differential Equations By the end of this section, you should be able to: Provide initial conditions for well-mixed separable transient single-unit processes. Solve separable transient balances to find a property of a system at any given time. Separable Differential Equations Separable differential equations are differential equations where the variables can be isolated to one side of the equation. Take the following differential equations: 1 – [latex]\frac{dx}{dy}=(x^{3}+x)*(y-y^{2})[/latex] This equation is separable because you can completely isolate the x and y variables as follows: 2 – [latex]\frac{dx}{dy}=\frac{x}{x+y}[/latex] This equation is non-separable because you cannot completely isolate the x and y variables: These non-separable equations we will discuss later. Example: Chemical Reactor Consider a “continuous stirred-tank reactor” (CSTR). CSTRs are reactors with continuous feed and exit streams and some kind of mixer. Say we know the following information about this CSTR: • The feed enters at a constant volumetric flowrate of [latex]\dot{V}_{0}[/latex] in L/s • The volume of the tank is [latex]V[/latex] in L. • Initially ([latex]t=0[/latex]), the tank is filled to [latex]V_i[/latex] in L • The exit stream flows at a constant rate of [latex]\dot{V}[/latex] in L/s • We can assume that the density of all streams in the system is constant at [latex]\rho[/latex] in g/L We want to write a balance for the total (overall) mass in the system under transient conditions. We start off by writing out the overall balance: Mass is not being consumed or generated, just changed from one substance to another. This means the [latex]GEN[/latex] and [latex]CON[/latex] terms are negligible. We get: The units for the [latex]IN[/latex], [latex]OUT[/latex], and [latex]ACC[/latex] terms are kg/s. Simplifying the balance, we get: [latex]\dot{V}_{0}*\rho-\dot{V}*\rho = \rho*\frac{dV}{dt}[/latex] Since the densities are all constant, we can cancel them out: [latex]\dot{V}_{0} - \dot{V} = \frac{dV}{dt}[/latex] Using separation of variables (from calculus), we can integrate both sides. We want to find a given value about our system at a specific final time ([latex]t_{f}[/latex]), starting from an initial time (say [latex]t_{0} = 0[/latex]). We know the initial volume [latex]V_{i}[/latex] and want to find the final volume [latex]V_{f}[/latex]. [latex](\dot{V}_{0} - \dot{V})dt = dV[/latex] [latex]\int^{t_{f}}_{t_{0}}(\dot{V}_{0} - \dot{V})dt =\int^{V_{f}}_{V_{i}} dV[/latex] [latex](\dot{V}_{0} - \dot{V})*(t_{f}-t_{0}) =V_{f}-V_{i}[/latex] [latex]V_{f} = V_{i}+(\dot{V}_{0} - \dot{V})*(t_{f}-t_{0})[/latex] Let’s try substituting in some numbers to this equation. Say the rate of flow in is 5 L/min, and the flow out is 6 L/min with 300 L initially in the tank. How much water remains in the tank after 1 hour? Using the formula we derived, we can find this: [latex]V_{f} = V_{i}+(\dot{V}_{0} - \dot{V})*(t_{f}-t_{0})[/latex] [latex]V_{f} = 300 L+(5 \frac{L}{min} - 6 \frac{L}{min})*(60 min-0 min)$$ $$V_{f} = 300 L-60 L$$ $$V_{f} = 240 L$$ Note, you may say this is obvious and there is no need to derive our equation, and in this case our equation was relatively simple. However, we want to get in the practice of formulating these equations for when things start getting more complicated. In this class, we'll focus on formulating these equations rather than finding the solution. Exercise: Transient Mass Balance Consider pumping liquid from a tank that has one drain line as depicted below: Step 1: Determine what terms in the general balance are zero or negligible. [latex]IN - OUT + GEN - CON = ACC[/latex] Since there are no feed streams and no reactions (no mass generated or consumed), the [latex]IN[/latex], [latex]GEN[/latex], and [latex]CON[/latex] terms can be omitted from the balance. [latex]-OUT = ACC[/latex] Step 2: Write out the mass balance for species A in the system. [latex](-\dot{m}_{out,A}) = \frac{dM_{A}}{dt}[/latex] Step 3: Expand the mass terms. Recall that [latex]Mass=Volume*density[/latex], so we can use that to express both terms in the mass balance: [latex](-\dot{V}_{out,A}*\rho) = \rho*\frac{dV_{A}}{dt}[/latex] Since the density is constant, the density terms on both sides cancel each other out [latex](-\dot{V}_{out,A}) = \frac{dV_{A}}{dt}[/latex] The volume terms can be further expanded. Recall that the volume of a cylinder [latex]Area*Height[/latex] or [latex]A*h[/latex]. In differential terms, [latex]\frac{dV}{dt}=A\frac{dh}{dt} + h\frac {dA}{dt}[/latex]. Since the area is constant in this case, this simplifies to: [latex]\frac{dV}{dt} = A\frac{dh}{dt}[/latex]. [latex](-\dot{V}_{out,A}) = A_{tank}*\frac{dh}{dt}[/latex] [latex]A_{tank} = \frac{\pi}{4}*{D_{tank}}^{2}=\frac{\pi}{4}*{2 m}^{2}=3.14 m^{2}[/latex] Step 4: Solve the integral from t = 0 to t = 10 mins [latex](-\dot{V}_{out,A})dt = A_{tank}*dh[/latex] [latex]\int^{t=10 min}_{0 min}(-\dot{V}_{out,A})dt =\int^{h_{final}}_{h_{intial}} A_{tank}*dh[/latex] [latex]\int^{t=10 min}_{0 min}(-0.10\frac{m^3}{min})dt =\int^{h_{final}}_{h_{intial}}3.14m^{2}*dh[/latex] [latex](-0.1\frac{m^{3}}{min})*t\;\bigg|^{t=10 min}_{0 min} = (3.14m^{2})*h\;\bigg|^{h_{final}}_{5m}[/latex] [latex](-0.1\frac{m^{3}}{min})*(10min-0min) = (3.14m^{2})*(h_{final}-5m)[/latex] [latex]-1 m^{3}*\frac{1}{3.14m^{2}} = h_{final}-5m[/latex] [latex]-0.32 m = h_{final}-5m[/latex] [latex](-.32+5) m = h_{final}[/latex] [latex]h_{final}=4.68 m[/latex] Exercise: Transient Mass Balance Suppose we have a tank with an outlet at the bottom as shown below. Water is the only thing in the tank (we'll call this species A). The water flows out of the tank at a rate of [latex]\sqrt{19.6×h} \;m/s[/latex], where h is the height of water in m at any specific time (note that expression like this for outlet flow come from conservation of energy in a tank with gravity driving flow out of the tank). The tank has a cross-sectional area of [latex]1m^2[/latex] and the outlet pipe has a cross-sectional area [latex]10cm^2[/latex]. If the initial height of the water in the tank is 1 m, using a transient mass balance, calculate the height of water in the tank after 5 mins. Step 1: Determine what terms in the general balance are zero or negligible. [latex]IN - OUT + GEN - CON = ACC[/latex] Since there is no inlet stream and no chemical reactions in the tank, there is no [latex]IN[/latex], [latex]GEN[/latex] or [latex]CON[/latex] terms: [latex]-OUT = ACC[/latex] Step 2: Write out the mass balance for species A in the system. [latex](-\dot{m}_{out}) = \frac{dM}{dt}[/latex] Step 3: Express each term using the given quantities: Replace [latex]\text{Mass}[/latex] by [latex]\text{Volume}×\text{density}[/latex] for both terms: [latex](-\dot{V}_{out}×\rho) = \frac{dV×\rho}{dt}[/latex] Because [latex]\rho[/latex] are multiplied by both sides and we assume it is constant throughout the system, we can cancel [latex]\rho[/latex] in this step: [latex](-\dot{V}_{out}) = \frac{dV}{dt}[/latex] The volume of fluid flowing out of a pipe at any instant can be calculated using [latex]\dot{V}=u×A[/latex], where u is the instantaneous velocity of the fluid and A is the cross-sectional area of the pipe. We can use this to express the outlet flow. For the accumulation term, it is not beneficial to do this because we don't know the expression for [latex]u[/latex]. For the accumulation term, we can espress dV as [latex]A_{1}×dh[/latex] because we know that [latex]A_{1}[/latex] is constant, and this leaves only one variable (h) for us to solve for in the mass [latex]-u×A_{2} = \frac{A_{1}dh}{dt}[/latex] We can replace [latex]u[/latex] using [latex]u=\sqrt{19.6×h}[/latex] given in the question. T [latex]-\sqrt{19.6×h}×A_{2} = \frac{A_{1}dh}{dt}[/latex] Step 4: Solve the integral to find [latex]h_{f}[/latex]: [latex]\int_{t_{0}}^{t_{f}}-\sqrt{19.6×h}×A_{2}\;dt= \int_{h_{0}}^{h_{f}}A_{1}dh[/latex] Separate the equation by moving all terms involving h to one side: [latex]\int_{t_{0}}^{t_{f}}-\frac{A_{2}}{A_{1}}dt = \int_{h_{0}}^{h_{f}}\frac{1}{\sqrt{19.6×h}}dh[/latex] Replace the variables by the given information: [latex]A_{2}=10cm^2=10×10^{-4}m^2[/latex], [latex]A_{1}=1m^2[/latex], [latex]t_{0}=0s[/latex], [latex]t_{f}=5min=300s[/latex]. \int_{0}^{t=300}-\frac{10×10^{-4}}{1} dt&= \int_{h_{0}=1}^{h_{f}}\frac{1}{\sqrt{19.6}}×h^{-\frac{1}{2}}dh \\ -10×10^{-4}× t\;\bigg|_{0}^{t=300}& = \frac{2}{\sqrt{19.6}}×h^{\frac{1}{2}}\;\bigg|_{h_{0}=1}^{h_{f}}\\ -10×10^{-4}×(300-0)& = \frac{2}{\sqrt{19.6}}×h_{f}^{\frac{1}{2}}-\frac{2}{\sqrt{19.6}}×(1)^{\frac{1}{2}}\\ \bigg(\frac{-10×10^{-4}×(300-0)+\frac{2}{\sqrt{19.6}}×(1)^{\frac{1}{2}}}{\frac{2}{\sqrt{19.6}}}\bigg)^2&=h_{f} \\
{"url":"https://pressbooks.bccampus.ca/chbe220/chapter/separable-differential-equations/","timestamp":"2024-11-04T12:17:21Z","content_type":"text/html","content_length":"93158","record_id":"<urn:uuid:48296257-e7c4-491b-85e1-b00f335d6742>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00855.warc.gz"}
Quicksort vs. Shell 06-20-2014, 02:12 PM (This post was last modified: 06-20-2014 02:21 PM by Claudio L..) Post: #24 Claudio L. Posts: 1,885 Senior Member Joined: Dec 2013 RE: Quicksort vs. Shell I implemented all 3 methods for testing, since the answer wasn't clear after all the discussions. I tested by sorting the same lists with 2, 4, 6, ... 1998 elements (tested on the PC, I might test on calc later just to get a feeling for the real timings). In the tests, I added significant overhead to the comparison, to make it similar to our use case. Results were: Fully random case: Binary insertion: 2.01 sec Shell sort: 2.12 sec Quicksort: 1.83 sec Reversed list: Binary insertion: 0.56 sec Shell sort: 1.67 sec Quicksort: 2.28 sec 90% sorted, 10% random: Binary insertion: 1.91 sec Shell sort: 2.02 sec Quicksort: 2.17 sec So in the end, binary sort beats shell sort (point for Werner, good call there). The fully random test is a best-case for quicksort, but it only outruns binary insertion by 10%. Binary insertion shines on fully reversed lists, and both shell sort and binary sort can beat quicksort with almost sorted or reversed lists. So in the end, I think binary insertion is the method we'll use, since it's consistently fast, versus a quicksort that can fall down. For the record: I tuned each algorithm as follows: • Shell sort: Nothing that could be improved. • Binary sort: Once the position is found, memory is moved as a block using memmove(), rather than moving item by item in a loop. This increased the speed by a small percent, since memmove may use optimized CPU instructions (it may use SIMD on x86, and on ARM it uses STM/LDM). Still need to investigate what's the ideal threshold to switch from item-by-item loop to a memmove call due to the overhead of making the call. • Quicksort: Pivot: Tried using the center element and the median of 3. Didn't see any difference in speed on the cases I was evaluating. In the end I left the median of 3. Recursion: Non-recursive with a local stack for up to 2^32 elements. Fall-back: For small lists falls back to plain insertion, I tuned the threshold until I found optimum value, which was between 6 and 8. EDIT: Sort was made stable for all methods using method described in a post above. User(s) browsing this thread: 2 Guest(s)
{"url":"https://www.hpmuseum.org/forum/showthread.php?tid=1643&pid=14868&mode=threaded","timestamp":"2024-11-11T00:02:12Z","content_type":"application/xhtml+xml","content_length":"26753","record_id":"<urn:uuid:16a57724-2752-4dc2-aa69-ef634a5c57b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00126.warc.gz"}
Experimental Nuclear and Particle Physics Group Experimental Nuclear and Particle Physics GroupBACK The group is established as part of the Joint Consortium for Fundamental Physics by three universities in Hong Kong – HKU, CUHK and HKUST, through which we participate in international collaborations on big science. Under this umbrella of the joint consortium, a Hong Kong cluster formally joined the ATLAS experiment at the Large Hadron Collider in June 2014. One of the missions led by the Hong Kong cluster is to build up a Tier-2 (and Tier-3) computing center in Hong Kong, which is expected to play an important role in serving both the LHC physics community and the local scientific and engineering community. The center is designed to have 1000 processing cores and 1 petabytes of disk space. The laboratory is the part of the Tier-2 (and Tier-3) computing center for analyzing data collected by the ATLAS experiment at the LHC in CERN. The lab has access to the Worldwide LHC Computing Grid, which is the world's largest computing grid. Prof. Tu’s research is on experimental particle physics where the goal is to understand fundamental particles and their interactions. The startup of the Large Hadron Collider (LHC), the world's largest and highest energy particle accelerator, in 2009 opened up a new era at the high energy frontier. The unexplored energy domain of the LHC provides unique opportunities to answer fundamental questions in particle physics, such as the cause of electroweak symmetry breaking, the mass origin of particles in the Standard Model, the generation of baryon asymmetry in the Universe and the properties of dark matter. Exploring these topics could dramatically improve our understanding of nature. The recent observation of the Higgs boson in the ATLAS and the CMS experiments represents one such success. In view of this remarkable progress, the next several years will be a critical and significant period for the field development of the HEP. With strong support from the member institutions of the Hong Kong Joint Consortium for Fundamental Physics and the Research Grants Council, Hong Kong is now participating in the LHC ATLAS experiment. The Hong Kong particle physics group is involved in several projects including searching for supersymmetric particles, and searching for beyond Standard Model heavy gauge bosons. The group is also responsible for software and hardware upgrades: software development for muon reconstruction and Phase I Phase II muon detector electronics upgrade. Prof. Lee’s group is dedicated to the studies of the nuclear shell structure evolution and the nucleon correlations in nuclei. The experimental techniques include direct reactions, in-beam gamma spectroscopy and beta-decay spectroscopy. The state-of-the-art gamma-ray detector array and charged particle detector array will be constructed at the University of Hong Kong in collaboration with RIKEN (Japan) and IPN-Orsay (France). Both arrays are portable with fully integrable capability to the detection systems at the present facility RIKEN (Japan) and in the future-upgraded accelerated-based laboratories worldwide such as NSCL/FRIB (United States) and Spiral2 (France). Prof. Lee studies cutting-edge Gamma-ray detector array and charged particle detector array, based on international collaborations, will be developed to achieve high-efficient and high-resolution measurements for the studies of nuclear structure. The arrays are designed for easy configuration and full integration with other devices to meet the detection requirements of specific major experiments, which will be performed in the Radioactive-Isotope Beams facilities worldwide such as RIKEN (Japan) and NSCL/FRIB (United States). (For the complete publication list of the department, please go back to Prof. J.H.C. Lee 1. “Low-Lying Structure of ^50Ar and the N=32 Subshell Closure” D. Steppenbeck, S. Takeuchi, N. Aoi, P. Doornenbal, M. Matsushita, H. Wang, Y. Utsuno, H. Baba, S. Go, J.H.C. Lee, K. Matsui, S. Michimasa, T. Motobayashi, D. Nishimura, T. Otsuka, H. Sakurai, Y. Shiga, N. Shimizu, P.-A. Söderström, T. Sumikama, R. Taniuchi, J.J. Valiente-Dobón, and K. Yoneda, Phys. Rev. Lett. 114, 252501 2. "Neutron spectroscopic factors of ^55Ni hole-states from (p,d) transfer reactions", A. Sanetullaev, M.B. Tsang, W.G. Lynch, J.H.C. Lee, D.Bazin, D.Coupland, V.Henzl, D.Henzlova, M.Kilburn, A.M.Rogers, Z.Y.Sun, M.Youngs, R.J.Charity, L.G.Sobotka, M.Famiano, S. Hudan, D. Shapira, W.A. Peters, C. Barbieri, M. Hjorth-Jensen, M. Horoi, T. Otsuka, T. Suzuki and Y. Utsuno, Physics Letters B, 736, 137 (2014) 3. "Evidence for a new nuclear 'magic' number from the structure of ^54Ca", D. Steppenbeck, S. Takeuchi, N. Aoi, P. Doornenbal, M. Matsushita, H. Wang, H. Baba, N. Fukuda, S. Go, M. Honma, J. Lee, K. Matsui, S. Michimasa, T. Motobayashi, D. Nishimura, T. Otsuka, H. Sakurai, Y. Shiga, P.-A. Soderstrom, T. Sumikama, H. Suzuki, R. Taniuchi, Y. Utsuno, J.J.Valiente-Dobon, and K. Yoneda, Nature , 502, 207 (2013) 4. "In-Beam γ-Ray Spectroscopy of ^34, 36,38Mg: Merging the N=20 and N=28 Shell Quenching", P. Doornenbal, H. Scheit, N. Aoi, S. Takeuchi, K. Li, M. Matsushita, D. Steppenbeck, H. Wang, H. Baba, E. Ideguchi, N. Kobayashi, Y. Kondo, G. Lee, J.H.C. Lee, S. Michimasa, T. Motobayashi, H. Sakurai, M. Takechi and Y. Togano, Phys. Rev. Lett., 111, 212502 (2013) 5. "Neutron-hole states in ^45Ar from ^1H(^46Ar,d)^45Ar reactions", F. Lu. J.H.C. Lee, M.B. Tsang, D. Bazin, D. Coupland, V. Henzl, D. Henzlova, M. Kiburn, W.G. Lynch. A.M. Rogers. A. Sanetullaev, Z.Y Sun, M. Youngs, R.J. Charity, L.G. Sobotka, M. Famiano, S. Hudan, M. Horoi, Y. Ye, Phys. Rev. C, 88, 017604 (2013) 6. "A Laser Based Alignment System (LBAS) for Nuclear Physics Experiments", A.M. Rogers, J.H.C. Lee, B.E. Netta, M.S. Wallace, W.G. Lynch, H.K. Cheung, L. El-Mogaber, R. Fontus, T.K.Ghosh, V. Henzl, D. Henzlova, M. Kilburn, D.J. Oostdyk, D. Sanderson, Z.Y. Sun, M.B. Tsang, Nucl. Instrum. Meth. A, 707, 64 (2013) 7. "Well-developed deformation in ^42Si", S. Takeuchi, M. Matsushita, N. Aoi, P. Doornenbal,K. Li, T. Motobayashi, H. Scheit, D. Steppenbeck, H. Wang, H. Baba, D. Bazin, L. C`aceres, H. Crawford, P. Fallon, R. Gernhauser, J. Gibelin, S. Go, S. Grevy, C. Hinke, C. R. Hoffman, R. Hughes,E. Ideguchi, D. Jenkins, N. Kobayashi, Y. Kondo, R. Krucken, T. Le Bleis, J.H.C. Lee, G. Lee, A. Matta, S. Michimasa, T. Nakamura, S. Ota, M. Petri, T. Sako, H. Sakurai, S. Shimoura, K. Steiger, K. Takahashi, M. Takechi, Y. Togano, R. Winkler, and K. Yoneda, Phys. Rev. Lett., 109, 182501 (2012) 8. "Neutron spectroscopic factors of ^34Ar and ^46Ar", J.H.C. Lee, M.B. Tsang, D. Bazin, D. Coupland, V. Henzl, D. Henzlova, M. Kilburn, W.G. Lynch, A. Rogers, A. Sanetullaev, A. Signoracci, Z.Y. Sun, M. Youngs, K.Y. Chae, R.J. Charity, H.K. Cheung, M. Famiano, S. Hudan, P. O’Malley, W.A. Peters, K. Schmitt, D. Shapira, L.G. Sobotka, Phys. Rev. C, 83, 014606 (2011) 9. "Neutron-proton asymmetry dependence of spectroscopic factors in Ar isotopes", J.H.C. Lee, M.B. Tsang, D. Bazin, D. Coupland, V. Henzl, D. Henzlova, M. Kilburn, W.G. Lynch, A. Rogers, A. Sanetullaev, A. Signoracci, Z.Y. Sun, M. Youngs, K.Y. Chae, R.J. Charity, H.K. Cheung, M. Famiano, S. Hudan, P. O’Malley, W.A. Peters, K. Schmitt, D. Shapira, L.G. Sobotka, Phys. Rev. Lett., 104, 112701 (2010) Prof. Y.J. Tu 1. "Search for top quark decays t → Hq with 36 f b^−1 of pp collision data at sqrt (s ) = 13 TeV with the ATLAS detector”, Y.J. Tu, with ATLAS Collaboration, Journal of High Energy Physics 05 (2019) 2. "Search for four-top-quark production in the single-lepton and opposite-sign dilepton final states in pp collisions at sqrt( s ) = 13 TeV with the ATLAS detector ”, Y.J. Tu, with ATLAS Collaborators, Phys. Rev. D 99, 052009 3. "Search for pair production of up-type vector like quarks and for four-top-quark events in final states with multiple b-jets with the ATLAS detector”, Y.J. Tu, with ATLAS Collaborators, Journal of High Energy Physics 07 (2018) 089 4. "Search for new phenomena in tt¯ final states with additional heavy-flavour jets in pp collisions at sqrt( s ) = 13 TeV with the ATLAS detector”, Y.J. Tu, with ATLAS Collaborators, 5. "Measurements of the tt; charge asymmetry using the dilepton decay channel in pp collisions at √s = 7 TeV", Y.J. Tu, with CMS Collaborators, Journal of High Energy Physics, 4, 191 (2014) 6. "Measurements of tt Spin Correlations and Top-Quark Polarization Using Dilepton Final States in pp Collisions at √s =7TeV", Y.J. Tu, with CMS Collaborators, Phys. Rev. Lett., 112, 182001 (2014) 7. "Search for heavy, top-like quark pair production in the dilepton final state in pp collisions at √s =7TeV", Y.J. Tu, with CMS Collaborators, Phys. Lett. B, 716, pp103-121 (2012) 8. "Search for R-Parity Violating Decays of Sneutrinos to eμ, μτ, and eτ Pairs in pp; Collisions at √s =1.96TeV", Y.J. Tu, with CDF Collaborators, Phys. Rev. Lett., 105, 191801 (2010)
{"url":"https://bohr.physics.hku.hk/research/research_groups/5/?back=74f0f3e670f5d54b6b82de8a4be6fe6d","timestamp":"2024-11-10T18:30:11Z","content_type":"application/xhtml+xml","content_length":"151914","record_id":"<urn:uuid:20bfa658-acec-498c-9a3a-2c8a9bb57a6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00858.warc.gz"}
Langlands program Langlands correspondence In QFT and String theory What is called the Langlands correspondence in number theory (Langlands 67) is first of all a conjectural correspondence (a bijection subject to various conditions) between 1. $n$-dimensional complex linear representations of the Galois group $Gal(\bar F/F)$ of a given number field $F$, and 2. certain representations – called automorphic representations – of the $n$-dimensional general linear group $GL_n(\mathbb{A}_F)$ with coefficients in the ring of adeles of $F$, arising within the representations given by functions on the double coset space $GL_n(F) \backslash GL_n(\mathbb{A}_F)/GL_n(\mathcal{O})$ (where $\mathcal{O} = \prod_v \mathcal{O}_p$ is the ring of integers of all formal completions of $F$). This is motivated from the abelian case ($n=1$), which is fully understood: For $n = 1$ then an $n$-dimensional representation of the Galois group factors through $GL_1$ and hence through an abelian group. Therefore, by adjunction, it is equivalently a representation of the abelianization of the Galois group. The Kronecker-Weber theorem says that for $F = \mathbb{Q}$, then the abelianized Galois group is the idele class group $GL_1(\mathbb{Q}) \backslash GL_1(\mathbb{A})$, and hence 1-dimensional representations of the Galois group are equivalently representations of this. Moreover, one finds that for any prime number $p$, then those representations which are “unramified at $p$” are invariant under the subring of p-adic integers, hence are representations of the double quotient group $GL_1(\mathbb{Q}) \backslash GL_1(\mathbb{A})/GL_1(\mathbb{Z}_p)$. More generally, the Artin reciprocity law says that for any number field $K$ there is an isomorphism between $GL_1(K) \ backslash GL_1(\mathbb{A}_K)/GL_1(\mathcal{O}_K)$ and the abelianization of the Galois group $Gal(\bar K/K)$. Conjecture 1 To each such automorphic representation $\pi$ is associated an L-function – the automorphic L-function $L_\pi$ – and in generalization of Artin reciprocity the conjecture of Langlands is that the Artin L-function $L_\sigma$ associated with the given Galois representation $\sigma$ is equal to this: $L_\sigma = L_\pi$ (Gelbart 84, conjecture 1 (page 27 (204))). More generally, analogous statements are supposed to hold for general reductive algebraic groups $G$ other than $GL_n$. Here now an L-function is assigned to data which in addition to the Galois representation consists of a linear representation ${}^L G \longrightarrow GL_n$ of the Langlands dual group of $G$. First of all: Conjecture 2 This more general L-function is conjectured to indeed behave like a decent L-function in that it has meromorphic analytic continuation to the complex plane and satisfies the “functional equation”-invariance under sending its parameter $s$ to $1-s$ (Gelbart 84, conjecture 2’ (page 29 (205))). Conjecture 3 This construction is supposed to behave well with respect to an analytic homomorphism ${}^L G \to {}^L G^{'}$ in that when changing the representation of ${}^L G^{'}$ by precomposition with this homomorphism one may find an accompanying change of Galois representation/automorphic representation from $G$ to $G^{'}$ such that the associated L-function remains invariant under these joint changes. This statement is what Robert Langlands calls functoriality (Gelbart 84, conjecture 3 (page 31 (207))) In fact this last conjecture implies the previous two (Gelbart 84, (page 32 (208))). Various versions and refinements of this conjecture have since been considered, for some perspective see (Taylor 02, Langlands 14, Harris 14). On the one hand the “localization” of the program to local fields leads to the conjecture of local Langlands correspondences (Gelbart 84, (page 34 (210))). On the other hand, the interpretation of the above story dually in arithmetic geometry in view of the function field analogy motivates the conjectural geometric Langlands correspondence, based on the following analogy: From this arithmetic geometry point of view the Langlands conjecture seems to speak of a correspondence that sends Dirac distributions on the moduli space of flat connections over an algebraic curve to certain “automorphic” functions on the moduli stack of bundles on the same curve. This suggests that the Langlands correspondence should be understood as a nonabelian version of a Fourier-Mukai -type integral transform. This version of the conjecture is known as the geometric Langlands correspondence. See there for more details. The original conjecture is due to Introductions and expository surveys include Surveys of the state of the program include Discussion with an eye towards geometric class field theory and geometric Langlands duality and quantum field theory is in More resources are at
{"url":"https://ncatlab.org/nlab/show/Langlands+program","timestamp":"2024-11-14T18:09:26Z","content_type":"application/xhtml+xml","content_length":"48900","record_id":"<urn:uuid:dd747088-3e47-4ae8-a83c-6ff5ecb2bf03>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00741.warc.gz"}
30. [Salts and Their Acid-Base Properties] | AP Chemistry | Educator.com Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. Section 1: Review Naming Compounds 41:24 Stoichiometry 37:19 Section 2: Aqueous Reactions & Stoichiometry Precipitation Reactions 31:14 Acid-Base Reactions 43:21 Oxidation Reduction Reactions 47:58 Stoichiometry Examples 31:50 Section 3: Gases Pressure, Gas Laws, & The Ideal Gas Equation 49:40 Partial Pressure, Mol Fraction, & Vapor Pressure 32:00 Kinetic Molecular Theory and Real Gases 31:58 AP Practice for Gases 25:34 Section 4: Thermochemistry Energy, Heat, and Work 37:32 Enthalpy & Hess's Law 32:34 Standard Enthalpies of Formation 23:09 Calorimetry 39:28 Section 5: Kinetics Reaction Rates and Rate Laws 36:24 Method of Initial Rates 30:48 Integrated Rate Law & Reaction Half-Life 32:17 Second Order & Zero-Order Rate Laws 26:40 Activation Energy & Arrhenius Equation 40:59 AP Practice for Kinetics 29:08 Section 6: Equilibrium Equilibrium, Part 1 46:00 Equilibrium, Part 2 40:53 Equilibrium: Reaction Quotient 45:53 Equilibrium: Examples 31:51 Le Chatelier's principle & Equilibrium 40:52 Section 7: Acids & Bases Acids and Bases 50:11 pH of Weak Acid Solutions 43:52 Percent Dissociation: Strong & Weak Bases 43:04 Polyprotic Acids 35:34 Salts and Their Acid-Base Properties 41:14 Common Ion Effect & Buffers 41:58 Buffer 32:24 Buffers, Part II 40:06 Buffers, Part III 38:43 Titrations: Strong Acid and Strong Base 42:42 Titrations: Weak Acid and Strong Base 42:03 Titration Examples & Acid-Base Indicators 52:03 Section 8: Solubility Solubility Equilibria 36:25 Solubility Equilibria, Part II 42:06 Solubility Equilibria, Part III 43:09 Section 9: Complex Ions Complex Ion Equilibria 43:38 Complex Ions & Solubility 31:30 Section 10: Chemical Thermodynamics Spontaneity, Entropy, & Free Energy, Part I 56:28 Spontaneity, Entropy, & Free Energy, Part II 39:55 Spontaneity, Entropy, & Free Energy, Part III 30:10 Spontaneity, Entropy, & Free Energy, Part IV 30:07 Spontaneity, Entropy, & Free Energy, Part V 44:56 Section 11: Electrochemistry Oxidation-Reduction & Balancing 39:23 Galvanic Cells 43:09 Cell Potential 48:41 Potential, Work, & Free Energy 41:23 Cell Potential & Concentration 34:19 Electrolysis 33:21 Section 12: Light Light 44:45 Section 13: Quantum Mechanics Quantum Mechanics & Electron Orbitals 54:00 Electron Configurations & Diagrams 34:04 Section 14: Intermolecular Forces Vapor Pressure & Changes of State 52:43 Phase Diagrams & Solutions 31:17 Vapor Pressure of Solutions 37:23 Colligatives Properties 34:11 Section 15: Bonding Bonding & Lewis Structure 48:39 Resonance & Formal Charge 36:59 Shapes of Molecules 41:21 Hybrid Orbitals 40:17 Section 16: AP Practice Exam AP Practice Exam: Multiple Choice, Part I 52:34 AP Practice Exam: Multiple Choice, Part II 32:15 AP Practice Exam: Multiple Choice, Part III 32:50 AP Practice Exam: Free response Part I 47:22 AP Practice Exam: Free Response Part II 43:05 AP Practice Exam: Free Response Part III 28:36 Hello, and welcome back to Educator.com, and welcome back to AP Chemistry.0000 Today, we are going to talk about salts and their acid-base properties.0004 Let's just jump on in, start with a definition, and start doing problems, because I think that is probably the best approach.0008 A salt is a generic term for an ionic compound.0015 Sodium phosphate; silver chloride; lead iodide; sodium chloride; you name it--potassium permanganate--these are all salts, because they are ionic compounds.0036 It is just a positive ion and a negative ion; we just call them all salts.0050 Salt, sodium chloride, is specific--just happens to take that name; but when we talk about salts, we are talking about any ionic compound.0054 Now, as you know (or as you should know), when you take an ionic compound and you drop it in water, it is either going to dissolve, or it's not.0062 If it dissolves, it breaks up into its free ions, and those ions are just floating around freely in the water; so let's write that down, actually.0069 When salts dissolve, their ions float around as free ions.0081 OK: so, for example, if we had something like sodium chloride (which is a solid, as you know--it's salt), if we drop it in water (an arrow with a little water on it--that means you have dissolved it in water), you end up creating Na+ + Cl-.0107 Now, I am not usually going to write the aq, the subscripts, but that is what that means--it just means "dissolved in water"--aq.0119 You have sodium ions floating around, and you have chloride ions floating around.0125 Well, often, when this happens, the anion of the salt that you drop into water and dissolve--it is actually going to be the conjugate base of a weak acid.0129 Let's say that again: Often (let's write it down), the anion--the negatively-charged ion, this one--will be the conjugate base of a weak acid.0141 Example: sodium fluoride--if we take sodium fluoride, and if we dissolve it in water, we produce sodium ion, and we produce fluoride ion; so there is F- floating around, and there is Na+ floating Now, you have seen this F- before; it is the conjugate base of the weak acid hydrofluoric acid, which means--a conjugate base just means you have taken an acid and you have taken off the H.0182 The thing that you are left with--that is the conjugate base.0195 So, over here, I am going to write; you have seen it as (oops, let me do this in red to separate) this: HF is in equilibrium with H+ + F-.0197 There is a certain Ka associated with this; in fact, it's 7.2x10-4.0215 So notice, this time, it doesn't show up as the acid; you didn't take the acid and drop it in solution--you dropped a salt whose negative ion, the anion, happens to be the conjugate base of a weak When this happens--when the anion of a salt that you dissolve in water happens to be the conjugate base of a weak acid--it actually reacts with water as a base.0240 Because remember--notice--as an acid, this is a weak acid; a weak acid means the equilibrium is on the left; that means it wants to be this way--it doesn't want to be this way.0249 That means, if you put some free F- anywhere near some H+, or anywhere near a source of H+, it is going to take that H+, and it is going to move over in this direction until this equilibrium is So, weak acid--strong conjugate base; a strong base means that it has a tendency, a very high affinity, for hydrogen ions--for protons--for hydrogen ions; it wants to take them.0275 When you take a salt and dissolve it in water, and all of a sudden create this free base, which happens to be the conjugate base of a weak acid (hydrofluoric acid), here is what it does: it actually behaves as a base now.0286 It does this reaction: it reacts with the water that is floating around in solution, and it actually takes the hydrogen ion from the water; it is acting as a base--it takes hydrogen ion.0301 This time, water is acting as the base; it becomes HF + OH-.0313 Notice what we have done: in this process, this base takes the H, creates hydrofluoric acid, and in the process it creates hydroxide ion.0321 When this happens, you create a basic solution; so if you took a neutral, normal water, which is pH 7, and if you drop in some sodium fluoride, well, the fluoride ion will pull hydrogens off of the water, creating hydroxide ion.0330 The pH of that solution is going to go up; it is going to become a basic solution--that is what a basic solution is: it's when the concentration of hydroxide ion is higher than usual.0345 So, any time you have a salt where the anion happens to be the conjugate base of a strong acid...conjugate base of a weak acid; forgive me--where it happens to be the conjugate base of a weak acid, it is going to react as a base in this reaction.0356 People call this a hydrolysis reaction; I don't really like that name--just know that now, this conjugate base is going to act as a base.0374 You have seen this reaction before; this is the reaction of a base with water.0382 There is a Kb associated with this: it is HF, OH-, over F- (because water is liquid water).0390 Here is the best part about it: the Kb, the relationship--we know the Ka already, but what is the relationship between the Ka and the Kb, since now the F- is actually behaving as a base, and not this way, as part of the weak acid equilibrium?0407 Now, it is involved in a basic equilibrium, where it is actually pulling off a hydrogen ion.0422 Here is the relationship--all Kas and Kbs are related by this equation: Ka, times the Kb, is equal to Kw.0429 So in this case, if you want to do a problem with this, and you need to treat this as a Kb problem, base problem, because this reaction is that of a base with water, all you do is: you take the Kb is equal to Kw over Ka for that species.0438 This is 10-14; in this particular case, it's 7.2x10-4, and you find the Kb, and you run this problem as a base problem.0459 We have already done it: weak acids, weak bases--that is all that is going on here.0470 So, let's actually do a problem, and it will make sense.0474 Let's see...OK; so, Example 1: Calculate the pH of a 0.35 Molar sodium fluoride solution.0479 The Ka is equal to (sorry, I had better say which Ka of what)...the Ka of hydrofluoric acid (the actual acid, the weak acid, where the base is coming from) is 7.2x10-4.0507 OK, what do we always do first?--we check the major species in solution to decide what chemistry is going to dominate.0529 Major species: well, you have sodium fluoride: sodium fluoride is completely soluble--remember the solubility chart from earlier in the year?0535 If you haven't memorized it, not a problem--just check it out; sodium fluoride, alkali metal, halogen--completely soluble.0545 That means what you have floating around in solution is sodium ion; you have fluoride ion; and you have H2O.0552 Well, we notice sodium ion doesn't do anything; it doesn't affect anything.0560 However, the anion happens to be the conjugate base of a strong acid, HF.0565 I keep saying "strong acid"; what is wrong with me?--weak acid, weak acid, weak acid.0571 So, F- is the conjugate base of a weak acid, hydrofluoric acid.0577 Well, it's the conjugate base of a weak acid; so what it is going to do--it is actually going to react as a base with the water, the following reaction.0582 It is going to be: F- + HOH goes to HF + OH-.0591 That is the reaction; this reaction is the reaction of a base with water.0608 We want to find the pH of this; well, now that we have our reaction, that we know what reaction is going to take place, we do our ICE chart.0616 Well, what is the initial concentration of the F-?--well, since we have full dissociation, it's 0.35.0626 Water doesn't matter; there is no HF formed yet--this is before anything happens, and there is no OH- before anything happens.0634 A certain amount of F- is going to disappear; as a species, it's going to react with H to become HF.0642 It is going to disappear; water doesn't matter; that means HF is going to show up, and OH- is going to show up.0649 There is absolutely nothing under the sun when it comes to these; we have done these several times--we know how it works, but now, because this is a base reaction, we need the Kb.0659 Kb is equal to Kw over Ka, equals 1.0x10-14, divided by 7.2x10-4, and we get a Kb of 1.4x10-11.0670 So now, we do 1.4x10-11 is equal to x, times x, divided by 0.35-x.0696 Well, look how small this is; you know what, x is going to be pretty small, so chances are, we can do the approximation.0709 When we solve for x, we get x=2.2x10-6; but notice, this was not hydrogen ion concentration; we created a base in this reaction: x is equal to the OH- concentration, which implies that the pOH is equal to the negative log of this.0722 The pOH ends up being 3.2 (was that correct?--yes), and (wait, is that...yes, it is) then the pH is equal to 14 minus the pOH, which equals 10.8.0742 Sure enough, a pH of 10.8 means that it is a basic solution.0773 We had a salt; the anion was the conjugate base of a weak acid; therefore, it is going to react as a base with the water in a standard base reaction.0778 That is what bases do: bases take hydrogen from water to create hydroxide ion.0794 The equilibrium expression for that: we said it's a Kb, not a Ka; it's not an acid dissociation--it is a base association, if you will; it's a base constant.0799 Well, we have the Ka; normally, hydrofluoric acid is listed as a Ka, because it behaves mostly as an acid.0810 Therefore, the Ka is listed; but the relationship between Ka and Kb is Ka times Kb equals 10-14, which is Kw.0818 We solve for the Kb; we treat it like any other weak base problems; and we solve it; we get the pOH, in this case, because it's a base--because we want the pH, we take 14 minus that, 10.8.0826 OK, let's do Example 2: oh, actually, before that, let me actually...so let's stop there, and now let me go back to blue ink...0841 Now, if you have a salt (now we are going to talk about the cation; we mentioned the anion; now we are going to talk about the cation) where the cation is the conjugate acid of a weak base, then this cation will act as an acid and create hydrogen ion--create an acidic solution.0858 So, you have to watch the salt; you take a look at the salt, dissolve it; you take a look now--you not only look at the anion--you look at the cation as well.0919 For the first one, we said if the anion happens to be the conjugate base of a weak acid; now, if the cation happens to be the conjugate acid of a weak base.0928 It is going to act as an acid, and it is going to produce an acidic solution.0944 Let's do an example, and I think it will make sense.0949 And again, it is the chemistry that you want to understand: you want to take a look at the species and decide how it is acting.0954 That is really what is going on; in this previous problem, we saw that, when we dissolved the sodium fluoride, we have F- floating around freely.0962 Well, what is F- going to do?--F- happens to be the conjugate base of a weak acid, so it is going to actually behave as a base; it is going start taking Hs away from water.0969 If you write down the reaction, everything should fall out.0980 OK, calculate the pH of a 0.10 Molar ammonium chloride solution; the Kb of NH3 equals 1.8x10-5.0985 OK, our major species: well, we know that anything that involves ammonium and chloride is going to be fully soluble; therefore, what is floating around in solution is ammonium ion, chloride ion, and Chloride--let's look at the anion first--chloride is the conjugate base of a strong acid, HCl.1031 So, Cl is not going to take any H from anything; it is just going to float around freely--we can ignore it.1040 However, NH4+, ammonium, is the conjugate acid of a weak base.1047 How do we find the conjugate acid?--just stick a hydrogen ion onto it...of the weak base ammonia.1056 Ammonia is the weak base; its conjugate acid is that, right?--because it comes from ammonia.1062 When you put ammonia in water, it takes a hydrogen from water; it becomes ammonium, and it creates hydroxide.1074 However, in this case, we didn't just drop ammonia into solution; we dropped the actual ammonium ion into solution as a salt.1084 We dropped it as a salt; now, there is just free ammonium ion floating around--all ammonium ion.1093 Well, ammonium ion is the conjugate acid of a weak base, which is ammonia.1099 Therefore, it is going to now behave as an acid.1104 Its equilibrium is going to be this: it is going to actually dissociate into H+ + Cl-...no, plus NH3.1107 There is a Ka associated with this, because this is an acid dissociation reaction: it is something that has a hydrogen ion to give up.1122 It gives it up, and it creates its other side; so now, this Ka is equal to Kw over the Kb.1130 Kb is the Kb that we have for ammonia, the conjugate base of that.1138 NH4+ is the dominant species; it is the conjugate acid of a weak base--it is going to behave as an acid, so we write it: behaving as an acid.1147 We are going to create H+; we are going to create an acidic solution, because we dropped it in water that was neutral.1164 We have Initial, we have Change, and we have Equilibrium; well, the initial concentration of NH4 is 0.10.1171 There is no H to start with; there is no ammonia to start with; this is -x; this is x; this is x; +x; this is 0.10-x; this is +x; this is +x.1180 Now, it's behaving as an acid, so we need a Ka.1194 That is equal to 1.0x10-14, divided by 1.8x10-5; the Ka for this reaction is equal to 5.6x10-10.1204 That tells me, this 5.6x10-10...it's a small number; it is a weak acid.1222 But, it is still an acid--it is going to produce some H+...it's weak, but it still behaving as a weak acid--it's going to give up its H+ into floating around freely in the water; that is the acidic So now, let's take 5.6x10-10 is equal to x, times x, over 0.10-x, which is approximately equal to x squared over 0.10; we end up with x=7.4x10-6, which does equal the hydrogen ion concentration, because here, we are directly forming hydrogen ion, and therefore, the pH of this is the negative log of that, which ends up being 5.1.1241 So there you have it: recap: if you have a salt floating around in solution (if you have a salt and it dissolves), if the anion is the conjugate base of a weak acid, it will create a basic You treat it as a base equilibrium that will react with water, in other words--pull off the hydrogen ion.1301 If the cation is the conjugate acid of a weak base, it will behave as an acid--you write the acid equilibrium and use the Ka to solve it as a weak acid problem, like you have done before.1307 A second type of cation--a second type of species (you know, it might be nice if I actually wrote my words properly here--what do you think?) that creates an acidic solution--is a highly-charged metal ion.1321 A good example is aluminum 3+; 3+ is a pretty high charge; you are also going to find solutions like, for example, chromium 6+, manganese 5+, 4+...things that have a really high charge after about 1 or 2.1362 Here is what is going on: if you take a salt like aluminum chloride, I'm going to actually write out the chemistry of what happens, because I want you to see what happens--how the species forms in solution, and then how we treat that species in solution, just like any other weak acid.1376 If you take aluminum chloride, and you dissolve it in water--well, you know that aluminum chloride is completely soluble, so you are going to end up with aluminum 3+, plus 3 chloride ions--it completely dissociates into its free ions.1395 Chloride doesn't do anything; it's the conjugate base of a strong acid, so it doesn't behave as a base--it just floats around.1408 However, aluminum (because it is so highly charged, and because it is in water)--the water molecules, the actual molecules themselves, surround the aluminum; and in the case of aluminum, you get something that looks like this.1415 I am actually going to draw the structure, so you see it; I'm going to write it out first.1431 Aluminum (so I'm going to bring it over here)--the aluminum 3+ actually associates with 6 water molecules (and I'm going to draw the lone pairs on them) to create this species called a complex Al(H2O)6, and the whole charge on the species is 3+; this is usually how we write it.1456 Any time you have a metal that is surrounded by, in this case, water (we will just deal with water for the moment), and this actually looks like something--you have an aluminum ion, and you have an H2O, an H2O...I'm going to draw this 3-dimensional structure.1465 I really shouldn't, because you really don't have to know this for the time being; we may talk about it a little bit later, towards the end of the course, but I think it's important; it's nice--there is nothing here that you shouldn't be able to sort of visualize.1487 There are these 6 water molecules: OH2, OH2, OH2, OH2...and it is actually the oxygen, believe it or not, that is attached to the aluminum here.1499 In a normal bond, it is just surrounding it; so you have these 6 water molecules, and they are arranged; four of them are arranged in a plane; so aluminum is in the middle; one here, here; one is here; and back here.1512 These dashed lines mean it is going back; it's facing away from you.1525 These wedges mean it's coming towards you; these straight lines mean there is one on top and one on bottom.1528 What you have is the following: what you have is aluminum, in the center; you have a water molecule here, a water molecule here, a water molecule back here, a water molecule back here; one up here, and one down here.1535 There are six of them around it; and this whole species is carrying a 3+ charge; well, of course it's carrying a 3+ charge--aluminum is 3+, and water is a neutral molecule, so when they aggregate (when they sort of grab onto aluminum, if you will--6 of them), you create this thing called a complex ion.1548 Well, here is what happens with this complex ion: now I'm going to take this complex ion; as it turns out, it's so highly charged that it actually ends up pulling a lot of the electrons towards itself, and actually creates an acidic hydrogen.1569 One of these hydrogens, believe it or not, actually comes off.1584 And here is the chemistry of it (and this is what is important: the structure you don't have to understand; the chemistry is what is going on): in solution, this is produced.1588 When you drop aluminum chloride into water, aluminum ion is formed; water is attracted to the aluminum ion, and six water molecules arrange themselves in a pattern around the aluminum to create this complex ion.1598 This complex ion has an acidic hydrogen; it gives it up.1612 Here is how it gives it up: Al(H2O)63+ (I'm going to leave off the brackets; I think it's just extra symbolism that is not necessary)--it dissociates into H+ + Al; now there is just a hydroxide attached to it, and there is 5 waters.1615 One of the waters has lost its hydrogen; now it's 2+.1639 Notice, this is just a standard dissociation of an acid.1644 You have this species that has this H that it can give up; it gives it up; it's right there.1649 The rest of it--it doesn't even matter what it is; all that matters is this.1656 You have a species that has a hydrogen ion; it gives up that hydrogen ion to create some conjugate base; this is what is important.1660 You treat this like any other weak acid; there is a Ka associated with this--we measured it, and the Ka of this happens to be 1.4x10-5.1667 That is it; this is just HA dissociating into H+ + A-; this A---yes, it happens to be a very, very complex-looking thing (we call it a complex ion), but you treat it the exact same way.1679 Don't let the makeup of the thing that you are discussing confuse you; it's the chemistry that matters--the chemistry is just: some species gives up a hydrogen ion, and then ends up as something The hydrogen ion is what is important; the equilibrium is handled the exact same way.1708 Example 3: This is what is important in science--you need to understand what is going on underneath.1717 The individual identities of the species--they don't really matter, as far as what is going on; they matter for the individual case that you happen to be dealing with, as a researcher, as a lab scientist, as a doctor, whatever it is, but the chemistry is all the same.1726 It is still just some species, some acid, that has a hydrogen to give up, and it gives it up.1742 The mathematics is handled exactly the same way; the species, the identity, is entirely irrelevant.1747 It is entirely irrelevant; this is what we want you to do.1752 Our ideal is to get you to think abstractly, to think big-picture; if you can handle the big picture, you will know what is going on with the little picture; the little details are just incidental--they change from problem to problem.1756 But, the big picture doesn't change; that is what is important.1769 In science, what you want to concentrate on is what doesn't change.1772 That is when you know something is important--if something is not changing, that is what you want to concentrate on.1780 So, Example 3--we have: Calculate the pH of a 0.010 Molar AlCl3 solution.1790 Well, here is what we know: when we have some aluminum salt that is dropped in water, the aluminum is going to float around freely as ion; that is the first thing that is going to happen--the aluminum chloride is going to dissociate.1813 Aluminum is...the water molecules are going to aggregate around aluminum, 6 of them are, and they are going to form the species Al(H2O)63+.1827 Now, it isn't important that you know the name of it, but this is called hexaaqua-aluminum (3).1841 You will actually do the naming towards the end of the course, when you talk about coordination compounds, but this is the species that actually forms in solution.1846 If we took a picture of the solution, that is the species that we find.1854 We find every aluminum ion surrounded by six water molecules, and that whole thing is carrying a 3+ charge.1858 Well, it is true--water does contribute some hydrogen ion--but it is Ka of 10-14; this one has a Ka of 1.4x10-5.1867 10 to the negative 5 is a lot bigger than 10 to the negative 14, so water can be ignored.1881 This is the dominant species in the water that will control the pH of the solution.1885 We know what this does: it behaves as an acid.1893 Al(H2O)63+ dissociates into H+ + AlOH(H2O)52+--just an acid dissociation; that is it.1896 This is being created; let's do Initial; let's do Change; let's do Equilibrium.1919 What is the initial concentration?--well, all of the aluminum is dissolved; that means all of this is formed; 0.010--there is nothing formed yet; there is nothing formed yet.1924 A certain amount is going to dissociate--that is how much is going to show up of the other species.1935 0.010-x, +x, +x; we have the Ka; we have the equilibrium expression; so we just put it in.1940 1.4x10-5 (I hope you're not getting sick of these problems; I know it's just over and over--it's the same thing; that is nice--we like patterns) equals x times x, divided by 0.010-x.1951 We can approximate this with x squared over 0.010; now, you might think to yourself, "Well, wait a minute; 0.010 is pretty small, and x...we are talking about 10 to the negative 5 here...maybe."1972 As it turns out, when you check the validity, the error ends up being about 3.7%; we are still below 5, so we are good; this is a perfectly good approximation.1987 So, x is equal to 3.7x10-4, which equals the hydrogen ion concentration; that implies that the pH is equal to 3.43.1998 3.43--OK, I think I want to write this a little bit slower, so that all these wacky lines don't show up--equals 3.43.2013 So notice, it's handled exactly the same way; the identity of the species is irrelevant--it's behaving as an acid; it's giving up a hydrogen ion.2027 It's a weak acid, 1.4x10-5; there is an equilibrium; we have to use an ICE chart.2036 Now, our final little situation here: what if we have this situation?2048 What if we have the situation: NH4F--what if we have ammonium fluoride?--it is a perfectly valid salt.2052 Salt--you drop it into water; what happens?--well, it's going to dissolve, if it's completely soluble; it's going to dissolve into NH4+ and F-.2074 Well now, I have a little bit of a problem: we have an anion which is the conjugate base of a weak acid, hydrofluoric acid; so it is going to behave as a base, and it is going to pull hydrogen off of water to produce hydroxide ion.2083 So, it is going to create a basic solution; but, we have ammonium ion also floating around in solution.2099 It is the conjugate acid of a weak base, ammonia, and it is going to behave as an acid, as a weak acid, itself.2106 It is going to give up its hydrogen ion to create an acidic solution.2115 The F- is going to go ahead and create basic solution; this is going to create an acidic solution; well, what is the final solution going to be--acidic or basic? How do we decide?2118 Well, when you have a situation where both of the ions are species that react, and one produces base; one produces water, it gets very, very complicated--that is the short answer.2128 The equilibrium gets complicated, and you actually will be dealing with stuff like this, if you go on to study analytical chemistry.2141 If you are a chemistry major, usually in your third year, you will take an analytical chemistry course, and there are ways to handle this mathematically.2147 Pretty complex; for our purposes, we just want to be able to sort of give a qualitative answer.2154 We want to be able to say is the solution acidic or basic, without specifying what the pH is.2159 It comes down to this: you do a quick test--if (I'm actually going to write this a little further to the left--excuse me) the Ka for the acidic ion (meaning this one) is bigger than the Kb for the basic ion, which is this one, well, the solution is acidic.2168 In other words, if the Ka for this is bigger than this, that means that the equilibrium for this is farther to the right; it produces more hydrogen ion than this produces hydroxide ion.2207 Therefore, there will be more hydrogen ion in solution; therefore, the solution will be acidic.2219 That is what this is saying: so, you need to find the Ka of this; you find the Kb of this; remember, Kb is 10 to the 14 over Ka of the acid, and this Kb is 10 to the 14 over Ka.2223 I'm sorry; Ka is 10-14/Kb for the conjugate base, what we did earlier.2238 If this is bigger, it will be acidic; if this is bigger, it will be basic.2249 So, if the Ka is less than the Kb (the Ka for the acidic ion, less than the Kb for the basic ion), then your solution is going to be basic.2254 And of course, the last possibility (always three possibilities when it comes to ordering: less than, greater than, or equal to): If the Ka is equal to the Kb, well, you know exactly what that is; that is going to get a neutral solution.2265 And now, let's do our final example: Example 4: Will a solution of aluminum sulfate (AlSO4, Al2(SO4)3) be acidic or basic?2283 Well, Al2(SO4)3 dissociates into Al2, aluminum 3+, plus 3 SO42-, so yes; we have an anion--negative ion--which is the conjugate base of a weak acid.2315 And aluminum happens to be that thing that forms that species, Al, the hexaaqua-aluminum (3); so this is going to create a basic solution; this is going to create an acidic solution; we need to compare the two.2350 So, we want to know the Ka of this; the Ka--we already know that one; that is going to be 1.4x10-5.2369 And we want to know the Kb of this; the Kb of this is 10 to the -14, over the Ka of this, which is 1.2x10-6.2383 I hope you guys saw what I did; I found out the species that are floating in solution; this--the conjugate acid of that is the HSO4.2401 This actually just forms this species; we have the Ka of this; we find the Kb; we don't take the H2SO4; we add one H to it, OK?--one H at a time.2411 It dissociates one at a time; it associates one at a time.2422 We get a Kb, is equal to 1.3x10-13; well, this is hugely bigger than this, which means that our solution will be acidic.2427 This aluminum, this aluminum species, will dominate the acidity of the solution; we will get an acidic solution; and that is how you handle it.2444 So, thank you for joining us here at Educator.com to discuss the acid-base properties of salts.2454 In our next lesson, I'm going to close off with just a brief discussion of some oxides, and then we will go ahead and move on to further aspects of acid-base equilibria.2459 Our free lessons will get you started (Adobe Flash® required). Get immediate access to our entire library.
{"url":"https://www.educator.com/chemistry/ap-chemistry/hovasapian/salts-and-their-acid-base-properties.php","timestamp":"2024-11-05T10:06:00Z","content_type":"application/xhtml+xml","content_length":"703476","record_id":"<urn:uuid:728a1fb7-0592-4feb-8460-a03e676706f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00019.warc.gz"}
Stress in Thick-Walled Cylinders or Tubes Radial and tangential stress in thick-walled cylinders or tubes with closed ends - with internal and external pressure. When a thick-walled tube or cylinder is subjected to internal and external pressure a hoop and longitudinal stress are produced in the wall. Stress in Axial Direction The stress in axial direction at a point in the tube or cylinder wall can be expressed as: σ[a] = (p[i] r[i]^2 - p[o ]r[o]^2 )/(r[o]^2 - r[i]^2) (1) σ[a] = stress in axial direction (MPa, psi) p[i] = internal pressure in the tube or cylinder (MPa, psi) p[o] = external pressure in the tube or cylinder (MPa, psi) r[i] = internal radius of tube or cylinder (mm, in) r[o] = external radius of tube or cylinder (mm, in) Stress in Circumferential Direction - Hoop Stress The stress in circumferential direction - hoop stress - at a point in the tube or cylinder wall can be expressed as: σ[c] = [(p[i ] r[i]^2 - p[o] r[o]^2) / (r[o]^2 - r[i]^2)] + [r[i]^2 r[o]^2 (p[o] - p[i]) / (r^2 (r[o]^2 - r[i]^2))] (2) σ[c] = stress in circumferential direction (MPa, psi) r = radius to point in tube or cylinder wall (mm, in) (r[i] < r < r[o]) maximum stress when r = r[i ](inside pipe or cylinder) Resultant Stress Combined stress in a single point in the cylinder wall cannot be described by a single vector using vector addition. Instead stress tensors (matrixes) describing the linear connection between two physical vectors quantities can be used. Stress in Radial Direction The stress in radial direction at a point in the tube or cylinder wall can be expressed as: σ[r]= [(p[i ]r[i]^2 - p[o] r[o]^2) / (r[o]^2 - r[i]^2)] - [r[i]^2 r[o]^2 (p[o] - p[i]) / (r^2 (r[o]^2 - r[i]^2))] (3) maximum stress when r = r[i ](outside pipe or cylinder) Example - Stress in Thick walled Cylinder In a cylinder with inside diameter 200 mm (radius 100 mm) and outside diameter 400 mm (radius 200 mm) there is a pressure 100 MPa relative to the outside pressure. Stress in axial direction can be calculated as σ[a] = (((100 MPa) (100 mm)^2 - (0 MPa) (200 mm)^2) / ((200 mm)^2 - (100 mm)^2) = 33.3 MPa Stress in circumferential direction - hoop stress - at the inside wall (100 mm) can be calculated as σ[c] = [((100 MPa) (100 mm)^2 - (0 MPa) (200 mm)^2) / ((200 mm)^2 - (100 mm)^2)] - [(200 mm)^2 (100 mm)^2 ((0 MPa)- (100 MPa)) / ((100 mm)^2 ((200 mm)^2 - (100 mm)^2))] = 167 MPa Stress in radial direction at the inside wall (100 mm) can be calculated as σ[r] = [((100 MPa) (100 mm)^2 - (0 MPa) (200 mm)^2) / ((200 mm)^2 - (100 mm)^2)] + [(200 mm)^2 (100 mm)^2 ((0 MPa)- (100 MPa)) / ((100 mm)^2 ((200 mm)^2 - (100 mm)^2))] = -100 MPa Note! - that in addition stress caused by pressure - stress can be induced in the pipe or cylinder wall by restricted temperature expansion. Online Thick Walled Pipe & Cylinder Calculator The calculator below can be used to calculate the stress in thick walled pipes or cylinders with closed ends. Related Topics • The relationships between forces, acceleration, displacement, vectors, motion, momentum, energy of objects and more. • Forces acting on bodies at rest under equilibrium conditions - loads, forces and torque, beams and columns. Related Documents
{"url":"https://www.engineeringtoolbox.com/stress-thick-walled-tube-d_949.html","timestamp":"2024-11-11T10:00:40Z","content_type":"text/html","content_length":"36543","record_id":"<urn:uuid:19332005-fe55-4274-9505-6fa1d7488a25>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00503.warc.gz"}
Generalized Fréchet filters - Math Research of Victor PortonGeneralized Fréchet filters Just a few minutes ago I conceived a definition of generalized Fréchet filters with definition for every poset on which filters are considered (however, I have not yet calculated the class of posets for which generalized Fréchet filter is defined; it should be easy but I am busy with other business). Generalized Fréchet filter on a poset $latex \mathfrak{A}$ is a filter $latex \Omega$ such that $latex \partial \Omega = \left\{ x \in \mathfrak{A} \hspace{0.5em} | \hspace{0.5em} \mathrm{atoms}\, x \text{ is infinite} \right\} . $ See my book for a definition of $latex \partial$.
{"url":"https://math.portonvictor.org/2015/04/24/generalized-frechet-filters/","timestamp":"2024-11-10T20:26:54Z","content_type":"text/html","content_length":"97574","record_id":"<urn:uuid:7e46e869-f458-4997-9544-989cc16954d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00320.warc.gz"}
Searching a sorted array faster than O(log(N)) The usual way to find an element in a sorted array is using a binary search, which takes log(n) time, where logs are understood to be in base 2 and N is the size of the array. Linus Torvalds made the clever observation that you can do better than log(N) if you know that the contents of the array are uniformly distributed. For instance, git's pack files store multiple objects identified by the SHA-1 hashes of their contents. Each pack file has an index containing a list of the pack's SHA-1 IDs and their objects' locations in the pack. The index is sorted by SHA-1 ID. Since SHA-1 is a cryptographic hash, we can assume its output is uniformly distributed. Linus described his technique as "Newton-Raphson" which is a bit of a misnomer, since N-R works on smooth differentiable curves whereas what we have is a straight line with some stochastic variations. What we're actually doing is an iterated linear interpolation. If the SHA-1 IDs were perfectly evenly distributed then a single linear interpolation would land us right on the target item, but the random variation means we will be off by some amount, so we need to continue searching. How far off will we be? It turns out (based on Monte Carlo simulation) that the expected error is about 0.31 * sqrt(N) with a standard deviation of about 0.26 * sqrt(N). This is a really promising result since it implies that each iteration reduces the search space to N^1/2 whereas an iteration of binary search reduces it to N/2. So we should expect a complete search to take O(log(log(N))) I wrote a simulation to try this out, and it matches this prediction: in fact the number of iterations was about 1 + log(log(N)). However what is the variation around this expected result? In my tests it turned out that the maximum number of probes was log(N) though for small N it bottomed out at about 16. When testing lots of different randomly filled arrays, the standard deviation was about 1.2 for all values of N, but when I tested fewer arrays this number ramped up. Junio Hamano's implementation of Linus's idea is included in git but disabled by default. He added a tweak that biases the linear interpolation towards the centre of the search range, so it's kind of a balance between binary search and linear interpolation search. In my simulator this tweaked version required (log(N)+3)/2 iterations on average with a standard deviation of 0.8. The maximum number of iterations was again log(N) but it bottomed out at about 12. Overall it's a bit slower but better behaved. In git, where a large repository might contain two million objects, and where pack index lookups are not particularly performance-critical, this improved lookup code doesn't provide a noticeable advantage. Still, I think it's interesting and the idea might be useful in other situations. Note that unlike a binary search, which can just use comparisons returning greater / equal / less, the linear interpolation search needs to know the absolute values of the elements. Git's code actually uses a lexicographic variant that ignores any common prefix shared by the elements in the search range, and uses only the next two bytes for the interpolation. To finish, here's a bit of code. In this example, 0.0 <= array[k] < 1.0, and I use k for keys and v for values of array elements. We are searching for vtarg. /* all bounds are exclusive */ double vlo = -DBL_MIN, vhi = +1.0; int klo = -1, khi = N; while(klo - khi > 1) { int kmid = klo + (khi-klo) * (vtarg-vlo) / (vhi-vlo); /* ensure rounding does not put us out of bounds */ if(guess_k <= min_k) guess_k = min_k + 1; if(guess_k >= max_k) guess_k = max_k - 1; double vmid = array[kmid]; if(vmid == vtarg) return(kmid); if(vmid < vtarg) klo = kmid, vlo = vmid; if(vmid > vtarg) khi = kmid, vhi = vmid; Addendum: there are a few corrections in a follow-up post.
{"url":"https://dotat.at/@/2009-07-28-searching-a-sorted-array-faster-than-o-log-n.html","timestamp":"2024-11-07T07:08:53Z","content_type":"text/html","content_length":"15626","record_id":"<urn:uuid:b9ed3e17-bbc6-45bb-9975-4e5c4257fb28>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00221.warc.gz"}
Commutators in pseudo-orthogonal groups We study commutators in pseudo-orthogonal groups O2n R (including unitary, symplectic, and ordinary orthogonal groups) and in the conformal pseudo-orthogonal groups GO2nR. We estimate the number of commutators, c(O2nR) and c(GO2nR), needed to represent every element in the commutator subgroup. We show that c(O2nR)≤4 if R satisfies the A-stable condition and either n≥ 3 or n = 2 and 1 is the sum of two units in R, and that c(GO2nR)≤3 when the involution is trivial and A = R ∊. We also show that c(O2nR)≤3 and c(GO2nR)≤ 2 for the ordinary orthogonal group O2n R over a commutative ring R of absolute stable rank 1 where either n > 3 or n = 2 and 1 is the sum of two units in R. All Science Journal Classification (ASJC) codes Dive into the research topics of 'Commutators in pseudo-orthogonal groups'. Together they form a unique fingerprint.
{"url":"https://pure.psu.edu/en/publications/commutators-in-pseudo-orthogonal-groups","timestamp":"2024-11-01T21:57:38Z","content_type":"text/html","content_length":"46937","record_id":"<urn:uuid:836e0447-4f6e-491e-a590-5c2c6e680886>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00313.warc.gz"}
Optimization models - Calafiore G.C., Ghaoui L.El. - 2014 Optimization Models Emphasizing practical understanding over the technicalities of specific algorithms, this elegant textbook is an accessible introduction to the field of optimization, focusing on powerful and reliable convex optimization techniques. Students and practitioners will learn how to recognize, simplify, model and solve optimization problems - and apply these basic principles to their own projects. A clear and self-contained introduction to linear algebra, accompanied by relevant real-world examples, demonstrates core mathematical concepts in a way that is easy to follow, and helps students to understand their practical relevance. Requiring only a basic understanding of geometry, calculus, probability and statistics, and striking a careful balance between accessibility and mathematical rigor, it enables students to quickly understand the material, without being overwhelmed by complex mathematics. Accompanied by numerous end-of-chapter problems, an online solutions manual for instructors, and examples from a diverse range of fields including engineering, data science, economics, finance, and management, this is the perfect introduction to optimization for both undergraduate and graduate students. Giuseppe C. Calafiore is an Associate Professor at Dipartimento di Automatica e Informatica, Politecnico di Torino, and a Research Fellow of the Institute of Electronics, Computer and Telecommunication Engineering, National Research Council of Italy. Laurent El Ghaoui is a Professor in the Department of Electrical Engineering and Computer Science, and the Department of Industrial Engineering and Operations Research, at the University of California, Berkeley. Optimization Models Giuseppe C. Calafiore Politecnico di Torino Laurent El Ghaoui University of California, Berkeley University Printing House, Cambridge CB2 8BS, United Kingdom Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence. Information on this title: www.cambridge.org/9781107050877 © Cambridge University Press 2014 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2014 Printed in the United States of America by Sheridan Books, Inc. A catalogue record for this publication is available from the British Library ISBN 978-1-107-05087-7 Hardback Internal design based on tufte-latex.googlecode.com Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.Org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “as is” basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations under the License. Additional resources for this publication at www.cambridge.org/optimizationmodels Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Dedicated to my parents, and to Charlotte. G. C. Dedicated to Louis, Alexandre and Camille L. El G. Preface page xi 1 Introduction 1 1.1 Motivating examples l 1.2 Optimization problems 3 1.3 Important classes of optimization problems 10 1.4 History 14 7 Linear algebra models 19 2 Vectors and functions 21 2.1 Vector basics 21 2.2 Norms and inner products 28 2.3 Projections onto subspaces 37 2.4 Functions 43 2.5 Exercises 33 3 Matrices 55 3.1 Matrix basics 33 3.2 Matrices as linear maps 61 3.3 Determinants, eigenvalues, and eigenvectors 64 3.4 Matrices with special structure and properties 73 3.5 Matrix factorizations 82 3.6 Matrix norms 84 3.7 Matrix functions 87 3.8 Exercises 91 4 Symmetric matrices 97 4.1 Basics 97 4.2 The spectral theorem 103 4.3 Spectral decomposition and optimization 107 4.4 Positive semidefinite matrices 110 4.5 Exercises 118 viii CONTENTS 5 Singular value decomposition 123 5.2 Singular value decomposition 123 5.2 Matrix properties via SVD 127 5.3 SVD and optimization 133 5.4 Exercises 143 Linear equations and least squares Motivation and examples The set of solutions of linear equations Least-squares and minimum-norm solutions Solving systems of linear equations and LS problems Sensitivity of solutions Direct and inverse mapping of a unit ball Variants of the least-squares problem 7 Matrix algorithms 199 7.1 Computing eigenvalues and eigenvectors 199 7.2 Solving square systems of linear equations 206 7.3 QR factorization 211 7.4 Exercises 213 II Convex optimization models 221 8 Convexity 223 8.1 Convex sets 223 8.2 Convex functions 230 8.3 Convex problems 249 8.4 Optimality conditions 268 8.5 Duality 272 8.6 Exercises 287 9 Linear, quadratic, and geometric models 293 9.1 Unconstrained minimization of quadratic functions 294 9.2 Geometry of linear and convex quadratic inequalities 296 9.3 Linear programs 302 9.4 Quadratic programs 311 9.5 Modeling with LP and QP 320 9.6 LS-related quadratic programs 331 9.7 Geometric programs 333 9.8 Exercises 341 10 Second-order cone and robust models 10.1 Second-order cone programs 10.2 SOCP-representable problems and examples 10.3 Robust optimization models 10.4 Exercises Semidefinite models 11.1 From linear to conic models 11.2 Linear matrix inequalities 11.3 Semidefinite programs 11.4 Examples of SDP models 11.5 Exercises 12 Introduction to algorithms 425 12.1 Technical preliminaries 427 22.2 Algorithms for smooth unconstrained minimization 432 12.3 Algorithms for smooth convex constrained minimization 432 12.4 Algorithms for non-smooth convex optimization 472 12.5 Coordinate descent methods 484 12.6 Decentralized optimization methods 487 12.7 Exercises 496 III Applications 503 13 Learning from data 505 13.1 Overview of supervised learning 303 13.2 Least-squares prediction via a polynomial model 307 13.3 Binary classification 311 13.4 A generic supervised learning problem 319 13.5 Unsupervised learning 324 13.6 Exercises 333 24 Computational finance 539 14.1 Single-period portfolio optimization 339 14.2 Robust portfolio optimization 346 14.3 Multi-period portfolio allocation 349 14.4 Sparse index tracking 336 14.5 Exercises 338 25 Control problems 567 15.1 Continuous and discrete time models 368 15.2 Optimization-based control synthesis 371 15.3 Optimization for analysis and controller design 379 15.4 Exercises 386 26 Engineering design 16.1 Digital filter design 16.2 Antenna array design 600 16.3 Digital circuit design 606 16.4 Aircraft design 609 16.5 Supply chain management 613 16.6 Exercises 622 Optimization refers to a branch of applied mathematics concerned with the minimization or maximization of a certain function, possibly under constraints. The birth of the field can perhaps be traced back to an astronomy problem solved by the young Gauss. It matured later with advances in physics, notably mechanics, where natural phenomena were described as the result of the minimization of certain "energy" functions. Optimization has evolved towards the study and application of algorithms to solve mathematical problems on computers. Today, the field is at the intersection of many disciplines, ranging from statistics, to dynamical systems and control, complexity theory, and algorithms. It is applied to a widening array of contexts, including machine learning and information retrieval, engineering design, economics, finance, and management. With the advent of massive data sets, optimization is now viewed as a crucial component of the nascent field of data science. In the last two decades, there has been a renewed interest in the field of optimization and its applications. One of the most exciting developments involves a special kind of optimization, convex optimization. Convex models provide a reliable, practical platform on which to build the development of reliable problem-solving software. With the help of user-friendly software packages, modelers can now quickly develop extremely efficient code to solve a very rich library of convex problems. We can now address convex problems with almost the same ease as we solve a linear system of equations of similar size. Enlarging the scope of tractable problems allows us in turn to develop more efficient methods for difficult, non-convex problems. These developments parallel those that have paved the success of numerical linear algebra. After a series of ground-breaking works on computer algorithms in the late 80s, user-friendly platforms such as Matlab or R, and more recently Python, appeared, and allowed generations of users to quickly develop code to solve numerical prob- Xll PREFACE lems. Today, only a few experts worry about the actual algorithms and techniques for solving numerically linear systems with a few thousands of variables and equations; the rest of us take the solution, and the algorithms underlying it, for granted. Optimization, more precisely, convex optimization, is at a similar stage now. For these reasons, most of the students in engineering, economics, and science in general, will probably find it useful in their professional life to acquire the ability to recognize, simplify, model, and solve problems arising in their own endeavors, while only few of them will actually need to work on the details of numerical algorithms. With this view in mind, we titled our book Optimization Models, to highlight the fact that we. focus on the "art" of understanding the nature of practical problems and of modeling them into solvable optimization paradigms (often, by discovering the "hidden convexity" structure in the problem), rather than on the technical details of an ever-growing multitude of specific numerical optimization algorithms. For completeness, we do provide two chapters, one covering basic linear algebra algorithms, and another one extensively dealing with selected optimization algorithms; these chapters, however, can be skipped without hampering the understanding of the other parts of this book. Several textbooks have appeared in recent years, in response to the growing needs of the scientific community in the area of convex optimization. Most of these textbooks are graduate-level, and indeed contain a good wealth of sophisticated material. Our treatment includes the following distinguishing elements. • The book can be used both in undergraduate courses on linear algebra and optimization, and in graduate-level introductory courses on convex modeling and optimization. • The book focuses on modeling practical problems in a suitable optimization format, rather than on algorithms for solving mathematical optimization problems; algorithms are circumscribed to two chapters, one devoted to basic matrix computations, and the other to convex optimization. • About a third of the book is devoted to a self-contained treatment of the essential topic of linear algebra and its applications. • The book includes many real-world examples, and several chapters devoted to practical applications. • We do not emphasize general non-convex models, but we do illustrate how convex models can be helpful in solving some specific non-convex ones. We have chosen to start the book with a first part on linear algebra, with two motivations in mind. One is that linear algebra is perhaps the most important building block of convex optimization. A good command of linear algebra and matrix theory is essential for understanding convexity, manipulating convex models, and developing algorithms for convex optimization. A second motivation is to respond to a perceived gap in the offering in linear algebra at the undergraduate level. Many, if not most, linear algebra textbooks focus on abstract concepts and algorithms, and devote relatively little space to real-life practical examples. These books often leave the students with a good understanding of concepts and problems of linear algebra, but with an incomplete and limited view about where and why these problems arise. In our experience, few undergraduate students, for instance, are aware that linear algebra forms the backbone of the most widely used machine learning algorithms to date, such as the PageRank algorithm, used by Google's web-search engine. Another common difficulty is that, in line with the history of the field, most textbooks devote a lot of space to eigenvalues of general matrices and Jordan forms, which do have many relevant applications, for example in the solutions of ordinary differential systems. However, the central concept of singular value is often relegated to the final chapters, if presented at all. As a result, the classical treatment of linear algebra leaves out concepts that are crucial for understanding linear algebra as a building block of practical optimization, which is the focus of this textbook. Our treatment of linear algebra is, however, necessarily partial, and biased towards models that are instrumental for optimization. Hence, the linear algebra part of this book is not a substitute for a reference textbook on theoretical or numerical linear algebra. In our joint treatment of linear algebra and optimization, we emphasize tractable models over algorithms, contextual important applications over toy examples. We hope to convey the idea that, in terms of reliability, a certain class of optimization problems should be considered on the same level as linear algebra problems: reliable models that can be confidently used without too much worry about the inner workings. In writing this book, we strove to strike a balance between mathematical rigor and accessibility of the material. We favored "operative" definitions over abstract or too general mathematical ones, and practical relevance of the results over exhaustiveness. Most proofs of technical statements are detailed in the text, although some results xiv PREFACE are provided without proof, when the proof itself was deemed not to be particularly instructive, or too involved and distracting from the context. Prerequisites for this book are kept at a minimum: the material can be essentially accessed with a basic understanding of geometry and calculus (functions, derivatives, sets, etc.), and an elementary knowledge of probability and statistics (about, e.g., probability distributions, expected values, etc.). Some exposure to engineering or economics may help one to better appreciate the applicative parts in the book. Book outline The book starts out with an overview and preliminary introduction to optimization models in Chapter 1, exposing some formalism, specific models, contextual examples, and a brief history of the optimization field. The book is then divided into three parts, as seen from Table 1. Part I is on linear algebra, Part II on optimization models, and Part III discusses selected applications. Table 1 Book outline. I Linear Symmetric matrices Singular value decomposition Linear equations and least squares Matrix algorithms II Convex Linear, quadratic, and geometric models Second-order cone and robust models Semidefinite models Introduction to algorithms III Applications Learning from data Computational finance Control problems Engineering design The first part on linear algebra starts with an introduction, in Chapter 2, to basic concepts such as vectors, scalar products, projections, and so on. Chapter 3 discusses matrices and their basic properties, also introducing the important concept of factorization. A fuller story on factorization is given in the next two chapters. Symmetric matrices and their special properties are treated in Chapter 4, while Chapter 3 discusses the singular value decomposition of general matrices, and its applications. We then describe how these tools can be used for solving linear equations, and related least-squares problems, in Chapter 6. We close the linear algebra part in Chapter 7, with a short overview of some classical algorithms. Our presentation in Part I seeks to emphasize the optimization aspects that underpin many linear algebra concepts; for example, projections and the solution of systems of linear equations are interpreted as a basic optimization problem and, similarly, eigenvalues of symmetric matrices result from a "variational" (that is, optimization-based) characterization. The second part contains a core section of the book, dealing with optimization models. Chapter 8 introduces the basic concepts of convex functions, convex sets, and convex problems, and also focuses on some theoretical aspects, such as duality theory. We then proceed with three chapters devoted to specific convex models, from linear, quadratic, and geometric programming (Chapter 9), to second-order cone (Chapter 10) and semidefinte programming (Chapter 11). Part II closes in Chapter 12, with a detailed description of a selection of important algorithms, including first-order and coordinate descent methods, which are relevant in large-scale optimization contexts. A third part describes a few relevant applications of optimization. We included machine learning, quantitative finance, control design, as well as a variety of examples arising in general engineering design. How this book can be used for teaching This book can be used as a resource in different kinds of courses. For a senior-level undergraduate course on linear algebra and applications, the instructor can focus exclusively on the first part of this textbook. Some parts of Chapter 13 include relevant applications of linear algebra to machine learning, especially the section on principal component analysis. For a senior-level undergraduate or beginner graduate-level course on introduction to optimization, the second part would become the central component. We recommend to begin with a refresher on basic xvi PREFACE linear algebra; in our experience, linear algebra is more difficult to teach than convex optimization, and is seldom fully mastered by students. For such a course, we would exclude the chapters on algorithms, both Chapter 7, which is on linear algebra algorithms, and Chapter 12, on optimization ones. We would also limit the scope of Chapter 8, in particular, exclude the material on duality in Section 8.3. For a graduate-level course on convex optimization, the main material would be the second part again. The instructor may choose to emphasize the material on duality, and Chapter 12, on algorithms. The applications part can serve as a template for project reports. Bibliographical references and sources By choice, we have been possibly incomplete in our bibliographical references, opting to not overwhelm the reader, especially in the light of the large span of material covered in this book. With today's online resources, interested readers can easily find relevant material. Our only claim is that we strove to provide the appropriate search terms. We hope that the community of researchers who have contributed to this fascinating field will find solace in the fact that the success of an idea can perhaps be measured by a lack of proper references. In writing this book, however, we have been inspired by, and we are indebted to, the work of many authors and instructors. We have drawn in particular from the largely influential textbooks listed on the side.1 We also give credit to the excellent course material of the courses EE364a, EE364b (S. Boyd), EE363 (S. Lall) at Stanford University, and of EE236a, EE236b, EE236C (L. Vandenberghe) at UCLA, as well as the slides that S. Sra developed for the course EE 227A in 2012 at UC Berkeley. In the last 20 years, we witnessed many exciting developments in both theory and applications of optimization. The prime stimulus for writing this book came to us from the thriving scientific community involved in optimization research, whose members gave us, directly or indirectly, motivation and inspiration. While it would be impossible to mention all of them, we wish to give special thanks to our colleagues Dimitris Bertsimas, Stephen Boyd, Emmanuel Cand&s, Constantin Caramanis, Vu Duong, Michael Jordan, Jitendra Malik, Arkadi Nemirovksi, Yuri Nesterov, Jorge Nocedal, Kannan Ramchan- dran, Anant Sahai, Suvrit Sra, Marc Teboulle, Lieven Vandenberghe, 1 S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. D. P. Bertsekas, Nonlinear Optimization, Athena Scientific, 1999. D. P. Bertsekas (with A. Nedic, A. Ozdaglar), Convex Analysis and Optimization, Athena Scientific, 2003. Yu. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Springer, 2004. A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, 2001. J. Borwein and A. Lewis, Convex Analysis and Nonlinear Optimization: Theory and Examples, Springer, 2006. PREFACE XVii and Jean Walrand, for their support, and constructive discussions over the years. We are also thankful to the anonymous reviewers of our initial draft, who encouraged us to proceed. Special thanks go to Daniel Lyons, who reviewed our final draft and helped improve our presentation. Our gratitude also goes to Phil Meyler and his team at Cambridge University Press, and especially to Elizabeth Horne for her technical support. This book has been typeset in Latex, using a variant of Edward Tufte's book style. Optimization is a technology that can be used to devise effective decisions or predictions in a variety of contexts, ranging from production planning to engineering design and finance, to mention just a few. In simplified terms, the process for reaching the decision starts with a phase of construction of a suitable mathematical model for a concrete problem, followed by a phase where the model is solved by means of suitable numerical algorithms. An optimization model typically requires the specification of a quantitative objective criterion of goodness for our decision, which we wish to maximize (or, alternatively, a criterion of cost, which we wish to minimize), as well as the specification of constraints, representing the physical limits of our decision actions, budgets on resources, design requirements that need be met, etc. An optimal design is one which gives the best possible objective value, while satisfying all problem constraints. In this chapter, we provide an overview of the main concepts and building blocks of an optimization problem, along with a brief historical perspective of the field. Many concepts in this chapter are introduced without formal definition; more rigorous formalizations are provided in the subsequent chapters. 1.1 Motivating examples We next describe a few simple but practical examples where optimization problems arise naturally. Many other more sophisticated examples and applications will be discussed throughout the book. 1.1.1 Oil production management An oil refinery produces two products: jet fuel and gasoline. The profit for the refinery is $0.10 per barrel for jet fuel and $0.20 per barrel for gasoline. Only 10,000 barrels of crude oil are available for processing. In addition, the following conditions must be met. 1. The refinery has a government contract to produce at least 1,000 barrels of jet fuel, and a private contract to produce at least 2,000 barrels of gasoline. 2. Both products are shipped in trucks, and the delivery capacity of the truck fleet is 180,000 barrel-miles. 3. The jet fuel is delivered to an airfield 10 miles from the refinery, while the gasoline is transported 30 miles to the distributor. How much of each product should be produced for maximum profit? Let us formalize the problem mathematically. We let x\, X2 represent, respectively, the quantity of jet fuel and the quantity of gasoline produced, in barrels. Then, the profit for the refinery is described by function go(*i/*2) — 0.1*1 + 0.2*2- Clearly, the refinery interest is to maximize its profit go- However, constraints need to be met, which are expressed as *1 + *2 < 10/ 000 (limit on available crude barrels) *1 > 1,000 (minimum jet fuel) > 2,000 (minimum gasoline) 10*i + 30*2 < 180,000 (fleet capacity). Therefore, this production problem can be formulated mathematically as the problem of finding *i,*2 such that go(*i/*2) is maximized, subject to the above constraints. 1.1.2 Prediction of technology progress Table 1.1 reports the number N of transistors in 13 microprocessors as a function of the year of their introduction. If one observes a plot of the logarithm of Nz versus the year (Figure 1.1), one sees an approximately linear trend. Given these data, we want to determine the "best" line that approximates the data. Such a line quantifies the trend of technology progress, and may be used to estimate the number of transistors in a microchip in the future. To model this problem mathematically, we let the approximating line be described by the equation z = xi;y 4- *2, (1.1) where y is the year, z represents the logarithm of N, and *i,*2 are the unknown parameters of the line (*1 is the slope, and *2 is the year: y{ no. transistors: N,- Table 1.1 Number of transistors in a microprocessor at different years. intercept of the line with the vertical axis). Next, we need to agree on a criterion for measuring the level of misfit between the approximating line and the data. A commonly employed criterion is one which measures the sum of squared deviations of the observed data from the line. That is, at a given year yz, Eq. (1.1) predicts x\yz + %2 transistors, while the observed number of transistors is Z{ — log Nz, hence the squared error at year yz- is (xiy* + %2 — zf)1, and the accumulated error over the 13 observed years is fo(Xl,X2) = Uw + X2 - z>)2- The best approximating line is thus obtained by finding the values of parameters x\, %2 that minimize the function /q. 1.1.3 An aggregator-based power distribution model In the electricity market, an aggregator is a marketer or public agency that combines the loads of multiple end-use customers in facilitating the sale and purchase’ of electric energy, transmission, and other services on behalf of these customers. In simplified terms, the aggregator buys wholesale c units of power (say, Megawatt) from large power distribution utilities, and resells this power to a group of n business or industrial customers. The /-th customer, i = 1,..., n, communicates to the aggregator its ideal level of power supply, say ci Megawatt. Also, the customer dislikes to receive more power than its ideal level (since the excess power has to be paid for), as well as it dislikes to receive less power that its ideal level (since then the customer's business may be jeopardized). Hence, the customer communicates to the aggregator its own model of dissatisfaction, which we assume to be of the following form dj(xj) = oii(xj - ci)2, i = l,...,n, where xz is the power allotted by the aggregator to the /-th customer, and 0i{ > 0 is a given, customer-specific, parameter. The aggregator problem is then to find the power allocations xz, / = 1,..., n, so as to minimize the average customer dissatisfaction, while guaranteeing that the whole power c is sold, and that no single customer incurs a level of dissatisfaction greater than a contract level d. The aggregator problem is thus to minimize the average level of customer dissatisfaction 1970 197S 1980 1985 Figure 1.1 Semi-logarithmic plot of the number of transistors in a microprocessor at different years. while satisfying the following constraints: *i = c, (all aggregator power must be sold) xi > 0, / = 1,..., n, (supplied power cannot be negative) oci(xi — Ci)2 < d, i = 1,..., n, (dissatisfaction cannot exceed d). 1.1.4 An investment problem An investment fund wants to invest (all or in part) a total capital of c dollars among n investment opportunities. The cost for the z-th investment is Wj dollars, and the investor expects a profit pz from this investment. Further, at most bj items of cost W[ and profit pz are available on the market (bz < c/iVi). The fund manager wants to know how many items of each type to buy in order to maximize his/her expected profit. This problem can be modeled by introducing decision variables i = 1,..., n, representing the (integer) number of units of each investment type to be bought. The expected profit is then expressed by the function n 1 = 1 The constraints are instead Y1 wixi — c' (limit on capital to be invested) Xi E {0,1,..., b{}, i = l,...,n (limit on availability of items). The investor goal is thus to determine x\,...,xn so as to maximize the profit /0 while satisfying the above constraints. The described problem is known in the literature as the knapsack problem. Remark 1.1 A warning on limits of optimization models. Many, if not all, real-world decision problems and engineering design problems can, in principle, be expressed mathematically in the form of an optimization problem. However, we warn the reader that having a problem expressed as an optimization model does not necessarily mean that the problem can then be solved in practice. The problem described in Section 1.1.4, f°r instance, belongs to a category of problems that are "hard" to solve, while the examples described in the previous sections are "tractable," that is, easy to solve numerically. We discuss these issues in more detail in Section 1.2.4. Discerning between hard and tractable problem formulations is one of the key abilities that we strive to teach in this book. 2.2 Optimization problems 1.2.1 Definition A standard form of optimization. We shall mainly deal with optimization problems1 that can be written in the following standard form: p* = mm f0(x) (1.2) subject to: //(x) < 0, i = l,...,m, • vector2 x E Rn is the decision variable; • /0 : Rn -> R is the objective function,3 or cost; • : R" R, / = 1,..., m, represent the constraints; • p* is the optimal value. In the above, the term "subject to" is sometimes replaced with the shorthand "s.t.:," or simply by colon notation Example 1.1 (An optimization problem in two variables) Consider the problem min 0.9x1 — 0.4xiX2 -f 0.6*2 — 6.4*! — 0.8*2 • — I < *1 < 2, 0 < *2 < 3. The problem can be put in the standard form (1.2), where: • the decision variable is * = (*i,*2) E R2; • the objective function /0 : R2 -> R, takes values /o(*) = 0.9*2 ~ 0.4*i*2 - 0.6*2 ~ 6.4*i - 0.8*2; • the constraint functions fj : Rn —> R, i = 1,2,3,4 take values fi(x) = -*i-l,/2(*) = *i-2,/3(*) = -x2,U(x) = *2 -3. 1 Often an optimization problem is referred to as a "mathematical program." The term "programming" (or "program") does not refer to a computer code, and is used mainly for historical reasons. 2 A vector * of dimension n is simply a collection of real numbers *i,*2,• • .,*«. We denote by R” the space of all possible vectors of dimension n. 3 A function / describes an operation that takes a vector * € R” as an input, and assigns a real number, denoted /(*), as a corresponding output value. The notation / : R” -* R allows us to define the input space precisely. Problems with equality constraints. Sometimes the problem may present explicit equality constraints, along with inequality ones, that is p* = mm /0(x) s.t.: fi(x) < 0, i — 1,... ,m, hfx) =0, / = l,...,p, where the hjS are given functions. Formally, however, we may reduce the above problem to a standard form with inequality constraints only, by representing each equality constraint via a pair of inequalities. That is, we represent hi{x) — 0 as hj(x) < 0 and hj(x) > 0. Problems with set constraints. Sometimes, the constraints of the problem are described abstractly via a set-membership condition of the form x E X, for some subset X of Kn. The corresponding notation is P* = mi5 /o(*)/ or, equivalently, p* = mm f0(x) s.t.: x e X. Problems in maximization form. Some optimization problems come in the form of maximization (instead of minimization) of an objective function, i.e., P* = max g0(x). (1.3) Such problems, however, can be readily recast in standard minimization form by observing that, for any go, it holds that max go(*) = - nun -goto- xeX xeX Therefore, problem (1.3) in maximization form can be reformulated as one in minimization form as ~P* =min/0(x), ^ xeX where f0 = -g0. Feasible set. The feasible set4 of problem (1.2) is defined as X = {x E R” s.t.: fi(x) < 0, i — l,...,m}. A point x is said to be feasible for problem (1.2) if it belongs to the feasible set X, that is, if it satisfies the constraints. The feasible set may be empty, if the constraints cannot be satisfied simultaneously. In this case the problem is said to be infeasible. We take the convention that the optimal value is p* = +00 for infeasible minimization problems, while p* = — 00 for infeasible maximization problems. 1.2.2 What is a solution? In an optimization problem, we are usually interested in computing the optimal value p* of the objective function, possibly together with a corresponding minimizer, which is a vector that achieves the optimal value, and satisfies the constraints. We say that the problem is attained if there is such a vector.5 4 In the optimization problem of Example i.i, the feasible set is the "box" in R2, described by — 1 < x\ < 2, 0 < x2 < 3. 5 In the optimization problem of Example i.i, the optimal value p* = — 10.2667 is attained by the optimal solution x\ — 2, *2 — 1.3333. Feasibility problems. Sometimes an objective function is not provided. This means that we are just interested in finding a feasible point, or determining that the problem is infeasible. By convention, we set /0 to be a constant in that case, to reflect the fact that we are indifferent to the choice of a.point x, as long as it is feasible. For problems in the standard form (1.2), solving a feasibility problem is equivalent to finding a point that solves the system of inequalities f(x) < 0, i = l,...,m. Optimal set. The optimal set, or set of solutions, of problem (1.2) is defined as the set of feasible points for which the objective function achieves the optimal value: Xopt = {x eR” s.t.: fo(x) = p*, fi(x) < 0, A standard notation for the optimal set is via the arg min notation: Xopt = arg min fo(x). A point x is said to be optimal if it belongs to the optimal set, see Figure 1.2. When is the optimal set empty? Optimal points may not exist, and the optimal set may be empty. This can be for two reasons. One is that the problem is infeasible, i.e., X itself is empty (there is no point that satisfies the constraints). Another, more subtle, situation arises when X is nonempty, but the optimal value is only reached in the limit. For example, the problem Figure 1.2 A toy optimization problem, with lines showing the points with constant value of the objective function. The optimal set is the singleton A'opt = {**}• min e has no optimal points, since the optimal value p* = 0 is only reached in the limit, for x -» +00. Another example arises when the constraints include strict inequalities, for example with the problem p* = min x s.t.: 0 < x < 1. In this case, p* = 0 but this optimal value is not attained by any x that satisfies the constraints. Rigorously, the notation "inf" should be used instead of "min" (or, "sup" instead of "max") in situations when one doesn't know a priori if optimal points are attained. However, in this book we do not dwell too much on such subtleties, and use the min and max notations, unless the more rigorous use of inf and sup is important in the specific context. For similar reasons, we only consider problems with non-strict inequalities. Strict inequalities can be safely replaced by non-strict ones, whenever the objective and constraint functions are continuous. For example, replacing the strict inequality by a non-strict one in (1.4) leads to a problem with the same optimal value p* = 0, which is now attained at a well-defined optimal solution x* = 0. Sub-optimality. We say that a point x is e-suboptimal for problem (1.2) if it is feasible, and satisfies p* <fo(x) < p* +€. In other words, x is e-close to achieving the best value p*. Usually, numerical algorithms are only able to compute suboptimal solutions, and never reach true optimality. 1.2.3 Local vs. global optimal points A point 2 is locally optimal for problem (1.2) if there exists a value JR > 0 such that 2 is optimal for problem rnin /o(x) s.t.: fj(x) <0, i = 1,... ,ra, |xz — z\ \ < R, i — \,...,n. In other words, a local minimizer x minimizes /0, but only for nearby points on the feasible set. The value of the objective function at that point is not necessarily the (global) optimal value of the problem. Locally optimal points might be of no practical interest to the user. The term globally optimal (or optimal, for short) is used to distinguish points in the optimal set Xopt from local optima. The existence of local optima is a challenge in general optimization, since most algorithms tend to be trapped in local minima, if these exist, thus failing to produce the desired global optimal solution. 1.2.4 Tractable vs. non-tractable problems Not all optimization problems are created equal. Some problem classes, such as finding a solution to a finite set of linear equalities or inequalities, can be solved numerically in an efficient and reliable way. On the contrary, for some other classes of problems, no reliable efficient solution algorithm is known. Without entering a discussion on the computational complexity of optimization problems, we shall here refer to as "tractable" all those optimization models for which a globally optimal solution can be found numerically in a reliable way (i.e., always, in any problem instance), with a computational effort that grows gracefully with the size of the problem (informally, the size of the problem is measured Figure 1.3 Local (gray) vs. global (black) minima. The optimal set is the singleton X0pt = {0.5}. The point x = 2 is a local minimum. by the number of decision variables and/or constraints in the model). Other problems are known to be "hard," and yet for other problems the computational complexity is unknown. The examples presented in the previous sections all belong to problem classes that are tractable, with the exception of the problem in Section 1.1.4. The focus of this book is on tractable models, and a key message is that models that can be formulated in the form of linear algebra problems, or in convex6 form, are typically tractable. Further, if a convex model has some special structure,7 then solutions can typically be found using existing and very reliable numerical solvers, such as CVX, Yalmip, etc. It is also important to remark that tractability is often not a property of the problem itself, but a property of our formulation and modeling of the problem. A problem that may seem hard under a certain formulation may well become tractable if we put some more effort and intelligence in the modeling phase. Just to make an example, the raw data in Section 1.1.2 could not be fit by a simple linear model. However, a logarithmic transformation in the data allowed a good fit by a linear model. One of the goals of this book is to provide the reader with some glimpse into the "art" of manipulating problems so as to model them in a tractable form. Clearly, this is not always possible: some problems are just hard, no matter how much effort we put in trying to manipulate them. One example is the knapsack problem, of which the investment problem described in Section 1.1.4 *s an instance (actually, most optimization problems in which the variable is constrained to be integer valued are computationally hard). However, even for intrinsically hard problems, for which exact solutions may be unaffordable, we may often find useful tractable models that provide us with readily computable approximate, or relaxed, solutions. 1.2.5 Problem transformations The optimization formalism in (1.2) is extremely flexible and allows for many transformations, which may help to cast a given problem in a tractable formulation. For example, the optimization problem min ^(xx + l)2 + {x2 - 2)2 s.t.: *1 > 0 has the same optimal set as min ((*i + l)2 + (*2 — 2)2) s.t.: x\ > 0. The advantage here is that the objective is now differentiable. In other situations, it may be useful to change variables. For example, 6 See Chapter 8. 7 See Section 1.3, Chapter 9, and subsequent chapters. the problem max *1*2*3 s.t.: *z > 0, i — 1,2,3, *1*2 < 2, *2*3 < 1 can be equivalently written, after taking the log of the objective, in terms of the new variables zz = log*z, i = 1,2,3, as max z\ + 3z2 + Z3 s.t.: zi + Z2 < log 2, 2z2 + Z3 < 0. The advantage is that now the objective and constraint functions are all linear. Problem transformations are treated in more detail in Section 8.3.4. 1.3 Important classes of optimization problems In this section, we give a brief overview of some standard optimization models, which are then treated in detail in subsequents parts of this book. 1.3.1 Least squares and linear equations A linear least-squares problem is expressed in the form m / n \ ^ min E E A'ixi ~b‘) > (!-5) * i=1 \;=1 / where Ajj, fcz, 1 < i < m, 1 < ; < n, are given numbers, and *gP is the variable. Least-squares problems arise in many situations, for example in statistical estimation problems such as linear regression.8 An important application of least squares arises when solving a set of linear equations. Assume we want to find a vector * E R” such that AjjXj = bj, i — 1,..., m. Such problems can be cast as least-squares problems of the form (1.5). A solution to the corresponding set of equations is found if the optimal value of (1.3) is zero; otherwise, an optimal solution of (1.5) provides an approximate solution to the system of linear equations. We discuss least-squares problems and linear equations extensively in Chapter 6. 8 The example in Section 1.1.2 is an illustration of linear regression. 1*3*2 Low-rank approximations and maximum variance The problem of rank-one approximation of a given matrix (a rectangular array of numbers Aq, l<i<m,l<j<n) takes the form m / n \ ^ min Y I Y An — Z(Xj 1 . xeR",zeRm “ y" ; 1J The above problem can be interpreted as a variant of the least-squares problem (1.5), where the functions inside the squared terms are nonlinear, due to the presence of products between variables Z[Xj. A small value of the objective means that the numbers Ajj can be well approximated by ZjX[. Hence, the "rows" (An,..., Ajn), i — 1,..., m, are all scaled version of the same vector (x\,. with scalings given by the elements in (24,...,zm). This problem arises in a host of applications, as illustrated in Chapters 4 and 3, and it constitutes the building block of a technology known as the singular value decomposition (SVD). A related problem is the so-called maximum-variance problem: min \ ^ n m*x E E Aifxi st: E A = L «=1 \j=i ) «=1 The above can be used, for example, to find a line that best fits a set of points in a high-dimensional space, and it is a building block for a data dimensionality reduction technique known as principal component analysis, as detailed in Chapter 13. 2.3.3 Linear and quadratic programming A linear programming (LP) problem has the form n n min Y cjxj s-h* Y AqXj < b{f i — 1,..., m, X 7=1 7=1 where Cj, bj and Aq, l<i<m,l<j<n, are given real numbers. This problem is a special case of the general problem (1.2), in which the functions //, i = 0,..., m, are all affine (that is, linear plus a constant term). The LP model is perhaps the most widely used model in optimization. Quadratic programming problems (QPs for short) are an extension of linear programming, which involve a sum-of-squares function in the objective. The linear program above is modified to where the numbers Qy, 1 < i < r, 1 < y < n, are given. QPs can be thought of as a generalization of both least-squares and linear programming problems. They are popular in many areas, such as finance, where the linear term in the objective refers to the expected negative return on an investment, and the squared term corresponds to the risk (or variance of the return). LP and QP models are discussed in Chapter 9. 2.3.4 Convex optimization Convex optimization problems are problems of the form (1.2), where the objective and constraint functions have the special property of convexity. Roughly speaking, a convex function has a "bowl-shaped" graph, as exemplified in Figure 1.4. Convexity and general convex problems are covered in Chapter 8. Not all convex problems are easy to solve, but many of them are indeed computationally tractable. One key feature of convex problems is that all local minima are actually global, see Figure 1.3 for an example. The least-squares, LP, and (convex) QP models are examples of tractable convex optimization problems. This is also true for other specific optimization models we treat in this book, such as the geometric programming (GP) model discussed in Chapter 9, the second- order cone programming (SOCP) model covered in Chapter 10, and the semidefinite programming (SDP) model covered in Chapter 11. Figure 1.4 A convex function has a "bowl-shaped" graph. Figure 1.5 For a convex function, any local minimum is global. In this example, the minimizer is not unique, and the optimal set is the interval A'opt = [2,3]. Every point in the interval achieves the global minimum value p* = —9.84. 2.3.5 Combinatorial optimization In combinatorial optimization, variables are Boolean (0 or 1), or more generally, integers, reflecting discrete choices to be made. The knapsack problem, described in Section 1.1.4, *s an example of an integer programming problem, and so is the Sudoku problem shown in Figure 1.6. Many practical problems actually involve a mix of integer and real-valued variables. Such problems are referred to as mixed- integer programs (MIPs). Combinatorial problems and, more generally, MIPs, belong to a class of problems known to be computationally hard, in general. Although we sometimes discuss the use of convex optimization to find approximate solutions to such problems, this book does not cover combinatorial optimization in any depth. Figure 1.6 The Sudoku problem, as it is the case for many other popular puzzles, can be formulated as a feasibility problem with integer variables. 1.3.6 Non-convex optimization Non-convex optimization corresponds to problems where one or more of the objective or constraint functions in the standard form (1.2) does not have the property of convexity. In general, such problems are very hard to solve. In fact, this class comprises combinatorial optimization: if a variable Xj is required to be Boolean (that is, x\ e {0,1}), we can model this as a pair of constraints x? — X\ < 0, x\ — xj < 0, the second of which involves a non- convex function. One of the reasons for which general non-convex problems are hard to solve is that they may present local minima, as illustrated in Figure 1.3. This is in contrast with convex problems, which do not suffer from this issue. It should, however, be noted that not every non-convex optimization problem is hard to solve. The maximum variance and low-rank approximation problems discussed in Section 1.3.2, for example, are non-convex problems that can be reliably solved using special algorithms from linear algebra. Example 1.2 (Protein folding) The protein folding problem amounts to predicting the three-dimensional structure of a protein, based on the sequence of the amino-acids that constitutes it. The amino-acids interact with each other (for example, they may be electrically charged). Such a problem is difficult to address experimentally, which calls for computer- aided methods. In recent years, some researchers have proposed to express the problem as an optimization problem, involving the minimization of a potential energy function, which is usually a sum of terms reflecting the interactions between pairs of amino-acids. The overall problem can be modeled as a nonlinear optimization problem. Unfortunately, protein folding problems remain challenging. One of the reasons is the very large size of the problem (number of variables and constraints). Another difficulty comes from the fact that the potential energy function (which the actual protein is minimizing) is not exactly known. Finally, the fact that the energy function is usually not convex Figure 1.7 Protein folding problem. Figure 1.8 Graph of energy function involved in protein folding models. may lead algorithms to discover "spurious" (that is, wrong) molecular conformations, corresponding to local minima of the potential function. Figure 1.8 is a three-dimensional rendition of the level sets of a protein's energy function. 1.4 History 1.4.1 Early stages: birth of linear algebra The roots of optimization, as a field concerned with algorithms for solving numerical problems, can perhaps be traced back to the earliest known appearance of a system of linear equations in ancient China. Indeed, the art termed fangcheng (often translated as "rectangular arrays") was used as early as 300 BC to solve practical problems which amounted to linear systems. Algorithms identical to Gauss elimination for solving such systems appear in Chapter 8 of the treatise Nine Chapters on the Mathematical Art, dated around 100 CE. Figure 1.9 pictures a 9 X 9 matrix found in the treatise, as printed Figure 1.9 Early Chinese linear in the 1700s (with a reversed convention for the column's order). It algebra text, is believed that many of the early Chinese results in linear algebra gradually found their way to Europe. a _ t #*** ...A ssè|.,p«| *11* „ +i m-t 0OBO+ iMo-t lit if III »%* Ip # t.J 3. imn nx 855- ? ** 1.4.2 ^ Optimization as a theoretical tool In the 1800s, Gauss (Figure 1.10) built on early results (and his own contributions) in linear algebra to develop a method for solving least- squares problems, which relied on solving an associated linear system (the famous normal equations). He used the method to accurately predict the trajectory of the planetoid Ceres. This early algorithmic result was an exception in the optimization landscape in eighteenth century Europe, as most of the development of the field remained at a theoretical level. The notion of optimization problems was crucial to the development of theoretical mechanics and physics between the seventeenth and nineteenth centuries. Around 1750, Maupertuis introduced (and later Euler formalized) the principle of least action, according to which the motion of natural systems could be described as a minimization problem involving a certain cost function called "energy." This optimization-based (or, variational) approach is indeed the foundation of classical mechanics. The Italian mathematician Giuseppe Lodovico (Luigi) Lagrangia (Figure 1.11), also known as Lagrange, was a key player in this development, and his name is associated with the notion of duality, Karl Fridrìé Gauss. Figure 1.10 Karl Friedrich Gauss (1777—1855). Figure 1.11 Giuseppe Lodovico (Luigi) Lagrangia (1736-1813). which is central in optimization. While optimization theory played a central role in physics, it was only with the birth of computers that it could start making its mark in practical applications, and venture into fields other than physics. 1.4.3 Advent of numerical linear algebra With computers becoming available in the late 40s, the field of numerical linear algebra was ready to take off, motivated in no small part by the cold war effort. Early contributors include Von Neumann, Wilkinson, Householder, and Givens. Early on, it was understood that a key challenge was to handle the numerical errors that were inevitably propagated by algorithms. This led to an intense research activity into the so-called stability of algorithms, and associated perturbation theory.9 In that context, researchers recognized the numerical difficulties associated with certain concepts inherited from some nineteenth century physics problems, such as the eigenvalue decomposition of general square matrices. More recent decompositions, such as the singular value decomposition, were recognized as playing a central role in many applications.10 Optimization played a key role in the development of linear algebra. First, as an important source of applications and challenges; for example, the simplex algorithm for solving linear programming problems, which we discuss below, involves linear equations as the key step. Second, optimization has been used as a model of computation: for example, finding the solution to linear equations can be formulated as a least-squares problem, and analyzed as such. In the 70s, practical linear algebra was becoming inextricably linked to software. Efficient packages written in FORTRAN, such as UNPACK and LAPACK, embodied the progress on the algorithms and became available in the 80s. These packages were later exported into parallel programming environments, to be used on super-com- puters. A key development came in the form of scientific computing platforms, such as Matlab, Scilab, Octave, R, etc. Such platforms hid the FORTRAN packages developed earlier behind a user-friendly interface, and made it very easy to, say, solve linear equations, using a coding notation which is very close to the natural mathematical one. In a way, linear algebra became a commodity technology, which can be called upon by users without any knowledge of the underlying algorithms. A recent development can be added to the long list of success stories associated with applied linear algebra. The PageRank algo- 9 See, e.g., N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, 2002. 10 See the classical reference textbook: G. H. Golub and C. F. Van Loan, Matrix Computations, IV ed, John Hopkins University Press, 2012. rithm,11 which is used by a famous search engine to rank web pages, relies on the power iteration algorithm for solving a special type of eigenvalue problem. Most of the current research effort in the field of numerical linear algebra involves the solution of extremely large problems. Two research directions are prevalent. One involves solving linear algebra problems on distributed platforms; here, the earlier work on parallel algorithms12 is revisited in the light of cloud computing, with a strong emphasis on the bottleneck of data communication. Another important effort involves sub-sampling algorithms, where the input data is partially loaded into memory in a random fashion. 1.4.4 Advent of linear and quadratic programming The LP model was introduced by George Dantzig in the 40s, in the context of logistical problems arising in military operations. George Dantzig, working in the 1940s on Pentagon-related logistical problems, started investigating the numerical solution to linear inequalities. Extending the scope of linear algebra (linear equalities) to inequalities seemed useful, and his efforts led to the famous simplex algorithm for solving such systems. Another important early contributor to the field of linear programming was the Soviet Russian mathematician Leonid Kantorovich. QPs are popular in many areas, such as finance, where the linear term in the objective refers to the expected negative return on an investment, and the squared term corresponds to the risk (or variance of the return). This model was introduced in the 30s by H. Markowitz (who was then a colleague of Dantzig at the RAND Corporation), to model investment problems. Markowitz won the Nobel prize in Economics in 1990, mainly for this work. In the 60S-70S, a lot of attention was devoted to nonlinear optimization problems. Methods to find local minima were proposed. In the meantime, researchers recognized that these methods could fail to find global minima, or even to converge. Hence the notion formed that, while linear optimization was numerically tractable, nonlinear optimization was not, in general. This had concrete practical consequences: linear programming solvers could be reliably used for day-to-day operations (for example, for airline crew management), but nonlinear solvers needed an expert to baby-sit them. In the field of mathematics, the 60s saw the development of convex analysis, which would later serve as an important theoretical basis for progress in optimization. 11 This algorithm is discussed in Section 3.5. 12 See, e.g., Bertsekas and Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Athena Scientific, 1997. i .4.5 Advent of convex programming Most of the research in optimization in the United States in the 6os- 80s focused on nonlinear optimization algorithms, and contextual applications. The availability of large computers made that research possible and practical. In the Soviet Union at that time, the focus was more towards optimization theory, perhaps due to more restricted access to computing resources. Since nonlinear problems are hard, Soviet researchers went back to the linear programming model, and asked the following (at that point theoretical) question: what makes linear programs easy? Is it really linearity of the objective and constraint functions, or some other, more general, structure? Are there classes of problems out there that are nonlinear but still easy to solve? In the late 80s, two researchers in the former Soviet Union, Yurii Nesterov and Arkadi Nemirovski, discovered that a key property that makes an optimization problem "easy" is not linearity, but actually convexity. Their result is not only theoretical but also algorithmic, as they introduced so-called interior-point methods for solving convex problems efficiently.13 Roughly speaking, convex problems are easy (and that includes linear programming problems); non-convex ones are hard. Of course, this statement needs to be qualified. Not all convex problems are easy, but a (reasonably large) subset of them is. Conversely, some non-convex problems are actually easy to solve (for example some path planning problems can be solved in linear time), but they constitute some sort of "exception." Since the seminal work of Nesterov and Nemirovski, convex optimization has emerged as a powerful tool that generalizes linear algebra and linear programming: it has similar characteristics of reliability (it always converges to the global minimum) and tractability (it does so in reasonable time). 2.4.6 Present In present times there is a very strong interest in applying optimization techniques in a variety of fields, ranging from engineering design, statistics and machine learning, to finance and structural mechanics. As with linear algebra, recent interfaces to convex optimization solvers, such as CVX14 or YALMIP13, now make it extremely easy to prototype models for moderately-sized problems. In research, motivated by the advent of very large datasets, a strong effort is currently being made towards enabling solution of extremely large-scale convex problems arising in machine learning, 13 Yu. Nesterov and A. Nemirovski, Interior-point Polynomial Algorithms in Convex Programming, SIAM, 1994. 14 cvxr.com/cvx/ 15 users.isy.liu.se/johanl/yalmip/ image processing, and so on. In that context, the initial focus of the 90s on interior-point methods has been replaced with a revisitation and development of earlier algorithms (mainly, the so-called "first- order" algorithms, developed in the 30s), which involve very cheap iterations. Linear algebra models Vectors and functions Le contraire du simple nest pas le complexe, mais le faux. Andre Comte-Sponville A vector is a collection of numbers, arranged in a column or a row, which can be thought of as the coordinates of a point in n- dimensional space. Equipping vectors with sum and scalar multiplication allows us to define notions such as independence, span, subspaces, and dimension. Further, the scalar product introduces a notion of the angle between two vectors, and induces the concept of length, or norm. Via the scalar product, we can also view a vector as a linear function. We can compute the projection of a vector onto a line defined by another vector, onto a plane, or more generally onto a subspace. Projections can be viewed as a first elementary optimization problem (finding the point in a given set at minimum distance from a given point), and they constitute a basic ingredient in many processing and visualization techniques for high-dimensional data. 2.1 Vector basics 2.1.1 Vectors as collections of numbers Vectors are a way to represent and manipulate a single collection of numbers. A vector x can thus be defined as a collection of elements X\,X2,.-. ,xn, arranged in a column or in a row. We usually write vectors in column format: Element x\ is said to be the z-th component (or the z-th element, or entry) of vector x, and the number n of components is usually referred to as the dimension of x. When the components of x are real numbers, i.e. Xj G 1R, then x is a real vector of dimension n, which we indicate with the notation x G 1R”. We shall seldom need complex vectors, which are collections of complex numbers Xj G C, z — 1,..., n. We denote the set of such vectors by C”. To transform a column-vector x to row format and vice versa, we define an operation called transpose, denoted by a superscript T: X\ X2 Sometimes, we use the notation x = (xi,..., xn) to denote a vector, if we are not interested in specifying whether the vector is in column or in row format. For a column vector x G Cn, we use the notation x* to denote the transpose-conjugate, that is the row vector with elements set to the conjugate of those of x. A vector x in M.n can be viewed as a point in that space, where the Cartesian coordinates are the components xz; see Figure 2.1 for an example in dimension 3. For example, the position of a ship at sea with respect to a given reference frame, at some instant in time, can be described by a two- dimensional vector x = (xi, X2), where X\, X2 are the coordinates of the center of mass of the ship. Similarly, the position of an aircraft can be described by a three-dimensional vector x = (x\, X2, X3), where Xi, X2, X3 are the coordinates of the center of mass of the aircraft in a given reference frame. Note that vectors need not be only two- or three-dimensional. For instance, one can represent as a vector the coordinates, at a given instant of time, of a whole swarm of m robots, each one having coordinates xW = (xj^x^), z = 1,...,m. The swarm positions are therefore described by the vector Figure 2.1 Cartesian representation of a vector in R3. ,(2) J2) Figure 2.2 The position of m robots in a swarm can be represented by a 2m-dimensional vector x = /r(l) Y(l) y(2) y(2) (m) (mh V 1 / *^2 / \ / -*2 / *# • / \ * 2 // where x^> = * — l,...,m, are the coordinates of each robot in a given fixed reference frame. of dimension 2m; see Figure 2.2. VECTORS AND FUNCTIONS 23 Example 2.1 (Bag-of-words representations of text) Consider the following text: "A (real) vector is just a collection of real numbers, referred to as the components (or, elements) of the vector; Rn denotes the set of all vectors with n elements. If x 6 R” denotes a vector, we use subscripts to denote elements, so that xz is the z-th component in x. Vectors are arranged in a column, or a row. If x is a column vector, xT denotes the corresponding row vector, and vice versa." The row vector c = [5, 3, 3, 4] contains the number of times each word in the list V = {vector, elements, of, the} appears in the above paragraph. Dividing each entry in c by the total number of occurrences of words in the list (15, in this example), we obtain a vector x = [1/3,1/5,1/5,4/15] of relative word frequencies. Vectors can be thus used to provide a frequency-based representation of text documents; this representation is often referred to as the bag-of-words representation. In practice, the ordered list V contains an entire or restricted dictionary of words. A given document d may then be represented as the vector x(d) that contains as elements a score, such as the relative frequency, of each word in the dictionary (there are many possible choices for the score function). Of course, the representation is not faithful, as it ignores the order of appearance of words; hence, the term "bag-of-words" associated with such representations. Temp. (°F) Example 2.2 (Temperatures at different airports) Assume we record the tern- Table 2.1 Airport temperature data, peratures at four different airports at a given time, and obtain the data in Table 2.1. We can view the triplet of temperatures as a point in a three-dimensional space. Each axis corresponds to temperatures at a specific location. The vector representation is still legible if we have more than one triplet of temperatures, e.g., if we want to trace a curve of temperature as a function of time. The vector representation cannot, however, be visualized graphically in more than three dimensions, that is, if we have more than three airports involved. Example 2.3 (Time series) A time series represents the evolution in (discrete) time of a physical or economical quantity, such as the amount of solar radiation or the amount of rainfall (e.g., expressed in millimeters) at a given geographical spot, or the price of a given stock at the closing of the market. If x(k), k = 1,...,T, describes the numerical value of the quantity of interest at time k (say, k indexes discrete intervals of time, like minutes, days, months, or years), then the whole time series, over the time horizon from 1 to T, can be represented as a T-dimensional vector x containing all the values of x(k), for k — 1 to k = T, that is x = [x(l) x(2) ■■■ x(T)]T €1RT. Figure 2.3 shows for instance the time series of the adjusted close price of the Dow Jones Industrial Average Index, over a 66 trading day period from April 19, 2012 to July 20, 2012. This time series can be viewed as a vector x in a space of dimension T = 66. Vector spaces Seeing vectors as collections of numbers, or as points, is just the beginning of the story. In fact, a much richer understanding of vectors comes from their correspondence with linear functions. To understand this, we first examine how we can define some basic operations between vectors, and how to generate vector spaces from a collection of vectors. 2.1.2.1 Sum and scalar multiplication of vectors. The operations of sum, difference, and scalar multiplication are defined in an obvious way for vectors: for any two vectors v^fv^ having equal number of elements, we have that the sum i/1) + z/2) is simply a vector having as components the sum of the corresponding components of the addends, and the same holds for the difference; see Figure 2.4. Similarly, if v is a vector and x is a scalar (i.e., a real or complex number), then ocv is obtained by multiplying each component of v by x. If ol — 0, then ocv is the zero vector, or origin, that is, a vector in which all elements are zero. The zero vector is simply denoted by 0, or sometimes with 0n, when we want to highlight the fact that it is a zero vector of dimension n. 2.1.2.2 Vector spaces. From a slightly more abstract perspective, a vector space, X, is obtained by equipping vectors with the operations of addition and multiplication by a scalar. A simple example of a vector space is X = Wl, the space of n-tuples of real numbers. A less obvious example is the set of single-variable polynomials of a given degree. Figure 2.3 The DJI time series from April 19 to July 20, 2012. Figure 2.4 The sum of two vec- tors Vd) (2) = is the vector v = (!) . -f vV) having components [uj + Example 2.4 (Vector representation of polynomials) The set of real polynomials of degree at most n — 1, n > 1, is Pn-1 = {p : p(t) = an_itn 1 + an-2tn 2 + • • • + a\t + a§, t £ R}, where a$,... ,an_\ £ R are the coefficients of the polynomial. Any polynomial p £ Pn-\ is uniquely identified by a vector v £ Rn containing its coefficients v = [an_ 1 ... a$]J and, conversely, each vector v £ Rn uniquely defines a polynomial p £ Pn-\. Moreover, the operations of multiplication of a polynomial by a scalar and sum of two polynomials correspond respectively to the operations of multiplication and sum of VECTORS AND FUNCTIONS 25 the corresponding vector representations of the polynomials. In mathematical terminology, we say that that Pn~ 1 is a vector space isomorphic to the standard vector space Rn. 2.1.2.3 Subspaces and span. A nonempty subset V of a vector space X is called a subspace of X if, for any scalars oc, f>, x,y G V =* ocx + (5yeV. In other words, V is "closed" under addition and scalar multiplication. Note that a subspace always contains the zero element. A linear combination of a set of vectors S = {x^\..., x} in a vector space A' is a vector of the form oc\X^ H + ocmx^m\ where oc\,...,ocm are given scalars. The set of all possible linear combinations of the vectors in S = {x^l\... ,x^} forms a subspace, which is called the subspace generated by S, or the span of S, denoted by span(S). In ]Rn, the subspace generated by a singleton S = {x^} is a line passing through the origin; see Figure 2.5. The subspace generated by two non-collinear (i.e., such that one is not just a scalar multiple of the other) vectors S = {x^\ x^2)} is the plane passing through points see Figure 2.6 and Figure 2.7. More generally, the subspace generated by S is a flat passing through the origin. 2.2.24 Direct sum. Given two subspaces X, y in Rn, the direct sum of X, y, which we denote by X 0 y, is the set of vectors of the form x + y, with x £ X, y £ y. It is readily checked that X 0 y is itself a subspace. Figure 2.5 Line generated by scaling of a vector xd). Figure 2.6 Plane generated by linear combinations of two vectors x^2\ 2.1.2.5 Independence, bases, and dimensions. A collection xW,..., x^m) of vectors in a vector space X is said to be linearly independent if no vector in the collection can be expressed as a linear combination of the others. This is the same as the condition OCjX^ — 0 : oc = 0. Later in this book we will see numerically efficient ways to check the independence of a collection of vectors. Given a set S = (x^1),..., x(m)} of m elements from a vector space X, consider the subspace S = span(S) generated by S, that is the set of vectors that can be obtained by taking all possible linear combinations of the elements in S. Suppose now that one element in S, say the last one x^m\ can itself be written as a linear combination of the remaining elements. Then, it is not difficult to see that we could remove xfrom S and still obtain the same span, that is1 span(S) = span(S \ x(m)). Suppose then that there is another element in {S \ x(m)}, say that can be be written as a linear combination of elements in {S \ Then again this last set has the same span as S. We can go on in this way until we remain with a collection of vectors, say B = ..., }, d < m, such that span(B) = span(S), and no element in this collection can be written as a linear combination of the other elements in the collection (i.e., the elements are linearly independent). Such an "irreducible" set is called a basis for span(S), and the number d of elements in the basis is called the dimension of span(S). A subspace can have many different bases (actually, infinitely many), but the number of elements in any basis is fixed and equal to the dimension of the subspace (rf, in our example). If we have a basis {x^l\... ,x^} for a subspace 5, then we can write any element in the subspace as a linear combination of elements in the basis. That is, any x e S can be written as X=Y^0Lix^' (2*1) for appropriate scalars otj. Example 2.5 (Bases) The following three vectors constitute a basis for IR3: ’ 1 ' ' 1 ~ ' 1 ' , *(3) = Given, for instance, the vector x = [1, 2, 3]T, we can express it as a linear combination of the basis vectors as in (2.1): 1 We use the notation A \ B to denote the difference of two sets, that is the set of elements in set A that do not belong to set B. = «1 4- ol2 + CC3 Finding the suitable values for the oc{ coefficients typically requires the solution of a system of linear equations (see Chapter 6). In the present case, the reader may simply verify that the correct values for the coefficients are cl\ — 1.5, oc2 = —2, 0C3 = 1.5. There are, however, infinitely many other bases for IR3. A special one is the so-called standard basis for IR3, which is given by ' 1 ' ' 0 ‘ ' 0 ' , *(2> = VECTORS AND FUNCTIONS 27 More generally, the standard basis for lRn is = ex, X(2) =e-L, xM = e„, where e* denotes a vector in 1RH whose entries are all zero, except for the z-th entry, which, is equal to one; see Figure 2.7. plane generated by e2, e3 In all the rest of this book we deal with finite dimensional vector spaces, that is with spaces having a basis of finite cardinality. There are some vector spaces that are infinite dimensional (one such example is the space of polynomials with unspecified degree). However, a rigorous treatment of such vector spaces requires tools that are out of the scope of the present exposition. From now on, any time we mention a vector space, we tacitly assume that this vector space is finite dimensional. 2.1.2.6 Affine sets. A concept related to the one of subspaces is that of affine sets, which are defined as a translation of subspaces. Namely, an affine set is a set of the form2 A = {x £ X : x = v + v £ V}, where is a given point and V is a given subspace of X. Subspaces are just affine spaces containing the origin. Geometrically, an affine set is a flat passing through The dimension of an affine set A is defined as the dimension of its generating subspace V. For example, if V is a one-dimensional subspace generated by a vector x^\ then A is a one-dimensional affine set parallel to V and passing through x@\ which we refer to as a line, see Figure 2.8. Figure 2.7 Standard basis of 1R3 and planes generated by linear combinations of (ei,e2) and of (^2,^3). 2 We shall sometimes use the shorthand notation A = + V to denote an affine set, and we shall refer to V as the subspace generating A. Figure 2.8 A line is a one-dimensional affine set. A line can hence be described by means of two elements: a point belonging to the line, and a vector u G X describing the direction of the line in space. Then, the line through xq along direction u is the set L = {x e X : x — xq + v, v G span(tz)}, where in this case span(tz) = {At* : A G R}. 2.2 Norms and inner products As we have seen, vectors may represent, for instance, positions of objects in space. It is therefore natural to introduce a notion of distance between vectors, or of the length of a vector. 2.2.1 Euclidean length and general £p norms 2.2.1.1 The concept of length and distance. The Euclidean length of a vector x G Rn is the square-root of the sum of squares of the components of x, that is Euclidean length of x = \J x\ + x\ + 4- y2 This formula is an obvious extension to the multidimensional case of the Pythagoras theorem in R2; see Figure 2.9. The Euclidean length represents the actual distance to be "travelled" to reach point x from the origin 0, along the most direct way (the straight line passing through 0 and x). It may, however, be useful to have a slightly more general notion of length and distance in a vector space, besides the Euclidean one. Suppose for instance that for going from 0 to x we cannot move along the direct route, but we have to follow some path along an orthogonal grid, as exemplified in Figure 2.10. This is the situation experienced, for example, by a driver who needs to move along a network of orthogonal streets to reach its destination. In this case, the shortest distance from 0 to x is given by the sum of the absolute values of the components of x: Length of x (along orthogonal grid) = |xi| + |x21 + Xfi The previous example shows that in vector spaces several different measures of "length" are possible. This leads to the general concept of norm of a vector, which generalizes the idea of Euclidean length. 2.2.1.2 Norms and £p norms. A norm on a vector space X is a realvalued function with special properties that maps any element x G X into a real number ||x||. Figure 2.9 The Euclidean length of a vector in R2 is computed by means of the Pythagoras theorem. ti of pa th = |*i I + \x2\ • L X2 J Figure 2.10 Length of path from 0 to x along an orthogonal grid is |*i | + |*21- VECTORS AND FUNCTIONS 29 Definition 2.1 A function from A' to R is a norm, if ||z|| > 0 Vx G X, and ||*|| = 0 if and only if x = 0; ||* + y|| < ||*|| + ||y||, for any x,y G X (triangle inequality); \\ocx\\ = \cc\||*||, for any scalar oc and any x G X, Examples of norms on the vector space X — Rn are the so-called £v norms, defined as ^=1 t , 1 < p < 00. In particular, for p = 2 we obtain the standard Euclidean length and for p = 1 we obtain the sum-of-absolute-values length IMIl = E M- The limit case p = oo defines the £& norm (max absolute value norm, or Chebyshev norm)3 ||x||oo = max \xir\. k=l,...,n In some applications, we may encounter functions of a vector x G R” that are not formally norms but still encode some measure of "size" of a vector. A prime example is the cardinality function, which we shall discuss in Section 9.5.1. The cardinality of a vector * is defined as the number of nonzero elements in *: card(x) = ]|T I(x*. fr 0), where I(x*. fr 0) k=1 1 if x^- fr 0 0 otherwise. The cardinality of a vector x is often called the £0 norm and denoted by ||x||o, although this is not a norm in the proper sense,4 since it does not satisfy the third property in Definition 2.1. 2.2.1.3 Norm balls. The set of all vectors with £p norm less than or equal to one, Bp = {x G R" : ||x||p < 1}, is called the unit £p norm ball. The shape of this set completely characterizes the norm. Depending on the value of p, the sets Bp have a different geometrical shape. Figure 2.11 shows the shapes of the B2, B\, and Boo balls in R2. 3 If, for instance, x is a vector representation of a time-series (see Example 2.3), then || x || 00 returns its peak amplitude. 4 This slight abuse of terminology is justified by the fact that \ 1/p card(x) = ||x||0 — lim t=i / zlloo < 1 We observe, for instance, that the £2 norm does not "favor" any direction in space, i.e., it is rotationally invariant, meaning that a vector of fixed length that rotates arbitrarily will maintain the same 12 norm. On the contrary, the same vector will have different £\ norms, reaching its smallest value when aligned to the coordinate axes. Inner product, angle, orthogonality 2.2.2.1 Inner product. A fundamental operation that can be defined between two vectors is the inner product. Definition 2.2 An inner product on a (real) vector space A' is a realvalued function which maps any pair of elements x, y £ X into a scalar denoted by (x,y). The inner product satisfies the following axioms: for any x,y,z £ X and scalar cc (x,x) > 0; (x, x) = 0 if and only if x = 0; (x + y,z) = (x,z) + (y,z); (ax,y) = cc{x,y); (*/y> = (y/*>- A vector space equipped with an inner product is called an inner product space. The standard inner product5 defined in Rn is the "row-column" product of two vectors (x,y) = xTy = £x*yfc. However, other inner products can be defined in Rw, see, for example, Exercise 2.4. Moreover, the inner product remains well defined also on vector spaces different from Rn, such as, for instance, the space of matrices, defined in Chapter 3. Figure 2.H Norm balls in R2. 5 The standard inner product is also often referred to as the scalar product (since it returns a scalar value), or as the dot product (since it is sometimes denoted by x • y). VECTORS AND FUNCTIONS 31 In an inner product space, the function yj(x,x) is a norm, which will often be denoted simply by ||x||, with no subscript. For example, for R" equipped with the standard inner product, we have 11*11 = = IMI2. The few examples below further illustrate the usefulness of the concept of the standard inner product between vectors. Example 2.6 (Rate of return of a financial portfolio) The rate of return r (or return) of a single financial asset over a given period (say, a year, or a day) is the interest obtained at the end of the period by investing in it. In other words, if, at the beginning of the period, we invest a sum S in the asset, we will earn Sencj = (l + r)Sat the end of the period. That is, r _ ^end ~ S S Whenever the rate of return is small (r <C 1), the following approximation is considered: with the latter quantity being known as the log-return. For n assets, we can define a vector r G 1R” such that the z-th component of r is the rate of return of the z-th asset. Assume that at the beginning of the period we invest a total sum S over all assets, by allocating a fraction X; of S in the z-th asset. Here, x G R” represents the portfolio "mix," and it is a non-negative vector whose components sum to one. At the end of the period the total value of our portfolio would be Send ~ hence the rate of return of the portfolio is the relative increase in wealth: = £(1 + ri)Xi - 1 = t Xi - 1 + E riXi = rT*. ^ /=1 1=1 1=1 The rate of return is thus the standard inner product between the vectors of individual returns r and of the portfolio allocation weights x. Note that, in practice, rates of return are never known exactly in advance, and they can be negative (although, by construction, they are never less than -1). Example 2.7 (The arithmetic mean, the weighted mean, and the expected value) The arithmetic mean (or, average) of given numbers x\,... ,xn, is defined as x = -(x1-\ + x„). The arithmetic mean can be interpreted as a scalar product: X = pTx, where x = [x\,... ,xn\T is the vector containing the numbers (samples), and p is a vector of weights to be assigned to each of the samples. In the specific case of arithmetic mean, each sample has equal weight 1 In, hence p 1, where 1 is a vector of all ones. More generally, for any weight vector p £ Kn such that Pi >0 for every i, and pi + • • • + pn = 1, we can define the corresponding weighted average of the elements of x as pTx. The interpretation of p is in terms of a discrete probability distribution of a random variable X, which takes the value xi with probability p/, i = 1,..., n. The weighted average is then simply the expected value (or, mean) of X, under the discrete probability distribution p. The expected value is often denoted by Ep{X}, or simply E{X}, if the distribution p is clear from context. 2.2.2.2 Angle between vectors. The standard inner product on W1 is related to the notion of angle between two vectors. If two nonzero vectors x, y are visualized as two points in Cartesian space, one can consider the triangle constituted by x,y and the origin 0; see Figure 2.12. Let 6 be the angle at 0 between the Ox and Oy sides of the triangle, and let z — x — y. Applying the Pythagoras theorem to the triangle with vertices yxx', we have that IMI2 = (Hyll2Sin0)2 + (||x||2- ||y||2COS0)2 = 11^112 + ilylll — 2!|x||2||y||2 COS0. But, - IMI2 = II* —yll2 = (*-y)T(*-y) = *T* + yTy — 2xTy, which, compared with the preceding equation, yields *Ty = IM|2||y||2cos0. The angle between x and y is therefore defined via the relation cos0 = jj'T/'n • (2-3) When xTy = 0, the angle between x and y is 6 = ±90°, i.e., x,y are orthogonal. When the angle 6 is 0°, or ±180°, then x is aligned with y, that is y = ocx, for some scalar oc, i.e., x and y are parallel. In this situation |xTy| achieves its maximum value \cc\ ||x Example 2.8 (Comparing text) We can use the word-frequency vector representation of text introduced in Example 2.1, for comparing text documents. In this context, similarity between two documents may be measured by means of the angle 0 between the two frequency vectors representing the documents, the documents being maximally "different" when the corresponding frequency vectors are orthogonal. As an exam- Figure 2.12 Angle 9 between vectors VECTORS AND FUNCTIONS 33 pie, consider the following headlines from the web edition of the New York Times on Dec. 7, 2010: (a) Suit Over Targeted Killing in Terror Case Is Dismissed. A federal judge on Tuesday dismissed a lawsuit that sought to block the United States from attempting to kill an American citizen, Anwar Al-Awlaki, who has been accused of aiding A1 Qaeda. (b) In Tax Deal With G.O.P., a Portent for the Next 2 Years. President Obama made clear that he was willing to alienate his liberal base in the interest of compromise. Tax Deal suggests new path for Obama. President Obama agreed to a tentative deal to extend the Bush tax cuts, part of a package to keep jobless aid and cut payroll taxes. (c) Obama Urges China to Check North Koreans. In a frank discussion, President Obama urged China's president to put the North Korean government on a tighter leash after a series of provocations. (d) Top Test Scores From Shanghai Stun Educators. With China's debut in international standardized testing, Shanghai students have surprised experts by outscoring counterparts in dozens of other countries. The text has been first simplified (e.g., plurals removed, verbs converted to present tense, etc.) and then compared against the dictionary V = {aid, kill, deal, president, tax, china}. The frequency vectors are r 1 1 r 1 1 10 ' 0 " " 0 " , *№ = , = , xw = _ 0 _ L 2 J _ 1 _ Table 2.2 displays the cos 0 between pairs of vectors representing the text. A high value of cos 0 suggests a high correlation between the two texts, while a cos0 near zero means that the two texts are nearly orthogonal (uncorrelated). Table 2.2 Cosine of angle 9 between texts. cos 6 x(a) x(b) 1 0.08l6 * * * * 2.2.2.3 Cauchy-Schwartz inequality and its generalization. Since | cos 61 < 1, it follows from equation (2.3) that l*Ty| < IWHIylb and this inequality is known as the Cauchy-Schwartz inequality. A generalization of this inequality involves general ip norms and is known as the Holder inequality: for any vectors x,y £ W1 and for any p, q > 1 such that 1 / p + 1 / <7 = 1, it holds that6 6 See Exercise 2.7. l*Tyl < E l**y*l ^ Ibllpllylk (2-4) 2.2.2.4 Maximization of inner product over norm balls. Given a nonzero vector ye Rn, consider the problem of finding some vector x e Bp (the unit ball in £p norm) that maximizes the inner product xTy: that is, solve max xTy. For p — 2 the solution is readily obtained from Eq. (2.3): x should be aligned (parallel) to y, so as to form a zero angle with it, and have the largest possible norm, that is, a norm equal to one. Therefore the unique solution is Y* y hence max^u^i xTy = ||y||2. Consider next the case with p = 00: since xTy — xiVir where each element Xj is such that |x*| < 1, then the maximum in the sum is achieved by setting X; = sgn(yz),7 so that xzyz- = |yz |. Hence, xto = Sgn(y), and max||x||oo<1 xTy = YJi=\ \]!i\ = Ill/lll- The optimal solution may not be unique, since corresponding to any yz = 0 any value x* e [—1,1] could be selected without modifying the optimal objective. Finally, we consider the case with p = 1: the inner product xTy = E?= 1 xiVi can now be interpreted as a weighted average of the y;s, where the XjS are the weights, whose absolute values must sum up to one. The maximum of the weighted average is achieved by first finding the yi having the largest absolute value, that is by finding one index m such that |yz | < \ym | for all i = 1,..., n, and then setting nJ * sgn(yz ) if i — m 0 otherwise We thus have xTy — max/ |yz| = ||y||oo. Again, the opti¬ mal solution may not be unique since in the case when vector y has several entries with identical maximum absolute value then m can be chosen to be any of the indices corresponding to these entries. Example 2.9 (Production margin) Consider a production process involving two raw materials r\, r2 and one finished product. The unit cost for the raw materials is subject to variability, and it is given by Cj = Ci + 0CjXi, i = 1,2, where, for i — 1,2, q is the nominal unit cost of material rif ocj > 0 is the cost spread, and \xj\ < 1 is an unknown term accounting for cost uncertainty. Production of one unit of the finished product requires a fixed 7 sgn denotes the sign function, which, by definition, takes values sgn(x) = 1, if x > 0, sgn(x) = —1, if x < 0, and sgn(x) = 0, if x = 0. VECTORS AND FUNCTIONS 35 amount mi of raw material r\ and a fixed amount m2 of raw material r2- Each finished product can be sold on the market at a price p which is not precisely known in advance. We assume that P = p + j6x3/ where p is the nominal selling price for one unit of the finished product, fi > 0 is the price spread, and \x$\ < 1 is an unknown term accounting for price uncertainty. The production margin (income minus cost) for each unit of finished product is thus given by margin = p — c\mi—C2m2 = p + fix?, — c\m\ — ociXimi — C2m2 — ct2x2m2 = nom_margin + xTy, where we defined nom_margin = p — c\m\ — C2W2, and xT = [xi, x2, X3\, y = [—tx.\m\, —a2m2, fi]T. We then see that the production margin is given by a constant term reflecting the nominal material costs and sale price, plus a variable term of the form xJy, with uncertainty vector x such that ||x||oo < 1. Our problem is to determine the maximum and the minimum production margin under the given uncertainty. Clearly, the margin lies in an interval centered at the nominal margin nom_margin, of half-length max xTy = ||i/||i = «imi + ^2^2 + fi. 2.2.3 Orthogonality and orthogonal complements 2.2.3.1 Orthogonal vectors. Generalizing the concept of orthogonality to generic inner product spaces, we say that two vectors x, y in an inner product space X are orthogonal if (x,y) = 0. Orthogonality of two vectors x, y £ X is symbolized by x _L y. Nonzero vectors x^\..., are said to be mutually orthogonal if (x^l\x^) = 0 whenever i ^ j. In words, each vector is orthogonal to all other vectors in the collection. The following proposition holds (the converse of this proposition is instead false, in general). Proposition 2.1 Mutually orthogonal vectors are linearly independent. Proof Suppose, for the purpose of contradiction, that x^\...,x^ are orthogonal but linearly dependent vectors. This would mean that there exist oc\,... ,oc^, not all identically zero, such that otiX^ = 0. But, taking the inner product of both sides of this equation with x^\ j — 1,..., d, we would get = 0, it would follow that Kj = 0 for all ; = 1,..., d, which contradicts the 2.2.3.2 Orthonormal vectors. A collection of vectors S — ..., } is said to be orthonormal if, for d, In words, S is orthonormal if every element has unit norm, and all elements are orthogonal to each other. A collection of orthonormal vectors S forms an orthonormal basis for the span of S. 2.2.3.3 Orthogonal complement. A vector x E X is orthogonal to a subset S of an inner product space X if x _L s for all s E S. The set of vectors in X that are orthogonal to S is called the orthogonal complement of <S, and it is denoted by S-1; see Figure 2.13. 2.2.3.4 Direct sum and orthogonal decomposition. A vector space X is said to be the direct sum of two subspaces A, B if any element x E X can be written in a unique way as x — a + b, with a E A and b E B; this situation is symbolized by the notation X = A 0 B. The following theorem holds. Theorem 2.1 (Orthogonal decomposition) If S is a subspace of an inner- product space X, then any vector x E X can be written in a unique way as the sum of one element in S and one in the orthogonal complement S1 (see Figure 2.14), that is ^ = 5® S1 for any subspace SOX. Proof We first observe that S n S± = 0, since if v E S n S1, then (v,v) = \\v\\2 = 0, which implies that v = 0. Next, we denote W = <5 + S1- (the space of vectors obtained by summing elements from S and elements from S±). We can choose an-orthonormal basis of W Figure 2.13 Example of a two- dimensional subspace S in R3 and its orthogonal complement S^. x = y + z Figure 2.14 Any vector can be written in a unique way as the sum of an element in a subspace S and one in its orthogonal complement SL. VECTORS AND FUNCTIONS 37 and extend it to an orthonormal basis8 of X. Thus, if W/^ there is an element z in the basis of X which is orthogonal to W. Since S C W, z is orthogonal to S as well, which means that z belongs to S1-. The latter is a subspace of W, therefore z is in W, and we arrive at a contradiction. Thus, we proved that S + S1- — X, that is each element x e X can be written as the sum of one element from xs e S and one element from y e S1, i.e., x = xs + y. It remains to be proved that such a decomposition is unique. Suppose it is not, for the purpose of contradiction. Then, there would exist xSl,xS2 e 5, and yi, 1/2 £ «S1, xSl ^ Xs2/ yi 3/2/ such that x = xSl + yi and x = xS2 + y2- But then, taking the difference of these last two expressions we would have 0 ^ xSl - Xs2 = y2 - yi, where the left-hand side belongs to S and the right-hand side belongs to which is impossible since S fl S1- — 0. □ The following proposition summarizes some fundamental properties of inner product spaces. Proposition 2.2 Let x,z be any two elements of a (finite dimensional) inner product space X, let ||x|| = yj (x,x), and let oc be a scalar. Then: 1. | <x,z> | < ||x||||z||, and equality holds iff x = ocz, or z = 0 (Cauchy- Schwartz); 2. ||x + z||2 + ||x — z||2 = 2||x||2 + 2||z||2 (parallelogram law); 3. if x _L z, then ||x + z||2 = ||x||2 + ||z||2 (Pythagoras theorem); 4. for any subspace S C X it holds that X = S 0 S1; 5. for any subspace S C X it holds that dim X = dim S + dim S-1. 2.3 Projections onto subspaces The idea of projection is central in optimization, and it corresponds to the problem of finding a point on a given set that is closest (in norm) to a given point. Formally, given a vector x in an inner product space X (say, e.g., X — Rn) and a closed set9 S C X, the projection of x onto S, denoted by ris(x), is defined as the point in S at minimal distance from x: n5(x) = argmin ||y-x||, where the norm used here is the norm induced by the inner product, that is ||y — x 11 = y/ (y — x,y — x). This simply reduces to the Euclidean norm, when using the standard inner product (see Eq. (2.2)), in which case the projection is called the Euclidean projection. 8 We discuss in Section 2.3.3 how it is always possible to construct an orthonormal basis for a subspace, starting from any given basis for that subspace. 9 A set is closed if it contains its boundary. For instance, the set of points x £ ]R2 such that |xi| < 1, 1*2! ^ 1 is closed, whereas the set characterized by |xi| < 1, |x21 < 1 is not. See Section 8.1.1 for further details. In this section we focus in particular on the case when S is a subspace. 2.3.1 Projection onto a one-dimensional subspace To introduce the concept of projections, we begin by studying a onedimensional case. Given a point (vector) x E X and a nonzero vector v E X, where X is an inner product space, the projection П^(х) of x onto the subspace generated by v (i.e., the one-dimensional subspace of vectors Sv = {Av, А E 1R}) is the vector belonging to Sv at minimum distance (in the sense of the norm induced by the inner product) from x. In formulas, we seek = argmm ||y-x||. We next show that the projection is characterized by the fact that the difference vector (x — П^(х)) is orthogonal to v. To see this fact, let xv be a point in Sv such that (x — xv)-Lvf and consider an arbitrary point у E Sv. By the Pythagoras theorem, we have that (see Figure 2.15) lly-*H2 = II (y-Xv)- (х-ХгОН2 = ||y Xu||2 + ИX Xd||2. Since the first quantity in the above expression is always nonnegative, it follows that the minimum over у is obtained by choosing у = xv, which proves that xv is the projection we sought. To find a formula for xv, we start from the orthogonality condition (x — xv)±v (x — xV/v) = 0. Then, we exploit the fact that xv is by definition a scalar multiple of v, that is xv — xv for some ос E ]R, and we solve for cc, obtaining xv = <xv, a = -n-rrf. Vector xv is usually called the component of x along the direction v, see Figure 2.15. If v has unit norm, then this component is simply given by xv = (v,x)v. 2.3.2 Projection onto an arbitrary subspace We now extend the previous result to the case when S is an arbitrary subspace (i.e., not necessarily one-dimensional). This is stated in the following key theorem, which is illustrated'in Figure 2.16. Figure 2.15 The projection of x onto the subspace Sv generated by a single vector v is a point xv G Sv such that the difference x — xv is orthogonal to v. xv is also called the component of x along the direction v. Figure 2.16 Projection onto a subspace. VECTORS AND FUNCTIONS 39 Theorem 2.2 (projection theorem) Let X be an inner product space, let xbea given element in X, and let S be a subspace of X. Then, there exists a unique vector x* £ S which is the solution to the problem Moreover, a necessary and sufficient condition for x* being the optimal solution for this problem is that x* 6 5, (x-x*) _L S. Proof Let S1- be the orthogonal subspace of S, then, by virtue of Theorem 2.1, any vector x E X can be written in a unique way as x = u H- z, 11 G S, z E «S-1-. Hence, for any vector y, lly-*H2 = \\(y-u)-z\\2 = \\y-u\\1 + \\z\\2-2(y-u,z). The last inner product term in the sum is zero, since z E S1- is orthogonal to all vectors in S. Therefore lly-*ll2 = lly-«ll2 + l|z||2, from which it follows that the unique minimizer of the distance ||y — x|| is x* = y = u. Finally, widi this choice, y — x = z E S1, which concludes the proof. □ A simple generalization of Theorem 2.2 considers the problem of projecting a point x onto an affine set A, see Figure 2.17. This is formally stated in the next corollary. Corollary 2.1 (Projection on affine set) Let X be an inner product space, let x be a given element in X, and let A = x(°) + S be the affine set obtained by translating a given subspace S by a given vector x®. Then, there exists a unique vector x* E A which is the solution to the problem min ||y-x||. yeA Moreover, a necessary and sufficient condition for x* to be the optimal solution for this problem is that X* E A, (x-x*) ±5. Proof We reduce the problem to that of projecting a point onto a subspace. Considering that any point y E A can be written as y = z + x(°), for some z E S, our problem becomes Figure 2.17 Projection on affine set. min ||y — jc|| = min ||z + — x||. yeA zeS The latter problem thus amounts to projecting the point (x — onto the subspace S. By the projection theorem, the optimality conditions for this problem are z* G S and z* — (x — x(°))_LS. In terms of our original variable, the optimum x* = z* + x® is characterized by x* G A, (x* - x) J_5, which concludes the proof. □ 2.3.2.1 Euclidean projection of a point onto a line. Let p e R” be a given point. We want to compute the Euclidean projection p* of p onto a line L = {xo + span(w)}, ||w||2 = 1, as defined in Section 2.1.2.6; see Figure 2.18. The Euclidean projection is a point in L at minimum Euclidean distance from p, that is p* = argmin ||x — pH2- Since any point x G L can be written as x = xo + v, for some v G span(w), the above problem is equivalent to finding a value v* for v, such that v* — arg min \\v - (p - x0)\\2. yespan (u) We recognize here the classical situation addressed by the projection theorem: for z = p — xq, we need to find the projection of z onto the subspace S = span(u). The solution must hence satisfy the orthogonality condition (z — v*)±u, i.e., ((z — v*),u) =0 where the inner product to be used here is the standard one. Recalling that v* = A* u and uTu = ||ziH2 = 1/ we hence have uTz — uTv* — 0 <=> uTz — A* = 0 A* — uTz = uT(p — xq). The optimal point p* is thus given by p* = X0 + = x0 + A*u = x0 + uT(p - xo)u, and the squared distance from p to the line is \\p-p*\\2 = !lp — ^olli — A*2 = IIf — ^olll — (mT(?7 — ^o))2- 2.3.2.2 Euclidean projection of a point onto an hyperplane. A hyperplane is an affine set defined as follows H = {z e R" : aTz = b}, where a ^ 0 is called a normal direction of the hyperplane, since for any two vectors Zi,Z2 G H it holds that (z.\ — Z2)-La, see Section 2.4.4. Figure 2.18 Projection of point p onto the line L = {xq + v, v £ span(w)}. VECTORS AND FUNCTIONS 41 Given a point p E R” we want to determine the Euclidean projection p* of p onto H. The projection theorem requires p — p* to be orthogonal to H. Since a is a direction orthogonal to H, the condition (p - p*) J_H is equivalent to saying that p — p* = oca for some oc E R. To find a, consider that p* E H, thus 0Tp* — fr, and multiply the previous equation on the left by aT, obtaining aTp — b — a: || a H2, _ aTp — b * aTp-b . . V =P (2-5) The distance from p to H is IIP~ P*||2 = M • ||<»||2 = ^U fn b^ (2-6) 2.3.2.3 Projection on a vector span. Suppose we have a basis for a subspace S C X, that is 5 = spanfx^1),... ,x^). Given a vector x e X, the projection theorem readily tells us that the unique projection x* of x onto S is characterized by the orthogonality condition (x — x*) J_ S. Since x* G S, we can write x* as some (unknown) linear combination of the elements in the basis of S, that is x* = £a,-x(,). (2.7) Then (x-x*) J_ S <=> (x-x*,x^) = 0, k = and these conditions boil down to the following system of d linear equations10 in d unknowns (the Cijs): Y^0Ci{x^k\x^) = {x^Kx), k = l,...,d. (2.8) 10 Linear equations are studied in detail in Chapter 6. Solving this system of linear equations provides the coefficients ol, and hence the desired x*. Projection onto the span of orthonormal vectors. If we have an orthonormal basis for a subspace S = span(S), then it is immediate to obtain the projection x* of x onto that subspace. This is due to the fact that, in this case, the system of equations in (2.8) immediately gives the coefficients oc— (x(k\x), i = 1,Therefore, from Eq. (2.7) we have that — Yh(x^l\x)x^\ We next illustrate a standard procedure to construct an orthonormal basis for span(S). 2.3.3 The Gram-Schmidt procedure Given a basis S = (x^1),.. . ,x(rf)} (i.e., a collection of linearly independent elements) for a subspace S = span(S), we wish to construct an orthonormal basis for the same subspace. The Gram-Schmidt procedure does so by working recursively as follows. Choose any element from S, say x^1), and let r(l)_ r(l) Note that is nonzero, for otherwise x^1) would be a linear combination (with zero coefficients) of the remaining vectors, which is ruled out by the independence of the elements in S (since S is assumed to be a basis), hence the division by H^1) || is well defined. 0 *(D = _£L Step 1 Observe now that if we project xonto span(z^)), obtaining *(2) — (x^2\z^)z^ (see Eq. (2.9)), then, by definition of projection, the residual x(2) — {x(2\z^)z^ is orthogonal to span(z^1)): we thus obtained our second element in the orthonormal basis: Observe again that is nonzero, since otherwise x(2) would be proportional to zW and hence also to x^1),which is not allowed by Figure 2.19 First two steps of the Gram-Schmidt procedure. VECTORS AND FUNCTIONS 43 the independence assumption. The first two iterations of the Gram- Schmidt procedure are illustrated in Figure 2.19. Next, we take the projection of onto the subspace generated by {z^)fz^}, Since zW,z(2) are orthonormal, this projection is readily computed using Eq. (2.9) as f(3) = (x(3),z(i))z(i) + (x(3)/Z(2))2(2)_ Considering the residual £(3) = x(3) _ (x(3)/Z(l))z(l) _ ^(3)/2(2))2(2) and normalizing11 we obtain the third element of the orthonor- ^1 a,™ Ui—- J3) _ £{3) The process is iterated, yielding at the generic iteration k: £(*) = The collection {z^l\... is, by construction, an orthonormal basis for S = span(S). The Gram-Schmidt procedure is a simple way to obtain an orthogonal basis, although it is not the most numerically reliable. In Section 7.3.1 we shall discuss a simple modification of the procedure that has better numerical properties.12 The Gram- Schmidt procedure is strictly related to the so-called QR (orthogonal- triangular) factorization of a matrix, and it is a key ingredient in the solution of least-squares problems and linear equations (more on these matters in Chapter 6). 2.4 Functions "Note again that the independence assumption prevents from being zero. Figure 2.20 Third step of the Gram- Schmidt procedure. 12 See also Section 7.3.2 for a variant of the method that does not require the original vectors to be independent. Besides the standard operations of sum and scalar multiplication, other operations can be defined on vectors, and this leads to the notion of functions, a basic object in optimization problems. We also show how the concepts of linear and affine functions are closely related to inner products. 2.4.1 Functions and maps A function takes a vector argument in Rn, and returns a unique value in R. We use the notation to refer to a function with "input" space Rn. The "output" space for gives the Euclidean distance from the point (x\,X2) to a given point We allow functions to take infinity values. The domain13 of a function /, denoted dom /, is defined as the set of points where the function is finite. Two functions can differ not by their formal expression, but because they have different domains. For example, the functions f,g defined as have the same formal expression inside their respective domains. However, they are not the same function, since their domain is different. We usually reserve the term map to refer to vector-valued functions That is, maps are functions that return a vector of values. We use the notation to refer to a map with input space Rn and output space Rm. The components of the map / are the (scalar-valued) functions /z, i = 1,..., m. 2.4.2 Sets related to functions Consider a function / : Rn —» R. We define a number of sets relevant to /. The graph and the epigraph of the function / are both subsets of Rn+1 ; see Figure 2.21. The graph of / is the set of input/output pairs that / can attain, that is: 13 For example, define the logarithm function as the function / : IR —> R, with values f(x) = log* if x > 0, and —00 otherwise. The domain of the function is thus 1R++ (the set of positive reals). -I -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 _ 1 Figure 2.21 The graph of the function is the set of input/output pairs, shown as a solid line. The epigraph corresponds to points on and above the graph, in light grey. / : ]R" ->• ]R, functions is ]R. For example, the function / : IR2 — R with values /(*) = \/(*1 - yi)2 + (*2 - 3/2)2 graph/ = {(*,/(*)) e R”+1 t x € Rn} . VECTORS AND FUNCTIONS 45 The epigraph, denoted by epi /, describes the set of input/output pairs that / can achieve, as well as "anything above": epi/ = j(x,f) E 1R"+1 : x E 1R”, f>/(x)j. Level and sublevel sets correspond to the notion of the contour of the function /. Both depend on some scalar value t, and are subsets of JRn. A level set (or contour line) is simply the set of points that achieve exactly some value for the function /. For f E 1R, the f-level set of the function / is defined as Cf(t) = {x E R" : /(x) = t}. A related notion is that of sublevel set. The f-sublevel set of / is the set of points that achieve at most a certain value for /: Lf{t) = {x E Rw : f(x)<t}. See Figure 2.22 for an example. 2.4.3 Linear and affine functions Linear functions are functions that preserve scaling and addition of the input argument. A function / : Kn —* 1R is linear if and only if Vx E HR*1 and oc E K,f(ocx) = ocf(x)', Vxi,X2 £ ]Rn,/(xi + X2) = /(*i) +/(*2). A function / is if and only if the function /(x) = /(x) — /(0) is linear (affine = linear + constant). Example 2.10 Consider the functions /1, /2, /3 : IR2 —^ IR defined below: /l(x) — 3.2xi+2x2, /2 (x) = 3.2xi + 2x2 “1“ 0.15, /3(x) = O.OOIX2 + 2.3xi + 0.3x2. The function f\ is linear; /2 is affine; /3 is neither linear nor affine (/3 is a quadratic function). Linear or affine functions can be conveniently defined by means of the standard inner product. Indeed, a function / : Kn —» 1R is affine if and only if it can be expressed as /(x) = aT x + b, for some unique pair [a, b), with a in ]Rn and b E 1R. The function is linear if and only if b = 0. Vector a E ]Rn can thus be viewed Figure 2.22 Level and sublevel sets of a function / : R2 —>• R, with domain R2 itself, and values on the domain given by f(x) = In Asin(*l+0-3*2-0.1) _j_ e0.2x2+0.7 as a (linear) map from the "input" space 3Rn to the "output" space 3R. More generally, any element a of a vector space X defines a linear functional fa : X -» R, such that /a(z) = (fl,z). For any affine function /, we can obtain « and b as follows: b = /(0), and 0,- = f(ej) — b, i — 1,... ,n. We leave it to the reader to prove that the identity f{x) — aTx 4- b then holds for any x. Example 2.11 (Linear functions and power laws) Sometimes, a nonlinear function can be "made" linear (or affine) via an appropriate change of variables. An example of this approach is given by the so-called power laws in physics. Consider a physical process which has inputs Xj > 0, j = 1,..., n, and a scalar output y. Inputs and output are physical, positive quantities, such as volume, height, or temperature. In many cases, we can (at least empirically) describe such physical processes by power laws, which are nonlinear models of the form y = ctx*1... xan\ where oc > 0, and the coefficients aj, j = 1,.. .,n are real numbers. We find power laws, for example, in the relationship between area, volume, and size of basic geometric objects; in the Coulomb law in electrostatics; in birth and survival rates of (say) bacteria as functions of concentrations of chemicals; in heat flows and losses in pipes as functions of the pipe geometry; in analog circuit properties as functions of circuit parameters; etc. The relationship x —» y is not linear nor affine, but if we introduce the new variables y = logy, Xj = logXj, = 1,..., then the above equation becomes an affine one: y = log oc + aj xj — aT x-\-b, where b — log oc. 2.4.4 Hyperplanes and half-spaces 2.4.4.1 Hyperplanes. As defined in Section 2.3.2.2, a hyperplane is a set described by a single scalar product equality. Precisely, a hyperplane in lRn is a set of the form H = jx G 3Rn : aTx = b j , (2.10) where a e 3Rn, a 0, and belR are given. Equivalently, we can think of hyperplanes as the level sets of linear functions, see Figure 2.23. When b = 0, the hyperplane is simply the set of points that are orthogonal to a (i.e., H is a (ft — 1)-dimensional subspace). This is Figure 2.23 Representation of a hyperplane as the level set of a linear function: H = {x : aTx = b}. VECTORS AND FUNCTIONS 47 evident since the condition aTx = 0 means that x has to be orthogonal to vector a, which in turn means that it lies in the orthogonal complement of span(«), which is a subspace of dimension n — 1. When b ^ 0, then the hyperplane is a translation, along the direction a, of that previously mentioned subspace. Vector a is the normal direction to the hyperplane; see Figure 2.23. The b term is related to the distance of the hyperplane from the origin. Indeed, for computing this distance we can project the origin onto the hyperplane (which is an affine set), obtaining (using Eq. (2.6) with p = 0) dist(H, 0) = If |[fl||2 = 1, then \b\ is precisely the distance of the hyperplane from the origin. If xo G H, then for any other element x G H, we have aTx = aTx0 = b. Hence, the hyperplane can be equivalently characterized as the set of vectors x such that x — x$ is orthogonal to a: H = jx G 3Rn : aT(x — xo) — 0 j , see Figure 2.24. 2.44.2 Equivalent representations of hyperplanes. We have seen that hyperplanes are affine sets of dimension n — 1 that generalize the usual notion of a plane in 3R3. In fact, any affine set of dimension n — 1 is a hyperplane of the form (2.10), for some a G Mn and be R. We next prove that the following two representation of a hyperplane are in fact equivalent: H = {x G R" : aTx = b}, € IR”, b 61R (2.11) = x0 + span(«i,..., m„_! ), (2.12) for some linearly independent vectors u\f... ,wn_\ G 3Rn and some vector xo £ lRn- Indeed, if H is given as in (2.11), then we take any xo G H and it holds that aT(x — xo) =0 for any x G H. Hence, we choose {u\,..., 1} to be a basis14 for span!#}-1 and immediately obtain the representation in (2.12). Conversely, starting from the representation in (2.12), we obtain (2.11) by choosing a to be a vector orthogonal to {u\,..., wn_i}, and b = «Txo* 2.4.4.3 Half-spaces. A hyperplane H separates the whole space into two regions: Figure 2.24 Representation of a hyperplane as the set of vectors orthogonal to a given direction a: H ~ {x : aT(x-x 0) =0}. 14 Numerically, a basis can be computed via the singular value decomposition (SVD), discussed in Chapter 5. Alternatively, one can use a variant of the Gram-Schmidt procedure, see Section 7.3.2. H- = jx : aTx< b j, H++ = jx : aTx > fo j . These regions are called half-spaces (H_ is a closed half-space, H++ is an open half-space). As shown in Figure 2.25, the half-space H_ is the region delimited by the hyperplane H — {aTx = b} and lying in the direction opposite to vector a. Similarly, the half-space H++ is the region lying above (i.e., in the direction of a) the hyperplane. Also, we remark that if *o is any point lying in the hyperplane H, then all points x G H+ are such that (x — *o) forms an acute angle with the normal direction a (i.e., the inner product between a and (x — xo) is positive: aT(x — *o) > 0). Similarly, for all points x G H_, we have that aT(x — xq) < 0, see Figure 2.26. 2.4.5 Gradients The gradient of a function / : W1 -» IR at a point x where / is differentiable, denoted by V/(x), is a column vector of first derivatives of / with respect to x\,..., xn: V/W = № - When n = 1 (there is only one input variable), the gradient is simply the derivative. An affine function / : JRn -> 3R, represented as f(x) = aTx 4- b, has a very simple gradient: V/(x) = a. The {a, b) terms of / thus have the following interpretation: b — /(0) is the constant term, referred to as the bias, or intercept (it corresponds to the point where the graph of / crosses the vertical axis); the terms aj, j = 1,.. .,n, which correspond to the components of the gradient of /, give the coefficients of influence of Xj on /. Example 2.12 (Gradient of a nonlinear function) The function / : R2 —> R, with values for x — [x\ X2]1": f(x) = sin Xi + 2*1*2 + x2 has partial derivatives 0/ , , ^-(*) = cos *1+2*2, ox 1 = 2XJ+2X2, hence the gradient at * is the vector V/(*) = [cos *1 + 2*2, 2*i + 2*2]T. Example 2.13 (Gradient of the distance function) The distance function from a point p G IR” to another point * G W1 is defined as p(x) = \\x - p||2 = \j E(x,- - Pi)2. Figure 2.25 Half-space. Figure 2.26 Geometry of half-spaces. $ V VECTORS AND FUNCTIONS 49 The function is differentiable at all x f=- p, and we have that V|o(x) = li——|p(x - p). Example 2.14 (Gradient of the log-sum-exp function) The log-sum-exp func- tion1^ lse : ~Rn R is defined as lse(x) = In ^^eX;^ . The gradient at x of this function is Vlse(x) = where z = [e*1 • • • ex”]T, and Z = £”=1 zv 15 This function appears in the objective of an important class of learning problems called logistic regression, as discussed in Section 13.3.5. It also appears in the objective and constraints of the so-called geometric programming models, discussed in Section 9.7.2. 2.4.5.i Chain rule for the gradient. Suppose / : lRm —» R is a differentiable function in n variables z = (z\,... ,zm), and each zz- is a differentiable function of n variables x = {x\,...,xn): z* = gi(x), i = 1 (which we write compactly as z = g(x), where g : Rtz ugmy Then, the composite function <p : Rn —* IR, with values = f(g(x)), has a gradient Vcp(x) E whose ;-th component . . . dgm(x) dXj dXj V/feW), j = 1 n. As a relevant example, when the functions gi are affine, z/ = gi(x) = ajx + loif fl2- E Rn, bi E 1R, i = 1,...,m, [Vf(x)];• = [a1;- ••• amj]Vf(g(x)), j = where ajj denotes the ;-th element of vector a^, i = 1,..., m, j = 1 ,...,n. 24.5.2 Affine approximation of nonlinear functions. A nonlinear function / : R*2 -4 R can be approximated locally via an affine function, using a first-order Taylor series expansion, see Figure 2.27. Specifically, if / is differentiable at point xq, then for all points x in a neighborhood of xq, we have that f{x o)-^W Figure 2.27 Affine approximation of f(x) in the neighborhood of a given point Xq. f(x) = f Oo) + V/(*o)T0 - *0) + e{x), where the error term e(x) goes to zero faster than first order, as x —» Xq, that is In practice, this means that for x sufficiently close to xo, we can write the approximation f(x)~ f(x0) + V/(x0)T(* - x0)- 2.4.5.3 Geometric interpretation of the gradient. The gradient of a function can be nicely interpreted in the context of the level sets defined in Section 2.4.2. Indeed, geometrically, the gradient of / at a point xo is a vector V/(x0) perpendicular to the contour line of / at level oc = f{x0), pointing from xo outwards to the x-sublevel set (that is, it points towards higher values of the function). Consider for example the function f(x) = lse(g(x)), g(x) = [sin(xx + 0.3x2), 0.2x2]T, (2.13) where lse is the log-sum-exp function defined in Example 2.14. The graph of this function is shown in the left panel of Figure 2.28, some of its contour lines are shown in the center panel of the figure. The right panel in Figure 2.28 also shows a detail of the contour lines of (2.13), together with arrows representing the gradient vectors at some grid points. The gradient V/(x0) also represents the direction along which the function has the maximum rate of increase (steepest ascent direction). Indeed, let v be a unit direction vector (i.e., ||i?||2 = 1), let e > 0, and consider moving away at distance e from xq along direction v, that is, consider a point x = xq + ev. We have that Figure 2.28 Graph of the function in (2.13) (left), its contour lines (center), and gradient vectors (arrows) at some grid points. f(x0 + ev) ~ f(x0) + eVf(x0)T'v, for 0, VECTORS AND FUNCTIONS 51 or, equivalently, f(x0 + ev) - f(x0) = V/(x0)' Looking at this formula we see that whenever e > 0 and v is such that V/(xq)t^ > 0/ then / is increasing along the direction v, for small e. Indeed, the inner product V/(xo)T^ measures the rate of variation of / at xq, along direction v, and it is usually referred to as the directional derivative of / along v. The rate of variation is thus zero, if v is orthogonal to V/(xq): along such a direction the function value remains constant (to first order), that is, this direction is tangent to the contour line of / at xq. On the contrary, the rate of variation is maximal when v is parallel to V/(xq), hence along the normal direction to the contour line at xo; see Figure 2.29. 2.4.6 Application to visualization of high-dimensional data Vectors with dimension higher than three cannot be visualized graphically. However, we can try to gain insight into high-dimensional data by looking at projections of the data onto lower-dimensional affine sets, such as lines (one-dimensional), planes (two-dimensional), or three-dimensional subspaces. Each "view" corresponds to a particular projection, that is, a particular one-, two- or three-dimensional subspace on which we choose to project the data. Figure 2.29 The gradient V/(x0) is normal to the contour line of / at Xo, and defines the direction of maximum increase rate. Projecting data on a line. A one-dimensional subspace of ]Rm is simply a line passing through the origin. Such a line is described by means of a unit-norm vector u G ]Rm defining the direction of the line in space. For each point x G ]Rm, the Euclidean projection of x onto the subspace span(w) is readily obtained from the projection theorem as x* = (uTx)u. The projected datum x* is still an ra-dimensional vector, so it is still impossible to visualize it, if m > 3. The point here, however, is that we are not interested in x* itself, but in the component of x along u, that is in the value of uTx, which is a scalar. If we have a batch of n points xW G ]Rm, i = 1,...,n, we can visualize these points along the direction u as points along a line: yW — wTx(l\ y(z) g 1R, i — \,...,n. Also, it is sometimes useful to offset the scalar data so as to center them in an appropriate way, e.g., by setting the average of the data to zero. Scoring. In effect, we are defining an affine function / : ]Rm -» R mapping each point x E Rm into a scalar value representing a sort of "score" of point x along the direction u: f(x) = uTx + V, where v is an offset. If we want to center the data so that their barycenter is at zero, we may impose that f(x^)-\ b f(x^) = 0, that is nv + uTJ2x(i) =0' which can be obtained by choosing the offset v = -uTx, x=-Tx(iK being x the average of the data points. The centered projection map can now be expressed as /(x) = uT (x — x). Example 2.15 (Visualizing US Senate voting data) We consider a data set representing the votes of US Senators in the period 2004-2006. This dataset is a collection of n vectors x^ E Rm, j = 1,..., n, with m = 645 being the number of bills voted, and n — 100 the number of Senators. Thus, x^ contains all the votes of Senator j, and the z-th component of x^\ Xi(j), contains the vote of Senator j on bill i. Each vote is encoded as a binary number, with the convention that Xj(/) = 1 if the vote is in favor of the bill, and xz-(;‘) — 0 if it is against. The vector x can now be interpreted as the average vote across Senators. A particular projection (that is, a direction in u E Rm, the "bill space") corresponds to assigning a "score" to each Senator, thus allowing us to represent each Senator as a single scalar value on a line. Since we centered our data, the average score across Senators is zero. As a tentative direction for projecting the data we choose the direction that corresponds to the "average bill." That is, we choose the direction u to be the parallel to the vector of ones in Rm, scaled appropriately so that its Euclidean norm is one. The scores obtained with this all-ones direction are shown in Figure 2.30. This figure shows the values of the projections of the Senators' votes x^ — x (that is, with average across Senators removed) on a normalized "average bill" direction. This projection reveals clearly the party affiliation of many senators. The interpretation is that the behavior of senators on an "average bill" almost fully determines her or his party affiliation. We do observe that the direction does not perfectly predict the party affiliation. In Chapter 5, we will see methods to determine better directions. -10 -5,0 5 10 score on average bill Figure 2.30 US Senators' scores on "average bill." Darker shade of gray indicates Republican Senators. VECTORS AND FUNCTIONS 53 2.5 Exercises Exercise 2.1 (Subspaces and dimensions) Consider the set S of points such that X\ -(-.2x2 "h 3x3 = 0, 3xi -(- 2X2 "b *3 ^ 0* Show that S is a subspace. Determine its dimension, and find a basis for it. Exercise 2.2 (Affine sets and projections) Consider the set in ]R3 defined by the equation V = jx G 1R3 : x\ + 2x2 + 3x3 — 1 j . 1. Show that the set V is an affine set of dimension 2. To this end, express it as x(°) + span(x^),x®), where x(°) G V, and are linearly independent vectors. 2. Find the minimum Euclidean distance from 0 to the set V, and a point that achieves the minimum distance. Exercise 2.3 (Angles, lines, and projections) 1. Find the projection z of the vector x = (2,1) on the line that passes through Xq = (1,2) and with direction given by vector u = (1,1). 2. Determine the angle between the following two vectors: ' 1 ' ’ 3 " X = / y = Are these vectors linearly independent? Exercise 2.4 (Inner product) Let x,y G Rn. Under which condition on oc G Rn does the function fix>y) = E “kXkVk define an inner product on Rn? Exercise 2.3 (Orthogonality) Let x, y G Rn be two unit-norm vectors, that is, such that ||x||2 — \\y\\i = 1. Show that the vectors x — y and x 4- y are orthogonal. Use this to find an orthogonal basis for the subspace spanned by x and y. Exercise 2.6 (Norm inequalities) 1. Show that the following inequalities hold for any vector x: -7=M2 < M~ < Il*||2 < ||*||l < Vn\\x\\2 < n\\x v« Hint: use the Cauchy-Schwartz inequality. 2. Show that for any nonzero vector x, where card(x) is the cardinality of the vector x, defined as the number of nonzero elements in x. Find vectors x for which the lower bound is attained. Exercise 2.7 (Holder inequality) Prove Holder's inequality (2.4). Hint: consider the normalized vectors u = x/||x||p, v — y/||y||^, and observe that Then, apply Young's inequality (see Example 8.10) to the products K»*l = \Uk\\Vk\- Exercise 2.8 (Bound on a polynomial's derivative) In this exercise, you derive a bound on the largest absolute value of the derivative of a polynomial of a given order, in terms of the size of the coefficients.16 For W G Rfc+1, we define the polynomial pw, with values 16 See the discussion on regularization *Tyl = ll*llPl|y||<r WTv\ < \\x\\p\\y\\qY^\ukVk\. Tpw{x) = Wi + w2x H h Wk+1Xk. in Section 13.2.3 for an application of this result. Show that, for any p > 1 Vx e [-1,1] : < C(k,p)\\v\\p, where v = (zv2, •.., wjt+1) £ R*, and C(*,p) - Hint: you may use Holder's inequality (2.4) or the results from Exercise 2.6. 3 Matrices A matrix is a collection of numbers, arranged in columns and rows in a tabular format. Suitably defining operations such as sum, product, and norms on matrices, we can treat matrices as elements of a vector space. A key perspective in this chapter is the interpretation of a matrix as defining a linear map between an input and an output space. This leads to the introduction of concepts such as range, rank, nullspace, eigenvalues, and eigenvectors, that permit a complete analysis of (finite dimensional) linear maps. Matrices are a ubiquitous tool in engineering for organizing and manipulating data. They constitute the fundamental building block of numerical computation methods. 3.1 Matrix basics 3.1.1 Matrices as arrays of numbers Matrices are rectangular arrays of numbers. We shall mainly deal with matrices whose elements are real (or sometimes complex) numbers, that is with arrays of the form a n 012 • • • a\n 021 «22 ’ * ‘ «2n «ml «m2 * ’ ' «mn This matrix has m rows and n columns. In the case of real elements, we say that A E Rm,n, or A E Cm'n in the case of complex elements. The i-th row of A is the (row) vector [an • • • the ;-th column of A is the (column) vector [ay . The transposition operation1 1 In case of a complex A, we denote by A* the Hermitian-conjugate of a complex matrix, obtained by taking the transpose of the matrix, with the conjugate values of the elements of A. works on matrices by exchanging rows and columns, that is [A ]ij = [A]ji, where the notation [A\ij (or sometimes simply Azy) refers to the element of A positioned in row i and column j. The zero matrix in ]Rm,n is denoted by 0mttl, or simply by 0, when dimensions are obvious from the context. The operations of multiplication by a scalar and of sum (of matrices of the same size) are defined in the obvious way (i.e., for multiplying by a scalar one multiplies every entry in the matrix by the scalar, and for summing two matrices with the same shape one sums corresponding elements in the same position). With these operations defined, we can see ]Rm,n as a vector space.2 Example 3.1 (Images) A gray-scale image is represented as a matrix of numerical values, where each entry in the matrix contains the value of intensity of the corresponding pixel in the image (a "double" type value in [0,1], where 0 is for black and 1 for white; or an "integer" type value, between 0 and 255). Figure 3.1 shows a gray-scale image with 400 horizontal pixels and 400 vertical pixels. 2 The terminology can be confusing at times: an element of a vector space need not be only a "vector" intended as a column of elements. Matrices indeed constitute elements of a vector space. We have also already seen that, for example, polynomials of degree at most n are elements of a vector space too. Figure 3.1 400x400 pixels gray-scale image (left) and intensity values for a rectangular detail in the upper-left position of the image (right). 3.1.2 Matrix products Two matrices can be multiplied if conformably sized, i.e., if A G ]Rm,n and B G ]Rn,P, then the matrix product AB G ]Rm,P is defined as a matrix whose (z,;)-th entry is [AB]ij = £ AikBkj. MATRICES 57 The matrix product is non-commutative, meaning that, in general, AB / BA. For example The nxn identity matrix (often denoted by In, or simply I, depending on context), is a matrix with all zero elements, except for the elements on the diagonal (that is, the elements with row index equal to the column index), which are equal to one. This matrix satisfies AIn — A for every matrix A with n columns, and I„B — B for every matrix B with n rows. A matrix A G Rm,n can also be seen as a collection of columns, each column being a vector, or as a collection of rows, each row being a (transposed) vector. We shall write correspondingly d\ #2 or A = where a\,...,an € lRm denote the columns of A, and ocj,..., ocj e lRn denote the rows of A. If the columns of B are given by the vectors ty € lRn, i = 1,..., p, so that B = \b\ • • • bp], then AB can be written as = Abi Jv ~ 11CAI • * * v In other words, AB results from transforming each column b{ of B into Abj. The matrix-matrix product can also be interpreted as an operation on the rows of A. Indeed, if A is given by its rows ocj, i = 1,..., m, then AB is the matrix obtained by transforming each one of these rows into ocjB, i — 1,..., m: aj B AB = B = Finally, the product AB can be given the interpretation as the sum of so-called dyadic matrices (matrices of rank one, see Section 3.4.7) of the form UifSj, where denote the rows of B: ab = YL aiPJ' A e Rm,n'B e Rn,P' 3.1.2.1 Matrix-vector product. Rule (3.1) for matrix multiplication also works when A e lRm,n is a matrix and b € lRn is a vector. In this case, the matrix-vector multiplication rule becomes (using the column representation of A) Ab=f ] akbk, A € Rm,n, beUn. That is, Ab is a vector in lRm obtained by forming a linear combination of the columns of A, using the elements in b as coefficients. Similarly, we can multiply matrix A e lRm,n on the left by (the transpose of) vector c € lRm as follows: cTA = £ ckaj, A e Rm'n,c € Rm. That is, cTA is a vector in R1,m obtained by forming a linear combination of the rows ocfc of A, using the elements in c as coefficients. Example 3.2 (Incidence matrix and network flows) A network can be represented as a graph of m nodes connected by n directed arcs. Here, we assume that arcs are ordered pairs of nodes, with at most one arc joining any two nodes; we also assume that there are no self-loops (arcs from a node to itself). We can fully describe such a kind of network via the so-called (directed) arc-node incidence matrix, which is an m x n matrix defined as follows: il if arc j starts at node i — 1 if arc j ends at node i , 1 < i < m, 1 < j < n. (3.2) 0 otherwise. Figure 3.2 shows an example of a network with m — 6 nodes and n = 8 arcs. The (directed) arc-node incidence matrix for this network is We describe a flow (of goods, traffic, charge, information, etc.) across the network as a vector x £ R”, where the j-th component of x denotes the amount flowing through arc j. By convention, we use positive values when the flow is in the direction of the arc, and negative ones in the opposite case. The total flow leaving a given node i is then F! AijXj — [Ax]z-, MATRICES 59 where [Ax]j denotes the z-th component of vector Ax. Next, we define the external supply as a vector b G lRm, with negative b{ representing an external demand at node i, and positive bj a supply. We assume that the total supply equals the total demand, which means that 1 Tb — 0. The flows x must satisfy the flow-balance equations, which represent a "conservation of mass"-type constraint at each node (the total in-flow at node i, plus supply/demand, must equal the total out-flow from node i). These constraints are represented by the vector equality Ax— b. 3.1.2.2 Product and transposition. For any two conformably sized matrices A, Bf it holds that (4B)T = BTAT, hence, for a generic chain of products A\A2 • • • Av it holds that 3.1.3 Block matrix products Matrix algebra generalizes to blocks, provided block sizes are consistent. To illustrate this, consider the matrix-vector product between a m x n matrix A and a n-vector x, where A, x are partitioned in blocks, as follows: A = A\ A2 / x = where A/ is m x n\, Xf G Rn/, i = 1,2, 7^ + n2 = w. Then Ax = A\X\ + A2X2. Symbolically, it works as if we formed the inner product between the "row vector" [A\, A2] and the column vector . Likewise, if an n x p matrix B is partitioned into two blocks B/, each of size nz*, i = 1,2, with n\ + n2 = n, then AB = A1 A2 A1B1 -I- A2B2. Again, symbolically, we apply the same rules as for the scalar product, except that now the result is a matrix. Finally, we discuss the so-called outer products. Consider the case for example when A is a m x n matrix partitioned row-wise into two blocks A\, A2, and B is a n x p matrix that is partitioned column-wise into two blocks B\, B2: A = B - B\ B2 Then, the product C = AB can be expressed in terms of the blocks, as follows: C = AB AiB\ A\B2 A2B1 A2B2 In the special case when A is a column vector and B is a row vector, that is, we have A = , B h b2 3.1.4 Matrix space and inner product The vector space Rm,n can be endowed with a standard inner product: for A, B £ Rm,n, we define (A,B) = trace AT B, where trace(X) is the trace of (square) matrix X, defined as the sum of the diagonal elements of X. This inner product induces the so-called Frobenius norm \](A,A) = y trace AAT = ||A||F = Our choice is consistent with the one made for vectors. In fact, the inner product above represents the scalar product between two vectors obtained from the matrices A, B, by stacking all the columns on top of each other; thus, the Frobenius norm is the Euclidean norm of the vectorized form of the matrix. The trace operator is a linear operator, and it has several important properties. In particular, the trace of a square matrix is equal to that of its transpose and, for any two matrices A £ Rm,n, B £ Rn'm, it holds that trace AB = trace BA. MATRICES 6l 3.2 Matrices as linear maps 3.2.1 Matrices, linear and affine maps We can interpret matrices as linear maps (vector-valued functions), or "operators," acting from an "input" space to an "output" space. We recall that a map / : X -A y is linear if any points x and z in X and any scalars A, y satisfy /(Ax + yz) — Af(x) + yf(z). Any linear map / : ]Rn —)> Rm can be represented by a matrix A E Rm'n, mapping input vectors i G Rn to output vectors y E Rm (see Figure 3.3): y = Ax. Affine maps are simply linear functions plus a constant term, thus any affine map / : Rn -a Rm can be represented as f(x) = Ax + b, for some A E RW/W, b E Rm. Example 3.3 A linear map that scales each component X/ of vector x by some scalar factor oi[, i = 1,..., n, is described by a diagonal matrix ai 0 0 0:2 0 For such a diagonal A, we thus have y — Ax <^> yz = azxz/ i = 1,..., n. 3.2.2 Approximation of nonlinear functions A nonlinear map / : Rn -a Rm can be approximated by an affine map, in the neighborhood of a given point xo (at which / is differentiable), as f(x) = f(xo) + Jf(x0)(x - x0) + o(||x - -roll), where o( \\x — Xo ||) are terms that go to zero faster than first order for x -A xq, and where Jy(xo) is the Jacobian of / at xq, defined as it .. 8 fm _ 8*1 Figure 3.3 Linear map defined by a matrix A. Thus, for x "near" x$, the variation Sf(x) = f(x) — f(xo) can be approximately described, to first order, by a linear map defined by the Jacobian matrix, i.e., Sf (x) ~ Jf(x0)Sx, Sx = x- xq. Also, a scalar-valued function3 / : W1 —>> R, twice differentiable at Xo, can be approximated locally to second-order around a given point Xq, using both the gradient and the second-derivatives matrix (Hessian): / - f{xo) + V/(x0)T(x - *o) + \(x - x0)TV2/(xo)(x - x0), where V2/(xq) is the Hessian matrix at xq, defined as V2/(* o) = In this case, / is approximated locally via a quadratic function defined via the Hessian matrix V2/(xq). 3.2.3 Range, rank, and nullspace Range and rank. Consider an m x n matrix A, and denote by aj, i = 1,..., n, its z-th column, so that A — [a\ ... an\. The set of vectors y obtained as a linear combination of the 0;s are of the form y = Ax for some vector x £ R". This set is commonly known as the range of A, and is denoted 1Z{A): 1Z(A) = {Ax : x£Rn}. By construction, the range is a subspace. The dimension of 11(A) is called the rank of A and denoted by rank(A); by definition the rank represents the number of linearly independent columns of A. It can be shown4 that the rank is also equal to the number of linearly independent rows of A; that is, the rank of A is the same as that of its transpose AT. As a consequence, we always have the bounds 0 < rank(A) < min(m,n). Nullspace. The nullspace of the matrix A is the set of vectors in the input space that are mapped to zero, and is denoted by J\f(A): AT (A) = {x £ Rn : Ax - 0} . This set is again a subspace. #L ... a2/ x\ dxidxn a2/ ... 3 For a scalar-valued function the Jacobian coincides with the transpose of the gradient vector. 4 This result, as well as some other results that are reported in this chapter without proof, can be found in any classical textbook on linear algebra and matrix analysis, such as, for instance: G. Strang, Introduction to Linear Algebra, Wellesley-Cambridge Press, 2009; R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2001. MATRICES 63 3.2.4 The fundamental theorem of linear algebra The so-called fundamental theorem of linear algebra is a result that establishes a key connection between the nullspace of a matrix and the range of its transpose. We start by observing that any vector in the range of AT is orthogonal to any vector in the nullspace of A, that is, for any x E 7£(AT), and any z E J\T{A), it holds that xTz = 0. This fact can be easily proved by observing that every x E TZ(AT) is, by definition, a linear combination of the rows of A, that is, it can be written as x — AJy for some y E Rm. Hence, xTz = (ATy)Tz = yTAz = 0, Vz E J\f(A). Thus, 7£(At) and Af{A) are mutually orthogonal subspaces, i.e., J\f(A) J_ TZ(AT), or, equivalently, AT (A) = TZ^A1)1-. We recall from Section 2.2.3.4 that the direct sum of a subspace and its orthogonal complement equals the whole space, thus, R" = Af(A)®Ar(A)j~ = X(A)®n(AT). With a similar reasoning, we also argue that 'Jl(A)x = {y € Rm : yTz = 0, Vz € JZ(A)} = {yeRm: y1 Ax = 0,'ixe№1} = N(AT), hence we see that n{A)±X(Ar), and, therefore, the output space Rm is decomposed as Rm = n{A)®n(A)L = n{A)®M(AJ). The previous findings are summarized in the following theorem. Theorem 3.1 (Fundamental theorem of linear algebra) For any given matrix A E Rm,n, it holds that Af(A)J_7Z(AT) and TZ{A)L Af(AT), hence N{A)®K{AT) = R", K(A)®Af(AT) = Rm, dim AT (A) + rank(A) — n, (3.3) dim Af(AT) + rank(A) = m. (3.4) Consequently, we can decompose any vector x E Rn as the sum of two vectors orthogonal to each other, one in the range of AT, and the other in the nullspace of A: x — AT£ + z, z E AT {A). Similarly, we can decompose any vector w E Rm as the sum of two vectors orthogonal to each other, one in the range of A, and the other in the nullspace of AT: w — Acp-\-£, ^ E J\f(AT). A geometrical intuition for the theorem is provided in Figure 3.4. 3.3 Determinants, eigenvalues, and eigenvectors As we have seen, any matrix A represents a linear map. If the matrix is square, i.e. A E Kn'n, then it represents a linear map from lRn into itself. In this section, we briefly discuss how certain simple geometric shapes, such as lines and cubes in lRn, are mapped by the transformation y = Ax, and use these geometric interpretations to introduce the concepts of determinants, eigenvalues, and eigenvectors of a square matrix. 3.3.1 Action of a matrix along lines We start by asking how does a linear map A act on lines through the origin (one-dimensional subspaces). Consider a nonzero vector u E lRn and the line passing from the origin through u, that is the set Lu = {x = ocu, oc E 1R}. When A is applied to a vector xeR” belonging to Lu, it transforms it into an output vector y E ]R”: y = Ax = ocAu. We shall next show that the effect of A on any point x E Lu is to rotate the point by a fixed angle 6U, and then to shrink/amplify the Figure 3.4 Illustration of the fundamental theorem of linear algebra in IR3. Here, A — [a\ a?\. Any vector in R3 can be written as the sum of two orthogonal vectors, one in the range of A, the other in the nullspace of AT. MATRICES 65 length of x by a fixed amount yu. Note that the angle of rotation 6U and the length gain ju are constant and fixed for all points along the line Lw. To see this fact, consider the original length of x, as measured by the Euclidean norm: ||x||2 = \oc\\\u\\2, then II3/ II2 = \\Ax\\2 = MI|Au||2 = nuf l*lll“l|2 = lu\\x\\2, where we set 7u = ||Am||2/||m||2 to represent the gain in the direction u. Similarly, for the angle between x and y, we have _ yTx _ xTATx _ uTATu u ~ ¥Myh ~ ¥hhh ~ T^Nif' which again depends only on the line direction u, and not on the actual point along the line. The next example helps in visualizing the concept via a simple numerical experiment. Example 3.4 ( Action of a square matrix on lines) Consider the following 2x2 matrix r 1.2 0.4 0.6 1 A = Figure 3.5 shows how points x along an input direction u are mapped to points y = Ax along an output direction which forms an angle 6U with u. Also, as ||x||2 is kept constant and the direction u sweeps all possible directions, x moves along a circle, and the picture shows the corresponding locus for y (which, incidentally, turns out to be an ellipse). Three loci are displayed in Figure 3.3, corresponding to ||x||2 = 1,1.3,2 respectively. Figure 3.3 Graphical illustration in R2 of how points along lines and circles are mapped by the linear transformation y = Ax. Interestingly, one may discover by numerical experiments that there are in this example two input directions that are angle invariant under the map defined by A. By angle-invariant direction we here mean a direction such that when the input point x is along this direction, then the output point y is also along the same direction. In other words, the angle 6U is zero (or ±180°) for these special input directions; see Figure 3.6. In the current example, these invariant directions are described by the vectors uW = A) = The action of A on points x lying along an invariant direction u is indeed very simple: A acts as a scalar multiplication along these lines, that is Ax = Ax, for some scalar A. Figure 3.6 Input directions that are angle-invariant under A. 3.3.2 Determinants and the transformation of the unit cube Consider a 2 x 2 matrix an 012 021 022 , 1/2) = Ax= , y(4)=A*(4) = an + a12 «21 + «22 The determinant of this matrix is a real number, defined as follows: det A — #h#22 — #21012- Suppose we apply the linear map y = Ax to the four vectors defining the vertices of the unit square in R2, that is to the points x^1) = [0 0]T, *(2) = [10]T, jcP) = [01]T, = [11]T. The transformed points y(D = AxW = y(3) = AxP) = form the vertices of a parallelogram, see Figure 3.7. The area of the unit square is one. It can be verified by elementary geometry that the area of the transformed square (i.e., of the parallelogram) is instead equal to area = | det A |. The (absolute value of the) determinant of a 2 x 2 matrix thus gives the factor by which the area (two-dimensional measure) of the input unit square is increased or decreased when passing through A. In dimension larger than two, the determinant can be uniquely defined as the only real-valued function of an n x n matrix such that: (an + «12, «21 + «22) (1) switching two rows or two columns of the matrix changes the sign of the function; (2) the function is linear in each row (or column) of the matrix; (3) the function is equal to one for the identity matrix. The determinant of a generic matrix A E Rn,n can be computed by defining det a = a for a scalar 0, and then applying the following inductive formula (Laplace's determinant expansion): det(A) = dj, det A(j j), where i is any row, chosen at will (e.g., one may choose i — 1), and denotes an (n — 1) x (n — 1) submatrix of A obtained by eliminating row i and column j from A. It can be proved that, in generic dimension n, the absolute value of the determinant of A still describes the volume (n-dimensional measure) of the parallelotope obtained by transforming the unit hypercube through A. An interesting situation arises when the volume of the transformed cube is zero, that is, when det A = 0.5 In the 2x2 example, this happens whenever 011022 ~ a2iai2* i-ev when one of the rows (or one of the columns) is a multiple of the other. In such a case, the columns (and the rows) are not linearly independent, and the matrix has a non-trivial null space. This means that there exist directions in the input space along which all input vectors are mapped to zero by A. The same concept extends to generic dimension n, whence it can be proved that matrices 67 Figure 3.7 Linear mapping of the unit square. 5 A square matrix A for which det A = 0 is said to be singular. A E Rn,n is singular det A = 0 J\f(A) is not equal to {0}. We finally recall that the following identities hold for any square matrices A, B G Rn,n and scalar oc: det A = det AT det AB = det BA = det A det B det a: A — ocn det A. Moreover, for a matrix with upper block-triangular structure X — it holds that Xu X12 0 x22 Xn G RWl,Wl,X22 € R”2'”2, det X — det Xu det X22, an analogous result holds for lower block-triangular matrices. 3.3.3 Matrix inverses If A G ~Rn,n is nonsingular (i.e., det A ^ 0), then we define the inverse matrix A-1 as the unique n x n matrix such that AA-1 = A-1 A = I„. If A, B are square and nonsingular, then it holds for the inverse of the product that (AB)-1 = B~1A~1. Also, if A is square and nonsingular, then (AT)-1 = (A-1)T, that is, the order of transposition and inversion is exchangeable. It holds for the determinant of a square and nonsingular matrix A that det A = det AT = -— det A"1 Non-square, or square-but-singular, matrices do not possess a regular inverse. However, for a generic matrix A G Rm,n, a generalized inverse (or, pseudoinverse) can be defined. In particular, if m > n, then Ah is said to be a left inverse of A, if AliA = In. Similarly, if n > m, then An is said to be a right inverse of A, if AAri = Jm. In general, matrix AP1 is a pseudoinverse of A, if A A?1 A = A. Left/right inverses and pseudoinverses are further discussed in Section 3.2.3. 3.3.4 Similar matrices Two matrices A,B e W1'71 are said to be similar if there exists a nonsingular matrix P E Rn,n such that B = P_1AP. Similar matrices are related to different representations of the same linear map, under a change of basis in the underlying space. Consider the linear map y — Ax mapping Rn into itself. Since P E Rn,n is nonsingular, its columns are linearly independent, hence they represent a basis for R". Vectors x and y can thus be represented in this basis as linear combinations of the columns of P, that is there exist vectors x, y such that X = Px, y = Py. Writing the relation y = Ax, substituting the representations of x, y in the new basis, we obtain Py = APx => y — P~1APx = Bx, that is, matrix B = P-1AP represents the linear map y = Ax, in the new basis defined by the columns of P. 3.3.5 Eigenvectors and eigenvalues Eigenvalues and characteristic polynomial We are now ready to give a formal definition for eigenvectors and eigenvalues. We use the same concepts introduced when studying the action of A along lines, only we now allow for a slightly more general perspective by viewing A as a linear map from C” (the space of complex vectors with n components) into itself. Eigenvectors are simply directions in Cn that are angle-invariant under A. More precisely, we say that A E C is an eigenvalue of matrix A E 'Rn'n, and u E Cn is a corresponding eigenvector, if it holds that Au — Aw, u /0, or, equivalently, (A In — A)u = 0, u/0. This latter equation shows that in order for {A, u) to be an eigenvalue/eigenvector pair it must happen that: (a) A makes matrix A In — A singular (so that it possesses a non-trivial nullspace), and (b) w lies in the nullspace of AIn — A. Since Xln — A is singular if and only if its determinant is zero, eigenvalues can be easily characterized as those real or complex numbers that satisfy the equation6 det(AIn - A) = 0. In particular, p(A) = det(\In — A) is a polynomial of degree n in A, known as the characteristic polynomial of A. Multiplicities and eigenspaces. The eigenvalues of A £ Rn,n are thus the roots of the characteristic polynomial. Some of these eigenvalues can indeed be "repeated" roots of the characteristic polynomial, hence their multiplicity can be larger than one. Also, some eigenvalues can be complex, with nonzero imaginary part, in which case they appear in complex conjugate pairs.7 The following theorem holds. Theorem 3.2 (Fundamental theorem of algebra) Any matrix A £ Rn,n has n eigenvalues A;, i = 1,..., n, counting multiplicities. We call distinct eigenvalues the eigenvalues of A not counting multiplicities; i.e., we put in the set of distinct eigenvalues only one representative per each group of repeated eigenvalues with identical value. Each distinct eigenvalue A;, i = 1,..., k, has an associated algebraic multiplicity p\ > 1, defined as the number of times the eigenvalue is repeated as a root of the characteristic polynomial. We thus have 5^=1 = n. To each distinct eigenvalue A/, i — 1,..., k, there corresponds a whole subspace (pi = — A) of eigenvectors associated with this eigenvalue, called the eigenspace. Eigenvectors belonging to different eigenspaces are linearly independent, as formalized next. Theorem 3.3 Let A/, i — 1,..., k < n be the distinct eigenvalues of A £ RW/”. Let (pi = J\f(\iln — A), and let wW be any nonzero vectors such that u^ £ (pi, i = 1,... ,k. Then, the u^s are linearly independent. Proof Suppose initially that u^ £ (pj for j 7^ i. This would mean that Au^ — AywW = Afi/W, hence Ay = A/, which is impossible since the As are distinct. We then conclude that j f=- i implies U( £ (pj. Suppose now, again for the purpose of contradiction, that there exist an mW (say without loss of generality, the first one, i/1)) which is a linear combination of the other eigenvectors: 6 Note incidentally that A and AT have the same eigenvalues, since the determinant of a matrix and that of its transpose coincide. 7 This is only true for matrices with real elements, i.e., for A E . MATRICES 71 Then we have the two identities: Ai — Y^oci\iu^\ k k A^1)' = Au^ — Y^ociAu^ = i=2 i—2 and, subtracting these two equations, we obtain Ai)ww =0, where A* — Ai 7^ 0, since the eigenvalues are distinct by hypothesis. This would mean that u^2\..., u® are linearly dependent, hence at least one of these vectors, say without loss of generality u^2\ can be written as a linear combination of the other vectors u^\..., u(k\ At this point, by repeating the initial reasoning, we would conclude that also u^3\... ,u^ are linearly dependent. Proceeding in this way, we would eventually arrive at the conclusion that u are lin¬ early dependent, which would mean in particular that u (fc_1) € <pk. However, this is impossible, by virtue of the initial statement in our proof. We thus conclude that (3.5) was contradicted, so the proposition stands proved. □ Block-triangular decomposition. Thanks to eigenvalues and eigenvectors, a square matrix can be shown to be similar to a block-triangular matrix, that is, a matrix of the form8 A2 2 A2 p where the matrices An, i = 1,..., p are square. Let vi be the dimension of and let UW = [u^ • • • Uy)] be a matrix containing by columns a basis of ty. Note that, without loss of generality, this matrix can be chosen to have orthonormal columns. Indeed, take any basis of (pi and apply the Gram-Schmidt procedure (see Section 2.3.3) to this basis to obtain an orthonormal basis spanning the same subspace. With this choice, = IVr Let further QW be a n x (n — 1//) matrix with orthonormal columns spanning the subspace orthogonal to 1Z(UThe following corollary holds. Corollary 3.1 Any matrix A e R”'n is similar to a block-triangular matrix having a block A iIVj on the diagonal, where A ; is a distinct eigenvalue of A, and Vi is the dimension of the associated eigenspace. 8 See Section 3.4.8 for properties of block-triangular matrices. Proof The compound matrix P; = [U^ Q1'11] is an orthogonal matrix (the columns of P, form an orthonormal basis spanning the whole space C", see Section 3.4.6), hence it is invertible, and P, 1 = pT, see Section 3.4.6. Then, since AU^ = A;lV11, we have that L/(')T AL/(') = \jU®TuM = AiIVi, and QMTAU® = AiQ('}TU<,> = 0. Therefore P_ 1AP; = P(TAP, = A/I,, UWTAQ(i) 0 q(!')tah(') which proves the claim. □ Since similar matrices have the same set of eigenvalues (counting multiplicities),9 and since the set of eigenvalues of a block-triangular matrix is the union of the eigenvalues of the diagonal blocks, we can also conclude from Eq. (3.6) that it must always be vi < m (if Vi > m, then the block-triangular form in (3.6) would imply that A has at least Vi identical eigenvalues at Az, so the algebraic multiplicity of A; would be ]i\ — V\, which is a contradiction). 3.3.6 Diagonalizable matrices A direct consequence of Theorem 3.3 is that, under certain hypotheses, A £ M.n'n is similar to a diagonal matrix, i.e., in current terminology, A is diagonalizable.10 This is stated in the following theorem. Theorem 3.4 Let Az, i = 1,... ,k < n, be the distinct eigenvalues of A £ let \i{, i — 1,... ,k, denote the corresponding algebraic multiplicities, and let (pi = AT(\iIn ~ ^0* Let further — [u^ • • • u^)] be a matrix containing by columns a basis of (pi, being vz = dim (pi. Then, it holds that Vi < pi and, if Vi = pi, i = 1,..., k, then u = [u^ ... u«] is invertible, and where Ail^i 0 0 ^2^2 ’ ’ ’ 0 A kI, n J 9 This fact is readily proved by constructing the characteristic polynomial of a matrix B = P~1AP, since det(AJ - B) = det(A/ - P^AP) = det(P_1(A/ — A)P) = det(P_1) det(Al - A) det(P) = det(AI-A). 10 Not all matrices are diagonalizable. For example, A = \ \ not diagonalizable. However, it can be proved that for any given square matrix, there always exist an arbitrarily small additive perturbation that makes it diagonalizable; i.e., diagonalizable matrices form a dense subset of Rn'n. Proof The fact that vz < pi has already been proved below Eq. (3.6). Let then i/z = pi. Vectors u^\..., uty are linearly independent, since MATRICES 73 they are, by definition, a basis of Moreover, from Theorem 3.3, are linearly independent, for any ji G {1,.. .,!/*•}, f = 1. .,fc. This implies that the whole collection {uj^}z=1,...,^=1 is linearly independent. Now, since vz = for all i, then Y$=i vi = Ej=i ^ hence matrix U is full rank, and thus invertible. For each i = 1,..., k, we have that Au® = A{U®, j — 1,..., and this can be rewritten in compact matrix form as AU® = AfU®, i = l,...,Jfc, which is also rewritten as AU = UA, whereby the statement (3.7) follows by multiplying both sides by U_1 on the right. □ Example 3.5 (Eigenvectors and the Google PageRank) The effectiveness of Google's search engine largely relies on its PageRank (so named after Google's founder Larry Page) algorithm, which quantitatively ranks the importance of each page on the web, allowing Google to thereby present to the user the more important (and typically most relevant and helpful) pages first. If the web of interest is composed of n pages, each labelled with integer k, k — 1,..., n, we can model this web as a directed graph, where pages are the nodes of the graph, and a directed edge exists pointing from node k-[ to node k2 if the web page k\ contains a link to /C2- Let x^,k — 1,..., n, denote the importance score of page k. A simple initial idea would be to assign the score to any page k according to the number of other web pages that link to the considered page (backlinks). In the web depicted in Figure 3.8, for example, the scores would be X\ = 2, X2 = 1, *3 = 3, X4 = 2, so that page k — 3 appears to be the most relevant page, whereas page k = 2 is the least important. In this approach, a page's score can be interpreted as the number of "votes" that a page has received from other pages, where each incoming link is a vote. However, the web is not that democratic, since the relevance of a page typically depends on the relevance, or "authority" of the pages pointing to it. In other words, your page relevance should be higher if your page is pointed directly by Yahoo.com rather then, say, by Nobody.com. Votes should therefore be weighted, rather than merely counted, and the weight should be related to the score of the pointing page itself. The actual scoring count goes then as follows: each page j has a score Xj and nj outgoing links; as an assumption, we do not allow links from a page to itself, and we do not allow for dangling pages, that is pages with no outgoing links, therefore rij > 0 for all j. The score xj represents the total voting power of node ;, which is to be evenly subdivided among the Uj outgoing links; each outgoing link thus carries Xj/nj units of vote. Let Figure 3.8 A small web. Bfr denote the set of labels of the pages that point to page k, i.e., Bis the set of backlinks for page k. Then, the score of page k is computed as j€Bk Ui k= 1 Note that this approach is less direct than the initial pure counting one, since now scores are defined in an apparently circular way, i.e., page k's score is defined as a function of other pages' scores, which in turn depend on the score of page k, etc. We now apply this approach to the web in Figure 3.8. We have n\ — 3, ri2 = 2, n$ = 1, Yi\ = 2, hence *1 = *3 + 2*4, *2 = *3 = 3*1 + 2X2+ 2*4, *4 = 3*, + ^2. ‘ 0 0 1 £ ‘ 1 1 n 1 3 2 U 2 r X — . 3 2 0 0 J - *4 - We can write this system of equations in compact form exploiting the matrix-vector product rule, as follows: x = Ax, A = Computing the web pages' scores thus amounts to finding x such that Ax = x: this is an eigenvalue/eigenvector problem and, in particular, x is an eigenvector of A associated with the eigenvalue A = 1. We will refer to A as the "link matrix" for the given web. It can actually be proved that A = 1 is indeed an eigenvalue of A, for any link matrix A (provided the web has no dangling pages), due to the fact that A is, by construction, a so-called column stochastic matrix,11 that is a matrix with non-negative entries for which the sum of the elements over each column is one. In this example, the eigenspace (pi — J\T(In — A) associated with the eigenvalue A = 1 has dimension one, and it is given by ’ 12 “ (pi = AT (In ~ A) = span hence the solution to Ax = x is any vector x in (p\. Usually, the solution x is chosen so that the entries are normalized by summing up to one. In this case, " 12 " " 0.3871 " 11 See Exercise 3.11 MATRICES 75 Page 1 thus appears to be the most relevant, according to the PageRank scoring. Note that the method discussed so far can lead to ambiguities in certain cases where the eigenspace (pi has dimension larger than one. In such cases, in fact, there are multiple eigenvectors corresponding to the eigenvalue A = 1, therefore the ranking of pages is not uniquely defined. To overcome this difficulty, a modification of the basic approach is used in Google. Specifically, one considers, instead of A, the modified matrix A = (1 — p)A + pE, where p £ [0,1] and E is an n x n matrix whose entries are all equal to 1 /n. A typical choice for p is p = 0.15. The modified link matrix A has an eigenvalue at A = 1, and one can prove that the corresponding eigenspace has always dimension one, hence the page ranking vector is unique, up to scaling. The challenge in the real-world application resides, of course, in the huge size of the eigenvector problem that one is faced with. According to Google, the PageRank problem tallies up to about two billion variables, and it is solved about once a week for the whole World-Wide Web. 3.4 Matrices with special structure and properties We here briefly review some important classes of matrices with special structure and properties. 3.4.1 Square matrices A matrix A £ ~Km,n is said to be square if it has as many columns as rows, that is if m — n. 3.4.2 Sparse matrices Informally, a matrix A £ 'Rm,n is said to be sparse if most of its elements are zero. Several improvements in computational efficiency can be obtained when dealing with sparse matrices. For instance, a sparse matrix can be stored in memory by storing only its nonzero elements. Also, operations such as addition and multiplication can be performed efficiently by dealing only with the nonzero elements of the matrices.12 3.4.3 Symmetric matrices Symmetric matrices are square matrices that satisfy ap — ap for every pair i,j = 1,... ,n. More compactly, A is symmetric if A = AT. A symmetric n x n matrix is defined by the entries on and above the 12 See Exercise 7.1. main diagonal, the entries below the diagonal being a symmetric copy of those above the diagonal. The number of 'Tree" entries of a symmetric matrix is therefore ft + (ft - 1) H bl = ft(ft +1) Symmetric matrices play an important role in optimization and are further discussed in Chapter 4. Figure 3.9 shows an "image" of a sparse symmetric matrix, where nonzero elements are represented by gray levels, and zero elements in white. 3.4.4 Diagonal matrices Diagonal matrices are square matrices with — 0 whenever i 7^ j. A diagonal n x n matrix can be denoted by A = diag (a), where a is an ft-dimensional vector containing the diagonal elements of A. We usually write A = diag (a\,... ,an) = a 1 where by convention the zero elements outside the diagonal are not written. It can be readily verified that the eigenvalues of a diagonal matrix are simply the elements on the diagonal. Further, detA — • * * an, hence a diagonal matrix is nonsingular if and only if a\ 7^ 0, i = 1,..., ft. The inverse of a nonsingular diagonal matrix is simply A = 3.4.5 Triangular matrices Triangular matrices are square matrices in which all elements either above or below the diagonal are zero. In particular, an upper- triangular matrix A is such that = 0 whenever i > j: A = a 11 upper-triangular matrix, MATRICES 77 and a lower-triangular matrix is such that au — 0 whenever i < j: A = lower-triangular matrix. Similarly to diagonal matrices, the eigenvalues of a triangular matrix are the elements on the diagonal, and det A = «11^22 * • * ^nn- The product of two upper (resp. lower) triangular matrices is still upper (resp. lower) triangular. The inverse of a nonsingular upper (resp. lower) triangular matrices is still upper (resp. lower) triangular. 3.4.6 Orthogonal matrices Orthogonal matrices are square matrices, such that the columns form an orthonormal basis of R". If U = \u\ • • • un\ is an orthogonal matrix, then t / 1 if i — if ui ui = S „ 3 I 0 otherwise. Thus, UTU — UUT = In. Orthogonal matrices preserve length and angles. Indeed, for every vector x, \\Ux\\\ = (Ux)T(Ux) = xTUTUx = xTx = || x||2- Thus, the underlying linear map x -* Ux preserves the length (measured in Euclidean norm). In addition, angles are also preserved by orthogonal maps: if x, y are two vectors with unit norm, then the angle 6 between them satisfies cos 6 = xTy, while the angle 0' between the rotated vectors x' = Ux, y' = Uy, satisfies cos 6' = (x')Ty'. Since (Ux)T(Uy) = xTUTUy = xTy, we obtain that the angles are the same. The converse statement is also true: any square matrix that preserves lengths and angles is orthogonal. Further, pre- and postmultiplication of a matrix by an orthogonal matrix does not change the Frobenius norm (nor the ^2"induced norm formally defined later in Section 3.6.3), that is \\UAV||F = || A||f, for U, V orthogonal. Figure 3.10 shows an "image" of an orthogonal matrix. Example 3.6 The matrix Figure 3.10 Image of an orthogonal matrix. Orthogonality is not apparent in the image. is orthogonal. The vector x = [2 1]T is transformed by the orthogonal matrix above into Thus, U corresponds to a rotation of angle 45° counter-clockwise. More generally, the map defined by the orthogonal matrix u(e) = represents a counter-clockwise rotation of angle 6. cos 6 — sin 6 sin 6 cos 6 3.4.7 Dyads A matrix A £ Rm,n is a dyad if it is of the form A — uvT, for some vectors u £ Rm and v £ Rn. If u, v have the same dimension, then the dyad A — uvT is square. A dyad acts on an input vector x £ Rn as follows: Ax — (uvT)x = (vTx)u. The elements Ajj are of the form UjVj; thus each row (resp. column) is a scaled version of the others, with "scalings" given by vector u (resp. v). In terms of the associated linear map x —)> Ax, for a dyad A =yvT, the output always points in the same direction u, no matter what the input x is. The output is thus always a simple scaled version of u. The amount of scaling depends on the vector v, via the linear function x —»> vT x. If u and v are nonzero, the dyad uvT has rank one, since its range is the line generated by u. A square dyad (m = n) has only one nonzero eigenvalue in A = vTu, with corresponding eigenvector u. We can always normalize the dyad, by assuming that both u, v are of unit (Euclidean) norm, and using a factor to capture their scale. That is, any dyad can be written in normalized form: A = uv^ — (||u||2 • IMI2) TjV- = (ruvTr llMll2 IPII2 where cr > 0, and ||m||2 = \\v\\2 — 1- Figure 3.11 shows an "image" of a dyad. Example 3.7 (Single factor model of financial price data) Consider an m x T data matrix A which contains the log-returns (see Example 2.6) of m assets over T time periods (say, days). A single-factor model for these data is one based on the assumption that the matrix is a dyad: A — uvT, where v £ RT, and u £ Rm. According to the single-factor model, the Figure 3.11 Image of a dyad. Perhaps the reader can feel that rows and columns are scaled versions of each other. entire market behaves as follows. At any time t, 1 < t < T, the log-return of asset i, 1 < i < m, is of the form [A\jt — u(Vt. The vectors u and v have the following interpretation. • For any asset, the rate of change in log-returns between two time instants ti < t2 is given by the ratio zH2lvtx, independent of the asset. Hence, v gives the time profile for all the assets: every asset shows the same time profile, up to a scaling given by u. • Likewise, for any time t, the ratio between the log-returns of two assets i and j at time t is given by Uj/iij, independent of t. Hence u gives the asset profile for all the time periods. Each time shows the same asset profile, up to a scaling given by v. While single-factor models may seem crude, they often offer a reasonable amount of information. It turns out that with many financial market data, a good single factor model involves a time profile v equal to the log-returns of the average of all the assets, or some weighted average (such as the SP 500 index). With this model, all assets follow the profile of the entire market. 3.4.8 Block-structured matrices Any matrix A G lRm'” can be partitioned into blocks, or submatrices, of compatible dimensions: An Mi Mi Mi When A is square, and An = 0, A21 = 0 (here by 0 we mean a matrix block of all zeros of suitable dimension), then A is said to be block diagonal: " An 0 0 A22 The set A (A) of eigenvalues of A is the union of the sets of eigenvalues of An and °f A22: A block diagonal A(A) = A(An) U \(Mi)- Also, a block-diagonal matrix is invertible if and only if its diagonal blocks are invertible, and it holds that A —1 A square, block-partitioned matrix A of the form (3.8) is said to be block upper triangular, if A21 — 0, and block lower triangular, if A12 = 0: A = block lower triangular, block upper triangular. Also for block-triangular matrices it holds that the eigenvalues of A are the union of the eigenvalues of the diagonal blocks, that is A block (upper or lower) triangular =» A(A) = A(A11)UA(A22). The inverse of a nonsingular block-triangular matrix can be expressed as follows: /li1 _ 0 ‘ —A22 A2iA-y * A22 A21 — A12 A11A21 Both these formulas can be proved by checking directly that A A-1 = I and A-1 A = I, which are the properties defining unequivocally the inverse of a matrix. Two equivalent formulas also exist for the inverse of a nonsingular full block matrix (3.8). Let Si = A AnA22 ^21/ S2 = A22 — A2iAn1 A12, An ^12 A21 A22 —AT1 A- C_1 -A^ Ai2S2 ji ^12^22 ST1 Further equivalent expressions can be obtained by expanding the inverses of the Si and S2 blocks using a handy matrix identity, known as the matrix inversion lemma, or Woodbury formula: (An - A12A22^A2i) — An1 + Ah1Ai2(A22 - A2iAn:LAi2) 1A2iAn1. (3.9) 3.4.9 Rank-one perturbations A special case of Eq. (3.9) arises when A12 and A21 are vectors, that is, when a rank-one matrix (i.e., a dyad) is added to An. Specifically, for A\2 = u £ R”, Ajj = ^ £ R", and A22 = —1, the above formula becomes -г 1 ■. ЛГ111л;тЛГ11 (ЛП + uv ) = A'1 - ■ П- 1 . (3.10) 1 + v1Ли W This formula permits us to easily compute the inverse of a rank-one perturbation of An, based on the inverse of Ац itself. A further interesting property is that a rank-one perturbation cannot alter the rank of a matrix by more than one unit. This fact actually holds for generic (i.e., possibly rectangular) matrices, as stated next. Lemma 3.1 (Rank of rank-one perturbation) Let A £ Rm n and q £ Rm, p £ R". Then I rank(A) — rank(A + qpT)\ <1. Proof We next show that rank(A) < rank (A + qpT) -f 1; the symmetric condition rank(A + qpT) < rank(A) + 1 can be proved via an identical argument, exchanging the role of the A and A + qpT matrices. Since the rank coincides with the dimension of the range subspace of a matrix, what we need to prove is that dim7£(A) < dimT£(A + qpT) + 1. Recalling that, by the fundamental theorem of linear algebra, dim TZ{ A) + dim J\f(AT) = m, the previous condition is also equivalent to dimJ\f(AT + pqT) < dimA/'(AT) + 1. (3-n) We prove (3.11) by contradiction: let v = dimA/'(AT), and suppose, for the purpose of contradiction, that dim Л/ХА1” + pqT) > v + 1. Then, there would exist v + 2 linear independent vectors vv+2, all belonging to the nullspace of AT -f pqT, that is (AT + pqT)vj = 0, / = 1,..., v + 2, which implies that ATVi = —oiip, ocj = qTVi, i = 1,..., v + 2. Now, at least one of the scalars ocj must be nonzero, for otherwise ATVj = 0 for i = 1,..., v -f 2, which would contradict the fact that dim Л/Г(АТ) = v, and the result would be proved immediately. Assume then, without loss of generality, that ос\ Ф 0, and define vectors Wj = Vj+i — (ftz+i/fti)^i, i = 1,...,v + l. It can then be checked directly that T" ~T I 1 ~j~ A W{ = A V{+1 —A v\ = — az-+ip + &i+ip — 0, 1 — 1,.. .,v + l. We would then have v + 1 linearly independent vectors w\ belonging to the nullspace of AT, which is a contradiction, since dimA/’(AT) = v. □ 3.5 Matrix factorizations A substantial part of theoretical and numerical linear algebra is dedicated to the problem of matrix factorizations. That is, given a matrix A E Rm'n, write this matrix as the product of two or more matrices with special structure. Usually, once a matrix is suitably factorized, several quantities of interest become readily accessible, and subsequent computations are greatly simplified. For example, it is known that any square matrix A can be written as the product of an orthogonal matrix and a triangular matrix, that is A — QR, where Q is orthogonal and R is upper triangular. Once such a factorization is obtained we can, for instance, immediately evaluate the rank of A, which is just the number of nonzero elements on the diagonal of R. Also, we can readily solve for the unknown x a system of linear equations of the type Ax — b, as further discussed in Section 6.44.1. In terms of the linear map defined by a matrix A, a factorization can be interpreted as a decomposition of the map into a series of successive stages, see, e.g., Figure 3.12. We next briefly describe some of the most used matrix factorizations. 3.5.1 Orthogonal-triangular decomposition (QR) Any square A E IRn,n can be decomposed as A = QR, where Q is an orthogonal matrix, and R is an upper triangular matrix. If A is nonsingular, then the factors Q, R are uniquely defined, if the diagonal elements in R are imposed to be positive. If A E lRm'n is rectangular, with m > n, a similar decomposition holds: A = Q nRl where Q E ]Rm'm is orthogonal, and R\ E lRn,n is upper triangular, see Figure 3.13. The QR decomposition is closely related to the Gram-Schmidt orthonormalization procedure, and it is useful in the numerical solution of linear equations and least-squares problems, see Sections 7.3 and 6.44.1. 3.5.2 Singular value decomposition (SVD) Any nonzero A E ]Rm'n can be decomposed as A = UlVT, ■ Figure 3.12 Given a matrix factorization A — BCD, the linear map y — Ax is interpreted as the series connection of three stages. A R Figure 3.13 Image of a 10 x 6 matrix, and its QR decomposition. MATRICES 83 where V £ JRn'n and U £ ]Rm'm are orthogonal matrices, and £=[ E where r is the rank of A, and the scalars <7j > 0, i — 1,..., r, are called the singular values of A. The first r columns u\,... ,ur of U (resp. v\,...,vr of V) are called the left (resp. right) singular vectors, and satisfy Avj = (JiUi, ATUj = (JiVi, i — 1,..., r. The SVD is a fundamental factorization in numerical linear algebra, as it exposes all relevant information about the linear map described by A, such as the range, nullspace, and rank. SVD is discussed in depth in Section 5.1. Figure 3.14 shows a pictorial representation of the SVD of a 4 x 8 matrix. 3.5.3 Eigenvalue decomposition for diagonalizable matrices A square, diagonalizable,13 matrix A £ lRn'n can be decomposed as where U £ Cn,n is an invertible matrix containing by columns the eigenvectors of A, and A is a diagonal matrix containing the eigenvalues Ai,...,An of A in the diagonal. For generic matrices these eigenvalues are real or complex numbers, with complex ones coming in complex-conjugate pairs (see Section 3.3.6). The columns u\,..., un of U are called the eigenvectors of A, and satisfy Aui = \{Ui, i — \,...,n. Indeed, these relations can be compactly written as AM = AU, which is equivalent to A — UAli-1. 3.5.4 Spectral decomposition for symmetric matrices Any symmetric matrix A £ Rn,n can be factored as A = UAUT, where U £ IRn,n is an orthogonal matrix, and A is a diagonal matrix containing the eigenvalues Ai,..., An of A in the diagonal. All these eigenvalues are real numbers, for symmetric matrices. Thus, symmetric matrices are diagonalizable, their eigenvalues are always real, and corresponding eigenvectors can be chosen also to be real and to §Tfn — r Qm—r,n—r E = diag (C7i,...,c7y), Figure 3.14 Image of the SVD of a 4 x 8 matrix. 13 Diagonalizable matrices are introduced in Section 3.3.6. form an orthonormal basis. The columns u\,..., un of U are indeed the eigenvectors of A, and satisfy Auj = AjUj, i — 1,... ,n. This factorization is known as the spectral decomposition for symmetric matrices, and it is further discussed in Section 4.2. Remark 3.1 The singular values and singular vectors of a rectangular matrix A E Rm'n are related to spectral decompositions of the symmetric matrices AAT and ATA. Precisely, if A — 17EVT is an SVD of A, then the columns of U (resp. V) are eigenvectors of AAT (resp. AT A), with corresponding nonzero eigenvalues of ,i = 1,..., r. 3.6 Matrix norms 3.6.1 Definition A function / : ]Rm,n —* R is a matrix norm if, analogously to the vector case, it satisfies three standard axioms. Specifically, for all A,B eRm'n and alia G R: • f(A) > 0, and /(A) = 0 if and only if A = 0; • f(*A) = kl f(A); • fCA + B)<f(A)+f(B). Many of the popular matrix norms also satisfy a fourth condition called sub-multiplicativity: for any conformably sized matrices A, B Among the most frequently encountered matrix norms, we find the Frobenius norm: || A ||p = V trace AAT, and with p = 1,2,00, the £p-induced, or operator, matrix norms IIwUp7^0 ||w||p 3.6.2 Frobenius norm The Frobenius norm || A||p is nothing but the standard Euclidean (£2) vector norm applied to the vector formed by all elements of A E | A ||p = max = v trace AAT = MATRICES 85 The Frobenius norm also has an interpretation in terms of the eigenvalues of the symmetric matrix AAT (we recall that the trace of a square matrix represents the sum of the elements on the diagonal, as well as the sum of the eigenvalues of that matrix): ||A||F = V trace AAT = i{AAT). Let aj,..., ah denote the rows of A G Rm,n, then we have that II^IIf = EII aJ 111' therefore, for any x € IR”, it holds that \\Ax\\2 < ||A||f||x||2. (3.12) This is a consequence of the Cauchy-Schwartz inequality applied to \aj x\: m m \\Ax\\l = E \a7x\2 ^ EIIa7WlMl = II^IIfIMI2- /=1 /■=1 Inequality (3.12) also implies that the Frobenius norm is indeed sub- multiplicative, that is, for any B £ W1'?, it holds that ||AB||F < ||A||F||B||F. To see this fact, let b\,... ,bv denote the columns of B, then AB = [Abi • • • Abp\, and M ;=1 The Frobenius norm of A can be interpreted in terms of the linear map associated with A: u —>■ y = An, as pictured Figure 3.15. Spe¬ cifically, it provides a measure of the output's variance, when inputs are random; see Exercise 3.9 for details. 3.6.3 Operator norms While the Frobenius norm measures the response to random inputs, the so-called operator norms give a characterization of the maximum input-output gain of the linear map u -» y = Au. Choosing to measure both inputs and outputs in terms of a given £p norm, with typical values p = 1,2,00, leads to the definition u ^ Au Figure 3.15 A matrix as an operator. Matrix norms measure "typical" output sizes. where the last equality follows from the fact that we can divide both terms in the fraction by the (nonzero) norm of u. By definition, for every u, ||Au||p < || A||p||u||p. From this property follows that any operator norm is sub-multiplicative, that is, for any two conformably sized matrices A, B, it holds that This fact is easily seen by considering the product AB as the series connection of the two operators B, A: \Bu\\p < ||B||J|u| see Figure 3.16. For the typical values of p — 1,2,00, we have the following results (proofs of the first two cases are left to the reader as an exercise). • The -induced norm corresponds to the largest absolute column sum: max ||Au||i = max IMIi=i ;=1 ni=i Figure 3.16 Submultiplicativity of operator norms. • The ^oo-induced norm corresponds to the largest absolute row sum: max ||Am||oo = max Y\\aij\- ||m||oo=1 • 1 • The ^"induced norm (sometimes referred to as the spectral norm) corresponds to the square-root of the largest eigenvalue Amax of ATA: ||A||2 = max ||Aw||2 = \/Amax(ATA). N12 = 1 V The latter identity follows from the variational characterization of the eigenvalues of a symmetric matrix, see Section 4.3.1. Remark 3.2 It is possible to define other matrix norms, for example operator norms where different norms are used to measure input and output sizes. Some of these norms may be hard to compute. For example, the norm max ||Am||2 : |H|oo < 1, is hard to compute exactly, although good approximations are available. MATRICES 87 3.6.3.1 Spectral radius. The spectral radius p(A) of a matrix A G Rn,n is defined as the maximum modulus of the eigenvalues of A, that is p(A) = max |A;(A)|. Clearly, p(A) > 0 for all A, and A = 0 implies p(A) = 0. However, the converse is not true, since p(A) — 0 does not imply14 necessarily that A = 0, hence p(A) is not a matrix norm. However, for any induced matrix norm || • ||p, it holds that p(A) < \\A\\P. To prove this fact, let A/, V{ 7^ 0 be an eigenvalue/eigenvector pair for A, then IMIplMlp > HM'llp = l|A/®f||P = |A,-| 110,'llp, where the first inequality follows from the definition of induced matrix norm, hence |A/| < \\A\\p, for all i = 1,.. .,n, which proves the claim. It follows, in particular, that p(A) < min(||A||i, ||A||oo), that is p(A) is no larger than the maximum row or column sum of |A| (the matrix whose entries are the absolute values of the entries in A). 3.7 Matrix functions We have already encountered several scalar functions of matrix argument. For instance, given a matrix X G RW/”, the determinant detX, the trace of X, and any norm ||X|| are examples of functions / : Rn,n R. Here we briefly discuss functions that return a matrix argument, that is, functions / : Rn,n Kn,n. One example of such a function is the matrix inverse, defined on the domain of nonsingular matrices, that given X G Rn,n nonsingular, returns a matrix X-1 such that XX~l = X~lX = In. 3.7.1 Matrix powers and polynomials The integer power function f(X) = Xk, k = 0,1,... can be quite naturally defined via the matrix product, by observing that Xk = XX • • • X (k times; we take the convention that X° = In). 14 Take for instance A = . Matrices such that p(A) = 0 are called nilpotent. Similarly, negative integer power functions can be defined over nonsingular matrices as integer powers of the inverse: /(X) = X~k = {X~l)k, k = 0,1,... A polynomial matrix function of degree m > 0 can hence be naturally defined as p(X) = amXm + am-\Xm 1 + • • • + aiX + a$In, where 0/, i = 0,1,..., m, are the scalar coefficients of the polynomial. A first interesting result for matrix polynomials is the following one. Lemma 3.2 (Eigenvalues and eigenvectors of a matrix polynomial) Let X G lRn,n and let A, u be an eigenvalue/eigenvector pair for X (that is, Xu = Au, and let p(X) = amXm + am—iXm * + •••+ axX aQln. Then, it holds that p(X)w = p(A)w, where p(A) = 0wAm + flm_iAm_1 + • • • H- «iA + a$. That is, if A,u is an eigenvalue/eigenvector pair for X, then p(A),u is an eigenvalue/eigenvector pair for the polynomial matrix p(X). Proof The proof is immediate, by observing that Xu — Au implies that X2u = X(Xu) = X(Au) = A2u, X3u = X(X2u) = X(A2u) = A3u,... hence, p(X)u = (amXm + •••-}- a\X + aQln)u = (amAm + • • • + a\A + ao)u = p(A)u. A simple consequence of Lemma 3.2 is a result known as the eigenvalue shift rule: if A*(A), i = 1,...,n, denote the eigenvalues of a matrix A G TRn,n, then Ai(A + pI„) = Aj{A) + p, i = l,...,n. (3.13) For matrices X that admit a diagonal factorization, a polynomial of such a matrix argument can be expressed according to the same type of factorization, as detailed in the next lem'ma. MATRICES 89 Lemma 3.3 (Diagonal factorization of a matrix polynomial) Let X G JRn'n admit a diagonal factorization of the form (3.7) X = UAU~\ where A is a diagonal matrix containing the eigenvalues of X, and U is a matrix containing by columns the corresponding eigenvectors. Let p(t), t G 1R, be a polynomial p(t) = amtm + am_\tm * + ••• + a\t + a$. p{X) = Up(A)U~\ p{ A) = diag (p(A x),p(An)) • Proof If X = UAU~\ then X2 = XX = UAU~1UAU~1 = UA2U~1, X3 = X2X = UA3U~1, etc., hence for any k = 1,2,..., Xk = UAkU~l. p(X) = amXm + am-\Xm ^ + • • • + a\X + a$ln = U(amAm + am-.\Am *-!-•••+ a\A + aQln)U * - Up(A)U~1. 3.7.1. i Convergence of matrix powers. A topic of interest in many applications (such as numerical algorithms, linear dynamical systems, Markov chains, etc.) is the convergence of the matrix powers Xk for Zc -> 00. If X is diagonalizable, then our previous analysis states that Xk - UAkU~1 = YL^iUiv], i=1 where U[ is the z-th column of U and vj is the z-th row of li-1. We can draw some interesting conclusions from this expression. First, if |A/1 < 1 for all i (that is, if p(X) < 1), then each term in the sum tends to zero as k oo, hence Xk —» 0. Conversely, if Xk -» 0, then it must hold that p(X) < 1, for otherwise there would exist a A/ with | A/1 > 1 and the corresponding term in the above sum will either remain bounded in norm (if |A;| = 1), or grow indefinitely in norm, thus Xk —> 0 could not happen. Also, suppose | A/| < 1 for all i except for one (say, without loss of generality, the first one), for which we have Ai = 1. Then, it is easy to see that in such case Xk converges to a constant matrix: Xk U\V^, where U\ contains by columns the eigenvectors associated with the eigenvalue Ai = 1, and V\ contains the corresponding columns in V = IT"1. The above analysis, however, is limited by the assumption of diag- onalizability of X. The following theorem states a general result on convergence of matrix powers.15 Theorem 3.5 (Convergence of matrix powers) Let X E RW/". Then: 1. limfc_>oo Xk — 0 if and only if p(X) < 1; 2- EfcLo Xk converges if and only if p(X) < 1 (in which case the limit of the series is (I — X)-1); 3. lim^oo Xk = X ^ 0 if and only if |A*| < 1/or all except one (possibly repeated) eigenvalue A = 1, whose corresponding eigenspace has dimension equal to the algebraic multiplicity of A = 1. A/so, X = U\V^, where U\ contains by columns a basis for the eigenspace of X associated with A — 1, and V\ is such that Ui = I, and U{ — 0 for all eigenvectors U{ of X associated with eigenvalues A; ^ 1. Further, if Xk is convergent, then X(J — X) = (I — X)X = 0, and conversely if lim^oo Xk(I — X) = 0 then Xk is convergent. 3.7.2 Non-polynomial matrix functions Let / : R -y R be an analytic function, that is, a function which is locally representable by a power series /(0 = £ aktk k=0 which is convergent for all t such that |f| < R, R > 0. Ifp(X) < R (where p(X) is the spectral radius of X), then the value of the matrix function /(X) can be defined as the sum of the convergent series /(X) = £ akXk. k=0 Moreover, if X is diagonalizable, then X = LfALf-1, and 00 /00 \ f(X) = £ akXk = LZ £ akAk JJ-1 k=0 \fc=0 / = LZdiag (/(Aa),... ,/(A„)) ii_1 = Uf(A)U~\ (3.14) Equation (3.14) states in particular that the spectrum (i.e., the set of eigenvalues) of /(A) is the image of the spectrum of A under the mapping /. This fact is known as the spectral mapping theorem. 15 For a proof, see, for instance, Chapter 7 in C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2001. MATRICES 91 Example 3.8 A notable example of application of Eq. (3.14) is given by the matrix exponential: the function f(t) = el has a power series representation which is globally convergent e* = E ^ hence, for any diagonalizable X G Rn,n, we have 00 1 / \ ex = £ -r;Xk = tfdiag (eAl,...,eA") IT1. k=0 k' ' Another example is given by the geometric series /(f) = (l-0“1= for |f| < 1 = jR, from which we obtain that /(X) = (/-X)-1= £X*, for p(X) < 1. More generally, for cr 7^ 0, we have that 1 00 / . \ k f{t) = {t-a)-l = -l-^r-) , for |f| < |cr| — X, 1 00 / y\k /(X) = (X - crlr1 = - - £ ( - ) , for p(X) < |<r|, u k=o \ u / and X = iiALf-1 implies that (X - (7f)—1 = li(A - crJ)-1^-1. (3.13) 3.8 Exercises Exercise 3.1 (Derivatives of composite functions) 1. Let / : Rm -» Rfc and g : R” -» Rm be two maps. Let h : R" —> Rfc be the composite map h = fog, with values h(x) = f(g(x)) for x E R". Show that the derivatives of h can be expressed via a matrix-matrix product, as Jh(x) = Jf(g(x)) • Jg(x), where Jh(x) is the Jacobian matrix of h at x, i.e., the matrix whose (/,;) element 2. Let g be an affine map of the form g{x) — Ax + b, for A E Rm,n, b G Rm. Show that the Jacobian of h(x) — f(g(x)) is 3. Let g be an affine map as in the previous point, let / : R" R (a scalar-valued function), and let h(x) = f(g(x)). Show that V*fc(x) = ATVgf(g(x)), V2xh(x) = ATV2gf(g(x))A. Exercise 3.2 (Permutation matrices) A matrix P G R*1'” is a permutation matrix if its columns are a permutation of the columns of the n x n identity matrix. 1. For an n x n matrix A, we consider the products PA and AP. Describe in simple terms what these matrices look like with respect to the original matrix A. 2. Show that P is orthogonal. Exercise 3.3 (Linear maps) Let / : R” —> Rm be a linear map. Show how to compute the (unique) matrix A such that f{x) = Ax for every x G R”, in terms of the values of / at appropriate vectors, which you will determine. Exercise 3.4 (Linear dynamical systems) Linear dynamical systems are a common way to (approximately) model the behavior of physical phenomena, via recurrence equations of the form16 x(t + l) = Ax(t) + Bu(t), y(t)=Cx(t), t — 0,1,2,..., where t is the (discrete) time, x(t) G Rn describes the state of the system at time t, u(t) G RP is the input vector, and y{t) G Rm is the output vector. Here, matrices A, B, C, are given. 1. Assuming that the system has initial condition x(0) = 0, express the output vector at time T as a linear function of u(0), ..., u(T — 1); that is, determine a matrix H such that y(T) — Hlf(T), where U(T) = : _ u(T-l) contains all the inputs up to and including at time T — 1. 2. What is the interpretation of the range of H? 16 Such models are the focus of Chap ter 15. Exercise 3.5 (Nullspace inclusions and range) Let A, B G Rm n be two matrices. Show that the fact that the nullspace of B is contained in that of A implies that the range of BT contains that of AT. MATRICES 93 Exercise 3.6 (Rank and nullspace) Consider the image in Figure 3.17, a gray-scale rendering of a painting by Mondrian (1872-1944). We build a 256 x 256 matrix A of pixels based on this image by ignoring grey zones, assigning +1 to horizontal or vertical black lines, +2 at the intersections, and zero elsewhere. The horizontal lines occur at row indices 100, 200, and 230, and the vertical ones at column indices 50,230. 1. What is the nullspace of the matrix? 2. What is its rank? Exercise 3.7 (Range and nullspace of AT A) Prove that, for any matrix A E ]Rm,n, it holds that AT(AtA) - M{A), ■R(ATA) = n(AT). (3.16) Hint: use the fundamental theorem of linear algebra. Exercise 3.8 (Cayley-Hamilton theorem) Let A E ]Rn,n and let p(A) = det(A7n — A) — \n -j- cn—\\n -j- • • • -j- c^A -j- cq be the characteristic polynomial of A. 1. Assume A is diagonalizable. Prove that A annihilates its own characteristic polynomial, that is p(A) = An + cn—\An * + ••• + C\A + coIn = 0. Hint: use Lemma 3.3. 2. Prove that p(A) = 0 holds in general, i.e., also for non-diagonal- izable square matrices. Hint: use the facts that polynomials are continuous functions, and that diagonalizable matrices are dense in RW/”, i.e., for any e > 0 there exist A E Rn,n with ||A||p < e such that A + A is diagonalizable. Exercise 3.9 (Frobenius norm and random inputs) Let A E Rm,n be a matrix. Assume that u E HR*1 is a vector-valued random variable, with zero mean and covariance matrix In. That is, E{u} — 0, and E {uuT} = In. 1. What is the covariance matrix of the output, y — Aul 2. Define the total output variance as E{||y — y\\\}f where y = E{y} is the output's expected value. Compute the total output variance and comment. Figure 3.17 A gray-scale rendering of a painting by Mondrian. Exercise 3.10 (Adjacency matrices and graphs) For a given undirected graph G with no self-loops and at most one edge between any pair of nodes (i.e., a simple graph), as in Figure 3.18, we associate a n x n matrix A, such that _ I 1 if there is an edge between node i and node ;, tJ I 0 otherwise. This matrix is called the adjacency matrix of the graph 1. Prove the following result: for positive integer k, the matrix Ak has an interesting interpretation: the entry in row / and column j gives the number of walks of length k (i.e., a collection of k edges) leading from vertex i to vertex ;. Hint: prove this by induction on k, and look at the matrix-matrix product Ak~l A. 2. A triangle in a graph is defined as a subgraph composed of three vertices, where each vertex is reachable from each other vertex (i.e., a triangle forms a complete subgraph of order 3). In the graph of Figure 3.18, for example, nodes {1,2,4} form a triangle. Show that the number of triangles in G is equal to the trace of A3 divided by 6. Hint: For each node in a triangle in an undirected graph, there are two walks of length 3 leading from the node to itself, one corresponding to a clockwise walk, and the other to a counter-clockwise walk. Exercise 3.11 (Non-negative and positive matrices) A matrix A eW1'*1 is said to be non-negative (resp. positive) if > 0 (resp. a^ > 0) for all = 1,..., n. The notation A > 0 (resp. A > 0) is used to denote non-negative (resp. positive) matrices. A non-negative matrix is said to be column (resp. row) stochastic, if the sum of the elements along each column (resp. row) is equal to one, that is if 1TA = 1T (resp. A1 = 1). Similarly, a vector xeR" is said to be non-negative if x > 0 (element-wise), and it is said to be a probability vector, if it is non-negative and lTx = 1. The set of probability vectors in R” is thus the set S — {x E R” : x > 0, 1Tx = 1}, which is called the probability simplex. The following points you are requested to prove are part of a body of results known as the Perron-Frobenius theory of non-negative matrices. 1. Prove that a non-negative matrix A maps non-negative vectors into non-negative vectors (i.e., that Ax > 0 whenever x > 0), and that a column stochastic matrix A > 0 maps probability vectors into probability vectors. Figure 3.18 An undirected graph with n = 5 vertices. 17 The graph in Figure 3.18 has adjacency matrix ' 0 MATRICES 95 2. Prove that if A > 0, then its spectral radius p(A) is positive. Hint: use the Cayley-Hamilton theorem. 3. Show that it holds for any matrix A and vector x that \Ax\ < \A\\x\, where |A| (resp. \x\) denotes the matrix (resp. vector) of moduli of the entries of A (resp. x). Then, show that if A > 0 and Ai,V{ is an eigenvalue/eigenvector pair for A, then |A/||^| < A\vi\. 4. Prove that if A > 0 then p(A) is actually an eigenvalue of A (i.e., A has a positive real eigenvalue A = p(A), and all other eigenvalues of A have modulus no larger than this "dominant" eigenvalue), and that there exist a corresponding eigenvector v > 0. Further, the dominant eigenvalue is simple (i.e., it has unit algebraic multiplicity), but you are not requested to prove this latter fact. Hint: For proving this claim you may use the following fixed-point theorem due to Brouwer: if S is a compact and convex setlS in R”, and f : S —► S is a continuous map, then there exist an x E S such that f{x) = x. Apply this result to the continuous map f(x) = with S being the probability simplex (which is indeed convex and compact). 5. Prove that if A > 0 and it is column or row stochastic, then its dominant eigenvalue is A = 1. 18 See Section 8.1 for definitions of compact and convex sets. Symmetric matrices This chapter is devoted to symmetric matrices and to their special properties. A fundamental result, the spectral theorem, shows that we can decompose any symmetric matrix as a three-term product of matrices, involving an orthogonal matrix and a real diagonal matrix. The theorem has a direct implication for quadratic functions, as it allows us to decompose any quadratic function having no linear or constant terms into a weighted sum of squared linear functions involving vectors that are mutually orthogonal. The spectral theorem also allows us to determine when a quadratic function enjoys an important property called convexity. Next, we provide a characterization of the eigenvalues of symmetric matrices in terms of optimal values of certain quadratic optimization problems (variational characterization). We shall further discuss a special class of symmetric matrices, called positive semidefinite matrices, that play a relevant role in optimization models. Finally, we show that many important properties of a matrix A G Rm,tt, such as the range, the nullspace, the Frobenius, and the spectral norms, can be studied by analyzing the related symmetric matrices ATA and AAT. This observation naturally leads to the topic of singular value decomposition (SVD), that will be treated in Chapter 5. 4.1 Basics 4.1.1 Definitions and examples A square matrix A G RW/" is symmetric if it is equal to its transpose: A = AT, that is: Ajj = Ay, 1 < i,j < n. Elements above the diagonal in a symmetric matrix are thus identical to corresponding elements below the diagonal. Symmetric matrices are ubiquitous in engineering applications. They arise, for instance, in the description of graphs with undirected weighted edges between the nodes, in geometric distance arrays (between, say, cities), in defining the Hessian of a nonlinear function, in describing the covariances of random vectors, etc. The following is an example of a 3 x 3 symmetric matrix: The set of symmetric n x n matrices is a subspace of RW/W, and it is denoted by Sn. Example 4.1 (Edge weight and Laplacian matrices of a graph) Consider an (undirected) graph consisting of m = 6 nodes connected by n = 9 arcs (or edges), as in Figure 4.1. If we assume that each undirected edge between nodes i and j is assigned a weight Wy = w^, 1 < i,j < m, we obtain a symmetric matrix W E Sm of edge weights. In the example in Figure 4.1, assuming all weights are equal to one, we would obtain Also, the Laplacian matrix of a graph is defined as an m x m symmetric 4 3/2 2 A = 3/2 2 5/2 2 5/2 2 Figure 4.1 An undirected graph with m = 6 nodes and n = 9 edges. number of arcs incident to node i -1 0 Several key properties of a graph are related to the Laplacian matrix. If the graph has no self-loops and only one edge between any pair of nodes, then the Laplacian matrix is related to the (directed) node-arc incidence matrix A of any orientation of the graph,1 as 1 By an orientation of an undirected graph we indicate a directed graph obtained from an undirected graph by choosing some orientation of the edges. L = AAT G Sm. For the example in Figure 4.1, we have Example 4.2 (Sample covariance matrix) Given m points x^l\... ,x^ in Rn, we define the sample covariance matrix to be the n x n symmetric matrix 1 m where x £ Rn is the sample average of the points: 1 m The covariance matrix E is obviously a symmetric matrix. This matrix arises when computing the sample variance of the scalar products s* = wTx^l\ i — 1, . . . , m, where w £ lRn is a given vector. Indeed, the sample average of vector s is s = — (si H b sm) = wTi, while the sample variance is m m a2 = Y^(wJ x^ — s)2 = Y^(wT (x^ — x))2 = wJyLw. i=1 Z=1 Example 4.3 (Portfolio variance) For n financial assets, we can define a vector r G lRn whose components rare the rate of returns of the /c-th asset, k = 1,... ,n; see Example 2.6. Assume now that we have observed m samples of historical returns Al\ i — 1,...,m. The sample average over that history of return is f = (1 /m)(r^ + • • • + Am^), and the sample covariance matrix has (i,j) component given by 1 m *ij = ~ £(rf “ 1 ^ J ^ n- If w G represents a portfolio "mix," that is > 0 is the fraction of the total wealth invested in asset k, then the return of such a portfolio is given by p — rTw. The sample average of the portfolio return is fTw, while the sample variance is given by wTHw. 100 OPTIMIZATION MODELS Example 4.4 (Hessian matrix of a function) The Hessian of a twice differentiable function / : Rn —> R at a point x £ dom / is the matrix containing the second derivatives of the function at that point. That is, the Hessian is the matrix with elements given by The Hessian of / at x is often denoted by V2/(*). Since the second derivative is independent of the order in which derivatives are taken, it follows that Hjj — Hji for every pair (/,;), thus the Hessian is always a symmetric matrix. Hessian of a quadratic function. Consider the quadratic function (a polynomial function is said to be quadratic if the maximum degree of its monomials is equal to two) q(x) = x\ + 2*1*2 + 3*2 + 4*i + 5*2 + 6. The Hessian of q at * is given by I" ^4 d2q 1 3*3*2 ’ 2 2 For quadratic functions, the Hessian is a constant matrix, that is, it does not depend on the point * at which it is evaluated. The monomials in q(x) of degree two can also be written compactly as *1 + 2*x *2 + 3*2 = ^*TH*. Therefore, the quadratic function can be written as the sum of a quadratic term involving the Hessian, and an affine term: q(x) = i*TH* + cT* + d, cT = [4 5], d — 6. Hessian of the log-sum-exp function. Consider the log-sum-exp function lse : Rn —>• R, taking values lse(*) — In ^ eXl. The Hessian of this function can be determined as follows. First, we determine the gradient at a point *, as done in Example 2.14: Vlse(*) = ~z, where z = [e*1 • • • e*w], and Z = £”=1 zz. Then, the Hessian at a point * is obtained by taking derivatives of each component of the gradient. If gi(x) is the i-th component of the gradient, that is, SYMMETRIC MATRICES 101 dgj(x) = 2±_2± dxt Z Z2' and, for j 7^ i: dgi(x) = _zjzi dXj Z2 ' More compactly: V2lse(x) = ~ (zdiag(z) - zzT) . Example 4.5 (Projections and the Gram matrix) Suppose we are given d linearly independent vectors x^\..., *№ in Rn, and a vector i G Rn. In Section 2.3.2.3 we considered the problem of computing the projection x* of x onto the subspace spanned by x^\..., x(d\ Such a projection can be computed as X* = Xol, X = [xW • • • *№], where a € Rd is a vector of coefficients that must satisfy the so-called Gram system of linear equations (2.8) ■ *(1)TX(1) . . . *(l)Tx(d) ’ _ xW^xd) • ■ . x(d)Tx(d) . . H ’ * The right-hand term in these equations is a vector containing the components of x along the directions x^l\..., x(d\ and the coefficient matrix appearing in the left-hand side of these equation is a symmetric matrix called the Gram matrix: G = XTX e Sn. 4.1.2 Quadratic functions A quadratic function q : R” -> R is a second-order multivariate polynomial in x, that is a function containing a linear combination of all possible monomials of degree at most two. Such a function can hence be written as n n n q(x) = I] I] aijxixj + D °ixi + d> ;'=1 ;'=1 1= 1 where aq are the coefficients of the monomials of degree two X[Xj, C[ are the coefficients of the monomials of degree one X{, and d is the zero-degree (constant) term. The above expression has a more compact representation in matrix format as q(x) = xT Ax + cT x + d, where A E R"'" is a matrix containing in row i and column j the coefficient aq, and c is the vector of C{ coefficients. Notice that, since 102 OPTIMIZATION MODELS xT Ax is a scalar it is equal to its transpose, hence it holds that xTAx = xTATx, thus xT Ax = ^xT (A + AT)x, where H = A + AT is a symmetric matrix. A generic quadratic function can thus be represented as q(x) = ^xTHx + cT x + d H c cT 2d where He Sn. A quadratic form is a quadratic function with no linear and constant terms, that is c = 0, d = 0: q(x) = ixTHx, H £ S". Note that the Hessian of a quadratic function q(x) is constant: V2q(x) A generic, twice differentiable function / : R” -» R can be locally approximated in the neighborhood of a point xq via a quadratic function, by means of Taylor series expansion, see Section 3.2.2: = f(x0) + V/(x0)1 (x - x0) 4- - x0)TV2/(x0)(x - x0), where |/(x) — q(x)\ goes to zero faster than second order asx-> xo- Here, the approximating quadratic function has the standard format (4.1), with H = V2/(x0), c = V/(x0) - V2/(x0)x0, d = f(x0) - V/(x0)Tx0 + ^xJV2/(x0)x0. Two special cases: diagonal matrices and dyads. Let a = [a\ • • • an]T, then a diagonal matrix A = diag (a) a\ 0 0 02 0 ••• ... 0 ... 0 0 a A is a special case of a symmetric matrix. The quadratic form associated with diag (a) is q(x) = x diag (fl) x = £ «,-x?, that is, q(x) is a linear combination of pure squares x? (i.e., no cross product terms of type xzxy appear in the sum). Another important class of symmetric matrices is that formed by symmetric dyads, that is by vector products of the form A =‘ aaT = a\a 2 Ufi a\ Dyads are matrices of rank one, and the quadratic form associated with a dyad has the form q(x) = xTaaT x — (aTx)2, that is, it is the square of a linear form in x. It follows that the quadratic form associated with a dyad is always non-negative: q(x) > 0, for all x. 4.2 The spectral theorem 4.2.1 Eigenvalue decomposition of symmetric matrices We recall the definition of eigenvalues and eigenvectors of a square matrix from Section 3.3. Let A be an n x n matrix. A scalar A is said to be an eigenvalue of A if there exists a nonzero vector u such that An = A u. The vector u is then referred to as an eigenvector associated with the eigenvalue A. The eigenvector u is said to be normalized if ||w||2 = 1. In this case, we have2 u*Au = Au*u = A. The interpretation of u is that it defines a direction along which the linear map defined by A behaves just like scalar multiplication. The amount of scaling is given by A. The eigenvalues of A are the roots of the characteristic polynomial pA(A) = det(AI - A). That is, the eigenvalues A/, i = 1,..., n, are the roots (counting multiplicities) of the polynomial equation of degree n: Pa(A) = 0. For a generic matrix A G RW/", the eigenvalues A* can thus be real and/or complex (in complex conjugate pairs), and likewise the corresponding eigenvectors can be real or complex. However, the situation is different in the case of symmetric matrices: symmetric matrices have real eigenvalues and eigenvectors and, moreover, for each distinct eigenvalue A/, the dimension of the eigenspace (pi = M{\iln — A) coincides with the algebraic multiplicity of that eigenvalue. This is summarized in the following key theorem. 2 Recall that the superscript ★ denotes the Hermitian conjugate of a vector/matrix, obtained by transposing and taking the conjugate; if the vector/matrix is real, then the Hermitian conjugate simply coincides with the transpose. 104 OPTIMIZATION MODELS Theorem 4.1 (Eigendecomposition of a symmetric matrix) Let A E be symmetric, let A*, i = < n, be the distinct eigenvalues of A. Let further i = 1, ...,/c, denote the algebraic multiplicity of Az- Ofe multiplicity of A/ as a root of the characteristic polynomial), and let (pi = Afi^iln — A). Then, for all z = 1,... ,/c; 1. A* E R; 2. <pi-L<t>j, i 7^ /; 3. dim</>; = Pwrf 1. Let A, w be any eigenvalue/eigenvector pair for A. Then Au = Xu and, by taking the Hermitian conjugate of both sides, W*A* = A *w*. Multiplying the first of the previous equations on the left by u* and the second on the right by u, we have u*Au = A u*u, u*A*u = \*u*u. Since u*u = \\uH2 7^ 0, recalling that A real implies that A* = AT, and subtracting the two equalities in the previous expression it follows that u*{A - AT)u = (A - A*)||m||2. Now, A — AT — 0, since A is symmetric, hence it must be that A - A* = 0, which implies that A must be a real number. Notice that also the associated eigenvector u can always be chosen to be real. Indeed, if a complex u satisfies Au = Xu, with A, A real, then we also have that Re (Aw) = ARe(w) — ARe(w), which means that Re(w) is an eigenvector of A associated with A. Part 2. Let Vj E (pi, Vj E (pj, i 7^ ;. Since Av\ — Xpoi, Avj — X/Vj, we have vj Avi — Xjvj Vi vj Avi = vj ATVj — vj Avj = XjvJ Vj — Xjvj 1 Thus, subtracting these two equalities, we get (A; - Aj)vjVi = 0.- Since Ai 7^ Aj by hypothesis, it must be that vj — 0, i.e., Vj, V{ are orthogonal. Part 3. Let A be an eigenvalue of A, let y > 1 be its algebraic multiplicity, and let v be the dimension of (p = J\f(KIn — A). We know that, in general, v < y, that is, the geometric multiplicity (i.e., the dimension of the eigenspace) is no larger than the algebraic multiplicity, see Section 3.3.3. We next prove that for symmetric matrices it actually holds that v = y, by constructing an orthonormal basis for (f> composed of y elements. To this end, we state a preliminary result. Let B be a symmetric m x m matrix, and let A be an eigenvalue of B. Then, there exists an orthogonal matrix U = [u Q) e Rm'm, Q g such that Bu = Ku A 0 Bi = QTBQ <E S'"-1. (4.2) 0 Bi To see this fact, let u be any unit-norm eigenvector of B associated with A, i.e., Bu = Ku, ||w||2 = 1. Proceeding similarly to the reasoning done before Eq. (3.6), we can take Q to be a matrix containing by columns an orthonormal basis of the orthogonal complement of 7Z{u), so that U = [u Q] is by construction an orthogonal matrix: UTU = Im. Equation (4.2) then follows from the fact that Bu = Ku, uTB = KuT, and that uTQ = QTu = 0. We now first apply this result to A e Sn: since y > 1, there exists an orthogonal matrix = [u\ Qi] G lRn,ri, such that Au\ — Ku\, and Uf AUi = A 0 0 Ai A\ = Q7AQi G S n — 1 Now, if y = 1 we have finished the proof, since we found a subspace of (p of dimension one (the subspace is 1Z(ui)). If instead y > 1 then, due to the block diagonal structure of Uf AUi and to the fact that this matrix is similar to A, we conclude that A is an eigenvalue of A\ of multiplicity y — 1. We hence apply the same reasoning to the symmetric matrix Ai G there exist an orthogonal matrix ^2 = [U2 Q2] £ such Jt\U2 = KU2, \\U2W2 = 1, and AxU2 = We next show that the vector A 0 0 A2 u2 = Ui A2 = Q2AQ2eS 106 OPTIMIZATION MODELS is a unit-norm eigenvector of A, and it is orthogonal to U\. Indeed, AU2 = U\ A 0 0 A1 u; III uj U2 [U1 Ql] = 0, hence u2 is orthogonal to U\. If y = 2, then the proof is finished, since we have found an orthonormal basis of dimension two for (p (such a basis is U\, u2). If otherwise y > 2, we iterate the same reasoning on matrix A2 and find an eigenvector 1/3 orthogonal to U\,U2, etc. We can continue in this way until we reach the actual value of ]i, and at this point we exit the procedure with an orthonormal basis of (p composed of exactly ]i vectors. □ 4.2.2 The spectral theorem Putting together Theorem 4.1 and Theorem 3.4 it is easy to conclude that any symmetric matrix is orthogonally similar to a diagonal matrix. This is stated in the following so-called spectral theorem for symmetric matrices. Theorem 4.2 (Spectral theorem) Let A £ Kn,n be symmetric, let A* £ R, i = 1,..., n, be the eigenvalues of A (counting multiplicities). Then, there exists a set of orthonormal vectors U{ £ Rn, i — 1,..., n, such that Auj = \{Uj. Equivalently, there exists an orthogonal matrix U = [u\ • • • un] (i.e., UUT = UTU = In) such that A = UAUT = AiUiuJ, A = diag (Alr..., A„). The spectral theorem also shows that any symmetric matrix can be decomposed as a weighted sum of simple rank-one matrices (dyads) of the form U{uJ, where the weights are given by the eigenvalues Az. Example 4.6 (Eigenvalue decomposition of a symmetric 2x2 matrix) We give a practical numerical example of the spectral theorem. Let SYMMETRIC MATRICES 107 To determine the eigenvalues, we solve the characteristic equation: 0 = det(AJ - A) - (A - 3/2)2 - (1/4) = (A - 1)(A - 2), hence the eigenvalues are = 1, A2 = 2. For each eigenvalue A/, we look for a corresponding unit-norm vector such that Aui — Ai/;. For X\, we obtain the equation in u\ 0 = (A — A\)u\ = 1/2 -1/2 -1/2 1/2 which leads (after normalization) to the eigenvector U\ = (1/ V2)[l 1]T. Similarly, for A2 we obtain the eigenvector 112 = (1/ V2) [1 — 1]T. Hence, A admits the factorization ' 1 ’ 1 o' " 1 1 1 -1 4.3 Spectral decomposition and optimization In this section, we illustrate how the spectral decomposition of symmetric matrices can be used to solve very specific types of optimization problems, namely those that involve the maximization or minimization of quadratic forms3 over the Euclidean ball. 4.3.1 Variational characterization of eigenvalues We begin with expressing the eigenvalues of a symmetric matrix as optimal values of certain optimization problems. Since the eigenvalues of A G S” are real, we can arrange them in decreasing order:4 (A) = A!(A)>A2(/l)>--->A„(A) — ^min The extreme eigenvalues can be related to the minimum and the maximum attained by the quadratic form induced by A over the unit Euclidean sphere. For x^O the ratio is called a Rayleigh quotient. The following theorem holds. Theorem 4.3 (Rayleigh quotients) Given A E S”, it holds that Amin(A) < < Amax(A), V* ^ 0. Amax(A) = max xT Ax, X\ ||x||2=l Amin (-A) = pin xTAx, 3 Recall that a quadratic function is a quadratic form if it has quadratic terms but no linear or constant terms. 4 We shall maintain throughout the book this ordering convention for the eigenvalues of symmetric matrices. 108 OPTIMIZATION MODELS and the maximum and minimum are attained for x = u\ and for x = un, respectively, where u\ (resp. un) is the unit-norm eigenvector of A associated with its largest (resp. smallest) eigenvalue of A. Proof The proof is based on the spectral theorem for symmetric matrices and on the invariance of the Euclidean norm under orthogonal transformations. Let A = UAUT be the spectral factorization of A, where A contains the ordered eigenvalues on the diagonal, and U is orthogonal. Defining x = UTx, we have xTAx = xTUAUTx = xTAx n n n Amin E ^ E ^ Amax E Xb r=l !=1 i=1 that is, considering that Yl}= i xf = \\xW2 ~ ll^Txll2 = IMI2' Amin||*||2 — X Ax < Amax||x||2/ from which the first claim follows. Moreover, it is easy to check that the upper and the lower bound in the above inequalities are actually attained for x = U\ (the first column of U) and x — un (the last column of U), respectively, thus concluding the proof. □ 4.3.2 Minimax principle Theorem 4.3 is actually a particular case of a more general principle called the minimax principle for eigenvalues of symmetric matrices. Let us first state the following result. Theorem 4.4 (Poincare inequality) Let A e Sn and let V he any k-di- mensional subspace of R”, 1 < k < n. Then, there exist vectors x,y e V, with ||xH2 = 111/112 = 1/ such that x Ax < Afc(A), y Ay > An_/c^_i(A). Proof Let A = UAUT be the spectral factorization of A, and denote by Q = TZ(Uk) the subspace spanned by the columns of 14 — [uk • * • un]. Since Q has dimension n — k + 1 and V has dimension k, the intersection V fl Q must be nonempty (since otherwise the direct sum Q 0 V would have a dimension larger than n, the dimension of the embedding space). Take then a unit-norm vector x E V fl Q. SYMMETRIC MATRICES 109 Then x = Ufor some £ with ||£||2 = 1, hence xT Ax = ?;TUk UAUTUk£ = Y^Ai(A)^j < \k(A)'E$ = \k(A), which proves the first statement. The second one can be proved analogously, by applying the same reasoning to — A. □ From the Poincare inequality follows the minimax principle stated next, also known as variational characterization of the eigenvalues. Corollary 4.1 (Minimax principle) Let A E Sn and let V denote a subspace o/R”. Then, for k E {1,..., n} it holds that Afc(A) = max min xT Ax dim V=k xev/||x||2=l = min max xTAx. dim V=n—k+l xeV,\\x\\2=l Proof From the Poincare inequality, if V is any fc-dimensional subspace of R" then min:cGy ||x||2=1 xTAx < A^-(A). In particular, if we take V to be the span of {u\,..., then equality is achieved, which proves the first statement. The second statement follows by applying the first one to matrix — A. □ Example 4.7 (Matrix gain) Given a matrix A E Km,n, let us consider the linear function associated with A, which maps input vectors x E Rn to output vectors y E Rm: y = Ax. Given a vector norm, the matrix gain, or operator norm, is defined as the maximum value of the ratio ||Ax||/||x|| between the size (norm) of the output and that of the input, see Section 3.6. In particular, the gain with respect to the Euclidean norm is defined as 2 = max - 11 x 112 and it is often referred to as the spectral norm of A. The square of the input-output ratio in the Euclidean norm is ||Ax||2 _ xT(ATA)x In view of Theorem 4.3, we see that this quantity is upper and lower bounded by the maximum and the minimum eigenvalue of the symmetric matrix AT A E Sn, respectively: Amin(ATA) < ALÌ1M < \max(ATA) 110 OPTIMIZATION MODELS (notice incidentally that all eigenvalues of AT A, Aj(ATA), i = 1, ...,n are non-negative, since ATA is a positive-semidefinite matrix, as discussed next in Section 4.4). We also know from Theorem 4.3 that the upper and lower bounds are actually attained when x is equal to an eigenvector of AT A corresponding respectively to the maximum and the minimum eigenvalues of AT A. Therefore, IIA\\i — max ^ ||^ = Y^^max(^T^), (4-3) where this maximum gain is obtained for * along the direction of eigenvector U\ of AT A, and where this minimum gain is obtained for * along the direction of eigenvector un of AT A. One important consequence of the minimax property is the following result comparing the ordered eigenvalues of A, B with those of A + B. Corollary 4.2 Let A,Bg Sn. Then, for each k — \,...,n,we have Ajt(A) + Amin(B) < Afc(A + B) < Afc(A) + Amax(B). (4.4) Proof From Corollary 4.1 we have that ^A u(A + B) = min max (xT Ax + xTBx) dimV=n-k+l xGV/||x||2=1 > min max xT Ax + Amin(B) dim V—n—k+1 x€V,||x||2=l = Afc(A) + Amin(B), which proves the left-hand side inequality in (4.4); the right-hand side inequality follows from an analogous reasoning. □ A special case of Corollary 4.2 arises when a symmetric matrix A E Sn is perturbed by adding to it a rank-one matrix B = qqT. Since Amax(wT) = 11 ^7112' and Amin(^T) = 0, we immediately obtain from (4.4) that Ajt(A) < \k(A + qi7T) < \k(A) + \\q\\l, k = l,...,n. 4.4 Positive semidefinite matrices 4.4.1 Definition A symmetric matrix A C S" is said to be positive semidefinite (PSD) if the associated quadratic form is non-negative, i.e., xTAx > 0, Vx € K”. SYMMETRIC MATRICES 111 If, moreover, xTAx >0, VO ^ x € R”, then A is said to be positive definite. To denote a symmetric positive semidefinite (resp. positive definite) matrix, we use the notation A >x 0 (resp. A y 0). We say that A is negative semidefinite, written A ■< 0, if — A y 0, and likewise A is negative definite, written A -< 0, if —A y 0. It can immediately be seen that a positive semidefinite matrix is actually positive definite if and only if it is invertible. Example 4.8 (Sample covariance matrix) With the notation used in Example 4.2, we observe that any sample covariance matrix L is by definition positive semidefinite. This is due to the fact that for any u £ Rn, the quantity uTYu is nothing but the sample variance of the scalar products uTx(l\ i = I,, m, hence it is a non-negative number. 5 The set of real positive semidefinite matrices in M.n'n is denoted by Sn+ = {AeSn : Ah 0}. Similarly, S++ denotes the set of positive definite matrices in Kn'n. Remark 4.1 Principal submatrices of a PSD matrix. A simple observation is the following one: let X = {i\,..., im} be a subset of the indices {1,and denote by Ax a submatrix obtained from A £ Rn'n by taking the rows and columns with indices in X (this is called a principal submatrix of A of dimension m x m). Then, A y 0 =» Axh 0, VX (4.5) and, similarly, A >- 0 implies that Aj y 0. This fact is easily seen from the definition of PSD matrices, since forming the product xf AjXj is the same as forming the product xT Ax with a vector x whose entries x* are nonzero only if i £ X. One consequence of this fact, for instance, is that A y 0 implies that the diagonal elements an > 0, i = 1,..., n, and, likewise, Ay 0 implies that an > 0, i = 1,..., n. 5 As seen in Section 4.4.4, the converse is also true: any PSD matrix can be written as a sample covariance of certain data points. 4.4.2 Eigenvalues of PSD matrices We here maintain the notation that, for A £ Sn, the eigenvalues are arranged in increasing order as Ai(A) > • • • > A «(A). The following facts hold: A 0 Ai(A) >0, i = 1,...,n, A y 0 4^ Ai(A) >0, i = 1,..., n. 112 OPTIMIZATION MODELS To prove the first of these statements (the second is proved analogously), let A = UAUT be the spectral factorization of A, then xT Ax — xTUAUTx = zAz — A i(A)zf, where z = Ux. Now, xTAi>0VxGr zAz > 0 Vz e Rn, and the latter condition is clearly equivalent to A;(A) > 0, i = 1 The eigenvalues of a matrix A G Sm cannot decrease if a PSD matrix B is added to A. Indeed, if B y 0, then Amin(B) y 0, and it follows immediately from (4.4) that By 0 =>* Ajt(A + B) > Ajt(A), k — . ,n. (4.6) 4.4.3 Congruence transformations The following theorem characterizes the definiteness of a matrix when it is pre- and post-multiplied by another matrix. Theorem 4.5 Let A eSn, B e Kn'm, and consider the product C = BTAB e sm. (4.7) 1. If A y 0, f/zen C >: 0; 2. if Ay 0, ffren C y 0 if and only if rank B = m; 3. if B is square and invertible, then A y 0 (resp. A y 0) if and only if C y 0 (resp. C y 0). Proof For fact 1, we have that, for all x E Rm, xTCx = xTBTABx = zTAz > 0, with z = Bx, hence C y 0 as desired. For point 2, observe that, since Ay 0, then C y 0 if and only if Bx ^ 0 for all x ^ 0, i.e., if and only if dim J\f(B) = 0. By the fundamental theorem of linear algebra, dimJ\f(B) + rank(B) = m, from which the statement follows. For point 3, the direct implication follows from point 2. Conversely, let C y 0, and suppose for purpose of contradiction that A 0. Then there would exist z ^ 0 such that zTAz < 0. Since B is invertible, let x = B-1z, then xTCx — xTBTABx — zTAz < 0, which is impossible, since С у 0. An analogous argument shows that С >: 0 implies А У 0, thus concluding the proof. □ When В is square and invertible, then (4.7) defines a so-called congruence transformation, and A, С are said to be congruent. The inertia of a symmetric matrix A G SH,In(A) = (npos(A),nneg(A),nzero(A)), is defined as a triple of non-negative integers representing respectively the number of positive, negative, and zero eigenvalues of A (counting multiplicities). It can be proved that two matrices A G Sn, С G Sn have the same inertia if and only if they are congruent. The following corollary follows from Theorem 4.5, by simply observing that the identity matrix is positive definite. Corollary 4.3 For any matrix A G IRm,H it holds that: 1. АтА У 0, and AAT У 0; 2. ATA у 0 if and only if A is full-column rank, i.e., rank A — n; 3. AAT у 0 if and only if A is full-row rank, i.e., rank A = m. An interesting fact is that, under a certain positivity assumption on their linear combination, two symmetric matrices can be simultaneously "diagonalized" via an appropriate congruence transformation, as stated in the next theorem. Theorem 4.6 (Joint diagonalization by congruence) Let A\, A2 G Sn be such that A — oc\A\ ~Ь (X2A2 У 0 for some scalars Then, there exists a nonsingular matrix В G W1'4 such that both BTAiB and Вт A2B are diagonal. Proof Assume without loss of generality that 0C2 > 0. Since A у 0, A is congruent to the identity matrix, that is there exists some nonsingular Bi such that BjAB\ = In. Since BjA\B\ is symmetric, there exists an orthogonal matrix W such that W^^AiBiW — D, where D is diagonal. Taking В = Bi W we have that BT AB = WTBj ABi W = WT In W = I ВтАгВ = D, therefore, since A2 = (A — &\A\)I&2, matrix ВтA2B is also diagonal, which concludes the proof. □ The following fact is also readily established from Theorem 4.6. Corollary 4.4 Let А У 0 and С G Sn. Then there exists a nonsingular matrix В such that BTCB is diagonal and BT AB = In. 114 OPTIMIZATION MODELS 4.4.4 Matrix square-root and Cholesky decomposition Let A E S". Then A ^ О ЗВ У 0 : A — В2, (4.8) А у 0 3 В у 0 : А — В2. (4*9) Indeed, any А ^ 0 admits the spectral factorization A = UAUT, with U orthogonal and Л = diag (Ai,..., A,), Af > 0, i = 1 ,...,n. Defining A1/2 = diag (л/Ai/ • • • / л/Ai) and ® = ИЛ1/2ИТ, we have B2 = UA1/2UTUA1/2UT = UAUT = A. Conversely, if for some symmetric В it holds that A — BTB = B2, then A 0 follows from Corollary 4.3, and this proves (4.8). The proof of (4.9) is analogous. Further, it can be proved (not done here) that the matrix В in (4.8), (4.9) is unique. This matrix is called the matrix square-root of A: В — A1/2. Repeating the previous reasoning with В — A1/2LfT, we can also conclude that АУ0 & 3 B: A = BTB, А у 0 3B nonsingular : A = BTB. (4*10) In particular, Eq. (4.10) states that A is positive definite if and only if it is congruent to the identity. Notice further that every square matrix В has a QR factorization: В = QR, where Q is orthogonal and R is an upper-triangular matrix with the same rank as В (see Section 7.3). Then, for any А У 0 we have that A = BTB = RTQTQR = RT R, that is, any PSD matrix can be factorized as RTR where R is upper triangular. Further, R can be chosen to have non-negative diagonal entries. If А У 0, then these diagonal entries are positive. In this case, the factorization is unique and it is called the Cholesky decomposition of A. Using the matrix square-root, we can prove the following result, relating the eigenvalues of В and AB, for В symmetric and A у 0. Corollary 4.5 Let А, В E S”, with А У 0. Then, the matrix AB is diag- onalizable, has purely real eigenvalues, and it has the same inertia as B. Proof Let A1/2 у 0 be the matrix square-root of A. Then, a-1/2aba1/2 = a1/2ba1/2. The matrix A-1//2ABA1//2 on the left in this equation is similar to AB, hence it has the same eigenvalues as AB. Since the matrix on the right is symmetric, it is diagonalizable and its eigenvalues are real; therefore, also AB is diagonalizable and has purely real eigenvalues. Further, the matrix on the right is congruent to B, hence it has the same inertia as B. Thus, AB has the same inertia as B. □ 4.4.5 Positive-definite matrices and ellipsoids Positive-definite matrices are intimately related to geometrical objects called ellipsoids, which are further discussed in Section 9.2.2. A fulldimensional, bounded ellipsoid with center in the origin can indeed be defined as the set £ = {x € R” : xtP-1x < 1}, where P y 0. The eigenvalues and eigenvectors of P define the orientation and shape of the ellipsoid: the eigenvectors u\ of P define the directions of the semi-axes of the ellipsoid, while their lengths are given by y/Xi, where A* > 0, i = 1,..., n, are the eigenvalues of P, see Figure 4.2. Since P y 0 is equivalent to P_1 y 0, by the Cholesky decomposition (4.10) there exists a nonsingular A such that P_1 = AT A. Hence, the previous definition of ellipsoid £, which involved the product xTP~1x — xTATAx = ||Ax||2, is also equivalent to the following one: £ = {xe1R": ||Ax||2<1}. 4.4.6 The PSD cone and partial order The set of positive semidefinite matrices S” is a convex cone, as defined later in Section 8.1. First, it is a convex set, since it satisfies the defining property of convex sets (see Section 8.1), that is for any two matrices A\, A2 £ S” and any 9 £ [0,1], it holds that xT(6Ai + (1 - 9)A2)x = 6xTAix + (1 - 6)xtA2x > 0, Vx, hence 9A\ + (1 — 9)A2 £ S” . Moreover, for any A y 0 and any oc > 0, we have that ocA y 0, which says that S” is a cone. The relation “y" defines a partial order on the cone of PSD matrices. That is, we say that A y B if A — B y0 and, similarly, AyB if A — By 0. This is a partial order, since not every two symmetric matrices may be put in a ^ or >: relation. Take for example ’ 2 1 " , B = ' 1 1 " , c = ’ 1 1 ’ Figure 4.2 A two-dimensional ellipsoid. Then, one may check that A y B, B ■< C, but neither A ■< C nor A y C. Theorem 4.7 Let A y 0 and B y 0, and denote by p(-) the spectral radius of a matrix (that is, the maximum modulus of the eigenvalues of a matrix). Then AhB p(BA-1) < 1, (4.11) Ay B <=> p(BA~r) < 1. (4.12) Proof We prove (4.11), the reasoning for proving (4.12) being equivalent. By Corollary 4.4, there exists a nonsingular matrix M such that A = MIMT and B = MDMT, with D — diag (d\,... ,dn), dj > 0, f = 1,..., n. Then, A - B h 0 if and only if M(I - D)MT y 0 which, by Theorem 4.5, is satisfied if and only if I — D y 0, i.e., for di < 1, i = 1 , ...,n. Since BA-1 = MDMTM~TM~1 = MDM-1, we see that BA-1 is similar to matrix D, hence the eigenvalues of BA-1 are precisely d\, i — 1,..., n. Since the d;s are non-negative, we conclude that A-By 0 4* di < 1 Vi p{BA~l) < 1. Observe that for any two square matrices X, Y the products XY and YX have the same eigenvalues, hence p(XY) = p(YX). Therefore for A >-^0, B y 0 by Theorem 4.7 we have that A±B p(BA~l) = p{A~lB) < 1 4* B~l y A-1. More generally, the relation A>z B induces a corresponding relation among the ordered eigenvalues of A and B, and likewise on monotone functions of the eigenvalues, such as the determinant and the trace. Indeed, for any A, B £ Sn, it follows directly from Eq. (4.6) that A — B >z 0 implies that \k(A) = \k(B + (A-B)) > Afc(B), * = 1,...,«, (4.13) (notice, however, that the converse is not true, i.e. A*-(A) > Afc(B), / = 1,..., n, does not imply A — B y 0). Therefore, by (4.13), A — B >: 0 also implies that det A = IT A^)> n Ajt(B) = det B, trace A = Ak(A) > ]T] A*(B) = trace B. The following result is related to an important matrix equation arising in systems and control theory, called the Lyapunov equation. Theorem 4.8 (Symmetric sum) Let A y 0 and B G S”, and consider the symmetric sum S = AB + BA. Then, S y 0 (resp., S y 0) implies that B y 0 (resp., B y 0). Proof Since B G Sn it admits the spectral factorization B = UAUT where U is orthogonal and A is diagonal. Then, by Theorem 4.5, S y 0 if and only if UTSU y 0, that is if and only if (UTAU)A + A(UTAU) y 0. This implies that the diagonal elements [UTSU]a > 0, i = 1,... ,n, that is 2otjAj(B) >0, i — \,... ,n, where ol\ = ujAu{ > 0, being Uj the z-th column of U. The latter condition clearly implies that A/(B) > 0, i = 1,..., n, i.e., that B y 0. An analogous reasoning similarly shows that S y 0 implies B y 0. □ Example 4.9 Taking the matrix square-root preserves the PSD ordering. In particular, if A >- 0, B y 0, then Ay B => A1/2 > B1/2. (4.14) To see this fact consider the identity 2(A - B) = (A1/2 + B1/2)(A1/2 - B1/2) + (A1/2 - B1/2)(A1/2 + B1/2). Since A >- 0, B >: 0, then A1^2 >- 0, B1//2 >: 0, hence A1^2 + B1//2 >- 0. Thus, by applying Theorem 4.8 to the above sum, we have that A — B y 0 implies that A1^2 - B1/2 >- 0, which is the claim. Notice that a converse of (4.14) does not hold in general. Take for example " 2 1 " / B = ' 1.2 Then, A y 0, B y 0, A y B, but A2 / B2. 4.4.7 Sc/zur complements Let A 6 S", B e Sm, and consider the block-diagonal matrix M = A °n'm [ 0m,n B Then, it is easy to verify that My 0 (resp., My 0) A y 0,B y 0 (resp., A y 0, B y 0). We now state an important result on positive definiteness of block matrices that are not necessarily block diagonal. Theorem 4.9 (Schur complements) Let A € S", B € Sm, X € R"'' with B >- 0. Consider the symmetric block matrix M = and define the so-called Schur complement matrix of A in M S = A — XB_1XT. M y 0 (resp., M y 0) 4=> Proof Define the block matrix C = S y 0 (resp., S y 0). This matrix is square lower triangular and has all nonzero diagonal entries, hence it is nonsingular. Consider then the congruence transformation on M From Theorem 4.5 we have that My 0 (resp., M >- 0) if and only if CTMC y 0 (resp., CTMC >- 0). But CTMC is block diagonal and B y 0 by assumption, therefore we conclude that M y 0 (resp., M >- 0) if and only if S y 0 (resp., S y 0). □ 4.5 Exercises Exercise 4.1 (Eigenvectors of a symmetric 2x2 matrix) Let p, q £ R” be two linearly independent vectors, with unit norm (||p||2 = IMI2 — 1). Define the symmetric matrix A = pqT + qpT. In your derivations, it may be useful to use the notation c = pTq. 1. Show that p + q and p — q are eigenvectors of A, and determine the corresponding eigenvalues. 2. Determine the nullspace and rank of A. 3. Find an eigenvalue decomposition of A, in terms of p, q. Hint: use the previous two parts. 4. What is the answer to the previous part if p, q are not normalized? Exercise 4.2 (Quadratic constraints) For each of the following cases, determine the shape of the region generated by the quadratic constraint xT Ax <1. SYMMETRIC MATRICES 119 1. A = 2. A = 3. A = 1 -1 -1 1' -1 0 0 -1 Hint: use the eigenvalue decomposition of A, and discuss depending on the sign of the eigenvalues. Exercise 4.3 (Drawing an ellipsoid) 1. How would you efficiently draw an ellipsoid in ]R2, if the ellipsoid is described by a quadratic inequality of the form £ — ^x^Ax -f-2x -f-c ^ 0 j , where A is 2 x 2 and symmetric, positive definite, b £ IR2, and c £ IR? Describe your method as precisely as possible. 2. Draw the ellipsoid £ — {4x2 + 2*2 + 3x1*2 + 4*i + 5x2 + 3 < 1j. Exercise 4.4 (Interpretation of covariance matrix) As in Example 4.2, we are given m points x^,..., x^m) in Kn, and denote by E the sample covariance matrix: 1 m X- x)(x^ - X) where x G !R" is the sample average of the points: 1 m * = if; *(o. m i=1 We assume that the average and variance of the data projected along a given direction do not change with the direction. In this exercise we will show that the sample covariance matrix is then proportional to the identity. We formalize this as follows. To a given normalized direction w G ll^lb — 1/ we associate the line with direction w passing through the origin, C(w) — {tw : t £ IR}. We then consider the projection of the points i = 1,on the line C{w), and look at the 120 OPTIMIZATION MODELS associated coordinates of the points on the line. These projected values are given by tj(w) = argmin ||tw — x^\\2, / = 1,.. .,m. We assume that for any w, the sample average t(w) of the projected values i — 1,..., m, and their sample variance v2{w), are both constant, independent of the direction w. Denote by t and o2 the (constant) sample average and variance. Justify your answer to the following questions as carefully as you can. 1. Show that tj(w) = wTx^l\ i = 1,... ,ra. 2. Show that the sample average x of the data points is zero. 3. Show that the sample covariance matrix E of the data points is of the form <j2In. Hint: the largest eigenvalue Amax of the matrix Z can be written as: Amax — maxw {wT'Lw : wTw = 1}, and a similar expression holds for the smallest eigenvalue. Exercise 4.3 (Connected graphs and the Laplacian) We are given a graph as a set of vertices in V — (1,..., ?z}, with an edge joining any pair of vertices in a set E C V x V. We assume that the graph is undirected (without arrows), meaning that (/,;) £ E implies (;,/) £ E. As in Section 4.1, we define the Laplacian matrix by Lij - -1 if (/,;) e E, d(i) if i = j, 0 otherwise. Here, d(i) is the number of edges adjacent to vertex i. For example, d{4) = 3 and d(6) = 1 for the graph in Figure 4.3. 1. Form the Laplacian for the graph shown in Figure 4.3. 2. Turning to a generic graph, show that the Laplacian L is symmetric. 3. Show that L is positive-semidefinite, proving the following identity, valid for any w £ ]Rn: uTLu = q(u) = 1 Y1 (M> - ui)2- (i,j)€E Hint: find the values q(k), q[e^ ± e{), for two unit vectors e^, e\ such that (k,l) £ E. 4. Show that 0 is always an eigenvalue of L, and exhibit an eigenvec¬ tor. Hint: consider a matrix square-root6.of L. Figure 4.3 Example of an undirected graph. 6 See Section 4.4.4. 5. The graph is said to be connected if there is a path joining any pair of vertices. Show that if the graph is connected, then the zero eigenvalue is simple, that is, the dimension of the nullspace of L is 1. Hint: prove that if uTLu = 0, then U[ = Uj for every pair (/,/) € E. Exercise 4.6 (Component-wise product and PSD matrices) Let A, B G S” be two symmetric matrices. Define the component-wise product of A, B, by a matrix C G S” with elements Qy = A*yB;y, 1 < i,j < n. Show that C is positive semidefinite, provided both A, B are. Hint: prove the result when A is rank-one, and extend to the general case via the eigenvalue decomposition of A. Exercise 4.7 (A bound on the eigenvalues of a product) Let A,6 G Sn be such that A >- 0, B >~ 0. 1. Show that all eigenvalues of BA are real and positive (despite the fact that BA is not symmetric, in general). 2. Let A >- 0, and let B_1 = diag (\\aj ||i,..., ||flj||i), where aj, i = 1,..., n, are the rows of A. Prove that 0 < Ai(BA) < 1, Mi = 1 ,...,n. 3. With all terms defined as in the previous point, prove that p(I-ocBA) <1, Moc G (0,2). Exercise 4.8 (Hadamard's inequality) Let A G S” be positive semidefinite. Prove that det A < Y\aii' Hint: Distinguish the cases det A = 0 and det A 7^ 0. In the latter case, consider the normalized matrix A = DAD, where D = diag and use the geometric-arithmetic mean in¬ equality (see Example 8.9). Exercise 4.9 (A lower bound on the rank) Let A G S” be a symmetric, positive semidefinite matrix. 1. Show that the trace, trace A, and the Frobenius norm, ||A||f, depend only on its eigenvalues, and express both in terms of the vector of eigenvalues. 2. Show that (trace A)2 < rank(A)||A||£. 122 OPTIMIZATION MODELS 3. Identify classes of matrices for which the corresponding lower bound on the rank is attained. Exercise 4.10 (A result related to Gaussian distributions) Let £ £ S” + be a symmetric, positive definite matrix. Show that You may assume known that the result holds true when n = 1. The above shows that the function p : W1 —> R with (non-negative) values integrates to one over the whole space. In fact, it is the density function of a probability distribution called the multivariate Gaussian (or normal) distribution, with zero mean and covariance matrix £. Hint: you may use the fact that for any integrable function /, and invertible n x n matrix P, we have [ f(x)dx = | detP| • [ f(Pz)dz. JxeRn JzeRn Singular value decomposition This chapter is devoted to the singular value decomposition (SVD) of general rectangular matrices, and its applications. The singular value decomposition provides a full insight into the structure of linear maps, and gives an effective computational tool for solving a wealth of linear algebra problems. In optimization, SVD applications include some problems that are convex, in the sense that will be discussed in Chapter 8, as well as some non-convex problems that might seem very hard to solve at first glance, such as those involving rank minimization (Section 5.3.1), or optimization over rotation matrices (Section 5.3.3). 5.1 Singular value decomposition 5.1.1 Preliminaries The singular value decomposition (SVD) of a matrix provides a three- term factorization which is similar to the spectral factorization, but holds for any, possibly non-symmetric and rectangular, matrix A £ Rm'n. The SVD allows us to fully describe the linear map associated with A via the matrix-vector product y = Ax as a three-step process: first the input vector x undergoes an orthogonal transformation (rotation or reflection); then a non-negative scaling is performed on the entries of the rotated input vector, and possibly dimensions are added to or removed from this vector in order to match the dimension of the output space. Finally, another orthogonal transformation is performed in the output space. In formulas, we shall see that any matrix A £ Rm,n can be factored as A = UZVT, 124 OPTIMIZATION MODELS where V E RW/W and U E Rm'm are orthogonal matrices (describing the mentioned rotations/reflections in input and output space, respectively), and L = 0 m—r,r £ = diag (cq,..., crr) >- 0, (5.1) where r is the rank of A, and the scalars > 0, i = 1,..., r, represent the scaling factors on the rotated input vector, see Figure 5.1. Figure 5.1 Input-output map y — Ax, with A = 17LVT. Most of the relevant characteristics of A can be derived from its SVD. For instance, as we shall see next, if we know the SVD of a matrix A, then we also know the rank of A, the spectral norm (maximum gain) of A, and the condition number of A. Further, we can readily obtain orthonormal bases for the range and for the nullspace of A) we can solve systems of linear equations involving A as the coefficient matrix (see Section 6.4.2), and analyze the effect of errors in those equations; we can determine least-squares solutions to overdetermined systems of linear equations, or minimum-norm solutions to underdetermined systems. The SVD is also of fundamental importance in many applications. For example, the SVD arises in several nonlinear (and non-convex) optimization problems, it is a key tool for data compression, and it is employed in statistics for factor analysis and principal component analysis (PCA), where it can be used to reduce the dimensionality of high-dimensional data sets, by "explaining" the variance in the data in terms of a few factors, as seen in Section 5.3.2 and in Chapter 13. 5.1.2 The SVD theorem We here state the main SVD theorem and then provide a schematic proof for it. Theorem 5.1 (SVD decomposition) Any matrix A E lRm'” can be factored as A = UtVT (5.2) where V E №.n'n and U E Rm'm are orthogonal matrices (i.e., UTU = Im, VTV = In), and £ E Km'n is a matrix having the first r = rank A diagonal entries (cq,... ,vr) positive and decreasing in magnitude, and all other entries zero (see Eq. (5.1)). Proof Consider the matrix AT A £ S”. This matrix is symmetric and positive semidefinite, and it admits the spectral factorization AT A = VAnVT (3.3) where V £ RW/W is orthogonal (i.e., VTV — In) and An is diagonal, containing the eigenvalues A; = A;(AT A) > 0, / = 1,..., n, on the diagonal, arranged in decreasing order. Since r = rank A = rank ATA, the first r of these eigenvalues are strictly positive. Notice that AAT and AT A have the same nonzero eigenvalues, hence the same rank r. We thus define <7i = \jAj(ATA) = ]/MAAT) > 0,i = 1 r, Now, denote by V\,... ,vY the first r columns of V, i.e., the eigenvectors of AT A associated with A\,..., Ar. By definition, AT Av{ = AjVj, i — \,...,r, hence, multiplying both sides by A, (AAT)Avj = AjAvi, i = 1,.. .,r, which means that Avj, i = 1,..., r, are eigenvectors of AAT. These eigenvectors are mutually orthogonal, since vj AT Avj = AjvJ Vj = Therefore, the normalized vectors Avi Av[ A i if i = j 0 otherwise. 1 ' forms an orthonormal set of r eigenvectors of AAT associated with the nonzero eigenvalues Ai,..., Ar. Then, for i,j = 1,..., r, uj Av; = —vj AT Avj = — ift1" V; 1 J CTj 1 J CTj 1 J f CTj if i — j 1 0 otherwise. Rewritten in matrix format, the previous relation yields = diag (cr\,... ,(7>) = E. (3.4) 126 OPTIMIZATION MODELS This is already the SVD in its "compact" form. We next derive the "fulT'-version SVD. Notice that, by definition, AT Avj = 0, for / = r + 1,..., n, and this implies that Avi = 0, for i = r + 1,..., n. To verify this latter statement, suppose by contradiction that AT Avi — 0 and Avi ^ 0. Then Avj G J\f(AT) = Tl(A)L, which is impossible, since clearly Av[ G 1Z(A). Then, we can find orthonormal vectors Hr+i,..., um such that u\,..., uY, ur+1,..., um is an orthonormal basis for Rm, and ujAvj = 0, for i = 1,..., m; ; = r + 1,..., n. Therefore, completing (5.4), we obtain E 0 r.n-r . . 1?! • • • Vn 1 . Defining the orthogonal matrix U = [u\ • • • um\, the latter expression is rewritten as UTAV = £ which, multiplied on the left by U and on the right by VT finally yields the full SVD factorization A = UtVT. The following corollary can be easily derived from Theorem 5.1. Corollary 5.1 (Compact-form SVD) Any matrix A G Km'n can be expressed as A = Y^WivJ = UrLVrT where r = rank A, Ur = [u\ • • • ur\ is such that UjUr = lr, Vr — [v\ • • • vr] is such that VrT Vr = Ir, and v\ > (J2 > • • • > crr > 0. The positive numbers crz- are caZZed fte singular values of A, vectors U{ are called the left singular vectors of A, and V[ the right singular vectors. These quantities satisfy Avi = (TjUi, uj A = <7{Vi, i = 1,..., r. Moreover, of = A;(AAT) = A/(ATA), z = 1,...,r, and w*, z;/ are the eigenvectors of AT A and of AAT, respectively. 5.2 Matrix properties via SVD In this section we review several properties of a matrix A £ Km'n that can be derived directly from its SVD in full form A = UZVT, or compact form A = UrZV?~. 5.2.1 Rank, nullspace, and range The rank r of A is the cardinality of the nonzero singular values, that is the number of nonzero entries on the diagonal of E. Also, since in practice the diagonal elements in E may be very small but not exactly zero (e.g., due to numerical errors), the SVD makes it possible to define a more reliable numerical rank, defined as the largest k such that (7*. > ecri, for a given tolerance e > 0. Since r — rank A, by the fundamental theorem of linear algebra the dimension of the nullspace of A is dimN{A) — n — r. An orthonormal basis spanning J\f (A) is given by the last n — r columns of V, i.e. J\f(A) = TZ(Vnr), Vnr = [tV-fi • • • Vn]- Indeed, vr+i,... ,vn form an orthonormal set of vectors (they are columns of an orthogonal matrix). Moreover, for any vector £ = Vnrz in the range of Vnr, we have = UrZVrT Z = UrZVrT Vnrz = 0, since Vr Vnr = 0 due to the fact that V = [Vr Vnr] is orthogonal. This shows that the columns of Vnr provide an orthonormal basis for the nullspace of A. Similarly, an orthonormal basis spanning the range of A is given by the first r columns of U, i.e. n{A) = n(Ur), Ur = [mi ••• ur}. To see this fact notice first that, since XT,'' € W,n, r < n, is full row rank, then as x spans the whole IR" space, z = X V7,; x spans the whole Rr space, whence U{A) = {y.y = Ax,xeWt!l} = {y:y = Ur'LVjx,x£WC1} = {y : y = Urz, z e Rr} = 7l(Ur). 128 OPTIMIZATION MODELS Example 5.1 Consider the m x n matrix (m = 4, n = 5) A = A singular value decomposition of this matrix is given by A — UE1/T, with U = 0 0 0 -1 10 0 0 E = 0 0 0 >/08 ->/08 0 0 0 Vo2 0 0 >/5 0 0 Notice that E has a 3 x 3 nonzero diagonal block E = diag (Vi ,£7-2,03) (j\ — 4, (72 = 3, (73 = >/5. The rank of A (which is the number of nonzero elements on the diagonal of E) is thus r = 3 < min(m, w). We can also check that VTV = WT = I5, and UUT = (iTlI = I4. An orthonormal basis for the range of A is given by the first three columns of U: U{A) - n Similarly, an orthonormal basis for the nullspace of A is given by the last n — r — 2 columns of V: " 0 1 " AT (A) = U " 0 ->/08 " _ 0 5.2.2 Matrix norms The squared Frobenius matrix norm of a matrix A £ Rm'w can be defined as II^Hl = trace ATA = £A;(AtA) = 1 = 1 ' i=1 where Cj are the singular values of A. Hence the squared Frobenius norm is nothing but the sum of the squares of the singular values. The squared spectral matrix norm ||A||2 is equal to the maximum eigenvalue of AT A, see Eq. (4.3), therefore imi! = °i/ that is, the spectral norm of A coincides with the maximum singular value of A. Also, the so-called nuclear norm1 of a matrix A is defined in terms of its singular values: || AH* = ]TV/, r = rank A. appears in several problems related to low-rank matrix completion or rank minimization problems2, due to the fact that || A||* is the largest possible convex lower bound on rank A, over the set of matrices with spectral norm bounded by unity. 5.2.2.1 Condition number. The condition number of an invertible matrix A G RM,M is defined as the ratio between the largest and the smallest singular value: k(A) = ^ = ||A||2.||A-1||2. (5.5) This number provides a quantitative measure of how close A is to being singular (the larger k(A) is, the more close to singular A is). The condition number also provides a measure of the sensitivity of the solution of a system of linear equations to changes in the equation coefficients, see, e.g., Section 6.5. 5.2.3 Matrix pseudoinverse Given any matrix A G Rm'M, let r — rank A, and let A = UlLVT be its SVD, where Z has the structure (5.1), with E being the diagonal matrix containing the positive singular values. The so-called Moore- Penrose pseudoinverse (or generalized inverse) of A is defined as A+ = VlfUT e Rn'm where E+ = ^ * Or,m—r 0 n—r,r Qn—r,m—r \0i Vr 1 The norm is also called trace norm, or Ky Fan norm, after Ky Fan (1914- 2010). 2 See Section 11.4.1.4. 130 OPTIMIZATION MODELS Due to the zero blocks in £, Eq. (5.6) can be also written in a compact form that involves only the first r columns of V and U: Af = VrI'rluj. (5.7) Notice that, according to these definitions fr 0 r,m—r , E+E = !r 0 r,n — r Qm—r,r Om-r,m-r 0 n—r,r Qn-r,n-r whereby the following properties hold for the pseudoinverse3: AA+ = A+A = AA+A = AfAA+ = The following three special cases are particularly interesting. 1. If A is square and nonsingular, then A+ = A-1. 2. If A G Wn,n is full column rank, that is r = n < m, then A+A = VrVj = 1/VT = ln, that is, A+ is in this case a left inverse of A (i.e., a matrix that, when multiplied by A on the left, yields the identity: A+A = I„). Notice that in this case AT A is invertible and, from (5.3), we have that (ATA)“1AT = (VZ~2VT)VlTUT = VI~2IUj = VI,”1 Uj = A+. Any possible left inverse of A can be expressed as AH = Af + QT, (5.9) where Q is some matrix such that AT Q = 0 (i.e., the columns of Q belong to the nullspace of AT). To summarize, in the full-column rank case the pseudoinverse is a left inverse of A, and it has an explicit expression in terms of A: A G Rm'M, r = rank A = n < m => A+A = In, A+ = (ATA)-1AT. 3. If A G lRm,n is full row rank, that is r — m < n, then AA+ = UrUj = UUT - Im, that is, A+ is in this case a right inverse of A (i.e., a matrix that, when multiplied by A on the right, yields.the identity: AA+ = Im). 3 Notice that any matrix Af satisfying AA'A = A is a legitimate pseudoinverse of A. The Moore-Penrose pseudoinverse is just one of the possibly many pseudoinverses of A. In this book, however, the symbol Af typically denotes the Moore-Penrose pseudoinverse, unless specified otherwise. Notice that in this case AAT is invertible and, from (7.3), we have that AT(AAT)-1 = VtTUT{Ul}UTyl = VLT UT WL~2Ut ■ = VrE_1liT = Af. Any possible right inverse of A can be expressed as Ari = A+ + Q, where Q is some matrix such that AQ = 0 (i.e., the columns of Q belong to the nullspace of A). To summarize, in the full-row rank case the pseudoinverse is a right inverse of A, and it has an explicit expression in terms of A: A £ Rm,n, r — rank A = m < n ==> AA+ = Im, A+ — AT(AAT)“1. 5.2.4 Orthogonal projectors We have seen that any matrix A £ lR.m,n defines a linear map у = Ax between the input space Rn and the output space Rw. Moreover, by the fundamental theorem of linear algebra, the input and output spaces are decomposed into orthogonal components as follows: R" = Af{A) BAfiA)1- = Af(A) ®U(AJ) Rm = 'R(A)®'R(A)-L = П(А)®М(АТ). As previously discussed, the SVD A = WLVT provides orthonormal bases for all four of these subspaces: with the usual notation U = [Ur Unr], V = [Vr Vm], where Ur/ Vr contain the first r = rank A columns of U and V, respectively, we have that M(A) - n(Vnr), Af(A)1 = K(AT) = K(Vr), (5.10) 11(A) = n(Ur), KiA)2- = ЛГ(АТ) = n(U„r). (5.11) We next discuss how to compute the projections of a vector x £ Rn onto AT (A), JViA)1-, and the projections of a vector у £ Rw onto n(A),n(A)i-. First, we recall that given a vector x £ R” and d linearly independent vectors bi,...,bd £ R", the orthogonal projection of x onto the subspace span by {bi,..., Ьд} is the vector x* = Boc, 132 OPTIMIZATION MODELS where B = [b\ • • • i^], and oc G R^ should solve the Gram systems of linear equations BTBoc = BTx, see Section 2.3 and Example 4.5. Notice in particular that if the basis vectors in B are actually orthonormal, then it holds that BTB = I hence the Gram system has an immediate solution oc — BTx, and the projection is simply computed as x* = BBTx. Returning to our case of interest, let x G R” be given, and suppose we want to compute the projection of x onto J\f(A). Since an orthonormal basis for J\f(A) is given by the columns of Vnr, by the previous reasoning we have immediately that Maf(A) = (VnrVjr)x, where we used the notation [x]$ to denote the projection of a vector onto the subspace S. Now, observe that In = VVT = vrvj + vnrvjr, hence, using (5.8), PaT(a) = vnrVjr = In~ VrVj = I„- AfA. Matrix Ptf(A) is calle(i an orthogonal projector onto the subspace J\f(A). In the special case when A is full row rank, then A+ = AT(AAT)~1, and Pj\f(A) = In ~ AT(AAT)_1A, if A is full row rank. Via an analogous reasoning we obtain that the projection of x G Rn onto A/*(A)x = TZ(AT) is given by [*W(-A)-L = )x — Ptf(A)±X, P^fiA)1 — and specifically P^A)1- = At(AAt)-1A, if A is full row rank. Similarly, for y G Rm, we have that \y\n(A) — (UrUj)y — Pn(A)V' Pyz(A) ~ AA+, P71(A) ~ A(AtA)_1At, if A is full column rank, and finally [y\'Jl(A)L = (UnrUnr)y ~ ^n(A)1- == I™ ~~ ' (5-12) ~ — A(ATA)_1AT, if A is full column rank. 5.2.4.i Projections on subspaces. We consider the problem of computing the projection of a given vector y G Rm onto a subspace span of a given set of vectors S = span(a^\.. .,a^) C Rm, as already discussed in Section 2.3 and in Section 5.2. Clearly, S coincides with the range space of the matrix having these vectors as columns, A = [<aW • • • flM], hence the problem we face is minJ|z-y||2- (5-13) If r = dimS = rank(A), then the compact SVD of A is A = and the unique minimum norm solution of (5.13) is given by the projection theorem as 2* = Ms - Pn(A)y = AA'y = (UrUj)y, where P^(a) is the orthogonal projector onto 11(A). 4 Notice that the projection z* is a linear function of y, and that the matrix defining this projection is provided by the Ur factor of the SVD of A. Similarly, suppose we want to find the projection of y onto S^, the orthogonal complement of S. Since S1- = J\f(AT), the problem is written as mjn ||z-y||2 and the solution is given in (5.12) as z* = Ms-L =(hn- AA+)y. 5.3 SVD and optimization In this section, we illustrate how certain optimization problems can be conveniently solved via SVD. Further applications of SVD in optimization are given in Chapter 6. 5.3.1 Low-rank matrix approximations Let A G Rm'M be a given matrix, with rank(A) — r > 0. We here consider the problem of approximating A with a matrix of lower rank. In particular, we consider the following rank-constrained approximation problem s.t.: rank(A*.) = k, where 1 < k < r is given. Let a = ulvT = 4 Since z G 7Z(A) means that z = Ax for some x e R”, the problem (5.13) can also be rewritten equivalently as min \\Ax — y||2/ which is know as the least-squares problem (thoroughly covered in Chapter 6). Since we found that z* = AA^y, it follows that the optimal solution of the LS problem above is x* = A^y. 134 OPTIMIZATION MODELS be an SVD of A. We next show that an optimal solution of problem (5.14) can be simply obtained by truncating the previous summation to the fc-th term, that is Ak = YLWiVi Remark 5.1 Matrix of movie scores. Assume that A contains scores of user ratings of movies,^ where Ajj contains the score of the z-th user in the ;-th movie. Thus, the z-th row gives the score profile of user z. Then, the rank-one approximation A « cr\U\v\ corresponds to a model where all the movies are rated to a single profile (given by v{), and users differ from each other only by a scalar multiple as given by u\\ AX] « czz1/Zi71;. To prove the low-rank approximation result above, observe that the Frobenius norm is unitarily invariant, meaning that ||Y||f = || QVR||p for all Y E Rm'n and any orthogonal matrices Q E Rm,m, R E Rn'n. Therefore, where Z = l/TA*W With this change of variables, problem (5.14) reads diag (cti, ... ,c7r) 0r,n-r Qm—r,r Qm—r,n—r s.t.: rank(Z) = k. Notice then that Z can be assumed to be diagonal, since considering nonzero off-diagonal elements in Z only worsens the Frobenius norm objective in this problem. Therefore, the objective (5.16) becomes /0 = ||diag (<7i,... ,crr) — diag (z\,... ,zr) ||£ = £((7* - z,-)2. Since the constraint rank(Z) = k requires that exactly k of the diagonal entries Z[ are nonzero, the best choice is to set = cr?-, for i — 1,..., k, and Zi = 0, for z > k. In this way, the zZ/ i = 1,..., k, "neutralize" the largest singular values of A, so that the residual terms in the objective only contain the r — k smallest singular values, that is, an optimal solution is diag(<7i,... ,(7jt,0,...,0) 0r,n-r Qm—r,r 0m—r,n—r and the optimal objective is L °f- 5 See, e.g., the Netflix problem described in Section 11.4.1.4. The optimal solution for the original problem (5.14) can then be recovered from the change of variables Z = UT AkV, giving Ak = UZ*VT = EWivJ, 1 = 1 which coincides indeed with (5.15), as desired. Following the very same reasoning, we can actually prove that the solution in (5.15) is optimal not only for the Frobenius norm objective, but also for the spectral (maximum singular value) matrix norm. That is, Ak is optimal also for the following problem (hint: also the spectral norm is unitarily invariant): min ||A —AJ|? Akew">n 11 /cl12 s.t.: rank (Ak) = k. The ratio = Ml = gf+---+^ indicates what fraction of the total variance (Frobenius norm) in A is explained by the rank k approximation of A. A plot of rjk as a function of k may give useful indications on a good rank level k at which to approximate A, see, e.g., Example 5.2 for an application in image compression. Clearly, r]k is related to the relative norm approximation error , _ \\A~AkU _ °k+\ + • • •+ -i * IIaIIf + • • • + ar k' Remark 5.2 Minimum "distance" to rank deficiency. Suppose A E Rm,n, m > n is full rank, i.e., rank(A) — n. We ask what is a minimal perturbation 8A of A that makes A + 5A rank deficient. The Frobenius norm (or the spectral norm) of the minimal perturbation 8A measures the "distance" of A from rank deficiency. Formally, we need to solve min II^AlIp SAeRm,n 11 IIF s.t.: rank(A + 8 A) = n — 1. This problem is equivalent to (5.14), for 8A — Ak — A. The optimal solution is thus readily obtained as 8A* — Ak- A, where A = JQL1 CjUjvJ is a compact SVD of A, and Ak = E^1 ^ivj. Therefore, we have SA* = -<Tnu„v~l. This result shows that the minimal perturbation that leads to rank deficiency is a rank-one matrix, and that the distance to rank deficiency is ||M*||F = ||M*||2 = <7-„. 136 OPTIMIZATION MODELS Example 5.2 (SVD-based image compression) Figure 3.2 shows a gray-scale image, which is represented by a 266 x 400 matrix A of integers corresponding to the gray levels of the pixels. This matrix is full rank, i.e., rank (A) = 266. We computed the SVD of matrix A representing the image, and then plotted the ratio % in (3.17), for k from 1 to 266, see Figure 3.3. We see from this curve, for instance, that k = 9 already captures 96% of the image variance (the relative approximation error is e~ 0.04); k = 23 corresponds to % = 0.98; k — 49 corresponds to % = 0.99, and k = 154 corresponds to rj* = 0.999. Figure 3.4 shows visually a comparison of these approximations: the image is already intelligible at coarse approximation k = 9; for k = 49 we have a reasonably good compressed image, whilst at k = 154 the image is barely distinguishable from the original. To better understand the usefulness of the approximations, suppose we need to transmit the image over a communication channel. If transmitting the original image required sending N1 = 266 x 600 — 106,400 numerical values, transmitting the k = 49 approximation requires sending k right singular vectors, k left singular vectors, and k singular values, that is N2 = 266 x 49 + 49 + 400 x 49 = 32,683 numerical values. The relative compression would then be N1 -N? ------- -2 x 100 = 225%. *v I Figure 5.2 A 266 x 400 gray-scale image. Figure 5.3 Complement of the approximation error e* as a function of rank k. Figure 5.4 Rank k approximations of the original image in Figure 5.2, for k = 9 (top left), k — 23 (top right), k = 49 (bottom left), and k = 154 (bottom right). 5.3.2 Principal component analysis Principal component analysis (PCA) is a technique of unsupervised learning (see Section 13.5 for further discussion on learning from data), widely used to "discover" the most important, or informative, directions in a data set, that is the directions along which the data varies the most. 5.3.2.1 Basic idea. To make the intuition clear, consider for example the two-dimensional data cloud in Figure 5.5: it is apparent that there exists a direction (at about 45 degrees from the horizontal axis) along which almost all the variation of the data is contained. In contrast, the direction at about 135 degrees contains very little variation of the data. This means that, in this example, the important phenomena underlying the data are essentially uni-dimensional along the 45 degrees line. The important direction was easy to spot in this two- dimensional example. However, graphical intuition does not help when analyzing data in dimension n > 3, which is where principal component analysis (PCA) comes in handy. Let Xj G Rw, i = 1,..., m, be the given data points one wishes to analyze, denote by x = ^ YliL\ xi average of the data points, and let X be the n x m matrix containing the centered data points: ■ x, i = 1,..., m. We look for a normalized direction in data space, 2 G R”, ||z||2 = 1, such that the variance of the projections of the centered data points on the line determined by 2 is maximal. Our choice of the Euclidean norm in the normalization of 2 is made because it does not favor any particular direction. The components of the centered data along direction 2 are given by (see, e.g., Section 2.3.1) ocj — xj2, i — 1,..., m. Notice that ocjZ are the projections of X\ along the span of 2. The mean-square variation of the data along direction 2 is thus given by -i m m m i=1 1=1 zTXjxjz ■ The direction 2 along which the data has the largest variation can thus be found as the solution to the following optimization problem: max 2 (XX )z Figure 5.5 A data cloud in R2. № = 1. 138 OPTIMIZATION MODELS Let us now solve this problem via the SVD of X: let X = UrXVrT = £ (TiUivJ be a compact SVD of X, where r = rank(X). Then H = xxT = urx2uj is a spectral factorization of H. From Theorem 4.3 we have that the optimal solution of this problem is given by the column u\ of Ur corresponding to the largest eigenvalue of H, which is cr\. The direction of largest data variation is thus readily found as 2 = u\, and the mean-square variation along this direction is proportional to of. 5.3.2.2 Deflation. We can next proceed to determine a second-largest variation direction. To this end, we first deflate the data by removing from them the component along the already found largest variation direction u\. That is, we consider the deflated data points *j1)=Xi -Ui(ujxi), i = and the deflated data matrix = [jeP ••• x£>] = (in-ulUJ)x. We can readily obtain the compact SVD for the deflated matrix X^1) from the SVD of X = YJi=\ o'iUivJ. Namely: r r x(1) = YL aiuivl - E aiu iMiT uivJ i=1 1=1 = E£r<M<l,ir -°r\u\vl i=1 r = E aiuiv]> where the first line follows from the fact that u\, i — 1,..., m, form an orthonormal set, thus uj U{ is zero whenever i 7^ 1, and it is equal to one when i = 1. Notice that the summation in the dyadic expansion of DW now starts from i = 2. The second-largest direction of variation 2 thus solves the following optimization problem min 2T(X(1)x(1)T)2 zelRn s.t.: ||z||2 = 1, and the solution, again from Theorem 4.3, is 2 = U2, that is the direction of the left singular vector corresponding to the largest singular value of X^1), which is indeed (72. We can actually iterate this deflation process until we find all r principal directions. The above reasoning shows that these principal directions are nothing but the left singular vectors u\,...,ur of the centered data matrix X, and the mean-square data variations along these directions are proportional to the corresponding squared singular values (t\, ..., a}. Remark 5.3 Explained variance. We have seen that the eigenvectors u\,...,ur of XXT (which coincide with the left singular vectors of X) provide us with principal directions corresponding to decreasing mean-square variation in the data. The mean-square variation is also sometimes referred to as variance in the data. The variance along direction u\ is cr\) the variance of the deflated data along U2 is cf, etc. Therefore, the total variance in the data is cr\ + • • • + a}, which coincides with the trace of the Gram data matrix H = XXT: trace (XXT) = trace(UrT?Uj) — trace (E2 U J Ur) — trace (E2) = (7j H Yof- If we project the centered data onto the span of the first k < r principal directions, we obtain a projected data matrix x* = u*Tx, where Ucontains the columns U\,..., w*. The data variance contained in this projection (i.e., the variance explained by the first k principal directions) is trace(XfcXj) = trace(l^XXTUk) = trace ([I* 0]E2[4 0]T) = o* + ... + o%. Hence, we can define the ratio between the variance "explained" by the projected data and the total variance, as Vi + * * ' + Pic , V erf H Yo-f If this ratio is high, we can say that much of the variation in the data can be observed on the projected /c-dimensional subspace. Example 5.3 (PCA of market data) As a numerical example, we considered data consisting in the returns of six financial indices: (1) the MSCI US index, (2) the MSCI EUR index, (3) the MSCI JAP index, (4) the MSCI PACIFIC index, the (3) MSCI BOT liquidity index, and the (6) MSCI WORLD index. We used monthly return data, from Feb. 26, 1993 to Feb. 28, 2007, for a total of 169 data points, as shown in Figure 5.6. 140 OPTIMIZATION MODELS x 10 k Figure 5.6 Monthly returns of six financial indices. The data matrix X has thus m = 169 data points in dimension n — 6. Centering the data, and performing the SVD on the centered data matrix X, we obtain the principal axes U{, and the corresponding singular values: U = a = 1.0765 0.5363 0.4459 0.2519 0.0114 ] Computing the ratios tj^ in (3.19), we have 7 x 100 = [67.77 84.58 96.21 99.92 99.99 100]. From this we deduce, for instance, that over 96% of the variability in the returns of these six assets can be explained in terms of only three implicit "factors" (say, z = [z\ Z2 Z3]T). In statistical terms, this means that each realization of the return vector x E 1R6 can be expressed (up to a 96% "approximation") as x — x + U$z, where z is a zero-mean vector of random factors, and L/3 = [u\ Ui 1/3] is the factor loading matrix, composed of the first three principal directions of the data. 5.3.2.3 Computing the PC A. In practice, one is interested in computing just a few principal directions, hence the full SVD is not required. The power iteration algorithm, described in Section 12.5.3, allows us to address PCA problems when only a few directions are sought, even for large data sets. 5.3.2.4 Link with low-rank matrix approximation. The PCA problem is closely linked with the low-rank approximation problem examined in Section 5.3.1. Precisely, the vector 2 that is optimal for problem (5.18) is nothing else than an eigenvector of the matrix XXT that corresponds to the largest eigenvalue of that matrix. As such, it is a left singular vector corresponding to the largest eigenvalue of the centered data matrix X (see Corollary 5.1). In effect, we can perform a low-rank approximation to the centered data matrix, and the resulting low-rank matrix provides the principal components. For example, if we solve the rank-one approximation problem min IIX — cruvT\\j; : llwlU = llulh = 1, 0" > 0, then at optimum u — u\ is the principal direction with the largest variance.6 The k-rank approximation to an n x m data matrix X: X « £ (TjUivJ allows one to interpret it as a sum of k different "factors," where each factor is a dyad of the form pqT. For instance, linking back to Example 5.3, if our data matrix X contains the returns of different assets over a time period, with X/y being the one-period return of asset i in period ;, then a rank-one approximation X « pqT, so that Xjj = piqj for every pair (/,;), is such that vector q could be interpreted as the return of a "typical" asset, while vector p contains some positive or negative factors that are specific to each asset. A general low-rank approximation can be interpreted as an effort to represent data as a (small) linear combination of typical profiles, each asset assigning its own scaling to each profile. 5.3.3 Procrustean transformation problems The problem of approximately superimposing two ordered groups of three-dimensional points by means of a rigid displacement (rotation and translation) is a classical one in robotics, manufacturing, and 6 See Section 12.5.3 142 OPTIMIZATION MODELS computer vision, where it is encountered under various names such as the absolute orientation, pose estimation, procrustean transformation problem, or the matching problem. In this section, we illustrate the matching problem for data in generic dimension n, and highlight its connection with the SVD. Let A = [a\ • • • am\ G lR"'m, B =[&!••• bm] e ]R"'m be given matrices containing by columns two ordered sets of points. The matching problem considered here amounts to determining a rigid rotation and a translation of the data set B that brings this set to approximately match the A set. In a manufacturing context, for instance, the data points in A are interpreted as "template" points of a machined part (e.g, from computer-assisted design, or CAD), while the data in B are points physically measured on the actual machined part, and one is interested in knowing whether the actual part is in tolerance, by bringing it to match the template by a suitable displacement. Formally, the rotation is represented by means of an orthogonal matrix R G ]Rn,n, and the translation by a vector t G W1. The displaced points are described by the matrix Bd — RB -f- tl^~. The problem is to minimize the matching error, as measured by the squared Frobenius norm of A — B^: min ||y4 — (RB + flT)||p (5-20) s.t.: RRT = In. The following theorem holds. Theorem 5.2 Given A, B G ]R"'m, let — lm = AP, = BP and let BAT= UtVT be an SVD for BAT. Then, an optimal solution for problem (5.20) is given by R* = VUT, t* = —(A-R*B)1. Proof For any fixed R, the objective (3.20) as a function of t is f0(R,t) = ||A — RB — £1t||f = ||A - RB\\l - 2trace(A - RB)ltT + mtTt. This can be minimized with respect to t by setting the gradient with respect to t to zero: V,/o = -2T(A - RB) 1 + 2mf = 0, which yields the optimal translation vector as a function of R: t = —(A — RB)1. m Substituting this t back into the objective we obtain f0(R) = \\A-RB\\2v. The minimum of /o(R) over orthogonal matrices can be determined as follows: recall that for orthogonal R it holds that ||RB||f = ||B||p, and let UtVT be the singular value factorization of BAT. Then, we write || A — RB||p = || A |jp + ||RB||p — 2 trace RBAT = ||Hi+ ||5 Hi-2 traceTS = \\A\\l + \\B\\l-21£Tii<7i' (5-21) 1 = 1 where T = VTRU is an orthogonal matrix, and r = rank(BAT) . Clearly, (3.21) is minimized if Yli=\ Ta^i is maximized. Since orthogonality of T imposes |TZZ | < 1, the maximum is achieved by choosing Ta = 1, i.e. T = In, which results in an optimal orthogonal matrix R* = VUT. Notice finally that real orthogonal matrices may represent reflections as well as proper rotations. Proper rotations are characterized by the conditions that detR = +1, whereas for reflections we have detR = — 1. If in the problem we insist on having proper rotation matrices, and it turns out that detR* = — 1, then we may renormalize the last column of U by multiplying it by —1. □ Example 5.4 We show a simple example consisting of n — 4 points in R3, as shown in Figure 5.7, with -1 -1 A - 1 -1 -1 1 ' 3.0000 4.4142 3.0000 B = 0.9000 -0.5142 144 OPTIMIZATION MODELS -1 -0.5 0 0.5 1 1.5 2 2.5 3 3.5 4 ~ We want to determine a displacement that superimposes B on A, minimizing the matching cost (5.20). Applying Theorem 5.2, we compute the SVD r 2.7284 -2.7284 0 2.9284 3.1284 0 BAT = = UY.V u = £ = V = -0.1516 0.9884 0 0.9884 0.1516 0 3.8477 0 0 0 0.5776 0.8163 0 0.8163 -0.5776 0 R* = VUT = 0.7193 0.6947 -0.6947 0.7193 with optimal matching error || A — R*B — f*lT||F = 0.1804. Figure 5.7 Matching two-dimensional data sets. Matrix R* corresponds to a rotation of —44.0048° around an axis pointing outwards to the X\,X2 plane. 5.4 Exercises Exercise 5.1 (SVD of an orthogonal matrix) Consider the matrix -12 2 2-12. 2 2-1 1. Show that A is orthogonal. 2. Find a singular value decomposition of A. Exercise 5.2 (SVD of a matrix with orthogonal columns) Assume a matrix A = [01,..., 0m] has columns 0; G ]RW, i = 1,..., m that are orthogonal to each other: ajaj = 0 for 1 < i f=- j < n. Find an SVD for A, in terms of the 0/s. Be as explicit as you can. Exercise 5.3 (Singular values of augmented matrix) Let A G W1'171, with n>m, have singular values cq,..., <jm. 1. Show that the singular values of the (n + m) x m matrix are a\ — yjl + of, i = 1,..., m. 2. Find an SVD of the matrix A. Exercise 3.4 (SVD of score matrix) An exam with m questions is given to n students. The instructor collects all the grades in a n x m matrix G, with G,-y the grade obtained by student i on question j. We would like to assign a difficulty score to each question, based on the available data. 1. Assume that the grade matrix G is well approximated by a rank- one matrix sqT, with s e W1 and q G IRm (you may assume that both s,q have non-negative components). Explain how to use the approximation to assign a difficulty level to each question. What is the interpretation of vector s? 2. How would you compute a rank-one approximation to G? State precisely your answer in terms of the SVD of G. Exercise 5.5 (Latent semantic indexing) Latent semantic indexing is an SVD-based technique that can be used to discover text documents similar to each other. Assume that we are given a set of m documents Di,...,Dm. Using a "bag-of-words" technique described in Example 2.1, we can represent each document Dj by an ft-vector dj, where n is the total number of distinct words appearing in the whole set of documents. In this exercise, we assume that the vectors dj are constructed as follows: dj(i) = 1 if word i appears in document Dy, and 0 otherwise. We refer to the n x m matrix M = [d\,.. .,dm] as the "raw" term-by-document matrix. We will also use a normalized7 version of that matrix: M = \d\,.. .,dm\, where dj = dj/\\dj\\2, j = l,...,ra. Assume we are given another document, referred to as the "query document," which is not part of the collection. We describe that query document as an ft-dimensional vector q, with zeros everywhere, except a 1 at indices corresponding to the terms that appear in the query. We seek to retrieve documents that are "most similar" to the query, in some sense. We denote by q the normalized vector ? = <?/!№• 1. A first approach is to select the documents that contain the largest number of terms in common with the query document. Explain how to implement this approach, based on a certain matrix-vector product, which you will determine. 2. Another approach is to find the closest document by selecting the index j such that \\q — rfy112 is the smallest. This approach can introduce some biases, if for example the query document is much shorter than the other documents. Hence a measure of similarity based on the normalized vectors, \\q — djW2, has been proposed, under the name of "cosine similarity". Justify the use of this name for that method, and provide a formulation based on a certain matrix-vector product, which you will determine. 3. Assume that the normalized matrix M has an SVD M = UXVT, with E an n x m matrix containing the singular values, and the unitary matrices U = [u\,..., un\, V = [v\f..., vm] of size n x ft, m x m respectively. What could be an interpretation of the vectors ui, v\, I — 1,..., r? Hint: discuss the case when r is very small, and the vectors u/, vj, I = 1,..., r, are sparse. 4. With real-life text collections, it is often observed that M is effectively close to a low-rank matrix. Assume that a optimal rank-k approximation (k <C min(n, m)) of M, M*-, is known. In the latent semantic indexing approach8 to document similarity, the idea is to first project the documents and the query onto the subspace generated by the singular vectors u\,..., u^, and then apply the cosine similarity approach to the projected vectors. Find an expression for the measure of similarity. 7 In practice, other numerical representation of text documents can be used. For example we may use the relative frequencies of words in each document, instead of the £2-norm normalization employed here. 8 In practice, it is often observed that this method produces better results than cosine similarity in the original space, as in part 2. Exercise 5.6 (Fitting a hyperplane to data) We are given m data points d\,..., dm G Rn, and we seek a hyperplane H(c,b) = {x £ R” : cTx = b}, where c e R”, c / 0, and b £ R, that best "fits" the given points, in the sense of a minimum sum of squared distances criterion, see Figure 5.8. Formally, we need to solve the optimization problem min £]dist2(di,'H(c,b)) : ||c||2 = where dist(rf, H) is the Euclidean distance from a point d to H. Here the constraint on c is imposed without loss of generality, in a way that does not favor a particular direction in space. 1. Show that the distance from a given point d G R” to K is given by dist (d,U(c,b)) = \cJd-b\. 2. Show that the problem can be expressed as min fo{b,c), b,c: ||c||2=l where /0 is a certain quadratic function, which you will determine. 3. Show that the problem can be reduced to min cT(DDT)c s.t.: ||c||2 = 1, where D is the matrix of centered data points: the z-th column of D is di — d, where d = (1/m) Y4L1 di is the average of the data points. Hint: you can exploit the fact that at optimum, the partial derivative of the objective function with respect to b must be zero, a fact justified in Section 8.4.1. 4. Explain how to find the hyperplane via SVD. Exercise 3.7 (Image deformation) A rigid transformation is a mapping from BJ1 to Bn that is the composition of a translation and a rotation. Mathematically, we can express a rigid transformation (p as <p(x) = Rx + r, where R is an n x n orthogonal transformation and rGR”a vector. 148 OPTIMIZATION MODELS We are given a set of pairs of points (xf,yz-) in ]Rn, i = 1,m, and wish to find a rigid transformation that best matches them. We can write the problem as min Y II Rx{ + r — i/z 11i • RT R = In, (5*22) where In is the n x n identity matrix. The problem arises in image processing, to provide ways to deform an image (represented as a set of two-dimensional points) based on the manual selection of a few points and their transformed counterparts. 1. Assume that R is fixed in problem (5.22). Express an optimal r as a function of R. 2. Show that the corresponding optimal value (now a function of R only) can be written as the original objective function, with r = 0 and %[, \/i replaced with their centered counterparts, ^ m ^ m Xi = Xi-x, * = -£>/, Vi = Vi-V, y=- El/r m j=1 171 j=1 3. Show that the problem can be written as min IIjRX- Y||f : RTR = for appropriate matrices X, Y, which you will determine. Hint: explain why you can square the objective; then expand. 4. Show that the problem can be further written as max trace RZ : XT R = ln, for an appropriate n x n matrix Z, which you will determine. 5. Show that R = VUT is optimal, where Z = USVT is the SVD of Z. Hint: reduce the problem to the case when Z is diagonal, and use without proof the fact that when Z is diagonal, In is optimal for the problem. 6. Show the result you used in the previous question: assume Z is diagonal, and show that R — In is optimal for the problem above. Hint: show that RTR — In implies \Ra\ < 1, i = 1,...,n, and using that fact, prove that the optimal value is less than or equal to trace Z. Figure 5.9 Image deformation via rigid transformation. The image on the left is the original image, and that on the right is the deformed image. Dots indicate points for which the deformation is chosen by the user. 7. How woud you apply this technique to make Mona Lisa smile more? Hint: in Figure 3.9, the two-dimensional points X{ are given (as dots) on the left panel, while the corresponding points y; are shown on the left panel. These points are manually selected. The problem is to find how to transform all the other points in the original image. Linear equations and least squares We introduce here linear equations and a standard formalism for their representation as an element-wise vector equality of the form Ax = y, where x G Rn is the unknown variable, A G ]Rm,n is the coefficient matrix, and y G Rm is the known term vector. Linear equations constitute a fundamental building block of numerical linear algebra, and their solution is usually a key part in many optimization algorithms. Actually, the problem of finding a solution to a set of linear equations Ax — y can also be interpreted as an optimization problem, that of minimizing ||Ax — y\\i with respect to x. We shall characterize the set of solutions of linear equations, and then discuss approximate solution approaches, which are useful when no exact solution exists. This leads to the introduction of the least-squares problem, and its variants. Numerical sensitivity issues and solution techniques are also discussed, together with relations with matrix factorizations such as the QR factorization and the SVD. 6.1 Motivation and examples Linear equations describe the most basic form of relationship among variables in an engineering problem. Systems of linear equations are ubiquitous in all branches of science: they appear for instance in elastic mechanical systems, relating forces to displacements, in resistive electrical networks, relating voltages to currents, in curve fitting, in many geometrical problems such as triangulation, trilateration, and localization from relative position measurements, in discrete-time dynamical systems relating input and output signals, etc. Linear equations form the core of linear algebra, and often arise as constraints in optimization problems. They are also an important building block of optimization methods, since many optimization algorithms rely on 132 OPTIMIZATION MODELS solution of a set of linear equations as a key step in the algorithm's iterations. We next provide a few illustrative examples of linear equations. Example 6.1 (An elementary 3x2 system) The following is an example of a system of three equations in two unknowns: xi + 4.5x2 = h 2x\ 4- 1.2x2 — —3.2, —O.lxi -f- 8.2x2 — 1.5. This system can be written in vector format as Ax — y, where A is a 3 x 2 matrix, and y is a 3-vector: 4.5 " / y = A solution to the linear equations is a vector x G R2 that satisfies the equations. In the present example, it can be readily verified by hand calculations that the equations have no solution, i.e., the system is infeasible. Example 6.2 (Trilateration) Trilateration is a method for determining the position of a point, given the distances to known control points (anchors). Trilateration can be applied to many different areas such as geographic mapping, seismology, navigation (e.g., GPS systems), etc. In Figure 6.1, the coordinates of the three anchor points 0i,02/fl3 £ R2 are known, and the distances from point x = [xi X2]T to the anchors are measured as d\,d2,d3. The unknown coordinates of x are related to the distance measurements by three nonlinear equations ||x U\ H2 — \\x-a2W2 = d2' Il*-«3ll2 = 4- However, by subtracting the second and the third equation from the first one, we obtain a system of two linear equations in the variable x 2(02 — ai)T x ■dl + lMÌÌ- IKII2' 2(a3-al)Tx = d\ -d\ + \\a3\\\ - ||«i||1, such that each solution to the original nonlinear system is also a solution to this linear system1 of two equations in two unknowns (the desired coordinates). This system is put in the standard vector format Ax — y, with A = 2(^2 ~ 0i)T 2(a3 -fli)T d\ - d\ + \\a2W2 ~ ll^iII2 d] -d\+ ||fl3|l2 — !l«i111 The matrix A is invertible whenever vectors #2 ~ a\ and 03 — 01 are not parallel, i.e., whenever the three centers are not collinear. We shall see that, when A is invertible, the linear system has a unique solution. Under Figure 6.1 A trilateration problem on a plane (view from above). At point x, we measure the distances from three beacons fli,«2/«3/ in order to determine the coordinates of x. 1 The converse statement, however, may not hold. That is, not every solution to the linear system is necessarily a solution to the original system of nonlinear equations. For a given solution x* of the linear system, we have to check a-posteriori whether II** ~ as\\i ~ dl also holds, to ensure that x* is also a solution to the original system of nonlinear equations. such a hypothesis, if the solution of the linear system also satisfies \\x — «3 H2 = c%, then we obtained the (unique) solution of the original system of nonlinear equations; if it does not, then we conclude that the original system has no solution (the measurements are inconsistent). Example 6.3 (Force/torque generation) Consider a rigid body moving in a horizontal plane, equipped with n thrusters as shown in Figure 6.2. Each thruster i is placed at coordinates (x;,y7) with respect to the center of mass, and can impress a force of intensity /z to the rigid body, along its direction of action 0*. Suppose we want to impress to the body an overall resultant force / = [fx fy]T and resultant torque r: determine the intensities //, i = 1,..., n, such that the desired overall resultant force and torque are attained. Notice that the resultant force along the x and y axes is given by fi cos 07 and fi sin 0/, respectively, while the resultant torque is cos 0/ — jq sin0/). Figure 6.2 Determine thrust intensities /;, i = 1,..., n, so that the resultant force / and torque r are attained. In order to match the desired force and torque, the thruster intensities should therefore satisfy the following system of three linear equations in the n unknowns /1,... ,/n: /1 COS 61 + b f„ cos 6„ = fx, /1 sin 01 H + f„ sin e„ = fy, /1^1 H VfnO-n = T, where we defined the coefficients = y/ cos 0Z- — Xj sin 0Z-, i = l,...,n. This system of linear equations can be written in more compact vector notation as COS 01 •• COS 6n ' /1 ' ' fx ' sin 01 sin 6n 154 OPTIMIZATION MODELS Example 6.4 (Polynomial interpolation) Consider the problem of interpolating a given set of points (*/,!//), i = 1 with a polynomial of degree n — 1: p(x) = an-ixn~1 + ■ ■ ■ + a-ix + a0. Clearly, the polynomial interpolates the i-th point if and only if p(x*) = \ji, and each of such conditions is a linear equation in the polynomial coefficients aj, j = — 1. An interpolating polynomial is hence found if the following system of linear equations in the aj variables has a solution: a$ + X\a\ + • • • + an-\x^ 1 = t/i, ao + *2^1 H I-&n-ix2 1 — Vh ao + Xma^ + • • •= ym. This system can be rewritten in compact vector notation as ■ 1 1 xl ■ X2 • vn-l ■' X1 vn-l • • X2 " yi _ 1 X2m ■ _ an—1 . ym _ where the matrix of coefficients on the left has a so-called Vandermonde structure. Example 6.5 (Fitting a power law to experimental data) We consider the problem of constructing a power-law model (see Example 2.11) that explains a batch of experimentally observed data. Suppose we are given experimental data with input vectors > 0 and associated outputs x/i > 0, i — 1,..., m, and that we have an a priori belief that these data may come from a power-law form model of the form y = ax"1 • • • xan". Here, the variables of the problem are oc > 0, and the vector a G IRn. Taking logarithms, we have that each observation produces a linear equation in the dj variables: yi — aTx^ + b, i = l,...,m, (6.1) where we defined b = log a, Xf = log xif y{ - log yf. These equations form a system of linear equations that can be written in compact matrix form as follows: LINEAR EQUATIONS AND LEAST SQUARES 155 In practice, one cannot expect that the experimental data are perfectly explained by equation (6.1). It is much more realistic to assume that the power model explains the observations up to a certain residual error, that is yi — aT+ b + rj, i = 1,..., m, where r — [zq • • • rm]T is the vector of residuals, accounting for the mismatch between model and reality. In this situation, it is very reasonable to seek a model (i.e., an (0, b) vector) that makes the mismatch minimal in some sense. If the Euclidean norm of r is chosen as the mismatch criterion, then the best fit model can be found by solving the following optimization problem: mm ||Xz-y||2, where z = [aT, b]T G Rn+1, and X G 'R(n+1)'m is a matrix whose z-th column is 1]T. This problem belongs to the class of so-called least-squares problems, which are discussed in Section 6.3.1. Example 6.6 (CAT scan imaging) Tomography means reconstruction of an image from its sections. The word comes from the Greek "tomos" (slice) and "graph" (description). The problem arises in many fields, ranging from astronomy to medical imaging. Computerized axial tomography (CAT) is a medical imaging method that processes large amounts of two- dimensional X-ray images in order to produce a three-dimensional image. The goal is to picture, for example, the tissue density of the different parts of the brain, in order to detect anomalies, such as brain tumors. Typically, the X-ray images represent "slices" of the part of the body that is examined. Those slices are indirectly obtained via axial measurements of X-ray attenuation, as explained below. Thus, in CAT for medical imaging, one uses axial (line) measurements to get two-dimensional images (slices), and from those slices one may proceed to digitally reconstruct a three-dimensional view. Here, we focus on the process that produces a single two-dimensional image from axial measurements. Figure 6.3 shows a collection of slices of a human brain obtained by CAT scan. The pictures offer an image of the density of tissue in the various parts of the brain. Each slice is actually a reconstructed image obtained by a tomography technique explained below. From 1D to 2D: axial tomography. In CAT-based medical imaging, a number of X-rays are sent through the tissues to be examined along different directions, and their intensity after they have traversed the tissues is captured by a receiver sensor. For each direction, we record the attenuation of the X-ray, by comparing the intensity of the X-ray at the source to the intensity after the X-ray has traversed the tissues, at the receiver's end, see Figure 6.4. 156 OPTIMIZATION MODELS Similarly to the Beer-Lambert law of optics, it turns out that, to a reasonable degree of approximation, the log-ratio of the intensities at the source and at the receiver is linear in the densities of the tissues traversed. To formalize this idea, consider a discretized version of a rectangular slice of a tissue, divided into a number n of volume elements (called voxels), see Figure 6.5, each having unknown density Xj, j = I,... ,n. Figure 6.3 CAT scan slices of a human brain (Source: Wikipedia). Figure 6.4 X-ray traversing tissues (Source: Wikipedia). Figure 6.3 Beams traversing a section of tissue divided into n voxels. A (typically large) number m of beams of intensity Iq at the source travel across the tissue: the i-th beam, i = 1,..., m, has a path of length ajj through voxel j, j = 1,..., n. The log-attenuation of the i-th beam intensity due to the /-th voxel is proportional to the density of the voxel xj times the length of the path, that is ajjXj. The total log-attenuation for beam i is therefore given by the sum of the log-attenuations: voxel j (ck'Jtoiiy: Xj) In n Vi = log T = E aijxi> li 7=1 where J; is the intensity of the z-th beam at the receiver end. As a simplified example, consider a square section containing four voxels traversed by four beams, as shown in Figure 6.6. Figure 6.6 A simplified 4 voxel example. = X\ + X2 -- X3 + X4 = \/2xi + V2x4 2/4 = X\ ~\~ X3 The vector of unknown densities x = [x\ • • • X4]"1" is linearly related to the vector of observed log-intensity ratios y =[y\ • • • 1/4]T via the system of linear equations . *4 Recovering the densities Xj from the yi measurements thus amounts to solving a system of linear equations of the form y — Ax, where A E Rm,n. Note that depending on the number n of voxels used, and on the number m of measurements, the matrix A can be quite large. In general, the matrix is "fat," in the sense that it has (many) more columns than rows (n m). Thus, the system of equations resulting from CAT scan problems is usually underdetermined (has more unknowns than equations). As shown in the previous examples, generic linear equations can be expressed in vector format as Ax — y, (6.2) where x E ]Rn is the vector of unknowns, y E ]Rm is a given vector, and A € !Rm,n is a matrix containing the coefficients of the linear equations. The examples motivate us to address the problems of solving linear equations. They also raise the issues of existence of a solution (does a solution x exist) and unicity (if a solution exists, is it unique?). We next discuss some fundamental properties of linear equations, focusing on issues of existence, uniqueness, and characterization of all possible solutions. We anticipate that, depending on the size and properties of A and y, system (6.2) can have no solution, or a unique solution, or a whole infinity of possible solutions. In the latter case, the set of solutions actually forms a subspace of Rn; in the first case (no solution), we shall introduce suitable notions of approximate solution. Our analysis of linear equations will make use of much of the definitions and facts related to vector spaces that we introduced in Chapter 2 and Chapter 3. 6.2 The set of solutions of linear equations 6.2.1 Definition and properties The solution set of the system of linear equations (6.2) is defined as as S = {x E Rn : Ax = yj. Let 01,..., an E Rm denote the columns of A, i.e. A = [a\ • • • an\, and notice that the product Ax is nothing but a linear combination of the columns of A, with coefficients given by x: Ax = x\a\ 4 b xnan. We recall that, by definition, the range of a matrix is the subspace generated by its columns, therefore, no matter what the value of the x coefficients is, the vector Ax always lies in 1Z(A). It then follows that whenever y £ 71(A) equations (6.2) do not admit a solution (i.e., they are infeasible), hence the solution set S is empty. Equivalently, system (6.2) admits a solution if and only if y E 11(A), that is if and only if y is a linear combination of the columns of A. This condition can be checked via the rank test2 rank([A y}) = rank(A). (6.3) Suppose next that condition (6.3) is satisfied, hence a solution x exists such that y = Ax. We next show that the solution set is an affine set: notice that another solution x x for the system exists if and only if A(x - x) — 0, hence x — x must lie in the nullspace of A, J\f(A). All possible solutions for the system must therefore have the form x = x + z, for z E J\f(A). That is, the solution set S is the affine set given by a translation of the nullspace of A: S = {x = x + z : z E A/XA)}. It also follows from this fact that the solution x is unique if and only if J\f(A) = {0}. We now recap our findings in the following fundamental proposition. 2 Clearly, it always holds that 71(A) C 7Z([Ay]). Thus, the rank test expresses the condition that dim7Z(A) = dim7Z([Ay)), implying that 71(A) =U([Ay}). LINEAR EQUATIONS AND LEAST SQUARES 159 Proposition 6.1 (The solution set of linear equations) The linear equation Ax — y, A e Rm'n admits a solution if and only z/rank([A y\) = rank(A). When this existence condition is satisfied, the set of all solutions is the affine set S = {x = x z : z G .A/^A)}, where x is any vector such that Ax = y. In particular; the system has a unique solution if (6.3) is satisfied, and M{A) = {0}. 6.2.2 Under determined, over determined, and square systems We briefly discuss three typical situations that may arise in systems of linear equations, namely when there are more unknowns than equations (underdetermined), when there are more equations than unknowns (overdetermined), and when there are as many equations as unknowns. These three cases are discussed under the hypothesis that A is full rank. The following theorem holds for full rank matrices (see also an equivalent result previously stated in Corollary 4.3). Theorem 6.1 The following two statements hold: 1. A G Rm'n is full column rank (i.e., rank(A) = n) if and only if AT A is invertible; 2. A E Rm'n is full row rank (i.e., rank(A) = m) if and only if A AT is invertible. Proof Consider the first point. If AT A is not invertible, then there exists x/0 such that AT Ax = 0. Then xT AT Ax = ||Ax||2 = 0, hence Ax = 0. Hence A is not full column rank. Conversely, if ATA is invertible, then ATAx =£ 0 for every x 0, which implies that Ax 0 for every nonzero x, as desired. The proof for the second point in the theorem follows similar lines. □ 6.2.2.1 Overdetermined systems. The system Ax = y is said to be overdetermined when it has more equations than unknowns, i.e., when matrix A has more rows then columns ("skinny" matrix): m > n. Assume that A is full column rank, that is rank(A) = n. By (3.3) it follows that dimAf(A) — 0, hence the system has either one or no solution at all. Indeed, the most common case for overdetermined systems is that y ^ 1Z{A), so that no solution exists. In this case, it is often useful to introduce a notion of approximate solution, that is a solution that renders minimal some suitable measure of the mismatch between Ax and y, as further discussed in Section 6.3.1. 160 OPTIMIZATION MODELS 6.2.2.2 Underdetermined systems. The system Ax = y is said to be underdetermined if it has more unknowns than equations, i.e., when matrix A has more columns than rows ("wide" matrix): n > m. Assume that A is full row rank, that is rank (A) = m, and then 71(A) — Rm. Recall from (3.3) that rank(A) + dim J\f(A) = n, hence dim J\f (A) — n — m > 0. The system of linear equations is therefore solvable with infinite possible solutions, and the set of solutions has "dimension" n — m. Among all possible solutions, it is often of interest to single out one specific solution having minimum norm: this issue is discussed in detail in Section 6.3.2. 6.2.2.3 Square systems. The system Ax = y is said to be square when the number of equations is equal to the number of unknowns, i.e. when matrix A is square: m — n. If a square matrix is full rank, then it is invertible, and the inverse A-1 is unique and has the property that A~l A = I. In the case of square full rank A the solution of the linear system is thus unique and it is formally written as x = A~xy. Note that the solution x is, however, rarely computed by actually determining A-1 and multiplying it by y; see instead Section 7.2 for numerical methods for computing the solution of nonsingular systems of linear equations. 6.3 Least-squares and minimum-norm solutions 6.3.1 Approximate solutions: least squares When y 7Z(A), the system of linear equations is infeasible: there is no x such that Ax = y. This situation happens frequently in the case of overdetermined systems of equations. In such cases it may, however, make sense to determine an "approximate solution" to the system, that is a solution that renders the residual vector r = Ax — y as "small" as possible. A natural way of measuring the size of the residual is by the use of a norm: we thus wish to determine x such that the norm of the residual is minimized. In this section we discuss in particular the most common case where the norm selected for measuring the residual is the standard Euclidean norm, whence the problem becomes min || Ax — y H2* (6.4) LINEAR EQUATIONS AND LEAST SQUARES l6l Since the function z2 is monotone increasing for z > 0, the previous problem is also equivalent to minimizing the square of the Euclidean norm: nun \\Ax-y\\\, (6.5) and from this latter formulation derives the name of least-squares (LS) solution of the linear equations, that is a solution that minimizes the sum of the squares of the equation residuals: \\Ax-y\\\ = £(«/"*-y«)2/ where aj denotes the z'-th row of A. Problem (6.4) has an interesting geometric interpretation: since the vector Ax lies in 1Z(A), the problem amounts to determining a point y — Ax in 11(A) at minimum distance from y, see also Section 5.24.1. The projection theorem (Theorem 2.2) then tells us that this point is indeed the orthogonal projection of y onto the subspace 7Z(A), see Figure 6.7. We can thus apply Theorem 2.2 to find an explicit solution to problem (6.5), as formalized in the following proposition. Proposition 6.2 (LS approximate solution of linear equations) Let A £ Rm'n, y £ Rm. The LS problem always admits (at least) a solution. Moreover, any solution x* £ Rn of (6.5) is a solution of the following system of linear equations (the normal equations) A1* Ax* = (6.6) and vice versa. Further, if A is full column rank (i.e., rank(A) = n), then the solution to (6.5) is unique, and it is given by x* = (ATA)~1ATy. (6.7) Proof Given any y £ Rm, by Theorem 2.2 there exists a unique point y £ 1Z(A) at minimal distance from y, and this point is such that (V ~ V) £ IZ(A)1- = J\f(AT), that is AT(y-y) = 0. Since y £ 1Z(A), there certainly exists an x such that y — Ax, which proves that (6.5) admits a solution. Then, substituting y = Ax in the previous orthogonality condition, we have ATAx = H(A)± = J\f(AT) Figure 6.7 Projection of y onto 7Z(A). l62 OPTIMIZATION MODELS which shows the equivalence between the LS problem (6.5) and (6.6). Finally, if A is full column rank then, by Theorem 6.1, AT A is invertible, hence the unique solution of (6.6) is given by (6.7). □ Remark 6.1 Normal equations and optimality. The normal equations are nothing else than the optimality conditions for the optimization problem mm f{x), where/(x) = \\Ax — y\\\. As we will see in Section 8.4, when the function is differentiable, convex, and the problem has no constraints, optimal points are characterized by the conditions V/(x) = 0. In our case, the gradient of / at a point x is easily seen to be V/(x) = AT (Ax — y). 6.3.2 The underdetermined case: minimum-norm solution We consider next the case when the matrix A has more columns than rows: m < n. Assuming that A is full row rank, we have that dim A[(A) = n — m > 0, hence it follows from Proposition 6.1 that the system y — Ax has an infinite number of solutions and that the set of solutions is Sx = {x : x = x + z, z £ J\f(A)}, where x is any vector such that Ax = y. (We are interested in singling out from the set of solutions Sx the one solution x* with minimal Euclidean norm. That is, we want to solve the problem which is equivalent to minxes- ||*||2- Corollary 2.1 can be directly applied to the case at hand: the (unique) solution x* must be orthogonal to N(A) or, equivalently, x* £ 1Z(AT), which means that x* — AT£, for some suitable £. Since x* must solve the system of equations, it must be Ax* = y, i.e., AAT£ = y. Since A is full row rank, AAT is invertible and the unique £ that solves the previous equation is £ = (AAT)_1y. This finally gives us the unique minimum-norm solution of the system: ** = AT(AATrV (6.8) The previous discussion constitutes a proof for the following proposition. Proposition 6.3 (Minimum norm solution) Let A £ Rm n, m < n, be full rank and let y £ Rm. Among the solutions of the system of linear equations Ax = y there exists a unique one having minimal Euclidean norm. This solution is given by (6.8). LINEAR EQUATIONS AND LEAST SQUARES 163 6.3.3 LS and the pseudoinverse For A E Rm'n, y E Rm, consider the LS problem nun ||Ax — y||2. (6.9) Under the hypothesis that a solution to the linear equations Ax = y exists, any solution of these equations is also a minimizer of (6.9) and, vice versa, any minimizer of (6.9) is a solution of the linear equations. Considering problem (6.9) is therefore, in some sense, "more general" that considering the linear equations Ax = y, since (6.9) has a solution even when the linear equations do not, and it has the same solution set as Ax — y, when this set is nonempty. Notice further that (6.9) has multiple (infinitely many) solutions, whenever A has a non-trivial nullspace. Indeed, all solutions to (6.9) are the solutions of the normal equations (6.6), and these equations have multiple solutions if and only if J\f(ATA) = N(A) is non-trivial. Among all possible solutions to the normal equations AT Ax = ATy we are now interested in finding the unique minimum norm one (note that, due to (3.16), these equations always admit at least one solution). From Corollary 2.1, we have that the unique minimum norm solution x* must be orthogonal to J\f(A) or, which is the same, must belong to 7Z(AT). Therefore, x* is uniquely determined by the following two conditions: (a) it must belong to 7Z{AT), and (b) it must satisfy the normal equations (6.6). We claim that such a solution is simply expressed in terms of the Moore-Penrose pseudoinverse as follows: x* = A+y. (6.10) This fact is readily proved as follows. Let A = Ur'LV^f be a compact SVD of A. Then, the Moore-Penrose pseudoinverse is expressed in (5.7) as A+ = WE-1 Uj, hence it follows that x* = A+y = Vr'L^Ujy = Vr£, thus x* E 7Z{Vr), but from (5.10) we have lZ(Vr) = 7Z(AT), hence condition (a) is satisfied by x* in (6.10). Furthermore, ATAx* = ATAA+y = VrYAlJ UrLVrT VrlTlUj y = Vrtfv? Vr'L-'uJ y = VrTLUjy = ATy, which shows that also condition (b) is satisfied, hence x* = A+y provides the unique minimum norm solution to the LS problem (6.9). This is summarized in the following corollary. Corollary 6.1 (Set of solutions of LS problem) The set of optimal solutions of the LS problem p* = mm || Ax - y||2 164 OPTIMIZATION MODELS can be expressed as ^opt — A+y + Af(A), where A^y is the minimum-norm point in the optimal set. The optimal value p* is the norm of the projection of y onto orthogonal complement of U{A):forx* G Xopt, p* = \\y-Ax*\\2 = \\(Im - AA+)y||2 = \\Pn{A),yh, where the matrix Pn^± is the projector onto TZ(A)L, defined in (5.12). If A is full column rank, then the solution is unique, and equal to x* = Afy = (ATA)_1ATy. 6.3.4 Interpretations of the LS problem The LS problem (6.4) can be given a variety of different (but of course related) interpretations, depending on the application context. Some of these interpretations are briefly summarized in the next para- graphs, where the first two items have already been diffusely discussed in the previous sections. 6.3.4.1 Approximate solution of linear equations. Given a system of linear equations Ax = y which is possibly infeasible (i.e., may have no exact solution), we relax the requirement, and ask for a solution x tha.t'frpproximately solves the system, i.e., such that Ax ~ y. In the LS method, the approximate solution is such that the equation residual vector r — Ax — y has minimal Euclidean norm. 6.3.4.2 Projection onto 1Z{A). Given a point y G Rm, the LS problem seeks a coefficient vector x such that y is approximated in the best possible way (according to the Euclidean norm criterion) by a linear combination of the columns a\,...,an of A. An LS solution x* gives the optimal coefficients for this linear combination, such that y* — Ax* = x\a\ H b x„an is the projection of y onto the subspace spanned by the columns of A. 6.3.4.3 Linear regression. Denoting by aj, i = 1,..., m, the rows of A, the LS problem (6.4) can be rewritten as LINEAR EQUATIONS AND LEAST SQUARES 165 that is, given "output" points y/, and "input" points i = 1,m, we are trying to approximate the output points with a linear function /(«;) = ajx of the input points, where x here is the parameter defining the linear function. A classical example in two dimensions is the fitting of a straight line through experimental or measured data. Given scalar output observations y* G R, and input observations £/ G R, i = 1,..., m, we seek an affine function /(£) = *i£ + x2 = aTx, « = (x\ is the slope of the line, x2 is the intercept with the vertical axis) approximating the output in the LS sense: m m min I>1& + *2 ~ Vi)2 = mm J^(aJx - i/i)2- i—1 1 = 1 Figure 6.8 shows an example where the data points £; represent the market price of a given item, and yz represents the average number of customers who buy the item at price £/. The straight line in the figure represents the linear model obtained from the observed data via an LS fit. This model shows how customers react to variations in the price of a given item and can be used, for instance, to predict the value of the average number of customers buying the item at new price tags. Another example of a linear regression fit comes from a popular model used for prediction of time series, called the auto-regressive model. This model assumes that the value of a discrete time signal yt is a linear combination of a certain number of past values of the signal itself: yt = Xiyt-i + • • • + xnyt-n, t = where x\ are constant coefficients, and n is the "memory length" of the model. The interpretation of such a model is that the next output is a linear function of the past. Elaborate variants of auto-regressive models are widely used for prediction of time series arising in finance and economics. If we want to approximately fit an auto-regressive model to an observed signal, we collect observations of the actual signal {yt}\-n<t<mr with m>n, and seek parameters x such that the total squared error of fit is minimized: min £(y, - xtft-1 x„yt-n)2. x (=1 This problem is readily expressed as an LS problem, with appropriate data A, y. Figure 6.8 Linear regression example. l66 OPTIMIZATION MODELS 6.3.4.4 Minimal perturbation to feasibility. Suppose the linear equations Ax — y are infeasible, that is no x E Rn satisfies the equations. We may then consider the following perturbed version of the problem Ax — y + by, where by E Rm is a perturbation on the right-hand side of the equations, and ask what is the smallest (in the Euclidean norm sense) perturbation by that renders the equations feasible. Clearly, since by — Ax — y, the answer is again given by the LS solution x*, which renders \\byW2 minimal, that is by* = Ax* — y. This interpretation raises the important issue of the presence of uncertainty, or perturbations, in the problem data. In the linear equations Ax — y the "data" are the matrix A and the vector y, and these are considered to be given and certain. Allowing for possible perturbations in the y term may render feasible a nominally unfeasible problem and, as we have just seen, such a minimal perturbation is given via the solution of an LS problem. A more elaborate perturbation model considers the possibility of joint perturbations on both the coefficient matrix A and the vector y, that is it considers the perturbed equations (A + SA)x = y + by, and seeks the minimal perturbation matrix A = [6A by] such that the equations become feasible. When the size of the perturbation matrix is measured by the spectral norm, ||A||2, then determining such a minimal perturbation is known as a total least-squares (TLS) problem, see further discussion in Section 6.7.5. 6.3.4.5 Best linear unbiased estimator (BLUE). A further important interpretation of the LS problem arises in the context of statistical estimation. Suppose one assumes a linear statistical model between an unknown deterministic vector of parameters x E Rn and its "measurements" y E Rm: y — Ax-Ez, (6.11) where z is a vector of random errors, and A E Rm,n is assumed to be full rank, with m > n. The meaning of equation (6.11) is that each measurement reading yz is equal to a linear function aj x of the unknown parameter x, plus a random noise term zz . We assume that z has zero mean and unit covariance matrix, that is E{z} — 0, var{z} = E{zzT} = Im. According to this model, the readings vector y is, a priori, a random vector with E{y} = Ax, var{y} = E{(y - E{y})(y - E{y})T} = Im. LINEAR EQUATIONS AND LEAST SQUARES 167 A linear estimator x of the unknown parameter x is defined as a linear function of y: x = Ky, (6.12) where K G R",m is some to-be-determined gain of the estimator. Notice that the estimator x is itself a random vector, with E{x} = KE{y} = KAx. An estimator is said to be unbiased if its expectation coincides with the unknown parameter to be estimated, that is if E{x} = x. We see from the previous equation that in order for x to be an unbiased estimator, for any x, we must have KAx = x, that is, KA = ln, which means that K must be a left inverse of A. According to (5.9), any left inverse of A can be written as follows: K = A+ + Q= (ATA)-1AT + Q, where Q G Rn,m is any matrix such that QA = 0. A BLUE estimator is an unbiased linear estimator of the form (6.12) that has the minimal possible covariance matrix. Letting x=(Ai + Q)y = {Ai + Q)(Ax + z) = x + (Af + Q)z, the covariance of x is var{x} = E{(x - x)(x - x)T} = E{(A+ + Q)zzT(Af + Q)T} = (A+ + Q)E{zzT}(A+ + Q)T = (A+ + Q)(A+ + Q)T = Af AiT + QQT, where the last passage follows from the fact that /4+QT = 0. Since QQT >r 0, it follows from (4.6) that var{£} = A+A+T + QQT h A+A+T, VQ, hence the minimal covariance matrix is achieved by taking Q = 0, that is with the estimator x = Ky, K = A+ = {ATA)~lAJ. The BLUE estimator for the linear model (6.11) therefore coincides with the solution of the LS problem (6.4). l68 OPTIMIZATION MODELS 6.3.5 Recursive least squares In the context of the parameter estimation interpretation of the LS problem discussed in the previous section, we seek to estimate an unknown parameter x G Rn from a series of m > n noisy linear measurements j/z : = ajx + zif i = 1,..., m, where z* are independent and identically distributed (iid), zero-mean and unit variance random noise terms, and aj are the rows of matrix A G Rm'n. In such a context, it makes sense to ask the following question: suppose we observed m > k > n measurements and that we solved the estimation problem, determining the optimal estimate x^ based on the k available measurements. Then, a new measurement i/fc+i arrives. Can we avoid re-solving the whole problem again from scratch, and instead find a simpler way to update the previous estimate by incorporating the new information? The answer to this question is positive, and we next derive a well-known recursive solution for the LS estimation problem. Let A*. G Rk/tl be the matrix containing the first k rows aj,.. .,aj, and let y^) G Rk be the vector containing the first k measurements: yW — [yi • • • yk]T- Let further the new measurement be 3/jfc+l = ak+lx + zk+1/ and let ^ h ■ yw ' , Hk = AkAk >- 0, where we assume that Ahas full rank n. The optimal estimate based on the first k measurements is, from (6.7), *(*) = H^AjyW, whereas the optimal estimate based also on the additional (k + l)-th measurement is X(k+1) = H^Aj+1/k+V = + ak+1yk+1), (6.13) Lffc+i — Aj+1 Afr+i = Hfr + Now, using the rank-one perturbation formula (3.10) for the inverse of 1, we have that 7k+l — 1 + ak+l^k . LINEAR EQUATIONS AND LEAST SQUARES 169 which, substituted in (6.13), gives xiM) = lfl*+ia*"+1) Hk^AkV{k) + flfc+iy*+i) = 0 ~ ^~HklaMak+1) x(k) + V+iy*+l = *(k) + Hfc~ V+i • (6.15) Formulas (6.14) and (6.13) provide the desired recursion for updating the current LS solution when a new measurement is available. An advantage of this formulation is that (6.13) permits us to compute the new solution x^+i) jn a number of operations (scalar multiplications and additions) of the order of n1, whereas the direct formula (6.13) would require a number of operations of order of n3 (if the inverse of H*-+1 were to be computed from scratch). This approach can be used recursively, as further measurements are collected: one starts at some ko with H^0 (invertible) and x^k°\ Then, for each k > ko, we update the inverse matrix H^T1 according to (6.14) and the estimate according to (6.13), and keep iterating as long as new measurements are available. 6.4 Solving systems of linear equations and LS problems We first discuss techniques for solving a square and nonsingular system of equations of the form Ax — y, A e , A nonsingular. (6.16) 6.4.1 Direct methods If A e RW/" has a special structure, such as being an upper (resp., lower) triangular matrix, then the algorithms of backward substitution [resp., forward substitution), described in Section 7.2.2, can be directly applied to solve (6.16). If A is not triangular, then the method of Gaussian elimination, described in Section 7.2.3, applies a sequence of elementary operations that reduce the system to upper triangular form. Then, backward substitution can be applied to this transformed system in triangular form. A possible drawback of these methods is that they work simultaneously on the coefficient matrix A and on the right-hand side term y, hence the whole process has to be redone if one needs to solve the system for several different right-hand sides. 170 OPTIMIZATION MODELS 6.4.2 Factorization-based methods Another common approach for solving (6.16) is the so-called factor- solve method. With this approach, the coefficient matrix A is first factored into the product of matrices having a particular structure (such as orthogonal, diagonal, or triangular), and then the solution is found by solving a sequence of simpler systems of equations, where the special structure of the factor matrices can be exploited. Some of these factorization-based methods are described next. An advantage of factorization methods is that, once the factorization is computed, it can be used to solve systems for many different values of the right- hand side y. 6.4.2.1 Via SVD. If the SVD of A E R"'" is available, we can readily solve (6.16) as follows. Let A = LiLVT, where If, V E R.n'n are orthogonal, and L is diagonal and nonsingular. Then, we write the system Ax — y as a sequence of systems (see Figure 6.9) Uw — y, Lz = w, VTx = z, which are readily solved sequentially as w — UTy, z = YTxw, x = Vz. 6.4.2.2 Via QR factorization. We show in Section 7.3 that any nonsingular matrix A E Kn'n can be factored as A = QR, where Q E RW/" is orthogonal, and R is upper triangular with positive diagonal entries. Then, the linear equations Ax = y can be solved by first multiplying both sides on the left by QT, obtaining QTAx = Rx = y, y = QTy, and then solving the triangular system Rx = y by backward substitution. This factor-solve process is represented graphically in Figure 6.10. 6.4.3 SVD method for non-square systems Consider the linear equations Ax — y, where A E Rm,n, and y E Rm. We can completely describe the set of solutions via SVD, as follows. Let A = LZLVT be an SVD of A, and pre-multiply the linear equation by the inverse of Uf UT; then we express the equation in terms of the "rotated" vector x = VTx as Ix = y, Figure 6.9 Factor-solve method with SVD factorization. £ , R Rx = y Q y = Qy Figure 6.10 Factor-solve method with QR factorization: first solve for y the system Qy = y (with Q orthogonal), then solve for x the system Rx = y (with R triangular). LINEAR EQUATIONS AND LEAST SQUARES 171 where y = UTy is the "rotated" right-hand side of the equation. Due to the simple form of £ in (5.1), the above becomes Vi*i = Vi, i = 0 = yif i = r + 1,.. .,ra. (6.17) Two cases can occur. 1. If the last m — r components of y are not zero, then the second set of conditions in (6.17) are not satisfied, hence the system is infeasible, and the solution set is empty. This occurs when y is not in the range of A. 2. If y is in the range of A, then the second set of conditions in (6.17) hold, and we can solve for x with the first set of conditions, obtaining i = r. The last n — r components of x are free. This corresponds to elements in the nullspace of A. If A is full column rank (its nullspace is reduced to {0}), then there is a unique solution. Once vector x is obtained, the actual unknown x can then be recovered as x = Vx. 6.4.4 Solving LS problems Given A E ]Rm,n and y E ]Rm, we here discuss solution of the LS problem mm ||Ax-y||2. All solutions of the LS problem are solutions of the system of normal equations (see Proposition 6.2) AT Ax = ATy. (6.18) Therefore, LS solutions can be obtained either by using Gaussian elimination and backward substitution to the normal equations, or by applying a factor-solve method to the normal equations. 644.1 Using the QR factorization. Given A E lRm'n and y E lRm, with m > n, rank(A) = n, Theorem 7.1 guarantees that we may write A = QR, where R E lRn,n is upper triangular, and Q E Km,n has orthonormal columns. Then ATA = RTQTQR = RTR, since QTQ = In- Therefore, the normal equations become RTRx = RTQTy, and, multiplying both sides on the left by R-T (the inverse of RT), we obtain an equivalent system in upper-triangular form Rx = QTy 172 OPTIMIZATION MODELS which can be solved by backward substitution. The numerical cost of solving the LS problem using QR can thus be evaluated as follows: we need ~ 2mn2 operations to compute the QR factorization of A, plus 2nm operations for forming QTy, and then n2 operations for applying backward substitution, thus overall still ~ 2mn2 operations. 64.4.2 Using the Cholesky factorization. Another possibility, when m > n, rank(A) = n, is to use the Cholesky factorization of M = AT A for solving the normal equations (6.18). With the Cholesky factorization, a symmetric positive definite matrix M G Sn is factored as M — LLT, where L e lRn'n is nonsingular and lower triangular. This factorization requires ~ n3/3 operations. The normal equations then become LLT x = b, b = ATy. These can be solved by first finding z such that Lz = b, which can be done by forward substitution (n2 operations), and then determining x such that LT x = z, which can be done via backward substitution (n2 operations). This method requires ~ mn2 (for computing the product M = AT A), plus ~ ft3/3 (for the Cholesky factorization), plus 2mn for computing b, plus 2n2 operations for solving the two auxiliary triangular systems. The overall complexity is thus ~ mn2 + n3/3. This complexity figure is lower than the ~ 2mn2 complexity from the QR approach; the Cholesky method is indeed twice as fast as QR, if m n. However, the Cholesky method is very sensitive to roundoff errors in the finite precision computations, so the QR method is preferable for dense and medium sized A matrices. When A is very large and sparse, one can exploit specialized algorithms for sparse Cholesky factorization, yielding a complexity figure much smaller than ~ mn2 + n3/3. The Cholesky method may thus be preferable in these cases. 6.4.4.3 Using the SVD. Yet another possibility is to solve the normal equations (6.18) via SVD. If A = Ur'LV^f is a compact SVD of A, then the unique minimum norm solution to the normal equations is given by (see (6.10)) X = A+y = VrZ-'uJy. LINEAR EQUATIONS AND LEAST SQUARES 173 6.5 Sensitivity of solutions In this section, we analyze the effect of small perturbations in the data on the solution of square and nonsingular linear equations. The following results also apply to the normal equations, hence to the LS approximate solution of linear equations. 6.5.1 Sensitivity to perturbations in the input Let x be the solution of a linear system Ax = y, with A square and nonsingular, and y ^ 0. Assume that we change y slightly by adding to it a small perturbation term 8y, and call x + 5x the solution of the perturbed system: A(x + Sx) = y + 6y. (6.19) Our key question is: if 5y is "small," will 5x also be small or not? We see from (6.19), and from the fact that Ax — y, that the perturbation 5x is itself a solution of a linear system ASx = by, and, since A is assumed to be invertible, we can formally write 5x — A~l5y. Taking the Euclidean norm of both sides of this equation yields IMI2 = Il^-1<ty|l2 < ll^lhllfylb (6.20) where ||A_1||2 is the spectral (maximum singular value) norm of A-1. Similarly, from Ax = y it follows that ||y||2 = ||Ax||2 < HAH2M2, hence M21 < KK (6.21) Multiplying (6.20) and (6.21), we get ||<fr||2 / и ,-i,| и llfylb 1И7 ТЯГ This result is what we were looking for, since it relates a relative variation on the "input term" у to the relative variation of the "output" x. The quantity k(A) = ||A_1||2||A||2, 1 < K(A) < 00 is the condition number of matrix A, see Eq. (5.5). Large k(A) means that perturbations on y are greatly amplified on x, i.e. the system is very sensitive to variations in the input data. If A is singular, 174 OPTIMIZATION MODELS then k(A) = oo. Very large k(A) indicates that A is close to being numerically singular; we say in this case that A is ill conditioned. We summarize our findings in the following lemma. Lemma 6.1 (Sensitivity to input perturbations) Let A be square and nonsingular, and let x, Sx be such that Ax = y A(x + Sx) = y + Sy. Then it holds that IMI2 II3/II2 ' where k(A) = || A~l U211^ II2 is the condition number of A. 6.5.2 Sensitivity to perturbations in the coefficient matrix We next consider the effect on x of perturbations on the A matrix. Let Ax — y and let 8A be a perturbation such that (A + 8A) (x + Sx) — y, for some 8x. Then we see that ASx = —SA(x + Sx), hence Sx = — A~18A(x + Sx). Therefore ||fo||2 = \\A~l8A(x + )112 < ||A_1||2||M||2||x + ^x||2, \\S*h / |M_ln ,M|| l|M||2 Jx+jx\[2 112,1 ll2W We see again that the relative effect on x of a small perturbation jpf[j~2 <C 1 is small only if the condition number is not too large, i.e. if it is not too far away from one, k(A) ~ 1. This is summarized in the next lemma. Lemma 6.2 (Sensitivity to perturbations in the coefficients matrix) Let A be square and nonsingular, and let x, 8A, Sx be such that Ax = y, (A + 6A)(x + 5x) = y. Then it holds that \\Sx\\2 ^,aJSA\\2 ||x + ^x||2 LINEAR EQUATIONS AND LEAST SQUARES 175 6.5.3 Sensitivity to joint perturbations on A, y We finally consider the effect on x of simultaneous perturbations on A and y. Let Ax — y, and let 6A, Sy be perturbations such that (A + SA) (x + Sx) = y + Sy, for some 5x. Then, A5x — by — SA(x + Sx), hence Sx = A~lSy — A~1SA(x + Sx). Therefore ||<fr||2 = \\A-l&y-A-x5A{x + 5x)\\i < Il'4~1<tyll2 + ||A_1M(s: + ^)||2 < II■A 1 IHI^ylk + ||/4 1H2IIMH2IIX + fo||2, and, dividing by \\x + SxH2, \M\2 <1|A-i||2IM2_J№_ + K(/4)Jl^ll2 ||x + fo||2 II3/II2 ||* + ^||2 11^112 But ||y||2 = 11112 < PH2IMI2, hence ll^lk <K(A)iM2_№_ + K(y4)F/4ll2 \\x + Sx\\2 \\y\h ll* + fo||2 ll^lk ‘ Next, we use ||x||2 = \\x + Sx — SxW2 < \\x + SxW2 + ||<fa||2/write Mb <tW№f1+i%U,w.l»t ||* + ^l|2 111/II2 V ||* + <fr||2/ 11^112 ' from which we obtain l№ <K(A)№(1 + 7J#rW>¥^. Ik + ^ll2 ||y||2 V II*+ <^11 and hence »<N12 < K(A) (\\Sy\\2 ||M||; ll* + ^l|2 “ i_icM)J№ v II1/II2 l№ V ' llyII2 The "amplification factor" of the perturbations is upper bounded by —Therefore, this bound is smaller than some given 7, if k(A) < 7 1 + 7 We thus see that the effect of joint perturbations is still controlled by the condition number of A, as formalized next. 176 OPTIMIZATION MODELS Lemma 6.3 (Sensitivity to joint perturbations in A, y) Let A be square and nonsingular; let 7 > 1 be given, and ZeZ x, 8y, 8A, 8x be such that Ax = y, (A + JA) (x + 8x) — y + 8y. 1 + 71 wfac/z implies _Ni_ < . (¥yh , MII2 |x + ^x||2 _ \ ||J/1|2 + № 6.5.4 Sensitivity of the LS solution We have seen in Section 6.5 that the sensitivity of the solution of a system of linear equations to perturbations in the y term or in the A matrix is proportional to the condition number of A. We now apply these results to study the sensitivity of the LS solution. Since any solution of the LS problem (6.5) is a solution of the normal equations (6.6), we immediately have that the sensitivity of an LS solution is dictated by the condition number of the matrix ATA. Suppose rank(A) = n, and let A = UrY*Vf be a compact SVD for A. Then, ATA = VrY?Vf, and we have that k(At A) = = k2(A). Also, since the LS solution is a linear function of the y term, x* = A'y, we see that if the y term is perturbed to y + 8y, then the LS solution is perturbed to x* + 8x, where 8x must satisfy 8x — A^Sy. If the perturbation 8y is bounded in norm as ||^yH2 ^ 1/ then the set of corresponding perturbations 8x is the ellipsoid (see Section 4.4.5 and Lemma 6.4) £Sx = {Sx = AfSy, ||<5y||2 < 1}. This ellipsoid has its axes aligned to the directions and semi-axis lengths given by When rank(A) = n, then A+ = (ATA)_1AT, and the ellipsoid has the explicit representation £&x — {$x : 8xT(ATA)8x < 1}. LINFAR EQUATIONS AND LEAST SQUARES 177 6.6 Direct and inverse mapping of a unit ball Here we focus on the linear map y = Ax, A E lRm'n, (6.22) where i G Rn is the input vector, and y E Rm is the output, see Figure 6.11, and we consider two problems that we call the direct and the inverse (or estimation) problem. 6.6.1 Direct problem In the direct problem, we assume that the input x lies in a unit Euclidean ball centered at zero, and we ask where the output y is. That is, we let x E Bn, Bn = {ze Rn : ||z||2 < 1} and we want to characterize the image of this set under the linear mapping (6.22), that is the set £y = {y : y = Ax, x £ Bn}. (6.23) Let A = WLVT be a full SVD of A, let r — rank A, and let L be the r x r diagonal matrix containing the nonzero singular values of A. Notice first that if the input x is in the direction of one of the left singular vectors Vj, then the output y is either zero (if i > r), or it is along the direction of the corresponding right singular vector U{, scaled by a factor aj. Indeed, denoting by e\ a vector of all zeros except for a one in position i, we have that Avi = UtVTVi = ULel = { ° lf [ > r' I (JiUi otherwise. Notice also that the unit ball Bn is invariant under a linear map defined via an orthogonal matrix, that is {y : y = Qx, x E Bn} = Bn, for any orthogonal matrix Q E 'Rn,n. This is due to the fact that QQT = QTQ = ln, since ||y||2 = yTy = xTQTQx = ||x||I- It follows that we may equivalently define the image set in (6.23) by assuming x = Vz, z E Bn, hence Ey = {y : y = ULz, z € Bn}. Now, defining U = [Ur Um] and zT = [zj zj], where Ur contains the first r columns of U, Unr contains the last n — r columns of U, and y = Ax Figure 6.11 Input-output map y = Ax. 178 OPTIMIZATION MODELS Z\, Z2 are the first r components of z and the last n — r components of z, respectively, recalling the block structure of £ in (5.1), we have ' EZl ' o 1 The lower block of these relations (U„ry = 0) just says that y £ 1Z{A), while the first block defines the shape of the image set inside 1Z(A). Since ||z||2 < 1 implies \\z-[ ||2 < 1, we thus have that ~ {y ^ : UJV — ^*zi/ zi ^ Br}- Since E is invertible, we can also write 2l =S ~lUjy, zjzl<l <£> yTHy<l, H = UrZ-2Uj, and H = UrZ~2Uj = AfTA*r thus finally Sy = {y G n{A) : yTHy < 1}, H = AfTA\ (6.24) The set in (6.24) is a bounded but possibly degenerate ellipsoid (which is flat on the subspace orthogonal to TZ(A)), with the axis directions given by the right singular vectors and with the semi-axis lengths given by i = 1,..., ft, see Figure 6.12, and Section 9.2.2 for a discussion on representations of ellipsoids. Figure 6.12 The image of a unit ball Bn under a linear map is an ellipsoid The ellipsoid £y takes a very simple shape, if we represent it in a proper orthogonal basis for 1Z(A). That is, expressing every y £ 71(A) as y = Urx, the image set £y is simply given by £y = {y = Urz : z G £x}, £x = {x G Rr : xTZ~2x < 1}, where £x is a non-degenerate ellipsoid with axes aligned to the standard axes of !Rr, and semi-axis lengths given by i — 1,..., r. When LINEAR EQUATIONS AND LEAST SQUARES 179 A is full rank and "wide" (that is, n > m, r = rank A = m), then A+ = AT{AAT)~1/ and H = AfTAf = (AAT)~1/ thus £y is the bounded and non-degenerate (full-dimensional) ellipsoid Sy = {y € Rm : yT(AAT)-1y < 1}. 6.6.2 Inverse problem We tackle the inverse problem in a similar way: suppose we know that the output y is in the unit ball Bm, and we ask what is the set of input vectors x that would yield such a set as output. Formally, we seek the pre-image of the unit ball under the linear map (6.22), that is £x = {xeRn: Ax e Bm}. Since Ax G Bm if and only if xT ATAx < 1, we immediately obtain that the pre-image set we seek is the full-dimensional (but possibly unbounded) ellipsoid (see Section 9.2.2) £x = {x e : xT(ATA)x < 1}. This ellipsoid is unbounded along directions x in the nullspace of A (clearly, if x E J\f(A), then y = Ax = 0 E Bm). If A is "skinny" and full rank (i.e., n < m and r = rank(A) = n), then ATA >- 0 and the ellipsoid £x is bounded. The axes of £x are along the directions of the left singular vectors V{, and the semi-axis lengths are given by cr--1, i = see Figure 6.13. Figure 6.13 Pre-image of an output unit ball under the linear map y = Ax. We summarize our findings in the next lemma. l80 OPTIMIZATION MODELS Lemma 6.4 (Image and pre-image of a unit ball under a linear map) Given A E Rm'”, let A = Ur'LVf be a compact SVD of A, with r = rank(A), and let B^ = {z eW : \\z\\2 < 1} be the unit Euclidean ball in W. Let £y = {y e Rm : y — Ax, x £ Bn}, Ex = {x E R” : Ax E Bm}. 1. The image set Ey is a bounded ellipsoid having as axes the right singular vectors of A: Uj, i — 1,..., m. The lengths of the semi-axes are given by cri > 0, for i = 1,... ,r, and are zero for i = r -b 1,..., m. That is, the ellipsoid is degenerate (flat) ifr<m. Moreover, if r = m < n, then Ey is bounded and full-dimensional, and it has the explicit representation ey = {y G ]Rm : yT{AAT)~ly < 1}, AAr >~ 0. 2. The pre-image set Ex is a non-degenerate ellipsoid having as axes the left singular vectors of A: Vj, i = 1,... ,n. The lengths of the semi-axes are given by af1 > 0,for i = 1,..., r, and are infinite for i = r + 1,..., n. That is, the ellipsoid is unbounded (i.e., cylindrical) along the directions vr+i,... ,vn, if r < n. Moreover, if r = n < m, then Ex is bounded, and it has the explicit representation sx = {x € R" : xr(ATA)x < 1}, ATA 0. 6.6.3 The control and estimation ellipsoids The ellipsoids £y, Ex are usually called the control and the estimation ellipsoid, respectively. This terminology reflects two important engineering interpretations that can be given to £y, £x, as discussed next. Consider the input/output map (6.22), where x is interpreted as the actuator input given to the "system" A, and y is the resulting output; x usually represents physical quantities such as forces, torques, voltages, pressures, flows, etc. The Euclidean norm squared ||x ||2 has the interpretation of energy associated to the input. The control ellipsoid £y thus represents the set of outputs that are reachable with unit input energy. The singular value 07 measures the control authority of the input along the output direction U{ E Rm. Small aj means that it is hard (i.e., energy expensive) to reach outputs along the direction U[. If r < m, then there is actually no control authority along the directions ur+\,..., um, i.e., the output subspace spanned by these vectors is unreachable. LINEAR EQUATIONS AND LEAST SQUARES l8l Conversely, suppose one needs to determine the value of an unknown parameter 6 via noisy linear measurements on 6 of the form a]e = yi + yi, i = where is the nominal reading of the measurement, and yz- represents the uncertainty (e.g., noise) on the z-th measurement. In vector format, we have A6 = $ + y, where we assume that the uncertainty vector y is bounded in norm: ||yII2 < 1* Let 6 be the parameter value corresponding to the nominal readings, that is A§ = y, then the actual (unknown) value of 6 is 6 = 6 + x, where x represents the uncertainty on 0 induced by the uncertainty in the measurements. It clearly holds that A (6 + x) — y -Fy, whence A* = y/ II3/II2 < I- The estimation ellipsoid Ex thus provides the uncertainty region around the nominal parameter 0, caused by the uncertainty y on the nominal measurement readings. The axes of this ellipsoid are along the directions of the right singular vectors of matrix A, with semiaxis lengths given by ar1, z = 1,..., r: the smaller is, the smaller is the confidence interval around the 6 parameter along direction V{. The shortest axis cr^xv\ denotes the direction in input space which is least sensitive to measurement errors; the largest axis a~lvr denotes the direction in input space which is most sensitive to measurement errors. Example 6.7 (Force generation) Consider again Example 6.3 relative to a rigid body equipped with n thrusters that need to impress to the body a desired resultant force and torque. Here, the input vector x contains the individual forces generated by the thrusters, and the output y is the resultant force/torque. As a numerical example we consider n = 6 thrusters positioned at angles 0i = 0, 62 = n/16, $3 = (15/16)71, O4 — n, % = (17/16)71, 06 = (31/16)71, and take as output only the components of the force: y = \fx /y]T- Thus, we have y = Ax, with cos 06 sin 0i sin 06 1 0.9808 -1 -0.9808 0 0.1951 0 -0.1951 The set of output resultant forces y obtainable with inputs x such that || x || 2 < 1 is given by the control ellipsoid £y = {y ■ yJp~1y < i}. 182 optimization models P = AAT = 5.8478 0.0 0.0 0.1522 The control ellipsoid has in this case its axes aligned with the standard axes of R2, and semi-axis lengths given by cq — 2.4182 (horizontal axis) and c2 = 0.3902 (vertical axis), see Figure 6.14. Generating an output force in the vertical direction thus requires about 6.2 times the control effort than generating an output force in the horizontal direction. Example 6.8 (Trilateration) Consider Example 6.2, dealing with a problem of localization of an object's coordinates from measurements of the distance of the object from three known beacons. This is indeed an estimation problem, where the unknown position is to be determined from (possibly noisy) distance measurements. Calling a\, a2,a^ the points representing the three beacons, the A matrix relating the unknown position with the vector of measured data is r 2(a2-fli)T [ 2(a3 -ai)T The estimation ellipsoid Ex gives us a precise assessment of how errors in the measurement vector impact the uncertainty on the object position. If the measurement errors are bounded in the unit Euclidean ball, the uncertainty region around the nominal object position is given by the estimation ellipsoid £x = {x : xTHx < 1}, H = AT A. For a numerical example, we consider two situations. In the first scenario the beacons are located at " 0 " ’ 5 " a1 = , a2 = / «3 = while in the second scenario they are located at ’ 0 " " 0 " " 5 " ai = , a2 — , ^3 = Figure 6.15 shows the estimation ellipsoids in the two scenarios. The estimation ellipsoids represent in this example the confidence regions in Figure 6.14 Control ellipsoid of resultant forces reachable with unit-norm input. LINEAR EQUATIONS AND LEAST SQUARES 183 space where the object's actual position is located, given the measurements' uncertainty. We observe that in the first scenario we have large uncertainty along the V2 axis (which is almost vertical), hence in this situation we would have good confidence on the object's horizontal location and poor confidence on the vertical one. The second scenario yields a more "spherical" ellipsoid, with more balanced uncertainty in the horizontal and vertical axes. 6.7 Variants of the least-squares problem 6.J.1 Linear equality-constrained LS A generalization of the basic LS problem (6.5) allows for the addition of linear equality constraints on the x variable, resulting in the constrained problem min ||Ax — y\\\, s.t.: Cx = d, where C E and (J E Rp. This problem can be converted into a standard LS one, by "eliminating" the equality constraints, via a standard procedure described in Section 12.2.6.1. Alternatively, the multiplier-based method described in Section 9.6.1 can be used to solve the problem via an augmented system of normal equations. Figure 6.15 Confidence ellipsoid for the actual object position in the first scenario (left) and in the second scenario (right). 184 OPTIMIZATION MODELS 6.7.2 Weighted LS The standard LS objective is a sum of squared equation residuals ll^-ylli = E'f' ri = aJx~yi' where aj, i — 1,..., m, are the rows of A. In some cases, the equation residuals may not be given the same importance (for example, it is more important to satisfy, say, the first equation than the other ones), and this relative importance can be modeled by introducing weights into the LS objective, that is fo(x) = £w}rl where Wj > 0 are the given weights. This objective can be rewritten in the form fo{x) = || W(Ax — y)\\\ = II Awx-yw\\l, W = diag (w\,... ,wm), Aw = WA, yw — Wy. Hence, the weighted LS problem still has the structure of a standard LS problem, with row-weighted matrix Aw and vector yw. Actually, the weight matrix may also be generalized from diagonal to symmetric positive definite. Assuming W >- 0 means giving different weights to different directions in the residual space. Indeed, let r = Ax — y denote the vector of residuals, then the weighted LS objective is f0(x) = || W(Ax — y) H2 = \\Wr\\22 = rT(WTW)r, and the unit-level set in residual space £r = {r : rT(WTW)r < 1} is an ellipsoid with axes aligned to the eigenvectors uz of W, and semiaxis lengths given by Ar1, where A/, i — 1,... ,m, are the eigenvalues of W. The largest semi-axis, having length A~\n, is the direction along which the cost function /0 is least sensitive to residuals: the residual vector can be large along this direction, and still remain in the unit- level cost set Er. Similarly, the smallest semi-axis, having length A“^x, is the direction along which the cost function /0 is most sensitive to residuals. Therefore, Amax = Ai > • • • > Am = Amjn act as residual weights, along the corresponding directions The solution of the weighted LS problem is thus analogous to the solution of a standard LS problem, and it amounts to solving the weighted normal equations = A^yw, that is ATPAx = ATPy, P = WTW, WhO. LINEAR EQUATIONS AND LEAST SQUARES 185 6.7.3 ^2-regularized LS Regularized LS refers to a class of problems of the form min ||Ax — yHi + <p(x), where a "regularization," or penalty, term (p(x) is added to the usual LS objective. In the most usual cases, (p is proportional either to the l\ or to the £2 norm of x. The l\-regularized case gives rise to the LASSO problem, which is discussed in more detail in Section 9.6.2; the ^-regularized case is instead discussed next. To motivate the idea of ^-regularization, we use a "control" interpretation. Consider the linear map f(x) = Ax, where x is the input, and let y be a given, desired, output, see Figure 6.16. The standard LS problem can be interpreted as a control problem: we want to determine a suitable input x so that the output is as close as possible (in Euclidean norm sense) to the assigned desired output y. Here, all the focus is on matching the desired output as closely as possible. However, doing so requires an effort in the input, that is it requires a certain energy in the "actuators," and this energy can be measured by the squared norm of the input vector, ||x||2- A natural tradeoff thus arises, since on the one hand we want the output matching error to be small, and on the other hand we want to spend little input energy for achieving this goal. We may formalize such a tradeoff by defining a "mixed" objective fo(x) = \\Ax-y\\l + 'Y\\x\\l, 7>0, where 7 > 0 defines the relative weights between the two competing criteria: small 7 biases the problem towards solutions with good output matching but possibly large input energy; large 7 biases the problem instead towards solutions with small input energy but possibly poor output matching. The resulting problem Figure 6.16 A linear control problem: find x such that Ax is close to target output y. min \\Ax — y\\\ + lIMli- is an ^-regularized LS problem. Recalling that the squared Euclidean norm of a block-partitioned vector is equal to the sum of the squared norms of the blocks, i.e., = m we see that the regularized LS problem can be rewritten in the format of a standard LS problem as follows: Il^*-yll2 + 7ll*ll2 = l86 OPTIMIZATION MODELS A = y = A more general format for regularized LS allows for the introduction of weighting matrices on the output matching residuals and on the deviation of the input term from a nominal value, resulting in the following problem: min \\Wl{Ax-y)\\l + ||W2(x — x0)||| where Wi >- 0, W2 >- 0 are weighting matrices, and Xq is some given nominal value for x. This generalized regularization is sometimes referred to as Tikhonov regularization (see Section 6.7.4 f°r an interpretation in terms of Bayesian statistics). This regularized LS problem can still be cast in the format of a standard LS problem as follows: min ||Ax — ; A = y = Clearly, (6.25) is a special case of (6.27), with Wi = lm, W2 = y/yln, x0 = 0„. 6.7.3.1 Sensitivity of the regularized solution. We saw in Section 6.3.4 thaLthe sensitivity of an LS solution to changes in the y parameter is dictated by the condition number k(A A) -(A). We next show that indeed regularization improves (i.e., decreases) the condition number, and hence it reduces the sensitivity of the solution to perturbations in the problem data. Consider for simplicity the full-column rank case, where r — rank(A) — n. In such a case, the compact SVD of A can be written as A = iirEVT, with WT = In. The regularized problem (6.23) is a standard LS problem with the A matrix given in (6.26), for which we have ATA = AT A + yl„ = Vl}VT + 71„ V(E2 + 7 W1 The singular values of A A are thus given by cj + y, where <ju i = 1 are the singular values of A. The condition number k(At A) is then given by k(A A) = crmax(ATA) _ CT-2iax(A) + 7 _ k2(A) + L ,(^) + 7 1 + L 1 + L ' 1+ L' LINEAR EQUATIONS AND LEAST SQUARES 187 L = 7 Whenever L 1, the second term in the expression of k(AtA) is close to one, whereas the first term is much smaller than k2(A), therefore L > 1 => k(AtA) < k(AtA), which means that regularization may greatly improve the condition number. Notice further that even in the case when AT A is singular (recall that the LS solution is non-unique in this case), the regularized matrix AT A is always nonsingular, hence the solution of the regularized problem is always unique (for 7 > 0). 6.7.4 Bayesian interpretation of Tikhonov regularization There is an important interpretation of the regularized problem (6.27) in the context of statistical estimation. Consider a generalization of the linear, noisy measurements model introduced in Section 6.3.4.5, Eq. (6.11): y = Ax + zm, (6.29) where zm represents a vector of random measurement errors having zero mean and covariance matrix var{zmz^} = £m >- 0. When we discussed linear unbiased estimation (BLUE) in Section 6.3.4.3, our goal was to build a minimum variance estimator of x as a linear function of the measurements y. In the so-called Bayesian approach to statistical estimation, however, one always assumes, before making any actual measurement on the unknown parameter x, that one has some prior belief on the value of this parameter. Such a prior belief may come from past experience or other pre-existing information, and it is quantified by assuming a prior probability distribution on x. Here, we assume that our prior information is synthesized by the relation X0 = X -f Zp, (6.30) where Xq is a given vector quantifying our a priori belief on x (i.e., our prior knowledge makes us believe that a reasonable value for x may be Xq), and zp is a zero-mean random vector quantifying the uncertainty we have on our a priori belief. We suppose that zp is statistically independent of zm, and we only know the covariance matrix of zp: var{zp} =JLp y 0. 188 OPTIMIZATION MODELS Now, we build a linear estimator of x which takes into account both the information coming from the measurements (6.29) and the prior information (6.30), that is x = Kmy + Kpx0, where Km is the measurement gain of the estimator, and Kp is the prior information gain. For this estimator to be BLUE we need to determine Km,Kp so that E{x} = x and var{x} is minimal. Notice that ,K = ’ Km Kp ‘ . x° . x = Kyf y = where we may consider y as an augmented measurement y — Ax + z, A = ~ A " , z = Em 0 0 E* E = var{z} = since zm and zv are assumed to be independent. Then, we have that E{x} = E {Ky} = E (K(Ax + z)} = KAx, hence for the estimator to be unbiased we must have E{x} = x, thus KA = I. If we let x1/2 = 15 Vi/2 Q y 1/2 be the matrix square-root of E >- 0, then the previous relation can also be written as KL1/2T.~1/2A = I, which means that ' 2 should be a left inverse of X Any such left inverse can be written as KZ1/2 = (X,“1/2A)+ + Q, where Q is any matrix such that Q(IT1/2A) - 0. x — E{x} = x — x = Kz, LINEAR EQUATIONS AND LEAST SQUARES 189 var{x} = E{KzzTKT} = KZKT = KE1/2E1/2KT [using (6.32), (6.33)] = (IT1/2A)+(E~1/2A)+t + QQt. Since for any Q we have var{f} y (E_1'/2A)+(E“1'/2A)+T, we have that the minimal variance is attained for Q = 0, thus the BLUE estimator gain is given by (6.32) with Q = 0, i.e., f = Ky = (E~1/2A)+lr1/2y = £m1/2A 1 ‘ 2m1/2y ' = ( A'E^A + E; aT y —1/2 y —1/2 Al Z-jm Lup = (Ve-M + L-1) (at E-^ + Ej ^0) • E+=var{f} = (ATE-M + E-1) \ we obtain the two matrix gains in (6.31) as Km = X+ATE-\ p -= S+E-1. It is immediate to verify from (6.34) that the BLUE estimator coincides with the solution of the regularized LS problem in (6.27)-(6.28), by setting weights Wi = Y^12, W2 = YpX/2. Using the matrix inversion lemma (see (3.9)), we can also express Z+ as follows: E+ = (atE-xA + E"1) 1 = Ep - EpAT(Em + AEpAT)_1AEp. Letting Q — Ym + ALpAT, we can express the estimator in the alternative form x = E+ (ATE~1y + Ep1a:o) = (I - EpATQ-1A)Ep (ATE-Xy + E-1^) [add and subtract EpATQ ly\ = Xq + EpATQ_1 (y — Axq) + Zy, Z = LPATL-1 - EpATQ_1(AEpATE^1 + I). 190 OPTIMIZATION MODELS Collecting Lp on the left and Lm1 on the right in the expression for Z, we have that therefore the expression for the estimator in (6.35) simplifies to This latter format of the estimator is referred to as the innovation form, where the term y — Ax0 has indeed the interpretation of innovation, or new information, brought by the measurement y with respect to the a priori best guess of the output Ax0. The derivations presented in this paragraph form the basis of the recursive solution approach to linear estimation (and of recursive solution of LS problems): we started from prior information Xq, collected measurements y, and built the BLUE estimator x (in the form (6.31) or (6.36)). If now a new measurement, say ynew, becomes available, we can iterate the same idea by constructing an updated estimator that has the previous estimate x as prior information and ynew as measurement, and then iterate the mechanism as further new measurements become available. The explicit formulation of the recursive BLUE estimator is left to the reader as an exercise. 6.7.5 Total least squares One of the interpretations of the standard LS problem mentioned in Section 6.3.4 was in terms of minimal perturbation Sy in the y term necessary for the linear equations Ax = y -f Sy to become feasible. The total least-squares (TLS) approach extends this idea by allowing the perturbation to act both on the y term and on the A matrix. That is, we seek a perturbation matrix [5A 5y] £ Rm,n+i w^h minimal Frobenius norm, such that the equations (A + SA)x = y + 6y are feasible. Formally, we seek to solve the following optimization problem: Z = Zp(at-ATQ-1(AZpAT+ 'Lm)')Z-1 = Lp (v4T — y4TQ-1Q) Z,"1 = 0, x = x0 + 'LpAJQ X(y-Ax0). min ||[M£y]||! s.t.: y + Syen(A + SA). (6-3 7) DT = [Ay], SDT = [6A5y\r LINEAR EQUATIONS AND LEAST SQUARES 191 and let us assume for simplicity that rank(D) = n 4- 1, and let DT = E <*uiv7 (6-39) be an SVD for DT. We shall make the further technical assumption that Cn+1 < ^min(^)* (b-4°) Then, the feasibility condition y + Sy G K(A + M) is equivalent to the existence of a vector c G R”+1 (normalized without loss of generality with || c || 2 = 1) such that (DT + SDT)c = 0, cn+1 ± 0, ||c||2 = 1, (6.41) and, in turn, since DT is assumed full rank, this condition is equivalent to requiring that DT 4- SDT becomes rank deficient (and that a vector can be found in the nullspace of this matrix with a nonzero component in the (n 4- l)-th position; but we'll see that this latter requirement is insured by condition (6.40)). Consider then the following problem: min || <5DT ||p s.t.: rank(DT + SDT) = n. A solution for this problem can be readily obtained, as discussed in Remark 5.2, and it is given by SD = for which we have DT 4-SDT* = YLcriuiv!' whence a vector c in the nullspace of DT + SDT* is c = vn+\. We now check that this vector is such that cn+1 7^ 0, hence all conditions in (6.41) are satisfied. Indeed, suppose for contradiction that c = vn+i = [uT 0]T then, since vn+\ is an eigenvector of DDT associated with cr2n+v DDTvn+i = o^+1vn+1 which would imply that (ATA)v — which is in contradiction with the starting assumption (6.40). Once the optimal perturbation SDT* = [5A* Jy*] is found, a solution x such that (A + 5A*)x = y + 6y* A y 7n+1* 0 192 OPTIMIZATION MODELS is easily found by considering that [x — 1]T is proportional to uw+i. Hence once vn+\ is found from SVD of D, we can easily determine oc such that Also, x can be found directly by considering that [x — 1]T must be an eigenvector of DDT associated with eigenvalue that is - /7"2 yTy _ The upper block in these equations then yields the optimal solution xtls = {ATA - o^+1I)~1ATy. We summarize the previous derivations in the following theorem. Theorem 6.2 (Total least squares) Given A G Rm'n/ y G Rm, with the notation set in (6.38), (6.39), assume that rank(D) — n -Vi, and that condition (6.40) is satisfied. Then the TLS problem (6.37) has a unique solution given by [5A*5y*] = -an+xun+1vl+v Moreover, the optimal solution vector xjls such that (A + SA*)xns = y + 6y* is found as *tls = (ATA - a^+lI)~'lATy. Remark 6.2 Interestingly, the TLS problem is also closely connected to the hyperplane fitting problem discussed in Exercise 5.6. To see this connection, consider a hyperplane through the origin Ti = {z G R”+1 : zTc = 0} and let us seek a normal direction c, ||c||2 = 1, such that the mean- square distances from TL to the data d{ forming the columns of matrix D in (6.38) is minimized. The sum of these squared distances is equal to ||DTc||2, which is minimized by the direction c — uM+i, where vn+i is the left singular vector of DT corresponding to the smallest singular value (7'n+1. This is the same direction arising in the TLS problem in (6.41), from which the optimal TLS solution xTLg can be found by normalizing the last entry in c to —1. LINEAR EQUATIONS AND LEAST SQUARES 193 6.8 Exercises Exercise 6.1 (Least squares and total least squares) Find the least- squares line and the total least-squares3 line for the data points (xj, x/i), i — 1,.. .,4, with x = (—1,0,1,2), y = (0,0,1,1). Plot both lines on the same set of axes. Exercise 6.2 (Geometry of least squares) Consider a least-squares problem f — rnin \\Ax — 1/H2, where A £ ]Rm'”, y £ Rm. We assume that y £ 1l{A)f so that p* > 0. Show that, at optimum, the residual vector r = y — Ax is such that rTy > 0, ATr = 0. Interpret the result geometrically. Hint: use the SVD of A. You can assume that m > n, and that A is full column rank. Exercise 6.3 (Lotka's law and least squares) Lotka's law describes the frequency of publication by authors in a given field. It states that XaY — b, where X is the number of publications, Y the relative frequency of authors with X publications, and a and b are constants (with b > 0) that depend on the specific field. Assume that we have data points (X*, Yz), i = 1,..., m, and seek to estimate the constants a and b. 1. Show how to find the values of a, b according to a linear least- squares criterion. Make sure to define the least-squares problem involved precisely. 2. Is the solution always unique? Formulate a condition on the data points that guarantees unicity. Exercise 6.4 (Regularization for noisy data) Consider a least-squares problem mm \\Ax — J/H2, in which the data matrix A £ lR.m'n is noisy. Our specific noise model assumes that each row aj £ lRn has the form a\ — d( + Ui, where the noise vector U{ £ R" has zero mean and covariance matrix o2In, with a a measure of the size of the noise. Therefore, now the matrix A is a function of the uncertain vector u = {u\,... ,un), which we denote by A(u). We will write A to denote the matrix with rows aj, i = 1,..., m. We replace the original problem with 3 See Section 6.7.5. mm Eu{\\A(u)x -y\\\}, 194 OPTIMIZATION MODELS where Eu denotes the expected value with respect to the random variable u. Show that this problem can be written as mm \\Ax-y\\l + \\\x\\l, where A > 0 is some regularization parameter, which you will determine. That is, regularized least squares can be interpreted as a way to take into account uncertainties in the matrix A, in the expected value sense. Hint: compute the expected value of ((#; + Ui)Tx — yi)2, for a specific row index i. Exercise 6.5 (Deleting a measurement in least squares) In this exercise, we revisit Section 6.3.5, and assume now that we would like to delete a measurement, and update the least-squares solution accord- ingly.4 We are given a full column rank matrix A € iR'"'", with rows aj, i = 1,..., m, a vector y G ]Rm, and a solution to the least-squares problem j* =argmin YL(aix~yi)1 ~ argmin ||Aj — 1/H2. x i—1 x Assume now we delete the last measurement, that is, replace (am,ym) by (0,0). We assume that the matrix obtained after deleting any one of the measurements is still full column rank. 1. Express the solution to the problem after deletion, in terms of the original solution, similar to the formula (6.15). Make sure to explain why any quantities you invert are positive. 2. In the so-called leave-one-out analysis, we would like to efficiently compute all the m solutions corresponding to deleting one of the m measurements. Explain how you would compute those solutions computationally efficiently. Detail the number of operations (flops) needed. You may use the fact that to invert a n x n matrix costs 0(n3). Exercise 6.6 The Michaelis-Menten model for enzyme kinetics relates the rate y of an enzymatic reaction to the concentration x of a substrate, as follows: where /3*, / = 1,2, are positive parameters. 1. Show that the model can be expressed as a linear relation between the values 1/y and 1/x. 4 This is useful in the context of cross- validation methods, as evoked in Section 13.2.2. LINEAR EQUATIONS AND LEAST SQUARES 193 2. Use this expression to find an estimate /3 of the parameter vector /3 using linear least squares, based on m measurements (x*,i/i), i = 1 ,...,ra. 3. The above approach has been found to be quite sensitive to errors in input data. Can you experimentally confirm this opinion? Exercise 6.7 (Least norm estimation on traffic flow networks) You want to estimate the traffic (in San Francisco for example, but we'll start with a smaller example). You know the road network as well as the historical average of flows on each road segment. 1. We call qi the flow of vehicles on each road segment i G I. Write down the linear equation that corresponds to the conservation of vehicles at each intersection j G /. Hint: think about how you might represent the road network in terms of matrices, vectors, etc. 2. The goal of the estimation is to estimate the traffic flow on each of the road segments. The flow estimates should satisfy the conservation of vehicles exactly at each intersection. Among the solutions that satisfy this constraint, we are searching for the estimate that is the closest to the historical average, q, in the f^-norm sense. The vector q has size I and the z-th element represents the average for the road segment i. Pose the optimization problem. 3. Explain how to solve this problem mathematically. Detail your answer (do not only give a formula but explain where it comes from). JS.0 _EL 0 M.0 in 0 .EL 4. Formulate the problem for the small example of Figure 6.17 and solve it using the historical average given in Table 6.1. What is the flow that you estimate on road segments 1, 3, 6, 13, and 22? 3. Now, assume that besides the historical averages, you are also given some flow measurements on some of the road segments of Figure 6.17 Example of the traffic estimation problem. The intersections are labeled a to h. The road segments are labeled 1 to 22. The arrows indicate the direction of traffic. 196 OPTIMIZATION MODELS the network. You assume that these flow measurements are correct and want your estimate of the flow to match these measurements perfectly (besides matching the conservation of vehicles of course). The right column of Table 6.1 lists the road segments for which we have such flow measurements. Do you estimate a different flow on some of the links? Give the difference in flow you estimate for road segments 1, 3, 6, 15, and 22. Also check that your estimate gives you the measured flow on the road segments for which you have measured the flow. Exercise 6.8 (A matrix least-squares problem) We are given a set of points pi,..., pm G IRn, which are collected in the n x m matrix P = [pi, • • • / Pm]- We consider the problem m \ minF(X) = £l\\xi-pi\\l + 9 E \\xi~xj\\l X i=l Z 1 <i,j<m where A > 0 is a parameter. In the above, the variable is an n x m matrix X = [x\, ...,xm], with Xj G IRn the f-th column of X, i = 1,...,m. The above problem is an attempt at clustering the points pf, the first term encourages the cluster center xz- to be close to the corresponding point p/, while the second term encourages the x;s to be close to each other, with a higher grouping effect as A increases. 1. Show that the problem belongs to the family of ordinary least- squares problems. You do not need to be explicit about the form of the problem. 2. Show that ? E ll*«-*/ll2 = trace XHXT, 1 <i,j<m where H = mlm — 11T is an m x m matrix, with Im the m x m identity matrix, and 1 the vector of ones in Rm. 3. Show that H is positive semidefinite. 4. Show that the gradient of the function F at a matrix X is the n x m matrix given by VF(X) =2(X —P + AXH). Hint: for the second term, find the first-order expansion of the function A -> trace((X + A)H(X + A)T), where A e R"'m. 5. As mentioned in Remark 6.1, optimality conditions for a least- squares problem are obtained by setting the gradient of the objective to zero. Using the formula (3.10), show that optimal points Table 6.1 Table of flows: historical averages cj (center column), and some measured flows (right column). are of the form 1 mX „ . x = ...-pi+ -t p, i = l,...,m, raA + lr raA + 1 where p = (l/m)(p\ + ... + pm) is the center of the given points. 6. Interpret your results. Do you believe the model considered here is a good one to cluster points? Matrix algorithms In this chapter we present a compact selection of numerical algorithms for performing basic matrix computations. Specifically, we describe the power iteration method for computing eigenvalues and eigenvectors of square matrices (along with some of its variants, and a version suitable for computing SVD factors); we discuss iterative algorithms for solving square systems of linear equations, and we detail the construction of the QR factorization for rectangular matrices. 7.1 Computing eigenvalues and eigenvectors 7.1.1 The power iteration method In this section we outline a technique for computing eigenvalues and eigenvectors of a diagonalizable matrix. The power iteration (PI) method is perhaps the simplest technique for computing one eigenvalue/eigenvector pair for a matrix. It has rather slow convergence and it is subject to some limitations. However, we present it here since it forms the building block of many other more refined algorithms for eigenvalue computation, such as the Hessenberg QR algorithm, and also because interest in the PI method has been recently revived by applications to very large-scale matrices, such as the ones arising in web-related problems (e.g., Google PageRank). Many other techniques exist for computing eigenvalues and eigenvectors, some of them tailored for matrices with special structure, such as sparse, banded, or symmetric. Such algorithms are described in standard texts on numerical linear algebra. Let then A E IRn,n, assume A is diagonalizable, and denote by Ai,...,A2 the eigenvalues of A, ordered with decreasing modulus, that is | Ai| > | A21 > • • • > |An| (notice that we are assuming that |Ai| 200 OPTIMIZATION MODELS is strictly larger than |A21, that is A has a "dominant" eigenvalue). Since A is diagonalizable, we can write it as A = I/All-1, where we can assume without loss of generality that the eigenvectors u\,...,un forming the columns of U are normalized so that \\uj\\2 = 1. Notice that from Lemma 3.3 we have that Ak = UAkU~1, that is AkU = UAk. Let now x E Cn be a randomly chosen "trial" vector, with ||x||2 = 1, define x — Uw, and consider Akx = AkUw = UAkzv = Y^wirfui- Observe that, if x is chosen at random (e.g., from a normal distribution and then normalized), then the first entry W\ of w, is nonzero with probability one. Dividing and multiplying the previous expression by Aj, we obtain That is, Akx has a component oc^U\ along the span of u\, and a component oc^z along the span of ..., un, i.e., A x = cc^Ui + oc^z, ock = W\\k E €, z = ” Wi (K “wi VAi For the size of the 2 component, we have (letting /3* = Wj/wi) № = = Elfrl k n ««•lb = Elfrl where the last inequality follows from the ordering of the moduli of the eigenvalues. Since |A2/Ai| < 1, we have that the size of the 2 component, relative to the component along U\, goes to zero for k —» oo, at a linear rate determined by the ratio | A21 /1 Ai |. Thus Akx —> oc^U\, which means that Akx tends to be parallel to u\, as k —> 00. Therefore, by normalizing vector Akx, we obtain that A ^ t x(k)=ji^E' <7-i> and notice also that x(k) —> wi implies that Ax(k) —» = Ai^i, hence x'k(k)Ax(k) —> Aii^i/i (here, * denotes the transpose conjugate, since the Uj vectors can be complex valued). Therefore, recalling that u\u\ — \\ui H2 = 1, we have lim x'k(k)Ax(k) = Ai, that is the product x'k(k)Ax(k) converges to the largest-modulus eigenvalue of A. Evaluating (.Akx)*A(Akx) = + u\Az + X^ui + z*Az) shows that the convergence of x(k)'kAx(k) to Ai still happens at a linear rate dictated by the | A21 /1 Ai | ratio. The above reasoning suggests the following iterative Algorithm 1, where we notice that normalization in (7.1) only changes the norm of vector x(k) and not its direction, therefore normalization can be performed at each step (as in step 4 in the algorithm) while still obtaining x(k) in the form (7.1) at the k-th iteration. Algorithm 1 Power iteration. Require: A G diagonalizable, |Ai| > |A21, x G Cn, ||x||2 = 1 1: k — 0; x(k) = x 2. repeat 3: y(fc + 1) = Ax(k) 4: x(k + l) =y{k + l)/\\y(k + l)\\2 3: A (k + 1) = x*(k + 1 )Ax(k + 1) 6: k = k + l 7: until convergence One big advantage of the power iteration is that the algorithm relies mostly on matrix-vector multiplications, for which any special structure of A, such as sparsity, can be exploited. Two main drawbacks of the PI method are that (a) it determines only one eigenvalue (the one of maximum modulus) and the corresponding eigenvector, and (b) its convergence rate depends on the ratio |A2|/1Ai|, hence performance can be poor when this ratio is close to one. One technique to overcome these issues is to apply the PI algorithm to a properly shifted version of matrix A, as discussed next. 202 OPTIMIZATION MODELS 7.1.2 Shift-invert power method Given a complex scalar cr, and A G 1Rn,n diagonalizable, consider the matrix Bff = (A-(Tl)-x. By the spectral mapping theorem, see (3.15), Ba has the same eigenvectors as A, and the eigenvalues of Ba are ]i{ = (A* — cr)-1, where A/, i — 1,..., n are the eigenvalues of A. The largest modulus eigenvalue of Ba, Pmax/ now corresponds to the eigenvalue A* which is closest to cr in the complex plane. Applying the PI method to Ba we thus obtain the eigenvalue A; which is closest to the selected cr, as well as the corresponding eigenvector. The shift-invert power method is outlined in Algorithm 2. Algorithm 2 Shift-invert power iteration. Require: A G Rn,n diagonalizable, (A — crl)""1 has a dominant eigenvalue, x G Cn, 11 at 112 = 1, cr G C 1: k — 0; x(k) = x 2: repeat 3: y(k + l) = (A-criy'xik) 4: x(k + l) =y(k + l)/||y(* + l)||2 3: A (k + 1) = x*(k + 1 )Ax(k + 1) 6: k = k + 1 7: until convergence The advantage of the shift-invert power method over the PI method is that we can now converge rapidly (but still at linear rate) to any desired eigenvalue, by choosing the "shift" cr sufficiently close to the target eigenvalue. However, the shift-invert method requires that one knows beforehand some good approximation of the target eigenvalue, to be used as a shift. If such a good approximation is not known in advance, a variation of the method would be to start the algorithm with some coarse approximation a, and then at some point when a reasonable approximation of the eigenvector is obtained, to modify the shift dynamically, improving it iteratively as we improve the estimate of the eigenvector. This idea is discussed in the next paragraph. 7.1.3 Rayleigh quotient iterations Suppose that at some step of the shift-invert power algorithm we have an approximate eigenvector x(k) f=- 0. Then, we look for some approximate eigenvalue cr^, that is for a scalar that satisfies approximately the eigenvalue/eigenvector equation x(k)crjc ~ Ax(k), where by approximately we mean that we look for a^ such that the squared norm of the equation residual is minimal, i.e., min || x(k)ajc — Ax(k) ||2- By imposing that the derivative of this function with respect to CTfr is zero, we obtain _ x*(k)Ax(k) k x*{k)x(k) ' (7 ) a quantity known as a Rayleigh quotient, see also Section 4.3.1. If we adaptively choose shifts according to (7.2) in a shift-invert power algorithm, we obtain the so-called Rayleigh quotient iteration method, outlined in Algorithm 3. Unlike the PI method, the Rayleigh quotient iteration method can be proved to have locally quadratic convergence, that is, after a certain number of iterations, the convergence gap of the running solution at iteration k + 1 is proportional to the square of the gap of the solution at iteration k. Algorithm 3 Rayleigh quotient iteration. Require: A E lRn,n, x E C”, ||x||2 = 1 Ensure: A is diagonalizable, x is an approximate eigenvector of A 1: k = 0; x(k) = y 2: repeat _ x*(k)Ax(k) 3' Uk ~ x*(k)x(k) 4: y(k + l) = (A-o-kl)~lx(k) 5: x(k +1) = y(k + l)/||y(fc + 1)||2 6: k = k + 1 7: until convergence Example 7.1 As an example of application of the PI method and its variants, we consider the problem of computing the Google PageRank eigenvector discussed in Example 3.3. Here, we have 0 1 \ ' and, due to the nature of the problem, we know in advance that the dominant eigenvalue is X\ = 1. We actually computed exactly (since this was a simple, toy-scale problem) the corresponding eigenvector in Example 3.3, v = ^-[1249 6] , which we can now use to quantify the distance 204 OPTIMIZATION MODELS of the approximate iterates produced by the algorithms from the actual exact eigenvector. Notice that the actual eigenvector normalization that is used in the algorithms is irrelevant: one may use the Euclidean norm, or any other normalization such as, for instance, requiring that the entries of the eigenvector sum up to one. Figure 7.1 shows the course of the error e(k) = ||x(fc)/||*(fc)||2 ~ ^/IMklb over 20 iterations of the basic PI algorithm (Algorithm 1), of the shift-invert algorithm (Algorithm 2, where we chose a — 0.9), and of the Rayleigh quotient iteration algorithm (Algorithm 3, started with constant a = 0.9, with adaptive Rayleigh quotient adjustments taking over after the first two plain iterations). Figure 7.1 Approximation error for the dominant eigenvector of the PageRank matrix. 7.1.4 Computing the SVD using power iterations The factors of the SVD of A G lRm,n can be obtained by computing the spectral factorization of the two symmetric matrices AAT and ATA. Indeed, we have seen in the proof of Theorem 3.1 that the V factor is the eigenvector matrix from the spectral factorization of AT A = VAnVT and that the U factor has as columns the eigenvectors of AAT = UAmUT. (7.3) An and Am are diagonal matrices whose first r diagonal entries are the squared singular values of, i = 1,..., r, and the remaining diagonal entries are zero. In the following, we outline how a power iteration method can be used to determine the left and right singular vectors corresponding to the largest singular value of a matrix. The basic idea is to apply MATRIX ALGORITHMS 205 power iteration to the square (symmetric) matrix AT A, but in an implicit way, bypassing the explicit computation of that matrix, which would be in general dense.1 Consider the following recursion for 1 See Remark 3.1. Jt = 0,1,2,...: „№+!) = mi) ft- _j_ n ATu(k + 1) v(k~f 1) = \\ATu(k + 1)||2* Eliminating u(k-f 1) leads to the fact that v(k + 1) is proportional to AT Av(k). Since v(k + 1) has unit norm, we have n ATAv(k) v(k + 1) - w hence we recognize the power iteration applied to the square (symmetric) matrix ATA. Similarly, the sequence of u(k)s correspond to the power iteration applied to AAT. Hence, the algorithm below computes the largest singular value of A, cq, and the associated left and right singular vectors u\,V\ (with cq — u[Av{), provided the largest singular value is separated from the second largest. Algorithm 4 Power iteration for singular values. Require: A G Rm,n, with cq > cq, v G 1R", ||i?||2 = 1 1: k = 0; v(k) = v 2: repeat 3: y(k + l) = Av(k) 4: u(k + 1) = y{k + l)/||y(fc + 1)||2 3: z(k -j-1) — A^u(k -f-1) 6: v(k + l) = z(fc + l)/||z(fc + l)||2 7: k = k + l 8: until convergence This technique can then be applied recursively on a deflated version of the matrix A, in order to determine also the other singular values and their corresponding left and right singular vectors. More precisely, we define the matrix A{ = A/__! - (TjUivJ, i = 1,..., r; A0 = A, cr0 = 0, where r = rank(A), and apply Algorithm 4 to A*, for z = 1,.. .,r, in order to obtain all terms of the compact SVD of A (under the hypothesis that the singular values are well separated). 206 OPTIMIZATION MODELS 7.2 Solving square systems of linear equations In this section we discuss numerical techniques for solving systems of linear equations of the form Ax = y, A £ R"'", A invertible. The general rectangular case can be dealt with via SVD, and it is discussed in Section 6.4.3. 7.2.1 Diagonal systems We start by considering the simplest possible structure that a system of linear equations may have, that is, the diagonal structure. A square, diagonal, nonsingular system of linear equations has the form ’ yi " x = . Vn . with a\\, (I22, • • • / &nn 7^ 0- It is rather obvious that the unique solution of such a system can be written immediately as 3/1/an yi/an x = y-n! An 7.2.2 Triangular systems A second situation where the solution of a square nonsingular system is quite easy to obtain is when the A matrix has triangular structure, that is A is of the form or of the form A = 0-nl Q-nl (upper-triangular matrix), (lower-triangular matrix), with 011,022,• • • ,0ww 7^ 0- Consider for instance the lower-triangular case: 0 •• • 0 " y\ " «22 • • • 0 X = «m2 ' • The solution can be obtained by a so-called forward substitution technique: start from the first equation and obtain x\ = 1/1/011 then, substituting this value in the second equation, we have 02iXi + 022^2 = 0213/1 / dll + «22*2 = 3/2* Hence, we obtain X2 = \/ye next substitute xi, X2 in the third equation to obtain X3, and proceeding in the same way we eventually terminate by obtaining xn. This scheme is summarized in Algorithm 5. Algorithm 5 Forward substitution. Require: A G 'Rn,n nonsingular and lower triangular and y G Rw. 1: xi = 3/1/011 2: for i = 2 to n do 3: S = Vi 4: for / = 1,..., i — 1 do 5: S = S — 0/yXy 6: end for 7: xf = s/au 8: end for An analogous algorithm can be readily devised for the solution of upper-triangular systems, as formally stated in Algorithm 6. Algorithm 6 Backward substitution. Require: A G JR”/W nonsingular and upper triangular and y G W1. 1: xn — yn!unn 2: for i = n — 1,..., 1 do 3: S = y/ 4: for / = i + 1,..., n do 5: S = S — UijXj 6: end for 7- */ ^/ 0/z 8: end for Remark 7.1 Operation count. It is easy to determine the total number of algebraic operations (divisions, multiplications, and sums/subtractions) 208 optimization models required to obtain the solution of a triangular system via backward substitution. At each stage i — n,..., 1, the algorithm performs n — i multiply- and-sum operations, plus one division. Therefore, the total count is 2(n — i) + 1 = n2. 7.2.3 Gaussian elimination As shown in Section 7.2.2, the solution of triangular nonsingular systems is very easy to obtain. But what about a generic nonsingular but possibly non-triangular matrix? The idea we illustrate in this section is to transform a generic system into an equivalent upper-triangular system by means of appropriate operations, and then to solve the resulting triangular system using backward substitution. Such an iterative triangularization technique is known as Gaussian elimination. Example 7.2 (Simple illustration of Gaussian elimination) Consider the system " 1 3 " " 1 " x = This system is nonsingular, but it is not in triangular form. However, if we multiply the first equation in the system by 2 then subtract it from the second equation and substitute the so-obtained equation in place of the second equation, we obtain an equivalent system " 1 3 " " 1 " x = Further, if we multiply the first equation in the system by 4 then subtract it from the third equation and substitute the so-obtained equation in place of the third equation, we obtain " 1 x — where we note that the elements below the first entry in the first column have been zeroed out. Finally, if we now multiply the second equation by —1 then subtract it from the third equation and substitute the so- obtained equation in place of the third equation in the system, we obtain an equivalent system in upper triangular form: " 1 x = This system can now be readily solved by backward substitution, obtaining X3 = 1/7, *2 — 3/14, x\ = 1/7. We now describe Gaussian elimination in more generality. Consider a square nonsingular system «12 ' ‘ ' yi ' «22 ' ' X = «m2 ' ' . yn . Substitute each equation from / = 2 onward with equation j minus equation 1 multiplied by flyi/flu (assuming a\\ ^ 0), thus obtaining the equivalent system «13 ' ' • «(1) “2m X = fl(1) •• “m3 ■ a(1) . rf1. where = a\j — ayaji /a\\. Next, substitute each equation from j = 3 onward with equation j minus equation 2 multiplied by ^ (assuming 7^ 0), thus obtaining the equivalent system ' 1 • O O £ «13 ' «(1) • “23 a(2) • “33 • «(1) «2m • «(2) “3m x = «(2) • “m3 3S ” 1 Clearly, proceeding in this same way for n — 1 times, we eventually determine a system which is equivalent to the original systems and which is in upper-triangular form: • 0 0 £ «13 • a(1) • fl(2) • “33 • fl(1) “2m • «(2) “3m x = 0 • .. 1 1 ■ ■ This latter system can then be solved by backward substitution. 210 OPTIMIZATION MODELS Remark 7.2 Elimination with pivoting. Notice that the approach described above fails if, at any stage k = — 1 of the procedure, we encounter a diagonal element = 0, since division by such an element is not possible. In practice, numerical problems shall also be expected if \a^k is very small. To overcome this difficulty, the basic procedure can be modified so as to include partial or full pivoting. The idea of (full) pivoting is very simple: at stage k of the procedure we look for the largest-modulus element among a, i > k, j > k. Such element is called the pivot, and the rows and columns of the current-stage matrix are exchanged so as to bring this element into position (fc,/c); then the elimination phase proceeds as previously described, and the process is repeated. Notice that when exchanging two rows of the matrix the elements in the vector y also need to be exchanged accordingly. Similarly, when exchanging two columns of the matrix the corresponding elements of x need to be exchanged. Partial pivoting works in a similar way, but only the elements in the column below element akk 1 are searched for a pivot, therefore only two rows need be exchanged in this case. Pivoting increases the numerical effort required for computing the solution, since a search over pivot element is involved at each stage, and memory management operations are required in order to exchange rows (and columns, in the case of full pivoting). The next algorithm describes Gaussian elimination with partial pivoting. Algorithm 7 Gaussian elimination with partial pivoting. Require: A G Kn'n nonsingular and y G R”. Let S = [Ay). i. for / = 1,..., n — 1 do 2: find ip such that |sz- z-| > \ski\ for all k = i,..., n y. let S G- exchange row i with row ip of S 4: for k = i + 1,..., n do 5: for / = n + 1 do 6: Skj — Skj — (ski/Sii)Sij 7: end for 8: end for 9: end for Operations count. We next compute the number of elementary operations required for solving a square system via Gaussian elimination. Considering first the Gaussian elimination procedure, we see that at the first iteration of the process it takes 2n + 1 operations to update the second row of matrix S — [Ay\ (one division and n multiply- subtracts to find the new entries along the row). To zero out all the entries in the first column below the first entry and to update all the rows from the second onwards takes therefore (n — l)(2n + 1) operations. We next need (n — 2) (2n — 1) operations to get the second column zeroed out and the matrix updated; for the third column we need (n — 3) (In — 3) operations, etc. The sum of all of these operations is: £(n-«)(2(n-i + l) + l) = n£i(2i + 3) = 2n£i2 + 3n^i ;'=1 1=1 1=1 1=1 0n{n - l)(2n - 1) , 0n(n- 1) ~ ^ 6 2 = ~ n3 (here, the notation ~ denotes the leading term in the polynomial; this notation is more informative than the usual O(-) notation, since the coefficient of the leading term is indicated). We finally need to apply backward substitution to the transformed triangular system, which takes an additional n2 operations. This leaves the leading complexity term unaltered, so the total number of operations for solving a generic nonsingular system is ~ n3. 7.3 QR factorization The QR factorization is a linear algebra operation that factors a matrix into an orthogonal component, which is a basis for the row space of the matrix, and a triangular component. In the QR factorization a matrix A G Rm,n, with m>n, rank(A) = n, is thus decomposed as: A = QR, where Q G lRm,n has orthogonal columns (i.e., QTQ = In) , and R G RW/” is upper triangular. There are many ways of calculating the QR factorization, including the Householder transformation method, the modified Gram- Schmidt algorithm, and the fast Givens method. Here, we describe the method based on the modified Gram-Schmidt (MGS) procedure. 7.3.1 Modified Gram-Schmidt procedure We recall from Section 2.3.3 that, given a set of linearly independent vectors {a^\... ,a^}r the Gram-Schmidt (GS) procedure constructs an orthonormal set of vectors {q^\ .. - ,q^} having the same span 212 OPTIMIZATION MODELS as the original set, as follows: for k = 1,..., n, £(*) = ^^(O, (74) II w Let S^_i = span{a^\... f a^k~^} and let Sj^-_1 denote the orthogonal complement of S^_i- In Eq. (7.4) the GS procedure computes the projection of onto S^-i, and then subtracts it from a^k\ thus obtaining the projection of onto S^_v It is easy to see that the projection operation in (7.4) can be expressed in matrix form as follows: Z(k) = ps£-_/k)' ps^ = 1 - psk-v psk_a = È (?(,)(7(i)T/ (7-5) Z = 1 with PSo = 0, PSoX = P Further, the orthogonal projector matrix Ps± = I — Psjfc_1 can be written as the product of elementary projections onto the subspaces orthogonal to each q that PsLì = 1), • • • P,(1)X, i>x = / - ?(iVf)T, k > 1. This fact can be easily verified directly: take for instance k = 3 (the general case follows from an identical argument), then = (j-«(2V2)T№-«(,V1)T> (since q^Tq^ = 0 ) = I — q^1>jq^T — q^q^T = I-PS2=Pst In the MGS, each = P^(i)± • • • P^k-i)±Ia^ is thus computed recursively as follows: £(*)(1) = fl(*), £«(2) = P9(i)±C(*)(1) = (I-9(1V1)t)CW(1) = ZW(l)-qWqWT&kHl), £<*>(3) = P9(2)xCW(2) = £W(2) - ^2)<7(2)t^)(2), £<*>(*) = Pq{k_vl^k\k-1) = £(*>(* - 1) - (k - 1). Although the two formulations (GS and MGS) are mathematically equivalent, the latter can be proved to be more stable numerically. The MGS procedure is next formalized as an algorithm. MATRIX ALGORITHMS 213 Algorithm 8 Modified Gram-Schmidt procedure. Require: A set of l.i. vectors {a^\..., a^ }, a W G IRm, m > n. 1: for / = 1,...,« do _ r(f) n(i) 2: c,v 1 — Av ' 3: end for 4: for i = 1,..., 7? do y ^=11^)11 6: (?(')= C(')/ni 7: for / = i + 1,. . ., n do 8: m = qVnfi), £(;) = £(/) _ ^(0 9: end for 10: end for Operations count. For large m, n, the computational work is dominated by the innermost loop of Algorithm 8: m multiply-add for computing rij = q^T^\ and m multiply-subtract for computing £(/) = £(/) — rjjq(l\ for a total of 4m operations per inner loop. The overall operation count for Algorithm 8 is thus approximately given by Yh 4m — Yh(n ~ i)^m — {n2 — ^%m ~ 2mn2. l=l;=l+l i=l MGS as a QR decomposition. We next show that the MGS algorithm actually provides the Q and R factors of a QR factorization of A. Let a^l\... denote the columns of A. We see from (7.5) that £№ = aW and, for j > 1, £0’) = «O’) — Let now ryy = ||^ ||, r/y = q(l)Tad\ and recall qW = ^/rjj. The previous equation becomes rjjqW = flW - YLrij^l)' that is a0) = r}jqW + 214 OPTIMIZATION MODELS This latter equation gives the desired factorization A = QR, with rn r12 • • • rln 0 r22 • • • r2n 0 0 • • • Tnn The above reasoning constitutes a constructive proof of the following fact. Theorem 7.1 (QR factorization) Any matrix A E ]Rm,n, with m > n, rank(A) = n, can be factored as A = QR, where R E Rn'n is upper triangular with positive diagonal entries, and Q E 'Rm,n has orthonormal columns (that is, it satisfies QTQ = In)- 7.3.2 MGS and QR decomposition for rank-deficient matrices. In the standard GS procedure we assumed that the vectors {a^\ ..., aaW E IRm, are linearly independent, that is matrix A — [a^ • • • aE Rm'n is full column rank. In this paragraph, we discuss how to generalize the GS procedure and the QR factorization to the case when A is not full rank, i.e., when {a^\... ,a^} are not linearly independent. In this case, let k < n be the smallest integer for which the vector is a linear combination of the previous vectors {a^\..., }, that is 1 = 1 for some scalars di{, i = 1 ,...,k — 1. Since, by construction, {q^\ ..., span the same subspace as {a^l\... ,a^k~~^}, we also have .<»> = £ for some scalars ocj, i = 1,... ,k — 1. Therefore, since the vectors qj, j = 1,... ,k — 1, are orthonormal, = 0Cj, hence we see from (7.4) that = 0, thus the standard procedure cannot proceed further. The generalized procedure, however, proceeds by just discarding all vectors aW, k' > k, for which ) = 0, until either the procedure is terminated, or a vector a**') with £(*') ^ 0 is found, in which case the corresponding normalized vector q^'^ [«(1) • • • «(")] = [qW • • • ?(»)] is added to the orthonormal set, and the procedure is iterated. Upon termination, this modified procedure returns a set of r — rank A orthonormal vectors {q^\... which form an orthonormal basis for TZ (A). This procedure provides a generalized QR factorization, since each column of A is represented as a linear combination of the columns of Q = [q^ • • • qM], with a non-decreasing number of nonzero coefficients. In particular, a first block of n\ > 1 columns of A are written as a linear combination of q^\ a second block of ni > 1 columns of A are written as a linear combination of q^\q^2\ etc., till the r-th block of nr columns of A, which is written as a linear combination of q^\.. .,q^r\ where n\ -f ti2 + • • • + nr = n. In formulas, A = QR, R = R12 • • • Rir R22 ■ ■ • Rir 0 ' 0 •• the matrix R being in a block-upper-triangular form. The columns of R can then be reordered so that the column corresponding to the first element of Ra is moved to column /, i = 1,..., r (column pivoting) This corresponds to writing A = QRET, R= [R M], where £ is a suitable column-permutation matrix (note that permutation matrices are orthogonal), R £ W,r is upper triangular and invertible, and M £ IRr,n~r. Notice that an alternative, "full," form of the QR factorization uses all m columns in the Q matrix: m — r orthonormal columns are added to q^ • • • q^ so as to complete an orthonormal basis of Rm. Therefore, m — r trailing rows with zeros are appended to the R matrix, to obtain A = QRET, Q £ lRm'm, QTQ = Im, R R M ®m—r,r Qm—r,n—r 7.4 Exercises Exercise 7.1 (Sparse matrix-vector product) Recall from Section 3.4.2 that a matrix is said to be sparse if most of its entries are zero. More formally, assume am xn matrix A has sparsity coefficient 7 (A) <C 1, where 7(A) = d(A)/s(A), d(A) is the number of nonzero elements in A, and s(A) is the size of A (in this case, s(A) — mri). 2l6 OPTIMIZATION MODELS 1. Evaluate the number of operations (multiplications and additions) that are required to form the matrix-vector product Ax, for any given vector x E M.n and generic, non-sparse A. Show that this number is reduced by a factor 7(A), if A is sparse. 2. Now assume that A is not sparse, but is a rank-one modification of a sparse matrix. That is, A is of the form A + uvT, where A E ]Rm,n is sparse, and u E ]Rm, i? E ]Rm are given. Devise a method to compute the matrix-vector product Ax that exploits sparsity. Exercise 7.2 (A random inner product approximation) Computing the standard inner product between two vectors a,b £]R” requires n multiplications and additions. When the dimension n is huge (say, e.g., of the order of 1012, or larger), even computing a simple inner product can be computationally prohibitive. Let us define a random vector r E ]RW constructed as follows: choose uniformly at random an index i E {1,..., n}, and set rz- — 1, and Tj = 0 for j =£ i. Consider the two scalar random numbers a, b that represent the "random projections" of the original vectors a, b along r: a = rT a — aj, b = rTb = bi. Prove that nE{ab} — aTb, that is, nab is an unbiased estimator of the value of the inner product aTb. Observe that computing nab requires very little effort, since it is just equal to naify, where i is the randomly chosen index. Notice, however, that the variance of such an estimator can be large, as it is given by var{nab} = n ajbf — (aTb\ k=1 (prove also this latter formula). Hint: let denote the f-th standard basis vector of W1; the random vector r has discrete probability distribution Prob{r = £;} = 1 /n, i = 1,.. .,n, hence E{r} = ^1. Further, observe that the products r^j are equal to zero for k ^ j and that the vector r2 = \r\,..., r2]T has the same distribution as r. Generalizations of this idea to random projections onto k-dimensional subspaces are indeed applied for matrix-product approximation, SVD factorization, and PCA on huge-scale problems. The key theoretical tool underlying these results is known as the Johnson- Lindenstrauss lemma. MATRIX ALGORITHMS 217 Exercise 7.3 (Power iteration for SVD with centered, sparse data) In many applications such as principal component analysis (see Section 3.3.2), one needs to find the few largest singular values of a centered data matrix. Specifically, we are given a n x m matrix X = [x\,..., xm\ of m data points in W1, i = 1,..., m, and define the centered matrix X to be X = [x\ • • • xm\, Xi = xt — x, / = 1,..., m, with x = — Y4L1 xi lhe average of the data points. In general, X is dense, even if X itself is sparse. This means that each step of the power iteration method involves two matrix-vector products, with a dense matrix. Explain how to modify the power iteration method in order to exploit sparsity, and avoid dense matrix-vector multiplications. Exercise 7.4 (Exploiting structure in linear equations) Consider the linear equation in x E R” Ax = y, where A E ]Rm'n, y E Rm. Answer the following questions to the best of your knowledge. 1. The time required to solve the general system depends on the sizes m, n and the entries of A. Provide a rough estimate of that time as a function of m, n only. You may assume that m, n are of the same order. 2. Assume now that A — D + uvT, where D is diagonal, invertible, and u E lRm, v E R”. How would you exploit this structure to solve the above linear system, and what is a rough estimate of the complexity of your algorithm? 3. What if A is upper-triangular? Exercise 7.5 (Jacobi method for linear equation) Let A = (fl/y) E ]R”,n, b E R”, with an 7^ 0 for every / = 1,..., n. The Jacobi method for solving the square linear system Ax = b consists of decomposing A as a sum: A = D + R, where D = diag (an, • • • ,ann), and R contains the off-diagonal elements of A, and then applying the recursion x(k+l) = o-\b-Rx^), k = 0,1,2,..., with initial point xiO) = D ~1 b. 218 optimization models The method is part of a class of methods known as matrix splitting, where A is decomposed as a sum of a "simple" invertible matrix and another matrix; the Jacobi method uses a particular splitting of A. 1. Find conditions on D, R that guarantee convergence from an arbitrary initial point. Hint: assume that M = —D_1jR is diagonaliz- able. 2. The matrix A is said to be strictly row diagonally dominant if V z = 1,..., n : \au\ > J^\aij\. Show that when A is strictly row diagonally dominant, the Jacobi method converges. Exercise 7.6 (Convergence of linear iterations) Consider linear iterations of the form x(k + 1) = Fx(k) + c, k — 0,1,..., (7.6) where F E ]Rn,n, c E W1, and the iterations are initialized with x(0) = xq. We assume that the iterations admit a stationary point, i.e., that there exists x E R” such that (.I - F)x = c. (7.7) In this exercise, we derive conditions under which x(k) tends to a finite limit for k —> 00. We shall use these results in Exercise 7.7, to set up a linear iterative algorithm for solving systems of linear equations. 1. Show that the following expressions hold for all k = 0,1,...: x(k + l)-x(k) = Fk(I-F)(x-x 0), (7.8) x(k)-x = Fk(x0 - x). (7.9) 2. Prove that, for all xo, limfc->oo x(k) converges to a finite limit if and only if Fk is convergent (see Theorem 3.5). When x(k) converges, its limit point x satisfies (7.7). Exercise 7.7 (A linear iterative algorithm) In this exercise we introduce some "equivalent" formulations of a system of linear equations Ax = b, A E ]Rm'“, (7.10) and then study a linear recursive algorithm for solution of this system. 1. Consider the system of linear equations Ax - AAfb, (7.11) where A+ is any pseudoinverse of A (that is, a matrix such that AA+A = A). Prove that (7.11) always admits a solution. Show that every solution of equations (7.10) is also a solution for (7.11). Conversely, prove that if b £ 1Z(A), then every solution to (7.11) is also a solution for (7.10). 2. Let R £ ]Rn'm be any matrix such that J\f(RA) = J\f(A). Prove that A+ = (RA)+jR is indeed a pseudoinverse of A. 3. Consider the system of linear equations jRAx = Rb, (7.12) where R £ lRn'm is any matrix such that J\f(RA) = J\[(A) and Rb £ TZ(RA). Prove that, under these hypotheses, the set of solutions of (7.12) coincides with the set of solutions of (7.11), for A+ = (RA)+jR. 4. Under the setup of the previous point, consider the following linear iterations: for k = 0,1,..., x(k + 1) = x(k) + ocR(b — Ax(fc)), (7-1-3) where oc ^ 0 is a given scalar. Show that if lim^oo x{k) — x, then x is a solution for the system of linear equations (7.12). State appropriate conditions under which x(fc) is guaranteed to converge. 5. Suppose A is positive definite (i.e., A £ S”, A >- 0). Discuss how to find a suitable scalar oc and matrix R £ RW/” satisfying the conditions of point 3, and such that the iterations (7.13) converge to a solution of (7.12). Hint: use Exercise 4.7. 6. Explain how to apply the recursive algorithm (7.13) for finding a solution to the linear system Ax = b, where A £ HRm'n with m > n and rank A = n. Hint: apply the algorithm to the normal Convex optimization models All truth passes through three stages: First, it is ridiculed; Second, it is violently opposed; Third, it is accepted as self-evident. Arthur Schopenhauer We have seen in Section 6.3.1 that the ordinary least-squares problem can be solved using standard linear algebra tools. This is thus a case where the solution of a minimization problem can be found efficiently and globally, i.e., we are guaranteed that no other point besides the LS-optimal solutions may possibly yield a better LS objective. Such desirable properties actually extend essentially to a wider class of optimization problems. The key feature that renders an optimization problem "nice" is a property called convexity, which is introduced in this chapter. In particular, we shall characterize convex sets and convex functions, and define the class of convex optimization problems as those where a convex objective function is minimized over a convex set. Engineering problems that can be modeled in this convexity framework are typically amenable to an efficient numerical solution. Further, for certain types of convex models having particular structure, such as linear, convex quadratic, or convex conic, specialized algorithms are available that are efficient to the point of providing the user with a reliable "technology" for modeling and solving practical problems. 8.1 Convex sets 8.1.1 Open and closed sets, interior, and boundary We start by recalling informally some basic topological notions of subsets of Rn. A set X C W1 is said to be open if for any point x e X 224 OPTIMIZATION MODELS there exist a ball centered in x which is contained in X. Precisely, for any x G Rn and e > 0 define the Euclidean ball of radius r centered at x Be(x) = {z : ||z - xH2 < e}. Then X C Rw is open if Vx G X, 3c >0: B€(x) C X. A set A' C W1 is said to be closed if its complement Rw \ X is open. The whole space R” and the empty set 0 are declared open by definition. However, they are also closed (open and closed are not necessarily mutually exclusive attributes), according to the definition. The interior of a set A" C W1 is defined as int A* = {z G A* : Be(z) C X, for some e > 0}. The closure of a set A’ C Rw is defined as X — {z G R" : z = lim x*., x* G X, Vfc}, i.e., the closure of X is the set of limits of sequences in X. The boundary of X is defined as dX = X\intX. A set A' C Rw is open if and only if A* — int X. An open set does not contain any of its boundary points; a closed set contains all of its boundary points. Unions and intersections of open (resp. closed) sets are open (resp. closed). A set A* C R” is said to be bounded if it is contained in a ball of finite radius, that is if there exist x G R” and r > 0 such that X C Br(x). If A* C ]Rn is closed and bounded, then it is said to be compact. Example 8.1 (Intervals on the real line) Let a, b G R, a < b. The interval [a, b] is a closed set. Its boundary is the discrete set {a, b} and its interior is the (open) set{x: a < x < b}. The interval [a, b) is neither closed nor open. The interval (a, b) is open. A semi-infinite interval of the form [a, +00) is closed, since its complement (—oo, a) is open.1 8.1.2 Combinations and hulls Given a set of points (vectors) in Rw, V — (x^1),... ,x(m)}, the linear hull (subspace) generated by these points is the set of all possible linear combinations of the points, that is the set of points of the form X = AiX^) H b Am^m\ (8.1) 1 Note that, technically, +00 is not a boundary point of the interval [a, +00) as a subset of R, since the definition of closure of a set (and hence of boundary) requires the boundary point to belong to the underlying space R, which does not contain +00. If one considers instead intervals as subsets of the extended real line R U {—oo, +oo}, then [a, +00] is closed, (a, +00) is open, and [a, +00) is neither open nor closed. CONVEXITY 225 for A; E R, / = 1,..., ra. The affine hull, aff P, of V is the set generated by taking all possible linear combinations (8.1) of the points in V, under the restriction that the coefficients A* sum up to one, that is Eili A; = 1. aff V is the smallest affine set containing V. A convex combination of the points is a special type of linear combination x = AixW + • • • + Amx(m) in which the coefficients A* are restricted to be non-negative and to sum up to one, that is A/ > 0 for all i, and Y4L1 A* = 1. Intuitively, a convex combination is a weighted average of the points, with weights given by the A/ coefficients. The set of all possible convex combination is called the convex hull of the point set: {m m '| x = AiX^ : Aj > 0, f = 1,..., m; A/ = 1 > . Similarly, we define a conic combination of a set of points as a linear combination where the coefficients are restricted to be non-negative. Correspondingly, the conic hull of a set of points is defined as conic(x^^,.. .,x(m)) = Ix = J^AjX^ : A; > 0,i = 1,.. . ,ra I. Figure 8.1 gives examples of the convex hull and of the conic hull of a set of points. Convex and conic hulls may be defined not only for discrete collections of points, but also for generic sets. The convex (resp. affine, conic) hull of a set C E R” is the set of all possible convex (resp. affine, conic) combinations of its points, see Figure 8.2 for an example. 8.1.3 Convex sets A subset C C Rn is said to be convex if it contains the line segment between any two points in it: X\,X2 C C, A E [0, l] =/- Axi -f- (1 — A)x2 E C. Figure 8.1 Convex hull (a) and conic hull (b) of a set of points in IR2; (c) shows the convex hull of the standard unit basis vectors of IR3, The conic hull of this same set is the entire positive orthant IR^. Figure 8.2 The convex hull of the union of two ellipses. 226 OPTIMIZATION MODELS The dimension d of a convex set C C IRn is defined as the dimension of its affine hull. Notice that it can happen that d < n. For example, the set C = {x = [oc 0]T : oc E [0, 1]} is a convex subset of ]R2 (the "ambient space," of dimension n = 2), but its affine dimension is d — 1. The relative interior, relint C, of a convex set C is the interior of C relative to its affine hull. That is, x belongs to relint C if there exists an open ball of dimension d contained in affC, centered at x, and with positive radius, which is contained in C, see Figure 8.3 for a pictorial explanation. The relative interior coincides with the usual interior when C is "full-dimensional," that is when its affine dimension coincides with the dimension of the ambient space. Subspaces and affine sets, such as lines, planes, and higher-dimensional "flat" sets are obviously convex, as they contain the entire line passing through any two points, not just the line segment. Halfspaces are also convex, as geometrical intuition suggests. A set C is said to be a cone if it has the property that if x E C, then ocx E C, for every oc > 0. A set C is said to be a convex cone if it is convex and it is a cone. The conic hull of a set is a convex cone. A set C is said to be strictly convex if it is convex, and X\ 7^ X2 £ C, A E (0, 1) => Xx\ + (1 — A)*2 G relint C, that is, the interior of the line segment joining any two different points X\,Xi E C is contained in the relative interior of C. The intuitive idea of convex and non-convex sets is best given by a picture in two dimensions, see Figure 8.4. strictly convex non convex 8.1.4 Operations that preserve convexity Certain operations on convex sets, such as intersection, projection, perspective transformation, etc., preserve convexity. Figure 8.3 In this figure, the convex set C C R3 has an affine hull of dimension d = 2. Thus, C has no "regular" interior. However, its relative interior, relint C, is given by the darker shaded region in the picture. Figure 8.4 Left, a strictly convex set R2. Middle, convex set in R2: for any two pair of points X\, Xj, the line segment joining the two points is entirely in the set. Right, a non-convex set: there exist a pair of points such that the segment joining them is not entirely included in the set. CONVEXITY 227 8.14.1 Intersection. If C\, ...,Cm are convex sets, then their intersection c= n Ci is still a convex set. This fact can be easily proved by direct application of the definition of convexity. Indeed, consider any two points x^\x^ G C (notice that this implies that xW'xW G Q, i = 1,..., m) and take any A G [0, 1]. Then, by convexity of Q, the point x = AxW -f (1 — A)x(2) belongs to C/, and this is true for all i — 1,..., m, therefore x G C. The intersection rule actually holds for possibly infinite families of convex sets: if C(x), a G A C ]R9, is a family of convex sets, parameterized by X, then the set C= f| C« is convex. This rule is often useful to prove convexity of a set, as in the following examples. Example 8.2 (Polyhedra) A half-space H = {x G Rn : cTx < d}, c ^ 0 is a convex set. The intersection of m half-spaces Hi, i — 1,..., m, is a convex set called a polyhedron, see Figure 8.5. More on polyhedra and polytopes (which are bounded polyhedra) in Section 9.2. Example 8.3 (The second-order cone) The set in Rn+1 JCn — {(x,t), x G Rn, f G R : ||x||2 < t} is a convex cone, called the second-order cone. In R3 a second-order cone is described by the triples (x\,X2, t) that satisfy the equation x\ + x\< t1, t > 0, see Figure 8.6. Horizontal sections of this set at level t > 0 are disks of radius t. JCn is a cone, since it is non-negative invariant (i.e., z G JCn implies ocz G /C n, for all oc > 0). The fact that JCn is convex can be proven directly from the basic definition of a convex set. Alternatively, we may express JCn as a continuous intersection of half-spaces, as follows. From the Cauchy- Schwartz inequality, we have that t > 11at112 Vm, ||wH2 < 1 : t > uTx, JCn = Pi |(x,f) G Rn+1 : t > wTx|. u: ||m||2<1 Each one of the sets involved in the intersection, for fixed u, represents a half-space in (x, f), which is a convex set. Figure 8.5 The intersection of halfspaces is a convex polyhedron. In the figure is an example in R2. Figure 8.6 The second-order cone in R3. 228 OPTIMIZATION MODELS 8.1.4.2 Affine transformation. If a map / : W1 ]Rm is affine, and C C IRn is convex, then the image set /(C) = {/(*) : xec} is convex. This fact is easily verified: any affine map has a matrix representation f(x) = Ax + fc. Then, for any y^\y^ G /(C) there exist x^/x^2) in C such that yi1) — Ax^ + b, y^ = Ax^ + b. Hence, for A G [0,1], we have that A1/1) + (1 — A)y^ = A(\x^ + (1 — A)x®) + b = f{x), where x = Ax^1) + (1 — A)x^2) G C. In particular, the projection of a convex set C onto a subspace is representable by means of a linear map, see, e.g., Section 5.2, hence the projected set is convex, see Figure 8.7. 8.1.5 Supporting and separating hyperplanes We say that H = {x G ]Rn : aTx = b} is a supporting hyperplane for a convex set C C ]RM at a boundary point 2 G 3C, if 2 G 1~L, and C C'H_, where H- is the half-space = {x €R" : ATX < b}. A key result of convex analysis2 states that a convex set always admits a supporting hyperplane, at any boundary point, see Figure 8.8. Theorem 8.1 (Supporting hyperplane theorem) If C C W1 is convex and 2 G 3C, then there exists a supporting hyperplane for C at 2. Given two subsets Q, C2 of IRn, we say that T~L separates the two sets, if Ci C 7~L_ and C2 C where %+ — {x G IRn : aTx > b}. Figure 8.7 Convex set {(x,y,z): y>x2,z>y2} and its projection on the space of (x,y) variables. The projection turns out to be the set {{x,y) :y>x2}. 2 See, for instance, Corollary 11.6.2 in the classical book by T. Rockafellar, Convex Analysis, Princeton University Press, 1970. Figure 8.8 (a) Supporting hyperplanes of a convex set at two different boundary points; (b) illustration of the separation theorem. CONVEXITY 229 We say that H strictly separates C\, C2 if C\ C % and C2 C 77++, H— = {x E R” : aTx < b}, 77++ — {x E Rn : aTx > b}. Another fundamental result of convex analysis states that any two disjoint convex sets can be separated by a hyperplane,3 see Figure 8.8. Theorem 8.2 (Separating hyperplane theorem) Let Q, C2 be convex subsets oflRn having empty intersection (C\ n C2 = 0). Then there exists a separating hyperplane 77 for C\,C2- Furthermore, if C\ is closed and bounded and C2 is closed, then C\, C2 can be strictly separated. Example 8.4 (Farkas lemma) An important application of the separating hyperplane theorem is in the proof of the so-called Farkas lemma, which is stated next. Let A E Km'n and y E Rm. Then, one and only one of the following two conditions is satisfied: 1. the system of linear equations Ax — y admits a non-negative solution x > 0; 2. there exist 2 E Rm such that zTA > 0, zTy < 0. Notice first that these two conditions cannot be true at the same time. This is easily seen by contradiction, since if 1 holds then, for all z, zT Ax — zTy, for some x > 0. But if also 2 holds, then zTA > 0, hence zT Ax > 0 (since x > 0), and from 1 we would have that zTy > 0, which contradicts zTy < 0. Next, it suffices to prove that if 1 doesn't hold then 2 must hold. Suppose then that there exists no x > 0 such that Ax = y. Since Ax, x > 0, denotes the set of conic combinations of the columns of A (which we here denote by conic (A)), we have that y 0 conic(A). Since the singleton {y} is convex closed and bounded, and conic(A) is convex and closed, we can apply the separating hyperplane theorem and claim there must exist a hyperplane {x : zTx = q} that strictly separates y from conic(A), that is zTy < q, zTAv > q, Vi/ > 0. Now, the second condition implies that q < 0 (take v — 0), hence the first condition implies that zTy < 0. Furthermore, the condition zT Av > q for all v > 0 implies that zT A > 0, which would conclude the proof. This latter fact is verified by contradiction, since suppose that zT A had a negative component, say the z-th component. Then, one may take v to be all zero except for the z-th component, which is taken positive and sufficiently large so as to make zT Av — Vj[zT A]{ smaller than q, so obtaining a contradiction. An equivalent formulation of the Farkas lemma is obtained by considering that statement 2 above implies the negation of statement 1, and vice versa. That is, the following two conditions are equivalent: 1. there exists x > 0 such that Ax — y; 3 See, for instance, Corollary 11.4.2 in Rockafellar's book. 2. zTy > 0, Vz : zTA > 0. This formulation yields the following interpretation in terms of systems of linear inequalities: let az- G Rm, i = 1,..., n, be the columns of A, then yTz >0, Vz : ajz > 0, / = 1,..., n, if and only if there exist multipliers x\ > 0, i — 1,..., n such that y is a conic combination of the a{s, that is, if and only if 3xj > 0, / — 1,..., m such that y = a\X\ + ... + anxn. 8.2 Convex functions 8.2.1 Definitions Consider a function / : Rn —> R. The effective domain (or, simply, the domain) of / is the set over which the function is well defined: domf = {x G Rn : —00 < /(x) < 00}. For example, the function/(x) = log(x) has effective domain domf = R++ (the strictly positive reals), and the function aTx + b f(x) = (8-2) has effective domain domf = {x: cTx + d 7^ 0}. A function / : lRn -A R is convex if domf is a convex set, and for all x,y G domf and all A G [0,1] it holds that / (Ax + (1 - A)y) < A/(x) + (1 - A)/(y). (8.3) We say that a function / is concave if —/ is convex. A visualization of convexity for a function of scalar variable is given in Figure 8.9. A function / is strictly convex if inequality (8.3) holds strictly (i.e., with < instead of <), for all x 7^ y in the domain and all A G (0,1). A function / : Rn —> R is strongly convex if there exists an m > 0 such that /(x)=/(x)-| ||x||2 is convex, that is if / (Ax + (1 - A)y) < A/(x) + (1 - A)/(y) - yA(l - A) ||x - y|||. Clearly, strong convexity implies strict convexity. A fundamental fact about convex functions is that they are continuous, as specified by the next lemma, reported here without proof.4 4 See Section 10 in Rockafellar's book CONVEXITY 23I Z = \X\ + (1 - X)x2 Lemma 8.1 (Continuity of convex functions) If f : W1 -» R is convex, ften ft is continuous over int dom /. Moreover; / zs Lipschitz on every compact subset X G int dom/, meaning that there exists a constant M > 0 such that Notice, however, that convex functions may have discontinuities at the boundary of the domain. For example, the function has convex domain dom/ = [—1, 1], and it is continuous over the interior of its domain. However, it has discontinuities at the boundary points of the domain. 8.2.1.1 Extended-valued functions. Sometimes, it is useful to extend the range of values of / to include also ±00. For instance, a natural extension / of f(x) = log(x) is An arbitrary effective domain X can be imposed artificially on a function /, by defining an extended-valued function / : Kn —> [—00, +00], which is equal to / for x G X and equal to +00 otherwise. For example, for the linear-fractional function (8.2), we can define an Figure 8.9 Convex / : 1R —>■ 1R: for any two points in the domain, the function lies below the chord connecting the two points. I/M — /(y)I < M||*-y||2, 'ix^ex. x2 if x e (-1, l], f(x) = < 2 if x = —1, +00 otherwise log(x) if x > 0, —00 if x < 0. 232 OPTIMIZATION MODELS extended-valued function / which is equal to / over the set {x : cTx + d > 0}, and is equal to +00 otherwise. The effective domain of such an / is dom/ = {x : cTx + d > 0}. Similarly, the function /(X) = log det(X) is often defined over the domain S” + (the set of symmetric, positive definite matrices), by setting its extension to be equal to +00 outside this set. In the sequel, we shall use the same symbol to denote a function and its natural extension, unless ambiguity arises. The definitions of convexity (and strict and strong convexity) still hold if / is allowed to be extended-valued (i.e., to take values in the extended real line [—00,00]), provided that extended arithmetic is used to interpret the inequalities, and that / is proper, meaning that f(x) < +00 for at least one x G R” and f(x) > —00 for all x G R”. We shall always consider convex extended-valued functions that are proper. Note that for / to be convex, the convexity of the domain is required. For example, the function f(x) = ix 1 -foo otherwise is not convex, although is it linear (hence, convex) on its (non-convex) domain (—00, —1) U (1, +00). The function fix) = { * if x > 0, l+oo otherwise is convex over its (convex) domain dom / = R++. A useful extended-valued function is the indicator function of a convex set X, defined as5 I (x) = { ° if * G X' 1 +00 otherwise. 8.2.1.2 Epigraph and sublevel sets. Given a function / : Kn —> (—00, +00], its epigraph (i.e., the set of points lying above the graph of the function) is the set epi/ = {(x,t),x e dom/, t e R : f(x) < t} . It holds that / is a convex function if and only if epi / is a convex set. For oc G R, the x-sublevel set of / is defined as See = {x £ Rn • /(*) < ft}- 5 See also Exercise 8.2. It can be easily verified that if / is a convex function, then Sa is a convex set, for any oc G R. Also, if / is strictly convex, then Sa is a strictly CONVEXITY 233 convex set. However, the converses of the latter two statements are not true in general. For instance, /(x) — ln(x) is not convex (it is actually concave), nevertheless its sublevel sets are the intervals (0, ea], which are convex. A function / such that all its sublevel sets are convex is said to be quasiconvex. 8.2.1.3 Closed functions. A function / : Kn (—00,00] is said to be closed if its epigraph is a closed set. In turn, this is equivalent to the fact that every sublevel set Sa of /, oc £ R, is closed. Function / is said to be lower semicontinuous6 (lsc) if for any xo E domf and e > 0 there exists an open ball B centered in xo such that f(x) > f{x0) — e for all x E B. All sublevel sets Sa are closed if and only if / is lsc. For a proper convex function, the concepts of closedness and lower semicontinuity are equivalent, i.e., a proper convex function is closed if and only if it is lsc. If / is continuous and domf is a closed set, then / is closed. As a particular case, since R” is closed (as well as open), we have that if / is continuous and domf = Kn, then / is closed. Also, if / is continuous then it is closed and all sublevel sets are bounded if and only if / tends to +00 for any x that tends to the boundary of dom/, see Lemma 8.3. 8.2.14 Interior and boundary points of sublevel sets. Let / be a proper, closed, and convex function with open domain, let oc E R, and con¬ sider the sublevel set Sa = {x E R” : /(x) < oc}. Then it holds that7 See = closure of {x E R” : /(x) < oc}, relint Sa = {x E Rn : /(x) < oc}. For instance, when Sa is full-dimensional, this result implies that points such that f(x) = oc are on the boundary of Sa, and points such that f{x) < oc are in the interior of Sa. Therefore, in particular, if /(xo) < oc, then there exists an open ball centered in xo which is contained in Sa; this fact is exploited for proving several properties of convex optimization problems. 8.2.1.s Sum of convex functions. If fj : R” —> R, i = 1,.. .,m, are convex functions, then the function /(*) = L «»/«(*)/ oci > 0, i — l,... ,m 6 See, e.g., Section 7 in Rockafellar's book for further details. 7 See Corollary 7.6.1 in the cited book by Rockafellar. 234 OPTIMIZATION MODELS is also convex over fl/dom/. This fact easily follows from the definition of convexity, since for any G dom/ and A E [0,1], /(Ax + (1 - A)y) = £ KjfiiAx + (1 - A)y) 1 = 1 < E*i(A/i(*) + (1-A)/i(y)) = A/(*) + (! - A)/(y)- For example, the negative entropy function fix) = ~'Exil°gxi ! = 1 is convex over dom/ = R!j_+ (the set of n-vectors with strictly positive entries), since it is the sum of functions that are convex over this domain (convexity of — zlogz can be verified by checking that the second derivative of this function is positive over R++, see Section 8.2.2). Similarly, we can easily verify that the sum of strictly convex functions is also strictly convex, that is /, g strictly convex => f + g strictly convex. Moreover, the sum of a convex function and a strongly convex function is strongly convex, that is ^/ convex, g strongly convex => / + g strongly convex. To prove this fact, observe from the definition that strong convexity of g means that there exist m > 0 such that g(x) — (m/2)\\x\\2 is convex. Since the sum of two convex functions is convex, it holds that f(x) + g(x) — (m/2)||x||2 is convex, which in turn implies that / + g is strongly convex. 8.2.i.6 Affine variable transformation. Let / : —> R be convex, and g(x) = f(Ax + b), A E R"'m, b E R”. Then g is convex over domg = {x : Ax + b E dom/}. For example, /(z) = — log(z), is convex over dom/ = R++, hence f(x) — — log (ax + b) is also convex over ax + b > 0. 8.2.2 Alternative characterizations of convexity Besides resorting to the definition, there are several other rules or conditions that can characterize convexity of a function. We here mention a few of them. Flere and in the sequel, when mentioning convexity of a function it is implicitly assumed that dom / is convex. CONVEXITY 235 8.2.2.1 First-order conditions. If / is differentiable (that is, dom/ is open and the gradient exists everywhere on the domain), then / is convex if and only if Mx,y € dom/, /(y) > f(x) + V/(x)T(y - x), (8.4) and it is strictly convex if (8.4) holds with strict inequality for all x,y G dom/, x ^y. To prove this fact, suppose / is convex. Then, the definition (8.3) implies that for any AG (0,1] /(*+-%-*»-/(*> < m _/w, which, for A -» 0, yields V/(x)T(y — x) < /(y) — f(x), proving one direction of the implication (strict convexity follows by simply replacing < with <). Conversely, if (8.4) holds, then take any x,y £ dom/ and A G [0,1], and let 2 = Ax + (1 — Ay). Then, f(x) > /(z) + V/(z)T(x — z), /(y) > /(z) +V/(z)T(y-z). Taking a convex combination of these inequalities, we get A/(x) + (1 - A)/(y) > /(z) + V/(z)T0 = /(z), which concludes the proof. The geometric interpretation of condition (8.4) is that the graph of / is bounded below everywhere by any one of its tangent hyperplanes or, equivalently, that any tangent hyperplane is a supporting hyperplane for epi/, see Figure 8.10. From Eq. (8.4) we also draw the following observation: the gradient of a convex function at a point x G (if it is nonzero) divides the whole space in two half-spaces: U++{x) = {y : V/(x)T(y — x) > 0}, H-(x) = {y: V/(x)T(y —*)< 0}, and any point y G T-L++(x) is such that /(y) > f(x). Figure 8.10 The epigraph of a differentiable convex function lies on the half-space defined by any of its tangent hyperplanes. 236 OPTIMIZATION MODELS 8.2.2.2 Second-order conditions. If / is twice differentiable, then / is convex if and only if its Hessian matrix V2/ is positive semidefinite everywhere on the (open) domain of /, that is if and only if V2/ ^ 0 for all x G dom/. This is perhaps the most commonly known characterization of convexity. To see this fact, let xq G dom / and let v G be any direction. Since dom/ is open, the point z = Xq + Az; is still in dom/, for sufficiently small A > 0, hence /(z) = f(x 0) + A\7f(x0)Tv + ^A2vtV2f(x0)v + 0(A3), which, by (8.4), implies ^A2vTV2f(x0)v + 0(A3) = f(z) - f(x0) - AVf(x0)Tv > 0. Dividing by A2 > 0, we have \vTV2f(x0^+^® >0, which, for A —)► 0, shows that vTW2f(xo)v > 0, proving the first part of the claim. Conversely, suppose V2/(x) >1 0 for all x G dom/. Then, for any y G dom/, by a version of the second order-Taylor approximation theorem, it holds that /(y) = /(*) + V/(x)T(y - x) + l(y - x)TV2/(z)(y - x), where the Hessian is computed at some unknown point z lying in the segment between x and y. Since the last term in the above expression is non-negative (due to positive semidefiniteness of the Hessian), we obtain f(y) > f(x) + V/(x)T(y — x), which proves that / is convex. By a similar argument one can prove that / is strongly convex if and only if V2/ >z ml, for some m > 0 and for all x G dom/. It also holds that V2/ >■ 0 for all x G dom / implies that / is strictly convex (but the converse of this last statement is not necessarily true, in general; take for instance f(x) = x4). Example 8.5 (Establishing convexity via Hessian) We give three examples where convexity of a function is checked by checking the positive semidefiniteness of the Hessian matrix. l. Consider two quadratic functions in two variables p(x) = 4x2 + 2x\ + 3*i*2 + 4*i + 5x2 + 2 x 105, q(x) = 4x2 — 2x2 + 3xxX2 + 4xi + 5x2 + 2 x 105. CONVEXITY 237 The Hessian of p is independent of x, and it is given by the constant matrix r 8 3 3 4 V2p = The eigenvalues of V2p are ~ 2.39, A2 — 9.6: they are both positive, hence V2p is positive definite and therefore p{x) is convex (strongly). Likewise, the Hessian of q is V2q = 8 3 3 -4 whose eigenvalues are Ai ~ —4.71, A2 — 8.71, hence V2q is not positive semidefinite, thus q(x) is not convex. For a generic quadratic function of several variables, which can always be cast in the standard form f(x) = ^xTHx + cT x + d, where H is a symmetric matrix, the Hessian is simply given by V2/ = H, hence / is convex if and only if H is positive semidefinite, and it is strongly convex if H is positive definite. 2. Consider the so-called square-to-linear function — if y > 0, +00 otherwise with domain dom/ = {{x,y) G x R : y > 0} . This function is convex, since its domain is convex and, in the interior of the domain, the Hessian is given by y21 -yx —yxT XTX We check that the Hessian is indeed positive semidefinite: for any w = (z, t) e x R, we have wrV2f(x,y)zv = = Hyz - tx\\l > 0. 3. The log-sum-exp (lse) function f{x) = In ( Yj XT X 238 OPTIMIZATION MODELS is monotonically increasing and convex over domf = Rn. Indeed, the Hessian of this function is (see Example 4.4) V2Ise(x) = (Zdiag(z) -zzT^ , Z = [e*1 • • • ex"}, Z=Ylzi- i = 1 We now check that V2lse(x) is positive semidefinite by verifying that wTV2lse(x)w > 0 for all ^ G R”: Z2wTV2lse(x)w = ZipTdiag (z) w — wTzzTw where the last passage follows from the Cauchy-Schwartz inequality: [ wi>/zi [ TVny/z^ 8.2.2.3 Restriction to a line. A function / is convex if and only if its restriction to any line is convex. By restriction to a line we mean that for every Xq E lRn and v E !Rn, the function of scalar variable t g(t) = /(*0 + tv) is convex. This rule gives a very powerful criterion for proving convexity of certain functions. Example 8.6 (Log-determinant function) The so-called log-determinant function /(X) = — log det X is convex over the domain of symmetric, positive definite matrices, domf = S” +. To verify this fact let Xq E be a positive definite matrix, V E S”, and consider the scalar-valued function g(t) = — logdet(X0 + tV). Since Xq >- 0, it can be factored (matrix square-root factorization) as X0 = xl/2xl/2, hence det(X0 + tV) = det (Xj/2Xj/2 + fV) = det (Xj/2(J + tX-1/2VX0-1/2)Xl/2) = detXp/2 det(7 + fX0~1/2VX0“1/2) detXj72 = detX0det(l + fX^1/2VX^1/2) = detXo n (1 + t\j(Z)), CONVEXITY 239 where A*(Z), i — 1,..., n, are the eigenvalues of the matrix Z = XQ 1/2V X^1^2. Taking the logarithm, we thus obtain g(t) = -logdetX0 + f>log(l + t\i(Z)). The first term in the previous expression is a constant, and the second term is the sum of convex functions, hence g(t) is convex for any Xq G S”+, V G Sn, thus — logdetX is convex over the domain S”+. 8.2.24 Pointwise supremum or maximum. If (f<x)cceA is a family of convex functions indexed by the parameter oc, and A is an arbitrary membership set for oc, then the pointwise supremum function f(x) = sup /a(x) is convex over the domain {n^^dom fa} H {x : f(x) < 00}. Note that the sup in the above definition can be substituted equivalently by a max whenever A is compact (i.e., closed and bounded). We next give a proof of this fact for a special case of the maximum of two convex functions. Let fi, /2 be convex, and let f(x) = max{/i (x),f2(x)}, then for any x,y G dom/i H dom/2 and A G [0,1], we have /(Ax + (1 - A)y) = max{/i(Ax + (1 - A)y),/2(Ax + (1 - A)y)} < max{A/i(x) + (1 - A)/i(y), A/2(x) + (1 - A)/2(y)} < Amax{/1(x),/2(x)} + (1 — A) max{/i(y),/2(y)} = A/(x) + (l-A)/(y). There are many examples of application of this rule to prove convexity. For instance, given a norm || • ||, the dual norm8 is defined as the function /(*) = № max yT x. This function is convex over Rn, since it is defined as the maximum of convex (in fact, linear) functions, indexed by the vector y. Similarly, the largest singular value of a matrix X G lRn,m /(X) =crmax(X) = max ||Xi>||2 is convex over ]Rw'm, since it is the pointwise maximum of convex functions which are the composition of the Euclidean norm with the affine function X -» Xu. It actually turns out that not only is the supremum of convex functions convex, but also a converse statement holds (under the additional technical condition of closedness). That is, every closed convex 8 For example, the dual of the Euclidean (£2) norm is the Euclidean norm itself (i.e., the Euclidean norm is self-dual) 11*115 = SUP *Tz= jjTiT = IMh- I№=1 IWI2 The dual of the norm: o-norm is the £\- — sup X z IU!|oo = l T” -EM = Mi- The dual of the ^i-norm is the £<»- norm (take the dual of the dual of the ^oo-norm). More generally, the dual of the ^p-norm is the ^-norm, where q satisfies 11-. p - + - = 1, i.e., q = —L—. p <? p-i 240 OPTIMIZATION MODELS function / can be expressed as the pointwise supremum of affine functions. In particular, / can be expressed9 as the pointwise supremum over all affine functions that are global underestimators of /. Formally, let / : IR/2 —> R be a closed convex function. Then f = f, where f(x) = sup{a(x) : a is affine, and a (z) </(z),Vz}. (8.5) If / is not closed, then the equality f(x) = f(x) still holds for any x G intdom/. 8.2.2.5 Partial minimization. If / is a convex function in (x,z) (i.e., it is jointly convex in the variables x and z), and Z is a nonempty and convex set, then the function g(x) = inf f(x,z) is convex (provided that g(x) > —00 for all x). Example 8.7 (Schur complement lemma) As an example of application of the partial minimization rule, we here give an alternative proof of the Schur complement rule for block-symmetric matrices, see Theorem 4.9. Let S be a symmetric matrix partitioned into blocks: S = where both A and C are symmetric and square. Assume that C >~ 0 (i.e. C is positive definite). Then the following properties are equivalent: • S is positive semidefinite; • the Schur complement of C in S, defined as the matrix A - BC-1BT, is positive semidefinite. To prove this fact we recall that S ^ 0 if and only if xTSx > 0 for any vector x. Partitioning x as x = (y,z), conformably to S, we have that S>: 0 if and only if Z(y,z) = >0, v(y,z). This is equivalent to requiring that 0 < f{y) = min g(y,z) holds for all y. To obtain a closed-form expression for /, we minimize g with respect to its second argument. Since this problem is unconstrained, we just set the gradient of g with respect to z to zero: Vzg(y,z)=2(Cz + BTy)=0, 9 See Section 12 in Rockafellar's book. CONVEXITY 24I which leads to the (unique) optimizer z*(y) = — C~1BTy (notice that C-1 exists, since we are assuming C >- 0). Plugging this value into g, we obtain Suppose now that St: 0. Then the corresponding quadratic function g is jointly convex in its two arguments (y,z). Due to the partial minimization rule, we have that the pointwise minimum function f(y) is convex as well, hence its Hessian must be positive semidefinite, and therefore A — BC~1BT >r 0, as claimed. Conversely, if A — BC~1BT t 0, then f(y) >0 for any y, which implies that S y 0, thus concluding the proof. 8.2.2.6 Composition with monotone convex/concave functions. The composition with another function does not always preserve convexity of a function. However, convexity is preserved under certain combinations of convexity/monotonicity properties, as discussed next.10 Consider first the case of functions of a scalar variable. If / — h o g, with h,g convex and h non-decreasing, then / is convex. Indeed, the condition f(x) <z corresponds to h(g(x)) < z, which is equivalent to the existence of y such that h(y) < z, g) < y. This condition defines a convex set in the space of {x,y,z)-variables. The epigraph of / is thus the projection of this convex set onto the space of (x,z)-variables, hence it is convex, see Figure 8.11. In a similar way we can verify that if g is concave and h is convex and non-increasing, then / is convex. Analogous arguments hold for concavity of composite functions: / is concave if g is concave and h is concave and non-decreasing, or if g is convex and h is concave and non-increasing. These rules have direct extensions to functions of a vector argument. For instance, if the component functions gi : Rw -* 1R, i = l,...,k, are convex and h : 1R* —> 1R is convex and non-decreasing in each argument, with doing* — dom h = 1R, then x —> (h o g)(x) = h(Zi (*)/ • • •/£*(*)) is convex. Example 8.8 Applying the composition rules, the reader may verify as an exercise the following facts: • if g is convex then exp g(*) is convex; • if g is concave and positive, then logy(x) is concave; • if g is concave and positive, then 1 /g(x) is convex; • if g is convex and non-negative and p > 1, then [g(x)]P is convex; • if g is convex then — log(—g(x)) is convex on {x : g(x) < 0}; • if gi, i = l,...,k are convex, then In egi^ is convex. 10 For proofs of these results, see Chapter 3 in the book by S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. Figure 8.11 The set {(x,y,z) : h(y) < z, g(x) < y} and its projection on the space of (x, z)-variables, which is the epigraph of /. The epigraph of g is the projection of the same set on the space of (x,y)-variables. 242 OPTIMIZATION MODELS 8.2.2.7 Jensen's inequality. Let / : W1 R be a convex function, and let z G R" be a random variable such that z G intdom / with probability one. Then /(E{z}) < E{/(z)} (/convex), (8.6) where E denotes the expected value of a random variable, and where it is assumed that the involved expectations exist. To prove this key result, we use the fact that / can be represented as the pointwise supremum of affine functions, see (8.5). Indeed, for all z G intdom/, we have /(z) = supa(z) > fl(z), \/a G A, where A is the set of affine functions that are global underestimators of /. Then, taking expectation of both sides of the previous equation, and recalling that the expectation operator is monotone, we have that E{/(z)} > E{a(z)} - a (E{z}), Va G A, where the last equality follows from the fact that a is affine and E{ } is a linear operator. This implies that E{/(z)} > sup a (E{z}) =/(E{z}), which proves (8.6). A special case of Jensen's inequality arises if one considers a discrete probability distribution for z, with support at m discrete probability-mass points x^l\... ,x^ G R". If the random variable z may take on value with probability 0/, i — 1,...,m (note that 0/ > 0, z = 1,..., m, and Y^Li 0/ — 1/ since 6 = [61 • • • 6m] represents a discrete probability distribution), then m m E{z} = £0,-x^, E{/(z)} = £0f/(*«) /=1 i=1 and (8.6) reads (m \ m X>«*(0) ^ E№W)' (8-7) which holds for any 0 in the probability simplex, i.e., such that 0/ > 0, i = 1,m, and YJiL\ = Observe further that E{z} is a point in the convex hull of {x^\..., x}, that is a convex combination of these points. Hence (8.7) says that a convex function / evaluated at any convex combination of points is no larger than the same combination of the values of / at the given points. If / is a concave function, then Jensen's inequality clearly holds reversed: E{/(z)} < f(E{z}) (/ concave). (8.8) Example 8.9 (Geometric-arithmetic mean inequality) Given positive numbers x\,..., xn, their geometric mean is defined as . n '1/n fg(x) = (n*i- whereas their standard, arithmetic, mean is /«(*) = n i=1 We next show that, for any x > 0, it holds that fg{x) <fa{x). To verify this fact, we take the logarithm of fg, obtaining log fg(x) = l'L[oE(xi) H / = 1 ^ log „ %l = logMx)> where the inequality follows from application of Jensen's inequality (8.8) to the concave function log. Finally, since log is monotone increasing, the inequality log fg(x) < log fa(x) holds if and only if fg(x) < fa(x), which is the statement we intended to prove. Example 8.10 (Young's inequality) Given numbers a,b > 0 and p,q > 0 such that it holds that ab < V <? a relation known as Young's inequality. To prove this fact, we consider that _ elnab _ ginfl+lnb _ eJlnflP+llnM^ Then we observe that the exponential function is convex and, since 1 / p + 1/^ = 1, apply Jensen's inequality to the last expression, obtaining that ab < -einaP + -elnM = -aP + -tf, p q p q which is the statement we wanted to prove. 8.2.3 Subgradients and subdifferentials Consider again the characterization of a convex differentiable function given in (8.4). This relation states that at any point x E dom /, 244 OPTIMIZATION MODELS the function f(y) is lower bounded by an affine function of y, and that the bound is exact at x. That is, /(y) > f(x) +g]{y- x), Vy G dom/, (8.9) where gx = V/(x). Now, it turns out that even when / is non- differentiable (hence the gradient may not exist at some points) relation (8.9) may still hold for suitable vectors gx. More precisely, if x E dom / and (8.9) holds for some vector gx E Rn, then gx is called a subgradient of / at x. The set of all subgradients of / at x is called the subdifferential, and it is denoted by df(x). A subgradient is a "surrogate" of the gradient: it coincides with the gradient, whenever a gradient exists, and it generalizes the notion of gradient at points where / is non-differentiable. A key result on subgradients is next stated without proof.11 Theorem 8.3 Let f : Rn —> IR be convex and let x E relint dom/. Then 1. the subdifferential df(x) is a closed, convex, nonempty and bounded set; 2. if f is differentiable at x, then df(x) contains only one element: the gradient of f at x, that is, df(x) = {V/(x)}; 3. for any v E Rn it holds that (>( \ • r f(x + tv) -/(*) t fjAx) = lim — —= max v g, 0+ t gedf(x) where f^(x) is the directional derivative of f at x along the direction v. In words, this theorem states that, for a convex /, a subgradient always exists at all points in the relative interior of the domain. Moreover, / is directionally differentiable at all such points. For a convex function / it thus holds that, for all x E relint dom/, f{y) > f(x) +gj{y- X), Vy € dom/, Vgx G df(x). Example 8.11 Consider the absolute value function (see Figure 8.12) f(x) = |x|, x E IR. For x > 0, / is differentiable, hence d/(x) = {V/(x)} = {1}. For x < 0, / is also differentiable, and 3/(x) = (V/(x)} = { — 1}. On the contrary, / is non-differentiable at x = 0. However, for all y E IR we can write 11 For reference on this and other results on subgradients and subdifferentials, see Section 1.2 of the book by N. Z. Shor, Minimization Methods for Non-differentiable Functions, Springer, 1985; or Chapter D in J.-B. Hiriart- Urruty and C. Lemarechal, Fundamentals of Convex Analysis, Springer, 2001. Figure 8.12 Absolute value function f(x) = \x\, x e R. CONVEXITY 245 hence, for all 1/ E R, we have f{y) > /(o) + g(y - 0), V# : |g| < 1, which, compared to (8.9), shows that all g E [—1, 1] are subgradients of / at zero. Actually, these are all the possible subgradients, thus the interval [—1, 1] is the subdifferential of / at zero. Thus, we have that aivi - / s8n(x) if * ^°' d|X| \ [-1,1] if* = 0. Similarly, considering the norm function /(*) = IWh, xeR", we can write f(y) = llylli = E \Vi\ = E ."V* SM < Eftyi' V£: Holies < 1- Hence, for all 1/ E Rn it holds that /(y) — /(0) + ^T(y — 0), : IlgJIoo < 1. All vectors g E Rn such that ||g||oo < 1 are thus subgradients of / at zero, and indeed it holds that 3/(0) = {g : ||g||oo < 1}. 8.2.3.1 Subgradient calculus. Besides resorting to the definition, as we did in the previous example, there exist several useful "rules" for computing subgradients and subdifferentials of functions obtained by composition, sum, pointwise maximum, etc., of individual operands. We here summarize some of these rules.12 Chain rules. Let q : Rn —» Rm and h : Rm —> R be such that the composite function / = h o q : R” -> R, with values f(x) = h(q(x))/ is convex. Then: 1. if q is differentiable at x and q(x) E domh, then 3/(*) = k(x)Tdqh^(x))r where Jq(x) is the Jacobian of q at x; 2. if m = 1 and h is differentiable at q(x) E dom h, then 12 For details, see the previously cited reference by Shor, as well as Section 2.3 of F. H. Clarke, Optimization and Nonsmooth Analysis, SIAM Classics in Applied Mathematics, 1990. v(l) = 246 OPTIMIZATION MODELS Affine variable transformation. As a particular case of the first of the previous chain rules, let h : Rm —R be convex and let q(x) — Ax -f b, where A G Rm'", b G Rm. Then the function from R” to R f(x) = h(q(x)) = h(Ax + b) has subdifferential df(x) = ATdqh(q(x)) for all x such that q(x) G dom h. Example 8.12 Consider the function f(x) = \aTx-b\f a G Rn, b G R. This function is the composition of h(x) = \x\ with the affine function q(x) = aTx — b. Therefore, applying the affine variable transformation rule, we have that a • sgn(flTx — b) if aTx — bfO, d\a x — b\ = a • dh(a x — b) — < r - i t , r! 1 1 v ; | a-[-1, 1] if aTx — b = 0. Sura or Zmcur combination. Let h : Rn —» R, q : Rn —» R be convex functions, let oc,f>> 0, and let f(x) = och(x) + fq{x). Then, for any x G relint dom h Pi relint dom q it holds that df(x) — ocdh(x) + fidq(x). Example 8.13 Consider the function f(x) = £ X - bi\, fl; e R”, hi e R. Applying the sum rule, we have A special case is given by the i\ norm function f(x) = \\x\\i = £”=1 \xj\, for which we obtain ei-sgn{xj) ifXj^O, e, • [—1, 1] if x where e; denotes the f-th standard basis vector of R”. Pointwise maximum. Let /; : 1Rn —» R, i = 1,..., m, be convex functions, and let f(x) = max fi(x). Then, for x G dom / it holds that 3/(x) = co{dfi(x) : i G a(x)}, where a(x) is the set of indices of the functions /; that are "active" at x, that is the ones that attain the maximum in the definition of /, hence f(x) = fi(x), for i G a(x). This property also has an extension to pointwise maxima of arbitrary (possibly uncountable) families of convex functions, under some additional technical assumptions. More precisely, let /(*) = sup/«(*), where /# are convex and closed functions. Then, for any x G dom / it holds that d/(x) 2 co(9/a(x) : Oi G a{x)}, where a(x) — {cc e A : f(x) = fcc(x)}- Moreover, if A is compact and the map oc —> /a is closed, then equality holds in the previous inclusion, i.e., df(x) = co{3/a(x) : a G a(x)}. (8.10) Example 8.14 {Polyhedral functions) Consider the polyhedral function (see Section 9.3.1) f{x) — max ajx — bj. Here, the component functions fj{x) = ajx — bj are differentiable, hence dfi(x) = {V/z-(x)} = {aj}, thus we have 3/(x) = co{«f : i G a(x)}. Similarly, let f{x) = \\Ax — b\\oo= max \ajx-bj\, where aj G R” denote the rows of A g RW/”. Applying the max rule for the subdifferential, for f = \ajx — bj\, we have df(x) = co{dfi(x) : i € «(*)}, af.Cvl-/ flrSgn(fl,Tx-b;) if flTx — bj ^ 0, } a,- • [—1,1] if aj x — bj = 0. 248 OPTIMIZATION MODELS Example 8.15 {£\, £2, and £00 norms) Subdifferentials of typical £p norms can be obtained by expressing the norm in the form of a supremum of linear functions over a suitable set, and then applying the sup rule for the subdifferentials. Specifically, we observe that ||x||i = max vTx, \\xW2 = max vTx, ||xII00 = max vJx. The t\ norm case has already been considered, so let us study the two other cases. For the £2 norm case, using (8.10) we have, for fv = vT x, 31|xH2 = co{dfv{x) : v G a(x)} = co{u : v G a(x)} = co{fl(x)}, where a(x) = {v : ||x||2 = vTx, ||u||2 < 1}. For x ^ 0, ||x||2 is attained for v = x/1|xH2, hence a(x) is the singleton {x/1|H2and 3||x||2 = W||*||2}. For x — 0, we have instead ||x||2 = 0 which is attained for all feasible v, hence a(x) — {v : \\v\\2 < 1}, and 3||x||2 = {v : |M|2 < 1}- a|i*i|2 = { ra ifx^o, I {«£*": №<1} if * = 0. For the £oo norm case, we have similarly 3||x||oo = Co{fl(*)}, where a(x) = {v : ||x||oo = vTx, ||z;||i < 1}. For x ^ 0, ||x||oo is attained at the vectors vj = eysgn(xy), where j G /(x), J(x) being the set of indices of the largest entries in |x| (there is only one element in J(x), if x has a single entry of maximum modulus, or more than one element, if there are several entries with the same maximum value). Hence, for x ^ 0, 311*11«, = co{eysgn(*y), j € /(*)}. For x = 0, we have instead ||x||oo = 0 which is attained for all feasible v, hence a(x) = {v : ||i?||i < 1}, and 31|x||oo = {v : ||u||i < 1}. Summarizing, aiivii _ f c°{ey - sgn(xy), / e /(x)} if x / 0, \ {geR": yh <1} ifx = 0. Example 8.16 (Largest eigenvalue of a symmetric matrix) Consider a symmetric matrix A(x) whose entries are affine functions of a vector of variables x G Rn : A(x) = A0 + X\A\ H b xnAn, where Aj G Sm, z = 0,..., n, and let f (x) = Amax(A(x)). CONVEXITY 249 To determine the subdifferential of / at x we exploit Rayleigh's variational characterization (see Theorem 4.3), which states that f{x) = Ama x{A(x)) = max zTA(x)z z: ||z||2=l = max Z1 AqZ+Y' X;ZT A;Z. z:||z||2=l tl f(x) is thus expressed as the max over z (on the unit sphere) of functions fz(x) = zTA(x)z which are affine in x (hence, / is indeed convex). The active set a(x) = {2 : ||z||2 = 1, fz{x) = /(*)} is composed of the eigenvectors of A(x) associated with the largest eigenvalue (and normalized with unit norm). We hence have that df{x) = co{Vfz{x) : A(x)z = \max{A(x))z, \\z\\2 = 1}, V/z(x) = [zTAiz ■ ■ • ZTAnz}T. In particular, / is differentiable at x whenever the eigenspace associated with Amax(A(x)) has dimension one, in which case df(x) = (V/Z(x)}, where z is the unique (up to a sign, which is, however, irrelevant) normalized eigenvector of A(x) associated with Amax(A(x)). 8.3 Convex problems An optimization problem of the form p* = min f0(x) subject to: fi(x) <0, i = 1,..., m, hi(x) = 0, i = is called a convex optimization problem, if • the objective function /o is convex; • the functions defining the inequality constraints, /;, i = 1,..., m, are convex; • the functions defining the equality constraints, h{, i = 1,..., q, are affine. The feasible set of this problem is the set of points x that satisfy the constraints: X = {x e 1R” : fi(x) < 0, i = 1, - ■ • ,m; h^x) = 0, i = 1, • • • ,q}. Note that the sets {x : fj(x) < 0} are the 0-sublevel sets13 of a convex function, hence they are convex. Also, the sets {x : hfx) = 0}, where 13 We shall typically and tacitly assume in most of the derivations in the rest of this book that functions ft are proper and closed, and that their sublevel sets are full-dimensional, i.e., their relative interior coincides with the standard interior. hj is an affine function, are flats, hence convex. Therefore, X is the intersection of convex sets and it is thus convex. The linear equality constraints are usually expressed more compactly in matrix form as Ax = b, hence a generic format of a convex optimization problem is p* = min /of» r *eiRM subject to: fj(x) <0, i = 1,. Ax = b, . .,m, where A G R^,?1 and b G RL A convex optimization problem can equivalently be defined as a problem where one minimizes a convex objective function, subject to the restriction x G X, i.e. that the decision variable must belong to a convex set X: min f0(x). The problem is said to be unconstrained when X — Rn. Solving the optimization problem means finding the optimal minimal value p* of the objective, and possibly also a minimizer, or optimal solution, that is a vector x* G X such that /o(x*) = p*. If X is the empty set, we say that the problem is infeasible: no solution that satisfies the constraints exists. In such a case it is customary to set p* +00. When X is nonempty, we say that the problem is feasible. If the problem is feasible and p* = —00 we say that the problem is unbounded below. Notice that it can also happen that the problem is feasible but still no optimal solution exists, in which case we say that the optimal value p* is not attained at any finite point. The optimal set (or set of solutions) is defined as the set of feasible points for which the objective function attains the optimal value: #opt = (* G X : fo(x) = P*} • We shall also write, using the argmin notation, A’opt = argmin f0(x). If x* G Xopt is such that //(x*) < 0, we say that the z-th inequality constraint is inactive (or slack) at the optimal solution x*. Conversely, if fi(x*) = 0, we say that the z-th inequality constraint is active at x* . Similarly, if x* is in the relative interior of X, we say that the whole feasible set X is inactive (see Figure 8.13), and if x* is on the boundary of X we say that X is active at the optimum (see Figure 8.14). Figure 8.13 In this example, X — {x: |x| < 1} is inactive at the optimum for the problem minxex /o(*)- Figure 8.14 In this example, X = {x : \x\ < 1} is active at the optimum for the problem minxG^ fo(x). CONVEXITY 251 Feasibility problems. In some cases, it may not be of interest to actually minimize an objective function, rather one is just interested in verifying if the problem is feasible or not and, in the positive case, to determine any point in the feasible set. This is called a feasibility problem: find x G X or prove that X is empty. We next provide some simple examples illustrating an (incomplete) taxonomy of several cases that may arise in a convex optimization problem. Example 8.17 • Consider the problem p — min subject to: x\ < 2, X2 < 1. The feasible set X for this problem is nonempty and it is given by the rectangle [—a/2, \/2] x [—1, 1]. The optimal objective value is p* = — a/2 — 1, which is attained at the unique optimal point ** = [-V2 -i]T, see Figure 8.15. Consider the problem p = min xelR2 subject to: (x1-2)2<1/ %2 > 0. The feasible set X for this problem is nonempty and it is depicted in Figure 8.16. The optimal objective value is p* = 0, which is attained at infinitely many optimal points: the optimal set is *opt = {[*1 0]T : € [1, 3]}. The problem p = mm e1 subject to: %2 > {x\ — l)2 + 1, *2 — x1 + ^ < 0 is unfeasible, thus, by convention, p* = +00. The problem — mm subject to: x > 0 is feasible, and the optimal objective value is p* = 0. However, the optimal set is empty, since p* is not attained at any finite point (it is only attained in the limit, as x —> 00). X\+X2= P* Figure 8.15 A feasible problem with unique optimal solution. Figure 8.16 A feasible problem with multiple optimal solutions. 252 OPTIMIZATION MODELS • Consider the problem p* = min Xi subject to: Xi + X2 > 0. The feasible set is a half-space, and the problem is unbounded below (p* = —00). No optimal solution exists, since the optimal value is attained asymptotically for x\ —>• — oo, x\ > — X2, see Figure 8.17. • Consider the problem p* = min (x + l)2. This is an unconstrained problem, for which p* = 0 is attained at the (unique) optimal point x* = —1. Consider next a constrained version of the problem, where p* = min (x + l)2 xeR subject to: x > 0. This problem is feasible, and has optimal value p* = 1. However, this optimal value is not attained by any feasible point: it is attained in the limit by a point x that tends to 0, but 0 does not belong to the feasible set (0, +00). These kinds of "subtleties" are avoided if we insure that the feasible set is a closed set, see the discussion in Section 8.3.2. 8.3.1 Local and global optima A point z is said to be a local optimum for the optimization problem minx(Ex fo(x)f if there exists r > 0 such that z is optimal for the problem min fo(x) subject to: ||x-z||2 < r. In other words, z is locally optimal if there exists a ball Br of radius r > 0 centered in z such that for all points x E Br D X it holds that fo(x) > /o(z). That is, z minimizes /0 locally in a ball of radius r. If z is a global optimum point (i.e., a point in Afopt), then it holds instead that fo(x) > fo(z), for all x E X. A key fact is that in convex optimization problems any local optimal point is also globally optimal. This is in stark contrast with generic non-convex optimization problems, which are plagued by the possible existence of many local optima that are not globally optimal, see Figure 8.18. Numerical optimization algorithms may be trapped in local minima and hence often fail to converge to the global optimum, if the problem is not convex. The following key theorem holds. Theorem 8.4 Consider the optimization problem Figure 8.17 A feasible problem with unbounded-below objective. global minimum Figure 8.18 A non-convex objective/0 may have local minima that are not globally optimal. min fo(x). If fo is a convex function and X is a convex set, then any locally optimal solution is also globally optimal Moreover, the set <Y0pt of optimal points is convex. Proof Let x* G X be a local minimizer of fo, let p* = /o(x*), and consider any point y 6 X. We need to prove that fo(y) > fo(x*) = p*. By convexity of fo and X we have that, for 9 G [0, 1], xq — 0y + (1 — 0)x* G X, and foM < Of0(y) + (1 - 6)f0(x*). Subtracting fo(x*) from both sides of this equation, we obtain Since x* is a local minimizer, the left-hand side in this inequality is non-negative for all small enough values of 6 > 0. We thus conclude that the right-hand side is also non-negative, i.e., /o(y) > fo(x*), as claimed. Also, the optimal set is convex, since it can be expressed as the p*-sublevel set of a convex function: Xopt = {x G x : fo(x) < p*}, which ends our proof. □ 8.3.2 Existence of solutions A sufficient condition for the existence of a solution for problem (8.11) is essentially established via the classical Weierstrass theorem. Theorem 8.5 (Weierstrass extreme value theorem) Every continuous function f : R” —R on a nonempty compact (i.e., closed and bounded) set attains its extreme values on that set. Applying this theorem to our optimization problem (8.11) or (8.14), we obtain the following lemma. Lemma 8.2 If the set X C dom/o is nonempty and compact, and fo is continuous on X, then problem (8.14) attains an optimal solution x*. Note that, since convex functions are continuous on open sets, we have that if fo is convex and X C int dom/o, then fo is continuous on X and the hypotheses of Lemma 8.2 hold. While very useful, Lemma 8.2 still only provides a sufficient condition for existence of the solution, meaning, for instance, that a solution may well exist also when the feasible set X is not compact. The most typical case is for unconstrained minimization of fo, where 254 OPTIMIZATION MODELS X — lRn, which is obviously not compact. The next lemma provides another sufficient condition for existence of a solution, in cases when X is not compact (i.e., open and/or unbounded). To this end, we need to introduce the notion of a coercive function. Definition 8.1 (Coercive function) A function f : Rn —> 1R is said to be coercive if for any sequence {xk} C int domf tending to the boundary of domf it holds that the corresponding value sequence {f(xk)} tends to +00. The next lemma states that the sublevel sets of a continuous coercive function are compact sets. Lemma 8.3 A continuous function f : 1R/1 —» R with open domain is coercive if and only if all its sublevel sets Sa = {x : f(x) < oc}, oc e IR, are compact. Proof Notice first that continuity of / immediately implies that Sa is closed. We next show that this set must also be bounded, if / is coercive. For the purpose of contradiction, suppose there exists an oc £ R such that Sa is unbounded. Since, by definition of effective domain, it holds that Sa C dom /, then Sa unbounded implies that also domf is unbounded. Then there would exist a sequence {xk} C Sa ^ domf such that {||xjt||} —> 00 for k —> 00. But by coercitivity this would imply that also {/(xjt)} +00/ which contradicts the hypothesis that f(xjc) < oc, for all xk £ Sa. We thus conclude that the sublevel set must be bounded. Conversely, suppose all Sa are compact. Consider any sequence {xk} C domf that tends to a boundary point of domf and suppose, for the purpose of contradiction, that the corresponding value sequence {f(xk)} remains bounded, i.e., there exist a finite oc £ IR such that f(xk) < oc, for all k. Then, {xk} C and, since is compact, {xk} has a limit x £ S&. But this would imply that the limit x belongs to the effective domain of /, since f(x) < oc, hence f{x) is finite. We thus have that the sequence {xk} C domf (which by assumption tends to a boundary point of domf) has a limit x £ domf, which contradicts the hypothesis that domf is an open set. □ We then have the following result on existence of solutions to the unconstrained version of problem (8.14). Lemma 8.4 If X = Kn (unconstrained optimization), and /0 is continuous and coercive, then problem (8.14) attains an optimal solution x*. Proof To verify this result, take an oc £ R such that the sublevel set Sa is nonempty By Lemma 8.3 the set Sa is compact, hence by the Weierstrass theorem /0 attains a global minimum x* on Sa. □ A further sufficient condition for existence of a solution is obtained by combining the results in Lemma 8.2 and Lemma 8.4, as stated in the next result, whose proof is left as an exercise to the reader. Lemma 8.5 If X. C dom/o is nonempty and closed, and /0 is continuous on X and coercive, then problem (8.14) attains an optimal solution x*. 8.3.3 Uniqueness of the optimal solution We warn the reader not to confuse the concept of global optimality with that of uniqueness of the optimal solution. For any convex optimization problem any locally optimal solution is also globally optimal, but this does not mean, in general, that the optimal solution is unique. A simple example of this fact is given, for instance, in problem (8.13), which is convex but admits infinitely many optimal solutions: all points with coordinates (xi,0), with x\ E [1, 3], are globally-optimal solutions. Intuitively, one may observe that such a lack of uniqueness is in this case due to the "flatness" of the objective function around the optimal points. Indeed, since flatness is ruled out by strict convexity, one can prove the following sufficient condition under which the optimal solution of a convex optimization problem is unique. Theorem 8.6 Consider the optimization problem (8.14). If /0 is a strictly convex function, X is a convex set, and x* is an optimal solution to the problem, then x* is the unique optimal solution, that is Xopt — {**}• Proof Suppose, for the purpose of contradiction, that x* is optimal for (8.14), and there exists another point y* x* which is also optimal for this problem. That is, both x* and y* are feasible and /o(x*) = /o(y*) = p*. Let then A E (0, 1) and consider the point z = Ay* + (1 — A)x*. By convexity of X, point z is feasible. Moreover, by strict convexity of /0 it holds that /o(z) < A/o(y*) + (1 — A)/o(x*) = p*, which would imply that z has a better objective value than x*, which is impossible since x* is globally optimal. □ Another sufficient condition for uniqueness of the optimal solution can be stated for the class of convex programs with linear objective function (actually, we shall later show that any convex optimization problem can be converted into an equivalent problem having linear objective) and strictly convex feasible set. We first state a simple preliminary lemma, which establishes that any optimal solution of a convex optimization problem with a linear objective must be on the boundary of the feasible set. 256 OPTIMIZATION MODELS Lemma 8.6 Consider the optimization problem (8.14), let /0 be a nonconstant linear function (thus, /0 = cTx, for some nonzero c E M.n), and let X be convex and closed. If x* is an optimal solution to the problem, then x* belongs to the boundary of X. Proof Suppose, for the purpose of contradiction, that x* is optimal and belongs to the interior of the feasible set X, see Figure 8.19. Let p* = cTx* be the optimal objective level. By definition of an interior point, there exists an open ball of radius r > 0 centered at x* and entirely contained in X. That is, all points z such that \\z — x* H2 < r are feasible. Choose then z = x* — ccc with cc = 0.5r/ ||c||2- It is immediate to check that this z is inside the above-mentioned ball, hence it is a feasible point. Moreover, fo(z) = cTz — cTx* — occTc = p* — 2\f^2 < V*/ hence the objective level of z is better (lower) than p*, which contradicts the assumption that x* is globally optimal. □ We can now establish the following sufficient condition for uniqueness of an optimal solution for convex programs with linear objective. Theorem 8.7 Consider the optimization problem (8.14), let /0 be a nonconstant linear function (thus, /0 = cT x, for some nonzero c E M.n), and let X be closed, full-dimensional, and strictly convex. If the problem admits an optimal solution x*, then this solution is unique. Proof Suppose, again for the purpose of contradiction, that x* is optimal but non-unique. Then there exists y* f x* which is feasible and such that p* = cTx* — cTy*. Consider then a point z in the open segment joining x* and y*: z = Ay* + (1 — A)x*, A E (0, 1). By strict convexity of X, the point z is in the relative interior of X, which coincides with the interior of X, since we assumed X to be full-dimensional. Hence, z belongs to the interior of X, and /o(z) = cTz — A cTy* + (1 — A)cTx* = p*, thus, z is optimal. But according to Lemma 8.6 no optimal solution can be in the interior of X (it must be on the boundary), hence we found a contradiction. □ Remark 8.1 In some situations one can "regularize" a convex optimization problem by slightly modifying it in order to insure strict convexity of the objective or of the feasible set X. For instance, if /0 is convex (but not strictly), then one may consider a problem with a modified objective /o(*) =/o(*) + 7ll*-c||i for some small 7 > 0 and c E IRE The addition of the strongly convex term 7||x — c||2 makes /o(x) also strongly.convex, and hence strictly Figure 8.19 When the objective is linear, an optimal solution x* cannot be in the interior of the feasible set, for otherwise it would be possible to move away from x* while remaining feasible and improving the objective. CONVEXITY 257 convex. Thus, Theorem 8.6 would guarantee uniqueness of the optimal solution, for the modified problem. Similarly, a convex problem with linear objective and inequality constraints fi(x) < 0, i = 1,..., m, where fi are convex (but not strictly convex), may be modified by adding a strongly convex term to the left-hand side of the constraints, thus making the feasible set strictly convex. 8.3.4 Problem transformations An optimization problem can be transformed, or reformulated, into an equivalent one by means of several useful "tricks," such as: monotone transformation of the objective (e.g., scaling, logarithm, squaring) and constraints; change of variables; addition of slack variables; epigraphic reformulation; replacement of equality constraints with inequality ones; elimination of inactive constraints; etc. By the term "equivalent" referred to two optimization problems, we here mean informally that the optimal objective value and optimal solutions (if they exist) of one problem can be easily obtained from the optimal objective value and optimal solutions of the other problem, and vice versa. We next analyze each of the mentioned transformation tricks. 8.3.4.1 Monotone objective transformation. Consider an optimization problem of the form (8.11). Let cp : ]R ^ R be a continuous and strictly increasing function over X, and consider the transformed problem £* = mj£ <p(Jo(*)) (8-*6) subject to: f(x) <0, i = 1,..., m, hi(x) — 0, i = 1,... ,q. Clearly, problems (8.11) and (8.16) have the same feasible set. We next show that they also have the same set of optimal solutions. Indeed, suppose x* is optimal for problem (8.11), i.e., fo{x*) = p*. Then, x* is feasible for problem (8.16), thus it holds that <P(fo(x*)) = <p(p*) >8*- (8-17) Assume next that x* is optimal for problem (8.16), i.e., (p(fo(x*)) = g*. Then, x* is feasible for problem (8.11), thus it holds that /o(x*) > p\ (8.18) Now, since (p is continuous and strictly increasing over X, it has a well-defined inverse (p~l, thus we may write <P(fo(n)=g* ** <P~Hg*) =/o(**), 238 OPTIMIZATION MODELS which, substituted in (8.18), yields > p*. Since (p is strictly increasing and <K<p_1(g*)) = g*, the latter relation also implies that g* > <p(p*), which, together with (8.17), implies that it must be (p(p*) = g*. This means that for any optimal solution x* of problem (8.11) it holds that <p(fo(x*)) = g*, which implies that x* is also optimal for problem (8.16). Vice versa, for any optimal solution x* of problem (8.16) it holds that f0(x*) = = p*, which implies that x* is also optimal for problem (8.11). An example of a frequently encountered transformation is the logarithmic one: since q>(-) = log(-) is strictly increasing (for nonnegative argument), then if /0 is non-negative we can substitute the original objective with the transformed one, obtaining an equivalent problem. Preserving/creating convexity. If the original problem (8.11) is convex, then the transformed problem (8.16) is also convex, provided that q> is convex (besides being strictly increasing). A common convexity-preserving objective transformation consists of "squaring" a (non-negative) objective. Indeed, (p(-) — (-)2 is convex and strictly increasing (for non-negative argument), hence if /0 is non-negative and convex, then we can apply the previous equivalence result, preserving the convexity of the objective. Another elementary convexity-preserving objective transformation is simply given by multiplication of the objective function by a positive scalar, that is, a problem with objective /0 is equivalent to a problem with objective otfo, for a > 0, and convexity is preserved by this transformation. If some transformations preserve convexity, some other transformations can actually be used for "creating" convexity from an originally non-convex objective. This convexity-inducing technique typically works in conjunction with the change of variable and constraint transformation tricks described in the next sections. 8.3.4.2 Monotone constraint transformation. Strictly monotone functions can also be used to transform functional constraints into equivalent ones. If a constraint in a problem can be expressed as £(x) < r(x), and cp is a continuous and strictly increasing function over A’, then this constraint is equivalent to < f(r(x)), where equivalent here means that the set of x satisfying the first constraint coincides with the set of x satisfying the second constraint. Similarly, if cp is continuous and strictly decreasing over A’, then the constraint is equivalent to cp(£(x)) > <p(r(x)). 8.3.4.3 Change of variables. Consider an optimization problem of the form (8.11), and let F : X —» Y be an invertible mapping (i.e., for every y £ Y there exist a unique x £ X such that F(x) = y), describing a change of variables of the form y = F(x) & x = F_1(y), where the set X includes the intersection of the domain of /0 with the feasible set X of the problem. Then, problem (8.11) can be reformulated in the new variable y as P* = min g0(y) (8.19) subject to: gi{y) <0, i = 1,..., m, s,-(y)=0, i = l,...,q, where gj(y) = (y)), i = 0,1,..., m, and s,(y) = hi{F~x{y)), i = 1,... ,q. Clearly, if x* is optimal for problem (8.11) then y* = F(x*) is optimal for problem (8.19). Vice versa, if y* is optimal for problem (8.19) then x* = F_1(y*) is optimal for problem (8.11). If the original problem (8.11) is convex, then the problem in transformed variables (8.19) is convex whenever the variable transformation is linear or affine, that is if y — F(x) — Bx + c, where B is an invertible matrix. Sometimes, a well-chosen variable transformation may transform a non-convex problem into a convex one. A notable example is illustrated next. Example 8.18 (Optimization problems involving power laws) Many problems dealing with area, volume, and size of basic geometric objects; birth and survival rates of (say) bacteria as functions of concentrations of chemicals; heat flows and losses in pipes, as functions of the pipe geometry; analog circuit properties as functions of circuit parameters; etc., involve quantities that are described by power laws, that is monomials of the form 260 OPTIMIZATION MODELS ocx^x2~ • • • Xnn, where oc > 0, x* > 0, i = 1,... ,n, and a; are given real numbers. An optimization problem of the form *(0) J°) 5fc ^9 Utl p = mm otQxf x2 • • • x„" a0) (i) s.t.: otjxf x22 • • • xj < bj, j = 1,..., m, Xi > 0, i — 1,..., ft, is non-convex in the variables xi,... ,xn. However, applying a logarithmic transformation to the objective and constraint functions, we obtain an equivalent problem in the form £* = min log a0 + L fl/0) lo§ xi x /=1 n ... s.t.: log otj + ai l°g xi < l°g fy/ j — 1/ • • • / m. Z = 1 Then, introducing the change of variables y* = log x-u i = 1,..., ft, over the domain Xi > 0, the last problem is rewritten equivalently in the y variables as g* = min log «o + L a |0)yi y i=i s.t.: log otj + ]T] of Vi < log fry, 7 = 1,..., m, which is a convex (and in particular, linear) programming problem in y. £.3.4.4 Addition of slack variables. Equivalent problem formulations are also obtained by introducing new "slack" variables into the problem. We here describe a typical case that arises when the objective involves the sum of terms, as in the following problem p* = min (8.20) * i=l s.t.: x G ?C. Introducing slack variables £/, / = 1,..., p, we reformulate this problem as 8* — min (8.21) x,t fr[ s.t.: x G X <Pi(x)<ti i = l,...,r, where this new problem has the original variable x, plus the vector of slack variables t = These two problems are equivalent in the following sense: 1. if x is feasible for (8.20), then x, t\ — <p*(x), i = 1,... ,r, is feasible for (8.21); 2. if x, t is feasible for (8.21), then x is feasible for (8.20); 3. if x* is optimal for (8.20), then x*, t* = <p*(x*), i — 1,...,r, is optimal for (8.21); 4. if x*, t* is optimal for (8.21), then x* is optimal for (8.20); 5- <?* = P*- The first two points are immediate to prove. To check point 3 observe first that, by point 1, x*, f* — <p/(x*), i = l,...,r, is feasible for (8.21). If it were not optimal for this problem, then there would exist another feasible pair y* £ X, t* = (tJ*, ..., t*) with a better objective, i.e., such that E/=i ^ < C=i = E/=i But such a y* is feasible for problem (8.20) and since <Pi(y*) < t* for i = 1,... ,r, we would have that E/=i ^/(y*) ^ C= 1 Ti < XX= 1 which would contradict the fact that x* is optimal for problem (8.20). Points 4 and 3 follow from an analogous reasoning. A similar approach can also be followed when the problem has a constraint of the form E ?«■(*) ^ °' which can be equivalently substituted by the constraints £f, <0, <p,-(x) < f,, 1 = involving x and the slack variable t E Rr. Remark 8.2 Generality of the linear objective. A common use of the slack variable //trick,/ described above consists of transforming a convex optimization problem of the form (8.11), with generic convex objective /0, into an equivalent convex problem having linear objective. Introducing a new slack variable f £ IR, problem (8.11) is reformulated as t* = min t (8.22) subject to: fi(x) <0, i = 1,..., m, hj(x) = 0, i — 1,.. .,9, /oO) <f- Problem (8.22) has a linear objective in the augmented variables (x, t), and it is usually referred to as the epigraphic reformulation of the original problem (8.11). Essentially, the "price to pay" for having a linear objective is the addition of one scalar variable t to the problem. Any convex optimization problem can thus be reformulated in the form of an equivalent convex problem with linear objective. 262 OPTIMIZATION MODELS £.3.4.5 Substituting equality constraints with inequality constraints. In certain cases, we can substitute an equality constraint of the form b(x) = u with an inequality constraint b(x) < u. This can be useful, in some cases, for gaining convexity. Indeed, if b(x) is a convex function, then the set described by the equality constraint {x : b(x) = u} is non-convex in general (unless b is affine); on the contrary, the set described by the inequality constraint {x : b(x) < u] is the sublevel set of a convex function, hence it is convex. We give a sufficient condition under which such a substitution can be safely performed. Consider a (not necessarily convex) problem of the form v* = mi5 /0 W (8-23) s.t.: b(x) = u, where u is a given scalar, together with the related problem in which the equality constraint is substituted by an inequality one: g* = min f0(x) (8.24) s.t.: b(x) < u. Clearly, since the feasible set of the first problem is included in the feasible set of the second problem, it always holds that g* < p*. We next prove that it actually holds that g* = p*, under the following conditions: (z) /0 is non-increasing over X, (ii) b is non-decreasing over JC, and (iii) the optimal value p* is attained at some optimal point x*, and the optimal value g* is attained at some optimal point x*. The first condition means that, for x,y e X, fo(x) < /o(y) ^ X > y, where the vector inequality is to be intended element-wise. Similarly, the second condition means that, for x,y E X, b(x) > b(y) o x>y. To prove that under these assumptions it holds that g* = p*, we deny the statement and assume that g* < p*. Then it must be that b(x*) < u (for if b(x*) = u, then x* would be feasible also for the equality constrained problem, and it would hold that g* = p*), thus b(x*) — u> b(x*), which, by monotonicity of b implies that x* > x*. In turn, by monotonicity of /0, this implies that fo{x*) < fo(x*), that is p* < g*, which contradicts the initial statement. An interpretation of this setup and assumptions is obtained by considering the x variable as a vector of "resources" (e.g., money, labor, etc.), the objective /0 as an index representing the performance achievable with the given resources, and the constraint b(x) = c as a budget constraint, where b measures the resource consumption. The monotonicity hypothesis on /0 is typically satisfied when the objective models a situation in which the more resources we put, the better performance we achieve. It is intuitively clear that, under such assumptions, the inequality constraint will always be saturated at optimum, since from the point of view of the objective, it is better to consume all the available resources, up until when their budget is saturated. Analogously, if the problem is in maximization form max fo(x) s.t.: b(x) = u, then a sufficient condition for replacing the equality constraint with an inequality one is that both /0 and b are non-decreasing over X. Remark 8.3 Observe that while we have p* = g* (under the stated hypotheses) and every optimal solution of (8.23) is also optimal for (8.24), the converse is not necessarily true, that is problem (8.24) may have optimal solutions which are not feasible for the original problem (8.23). However, this converse implication holds if the objective is strictly monotone. Example 8.19 (Budget constraint in portfolio optimization) A typical problem in portfolio optimization (see also Example 2.6 and Example 4.3) amounts to determining a portfolio mix x G R” such that the expected return is maximized, and the portfolio volatility remains under a fixed level. Formally, max rT x s.t.: xT£x < a2, 1T x + (p(x) = 1, where r £ R” is the vector of expected returns of the component assets, £ £ S+ is the returns covariance matrix, a2 is the given upper bound on portfolio volatility, and (p{x) is a function measuring the cost of transaction, which is a non-decreasing function of x. The last constraint expresses the fact that, assuming a unit initial capital in cash, the sum of the invested amounts, lTx, must be equal to the initial capital, minus the expense for transaction costs. Assuming that r > 0, the objective function in the above problem is increasing in x, and the left-hand side of the equality constraint is nondecreasing in x. Hence, we can write the problem equivalently by substituting the equality constraint with the inequality one 1Tx + (p(x) < 1. 264 OPTIMIZATION MODELS If (p is a convex function, the modified problem is convex, whereas the original formulation is not convex in general (unless </> is affine). Remark 8.4 Problems with a single constraint. Consider a convex optimization problem with linear objective and a single inequality constraint min cTx s.t.: b(x) < u, (8.25) 14 The feasible set is closed if b is continuous. Actually, the weaker condition of b being lower semi-continuous (or closed, as we usually assume tacitly), is sufficient to ensure closedness of the feasible set. min cTx s.t.: b(x) — u. and assume that c 7^ 0, that the feasible set {x : b(x) < u} is closed,14 and that the problem attains an optimal solution x*. Then, from Lemma 8.6, we have that this solution must belong to the boundary of the feasible set, which means that it must hold that b(x*) = u. Therefore, whenever problem (8.23) attains an optimal solution, this solution is also optimal for the equality constrained problem 8.3.4.6 Elimination of inactive constraints. Consider a generic convex optimization problem of the form P* = min fo{x) (8.26) s.t.: fi(x) < 0, / = 1,...,m. Ax = b, and suppose the optimum is attained at some point x*. The inequality constraints that are active at x* are defined as those that hold with equality at that optimal point. We can thus define the set of indices that correspond to active constraints as A(x*) = {/: fi(x*)= 0, i = Similarly, we define the complementary set of indices corresponding to constraints that are inactive at optimum: A{x*) — {z : fi(x*) < 0, z = 1,...,m}. It is a rather intuitive fact (although the proof is not entirely trivial) that all the inactive constraints can be removed from the original problem, without changing the optimal solution. More precisely, the following proposition holds. Proposition 8.1 Consider the convex problem (8.26) and assume it admits an optimal solution x*. Then, x* is also optimal for the problem min /o(x) (8.27) s.t.: fj(x) <0, i € A(x*), Proof Let A' = {x : fi(x) < 0, z = 1,.. . ,m; Ax = b}, XA = ix: fiix) < 0/ * ^ A(x*); Ax = b} denote the convex feasible sets of the original problem (8.26) and of the reduced problem (8.27), respectively, and let T={x : fi(x) <0, ie A{x*)}. Since x* E X C A4, x* is feasible for problem (8.27). We suppose, for the purpose of contradiction, that x* is not optimal for this problem, which means that there exists x E XA such that fo(x) Next observe that x* E X C X and actually, since /;(x*) < 0 for i E A(x*), x* belongs to the interior of X (see Section 8.2.14). Therefore, there exists an open ball B centered in x* that is also contained in X. Consider now a point of the form z(A) = (1 — A)x* + Ax, for A E [0,1] (such points are a convex combination of x* and x). One can choose A / 0 sufficiently small so that z(A) is close to x* and belongs to B. Hence, for such A, z(A) E B C X. Notice that, since x* E X C A4 and x E X4, by convexity, all points along the segment {z(A), A E [0, 1]} are also in Xj±- But for the chosen A we also have z(A) E X, hence z(A) E XAnl=X, which means that z(A) is feasible for the original problem (8.26). By Jensen's inequality, we then have that /0(z(A)) < (1 — A)/o(x*) + A/o(x) < (l-A)/0(x*)+A/o(x*)=/o(x*) = p*/ where the second inequality in the above chain follows from the position /o(x) < /0(x*) that we are taking for the purpose of arriving at a contradiction. Indeed, we found a point z(A) which is feasible for problem (8.26) and which yields an objective value /o(z(A)) < p*, which is impossible, since p* is the optimal value of the problem. □ Remark 8.5 A few remarks are in order regarding the discussed technique of elimination of inactive constraints. First, notice that, for using this technique in practice, one should know in advance which constraints are inactive at optimum. However, unfortunately, these are typically only known once the original problem is solved! Nevertheless, there are particular cases where an a priori analysis of the problem may help determine which constraints will necessarily be inactive at optimum, and these constraints can be effectively removed, thus reducing the '"size" of the problem one needs to solve numerically. An illustration of such a situation is described in Example 8.23. 266 OPTIMIZATION MODELS A second warning concerns the fact that while all optimal solutions of the original problem are also optimal for the reduced problem, the converse is not true, in general. To convince oneself of this fact, one may consider minimizing the univariate convex function shown in Figure 8.20 with constraints x > 0 and x < 1. Finally, the result only holds for convex problems, and may fail, in general, when the objective or constraints are not convex (consider, for instance, the minimization of the univariate function shown in Figure 8.21, with constraints x > 0 and x < 1). 5.3.5 Special classes of convex models The class of convex optimization problems encompassed by the formulation (8.11) is quite broad, allowing for a general type of convex objective and constraint functions. In this book, however, we concentrate on some more specialized optimization models, obtained by further qualifying the objective and constraint functions in (8.11). For such specialized models there exist well-established and efficient numerical solvers that provide the user with a reliable technology for effectively solving in practice most design problems that can be formulated. The fundamental convex models, which will be treated in detail in forthcoming chapters, are the following ones. 8.3.5.1 Linear and quadratic programs. Linear programs (LPs) are a specialized case of (8.11), where all the functions involved in the problem description are linear (or affine). They hence have the standard form p* = min cT x s.t.: ajx — hi < 0, i — 1,..., m, — ^eq* Quadratic programs (QPs, QCQPs) are also a specialized case of (8.11), in which the functions describing the objective and the inequality constraints are quadratic, i.e., they are polynomials in the x variable of degree at most two: f(x) = xTHjX + cjx + d\, where Hz are n x n symmetric matrices. Quadratic problems are convex if and only if Hj >z 0 for i = 0,1,..., m. Clearly, quadratic programs include linear programs as a special case, when Hz = 0, / = 0,1,..., m. Linear and quadratic programs are discussed in detail in Chapter 9. 8.3.5.2 Geometric programs. Geometric programming (GPs) is an optimization model in which the variables are non-negative, and the objective and constraints are sums of powers of those variables, with 1 1 ► 0 1 * Figure 8.20 Constrained minimization of a convex function: all constrained- optimal points with inactive constraints are also optimal for the unconstrained problem, but not vice versa. , 1 ► 0 1 x Figure 8.21 Constrained minimization of a non-convex function: if inactive constraints are removed the constrained-optimal solution may no longer be optimal for the unconstrained problem. non-negative weights. Although GPs are not convex in their natural formulation, they can be transformed, via a logarithmic change of variables, into convex problems. GPs arise naturally in the context of geometric design, or with models of processes that are well approximated With power laws. They also arise (via their convex representation) when trying to fit discrete probabilities models in the context of classification. The objective and constraint functions involved in GPs are so-called posynomials, that is non-negative sums of monomials: k a{i) a(:i] fi(x) = E cj xi'1 ■ ■ • xn”, i = 0,1,..., m, with cf > 0, and domain x > 0. Geometric programs are thus an extension of the optimization problems involving power laws (monomials) discussed in Example 8.18; they are further discussed in Section 9.7. 8.3.5.3 Second-order cone programs. Second-order cone programs (SOCP) further extend convex quadratic programs, by dealing with constraints of the form fj(x) = \\AjX + bj\\2 < cjx + dj, i = where A\ E ]Rm«'M are given matrices, bj E 1RW/, cz E lR.n are given vectors, and dz E IR are scalars. SOCPs arise, for example, in several geometrical optimization problems, as well as robust counterparts of linear programs when the data is affected by deterministic unknown- but-bounded or stochastic uncertainty, and in financial optimization problems. They are treated in detail in Chapter 10. 8.3.5.4 Semidefinite programs. Semidefinite programs (SDPs) are convex optimization problems that involve minimization of a linear objective subject to a constraint that imposes positive semidefiniteness of a symmetric matrix that depends affinely on a vector of variables x E Rn. Specifically, given symmetric matrices Fz E Sm, i = 0,1,..., n, a semidefinite program usually takes the following form: p* = min cT x s.t.: F(x) >z 0, where F(x) = Fq + xi^i- Since F(x) ^0 if and only if Amin(F(x)) > 0, the above SDP can be expressed in the usual form of (8.11), as 268 OPTIMIZATION MODELS p* — min cT x s.t.: f(x) < 0, with f(x) = — Amin(F(x)). SDPs encompass as special cases SOCPs, QPs and LPs. They are discussed in Chapter 11. 8.4 Optimality conditions We here provide conditions characterizing optimality of a feasible point in a convex optimization problem. The following result holds. Proposition 8.2 Consider the optimization problem minxex fo(x)f where /0 is convex and differentiable, and X is convex. Then, x G X is optimal V/o(x)T(y — x) > 0, Vy G X. (8.28) Proof We know from (8.4) that for every x,y G dom/o it holds that foiy) > fo(x) + V/o(x)T(y - x). (8.29) The implication from right to left in (8.28) is immediate, since V/o(x)T (y — x) >0 for all y G X implies, from (8.29), that /o(y) > fo(x) for all y G X, i.e., that x is optimal. Conversely, suppose that x is optimal, we show that it must be that V/o(x)T(y — x) > 0 for all y G X. If V/otx) = 0, then the claim holds trivially. Consider then the case when V/o(x) 7^ 0, and suppose, by the purpose of contradiction, that x is optimal but there exists y G X such that V/o(x)T(y — x) <0. Then any point xq = 6y + (1 — 6)x, 6 G [0,1], along the segment from x to y is feasible and, for sufficiently small 9, Xq is in a neighborhood of x where the sign of the first-order term of the Taylor expansion of /0 prevails over all other terms, hence Mxe) = fo(x) + Vf0(x)T(xe-x) + o(\\xe-x\\) = fo(x) + 0V/o(x)T(y - x) + o{e\\y - *||) — fo(x) + negative term, which implies that for such a 9 we would have fo(xg) < fo(x), which contradicts the optimality of x. □ If V/o(x) 7^ 0, then Eq. (8.28) states that V/o(x) is a normal direction defining a hyperplane {y : Vfo(x)T(y — x) =0} such that (i) x is on the boundary of the feasible set X, and (ii) the whole feasible set lies on one side of this hyperplane (see Figure 8.22), that is in the half-space defined by = {y : V/0(x)T(y - *) > 0}. CONVEXITY 269 Figure 8.22 Geometry of the first- order condition for optimality: all points y in the feasible set are such that y — x is a direction of increase for /0 (for V/0(x) + 0). Notice that the gradient vector V/o(x) defines two sets of directions: for directions v+ such that V/o(x)Tt?+ > 0 (i.e., directions that have positive inner product with the gradient) we have that if we make a move away from x in the direction v+ then the objective /0 increases. Similarly, for directions V- such that V/o(x)T^+ < 0 (i.e., descent directions, that have negative inner product with the gradient) we have that if we make a sufficiently small move away from x in the direction v_ then the objective /0 locally decreases. Condition (8.28) then says that x is an optimal point if and only if there is no feasible direction along which we may improve (decrease) the objective. 8.4.1 Optimality conditions for unconstrained problems When the problem is unconstrained, i.e., X — Rn, then the optimality condition (8.28) requires that V/o(x)T(y — x) > 0 for all y G P. This implies that the condition must be satisfied for any y\, and it should also be satisfied for any y2 = 2x — yi, which would imply that V/o(x)T(yi — x) > 0 and — V/o(x)T(yi — x) >0, and this is only possible if V/o(x) = 0. We thus proved the following proposition. Proposition 8.3 In a convex unconstrained problem with differentiable objective, x is optimal if and only if V/0(x) = 0. (8.30) 8.4.2 Optimality conditions for equality-constrained problems Consider the linear-equality constrained optimization problem min fo(x) s.t.: Ax = b, 270 OPTIMIZATION MODELS where /0 : R" —> IR is convex and differentiable, and A E Rm'n, b E !Rm define the equality constraints. Here, the convex feasible set is the affine set of solutions of the linear equations: X — {x: Ax — b}. Using (8.28), we have that x E X is optimal if and only if V/o(x)T(y-x) >0, Vye*. Now, all vectors y E X (that is, all vectors y such that Ay = b) can be written asy = x + z, z E J\f{A), hence the optimality condition becomes V/o(x)Tz>0, VzeAr(A). Since z E Af(A) if and only if —z E J\f(A), we see that the condition actually implies V/0(x)Tz = 0, VzeN’(A). In words, this means that V/o(x) must be orthogonal to J\f(A), i.e. Vf0(x)€Af(A)± = n(AJ), where the last line follows from the fundamental theorem of linear algebra, see Section 3.2. The condition V/o(x) E TZ(AT) is equivalent to the existence of a vector of coefficients v E Rm such that V/o(x) = ATv.Hn conclusion, we have that x is optimal for problem (8.31) if and only if Ax - b, and 3v E Rm : V/0(x) - ATv. Since the same reasoning can be repeated using — v instead of v, the above condition is also often expressed equivalently as in the following proposition. Proposition 8.4 A point x is optimal for problem (8.31) if and only if Ax = b, and 3v € Rm : V/0(x) + ATv = 0. 8.4.3 Optimality conditions for inequality-constrained problems The following result gives sufficient conditions for optimality of a convex inequality constrained problem. Proposition 8.5 Consider the convex optimization problem minJG^ /0 (*), where /0 is differentiable, and the feasible set X is defined via convex inequalities as X = {x E Rn : fi(x) < 0, / = m}, CONVEXITY 271 where /*, / = 1,..., m, are convex and continuosly differentiable. Let x e X be a feasible point, and let A(x) denote the set of indices of the constraints that are active at x, that is A{x) = {i: fi(x) = 0, i = If there exist A, > 0, i € A(x), such that v/oW+ E WM = 0' (8.32) then x is optimal Proof Consider first the case when A(x) is empty. Then x is feasible and condition (8.32) simply prescribes that V/o(x) = 0, in which case x coincides with an unconstrained minimum of /0. Assume next that A(x) is nonempty, and notice that, from the convexity of //, it follows that, for all y, fiiy) > fi(x) + V/,-(x)T(y -x) = V/,-(;t)T(y - x), for i € A{x). Therefore, for any x G X such that A(x) is nonempty, the hyperplane {V//(x)T(y — x) = 0}, i E ^4(x), divides the whole space into two complementary half-spaces n% = {y : V/,-(x)T(y — x) > 0} ^ = (y : V/;(x)T(y — x) < 0}, and the feasible set X is not contained15 in , thus it must be contained in , i.e., X C 'Vt'f. Since this is true for all i € A{x), we obtain that XCV(x)= fl H(i\ Now, the statement (8.32) is equivalent to V/o(*)T(y-x) > - E AiVfi{x)T(y-x), Vy. In particular, since A/ > 0, it holds that V/o(*)T(y-x) > 0, VyeV(x), and since X QV(x), this implies that V/0(x)T(y-x) >0, Vy e X, which, from Proposition 8.2, means that x is optimal. □ 15 Since, for y € %++, (8-33) im~ plies that /*• (y) > 0, hence y is infeasible. 272 OPTIMIZATION MODELS 8.4.4 Optimality conditions for non-differentiable problems All the previous conditions were stated for the case when /0 is differentiable. However, similar results do hold also for non-differentiable problems, provided that subgradients (and subdifferentials) are suitably used instead of gradients. Specifically, the equivalent of condition (8.28) for non-differentiable /0 is16 x £ X is optimal <£> 3gx G df0(x) : gj(y - x) > 0, Vy G X. The implication from right to left is immediate, applying the definition of a subgradient; the converse direction is slightly more involved and it is not proved here. For unconstrained problems, the optimality condition becomes x £ is optimal 0 £ dfo(x). Example 8.20 Consider the unconstrained minimization problem min fo(x), fo(x) — max ajx + bj. Here, the objective is a polyhedral function, which is non-differentiable. The subdifferential of /0 at x is given by dfo(x) = c°{«i: aJx + bi = fo(x)}, and the optimality condition 0 £ dfo(x) hence requires that 0 £ co{aj : ajx + bi = f0(x)}. 16 See, e.g., Section 1.2 in N. Z. Shor, Minimization Methods for Non- differentiable Functions, Springer, 8.5 Duality Duality is a central concept in optimization. Essentially, duality provides a technique for transforming an optimization problem (the primal problem) into another related optimization problem (the dual problem), which can provide useful information about the primal. In particular, the dual problem is always a convex optimization problem (even when the primal is not), and its optimal value provides a lower bound on the optimal objective value of the primal. When also the primal is convex and under some constraint qualification conditions, the primal and dual objective values actually coincide. Moreover, under some further assumptions, the value of the primal optimal variables can be recovered from the optimal dual variables. This feature is useful whenever the dual problem happens to be "easier" to solve than the primal. Further, duality plays an important role in certain algorithms for the solution of convex problems (see, e.g., Section 12.3.1), since it permits us to control, at each iteration of the algorithm, the level of suboptimality of the current solution candidate. Duality is also a key element in decomposition methods for distributed optimization (see Section 12.6.1). Consider the optimization problem in standard form (8.n)-(8.i3), which is recalled below for clarity and is here denoted as the primal problem: P* = min f0(x) (8.34) subject to: f(x) <0, i = 1,..., m, (8.35) hi(x)= 0, i = l,...,q, (8.36) and let V denote the domain of this problem, that is the intersection of the domains of the objective and the constraint functions, which is assumed to be nonempty. We shall build a new function, called the Lagrangian, defined as a weighted sum of the problem objective and the constraint functions, namely C : V x ]Rm xR^ —> R, with m q C{x,k,v) =fo(x) + EA ifi(x) + EV'M*X i=1 i=1 where A = [Aj • • • Am] is the vector of weights relative to the inequality constraints, and v — [v\ • • • Vq] is the vector of weights relative to the equality constraints; A and v are called the vectors of Lagrange multipliers, or dual variables, of the problem. Notice that we are not assuming convexity of /0, /1,... ,/m or of hi,..., hq, for the time be- ing. 8.5.1 The Lagrange dual function Suppose that the value of the multipliers A,v is fixed, with A > 0 (element-wise, i.e., A* > 0, i = 1,..., m). We may then consider the minimum (infimum) of the Lagrangian over the x variable: g( A,v) — inf£(x,A,v) f m q \ = inf Mx) + E AiMx) + E vM*) ■ (8-37) xeu \ /=1 1=1 / For given (A,v), if C(x, A,v) is unbounded below w.r.t. x, then the minimization over x yields g(A,v) = —00; otherwise, g(A,v) is a finite value. The function g(A,v) : IRm x 1RT —> R is called the (Lagrange) dual function of the problem (8.34)-(8-36). We next state two key properties of the dual function. 274 OPTIMIZATION MODELS Proposition 8.6 (Lower bound property of the dual function) The dual function g( A,v) is jointly concave in (A,v). Moreover, it holds that g{A,v) <p*, VA > 0, Vv. (8.38) Proof To prove this proposition, first notice that, for each fixed x, the function £(x, A,v) is affine in (A,v), hence it is also concave in (A,v) (recall that linear and affine functions are both convex and concave). But then, g(A, v) is defined as the pointwise infimum of concave functions (parameterized by x), hence it is concave (this immediately follows by applying the pointwise maximum rule for convex functions to the function — C, which is convex in (A, v) for any fixed x). Notice that we just proved that g(A,v) is concave, regardless of the fact that fi, i = 0,..., m are convex or not; i.e., g(A, v) is always concave! The second part of Proposition 8.6 can be proved by considering that any point x which is feasible for problem (8.34)-(8.36) must, by definition, satisfy the constraints of the problem, hence //(x) < 0, i = 1,..., m, and h{(x) = 0, i = 1,..., p. Thus, since A* > 0, we have that Aifi(x) < 0 and Vjhi(x) = 0, therefore m q C(x, A, v) = fo(x)+ '£i\ifi(x)+ j2vjhi{x) i = 1 i=1 = fo(x) + YlAifi(x) ^ fo(x), V feasible x, Vv, VA > 0. (8.39) Now, since the infimum of a function is no larger than the value of that function evaluated at any given point x, we have that g(A,v) = inf £(x,A,v) < £(x,A,v), Vx E V. (8.40) Therefore, by combining (8.39) and (8.40), we have that for any point x E V which is feasible for problem (8.34)-(8.36), it holds that g(A, v) < £(x,A,v) < /0(x), V feasible x E V, Vv, VA > 0. Further, since this inequality holds for the value of /o(x) at all feasible x, it also holds for the optimal value of /0, which is p*, from which we conclude that g(A, v) < p*, Vv, VA > 0. □ 8.5.2 The dual optimization problem From (8.38) we have that, for any v and A > 0, the value of g(A,v) provides a lower bound on the primal optimal objective value p*. It is then natural to try to find the best possible of such lower bounds. CONVEXITY 275 This can be done by seeking for the maximum of g(A, v) over v and A > 0. Since g(A,v) is always a concave function, this is a concave maximization problem (which is equivalent to a convex minimization problem, where we minimize — g). Finding the best possible lower bound d* on p*, obtainable from the dual function, is called the dual problem: d* = max g(A,v) s.t.: A > 0. (8.41) It is a remarkable fact that the dual problem is always a convex optimization problem, even when the primal problem is not convex. Moreover, it follows from Proposition 8.6 that d* < p*. This property is usually referred to as the weak duality property, and the quantity S* - p* - d* is called the duality gap, which represents the "error" with which the dual optimal objective d* approximates from below the primal optimal objective p*. 8.5.3 Constraint qualification and strong duality Under some additional hypotheses (such as convexity of the primal problem, plus "constraint qualification") a stronger relation actually holds between the optimal values of the primal and of the dual. We say that strong duality holds when d* = p*, that is, when the duality gap is zero. The following proposition establishes a sufficient condition for strong duality to hold.17 Proposition 8.7 (Slater's conditions for convex programs) Let /;, i = 0,..., m, be convex functions, and let hi, i = 1,..., q, be affine functions. Suppose further that the first k < m of the f functions, i = 1,..., k are affine (or let k = 0, if none of the /1,... ,fm is affine). If there exists a point x E relint V such that fi(x) < 0,...,fk(x) < 0; fk+\(x) < 0< 0; hi(x) = 0 ,...,hq(x) = 0, then strong duality holds between the primal problem (8.34) and the dual problem (8.41), that is p* = d*. Moreover, if p* > — oo, then the dual optimal value is attained, that is, there exist A*,p* such that g(A*,p*) = d* - p*. 17 See, e.g., Section 5.2.3 of S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. 276 OPTIMIZATION MODELS In words, Proposition 8.7 states that for convex programs we have strong duality whenever there exists a point that satisfies the affine inequality constraints and affine equality constraints, and that satisfies strictly (i.e., with strict inequality) the other (non-affine) inequality constraints. Example 8.21 (Dual of a linear program) Consider the following optimization problem with linear objective and linear inequality constraints (a so-called linear program in standard inequality form, see Section 9.3) p* = min cT x (842) s.t.: Ax < b, where A G Rm'” is a matrix of coefficients, and the inequality Ax < b is to be intended element-wise. The Lagrangian for this problem is £(x, A) = cT x + A T(Ax — b) = (c + AT A)T x — A Tb. In order to determine the dual function g(A) we next need to minimize C(x, A) w.r.t. x. But C(x, A) is affine in x, hence this function is unbounded below, unless the vector coefficient of x is zero (i.e., c + ATA — 0), and it is equal to —ATb otherwise. That is, /A\ / ifc + ATA^0, \ -ATb if c + ATA = 0. The dual problem then amounts to maximizing g(A) over A > 0. Clearly, if g(A) — — oo, then there is nothing to maximize, therefore in the dual problem we make explicit the condition that we maximize over those A for which g(A) is not identically equal to —00. This results in the following explicit dual problem formulation: d* = max —A Tb (843) s.t.: c -f- ATA — 0, A > 0, and, from weak duality, we have that d* < p*. Actually, Proposition 8.7 guarantees that strong duality holds, whenever problem (8.42) is feasible. We may also rewrite the dual problem into an equivalent minimization form, by changing the sign of the objective, which results in —d* = min bT A s.t.: ATA + c — 0, A > 0, and this is again an LP, in standard conic form (see Section 9.3). It is interesting to next derive the dual of this dual problem. The Lagrangian is in this case = bTA - ?7TA + i/t.(AtA + c), where tj is the vector of Lagrange multipliers relative to the inequality constraints A > 0, and v is the vector of Lagrange multipliers relative to the equality constraints AT A + c = 0. The "dual dual" function can then be obtained by minimizing Cj w.r.t. A, that is ( \ • t r M ^ I ~°° if b + Av ~ V ^ 0' qiVfV) = inf Cd(A,rj,v) = < T . '' ; a ' i' J I cTv \ib + Av -tj =0. The "dual dual" problem thus amounts to maximizing cj(rj,v) over v and over tj > 0, that is —dd* = max cTv vfrj s.t.: b -f Av = 7/, 7/ > 0. Notice further that the combined constraints b -f Av — 7/ and 7/ > 0 are equivalent to the single constraint b + Av > 0, where the variable 7/ is eliminated, therefore —dd* — max cTv s.t: b + Av > 0, and weak duality prescribes that — dd* < —d*, i.e., dd* > d*. Furthermore, Proposition 8.7 states that strong duality holds if problem (8.43) is feasible, in which case we have dd* = d*. Rewriting the "dual dual" problem in equivalent minimization form, and changing variable (—v) —x, we also observe that, in this special case of LP, we recover exactly the primal problem: dd* = min cT x s.t.: Ax < b, whence dd* — p*. To summarize, if the primal is feasible then strong duality holds, and we have p* = d*. Also, if the dual is feasible, then we have dd* = d*, but since the "dual dual" is equivalent to the primal we have dd* = p*, hence again p* = d*. This means that, in LP, strong duality holds between p* and d* when either the primal or the dual are feasible (equivalently, strong duality may fail only in the "pathological" situation when both the primal and the dual are unfeasible). 8.5.4 Recovering primal variables from dual variables There are various reasons for being interested in considering the dual of an optimization problem. A first reason arises when the primal is some difficult non-convex optimization problem: if we are able to determine explicitly the dual of this problem, since the dual is always 278 OPTIMIZATION MODELS convex (more precisely, a concave maximization), we can compute efficiently a lower bound rf* on the optimal primal value p*, and such a lower approximation of p* may be of interest in many practical situations. A second reason is related to the fact that the dual problem has as many variables as there are constraints in the primal. For example, consider an LP as in (8.42) with A G Rm,n, where n is very large compared to m, that is the number n of variables in x is much larger than the number m of constraints. Whenever strong duality holds, the optimal value p* can be computed by solving the dual problem in A, with a much smaller number m of decision variables, which may be advantageous in some cases. A third reason is due to the fact that certain dual problems may have a particular structure that makes them either solvable explicitly (e.g., analytically), or amenable to specific solution algorithms that exploit the special structure of the dual (such as, for instance, the fact that the dual constraints may be a simple restriction of the dual variables to the positive orthant). Once the dual-optimal solution is found, under strong duality, it is in some cases possible to recover from it a primal-optimal solution, as explained next. Assume that strong duality holds between problems (8.34) and (8.41), and that both the primal and the dual optimal values are attained at x* and (A*,i/*), respectively. Then, since p* = fo(x*), d* — g(A*,i/*), and p* = rf*, we have that /o(**) ^ g(A*,v*) = inf £(x,A*,v*) (8.44) < C{x, A*,v*), VigD. Since the last inequality holds for all x G V, it must hold also for x*, hence we obtain that fo(x*) = inf £(x,A*,v*) m q < C{x', A*,v*) = f0(x*) + E A*//(^*) + E WW) i—\ i=1 < /o(**)/ where the last inequality follows from the fact that x* is optimal, hence feasible, for the primal problem, therefore fj(x*) < 0, hj(x*) — 0, and A* is optimal, hence feasible, for the dual, therefore A* > 0, whereby each term A*fi(x*) is < 0, while each term v*h((x*) is zero. Observing the last chain of inequalities, since the first and the last terms are equal, we must conclude that all inequalities must actually hold with equality, that is fo(x*) = inf C{x, A*,v*) = C(x*,\*,v*), xeV - which has two consequences: 1. it must hold that A*/*(x*) = 0, for z = 1,..., m; a property known as complementary slackness; 2. the primal-optimal point x* is a minimizer of £(x, A*, i/*). We say that an inequality constraint is slack when it is satisfied with strict inequality at the optimum. Conversely, we say that the constraint is active when it is satisfied with equality at the optimum. The complementary slackness property prescribes (for a problem where strong duality holds) that a primal and the corresponding dual inequality cannot be slack simultaneously, that is, if fi(x*) < 0, then it must be that A* = 0, and if A* > 0, then it must be that /*(x*) = 0. By looking at the dual variables A* we can hence determine which of the primal constraints are saturated (active) at the optimum. The second consequence (i.e., the fact that x* is a minimizer of £(x,A*,i/*)) can, in some cases, be used to recover a primal-optimal variable from the dual-optimal variables. First observe that if the primal problem is convex, then £(x, A*,i/*) is also convex in x. Global minimizers of this function can then be determined by unconstrained minimization techniques. For instance, if £(x, A*,i/*) is differentiable, a necessary condition for x to be a global minimizer is determined by the zero-gradient condition Vx£(x, A*,i/*) -0, that is, m q Vx/oW + E **Vxfi(x) + E v*Vxhi{x) = 0. (8.45) i=1 1=1 However, £(x, A*,i/*) may have multiple global minimizers, and it is not guaranteed that any global minimizer of £ is a primal-optimal solution (what is guaranteed is that the primal-optimal solution x* is among the global minimizers of £). Therefore, in general, care should be exerted when recovering the primal-optimal solution from the dual. A particular case arises when C(x, A*,i/*) has an unique minimizer (which happens, for instance, when £(x, A*, v*) is strictly convex). In this case the unique minimizer x* of £ is either primal feasible, and hence it is the primal-optimal solution, or it is not primal feasible, and then we can conclude that the no primal-optimal solution exists. We summarize this fact in the following proposition. Proposition 8.8 (primal-optimal from dual-optimal solution) Assume strong duality holds for problem (8.34), assume the dual attains a dual-optimal solution (A*,i/*), and assume that £(x, A*,i/*) has a unique 280 optimization models minimizer x*. If x* is feasible for (8.34), then x* is primal optimal, otherwise the primal problem does not admit an optimal solution. Example 8.22 (Dual of minimum-norm solution of linear equations) Consider the problem of determining the minimum Euclidean norm solution of a system of underdetermined linear equations: p* = inin 11*11! s.t.: Ax = b, where A £ ]Rm'”, n > m, is full rank. The Lagrangian for this problem is £(xfv) = xTx + vT (Ax — b). This is a (strictly) convex quadratic function, which is minimized when Vx£(x, v) — 2x + ATv = 0, that is for the (unique) minimizer x*(v) = — (l/2)ATv, hence g(v) = inf£(x,v) = £(x*(v),v) = — (A AT )v — vTb. The dual problem is thus d* = max —^vT (AAT)v — vTb. v 4 Since AAT y 0, this is a (strictly) concave quadratic maximization problem, and the maximum is attained at the unique point i/* = -2 (AAT)~1b, d* - bT (AAT)~1b. Since the primal problem has only linear equality constraints and it is feasible (due to the assumption that A is full rank), Slater's conditions guarantee that strong duality holds, hence p* = d*. Since further the Lagrangian is strictly convex in x, we can recover the primal-optimal solution from the dual-optimal solution, as the unique minimizer of £(x,v*), which results in x* - x*(v*) = AT(AAT)~1b. 8.5.5 Karush-Kuhn-Tucker optimality conditions For an optimization problem with differentiable objective and constraint functions, for which strong duality holds, we can derive a set of necessary conditions for optimality, known as the Karush-Kuhn- Tucker (KKT) conditions. Consider problem (8.34)-(8.36) and its dual (8.41), let strong duality hold, and let x*, (A*, 1/*) be a primal- and a dual-optimal solution, respectively. Then, it holds that 1. (primal feasibility) /*(x*) < 0, i = 1,.. . ,ra, and h((x*) = 0, i = 1 2. (dual feasibility) A* > 0; 3. (complementary slackness) A*fi(x*) — 0, z = 1,...,m; 4. (Lagrangian stationarity) VxC(x, A*, i/*)x=x* = 0. The first two items are obvious, since x*, A* are primal-dual optimal, hence they must be primal-dual feasible. The last two items follow from the reasoning previously exposed from Eq. (8.44) to Eq. (8.43). We next show that, for a convex primal problem, the above conditions are actually also sufficient for optimality. Indeed, the first two conditions imply that x* is primal feasible, and A* is dual feasible. Further, since C(x, A*,t/*) is convex in x, the fourth condition states that x* is a global minimizer of £(x, A*,i/*), hence g(A*,i/*) = inf C(x, A*,i/*) = £(x*,A*,i/*) m q = Mx*) + ZVfi(x*) + LviWx*) i=l 1=1 where the last equality follows from the fact that /zz-(x*) = 0 (from primal feasibility), and that A*//(x*) = 0 (from complementary slackness). The above proves that the primal-dual feasible pair x*, (A*, v*) is optimal, since the corresponding duality gap is zero. 8.5.6 Sensitivity of the optimal value The optimal dual variables have an interesting interpretation as sensitivities of the optimal value p* to perturbations in the constraints. To make things more precise, consider the following perturbed version of the primal problem (8.34), p*(u,v) = mm f0(x) subject to: //(x) < U[, i = l,...,m, hi(x) = vif i = l,...,q, where the right-hand sides of the inequality and equality constraints have been modified from zero to u, v, respectively. For u = 0, v = 0 we recover the original, unperturbed, problem, i.e., p*(0,0) = p*. A positive Uj has the interpretation of relaxing the z-th inequality constraint, while negative U\ means that we tightened the z-th inequality constraint. We are interested in determining how perturbations in 282 OPTIMIZATION MODELS r -, r -, the constraints impact the optimal value of the problem. We here state some results without formal proof. First, it can be proved that, if the primal problem (8.34) is convex, then so is the function p* (u, v); see Section 8.5.8. Further, under the hypothesis that strong duality holds and that the dual optimum is attained, it holds that p*(u,v) > p*(0,0) - a condition that can also be expressed by saying that — (A*,p*) provides a subgradient for p*(u,v) at u = 0, v = 0. When the function p*(u, v) is differentiable (which happens under some technical conditions, e.g., when the Lagrangian is smooth and strictly convex), the optimal dual variables actually provide the derivatives of — p*(w, v) at u = 0, v = 0, that is dp* (0,0) dp* (0,0) A' d^T’ Vi dvt ' (8‘46) hence A*,p* have the interpretation of sensitivities of the optimal value p* to perturbation of the constraints. To better understand in practice the importance of this interpretation, consider a problem with inequality constraints only, assuming the conditions for (8.46) to hold are satisfied. Inequality constraints often have the interpretation of resource constraints, thus a perturbation Uj > 0 means that we relax the z-th constraint, i.e., that we allow for more resources. On the contrary, a perturbation U[ < 0 means that we tighten the constraint, i.e., that we cut the resources. Now, if A* = 0, then it means that, up to first-order approximation, the problem objective is insensitive to the perturbation (indeed, at least in non-degenerate problems, A* = 0 implies that //(**) < 0, that is the constraint is slack, which means that it is not resource-critical). Conversely, if A* > 0, then complementary slackness implies that fi(x*) = 0, which means that the constraint is active, i.e., resource critical. In such a situation, changing the resource level (i.e., perturbing the right-hand side of the constraint to a small level U\) will have an impact in the optimal value, and this perturbation is given (up to first order) by —A*wz-. 8.5.7 Max-min inequality and saddle points From (8.37) and (8.41) it follows that the dual optimal objective can be expressed as the following max-min value: d* = max min£(x, A,i/). A>0,1/ xev CONVEXITY 283 Also, notice that max C(x,Kv) = I /oW “ ~ °' A>o,v I +00 otherwise, which means that the primal optimal value p* can be expressed as the min-max value p* = minxez> max\>Q/V £(x, A,v). The weak duality relation d* < p* is then equivalent to the following min-max/max- min inequality max min£(x, A, v) < min max £(x, A, v), (8.47) A>0,v x£P “ xeV A>0,v while the strong duality condition d* = p* is equivalent to the fact that the above min-max and max-min values are the same: d* = p* 4^ max min£(x, A, v) = min max £(x, A, v). A>0,vx£T> xev \>0,v This means that, if strong duality holds, the max and min operations can be exchanged without changing the value of the problem. Also, if p* and d* are attained, we say that £(x, A, v) has a saddle point at the primal and dual optimal values (x*,A*,v*), see Figure 8.23. Equality (8.47) is actually a special case of a more general inequality, known as the max-min inequality, which can be stated as follows: for any function cp : x Rm —»> R and for any nonempty subsets XCR"Jcr it holds that sup inf (p(x,y) < inf sup<p(x,y). (8.48) yeY xeX xeX yeY To prove this relation, notice that for any y £ Y it holds that inf ^ inf sup <p(x,y), X£X X(zX. y(zy hence (8.48) follows by taking the sup over y £ Y of the left-hand side. It is interesting to study under which conditions equality holds in equation (8.48), hence the max and min operations can be safely exchanged. The following theorem, adapted from an original result due to M. Sion, provides a sufficient condition for max-min/min-max equality to hold. Theorem 8.8 (Minimax theorem) Let X C W1 be convex and compact, and let Y C Rm be convex. Let cp : X xY R be a function such that for every y £ Y, <p(',y) is convex and continuous over X, and for every x e X, q>(x,-) is concave and continuous over Y. Then equality holds in (8.48), i.e., sup mincp(x,y) = min sup<p(x,y). yeY xeX xeX yeY Figure 8.23 A function £ with a saddle point. 284 OPTIMIZATION MODELS We observe that the theorem can be easily reformulated for the case when it is the set Y, instead of X, that is compact. To see this fact it suffices to consider the function q>{y,x) = — (p{x,y) and apply Theorem 8.8. Indeed, if cp satisfies the hypotheses of the theorem, then <p(y, •) is concave over X for every y £ Y, and q>(-,x) is convex over Y for every x £ X. Therefore, if X, Y are convex and Y is compact, applying Theorem 8.8 we obtain supmin<p(i/, x) = minsup <p(y, x). (8.49) xeX V^Y VeY xex mmq>{y,x) = min-<p(x,y) = - max <p(x,y), yeY yeY yeY supf(y,x) = sup-<p(x,y) = - inf <p(x,y), xex xex xex then (8.49) becomes sup ( - max f(x,y) ) = min ( - inf cp{x,y) ) , V yeY ) yeY \ xeX J that is inf max(p(xfy) = max inf (pix,y), xex yeY T J yeY xex Jt which constitutes an alternative statement of Theorem 8.8, to be used when Y, instead of X, is compact. Example 8.23 (Square-root LASSO) As an illustrative example, we consider a version of least squares that is sometimes referred to as the "square-root LASSO" f =mm \\Ax — b||2 + A||x|||. Here, A £ Rm'n, b e lRm and the parameter A > 0 are given. The above problem is a useful variant of least squares, in which the ^i-norm penalty encourages sparsity (number of nonzeros) in the solution (we cover this topic in more detail in Section 9.6.2). We can express the problem as p* — min max uT (Ax — b) 4- vTx, x (ufv)eY where Y = {(ufv) : ||wH2 < 1, ||u||oo < A} is compact. Theorem 8.8 applies here, and exchanging the min and max leads to p* = max min uT (Ax — b) -f vTx. (u,v)eY x Note that the infimum over x of the term (ATu + v)Tx would be —00, unless the coefficient ATu -f v is zero, hence p* = ma x—bTu : v + ATu = 0, \\u\\2 < 1, IM|oo < A = max -bTu : \\u\\2 < 1, ||AT2/||oo < A. CONVEXITY 285 The above problem is a kind of dual to the original problem, which is useful for its analysis, and algorithm design. In particular, the above dual can be used to eliminate variables from the original problem, based on a simple evaluation of the norms of the columns ai,...,an of A. Let us write the problem as max — bT u : || w H2 < 1, \aj u\ < \, i — \, Now, if Iklh < a for some z, with aj the z-th column of A, then |aju\ < A, Vu : ||wH2 < 1. This means that the constraint \aju\ < A is not active, and that we can safely eliminate it from the dual.18 In other words, the optimal value p* remains the same if we remove the z-th column of A, or equivalently, set Xj = 0. The simple test IklU < a thus allows us to predict that x\ = 0 at optimum, without solving for p*. 8.5.8 Subgradients of the primal value function Consider a primal convex problem of the form p*(u) — min f0{x) (8.50) subject to: f(x) < Ui, i — 1,..., m, where u is a vector of parameters. We consider the case of inequality constraints only, leaving the extension to the case when also equality constraints are present as an easy exercise to the reader. We next show that p*(u) is a convex function of u, and that subgradients for this function can be obtained from the Lagrange multipliers associated with (8.50). The Lagrangian of (8.50) is £(x,A,u) = f0(x) + Y^Aifi(x) - ATu, 1=1 with dual function g(A, u) = minxex> C{x, A, u), and dual problem d*(u) =maxv(A, u) = maxmin£(x, A, u). (8.51) A>0 A>0 xev As already observed in the previous section, we have that max£(x,A,w) = max/0(x) + £ A,-(/;(x) - «,•) A>0 A>0 j | _ f fo(x) if fi(x) < uir i = l,...,m, 1 +00 otherwise, therefore, the primal problem can be written equivalently as p*(u) = min max C(x, A, u). xev a>o 18 See the section on elimination of constraints 8.3.4.6. 286 OPTIMIZATION MODELS Notice that C(x,\,u) is convex in (x,u), for each given A (C is actually convex in x and linear in u). Therefore, by the pointwise max rule (see Section 8.2.24), the function maxA>0 C(x,\,u) is still convex in (x,u). Then, applying the property of partial minimization (Section 8.2.2.5) allows us to conclude that the optimal primal value function p*(w) is convex in u. Considering now the identity m m Y A jfi(x) = min Y A iVi, s.t.: /,(x) < vir • "j 71 a II? Vfl *l->" we have that ,=1 ,=i g{A, u) = min f0{x) + Y \ifi{x) - ATu xeV i=1 = min min fo(x) + AT(v - u), s.t.:/,(x) < Vj, i = 1,.. .,m ueRm xeV = mj^p*(v)+^r(v-u)- Now, suppose that p* (u) is finite and strong duality holds, i.e. rf* (u) = p*(w), and let \u be the optimal Lagrange multiplier achieving the optimum in (8.51). Then we have f(u) = d*(u) =g(Au,u) = mm p* (v) + \^(v - u) < p* (0) + (v — u), W. This Tatter inequality proves that — Xu is a subgradient of p* at u, that is p*0) > P*(u) - Au(v - u), Mv. This property is exploited, for instance, in the primal decomposition methods described in Section 12.6.1.2. 8.5.9 Subgradients of the dual function Consider a primal problem of the form V* = min fo(x) subject to: f(x) <0, i = 1,..., m, with is dual rf* = maxA>og(A), where g(A) is the dual function g(A) = min C{x, A), with C{x, A) = fo(x) + YliL\ ^ifi(x). We again consider the case of inequality constraints only, the extension to the case of equality constraints being straightforward. CONVEXITY 287 We already know that g is a concave function (irrespective of convexity of /0, fi, i — 1,..., m); we next show that we can readily find a subgradient for this function (or, to be more precise, for the convex function —g). For a given A, let xA = argmin C(x, A), so that g(A) = C{x\, A), if such minimizer exists. For all z £ Rm, it holds that g(z) — min C(x,z) < C(xA,z) = f0(xA) + £ zifi(xA) i=1 m m = fo(xA) + £ Aifi(xA) + i - Ai)fi(xA) i=1 1 = 1 = g(A) + £(z/-A;)/,(xA). F(x) = [/i(x)/2(x) ■■■ fm{x))r, the previous inequality becomes g(z) <g(A) + F(xA)J(z-A), Vz, which means that F(x\) is a subgradient19 for g at A. Such a subgradient is thus obtained essentially at no additional cost, once we evaluate g(A) by minimizing C(x, A) with respect to x. Summarizing, we have that [/i(*\)/2(*a) • ' • fm(xA)]T € dg(A), for any xA that minimizes £(x, A) over x £ V. This property is exploited, for instance, in the dual decomposition methods described in Section 12.6.1.1. Further, if V is assumed to be compact and nonempty, /0, fi, i = 1,..., m are continuous, and the minimizer xA is unique for every A, then one can prove that g(A) is continuously differentiable, hence the above subgradient is unique and actually provides the gradient of g at A: Vg(A) = [fi(xA)f2(xA) • • • fm(xA)}T, VA. 8.6 Exercises Exercise 8.1 (Quadratic inequalities) Consider the set defined by the following inequalities: (x\ > X2 — 1 and *2 ^ 0) or (xi ^ x2 ~ 1 and X2 < 0). 19 There is a slight abuse of terminology here, since subgradients are defined for convex functions, while for concave functions we should more properly talk about supergradients, intending that a vector h is a supergradient for a concave function / if —h is a subgradient of —/. 288 OPTIMIZATION MODELS 1. Draw the set. Is it convex? 2. Show that it can be described as a single quadratic inequality of the form q(x) — xTAx + 2bTx + c < 0, for a matrix A = AT £ R2,2, b £ R2 and c £ R which you will determine. 3. What is the convex hull of this set? Exercise 8.2 (Closed functions and sets) Show that the indicator function lx of a convex set X is convex. Show that this function is closed whenever X is a closed set. Exercise 8.3 (Convexity of functions) 1. For x,y both positive scalars, show that ye*/y — max oc(x + y) — yoc • In a:. Use the above result to prove that the function / defined as yex/y if x > 0, y > 0, is convex. 2. Show that for r > 1, the function fr : R^ R, with values is concave. Hint: show that the Hessian of —/ takes the form xdiag (y) — zzT for appropriate vectors y > 0, z > 0, and scalar k > 0, and use Schur complements20 to prove that the Hessian is positive semidefinite. Exercise 8.4 (Some simple optimization problems) Solve the following optimization problems. Make sure to determine an optimal primal solution. 1. Show that, for given scalars a, [5, +00 otherwise, —00 if oc < 0, 2\[5\^ otherwise. 2. Show that for an arbitrary vector z £ Rm, CONVEXITY 289 3. Show that for an arbitrary vector z E Rm, we have m z2 \z\\l = min £ -j- : d > 0, = L d i=1 a< /=1 Exercise 8.5 (Minimizing a sum of logarithms) Consider the following problem: p* = max y ocihxxi s.t.: x > 0, lTx = c, where c > 0 and cc( > 0, i = 1,..., n. Problems of this form arise, for instance, in maximum-likelihood estimation of the transition probabilities of a discrete-time Markov chain. Determine in closed-form a minimizer, and show that the optimal objective value of this problem is p* = 0cln(c/a) + y OCilnOii, i=1 where cc = Exercise 8.6 (Monotonicity and locality) Consider the optimization problems (no assumption of convexity here) V\ = min f0(x), Pi = mi£ fo(x), * = ^3Aw- where ^ C ^2. 1. Prove that p\ > (be., enlarging the feasible set cannot worsen the optimal objective). 2. Prove that, if p\ — pz, then it holds that P13 = Pi* =* P23 = Pt 3. Assume that all problems above attain unique optimal solutions. Prove that, under such a hypothesis, if p\ — p\, then it holds that P23 = pi => ph = pi 290 OPTIMIZATION MODELS Exercise 8.7 (Some matrix norms) Let X = [x\,..., xm\ E R”,m, and p E [1, +00]. We consider the problem If the data is centered, that is, XI = 0, the above amounts to finding a direction of largest "deviation" from the origin, where deviation is measured using the Ip-norm. 1. Is <pp a (matrix) norm? 2. Solve the problem for p — 2. Find an optimal u. 3. Solve the problem for p = 00. Find an optimal u. 4. Show that where l/p + l/q = 1 (hence, (pv{X) depends only on XTX). Hint: you can use the fact that the norm dual to the Ip-norm is the lq- norm and vice versa, in the sense that, for any scalars p > 1, q > 1 with 1/p + l/q = 1, we have Exercise 8.8 (Norms of matrices with non-negative entries) Let X £ R+,m be a matrix with non-negative entries, and p,r £ [1, +00], with p >r. We consider the problem is concave when p > r. 2. Use the previous result to formulate an efficiently solvable convex problem that has (pp/r(X)r as optimal value. Exercise 8.9 (Magnitude least squares) For given n-vectors a\,..., am, we consider the problem (pp(X) = max ||XTtt||j7 : uTu = 1. <pp(X) = max ||Xu||2 : \\v\\q < 1, (pp,r(X) = max ||Xu||r : \\v\\p < 1. 1. Show that the function fx : R+ —>* R, with values 1. Is the problem convex? If so, can you formulate it as an ordinary least-squares problem? An LP? A QP? A QCQP? An SOCP? None of the above? Justify your answers precisely. 2. Show that the optimal value p* depends only on the matrix K = AT A, where A = [a\,.. .,am\ is the n x m matrix of data points (that is, if two different matrices A\,A2 satisfy AjA\ = AJA2, then the corresponding optimal values are the same). Exercise 8.10 (Eigenvalues and optimization) Given an n x n symmetric matrix Q, define w\ — are min xT Qx, and u\ = min xTQx, Ml2 = l Ml2 = l and for k = 1,2,..., n — 1: wjt+i = arg min xT Qx such that wj x — 0, i — 1,..., k, pjt+i = min xTQx such that wj x = 0, i — 1,..., k. 1^112 = 1 Using optimization principles and theory: 1. show that ]i\ < ^2 ^ < Y-n) 2. show that the vectors wi,...,wn are linearly independent, and form an orthonormal basis of R"; 3. show how ]i\ can be interpreted as a Lagrange multiplier, and that pi is the smallest eigenvalue of Q; 4. show how ]i2,..., ]in can also be interpreted as Lagrange multipliers. Hint: show that is the smallest eigenvalue of QW)c/ where Wk = [wM,... ,wn]• Exercise 8.11 (Block norm penalty) In this exercise we partition vectors x eW1 into p blocks x = (*i,..., Xp), with Xj G RW/, n\ H h np = n. Define the function p : R" —> R with values p(x) = £||*,.||2. 1. Prove that p is a norm. 2. Find a simple expression for the "dual norm," p*(x) = sup zTx. 3. What is the dual of the dual norm? 292 OPTIMIZATION MODELS 4. For a scalar A > 0, matrix A E Rm'n and vector y E Km, we consider the optimization problem p*{A) == min ||^x-y||2 + Ajo(x). Explain the practical effect of a high value of A on the solution. 3. For the problem above, show that A > Cmax(A;) implies that we can set — 0 at optimum. Flere, A[ E Rm>n' corresponds to the z-th block of columns in A, and (7max refers to the largest singular value. Linear, quadratic, and geometric models In this chapter we study three classes of optimization models. The first two classes (linear and quadratic programs) are characterized by the fact that the functions involved in the problem definition are either linear or quadratic. The third class of problems (geometric programs) can be viewed as an extension of linear programming problems, obtained under a suitable logarithmic transformation. A quadratic function in a vector of variables x = [x\ x2 • *' xn\ is a polynomial in x where the maximum degree of the monomials is equal to two. Such a degree-two polynomial can be written generic- ally as follows fo{x) = -xTHx + cTx + d (a quadratic function), (9.1) where d £ R is a scalar constant term, c £ Rn is a vector containing the coefficients of the terms of degree one, and H £ Kn'n is a symmetric matrix that contains the coefficients of the monomials of degree two. A linear function is of course a special case of a quadratic function, obtained considering H — 0: /o(x) = cT x + d (a linear function). Linear and quadratic models treated in this chapter take the form minimize fo(x) (9.2) subject to: Aeqx = beq, fi(x) <0, i = 1,.. .,m, where /0,... ,/m are either quadratic or linear functions. More precisely, we shall be mainly concerned with the case when these func- 294 OPTIMIZATION MODELS tions are convex, which happens if and only if the Hessian matrices of fi, i = 0,1,..., m, are positive semidefinite, see Example 8.5. 9.1 Unconstrained minimization of quadratic functions Let us start our discussion by examining the unconstrained case, that is problem (9.2) when no constraints are present, thus x is unrestricted: x E Rw. Consider first the linear case, fo(x) = cTx + d: It is an intuitive fact that p* = — 00 (i.e., the objective is unbounded below) whenever c ^ 0, and p* = d, otherwise. Indeed, for c ^0 one may take x = — occ, for any oc > 0 large at will, and drive /0 to —00. On the contrary, for c — 0 the function is actually constant and equal to d. We have therefore, for a linear function, Consider next the general quadratic case p* = min -xTHx + cTx + d. r xgW1 2 Several situations are possible, depending on the sign of the eigenvalues of H (which we recall are all real, since H is symmetric). (a) H has a negative eigenvalue A < 0. Then let u be the corresponding eigenvector and take x = ocu, with oc 7^ 0. Since Hu — Xu, we which tends to — 00, for oc —» 00. Hence, in this case /0 is unbounded below, i.e., p* = — 00. (b) All eigenvalues of H are non-negative: A; > 0, i — 1,... ,n. In this case /0 is convex. We know from Eq. (8.30) that the minima are characterized by the condition that the gradient of the function is zero, that is The minimizer should thus satisfy the system of linear equations Hx = —c, and the following sub-cases are possible. p* = min cTx + d. d if c = 0, —00 otherwise. V/0(x) = Hx + c = 0. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 295 (b.i) If c £ 71(H), then there is no minimizer. Indeed, this case implies that H is singular, hence it has an eigenvalue A = 0 and a corresponding eigenvector u. Thus, taking x = ecu we have that, along direction u, fo(x) — oc(cTu) + d. But cTu ^ 0 since u E J\f(H) and c £ 71(H) must have a nonzero component along J\f(H), therefore fo(x) is unbounded below. (b.2) If c E 71(H), then /0 has a finite global minimum value p* = 0.5 cTx* + d (the fact that the minimum is global is a consequence of convexity), which is attained at any minimizer x* such that Hx* = — c. All such minimizers are of the form x* = -Hfc + z, CeV(H). Since J\f(H)± TZ(HT) = 71(H), and since c E 71(H), it holds that cT£ = 0, for all £ E J\f(H), hence f - -l/2cTH+c + rf + l/2cT£ = —icTH+c + d. (c) All eigenvalues of H are positive: A/ > 0, i = 1,..., n. Then H is invertible, and there is a unique minimizer at x* = -H-V (9.4) with corresponding optimal objective value p* = -lcTH_1c + d. Summarizing, the minimum value p* of the quadratic function (9.1) is characterized as follows * _ f ~lcTHfc + d if H h 0 and c € n(H), ^ l—oo otherwise. Example 9.1 (Least squares) We have actually already encountered a special case of a quadratic minimization problem, in the context of the least- squares approximate solution of linear equations, see Section 6.3.1. Indeed, the LS problem amounts to minimizing /o(x) = ||^4x — y\\\, hence fo(x) = (Ax - y)T(Ax - y) = xJ AJAx - 2yJAx + yJy, which is a quadratic function in the standard form (9.1), with H = 2 (ATA), c = —2AJy, d = yTy. Note that /0 is always convex, since AT A X 0. The solution is given by the first-order optimality condition in (9.3). Since c E 11(H), an LS solution satisfying these conditions always exists. Further, if A is full 296 OPTIMIZATION MODELS rank, then AT A >- 0, then the solution is unique and it is given by the well-known formula x* = —H-1c = (AJA)-1ATy. Example 9.2 (Quadratic minimization under linear equality constraints) The linear equality-constrained problem minimize /o(x) subject to: Ax = b, with fo(x) = jXTHx + cTx + d, can be readily converted into unconstrained form by eliminating the equality constraints. To this end, we parameterize all x such that Ax = b as x = x + Nz, where x is one specific solution of Ax = b, N is a matrix containing by columns a basis for the nullspace of A, and z is a vector of free variables. Then we substitute x in /0 and obtain a problem which is unconstrained in the variable z: 1 - mm fo(z) — -zTHz + cTz + d, H = NTHN, c = NT{c + Hx), d = d + cTx+^xTHx. Another approach is also possible, when /0 is convex, by exploiting the necessary and sufficient conditions in (8.31). That is, we seek x £ Rn and A £ Rm such that Ax = b, V/o(x) = AT A. Y/oM = Hx + c, we see that the optimal solution of the equality-constrained convex quadratic problem can be obtained by solving the following system of linear equations in the variables x, A: ’ A 9.2 Geometry of linear and convex quadratic inequalities 9.2.1 Linear inequalities and polyhedra The set of points x £ R" satisfying a linear inequality aj x < bj is a closed half-space; the vector a, is normal to the boundary of the LINEAR, QUADRATIC, AND GEOMETRIC MODELS 297 half-space and points outwards, see Section 2.4.4.3. A collection of m linear inequalities aj x < hi, i m, (9.6) thus defines a region in ]Rm which is the intersection of m half-spaces and it is named a polyhedron. Notice that depending on the actual inequalities, this region can be unbounded, or bounded; in the latter case it is called a poly tope, see Figure 9.1. It is often convenient to group several linear inequalities using matrix notation: we define cr 1 . am . 3 * and then write inequalities (9.6) in the equivalent matrix form Ax < b, where inequality is intended component-wise. Figure 9.1 Example of polytopes in R2 The polytope on the left of Figure 9.1 is described by six linear (left) and in R3 (right), inequalities, each defining a half-space: -0.8732 ' ' 1 " x < A polytope represented as the intersection of half-spaces is called an <H-polytope. Any %-polytope also admits an equivalent representation as the convex hull of its vertices, in which case it is called a V- polytope. For example, the two-dimensional polytope in Figure 9.1 298 OPTIMIZATION MODELS can be represented as the convex hull of its vertices, given as columns in the following matrix: V = 0.1562 0.9127 0.8086 1.0338 -1.3895 -0.8782 -1.0580 -0.6358 0.6406 0.1386 2.3203 -0.8311 The intersection of a polytope P with a supporting hyperplane H is called a face of P, which is again a convex polytope. Vertices are indeed the faces of dimension 0; the faces of dimension 1 are the edges of P, and the faces of dimension dim P — 1 are called the facets, see Figure 9.1. The intersection of a polytope with an affine set (such as the set of points satisfying linear equalities Aeqx — beq) is still a polytope. Indeed, the set P = {x : Ax < b, AeqX = beq} can be expressed equivalently in "inequality-only" form as follows: P = {x : Ax < b, AeqX < beq, — AeqX < —beq}. Convex inequalities at vertices of polytopes. If a convex inequality is satisfied at the vertices of a polytope, then it is satisfied at all points of the polytope. More precisely, consider a family of functions f(x,9) : ]r« x ^ which are convex in the parameter 6 for each given x, and let 9^\..., 9^) be given points in Rm defining the vertices of a V-polytope © = co{0<1>,.../0W}. Then it holds that f(x,9^) < 0, i = 1,...,p & f(x,9) < 0, V0 e 0. (9.7) The implication from right to left is obvious, since if the inequality is satisfied at all points of the polytope 0, it is satisfied also at its vertices. The converse implication also follows easily from convexity: any 9 G 0 is written as a convex combination of the vertices d = £) Yjai = 1, «(• > o, i = 1,..., p, i=1 i f(x,d) = / < 0' where the first inequality is the Jensen's inequality for convex functions. This simple property has many useful applications in engineering design problems, where inequalities typically have the meaning LINEAR, QUADRATIC, AND GEOMETRIC MODELS 299 of specifications that need to be satisfied by the design. In such a context, if one needs to guarantee that some specification (expressed as a convex inequality) holds over a whole polytopic region for a parameter 0, then it suffices to insure that the specification is met at a finite number of points, namely at the vertices of the polytope. Example 9.3 (The probability simplex) The probability simplex is the polytope defined as P = {xe1Rn : x> 0, = The name suggests the fact that any x in the probability simplex has a natural interpretation of a discrete probability distribution, i.e. the xz-s are non-negative and they sum up to one. The probability simplex in Rn has n vertices, which correspond to the standard orthonormal basis vectors for Rn, that is P = co{<?<1>,..., <?(">}, where eW is an n-vector whose entries are all zeros, except for a one in position i. The probability simplex in R3 is depicted in Figure 9.2. Example 9.4 (The £i~norm ball) The fi-norm ball is the set {x c Rn : ||x111 < 1}/ that is the set where 1**1 — set indeed a polytope, since the previous inequality can be verified to be equivalent to a collection of 2n linear inequalities. To see this fact, consider sign variables S; (E {—1,1}, i = 1,..., n. Then n n Ukfl = max i=i s,€{-u}i=1 Therefore IMIi < 1 if and only if maxs.€{_1/:l}. Y%=\ sixi ^ 1/ which is in turn equivalent to requiring that sixi < 1/ f°r all si £ {—1/1}/ i = 1/ • • • fm. This is a collection of 2n linear inequalities, corresponding to all the possible combinations of n sign variables. For example, for n — 3 the t\- norm ball is the octahedron depicted in Figure 9.3. 9.2.2 Quadratic inequalities and ellipsoids Consider the zero-level set of a quadratic inequality, i.e., the set of x G !Rn such that fo(x) = ^xTHx + cTx + d < 0. (9.8) Figure 9.2 The probability simplex (x G R3 : x > 0,lTx = 1}. 300 OPTIMIZATION MODELS This set is convex if H ^ 0, in which case it is a (possibly unbounded) ellipsoid. When H >- 0 and d < (1/4)cTH~1c, then this zero-level set is a bounded and full-dimensional ellipsoid with center in x = — whence (9.8) is rewritten as /00) = - x)TH(x - x) - |CTH_1c + d< 0. (9.9) A bounded, full-dimensional ellipsoid is also usually represented in the form £ = {x : (x — x)TP-1(x — x) < 1}, P >- 0, (9-io) where P is the shape matrix of the ellipsoid. Clearly, this representation is analogous to (9.8) and (9.9), with H = 2P_1, cTH~1c/4-d = l. The eigenvectors U{ of P define the directions of the semi-axes of the ellipsoid; the lengths of the semi-axes are given by y/Xl, where A; > 0, i = 1,..., n, are the eigenvalues of P, see Figure 9.4. The trace of P thus measures the sum of the squared semi-axis lengths. The volume of the ellipsoid £ is proportional to the square-root of the determinant of P: vol (£) = «»(detP)1/2, cc„ = nY{n/1y where T is the Gamma function (cx„ denotes the volume of the unit Euclidean ball in R"). When H >z 0, if H has a zero eigenvalue, then the ellipsoid is unbounded along the directions of the eigenvectors associated with the zero eigenvalues. The zero-level set can in this case assume a variety of geometrical shapes, such as elliptic paraboloid, elliptic cylinder, parabolic cylinder, etc., see Figure 9.5. An alternative representation of a (possibly unbounded) ellipsoid is in term of the inverse image of a unit ball, i.e. £ - {x 6 R" : || Ax - b\\l < 1}, A € (9.11) which is equivalent to (9.8), with H = 2A ' A, cT = —2bT A, d = bTb- 1. If A is full column rank and n < m, then A 1 A >- 0, and (9.11) represents a bounded ellipsoid. If A is symmetric and positive definite, then (9.11) represents a bounded ellipsoid with center x = A~1b and volume proportional to det A-1. A further representation of a bounded (and possibly flat) ellipsoid is in terms of the image of the unit ball under an affine transformation, that is £ = {xeRn : x = Bz + x: ||z||2 < 1},. B e !Rn'm. (9.12) Figure 9.4 A two-dimensional ellipsoid. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 30I Such an ellipsoid is flat (degenerate, or not fully dimensional) whenever 1Z{B) 7^ Rn. If B is full row rank and n < m, then (9.12) is equivalent to (9.10) and it represents a bounded full-dimensional ellipsoid with shape matrix P — BBT y 0. Example 9.5 (Zero-level sets of convex quadratic functions) We give three simple examples of the geometrical shape of the zero-level sets of a convex quadratic function. Consider equation (9.8), with H = 2/9 0 0 0 1/2 0 0 0 2 [0 0 0], d= -1, then the zero-level set is a bounded ellipsoid, see Figure 9.5. If we take instead " 1 0 0 1 0 2 0 , cT = [0 0 -1], = 0, then the zero-level set is the epigraph of an elliptic paraboloid, see Figure 9.5. Further, for H = H = CT = [0 0 - 1], 0, the zero-level set is the epigraph of a parabolic cylinder, see again Figure 9.5. Notice finally that if H = 0, then the zero-level set is a half-space. The previous discussion suggests that the family of sets of x £ Rn satisfying a collection of m convex quadratic inequalities -xTHi% + cjx + d{ < 0, Hi >z 0, i = 1,..., m includes the family of polyhedra and polytopes, but it is much richer; Figure 9.6 shows an example in R2. Figure 9.6 Intersection of the feasible sets of three quadratic inequalities in IR2. Figure 9.5 Examples of zero-level sets of convex quadratic functions in R3. From left to right: a compact ellipsoid, an elliptic paraboloid, and a parabolic cylinder. 302 OPTIMIZATION MODELS 9.3 Linear programs A linear optimization problem (or, linear program, LP) is one of the standard form (9.2), where every function /o,/i, •. • ,/m is affine. Thus, the feasible set of an LP is a polyhedron. Linear optimization problems admits several standard forms. One comes directly from (9.2): p* = min cT x -f d S.t.: Ae qX — ^eq/ Ax < b, where the inequalities are understood component-wise; we shall denote this form as the inequality form of the LP. The constant term d in the objective function is, of course, immaterial: it offsets the value of the objective but it has no influence on the minimizer. Remark 9.1 Conic form ofLP. Another standard form, frequently used in several off-the-shelf algorithms for LP, is the so-called conic form: p* = min cTx + d S.t.: Aq qX — be q, X > 0. Clearly, the conic form is a special case of the inequality form, obtained by taking A = —I, and b — 0. However, we can also go the other way, and reformulate any standard inequality-form LP into conic form. To this end, consider an inequality-form LP, let x = x+ — x_, where x+ — max(x,0), X- = max(—x,0) are respectively the positive and the negative part of x, and let £ = b — Ax. Then the inequality constraints can be written as £ > 0, and it must be that x+ > 0, x_ > 0. Thus introducing the augmented variable z = [x+ x_ f], we write the problem as follows: min [cT -cT0]Tz + d z S.t.: Aeq Aeq 0 J Z = beq, A —A I j Z = b, Z > 0, which is an LP in the variable z, in conic standard form. Remark 9.2 Geometric interpretation ofLP. The set of points that satisfy the constraints of an LP (i.e., the feasible set) is a polyhedron (or a polytope, when it is bounded): X — {x G Rn : AeqX = beq, Ax < b}. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 303 Let Xj G X be a feasible point. With such point is associated the objective level cT xj (from now on, we assume without loss of generality that d = 0). A point Xj G X is an optimal point, hence a solution of our LP, if and only if there is no other point x G X with lower objective, that is: Xj G X is optimal for LP cTx > cTXj, Vy G X, see also the discussion in Section 8.4. Vice versa, the objective can be improved if one can find x G X such that cT(x — Xj) < 0. Geometrically, this condition means that there exists a point x in the intersection of the feasible set X and the open half-space {x : cT(x — Xj) < 0}, i.e., that we can move away from Xj in a direction that forms a negative inner product with direction c (descent direction), while maintaining feasibility. At an optimal point x* there is no feasible descent direction, see Figure 9.7. The geometric interpretation suggests that the following situations may arise in an LP. • If the feasible set is empty (i.e., the linear equalities and inequalities have an empty intersection), then there is no feasible and hence no optimal solution; we assume in this case by convention that the optimal objective is p* — +00. • If the feasible set is nonempty and bounded, then the LP attains an optimal solution, and the optimal objective value p* is finite. In this case, any optimal solution x* is on a vertex, edge or facet of the feasible polytope. In particular, the optimal solution is unique if the optimal cost hyperplane {x : cT x = p*} intersects the feasible polytope only at a vertex. • If the feasible set is nonempty but unbounded, then the LP may or may not attain an optimal solution, depending on the cost direction c, and there exist directions c such that the LP is unbounded below, i.e. p* = —00 and the solution x* "drifts" to infinity, see Figure 9.8. Figure 9.7 LP: move as far as possible in the direction —c, while maintaining feasibility. At the optimum x* there are no feasible moves that improve the objective. Figure 9.8 An LP with unbounded optimal objective. 304 OPTIMIZATION MODELS Example 9.6 (A linear program in two dimensions) Consider the optimization problem min 3xi + 1.5x2 subject to: — 1 < x\ < 2, 0 < X2 < 3. The problem is an LP, and it can be put in standard inequality form: min : min 3xi + 1.5x2 subject to: — x\ <1, X\ <2, — x2 < 0, x2 < 3 x хеш2 or, using matrix notation, min* cTx subject to Ax < b, with " -1 " 1 " The level curves (curves of constant value) of the objective function are shown, along with the feasible set, in Figure 9.9. The level curves are straight lines orthogonal to the objective vector, cT = [3 1.5]. The problem amounts to finding the smallest value of p such that p = cTx for some feasible x. The optimal point is x* = [—1 0]T, and the optimal objective value is p* = —3. Example 9.7 (A drug production problem.1) A company produces two kinds of drugs, Drug I and Drug II, containing a specific active agent A, which is extracted from raw materials purchased on the market. There are two kinds of raw material, Raw I and Raw II, which can be used as sources of the active agent. The related production, cost, and resource data are given in Tables 9.1-9.3. The goal is to find the production plan which maximizes the profit for the company. Figure 9.9 A toy LP. 1 Problem taken from A. Ben-Tal, A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, 2001. LP formulation. Let us denote by XDrugi, xorugn the amounts respectively of Drug I and Drug II, per 1000 packs produced. Let XRawi, XRawn denote the amounts (in Kg) of raw materials to be purchased. According to the problem data, the objective to be minimized in this problem has the form /oW “ /costs (*) —/income(*)/ /costs (*) — 100xRawi + 199.90xRawn + 700xorugi T 800xQrugjj represents the purchasing and operational costs, and /income (■*•) = 6500xorugi + 7100XQrugjj represents the income from selling the drugs. Further, we have a total of five inequality constraints, and additional sign constraints on the variables. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 3O5 Drug I Drug II Selling price (USD) per 1000 packs Content of agent A (grams) per 1000 packs Manpower required (hrs) per 1000 packs Equipment required (hrs) per 1000 packs Operational costs (USD) per 1000 packs Purch. price (USD/kg) Agent content (g/kg) Raw I Raw II Budget (USD) Manpw. (hrs) Equip, (hrs) storage cap. (kg) • Balance of active agent: 0-01xRawi -f- 0.02xRawn — 0.5xj)rugi — 0.6x]3rUgjj > 0. This constraint says that the amount of raw material must be enough to produce the drugs. • Storage constraint: *RawI + XRawII < 1000. This constraint says that the capacity of storage for the raw materials is limited. • Manpower constraint: 90.0xorugl d- lOO-O^Drugll ^ 2000, which expresses the fact that the resources in manpower are limited: we cannot allocate more than 2,000 hours to the project. • Equipment constraint: 40.0xQrugi + 50.0xE)rugn < 800. This says that the resources in equipment are limited. • Budget constraint: 100.0xRawi -f- 199.90xRawn + 700xDrugi + 800x£)rUgn < 100,000. This limits the total budget. • Sign constraints: •*-RawI ^ 0, XRawII ^ 0/ ^Drugl ^ 0, ^Drugll ^ 0. Solving this problem (e.g., via the Matlab I in prog command, or via CVX) we obtain the following optimal value and a corresponding optimal solution: p* = -14085.13, 4awI = 0, 4awII = 438.789, 4>mgI = 17.552, 4rugII = 0. Table 9.1 Drug production data. Table 9.2 Contents of raw materials. Table 9.3 Resources. 306 OPTIMIZATION MODELS Note that both the budget and the balance constraints are active (i.e., they hold with equality at the optimum), which means that the production process utilizes the entire budget and the full amount of active agent contained in the raw materials. The solution promises the company a quite respectable profit of about 14%. 9.3.1 LPs and polyhedral functions We say that a function / : Rn R is polyhedral if its epigraph is a polyhedron, that is if epi / = {(*,f) € IR"+1 : f(x) < f) can be represented as epi f={ (x,t) € R"+1 : C < d > , (9.13) for some matrix C G Rm'n+1 and vector d G Rm. Polyhedral functions include in particular functions that can be expressed as a maximum of a finite number of affine functions: f(x) — max ajx + hi, where a\ G Rn, h\ G 1R, i = 1,..., m. Observing that for any family of functions fa(x) parameterized by oc G A it holds that max/a(x) < t <=> foc{x) < t, Moc G A, we see that the epigraph of / epif = i (x,t) G : max ajx + ty < t\ I J can be expressed as the polyhedron epi f = j(x,f) G R”+1 : ajx-1- bj < t, i = 1,.. .,m|. Example 9.8 (The i^-norm function) The £00-norm function /(x) = IWU x G R”, is polyhedral since it can be written as the maximum of 2n affine functions: f(x)= max max(xj,—xi). Polyhedral functions also include functions that can be expressed as a sum of functions which are themselves maxima of affine functions, /(*) = T max aJjX + bu, LINEAR, QUADRATIC, AND GEOMETRIC MODELS 307 for given vectors a/y G and scalars b{j. Indeed, the condition (x, t) G epi / is equivalent to the existence of a vector u £ W such that 13uj < ^ ' a7jx + bij <Uj,i = l,...,m;j = l,...,q, (9.14) hence, epi / is the projection (on the space of (x, t)-variables) of a polyhedron, which is itself a polyhedron. Example 9.9 (The £i~norm function) The ^-norm function f(x) — ||x||i, x G R”, is polyhedral since it can be written as the sum of maxima of affine functions: f(x)= £ ma x{xi,~xi). Example 9.10 (Sum of largest components in a vector) For x G R”, the sum of the k-largest components in x is written as Sk(x) = £ *[,•], where x^ is the z-th largest component of x. The function s^(x) is convex, since it is the pointwise maximum of linear (hence, convex) functions: sk(x) = max */,+... + Xh. (9.15) The functions inside the maximum are linear, hence the function is polyhedral. Notice that there are Cn^ = (£) linear functions2 involved in the maximum, each obtained by choosing a particular subset of k indices in {1,..., n}. For example, with k — 2 and n — 4, S2(x) = max(xi + *2/*2 + X^,x^ + X\,X\ + X4, *2 + X4,X3 -I- X4). Hence, to represent the constraint s^(x) < a based on the above repre¬ sentation, we need to consider six constraints *1 + *2 < Ci' *2 + *3 < Oi, X3 + X\ < Oi, X\ + *4 < Oi, X2 + X4 < Oi, X3 -I- X4 < Oi. The number of constraints grows very quickly with n, k. For instance, for n = 100, k = 10, we would need more than 1013 linear constraints! A more efficient representation is possible based on the following expression (which we next prove) S]c(x) = min kt +Y2 max(0, jq — t). (9-i6) 1 i=1 Using this form, the constraint s^(x) < a can be expressed as follows: there exist a scalar t and an n-vector u such that kt + YlUi — °i, U — Q' Ui — Xi — i=h---,n. 2 The binomial coefficient (J) denotes the number of distinct k-element subsets that can be formed from a set containing n elements. It holds that \kj k\(n-k)\' where ! denotes the factorial of an integer. 308 OPTIMIZATION MODELS The above expression shows that s* is convex, since the set of points (x, a) such that the above holds for some u, t is a polyhedron. The representation (9.16) is much more efficient than (9.15), as it involves a polyhedron with 2n + 1 constraints. The price to pay is a moderate increase in the number of variables, which is now 2n + 1 instead of n. The lesson here is that a polyhedron in an ^-dimensional space, with an exponential number of facets, can be represented as a polyhedron in a higher dimensional space with a moderate number of facets. By adding just a few dimensions to the problem we are able to deal (implicitly) with a very high number of constraints. We next provide a proof for the expression (9.16). We can assume without loss of generality that the elements of x are in decreasing order: x\,...,xn. Then, sjt(x) = x\ + b x^. Now choose t such that x* > t > Xjt+i- We have n k k fcf + E max(0, x{ - t) = kt + £(x; - f) = E *>' = sk{x). 2 = 1 2 = 1 2 = 1 Since Sfr(x) is attained for a particular choice of t, we obtain that s^x) is bounded below by the minimum over t: s/c(x) > min ikt+J2 max(0, X[ — t) On the other hand, for every t, we have k k - sk(x) = f^(xj - t +1) = kt + f^(xi - t) 2 = 1 2 = 1 k n < kt -j- max(0, X{ — t) < kt + max(0, Xj — t). 2=1 2=1 Since the upper bound above is valid for every t, it remains valid when minimizing over t, and we have Sfc(x) < min kt +YÌ, max(0, Xj — t), t : 1 2 = 1 which concludes the proof. 9.3.1.1 Minimization of polyhedral functions. The problem of minimizing a polyhedral function, under linear equality or inequality constraints, can be cast as an LP. Indeed, consider the problem min f(x) s.t: Ax < b, with / polyhedral. We formally cast this problem as min t s.t.: Ax < b, (x, t) £ epi /. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 309 Since epi / is a polyhedron, it can be expressed as in (9.13), hence the problem above is an LP. Notice, however, that explicit representation of the LP in a standard form may require the introduction of additional slack variables, which are needed for representation of the epigraph, as was done for instance in (9.14). Example 9.11 (£\ and loo-norm regression problems) The concept of an ap¬ proximate solution of an inconsistent system of linear equations Ax — b has been introduced in Section 6.3.1 in the context of least squares, where we seek a vector x that minimizes the £2~norm of the residual vector r = Ax — b. Depending on the context, however, it may be sensible to seek approximate solutions that minimize other norms of the residual. We next show that two frequently encountered cases, namely those where the £co or the £\ norm is employed as a measure of residual error, can be cast as linear programs and hence solved efficiently using LP numerical codes. The choice of the norm reflects the sensitivity of the solution to outliers in the data and on the distribution of the residuals. Consider for example a regression problem with random data A E ]R1000'1oo^ b e R1000: the £x norm tends to encourage sparsity (number of nonzeros) of the residual vector, whereas the £co norm tends to equalize the magnitude of the residuals, see Figure 9.10. histogram of Ax — b at minimum of j|Ac — 6||i histogram of Ax — b at minimum of ||Ac — 6||oo Consider first the minimization of the £00 residual: mm || Ax - fell*,, A E Rm'",fe E Rm. (9.17) The problem may be first rewritten in epigraphic form, adding a slack scalar variable t: min t s.t.: || Ax — fe||00 < t, Figure 9.10 Histograms of £\ (top) and £00 (bottom) residuals on a randomly generated problem with A E ^1000,100 310 OPTIMIZATION MODELS and then we observe that \\Ax-bWoo < t max \ajx-bj\ < t \ajx — bj\ < t, z = Hence, problem (9.17) is equivalent to the following LP in variables x e Rn and t e 3R: min t s.t.: ajx — bj<t, z = l,...,m, ajx — b[ > —t, i = 1,..., m. Similarly, for the minimization of the t\ residual, we have that mm ||Ax - b\\lf A <E Rm'n,b <E Rm. is equivalent to a problem with a vector u of additional slack variables u e Rm: min ui s-h*. |ajx — bj\ < Uj, i — 1,...,m, X/U i=l which is in turn easily cast as a standard LP as follows: min 1 Tu s.t.: ajx — bj < Uj, i = 1,..., m, ajx — bj > —Uj, i = 1,..., m. Finally, note that also mixed £oo/£\ regression problems can be cast in LP form. For instance, an £&> regression with an £\ regularization term mm ||Ax-&||oo + 7ll*lli is equivalent to the following optimization problem in variables x6R” and slacks u e R", t G R: min t + 7 E ui s.t.: \ajx — bi\ < t, i = \Xi\<Ui, i = \,...,n. This latter problem is in turn readily cast into standard LP format. 9.3.2 LP duality Consider a "primal" LP in inequality form: p* = min cT x s.t.: Ae qX — be q, Ax < b. The Lagrangian for this problem is C{x,K,y) — cT x + AT (Ax — b) + yT (Aeqx — be q) = (c + ATA + Ajqy)Tx - ATb - yTbeq. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 311 The dual function g(\,p) is obtained by minimizing C(x,\,y) w.r.t. x. But £(x, A) is affine in x, hence g(A, p) is unbounded below, unless the vector coefficient of x is zero (i.e., c + ATA -1- Ajqp = 0), and it is equal to —ATb — pTbeq otherwise. That is, g(\,fl) = S -XTb~HTbeq if C + ATA + Ajqfi = 0, l—oo otherwise. The dual problem then amounts to maximizing g(h,\i) over A > 0 and p. Clearly, if g(A, p) = — oo, then there is nothing to maximize, therefore in the dual problem we make explicit the condition that we maximize over those A, p for which g( A, p) is not identically equal to —oo. This results in the following explicit dual problem formulation: rf* = max —A Tb — pTbe q A,p s.t.: c + ATA -f- = 0, A > 0, By changing the sign of the objective, we rewrite the dual in minimization form as follows —rf* = min bT A + bJQ}i A .]i 4 s.t.: ATA -f- -f- c — 0, A > 0, which is again an LP in the variables (A,^). From Proposition 8.7, and from a discussion analogous to that presented in Example 8.21, we have that strong duality holds between the primal and dual LP (i.e., p* = d*), provided that at least one of the two problems is feasible. 9.4 Quadratic programs A quadratic optimization problem (or quadratic program, QP) is one of the standard form (9.2), where /0 is a quadratic function (9.1) and are affine functions. Thus, the feasible set of QP is a polyhedron (as in LP), but the objective is quadratic, rather than linear. If the H matrix in (9.1) is positive semidefinite, then we have a convex QP. The standard form of a QP is thus p* = min \xT Hx + cTx x 2 S.t.: AeqX = be q, Ax < b. 312 OPTIMIZATION MODELS Example 9.12 (a QP in two variables) Consider the problem i [x\ — X\X2 + 2x^j — 3xi — 1.5x2/ s.t.: — 1 < X\ < 2, 0 < X2 < 3. This is a QP that can be cast in standard form with H = , cT - [-3 -1.5], A = r — - ■ 1 We can inspect that the eigenvalues of H are non-negative H = UAU 1, U = hence the considered QP is a convex one. The optimal solution of the QP is (in general, we need to use a QP solver to find this) x* = [2 1.25]T, and the optimal objective value is p* = —5.5625, see Figure 9.11. " -0.3827 0.9239 " , A = " 2.2071 Figure 9.11 Level curves of the objective function and optimal solution of the QP (9.18). 9.4.1 Constrained least squares Quadratic programs arise naturally from least-squares problems when linear equality or inequality constraints need be enforced on the decision variables. This is indeed the case in many situations where the variables have physical significance (they represent for instance lengths, volumes, concentrations, inertias, relative proportions, etc.) and constraints such as lower and upper limits on the variable values are naturally introduced. A linearly-constrained LS problem takes the form LINEAR, QUADRATIC, AND GEOMETRIC MODELS 313 f = min ||R* — y||| S.t.I Ae qX — be q, Ax < b. This is clearly a convex QP, having objective (neglecting a constant with H = 2RTR >z 0, cT = -2yTR. Example 9.13 (Tracking a financial index) As an applicative example, consider a financial portfolio design problem, where the entries of x G R" represent the fractions of an investor's total wealth invested in each of n different assets, and where r(k) £ Rn represents the vector of simple returns of the component assets during the k-th period of time [(k — 1)A, kA\, where A is a fixed duration, e.g., one month; see also Example 2.6. Suppose that the component of the vector y £ RT represents the return of some target financial index over the k-th period, for k — 1,..., T: the so-called index tracking problem is to construct a portfolio x so as to track as close as possible the "benchmark" index returns y. Since the vector of portfolio returns over the considered time horizon is we may seek for the portfolio x with minimum LS tracking error, by minimizing \\Rx — y\\^. However, we need to take into account the fact that the elements of x represent relative weights, that is they are nonnegative and they sum up to one. The index tracking problem is therefore a constrained LS problem, thus a convex QP: As a numerical example, we consider again the financial data previously used in Example 3.3, consisting of 169 monthly return data of six indices: the MSCI US index, the MSCI EUR index, the MSCI JAP index, the MSCI PACIFIC index, the MSCI BOT liquidity index, and the MSCI WORLD index, as shown in Figure 3.6. The problem is to track the target index MSCI WORLD, using a portfolio composed of the other five indices. Solving the convex QP in (9.19) with this data, we obtain the optimal portfolio composition term d = ||y||2) z = Rx, R = € 1RT'", p* = mm \\Rx-y\\l s.t.: 1T x = 1 x > 0. x* = [0.5138 0.3077 0.0985 0.0374 0.0426]T, 314 OPTIMIZATION MODELS and hence the optimal-tracking portfolio return sequence z* = Rx*, with tracking error \\Rx* — y\\\ — 2.6102 x 10-4. Figure 9.12 shows the result of investing one euro into each of the component indices and benchmark index, and into the tracking-optimal portfolio. As expected, the value sequence generated by the optimal portfolio is the closest one to the target index. 9.4.2 Quadratic constrained quadratic programs A generalization of the QP model is obtained by allowing quadratic (rather than merely linear) equality and inequality constraints. A quadratic constrained quadratic program (QCQP) thus takes the form p* = min xT Hqx + 2cJ x + do (9*20) s.t.: xTH(X+ 2cJx + d( < 0, / £ X, xTHjX -J- 2cy~x -\- dj = 0, j £ £, where X, £ denote the index sets relative to inequality constraints and equality constraints, respectively. A QCQP is convex if and only if Ho t 0, Hj y 0 for i £ X, and Hy = 0 for ; £ £. In other words, a QCQP is convex whenever the functions describing the objective and the inequality constraints are convex quadratic, and all the equality constraints are actually affine. Figure 9.12 Light gray lines show the time value of component indices; the solid black line is the value of the target index, and the dashed black line is the value of the tracking-optimal portfolio. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 315 Example 9.14 (Minimizing a linear function under an ellipsoidal constraint) Consider the following special case of a QCQP, where a linear objective is minimized under an ellipsoidal constraint: min cT x s.t.: (x — x)TP~l(x — x) < 1, where P y 0. Here, the feasible set X is the ellipsoid {x : (x — x)TP_1 (x — x) < 1}/ and the geometrical interpretation of the problem is to move as far as possible in the direction — c while remaining in the ellipsoid, see Figure 9.13(a). This optimization problem admits a "closed form" solution. To obtain this solution we first perform the following change of variables: z = E~1(x — x)/ i.e., x — Ezyx, where P = E2 is the symmetric square-root factorization of P. Then the original problem transforms in the new variable z to min cTz -j- d s.t.: zTz < 1, where c = Ec, and d — cTx is a constant term. In the new variable z the problem has a simpler interpretation: move as far as possible in the direction —c, maintaining a distance from the origin at most one, see Figure 9.13(b). Clearly, the optimal solution is simply a vector lying in direction — c having unit norm, that is * _ c _ Ec z = Going back to the original x variables we then obtain the optimal solution Figure 9.13 Minimization of a linear objective under an ellipsoidal constraint. x* = Ez*+x = == + x. 3l6 OPTIMIZATION MODELS While convex QCQP are relatively "easy" to solve, non-convex QCQPs are a class of very hard optimization problems. This fact should not be surprising, since non-convex QCQPs represent a bridge between continuous and discrete optimization. For instance, a quadratic equality constraint such as xj — 1 implies that X{ G {—1,1}; similarly, the constraint xj = x* implies that x\ G {0,1}, i.e., that x; is a binary variable. Consequently, for instance, hard combinatorial graph partitioning problems can be cast in non-convex QCQP format, see the next example. Example 9.15 (The max-cut problem) The so-called max-cut problem is defined as follows: given a graph G = (V,E), where V — {1,..., n} is the set of vertices and E is the set of edges, let W(j = wjj > 0 be given weights defined on the edges (i,j) G E, with zv(j = 0 if (/,;) ^ E. Then, the max- cut problem amounts to determining a subset S C V that maximizes the sum of weights over those edges that have one end point in S and the other in the complementary set S = V \ S. In order to model the problem in the QCQP setting, we define node variables x\, i = 1,..., n such that Xi = 1 if i G S and Xj = —1 if i G S. Then the quantity (1 — XjXj)/2 is equal to zero if i,j are in the same subset of vertices, and it is equal to one otherwise. Therefore, the max-cut problem is equivalent to the following non-convex QCQP P* =min ^^(l-x/xy) s.t.: xj = 1, i = l,...,n. Convex approximations, or relaxations, of non-convex quadratic programs are discussed in Section 11.3.3. 9.4.3 Quadratic programming duality We next derive the explicit dual for some special classes of quadratic programming models. 9.4.3.1 Dual of convex QP. We consider first the case of a primal convex QP with linear inequality constraints: p* = min xTHox + 2Cq x -Edo s.t: Ax < b, with Hq >z 0. The Lagrangian for this problem is £(x, A) — x^H()X ~b 2cq x ~b do -b (Ax — 2?) = ~b (2cq "b -A^A}^x ~b do — b~^A. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 317 According to point (b) in Section 9.1 there are now two possibilities. If 2cq + ATA is in the range of Hq, that is, if there exists z such that Hqz = 2 eg + A, then £(x, A) has a finite minimum value (w.r.t. x), given by g(A) = -^(2c0 + AT\)TH^2c0 + ATA)+d0-bT\. If instead 2co + AT A is not in the range of Ho, then g(A) — —00. The dual problem can thus be written as follows: d* = max -l(2c0 + ATX)TH^(2c0 + ATA)+d0-bT\ A ,2 4 s.t.: H0z = 2co “h A^A, A > 0. Substituting the equality constraint in the objective, and observing that H0H0+H0 = H0, we may simplify this problem to d* = max — jZTHoz + rfo ~ K,z 4 s.t.: H0z = 2co “h ATA, A > 0, which is again a convex QP. According to Proposition 8.7, the strong duality condition p* = rf* holds whenever the primal problem is feasible. Notice that if Ho >- 0, then the dual problem simply becomes d* = max -l(2c0 + AT\)THQ1(2c0 + AT\)+do-bT\ A 4 s.t.: A > 0. 9.4.3.2 Dual of convex QCQP. Consider a primal convex QCQP of the form (9.20) where, for simplicity, we assume only inequality constraints are present, specifically p* = min xTHox + 2c J x + dn s.t.: xTHjX + 2c^x + di < 0, i = 1,..., m. Further, we assume that the objective is strictly convex, that is Hq >- 0, while Hj >z 0, / = 1,..., m. The Lagrangian for this problem is C(x, A) = xJHox + 2cjx + do + ^A,(xTH,-x + 2cJ x + dj) = xT H(A)x + 2c(A)Tx + d(\), 318 OPTIMIZATION MODELS where we defined m m m H( A) = H0 + £ AiHi, c( A) = c0 + £ d( A) = rf0 + E A<df- /=1 /=1 Z = 1 Since we assumed Hq >- 0, we have that H(A) >- 0 for any A > 0, hence the unique unconstrained minimum over x of C(x, A) can be expressed explicitly using Eq. (9.4) as x*(A) - —H(A)_1c(A), g(A) = C{x*(A),A) = -c(A)TH(A)-1c(A) +d(A). The dual thus assumes the form d* = max -c(A)TH(A)“1c(A) + d(A) s.t.: A > 0, or, in equivalent minimization form, -d* = min c(A)TH(A)_1c(A) —d(A) s.t.: A > 0. Further, using an epigraphic reformulation, we have —d* = min t s.t.: c(A)TH(A)-1c(A) — d(A) < t, A > 0. The first constraint can be expressed equivalently, using the Schur complement rule, in the form of a positive-semidefinite constraint f + d( A) c(A)T c(A) H( A) The dual problem is hence finally expressed in the form min t A ,t t + d( A) c(A)T c(A) H( A) A > 0. This problem belongs to the class of so-called semidefinite programming models, which are studied in detail in Chapter 11. According to Proposition 8.7, strong duality is guaranteed if the primal problem is strictly feasible, that is if there exists an x satisfying the inequality constraints with strict inequality. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 319 9.4.3.3 Dual ofnon-convex QCQP with a single constraint. Finally, we consider the case of a possibly non-convex QCQP, with a single inequality constraint, that is p* = min xTHox + 2cq x + do s.t.: xTHiX + 2cjx + d\ <0. The Lagrangian for this problem is given by £(x, A) = xT(Hq -(- AHi)x -b 2(cq -b Aci)^~x -I- (do -b Ad\). According to points (a) and (b) in Section 9.1, the minimum of £(x, A) w.r.t. x is —00, unless the following conditions are satisfied: Ho + AHi ^ 0, cq + Acj E 72, (Ho + AHi). Under these conditions, the minimum gives the value of the dual function at A: g(A) = — (co -b Aci)t(Hq + AHi)+(co + Aci) + do + Ad\. The dual problem is then d — max —(cq ~b Aci)^~(Ho AHi)^(cq T- Ac^) -b do -b Ad\ s.t.: Hq + AHi y 0, Co + Aci E TZ(Hq + AHi), A > 0. Next, we reformulate the problem in epigraphic minimization form, as follows: —d* = min t A ,t s.t.: (cq T" Aci)t(Hq + AH\)^(cq + AC\) — (rfg Ad\) < f, Hq + AHi y 0, Co + Aci E 1Z(Hq + AHi), A > 0. Then, applying a general version of the Schur complement rule (see Section 11.2.3.2), we may equivalently rewrite the first three constraints of this problem in the form of a positive semidefiniteness condition on a suitable matrix, obtaining —d* = min A ,t t -b dQ ~b Adi (cq + Aci) (cq + Aci)T H0 + A Hi 320 OPTIMIZATION MODELS A > 0. The dual of the considered non-convex QP is thus a convex semidef- inite program (see Chapter 11). An important result can be proved on strong duality for the problem under consideration.3 Specifically, if the primal problem is strictly feasible, then it holds that p* = rf*. Notice that this statement cannot be claimed by simply appealing to Proposition 8.7, since the primal problem is not assumed to be convex here. This result is also connected with the so-called 5-procedure for quadratic functions, discussed in Section 11.3.3.1. 9.5 Modeling with LP and QP 9.5.1 Problems involving cardinality and their £\ relaxations Many engineering applications require the determination of solutions that are sparse, that is possess only few nonzero entries (low- cardinality solutions). The quest for low-cardinality solutions often has a natural justification in terms of the general principle of parsimony of the ensuing design. However, finding minimum cardinality solutions (i.e., solutions with small £q norm) is hard in general, from a computational point of view. For this reason, several heuristics are often used in order to devise tractable numerical schemes that provide low (albeit possibly not minimal) cardinality solutions. One of these schemes involves replacing the £0 norm with the £\ norm. This use is justified by extensive numerical evidence showing that, indeed, the £\ heuristic is effective for obtaining low-cardinality solutions; an application of this idea is developed in a linear binary classification application in Section 13.3. In the present section we further discuss this issue, trying to provide some analytical support for this heuristic. The convex envelope env/ of a function / : C R is the largest convex function that is an underestimator of / on C, i.e., env/(x) < /(x) for all x £ C and no other convex function is uniformly larger than env / on C, that is env/ = sup{(p : C —>* R : (p is convex and (p < /}. Intuitively, the epigraph of the convex envelope of / corresponds to convex hull of the epigraph of /, see Figure 9.14. Finding the convex envelope of a function is a hard problem in general. However, some special cases are well known. For instance, if C — [0, l]n (the unit hypercube) and / is a monomial / = X1X2 • • • xn, 3The proof is not elementary; see, e.g., page 657 in S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. 06 08 x' Figure 9.14 A non-convex function and its convex envelope (dashed) on [-1.1]- LINEAR, QUADRATIC, AND GEOMETRIC MODELS J21 en v f env(-f) i.e., the convex envelopes for / and —/ on the unit hypercube are polyhedral functions. Also, for x e R the convex envelope of card (x) on [—1,1] is env/ = |x| and, for the vector case x G Rn, envcard (x) = — ||x||i, on C = {x : ||x||oo < R}. This fact justifies the use of the t\ heuristic, at least in the case where the domain is bounded in the ioo ball, since the £\ norm yields the best convex approximation of card (x) from below. Another interesting relation between the £\ norm of x E lRn and its cardinality is obtained via the Cauchy-Schwartz inequality applied to the inner product of |x| and nz (x), where |x| is the vector whose entries are the absolute values of x, and nz (x) is the vector whose z-th entry is one whenever x/ ^ 0, and is zero otherwise. Indeed, we have that, for all xgIR”, 11*11! = nz (x)T|x| < |jnz (*)||2 • ||*||2 = ||x||2ycaiT(x), card (x)<k => ||x||i < fc||x||2- This latter relation can be used to obtain convex relaxations on certain cardinality-constrained problems. For instance, consider the problem p* = min cTx + llxllo s.t.: Ax < b, card (x) < k. r xeRn Then, the objective of this problem is lower bounded (under the cardinality constraint) as follows: cTx+||x||2 > cTx + ||x||^//c. Therefore, we can obtain a lower bound for p* by solving the problem p* = min cTx+ ||x||?/fc s.t.: Ax < b, r xeRn which can be expressed as a (convex) QP, by introducing a vector of slack variables mgIR” and a scalar t: p* = min cTx + f2/fc s.t.: Ax < b, E ui ^ f' — Ui < Xj < Ui, i = 1, . . .,ft. — max ^0, 1 — n + Y^ XjJ , = max —Xj, 322 OPTIMIZATION MODELS Another classical problem where a cardinality penalty term is replaced by an £i-norm term is the LASSO problem, discussed in Section 9.6.2. Example 9.16 (Piece-wise constant fitting) Suppose one observes a noisy time-series which is almost piece-wise constant. The goal in piece-wise constant fitting is to find what the constant levels are. In biological or medical applications, such levels might have interpretations of "states" of the system under observation. Formally, let x G Rn denote the signal vector (which is unknown) and let 1/ G Rn denote the vector of noisy signal observations (i.e., y is true signal x, plus noise). Given y, we seek an estimate x of the original signal x, such that x has as few changes in consecutive time steps as possible. We model the latter requirement by minimizing the cardinality of the difference vector Dx, where D 6 1Rn~1'n is the difference matrix D = -1 1 0 0 -1 1 •• 0 •• 0 -1 1 so that Dx = [f2 — *1/ £3 — x2/ ..., xn — in_i]T. We are thus led to the problem min ||y — xH2 s.t.: card(Dx)<k, where k is an estimate on the number of jumps in the signal. Here, the objective function in the problem is a measure of the error between the noisy measurement and its estimate x. We can relax this hard problem via the ^-norm heuristic, by replacing the cardinality constraint with an i\ constraint, thus obtaining the QP for some suitably chosen q (note that choosing q — k need not imply that the solution resulting from the relaxed problem is such that card (Dx) < k). Alternatively, one may cast a problem with a weighted objective: min lly-*112 + 7110*111, for some suitable trade-off parameter 7>0. Figure 9.13 shows an example of signal reconstruction via piece-wise fitting. The top panel in Figure 9.13 shows the unknown signal x (dashed) and its available noisy measurement y; the center panel shows the unknown signal x (dashed) and its reconstruction x obtained via the i\ heuristic in (9.21); the bottom panel shows the unknown signal x (dashed) and its reconstruction x obtained by solving a variation on (9.21) where the i2 norm is used instead of the t\ norm in the constraint. We notice that the i\ heuristic is successful in eliminating the noise from the signal, while preserving sharp transitions in the phase (level) changes in the signal. On the contrary, with an £2 heuristic, noise elimination only comes at the price of sluggish phase transitions. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 323 51 1 1 , , 1 , 1 1 , ' 1 1 ■" ! , x,x (u ....1 . . 1 T ■■■■ 1 ■■■—■ sing il) I I 1 1 1 1 1 1 1 1 i --1 -=d Figure 9.15 Example of reconstructing a piece-wise constant signal (top) from noisy measurements, using t\ (center) or £2 (bottom) heuristics. 9.5.2 LP relaxations of Boolean problems A Boolean optimization problem is one where the variables are constrained to be Boolean, i.e. to take on values in {0, 1}. For example, a Boolean LP takes the form p* = min cTx s.t.: Ax <b, x £ {0,1}”. Such problems are usually very hard to solve exactly, since they potentially require combinatorial enumeration of all the the 2” possible point arrangements in {0,1}”. A tractable relaxation of a Boolean problem is typically obtained by replacing the discrete set {0,1}” with the hypercube [0,1]”, which is a convex set. For instance, the relaxation of the previous Boolean LP yields the standard LP p* = min cTx s.t.: Ax <b, x £ [0,1]”. Since the feasible set of the relaxed problem is larger than (i.e., it includes) the feasible set of the original problem, the relaxation provides a lower bound on the original problem: p* < p*. The solution 324 OPTIMIZATION MODELS of the LP relaxation is not necessarily feasible for the original problem (i.e., it may not be Boolean). However, if it happens that the solution of the LP relaxation is Boolean, then this solution is also optimal for the original problem (prove this as an exercise). Such a situation arises for instance when b is an integer and the A matrix has a particular property called total unimodularity. 9.5.2.1 Total unimodularity. A matrix A is said to be totally unimodu- lar (TUM) if every square submatrix of A has determinant —1, 1, or 0. This matrix concept has interesting applications for LP relaxations of Boolean problems, due to the fact that polytopes defined via TUM matrices have integer vertices,4 that is all vertices of such polytopes have integer entries. Theorem 9.1 Let A E Rm,n be an integral matrix. The following statements hold: (1) A is TUM if and only if for any integral vector b E W1 all vertices of the polyhedron {x : Ax <b, x > 0} are integral; (2) if A is TUM then for any integral vector b E lRn all vertices of the polyhedron {x : Ax — b, x > 0} are integral; (3) A is TUM if and only if AT is TUM if and only if [A I] is TUM. Also, the following corollary provides a useful sufficient condition for TUM. Corollary 9.1 A matrix A E lRm'n is TUM if all the following conditions are satisfied: (a) each entry of A is —1, 1, or 0; (b) each column of A contains at most two nonzero entries; (c) the rows of A can be partitioned into two subsets R\ U R2 = {1,..., m} such that in each column j with two nonzero entries it holds that HieRi aij — D/GR2 ah' An immediate consequence of the previous corollary is that a (0,1, — 1) matrix is TUM if it contains no more than one 1 and no more than one —1 in each column. Such a particular situation actually arises in several optimization problems on graphs, where A is the incidence matrix of a directed graph or the incidence matrix of a bipartite graph, as illustrated in the following examples. 4 For details and proofs of results related to total unimodular matrices and linear programming, see, e.g., Section 19 of A. Schrijver, Theory of Linear and Integer Programming, Wiley, 1998. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 325 Example 9.17 (Weighted bipartite matching) A weighted bipartite matching problem arises when n agents need to be assigned to n tasks, in a one-to-one fashion, and the cost of matching agent i to task j is Wjj, see Figure 9.16. Defining variables Xjj such that Xjj = 1 if agent i is assigned to task j and Xij = 0 otherwise, the problem is written in the form of a Boolean LP: p* — min w"x s.t.: € {0,1} Vi,/ = 1 Y^ Xfj = 1 V; = 1,..., n (one agent for each task), z=1 n YYjxij ~ 1 V/ = 1, • • •, n (one task for each agent). An LP relaxation is obtained by dropping the integrality constraint on the Xjj variables, thus obtaining : min L wi)xij (9-22) i.j= 1 s.t.: Xjj > 0 \/i,j = 1,.. L x‘j = 1 V/ = l,...,n, YYjxij — ^ v/ = 1,...,n. Although, in general, the optimal solution of the relaxed problem is not guaranteed to be Boolean, in the present special case it is possible to prove that any vertex solution of the relaxed problem is Boolean. Indeed, the constraints in problem (9.22) can be written more compactly as x > 0, Ax — 1, (9*23) where A G R2”'”2 is the (undirected) incidence matrix of the undirected bipartite graph in Figure 9.16, and x G W1 is a vector containing a column vectorization of the matrix of variables [xjj\, i,j = 1,..., n. The rows of A correspond to nodes in the graph, say the first n nodes are the agents and the last n are the tasks. The columns of A represents the edges in the graph, and Aie = 1 if edge e is incident on node i, and Aje — 0 otherwise. The polytope represented by (9.23), known as the bipartite perfect matching polytope, thus has integer vertices, due to total unimodularity of A, hence for the weighted bipartite matching problem the LP relaxation actually provides an optimal solution to the original Boolean LP. As a numerical example, consider matching n — 4 agents to respective tasks, with costs described by the matrix W: W = Agents Tasks Figure 9.16 Bipartite matching problem. 326 OPTIMIZATION MODELS The LP relaxation provides the optimal agent/task matching (1,3), (2,2), (3,4), (4,1), with associated optimal cost p* = 4. Example 9.18 (Shortest path) The shortest path problem is the problem of finding a path between two vertices (or nodes) in a directed graph such that the sum of the weights along the edges in the path is minimized. Consider a directed graph with nodes V = {1,..., n} and edge set E, let t be the target node, let s be the source node, and let we denote the cost for traveling along edge e £ E. Then, the shortest (minimum cost) path can be found by solving the following Boolean LP: p* — min we*e X eeE S.t.I Xg £ {0,1} Vc £ E, Ax = b, where A £ Kn'\E\ is the (directed) incidence matrix of the graph (i.e. Aje = 1 if edge e £ E starts at node i, A[e — —1 if edge e £ E ends at node i, and A[e — 0 otherwise), and b £ is a vector such that bs — 1, bt = —1, and b{ = 0, Vz ^ s, t. Again, matrix A is TUM, hence the standard LP relaxation actually yields an optimal Boolean solution for this problem. 9.5.3 Network/lows Consider a network described by a directed graph with m nodes connected by n directed edges, as in Figure 9.17. Figure 9.17 Example of a directed graph with m = 6 nodes and n = 11 edges. The (directed) arc-node incidence matrix-A £ Rm,n of the network LINEAR, QUADRATIC, AND GEOMETRIC MODELS 327 is defined as in (3.2) and, for the example in Figure 9.17, we have A flow (of traffic, information, charge) is represented by a vector x £ W1 of signed edge variables (xz > 0 if the flow is in the direction of the arc, and jq < 0 if it is in the opposite direction). The net flow out of node i is out-flow; = (Ax)i = AijXj. Suppose that with each edge flow jq is associated a convex cost function (pi(xj): a minimum-cost flow problem amounts to determining minimal-cost flows that satisfy given supply/demand requirements, as well as capacity constraints on the links, that is: min ^<pi(xi) s.t.: Ax = b, I < x <u, x i=1 where b is the vector of external supplies at nodes (bj > 0 if node j is a source, bj < 0 if node / is a sink, and bj = 0 otherwise) which satisfies a flow conservation equality 1T b — 0 so that the total supply equals the total demand, and Z, u are lower and upper bounds on the flows, respectively (for instance, I — 0 if we want to impose that flows must follow the direction of the arcs). The constraint Ax — b represents the balance equations of the network. In the special case where (pi are linear functions, the above problem is an LP with objective (pTx, where (pi now represents the unit cost of a flow through link i. 9.5.3.1 Maximum flow problems. A related problem arises when there is one single source node s and one single sink node t, and one seeks to maximize the flow between s and t. Letting the (unknown) external supply vector be b = 7c, where es = 1, et = — 1, and = 0 for i 7^ s, t, we then seek to maximize 7 while satisfying the flow balance and capacity constraints, that is max 7 s.t.: Ax — ye, I < x < u, which is an LP. 328 OPTIMIZATION MODELS 9.5.4 Nash equilibria in zero-sum games Game theory models the behavior of conflicting rational players aiming at maximizing their payoff (or at minimizing their cost) via actions that take into account counter-actions from the other players. A central role in the theory of games is played by the so-called two- person zero-sum games, that is games involving two players A, B with perfectly conflicting objectives, i.e., such that if the payoff for player A is p then the payoff for player B is — p. In the discrete case, the possible "moves" or choices for each player are discrete and finite, hence they can be listed by rows and columns (say, actions for player A in the rows and actions for player B in the columns). These types of games are also known as matrix games. If A has m available actions and B has n available actions, then the game can be represented via a payoff matrix P £ Rm'", where the entry P/y represents the payoff for player A when she plays the z-th action and B plays the ;-th action (choices of actions by the two players are assumed to be taken simultaneously). Since the game is zero sum, it is enough to specify in P the payoffs for player A, since the ones for player B are simply —P. As a first example, consider two hot-dog vendors on the same street, competing for the same market. Each vendor has a fixed cost of $200, and must choose a high price ($2 per sandwich) or a low price ($1 per sandwich). At a price of $2, vendors can sell 200 sandwiches, for a total revenue of $400. At a price of $1, vendors can sell 400 sandwiches, for a total revenue of $400. If both vendors fix the same price, they split the sales evenly between them, otherwise the vendor with the lower price sells the whole amount and the vendor with the higher price sells nothing. The payoff table (payoffs are profits: revenue minus fixed cost) is given in Table 9.4. price $1 price $2 min row price $1 price $2 max col For such a game it is rational for each player to choose the strategy that maximizes her minimum payoff: best strategy for A: max min P/;, * ; best strategy for B: min max Pz/-, ; * that is, player A looks at the minimum value over each row, finding the vector of minimal payoffs for her actions (shown as the rightmost column in the table), then she chooses the action corresponding to Table 9.4 Payoff matrix P for the hot-dog vendors game. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 329 the largest entry in this column. The safe strategy for A is thus to set the price to $1. Similarly, player B looks at the maximum value over each column (since her payoffs are the negative of the entries in the table), finding the vector of minimal payoffs for her actions (shown as the lowermost row in the table), then she chooses the action corresponding to the smallest entry in this row. The safe strategy for B is also to set the price to $1. Notice that in this case both the A player and the B player strategies lead to an equal payoff. When this happens, we have found an equilibrium solution for the game in pure strategies. Such a solution is characterized by the fact that the common payoff represents a saddle point of the payoff matrix, that is an entry such that max min Pz/- = min max P/;, i j j i J the numerical value of such a common payoff is called the value of the game. A game may have multiple saddle points, but these will all have an equivalent value. A saddle-point equilibrium represents a decision by two players upon which neither can improve by unilaterally departing from it. However, not all games posses a saddle point. As a second example consider the game of Odd-or-Even in which two players simultaneously call out one of the numbers: zero, one, or two. If the sum of the outcomes is odd, then the 'Odd' player wins from the other player an amount in dollars equal to the numerical value of the sum, and vice versa. The payoff matrix of the game is Odd \ Even min row max col It is immediate to check that this game has no saddle point, hence no pure strategy yields an equilibrium solution: —2 = max min Po < min max Po = 1. i j j i In such cases, each player can resort to mixed strategies, that is she can choose a decision at random, according to an assigned probability distribution over the decisions. Players now reason precisely as in the previous case, but focusing on the expected payoffs. The problem then amounts to finding suitable distributions so that a saddle point for the expected payoff matrix exists (equalizing strategy). Suppose then that player A plays decision i with probability qK{ , and player Table 9.5 Payoff matrix P for the Odd-or-Even game. 330 OPTIMIZATION MODELS B plays decision j with probability q^jB\ and let q^A\ q^ be vectors representing the probability distributions on the strategies of players A and B, respectively. Then the vector of expected payoffs for player A corresponding to her possible strategies is Pq^ (the z-th entry in this column vector is the average (according to the probabilities on the strategies of player B) of the payoffs of player A if she chooses the z-th action). The overall expected payoff for player A, considering that she also randomizes upon her strategies, is therefore Now, since player A wants to maximize her worst-case expected payoff against all possible choices of q^ (each player knows the payoff matrix P, but does not know the opponent's randomization strategy), player A's problem amounts to solving Va = max min q(A^TPq(B\ where Sm, Sn denote respectively the probability simplex in m and n dimensions. Player B reasons in a dual way, hence the problem for player B amounts to solving Vb = min max q^TPq^B\ (9*24) q(B)eSn q(A)eSm A fundamental result of Von Neumann actually guarantees that Va = Vg = V, the value of the game, i.e., that any matrix game has a min- max equilibrium solution, either in pure or mixed strategy. Under such an equilibrium strategy, the expected payoff for player A is at least V, no matter what player B does, and the expected loss for player B is at most V, no matter what player A does. If V is zero we say the game is fair. If V is positive, we say the game favors player A, while if V is negative, we say the game favors player B. In the next paragraph we outline a connection between games and optimization problems, and show how the optimal mixed strategy for a matrix game can be computed via linear programming. 9.5.4.1 LP solution of matrix games. We consider the problem of computing the optimal mixed strategy for player B, the case of player A being completely analogous. Start by rewriting problem (9.24) in epi- graphic form Vb = min 7 7 eR,qWeSn s.t.: max q^TPq^ < 7. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 331 Then observe that maxyey f(y) < 7 if and only if f(y) < 7 for all y E y, therefore the problem is rewritten as Vg = min 7 7GR ,q(B)eSn s.t.: qWTPqW < 7/ £ Sm. Now, simplex Sm is a poly tope having as vertices the standard basis of Rm. Hence, applying the vertex result in (9.7), we have equivalently Vg = min 7 7ER ,qW<ESn s.t.: eJPq^<y, i = l,...,m. Considering that P is nothing but the z-th row of P, and writing explicitly the condition q^ E Sn as qW > 0, = 1, we finally obtain a problem in standard LP format: Vg — min 7 s.t.: < 7lm/ (?(B) > 0, lJ(j(B) = 1. An analogous reasoning shows that the equilibrium strategy for player A may be found by solving the following LP: Va = max 7 s.t.: PTq(A^ > 7lw, q(A^ > 0, 1 hq^ — 1. Using the LP approach, the reader may verify, for instance, that the Odd-or-Even game in Table 9.5 is a fair game (V = 0) and has optimal mixed strategies " 1/4 ' ' 1/4 " qW = 9.6 LS-related quadratic programs A major source of quadratic problems comes from LS problems and their variants. We have already seen in Example 9.1 that the standard LS objective f0(x) = \\Ax-y\\l is a convex quadratic function, which can be written in the standard form fo(x) = -xTHx + cTx -I- d, 332 OPTIMIZATION MODELS H = 2(ATA), c = -2ATy, d = yTy. Finding the unconstrained minimum of /o is just a linear algebra problem, which amounts to finding a solution for the system of linear equations (the normal equations) resulting from the optimality condition V/o(x) = 0 (see Section 8.4.1): ATAx = ATy. We next illustrate some variants of the basic LS problem, some of which are also amenable to a simple linear-algebra based solution. 9.6.1 Equality constrained LS Some variants on the basic LS problem have been discussed in Section 6.7. Here, we briefly discuss the case of an LS problem with additional linear equality constraints on the variables. It has been shown in Example 9.2 that minimizing a convex quadratic function under linear equality constraints is equivalent to solving an augmented system of linear equations. Therefore, solving the linear equality constrained LS problem inin \\Ax-y\\l s.t.: Cx — d is equivalent to solving the following linear equations in x, A (see Eq. (9-5)): c 0 9.6.2 t\ regularization and the LASSO problem Regularized LS problems, with an 12 regularization term, have been discussed in Section 6.7.3. An important variation arises when the regularization term in (6.25) involves the l\ norm of x, instead of the 12 norm. This results in the following problem, known as the basis pursuit demising problem (BPDN): min 11 Ax — 1/H2 -+- A||x||lr A > 0, (9.25) where ||x||i = |xi| + • • • + \xn\. Problem (9.25) has received enormous attention in recent years from the scientific community, due to its relevance in the field of compressed sensing (CS). The basic idea behind (9.25) is that the t\ norm of x is used as a proxy for the cardinality of x (the number of nonzero entries* in x), see Section 9.5.1 LINEAR, QUADRATIC, AND GEOMETRIC MODELS 333 for a justification of this fact. The interpretation of (9.25) is then that it formalizes a tradeoff between the accuracy with which Ax approximates y and the complexity of the solution, intended as the number of nonzero entries in x. The larger A is, the more problem (9.23) is biased towards finding low-complexity solutions, i.e., solutions with many zeros. Such kinds of solution are of paramount importance, for instance, in signal and image compression. Suppose for example that y G Rm is a vector representing the gray-scale levels of pixels in a multi-megapixel image (thus m can be extremely large, e.g., in the order of a few million), and let A contain by columns n fixed feature vectors (i.e., what is usually referred to as a dictionary): we seek to approximate the original image as a linear combination Ax of the columns of A with few nonzero coefficients. Then, instead of transmitting the whole bulky image vector y, one could transmit only the few nonzero coefficients in x and still the receiver (who knows the feature basis A) can approximately reconstruct the image as Ax. We already encountered a problem similar to (9.23) in the context of piecewise constant fitting, see Example 9.16. Problem (9.23) can be cast in the form of a standard QP by introducing slack variables u € R": mis„ HAx-yll2+AEM‘' x,ueK* “ s.t.: |x,-| < Ui, i = 1,.. . ,n. An essentially analogous version of problem (9.23) is obtained by imposing a constraint on the i\ norm of x (instead of inserting this term in the objective as a penalty), resulting in the so-called least absolute shrinkage and selection operator (LASSO) problem:5 min II Ax — y||2 s.t.: ||x||i < Oi. The LASSO problem can readily be cast in the standard QCQP format by introducing slack variables. Yet another version in which the problem can be formulated is in the form of minimization of ||x||i subject to a constraint on the residual norm, that is min || x ||i s.t.: \\Ax-y\\2<e, which can also be cast as a QCQP. All these variations on the LASSO problem yield convex optimization models that can be solved by standard efficient algorithms for QCQP, at least in principle. Notice, however, that the typical applications where LASSO-type prob- 5 Often, in the literature, the term LASSO is also used to refer to problem (9.25). 334 OPTIMIZATION MODELS lems arise may involve a very large number of variables, hence several specialized algorithms have been developed to solve t\-regularized problems with maximal efficiency; see Section 12.3.3.8 and Section 12.3.4 for a discussion on some of these algorithms. Example 9.19 (Image compression in a wavelet basis) A gray-scale image, represented by a vector y £ ]Rm, typically admits an essentially sparse representation, in a suitable basis. This means that, for an appropriate dictionary matrix A £ lRm,n, the image y can be well approximated by a linear combination Ax of the feature vectors, where the coefficients x of the combination are sparse. Usual dictionary matrices employed in image analysis include discrete fourier transform (DFT) bases, and wavelet (WT) bases. Wavelet bases, in particular, have been recognized to be quite effective in providing sparse representations of standard images (they are used, for instance in the Jpeg2000 compression protocol). Consider, for example, the 256 x 256 gray-scale image shown in Figure 9.18. Each pixel in this image is represented by an integer value yz- in the range [0, 255], where the 0 level is for black, and 233 is for white. The histogram of y for the original image is shown in Figure 9.19; clearly, in this representation, the image is not sparse. However, if we consider the image representation in the wavelet transform domain (which implicitly amounts to considering a suitable dictionary matrix A containing by columns the wavelet bases), we obtain a vector representation y whose absolute value has the histogram shown in Figure 9.20. For this example, we are using a Daubechies orthogonal wavelet transform, hence A is a 65,536 x 65,536 orthogonal matrix. Figure 9.20 shows that the wavelet representation y of the image contains very few large coefficients, while most of the coefficients are relatively small (however, y is not yet sparse, since its elements are not exactly zero). If all these small coefficients are retained, then y carries the same information as y, that is, it is a lossless encoding of the original image, in the wavelet domain: y = Ay. However, if we allow for this equality to be relaxed to approximate equality y ~ Ax, we may tradeoff some accuracy for a representation x in the wavelet domain which has many zero coefficients, i.e., a sparse representation. Such a sparse tradeoff can typically be obtained by solving the LASSO problem (9.23) for suitable A, that is min* 21| Ax — yH2 + ^IIx|| 1. In our specific situation, since A is orthogonal, we have that the above problem is equivalent to ^ll*-yll2 + AMl/ where y = ATy is the image representation in the wavelet domain. Interestingly, this problem is separable, i.e., it can be reduced to a series of univariate minimization problems, since Figure 9.18 "Boat" original 256 x 256 gray-scale image. Figure 9.19 Histogram of y, for the boat image. Figure 9.20 Histogram of the wavelet transform y, for the boat image. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 335 Moreover, each of the single-variable problems min -(xj — yi)2 + X( z admits a simple closed-form solution as (see Section 12.3.3.5) x* = J ° if < a, 1 1 Vi ~ tegp-d/i) otherwise. In words, this means that all coefficients in the wavelet basis are thresh- olded to zero if their modulus is smaller than A, and are offset by A, otherwise (soft thresholding). Once we have computed x*, we can reconstruct an actual image in the standard domain, by computing the inverse wavelet transform (i.e., ideally, we construct the product Ax*). In our current example, solving the LASSO problem with A = 30 we obtained a representation x* in the wavelet domain that has only 4,540 nonzero coefficients (against the 65,536 nonzero coefficients present in y or in y). We have therefore a compression factor of about 7%, meaning that the size of the compressed image is only 7% of the size of the original image. Reducing the regularization parameter to A = 10, we obtained instead a representation x* in the wavelet domain with 11,431 nonzero coefficients, and thus a compression factor of about 17%. Figure 9.21 shows the original image along with the reconstructed compressed images obtained, respectively, by choosing A = 10 and A = 30 in the LASSO problem. 9.7 Geometric programs Geometric programming (GP) deals with optimization models where the problem variables are positive (they typically represent physical Figure 9.21 Comparison of original boat image (a), wavelet compression with A = 10 (b), and wavelet compression with A — 30 (c). 336 OPTIMIZATION MODELS quantities such as pressure, areas, prices, concentrations, energy, etc.) and appear in the objective and constraint functions in the form of non-negative linear combinations of positive monomials, see Example 8.18 and Section 8.3.5.2. 9.7.1 Monomials and posynomials For two vectors x E Rn, a £ Rn, and scalar c > 0, we use the following notation to represent a positive monomial: CXa = CX^X22 • • • Xfl1, x > 0. A posynomial is then defined as a function / : R++ R which is a non-negative linear combination of positive monomials: /(*) = * > 0, (9-26) where C\ > 0 and e Rn, i = 1,...,K. Further, a generalized posynomial is any function obtained from posynomials via addition, multiplication, pointwise maximum, and raising to a constant power. For example, the function / : R++ —> R with values f(x) — max (^lx\3 x7lr X\X2X^, yjx\ + x\ is a generalized posynomial. Example 9.20 (Construction and operating costs of a storage tank) Consider a cylindrical liquid storage tank with height h and diameter d, as shown in Figure 9.22. The tank includes a base, which is made of a different material from the tank itself. In our model, the base's height does not depend on the tank's height. This is a reasonable approximation for heights not exceeding a certain value. The costs to manufacture, and then operate during a given period of time (say, a year) the tank, include the following. • Filling costs represent the costs associated with supplying a volume of liquid (say, water) in the given time period. These costs depend on the ratio Vgupp/Vtank, where VSupp is the volume to be supplied, and Vtank is the volume of the tank (the smaller the volume of the tank is with respect to the volume to be supplied, the more often we have to refill the tank, and the larger the cost). Thus, the filling cost is inversely proportional to the tank's volume: Cfm(d,h) = where ct\ is some positive constant, expressed in (say) dollars, and Cl = 4ax Vsupp/7T. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 337 • Construction costs include the costs associated with building a base for the tank, and costs associated with building the tank itself. In our model, the first type of cost depends only on the base area nd2/ 4, while the second type of cost depends on the surface of the tank, ndh (this assumes that we can use the same base height for a variety of tank heights). Thus, the total construction cost can be written as ^constr (d, H) Cbase {d, h) T~ Ctank(^/^0 C^d T~ C3dh, where C2 = 0:2 tt/4, C3 = oc3tc, with &2 > 0, oc3 > 0 constants expressed in dollars per square meter. The total manufacturing and operating cost function is thus the posy- nomial function Qotal(d/J0 = Qill(d,h) + CCOnstr{dfh) - cxh7xd~2 + c2d2 + c3dh. (9.27) Assuming the numerical values VSupp = 8x 105 litres, oc\ = 10 $, 0C2 = 6 $/m2, a3 = 2 $/m2, we obtain the plot shown in Figure 9.23 for the level curves of the total cost Ctotai(d,h). The sublevel sets are non-convex, hence Ctota\(d,h) is non-convex. Example 9.21 (Signal-to-noise ratio in wireless communications) Consider a cellular wireless network with n transmitter/receiver pairs. Transmit powers are denoted by p\,..., pn. Transmitter i is supposed to transmit to receiver i but, due to interference, some signal from the other transmitters is also present. In addition, there is (self-) noise power in each receiver. To measure this, we form the signal to interference plus noise ratio (SINR) at each receiver, which takes the form 'Yi t / ^ 1, • • •, n, li + CTi where S; is a measure of the (desired) signal power received from transmitter i, Ij is the total signal power received from all the other receivers, and cri is a measure of the receiver noise. The SINR is a (in general, complicated) function of the power used at the transmitters. We can express the SINRs at the receivers in terms of the powers pi,..., pn more explicitly, by assuming that the received powers Su are linear functions of the transmitted powers pi,..., pM. This model, also known as the Rayleigh fading model, states that Si = Gup» i = l,...,n, and li = GijPj, where the coefficients G[j, 1 < i,j < n, are known as the path gains from transmitter j to receiver i. The SINR functions ‘=' ”• are not posynomials, but their inverses are indeeed posynomial functions in the powers p\,...,pn. 7j1 (P) = 7^-P; 1 + E G^P/P/ ' = !/• • •,n. ^ii j# Figure 9.23 Some level curves of the tank cost Ctota\(d,h). 338 OPTIMIZATION MODELS 9.7.2 Convex representation of posynomials Monomials and (generalized) posynomials are not convex. However, we can obtain a convex representation, via a simple change of variables, plus a logarithmic transformation. Consider first a simple positive monomial function f(x) = CXa = CXafx22 • • • xfi1, X e R++, c > 0. Taking a logarithmic change of variables y* = ln*j, i = l,...,n, (9.28) we have g(y) = f(x(y)) — ceflli/1 • • • eUnyn — cefliyi_l [letting b = In c] = eaTy+b. The exponential function is convex, hence g(y) is convex (and positive) over Rn. Further, since we have seen that transforming an objective or constraint function via an increasing function yields an equivalent problem (see, e.g., Section 8.3.4.1), we can consider instead of g(y) the function g(y) =lnf(y) = aTy + b, which turns out, in this case, to be also convex in y. This further logarithmic transformation has the additional advantage of actually resulting in a linear function of the variable y (notice also that dealing directly with the exponential function g(y) may raise numerical problems, since this function, although convex, may take very large values). Optimization models in which only positive monomials appear in the objective and constraints can thus be transformed into equivalent linear programs, as shown also in Example 8.18. Consider next / to be a posynomial, with the notation defined in (9.26). Using again the change of variables (9.28), and letting bj = In Ci, we have S(y)=/(*(y)) = Y^e«)y+bi, which is a sum of convex functions, hence it is convex in the variable y. To avoid dealing with the large range of numerical values that exponential functions typically attain, we further take the logarithm of g, obtaining the function £(y) = ln£(y) = In ^A')y+6i^ = lse(Ay + b), LINEAR, QUADRATIC, AND GEOMETRIC MODELS 339 where we defined A £ TRK/H as the matrix having rows i = l,...,K,b= [b\ • • • ,^k]T/ and where lse is the log-sum-exp function defined in Example 2.14 (see also Example 4.4 and Example 8.5), which is a convex function. Thus, we can view a posynomial as the log-sum-exp function of an affine combination of the logarithm of the original variables. Since the lse function is convex, this transformation will allow us to use convex optimization to solve models based on posynomials. Remark 9.3 Convex representation of generalized posynomials. Adding variables, and with the logarithmic change of variables seen above, we can also transform generalized posynomial inequalities into convex ones. Consider, for example, the posynomial / with values f(x) =ma x{fi{x),f2(x)), where /i,/2 are posynomials. For t > 0, the constraint f(x) < t can be expressed as two posynomial constraints in (x,t), namely f\(x) < t, /2M < t- Likewise, for t > 0, cl > 0, consider the power constraint (/(*))“ < t, where / is an ordinary posynomial. Since oc > 0, the above is equivalent to fix) < t1/a, which is in turn equivalent to the posynomial constraint in (x,t) g(x,t)±rl/«f(x)< 1. Hence, by adding as many variables as necessary, we can express a generalized posynomial constraint as a set of ordinary posynomial ones. 9.7.3 Standard forms ofGP A geometric program is an optimization problem involving generalized posynomial objective and inequality constraints, and (possibly) monomial equality constraints. In standard form, a GP can be written as mm f0{x) s.t.: fi(x) < 1, i = l,...,m, hj(x) = 1, i = l,...,p, where /0,... ,fm are generalized posynomials, and hi, i = 1,..., p, are positive monomials. Assuming for simplicity that /0,... ,/m are standard posynomials, we can express a GP explicitly, in the so-called standard form: 340 OPTIMIZATION MODELS k=1 Ki E < L i = 1,.. i = 1,.. • ■.p. where a^koy... ,U(kmy r(1)/.. .,r(p) are vectors in ]Rn, and cki, gf are positive scalars.6 Using the logarithmic transformations described in Section 9.7.2, we may rewrite the above GP (which is non-convex) into an equivalent convex formulation, as follows: min lse(Aoj/ + bo) s.t.: lse(Ajy + bj) <0, i = 1,..., m, Ry + h = 0, (9*29) where A* is a matrix with rows ajlfy..., aJKiy i = 0,1,..., ra; bj is a vector with elements cu,..., c^./, i = 0,1,..., ra; R is a matrix with rows rj^y..., rjy and h is a vector with elements lngi,..., lngp. Example 9.22 (Optimization of a liquid storage tank) We consider again the liquid storage tank model introduced in Example 9.20. The problem is to find the diameter d and height h of the tank, so as to minimize the"cost Ctota[(d,h) in (9.27), which is a posynomial function, subject to constraints. The constraints involve upper and lower bounds on the variables 0 < d < dmaX/ 0 < h < hmax. We might also include an upper bound Kmax on the aspect ratio of the tank (the aspect ratio constraint is useful, for instance, to take into account structural resistance to wind) b < Kmaxd. The problem takes the standard GP form min c\h~ld~2 + c^d1 + c^dh d,h s.t.: 0 < d^\xd < 1, 0 < h~lxh < 1, Km Ixhd'1 < 1. Using the numerical values Vgupp — 8 x 105 litres, cc\ = 10 $,0:2 = 6 $/m2, &3 = 2 $/m2, and bounds dmax = 20 m, hmax = 30 m, Kmax = 3, we can solve this problem numerically, obtaining the optimal solution d* = 14.84, h* = 22.26, with corresponding optimal objective value C*otal = 5191.18. 6 Converting the problem into the standard form when the original formulation involves generalized posy- nomials entails adding new variables and constraints; see Remark 9.3. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 34I 9.8 Exercises Exercise 9.1 (Formulating problems as LPs or QPs) Formulate the problem Vj = «un /;•(*)/ for different functions fj, j — 1,..., 5, with values given in Table 9.6, as QPs or LPs, or, if you cannot, explain why. In our formulations, we always use x £ Rw as the variable, and assume that A £ Rm'w, y £ Rm, and k £ {1,... ,m} are given. If you obtain an LP or QP formulation, make sure to put the problem in standard form, stating precisely what the variables, objective, and constraints are. Hint: for the last one, see Example 9.10. Exercise 9.2 (A slalom problem) A two-dimensional skier must slalom down a slope, by going through n parallel gates of known position (xj,yj), and of width c\, i = 1,..., n. The initial position (*o,i/o) is given, as well as the final one, (xw+i,yw+i). Here, the x-axis represents the direction down the slope, from left to right, see Figure 9.24. 1. Find the path that minimizes the total length of the path. Your answer should come in the form of an optimization problem. 2. Try solving the problem numerically, with the data given in Table 9.7. Exercise 9.3 (Minimum distance to a line segment) The line segment linking two points p,q £ W1 (with p ^ q) is the set C = {Ap + (1 — \)q : 0 < A < 1}. 1. Show that the minimum distance D* from a point a £ Rn to the line segment C can be written as a QP in one variable: min ||Ac + d||2 : 0 < A < 1, for appropriate vectors c,d, which you will determine. Explain why we can always assume a — 0. 2. Prove that the minimum distance is given by7 Dl= { 1ra_ (Alp--?))2 1 q llp-tfz if pTq < min(qTq,pTp), if pTq > qTq, if pTq > pTp. /iM = + IMIi /2« = + Mi h\x) = - ii*iii f4(x) = + IMI2 fs(x) = ■yl[<] + II* Table 9.6 Table of the values of different functions /. |z|[/j denotes the element in a vector z that has the i-th largest magnitude. Figure 9.24 Slalom problem with n = 5 obstacles. "Uphill" (resp. "downhill") is on the left (resp. right) side. The middle path is dashed, initial and final positions are not shown. 3. Interpret the result geometrically. Table 9.7 Problem data for Exercise 9.2. 7 Notice that the conditions expressing D* are mutually exclusive, since |pT<?l < llplklklh- 342 OPTIMIZATION MODELS Exercise 9.4 (Univariate LASSO) Consider the problem mm/(x) = d|«*-y||2 +A|*|, xeR I where A > 0, a £ Rm, y £ Rm are given, and x G R is a scalar variable. This is a univariate version of the LASSO problem discussed in Section 9.6.2. Assume that y ^ 0 and a ^ 0, (since otherwise the optimal solution of this problem is simply x = 0). Prove that the optimal solution of this problem is _ f 0 if |flTy| < A, * “ I *ls - sgn(xls)|^| if |aTy| > A, . aTV corresponds to the solution of the problem for A — 0. Verify that this solution can be expressed more compactly as x* = sthrA/||fl||2 (^is), where sthr is the soft threshold function defined in (12.66). Exetcide 9.5 (An optimal breakfast) We are given a set of n = 3 •v * ‘ types’ of food, each of which has the nutritional characteristics described in„Table 9.8“.' Find the optimal composition (amount of servings per!-each food.) of a breakfast having minimum cost, number of calories between 2,000 and 2,250, amount of vitamin between 5,000 and 10,000, and sugar level no larger than 1,000, assuming that the maximum number of servings is 10. Exercise 9.6 (An LP with wide matrix) Consider the LP p* = min cTx : / < Ax <u, where A e ]Rm,w, c £ R", and l,u E Rm, with / < u. We assume that A is wide, and full rank, that is: m < n, m = rank(A). We are going to develop a closed-form solution to the LP. 1. Explain why the problem is always feasible. 2. Assume that c ^ TZ(AT). Using the result of Exercise 6.2, show that p* = — 00. Hint: set x = Xq + tr, where xo is feasible, r is such that Ar = 0, cTr > 0, and let t —00. Table 9.8 Food costs and nutritional values per serving. LINEAR, QUADRATIC, AND GEOMETRIC MODELS 343 3. Now assume that there exists d £ Rm such that c = ATd. Using the fundamental theorem of linear algebra (see Section 3.2.4), any vector x can be written as x — ATy + z for some pair (y,z) with Az = 0. Use this fact, and the result of the previous part, to express the problem in terms of the variable y only. 4. Reduce further the problem to one of the form min dTv : I <v < u. Make sure to justify any change of variable you may need. Write the solution to the above in closed form. Make sure to express the solution steps of the method clearly. Exercise 9.7 (Median versus average) For a given vector v £ Rw, the average can be found as the solution to the optimization problem min lb — xl||o, (9*30) *<E1R "Z where 1 is the vector of ones in R”. Similarly, it turns out that the median (any value x such that there is an equal number of values in v above or below x) can be found via min \\v -xl||i. (9.31) We consider a robust version of the average problem (9.30): min max \\v + u — xl\\?f (9*32) * k:|M|oo<AM in which we assume that the components of v can be independently perturbed by a vector u whose magnitude is bounded by a given number A > 0. 1. Is the robust problem (9.32) convex? Justify your answer precisely, based on expression (9.32), and without further manipulation. 2. Show that problem (9.32) can be expressed as min Y (11?; — xl + A)2 . 3. Express the problem as a QR State precisely the variables, and constraints if any. 4. Show that when A is large, the solution set approaches that of the median problem (9.31). 344 OPTIMIZATION MODELS 5. It is often said that the median is a more robust notion of "middle" value than the average, when noise is present in v. Based on the previous part, justify this statement. Exercise 9.8 (Convexity and concavity of optimal value of an LP) Consider the linear programming problem p* = min cTx : Ax < b, where c £ Rw, A £ Rm,n, b £ ]Rm. Prove the following statements, or provide a counter-example. 1. The objective function p* is a concave function of c. 2. The objective function p* is a convex function of b (you may assume that the problem is feasible). 3. The objective function p* is a concave function of A. Exercise 9.9 (Variational formula for the dominant eigenvalue) Recall from Exercise 3.11 that a positive matrix A > 0 has a dominant eigenvalue A = p(A) > 0, and corresponding left eigenvector w > 0 and right eigenvector v > 0 (i.e., wT A = AwT, Av = Av) which belong to the probability simplex S = {x £ ]Rn : x > 0, lTx = 1}. In this exercise, we shall prove that the dominant eigenvalue has an optimization-based characterization, similar in spirit to the "variational" characterization of the eigenvalues of symmetric matrices. Define the function / : S —» R++ with values qJ x f(x) = min ——, for x £ S, where aj is the /-th row of A, and we let ^ = +00 if x* = 0. I Xj 1. Prove that, for all x £ S and A > 0, it holds that Ax > /(x)x > 0. 2. Prove that /(x) < A, Vx £ S. 3. Show that f(v)= A, and hence conclude that A = ma x/(x), which is known as the Collatz-Wielandt formula for the dominant eigenvalue of a positive matrix. This formula actually holds more generally for non-negative matrices,8 but you are not asked to prove this fact. 8 For a non-negative matrix A > 0 an extension of the results stated in Exercise 3.11 for positive matrices holds. More precisely, if A > 0, then A = p(A) > 0 is still an eigenvalue of A, with a corresponding eigenvector v > 0 (the difference here being that A could be zero, and not simple, and that v may not be strictly positive). The stronger results of A > 0 and simple, and v > 0 are recovered under the additional assumption that A > 0 is primitive, that is there exists an integer k such that Ak > 0 (Perron- Frobenius theorem). Exercise 9.10 (LS with uncertain A matrix) Consider a linear least- squares problem where the matrix involved is random. Precisely, the residual vector is of the form A(S)x — b, where the m x n A matrix is affected by stochastic uncertainty. In particular, assume that — A) + AiSi, where 6j, i = 1,..., p are i.i.d. random variables with zero mean and variance of. The standard least-squares objective function ||A(<5)x — b||2 is now random, since it depends on S. We seek to determine x such that the expected value (with respect to the random variable S) of || A(S)x — &H2 is minimized. Is such a problem convex? If yes, to which class does it belong (LP, LS, QP, etc.)? Second-order cone and robust models Second-order cone programming (SOCP) is a generalization of linear and quadratic programming that allows for affine combinations of variables to be constrained inside a special convex set, called a second-order cone. The SOCP model includes as special cases LPs, as well as problems with convex quadratic objective and constraints. SOCP models are particularly useful in geometry problems, approximation problems, as well as in probabilistic (chance-constrained) approaches to linear optimization problems in which the data is affected by random uncertainty. Data uncertainty also motivates the introduction in this chapter of robust optimization models, which enable the user to obtain solutions that are resilient (robust) against the uncertainty that is in practice often present in the description of an optimization problem. 10.1 Second-order cone programs The second-order cone (SOC) in R3 is the set of vectors (xi,X2,t) such that yjx\ + x\ < t. Horizontal sections of this set at level oc > 0 are disks of radius oc. A visualization of the SOC in R3 is given in Figure io.i. This definition can actually be extended to arbitrary dimension: an (n + 1)-dimensional SOC is the following set: fcn = {(x,t), x e Rn, t e R : ||x||2 < t}. (io.i) Example 10.1 (Magnitude constraints on affine complex vectors) Many design problems involve complex variables and magnitude constraints. Such constraints can often be represented via SOCs. The basic idea is that the o X2 Figure io.i The second-order cone in R3. 348 OPTIMIZATION MODELS magnitude of a complex number z = z% + ;zj, with z^, zj the real and imaginary parts, can be expressed as the Euclidean norm of (zk,zj): — \/zi? + 2' .2 _ For example, consider a problem involving a magnitude constraint on a complex number /(x), where x G Rn is a design variable, and the complex-valued function / : Rn —» C is affine. The values of such a function can be written as f(x) = (fljx + bR) + j(ajx + b;), where a^aj G R”, b&r bj G R. For f G R, the magnitude constraint can be written as nT r -4- hn < t, a^x + bR aj x + bj which is a second-order cone constraint on (x, t). lo.i.i Geometry An SOC is a convex cone. First, the set JCn in (10.1) is convex, since it can be expressed as the intersection of (an infinite number of) halfspaces: K,n = P| j(x, t), x G Rn, t G R : xTu < . u: \\u\\2<l Second, it is a cone, since for any z e JCn it holds that ocz G Kn, for any oc > 0. lo.i.i.i The rotated second-order cone. The rotated second-order cone in R^+2 is the set Krn = |(^/JZ/2)/^ £ Rn,y G R,z G R : xTx < 2yz, y > 0, z > oj. Note that the rotated second-order cone in Rn+2 can be expressed as a linear transformation (actually, a rotation) of the (plain) second- order cone in Rn+2, since 11 x 112 < 2yz, y > 0, z > 0 < A(y+z). That is, (x,y,z) € K7n if and only if (w, t) e /C„, where (x, (y-z)/Vl), t - (yfz)/V2. SECOND-ORDER CONE AND ROBUST MODELS 349 These two sets of variables are related by a rotation matrix R: ' In 0 ‘ = R , R = - 72{y + z) - V2 J which proves that rotated second-order cones are also convex. Constraints of the form ||x||^ < 2yz, as appearing in (10.2), are usually referred to as hyperbolic constraints. 10.1.1.2 Quadratic constraints. Second-order cone constraints can be used to express convex quadratic inequalities. Precisely, if Q = QT >: 0, the constraint xTQx + cTx < t (io*3) is equivalent to the existence of w, y, z such that wTw < 2yz, z = 1/2, w = Q1/2x, y = t — cTx, (10.4) where Q1//2 is the matrix square-root of the positive semidefinite matrix Q. In the space of (x,w,y) variables, the above constraints represent the intersection of a rotated second-order cone constraint (■w,y,z) G K7n, with the affine sets z = 1/2, {(x,w) : w = Q1/2x}, {(x,y) : y - t - ctx}. 10.1.1.3 Second-order cone inequalities. The standard format of a second-order cone constraint on a variable expresses the condi¬ tion that (y, t) G Km, with y G Rm, t G R, where y, t are some affine functions of x. Formally, these affine functions can be expressed as y = Ax + b, t — cTx + d, hence the condition (y, t) G Km becomes ||Ax -K bU2 ^ c^x + d, where A G Rm,?t, b G Rm, c G R”, and d G R. For example, the convex quadratic constraint (10.3) can be expressed in standard SOC form by first writing it in the rotated conic form (10.4), and then applying (10.2), which results in the SOC V2Q1/2x 1/2 t — cTx ^ t — cTx “i-1/2. 350 OPTIMIZATION MODELS 10.1.2 SOCP in standard form A second-order cone program is a convex optimization problem having linear objective and SOC constraints. When the SOC constraints have the standard form (10.5), we have a SOCP in standard inequality form: min cTx (10.6) s.t.: \\AjX + bi\\2 < cjx + di, i = where Aj E ]Rm/'n are given matrices, b{ E Rm*, c\ E R” are vectors, and d{ are given scalars. An equivalent formulation, called conic standard form, makes the conic inclusions explicit in the constraints: min cTx s.t.: (AjX + hi, cjx + di) E Km., i = l,...,m. SOCPs are representative of a quite large class of convex optimization problems. Indeed, LPs, convex QPs, and convex QCQPs can all be represented as SOCPs, as illustrated next. Example 10.2 (Square-root LASSO as an SOCP) Return to the square-root LASSO problem mentioned in Example 8.23: p* = mm ||Ax - fc||2 + M\x\\v where A E Hm,n, b E ]Rm, and the parameter A > 0 are given. The problem can be expressed as an SOCP, namely p* = min t + A Y] ui : t > \\Ax — b\\2, U[ > Ixz-1, z = 1,... ,n. x,t,u ^ Linear programs as SOCPs. The linear program (LP) in standard inequality form min c 1 x s.t.: ajx<bi, i — \, ...,m, can be readily cast in SOCP form as min cTx s.t.: ||Qx + d;||2 < bj — ajx, i — 1,...,m, where C/ = 0, d{ — 0, i — 1,..., m. SECOND-ORDER CONE AND ROBUST MODELS 351 Quadratic programs as SOCPs. The quadratic program (QP) min xTQx + cTx s.t.: ajx <bi, i — 1,..., m, where Q = QT ^ 0, can be cast as an SOCP as follows. First, we set w = Q1^2x and introduce a slack variable y, thus rewriting the problem as min cTx + y s.t.: wTw < y, w = Q1/2x, ajx < hi, i = 1,.. . ,ra. Then we observe that wTw < y can be expressed in rotated conic form by introducing a further slack variable z: wTw < yz, with z linearly constrained so that z = 1. Therefore, we have min cTx + y s.t.: wTw < yz, z = l w = Q1/2x, «7" x < hi, i = 1,..., m, which, using (10.2), can be further rewritten as min cTx + v z,y <y + l, a- x < b{, i = 1,... ,m. Quadratic-constrained quadratic programs as SOCPs. The convex quadratic- constrained quadratic program (QCQP) min xTQox + aQX X , „T s.t.: x QiX + a- x <b(, z = l,...,m, with Qj = Q} >z 0, i = 0,1,..., m, can be cast as an SOCP as follows. First, we introduce a slack variable t and rewrite the problem in epigraph form as min 0q~x +1 s.t.: xTQ0x < t, xTQiX + a[x < bj, i = 1,...,m. Now, each constraint of the form xTQx + aTx < b is equivalent to the existence of w, y, z such that wTw < yz, z = I, zv = Q1/2x, y — b — aJx 352 OPTIMIZATION MODELS Therefore, the QCQP is rewritten as min a J x + t s.t.: Wq w0 <t,w0 = QI/2x, wjQm < bi - ajx, Wi = Q]/2x, i = which, using (10.2), can be further rewritten in explicit SOCP format as min a J x + t x,t u 2 qI/2x t -1 2 Q)'2x bj — aj x — 1 <bj — ajx + 1, / = m. Remark 10.1 There is an alternative, and simpler, way to convert a quadratic constraint of the form xTQx + aTx < b into an SOCP constraint, when the matrix Q is positive definite (hence Q and Q1//2 are invertible). Since xTQx + *Tx = (Q^x+ \Q~l/2a)J (q1/2*+^Q~1/2«) - ^Q^a, we have that (10.7) is equivalent to < b + \aJQ 1a, which is in turn equivalent to the SOCP constraint 10.1.3 SOCP duality Consider the SOCP in standard form (10.6): p* = min cTx s.t.: \\A{X-\-biW2 < cj x + di, i — 1, ...,m. We have p* — min max cTx + ^À;(||AjX + b{H2 — cjx — d{ x A>0 = min max cTx + Yl (UJ iX + ~ ^i(CJX + ^)) ' SECOND-ORDER CONE AND ROBUST MODELS 353 where we have used the dual representation of the Euclidean norm in the second line. Applying the max-min inequality (8.48), we obtain p* > d*, where d* = max- min cT x + Y] (uj (AfX + bj) - Aj(cJ x + df)) . \\ui\\2<hi, i=l,...,m x i=1 V / Solving for x, we obtain the dual problem: d* = max ( ujb[ — A/d; ] ui,\i,i=l,...,m yi=1 J s-t.: £ (Aj^ - Ajdj) = c, \\uj\\2 < A,-, i = 1,... ,m. Note that the dual problem is also an SOCP. From Slater's conditions for strong duality, it turns out that if the primal problem is strictly feasible, then p* = d*, and there is no duality gap. Example 10.3 (Dual of square-root LASSO) Return to the square-root LASSO problem of Example 10.2. Since it can be written as an SOCP, we can apply the result above; strong duality results from the primal problem being strictly feasible (in a trivial way, as it has no constraints). Alternatively, we can directly represent the primal as a minimax problem: p* — min max uT(b — Ax) + vTx : Hull? < 1, |M|oo < A. X u,v Applying the minimax Theorem 8.8, we obtain p*=maxuTb : ||m||2 < 1, ||ATu||oo < A. From this, we can observe that if ||ATu||oo < A for every u with ||u||2 < 1, then the second constraint is inactive at optimum. This implies that P* — IIb\\2, hence x = 0 is optimal for the primal problem. 10.2 SOCP-representable problems and examples 10.2.1 Sums and maxima of norms SOCPs are useful to tackle problems involving minimization of sums or maxima of Euclidean norms. The problem min EIIa-*-M2, * /=1 where E Rm'n, bj E ]Rm are given data, can be readily cast as an SOCP by introducing auxiliary scalar variables y\,..., and rewriting the problem as 354 OPTIMIZATION MODELS mm Ey, ,y ;=i s.t.: ||A;a: — i»,-||2 < yi, i = p. Similarly, the problem min max ||Az-x — b{\\2 * i=l,...,p can be cast in SOCP format, by introducing a scalar slack variable y, as follows: min y s.t.: \\AjX - fc,-||2 < y, i = l,...,p. Block sparsity We have seen in Section 9.5.1 and Section 9.6.2 that the addition of a regularization term of the form ||x||i in a problem objective has the effect of encouraging sparsity in the solution, that is it promotes solutions with low cardinality. In many problems, however, one may be interested in seeking solutions that are block sparse. By a block- sparse solution we here mean that the solution vector x E is partitioned into blocks Xi E R”1', i = 1,..., p; ni H hnp=n, and we wish many of these blocks to be zero, while the nonzero blocks need not be sparse. Clearly, EINI2 = that is, the sum of f2”norms of the blocks is precisely the f^-norm of the vector whose components are ||x;||2- Therefore, including in the objective a term proportional to £?=1 ||x;||2 will promote solutions where many blocks are zero and few blocks are nonzero and possibly full. As an example, consider an LS problem where one seeks to minimize || Ax — y\\i and A is structured in blocks as A = [A\ • • • Ap], where A[ G Rm,n*. Then, a block-sparse regularization of the problem can be expressed as min || Ax — t/||2 + 7 E \\xi\\l> i=i ‘ SECOND-ORDER CONE AND ROBUST MODELS 355 where 7 > 0 is a penalty weight (the higher 7 the more incentive is put on finding block-sparse solutions). This problem is put in SOCP form by introducing slack scalar variables z and t\,...,tp\ min z + 7^]f; X,Z,t “ s.t.: \\Ax — y\\2 < z, \\Xi\\2<ti, i = l,...,p. 10.2.3 Quadratic-over-linear problems Consider the optimization problem * cjx + di s.t.: c{ x + di> 0, By introducing auxiliary scalar variables t\,..., tp, we first rewrite it as in y ti J i=1 s.t.: ||Atx - bi\\l < (cjx + dj)ti, i = b-..,p, cj x + di > 0, i = 1,..., p. Then, we apply (10.2) to obtain the SOCP formulation 2 (A'X-bi) cj x + di — U < c- x + d{ + i = 1,. 10.2.4 Log-Chebyshev approximation Consider the problem of finding an approximate solution to an overdetermined system of linear equations ajx ~ 1ft, x/i> 0, i = 1,..., m. In certain applications it makes sense to use an error criterion based on the maximum logarithm of the ratios (ajx)/yi, instead of the usual LS criterion. The log-Chebyshev approximation problem thus amounts to solving mm max s.t.: ajx > 0, log(a/x) - logy, 356 OPTIMIZATION MODELS Now, for aj x > 0, it holds that CL^ X l°g(flirx) — logy; = log = log max and, since log is monotone increasing, max log(fl^x) — logy/ = log max max Again, since log is monotone increasing, we have that minimizing the log of a function over some set is equivalent to taking the log of the minimum of the function over the same set, hence the log-Chebyshev approximation problem is equivalent to Then, introducing an auxiliary scalar variable t and expressing the problem in epigraphic form, we have The latter set of constraints are of the hyperbolic form 1 < t-{ajx)/yir hence applying (10.2) we finally express the problem in SOCP form min t 10.2.5 Chance-constrained LP Chance-constrained linear programs arise naturally from standard LPs, when some of the data describing the linear inequalities is uncertain and random. More precisely, consider an LP in standard inequality form: mm max max s.t.: aj x > 0, i = 1,..., m. min t s.t.: (aj x)/yi <t, i = I,..., m, (ajx)/yi >l/t, i = aj x > 0, i = 1,..., m. s.t.: {ajx)/yi<t, i = 1,..., m, <t + (ajx)/yir i = min crx Suppose now that the problem data vectors 0/, i — 1,... ,m, are not known precisely. Rather, all is known is that a\ are random vectors, SECOND-ORDER CONE AND ROBUST MODELS 357 with normal (Gaussian) distribution with mean value E{flz } = Uj and covariance matrix var{az*} = E; y 0. In such a case, also the scalar aj x is a random variable; precisely, it is a normal random variable with It makes therefore no sense to impose a constraint of the form ajx < hi, since the left-hand side of this expression is a normal random variable, which can assume any value, so such a constraint would always be violated by some outcomes of the random data It then appears natural to ask that the constraint «,T*< hi be satisfied up to a given level of probability p; E (0, 1). This level is chosen a priori by the user, and represents the probabilistic reliability level at which the constraint will remain satisfied in spite of random fluctuations in the data. The probability-constrained (or chance-constrained) counterpart of the nominal LP is therefore where p* are the assigned reliability levels. The reader should be warned that this chance-constrained problem may be very hard to solve, and it is not guaranteed to be convex, in the general case. However, in the specific case of concern (namely, when a\, i = 1,..., m, are independent normal random vectors, and p, > 0.5, i = 1,..., m), one may prove that the chance-constrained problem is indeed convex, and it can actually be recast in the form of an SOCP. Proposition io.i Consider problem (io.8)-(io.9), under the assumptions that pi > 0.5, i = 1,..., m, and that a\, i — 1,..., m, are independent normal random vectors with expected values and covariance matrices Ei y 0. Then, (io.8)-(io.9) is equivalent to the SOCP min cTx s.t.: ajx < bi-^>~1(pi)\\Z]/2x\\2, i = (10.10) where 0-1(p) is the inverse cumulative probability distribution of a standard normal variable. Proof We start by observing that lE{aJ x} — aj x, var {aj x} = xTE,x. min cTx s.t.: Prob{ajx < bj} > pz-, i = 1,..., m, which follows from subtracting on both sides of the equation ajx< bj the expected value !E{aJx} — ajx, and then dividing both sides by 358 OPTIMIZATION MODELS the positive quantity crj(x) = yJx^lLjX. This latter quantity is nothing but the standard deviation of aj x and, using the matrix square-root factorization ^2, it holds that CTi(x) = xTZjX = ||£./2x||2. (10.12) zM = (mi3) T(W = <10'14) we have from (10.11) that Prob{fl^x < bj} — Prob{z/(x) < T/(x)}. (10-15) Now, observe that Zj(x) is a standardized normal random variable (that is, a normal variable with zero mean and unit variance), and let <!>(£) denote the standard normal cumulative probability distribution function, i.e., <!>(£) = Prob{zt-(x) < £}. (10.16) Function <£(£) is well known and tabulated (also, it is related to the so-called error function, erf(£), for which it holds that <£(£) = 0.5(1 + erf(£/ V^2))); a plot of this function is shown in Figure 10.2. It then follows from (10.13) and (10.16) that Prob{ajx < bj} = 0(tz(x)), hence each constraint in (10.9) is equivalent to ®(Ti(x)) > Pi. (10.17) Since 0 is monotone increasing, denoting by O"1 the inverse cumulative distribution function, we have that (10.17) holds if and only if Tj(x) > where, for pi > 0.5, 0_1(p/) is a positive number. Flence, recalling (10.14) aRd (10.12), we have that each constraint in (10.9) is equivalent to the SOC constraint bi-a[x>Q-l(pi) ||xV2x||2, whereby the claim follows immediately. . □ Figure 10.2 Plot of the standard normal cumulative probability distribution function <!>(£). SECOND-ORDER CONE AND ROBUST MODELS 359 Example 10.4 (Value at risk (VaR) in portfolio optimization) Let us recall the financial portfolio model introduced in Section 4.3. The vector r G lRn contains the random rate of returns of n assets, and x £ is the vector describing the fractions of investor's wealth allocated in each of the assets. A classical assumption in portfolio management (albeit a debatable one) is that r is a normal random vector, with expected value r — E{r} and covariance Z = var{r} >- 0, which are assumed to be known. A popularly embraced technique for measuring downside risk in a portfolio is the so-called value at risk (VaR). VaR is defined as the oc percentile of the portfolio return, typically for low oc values (e.g., 1, 5 or 10 percent). Thus, VaR measures the potential loss in value of a risky portfolio over a defined period, for a given confidence level. For example, if the VaR on a portfolio return is 80% at oc = 0.05, it means that there is at most 5% chance that the portfolio value will drop more than 80% over the given period. Formally, let us define by q(x) = rTx the random return of the portfolio over the fixed period of time over which r is defined. Then, the loss of the portfolio is simply defined as £(x) = —q(x), and the value at risk oc is defined, in general, as VaRa(x) — inf : Prob{£(x) > 7} < oc = - sup : Prob{^(x) < Q < oc. In the case when the cumulative distribution function of £(x) is continuous and strictly increasing (as it is assumed in the case considered here, where £(x) is normal), then sup : Prob{^(x) < £} < oc = inf : Prob{^(x) < £} > oc, c f and the expression on the right is the definition of the inverse cumulative distribution function of p(x) (also known as the quantile function) : Prob{<?0) < Q > VaRa(x) = and for v > 0 we have that VaRa(x) < v > -v & F?{x){-v) < a, where the last condition reads Prob{rTx < —v} < oc, or, equivalently, taking the complementary event, Prob{rTx + i^>0}>p, p = 1 — oc (note that we used > instead of >, since this makes no difference here, due to continuity of the probability distribution). This constraint is of the 360 OPTIMIZATION MODELS form (10.9), hence, for oc < 0.5, it can be expressed equivalently in SOC form as in (10.10), and therefore VaRa(x) < v <=> rTx -I- v > 0_1(1 — a:)||L1/2x||2, where O is the standard normal cumulative distribution function. A typical portfolio allocation problem may thus be posed as follows: given a target desired return \i on the investment and a risk level oc < 0.5 (e.g., oc — 0.02), find a portfolio composition (assume that no short selling is allowed, thus x > 0) with expected return at least y and minimal VaRa. Such a problem is easily posed as an SOCP as follows: min v s.t.: x > 0, E xi = rT X > }l, rTX + V > 0_1(1 - a)||L1/2x||2. 10.2.6 Facility location problems Consider the problem of locating a warehouse to serve a number of service locations. The design variable is the location of the warehouse, x e R2, while the service locations are given by the vector yz E R2, i = 1,... ,m. One possible location criterion is to determine x so as to minimize the maximum distance from the warehouse to any location. This amounts to considering a minimization problem of the form min max ||x — y/||2, x i=l,...,m which is readily cast in SOCP form as follows: min t x,t s.t.: ||x-y,-||2 < t, i = An alternative location criterion, which is a good proxy for the average transportation cost, is the average distance from the warehouse to the facilities. This leads to the problem ^ m "r -Dl*-yill2, which can be cast as the SOCP 2 m min — ti SECOND-ORDER CONE AND ROBUST MODELS 361 Figure 10.3 shows the results of an example with m — 10 randomly chosen service locations, for both the min-max criterion and the min-average criterion. The optimal objective was 0.5310 for the max-distance case, and 0.3712 for the average-distance case. Location minimizing max distance Location minimizing average distance Figure 10.3 Warehouse position for min-max criterion (left) and min- average criterion (right). 10.2.J GPS localization with time synchronization We consider here a more realistic version of the planar trilateration problem discussed in Example 6.2. In that schematic problem, we wished to determine the 2D position coordinates of a point x E R2, using range (distance) measurements from three anchor points (radio beacons, satellites, etc.) whose geographic coordinates are exactly known. In navigation systems such as GPS (Global Positioning System), however, these distances are computed indirectly from "time- of-flight" measurements, as explained next. In simplified terms, each beacon (or satellite) transmits packets, each containing a time stamp with the precise time instant at which the packet left the transmitter; let tj denote the time at which the packet left transmitter i, as measured by the clock onboard satellite i. All satellites have extremely precise atomic clocks on board, hence all satellite clocks are perfectly synchronized on the Coordinated Universal Time (UTC). The user at point x has a GPS receiver that receives packets, and the receiver has a local clock on board. If a packet from satellite i is received at time tf (this time is as measured in the local receiver clock), then the receiver may compute the time of flight of the packet as the difference between tf and tj. Then, assuming the packet travels at the speed of light c, the receiver may convert the time-of-flight information into distance information. However, the GPS receiver typically has a cheap clock on board, which is not synchronized with the satellites' clocks. Therefore, for correct time-of-flight evaluation, one must 362 OPTIMIZATION MODELS convert the local clock time to UTC, i.e., [*f]uTC = t? + 5, where 5 is the offset between the UTC and local time (notice that 5 could be of the order of several seconds). The time of flight of the packet is therefore given by fi = [*f ]utc — tJ = tf — tJ + 6 = A i + 6, where A/ = tf — tj is the time difference reading of the receiver, and 5 is the (unknown) clock offset. The corresponding distance measurement is then di = cfi = cA; + cS. If m satellites are available, at known positions aj, i = 1,..., m, we may write m equations in the three unknowns (the two coordinates x\, %2 of x, and the sync parameter 5): ||x — 0z||2 — cAj + cS, i = 1,...,m. (10.18) If m = 4 satellites are available, by squaring each equation, and then taking the difference between the first three equations and the fourth, we obtain a system of three linear equations in the three unknowns: 2(a4-a1)Tx = d\ -dl+ \\ai\\l - ||aa|||, 2(fl4-a2)T* = d^ — 112 — 2 (fl4-fl3)Tx = dl-dl + \\a4\\l-\\a3\\l. That is 2(04 — ai)Tx + 2c2(A4 — Ai)J = c2(Al - A|) + ||a4||l — ||«i ||i^ 2(a4 — ci2)rx + 2c2(A4 — A2)J = c2(A| — A4) + ||a4 III — lla2 111' 2(a4 — a^)Tx + 2c2(A4 — A3)<5 = c2(A§ - A|) + ||a4||| - ||fl3||^ The solutions to the original system of nonlinear equations (10.18) are contained in the solution set of the above system of linear equations. A solution of this system of linear equations, which is a posteriori checked to also satisfy ||x — 04H2 = CA4 + cS, yields the position estimate x, as well as the clock synchronization parameter S. However, this approach needs m = 4 satellites. If only m — 3 satellites are available, one may still seek for a solution of the three nonlinear equations (10.18) (three equations in three unknowns), although the solution of this system of equations is not guaranteed to be unique. We can actually find a solution to (10.18) with m — 3 using convex optimization, as follows. Starting from secoNd-order cone and robust models 363 (10.18), we write a system of three equivalent equations, where the first two equations are the difference between the first two equations in (10.18) squared and the third equation squared. That is, system (10.18) is equivalent to 2(>3-a1)Tx + 2c2(A3-A1)<5 = c2(A2 — A2) + ||«3 111 — llfli 111' 2(fl3 — ai)T x + 2c2(A3 — A2)<5 = c2(A2 — A2) + ||«3||2 — ||«2||2/ 1!^ — «3II2 = cA3 + cS. A consistent solution can then be found by finding the minimum 5 such that the previous equations are satisfied. Such a minimization problem would not be convex, due to the last nonlinear equality constraint. However, we can relax this latter constraint to an inequality one, since we are guaranteed that, at the optimum, equality will actually hold (due to the fact that the relaxed problem has a single inequality constraint, see Remark 8.4). Therefore, a solution to system (10.18), with m = 3, can be found by solving the following SOCP min 5 s.t.: 2(a3 — «i)Tx + 2c2(A3 — Ai)(5 = c2(A^ - A§) + ||fl3||2 - ||fli||2, 2(a3 - a2)Jx + 2c2(A3 - A2)S = c2(A\ - A\) + ||«3||| - ||«2||^ ||*-«3||2 < cA3 + cS (notice that another solution to the same system of equations can be found by maximizing 5, instead of minimizing it). As numerical examples, we considered instances of m = 3 randomly positioned beacons and unknown point on the plane, obtaining the results shown in Figure 10.4. Sometimes, however, since the solution of the system is not unique, the solution obtained by minimizing 5 does not correspond to the correct sync and point position (see Figure 10.4(b)). In such cases, the correct solution could actually be found by maximizing 8. The intuitive reason for this fact is that if we parameterize all solutions x satisfying the first two linear equations, they will be a linear function of 5. Then substituting these x into the equation \\x — 03= (CA3 + cS)2 we would obtain a quadratic equation in 5, which has at most two roots, indeed corresponding to the maximum and minimum values of 8 subject to the constraints in the above SOCP. Localization and synchronization can then be performed by solving for both min 8 and max 5, under the above constraints, and then deciding which solution is the correct one using previous knowledge (such as approximate prior knowledge of position, additional bearing measurements, WiFi signals, etc.). 364 OPTIMIZATION MODELS 10.2.8 Separation of ellipsoids We next consider the problem of finding a hyperplane that separates two ellipsoids in Rn, or determining that no such hyperplane exists. We start first by seeking a condition under which a sphere B of unit radius centered at zero, B = {x : ||x||2 < 1}, is entirely contained in the half-space H = jx : aTx < i? j , where a E Rn and b E R are given. The containment condition requires that the inequality defining the half-space remains satisfied for all points in the sphere, that is Figure 10.4 (a) Correct sync and localization; (b) wrong sync (light gray circles): p is the true point position, x is the estimated one; in this case, the correct sync (black circles) can be found by maximizing 6, instead of minimizing it. B CH <^> aTx < i?, Vx : 11 at112 < 1. The latter condition is in turn equivalent to b > max aTx. x: ||x||2<l The maximum of the inner product on the right-hand side of this inequality is attained for a unit-norm x aligned to a (see Section 2.2.24), that is for x — a/1|a||2, for which the condition simplifies to Ben b>\\a\\2. Now consider the case with an ellipsoid instead of a sphere. An ellipsoid E can be described as an affine transformation of the unit sphere: E = {x — x + Ru: ||u||2<l}, SECOND-ORDER CONE AND ROBUST MODELS 365 where f G R" is the center, and R is a matrix that determines the shape of the ellipsoid. The containment condition £ C H is thus equivalent to b > maxu: ||w||2<i aT (x + Ru). Using the same argument as before, we easily obtain that £ cTi b > aTx + ||RTfl||2. Note that the condition that £ should be contained in the complementary half-space 7? = {x : aTx > b} is readily obtained by changing a, b into —a, —b, that is: ec n o b < aTX- j|RTfl||2- Next, consider two ellipsoids £i = {xj + RtUi : ||w;||2 < 1}, i = 1,2, where x; G lRn are the centers, and R;, i = 1,2, are the shape matrices of the ellipsoids. The hyperplane {x : aTx = b} separates (possibly not strictly) the two ellipsoids if and only if £\ E H and £2 £ H, or vice versa. Thus, b\ = aTx 1 -f- ||Rj"fl||2 < b < aTX2 — ||Rjfl||2 = ^2- Thus, the existence of a separating hyperplane is equivalent to the existence of a E !Rn such that aT(x2 - Xi) > ||Ri"fl||2 + 11^2 flll2- Exploiting the homogeneity with respect to a (i.e., the fact that a can be multiplied by any nonzero scalar, without modifying the condition), we can always normalize this condition so that aT (f 2 — *1) — 1- This results in an SOCP in variables a £ Rn, t £ R: p* = min t r a,t s.t.: ||Rj"fl||2 + ||Rjfl||2 < t, aT (x 2 — xi) = 1. The ellipsoids are separable if and only if p* < 1. In this case, an appropriate value of b is b — . 10.2.9 Minimum surface area problems Consider a surface in R3 that is described by a function from the square C = [0, 1] x [0, 1] into R. The corresponding surface area is defined via the integral A(f) = Jc^l + \\Vf(x,y)\\ldxdy. 366 OPTIMIZATION MODELS The minimum surface area problem is to find the function / that minimizes the area A(f), subject to boundary values on the contour of C. To be specific, we will assume that we are given values of / on the lower and upper sides of the square, that is f(x,0) = l(x), f(x,l) = u(x), x e [0,1], where / : ]R —»]R and u : ]R —> ]R are two given functions. The above is an infinite dimensional problem, in the sense that the unknown is a function, not a finite-dimensional vector. To find an approximate solution, we here resort to discretization. That is, we discretize the square C with a grid of points ((/ — \)h, (j — l)h), 1 < i,j < K + 1, where K is an integer, and where h = 1/K is the (uniform) spacing of the grid. We represent the variable of our problem, /, as a matrix F in ]rK+1/K+i with elements = f((i — \)h, (j — l)h). Similarly, we represent the boundary conditions as vectors L and IF To approximate the gradient, we start from the first-order expansion of a function of two variables, valid for some small increment h: We thus obtain that the gradient of / at a grid point can be approximated as Gi,y = V/((«- l)h,(j-l)h)^ K(Fj+1/j - Fy) K(Fy+1-Jy) 1 < i,j < K. Now, approximating the integral with a summation over all grid points, we obtain a discretized version of the problem in SOCP format, as follows: [ K(Fi+ij - Fj j) Fi,i = Lj = l((i — l)h), FitK+i = Ui = u((i-l)h), 1 < i < K + 1, 1 < i < K + 1. 10.2.10 Total variation image restoration A discretization technique similar to the one introduced in the previous section can also be used in the context of digital image restoration. Digital images always contain noise, and the objective of image restoration is to filter out the noise. Early methods involved a least-squares approach, but the solutions exhibited the "ringing" SECOND-ORDER CONE AND ROBUST MODELS 367 phenomenon, with spurious oscillations near edges in the restored image. To address this phenomenon, one may add to the objective of the least-squares problem a term which penalizes the variations in the image. We may represent a given (noisy) image as a function from the square C = [0, 1] x [0, 1] into R. We define the image restoration problem as minimizing, over functions / : C -> R, the objective Jc\\Vf(x)\\2dx + \Jc(f(x)-f(x)y where the function / is our restored image. The first term penalizes functions which exhibit large variations, while the second term accounts for the distance from the estimate to the noisy image /. This is an infinite dimensional problem, in the sense that the variable is a function, not a finite-dimensional vector. Therefore, we tackle it approximately via discretization. We can discretize the square C with a square grid, as in Section 10.2.9: , 1 ^ i, j < K -f-1. We represent the data of our problem, /, as a matrix F E ]R^+1^+1/ with elements F;y = /(x/y). Similarly, we represent the variable / of our problem with a (K + 1) x (K + 1) matrix F containing the values of / at the grid points X/y. We then approximate the gradient V/(x) as in (10.19): K(FM/j - Fjrj) K(Fi,j+1 - Pi,j) , 1 < /,/ < K. The discretized version of our problem is thus written in SOCP format as + A {Fij-Fuf As an example, Figure 10.3(a) shows an original gray-scale image of 256 x 256 pixels. Each pixel has a value from 0 (black) to 255 (white), corresponding to its luminance level. Figure 10.3(b) is obtained by adding Gaussian noise with standard deviation a = 12 to each pixel of the original image. This noisy image constitutes the F matrix. Given F, we aim to restore (i.e., de-noise) the image by solving problem (10.20) for the F variable. Notice that this problem can be quite 'Targe scale:" in our small example with a 256 x 256 image we have 65,536 variables in the optimization problem. De-noising a 368 OPTIMIZATION MODELS 1,024 x 1,024 image would already involve a number of variable of the order of a million. For this reason, specialized and fast convex optimization algorithms should be used for this type of problem. In this example, we used the TFOCS package (see cvxr. com/tfocs/) to solve the problem numerically. In particular, for the choice of the parameter A — 8, we obtained the restored image shown in Figure 10.3(c). (b) Noise (cr — 12) added (c) TV recovered (a) Original 10.3 Robust optimization models In this section we introduce the reader to models and techniques that permit us to take into account the presence of uncertainty in the data describing an optimization problem, and to obtain solutions that are robust against this uncertainty. Section 10.3.1 introduces the main ideas, while Section 10.3.2 illustrates how to deal with uncertain data in linear programs. Section 10.3.3 discusses a robust least-squares model, and Section 10.3.4 presents an approximate approach for obtaining robust solutions to general uncertain optimization problems. 10.3.1 Robust LP Consider a linear program in standard inequality form: rnin cT x (10.21) s.t.: ajx<bif i = l,...,m. In many practical cases, the data of the linear program (contained in the vectors c, b, and a[, i = 1,..., m) are not known precisely. For example, the coefficient matrix A e ]Rm'n (whose rows are aj, i = 1,..., m) may be given by a known nominal matrix A, plus a perturbation A, which is only known to be bounded in norm as Figure 10.5 (a) Original image; (b) image with noise added; (c) de-noised image obtained via solution of (10.20), with A = 8. SECOND-ORDER CONE AND ROBUST MODELS 369 IHIf < p. A robust LP in this case seeks a solution that minimizes the objective, while guaranteeing constraint satisfaction for all possible values of the uncertainty term, that is min cT x s.t.: (A + A)x<b, VA : ||A||f < p. We anticipate that this robust LP is equivalent to the following SOCP (this follows as a particular case of the ellipsoidal uncertainty model discussed later in Section 10.3.2.3) min cTx s.t.: ajx + p\\x\\2 < bj, i = Solving the LP without taking into account the uncertainty in the problem's data might make the supposedly "optimal" solution become suboptimal, or even infeasible. This idea is discussed and developed in the next example and in the following sections. Example 10.5 (Uncertainty in the drug production problem) Let us reconsider the drug production problem discussed in Example 9.7, assuming now that a small variation may occur in some data of the problem, due to uncertainty. Specifically, we assume that the content of the active agent in the raw materials is subject to variation, with a margin of relative error of 0.5% for raw material I, and of 2% for raw material II. The possible values of the coefficients are shown as intervals in Table 10.1. agent content (g/kg) Raw I Raw II Let us now check the impact of this uncertainty on the optimal solution previously computed in Example 9.7, when uncertainty was ignored: p* = -8819.658, 4awI - 0, 4awII - 438.789, x^ = 17.552, x*Druf,u = 0. The uncertainty affects the constraint on the balance of the active agent. In the nominal problem, this constraint was 0-01xRawi -{- 0.02xRawn 0.05xorUgi 0.600x£)rUgjj ^ 0. At optimum, this constraint is active. Therefore, even a tiny error in the first and second coefficients makes the constraint become invalid, i.e., the optimal solution computed on the nominal problem (which ignored uncertainty) may result in being infeasible on the actual data. An adjustment policy. To remedy the problem, there is a simple solution: adjust the levels of drug production so as to satisfy the balance Table 10.1 Uncertainty on agent contents in raw materials. 370 OPTIMIZATION MODELS constraint. Let us adjust the production of Drug I, since that of Drug II is zero according to the original plan. Clearly, if the actual content of active ingredient increases, the balance constraint will remain valid. In such a case, there is nothing to adjust, and the original production plan is still valid (feasible) on the actual uncertain problem and nominally optimal. The balance constraint does become invalid only if "nature is against us," that is when the level of active agent is less than originally thought. Since the original optimal production plan recommends us to purchase only the raw material II, a change in the corresponding coefficient (nominally set at 0.02) to the lesser value 0.0196 results, if we are to adopt the above simple "adjustment policy," in a variation in the amount of production of Drug I from 17,532 packs (the nominal value) to the (2% less) value of 17,201 packs. Accordingly, the cost function will decrease from the nominal value of 8,820 to the 21% (!) smaller value of 6,929. This shows that, for this problem, even a tiny variation in a single coefficient can result in a substantial decrease in the profit predicted by the model. If we are to believe that the uncertain coefficients are actually random, and take their extreme values, say, with 1/2 probability each, then the expected value of the cost (still with the above adjustment policy) will be also random, with expected value (8,820+6,929)72 = 7,874. Thus, the expected loss due to random uncertainty is still high, at 11%. Uncertainty can originate also from implementation errors. Often, the optimal variable x* corresponds to some action or implementation process, which may be fraught with errors. For example, in a manufacturing process, the planned production amounts are never exactly implemented due to, say, production plant failures or fixed sizing of the production lots. Implementation errors may result in catastrophic behavior, in the sense that when the optimal variable x* is replaced by its error-affected version, the constraints may become violated, or the cost function may become much worse (higher) than thought. 10.3.2.1 Robust optimization: main idea. In robust optimization, we overcome the mentioned issues by taking into account the fact that the data in the LP may be imprecisely known right since the modeling phase. This will in turn provide us with solutions that are guaranteed to "work" (i.e., to remain feasible), no matter what the uncertainty does. To this end, we postulate that a model of the uncertainty is known. In its simplest version, this model assumes that the individual rows fl/s are known to belong to given sets U( C Rn, i = 1,..., m. We can think of those sets as sets of confidence for the coefficients of the linear program. The main idea in robust LP is to impose that each constraint be satisfied in a worst-case sense, that is each constraint should hold for all possible values of 0; £ 17*. SECOND-ORDER CONE AND ROBUST MODELS 371 The robust counterpart to the original LP is thus defined as follows: min cTx s.t.: ajx < b{, Vfl/ E 77/, i = The interpretation of the above problem is that it attempts to find a solution that is feasible for any choice of the coefficient vectors a\ within their respective sets of confidence IT*. The robust counterpart LP is always convex, independent of the shape of the sets of confidence 17/. Indeed, for each i, the set Xi = {x : ajx < b[, Vfl/ E 17/} is representable as the intersection of (possibly infinitely many) convex sets (namely, half-spaces) = fl ix: aJx^ b‘}' hence X\ is convex. Therefore, robust LPs are still convex optimization problems. However, depending on the type and structure of the uncertainty sets 17/, and due to the fact that these sets may contain a dense infinity of elements, it may be difficult to express the robust program in some usable explicit form. There are, however, notable exceptions for which this is possible, and for these classes of uncertainty sets 17/ we shall say that the robust LP counterpart is "computationally tractable." We shall next detail three such tractable cases, which are important in applications. To this end, we notice first that the robust half-space constraint aT x < b, \/a e U (which is, as we previously remarked, convex irrespective of the set 17) can be expressed in terms of an inner optimization problem, as follows: max aT x < b. (10.22) What makes the robust LP "tractable" is indeed the possibility of solving the inner problem (10.22) explicitly, as detailed in the next section. 10.3.2 Tractable robust LP counterparts We here discuss three cases for which the robust counterpart of an LP with uncertain data leads to a tractable convex optimization problem, namely the cases of (i) discrete uncertainty, (ii) box (or interval) uncertainty sets, and (iii) ellipsoidal uncertainty sets. 372 OPTIMIZATION MODELS 10.3.2.1 Discrete uncertainty. In the discrete uncertainty model, the uncertainty on each coefficient vector a* is described by a finite set of points: Ut = {a^ «<*>}, where each vector af^ e ]Rn, k = 1,..., K*, corresponds to a particular "scenario," or possible outcome of the uncertainty. The robust halfspace constraint aj x <b, Mai £ Ui can simply be expressed as a set of Kj affine inequalities: af^Jx <b, k = 1,..., Ki. (10.23) Note that the discrete uncertainty model actually enforces more than feasibility at points af \ In fact, the constraints (10.23) imply that also holds for any set of non-negative weights Ai,..., A^ summing to one. Therefore, satisfaction of the discrete inequalities (10.23) implies satisfaction of the inequality a^< b for all a\ belonging to the convex hull of the set Ui. With discrete uncertainty, the robust counterpart to the original LP (10.21) becomes min cT x s.t.: af^Tx < b, k — 1,..., Kp, i — 1,..., m. Thus, the discrete-robust counterpart of an LP is still an LP, with a total of m(K\ + • • • + Km) constraints, where Kj is the number of elements in the discrete set Ui. The discrete uncertainty model is attractive for its simplicity, since the robust counterpart retains the same structure (LP) of the original problem. However, such a model may become impractical to solve in case of a very large number of discrete points. 10.3.2.2 Box uncertainty. The box uncertainty model assumes that every coefficient vector aj is only known to lie in a "box," or, more generally, a hyper-rectangle in Kn. In its simplest case, this uncertainty model has the following form: Ui = {dj : I|fl, - 0,-lloc < Pi}, (10.24) where pi > 0 is a measure of the size of the uncertainty, and 0; represents the nominal value of the coefficient vector. The set 17/ is a "box" SECOND-ORDER CONE AND ROBUST MODELS 373 (hypercube) of half-side length pi around the center dj. The condition ai 6 U{ can be equivalently expressed as a{ = di+piSi, \\6j||oo < I, (10.23) where Si represents the uncertainty around the nominal value d{. Note that the robust counterpart to the robust half-space constraint aj x < bi, Mai € Uj can be handled as a discrete model, by considering as discrete uncertainty points the vectors af^ — dj -f pjVk, where vk, k = 1,... ,2n, represent the vertices of the unit box (that is, vectors having ±1 as components). Indeed, enforcing the constraint af^Tx < bj at the vertices of the hypercube Uj implies that b holds for every aj in the convex hull of the vertices, that is in all Uj. This approach, however, may not be practical, since the number of vertices (hence the number of constraints in the scenario counterpart of the LP) grows geometrically with the dimension n. There is actually a much more effective reformulation of the robust counterpart in the case of LP with box uncertainty, which can be obtained by examining the inner optimization problem in (10.22). We have1 b > max aj x = max dj x + pjSj x *i£Ui Mco<l = dj x + pj max &Jx M~<i = ci] x + pi\\x\\i. Therefore, the robust counterpart of the original LP (10.21) under box uncertainty (10.24) can be written as an optimization problem with polyhedral constraints: min cTx s.t.: djx + pj\\x\\i < bj i = . This problem can in turn be expressed in standard LP form by introducing a slack vector u E W1: min cTx s.t.: dj x + piY, uj ^ bj, i = l,...,m/ — Uj < Xj < Uj, j = 1,..., n. 1 See Section 2.2.24. Notice that Eq. (10.23) implies that each entry of aj is bounded in an interval centered in the corresponding entry of dj and of half-width 374 OPTIMIZATION MODELS equal to pj. Thus, according to this model, all entries in the vector a\ have the same uncertainty radius pj. The model can be easily generalized to include the case where each entry of aj is bounded in an interval of possibly different length, by assuming that &i = Q-i -f" Pi © &i/ j I <5? 11 oo ^ 1/ where now pj E W1 is a vector containing the half-lengths of the uncertainty intervals in each entry of Uj, and © denotes the Hadamard (entry-by-entry) product of two vectors. The reader may verify as an exercise that the robust counterpart of the original LP, in this setting, is given by the following LP: min cTx s.t.: OjX + PjU<bj, i = l,...,m — Uj < Xj < Uj, j — 1,..., n. An example of application of robust linear programs with interval uncertainty, in the context of inventory control, is presented in Section 16.5. 10.3.2.3 Ellipsoidal uncertainty. In the ellipsoidal uncertainty model each vector aj is contained in an ellipsoid Uj of the form Uj = {a( = at + RjSj : ||^||2 < 1} , (10.26) where Rj E R”'P is a matrix which describes the "shape" of the ellipsoid around its center dj. If Rj = ptI for some pj > 0, then this set is simply a hypersphere of radius pj centered at dj and we refer to this special case as the spherical uncertainty model. Ellipsoidal uncertainty models are useful to "couple" uncertainties across different components of the coefficient vector aj. This is in contrast with the previous "box" model, which allows uncertainties to take their largest values independently of each other (the box model is also referred to as the "independent intervals" model). With an ellipsoidal model, the robust half-space constraint becomes2 b > max ajx = max djx + 8jRjx ai^Ui Pill 2<1 = fl,Ti+ max j7(Rjx) The robust counterpart of the original LP (10.21) under ellipsoidal uncertainty (10.26) is thus the following SOCP: min cTx s.t.: djx + \\Rjx||2 < bj i = l,...,m. 2 See Section 2.2.24 SECOND-ORDER CONE AND ROBUST MODELS 375 10.3.3 Robust least squares Let us start from a standard LS problem: mm \\Ax-y\\2, where A G Rm,n, y G Rm. Now assume that the matrix A is imperfectly known. A simple way to model the uncertainty in the data matrix A is to assume that A is only known to be within a certain "distance" (in matrix space) to a given "nominal" matrix A. Precisely, let us assume that where || • || denotes the largest singular value norm, and p > 0 measures the size of the uncertainty. Equivalently, we may say that A - A + A, where A is the uncertainty, which satisfies ||A|| < p. We now address the robust least-squares problem: mm max ||(A + A)x - y\\2. x l|A|l<P The interpretation of this problem is that we aim at minimizing (with respect to x) the worst-case value (with respect to the uncertainty A) of the residual norm. For fixed x, and using the fact that the Euclidean norm is convex, we have that II(A + A)x — y||2 < ||ix-y||2 + ||Ax||2. By definition of the largest singular value norm, and given our bound on the size of the uncertainty, we have ||Ax||2 < ||A|| • ||*||2 <p||x|| Thus, we have a bound on the objective value of the robust LS problem: max \\(A + A)x-y||2 < \\Ax - y||2 + p||x||2. It turns out that the upper bound is actually attained by some choice of the matrix A, specifically for the following dyadic matrix: A = o er in- (Ax~y)xT. \\Ax — y||2 • ||x||2 Hence, the robust LS problem is equivalent to mm ||Ax-y||2 + p||x||2. 376 OPTIMIZATION MODELS This is a regularized LS problem, which can be cast in SOCP format as follows: min u -f pv, s.t.:w > \\Ax-y\\2f v > ||*||2. Further linear equality or inequality constraints can also be easily added to the problem, while retaining its SOCP structure. 10.3.4 The scenario approach to robustness Generic convex optimization problems, subject to generic uncertainty, do not admit exact tractable robust reformulations. When an uncertain problem does not fall into one of the categories discussed in the previous sections (or into some other tractable class, not discussed in this book), one can resort to a general approach to robustness that has the advantage of being completely general, at the price of being approximate, in a sense to be specified next. Consider a generic convex optimization problem, subject to uncertainty: min f0(x) s.t.: fi(x,6) < 0, i = where f, i = 0,... ,m, are convex in x for each value of S E U, while they can be arbitrary functions of S. Also, U is a generic uncertainty set. Tlie robust counterpart of this problem, min f0(x) (10.27) s.t.: fi(x,5) < 0, i = 1,...,ra, \/5eU, does not admit a tractable reformulation, in general. Let us then assume that the uncertainty S is random, with some probability distribution. The actual distribution of the uncertainty need not be known, all we need is a number N of independent and identically distributed (iid) samples 6^\... ,5^ of the uncertainty, generated according to this distribution. Using these samples of the uncertainty, called scenarios, we act "as if" the uncertainty was discrete, and consider a problem (the so-called scenario problem) that is robust only with respect to the discrete sampled scenarios: min fo(x) (10.28) s.t.: fi(x,S^) < 0, z — 1,..., m, ; = 1,...,N. Clearly, the solution obtained from the scenario problem is not guar¬ anteed to be robust in the sense of problem (10.27), since it cannot be guaranteed, in general, to be feasible for all values of the uncertainty 5 E U. However, a remarkable fact is that, if N is chosen to be SECONP-ORDER CONE AND ROBUST MODELS 377 suitably large, then the scenario solution can be proved to be probabilistically robust, up to a pre-specified level of probability cl E (0, 1). This means that we can fix a desired level of probabilistic robustness cl, and then find a value of N such that the ensuing scenario problem will provide an optimal solution x* with the property3 that Prob{<!> : fi(x*,5) < 0, i = 1,... ,m) > oc. (10.29) That is, the scenario solution is indeed robustly feasible, up to a probability level a. The tradeoff between the desired level cl of robustness and the required number N of required scenarios is captured by the following simple formula4 N > - {n +10). (10.30) 10.4 Exercises Exercise 10.1 (Squaring SOCP constraints) When considering a second-order cone constraint, a temptation might be to square it in order to obtain a classical convex quadratic constraint. This might not always work. Consider the constraint X\ + 2x2 > \\x\\2f and its squared counterpart: (*i +2x2)2 > \\x\\l. Is the set defined by the second inequality convex? Discuss. Exercise 10.2 (A complicated function) We would like to minimize the function / : R3 —> R, with values: /(x) = max ^Xi + x2 — min (min(xi + 2, x2 + 2xi — 5), X3 — 6), (xi -x3)2 + 2x^\ 1 - XI J ' with the constraint ||x||oo < 1. Explain precisely how to formulate the problem as an SOCP in standard form. Exercise 10.3 (A minimum time path problem) Consider Figure 10.6, in which a point in 0 must move to reach point p = [4 2.5]T, crossing three layers of fluids having different densities. In the first layer, the point can travel at a maximum speed V\, while in the second layer and third layers it may travel at lower maximum 3 To be precise, since x* is itself random, a priori, the probabilistic robustness property (10.29) itself holds only up to a certain level of confidence. However, this confidence level can be made so high that (10.29) can be considered a certain event, for any practical purpose. 4 This formula is a simplified one, obtained under the assumptions that problem (10.28) is feasible and admits a unique optimal solution. The constant 10 appearing in the formula is related to the (hidden) level of confidence of statement (10.29), and refers to a confidence level of about 1 — 1.7 x 10~5. If higher confidence is required, then this constant grows just a little. For instance, for confidence level 1 — 7.6 x 10~10, the constant becomes 20. Details and proofs of results related to the scenario approach can be found in G. Calafiore, Random convex programs, SIAM J. Optimization, 2010. 378 OPTIMIZATION MODELS Figure 10.6 A minimum-time path problem. speeds, respectively Vi — Vi/rj2, and v3 = vi/rjs, with t]2,7/3 > 1. Assume v\ = 1, 7/2 = 1.5, 7/3 = 1.2. You have to determine what is the fastest (i.e., minimum time) path from 0 to p. Hint: you may use path leg lengths £\, £2/ ^3 as variables, and observe that, in this problem, equality constraints of the type £( = "something" can be equivalently substituted by inequality constraints k > "something" (explain why). Exercise 10.4 (fc-ellipses) Consider k points X\,... ,**■ in R2. For a given positive number d, we define the k-ellipse with radius d as the set of points x £ R2 such that the sum of the distances from x to the points X{ is equal to d. 1. How do fc-ellipses look when k — 1 or k = 2? Hint: for k — 2, show that you can assume X\ — —X2 — p, ||p||2 = 1/ and describe the set in an orthonormal basis of Rn such that p is the first unit vector. 2. Express the problem of computing the geometric median, which is the point that minimizes the sum of the distances to the points i — 1,..., k, as an SOCP in standard form. 3. Write a code with input X = (xi,... ,x^) £ R2^ and d > 0 that plots the corresponding fc-ellipse. Exercise 10.5 (A portfolio design problem) The returns on n = 4 assets are described by a Gaussian (normal) random vector r £ Rn, having the following expected value f and covariance matrix E: ' 0.12 ' 0 ' , £ = . °-03 . 0 • 0 _ SECOND-ORDER CONE AND ROBUST MODELS 379 The last (fourth) asset corresponds to a risk-free investment. An investor wants to design a portfolio mix with weights x £ Rn (each weight Xj is non-negative, and the sum of the weights is one) so as to obtain the best possible expected return fT x, while guaranteeing that: (i) no single asset weights more than 40%; (ii) the risk-free assets should not weight more than 20%; (iii) no asset should weight less than 5%; (iv) the probability of experiencing a return lower than q — —3% should be no larger than e — 10~4. What is the maximal achievable expected return, under the above constraints? Exercise 10.6 (A trust-region problem) A version of the so-called (convex) trust-region problem amounts to finding the minimum of a convex quadratic function over a Euclidean ball, that is min -xTHx + cTx + d x 2 s.t.: xTx < r2, where H y 0, and r > 0 is the given radius of the ball. Prove that the optimal solution to this problem is unique and is given by x(A*) = — (H + A*7)_1c, where A* = 0 if ||H-1c||2 < r, or otherwise A* is the unique value such that ||(H +A*/)_1c||2 = r. Exercise 10.7 (Univariate square-root LASSO) Consider the problem min fix) = \\ax-y\\2 +h\x\, where A > 0, a G Rm, y G IRm are given, and x G R is a scalar variable. This is a univariate version of the square-root LASSO problem introduced in Example 8.23. Assume that y 7^ 0 and a ^ 0, (since otherwise the optimal solution of this problem is simply x — 0). Prove that the optimal solution of this problem is if |flTy| < A||y||2, Sgn(%)^/5|p52 i, |„Ty| > a№. = Ml' Exercise 10.8 (Proving convexity via duality) Consider the function / : R++ —> R/ with values f(x) = 2 max t — ^ yjX[ -f t2. 1 i=1 380 OPTIMIZATION MODELS 1. Explain why the problem that defines / is a convex optimization problem (in the variable t). Formulate it as an SOCP. 2. Is / convex? 3. Show that the function g : R++ R, with values n 1 1 = E is convex. Hint: for a given y £ IR++/ show that g(y) = max -xTy-f{x). Make sure to justify any use of strong duality. Exercise 10.9 (Robust sphere enclosure) Let B\, i — 1,..., m, be m given Euclidean balls in Rn, with centers x\ and radii pi > 0. We wish to find a ball B of minimum radius that contains all the B/, i = 1,..., m. Explain how to cast this problem into a known convex optimization format. Semidefinite models Semidefinite programming (SDP) is an optimization model with vector or matrix variables, where the objective to be minimized is linear, and the constraints involve affine combinations of symmetric matrices that are required to be positive (or negative) semidefinite. SDPs include as special cases LPs, QCQPs, and SOCPs; they are perhaps the most powerful class of convex optimization models with specific structure, for which efficient and well-developed numerical solution algorithms are currently available. SDPs arise in a wide range of applications. For example, they can be used as sophisticated relaxations (approximations) of non- convex problems, such as Boolean problems with quadratic objective, or rank-constrained problems. They are useful in the context of stability analysis or, more generally, in control design for linear dynamical systems. They are also used, to mention just a few, in geometric problems, in system identification, in algebraic geometry, and in matrix completion problems under sparsity constraints. 11.1 From linear to conic models In the late 1980s, researchers were trying to generalize linear programming. At that time, LP was known to be solvable efficiently, in time roughly cubic in the number of variables or constraints. The new interior-point methods for LP had just become available, and their excellent practical performance matched the theoretical complexity bounds. It seemed, however, that, beyond linear problems, one encountered a wall. Except for a few special problem classes, such as QP, it appeared that as soon as a problem contained nonlinear terms, one could no longer hope to recover the nice practical and theoretical efficiency found in LP In previous decades it had been 382 OPTIMIZATION MODELS noted that convex problems could be efficiently solved in theory (under some mild conditions), but the known numerical methods were extremely slow in practice. It seemed, however, that to harness the power of interior-point methods and apply them to problems other than LP, one had to look closely at convex optimization. A breakthrough occurred by rethinking the role of the set of non-negative vectors, which is the basic object in LP. In the standard conic form, a generic LP can be written as min : cTx, s.t.: Ax = b, x £ R+, X ^ where R+ is the set of non-negative vectors in Rn, i.e., the positive orthant. Researchers asked: what are the basic characteristics of R!j_ that make interior-point methods work so well? In other words, are there other sets that could be used in place of R+, and still allow efficient methods? It turns out that the key characteristic of interest in R+ is that it is a convex cone (i.e., a convex set that is invariant under positive scaling of its elements), and that many of the desirable features of LP can be extended to problems involving as constraint sets some specific convex cones, other than R^_. This idea yields a broad class of convex optimization models of the form min : cTx, s.t.: Ax = b, x £ /C, where /C is a convex cone. For example, when /C is the second-order cone (or any combination of second-order cones, arising when, say, some variables are in a cone, others in another, and all are coupled by affine equalities) then the above model specializes to an SOCP model. The SDP model class is instead obtained when x is a matrix variable, /C is the cone of positive semidefinite matrices, and we minimize a linear function of x under affine equality constraints on the entries of x. Efficient interior-point solution methods can indeed be extended from LP to SOCP and SDP, although the numerical complexity of SOCP, and especially of SDP models, remains higher than that of the LP model. The practical consequence of this is that the scale of SOCP and SDP problems that can be solved numerically on a standard workstation remains smaller than that of LP models. Current technology permits us to solve generic LPs with a number of variables and constraints on the order of millions, and generic SOCP /SDP models two or three orders of magnitude smaller. ii.2 Linear matrix inequalities 11.2.1 The cone of positive semidefinite matrices We recall from Section 4.4 that an n xn symmetric matrix F is positive semidefinite (PSD, denoted by F y 0) if and only if all of its eigenvalues are non-negative. An alternative and equivalent condition for F to be PSD is that the associated quadratic form is non-negative: zTFz > 0, Vz G R". The set S” of PSD matrices is a convex cone. Indeed, S” is a cone, since F G S” implies that ocF G S”, for any oc > 0. Moreover, S” is convex, since for any Fi,Fj G S+ and 7 G [0, 1], we have that zT(7fi + (1 - 7)h)z = yzTFlZ + (1 - 7)2TF2z > 0, Vz € R". Example 11.1 (PSD matrices) • The matrix F = is symmetric, and its eigenvalues Aa - 4.8506, A2 - 2.1168, A3 - 0.3477. All the eigenvalues are non-negative, hence F is PSD (actually, all eigenvalues are strictly positive in this case, hence FJ-0). • For any vector v G Rn, the dyad F = vvT is PSD, since the associated quadratic form is a perfect square: q(x) — xT(vvT)x = (vTx)2 > 0. • More generally, for any, possibly rectangular, matrix A, the matrices ATA and AAT are both PSD. The converse is also true: any PSD matrix F can be factored as F = AT A, for some appropriate matrix A. 11.2.2 Linear matrix inequalities 11.2.2.1 Definition. A linear matrix inequality (LMI) in standard form is a constraint on a vector of variables x G Rm of the form F(x) = F0 + V0, (11.1) where the n x n coefficient matrices Fq,.. .,Fm are symmetric. Sometimes, these matrices are not explicitly defined. That is, if F : Rm -4 Sn is an affine map that takes its values in the set of symmetric matrices of order n, then F(x) ^0 is an LMI. 384 OPTIMIZATION MODELS Example 11.2 (Representation of an LMI in standard form) Quite often linear matrix inequality constraints are imposed on matrix variables, rather than on vector variables. The following one is a typical example. For a given square matrix A E Rn,n and positive definite matrix P E S” +, the so- called Lyapunov inequality -I-ATP-PA tO (11.2) is an LMI in the matrix variable P. To express this LMI in the standard format (11.1) involving a vector of variables x, one may define a suitable one-to-one mapping between the symmetric matrix variable P and a vector x E Rm containing the m = n(n + l)/2 free entries of P. Usually, we take x as a vectorization of matrix P; for instance, x contains first the diagonal elements of P, then the elements in the first upper diagonal, etc. For example, if n = 3, then m = 6, and X(P) = V11 V22 The coefficient matrix Fq can then be easily obtained by setting x — 0 and plugging P(x) into (11.2), obtaining Fq = —I. The coefficient matrices Fif i — 1,..., m, can be obtained by setting x* = 1, Xj = 0, ; 7^ i, plugging P(x) into (11.2), and then subtracting Fq from the result: F5 = 2 an a12 0i3 Fi = — ^12 0 0 a13 0 0 F3 = — 0 0 032 «31 ^032 033 + 022 «21 «22 + «33 2023 0 02i 0 2021 0n + 022 «23 2«31 «32 «11 + «22 «23 2012 «13 «13 0 «32 «11 + «33 0 0i2 «11 + 033 012 The operation of rewriting a generic LMI into the standard format (11.1) is elementary conceptually, although it can be very tedious in practice; usually, it is done automatically and internally by parsers/solvers for convex optimization, such as CVX or Yalmip, so that it remains completely transparent to the user, who can express the problem in the most natural format. SEMIDEFINITE MODELS 385 11.2.2.2 Geometry and convexity of LMI sets. Let us denote by X the set of points x G Rm that satisfy an LMI: XjFj y oj . (11.3) The set X is convex. To verify this fact, recall that F(x) >r 0 if and only if zTF(x)z > 0, for all zgR”. Since z TF(x)z = /o(z) + L Xifi(z), i=1 where f(z) = zTF(z, i = 0,... ,ra, we obtain that the points x such that F(x) y 0 belong to the intersection of an infinite number of half-spaces Fiz = {x e Rm : ajx + bz > 0}, (114) az = L/i(z) • • • fm(z)], h = /0(z), each parameterized by z G Rn, which proves that X is a convex set; see Figure 11.1. ^ Figure n.i For each z E Kn, the LMI v ^ feasible set X belongs to a half-space \ ^ ^ _ Hz defined in (11.4). For some specific vectors 2 G 1Rn, it may happen that the boundary of the half-space %z is a supporting hyperplane for the feasible set X, see Section 8.1.3. Indeed, a necessary condition (but not sufficient, in general) for the hyperplane {x G ]Rn : ajx + bz = 0} to have a point Xq in common with the set X is that zTF(xo)z = 0/ that is F(x0) is singular, and z is an eigenvector associated with the null eigenvalue of F(x0). Therefore, a necessary condition for (az,bz) to define a supporting hyperplane for X is that z belongs to the nullspace of F(x) for some x G Rm. 386 OPTIMIZATION MODELS Example 11.3 Figure 11.2 shows the set X of points x £ R2 that satisfy the LMI F(x) = x1F1 + x2F2 -1^0, (11.5) Figure 11.3 shows some of the hyperplanes aj x + bz = 0, obtained for a few randomly chosen directions z E R5. 11.2.2.3 The conic standard form. In an alternative form, known as the conic standard form, an LMI in the symmetric matrix variable X can be expressed as the intersection of the positive semidefinite cone with an affine set: X € A, X >r 0, (11.6) where A is an affine set. The set S = S+ n A is called a spectrahedron. The standard forms (11.1) and (11.6) are equivalent, in the sense that one can always transform one into the other (at the expense possibly of adding new variables and constraints). Therefore, the feasible set X of a standard-form LMI is a spectrahedron. Figure 11.4 shows an example of a 3D plot, in the (x\,X2,x$) space, of a spectrahedron corresponding to the combined LMIs Ph 0, ATP + PAdiO, for , A = _ *3 *2 _ Geometrically, the boundary of the LMI set X in (11.3) is defined by a multivariate polynomial surface in the x variable. Indeed, it is a known fact that a symmetric matrix F(x) is PSD if and only if the sums gk(x) of the principal minors of F(x) of order k, k = 1,...,n, are non-negative (a principal minor of order k is the determinant of a submatrix of F(x) obtained by considering a subset / C {1,... ,n} of its rows and columns of cardinality k). Figure 11.2 The feasible set (gray region) of the LMI (11.5). Contour lines (in black) show the locus of points x = (xi,x2) for which detF(x) = 0. Figure 11.3 Feasible set of (11.5), together with hyperplanes a Jx + bz = 0, for some randomly chosen z. Figure 11.4 A 3D plot of a spectrahedron corresponding to the Lyapunov inequalities P >z 0, ATP + PA ■< 0, A G 1R2'2. SEMIDEFINITE MODELS 387 Since F(x) is affine in x, the functions gk(x), k = 1, ...,n, are polynomials of degree at most k in the variable x, and the set X is described by the system of polynomial inequalities X= {x eRm : gk(x) >0,k = l,...,n}, which is a closed semialgebraic set. Notice that gi(x) = traceF(x), and gn(x) = detF(x). In particular, the boundary of the LMI feasible region X is described by the determinant {x : gn(x) = det F(x) > 0}, while the other polynomials gk(x) only isolate the convex connected component of this region. Example 11.4 Consider the LMI in x E R2 F(x) = 1 + X\ X\ x2 Xl X\ x2 1 -x2 0 1 + X2 h o. The feasible set X is described by the points in the intersection of the following polynomial inequalities, and it is depicted in Figure 11.5. trace F(x) = 3 + x\ >0, 3 + 2x\ + 2xi*2 — 2*i — 2*2 > 0, det F(x) = 1 + xi - 2*i - 2*2 + 2*1*2 + *1*2 - *2 > 0. 9i{x) = 0 Figure 11.5 Intersection of the regions where g*(x) > 0, k = 1,2,3, for the LMI (11.7). 388 OPTIMIZATION MODELS 11.2.3 Useful "tricks" for LMI manipulation Several manipulations are often useful in order to represent constraints in a suitable LMI format. We here discuss some of these "LMI tricks." 11.2.3.1 Multiple LMIs. Multiple LMI constraints can be combined into a single LMI constraint. Consider two affine maps from Rm to the space of symmetric matrices of order n\ and U2, respectively: F1 : Rm -> SnL F2 : Rm -> S”L Then the two LMIs F1(x)t 0, F2(x)t 0 are equivalent to one LMI, involving a larger matrix of size (n 1 + n2) x (fti + n2), having F\, F2 as diagonal blocks: F(x) = Fi(x) 0 0 F2(x) t 0. This rule is an immediate consequence of the fact that the eigenvalues of a block-diagonal matrix are the union of the eigenvalues of each of the diagonal blocks. 11.2.3.2 Block matrices and the Schur complement rule. Consider a symmetric block matrix A CT С В A, В symmetric. The following implications hold: 1. MXO (resp. M >- 0) => A >z0, B y_0 (resp. A x- 0, B >- 0). 2. If B = 0, then M y 0 A y 0, C = 0. 3. If A = 0, then M y 0 B X 0, C = 0. These three rules can all be verified by considering the quadratic form ~ T r A CT g(w,z) = Point 1 is proved by observing that M > 0 implies g(w,z) > 0 Vip,z, hence g(w, 0) > 0, which implies that А У 0, and likewise for proving that В У 0. Point 2 from right to left is obvious; for the converse implication first observe that, by point 1, M У 0 implies А У 0. Moreover, M >: 0 (with В — 0) implies that g{w,z) = wT Aw 4- wTCz > 0Л \/w,z. The first term is always non-negative, while for any w / 0 the second term is unbounded below in 2, unless C = 0; hence g(w,z) > 0 \/w,z implies C = 0. Point 3 follows from an analogous reasoning. The Schur complement rule (see Theorem 4.9) provides necessary and sufficient conditions for positive definiteness of the block matrix M in terms of suitable conditions on its blocks; it is also very useful for converting certain nonlinear matrix inequalities (e.g., quadratic) into LMI form. The standard Schur complement rule states that, if By 0, then Mto <& A-CTB-1C>0. Or, equivalently, if A y 0, then Mto <£> B-CA~lCTt 0. For (strict) positive definiteness, the Schur rule states that My 0 B y 0, A-CTB~1Cy0, My 0 AyO, B- CA-lCT y 0. There also exist a (more involved) version of the Schur rule, that can be applied when neither A nor B is strictly positive definite, namely: MhO By 0, A-CTB+C^0, (I-BBf)C = 0, MtO <=> At 0, B-CAfCTt 0, (I-AA+)CT = 0. Observe, for instance, the first of these rules: since I — BBf is a projector onto 71(B)1-, the condition (I — BB+)C = 0 is equivalent to requiring that 7l(C) C 7Z(B). Similarly, the condition in the second rule (I — AA+)CT = 0 is equivalent to requiring that 7l(CT) C 7Z(A). 11.2.3.3 Congruence transformations. Given a symmetric matrix M, a congruence transformation on M is a matrix obtained by pre- and post-multiplying M by a matrix factor R, that is G = RTMR. We have from Theorem 4.7 that MtO => RT MR t 0, My 0 => RT MR >-0 (if R is full column rank). Further, if R is square and nonsingular, then the implication holds in both ways: M >: 0 RTMR t 0, (if R is nonsingular). 390 OPTIMIZATION MODELS 11.2.3.4 Finsler's lemma and variable elimination. The following set of matrix inequality equivalences is generally known under the name of Finsler's lemma. Lemma 11.1 (Finsler) Let A E Sn and B E Rm'”. The following statements are equivalent: 1. zTAz > 0 for all z E J\f(B), z 0; 2. BjAB± y 0, where B± is a matrix containing by columns a basis for N{B), that is a matrix of maximum rank such that BB± = 0; 3. there exists a scalar A E R such that A + ABTB >- 0; 4. there exists a matrix Y E R”,m such that A + YB + BTYT y 0. Proof The implication 1^2 follows by considering that any 2 E J\f(B) can be written as z = B±v, for some vector v, hence the statement in 1 means that vTBj^AB±v > 0 for all v 7^ 0, which is equivalent to 2. The implications 3—^2 and 4 —>► 2 both follow from the congruence transformation rule, by multiplying the respective matrix inequalities by B^ on the right and by B J on the left. The proof of 1 —y 3 is slightly more involved, and can be skipped if the reader is not interested in these technical details. Let 1 hold and write any vector y E Rn as the sum of two orthogonal components, one along J\f(B) and one along J\fL(B) — 7Z(BT): y — z -f- w, 2 E jV'(B), w E 7Z(BT). If we let Bj_, BT denote matrices containing by columns a basis for TV'(B) and a basis for 7Z(BT), respectively, we may write 2 = B±v, w = B-]-£, for some free vectors v, £, hence y = z + w = B±v + BT£. (n-8) Now observe that, by point 2, BjAB^ y 0, hence ZTAz = VT(BjAB j_)v > t]a 11 ^ 112^ where rja > 0 denotes the smallest eigenvalue of BjAB^ (which is strictly positive, since this matrix is positive definite). Similarly, notice that Bw 7^ 0 for w ^ 0, since by definition w cannot be in the nullspace of B, therefore wTBTBw > 0 for all w = BT£, which means that BjBTBBT y 0, whence wT(BTB)w = y (BjBtBBj)^-> t]buwl SEMIDEFINITE MODELS 391 where Vb > 0 denotes the smallest eigenvalue of ByBTBBT. Now consider the following quadratic form: yT(A + ABTB)y = (z + w)T (A + ABTB)(z + ip) [since Bz — 0} = zTAz + 2ztAzp + wT (A + ABTB)w [(11.8H11.9)] > r/avTv + 2vTBlABTS + ZT(BjABT+\tibI)Z [R = BTABT] = . s. Val R RT BjABj + \rjbl _ s _ We next show that one can always find a value for A such that the matrix in the last expression is positive definite, which would imply that yT (A + ABTB)y > 0 for all y ^ 0, which would in turn prove the desired statement, i.e., that there exists a A such that A + ABTB >- 0. To this end, notice that, by the Schur complement rule, we have that Bj ABy 4- A t]}jI >-0 & BjABj + \t]hl ——RTR y 0, and, by the eigenvalue shift rule, the latter condition is equivalent to A > An Vb — RTR-BlABT ) , 7« / which concludes this part of the proof. Finally, the implication 3—^4 follows immediately by choosing Y = \ B, whereby all implications in the lemma are proved. □ A generalized form of the equivalence between points 2 and 4 in Finsler's lemma is usually known in the LMI lingo as the "variable elimination lemma." Let A(x) be an affine function of a vector of variables x, and let Y be an additional matrix variable, not depending on x. Then, the elimination lemma1 states that x,Y : A(x) + CYB + BTYTCT >- 0 Cj_A(x)Cj >- 0, Bj_A(x)Bj_ y 0. This rule is useful for converting the condition on the left, containing both variables x and Y, into the conditions on the right, where the variable Y has been eliminated. 11.2.3.5 LMI robustness lemma. Another useful rule deals with LMIs that depend affinely on a matrix Y of uncertain parameters. In this case, the condition that the LMI holds robustly, i.e., for all values of Y in a norm-bounded set, can be converted into a standard LMI condition. More precisely, let A(x) E S”, be a matrix that depends affinely on a vector x of variables, and let B £ lRm'”, C E W1'?. Then, we have that2 the LMI in variable x: 1 See, e.g., Section 2.6 in S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, 1994; or Chapter 2 in R. E. Skelton, T. Iwasaki and K. Grigoriadis, A Unified Algebraic Approach to Linear Control Design, CRC Press, 1998. 2 See again the two books mentioned above. 392 OPTIMIZATION MODELS A(x) + CYB + BTYTCT h o holds robustly for all Y G R?,m : ||Y||2 < 1, if and only if the following LMI in x and A G R holds: or, equivalently, if and only if the following LMI in x and A G R holds: 11.2.4 Linear, quadratic, and conic inequalities in LMI form Many special cases of convex inequalities, such as affine, quadratic, and second-order cone inequalities can be represented in LMI format. Affine inequalities. Consider a single affine inequality in x G Rn: aTx < b, where a G Rn, b G R. This is a trivial special case of an LMI, where the coefficient matrices are scalar: Fq = b, Fz = — i = 1,... ,n. Using the previous rule on multiple LMIs, we obtain that a set of ordinary affine inequalities Quadratic inequalities. Consider a convex quadratic inequality f(x) = xTQx + cTx + d < 0, Q^O. If f(x) is strictly convex (i.e., Q y 0), then the inequality f(x) < 0 can be expressed in the form of an LMI, using the Schur complement A{x) - ACCT BT ^ B A Im ~ ’ A(x) — ABTB C CT AIp - ajx < bi, i = l,...,m can be cast as a single LMI F(x) >r 0, where b\ — ajx rule, as If instead f(x) is convex, but not strictly, then Q y 0, and we can factor it as Q = ETE, hence, using the Schur complement rule again, we obtain that /(x) < 0 is equivalent to the LMI -cT x — (Ex) >- 0. Second-order cone inequalities. Also second-order cone (SOC) inequalities can be represented as LMIs. To verify this fact, let us start with the elementary SOC inequality ||y||2 < t, with y E Rn and t e ]R. This SOC inequality is equivalent to the LMI * yT y tin Indeed, the equivalence is immediate for t = 0. If instead t > 0, then for every zeR” and every oc E R, we have that = ||fz + ocy\\\ + a2(f2 Therefore, if ||yH2 < t then the previous expression is > 0 for all (z,oc), and, conversely, if this expression is non-negative for all (2, a) then it is non-negative ioxz — y,oc — —t, which implies that ||y H2 < t. More generally, a second-order cone inequality of the form ||Ax 4- b\\2 < cTx + d, with A E Rm,n, b E lRm, c E R”, d E R, can be expressed as the LMI >z 0. (11.11) cTx + d (Ax + i7)T (Ax + b) (cTx 4- d)In To verify this fact observe first that (11.11) implies that both the diagonal blocks are PSD, that is cTx + d > 0. Now, suppose first that cTx 4- d > 0; then, the Schur complement rule insures that (11.11) is equivalent to (11.10). If instead cTx 4- d = 0, then (11.11) implies that it must be that Ax + b = 0, hence (11.11) and (11.10) are still equivalent in this case. 11.3 Semidefinite programs 11.3.1 Standard forms A semidefinite program (SDP) is a convex optimization problem, where one minimizes a linear objective function under an LMI constraint. In standard inequality form an SDP is expressed as 394 OPTIMIZATION MODELS mm cTx s.t.: F(x) >z 0, F(x) = Fo + Y^XìFì, 1=1 Fif i = 0,..., m, are given n x n symmetric matrices, c G lRm is the given objective direction, and x G lRm is the optimization variable. The geometric interpretation is, as usual, that we move as far as possible in direction — c, while remaining feasible; an optimal point for problem (11.12) is thus a farthest point in the feasible set X — {x : F(x) >: 0} along the direction opposite to c, see Figure 11.6. Standard conic form. The standard conic form of an SDP derives from the corresponding conic representation of its LMI constraint. Denoting by X G Sn the matrix variable, the conic LMI formulation (11.6) imposes that X >z 0 and that X should belong to an affine set A. This latter affine set is specified by imposing a number of affine constraints on the entries of X, using the standard inner product for a matrix space, that is (A,B) = trace ATB, hence A = ^{X G S” : traceA(X = hi, i — 1,...,m}, where A{ are given symmetric n x n matrices, and b\ are scalars, i — 1,..., m. Similarly, the linear objective function is expressed via the inner product trace CX, where C G Sn. A generic conic-form SDP is thus expressed as trace CX X ^ 0, trace A{X = b{, i = 1,. Figure 11.6 Example: SDP (11.12), with cT = [1 — 0.6], and F(x) given by (11.7). The optimal solution is x* = [-0.4127 0.1877]T. 11.3.2 SDP duality To obtain the dual of an SDP, we first establish the following SDP- based characterization of the largest eigenvalue of a symmetric matrix X: Amax(X) = u(X) = max traceZX : Z ^ 0, traceZ = l. (11.14) Indeed, let X — UAUT be a spectral decomposition of X, with A = diag(Ai,...,An) containing the eigenvalues of X arranged in decreasing order, so that Ai = Amax(X). Using the change of variable Z —> UTZU, and exploiting the fact that UUT is the identity, as well as properties of the trace operator,3 we obtain v(X) = v(A). Hence, v(X) — max trace ZA : Z y 0, trace Z = 1 = max YY A ■ Z >z 0, trace Z = 1 z )=l = max ATz : z > 0, z7 = 1 ■ h = max Ai = Amax(X). The third line stems from the fact that z is feasible for problem (11.16) if and only if diag (z) is feasible for problem (11.15).4 Now consider the SDP in standard conic form (11.13). We express it as min trace CX s.t.: Amax(—X) < 0, trace AjX = bj, i — 1,..., m. Using the following variational representation of eigenvalues above, we obtain V = In the second line, we have used the max-min inequality (8.48); in the third line, we have absorbed the variable A into Z; and eliminated the latter in the final step. As with LP and SOCPs, the dual problem is also an SDP. A similar derivation shows that the dual to the dual SDP above is nothing other than the primal we started with. From Slater's conditions for strong duality,3 it turns out that if the primal problem is strictly feasible, then p* = d*, and there is no duality gap. trace CX + Vi(bj — trace AjX) — A trace ZX y 0, trace Z = 1 vTb : + AZ = 0, Z ^ 0, A > 0 vTb : C - YY ViAi 1=1 + z = o, z^o vTb : SEMI DEFINITE MODELS 395 3 See Section 3.1.4. 4 Note that, at optimum, Z = UTeieJU = u\u[, with ei (resp. u\) the first unit vector (resp. column of U). This shows that Z is rank-one at optimum, that is, it is of the form 22T for some 2 with 2T2 = 1. Thus: Amax(X) = max 2TX2 : 2T2 = 1, which is the Rayleigh quotient representation, as given in Theorem 4.3. 5 See Proposition 8.7. Example 11.5 (Variational characterization of the maximal eigenvalue) The dual of the variational characterization of the largest eigenvalue (11.14) 396 OPTIMIZATION MODELS turns out to be min v : vl >z X. The value of the above problem can be directly shown to be indeed the largest eigenvalue, after spectral decomposition of X. In this case, there is no duality gap, as guaranteed by the strict feasibility of the primal problem (11.14). 11.3.3 SDP relaxation ofnon-convex quadratic problems Consider an optimization problem in which both the objective and constraint functions are (not necessarily convex) quadratic, as introduced in Section 9.4.2: p* = min xT Hqx + 2c X x + do s.t.: xTHjX + 2cJx + di < 0, i G X, xTHjx + 2cjx + dj = 0, j G £. Here, Ho and Hj, i E X,£ are symmetric matrices. In general, the above problem, which we referred to as a quadratically constrained quadratic problem (QCQP), is non-convex, and hard to solve. Not surprisingly, there are many applications for this rich class, some of which are described in the exercises. Semidefinite optimization may be used to obtain bounds on such hard QCQP problems, via a technique called rank relaxation. The basic idea is to first express the problem in terms of the variable x and an additional symmetric matrix variable X = xxT. We can rewrite the above in an equivalent way: p* = min trace HnX + 2cX x + do s.t.: trace H/X + 2cjx + dj < 0, i G X, trace HjX + 2cjx + dj — 0, j G £, X = xxT. Here we have exploited the fact that trace AB — trace BA for any matrices A, B of compatible size. We can now relax the last equality constraint X = xxT into a (convex) inequality one X ^ xxT, which in turn can be written as an LMI in X, x: Since we have relaxed a constraint into a more general, convex one in the context of a minimization problem, we obtain a lower bound p* > q*r where q* is the optimal value of the convex problem: SEMIDEFINITE MODELS 397 q* = min 1 x,X We further observe that the objective function is linear; the constraints except the last are all linear equalities and inequalities; and the last one is an LMI. Hence, the above is an SDP. The approach can be pushed further to provide not only a bound on the original hard problem, but also quality guesses as to an optimal solution. However, there are no guarantees in general that such guesses are even feasible. In particular, an optimal solution x* for the SDP above is not even guaranteed to be feasible for the original problem. One case when such guarantees exist is the so-called S- procedure, which is discussed in Section 11.3.3.1. Another case when the approach works well, and can be further analyzed with success, relates to quadratic Boolean optimization. Precisely, consider the special case of a non-convex QCQP: p* — max xTHx s.t.: xf = 1, / = 1,... ,n. Here, H is a given n x n symmetric matrix. Applying the relaxation approach above leads to an upper bound, as we begin with a maximization problem: p* < q*, where q* = min trace HX s.t.: Xjj = 1, i = 1,..., n, We note that the variable x only appears in the last (LMI) constraint. Since the latter holds for some x G if and only if X ^ 0, we can further reduce our relaxed problem to q* = min trace HX 1 x s.t.: Xu = 1, i — 1,..., n, Several interesting results are known for the above bound. First, the quality of the bound is bounded, independently of the problem size n. Precisely, we have6 trace HqX + 2cq x + do trace HjX 2.cJx d{ ^ 0, i G X, trace HjX + 2cjx + dj — 0, / G £, 6 This result, originally due to Yu. Nesterov, is given as Theorem 3.4.2 in Ben-Tal and Nemirovski, Lectures on Modern Convex Optimization, SIAM, 2001. 398 OPTIMIZATION MODELS -q* < P* < <7*. 71 r ' In addition, there exists a method to generate points x that are feasible for the original problem (that is, Boolean vectors), and such that the corresponding objective xTHx achieves the lower bound ^q*. 11.3.3.1 The S-procedure. The so-called 5-procedure establishes an equivalence between a certain LMI condition and an implication between two quadratic functions.7 More precisely, let fo(x), /i(x) be two quadratic functions: /oO) — xTFqx + 2gQ x -f- ho fi(x) = xTF1x + 2gJx + h1 . SoT where Fq,Fi G S”, go,gi € IRn, and ho,h\ G IR. We do not assume convexity, that is Fo, Fi are not required to be positive semidefinite. Assume that the constraint /1 (x) < 0 is strictly feasible, i.e., that there exists a point x G W1 such that /i(x) < 0. Then, the following two statements (a) and (b) are equivalent: (a) fi{x) <0 =» f0(x) < 0; (b) there exists a scalar r > 0 such that ' ?! . So ho _ 7 See also the discussion in Section 9.4.3.3. Notice that statement (a) can be interpreted in terms of inclusion of the zero-sublevel set of f\ in that of /0, i.e., X\ C Xq, where X\ = {x G ]Rn : fi(x) < 0}, Xo == {x G Rn : fo(x) < 0}- Also, statement (b) can be formulated equivalently as 3r > 0 : fo(x) — rfi(x) < 0, Vx. The implication from (b) to (a) is immediate to prove. Indeed, if (b) holds, then by multiplying the LMI in (b) on the left by [xT 1] and on the right by its transpose, we obtain that fo(x) < t/i(x) for some r > 0. Therefore, if /i(x) < 0, then also fo(x) < 0, which is the statement in (a). The converse implication, from (a) to (b), is more difficult to prove, and it needs the assumption of strict feasibility on f\. This latter part of the proof is not reported here. SEMIDEFINITE MODELS 399 The implication from (b) to (a) can also be easily extended to an arbitrary number of quadratic functions. Indeed, defining -1 T fi(x) = Fi gi gj hi it is easy to check that the statement: 3ri,...,rm > 0 : Fo go So~ h0 ! LTi i = 0,1,..., m, Fi gi gj hi implies the statement fm{x) ^ 0 /oO) < 0. In terms of zero-sublevel sets X[ = {x G : fi(x) < 0}, i = 0,1,...,m, the above implication states equivalently that (11.17) implies that that is, Xq contains the intersection of the sets Xj, i = 1,... ,m. Also, condition (11.17) can be expressed equivalently as 3ri,...,Tm > 0 : /o(x) - ^T,/;(x) < 0, Vx. 11.4 Examples of SDP models SDP models arise in a wide variety of application contexts. Here, we expose a necessarily small selection of them; further examples are discussed in some of the application chapters; see, in particular, Chapter 13. 11.4.1 Some matrix problems Semidefinite programs arise often as extensions of matrix problems from linear algebra, involving matrices that depend affinely on a vector of variables x. We next describe some of these problems. 11.4.1.1 Minimization of the spectral norm. Let A(x) G be a matrix whose entries are affine functions of a vector of variables x G Rm. This means that A(x) can be written as A(x) — Aq + x\ A\ + • • • + xmAm. 400 OPTIMIZATION MODELS The problem of minimizing the spectral norm of A(x), min ||A(x)||2, can be cast as an SDP problem as follows. First recall that || A{x) H2 = <j\(A(x)), where cq(A(x)) is the largest singular value of A(x), which coincides with the square-root of the largest eigenvalue of AT (x) A{x), see Corollary 5.1. Then we have that \\A(x)\\2<t & ||A(x)||2<f2 ^ A 1(AT(x)A(x))<t2r and the latter condition holds if and only if Ai(AT(x)A(x)) < t2, i — 1,...,n. Since, by the eigenvalue shift rule (3.13), it holds that Ai{AT{x)A{x) - t2In) = Aj(AT(x)A(x)) - t2, i = we have that A/(AT(x)A(x)) < t2, Vi Ai(AT(x)A(x) - t2In) < 0, Vz, and the latter condition is equivalent to requiring that AT(x)A(x) — t2In •< 0. Using the Schur complement rule, this matrix inequality is further rewritten in LMI format (in variables t2 and x) as t2In AT(X) A(x) Ip >- 0. Since t — 0 if and only if A(x) =0, this LMI is also equivalent to tin AT(x) A(x) which is obtained via congruence, pre- and post-multiplying by the diagonal matrix diag ^1/y/t, \/T0)' assuming t > 0. Problem (11.18) is thus equivalent to the following SDP in the variables x, t: min t tin AT(x) A(x) tlp y 0. 11.4.2.2 Minimization of the Frobenius norm. Let again A(x) G ]RP,n be a matrix whose entries are affine functions of a vector of variables x G !Rm. The problem of minimizing the Frobenius norm (squared) of A(x), nun ||A(x)|||, (11.19) can be formulated in SDP format as follows trace Y Y A(x) AT(x) I„ h 0. To verify this equivalence, we first observe that ||A(x)||f = trace A(x)AT(x), and that, by the Schur complement rule, AO & A(x)AT(x)<Y, and, since X ^ Y implies trace X < trace Y, the constraint (11.21) implies that ||A(x)||p < trace Y. Now, if x* is optimal for problem (11.19), then ** and y* = A(x*)AT(x*) are feasible for problem (11.20), and are indeed optimal for this problem, since the objective value of (11.20) is tracey* = trace A(x*)AT(x*) and it cannot be improved, for otherwise x* would not be optimal for problem (11.19). Conversely, if x* and y* = A(x*)AT(x*) are optimal for problem (11.20), then x* is also optimal for problem (11.19), f°r otherwise there would exist a point x 7^ x* such that || A(x) ||p < || A(x*) ||p, and this would imply that x, Y = A(x)AT(x) improve the objective of (11.20) with respect to x*, which would contradict the optimality of 11.4.1.3 Minimization of the condition number of a PD matrix. Let F(x) be a symmetric n x n matrix whose entries are affine functions of a vector of variables x G Rm. We address the problem of determining x such that F(x) >-0 and the condition number of F(x) is minimized. The condition number is defined as k(F(x)) — ~ an(F(x)Y where cq and crn are the largest and the smallest singular values of F(x), respectively. Under the condition that F(x) >>- 0, however, the singular values of F(x) coincide with the eigenvalues of F(x), hence ■(FW)<T « ^min (F(x) ) 402 OPTIMIZATION MODELS We thus want to solve the problem 7* = min 7 (11.22) s.t.: F(x) y 0, k(F{x)) < 7. Observe that F(x) >-0 if and only if there exists a scalar \i > 0 such that F(x) >z pi. Moreover, for \i > 0, 7 > 1, it holds that F(x) h pi ^ Amin(F(x)) > ft Am.n(F(x)) < F(x) ^ 7pi & Amax(F(x)) < 7/7 therefore, the constraints in problem (11.22) are equivalent to ]i > 0, ]il ■< F(x) di 7pi- (11*23) Notice, however, that these constraints are not in LMI form (w.r.t. all variables x, 7,^), due to the presence of the product term yp. This issue cannot be eliminated (in general) and, indeed, problem (11.22) cannot be converted into a single SDP, unless F(x) is linear in x. Let us first consider this special case: if F(x) is linear in x (i.e., F(0) — 0), then condition (11.23) is homogeneous in (x,jt), meaning that it holds for some (x,ft) if and only if it holds for (ocx,ocft), for any scalar oc > 0. We can then divide all terms in (11.23) by ]i, and obtain the equivalent condition I ^ F(x) ^ 71. (11*24) Therefore, if F(x) is linear in x, then problem (11.22) is equivalent to the SDP [for F(0) = 0] 7* = min 7 s.t.: I ^ F(x) ^ 7/. When F(x) is not linear in x (but, of course, still affine in x), then problem (11.22) cannot be converted into a single SDP problem. However, it remains computationally tractable, since it can be easily solved via a sequence of SDP problems. More precisely, select a fixed value of 7 > 1, and consider the following SDP problem: u = min u (11.25) xGKm ,}i>0 s.t.: ]il F(x) •< 7]il, and let p,x denote its optimal variables. If (11.23) is feasible, then it means that we have found an x such that k(F(x)) < 7, thus the SEMIDEFINITE MODELS 403 value of 7 may not be optimal, and it may be decreased. On the other hand, if (11.23) *s infeasible (by convention in this case we set fi = oo), then it means that the selected 7 was too small, and it should be increased, since for sure 7* > 7. We can therefore find the optimal 7* by proceeding iteratively according, for instance, to a bisection technique:8 8 See also Exercise 12.3. 1. initialization: find any x such that F(x) >- 0, and set 7iow = 1, 7uP = k(F(x)); 2. if 7up — 7low < et then return x* = x, 7* = 7up, and exit. 3. set 7 = 5 (7iow + 7uP); 4. solve SDP problem in (11.25), and find its optimal variables x, fi; 5. if fi < 00 (problem was feasible), then set 7up = 7; 6. If fi = 00 (problem was infeasible), then set 7iow = 7; 7. go to 2. Clearly, at iteration k = 0 we know that the optimal 7 is located in an interval of length £ — k(F(0)) — 1; at the first iteration it is located in an interval of length £/2, at the second iteration in an interval of length 1122, etc. Thus, if the procedure exits after k iterations (number of passages through point 3 in the procedure), then we know that the optimal 7 is located in an interval of length £/2k. The above iterative procedure thus exits with an e-suboptimal solution as soon as £/2k < e, that is, taking base-2 logarithms, when k is the smallest integer such that k > log2(^/e), i.e., for k = \ 1' i°g2- Example 11.6 Consider a variation on the localization problem based on trilateration as discussed in Example 6.2 and in Example 6.8. Supposing there are m + 1 beacons (with m > 2), the localization problem can be written in the form Ap = y, where pT = [pi P2} is the vector of planar coordinates of the object that we want to localize, y G IRm is a known term vector that depends on the distance measurements from the object to the beacons, and AT = [ • • • Sm j , Si = ai+i - av i = 1,..., m, where az- G lRn are the vectors containing the coordinates of the beacons, and 5j are the beacon positions relative to a reference beacon a\. It has been discussed in Example 6.8 that if the measurement vector y 404 OPTIMIZATION MODELS is affected by spherical uncertainty (errors), then this uncertainty is reflected into uncertainty in the localization of p, and the uncertainty region around the nominal position is described by the estimation ellipsoid Ep = {P ■ PT(ATA)p < 1}. The lengths of the semi-axes of this ellipsoid are given by cq-1, cr^1, where d\, d2 are the singular values of A. Here we consider an "experiment design" type of problem: we assume that the position of the anchors is not known completely, and we aim at finding good anchor positions, so that the error region around the nominal estimated position is "as spherical as possible." The rationale behind this criterion is that we want to avoid having certain directions with large uncertainty and other directions with small uncertainty; in other words, uncertainty in the localization should be distributed as evenly as possible along all directions. This criterion is quantified by the eccentricity of the ellipsoid, which is simply the ratio between the largest and the smallest semi-axes, thus it coincides with the condition number of A. To consider a tractable problem formulation, we assume that the reference beacon a\ is fixed (e.g., a\ — 0), and that also the directions of the relative beacon positions 5j are given. That is, we assume that <5; = PVi, ||u; lb = 1/ Pi > 0, i = 1, • • •, m, where the directions Vj are given and fixed, while the distances pi from the reference beacon are the variables to be determined. The problem then becomes to find Xj = pf > 0, i = 1,..., m, so as to minimize the (sqhared) condition number of A(x), which coincides with the condition number of the symmetric matrix r -I r -| t m F(x) = ^ ^ ••• S,„ J ^ (Si ••• 5m J =YJxivivJ. Since F(x) is linear in x, we have that there exists an x such that k(F(x)) < 7 if and only if there exists an x such that condition (11.24) is satisfied. Therefore, our beacon placement problem can be expressed in the form of the following SDP min 7 s.t.: L xiviv7 ^ I1’ i=1 m L xivivJ ^ l> X[ >0, i — 1,... ,m. For example, choosing m = 5 random directions V[ (columns in the matrix below) ’ -0.5794 0.7701 0.2323 -0.1925 -0.9880 ' -0.8151 -0.6379 -0.9727 0.9813 0.1543 SEMIDEFINITE MODELS 405 we obtain optimal beacon placements for x* = [0.6711 0.2642 0.2250 0.2277 0.6120]T (or positive scalar multiples of this vector, due to homogeneity), with 7* = 1, see Figure 11.7. The matrix F(x*) at the optimum results to be, in this case, the identity matrix. 11.4.1.4 Matrix completion problems. Matrix completion problems refer to a class of problems in which one aims at recovering a matrix X from incomplete information about its entries (plus some a priori knowledge or assumptions). For example, suppose that only a few entries of a matrix X G Rm,n are revealed (i.e., we know that Xjj = djj, for (/,;) G /, where / is a set of indices of cardinality q < mn, where djj are given numbers), and we wish to guess what the whole matrix X is. Indeed, as such, this may look like an "impossible" problem, as it really is. Recovering the entire matrix from incomplete information is not possible, unless we have some additional information, or we make some assumptions on X. To make a simple example, consider the incomplete matrix X shown in Figure 11.8. Is it possible to recover the whole X from this incomplete information? The answer is no, if we have no additional information. The answer is instead yes, if we know that this is a Sudoku matrix... In our treatment we shall not consider Sudoku-type completion problems, since they involve integer variables, and are thus typically non-con vex and computationally hard. However, we consider other types of completion problems, where the prior assumption is that the unknown matrix should have minimal rank. These problems are therefore named minimum-rank matrix completion problems. A famous example in this class is the so-called "Netflix problem." This is a problem arising in a recommender system, where rows in matrix X represent users and columns represent movies. Users are given the opportunity of giving a rate mark on the movies. However, each user rates only a few movies (if any), hence only a few entries of the X matrix are known. Yet, it would be interesting to complete this matrix by "guessing" the missing entries, so that the vendor might recommend movies that a user may like and be willing to order. Here, one may observe that users' preferences on the movies are a function of a few factors (such as genre, country of production, filmmaker, etc.), hence each row in X may be written as the product of a row vector with few terms (the factors) times a large "factor loading" matrix, which implies that X will have rank no larger than the number of factors. This suggests the idea that X can be completed by finding the matrix of minimum rank which is compatible with the available entries. Formally, this gives rise to the following optimization problem:9 Figure 11.7 Optimal beacon placement. Figure 11.8 An incomplete 9x9 matrix. 9 For a full treatment of this problem and pointers to related literature, see the paper by E. Candés, B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 2009. 406 OPTIMIZATION MODELS min rankX (11.26) s.t.: Xij = dij, for (i,j) £ ], where the index set / has cardinality q <mn. Observe indeed that if the prior information on X is that rankX = r < min(m,n), then X is defined by a number of free terms (or degrees of freedom) which is smaller than mn. In particular, considering the compact SVD of X = Yli=1 &iUivJ, we have that X has r(m + n — r) degrees of freedom, corresponding to rm — r(r + l)/2 parameters for the Uj vectors (the r(r + l)/2 term corresponds to the degrees of freedom to be subtracted due to the orthonormality conditions among the U{ vectors), plus rn — r(r + l)/2 parameters for the v\ vectors, plus r parameters for the singular values <J[. It is thus apparent that when r is small compared to m, n then the matrix X is defined by a number of parameters which is much smaller than the number of its entries. Finding a matrix completion with minimum rank thus amounts to finding the "simplest" (i.e., having the least number of degrees of freedom) matrix compatible with the observed entries. There are, however, two kinds of difficulty related to problem (11.26). A first issue concerns uniqueness of the recovered matrix: by observing few entries of a low-rank matrix we cannot be determin- istically sure to recover uniquely the hidden matrix itself. A well- known example is given by the rank-one matrix X = 0 0 ••• 1 0 0 ... 0 0 0 ••• 0 whose entries are all zero, except for the one in the upper-right corner. Clearly, such a matrix could not be recovered from observing a generic subset of its entries (unless our observation set contains the "1" in the upper-right corner). We shall not dwell too much on this uniqueness issue, which has been the subject of extensive research in the "compressive sensing" literature. We only mention that to go round this problem we should take a probabilistic point of view, by considering matrices that are generated by certain random matrix ensembles, as well as random policies for selecting the entries to be observed (e.g., uniformly). Under such hypotheses there exist results that, roughly speaking, guarantee with high probability that the actual hidden matrix can be recovered from observation of a subset / of its entries, provided that the cardinality of / is Jarge enough. SEMIDEFINITE MODELS 407 The second difficulty with problem (11.26) is instead of a computational nature: rankX is not a convex function of X, and problem (11.26) is indeed a very hard (technically NP-hard) optimization problem: known algorithms for finding an exact solution to this problem take a time which is a double exponential function of the matrix size, which makes them basically useless, as soon as the dimension grows. There exist, however, relaxations (i.e., approximations) of the above problem which are amenable to efficient solution. We next discuss one such approximation. Consider the SVD of a generic matrix X E Rm,n: X = Yli=\ CiuivJ• The rank of X coincides with the number of its nonzero singular values, that is with the cardinality of the vector s(X) = [<ii ... ,<Jn}T- We can then say that rankX = ||s(X)||0, where ||s(X)||o denotes indeed the £0 (pseudo) "norm" of the vector s(X), which counts the number of nonzero entries in s(X). Now, since ||s(X)||o is non-convex, we may substitute it with the ^i-norm, as justified in Section 9.5.1, and hence minimize the i\ norm of s(X), ||s(X)||i — Yli=\ ^'(X), instead of rankX. The t\ norm of the vector of singular values is actually a matrix norm; hence, it is convex in its argument. It is known as the nuclear norm:10 \\X\U = \\s(X)\\1 = f^cri(X), The nuclear-norm heuristic thus amounts to solving, instead of (11.26), the convex optimization problem min IIXII* S t.: Xij = djj, for (i,j) <E /. The interesting fact, which is not proven here,11 is that the nuclear- norm minimization problem can be expressed in the form of the following SDP: min trace Y + trace Z s t.: Xq = dij, for (i,j) 6 J, Y X XT Z h 0, (when X is symmetric, one may take Y — Z in the above problem, thus eliminating one matrix variable, without loss of generality). Moreover, it can be proved that the solution from this heuristic "often" (in the probabilistic sense mentioned above) coincides with 10 When X is symmetric, and positive semidefinite, the nuclear norm reduces to the trace. 11 For a proof see, e.g., M. Fazel, Matrix Rank Minimization with Applications, Ph.D. thesis, Stanford University, 2002. 408 optimization models the solution of the original problem (11.26). Under appropriate hypotheses (essentially, that X comes from a random ensemble of matrices, that entries are sampled at random - for example uniformly - over rows and columns, and that q is sufficiently large), there thus exists a regime in which, with high probability, the solution to the computationally hard rank minimization problem is unique, and it coincides with the solution to the nuclear-norm minimization problem. One estimate for how large q needs to be for the recovery regime to hold prescribes that q should be of the order of d5//4rlogd, where d = max(m, n) and r is the rank of X. The minimum-rank completion problem in (11.26) is actually a special case of a more general class of problems, called affine rank minimization problems, where one minimizes the rank of a matrix X subject to affine constraints on the matrix entries, that is where A : lRm'n —y IRQ is a given linear map, and b G is a given vector. In this context, each entry bj of vector b can be interpreted as a linear measurement on the entries of X, and the problem amounts to reconstructing the unknown X from the given q linear measurements. A convex relaxation of problem (11.27) *s obtained by replacing the rank function with the nuclear norm function, which in turn yields an SDP formulation of the relaxed problem. Example 11.7 (Completion of Euclidean distance matrices) Consider a set of points p\,...,pm € m > r, and define the Euclidean distance matrix (EDM) for these points as the symmetric matrix D G Sm whose (z,;)-th entry is the Euclidean distance between pi and pj, that is A typical problem, arising for instance in autonomous agents localization, cartography, computer graphics, and molecular geometry endeavors, is to determine (up to an orthogonal transformation and absolute offset) the configuration of points pi, i = 1,..., m, from possibly incomplete information on the distance matrix12 D. Let P = [pi - - - pm] G W'ni, and let P denote the matrix of centered data points: where each column pj of P is equal to pi — p, where p is the centroid of the data points. Observe that we can express each entry of D in terms of the Gram matrix G = PT P, since 12 The paper by A. Y. Alfakih, On the uniqueness of Euclidean distance matrix completions, Linear Algebra and its Applications, 2003, gives a full treatment of Euclidean matrix completions, and also contains references to most of the related literature. min rank X s.t.: A(X) = b, Dij = WPiWi + \\Pj\\l - 2p7 Vj = Gu + Gjj —2 Gjj, i,j = 1 SEMIDEFINITE MODELS 409 D = diag (G) 1T + ldiag (G)T - 2 where diag (G) here denotes the column vector containing the diagonal entries of G. Now notice that, since El = 0, it holds that EDE = -2EGE = -2 We may draw two conclusions from the last two equations. First, equation (11.29) indicates that if we know the distance matrix D, then we can recover the configuration of centered points (up to an orthogonal transformation) by computing a matrix square-root factorization of EDE. Second, equation (11.28) implies (by using Lemma 3.1) that rankD < rank G + 2, hence (since G = PT P, P e Rr,m implies rank G < r) rank D < r + 2. Since r is the embedding dimension of the data points, the previous bound implies that rank D is typically small compared to m, especially in geo-localization problems, where r — 2 (planar localization) or r = 3 (3D localization). This fact suggests that, at least with high probability, when r < m we may recover the full Euclidean distance matrix D (and hence the centered configurations), from a sufficiently large number q of randomly selected observations of the entries of D. For a numerical test, we considered several randomly generated configuration matrices P with r = 2, m — 30, where pi, i = 1,..., m, are extracted from a standard normal distribution. For each instance of P, we constructed the corresponding Euclidean distance matrix D, and we solved the following (symmetric) nuclear norm minimization problem trace Y X(j = D(j, for (i,j) € /, where / is an index set of cardinality q indicating the q randomly selected entries of D that are revealed to the solver. The number q is to be compared with the number of free entries in the symmetric matrix X, which is nv = m(m + l)/2. Clearly, if the ratio rj = q/nv is high (close to one), then we are revealing most of the entries, hence we expect correct recovery (i.e., that X = D) with high probability, while we expect lower recovery rates for lower rj. The numerical experiment was conducted for values of rj in {0.6, 0.7, 0.8, 0.9, 0.95}. For each 77, we solved problem (11.30) N = 50 times, each time with a randomly extracted P matrix. We declared D recovered when the optimal X was such that ||X — D||p/1|ZD||p < 10-3, and we kept record of the average successful recoveries. Results are shown in Figure 11.9. T1 = 0.6 T1 = 0.7 r] — 0.8 T1 = 0.9 T] = 0.95 Figure 11.9 Rates of recovery of the full EDM, for r = 2, m = 30, as a func- 11.4.2 Geometrical problems Several geometric problems involving spheres, ellipsoids, and polytopes can be posed in terms of convex programs and in particular 410 OPTIMIZATION MODELS SDPs,13 as exemplified next. 11.4.2.1 Largest ellipsoid contained in a poly tope. Let us describe a bounded ellipsoid as the image of a unit ball under an affine map, that is £ = {x g IRn : x = Qz + c, ||z||2 < 1}, (n-31) where c G IRn is the center of the ellipsoid, and Q G S+ is the square- root of the shape matrix P of the ellipsoid, see Section 9.2.2. The lengths of the semi-axes of £ are given by the singular values of Q, Ci(Q), i = \,...,n (see Lemma 6.4), which, since Q ^ 0, coincide with the eigenvalues of Q, A* (Q), i = 1,..., n. Let then V denote a polytope, described as the intersection of m given half-spaces: V — {x G ]Rn : ajx < b{, i — 1,..., m}. The containment condition £ C.V means that the inequalities «,T*< hi, i — 1,..., m, must be satisfied for all x £ £, that is, for i — 1,..., m, it must hold that ajx <bi, x = Qz + c, V2 : ||z||2 < 1, see Figure 11.10. Substituting x = Qz + c into the inequalities ajx < b{, we have, for i — 1,..., m, Qz + aj c < bi, Vz : ||z||2 < 1 max aj Qz + ajc < b[. Since the max in the equation above is attained for z = / II II2 (see, e.g., Section 2.2.24), we obtain that £ C.V ^ ||Qfl/H2 + aJc < bi, i — 1,.. .,m. (11*32) We now consider two possible "measures" of the size of £: a first typical measure is the volume of the ellipsoid, which is proportional to det Q; another measure is sum of the semi-axis lengths, which coincides with the trace of Q. A maximum-volume ellipsoid contained in V is obtained by maximizing det Q, Q >- 0, under the constraints in (11.32). However, since log is a monotone increasing function, we can equivalently maximize log det Q, which has the advantage of being a concave function of Q, over the domain Q y 0 (see Example 8.6). A maximum-volume ellipsoid contained in V can thus be obtained by solving the following convex optimization problem (it involves the maximization of a concave objective /0 = log det Q, which is equivalent to minimizing the convex objective —/0): 13 Many of these problems are also discussed in Chapter 8 of Boyd and Vandenberghe's book. Figure 11.10 Ellipsoid 8 inscribed in a poly tope V. max log det Q s.t.: Q y 0, ||Q«i||2 + ajc < bj. i = This problem is convex, but it is not in SDP format, due to the log det Q objective. However, it can be reformulated into an equivalent SDP format, although this reformulation is not detailed here.14 Standard software for convex optimization, such as CVX, automatically recognize a logdet-type objective, and transform it internally into an equivalent SDP approximation. The maximum-volume ellipsoid contained in V is called the Lowner-John ellipsoid of V) every full-dimensional convex set has a unique Lowner-John ellipsoid. Moreover, if the Lowner-John ellipsoid is scaled by a factor n around its center, then one obtains an ellipsoid that contains V. That is, if £* is a Lowner-John ellipsoid for V, then it is the unique maximum volume ellipsoid such that £* CV, and it holds that V C n£*. Further, if the V set is symmetric around its center, then one may improve the scaling factor from n to y/n. A maximum-sum-of-semiaxis lengths ellipsoid contained in V is instead obtained by maximizing trace Q, Q y 0, under the constraints in (11.32). Since trace Q is linear (hence concave) in Q, we directly obtain the following SDP problem: max trace Q (ii-33) s.t.: Qh 0, b{ - aj c aj Q Qcii (bi-ajc)l y 0, i = 1,..., m, where the last LMI constraints were obtained by applying (11.11) to the SOC constraints ||H2 + ajc < b{. A relevant special case arises when we constrain a priori £ to be a sphere, that is we set Q = rln, where r > 0 represents the radius of the sphere. In this case, problem (11.33) specializes to max r s.t.: r\\aj\\2 + ajc < bj, i = 1,... ,m, 14 See Section 4.2 in A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, 2001. which is simply an LP. The center c of the largest sphere inscribed in V is usually known as the Chebyshev center of the polytope. 412 OPTIMIZATION MODELS 11.4.2.2 Smallest ellipsoid containing a poly tope. We next consider the problem of finding a minimum-size ellipsoid £ that contains a polytope V defined by means of its vertices (see Figure 11.11): V = co{x{V,...,x№}' We describe the ellipsoid £ by means of the following representation: £ = { x R" : where c is the center of the ellipsoid, and P is the shape matrix. Notice that, by the (non-strict) Schur complement rule, the LMI condition in the above representation is equivalent to the conditions Ph 0, (x-c)en(P), 1. If we let P = QQT be a full-rank factorization of P, where Q IR'‘V", rankP = m < n, then P+ = QT+Q+, Q+Q = Im, and U(P) = U(Q)- Therefore, the condition (x — c) E 7Z(P) means that there exists z E Rm such that x — c = Qz, and (x - c)TP\x -c)= zTQTQT+Q+Qz = zTz, (x — c)TPf(x — c) < 1 <=> ||z||2 < 1. Therefore, the representation in (11.34) is equivalent to the representation in (11.31) £ = {x eRn : x — Qz 4- c, ||z||2 < 1}, with Q E ]Rn'm full column rank but possibly rectangular. This representation allows for the description of bounded ellipsoids that may be "flat" along some directions, i.e., ellipsoids that are contained in an affine space of dimension lower than the embedding dimension n. Whenever an ellipsoid is flat, its volume is identically zero, hence minimizing the volume measure may be inappropriate for possibly flat ellipsoids. A frequently used measure of size, that can be used also for flat ellipsoids, is instead the sum of the squared semi-axis lengths, which is given by the trace of P. A minimum-trace ellipsoid containing V can thus be computed as follows. Observe that V C £ if and only if e £ for / = 1,..., p, hence (*W — c) P ^0, i — 1,..., p, Figure 11.11 Ellipsoid £ circumscribing a poly tope V. from which we obtain the SDP SEMIDEFINITE MODELS 413 min trace P If the LMI condition in (11.34) *s instead strengthened to a strict inequality, then the condition P y 0 implies that the ellipsoid £ is full-dimensional, and we can factor P = Q2, with Q >- 0. Then the representation in (11.34), under strict inequality, is equivalent to the representation The volume of £ is in this case proportional to det(Q) = det(A_1). A minimum-volume ellipsoid containing V can then be obtained by minimizing det (A-1), Ay 0, under the containment condition V C £. However, the objective function det (A-1) is not convex over A y 0. To overcome this issue, we simply consider the logarithm of the volume as the minimization objective: this gives the same min- imizer as the original problem, since the log function is monotonic increasing, and has the advantage that the modified objective is convex over the domain Ay 0, see Example 8.6. A minimum- volume ellipsoid containing V can thus be obtained by solving the following convex optimization problem: In the special case where we seek for the minimum-size sphere that contains V, then both the minimum-volume and the minimum-trace problems are equivalent to the following SOCP: Example 11.8 (Inner and outer ellipsoidal approximations of a polytope) Consider a polytope V with vertices described by the columns of the following matrix (see Figure 11.12): £ = {x e R" : (x - c)TQ-1Q"1(* - c) < 1}, Qy 0. Posing A = Q-1 and b = Ac, this is also equivalent to £ = {x £ R" : II Ax - b\\l <!}, Ay 0. /0 = logdet(A 1) = — logdet(A) Ae§n ,b£Rn n —log det (A) s.t.: Ay 0; || Ax^ — b\\2 < 1, / = p. min r s.t.: ||*W — c 112 <r, i — \,...,p. reR'CeW1 -2.7022 -0.1863 0.2824 0.8283 1.5883 -1.4118 -0.9640 1.3385 0.9775 0.3340 414 OPTIMIZATION MODELS The same polytope also has the inequality representation as the set of x G 1R2 such that -0.9845 ' ' 0.9164 “ x < Figure 11.12 shows the polytope V, together with the minimum volume (solid line) and the minimum trace (dashed line) circumscribed ellipsoids, and the maximum volume (solid line) and maximum sum-of-axis lengths (dashed line) inscribed ellipsoids. 11.4.2.3 Smallest volume ellipsoid containing the union of ellipsoids. We here consider the problem of finding a minimal ellipsoid covering a set of m given ellipsoids, see Figure 11.13. Let the given ellipsoids be specified in the form of convex quadratic inequalities = {x e lRn : fi(x) < 0}, f = 1,.. .,m, where, for i = 1,..., m, f,(x) = xr FjX + x + hi = with F( >- 0, and hi < g- Ffgi. These ellipsoids have centers at x^ = —Fflgi, and the inequalities ffx) < 0 are strictly feasible, since fi(x^) < 0. Let the outer-bounding ellipsoid (to be determined) be parameterized as £ = {xeKn: f0(x) < 0}, fo(x) = \\Ax + b\\l-l Fo go So h0 with A >- 0, Fq = A2, go — Ah, ho — hTb — 1 (notice that we used this description for £ in order to avoid a homogeneous parameterization: here ho is not a free variable). Then, from the 5-procedure we have that Si C £ o 3ti > 0 : ’ Fo -< Ti ' Fi _ go h0 _ — Ll . ^ hi _ Figure 11.12 Inner and outer ellipsoidal bounding of a polytope. Figure 11.13 An ellipsoid E circumscribing m — 3 ellipsoids E\, i = 1,2,3. SEMIDEFINITE MODELS 415 Elaborating on this latter condition, with the position obtain Ab, we ' A2-Ti-F; _ bTA - gj bTb — 1 — Tfhf . i>T-s7 bT A 2b — 1 — Tjhj . n-T -1 - Till, Using the Schur complement rule, the latter matrix inequality is equivalent to " F0 - t, F, b-gi 0 F — gj —1 — Tihi F r<0, (11-38) 0 b -F0 _ which is an LMI condition in the variables Fq, b. Since the volume of £ is proportional to det1//2 Fq-1, we may find a minimum-volume outer-bounding ellipsoid by minimizing the logarithm of det1/2 F"1 (which is a convex function of Fq >- 0) under the LMI constraints in (11.38), obtaining the convex optimization problem s.t.: ti > 0, Remark 11.1 Smallest sum-of-semiaxis lengths ellipsoid containing the union of ellipsoids. An ellipsoid with minimal sum-of-semiaxis lengths covering the union of ellipsoids can be found by solving an SDP problem. Such a formulation can be obtained by considering the representation £{ = {x : x = + E{Zi, || 2/1| 2 < 1} for the given ellipsoids i = 1,m, and the representation in (11.34) for the covering ellipsoid £. The precise formulation can then be obtained by applying the LMI robustness lemma to (11.35), which is left to the reader as an exercise. 11.4.2.4 Maximum volume ellipsoid contained in the intersection of ellipsoids. We here consider the problem of finding a maximal-volume ellipsoid contained in the intersection of a set of m given ellipsoids, see Figure 11.14. Let the given ellipsoids be represented as >-0>, Z = 1,..., 712, Figure 11.14 An ellipsoid 8 inscribed I in the intersection of two ellipsoids. where are the centers and Pi >z 0 are the shape matrices of the given ellipsoids, for i = 1,..., m. Let instead the to-be-determined £i = i x e Rn : r " 1 Pi 4l6 OPTIMIZATION MODELS inscribed ellipsoid be represented as £ — {x G Rn : x = Qz + c, ||z||2 < 1}, with c G R”, Q G S”+. Now, we observe that £ C if and only if 1 (x-*W)T (x — *W) Pi b 0, Vx G £, that is, if and only if (c - *W) + Qz (c — x^)T + zTQ >: 0, Vz : ||z||2 < 1. We then rewrite this last condition as 1 (c — *W)T (c — Pi 0 Q o Q Vz : 11 z 112 < I, and apply the LMI robustness lemma to obtain the following LMI condition on Q, c, A/: 1 — A* (c — *W)T 0 (C-*W) p. Q 0 Q A il„ >- 0. Since the volume of £ is proportional to det Q, we obtain a maximum- volume ellipsoid inscribed in the intersection of the £zs, by solving the following convex optimization problem: max log det(Q) s.t.: Q >- 0, i = 1 Clearly, a maximal sum-of-semiaxis-lengths ellipsoid contained in Di£i can be obtained analogously, by maximizing trace Q, under the same constraints of problem (11.40). 11.4.2.5 Minimal ellipsoid containing the intersection of ellipsoids. On the contrary to all the previous minimal and maximal ellipsoidal covering problems, the problem of finding a minimal ellipsoid £ containing the intersection of m given ellipsoids £{ (see Figure 11.15) is computationally hard. This problem does not admit an exact solution SEMIDEFINITE MODELS 417 computable via convex optimization. However, we can find a suboptimal solution via convex optimization, based on a sufficient condition for the containment DjSj C £. Let the given ellipsoids £[ be represented as in (11.36) and let the to-be-determined, covering ellipsoid £ be represented as in (11.37). Then we have from the 5-procedure that a sufficient condition for the containment DjSj C £ is that there exist scalars > 0 such that (11.17) holds, that is (using the fact that Fq = A2, go = Ab, ho = bTb — 1 ,b= Ab): A2 Ab bT A b 1 A 0. Rewriting the previous inequality in the form " F0 b bT -1 ' F{ gi ' . & hi . -< 0 and applying the Schur complement rule, we obtain the LMI condition Fq - E'=i r,Ft b - Hit ?igi 0 bT - E'Li TlgJ -1 - riU Tjhj bT 0 b -F0 Then, a suboptimal minimum volume ellipsoid can be computed by solving the following convex optimization problem: s.t.: > 0, A suboptimal ellipsoid can also be easily computed for the case of the trace criterion (we minimize the sum of the squared semi-axis lengths). To this end, we describe £ as £ = {x € Rn : (x — c)TP~1(x — c)< 1}, which corresponds to the previous representation for Fq = P_1, go = P~lc, Hq — cTP~1c — 1. Hence, the sufficient condition (11.17) becomes P~l P~xc cTP~1 ctP-1c-1 which we rewrite as E"ii rtFt -ZiLingi -1 - E'li Tib, A 0. Figure 11.15 An ellipsoid E circumscribing the intersection of two ellipsoids. 418 optimization models Using the Schur complement rule, this latter condition is equivalent to the following LMI in P, c and Ti,..., rw: ■ LT=i Ti8i 1 ■LT^rigJ -1 - E™1 T,h, cT I c -P -< 0. Then a suboptimal minimum trace ellipsoid can be computed by solving the following SDP: min trace (P) s.t.: ti > 0, i = 1,..., m, 11.5 Exercises Exercise 11.1 (Minimum distance to a line segment revisited) In this exercise, we revisit Exercise 9.3, and approach it using the 5-procedure of Section 11.3.3.1. 1. Show that the minimum distance from the line segment C to the origin is above a given number R > 0 if and only if || A(p — q) + qH2 > R2 whenever A(1 — A) > 0. 2. Apply the 5-procedure, and prove that the above is in turn equiv¬ alent to the LMI in r > 0: llp-<?ll2 + T -t/2 (?T(P - 9) “ T/2 qTq-R2 >■ 0. 3. Using the Schur complement rule,15 show that the above is consistent with the result given in Exercise 9.3. Exercise 11.2 (A variation on principal component analysis) Let X = € W1'171. For p = 1,2, we consider the problem (pp{X) = max \xju\v : uTu — 1. (11-43) u i=1 If the data is centered, the case p = 1 amounts to finding a direction of largest "deviation" from the origin, where deviation is measured using the ^i-norm; arguably, this is less sensitive to outliers than the case p = 2, which corresponds to principal component analysis. 15 See Theorem 4.9. 1. Find an expression for $2, in terms of the singular values of X. SEMIDEFINITE MODELS 419 2. Show that the problem, for p = 1, can be approximated via an SDP, as (pi(X) < ipi(X), where ipi(X) = max y xjLZx/ : If ^ 0, traceU = 1. 17 /=1 Is i/?i a norm? 3. Formulate a dual to the above expression. Does strong duality hold? Hint: introduce new variables z* — xj Uxi, i = 1,..., m, and dualize the corresponding constraints. 4. Use the identity (8.52) to approximate, via weak duality, the problem (11.43). How does your bound compare with ipi? 3. Show that ipi(X)2 = min trace D : D diagonal, D y 0, D y XTX. Hint: scale the variables in the dual problem and optimize over the scaling. That is, set D = ocD, with Amax(XD-1XT) = 1 and oc > 0, and optimize over oc. Then argue that we can replace the equality constraint on D by a convex inequality, and use Schur complements to handle that corresponding inequality. 6. Show that <h(X)= max ||Xp||2. v : \\vLo<l Is the maximum always attained with a vector v such that \vi\=l for every /? Hint: use the fact that ||z||i = max zTv. 7. A result by Yu. Nesterov16 shows that for any symmetric matrix q £ problem u* = max vT Qv v : |M|oo<l can be approximated within zr/2 relative value via SDP. Precisely, (2/zr)rf* < p* < rf*, where rf* = min trace D : D diagonal, D y Q■ (1144) Use this result to show that < <MX). That is, the SDP approximation is within « 80% of the true value, irrespective of the problem data. 16 Yu. Nesterov, Quality of semidef- inite relaxation for nonconvex quadratic optimization, discussion paper, CORE, 1997. 420 OPTIMIZATION MODELS 8. Discuss the respective complexity of the problems of computing (p2 and \pi (you can use the fact that, for a given m x m symmetric matrix Q, the SDP (11.44) can be solved in 0(ra3)). Exercise 11.3 (Robust principal component analysis) The following problem is known as robust principal component analysis:17 p* = min||A —XlU + AUXHa, where || • ||* stands for the nuclear norm/8 and || • ||i here denotes the sum of the absolute values of the elements of a matrix. The interpretation is the following: A is a given data matrix and we would like to decompose it as a sum of a low rank matrix and a sparse matrix. The nuclear norm and i\ norm penalties are respective convex heuristics for these two properties. At optimum, X* will be the sparse component and A — X* will be the low-rank component such that their sum gives A. 1. Find a dual for this problem. Hint: we have, for any matrix W: IIW||*= max traceWTY : ||Y||2 < 1, where || • H2 is the largest singular value norm. 2. Transform the primal or dual problem into a known programming class (i.e. LP, SOCP, SDP, etc.). Determine the number of variables and constraints. Hint: we have ||Y||2 < 1 <£==> I — YYT hO, where I is the identity matrix. 3. Using the dual, show that when A > 1, the optimal solution is the zero matrix. Hint: if Y* is the optimal dual variable, the complementary slackness condition states that |Y-*.| < A implies X? — 0 at optimum. Exercise 11.4 (Boolean least squares) Consider the following problem, known as Boolean least squares: (f) = min || Ax - b\\2 : xz-E {—1,1}, i — Here, the variable is r G ]Rn, where A E 1Rm,n and b E Rm are given. This is a basic problem arising, for instance, in digital communications. A brute force solution is to check all 2n possible values of x, which is usually impractical. 17 See Section 13.5.4. 18 The nuclear norm is the sum of the singular values of the matrix; see Section 11.4.1.4 and Section 5.2.2. l. Show that the problem is equivalent to (p = min trace(ATAX) — 2 bT Ax + bT b s.t.: X — xxT, Xu 1, i 1,..., ti, in the variables X = XT G Rn,n and igR”. 2. The constraint X = xxT, i.e., the set of rank-1 matrices, is not convex, therefore the previous problem formulation is still hard. Show that the following '"relaxed" problem, which is an SDP, </>sdp = min trace(ATAX) — 2 bT Ax + bT b x X 1 . „ s.t.: T >: 0, x1 1 Xjj 1, i 1,..., Tif produces a lower-bound to the original problem, i.e., (p > (psdp- Once this problem is solved, an approximate solution to the original problem can be obtained by rounding the solution: xsdp = sgn(x*), where x* is the optimal solution of the semidefinite relaxation. 3. Another approximation method is to relax the non-convex constraints X{ G { — 1,1} to convex interval constraints — 1 < X; < 1 for all i, which can be written as ||x||oo < 1. Therefore, a different lower bound is given by: (p > (pint = mm \\Ax - b\\\ : ||x||oo < 1. Once this problem is solved, we can round the solution by xint = sgn(x*) and evaluate the original objective value ||Axint — b\\\. Which one of (psdp and cp{nt produces the closest approximation to (p? Justify your answer carefully. 4. Use now 100 independent realizations with normally distributed data, A G ]R10'10 (independent entries with mean zero) and b G R10 (independent entries with mean one). Plot and compare the histograms of ||Axsdp — &|li °f Part 2' ll^xint — b\\2 Part 3' anc* the objective corresponding to a naive method || Axis — b\\\, where Xjs = sgn((ATA)-1ATb) is the rounded ordinary least-squares solution. Briefly discuss the accuracy and computation time (in seconds) of the three methods. 3. Assume that, for some problem instance, the optimal solution (x,X) found via the SDP approximation is such that x belongs 422 OPTIMIZATION MODELS to the original non-convex constraint set {x : X{ G {—1,1}, i — 1,..., n}. What can you say about the SDP approximation in that case? Exercise 11.5 (Auto-regressive process model) We consider a process described by the difference equation y(t + 2) = oc\(t)y(t + \) +0L2{t)y{t) oc^(t)u(t)/ t = 0,1,2,..., where the u(t) G R is the input, y(t) G R the output, and the coefficient vector a(t) G R3 is time-varying. We seek to compute bounds on the vector a(t) that are (a) independent of t, (b) consistent with some given historical data. The specific problem we consider is: given the values of u(t) and y(t) over a time period 1 < t < T, find the smallest ellipsoid £ in R3 such that, for every t, 1 < t < T, the equation above is satisfied for some a(t) G £. 1. What is a geometrical interpretation of the problem, in the space of as? 2. Formulate the problem as a semidefinite program. You are free to choose the parameterization, as well as the measure of the size of £ that you find most convenient. 3. Assume we restrict our search to spheres instead of ellipsoids. Show that the problem can be reduced to a linear program. 4. In the previous setting, a(t) is allowed to vary with time arbitrarily fast, which may be unrealistic. Assume that a bound is imposed on the variation of cc(t), such as \\oc(t + 1) — cc(t) W2 < /3, where /3 > 0 is given. How would you solve the problem with this added restriction? Exercise 11.6 (Non-negativity of polynomials) A second-degree polynomial with values p(x) = yo + yi* + 1/2*2 is non-negative everywhere if and only if 3/0 yi/2 yi/2 yi which in turn can be written as an LMI in y — (yo/3/1/3/2)' ^ 0. yo yi/2 yi/2 yi In this exercise, you show a more general result, which applies to any polynomial of even degree 2k (polynomials of odd degree can't be non-negative everywhere). To simplify, we only examine the case k — 2, that is, fourth-degree polynomials; the method employed here can be generalized to k > 2. 1. Show that a fourth-degree polynomial p is non-negative everywhere if and only if it is a sum of squares, that is, it can be written as P(x) = X>(*)2, where qis are polynomials of degree at most two. Hint: show that p is non-negative everywhere if and only if it is of the form p(x) = Po (o - fll)2 + &l) ((* - «2)2 + &2) / for some appropriate real numbers az, b{, i = 1,2, and some po > 0. 2. Using the previous part, show that if a fourth-degree polynomial is a sum of squares, then it can be written as p(x) = 1 x x2 Q for some positive semidefinite matrix Q. 3. Show the converse: if a positive semidefinite matrix Q satisfies condition (11.43) f°r every x, then p is a sum of squares. Hint: use a factorization of Q of the form Q — AAT, for some appropriate matrix A. 4. Show that a fourth-degree polynomial p(x) = 1/0 + V\x + 3/2*2 + 1/3 x3 +1/4 x4 is non-negative everywhere if and only if there exists a 3 x 3 matrix Q such that Qto, yi-i = Qij' ( = !/••-/5. Hint: equate the coefficients of the powers of x in the left and right sides of equation (11.43). Exercise 11.7 (Sum of top eigenvalues) For X E Sn, and i E {1,..., 7?}, we denote by AZ-(X) the z-th largest eigenvalue of X. For k E {1,..., ft}, we define the function fa : Sn R with values A(x) = f>(x). This function is an intermediate between the largest eigenvalue (obtained with k — 1) and the trace (obtained with k — ft). 424 OPTIMIZATION MODELS 1. Show that for every £ G R, we have /jt(X) < £ if and only if there exist Z G S” and s G R such that t — ks — trace(Z) >0, Z>z 0, Z — X + si ^ 0. Hint: for the sufficiency part, think about the interlacing property19 of the eigenvalues. 2. Show that /*. is convex. Is it a norm? 3. How would you generalize these results to the function that assigns the sum of the top k singular values to a general rectangular m x n matrix, with k < min(ra, ft)? Hint: for X G Rm,n, consider the symmetric matrix 19 See Eq. (4.6) Introduction to algorithms In this chapter we illustrate some iterative techniques (algorithms) for solving numerically, up to a given accuracy, different classes of optimization problems. These methods share a common general structure: some initial information (such as an initial candidate point xq E R”) is given at iteration k — 0, together with a desired numerical accuracy e > 0. At each iteration k = 0,1,..., some information about the problem is collected at the current point x and this information is used to update the candidate point according to algorithm-specific rules, thus obtaining a new point x^+i- Then, a stopping criterion is checked (usually by verifying if the current solution meets the desired accuracy level e). If yes, then the current point is returned as a numerical solution (to accuracy e) of the problem; otherwise we set k <— k 1, and iterate the process. Algorithms differ from one another with respect to the type of information that is collected at point x]c/ and to the way this information is used to update the current solution. A typical update rule takes the form of a simple recursion Xfc+l =Xk + SkVk, (12.1) where the scalar s^- > 0 is called the stepsize, and v* E R” is the update (or search) direction. The meaning of Eq. (12.1) is that from the current point xwe move away along direction and the length of the move is dictated by the stepsize s*. Some algorithms, such as the descent methods described in Section 12.2.1, can be applied to general (i.e., possibly non-convex) problems. However, no kind of "guarantee" of convergence can usually be given under such generality. On the contrary, if the problem is convex, then (possibly under some further technical assumptions) the algorithms presented in this chapter are typically guaranteed to 426 OPTIMIZATION MODELS converge to a global optimal solution (if such a solution exists). Also, under convexity assumptions, we can estimate the rate at which convergence happens, and in some cases predict a priori the number of iterations required to reach the desired numerical accuracy. A basic classification of algorithms distinguishes between first- order and second-order methods. This terminology derives from classical unconstrained optimization of differentiable functions, and refers to whether only first derivatives (gradient) of the objective function are used to determine the search direction at each step (first-order methods), or also the second derivatives (Hessian) are used for the same purpose (second-order methods). In this chapter, we outline some standard first- and second-order methods for unconstrained minimization, and then discuss various extensions that enable us to account for the presence of constraints, for non-differentiability of the objective and the constraint functions, and for decentralized structures of the optimization process itself. This chapter is organized as follows: Section 12.1 presents some technical preliminaries that are later needed in the analysis of the optimization algorithms. Section 12.2 discusses techniques for unconstrained minimization of differentiable non-convex or convex objective functions, presenting the gradient method, general descent methods, and the Newton and quasi-Newton algorithms for convex minimization, together with versions adapted to deal with the preseTtce of linear equality constraints on the variables. Section 12.3 discusses techniques for dealing with differentiable convex optimization problems with inequality constraints, and presents in particular a second-order method (the barrier method, Section 12.3.1) and a first-order method based on the concept of proximal gradients (Section 12.3.2). This section also discusses some specialized techniques for the LASSO and related problems. Section 12.4 presents methods for constrained optimization of convex but possibly non- differentiable functions. It describes in particular the projected subgradient method (Section 12.4.1), the alternate subgradient method (Section 12.4.2), and the ellipsoid algorithm (Section 12.4.3). $ec~ tion 12.3 we discuss coordinate descent methods. Finally, in Section 12.6 we briefly outline decentralized optimization techniques, such as the primal and the dual decomposition methods. Remark 12.1 Some parts of this chapter are rather technical. These technicalities are needed for the analysis of the various types of convergence of the algorithms. The reader who is mainly interested in the key ideas and in the general description of the algorithms may, however, safely skip all convergence proofs, as well as most of Section 12.1. 12.1 Technical preliminaries Most of the proofs of convergence of the optimization algorithms described in the next sections hinge upon one or both of the following "regularity" properties of the functions involved in the problem description, namely Lipschitz continuity of the gradients and strong convexity. These properties and their consequences are summarized in the following subsections. Assumption 1 (Working hypotheses) In the rest of this chapter, we shall make the standing assumption that /0 : R" —> R is a closed function, that is all the sublevel sets Sa = {x : fo(x) < oc}, oc G R, are closed sets. Further, we assume that /0 is bounded below, and that it attains its (global) minimum value /q at some point x* G dom/g. Given a point xo G dom/o, we define So as the sublevel set So = {x : fo(x) < fo{x0)}. 12.1.1 Gradient Lipschitz continuity A function /0 : R" —> R is said to be Lipschitz continuous on a domain S C RM, if there exists a constant R > 0 (possibly depending on S) such that \fo(x) - fo{y)\ < R\\x - y\\2, Vx,yeS. A differentiable function /0 : R" —> R is said to have Lipschitz continuous gradient on S if there exists a constant L > 0 (possibly depending on S) such that IIV/0(x) - V/0(y)||2 < L\\x-y\\i, Vx,yeS. (12.2) Intuitively, /0 has Lipschitz continuous gradient if its gradient "does not vary too fast." Indeed, if /0 is twice differentiable, the above condition is equivalent to a bound on the Hessian of /0. The following proposition summarizes some useful implications of gradient Lipschitz continuity.1 Lemma 12.1 1. If fo : R" —> R is twice continuously differentiable, then (12.2) holds if and only if /0 has bounded Hessian on S, that is I|V2/o(x)||f < L, Vies. 2. If fo is continuously differentiable, then (12.2) implies that l/o(x)-/o(y)-V/o(y)T(x —y)| < ^||x —ylli, Vx,y e S. (12.3) 1 Proofs of the following facts can be found, for instance, Yu. Nesterov, Introductory Lectures on Convex Optimization: a Basic Course, Springer, 2004. 428 OPTIMIZATION MODELS 3. If /0 is continuously differentiable and convex, then (12.2) implies that 0</o(x)-/0(y)-V/0(y)T(x-y)< ^\\x-y\\l, Vx,yeS, and that the following inequality holds Vx,y E S: }l|V/o«-V/0(y)||i < (V/o(x)-V/0(y))T(x-y) < L\\x-y\\22. 12.1.1.1 Quadratic upper bound. Inequality (12.3) implies that, for any given y E S, /o(*) is upper bounded by a (strongly) convex quadratic function: fo(x) </o(y) + V/o(y)T(x —y) + ^||x —y|||, Vx,yes, (12.5) where the quadratic upper bound function is defined as /up (x) = /o(y) + V/o(y)T(x — y) + ^||x — y III- (12.6) 12.1.1.2 Implications on the unconstrained minimum. Let x* E dom/o be a global unconstrained minimizer of /q. Then, clearly, it must hold that x* E So (recall So is the sublevel set defined in Assumption 1), hence we may write that x* G argmin/o(x), and, if /0 is differentiable, the unconstrained optimality condition requires that V/o(x*) = 0. If, further, /0 has Lipschitz continuous gradient on So, then it holds that j^HY/bOOIl! </o(x)—/0 < ^\\x-x*\\l, Vxes0. (12.7) The bound on the right in (12.7) is readily obtained by evaluating (12.3) at y — x*, and recalling that V/o(x*) = 0. The bound on the left in (12.7) is instead obtained by first evaluating (12.3) at x — y — ^ V/o(y), which yields /o(*)</o(y)-L||v/o(y)||i VyeS0. Then, since /q < /o(*), Vx E dom/o, this last inequality also implies /0 ^ /o(y) “ jl W^foiy) II2' ^ S0, which is the desired bound. 12.1.1.3 Lipschitz constant for functions with compact sublevel sets. If /0 is twice continuously differentiable, and the sublevel set So — (x : fo(x) < fo(xo)} is compact, then /0 has Lipschitz continuous gradient on So- This is due to the fact that the Hessian is continuous, hence ||V2/o(x)||f is continuous and, from the Weierstrass theorem, it attains a maximum over the compact set So- Therefore, applying point 1 of Lemma 12.1, we have that /0 has Lipschitz continuous gradient on So, and a suitable Lipschitz constant is L = max ||V2/o(x)||f. Compactness of the sublevel set So is guaranteed, for instance, if /0 is coercive (see Lemma 8.3), or when /0 is strongly convex (see the next section). 12.1.2 Strong convexity and its implications We recall from the definition in Section 8.2.1 that a function /0 : W1 —> IR is said to be strongly convex if there exists m > 0 such that fo(x) - y||x||£ is convex. From this definition it also follows that if /0 is strongly convex and twice differentiable, then V/o(x) >: ml, Vx e dom/o. Strong convexity has several other interesting implications, as discussed next. 12.1.2.1 Quadratic lower bound. We know from the characterization in (8.4) that a differentiable function / is convex if and only if Vx,y G dom/, /(y) > f(x) + V/(x)T(y - x), (12.8) which means that the linear function /(x) + V/(x)T (y — x) is a global lower bound on/(y). Then, applying (12.8) to /(x) = /o(x) — j \\xW2, we have that a differentiable /0 is strongly convex if and only if Vx,ye dom/o, /0(y) > /0(x) + V/0(x)T(y - x) + j ||y - x\\\, which means geometrically that at any x E dom/o, there is a convex quadratic function /low(y) =/oM + V/o(x)T(y-x) + y Ily — x||2 that bounds from below the graph of /0, that is such that /0 (y) > /low(y)/ f°r all y G dom f0, see Figure 12.1. 430 OPTIMIZATION MODELS This quadratic lower bound property also holds for non-differen- tiable functions, using subgradients instead of gradients. Indeed, if / is convex, but possibly non-differentiable, then it holds that, for all x E relint dom /, f(y) > fix) + hj(y- x), Vy € dom/, \fhx € df(x), where hx is a subgradient of / at x. Thus, if /0 is strongly convex, but possibly non-differentiable, applying the previous inequality to the convex function fo(x) — (m/2)||x||2, it holds that, Vx E relint dom/o and for gx E 3/0(x), My) - y llylli ^ /o(*) - y Ml + (gx - mx)T{y - x), thus ^ My) > Mx) +gl(y~x) -mxT(y-x) + y(||y||i - ||x|||) = /oW+^J(y-*) + yl|y-*ll2 holds for all y E dom/ and all gx E dfo(x). Thus, also in the non- differentiable case, a strongly convex function /0 admits a quadratic lower bound, at any x E relint dom /g. 12.1.2.2 Quadratic upper bound. We next show that if /g is strongly convex and twice continuously differentiable, then /g has Lipschitz continuous gradient over Sg, hence it can be upper bounded by a quadratic function. We start by observing that, for any initial point xg E dom/g, if /g is strongly convex then the level set Sg = {y : /g(y) < /o(*g)} is contained in a regular ellipsoid and hence it is bounded. To see this fact, it suffices to consider the strong convexity inequality (12.9), from which we obtain that y e So 0 > /o(y) -Mxo) > V/o(x0)T(.y-x0) + ylly-xolli where the region of y satisfying the inequality V/o(xo)T(y ~ *o) + j\\y — xo\\2 < Oisa bounded ellipsoid. Since the Hessian of /0 is assumed to be continuous, it remains bounded over bounded regions, which implies that there exists a finite constant M > 0 (possibly depending on xq) such that V2/o(x) ^ MI, Vx G S0. By Lemma 12.1, this implies in turn that /0 has Lipschitz continuous gradient over So (with Lipschitz constant M), hence it admits a quadratic upper bound, i.e., fo(x) </o(*) + V/0(x)T(y-x) + y||y-x||^ Vx,i/ e S0. 12.1.2.3 Bounds on the optimality gap. Summarizing the findings of the previous two sections, we have that for a strongly convex and twice differentiable function /0 it holds that ml ^ V2/o(x) ^ MI, Vx G So, and that /0 is upper and lower bounded by two convex quadratic functions, as follows: /low(y) < /0(y), Vx, y e dom/0, foiy) </uP(y), Vx,yeS0, /low(y) = Mx) + V/o(x)T(y-x) + y||y-x||^, (12.10) /up(y) = /oM + V/o(x)T(y - x) + y ||y - x|||- (12.11) From these two inequalities we can derive key bounds on the gap between the value of /0 at any point x G dom/o and the global unconstrained minimum value /q . Let the minimum /q over dom/o be attained at some x* G dom/o (such a minimizer is unique, since /0 is strongly convex). As we already discussed, it must be that x* G So- Then, writing the inequality /low(y) < foiy) for X = x* we obtain (since V/0(x*) = 0) My) >/o +yliy —*12' Vyes0. (12.12) Further, the inequality /o(y) > /iow(y) Vy G So, implies that it also holds that /0(y) > minz f\ow(z). Setting the gradient of f\ow{z) to zero yields the minimizer z* = x — ^ V/o(x), hence /o(3/) — ir^n/low(2) = flow ) = Mx)~f:\\VMx)\\l Vx,y e S0. 432 OPTIMIZATION MODELS Since this last inequality holds for all y, it also holds for y — x*, thus fo > № - T||V/0(x)|& Vx e So. (12.13) Putting together (12.12) and (12.13), we obtain j\\x-x*\\l<f0(x)-fS<2-\\Vfo(x)\\l, Vx e S0, (12.14) which shows that the gap /0 (*) — /0 (as we^ as ^e distance from x to the minimizer x*) is upper bounded by the norm of the gradient of /0 at x. Also, following a reasoning analogous to that in Section 12.1.1.2, we obtain the "swapped" inequality llv/o(x)||2 </o(x) —/0 < y||x-x*||^, VxgSq. (12.15) 12.2 Algorithms for smooth unconstrained minimization The focus of this section is on iterative algorithms for solving numerically the unconstrained optimization problem /0* = AM- where /0 : Rn -> R is the objective function. We shall here assume that Assumption 1 holds, that an initial point xo £ dom/o is given, and that /0 is continuosly differentiable.2 Notice that no convexity assumption is yet made, at this point. 12.2.1 First-order descent methods We start by discussing a simple class of first-order methods where the iterates are of the form (12.1), and the search direction vis computed on the basis of the gradient of /0 at x^. 12.2.1.1 Descent directions. Consider a point x^ £ dom/o and a direction £ Rn. Using the first-order Taylor series expansion3 for /0, we have that fo(xk + SVk) ~ /o(Xjfc) + sVfo(xk)Tvk, for s -» 0. The local rate of variation of /0, in the neighborhood of x^ and along the direction is thus given by + sM-AM>=v/ofa)V s—HJ S The local directional rate of variation 5^ is thus positive whenever V/o(xjt)T Vk > 0, that is for directions vthat form a positive inner 2 We shall informally define as smooth optimization problems the ones with objective and constraint functions that are once or twice differentiable, and as non-smooth the other cases. 3 See Section 24.5.2. product with the gradient V/o(*jt)* On the contrary, directions v* for which Vfo(xk)Tvk < 0 are called decrease (or descent) directions. The reason for this is that if the new point x^+i is chosen according to (12.1) as Xfc+i — xk + svk, then fo(xk+i) < fo(xk)r f°r sufficiently small s > 0. (12.16) It is then natural to ask what is the direction of maximum local decrease: from the Cauchy-Schwartz inequality we have that -||V/o(**)l|2|M2 < Vfo{xk)Tvk < \\Vf0(xk)\\2\\vk\\2t hence S^ is minimal over all v^ with \\v1d\2 = 1/ for Vk = ~mo(xf)\\2' (12-17) i.e., when v^ points in the direction of the negative gradient. The direction Vfc in (12.17) is thus called the steepest descent direction, with respect to the standard Euclidean norm. 12.2.1.2 A descent scheme. From the discussion in Section 12.2.1.1 follows quite naturally the idea of updating recursively the search points according to (12.1), choosing at each iteration the search direction as a descent direction, such as, for instance, the anti-gradient Vic = — ^fo{xk)- A generic descent method is outlined in Algorithm 9. The behavior of this algorithm depends on the actual choice of the descent directions, and on the strategy used for determining the stepsizes s^. It should be clear that the fact that v^ is a direction of descent does not imply that the function value at x^+i will decrease for any sic > 0, see Figure 12.2. Indeed, the decrease guaranteed by (12.16) is only local (i.e., for infinitesimal s^), hence a key problem is to find a finite stepsize that guarantees a sufficient decrease. A discussion on typical techniques for stepsize selection is given in the next section. 12.2.1.3 Stepsize selection. Consider the restriction of /0 along the direction vjc’. 4>(s) = fo{xk + svk), s > 0. Clearly, (p is a function of the scalar variable s, and (p(0) = /o(xjt)* Choosing a suitable stepsize amounts to finding s > 0 such that (p(s) < <p(0). A natural approach would then be to compute s that minimizes (p, that is s* = argmin <p(s). s> 0 Figure 12.2 Even if /0 is convex, the stepsize should be chosen with care, in order to insure a sufficient function decrease. Here, a local direction of decrease at x* is given by a move to the "left." If > 0 is small enough we may end up at point x[+1, for which fo{xl+1) < fo{xk). However, if s* is too large, we may end up at point xl!+1, where the decrease condition is not satisfied. 434 OPTIMIZATION MODELS Algorithm 9 Descent algorithm. Require: /0 : Rn -> IR differentiable, xo G dom/o, e > 0 1: Set k = 0 2: Determine a descent direction v 3: Determine step length s*- > 0 4: Update: xfc+1 = + skvk 5: If accuracy e is attained, then exit and return x^, else let k <— k + 1 and go to 2. This method is called exact line search, and provides a stepsize s* with the best possible decrease. However, finding s* requires solving a univariate (and generically non-convex) optimization problem, which may be computationally demanding. For this reason, exact line search is rarely used in practical algorithms. A more practical alternative consists of searching for an s value guaranteeing a sufficient rate of decrease in (p. Consider the tangent line to (p at 0: <p(s) ~ £(s) = <j>{0) + sdk, 6k = Vfo(xk)Tvk < 0, s > 0. £ is a linear function with negative slope 6^. Now, for oc G (0, 1), it holds that the line I(s) = (p(0) + s(ocSjc)/ s > 0 lies above i(s), hence it also lies above (p(s), at least for small s > 0, see Figure 12.3. Figure 12.3 Illustration of the Armijo condition and backtracking line search. Bold segments denote the regions where the condition is met. Since cp is bounded below while £ is unbounded below, there must exist a point where cp(s) and £(s) cross; let s be the smallest of such points. All values of s for which cp(s) < I(s) provide a sufficient rate of decrease, given by the slope cc5k of t. This rate condition is known as the Armijo condition, stating that the valid stepsizes must satisfy <p(s) < <t>(0)+s(oc6k), or, more explicitly, fo(xk + svk) < fo{xk) + sa{Vfo{xk)Tvk), for the chosen oc £ (0, 1). The Armijo condition is clearly satisfied by all s £ (0, s), hence this condition alone is still not sufficient to insure that the stepsize is not chosen too small (which is necessary for convergence of the method). We shall see in Section 12.2.2.1 that, under the Lipschitz continuous gradient assumption, there exists a constant stepsize that satisfies the Armijo condition. However, such a constant stepsize is generally unknown in advance (or it may be too small for practical efficiency of the method), hence a usual practice amounts to employing a so- called backtracking approach, whereby an initial value of s is fixed to some constant value s^t (typically, S[n{t = 1), and then the value of s is iteratively decreased at a fixed rate /5 £ (0, 1), until the Armijo condition is met, see Algorithm 10. Algorithm 10 Backtracking line search. Require: /0 differentiable, oc £ (0, 1), /5 £ (0, 1), E dom/o, vk a descent direction, s^t a positive constant (typically, s^t = 1) 1: Set s = s^it, 4 = Vf0(xk)Tvk 2: If fo(xk + svk) < f{xk) + scc5k, then return sk = s 3: Else let s <— /3s and go to 2 12.2.2 The gradient method In this section, we analyze more closely the convergence properties of Algorithm 9, for the most common case where the descent direction is simply the negative gradient (that is, the direction of steepest local descent). We take henceforth Vh = -V/o(xjt). Very little can actually be said about the properties of the gradient descent algorithm, unless we make some additional assumptions on the regularity of the objective function. More precisely, we shall assume that /0 has Lipschitz continuous gradient on So, that is, there 436 OPTIMIZATION MODELS 12.2.2.1 Lower bound on the stepsize. Let now xbe the current point in a gradient descent algorithm, and let x = xk-sVf0{xk). Evaluating /0 and /up in (12.6) at x we obtain the restrictions of these functions along the direction vk = — V/o(xfc): (p(s) = fo(xk - sVfo(xk))r <Ms) - Mxk)-s\\VMxk)f2 + s2±\\Vfo(xk)\\22r where (12.3) clearly implies that <P(s) < <puP(s), s > 0. It can be readily observed that cp(s) and <£up (s) have the same tangent line at s = 0, which is given by Consider then the line previously defined for the Armijo rule: This line intercepts the upper bound function cpup(s) at the point It is then clear that a constant stepsize s= sup would satisfy the Armijo condition at each iteration of the algorithm, see Figure 12.4. Note, however, that one would need to know the numerical value of the Lipschitz constant L (or an upper bound on it), in order to implement such a stepsize in practice. This can be avoided by using backtracking. Indeed, suppose we initialize the backtracking procedure with s = Sinit- Then, either this initial value satisfies the Armijo condition, or it is iteratively reduced until it does. The iterative reduction certainly stops at a value s > jSsup, hence backtracking guarantees that To summarize, we have that, for both constant stepsizes and step- sizes computed according to a backtracking line search, there exists a constant > 0 such that ?(s) =fo(xk) - soc\\Wfo(xk)\\l, oc G (0, 1). Sup — ^ (1 0i). sk > min (sjnit, /3sup) = sjb. Sk > Vfc = 0,1,. 12.2.2.2 Convergence to a stationary point. Consider a gradient descent algorithm xk+1 = xk-skVfo(xk), with stepsizes computed via a backtracking line search (or constant stepsizes), satisfying the Armijo condition fo(xk+i) < fo(xk) - ska\\Vf0(xk)\\l (12.21) fo(xk) — fo(xk+i) > skoc\\Vfo(xk)\\l [using (12.20)] > siba\\Vf0{xk)\\l, V/c = 0,1,... Summing these inequalities from 0,1,k, we obtain s^LWVMxdWl < /o(*o) -/o(*k+i) < fo(xo)-fo- 1=0 Since the summation on the left is bounded by a constant as k -» oo, we conclude that it must be that lim ||V/o(*jfc)||2 = 0. k-> oo This means that the algorithm converges to a stationary point of /0, that is to a point where the gradient of /0 is zero. Notice that such a point is not necessarily a local minimum of /0 (for instance, it could be an inflection point, see Figure 12.3). Further, by noticing that EHV/o(*,)|lI > (* + 1) minjV/o(*i)|& Figure 12.4 For a function with Lip- schitz continuous gradient there is a constant stepsize s = sup that satisfies the Armijio condition. Figure 12.5 Stationary points are points where the gradient is zero. These include extrema (max, min), as well as inflection, or saddle, points. 438 OPTIMIZATION MODELS we obtain from the previous inequality that gi £ (I222) where we defined g*k = min ||V/o(X/)||2. This means that the sequence of minimal gradient norms decreases at a rate given by the square-root of the number of iterations k. The stopping criterion in Algorithm 9 is then typically set as and, using (12.22), we obtain that this exit condition is achieved in at most _ [1/o(*o)-/o' %ax — 9 e2 slbx iterations, where the notation [•] denotes the smallest integer no smaller than the argument (i.e., the ceiling operator). 12.2.2.3 Analysis of the gradient method for convex functions. In the previous section we analyzed the convergence behavior of the gradient descent algorithm for generic, possibly non-convex, functions. We verified that, even under a Lipschitz continuity assumption on the gradient, only convergence to a stationary point can be guaranteed globally. In this section, we show that if the function /0 is convex, then convergence to a global minimum can be guaranteed, and an explicit bound can be derived on the rate at which /o(xfc) converges to /q . In the rest of this section we shall thus assume additionally that /0 is convex. First, we observe that, for convex /0, x* is a (global) minimizer if and only if V/o(x*) = 0, see Section 8.4.1. Therefore, the gradient algorithm converges to a global minimum point. We next analyze at which rate this convergence is reached. Consider the decrease in objective function obtained at one step of the gradient algorithm (we consider, for simplicity in the proofs, the backtracking parameter to be fixed at oc = 1/2): from (12.21) we have fo(xk+i) < /oOjO -s*a||V/o(xjfc)H2- (12.23) Since /0 is convex, it holds that /o(y) > fo(x) + V/o(x)T(y - x), Vx,y (E dom/o, which, for y = x*, gives fo(x) < /0 + V/o(x)T(x - x*), .V* e dom/o. Substituting this into (12.23), we obtain /o(**+i) < fo(xk) - Sk0L\\Vfo(xk)\\2 [letting a = 1/2] < f5 + Vfo(xk)'(xk-x*)-SMVMxk)\\ = /0 + ^ (lk - x*\\l - \\xk -x*- skVMxk)\\l) = /0 + ^ (ll^fc — ill — ll^fc+i — 111) [since sk > sjb] < /0* + A- (\\xk-x*\\j- ||xfc+i — ac*||l) • Considering this inequality for k = 0,1,..., we have /o(*l)-/o ^ 2^ (ll*0-**ll2- 11*1 -**112)/ /o(*2)-/o < (ll*l -**112-11*2-** ||i), /o(*3)-/o < 2^ (ll*2 — **111 — 11*3 — ** II2) ' Hence, summing the first k of these inequalities, we have that O/0M-/0*) s ^(ll*o-*‘ii2-ll*»-*’#!) < 2^11«-** ill- Now, since the sequence /o(*fc) — /0 is non-increasing with respect to k, its value is no larger than the average of the previous values of the sequence, that is /o(*fc)-/o < JE(/o(*«)-/o) < 2^jfclk-**lli (12.24) which proves that /o(*/0 ~* /0 at ieast at a rate which is inversely proportional to k. We then achieve an accuracy ef on the objective function, i.e., in at most r 11*0-** ni kmax — iterations. We shall see next that this bound on the rate of convergence can be improved, at least for the class of strongly convex func¬ 440 OPTIMIZATION MODELS 12.2.2.4 Analysis of the gradient method for strongly convex functions. We obtain improved convergence results for the gradient method, under an assumption of strong convexity on the objective function /0, i.e., there exists m > 0 such that fo(x) = /o(x) - y||x||ij is convex. We hence next assume, in addition to the hypotheses in Assumption 1, that /0 is twice continuously differentiable and it is strongly convex (strong convexity also implies that /0 has Lipschitz continuous gradient on So, for some Lipschitz constant M>m). Global convergence rate. We next derive a result on the convergence rate of the gradient algorithm on a strongly convex objective, which improves upon the generic estimate given in (12.24). To this end, consider again the objective decrease in one iterate guaranteed by (12.23), where for simplicity we set oc — 1/2: fo{xk+i) < /o(*jfc) — sjt«J|V/o(xfc)||i [for a = 1/2] = fo(xk)-Sj\\Vfo(xk)\\22 [since sk > sib] < f0(xk) - y ||V/o(xfc)ll2- Subtracting /q on both sides of this inequality, and using the bound previously derived in (12.13), we have that fo(xk+l) - /0 < (/o(*Jt)-/o)-yl|V/o(*fc)ll2 < (fo(xk) —/0) — 2wy (/o(xfc) —/0) = (1-msib)(/o(xO-/o). Now, we recall from (12.18), (12.19) that, for oc = 1/2, rasib = mmin(sinit,j6sUp) = min(msinit, f>{m/M)). Since jS < 1 and m/M < 1, we have that ms^ < 1, hence where c — 1 — ms^ E (0, 1). Applying the last inequality recursively from 0 to k, we obtain fo(xk) ~ /0 < c*(/o(*o) - /0 )> (12.25) which proves that convergence happens at a geometric rate. This type of convergence is usually named linear, and a reason for this is that the logarithm of the optimality gap /o(*fc) “ /0 decreases linearly with iterations, that is log(/o(**) - /0) < ^log c + d0, d0 = log(/0(x0) - /„*). We then see that an accuracy er is achieved on the objective function, i.e., fo(Xk)-f$ ^ e' in at most ^max — log(l/e') + d0' iterations. Further, from (12.25), together with Eq. (12.14) and (12.15) we have that m I. ^1') . , / \ \ . i-A/f j\\xk-x*\\l < M*k)-fo < ck(fo(xo)-fS) < cky ll*o - x*\\l> Il**-*l2 < ck/2\[^\\x0- x*\\2, which provides an upper bound on the rate of convergence of x*. to x*. Similarly, from the left inequality in (12.15) aRd the right inequality in (12.14) we obtain that 2^l|V/0(**)||i < fo(xk) -/0 < ck(fo(xo)~fZ) < c*^||V/o(*o)lli which provides an upper bound on the rate of convergence of the gradient to zero. Also, from (12.14) we may derive useful stopping criteria in terms of the accuracy on the objective function value and on the minimizer. In fact, if at some iteration k we verify in a gradient algorithm that the condition || V/o(x*.) II < e is met, then we can conclude that i.e., the current objective value /o(*fc) is ef = e2/(lm) close to the minimum. Similarly, from (12.14) we have that i.e., the current point x*. is c" — e/m close to the global minimizer 442 OPTIMIZATION MODELS 12.2.2.5 Summary of the convergence properties of the gradient method. We here summarize the convergence properties for the gradient descent algorithm, under our standard Assumption 1 and that /0 is continuously differentiable. Different types and rates of global convergence can be guaranteed, depending on additional properties of 1. If /0 has Lipschitz continuous gradient on So, then the gradient algorithm (with exact line search, backtracking line search, or constant stepsizes) converges globally to a stationary point of /0 (that is, a point where the gradient of /0 is zero). Moreover, min^o,...,*: || Vfo(xj) H2 goes to zero at a rate proportional to 1 / Vk, where k is the iteration count of the algorithm. 2. If /0 has Lipschitz continuous gradient and it is convex, then the gradient algorithm (with exact line search, backtracking line search, or constant stepsizes) converges to a global minimizer x*. Further, it produces a sequence fo(xk) that converges to the global minimum value /q at a rate proportional to 1 /k. 3. If /0 is strongly convex, then the gradient algorithm (with exact line search, backtracking line search, or constant stepsizes) converges to the (unique) global minimizer x*. Further, the sequences Wfo(Xk) -fo II2, ll*fc - x*\\2, and ||V/o(xjfc)||2 all converge to zero at a linear rate, that is the logarithm of these sequences tends linearly to —00. It is important to remark that the important part in the analysis developed in the previous sections is to highlight the worst-case functional dependence of the accuracy of the iterates with respect to k. The precise numerical value of the accuracy bounds, however, is hardly useful in practice, since it requires the knowledge of several constants and quantities (e.g., L, M, m, \\xq — x*||2, etc.) that are rarely known exactly. 12.2.3 Variable-metric descent methods A variation on the gradient descent scheme is derived by considering descent directions that are obtained as a suitable linear transformation of the gradient. These methods employ a standard recursion of the type Xk-1-1 = Xk + skvk, Vk = -HkVfo(xk), Hk.y 0. That is, the gradient is pre-multiplied by a symmetric positive definite matrix Hk. Clearly, vk is a descent direction, since (due to positive definiteness of Hk) h = V/0.(xfc)T^ = -Vf0(xk)THkVf0(xk) < 0, Vfc. The name "variable metric" derives from the fact that Hk y 0 defines an inner product (x,y)k = xTHky on Rn, and hence induces a norm (or metric) ||x||fc = \JxTHkx at each iteration of the algorithm, and vk is the steepest descent direction, with respect to this norm. If matrices Hk are chosen so that Hk >z coIn, for some to > 0 and V/c, (12.26) then we have that |V/0(**)To*| = VMxk)THkVf0(xk)T > c;||V/ok)||i Vfc, IHI2 = \\HkVfo(xk)\\2 > cv\\Vf0(xk)\\2r Vfc. It is then not difficult to see that all steps, onwards from Eq. (12.21), essentially follow in an analogous manner, by substituting stepsizes s 4- cos', where s' is the stepsize for the variable-metric method. Therefore, all the previous convergence results, as summarized in Section 12.2.2.3, aPply to ^e variable metric methods as well (under the assumption (12.26)). However, although the results on the global convergence rate are the same as for the standard gradient method, using suitable matrices Hk may drastically change the local convergence rate of the algorithm. By local we here mean the rate at which the algorithm converges to a local minimum x*, when the initial point xq is sufficiently close to x*. Under suitable hypotheses (such as the Hessian of /0 at x* is positive definite), it can be proved (we did not do it) that the standard gradient algorithm converges to x* locally at a linear rate, that is IIxk — x* H2 < K • ak, if ||xq — x* || is small enough, where a < 1 and K is some constant (that depends on xq). Notice that this result holds without the need of convexity or strong convexity assumptions. Incidentally, we did prove that, for strongly convex functions, the gradient algorithm converges globally, and not only locally, at a linear rate. This local linear convergence rate can indeed be improved, by using a suitable variable-metric method. For example, we shall see in Section 12.2.4 that the (damped) Newton method is nothing but a variable-metric algorithm, in which one 444 OPTIMIZATION MODELS chooses H^1 — V2/o(xfc), aRd superlinear local convergence can be proved for this method. Other specific variable-metric algorithms are discussed in Section 12.2.5. 12.2.4 The Newton algorithm The Newton method is a well-known iterative technique, originally used for finding a root of a nonlinear function of one variable, say g : R —> R. In order to determine a point for which g(x) = 0, one proceeds as follows: starting from a current candidate point x^, we approximate g locally with its tangent line g(x) = g(x%) + gf(xjc)(x — *fc), and then say that the updated candidate x*-+i f°r the root is the point where g(x) = 0, that is v _ v 8(xk) 1+1 * s'te)' In our optimization context, we essentially adapt this idea to the multivariate setting, observing that the (unconstrained) minima we are seeking are nothing but the "roots" of the system of equations V/o(x) = 0. Informally, our function g is now the gradient of /0, and the derivative of g is the Hessian matrix of /0. Therefore, the Newton update formula becomes ' *k+1 =xk- [V2/o(^)]_1V/o(^), k = 0,1,... (12.27) (this formula clearly implies that the Hessian should be non-singular, in order to be meaningful). There is also an alternative and more formal interpretation of the recursion (12.27): consider the second-order Taylor approximation of /0 around the current candidate point x^\ fo(x) - fqk)(x) (12.28) = fo(xk) + V/o(x;t)T(x -xk) + 2^ - xk)TV2fo(xk)(x - xk), and assume that V2/o(x^) >- 0. The minimum of the quadratic approximation (x) is characterized by v/f'(*) = Vfo(xk) + V2f0(xk)(x - xk) = 0, which is attained at x — x^ — [V2/o(x^)]_1 V/o(xfc), and the minimum value of the quadratic approximation is mm f\k\x) = f0(xk) - 1, 4 = V/o(^)T[V2/o(^)]“1V/o(^), and Xk > 0 is called the Newton decrement, since it quantifies the gap between the current value of fo(xk) and the minimum of the quadratic approximation: fo(xk) - mm f\k\x) = ^4. The interpretation of (12.27) *s thus that the updated point xk+1 is the one which minimizes the local quadratic approximation of /0 at xk, see Figure 12.6. The basic Newton iterations in (12.27) are/ in general, not guaranteed to converge globally. To correct this issue, a stepsize sk > 0 is introduced in the update, as follows: *fc+i = *fc-Sfc[V2/o(*jO]-1V/o(x;t), k = 0,1,..., (12.29) which defines the so-called damped Newton method. The Newton and damped Newton methods are second-order methods, since they need at each step second-order local information (the Hessian) about the objective function. The recursion in (12.29) can also be interpreted as a variable-metric descent algorithm, where the descent direction is given by vk = —HfcV/o(xjfc), with Hk — [V2/o(x^)]_1, and the step- sizes can be chosen according to the usual rules (e.g., by backtracking). The Newton method (and its damped version) is particularly useful for minimizing strongly convex functions, since this class of functions guarantees that V2/o(x) ^ ml, Vx, for some m > 0 (hence Figure 12.6 Univariate illustration of a Newton iteration. 446 OPTIMIZATION MODELS the Hfr matrices are well defined), and linear global convergence can be guaranteed. The Newton direction v* is actually a steepest descent direction, in the metric induced by the local Hessian. This means that vk = — [ V2/o (xjt)]-1 V/o (acjfc) (12.30) is the direction that minimizes 5^ — V/o(x*-)Tz; over v such that 0TV2/o(Xjt)i7 = A 2. It can be proved4 that under the assumptions of strong convexity and that the Hessian of /0 is Lipschitz continuous over So, i.e., there exists a constant L' > 0 such that l|V2/oM - V2/o(y)||2 < L'\\x-y\\2, Vx,y <E So, then the damped Newton method (with backtracking line search) behaves qualitatively in two phases, an initial one where the norm of the gradient decreases linearly, and a second phase of fast quadratic convergence. More precisely, there exist constants 77 £ (0, m1/Lt) and 7 > 0 such that • (Damped phase) while ||V/o(xfc)||2 > 77, most iterations require backtracking. This phase ends in at most (/o(*o) — /0 )/7 iterations; • (Quadratically convergent phase) when ||V/o(xfc)||2 < 77. In this phase, all iterations take full Newton steps (i.e., s*- = 1), and the gradient converges to zero quadratically, i.e., IIV/o(xt)||2 < const. • (1/2)2' k, t > k. Overall, we may conclude that the damped Newton method reaches accuracy /o(*fc) — /0 < e in at most iterations, where 7, €0 are constants that depend on m, Lf and the initial point xo- A more advanced analysis of the Newton method, based on the hypotheses of /0 being strictly convex and self-concordantf actually provides the following more useful bound, which does not depend on unknown quantities: ^ .7 = « < 1/2, 7 ^ e 20 — Soc where oc, f are the backtracking parameters. Also, it can be proved that, under the self-concordance hypothesis, /o(x/0 — /q < holds 4 We will not do it here. See, for instance, Section 9.5 in Boyd and Van- denberghe's book. 5 A convex function / : R —► R is self- concordant if \f" {x)\ < 2[f"{x)]3/2, for all x £ dom/. A function / : Rn —» R is self-concordant if it is self-concordant along any line in its domain, that is if f(x -f tv) is a self- concordant function of t £ R, for all x £ dom/ and all v £ Rn. Many useful convex functions have the self-concordance property, such as, for example, the so-called logarithmic barrier function for linear inequalities, f(x) = - Yli log(bj — ajx), which is self-concordant over the domain where the inequalities ajx<b, are satisfied. in the quadratically convergent phase of the Newton method (for Afc < 0.68, a number that derives from an advanced analysis that we are not presenting here), hence the decrement \k gives a precise upper bound on the optimality gap, which can be used in the Newton algorithm as a stopping criterion, see Algorithm 11. Algorithm 11 Damped Newton algorithm. Require: /0 twice continuously differentiable and either (i) strongly convex with Lipschitz continuous Hessian or (ii) strictly convex and self-concordant; xo £ dom /0, e > 0 1: Set k = 0 2: Determine the Newton direction vk — — [V2/o(xfc)]_1 V/o(x*-), and (squared) decrement A2 = — V/o(x*-)Tz^ 3: If A2 < e, then return xk and quit 4: Determine step length sk> 0 by backtracking 5: Update: x*-+1 = xk + skvk/ k 4- k + 1, and go to 2 12.2.4.1 Cost of Newton iterations. Equation (12.31) shows that the number of Newton iterations necessary for achieving an e accuracy on the objective value grows extremely slowly with respect to e_1. This dependence is "doubly logarithmic," and one can consider it constant to most practical purposes, since log2log2 ^ < 6, Vc > 1CT19. However, the main limit in application of the Newton algorithm consists of the numerical cost for computing, at each iteration, the Newton direction vk in (12.30). To compute this direction, one has to solve for the vector ugR” the system of linear equations (the Newton system) ['V2fo(xk)}v = - V/0(x*). This system can be solved via Cholesky factorization6 in 0(n3/ 3) operations, for a generic unstructured Hessian. Moreover, at each step, one has to compute and store the whole Hessian matrix, and this can be a limiting factor when the problem dimension n is very large. To avoid recomputing the Hessian matrix at each step, several approximate approaches have been proposed, that avoid computation of second derivatives, as well as the solution of the Newton system. Some of these techniques, also known as quasi-Newton methods, are briefly discussed next. 6 See Section 64.4.2. 448 OPTIMIZATION MODELS 12.2.5 Quasi-Newton methods Quasi-Newton methods are variable-metric methods in which the Hk matrix is updated at each iteration according to some suitable rule, and it is used as a proxy for the Hessian matrix. The advantage over the "exact" Newton method is that calculation of second derivatives is avoided, together with the problem of inverting the Hessian. Observe that for a convex quadratic function f(x) ~ 2*T^X + bTx + G A >- 0, it holds that V/(x) = Ax + b, and V2/(x) = A, thus the Hessian A satisfies the relation V/(x)-V/(y) = A(x-y). Multiplying both sides on the left by H = A-1, we obtain the so- called secant condition for the inverse Hessian H: H(V/(*)-V/(y)) = *-y. The intuitive idea is thus that if a matrix H satisfies the secant condition, and if function /0 can be approximated by a quadratic function, then its (inverse) Hessian matrix would be approximated by H. Quasi-Newton methods initialize Hq = ln, and then update this matrix according to various rules satisfying the secant condition HM{Vf(xk+1)-Vf(xk)) = xk+i-xk. Typical update rules satisfying the secant condition are the following ones. Let Akx = xk+1 - xk, Akg = Vf0{xk+1) - Vf0(xk), zk = HkAkg. • Rank-one correction: u _ u , (AkX-zk)(Akx-zk)T Davidon-Fletcher-Powell (DFP) correction: _ „ , AkxAkxT zkzj Hk+l AkgT Akx A kgTZk' Broyden-Fletcher-Goldfarb-Shanno (BFGS) correction: u , (zkAkxT) + {zkAkxT)T zkzj Mk+1 — nk H 7 7 vk' &kgTzk A kgTZk' AkgT Akx vk = l + -JZ*—*-. &kg zk It can be proved that quasi-Newton methods have a global convergence rate analogous to that of the gradient method, while the local convergence happens at a superlinear rate.7- 7 See, for instance, Chapter 6 in J. N0- cedal, S.J. Wright, Numerical Optimization., Springer, 2006. 12.2.6 Dealing with linear equality constraints We here consider the problem of minimizing a convex objective /o, under linear equality constraints, that is p*= ™,/»W (12-32) s.t.: Ax = b, where A E Rm'n, with A full row rank: rank A = m. There are essentially three ways of dealing with this problem. The first approach consists of eliminating the equality constraints, so transforming the problem into an unconstrained one, to which the methods described in the previous sections could be applied. A second technique consists of applying a descent technique to the problem (e.g., the gradient method, or the Newton method), computing the descent direction so that each iterate remains feasible. We next describe these two approaches. 12.2.6.1 Elimination of linear equality constraints. Since rank A = m, we can find a matrix N E Rn'w_m containing by columns a basis for the nullspace of A. For example, N can be chosen to contain the last n — m orthonormal columns of the V factor of the SVD of A, see Section 5.2.4. Then we know from Proposition 6.1 that all vectors x satisfying Ax = b can be written in the form X = X + N2, where x E R” is some fixed solution of Ax = b, and z E R”~m is a new free variable. Now, substituting x in /0, we obtain a new objective function /0(2) =fo(x + Nz), (12.33) in the new variable z. Problem (12.32) is then equivalent to the unconstrained problem p* = min /o(z). Once this problem is solved (e.g., via one of the methods for unconstrained minimization discussed previously), and an optimal variable z* is obtained, we can recover an optimal variable for the original problem as x* =x + Nz*. One possible advantage of this approach is that the transformed unconstrained problem has n — m variables, which can be much less than the original number of variables n. One drawback, however, 450 OPTIMIZATION MODELS appears when the matrix A is sparse; in general, a corresponding matrix N is dense. In that case, it may be more beneficial to work directly with the equality constraints, as is done in Section 12.2.6.3. 12.2.6.2 Feasible update gradient algorithm. A second approach for dealing with problem (12.32) consists of applying a descent method with a suitably chosen descent direction guaranteeing feasibility at each iteration. Let us first observe that the optimality conditions for problem (12.32) state that (see Example 8.4.2) x is optimal if and only if Ax = b, and Vfo(x) = AT A, for some A £ Rm, where the second condition states that the gradient at the optimum is orthogonal to the nullspace of A, i.e., the above characterization is equivalent to x optimal Ax = b, and V/o(x) £ J\f±(A). (12.34) We now adapt the gradient descent algorithm to problem (12.32). The idea is simply to take as update direction the gradient of /0, projected onto the nullspace of A. That is, we take vk — —PV fo(xk), P = NNT, where N £ ]Rn'n-m contains by columns an orthonormal basis for AT (A). The P matrix is an orthogonal projector onto J\f{A) (see Section 5.2.4), which implies that for any vector £ £ R", the vector P£ lies in J\f(A). Then, if the current point xk satisfies the constraints (i.e., Axfr = b), then also the updated point Xk+1 = xk + Skvk = xk- skPVfo(xk) satisfies the constraints, since Axk+1 = Axk - skA(PVfo{xk)) = b. This guarantees that, if we start the algorithm with a feasible point xo, all subsequent iterates will remain feasible. We next verify that vk is indeed a direction of descent. Note that P >z 0, and moreover zTPz = 0 iff z_LAf(A). Decrease directions are characterized by the condition that V/o(x^-)T Vk < 0, and we have that Y7 ft \T V7 ( ( \ DY7 f ( ^ ^ fo{Xk) 0 (A), v/ota) »» =-VAMPV/ota) | =0,ifV/0ta)€^W. which means that, at each iteration, either vk is a decrease direction, or V/o(*fc) E J\fA-(A)/ which, in view of (12.34), implies that xk is optimal. This gradient projection algorithm converges with properties analogous to those of the standard gradient algorithm. 12.2.6.3 Feasible update Newton algorithm. The Newton algorithm can also be easily adapted to deal with the linear equality constrained problem (12.32), by designing the Newton iterates to be feasible at each step. We have seen that the standard Newton update point is computed as the minimizer of the quadratic approximation of /0 at xk, see (12.28). The idea of the modified method is then to determine the update as the minimizer of the same quadratic approximation, under the equality constraints. That is, given the current feasible point xk, the updated point should solve minx fo(xk) + V/o(xjt)T(x -xk) + \(x- xk)TV2f0(xk)(x - xk) s.t.: Ax = b. The optimality conditions for this problem can be characterized explicitly (see Example 9.2) as Ax = b, and V/^(x) = ATA, for some A E Rm. Setting Ax = x — xk (the full Newton step), and observing that AAX = Ax — Axk — Ax — b, and that V/^(x) = V/o(x*-) + V2/o(x/0(x ~ xk), the previous conditions are rewritten in terms of Ax as AAX = 0, and Vfo{xk) + V2/o(x*-)Ax — ~ATA, for some A E Rm (here, we just renamed the vector —A to A), which, in compact matrix notation, is written as ' V2/o(xj.) AT ' A 0 Solving the above system of linear equations (known as the KKT system for the linear equality constrained Newton method) yields the desired step A*. The modified Newton method then updates the current point according to xk+l ~ xk sk^x- The Newton decrement is now defined as A2 = AjV2/0(xfc)Ax, and it holds that f(x) ~ minv &\y) = -X2k. Ay—b Z 452 OPTIMIZATION MODELS In the quadratically convergent phase it also typically holds that fo(xk) ~ P* < hence A2 can be used as a stopping criterion for the algorithm. The damped Newton algorithm with linear equality constraints is schematically described in Algorithm 12. Algorithm 12 Damped Newton with linear equality constraints. Require: /0 twice continuously differentiable and either (/) strongly convex with Lipschitz continuous Hessian or (ii) strictly convex and self-concordant; *o £ dom/o, Ax0 = b, e > 0 1: Set k — 0 2: Determine Newton step Ax by solving (12.35), aRd the (squared) decrement A2 = AjV2/o(x*-)Ax 3= K A? < e, then return xk and quit 4: Determine step length sk > 0 by backtracking 5: Update: x*-+i — xk + k <— k + 1, and go to 2 12.3 Algorithms for smooth convex constrained minimization In this section, we discuss two techniques for dealing with differentiable convex constrained optimization. The problem under study is the form V* = min fo(x) (12.36) s.t.: x G X, where /0 is convex, and X is either some some "simple" convex constraint set (such as a norm ball, the positive orthant, etc.) or, more generally, it is of the form X = {x £ RM : fi(x) <0/i = l,...,m}, (12.37) where /; are convex functions. We assume, without loss of general¬ ity, that no linear equality constraints are present (if there are such constraints, they can be eliminated via the procedure described in Section 12.2.6.1). In Section 12.3.1, we describe a rather general technique for solving (12.36), based on the idea of barrier functions for the constraint set which, as we shall see, allows the solution of the constrained problem via a sequence of unconstrained minimization ones (this technique requires all functions f, i = 0,1,..., m, to be twice differentiable). In Section 12.3.2 we discuss instead an alternative method based on the concept of proximal mappings, which is suitable for cases where X is of "simple" form (we shall be more precise as to what simple means) and boils down to a scheme of gradient step followed by a projection onto the feasible set. 12.3.1 Barrier algorithm for convex constrained minimization We next consider problem (12.36), where A* is a closed set described as in (12.37), where f are convex, closed, and twice continuously differentiable. That is, we consider the convex optimization problem V* = min fo(x) (12.38) s.t.: fi(x) < 0, i = We further assume that p* is finite and it is attained at some optimal point x*, and that the problem is strictly feasible, that is there exists x £ dom/o such that f(x) < 0, i = 1,..., m. This latter assumption guarantees that Slater conditions are verified, hence strong duality holds and the optimum of the dual of (12.38) is attained (duality plays an important role in barrier methods, as it will become clear soon). A continuous function cp : R is said to be a convex barrier function for the set A', if it is convex, and (p —> 00 as x approaches the boundary of A\ Typical examples of convex barrier functions for X are the following ones: 1. power barrier: YT=i(~fiix))~V' f°r P > 1; 2. logarithmic barrier: — YT=i ^n(~~fi(x))' 3. exponential barrier: Y4L1 exp(~ 1//KX))- Here, we consider in particular the logarithmic barrier function 4>(x) = -£ W-/«(*))/ for which we have the explicit derivatives m 1 V(KX) = E (12-39) 2 = 1 Ji\X) m ■1 m 'i vVM = The idea is to consider an unconstrained problem obtained by adding to /0 a penalty term given by the logarithmic barrier, that is we consider the problem nun fo{x) + -<p(x). 454 OPTIMIZATION MODELS where t > 0 is a parameter weighting the importance of the original objective /0 and the barrier in the new objective. We assume that an optimal solution x*(t) for this problem exists and it is unique, and that an initial strictly feasible point xo E X is known (see Section 12.3.1.2 for a technique for determining a suitable initial feasible point). Multiplying the previous objective by t does not change the minimizer, so we can equivalently consider the problem mm ipt(x) = tfo(x) + 0(x), t > 0, (12.40) for which x* (t) is still the unique minimizer. Clearly, the role of cf>(x) is to prevent the solution of this problem from drifting out of the feasible domain X, i.e., (f) acts indeed as a barrier for the feasible set X: since cp(x) is equal to +00 outside the domain X, and it tends to +00 as x approaches the boundary of the domain, the minimizer x*(f) is guaranteed to be strictly feasible, i.e., /;(x*(f)) < 0, i = 1,..., m. The first-order optimality conditions for ip(t) then state that Vipf(x*(f)) = 0, that is, in view of (12.39), m 1 fV/o(**(0) + E -ft(s*(0)V/i(**(*» = °- (1241) A*(t) = —> 0, we see that the optimality condition in (12.41) equivalently states that the Lagrangian £(x, A) of problem (12.38), evaluated for A = A*(f), £(x,A*(0)=/oW + E^*(0//W is minimized at x* (f), since it holds that V£(x* (f), A* (t)) = 0. Hence, recalling that the dual function g(A) = min* C(x, A), for any A > 0, is a lower bound for p*, and evaluating g at A = A*(t) , we obtain that f > g(A*(t)) = C(x*(t),A*(t))=f0(x*(t)) + 1£A*(t)fi(x*(t)) m -i = /<,(*•(())-f This is the key inequality justifying the use of the barrier method for solving (12.38), since it states that the solution x*(t) of the unconstrained problem (12.40) is an e-suboptimal solution to the original problem, that is, for given e > 0, it holds that fo(x*(t)) - f < €, ifj<e, which clearly implies that fo{x*(t)) -» p* as t —> oo. Ideally, one may fix a value t > m/e, and solve the unconstrained problem (12.40), using for instance the Newton method, to obtain an e-suboptimal solution for (12.38). While this idea may work in principle, it may be problematic in practice, due to the fact that the initial point xq may be far from an optimal point x* and, more critically, that the function ipt{x) to be minimized tends to be ill conditioned for large t (its Hessian varies rapidly near the boundary of the feasible set). This implies that the Newton method may require many iterations to converge to x*(f). The usual approach is then to solve a sequence of unconstrained minimization problems, for increasing values of t, starting from an initial moderate value t[nit, until the exit condition m/t < e is met. The outline of such a sequential barrier method is given in Algorithm 13. Algorithm 13 Sequential barrier method. Require: xo strictly feasible, hnit > 0, p > 1, e > 0 1: Set k = 0, t = t{nit, x — xo 2: Solve minz tfo(z) + <p(z), using the (damped) Newton method, with initial point x, and let xk be the corresponding optimal solution 3: Update x xk 4: If m/t < e, then return x and quit 3: Update t <— pt,k k + 1, and go to 2 Each iteration k of Algorithm 13 is called a centering step (or, an outer iteration), and xk is the k-th central point. The curve traced by the minimizers of ipt, {x*(t), t > 0}, is called the central path, and it is a curve lying in the interior of the feasible set X. For this reason, the barrier algorithm belongs to the family of so-called interior-point methods. Each centering step requires a number of inner iterations, which are the iterations needed by the Newton method in order to compute x*k to a given accuracy. The numerical efficiency of the barrier method depends therefore on a tradeoff between the number of outer iterations (centering steps), and the effort required for each of these iterations, that is the number of inner iterations. As we discussed previously, setting hnit > m/e would make Algorithm 13 terminate in just one outer iteration, but this may require a large number of inner iterations. Instead, increasing t progressively as f^+i — hh, 456 OPTIMIZATION MODELS where denotes the value of t used in the k-th centering step, allows us to reduce the number of inner iterations per outer iteration. This is mainly due to the fact that the Newton algorithm for the k-th centering step is started at an initial point which is the minimizer of the previous objective Since is not much larger than t^-\, intuitively iptk should not change too much with respect to f>tk_x, hence the new minimizer x£ should be "not too far" from the previous minimizer x£_v Overall, the number of centering steps required by Algorithm 13 for solving (12.38) to e accuracy on the objective value is given by 'log(me~1/finit) ' and each centering step requires a number of inner iterations (i.e., iterations of the Newton method) that can be bounded above, for instance, by (12.31), if the corresponding assumptions on ipt are satisfied (namely, if ipt self-concordant). 12.3.1.1 Self-concordant barriers for LP, QCQP, and SOCP. We next illustrate specific barriers and their derivatives for the standard models of LP, QCQP, and SOCP. Consider a linear program in standard inequality form: p* = min cTx s.t.: ajx <bj, i = 1,..., m. The logarithmic barrier for this problem is <p{x) = - Eln(bi-ajx), which can be proved to be self-concordant. From (12.39), we have v<Kx) = E i=1 bi-ajx' m anT vVw - E A (convex) quadratically constrained quadratic program (9.20), with no equality constraints, has the standard form (12.38), with fo{x) = ^xT Hqx + CqX + do, fi(x) = HjX + cjx + dj, = The logarithmic barrier for this problem m / /1 <p(x) = - ln ( “ I ~xTHiX + cjx + d{ can be proved to be self-concordant, and we have that ™ _ f (Hfx + Cf)(HiX + C/)T , f Hi W ~ fe l/P*)P £-/;(*)' A second-order cone program (10.6) in the standard form p* = min cT x s.t.: || A/X -j- bj 112 < cJx-\-d{, i = 1, can be formulated equivalently by "squaring" the constraints as p* = min cTx s.t.: fi(x) < 0, i = 1,... ,m, cj x + d{> 0, / = 1,..., m, fi{x) = llAjX + ^lp - (c,TX + rf,)2, 1 = 1,...,/«. The logarithmic barrier for this equivalent problem can be proved to be self-concordant, and we have that ™ Vfi(x) q ~ ,§-//(*) cTx + d,-' = fV//WV/.P)T . 2A;tA,--c,c,t c;tc;- n) h {fiW P -/<W (c,TX + rf,)2' where V/* (x) = 2(AjrA; — c/cT)x + 2(Ajb + djCi). 12.3.1.2 Computing an initial feasible point. The barrier method needs to be initialized with a (strictly) feasible point xq. Such an initial point can be determined by solving a preliminary optimization problem, usually called the phase I problem. The rationale behind the phase I method is to introduce a slack variable accounting for the violation of the original constraints of problem (12.38). That is, we substitute the original constraints f(x) < 0 with relaxed constraints of the form fi(x) < s, where s G R is a new variable, and consider the phase I optimization problem 458 OPTIMIZATION MODELS s* = min s s.t.: fi(x) < s, i = First, we observe that we can always easily find an initial strictly feasible point Xo, so for this problem. To this end, it suffices to choose any point x0 £ dom/o, and then select any scalar so such that s0 > max fi(x0). Starting with this initial feasible point, we can then solve the phase I problem using a barrier method, obtaining an optimal point x*, and the optimal (minimal) violation s*. Then we conclude that: • if s* < 0, then it means that //(x*) < s* < 0, hence x* is strictly feasible for the original problem (12.38); • if s* — 0, then the original problem (12.38) is feasible, but it does not admit a strictly feasible point (note, however, that from a numerical point of view one can never really say that a variable is exactly zero); • if s* > 0, then the original problem (12.38) is infeasible. In practice, one solves the phase I problem, and if s* < —e, for some reasonably small e > 0, then x* is strictly feasible for the original problem (12.38), and it can be used as an initial point for solving this problem via Algorithm 13. 12.3.2 Proximal gradient algorithm In this section we discuss a first-order technique for solving constrained convex optimization problems of the form (12.36), when /0 is convex and differentiable, and A' is a convex set of simple structure (we shall soon define what we mean by "simple"). This method follows as a special case of a more general family of techniques used for solving a class of optimization problems with mixed differentiable plus non-differentiable objective, based on the concept of proximal mapping. This concept is reviewed next. 12.3.2.1 Proximal mapping and projections. Given a closed convex function h : ]Rn -* ]R (not necessarily differentiable), we define the proximal mapping of h as follows: Since h{z) is convex and the additional term ||z — xH2 is strongly convex, then for each x the function h(z) + 0.5||z — xH2 is also strongly convex (see Section 8.2.1.5). Moreover, convexity of h implies that h(z) > h(x) +t]J(z — x), for all x in the interior of domh, where t]x is a subgradient of h at x. Hence fc(z) + \llz - x\\l > h(x) + f]J(z-x) + ^\\z - x\\l, which implies that the function on the left of this inequality is bounded below. This property, together with strong convexity, guarantees that the global minimizer prox^(x) is well defined, since it exists and it is unique. An interesting special case arises when h(z) is the indicator function of a closed convex set A*, i.e., \ r , x - / 0 if z £ X, h{z) = W2) = \ +00 otherwise. In this case, we have pioxj (x) = argmin -||z - x\\\, (12.42) ^ zEA. L hence prox^(x) = [x]x is the Euclidean projection of x onto X. We next refer to as "simple" those functions h for which it is easy to compute the proximal mapping. Accordingly, we denote as simple a convex set X onto which it is computationally easy to determine a projection; examples of such sets are illustrated in Section 12.3.3. We observe that the constrained minimization problem (12.36) can be rewritten into an unconstrained form rnin fo(x) + Ix(x), (12.43) where the indicator function I % (*) acts as a non-differentiable barrier for the feasible set X. In the next section we discuss an algorithm for solving a more general class of problems of the form rnin /o(x) + h(x), (12.44) where /0 is convex and differentiable, and h(x) is convex and "simple." Problem (12.44) clearly includes (12.43), f°r h(x) = Ix(x)- 12.3.2.2 Proximal gradient algorithm. We address the solution of the problem rnin f(x), (12.45) f(x) =fo(x) +h(x), (12.46) 460 OPTIMIZATION MODELS via a modification of the gradient algorithm, adapted to the current situation where h(x) may not be differentiable (hence its gradient may not exist). Given a current point x^, the approach is to first perform a standard gradient step (using the gradient of /0 only), and then compute the new point x^+i v*a the proximal map of h. In formulas, we take xk+l = proxSkh(xk - skVfo(xk)), where s*. > 0 is a stepsize. We next show how this update can be interpreted in terms of a "modified" gradient step. By the definition of proximal map, we have Xk+1 = prcKSkh(xk - skVfo(xk)) = argmin ( skh(z) + I ||z - xk + skVf0(xk)\\% [dividing by sjc does not change the minimizer] = argmin (h(z) + ^-\\(z-xk)+skVfQ(xk)\\ij = argmin (h(z) + ||z - xk\\l + Vf0(xk)T(z - xk) + sk + 2 [adding the constant term /o(*j0 — 2HI Y/b(*fc)ll2 ctoes not change the minimizer] = argmin (h(z)+f0(xk) + V/o(x*)T(z - xk) + T-||z - xk\\j The interpretation of this last formulation is that the updated point Xfc+i is the minimizer of h(z) plus a local quadratic approximation of /o(z) at Xfc, that is x*-+i = argminz ipk(z), where tpk(z) = h(z)+qk(z), qJt(z) = fo(xk) + ^fo(xk)T(z-xk) + —\\z-xk\\l. (12.47) Let us define a vector gs(*) as follows: gs(x) = i (x - proxsh(x - sV/oO))) / and also set, for notational simplicity, gk = gsk(Xk) = 7-(xk - xk+l)- (12.48) With the above notation, we can formally write our algorithm as Xk+l =Xk- skgk/ • (12.49) where g^ has the role of a "pseudo" gradient, and it is called the gradient map of /0 on h at x^. Indeed, g^ inherits some of the key properties of a standard gradient. For instance, we shall prove that the optimality condition for (12.45) is £s(x) = 0. Also, if h = 0, then gk = V/o(x^), hence g^ is simply the gradient of /0 at x^. If instead h — lx, then the geometrical meaning of the gradient map is illustrated in Figure 12.7. Figure 12.7 Illustration of a proximal gradient step, for the case when h is the indicator function of a closed convex set X. In this case x/t+i *s the Euclidean projection of x/t+i onto X. A version of the proximal gradient algorithm, with constant step- sizes, is formally stated in Algorithm 14. Algorithm 14 Proximal gradient algorithm (with constant stepsizes). Require: /0 convex, differentiable, bounded below, with Lipschitz continuous gradient (Lipschitz constant L); h convex and closed, Xq e dom/o, e > 0 1: Set k = 0, s = 1/L 2: Update: xk+1 = proxgh(xk - sVf0(xk)) 3: If accuracy e is attained (see, e.g., (12.60)), then exit and return Xfa else let k k + 1 and go to 2. 12.3.2.3 Convergence of the proximal gradient algorithm. This section is rather technical. The reader not interested in the details may just consider the result given at the end of this section, and move to the next section. We shall next prove the convergence of Algorithm 14 under the hypothesis that /0 is strongly convex, with Lipschitz continuous gradient on dom/o. We start by observing that V(ft(z) = V/0(x;0 + — (z - xk), 462 OPTIMIZATION MODELS with qk defined in (12.47), hence V<7fc(xfc+1) = V/o(x*) -gk. Also, we observe that a point is optimal for problem (12.43), i-e-/ it minimizes / = /0 + h, if and only if g*- = 0. To verify this fact, we recall that the optimality condition for problem (12.43) requires (noticing that h may not be differentiable)8 that 0 e dh(xk) + Vf0{xk), (12.50) where dh(x) is the subdifferential of h at x. Similarly, the optimality condition for fa requires 0 g dh(z) + V^(z) = dh(z) + Vfo{xk) H (z ~ xk)r and, in view of (12.30), this condition is satisfied for 2 = x^, if x* is optimal for /. Thus, if x* is optimal for (12.43), then xk+1 — xk> hence g= 0, due to (12.48). Conversely, if g= 0, then it must be that Xfc+i = x*. (again, due to (12.48)), and then since x^+i minimizes tpk(z), from the optimality conditions we have that 0 G dh(xk+1) + Vqk(xk+1) = dh(xk+1) + Vf0(xk) - gk, that is gk G dh{xk+1) + V/o(xjt), (12.51) which, for xjc+i = x*., gives 0 = G dh(xjc) + V/o(xfc), which implies that Xfc is optimal for /. Note that (12.31) means that there exists a subgradient 77^+1 G 3/z(x*-+i), such that V/o(*k) = gk ~ Vk+i, (12.52) where, by definition of a subgradient, it holds that h(z) > /1(^+1) + rjk+l(z - xk+1), Vz e domh. (12.53) The last two relations will be useful soon. Now, the assumptions of strong convexity and Lipschitz continuous gradient on /0 imply that there exists m, L > 0, L > m, such that (see Eqs. (12.4), (12.10)) /o(z) > fo{xk) + V/o(xj-)T(z — xk) + ^||z — xk\\2, Vz G dom/o, /o(z) < fo(xk) + Vfo(xk)T(z-xk) + ^\\z-xk\\l, VzGdom/o. The second of these inequalities, evaluated at 2 = x^+i, and f°r step- sizes such that 1/s*. > L, yields 8 See Section 8.4.4 fo(xk+i) < qk(xk+i)- [from (12.47)] = h(xk+1) + qk(xk+1) - ^-\\xk+i - xk\\l From the first inequality (adding h(z) on both sides) we have instead that, for all z £ dom/o, /(z)-y \\z-xk\\l > h(z)+fo(xk) + Vfo(xk)T(z-xk) = Hz) + f0(xk) + Vf0(xk)T(xk+1 - xk) + Vfo(xk)T (z - xk+1) [using (12.52)] = h(z) +fo(xk) + Vfo(xk)T(xk+1 - xk) + si (z- Xk+i )-qk+i(z- xk+i) [from (12.53)] > Hxk+i) + fo(xk) + Vf0(xk)T(xk+1 - xk) + 8k(z~xk+1) 2 Sk + 8k (z-Xk+i) [from (12.48)] = h(xk+i) + qk(xk+1)-^\\gk\\l + gj(z-xk+i) = h(xk+1) + qk(xk+1) - ^\\gk\\l + 8k (z ~ xk) + 8k (xk - xk+1) [use (12.48)] = h{xM) + qk(xk+1) + Sj-\\gk\\l+gJ(z-xk) [1/sjt > L, (12.54)] > Hxk+i) +fo(xk+i) + S-^\\8k\\l +8k(z~ xk) [from (12.46)] = f(xk+1) + j\\gk\\l + gJ(z-xk). (12.55) Using this inequality for z = x^, we obtain /(*k+l) </(*k)-yl|gkll2/ which shows that the proximal gradient algorithm is a descent algorithm, and using again inequality (12.35) f°r z = x*/ where x* is the minimizer of / we are seeking (hence /(x^+i) > f(x*)), we get 8k(xk-x*) > ^\\gk\\2 + ^\\xk- x*\\i (12.56) Further, rewriting (12.55) as /(z) > /(*k+i) + y II&II2 + 8k (2 - Xk) + YII2 _ xk\\l, Vz e dom/o, and minimizing both sides over z (note that the minimum of the expression on the right is attained at z — x^ — (1 /m)g^), we obtain f(xk+i)-f(x*) < ^IlgkllKl/m-Sk), (12.58) where 1/m — > 0, since this is implied by L > m and < 1/L. 464 OPTIMIZATION MODELS Also, evaluating (12.57) at z = x*, we obtain f(xk+1)-f(x*) < gk(xk-x*)-^\\gk\\l-^\\xk-x*\\l < gk (xk - X*) - ^WgkWl = ^ (\\Xk~X*\\2- \\xk~X* ~Skgk\\£) = ^ (\\xk-x*\\2- Ikfc+i-*!*) • (12-59) We next wrap up all these preliminaries. To derive our final result, let us consider, for simplicity, the case of constant stepsizes = s = 1/L (the proof can be adapted also for the case of stepsizes obtained via backtracking line search). Recalling (12.49), we have H*Jt+l-**ll2 = \\(xk ~ x*) ~ SkgkWl = II** - **111 + slhkWl - ^kgk (** - **) [using (12.56)] < (1 - msk)\\xk - x*\\l [for sk = 1/L] = Il*fc-**ll2^ (l-”)k|l*0-**lli which shows that the proximal gradient algorithm converges to x* at a linear rate. Also, if m, L are known, then (12.58) provides a stopping criterion for Algorithm 14, based on checking the norm of since, for e > 0, \\gk\\2<2efr^ => f(xk+i)-/(**)< e- (12.60) Further, adding inequalities (12.59), we get £/(*/) -/(**) ^ iz E (ll*«-i -**ll2- II*!-**!!!) i=1 i=1 = \ (||*o — **lli — II**-**!!!) < ^ll*o **lli- Since f(xj) is non-increasing with i, the last value /(x*-) is no larger than the average of the previous values, that is /(**)“/(**) < I Yl(f(xi) -/(**)) < ^Il*0-**||2/ which shows that /(x*.) —>> f(x*) at rate 1/k. Our findings are summarized in the next theorem. Theorem 12.1 For Algorithm 14 it holds that Moreover, under the additional hypothesis that /0 is strongly convex (with strong convexity constant m), it also holds that 12.3.3 Computing proximal maps and projections We here discuss several relevant cases of functions h for which the proximal maps are "easy" to compute. We recall that if h is the indicator function of a closed convex set X, then the proximal map is just the Euclidean projection onto X (see (12.42)) hence, in this case, the proximal gradient algorithm solves the constrained optimization problem (12.36): and the projection [x]x is x, if x G X, or it is equal to the projection of x onto the hyperplane {x : aTx = b}, if x ^ X. This latter projection is computed as discussed in Section 2.3.2.2, hence p* = min f0(x) s.t.: x G X. 12.3.3.1 Projection onto a half-space. Let X be a half-space X = {x : aTx <b}, a 0. Then, for given igR", we have prox/A,(x) = argmin ||z - x\\% = [x]x, 12.3.3.2 Projection onto the positive orthant. Let X - R!J_ = {x G : x > 0}. Then, for given xgR11, we have [*]* = argmin ||z - x\\l = argmin j^(z,- - x,)2, z>o z?>0 466 OPTIMIZATION MODELS where we see that the optimal z should have components z* = X/, if Xj- > 0, or Z( = 0 otherwise, hence [x)x = [*]+ = max(0,x), where the max is here intended element-wise. 12.3.3.3 Projection onto the standard simplex. Let X be the standard (probability) simplex X = {x e : x > 0, lTx = 1}. Computing the projection [x]x amounts to solving 1 II m2 mm -||z — x||| s.t.: z > 0 lTz = 1. Considering the (partial) Lagrangian for this problem, we have C(z,v) = — 11 z — x H2 + v(lTz — 1) and the dual function g(v) — min £(z,v) = min \ \\z — x\\? +i/(lTz — 1) z>0 z>0 2" MZ v y = ^ E ( \ ~Xi) 2 + vzi^j - v The problem we need to solve for determining the dual function is separable, meaning that the optimal solution z is obtained by finding the optimal values of the individual components, which are obtained by solving a simple one-dimensional minimization: z* (1/) = argmin -(zj - Xj)l + vzh Zj Z The function to be minimized here is a convex parabola, having its vertex at v-t — xz- — v. The minimizer z* (v) is thus the vertex, if V[ > 0, or it is zero otherwise. That is z*( v) = max(x — i/l,0). The optimal value v* of the dual variable v should be then obtained by maximizing g(y) with respect to v. However, there exists only one value of v which makes z* (v) belong to the simplex (i.e., primal feasible: E/z*(e) — 1), hence this must be the optimal value of the dual variable. In summary, the projection is computed as . [x]x = z*(i/*) = max(i — i/*l,0), where v* is the solution of the scalar equation9 ^ max(xf — v, 0) = 1. 12.3.3.4 Projection onto the Euclidean ball Let X be the unit Euclidean ball X = {xeKCl: ||x||2<l}. Then, it is straightforward to verify that the projection of x onto X is r , \ X if X 2 < 1, [Ax = \ I 11*112 if 11X112 > 1. 12.3.3.5 Projection onto the t\-norm ball. Let X be the unit t\ ball X = {xe№.n : ||x||i < 1}. For a given x £ Rn, computing the projection [x]x amounts to solving min -1|z — x112• (12.62) l|z||i<i 2 The Lagrangian of this problem is £(z,A) = ^||z - x\\\ + A(||z||i - 1), hence the dual function is q{A) = mm£(z,A) = min ^||z — x||2 + A(||z||i — 1) = £minQ(zi-x;)2 + A|z,|) - A. (12.63) We then see that the values z*(A) that minimize the above function are found by solving the following univariate minimizations: z*(A) = argmin<p(z,-, A), <p(zir\) = Lz,--X;)2 + A|z,-|, i = l,...,n. Zi 2 9 See Exercise 12.8. 468 OPTIMIZATION MODELS To solve this problem, we use the identity |z;| = max^.|<1 QiZi, and write min \(zi — X{)2 + À|z/| = min ( \{zì — xA2 + À max p.-z; | 2, 2 2, y2 |e,|<i J = min max — x,-)2 + Ap,z,- 2 i |c,|<i 2 [using Theorem 8.8] = max min -(z* — X/)2 + A^z/. k/l<i z' 2 The inner minimization (w.r.t. z?) is readily solved by setting the derivative to zero, obtaining Z* (A) = Xi - \Qi, (12.64) which, substituted back, yields 1 /1 min — (Z; - X;)2 + À-QjZj = A U/X,- - -A^? Continuing the previous chain of equalities, we thus have that 1 ' '2 , 1U.1 _ 1 _ (1 i„2 min -(zz- — X/) + A|z/| = A max yQiXi — - Aq The function to be maximized here (w.r.t. Q() is a concave parabola, havings its vertex at V[ = X//A (we let here A > 0, since for A = 0 the dual function is trivially zero). Hence, if \vj\ < 1 the maximum is attained at q* = V[ — X//A. Otherwise, the maximum is attained at one of the extremes of the feasible interval Qj £ [—1, 1] and, in particular, at q* — 1 if X{ > 0, and at q* = — 1 if X[ < 0. Therefore, * _ j xi/A if \%i\ < A, ^ 1 sgn(xz) otherwise. Correspondingly, the minimizer z*(A) of <p(zz, A) is given by (12.64) * / a \ a * J 0 if \xi\ < A, 2, (A) = x,-Aft =| ^ _ Asgn(;(i) otherwjse This can be more compactly written as z*(A) = sgn(Xj’)[|x,j| - A] + = sthrA(x,), i = (12.66) where [•]+ denotes the projection onto the positive orthant (positive part of the argument). The function sthr^ is known as the soft threshold function, or shrinkage operator, see Figure 12.8. When x is a vector, ! ‘ ' ° ' x/X Figure 12.8 Soft threshold function. we indicate with the notation sthr^(x) a vector whose z-th entry is sthr^Xf), i = 1,... ,n. Now, since strong duality holds for problem (12.62), and the solution to this problem is unique, the optimal primal variable [x]% coincides with z*.(A*), where A* > 0 is the value of the dual variable that maximizes z?(A). We can, however, find the optimal A* via a simplified reasoning in the present case. Specifically, consider first the case when ||x||i < 1, then in this case the projection is simply x itself, i.e., [x]x = x. Consider then the case ||x||i > 1, then the projection [x]x will be on the boundary of X, which means that ||[x];t||i = 1. We then use this condition to determine the optimal A: A* is the solution of the scalar equation f>?(A)| = l. (12.67) Notice that, from simple manipulation of (12.65), we have that k‘(A)l = { N-A tfW>A Hence, equation (12.67) becomes max(|x, | - A,0) = 1, and A* is the value of A that solves this equation. Once A* is found, the projection we seek is given by [x]x = Sgn(x;)[|x;| - A*] + . 12.3.3.6 Projection onto the positive semidefinite cone. Consider the cone of positive semidefinite matrices X = {X eSn : XhO} = S'|. Given a matrix X G S" we want to compute its projection onto X. Since we are working in a matrix space, we shall define projections according to the Frobenius norm, that is [X]* = argmin||Z-X| Let now X = UAUT be a spectral factorization for X, where U is an orthogonal matrix, and A is diagonal, containing the eigenvalues of X on the diagonal. Since the Frobenius norm is unitarily invariant, we have that ||Z - X||| = \\Z-UAUT\\l=\\U(UTZU-A)UT\\j = j|L7TZL7 - A||p = ||Z — A|||, 470 OPTIMIZATION MODELS whence Z* = UZ*UT. In summary, the projection of X = UAUT onto the positive semidefinite cone is given by 12.3.3.7 Proximal map of t\ regularization. In many problems of practical relevance the function h in (12.46) is a scalar multiple of the £\ norm of x. For instance, in the £\-regularized least-squares problem (also known as the LASSO), we consider which is of the form (12.45), with fo(x) = (l/7)||Ar — b||2 strongly convex (assuming A is full rank), and h(x) — ||x||i convex but non- differentiable. This class of problems is thus solvable by means of the proximal gradient algorithm. To this end, we need to be able to efficiently compute the proximal map of s/z, where s > 0 is a scalar (the stepsize), namely This is precisely the problem already considered in (12.63), f°r which we showed that the solution is given by the soft threshold function in (12.66), i.e., proxsh(x) = sthrs(x), where the z-th component of the vector sthrs(x) is sgn(xz-)[|xz-| — s] + . 12.3.3.8 Proximal gradient algorithm for the LASSO. Using the result in the previous section, we can specify the proximal gradient algorithm in the case of the LASSO problem in (12.68). Notice that we have in this case from which it follows that the strong convexity constant for /0 is [X]s« = U[A]+UT. V2/o(x) = ^(ATA), Further, we have that l|V/o(x) - V/o(y)||2 = ^||ATA(x-y)||2 < ^max(ATA)||x-y||2/ from which we obtain a global Lipschitz constant for the gradient: l = —Crmax(ATA). (12.70) If L and m can be computed (or at least estimated) as described above, the following proximal gradient algorithm (Algorithm 15), solves the LASSO problem (12.68) by returning a solution x guaranteeing that /(x) — /(x*) < e, where e > 0 is the required accuracy. This algorithm is also known as the ISTA (iterative shrinkage-thresholding algorithm). Algorithm 15 Proximal gradient for LASSO (constant stepsizes). Require: e > 0, xq, A full rank. 1: Compute m, L according to (12.69), (12-7°) 2: Set k = 0, s = 1/L 3: Compute gradient V/o(x*-) = (2/7)(ATAx^ — ATb) 4: Update: xk+l = sthrs(xfc - sVf0(xk)) 5: Compute ||#t||2 = \\xk - xk+1\\2/s 6: If \\gk\\l < 2emL/{L — m), then return x = x^+i and exit, else let k <- k + 1 and go to 3. We remark that in recent years there has been a tremendous activity in theory, algorithms and applications of t\ regularization and LASSO-related problems, especially in the context of the "compressive sensing" field. More sophisticated techniques thus exist for solving the LASSO and related problems. The key essential advance provided by these techniques consists of "accelerating" the basic proximal gradient scheme, so as to reach convergence rates of the order of 1/A:2 (recall that the basic proximal gradient has convergence rate of the order of 1/k on objective value). Although a full coverage of these methodologies is out of the scope of this book, we shall discuss one of these "fast" methods in the next section. 12.3.4 Fast proximal gradient algorithm The basic proximal gradient algorithm discussed in the previous section can be suitably modified so as to reach an accelerated convergence rate (in the objective function values) of order 1/k2. One of these modifications is the so-called FISTA type (fast iterative shrinkage- thresholding algorithm).10 A version of this algorithm, with constant stepsizes, is reported in Algorithm 16. 10 See A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences, 2009. 472 OPTIMIZATION MODELS Algorithm 16 Fast proximal gradient (constant stepsizes). Require: xq, a Lipschitz constant L for V/o 1: Set k = 1, s = 1/L, yi = *0/ fi = 1 2: Update: % = prox^y* - sV/0(yjt)) 3: Update: ffc+i = —^ 4: Update: yfc+1 = xk + ^(xk - 5: If \\xic — Xk_i W2 < €, then return x = and exit, else let k k +1 and go to 2. When applied to the specific LASSO problem in (12.68), step 2 in this algorithm simply reduces to soft thresholding: pronto* “ sV/o(y*)) = sthrs(y* - sV/o(y/t)). The exit condition in step 5 here simply checks a minimal improvement on the optimal variable, and does not imply e-suboptimality either on the optimal value or on the minimizes However, the following result holds for the convergence of Algorithm 16. Theorem 12.2 For the sequence x^k — 1,..., generated by Algorithm 16 it holds that /(**)-/(**) < jkTy2 ll*o-**Hi where x* is any optimal solution to problem (12.45). This result11 shows that Algorithm 16 guarantees a convergence rate in objective function values of the order of 1 /k2. When the Lipschitz constant L for V/o is not known a priori, the algorithm can be modified so as to include a backtracking step for incrementally adjusting the value of L, as specified in Algorithm 17. 12.4 Algorithms for non-smooth convex optimization The proximal gradient algorithm discussed in Section 12.3.2 is applicable for solving constrained convex optimization problems of the form (12.36), when the objective /0 is differentiable, and the feasible set X has "simple" structure. In this section we discuss algorithms that permit the treating of more general situations, in which either the objective function and/or the functions describing the inequality constraints of the problem are non-differentiable. Specifically, Section 12.4.1 presents the projected subgradient algorithm, which can be used when /0 is non-differentiable and X is "simple." This algorithm essentially has the structure of the (projected) gradient algorithm, but uses subgradients of /0 instead of its 11 See the cited paper by Beck and Teboulle for a proof. Algorithm 17 Fast proximal gradient (with backtracking). Require: xq, Lq > 0,77 > 1 1: Set k = 1, yi — xq, h = 1 2: Let pz- = proxj_/?(y^ — Vfo(}/k)/M/), and find the smallest non- negative integer z such that f(pi) < foiyk) + vT/o(i/(t)(*jt -yk) + ^Y I kit - VkWl + Kxk) holds for Mi = 3: Set 4: Update: xk = prox^h(\jk - Vf0{yk)/Lk) 5: Update: tl+, = 6: Update: yk+l = xk+lj^{xk- *k_x) 7: If ||Xk — Xfc-i II2 < £/ then return x = x*. and exit, else let fc k +1 and go to 2. gradients. Despite this similarity, however, subgradient-type algorithms are not (in general) descent methods. Moreover, in order to guarantee convergence, the stepsizes must be chosen according to rules that are quite different from the ones used in the gradient algorithm. Also, the "price to pay" for the increased generality of subgradient methods is that convergence happens at a slower rate, which can be proved to be of the order of 1/ Vk, where k is the iteration number. In Section 12.4.2 we then discuss the alternate subgradient algorithm, which is a modified version of the subgradient method that makes it possible to treat the case where /0 is non-differentiable and X is not necessarily "simple" and it is described by a set of convex inequalities. Finally, Section 12.4.3 presents the ellipsoid algorithm, which is a classical method for non-differentiable constrained optimization. 12.4.1 The projected subgradient algorithm We consider a constrained minimization problem of the form p* = min f0(x), (12.71) where X C Kn is a convex and closed set, and /0 is a convex function, with dom/o = Rn. We assume that this problem admits an optimal solution x*. For any point x G int X, by the definition of subgradient, we have that /o(z) > fo{x)+gJ(z-x), VzeX 474 OPTIMIZATION MODELS holds for any gx G 3/o(x). Evaluating this inequality at z — x*, we have that gJ(x-x*)>fo(x)-fo(x*)>0, (12.73) which is a key inequality for proving the convergence of the subgradient method. Also, any subgradient gx of /0 at x divides the whole space into two half-spaces K++ = {z : gj (z - x) >0}, (z-x)< 0}, and we see from (12.72) that for all z G %++ we have /o(z) > fo(x), hence the optimal points cannot lie in %++, thus x* G TL-, see Figure 12.9. We next describe a subgradient algorithm for solving (12.71) when A' is a "simple" closed convex set, where by simple we mean that it is easy to compute the Euclidean projection of a point onto X. Consider problem (12.71), and the recursion Xk+1 = [Xk - skgk\x, k = 0,1,..., (12.74)
{"url":"https://ru.djvu.online/file/qdx1wXLFUjOX0","timestamp":"2024-11-10T18:19:52Z","content_type":"text/html","content_length":"1049584","record_id":"<urn:uuid:5fb9c609-6328-4862-8c96-59fdb7b5ce4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00202.warc.gz"}
GraphicsPath Class (System.Drawing.Drawing2D) GraphicsPath Class Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided Represents a series of connected lines and curves. This class cannot be inherited. public ref class GraphicsPath sealed : MarshalByRefObject, ICloneable, IDisposable public sealed class GraphicsPath : MarshalByRefObject, ICloneable, IDisposable type GraphicsPath = class inherit MarshalByRefObject interface ICloneable interface IDisposable Public NotInheritable Class GraphicsPath Inherits MarshalByRefObject Implements ICloneable, IDisposable In .NET 6 and later versions, the System.Drawing.Common package, which includes this type, is only supported on Windows operating systems. Use of this type in cross-platform apps causes compile-time warnings and run-time exceptions. For more information, see System.Drawing.Common only supported on Windows. Applications use paths to draw outlines of shapes, fill the interiors of shapes, and create clipping regions. The graphics engine maintains the coordinates of geometric shapes in a path in world coordinate space. A path may be composed of any number of figures (subpaths). Each figure is either composed of a sequence of connected lines and curves or a geometric shape primitive. The starting point of a figure is the first point in the sequence of connected lines and curves. The ending point is the last point in the sequence. The starting and ending points of a geometric shape primitive are defined by the primitive specification. A figure that consists of a sequence of connected lines and curves (whose starting and ending points may be coincident) is an open figure, unless it is closed explicitly. A figure can be closed explicitly, by using the CloseFigure method, which closes the current figure by connecting a line from the ending point to the starting point. A figure that consists of a geometric shape primitive is a closed figure. For purposes of filling and clipping (for example, if a path is rendered using FillPath), all open figures are closed by adding a line from the figure's first point to its last point. A new figure is implicitly started when a path is created or when a figure is closed. A new figure is explicitly created when the StartFigure method is called. When a geometric shape primitive is added to a path, it adds a figure containing the geometric shape, and also implicitly starts a new figure. Consequently, there is always a current figure in a path. When lines and curves are added to a path, an implicit line is added as needed to connect the ending point of the current figure to the starting point of the new lines and curves to form a sequence of connected lines and curves. A figure has a direction that describes how line and curve segments are traced between the starting point and the ending point. The direction is defined in the order that lines and curves are added to a figure, or is defined by the geometric shape primitive. The direction is used in determining the path interiors for clipping and fill. GraphicsPath() Initializes a new instance of the GraphicsPath class with a FillMode value of Alternate. GraphicsPath(FillMode) Initializes a new instance of the GraphicsPath class with the specified FillMode enumeration. Initializes a new instance of the GraphicsPath class with the specified PathPointType and Point arrays and with the specified FillMode GraphicsPath(Point[], Byte[], FillMode) enumeration element. GraphicsPath(Point[], Byte[]) Initializes a new instance of the GraphicsPath class with the specified PathPointType and Point arrays. Initializes a new instance of the GraphicsPath array with the specified PathPointType and PointF arrays and with the specified FillMode GraphicsPath(PointF[], Byte[], FillMode) enumeration element. GraphicsPath(PointF[], Byte[]) Initializes a new instance of the GraphicsPath array with the specified PathPointType and PointF arrays. GraphicsPath(ReadOnlySpan<Point>, ReadOnlySpan<Byte>, Initializes a new instance of the GraphicsPath class with the specified PathPointType and Point arrays and with the specified FillMode FillMode) enumeration element. GraphicsPath(ReadOnlySpan<PointF>, ReadOnlySpan<Byte>, Initializes a new instance of the GraphicsPath class with the specified PathPointType and Point arrays and with the specified FillMode FillMode) enumeration element. FillMode Gets or sets a FillMode enumeration that determines how the interiors of shapes in this GraphicsPath are filled. PathData Gets a PathData that encapsulates arrays of points (points) and types (types) for this GraphicsPath. PathPoints Gets the points in the path. PathTypes Gets the types of the corresponding points in the PathPoints array. PointCount Gets the number of elements in the PathPoints or the PathTypes array. AddArc(Int32, Int32, Int32, Int32, Single, Single) Appends an elliptical arc to the current figure. AddArc(Rectangle, Single, Single) Appends an elliptical arc to the current figure. AddArc(RectangleF, Single, Single) Appends an elliptical arc to the current figure. AddArc(Single, Single, Single, Single, Single, Appends an elliptical arc to the current figure. AddBezier(Int32, Int32, Int32, Int32, Int32, Int32, Adds a cubic Bézier curve to the current figure. Int32, Int32) AddBezier(Point, Point, Point, Point) Adds a cubic Bézier curve to the current figure. AddBezier(PointF, PointF, PointF, PointF) Adds a cubic Bézier curve to the current figure. AddBezier(Single, Single, Single, Single, Single, Adds a cubic Bézier curve to the current figure. Single, Single, Single) AddBeziers(Point[]) Adds a sequence of connected cubic Bézier curves to the current figure. AddBeziers(PointF[]) Adds a sequence of connected cubic Bézier curves to the current figure. AddBeziers(ReadOnlySpan<Point>) Adds a sequence of connected cubic Bézier curves to the current figure. AddBeziers(ReadOnlySpan<PointF>) Adds a sequence of connected cubic Bézier curves to the current figure. AddClosedCurve(Point[], Single) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(Point[]) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(PointF[], Single) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(PointF[]) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(ReadOnlySpan<Point>, Single) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(ReadOnlySpan<Point>) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(ReadOnlySpan<PointF>, Single) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddClosedCurve(ReadOnlySpan<PointF>) Adds a closed curve to this path. A cardinal spline curve is used because the curve travels through each of the points in the array. AddCurve(Point[], Int32, Int32, Single) Adds a spline curve to the current figure. AddCurve(Point[], Single) Adds a spline curve to the current figure. AddCurve(Point[]) Adds a spline curve to the current figure. A cardinal spline curve is used because the curve travels through each of the points in the array. AddCurve(PointF[], Int32, Int32, Single) Adds a spline curve to the current figure. AddCurve(PointF[], Single) Adds a spline curve to the current figure. AddCurve(PointF[]) Adds a spline curve to the current figure. A cardinal spline curve is used because the curve travels through each of the points in the array. AddCurve(ReadOnlySpan<Point>, Single) Adds a spline curve to the current figure. AddCurve(ReadOnlySpan<Point>) Adds a spline curve to the current figure. AddCurve(ReadOnlySpan<PointF>, Single) Adds a spline curve to the current figure. AddCurve(ReadOnlySpan<PointF>) Adds a spline curve to the current figure. AddEllipse(Int32, Int32, Int32, Int32) Adds an ellipse to the current path. AddEllipse(Rectangle) Adds an ellipse to the current path. AddEllipse(RectangleF) Adds an ellipse to the current path. AddEllipse(Single, Single, Single, Single) Adds an ellipse to the current path. AddLine(Int32, Int32, Int32, Int32) Appends a line segment to the current figure. AddLine(Point, Point) Appends a line segment to this GraphicsPath. AddLine(PointF, PointF) Appends a line segment to this GraphicsPath. AddLine(Single, Single, Single, Single) Appends a line segment to this GraphicsPath. AddLines(Point[]) Appends a series of connected line segments to the end of this GraphicsPath. AddLines(PointF[]) Appends a series of connected line segments to the end of this GraphicsPath. AddLines(ReadOnlySpan<Point>) Appends a series of connected line segments to the end of this GraphicsPath. AddLines(ReadOnlySpan<PointF>) Appends a series of connected line segments to the end of this GraphicsPath. AddPath(GraphicsPath, Boolean) Appends the specified GraphicsPath to this path. AddPie(Int32, Int32, Int32, Int32, Single, Single) Adds the outline of a pie shape to this path. AddPie(Rectangle, Single, Single) Adds the outline of a pie shape to this path. AddPie(Single, Single, Single, Single, Single, Adds the outline of a pie shape to this path. AddPolygon(Point[]) Adds a polygon to this path. AddPolygon(PointF[]) Adds a polygon to this path. AddPolygon(ReadOnlySpan<Point>) Adds a polygon to this path. AddPolygon(ReadOnlySpan<PointF>) Adds a polygon to this path. AddRectangle(Rectangle) Adds a rectangle to this path. AddRectangle(RectangleF) Adds a rectangle to this path. AddRectangles(ReadOnlySpan<Rectangle>) Adds a series of rectangles to this path. AddRectangles(ReadOnlySpan<RectangleF>) Adds a series of rectangles to this path. AddRectangles(Rectangle[]) Adds a series of rectangles to this path. AddRectangles(RectangleF[]) Adds a series of rectangles to this path. AddRoundedRectangle(Rectangle, Size) Adds a rounded rectangle to this path. AddRoundedRectangle(RectangleF, SizeF) Adds a rounded rectangle to this path. AddString(String, FontFamily, Int32, Single, Point, Adds a text string to this path. AddString(String, FontFamily, Int32, Single, Adds a text string to this path. PointF, StringFormat) AddString(String, FontFamily, Int32, Single, Adds a text string to this path. Rectangle, StringFormat) AddString(String, FontFamily, Int32, Single, Adds a text string to this path. RectangleF, StringFormat) ClearMarkers() Clears all markers from this path. Clone() Creates an exact copy of this path. Closes all open figures in this path and starts a new figure. It closes each open figure by connecting a line from its endpoint to its starting CloseAllFigures() point. Closes the current figure and starts a new figure. If the current figure contains a sequence of connected lines and curves, the method closes the CloseFigure() loop by connecting a line from the endpoint to the starting point. Creates an object that contains all the relevant information required to generate a proxy used to communicate with a remote object. (Inherited from MarshalByRefObject) Dispose() Releases all resources used by this GraphicsPath. Determines whether the specified object is equal to the current object. (Inherited from Object) Finalize() Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. Flatten() Converts each curve in this path into a sequence of connected line segments. Flatten(Matrix, Single) Converts each curve in this GraphicsPath into a sequence of connected line segments. Flatten(Matrix) Applies the specified transform and then converts each curve in this GraphicsPath into a sequence of connected line segments. GetBounds() Returns a rectangle that bounds this GraphicsPath. GetBounds(Matrix, Pen) Returns a rectangle that bounds this GraphicsPath when the current path is transformed by the specified Matrix and drawn with the specified Pen. GetBounds(Matrix) Returns a rectangle that bounds this GraphicsPath when this path is transformed by the specified Matrix. Serves as the default hash function. (Inherited from Object) GetLastPoint() Gets the last point in the PathPoints array of this GraphicsPath. GetLifetimeService() Retrieves the current lifetime service object that controls the lifetime policy for this instance. (Inherited from MarshalByRefObject) GetPathPoints(Span<PointF>) Gets the points in the path. GetPathTypes(Span<Byte>) Gets the PathPointType types for the points in the path. Gets the Type of the current instance. (Inherited from Object) InitializeLifetimeService() Obtains a lifetime service object to control the lifetime policy for this instance. (Inherited from MarshalByRefObject) Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen and using the IsOutlineVisible(Int32, Int32, Pen, Graphics) specified Graphics. IsOutlineVisible(Int32, Int32, Pen) Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen. Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen and using the IsOutlineVisible(Point, Pen, Graphics) specified Graphics. IsOutlineVisible(Point, Pen) Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen. Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen and using the IsOutlineVisible(PointF, Pen, Graphics) specified Graphics. IsOutlineVisible(PointF, Pen) Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen. Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen and using the IsOutlineVisible(Single, Single, Pen, Graphics) specified Graphics. IsOutlineVisible(Single, Single, Pen) Indicates whether the specified point is contained within (under) the outline of this GraphicsPath when drawn with the specified Pen. IsVisible(Int32, Int32, Graphics) Indicates whether the specified point is contained within this GraphicsPath, using the specified Graphics. IsVisible(Int32, Int32) Indicates whether the specified point is contained within this GraphicsPath. IsVisible(Point, Graphics) Indicates whether the specified point is contained within this GraphicsPath. IsVisible(Point) Indicates whether the specified point is contained within this GraphicsPath. IsVisible(PointF, Graphics) Indicates whether the specified point is contained within this GraphicsPath. IsVisible(PointF) Indicates whether the specified point is contained within this GraphicsPath. IsVisible(Single, Single, Graphics) Indicates whether the specified point is contained within this GraphicsPath in the visible clip region of the specified Graphics. IsVisible(Single, Single) Indicates whether the specified point is contained within this GraphicsPath. Creates a shallow copy of the current Object. (Inherited from Object) Creates a shallow copy of the current MarshalByRefObject object. (Inherited from MarshalByRefObject) Reset() Empties the PathPoints and PathTypes arrays and sets the FillMode to Alternate. Reverse() Reverses the order of points in the PathPoints array of this GraphicsPath. SetMarkers() Sets a marker on this GraphicsPath. StartFigure() Starts a new figure without closing the current figure. All subsequent points added to the path are added to this new figure. Returns a string that represents the current object. (Inherited from Object) Transform(Matrix) Applies a transform matrix to this GraphicsPath. Warp(PointF[], RectangleF, Matrix, WarpMode, Applies a warp transform, defined by a rectangle and a parallelogram, to this GraphicsPath. Warp(PointF[], RectangleF, Matrix, WarpMode) Applies a warp transform, defined by a rectangle and a parallelogram, to this GraphicsPath. Warp(PointF[], RectangleF, Matrix) Applies a warp transform, defined by a rectangle and a parallelogram, to this GraphicsPath. Warp(PointF[], RectangleF) Applies a warp transform, defined by a rectangle and a parallelogram, to this GraphicsPath. Warp(ReadOnlySpan<PointF>, RectangleF, Matrix, Applies a warp transform, defined by a rectangle and a parallelogram, to this GraphicsPath. WarpMode, Single) Widen(Pen, Matrix, Single) Replaces this GraphicsPath with curves that enclose the area that is filled when this path is drawn by the specified pen. Widen(Pen, Matrix) Adds an additional outline to the GraphicsPath. Widen(Pen) Adds an additional outline to the path. Applies to See also Collaborate with us on GitHub The source for this content can be found on GitHub, where you can also create and review issues and pull requests. For more information, see our contributor guide.
{"url":"https://learn.microsoft.com/en-us/dotnet/api/system.drawing.drawing2d.graphicspath?view=net-8.0&viewFallbackFrom=dotnet-plat-ext-3.1","timestamp":"2024-11-02T04:59:52Z","content_type":"text/html","content_length":"127628","record_id":"<urn:uuid:b6b12589-6ae7-4dcb-8d51-3d8b7cde42f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00600.warc.gz"}
Concept drift and data centricity This plot shows how coefficients in a linear model can change (not only in effect size, but also in sign) as new data is added to the training set (as a result of data or concept drift). Think of it as new retail sales data being added to the set over time. In the plot, b is the coefficient of interest and z is the proportion of new data (Population 2) gradually added to the existing training data (Population 1). First, all the data is from P1 (so z is 0), then it’s 75% P1 and 25% P2 (z is 0.25), and so on. As we add more of new data, we observe how the estimated effect changes. It starts out negative, becomes positive, then negative again. When the old and new data are equally mixed (z is 0.50), the previously negative effect disappears. This thought experiment (by John Mount) reminds me of Lord’s Paradox (John calls it a continuous version of Simpson’s Paradox and that’s another way of putting it). The data changes, but the model assumptions remain the same, and that’s a problem. This is another example of why staying true to the data, or data centricity, is critical to getting the right insights from models for decision making. You can find the Python code walkthrough and Jupyter notebook here. If you want to learn more about data centricity, here is a one-pager.
{"url":"https://ozer.gt/log/2024/09/22/concept-drift-and-data-centricity/","timestamp":"2024-11-03T07:35:01Z","content_type":"text/html","content_length":"27867","record_id":"<urn:uuid:5d65923f-995e-42e7-a4fd-f784bada371c>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00102.warc.gz"}
Printable Calendars AT A GLANCE Multiplication Worksheets 0 2 Multiplication Worksheets 0 2 - Students multiply 0 or 1 times numbers up to 12. Multiply by 0 and 1. The 2 times table worksheets bundle is a valuable educational resource designed to facilitate the learning and mastery of multiplication for young. There are two worksheets in this section that include all of the possible questions. In particular recall of the 2, 5 and 10 'times tables', multiplying by whole tens and solving. Web multiplication worksheets and tables. Web zip, 2.16 mb. 1 ____x 1 0 ____x 10 1 ____x 3 2 ____x 1 1 ____x 8 2 ____x 10 0 ____x 6 g. Web these grade 2 multiplication worksheets emphasize early multiplication skills; Web basic multiplication (0 through 12) on this page you'll find all of the resources you need for teaching basic facts through 12. Students multiply 0 or 1 times numbers up to 12. Use this sheet for skill practice or. Our grade 3 multiplication worksheets start with the meaning of multiplication and follow up with lots of multiplication practice and the. Web this multiplication (by 0, 1, and 2) worksheet is perfect to practice multiplication skills. Your elementary grade students will love this multiplication (by 0, 1, and 2) worksheet. Multiplication facts with 0's & 1's. Use our multiplication worksheet generator to create multiplication worksheets for free. (up to 12s) lay all of the cards on the. Multiplication facts with 0's & 1's. Web this multiplication (by 0, 1, and 2) worksheet is perfect to practice multiplication skills. Free math worksheets, charts and calculators. This printable work sheet presents 35 problems for practicing multiplication math facts: Multiplication Facts to 49 No Zeros with Target Fact 5 (LP) 1 ____x 1 0 ____x 10 1 ____x 3 2 ____x 1 1 ____x 8 2 ____x 10 0 ____x 6 g. Helping students learn their multiplication facts is one of the foundations of any math curriculum. Web math practice 4 you: Use this sheet for skill practice or. Cheatsheeting.com has been visited by 10k+ users in the past month multiplication facts 0 2 worksheets times tables worksheets timed There are two worksheets in this section that include all of the possible questions. Web multiplication worksheets and tables. Includes multiplication games, mystery pictures,. This printable work sheet presents 35 problems for practicing multiplication math facts: Our grade 3 multiplication worksheets start with the meaning of multiplication and follow up with lots of multiplication practice and the. Free Multiplication Worksheets, Multiplication Problems, Addition Your elementary grade students will love this multiplication (by 0, 1, and 2) worksheet. Multiplying by 0 through 2. Web these grade 2 multiplication worksheets emphasize early multiplication skills; Our grade 3 multiplication worksheets start with the meaning of multiplication and follow up with lots of multiplication practice and the. There are two worksheets in this section that include all. Multiplication Worksheets 06 Web basic multiplication (0 through 12) on this page you'll find all of the resources you need for teaching basic facts through 12. Cheatsheeting.com has been visited by 10k+ users in the past month There are two worksheets in this section that include all of the possible questions. Web these grade 2 multiplication worksheets emphasize early multiplication skills; 1 ____x. Free Multiplication Printable Worksheets The 2 times table worksheets bundle is a valuable educational resource designed to facilitate the learning and mastery of multiplication for young. There are two worksheets in this section that include all of the possible questions. Web this multiplication (by 0, 1, and 2) worksheet is perfect to practice multiplication skills. Free math worksheets, charts and calculators. Our grade 3. 1 Digit By 2 Digit Multiplication Worksheets Includes multiplication games, mystery pictures,. There are two worksheets in this section that include all of the possible questions. Use this sheet for skill practice or. Hand2mind.com has been visited by 10k+ users in the past month (up to 12s) lay all of the cards on the. Free Printable Times Table Worksheets Customize and Print Free | worksheets | math drills |. Cheatsheeting.com has been visited by 10k+ users in the past month Math explained in easy language,. In particular recall of the 2, 5 and 10 'times tables', multiplying by whole tens and solving. Your elementary grade students will love this multiplication (by 0, 1, and 2) worksheet. 20++ Mixed Multiplication Worksheets Worksheets Decoomo Your elementary grade students will love this multiplication (by 0, 1, and 2) worksheet. Helping students learn their multiplication facts is one of the foundations of any math curriculum. Multiplication facts with 0's & 1's. Multiply by 0 and 1. The 2 times table worksheets bundle is a valuable educational resource designed to facilitate the learning and mastery of multiplication. Multiplication Worksheets Mathematics.lk Web these grade 2 multiplication worksheets emphasize early multiplication skills; Free | worksheets | math drills |. Hand2mind.com has been visited by 10k+ users in the past month 1 ____x 1 0 ____x 10 1 ____x 3 2 ____x 1 1 ____x 8 2 ____x 10 0 ____x 6 g. Web math practice 4 you: Multiplication Worksheets 0 2 - 1 ____x 1 0 ____x 10 1 ____x 3 2 ____x 1 1 ____x 8 2 ____x 10 0 ____x 6 g. Web math practice 4 you: Our grade 3 multiplication worksheets start with the meaning of multiplication and follow up with lots of multiplication practice and the. Hand2mind.com has been visited by 10k+ users in the past month Use our multiplication worksheet generator to create multiplication worksheets for free. The 2 times table worksheets bundle is a valuable educational resource designed to facilitate the learning and mastery of multiplication for young. Multiplication facts with 0's & 1's. There are two worksheets in this section that include all of the possible questions. Web this section includes math worksheets for practicing multiplication facts to from 0 to 49. Use this sheet for skill practice or. There are two worksheets in this section that include all of the possible questions. 1 ____x 1 0 ____x 10 1 ____x 3 2 ____x 1 1 ____x 8 2 ____x 10 0 ____x 6 g. In particular recall of the 2, 5 and 10 'times tables', multiplying by whole tens and solving. Helping students learn their multiplication facts is one of the foundations of any math curriculum. Web these grade 2 multiplication worksheets emphasize early multiplication skills; Hand2mind.com has been visited by 10k+ users in the past month (up to 12s) lay all of the cards on the. Math explained in easy language,. Helping students learn their multiplication facts is one of the foundations of any math curriculum. Web Basic Multiplication (0 Through 12) On This Page You'll Find All Of The Resources You Need For Teaching Basic Facts Through 12. Multiplication facts with 0's & 1's. Web zip, 2.16 mb. Hand2mind.com has been visited by 10k+ users in the past month Web math practice 4 you: (Up To 12S) Lay All Of The Cards On The. Cheatsheeting.com has been visited by 10k+ users in the past month Math explained in easy language,. Your elementary grade students will love this multiplication (by 0, 1, and 2) worksheet. In particular recall of the 2, 5 and 10 'times tables', multiplying by whole tens and solving. Helping Students Learn Their Multiplication Facts Is One Of The Foundations Of Any Math Curriculum. Our grade 3 multiplication worksheets start with the meaning of multiplication and follow up with lots of multiplication practice and the. Web these grade 2 multiplication worksheets emphasize early multiplication skills; There are two worksheets in this section that include all of the possible questions. Web this section includes math worksheets for practicing multiplication facts to from 0 to The 2 Times Table Worksheets Bundle Is A Valuable Educational Resource Designed To Facilitate The Learning And Mastery Of Multiplication For Young. Use our multiplication worksheet generator to create multiplication worksheets for free. Multiply by 0 and 1. 1 ____x 1 0 ____x 10 1 ____x 3 2 ____x 1 1 ____x 8 2 ____x 10 0 ____x 6 g. This printable work sheet presents 35 problems for practicing multiplication math facts: Related Post:
{"url":"https://ataglance.randstad.com/viewer/multiplication-worksheets-0-2.html","timestamp":"2024-11-08T18:32:27Z","content_type":"text/html","content_length":"38167","record_id":"<urn:uuid:d43ed32b-bfac-4324-be80-dbf40166dcba>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00795.warc.gz"}
My name is Emmanuel and I am the lead Upper-level Mathematics Tutor for University Tutorial Services. I wanted to inform you that tutoring services are available for MATH 447 as well as the other following upper-level math courses: MATH 304 MATH 323 MATH 324 MATH 330 MATH 346 MATH 448 You can sign up for appointments on Starfish which is available on your my.binghamton.edu page. Appointments can hold up to eight people at a time and are available on a first come first serve basis. You must sign up for an appointment on Starfish, otherwise the tutor will not be there. We ask that you come with specific questions or topics that you want to go over. Additionally, I tutor MATH 346 and MATH 448. If you are unable to sign up for a session that seems like it is available it may be because someone from another course signed up before you. If this is the case, then MATH 447 students will not be able to register for the session. If you have any questions please redirect them to uts@binghamton.edu. Additionally, if you have any questions about my sessions feel free to email me at edavis15@binghamton.edu. To login into Webwork use your binghamton username (from your email) as your username and password. You can post questions and discuss the course at the course Piazza site. Note the Piazza site is shared with the other Probability section, so not all policies, homework assignments, exams, etc. will be relevant. Homework 1- Due Friday Jan 26. The problems assigned on webassign were Chapter 2- 2, 8, 11, 15, 31, 39, 43, 47, 53, 57 Homework 2- Due Friday Feb 2. Turn the above written problems in on Friday. The Webassign problems are due Monday. Your webassign codes have been extended until Feb. 14th. The problems assigned on webassign were Chapter 2- 65, 71, 75, 87, 89, 93, 97, 113, 121 Homework 3- Due Friday Feb 9. Note this week, I'm trying webwork for the online homework. Homework 6- Due Friday Mar 9. The written part is due now Monday. Solutions to the written parts of the first eight homeworks are now on blackboard. MIDTERM Feb. 23 I have uploaded 2 practice midterms to “myCourses”. They are essentially midterms I given before. We used a different book, so the content and emphasis doesn't exactly line up, but the exams should give you idea of what to expect. I recommend treating them like a practice exams and spending 80 minutes on them. REMARK ON PRACTICE MIDTERMS - pmf stands for probability mass function and means the same thing as distribution function. Midterm 1 info Apr. 11 - Review additional class Apr. 11 from 4:40-6:10 in Old Champlain G102 (next to the math building). Apr. 20 - Section 6.4 (Note this class will end early, around 8:45) Apr. 27 - Section 7.3-7.4 (We'll do student evaluation this day). Note: I will be gone May 9 - 16th. I'll have some offices hours before I leave. Practice Finals have been posted on myCourses. As usual, these are problems I have written for slightly different so some problems might seem harder/easier than originally intended. Also not everything is covered.
{"url":"http://www2.math.binghamton.edu/p/people/renfrew/447-18","timestamp":"2024-11-08T01:26:48Z","content_type":"text/html","content_length":"23812","record_id":"<urn:uuid:1e54f4c2-ebf5-4104-a28c-e4629fdc885e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00712.warc.gz"}
23.9 Inductance - College Physics 2e | OpenStax Learning Objectives By the end of this section, you will be able to: • Calculate the inductance of an inductor. • Calculate the energy stored in an inductor. • Calculate the emf generated in an inductor. Induction is the process in which an emf is induced by changing magnetic flux. Many examples have been discussed so far, some more effective than others. Transformers, for example, are designed to be particularly effective at inducing a desired voltage and current with very little loss of energy to other forms. Is there a useful physical quantity related to how “effective” a given device is? The answer is yes, and that physical quantity is called inductance. Mutual inductance is the effect of Faraday’s law of induction for one device upon another, such as the primary coil in transmitting energy to the secondary in a transformer. See Figure 23.37, where simple coils induce emfs in one another. In the many cases where the geometry of the devices is fixed, flux is changed by varying current. We therefore concentrate on the rate of change of current, $ΔI/ΔtΔI/Δt$, as the cause of induction. A change in the current $I1I1$ in one device, coil 1 in the figure, induces an $emf2emf2$ in the other. We express this in equation form as where $MM$ is defined to be the mutual inductance between the two devices. The minus sign is an expression of Lenz’s law. The larger the mutual inductance $MM$, the more effective the coupling. For example, the coils in Figure 23.37 have a small $MM$ compared with the transformer coils in Figure 23.27. Units for $MM$ are $(V⋅s)/A=Ω⋅s(V⋅s)/A=Ω⋅s$, which is named a henry (H), after Joseph Henry. That is, $1 H=1Ω⋅s1 H=1Ω⋅s$. Nature is symmetric here. If we change the current $I2I2$ in coil 2, we induce an $emf1emf1$ in coil 1, which is given by where $MM$ is the same as for the reverse process. Transformers run backward with the same effectiveness, or mutual inductance $MM$. A large mutual inductance $MM$ may or may not be desirable. We want a transformer to have a large mutual inductance. But an appliance, such as an electric clothes dryer, can induce a dangerous emf on its case if the mutual inductance between its coils and the case is large. One way to reduce mutual inductance $MM$ is to counterwind coils to cancel the magnetic field produced. (See Figure 23.38.) Self-inductance, the effect of Faraday’s law of induction of a device on itself, also exists. When, for example, current through a coil is increased, the magnetic field and flux also increase, inducing a counter emf, as required by Lenz’s law. Conversely, if the current is decreased, an emf is induced that opposes the decrease. Most devices have a fixed geometry, and so the change in flux is due entirely to the change in current $ΔIΔI$ through the device. The induced emf is related to the physical geometry of the device and the rate of change of current. It is given by where $LL$ is the self-inductance of the device. A device that exhibits significant self-inductance is called an inductor, and given the symbol in Figure 23.39. The minus sign is an expression of Lenz’s law, indicating that emf opposes the change in current. Units of self-inductance are henries (H) just as for mutual inductance. The larger the self-inductance $LL$ of a device, the greater its opposition to any change in current through it. For example, a large coil with many turns and an iron core has a large $LL$ and will not allow current to change quickly. To avoid this effect, a small $LL$ must be achieved, such as by counterwinding coils as in Figure 23.38. A 1 H inductor is a large inductor. To illustrate this, consider a device with $L=1.0 HL=1.0 H$ that has a 10 A current flowing through it. What happens if we try to shut off the current rapidly, perhaps in only 1.0 ms? An emf, given by $emf=−L(ΔI/Δt)emf=−L(ΔI/Δt)$, will oppose the change. Thus an emf will be induced given by $emf=−L(ΔI/Δt)=(1.0 H)[(10 A)/(1.0 ms)]=10,000 Vemf=−L(ΔI/Δt)=(1.0 H)[(10 A)/(1.0 ms)]=10,000 V$. The positive sign means this large voltage is in the same direction as the current, opposing its decrease. Such large emfs can cause arcs, damaging switching equipment, and so it may be necessary to change current more slowly. There are uses for such a large induced voltage. Camera flashes use a battery, two inductors that function as a transformer, and a switching system or oscillator to induce large voltages. (Remember that we need a changing magnetic field, brought about by a changing current, to induce a voltage in another coil.) The oscillator system will do this many times as the battery voltage is boosted to over one thousand volts. (You may hear the high pitched whine from the transformer as the capacitor is being charged.) A capacitor stores the high voltage for later use in powering the flash. (See Figure 23.40.) It is possible to calculate $LL$ for an inductor given its geometry (size and shape) and knowing the magnetic field that it produces. This is difficult in most cases, because of the complexity of the field created. So in this text the inductance $LL$ is usually a given quantity. One exception is the solenoid, because it has a very uniform field inside, a nearly zero field outside, and a simple shape. It is instructive to derive an equation for its inductance. We start by noting that the induced emf is given by Faraday’s law of induction as $emf=−N(ΔΦ/Δt)emf=−N(ΔΦ/Δt)$ and, by the definition of self-inductance, as $emf=−L(ΔI/Δt)emf=−L(ΔI/Δt)$. Equating these yields Solving for $LL$ gives This equation for the self-inductance $LL$ of a device is always valid. It means that self-inductance $LL$ depends on how effective the current is in creating flux; the more effective, the greater $ΔΦΔΦ$/ $ΔIΔI$ is. Let us use this last equation to find an expression for the inductance of a solenoid. Since the area $A A$ of a solenoid is fixed, the change in flux is $Δ Φ = Δ ( B A ) = A Δ B Δ Φ = Δ ( B A ) = A Δ B$. To find $Δ B Δ B$, we note that the magnetic field of a solenoid is given by $B=μ0nI=μ0NIℓB=μ0nI=μ0NIℓ$. (Here $n=N/ℓn=N/ℓ$, where $N N$ is the number of coils and $ℓ ℓ$ is the solenoid’s length.) Only the current changes, so that $ΔΦ=AΔB=μ0NAΔIℓΔΦ=AΔB=μ0NAΔIℓ$. Substituting $Δ Φ Δ Φ$ into $L=NΔΦΔIL=NΔΦΔI$ gives This simplifies to This is the self-inductance of a solenoid of cross-sectional area $A A$ and length $ℓ ℓ$. Note that the inductance depends only on the physical characteristics of the solenoid, consistent with its Calculating the Self-inductance of a Moderate Size Solenoid Calculate the self-inductance of a 10.0 cm long, 4.00 cm diameter solenoid that has 200 coils. This is a straightforward application of $L=μ0N2AℓL=μ0N2Aℓ$, since all quantities in the equation except $LL$ are known. Use the following expression for the self-inductance of a solenoid: The cross-sectional area in this example is $A=πr2=(3.14...)(0.0200 m)2=1.26×10−3m2A=πr2=(3.14...)(0.0200 m)2=1.26×10−3m2$, $N N$ is given to be 200, and the length $ℓ ℓ$ is 0.100 m. We know the permeability of free space is $μ0=4π×10−7T⋅m/Aμ0=4π×10−7T⋅m/A$. Substituting these into the expression for $L L$ gives $L = (4π×10−7 T⋅m/A)(200)2(1.26×10−3 m2)0.100 m = 0.632 mH. L = (4π×10−7 T⋅m/A)(200)2(1.26×10−3 m2)0.100 m = 0.632 mH.$ This solenoid is moderate in size. Its inductance of nearly a millihenry is also considered moderate. One common application of inductance is used in traffic lights that can tell when vehicles are waiting at the intersection. An electrical circuit with an inductor is placed in the road under the place a waiting car will stop over. The body of the car increases the inductance and the circuit changes sending a signal to the traffic lights to change colors. Similarly, metal detectors used for airport security employ the same technique. A coil or inductor in the metal detector frame acts as both a transmitter and a receiver. The pulsed signal in the transmitter coil induces a signal in the receiver. The self-inductance of the circuit is affected by any metal object in the path. Such detectors can be adjusted for sensitivity and also can indicate the approximate location of metal found on a person. See Figure 23.41. Energy Stored in an Inductor We know from Lenz’s law that inductances oppose changes in current. There is an alternative way to look at this opposition that is based on energy. Energy is stored in a magnetic field. It takes time to build up energy, and it also takes time to deplete energy; hence, there is an opposition to rapid change. In an inductor, the magnetic field is directly proportional to current and to the inductance of the device. It can be shown that the energy stored in an inductor $EindEind$ is given by This expression is similar to that for the energy stored in a capacitor. Calculating the Energy Stored in the Field of a Solenoid How much energy is stored in the 0.632 mH inductor of the preceding example when a 30.0 A current flows through it? The energy is given by the equation $Eind=12LI2Eind=12LI2$, and all quantities except $EindEind$ are known. Substituting the value for $LL$ found in the previous example and the given current into $Eind=12LI2Eind=12LI2$ gives $Eind = 12LI2 = 0.5(0.632×10−3 H)(30.0 A)2=0.284 J. Eind = 12LI2 = 0.5(0.632×10−3 H)(30.0 A)2=0.284 J.$ This amount of energy is certainly enough to cause a spark if the current is suddenly switched off. It cannot be built up instantaneously unless the power input is infinite.
{"url":"https://openstax.org/books/college-physics-2e/pages/23-9-inductance","timestamp":"2024-11-02T17:35:24Z","content_type":"text/html","content_length":"607796","record_id":"<urn:uuid:f6dfdf1c-09ab-4a5d-a346-6d68de794351>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00403.warc.gz"}
Zookeepers - DREME For Teachers 1+ Players Moderate teacher engagement • Three “food” piles: One pile of 3 objects, one pile of 4 objects, and one pile of 5 objects. Each pile should use different objects and each of the small objects within one pile should be the same. For example, one pile of 3 counting cubes, one pile of 4 square blocks, and one pile of 5 triangle blocks. • Three cards (one per “food” pile). Create your own cards or use these examples. Each card needs to match the “food” in that pile. For example, one card of counting cubes, one card of square blocks, and one card of triangle blocks. • A dish or container to hold the “food.” • A toy or picture to represent the hungry animal, such as this example. Setup — Less than 5 minutes • Children are the zookeepers and must feed the hungry animal! Gather the “food” piles. • Create the cards or print the ones provided. • Put the piles of “food,” cards, and a dish or container on a flat surface. Place the animal toy or picture by the dish. 1. Shuffle the cards and draw two (for example, the counting cubes card and the square blocks card). These are the two “food” piles that children will use to compare the quantities. 2. Compare how many objects are in each of the two piles by counting or matching the objects one-by-one. Say: “The animal always wants to eat from the pile that has more. How many objects are in each pile?” • If one pile has more, move one object from that pile into the dish. • If the two piles have the same, put one object from either pile—not one from each pile—into the dish. 3. Children take turns shuffling and drawing cards, comparing the piles, and moving “food” into the dish. 4. The game ends when all the piles are empty or there is only one object left. The animal is full and happy. Good job, zookeeper! Checks for Understanding To deepen children’s learning about early math concepts, talk and ask questions while doing this activity together. Here are some examples to get you started: • “How can you figure out how many blocks you have?” • “How many frogs are there?” • “How did you know to put in a frog toy and not a wooden block?” • “You said you have three frog toys and seven blocks. Is three greater than seven or less than seven?” • “Which food pile has more?” • “How many more orange linking cubes do you have than frog toys?” • “Since you started with five cubes, how many cubes will have left once you fed one to the bear? Activity Modifications Once you have tried the activity, here are some other things you can do. Try these modifications to keep the activity interesting and challenging for children all year. • Use five frames or 10 frames to help children count objects side by side in order to compare. • Create more than three piles of small objects. • Use larger numbers of small objects, such as 15-18 objects per pile. • Sets with similar quantities are more challenging to compare than sets containing very different numbers of objects. Have children use sets with a similar number of objects in each set (a set of 13, a set of 12, and a set of 14). • To break a tie when two sets have the same number of objects, have children compare the number of each object already in the container and put in one of the least common types of objects. If the number of objects in the two sets is still equal, the child can choose which one to add. • Play the game with a timer and have children try to fill their container before time runs out. • During a tie, the child can choose one set and add the entire set to the container. That set is then eliminated from the game and that object type card is removed. • Have children sort objects into their sets before beginning the activity as opposed to having all the objects together in the same container or the teacher being the one to sort the objects. • Give roles to the participating children. Put one child in charge of a set of objects, and whenever that set of objects is drawn, that child is responsible for counting (for example, if frog toys are being used, the child always puts out the set of frog toys and counts it). Put one child in charge of the container and of counting how many of each object is in the container.
{"url":"https://preschoolmath.stanford.edu/resource/comparing-sets/","timestamp":"2024-11-02T23:29:44Z","content_type":"text/html","content_length":"93521","record_id":"<urn:uuid:7832984f-c264-4485-9f92-ebd134d8a6b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00597.warc.gz"}
Cash-out Mortgage Refinance or Home Equity Loan?Cash-out Mortgage Refinance or Home Equity Loan? Column Delivered August 21, 2000, Revised September 6, 2002 �I need $50,000 to remodel my house. Is it better to refinance my existing mortgage (with a balance about $140,000) into a new $190,000 mortgage, or should I borrow the extra $50,000 with a home equity loan.?� Every homeowner in need of extra cash faces this question. To answer it, you must consider several factors, including: * The interest rate and points you have to pay to refinance the first mortgage, compared with the same costs for a second mortgage. * Any mortgage insurance requirement on the new first mortgage. * The interest rate, mortgage insurance, and period remaining on the term of the existing first mortgage. * The term you select on the new first relative to that on the new second. * The amount of cash you need. * Your income-tax bracket. * The length of time you expect to remain in your home. * The interest rate you can earn on savings. All these factors are pulled together in calculator (3d), Refinance to Raise Cash or Take Out a Second Mortgage. This calculator computes all costs of both options over a future time period specified by the user. It also shows a break-even interest rate on the second mortgage -- the highest rate you can pay on the second and come out ahead of the refinance option. The second mortgage is the less-costly option if it is available at an interest rate below the break-even rate. Consider your case. You have a $140,000 first mortgage and you need $50,000. The average age of most refinanced mortgages is a few years, so I'm assuming you acquired yours two years ago, at 7 percent for 30 years, without mortgage insurance. Example 1 assumes you are in the highest income tax bracket (39.6%) and can earn 5% on your investments. Your house is now worth $213,000. A new loan for $190,000 plus settlement costs will require mortgage insurance. I�m assuming the insurance will continue during the entire 5 years you expect to remain in your home. The new first mortgage would be for 30 years at 8.25% and one point. The second mortgage for $50,000 plus costs would be for 15 years at 11.5% and one point. The break-even rate on the second mortgage is 18.25%, well above the market rate of 11.5% for the second. Over 5 years, the second would cost $11,361 less than refinancing the first. Example 2 is the same, except that I assume you can afford a 15-year term on the new first mortgage cash-out. The break-even rate on the second would fall to 16.86%, and the savings on the second would drop to $8,982. Example 3 is the same as Example 2, except that I assume you are in the 15% tax bracket. The break-even rate on the second mortgage would drop to 14.98%, and the savings to $8,230. Example 4 is the same as 3 except that I assume that your house will appreciate by 5% a year, resulting in termination of mortgage insurance on the new first mortgage after 18 months. The break-even rate on the second would fall to 13.21%, and the savings to $4,021. Example 5 goes one step further and assumes that marked recent appreciation in the value of your house eliminates the need for mortgage insurance altogether. The break-even rate on the second would drop to 12.41% and the savings to $2,138. Borrowers who acquired mortgages a few years ago at rates significantly below the current market are likely to do better taking second mortgages than refinancing. But older mortgages carrying higher rates can be a different story. For example, lets make all the assumptions of Example 1, but instead of having a 7% 30-year loan from 1998 we assume you have a 10% 30-year loan from 1990. The break-even would be 9.98%, or below the market rate on the second, and refinancing would save you $2,467 over 5 years compared to the second. If we apply the assumptions of Example 5 to the 10% mortgage, the breakeven on the second would be 3.81% and the savings from refinancing $17,106. But don't rely on generalizations because no two situations are identical. Use the calculator to find the answer that applies to your precise situation. Copyright Jack Guttentag 2002
{"url":"http://www.explainingmortgages.com/mtgpro/a%20-%20refinance/cash-out%20refi%20or%20home%20equity%20loan.htm","timestamp":"2024-11-08T04:19:24Z","content_type":"text/html","content_length":"19634","record_id":"<urn:uuid:8c6f018c-1796-4ecb-9856-e1af4429ffb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00357.warc.gz"}
How is SGPI to CGPA calculated? How to Calculate CGPA from SGPA? 1. CGPA= (SGPAs of All semesters in an academic year)/Number of semesters. 2. CGPA of 1 college academic term = (SGPA of semester 1+ SGPA of semester 2)/ 2. 3. SGPA to Percentage Calculator. 4. [(SGPA * 10) – 7.5 = Percentage] 5. For example: 6. SGPA = Σ(Ci×GPi)/ΣCi. How do I get SGPI? The SGPI is the weighted average of the grade points obtained by a student in all the courses of a semester. Note: The SGPI should be calculated upto two decimal places. What is SGPI? SGPI- Semester Grade Performance Index. Outstanding. Is Sgpa and percentage same? Converting SGPA to percentage is simple after you have calculated your SGPA. To get the percentage, multiply (*) the SGPA with 10 and then subtract from 7.5 from the results. These will give you the percentage. For example, using the SGPA we got (8.3), we can quickly get the percentage. What is SGPI and CGPA? SGPI means Semester Grade Performance Index. the courses by the learner during the semester. CGPA means Cumulative Grade Point Average. For 4 Years Course : If duration of course is of four years, the degree shall be given to students based upon CGPA considering last four semesters performance. How do I know my CGPA? How to Calculate CGPA 1. Step 1: Add the grade points i.e 9+8+7+8+8 = 40. 2. Step 2: Divide the sum by 5 i.e 40/5 = 8. 3. Thus, your CGPA is 8.0. How do you calculate average SGPI? To calculate your grade point average, first multiply the number of credits each class is worth by the point value for the letter grade that you earned in that class. Next, total the grade points of all of your classes for that semester and divide it by the number of credit hours that you attempted. Is SGPI and CGPA same? The SGPI is the weighted average of the grade points obtained in all. the courses by the learner during the semester. CGPA means Cumulative Grade Point Average. For 4 Years Course : If duration of course is of four years, the degree shall be given to students based upon CGPA considering last four semesters performance. What is SGPI and CGPI? An up to date assessment of the overall performance of a learner from the time s/he enrolled the University of Mumbai is obtained by calculating a number called the Cumulative Grade Performance Index (CGPI), in a manner similar to the calculation of SGPI. What percentage is 9.6 CGPA? CGPA to Marks Chart by CBSE CGPA Percentage 9.6 91.2 9.5 90.25 9.4 89.3 9.3 88.35 Is 7.5 A good CGPA? It is hard for US Universities to pick the best students from each college across the world, so above 8.5 CGPA is good enough for getting into Top Ivy League colleges. The point is, 7.5 CGPA by itself means nothing. How do I convert my GPA to CGPA? GPA is the score which is acquired within a single semester whereas CGPA is the overall score, including the GPA’s acquired in all the semesters and then concluding all the final CGPA is taken out….GPA to CGPA Conversion Table. CGPA (10 Point) GPA (4 Point) Grade (4 Point Scale) <5.0 0 – 1.3 F Is there a way to convert SGPA to percentage? Here is the useful SGPA to Percentage conversion calculator online for you to convert SGPA to percentage with ease. Just enter the SGPA and submit to know the corresponding percentage. Just copy and paste the below code to your webpage where you want to display this calculator. Which is the correct formula to convert sgpi to percentage? After a lot of effort, they came out with the following useless formula to convert SGPI into percentage – Percentage = (SGPI*7.1)+12. I tried to figure out weather this formula is correct or not, but to the best of my knowledge, even this formula seems to be incorrect. How to calculate your percentage from your CGPA? This is why we have presented the method to calculate your percentage from your CGPA. The formula is quite easy, we first convert SGPA to CGPA and take the total CGPA and multiply it by 9.5 to come to a percentage. For better understanding have a look at the table below: Which is the SGPA to CGPA calculator for KTU? The SGPA to CGPA Calculator for the esteemed KTU (APJ Abdul Kalam Technology University) is as follows- SGPA = Σ (Ci×GPi)/ΣCi
{"url":"https://www.raiseupwa.com/blog/how-is-sgpi-to-cgpa-calculated/","timestamp":"2024-11-03T02:59:49Z","content_type":"text/html","content_length":"105654","record_id":"<urn:uuid:341bbc9c-af56-4214-a075-832b8fd26e49>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00734.warc.gz"}
Theoretical Description¶ The mapie.regression.MapieRegressor class uses various resampling methods based on the jackknife strategy recently introduced by Foygel-Barber et al. (2020) [1]. They allow the user to estimate robust prediction intervals with any kind of machine learning model for regression purposes on single-output data. We give here a brief theoretical description of the methods included in the module. Before describing the methods, let’s briefly present the mathematical setting. For a regression problem in a standard independent and identically distributed (i.i.d) case, our training data All the methods below are described with the absolute residual conformity score for simplicity but other conformity scores are implemented in MAPIE (see Theoretical Description for Conformity Scores 1. The “Naive” method¶ The so-called naive method computes the residuals of the training data to estimate the typical error obtained on a new test data point. The prediction interval is therefore given by the prediction obtained by the model trained on the entire training set Since this method estimates the conformity scores only on the training set, it tends to be too optimistic and underestimates the width of prediction intervals because of a potential overfit. As a result, the probability that a new point lies in the interval given by the naive method would be lower than the target level The figure below illustrates the naive method. 2. The split method¶ The so-called split method computes the residuals of a calibration dataset to estimate the typical error obtained on a new test data point. The prediction interval is therefore given by the prediction obtained by the model trained on the training set Since this method estimates the conformity scores only on a calibration set, one must have enough observations to split its original dataset into train and calibration as mentioned in [5]. We can notice that this method is very similar to the naive one, the only difference being that the conformity scores are not computed on the calibration set. Moreover, this method will always give prediction intervals with a constant width. 3. The jackknife method¶ The standard jackknife method is based on the construction of a set of leave-one-out models. Estimating the prediction intervals is carried out in three main steps: • For each instance i = 1, …, n of the training set, we fit the regression function n leave-one-out models. • The corresponding leave-one-out conformity score is computed for each • We fit the regression function The resulting confidence interval can therefore be summarized as follows is the leave-one-out conformity score. This method avoids the overfitting problem but can lose its predictive cover when 4. The jackknife+ method¶ Unlike the standard jackknife method which estimates a prediction interval centered around the prediction of the model trained on the entire dataset, the so-called jackknife+ method uses each leave-one-out prediction on the new test point to take the variability of the regression function into account. The resulting confidence interval can therefore be summarized as follows As described in [1], this method guarantees a higher stability with a coverage level of a priori assumption on the distribution of the data 5. The jackknife-minmax method¶ The jackknife-minmax method offers a slightly more conservative alternative since it uses the minimal and maximal values of the leave-one-out predictions to compute the prediction intervals. The estimated prediction intervals can be defined as follows As justified by [1], this method guarantees a coverage level of The figure below, adapted from Fig. 1 of [1], illustrates the three jackknife methods and emphasizes their main differences. However, the jackknife, jackknife+ and jackknife-minmax methods are computationally heavy since they require to run as many simulations as the number of training points, which is prohibitive for a typical data science use case. 6. The CV+ method¶ In order to reduce the computational time, one can adopt a cross-validation approach instead of a leave-one-out approach, called the CV+ method. By analogy with the jackknife+ method, estimating the prediction intervals with CV+ is performed in four main steps: • We split the training set into K disjoint subsets • K regression functions • The corresponding out-of-fold conformity score is computed for each k(i) is the fold containing i. • Similar to the jackknife+, the regression functions As for jackknife+, this method guarantees a coverage level higher than a priori assumption on the distribution of the data. As noted by [1], the jackknife+ can be viewed as a special case of the CV+ in which 7. The CV and CV-minmax methods¶ By analogy with the standard jackknife and jackknife-minmax methods, the CV and CV-minmax approaches are also included in MAPIE. As for the CV+ method, they rely on out-of-fold regression models that are used to compute the prediction intervals but using the equations given in the jackknife and jackknife-minmax sections. The figure below, adapted from Fig. 1 of [1], illustrates the three CV methods and emphasizes their main differences. 8. The jackknife+-after-bootstrap method¶ In order to reduce the computational time, and get more robust predictions, one can adopt a bootstrap approach instead of a leave-one-out approach, called the jackknife+-after-bootstrap method, offered by Kim and al. [2]. Intuitively, this method uses ensemble methodology to calculate the By analogy with the CV+ method, estimating the prediction intervals with jackknife+-after-bootstrap is performed in four main steps: • We resample the training set with replacement (bootstrap) • These predictions are aggregated according to a given aggregation function • The sets As for jackknife+, this method guarantees a coverage level higher than 9. The Conformalized Quantile Regression (CQR) Method¶ The conformalized quantile regression (CQR) method allows for better interval widths with heteroscedastic data. It uses quantile regressors with different quantile values to estimate the prediction bounds. The residuals of these methods are used to create the guaranteed coverage value. Notations and Definitions¶ Mathematical Formulation¶ The prediction interval Note: In the symmetric method, As justified by the literature, this method offers a theoretical guarantee of the target coverage level 10. The ensemble batch prediction intervals (EnbPI) method¶ The coverage guarantee offered by the various resampling methods based on the jackknife strategy, and implemented in MAPIE, are only valid under the “exchangeability hypothesis”. It means that the probability law of data should not change up to reordering. This hypothesis is not relevant in many cases, notably for dynamical times series. That is why a specific class is needed, namely Its implementation looks like the jackknife+-after-bootstrap method. The leave-one-out (LOO) estimators are approximated thanks to a few boostraps. However, the confidence intervals are like those of the jackknife method. The residuals are no longer considered in absolute values but in relative values and the width of the confidence intervals are minimized, up to a given gap between the quantiles’ level, optimizing the parameter Moreover, the residuals are updated during the prediction, each time new observations are available. So that the deterioration of predictions, or the increase of noise level, can be dynamically taken into account. Finally, the coverage guarantee is no longer absolute but asymptotic up to two hypotheses: 1. Errors are short-term independent and identically distributed (i.i.d) 2. Estimation quality: there exists a real sequence The coverage level depends on the size of the training set and on Be careful: the bigger the training set, the better the covering guarantee for the point following the training set. However, if the residuals are updated gradually, but the model is not refitted, the bigger the training set is, the slower the update of the residuals is effective. Therefore there is a compromise to make on the number of training samples to fit the model and update the prediction intervals. Key takeaways¶ • The jackknife+ method introduced by [1] allows the user to easily obtain theoretically guaranteed prediction intervals for any kind of sklearn-compatible Machine Learning regressor. • Since the typical coverage levels estimated by jackknife+ follow very closely the target coverage levels, this method should be used when accurate and robust prediction intervals are required. • For practical applications where leave-one-out simulation is high, it is advised to adopt the CV+ method, based on out-of-fold simulations, or the jackknife+-after-bootstrap method, instead. Indeed, the methods based on the jackknife resampling approach are very cumbersome because they require to run a high number of simulations, equal to the number of training samples • Although the CV+ method results in prediction intervals that are slightly larger than for the jackknife+ method, it offers a good compromise between computational time and accurate predictions. • The jackknife+-after-bootstrap method results in the same computational efficiency, and offers a higher sensitivity to epistemic uncertainty. • The jackknife-minmax and CV-minmax methods are more conservative since they result in higher theoretical and practical coverages due to the larger widths of the prediction intervals. It is therefore advised to use them when conservative estimates are needed. • The conformalized quantile regression method allows for more adaptiveness on the prediction intervals which becomes key when faced with heteroscedastic data. • If the “exchangeability hypothesis” is not valid, typically for time series, use EnbPI, and update the residuals each time new observations are available. The table below summarizes the key features of each method by focusing on the obtained coverages and the computational cost. Method Theoretical coverage Typical coverage Training cost Evaluation cost Naïve No guarantee 1 Split 1 Jackknife No guarantee CV No guarantee Conformalized quantile regressor Here, the training and evaluation costs correspond to the computational time of the MAPIE .fit() and .predict() methods. [1] Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. “Predictive inference with the jackknife+.” Ann. Statist., 49(1):486–507, February 2021. [2] Byol Kim, Chen Xu, and Rina Foygel Barber. “Predictive Inference Is Free with the Jackknife+-after-Bootstrap.” 34th Conference on Neural Information Processing Systems (NeurIPS 2020). [3] Yaniv Romano, Evan Patterson, Emmanuel J. Candès. “Conformalized Quantile Regression.” Advances in neural information processing systems 32 (2019). [4] Chen Xu and Yao Xie. “Conformal Prediction Interval for Dynamic Time-Series.” International Conference on Machine Learning (ICML, 2021). [5] Jing Lei, Max G’Sell, Alessandro Rinaldo, Ryan J Tibshirani, and Larry Wasserman. “Distribution-free predictive inference for regression”. Journal of the American Statistical Association, 113 (523):1094–1111, 2018.
{"url":"https://mapie.readthedocs.io/en/stable/theoretical_description_regression.html","timestamp":"2024-11-04T16:39:06Z","content_type":"text/html","content_length":"52744","record_id":"<urn:uuid:a9fcf37f-17ea-4f8f-80b2-3dff6a3ca00c>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00633.warc.gz"}