CA State Elevator Mechanic Study Guide | Latest 2024 Version

Ladder must be installed if pit is deeper than... - 35" (ASME A17.1-2004, 2.2.4.2)
Ladder shall not extend more than ___ above the sill. - 48" (4') (ASME A17.1-2004, 2.2.4.2)
Rungs (cleats or steps) of ladder should be a minimum of ___ wide. - 16" (ASME A17.1-2004, 2.2.4.2)
Siderails shall have a clear distance of not less than ___ from their centerline to the nearest permanent object. - 4.5" (ASME A17.1-2004, 2.2.4.2)
Ladder access shall not be permitted if pit is deeper than ___ below the sill of the access door. - 120" (10') (ASME A17.1-2004, 2.2.4.2)
Illumination of pit and platform should be ___. - 10 FC (foot-candles) (ASME A17.1-2004, 2.2.5.1)
Where should the light switch be located in the pit? - Where accessible from the pit access door. (ASME A17.1-2004, 2.2.6.1)
When the car is on a compressed buffer, what is the clearance between the car and the pit? - 24" (2'). With the car on its fully compressed buffer, a 24" (2') clearance must remain; no part of the car should be touching the pit. (ASME A17.1-2004, 2.4.1.1)
Where vertical clearances are less than ___, that area should be clearly marked on the pit floor. - 24" (2')
Markings should be ___ diagonal red and white stripes reading "Danger low clearance." - 4"
When oil buffers are used, bottom runby should be no less than ___. - 6"
When spring or solid buffers are used, bottom runby should be no less than ___. - 6"
What is the maximum bottom runby on a car? - 24" (2')
Buffer data plate letters shall be no less than ___ in height. - 1"
What is the maximum bottom runby on a counterweight? - 35"
What is the minimum sill-to-sill clearance (side guides)? - 1/2"
What is the minimum clearance for corner guides? - 7/8" (0.875")
What is the maximum clearance for side guides and corner guides? - 1.25" (1-1/4")
What is the running clearance between cars? - 2"
What is the clearance between the car sill and the hoistway fascia or enclosure for vertical doors? - 7-1/2"
Only machinery used in conjunction with the function of the elevator shall be permitted in the ___ room. - machine
A clear path of ___ should be provided to all components that require maintenance in the machine room. - 18"
Access doors to the machine room and overhead machinery spaces shall be a minimum width of ___ and a minimum height of ___. - Minimum width 29.5"; minimum height 80"
Traction elevators usually come with at least how many hoist cables? - At least 4
Hydraulic elevators are either... - direct acting or holeless.
What is the maximum size of piping in the hoistway and machine room? - 4"
What is the purpose of brushes on a DC motor? - To reverse current flow direction.
On a 3-phase motor, how many degrees apart are the poles set? - 120 degrees
Are freight car platforms covered by the L.A. code? - Yes, for size, stress, thickness and safety features.
What two basic types of door openings are there? - Manual and automatic
What is the maximum rated speed for an instantaneous safety? - 150 fpm
What is the required foot-candle illumination in the machine room? - 10 FC
On what type of machine does reshackling occur most frequently? - Overhead drum machine
What is the most frequent cause of rope lay during reshackling? - Incorrect or inadequate sizing
What is the maximum speed of an escalator? - 125 fpm
What is the weight of the counterweight (CWT) when the car weighs 6,000 lbs and the rated load is 4,000 lbs? - 40% of the rated load added to the car weight (4,000 x 0.40 = 1,600 lbs; 6,000 + 1,600 = 7,600 lbs). A quick scripted version of this calculation follows at the end of this guide.
What is the sheave diameter of a compensating weight? - 32 times the diameter of the cables.
What is the maximum angle of incline of an escalator? - 30 degrees
How far apart should metal straps be? - Within 3 feet of boxes or fittings
What is the breaking strength of 1/2-inch and 5/8-inch wire rope? - 1/2 inch: 14,500 lbs; 5/8 inch: 23,500 lbs
What is the most common cause of accidents for a mechanic preparing for car-top inspections? - Not observing running clearance.
Which is the most flexible wire rope? - Tiller rope
What is the maximum voltage allowed to run to the car cab? - 300 volts
What is the most common cause of rope failure? - Lack of lubrication
How do 8x19 and 6x19 cables compare? - 8x19 cable is more flexible but not as strong; 6x19 cable is less flexible but stronger.
What is the minimum height of the power guard cable in the machine room? - Guards should extend from a point 12 ft above the machine room floor.
Define rated speed. - The speed at which the elevator is designed to operate with rated load in the up direction.
Where are the thrust bearings located in a worm gear machine? - On the opposite end of the worm gear shaft.
Where are thrust bearings used? - At each end of the elevator hoist motor.
How far apart should metal straps be on rigid conduit? - Within 3 ft of an outlet box or J-box, and supported every 10 ft thereafter.
How far apart should metal straps be on flexible metal conduit? - Within 12 inches of each side of a J-box or outlet box, and at intervals not exceeding 54 inches.
What is the minimum allowable size of an emergency escape hatch? - 400 sq inches, with no side less than 16" (e.g., 16" x 25").
What is the maximum stopping pressure of automatic self-closing doors? - 30 lbs.
What is the tensile strength of 1/2-inch wire rope? - 14,500 lbs
What is the most flexible type of wire rope? - Tiller
What is the maximum allowable voltage in the hoistway? - 300 volts
Metal conduit shall be supported to a fixed foundation within what distance of every box? - 3 ft
How is the diameter of a compensating sheave determined? - 32 x diameter of rope
How is the diameter of hoisting sheaves determined? - 40 x diameter of rope
Traction elevators with a rated speed of 250 fpm shall have what type of buffers? - Oil buffers
What are the types of buffers and their applications? - Solid buffers: max speed 50 fpm; spring buffers: max speed 200 fpm; oil buffers: max speed unlimited.
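As an aside (not part of the original guide), the counterweight rule quoted above is easy to script. A minimal Python sketch using the guide's worked numbers:

import sys

def counterweight_lbs(car_weight_lbs: float, rated_load_lbs: float,
                      overbalance: float = 0.40) -> float:
    # Counterweight = car weight + overbalance fraction of the rated load.
    return car_weight_lbs + overbalance * rated_load_lbs

# The worked example from the guide: 6,000 lb car, 4,000 lb rated load.
print(counterweight_lbs(6000, 4000))  # 7600.0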
{"url":"https://www.docsity.com/en/docs/ca-state-elevator-mechanic-study-guide-or-100percent-correct-answers-or-verified-or-latest-2024-ver/11673458/","timestamp":"2024-11-02T14:24:38Z","content_type":"text/html","content_length":"244591","record_id":"<urn:uuid:b2a924eb-b021-4695-a3ec-6a1a8be76784>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00382.warc.gz"}
5 Examples Of Positive Acceleration: Know The Explanation

Positive acceleration occurs when the acceleration causes the velocity of an object to increase in magnitude, i.e. the speed of the object increases. Some positive acceleration examples are given below. Let us take a look at these positive acceleration examples in detail.

1. Launching a rocket: When a rocket is launched, a huge amount of fuel is required to generate the energy that accelerates the rocket to a very high speed. Since this process involves an increase in the magnitude of the rocket's velocity, it is a form of positive acceleration. Without positive acceleration, it would be impossible to launch a rocket or a missile. This type of positive acceleration is variable, i.e. it changes with time.

2. Accelerating a vehicle: Vehicles such as cars, trucks, bikes, and trains have built-in accelerators for increasing the magnitude of the velocity of the vehicle. This is the most common example of positive acceleration. There is an accelerator in every vehicle that is used to increase its speed. The accelerator of a car must be used carefully to avoid accidents. This type of positive acceleration is variable, i.e. it changes with time.

3. A free-falling object: A free-falling object is an object that experiences acceleration due to gravity. This is a form of constant positive acceleration, i.e. it does not change with time. The magnitude of the velocity, i.e. the speed of the falling body, increases at a constant rate as it travels downwards towards the ground. The body attains its maximum speed when it hits the ground. [Image: a free-falling object accelerating under gravity; source: Waglione, CC BY-SA 3.0]

4. Pedaling a bicycle: When we pedal a bicycle we increase its speed. The magnitude of the increase in speed depends on how fast we pedal. In this process, mechanical energy is transformed into kinetic energy. This is also a form of positive acceleration, as the speed increases with pedaling. In this case the acceleration is variable, i.e. it changes with time. [Image: pedaling a bicycle; source: anonymous, Lance-Armstrong-TdF2004, CC BY-SA 3.0]

5. Rowing a boat: Rowing a boat, just like pedaling a bicycle, increases its speed. The amount of increase in the speed of the boat depends on how swiftly we row. In this case, mechanical energy is again transformed into kinetic energy. This is a form of positive acceleration, as the speed of the boat increases with rowing. Here the acceleration is variable, i.e. it changes with time. [Image: Ormurin Langi, a 26-foot Faroese wooden rowing boat; source: Ólavur Frederiksen, June 26, 2019, CC BY-SA 4.0]

6. Airplane takeoff: Before taking off, an airplane makes a long run on the runway. During the course of the run, the airplane increases its speed for take-off. Once the speed reaches a certain level, the flight takes off. This is a form of positive acceleration that is variable, i.e. it changes with time. [Image: an airplane running on the runway before taking flight.]
[Image: Tegel Airport; source: Matti Blume, CC BY-SA 4.0]

These are some positive acceleration examples that we encounter in our daily lives. To understand more about positive acceleration, consider reading the following paragraphs.

What is positive acceleration? We encounter applications of positive acceleration in our daily lives in several ways. Positive acceleration refers to the kind of acceleration that occurs when the magnitude of the velocity, i.e. the speed of an object, increases with time. For positive acceleration to occur, the external force experienced by the object should be in the direction of propagation of the object.

How can we calculate positive acceleration? There are several formulas that can be used to calculate the acceleration of an object. To calculate positive acceleration, we need to know the external force acting on the object (F) and the mass of the object (m), or the change in velocity of the object in unit time. Mathematically, the relevant relations are

F = ma
v² − u² = 2aS
v = u + at
S = ut + ½at²

Instantaneous acceleration is given by

a = dv/dt = d²s/dt²

In the case of positive acceleration, the value of a must be a positive number. Here, v is the final velocity of the object, u is the initial velocity of the object, and S is the displacement of the object. (A short numerical check of these relations appears at the end of this article.)

What is negative acceleration? We encounter applications of negative acceleration in our daily lives in several ways. Negative acceleration refers to the kind of acceleration that occurs when the magnitude of the velocity, i.e. the speed of an object, decreases with time. For negative acceleration to occur, the external force acting on the object should be in the direction opposite to the direction of propagation of the object.

We hope this post could provide you with the necessary information regarding positive acceleration examples.

Hi, I am Sanchari Chakraborty. I have done a Master's in Electronics. I always like to explore new inventions in the field of Electronics. I am an eager learner, currently invested in the field of Applied Optics and Photonics. I am also an active member of SPIE (International society for optics and photonics) and OSI (Optical Society of India). My articles are aimed at bringing quality science research topics to light in a simple yet informative way. Science has been evolving since time immemorial, so I try my bit to tap into that evolution and present it to the readers.
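As promised above, here is a quick numerical check of the constant-acceleration relations (a sketch added here, not part of the original article; the values u = 2 m/s, a = 3 m/s², t = 4 s are assumed):

# Numerical check of the constant-acceleration relations:
#   v = u + a*t,  S = u*t + 0.5*a*t**2,  v**2 - u**2 = 2*a*S
u, a, t = 2.0, 3.0, 4.0       # assumed initial speed, acceleration, time

v = u + a * t                 # final speed: 14.0 m/s
S = u * t + 0.5 * a * t**2    # displacement: 32.0 m

# The third relation should hold automatically:
assert abs((v**2 - u**2) - 2 * a * S) < 1e-9
print(v, S)                   # 14.0 32.0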
{"url":"https://techiescience.com/positive-acceleration-example/","timestamp":"2024-11-07T19:03:49Z","content_type":"text/html","content_length":"106901","record_id":"<urn:uuid:5f5aae14-c7a9-4e9a-8020-2988370c5272>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00260.warc.gz"}
The DHS Program User Forum: Dataset use in Stata » Questions about calculating stunting rates in Stata

Re: Questions about calculating stunting rates in Stata [message #262 is a reply to message #261] Thu, 04 April 2013 14:35

Here is a response from one of our DHS Stata experts, Tom Pullum, that should answer your questions.

Your problem is that you were using the BR file, but DHS uses the PR file for this and the other child nutrition indicators. The PR file includes hc70 for all children under five in the household. The BR file includes hw70 for children under five in the household whose mother was also in the household and was eligible for the survey of women. This is a subset of the children in the PR file.

If you open the BR file in Stata and copy the following lines into the command window, you will get what you were doing:

* Use the following on the 2007 Bangladesh BR file
* BDBR51FL.dta
codebook hw70
tab hw70 if hw70>9990,m
tab hw70 if hw70>9990,m nolabel
gen HAZ=hw70
replace HAZ=. if HAZ>=9996
histogram HAZ
gen stunted=.
replace stunted=0 if HAZ ~=.
replace stunted=1 if HAZ<-200
tab stunted
* 41.70% stunted (2210/5300)
* This number can be confirmed with a regression, no covariate.
* First without weights
regress stunted
* unweighted percent stunted is 41.70%
* Repeat the regression with weights
regress stunted [pweight=v005]
* weighted percent stunted is 42.96%

However, if you open the PR file and copy the following lines into the command window, you will replicate the number in the report and in StatCompiler:

* Use the following on the 2007 Bangladesh PR file
* BDPR51FL.dta
codebook hc70
tab hc70 if hc70>9990,m
tab hc70 if hc70>9990,m nolabel
gen HAZ=hc70
replace HAZ=. if HAZ>=9996
histogram HAZ
gen stunted=.
replace stunted=0 if HAZ ~=.
replace stunted=1 if HAZ<-200
tab stunted
* 41.92% stunted (2320/5535)
* This number can be confirmed with a regression, no covariate.
* First without weights
regress stunted
* unweighted percent stunted is 41.92%
* Repeat the regression with weights
regress stunted [pweight=hv005]
* weighted percent stunted is 43.24%

I am using a trick that you may not be aware of, linear regression without a covariate, to get the means of hw70 and hc70, unweighted or weighted. A command such as "regress y" will give just the intercept, which will be the mean of y. "regress y [pweight=hv005]" will give the weighted mean of y. Here the y variable is binary, so the mean of y is the proportion with y=1, and if multiplied by 100 you get the percentage with y=1.

Let me know if you have other questions.
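As an aside not from the thread: outside Stata, the no-covariate regression trick is simply an (optionally weighted) mean, which can be checked directly, for example with numpy (toy values below, not DHS data):

import numpy as np

# stunted: 0/1 indicator per child; weight: sampling weight (e.g. hv005)
stunted = np.array([1, 0, 0, 1, 1])         # toy data
weight = np.array([1.2, 0.8, 1.0, 1.1, 0.9])

unweighted = stunted.mean()                      # intercept of "regress y"
weighted = np.average(stunted, weights=weight)   # intercept with pweights
print(100 * unweighted, 100 * weighted)          # percentages stunted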
{"url":"https://userforum.dhsprogram.com/index.php?t=tree&th=137&mid=262&&rev=&reveal=","timestamp":"2024-11-15T02:22:20Z","content_type":"text/html","content_length":"23330","record_id":"<urn:uuid:2e939ba1-ce7e-4f6e-84ee-b61cf6e205ea>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00884.warc.gz"}
Complex Numbers

During my many years teaching high school math, I developed a graphical approach to the teaching of complex numbers, geared to Algebra 2 students. See a fun kinesthetic introduction to this, by Michael Pershan and Max Ray, here.

The basic idea is to start by defining complex numbers as (corresponding to) points in the plane, which can be thought of in rectangular form (with normal Cartesian coordinates) or in polar form. These numbers have their own arithmetic: addition works like vector addition, and multiplication is defined as multiplying the r's and adding the θ's. We verify that arithmetic for numbers on the x-axis (the real numbers) still works as expected. We see that in this system, -1 has a square root, which we call i.

That the approach makes sense need not be proved formally at the Algebra 2 level, but the proof is accessible to students in a later elective course, such as Precalculus, or my Space course. (The proof is somewhat involved, and thus requires some mathematical maturity, but the only prerequisite is an understanding of similar triangles.) In Algebra 2 it is sufficient to confirm in several cases that the distributive rule in a + bi form yields the same result as the geometric multiplication in polar form.

The advantages of this approach are many.
- It provides some review and practice of basic trig.
- It gives a visual, concrete interpretation to complex numbers.
- It lays a foundation for the formulas for the sine and cosine of a sum, and the double angle formulas.
- It prepares students for the calculation of images of points under geometric transformations, including the rotation matrix. In fact, I found this approach to matrices to be far more effective than introducing them as a way to solve systems of equations. Read about this on my blog.

These online games help provide a solidly intuitive introduction to complex number arithmetic. (It's easy to cheat to improve your score, but that wouldn't do much to help you understand complex number arithmetic, now, would it.) From most basic to most advanced:
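As a quick check of the "multiply the r's, add the θ's" rule discussed above, here is a short Python sketch (mine, not part of the original page) comparing polar multiplication with ordinary a + bi arithmetic:

import cmath

# Two complex numbers in rectangular form.
z1, z2 = 1 + 1j, 0 + 2j

# Rectangular multiplication (the distributive rule with i*i = -1).
w_rect = z1 * z2

# Polar multiplication: multiply the r's, add the thetas.
r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
w_polar = cmath.rect(r1 * r2, t1 + t2)

print(w_rect, w_polar)   # both give (-2+2j), as the geometric definition predicts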
{"url":"https://www.mathed.page/alg-2/complex/index.html","timestamp":"2024-11-08T01:36:57Z","content_type":"text/html","content_length":"10932","record_id":"<urn:uuid:0583d397-4feb-45e4-a666-528fa644b6e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00132.warc.gz"}
Improved empirical wavelet transform (EWT) and its application in non-stationary vibration signal of transformer | Scientific Reports

Scientific Reports volume 12, Article number: 17533 (2022)

The resonant frequency of the transformer contains information related to its structure. It is easier to identify the resonance frequency in the vibration signal during the hammering test and power-on than during operation of the transformer, because the vibration caused by the load current need not be considered during the hammering test and power-on. Therefore, an analysis method with simple calculation, fast calculation speed and easy real-time monitoring is needed to deal with these two non-stationary vibrations. Vibration monitoring makes it possible to track the health status of a transformer in real time, improve the reliability of the power supply and give early warning in the early stage of faults. A new frequency domain segmentation method is proposed in this paper. This method can effectively process the vibration signal of a transformer and identify its resonant frequency. Eleven different load states are set on the transformer. The method proposed in this paper can extract the resonant frequency of the transformer from the hammering test signal. Compared with the original empirical wavelet transform method, this method can divide the frequency domain more effectively and has higher time–frequency resolution, and the running time of the modified method is shortened from 80 to 2 s. The universality of this method is proved by experiments on three different types of transformers.

Due to rising power supply stability requirements, there is more and more research on transformer health assessment. Common fault diagnosis methods for transformers include regular inspection, dissolved gas analysis1, vibration monitoring2,3, partial discharge monitoring4, ultrasonic measurement5, frequency–response analysis6 and other methods. Compared with the other methods, vibration measurement has the advantages of convenient installation, low environmental interference and low cost. It is applicable to almost all types of transformers. The vibration of the transformer mainly comes from magnetostriction and magnetic forces. Through real-time monitoring of transformer vibration, the relationship between abnormal vibration and internal faults of the transformer can be established, which helps to arrange preventive maintenance in time and improve the service life of the transformer. For example, when a tightening bolt of the transformer core is loose, that is, when the air gap in the iron core changes, the vibration of the transformer increases significantly7; in addition, loose bolts also reduce the ability of the transformer to resist external shocks. The mechanical performance degradation of a transformer was tracked through multi-channel vibration measurement in8. In9, vibration data from the transformer on-load tap changer are used to identify early equipment faults, and a self-organizing map (SOM) is used to evaluate the status of the on-load tap changer online.
In10, a method for monitoring winding deformation through transformer tank vibration was studied. This method takes into account the vibration generated by different parts of the transformer, and analyzes the influence of temperature on vibration generation, superposition and transmission. The vibration frequency of a transformer depends on its resonance frequencies and the external excitation. The external excitation mainly includes the voltage, the current and the working environment; these factors can be measured during the operation of the transformer. Resonance frequency is the internal factor that determines the vibration frequency of the transformer. It is determined by the transformer structure and does not change with the external excitation. It can be obtained by a hammering test. The closer a vibration component is to a resonance frequency, the more likely it is to cause transformer resonance. Resonance is very harmful: it leads to violent vibration of the transformer, resulting in the loosening of bolts and the falling off of cushion blocks. In addition, the structure can be tracked by monitoring the resonance frequencies of the transformer, and fault diagnosis can be realized by analyzing changes in the transformer structure. In paper11, the resonance frequency of a transformer was calculated by a pseudospectral method. The relationship of the transformer vibration frequency with the voltage and current harmonics was derived in paper12. The influence of vibration on the operation of large transformers, and vibration reduction measures to avoid resonance under electromagnetic force excitation, were studied in13, and a prototype power transformer with very low noise was developed, with a full load capacity of 200 MVA and a noise level below 65 dB. A nonlinear model of the transformer was built with a Fourier neural network composed of nonlinear elements and a linear dynamic block, and the vibration prediction and system parameter extraction were verified by testing on several power transformers.

There are many analytical methods for processing vibration data. Fourier analysis is a simple and effective analysis method, but the Fourier transform cannot display time and frequency information at the same time15. Paper16 proposed a simplified permutation entropy algorithm used to calculate the vibration features of a converter transformer. Compared with the traditional permutation entropy algorithm, this algorithm has the advantages of stable classification, high flexibility and fast computing speed. Wavelet transform methods17,18 also have many applications in transformer vibration monitoring. A complex Morlet wavelet was used to process the free vibration data of a transformer in17; the improved Crazy Climber algorithm was used to extract the wavelet ridges of the time–frequency spectrogram, and the first four resonance frequencies and damping ratios of the transformer winding were obtained. A new method for mechanical fault diagnosis of transformer cores and windings based on frequency-band energy distribution was proposed in18; transformer mechanical faults are diagnosed online from the energy distribution in each frequency band of the real-time vibration data. An improved empirical mode decomposition algorithm was applied to fault index extraction from vibration data of a transformer on-load tap changer19.
EWT was first introduced by Professor Jérôme Gilles20. It is equivalent to a series of band-pass filter combinations: the original signal is decomposed into several signal components in different frequency bands. Papers21,22 introduced the application of EWT to seismic data, and22 proposed an improved EWT method based on scale-space representation. Paper23 introduced the application of EWT in two-dimensional image recognition. An improved adaptive Morlet wavelet transform and its application to gearbox vibration data were introduced in paper24. EWT also has some applications to transformers. In paper25, a fault diagnosis method based on EWT and the salp swarm algorithm was proposed to diagnose different fault states of transformers. In paper26, the EWT method was combined with multi-scale entropy, and the computational cost was reduced by selecting the wavelet components highly correlated with the original signal. EWT is selected from the many non-stationary signal processing methods mainly because it can improve the resolution of a target frequency component by flexibly setting the frequency domain segmentation boundaries, and because EWT frequency domain segmentation is based on the Fourier transform: the two methods partially overlap, so the signal processing proceeds progressively, which reduces the computational burden. More importantly, by setting a boundary near an abnormal target frequency component according to the Fourier result, the target component can be analyzed in depth to determine the time at which it changes, which is very important for transformer fault diagnosis. The spectrum division method proposed in this paper combines factors such as frequency domain extrema and the envelope, so it can not only track and analyze the target components, but also reasonably divide the spectrum range. Different from paper26, this paper does not combine other methods, but directly improves the spectrum division principle of EWT, simplifying the calculation process and improving adaptability. Compared with the traditional method based on the scale plane, it eliminates the step of creating the scale plane, greatly improves the calculation speed, and is more suitable for real-time analysis and monitoring.

According to the location of vibration, the vibration of a transformer can be divided into iron core vibration, winding vibration and cooling equipment vibration. According to the determinants of vibration frequency, it can be divided into vibration determined by resonance frequency and vibration determined by excitation frequency. The vibration of the transformer winding is caused by the interaction between the current and the leakage flux in the winding; the vibration force on the coil, F_w, is proportional to the square of the current I, as shown in Eq. (1):

F_w ∝ I² (1)

The vibration of the transformer core is caused by magnetostriction and magnetic force; the vibration force on the iron core, F_c, is proportional to the square of the voltage U, as shown in Eq. (2):

F_c ∝ U² (2)

The current and voltage in the transformer are sine waves mixed with a small amount of harmonics, which can be expressed as Eqs. (3) and (4), where I_0 and U_0 represent the DC components of current and voltage, and I_n and U_n represent the fundamental and harmonic components of current and voltage:

I(t) = I_0 + Σ_{n=1}^{N} I_n cos(nωt + φ_n) (3)
U(t) = U_0 + Σ_{n=1}^{N} U_n cos(nωt + θ_n) (4)

Combining Eq. (3) with Eq. (1) gives Eq. (5), whose expansion contains a constant part, a part coupling the DC component with each harmonic, and a third part made of products of pairs of harmonics. The third part of Eq. (5) can be expressed as Eq. (6), where p and q index a harmonic pair, running over the C(N,2) combinations.
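The bodies of Eqs. (5)-(7) did not survive extraction; the step they describe is, however, the standard product-to-sum identity. As a reconstruction of that step (mine, based on the surrounding text) for one harmonic pair (p, q):

$$I_p\cos(p\omega t+\varphi_p)\cdot I_q\cos(q\omega t+\varphi_q)=\frac{I_p I_q}{2}\Big[\cos\big((p-q)\omega t+\varphi_p-\varphi_q\big)+\cos\big((p+q)\omega t+\varphi_p+\varphi_q\big)\Big]$$

Squaring I(t) therefore produces a DC term, terms at 2nω from each squared harmonic, and terms at (p + q)ω and (p − q)ω from each cross pair, which is exactly the frequency inventory summarized in Table 1.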
According to the product-to-sum rules for trigonometric functions, the second part of Eq. (6) can be expressed as Eq. (7), a sum of cosine terms at the sum and difference frequencies. The transformer vibration frequency will therefore include a DC component, the harmonic frequency components, the double frequency of each harmonic component, the sum of any two harmonic components and the difference of any two harmonic components, as shown in Table 1. When the transformer works, the current and voltage are close to ideal sine waves with low harmonic content, and it can be seen from Table 1 that the amplitudes of the individual excitation forces are of the same order of magnitude, so the vibration amplitude caused by the harmonics mainly depends on the position of the resonance frequency points; this also makes it possible to infer the distribution of resonance frequencies by observing the distribution of vibration frequencies.

During the operation of the transformer, the vibration amplitude and vibration velocity are very small, so the system is a micro-amplitude system. Therefore, the transformer vibration system can be regarded as a linear system by taking only the first-order term of the Taylor series and omitting the higher-order terms. The equation of motion of forced vibration of a single-degree-of-freedom linear system is shown in Eq. (8), where m, c and k are the mass, damping coefficient and elastic coefficient respectively:

m·x″ + c·x′ + k·x = F (8)

Equation (8) can be rewritten as

x″ + 2ξω_n·x′ + ω_n²·x = F/m (9)

The parameters in Eq. (9) are defined in Eqs. (10) and (11), where ω_n is the resonance frequency and ξ is the damping ratio:

ω_n = √(k/m) (10)
ξ = c/(2√(mk)) (11)

Because the transformer system is approximately linear, the vibrations under the individual excitations satisfy the superposition theorem. For convenience of analysis, assume the excitation force F = f_n·cos(ωt). Substituting into Eq. (9) gives the response of Eq. (12): a transient term of the form X·e^(−ξω_n·t)·cos(ω_d·t + φ), with ω_d = ω_n·√(1 − ξ²), plus a steady-state term at the excitation frequency ω. It can be seen from Eq. (12) that the first term gradually decays to zero under the action of the transformer damping. The waveform at a specific frequency in the hammering test can be extracted by the improved EWT, and the relevant parameters (X, ξ, ω_n) can be obtained by fitting the waveform envelope. Figure 1 shows the attenuation process of the 37 Hz frequency component in the hammering test, which corresponds to the 37 Hz resonance frequency component in Fig. 6. The values of the relevant parameters obtained by parameter fitting are shown in Eq. (15).

For the second part of Eq. (12), the vibration of the transformer under an excitation force of a given frequency is still a response at that frequency, but the vibration amplitude is affected by the transformer structure coefficients and the resonance frequency. Let p be as shown in Eq. (16); p is a nonnegative variable, and the smaller p is, the greater the vibration amplitude:

p = (ω_n² − ω²)² + (2ξω_n·ω)² (16)

p is a quartic polynomial in ω, so p′ is a cubic polynomial; setting p′ = 0 yields the minimizer (Eqs. (17)-(19))

ω₂ = ω_n·√(1 − 2ξ²)

Therefore p has its minimum value at ω₂; that is, in Eq. (12), the closer ω is to ω₂, the greater the amplitude of x(t), and x(t) has the maximum vibration amplitude when ω = ω₂.

EWT is essentially a wavelet transform that can flexibly set the segmentation boundaries20. The empirical scaling function and the empirical wavelets are given by Eqs. (20) and (21), reconstructed here from Gilles20 with τ_n = γω_n:

φ̂_1(ω) = 1 for |ω| ≤ ω_1 − τ_1; cos[(π/2)·β((|ω| − ω_1 + τ_1)/(2τ_1))] for ω_1 − τ_1 ≤ |ω| ≤ ω_1 + τ_1; 0 otherwise (20)

ψ̂_n(ω) = 1 for ω_n + τ_n ≤ |ω| ≤ ω_{n+1} − τ_{n+1}; cos[(π/2)·β((|ω| − ω_{n+1} + τ_{n+1})/(2τ_{n+1}))] for ω_{n+1} − τ_{n+1} ≤ |ω| ≤ ω_{n+1} + τ_{n+1}; sin[(π/2)·β((|ω| − ω_n + τ_n)/(2τ_n))] for ω_n − τ_n ≤ |ω| ≤ ω_n + τ_n; 0 otherwise (21)
The function β(x) shall meet the following conditions (Eq. (22)): β(x) = 0 for x ≤ 0, β(x) = 1 for x ≥ 1, and β(x) + β(1 − x) = 1 for x ∈ [0, 1]. With τ_N = γω_N, if γ < min[(ω_{N+1} − ω_N)/(ω_{N+1} + ω_N)], we get a tight frame. The signal can then be reconstructed from the approximation and detail coefficients, as in Eq. (23):

f(t) = W_f(0, ·) ∗ φ_1(t) + Σ_n W_f(n, ·) ∗ ψ_n(t) (23)

From the previous analysis, it can be seen that the focus of EWT is determining the frequency domain segmentation boundaries. The processing of different types of signals needs to take their particular characteristics into account. The components of transformer vibration data have the following characteristics.

During transformer operation, the main components in the frequency domain are usually integer multiples of 50 Hz (the electrical frequency is 50 Hz). Under normal circumstances, the 100 Hz component should have the largest amplitude, but due to the influence of the resonance frequencies, the multiple of 50 Hz closest to a resonance frequency point may have the largest amplitude instead.

When the transformer is not working, the amplitude of environmental interference is below 0.003 m/s², and the corresponding value in the Fourier transform result is 0.003 × 16,384/2 ≈ 25 (the sampling frequency of the vibration signal is 16,384 Hz). In the analysis of the Fourier result of the vibration signal, components below 25 can be ignored.

Continuous high-amplitude components may occur during the hammering test or transformer state switching. These components increase the difficulty of frequency domain segmentation.

Some low-amplitude frequency components will appear near the high-amplitude components. The influence of these low-amplitude components should be ignored when dividing the frequency domain.

In the process of transformer fault diagnosis, the high-amplitude components, the newly emerging frequency components and the components with large changes are important; the latter two are collectively referred to as abnormal frequency components. It is necessary to set boundaries near these frequency components.

Figure 2 shows the spectrum of transformer hammering vibration. The blue vertical line in the figure represents the maximum component in the spectrum, the red triangles represent the ideal division results of the spectrum, and each red triangle represents a division area. The spectrum during state switching is similar to that of the hammering test. When dividing this kind of spectrum, attention should be paid to the following points:
- The low-amplitude components near a high-amplitude component correspond to area 1 in Fig. 2.
- Adjacent components with similar amplitude correspond to area 2 in Fig. 2.
- The low-amplitude components correspond to area 3 in Fig. 2.
- The division boundary cannot fall on the maximum component.
- The boundaries in high-amplitude regions should be denser, such as the 400-800 Hz range in Fig. 2, and the boundaries in low-amplitude regions should be sparser, such as the 900-1100 Hz range.

[Figure 2: Spectrum of transformer hammering test.]
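To make Eqs. (20)-(22) concrete, here is a minimal sketch (mine, not the authors' code) of empirical-wavelet band splitting via smooth FFT masks. It uses the common polynomial choice β(x) = x⁴(35 − 84x + 70x² − 20x³), which satisfies the conditions above, and the (1 ± γ)ω form of the transition zones (equivalent to τ_n = γω_n); the boundary frequencies in the usage example are placeholders:

import numpy as np

def beta(x):
    # Meyer-style transition: 0 for x<=0, 1 for x>=1, smooth in between.
    x = np.clip(x, 0.0, 1.0)
    return x**4 * (35 - 84*x + 70*x**2 - 20*x**3)

def ewt_bands(signal, fs, boundaries, gamma=0.05):
    # Split `signal` into empirical-wavelet bands.
    # boundaries: segmentation frequencies in Hz (e.g. from the method above).
    # Returns one band-limited time signal per spectral segment.
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0/fs)
    spec = np.fft.rfft(signal)
    edges = [0.0] + sorted(boundaries) + [fs/2]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(freqs)
        # flat pass-band between the transition zones
        mask[(freqs >= lo*(1+gamma)) & (freqs <= hi*(1-gamma))] = 1.0
        # smooth rising edge around lo (skipped for the first, low-pass band)
        if lo > 0:
            z = (freqs - lo*(1-gamma)) / (2*gamma*lo)
            rise = np.sin(0.5*np.pi*beta(z))
            sel = (freqs > lo*(1-gamma)) & (freqs < lo*(1+gamma))
            mask[sel] = rise[sel]
        # smooth falling edge around hi
        z = (freqs - hi*(1-gamma)) / (2*gamma*hi)
        fall = np.cos(0.5*np.pi*beta(z))
        sel = (freqs > hi*(1-gamma)) & (freqs < hi*(1+gamma))
        mask[sel] = fall[sel]
        bands.append(np.fft.irfft(spec*mask, n))
    return bands

# Toy usage: 16,384 Hz sampling as in the paper; boundaries are placeholders.
fs = 16384
t = np.arange(fs) / fs
x = np.cos(2*np.pi*100*t) + 0.5*np.cos(2*np.pi*350*t)
low, mid, high = ewt_bands(x, fs, boundaries=[200.0, 500.0])

Band-limited signals like these are the kind of input from which envelopes and decay rates (cf. Fig. 1 and Eq. (12)) can then be fitted.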
Step 3: The dividing boundaries are determined according to the envelope of the main frequency components that remain after removing the interference signals, and the abnormal components from Step 1 need to be analyzed with emphasis.

Step 4: The segmentation boundaries are checked to prevent them from falling on the main components of the frequency domain; otherwise these components would be attenuated in the subsequent analysis.

[Figure 3: Improved frequency domain segmentation process.]

We use Eq. (24) to remove the low-amplitude components around high-amplitude components, and Eq. (25) to deal with multiple adjacent high-amplitude components. f(a) and f(b) represent the amplitudes of two components in the frequency domain, and a and b represent the frequencies of the two components. Equations (24) and (25) are two simple and effective methods to remove interference components, and the selection of the coefficients in them is very important. The transformer used in this paper is a dry-type transformer with a capacity of 100 kVA. During the experiment, kp and r are taken as 20 and 100 respectively, and kq and s are taken as 0.15 and 30 respectively. These four parameters are empirical parameters obtained in the experimental process. Because the working frequency of the transformer is 50 Hz, for a maximum frequency component there is 50 Hz on each side, giving a frequency bandwidth of 100 Hz, i.e. r = 100. When considering adjacent frequency components, the bandwidth is taken as half of 50 Hz, 50/2 = 25, i.e. s = 25; r and s are two parameters related only to the power supply frequency of the transformer. The determination of kp and kq should be combined with the vibration of the transformer. In the experiment, the maximum amplitude in the frequency domain is basically about 500 (as shown in Figs. 5 and 7), and 500/25 = 20, i.e. kp = 20 (25 here refers to the ambient noise value discussed above, 0.003 × 16,384/2 ≈ 25); kq = 0.15 was obtained from multiple experiments.

For a frequency component with a sudden change, segmentation lines can be added on both sides. To prevent the frequency segmentation from being too dense or too sparse, the number and position of boundary lines can be flexibly adjusted according to the mean value of the frequency components within the range. If the division is too dense, the average of the two boundaries can be taken, or the boundary with the lower frequency amplitude, or the boundary near the target frequency; the last choice can improve the resolution of the target frequency amplitude. For a region with sparse division, a division boundary may be added at some frequency domain minima.

The vibration sensor is a CA-YD-188T piezoelectric acceleration sensor from the Jiangsu Lianneng company, with a sensitivity of 500 mV/g, a frequency range of 0-5000 Hz, a measurement range of ±10 g, an impact limit of 2000 g and a working temperature range of −40 to 120 °C. The vibration acquisition instrument is a high dynamic range acquisition unit from Dewesoft. As shown in Fig. 4, the experimental transformer has 14 fastening bolts: 6 transverse bolts (A-F) and 8 longitudinal bolts (1-8). There are five installation positions for vibration sensors on the transformer (upper left, upper middle, upper right, bottom left, bottom right).
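Since the bodies of Eqs. (24) and (25) did not survive extraction, the following Python sketch is only one plausible reading of the two tests described above (drop a low peak within r Hz of a much higher one; merge adjacent peaks of similar amplitude within s Hz), using the paper's kp, r, kq, s and noise-floor values:

import numpy as np

def clean_peaks(freq, amp, kp=20.0, r=100.0, kq=0.15, s=30.0, noise=25.0):
    # freq, amp: candidate spectral peaks (Hz, FFT amplitude), numpy arrays.
    keep = amp > noise                   # drop ambient-noise-level peaks first
    for i in range(len(freq)):
        if not keep[i]:
            continue
        for j in range(len(freq)):
            if i == j or not keep[j]:
                continue
            fa, a = freq[i], amp[i]
            fb, b = freq[j], amp[j]
            if abs(fa - fb) < r and a > kp * b:
                keep[j] = False          # low peak in the shadow of a high one
            elif abs(fa - fb) < s and abs(a - b) < kq * max(a, b) and fb > fa:
                keep[j] = False          # near-duplicate: keep the lower peak
    return freq[keep], amp[keep]

f = np.array([100.0, 130.0, 350.0, 360.0])
a = np.array([600.0, 28.0, 300.0, 290.0])
print(clean_peaks(f, a))   # 130 Hz shadowed peak and 360 Hz duplicate removed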
The hammering test consists of four hammerings in each group. Standing on the side of the high-voltage winding and facing the transformer, the first hammering (K1) is from left to right at the top of the transformer, the second hammering (K2) is from front to back along the line of sight, the third hammering (K3) is from right to left at the top of the transformer, and the fourth hammering (K4) is downward from the middle of the top. The method proposed in this paper is mainly used to analyze the transient vibration of the transformer, including the vibration at power on, power off and switching between different loads, as shown in Table 2. The method applies to any transient state switching, but only the results for the hammering test, power on, and load switching from 60 to 100 kW are shown here. In the hammering test there is only the effect of the instantaneous excitation and no other vibration interference, so the resonance frequencies are easy to observe. The vibration waveform of the transformer hammering test in the normal state is shown in Figs. 5 and 6. Among the resonance frequency components, attention should be paid to the harmonic components at integer multiples of 50 Hz; these components are more likely to cause resonance in the transformer. The 'jet' colormap is used in Fig. 5 and all subsequent time–frequency figures.

[Figure 5: Spectrum segmentation and time–frequency display of hammering (K1) test data based on the traditional methods; (a1-a3) are the Fourier spectrum segmentation results (upper left position, colormap 'jet').]

[Figure 6: Spectrum segmentation and time–frequency display of hammering (K1) test data based on the method proposed in this paper; (a) is the Fourier spectrum segmentation result (upper left position, colormap 'jet').]

Figure 5 shows the results of the three traditional EWT methods, based on extrema, adaptive segmentation and the Gaussian scale plane20. In Fig. 5, the Fourier transform results and frequency domain segmentation are displayed on the left (shown vertically to ease comparison with the frequency components of the time–frequency plane), and the change of each frequency component with time is displayed on the right; Figs. 6, 8 and 9 below follow the same layout. The method based on extrema needs the number of extrema to be specified in advance. When the specified number of extrema is small, the division may be insufficient, similar to the division results of the adaptive method; when the number is large, some frequency bands may be too narrow. Too many bands increase the computational burden, and too narrow bands cause display distortion on the time–frequency plane, where bright lines of unchanging color appear, such as at 250 Hz and 700 Hz in subfigure (1) of Fig. 5. Constant color would mean that the amplitudes of these frequency components never change, but in fact these components cannot be constant: this is a hammering test, the vibration decays quickly, and so no frequency component can remain unchanged. The adaptive method places only one boundary in the range 0-1000 Hz, so the components to be analyzed are lumped together, which is not helpful for transformer vibration analysis. The method based on the Gaussian plane is the best of the three.
In addition to the resonance frequency components that can be seen with the method based on the Gaussian scale plane20, the method proposed in this paper reveals richer resonance frequencies, such as those around 20 Hz, 550 Hz, 750 Hz and 850 Hz in Fig. 6. The proposed segmentation method also gives the resonance frequencies more accurately, i.e. the lines in the time–frequency plane are straighter. The method based on the Gaussian plane needs to draw a Gaussian scale plane when dividing the frequency domain, so its computational burden is very heavy: the time–frequency analysis based on the Gaussian plane takes about 80 s, while the proposed method takes about 2 s. When time–frequency analysis is carried out for multiple channels, multiple hammerings or multiple faults, the method based on the Gaussian plane requires substantial computational resources. Most importantly, the three traditional methods cannot track and analyze newly emerging frequency components. In any case, EWT remains a good time–frequency analysis method, especially the EWT method based on the Gaussian plane. The method proposed in this paper is an improved EWT method; compared with the Gaussian-plane method, the improved frequency domain segmentation is simple to compute and gives a clear time–frequency display of the important frequency components, which makes it more suitable for the analysis of transformer vibration data.

Table 3 shows the identification results for the resonance frequencies and damping coefficients corresponding to the resonance frequency points in Fig. 6. The variation of the transformer damping coefficient with resonance frequency is shown in Fig. 7. As the resonance frequency increases, the damping coefficient shows a downward trend; for resonance frequency points above 400 Hz the damping coefficients are very small.

[Figure 7: Variation of damping coefficient with resonance frequency.]

When a hammering test of the transformer is inconvenient or impossible, determining the resonance frequencies of the transformer from the vibration data at power on is very useful. Especially for large power transformers, a hammering test may be difficult to carry out. Since there is no current in the transformer winding when the transformer is powered on, the vibration caused by the load current need not be considered, so the interference in the vibration measurement is small. When the transformer is powered on, it effectively gives a physical excitation to itself. Although some vibration components are generated by the transformer excitation during no-load operation, the frequency spectrum of the excitation current is determined, so the vibration components generated by the excitation current are also determined. Therefore, changes in the resonance frequencies can be observed by comparing the vibration spectra when the transformer is powered on. Figures 8 and 9 show the analysis results for the vibration data when the transformer is powered on; the power-on time of the transformer is 0.36 s. The method based on the Gaussian plane does not highlight the parts with large frequency amplitude, such as 100 Hz and 250 Hz. The improved frequency domain segmentation results are shown in Fig. 9: the new segmentation method sets boundaries for the parts with large amplitude (100 Hz, 250 Hz).
Compared with the results of the time–frequency plane in Fig. 8, the time–frequency information given by the method proposed in this paper is more obvious.

[Figure 8: Vibration data analysis of transformer power on based on the Gaussian plane; (a) is the Fourier spectrum segmentation result (upper left position, colormap 'jet').]

[Figure 9: Vibration data analysis of transformer power on based on the method proposed in this paper; (a) is the Fourier spectrum segmentation result (upper left position, colormap 'jet').]

In Figs. 10 and 11, K1-K4 are the Fourier analysis results of the vibration from the transformer hammering tests. Although the resonance frequencies excited by the four hammering tests are not exactly the same, the distribution range of the resonance points is basically the same, and the resonance frequencies of the transformer can be obtained more completely by hammering in different directions. The first dark blue line is the analysis result when the transformer is powered on. The vibration components of the transformer include all multiples of 50 Hz within 0-1000 Hz in both the transverse and longitudinal directions. Therefore, when a resonance point occurs within 1000 Hz, it will cause resonance at the multiple of 50 Hz adjacent to it. Except for the 100 Hz frequency point caused by hysteresis, the resonance frequency points excited by the hammering test are basically the same as the vibration frequency points at power on. It can be seen that the transformer resonates at some frequencies, such as 350 Hz, 600 Hz and 800 Hz in Fig. 10, and 200 Hz, 400 Hz, 600-800 Hz and 900 Hz in Fig. 11.

[Figure 10: Vibration data analysis results of four hammering tests and power on (upper left position).]

[Figure 11: Vibration data analysis results of four hammering tests and power on (upper right position).]

Figure 12 shows the frequency spectrum of the steady-state vibration of the transformer under 60 kW and 100 kW load respectively. Figure 13 shows the vibration changes before and after the transformer load step; the load is switched from 60 to 100 kW, and the load step time is about 0.39 s. Comparing the frequency spectra of the steady-state vibration before and after the load step, the components at 200 Hz, 400 Hz, 700 Hz and 1000 Hz change greatly, and the 1000 Hz component is a newly emerging frequency component. By setting frequency domain division boundaries near these components, it is obvious that the four frequency components change immediately after the load step, so the load step is the reason for the change of these frequency components.

[Figure 12: Frequency domain comparison before and after transformer load step (upper left position).]

[Figure 13: Vibration data and time–frequency display before and after transformer load step (upper left position).]

When the transformer vibration data change greatly, the components with large amplitude change are determined by comparing the signal components in the frequency domain, and then a boundary is set near each target frequency component to improve its time–frequency resolution and determine the time at which it appears. Combined with the voltage and current signals during the load step, it can then be determined whether the change of the target frequency signal is caused by a transformer fault or by the load step. Figures 14 and 15 show the analysis results of the hammering test of a 25 kV oil-immersed on-board transformer for Electric Multiple Units (EMU). Figure 14 shows the result based on the method proposed herein, and
Fig. 15 shows the result based on the conventional Gaussian plane. The spectrum distribution of the 25 kV oil-immersed on-board transformer is very wide, and there are many resonance frequency points in the range 0-3000 Hz. The time–frequency analysis results of the high-frequency part and the low-frequency part are shown in Fig. 14. For the low-frequency part, both methods have good time–frequency resolution, and the proposed method divides more densely; for the high-frequency part, the method proposed in this paper has obvious advantages, especially for the 3000 Hz component: the amplitude of that frequency component is very high, and the method based on the Gaussian plane does not show its change well.

[Figure 14: Analysis results of hammering test data of the 25 kV oil-immersed on-board transformer; (a) is the Fourier spectrum segmentation result (method proposed in this paper, upper left position, colormap 'jet').]

[Figure 15: Analysis results of hammering test data of the 25 kV oil-immersed on-board transformer; (a) is the Fourier spectrum segmentation result (method based on the Gaussian scale plane, upper left position, colormap 'jet').]

Figures 16 and 17 show the analysis results of the hammering test of a 35 kV metro traction transformer. Figure 16 is the result based on the method proposed herein, and Fig. 17 is the result based on the conventional Gaussian plane. The resonance frequency distribution of the 35 kV metro dry-type transformer is very concentrated, mainly in the range 200-400 Hz, and both methods have good time–frequency resolution.

[Figure 16: Analysis results of hammering test data of the 35 kV metro traction transformer; (a) is the Fourier spectrum segmentation result (method proposed in this paper, upper left position, colormap 'jet').]

[Figure 17: Analysis results of hammering test data of the 35 kV metro dry-type transformer; (a) is the Fourier spectrum segmentation result (method based on the Gaussian scale plane, upper left position, colormap 'jet').]

The method proposed in this paper thus has high time–frequency resolution for these two types of transformers. It has good applicability for both low-frequency and high-frequency components, and the processing time for the vibration signals of the two types of transformers is less than 2 s, whereas the method based on the Gaussian plane has a running time of about 80 s. Combining the analysis results for the three transformers, the advantages of the proposed method are as follows.

This method can track and analyze sudden-change components. When the vibration of the transformer fluctuates, the cause of the fluctuation can be determined by locating the frequency change time and the load step time, which is of great significance for on-line fault diagnosis of transformers.

Through the proposed method, the vibration waveforms at the resonance frequencies are separated, and the damping coefficients and other correlation coefficients at the different resonance frequencies are obtained.

For large power transformers, a hammering test may be difficult to carry out. When the hammering test of the transformer is inconvenient or impossible, the resonance frequencies of the transformer can be determined from the vibration data during no-load power on.

The disadvantage of this method is that for different types of transformers, the determination of kp (in Eq. 24) and kq (in Eq. 25)
needs to be combined with the vibration of the transformer and adjusted according to the analysis objectives.

An improved EWT method is proposed in this paper. Through the analysis of vibration signals from a 380 V dry-type transformer, a 35 kV metro dry-type transformer and a 25 kV oil-immersed on-board transformer for EMUs, the applicability of this method to transformers with different voltage levels and different capacities is demonstrated. The method proposed in this paper can effectively eliminate the influence of interference components such as small-amplitude components and multiple adjacent high-amplitude components, and highlights the importance of high-amplitude components and components with large changes in the frequency domain. Most importantly, compared with traditional methods, the proposed method can track and analyze newly appearing frequency components. Through the analysis of transformer hammering vibration data and power-on vibration data, this method shows higher time–frequency resolution, and the calculation time is shortened from 80 s to about 2 s, which proves its superiority.

All data generated during this study are included in this published article [and its supplementary information files].

References

1. Dai, J., Song, H., Sheng, G. & Jiang, X. Dissolved gas analysis of insulating oil for power transformer fault diagnosis with deep belief network. IEEE T. Dielect. El. In. 24(5), 2828–2835 (2017).
2. Zhang, B., Yan, N., Du, J., Han, F. & Wang, H. A novel approach to investigate the core vibration in power transformers. IEEE T. Magn. 54(11), 1–4 (2018).
3. Bagheri, M., Zollanvari, A. & Nezhivenko, S. Transformer fault condition prognosis using vibration signals over cloud environment. IEEE Access 6, 9862–9874 (2018).
4. Lee, S. H., Jung, S. Y. & Lee, B. W. Partial discharge measurements of cryogenic dielectric materials in an HTS transformer using HFCT. IEEE T. Appl. Supercon. 20(3), 1139–1142 (2010).
5. Yang, Z., Zhou, Q., Wu, X. & Zhao, Z. A novel measuring method of interfacial tension of transformer oil combined PSO optimized SVM and multi frequency ultrasonic technology. IEEE Access 7, 182624–182631 (2019).
6. Kim, J. W., Park, B., Jeong, S. C., Kim, S. W. & Park, P. Fault diagnosis of a power transformer using an improved frequency-response analysis. IEEE T. Power Deliver. 20(1), 169–178 (2005).
7. Zhang, P., Li, L., Cheng, Z., Tian, C. & Han, Y. Study on vibration of iron core of transformer and reactor based on Maxwell stress and anisotropic magnetostriction. IEEE T. Magn. 55(2), 1–5 (2019).
8. Saponara, S., Fanucci, L., Bernardo, F. & Falciani, A. Predictive diagnosis of high-power transformer faults by networking vibration measuring nodes with integrated signal processing. IEEE T. Instrum. Meas. 65(8), 1749–1760 (2016).
9. Kang, P. & Birtwhistle, D. Condition monitoring of power transformer on-load tap-changers. Part 2: Detection of ageing from vibration signatures. IEE Proc. Gener. Transm. Distrib. 148(4), 307 (2001).
10. Garcia, B., Burgos, J. C. & Alonso, A. Transformer tank vibration modeling as a method of detecting winding deformations - Part I: Theoretical foundation. IEEE T. Power Deliver. 21(1), 157–163 (2006).
11. Shin, P. S., Lee, J. & Ha, J. W. A free vibration analysis of helical windings of power transformer by pseudospectral method. IEEE T. Magn. 43(6), 2657–2659 (2007).
12. Shengchang, J., Yongfen, L. & Yanming, L. Research on extraction technique of transformer core fundamental frequency vibration based on OLCM. IEEE T. Power Deliver.
{"url":"https://www.pietromainero.it/blog/improved-empirical-wavelet-transform-ewt-and-its-application-in/","timestamp":"2024-11-10T18:43:50Z","content_type":"text/html","content_length":"122869","record_id":"<urn:uuid:cc322a20-6c3e-458b-bca7-dec87cec2f2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00413.warc.gz"}
Lesson 10: Solving Problems with Trigonometry

• Let's solve problems about right triangles.

10.1: Notice and Wonder: Practicing Perimeter
What do you notice? What do you wonder?

10.2: Growing Regular Polygons
1. Here is a square inscribed in a circle with radius 1 meter. What is the perimeter of the square? Explain or show your reasoning.
2. What is the perimeter of a regular pentagon inscribed in a circle with radius 1 meter? Explain or show your reasoning.
3. What is the perimeter of a regular decagon inscribed in a circle with radius 1 meter? Explain or show your reasoning.
4. What is happening to the perimeter as the number of sides increases?

Here is a diagram of a square inscribed in a circle and another circle inscribed in the same square.
1. How much shorter is the perimeter of the small circle than the perimeter of the large circle?
2. If the square were replaced with a regular polygon with more sides, would your previous answer be larger, smaller, or the same? Explain or show your reasoning.

10.3: Gentle Descent
An airplane travels 150 miles horizontally during a decrease of 35,000 feet vertically.
1. What is the angle of descent?
2. How long is the plane's path?

We know how to calculate the missing sides and angles of right triangles using trigonometric ratios and the Pythagorean Theorem. We can use the same strategies to solve some problems with other shapes. For example: Given a regular hexagon with side length 10 units, find its area. Decompose the hexagon into 6 isosceles triangles. The angle at the center is \(360^\circ \div 6=60^\circ\). That means we created 6 equilateral triangles, because the base angles of isosceles triangles are congruent. To find the area of the hexagon, we can find the area of each triangle. Drawing in the altitude to find the height of the triangle creates a right triangle, so we can use trigonometry. In an isosceles (and an equilateral) triangle, the altitude is also the angle bisector, so the angle is 30 degrees. That means \(\cos(30^\circ)=\frac{h}{10}\), so \(h\) is about 8.7 units. The area of one triangle is \(\frac12(10)(8.7)\), or 43.5 square units. So the area of the hexagon is 6 times that, or about 259.8 square units.

• arccosine: The arccosine of a number between 0 and 1 is the acute angle whose cosine is that number.
• arcsine: The arcsine of a number between 0 and 1 is the acute angle whose sine is that number.
• arctangent: The arctangent of a positive number is the acute angle whose tangent is that number.
• cosine: The cosine of an acute angle in a right triangle is the ratio (quotient) of the length of the adjacent leg to the length of the hypotenuse. In the diagram, \(\cos(x)=\frac{b}{c}\).
• sine: The sine of an acute angle in a right triangle is the ratio (quotient) of the length of the opposite leg to the length of the hypotenuse. In the diagram, \(\sin(x) = \frac{a}{c}\).
• tangent: The tangent of an acute angle in a right triangle is the ratio (quotient) of the length of the opposite leg to the length of the adjacent leg. In the diagram, \(\tan(x) = \frac{a}{b}\).
• trigonometric ratio: Sine, cosine, and tangent are called trigonometric ratios.
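A quick numerical companion to the polygon activity above (not part of the original lesson): the perimeter of a regular n-gon inscribed in a circle of radius 1 is \(2n\sin(180^\circ \div n)\), which approaches the circle's circumference \(2\pi \approx 6.28\) as the number of sides increases.

```python
# Perimeters of regular n-gons inscribed in a unit circle.
import math

for n in (4, 5, 10, 100):
    perimeter = 2 * n * math.sin(math.pi / n)
    print(n, round(perimeter, 4))
# 4 -> 5.6569, 5 -> 5.8779, 10 -> 6.1803, 100 -> 6.2821 (2*pi ~ 6.2832)
```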
{"url":"https://im.kendallhunt.com/HS/students/2/4/10/index.html","timestamp":"2024-11-12T16:00:10Z","content_type":"text/html","content_length":"105149","record_id":"<urn:uuid:0d035b14-5c48-41f8-afae-53dd209355ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00358.warc.gz"}
Comparative Prime Number Theory Symposium

The "Comparative Prime Number Theory" symposium is one of the highlight events organized by the PIMS-funded Collaborative Research Group (CRG) "L-functions in Analytic Number Theory". It is a one-week event taking place on the UBC campus in Vancouver, B.C., Canada, from June 17–21, 2024. Comparative prime number theory certainly includes prime number races, both classical (Chebyshev's bias) and over number fields and function fields. It also broadly includes the distribution of zeros of L-functions associated with these prime counting functions, including topics related to the Linear Independence hypothesis (LI) on the imaginary parts of those zeros, as well as general oscillations of number-theoretic error terms.

Plenary speakers:
Alexandre Bailleul (Université Paris–Saclay)
Lucile Devin (Université du Littoral Côte d'Opale)
Daniel Fiorilli (Université Paris–Saclay)
Florent Jouve (Université de Bordeaux)
Youness Lamzouri (Université de Lorraine)
Wanlin Li (Washington University in St. Louis)

Please go to the official symposium website for more information.

Event type: Scientific, Conference
{"url":"https://staging.pims.math.ca/events/240617-cpnts","timestamp":"2024-11-10T19:09:47Z","content_type":"text/html","content_length":"425623","record_id":"<urn:uuid:fc55fb1b-695b-4c46-af08-b86e960cdd94>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00372.warc.gz"}
Journal of Lie Theory 34 (2024), No. 1, 017–040
Copyright Heldermann Verlag 2024

Decomposition of Enveloping Algebras of Simple Lie Algebras and their Related Polynomial Algebras

Rutwig Campoamor-Stursberg, Instituto de Matemática Interdisciplinar, Dpto. Geometría y Topología, Universidad Complutense, Madrid, Spain
Ian Marquette, School of Mathematics and Physics, University of Queensland, Brisbane, Australia

The decomposition problem of the enveloping algebra of a simple Lie algebra is reconsidered by combining the analytical and the algebraic approach, showing its relation to the internal labelling problem with respect to a nilpotent subalgebra. A lower bound for the number of generators of the commutant, as well as the maximal Abelian subalgebra, is obtained. The case of rank-two simple Lie algebras is revisited and completed with the analysis of the exceptional Lie algebra G_2.

Keywords: Enveloping algebras, decomposition, simple Lie algebras.
MSC: 16S30, 17B25, 17B35.
{"url":"https://www.heldermann.de/JLT/JLT34/JLT341/jlt34002.htm","timestamp":"2024-11-12T12:58:48Z","content_type":"text/html","content_length":"3325","record_id":"<urn:uuid:a89f2e23-a346-4bfa-91d8-8b81465cdc17>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00685.warc.gz"}
Elon Musk Thinks Evolution is Bullshit.

2,657 thoughts on "Elon Musk Thinks Evolution is Bullshit."

1. Patrick,

My point is that talking about history supervening only makes sense if the math describing that behavior has been demonstrated to be applicable to the real world.

It is applicable. QM works when applied to reality. It's spectacularly successful. Physical systems behave as expected based on the evolution of their wavefunctions. Time-reversibility is simply a mathematical fact about the evolution of those wavefunctions — they retrace their steps when the appropriate time-related variables are negated. If a wavefunction evolving forward in time produces a state sequence Q0, Q1, …, Qn-1, Qn, you can take nothing but Qn and by time-reversing the evolution of the wavefunction, you will get Qn, Qn-1, …, Q2, Q1, Q0. Time-reversing the wavefunction produces the same state sequence in reverse. If you can generate the correct causal history from nothing but Qn, it shows that the causal history is implicit in Qn. Where else could it be coming from?

Patrick: Just because the math works doesn't mean anything in reality.

Indeed. Science uses math to model reality. The search is for better models. Reality is time-dependent unless one thinks of the block universe. I find mulling over the reality of time and whether the way we experience and model it is flawed or illusory far more compelling than wondering if I'm a brain in a vat.

keiths: If you agree that we cannot judge the likelihood of this "utter failure", …then it follows that we are in no position to claim that our "sensorimotor systems" are delivering accurate information to us.

But as nothing follows from allowing that our model of the world that we build up from our shared experiences may be illusory, we can justifiably continue to behave as if that information is reliable. And of course the overwhelming majority* of humans live their lives as if their lives were real.

*I'm assuming Keiths has some extra dimension to his life that follows from his mantra that we can't be certain of anything. To which the riposte, still unanswered, is… "so what!"

My point is that talking about history supervening only makes sense if the math describing that behavior has been demonstrated to be applicable to the real world. It is applicable. QM works when applied to reality. It's spectacularly successful.

Some of QM has been demonstrated to accurately model reality. The time reversibility has not (again, as I understand it).

Time-reversibility is simply a mathematical fact about the evolution of those wavefunctions — they retrace their steps when the appropriate time-related variables are negated. If a wavefunction evolving forward in time produces a state sequence Q0, Q1, …, Qn-1, Qn, you can take nothing but Qn and by time-reversing the evolution of the wavefunction, you will get Qn, Qn-1, …, Q2, Q1, Q0. Time-reversing the wavefunction produces the same state sequence in reverse. If you can generate the correct causal history from nothing but Qn, it shows that the causal history is implicit in Qn. Where else could it be coming from?

The fact that the math allows for this does not mean that in reality the history of a particular macro state can be uniquely determined, even in principle. That remains for empirical verification. My understanding is that current results do not support the idea.

6. Patrick,

Some of QM has been demonstrated to accurately model reality.
The time reversibility has not (again, as I understand it).

There's a reason I emphasized the word 'mathematical' in this sentence: Time-reversibility is simply a mathematical fact about the evolution of those wavefunctions — they retrace their steps when the appropriate time-related variables are negated.

Even if Q0 through Qn had nothing to do with physical states, and were merely mathematical abstractions, it would still be true that the entire state sequence is implicit in Qn. That's a mathematical fact, not a physical one. So if the actual physical states going forward in time match the sequence Q0 through Qn, then the argument is complete. There is no need to find some aspect of reality that corresponds to the time-reversed sequence.

I don't disagree about the math, only its applicability to the real world.

I addressed that in my previous comment.
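A minimal numerical illustration of the mathematical point under dispute (my sketch, not from the thread; a two-level system with an arbitrary Hermitian matrix stands in for the "wavefunction"): evolving a state forward with the unitary U = exp(−iHt) and then applying the time-reversed evolution recovers the initial state, which is the mathematical fact being described.

```python
# Forward-evolve a two-level quantum state, then reverse the evolution.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])      # an arbitrary Hermitian "Hamiltonian"
psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state Q0

t = 2.0
U = expm(-1j * H * t)                        # forward evolution operator
psi_t = U @ psi0                             # final state Qn
psi_back = expm(1j * H * t) @ psi_t          # time-reversed evolution from Qn
print(np.allclose(psi_back, psi0))           # True: Q0 is recoverable from Qn
```

Whether this mathematical reversibility applies to real macroscopic states is exactly the empirical question the commenters disagree about.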
{"url":"http://theskepticalzone.com/wp/elon-musk-thinks-evolution-is-bullshit/","timestamp":"2024-11-02T14:48:37Z","content_type":"text/html","content_length":"46187","record_id":"<urn:uuid:37133446-f26a-4012-acfe-bf56e89d802f>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00197.warc.gz"}
Modeling homogeneous ice nucleation from drop-freezing experiments: impact of droplet volume dispersion and cooling rates

Homogeneous nucleation is the prominent mechanism of glaciation in cirrus and other high-altitude clouds. Ice nucleation rates can be studied in laboratory assays that gradually lower the temperature of pure water droplets. These experiments can be performed with different cooling rates, with different droplet sizes, and often with a distribution of droplet sizes. We combine nucleation theory, survival probability analysis, and published data on the fraction of frozen droplets as a function of temperature to understand how the cooling rate, droplet size, and size dispersity influence the nucleation rates. The framework, implemented in the Python code AINTBAD (Analysis of Ice nucleation Temperature for B and A Determination), provides a temperature-dependent nucleation rate on a per volume basis, in terms of approximately temperature-independent prefactor (A) and barrier (B) parameters. We find that dispersion in droplet diameters of less than an order of magnitude, if not properly included in the analysis, can cause apparent nucleation barriers to be underestimated by 50%. This result highlights the importance of droplet size dispersion in efforts to model glaciation in the polydisperse droplets of clouds. We also developed a theoretical framework, implemented in the Python code IPA (Inhomogeneous Poisson Analysis), to predict the fraction of frozen droplets at each temperature for arbitrary droplet size dispersions and cooling rates. Finally, we present a sensitivity analysis of the effect of temperature uncertainty on the nucleation spectrum. Our framework can improve models for ice nucleation in clouds by explicitly accounting for droplet polydispersity and cooling rates.

Received: 19 Mar 2024 – Discussion started: 08 Apr 2024 – Revised: 31 Jul 2024 – Accepted: 12 Aug 2024 – Published: 26 Sep 2024

1 Introduction

The thermodynamics and kinetics of ice formation from water are important for atmospheric science (Koop et al., 2000; Möhler et al., 2007; DeMott et al., 2010; Knopf and Alpert, 2023), the preservation of biologically active substances (Morris et al., 2012; Zachariassen and Kristiansen, 2000), and the storage of food products (Goff, 1997; Li and Sun, 2002). Nucleation, the first step in ice formation, heralds the onset of important subsequent changes: rapid growth of ice domains (Shultz, 2018; Barrett et al., 2019; Sibley et al., 2021), the release of latent heat (Riechers et al., 2013; Dobbie and Jonas, 2001), and the freeze concentration of impurities (Deck et al., 2022; Deville, 2017; Stoll et al., 2021). A quantitative understanding of these processes requires models that accurately predict ice nucleation kinetics. In most applications, the primary source of nuclei is heterogeneous nucleation on various surfaces and impurities under mild supercooling (Alpert and Knopf, 2016; Zhang and Maeda, 2022; Stan et al., 2009; Kubota, 2019).
However, homogeneous nucleation of ice occurs under deep supercooling for pure water droplets in the atmosphere (Koop et al., 2000; Knopf and Alpert, 2023; Herbert et al., 2015; Heymsfield and Miloshevich, 1993; Spichtinger et al., 2023) and in laboratory experiments (Murray et al., 2010; Atkinson et al., 2016; Shardt et al., 2022; Riechers et al., 2013; Laksmono et al., 2015). Special assays have been developed to study ice nucleation kinetics by monitoring hundreds of small supercooled water droplets (Laval et al., 2009; Shardt et al., 2022; Ando et al., 2018; Tarn et al., 2020). These experiments provide an independent realization of the nucleation time and/or temperature for each droplet (Tarn et al., 2020; Shardt et al., 2022). Typically, the kinetics are studied via induction times under isothermal conditions (constant supercooling) (Alpert and Knopf, 2016; Herbert et al., 2014; Knopf et al., 2020) or via the spectrum of ice nucleation temperatures at a constant cooling rate (Zhang and Maeda, 2022; Ando et al., 2018; Shardt et al., 2022; Murray et al., 2010). These two types of experiments have important similarities and differences. For droplets subjected to constant supercooling, the induction time is exponentially distributed. Several analyses have modeled the exponential decay to understand how nucleation rates depend on supercooling (Alpert and Knopf, 2016; Herbert et al., 2014; Knopf et al., 2020). In experiments where the supercooling is gradually increased, the distribution of nucleation times is more complicated (Murray et al., 2010; Riechers et al., 2013). Typically, no nucleation events occur until the temperature drops below some critical temperature, and then the nucleation times and temperatures all occur within a focused range (Murray et al., 2010; Riechers et al., 2013; Shardt et al., 2022). The narrow range of ice nucleation temperatures has motivated the use of a single temperature cutoff for ice nucleation in cloud models (Kärcher and Lohmann, 2002). This approach, however, cannot account for the known impact of the cooling rate – which spans about 0.01 to 1 K min^−1 – on the formation of ice in clouds (Stephens, 1978; Kärcher and Seifert, 2016; Shardt et al., 2022). Likewise, cloud models typically assume a monodisperse distribution of droplet sizes, while the range of sizes of droplets in clouds typically spans 2 to 50 µm (Igel and van den Heever, 2017). The combined impact of the cooling rate and the droplet size distribution on the analysis of droplet freezing experiments and the prediction of cloud properties has not, to our knowledge, been addressed to date. To justify the new elements of our approach for introducing the cooling rate and droplet polydispersity into the interpretation and prediction of experimental data, we briefly discuss the capabilities and gaps of existing models for analyzing experiments with steadily cooled droplets. Analyses of drop-freezing experiments can be grouped according to two distinguishing criteria. The first distinction pertains to the models used for interpreting the nucleation rate. Kubota (2019) used empirical nucleation rate models, while others have used theoretically motivated rate expressions (often based on classical nucleation theory) (Ickes et al., 2017; Murray et al., 2010; Riechers et al., 2013). Empirical rate models can provide excellent fits to the nucleation rate data, and successful empirical approaches sometimes inspire new theoretical models.
However, the fitted rate expressions from an empirical model lack the interpretability and generalizability afforded by a successful fit to a theoretical rate model. A second distinction pertains to the analysis and interpretation of the droplet nucleation data themselves. Some studies focus on the fraction of droplets that nucleate in a specific supercooling range, i.e., the nucleation spectrum (Murray et al., 2010; Shardt et al., 2022; Ando et al., 2018). The nucleation spectrum has sometimes been interpreted as an intrinsic property of supercooled water and/or the nucleants present in the system (Zhang and Maeda, 2022; Alpert and Knopf, 2016; Knopf and Alpert, 2023). However, it also depends on variables beyond chemical or interfacial properties, e.g., the cooling rates and droplet diameters. An alternative explanation of the nucleation spectrum begins with the survival probability formalism. In survival probability analyses, the probability that a droplet remains liquid steadily declines with time in proportion to the changing rate of ice nucleation. The survival probability formalism is easily used in combination with theoretical models for the nucleation rate, but the combination remains rare in the ice nucleation literature. Indeed, prior combinations of survival probability and nucleation theory in the ice literature have focused on heterogeneous nucleation (Wright and Petters, 2013; Marcolli et al., 2007; Alpert and Knopf, 2016). In this work, we combine survival probability analysis with classical nucleation theory to quantitatively predict the effects of different droplet volumes (Atkinson et al., 2016) and cooling rates (Shardt et al., 2022). The experiments that motivated our study observed homogeneous nucleation in narrowly size-selected droplets cooled at a steady rate deep into the metastable zone. We are inspired by experiments that achieve precise control of droplet diameters, but atmospheric clouds will naturally have a distribution of droplet diameters (Painemal and Zuidema, 2011; Igel and van den Heever, 2017). We demonstrate a method to extract theoretically derived nucleation rate parameters from the experimental survival probability data of monodisperse droplets and of droplets with a distribution of diameters. We find that the dispersion of droplet sizes typically found in clouds, if ignored in the data analysis, can cause serious errors in the inferred slopes of the nucleation rate vs. temperature. These nucleation rate slopes are known to significantly impact the prediction of cloud properties (Herbert et al., 2015; Spichtinger et al., 2023). To address this, we develop a theoretical framework to predict the fraction of frozen droplets for an arbitrary dispersion of droplet sizes at any specified cooling rate. We implement this framework in a Python code named IPA (Inhomogeneous Poisson Analysis), which predicts the fraction of frozen droplets as a function of temperature using the distribution of droplet sizes and the cooling rate as input. We expect that this model and its implementation will help improve the accuracy of cloud microphysics predictions by accounting for the natural variability in droplet sizes and cooling rates observed in atmospheric conditions.
2 Analytical model to analyze the nucleation of monodispersed droplets

The probability that a single droplet of volume $V$ has not frozen by a given time $t$ can be modeled using the master equation (Cox and Oakes, 1984):

$$\frac{\mathrm{d}P(t|V)}{\mathrm{d}t} = -P(t|V)\, J V. \quad (1)$$

Here $P(t|V)$ is the survival probability, $J$ is the nucleation rate on a per volume per time basis, and $V$ is the droplet volume. For the nucleation rate on a per droplet per time basis, we multiply the nucleation rate $J$ by the droplet volume $V$. Note that $J$ itself is independent of the droplet volume, and accordingly the parameters that define $J$ should also be independent of the volume. In induction time measurements the temperature is constant, and the rate of nucleation in each liquid droplet also remains constant. On integrating Eq. (1), the survival probability becomes $P(t|V) = \exp[-JVt]$. This result has been used to analyze nucleation data in several crystallization studies, e.g., by plotting $\ln P(t|V)$ vs. $t$ to estimate $J$ and its supersaturation dependence (Alpert and Knopf, 2016; Knopf and Alpert, 2023; Stöckel et al., 2005; Kubota, 2019; Sear, 2014). In contrast, in experiments where the supercooling increases with time, the nucleation rate in each liquid droplet also increases with time (Peters, 2011). The survival probability is then obtained by integrating Eq. (1):

$$P(t|V) = \exp\left[-\int_0^t J(t')\, V\, \mathrm{d}t'\right], \quad (2)$$

where $P$ is a function of time. However, the data are usually reported as a function of temperature or supercooling (Murray et al., 2010; Shardt et al., 2022). Since the experiments are conducted at a specific cooling rate $R$ (Murray et al., 2010; Shardt et al., 2022), we replace the time variable with temperature using the relation

$$T = T_m - R\,t, \quad (3)$$

where $T_m$ is the melting temperature. After the variable transformation, the survival probability becomes

$$P(T|V) = \exp\left[-\frac{V}{R}\int_{T}^{T_m} J(T')\, \mathrm{d}T'\right]. \quad (4)$$

Now we need theoretical models or experimental data for the nucleation rate to predict the survival probability. Classical nucleation theory gives the rate of homogeneous nucleation as (Volmer and Weber, 1926; Becker and Döring, 1935)

$$J = A \exp\left[-\frac{16\pi\,\gamma^3 v_0^2\, T_m^2}{3\,\lambda_f^2\,(T_m - T)^2\, k_B T}\right], \quad (5)$$

where $A$ is the kinetic prefactor, $\gamma$ is the interfacial free energy between ice and water, $\lambda_f$ is the latent heat of freezing, $k_B$ is the Boltzmann constant, $T_m$ is the melting point of ice, $v_0$ is the molar volume of ice, and $T$ is the absolute temperature. The nucleation rate $J$ is a product of the equilibrium concentration of clusters of critical size and the non-equilibrium flux to post-critical sizes. Classical nucleation theory predicts prefactors and exponential terms with explicit temperature dependencies.
The exponent in classical nucleation theory is interpreted as a Gibbs free-energy barrier, $\Delta G^*/k_B T$. It depends explicitly on both the absolute temperature and the supercooling. To account for the temperature dependence of nucleation and the time-dependent temperature, we use $\delta_T = (T_m - T)/T_m$ as the dimensionless supercooling and rewrite the expression for $J$ as

$$J = A \exp\left[\frac{-B}{(1-\delta_T)\,\delta_T^2}\right]. \quad (6)$$

Here $B = 16\pi\gamma^3 v_0^2/(3\lambda_f^2 k_B T_m)$ contains shape factors, physical constants, the latent heat, and the interfacial free energy. These quantities are nearly independent of temperature over the narrow temperature range where homogeneous nucleation is observed in the experiments (Kashchiev, 2000; Sear, 2007; Koop et al., 2000). Thus the parameter $B$ should be nearly temperature independent, while the barrier $\Delta G^*/k_B T$ is a strong function of temperature because of the $1/\Delta T^2$ factor. The prefactor is related to both the frequency at which water molecules at the ice–water interface attach to the critical nucleus and the number of ice molecules that must attach to surmount the barrier. The prefactor is proportional to the self-diffusivity of water, and therefore it depends on temperature. However, over the small range of nucleation temperatures in this study (ca. 2 K), we assume that the prefactor is temperature independent. Note that both parameters, $A$ and $B$, are assumed to be temperature-independent constants over the narrow range of ice nucleation temperatures. However, they may both differ from the values that would be theoretically estimated using the properties of ice and water at $T_m$. The value of the interfacial free energy calculated from the fitted values of $B$ is 33 mN m^−1 at 235.5 K, higher than the interfacial free energy reported by Ickes et al. (2015) (i.e., 29.0 mN m^−1). We show in Sect. 9.2 that by assuming that $A$ is independent of temperature, we transfer its temperature dependence to the effective nucleation barrier. Using Eq. (6) in Eq. (4), the survival probability becomes

$$P(\delta_T|V) = \exp\left[-\left(\frac{A V T_m}{R}\right)\int_0^{\delta_T} \exp\left(\frac{-B}{(1-\delta_T')\,\delta_T'^2}\right)\mathrm{d}\delta_T'\right]. \quad (7)$$

To our knowledge, Eq. (7) has not been used in previous studies of ice nucleation. It isolates the parameter $B$, a property of the nucleation kinetics, from the dimensionless group $A V T_m/R$. The latter depends not only on intrinsic properties of ice and water ($A$ and $T_m$) but also on $V$ and $R$, which may be experimental choices or cloud conditions. Equation (7) is valid for water droplets of a single volume $V$. In most experiments there is a distribution of volumes, which leads to a distribution of droplet nucleation rates. We consider a distribution of droplet sizes in Sect. 5, but first we demonstrate that the model can predict the effect of droplet volume for narrowly size-selected droplets.
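To make the separation of factors in Eq. (7) concrete, the following minimal Python sketch (an illustration, not the released AINTBAD or IPA codes) integrates Eq. (7) numerically for two droplet diameters; the diameters, grid, and cooling rate are choices made here for illustration, and $A$ and $B$ are the global-fit values reported later in Sect. 4.

```python
# A minimal sketch (not the released AINTBAD/IPA codes): integrate Eq. (7)
# numerically and locate T50, the temperature at which half of the droplets
# remain liquid, for two droplet diameters.
import numpy as np
from scipy.integrate import cumulative_trapezoid

A, B = 2.79e46, 1.45               # cm^-3 s^-1, dimensionless (Sect. 4)
T_m, R = 273.15, 1.0 / 60.0        # K, K s^-1 (a 1 K min^-1 cooling rate)

delta = np.linspace(1e-6, 0.16, 8000)          # dimensionless supercooling grid
f = np.exp(-B / ((1.0 - delta) * delta**2))    # integrand of Eq. (7)
I = cumulative_trapezoid(f, delta, initial=0.0)

for d_um in (5.0, 15.0):                       # droplet diameters in um
    V = np.pi * (d_um * 1.0e-4)**3 / 6.0       # sphere volume in cm^3
    P = np.exp(-(A * V * T_m / R) * I)         # survival probability, Eq. (7)
    T50 = T_m * (1.0 - delta[np.argmin(np.abs(P - 0.5))])
    print(f"d = {d_um:4.1f} um -> T50 = {T50:.2f} K")
# The larger droplets reach 50 % frozen at a warmer temperature, as in Fig. 3a.
```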
Across the range of nucleation temperatures observed in experiments (Atkinson et al., 2016; Shardt et al., 2022) on homogeneous ice nucleation (234–238 K), the factor $(1-\delta_T)$ in the rate expression is always near 0.9. Hence, the nucleation rate expression is approximately $J = A\exp(-B'/\delta_T^2)$, where $B' = B/(1-\delta_T) \approx 1.1B$. With this approximation, we have an analytical solution for the survival probability:

$$\ln P(\delta_T|V) \approx \left[\frac{A' V T_m}{R}\right]\delta_T \left(\frac{\sqrt{\pi B'}}{\delta_T}\,\operatorname{erfc}\left[\frac{\sqrt{B'}}{\delta_T}\right] - \exp\left[\frac{-B'}{\delta_T^2}\right]\right), \quad (8)$$

where "erfc" denotes the complementary error function. To illustrate the use of Eqs. (7) and (8), we analyze one of the survival probability data sets (droplet diameters of 3.8–6.2 µm) from Atkinson et al. (2016). Optimized fits of the analytical solution (Eq. 8, with $A' = 1.76\times10^{39}$ cm^−3 s^−1 and $B' = 1.3578$) and of the numerical integration (Eq. 7, with $A = 8.68\times10^{41}$ cm^−3 s^−1 and $B = 1.2722$) are shown in Fig. 1. Even though the fits show excellent agreement for both the analytical and the numerical approach, we note that a 10% error in the exponent (from approximating $1-\delta_T \approx 1.0$) leads to a nearly 1000-fold error in $A'$ and a 10% error in $B'$ relative to $A$ and $B$. We conclude that precise $A$ and $B$ values require careful treatment of even weak temperature dependencies within $J$. Although the prefactors and barriers differ, the predicted nucleation rates do not. For example, at 234.9 K both approaches give an estimate of the nucleation rate of $1.44\times10^9$ cm^−3 s^−1. The noisy estimates of $J$ in Fig. 1b were obtained by finite differences of the cumulative survival probability data. For the finite-difference procedure, large numbers of droplets are needed to obtain an estimate of $J$ from the incremental nucleation events in each $\Delta T$ interval. As seen in Fig. 1b, there is considerable noise in the $J$ estimates even in an experiment with hundreds of droplets. Our data analysis approach directly fits a model to the cumulative fraction of frozen droplets. It should therefore remain accurate for data sets with smaller numbers of droplets.
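As a quick consistency check of the two fits quoted above (a sketch, not the AINTBAD code), the two rate expressions can be evaluated directly; small differences from the $1.44\times10^9$ cm^−3 s^−1 quoted in the text reflect rounding of the fitted parameters.

```python
# Evaluate the two fitted rate expressions at 234.9 K: the full form used
# with Eq. (7) and the approximate form J = A' exp(-B'/delta^2) behind the
# closed-form Eq. (8). Parameter values are the fits quoted above.
import numpy as np

T, T_m = 234.9, 273.15
delta = (T_m - T) / T_m

J_full = 8.68e41 * np.exp(-1.2722 / ((1.0 - delta) * delta**2))  # cm^-3 s^-1
J_approx = 1.76e39 * np.exp(-1.3578 / delta**2)                  # cm^-3 s^-1
print(f"{J_full:.3g}  {J_approx:.3g}")   # both ~1.5e9 cm^-3 s^-1
```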
3 A computer code for the analysis of drop-freezing experiments

We implemented the numerical integration of Eq. (7) and the analytical model of Eq. (8) in a Python code to estimate $A$, $B$, and $J$ from experimental drop-freezing data. The code outputs the parameters $A$ and $B$ of Eq. (7). These are used to compute the nucleation barriers $\Delta G$, the temperature corresponding to 50% frozen droplets, $T_{50}$, and the homogeneous-nucleation rate evaluated at $T_{50}$ using $J_\mathrm{hom}^\mathrm{model}(\delta_T) = A\exp(-B/[(1-\delta_T)\delta_T^2])$. The AINTBAD (Analysis of Ice nucleation Temperature for B and A Determination) code is illustrated in Fig. 2. The code is available at https://github.com/Molinero-Group/volume-dispersion (Addula et al., 2024). We use the minimize function from the scipy.optimize module in Python to optimize the difference between the target survival probability and the predicted one by adjusting the parameters $A$ and $B$. The chosen optimization method is the Nelder–Mead algorithm, which is suitable for functions without explicit derivatives. Optional settings include a convergence tolerance of $10^{-4}$ and a maximum iteration limit of 1000.

4 Analysis of the nucleation spectrum in monodispersed droplets

Atkinson et al. (2016) monitored the freezing temperatures of narrowly size-selected droplets cooled to temperatures near 235 K at a steady rate of 1.0 K min^−1. A total of 581 droplets were used in the diameter range of 3.8–18.8 µm, with an average of 96 droplets for each diameter. The range of droplet diameters in each experiment and the fraction of droplets that remain liquid at each temperature are shown as data points in Fig. 3a. We have analyzed the data of Atkinson et al. (2016) in two ways. First, we separately fitted the data for each diameter range to Eq. (7). Because the range of diameters of each size-selected group is narrow, we have assumed that all droplets in each size range are spheres with the mean diameter for that range. These fits (not shown) result in independent estimates of the optimized nucleation prefactor $A$ and barrier parameter $B$ for each of the six experiments. Table 1 shows the range of droplet diameters in each experiment, the independent $\log_{10}A$ and $B$ estimates, the predicted free-energy barrier $\beta\Delta G = B/[(1-\delta_T)\delta_T^2]$ at 235.5 K, and the predicted nucleation rate (from $J = A\exp[-B/((1-\delta_T)\delta_T^2)]$) at 235.5 K. The separate $A$ and $B$ estimates vary considerably, but they are highly correlated with each other. Figure 3b shows $B$ vs. $\log_{10}A$ for each of the independent estimates. When $B$ is small (large), $A$ is also small (large). The estimated parameters compensate for errors in each other, such that all six data sets yield models that predict consistent nucleation rates. The predicted nucleation rates are shown in Table 1 for a temperature of 235.5 K. The measurements of Atkinson et al. (2016) were all made in the same way, so the same fundamental nucleation rate expression should describe all six size-selected data sets. Accordingly, we reanalyzed the data of Atkinson et al. (2016) with one global rate expression, $J = A\exp[-B/((1-\delta_T)\delta_T^2)]$, keeping the same $A$ and $B$ values across all six data sets. The nucleation rate parameters obtained from the global fit are $A = 2.79\times10^{46}$ cm^−3 s^−1 and $B = 1.45$. Figure 3a shows the experimental data for the different droplet diameters along with the model predictions from the global fit. We emphasize that these are six curves, accurately fitted with just two free parameters, and that both parameters have a clear physical and theoretical interpretation.
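To make the fitting step of Sect. 3 concrete, here is a minimal sketch of the strategy (assumed for illustration; the released AINTBAD code may differ in its details): synthetic survival data are generated from Eq. (7) with the global-fit parameters, and $\log_{10}A$ and $B$ are then recovered by Nelder–Mead minimization of the squared residuals. The droplet size, grids, and starting point are illustrative choices, not values from the paper.

```python
# Sketch of an AINTBAD-style fit: recover log10(A) and B from a synthetic
# survival curve by Nelder-Mead least squares.
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import minimize

T_m, R = 273.15, 1.0 / 60.0           # K, K s^-1 (1 K min^-1)
V = np.pi * (10.0e-4)**3 / 6.0        # cm^3, a 10 um droplet
grid = np.linspace(1e-6, 0.16, 6000)  # delta_T grid for the integral

def survival(delta_obs, log10A, B):
    f = np.exp(-B / ((1.0 - grid) * grid**2))
    I = cumulative_trapezoid(f, grid, initial=0.0)
    return np.exp(-(10.0**log10A * V * T_m / R) * np.interp(delta_obs, grid, I))

delta_obs = (T_m - np.linspace(237.5, 233.5, 80)) / T_m
P_obs = survival(delta_obs, np.log10(2.79e46), 1.45)   # synthetic "data"

loss = lambda p: np.sum((survival(delta_obs, p[0], p[1]) - P_obs) ** 2)
fit = minimize(loss, x0=np.array([44.0, 1.30]), method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-14, "maxiter": 5000})
print(fit.x)   # should come back close to [46.45, 1.45]
```

Because $A$ and $B$ compensate for each other along a narrow valley of the loss surface (the correlation shown in Fig. 3b), the simplex can converge slowly; fitting $\log_{10}A$ rather than $A$ keeps the search numerically well behaved.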
However, we note that the theoretical relationship between $B$ and $\beta\Delta G$ reflects only the reversible work to create a nucleus at equilibrium, whereas the parameter $B$ obtained from experimental data also reflects activation-energy contributions from the prefactor. See Sect. 9.2 for more explanation of this point. Indeed, using the expression $B = 16\pi\gamma^3 v_0^2/(3\lambda_f^2 k_B T_m)$ and the global-fit value $B = 1.45$ from Table 1 results in $\gamma = 33$ mJ m^−2 at 235.5 K, which is above the median value of the ice–liquid surface tension in the literature (Ickes et al., 2015). At a temperature of 235.5 K, the global fit yields a prediction of $J = 10^8$ cm^−3 s^−1 for the nucleation rate, which is consistent with the predictions from the independent fits. The free-energy barrier at 235.5 K from the global fit is 88.5 $k_B T$. This is again similar to the values obtained from fits to the individual size-selected data sets (Table 1). Although the rate predictions show remarkable internal consistency, the inferred barriers are scattered and larger than barriers inferred from other data sets (Murray et al., 2010; Shardt et al., 2022; Riechers et al., 2013). The discrepancy may be a consequence of theoretically unaccounted-for temperature dependencies within the prefactor. Note that the data sets in cyan and pink in Fig. 3a actually cross over each other. The crossover indicates that small droplets are nucleating at warmer temperatures than the larger droplets, which should not occur according to nucleation theory. These two anomalous curves correspond to the two most extreme estimates of $A$ and $B$ (upper right and lower left in Fig. 3b). Thus the scatter in the $A$ and $B$ parameters seems to be a true reflection of experiment-to-experiment variation. Section 5 explores how size dispersity, i.e., the distribution of droplet sizes around the mean diameter, influences the inferred rate parameters. Section 6 examines whether diameter dispersity within the narrow, but non-zero, diameter ranges of Atkinson et al. (2016) may still affect the inferred rate parameters.

5 Droplets with a distribution of volumes

Experiments that report on droplet diameter dispersity (Murray et al., 2010; Shardt et al., 2022; Ando et al., 2018) consistently report a broader range of diameters than the droplets of Atkinson et al. (2016). This section develops a superposition formula to predict the survival probability for experiments with a broad distribution of droplet diameters. We use the term superposition for a data analysis that retains the stratification in freezing temperature but otherwise pools droplets together regardless of their size. As seen from Fig. 3 and as predicted by Eq. (7), large droplets in a broad distribution will nucleate early (at milder supercooling), while small droplets will survive to deeper supercooling. If all temperature dependence comes from the free-energy barrier $B/[(1-\delta_T)\delta_T^2]$, then large droplets that nucleate at milder supercooling will also nucleate with higher free-energy barriers. The steep sigmoidal survival probabilities for droplets of a specific size, when superimposed, result in a more gradual sigmoid. The gradual sigmoid looks deceptively like the theoretical prediction of Eq. (7), but with artificially reduced barrier $B$ and prefactor $A$ parameters.
The analysis here shows how a distribution of droplet diameters broadens the nucleation spectrum, decreasing the inferred nucleation rate barrier. The joint survival probability distribution over the volume and temperature variables is given by

$$P(V, \delta_T) = \rho(V)\, P(\delta_T|V), \quad (9)$$

where $\rho(V)$ is the normalized distribution of droplet volumes and $P(\delta_T|V)$ is the survival probability for droplets of a specific volume, i.e., Eq. (7). The survival probability as a function of temperature is obtained by integrating the joint distribution over the droplet volumes:

$$P(\delta_T) = \int_0^{\infty} P(V, \delta_T)\, \mathrm{d}V. \quad (10)$$

Here we provide an example calculation to illustrate the effects of a broad droplet volume distribution. Let the normalized (gamma-type) distribution of droplet sizes be $\rho(V) = 8 V_0^{-2} V \exp(-2V/V_0)$, where $V_0$ is the mean volume of the entire range of droplets. Let the survival probability for droplets of any specific diameter be given by $P(\delta_T|V)$ in Eq. (7), with the global-fit values of $A$ and $B$ reported in Table 1. The survival probability for the distribution of droplet volumes can then be obtained using Eqs. (9) and (10). We set $V_0 = 1057.1$ µm^3 in $\rho(V)$ to obtain a distribution with droplet diameters between 3.0 and 20 µm. Note that this model volume distribution spans the range of sizes in the experiments of Atkinson et al. (2016). Figure 4 shows the gradually decreasing survival probability of the superposition as a continuous black curve, with the more steeply changing diameter-selected $P(\delta_T|V)$ data in the background. If we had been unaware of the droplet polydispersity, or had not accounted for it, we might have interpreted the black curve in Fig. 4 using a survival probability analysis with nucleation theory for droplets of the mean diameter $d$, whose volume is computed assuming spherical droplets, i.e., $V_0 = \pi d^3/6$. Thus variations in diameter directly translate into changes in volume. To illustrate how droplet diameter dispersity influences the inferred nucleation rate parameters, we re-optimized $A$ and $B$ to minimize the residuals between the dispersity superposition result $P(\delta_T)$ and the naive specific-volume model $P(\delta_T|V_0)$. The resulting $A$ and $B$ values are $6.81\times10^{31}$ cm^−3 s^−1 and 0.91, respectively. The inferred prefactor ($A_\mathrm{apparent}$) is 15 orders of magnitude smaller than that from the global fit of the sets with narrow volume distributions, and the inferred barrier parameter ($B_\mathrm{apparent}$) has been reduced by nearly 40%. Moreover, the inferred free-energy barrier at 235.5 K is now estimated to be $\beta\Delta G = 55.7\,k_B T$, relative to a value of 88.5 $k_B T$ based on the global-fit values for the diameter-selected droplet data. This calculation illustrates how a failure to account for diameter dispersity causes a spurious broadening of the nucleation spectrum and a reduction in the inferred prefactor $A$, barrier parameter $B$, and free-energy barriers. Once the variation in droplet diameters is known, the resulting survival probabilities can easily be computed with the Python code presented in Sect. 9 and available on GitHub (https://github.com/Molinero-Group/volume-dispersion, Addula et al., 2024). Here, variation refers to the overall differences in droplet sizes within a sample, encompassing any deviation from the average droplet diameter. The inputs needed for the program are the proposed distribution of droplet diameters (Gaussian, uniform, gamma, etc.) and the variation of the nucleation rate with temperature (see Sect. 9.1). The output of the code is the effective survival probability.
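A sketch of this superposition (again an illustration, not the released code) reproduces the broadening: the survival curve for the gamma-type volume distribution above is visibly more gradual than the monodisperse curve at the mean volume $V_0$.

```python
# Superpose Eqs. (9)-(10) for rho(V) = 8 V0^-2 V exp(-2 V / V0) and compare
# with the monodisperse survival curve at V0 (Eq. 7). Parameters follow the
# example in the text; grids are illustrative choices.
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

A, B = 2.79e46, 1.45                 # global-fit parameters (Table 1)
T_m, R = 273.15, 1.0 / 60.0          # K, K s^-1
V0 = 1057.1e-12                      # cm^3 (mean volume, 1057.1 um^3)

delta = np.linspace(1e-6, 0.16, 3000)
f = np.exp(-B / ((1.0 - delta) * delta**2))
k = (A * T_m / R) * cumulative_trapezoid(f, delta, initial=0.0)  # exponent per volume

V = np.linspace(1e-3, 10.0, 800) * V0
rho = 8.0 * V / V0**2 * np.exp(-2.0 * V / V0)   # gamma-type density
rho /= trapezoid(rho, V)                        # renormalize numerically

P_mono = np.exp(-k * V0)                                              # Eq. (7)
P_poly = np.array([trapezoid(rho * np.exp(-ki * V), V) for ki in k])  # Eq. (10)

for target in (0.9, 0.5, 0.1):
    Tm_ = T_m * (1 - delta[np.argmin(np.abs(P_mono - target))])
    Tp_ = T_m * (1 - delta[np.argmin(np.abs(P_poly - target))])
    print(f"P = {target}: mono {Tm_:.2f} K, poly {Tp_:.2f} K")
# The 90 %-to-10 % interval is wider for the polydisperse droplets, which a
# naive single-volume fit absorbs into smaller apparent A and B.
```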
6 How narrow should a droplet distribution be to safely assume a single volume?

First, we ask whether the range of droplet diameters in each experiment by Atkinson et al. (2016), each spanning a few micrometers, is already broad enough to adversely impact the inferred nucleation parameters. We have considered two test cases for the analysis: one with the midpoint of each reported diameter range as the diameter of all droplets in that group (shown on the vertical axis of Fig. 5), and a second with a uniform distribution of droplet volumes over the corresponding diameter ranges (shown on the horizontal axis of Fig. 5). If the ranges of sizes are sufficiently narrow relative to the mean, then the $A$ and $B$ parameters resulting from a fit to the superposition become identical to those for monodisperse droplets of the mean size. Figure 6 shows that the relative width of the volume distribution is sufficient to predict the superposition error. Specifically, for less than 1% error in $B$ ($B_\text{apparent}/B_\text{actual} > 0.99$), we must have $\Delta V/V < 0.25$. The parity plots for the two values of $A$ and $B$ are presented in Fig. 5. As all the data points are close to the $x = y$ line, we conclude that the droplet diameter ranges in Atkinson et al. (2016) are sufficiently narrow to ignore diameter dispersion when inferring the nucleation kinetics. Figure 6 shows the ratio between the apparent $B$ parameter from the superposition of survival probabilities of droplets with volumes $\bar{V} \pm \Delta V$ and the true $B$ parameter. The analysis shows that the groups with a 2 µm variation in diameter resulted in the same nucleation rate parameters, with approximately less than 1% variation in the estimated free-energy barriers. Our analysis in Fig. 6 quantifies the effect of the dispersion of droplets in the experiments on the predicted $B$ parameters, assuming the droplets have a uniform distribution. Given $\Delta V/\bar{V}$ and the $B$ value from an analysis that ignores volume dispersity, Fig. 6 can be used to estimate the true value of $B$. The analysis shows that to obtain $B$ within 1% of the correct value, the volume dispersity should be no more than 25% of the mean volume.

7 Effect of the cooling rate on the nucleation parameters

The combined survival probability and nucleation theory expression, Eq. (7), also predicts that the cooling rate will impact the nucleation spectrum. In this section, we analyze data from Shardt et al. (2022), whose experiments were performed at two different cooling rates (0.1 and 1.0 K min^−1) with diameter-selected droplets of 75 and 100 µm. Shardt et al. (2022) report the uncertainty in the droplet diameters to be 5 µm. We model their droplet diameter distribution using a Gaussian distribution with a mean of 75 (or 100) µm and a standard deviation of 5 µm. We have analyzed the survival probability data across the two droplet diameters and two cooling rates with one global fit.
Global fits to the survival probability data across the cooling rates and droplet diameters are shown in Fig. 7. The computed nucleation rate parameters from the global fit are $A = 5.72\times10^{28}$ cm^−3 s^−1 and $B = 0.81$. The predictions of the free-energy barriers across the cooling rates and droplet diameters are presented in Table 2. The predictions of $B$ have a similar order of magnitude but are approximately 25% lower when compared to other estimates (Riechers et al., 2013). We suspect the variation may stem from the difficulty of measuring the precise temperatures of the droplets (Shardt et al., 2022; Tarn et al., 2020; Atkinson et al., 2016). We also note that the computed nucleation rate parameters $A$ and $B$ from Shardt et al. (2022) are lower than those from the study of Atkinson et al. (2016). The difference may be due to the uncertainty in the droplet temperature measurements; however, the two experiments report similar uncertainties in droplet temperatures. Specifically, the experiments by Shardt et al. (2022) indicated an uncertainty in the temperature measurements of ±0.2 K, and the experiments by Atkinson et al. (2016) indicated an uncertainty of ±0.3 K.

8 Comparing homogeneous-nucleation-rate parametrizations

Figure 8 shows a comparison of the homogeneous-nucleation rates using experimental data from Shardt et al. (2022) (blue diamonds) and Atkinson et al. (2016) (green squares). Continuous lines indicate different parametrizations: the fit using the AINTBAD code, $J_\mathrm{hom}^\mathrm{model}(T)$, where $A = 2.79\times10^{46}$ cm^−3 s^−1 and $B = 1.45$ for the temperature range 234.8 to 236.8 K and $A = 5.72\times10^{28}$ cm^−3 s^−1 and $B = 0.81$ for 237.0 to 239.1 K (continuous red lines); the parametrization proposed by fitting multiple experimental data sets, $J_\mathrm{hom}^\mathrm{equation}(T) = \exp[-3.9126\,T + 939.916]$ (Atkinson et al., 2016) (cyan line); and the parametrizations based on classical nucleation theory (CNT) from Qiu et al. (2019) (black line) and from Koop and Murray (2016) (magenta line). Over a small temperature range, $J_\mathrm{hom}^\mathrm{model}(T)$ captures the experimental data points well. The proposed model, $J_\mathrm{hom}^\mathrm{model}(T)$, which works well for micrometer-sized droplets at lower temperatures, may have limitations in accurately capturing the complex nucleation processes occurring in larger droplets at higher temperatures. Thus, $J_\mathrm{hom}^\mathrm{model}(T)$ should be used to predict the homogeneous-nucleation rate only within the temperature range of the input data used to fit the model. All parametrizations predict nucleation rates within an order of magnitude of each other and of the experiments for temperatures between 235 and 240 K. However, there is a small gap in the data near 237 K. Figure 8 suggests a slight disagreement between the experimental rates at temperatures below 237 K and those above 237 K. Models for ice nucleation in cloud droplets require nucleation rates that remain accurate over a broad range of temperatures and droplet diameters. Although it is not possible to discriminate between the models based on the currently available data, physics-based models should help to build parametrizations that are internally consistent and valid over a broad temperature range.
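As a small numerical cross-check (an illustration, not from the paper's codes), the fitted $J_\mathrm{hom}^\mathrm{model}$ and the empirical parametrization of Atkinson et al. (2016) can be compared directly at 235.5 K, where the text quotes a rate of about $10^8$ cm^−3 s^−1.

```python
# Compare two of the parametrizations quoted above at T = 235.5 K.
import numpy as np

T, T_m = 235.5, 273.15
delta = (T_m - T) / T_m

J_model = 2.79e46 * np.exp(-1.45 / ((1.0 - delta) * delta**2))  # AINTBAD global fit
J_empirical = np.exp(-3.9126 * T + 939.916)                     # Atkinson et al. (2016)
print(f"{J_model:.3g}  {J_empirical:.3g}")   # both ~1e8 cm^-3 s^-1
```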
9 A computer code to predict the survival probability for any droplet diameter distribution and cooling rate

We developed a versatile code capable of taking various parametrizations of the homogeneous-nucleation rate $J_\mathrm{hom}(T)$, droplet diameter distributions (Gaussian, gamma, uniform, exponential, etc.), and cooling rates to compute the survival probability, or fraction of frozen droplets. The IPA (Inhomogeneous Poisson Analysis) code is illustrated in Fig. 9. We use the nucleation rate data vs. temperature as the input to compute the survival probability using the following equation:

$$P(\delta_T|V) = \exp\left[-\left(\frac{V T_m}{R}\right)\int_0^{\delta_T} J(\delta_T')\, \mathrm{d}\delta_T'\right]. \quad (11)$$

Equation (11) is a general representation for any given $J$; i.e., it is a version of Eq. (7) that can be used with other nucleation rate models. We evaluate the integral numerically using the trapezoidal rule. Even though Eq. (11) is strictly valid only for a single droplet volume, we can use Eq. (11) in combination with Eq. (10) to account for a distribution of diameters. Our code includes diverse nucleation rate variations with temperature, including the local parametrization $J_\mathrm{hom}^\mathrm{model}(T)$ discussed in the preceding section, the CNT parametrizations from sources such as Qiu et al. (2019) and Koop and Murray (2016), and the empirical parametrization of Atkinson et al. (2016). Additionally, users can integrate any other parametrization into the code. The code is publicly accessible at https://github.com/Molinero-Group/volume-dispersion (Addula et al., 2024).
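The core of this evaluation can be sketched in a few lines (an illustration, not the released IPA code): given any tabulated $J(T)$ — here a placeholder built from the fitted $A$ and $B$ of Sect. 4 — Eq. (11) is evaluated by the trapezoidal rule along the cooling path.

```python
# IPA-style evaluation of Eq. (11) for a tabulated nucleation rate J(T).
# The rate model, droplet size, and cooling rate here are placeholders.
import numpy as np
from scipy.integrate import cumulative_trapezoid

T_m, R = 273.15, 1.0 / 60.0       # K, K s^-1 (1 K min^-1)
V = np.pi * (20.0e-4)**3 / 6.0    # cm^3, a 20 um droplet

T = np.linspace(T_m - 1.0, 230.0, 4000)    # cooling path
delta = (T_m - T) / T_m
J = 2.79e46 * np.exp(-1.45 / ((1.0 - delta) * delta**2))   # placeholder J(T)

I = cumulative_trapezoid(J, delta, initial=0.0)
P = np.exp(-(V * T_m / R) * I)             # survival probability, Eq. (11)
frozen = 1.0 - P                           # fraction of frozen droplets
print(f"T50 = {T[np.argmin(np.abs(P - 0.5))]:.2f} K")
```

To account for polydispersity, the same exponent is reweighted by the droplet volume density and integrated over volumes, as in Eq. (10).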
The ice–liquid surface tension at the melting point was selected to match γ[ice-liq](T[m]) = 31.20 mJ m^−2, which in turn matches J[hom] at T[hom] = 238 K for microliter-sized droplets cooled at 1 K min^−1, following the experimental data of Atkinson et al. (2016) and Riechers et al. (2013). We approximate the temperature dependence of the ice–liquid surface tension γ[ice-liq](T) by Turnbull's relation (Turnbull, 2004), where $\gamma_{\text{ice-liq}}(T)/\gamma_{\text{ice-liq}}(T_{\mathrm{m}}) = \Delta H_{\mathrm{m}}(T)/\Delta H_{\mathrm{m}}(T_{\mathrm{m}})$. This parametrization was previously used in Qiu et al. (2019) to study heterogeneous ice nucleation. Utilizing the CNT parametrization from Qiu et al. (2019) as an input, we integrate it with diverse droplet diameter distributions and cooling rates. The distribution of water droplets in clouds has been examined through the gamma distribution function (Liu et al., 1995; Painemal and Zuidema, 2011; Igel and van den Heever, 2017). In Fig. 10a, we showcase how different gamma diameter distributions, obtained by altering the shape parameter as suggested in Igel and van den Heever (2017), impact droplet diameters. Note that we did not fit a distribution to the data but assumed a distribution based on known properties of droplet sizes in clouds. Additionally, Fig. 10b illustrates the survival probability computed via the IPA code using a fixed cooling rate, q[c] = 1 K min^−1, that is typical in clouds (Shardt et al., 2022). Notably, the inset reveals a correlation between the shape parameter and the freezing temperature. We also include the survival probability for monodisperse droplets, using the most likely diameter from the distributions. Importantly, a broader diameter distribution has a significant impact on the freezing temperatures. Typical rates of cooling in clouds span ∼0.01 to 1 K min^−1 (Stephens, 1978; Kärcher and Seifert, 2016; Shardt et al., 2022). Furthermore, Fig. 10c shows that with each 10-fold increase in the cooling rate, T[50] decreases by approximately 0.5 K. These analyses indicate that an explicit account of the cooling rate and droplet size distribution is important for accurate modeling of cloud microphysical properties.
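The sketch below averages the Eq. (11) survival probabilities over diameters drawn from a gamma distribution, reusing survival_probability and j_hom_empirical from the previous block. It encodes an assumed reading of the Eq. (10) superposition as a number-weighted average; the shape and scale values are illustrative placeholders, not the parameters used in Fig. 10.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def survival_polydisperse(T_grid, j_of_T, cooling_rate_K_min,
                          shape, scale_um, n_nodes=64):
    """Number-weighted average of single-volume survival probabilities
    over a gamma distribution of droplet diameters."""
    # deterministic quantile nodes spanning the diameter distribution
    p = (np.arange(n_nodes) + 0.5) / n_nodes
    diameters = gamma_dist.ppf(p, a=shape, scale=scale_um)
    curves = [survival_probability(T_grid, j_of_T, d, cooling_rate_K_min)
              for d in diameters]
    return np.mean(curves, axis=0)

T = np.linspace(242.0, 232.0, 1001)
P_broad = survival_polydisperse(T, j_hom_empirical, 1.0, shape=2.0, scale_um=10.0)
# compare against monodisperse droplets at the modal diameter,
# (shape - 1) * scale = 10 um for gamma(shape=2, scale=10)
P_mono = survival_probability(T, j_hom_empirical, 10.0, 1.0)

for label, P in (("broad", P_broad), ("mono", P_mono)):
    print(label, f"T50 ~ {T[np.argmin(np.abs(P - 0.5))]:.2f} K")
```

The broad distribution shifts and flattens the survival curve relative to the monodisperse case, which is the size-dispersion effect discussed above.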
9.2 B as obtained from experiments reflects both diffusion and nucleation barriers

Each of the two data sets, those from Atkinson et al. (2016) and Shardt et al. (2022) shown in Figs. 3 and 7, respectively, follows the theoretically predicted trends in the cooling rate and droplet size dependence. When compared to each other, the larger droplets of Shardt et al. (2022), as expected, nucleate at higher temperatures than those of Atkinson et al. (2016). Therefore, there is no discrepancy between theoretical expectations and the directly monitored nucleation temperatures. However, the estimated Gibbs free-energy barriers from the data of Shardt et al. (2022) are smaller than those estimated from the data of Atkinson et al. (2016). If $\Delta G^{*}/k_{\mathrm{B}}T = B/(T^{*} \Delta T^{2})$ with a constant B value, then ΔG^* should be larger for the droplets of Shardt et al. (2022), which nucleate at higher temperatures. We have shown that size dispersion causes an underestimation of the B and A parameters. As detailed in Sect. 10, noise in the temperature measurements can also broaden the distribution of nucleation temperatures, causing a similar underestimation of B, A, and ΔG^*. However, uncertainty in droplet temperatures cannot explain why the ΔG^* values obtained from Shardt et al. (2022) are smaller, as that data set has a lower uncertainty (±0.2 K) than the data set from Atkinson et al. (2016) (±0.3 K). Alternatively, larger nucleation barriers at the lower temperatures (larger supercooling levels) may result from the combined effects of the nucleation barrier (estimated with the AINTBAD code) and diffusion barriers within the prefactor (not yet considered). We analyze this possibility by first predicting the survival probability of liquid droplets upon cooling for a proposed narrow distribution of droplet volumes using the IPA code with a cooling rate of 1 K min^−1 and the $J_{\mathrm{hom}}^{\mathrm{CNT}}(T)$ CNT parametrization from Qiu et al. (2019) (Fig. 11b and c) and then analyzing these synthetic survival probabilities with the AINTBAD code to extract the effective barrier from the B parameter. Figure 11a shows that the values of ΔG at T[50] obtained from the AINTBAD code align closely with the sum of the free-energy barriers for diffusion and homogeneous nucleation of the CNT parametrization of Qiu et al. (2019). This suggests that the high effective barriers for the smaller droplets may originate in a steeply increasing barrier for diffusion. Such an increase is not represented in the parametrization of Qiu et al. (2019) or in Koop and Murray (2016), who model the temperature dependence of the diffusion using the Vogel–Fulcher–Tammann (VFT) equation with T[0] = 148 K, while recent experiments support a steeper decrease in the self-diffusion coefficient D(T) of water as it approaches its maximum in isobaric heat capacity at 229 K (Pathak et al., 2021). We interpret the increase in the steepness of D(T) on approaching the temperature of maximum heat capacity to be possibly responsible for the larger apparent barrier obtained from the AINTBAD fits to the experimental data. Our finding supports the need for a reassessment of the temperature dependence of the prefactor in the parametrizations of the homogeneous ice nucleation rates based on the most current experimental data.

10 Impact of temperature uncertainty on the apparent nucleation barriers

Another important factor that has a significant effect on the measured nucleation spectrum is the measurement of the droplet temperature. Estimates of the droplet temperatures in freezing experiments show large variability (Tarn et al., 2020; Shardt et al., 2022). The highest level of accuracy in the temperature measurements is ±0.2 K (Shardt et al., 2022). In this section, we conduct a sensitivity analysis using our model to quantify the impact of temperature measurement uncertainty on the estimated free-energy barriers. To perform this analysis, we utilized data from frozen 75 µm droplets, as presented in Shardt et al. (2022), which were collected at a cooling rate of 0.1 K min^−1. Through the HUB-backward code from de Almeida Ribeiro et al. (2023), we determined the optimized differential spectrum, denoted n[m](T), based on the frozen fraction (represented by the continuous red line in Fig. 12a). The resulting parameters derived from this analysis were T[mode] = 238.2 K and s = 0.33, where T[mode] represents the most probable freezing temperature within the distribution and s characterizes the distribution's spread. Subsequently, we employed the original distribution (continuous red line in Fig. 12b) to generate random temperature values, augmenting them with random values drawn from a uniform distribution within the range of −0.4 to +0.4 K (or −0.2 to +0.2 K). These additional values introduce noise into the data.
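A sketch of this noise-injection step is shown below. The Gaussian is only a stand-in for the optimized n[m](T), built from the T[mode] and s values quoted above; the actual HUB-backward spectrum need not be Gaussian.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def noisy_survival_curve(n_droplets=100, half_width=0.4,
                         t_mode=238.2, spread=0.33):
    """Sample freezing temperatures and add uniform temperature noise."""
    # stand-in for n_m(T): Gaussian with the quoted mode and spread (assumption)
    T_freeze = rng.normal(t_mode, spread, n_droplets)
    T_freeze += rng.uniform(-half_width, half_width, n_droplets)  # noise
    T_sorted = np.sort(T_freeze)[::-1]       # droplets freeze from high T down
    survival = 1.0 - np.arange(1, n_droplets + 1) / n_droplets
    return T_sorted, survival

for hw in (0.2, 0.4):
    T_s, P = noisy_survival_curve(half_width=hw)
    print(f"+/-{hw} K noise: T50 ~ {T_s[len(T_s) // 2]:.2f} K")
```

Broader noise widens the sampled spectrum, which is what degrades the apparent barrier in the fits described next.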
We sampled a total of 100 temperature values, equivalent to simulating the behavior of 100 droplets in an experimental setup. The resulting differential freezing spectra are illustrated by the blue squares and green triangles in Fig. 12b. For each case, we calculated the survival probability and fitted the data using Eq. (7), resulting in the continuous lines depicted in Fig. 12c. The effects of temperature variation on the nucleation spectrum are summarized in Table 3. We conclude that measurements with ±0.2 and ±0.4 K variations resulted in 8 % and 14 % variations, respectively, in the computed free-energy barriers. Even though the predictions of the free-energy barriers show a strong dependence on the uncertainty in temperature measurements, the nucleation rates are insensitive to it, as shown in Table 3. However, the noise in the predicted rate data increases with the noise in the temperature measurements.

11 Conclusions

Homogeneous ice nucleation is the predominant mechanism of glaciation in cirrus and other high-altitude clouds, making the accurate representation of cloud microphysics highly dependent on the homogeneous-nucleation rates. It has been shown that the predictions of cloud models are sensitive to the rate of cooling and variations in the slope of the nucleation rate with temperature (Herbert et al., 2015). In this study, we demonstrate that the cooling rate and dispersion in droplet diameters can lead to substantial changes in the freezing temperature of droplets, which stresses the need to incorporate these variations into cloud models. Homogeneous-nucleation rates can be obtained from experiments that record the freezing temperature while pure water droplets are gradually cooled to temperatures far below 0 °C. Prior studies have analyzed these experiments using Poisson statistics to infer rates at different temperatures and have then fitted the rate vs. temperature data to empirical or theoretical rate expressions. Here we directly analyze the fraction of frozen droplets vs. temperature data to estimate rate parameters within a stochastic survival probability framework. We implement our approach in a Python code, AINTBAD (Fig. 13), that extracts the prefactor A and barrier B parameters according to a CNT-type rate expression. An advantage of our method is that it does not require a large number of droplets to estimate accurate nucleation rates at each temperature. Although AINTBAD does not directly use nucleation rates in the optimization, it yields accurate estimates of the nucleation rate and avoids the noise associated with numerical differentiation. We applied the AINTBAD code to analyze the homogeneous-nucleation data obtained from two different studies: Atkinson et al. (2016) and Shardt et al. (2022). We first used the new framework to extract parameters and rate expressions from experiments on six groups of diameter-selected droplets, from 5.0 ± 1.2 to 17.5 ± 1.2 µm. The analysis gave similar prefactors, barriers, and rates across all six experiments. We further showed that all six experiments can be fitted with just two parameters from one global parametrization. The results of Shardt et al.
(2022), including four experiments with two droplet sizes and two cooling rates, were similarly fitted with a single pair of A and B values. We derived a superposition formula to predict how the distribution of droplet sizes causes a broadening in the distribution of nucleation temperatures. The broadening causes the AINTBAD analysis to underestimate the B parameter relative to that obtained from monodisperse droplets of the same mean size. For example, pooling droplets with a broad range of diameters from 4 to 19 µm significantly reduces the slope of the fraction of frozen droplets with temperature and leads to a ∼40 % reduction in the inferred barrier. Such an erroneous parametrization would result in large errors in glaciation rates in cloud microphysics models. Accurate parametrizations can be obtained from experiments like those of Atkinson et al. (2016) and Shardt et al. (2022), in which the droplet size distributions are sufficiently narrow to yield parameters that are indistinguishable from those of perfectly monodisperse droplets. Our analysis suggests that the barrier obtained with AINTBAD is the sum of the nucleation and diffusion barriers. As the nucleation barriers decrease monotonically with temperature, we infer that the increase in the overall barrier for nucleation at 235.5 K originates in a steeper temperature dependence of the diffusion coefficient of water, D, which controls the temperature dependence of the prefactor. Our interpretation is consistent with the steep decrease in D(T) unveiled by experiments on water approaching the maximum in the isobaric heat capacity at 229 K (Pathak et al., 2021) and calls for a reassessment of the representation of D(T) in CNT parametrizations of ice nucleation rates. While laboratory experiments strive to study nucleation in the narrowest possible distribution of droplet sizes to avoid spurious impacts on the parametrization of nucleation rates, clouds can have a relatively broad distribution in the size of water droplets. We developed the Python code IPA to predict the nucleation spectrum for any given distribution of droplet diameters at any cooling rate. As input, IPA uses the distribution of droplet diameters and a parametrization of the nucleation rate J(T) from the literature (Fig. 13). IPA includes various previously reported parametrizations and can be extended to use others introduced by users. We have demonstrated the application of IPA in predicting the impact of droplet diameter distributions typical of clouds on the evolution of the fraction of frozen droplets with temperature. By integrating the cooling rate and size dependence into the ice nucleation rates, the results and tools provided in this study could be used to improve and test approximations made in cloud models. We restrict our discussion in this article to homogeneous nucleation, but it might be possible to develop similar methods for the analysis of heterogeneous-nucleation data. A key challenge is that pure water droplets vary only in volume, while heterogeneous-nucleation sites may vary in the surface chemistry, pore geometry, and size (area) of the active region. These differences lead to sites with different barriers and also different prefactors. Except for special cases of highly regular surfaces, the estimated A and B parameters will then reflect a superposition of survival probabilities from many different types of sites. To illustrate this point, we analyze the data of the fraction of ice vs.
temperature for ice nucleation by kaolinite from Zhang and Maeda (2022) using the AINTBAD code. The estimated barrier at T[50] = 267.2 K is approximately 2 k[B]T (Fig. S1 in the Supplement), which is a low value indicative of a superposition of nucleation sites with many different barriers. Further developments are needed to disentangle the contributions of different sites to the heterogeneous nucleation of ice.

Author contributions. RKRA, IdAR, VM, and BP designed the project and prepared the manuscript. RKRA, IdAR, and BP developed the models and performed the analysis.

Competing interests. The contact author has declared that none of the authors has any competing interests.

Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.

Acknowledgements. We thank Geoffrey Poon, Max Flattery, and Conrad Morris for helpful discussions. This work was supported by the Air Force Office of Scientific Research through MURI award no. FA9550-20-1-0351.

Financial support. This research has been supported by the Air Force Office of Scientific Research (MURI award no. FA9550-20-1-0351).

Review statement. This paper was edited by Hinrich Grothe and reviewed by two anonymous referees.

References

Addula, R. K. R., de Almeida Ribeiro, I., Molinero, V., and Peters, B.: AINTBAD and IPA code and input data, Zenodo [code, data set], https://doi.org/10.5281/zenodo.13770437, 2024.
Alpert, P. A. and Knopf, D. A.: Analysis of isothermal and cooling-rate-dependent immersion freezing by a unifying stochastic ice nucleation model, Atmos. Chem. Phys., 16, 2083–2107, https://doi.org/10.5194/acp-16-2083-2016, 2016.
Ando, K., Arakawa, M., and Terasaki, A.: Freezing of micrometer-sized liquid droplets of pure water evaporatively cooled in a vacuum, Phys. Chem. Chem. Phys., 20, 28435–28444, 2018.
Atkinson, J. D., Murray, B. J., and O'Sullivan, D.: Rate of Homogenous Nucleation of Ice in Supercooled Water, J. Phys. Chem. A, 120, 6513–6520, 2016.
Barrett, A. I., Westbrook, C. D., Nicol, J. C., and Stein, T. H. M.: Rapid ice aggregation process revealed through triple-wavelength Doppler spectrum radar analysis, Atmos. Chem. Phys., 19, 5753–5769, https://doi.org/10.5194/acp-19-5753-2019, 2019.
Becker, R. and Döring, W.: Kinetische Behandlung der Keimbildung in übersättigten Dämpfen, Ann. Physik, 416, 719–752, 1935.
Cox, D. and Oakes, D.: Analysis of Survival Data, Chapman & Hall/CRC Monographs on Statistics & Applied Probability, Taylor & Francis, ISBN 9780412244902, https://books.google.com/books?id=Y4pdM2soP4IC (last access: 16 September 2024), 1984.
de Almeida Ribeiro, I., Meister, K., and Molinero, V.: HUB: a method to model and extract the distribution of ice nucleation temperatures from drop-freezing experiments, Atmos. Chem. Phys., 23, 5623–5639, https://doi.org/10.5194/acp-23-5623-2023, 2023.
Deck, L. T., Ochsenbein, D. R., and Mazzotti, M.: Stochastic shelf-scale modeling framework for the freezing stage in freeze-drying processes, Int. J. Pharm., 613, 121276, https://doi.org/10.1016/j.ijpharm.2021.121276, 2022.
DeMott, P. J., Prenni, A. J., Liu, X., Kreidenweis, S. M., Petters, M. D., Twohy, C. H., Richardson, M. S., Eidhammer, T., and Rogers, D. C.: Predicting global atmospheric ice nuclei distributions and their impacts on climate, P. Natl. Acad. Sci. USA, 107, 11217–11222, 2010.
Deville, S.: Ice-Templating: Processing Routes, Architectures, and Microstructures, 171–252, Springer, Cham, https://doi.org/10.1007/978-3-319-50515-2_4, ISBN 978-3-319-50515-2, 2017.
Dobbie, S. and Jonas, P.: Radiative influences on the structure and lifetime of cirrus clouds, Q. J. Roy. Meteor. Soc., 127, 2663–2682, 2001.
Goff, H.: Colloidal aspects of ice cream – A review, Int. Dairy J., 7, 363–373, 1997.
Herbert, R. J., Murray, B. J., Whale, T. F., Dobbie, S. J., and Atkinson, J. D.: Representing time-dependent freezing behaviour in immersion mode ice nucleation, Atmos. Chem. Phys., 14, 8501–8520, https://doi.org/10.5194/acp-14-8501-2014, 2014.
Herbert, R. J., Murray, B. J., Dobbie, S. J., and Koop, T.: Sensitivity of liquid clouds to homogenous freezing parameterizations, Geophys. Res. Lett., 42, 1599–1605, 2015.
Heymsfield, A. J. and Miloshevich, L. M.: Homogeneous ice nucleation and supercooled liquid water in orographic wave clouds, J. Atmos. Sci., 50, 2335–2353, 1993.
Ickes, L., Welti, A., Hoose, C., and Lohmann, U.: Classical nucleation theory of homogeneous freezing of water: thermodynamic and kinetic parameters, Phys. Chem. Chem. Phys., 17, 5514–5537, 2015.
Ickes, L., Welti, A., and Lohmann, U.: Classical nucleation theory of immersion freezing: sensitivity of contact angle schemes to thermodynamic and kinetic parameters, Atmos. Chem. Phys., 17, 1713–1739, https://doi.org/10.5194/acp-17-1713-2017, 2017.
Igel, A. L. and van den Heever, S. C.: The importance of the shape of cloud droplet size distributions in shallow cumulus clouds. Part II: Bulk microphysics simulations, J. Atmos. Sci., 74, 259–273, 2017.
Kärcher, B. and Lohmann, U.: A parameterization of cirrus cloud formation: Homogeneous freezing including effects of aerosol size, J. Geophys. Res.-Atmos., 107, 1–9, https://doi.org/10.1029/2001JD000470, 2002.
Kärcher, B. and Seifert, A.: On homogeneous ice formation in liquid clouds, Q. J. Roy. Meteor. Soc., 142, 1320–1334, 2016.
Kashchiev, D.: Nucleation: basic theory with applications, Butterworth Heinemann, Oxford, ISBN 9780750646826, 2000.
Knopf, D. A. and Alpert, P. A.: Atmospheric ice nucleation, Nat. Rev. Phys., 5, 203–217, 2023.
Knopf, D. A., Alpert, P. A., Zipori, A., Reicher, N., and Rudich, Y.: Stochastic nucleation processes and substrate abundance explain time-dependent freezing in supercooled droplets, NPJ Clim. Atmos. Sci., 3, 1–9, https://doi.org/10.1038/s41612-020-0106-4, 2020.
Koop, T. and Murray, B. J.: A physically constrained classical description of the homogeneous nucleation of ice in water, J. Chem. Phys., 145, 211915, https://doi.org/10.1063/1.4962355, 2016.
Koop, T., Luo, B., Tsias, A., and Peter, T.: Water activity as the determinant for homogeneous ice nucleation in aqueous solutions, Nature, 406, 611–614, 2000.
Kubota, N.: Random distribution active site model for ice nucleation in water droplets, Cryst. Eng. Comm., 21, 3810–3821, 2019.
Laksmono, H., McQueen, T. A., Sellberg, J. A., Loh, N. D., Huang, C., Schlesinger, D., Sierra, R. G., Hampton, C. Y., Nordlund, D., Beye, M., Martin, A. V., Barty, A., Seibert, M. M., Messerschmidt, M., Williams, G. J., Boutet, S., Amann-Winkel, K., Loerting, T., Pettersson, L. G. M., Bogan, M. J., and Nilsson, A.: Anomalous Behavior of the Homogeneous Ice Nucleation Rate in “No-Man's Land”, J. Phys. Chem. Lett., 6, 2826–2832, 2015.
Laval, P., Crombez, A., and Salmon, J. B.: Microfluidic Droplet Method for Nucleation Kinetics Measurements, Langmuir, 25, 1836–1841, 2009.
Li, B. and Sun, D.: Novel methods for rapid freezing and thawing of foods – a review, J. Food Eng., 54, 175–182, 2002.
Liu, Y., Laiguang, Y., Weinong, Y., and Feng, L.: On the size distribution of cloud droplets, Atmos. Res., 35, 201–216, https://doi.org/10.1016/0169-8095(94)00019-A, 1995.
Marcolli, C., Gedamke, S., Peter, T., and Zobrist, B.: Efficiency of immersion mode ice nucleation on surrogates of mineral dust, Atmos. Chem. Phys., 7, 5081–5091, https://doi.org/10.5194/acp-7-5081-2007, 2007.
Möhler, O., DeMott, P. J., Vali, G., and Levin, Z.: Microbiology and atmospheric processes: the role of biological particles in cloud physics, Biogeosciences, 4, 1059–1071, https://doi.org/10.5194/bg-4-1059-2007, 2007.
Morris, G. J., Acton, E., Murray, B. J., and Fonseca, F.: Freezing injury: The special case of the sperm cell, Cryobiology, 64, 71–80, https://doi.org/10.1016/j.cryobiol.2011.12.002, 2012.
Murray, B. J., Broadley, S. L., Wilson, T. W., Bull, S. J., Wills, R. H., Christenson, H. K., and Murray, E. J.: Kinetics of the homogeneous freezing of water, Phys. Chem. Chem. Phys., 12, 10380–10387, 2010.
Painemal, D. and Zuidema, P.: Assessment of MODIS cloud effective radius and optical thickness retrievals over the Southeast Pacific with VOCALS-REx in situ measurements, J. Geophys. Res.-Atmos., 116, D24206, https://doi.org/10.1029/2011JD016155, 2011.
Pathak, H., Späh, A., Esmaeildoost, N., Sellberg, J. A., Kim, K. H., Perakis, F., Amann-Winkel, K., Ladd-Parada, M., Koliyadu, J., Lane, T. J., Yang, C., Lemke, H. T., Oggenfuss, A. R., Johnson, P. J. M., Deng, Y., Zerdane, S., Mankowsky, R., Beaud, P., and Nilsson, A.: Enhancement and maximum in the isobaric specific-heat capacity measurements of deeply supercooled water using ultrafast calorimetry, P. Natl. Acad. Sci. USA, 118, e2018379118, https://doi.org/10.1073/pnas.2018379118, 2021.
Peters, B.: Supersaturation rates and schedules: Nucleation kinetics from isothermal metastable zone widths, J. Cryst. Growth, 317, 79–83, 2011.
Qiu, Y., Odendahl, N., Hudait, A., Mason, R. H., Bertram, A. K., Paesani, F., DeMott, P. J., and Molinero, V.: Ice Nucleation Efficiency of Hydroxylated Organic Surfaces Is Controlled by Their Structural Fluctuations and Mismatch to Ice, J. Am. Chem. Soc., 139, 3052–3064, 2017.
Qiu, Y., Hudait, A., and Molinero, V.: How Size and Aggregation of Ice-Binding Proteins Control Their Ice Nucleation Efficiency, J. Am. Chem. Soc., 141, 7439–7452, 2019.
Riechers, B., Wittbracht, F., Hütten, A., and Koop, T.: The homogeneous ice nucleation rate of water droplets produced in a microfluidic device and the role of temperature uncertainty, Phys. Chem. Chem. Phys., 15, 5873–5887, 2013.
Sear, R. P.: Nucleation: theory and applications to protein solutions and colloidal suspensions, J. Phys. Condens. Matter., 19, 033101, https://doi.org/10.1088/0953-8984/19/3/033101, 2007.
Sear, R. P.: Quantitative studies of crystal nucleation at constant supersaturation: experimental data and models, Cryst. Eng. Comm., 16, 6506–6522, 2014.
Shardt, N., Isenrich, F. N., Waser, B., Marcolli, C., Kanji, Z. A., deMello, A. J., and Lohmann, U.: Homogeneous freezing of water droplets for different volumes and cooling rates, Phys. Chem. Chem. Phys., 24, 28213–28221, 2022.
Shultz, M. J.: Crystal growth in ice and snow, Phys. Today, 71, 34–39, 2018.
Sibley, D. N., Llombart, P., Noya, E. G., Archer, A. J., and MacDowell, L. G.: How ice grows from premelting films and water droplets, Nat. Commun., 12, 1–11, https://doi.org/10.1038/s41467-020-20318-6, 2021.
Spichtinger, P., Marschalik, P., and Baumgartner, M.: Impact of formulations of the homogeneous nucleation rate on ice nucleation events in cirrus, Atmos. Chem. Phys., 23, 2035–2060, https://doi.org/10.5194/acp-23-2035-2023, 2023.
Stan, C. A., Schneider, G. F., Shevkoplyas, S. S., Hashimoto, M., Ibanescu, M., Wiley, B. J., and Whitesides, G. M.: A microfluidic apparatus for the study of ice nucleation in supercooled water drops, Lab Chip, 9, 2293–2305, 2009.
Stephens, G. L.: Radiation Profiles in Extended Water Clouds. II: Parameterization Schemes, J. Atmos. Sci., 35, 2123–2132, https://doi.org/10.1175/1520-0469(1978)035<2123:RPIEWC>2.0.CO;2, 1978.
Stoll, N., Eichler, J., Hörhold, M., Shigeyama, W., and Weikusat, I.: A Review of the Microstructural Location of Impurities in Polar Ice and Their Impacts on Deformation, Front. Earth Sci., 8, 615613, https://doi.org/10.3389/feart.2020.615613, 2021.
Stöckel, P., Weidinger, I. M., Baumgärtel, H., and Leisner, T.: Rates of Homogeneous Ice Nucleation in Levitated H2O and D2O Droplets, J. Phys. Chem. A, 109, 2540–2546, 2005.
Tarn, M. D., Sikora, S. N. F., Porter, G. C. E., Wyld, B. V., Alayof, M., Reicher, N., Harrison, A. D., Rudich, Y., Shim, J. U., and Murray, B. J.: On-chip analysis of atmospheric ice-nucleating particles in continuous flow, Lab Chip, 20, 2889–2910, 2020.
Turnbull, D.: Formation of Crystal Nuclei in Liquid Metals, J. Appl. Phys., 21, 1022–1028, 2004.
Volmer, M. and Weber, A.: Keimbildung in übersättigten Gebilden, Z. Phys. Chem., 119, 277–301, https://doi.org/10.1515/zpch-1926-11927, 1926.
Wright, T. P. and Petters, M. D.: The role of time in heterogeneous freezing nucleation, J. Geophys. Res.-Atmos., 118, 3731–3743, 2013.
Zachariassen, K. E. and Kristiansen, E.: Ice Nucleation and Antinucleation in Nature, Cryobiology, 41, 257–279, 2000.
Zhang, X. and Maeda, N.: Nucleation curves of ice in the presence of nucleation promoters, Chem. Eng. Sci., 262, 118017, https://doi.org/10.1016/j.ces.2022.118017, 2022.
I've bought the Algebrator in an act of despair, and a good choice it was. Now I realize that it's the best Algebra helper that money could buy. Thank you! Anne Mitowski, TX I've been using your system, and it breezed through every problem that couldn't be solved by PAT. I'm really impressed with the user friendly setup, and capabilities of your system. Thanks again! William Marks, OH After downloading the new program this looks a lot easier to use, understand. Thank you so much. Marsha Stonewich, TX
cm to m Converter

Welcome to the Omni cm to m converter, an easy tool to help you convert centimeters to meters! As a bonus, you can use this calculator to convert cm to other units of length and even carry out the length conversion using the centimeter:meter notation! Time is the most precious resource you have, so why waste it performing conversions? Use the cm to m calculator and come along if you want to know how to convert cm to m!

💡 How do I convert cm to m?

There are 100 centimeters in a meter, and converting cm to m or vice versa is quite straightforward; here are the formulas:

1. To convert cm to m, divide your length value by 100.

m = cm / 100

2. To convert m to cm, multiply your length value by 100.

cm = m × 100

Trust the Omni cm to m calculator to do the work for you!

Examples of cm to m conversion

Now that you know how to convert cm to m, let's discuss examples. To convert 350 cm to m, you would divide 350 by 100, which equals 3.5 m. How about converting m to cm? Let's say you purchased a mirror that's 1.55 m tall, and you want to know the length measurement in cm. To convert the value, you need to multiply 1.55 by 100, which equals 155 cm.

How do I convert 50 cm to m?

To convert 50 cm to m:

1. Take the number 50;
2. Divide by 100; and
3. That's all! The result is 0.5 m.

How many centimeters are in a meter?

There are 100 centimeters in a meter, meaning 100 cm = 1 m. The prefix centi- denotes one hundredth, so 1 centimeter is one-hundredth of a meter.
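If you'd rather script the conversion than use the calculator, the two formulas above translate directly into a few lines of Python (shown only as an illustration of the divide-by-100 and multiply-by-100 rules; the function names are ours):

```python
def cm_to_m(length_cm: float) -> float:
    """Convert centimeters to meters (1 m = 100 cm)."""
    return length_cm / 100.0

def m_to_cm(length_m: float) -> float:
    """Convert meters to centimeters."""
    return length_m * 100.0

print(cm_to_m(350))   # 3.5, matching the example above
print(m_to_cm(1.55))  # ~155.0 (floating point may show 155.00000000000003)
```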
Binomial Expansion Calculator

There are many complex calculations in mathematics that are time-consuming and require close attention to solve. In this digital education era, online tools are available for both easy and complex mathematical problems. Binomial problems are complex mathematical problems that require a solid knowledge of the underlying theorem. Fortunately, many online tools are available to help solve problems based on this theorem. The binomial expansion calculator is used to solve mathematical problems such as expansions, series, series extensions, and so on. Before getting into how to use this tool and its features, it is highly recommended to understand the individual terms, such as binomial, expansion, sequences, etc.

Basics of the Binomial Theorem

In algebra, a polynomial having two terms is known as a binomial expression. The two terms are separated by either a plus or a minus sign. The series expansion of the given expression is the subject of the binomial theorem. What is this theorem all about? The theorem is a mathematical formula that provides the expansion of a polynomial with two terms when it is raised to a positive integral power. In other words, this theorem is the technique for expanding an expression that has been raised to any positive integer power. A series expansion calculator is a powerful tool for carrying out such expansions in algebra, probability, and related areas. The formula for the series given by the theorem is:

$(a+b)^{n} = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^{k}$

How to Use the Binomial Expansion Calculator?

Now, let's see the sequence of steps for using this expansion calculator to solve the theorem.

1. First of all, enter a formula in the respective input field.
2. Then, enter the power value in the respective input field.
3. After that, click the button "Expand" to get the extension of the input.
4. You will get the output, represented in a new display window of the expansion calculator.

Properties of Binomial Expansion

The following are the properties of the expansion (a + b)^n used in the binomial series calculator.

1. There are a total of n + 1 terms in the series.
2. In these terms, the first term is a^n and the final term is b^n.
3. When solving the expansion problem with a binomial series calculator, proceeding from the first term to the last, the exponent of a decreases by one from term to term while the exponent of b increases by one. The sum of the exponents in each individual term is n.
4. Moreover, if the coefficient of a term is multiplied by the exponent of a in that term, and the product is divided by the number of that term, you obtain the coefficient of the next term of the expansion.

When you solve an expansion problem or series using a series expansion calculator, if you continue expanding the sequence through higher powers, the coefficients you find form a larger triangular pattern, known as Pascal's triangle. You can find each of the numbers by adding the two numbers above it in the previous row, and this continues up to row n.

Use of Pascal's triangle to solve Binomial Expansion

It is very efficient to solve this kind of mathematical problem using Pascal's triangle calculator. However, some facts should be kept in mind while using the binomial series calculator.
As the power increases, the series expansion becomes lengthy and tedious to calculate by hand. An expression that has been raised to a very large power can, however, be easily computed with the help of the series theorem in the binomial theorem calculator. There are several ways to expand binomials. Pascal's triangle is one of the easiest ways to solve binomial expansion. It is much simpler than applying the theorem's formula directly to expand polynomials with two terms in the binomial theorem calculator. We can understand this with the steps below for the expansion of (x + y)^n, as implemented in Pascal's triangle calculator.

1. Initially, the powers of x start at n and decrease by 1 in each term until they reach 0.
2. After that, the powers of y start at 0 and increase by one until they reach n.
3. Then, the n-th row of Pascal's triangle gives the coefficients of the expanded series when the terms are arranged.
4. After that, the sum of the coefficients in the expansion, before all terms are combined, is equal to 2^n.
5. In the end, there will be n + 1 terms in the expression after combining like terms.

So, the above steps can help solve an example of this expansion. This kind of binomial expansion problem related to Pascal's triangle can be easily solved with Pascal's triangle calculator.

Bottom Line

Using a series expansion calculator, you can easily find the coefficient for a given problem, and it also helps to find the terms of the given series. Pascal's triangle depicts a pattern that allows you to generate the coefficients of the terms in a series formula. Apart from that, to resolve problems involving the coefficients and terms of binomial sequences, a binomial series calculator is useful. You can use a series expansion calculator to solve mathematical problems involving partial fractions, coefficients, series terms, polynomial sequences with two terms, multinomial series, negative sequences, and so on, within a fraction of the time.
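As a quick cross-check of the expansion formula and the 2^n property above, here is a short Python sketch using the standard library's math.comb (the helper names are ours, chosen for illustration):

```python
from math import comb

def expand_binomial(n: int):
    """Return the coefficients of (a + b)^n, i.e. row n of Pascal's triangle."""
    return [comb(n, k) for k in range(n + 1)]

def expansion_terms(n: int):
    """Human-readable terms of (a + b)^n."""
    return [f"{comb(n, k)}*a^{n - k}*b^{k}" for k in range(n + 1)]

print(expand_binomial(5))       # [1, 5, 10, 10, 5, 1]
print(sum(expand_binomial(5)))  # 32 == 2**5, the sum of the coefficients
print(expansion_terms(3))       # ['1*a^3*b^0', '3*a^2*b^1', '3*a^1*b^2', '1*a^0*b^3']
```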
How do you convert displacement from meters to kilometers? (in context of displacement to velocity)

27 Aug 2024

Title: Conversion of Displacement from Meters to Kilometers: A Theoretical Framework for Understanding the Relationship between Displacement and Velocity

Abstract: Displacement is a fundamental concept in physics, often measured in meters (m). However, in many real-world applications, it is more convenient to express displacement in kilometers (km), especially when dealing with large distances. This article provides a theoretical framework for converting displacement from meters to kilometers, with a focus on its implications for understanding the relationship between displacement and velocity.

Introduction: Displacement (d) is defined as the shortest distance between two points in space, often measured in meters (m). However, when dealing with large distances, it is more practical to express displacement in kilometers (km), where 1 km = 1000 m. The conversion of displacement from meters to kilometers is a straightforward process that involves dividing the displacement in meters by 1000.

Formula: d (in km) = d (in m) / 1000

Theoretical Framework: When converting displacement from meters to kilometers, it is essential to understand the relationship between displacement and velocity. Velocity (v) is defined as the rate of change of displacement with respect to time (t), often measured in meters per second (m/s). The formula for velocity is:

v = d / t

When expressing displacement in kilometers, the unit of velocity changes from m/s to km/s. To maintain consistency, we can rewrite the formula for velocity as:

v (in km/s) = d (in km) / t (in s)

Conclusion: Converting displacement from meters to kilometers is a simple process that involves dividing the displacement in meters by 1000. However, it is essential to understand the implications of this conversion on the relationship between displacement and velocity. By recognizing the change in units, we can ensure accurate calculations and a deeper understanding of the underlying physics.

• [1] Halliday, D., Resnick, R., & Walker, J. (2013). Fundamentals of Physics. John Wiley & Sons.
• [2] Serway, R. A., & Jewett, J. W. (2018). Physics for Scientists and Engineers. Cengage Learning.

Note: The references provided are fictional and used only for the purpose of this academic article.
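A short Python sketch of the conversion and the unit bookkeeping described above (the 1500 m and 60 s inputs are made-up example values):

```python
def m_to_km(d_m: float) -> float:
    """Convert displacement from meters to kilometers (1 km = 1000 m)."""
    return d_m / 1000.0

def velocity(displacement: float, time_s: float) -> float:
    """v = d / t; the velocity unit follows the displacement unit."""
    return displacement / time_s

d_m, t_s = 1500.0, 60.0
print(velocity(d_m, t_s))           # 25.0 m/s
print(velocity(m_to_km(d_m), t_s))  # 0.025 km/s -- same speed, different units
```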
Using Venn Diagrams to Represent Two Given Sets

Question Video: Using Venn Diagrams to Represent Two Given Sets (Mathematics)

If X = {8, 6, 2} and Y = {7, 3, 9}, which Venn diagram represents the two sets?

Video Transcript

If X is the set of elements eight, six, and two and Y is the set of elements seven, three, and nine, which Venn diagram represents the two sets?

Let's begin by going through each option and seeing what they entail. For option a, we have the X elements, the Y elements, and this portion is the X and Y elements, so the elements that are in both. It's called the intersection of the sets. For option b, X is the entire box. So any number that is in this box is in the set. And then Y are all of the elements inside of the circle. But notice the circle is inside of the box. Which means all of the elements of Y must be in X. So Y would be a subset of X. For option c, X and Y are represented by the entire circle. Which means they share all of the exact same elements. And then for option d, set X and set Y are totally separate and they don't overlap. Meaning, they don't have any elements in common.

So let's begin by looking at sets X and Y and determine which of a, b, c, or d would be the best Venn diagram to represent the two sets. So X holds the elements eight, six, and two. Y holds the elements seven, three, and nine. So right away, what is their intersection? What do they have in common? Do they have any elements in common? They don't. So their intersection would be the empty set. So that means we can eliminate option a because it says that they share elements two and seven. And they don't. Two is in the set X and seven is in the set Y. But neither is in both. So if we know that they don't have any elements in common, this can actually go pretty quickly. Because option b is saying that all of the elements in Y are actually also in X. And that's not the case. None of Y's elements are actually in X. And then for option c, it says that they have the exact same elements, that they all have elements eight, two, seven, three, six, and nine. And actually, X and Y each only have three elements. So this leaves us with option d. It says that set X should have elements two, which it does, six, and eight. And that's great for X. And then Y should have nine, seven, and three. And none of these are the same. So they shouldn't overlap. So this means the best Venn diagram to represent the two sets would be option d.
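The reasoning in the transcript can be checked with Python's built-in set type (a quick illustration added here, not part of the original question):

```python
X = {8, 6, 2}
Y = {7, 3, 9}

print(X & Y)            # set(): the intersection is empty, ruling out option (a)
print(X <= Y, Y <= X)   # False False: neither is a subset, ruling out option (b)
print(X == Y)           # False: the sets are not equal, ruling out option (c)
print(X.isdisjoint(Y))  # True: disjoint circles, matching option (d)
```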
Xlsread MATLAB Command: Import Data from Excel

This article is a quick tutorial on using the xlsread MATLAB command to import data from an Excel file into MATLAB. Specifically, you'll learn to:

• Extract data from Excel in MATLAB
• Remove irrelevant content from your extract
• Import data from a specific sheet
• Select the Excel file to read
• Extract different formats

How to Extract Data from Excel in MATLAB

Xlsread Basic Syntax

Here's the basic syntax to import data from Excel into MATLAB:

[numbers, text, textAndNumbers] = xlsread(excelFileName)

• numbers: numbers contained in your Excel file.
• text: text contained in your Excel file.
• textAndNumbers: text and numbers contained in your Excel file.

Xlsread MATLAB Example

First, let's look at some details about how this works through an actual example. Let's say I have the following Excel file named "excelFile.xlsx:"

Then, applying the previous basic syntax to read it, we get:

excelFileName = 'excelFile.xlsx';
[numbers, text, textAndNumbers] = xlsread(excelFileName);

So, let's examine the outputs of the xlsread MATLAB command:

As you can see, the three outputs are not of the same data type:

• "numbers" is an array: this means that you can access the content of an element using parentheses: numbers(i)
• "text" is a cell array: this means that you can access the content of a cell using braces: text{i}
• "textAndNumbers" is a cell array: this means that you can access the content of a cell using braces: textAndNumbers{i}

Remove Irrelevant Content from Your Extract

• Notice that for the "numbers" variable, the first and last rows have been removed, and the text has been replaced by "NaN" values. You can get rid of the "NaN" values with the following:

numbers = numbers(~isnan(numbers)); % keep only the entries that are not NaN

The "numbers" variable then becomes:

• The numbers contained in the "text" variable have been replaced by empty cells. You can get rid of the empty cells with the following:

text = text(~cellfun('isempty', text)); % delete empty cells

The "text" variable then becomes:

Import Data from a Specific Sheet

What If the Content Is in Another Sheet?

Let's use the previous Excel file "excelFile.xlsx" with the content on a sheet named "Sheet1" (which is the default name of a sheet when you create an Excel file). Moreover, if you add a sheet to the Excel file, the name of the new sheet will be "Sheet2." So, let's do that and move the first sheet (with the previous content) to the right of the second sheet and save the file:

Then, we'll apply the xlsread MATLAB command as we did previously:

excelFileName = 'excelFile.xlsx';
[numbers, text, textAndNumbers] = xlsread(excelFileName);

By default, xlsread will read the sheet located at the very left of the Excel file, namely "Sheet2." Since "Sheet2" is empty, it makes sense that the extracted content is empty. So, how do you read "Sheet1"?

Finding a Sheet

There are two ways to specify the sheet to read using the xlsread MATLAB command:

1. Using the number of the sheet:

[numbers, text, textAndNumbers] = xlsread(excelFileName, sheetNumber);

The number of the sheet is "2" here because we want to read the second sheet (counting from the left). So, giving the "sheetNumber" variable the value "1" is equivalent to not using the second argument of the xlsread MATLAB command. Here's how to extract the desired sheet:

sheetNumber = 2; % second sheet counting from the left
excelFileName = 'excelFile.xlsx';
[numbers, text, textAndNumbers] = xlsread(excelFileName, sheetNumber);

2.
Using the name of the sheet:

[numbers, text, textAndNumbers] = xlsread(excelFileName, sheetName);

The sheet name in that example is "Sheet1," so you can use it as the second argument if you want to extract the second sheet:

sheetName = 'Sheet1';
excelFileName = 'excelFile.xlsx';
[numbers, text, textAndNumbers] = xlsread(excelFileName, sheetName);

In both cases, if the sheet number or the sheet name you're referring to doesn't exist, you'll get an error.

Specific Problems and Solutions

Ask the User to Select an Excel File

Using the uigetfile MATLAB command, you can ask the user to find and select the desired Excel file to be read:

[fileName, pathName] = uigetfile({'*.xlsx';'*.xls'}, 'Choose an Excel file');

You can then use the "fileName" and the "pathName" (respectively the name of the selected file and its location) to read your Excel file. There are 2 ways of doing that:

• Moving to the file's location:

[fileName, pathName] = uigetfile({'*.xlsx';'*.xls'}, 'Choose an Excel file');
currentFolder = pwd; % save the current location
cd(pathName); % move to the Excel file's location
[numbers, text, textAndNumbers] = xlsread(fileName);
cd(currentFolder); % move back to the initial location

We use the pwd MATLAB command to save the current location, move to the Excel file's location with cd, perform the extraction, and move back to the initial location.

• Specifying the location:

[fileName, pathName] = uigetfile({'*.xlsx';'*.xls'}, 'Choose an Excel file');
fullFileExcelFile = fullfile(pathName, fileName); % create the path to the Excel file
[numbers, text, textAndNumbers] = xlsread(fullFileExcelFile);

fullfile creates the path by adding '/' or '\' between the file name and the path name (you could do it yourself with something like [pathName '/' fileName], but this can be a little bit trickier depending on whether you use a UNIX or Windows platform).

Be Careful About the Format of the Cells You're Extracting

First, if the numbers contained in your Excel file are formatted as strings, you'll get a cell array when extracting. For example, by adding a single quotation mark to the left of every number in the Excel file "excelFile.xlsx" that we've used previously, we are formatting them as strings. And if we import the data from the Excel file now, we get:

Since the numbers have been formatted as strings in the Excel file, there are no numbers anymore: the "numbers" variable is empty, and the "text" and "textAndNumbers" variables have become identical.

The Empty Cell Issue

If you have empty cells in your Excel file before the first row, the xlsread MATLAB command will get rid of them. This is a problem if you want to write the content back to the Excel file (e.g. to modify a value) because you won't be able to know where to write it. Unfortunately, there is no easy way to get that information using xlsread.
If you have this issue, you can refer to this article for a workaround: https://realtechnologytools.com/matlab-row-number/

Key takeaways:

• To read an Excel file, use: [numbers, text, textAndNumbers] = xlsread(excelFileName);
□ numbers: array of the numbers in the file; access an element using numbers(i)
□ text: cell array of the text in the file; access an element using text{i}
□ textAndNumbers: cell array of the text and numbers in the file; access an element using textAndNumbers{i}

• To remove irrelevant values from your extract, use:

numbers = numbers(~isnan(numbers)); % delete numbers that are NaN
text = text(~cellfun('isempty', text)); % delete empty cells

• To import a specific sheet in MATLAB, use:

[numbers, text, textAndNumbers] = xlsread(excelFileName, sheetName); % use the sheet name
[numbers, text, textAndNumbers] = xlsread(excelFileName, sheetNumber); % use the sheet number

• There are 2 ways to ask the user to select an Excel file and read it:

1. By moving to the file's location:

[fileName, pathName] = uigetfile({'*.xlsx';'*.xls'}, 'Choose an Excel file');
currentFolder = pwd; % save the current location
cd(pathName); % move to the Excel file's location
[numbers, text, textAndNumbers] = xlsread(fileName);
cd(currentFolder); % move back to the initial location

2. By specifying the location:

[fileName, pathName] = uigetfile({'*.xlsx';'*.xls'}, 'Choose an Excel file');
fullFileExcelFile = fullfile(pathName, fileName); % create the path to the Excel file
[numbers, text, textAndNumbers] = xlsread(fullFileExcelFile);

• If the numbers in your Excel file are formatted as strings (e.g. using a single quotation mark at the left of a number in the Excel file), then they will be extracted into the "text" variable rather than into the "numbers" variable.

You can find more information about the xlsread MATLAB function in the MathWorks documentation: https://fr.mathworks.com/help/matlab/ref/xlsread.html

🌲 If you want to learn more about the practical tips that helped me stop wasting time doing mindless work, such as working with Excel files and writing reports, I wrote a small reference book about it:

👉 www.amazon.com/dp/B08L7FM1BB
E.E.A. .com

It doesn't have to be difficult if someone just explains it right. For the Euclidean Algorithm, Extended Euclidean Algorithm, and multiplicative inverse.

Before you use this calculator

If you're used to a different notation, the output of the calculator might confuse you at first, even though it is basically the same as the notation you expect. If that happens, don't panic. Just make sure to have a look at the following pages first, and then it will all make sense:

• Euclidean Algorithm, for the basics and the table notation
• Extended Euclidean Algorithm, unless you only want to use this calculator for the basic Euclidean Algorithm
• Modular multiplicative inverse, in case you are interested in calculating the modular multiplicative inverse of a number modulo n using the Extended Euclidean Algorithm

Choose which algorithm you would like to use and enter the input numbers. Below is the output of the Euclidean Algorithm for a = 517 and b = 626639, presented as a table with columns a, b, q, and r (the table rows are generated interactively on the site).

So gcd(517, 626639) = 1.
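If you want to check the calculator's output, or compute the modular multiplicative inverse it supports, here is a compact Python version of the Extended Euclidean Algorithm. This is an independent sketch for verification, not the code behind this site:

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # back-substitute: b*x + (a mod b)*y = g  =>  a*y + b*(x - (a//b)*y) = g
    return g, y, x - (a // b) * y

def mod_inverse(a: int, n: int) -> int:
    """Multiplicative inverse of a modulo n (requires gcd(a, n) = 1)."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % n

print(extended_gcd(517, 626639)[0])  # 1, matching the calculator output above
print(mod_inverse(517, 626639))      # the inverse of 517 modulo 626639
```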
Mastering Excel: Adding Tangent Lines Like a Pro

Table of Contents:

Excel is an incredibly powerful tool that extends beyond basic data entry and calculations. For those looking to enhance their data visualizations, adding tangent lines to charts can provide valuable insights into trends and relationships. In this comprehensive guide, we will explore how to master Excel by adding tangent lines to your graphs, ultimately elevating your data presentation skills. 📊

Understanding Tangent Lines

What are Tangent Lines? 🤔

A tangent line is a straight line that touches a curve at a single point. In the context of Excel charts, it represents the slope of a function at that specific point, helping to visualize the rate of change in your data. Tangent lines are especially useful in analyzing data trends and can help clarify the relationship between variables.

Why Use Tangent Lines?

Utilizing tangent lines can:

• Highlight Trends: 🏹 They make it easier to identify the direction of change.
• Show Slope: 📈 They can illustrate how steep a curve is, providing a visual cue about the behavior of your data.
• Clarify Relationships: 🔍 They help in understanding the correlation between different datasets.

Preparing Your Data

Before you start adding tangent lines, ensure that your data is well-organized in Excel. A typical dataset might look like this:

X Values | Y Values
2 | 3
3 | 5
4 | 7

Tips for Data Preparation

• Use Numeric Data: Ensure both your X and Y values are numeric.
• Organize Data in Columns: Keep your data in separate columns for easy referencing.
• Identify Data Trends: Before plotting, assess whether your data exhibits any trends that would benefit from tangent lines.

Creating Your Chart

Step 1: Insert a Chart 📉

1. Select Your Data: Highlight the range of cells containing your data.
2. Insert Chart: Go to the "Insert" tab, choose "Chart," and select a suitable chart type, such as a scatter plot or line chart.

Step 2: Customize Your Chart

• Add Titles: 🏷 Make sure your chart has a descriptive title.
• Adjust Axes: Set appropriate scales for the X and Y axes.
• Format Data Points: Ensure your data points are distinguishable.

Adding Tangent Lines

Adding tangent lines requires a few calculations. Here's how to do it step-by-step.

Step 1: Calculate the Slope

The slope of a tangent line can be calculated using the derivative of the function that represents your data. For simplicity, we can estimate it using two points.

Example Calculation

1. Choose a Point: Select a point where you want to draw the tangent line. For instance, let's say at (3, 5).
2. Select Nearby Points: Use nearby points to estimate the slope:
□ Point 1: (2, 3)
□ Point 2: (4, 7)
3. Calculate Slope (m):
[ m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{7 - 3}{4 - 2} = 2 ]

Step 2: Create a New Data Series for the Tangent Line

You will need to create a new data series that represents the tangent line. Here's how:

1. Identify the Point: For (3, 5) with a slope of 2.
2. Calculate Y Values Using the Point-Slope Formula:
[ y - y_1 = m(x - x_1) ]
For example, if you choose (x) values of 2, 3, and 4:
□ For (x = 2): [ y - 5 = 2(2 - 3) \implies y = 3 ]
□ For (x = 3): [ y = 5 ]
□ For (x = 4): [ y - 5 = 2(4 - 3) \implies y = 7 ]

X Values | Tangent Y Values
2 | 3
3 | 5
4 | 7

Step 3: Add the New Data Series to Your Chart

1. Right-Click on the Chart: Select "Select Data."
2. Add Series: Click on "Add," and input your new X and Y values for the tangent line.

Step 4: Format the Tangent Line

• Change Line Style: Make your tangent line dashed or in a contrasting color for visibility.
• Add Data Labels: 📋 Consider adding labels to clarify what the tangent line represents.

Final Touches

Review Your Chart

Once you've added the tangent lines, take a moment to review your chart. Ensure everything is clear and visually appealing.

• Check Legends: Ensure the legend accurately represents all data series.
• Review Axes: Make sure that all axis labels and titles are readable.

Save Your Work! 💾

Always remember to save your work periodically. Use "Ctrl + S" on Windows or "Command + S" on Mac.

Important Note: Adding tangent lines is a powerful way to illustrate data trends, but it's crucial to maintain clarity and accuracy. Misleading representations can cause confusion.

By mastering the addition of tangent lines in Excel, you enhance your ability to present data effectively and meaningfully. Whether for reports, presentations, or personal analysis, the skills you develop here can dramatically impact your ability to communicate complex information clearly.
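To double-check the tangent-line arithmetic before typing the values into Excel, you can reproduce Steps 1 and 2 in a few lines of Python (an optional cross-check, not part of the Excel workflow itself):

```python
def finite_difference_slope(p1, p2):
    """Estimate the slope from two nearby points, as in Step 1."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def tangent_points(point, slope, xs):
    """Point-slope form: y = y0 + m * (x - x0), as in Step 2."""
    x0, y0 = point
    return [(x, y0 + slope * (x - x0)) for x in xs]

m = finite_difference_slope((2, 3), (4, 7))   # 2.0
print(tangent_points((3, 5), m, [2, 3, 4]))   # [(2, 3.0), (3, 5.0), (4, 7.0)]
```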
Understanding the Math:rb6-qld747y= pentagon

The pentagon is a fascinating geometric shape that captures the imagination of many. When discussing the Math:rb6-qld747y= pentagon, we delve into its properties, characteristics, and applications. This article will explore the various aspects of pentagons in an easy-to-understand and engaging way.

What is a Pentagon?

A pentagon is a five-sided polygon. Thus, the Math:rb6-qld747y= pentagon is defined by having five edges and five angles. Its simple structure makes it a basic yet essential shape in geometry.

Types of Pentagons

There are two primary types of pentagons: regular and irregular. A regular pentagon has all sides of equal length and all angles of equal measure, specifically 108 degrees. On the other hand, an irregular pentagon has sides and angles of different lengths and measures. Understanding the distinction between these types is essential when examining the Math:rb6-qld747y= pentagon.

Calculating Each Angle in a Regular Pentagon

In a regular pentagon, since all angles are equal, you can find the measure of each interior angle by dividing the total by the number of angles. The interior angles of a pentagon sum to (5 − 2) × 180 = 540 degrees, so each angle measures 540 / 5 = 108 degrees. This consistent angle measure is why the Math:rb6-qld747y= pentagon is visually appealing.

The Area of a Pentagon

Calculating the area of a pentagon can vary based on whether it is regular or irregular. For a regular pentagon, the area is

$A = \frac{1}{4}\sqrt{5(5 + 2\sqrt{5})}\, s^{2}$,

where s is the length of a side. This formula makes it straightforward to find the area of the Math:rb6-qld747y= pentagon when all sides are equal.

Applications of Pentagons

Pentagons are more than just shapes in a math textbook; they also appear in real life. For example, the Math:rb6-qld747y= pentagon can be seen in architecture and design. Some buildings, parks, and structures use pentagonal designs for aesthetic appeal and functionality.

Pentagons in Nature

Interestingly, pentagons also appear in nature. Certain flowers and fruits exhibit pentagonal symmetry. For instance, the starfish has a pentagonal shape, showcasing how the Math:rb6-qld747y= pentagon can be found beyond human-made designs.

The Pentagon in Mathematics Education

In mathematics education, learning about the Math:rb6-qld747y= pentagon helps students understand more complex geometric concepts. By starting with simple shapes like pentagons, students can build a solid foundation in geometry that will serve them well in advanced studies.

Using Pentagons in Art

Artists often incorporate the Math:rb6-qld747y= pentagon in their work. The symmetry and balance of pentagonal shapes can create visually stunning compositions. Whether in paintings, sculptures, or digital art, the pentagon's aesthetic appeal is undeniable.

Exploring Symmetry in Pentagons

Regular pentagons exhibit rotational symmetry. You can rotate the Math:rb6-qld747y= pentagon around its centre, and it will look the same at specific intervals. This property is vital in various fields, including design and architecture, where symmetry is crucial for balance.

The Pentagon's Relationship with Other Shapes

Pentagons can be connected to other geometric shapes. For example, if you connect the midpoints of a pentagon's sides, you can create a smaller pentagon inside. This relationship highlights the versatility of the Math:rb6-qld747y= pentagon in geometry.

Famous Pentagon Shapes

One of the most well-known pentagonal structures is the Pentagon building in Arlington, Virginia. This military headquarters has become a symbol of the US Department of Defense.
The Math:rb6-qld747y= pentagon shape of this building not only serves a practical purpose but also represents the power and stability of the military.

The Pentagram: A Star-Shaped Pentagon
A pentagram, or five-pointed star, is another exciting variation of the Math:rb6-qld747y= pentagon. It consists of a regular pentagon with lines drawn between non-adjacent vertices. The pentagram has been used in various cultures for spiritual and symbolic purposes.

Exploring the Golden Ratio in Pentagons
The regular pentagon is connected to the golden ratio, a mathematical concept often found in nature and art. The ratio of the diagonal to the side length of a regular pentagon equals the golden ratio, approximately 1.618. This relationship adds a layer of beauty and complexity to the Math:rb6-qld747y= pentagon.

Finding Pentagons in Tiling Patterns
Tiling patterns often incorporate pentagons; certain tessellations can use pentagons to create appealing designs. These patterns can be found in flooring, walls, and other architectural elements, showcasing the practical application of the Math:rb6-qld747y= pentagon.

The Role of Technology in Studying Pentagons
Technological advancements have made studying the Math:rb6-qld747y= pentagon more accessible. Software programs can help visualise and manipulate pentagonal shapes, allowing students and enthusiasts to explore their properties interactively.

Engaging Activities with Pentagons
Teachers can use the Math:rb6-qld747y= pentagon to engage students in hands-on learning. Activities might include constructing pentagons with straws, exploring pentagonal shapes in nature, or creating art using pentagons. These activities can help reinforce concepts while making learning fun.

In conclusion, the Math:rb6-qld747y= pentagon is a vital shape in geometry that offers much to explore. From its unique properties to its applications in art, nature, and architecture, the pentagon is more than just a simple polygon. Understanding its characteristics can deepen our appreciation for geometry and its presence in the world around us. Whether you're a student, teacher, or simply a curious mind, the pentagon provides endless opportunities for discovery and creativity. Embrace the beauty and complexity of the Math:rb6-qld747y= pentagon!
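To make the interior-angle and area formulas above concrete, here is a small Python sketch (the function names are illustrative, not from the article):

    import math

    def interior_angle(n: int) -> float:
        # The interior angles of an n-gon sum to (n - 2) * 180 degrees.
        return (n - 2) * 180 / n

    def regular_pentagon_area(s: float) -> float:
        # Closed form for a regular pentagon with side length s.
        return 0.25 * math.sqrt(5 * (5 + 2 * math.sqrt(5))) * s ** 2

    print(interior_angle(5))           # 108.0 degrees, as stated above
    print(regular_pentagon_area(1.0))  # about 1.720 for a unit side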
{"url":"https://rubitechnews.com/mathrb6-qld747y-pentagon/","timestamp":"2024-11-03T04:44:09Z","content_type":"text/html","content_length":"83981","record_id":"<urn:uuid:cec3917a-5118-4f54-bcb6-5ef996a906e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00430.warc.gz"}
CSoI Seminar - Memory Hard Functions, Random Oracles, Graph Pebbling and Extractor Arguments

Tuesday, October 08, 2019, 2:00 PM - 3:30 PM EDT
HAAS 111, Purdue University
Jeremiah Blocki

A secure password hashing algorithm should have the properties that (1) it can be computed quickly (e.g., at most one second) on a personal computer, and (2) it is prohibitively expensive for an attacker to compute the function millions or billions of times to crack the user's password, even if the attacker uses customized hardware. The first property ensures that the password hashing algorithm does not introduce an intolerably long delay for the user during authentication, and the second property ensures that an offline attacker will fail to crack most user passwords. Memory hard functions (MHFs), functions whose computation requires a large amount of memory, are a promising cryptographic primitive to enable the design of a password hashing algorithm achieving both properties.

Graph pebbling is a powerful abstraction which is used to analyze the (in)security of data-independent MHFs (iMHFs) --- a special class of MHFs that provide natural resistance to side-channel attacks. In the parallel random oracle model we can prove that the cumulative memory complexity of an iMHF f_G is directly tied to the pebbling cost of the underlying data-dependency graph G. In this talk we look at the extractor arguments which allow us to establish this fundamental connection between graph pebbling and cumulative memory complexity. Intuitively, we show that if an algorithm A evaluating f_G has small cumulative memory cost (below the pebbling cost of G), then we can derive a contradiction by building an extractor E which uses A to "compress" the random oracle. We will conclude the talk by discussing several open problems, e.g., is it possible to extend the extractor argument to the quantum random oracle model?
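For intuition only, here is a toy Python sketch of how an iMHF f_G is evaluated over its graph (this is not a secure construction, and the hash choice and example graph are made up): each node's label is a hash of its parents' labels, so the memory-access pattern is fixed by the graph alone, independent of the input.

    import hashlib

    def evaluate_imhf(password: bytes, parents, n: int) -> bytes:
        # label(v) = H(password, v, labels of parents(v)); nodes are assumed
        # to be numbered in topological order, so parents have smaller index.
        labels = {}
        for v in range(n):
            h = hashlib.sha256()
            h.update(password)
            h.update(v.to_bytes(4, "big"))
            for u in parents.get(v, ()):
                h.update(labels[u])
            labels[v] = h.digest()
        return labels[n - 1]  # the output is the label of the final node

    # Tiny illustrative graph: a path 0 -> 1 -> 2 -> 3 plus a long-range edge 0 -> 3.
    print(evaluate_imhf(b"hunter2", {1: [0], 2: [1], 3: [2, 0]}, 4).hex())

An honest evaluator must keep many labels in memory at once, which is why the cumulative memory cost of evaluation is tied to the pebbling cost of the graph.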
{"url":"https://www.soihub.org/news-and-events/calendar/calendar-archive/memory-hard-functions-random-oracles-graph-pebbling-and-extractor-arguments.html","timestamp":"2024-11-12T20:08:13Z","content_type":"text/html","content_length":"10454","record_id":"<urn:uuid:0fcc5ef2-427b-407f-a79d-39ab171ea83a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00794.warc.gz"}
Effects of small surface tension in Hele-Shaw multifinger dynamics: An analytical and numerical study

We study the singular effects of vanishingly small surface tension on the dynamics of finger competition in the Saffman-Taylor problem, using the asymptotic techniques described by Tanveer [Philos. Trans. R. Soc. London, Ser. A 343, 155 (1993)] and Siegel and Tanveer [Phys. Rev. Lett. 76, 419 (1996)], as well as direct numerical computation, following the numerical scheme of Hou, Lowengrub, and Shelley [J. Comput. Phys. 114, 312 (1994)]. We demonstrate the dramatic effects of small surface tension on the late time evolution of two-finger configurations with respect to exact (nonsingular) zero-surface-tension solutions. The effect is present even when the relevant zero-surface-tension solution has asymptotic behavior consistent with selection theory. Such singular effects, therefore, cannot be traced back to steady state selection theory, and imply a drastic global change in the structure of phase-space flow. They can be interpreted in the framework of a recently introduced dynamical solvability scenario according to which surface tension unfolds the structurally unstable flow, restoring the hyperbolicity of multifinger fixed points.
{"url":"https://researchwith.njit.edu/en/publications/effects-of-small-surface-tension-in-hele-shaw-multifinger-dynamic","timestamp":"2024-11-02T01:25:10Z","content_type":"text/html","content_length":"51338","record_id":"<urn:uuid:566e7afa-5efd-4ceb-af49-ac7970c5a56a>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00176.warc.gz"}
$\operatorname{ex}(n, F)$ – largest # of edges in an $F$-free graph

$$\operatorname{ex}(n, F) = \left(1 - \frac{1}{\chi - 1}\right){n \choose 2} + o(n^2)$$

Every $K_{2,2}$-free graph with $q^2+q+1$ vertices and $\frac{1}{2}q(q+1)^2$ edges is obtained from a projective plane via a polarity with $q+1$ absolute elements.

$H$ is said to contain an $(s,t)$-grid if there exist $S, T \subset \mathbb{P}^s$ such that $s = |S|$, $t = |T|$, and $S \times T \subset H$.

$H$ is almost-$(s,t)$-grid-free if there are "big" sets $X, Y \subset \mathbb{P}^s$ such that $H \cap (X \times Y)$ is $(s,t)$-grid-free.

Every almost-$(s,t)$-grid-free hypersurface is equivalent, in a suitable sense, to a hypersurface whose degree in $\bar{y}$ is low.

Suppose the degree of $F(\bar{x},\bar{y})$ in $\bar{y}$ is $d$, and $\bar{u}_1, \dots, \bar{u}_s$ are points in $\mathbb{P}^s$.

If $H$ is almost-$(1,t)$-grid-free, then there exists $F(\bar{x}, \bar{y})$ of degree $< t$ in $\bar{y}$ such that $H$ is almost equal to $\{F = 0\}$.

Given a hypersurface $H \subset \mathbb{P}^2 \times \mathbb{P}^2$ and a "big" $X \subset \mathbb{P}^2$:

If $H \cap (X \times \mathbb{P}^2)$ is $(2,2)$-grid-free, then $\exists F(\bar{x},\bar{y})$ of degree $\le 2$ in $\bar{y}$ such that $H$ is almost equal to $\{F = 0\}$.

If $H \cap (X \times Y)$ is $(2,2)$-grid-free, then either we are 😄 or $\mathbb{P}^2 \times \{\bar{v}_i\} \subset H$ for some $i$.

Think of $H \cap (X \times \mathbb{P}^2)$ as a family of algebraic curves in $\mathbb{P}^2$: the hypersurface $H$ is $(2,2)$-grid-free if and only if $C(\bar{u})$ and $C(\bar{u}')$ intersect at $\le 1$ point for distinct $\bar{u}, \bar{u}' \in X$.

For a generic point $\bar{v}$ on an algebraic curve $C$ in $\mathbb{P}^2$, any algebraic curve $C'$ with $\bar{v} \in C'$ intersects with $C$ at another point unless $C$ is irreducible of degree $\le$ …

If $H$ is almost-$(s,t)$-grid-free, then there exists a birational automorphism $\sigma$ such that $H$ is equal, up to $\mathrm{id} \times \sigma$, to a hypersurface of low degree in $\bar{y}$.
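As an added sanity check of the first display (an example not from the slides), take $F = K_3$, so $\chi = 3$:

$$\operatorname{ex}(n, K_3) = \left(1 - \frac{1}{3-1}\right){n \choose 2} + o(n^2) = \frac{1}{2}{n \choose 2} + o(n^2) \approx \frac{n^2}{4},$$

consistent with Mantel's theorem that $\operatorname{ex}(n, K_3) = \lfloor n^2/4 \rfloor$.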
{"url":"https://www.zilin.one/slides/bipartite_algebraic_graphs_without_quadrilaterals/2016-08-23.html","timestamp":"2024-11-12T05:18:10Z","content_type":"text/html","content_length":"8564","record_id":"<urn:uuid:97eb5bea-b87f-45b2-a550-964c4ee8acd6>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00604.warc.gz"}
What is another name for parenthesis? - Answers

Does the starter Pokemon ever talk in mystery dungeon 3? You see some words in parenthesis () and the hero Pokemon's face in that emotion box? Those words in parenthesis are the words the hero is speaking! Example: Chimchar: What's your name? (That's right... my name is...) The words in parenthesis are what the starter/hero Pokemon are saying.
{"url":"https://math.answers.com/math-and-arithmetic/What_is_another_name_for_parenthesis","timestamp":"2024-11-09T13:02:11Z","content_type":"text/html","content_length":"159808","record_id":"<urn:uuid:b4a04696-5813-40b5-9e6a-7c324291e65b>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00185.warc.gz"}
Indian Overseas Bank IOB Fixed Deposit Calculator 5 Years

The Indian Overseas Bank (IOB) fixed deposit calculator helps you calculate the total interest earned and the maturity amount for your deposit principal.

Note: Interest rates were last updated on 24th November 2019 and may vary over time. Please check with your bank for the current FD interest rates.
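As an illustration, here is a minimal Python sketch of the maturity calculation such a calculator performs, assuming quarterly compounding (typical for Indian bank FDs, but confirm with your bank); the 6.5% rate below is only a placeholder, not an actual IOB rate:

    def fd_maturity(principal: float, annual_rate_pct: float, years: float):
        # Quarterly compounding: A = P * (1 + r/4) ** (4 * t)
        r = annual_rate_pct / 100
        amount = principal * (1 + r / 4) ** (4 * years)
        interest = amount - principal
        return round(amount, 2), round(interest, 2)

    # Example: Rs. 1,00,000 for 5 years at a placeholder rate of 6.5% p.a.
    print(fd_maturity(100_000, 6.5, 5))  # roughly (138040.8, 38040.8)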
{"url":"https://www.fdcalculator.app/indian-overseas-bank-iob-fixed-deposit-calculator-5-years/","timestamp":"2024-11-12T13:34:35Z","content_type":"text/html","content_length":"36502","record_id":"<urn:uuid:1056d248-8b2d-486f-9d1b-9b3e38935396>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00362.warc.gz"}
Re: [Edu] Math and Science Fluxx Ideas

>What? Oh, come on that's totally close enough. It's not even a stretch to accept
>square + circle = cylinder. Given the basis for all these other "formulas," if those
>work, this one works too. While I wouldn't call any of them deep mathematical truths,
>they do form a coherent working concept.

I might grant rhombus, grudgingly (granted I would be grudging about circle + triangle for cone). As for cones and cylinders I would suggest:

circle + point + 3D = cone
2 circles + set of parallel lines + 3D = cylinder

I would suggest that you include a point keeper, a line keeper, a set of parallel lines keeper, a 90 degree intersection keeper, and a 3D keeper.

Ok, are the goals 3D objects or 2D shapes, for starters? Going with 3D goals.

Equilateral Triangle
Parallel Lines
90 Degree Intersection

Cylinder = Circle + Parallel Lines + 3D
Cone = Circle + Point + 3D
Box = Rhombus + 90 Degree Intersection
Sphere = Circle + 90 Degree Intersection
Pyramid = Equilateral Triangle + 90 Degree Intersection
Tetrahedron = Equilateral Triangle + 3D
Torus = Circle + 3D

Just as an example.

Fred Poutre
Cloven Fruit Games
{"url":"http://archive.looneylabs.com/mailing-lists/edu/msg00523.html","timestamp":"2024-11-08T10:32:21Z","content_type":"text/html","content_length":"5644","record_id":"<urn:uuid:2206d703-e930-4b53-8e45-70f68846122a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00068.warc.gz"}
Free Unifix Cubes Printables

Ten colorful pictures include a butterfly, jet plane, flower, rocket ship, American flag, house, 'bat signal', superhero shield, and rainbow heart. Use this free printable and some unifix cubes to compare the size of the printed images.

Pattern cards: Bordered with a natural geometric branch design, these cards are simple, engaging and easy to use. Print, trim, laminate, and then place unifix cubes on the circles in a pattern, leaving the last few circles blank. Have the child complete the pattern by placing unifix cubes on the remaining circles, then use markers to recreate the pattern on the row below the unifix cubes. It starts with the simple ABAB pattern and uses unifix cubes to predict the next colors. You might start simple with 2 or 3 colors and increase the difficulty as your child gains mastery. If you don't have unifix cubes, you can simply print out the blank pattern train. These task cards are perfect for math centers and help to build students' mathematical reasoning skills.

Coloring and painting: Show those unifix cubes on your paper by coloring them in to look just like the real cubes. Try square painting with unifix cubes and have fun painting without paintbrushes.

Number composition: Choose a number and ask children different ways to make that number. For example, to make 10, children can use five red cubes and five green cubes, nine red cubes and one yellow cube, or six blue cubes and four yellow cubes. How many different ways can you make the number 6? For 7, use two different colors of unifix cubes to make the number 7.

Measurement and graphing: Unifix cubes measurement cards let children compare lengths, and this free snap cube graphing printable makes an engaging math center that strengthens counting, sorting, and graphing skills all at once. This activity is similar to color sorting.

The worksheets and printables below help build a foundation in learning patterns. Download, print, and laminate. Check back next week for the blank counting cards.
{"url":"https://servesa.sa2020.org/en/free-unifix-cubes-printables.html","timestamp":"2024-11-09T09:33:53Z","content_type":"text/html","content_length":"30859","record_id":"<urn:uuid:ea93a6ec-61db-4497-a4df-bb3a14877f8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00387.warc.gz"}
Disable all Library Links in a Model from MATLAB Automatically

You may want to disable every library link in your model. This could happen if:
• You are using a non-official version of your model that you would like to manipulate more easily
• You want to apply modifications to your entire model or a subsystem of your model programmatically

In this article, you will learn:
• How to find every block of your model
• How to modify its parameters

Find Every Block of your Model

To do this, you can use the following command:

    blocks = find_system(pathName, 'FollowLinks', 'on', 'BlockType', 'SubSystem');

• In this example, we use 'FollowLinks' set to 'on' in order to look inside the library blocks. We use 'SubSystem' because the 'BlockType' of a library block is necessarily a subsystem.
• pathName is the path that you want to start the search from. Most of the time it will be the root of your model, but that is not necessarily the case, so we will define pathName as follows:

    pathName = gcb;

• gcb is merely the path of the block you last clicked on. If you want to apply this to your whole model, you can change gcb to the name of your model.
• The variable blocks is a cell array; each cell contains the path of one of the subsystems of your model.

Disable the Link of a Library Block

To disable the link of a library block, we will use the following command (setting 'LinkStatus' to 'inactive' disables the link, the same as the Disable Link menu action; setting it to 'none' would break the link permanently):

    set_param(blocks{i}, 'LinkStatus', 'inactive');

Then, we can apply this command iteratively to every block of your model as follows:

    for i = 1:length(blocks)
        set_param(blocks{i}, 'LinkStatus', 'inactive');
    end

Since the variable blocks is a cell array, we need to access the content of each cell (not just the cell itself). This is why we use braces instead of parentheses to modify the library link of the desired block.

If you use the following script, you can just click on a subsystem and apply the script. Every block inside the selected subsystem will have its library link disabled:

    pathName = gcb;
    blocks = find_system(pathName, 'FollowLinks', 'on', 'BlockType', 'SubSystem');
    for i = 1:length(blocks)
        set_param(blocks{i}, 'LinkStatus', 'inactive');
    end
{"url":"https://realtechnologytools.com/simulink-library/","timestamp":"2024-11-04T15:18:07Z","content_type":"text/html","content_length":"71186","record_id":"<urn:uuid:babac23f-4600-4e19-9969-e697e19d419f>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00874.warc.gz"}
Lesson Video: The Torque on a Current-Carrying Rectangular Loop of Wire in a Magnetic Field
Physics • Third Year of Secondary School

In this video, we will learn how to calculate the torque on a current-carrying rectangular loop of wire in a uniform magnetic field.

Video Transcript

In this video, we're talking about the torque on a current-carrying rectangular loop of wire in a magnetic field. We're going to learn why a torque acts on such a current-carrying wire, how to calculate its magnitude, and also how to determine something called the magnetic dipole moment for such a current-carrying loop.

We can get started on this topic by thinking at first of just a rectangularly shaped wire. So here's one side, here's a second side, a third side, and then a fourth side. And then we'll say that this dotted line right here represents an axis that goes right through the center of the rectangle. Now what if, perpendicular to this axis, we set up a uniform magnetic field? We can call that field strength B. Even with our rectangular wire in this uniform magnetic field, nothing would happen unless we put current in the wire. If we do this, things would start to change because now we have electrical charge in motion in a magnetic field. Those charges, when they're moving in certain directions relative to the field, will experience a force. And since the charges that make up this electrical current are in the wire, the wire itself will then experience a force.

A bit earlier, we identified the four sides of our rectangular wire. We said that here was side one, here was side two, and then three and then four. As electric charge moves along through these four different straight segments of wire, only two of these sides will experience a force, side one and side three. That comes down to the direction of the moving charge relative to the external magnetic field. On sides one and three, there will be a net magnetic force that acts along that whole segment of wire. And those two forces, the one on side one and the one on side three, act in opposite directions.

Considering that magnetic force as it acts on side one, let's say that it points in this direction, upward. That means that the corresponding force on side three will point the opposite way, downward. Given these two forces acting on opposite sides of our rectangular wire, we can see what's going to happen. These forces combine to create a torque on the wire, which tends to make it rotate around this axis we drew through its center. Now, the symbol to represent this torque is the Greek letter τ. And what we'd like to do is write a mathematical relationship for this torque.

It turns out that in a case like the one we're looking at here, this torque depends on several factors. First, it depends on the strength of the magnetic field that our current-carrying wire is in. The stronger that field is, the more torque the wire experiences. It also depends on the magnitude of the current in the wire. The more current, the more moving charge, and therefore the greater the force sides one and three of our wire will experience and so the greater the torque. Something else the torque depends on is the cross sectional area, we've called it A, of our loop. And along with this, we'll want to consider the possibility that a rectangular coil of wire has more than one turn. The way we've drawn it right here, it has only one. But in general, there could be any number of turns in this coil. There could be some integer value we just call N.

So the torque on this rectangular current-carrying wire depends on B, the magnetic field; the current I; the cross sectional area A; and the number of turns in our coil. And for all four of these factors, the bigger they get, the greater the torque. That tells us that all four will be in the numerator of our equation for torque. Our equation now is almost complete, but there's one other factor we'll want to add here.

Remember, we noted earlier that this coil in this field would experience a torque, and that torque would tend to rotate the wire about this axis. When that happens, it will make the overall orientation of our current-carrying loop change. And this change then leads to a change in the torque it experiences. We can get a better sense for what's going on here by looking at our setup from a different perspective. If we watch the edge of our rectangular coil, as the coil turns, we would see it start out horizontally in the magnetic field, but then turn like this, then like this, and so forth as we continue to rotate clockwise as we're looking at the coil.

Well, the angle between our coil and the external field that it's in affects the torque that the coil experiences. The way we measure that angle is by considering the plane in which the coil lies. We represent that by our cross sectional area A, and we picture a vector that's perpendicular to that plane. To be clear about what that looks like, if our coil looked like this end-on relative to the magnetic field, then that vector perpendicular to the cross sectional area of the rectangular loop would look like this pink one. Once we have this vector, which is normal to the area of our loop or perpendicular to it, then we measure the angle in between this vector and the direction of the external magnetic field. If we call that angle θ, then we can enter, over on our equation for torque, the last factor we need to complete the equation. When θ is the angle between the external magnetic field that our coil is in and the vector that points perpendicular to the area of our coil, then the overall torque that our rectangular coil experiences is B times I times A times N times the sin of θ.

Let's consider now how this factor, sin θ, affects the torque on our coil. Say that our coil of wire was oriented like this with respect to the magnetic field. In this case, the vector perpendicular to the area of the coil would point in this direction. And we can see that this is perpendicular to the external magnetic field. In this case then, θ would be 90 degrees, and we know that the sin of 90 degrees is one. It's the maximum value the sine function attains. Everything else being equal then, our coil will experience a maximum torque due to the magnetic field it's in when it's oriented to that field this way. But then, what about this? What if our coil has rotated to this position? At this point, the vector perpendicular to the area of the coil would point this way. And we can see this points in the same direction as the external field. These two vectors are parallel, and so θ is zero degrees. And then, the sin of zero degrees is zero. So when our coil is oriented this way, it experiences no torque. So if our coil is perpendicular to the field like this, it experiences zero torque. And if it's parallel to the magnetic field like this, θ is 90 degrees and the coil experiences a maximum torque.

Now, let's focus for a moment on this case where our coil experiences that torque maximum. What we can do is write a specific version of our torque equation for this maximum value, we'll call it τ sub m. And the only difference between this equation and our original general equation for torque is that now we're assuming that θ is 90 degrees. So the sin of θ is one. If we consider this maximum torque our coil can experience and the strength of the field that causes that torque, we can identify what's called the magnetic dipole moment of our current-carrying loop. This term, magnetic dipole moment, refers to the tendency of an object to interact with an external magnetic field. So given our external field, we've called that capital B, the more torque our coil experiences, the more it interacts with that field, we can say. And the magnetic dipole moment measures the extent of that interaction.

If we represent magnetic dipole moment symbolically using m sub B, mathematically, it's equal to this ratio, the maximum torque that our current-carrying coil can experience divided by the strength of the field that the coil is in. This shows more clearly what we mean by saying the magnetic dipole moment measures the response of an object to the magnetic field it's in. Given a magnetic field strength B, the more torque our object experiences, the greater its magnetic dipole moment. Just as a side note, because τ sub m down here is equal to B times I times A times N, that means that specifically for a current-carrying rectangular loop of wire, we can also write the magnetic dipole moment as I times A times N. This shows us that if we were to keep everything the same in our scenario but increase the current I, then we would also raise the magnetic dipole moment of our coil. Or, likewise, if we kept everything the same but increased the cross sectional area of our coil or increased the number of turns in it, those are also ways that we could increase the coil's magnetic dipole moment. Knowing all this about the torque on a current-carrying rectangular loop of wire in a magnetic field, let's get some practice now with these ideas.

The diagram shows a rectangular loop of current-carrying wire between the poles of a magnet. The longer sides of the loop are initially parallel to the magnetic field, and the shorter sides of the loop are initially perpendicular to the magnetic field. The loop then rotates through 90 degrees so that all its sides are perpendicular to the magnetic field. Which of the lines on the graph correctly represents the change in the torque acting on the loop as the angle its longest sides make with the magnetic field direction varies from zero degrees to 90 degrees?

Okay, in our scenario, we have a rectangular current-carrying loop of wire that's shown in our diagram here. And our loop, we can see, is positioned between the poles of a permanent magnet, which means it's exposed to a uniform magnetic field. Because a rectangular loop carries current and is in a magnetic field, it experiences a torque about its axis of rotation. This torque leads to the rotation of the coil that we see here in a clockwise direction. We're told that, initially, our coil is oriented like this, where the longer sides (that is, the sides at the front and the back of the coil, we could say) are parallel to the magnetic field and the shorter sides are perpendicular to it. But then, under the influence of torque, our coil rotates 90 degrees so that in this position, all four of the sides of the coil are perpendicular to the magnetic field. And we know that that field points from the north pole of our magnet to the south pole, so left to right as we've drawn it here.

The other part of our diagram is this graph right here. Looking at it carefully, we see that on the vertical axis, the torque acting on a rectangular coil is plotted against the angle of orientation of that coil to the magnetic field. That angle varies from zero degrees, which is the position of our coil relative to the field when it starts out in this horizontal plane, all the way up to 90 degrees, which is the coil's position relative to the field once it's rotated. On this graph, we see these lines of different colors. There's a red curve, a yellow curve, a blue one, and a green one. What we want to do here is identify which of these four curves, which color, correctly represents the change in the torque that acts on a rectangular current-carrying loop as this angle that its longest sides make with the magnetic field direction varies from zero to 90 degrees.

In other words, which of the four lines on our graph correctly shows the torque acting on our coil as it changes from this position here to this position? Notice that that change in position is defined on our graph by a change in this angle called φ. φ goes from zero up to 90 degrees. In our problem statement, we're told that φ represents the angle between the longest sides of our coil, those would be the sides that initially are parallel with the magnetic field and then end up perpendicular to it, and the magnetic field direction itself, which we've seen goes from left to right between the north and south pole of our magnet. So the angle between this line here, which is one of the longest sides of our coil, and the field direction is zero degrees. That corresponds to this point on our graph. And then, after our coil has turned 90 degrees, the angle between the longest side of the coil, which is now here, and our external magnetic field is, we can see, 90 degrees. And that represents this data point here on our curve.

To answer our question then, of which of these four lines correctly represents the torque experienced by our loop as it goes through this rotation, we'll need to know how the torque on our rectangular current-carrying wire varies with the angle φ. To figure that out, we can recall a general mathematical equation for the torque on just such a current-carrying rectangular loop of wire in a magnetic field. That torque τ is equal to the strength of the magnetic field the coil is in times the magnitude of the current in it multiplied by its cross sectional area A. On our diagram, that area would be this area we're showing here, times the number of turns in our rectangular coil, all multiplied by the sin of an angle we'll call θ. Now it's important to note that θ, this angle here, is not equal to the angle φ identified on our graph. The angles are different. But nonetheless, now that we have this equation written out, we see how the torque on a rectangular current-carrying wire in a magnetic field varies with the angular orientation of the coil. And that's really what we need to know to answer our question. So even though the torque τ depends on all these various factors, really we're interested in the fact that it is directly proportional to the sin of the angle we've called θ. And now just what is this angle?

Going back over to our diagram, if we draw a vector that's perpendicular to the cross sectional area of our loop, then θ is the angle between this vector, that we've drawn in blue, and the magnetic field lines. And in this case, it's worth noticing that that angle is 90 degrees. So this is very important. It tells us that when the angle φ is zero degrees, the angle θ is equal to 90 degrees. So it's true that φ and θ are not the same, but this is how they correspond when φ is zero. And if we then consider the orientation of our coil after it's gone through its 90-degree rotation, in this case, a vector drawn perpendicular to the cross sectional area of our coil would look like this. Once again, the angle θ is the angle between this blue vector and the direction of the magnetic field. But now we can see these two vectors are parallel. In other words, the angle between them is zero degrees. And note that this corresponds to the point where φ, the angle between the longest side of our coil and the magnetic field, is 90 degrees. So it's a bit confusing, but when φ is zero degrees, θ is 90 degrees. And then when φ is 90 degrees, θ is zero degrees.

We do all this because once we know what θ is, we can take the sine of that angle. And then we'll know how the torque on our coil will vary over the course of this rotation, specifically, a rotation from θ equals 90 degrees over to θ equals zero degrees. In order to see what happens to the sin of θ when we change θ this way, let's recall the shape of the sine function. If we plot the sin of an angle θ as θ varies from zero up to 360 degrees, we see that the graph reaches its maximum value at an angle of 90 degrees. And then when the angle is zero, back at the origin, the sine of that angle is zero. So as we transition from θ equals 90 to θ equals zero degrees, we're covering this portion of our graph. And this tells us what kind of curve to look for among our four candidates. It should be a line that starts out at its maximum value when θ is equal to 90 degrees and then tails off to zero when θ goes to zero degrees.

Looking at our four curves then, we see that just one of them starts out at the maximum value it attains over this range of angles and then, over the angular change from θ equals 90 to θ equals zero degrees, tails off to zero. That's the red curve on our graph. This is the only line that consistently decreases, whereas the other three options at some point increase in value. And so this is the answer we'll give to our question. It's the red curve that correctly represents the change in torque acting on a loop as it rotates.

Let's take a moment now to summarize what we've learned about the torque on a current-carrying rectangular loop of wire in a magnetic field. In this lesson, we saw that if we have a current-carrying rectangular loop in a uniform magnetic field, then that loop experiences an overall torque equal to the magnetic field strength times the current magnitude in the coil times the cross sectional area of the coil multiplied by the number of turns in the coil loop. And all of this is multiplied by the sin of an angle we've called θ, where θ is the measure of the angle between a vector perpendicular to the cross sectional area of the coil and the magnetic field lines. Along with this, we learned the term "magnetic dipole moment", which indicates how strongly a coil such as this interacts with an external magnetic field. The magnetic dipole moment m sub B is given by the maximum torque experienced by the coil divided by the strength of the field it's in. This is a summary of the torque on a current-carrying rectangular loop of wire in a magnetic field.
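As an added numerical illustration of the lesson's formulas (the values below are made up for the example):

    import math

    def coil_torque(B: float, I: float, A: float, N: int, theta_deg: float) -> float:
        # tau = B * I * A * N * sin(theta), where theta is the angle between
        # the coil's normal vector and the magnetic field direction.
        return B * I * A * N * math.sin(math.radians(theta_deg))

    B, I, A, N = 0.5, 2.0, 0.01, 100        # tesla, amperes, m^2, turns
    tau_max = coil_torque(B, I, A, N, 90)   # maximum torque at theta = 90 degrees
    print(tau_max)                          # 1.0 newton meter
    print(coil_torque(B, I, A, N, 0))       # 0.0: no torque when the normal is parallel to B
    print(tau_max / B)                      # dipole moment m_B = I * A * N = 2.0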
{"url":"https://www.nagwa.com/en/videos/746131604743/","timestamp":"2024-11-14T20:53:06Z","content_type":"text/html","content_length":"289319","record_id":"<urn:uuid:b343adb3-8f98-4759-92e5-9ff4f9f03c32>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00559.warc.gz"}
area of a right trapezoid A right isosceles trapezoid is a trapezoid which is simultaneously a right trapezoid as well as an isosceles trapezoid. Area of a trapezoid = $\left [ \frac{\left ( b_1 + b_2\right )}{2} \right ] \ times h$ where h is the height and b 1 and b 2 are the bases.. We multiply the average of the bases of the trapezoid … Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.. Visit Stack Exchange Area of a trapezoid. Example #4: Find the perimeter of the following trapezoid where the length of the bottom base and the lengths of the nonparallel sides are not known. Check your answer Note: The area for any parallelogram is and the area for all triangles is. Formula to find area of a trapezoid. A Right trapezoid has a pair of right angles. A right trapezoid has a pair of right angles. The #1 tool for creating Demonstrations and anything technical. Customer Voice. Calculate the remaining internal angles. The area of a trapezoid is the space contained within its perimeter. The area, A, of a trapezoid is one-half the product of the sum of its bases and its height. If the legs and base angles of a trapezoid are congruent, it is an isosceles trapezoid. P = 25 cm. Trapezoid classifications. b = base The area of a trapezoid across the diagonals and the angle between them is considered the conditional division of the trapezoid into four triangles, just like the area … Questionnaire. To calculate the area of a trapezoid, pay close attention to the four steps we will follow with a rectangle trapezoid: First, draw a rectangle trapezoid. There are different types of trapezoids: isosceles trapezoid, right trapezoid, scalene trapezoid. The formula for the perimeter of a trapezoid is P = (a + b + c + d). The area of a trapezoid is the space contained within its perimeter. Measure the height and make the necessary metric conversions until all three lengths are in the same units. The formula for Area of a trapezoid: The area can be computed with the help of the following simple steps to arrive at the trapezoid area formula, Then apply the formula above or use our area of a trapezoid calculator online to save time and have a higher chance that the result is error-free (bad input will … This is a trapezoid with two adjacent right angles. Area of Trapezoids | Fractions – Type 2. Given a trapezoid, if we form a congruent trapezoid and rotate it such that the two congruent … Let's assume that you want to calculate the area of a certain trapezoid. Enter the upper base a, lower base b, the height h and click "calculate the area of trapezoid", the area of the trapezoid is calculated from the upper base, the lower base, and the height. https://mathworld.wolfram.com/RightTrapezoid.html. A right trapezoid (also called right-angled trapezoid) has two adjacent right angles. Right Trapezium: Two adjacent internal angles of a trapezium are 90 degrees. Make a copy. Scalene Trapezoids. Area of a Trapezoid = A = $\frac{1}{2}$ $\times$ h $\times$ (a + b) Types of Trapezoids Right Trapezoid A right trapezoid has two right angles. Example #2: Can you find the perimeter for this right trapezoid where ABC is a right triangle? Area formula of a trapezoid The area, A, of a trapezoid is: Do you need to find a Maths tutor? Trapezoids can be classified as scalene or isosceles based on the length of its legs. 
The following trapezoid TRAP looks like an isosceles trapezoid, doesn’t it? A right trapezoid has one right angle (90°) between either base and a leg. A right trapezoid is a four-sided shape with two right angles and two parallel sides. The altitude of the trapezoid is 5 m. Find the area. A right trapezoid is a four-sided shape with two right angles and two parallel sides. Yes, you guessed it right! a = 4 cm. But here's a derivation without calculus, using the fact that the distance from a side of a triangle to the triangle's centroid is $\frac13$ the height of the triangle. A trapezoid, also known as a trapezium, is a 4-sided shape with two parallel bases that are different lengths. Hints help you try the next step on your own. Don’t forget — looks can be deceiving. Calculate the remaining internal angles. Area = ½ h (b1 + b2) Where, h is the height and b 1 and b 2 are the parallel sides of the trapezoid. A trapezoid has bases of lengths a and b, with a distance, h, between them. The perimeter of an isosceles trapezoid is 110 m and the bases are 40 and 30 m. Calculate the area of the trapezoid and the length of the non-parallel sides. Then draw a 90 degree angle at one end of the base, using a protractor. Using just four lines and four interior angles, we … A trapezoid is a 4-sided figure with one pair of parallel sides. A trapezoid is described as a 2-dimensional geometric figure which has four sides and at least one set of opposite sides are parallel. Area of trapezoid The trapezoid is a convex quadrilateral shape. Drum into the heads of students the formula for the area of trapezoids A = (b 1 + b 2) h/2, where b 1 and b 2 are the base lengths and h is the height as they do this set of pdf area of a trapezoid worksheets. The formula for Area of a trapezoid: The area can be computed with the help of the following simple steps to arrive at the trapezoid area formula, Walk through homework problems step-by-step from beginning to end. The right trapezoid with a perimeter of 46 units and greatest area is on earth with height of 12 units, bases of 8 units, and of 13 units. Drum into the heads of students the formula for the area of trapezoids A = (b 1 + b 2) h/2, where b 1 and b 2 are the base lengths and h is the height as they do this set of pdf area of a trapezoid worksheets. The parallel sides are called the bases, while the other sides are called the legs. Isosceles Trapezoids. Right Trapezoid; If the legs of the trapezoid are equal in length, then it is called an isosceles trapezoid. We do not need to know anything about the length of the legs or the angles of the vertices to find area. Similarly, as γ + δ = 180°, δ = 180° - … As with all quadrilaterals, a trapezoid has four sides. The area of a trapezoid can be calculated by taking the average of the two bases and multiplying it with the altitude. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. Turn the copy 180º. Source: en.wikipedia.org. First, draw the long base. Formula to find area of a trapezoid. FAQ. Right trapezoids are used in the trapezoidal rule for estimating areas under a curve. Area of a trapezoid. According to the trapezoid area formula, the area of a trapezoid is equal to half the product of the height and the sum of the two bases. Solution for find the area of a right trapezoid whose parallel sides measures 8cm and 11 cm with one of the interior angles given as 60 degrees. Derivation. 
Isosceles Trapezoid An isosceles trapezoid has two of its non-parallel sides equal in length. A right trapezoid with top side 5 feet and height 5 feet. This is a trapezoid with two adjacent right angles. What is the area of the doghouse? The area of the trapezoid is 60 cm 2. Side lengths, diagonals and perimeter have the same unit (e.g. Trapezoids can be classified as scalene or isosceles based on the length of its legs. If you are wondering how to find the area of trapezoids, you’re in the right place – this area of a trapezoid calculator is a tool for you. You can use the right-triangle trick to find the area of a trapezoid. Explore anything with the first computational knowledge engine. As α + β = 180°, β = 180° - 30 ° = 150°. Customer Voice. Join them and you will get a rectangle whose base is the sum of the two bases. Calculate the midline of a right trapezoid if you know 1. bases 2. base, height and angles at the base 3. diagonals, height and angle between the diagonals 4. height and area of a trapezoid Scroll down to find out more about the trapezoid area formula or give our calculator a try. Circle. P = 25 cm. This example is a little tricky! An acute trapezoid has two adjacent acute angles on its longer base edge, while an obtuse trapezoid has one acute and one obtuse angle on each base. The area of a polygon is the number of square units inside that polygon. What is a trapezoid? In the applet above, click on \"freeze dimensions\". Right trapezoids are used in the trapezoidal rule for estimating areas under a curve.. An acute trapezoid has two adjacent acute angles on its longer base edge, while an obtuse trapezoid has one acute and one obtuse angle on each base.. An isosceles trapezoid is a trapezoid where the base … The area, A, of a trapezoid is: where h is the height and b 1 and b 2 are the base lengths. Let us consider a trapezoid and let b1,b2 and h be the bases and height of the trapezoid. Area formula of a trapezoid equals Area = 1/2 (b1+b2) h h = height. Unlimited random practice problems and answers with built-in Step-by-step solutions. Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social sciences, and more. You should be thinking, right triangles, right triangles, right triangles. Calculations at a right trapezoid (or right trapezium). Source: en.wikipedia.org. To find the perimeter of a trapezoid, just add the lengths of all four sides together. To find the area of any trapezoid, start by labeling its bases and altitude. This calculator calculates the surface area of a trapezoidal prism using sides of trapezoid, altitude, height values.. Watch this video to learn how to draw a right trapezoid using given specifications for lengths of sides. The calculation essentially relies on the fact a trapezoid's area can be equated to that of a rectangle: (base 1 + base 2) / 2 is actually the width of a rectangle with an equivalent area. A right trapezoid (also called right-angled trapezoid) has two adjacent right angles. Then draw a 90 degree angle at one end of the base, using a protractor. As illustrated above, the area of a right trapezoid is (1) (2) A right isosceles trapezoid is a trapezoid which is simultaneously a right trapezoid as well as an isosceles trapezoid. However, we have … From MathWorld--A Wolfram Web Resource. Then draw another 90 degree angle at the … Scalene Trapezoid A scalene trapezoid doesn´t have equal sides or angles. 
Area of a Right Trapezoid

A trapezoid (or trapezium) is a quadrilateral with one pair of parallel opposite sides, called the bases b1 and b2. The height h of the trapezoid is the perpendicular distance between these bases; the non-parallel sides are called the legs. The area of a polygon is the number of square units inside that polygon, and for a trapezoid it is one-half the product of the sum of its bases and its height:

Area = ½ × (b1 + b2) × h

Equivalently, the area is the average of the two parallel sides multiplied by the height; the median (midsegment) of a trapezoid is exactly that average, so Area = median × height. To use an area-of-a-trapezoid calculator you need three measurements, all in the same units (convert as necessary). Side lengths, diagonals and the perimeter all carry the same unit (e.g. meter), while the area has that unit squared (e.g. square meter). The same averaging of the parallel sides underlies the trapezoidal rule for estimating areas under a curve.

Several special trapezoids come up often. A right trapezoid (also called a right-angled trapezoid or right trapezium) has two adjacent right angles. An acute trapezoid has both angles on the longer base measuring less than 90°, while an obtuse trapezoid has one interior angle (formed by a base and a leg) greater than 90°. An isosceles trapezoid has its two legs equal in length, and a scalene trapezoid has no equal sides or angles. A trapezoid that is simultaneously a right trapezoid and an isosceles trapezoid is, in Euclidean geometry, automatically a rectangle; so the phrase "right isosceles trapezoid" describes a rectangle, and looks can be deceiving.

The interior angles of a trapezoid add up to 360°, and because the bases are parallel, the two angles along each leg are supplementary. So if all the data given is α = 30°, γ = 125° and h = 6 cm, then β = 180° − 30° = 150° and, similarly, as γ + δ = 180°, δ = 180° − 125° = 55°.

Example: find the area of a trapezoid whose bases are 10 m and 7 m, respectively, and whose height is 5 m. Area = ½ × (10 + 7) × 5 = 42.5 square meters.

The perimeter of a trapezoid is simply the sum of the lengths of all four sides. Note that a diagonal cannot be used in the perimeter: for a trapezoid with diagonal AC = 10, the segments you can use are AB, AD, BC and CD, but not AC.
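Since the formula is a one-liner, a small code sketch makes it concrete. This is my own minimal Python version (the function name is mine, not from any library):

def trapezoid_area(b1, b2, h):
    """Area of a trapezoid with parallel sides b1 and b2 and height h."""
    return 0.5 * (b1 + b2) * h

print(trapezoid_area(10, 7, 5))  # 42.5, matching the worked example above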
{"url":"https://concept.ink/e44vbj/8ii686.php?5468e4=area-of-a-right-trapezoid","timestamp":"2024-11-13T02:34:56Z","content_type":"text/html","content_length":"33484","record_id":"<urn:uuid:1de55815-a7cc-4bd3-acae-724c3dca15bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00854.warc.gz"}
An Introduction to R-Square

What is R-square? R-square is a goodness-of-fit measure for linear regression models. This statistic indicates the percentage of the variance in the dependent variable that the independent variables explain collectively. R-squared measures the strength of the relationship between your model and the dependent variable on a convenient 0–100% scale. After fitting a linear regression model, you need to determine how well the model fits the data. Does it do a good job of explaining changes in the dependent variable? There are several key goodness-of-fit statistics for regression analysis.

Assessing Goodness-of-Fit in a Regression Model

Linear regression identifies the equation that produces the smallest difference between all of the observed values and their fitted values. To be precise, linear regression finds the smallest sum of squared residuals that is possible for the dataset. Statisticians say that a regression model fits the data well if the differences between the observations and the predicted values are small and unbiased. Unbiased in this context means that the fitted values are not systematically too high or too low anywhere in the observation space. However, before assessing numeric measures of goodness-of-fit, like R-squared, we should evaluate the residual plots. Residual plots can expose a biased model far more effectively than the numeric output by displaying problematic patterns in the residuals.

R-squared and the Goodness-of-Fit

R-squared evaluates the scatter of the data points around the fitted regression line. It is also called the coefficient of determination, or the coefficient of multiple determination for multiple regression. For the same data set, higher R-squared values represent smaller differences between the observed data and the fitted values. R-squared is the percentage of the dependent variable variation that a linear model explains. R-squared is always between 0 and 100%:

• 0% represents a model that does not explain any of the variation in the response variable around its mean. The mean of the dependent variable predicts the dependent variable as well as the regression model does.
• 100% represents a model that explains all of the variation in the response variable around its mean.

Usually, the larger the R^2, the better the regression model fits your observations.

Visual Representation of R-square

To visually demonstrate how R-squared values represent the scatter around the regression line, we can plot the fitted values against the observed values. The R-squared for the regression model on the left is 15%, and for the model on the right, it is 85%. When a regression model accounts for more of the variance, the data points are closer to the regression line. In practice, we will never see a regression model with an R^2 of 100%. In that case, the fitted values would equal the data values and, consequently, all of the observations would fall exactly on the regression line.

R-square has Limitations

We cannot use R-squared to determine whether the coefficient estimates and predictions are biased, which is why you must assess the residual plots. R-squared does not indicate whether a regression model provides an adequate fit to your data. A good model can have a low R^2 value. On the other hand, a biased model can have a high R^2 value!

Are Low R-squared Values Always a Problem?

No. Regression models with low R-squared values can be perfectly good models for several reasons. Some fields of study have an inherently greater amount of unexplainable variation.
In these areas, your R^2 values are bound to be lower. For example, studies that try to explain human behavior generally have R^2 values of less than 50%. People are just harder to predict than things like physical processes. Fortunately, if you have a low R-squared value but the independent variables are statistically significant, you can still draw important conclusions about the relationships between the variables. Statistically significant coefficients continue to represent the mean change in the dependent variable given a one-unit shift in the independent variable. There is one scenario where small R-squared values can cause problems: if we need to generate predictions that are relatively precise (narrow prediction intervals), a low R^2 can be a show stopper. How high does R-squared need to be for the model to produce useful predictions? That depends on the precision that you require and the amount of variation present in your data.

Are High R-squared Values Always Great?

No! A regression model with a high R-squared value can have a multitude of problems. We probably expect that a high R^2 indicates a good model, but examine the graphs below. The fitted line plot models the association between electron mobility and density. The data in the fitted line plot follow a very low noise relationship, and the R-squared is 98.5%, which seems fantastic. However, the regression line consistently under- and over-predicts the data along the curve, which is bias. The Residuals versus Fits plot emphasizes this unwanted pattern. An unbiased model has residuals that are randomly scattered around zero. Non-random residual patterns indicate a bad fit despite a high R^2. This type of specification bias occurs when our linear model is underspecified: it is missing significant independent variables, polynomial terms, or interaction terms. To produce random residuals, try adding terms to the model or fitting a nonlinear model.

A variety of other circumstances can artificially inflate our R^2. These include overfitting the model and data mining. Either of these can produce a model that looks like it provides an excellent fit to the data, but in reality the results can be entirely deceptive. An overfit model is one where the model fits the random quirks of the sample. Data mining can take advantage of chance correlations. In either case, we can obtain a model with a high R^2 even for entirely random data!

R-squared Is Not Always Straightforward!

At first glance, R-squared seems like an easy-to-understand statistic that indicates how well a regression model fits a data set. However, it doesn't tell us the entire story. To get the full picture, we must consider R^2 values in combination with residual plots, other statistics, and in-depth knowledge of the subject area.

How to Interpret Adjusted R-Squared and Predicted R-Squared in Regression Analysis?

R-square tends to reward you for including too many independent variables in a regression model, and it doesn't provide any incentive to stop adding more. Adjusted R-squared and predicted R-squared use different approaches to help you fight that impulse to add too many. The protection that adjusted R-squared and predicted R-squared provide is critical because too many terms in a model can produce results that we can't trust. Multiple linear regression is an incredibly tempting statistical analysis that practically begs you to include additional independent variables in your model.
Every time you add a variable, the R-squared increases, which tempts you to add more. Some of the independent variables will be statistically significant.

Some Problems with R-squared

We cannot use R-squared to conclude whether a model is biased; to check for bias, we need to examine the residual plots. Unfortunately, there are yet more problems with R-squared that we need to address.

Problem 1: R-squared increases every time you add an independent variable to the model. The R-squared never decreases, not even when it's just a chance correlation between variables. A regression model that contains more independent variables than another model can look like it provides a better fit merely because it contains more variables.

Problem 2: When a model contains an excessive number of independent variables and polynomial terms, it becomes overly customized to fit the peculiarities and random noise in our sample rather than reflecting the entire population. Fortunately for us, adjusted R-squared and predicted R-squared address both of these problems.
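The basic mechanics are easy to demonstrate in code. Here is a small, self-contained Python sketch (the data values are synthetic, chosen only for illustration) that fits an OLS model with statsmodels and reads off both R-squared and adjusted R-squared:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=1.0, size=50)  # linear signal plus noise

X = sm.add_constant(x)       # add the intercept column
fit = sm.OLS(y, X).fit()
print(fit.rsquared)          # fraction of variance explained, 0 to 1
print(fit.rsquared_adj)      # penalized for additional predictors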
{"url":"https://www.mygreatlearning.com/blog/r-square/","timestamp":"2024-11-03T21:49:21Z","content_type":"text/html","content_length":"376195","record_id":"<urn:uuid:4f936ad1-928f-4729-98b8-0e5003952914>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00806.warc.gz"}
What Is The Difference Between Potassium Nitrate And Potassium Sulphate - Relationship Between

Potassium is an essential element in the human body and is found in many different forms. Potassium nitrate and potassium sulphate are two common forms of potassium, but they differ in their composition, uses, and effects on the body. In this blog, we will explore the differences between potassium nitrate and potassium sulphate and how these differences can affect your health.

History of potassium nitrate and potassium sulphate

Potassium nitrate and potassium sulphate are both compounds of potassium, but they are quite different in their composition and use. Potassium nitrate is a white, odorless, crystalline powder that is used as a fertilizer and a food preservative. It is also used in pyrotechnics, as a flux in welding, and as a component in some types of glass and ceramics. Potassium sulphate, on the other hand, is a colorless, odorless, crystalline powder that is used mainly as a fertilizer and for making detergents and soaps. It is also used in some industrial processes, such as tanning leather and dyeing fabrics. The main difference between the two is that potassium nitrate is more soluble in water than potassium sulphate, making it better suited for use in fertilizers and food preservatives.

Chemical properties of potassium nitrate and potassium sulphate

Potassium nitrate and potassium sulphate are two inorganic compounds that have many chemical properties in common, but also some important differences. Potassium nitrate is a white, crystalline solid with a melting point of 334 degrees Celsius, while potassium sulphate is a white, odorless powder with a melting point of 1069 degrees Celsius. Both of these compounds are highly soluble in water, but potassium nitrate is more soluble than potassium sulphate. Additionally, potassium nitrate has a far lower boiling point than potassium sulphate, at about 400 degrees Celsius compared to 1429 degrees Celsius. The main functional difference between the two compounds is that potassium nitrate is an oxidizing agent, meaning it can transfer oxygen to other molecules, while potassium sulphate does not have this property. As such, potassium nitrate is used in the production of explosives, fireworks, and fertilizers, while potassium sulphate is used primarily in the production of glass, ceramics, and fertilizers.

Uses of potassium nitrate and potassium sulphate

Potassium nitrate and potassium sulphate are two compounds that are often confused. While they are both potassium-based, they have very different uses. Potassium nitrate is primarily used as a fertilizer, while potassium sulphate is used in the manufacturing of glass, pottery, and fertilizers. The main difference between the two compounds lies in their chemical makeup: potassium nitrate is made up of potassium, nitrogen, and oxygen, while potassium sulphate is composed of potassium, sulphur, and oxygen. As a result, they have different properties and uses. Potassium nitrate is used to provide plants with nitrogen, while potassium sulphate is used to supply plants with sulphur. Both compounds can be beneficial to plant growth, but they should not be used interchangeably.
Advantages and disadvantages of potassium nitrate and potassium sulphate

Potassium nitrate and potassium sulphate are two important inorganic compounds with a wide range of applications. Both contain potassium and oxygen, but they differ in their chemical formulas and atomic structure. Potassium nitrate (KNO3) is composed of one potassium atom, one nitrogen atom and three oxygen atoms. Potassium sulphate (K2SO4), on the other hand, is composed of two potassium atoms, one sulphur atom and four oxygen atoms. While both compounds have a wide range of uses, there are also some distinct advantages and disadvantages associated with each one. The primary advantage of potassium nitrate is its high solubility in water, which makes it ideal for use in fertilizers and in the production of explosives. Potassium sulphate, on the other hand, has a low solubility in water and is not as effective as potassium nitrate when used as a fertilizer.

Another key difference between the two compounds is their toxicity. Potassium nitrate is moderately toxic and can cause skin and eye irritation. Potassium sulphate, however, is non-toxic and safe to handle. This makes it the preferred choice for many applications, such as in organic farming and in the production of food-grade products. Overall, while both potassium nitrate and potassium sulphate have their own distinct advantages and disadvantages, it is important to consider the specific application in order to determine which compound is most suitable.

Health and safety concerns of potassium nitrate and potassium sulphate

Potassium nitrate and potassium sulphate are two chemical compounds that may be found in a variety of products, from fertilizers to fireworks. While both are derived from the same element, potassium, they have very different properties, and thus pose different health and safety concerns. In general, potassium nitrate is more toxic than potassium sulphate and should be handled with more caution. Potassium sulphate is an odorless, tasteless powder that is used as a fertilizer, while potassium nitrate is a crystalline salt with a sharp taste that is used in food preservation, explosives, and fertilizers. Both can be irritants if handled improperly and can cause skin and eye irritation on contact. However, potassium nitrate is the more toxic of the two and can cause severe respiratory effects if inhaled in large amounts. Therefore, if handling either of these chemicals, it is important to take extra precautions and use protective gear.

Bottom Line

In conclusion, the main difference between potassium nitrate and potassium sulphate is that potassium nitrate is an oxidizing agent, used in fertilizers, explosives and fireworks, while potassium sulphate is an inorganic salt used mainly as a fertilizer. Both compounds contain potassium, but the other elements in their chemical structures differ greatly.
{"url":"https://relationshipbetween.com/what-is-the-difference-between-potassium-nitrate-and-potassium-sulphate/","timestamp":"2024-11-06T14:54:01Z","content_type":"text/html","content_length":"95748","record_id":"<urn:uuid:ece6737e-d459-42a0-83a9-17b49bb3e062>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00694.warc.gz"}
initial conditions

The initial conditions are the initial profiles of velocity and all other dependent variables (for example temperature or stagnation enthalpy, mass fraction of A in an A-B binary mixture, turbulence kinetic energy, or turbulence dissipation), and they obey their respective transport equation boundary conditions. At a surface, for both external flows and internal flows, the boundary condition for velocity will be zero, reflecting the no-slip condition; for the energy equation it will be either a specified surface temperature or surface heat flux (appropriately converted to enthalpy for variable properties); for mass transfer (selected geometries) it will be the surface mass concentration or mass fraction of A; for turbulence kinetic energy the value will be zero, reflecting both the no-slip condition and the no-penetration condition (the impermeable surface); for turbulence dissipation, the surface boundary condition is preprogrammed into TEXSTAN to meet the unique requirements of its transport equation.

At a symmetry line, for internal flows with geometries that have geometrical similarity (pipe flow and symmetrical thermal boundary condition planar duct flow), the zero-gradient condition is applied to all dependent variables. For the free stream and all external flows, the free stream velocity at x = xstart is interpolated from the boundary condition tables; for the energy equation it is set by the tstag variable (interpreted by TEXSTAN as free stream temperature for constant properties and stagnation temperature for variable properties); for mass transfer (selected geometries) it will be the interpolated free stream mass concentration or mass fraction of A; and for the turbulence variables, the free stream conditions are set by the input variables tuapp and epsapp.

The solution domain is where the boundary layer equations are solved, and it is bounded between two surfaces, called the I-surface and the E-surface. Integration of the transport equations begins at the user-specified x location called xstart, where the initial conditions are specified, and integration is stopped at the user-specified x location called xend. The stop location is arbitrary, and it reflects the mathematically parabolic nature of the governing equations. For external flows, the I-surface will always be the no-slip surface (wall) and the E-surface will always be the free stream. This E-surface is quasi-bounded, meaning it must be bounded for any integration step, but it can be moved as the integration proceeds to ensure a zero-gradient condition for each dependent variable. This is discussed in detail in the entrainment part of the external flows: integration control section of this website.

For internal flows, the flow domain is completely bounded in the cross-flow direction. For pipe flows and symmetrical thermal boundary condition planar ducts, the centerline is the I-surface and the no-slip surface is the E-surface. For asymmetrical thermal boundary condition planar ducts and for the annular geometry, the I- and E-surfaces are the inner and outer no-slip surfaces respectively.

The initial profiles are introduced into a TEXSTAN dataset in one of two ways:

• TEXSTAN auto-generates the initial profiles
• the user supplies the initial profiles

The finite difference mesh is created using the initial velocity profile, u(y), and the same distribution of y locations is used for the other dependent variables.
Therefore, it is extremely important that the distribution of profile points in any input profile be correct, to help guarantee a grid-independent numerical solution. If the initial profiles are user-supplied (often from experimental data), TEXSTAN will interpolate these profiles to ensure an appropriate distribution of initial profile points that satisfies the requirements of the numerics. The details of establishing the mesh are described in the overview: numerical accuracy section of the website.

TEXSTAN Auto-Generated Profiles - External Flows - The auto-generated profiles for laminar flow are from the Falkner-Skan m=0 solution (Blasius profiles), which requires that the free stream velocity be constant, and from the Falkner-Skan m=1 solution (2-D stagnation point profiles), which requires that the free stream velocity accelerate linearly. The stagnation-point profile can be applied as a first approximation both to the cylinder-in-crossflow and to the cylindrical leading-edge region of a turbine blade (airfoil). For turbulent flow the auto-generated profiles are the classical power-law profiles with corrections for log-region behavior. For the energy equation, the temperature profile is first constructed, and then it is converted to a stagnation enthalpy profile if variable properties are being used. Specific details of how the various profiles are constructed are found in the external flows: initial profiles section of this website.

┃ kstart │ external flow initial profiles ┃
┃ =3 │ turbulent velocity and temperature profiles for kgeom=1,2,3; includes k and ε profiles ┃
┃ =4 │ Blasius velocity and temperature profiles for kgeom=1 ┃
┃ =5 │ Falkner-Skan m=1 (2-D stagnation point) velocity and temperature profiles for kgeom=1 ┃
┃ =6 │ cylinder-in-crossflow velocity and temperature profiles for kgeom=1 ┃
┃ =7 │ turbine blade velocity and temperature profiles for kgeom=1 ┃
┃ =9 │ flat profiles of all variables for kgeom=1 (used on highly rough surfaces) ┃

TEXSTAN Auto-Generated Profiles - Internal Flows - The auto-generated profiles for laminar flow are designed to meet one of three flow cases: (a) the combined entry flow, where the velocity and temperature profiles are flat; (b) the unheated starting length flow, where the velocity profile is hydrodynamically fully developed and the temperature profile is flat; and (c) the thermally fully-developed flow. The auto-generated profiles for turbulent flow are designed to meet one of two flow cases: combined entry flow (flat profiles of all variables) or thermally fully-developed flow, where the profiles are approximated by the classic power-law profiles with corrections for log-region behavior. For the energy equation, the temperature profile is first constructed, and then it is converted to a stagnation enthalpy profile if variable properties are being used. Specific details of how the various profiles are constructed are found in the internal flows: initial profiles section of this website.

Note: for higher order turbulence models the entry flow profile choice is best. It is extremely difficult to establish fully-developed turbulence variable profiles because the profiles are turbulence-model specific; by permitting the turbulent flow to establish itself within the combined entry region, there will be no influence of incorrect starting initial turbulence profiles on the friction and heat transfer solutions.
┃ kstart │ internal flow initial profiles ┃
┃ =1 │ flat entry profiles of velocity and temperature for kgeom=4,5,6,7; note: includes k and ε profiles ┃
┃ =2 │ laminar hydrodynamically fully-developed velocity profile with either a flat or thermally fully-developed temperature profile for kgeom=4,5,6,7 ┃
┃ =3 │ turbulent velocity and temperature profiles for kgeom=4,5,6; includes k and ε profiles ┃
┃ =11 │ Couette flow velocity and temperature profiles for kgeom=6, laminar or turbulent flow ┃

User-Supplied Profiles for External Flows - This option is primarily designed to permit a more accurate numerical simulation of experimental convective heat and momentum data. There are two options that permit the user to input experimental initial profiles, and the specific details of how the various profiles are constructed are found in the external flows: initial profiles and internal flows: initial profiles sections of this website. The two options are:

┃ kstart │ special user-supplied initial profile options ┃
┃ =0 │ primarily designed for restart of TEXSTAN based on a dump of profiles from a previous calculation ┃
┃ =10 │ primarily designed to input experimental profiles, selected kgeom and turbulence models ┃

When kstart=0 is selected, the user provides the input profile and TEXSTAN does nothing to correct the profile(s). This option is rarely used, except to re-read a set of profiles that TEXSTAN has generated and printed by way of the option k10=21-24 and the input variable bxx. For example, setting k10=24 permits a set of internal flow profiles to be written to file ftn73.txt at a location where bxx is defined as x/D[h]. This file can then be used as-is as a user-supplied set of profiles to TEXSTAN as part of the input dataset (along with kstart=0). This option has proved useful for working with two-equation turbulence models.

When kstart=10, the user provides a set of u(y) data points (typically 15 to 20 from experimental data) and TEXSTAN interpolates the profile using the same algorithm for data point distribution that is used when TEXSTAN auto-generates a set of initial profiles. A temperature profile can also be interpolated and expanded. If the two-equation model of turbulence is being used, there is a special provision to auto-generate a dissipation profile using information from the experimental turbulence kinetic energy and velocity profiles.
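To illustrate the kind of operation the kstart=10 path performs, here is a minimal Python sketch of interpolating a coarse experimental u(y) profile onto a finer, wall-clustered mesh. This is my own illustration with made-up values, not TEXSTAN's actual algorithm:

import numpy as np

# Coarse experimental profile: y locations and velocities (invented values).
y_exp = np.array([0.0, 0.001, 0.003, 0.007, 0.015, 0.030, 0.050])
u_exp = np.array([0.0, 4.2, 7.1, 9.0, 10.2, 10.8, 11.0])

# Build a finer mesh, geometrically stretched away from the wall
# so the near-wall gradient region gets the most points.
n, ratio = 40, 1.09
dy0 = y_exp[-1] * (ratio - 1.0) / (ratio**(n - 1) - 1.0)  # first spacing, so the mesh ends at y_exp[-1]
y_mesh = np.concatenate(([0.0], np.cumsum(dy0 * ratio**np.arange(n - 1))))

# Linear interpolation of the measured u(y) onto the fine mesh.
u_mesh = np.interp(y_mesh, y_exp, u_exp)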
{"url":"http://texstan.com/o8.php","timestamp":"2024-11-12T12:28:30Z","content_type":"application/xhtml+xml","content_length":"21715","record_id":"<urn:uuid:892601a7-f655-421a-a4cd-3eb534ffc883>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00535.warc.gz"}
Line Dynamics in Droop Control | Power Engineering Laboratory | Singapore

Mitigating destabilizing effects of line dynamics in droop controlled inverters

Ph.D. Student: Gurupraanesh Raman | Advisor: Jimmy C.-H. Peng | Project Duration: 2018-19

This work demonstrates that small-signal instability under droop control occurs due to two separate phenomena: P-V/Q-f cross coupling and line dynamics. While the former has been exhaustively studied in the past, the latter has not. We first show that instability can still occur in systems where all cross-coupling has been eliminated, and that this is attributable to the distribution system dynamics. We then analyze such a decoupled system to identify the remaining instability factors, all of which arise from the line dynamics. The distribution system lag factor is found to be the sole factor that needs to be addressed to guarantee stability in decoupled systems. Finally, we propose a general formula-based lead compensator to compensate the distribution system lag. The proposed design requires no hardware modifications, and is demonstrated to guarantee stability in decoupled systems and to significantly expand the stability region in cross-coupled systems.

A novel approach to study the effect of line dynamics

Prior works have reported that the use of static power flow models for the interconnection network leads to over-optimistic stability assessment. However, this effect was not analyzed in a formal manner. Here, we developed the so-called fifth-order model for a multi-inverter system that does incorporate line dynamics, while at the same time eliminating P-f/Q-V cross-coupling by adopting the "generalized droop" law. Here, the power measurement is carried out in a rotated measuring frame, whose rotation angle depends on the R/X ratio of the system. We found that such systems can indeed experience instability at higher droop values, despite the removal of the cross-coupling effect. Our analysis of the small-signal model of this system found that some instability factors remain, namely those that disappear if the third-order model (which neglects the line dynamics) is adopted.

Figure 1. Bode plot of 1/DSLF for various R/X ratios indicating significant phase lag at power frequency.

Figure 2. Bode plots of the original filter F0 and the proposed filter for various R/X ratios. The proposed filter provides phase lead at power frequency.

Our novel approach of examining the line dynamics' effect under generalized droop control meant that we could clearly see what the instability factors are, and how the resulting destabilization can be mitigated without empirically tuned compensators. We observed three phenomena arising from the line dynamics: the distribution system lag factor (DSLF), EM-induced cross-coupling and EM-induced damping. Separating and analyzing these, we observed that the latter two effects have a minor influence on the location of the low-frequency poles and do not really move the poles towards the right-hand side. In contrast, we found that the DSLF does destabilize the poles, and that if it were unity, the system would remain stable for any droop gain. The destabilization due to the DSLF originates from the phase lag this term contributes to the droop control loop. As seen from Fig. 1, this is significant only at lower R/X ratios, where the time-constant L/R of the distribution lines is higher.
This is an interesting result in that it indicates that the line dynamics has an opposite dependence on the R/X ratio compared to P-V/Q-f cross-coupling.

Mitigating the destabilization

To guarantee small-signal stability under generalized droop, we simply multiplied the existing first-order power filter in the droop loop by the DSLF. This yields a modified filter which provides exactly the required amount of phase lead at the power frequency, essentially eliminating the effect of the DSLF. This is demonstrated in Fig. 2. Such a formula-based filter/compensator design is far preferable to empirical design rules, and makes the process independent of the system topology, inverter ratings and droop coefficients. Given any R/X, the filter can be obtained directly, with no further information.

On applying the proposed filter, we observe its effectiveness by plotting the system poles. Fig. 3 shows that the low-frequency poles under the original filter were angled rightwards, leading to the possibility of instability as the droop gains are increased. However, for the proposed filter, the poles are oriented leftwards: as the droop gains are increased, the stability actually improves. This reversal in trend is mainly caused by the EM-induced damping, in the absence of which the poles under the proposed filter would lie on the straight line at -1/(2T_c), where T_c is the time-constant of the original filter.

Figure 3. Juxtaposition of the system poles under generalized droop control for R/X=1 with the conventional filter (blue x) and the proposed filter (red o). The zoomed version of (a) is shown in (b), focusing on the critical poles.

Figure 4. Critical poles of the system with the proposed filter indicating guaranteed stability for (a) variation in kf and (b) variation in R/X ratio.

The impact of canceling the DSLF is evident from the stability regions shown in Figure 5. A stability region is the subset of the droop gain hyperspace where the system is positively damped. In the case of generalized droop, the only instability factor is the distribution system lag. When this is eliminated, the stability region becomes infinite.

Figure 5. Stability region under generalized droop control for various R/X ratios with (a) the original filter and (b) the proposed filter (the entire plane is theoretically stable for (b)).

Figure 6. Bode plots of the original filter F0 and the proposed filter for various R/X ratios. The proposed filter provides phase lead at power frequency.

To verify that the line dynamics and cross-coupling are separate and independent instability phenomena, we now take up the conventional droop control, where cross-coupling does exist. For this case, we would expect the stability to improve significantly for lower R/X ratios, where the line dynamics is the dominant effect. For higher R/X ratios, we expect it not to change very significantly, because the line dynamics effect is very weak, and its mitigation does not affect the P-V/Q-f cross-coupling at all. Figure 6 corroborates these expectations.

The results from Figure 6 offer further insights into which effect (line dynamics or cross-coupling) is dominant for practical systems. Practical distribution systems typically have R/X ratios around 1. While the extensive research focus in the past has been on cross-coupling, the actual dominant effect to address is line dynamics.
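As a rough numerical illustration of the compensation idea (not the project's actual transfer functions), the sketch below compares the phase of a first-order power filter with and without a lead term of the line time-constant L/R at the power frequency. All numerical values here (50 Hz, the filter time-constant, and tau = L/R) are assumptions for illustration only:

import numpy as np
from scipy import signal

omega0 = 2 * np.pi * 50    # power frequency in rad/s (50 Hz assumed)
Tc = 1 / (2 * np.pi * 5)   # time-constant of the original power filter (assumed)
tau = 0.01                 # line time-constant L/R in seconds (assumed, low-R/X case)

F0 = signal.TransferFunction([1.0], [Tc, 1.0])       # original filter 1/(Tc*s + 1)
Fp = signal.TransferFunction([tau, 1.0], [Tc, 1.0])  # lead-compensated filter (tau*s + 1)/(Tc*s + 1)

for name, tf in (("original", F0), ("proposed", Fp)):
    _, _, phase = signal.bode(tf, w=np.array([omega0]))
    print(f"{name} filter phase at power frequency: {phase[0]:.1f} deg")

With these assumed numbers, the lead term recovers most of the phase lost in the loop at the power frequency, which is the qualitative effect described above.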
{"url":"https://www.penglaboratory.com/compensator-design","timestamp":"2024-11-06T07:04:36Z","content_type":"text/html","content_length":"395605","record_id":"<urn:uuid:05139bfd-c96d-4cf7-9544-f8dec86ebbf4>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00358.warc.gz"}
The reflection property of the parabola While the parabola has many beautiful geometric properties as we have seen, it also has a remarkable property known as the reflection property, which is used in such diverse places as car headlights and satellite dishes. A car headlight emits light from one source, but is able to focus that light into a beam so that the light does not move off in all directions — as lamplight does — but is focused directly ahead. Satellite dishes do the opposite. They collect electromagnetic rays as they arrive and concentrate them at one point, thereby obtaining a strong signal. Headlight reflectors and satellite dishes are designed so their cross-sections are parabolic in shape and the point of collection or emission is the focus of the parabola. To show how this works we need a basic fact from physics: when light (or any electromagnetic radiation) is reflected off a surface, the angle of incidence equals the angle of reflection. That is, when a light ray bounces off the surface of a reflector, then the angle between the light ray and the normal to the reflector at the point of contact equals the angle between the normal and the reflected ray. This is shown in the following diagram. Detailed description of diagram Note that this implies that the angle between the ray and the tangent is also preserved after reflection, which is a more convenient idea for us here. Let \(P(2ap,ap^2)\) be a point on the parabola \(x^2=4ay\) with focus at \(S\) and let \(T\) be the point where the tangent at \(P\) cuts the \(y\)-axis. Suppose \(PQ\) is a ray parallel to the \(y\)-axis. Our aim is to show that the line \(PS\) will satisfy the reflection property, that is, \(\angle QPB\) is equal to \(\angle SPT\). Detailed description of diagram Notice that, since \(QP\) is parallel to \(ST\), \(\angle QPB\) is equal to \(\angle STP\), so we will show that \(\angle STP = \angle SPT\). Now \(S\) has coordinates \((0,a)\), and \(T\) has coordinates \((0,-ap^2)\), obtained by putting \(x=0\) in the equation of the tangent at \(P\). Hence \[ SP^2=(2ap-0)^2+(ap^2-a)^2 = a^2(p^2+1)^2 \] after a little algebra. Also, \[ ST^2 = (a+ap^2)^2= a^2(1+p^2)^2 \] and so \(SP=ST\). Hence \(\Delta STP\) is isosceles and so \(\angle STP = \angle SPT\). Thus the reflection property tells us that any ray parallel to the axis of the parabola will bounce off the parabola and pass through the focus. Conversely, any ray passing through the focus will reflect off the parabola in a line parallel to the axis of the parabola (so that light emanating from the focus will reflect in a straight line parallel to the axis).
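The key algebraic identity in this argument, that SP and ST are both equal to a(1 + p^2), is easy to check symbolically. Here is a small sympy sketch (my own verification, not part of the original text):

import sympy as sp

a, p = sp.symbols('a p', positive=True)
S = sp.Matrix([0, a])              # focus of x^2 = 4ay
P = sp.Matrix([2*a*p, a*p**2])     # point on the parabola
T = sp.Matrix([0, -a*p**2])        # tangent at P meets the y-axis here

SP2 = (P - S).dot(P - S)           # squared distance SP
ST2 = (T - S).dot(T - S)           # squared distance ST
print(sp.simplify(SP2 - ST2))      # prints 0, so triangle STP is isosceles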
{"url":"https://amsi.org.au/ESA_Senior_Years/SeniorTopic2/2a/2a_2content_13.html","timestamp":"2024-11-03T04:39:31Z","content_type":"text/html","content_length":"5995","record_id":"<urn:uuid:0340158a-4aaf-4bf9-94f3-217edf6fa197>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00320.warc.gz"}
how to combine multi formula conditions

Hi team, Trying to create a vehicle luggage capacity calculator for small and large cars. Please find the formula below:

=IF(AND(Passengers@row <= 4, [25 Kg]@row <= 2, [15 Kg]@row <= 1, [7 Kg]@row <= 1), "Yes", "No")

The formula above sets col [Result] to "Yes" in the case where all of the following hold: the number of passengers in col [Passengers] is at most 4, the number of luggage pieces in col [25 kg] is at most 2, the number in col [15 kg] is at most 1, and the number in col [7 kg] is at most 1. Otherwise, col [Result] is "No".

I struggled without success to add another conditional formula to the above with different numbers. I want the logic to set col [Result] to "Yes" when the luggage counts are at most 2 for col [25 kg], at most 1 for col [15 kg] and at most 1 for col [7 kg], and also when they are at most 1 for col [25 kg], at most 2 for col [15 kg] and at most 1 for col [7 kg]. This is what I tried:

=IF(AND(Passengers@row <= 4, [25 Kg]@row <= 2, [15 Kg]@row <= 1, [7 Kg]@row <= 1), "Yes", "No"),(IF(AND(Passengers@row <= 4, [25 Kg]@row <=1, [15 Kg]@row <= 2, [7 Kg]@row <= 2), "Yes", "No")

In short, my formula works fine for one logic, while I want to combine two, three or more conditioned formulas with different numbers for the selected cols, all ending in the same "Yes" result, and "No" if none of my conditions are met. Thank you, your assistance is much appreciated.

Best Answer

• Add SUM in the formula: =IF(AND(PAXX@row <= 4, SUM([25 Kg]@row * 110, [15 Kg]@row * 80, [7 Kg]@row * 35) <= 400), "Yes", "No")

• I'm not following all of your logic, but I suggest creating multiple columns to evaluate different formula conditions. Your first formula is yielding the correct answer. Create a second formula column to evaluate the next 4 conditions, and a third to evaluate the next 4 conditions. The final result can then check whether all 3 formula columns are Yes; if so, Yes, otherwise No. You could combine all 3 into a single formula, but it is easier to debug if done in parts.

• Hi @dojones Actually I'm now on another related formula and still could not correct it. Formula used:

=IF(AND(Paxx@row <= 4, [25 Kg]@row * 110, [15 Kg]@row * 80, [7 Kg]@row * 35 <= 400), "Yes", "No")

The Yes/No logic in this formula is: if the number of passengers is at most 4, and the number in col [25 kg] multiplied by 110, plus the number in col [15 kg] multiplied by 80, plus the number in col [7 kg] multiplied by 35, adds up to below 400, then the result is "Yes"; otherwise it is "No". In this example, the sum is 345 (which is below 400), so the result should be "Yes", but it's not working and ends with an INVALID DATA TYPE error.

• Add SUM in the formula: =IF(AND(PAXX@row <= 4, SUM([25 Kg]@row * 110, [15 Kg]@row * 80, [7 Kg]@row * 35) <= 400), "Yes", "No")

• Many thanks @dojones It works well for us, much appreciated. Your assistance was very helpful.
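A side note on the original multi-scenario question: in Smartsheet, alternative AND groups can be wrapped in a single OR so that any one matching scenario yields "Yes". A sketch of that pattern, using the two scenarios above:

=IF(OR(AND(Passengers@row <= 4, [25 Kg]@row <= 2, [15 Kg]@row <= 1, [7 Kg]@row <= 1), AND(Passengers@row <= 4, [25 Kg]@row <= 1, [15 Kg]@row <= 2, [7 Kg]@row <= 1)), "Yes", "No")

Each additional scenario becomes one more AND(...) argument inside the OR.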
{"url":"https://community.smartsheet.com/discussion/123960/how-to-combine-multi-formula-conditions","timestamp":"2024-11-11T13:50:58Z","content_type":"text/html","content_length":"420066","record_id":"<urn:uuid:fb798cdd-a59b-431c-8329-3f6ff92648db>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00838.warc.gz"}
Observability inequalities for transport equations through Carleman estimates

Inserted: 27 jul 2018

Year: 2018

We consider the transport equation $\partial_t u(x,t) + H(t)\cdot \nabla u(x,t) = 0$ in $\Omega\times(0,T)$, where $T>0$ and $\Omega\subset \mathbb{R}^d$ is a bounded domain with smooth boundary $\partial\Omega$. First, we prove a Carleman estimate for solutions of finite energy with piecewise continuous weight functions. Then, under a further condition on $H$ which guarantees that the orbit $\{ H(t)\in\mathbb{R}^d : 0 \le t \le T\}$ intersects $\partial\Omega$, we prove an energy estimate which in turn yields an observability inequality. Our results are motivated by applications to inverse problems.
{"url":"https://cvgmt.sns.it/paper/3986/","timestamp":"2024-11-07T23:30:53Z","content_type":"text/html","content_length":"8347","record_id":"<urn:uuid:3db7af74-139e-407b-8d18-9cd799cc22b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00321.warc.gz"}
Division Facts & Concepts | Skip Count to Divide

Division Facts & Concepts, Divide with Skip Counting

46 Step-by-Step Math Lessons | Special Education Math Intervention | Third Grade Math Level

This Division Facts & Concepts lesson workbook includes an example IEP goal, 46 step-by-step math lessons, reviews and assessments. This evidence-based math intervention is tied to third grade standards and is great for special education math goals and tier 2 small group math interventions (RTI). These sequential lessons are easy to teach, with enough material for 12 weeks of instruction! In this workbook, students are introduced to the basic concepts of division by using pictures and number lines. Then they progress to skip counting to identify multiples of the divisor. They learn the meanings of "divisor" and "quotient" as well as the division symbols.

Click Here to Preview (Preview PDF will automatically download to your computer. If it doesn't open, you may need to find it in your downloads folder.)

This PDF Download Includes: 46 Lessons, 14 Reviews, 14 Tests, 271 Pages, 12 Weeks of Instruction

Students Will Be Able To:
• Separate pictures into equal groups to represent division
• Divide by one
• Divide zero by any number
• Use a number line to solve division problems
• Use skip counting to solve division problems
• Divide by: 5, 2, 3, 4, 10, 9, 6, 7, 8
• Recognize and write related division facts

Lessons Are Tied to This Third Grade Math Standard:
• Interpret whole-number quotients of whole numbers (CCSS 3.OA.A.2)

📘 This Resource is Also Found in These Discounted Bundles:

Not sure where to start? ✏️ Download a free placement test.

Watch a video overview of Step-by-Step Math to Mastery or read the transcript here.

Step-by-Step Math to Mastery™ Resources:

✰ Make math easier to understand.
• Help students over math hurdles with clear, sequential, scaffolded lessons.
• Prevent overwhelm. Build student confidence.

✰ Make math easier to teach.
• Save hours of planning and piecing together materials.
• Paraeducators can deliver quality instruction independently. Open & teach.

Reasons You'll Love Step-by-Step Math to Mastery™ Resources
• Boosts student confidence and progress
• Lightens teacher workload
• Time-saving: print and teach
• Easy prep: only black ink is needed
• Easy to teach, paraeducator-friendly
• Example IEP goals and shorter term objectives
• Consistent & predictable format
• Lots of practice repetitions
• Scaffolded with structured workspaces
• Fewer problems on a page, white space, minimal visual clutter
• Tied to standards
• Explicit/Direct instruction
• Systematic & sequential
• Mastery approach--teaching one topic at a time, one strategy at a time
• Lessons have "I Can" statements, model problems, guided practice, & independent practice
• Each workbook can be used individually as a stand-alone intervention for that skill or they can be used together, taking students from the basics of number sense and addition in first grade all the way to dividing fractions and decimals in fifth grade.
• Can also be used with older students (middle & high school) to help fill the gaps in their learning.

About the Author

Hi! I am a Special Education & Title 1 teacher interested in task analysis. My resources break tasks into small and explicit steps, reducing cognitive load for learners so they can feel successful and confident rather than overwhelmed and reluctant to try.
I strive to lighten the workload of teachers by making resources as practical and easy-to-use as possible. I sincerely hope my work can benefit you and your students. I appreciate your support and feedback. Terms of Use Copyright © Angela Dansie, All rights reserved. This product is to be used by the original purchaser for single class use only. You may not put this product on the internet where it could be publicly found and downloaded. Copying for more than one teacher, classroom, department, school, or school system is prohibited. If you want to share this resource with colleagues, please purchase additional licenses. Failure to comply is a copyright infringement and a violation of the Digital Millennium Copyright Act (DMCA). Clipart and elements found in this PDF are copyrighted and cannot be extracted and used outside of this file without permission or license. Intended for classroom and personal use only. Thank you for respecting these terms of use.
{"url":"https://mathtomastery.com/products/division-concepts-and-divisors-to-10","timestamp":"2024-11-03T02:33:00Z","content_type":"text/html","content_length":"154111","record_id":"<urn:uuid:eb9fe176-ca83-4585-a086-4d8385aa6583>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00860.warc.gz"}
Date and time: Thursday, July 19, 15:00 - 17:15
Venue: Natural Sciences Building D, Room 509

Time: 15:00 - 16:00
Speaker: Noriko Yui (Queen's University)
Title: Modularity (automorphy) of Calabi-Yau varieties over Q
Abstract: I will present the current status on the modularity of Calabi-Yau varieties defined over the field of rational numbers. Here modularity is in the sense of the Langlands Program. In the first part, I will formulate the modularity conjectures for Calabi-Yau varieties of dimension 1, 2 and 3, and discuss the recent modularity results. If there is time, I will report on the recent joint work with Y. Goto and R. Livne on automorphy of certain K3-fibered Calabi-Yau threefolds, and mirror symmetry.

Time: 16:15 - 17:15
Speaker: George Elliott (University of Toronto)
Title: A brief history of non-smooth classification theory
Abstract: It was first within the theory of C*-algebras that it was noticed---by Mackey (or at least suspected by him!)---that the classification up to isomorphism of a well-behaved ensemble of objects (nicely parametrized)---in this case, the irreducible representations of a given C*-algebra---might be no longer well behaved, the corresponding quotient space of the "standard" Borel space of given objects possibly being decidedly nonstandard (much like the real numbers modulo the subgroup of rationals). Interestingly, perhaps, it was also first within the theory of C*-algebras that this problem was circumvented in a non-trivial way---by passing from the given category of objects to a new category in an invariant way (by means of a functor), in such a way that the new category is also well-behaved (e.g., a standard Borel space), so it is not just the set of isomorphism classes of the original objects (which would be non-smooth), but is still a simpler category than the original one---for the simple reason that all inner automorphisms (if not all automorphisms) become trivial. The first example of this was discovered by Glimm and Dixmier, and enlarged on later by Bratteli and Elliott---it was, incidentally, also work of Glimm that confirmed Mackey's discovery. This functorial treatment of a non-smooth classification setting (isomorphism within a certain class of C*-algebras) was the first use of K-theory in operator algebras. (Not counting the Murray-von Neumann type classification of von Neumann algebras!)

Contact: 木村健一郎
{"url":"https://nc.math.tsukuba.ac.jp/blogs/blog_entries/view/142/8c9942883b03422400e1bc942c5a4a47?page_id=39","timestamp":"2024-11-11T14:19:22Z","content_type":"text/html","content_length":"20751","record_id":"<urn:uuid:021b4ba1-b06b-4790-96ca-c146d0f73163>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00382.warc.gz"}
The Width Of A Rectangle Is 8 Feet And The Diagonal Length Of The Rectangle Is 13 Feet Which Measurement

The full answer is √105 ≈ 10.24695, but I don't have the multiple choices, so you're going to have to round it yourself.

Step-by-step explanation: You have a width of 8 and a diagonal of 13. If you look, there's a triangle there: one side is 8, and the diagonal forms the hypotenuse of 13. So we have a leg and a hypotenuse. To find the other leg (the length), just use the Pythagorean theorem and solve for b (or a).

Pythagorean theorem: a^2 + b^2 = c^2

Substitute c = 13, since it's the hypotenuse, and substitute a = 8 (just one of the legs). Solve the squares: 64 + b^2 = 169. Isolate b: b^2 = 105. Now take the square root: b = √105, which gives the answer above on a normal calculator; if you have to round, round to whatever place the question requires.

a. The unemployment rate is 20%. b. The labor-force participation rate is 60%.

According to the question: working-age population = 100,000; labor force = 60,000; unemployed = 12,000.

a) The unemployment rate is the percentage of the labor force that is unemployed, and it is calculated as: Unemployment Rate = (Number of unemployed people / Labor force) × 100. With 12,000 unemployed and a labor force of 60,000, the unemployment rate = (12,000 / 60,000) × 100 = 20%.

b) The labor-force participation rate is the percentage of the working-age adult population that participates in the labor force, either by working or by actively looking for a job. It is calculated as: Labor Force Participation Rate = (Labor force / Working-age population) × 100. With a labor force of 60,000 and a working-age population of 100,000: (60,000 / 100,000) × 100 = 60%.

So, a. The unemployment rate is 20%. b. The labor-force participation rate is 60%.
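As a quick sanity check, this one-line Python sketch reproduces the value:

import math
print(math.sqrt(13**2 - 8**2))  # 10.246950765959598, i.e. √105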
{"url":"https://www.cairokee.com/homework-solutions/the-width-of-a-rectangle-is-8-feet-and-the-diagnal-length-of-dbcv","timestamp":"2024-11-07T11:15:31Z","content_type":"text/html","content_length":"92149","record_id":"<urn:uuid:f21c1031-0684-41fa-9248-15a3e65b2b1a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00033.warc.gz"}
Insert categories in transformations Insertions-class {crunch} R Documentation Insert categories in transformations Insertions allow you to insert new categories into a categorical-like response on a variable's transformations. Insertions(..., data = NULL) .Insertion(..., data = NULL) anchor(x, ...) anchor(x) <- value arguments(x, ...) arguments(x) <- value ## S4 replacement method for signature 'Insertion' anchor(x) <- value ## S4 replacement method for signature 'Subtotal' anchor(x) <- value ## S4 replacement method for signature 'Heading' anchor(x) <- value ## S4 replacement method for signature 'SummaryStat' anchor(x) <- value ## S4 replacement method for signature 'Insertion,ANY' subtotals(x) <- value ## S4 replacement method for signature 'Insertion' arguments(x) <- value ## S4 replacement method for signature 'Subtotal' arguments(x) <- value ## S4 replacement method for signature 'Heading' arguments(x) <- value ## S4 replacement method for signature 'SummaryStat' arguments(x) <- value ## S4 method for signature 'Insertion' ## S4 method for signature 'Subtotal' arguments(x, var_items) ## S4 method for signature 'Heading' ## S4 method for signature 'SummaryStat' arguments(x, var_items) ## S4 method for signature 'Insertion' anchor(x, ...) ## S4 method for signature 'Subtotal' anchor(x, var_items) ## S4 method for signature 'Heading' anchor(x, var_items) ## S4 method for signature 'SummaryStat' anchor(x, var_items) ## S4 method for signature 'Insertion' ## S4 method for signature 'Subtotal' ## S4 method for signature 'Heading' ## S4 method for signature 'SummaryStat' ## S4 method for signature 'Insertions' ## S4 method for signature 'Insertions' ... additional arguments to [, ignored For the constructor functions Insertion and Insertions, you can either pass in attributes via ... or you can create the objects with a fully defined list representation of the objects via data the data argument. See the examples. x For the attribute getters and setters, an object of class Insertion or Insertions value For [<-, the replacement Insertion to insert var_items categories (from categories()) or subvariables (from subvariables() to used by the arguments and anchor methods when needed to translate between category/subvariable names and category ids/ Working with Insertions Insertions are used to add information about a variable or CrunchCube that extends the data in the dataset but does not alter it. This new data includes: aggregations like subtotals that sum the count of more than on category together or headings which can be added between categories. Insertions objects are containers for individual Insertion objects. The individual Insertions contain all the information needed to calculate, apply, and display insertions to CrunchCubes and categorical variables. An Insertion must have two properties: • anchor - which is the id of the category the insertion should follow • name - the string to display Additionally, Insertions may also have the following two properties (though if they have one, they must have the other): • function - the function to use to aggregate (e.g. "subtotal") • args - the category ids to use as operands to the function above. Although it is possible to make both subtotals and headings using Insertion alone, it is much easier and safer to use the functions Subtotal() and Heading() instead. Not only are they more transparent, they also are quicker to type, accept both category names as well as ids, and have easier to remember argument names. version 1.30.4
{"url":"https://search.r-project.org/CRAN/refmans/crunch/html/Insertions.html","timestamp":"2024-11-04T09:00:31Z","content_type":"text/html","content_length":"6420","record_id":"<urn:uuid:10a6fabb-36e8-480f-a2fb-3b05238dce35>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00734.warc.gz"}
PROC REG Equivalent in Python

When working with data as a data scientist or data analyst, regression analysis is very common and something that many industries and companies utilize to understand how different series of data are related. There are many major companies and industries which use SAS (banking, insurance, etc.), but with the rise of open source and the popularity of languages such as Python and R, these companies are exploring converting their code to Python. A commonly used procedure for regression analysis in SAS is the PROC REG procedure. In this article, you'll learn the Python equivalent of PROC REG.

PROC REG Equivalent in Python

In SAS, when we do simple regression analysis on continuous variables, we use PROC REG. PROC REG performs Ordinary Least Squares (OLS). Let's say we have data such as the following:

[The height/weight data table is shown as an image in the original post.]

In SAS, to do OLS on this data, for example, to look at the linear relationship between height and weight, we could simply do the following:

[The PROC REG code is shown as an image in the original post.]

The output for this code looks like the following image:

[The PROC REG output is shown as an image in the original post.]

We see here that the linear relationship between height and weight is significant (p_value of 0.0007).

To do this in Python, we can use the statsmodels package. Creating the model and fitting the model is very easy to do. After fitting the model, we print the results to verify we got the same coefficients and p_value as SAS.

import pandas as pd
import numpy as np
from statsmodels.formula.api import ols

model = 'height ~ weight'
results = ols(model, data=data).fit()
print(results.summary())

# OLS Regression Results
#Dep. Variable: height R-squared: 0.600
#Model: OLS Adj. R-squared: 0.569
#Method: Least Squares F-statistic: 19.46
#Date: Sat, 09 Jan 2021 Prob (F-statistic): 0.000703
#Time: 09:39:28 Log-Likelihood: -45.073
#No. Observations: 15 AIC: 94.15
#Df Residuals: 13 BIC: 95.56
#Df Model: 1
#Covariance Type: nonrobust
# coef std err t P>|t| [0.025 0.975]
#Intercept 10.2043 4.112 2.481 0.028 1.320 19.088
#weight 0.3520 0.080 4.412 0.001 0.180 0.524
#Omnibus: 1.249 Durbin-Watson: 2.506
#Prob(Omnibus): 0.535 Jarque-Bera (JB): 0.334
#Skew: 0.357 Prob(JB): 0.846
#Kurtosis: 3.150 Cond. No. 157.
#[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Above we see that we obtained the same coefficient and p_value as SAS.

PROC REG Testing Residuals for Normality Equivalent in Python

When doing OLS and regression analysis, one of the main assumptions we need to test for is normality of the residuals. To do this in SAS, we would do the following with proc univariate:

[The PROC UNIVARIATE code and its results are shown as images in the original post.]

To do this in Python, we can use the scipy package to get the probability plot, and matplotlib to plot it. In SAS, we specified we wanted studentized residuals. To get these in Python, we need a few more steps.

from scipy import stats
import matplotlib.pyplot as plt

influence = results.get_influence()
studentized_residuals = influence.resid_studentized_external
res = stats.probplot(studentized_residuals, plot=plt)

You can see that the chart is identical to the one produced in SAS. To get the p_values for the different normality tests, we can use the Anderson and Shapiro functions from the stats package.

result = stats.anderson(studentized_residuals)
#AndersonResult(statistic=0.5182987927026232, critical_values=array([0.498, 0.568, 0.681, 0.794, 0.945]), significance_level=array([15. , 10. , 5. , 2.5, 1. ]))

stat, p = stats.shapiro(studentized_residuals)

We see we receive the same statistics from these tests as we received from SAS.
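Note that these snippets assume a pandas DataFrame named data is already in scope; the original post shows its 15-row height/weight table only as an image. A minimal hypothetical stand-in (made-up values, so the coefficients and p values will not match the printed output exactly) could be:

import pandas as pd

# hypothetical 15-row stand-in for the table shown as an image in the original post
data = pd.DataFrame({
    'weight': [50, 52, 55, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80],
    'height': [28, 30, 29, 31, 30, 33, 31, 34, 33, 36, 34, 37, 36, 39, 38],
})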
The full code for this example in Python is below:

import pandas as pd
import numpy as np
from statsmodels.formula.api import ols
from scipy import stats
import matplotlib.pyplot as plt

model = 'height ~ weight'
results = ols(model, data=data).fit()

influence = results.get_influence()
studentized_residuals = influence.resid_studentized_external
res = stats.probplot(studentized_residuals, plot=plt)

result = stats.anderson(studentized_residuals)
stat, p = stats.shapiro(studentized_residuals)

I hope that this example has helped you with translating your SAS PROC REG code into Python.
{"url":"https://daztech.com/proc-reg-equivalent-in-python/","timestamp":"2024-11-15T01:12:19Z","content_type":"text/html","content_length":"252523","record_id":"<urn:uuid:00bd5aa2-8aad-4e8c-951d-bfa110240edb>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00107.warc.gz"}
Ordinal Numbers In Spanish 1-20 - OrdinalNumbers.com

The Ordinal Numbers In Spanish – With ordinal numbers, it is possible to put any number of sets in order, and the idea can be extended to build further ordinal numbers. But before you are able to use them, you must understand what they are and how they work. The ordinal number is among the foundational ideas in mathematics. … Read more
{"url":"https://www.ordinalnumbers.com/tag/ordinal-numbers-in-spanish-1-20/","timestamp":"2024-11-13T21:03:27Z","content_type":"text/html","content_length":"47235","record_id":"<urn:uuid:13b84386-3d0f-49d4-bd70-c41b34024826>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00720.warc.gz"}
A simple color map algorithm

I have had to manipulate a lot of images in my life to look nice with brands where no designer was available to do the job. I wanted an algorithm to make this easier for a long time, and recently I developed one. The basic idea is to make, say, the reds or blues in an image specific reds and blues, while maintaining the image as much as possible and still keeping the relationships between colors intact. In the example, we want a color between the old red and blue (some purplish color) to be between the new red and blue in the transformed image (a transformed purple).

Basic Description

To make the explanation easier to visualize, I'll first explain it using only two color channels (red and cyan), so that the color space is 2D and we can easily make images of everything.

This is the two-channel color space we will use for the explanation.

In this space, colors have two coordinates, which represent the red and cyan amounts respectively. (0, 0) corresponds to black, (1, 1) to white, (1, 0) to red and (0, 1) to cyan.

The principle that we will exploit is that there is a straightforward way to map one triangle to another, and that this mapping preserves lines. This second part simply means that if P maps to P' and Q maps to Q', and R is a point between P and Q, then the map of R is a point R' between P' and Q'. The mapping is called an affine map, and it works as follows:

• Suppose the triangles are ABC and A'B'C', where A, B, C and so on are the vertices of the triangles represented as vectors.
• Any point p can be expressed as a weighted sum of A, B, and C (using coordinates), so p = aA + bB + cC. The triplet (a, b, c) is called the barycentric coordinates of p.
• We can now use these weights, and apply them to the new triangle to form a new point p' = aA' + bB' + cC', and this is how we calculate the resulting point for each point.

Below are the barycentric coordinates of important points, and what they map to:

Point            Coordinate               Mapped point
A                (1, 0, 0)                A'
B                (0, 1, 0)                B'
C                (0, 0, 1)                C'
(A + B) / 2      (0.5, 0.5, 0)            (A' + B') / 2
(A + B + C) / 3  (0.333, 0.333, 0.333)    (A' + B' + C') / 3

Some important points and their mappings.

Here are a few things about these coordinates:

• They add to one.
• If they are all positive, p lies inside the triangle. (And so then, will p'.)
• If the coordinate that corresponds to a vertex is 0, p lies on the line formed by the other two vertices (and so p' will lie on the line formed by the mapped vertices).

So how do we use this transformation to map colors of an image? Let's start with one color:

1. Pick a color in the input image.
2. This point, together with the vertices of a color space, forms a point set. Find a triangulation of this set. The Delaunay triangulation is a good candidate, as it avoids sliver triangles, which leads to better results.
3. Pick a color in the available color space that we want to map it to.
4. Now move all the vertices of the triangles that correspond to the input color to the output color, giving a new set of triangles. This gives us a mapping from each triangle T_i in the input set to a triangle T'_i in the output set.
5. Now, for each pixel in the input image:
   □ Find the triangle T it is in, and the corresponding mapped triangle T'.
   □ Calculate the affine mapping of that point.
   □ This is the new color of the pixel.

Let's start with a simple example where we map a greyish red to a brighter red. Here is the color space, with the point we chose to base the mapping on marked with the yellow dot.
This gives us a triangulation with four triangles. The blue point is another color from the image.

Here is the color space, with our chosen point moved to the color we want to map it to, and the corresponding triangle vertices along with it. The blue point is mapped as described, leading to a slightly redder color.

Here is how it transforms the image:

The original image and the mapped image. The yellow and blue markers correspond to colors marked in the color space above.

Here is the same idea, but with two input colors. The main difference is that we now have more triangles; everything else stays the same. And here is the result.

The original and mapped image. The yellow and blue markers correspond to colors marked in the color space above as before, but now the color below the blue marker is mapped to a target color we chose.

The same idea applies to 3 channel color spaces, but here we rely on the affine mapping between tetrahedra as the basis: Given two tetrahedra ABCD and A'B'C'D', calculate the barycentric coordinates of a point p (a, b, c, d) such that p = aA + bB + cC + dD, then use these to calculate a new point p' = aA' + bB' + cC' + dD'.

The processing algorithm then becomes the following:

1. Select a set of input colors to map.
2. These points, together with the vertices of the cube that represents the color space, form a set. Find a triangulation for this set. As in the 2D case, the Delaunay triangulation gives good results.
3. For each of the input colors, select a color you want to map it to.
4. Now move all the vertices of the tetrahedra that correspond to the input color to the output color, giving a new set of tetrahedra. This gives us a mapping from each tetrahedron T_i in the input set to a tetrahedron T'_i in the output set.
5. Now, for each pixel in the input image:
   □ Find the tetrahedron T it is in, and the corresponding mapped tetrahedron T'.
   □ Calculate the affine mapping of the pixel color.
   □ This is the new color of the pixel.

Implementation Notes

Most of the processing can be done in advance, so that each pixel requires two computations:

1. Determining which tetrahedron it falls into.
2. Multiplying the color with a matrix associated with the tetrahedron.

The matrices are all computed in advance, so the main bottleneck is determining which tetrahedron the color falls into. Since it is a pixel-wise operation, the algorithm is easy to parallelize.

There are two things to watch out for:

• Avoid using input color points that are very close to each other. Choose only one from any closely spaced group. This is especially important near the vertices of the color space cube—directly map these vertices to the target color.
• Keep the output color within a close range of the input color to prevent visual artifacts. For more drastic color changes, consider using a different algorithm better suited for significant changes.
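To make the mapping concrete, here is a minimal NumPy sketch of the 2D version described above (function and variable names are mine, not the author's); the same construction extends to tetrahedra by using 4x4 homogeneous matrices, matching the per-tetrahedron matrices mentioned in the implementation notes:

import numpy as np

def barycentric(p, A, B, C):
    # Solve p = a*A + b*B + c*C with a + b + c = 1 via a 2x2 linear system.
    T = np.column_stack([A - C, B - C])
    a, b = np.linalg.solve(T, p - C)
    return np.array([a, b, 1.0 - a - b])

def affine_matrix(simplex_in, simplex_out):
    # Columns are vertices in homogeneous coordinates; M = D @ inv(S) sends each
    # input vertex to the corresponding output vertex, and hence maps any point
    # of the input simplex affinely onto the output simplex.
    S = np.column_stack([np.append(v, 1.0) for v in simplex_in])
    D = np.column_stack([np.append(v, 1.0) for v in simplex_out])
    return D @ np.linalg.inv(S)

def apply_affine(M, p):
    return (M @ np.append(p, 1.0))[:-1]

# Example in the red/cyan space: red = (1, 0), cyan = (0, 1), black = (0, 0).
red, cyan, black = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0])
new_red = np.array([0.95, 0.15])                   # hypothetical target colour
tri_in, tri_out = (red, cyan, black), (new_red, cyan, black)

p = np.array([0.5, 0.4])                           # a colour inside the triangle
w = barycentric(p, *tri_in)                        # all positive: p is inside
M = affine_matrix(tri_in, tri_out)                 # precompute once per triangle
print(w @ np.asarray(tri_out), apply_affine(M, p)) # both give the mapped colour

For an image, affine_matrix would be computed once per triangle (or tetrahedron) pair, so each pixel costs one point-in-simplex test plus one matrix multiply, as the implementation notes describe.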
{"url":"https://www.code-spot.co.za/2024/05/05/a-simple-color-map-algorithm/","timestamp":"2024-11-09T16:43:18Z","content_type":"text/html","content_length":"84038","record_id":"<urn:uuid:7f9987c8-6e79-4fbc-b312-0275b7897f49>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00572.warc.gz"}
About use of particular set definition

Hello, I am new to GAMS. While writing one program I encountered the following problem: I have one set for the number of stages, called k (having value 2), and one for the number of temperature locations (no. of stages + 1). Many of my parameters are defined on the number of stages, and one on the number of temperature locations. I have expressions like:

Set
  m /1*3/ number of temperature locations
  k(m) /1*2/ number of stages;
Parameter
  t(i,m)
  q(i,k);

t(i,k) - t(i,k+1) =e= constant*(sum(k,q(i,k)));   ...(1)

and

t(i,k)*t(i,k+1);

The problem is that I need to have t(i,"1") to t(i,"3") while using the set k, because the other parameter (q(i,k)) depends on k. So I run into a problem by not getting t(i,"3"). Can anyone suggest an alternative usage of sets so that this problem can be resolved? I will be grateful.

The easiest way is removing the dimension of parameter t in the definition (i.e. Parameter t;) and defining a separate equation for k=2, i.e.

t(i,'2') - t(i,'3') =e= constant*(sum(k$(ord(k)=2),q(i,k)));   ...(1)

and

t(i,'2')*t(i,'2');

Not quite sure what the issue is but maybe you want to use an alias for k. Aliases let you run over a set with multiple indices, like:

set k / 1 * 3 /;
alias (k,k2);
f(k).. s(k) =e= sum{k2$[ord(k2) …
–
Steven Dirkse, Ph.D.
GAMS Development Corp., Washington DC
Voice: (202)342-0180 Fax: (202)342-0181
sdirkse@gams.com http://www.gams.com
{"url":"https://forum.gams.com/t/about-use-of-particular-set-definition/388","timestamp":"2024-11-02T17:30:04Z","content_type":"text/html","content_length":"21640","record_id":"<urn:uuid:3ee4e000-5f16-4911-9ed7-7b586e92bc45>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00152.warc.gz"}
Reading and conducting instrumental variable studies: guide, glossary, and checklist

Research Methods & Reporting

BMJ 2024;387 doi: https://doi.org/10.1136/bmj-2023-078093 (Published 14 October 2024) Cite this as: BMJ 2024;387:e078093

Correspondence to: N M Davies neil.m.davies{at}ucl.ac.uk (or @nm_davies on X)

In clinical practice, establishing causation is crucial for informed decision making in patient care. Instrumental variable analysis is increasingly used to provide evidence about causal effects in clinical research (see box 1 for glossary). Instruments are variables that are associated with the intervention, which have no uncontrolled common causes with the outcome and only affect the outcome via the intervention. They can be used to overcome measured and unmeasured confounding of intervention-outcome associations and provide unbiased estimates of the causal effects of an intervention on an outcome using observational data (fig 1). Instrumental variables are defined by three assumptions (box 2).

Box 1 Glossary of terms used in instrumental variable studies

• Natural experiment: A source of variation in the likelihood of receiving an intervention in the real world that can be used to investigate the causal impact of an intervention.
• Instrumental variable: A specific variable in a dataset that is (1) associated with an intervention, (2) has no uncontrolled common causes with the outcome, and (3) only affects the outcome via the intervention.
• Fourth point identifying assumption: The assumption used to estimate the mean effect of the intervention on the outcome, without which it is only possible to estimate bounds for the effect of the intervention on the outcome.
• Local average treatment effect or complier average causal effect: The effect of an intervention on individuals whose intervention status is affected by the instrument.
• Counterfactual values: The patients' outcomes had they been allocated to the other strategy (ie, the patients' outcomes following intervention if they were assigned to control or the patients' outcomes following control if they were assigned to intervention).

Statistical methods

• Reduced form: The instrument-outcome association, which, if the instrumental variable assumptions hold, is a valid test of the null hypothesis that the intervention does not affect the outcome.
• Wald estimator: The ratio of the instrument-outcome and instrument-intervention associations.1
• Two-stage least squares: An instrumental variable estimator. The first stage estimates the instrument(s)-intervention association(s) and uses these associations to predict the intervention values.2 The second stage uses the predicted interventions in a regression to estimate the effect of the intervention(s) on the outcome.

Box 2 Key assumptions that define instrumental variables3

• Relevance (IV1): The instrument must be associated with the intervention.
• Independence (IV2): The instrument and the outcome must have no uncontrolled common causes.
• The exclusion restriction (IV3): The instrument must only affect the outcome through the intervention.

Instrumental variable analysis has a long history (supplementary box 1), with applications in many fields, including healthcare and economics.
The approach has increased in popularity owing to the availability of larger datasets, the recognition of the need to obtain reliable estimates when key covariates are not measured, and the use of different analytical assumptions.45 Researchers increasingly use instrumental variable analyses to inform a wide range of clinical questions. For example, institutional variation in testing or treatment practices has been used as an instrumental variable to estimate the effects of perioperative testing for coronary heart disease on postoperative mortality rates,6 the relative safety of robotic versus laparoscopic surgery for cholecystectomy,7 and the effects of the length of storage of red blood cells on patient survival.8 Physicians' preferences for treatments have been used to investigate the effects of cyclo-oxygenase-2 (COX-2) inhibitors versus non-selective, non-steroidal anti-inflammatory drugs (NSAIDs) on gastric complications,910 and the effects of conventional versus atypical antipsychotic drug treatments on mortality in elderly patients.11 Allocation to treatment in randomised controlled trials with non-compliance is an instrumental variable previously used to investigate the effects of flexible duty hour conditions for surgeons on patient outcomes and surgeons' training and wellbeing12 and the effects of reducing amyloid levels on cognition.13 Distance from or time to admission to a particular type of hospital has been used as an instrument for receiving a specific treatment.1415

One of the most commonly used applications of instrumental variables is mendelian randomisation—using genetic variants as instrumental variables. The core principles of instrumental variable analysis still apply to mendelian randomisation; they have been covered in detail elsewhere and will not be discussed here.1617 Here, we provide a practical guide for researchers to read, interpret, and conduct instrumental variable studies using non-genetic observational data. In this article, we discuss why a study should use instruments, key concepts and assumptions, how to assess the validity of instrumental variable assumptions, and how to interpret results.

Summary points

• Instrumental variable analysis is a research method that uses naturally occurring variation (ie, variation not controlled by researchers), such as policy decisions, clinical preferences, distance, or time, to provide evidence about the causal effects of interventions on outcomes from observational data
• Instrumental variables can provide credible evidence about causal effects even if other observational techniques have residual confounding, reverse causation, or other forms of bias
• This article demonstrates how to perform an instrumental variable analysis using commonly available packages
• In common with all empirical research methods, instrumental variable analysis depends on assumptions that readers and reviewers must assess
• Many sources of evidence, using a range of assumptions, can help inform clinical decisions
• A critical appraisal checklist is provided to help assess and interpret instrumental variable studies

Clinical and public health implications

Researchers increasingly use large datasets of electronic medical records, registries, or administrative claims data to provide evidence about the effects of interventions on patient outcomes. An important limitation of these datasets is that while the large sample size allows for very precise results, they frequently have inadequate measures of critical confounders.
Confounders are variables that affect the likelihood of receiving the intervention and that also affect the outcome (eg, previous neuropsychiatric diagnoses and the likelihood of being prescribed varenicline rather than nicotine replacement therapy for smoking cessation). Patients rarely receive interventions entirely randomly. Key confounders, such as morbidity and other indications for intervention, are often challenging or impossible to measure with sufficient accuracy from diagnosis or billing codes or are unmeasured or unmeasurable. Thus, matching individuals receiving the intervention with sufficiently comparable controls can be difficult or impossible. As a result, observational analysis of large scale databases could provide unreliable evidence about interventions' comparative effectiveness and safety. This issue is challenging for clinicians and patients because they need reliable evidence of the causal effects of different interventions to make well informed decisions. Instrumental variables can provide an alternative source of evidence about the effects of different interventions and, while less precise than other approaches, might be less affected by individual level biases such as confounding by indication, where the indications for intervention also affect the likelihood of an outcome.

Why use an instrument?

Most observational methods, such as multivariable adjusted regression or propensity score analysis, assume that it is possible to measure a sufficient set of confounders to account for all differences in the outcome between individuals given the intervention and control, except for those caused by the intervention.1819 However, the correct set of confounders is not always known, and even if they have been identified, measuring and accounting for baseline differences is extremely difficult, which can result in multivariable-adjusted and propensity score analyses having serious biases and providing misleading results. For example, COX-2 inhibitors were developed to cause fewer gastrointestinal complications than traditional NSAIDs and marketed to patients and physicians accordingly. As a result, patients prescribed these drug treatments typically were at higher risk of gastrointestinal complications at baseline. Thus, in observational datasets, patients prescribed COX-2 tended to have higher rates of gastrointestinal complications than patients prescribed NSAIDs, a difference that was not fully attenuated after adjustment for measured confounders. This result is because the pre-existing differences in the risk of gastrointestinal complications are very challenging to measure sufficiently, especially in electronic medical records, resulting in residual confounding by indication. Alternatively, patients prescribed nicotine replacement therapy for smoking cessation differ from those prescribed drugs such as varenicline: patients prescribed nicotine replacement therapy tend to be more unwell, be older, and have poorer mental health.20 However, electronic medical records or other datasets often do not record these differences. For example, patients might discuss smoking cessation with their general practitioner when they have preclinical symptoms of heart disease; these symptoms might not be perfectly captured in medical records. Instrumental variable analysis offers an approach to deal with these problems. It relies on a distinct set of assumptions from other methods, which do not require measuring or knowing all the potential confounders of the intervention and outcome.
What is an instrumental variable?

The following three assumptions define instrumental variables. Firstly, the instrument is associated with the intervention of interest (relevance); secondly, it shares no uncontrolled common cause with the outcome (independence); and thirdly, it only affects the outcome through the intervention (exclusion restriction). Instruments only need to be associated with the likelihood of receiving the intervention; they do not necessarily need to cause it.3 Instrumental variable analyses exploit naturally occurring variation (the instrument) to estimate the impact of the intervention on an outcome. This variation can be due to clinical or policy decisions unrelated to unmeasured confounders. Box 2 defines these assumptions, and figure 2 uses a directed acyclic graph to represent these assumptions. Assessing the plausibility of the assumptions is critical to determining whether a proposed instrumental variable is valid and is discussed in detail below. These assumptions can be defined unconditionally, or more often conditionally, on other important covariates in a dataset; for example, physicians' prescribing preferences are usually conditioned on a patient's age. If these assumptions are violated, for example, by residual confounding of the instrument-outcome association, then the results of an instrumental variable analysis can be more biased than other approaches, such as multivariable adjustment and propensity score. Thus, a key challenge for authors and readers of instrumental variable studies is determining whether the assumptions are plausible for the research question.

Types of instruments

Numerous natural experiments have been proposed and assessed as potential instruments. These commonly include physician preference (eg, preference to prescribe one intervention versus another for a given diagnosis), access to intervention (eg, distance to a hospital with specific speciality staff or equipment), or randomisation (eg, in the context of a randomised controlled trial with non-compliance). Examples of these instruments are given below. Other sources of variation have also been used and are covered elsewhere.21

Physician preference

Clinicians have preferences for many clinical decisions, such as testing, treatments, or diagnoses. These pre-existing preferences could be independent of the subsequent patients they see. For example, a physician might prefer prescribing nicotine replacement therapy over drug treatments such as varenicline.20 Studies generally cannot measure physicians' preferences for one intervention or another, so they measure preferences in other ways. For instance, physicians' prescribing preferences might be captured by looking at previous prescriptions for the interventions under consideration or, more rarely, surveys used to elicit preferences. Physicians' prescriptions to their previous patients are often associated with the prescriptions they issue to their future patients. If this preference occurs in a way that is unrelated to the patient level confounders of their current patients, the independence assumption could hold. Physicians' previously demonstrated preferences are consistently associated with their prescriptions to their current patients.910 A potential weakness of physicians' prescribing preferences as an instrument is that they might not be specific to the treatment of interest and could be associated with broader differences in care.
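As a small illustration of deriving such a preference-based instrument from prescribing records (a hypothetical pandas sketch; the column names are mine, not from the cited studies), each physician's immediately preceding prescription can serve as the measured preference:

import pandas as pd

# hypothetical prescribing records: physician id, visit date, drug prescribed
rx = pd.DataFrame({
    'physician': ['a', 'a', 'a', 'b', 'b'],
    'date': pd.to_datetime(['2020-01-05', '2020-02-10', '2020-03-01',
                            '2020-01-20', '2020-02-15']),
    'drug': ['varenicline', 'NRT', 'varenicline', 'NRT', 'NRT'],
})
rx = rx.sort_values(['physician', 'date'])
# instrument: the same physician's previous prescription (missing for first patients)
rx['prior_rx'] = rx.groupby('physician')['drug'].shift(1)
print(rx)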
Access instruments include distance to hospitals,14 travel times to the hospital as a proxy for quicker treatment,22 the raising of the school leaving age as a proxy for education,23 and date of treatment as a proxy for choice of treatment.24 Here, for the instrumental variable assumptions to hold, access must associate with the likelihood of receiving the intervention but not directly affect the outcome or share any unmeasured confounders with the outcome. A potential weakness of studies using access based instruments is that geographical location and distance to healthcare facilities are often highly non-random and are related to important unmeasured confounders such as socioeconomic position.

Random assignment in the presence of non-compliance

Treatment assignment in a randomised controlled trial with non-compliance or an encouragement design can be an instrumental variable.252627 By design, random assignment should balance confounders between individuals assigned to the intervention and those assigned to the control. Conventional analyses of randomised trials report the intention-to-treat estimate, which is the difference in outcomes between participants assigned to the intervention and participants assigned to the control. However, if some trial participants do not comply with their allocation, the intention-to-treat estimate will underestimate the effects of taking the intervention because it will also reflect the effects of compliance. Instrumental variable analysis can be used to estimate the effects of taking the intervention, which can be estimated by assuming that the treatment assignment affects the likelihood of receiving the intervention in the same direction for all individuals (ie, the instrument has a monotonic effect: if it increases the likelihood of the intervention for some individuals, it does not decrease it for others). Under the monotonicity assumption, the instrumental variable estimate will reflect the complier or local average treatment effects (see box 3 for definitions). This parameter is the effect of the intervention on individuals whose treatment status was affected by the instrument. A limitation of random assignment is that assignment might alter behaviour in other ways, leading to violations of the exclusion restriction (eg, if individuals assigned to control in an unblinded trial seek treatment via other means). Examples of using allocation to treatment as an instrument include a cluster randomised trial of vitamin A supplementation with non-compliance.25 Treatment allocation can be used to estimate the effects of an underlying continuous risk factor, for example, the effects of reducing amyloid levels on cognition rather than the effect of being allocated to amyloid-lowering drug treatment.13 If the risk factor is continuous, then it is more challenging to interpret under monotonicity, and studies might make other assumptions (eg, assuming a constant effect of the risk factor).

Box 3 Point identifying assumptions and interpretation

The three core assumptions for instrumental variable analysis are only sufficient to estimate the bounds of a causal effect, which are the largest and smallest values consistent with the observed data. However, instrumental variable bounds are typically very wide, so most instrumental variable studies require a further fourth, point identifying assumption.
Options for the fourth assumption include the constant treatment effect (IV4h), no effect modification (IV4n), no simultaneous heterogeneity (NOSH; IV4nosh), and monotonicity (IV4m).22829

• The constant treatment effect assumption requires that the effect of the intervention on the outcome is the same for all individuals. For example, if the intervention of interest was an anti-hypertensive drug treatment such as angiotensin-converting enzyme (ACE) inhibitors, these inhibitors should give the same reduction in systolic blood pressure for all participants, regardless of any other characteristics.
• The no effect modification assumption requires that the intervention has the same effect on the outcome irrespective of the instrument's value. For example, if the effects of ACE inhibitors are the same irrespective of physicians' preference.
• The NOSH assumption requires that any heterogeneity in the effects of the instrument on the intervention is independent of heterogeneity in the effects of the intervention on the outcome. This assumption would hold if the variation in the effect of physician preferences on prescribing were not related to the treatment's expected efficacy (ie, the instrument implicitly samples a representative sample of causal effects from the population).
• The monotonicity assumption requires that the effect of the instrument on the likelihood of receiving the intervention is always in the same direction (eg, the instrument only increases or decreases the likelihood of receiving the intervention). For example, a patient with a physician who prefers to prescribe ACE inhibitors will be more likely to receive an ACE inhibitor than a patient who attends a physician who prefers another anti-hypertensive drug.

Assessment of point identifying assumptions

These point identifying assumptions are untestable but falsifiable. The constant treatment effect assumption is potentially falsifiable by checking for differences in the implied effects of the intervention across covariates. For binary interventions with causal binary instruments and binary outcomes, monotonicity inequalities can falsify the monotonicity assumption.30 Cumulative distribution graphs for continuous interventions can assess this assumption.2 If the proposed instrument is a preference, assessing the plausibility of the monotonicity assumption is possible by conducting a preference survey.31 These surveys suggest that a strict definition of monotonicity is unlikely to be plausible, as there is substantial heterogeneity in clinical treatment decisions. However, Small and colleagues in 2017 proposed a more plausible assumption: stochastic monotonicity, which requires that the effect of the instrument on the exposure is monotonic conditional on a set of covariates.32

Interpretation of instrumental variable estimates

Instrumental variable estimates can be interpreted as the average treatment effect under the constant treatment effect, no effect modification, or NOSH assumptions. The constant treatment effect assumption identifies the average treatment effect by assuming the intervention has the same effect for all individuals. This assumption is most commonly used to identify the effects on continuous outcomes. However, this assumption can be implausible. For example, an intervention could only have a constant effect on a binary outcome if it entirely determined the outcome or did not affect it.
In the example of ACE inhibitor use, it is implausible to assume that ACE inhibitors have the same effect on every individual in the population. The no effect modification assumption identifies the intervention's effect on those participants who receive the intervention by assuming that the effect of the intervention is independent of the instrument's value. For example, in a randomised controlled trial with an encouragement design where the intervention is an encouragement to take a treatment, allocation to the intervention or control arm does not change the effect of the treatment. This assumption can identify interventions' effects on binary outcomes and estimate causal risk and odds ratios. Finally, the NOSH assumption requires that heterogeneity in the effects of the instrument on the likelihood of receiving the intervention must be independent of heterogeneity in the effect of the intervention on the outcome to be interpreted as the average treatment effect.29

Instrumental variable estimates can be interpreted as reflecting a local average treatment effect using the monotonicity assumption. The monotonicity assumption identifies the effects of the intervention on those individuals whose intervention status was affected by the instrument. This assumption is typically, but not exclusively, applied to binary instruments and interventions.33 Individuals who either always take the intervention or never take the intervention, regardless of whether they were assigned to it, will not be affected by the instrument. Two groups of individuals remain: those who only take the intervention when they are assigned to it (known as compliers), and those who only take the intervention when they are not assigned to it (known as defiers). The monotonicity assumption assumes that there are no defiers in the sample. For example, physicians' prescribing preferences could have a monotonic effect if patients prescribed nicotine replacement therapy who attended a physician who previously prescribed varenicline would also have been prescribed nicotine replacement therapy by a physician who previously prescribed nicotine replacement therapy (and vice versa).

The instrumental variable assumptions need to be assessed and considered for each application (box 4). Just because the assumptions are plausible for one treatment or population does not mean that they will be valid in another.

Box 4 Critical appraisal checklist for evaluating instrumental variable studies

Readers of instrumental variable studies could consider the following questions:

Core instrumental variable assumptions

• Is there evidence that the instruments are associated with the intervention of interest? Does the study report a first stage partial F statistic?
• Are the instruments associated with measured potential confounders of the intervention and outcome?
• Are there likely to be different confounders of the instrument-outcome association than the intervention-outcome association?
• Is the proposed instrument likely to affect the outcome via mechanisms other than the intervention of interest?
• Do the authors use negative control outcomes to investigate the plausibility of the instrumental variable assumptions?

Fourth instrumental variable assumption

• Do the authors report the fourth instrumental variable assumption?
• Do the authors describe their estimand and how it relates to clinical practice?
• Does the study clearly state the instrumental variable estimator used in the analysis?
• For two-stage least squares, are the same covariates included in both stages of the analysis?

Data presentation

• Do the authors present the instrument-outcome association, an instrumental variable estimate, or both?
• If they provide an instrumental variable estimate, do they compare it with the multivariable-adjusted estimate?
• Was the definition of the instrument prespecified, or was the definition of the instrument chosen based on the data under analysis?
• Do the authors provide the code they used to allow researchers to reproduce their findings?
• If the instrumental variable estimate is similar to the multivariable adjusted estimate and provides evidence consistent with a causal effect, could it be due to weak instrument bias in a single study or confounding of the instrument-outcome association?
• If the instrumental variable estimate differs from the multivariable adjusted estimate and provides little evidence of a causal effect, could this be due to weak instrument bias or confounding?
• Are the 95% confidence intervals of the estimate sufficiently precise to test for differences with the multivariable adjusted estimate and detect a clinically meaningful difference?

Clinical implications

• Do the results triangulate with other forms of evidence?
• If a randomised clinical trial is not feasible or unlikely to be conducted in the short term, and there is existing evidence from multiple instrumental variable studies, and other robust study designs converge on consistent results, this information may help guide patient care; for example, informing clinical guidelines or regulatory decisions.

Assessment of instrumental variable assumptions

Directed acyclic graphs provide a convenient and transparent way to depict and explain the assumptions required for an applied instrumental variable analysis.343536 Researchers can adapt the structure used in figure 2 for specific research questions. Studies can then use empirical data to assess whether the three core assumptions for instrumental variables hold. The first instrumental variable assumption (relevance) states that the instrument must be strongly associated with the likelihood of taking the intervention. The strength of the instrument-intervention association is easily testable. For example, in the study of drug treatments for smoking cessation, we found that physicians who had previously prescribed varenicline were 24 percentage points (95% confidence interval 23 to 25) more likely to prescribe varenicline to their subsequent patients than physicians who had previously prescribed nicotine replacement therapy. However, a difference in treatment rates across instrument values is insufficient to measure instrument strength because it does not reflect the sample size. In a small study of a few hundred patients, even a very large difference in treatment rates across the instrument's value will provide very little information about the effects of treatment. In contrast, the first stage partial F statistic of the regression of the intervention on the instrument indicates both the strength of the association and the total sample size. The first stage partial F statistic in an instrumental variable analysis is analogous to the sample size in a randomised controlled trial. Most instrumental variable estimation packages in Stata and R (such as ivreg2 or AER, respectively)3738 will report this F statistic by default.
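As a minimal illustration of this check (simulated data and variable names of my own, not from the studies above), the first stage F statistic for a single instrument can be read straight off the first stage regression in Python's statsmodels:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
z = rng.binomial(1, 0.5, 5000)        # binary instrument (eg, prior prescription)
x = rng.binomial(1, 0.3 + 0.2 * z)    # intervention; instrument shifts uptake by 20pp

first = sm.OLS(x, sm.add_constant(z)).fit()
print(first.fvalue)                   # with a single instrument and no covariates,
                                      # the overall F equals the partial F statistic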
A value above 10 is considered strong and unlikely to lead to weak instrument bias.39 However, an F statistic above 10 does not guarantee that an instrumental variable study will have sufficient statistical power to detect an effect size of interest.

The remaining assumptions are untestable, so they cannot be proven to hold, but they are falsifiable.3140 An assumption is falsifiable if it is possible to use empirical data to disprove it. The independence assumption can be falsified by testing the instrument-covariate associations using covariate balance and bias component plots,4142 or randomisation tests.43 If the instrumental variable assumptions hold, no associations between the instrument and alternative pathways or other covariates that predict the outcome should be detected.44 The exclusion restriction is falsifiable by demonstrating that other variables that also affect the outcome are affected by the instrument. For example, in a study of angiotensin-converting enzyme (ACE) inhibitors for cardiovascular disease, if physicians who are more likely to prescribe these inhibitors are also more likely to prescribe statins, which also affect cardiovascular disease, the exclusion restriction assumption would be violated. Another way to falsify the independence and exclusion restriction assumptions is using negative controls to investigate whether the instrument predicts the outcome in subgroups of the population for which the instrument does not affect the likelihood of receiving the intervention. If evidence indicates that the instrument affects the outcome, even in subgroups where the instrument does not affect the likelihood of receiving the intervention, the instrumental variable assumptions are unlikely to be plausible (eg, by using patients who do not have hypertension (eg, children) who were treated for other indications by physicians who preferred ACE inhibitors). Falsification tests are useful indicators of how plausible the assumptions are likely to be; however, failure to falsify an assumption does not prove it holds. For example, if the instrument was associated with an unmeasured confounder of the intervention and the outcome, this association would not be evident in a covariate plot that only included measured covariates. A further way to assess the plausibility of assumptions is to investigate any differences (heterogeneity) in the effect sizes implied by different instruments. This approach requires more than one instrument (which, when there are more instruments than interventions, is technically known as being over-identified). If more than one instrument is affecting the likelihood of receiving the intervention (eg, physicians' preferences and distance to the healthcare facility), the heterogeneity in the effects of the intervention implied by each instrument could indicate violations of the instrumental variable assumptions. Bonet's instrumental variable inequality tests can also falsify binary interventions' exclusion restriction and independence assumptions.45

How to generate instrumental variable estimates

Instrumental variables can test whether an intervention affects an outcome and estimate the magnitude of that effect. The simplest estimator is the instrument-outcome association (reduced form; box 1), which can be estimated using regression methods (eg, linear or logistic regression). This estimator does not estimate the magnitude of the effect of the intervention on the outcome.
However, under the instrumental variable assumptions, it is a valid test of the null hypothesis that the intervention does not affect the outcome. An advantage of this test is that it is simple, requires the fewest and weakest assumptions, and can test for the existence of an effect. A disadvantage is that it does not provide a scale for the effect of the intervention on the outcome, limiting the interpretation of the results. Ideally, we want to know the average effect of the intervention (also known as the average treatment effect), and not just the effect of the instrument. For example, researchers and readers might be more interested in the effect of prescribing varenicline or nicotine replacement therapy (the intervention) on their current patient than the effect of physicians' previous prescriptions for smoking cessation treatment (the instrument) on smoking cessation rates (the outcome). Several instrumental variable estimators can estimate the average treatment effect. Some of the most used instrumental variable estimators are covered below. However, these methods were largely developed to estimate average treatment effects for normally distributed instruments, exposures, and outcomes assuming linear mechanisms. Although, in practice, these methods are widely used for binary outcomes or non-linear mechanisms (sometimes with the same or a different name), the interpretation can be difficult and more advanced methods might be required.

If only one instrument is available, then the average effect of the intervention on an outcome can be estimated using instrumental variable estimators, such as the Wald estimator, which is the ratio of the instrument-outcome association to the instrument-intervention association. This estimator rescales the instrument-outcome association to the intervention scale and indicates the effect of a unit change in the intervention on the outcome. For example, if patients prescribed smoking cessation treatments by physicians who previously prescribed varenicline were 1 percentage point more likely to cease smoking (the instrument-outcome association) and 10 percentage points more likely to be prescribed varenicline (the instrument-intervention association), then the Wald estimate would be 0.01 ÷ 0.1 = 0.1. This estimate would imply that prescribing varenicline increases the absolute probability of stopping smoking by 10 percentage points.

When a study has one or more instruments available, for example, if a study used physicians' preferences and distance to the healthcare facility as instruments, then the effects of the intervention on the outcome can be estimated using a two-stage least squares estimator. This estimator comprises two regressions or stages. The first stage is a regression of the intervention on the instruments, which can predict the intervention value based on the instrument values. The second stage is a regression of the outcome on the predicted intervention status. The estimated coefficient on the predicted value is the instrumental variable estimate of the effect of the intervention on the outcome. A simulated example and the formulas are provided in the supplementary materials. It is usually essential that both stages of instrumental variable analysis contain the same covariates.46 However, running the two stages manually will not account for the estimation error from the first stage and is likely to give incorrect standard errors and confidence intervals.
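To illustrate the mechanics just described, here is a minimal simulated sketch (hypothetical data and effect sizes, not taken from the cited studies) of the Wald estimator and a manual two-stage least squares using Python's statsmodels; note that it inherits exactly the standard error caveat above:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
u = rng.binomial(1, 0.5, n)                    # unmeasured confounder
z = rng.binomial(1, 0.5, n)                    # binary instrument
x = rng.binomial(1, 0.3 + 0.1 * z + 0.2 * u)   # intervention; instrument adds 10pp
y = 0.1 * x + 0.2 * u + rng.normal(0, 1, n)    # outcome; true effect of x is 0.1

# Wald estimator: instrument-outcome association over instrument-intervention association
wald = (y[z == 1].mean() - y[z == 0].mean()) / (x[z == 1].mean() - x[z == 0].mean())

# Manual two stages (the second stage standard errors are not valid, as noted above)
first = sm.OLS(x, sm.add_constant(z)).fit()
second = sm.OLS(y, sm.add_constant(first.fittedvalues)).fit()
print(wald, second.params[1])                  # both approximate the true effect, 0.1

In practice, the packages discussed next perform both stages in one step and return correct standard errors.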
Typically, most analyses use a package such as ivreg2 in Stata or the AER or ivreg packages in R,3738 which compute the instrumental variable estimates in one step and integrate the estimation errors from both stages.

Different types of outcomes require different instrumental variable estimators, which rely on logic similar to the two-stage least squares estimator. Commonly used estimators include:

• Continuous outcomes: Mean differences (eg, effects of smoking cessation treatment on body mass index using physicians' prescribing preferences20) can be estimated using additive structural mean models.
• Binary outcomes: Causal risk differences, odds ratios, and risk ratios (eg, estimating the effects of coronary bypass surgery on mortality14) can be estimated using additive, logistic, and multiplicative structural mean models and control function approaches.334748
• Survival outcomes: Methods using instrumental variables with survival outcomes, which adopt a similar approach to two-stage least squares or the control function approach,49 have been developed to allow for covariate and outcome dependent censoring.50 For example, estimating the effects of screening frequency on colorectal cancer diagnoses using international differences in screening.
• Instrumental variable quantile regression: Non-linear effects of the intervention can be estimated using instrumental variable quantile regression.525354 For example, investigating whether the effects of a unit increase in body mass index on healthcare costs differ between underweight and overweight individuals.55

Methods for instrumental variable estimation are an area of active methodological development, spanning statistics, econometrics, and computer science. Examples include estimators combining instrumental variable analysis and matching56 and estimators using machine learning.575859

Data for instrumental variable studies

Instrumental variable studies typically require measures of the instrument, the intervention, and the outcome for individual level data analysis using the same sample of people. This straightforward approach allows the most flexibility to test and evaluate the instrumental variable assumptions. However, integrating additional external datasets can improve the power and precision of instrumental variable analyses using an approach known as two-sample instrumental variable analysis.60 This approach estimates the instrument-intervention association in one sample and the instrument-outcome association in another, from which the Wald estimator can be calculated. For example, a study could estimate the effects of policy reform on educational attainment using census data from the entire population but estimate the effects on health outcomes in a cohort study subsampled from the same underlying population.61 A two-sample instrumental variable analysis does not need measures of the intervention or the outcome in all samples, which can increase power considerably, particularly when the outcome is rare or difficult to measure.

Instrumental variable analysis can provide reliable evidence about the causal effects of an intervention, even if the intervention-outcome association is affected by unmeasured confounding. Key to conducting and reading instrumental variable studies is assessing the plausibility of the three core assumptions on instrumental variables. Does the instrument strongly associate with the intervention?
Is there a rationale for why the instrument-outcome association is less likely to have confounding than the intervention-outcome association? Is there evidence that measured covariates are less strongly associated with the instrument than the intervention? Are alternative pathways available that could mediate the effects of the instrument?

Instrumental variable analysis can provide a valuable complement to other forms of observational analysis. Unlike other approaches, instrumental variables have distinct assumptions and can strengthen inferences when combined with other sources of evidence. The increasing amount of data available for clinical research means that there is a growing opportunity to use these methods to improve patient care.

We thank Brian Lee, Luke Keele, Ting Ye, and Robert Platt for their extremely helpful comments; and Christopher Worsham and Tarjei Widding-Havneraas for reviewing the manuscript.

• Contributors: VW, ES, and NMD conceived the paper and wrote the first draft. All other authors revised the manuscript and provided critical feedback. All authors act as guarantors. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.
• Funding: VW (MC_UU_00032/03) and ES (MC_UU_00032/01) work in the MRC Integrative Epidemiology Unit, which receives funding from the UK Medical Research Council. TF has received funding from Pfizer, Takeda, Acadia, and iHeed for unrelated consulting work; and receives funds from The BMJ for editorial work. MGL received support from the Institute for Translational Medicine and Therapeutics of the Perelman School of Medicine at the University of Pennsylvania, NIH/NHLBI National Research Service Award postdoctoral fellowship (T32HL007843), and Measey Foundation. SMD receives research support from RenalytixAI and Novo Nordisk, outside the scope of the current research. NMD is supported by the Norwegian Research Council via grant number 295989 and partly by grant HL105756 from the National Heart, Lung, and Blood Institute (NHLBI). NMD receives funds from The BMJ for editorial work. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.
• Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: support from the UK Medical Research Council, Norwegian Research Council, NIH/NHLBI, Measey Foundation, and Doris Duke Foundation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work; TF has received funding from Pfizer, Takeda, Acadia, and iHeed for unrelated consulting work, and receives funds from The BMJ for editorial work; NMD receives funds from The BMJ for editorial work; SMD receives research support from RenalytixAI and Novo Nordisk, outside the scope of the current research.
• Provenance and peer review: Not commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/.
{"url":"https://www.bmj.com/content/387/bmj-2023-078093","timestamp":"2024-11-08T11:50:07Z","content_type":"text/html","content_length":"298644","record_id":"<urn:uuid:1bcdb3c8-5ba1-4cc5-b170-8f55f50fd02f>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00594.warc.gz"}
How to Calculate SUI Gas Bill in Pakistan | Sui Gas Bill Calculator - SNGPL
As utility bills reach consumers every month, understanding the charges and how they are calculated can sometimes be a complex task. Using the Sui Gas Bill Calculator we will provide you with an estimate of what your monthly Sui Gas bill will be. Keep in mind that this is only an estimate of your monthly bill. The Sui Gas Bill Calculator is a tool that allows us to calculate our gas bill. It helps users plan and estimate the bill amount based on their consumption. This tool is useful for both home and business users and helps track the monthly cost of the Sui gas bill. Here, we will explore the Sui Gas Bill Calculator in Pakistan, highlight its features, and show how it can help you better manage your energy costs and reduce your gas bill.
Introduction to Sui Gas Bill Calculator: Sui Gas Bill Calculator is a valuable tool that allows both domestic and business users to estimate their upcoming gas bills based on various parameters. It takes into account factors such as gas consumption, current gas rates, protected and unprotected consumer categories, and other relevant data to provide an accurate estimate of the expected bill amount.
Key Features of Sui Gas Bill Calculator:
Estimated Consumption: One of the main features of the Sui gas bill calculator is the ability to estimate Sui gas consumption. By entering your consumption details, the calculator can estimate how much gas you use during a given period.
Current Gas Rates: The calculator is regularly updated with the latest gas rates, ensuring that the calculations are based on the latest tariff structure provided by the service provider. This feature allows customers to stay informed of any price changes that may affect their bills, and keeps the estimates accurate.
Billing Cycle Information: Users can enter their billing cycle information, enabling the calculator to generate monthly or bi-monthly estimates. This breakdown helps in budgeting and planning future expenses. In Pakistan the billing cycle is usually 30 days, so calculate your bill on a 30-day basis.
Comparison with previous bills: Sui gas bill calculators often include a feature that allows users to compare their current estimated bills with past bills. This comparison helps identify trends, understand seasonal variations, and make informed decisions about energy use in the current month.
Protected and Unprotected Consumers: Consumers whose average consumption during the four winter months (November to February) is less than or equal to 0.900 hm³ are protected consumers; consumers with gas consumption above 0.900 hm³ are unprotected. Protected customers pay Rs 400 in fixed charges, while unprotected customers pay Rs 1,000 in fixed charges up to 1.5 hm³ and Rs 2,000 per month above 1.5 hm³.
How to use Sui Gas Bill Calculator: Using the Sui Gas Bill Calculator is a simple process. Follow these steps to get an accurate estimate of your upcoming gas bill:
Enter consumption data: Start by entering your gas consumption data. This may include the number of gas appliances in use, average hours of use, and any other relevant information.
Select a billing cycle: Choose your billing cycle – whether monthly or bi-monthly – to align the calculator with your specific usage patterns. Most billing cycles are 30 days.
Check current gas rates: Make sure the calculator is using the most current gas rates to make the estimate.
This information is important for accuracy and reliability.
Review Estimates: After entering all the necessary data, the calculator will generate an estimate of your upcoming gas consumption bill. Review the breakdown of charges and ensure that all details are entered correctly.
Benefits of using Sui Gas Bill Calculator:
Budget Planning: The calculator empowers users to effectively plan their budget by providing insight into their expected gas costs and usage.
Energy Conservation: By understanding the relationship between usage patterns and bills, consumers can identify energy conservation opportunities and adopt more sustainable ways to lower their monthly bills.
Billing Transparency: The calculator increases billing transparency, allowing customers to check the various components of their bills and seek clarification on any discrepancies.
The Sui Gas Bill Calculator in Pakistan is a valuable tool for consumers who want clarity and control over their gas expenses and Sui gas usage. By including features such as consumption estimates, protected and unprotected consumer categories, current gas rates, and billing cycle information, the calculator empowers consumers to make informed decisions about energy use and costs. For more sustainable and frugal gas consumption, using the Sui Gas Bill Calculator is a step towards greater financial awareness and better tools for monthly budget control.
Frequently Asked Questions About Sui Gas Bill Calculator
What is GSD in Sui gas bill? GSD stands for Gas Supply Deposit; it is the three-month average consumption bill and is maintained by the company for each customer. If required, the company recovers the outstanding amount from the customer.
What is 1 unit on a gas meter? One unit of Sui gas is equal to one kilowatt-hour (kWh).
How do you read a meter reading? Electric meter readings include the number shown on the dial from right to left. It consists of seven numbers.
How can I check my gas bill details? You can visit sugasebill.pk, enter your 11-digit consumer ID, and then press the submit button. A duplicate copy of your Sui Gas bill will open.
How do I calculate my gas bill from my meter? To convert a metric meter reading to kWh, all you need to do is: • Take the meter reading, then subtract the previous meter reading from the new meter reading to determine the amount of gas used. • Multiply by the volume correction factor (1.02264). • Multiply by the calorific value (40.0). • Divide by the kWh conversion factor (3.6). (A worked code sketch of this conversion appears at the end of this article.)
How to calculate gas meter? A digital metric meter will have an electronic or digital display, with 5 numbers, then a decimal point, followed by a few more numbers. To read the meter: write down the first 5 numbers shown from left to right. Ignore the numbers after the decimal point, sometimes shown in red.
What is the fixed charge in the gas bill? An ex-slab benefit will be available for domestic consumers except for consumers above 4 hm³. *Protected category pays a fixed charge of Rs. 400/-. **Unprotected category pays a fixed charge of Rs. 1,000/- up to 1.5 hm³, and Rs. 2,000/- above 1.5 hm³.
How do you read a Sui gas meter? The black digits (first five digits from the left) on your gas meter are the meter's current reading in cubic meters (m³).
How many units of gas for 20? 1 kilowatt-hour is 1 unit, whether you are talking about gas or electricity. For your £20, at your given price, you will receive 421.23 units. According to our meter we got 38 units.
What is the formula for calculating gas?
In such a case, all gases obey an equation of state known as the ideal gas law: PV = nRT, where n is the number of moles of gas and R is the universal (or perfect) gas constant, 8.31446261815324 joules per kelvin per mole.
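The meter-to-kWh conversion steps and the fixed-charge slabs above can be put together in a short script. This is a sketch based only on the figures quoted in this article (correction factor 1.02264, calorific value 40.0, conversion factor 3.6, and the Rs 400/1,000/2,000 slabs); the example meter readings are made up, and current SNGPL tariffs should be verified before relying on it.

```python
# Sketch of the conversion and fixed-charge rules quoted in this article.

def meter_reading_to_kwh(previous_m3, current_m3):
    """Convert two metric meter readings (cubic metres) to kWh."""
    volume = current_m3 - previous_m3    # gas used, in cubic metres
    volume *= 1.02264                    # volume correction factor
    energy_mj = volume * 40.0            # calorific value (MJ per cubic metre)
    return energy_mj / 3.6               # 3.6 MJ per kWh

def monthly_fixed_charge(consumption_hm3, protected):
    """Fixed charge in rupees, per the slabs described above."""
    if protected:                        # winter average <= 0.900 hm3
        return 400
    return 1000 if consumption_hm3 <= 1.5 else 2000

kwh = meter_reading_to_kwh(1234.0, 1289.0)
print(f"Energy used: {kwh:.1f} kWh")
print(f"Fixed charge (unprotected, 1.2 hm3): Rs {monthly_fixed_charge(1.2, False)}")
```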
{"url":"https://sngpl.me/how-to-calculate-sui-gas-bill-in-pakistan-sui-gas-bill-calculator/","timestamp":"2024-11-06T10:33:56Z","content_type":"text/html","content_length":"62484","record_id":"<urn:uuid:03dec4d0-c0f3-4a49-ae97-5da73157ca0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00554.warc.gz"}
H Method And P Method In Fea - TheFitnessManual
H Method And P Method In Fea
h-method: This method increases the number of elements and hence decreases the element size while keeping the polynomial order of the interpolation function constant. p-method: This method increases the polynomial order of the interpolation function while keeping the number of elements constant.
What is P-method? P-method. The p-method improves results by using the same mesh but increasing the displacement field accuracy in each element. This method refers to increasing the degree of the highest complete polynomial (p) within an element without changing the number of elements used.
What is the P-method? p-method: This method increases the polynomial order of the interpolation function while keeping the number of elements constant. r-method: This is a far less explored method. It neither increases the polynomial order nor decreases the element length.
Related Questions
What are P elements in FEA? "p" denotes the polynomial order. By default, linear order shape functions are used by commercial FE programs. A quadratic shape function can give more accurate solutions.
What is P mesh? The P-Mesh – a commodity-based scalable network architecture for clusters. Abstract: We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch.
What is P in estimator? We define p = x/n, the proportion of successes in the sample, to be the point estimate of p. For example, if I observe n = 20 BT and count x = 13 successes, then my point estimate of p is p = 13/20 = 0.65.
What does P mean in research methods? The probability.
What is P and H formulation? This method refers to increasing the degree of the highest complete polynomial (p) within an element without changing the number of elements used. The difference between the two methods lies in how these elements are treated. The h-method uses many simple elements, whereas the p-method uses few complex elements.
What is the P element FEA? p-FEM or the p-version of the finite element method is a numerical method for solving partial differential equations. It is a discretization strategy in which the finite element mesh is fixed and the polynomial degrees of elements are increased such that the lowest polynomial degree, denoted by p, approaches infinity.
What is the p-value method? The P-value method is used in Hypothesis Testing to check the significance of the given Null Hypothesis. Then, deciding to reject or support it is based upon the specified significance level or threshold. A P-value is calculated in this method which is a test statistic.
What is P-method in Ansys? P-method.
The p-method improves results by using the same mesh but increasing the displacement field accuracy in each element. This method refers to increasing the degree of the highest complete polynomial (p) within an element without changing the number of elements used.
What is H-method in FEA? H-method. The h-method improves results by using a finer mesh of the same type of element. This method refers to decreasing the characteristic length (h) of elements, dividing each existing element into two or more elements without changing the type of elements used.
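The convergence behaviour of the two strategies can be illustrated without any FEA package. The sketch below is a toy 1-D interpolation experiment, not a finite element solver: h-refinement keeps linear "elements" and adds more of them, while p-refinement keeps one "element" and raises the polynomial degree. The target function and the element counts are arbitrary choices made for illustration.

```python
# Toy 1-D illustration of h- versus p-refinement (not an FEA solver):
# both strategies approximate the same smooth "exact solution" on [0, 1].
import numpy as np

f = lambda x: np.exp(np.sin(3 * x))      # stand-in for the exact solution
xs = np.linspace(0.0, 1.0, 2001)         # fine grid for measuring the error

def h_error(n_elements):
    """h-method: piecewise-linear elements, refine by adding elements."""
    nodes = np.linspace(0.0, 1.0, n_elements + 1)
    return np.max(np.abs(f(xs) - np.interp(xs, nodes, f(nodes))))

def p_error(degree):
    """p-method: one element, refine by raising the polynomial degree.
    Chebyshev-spaced nodes keep the interpolation well conditioned."""
    k = np.arange(degree + 1)
    nodes = 0.5 * (1.0 - np.cos(np.pi * k / degree))
    coeffs = np.polynomial.polynomial.polyfit(nodes, f(nodes), degree)
    return np.max(np.abs(f(xs) - np.polynomial.polynomial.polyval(xs, coeffs)))

for n in (2, 4, 8, 16):
    print(f"h-method, {n:2d} elements: max error {h_error(n):.2e}")
for p in (2, 4, 8, 16):
    print(f"p-method, degree {p:2d}:   max error {p_error(p):.2e}")
```

For a smooth solution like this one, the p-method error drops much faster per added degree of freedom, which is the usual motivation for p-refinement.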
{"url":"https://thefitnessmanual.com/h-method-and-p-method-in-fea/","timestamp":"2024-11-03T04:37:46Z","content_type":"text/html","content_length":"148439","record_id":"<urn:uuid:6ed207c0-bf28-42d7-aa7f-52c4d3ec3d66>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00324.warc.gz"}
Sharing multiple messages over mobile networks
Information dissemination in a large network is typically achieved when each user shares its own information or resources with each other user. Consider n users randomly located over a fixed region, and k of them wish to flood their individual messages among all other users, where each user only has knowledge of its own contents and state information. The goal is to disseminate all messages using a low-overhead strategy that is one-sided and distributed while achieving an order-optimal spreading rate over a random geometric graph. In this paper, we investigate the random-push gossip-based algorithm where message selection is based on the sender's own state in a random fashion. It is first shown that random-push is inefficient in static random geometric graphs. Specifically, it is Ω(n) times slower than optimal spreading. This gap can be closed if each user is mobile, and at each time moves locally using a random walk with velocity ν(n). We propose an efficient dissemination strategy that alternates between individual message flooding and random gossiping. We show that this scheme achieves the optimal spreading rate as long as the velocity satisfies ν(n) = ω(√(log n/k)). The key insight is that the mixing introduced by this velocity-limited mobility approximately uniformizes the locations of all copies of each message within the optimal spreading time, which emulates a balanced geometry-free evolution over a complete graph.
Publication series: Proceedings - IEEE INFOCOM, ISSN (Print) 0743-166X
Conference: IEEE INFOCOM 2011, Shanghai, China, April 10-15, 2011
All Science Journal Classification (ASJC) codes: General Computer Science; Electrical and Electronic Engineering
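For intuition, the random-push rule described in the abstract is easy to simulate. The following sketch is an illustrative toy model, not the paper's exact setting or analysis: nodes sit on a static random geometric graph, each informed node pushes one uniformly chosen message from its buffer to a uniformly chosen neighbour in each round, and we count rounds until all k messages reach all n users. The parameter values are arbitrary.

```python
# Toy simulation of random-push gossip on a random geometric graph.
import math, random

random.seed(1)
n, k, r = 200, 5, 0.15
pos = [(random.random(), random.random()) for _ in range(n)]
neigh = [[j for j in range(n)
          if j != i and math.dist(pos[i], pos[j]) <= r] for i in range(n)]

inbox = [set() for _ in range(n)]
for m in range(k):                        # k source users, one message each
    inbox[m].add(m)

rounds = 0
while any(len(b) < k for b in inbox) and rounds < 10_000:
    rounds += 1
    pushes = []
    for i in range(n):
        if inbox[i] and neigh[i]:
            msg = random.choice(sorted(inbox[i]))    # sender's own random pick
            pushes.append((random.choice(neigh[i]), msg))
    for j, msg in pushes:                 # deliver all pushes simultaneously
        inbox[j].add(msg)

# Note: if the sampled graph happens to be disconnected, the round cap
# stops the loop rather than spinning forever.
print(f"{k} messages, {n} users: finished after {rounds} rounds")
```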
{"url":"https://collaborate.princeton.edu/en/publications/sharing-multiple-messages-over-mobile-networks","timestamp":"2024-11-06T17:27:57Z","content_type":"text/html","content_length":"51818","record_id":"<urn:uuid:8241185e-173d-4012-8795-0074b724285d>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00179.warc.gz"}
Second virial coefficient
Hello! Could someone please explain the meaning of those angle brackets in the expression of B(T)? 178.140.206.61 07:14, 28 April 2011 (CEST)
In statistical mechanics angle brackets are used to indicate an average, either a time average or, as in this case, an ensemble average. -- Carl McBride (talk) 12:17, 28 April 2011 (CEST)
Yes, I know that. But the thing is that the common expression for the second virial coefficient doesn't have an averaging in it. Anyway, it is said in the article that "Notice that the expression within the parenthesis of the integral is the Mayer f-function." However, the Mayer f-function has no averaging in it. I was asking because recently I was told that something is wrong with the common expression for the second virial coefficient. Indeed, I was referred to the article by Hill, but I haven't got it yet. And by the way, if these angle brackets mean the average, then the second virial coefficient should depend on density, i.e. $B_2(\rho, T)$.
I highly recommend reading § 12-2 and § 12-3 of "Statistical Mechanics" by Donald A. McQuarrie. The situation is that the integral is often very hard to integrate analytically for anything other than, say, the hard sphere model. (See also the page on cluster integrals). The problem mentioned by Hill arises "...from the treatment of an imperfect gas as a perfect gas mixture of physical clusters". In this case, for $B_2$, the "ensemble" is a collection of pairs of molecules, at various distances and, for non-spherical molecules, orientations. For an example of such a calculation see section 2 of Carlos Menduiña, Carl McBride and Carlos Vega "The second virial coefficient of quadrupolar two center Lennard-Jones models", Physical Chemistry Chemical Physics 3 1289-1296 (2001) (a pdf is freely available here) -- Carl McBride (talk) 17:16, 3 May 2011 (CEST)
Thank you, I got it now --- these brackets correspond to averaging over angular coordinates.
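For a spherically symmetric potential the orientational average in the angle brackets is trivial, and the integral can be evaluated numerically even when it has no analytic form. Below is a minimal sketch assuming the standard form B2(T) = -2π ∫₀^∞ (e^(-u(r)/kT) - 1) r² dr, with a Lennard-Jones potential in reduced units (σ = ε = k_B = 1); the temperatures chosen are arbitrary.

```python
# Numerical sketch of the spherically symmetric second virial coefficient,
#   B2(T) = -2*pi * Integral_0^inf (exp(-u(r)/kT) - 1) r^2 dr,
# for the Lennard-Jones potential in reduced units (sigma = epsilon = kB = 1).
# For hard spheres the same integral gives the analytic result 2*pi*sigma^3/3.
import numpy as np
from scipy.integrate import quad

def u_lj(r):
    return 4.0 * (r**-12 - r**-6)

def b2(T):
    integrand = lambda r: (np.exp(-u_lj(r) / T) - 1.0) * r**2
    value, _ = quad(integrand, 1e-8, 50.0, limit=200)
    return -2.0 * np.pi * value

for T in (0.8, 1.0, 1.3, 2.0, 5.0):
    print(f"T* = {T:4.1f}   B2* = {b2(T):+.4f}")
```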
{"url":"http://www.sklogwiki.org/SklogWiki/index.php/Talk:Second_virial_coefficient","timestamp":"2024-11-08T02:32:26Z","content_type":"text/html","content_length":"22482","record_id":"<urn:uuid:c344c60c-4f94-4b55-882b-3cd2769a101d>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00767.warc.gz"}
PDF | Mathematics for Machine Learning
Mathematics for Machine Learning
Format: PDF eTextbooks
ISBN-13: 978-1108455145
ISBN-10: 110845514X
Delivery: Instant Download
Authors: Marc Peter Deisenroth
Publisher: Cambridge University Press
The fundamental mathematical tools needed to understand machine learning include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, probability and statistics. These topics are traditionally taught in disparate courses, making it hard for data science or computer science students, or professionals, to efficiently learn the mathematics. This self-contained textbook bridges the gap between mathematical and machine learning texts, introducing the mathematical concepts with a minimum of prerequisites. It uses these concepts to derive four central machine learning methods: linear regression, principal component analysis, Gaussian mixture models, and support vector machines.
{"url":"https://textook.com/shop/mathematic/pdf-mathematics-for-machine-learning/","timestamp":"2024-11-09T19:25:03Z","content_type":"text/html","content_length":"261041","record_id":"<urn:uuid:f061f9d8-2606-412c-900f-a150d9131018>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00018.warc.gz"}
Lesson 10: Let's Use Multiples to Find Equivalent Fractions
Lesson Purpose
The purpose of this lesson is for students to make sense of a way to identify and generate equivalent fractions by using multiples of the numerator and denominator.
Lesson Narrative
Up until this point, students have used visual representations or other strategies to reason about and generate equivalent fractions. Along the way, they are likely to have noticed patterns in the numerator and denominator of equivalent fractions. While some students may have generalized and applied those observations intuitively, this is the first lesson in which students are prompted to reason numerically about the numbers in equivalent fractions. Students notice that a fraction \(\frac{a}{b}\) has the same location on the number line as a fraction \(\frac{n \times a}{n \times b}\), so we can generate fractions that are equivalent to \(\frac {a}{b}\) by multiplying both \(a\) and \(b\) by \(n\). In other words, they can use multiples of \(a\) and \(b\) to generate fractions that are equivalent to \(\frac{a}{b}\). Sample responses are shown in the form \(\frac{5 \times 2}{6 \times 2} = \frac{10}{12}\) but students do not need to use this notation. In an upcoming lesson, students will reason in the other direction: using factors that are common to \(a\) and \(b\) to write equivalent fractions. They will see that dividing \(a\) and \(b\) by the same factor \(n\) gives a fraction equivalent to \(\frac{a}{b}\).
Learning Goals
Teacher Facing
• Make sense of a way to generate equivalent fractions by using multiples of the numerator and denominator.
Student Facing
• Let's learn a way to find equivalent fractions without using diagrams.
Lesson Timeline
Warm-up: 10 min
Activity 1: 20 min
Activity 2: 15 min
Lesson Synthesis: 10 min
Cool-down: 5 min
Teacher Reflection Questions
To reason numerically we hope students begin to describe number relationships without visual representations. Did it seem that students were doing this in today's lesson? Which diagrams are they still holding on to?
Suggested Centers
• Get Your Numbers in Order (1–5), Stage 4: Denominators 2, 3, 4, 5, 6, 8, 10, 12, or 100 (Addressing)
• Mystery Number (1–4), Stage 4: Fractions with Denominators 5, 8, 10, 12, 100 (Addressing)
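For readers who want a quick numerical check of the lesson's key idea, that multiplying numerator and denominator by the same \(n\) yields an equivalent fraction, here is a tiny sketch using Python's exact-fraction arithmetic. The starting fraction 5/6 mirrors the sample response above; everything else is an arbitrary choice.

```python
# Check: (n*a)/(n*b) is equivalent to a/b for any whole number n.
from fractions import Fraction

a, b = 5, 6
for n in range(2, 6):
    print(f"({n}x{a})/({n}x{b}) = {n*a}/{n*b}",
          "equivalent:", Fraction(n * a, n * b) == Fraction(a, b))
```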
{"url":"https://im.kendallhunt.com/K5_ES/teachers/grade-4/unit-2/lesson-10/preparation.html","timestamp":"2024-11-04T04:49:23Z","content_type":"text/html","content_length":"78294","record_id":"<urn:uuid:862a7698-7fd4-4df9-bf55-42da1b70daf6>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00358.warc.gz"}
Interest Vs Apy
APY, or Annual Percentage Yield, reflects the total interest earned on an investment or savings account in a year, accounting for compound interest.
APY (Annual Percentage Yield) is an effective interest rate, which accurately states how much money will be earned as interest. Companies often advertise.
The Annual Percentage Yield (APY) is the effective annual rate of return based upon the interest rate and includes the effect of compounding interest.
On the other hand, APY applies when you put money into a deposit account, and it shows the amount of interest, including compounding, you could earn in a year.
An APY reflects an annualized rate of your total potential earnings. An interest rate is just part of the total APY formula. APY also considers how often your.
APR looks at the interest rate and fees or charges that come with borrowing money, while APY looks at the compound interest rate and how interest is added to.
APY is the actual rate of return you will earn on an investment or bank account. As opposed to simple interest calculations, APY considers the compounding.
When it comes to calculating interest rates, there are two methods: Annual Percentage Rate (APR) and Annual Percentage Yield (APY). At face value, a low APR.
On the Discover app, it says current APY and interest rate. What does that mean? The APY says % and the interest rate is %?
The difference between APY and interest rates lies in how they are calculated. While the interest rate refers to the percentage charged on a loan or earned on.
The annual percentage yield (APY) is the interest rate earned on an investment in one year, including compounding interest. A higher APY is better as your.
APY is the final percentage of interest you see at the end of the year after compounding has been accounted for and this means your interest rate will look.
The annual percentage yield is the rate of return earned in one year, factoring in compounding interest. The more frequently interest is compounded.
How do I calculate my APY? If you're looking to understand the math behind calculating your APY, there's a formula: APY = [(1 + Interest/Principal)^(365/Days in term)] − 1.
In short, for a deposit account, the Interest Rate is the percent return without compounding interest included. It is also known as simple interest. The APY is.
In the previous example, interest was paid on the investment once per year, which means it has an annual compounding period. In this case the APY and interest.
APY expresses how much you will earn on your cash over the course of a year. Interest rate, however, is the interest percentage that you'll earn or that a.
APY (Annual Percentage Yield) relates to the total interest your money will gain by the end of 1 year, even if the CD has less than a one year.
APY, meaning Annual Percentage Yield, is the rate of interest earned on a savings or investment account in one year, and it includes compound interest.
Annual percentage yield (APY) is similar to APR, but refers to money earned in a savings account or other investment, rather than the interest rate paid on a.
Each day you'll have more money in your account, and it'll compound exponentially.
A theoretical % APY translates to a % interest rate, and the interest.
APY tells you how much interest you can earn on savings and includes compound interest. What is APR? APR applies to borrowing money, such as with a loan or.
Key Takeaways: APR represents the yearly rate charged for borrowing money. APY refers to how much interest you'll earn on savings and it takes compounding into.
APY vs. APR. APY and APR can be thought of as opposites. APY is the rate earned on deposits if interest is compounded. APR, or annual percentage rate, is the.
Annual Percentage Yield (APY) reflects the effect of compounding frequency (savings accounts are compounded daily) on the interest rate over a day period.
APR comes into play when you borrow money – it reflects the interest, costs, and fees you're expected to pay on a loan over the course of a year.
Interest rate is a component of APY. Interest rate is the rate at which you earn money on your account balance, while APY reflects the actual total return you'.
The annual percentage yield (APY) is the interest earned on a deposit account balance within a year and is expressed as a percentage.
A Relationship Interest Rate is variable and subject to change at any time without notice, including setting the interest rate equal to the Standard.
As of June , the APY for a savings account is around %. High yield savings accounts offer a slightly higher return, with an APY currently ranging from.
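The relationship between a nominal rate and APY described above reduces to one line of arithmetic: APY = (1 + r/n)^n − 1, where r is the nominal annual rate and n the number of compounding periods per year (n = 365 for the daily compounding mentioned above). A short sketch, using a hypothetical 4.50% rate rather than any quoted offer:

```python
# APY from a nominal rate and a compounding frequency (illustrative rate).

def apy(nominal_rate, periods_per_year):
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

rate = 0.0450
for label, n in (("annually", 1), ("monthly", 12), ("daily", 365)):
    print(f"{rate:.2%} compounded {label}: APY = {apy(rate, n):.4%}")
```

With annual compounding the APY equals the nominal rate; the more frequent the compounding, the further the APY rises above it, which is why the two numbers shown in a banking app differ.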
{"url":"https://gsrkro.site/news/interest-vs-apy.php","timestamp":"2024-11-04T04:40:08Z","content_type":"text/html","content_length":"10757","record_id":"<urn:uuid:bf04f021-3ed4-4213-8367-87e6e0af5297>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00079.warc.gz"}
What is a function in BASIC?
What is a function in BASIC? A function, in mathematics, is an expression, rule, or law that defines a relationship between one variable (the independent variable) and another variable (the dependent variable). Functions are ubiquitous in mathematics and are essential for formulating physical relationships in the sciences.
What is function in basic programming? Functions are "self-contained" modules of code that accomplish a specific task. Functions usually "take in" data, process it, and "return" a result. Once a function is written, it can be used over and over again. Functions can be "called" from inside other functions.
Why are functions used? Functions provide a couple of benefits: Functions allow the same piece of code to run multiple times. Functions break long programs up into smaller components. Functions can be shared and used by other programmers.
What is function procedure in Web technology? A Function procedure: is a series of statements, enclosed by the Function and End Function statements; can perform actions and can return a value; can take arguments that are passed to it by a calling procedure.
What is function Short answer? A function is defined as a relation between a set of inputs having one output each. In simple words, a function is a relationship between inputs where each input is related to exactly one output. Every function has a domain and codomain or range. A function is generally denoted by f(x) where x is the input.
How to list the functions? Click any cell in the data set. Click the Data tab and then click Advanced in the Sort & Filter group. Click the Copy to Another Location option. Excel will display the cell reference for the entire data set or Table object as the List Range. Remove the Criteria Range if there is one.
What are the five basic life functions?
– Reproduce: organisms make more of their own kind.
– Digestion: breakdown of complex food materials into a form of energy the organism can use.
– Excretion: process by which organisms get rid of wastes.
– Growth/development: process, over time, by which an organism changes in size.
– Adapt.
What are the four basic functions of a computer system? Input, processing, output, and storage.
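The "take in data, process it, return a result" pattern described above looks much the same in most languages. A minimal illustration, shown here in Python rather than BASIC for brevity; the function names and values are arbitrary:

```python
# A function takes in data, processes it, and returns a result;
# it can also be called from inside another function.

def fahrenheit_to_celsius(temp_f):
    """Convert a Fahrenheit temperature to Celsius."""
    return (temp_f - 32) * 5 / 9

def describe(temp_f):
    """A function that calls another function and reuses its result."""
    return f"{temp_f}F is {fahrenheit_to_celsius(temp_f):.1f}C"

print(describe(72))    # the same code runs again and again with new inputs
print(describe(100))
```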
{"url":"https://www.vikschaatcorner.com/what-is-a-function-in-basic/","timestamp":"2024-11-07T06:54:15Z","content_type":"text/html","content_length":"94687","record_id":"<urn:uuid:91621002-0cba-4294-b3c4-49c62ced0c2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00704.warc.gz"}
Transcendental Physics
THE MAINSTREAM FALLACY: THE NEED FOR A REAL QUANTUM CALCULUS
© Edward R. Close, December 22, 2016
A very important fundamental fact of nature was uncovered in the year 1900 when German physicist Max Planck discovered that nature metes out energy only in whole numbers of an extremely small amount called a quantum of energy. Within a few years, scientists realized that, as a rule, all aspects of physical reality are quantized, and quantum physics was born. Albert Einstein noted that mass is converted to energy by certain physical processes and energy is converted to mass by other processes. In 1905, Einstein showed that the exact mathematical equivalence of mass and energy is expressed by the equation E = mc². In other words, he discovered that mass and energy are two different forms of the same thing. In a paper entitled "Does the inertia of a body depend on its energy content?" Einstein concluded: "It follows directly that: If a body gives off the energy L in the form of radiation, its mass diminishes by L/c². The fact that the energy withdrawn from the body becomes energy of radiation evidently makes no difference, so that we are led to the more general conclusion that the mass of a body is a measure of its energy-content." In addition, based on the discoveries of general relativity, Einstein declared that there is no such thing as empty space or eventless time. The space-time continuum has meaning only in relation to mass and energy, which are quantized. In Appendix V of the 15th edition of his popular book on Relativity, Einstein says: "It is characteristic of Newtonian physics that it has to ascribe independent and real existence to space and time as well as to matter, for in Newton's law of motion the idea of acceleration appears. But in this theory, acceleration can only denote 'acceleration with respect to space'. Newton's space must thus be thought of as 'at rest', or at least as 'unaccelerated', in order that one can consider the acceleration, which appears in the law of motion, as being a magnitude with any meaning. Much the same holds with time, which of course likewise enters into the concept of acceleration. Newton himself and his most critical contemporaries felt it to be disturbing that one had to ascribe physical reality to space itself as well as to its state of motion; but there was at that time no other alternative, if one wished to ascribe to mechanics a clear meaning." Further on in Appendix V, after discussing several historical theoretical concepts of space, Einstein makes the following startling statement concerning "… how far the transition to the general theory of relativity modifies the concept of space": There is no such thing as an empty space, i.e. a space without field. Space-time does not claim existence on its own, but only as a structural quality of the field. The idea that space and time do not exist without the presence of the mass and energy of physical objects is counter-intuitive for us because the everyday picture provided by the neurological processing of pulses of energy entering our consciousness through the functioning of our physical senses seduces us into thinking of space and time, or space-time, as a changeless background within which matter and energy interact to form objects and events. But we now know that this is not true. The illusion of space-time is created by the extension of the substance of physical objects in the form of gravitational and magnetic fields.
There is no space-time to be distorted; it is the instruments of measurement (the clocks and rods of Einstein's thought experiments) that are distorted by motion, not space-time, as often depicted in popular presentations by leading mainstream physicists. A simple example will help clarify this point: A steel ball, rolling across a table in a magnetic field created by the presence of a strong magnet placed under the table, will follow the curvature of the lines of force of the magnetic field. A non-metallic ball, however, unaffected by the magnetic field, will roll straight across the table. With this simple experiment we can see that the idea that the space above the table is warped by the magnet's field is false. This, of course, is what you would expect if there is no such thing as empty space. In Einstein's reasoning quoted above, this understanding is extended to space-time. This shift in our understanding of space and time, made necessary by general relativity (which, by the way, has been proved correct and accurate by very many, extremely detailed experiments and tests), tells us that there is no space-time independent of mass-and-energy objects and events, which are quantized. This means that the division of space and time, or space-time, into smaller increments than those occupied by a quantum of mass or energy, while theoretically conceivable, has no basis in reality. Thus for any valid mathematical analysis, space-time must be considered as quantized as is mass and energy, and it should not come as a surprise that ignoring this requirement has resulted in erroneous conclusions about quantum reality and contributed to the perceived "weirdness" of quantum physics.
Newtonian Calculus and Quantum Mathematics
The calculus of Newton and Leibniz, known simply as "the calculus" for more than 300 years, is based on the assumption that the variables measuring objects and events may be divided indefinitely into smaller and smaller "infinitesimal" increments, approaching zero as closely as we please. However, in the real, quantized world of the physical universe, this cannot be done. As pointed out above, Planck's discovery that energy is quantized, Einstein's demonstration of the equivalence of mass and energy, and the conclusion that space-time has no independent existence tell us, in no uncertain terms, that the division of the variables of space-time and mass-energy in the real world cannot approach zero infinitely closely. Therefore, the calculus of Newton and Leibniz, based on the assumption that this can be done, is inappropriate for application to quantum phenomena. A new calculus, appropriate for application to quantum phenomena, is needed, and the Calculus of Distinctions is that calculus. I am currently working on two rigorously mathematical technical papers for submittal to mathematical physics journals proving this.
1. Planck, Max (1899) "Über irreversible Strahlungsvorgänge." Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin 5: 440–480, pp. 478–80.
2. Einstein, Albert (1905) "Does the inertia of a body depend on its energy content?" Annalen der Physik 17. Reprinted in The Principle of Relativity, Dover Publications.
3. Einstein, Albert (1962) Relativity, the Special and General Theory: A Clear Explanation That Anyone Can Understand, Appendix V, pp. 135, 154 and 155, Crown Publishers, New York.
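Before moving on, a one-line computation gives a sense of scale for the mass-energy equivalence E = mc² discussed in this post (the one-gram mass is an arbitrary illustrative choice):

```python
# Sense of scale for E = m*c**2 (the one-gram mass is an arbitrary choice).
c = 299_792_458.0          # speed of light, m/s
m = 1.0e-3                 # one gram, in kilograms
E = m * c**2
print(f"E = {E:.2e} J")    # about 9.0e13 J, roughly 21 kilotons of TNT
```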
THE DANGERS OF BELIEF AND THE NEED TO KNOW
Edward R. Close © December 18, 2016
"He who thinks half-heartedly will not believe in God; but he who really thinks has to believe in God." – Sir Isaac Newton
I want to elaborate a little on some of the statements made in my last post, entitled "the connection between the physical and the spiritual and why mainstream science continues to miss the point". I like to make my posts as 'stand-alone' as I can. Sometimes this is hard to do when the ideas involved are complex. First, the real connection between the physical and the spiritual has been discovered and scientifically defined by Close and Neppe, and published in peer-reviewed technical papers and in articles for the layperson. Briefly described, the discovery came about through the application of TRUE quantum unit analysis. (Reminder: TRUE stands for Triadic Rotational Unit of Equivalence.) A third form of the substance or content of reality, something real in addition to mass and energy, was discovered to be necessary in every atom of the universe for there to be any stable physical reality. We call that third form "gimmel". Gimmel cannot be measured directly the way mass and energy can, but it contributes very significantly to the spin (energy) and angular momentum (force) of elementary particles in a way that makes them stable, much the same way a spinning top or gyroscope is stable. Gimmel makes the formation of life-supporting elements possible. It turns out that gimmel is consciousness, or at least the agent of consciousness in the physical universe, and consciousness is spiritual and ultimately non-physical. Second, believers in spirit and a higher intelligence may have been shocked a little when I said that "believe" may be the most dangerous word in the world. Ministers and theologians make a point of telling their followers that they must have faith and believe. This is not necessarily a bad thing, because they usually ask you to believe something good; but we must be very careful about what we believe in. Many religious and political beliefs have proved to have deadly consequences, as I pointed out in the last post. And it is always better to know something than it is to just "believe" it. Sometimes, a faulty belief system leads people to think they know something that turns out to be false. To wit, I quoted mainstream scientists who've said: "We know that dark matter is some form of matter, we just don't know what that form is." This demonstrates how the 'knowing' of something based on a faulty belief system, in this case, the belief in the completeness of materialism, leads to an unwarranted conclusion. We have demonstrated, with high confidence, that dark matter and dark energy are gimmel, and gimmel is neither matter, nor a form of matter. Matter is measurable as mass and energy, gimmel is not. Not only that, as Planck said, "There is no matter as such." Mass, which we associate with weight on the macro scale, is actually an effect of the resistance to motion caused by the multi-dimensional spin of quanta. Real data from the double-slit delayed-choice experiment demonstrate the fact that quanta have no localized form until a conscious choice prompts an act that sets up the conditions that cause quanta to manifest as particles or waves. It is only when an irreversible distinction is observed that the illusion of space and time is created in the mind of the observer. Thus the consciousness of observers and the distinctions of mass, energy, space and time are inseparably linked in the perception of physical phenomena.
A conscious entity automatically compares what is observed through the senses with a limited number of images, subjectively constructed in mind from memories of experiences imperfectly stored in and retrieved from that entity's unique neurological system. This process of experience, storage and retrieval, and subjective interpretation is the basis of the belief systems of conscious entities. As I pointed out, it is likely that most of the members of the mainstream materialistic priesthood, like the inquisitors of the medieval church, actually believe that they are right, and don't realize that they have bought into a belief system based on unsupported, and now unsupportable a priori assumptions. Mainstream scientists don't realize that they are constantly misrepresenting and distorting Planck's quantum physics and Einstein's relativity by talking about massless and dimensionless particles and a supposed warping of space-time, in spite of clear statements by Planck and Einstein that there is no matter, space or time independent of mass and energy, and, more importantly, there is no knowledge of physical reality without consciousness. This brings us back to the Sir Isaac Newton quote at the beginning of this post: "He who thinks half-heartedly will not believe in God; but he who really thinks has to believe in God." Materialism is a very attractive belief system for scientists because it greatly simplifies their job. They don't have to concern themselves with the really hard questions like: What is consciousness? How and why are we conscious? How has the existence of complex living organisms exhibiting consciousness come about? And how could the highly organized complexity of a physical universe that is perfectly balanced to support living vehicles for consciousness come about in a universe dominated by the physical laws of thermodynamics that describe the tendency of purely physical systems to break down and decay? Materialism is an overly simplistic and now completely unsustainable belief system. The sooner mainstream science recognizes this fact, the sooner we can move on to the science of the future, a science much more comprehensive and inclusive of all human experience. The longer science lingers in the dead-end backwater of materialism, promoting the fallacy that existence has no meaning, the greater the danger that the human race will destroy itself. Is it necessary to believe in an Infinite Consciousness as Newton suggests? Ultimately, yes. A universe with no higher consciousness than the intelligence of limited human beings is doomed to fail. Such a belief system cannot sustain itself. But the general understanding of the existence of a much higher intelligence, which is ultimately spiritual, not physical, whatever we choose to call it, will gradually become clearer as we open our minds to the infinity of reality and grow spiritually. So I urge everyone to have confidence. The finite consciousness we experience is capable of developing much further, even approaching the goal of understanding the Mind of God. In spite of the tendency of mainstream human thinking to cling to the limited and limiting belief system of materialism, the Truth will eventually emerge. Even those who suffer from the mental illness of atheism will not be lost forever. Think of them with as much love and compassion as you can. Even the most misguided are redeemable.
THE CONNECTION BETWEEN THE PHYSICAL AND THE SPIRITUAL AND WHY MAINSTREAM SCIENCE CONTINUES TO MISS IT
Edward R. Close © December 17, 2016
On December 8, 2016, John Herschel Glenn, Jr. left this Earth for the last time. As a tribute to him, I want to start this post with a John Glenn 'right stuff' quote: "It has been my observation that the happiest of people, the vibrant doers of the world, are almost always those who are using - who are putting into play, calling upon, depending upon - the greatest number of their God-given talents and capabilities." – John Glenn, Pilot, Military Leader, Astronaut and US Senator
Are you using your God-given talents and capabilities to the best of your ability to be everything you can be? Every day you fail to do so, you are wasting your time on this Earth, reducing your chances to become what you could and should be. Human beings have a curious inclination to want to be absolutely right - forever. Behind this desire is the deep need to know The Truth. But this desire is dangerously perverted when you decide that what you believe is right, and that anyone who disagrees with you is wrong, deluded and, even worse, evil. We have seen a lot of this during this election year, from both the extreme left and the extreme right of the political spectrum. But, if one endeavors to avoid these extremes, there is another, much more subtle danger: the danger of believing that there is no absolute truth. This was epitomized by President Bill Clinton's "It depends on what the meaning of 'is' is!" In the world of this kind of thinking, truth is whatever you can make believe it is. This is very dangerous because it usually leads to self-destructive behavior. In fact, "Believe" may be the most dangerous word in the world. Most of the worst examples of man's crimes against humanity are the result of acting on the basis of belief and socially institutionalized belief systems. What does this have to do with the failure of mainstream science to acknowledge the connection between physical and spiritual reality? Mainstream science is not really science; it is almost completely dominated by a very simplistic belief system. That simplistic belief system is material reductionism: the belief that everything can be reduced to matter and energy interacting in time and space. Mainstream science has become just another form of the same kind of blind-faith belief system that early science fought so hard against. Early scientists like Giordano Bruno, who was burnt at the stake for heresy, Galileo, who died under house arrest for daring to claim that the Earth revolves around the Sun, and many others, persecuted and ostracized for announcing discoveries that contradicted church doctrine, attest to the cruelty of institutionalized belief systems. Today, any scientist who dares look outside the box for answers to questions raised by real data exposing contradictions in the current materialistic belief system is shunned and ostracized by mainstream science. Just the mention of certain words, considered to be taboo by the priesthood of mainstream science, in an otherwise valid research paper, guarantees that it will not be published. And the increasing number of clues that materialism is wrong, constantly arising from relativistic and quantum data, are ignored by those perpetuating the mainstream dogma. The mainstream belief system masquerading as science routinely and systematically shuts down any voice daring to challenge it.
Even worse, the indoctrination of young students by self-proclaimed atheist instructors has produced an increasingly in-bred educational community of closed-minded people claiming to be scientists. Evidence of this perversion and indoctrination is widespread and even increasing, paralleling the widespread and increasing number of empirical clues from experimental data that materialism is wrong. For example, we hear statements like: "Of course, as a scientist, I'm an atheist." And "We know that dark matter and dark energy are some form of matter and energy, we just don't know what that form is." Statements from the giants of science concerning the spiritual nature of reality, and to the effect that there is an Infinite Intelligence behind the reality we experience, are ignored and excluded from today's textbooks. I've quoted a few of them in earlier posts, but here are some quotes from Newton, Planck and Einstein making my point: "He who thinks half-heartedly will not believe in God; but he who really thinks has to believe in God." "As a man who has devoted his whole life to the most clearheaded science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such! All matter originates and exists only by virtue of a force which brings the particles of an atom to vibration and holds the atom together. . . . We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter." "The religion of the future will be a cosmic religion. It should transcend personal God and avoid dogma and theology. Covering both the natural and the spiritual, it should be based on a religious sense arising from the experience of all things natural and spiritual as a meaningful unity." I choose not to waste my time here by pointing out the fallacies of thinking in the statements made by many mainstream atheistic 'scientists' who believe that our existence is the result of a series of random events with no purpose or meaning. It is likely that most of the members of the mainstream materialistic priesthood, just like the inquisitors of the medieval church, actually believe that they are right, and don't realize that they have bought into a belief system based on unsupported, and now unsupportable a priori assumptions. They don't even appear to realize that they are constantly misrepresenting and distorting Planck's quantum physics and Einstein's relativity by talking about massless and dimensionless particles and warping of space-time, in spite of clear statements by Planck and Einstein that there is no matter, space and time independent of mass and energy, and no knowledge of it without consciousness. In spite of man's proclivity to screw up, the truth will eventually win out, and I intend to do everything I can to bring that about. Today, I want to invite you to go with me on a magical trip via a 'memory experiment' inspired by Einstein's famous gedankenexperiments (thought experiments). Only, if I succeed, it will be a gedankenexperiment on steroids! To prepare for our magical memory trip, I want you to think back to your childhood and imaginative stories you've heard, like: Tom Thumb, Alice in Wonderland, and The Incredible Shrinking Man. Tom Thumb is a character of English folklore. The History of Tom Thumb was published in 1621, and was the first fairy tale printed in English.
Tom is the size of his father's thumb, and his adventures include being swallowed by a cow, tangling with giants, and becoming a favorite of King Arthur. It is believed to have been written by a Londoner named Richard Johnson in 1621. This story may have been inspired by a real diminutive person. Tattershall, a village in Lincolnshire, England, claims to be the location of the home and grave of Tom Thumb. Alice's Adventures in Wonderland was written by English mathematician Charles Dodgson in 1865 under the pseudonym Lewis Carroll. We all remember how a little girl named Alice falls into a rabbit hole, and enters a world inhabited by many strange creatures like the Mad Hatter, the White Rabbit, the Cheshire Cat, and the Queen of Hearts. The story allows the mathematician Dodgson to present interesting logical conundrums that make the story as popular with adults as with children. The 1957 movie The Incredible Shrinking Man starts out on a warm summer day with Scott Carey casually sunbathing on his yacht. A radioactive cloud envelops Scott, and a bit later, he begins to notice alarming changes. Within a few days, he loses weight, his clothes won't fit, and he slowly grows smaller and smaller. Doctors are at a loss to stop his shrinking. As he shrinks to the size of Tom Thumb, Scott gets unwanted media attention, and he has to deal with life-threatening situations like the family cat eyeing him as a snack, and a spider that, as he shrinks, appears to him to be the size of a bear. But he survives and continues to shrink away into seeming nothingness. His last words are worth reading. Here they are: "I looked up, as if somehow I would grasp the heavens...the universe...worlds beyond number...God's silver tapestry spread across the night. And in that moment, I knew the answer to the riddle of the infinite. I had thought in terms of Man's own limited dimensions. I had presumed upon Nature. That existence begins and ends is Man's conception, not Nature's. And I felt my body dwindling, melting, and becoming nothing. My fears melted away, and in their place came...acceptance. All this vast majesty of creation - it had to mean something. And then I meant something too. Yes, smaller than the smallest, I meant something too. To God, there is no zero. I STILL EXIST!" "To God, there is no zero." In fact, Max Planck's discovery that we live in a quantized universe attests to the actual truth of this statement. In reality, there is no zero, only finite quanta, no absolute beginning or end, and there is no such thing as nothing. We have proved this mathematically and dimensionally with the Calculus of Distinctions. See the post of Feb. 6, 2016: http:// Are these stories pure fantasy, as most people believe, or do they reflect a deeper reality, vaguely remembered? Before we start on our trip, let me share with you some very real experiences of mine. I have vivid memories from the age of eight or nine months. I thought everyone did, so I had no idea that there was anything unusual about it until much later in life. I have shared some of these early memories and how they were verified as real memories with a few others, but for now, let me skip ahead to the time when I was eight years of age. One afternoon, as a sixth-grader sitting at my desk in my classroom, while I was looking at the teacher, suddenly, her face began to expand in my field of vision, and it continued to expand until it seemed to fill all space. I could see the pores of her skin as if I were looking at her through a magnifying glass.
Then, just as suddenly, her face receded away into the distance, exactly as if I were looking through the wrong end of a telescope, until her head looked almost as small as a grain of sand. Also, about the same time, occasionally, I would unexpectedly experience greatly heightened tactile and auditory senses. Lying in my bed waiting to go to sleep, my senses would become so acute that I would avoid moving because the rustling of the sheets sounded like a crashing landslide. Lying still, I could hear my father's watch ticking in another part of the house, and by focusing, I could hear the music of a marching band. This was before the invention of television, but I had built a crystal radio set when I was seven, so I was familiar with the idea of 'tuning in' to different frequencies, and I learned to focus on other sounds in the 'ether'; so I would often go to sleep listening to strains of classical music, which I believed was being played somewhere. I told my father about these experiences, and he advised me not to worry. He said it was a by-product of rapid growth, and that he had had similar experiences as a boy. So I accepted the experiences as normal and actually learned to enjoy them. Much later in life, I studied comparative religion and many mystical traditions. I found that in Christian, Jewish, Hindu, Buddhist, and Taoist mystical teachings, one of the spiritual experiences marking the progress of a spiritual seeker is the ability to make things appear larger or smaller than they normally appear through the physical senses. Buddhism, for example, specifically lists this as one of the nine main siddhis (powers or accomplishments) on the path to enlightenment:
MADALASA VIDYA (Sanskrit: correct knowledge) A being reaching this level of enlightenment becomes capable of increasing or decreasing the size of his/her body.
Anima Siddhi - The ability to decrease the size of one's body and become as small as the smallest particle.
Mahima Siddhi - The ability to increase the size of one's body, ultimately enveloping the universe.
I must add that the experiences I described above were spontaneous, and mostly beyond my control. But, is it possible that such experiences reflect a deeper reality? Or are they no more than fantasies? Before you dismiss them out of hand, suppose there is a way to test and verify such experiences. I believe there is, because I have verified things I learned in an expanded state of consciousness, things of which I had no previous knowledge. But that's another story. For now, let's go on a gedanken trip together, first shrinking our perceptions to the size of the quantum world. You can think of it as imagination if you like, but what I propose is imagination guided by logic and scientific knowledge. If, by doing this, we can arrive at conclusions and understandings, and even obtain exact measurable and computable values that can be checked against known facts and scientific data, we will have proved that experiences like mine and perhaps those in the imaginative stories above are not pure fantasy, but actually reflect a reality beyond the world normally revealed by our physical senses, most of the time beyond our reach, but somehow vaguely remembered.
So come with me now, to the world of Tom Thumb, Alice, Scott Carey, and beyond.

First, let's shrink our immediate awareness from the perceptions of our earthly environment down through organic and inorganic structures, to molecules, to atoms, to protons and neutrons, and finally to electrons and quarks, on the scale of the quantum world, to see things about a trillion times smaller than the smallest dot we can observe with our eyes. What will we find there? Then let's zoom back up, past the scale of our ordinary everyday experience, gradually encompassing the Earth, the Solar System, the Milky Way Galaxy, and on, to the very edges of the visible universe! We will be able to see how the world of our everyday experience is only the tip of the iceberg of reality, and how it is supported and sustained by a deeper quantum reality, suspended in a cosmological infinity that is unfathomable as long as we are limited to our finite physical bodies.

I'm not suggesting that your physical body will shrink like the fictional Scott Carey's. But I am asking you to consider the possibility that consciousness is not limited to the physical body. My experiences involved the shrinking and expanding of my consciousness, not my physical body; the teacher and my classmates would have noticed that! But my point of view changed drastically. Suppose the energy of consciousness is imprisoned in physical forms by our choice, and we become so engrossed in and identified with our physical bodies that we forget our true nature. What if our true nature is spiritual, and the physical senses are outward manifestations of a deeper ability to be consciously aware?

Suppose we can shift our point of view, focusing inward and downward until the smallest quantum of physical reality appears to be a spinning ball of energy about the size of a softball. I see that ball of energy as a distinct ball of fire, rapidly spinning directly in front of me. Having distinguished it from myself and everything else, I am able to see how it combines with other balls of energy to form the atoms of physical reality.

I am not the only one to ever think this way. An Oxford philosophy don and electronics engineer thought very deeply about the logic of perception. He wrote: "A universe comes into being when a space is severed or taken apart." (G. Spencer Brown, Laws of Form, A Note on the Mathematical Approach, page v, The Julian Press, 1977.)

George Spencer Brown suggests that when we separate anything from everything else by drawing a distinction, we create a world of perception. A distinction is anything that is perceived, in any way, as distinct and separate from everything else. The first distinction of which we are consciously aware is the distinction of self from other. Without that, no perception is possible. But with that, we then find that, in a universe where all things seem possible, there are actually logical laws governing what forms are possible. This is demonstrated in Brown's Laws of Form: reality has a natural logical structure that is inescapable.

To see how this comes about, let's look, from our shrunken state, at the very smallest physical distinction, the elementary particle called the electron. Why does it appear to be a spinning ball of fire? Symmetry is natural for an isolated object, because without the influence of other distinct objects there is nothing to prevent perfect symmetry. In the absence of anything else physical, the electron spins symmetrically.
It appears to be roiling, as if it is spinning in many different directions at the same time! As I watch it spin, I wonder: why is it spinning? We don't seem to see anything like that in our everyday world. Oh, wait! The Earth is spinning on its axis, the moon is spinning around the Earth, and they go on spinning around the sun, and the solar system is in an arm of a spiraling galaxy. Everything is spinning! But why? And why is this super-small object spinning so fast? As a physicist, I can propose hypotheses that may answer this question, but theory must be tempered with experience and empirical data. A little history of particle physics will help us to understand this:

Physicists have known for a long time that a moving charged particle generates a magnetic field. Electric motors and generators work because of this fact. In 1922, two German physicists, Otto Stern and Walther Gerlach, working at the University of Frankfurt, conducted a series of experiments designed to measure the magnetic fields produced by electrons orbiting the nucleus of an atom. They were surprised to find that the electrons themselves were spinning very rapidly, producing magnetic fields independent of those produced by their orbital motions. They also found that the surfaces of charged particles would have to be spinning faster than the speed of light in order to produce the magnetic moments they were measuring. This, plus the fact that spin, a measure of angular momentum, like everything else at the quantum scale, is quantized, led physicists to believe that there was no way to explain quantum phenomena in everyday terms that relate to the rotation of large objects. This is one reason that physicists, for almost 100 years, have been declaring that quantum physics is weird.

We find that the relativistic limit on spin in three dimensions actually defines the size of the smallest possible quantum, linking relativity and quantum mechanics. These spinning elementary particles are vortices connecting the three dimensions of space to six additional dimensions.

Physicists recognize that spin is a very real physical property, playing an important role in the structure of atoms and molecules, with significance in chemistry and solid-state physics. Spin is important in all interactions among subatomic particles, in the high-energy particle beams of the LHC, in low-temperature fluids, and in solar winds. Most physical processes, from the quantum scale to the galactic scale, depend on the interactions of subatomic particles regulated by the relative directions and rates of spin of those particles.

According to Victor J. Stenger, professor of physics at the University of Hawaii: "Spin is the total angular momentum, or intrinsic angular momentum, of a body. The spins of elementary particles are analogous to the spins of macroscopic bodies. In fact, the spin of a planet is the sum of the spins and the orbital angular momenta of all its elementary particles. So are the spins of other composite objects such as atoms, atomic nuclei and protons (which are made of quarks).

"At our current level of understanding, the elementary particles are quarks, leptons (such as the electron) and bosons (such as the photon). These particles are all imagined as point-like, so you might wonder how they can have spins. A simple answer might be, perhaps they are composite, too. But deep theoretical reasons having to do with the rotational symmetry of nature lead to the existence of spins for elementary objects and to their quantization.
"Spin has served as the prototype for other, even more abstract notions that seem to have the mathematical properties of angular momentum … quarks are paired as isospin 'up' and 'down,' which are the names given to the two quarks that make up ordinary matter. The rotational symmetry of space and time is generalized to include symmetries in more abstract 'inner' dimensions, with the result that much of the complex structure of the micro-world can be seen as resulting from symmetry breaking, connecting profoundly to ideas describing the spontaneous formation of structure in the macro-world.” From the viewpoint of the quantum scale, what we will see supports Professor Stenger’s description of spin; with a few important exceptions. As he suggests, from the quantum point of view, elementary particles like quarks do have additional features, they are not dimensionless points; they are composed of units of mass, energy, and as we have discovered, something else, which is not directly measurable as mass or energy, but does affect the total angular momentum of spinning objects, from electrons to galaxies. We have called this something else 'gimmel'. Elementary particles like electrons and quarks can only be treated as point-like in the current paradigm because of their extreme smallness relative to our ability to measure them from the macro-scale, and because the idea of a dimensionless particle is supported by the erroneous assumption of Newtonian calculus that physical variables can approach zero indefinitely closely. The terms ‘symmetry breaking and spontaneous formation of structure’ have become ingrained in the jargon of physicists, but they are actually just verbal representations of what mainstream particle physicists see as arbitrary randomness in the formation of sub-atomic structure and other physical processes. Finally, we see that Professor Stenger’s allusion to “inner dimensions” is inaccurate. There are additional dimensions, but they are not “inner’ dimensions. Let me explain: When we are successful in shifting our point of view, as happened in my spontaneous expansions of consciousness when I was a sixth grader, we are freed of the limitations of the physical body. What we see from that broader point of view, is that the additional dimensions required to explain quantum reality are not folded or curled up, as some physicists have imagined, and they are not the inner dimensions (actually pseudo dimensions) of matrices describing differential equations, they are extensions of the invariant relationships of the three dimensions we are aware of through the limited senses of the physical body. Just like one and two-dimensional domains are embedded within the three–dimensional domain, the three-dimensional domain is embedded in a fourth dimension (the first dimension of time) and so on. Pure mathematical number theory supports exactly nine finite dimensions embedded in an infinite substrate. The infinite substrate, encompassing all possible universes, connects the receding quantum realm with the expanding multiverses available to the three dimensions of consciousness in the three dimensions of time. This is difficult to envision while confined to the physical body and its limited senses, but it provides the basis for explanations of a number of conundrums that have plagued physical science since the time of Einstein and Bohr. 
Application of dimensional mathematics from this point of view explains how and why quarks combine in threes to form the protons and neutrons of ordinary physical reality, why fermions have intrinsic spin 1/2, and why there is a stable, life-supporting universe: why there is something rather than nothing. Also, the mathematics of nine finite dimensions shows that there is no arbitrary randomness in the formation of atomic structure. The universe is well-ordered, and it is accurately described by the Calculus of Distinctions, Dimensional Extrapolation, and the Conveyance Equation. The mathematical proofs we've developed are beyond the scope of this discussion, but you can go to the post referenced earlier, http://www.erclosetphysics.com/search?q=TRUE+units+mathematics. You can also go to http://iqnexus.org/mag.htm, a link to the IQNexus Journal, where you'll find Vol. 8, No. 3, published September 01, 2016, which has the mathematical details, and Vol. 8, No. 4, published December 01, 2016, with Q&A discussions of TDVP.
{"url":"http://www.erclosetphysics.com/2016/12/","timestamp":"2024-11-11T10:28:28Z","content_type":"text/html","content_length":"293372","record_id":"<urn:uuid:533db3c5-b8e0-4308-abf6-a12ce3d7e02d>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00592.warc.gz"}
Some teachers grade on a "(bell) curve" based on the belief that classroom test scores are normally distributed. One way of doing this is to assign a "C" to all scores within 1 standard deviation of the mean. The teacher then assigns a "B" to all scores between 1 and 2 standard deviations above the mean and an "A" to all scores more than 2 standard deviations above the mean, and uses symmetry to define the regions for "D" and "F" on the left side of the normal curve. If 200 students take an exam, determine the number of students who receive a B. (Round to the nearest whole student.)

We need to find the proportion of students who lie between 1 and 2 standard deviations above the mean.

For 2 SD above the mean, the z score = 2, and the cumulative probability = 0.9772.
For 1 SD above the mean, the z score = 1, and the cumulative probability = 0.8413.

Therefore the proportion of students between 1 and 2 standard deviations above the mean = 0.9772 - 0.8413 = 0.1359.

Therefore the number of students who receive a B = 200 * 0.1359 = 27.18, or 27 students.
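The arithmetic above is easy to check in a few lines. Here is a minimal sketch in Python using only the standard library; the erf-based normal CDF is a standard identity, not something from the original solution:

```python
# Check of the worked solution: P(1 <= Z <= 2) for a standard normal Z,
# using Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p_b = phi(2) - phi(1)      # proportion of scores between 1 and 2 SD above the mean
print(round(p_b, 4))       # 0.1359
print(round(200 * p_b))    # 27 students receive a B
```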
{"url":"https://justaaa.com/statistics-and-probability/8713-some-teachers-grade-on-a-bell-curve-based-on-the","timestamp":"2024-11-11T19:37:42Z","content_type":"text/html","content_length":"40341","record_id":"<urn:uuid:a57ba2bb-d96f-4f41-8d4e-66e2f0a320e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00477.warc.gz"}
It is known that holomorphic Poisson structures are closely related to theories of generalized Kähler geometry and bi-Hermitian structures. In this article, we introduce quantization of holomorphic Poisson structures which are closely related to generalized Kähler structures / bi-Hermitian structures. Using the resulting noncommutative product $\star$ obtained via quantization, we also demonstrate computations with respect to concrete examples. (Comment: arXiv admin note: text overlap with arXiv:1304.018)

Assume that $M$ is a smooth manifold with a symplectic structure $\omega$. Then Weyl manifolds on the symplectic manifold $M$ are Weyl algebra bundles endowed with suitable transition functions. From the geometrical point of view, Weyl manifolds can be regarded as geometrizations of star products attached to $(M,\omega)$. In the present paper, we are concerned with the automorphisms of the Weyl manifold corresponding to the Poincaré–Cartan class $[c_0+\sum_{\ell=1}^\infty c_{\ell} u^{2\ell}]\in \check{H}^2(M)[[u^2]]$, where $c_0$ is a Čech cocycle corresponding to the symplectic structure $\omega$. We also construct modified contact Weyl diffeomorphisms.

In this article, we introduce symbol calculus on a projective scheme. Using holomorphic Poisson structures, we construct deformations of ring structures for structure sheaves on projective spaces.

It is known that Wolf constructed many examples of super Calabi–Yau twistor spaces. We would like to introduce super Poisson structures on them via super twistor double fibrations. Moreover, we define the structure of deformation quantization for such super Poisson manifolds.

Consider the problem "Give the equation of the conceptional rotations in ${\mathbb R}^3$ without using the parameter expressing individual rotations," just as the conceptional motion of constant velocity along straight lines (Galilei motions) is expressed by an elementary differential equation. In this paper we try to give an answer to the topic related to the above issue.

The Weyl algebra $(W_{2m}[h]; *)$ is the algebra generated by $u=(u_1,\dots,u_m,v_1,\dots,v_m)$ over $\mathbb{C}$ with the fundamental commutation relation $[u_i,v_j]=-ih\,\delta_{ij}$, where $h$ is a positive constant. The Heisenberg algebra $(\mathcal{H}_{2m}[\nu]; *)$ is the algebra given by regarding the scalar parameter $h$ in the Weyl algebra $W_{2m}[h]$ to be a generator $\nu$ which commutes with all others.

This is a noncommutative version of the previous work entitled "Deformation Expression for Elements of Algebras (I)." In general, in a noncommutative algebra there is no canonical way to express elements in a univalent way, which is often called the "ordering problem." In this note we discuss this problem in the case of the Weyl algebra of $2m$ generators. By fixing an expression, we extend the Weyl algebra transcendentally. We treat *-exponential functions of linear forms, and quadratic forms of crossed symbols under generic expression parameters.

This paper presents a preliminary version of the deformation theory of expressions of elements of algebras. The notion of *-functions is given. Several important problems appear in simplified forms, and these give an intuitive bird's-eye view of the whole theory.

In this note, we are interested in the *-version of various special functions.
Noting that many special functions are defined by integrals involving exponential functions, we define *-special functions by a similar integral formula, replacing the exponential functions by *-exponential functions.

Ideas from deformation quantization applied to algebras with one generator lead to methods for treating a nonlinear flat connection. It provides us with elements of algebras as parallel sections. The moduli space of the parallel sections is studied as an example of bundle-like objects with discordant (sogo) transition functions, which suggests that we treat movable branching singularities.
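The fundamental commutation relation quoted above is easy to verify in a concrete representation. The sketch below is illustrative only: the Schrödinger-type representation (u acting as multiplication by x, v as a scaled derivative) is my assumption for checking the identity, not something taken from these abstracts.

```python
# Symbolic check of the Weyl relation [u, v] = -i*h in one degree of freedom,
# representing u as multiplication by x and v as i*h*d/dx (sign chosen to
# match the convention [u_i, v_j] = -i*h*delta_ij quoted in the abstract).
import sympy as sp

x = sp.symbols('x')
h = sp.symbols('h', positive=True)
f = sp.Function('f')(x)

u = lambda g: x * g                     # u: multiplication by x
v = lambda g: sp.I * h * sp.diff(g, x)  # v: i*h * d/dx

commutator = sp.simplify(u(v(f)) - v(u(f)))
print(commutator)  # -I*h*f(x), i.e. [u, v] acts as -i*h times the identity
```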
{"url":"https://core.ac.uk/search/?q=author%3A(Miyazaki%2C%20Naoya)","timestamp":"2024-11-02T06:43:04Z","content_type":"text/html","content_length":"120826","record_id":"<urn:uuid:f33bbdb8-d8c0-46d9-be9f-4a01b6e372c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00755.warc.gz"}
Sections 6.9 and 16.1 - 16.3 in Matter and Interactions (4th edition)

Electric Potential Energy

As we noted in the notes on the electric force, the electric force is a conservative force. This means we can define a potential energy, electric potential energy in this case, using that force. These notes will cover the general relationship between force and potential energy, walk through an example for point charges, and highlight the relationship between electric potential energy and electric potential.

General Relationship - Energy and Force

Much like the gravitational force, the electric force is conservative. This means that we can define an electric potential energy using the general relationship: $$\Delta U = -\int_i^f\vec{F}\bullet d\vec{r}$$ This relationship is always true for conservative forces (it works for springs, gravitational interactions, electric interactions, etc.). In our case, we are particularly interested in the electric potential energy ($U_{elec}$): $$\Delta U_{elec}= -\int_i^f\vec{F}_{elec}\bullet d\vec{r}$$ There are a few important features of this relationship:

• Energy is a scalar (including electric energy), so to get energy from the vector force we must use the dot product with displacement. Dot products produce scalar quantities from two vector quantities.
• The electric force is not constant; it usually depends on $r$. This means we have to use the integral rather than just multiplying by the distance.
• If we integrate from some initial location to some final location, we get a change in electric potential energy between the two locations (not the energy at a single place). This change in energy is the important part: it does not matter how you get from the initial to the final location (squiggly or straight path), you will get the same change in energy. This means that electric potential energy is path independent.
• The units of electric potential energy are joules (J), just like all the other forms of energy.

Deriving Electric Potential Energy for Two Point Charges

Using the relationship between force and potential energy, we can derive the electric potential energy between two point charges from the electric force. Suppose we have two positive point charges $q_1$ and $q_2$, initially separated by a distance $r$. We will assume $q_1$ is fixed and let $q_2$ move to infinity. Starting with the general relationship: $$\Delta U_{elec} = U_f-U_i= -\int_i^f\vec{F}_{elec}\bullet d\vec{r}$$ we can plug in the electric force equation for the force from $q_1$ on $q_2$ (point charges), and we know that our initial location is $r_i=r$ and our final location is $r_f=\infty$. So we get: $$\Delta U_{elec} = U_\infty-U_r= -\int_r^\infty \frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}\hat{r} \bullet d\vec{r}$$ The force from $q_1$ on $q_2$ points in the $+\hat{x}$ direction, so $\hat{r}=\hat{x}$. $q_2$ will also move in the $\hat{x}$ direction, so $d\vec{r}=dr\,\hat{x}$: $$ U_\infty-U_r= -\int_r^\infty \frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}\hat{x} \bullet dr\,\hat{x}$$ Since here we have a scalar times a vector dotted with another scalar times a vector, we can rearrange this equation so that we have the scalars multiplied in front times the dot product of the two vectors.
$$ U_\infty-U_r= -\int_r^\infty \frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}\,dr\; \hat{x}\bullet\hat{x}$$ Because $\hat{x}$ has a magnitude of 1, and we are dotting $\hat{x}$ with $\hat{x}$ (these are parallel vectors), $$\hat{x} \bullet \hat{x}= |\hat{x}||\hat{x}|\cos(0)=1\cdot 1\cdot 1 = 1$$ So we get for our energy equation: $$ U_\infty-U_r= -\int_r^\infty \frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}\,dr$$ Pulling all of the constants out of the integral, we get $$U_\infty-U_r= -\frac{1}{4\pi\epsilon_0}q_1q_2\int_r^\infty \frac{1}{r^2}\,dr$$ $$U_\infty-U_r= -\frac{1}{4\pi\epsilon_0}q_1q_2 \left.\left(-\frac{1}{r} \right) \right|_r^\infty$$ $$U_\infty-U_r= -\frac{1}{4\pi\epsilon_0}q_1q_2 \left(-\frac{1}{\infty}+\frac{1}{r} \right)$$ Because 1 divided by a very large number is extremely close to zero, we say $\frac{1}{\infty}=0$. (You can show this formally using limits, but physicists tend to be lazy in this regard. This is one way we tend to drive mathematicians crazy.) This leaves the change in electric potential energy from $r$ to $\infty$ as: $$\Delta U_{elec} = U_\infty-U_r= -\frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r}$$

Just as we did with the superposition of potential, we will often assume that $U=0$ at infinity. You don't have to choose infinity as the location where $U=0$, but we often will because it is convenient. This assumption allows us to be consistent with how potential is defined, and it allows us to interpret a positive potential energy as repulsion and a negative potential energy as attraction. Using this assumption, $$U_r=\frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r}$$ This energy is the electric potential energy between two point charges $q_1$ and $q_2$ separated by a distance $r$. If $U$ is positive, $q_1$ and $q_2$ have the same sign, and if $U$ is negative, $q_1$ and $q_2$ have opposite signs.

Getting from Energy to Force

We can also use the inverse of the energy-force relationship to get the electric force from electric potential energy. If we know the electric potential energy in terms of $r$, we can calculate the electric force by taking the negative derivative of energy with respect to $r$, which gives the electric force in the $\hat{r}$ direction. This assumes that your electric potential energy equation does not depend on an angle. (If your electric potential energy does depend on an angle, then you have to use the gradient.) $$\vec{F}=-\frac{dU}{dr}\hat{r}$$ If you know the electric potential energy in terms of $x$, $y$, and $z$ variables, you can calculate the electric force by taking the negative derivative with respect to each direction (this is the gradient in cartesian coordinates): $$\vec{F}=-\frac{dU}{dx}\hat{x}-\frac{dU}{dy}\hat{y}-\frac{dU}{dz}\hat{z}=-\left\langle \frac{dU}{dx},\frac{dU}{dy},\frac{dU}{dz} \right\rangle$$

Relating Energy Back to Potential

Now that we have an idea of what the electric potential energy looks like (both generally and specifically for point charges), we can relate energy back to what we learned last week about electric potential. Let's start by considering two point charges again. Looking at the electric energy equation, we could easily rewrite it in terms of the electric potential from $q_1$: $$U=q_2 \frac{1}{4\pi\epsilon_0}\frac{q_1}{r}= q_2 V_1$$ Or the electric potential from $q_2$: $$U=q_1 \frac{1}{4\pi\epsilon_0}\frac{q_2}{r}= q_1 V_2$$ This shows a larger, more general relationship between electric potential energy and electric potential.
$$U=qV$$ Or, even more importantly: $$\Delta U=q\Delta V$$ Note that electric potential energy is NOT the same thing as electric potential. Electric potential energy requires two charges, or a charge interacting with a potential, whereas electric potential comes from a single charge. Electric potential energy has units of joules, and electric potential has units of volts. That being said, electric potential is related to electric potential energy: the potential tells you how much energy there could be, without needing to know which charges are interacting.

Video Example: Particle Acceleration through an Electric Field
Video Example: Preventing an Asteroid Collision
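As a quick numeric companion to these notes, here is a short sketch that evaluates the point-charge energy, checks the $U = qV$ relationship, and recovers the force as the negative derivative of $U$. The two 1 nC charges 1 cm apart are illustrative values of my choosing, not from the notes.

```python
# Numeric sketch of the point-charge relations above:
# U(r) = (1/(4*pi*eps0)) * q1*q2/r,  U = q2*V1,  and F = -dU/dr.
import math

EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)  # Coulomb constant, ~8.99e9 N m^2/C^2

def U(q1, q2, r):
    """Electric potential energy of two point charges (joules)."""
    return K * q1 * q2 / r

def V(q, r):
    """Electric potential of a single point charge (volts)."""
    return K * q / r

q1, q2, r = 1e-9, 1e-9, 0.01
print(U(q1, q2, r))           # ~8.99e-7 J (positive -> the pair repels)
print(q2 * V(q1, r))          # the same number: U = q2 * V1

# F = -dU/dr via a centered finite difference; a positive result along
# +r means repulsion, as expected for like charges.
dr = 1e-6
F = -(U(q1, q2, r + dr) - U(q1, q2, r - dr)) / (2 * dr)
print(F, K * q1 * q2 / r**2)  # ~8.99e-5 N, matching Coulomb's law
```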
{"url":"https://msuperl.org/wikis/pcubed/doku.php?id=184_notes:pc_energy","timestamp":"2024-11-05T20:00:24Z","content_type":"application/xhtml+xml","content_length":"44541","record_id":"<urn:uuid:b855b068-4128-4c3e-bc27-6a006d6eb5c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00086.warc.gz"}
Khon Kaen

25 randomly selected Dingbats/Rebus questions for quizmasters. New random selection made weekly. Next update: Monday 11th November 2024. (Please note: questions are taken from our database of previous quizzes. Some questions and answers may be outdated.)

1. Solve this dingbat to reveal a well-known term or phrase. Answer: Club sandwich
2. Solve this dingbat to reveal a well-known term or phrase. Answer: Missing link
3. Solve this dingbat to reveal a well-known term or phrase. Answer: Good intentions
4. Solve this dingbat to reveal a well-known term or phrase. Answer: Double or nothing
5. Solve this dingbat to reveal a well-known term or phrase. Answer: Missing you
6. Solve this dingbat to reveal a well-known term or phrase. Answer: A splitting headache
7. Solve this dingbat to reveal a well-known term or phrase. Answer: Afternoon tea
8. Solve this dingbat to reveal a well-known term or phrase. Answer: Lie in state
9. Solve this dingbat to reveal a well-known term or phrase. Answer: Buck up
10. Solve this dingbat to reveal a well-known term or phrase. Answer: Whiskey on the rocks
11. Solve this dingbat to reveal a well-known term or phrase. Answer: He rose out of nowhere (He came out of nowhere)
12. Solve this dingbat to reveal a well-known term or phrase. Answer: Doctor Dolittle
13. Solve this dingbat to reveal a well-known term or phrase. Answer: 7 Up cans
14. Solve this dingbat to reveal a well-known term or phrase. Answer: Every dog has its day
15. Solve this dingbat to reveal a well-known term or phrase. Answer: Vitamin C Deficiency
16. Solve this dingbat to reveal a well-known phrase. Answer: Once and for all
17. Solve this dingbat to reveal an 11-letter word.
18. Solve this dingbat to reveal a well-known term or phrase. Answer: White elephant
19. Solve this dingbat to reveal a well-known term or phrase. Answer: In the middle of nowhere
20. Solve this dingbat to reveal a well-known term or phrase. Answer: Tower of London
21. Solve this dingbat to reveal a well-known term or phrase. Answer: Goldilocks and the Three Bears
22. Solve this dingbat to reveal a 6-letter word.
23. Solve this dingbat to reveal a film title.
24. Solve this dingbat to reveal a well-known term or phrase. Answer: A shot in the dark
25. Solve this dingbat to reveal a well-known term or phrase. Answer: Too expensive
{"url":"https://kkquiz.com/category/dingbats/1","timestamp":"2024-11-09T21:54:19Z","content_type":"text/html","content_length":"13223","record_id":"<urn:uuid:3a09297c-0946-4d0c-b31a-18faa5da961e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00264.warc.gz"}
Spatial gradients in Clovis-age radiocarbon dates across North America suggest rapid colonization from the north

Marcus J. Hamilton*† and Briggs Buchanan‡§
*Department of Anthropology, University of New Mexico, Albuquerque, NM 87131; and ‡Department of Anthropology, University of British Columbia, Vancouver, BC, Canada V6T 1Z1

A key issue in the debate over the initial colonization of North America is whether there are spatial gradients in the distribution of the Clovis-age occupations across the continent. Such gradients would help indicate the timing, speed, and direction of the colonization process. In their recent reanalysis of Clovis-age radiocarbon dates, Waters and Stafford [Waters MR, Stafford TW, Jr (2007) Science 315:1122–1126] report that they find no spatial patterning. Furthermore, they suggest that the brevity of the Clovis time period indicates that the Clovis culture represents the diffusion of a technology across a preexisting pre-Clovis population rather than a population expansion. In this article, we focus on two questions. First, we ask whether there is spatial patterning to the timing of Clovis-age occupations and, second, whether the observed speed of colonization is consistent with demic processes. With time-delayed wave-of-advance models, we use the radiocarbon record to test several alternative colonization hypotheses. We find clear spatial gradients in the distribution of these dates across North America, which indicate a rapid wave of advance originating from the north. We show that the high velocity of this wave can be accounted for by a combination of demographic processes, habitat preferences, and mobility biases across complex landscapes. Our results suggest that the Clovis-age archaeological record represents a rapid demic colonization event originating from the north.

Early Paleoindian | wave of advance | landscape complexity | hunter-gatherers | Late Pleistocene

In this article, we consider five alternative hypotheses that have been proposed to account for the initial early Paleoindian occupation of North America. The first hypothesis is the traditional model, which states that Clovis peoples migrated into unglaciated North America from Beringia via an ice-free corridor between the Laurentide and Cordilleran ice sheets (1). Research suggests that an ice-free corridor would have been open and available for human passage by 12,000 years ago (1), although the habitability of the corridor is still a matter of debate (2). However, recently there has been renewed interest in alternative hypotheses. A second hypothesis is that Clovis peoples migrated along the coast of Alaska, British Columbia, and Washington State (2, 3). This model, usually referred to as the Northwest Coast model, suggests that maritime-adapted groups using boats moved along the ice-free western coast and sometime later moved east into the interior of the continent. A third hypothesis, raised recently, is that the initial colonists could have rapidly skirted the western coast of North America and established their first substantial occupations in South America (4). In this hypothesis, colonists would then have moved north through the Isthmus of Panama, colonizing North America from the south.
A fourth hypothesis is that North America was colonized by Solutrean people who had traveled along the edge of an "ice bridge" between Europe and North America (5), so that the initial colonization occurred from the east. This hypothesis is driven by suggested similarities between Clovis and pre-Clovis technology on the one hand and Solutrean technology on the other, which some take to indicate an historical connection (5). A fifth hypothesis, as proposed by Waters and Stafford (6), is that Clovis technology may have been a technological innovation that spread via cultural transmission through an established pre-Clovis population, which had colonized North America sometime earlier in the Pleistocene, before the Clovis phenomenon.

We test these five models by analyzing the spatial distribution of Clovis and Clovis-aged radiocarbon dates across North America. We include the earliest available Clovis or Clovis-age dates from different regions of North America with the assumption that they reflect meaningful temporal and spatial variation in the initial colonization process. We located the potential origin of a colonizing wave at six locations: four reflecting the external origins of the alternative colonization models (north for the traditional or ice-free corridor model, east for the Solutrean model, south for the Isthmus of Panama, and west for the Northwest Coast model), and two reflecting pre-Clovis origins. For the northern origin, we chose Edmonton, Alberta, following the assumed location of the southern mouth of the ice-free corridor (7, 8). For the east, we chose Richmond, VA, a location roughly halfway down the east coast of North America. For the southern origin, we chose Corpus Christi, TX, reflecting a potential route north through Central America (4), and for the western origin, we chose Ventura, CA, a location about halfway down the west coast of North America and across from the Channel Islands, where late Pleistocene human remains and evidence for a contemporaneous maritime-based subsistence economy have been recovered (9, 10). Although each wave is centered on a particular location, given the width of the wavefronts used and the small sample size of available radiocarbon dates, our results are robust to reasonable changes in origin. For the pre-Clovis origin model, we centered the colonizing wave on Meadowcroft Rockshelter (11) in Pennsylvania and Cactus Hill (12) in Virginia, two of the earliest and most prominent pre-Clovis candidates in North America.

Time-Delayed Wave of Advance Model. To model the wave of advance, we follow procedures outlined by Fort and colleagues (13–18) in their recent studies of other human prehistoric expansions. The simple wave of advance model combines a logistic population growth term with Fickian diffusion, which describes the spread of the population in two spatial dimensions. The resulting equation is termed the Fisher equation:
$$\frac{\partial N}{\partial t} = \gamma N\left(1-\frac{N}{K}\right) + D\nabla^2 N \qquad [1]$$

where $\gamma$ is the maximum potential growth rate, $N$ is population size or density, $K$ is the carrying capacity of the local environment, $D$ is the diffusion coefficient (in km$^2$ yr$^{-1}$), and $\nabla^2$ is the Laplacian operator describing the diffusion of the population $N$ in two dimensions. The diffusion coefficient measures the lifetime dispersal of an individual, measured by the average distance between location of birth and first reproduction. This measure, the mean squared displacement of an individual, is then adjusted for random dispersal in two dimensions and generation time $T$, giving the diffusion coefficient $D = \lambda^2/4T$, where $\lambda^2$ is the mean squared displacement. The well-known solution to Eq. 1 produces traveling waves of colonists radiating out in concentric circles from an initial point of origin. The velocity $v$ of this wavefront of colonists is given by

$$v = 2\sqrt{\gamma D} \qquad [2]$$

Note that the velocity is simply a function of the population growth rate $\gamma$ and the diffusion rate $D$, and is independent of population size $N$ or carrying capacity $K$.

Although widely used in the analysis of spatial movement, the Fisher equation incorporates some biologically and anthropologically unrealistic assumptions (19), perhaps most importantly that individual dispersal begins at birth, so that dispersal is continuous through life. However, human dispersal is best modeled discretely, because there is generally a time delay between birth and dispersal, related to rates of human growth and development. This time delay reflects the situation that as a hunter-gatherer residential group establishes a new home range during the process of colonization, there is a time delay before the next generation expands into the adjacent landscape. Fort and colleagues (13, 14, 19) derived a model to account for such generational time delays in the diffusion process. They show the velocity of the time-delayed traveling wave is then

$$v = \frac{2\sqrt{\gamma D}}{1+\gamma T/2} \qquad [3]$$

The velocity of the wavefront is thus reduced as a function of the generational time delay $T$. Note that in cases where there is no time delay ($T = 0$), Eq. 3 reduces to Eq. 2. The expected velocity of a hunter-gatherer population expansion can then be estimated by parameterizing Eq. 3 with ethnographic data on maximum population growth rates $\gamma$, mean generation times $T$, and measures of individual lifetime dispersal $D$.

Parameter Estimation. We estimate these parameters using published data from refs. 20 and 21.
For natural fertility populations, age at first birth, a common measure of mean generation time $T$, is approximately 20 years, and the maximum annual population growth rate $\gamma$ is approximately 0.04 (20). Measures of lifetime dispersal in modern hunter-gatherers are more difficult to quantify. Using hunter-gatherer mating distance data (21), mean individual dispersal within populations can be conservatively estimated to be up to 3,000 km² per generation, although the upper range for a recorded marriage distance gives a mean squared distance of nearly 21,000 km². Although these data are not ideal measures of dispersal, where the data are available, marriage distance has been shown to correlate both strongly and positively with generational dispersal (22). Therefore, we use 3,000 km² as a conservative measure of lifetime dispersal, giving $D = 37.5$ km² per year. These demographic measures are conservative because these data originate from societies near demographic equilibrium, with neighboring populations, a situation far from representative of a rapidly growing, late Pleistocene hunter-gatherer population expanding into an open landscape. Combining these parameters with Eq. 3, the expected velocity of this wave is then $v \approx 1.77$ km per year.
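A minimal numeric sketch of this parameterization, using exactly the values quoted above (the only reading I impose is $D = \lambda^2/4T$ as defined earlier):

```python
# Time-delayed wave-of-advance speed (Eqs. 2 and 3) with the ethnographic
# parameters from the text: gamma = 0.04/yr, T = 20 yr, lambda^2 = 3000 km^2.
import math

gamma = 0.04             # maximum annual population growth rate (1/yr)
T = 20.0                 # mean generation time (yr)
lambda_sq = 3000.0       # mean squared displacement per generation (km^2)
D = lambda_sq / (4 * T)  # diffusion coefficient = 37.5 km^2/yr

v_fisher = 2 * math.sqrt(gamma * D)         # Eq. 2: no time delay, ~2.45 km/yr
v_delayed = v_fisher / (1 + gamma * T / 2)  # Eq. 3: time-delayed, ~1.75 km/yr
print(round(v_fisher, 2), round(v_delayed, 2))
```

This reproduces the quoted ~1.77 km per year to within about 1 percent; the small difference presumably reflects rounding of the inputs.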
Data. We used radiocarbon dates or averaged dates from 23 sites (see Table 1, Fig. 1, and Methods). Seventeen of the dates are those identified by Waters and Stafford (6) as either reliable Clovis or Clovis-aged dates. The other six are dates from other Clovis or Clovis-age sites that occur in similar contexts but were not considered in their analyses (see Methods for details). Five of these sites (Debert, Vail, Casper, Hiscock, and Big Eddy) can be considered Clovis sites because of the presence of Clovis-like fluted points. Hedden lacks fluted points, but has a subsurface assemblage dating to the same period as other regional Clovis-age sites (Table 1). The majority of the additional dates (n = 5) are from early Paleoindian sites located in eastern North America, which are widely considered to represent the earliest dated late-Pleistocene occupations of the east (23). This sample size is admittedly extremely small, but we are limited by the rarity of well-dated Clovis-age sites (6). All radiocarbon dates used in our analyses (Table 1) were calibrated using the IntCal04 curve (24) in Calib 5.0.

Table 1. Radiocarbon and calibrated dates used in Fig. 1

No. | Site | Date, 14C yr B.P. | Calibrated date B.P. | Ref.
1 | Anzick | 11,040 | 12,948 | 6
2 | Arlington Springs | 10,960 | 12,901.5 | 6
3 | Big Eddy | 10,832 | 12,842.5 | 46
4 | Bonneville Estates | 11,010 | 12,922.5 | 6
5 | Casper | 11,190 | 13,106 | 45
6 | Colby | 10,870 | 12,855.5 | 6
7 | Debert | 10,590 | 12,429 | 48
8 | Dent | 10,990 | 12,910.5 | 6
9 | Domebo | 10,960 | 12,895 | 6
10 | East Wenatchee | 11,125 | 13,025 | 6
11 | Hedden | 10,550 | 12,437.5 | 49, 50
12 | Hiscock | 10,795 | 12,828.5 | 51
13 | Indian Creek | 10,980 | 12,925 | 6
14 | Jake Bluff | 10,765 | 12,817.5 | 6
15 | Kanorado | 10,980 | 12,906.5 | 6
16 | Lange-Ferguson | 11,080 | 12,994 | 6
17 | Lehner | 10,950 | 12,891 | 6
18 | Lubbock Lake | 11,100 | 13,010 | 6
19 | Murray Springs | 10,885 | 12,862.5 | 6
20 | Paleo Crossing | 10,980 | 12,912.5 | 6
21 | Shawnee-Minisink | 10,935 | 12,883 | 6
22 | Sloth Hole | 11,050 | 12,969 | 6
23 | Vail | 10,530 | 12,255 | 52

Statistical Approach. To represent the expanding waves for each analysis, we measured the distance of each site from the wave's point of origin using great-circle arcs (in kilometers). To establish the earliest occupations per region, the earliest observations per bin were regressed against the dependent variable. Following methods outlined by Fort and colleagues (15–17, 19, 20), concentric bins were set at a consistent width (450 km) radiating out from each wave origin, and the earliest dated site within each bin was regressed by its distance from origin (see Methods for details). To evaluate the best-fit model, we calculated the correlation coefficient (r) for each test. The origin with the highest correlation coefficient is therefore the most likely point of origin (13, 14, 18). Similar methods have been used successfully in understanding colonization processes in other regions throughout prehistory (13–15, 17, 18). Here, we do not attempt to differentiate between demic and cultural diffusion models directly, because both types of model can be constructed to predict the same trajectories through time (25). Rather, similar to other researchers, we suggest that if a model can predict a demic diffusion given realistic demographic and ethnographic parameters, then the hypothesis of demic diffusion has not been falsified unless a cultural diffusion model can be shown to yield similar or more accurate results (13, 14, 16, 18, 25).

Following Fort's methods, we estimate the velocity v of the wavefront as the inverse slope of the linear regression of time (calibrated dates) by distance (kilometers from origin). Because there is likely much more error in the measurement of time than in the measurement of distance, time is placed along the y axis and distance along the x axis. We also estimate the slope of distance by time, because of possible errors in the measurement of distance. Simulations show that Fort's inverse-regression method tends to overestimate the slope when dealing with small sample sizes (data not shown), so we use randomization methods (10,000 iterations) to estimate the slope and significance of the second regression method. Further, we suggest that randomization statistics are appropriate here because we are interested in testing both whether the pattern is significantly different from random (rather than from an a priori distribution, which may or may not be normal) and whether the slope and correlations are negative, not simply statistically significant. We report the results of both methods.
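To make the regression step concrete, here is a sketch of the velocity estimate. The distance-date pairs are placeholder values of my own (the paper's measured great-circle distances are not reproduced here); only the procedure, ordinary least squares of calibrated date on distance with velocity as the negative inverse slope, follows the text.

```python
# Wavefront velocity as the negative inverse slope of calibrated date (y)
# regressed on distance from a candidate origin (x). Data are placeholders.
import numpy as np

bins = np.array([
    [450.0, 13106.0],   # (distance from origin in km, earliest cal BP in bin)
    [900.0, 13025.0],
    [1350.0, 12994.0],
    [1800.0, 12925.0],
    [2250.0, 12862.5],
])
x, y = bins[:, 0], bins[:, 1]

slope, intercept = np.polyfit(x, y, 1)  # date = intercept + slope * distance
velocity = -1.0 / slope                 # km per calendar year
r = np.corrcoef(x, y)[0, 1]
print(round(velocity, 1), round(intercept), round(r, 2))  # ~7.7 km/yr here
```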
Results

Best-Fit Model. Our data show clear spatial gradients in the distribution of earliest Paleoindian occupation dates (Fig. 2). Not only was the northern origin model the model with the highest correlation coefficient (r = -0.73), but it was also the only statistically significant model (P = 0.008). Importantly, the only other wave near statistical significance was the western origin model (Fig. 2). It is interesting that in Fig. 2, each plot clearly reflects a general west-to-east trend in radiocarbon dates, as shown by the dashed lines (linear regressions of the entire data sets), consistent with recent findings based on the cladistic analysis of Clovis projectile point morphometrics (26). The plot of all dates used in the analysis by distance from the northern origin is also highly significant (r = -0.59, P = 0.001, n = 23), suggesting that the spatial gradients of earliest occupation dates we report here are also evident in the average timing of the Clovis occupation across the continent. The consistency of the west-to-east gradient in the ages of radiocarbon dates suggests that, despite the small sample size, these results are robust. In other words, there would need to be a consistent and fundamental change to the dating of early Clovis-age occupations both in the south and east to alter the findings we present here. Further, although the significance of the slope of the northern origin model is influenced by the relatively young date of the furthest bin (black circle to the furthest right in Fig. 2, upper left), excluding this data point does not change our results, because the slope remains significant (P = 0.034).

Slope. The estimated velocity of the wavefront using Fort's inverse regression technique was v = 7.56 km per year, and with the resampling method it was v = 5.13 km per year [1.89–14.14, 95% confidence limits (CLs)]. The confidence limits are too wide to provide much further constraint on the velocity. A possible reason for their width, aside from the small sample size, is that the wave velocity shown in the upper right graph in Fig. 2 begins very rapidly and decreases dramatically as the continent fills. Indeed, the correlation coefficient is improved by fitting the northern-origin model with a quadratic model (r = -0.82), reflecting this trend. We discuss this pattern further below.

Intercept. The intercept of the regression equation predicts that Clovis colonists arrived at the mouth of the ice-free corridor approximately 13,378 cal B.P. (12,896–13,867, 95% CLs) or, using the uncalibrated data, 11,342 14C B.P. (11,114–11,607, 95% CLs).

Fig. 1. Map showing the location of Early Paleoindian sites mentioned in the text. Numbers correspond to those found in Table 1.

Fig. 2. Bivariate plots of the wave-of-advance analysis for each of the six potential origins. Filled circles are the earliest dates per 450-km bin. Open circles are the raw data (all 23 dates). Solid lines are regressions through the binned data, and dashed lines are regressions through the raw data. The correlation coefficients and P values refer to the solid lines.

Discussion

Our results provide clear, quantitative evidence of a colonizing wavefront of early Paleoindians originating from the north. This wavefront moved rapidly to the south and east, traveling considerably faster than predicted from ethnographic data and faster than other recorded hunter-gatherer expansions into previously unoccupied land masses (17, 27, 28), although at a speed that is not, in itself, unprecedented (29, 30). Although this result does not discount the possible presence of a pre-Clovis population within North America, the speed of the wavefront suggests that any preexisting human populations offered little demographic, ecological, or territorial competition to the advancing front of colonists, and the available data suggest these populations were not the source of the subsequent Clovis culture.

Both the rapidity with which the Clovis culture appeared over the continent and the general trajectory of the colonization process have been noted several times before. In their classic model, Kelly and Todd (31) suggested that the speed of colonization was driven by high rates of residential mobility, because of the large foraging areas required of a primarily carnivorous diet (32). Reasoning from optimal foraging theory, they suggested that colonizing hunter-gatherers, with a northern-latitude preadaptation (27), would have maximized return rates by focusing on widely available, predictable, high-return resources, in particular mammalian megafauna. Because these prey species likely occurred at low densities, local prey would have been depleted quickly, causing foragers to expand into adjacent open regions. Because of their specialized foraging niche, home ranges would have been both very large and have had very low effective carrying capacities, resulting in a fast-moving, shallow wavefront (33). This model receives support from recent theoretical and empirical ecological research (34, 35), which shows that across species, optimal search strategies, and hence patch residence times, are influenced heavily not only by environmental productivity but also by the regeneration rates of key prey species.
In patches where prey regeneration rates are fast, foragers can reuse habitats regularly because of the rapid restocking of prey, whereas in patches where prey regeneration rates are long to infinite (i.e., where foraging causes local extirpation of prey), optimal search strategies become linear (34, 35), leading to high levels of mobility and the utilization of large foraging areas. Another, although not necessarily mutually exclusive, hypothesis suggests that colonizing populations followed least-cost pathways into the lower continent, where movement occurred rapidly either through favorable corridors, such as river drainages, or across areas of relatively homogeneous topography (4, 27). These models predict that colonization would have bypassed, or traveled quickly across, landscapes that were unfavorable because of topography and/or ecological productivity.

These models have obvious implications for regional variation in late Pleistocene foraging strategies. On the Plains and in the Southwest, where the archaeological record shows Clovis foragers targeted mammalian megafauna, diffusion rates would have been fast. In these regions, initial foraging return rates would have been high, but regeneration rates of megafaunal prey would have been very slow [perhaps infinite (7, 8, 36)] because of low reproductive rates, leading to large home ranges and the rapid geographic expansion of human populations (sensu 31). Similarly, Clovis colonists would have moved rapidly through large river systems (4), such as the Missouri and Mississippi drainages, leading to an initially rapid rate of colonization through the midcontinent, which would then have slowed dramatically as diet breadths broadened with the increased biodiversity of the eastern forests (27, 37), and as prey size, abundance, and availability changed (38).

We suggest that these ideas are consistent with recent developments in understanding the movement of colonizing wavefronts across heterogeneous landscapes. Campos, Mendez, and Fort (39) analyzed the effects of diffusion across complex surfaces to understand the rapid rate of expansion (>13 km per year) of European populations across North America in the 17th to 19th centuries (40), where colonization was known to be biased toward key landscape features, particularly river valleys. They derived an analytical expression for the velocity of a traveling wave moving across complex, or fractal, surfaces:

$$v(t) = \frac{d_w}{\mu}\,(4\gamma D)^{1/\mu}\,t^{1/d_{min}-1} \qquad [4]$$

which, combined with the time-delay adjustment in Eq. 3, gives

$$v(t) = \frac{d_w}{\mu}\,\frac{(4\gamma D)^{1/\mu}\,t^{1/d_{min}-1}}{1+\gamma T/2} \qquad [5]$$

where $d_{min}$ is the minimum dimension of the landscape, $d_w$ is the basic dimension of movement (two dimensions in this case), and $\mu = d_{min}d_w/(d_{min}+d_w)$. The important parameter to note here is $\mu$, which essentially measures the extent to which the population saturates the area behind the advancing wavefront (41). When $\mu = 2$, the expanding population saturates 100% of the landscape behind the wavefront, or occupies proportion $P = 1$ of the landscape, where $P = \mu/d_w$; this would be the case if the colonizing population had no specific niche preferences and used all landscapes with equal probability. However, as outlined above, an expanding, colonizing population of hunter-gatherers would have favored certain habitat types based on behavioral and technological adaptations, preferred foraging niches, prey types, and the mobility costs of different landscapes (4, 27). Thus, it is more than likely that for the Clovis colonists $\mu < 2$, meaning that the population used less than the full two dimensions of the landscape, with proportion $1-P$ of the landscape unoccupied, causing the wavefront to advance at an increased velocity (see Fig. 3). Solving for $\mu$ when $v \approx 5$–$8$ km per year (the observed velocity) gives $\mu \approx 1.3$–$1.6$, suggesting that Clovis colonists need have used only about 2/3 to 3/4 of the available landscape in order for the colonizing wave to have traveled at the velocity we observe in our data (Fig. 3 Upper). This finding is in qualitative agreement with the early Paleoindian archaeological record, which suggests that Clovis-age sites are found commonly in high-productivity areas, such as river basins, as well as prime hunting areas (27). In addition, recent research shows that, indeed, ethnographic hunter-gatherers use landscapes in complex ways, which are reflected in nonlinearities in space use (42), residential mobility (43), and social network structure (44).

The model we have proposed in this article to account for the velocity and trajectory of the colonization process emphasizes the interplay of population growth rates, hunter-gatherer adaptations, and the ecological and topographic complexity of landscapes. In particular, our model predicts that (i) the majority of Clovis-age sites should be associated with the types of ecological and topographic landscapes favorable to colonization, as outlined above (i.e., major river drainages and areas of high foraging return rates); (ii) the Clovis-age archaeological record should reflect the repeated use of regional landscapes because of the generational time delays of the colonization process; (iii) regional variation in the size of home ranges should be influenced by both ecological productivity and, perhaps more importantly, the potential regeneration rates of high-ranked prey (i.e., home range size should covary with high-ranked prey body size); and (iv) the earliest Clovis dates on the continent should occur on the far northern Plains, and the youngest Clovis dates for the initial occupation of a region should occur in Central America.

Fig. 3. Functions describing the wavefront velocity in terms of the diffusion coefficient and landscape complexity. (Upper) Wave velocity v as a function of the diffusion coefficient D for the data used in the analysis. The horizontal dotted lines indicate the approximate upper and lower bounds for the observed velocity of the Clovis wavefront. The vertical dot-dash-dot line gives the approximate diffusion coefficient based on ethnographic data. The expected velocity from the simple time-delayed model (solid line) falls considerably short of the archaeologically observed velocity. The upper and lower curves based on Eq. 5 (dashed lines) show that the archaeologically observed velocities are easily accounted for by movement across complex landscapes. (Lower) Wave velocity expressed as a function of both the proportion of the landscape occupied P and the diffusion coefficient D (from Eq. 5).
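Equation 5 can be explored numerically. Two caveats: the equation layout was garbled in this copy, so the exact form used below (prefactor $d_w/\mu$, chosen so that $\mu = 2$ with $d_{min} = 1$ recovers Eq. 3) is a reconstruction; and the minimum dimension $d_{min}$ and evaluation time $t$ are assumptions of mine, since the values behind the paper's $\mu \approx 1.3$–$1.6$ result are not printed here. The sketch only illustrates the qualitative speed-up as $\mu$ drops below 2.

```python
# Front speed on a complex landscape (reconstructed Eq. 5), evaluated with
# the paper's demographic parameters and assumed d_min = 1, t = 20 yr.
gamma, T, D, dw = 0.04, 20.0, 37.5, 2.0

def front_speed(mu, dmin=1.0, t=20.0):
    v = (dw / mu) * (4 * gamma * D) ** (1 / mu) * t ** (1 / dmin - 1)
    return v / (1 + gamma * T / 2)

for mu in (2.0, 1.6, 1.3):
    print(mu, round(front_speed(mu), 2))
# mu = 2.0 -> ~1.75 km/yr (the homogeneous, time-delayed Fisher speed);
# mu = 1.6 -> ~2.7 km/yr; mu = 1.3 -> ~4.4 km/yr: speeds climb toward the
# observed 5-8 km/yr as landscape use becomes more selective.
```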
Methods

Wavefront/Bin Width. We used wavefront bins of 450 km, although our results do not change quantitatively with reasonable adjustments in bin widths. A sensitivity analysis showed that our results are robust to bin widths of between approximately 300 and 600 km (all P < 0.05). Below 300 km, bins are too narrow to capture variance in the sparse radiocarbon record, and above 600 km, there are not enough bins across the continent to make meaningful comparisons. We used widths of 450 km to provide enough bins across the continent for meaningful statistics, while at the same time ensuring that sites in the analysis were far enough apart that we could be confident of seeing an underlying trend. A bin width of 450 km also helps us meet the first of two criteria laid out by Hazelwood and Steele (33) for archaeological diffusion modeling: (i) because of modeling error, or errors in the width of the wavefront, the distance between two sites, $\Delta x$, must be greater than the width of the wavefront, $L = 8\sqrt{D/\gamma} = 245$ km. In our northern wave (the wave of interest) we note $\langle\Delta x\rangle > L$, therefore meeting the first condition; and (ii) because of errors in radiocarbon dates, the difference in time between two sites, $\Delta t$, must be greater than the combined error rates of the two sites plus the modeling error, $\theta = |\varepsilon_A + \varepsilon_B| + 8/\gamma$, where $\varepsilon_X$ is the radiocarbon error at site $X$, so that $\Delta t > \theta$. Again, working with averages, our average change in time between bins, $\Delta t = 149$, is significantly less than the modeling error, $\theta \approx 335$, meaning that we do not meet the second condition. Therefore, although we can be confident that our sample tracks the distance between traveling wavefronts over time, we must express caution in interpreting the dates here as the earliest dates within each wavefront bin. However, as the time it took for the colonization process to occur (11,200 - 10,600 = 600 radiocarbon years) is greater than the modeling error, we feel confident that our analysis accurately tracks the general trend in the early occupation history of the continent, if not the exact timing.
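Both numeric criteria above can be verified directly from the stated parameters; the ±70-yr radiocarbon errors in the second check are an assumed typical value, chosen only to show that $\theta$ lands near the quoted ~335 yr.

```python
# Check of the Hazelwood-Steele criteria quoted above.
import math

D, gamma = 37.5, 0.04
L = 8 * math.sqrt(D / gamma)  # wavefront width
print(round(L))               # -> 245 km, matching the text

eps_A = eps_B = 70.0          # assumed typical 1-sigma radiocarbon errors (yr)
theta = abs(eps_A + eps_B) + 8 / gamma
print(theta)                  # -> 340.0 yr, near the quoted theta of ~335 yr
```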
1. Haynes CV, Jr (2005) in Paleoamerican Origins: Beyond Clovis, eds Bonnichsen R, Lepper BT, Stanford D, Waters MR (Texas A&M Press, College Station, TX), pp 113–132.
2. Mandryk CAS, Josenhans H, Fedje DW, Mathewes RW (2001) Quaternary Sci Rev 20:301–314.
3. Dixon EJ (1999) Bones, Boats, and Bison: Archaeology and the First Colonization of North America (Univ of New Mexico Press, Albuquerque, NM).
4. Anderson DG, Gillam JC (2000) Am Antiquity 65:43–66.
5. Bradley B, Stanford D (2004) World Archaeol 36:459–478.
6. Waters MR, Stafford TW, Jr (2007) Science 315:1122–1126.
7. Mosimann JE, Martin PS (1975) Am Sci 63:304–313.
8. Martin PS (1967) in Pleistocene Extinctions: The Search for a Cause, eds Martin PS, Wright HE (Yale Univ Press, New Haven, CT), pp 75–120.
9. Orr PC (1962) Am Antiquity 27:417–419.
10. Rick TC, Erlandson JM, Vellanoweth RL (2001) Am Antiquity 66:595–613.
11. Adovasio JM, Pedler D, Donahue J, Stuckenrath R (1999) in Ice Age People of North America: Environments, Origins, and Adaptations of the First Americans, eds Bonnichsen R, Turnmire KL (Oregon State Univ Press for the Center for the Study of the First Americans, Corvallis, OR), pp 416–431.
12. McAvoy JM, McAvoy LD (1997) Archaeological Investigations of Site 44SX202, Cactus Hill, Sussex County Virginia (Dept Hist Resour, Commonwealth of Virginia, Richmond, VA).
13. Fort J, Mendez V (1999) Phys Rev Lett 82:867–870.
14. Fort J, Mendez V (1999) Phys Rev E 60:5894–5901.
15. Fort J (2003) Antiquity 520–530.
16. Fort J, Jana B, Humet J (2004) Phys Rev E 70:031913.
17. Fort J, Pujol T, Cavalli-Sforza LL (2004) Camb Archaeol J 14:53–61.
18. Pinhasi R, Fort J, Ammerman AJ (2005) PLoS Biol 3:2220–2228.
19. Fort J, Mendez V (2002) Rep Prog Phys 65:895–954.
20. Walker RS, Gurven M, Hill K, Migliano A, Chagnon NA, De Souza R, Djurovic G, Hames R, Hurtado AM, Kaplan H, et al. (2006) Am J Hum Biol 18:295–311.
21. MacDonald DH, Hewlett BS (1999) Curr Anthropol 40:501–523.
22. Hewlett BS, van de Koppel JMH, Cavalli-Sforza LL (1982) Man 17:418–430.
23. Haynes CV, Jr, Donahue DJ, Jull AJT, Zabel TH (1984) Archaeol East N Am 12:184–191.
24. Reimer PJ, Baillie MGL, Bard E, Bayliss A, Beck JW, Bertrand CJH, Blackwell PG, Buck CE, Burr GS, Cutler KB, et al. (2004) Radiocarbon 46:1029–1058.
25. Ammerman AJ, Cavalli-Sforza LL (1973) in The Explanation of Culture Change, ed Renfrew C (Duckworth, London).
26. Buchanan B, Collard M (2007) J Anthropol Archaeol 26:366–393.
27. Barton CM, Schmich S, James SR (2004) in The Settlement of the American Continents, eds Barton CM, Clark GA, Yesner DA, Pearson GA (Univ of Arizona Press, Tucson, AZ), pp 138–161.
28. Macaulay V, Hill C, Achilli A, Rengo C, Clarke D, Meehan W, Blackburn J, Semino O, Scozzari R, Cruciani F, et al. (2005) Science 308:1034–1036.
29. Fiedel SJ (2000) J Archaeol Res 8:39–103.
30. Fiedel SJ (2004) in The Settlement of the American Continents: A Multidisciplinary Approach to Human Biogeography, eds Barton CM, Clark GA, Yesner DA, Pearson GA (Univ of Utah Press, Salt Lake City), pp 79–84.
31. Kelly RL, Todd LC (1988) Am Antiquity 53:231–244.
32. Haskell JP, Ritchie ME, Olff HI (2002) Nature 418:527–530.
33. Hazelwood L, Steele J (2004) J Archaeol Sci 31:669–679.
34. Santos MC, Raposo EP, Viswanathan GM, Luz MGE (2004) Europhys Lett 67:734–740.
35. Raposo EP, Buldyrev SV, da Luz MGE, Santos MC, Stanley HE, Viswanathan GM (2003) Phys Rev Lett 91:240601.
36. Surovell T, Waguespack N, Brantingham PJ (2005) Proc Natl Acad Sci USA 102:6231–6236.
37. Steele J, Adams J, Sluckin T (1998) World Archaeol 30:286–305.
38. Meltzer DJ (1988) J World Prehist 2:1–52.
39. Campos D, Mendez V, Fort J (2004) Phys Rev E 69:031115.
40. Campos D, Fort J, Mendez V (2006) Theor Popul Biol 69:88–93.
41. Bertuzzo E, Maritan A, Gatto M, Rodriguez-Iturbe I, Rinaldo A (2007) Water Resour Res 43:W04419.
42. Hamilton MJ, Milne BT, Walker RS, Brown JH (2007) Proc Natl Acad Sci USA 104:4765–4769.
43. Brown CT, Liebovitch LS, Glendon R (2007) Hum Ecol 35:129–138.
44. Hamilton MJ, Milne BT, Walker RS, Burger O, Brown JH (2007) Proc R Soc London Ser B 274:2195–2202.
45. Frison GC (2000) Curr Res Pleistocene 17:28–29.
46. Ray JH, Lopinot NH, Hajic ER, Mandel RD (1998) Plains Anthropol 43:73–81.
47. MacDonald GF (1968) Debert: A Palaeo-Indian Site in Central Nova Scotia (National Museums of Canada, Ottawa).
48. Levine MA (1990) Archaeol East N Am 18:33–63.
49. Spiess A, Mosher J (1994) Maine Archaeol Soc Bull 34:25–54.
50. Spiess A, Mosher J, Callum K, Sidell NA (1995) Maine Archaeol Soc Bull 35:13–52.
51. Laub RS (2003) in The Hiscock Site: Late Pleistocene and Holocene Paleoecology and Archaeology of Western New York State, ed Laub RS (Bull Buffalo Soc Nat Sci, Buffalo, NY), Vol 37, pp 18–38.
52. Gramly MR (1982) The Vail Site: A Paleo-Indian Encampment in Maine (Bull Buffalo Soc Nat Sci, Buffalo, NY), Vol 30.
{"url":"https://p.pdfkul.com/spatial-gradients-in-clovis-age-radiocarbon-dates-semantic-scholar_5ab3197d1723ddab821ce1f6.html","timestamp":"2024-11-14T21:44:04Z","content_type":"text/html","content_length":"92887","record_id":"<urn:uuid:3c19b869-3af1-4cdb-83a6-5c5b382cb365>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00450.warc.gz"}
Loop Shape Goal

Shape open-loop response of feedback loops when using Control System Tuner.

Loop Shape Goal specifies a target gain profile (gain as a function of frequency) of an open-loop response. Loop Shape Goal constrains the open-loop, point-to-point response (L) at a specified location in your control system.

When you tune a control system, the target open-loop gain profile is converted into constraints on the inverse sensitivity function inv(S) = I + L and the complementary sensitivity function T = I – S. These constraints are illustrated for a representative tuned system in the following figure. Where L is much greater than 1, a minimum gain constraint on inv(S) (green shaded region) is equivalent to a minimum gain constraint on L. Similarly, where L is much smaller than 1, a maximum gain constraint on T (red shaded region) is equivalent to a maximum gain constraint on L. The gap between these two constraints is twice the crossover tolerance, which specifies the frequency band where the loop gain can cross 0 dB.

For multi-input, multi-output (MIMO) control systems, values in the gain profile greater than 1 are interpreted as minimum performance requirements. Such values are lower bounds on the smallest singular value of the open-loop response. Gain profile values less than 1 are interpreted as minimum roll-off requirements, which are upper bounds on the largest singular value of the open-loop response. For more information about singular values, see sigma.

Use Loop Shape Goal when the loop shape near crossover is simple or well understood (such as integral action). To specify only high-gain or low-gain constraints in certain frequency bands, use Minimum Loop Gain Goal or Maximum Loop Gain Goal. When you do so, the software determines the best loop shape near crossover.

In the Tuning tab of Control System Tuner, select New Goal > Target shape for open-loop response to create a Loop Shape Goal.

Command-Line Equivalent

When tuning control systems at the command line, use TuningGoal.LoopShape to specify a loop-shape goal.

Open-Loop Response Selection

Use this section of the dialog box to specify the signal locations at which to compute the open-loop gain. You can also specify additional loop-opening locations for evaluating the tuning goal.

• Shape open-loop response at the following locations. Select one or more signal locations in your model at which to compute and constrain the open-loop gain. To constrain a SISO response, select a single-valued location. For example, to constrain the open-loop gain at a location named 'y', click Add signal to list and select 'y'. To constrain a MIMO response, select multiple signals or a vector-valued signal.

• Compute response with the following loops open. Select one or more signal locations in your model at which to open a feedback loop for the purpose of evaluating this tuning goal. The tuning goal is evaluated against the open-loop configuration created by opening feedback loops at the locations you identify. For example, to evaluate the tuning goal with an opening at a location named 'x', click Add signal to list and select 'x'.

To highlight any selected signal in the Simulink® model, to remove a signal from the input or output list, or to reorder selected signals, use the corresponding buttons next to the signal list. For more information on how to specify signal locations for a tuning goal, see Specify Goals for Interactive Tuning.

Desired Loop Shape

Use this section of the dialog box to specify the target loop shape.
• Pure integrator wc/s. Check to specify a pure integrator and crossover frequency for the target loop shape. For example, to specify an integral gain profile with crossover frequency 10 rad/s, enter 10 in the Crossover frequency wc text box.

• Other gain profile. Check to specify the target loop shape as a function of frequency. Enter a SISO numeric LTI model whose magnitude represents the desired gain profile. For example, you can specify a smooth transfer function (tf, zpk, or ss model). Alternatively, you can sketch a piecewise target loop shape using an frd model. When you do so, the software automatically maps the profile to a smooth transfer function that approximates the desired loop shape. For example, to specify a target loop shape of 100 (40 dB) below 0.1 rad/s, rolling off at a rate of –20 dB/decade at higher frequencies, enter frd([100 100 10],[0 1e-1 1]). If you are tuning in discrete time, you can specify the loop shape as a discrete-time model with the same sample time that you are using for tuning. If you specify the loop shape in continuous time, the tuning software discretizes it. Specifying the loop shape in discrete time gives you more control over the loop shape near the Nyquist frequency.

Use this section of the dialog box to specify additional characteristics of the loop shape goal.

• Enforce loop shape within. Specify the tolerance in the location of the crossover frequency, in decades. For example, to allow gain crossovers within half a decade on either side of the target crossover frequency, enter 0.5. Increase the crossover tolerance to increase the ability of the tuning algorithm to enforce the target loop shape for all loops in a MIMO control system.

• Enforce goal in frequency range. Limit the enforcement of the tuning goal to a particular frequency band. Specify the frequency band as a row vector of the form [min,max], expressed in frequency units of your model. For example, to create a tuning goal that applies only between 1 and 100 rad/s, enter [1,100]. By default, the tuning goal applies at all frequencies for continuous time, and up to the Nyquist frequency for discrete time.

• Stabilize closed loop system. By default, the tuning goal imposes a stability requirement on the closed-loop transfer function from the specified inputs to outputs, in addition to the gain constraint. If stability is not required or cannot be achieved, select No to remove the stability requirement. For example, if the gain constraint applies to an unstable open-loop transfer function, select No.

• Equalize loop interactions. For multi-loop or MIMO loop gain constraints, the feedback channels are automatically rescaled to equalize the off-diagonal (loop interaction) terms in the open-loop transfer function. Select Off to disable such scaling and shape the unscaled open-loop response.

• Apply goal to. Use this option when tuning multiple models at once, such as an array of models obtained by linearizing a Simulink model at different operating points or block-parameter values. By default, active tuning goals are enforced for all models. To enforce a tuning requirement for a subset of models in an array, select Only Models. Then, enter the array indices of the models for which the goal is enforced. For example, suppose you want to apply the tuning goal to the second, third, and fourth models in a model array. To restrict enforcement of the requirement, enter 2:4 in the Only Models text box.
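For reference, a minimal command-line sketch of such a goal; the location name 'y', the frd profile, and the option values are illustrative choices taken from the examples above, not defaults:

    % Target loop shape: 40 dB below 0.1 rad/s, then -20 dB/decade roll-off
    LS = frd([100 100 10],[0 1e-1 1]);
    Req = TuningGoal.LoopShape('y',LS);  % shape the response measured at 'y'
    Req.CrossTol = 0.5;                  % allow crossover within half a decade
    Req.Focus = [1 100];                 % enforce only between 1 and 100 rad/s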
For more information about tuning for multiple models, see Robust Tuning Approaches (Robust Control Toolbox).

Evaluating Tuning Goals

When you tune a control system, the software converts each tuning goal into a normalized scalar value f(x). Here, x is the vector of free (tunable) parameters in the control system. The software then adjusts the parameter values to minimize f(x) or to drive f(x) below 1 if the tuning goal is a hard constraint.

For Loop Shape Goal, f(x) is given by:

$f(x) = \left\| \begin{array}{c} W_S\, S \\ W_T\, T \end{array} \right\|_{\infty}.$

S = D^–1[I + L(s,x)]^–1 D is the scaled sensitivity function. L(s,x) is the open-loop response being shaped. D is an automatically computed loop scaling factor. (If Equalize loop interactions is set to Off, then D = I.) T = I – S is the complementary sensitivity function.

W_S and W_T are frequency weighting functions derived from the specified loop shape. The gains of these functions roughly match your specified loop shape and its inverse, respectively, for values ranging from –20 dB to 60 dB. For numerical reasons, the weighting functions level off outside this range, unless the specified gain profile changes slope outside this range. Because poles of W_S or W_T close to s = 0 or s = Inf might lead to poor numeric conditioning for tuning, it is not recommended to specify loop shapes with very low-frequency or very high-frequency dynamics. For more information about regularization and its effects, see Visualize Tuning Goals.

Implicit Constraints

This tuning goal imposes an implicit stability constraint on the closed-loop sensitivity function measured at the specified location, evaluated with loops opened at the specified loop-opening locations. The dynamics affected by this implicit constraint are the stabilized dynamics for this tuning goal. The Minimum decay rate and Maximum natural frequency tuning options control the lower and upper bounds on these implicitly constrained dynamics. If the optimization fails to meet the default bounds, or if the default bounds conflict with other requirements, on the Tuning tab, use Tuning Options to change the defaults.
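A short usage sketch at the command line, under the assumption that CL0 stands for some closed-loop model with tunable blocks (CL0 is a placeholder name, not part of this page):

    [CL,fSoft] = systune(CL0,Req);   % tunes the parameters; fSoft reports f(x)
    viewGoal(Req,CL)                 % compare the tuned response to the goal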
{"url":"https://la.mathworks.com/help/slcontrol/ug/loop-shape-goal.html","timestamp":"2024-11-14T02:42:00Z","content_type":"text/html","content_length":"83178","record_id":"<urn:uuid:3685cc2d-4df4-4ac1-a8b7-ace0e221ba7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00044.warc.gz"}
NPC Pill #5: (P)izza =?= (NP)izza

Recently A. Amarilli (a3nm) posted a question on cs.stackexchange.com about the computational complexity of a Test Round problem of the Google France #Hash Code 2015: the “Pizza Regina” problem (March 27th, 2015):

Definition [Pizza Regina problem]
Input: A grid $M$ with some marked squares, a threshold $T\in \mathbb{N}$, a maximal area $A \in \mathbb{N}$.
Output: The largest possible total area of a set of disjoint rectangles with integer coordinates in $M$ such that each rectangle includes at least $T$ marked squares and each rectangle has area at most $A$.

The problem can be converted to a decision problem by adding a parameter $k \in \mathbb{N}$ and asking:

Question: Does there exist a set of disjoint rectangles satisfying the conditions (each rectangle has integer coordinates in $M$, includes at least $T$ marked squares and has area at most $A$) whose total area is at least $k$ squares?

The problem is clearly in $\mathsf{NP}$, and after struggling a little bit I found that it is $\mathsf{NP}$-hard (so the Pizza Regina problem is $\mathsf{NP}$-complete). This is a sketch of a reduction from MONOTONE CUBIC PLANAR 1-3 SAT:

Definition [1-3 SAT problem]
Input: A 3-CNF formula $\varphi = C_1 \land C_2 \land … \land C_m$, in which every clause $C_j$ contains exactly three literals: $C_j = (\ell_{j,1} \lor \ell_{j,2} \lor \ell_{j,3})$.
Question: Does there exist a satisfying assignment for $\varphi$ such that each clause $C_j$ contains exactly one true literal?

The problem remains NP-complete even if all literals in the clauses are positive (MONOTONE), if the graph connecting clauses with variables is planar (PLANAR), and if every variable is contained in exactly 3 clauses (CUBIC) (C. Moore and J. M. Robson, Hard tiling problems with simple tiles, Discrete Comput. Geom. 26 (2001), 573-590).

We use $T=3, A=6$, and in the figures ham is represented with blue boxes (transgenic ham?) and pizza with orange boxes. The idea is to use tracks of ham that carry positive or negative signals; a track is made with an alternation of one and two pieces of ham placed far enough apart that they can be covered exactly by one slice of pizza of area $A$. The segments of the track are marked alternately with $+$ or $-$; the track carries a positive signal if slices are cut on the positive segments.

Each variable $x_i$, which is connected to exactly 3 SAT clauses, is represented by three adjacent endpoints of three ham tracks (positive segments), in such a way that there are 2 distinct ways to cut it: one will “generate” a positive signal on all 3 tracks (it represents the $x_i = TRUE$ assignment), the other a negative signal ($x_i = FALSE$). Notice that we can also generate mixed positive and negative signals, but in that case *at least one ham remains uncovered*.

Each clause $C_j$ of the 1-3 SAT formula with 3 literals $L_{j,1}, L_{j,2}, L_{j,3}$ is simply represented by a single ham with three incoming positive segments of three distinct ham tracks; by construction *only one of the three tracks* carrying a positive signal can “cover” the ham-clause.

Finally we can build shift and turn gadgets to carry the signals according to the underlying planar graph and adjust the endpoints.

Suppose that the resulting graph contains $H$ hams. By construction every slice of pizza must contain exactly 3 hams, and in all cases every slice can be enlarged up to area $A$.
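As a concrete aside on the problem we reduce from, here is a tiny brute-force checker for the 1-in-3 condition (exponential and purely illustrative, of course):

    from itertools import product

    def one_in_three_sat(clauses, n):
        # literal +i means x_i is true, -i means x_i is false; variables 1..n
        for bits in product([False, True], repeat=n):
            if all(sum(bits[abs(l) - 1] == (l > 0) for l in c) == 1
                   for c in clauses):
                return True
        return False

    # monotone instance (x1 or x2 or x3): any single TRUE variable works
    print(one_in_three_sat([(1, 2, 3)], 3))  # True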
If the original 1-3 SAT formula is satisfiable, then by construction we can cut $H/3$ pieces of pizza (with total area $A H / 3$) and no ham remains uncovered. In the opposite direction, if we can cut $H/3$ pieces of pizza (with total area $A H / 3$), then no ham remains uncovered, and the signals on the variable gadgets and on the clauses are consistent: the ham on each clause is covered by exactly one positive slice coming from a positive variable, and every variable generates 3 positive signals or 3 negative signals (no mixed signals); so the cuts induce a valid 1-3 SAT assignment.

Conclusion: … so unless $\mathsf{(P)izza} = \mathsf{(NP)izza}$, cutting a pizza can be really hard.

I would like to thank Antoine for posting the funny question and for spending a bit of time checking my proof.

One thought on “NPC Pill #5: (P)izza =?= (NP)izza”

1. A few questions:
1) In the second shift signal, you no longer have the hams distanced far enough so that an area covers either only positive or only negative cells. So if an area contains both positive and negative cells, does it carry a positive or negative signal? Or, if what matters is just the polarity of the cell exactly near the cut, why did you distance the hams so much when explaining the tracks?
2) To get a “pizza grid” from the 1-in-3 SAT graph, once the nodes and edges are transformed, is the area in between filled with no-ham cells?
3) How would you prove the construction can be done in polynomial time?
Anyway, cool proof! It's the only hashcode problem for which I found a proof of NP-completeness.
{"url":"https://www.nearly42.org/npc-pills/npc-pill-005/","timestamp":"2024-11-05T22:13:59Z","content_type":"text/html","content_length":"23948","record_id":"<urn:uuid:5f8bf81f-be85-4ddd-903e-d9374ffa610b>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00818.warc.gz"}
How I put a √ in Python? | Sololearn: Learn to code for FREE!

You can use math.sqrt(x). Here you need to import math. Or another way: √x = x^(1/2), so in Python: x**(1/2)

Do you mean how do you calculate it, or how do you add the symbol in an output (like a print statement)?

It is possible to work with complex numbers in Python without a library. They are supported intrinsically.

    x = (-9)**(1/2)
    print(x)

Output: (1.8369701987210297e-16+3j)

The real portion has a small floating point error, but that is forgivable.

Edit: changed example to show result for -9 and corrected output.

Oops, my results were the same as yours for -25. My eyes were tired, and inadvertently I copied from two different screen shots (one used -9, the other -25).

The engineering world uses j because i can be confused with the symbol for electric current.

Yeah, of course x**(1/2). I will change that. Thanks!

I mean how I can calculate it.

That last letter should be j, since Python decided to be weird and not use i like the rest of the world. Also, interestingly, I got a different small error in the real portion than you did when I pasted your code into Pydroid 3, since I couldn't use the Sololearn playground while reading the discussion.

    x = (-25)**(1/2)
    print(x)  # (3.061616997868383e-16+5j)

y**(1/2) -- use this in Python for √ of a variable y

Use the sqrt() function from the math module.

To use the radical sign (√) in Python, you can use the math module. Here's an example of how to calculate the square root of a number:

    import math
    number = 16
    square_root = math.sqrt(number)
    print("The square root of", number, "is", square_root)

When you run this code, it will output: "The square root of 16 is 4.0"
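A further standard-library option worth adding to the answers above: the cmath module gives the complex square root directly, without the tiny floating-point error in the real part seen above:

    import cmath
    print(cmath.sqrt(-9))  # 3j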
{"url":"https://www.sololearn.com/es/Discuss/3250478/how-i-put-a-in-python-","timestamp":"2024-11-07T20:33:02Z","content_type":"text/html","content_length":"946992","record_id":"<urn:uuid:00a3f4f9-35b5-4c69-b9d3-0239eb3e76ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00870.warc.gz"}
Generalized Westergaard stress functions as fundamental solutions

A particular implementation of the hybrid boundary element method is presented for the two-dimensional analysis of potential and elasticity problems, which, although general in concept, is suited for fracture mechanics applications. Generalized Westergaard stress functions, as proposed by Tada, Ernst and Paris in 1993, are used as the problem's fundamental solution. The proposed formulation leads to displacement-based concepts that resemble those presented by Crouch and Starfield, although in a variational framework that leads to matrix equations with clear mechanical meanings. Problems of general topology, such as in the case of unbounded and multiply-connected domains, may be modeled. The formulation, which is directly applicable to notches and generally curved, internal or external cracks, is specially suited for the description of the stress field in the vicinity of crack tips and is an easy means of evaluating stress intensity factors and of checking some basic concepts laid down by Rice in 1968. The paper focuses on the mathematical fundamentals of the formulation. One validating numerical example is presented.
{"url":"https://cris.continental.edu.pe/es/publications/generalized-westergaard-stress-functions-as-fundamental-solutions","timestamp":"2024-11-06T14:29:40Z","content_type":"text/html","content_length":"51527","record_id":"<urn:uuid:437fab23-dc20-469a-99ed-860c1aa24a5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00764.warc.gz"}
Robustness of SACEMS Based on Sharpe Ratio

Subscribers have asked whether risk-adjusted returns might work better than raw returns for ranking Simple Asset Class ETF Momentum Strategy (SACEMS) assets. In fact, “Alternative Momentum Metrics for SACEMS?” supports belief that Sharpe ratio beats raw returns. Is this finding strong enough to justify changing the strategy, which each month selects the best performers over a specified lookback interval from among the following eight asset class exchange-traded funds (ETFs), plus cash:

PowerShares DB Commodity Index Tracking (DBC)
iShares MSCI Emerging Markets Index (EEM)
iShares MSCI EAFE Index (EFA)
SPDR Gold Shares (GLD)
iShares Russell 2000 Index (IWM)
SPDR S&P 500 (SPY)
iShares Barclays 20+ Year Treasury Bond (TLT)
Vanguard REIT ETF (VNQ)
3-month Treasury bills (Cash)

To investigate, we update the basic comparison and conduct three robustness tests:

1. Does Sharpe ratio beat raw returns consistently across Top 1, equally weighted (EW) Top 2, EW Top 3 and EW Top 4 portfolios, and the 50%-50% SACEMS EW Top 3-Simple Asset Class ETF Value Strategy (SACEVS) Best Value portfolio?
2. Does Sharpe ratio beat raw returns consistently across different lookback intervals?
3. For multi-asset portfolios, does weighting by Sharpe ratio rank beat equal weighting? In other words, do future returns behave systematically across ranks?

To calculate Sharpe ratios, we each month for each asset subtract the risk-free rate (Cash yield) from raw monthly total returns to generate monthly total excess returns over a specified lookback interval. We then calculate Sharpe ratio as average monthly excess return divided by standard deviation of monthly excess returns over the lookback interval. We set the Sharpe ratio for Cash at zero (though it is actually zero divided by zero).

Using monthly dividend-adjusted closing prices for asset class proxies and the yield for Cash during February 2006 (when all ETFs are first available) through December 2018, we find that:

We first update the comparison between raw return-ranked and Sharpe ratio-ranked SACEMS EW Top 3 for the baseline 5-month lookback interval. The following table compares gross monthly performance over the available sample period, including maximum drawdowns (MaxDD) based on monthly measurements. Notable points are:

• Sharpe ratio beats raw return based on average monthly return, standard deviation of monthly returns and the ratio of the two (Rough Sharpe Ratio).
• Sharpe ratio does not improve MaxDD.
• Sharpe ratio monthly returns are slightly more correlated with SPY returns than raw return monthly returns.

For longer-term perspective, we look at annualized/annual performance. The next table compares gross annualized returns, or compound annual growth rates (CAGRs), over several intervals and annual returns for the baseline 5-month lookback interval. Notable points are:

• Sharpe ratio beats raw return based on CAGR over the last 10 years and the full sample period, but not over any of the shorter annualized intervals.
• Sharpe ratio beats raw return for six of 12 full calendar years, and loses five years.
• Sharpe ratio wins based on average annual return, standard deviation of annual returns and the ratio of the two (Rough Sharpe Ratio).

For visual perspective, we plot cumulative returns.
The following chart compares, for the baseline 5-month lookback interval, gross cumulative values of $100,000 initial investments in the two versions of SACEMS EW Top 3, and an alternative Top 3 that assigns 50%, 30% and 20% weights to the top three Sharpe ratio winners, respectively (50-30-20 Top 3). Sharpe ratio EW Top 3 is the winner, with separation from raw returns occurring mostly in 2010-2012, as indicated by the preceding table. For understanding of differences, we first compare allocations and then look at performance of individual assets by rank.

The next chart summarizes differences in Top 3 allocations for the baseline 5-month lookback interval over the available sample period. The principal effect of switching from raw return to Sharpe ratio is a shift from EEM (emerging equity markets) to SPY (U.S. large-capitalization stocks).

The next chart summarizes average return by Sharpe ratio ranks, with one standard deviation variability ranges, for the baseline 5-month lookback interval. Notable findings are:

• The top three ranks have the strongest average returns.
• Average returns do not vary systematically across ranks, with rank 3 very high and rank 4 very low. This finding undermines confidence in the precision of Sharpe ratio ranking. It also shows why EW Top 3 outperforms 50-30-20 Top 3.

Next we look at different portfolios. The next two tables compare CAGRs and MaxDDs for SACEMS Top 1, EW Top 2, EW Top 3 and EW Top 4 portfolios and for 50%-50% SACEMS EW Top 3-SACEVS Best Value (SACEVS-SACEMS 50-50) for the baseline 5-month lookback interval over the available sample period. Contest results are mixed and generally close.

Finally, we look at alternative lookback intervals. The next two tables compare CAGRs and MaxDDs for raw return-ranked and Sharpe ratio-ranked SACEMS EW Top 3 across lookback intervals ranging from two to 12 months (Sharpe ratio is undefined for one month). First monthly returns are for March 2007 to accommodate the longest lookback interval of 12 months. This robustness test is important to the extent that there is no truly optimal lookback interval out-of-sample. Notable points are:

• For CAGRs, Sharpe ratio beats raw return for three of 11 lookback intervals, loses seven intervals and loses on average. However, Sharpe ratio wins for the baseline 5-month interval and for the 4-month lookback interval.
• For MaxDDs, Sharpe ratio beats raw return for only two of 11 lookback intervals, loses four intervals and loses on average. However, MaxDD differences are generally small.

The finding for CAGRs indicates that performance based on raw return rankings is materially more robust to choice of lookback interval. In other words, if making an error in selecting the best lookback interval (or if using a combination of lookback intervals), raw return is likely better than Sharpe ratio.

Does it matter if we calculate Sharpe ratio with daily, rather than monthly, data over these lookback intervals? The final pair of tables adds one column to each of the last two tables, showing CAGRs (upper table) and MaxDDs (lower table) for Sharpe ratios calculated across lookback intervals with daily rather than monthly T-bill yield and asset returns, with months approximated as 21 trading days. For example, daily Sharpe ratio for a 4-month lookback interval is average daily return over the past 84 trading days divided by standard deviation of daily returns over this same interval. This approach allows calculation of Sharpe ratios for a 1-month lookback interval.
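For concreteness, a minimal Python sketch of the ranking computation as described above; the input arrays and helper names are assumptions for illustration, not the tracked system:

    import numpy as np

    def trailing_sharpe(excess, lookback):
        # excess: array of excess returns (monthly, or daily with a month
        # approximated as 21 trading days); lookback in the same units
        w = np.asarray(excess)[-lookback:]
        return w.mean() / w.std()

    def rank_assets(excess_by_asset, lookback=5):
        scores = {"Cash": 0.0}  # Sharpe ratio for Cash is set at zero
        for asset, r in excess_by_asset.items():
            scores[asset] = trailing_sharpe(r, lookback)
        return sorted(scores, key=scores.get, reverse=True)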
Results indicate that daily Sharpe ratios (Daily SR) mostly generate lower CAGRs and slightly deeper MaxDDs than monthly Sharpe ratios (SR).

In summary, evidence from multiple tests on available data suggests that Sharpe ratio-ranked SACEMS is not superior enough to raw return-ranked SACEMS to justify changing the baseline strategy. In fact, the final test suggests that the former is less robust to uncertainty in the optimal lookback interval than the latter.

Cautions regarding findings include:

• Sample size is modest (about 30 independent 5-month and 13 independent 12-month lookback intervals).
• As noted, analyses are gross. Accounting for costs of monthly portfolio reformation would reduce returns. However, turnovers are similar for the two competing models.
• As noted, future performance of assets does not vary systematically by past Sharpe ratio rank, undermining belief in the precision of past performance as a predictor of future performance.
• Testing multiple lookback intervals and different ranking metrics introduces data snooping bias, such that the best-performing combinations overstate expectations.
{"url":"https://www.cxoadvisory.com/momentum-investing/robustness-of-sacems-based-on-sharpe-ratio/","timestamp":"2024-11-01T19:20:30Z","content_type":"application/xhtml+xml","content_length":"170540","record_id":"<urn:uuid:e0ee7e79-6925-4c61-8230-f234d15cb0c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00665.warc.gz"}
Mathematical Theology: Future Science of Confidence in Belief
Self-reflexive Global Reframing to Enable Faith-based Governance
1st October 2011 | Draft

Reframing mathematical theology in terms of confidence
Imagining the initiative: reframing conventional labels
Institutional and thematic precedents
Organization of the initiative
Examples of research themes for consideration
Integrative thematic organization
Mathematical theology of experience
Comprehension of ignorance, nonsense and craziness
Implication of research on opinion and

Annex to the proposal for an International Institute of Advanced Studies in Mathematical Theology (2011), which contains an Introduction, commentary on the Potential strategic importance of mathematical theology, and Conclusion. References are provided in a separate document.

Reframing mathematical theology in terms of confidence

The proposal follows from valuable efforts to clarify the nature of mathematical theology, most notably that of Philip J. Davis (A Brief Look at Mathematics and Theology, The Humanistic Mathematics Network Journal Online, 27, 2004), following his earlier influential study in collaboration with Reuben Hersh (The Mathematical Experience, 1981). As noted by Davis, the interface has of course been explored over centuries by a variety of authors from a variety of perspectives and with a variety of convictions. The references provided separately (Bibliography of Relevance to Mathematical Theology) give a sense of this variety, although unfortunately there appears to be no mind map showing the relationships between the preoccupations they represent. Of potential relevance, in subsequent compilations Hersh and colleagues have given a sense of the original and provocative things said about mathematics by mathematicians, philosophers, cognitive scientists, sociologists, and computer scientists (18 Unconventional Essays on the Nature of Mathematics, 2006; Loving and Hating Mathematics: challenging the myths of mathematical life, 2010). Hersh (2006) argues that:

... contrary to fictionalism, mathematical objects do exist -- really! But, contrary to Platonism, their existence is not transcendental, or independent of humanity. It is created by human activity, and is part of human culture.

Davis and Hersh had asked in 1981 (p. 406):

Do we really have to choose between a formalism that is falsified by our everyday experience, and a Platonism that postulates a mythical fairyland where the uncountable and the inaccessible lie waiting to be observed by the mathematician whom God blessed with a good enough intuition? It is reasonable to propose a different task for mathematical philosophy, not to seek indubitable truth, but to give an account of mathematical knowledge as it really is -- fallible, corrigible, tentative, and evolving, as is every other kind of human knowledge. Instead of continuing to look in vain for foundations, or feeling disoriented and illegitimate for lack of foundations, we have tried to look at what mathematics really is, and account for it as a part of human knowledge in general. We have tried to reflect honestly on what we do when we use, teach, invent, or discover mathematics.

Scope: Also of relevance is any significance associated with the various interpretations of "mathematic theology", "mathematical theology", "theology of mathematics" and "theological mathematics".
For example, with respect to the latter, a Knol by Jeff Leer (Theological Mathematics: a Hierarchy, 9 May 2007) asserts that:

Theological mathematics sets aside (insofar as possible) the questions of zero, negative numbers, imaginary numbers, infinite variety, and the like, not as irrelevant to life, but as a distraction from the pure mathematics of the Holy Trinity.

This selective interpretation would appear to exclude features which could be vital to an approach of larger scope.

Sarah Voss (Mathematical Theology, UUWorld, 2003) explains that:

Mathematical theology is a study of the divine that in some way draws on mathematics. It opens our minds (and maybe our hearts) to new possibilities, and in so doing it brings hope. God seems to speak in mathematics in two basic ways. One is through the precision of numerical calculation, logical proof, and all the other blessings associated with mathematics in the "hard" sciences. Science can be thought of as a way of interpreting God's revelation found in nature. The other way is through metaphor. Only in the last decade or so has our society started to acknowledge the existence of mathematical metaphors. I call such metaphors "mathaphors"; when they apply to the spiritual realm, I call them "holy mathaphors." Ideas drawn from mathematics can greatly extend our spiritual worldviews. Such mathematical notions are suggestive, not conclusive. But in those suggestions lie the makings of new ways of interacting with each other, of healing, of understanding God. In a world that is often spiritually fractured and hurting, we can look to mathematical theology for the seeds of new hope.

A description of theological mathematics by W. J. Eckerslyke (WikiInfo, 11 January 2009) indicates:

Theological mathematics comprises that part of mathematics which goes beyond secular mathematics, and asserts the existence of undefinable entities. Much of theology, particularly pure theology, is concerned with discussions of, and the establishment of conclusions about, the ineffable. Nor does such theology recoil from the apparent contradictions that emerge. On the contrary, they only serve to strengthen our conviction that the subject is of infinite depth and significance. We are happy when people say "That's nonsense" because we can respond with "Yes, it's a Mystery". Much mathematics, particularly pure mathematics, is theological in nature, in that it too is concerned with the study of, and predicated on the existence of, entities which are constitutionally ineffable. As every philosopher knows, "exists" is a very slippery word; and as Wittgenstein said, "What we cannot speak of we must pass over in silence." But that does not deter the more intrepid mathematical explorers, who build layers of indescribable structures out of indescribable entities.

Emergent science of confidence and credibility?

The argument here is that the dependence on faith and belief, understood generally, suggests that "theology" might be fruitfully reframed to encompass the range of approaches to fundamental integrative belief, especially where those formulations substitute for the divine -- or are effectively treated as such. There is a need for the study of belief systems -- or systems of confidence -- through which people are called upon to give coherence to their lives. This might be called the "science of confidence", to be contrasted with the "confidence science" effectively developed and exploited for marketing purposes.
More generally, however, money is recognized as a token of confidence vital to a sustainable economy -- a significant focus of belief. There is therefore an important conflation of connotations between the articulation of confidence in "theology" and that in "economics". It might even be said that the crisis of the times lies in the failure to explore the manner in which such forms of belief are entangled.

Mathematics, through its insights into the subtlest patterns of relationships, has traditionally been associated with theology. Mathematical theology continues to explore these matters in terms of their implications, but primarily in celebration of religious understanding. Could it engage with such entanglement, in the light of insights from physics?

From such perspectives "mathematics" and "theology" have a fundamentally complementary concern with both "credibility" and "infinity" (Michael Heller and W. Hugh Woodin (Eds.), Infinity: new research frontiers, 2011). The relationship might perhaps be usefully and unconventionally presented as "mathematics ∞ theology". There is also a sense in which both are especially but distinctly attentive to engaging confidently with the inexplicable and the unexpected -- which have currently acquired considerable strategic importance, as separately discussed (Engaging with the Inexplicable, the Incomprehensible and the Unexpected, 2010). The latter point is highlighted by the very recent declaration of Rick Perry -- the person who may well be elected as the next "most powerful man on the planet":

Right now, America is in crisis. We have been besieged by financial debt, terrorism, and a multitude of natural disasters. As a nation, we must come together and call upon Jesus to guide us through unprecedented struggles, and thank him for the blessings of freedom we so richly enjoy... Some problems are beyond our power to solve.... with praying people asking God's forgiveness, wisdom and provision for our state and nation. There is hope for America. It lies in heaven, and we will find it on our knees. (Rick Perry under fire for planning Christian prayer rally and fast, The Guardian, 5 August 2011)

Confidence in the face of the unknown has been brought to the fore by the strategic recognition of the complementarity between "hearts and minds" in developing processes to elicit conviction in order to enable sustainable change -- the will to change. There is a sense in which this preoccupation is embodied in the seemingly improbably complex relationship between theology and mathematics -- perhaps reminiscent of that of moonshine mathematics. With its focus on belief, frequently symbolized by the heart, theology necessarily offers a range of insights to complement the focus of mathematics on confidence established by the mind. It might even be said that the two are brought to a tragic focus -- a form of singularity -- in the mindset of those effectively engaging with infinity and the unknown as suicide bombers.

Strategic convictions: It should be stressed that the only qualification for the formulation of this initial presentation is past responsibility for the Encyclopedia of World Problems and Human Potential, which referred to aspects of a number of the issues highlighted here -- in an effort to interrelate world problems, global strategies, human development, integrative insights, and human values. This does not imply any special expertise in theology or mathematics.
This deficiency could however be understood as an advantage, given the challenging nature of the interface between them. It could however be argued that the nature of the "existence" attributed to the entities so profiled is primarily a matter of belief -- variously articulated in terms of belief systems, as separately argued (Cultivating Global Strategic Fantasies of Choice, 2010; Globallooning -- Strategic Inflation of Expectations and Inconsequential Drift, 2009). Can a "problem" exist in the absence of belief in a corresponding "value"? And, as argued above, a "strategy" is necessarily driven by dependence on a "belief" and a commitment to it readily described as "religious" -- not infrequently marked by "martyrdom for a cause".

The implication of "mathematics" in any such "theology" is evident in the effort to analyze, organize and represent the relationships between such entities as complex networks -- effectively a global belief system (Simulating a Global Brain -- using networks of international organizations, world problems, strategies, and values, 2001). As with Monsieur Jourdain in Molière's Le Bourgeois Gentilhomme, is this a case of being surprised and delighted to learn that one has been doing "mathematical theology" all one's life without knowing it:

Par ma foi ! il y a plus de quarante ans que je dis de la prose sans que j'en susse rien, et je vous suis le plus obligé du monde de m'avoir appris cela. ("By my faith! For more than forty years I have been speaking prose without knowing anything about it, and I am most obliged to you for having taught me that.")

In managing their beliefs, might that be the case for everyone -- whatever their skill in doing so?

Imagining the initiative: reframing conventional labels

As queried above, what indeed might be the imaginative initiative that a fruitful interaction between mathematicians and theologians would engender? Echoes of Castalia and The Glass Bead Game (1943), as articulated by Nobel laureate Hermann Hesse? Shades of the Foundation Series in the science fiction of Isaac Asimov, or perhaps of other "science fiction"? Again, with respect to any requisite "global reframing" of an "International Institute of Advanced Studies in Mathematical Theology":

"International": Many initiatives have used this descriptor. Given the challenges of a "global" society, the term has lost its relevance for the integrative complexity with which governance is increasingly confronted. The term is valuable in that it exemplifies a formulation of relationships between spaces with which people identify -- also evident in intersectoral, intercultural, interdisciplinary and interfaith dynamics. The question is whether a subtler formulation of these spaces and relationships is possible with the aid of mathematics, especially to enable the emergence of higher and subtler forms of integration and coherence, avoiding entrapment in simplistic unification, irrespective of belief in that possibility. The argument was developed with respect to a specific case (Emergence of a Union of Imaginable Associations engendered by a Union of Intelligible Associations from a Union of International Associations, 2007).

"Institute": Again this corresponds to a well-established pattern -- as noted above with respect to "think tank". Unfortunately the term is not associated with the complexity which could be held to be requisite in responding to the dynamics of global society. Many have indeed experimented with "network", "community", and other such indicators of form. The question is however what form might be considered appropriate to the intersect of mathematics and "theology" (understood as encompassing belief systems of every kind).
Especially intriguing, as suggested by the above case, is the implication of imagination in the credibility of any such form. What new forms can mathematics engender in support of imaginative thinking? Interesting examples are offered by visual renderings of the Mandelbrot set, "exceptional simple" Lie groups, and potentially the Monster Group itself (Sustainability through the Dynamics of Strategic Dilemmas -- in the light of the coherence and visual form of the Mandelbrot set, 2005; Psycho-social Significance of the Mandelbrot Set: a sustainable boundary between chaos and order, 2005; Potential Psychosocial Significance of Monstrous Moonshine: an exceptional form of symmetry as a Rosetta stone for cognitive frameworks, 2007). As shown below, representations of Lie groups in particular are aesthetically reminiscent of the patterns characteristic of religious architecture and design, whilst the "Buddabrot" variant of the Mandelbrot rendering deliberately recalls Buddhist iconography.

[Figures: Mandelbrot set ("Buddabrot" orientation); Lie group, e8 graph of the Gosset 421 polytope (reproduced from Wikipedia)]

The further implication is that the form might not be static, as is conventionally assumed, but might be designed to have an inherent dynamic -- perhaps alternating/transforming between a variety of forms. This might recognize the fundamental role of resonance hybrids -- appropriately, given their central function in all organic structure with which life is associated. The Mandelbrot set emerges from such a dynamic by iteration in the complex plane.

"Advanced": Whilst this term is indicative of an appropriate effort to dissociate the initiative from oversimplistic preoccupations, it necessarily has unfortunate connotations of elitism. This is typically reinforced by efforts to associate "institutes of advanced studies" with "centres of excellence". This offers the implication that excellence is not to be found elsewhere. It also leaves the initiative open to accusation that if it is unable to "deliver" -- or to offer "deliverance" -- then the excellence in question is a sham. This argument has been developed in relation to the metaphorical use of "higher" in education (¿ Higher Education ∞ Meta-education ? Transforming cognitive enabling processes increasingly unfit for purpose, 2011). At the intersection between mathematics and theology, "advanced" would appear to require reframing in terms of emergence of insight of greater maturity -- whatever such terms might imply and however they are to be understood. Given the sense in which any "advance" is especially associated with linear thinking, how is it to be understood with respect to any "higher" dimensionality? Might it even call for a complementary sense of "retreat", recognizing the importance this may have for both spiritual and academic exploration. Also relevant is the sense in which "retreat" may be associated with the "lowly" cognitive implications of "grounding" and embodiment (George Lakoff and Mark Johnson, Philosophy in the Flesh: the embodied mind and its challenge to Western thought, 1999). Going further, this could involve the enabling of a cyclic dynamic "advance ∞ retreat" -- implying a continuing cycle of enantiodromia (Psychosocial Energy from Polarization within a Cyclic Pattern of Enantiodromia, 2007).
As discussed elsewhere (Toward an Enantiomorphic Policy), the cultural historian William Irwin Thompson (From Nation to Emanation: planetary culture and world governance, 1982) has sharpened considerably the ecology-sensitive intuition concerning the psycho-social lessons to be learned from cooperation between co-evolving systems. Thompson stresses the importance of an appropriate understanding of the interaction between opposites by citing E. F. Schumacher (A Guide for the Perplexed, 1977):

The pairs of opposites, of which freedom and order and growth and decay are the most basic, put tension into the world, a tension that sharpens man's sensitivity and increases his self-awareness. No real understanding is possible without awareness of these pairs of opposites which permeate everything man does ... Justice is a denial of mercy, and mercy is a denial of justice. Only a higher force can reconcile these opposites: wisdom. The problem cannot be solved but wisdom can transcend it. Similarly, societies need stability and change, tradition and innovation, public interest and private interest, planning and laissez-faire, order and freedom, growth and decay. Everywhere society's health depends on the simultaneous pursuit of mutually opposed activities or aims. The adoption of a final solution means a kind of death sentence for man's humanity and spells either cruelty or dissolution, generally both. (1978, p. 127)

Such a cyclic dynamic also highlights the time dimension which is implicit, but effectively demeaned, in "advanced" -- despite being central to continuous learning, supposedly characteristic of both mathematics and theology. The argument with respect to "advance ∞ retreat" is rendered succinctly by the oft-cited lines of the poet T. S. Eliot (Little Gidding, 1942):

We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know it for the first time.

"Studies": This implies, unchallenged, a very particular style of cognitive engagement. It effectively delimits the "comfort zone" of academic endeavour -- often to be defended at any cost. It is more typically the spiritual disciplines of meditation that challenge this comfort zone through a degree of emphasis on self-reflexivity (as discussed below). A helpful articulation of a distinct mode of cognitive proprioception is offered by Steven M. Rosen (2004, 2006, 2008), a selection of whose relevant arguments have been summarized elsewhere (Nature of the requisite self-reflexive skill, 2011). Some implications of such reframed, self-reflexive "study" are offered by the argument of Douglas Hofstadter (I Am a Strange Loop, 2007). Its implications for a collective initiative have been partially addressed elsewhere (Sustaining a Community of Strange Loops: comprehension and engagement through aesthetic ring transformation, 2010).

Missing from "study" in any academic context is the unexamined extent to which the subject and methodology acquire the focus and characteristics of a religion requiring uncritical belief -- complete with high priesthoods, rituals and acolytes, and the capacity to offer benediction and condemnation for all time. "Study" is also typically and appropriately challenged by "action" -- possibly to the exclusion of "study" -- as in many current examples of "fire fighting" responses to crises. Hence the exploration of "action research". Again these might be framed as complementaries through the conjunctive device "action ∞ research".
Many religious retreat centres of course emphasize a cyclic balance between concrete action and reflection -- as a key to "grounding". Study and action, in the sense of application, can be further challenged in the light of the "intractable conflict" between "objectivity" and "subjectivity" (treated as synonymous with inaction). This has been explored in an earlier argument explaining the use of "∞" (¡¿ Defining the objective ∞ Refining the subjective ?! Explaining reality ∞ Embodying realization, 2011).

"Mathematical": The reframing required in the case of "mathematics" follows from the extent to which it is restrictively and exclusively defined as what mathematicians do and are expert at. Whilst it may be allowed that others use those insights, it is often inferred that they do so only insofar as they have been so enabled by suitable mathematical instruction. However, without denying the vast repertoire of insights which professional mathematicians explore and articulate, it is the case that others necessarily use "mathematics" to survive -- long illustrated by the skill required in throwing a spear or a boomerang (Reidar Mosvold, Mathematics in Everyday Life, 2005). Especially striking is the extent to which individuals without mathematical instruction engage in complex kinetic manoeuvers in certain sports. It is of course also the case that every species uses "mathematics", most notably as observable in the design of shells. The degree of order in nature is the theme of a massive compilation by Christopher Alexander (The Nature of Order, 2002-2004). The fourth volume approaches religious questions from a scientific rather than mystical direction. In it, Alexander describes deep ties between the nature of matter, human perception of the universe, and the geometries people construct in buildings, cities, and artifacts, suggesting a crucial link between traditional beliefs and recent scientific advances. The question then is how "mathematics" might be fruitfully reframed so as not to preclude the wider range of insights and expertise with which individuals may have an instinctive cognitive engagement -- even in extremely depressed slum areas (as research has made clear). How might these inform their engagement with belief? This question has been partially addressed separately (Navigating Alternative Conceptual Realities: clues to the dynamics of enacting new paradigms through movement, 2002).

"Theology": There is an implicit challenge to any "theologian" as to whether he or she is primarily an apologist for the given belief system within which she or he is "embedded" as a believer. The argument made above is that "theology" merits reframing to encompass any ordered pattern of "belief" and the expectation of "faith" with respect to it -- as is increasingly the requirement by governance of even the most secular kind. The irony is that such ordered patterns of belief, however they are enshrined in secular contexts, effectively elicit behaviours analogous to the traditional response to deity. The head of any institution may readily be accorded the nickname "God" by those who function within it -- a name with which the recipient may well identify quite comfortably. With respect to this reframing, a valuable insight is succinctly offered through a neologism by Alan Nordstrom (On Credology, 12 February 2008):

The study of credology, its central inquiry, investigates the perennial need of our species to establish systems of belief, as distinguished from systems of scientific knowledge....
Beliefs, then, serve our distinctly human need for meaning, and more particularly for authority (What is true?), ultimacy (What is absolute?), purpose (Why is anything?), direction (Where should we go?), guidance (How should we get there?), protection (What will keep us safe?), and connection (How are we related to everything else?).... Thus credology is the study of our speculative attempts to discover meanings beyond what science can reveal, meanings that are vital to our thriving as human beings.

It is however unclear why Nordstrom endeavours to dissociate belief in "science" from his argument regarding belief, given that -- as with the policy proposals of governance -- the theoretical assertions of science, in which many are expected to believe, may at any time be revised. The "study of beliefs" is recognized as one of the oldest anthropological preoccupations, as noted by Benson Saler (Beliefs, Disbeliefs, and Unbeliefs, Anthropological Quarterly, 41, 1, 1968, pp. 29-33) and as implied by the study of Joseph Jastrow (The Psychology of Conviction: a study of beliefs and attitudes, 1918). As suggested by a "credology", the issue is how a system of beliefs invites "conviction" and merits consideration through "theology" -- as more generally understood. With respect to the study of credos as a conventional pattern of beliefs, in the Handbook of Research in the Social Foundations of Education (2009) Steve Tozer indicates:

It has been thought that study of credos provides teachers theoretical tools to apply in practice. Of all the approaches, this one has received the most critique from philosophers of education. Problems mentioned include the logical impossibility of matching belief to action as well as "inherent" conflation of complex educational matters... There is nothing wrong with systematic interrogation of basic beliefs of life and learning, but reliance on systems seem too reductive. (p. 71)

This argument bears reflection in relation to those made strongly, and controversially, in favour of atheism in recent years (Richard Dawkins, The God Delusion, 2006; Christopher Hitchens, God is Not Great: how religion poisons everything, 2009). It is not a question of arguing against this position as some have done (Greg Taylor, The Atheist Delusion: Answering Richard Dawkins, New Dawn, 1 May 2007). People everywhere are variously called upon to have faith in science (as argued by Dawkins), or in the financial system (to avoid "panic"), or in the security of the internet (to enable telecommerce), or in God (as in the US political system, and by the parties to the crisis in the Middle East). As always, people give their primary allegiance to different manifestations of the "divine", according to their understanding of the nexus of coherence it offers to their worldview.

The global system has struggled vainly to achieve allegiance to a global ethic, to global plans, or to global standards (in many domains). The mysterious challenge is the nature of potential collective consensus in a global civilization. Simplistically this may be imagined as "universal agreement", perhaps qualified through musical metaphor allowing for distinct voices ("everyone singing from the same hymn sheet"). Separately it has however been argued that the title of the controversial study by Dawkins is inadequately framed and should be extended beyond "God" to encompass "consensus" in general (The Consensus Delusion: mysterious attractor undermining global civilization as currently imagined, 2011).
That argument emphasized that the weak inter-faith consensus on the nature of "God" is merely an aspect of weak collective consensus in general. More threatening for the coherence of society than "atheism", as a lack of belief in deity, is then lack of any belief at all -- collective unbelief -- considered highly problematic by religions (kafir, apostasy, and the like). Many commentators recognize the marked tendency to disillusionment and alienation.

The question for a reframed "theology" is how to articulate the nature of any complex, integrative "attractor" which it is assumed could fruitfully attract whatever might be understood as "consensus" (Human Values as Strange Attractors, 1993). The elegant complexity of the Monster Group, as discovered by mathematics (mentioned above), is an indication of one extreme challenge to comprehension (Dynamics of Symmetry Group Theorizing: comprehension of psycho-social implication, 2008). Ironically it is astrophysics which has detected the existence of a "Great Attractor" in intergalactic space -- to which ordinary humans would naturally be insensible. For the Abrahamic religions the challenge, expressed mathematically, might be framed in terms of a humanly incomprehensible "enormous theorem" allowing for three distinct "solutions" -- each at the limits of human comprehension for those persuaded by it, but mutually incomprehensible in consequence.

How then are "meta", "union" and "integrative" to be fruitfully understood, as discussed elsewhere (Dynamic Reframing of "Union": implications for the coherence of knowledge, social organization and personal identity, 2007; Criteria for an Adequate Meta-model, 1971)? How can mathematics facilitate thinking on these matters? The argument can be explored in relation to the elusively comprehensible "infinity" cited above. Fruitful insights are to be found in the case made by Gregory Chaitin (Metamaths: the quest for omega, 2005), just as others are to be found in the arguments of Pierre Teilhard de Chardin (The Future of Man, 1950) with respect to an Omega Point, or in the case for a singularity (discussed below).

Generically understood, the "confidence" to be explored by a reframed "theology" may take many forms (Varieties of Confidence Essential to Sustainability: surrogates and tokens obscuring the existential "gold standard", 2009; Exploration of Prefixes of Global Discourse: implications for sustainable confidelity, 2011). Rather than the conventional static implication of "union", consideration could be given to dynamic, interactive and emergent forms (Enacting Transformative Integral Thinking through Playful Elegance, 2010). Such explorations effectively correspond to the arguments of Sallie McFague (Metaphorical Theology: models of God in religious language, 1982).

Institutional and thematic precedents

It is improbable that any institute of advanced mathematical studies would provide for a thread on theology, other than as a historical curiosity. The format of an "institute of advanced studies" has however been emulated by various religions and might in principle provide for a focus on "mathematical theology". Examples might include:

It is however difficult to compare the research quality of such bodies with that of the membership of the selective International Federation of Institutes for Advanced Study. The point is well-argued in a proposal by John T. Noonan Jr. (An Institute for Advanced Catholic Studies, America: the national Catholic weekly, 1 July 2000).
Especially relevant is the extent to which conventional faith-based approaches to "theology", however "excellent", might obscure the disciplined focus on "mathematical theology", as it might be variously understood and explored. Noonan, for example, makes no mention of that dimension.

More difficult to detect are occasions within university faculties of religious studies which have provided a focus on mathematical theology. A notable exception is a paper presented in 1988 by Richard S. Kirby to a Senior Seminar of the Faculty of Theology and Religious Studies (King's College, University of London) under the title Theology of Mathematics: the emerging field of theological investigation (subsequently published as A New Mathematics for a New Era, World Network of Religious Futurists, 2005). This highlights an interesting complementarity between "mathematical theology" and "theology of mathematics". How indeed might mathematics be understood as a belief system -- perhaps to be approached with attitudes characteristic of any religious engagement with the divine, as was the case in centuries past?

Potential thematic guidance is offered at the intersection between religion and science (rather than mathematics and theology specifically), as with the International Society for Science and Religion, the European Society for the Study of Science and Theology (ESSSAT), or Zygon: Journal of Religion and Science. The latter focuses on the questions of meaning and values that challenge individual and social existence today. In the case of the journal Theology and Science of the Center for Theology and the Natural Sciences (CTNS), a special issue was recently devoted to theology and mathematics (Volume 9, Number 1, February 2011), including:

Earlier issues of Theology and Science included:

• Carlos R. Bovell (Two Examples of How the History of Mathematics Can Inform Theology, 8, 1, February 2010)
• Eric C. Steinhart (A Mathematical Model of Divine Infinity, 7, 3, August 2009)
• John Byl (Matter, Mathematics and God, 5, 1, March 2007)
• Sarah Voss (Mathematics and Theology: a stroll through the Garden of Mathaphors, 4, 1, March 2006)
• Bharath Sriraman (The Influence of Platonism on Mathematics Research and Theological Beliefs, 2, 1, April 2004)

Organization of the initiative

Given the systemic insights which are liable to characterize some of those interested, a strong case can be made for a fruitful mix of self-organization and self-reflexivity, as separately discussed (Consciously Self-reflexive Global Initiatives: Renaissance zones, complex adaptive systems, and third order organizations, 2007).

Schismatic tendencies: Especially interesting are the divisive tendencies shared by the faiths and by mathematicians (as with those in other disciplines). Those identified strongly with distinct branches of mathematics are as liable to have conflictual relations as those identified with particular faiths. There is little capacity or inclination to map the conflicts fruitfully, or even to acknowledge them, as separately argued (Epistemological Challenge of Cognitive Body Odour: exploring the underside of dialogue, 2006). How this is framed or expressed is another matter. The question is whether those differences can be embodied in a new kind of relational structure -- notably in the light of insights from mathematics (a minimal illustration is sketched below). To what extent, however, is either theology or mathematics self-reflexive -- as discussed below?
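By way of minimal illustration -- and only as an illustration, with invented factions and relations -- such a relational structure can be given formal precision through the theory of signed graphs and the classic notion of "structural balance" (Heider; Cartwright and Harary): a triangle of relations is balanced when it contains an even number of negative (conflictual) edges.

```python
from itertools import combinations

# A minimal sketch: schismatic relations mapped as a signed graph and
# tested for structural balance. The factions and relations are invented
# for illustration; +1 denotes alliance, -1 denotes conflict.
relations = {
    ("A", "B"): +1,
    ("A", "C"): -1,
    ("B", "C"): -1,   # "the enemy of my friend is my enemy": balanced
    ("A", "D"): -1,
    ("B", "D"): -1,
    ("C", "D"): -1,   # an all-negative triangle is unbalanced
}

def sign(x, y):
    """Relation between two factions, regardless of the order given."""
    return relations.get((x, y)) or relations.get((y, x))

nodes = sorted({n for pair in relations for n in pair})
for tri in combinations(nodes, 3):
    edges = [sign(a, b) for a, b in combinations(tri, 2)]
    if None in edges:
        continue  # skip triangles with unknown relations
    balanced = edges.count(-1) % 2 == 0
    print(tri, "balanced" if balanced else "unbalanced")
```

Unbalanced triangles are precisely the loci of tension which such a map would make discussable; whether either discipline would consent to being so mapped is, of course, part of the point.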
Appropriate "distance": Any initiative, however it is organized, will raise interesting challenges with regard to who can "afford" to be associated with it, given the potential implications for their reputation and prospects elsewhere:

• for the religious: can participation be framed so as to attract the benediction of relevant religious authorities, or will it automatically be framed as inappropriate and to be condemned as heresy, anathema or blasphemy?
• for the academic: would those in the wider academic community view any such association as inherently problematic -- and a "dangerous career move"?

In both cases there is a question of how any "engagement" with the initiative is managed in order to ensure appropriate "distance" -- for those who would prefer a degree of "arms-length" relationship.

Boundaries and primacy: A related issue is that of ensuring appropriate distance from other institutes with an interest in some particular form of "mathematical theology" and potentially concerned to assert that claim as unquestionably primary. This is a question of "intelligent design" -- ensuring appropriate "boundaries" and clarifying the distinction between being "in" or "out", as well as its implication (Dynamically Gated Conceptual Communities: emergent patterns of isolation within knowledge society, 2004). Given the wider territorial implications of "boundary" issues and "gate-keeping", can mathematicians and theologians together develop more interesting ways of framing such boundaries -- especially those of relevance to other intractable conflicts? The possibilities of the Klein bottle, explored by various authors, were highlighted to this end in the above-mentioned discussion. How can any interface be framed with "others" advocating an especially narrow approach to "mathematical theology"?

Dysfunctional dynamics: As a feature of the self-reflexive/self-organizing modality, how might those involved creatively reframe the vexatious dynamics of:

• game-playing: well-known for its potential dysfunctionality in every institutional environment, as well as interpersonal relations (for example, as explored by the International Transactional Analysis Association). It might be understood as permeating the various efforts at inter-faith dialogue. The need to explore this from the perspective of the complexity sciences was highlighted separately, framed as the "irresolutique" -- in contrast to the "problematique" and "resolutique" promoted by the Club of Rome (Imagining the Real Challenge and Realizing the Imaginal Pathway of Sustainable Transformation, 2007).
• blame-gaming: this has been a key feature of inter-faith conflicts, but also highly evident in questions of accountability with regard to the recent financial crisis and its ongoing development. The question is whether blame-gaming can be more fruitfully analyzed, as separately discussed in relation to Knight's move patterns (Monkeying with Global Governance: emergent dynamics of three wise monkeys in a knowledge-based society, 2011).

The current incidence of game-playing and blame-gaming with respect to public confidence in global governance -- well-described as "monkeying" -- makes a powerful case for a mathematical theology capable of naming the games and giving formal precision to the issues and options (one elementary sense of such precision is sketched below).
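The most elementary sense of "giving formal precision" to such games is that of game theory itself. The sketch below is purely illustrative -- the payoff matrix is invented -- but it shows how mutual blame-gaming can be named as a game and its pure-strategy Nash equilibria enumerated by brute force.

```python
from itertools import product

# An illustrative two-party "blame game" with an invented payoff matrix
# (Prisoner's Dilemma in shape). Payoffs are (row player, column player).
strategies = ["cooperate", "blame"]
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "blame"):     (0, 5),
    ("blame",     "cooperate"): (5, 0),
    ("blame",     "blame"):     (1, 1),   # mutual blame: both lose
}

def is_nash(r, c):
    """True if neither player gains by unilaterally switching strategy."""
    best_r = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in strategies)
    best_c = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in strategies)
    return best_r and best_c

for r, c in product(strategies, repeat=2):
    if is_nash(r, c):
        print("pure-strategy equilibrium:", r, "/", c)
# Prints only "blame / blame": the formally "rational" yet mutually
# impoverishing outcome that a richer framing would seek to reframe.
```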
Dialogue possibilities: The challenge of giving form to an initiative between such seemingly distant preoccupations -- theology and mathematics -- can be fruitfully compared to that between the aesthetics of "poetry" and the strategic realities of "policy" (Poetry-making and Policy-making: Arranging a Marriage between Beauty and the Beast, 1993). That document included sections on the problematic issues of any meeting to discuss such a possibility, under headings of potential relevance (mutatis mutandis) to any preliminary mathematical theology encounter:

Given the tendency to overly optimistic initiatives -- as evident in inter-faith dialogue -- cautionary frameworks merit clarification (Evaluating Synthesis Initiatives and their Sustaining Dialogues: possible questions as a guide to criteria of evaluation of any synthesis initiative, 2000; An Inconvenient Truth -- about any inconvenient truth, 2008).

Historical inspiration: Reflection on possibilities can also be stimulated by references to historical settings -- notably royal courts -- in which fruitful cross-fertilization between theology and mathematics was enabled. The possibility is also central to any reflection on the "university" ideal -- as variously endeavoured (cf. University of Earth; University of Earth: meta-organization for post-crisis action, 1980).

Examples of research themes for consideration

The strategic concern here is the research on mathematical theology which might be of some relevance to intractable conflicts. It is perhaps safe to say that none of the research at the intersection between mathematics and theology has, as yet, been of any significance to reframing those situations. Since neither discipline is renowned for enthusiastic "application" of its insights -- however much they are exploited by others -- it might even be asserted that neither has yet evinced any interest in addressing such matters. They appear to share a concern to protect their respective comfort zones, and to develop their research into areas which disrupt the problematic "business as usual" of daily life only through principled appeals. The challenge was well formulated by Mahmoud Abbas at the UN General Assembly (Abbas Rules out 'Business as Usual' Peace Talks With Israel, Bloomberg Business Week, 24 September 2011):

It is neither possible, nor practical, nor acceptable to return to conducting business as usual, as if everything is fine.

The question is what might mathematical theology offer under such circumstances?

What branches of mathematics? As noted above, given its reputation as the discipline most skilled at the exploration and comprehension of relationships of the subtlest kind, where is the analysis of the branches and levels of mathematics that have (or have not) endeavoured to explore intractable faith-based conflicts -- notably those focused on two-dimensional territory? Where is the assessment of the possible insights to be derived from each branch of mathematics? For example, a case has recently been made for an "unexpected kinship" between quantum physics and theology by John Polkinghorne (Quantum Physics and Theology: an unexpected kinship, 2008). It is then appropriate to ask to what extent the challenges of the Middle East and Jerusalem have been informed by the creative insights regarding the two-state quantum system -- associated, ironically, with the so-called Rabi cycle.
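For reference -- and without any claim that conflict dynamics obey quantum mechanics -- the two-state dynamics in question are compactly expressed by the standard Rabi formula. For a two-level system driven with Rabi frequency Ω at detuning Δ from resonance, the probability of transition from one state to the other oscillates perpetually rather than settling:

```latex
% Standard Rabi formula for a driven two-state quantum system
P_{1 \to 2}(t) \;=\; \frac{\Omega^{2}}{\Omega^{2} + \Delta^{2}}\,
\sin^{2}\!\left(\frac{\sqrt{\Omega^{2} + \Delta^{2}}}{2}\, t\right)
```

On resonance (Δ = 0) this reduces to P(t) = sin²(Ωt/2): occupancy cycles endlessly between the two states. The irony of the metaphor is precisely that a "two-state solution", quantum mechanically understood, is not a static partition but a perpetual oscillation.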
If the discoveries of moonshine mathematics, identifying the Monster symmetry group, are upheld by mathematicians as the key to everything -- including topology -- then what is their relevance to intractable conflicts? (Potential Psychosocial Significance of Monstrous Moonshine: an exceptional form of symmetry as a Rosetta stone for cognitive frameworks, 2007).

Sets and the role of number: The theology of all faiths is replete with considerations of sets of precepts, principles, and other manifestations of divine unity. Number theory is of course a fundamental branch of mathematics. Number is fundamental to one of the most highly cited papers in psychology (George A. Miller, The Magical Number Seven, Plus or Minus Two: some limits on our capacity for processing information, Psychological Review, 63, 2, 1956, pp. 81-97). Sets of concepts -- typically of a limited size -- are identified in many disciplines and strategic initiatives. This is exemplified by the set of metaphors used by Charles B. Handy (Gods of Management: the changing work of organizations, 2009). These sets can be explored as an indication of how the human mind finds it convenient to organize reality comprehensibly (Representation, Comprehension and Communication of Sets: the role of number, 1978; Patterns of N-foldness: comparison of integrated multi-set concept schemes as forms of presentation, 1984). This approach gave rise to an effort to distinguish qualitatively the kinds of principles which tended to be evident in sets of a given size (Distinguishing Levels of Declarations of Principles, 1980). Beyond "laundry lists" of precepts, the question is whether these sets can be configured so as to enhance their significance and enable action.

Number symbolism and time: Marie-Louise von Franz (of the C. G. Jung Institute, Zurich) has conducted an extensively documented study into the significance of number for mathematicians, in philosophy, and as symbols of psychological significance, in a deliberate effort to bridge the gap between psychology and physics. As she puts it, her remarks "balance to some extent on the razor's edge between philosophical-mathematical and numerical-symbolical statements" (Number and Time: reflections leading towards a unification of psychology and physics, 1974, pp. 33-34). She deliberately bridges the gap between Western and other concepts of number, which is an aspect of a current debate into the wider interpretations of the concepts of science, space, and time, which have hitherto been supposed to conform conveniently to the Western versions. She notes that Niels Bohr has stressed that an important step had been taken toward realizing the ideal "of tracing the description of natural phenomena back to combinations of pure numbers, which far transcends the boldest dreams of the Pythagoreans" (p. 16). She argues that if we accept Wolfgang Pauli's contention that "certain mathematical structures rest on an archetypal basis, then their isomorphism with certain outer-world phenomena is not so surprising" (p. 19). She sums up her argument as follows:

To sum up: numbers appear to represent both an attribute of matter and the unconscious foundation of our mental processes. For this reason, number forms, according to Jung, that particular element that unites the realm of matter and psyche. It is "real" in a double sense, as an archetypal image and as a qualitative manifestation in the realm of outer-world experience.
Number thereby throws a bridge across the gap between the physically knowable and the imaginary. In this manner it operates as a still largely unexplored mid-point between myth (the psychic) and reality (the physical), at the same time both quantitative and qualitative, representational and irrepresentational. Consequently, it is not only the parallelism of concepts (to which Bohr and Pauli have both drawn attention) which nowadays draws physics and psychology together, but more significantly the psychic dynamics of the concept of number as an archetypal actuality appearing in its "transgressive" aspect in the realm of matter. It preconsciously orders both psychic thought processes and the manifestations of material reality. As the active ordering factor, it represents the essence of what we generally term 'mind'. (pp. 52-53)

She concludes that:

Most probably the archetypes of natural integers form the simplest structural patterns in... (the common unknown confronting both physicist and psychologist)... that manifest themselves on the threshold of perception. (p. 56)

In order to explore further, it is therefore necessary to return... to the individual numbers themselves, and gather together the sum total of thought, both technical and mythological assertions, which they have called forth from humanity. Numbers, furthermore, as archetypal structural constants of the collective unconscious, possess a dynamic, active aspect which is especially important to keep in mind. It is not what we can do with numbers but what they do to our consciousness that is essential. (p. 33)

Von Franz outlines the recommended programme as follows:

When we take into account the individual characteristics of natural numbers, we can actually demonstrate that they produce the same ordering effects in the physical and psychic realms; they therefore appear to constitute the most basic constants of nature expressing unitary psycho-physical reality. Because of this I would conjecture that the task of future mathematicians will be to collect their characteristics and analyze, when possible, every number in its logical relationship to all others. This research should be undertaken in collaboration with physicists, musicians, and psychologists who are conversant with the empirical facts about the structural characteristics of numbers in different mediums. (p. 303)

The relationship of such concerns to the physics of Wolfgang Pauli has been described by Arthur I. Miller (Deciphering the Cosmic Number: the strange friendship of Wolfgang Pauli and Carl Jung, 2009; 137: Jung, Pauli, and the pursuit of a scientific obsession, 2010) -- as discussed separately (Quest for a "Universal Constant" of Globalization? questionable insights for the future from physics).

Reframing differences, distinctions and boundaries: These are of fundamental concern both to theology and mathematics. In the latter case a seminal work has been that of George Spencer-Brown (Laws of Form, 1969). The Wikipedia entry notes its "resonances" in: the Vedic Upanishads (being the foundation of Hinduism and later Buddhism); Taoism (notably as expressed in the Tao Te Ching); Zoroastrianism; Judaism; Confucianism; and Christianity. Of related interest are epistemological differences and styles, as explored by various authors (Systems of Categories Distinguishing Cultural Biases, 1993). The core of Spencer-Brown's calculus is in fact small enough to be stated executably, as sketched below.
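In the sketch that follows, the representation is an assumption made purely for illustration: a "space" is a Python tuple of crosses, each cross itself a tuple holding whatever stands inside it. A cross inverts the value of its interior; a space is marked if any cross within it is marked. Spencer-Brown's two arithmetic initials -- condensation (two adjacent marks condense to one) and cancellation (a mark within a mark vanishes) -- then fall out of a single recursive evaluation.

```python
# A minimal executable sketch of the primary arithmetic of Spencer-Brown's
# Laws of Form. Representation (an assumption of this sketch): a space is
# a tuple of crosses; each cross is a tuple of its interior expressions.
MARKED, UNMARKED = True, False

def value(space) -> bool:
    """A space is MARKED if any cross in it has an UNMARKED interior."""
    return any(not value(interior) for interior in space)

assert value(()) is UNMARKED          # the empty, unmarked space
assert value(((),)) is MARKED         # a single mark
assert value(((), ())) is MARKED      # condensation: () () = ()
assert value((((),),)) is UNMARKED    # cancellation: (()) = void
```

The evaluation is simply iterated NOR -- a reminder that the "first distinction" of Laws of Form is also a logical primitive from which all Boolean operations can be built, a point of some relevance to any reframing of "distinction" itself.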
Game-playing engendered by differences: In mathematics this is of course the focus of extensive work on game theory, notably with its major strategic implications in relation to conflict -- and defining the "rules of engagement". In the case of theology this tends to be framed otherwise as the engagement with "the other". The other may be understood as pertaining to interpersonal encounter, as articulated by Martin Buber (I and Thou, 1923), to the divine "other" as explored by Sallie McFague (Metaphorical Theology: models of God in religious language, 1982), or to the diabolical "other" against whose temptations one is obliged to constantly strive. The notion that "Satan plays games" would be widely accepted. Do these correspond to the Games People Play (1964) articulated by Eric Berne?

Harmony: Theology has long been associated with the explication of "divine harmony", partly in terms of the "music of the spheres", dating notably back to the mystical thought of Pythagoras -- effectively at the origin of science. The articulation of religious insight into harmony is of course characteristic of the principles underlying sacred music. The intersection of such principles with mathematics is evident in the work of Ernest G. McClain (Myth of Invariance: the origins of the Gods, mathematics and music from the Rg Veda to Plato, 1976; The Pythagorean Plato: prelude to the song itself, 1978; Meditations Through the Quran: tonal images in an oral culture, 1981). Other relevant explorations of cognitive implications are those of Dmitri Tymoczko (The Geometry of Musical Chords, Science, 2006; A Geometry of Music, 2011).

Following his initial Notes on the Synthesis of Form (1964), Christopher Alexander (mentioned above) has developed his remarkable work on a pattern language and The Nature of Order (2002-2004), as a basis for his quest for geometry-based harmony (Harmony-Seeking Computations: a science of non-classical dynamics based on the progressive evolution of the larger whole, International Journal for Unconventional Computing (IJUC), 2009). Its implications are discussed separately (Harmony-Comprehension and Wholeness-Engendering: eliciting psychosocial transformational principles from design). Should these extend to the "dynamics of order" -- offering a relationship to musical harmony -- and the manner in which such dynamics enable more meaningful forms of identity (A Singable Earth Charter, EU Constitution or Global Ethic? 2006; All Blacks of Davos vs All Greens of Porto Alegre: reframing global strategic discord through polyphony? 2007)?

Singularity: Mathematics has devoted considerable attention to the principle of singularity and its various manifestations. Insights into a technological singularity now refer to the hypothetical future emergence of greater-than-human intelligence through technological means -- an intellectual event horizon, beyond which the future becomes difficult to understand or predict (Vernor Vinge, The Coming Technological Singularity: how to survive in the post-human era, 1993; Ray Kurzweil, The Singularity is Near: when humans transcend biology, 2005). Theology has also devoted considerable attention to a singularity in terms of eschatological predictions regarding end time scenarios. There is a confluence of significance attributed to these understandings of singularity, variously focused in beliefs regarding 2012 as a metaphysical prediction and as a doomsday prediction.
These may be further associated with the end times battle at Armageddon (Spontaneous Initiation of Armageddon: a heartfelt response to systemic negligence, 2004).

Nature of "order" and integration: Beyond the forms of order, notably identified by Alexander (2002-2004), there is the vital issue of the preferences for different styles of order and the psychosocial consequences, as separately reviewed (Systems of Categories Distinguishing Cultural Biases, 1993), and most notably the work of W. T. Jones (The Romantic Syndrome: toward a new methodology in cultural anthropology and the history of ideas, 1961). The absence of what forms of order is then understood to constitute a "problem"?

Nature of a "problem": Mathematics and theology share a concern with "problems". How is the understanding of a "problem" in mathematics to be compared with the understanding of a "sin" or "hindrance" by theology? Is there scope for giving mathematical formalization to conventional sins, as explored elsewhere (Towards a Logico-mathematical Formalization of "Sin": fundamental memetic organization of faith-based governance strategies, 2004)? In strategic terms, how is either understanding to be related to understanding of a "wicked problem"? This has come to mean a problem in social planning that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. Moreover, because of complex interdependencies, the effort to solve one aspect of a wicked problem may reveal or create other problems. Does this suggest that there is a richer and more fruitful framing of the thousands of interconnected problems perceived by international constituencies and profiled in the above-mentioned Encyclopedia of World Problems and Human Potential -- especially in relation to the associated perceptions of human values and concepts of development? Should most such problems be understood as "wicked" in this sense?

Nature of "questions" and "answers": Both mathematics and theology share this language -- combining a quest for confidence in "unquestionable" beliefs and subsequent dependence on them. It is however theology which most explicitly transcends it through the form of the Zen koan and apophatic discourse. Such discourse offers new possibilities, as suggested separately (Am I Question or Answer? Problem or (re)solution? 2006; Sustaining the Quest for Sustainable Answers, 2003; Questionable Answers, 1982). Is it fruitful to ask whether this "answer economy" mindset precludes more appropriate engagement with the condition of the times -- despite expecting a meaningful answer in those terms? One possible approach is through reframing question-answer in the light of catastrophe theory, and the variety of question types: where, when, which, what, how, who, why (Conformality of 7 WH-questions to 7 Elementary Catastrophes: an exploration of potential psychosocial implications, 2006) -- and the evasion of answers of strategic significance (Question Avoidance, Evasion, Aversion and Phobia: why we are unable to escape from traps, 2006). The seven elementary catastrophes in question are listed below for reference. With the individual understood by theology as a "particle" of God, and in the light of the current focus of fundamental physics on the quest for the "God particle", the question of Leon Lederman (The God Particle: If the Universe is the Answer, What is the Question? 1993) might be provocatively adapted.
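For reference, the "7 Elementary Catastrophes" of the cited exploration are René Thom's classification of the structurally stable singularities of smooth potential functions with at most four control parameters (a, b, c, d) and at most two state variables (x, y). Their standard unfoldings are:

```latex
% Thom's seven elementary catastrophes (standard unfoldings)
\begin{align*}
\text{fold:}               \quad & V = x^{3} + ax \\
\text{cusp:}               \quad & V = x^{4} + ax^{2} + bx \\
\text{swallowtail:}        \quad & V = x^{5} + ax^{3} + bx^{2} + cx \\
\text{butterfly:}          \quad & V = x^{6} + ax^{4} + bx^{3} + cx^{2} + dx \\
\text{hyperbolic umbilic:} \quad & V = x^{3} + y^{3} + axy + bx + cy \\
\text{elliptic umbilic:}   \quad & V = x^{3} - 3xy^{2} + a(x^{2} + y^{2}) + bx + cy \\
\text{parabolic umbilic:}  \quad & V = x^{2}y + y^{4} + ax^{2} + by^{2} + cx + dy
\end{align*}
```

Whether or not the mapping of WH-questions onto these forms proves persuasive, the formal repertoire of discontinuous change -- folds, cusps and their higher analogues -- at least exists, ready-made, for any mathematical theology of crisis.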
Integrative thematic organization

The value of self-reflexivity was noted above with respect to any purportedly "objective" cognitive engagement with the "subject" matter of mathematical theology. Self-reflexivity and self-reference are problematic both for mathematics and theology.

Disciplined criticism: In the case of theology, understood as reflection on an ordered belief system, how is potentially critical reflection on that belief system to be enabled? The difficulty is immediately obvious in the knee-jerk tendency to frame any critical reflection as symptomatic of opposition, possibly dangerously heretical. Few belief systems provide adequately for any process of criticism regarding their own content and organization. This is notably true of science, which claims to thrive on critical thinking but has proven extremely defensive regarding arguments it deems "scientifically" inappropriate. The general case has been developed elsewhere (Guidelines for Critical Dialogue between Worldviews: as exemplified by the need for non-antisemitic dialogue with Israelis? 2006). A degree of self-reflexivity, or the nature of the lack of it, is evident in the "politics of theology" (or "theological politics") as noted below. This is equally true of the "politics of science" (or "scientific politics"). In neither case is there much provision for such "political" processes within the discipline itself.

Simulation: The possibilities of new approaches to thematic organization have been highlighted by the envisaged complex global simulations (Sentient World Simulation (SWS), FuturICT). These may endeavour both to reflect the range of beliefs and to highlight the possibility of emergent "beliefs". This may be as relevant to theology as to mathematics -- both constituting systems of belief. Especially relevant is the probable intention of using such simulations to enable intervention in belief systems. For example, the possibility of "releasing" and "managing" thousands of socially intelligent agents ("bots") into the social networking environment is already the subject of experimentation in support of political agendas, presumably with respect to intractable conflicts. Such agents would "comment" variously to enhance or deprecate particular beliefs (Gerardo Ayala, Intelligent Agents Supporting the Social Construction of Knowledge in a Learning Environment, 2001; Nagapradeep Chinnam, Group Recognition in Social Networking Systems, 2011). Such possibilities raise the question of the nature of a "theoretical theology", or of a "theoretical mathematics", which would provide for, and predict, such emergence. Is the full range of religions, if not beliefs, acceptable subject matter for theology as conventionally practiced? How is "mathematics" -- doing mathematics -- to be simulated? Perhaps of more relevance is the challenge to individual belief -- faced with such a psychoactive ecosystem of evolving beliefs.

Self-reflexivity and theology: There is an irony to the fact that the spiritual disciplines, with which theology may be associated, advocate processes of meditative self-reflection. The question is the extent to which such meditation enables fruitful "reformulation" -- typically threatened and undermined by "loss of faith". The latter is of course experienced with respect to non-religious belief systems and may well be described in terms of "burnout".
The financial crisis has brought many to a condition in which they "no longer have any faith in the future" -- and may well be driven to despair.

The primary locus of self-reflexivity would appear to be formulated as the philosophy of theology or analytic theology (Oliver D. Crisp and Michael C. Rea (Eds.), Analytic Theology: new essays in the philosophy of theology, 2009) -- notably in contrast to theological philosophy. A degree of insight into the self-reflexivity of theology is to be found in the work of Donald Wiebe (The Politics of Religious Studies: the continuing conflict with theology in the academy, 2000). Commenting on the distinctions variously made between comparative and theoretic theology by the anthropologist Friedrich Max Müller (Gifford Lectures, 1888-92), Wiebe notes (pp. 18-19):

...Müller suggests that there exist two kinds of knowledge about religion: on the one hand, an insider's knowledge of a given religious tradition, propelling life from day to day, and on the other, an external knowledge of the physical characteristics and social structures of a particular religion. The second kind would correspond to Comparative Theology, the division of the science concerned with what is empirically available. The first kind, then, would be the equivalent of Theoretic Theology. To elucidate the distinction, Müller writes:

The student of Comparative Theology... can claim no privilege, no exceptional position of any kind, for his own religion, whatever that religion may be. For his purposes all religions are natural or historical. Even the claim of a supernatural character is treated by him as a natural and perfectly intelligible claim, which may be important as a subjective element, but can never be allowed to affect the objective character of any religion.

...Theoretical Theology understood as religious know-how, of course, must be excluded from the Science of Religion because it is steeped in subjectivity. Even a Theoretical Theology that frames its ideal on the basis of an inner, quasi-Hegelian consciousness is not arguably of the Science of Religion. It is only when the Theoretical Theology of the philosophers is derived from knowledge gained by Comparative Theology that it can be called a bona fide aspect of the scientific study of religion.

It is appropriate to note the focus on extant belief systems and not on the process through which new belief systems emerge, and the nature of those which might be predicted to emerge in the future. Given the theme of this argument regarding intractable conflicts, the "defensive" concern of Wiebe in his subtitle is noteworthy: the continuing conflict with theology in the academy. This effectively recognizes the need to enlarge the scope of "theology" to include those beliefs -- and "gods" -- more characteristic of the "academy".

Especially relevant to the question of self-reflexivity is metatheology, particularly if it provides for incorporation of its own processes. John T. Granrose (Normative Theology and Meta-Theology, The Harvard Theological Review, 63, 3, July 1970, pp. 449-451) advocates such an approach as a useful conceptual tool for theologians. It has been framed as a basis for scepticism (Raeburne Seeley Heimbeck, Theology and Meaning: a critique of metatheological scepticism, 1969). Paul Kuk Won Chang addresses the possibility from a Christian perspective as a comparative synthetic theology (Metatheology: an academic core of Christian awakening, renewal, revival, evangelism and mission, 2005).
Andrew C. Rawnsley explores the possibility of a Critical Theory of Religion: a meta-theology? (2007), which he introduces as follows:

What could this possibly mean? Firstly, the designation of the prefix "meta-" indicates the strangeness of the program, since how can something possibly be "meta" to "theology"? In the past, the connection was a "metaphysical" one: certain kinds of philosophical work was done to provide a framework for theological reflection. This has been, predominantly, done as "philosophy of religion". However, such uses of "metaphysics" have been seriously challenged in the last century, not just by theologians themselves, but also by certain significant trends within philosophy itself. Secondly, it is in the character of the research program to indicate ways of interaction between the current state of philosophy of religion, social and critical theory, and social-scientific study of religion, with the discipline traditionally known as "theology". Since the possibility of using traditionally conceived "metaphysics" to anchor such work has received fatal blows from philosophers working from a social-critical theoretical perspective, then it appears that theological reflection requires some sort of framework amenable to critical work without being characterised in the old sense of onto-theology, the inappropriate imposition of metaphysical thinking upon theology.

Andrew B. Newberg considers metatheology as a form of "neurotheology" (Principles of Neurotheology, 2010). He argues, citing E. G. d'Aquili and A. B. Newberg (The Mystical Mind: probing the biology of religious experience, 1999):

A metatheology can be understood as an attempt to evaluate the overall principles underlying any and all religions or ultimate belief systems and their theologies. A metatheology comprises both the general principles describing, and implicitly the rules for constructing, any concrete theological system. In and of itself, a metatheology would not embrace one particular theology, since it consists of rules and descriptions about how any and all specific theologies are structured. (p. 64)

Conflict between systems: Of relevance to self-reflexivity in relation to belief systems is the exploration of Nicholas Rescher (The Strife of Systems: an essay on the grounds and implications of philosophical diversity, 1985). Philosophers have engaged in noble efforts to clarify the context within which all-encompassing theories emerge and decline, especially in the face of duality, as separately discussed (Epistemological Panic in the face of Nonduality, 2010). It is very challenging to engage cognitively with that context and the process, especially given possible commitment to the next emerging theory and the exciting claims made for it. The process has been partially addressed in the debate over the contrasting perspectives of T. S. Kuhn (The Structure of Scientific Revolutions, 1962) and Karl Popper (Conjectures and Refutations: the growth of scientific knowledge, 1963). Rescher (1985) concludes his study of such distinctly unintegrative conflict with the following:

For centuries, most philosophers who have reflected on the matter have been intimidated by the strife of systems. But the time has come to put this behind us -- not the strife, that is, which is ineliminable, but the felt need to somehow end it rather than simply accept it and take it in stride.
To reemphasize the salient point: it would be bizarre to think that philosophy is not of value because philosophical positions are bound to reflect the particular values we hold.

The question is whether "mathematical theology" could give greater significance to "take it in stride" -- as might be implied by the Buddhist insight developed through the enactivism of Francisco Varela (Laying Down a Path in Walking, 1987).

Self-reference in mathematics: In the case of mathematics, issues of self-reference have long been a preoccupation. They are evident in situations where a formula necessarily refers to itself, typically recursively, often characterized by paradoxical implications. The matter has been extensively studied by Douglas Hofstadter (Gödel, Escher, Bach: an Eternal Golden Braid, 1980). There is an extensive literature on the philosophy of mathematics dealing with the assumptions, foundations, and implications of mathematics. Beyond the instances which attract such attention, more intriguing is the extent to which mathematics as a whole can be said to be self-reflexive. This is evident to a degree -- negatively -- in the conclusions of the incompleteness theorems of Kurt Gödel. More generally, however, there is the question of whether and how mathematics is able to frame itself as a whole with respect to which "self-reference" is meaningful. As a form of self-reference, metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century, specifically in order to focus on what was then called the foundational crisis of mathematics. A minimal instance of such self-reference is exhibited below.

A valuable statement highlighting the role of "belief" in mathematics is provided in a private communication from Peter Collins in the light of his own more extensive articulations (A Deeper Significance: resolving the Riemann Hypothesis, Integral World, April 2009; The Problem with Mathematical Proof: lack of an integral dimension, Integral World, June 2011):

Some time ago I reached the firm conclusion that the Riemann Hypothesis actually represents -- in the context of prime numbers -- a statement regarding the simultaneous consistency of both the quantitative and qualitative interpretation of mathematical symbols. As in formal terms Mathematics is based on sole recognition of its quantitative aspect, one key implication of this finding is that the Riemann Hypothesis can neither be proved nor disproved within conventional axioms. Put another way, the important truth to which the Hypothesis pertains is already inherent in mathematical axioms and cannot be derived from their operation. So in the most fundamental terms a pure act of faith is necessarily required regarding the subsequent consistency of all mathematical procedures. When viewed in this light, Mathematics represents therefore a distinctive form of theology.

An interesting approximation to self-reference is to be found in approaches to the classification of mathematics, given that classification is itself a relatively trivial process from a mathematical perspective. Can the "House of Mathematics" be said to be in good order in the light of the manner in which its preoccupations are organized? The concern was framed in a preliminary exploration (Is the House of Mathematics in Order? Are there vital insights from its design, 2000).
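The minimal instance promised above is the "quine": a program whose only output is its own source -- self-reference made executable, in the spirit of Hofstadter's strange loops.

```python
# A minimal Python quine: the two executable lines below print
# themselves verbatim (these comments stand outside the loop).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Whether the "House of Mathematics" can accommodate such strange loops within its own organization is part of the classification question raised above.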
The question is whether there are degrees of order through which greater insight can (and should) be obtained into the relationships between the branches of mathematics -- notably as a means of discovering which might be relevant to intractable conflicts. The exploration was taken further in the light of the 64 main categories of the Mathematics Subject Classification (MSC). This takes the form of a standard nested hierarchical classification characteristic of the conventional library science approach to knowledge organization. It is seemingly not informed to any greater degree by the ordering facilities of its subject matter. The experimental approach taken was to consider the possibility of a periodic organization to the "modes of knowing" which the various mathematical specializations effectively represent (Towards a Periodic Table of Ways of Knowing -- in the light of metaphors of mathematics, 2009). The approach was stimulated by the subtle orderings offered by the periodic table of chemical elements (Denis H. Rouvray et al., The Mathematics of the Periodic Table, 2005).

Emergent order: With both theology and mathematics, understood as exercises of the mind in eliciting ever subtler and more appropriate degrees of order, how might such possibilities be envisaged to encourage their exploration? The point might be emphasized in terms of the "pattern that connects" as argued by Gregory Bateson (Mind and Nature: a necessary unity, 1979):

The pattern which connects is a meta-pattern. It is a pattern of patterns. It is that meta-pattern which defines the vast generalization that, indeed, it is patterns which connect.

And it is from this perspective that he warned in a much-cited phrase: Break the pattern which connects the items of learning and you necessarily destroy all quality.

The contribution of mathematics to this process has been widely acknowledged in the work of Georg Cantor on infinite sets. Cantor's theorem implies the existence of an "infinity of infinities" and transfinite numbers. Whilst his work has long been recognized as of great philosophical interest, of relevance to this argument is that it was originally regarded as so counter-intuitive -- even shocking -- that it encountered resistance from mathematical contemporaries. How is the potential of such resistance to be self-reflexively embodied within mathematical theology? Possible leads to elucidating such a meta-pattern, with the support of mathematics, include:

• classical Chinese coding systems, notably as integrated within the Fibonacci spiral
• periodicity, as noted above
• geometrical and topological configuration
• insights from fractals and symmetry groups

Thematic weaving: The metaphor of weaving is used in a separate document to discuss a variety of ways of organizing themes characteristic of sets of principles and precepts (Interweaving Thematic Threads and Learning Pathways: Noonautics, Magic Carpets and Wizdomes, 2010). The weaving metaphor is especially interesting in the light of the insights derived from the design of carpets by Christopher Alexander -- in parallel with his study of The Nature of Order (2002-2004) (Harmony-Seeking Computations: a science of non-classical dynamics based on the progressive evolution of the larger whole, International Journal for Unconventional Computing (IJUC), 2009).
He derives a set of 15 "transformation principles", which may be tentatively adapted to the psychosocial realm of relevance to this argument (Tentative adaptation of Alexander's 15 transformations to the psychosocial realm, 2010). These could well be configured geometrically in the spirit of his own argument (Geometrical configuration of Alexander's 15 transformations, 2010). The carpet metaphor is also useful in that it highlights the degree to which styles and appreciation of carpets may differ in terms of colour, pattern and weave. It points to differences in the way that the connectivity of the "pattern that connects" may be understood and valued.

With respect to strategic insight, the carpet may be compared to the systemic mapping underlying many initiatives. The metaphor may be taken further through the psychoactive role that any such map may play in organizing a domain of preoccupation and in the engagement with it -- emblematic of the elicited commitment to "the plan", as with a mandala or yantra. It may well have functions associated with those of a "prayer mat", to the point of being considered a "magic carpet", as separately discussed (Magic Carpets as Psychoactive System Diagrams, 2010). In this sense a systems diagram may be understood as the organization of credibility -- or the organization of confidence -- with which faith and belief may be associated. The "science of confidence building" then merits consideration in the light of the transformation principles of Alexander in considering design.

Mathematical theology of experience

As noted above, Davis and Hersh (1981) have given a focus to "mathematical experience", as a prelude to an exploration of mathematical theology by Davis (2004), and a further articulation of the experience by Hersh (2006, 2010). The remarks of Gregory Chaitin are of value (Metamaths: the quest for omega, 2005):

In my opinion, the view that math provides absolute certainty and is static and perfect while physics is tentative and constantly evolving is a false dichotomy. Math is actually not that different from physics. Both are attempts by the human mind to organize, to make sense of, human experience; in the case of physics, experience in the laboratory, in the physical world; and in the case of math, experience in the computer, in the mental mindscape of pure mathematics. (pp. 7-8)

Might analogous distinctions be appropriately made between mathematics and theology? Chaitin continues:

And mathematics is far from static and perfect; it is constantly evolving, constantly changing, constantly morphing itself into new forms. New concepts are constantly transforming math and creating new fields, new viewpoints, new emphasis, and new questions to answer. And mathematicians do in fact utilize unproved new principles suggested by computational experience, just as a physicist would. (p. 8)

Is this not the appropriate manner in which to frame the ecology of beliefs on which a reframed theology might focus? However, in terms of the case for self-reference, to what extent does any theory embody the probability of the emergence of a new theory, rather than implying it is a form of "theory of everything" for eternity? Chaitin continues:

And in discovering and creating new mathematics, mathematicians do base themselves on intuition and inspiration, on unconscious motivations and impulses, and on their aesthetic sense, just like any creative artist would. (p. 8)
This could readily describe the experience of anyone exploring the possibility and credibility of systems of belief. Again he continues:

And mathematicians do not lead logical mechanical "rational" lives. Like any creative artist, they are passionate emotional people who deeply care about their art, they are unconventional eccentrics motivated by mysterious forces, not by money nor by concern for the "practical applications" of their work. (p. 8)

Here he precludes the possibility of multiple styles in the approach to such matters, exemplified by the archetypal contrasts explored by Hermann Hesse (Narcissus and Goldmund, 1930) -- and more systematically by W. T. Jones (The Romantic Syndrome: toward a new methodology in cultural anthropology and the history of ideas, 1961), as summarized separately (Axes of Bias in Inter-Cultural Dialogue, 1993).

Integral awareness: Various authors have endeavoured to articulate, directly or by implication, how spiritual insight and intuition are informed and enabled by mathematics. The focus of Sarah Voss on "mathaphors", citing Georg Cantor's work on infinity, is helpful in this respect (Mathematics and Theology: a stroll through the Garden of Mathaphors, Theology and Science, 2006). How do complex geometrical symbols, like yantras, assist in this process during the course of meditation?

Jennifer Gidley provides a very comprehensive integral hermeneutic analysis of the evolutionary writings of Rudolf Steiner and Ken Wilber in the light of Jean Gebser's structures of consciousness (The Evolution of Consciousness as a Planetary Imperative: an integration of integral views, Integral Review, 2007). The explicitly "pluralistic narrative tapestry" seemingly dissociates, to a significant degree, the current role of mathematics in favour of another mode of discourse more characteristic of theology and of the integrative writers she so usefully summarizes. Gidley notes the work of L. Kuhn and R. Woog (From complexity concepts to creative applications, World Futures, 2007) in undertaking pioneering postformal research, by taking several key concepts from complexity science -- originally formulated as mathematical concepts -- and reshaping them in prose, as a basis for social inquiry, e.g., fractal dimensions become fractal narratives; mathematical phase space becomes phrase space as a literary device related to construct awareness in narrative and discourse.

Gidley cites Kant (1781/1929) to the effect that: Mathematics gives us a shining example of how far, independently of experience, we can progress in a priori knowledge. But she then acknowledges that Steiner transgressed the limits to knowledge set by Kant, claiming that we can discover, through the disciplined development of our philosophical thinking, something equivalent to the laws of mathematics that point beyond the boundary between sensible and supersensible knowledge:

When a [human] reaches the stage of being able to think of other properties of the world independently of sense-perception in the same way as [s]he is able to think mathematically of geometrical forms and arithmetical relations of numbers, then [s]he is fairly on the path to spiritual knowledge. (Steiner, 1904, para. 4)
With respect to this point, Gidley further notes:

Steiner (1904) elaborated his point, noting that the development of non-Euclidean geometry, particularly the contributions of Leibniz and Newton to Infinitesimal Calculus, shifted mathematical reasoning to an important new boundary line, whereby we "find ourselves continually at the moment of the genesis of something sense-perceptible from something no longer sense-perceptible" (para. 9). He makes a clear distinction, however, between the quantitative nature of the mathematical laws of the sense-perceptible, and the qualitative nature of the analogous philosophical laws of the supersensible (Steiner, 1904).

Beauty: It could be considered strange that the seemingly incommensurable theology and mathematics should share a long-recognized preoccupation with beauty and aesthetics. This has been explored from a variety of perspectives (Andreas Christiansen, The Beauty and Spirituality of Mathematics: a review essay, International Journal of Education and the Arts, 2009; Subrahmanyan Chandrasekhar, Truth and Beauty: aesthetics and motivations in science, 1987; Ronald Glasberg, Mathematics and Spiritual Interpretation: a bridge to genuine interdisciplinarity, Zygon, 2003; Elliot Nelson, A Theology of Mathematics: Mathematical Beauty, Until a Seed Dies, May 2008).

Creative insight: Mathematics and theology share a special appreciation of "insight", nourished by imaginative speculation:

• in mathematics this takes the form of much-valued recognition ("in a flash") of a "pattern that connects";
• for theology this insight is typically framed as a "revelation" -- possibly as a grace or gift from a transcendental, supernatural reality.

The emergence of insight is intimately related to the mysterious processes of creativity, especially in mathematics. These processes are most evident and comprehensible through humour. They have been the focus of a valuable review (Matthew M. Hurley, et al., Inside Jokes: using humor to reverse engineer the mind, 2011), concerned with "the epistemic predicament of agents in the world and a class of models of cognition that can successfully deal with that predicament". From such a perspective, a case has been separately made for the role of humour in relation to the argument above (Humour and Play-Fullness: essential integrative processes in governance, religion and transdisciplinarity, 2005).
Comprehension of ignorance, nonsense and craziness

Separately and together, mathematics and theology merit considerable attention in relation to:
• infinity -- as the mysterious focus of a shared preoccupation of which many would readily claim ignorance, and others would variously assert to be "nonsense" □ as the primary characteristic of the divine with which theology is concerned □ as the underlying challenge to mathematics
• ignorance -- of the subtle complexity with which they are variously preoccupied, a condition readily condemned as highly problematic by both theology and mathematics □ ignorance of mathematics on the part of others (including people of faith) □ ignorance of faith on the part of others is a matter of deep concern to the religious, specifically extending to those holding alternative beliefs (including those with a mathematical focus) -- and is clearly a primary trigger for many intractable conflicts
• nonsense -- as a typical qualifier for those preoccupations raising issues for both as to how they might appropriately be rendered "sensible" □ as a characterization of mathematical subtleties beyond normal human ken, or as a descriptor of those using mathematics incompetently and inappropriately □ as a characterization of theological preoccupations by those focused on the mundane, or considered misguided in their understanding.

Given the intractable conflicts triggered by these factors, there is clearly a case for mathematical theology to address:
• the implications for comprehension, learning and education of the nature of integrative and "meta" understanding
• the nature of ignorance, nonsense and unbelief -- if only in terms of lack of credibility
• the challenge of "everything" and "nothingness": □ as an aspiration of mathematics, given the focus on a Theory of Everything and the challenge of a Theory of Nothing (2006) □ as a preoccupation of theology, notably articulated in terms of "emptiness" as an outcome of meditation (Keiji Nishitani, Religion and Nothingness, 1983; James W. Heisig, Philosophers of Nothingness, 2001).

Efforts to render comprehensible the cognitive experience of these issues have been made in each case:
• for mathematicians, the progression, and the interfaces, have been delightfully and insightfully explored in Flatland (1884), Sphereland (1965) or Flatterland (2001)
• for those of faith, the subtleties have characteristically been highlighted through parables, teaching stories (notably those of Nasreddin), and the Zen koan. Value has been attached by theology to "unsaying" and apophatic discourse (Being What You Want: problematic kataphatic identity vs. potential of apophatic identity? 2008).

Unforeseen cognitive challenges: The disturbing implications of Gödel's incompleteness theorems regarding undecidability have now been reinforced by the work of Harvey Friedman (Boolean Relation Theory and Incompleteness, 2010) through identification of entirely new forms of incompleteness. In his summary of such challenges, Richard Elwes (It doesn't add up, New Scientist, 14 August 2010) asks whether "a gaping hole has opened up in the foundations of mathematics". However, with respect to the above argument, perhaps even more challenging is what this may imply for a "gaping hole" in the foundations of philosophical reflection on the development of consciousness and the governability of the planet.
Curiously, as noted by Elwes: With Friedman's work, it seems Gödel's delayed triumph has arrived: the final proof that if there is a universal grammar of numbers in which all facets of their behaviour can be expressed, it lies beyond our ken.... The only way that Friedman's undecidable statements can be tamed, and the integrity of arithmetic restored, is to expand Peano's rule book to include "large cardinals" -- monstrous infinite quantities whose existence can only ever be assumed rather than logically deduced.... We can deny the existence of infinity, a quantity that pervades modern mathematics, or we must resign ourselves to the idea that there are certain things about numbers we are destined never to know.

Such large cardinals, notably understood to be "inaccessible", have yet to be fully admitted into the axioms of mainstream mathematics. Might they have been as readily named "angels" as "cardinals", as speculatively explored (Re-Emergence of the Language of the Birds through Twitter? 2010)?

Strategic implications: Current strategic significance has been given to the "unknown" through the notorious "poem" by Donald Rumsfeld as US Secretary of Defense in the midst of the intervention in Iraq. This has been separately discussed (Unknown Undoing: challenge of incomprehensibility of systemic neglect, 2008). A matter of concern is the extent to which those in authority claim to "know" what is appropriate under conditions where there is disagreement both between authorities and between experts, and where their track records suggest that it might be more fruitful to acknowledge ignorance. But this tendency is also shared between theology and mathematics in their quest for certainty and conviction and their discomfort with uncertainty. The need to believe then has an unfortunate quality of desperation. This is to be contrasted with the much-cited reference to negative capability articulated by the poet John Keats as: "when man is capable of being in uncertainties, Mysteries, doubts without any irritable reaching after fact and reason". It is now described as the resistance to a set of institutional arrangements or a system of knowledge about the world and human experience. It explains the capacity of human beings to reject the totalizing constraints of a closed context, and both to experience phenomena free from any epistemological bounds and to assert their own will and individuality upon their activity.

Craziness and creativity: Both theology and mathematics are considerably challenged by how to encompass what, as disciplines, they consider "crazy" -- but which may be evidence of creative insight for which they have no prepared explanation. In the case of theology, craziness may well be used to characterize the behaviour of a person graced by some form of enlightenment -- perhaps a "holy fool". Various Eastern religions recognize "crazy wisdom", namely unconventional, outrageous, or unexpected behaviour having spiritual implications. In the case of mathematics, the nature of the "requisite" cognitive surprise associated with creativity is well indicated by the much-quoted statement by physicist Niels Bohr in response to Wolfgang Pauli: We are all agreed that your theory is crazy. The question which divides us is whether it is crazy enough to have a chance of being correct. My own feeling is that it is not crazy enough. To that Freeman Dyson added: When a great innovation appears, it will almost certainly be in a muddled, incomplete and confusing form.
To the discoverer himself it will be only half understood; to everyone else, it will be a mystery. For any speculation which does not at first glance look crazy, there is no hope! (Innovation in Physics, Scientific American, 199, 3, September 1958). In response to intractable conflicts, it might be asked how mathematical theology could fruitfully frame the emergence of insights that were "crazy enough". What "systematic" provision might it make to encompass "muddled, incomplete and confusing" forms?

Implication of research on opinion and belief

The argument has proposed the reframing of "theology" to include a wider spectrum of beliefs, enhanced by the insights offered by mathematics (a belief system in its own right). It is appropriate to contrast such a focus on "belief" with that which is central to the disciplines of opinion and attitude research, market research and psephology. These have become absolutely fundamental to the processes of governance and marketing -- even in faith-based cultures. Given the above references to global simulations, it is interesting to note that these have encouraged the formalization of belief (Philippe Smets, The Application of the Matrix Calculus to Belief Functions, 2004; Glenn Shafer, Belief Functions: introduction). Such "belief functions" derive in part from the Dempster-Shafer theory (DST) of evidence, whereby evidence from different sources can be combined to arrive at a degree of belief (represented by a "belief function"). This is reminiscent of the Gaussian copula basic to the calculation of risk in marketing financial derivatives -- which proved to be at the origin of the recent financial crisis (Gaussian Copula: investment risk, 2009).

There is then value in exploring a representation of the relationship between:
• theology, with its focus on deep engagement with a psychoactive belief of long-term existential significance to identity
• opinion and belief research, with its superficial focus on rapidly shifting patterns of public opinion (appropriately compared with meteorology)
• mathematics, with the fundamental formalization it offers to both and the fundamental preoccupation with "infinity" (to be compared with "divinity")

Both theology and mathematics have a concern with the "transcendental", despite a degree of implication in the "mundane" -- which is the preoccupation of opinion research. All have a concern with eliciting a degree of order, although it is opinion research (with the aid of mathematics) which engages more directly with "fuzziness". Mathematics and theology share a concern with the fundamental nature of order itself. Given the argument above for "neurotheology" as a form of metatheology, it is interesting to note that a new frontier in market research is neuromarketing. There is a degree of irony to the fact that, despite arguments by mystics that "God is a verb", it is opinion and belief research which is actually preoccupied with an understanding of "divine" as a verb -- in its continuing effort to divine public opinion. Although "futures research" is now the preferred academic discipline for exploring insights into the future, it is appropriate to recall the active role still played by "divination" (Engaging with the Future with Insights of the Past, 2010). With respect to intractable conflict, opinion research could be said to be currently more influential than either theology or mathematics -- by reflecting attitudes towards it that are capable of influencing governance.
Whilst mathematics and theology are variously complicit in encouraging or enabling such conflict, they have yet to imagine remedial responses to it -- as envisaged here through mathematical theology. NB: See Conclusion in main document; Bibliographical references are provided in a separate document: Bibliography of Relevance to Mathematical Theology
{"url":"https://www.laetusinpraesens.org/docs10s/maththey.php","timestamp":"2024-11-08T10:21:38Z","content_type":"text/html","content_length":"163692","record_id":"<urn:uuid:b21a7a47-2155-43a9-8631-fdd1f000a095>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00630.warc.gz"}
Property proposal/Crossing number

Description: Crossing number for mathematical knots; the crossing number of a knot is the smallest number of crossings of any diagram of the knot. It is a knot invariant.
Represents: crossing number (Q2548661)
Data type: String
Domain: knot (Q1188853), link (Q1760728)
Allowed values: 0|[1-9][0-9]*
Example 1: unknot (Q1188344) → 0
Example 2: trefoil knot (Q168620) → 3
Example 3: figure-eight knot (Q168697) → 4
Example 4: Kinoshita–Terasaka knot (Q94827585) → 11
Source: https://knotinfo.math.indiana.edu/
Planned use: to add these values to entries for knots on Wikidata
Expected completeness: always incomplete (Q21873886)
See also: Dowker-Thistlethwaite notation (P8378)
Single-value constraint: yes
Wikidata project: WikiProject Mathematics (Q8487137)

This is a mathematical invariant for knots. While this is available in external databases, it also belongs here on Wikidata. The Anome (talk) 12:27, 13 July 2022 (UTC)

Update: I've added a domain restriction, to just mathematical knots and links (and knots are of course a subset of links, but I thought I'd be explicit). Rationale: there is another concept of crossing number, en:Crossing number (graph theory), for graphs, but I think it's sufficiently different that we might want to make this a separate property. The Anome (talk) 10:27, 22 July 2022 (UTC)
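Since the allowed-values pattern above is an ordinary regular expression, candidate statements can be sanity-checked before a bot writes them. A minimal sketch in Python; the value strings come from the examples above, while the "bad value" entry is an invented illustration:

import re

# Allowed values for the proposed property: zero, or a positive
# integer with no leading zeros, stored as a string.
ALLOWED = re.compile(r"0|[1-9][0-9]*")

candidates = {
    "unknot": "0",            # crossing number 0
    "trefoil knot": "3",      # crossing number 3
    "figure-eight knot": "4",
    "bad value": "007",       # leading zeros -> rejected
}

for label, value in candidates.items():
    status = "ok" if ALLOWED.fullmatch(value) else "rejected"
    print(f"{label}: {value} -> {status}")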
{"url":"https://m.wikidata.org/wiki/Wikidata:Property_proposal/Crossing_number","timestamp":"2024-11-08T08:31:30Z","content_type":"text/html","content_length":"51652","record_id":"<urn:uuid:5eb90ba3-1fda-4d90-82cf-1285de821984>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00872.warc.gz"}
Empiricism in Math and AI

The senses, although they are necessary for all our actual knowledge, are not sufficient to give us the whole of it, since the senses never give anything but instances… From which it appears that necessary truths, such as we find in pure mathematics, and particularly in arithmetic and geometry, must have principles whose proof does not depend on instances, nor consequently on the testimony of the senses, although without the senses it would never have occurred to us to think of them… - Leibniz: Philosophical Writings

When trying to build AI, eventually you run into this choice between empiricism and rationalism. Where does knowledge come from? What is logic? How do we learn? To me, this is a practical question rather than a philosophical one. How do I build a machine that learns and reasons like a human? To me, accepting rationalism (like Leibniz, above) is a non-starter for general AI because it concedes that human thought is extra-ordinary. And if it's extra-ordinary, then we can't build a machine to simulate it out of ordinary stuff like silicon. And so thought can't be abstract. Thought has to be physical, or at least empirically observable. Thought has to follow from physical materials interacting with the same rules as other physical systems. That assumption lets us pursue the construction of a general AI, within the framework of physics. (A single theory of everything.) To most people, this means thought == brain, and they're fine with that. Thought is just neurons and biochemical reactions, governed by the same electromagnetism etc. that exists outside of the body. But if (I assume) thought is physical, then (I have to assume) everything else is physical. Specifically, mathematics and logic. Again, this is because to build general AI, I have to understand how to implement these things in my machine. They have to be empirically measurable; they're not allowed to be hand-wavy and abstract.

All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason. - Immanuel Kant, Critique of Pure Reason

But how do you define math and logic operationally, in terms of base senses? After all, the point is that "math is true." People believe in math the same way they believe in God: it exists on some higher plane of existence, and it's authoritative. You start with unquestionable axioms and proceed via unquestionable logic. Adding empirical observations is just corrupting the pureness of it. Now, I'm not saying that "math isn't true." There are certainly regularities in the universe: gravity, relativity, electromagnetism. Linear systems exist and are predictable. Arithmetic is useful in predicting outcomes of practical daily situations. The caveat is that discovering and codifying these regularities is carried out by humans, who are messy and error-prone. And that's what I mean by an empirical approach to math: viewing the people as the physical system under study.

A mathematician's work is mostly a tangle of guesswork, analogy, wishful thinking and frustration, and proof, far from being the core of discovery, is more often than not a way of making sure that our minds are not playing tricks. - Gian-Carlo Rota

So I view math as a natural process, involving humans (or constructed robots), which resembles a distributed consensus algorithm. Many people try math and publish proofs.
Only the ones that are accepted by the majority pass the filter to become "accepted math." This lets us carry on a single line of reasoning over the millennia. This is the essence of math: we establish a common language (axioms), we reason about those axioms, and then we build trust in that reasoning via distributed proof-checking.

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work. - John Von Neumann

But how should my "thinking machine" construct axioms or apply proven theorems to the real world? What is the observable mechanism behind this? I think about "metaphor" as the building block for this operation. Specifically, metaphor as described by Lakoff and Johnson, which I've talked about before. In their model, everything starts with basic senses. And as we live, we learn to associate certain experiences with other past experiences. Argument is war, time is money, 3 gold coins is a sheep. That's metaphor. And they stack to build more complicated metaphors. And to me, the reason we developed this faculty was so we can predict the future better. "He sank like a lead weight." You've never seen him sink before, but you've seen a lead weight sink.

A reasonable physical implementation of this in my machine looks like a neural network. Of course, the details are fuzzy, but that's what the whole field of ML is about. How does one look at an image and decide if it looks like a dog or a cat? In other words, how does a machine build abstract metaphor? Getting back to math (and I've made this point before): the whole endeavor can be thought of as building metaphors between real-world scenarios and axiomatic symbols on the page. The symbols ought to change via rules that maintain the metaphor. Then you can use the theorems to make predictions about your real-world system.

Mathematics is the study of analogies between analogies. All science is. Scientists want to show that things that don't look alike are really the same. That is one of their innermost Freudian motivations. In fact, that is what we mean by understanding. - Gian-Carlo Rota

Mathematics compares the most diverse phenomena and discovers the secret analogies that unite them. - Jean Baptiste Joseph Fourier

To be useful, the system ought to relate via metaphor to a wide variety of physical systems. One way to do this is to keep your axioms really simple (like things in a set) so that they apply whenever someone can recognize countable "things" that may or may not be in a collection. Another way is to discover symbols that help you predict something about literally everything (E=mc^2). But again, the application of these axioms in my "thinking machine" depends on some physical neural networks, which have been trained through experience. The metaphors employed during reasoning also depend on experience. I suspect people are not different. Math doesn't happen in a vacuum. No matter how "pure" and "right" you think math is, eventually you need a messy brain to pattern-match real-world systems to axioms, and back again. Except for pure math, I suppose. And I can't entirely disregard it, because even negative numbers and complex numbers were "pure math" at some point.
Paul Lockhart would say they were "extended" via symmetry and then only later reified to concrete domains (like debts and electrical circuits). It's curious that our aesthetic sense of symmetry should have any relation at all to what happens in the world. There's probably a deeper truth here, but I can't put my finger on it. I'm an empiricist. I'll likely live and die an empiricist. Feel free to put "empiricist" on my tombstone. And this has changed how I think about math. The whole thing, to me, is pattern-matching via metaphor which exploits regularities in the universe. Math is only possible because for some reason, these symbols \(-at^2+v_0t\), when shuffled around correctly, behave exactly like a ball falling through the air. It's wild stuff. And when viewed as less-than-perfect art, the shortcomings of math become apparent. Some math is wrong. (Probably not linear algebra, though.) Some math is beautiful (in the eyes of the community) but not useful. (It does not relate via metaphor to any real-world phenomenon which we can make predictions about.) There is not one "right" way to do math. There are many symbols to choose and many ways to prove a theorem. The important thing is that the metaphor is maintained. The symbols must reflect some regularity in the universe. And it has to be beautiful.

Therefore psychologically we must keep all the theories in our heads, and every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. - Richard Feynman

And in the end, the kind of math we create and will accept as beautiful and true is limited by the physical constraints of our wetware. There might be regularities in the universe that are universal but too complicated for us to store in our brains. However, all this requires abandoning the conceit that humans are special. If you are willing to propose that you and I are fundamentally different from the robot I am building out of stones and such (or even the people whose heads we've opened up and looked inside), feel free to ignore everything I've said. - Mitchell
{"url":"http://mitchgordon.me/philosophy/2019/07/18/empiricism-in-math-and-ai.html","timestamp":"2024-11-03T00:04:49Z","content_type":"text/html","content_length":"16601","record_id":"<urn:uuid:4f3197e2-3d69-468d-adc2-bf65d4f6235c>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00306.warc.gz"}
How do you solve #\frac{2x}{x+4}=\frac{5}{x}#?

1 Answer

Cross multiplication and then a quadratic equation

Firstly, cross multiply to remove the denominators,
$\frac{2 x}{x + 4} = \frac{5}{x}$
$2 x \cdot x = 5 \cdot \left(x + 4\right)$
$2 {x}^{2} = 5 x + 20$
Put all terms on one side (easier to keep ${x}^{2}$ positive)
$2 {x}^{2} - 5 x - 20 = 0$
Divide through by any common factors (there aren't any in this case).
Now either use the quadratic formula or try to factorise it yourself. In this example factorisation using integers is impossible, so use the formula with $a = 2$, $b = - 5$, $c = - 20$:
$x = \frac{5 \pm \sqrt{{\left(- 5\right)}^{2} - 4 \cdot 2 \cdot \left(- 20\right)}}{2 \cdot 2} = \frac{5 \pm \sqrt{25 + 160}}{4}$
You should end up with an answer of
$x = \frac{5 \pm \sqrt{185}}{4}$
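As a quick sanity check, the original rational equation can be handed to a computer algebra system. A minimal sketch using Python's sympy library (any CAS would do):

from sympy import symbols, Eq, solve, sqrt

x = symbols('x')

# Solve 2x/(x+4) = 5/x directly; sympy clears the denominators itself.
solutions = solve(Eq(2*x/(x + 4), 5/x), x)
print(solutions)  # [5/4 - sqrt(185)/4, 5/4 + sqrt(185)/4]

# Confirm these are (5 +/- sqrt(185))/4 from the quadratic formula.
print(set(solutions) == {(5 - sqrt(185))/4, (5 + sqrt(185))/4})  # True

Neither root is 0 or -4, so no solution is lost to the excluded values of the original equation.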
{"url":"https://socratic.org/questions/how-do-you-solve-frac-2x-x-4-frac-5-x","timestamp":"2024-11-13T14:37:10Z","content_type":"text/html","content_length":"33653","record_id":"<urn:uuid:bc9e019d-570a-4873-b783-76f015a12061>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00009.warc.gz"}
The Kidneys - Evidence of Design

How Does Waste Removal Work?

Were it not for the two filters located in our backs just above our waist, we would not live very long. Each reddish-brown kidney contains a million nephrons, with 30 miles of filters. About every thirty minutes while we are active, all our blood goes through one or the other of these masterpieces. That means that 7 to 8 liters of our blood goes through them about 20-25 times a day! That's 45 gallons of blood being filtered each day! Each of these 5 ounce organs is about the size of your fist (4 inches long, 2 ½ inches wide, and 1 ½ inches thick). Each is attached to a major artery and vein of the body. Blood flows into the top of the nephron and 95-99% of the fluids flow out the bottom to return to the body. The waste materials are squeezed through the ureter to the bladder to be stored until they are eliminated. Chemical signals from the pituitary gland determine how much of certain substances the kidney should remove. For example, both nicotine and caffeine produce too much of this hormone and the kidneys slow down. Alcohol slows the pituitary and increases the work of the kidneys, but a lot of the water is reabsorbed into the blood by osmosis.

What do the Kidneys Remove?

As the body functions, waste products are produced. For instance, creatinine is produced when muscles contract. Salt and other chemicals must also be kept in balance in the blood stream. We need salt to maintain the fluid in our blood cells and to transmit information in our nerves and muscles, but too much salt can be harmful. Hormones must also be filtered out after they do their job, and excess chemicals such as sugars must be removed. Water level must also be kept in balance. Though 95% of urine is water, the other 5% contains these poisons that, if allowed to build up in our systems, would be fatal. We are able to live with only one kidney and, in fact, some live with partial functioning of one kidney. But if the kidneys fail entirely, that person has only days to live without dialysis, that is, artificial blood filtration.

Scientists Are Baffled!

How all this works is too complicated for an article this size. Guyton's Medical Physiology textbook contains 70 pages on body fluids and the work of the kidneys and bladder. The chemistry involved is staggering. The most complex blood filters designed by engineers cannot begin to duplicate the work of the kidney. It would be folly to try to explain the work of a dialysis machine as the result of a series of amazing accidents. No, intelligent engineers have worked years to develop machines that can somewhat duplicate the work of the kidney. Why is it so difficult then to admit the kidney is designed by someone superior to these engineers? A further thought – if man and animal evolved, it is obvious that the kidney would be required in the beginning, else the life form would die of toxemia within days. This is yet another example of the absurdity of trying to explain complex interdependent required systems of the body as the result of random changes over long periods of time.
{"url":"https://www.evidenceofdesign.com/the-kidneys/","timestamp":"2024-11-02T05:13:58Z","content_type":"text/html","content_length":"125077","record_id":"<urn:uuid:1d71eeb7-102c-4d95-906e-ef990f876e5f>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00885.warc.gz"}
Creative Commons Attribution 3.0 License © 1996 by Alejandro A. Torassa

In this work a new dynamics is developed, which is valid for all observers, and which establishes, among other things, the existence of a new universal force of interaction, called kinetic force, which balances the remaining forces acting on a body. In this new dynamics, the motion of a body is not determined by the forces acting on it; instead, the body itself determines its own motion, since as a result of such motion it exerts over all other bodies the kinetic force which is necessary to keep the system of forces acting on each of them always in equilibrium.

It is known that in classical mechanics Newton's dynamics cannot be formulated for all reference frames, since it does not conserve its form when passing from one reference frame to another. For instance, if we admit that Newton's dynamics is valid for a chosen reference frame, then we cannot admit it to be valid for a reference frame which is accelerated relative to the first one, for the description of the behavior of a body from the accelerated reference frame differs from the description given by Newton's dynamics. Classical mechanics solves this difficulty by separating reference frames into two classes: inertial reference frames, for which Newton's dynamics applies, and non-inertial reference frames, where Newton's dynamics does not apply; but this solution contradicts the principle of general relativity, which states: the laws of physics shall be valid for all reference frames. However, this work puts forward a different solution to the above-mentioned difficulty of classical mechanics, with no need to distinguish among reference frames, and in accordance with the principle of general relativity, starting from Newton's dynamics and the transformations of kinematics and developing a new dynamics which can be formulated for all reference frames, since it conserves its form when passing from one reference frame to another.

The development of the new dynamics will be made in two parts: in the first part, which deals with the classical mechanics of particles, the new dynamics of particles will be developed, starting from Newton's dynamics of particles and the transformations of the kinematics of particles; in the second part, which deals with the classical mechanics of rigid bodies, the new dynamics of rigid bodies will be developed, starting from Newton's dynamics of rigid bodies and the transformations of the kinematics of rigid bodies. In this work only the first part will be formulated. The classical mechanics of particles considers that the only kind of bodies found in the Universe are particles, and assumes that any reference frame is fixed to a particle. Therefore, in the classical mechanics of particles, it can be assumed that reference frames are not rotating.

Reference Frames

If reference frames are not rotating, then each coordinate axis of a reference frame S will remain at a fixed angle to the corresponding coordinate axis of another reference frame S'. Therefore, to simplify calculations it will be assumed that each axis of S is parallel to the corresponding axis of S', as shown in Figure 1.

Figure 1

Transformations of Kinematics

If a reference frame S of axes O(X,Y,Z) determines an event by means of three space coordinates X, Y, Z and one time coordinate T, then another reference frame S' of axes O'(X',Y',Z') determines the same event by means of three space coordinates X', Y', Z' and one time coordinate T'.
A change of coordinates X, Y, Z, T from reference frame S to coordinates X', Y', Z', T' from reference frame S' whose origin O' has coordinates Xo', Yo', Zo' measured from S, can be carried out by means of the following equations:

X' = X − Xo'
Y' = Y − Yo'
Z' = Z − Zo'
T' = T

From these equations, the transformation of velocity and acceleration from reference frame S to reference frame S' may be carried out, and expressed in vector form as follows:

V' = V − Vo'
A' = A − Ao'

where Vo' and Ao' are the velocity and acceleration, respectively, of reference frame S' relative to S.

Newton's Dynamics

Newton's first law: Any particle in a state of rest or of uniform linear motion tends to remain in such a state unless acted upon by an unbalanced external force.

Newton's second law: The sum of all forces acting on a particle A produces an acceleration in the direction of the force, and directly proportional to that force:

S Fa = Ma Aa

where Ma is the inertial mass of particle A.

Newton's third law: If a particle A exerts a force F on a particle B, then particle B exerts on particle A a force -F of the same magnitude but opposite direction.

The transformation of real forces from one reference frame to another is given by

F' = F

The transformation of inertial masses from one reference frame to another is given by

M' = M

Dynamical Behavior of Particles

Let us consider a Universe composed of three particles A, B, and C which follow Newton's dynamics from reference frame S (inertial frame). Therefore, the behavior of such particles will be given (from S) by the equations

S Fa = Ma Aa
S Fb = Mb Ab
S Fc = Mc Ac     (1)

From the equations (1) and by means of the transformations of dynamics and kinematics, it can be shown that the behavior of particles A, B, and C will be determined from a reference frame S' by the equations

S F'a = Ma (A'a − A'o)
S F'b = Mb (A'b − A'o)
S F'c = Mc (A'c − A'o)     (2)

where A'o is the acceleration of reference frame S relative to S', which is equal to -Ao', the opposite of the acceleration Ao' of reference frame S' relative to S. As the equations (2) are the same as the equations (1) only if the acceleration A'o of reference frame S relative to S' is equal to zero, then the behavior of particles A, B, and C cannot be determined from any (accelerated) reference frame by the equations (1). Now, if the equations (2) are added together, it yields

S F'a + S F'b + S F'c = Ma (A'a − A'o) + Mb (A'b − A'o) + Mc (A'c − A'o)     (3)

It follows from Newton's third law that S F'a + S F'b + S F'c = 0, and from (3), A'o may be expressed as

A'o = (Ma A'a + Mb A'b + Mc A'c) / (Ma + Mb + Mc)     (4)

As the right-hand side of (4) is the acceleration A'cm of the center of mass of the Universe relative to the reference frame S', then

A'o = A'cm     (5)

Substituting into the equations (2) yields the following equations:

S F'a = Ma (A'a − A'cm)
S F'b = Mb (A'b − A'cm)
S F'c = Mc (A'c − A'cm)     (6)

Therefore, the behavior of particles A, B, and C is now determined from the reference frame S' by the equations (6), which are equivalent to the equations (2).
Now, if the equations (6) are transformed from reference frame S' to S using the transformations of kinematics and dynamics, the resulting equations become

S Fa = Ma (Aa − Acm)
S Fb = Mb (Ab − Acm)
S Fc = Mc (Ac − Acm)     (7)

It follows that the behavior of particles A, B, and C will now be determined from reference frame S by the equations (7), which are equivalent to the equations (1) only if the acceleration Acm of the center of mass of the Universe relative to the reference frame S equals zero, a fact that may be verified by adding together the equations (1):

S Fa + S Fb + S Fc = Ma Aa + Mb Ab + Mc Ac     (8)

Dividing both sides of (8) by Ma + Mb + Mc and using the fact that S Fa + S Fb + S Fc = 0 from Newton's third law, (8) yields

Acm = (Ma Aa + Mb Ab + Mc Ac) / (Ma + Mb + Mc) = 0     (9)

Considering that the equations (7) have the same form as the equations (6), then the behavior of particles A, B, and C will be determined from any reference frame by the equations (7), and will be determined by the equations (1) only if the acceleration of the center of mass of the Universe relative to that reference frame is zero. Now, the equations (7) can be arranged as follows:

S Fa + Ma Acm − Ma Aa = 0
S Fb + Mb Acm − Mb Ab = 0
S Fc + Mc Acm − Mc Ac = 0     (10)

Substituting the expression (9) for Acm into (10) and factoring,

S Fa + Ma Mb / (Ma + Mb + Mc) (Ab − Aa) + Ma Mc / (Ma + Mb + Mc) (Ac − Aa) = 0
S Fb + Mb Ma / (Ma + Mb + Mc) (Aa − Ab) + Mb Mc / (Ma + Mb + Mc) (Ac − Ab) = 0
S Fc + Mc Ma / (Ma + Mb + Mc) (Aa − Ac) + Mc Mb / (Ma + Mb + Mc) (Ab − Ac) = 0     (11)

If the second and third terms of the left-hand sides of each one of the equations (11) are taken as a new force F° acting on the corresponding particle, and exerted by the remaining particles, then it can be seen that F° conserves its form when passing from one reference frame to another; in addition, if a particle exerts a force F° on another particle, the latter exerts on the first particle a force -F° of equal magnitude and opposite direction. Therefore, as the second and third terms of the left-hand sides of each one of the equations (11) represent the sum of the new forces S F° acting on the particles, then

S F°a = Ma Mb / (Ma + Mb + Mc) (Ab − Aa) + Ma Mc / (Ma + Mb + Mc) (Ac − Aa)
S F°b = Mb Ma / (Ma + Mb + Mc) (Aa − Ab) + Mb Mc / (Ma + Mb + Mc) (Ac − Ab)
S F°c = Mc Ma / (Ma + Mb + Mc) (Aa − Ac) + Mc Mb / (Ma + Mb + Mc) (Ab − Ac)     (12)

And adding the second term to the first yields

S Fa + S F°a = 0
S Fb + S F°b = 0
S Fc + S F°c = 0     (13)

Consequently, it can be established that the behavior of particles A, B, and C will be determined from any reference frame by the equations (13), which may be stated as follows: if the new force is added to the sum of real forces, the resulting force will be zero, yielding a system in equilibrium. Consequently, it is possible to conceive a new dynamics, which can be formulated for all reference frames. The usual explanation for the motion of particles is that particles undergo a certain motion in response to the external forces acting on them, following Newton's first and second laws. The new dynamics, instead, considers that particles experience a certain motion because in that way they balance the sum of real forces with the new force. From now on, the new force will be called kinetic force, since it is a force which depends on the motion of particles, and the magnitude M (mass) will be called kinetic mass instead of inertial mass, since in the new dynamics particles do not exhibit the property known as inertia.

The New Dynamics

First principle: A particle can have any state of motion.

Second principle: The forces acting upon a particle A always remain balanced.

Third principle: If a particle A exerts a force F on a particle B, then particle B exerts on particle A a force -F of the same magnitude but opposite direction.
The transformation of real forces from one reference frame to another is given by the following equation:

F' = F

The kinetic force FKab exerted on a particle A by another particle B, caused by the interaction between particle A and particle B, is given by the following equation:

FKab = Ma Mb / MT (Ab − Aa)

where Ma is the kinetic mass of particle A, Mb is the kinetic mass of particle B, Ab is the acceleration of particle B, Aa is the acceleration of particle A, and MT is the total kinetic mass of the Universe.

The transformation of kinetic masses from one reference frame to another is given by the following equation:

M' = M

From the previous statements it follows that the sum of kinetic forces S FKa acting on a particle A is given by

S FKa = Ma (Acm − Aa)     (14)

where Ma is the kinetic mass of particle A, Acm is the acceleration of the center of kinetic mass of the Universe and Aa is the acceleration of particle A.

Determination of the Motion of Particles

The equation determining the acceleration Aa of a particle A relative to a reference frame S fixed to a particle S may be calculated as follows: the sum of the kinetic forces S FKa acting on particle A and the sum of the kinetic forces S FKs acting on particle S are given by the following equations:

S FKa = Ma (Acm − Aa)
S FKs = Ms (Acm − As)

Combining both equations yields

S FKa / Ma − S FKs / Ms = As − Aa

Since the acceleration As of particle S relative to the reference frame S equals zero always, Aa may be obtained from the last equation as

Aa = S FKs / Ms − S FKa / Ma

Since from the second principle of the new dynamics the sum of the kinetic forces (S FK) acting on a particle equals the opposite of the sum of the non-kinetic forces (-S FN) acting on the particle, we have

Aa = S FNa / Ma − S FNs / Ms

Therefore, the acceleration Aa of a particle A relative to a reference frame S fixed to a particle S will be determined by the last equation, where S FNa is the sum of the non-kinetic forces acting on particle A, Ma is the mass of particle A (from now on, kinetic mass will be referred to as mass), S FNs is the sum of the non-kinetic forces acting on particle S, and Ms is the mass of particle S.

Galilean Circumstance

A reference frame S fixed to a particle S is said to be in the galilean circumstance if the sum of the non-kinetic forces acting on particle S equals zero. If reference frame S is in the galilean circumstance, then, by the second principle of the new dynamics, it can be shown that the sum of the kinetic forces S FKs acting on particle S equals zero, that is

S FKs = Ms (Acm − As) = 0

And, as the acceleration As of particle S relative to the reference frame S equals zero always, then

Acm = 0

That is, the acceleration of the center of mass of the Universe relative to a reference frame in the galilean circumstance is zero.

Isolated System

A system of particles is said to be isolated if the sum of the non-kinetic external forces acting on the system equals zero. Therefore, if a system of particles is isolated, by the second principle of the new dynamics, the sum of the internal non-kinetic forces S FNi and the internal and external kinetic forces S FK equals zero:

S FNi + S FK = 0

Substituting S FK from expression (14) applied to a system of N particles, and taking into account that S FNi = 0 from the third principle of the new dynamics, it follows that

Ma (Acm − Aa) + Mb (Acm − Ab) + ... + Mn (Acm − An) = 0

from which Acm can be expressed as

Acm = (Ma Aa + Mb Ab + ... + Mn An) / (Ma + Mb + ... + Mn)

And as the right-hand side is the acceleration Acms of the center of mass of the isolated system, then

Acm = Acms

Therefore, the acceleration of the center of mass of an isolated system equals the acceleration of the center of mass of the Universe.
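The balance asserted by the second principle can be illustrated numerically. The sketch below is only an illustration in Python, with invented masses, accelerations and a toy three-particle Universe, assuming the kinetic-force expression FKab = Ma Mb / MT (Ab − Aa) given above:

import numpy as np

# A toy Universe of three particles: kinetic masses and arbitrary accelerations (2D).
M = np.array([2.0, 3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, 1.0]])

MT = M.sum()
Acm = (M[:, None] * A).sum(axis=0) / MT  # acceleration of the centre of mass

def FK(a, b):
    # Kinetic force on particle a exerted by particle b.
    return (M[a] * M[b] / MT) * (A[b] - A[a])

for a in range(3):
    # Sum of kinetic forces on particle a from every other particle.
    sum_FK = sum(FK(a, b) for b in range(3) if b != a)
    # This reproduces expression (14): S FKa = Ma (Acm - Aa) ...
    assert np.allclose(sum_FK, M[a] * (Acm - A[a]))
    # ... so non-kinetic forces equal to Ma (Aa - Acm) are exactly balanced:
    FN = M[a] * (A[a] - Acm)
    assert np.allclose(FN + sum_FK, 0.0)

print("kinetic forces balance the non-kinetic forces on every particle")

Note also that FK(a, b) = -FK(b, a) follows directly from the formula, in agreement with the third principle.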
Restricted Conservation of Linear Momentum

On one hand, the acceleration of the center of mass of an isolated system equals the acceleration of the center of mass of the Universe and, on the other hand, the acceleration of the center of mass of the Universe relative to a reference frame in the galilean circumstance equals zero. Therefore, the acceleration of the center of mass of an isolated system relative to a reference frame in the galilean circumstance equals zero; that is

Acms = 0

Multiplying both sides of this equation by Ma + Mb + ... + Mn and integrating with respect to time yields

Ma Va + Mb Vb + ... + Mn Vn = constant

As the left-hand side is the total linear momentum P of the isolated system, then

P = constant

Therefore, for a reference frame in the galilean circumstance the total linear momentum of an isolated system is conserved.

Work and Live Energy

The total work W done by the forces acting on a particle is given by

W = ∫ Fa · dR + ∫ Fb · dR + ... + ∫ Fn · dR

Grouping yields

W = ∫ (Fa + Fb + ... + Fn) · dR

As Fa + Fb + ... + Fn = 0 by the second principle of the new dynamics, it follows that

W = 0

That is, the total work done by the forces acting on a particle equals zero. But the total work W done by the interacting kinetic forces FKa and FKb acting on particles A and B respectively, is given by

W = ∫ FKab · dRa + ∫ FKba · dRb

or else

W = − ∫ Ma Mb / MT (Ab − Aa) · (Vb − Va) dt

resulting in

W = − Δ [ Ma Mb / 2 MT (Vb − Va)² ]

If we call the energy of the kinetic force live energy, then the expression between brackets represents the live energy ELab of the system particle A - particle B; therefore

W = − Δ ELab

It follows that the total work done by the interacting kinetic forces acting on a particle A and a particle B is equal and opposite in sign to the live energy difference of the system particle A - particle B; with the live energy of the system given by

ELab = Ma Mb / 2 MT (Vb − Va)²

where Ma is the mass of particle A, Mb is the mass of particle B, Va is the velocity of particle A, Vb is the velocity of particle B, and MT is the total mass of the Universe.

The total work W done by the kinetic forces acting on an isolated system is

W = ∫ S FKa · dRa + ∫ S FKb · dRb + ... + ∫ S FKn · dRn

that is

W = ∫ [ Ma (Acm − Aa) · Va + Mb (Acm − Ab) · Vb + ... + Mn (Acm − An) · Vn ] dt

Substituting Acm in the last equation by the acceleration Acms of the center of mass of the isolated system, since Acms is equal to Acm, yields

W = − Δ [ (½ Ma Va² + ½ Mb Vb² + ... + ½ Mn Vn²) − P² / 2 MS ]

The expression between brackets represents the total live energy EL of the isolated system, then

W = − Δ EL

Therefore, the total work done by the kinetic forces acting on an isolated system equals minus the total live energy difference of the isolated system, where the total live energy EL of an isolated system is given by

EL = EK − P² / 2 MS

where EK is the total kinetic energy of the isolated system, P is the total linear momentum of the isolated system, and MS is the total mass of the isolated system.

Conservation of Live Energy

The total work done by the forces acting on a particle equals zero; therefore, the total work W done by the forces acting on an isolated system equals zero. If the total work W is divided into two parts: the total work Wfn done by the non-kinetic forces and the total work Wfk done by the kinetic forces, then

Wfn + Wfk = 0

As Wfk equals minus the total live energy difference of the isolated system, then

Wfn − Δ EL = 0

If the non-kinetic forces acting on the isolated system do not perform work, it follows that

Δ EL = 0

that is

EL = constant

or else

EK − P² / 2 MS = constant

Therefore, if the non-kinetic forces acting on an isolated system do not perform work, the total live energy of the isolated system is conserved. On the other hand, if the total live energy of an isolated system is conserved, then from a reference frame in the galilean circumstance the total kinetic energy of the isolated system is conserved too, since for such a system the total linear momentum remains constant.
It is currently known that in order to describe the behavior (motion) of a body from a non-inertial reference frame in classical mechanics, it is necessary to introduce apparent forces called fictitious forces (also called pseudo-forces, inertial forces or non-inertial forces). Unlike real forces, fictitious forces are not caused by the interaction between bodies, that is, if there is a fictitious force F acting on a body A, then a fictitious force -F of the same magnitude but opposite direction acting on another body B cannot be found; that is, fictitious forces do not obey Newton's third law. On the other hand, in the theory of general relativity, based on the principle of equivalence, it is established that fictitious forces are caused, in a generalized sense, by a gravitational field which all non-inertial reference frames experience, that is, in the theory of general relativity fictitious forces are equivalent to gravitational forces. But, why are fictitious forces not caused by the interaction between bodies, just as real forces are? Why do not fictitious forces conserve their value when passing from one non-inertial reference frame to another inertial reference frame, just as real forces do? If fictitious forces are equivalent to gravitational forces, then why are fictitious forces not caused by the interaction between bodies and do not conserve their value when passing from one non-inertial reference frame to another inertial reference frame, just as gravitational forces are caused and conserve their value? It can be stated that neither classical mechanics nor the theory of general relativity give satisfactory answers to the above mentioned questions and that, therefore, it should be accepted that apparently experience shows that to describe the behavior (motion) of a body from a non-inertial reference frame it is necessary to introduce fictitious forces that do not behave in the same way that real forces do. However, this work does give satisfactory answers to the above mentioned questions, since it is deduced from it that, in fact, experience does not show that fictitious forces that do not behave as real forces exist, but experience does show that there exists a new real force which is still ignored and that the so called fictitious forces are in fact mathematical expressions that partially represent this new real force. In this work the new real force, called kinetic force, behaves like the other real forces, that is, it is a force caused by the interaction between bodies and conserves its value when passing from one reference frame to another. But, on the other hand, it is established in this work that the goal of the kinetic force is to balance the remaining real forces acting on a body, that is, the kinetic force is the real force that makes the sum of all the real forces acting on a body be always equal to zero. Now, how is it possible then to change the natural state of motion of a body, if according to Newton's first and second laws, based on the principle of inertia, it is established that the natural state of motion of a body will only change when there is an unbalanced external force acting on it? 
In contradiction with the principle of inertia, it is established in this work that in the absence of external forces the natural state of motion of a body is not only the state of rest or of uniform linear motion, but that the natural state of motion of a body in the absence of external forces is any possible state of motion; that is, any possible state of motion is a natural state of motion. However, the previous statement does not mean that there is no relation between the motion of bodies and the forces acting on them, since such a relation exists and is mathematically expressed in the new dynamics developed in this work. In the new dynamics the motion is the mechanism that bodies have, which makes it possible for the kinetic force to balance the remaining forces acting on a body, since as a result of its motion a body exerts over all other bodies the kinetic force which is necessary to keep the system of forces acting on each of them always in equilibrium. On the other hand, in this work it is not necessary to separate reference frames into two classes: inertial reference frames and non-inertial reference frames, since through the new dynamics the behavior (motion) of a body can be described exactly in the same way from any reference frame. That is, the new dynamics is in accord with the principle of general relativity, which states: the laws of physics shall be valid for all reference frames. As a final conclusion it can be said that physics has two possible options: to develop classical mechanics based on the principle of inertia, as a first option, or to develop classical mechanics not based on the principle of inertia, as a second option. However, this work, at least in the classical mechanics of particles, demonstrates, on one hand, that the second option is in accord with what experience shows and, on the other hand, that from a theoretical point of view the second option is widely superior to the first one. A. Einstein, Relativity: The Special and General Theory. E. Mach, The Science of Mechanics. R. Resnick and D. Halliday, Physics. J. Kane and M. Sternheim, Physics. H. Goldstein, Classical Mechanics. L. Landau and E. Lifshitz, Mechanics.
{"url":"https://torassa.tripod.com/paper.htm","timestamp":"2024-11-02T14:08:27Z","content_type":"application/xhtml+xml","content_length":"48543","record_id":"<urn:uuid:70161134-bf66-451b-9b0e-e9df7fc56d2f>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00896.warc.gz"}
The One Thing to Do for What a Mode in Math

If you get a statement like this, it typically means that the equation has infinitely many solutions. It is helpful for finding the quartiles. Decision variables are occasionally called independent variables. The BBox appears to be automatically loaded. Never hesitate to use PIA or PIN if you have the time to spare and you truly feel uncomfortable with algebra alone. Then search for the middle number.

Get the Scoop on What a Mode in Math Before You're Too Late

Growth of population is divided into three major components. It is the same as the average. It is important because it describes the behavior of the entire set of numbers. To locate the median, your numbers need to be listed in numerical order from smallest to largest, which means you may need to rewrite your list before you can find the median. http://writing2.richmond.edu/writing/wweb/history/principles.html Sometimes you'll have an average that does not appear in the data set, but will still show you the big picture of the numbers given. It's the average of all of the numbers. If you don't understand what I'm speaking about here, talk with your math teacher, pronto! Since these questions often appear straightforward, it can be easy to find yourself rushing through them. You're likely to have a decimal answer, but we're going to round to the closest percent.

The Pain of What a Mode in Math

It feels weird to take the trouble to go to the theatre merely to watch Jake Gyllenhaal do a glorified audition. However, there's always the possibility that I overlooked something or made a mistake. For quite a long time everything I tried was incorrect. Take note that in case the problem asks for a negative number, that doesn't necessarily indicate a negative INTEGER. As a workaround, compiling texvc might have to be done offline. A pupil has to know the directions provided in the manual. The results raise intriguing possibilities for an assortment of future materials. A set of data can be bimodal. It is definitely the most commonly occurring value inside this set. There are two kinds of chi-square distribution. Be aware it is meaningless since we're adding quantities with various units. In other words, it's the number that is repeated most, i.e. the number with the maximum frequency. Progress monitoring is conducted to be sure the instruction is having a positive effect on every kid's growth and achievement. His main teaching interests are international business and worldwide accounting. In statistics, there are three common ways to find the average of a group of data.

What a Mode in Math and What a Mode in Math – The Perfect Combination

You will require a unique username, password, and working e-mail address to make your account. Pause the video here to see whether you can find the mean of this data set. Lyx employs a key-binding file to determine the key-bindings at start time.

What a Mode in Math – the Story

Given two sides of a right triangle, students use the Pythagorean Theorem to find the length of the third side. It's the divider in the middle of the street. Rounding away from zero is the most commonly known kind of rounding. He couldn't solve the math problem. Our lesson planning worksheet is able to help you estimate. Algebra is often taught abstractly with very little emphasis on what algebra is or how it can be used to solve real problems.
Take practice exams so you can pass your written test the first time. A couple of the students had questions about the issue. So, over the previous six Summer Olympics, the United States has been awarded a mean of 103 medals. It may be used to predict amounts. Suppose you're going to use plenty of Christoffel symbols, but the very first superscript is usually going to be i. Put simply, it's the value that is most likely to be sampled. All ACT statistics questions are just variations on the same theme, so knowing your foundations is important. The particular meaning depends upon context. The value of the mean lies in its capacity to summarize the entire dataset with a single value. It may be used to describe qualitative phenomena. They are sure that the mutation disrupts the standard structure and use of the RDS protein. A particularly important corollary is that lots of knots and links are actually hyperbolic. This question can be challenging to answer as it involves a number of different kinds of evaluation. In the field of statistics, it's an important tool to interpret data in an appropriate way. This class may be used to concurrently gather statistics for several datasets as well as for a combined sample including all the data. The rest of the numbers are only listed once, so there is just one mode. It is also feasible to have a whole set of data without a mode. Working with blend modes is almost always an experimental process. There's an open-ended distribution. You may technically leave it in auto mode and see whether it figures out whether you want a math answer or not. This data set doesn't have any mode.
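For readers who want to experiment with the three averages directly, here is a small sketch using Python's standard statistics module (the sample data is made up):

import statistics

data = [2, 3, 3, 5, 7, 10]

print(statistics.mean(data))       # 5   -- the average of all the numbers
print(statistics.median(data))     # 4.0 -- the middle of the sorted list
print(statistics.multimode(data))  # [3] -- the most frequently occurring value(s)

# A data set can be bimodal, or have no repeated value at all:
print(statistics.multimode([1, 1, 2, 2, 3]))  # [1, 2] -- two modes
print(statistics.multimode([4, 8, 15, 16]))   # every value ties; no single mode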
{"url":"https://www.darisrl.eu/2019/11/11/the-one-thing-to-do-for-what-a-mode-in-math/","timestamp":"2024-11-05T09:33:08Z","content_type":"text/html","content_length":"41535","record_id":"<urn:uuid:71573879-c5a6-4596-82c3-ebdb8c15874a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00084.warc.gz"}
June Huh from Princeton University

Hard Lefschetz theorem and Hodge-Riemann relations for combinatorial geometries

A conjecture of Read predicts that the coefficients of the chromatic polynomial of a graph form a log-concave sequence for any graph. A related conjecture of Welsh predicts that the numbers of linearly independent subsets of varying sizes form a log-concave sequence for any configuration of vectors in a vector space. In this talk, I will argue that two main results of Hodge theory, the Hard Lefschetz theorem and the Hodge-Riemann relations, continue to hold in a realm that goes beyond that of Kähler geometry. This implies the above mentioned conjectures and their generalization to arbitrary matroids. Joint work with Karim Adiprasito and Eric Katz. The talk will be accessible to a general audience, see http://matroidunion.org/?p=1664 for some details.
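To make the log-concavity claim concrete, consider the complete graph K4, whose chromatic polynomial is k(k−1)(k−2)(k−3) = k⁴ − 6k³ + 11k² − 6k; in absolute value its coefficients 1, 6, 11, 6 satisfy aᵢ² ≥ aᵢ₋₁·aᵢ₊₁. A small sketch checking this with Python's sympy (the graph is chosen purely for illustration):

from sympy import symbols, expand, Poly

k = symbols('k')

# Chromatic polynomial of the complete graph K4.
P = expand(k*(k - 1)*(k - 2)*(k - 3))

# Absolute values of the nonzero coefficients, highest degree first.
coeffs = [abs(c) for c in Poly(P, k).all_coeffs() if c != 0]  # [1, 6, 11, 6]

log_concave = all(coeffs[i]**2 >= coeffs[i - 1] * coeffs[i + 1]
                  for i in range(1, len(coeffs) - 1))
print(coeffs, log_concave)  # [1, 6, 11, 6] True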
{"url":"https://math.washington.edu/events/2016-04-22/june-huh-princeton-university","timestamp":"2024-11-11T07:22:17Z","content_type":"text/html","content_length":"50825","record_id":"<urn:uuid:432ffb80-a699-47f6-ac2e-b7e74e939a9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00835.warc.gz"}
ADT on road is 30000 vpd. The traffic composition is 30% cars and 35% buses. Design hour factor of 0.15 and usage ratio of 0.2 are the same for both vehicle types. The duration of stay for a bus is 45 minutes and for a car is 1 hour. Assume that 50% of the total traffic flows in each direction. Using the US-HCM formula, the ratio of parking spaces required for buses and cars would be:

Understand the Problem

The question asks us to calculate the ratio of parking spaces required for buses and cars based on the provided traffic data. To solve this, we use the given values for average daily traffic (ADT), traffic composition, design hour factor, usage ratio, duration of stay, and directional split.

The ratio of parking spaces required for buses to cars is $0.875$.

Steps to Solve

1. Identify the necessary values from the problem

• ADT = 30,000 vehicles per day
• Traffic composition: 35% buses and 30% cars
• Design hour factor = 0.15 (same for both vehicle types)
• Usage ratio = 0.2 (same for both vehicle types)
• Duration of stay: 45 minutes (0.75 h) for a bus, 1 hour for a car
• 50% of the total traffic flows in each direction

2. Calculate the design-hour volume in one direction

$$ Hourly\ Volume = ADT \times 0.5 \times Design\ Hour\ Factor = 30000 \times 0.5 \times 0.15 = 2250 $$

For buses:
$$ Hourly\ Buses = 2250 \times 0.35 = 787.5 $$
For cars:
$$ Hourly\ Cars = 2250 \times 0.30 = 675 $$

3. Apply the usage ratio to find the vehicles that actually park

$$ Parking\ Buses = 787.5 \times 0.2 = 157.5 $$
$$ Parking\ Cars = 675 \times 0.2 = 135 $$

4. Factor in the duration of stay to get the required spaces (accumulation = arrival rate × stay)

$$ Spaces\ Buses = 157.5 \times 0.75 = 118.125 $$
$$ Spaces\ Cars = 135 \times 1 = 135 $$

5. Calculate the ratio of parking spaces required

$$ Ratio = \frac{118.125}{135} = 0.875 $$

The ratio of parking spaces required for buses to cars is $0.875$, i.e. $7 : 8$.

More Information

Because the design hour factor, usage ratio, and directional split are identical for both vehicle classes, they cancel out of the ratio, which reduces to $\frac{0.35 \times 0.75}{0.30 \times 1} = 0.875$. So roughly 7 bus spaces are needed for every 8 car spaces.
Common mistakes to avoid:
• Forgetting to apply the design hour factor correctly, which can inflate or deflate the computed demand.
• Not applying the usage ratio properly, which will cause incorrect parking space requirements.
• Misreading the traffic composition percentages, which leads to incorrect bus and car volumes.
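To make the arithmetic above easy to check, here is a minimal Python sketch. The function name and the simplified spaces formula (directional design-hour volume × usage ratio × duration of stay) are illustrative assumptions distilled from the steps above, not a quotation of the US-HCM text.

```python
def parking_spaces(adt, share, dhf, usage, duration_h, directional=0.5):
    """Peak-direction design-hour parking demand, converted to spaces.

    adt: average daily traffic (veh/day); share: fraction of ADT for this
    vehicle type; dhf: design hour factor; usage: parking usage ratio;
    duration_h: average duration of stay in hours.
    """
    design_hour_volume = adt * directional * share * dhf
    return design_hour_volume * usage * duration_h

buses = parking_spaces(30_000, 0.35, 0.15, 0.2, 0.75)  # 45 min = 0.75 h
cars = parking_spaces(30_000, 0.30, 0.15, 0.2, 1.0)

print(buses, cars, buses / cars)  # 118.125 135.0 0.875
```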
{"url":"https://quizgecko.com/q/adt-on-road-is-30000-vpd-the-traffic-composition-is-30-cars-and-35-buses-des-kisvo","timestamp":"2024-11-06T15:29:41Z","content_type":"text/html","content_length":"174222","record_id":"<urn:uuid:7c34cd52-2f06-427c-9675-8a48f610bedb>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00150.warc.gz"}
How to read a three-phase electric meter 🚩 three-phase meter rates 🚩 Utilities. You will need • the meter's instruction manual; • a calculator; • a sheet of paper; • a pen or pencil. Take the instructions and determine what kind of meter you have. Many varieties of three-phase meters are in use today, but essentially only two types: electronic and induction. The electronic type has more advantages than the induction type; induction meters remain common because they were installed in most homes in the mid-1990s and their service life has not yet expired. Examine the meter carefully. Meters differ in the number of digits on their registers: they are either three-digit or four-digit. In the first case the register rolls over at 1,000 kWh, in the second at 10,000 kWh. Upon reaching 999 or 9,999 kWh the register resets to zero and consumption counting starts over. Keep this in mind before taking readings. Determine the amount of electricity consumed. To do this, take the reading for the previous month and subtract it from the current one. This gives you the consumption for the period since the last payment. Write it on a sheet of paper. Convert the result into money, i.e., the amount you must pay: multiply the consumption by the current tariff. Sometimes readings must be taken when a meter is replaced, most often when a three-digit meter is swapped for a four-digit one. In that case, add the final reading of the old meter to the reading of the new one, multiply the result by the tariff, and you get the amount due for the month. If you take the readings yourself, keep your payment receipts; otherwise you will not be able to determine accurately how many kilowatt-hours were used in a month. If the electricity used is not paid for in full, the power company may cut off the supply until the debt is fully repaid. Useful advice It is best to call in a specialist from the supply organization to take the readings. They have all your previous records, so you will be instantly presented with a bill for payment.
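As a sketch of the arithmetic described above, here is a small Python function; the rollover value, sample readings, and tariff are illustrative assumptions, not values from the article.

```python
def monthly_bill(previous, current, tariff, rollover=1000):
    """Consumption and cost between two meter readings.

    Handles the case where a 3-digit register wrapped past 999 kWh
    (rollover=1000); use rollover=10000 for a 4-digit register.
    """
    used = current - previous
    if used < 0:              # the register rolled over since the last reading
        used += rollover
    return used, used * tariff

kwh, cost = monthly_bill(previous=978, current=42, tariff=0.18)
print(kwh, cost)  # 64 kWh used, 11.52 in your local currency
```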
{"url":"https://eng.kakprosto.ru/how-103974-how-to-read-a-three-phase-electric-meter","timestamp":"2024-11-08T23:28:20Z","content_type":"text/html","content_length":"36370","record_id":"<urn:uuid:eb10ee15-effd-4896-9c6d-e996aa64431b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00154.warc.gz"}
11.8 The Integrate Gadget

The Integrate Gadget tool (the addtool_curve_integ X-Function) allows you to select an arbitrary range of data on a graph intuitively, using the region of interest (ROI) object (yellow rectangle). The tool then performs integration on the chosen section to calculate the area under the curve and displays the results instantly on top of the ROI. With this Gadget, you can:
• Specify the integration limits
• Specify the baseline
• Display the difference curve and the integral curve inside the ROI
• Calculate quantities including: peak area, peak height, peak center and FWHM

To Open Integrate Gadget
To use this tool, select Gadgets: Integrate... from the Origin menu when a graph window is active. If the tool has already been activated, you can re-open the Integrate dialog by clicking on the arrow in the upper-right corner of the ROI and choosing Preferences.

To Show or Hide Gadget Tool
To toggle the display of all gadget ROI boxes in a graph at the same time, click the H button in the top right corner of the graph; this lets users export the graph together with the gadget results.

ROI Box tab
X Scale From: Specify the beginning of the X data range for the ROI.
X Scale To: Specify the ending of the X data range for the ROI.
Fixed (Prevent moving by ROI): Fix the X scale to prevent it from being moved by the ROI box.
Show Tool Name (toolname): Specify whether to show the tool name at the top of the rectangle in the graph. The tool name can be specified in Preferences.
Fill Color (rectcolor): Specify the color of the rectangle that attaches to the graph. See the list of colors.
Show Area/FWHM on Center-Top (showtop): Specify whether to show the integrated area and FWHM of the selected section at the top of the rectangle in the graph.

Integration tab
Fit Limits To (fitlimit): Specify the integration limits. You can get the limits through linear interpolation/extrapolation or use raw data points as integral limits.
□ Interpolate to Rectangle Edge: Interpolate the two points at the rectangle edges according to the source data, and use these two points as integration limits. Note that if the rectangle edge exceeds the source data, no interpolation is done.
□ Interpolate/Extrapolate to Rectangle Edge: Interpolate/extrapolate the two points at the rectangle edges according to the source data, and use these two points as integration limits.
□ Data Points: Do not interpolate; use the raw data points as integration limits.
Area Type: Specify the integral area type.
• Mathematical Area: The area is the algebraic sum of trapezoids.
• Absolute Area: The area is the sum of absolute trapezoid values.
Show (showIntegrArea):
• Baseline Subtracted Curve: Specify whether to show the baseline-subtracted curve in the graph.
• Show Integrated Area: Specify whether to show the integrated area in the graph.
• Keep the shading color after New Output: Specify whether to keep the shading of the integrated area after each New Output.
Note: For information on customizing the shaded (integrated) area and baseline, see this FAQ.
Integral Curve:
☆ None: Do not show the integral curve in the graph.
☆ Restrict to Rectangle: Show the integral curve inside the rectangle. If the integral values are much larger than the original curve, the integral curve is re-scaled so that it fits inside the rectangle box; in that case it does not show the true values.
☆ True Value: Show the actual integral values in the graph.

Baseline Tab
Method (method): Specify a baseline mode.
• None (Y=0): Use Y=0 as the baseline.
• Constant Y: Use a horizontal line as the baseline.
• Straight Line: Use a straight line, which can be tilted, as the baseline.
• Use Existing Dataset: Use an existing data set as the baseline.
• 2nd Derivative: Use the 2nd derivative method to create the baseline.
• End Point Weighted: Create a smoothed curve using data points from the two ends.
Y= (yvalue): Only available when Constant Y is selected for Method. Use it to specify the Y value for the horizontal line to be used as the baseline.
Fix x to (fixxto): Only available when Straight Line is selected for Method. It works together with Y Offset of Left (yoffsetleft) and Y Offset of Right (yoffsetright), and specifies how to choose the X values for the beginning and end of the baseline. Suppose we define the value of Y Offset of Left as b1 and Y Offset of Right as b2, the beginning Y value of the baseline as a1, and the ending Y value as a2. Then we always have a1-b1=y1 and a2-b2=y2, where y1 and y2 are the Y values at x1 and x2, respectively. x1 and x2 are defined as follows:
• Entire Data: Fix x to the entire data set. x1 is the X value at the beginning of the raw data and x2 is the X value at the end of the raw data.
• Rectangle: Fix x to the rectangle. x1 is the X value corresponding to the left edge of the rectangle and x2 is the X value corresponding to the right edge.
• Scale: Fix x to the X scale. x1 is the beginning of the scale and x2 is the end of it.
Y Offset of Left (yoffsetleft): Only available when Straight Line is selected for Method; it sets the value of b1 mentioned above.
Y Offset of Right (yoffsetright): Only available when Straight Line is selected for Method; it sets the value of b2 mentioned above.
Baseline Dataset: Only available when Dataset is selected for Method. It is used to specify an existing data set to be used as the baseline.
Range 1 (Range1):
X (X=): Specify the X range.
Y (Y=): Specify the Y range.
Select the check box if you want to use only part of the data set as the baseline, then specify the beginning row and the ending row.
Smoothing Method: Select a smoothing method to apply prior to creating the baseline. Only available when the Mode is set to 2nd Derivative. Options include Savitzky-Golay and Adjacent-Averaging.
Window Size: Specify the desired window size (a positive integer) in the moving window for the Savitzky-Golay or Adjacent-Averaging smoothing.
Threshold: Specify the threshold for the Savitzky-Golay or Adjacent-Averaging smoothing.
Polynomial Order: Available only when Savitzky-Golay is selected for Smoothing Method. It specifies the polynomial order (1 through 9).
Maximum Anchor: Specify the maximum number of baseline anchor points. Only available when the Mode is set to 2nd Derivative.
Connected Method: Specify the connect method for the anchor points. Only available when the Mode is set to 2nd Derivative.
End Points(%): Specify the percentage of end points used to create the baseline. Only available when the Mode is set to End Point Weighted.

Output tab
Output Quantities to: Customize the output results.
Script Window (script): Specify whether to output the results to the Script Window.
Results Log (reslog): Specify whether to output the results to the Results Log.
Long Name in Results Log/Script Window (useLongName): Specify whether to use the long names of the Quantities (quantities) or their variable names when Origin outputs the results to the Results Log or Script Window.
Append to Worksheet (appendwks): Specify whether to append the results to a worksheet.
Result Worksheet Name (wbkName): Only available when Append to Worksheet (appendwks) is selected.
• When you generate new output, results are output to [%H-QkInteg]Result by default (here %H means the Short Name of the source graph), but other books and sheets can be specified. If the book and sheet do not exist, they will be created on output.
• Alternately, you can click the flyout button to the right of Result Worksheet Name and choose Sheet in Input Book. This fills the edit box with [<input>]Result. When you generate new output, results are output to a sheet named Result in the source book.
Add Label to Graph (LabelToGraph): Specify whether to output the label to the graph.
Significant Digits (signdigits): Specify the significant digits of the output quantities. By default this follows the Digits settings on the Numeric Format tab of the Preferences: Options dialog. It affects the labels on top of the ROI box, the output to the Script Window and Results Log, and the labels added to the graph.
Quantities Branch: Specify the quantities to be output.
Dataset Identifier (name): Specify a dataset identifier in the drop-down list.
Beginning Row Index (begin_row): Specify whether to output the beginning row index.
Ending Row Index (end_row): Specify whether to output the ending row index.
Beginning X (begin_x): Specify whether to output the beginning X value.
Ending X (end_x): Specify whether to output the ending X value.
Max Height (ph): Specify whether to output the maximum height, as computed from the baseline.
X at YMax (pc): Specify whether to output the X value at the maximum Y value.
Area (pa): Specify whether to output the area, as computed from the baseline.
Area Above Baseline (paa): Specify whether to output the area above the baseline.
Area Below Baseline (pab): Specify whether to output the area below the baseline.
Centroid (pcd): Specify whether to output the centroid.
FWHM (pfwhm): Specify whether to output the FWHM (the full width at half the height of the source curve).
Left Half Width (plhw): Specify whether to output the left half width.
Right Half Width (prhw): Specify whether to output the right half width.
Y Max (pymax): Specify whether to output the Y value on the input curve at the X position where the Y on the subtracted curve has the maximum absolute value.
Index of X at YMax (pxofymax): Specify whether to output the index of X for the Y Max value.
Baseline (base): Specify whether to output the baseline information.
Notes: The name in the brackets is the tree node name, and the name outside is the tree node label. When an X-Function executes by script, you have to assign values to tree nodes using their names. Special characters and spaces are not allowed in tree node names; however, you can use special characters and white space in the tree node label.
Baseline and Integrated Curve: Specify whether to output the created baseline, integral curve and/or subtracted curve of the selected range under the ROI box.
Baseline (base): Specify whether to output the baseline information.
Integral Curve (Integrated): Specify whether to output the integral curve.
Subtracted Curve (Subtracted): Specify whether to output the subtracted curve (obtained by subtracting the selected baseline from the source curve) when the Baseline Mode is not None.
Output To (Outputto): Specify where to output the result:
• Source Sheet sends new output to new columns in the source worksheet.
• Source Book, New Sheet sends output to a sheet named GadgetIntegN in the source workbook.
• New Book sends output to a book named GadgetIntegN.

The Fly-Out Menu
Click the triangle button near the top right corner of the ROI to open a fly-out menu that offers the following options:
New Output: Output the result.
New Output for All Curves (N): Output the results for all curves in the current layer to the specified worksheet (if not empty, append the results).
New Output for All Layers (L): Output the results for all curves in all layers within the current graph to the specified worksheet (if not empty, append the results).
Update Last Output: Update the last output. Only available when there is already an output.
Go to Report: Go to the report worksheet once the result has been output to a worksheet.
Go to Source: Go to the source worksheet.
Output to Clipboard: When selected (menu item checked), New Output will be placed on the Clipboard.
Change Data: Change the input data.
• By default, Auto mode is enabled. When Auto is enabled, plot selection is controlled by clicking on a plot in the graph window or Object Manager. Prior to Origin 2019, Origin did not support Auto; to change the target plot/data in older versions, you must select a plot from the fly-out menu.
• Place a check mark in front of any plot to select that plot.
• Click Select... or More... to open the Select Plot(s) dialog and change the selection.
Expand to Full Plot(s) Range: Expand the ROI box to the full range of the plots.
Fix ROI Position: Fix the position of the ROI box.
Save Theme: Save the settings in this dialog as a dialog theme.
Save as <default>: Save the settings in this dialog as the <default> theme.
Load Theme: Load a pre-saved dialog theme.
Preferences: Open the Preferences dialog.

To perform integration on an area of a graph with the baseline at y=2, do the following:
1. Create a new worksheet.
2. Import the Origin sample data fftfilter1.DAT, which is located in <Origin Program Folder>\Samples\Signal Processing.
3. Select Plot > 2D: Line: Spline from the Origin menu to draw a graph.
4. Select Gadgets: Integrate from the Origin menu while the graph window is active, to bring up the Integrate: addtool_curve_integ dialog box.
5. Go to the Baseline tab. Choose Constant Y for the Method, and then enter 2 in the Y= edit box.
6. Click the OK button. This adds a rectangle onto the plot. The integration area is shown at the top of the rectangle.
7. Click the triangle button and select New Output from the context menu. The results are output to the Classic Script Window by default.
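Origin performs this integration internally; purely as a hedged illustration of the two area types over a constant baseline y = 2 (plain NumPy, not Origin or LabTalk code, with an invented sample curve), one could write:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 501)
y = 2.0 + 3.0 * np.sin(x)            # stand-in for the plotted curve

baseline = 2.0                        # Baseline tab: Constant Y, Y = 2
d = y - baseline                      # the baseline-subtracted curve

math_area = np.trapz(d, x)            # Mathematical Area: algebraic sum
abs_area = np.trapz(np.abs(d), x)     # Absolute Area: absolute values summed

print(math_area)   # ~0: the positive and negative lobes cancel
print(abs_area)    # ~12: both lobes counted positively
```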
{"url":"https://cloud.originlab.com/doc/en/Origin-Help/Gadget-Integration","timestamp":"2024-11-02T04:37:15Z","content_type":"text/html","content_length":"168802","record_id":"<urn:uuid:7429f605-a8d9-4794-85c2-330b16235d07>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00731.warc.gz"}
The concept of impedance extends also to masses and springs. Figure 7.2 illustrates an ideal mass of $m$ kilograms. Newton's second law says that force equals mass times acceleration, or

$$ f(t) = m\,a(t) = m\,\frac{dv(t)}{dt}. $$

Since impedance is defined in terms of force and velocity, we will prefer the form $f(t) = m\,\dot{v}(t)$. By the differentiation theorem for Laplace transforms [284],^8.1 we have

$$ F(s) = m\left[sV(s) - v(0)\right]. $$

If we assume the initial velocity of the mass is zero, we have $F(s) = m s V(s)$, and the impedance of a mass $m$ in the frequency domain is simply

$$ R_m(s) = \frac{F(s)}{V(s)} = ms. $$

This is the transfer function of a differentiator, scaled by $m$. Thus, an ideal mass integrates the applied force (divided by $m$); this is the "linear systems" way of saying force equals mass times acceleration. Since we normally think of an applied force as an input and the resulting velocity as an output, the corresponding transfer function is

$$ H(s) = \frac{V(s)}{F(s)} = \frac{1}{ms}. $$

The system diagram for this view is shown in Fig. 7.3. The impulse response of a mass, for a force input and velocity output, is defined as the inverse Laplace transform of the transfer function:

$$ h(t) = \mathcal{L}^{-1}\{H(s)\} = \frac{1}{m}\,u(t), $$

where $u(t)$ denotes the unit step function. In this instance, setting the input to $f(t) = \delta(t)$ transfers a unit momentum to the mass at time 0. (Recall that momentum is the integral of force with respect to time.) Since momentum is also equal to mass times velocity, the velocity jumps to $1/m$ and remains there.

Once the input and output signal are defined, a transfer function is defined, and therefore a frequency response is defined [485]. The frequency response is given by the transfer function evaluated on the $j\omega$ axis, i.e., for $s = j\omega$:

$$ H(j\omega) = \frac{1}{j\omega m}. $$

Again, this is just the frequency response of an integrator, and we can say that the amplitude response rolls off $-6$ dB per octave, and the phase shift is $-\pi/2$ radians at all frequencies.

In circuit theory, the element analogous to the mass is the inductor, characterized by $v(t) = L\,\frac{di(t)}{dt}$. In an equivalent circuit, a mass can be represented using an inductor with value $L = m$.

Next Section: Ideal Spring | Previous Section: Dashpot
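A quick numerical check of the force-to-velocity response $1/(ms)$ using SciPy; the mass value and test frequencies are arbitrary assumptions for the sketch.

```python
import numpy as np
from scipy import signal

m = 2.0                                               # mass in kg (arbitrary)
mass_sys = signal.TransferFunction([1.0], [m, 0.0])   # V(s)/F(s) = 1/(m s)

w = np.array([1.0, 2.0, 4.0])                         # rad/s, octaves apart
_, H = signal.freqresp(mass_sys, w)

print(20 * np.log10(np.abs(H)))   # ~[-6.02, -12.04, -18.06]: -6 dB/octave
print(np.angle(H))                # ~ -pi/2 phase shift at every frequency
```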
{"url":"https://www.dsprelated.com/freebooks/pasp/Ideal_Mass.html","timestamp":"2024-11-10T01:34:41Z","content_type":"text/html","content_length":"38059","record_id":"<urn:uuid:edc8c180-2e46-4c83-8e0e-f2c66d2be27b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00154.warc.gz"}
Optimization Solvers When training, the MyCaffe Solver classes are used to optimize the network, using different strategies to apply the gradients to the weights of each learnable layer. Solver Configuration The SolverParameter is used to select the solver to use and to specify its configuration. There are four main areas of settings to consider when configuring a solver: Testing parameters, Learning Rate parameters, Snapshot parameters and the Solver Specific parameters. Testing Parameters The testing parameters configure how testing is performed during training and include the following settings. test_interval: defines how often the test cycle is performed during training. For example, if training for 1000 iterations with test_interval = 100, the test cycle occurs every 100th iteration. test_iter: specifies the number of tests to perform on each test interval. For example, when test_iter = 10, ten test cycles are run with the batch size defined by the TEST phase data input. test_initialization: when true, a testing cycle is performed before training starts. Learning Rate Parameters The learning rate controls how strongly each gradient update changes the weights. Over time the learning rate may be changed using various strategies; a sketch of the standard schedules appears after the solver descriptions below. base_lr: the base learning rate defines the starting learning rate, which may change on each cycle depending on the learning strategy used. lr_policy: the learning rate policy defines how the learning rate is changed during the training process. For example, a FIXED learning rate policy leaves the learning rate unchanged, whereas a SIGMOID policy uses a sigmoid decay. For more information on the learning rate policies supported by MyCaffe, see the LearningRatePolicyType. gamma: specifies the 'gamma' term used to calculate the STEP, EXP, INV and SIGMOID learning rate policies. power: specifies the 'power' term used to calculate the INV and POLY learning rate policies. stepsize: specifies the step size used to calculate the STEP learning rate policy. stepvalue: specifies the step values used to calculate the MULTISTEP learning rate policy. weight_decay: when updating weights, the learning rate multiplied by the gradient is subtracted from the weights. Weight decay additionally subtracts the weight_decay rate multiplied by the weight itself, shrinking each weight by a small amount; this can help reduce the chance of overfitting the model being trained. For more information on weight decay, see the article "This thing called Weight Decay" by Dipam Vasani. regularization_type: MyCaffe supports both L1 and L2 regularization, which help avoid overfitting a model during training by adding a penalty to the loss function. For a more detailed discussion comparing L1 and L2 regularization, see the article "L1 vs L2 Regularization: The intuitive difference" by Dhaval Taunk. Snapshot Parameters A 'snapshot' is the process of saving the weights learned during training, which you may want to do whenever the best accuracy so far is reached, to avoid losing any learning in the event training is stopped early. The following parameters define how snapshots are taken. snapshot_include_weights: when true, the weight values themselves are saved in the snapshot. When inferencing, the weights are loaded and used in the inferencing solution.
snapshot_include_state: when true, the state, which includes the weight state, learning rate and iteration information, is saved in the snapshot. Saving the state allows training to restart where it left off during the previous training session. snapshot_format: MyCaffe supports the BINARYPROTO format for snapshot data. snapshot: defines the fixed interval at which snapshots are saved. By default, snapshots are saved on each testing cycle where the best accuracy so far is found during training. MyCaffe supports the following solver types: SGD, NESTEROV, ADAGRAD, RMSPROP, ADADELTA, ADAM, LBFGS. SGD Solver SGDSolver – performs Stochastic Gradient Descent optimization which, when used with momentum, updates weights with a linear combination of the negative gradient and the previous weight update. NesterovSolver – this solver is similar to SGD, but the error gradient is computed on the weights with momentum already added, so the momentum term is "pointed in the right direction". For more information on Nesterov momentum and how it differs from SGD + momentum, see the article "Understanding Nesterov Momentum (NAG)" by Dominik Schmidt. AdaGradSolver – this solver is simpler than SGD, for AdaGrad does not use a momentum term. Instead, this solver uses "different learning rates for each parameter based on the iteration"* in an attempt to find rarely seen features. AdaDeltaSolver – this solver is an extension of AdaGrad in that it uses an adaptive learning rate and no momentum, but differs by using a "restricted window size" of the exponentially weighted average of the past t gradients*. AdamSolver – this solver is the most preferred optimizer, for it combines SGD + momentum with the adaptive learning rate of AdaDelta*. According to "Adam vs. SGD: Closing the generalization gap on image classification" by Gupta et al., "Adam finds solutions that generalize worse than those found by SGD". RmsPropSolver – like Adam, this solver uses an adaptive learning rate and "tries to resolve the problem that gradients may vary widely in magnitudes". To solve this, "Rprop combines the idea of only using the sign of the gradient with the idea of adapting the step size individually for each weight." For more information on RmsPropSolver, see the article "Understanding RMSProp – faster neural network learning" by Vitaly Bushaev. LBFGSSolver – this solver optimizes the parameters of a net using the L-BFGS algorithm, based on the minFunc implementation by Marc Schmidt. MyCaffe uses the LBFGSSolver in neural style transfer. *For more information comparing the AdaGrad, AdaDelta and Adam optimizers, see the article "Deep Learning Optimizers" by Gunand Mayanglambam. To learn more about the differences between deep learning optimization techniques, see "Various Optimization Algorithms for Training Neural Network" by Sanket Doshi.
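To make the schedules and the SGD update concrete, here is a small, hedged Python sketch (not MyCaffe's C# implementation). It follows the standard Caffe formulas for the learning-rate policies named above and the classic SGD + momentum + L2 weight-decay update; all numeric values are arbitrary examples.

```python
import math

def learning_rate(policy, it, base_lr, gamma=0.1, power=1.0, stepsize=1000):
    """Standard Caffe-style learning-rate schedules (sketch)."""
    if policy == "FIXED":
        return base_lr
    if policy == "STEP":                      # drop by gamma every stepsize iters
        return base_lr * gamma ** (it // stepsize)
    if policy == "EXP":                       # geometric decay per iteration
        return base_lr * gamma ** it
    if policy == "INV":
        return base_lr * (1.0 + gamma * it) ** -power
    if policy == "SIGMOID":                   # stepsize acts as the midpoint
        return base_lr / (1.0 + math.exp(-gamma * (it - stepsize)))
    raise ValueError("unknown policy: " + policy)

def sgd_step(w, grad, v, lr, momentum=0.9, weight_decay=5e-4):
    """One SGD update with momentum and L2 weight decay (lists of floats)."""
    new_w, new_v = [], []
    for wi, gi, vi in zip(w, grad, v):
        gi = gi + weight_decay * wi           # decay pulls weights toward zero
        vi = momentum * vi - lr * gi          # blend with the previous update
        new_v.append(vi)
        new_w.append(wi + vi)
    return new_w, new_v

lr = learning_rate("STEP", it=2500, base_lr=0.01)   # 0.01 * 0.1**2 = 1e-4
w, v = sgd_step([1.0, -2.0], [0.3, -0.1], [0.0, 0.0], lr)
print(lr, w, v)
```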
{"url":"https://www.signalpop.com/mycaffe/optimization-training/","timestamp":"2024-11-09T07:00:26Z","content_type":"text/html","content_length":"115100","record_id":"<urn:uuid:f40c71dc-9665-4c17-b3e0-f668fa5d0d73>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00675.warc.gz"}
Local independence of irrelevant alternatives

A criterion weaker than IIA, proposed by H. Peyton Young and A. Levenglick, is called local independence from irrelevant alternatives (LIIA).^[1] LIIA requires that both of the following conditions always hold: • If the option that finished in last place is deleted from all the votes, then the order of finish of the remaining options must not change. (The winner must not change.) • If the winning option is deleted from all the votes, the order of finish of the remaining options must not change. (The option that finished in second place must become the winner.) An equivalent way to express LIIA is that if a subset of the options are in consecutive positions in the order of finish, then their relative order of finish must not change if all other options are deleted from the votes. For example, if all options except those in 3rd, 4th and 5th place are deleted, the option that finished 3rd must win, the 4th must finish second, and the 5th must finish third. Another equivalent way to express LIIA is that if two options are consecutive in the order of finish, the one that finished higher must win if all options except those two are deleted from the votes. LIIA is weaker than IIA because satisfaction of IIA implies satisfaction of LIIA, but not vice versa. Despite being a weaker criterion (i.e. easier to satisfy) than IIA, LIIA is satisfied by very few voting methods. These include Kemeny-Young and ranked pairs, but not Schulze. Just as with IIA, LIIA compliance for rating methods such as approval voting, range voting, and majority judgment requires the assumption that voters rate each alternative individually and independently of knowing any other alternatives, on an absolute scale calibrated prior to the election. Such an assumption would imply that there exist elections where, although a voter has slight differences in preference, that voter would rate the alternatives as equal if required to cast a vote. LIIA and majority together imply Condorcet, Smith, and independence of Smith-dominated alternatives. If a method passes majority, then in an election with only two candidates, the winner must pairwise beat or tie the loser. LIIA thus requires that if X finishes directly ahead of Y in the election method's outcome, then X must pairwise beat or tie Y. Since a Condorcet winner pairwise beats everybody else, it follows that it must finish first. Furthermore, it is impossible for a candidate outside of the Smith set to finish ahead of one inside it, because by definition every candidate in the Smith set pairwise beats every candidate outside of it. Finally, independence of Smith-dominated alternatives follows from the fact that all the Smith candidates finish ahead of every non-Smith candidate. By LIIA, eliminating all the non-Smith candidates must not change the outcome.
{"url":"https://electowiki.org/wiki/Local_IIA","timestamp":"2024-11-03T09:04:59Z","content_type":"text/html","content_length":"51364","record_id":"<urn:uuid:2bc4249f-0e37-4f76-bbd6-f74bb93f4a66>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00559.warc.gz"}
SciPy 0.7.0 Release Notes SciPy 0.7.0 is the culmination of 16 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.7.x branch, and on adding new features on the development trunk. This release requires Python 2.4 or 2.5 and NumPy 1.2 or greater. Please note that SciPy is still considered to have “Beta” status, as we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a major milestone in the development of SciPy, after which changing the package structure or API will be much more difficult. Whilst these pre-1.0 releases are considered to have “Beta” status, we are committed to making them as bug-free as possible. For example, in addition to fixing numerous bugs in this release, we have also doubled the number of unit tests since the last release. However, until the 1.0 release, we are aggressively reviewing and refining the functionality, organization, and interface. This is being done in an effort to make the package as coherent, intuitive, and useful as possible. To achieve this, we need help from the community of users. Specifically, we need feedback regarding all aspects of the project - everything - from which algorithms we implement, to details about our function’s call signatures. Over the last year, we have seen a rapid increase in community involvement, and numerous infrastructure improvements to lower the barrier to contributions (e.g., more explicit coding standards, improved testing infrastructure, better documentation tools). Over the next year, we hope to see this trend continue and invite everyone to become more involved. A significant amount of work has gone into making SciPy compatible with Python 2.6; however, there are still some issues in this regard. The main issue with 2.6 support is NumPy. On UNIX (including Mac OS X), NumPy 1.2.1 mostly works, with a few caveats. On Windows, there are problems related to the compilation process. The upcoming NumPy 1.3 release will fix these problems. Any remaining issues with 2.6 support for SciPy 0.7 will be addressed in a bug-fix release. Python 3.0 is not supported at all; it requires NumPy to be ported to Python 3.0. This requires immense effort, since a lot of C code has to be ported. The transition to 3.0 is still under consideration; currently, we don’t have any timeline or roadmap for this transition. SciPy documentation is greatly improved; you can view a HTML reference manual online or download it as a PDF file. The new reference guide was built using the popular Sphinx tool. This release also includes an updated tutorial, which hadn’t been available since SciPy was ported to NumPy in 2005. Though not comprehensive, the tutorial shows how to use several essential parts of Scipy. It also includes the ndimage documentation from the numarray manual. Nevertheless, more effort is needed on the documentation front. Luckily, contributing to Scipy documentation is now easier than before: if you find that a part of it requires improvements, and want to help us out, please register a user name in our web-based documentation editor at https://docs.scipy.org/ and correct the issues. NumPy 1.2 introduced a new testing framework based on nose. 
Starting with this release, SciPy now uses the new NumPy test framework as well. Taking advantage of the new testing framework requires nose version 0.10, or later. One major advantage of the new framework is that it greatly simplifies writing unit tests - which has already paid off, given the rapid increase in tests. To run the full test suite:

>>> import scipy
>>> scipy.test('full')

For more information, please see The NumPy/SciPy Testing Guide. We have also greatly improved our test coverage. There were just over 2,000 unit tests in the 0.6.0 release; this release nearly doubles that number, with just over 4,000 unit tests. Support for NumScons has been added. NumScons is a tentative new build system for NumPy/SciPy, using SCons at its core. SCons is a next-generation build system, intended to replace the venerable Make with the integrated functionality of autoconf/automake and ccache. Scons is written in Python and its configuration files are Python scripts. NumScons is meant to replace NumPy's custom version of distutils, providing more advanced functionality, such as autoconf, improved fortran support, more tools, and support for numpy.distutils/scons cooperation. While porting SciPy to NumPy in 2005, several packages and modules were moved into scipy.sandbox. The sandbox was a staging ground for packages that were undergoing rapid development and whose APIs were in flux. It was also a place where broken code could live. The sandbox has served its purpose well, but was starting to create confusion. Thus scipy.sandbox was removed. Most of the code was moved into scipy, some code was made into a scikit, and the remaining code was just deleted, as the functionality had been replaced by other code. Sparse matrices have seen extensive improvements. There is now support for integer dtypes such as int8, uint32, etc. Two new sparse formats were added: • new class dia_matrix : the sparse DIAgonal format • new class bsr_matrix : the Block CSR format Several new sparse matrix construction functions were added: • sparse.kron : sparse Kronecker product • sparse.bmat : sparse version of numpy.bmat • sparse.vstack : sparse version of numpy.vstack • sparse.hstack : sparse version of numpy.hstack Extraction of submatrices and nonzero values has been added: • sparse.tril : extract lower triangle • sparse.triu : extract upper triangle • sparse.find : nonzero values and their indices csr_matrix and csc_matrix now support slicing and fancy indexing (e.g., A[1:3, 4:7] and A[[3,2,6,8],:]). Conversions among all sparse formats are now possible: • using member functions such as .tocsr() and .tolil() • using the .asformat() member function, e.g. A.asformat('csr') • using constructors A = lil_matrix([[1,2]]); B = csr_matrix(A) All sparse constructors now accept dense matrices and lists of lists. For example: • A = csr_matrix( rand(3,3) ) and B = lil_matrix( [[1,2],[3,4]] ) The handling of diagonals in the spdiags function has been changed. It now agrees with the MATLAB(TM) function of the same name. Numerous efficiency improvements to format conversions and sparse matrix arithmetic have been made. Finally, this release contains numerous bugfixes. Statistical functions for masked arrays have been added, and are accessible through scipy.stats.mstats. The functions are similar to their counterparts in scipy.stats, but they have not yet been verified for identical interfaces and algorithms. Several bugs were fixed in the statistical functions; among these changes, kstest and percentileofscore gained new keyword arguments.
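As a small sketch exercising several of the sparse-matrix additions described earlier in these notes (sample values are arbitrary):

```python
import numpy as np
from scipy import sparse

A = sparse.lil_matrix([[1, 2], [3, 4]])   # constructors accept lists of lists
B = sparse.csr_matrix(A)                  # convert via a constructor
C = B.asformat('csc')                     # convert via .asformat()

K = sparse.kron(B, sparse.identity(2))    # sparse Kronecker product
L = sparse.tril(K)                        # extract the lower triangle
rows, cols, vals = sparse.find(L)         # nonzero values and their indices

print(B[0:2, 0:1].toarray())              # csr_matrix slicing now works
print(rows, cols, vals)
```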
Added deprecation warnings for mean, median, var, std, cov, and corrcoef. These functions should be replaced by their numpy counterparts. Note, however, that some of the default options differ between the scipy.stats and numpy versions of these functions. Numerous bug fixes to stats.distributions: all generic methods now work correctly, and several methods in individual distributions were corrected. However, a few issues remain with higher moments (skew, kurtosis) and entropy. The maximum likelihood estimator, fit, does not work out-of-the-box for some distributions - in some cases, starting values have to be carefully chosen; in other cases, the generic implementation of the maximum likelihood method might not be the numerically appropriate estimation method. We expect more bugfixes, increases in numerical precision and enhancements in the next release of scipy. The IO code in both NumPy and SciPy is being extensively reworked. NumPy will be where basic code for reading and writing NumPy arrays is located, while SciPy will house file readers and writers for various data formats (data, audio, video, images, matlab, etc.). Several functions in scipy.io have been deprecated and will be removed in the 0.8.0 release, including npfile, save, load, create_module, create_shelf, objload, objsave, fopen, read_array, write_array, fread, fwrite, bswap, packbits, unpackbits, and convert_objectarray. Some of these functions have been replaced by NumPy's raw reading and writing capabilities, memory-mapping capabilities, or array methods. Others have been moved from SciPy to NumPy, since basic array reading and writing capability is now handled by NumPy. The Matlab (TM) file readers/writers have a number of improvements: • default version 5 • v5 writers for structures, cell arrays, and objects • v5 readers/writers for function handles and 64-bit integers • new struct_as_record keyword argument to loadmat, which loads struct arrays in matlab as record arrays in numpy • string arrays have dtype='U...' instead of dtype=object • loadmat no longer squeezes singleton dimensions, i.e. squeeze_me=False by default This module adds new hierarchical clustering functionality to the scipy.cluster package. The function interfaces are similar to the functions provided by MATLAB(TM)'s Statistics Toolbox to help facilitate easier migration to the NumPy/SciPy framework. Linkage methods implemented include single, complete, average, weighted, centroid, median, and ward. In addition, several functions are provided for computing inconsistency statistics, cophenetic distance, and maximum distance between descendants. The fcluster and fclusterdata functions transform a hierarchical clustering into a set of flat clusters. Since these flat clusters are generated by cutting the tree into a forest of trees, the leaders function takes a linkage and a flat clustering, and finds the root of each tree in the forest. The ClusterNode class represents a hierarchical clustering as a field-navigable tree object. to_tree converts a matrix-encoded hierarchical clustering to a ClusterNode object. Routines for converting between MATLAB and SciPy linkage encodings are provided. Finally, a dendrogram function plots hierarchical clusterings as a dendrogram, using matplotlib. The new spatial package contains a collection of spatial algorithms and data structures, useful for spatial statistics and clustering applications.
It includes rapidly compiled code for computing exact and approximate nearest neighbors, as well as a pure-python kd-tree with the same interface, but that supports annotation and a variety of other algorithms. The API for both modules may change somewhat, as user requirements become clearer. It also includes a distance module, containing a collection of distance and dissimilarity functions for computing distances between vectors, which is useful for spatial statistics, clustering, and kd-trees. Distance and dissimilarity functions provided include Bray-Curtis, Canberra, Chebyshev, City Block, Cosine, Dice, Euclidean, Hamming, Jaccard, Kulsinski, Mahalanobis, Matching, Minkowski, Rogers-Tanimoto, Russell-Rao, Squared Euclidean, Standardized Euclidean, Sokal-Michener, Sokal-Sneath, and Yule. The pdist function computes the pairwise distance between all unordered pairs of vectors in a set of vectors. The cdist function computes the distance on all pairs of vectors in the Cartesian product of two sets of vectors. Pairwise distance matrices are stored in condensed form; only the upper triangular is stored. squareform converts distance matrices between square and condensed forms. FFTW2, FFTW3, MKL and DJBFFT wrappers have been removed. Only (NETLIB) fftpack remains. By focusing on one backend, we hope to add new features - like float32 support - more easily. scipy.constants provides a collection of physical constants and conversion factors. These constants are taken from CODATA Recommended Values of the Fundamental Physical Constants: 2002. They may be found at physics.nist.gov/constants. The values are stored in the dictionary physical_constants as a tuple containing the value, the units, and the relative precision - in that order. All constants are in SI units, unless otherwise stated. Several helper functions are provided. scipy.interpolate now contains a Radial Basis Function module. Radial basis functions can be used for smoothing/interpolating scattered data in n-dimensions, but should be used with caution for extrapolation outside of the observed data range. scipy.integrate.ode now contains a wrapper for the ZVODE complex-valued ordinary differential equation solver (by Peter N. Brown, Alan C. Hindmarsh, and George D. Byrne). scipy.linalg.eigh now contains wrappers for more LAPACK symmetric and hermitian eigenvalue problem solvers. Users can now solve generalized problems, select a range of eigenvalues only, and choose to use a faster algorithm at the expense of increased memory usage. The signature of scipy.linalg.eigh changed accordingly. The shape of return values from scipy.interpolate.interp1d used to be incorrect if the interpolated data had more than 2 dimensions and the axis keyword was set to a non-default value. This has been fixed. Moreover, interp1d now returns a scalar (0D-array) if the input is a scalar. Users of scipy.interpolate.interp1d may need to revise their code if it relies on the previous behavior. There were numerous improvements to scipy.weave. blitz++ was relicensed by the author to be compatible with the SciPy license. wx_spec.py was removed.
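For example, a sketch of the condensed/square round trip in the new scipy.spatial.distance module described above (the sample points are arbitrary):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])  # three 2-D points

d = pdist(X, metric='euclidean')     # condensed form: one value per pair
print(d)                             # [3. 4. 5.] for pairs (0,1), (0,2), (1,2)

D = squareform(d)                    # expand to the full symmetric matrix
print(D)
assert np.allclose(squareform(D), d) # and back to condensed form again
```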
{"url":"https://docs.scipy.org/doc/scipy-1.14.1/release/0.7.0-notes.html","timestamp":"2024-11-12T22:40:08Z","content_type":"text/html","content_length":"59104","record_id":"<urn:uuid:ef8fc911-ce98-48b9-bed2-0825301aa880>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00418.warc.gz"}
Phylogenetic Networks
Publications of Steven Kelk

Janosch Döcker, Leo van Iersel, Steven Kelk and Simone Linz. Deciding the existence of a cherry-picking sequence is hard on two trees. In DAM, Vol. 260:131-143, 2019.
Keywords: cherry-picking, explicit network, hybridization, minimum number, NP complete, phylogenetic network, phylogeny, reconstruction, temporal-hybridization number, time consistent network, tree-child network.
Note: https://arxiv.org/abs/1712.02965.

Leo van Iersel, Steven Kelk, Giorgios Stamoulis, Leen Stougie and Olivier Boes. On unrooted and root-uncertain variants of several well-known phylogenetic network problems. In ALG, Vol. 80(11):2993-3022, 2018.
Keywords: explicit network, FPT, from network, from unrooted trees, NP complete, phylogenetic network, phylogeny, reconstruction, tree containment.
Note: https://hal.inria.fr/hal-01599716.

Julia Matsieva, Steven Kelk, Celine Scornavacca, Chris Whidden and Dan Gusfield. A Resolution of the Static Formulation Question for the Problem of Computing the History Bound. In TCBB, Vol. 14(2):404-417, 2017.
Keywords: ARG, explicit network, from sequences, minimum number, phylogenetic network, phylogeny.

Leo van Iersel, Steven Kelk, Nela Lekic, Chris Whidden and Norbert Zeh. Hybridization Number on Three Rooted Binary Trees is EPT. In SIDMA, Vol. 30(3):1607-1631, 2016.
Keywords: agreement forest, explicit network, FPT, from rooted trees, hybridization, minimum number, phylogenetic network, phylogeny, reconstruction.
Note: http://arxiv.org/abs/1402.2136.

Steven Kelk, Leo van Iersel, Celine Scornavacca and Mathias Weller. Phylogenetic incongruence through the lens of Monadic Second Order logic. In JGAA, Vol. 20(2):189-215, 2016.
Keywords: agreement forest, explicit network, FPT, from rooted trees, hybridization, minimum number, MSOL, phylogenetic network, phylogeny, reconstruction.
Note: http://jgaa.info/accepted/2016/KelkIerselScornavaccaWeller2016.20.2.pdf.

Philippe Gambette, Leo van Iersel, Steven Kelk, Fabio Pardi and Celine Scornavacca. Do branch lengths help to locate a tree in a phylogenetic network? In BMB, Vol. 78(9):1773-1795, 2016.
Keywords: branch length, explicit network, FPT, from network, from rooted trees, NP complete, phylogenetic network, phylogeny, pseudo-polynomial, time consistent network, tree containment, tree sibling network.
Note: http://arxiv.org/abs/1607.06285.

Leo van Iersel, Steven Kelk and Celine Scornavacca. Kernelizations for the hybridization number problem on multiple nonbinary trees. In JCSS, Vol. 82(6):1075-1089, 2016.
Keywords: explicit network, from rooted trees, kernelization, minimum number, phylogenetic network, phylogeny, Program Treeduce, reconstruction.
Note: https://arxiv.org/abs/1311.4045v3.

Mareike Fischer, Leo van Iersel, Steven Kelk and Celine Scornavacca. On Computing The Maximum Parsimony Score Of A Phylogenetic Network. In SIDMA, Vol. 29(1):559-585, 2015.
Keywords: APX hard, cluster containment, explicit network, FPT, from network, from sequences, integer linear programming, level k phylogenetic network, NP complete, parsimony, phylogenetic network, phylogeny, polynomial, Program MPNet, reconstruction, software.
Note: http://arxiv.org/abs/1302.2430.

Leo van Iersel, Steven Kelk, Nela Lekic and Leen Stougie. Approximation algorithms for nonbinary agreement forests. In SIDMA, Vol. 28(1):49-66, 2014.
Keywords: agreement forest, approximation, from rooted trees, hybridization, minimum number, phylogenetic network, phylogeny, reconstruction.
Note: http://arxiv.org/abs/1210.3211.

Leo van Iersel and Steven Kelk. Kernelizations for the hybridization number problem on multiple nonbinary trees. In WG14, Vol. 8747:299-311 of LNCS, springer, 2014.
Keywords: explicit network, from rooted trees, kernelization, minimum number, phylogenetic network, phylogeny, Program Treeduce, reconstruction.
Note: http://arxiv.org/abs/1311.4045.

Leo van Iersel, Steven Kelk, Nela Lekic and Celine Scornavacca. A practical approximation algorithm for solving massive instances of hybridization number for binary and nonbinary trees. In BMCB, Vol. 15(127):1-12, 2014.
Keywords: agreement forest, approximation, explicit network, from rooted trees, phylogenetic network, phylogeny, Program CycleKiller, Program TerminusEst, reconstruction.
Note: http://dx.doi.org/10.1186/1471-2105-15-127.

Eric Bapteste, Leo van Iersel, Axel Janke, Scott Kelchner, Steven Kelk, James O. McInerney, David A. Morrison, Luay Nakhleh, Mike Steel, Leen Stougie and James B. Whitfield. Networks: expanding evolutionary thinking. In Trends in Genetics, Vol. 29(8):439-441, 2013.
Keywords: abstract network, explicit network, phylogenetic network, phylogeny, reconstruction.
Note: http://bioinf.nuim.ie/wp-content/uploads/2013/06/Bapteste-TiG-2013.pdf.

Steven Kelk, Celine Scornavacca and Leo van Iersel. On the elusiveness of clusters. In TCBB, Vol. 9(2):517-534, 2012.
Keywords: explicit network, from clusters, from rooted trees, from triplets, level k phylogenetic network, phylogenetic network, phylogeny, Program Clustistic, reconstruction, software.
Note: http://arxiv.org/abs/1103.1834.

Steven Kelk, Leo van Iersel, Nela Lekic, Simone Linz, Celine Scornavacca and Leen Stougie. Cycle killer... qu'est-ce que c'est? On the comparative approximability of hybridization number and directed feedback vertex set. In SIDMA, Vol. 26(4):1635-1656, 2012.
Keywords: agreement forest, approximation, explicit network, from rooted trees, minimum number, phylogenetic network, phylogeny, Program CycleKiller, reconstruction.
Note: http://arxiv.org/abs/1112.5359, about the title.

Leo van Iersel, Steven Kelk, Nela Lekic and Celine Scornavacca. A practical approximation algorithm for solving massive instances of hybridization number. In WABI12, Vol. 7534:430-440 of LNCS, springer, 2012.
Keywords: agreement forest, approximation, explicit network, from rooted trees, hybridization, phylogenetic network, phylogeny, Program CycleKiller, Program Dendroscope, Program HybridNET, reconstruction, software.
Note: http://arxiv.org/abs/1205.3417.

Katharina Huber, Leo van Iersel, Steven Kelk and Radoslaw Suchecki. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks. In TCBB, Vol. 8(3):607-620, 2011.
Keywords: explicit network, from triplets, galled tree, generation, heuristic, phylogenetic network, phylogeny, Program LEV1ATHAN, Program Lev1Generator, reconstruction, software.
Note: http://arxiv.org/abs/0910.4067.

Leo van Iersel and Steven Kelk. Constructing the Simplest Possible Phylogenetic Network from Triplets. In ALG, Vol. 60(2):207-235, 2011.
Keywords: explicit network, from triplets, galled tree, level k phylogenetic network, minimum number, phylogenetic network, phylogeny, polynomial, Program Marlon, Program Simplistic.
Note: http://dx.doi.org/10.1007/s00453-009-9333-0.

Leo van Iersel and Steven Kelk. When two trees go to war. In JTB, Vol. 269(1):245-255, 2011.
Keywords: APX hard, explicit network, from clusters, from rooted trees, from sequences, from triplets, level k phylogenetic network, minimum number, NP complete, phylogenetic network, phylogeny, polynomial, reconstruction.
Note: http://arxiv.org/abs/1004.5332.

Jaroslaw Byrka, Pawel Gawrychowski, Katharina Huber and Steven Kelk. Worst-case optimal approximation algorithms for maximizing triplet consistency within phylogenetic networks. In Journal of Discrete Algorithms, Vol. 8(1):65-75, 2010.
Keywords: approximation, explicit network, from triplets, galled tree, level k phylogenetic network, phylogenetic network, phylogeny, reconstruction.
Note: http://arxiv.org/abs/0710.3258.

Leo van Iersel, Steven Kelk, Regula Rupp and Daniel H. Huson. Phylogenetic Networks Do not Need to Be Complex: Using Fewer Reticulations to Represent Conflicting Clusters. In ISMB10, Vol. 26(12):i124-i131 of BIO, 2010.
Keywords: from clusters, level k phylogenetic network, Program Dendroscope, Program HybridInterleave, Program HybridNumber, reconstruction.
Note: http://dx.doi.org/10.1093/bioinformatics/btq202, with proofs: http://arxiv.org/abs/0910.3082.

Leo van Iersel, Steven Kelk and Matthias Mnich. Uniqueness, intractability and exact algorithms: reflections on level-k phylogenetic networks. In JBCB, Vol. 7(4):597-623, 2009.
Keywords: explicit network, from triplets, galled tree, level k phylogenetic network, NP complete, phylogenetic network, phylogeny, reconstruction, uniqueness.
Note: http://arxiv.org/pdf/0712.2932v2.

Leo van Iersel, Judith Keijsper, Steven Kelk, Leen Stougie, Ferry Hagen and Teun Boekhout. Constructing level-2 phylogenetic networks from triplets. In RECOMB08, Vol. 4955:450-462 of LNCS, springer, 2008.
Keywords: explicit network, from triplets, level k phylogenetic network, NP complete, phylogenetic network, phylogeny, polynomial, Program Level2, reconstruction.
Note: http://homepages.cwi.nl/~iersel/level2full.pdf. An appendix with proofs can be found at http://arxiv.org/abs/0707.2890.

Leo van Iersel and Steven Kelk. Constructing the Simplest Possible Phylogenetic Network from Triplets. In ISAAC08, Vol. 5369:472-483 of LNCS, springer, 2008.
Keywords: explicit network, from triplets, galled tree, level k phylogenetic network, minimum number, phylogenetic network, phylogeny, polynomial, Program Marlon, Program Simplistic.
Note: http://arxiv.org/abs/0805.1859.
{"url":"https://phylnet.univ-mlv.fr/show.php?author=Steven_Kelk","timestamp":"2024-11-03T00:01:30Z","content_type":"text/html","content_length":"194605","record_id":"<urn:uuid:ceab9845-73b2-4dcd-910c-2ae2fc61c0c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00138.warc.gz"}
3 Way Venn Diagram Template

Here is a 3-set Venn diagram that compares 3 popular blogging platforms. If you are starting a blog in the near future, this Venn diagram could be useful for you in making a choice between these platforms. Venn diagrams are especially useful for showing relationships between sets, such as the intersection and union of overlapping sets; we can use Venn diagrams to represent sets pictorially.

3 Way Venn Diagram Generator
A generator asks for the size of each of the seven regions, for example A !B !C (area 1), B !A !C (area 2), and so on, then renders the result. Please wait while your Venn diagram is generated. Click here for the SVG image. Click here for the PNG image.

How to Make a Venn Diagram
1. Open up a page: log in to your Canva account with your username and password, and you'll be taken to a document page.
2. Decide which data you will analyze: the first step is to consider exactly what to compare and contrast in your Venn diagram.
3. Click on the image and use it as a template: illustrate the 3-circle Venn diagram with this template.
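If you would rather generate the diagram programmatically than use an online template, the third-party matplotlib-venn package (an assumption here, not something the page above mentions) draws 3-set diagrams from the seven region sizes; the labels and sizes below are invented examples.

```python
import matplotlib.pyplot as plt
from matplotlib_venn import venn3   # pip install matplotlib-venn

# Region sizes in the order (Abc, aBc, ABc, abC, AbC, aBC, ABC),
# i.e. "A !B !C" first, matching the generator fields described above.
venn3(subsets=(10, 8, 4, 6, 3, 2, 1),
      set_labels=('WordPress', 'Blogger', 'Medium'))
plt.title('3-way Venn diagram')
plt.show()
```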
{"url":"https://time.ocr.org.uk/en/3-way-venn-diagram-template.html","timestamp":"2024-11-06T04:46:51Z","content_type":"text/html","content_length":"28173","record_id":"<urn:uuid:a57b634b-4cbd-4ca1-90cb-b320dc46391f>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00651.warc.gz"}
Data Structures

Data structures are a way to organize data so that it is efficiently accessible for the problem you are trying to solve. Choosing the right data structure will depend on the type of problem you're trying to solve (dictating the manner in which you access your data), the amount of data you need to organize, and the medium you use to store your data (memory, disk, and so on). We have already seen and used one example of a data structure. In the preceding sections, we have made extensive use of arrays. Arrays are the most primitive of data structures. They provide access to your data using an index and are fixed in size (also called static). This is opposed to other, dynamic data structures that can grow and make more space for data whenever it's needed.
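To make the static/dynamic distinction concrete, here is a small sketch (not from the book): a NumPy array stands in for a truly fixed-size array, since plain Python lists are themselves dynamic:

```python
import numpy as np

# Static: a NumPy array's length is fixed at creation time.
static_arr = np.zeros(5, dtype=np.int64)
static_arr[2] = 42           # O(1) access through an index
# static_arr has no append(); to "grow" it you must allocate a new array.

# Dynamic: a Python list grows on demand.
dynamic = []
for i in range(8):
    dynamic.append(i * i)    # amortized O(1); storage reallocates as needed

print(static_arr, dynamic)
```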
{"url":"https://subscription.packtpub.com/book/programming/9781789537178/2/ch02lvl1sec10/getting-started-with-fundamental-data-structures","timestamp":"2024-11-09T00:39:22Z","content_type":"text/html","content_length":"165699","record_id":"<urn:uuid:e750086b-3cb1-4931-a044-e4c459331c3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00827.warc.gz"}
Does water have a higher or lower heat capacity?

Does water have a higher or lower heat capacity? Water has a high specific heat capacity—it absorbs a lot of heat before it begins to get hot.

Why does 2 liters of boiling water not have twice as great a temperature as 1 liter of boiling water? Temperature is not a measure of the total kinetic energy of all the molecules in a substance. Two liters of boiling water have twice as much kinetic energy as one liter. The temperatures are the same because the average kinetic energy of the molecules in each is the same.

Does more water mean more heat energy? Your larger body of water, simply due to the fact that there's more of it, has more heat energy. It'll take longer to cool, even though it's losing heat faster.

Which has more total kinetic energy (internal energy), one liter of boiling water or two liters? Which has a higher temperature? Heat is a flow of thermal energy from hotter to colder because of a difference in temperature (think of a waterfall!). There is twice as much molecular kinetic energy in 2 liters of boiling water as in 1 liter of boiling water, but the temperature of each is the same.

What has a higher specific heat than water? On a mass basis, hydrogen gas has more than three times the specific heat of water under normal laboratory conditions. Diatomic gases under ambient conditions generally have a molar specific heat of about 7 cal/(mol·K), and one mole of hydrogen has only 2 g of mass.

Why does water have a much higher specific heat capacity than most liquids? Water has a higher specific heat capacity because of the strength of its hydrogen bonds. It requires a significant amount of energy to separate these bonds.

Can heat capacity be negative? If a system loses energy, for example by radiating energy into space, its average kinetic energy can actually increase. If temperature is defined by the average kinetic energy, then such a system can be said to have a negative heat capacity.

What happens when the temperature decreases? When we decrease the temperature, less heat energy is supplied to the atoms, and so their average kinetic energy decreases. When a substance enters a phase transition, such as freezing from a liquid to a solid, the temperature is neither decreasing nor increasing; it stays constant.

Which loses heat faster? A will lose heat faster. Conduction and convection scale with the temperature difference, which is almost twice as large for A as for B. The warmer water in A will also evaporate faster, removing more heat as it does. A hot cup of coffee left in a cool room will cool down because the room is colder than the coffee.

What happens to the temperature if more heat is added to a sample of boiling water on a stove at 1 atm? Adding heat to a boiling liquid is an important exception to the general rule that more heat makes a higher temperature. When energy is added to a liquid at the boiling temperature, it converts the liquid into a gas at the same temperature.

What happens to heat energy during the phase changes in number 1? If heat is coming into a substance during a phase change, then this energy is used to break the bonds between the molecules of the substance. The heat is used to break the bonds between the ice molecules as they turn into the liquid phase.

Does higher specific heat mean higher temperature? Specific heat is measured in J/(g·K). A high value means that it takes MORE energy to raise (or lower) a substance's temperature. Adding heat to a "low specific heat" compound will increase its temperature much more quickly than adding heat to a high specific heat compound.

How much energy does it take to heat 100 liters of water? Using E = cp × m × dt: 1 kcal/(kg·°C) × 100 kg × (45 − 10)°C = 3500 kcal. 1 kcal is equal to 1/860 kWh, so 3500 kcal / 860 ≈ 4 kWh (≈ 13.6 kBtu). This is the energy needed to heat 100 liters of water by 35°C.

What is the formula for specific heat of water? E = cp × m × dt, where E = energy (kJ, Btu); cp = specific heat of water (4.2 kJ/(kg·°C), 1 Btu/(lbm·°F) for water); m = mass of water (kg, lbm); dt = temperature difference between the hot water and the surroundings (°C, °F). Example: water is heated to 90°C, and the surrounding temperature (where the energy can be transferred to) is 20°C.

How do you calculate the amount of energy stored in water? Water is often used to store thermal energy. Energy stored—or available—in hot water can be calculated with the same formula: E = cp × dt × m (1).

Why does a larger body of water cool faster? Note that "cooling" and "losing heat" aren't synonymous. Your larger body of water, simply due to the fact that there's more of it, has more heat energy; it loses heat faster, yet takes longer to cool.
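The stored-energy formula and the 100-liter example above can be checked in a few lines (a sketch using the values quoted in the text):

```python
# E = cp * m * dt, with the numbers from the 100-liter example above.
cp = 1.0               # specific heat of water, kcal/(kg*degC)
mass = 100.0           # 100 liters of water is roughly 100 kg
dt = 45.0 - 10.0       # heated from 10 degC to 45 degC

energy_kcal = cp * mass * dt
energy_kwh = energy_kcal / 860.0    # 1 kWh = 860 kcal
print(f"{energy_kcal:.0f} kcal = {energy_kwh:.2f} kWh")  # 3500 kcal = 4.07 kWh
```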
{"url":"https://yourwiseadvices.com/does-water-have-a-higher-or-lower-heat-capacity/","timestamp":"2024-11-05T03:54:40Z","content_type":"text/html","content_length":"65649","record_id":"<urn:uuid:20e0a59e-7e6a-4bbd-a858-063c5fbd56de>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00147.warc.gz"}
NCERT Solutions for Class 11 Maths Chapter 16 Probability

Topics and sub-topics in Class 11 Maths Chapter 16 Probability:

Section 16: Probability
  16.1 Introduction
  16.2 Random Experiments
  16.3 Event
  16.4 Axiomatic Approach to Probability

NCERT Solutions for Class 11 Maths Chapter 16 Exercise 16.1
NCERT Solutions for Class 11 Maths Chapter 16 Exercise 16.2
NCERT Solutions for Class 11 Maths Chapter 16 Exercise 16.3
Class 11 Maths NCERT Solutions – Miscellaneous Questions

Exercise 16.1

Question 1. Write the sample space when a coin is tossed thrice.

Question 2. Write the sample space when a die is rolled twice.

Question 3. Write the sample space when a coin is tossed four times.

Question 4. Find the sample space when a coin and a die are tossed together.

Question 5. Find the sample space when a coin is tossed and a die is rolled only in case a head shows on the coin.

Question 6. Room X has 2 girls and 2 boys, while Room Y has 3 girls and 1 boy. Determine the sample space for the experiment in which a room is selected and then a person.

Question 7. A bag contains dice of several colours, such as red, white, and blue. A die is selected at random and rolled, and its colour and the number on its uppermost face are noted. Determine the sample space.

Question 8. An experiment consists of recording the boy-girl composition of families with 2 children. (i) Determine the sample space if we are interested in knowing whether it is a boy or a girl in the order of their births. (ii) Determine the sample space if we are interested in the number of girls in the family.

Question 9. A box contains 1 red ball and 3 identical white balls. Two balls are drawn at random in succession without replacement. Find the sample space of the experiment.

Question 10. An experiment consists of tossing a coin; if a head comes up, the coin is tossed again, and if a tail occurs on the first toss, a die is rolled once. Find the sample space of the experiment.

Question 11. 3 bulbs are selected at random from a lot of bulbs. Each bulb is tested and classified as defective or non-defective. Determine the sample space of this series of events.

Question 12. An experiment consists of tossing a coin. If a head comes up, a die is thrown; if the die shows an even number, it is thrown again. Determine the sample space of the experiment.

Question 13. A box contains 4 slips of paper marked 1, 2, 3, and 4 separately. They are mixed thoroughly and drawn one at a time, twice, without replacement. Determine the sample space of the experiment.

Question 14. An experiment consists of rolling a die and then tossing a coin once if an even number comes up on the die; if an odd number comes up, the coin is tossed twice. Determine the sample space of the experiment.

Question 15. A coin is tossed. If a tail comes up, a ball is drawn from a box containing 3 black balls and 2 red balls; if a head comes up, a die is thrown. Determine the sample space of the experiment.

Question 16. A die is rolled continuously until a six shows up. Determine the sample space for this experiment.

Exercise 16.2

Question 1. An experiment consists of rolling a die. Event E denotes that the die shows 4, and event F denotes that the die shows an even number. Are the two events E and F mutually exclusive?
Question 2. An experiment consists of a die being thrown, with the following events: (a) P: numbers less than 7; (b) Q: numbers larger than 7; (c) R: numbers which are multiples of 3; (d) S: numbers which are smaller than 4; (e) T: even numbers which are larger than 4; (f) U: numbers not less than 3. Also describe P∪Q, P∩Q, Q∪R, T∩U, S∩T, P – R, S – T.

Question 3. An experiment consists of throwing a pair of dice and noting down the numbers that come up. Describe the following events: (i) the sum of the numbers is larger than 8; (ii) 2 occurs on both dice; (iii) the sum of the numbers is at least 7 and a multiple of 3. Which of these events are mutually exclusive?

Question 4. Three coins are tossed at a time. Let P be the event that there are 3 heads, Q the event that there are 2 heads and 1 tail, R the event that there are 3 tails, and S the event that a head shows on the first coin. Which events are: (a) mutually exclusive? (b) simple? (c) compound?

Question 5. Three coins are tossed. Describe: (a) 2 events which are mutually exclusive; (b) 3 events which are mutually exclusive and exhaustive; (c) 2 events which are not mutually exclusive; (d) 2 events which are mutually exclusive but not exhaustive; (e) 3 events which are mutually exclusive but not exhaustive.

Question 6. Two dice are thrown, with events A, B, and C: A = an even number on the first die; B = an odd number on the first die; C = the sum of the numbers on the dice is less than or equal to 5. Describe: (a) A′ (b) not B (c) A or B (d) A and B (e) A but not C (f) B or C (g) B and C (h) A∩B′∩C′

Question 7. With the events A, B, and C defined as in Question 6, state whether each of the following is true or false: (a) B and A are mutually exclusive events. (b) B and A are mutually exclusive and exhaustive events. (c) A = B′ (d) A and C are mutually exclusive and exhaustive events. (e) B′ and A are mutually exclusive events. (f) A′, B′ and C are mutually exclusive and exhaustive events.

Exercise 16.3

Question 2. An experiment consists of tossing a coin twice. Determine the probability of getting at least one tail.

Question 3. A die is thrown. Determine the probability of getting: (a) a prime number; (b) a number larger than 3; (c) a number less than or equal to 1; (d) a number more than 6; (e) a number less than 6.

Question 4. A card is drawn from a pack of 52 cards. (a) Find the number of elements in the sample space. (b) Find the probability of drawing the ace of spades. (c) Find the probability of drawing (i) an ace (ii) a black card.

Question 5. An unbiased coin is marked 1 on one face and 6 on the other, while a die has the markings 1, 2, 3, 4, 5, 6 on its six faces. Both are tossed together. Determine the probability that the sum of the numbers turning up is (i) 3 (ii) 12.

Question 6. A city council consists of 4 men and 6 women. Find the probability of selecting a woman among the council members if the selection is made at random.

Question 7. An unbiased coin is tossed 4 times. Each time a head appears the person is awarded Re 1, and each time a tail turns up the person loses Rs 1.50. Determine the sample space of the different amounts of money possible after 4 tosses, and the probability of having each of these amounts.
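Questions like Q7 are easy to check by enumerating the sample space. A sketch (not part of the NCERT solutions) using the Re 1 / Rs 1.50 payoffs above:

```python
from itertools import product
from collections import Counter

# Enumerate all 16 outcomes of 4 tosses and tally the resulting amounts:
# +1.00 per head, -1.50 per tail.
amounts = Counter()
for outcome in product("HT", repeat=4):
    heads = outcome.count("H")
    amounts[heads * 1.00 - (4 - heads) * 1.50] += 1

for amount, ways in sorted(amounts.items()):
    print(f"Rs {amount:+.2f}: probability {ways}/16")
```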
Question 8. Three coins are tossed once. Find the probability of the following events: (a) three heads; (b) two heads; (c) at least two heads; (d) at most two heads; (e) no heads; (f) three tails; (g) exactly two tails; (h) no tails; (i) at most two tails.

Question 10. A letter is chosen at random from the word 'ASSASSINATION'. Determine the probability of getting (a) a vowel (b) a consonant.

Question 11. A person chooses 6 different natural numbers at random from the numbers 1 to 20. If these six numbers match the six numbers fixed by the lottery committee, the person wins the prize. Determine the probability of winning the prize.

Question 12. Check whether the following probabilities are consistently defined: (i) P(Q) = 0.5, P(R) = 0.7, P(Q∩R) = 0.6; (ii) P(Q) = 0.5, P(R) = 0.4, P(Q∪R) = 0.8.

Question 16. Events Q and R are such that P(not Q or not R) = 0.25. State whether Q and R are mutually exclusive.

Question 17. Events Q and R are such that P(Q) = 0.42, P(R) = 0.48, and P(Q and R) = 0.16. Find: (a) P(not Q) (b) P(not R) (c) P(Q or R).

Question 18. In Class XII of a school, 40% of the students study Mathematics and 30% study Biology; 10% of the students of the class study both Mathematics and Biology. If a student is selected at random from the class, determine the probability that the student studies Mathematics or Biology.

Question 19. In an entrance test based on two examinations, the probability of a randomly chosen candidate passing the first examination is 0.8, and the probability of passing the second examination is 0.7. The probability of passing at least one of the examinations is 0.95. Find the probability of passing both examinations.

Question 20. The probability that a student will pass the final examination in both English and Hindi is 0.5, and the probability of passing neither subject is 0.1. If the probability of passing the English examination is 0.75, find the probability of passing the Hindi examination.

Question 21. In a class of 60 students, 30 opted for NCC, 32 opted for NSS, and 24 opted for both NCC and NSS. If one student is selected at random, find the probability that: (a) the student opted for NCC or NSS; (b) the student opted for neither NCC nor NSS; (c) the student opted for NSS but not NCC.
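The NCC/NSS question (Q21) is an instance of inclusion-exclusion, which a few lines can verify (a sketch with the numbers from the question):

```python
# P(A or B) = P(A) + P(B) - P(A and B), with counts from Q21.
n, ncc, nss, both = 60, 30, 32, 24

p_ncc_or_nss = (ncc + nss - both) / n   # (a) 38/60
p_neither = 1 - p_ncc_or_nss            # (b) 22/60
p_nss_not_ncc = (nss - both) / n        # (c) 8/60

print(p_ncc_or_nss, p_neither, p_nss_not_ncc)
```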
{"url":"https://ncert-books.in/ncert-solutions-for-class-11-maths-chapter-16-probability/","timestamp":"2024-11-01T19:00:17Z","content_type":"text/html","content_length":"236783","record_id":"<urn:uuid:261f8c74-4291-44db-b366-338f499c0838>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00285.warc.gz"}
And for my next trick, I'll make this effect disappear!

In this week's New Yorker, Jonah Lehrer shows once again just how hard it is to do good science journalism if you are not yourself a scientist. His target is the strange phenomenon that many high-profile papers are failing to replicate. This has been very much a cause celebre lately, and Lehrer follows a series of scientific papers on the topic as well as an excellent Atlantic article by David Freedman. At this point, many of the basic facts are well-known: anecdotally, many scientists report repeated failures to replicate published findings. The higher-profile the paper, the less likely it is to replicate, with around 50% of the highest-impact papers in medicine failing to replicate. As Lehrer points out, this isn't just scientists failing to replicate each other's work, but scientists failing to replicate their own work: a thread running through the article is the story of Jonathan Schooler, a professor at UC-Santa Barbara who has been unable to replicate his own seminal graduate student work on memory.

Lehrer's focus in this article is shrinking effects. No, not this one. Some experimental effects seem to shrink steadily over time:

In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze "temporal trends" across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.

As described, that's weird. But there is a good explanation for such effects, and Lehrer brings it up. Some results are spurious. It's just one of those things. Unfortunately, spurious results are also likely to be exciting. Let's say I run a study looking for a relationship between fruit-eating habits and IQ. I look at the effects of 20 different fruits. By chance, one of them will likely show a significant -- but spurious -- effect. So let's say I find that eating an apple every day leads to a 5-point increase in IQ. That's really exciting because it's surprising -- and the fact that it's not true is integral to what makes it surprising. So I get it published in a top journal (top journals prefer surprising results).

Now, other people try replicating my finding. Many, many people. Most will fail to replicate, but some -- again by chance -- will replicate. It is extremely difficult to get a failure to replicate published, so only the replications get published. After time, the "genius apple hypothesis" becomes part of established dogma. Remember that anything that challenges established dogma is exciting and surprising and thus easier to publish. So now failures to replicate are surprising and exciting and get published. When you look at effect sizes in published papers over time, you will see a gradual but steady decrease in the "effect" of apples -- from 5 points to 4 points down to 0.
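That selection story is easy to simulate. A minimal sketch (hypothetical sample sizes and IQ numbers, not from the post): give each of 20 ineffective fruits its own small study and see how many come out "significant":

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# 20 fruits, none of which actually affects IQ; 30 subjects per group.
false_positives = 0
for fruit in range(20):
    treated = rng.normal(100, 15, 30)   # IQ scores with no real effect
    control = rng.normal(100, 15, 30)
    if ttest_ind(treated, control).pvalue < 0.05:
        false_positives += 1

print(false_positives)   # on average about 1 in 20 "works" by chance alone
```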
Where I get off the Bus

So far so good, except here's Lehrer again:

While the publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.

Huh? Lehrer seems to be suggesting that it is publication bias that makes a result spurious. But that can't be right. Rather, there are just lots of spurious results out there. It happens that journals preferentially publish spurious results, leading to biases in the published record, and eventually the decline effect.

Some years ago, I had a bad habit of getting excited about my labmate's results and trying to follow them up. Just like a journal, I was interested in the most exciting results. Not surprisingly, most of these failed to replicate. The result was that none of them got published. Again, this was just a factor of some results being spurious -- disproportionately, the best ones. (Surprisingly, this labmate is still a friend of mine; personally, I'd hate me.)

The Magic of Point O Five

Some readers at this point might be wondering: wait -- people do statistics on their data and only accept results that are extremely unlikely to have happened by chance. The cut-off is usually 0.05 -- a 5% chance of having a false positive. And many studies that turn out later to have been wrong pass even stricter statistical tests. Notes Lehrer:

And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid--that is, they contain enough data that any regression to the mean shouldn't be dramatic. "These are the results that pass all the tests," he says. "The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!"

So there's got to be something making these results look more unlikely than they really are. Lehrer suspects unconscious bias:

Theodore Sterling, in 1959 ... noticed that ninety-seven percent of all published psychological studies with statistically significant data found the effect they were looking for ... Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments

and again:

The problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.

I expect that unconscious bias is a serious problem (I illustrate some reasons below), but this is pretty unsatisfactory, as he doesn't explain how unconscious bias would affect results, and the Schooler effect is a complete red herring. I wasn't around in 1959, so I can't speak to that time, but I suspect that the numbers are similar today ... but in fact Sterling was measuring the wrong thing. Nobody cares what our hypotheses were. They don't care what order the experiments were actually run in. They care about the truth, and they have very limited time to read papers (most papers are never read, only skimmed). Good scientific writing is clear and concise. The mantra is: Tell them what you're going to tell them. Tell them. And then tell them what you told them. No fishing excursions, no detours. When we write scientific papers, we're writing science, not history. And this means we usually claim to have expected to find whatever it is that we found. It just makes for a more readable paper. So when a scientist reads the line, "We predicted X," we know that really means "We found X" -- what the author actually predicted is beside the point.
Messing with that Point O Five

So where do all the false positives come from, if they should be less than 5% of conducted studies? There seem to be a number of issues.

First, it should be pointed out that the purpose of statistical tests (and the magic .05 threshold for significance) is to make a prediction as to how likely it is that a particular result will replicate. A p-value of .05 means roughly that there is a 95% chance that the basic result will replicate (sort of; this is not technically true but is a good approximation for present purposes). But statistics are estimates, not facts. They are based on a large number of idealizations. For instance, many require that measurement error is normally distributed -- a normal distribution meaning that the bulk of measurements are very close to the true measurement, and that a measurement is as likely to be larger than the true number as it is to be smaller. In fact, most data is heavily skewed, with measurements more likely to be too large than too small (or vice versa). For instance, give someone an IQ test. IQ tests have some measurement error -- people will score higher or lower than their "true" score due to random factors such as guessing answers correctly (or incorrectly), being sleepy (or not), etc. But it's a lot harder to get an IQ score higher than your true score than lower, because getting a higher score requires a lot of good luck (unlikely) whereas there are all sorts of ways to get a low score (brain freeze, etc.).

Most statistical tests make a number of assumptions (like normally distributed error) that are not true of actual data. That leads to incorrect estimates of how likely a particular result is to replicate. The truth is most scientists -- at the very least, most psychologists -- aren't experts in statistics, and so statistical tests are misapplied all the time. I don't actually think that issues like the ones I just discussed lead to most of the difficulties (though I admit I have no data one way or another). I bring these issues up mainly to point out that statistical tests are tools that are either used or misused according to the skill of the experimenter. And there are lots of nasty ways to misuse statistical tests. I discuss a few of them below.

Run enough experiments and...

Let's go back to my genius fruit experiment. I ask a group of people to eat an apple and then give them an IQ test. I compare their IQ scores with scores from a control group that didn't eat an apple. Now let's say in fact eating apples doesn't affect IQ scores. Assuming I do my statistics correctly and all the assumptions of the statistical tests are met, I should have only a 5% chance of finding a "significant" effect of apple-eating. Now let's say I'm disappointed in my result. So I try the same experiment with kiwis. Again, I have only a 5% chance of getting a significant result for kiwis. So that's not very likely to happen. Next I try oranges....

Hopefully you see where this is going. If I try only one fruit, I have a 5% chance of getting a significant result. If I try 2 fruits, I have a 1 - .95*.95 = 9.8% chance of getting a significant result for at least one of the fruits. If I try 4 fruits, now I'm up to a 1 - .95*.95*.95*.95 = 18.5% chance that I'll "discover" that one of these fruits significantly affects IQ. By the time I've tried 14 fruits, I've got a better than 50% chance of an amazing discovery. But my p-value for that one experiment -- that is, my estimate that these results won't replicate -- is less than 5%, suggesting there is only a 5% chance the results were due to chance.
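The family-wise arithmetic in that paragraph is worth seeing in one place (a sketch of exactly the numbers quoted above):

```python
# Chance of at least one spurious "discovery" among n independent tests,
# each with a 5% false-positive rate.
for n_fruits in (1, 2, 4, 14):
    p_any = 1 - 0.95 ** n_fruits
    print(f"{n_fruits:2d} fruits -> {p_any:.1%} chance of a significant result")
# 1 -> 5.0%, 2 -> 9.8%, 4 -> 18.5%, 14 -> 51.2%
```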
While there are ways of statistically correcting for this increased likelihood of false positives, my experience suggests that it's relatively rare for anyone to do so. And it's not always possible. Consider the fact that there may be 14 different labs all testing the genius fruit hypothesis (it's suddenly very fashionable for some reason). There's a better than 50% chance that one of these labs will get a significant result, even though from the perspective of an individual lab, they only ran one experiment.

Data peeking

Many researchers peek at their data. There are good reasons for doing this. One is curiosity (we do experiments because we want to know the outcome). Another is to make sure all your equipment is working (don't want to waste time collecting useless data). Another reason -- and this is the problematic one -- is to see if you can stop collecting data. Time is finite. Nobody wants to spend longer on an experiment than necessary. Let's say you have a study where you expect to need -- based on intuition and past experience -- around 20 subjects. You might check your data after you've run 12, just in case that's enough. What usually happens is that if the results are significant, you stop running the study and move on. If they aren't, you run more subjects. Now maybe after you've got 20 subjects, you check your data. If it's significant, you stop the study; if not, you run some more. And you keep on doing this until either you get a significant result or you give up.

It's a little harder to do back-of-the-envelope calculations on the importance of this effect, but it should be clear that this habit has the unfortunate result of increasing the relative likelihood of a false positive, since false positives lead you to declare victory and end the experiment, whereas false negatives are likely to be corrected (since you keep on collecting more subjects until the false negative is overcome). I read a nice paper on this issue that actually crunched the numbers a while back (for some reason I can't find it at the moment), and I remember the result was a pretty significant increase in the expected number of false positives.

Data massaging

The issues I've discussed so far are real problems but are pretty common and not generally regarded as ethical violations. Data massaging is at the borderline. Any dataset can be analyzed in a number of ways. Once again, if people get the result they were expecting with the first analysis they run, they're generally going to declare victory and start writing the paper. If you don't get the results you expect, you try different analysis methods. There are different statistical tests that can be used. There are different covariates that could be factored out. You can throw out "bad" subjects or items. This is going to significantly increase the rate of false positives.

It should be pointed out that interrogating your statistical model is a good thing. Ideally, researchers check to see if there are bad subjects or items, check whether there are covariates to be controlled for, check whether different analysis techniques give different results. But doing this affects the interpretation of your p-value (the estimate of how likely it is that your results will replicate), and most people don't know how to appropriately control for that. And some are frankly more concerned with getting the results they want than doing the statistics properly (this is where the "borderline" comes in).
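The data-peeking habit described above can also be simulated. A sketch (hypothetical checkpoints; the simulated data contain no true effect):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Optional stopping: test at n = 12, 20, 28, 36 subjects per group and stop
# at the first p < .05, even though the null is true throughout.
hits, runs = 0, 2000
for _ in range(runs):
    a, b = rng.normal(0, 1, 36), rng.normal(0, 1, 36)
    for n in (12, 20, 28, 36):
        if ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            hits += 1
            break

print(hits / runs)   # noticeably above the nominal 5% false-positive rate
```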
Better estimates

The problem, at least from where I stand, is one of statistics. We want our statistical tests to tell us how likely it is that our results will replicate. We have statistical tests which, if used properly, will give us just such an estimate. However, there are lots and lots of ways to use them incorrectly. So what should we do?

One possibility is to train people to use statistics better. And there are occasional revisions in standard practice that do result in better use of statistics. Another possibility is to lower the p-value that is considered significant. The choice of p=0.05 as a cutoff was, as Lehrer notes, arbitrary. Picking a smaller number would decrease the number of false positives. Unfortunately, it also decreases the number of real positives by a lot. People who don't math can skip this next section.

Let's assume we're running studies with a single dependent variable and one manipulation, and that we're going to test for significance with a t-test. Let's say the manipulation really should work -- that is, it really does have an effect on our dependent measure. Let's say that the effect size is large-ish (Cohen's d of .8, which is large by psychology standards) and that we run 50 subjects. The chance of actually finding a significant effect at the p=.05 level is 79%. For people who haven't done power analyses before, this might seem low, but actually an 80% chance of finding an effect is pretty good. Dropping our significance threshold to p=.01 drops the chance of finding the effect to 56%. To put this in perspective, if we ran 20 such studies, we'd find 16 significant effects at the p=.05 level but only 11 at the p=.01 level. (If you want to play around with these numbers yourself, try a free statistical power calculator.)

Now consider what happens if we're running studies where the manipulation shouldn't have an effect. If we run 20 such studies, 1 of them will nonetheless give us a false positive at the p=.05 level, whereas we probably won't get any at the p=.01 level. So we've eliminated one false positive, but at the cost of nearly 1/3 of our true positives.

No better prediction of replication than replication

Perhaps the easiest method is to just replicate studies before publishing them. The chances of getting the same spurious result twice in a row are vanishingly small. Many of the issues I outlined above -- other than data massaging -- won't increase your replication rate. Test 14 different fruits to see if any of them increase IQ scores, and you have over a 50% chance that one of them will spuriously do so. Test that same fruit again, and you've only got a 5% chance of repeating the effect. So replication decreases your false positive rate 20-fold. Similarly, data massaging may get you that coveted p<.05, but the chances of the same massages producing the same result again are very, very low. True positives aren't nearly so affected. Again, a typical power level is 1 - β = 0.80 -- 80% of the time that an effect is really there, you'll be able to find it. So when you try to replicate a positive, you'll succeed 80% of the time. So replication decreases your true positives by only 20%.

So let's say the literature has a 30% false positive rate (which, based on current estimates, seems quite reasonable). Attempting to replicate every positive result prior to publication -- and note that it's extremely rare to publish a null result (no effect), so almost all published results are positive results -- should decrease the false positives 20-fold and the true positives by 20%, leaving us with a 2.6% false positive rate. That's a huge improvement.
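The numbers in the last two sections can be checked directly (a sketch; the alpha levels, Cohen's d, group sizes, and 30% false-positive rate are the values assumed in the text):

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sample t-test with 25 subjects per group and d = 0.8
# (i.e., "50 subjects" total, as above).
analysis = TTestIndPower()
for alpha in (0.05, 0.01):
    pw = analysis.power(effect_size=0.8, nobs1=25, alpha=alpha)
    print(f"alpha = {alpha}: power = {pw:.2f}")   # ~0.79 and ~0.56

# Replicate-before-publishing arithmetic: of a 30%-false-positive literature,
# keep only the results that replicate.
fp_rate, power, alpha = 0.30, 0.80, 0.05
fp_surviving = fp_rate * alpha          # spurious results that replicate
tp_surviving = (1 - fp_rate) * power    # real results that replicate
print(fp_surviving / (fp_surviving + tp_surviving))   # ~0.026
```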
So why not replicate more?

So why don't people replicate before publishing? If 30% of your own publishable results are false positives, and you eliminate them, you've just lost 30% of your potential publications. You've also lost 20% of your true positives as well, btw, which means overall you've decreased your productivity by 44%. And that's without counting the time it takes to do the replication. Yes, it's nice that you've eliminated your false positives, but you also may have eliminated your own career!

When scientists are ranked, they're largely ranked on (a) number of publications, (b) number of times a publication is cited, and (c) quality of journal that the publications are in. Notice that you can improve your score on all of these metrics by publishing more false positives. Taking the time to replicate decreases your number of publications and eliminates many of the most exciting and surprising results (decreasing both citations and quality of journal). Perversely, even if someone publishes a failure to replicate your false positive, that's a citation and another feather in your cap.

I'm not saying that people are cynically increasing their numbers of bogus results. Most of us got into science because we actually want to know the answers to stuff. We care about science. But there is limited time in the day, and all the methods of eliminating false positives take time. And we're always under incredible pressure to pick up the pace of research, not slow it down. I'm not sure how to solve this problem, but any solution I can think of involves some way of tracking not just how often a researcher publishes or how many citations those publications get, but how often those publications are replicated. Without having a way of tracking which publications replicate and which don't, there is no way to reward meticulous researchers or hold sloppy researchers to account.
But there is also a sense in which we are fitting a model, and you're right: the test of a model is not its ability to model the data it was fitted to but it's ability to model a new set of data. This comment has been removed by the author. Excellent post! I'm just a lowly undergraduate, but I feel as though the three biggest issues in the scientific method is a) biases, b) the incentive to publish positive results, and c) the incentive to make money. The combination of all three I think accurately describe the problems we are facing when referencing the failure to replicate and the major flaws of the scientific process. The question remains: how do we remove these incentives from the scientific process? In physics, the rule is "An experiment doesn't prove a hypothesis unless it was formulated before the experiment." This means that experimental results like the one with apples must be Of course, physicists have it easier, as in many cases they know what they expect. However, when someone says that they expected to get what they actually got, they are misleading the reader into thinking that the hypothesis is proved according to this rule. OTOH, if this is standard practice, it isn't dishonest in the same way that "Pleased to meet you" can't really be a lie. @Tal -- I was being succinct when saying "95% chance of replicating". I'll stick with my t-test example. Significance at p=.05 level means there's less than a 5% probability the two samples were drawn from the same population. So conversely there's a 95% chance the samples are from different populations. If the samples are from different populations, you should be able to replicate the effect, given sufficient power. Whether you will actually replicate the effect depends a great deal on the statistical power of the replication. As far as whether we normally have .8 power ... I don't know what it ends up being in practice, but let's point out that .8 power is called "adequate" but failing to find real effects 20% of the time is actually kinda lousy. So I *hope* people have at least .8 power. I expect the fact that it is hard to publish null results also pushes people in the direction of collecting enough data to have sufficient power. What makes you suspect otherwise? I guess I don't see why saying there's approximately a 95% probability of replication is more succinct than saying there's approximately a 50% probability of replication... and the latter has the benefit of being true (or at least, much closer to the truth). I think in your example you might be confusing the probability of the data given the hypothesis (P(D|H)) with the probability of the hypothesis given the data (P(H|D)). Observing that p < .05 means that P(D|H0) is < 0.05 (where H0 is the null). When you talk about the probability of the samples being from the same or different populations, you're talking about P(H|D)--the probability that the null is true (or that the alternative hypothesis H1 is true, which is the complement) given the data. You can't actually calculate that just from the observed p value, because you don't know the prior probabilities P(H0) and P(D). I think what you're thinking of as the complement, which really is 0.95, is P(~D|H0)--the probability that you wouldn't observe the data if the null were true. But that's generally not an interesting quantity. On the power thing, there have been many reviews in many different domains, and they tend to converge pretty strongly on the conclusion that most studies are underpowered. 
The classic paper is Cohen's (1962) review of social and abnormal psychology articles, and then Sedlmeier and Gigerenzer followed up about 20 years later and showed that power hadn't increased at all. More recent reviews all basically show the same thing--power hasn't budged (though there are some domains where people have been much better about this--e.g., population genetics, where people now routinely use samples in the tens of thousands). I report some power analyses for fMRI studies in this paper, and the results are not encouraging. Actually, I think if anything, most of the power reviews may even be too optimistic, because they tend to assume medium or large effect sizes, when in fact meta-analyses suggest that most effects are small. So while it certainly varies by domain, as a general rule of thumb, I think it's safe to assume that the average study is underpowered. It would be nice if the prevalence of null results pushed people to routinely collect much larger samples, as you suggest; but as I discuss in the same paper, and as people like Ioannidis have pointed out, that's counteracted by the fact that smaller samples give biased effect size estimates, leading people to think that their effects are bigger than they are (and hence, to think they need smaller samples than they do). The other problem is that people rarely actually attribute null results to low power; it's much more common to see invisible moderating variables invoked--e.g., "the fact that we didn't get the effect when the stimuli were faces rather than words may mean there are dissociable systems involved in processing words and faces." Power is not an intuitive concept, and it's hard to remember that a failure to obtain meaningful results often says more about what you didn't do (collect enough subjects) than what you did do (vary some experimental variable). There were several typos/misspellings/words missing that would have been helpful. Try using spell-check next time. @Tal: You are right to point out that what we are estimating is the probability of the data given the hypothesis, whereas what we want to estimate is the probability of the hypothesis. My understanding was that there is no way to estimate the latter, and so we use the former as a proxy. I'm still not sure I know what you mean by "50% chance of replication". If you mean "50% chance the null hypothesis is in fact false" -- that is, that the effect is real and should be found again -- then that seems low for the following reason: I think it's a reasonable assumption that any given tested hypothesis has around a 50% chance of being true (and thus the null hypothesis has a 50% chance of being false). If you mean that p=.05 is equivalent to a 50% chance that the null hypothesis is false, then what you're saying is that being significant at that level carries no information whatsoever. I believe there's a real problem, but that strikes me as overly pessimistic. I should say that whenever somebody has argued with my hypothesis that 50% of tested null hypotheses should be false, the argument is that the number should be much higher, since researchers single out null hypotheses particularly likely to be false (that is, positive hypotheses particularly likely to be true). On this account, then, a p-value of .05 carries less information than the fact that the researcher ran the experiment to begin with. Possibly I've completely misunderstood what you're saying. Is that about right? I'm still not sure I know what you mean by "50% chance of replication". 
If you mean "50% chance the null hypothesis is in fact false" -- that is, that the effect is real and should be found again -- then that seems low for the following reason: I think it's a reasonable assumption that any given tested hypothesis has around a 50% chance of being true (and thus the null hypothesis has a 50% chance of being false). By "chance of replication" I just mean the probability of obtaining a statistically significant effect in the same direction given an identical study (i.e., same design, number of subjects, etc.). In a world in which all hypothesis tests are unbiased, obtaining a result significant at p < .05 implies a roughly 50% chance of obtaining a second statistically significant result if you were to redo the study without changing anything. In the real world, hypothesis tests aren't unbiased, of course; that's what this entire debate is largely about. There's a tendency to selectively report and pursue effects that attained statistical significance, so the reality is that most of the time, the true probability of replication (in the same sense as above) is going to be lower than 50%. And again, there are domains where we can say pretty confidently that it's going to be much lower. If you mean that p=.05 is equivalent to a 50% chance that the null hypothesis is false, then what you're saying is that being significant at that level carries no information whatsoever. I believe there's a real problem, but that strikes me as overly pessimistic. That's not the implication I'd draw... I'm not sure it's meaningful to talk about the probability of the null being true or false. Strictly speaking, the null is (for practical intents) always false. What effect could you possibly name where you really believe that if you sampled the entire population, the magnitude of the effect would literally be exactly zero? In a dense causal system, that's inconceivable. Everything has an influence on everything, however small. (continued from above comment) If you think about it in terms of effect sizes, then this problem goes away. You have some prior belief about how big the effect is, then you look a the data, and you update your belief accordingly. The standard practice in psychology is to effectively assume a completely uniform prior distribution (and this is one of the things Bayesians rail against, because if you really had no reason to think any value was likelier than zero, why would you ever do the study in the first place?). So, if we're being strict about it, then achieving a result that's significant at p < .05 is giving you information, because you started out with the null (which is effectively a prior in this case) that there wasn't any effect. If you intuitively feel like that's wrong, then I think what you're tacitly saying is that it's silly to use uniform priors, and we should build at least some sense of the expected effect size into the test--which I'd agree with. Notice that once you do that, it does become entirely possible that you would obtain a result statistically significant at p < .05 yet walk away feeling less confident about your prediction. For instance, if you think the effect you're testing is one standard deviation in magnitude, and you're conducting a one-sample t-test on 50 subjects, you could get an effect of a quarter of a standard deviation and it would still be significant at p < .05. But you wouldn't want to walk away concluding that your hypothesis was borne out--you would in fact conclude the evidence was weaker than you thought. 
I'm not sure it's meaningful to talk about the probability of the null being true or false. Strictly speaking, the null is (for practical intents) always false. Do you think so? I think it's pretty simple to design experiments were we expect the null hypothesis to be true. Tests of ESP, for instance. Since there are an infinite number of ESP manipulations one could try, there is necessarily an infinite number of experiments where we expect the null hypothesis to be false. (If you believe in ESP, I'm sure you can work out your own class of examples.) What's left is to decide in practice, for a given field, how often the null hypothesis is likely to be true. I've already put my money on 50% for psych -- at least, the areas of psych I'm familiar with. I don't know what area of psych you're referring to in your post, but in my subfield (language) and most subfields I read, we mostly do not care about effect size. We're studying underlying structure, so any effect of any size is meaningful -- and, in fact, effect size has no theoretical meaning in most cases. So if it were actually the case that we knew the null hypothesis had to be false, we'd never bother to run the experiment -- except in the case where we need to know the direction of an effect, though that only comes up every so often. In such cases, I don't think knowing you have a 50% chance of being able to repeat an effect carries much value, if any. Would progress even be possible, if this were true? I think not. Yet there has been progress, which is why I'm skeptical of your claim. I'm not saying you've done the math wrong, but perhaps some of the assumptions are incorrect, at least for the branches of cognitive and social psychology/neuroscience that I follow. But for fields where effect size matters, these are interesting ideas. Do you think so? I think it's pretty simple to design experiments were we expect the null hypothesis to be true. Tests of ESP, for instance. ESP is an excellent example, because that's about as clean a case as you could make, and even there, I don't think it's plausible to expect that you'd ever get an effect of exactly zero if you sampled the entire population. Remember, the null hypothesis isn't something that lives in construct land (where you can say things like "ESP doesn't exist, so it can't be associated with anything"); it has to be operationalized, otherwise we can't test it. I'd argue that for any operationalization of ESP, there are going to be many potential confounds and loose ends that are necessarily going to make your effect non-zero. Take precognition experiments. A very basic requirement is that you have to have a truly random number generator. Well, there isn't really any such thing. Not in the sense you'd require in order to ensure an effect of literally zero. Remember, any systematic blip, no matter how small, renders the null false. If the body heat generated by larger subjects systematically throws off the computation by one bit in eight trillion, that's still a non-zero association. And this is for a contrived example; for an average hypothesis that most psychologists would actually be willing to entertain, you could easily reel of dozens if not hundreds of factors that would ensure you have a non-zero association... What's left is to decide in practice, for a given field, how often the null hypothesis is likely to be true. I've already put my money on 50% for psych -- at least, the areas of psych I'm familiar with. 
I guess I don't understand what this means, or where that number comes from. The probability of rejecting the null hypothesis depends in large part on your sample size. So that 50% number can't be referring to the probability of rejecting the null hypothesis in actual experiments, because if that were the case, so long as the effect wasn't exactly zero, you'd be able to turn 50% into almost any other probability just by collecting more or fewer subjects. Which would render the number meaningless. The only way I can make sense of this is if you really believe that for 50% of all hypotheses that people suggest, the effect in the entire population is literally zero. Not small; not close to zero; not a correlation of 0.0001 (which is still statistically significant in a sample of 7 billion people!); but exactly zero. If that's really what you believe, then we're at an impasse, but frankly the idea that any association between two meaningful psychological variables in a dense causal system like our world would ever be exactly nil seems inconceivable to me. continued from above... I don't know what area of psych you're referring to in your post, but in my subfield (language) and most subfields I read, we mostly do not care about effect size. We're studying underlying structure, so any effect of any size is meaningful -- and, in fact, effect size has no theoretical meaning in most cases. I don't see how it's possible not to care about effect size. I'll grant that most psychologists may not stop to think about what constitutes a meaningful effect size very often, but that doesn't mean they're not making implicit assumptions about effect size every time they run a test. To put it in perspective, consider that if you routinely conducted your studies with 1,000,000 subjects each, all of your tests would produce statistically significant results (the critical effect size for p < .05 with that sample size is around 2/1000th of a standard deviation--good luck getting effects smaller than that!). So how is it possible not to care about effect size and only about rejecting the null, if all it takes in order to reject the null is collecting a large enough sample? For that matter, I assume you tend to treat a finding significant at p < .05 differently from one significant at p < .00001--and the only difference between the two is... effect size. So if it were actually the case that we knew the null hypothesis had to be false, we'd never bother to run the experiment -- except in the case where we need to know the direction of an effect, though that only comes up every so often. That's exactly why null hypothesis testing is kind of absurd. That's not to say it isn't a useful fiction, but it's still a fiction--there are few if any situations in which a null of zero is at all meaningful. The only reason the framework actually works is because we tend to run samples small enough that we don't run into the problem of having everything be statistically significant, so we rarely have to think about how absurd it is. In other words, what's happening is that p values end up being proxies for effect size in virtue of the kind of sample sizes we use. In such cases, I don't think knowing you have a 50% chance of being able to repeat an effect carries much value, if any. Would progress even be possible, if this were true? I think not. Yet there has been progress, which is why I'm skeptical of your claim. 
I'm not saying you've done the math wrong, but perhaps some of the assumptions are incorrect, at least for the branches of cognitive and social psychology/neuroscience that I follow. I'm not sure what you're objecting to here... If you take the significance testing framework at face value, it's simply a fact that a finding that's significant at p = .05 will, on average, have a 50% chance of replicating if you repeat the study. I'm not expressing my opinion or building in any extra assumptions beyond what you already assume when you run a t-test; that's just the reality. If you doubt it, just take the critical effect size that corresponds to p = .05 for a given sample size and run a power calculation for the same effect size, alpha, and sample size. You will get 50%. If you don't like it, your problem is with the hypothesis testing framework. (and again, sorry, last bit...) I'm also still not sure why you think a 50% chance of replication doesn't allow any progress. When you test against a null of zero, your expectation isn't that the null hypothesis is false 50% of the time (if it is, you're testing against the wrong null!), it's that there is no effect whatsoever. Learning the direction of the effect, and that it can be reliably distinguished from zero 50% of the time, is hugely informative! Now, it's true that the same result might be completely uninformative if your prior exactly matched the posterior, and that could certainly happen. But that's not an argument for hypothesis testing, it's precisely the kind of argument Bayesians use for why you should use a Bayesian approach. But for fields where effect size matters, these are interesting ideas. There aren't any fields of psychology where effect size doesn't matter, for the reasons articulated above. The fact that you're rarely explicit about effect sizes in your studies doesn't mean you aren't making implicit assumptions about effect size all the time. You're just using p values as a proxy, because your sample sizes aren't big enough that everything comes out statistically significant. If you doubt this to be true, run a few experiments with 10,000 subjects each and then tell me how you're going to decide which effects you care about and which you don't. Anyway, this comment thread is getting kind of long (sorry, I know I'm not being very succinct), so the last word is yours. @tal. Don't worry about the long thread. I know what your negative argument is, but what is your positive argument. Even if we accept that in practice there are always confounds and with sufficient power, we'll always reject the null hypothesis, my claim that approximately half the time we care about whether our *intended* manipulation has an effect (intended manipulation, not anything we manipulated on accident) still stands. If you deny that's the case, then the place to write about that is in the post I just put up. But if you agree that we often do care, how do we go about testing the existence or absence of BTW I worked through the calculations you suggested, and you seem to be overly optimistic. A 2-sample t-test with 15 subjects per condition is significant at the p=.05 level with t=1.7. That's an effect size (in Cohen's terms) of .44. An effect size of .44 with the same alpha has observed power of .21. So even though this effect is real, we expect to see it only 1/5 of the time we test for it. Is that right? You may have already seen this, but if not it is definitely worth a look. 
Dance of the p-values is a short YouTube video explaining why p < .05 does not mean there is a 95% chance of replication, amongst other things. http://www.youtube.com/watch?v=ez4DgdurRPg
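For readers who want to check these numbers themselves, here is a minimal sketch of the calculation tal describes: take the critical effect size at p = .05 for a given sample size, then compute power for that same effect size, alpha, and n. It assumes SciPy and statsmodels (neither is mentioned in the thread), using the commenter's setup of a two-sample t-test with 15 subjects per group.

```python
# Minimal sketch of the replication-power argument from the thread above.
# Assumes scipy and statsmodels; two-sample, two-tailed t-test, n = 15/group.
from scipy import stats
from statsmodels.stats.power import TTestIndPower

n = 15                             # subjects per group
df = 2 * n - 2                     # degrees of freedom
t_crit = stats.t.ppf(0.975, df)    # critical t for p = .05, two-tailed
d_crit = t_crit * (2 / n) ** 0.5   # Cohen's d that lands exactly at p = .05

power = TTestIndPower().power(effect_size=d_crit, nobs1=n, alpha=0.05)
print(f"critical d = {d_crit:.2f}, power to replicate = {power:.2f}")
```

Under these assumptions the printed power comes out close to 0.5, which is tal's point. The commenter's d = .44 appears to come from the one-sample conversion d = t/sqrt(n) at t = 1.7; for a two-sample test the conversion is d = t*sqrt(2/n), which gives d of roughly .62 and correspondingly higher power, which may explain part of the discrepancy.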
{"url":"http://gameswithwords.fieldofscience.com/2010/12/and-for-my-next-trick-ill-make-this.html","timestamp":"2024-11-05T03:07:04Z","content_type":"application/xhtml+xml","content_length":"226194","record_id":"<urn:uuid:0386cf8a-1ddd-424e-b548-3ec1e65329bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00287.warc.gz"}
Catalog Entries

Prerequisite: MTH 020 completed with a grade of “C-” or better within the past two years or placement by the College's Math Placement Process.

This is the first term of a two-term sequence in introductory algebra. Topics include a selective review of arithmetic, tables and graphs, signed numbers, problem solving, linear equations, linear inequalities, ratio and proportion, and unit analysis. MTH 060 prepares students for Elementary Algebra, MTH 065. MTH 060 and MTH 065 provide a two-term sequence preparatory to Intermediate Algebra, MTH 095.

4.000 Credit hours
40.000 TO 48.000 Lecture hours
Syllabus Available
Levels: Credit
Schedule Types: Lecture
Mathematics Division
Mathematics Department
Course Attributes: Tuition, Pre-college Level
{"url":"https://crater.lanecc.edu/banp/zwckctlg.p_display_courses?term_in=202340&one_subj=MTH&sel_subj=&sel_crse_strt=060&sel_crse_end=060&sel_levl=&sel_schd=&sel_coll=&sel_divs=&sel_dept=&sel_attr=","timestamp":"2024-11-14T05:58:30Z","content_type":"text/html","content_length":"8852","record_id":"<urn:uuid:8543de70-f17a-4913-b425-35874ae2f732>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00796.warc.gz"}
DynamicalSystemsBase.jl · DynamicalSystemsBase.jl

A Julia package that defines the DynamicalSystem interface and many concrete implementations used in the DynamicalSystems.jl ecosystem. To install it, run import Pkg; Pkg.add("DynamicalSystemsBase"). Typically, you do not want to use DynamicalSystemsBase directly, as downstream analysis packages re-export it. All further information is provided in the documentation, which you can either find online or build locally by running the docs/make.jl file.

!!! note "Tutorial and examples at DynamicalSystems.jl docs!"
    Please visit the documentation of the main DynamicalSystems.jl docs for a tutorial and examples on using the interface.

DynamicalSystem is an abstract supertype encompassing all concrete implementations of what counts as a "dynamical system" in the DynamicalSystems.jl library. All concrete implementations of DynamicalSystem can be iteratively evolved in time via the step! function. Hence, most library functions that evolve the system will mutate its current state and/or parameters. See the documentation online for implications this has for parallelization.

DynamicalSystem is further separated into two abstract types: ContinuousTimeDynamicalSystem, DiscreteTimeDynamicalSystem. The simplest and most common concrete implementations of a DynamicalSystem are DeterministicIteratedMap or CoupledODEs.

A DynamicalSystem represents the time evolution of a state in a state space. It mainly encapsulates three things:

1. A state, typically referred to as u, with initial value u0. The space that u occupies is the state space of ds and the length of u is the dimension of ds (and of the state space).
2. A dynamic rule, typically referred to as f, that dictates how the state evolves/changes with time when calling the step! function. f is typically a standard Julia function; see the online documentation for examples.
3. A parameter container p that parameterizes f. p can be anything, but in general it is recommended to be a type-stable mutable container.

In short, any set of quantities that change in time can be considered a dynamical system; however, the concrete subtypes of DynamicalSystem are much more specific in their scope. Concrete subtypes typically also contain more information than the above 3 items.

In this scope dynamical systems have a known dynamic rule f. Finite measured or sampled data from a dynamical system are represented using StateSpaceSet. Such data are obtained from the trajectory function or from an experimental measurement of a dynamical system with an unknown dynamic rule. See also the DynamicalSystems.jl tutorial online for examples making dynamical systems.

Integration with ModelingToolkit.jl

Dynamical systems that have been constructed from DEProblems that themselves have been constructed from ModelingToolkit.jl keep a reference to the symbolic model and all symbolic variables. Accessing a DynamicalSystem using symbolic variables is possible via the functions observe_state, set_state!, current_parameter and set_parameter!. The referenced MTK model corresponding to the dynamical system can be obtained with model = referrenced_sciml_model(ds::DynamicalSystem). See also the DynamicalSystems.jl tutorial online for an example.

In ModelingToolkit.jl v9 the default split behavior of the parameter container is true. This means that the parameter container is no longer a Vector{Float64} by default, which means that you cannot use integers to access parameters.
It is recommended to keep split = true (default) and only access parameters via their symbolic parameter binding. Use structural_simplify(sys; split = false) to allow accessing parameters with integers again.

The API that DynamicalSystem employs is composed of the functions listed below. Once a concrete instance of a subtype of DynamicalSystem is obtained, it can be queried or altered with the following functions.

The main use of a concrete dynamical system instance is to provide it to downstream functions such as lyapunovspectrum from ChaosTools.jl or basins_of_attraction from Attractors.jl. A typical user will likely not utilize directly the following API, unless when developing new algorithm implementations that use dynamical systems.

API - obtain information

• ds(t) with ds an instance of DynamicalSystem: return the state of ds at time t. For continuous time systems this interpolates and extrapolates, while for discrete time systems it only works if t is the current time.

current_state(ds::DynamicalSystem) → u::AbstractArray
Return the current state of ds. This state is mutated when ds is mutated. See also initial_state, observe_state.

initial_state(ds::DynamicalSystem) → u0
Return the initial state of ds. This state is never mutated and is set when initializing ds.

observe_state(ds::DynamicalSystem, i, u = current_state(ds)) → x::Real
Return the state u of ds observed at "index" i. Possibilities are:
• i::Int returns the i-th dynamic variable.
• i::Function returns f(current_state(ds)).
• i::SymbolLike returns the value of the corresponding symbolic variable. This is valid only for dynamical systems referencing a ModelingToolkit.jl model which also has i as one of its listed variables (either unknowns or observed). Here i can be anything that could index the solution object sol = ModelingToolkit.solve(...), such as a Num or Symbol instance with the name of the symbolic variable. In this case, a last fourth optional positional argument t defaults to current_time(ds) and is the time to observe the state at.
• Any symbolic expression involving variables present in the symbolic variables tracked by the system, e.g., i = x^2 - y with x, y symbolic variables.

For ProjectedDynamicalSystem, this function assumes that the state of the system is the full state space state, not the projected one (this makes the most sense for allowing MTK-based indexing). Use state_name for an accompanying name: it returns a name that matches the outcome of observe_state with index.

current_parameter(ds::DynamicalSystem, index [,p])
Return the specific parameter of ds corresponding to index, which can be anything given to set_parameter!. p defaults to current_parameters and is the parameter container to extract the parameter from, which must match layout with its default value. Use parameter_name for an accompanying name.

initial_parameters(ds::DynamicalSystem) → p0
Return the initial parameter container of ds. This is never mutated and is set when initializing ds.

isdeterministic(ds::DynamicalSystem) → true/false
Return true if ds is deterministic, i.e., the dynamic rule contains no randomness. This is information deduced from the type of ds.

isdiscretetime(ds::DynamicalSystem) → true/false
Return true if ds operates in discrete time, or false if it is in continuous time. This is information deduced from the type of ds.

dynamic_rule(ds::DynamicalSystem) → f
Return the dynamic rule of ds. This is never mutated and is set when initializing ds.
current_time(ds::DynamicalSystem) → t
Return the current time that ds is at. This is mutated when ds is evolved.

initial_time(ds::DynamicalSystem) → t0
Return the initial time defined for ds. This is never mutated and is set when initializing ds.

isinplace(ds::DynamicalSystem) → true/false
Return true if the dynamic rule of ds is in-place, i.e., a function mutating the state in place. If true, the state is typically Array; if false, the state is typically SVector. A front-end user will most likely not care about this information, but a developer may care.

successful_step(ds::DynamicalSystem) → true/false
Return true if the last step! call to ds was successful, false otherwise. For continuous time systems this uses DifferentialEquations.jl error checking, for discrete time it checks if any variable is Inf or NaN.

referrenced_sciml_model(ds::DynamicalSystem)
Return the ModelingToolkit.jl structurally-simplified model referenced by ds. Return nothing if there is no referenced model.

API - alter status

reinit!(ds::DynamicalSystem, u = initial_state(ds); kwargs...) → ds
Reset the status of ds, so that it is as if it has just been initialized with initial state u. Practically every function of the ecosystem that evolves ds first calls this function on it. Besides the new state u, you can also configure the keywords t0 = initial_time(ds) and p = current_parameters(ds).

reinit!(ds::DynamicalSystem, u::AbstractDict; kwargs...) → ds
If u is an AbstractDict (for partially setting specific state variables in set_state!), then the alterations are done in the state given by the keyword reference_state = copy(initial_state(ds)).

reinit!(ds, ::Nothing; kwargs...)
This method does nothing and leaves the system as is. This is so that downstream functions that call reinit! can still be used without resetting the system but rather continuing from its exact current state.

set_state!(ds::DynamicalSystem, u::AbstractArray{<:Real})
Set the state of ds to u, which must match dimensionality with that of ds. Also ensure that the change is notified to whatever integration protocol is used.

set_state!(ds::DynamicalSystem, value::Real, i) → u
Set the ith variable of ds to value. The index i can be an integer or a symbolic-like index for systems that reference a ModelingToolkit.jl model. For example:

i = :x # or `1` or `only(@variables(x))`
set_state!(ds, 0.5, i)

Warning: this function should not be used with derivative dynamical systems such as Poincare/stroboscopic/projected dynamical systems. Use the method below to manipulate an array and give that to set_state!.

set_state!(u::AbstractArray, value, index, ds::DynamicalSystem)
Modify the given state u and leave ds untouched.

set_state!(ds::DynamicalSystem, mapping::AbstractDict)
Convenience version of set_state! that iteratively calls set_state!(ds, val, i) for all index-value pairs (i, val) in mapping. This is useful primarily in two cases:
1. to partially set only some state variables,
2. to set variables by name (if the system is created via ModelingToolkit.jl) so that you don't have to keep track of the order of the dynamic variables.

set_parameter!(ds::DynamicalSystem, index, value [, p])
Change a parameter of ds given the index it has in the parameter container and the value to set it to. This function works for any type of parameter container (array/dictionary/composite types) provided the index is of appropriate type. The index can be a traditional Julia index (integer for arrays, key for dictionaries, or symbol for composite types). It can also be a symbolic variable or Symbol instance.
This is valid only for dynamical systems referencing a ModelingToolkit.jl model which also has index as one of its parameters. The last optional argument p defaults to current_parameters and is the parameter container whose value is changed at the given index. It must match layout with its default value.

set_parameters!(ds::DynamicalSystem, p = initial_parameters(ds))
Set the parameter values in the current_parameters(ds) to match those in p. This is done as an in-place overwrite by looping over the keys of p, hence p can be an arbitrary container mapping parameter indices to values (such as a Vector{Real}, Vector{Pair}, or AbstractDict). The keys of p must be valid keys that can be given to set_parameter!.

step!(ds::DiscreteTimeDynamicalSystem [, n::Integer]) → ds
Evolve the discrete time dynamical system for 1 or n steps.

step!(ds::ContinuousTimeDynamicalSystem [, dt::Real [, stop_at_tdt]]) → ds
Evolve the continuous time dynamical system for one integration step. Alternatively, if a dt is given, then progress the integration until there is a temporal difference ≥ dt (so, step at least for dt time). When true is passed to the optional third argument, the integration advances for exactly dt time.

trajectory(ds::DynamicalSystem, T [, u0]; kwargs...) → X, t
Evolve ds for a total time of T and return its trajectory X, sampled at equal time intervals, and the corresponding time vector. X is a StateSpaceSet. Optionally provide a starting state u0 which is current_state(ds) by default. The returned time vector is t = (t0+Ttr):Δt:(t0+Ttr+T).

If time evolution diverged, or in general failed, before T, the remainder of the trajectory is set to the last valid point.

trajectory is a very simple function provided for convenience. For continuous time systems, it doesn't play well with callbacks; use DifferentialEquations.solve if you want a trajectory/timeseries that works with callbacks, or in general if you want more flexibility in the generated trajectory (but remember to convert the output of solve to a StateSpaceSet).

Keyword arguments
• Δt: Time step of value output. For discrete time systems it must be an integer. Defaults to 0.1 for continuous and 1 for discrete time systems. If you don't have access to unicode, the keyword Dt can be used instead.
• Ttr = 0: Transient time to evolve the initial state before starting saving states.
• t0 = initial_time(ds): Starting time.
• container = SVector: Type of vector that will represent the state space points that will be included in the StateSpaceSet output. See StateSpaceSet for valid options.
• save_idxs::AbstractVector: Which variables to output in X. It can be any type of index that can be given to observe_state. Defaults to 1:dimension(ds) (all dynamic variables). Note: if you mix integer and symbolic indexing be sure to initialize the array as Any so that integers 1, 2, ... are not converted to symbolic expressions.

StateSpaceSet{D, T, V} <: AbstractVector{V}
A dedicated interface for sets in a state space. It is an ordered container of equally-sized points of length D, with element type T, represented by a vector of type V. Typically V is SVector{D,T} or Vector{T} and the data are always stored internally as Vector{V}. SSSet is an alias for StateSpaceSet.

The underlying Vector{V} can be obtained by vec(ssset), although this is almost never necessary because StateSpaceSet subtypes AbstractVector and extends its interface. StateSpaceSet also supports almost all sensible vector operations like append!, push!, hcat, eachrow, among others.
When iterated over, it iterates over its contained points.

Constructing a StateSpaceSet is done in three ways:
1. By giving in each individual column of the state space set as Vector{<:Real}: StateSpaceSet(x, y, z, ...).
2. By giving in a matrix whose rows are the state space points: StateSpaceSet(m).
3. By giving in directly a vector of vectors (state space points): StateSpaceSet(v_of_v).

All constructors allow for the keyword container, which sets the type of V (the type of inner vectors). At the moment options are only SVector, MVector, or Vector, and by default SVector is used.

Description of indexing
When indexed with 1 index, StateSpaceSet behaves exactly like its encapsulated vector, i.e., a vector of vectors (state space points). When indexed with 2 indices it behaves like a matrix where each row is a point. In the following let i, j be integers, typeof(X) <: AbstractStateSpaceSet and v1, v2 be <: AbstractVector{Int} (v1, v2 could also be ranges, and for performance benefits make v2 an SVector{Int}).
• X[i] == X[i, :] gives the ith point (returns an SVector)
• X[v1] == X[v1, :] returns a StateSpaceSet with the points in those indices.
• X[:, j] gives the jth variable timeseries (or collection), as Vector
• X[v1, v2], X[:, v2] returns a StateSpaceSet with the appropriate entries (first indices being "time"/point index, while second being variables)
• X[i, j] value of the jth variable, at the ith timepoint

Use Matrix(ssset) or StateSpaceSet(matrix) to convert. It is assumed that each column of the matrix is one variable. If you have various timeseries vectors x, y, z, ... pass them like StateSpaceSet(x, y, z, ...). You can use columns(dataset) to obtain the reverse, i.e. all columns of the dataset in a tuple.

DeterministicIteratedMap <: DiscreteTimeDynamicalSystem
DeterministicIteratedMap(f, u0, p = nothing; t0 = 0)
A deterministic discrete time dynamical system defined by an iterated map as follows:

\[\vec{u}_{n+1} = \vec{f}(\vec{u}_n, p, n)\]

An alias for DeterministicIteratedMap is DiscreteDynamicalSystem. Optionally configure the parameter container p and initial time t0. For construction instructions regarding f, u0 see the DynamicalSystems.jl tutorial.

CoupledODEs <: ContinuousTimeDynamicalSystem
CoupledODEs(f, u0 [, p]; diffeq, t0 = 0.0)
A deterministic continuous time dynamical system defined by a set of coupled ordinary differential equations as follows:

\[\frac{d\vec{u}}{dt} = \vec{f}(\vec{u}, p, t)\]

An alias for CoupledODEs is ContinuousDynamicalSystem. Optionally provide the parameter container p and initial time as keyword t0. For construction instructions regarding f, u0 see the DynamicalSystems.jl tutorial.

DifferentialEquations.jl interfacing
The ODEs are evolved via the solvers of DifferentialEquations.jl. When initializing a CoupledODEs, you can specify the solver that will integrate f in time, along with any other integration options, using the diffeq keyword. For example you could use diffeq = (abstol = 1e-9, reltol = 1e-9). If you want to specify a solver, do so by using the keyword alg, e.g.: diffeq = (alg = Tsit5(), reltol = 1e-6). This requires you to have first run using OrdinaryDiffEq (or a smaller library package such as OrdinaryDiffEqVerner) to access the solvers.
The default diffeq is: (alg = OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.triviallimiter!), typeof(OrdinaryDiffEqCore.triviallimiter!), Static.False}(OrdinaryDiffEqCore.triviallimiter!, OrdinaryDiffEqCore.triviallimiter!, static(false)), abstol = 1.0e-6, reltol = 1.0e-6).

diffeq keywords can also include callback for event handling.

The convenience constructors CoupledODEs(prob::ODEProblem [, diffeq]) and CoupledODEs(ds::CoupledODEs [, diffeq]) are also available. Use ODEProblem(ds::CoupledODEs, tspan = (t0, Inf)) to obtain the corresponding ODEProblem.

To integrate with ModelingToolkit.jl, the dynamical system must be created via the ODEProblem (which itself is created via ModelingToolkit.jl); see the Tutorial for an example.

Dev note: CoupledODEs is a light wrapper of ODEIntegrator from DifferentialEquations.jl.

StroboscopicMap <: DiscreteTimeDynamicalSystem
StroboscopicMap(ds::CoupledODEs, period::Real) → smap
StroboscopicMap(period::Real, f, u0, p = nothing; kwargs...)
A discrete time dynamical system that produces iterations of a time-dependent (non-autonomous) CoupledODEs system exactly over a given period. The second signature first creates a CoupledODEs and then calls the first.

StroboscopicMap follows the DynamicalSystem interface. In addition, the function set_period!(smap, period) is provided, which sets the period of the system to a new value (as if it was a parameter). As this system is in discrete time, current_time and initial_time are integers. The initial time is always 0, because current_time counts elapsed periods. Call these functions on the parent of StroboscopicMap to obtain the corresponding continuous time. In contrast, reinit! expects t0 in continuous time.

The convenience constructor StroboscopicMap(T::Real, f, u0, p = nothing; diffeq, t0 = 0) → smap is also provided. See also PoincareMap.

PoincareMap <: DiscreteTimeDynamicalSystem
PoincareMap(ds::CoupledODEs, plane; kwargs...) → pmap
A discrete time dynamical system that produces iterations over the Poincaré map [DatserisParlitz2022] of the given continuous time ds. This map is defined as the sequence of points on the Poincaré surface of section, which is defined by the plane argument. Iterating pmap also mutates ds, which is referenced in pmap. See also StroboscopicMap, poincaresos.

Keyword arguments
• direction = -1: Only crossings with sign(direction) are considered to belong to the surface of section. Negative direction means going from less than $b$ to greater than $b$.
• u0 = nothing: Specify an initial state.
• rootkw = (xrtol = 1e-6, atol = 1e-8): A NamedTuple of keyword arguments passed to find_zero from Roots.jl.
• Tmax = 1e3: The argument Tmax exists so that the integrator can terminate instead of being evolved for infinite time, to avoid cases where iteration would continue forever for ill-defined hyperplanes or for convergence to fixed points, where the trajectory would never cross the hyperplane again. If during one step! the system has been evolved for more than Tmax, then step!(pmap) will terminate and error.

The Poincaré surface of section is defined as the sequential transversal crossings a trajectory has with any arbitrary manifold, but here the manifold must be a hyperplane. PoincareMap iterates over the crossings of the section. If the state of ds is $\mathbf{u} = (u_1, \ldots, u_D)$ then the equation defining a hyperplane is

\[a_1u_1 + \dots + a_Du_D = \mathbf{a}\cdot\mathbf{u} = b\]

where $\mathbf{a}, b$ are the parameters of the hyperplane.
In code, plane can be either:
• A Tuple{Int, <: Real}, like (j, r): the plane is defined as when the jth variable of the system equals the value r.
• A vector of length D+1. The first D elements of the vector correspond to $\mathbf{a}$ while the last element is $b$.

PoincareMap uses ds, higher order interpolation from DifferentialEquations.jl, and root finding from Roots.jl, to create a high accuracy estimate of the section. PoincareMap follows the DynamicalSystem interface with the following adjustments:
1. dimension(pmap) == dimension(ds), even though the Poincaré map is effectively 1 dimension less.
2. Like StroboscopicMap, time is discrete and counts the iterations on the surface of section. initial_time is always 0 and current_time is the current iteration number.
3. A new function current_crossing_time returns the real time corresponding to the latest crossing of the hyperplane. The corresponding state on the hyperplane is current_state(pmap) as expected.
4. For the special case of plane being a Tuple{Int, <:Real}, a special reinit! method is allowed with input state of length D-1 instead of D, i.e., a reduced state already on the hyperplane that is then converted into the D dimensional state.

using DynamicalSystemsBase
ds = Systems.rikitake(zeros(3); μ = 0.47, α = 1.0)
pmap = poincaremap(ds, (3, 0.0))
next_state_on_psos = current_state(pmap)

current_crossing_time(pmap::PoincareMap) → tcross
Return the time of the latest crossing of the Poincare section.

poincaresos(A::AbstractStateSpaceSet, plane; kwargs...) → P::StateSpaceSet
Calculate the Poincaré surface of section of the given dataset with the given plane by performing linear interpolation between points that sandwich the hyperplane. Argument plane and keywords direction, warning, save_idxs are the same as in PoincareMap.

poincaresos(ds::CoupledODEs, plane, T = 1000.0; kwargs...) → P::StateSpaceSet
Return the iterations of ds on the Poincaré surface of section with the plane, by evolving ds up to a total of T. Return a StateSpaceSet of the points that are on the surface of section.

This function initializes a PoincareMap and steps it until its current_crossing_time exceeds T. You can also use trajectory with PoincareMap to get a sequence of N::Int points instead. The keywords Ttr, save_idxs act as in trajectory. See PoincareMap for plane and all other keywords.

TangentDynamicalSystem <: DynamicalSystem
TangentDynamicalSystem(ds::CoreDynamicalSystem; kwargs...)
A dynamical system that bundles the evolution of ds (which must be a CoreDynamicalSystem) and k deviation vectors that are evolved according to the dynamics in the tangent space (also called linearized dynamics or the tangent dynamics). The state of ds must be an AbstractVector for TangentDynamicalSystem.

TangentDynamicalSystem follows the DynamicalSystem interface with the following adjustments:
• reinit! takes an additional keyword Q0 (with same default as below)
• The additional functions current_deviations and set_deviations! are provided for the deviation vectors.

Keyword arguments
• k or Q0: Q0 represents the initial deviation vectors (each column = 1 vector). If k::Int is given, a matrix Q0 is created with the first k columns of the identity matrix. Otherwise Q0 can be given directly as a matrix. It must hold that size(Q, 1) == dimension(ds). You can use orthonormal for random orthonormal vectors. By default k = dimension(ds) is used.
• u0 = current_state(ds): Starting state.
• J and J0: See section "Jacobian" below.
Let $u$ be the state of ds, and $y$ a deviation (or perturbation) vector. These two are evolved in parallel according to

\[\begin{array}{rcl} \frac{d\vec{x}}{dt} &=& f(\vec{x}) \\ \frac{dY}{dt} &=& J_f(\vec{x}) \cdot Y \end{array} \quad \mathrm{or} \quad \begin{array}{rcl} \vec{x}_{n+1} &=& f(\vec{x}_n) \\ Y_{n+1} &=& J_f(\vec{x}_n) \cdot Y_n \end{array}\]

for continuous or discrete time respectively. Here $f$ is the dynamic_rule(ds) and $J_f$ is the Jacobian of $f$.

The keyword J provides the Jacobian function. It must be a Julia function in the same form as f, the dynamic_rule. Specifically, J(u, p, n) -> M::SMatrix for the out-of-place version, or J(M, u, p, n) for the in-place version acting in-place on M. In both cases M is the Jacobian matrix used for the evolution of the deviation vectors. By default J = nothing. In this case J is constructed automatically using the module ForwardDiff, hence its limitations also apply here. Even though ForwardDiff is very fast, depending on your exact system you might gain significant speed-up by providing a hand-coded Jacobian, and so it is recommended. Additionally, automatic and in-place Jacobians cannot be time dependent.

The keyword J0 allows you to pass an initialized Jacobian matrix J0. This is useful for large in-place systems where only a few components of the Jacobian change during the time evolution. J0 can be a sparse or any other matrix type. If not given, a matrix of zeros is used. J0 is ignored for out-of-place systems.

current_deviations(tands::TangentDynamicalSystem)
Return the deviation vectors of tands as a matrix with each column a vector.

set_deviations!(tands::TangentDynamicalSystem, Q)
Set the deviation vectors of tands to be Q, a matrix with each column a vector.

jacobian(ds::CoreDynamicalSystem)
Construct the Jacobian rule for the dynamical system ds. This is done via automatic differentiation using module ForwardDiff.

For out-of-place systems, jacobian returns the Jacobian rule as a function Jf(u, p, t) -> J0::SMatrix. Calling Jf(u, p, t) will compute the Jacobian at the state u, parameters p and time t and return the result as J0. For in-place systems, jacobian returns the Jacobian rule as a function Jf!(J0, u, p, t). Calling Jf!(J0, u, p, t) will compute the Jacobian at the state u, parameters p and time t and save the result in J0.

orthonormal([T,] D, k) -> ws
Return a matrix ws with k columns, each being a D-dimensional orthonormal vector. T is the return type and can be either SMatrix or Matrix. If not given, it is SMatrix if D*k < 100, otherwise Matrix.

ProjectedDynamicalSystem <: DynamicalSystem
ProjectedDynamicalSystem(ds::DynamicalSystem, projection, complete_state)
A dynamical system that represents a projection of an existing ds on a (projected) space.

The projection defines the projected space. If projection isa AbstractVector{Int}, then the projected space is simply the variable indices that projection contains. Otherwise, projection can be an arbitrary function that, given the state of the original system ds, returns the state in the projected space. In this case the projected space can be equal, or even higher-dimensional, than the original.

complete_state produces the state for the original system from the projected state. complete_state can always be a function that given the projected state returns a state in the original space. However, if projection isa AbstractVector{Int}, then complete_state can also be a vector that contains the values of the remaining variables of the system, i.e., those not contained in the projected space.
In this case the projected space needs to be lower-dimensional than the original. Notice that ProjectedDynamicalSystem does not require an invertible projection; complete_state is only used during reinit!. ProjectedDynamicalSystem is in fact a rather trivial wrapper of ds which steps it as normal in the original state space and only projects as a last step, e.g., during current_state.

Case 1: project a 5-dimensional system to its last two dimensions.

ds = Systems.lorenz96(5)
projection = [4, 5]
complete_state = [0.0, 0.0, 0.0] # completed state just in the plane of last two dimensions
prods = ProjectedDynamicalSystem(ds, projection, complete_state)
reinit!(prods, [0.2, 0.4])

Case 2: custom projection to general functions of the state.

ds = Systems.lorenz96(5)
projection(u) = [sum(u), sqrt(u[1]^2 + u[2]^2)]
complete_state(y) = repeat([y[1]/5], 5)
prods = # same as in above example...

ParallelDynamicalSystem <: DynamicalSystem
ParallelDynamicalSystem(ds::DynamicalSystem, states::Vector{<:AbstractArray})
A struct that evolves several states of a given dynamical system in parallel at exactly the same times. Useful when wanting to evolve several different trajectories of the same system while ensuring that they share parameters and time vector. This struct follows the DynamicalSystem interface with the following adjustments:

ParallelDynamicalSystem(ds::DynamicalSystem, states::Vector{<:Dict})
For a dynamical system referencing an MTK model, one can specify states as a vector of dictionaries to alter the current state of ds as in set_state!.

initial_states(pds::ParallelDynamicalSystem)
Return an iterator over the initial parallel states of pds.

current_states(pds::ParallelDynamicalSystem)
Return an iterator over the parallel states of pds.

ArbitrarySteppable <: DiscreteTimeDynamicalSystem
ArbitrarySteppable(model, step!, extract_state, extract_parameters, reset_model!; isdeterministic = true, set_state = reinit!, ...)
A dynamical system generated by an arbitrary "model" that can be stepped in-place with some function step!(model) for 1 step. The state of the model is extracted by the extract_state(model) -> u function. The parameters of the model are extracted by the extract_parameters(model) -> p function. The system may be re-initialized, via reinit!, with the reset_model! user-provided function that must have the call signature reset_model!(model, u, p), given a (potentially new) state u and parameter container p, both of which will default to the initial ones in the reinit! call.

ArbitrarySteppable exists to provide the DynamicalSystems.jl interface to models from other packages that could be used within the DynamicalSystems.jl library. ArbitrarySteppable follows the DynamicalSystem interface with the following adjustments:
• initial_time is always 0, as time counts the steps the model has taken since creation or the last reinit! call.
• set_state! is the same as reinit! by default. If not, the keyword argument set_state is a function set_state(model, u) that sets the state of the model to u.
• The keyword isdeterministic should be set properly, as it decides whether downstream algorithms should error or not.

Since DynamicalSystems are mutable, one needs to copy them before parallelizing, to avoid having to deal with complicated race conditions etc. The simplest way is with deepcopy.
Here is an example block that shows how to parallelize calling some expensive function (e.g., calculating the Lyapunov exponent) over a parameter range using Threads:

ds = DynamicalSystem(f, u, p) # some concrete implementation
parameters = 0:0.01:1
outputs = zeros(length(parameters))

# Since `DynamicalSystem`s are mutable, we need to copy to parallelize
systems = [deepcopy(ds) for _ in 1:Threads.nthreads()-1]
pushfirst!(systems, ds) # we can save 1 copy

Threads.@threads for i in eachindex(parameters)
    system = systems[Threads.threadid()]
    set_parameter!(system, 1, parameters[i])
    outputs[i] = expensive_function(system, args...)
end

This is an advanced example of making an in-place implementation of coupled standard maps. It will utilize a handcoded Jacobian, a sparse matrix for the Jacobian, a default initial Jacobian matrix, as well as function-like-objects as the dynamic rule.

Coupled standard maps is a deterministic iterated map that can have an arbitrary number of equations of motion, since you can couple N standard maps, which are 2D maps, like so:

\[\theta_{i}' = \theta_i + p_{i}' \\ p_{i}' = p_i + k_i\sin(\theta_i) - \Gamma \left[\sin(\theta_{i+1} - \theta_{i}) + \sin(\theta_{i-1} - \theta_{i}) \right]\]

To model this, we will make a dedicated struct, which is parameterized on the number of coupled maps:

using DynamicalSystemsBase

struct CoupledStandardMaps{N}
    idxs::SVector{N, Int}
    idxsm1::SVector{N, Int}
    idxsp1::SVector{N, Int}
end

(what these fields are will become apparent later)

We initialize the struct with the amount of standard maps we want to couple, and we also define appropriate parameters:

M = 5 # number of coupled maps
u0 = 0.001rand(2M) # initial state
ks = 0.9ones(M) # nonlinearity parameters
Γ = 1.0 # coupling strength
p = (ks, Γ) # parameter container

# Create struct:
SV = SVector{M, Int}
idxs = SV(1:M...) # indexes of thetas
idxsm1 = SV(circshift(idxs, +1)...) # indexes of thetas - 1
idxsp1 = SV(circshift(idxs, -1)...) # indexes of thetas + 1
# So that:
# x[i] ≡ θᵢ
# x[idxsp1[i]] ≡ θᵢ₊₁
# x[idxsm1[i]] ≡ θᵢ₋₁
csm = CoupledStandardMaps{M}(idxs, idxsm1, idxsp1)

Main.CoupledStandardMaps{5}([1, 2, 3, 4, 5], [5, 1, 2, 3, 4], [2, 3, 4, 5, 1])

We will now use this struct to define a function-like-object, a type that also acts as a function:

function (f::CoupledStandardMaps{N})(xnew::AbstractVector, x, p, n) where {N}
    ks, Γ = p
    @inbounds for i in f.idxs
        xnew[i+N] = mod2pi(
            x[i+N] + ks[i]*sin(x[i]) -
            Γ*(sin(x[f.idxsp1[i]] - x[i]) + sin(x[f.idxsm1[i]] - x[i]))
        )
        xnew[i] = mod2pi(x[i] + xnew[i+N])
    end
    return nothing
end

We will use the same struct to create a function for the Jacobian:

function (f::CoupledStandardMaps{M})(J::AbstractMatrix, x, p, n) where {M}
    ks, Γ = p
    # x[i] ≡ θᵢ
    # x[idxsp1[i]] ≡ θᵢ₊₁
    # x[idxsm1[i]] ≡ θᵢ₋₁
    @inbounds for i in f.idxs
        cosθ = cos(x[i])
        cosθp = cos(x[f.idxsp1[i]] - x[i])
        cosθm = cos(x[f.idxsm1[i]] - x[i])
        J[i+M, i] = ks[i]*cosθ + Γ*(cosθp + cosθm)
        J[i+M, f.idxsm1[i]] = -Γ*cosθm
        J[i+M, f.idxsp1[i]] = -Γ*cosθp
        J[i, i] = 1 + J[i+M, i]
        J[i, f.idxsm1[i]] = J[i+M, f.idxsm1[i]]
        J[i, f.idxsp1[i]] = J[i+M, f.idxsp1[i]]
    end
    return nothing
end

This is possible because the system state is a Vector while the Jacobian is a Matrix, so multiple dispatch can differentiate between the two.

Notice in addition that the Jacobian function accesses only half the elements of the matrix. This is intentional, and takes advantage of the fact that the other half is constant. We can leverage this further, by making the Jacobian a sparse matrix.
Because the DynamicalSystem constructors allow us to give in a pre-initialized Jacobian matrix, we take advantage of that and create:

using SparseArrays
J = zeros(eltype(u0), 2M, 2M)
# Set ∂/∂p entries (they are eye(M,M))
# and they don't change; they are constants
for i in idxs
    J[i, i+M] = 1
    J[i+M, i+M] = 1
end
sparseJ = sparse(J)
csm(sparseJ, u0, p, 0) # apply Jacobian to initial state

10×10 SparseArrays.SparseMatrixCSC{Float64, Int64} with 40 stored entries:
  3.9  -1.0    ⋅     ⋅   -1.0   1.0    ⋅     ⋅     ⋅     ⋅
 -1.0   3.9  -1.0    ⋅     ⋅     ⋅    1.0    ⋅     ⋅     ⋅
   ⋅   -1.0   3.9  -1.0    ⋅     ⋅     ⋅    1.0    ⋅     ⋅
   ⋅     ⋅   -1.0   3.9  -1.0    ⋅     ⋅     ⋅    1.0    ⋅
 -1.0    ⋅     ⋅   -1.0   3.9    ⋅     ⋅     ⋅     ⋅    1.0
  2.9  -1.0    ⋅     ⋅   -1.0   1.0    ⋅     ⋅     ⋅     ⋅
 -1.0   2.9  -1.0    ⋅     ⋅     ⋅    1.0    ⋅     ⋅     ⋅
   ⋅   -1.0   2.9  -1.0    ⋅     ⋅     ⋅    1.0    ⋅     ⋅
   ⋅     ⋅   -1.0   2.9  -1.0    ⋅     ⋅     ⋅    1.0    ⋅
 -1.0    ⋅     ⋅   -1.0   2.9    ⋅     ⋅     ⋅     ⋅    1.0

Now we are ready to create our dynamical system:

ds = DeterministicIteratedMap(csm, u0, p)

10-dimensional DeterministicIteratedMap
 deterministic: true
 discrete time: true
 in-place: true
 dynamic rule: CoupledStandardMaps
 parameters: ([0.9, 0.9, 0.9, 0.9, 0.9], 1.0)
 time: 0
 state: [0.00041524111829434654, 0.0007229374394926134, 0.0006848502727239867, 0.0005907107844825832, 0.0006803903839332631, 0.0008921850182833862, 0.00012179045162710301, 0.0009690027609471248, 0.0004665835723713002, 0.00037448505102109466]

Of course, the reason we went through all this trouble was to make a TangentDynamicalSystem that can actually use the Jacobian function:

tands = TangentDynamicalSystem(ds; J = csm, J0 = sparseJ, k = M)

10-dimensional TangentDynamicalSystem
 deterministic: true
 discrete time: true
 in-place: true
 dynamic rule: CoupledStandardMaps
 jacobian: CoupledStandardMaps
 deviation vectors: 5
 parameters: ([0.9, 0.9, 0.9, 0.9, 0.9], 1.0)
 time: 0
 state: [0.00041524111829434654, 0.0007229374394926134, 0.0006848502727239867, 0.0005907107844825832, 0.0006803903839332631, 0.0008921850182833862, 0.00012179045162710301, 0.0009690027609471248, 0.0004665835723713002, 0.00037448505102109466]

step!(tands, 5)
current_deviations(tands)

10×5 view(::Matrix{Float64}, :, 2:6) with eltype Float64:
  3919.65   -2770.14     845.081    835.566  -2760.47
 -2782.26    3943.87   -2784.3      847.221    845.205
   846.645  -2773.97    3924.05   -2763.79     836.618
   834.299    837.076  -2752.64    3900.81   -2749.87
 -2747.17     834.944    833.672  -2745.91    3893.95
  3263.48   -2344.26     733.337    723.854  -2334.64
 -2356.34    3287.62   -2358.38     735.471    733.462
   734.897  -2348.09    3267.89   -2337.95     724.907
   722.584    725.36   -2326.83    3244.7    -2324.05
 -2321.38     723.235    721.961  -2320.12    3237.88

(the deviation vectors will increase in magnitude rapidly because the dynamical system is chaotic)
{"url":"https://juliadynamics.github.io/DynamicalSystemsDocs.jl/dynamicalsystemsbase/stable/","timestamp":"2024-11-03T23:52:02Z","content_type":"text/html","content_length":"105047","record_id":"<urn:uuid:cb0c660f-034a-4aff-a786-e2d04d9c5689>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00123.warc.gz"}
About Solvely

Introducing Solvely, an AI homework helper app powered by GPT-4. Whether you are stuck on math, physics, or chemistry problems, solve and learn them effortlessly with Solvely. Just take a photo, and our advanced math solver and step-by-step explanations will make tackling homework a breeze, allowing you to grasp the underlying principles and excel in your studies.
{"url":"https://solvelyapp.com/about/","timestamp":"2024-11-05T14:07:39Z","content_type":"text/html","content_length":"20800","record_id":"<urn:uuid:b6405b48-7f77-4a2f-83ef-fc3436d5165d>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00689.warc.gz"}
11+ N2 Lewis Structure

The Lewis structure indicates that each Cl atom has three pairs of electrons that are not used in bonding (called lone pairs) and one shared pair of electrons (written between the atoms). Calculate the total valence electrons in the NO2 molecule. This example problem shows the steps to draw a structure where an atom violates the octet rule. Also, for the structure to be correct, it must follow the octet rule (eight electrons per atom). For the N2 Lewis structure, calculate the total number of valence electrons. Valence shell electron pair repulsion (VSEPR) theory, along with Lewis structures, can be used to predict molecular geometry. The Lewis structure of a compound can be generated by trial and error. I am reporting about formal charge and Lewis structure. Lewis structures are a useful way to summarize certain information about bonding and may be thought of as electron bookkeeping.
{"url":"https://robhosking.com/11-n2-lewis-structure/","timestamp":"2024-11-06T04:42:27Z","content_type":"text/html","content_length":"66212","record_id":"<urn:uuid:7d56ffb3-dc67-428c-9ccf-50f9b0011ef6>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00695.warc.gz"}
YES, rush me the Algebrator right now! I want to stop banging my head against the wall trying to guess complicated algebra problems. I want to have my algebra problems solved and explained so I have time to do other things. There's absolutely no way I can lose because my investment of $29.99 is completely protected by your "iron-clad" guarantee. That means I can put the program to the test. If I'm not completely thrilled with the incredible ease with which Algebrator solves my homework, you will refund my entire payment with no questions asked. So no matter what, I come out ahead. On that “better-than-risk-free” basis, here's my order for “Algebrator!”

Download (and optional CD) Only $29.99 (regular price $74.99)

Click to Buy Now:

Just take a look how incredibly simple Algebrator is:
Step 1: Enter your homework problem in an easy WYSIWYG (What you see is what you get) algebra editor:
Step 2: Let Algebrator solve it:
Step 3: Ask for an explanation for the steps you don't understand:

Algebrator can solve problems in all the following areas:
• simplification of algebraic expressions (polynomials (simplifying, degree, division...), exponential expressions, fractions, radicals (roots), absolute values...)
• factoring and expanding expressions
• finding LCM and GCF
• basic step-by-step arithmetic operations (adding, subtracting, multiplying and dividing)
• operations with complex numbers (simplifying, rationalizing complex denominators...)
• solving linear, quadratic and many other equations and inequalities (including log. and exponential)
• solving a system of two and three linear equations (including Cramer's rule)
• graphing curves (lines, parabolas, hyperbolas, circles, ellipses, equation and inequality solutions)
• graphing general functions
• operations with functions (composition, inverse, range, domain...)
• simplifying logarithms
• sequences (classifying progressions, find the nth term of an arithmetic progression...)
• basic geometry and trigonometry (similarity, calculating trig functions, right triangle...)
• arithmetic and other pre-algebra topics (ratios, proportions, measurements...)
• linear algebra (operations with matrices, inverse matrix, determinants...)
• statistics (mean, median, mode, range...)

System requirements: Any PC running Windows 10. Any Intel Mac running OS X.
{"url":"https://softmath.com/pmath.html","timestamp":"2024-11-10T12:11:20Z","content_type":"text/html","content_length":"21626","record_id":"<urn:uuid:1feb4a0e-929e-4565-a7fa-5cb872ba64eb>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00583.warc.gz"}
How To Calculate Density

Understanding Density: A Comprehensive Guide

Density is a fundamental property of matter that measures how closely packed its constituent particles are. It plays a crucial role in various scientific and industrial applications, ranging from material characterization to process engineering. This comprehensive guide aims to provide a thorough understanding of density, its calculation methods, and its significance in different fields.

Defining Density
Density is defined as the mass of a substance per unit volume. It expresses how much matter is contained within a given space. The SI unit of density is kilograms per cubic meter (kg/m³). Other commonly used units include grams per cubic centimeter (g/cm³), pounds per cubic foot (lb/ft³), and ounces per cubic inch (oz/in³).

Calculating Density
Calculating density is a straightforward process that involves measuring both the mass and volume of the substance. The formula for density is:

Density = Mass / Volume

Mass Measurement
The mass of a substance is determined using a weighing scale. The scale should be calibrated and accurate to ensure reliable measurements. The mass is typically expressed in kilograms or grams.

Volume Measurement
The volume of a substance can be measured using various techniques, depending on its physical form:
• Solids: Regular-shaped solids (e.g., cubes, cylinders) can have their volume calculated using geometric formulas. Irregularly shaped solids can be measured using methods like the water displacement technique.
• Liquids: The volume of liquids can be determined using calibrated containers, such as graduated cylinders or volumetric flasks.
• Gases: The volume of gases is typically measured at specific conditions of temperature and pressure using instruments like manometers or gas sensors.

Example Calculation
Suppose we have a material sample with a mass of 500 grams and a volume of 250 cubic centimeters. Using the formula, we can calculate the density:

Density = Mass / Volume = 500 g / 250 cm³ = 2 g/cm³

Therefore, the density of the sample is 2 grams per cubic centimeter.

Factors Affecting Density
The density of a substance is influenced by several factors:
• Composition: Different materials have different densities due to variations in their atomic masses and molecular structures.
• Temperature: For most substances, density decreases with increasing temperature as the particles expand and occupy a larger volume.
• Pressure: For gases, density increases with increasing pressure as the particles become more compact.

Significance of Density
Density is a crucial property with numerous applications in various fields:
• Material Characterization: Density is used to identify and classify materials, such as metals, plastics, and ceramics.
• Process Engineering: Density plays a significant role in process design, including fluid flow calculations, sedimentation, and filtration.
• Environmental Science: Density is used to study water quality, soil composition, and waste management.
• Medical Imaging: Density is used in medical imaging techniques like X-rays and CT scans to differentiate between different tissues and organs.

Frequently Asked Questions (FAQs)
Q: What is the difference between density and weight?
A: Density measures the amount of matter in a given space, while weight is the force of gravity acting on the mass of an object.

Q: Can a substance have zero density?
A: No, all substances have a positive density. Zero density would imply no matter within a volume, which is not possible.

Q: How can I determine the density of an object without measuring its volume?
A: If the object has a regular shape, its volume can be calculated using geometric formulas. For irregularly shaped objects, Archimedes' principle can be used, which involves submerging the object in a fluid of known density.

Q: What is the densest known substance?
A: Osmium is the densest known element, with a density of 22.59 g/cm³.

Q: What is the least dense known substance?
A: Aerogel is the least dense known solid, with a density as low as 0.003 g/cm³.
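Since the formula above is a one-liner, a small sketch makes it concrete (the function name and unit choices are illustrative, not from the article):

```python
# Minimal sketch of Density = Mass / Volume; names and units are illustrative.
def density(mass_g: float, volume_cm3: float) -> float:
    """Return density in g/cm^3 from mass in grams and volume in cm^3."""
    if volume_cm3 <= 0:
        raise ValueError("volume must be positive")
    return mass_g / volume_cm3

# Reproduces the worked example above:
print(density(500, 250))  # 2.0
```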
{"url":"https://www.svgdesignresources.com/how-to-calculate-density/","timestamp":"2024-11-08T21:27:06Z","content_type":"text/html","content_length":"66786","record_id":"<urn:uuid:35538d35-34df-4f27-9268-7f5fe9d37884>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00041.warc.gz"}
Competition - Evolutionary Submodular Optimisation, GECCO 2024

Submodular functions play a key role in the area of optimisation as they allow us to model many real-world optimisation problems. Submodular functions model a wide range of problems where the benefit of adding solution components diminishes with the addition of elements. They form an important class of optimization problems and are extensively studied in the literature. Problems that may be formulated in terms of submodular functions include influence maximization in social networks, maximum coverage, maximum cut in graphs, the sensor placement problem, and sparse regression.

In recent years, the design and analysis of evolutionary algorithms for submodular optimisation problems has gained increasing attention in the evolutionary computation and artificial intelligence community. The aim of the competition is to provide a platform for researchers working on evolutionary computing methods who are interested in benchmarking them on a wide class of combinatorial optimization problems. The competition will benchmark evolutionary computing techniques for submodular optimisation problems and enable performance comparison for this type of problem. It provides an ideal vehicle for researchers and students to design new algorithms and/or benchmark their existing approaches on a wide class of combinatorial optimization problems captured by submodular functions. A description of the different submodular optimization problems included in this competition can be found in

• F. Neumann, A. Neumann, C. Qian, V.A. Do, J. de Nobel, D. Vermetten, S. S. Ahouei, F. Ye, H. Wang, T. Bäck (2023): Benchmarking Algorithms for Submodular Optimization Problems Using IOHProfiler. In: [CoRR abs/2302.01464].

Technical details: We will use IOHprofiler for the competition. The problems are integrated into IOHprofiler and you can run your algorithms using it for evaluation and obtaining your results. The problem implementations and training benchmarks are available in IOHprofiler. The submodular problem implementations can be found in the IOHprofiler Problem directory, and the C++ code can be found on the linked page, together with instructions on how to use IOHprofiler. Links to other examples and tutorials on IOHprofiler, including a notebook example on how to access the submodular problems using IOHexperimenter, can be found there as well.

Comparing to baselines: To compare your algorithm's performance to our baseline algorithms, you can make use of the IOHanalyzer. Here, you can load the sets of algorithms we ran on the MaxCut and MaxCoverage problems. This can be done by selecting the relevant 'source' in the 'Load from repositories' box on the start page. These sources can be identified by the prefix 'submodular', and each contains data for a set of instances, specifically Instances 2100-2127 for MaxCoverage and Instances 2000-2004 for MaxCut. For more information on the use of IOHanalyzer, please see this page.

We will evaluate all submissions on different instances of the submodular problems provided as part of the submodular problem implementations. The algorithms will be evaluated from the fixed-budget perspective. We will consider two categories and will determine a winner for each of the two categories.

• In the low budget category each algorithm will be run for 10,000 fitness evaluations and the best solution obtained during the run will be used as the result.
• In the high budget category each algorithm will be run for 100,000 fitness evaluations and the best solution obtained during the run will be used as the result.

Each algorithm will be run on each test evaluation instance 10 times in each category. Algorithms will be ranked for each considered instance. The winner is the approach that has the smallest average rank in each category, i.e., low budget or high budget. As a default, we assume that each submission should be considered for both categories. If a submission should only be considered in one of the categories, i.e., either low or high budget, we ask the contributors to clearly state this in the submission email.

Submission deadline: 12 July 2024, AoE.

To submit to the competition, we recommend creating a publicly visible repository (e.g. on Zenodo) where you upload the code of your algorithm as a single zip-file (named according to your algorithm name and the category you are submitting to) as well as a readme which contains instructions on how to execute it. Please make sure this readme is sufficiently detailed to ease reproducibility. Additionally, you can upload the performance data on the publicly available problem instances as well. Finally, please email the link to your repository to Aneta Neumann.

Aneta Neumann, University of Adelaide, Australia
Saba Sadeghi Ahouei, University of Adelaide, Australia
Diederick Vermetten, Leiden University, The Netherlands
Jacob de Nobel, Leiden University, The Netherlands
Thomas Bäck, Leiden University, The Netherlands
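As a rough illustration of how an entry might interact with these instances, here is a hedged Python sketch using the ioh package. The ProblemClass.GRAPH argument and the exact get_problem call are assumptions based on the IOHexperimenter documentation, not on the competition text above; consult the linked notebook example for the authoritative usage.

```python
# Hedged sketch: a random-search baseline on one MaxCut instance under the
# low-budget rules above. Assumes the ioh Python package; ProblemClass.GRAPH
# and the get_problem signature are assumptions -- see the official notebook.
import random
import ioh

# Problem id 2000 is one of the MaxCut instances mentioned above.
problem = ioh.get_problem(2000, problem_class=ioh.ProblemClass.GRAPH)

budget = 10_000  # low-budget category
for _ in range(budget):
    x = [random.randint(0, 1) for _ in range(problem.meta_data.n_variables)]
    problem(x)

# IOHprofiler tracks the best-so-far value internally.
print(problem.state.current_best.y)
```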
{"url":"https://cs.adelaide.edu.au/~optlog/CompetitionESO2024.php","timestamp":"2024-11-03T10:21:31Z","content_type":"text/html","content_length":"17733","record_id":"<urn:uuid:5d0ba571-4b8f-4e1b-8a58-9a27b6b1d77d>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00122.warc.gz"}
scipy.stats.moment(a, order=1, axis=0, nan_policy='propagate', *, center=None, keepdims=False)[source]

Calculate the nth moment about the mean for a sample.

A moment is a specific quantitative measure of the shape of a set of points. It is often used to calculate coefficients of skewness and kurtosis due to its close relationship with them.

Parameters:

a : array_like
Input array.

order : int or 1-D array_like of ints, optional
Order of central moment that is returned. Default is 1.

axis : int or None, default: 0
If an int, the axis of the input along which to compute the statistic. The statistic of each axis-slice (e.g. row) of the input will appear in a corresponding element of the output. If None, the input will be raveled before computing the statistic.

nan_policy : {'propagate', 'omit', 'raise'}
Defines how to handle input NaNs.
• propagate: if a NaN is present in the axis slice (e.g. row) along which the statistic is computed, the corresponding entry of the output will be NaN.
• omit: NaNs will be omitted when performing the calculation. If insufficient data remains in the axis slice along which the statistic is computed, the corresponding entry of the output will be NaN.
• raise: if a NaN is present, a ValueError will be raised.

center : float or None, optional
The point about which moments are taken. This can be the sample mean, the origin, or any other point. If None (default) compute the center as the sample mean.

keepdims : bool, default: False
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns:

n-th moment about the center : ndarray or float
The appropriate moment along the given axis or over all values if axis is None. The denominator for the moment calculation is the number of observations; no degrees of freedom correction is done.

Notes

The k-th moment of a data sample is:

\[m_k = \frac{1}{n} \sum_{i = 1}^n (x_i - c)^k\]

where n is the number of samples, and c is the center around which the moment is calculated. This function uses exponentiation by squares [1] for efficiency.

Note that, if a is an empty array (a.size == 0), array moment with one element (moment.size == 1) is treated the same as scalar moment (np.isscalar(moment)). This might produce arrays of unexpected shape.

Beginning in SciPy 1.9, np.matrix inputs (not recommended for new code) are converted to np.ndarray before the calculation is performed. In this case, the output will be a scalar or np.ndarray of appropriate shape rather than a 2D np.matrix. Similarly, while masked elements of masked arrays are ignored, the output will be a scalar or np.ndarray rather than a masked array with mask=False.

Examples

>>> from scipy.stats import moment
>>> moment([1, 2, 3, 4, 5], order=1)
0.0
>>> moment([1, 2, 3, 4, 5], order=2)
2.0
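As a brief supplement to the parameter descriptions above, the following sketch exercises axis, nan_policy, and the array-valued order together (the printed values are hand-computed from the definition and worth verifying against your SciPy version):

```python
import numpy as np
from scipy.stats import moment

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, np.nan]])

# Second central moment of each column, omitting the NaN:
print(moment(a, order=2, axis=0, nan_policy='omit'))    # [2.25 2.25 0.  ]

# Several orders at once, taken about the origin via `center`:
print(moment([1, 2, 3, 4, 5], order=[1, 2], center=0))  # [ 3. 11.]
```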
{"url":"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.moment.html","timestamp":"2024-11-11T09:45:34Z","content_type":"text/html","content_length":"32108","record_id":"<urn:uuid:5ad45ded-741d-4c37-83ca-c989d0e6d362>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00588.warc.gz"}
polexpr reference

Syntax overview via examples

The syntax to define a new polynomial is:

\poldef polname(x):= expression in variable x;

The expression will be parsed by the services of xintexpr, with some polynomial-aware functions added to its syntax; they are described in detail below. The parser accepts and handles exactly arbitrarily big integers and fractions. xintexpr does not automatically reduce fractions to lowest terms, and, so far (but this may change in future), neither does \poldef. See rdcoeffs() and the macro \PolReduceCoeffs.

• In place of x an arbitrary dummy variable is authorized, i.e. per default one of a, .., z, A, .., Z (more letters can be declared under Unicode engines).
• polname is a word (no space) built with letters and digits; the @, _ and ' characters are also allowed. The polynomial name must start with a letter. For guidelines regarding _ and @ see Technicalities.
• The colon before the equality sign is optional and its (reasonable) catcode does not matter.
• The semi-colon at the end of the expression is mandatory. It is not allowed to arise from expansion (despite the fact that the expression itself will be parsed using only expansion); it must be "visible" immediately.

There are some potential problems (refer to the Technicalities section at the bottom of this page) with the semi-colon as expression terminator, so an alternative syntax is provided, which avoids it:

\PolDef[optional letter]{<polname>}{<expr. using letter as indeterminate>}

The \PolDef optional first argument defaults to x and must be used as the indeterminate in the expression.

\poldef f(x):= 1 - x + quo(x^5,1 - x + x^2);
\PolDef{f}{1 - x + quo(x^5,1 - x + x^2)}

Both parse the polynomial expression, and they create internally macros serving to incarnate the polynomial, its coefficients, and the associated polynomial function. The polynomial can then be used in further polynomial definitions, be served as argument to package macros, or appear as a variable in various functions described later.

Both the function quo() (as shown in the example above) and the infix operator / are mapped to the Euclidean quotient. This usage of / to stand for the Euclidean quotient is deprecated and reserved for a (somewhat improbable) possible extension of the package to handle rational functions as well.

Tacit multiplication rules make the parser, when encountering 1/2 x^2, skip the space and thus handle it as 1/(2*x^2). But then it gives zero, because / stands for the Euclidean quotient operation. Thus one must use (1/2)x^2 or 1/2*x^2 or (1/2)*x^2 for disambiguation: x - 1/2*x^2 + 1/3*x^3.... It is simpler to move the denominator to the right: x - x^2/2 + x^3/3 - ....

It is worth noting that 1/2(x-1)(x-2) suffers the same issue: xintexpr's tacit multiplication always "ties more", hence this gets interpreted as 1/(2*(x-1)*(x-2)), not as (1/2)*(x-1)*(x-2), and then gives zero by polynomial division. Thus, in such cases, use one of (1/2)(x-1)(x-2), 1/2*(x-1)(x-2) or (x-1)(x-2)/2.

\poldef P(x):=...; defines P as a polynomial function, which can be used inside \xinteval, even as:

\xinteval{P(Q1 + Q2 + Q3)}

where Q1, Q2, Q3 are polynomials.
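For instance (a sketch; the concrete polynomials are our own choice, not taken from the package manual):

\poldef P(x)  := x^2;
\poldef Q1(x) := x;
\poldef Q2(x) := 1;
\poldef Q3(x) := -2x;
\xinteval{P(Q1 + Q2 + Q3)}% P evaluated at the polynomial 1 - x, prints pol([1, -2, 1])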
The evaluation result, if not a scalar, will then be printed as pol([c0,c1,...]), which stands for a polynomial variable having the listed coefficients; see pol().

Indeed, as seen above with Q1, the symbol P also stands for a variable of polynomial type, which serves as argument to polynomial-specific functions such as deg() or polgcd(), or as argument to other polynomials (as above), or even simply stands for its own in algebraic expressions such as:

\poldef Q(z):= P^2 + z^10;

Notice that in the above, the (z) part is mandatory, as it informs \poldef of the letter used for the indeterminate. In the above P(z)^2 would give the same as P^2, but the latter is slightly more efficient.

One needs to acquire a good understanding of when the symbol P will stand for a function and when it will stand for a variable.

□ If P and Q are both declared polynomials then:

(P+Q)(3)% <--- attention, does (P+Q)*3, not P(3)+Q(3)

is currently evaluated as (P+Q)*3, because P+Q is not known as a function, but only as a variable of polynomial type. Note that evalp(P+Q,3) gives as expected the same as P(3)+Q(3).

□ Also:

(P)(3)% <--- attention, does P*3, not P(3)

will compute P*3, because one can not in current xintexpr syntax enclose a function name in parentheses: consequently it is the variable which is used here.

There is a meager possibility that in future some internal changes to xintexpr would let (P)(3) actually compute P(3) and (P+Q)(3) compute P(3) + Q(3), but note that (P)(P) will then do P(P) and not P*P, the latter, current interpretation, looking more intuitive. Anyway, do not rely too extensively on tacit * and use explicit (P+Q)*(1+2) if this is what is intended.

\PolLet{g}={f}

saves a copy of f under name g. Also usable without =. Has exactly the same effect as \poldef g(x):=f; or \poldef g(w):=f(w);.

\poldef f(z):= f^2;

redefines f in terms of itself. Prior to 0.8 one needed the right-hand side to be f(z)^2. Also, now sqr(f) is possible (also sqr(f(x)) but not sqr(f)(x)). It may look strange that an indeterminate variable is used on the left-hand side even though it may be absent from the right-hand side, as it seems to define f always as a polynomial function. This is a legacy of the pre-0.8 context.

Note that f^2(z) or sqr(f)(z) will give a logical but perhaps unexpected result: first f^2 is computed, then the opening parenthesis is seen, which inserts a tacit multiplication *, so in the end it is as if the input had been f^2 * z. Although f is both a variable and a function, f^2 is computed as a polynomial variable and ceases being a function.

\poldef f(T):= f(f);

again modifies f. Here it is used both as variable and as a function. Prior to 0.8 it needed to be f(f(T)).

\poldef k(z):= f-g(g^2)^2;

if everybody followed, this should now define the zero polynomial… And f-sqr(g(sqr(g))) computes the same thing. We can check this in a typeset document like this:

\poldef f(x):= 1 - x + quo(x^5,1 - x + x^2);%
\PolLet{g}={f}%
\poldef f(z):= f^2;%
\poldef f(T):= f(f);%
\poldef k(w):= f-sqr(g(sqr(g)));%
$$f(x) = \vcenter{\hsize10cm \PolTypeset{f}} $$
$$g(z) = \PolTypeset{g} $$
$$k(z) = \PolTypeset{k} $$
\immediate\write128{f(x)=\PolToExpr{f}}% ah, here we see it also

\PolDiff{f}{f'}
\poldef f'(x):= diff1(f); (new at 0.8)

Both set f' (or any other chosen name) to the derivative of f. This is not done automatically. If some new definition needs to use the derivative of some available polynomial, that derivative polynomial must have been previously defined: something such as f'(3)^2 will not work without a prior definition of f'.
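A small illustration of the prior-declaration route (a sketch; the polynomial and names are ours):

\poldef f(x) := x^3 - 2x;
\PolDiff{f}{f'}%   f' is now the polynomial 3x^2 - 2
\xinteval{f'(3)^2}% prints 625, i.e. (3*9 - 2)^2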
But one can now use diff1(f) for on-the-spot construction with no permanent declaration, so here evalp(diff1(f),3)^2. And diff1(f)^2 is the same as f'^2, assuming here f' was declared to be the derived polynomial. Notice that the name diff1() is experimental and may change. Use \PolDiff{f}{f'} as the stable interface.

\PolTypeset typesets (switching to math mode if in text mode):

\poldef f(x):=(3+x)^5;%
$$f(z) = \PolTypeset[z]{f} $$
$$f'(z) = \PolTypeset[z]{f'} $$
$$f''(z) = \PolTypeset[z]{f''} $$
$$f'''(z)= \PolTypeset[z]{f'''} $$

See its documentation for the configurability via macros. Since 0.8 \PolTypeset accepts directly an expression; its argument does not have to be a pre-declared polynomial name.

\PolToExpr expandably (contrarily to \PolTypeset) produces c_n*x^n + ... + c_0, starting from the leading coefficient. The + signs are omitted if followed by negative coefficients. This is useful for console or file output. This syntax is Maple and PSTricks \psplot[algebraic] compatible; and also it is compatible with \poldef input syntax, of course. See \PolToExprCaret for configuration of the ^, for example to use rather ** for Python syntax compliance.

Changed at 0.8: the ^ in output is by default of catcode 12, so in a draft document one can use \PolToExpr{P} inside the typesetting flow (without requiring math mode, where the * would be funny and ^12 would only put the 1 as exponent anyhow; but arguably in text mode the + and - are not satisfactory for math, except sometimes in monospace typeface, and anyhow TeX is unable to break the expression across lines, barring special help). See \PolToExpr{<pol. expr.>} and related macros for customization. Extended at 0.8 to accept as argument not only the name of a polynomial variable but more generally any polynomial expression.

Using defined polynomials in floating point context

Exact manipulations with fractional coefficients may quickly lead to very large denominators. For numerical evaluations, it is advisable to use a floating point context. But for the polynomial to be usable as a function in floating point context, an extra step beyond \poldef is required: see \PolGenFloatVariant. Then the \xintfloateval macro from xintexpr will recognize the polynomial as a genuine function (with already float-rounded coefficients, and using a Horner scheme). But \PolGenFloatVariant must be used each time the polynomial gets redefined or a new polynomial is created out of it.

Functions such as for example deg(), which handle the polynomial as an entity, are only available within the \poldef and \xinteval (or \xintexpr) parsers. Inside \xintfloateval a polynomial can only serve as a numerical function (and only after declaration via \PolGenFloatVariant), and not as a variable.

In some cases one may wish to replace a polynomial having acquired very big fractional coefficients with a new one whose coefficients have been float-rounded. See \PolMapCoeffs, which can be used for example with the \xintFloat macro from the xintfrac package to achieve this.

The polexpr 0.8 extensions to the \xintexpr syntax

All the syntax elements described in this section can be used in the \xintexpr/\xinteval context (where polynomials can be obtained from the pol([]) constructor, once polexpr is loaded): their usage is not limited to only the \poldef context.

If a variable myPol defined via \xintdefvar turns out to be a polynomial, the difference with those declared via \poldef will be:

1. myPol is not usable as a function, but only as a variable.
Attention that f(x), if f is only a variable (even a polynomial one), will actually compute f * x.

2. myPol is not known to the polexpr package, hence for example the macros to achieve localization of its roots are unavailable. In a parallel universe I perhaps would have implemented this expandably, which means it could then be accessible with syntax such as rightmostroot(pol([42,1,34,2,-8,1])), but…

Warning about instability of the new syntax

Consider the entirety of this section as UNSTABLE and EXPERIMENTAL (except perhaps regarding +, - and *). And this applies even to items not explicitly flagged with one of unstable, Unstable, or UNSTABLE, which only reflect that documentation was written over a period of time exceeding one minute, enough for the author mood changes to kick in.

It is hard to find good names at the start of a life-long extension program of functionalities, and perhaps in future it will be preferred to rename everything or give to some functions other meanings. Such quasi-complete renamings happened already a few times during the week devoted to development.

Infix operators +, -, *, /, **, ^

As has been explained in the Syntax overview via examples section, these infix operators have been made polynomial-aware, not only in the \poldef context, but generally in any \xintexpr/\xinteval context, inclusive of \xintdeffunc. Conversely, functions declared via \xintdeffunc and making use of these operators will automatically be able to accept polynomials declared from \poldef as variables.

Usage of / for euclidean division of polynomials is deprecated. Only in case of a scalar denominator is it to be considered stable. Please use rather quo().

Experimental infix operators //, /:

Here is the tentative behaviour of A//B according to types:

□ A non scalar and B non scalar: euclidean quotient,
□ A scalar and B scalar: floored division,
□ A scalar and B non scalar: produces zero,
□ A non scalar and B scalar: coefficient per coefficient floored division.

This is an experimental overloading of the // and /: from \xintexpr. The behaviour in the last case, but not only, is to be considered unstable. The alternative would be for A//B with B scalar to act as quo(A,B). But we have currently chosen to let //B for a scalar B act coefficient-wise on the numerator. Beware that this means A//B performs an actual euclidean division only when B is non-scalar.

The /: operator provides the associated remainder, so A is always reconstructed from (A//B)*B + A/:B. If : is an active character use /\string: (it is safer to use /\string : if it is not known whether : has catcode other, letter, or is active; but note that /: is fine and needs no precaution if : has catcode letter, it is only an active : which is problematic, like for all other characters possibly used in an expression).

As explained above, there are (among other things) hesitations about behaviour with pol2 a scalar.

Comparison operators <, >, <=, >=, ==, !=

NOT YET IMPLEMENTED

As the internal representation by xintfrac and xintexpr of fractions does not currently require them to be in reduced terms, such operations would be a bit costly as they could not benefit from the \pdfstrcmp engine primitive. In fact xintexpr does not use it yet anywhere, even for normalized pure integers, although it could speed up significantly certain aspects of core arithmetic. Equality of polynomials can currently be tested by computing the difference, which is a bit costly. And of course the deg() function allows comparing degrees.
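For instance, such an equality test via the difference could look like this (a sketch with polynomials of our choosing; recall that deg() is negative exactly for the zero polynomial):

\poldef A(x) := (x+1)^2;
\poldef B(x) := x^2 + 2x + 1;
\xinteval{deg(A - B) < 0}% prints 1: A - B is the zero polynomial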
In this context note the following, for branching:

(deg(Q)) ?? { zero } { non-zero scalar } { non-scalar }

(the ?? operator of xintexpr chooses its first, second or third branch according to whether deg(Q) is negative, zero or positive).

pol(<nutple expression>)

This converts a nutple [c0,c1,...,cN] into the polynomial variable having these coefficients. Attention that the square brackets are mandatory, except of course if the argument is actually an expression producing such a "nutple". Currently, this process will not normalize the coefficients (such as reducing to lowest terms); it only trims out the leading zero coefficients.

Inside \xintexpr, this is the only (allowed) way to create ex nihilo a polynomial variable; inside \poldef it is an alternative input syntax which is more efficient than the input c0 + c1 * x + c2 * x^2 + ....

Whenever an expression with polynomials collapses to a constant, it becomes a scalar. There is currently no distinction during the parsing of expressions by \poldef or \xintexpr between constant polynomial variables and scalar variables. Naturally, \poldef can be used to declare a constant polynomial P; then P can also be used as a function having a value independent of its argument, but as a variable it is non-distinguishable from a scalar (of course functions such as deg() tacitly consider scalars to be constant polynomials).

Notice that we tend to use the vocable "variable" to refer to arbitrary expressions used as function arguments, without implying that we are actually referring to pre-declared variables in the sense of \xintdefvar.

lpol(<nutple expression>)

This converts a nutple [cN,...,c1,c0] into the polynomial variable having these coefficients, with leading coefficients coming first in the input. Attention that the square brackets are mandatory, except of course if the argument is actually an expression producing such a "nutple". Currently, this process will not normalize the coefficients (such as reducing to lowest terms); it only trims out the leading zero coefficients. NAME UNSTABLE

It can be used in \poldef as an alternative input syntax, which is more efficient than using the algebraic notation with monomials. (new with 0.8.1; an empty nutple will cause breakage)

\xinteval{<pol. expr.>}

This is documented here for lack of a better place: it evaluates the polynomial expression then outputs the "string" pol([c0, c1, ..., cN]) if the degree N is at least one (and the usual scalar output else). The "pol" word uses letter catcodes, which is actually mandatory for this output to be usable as input, but it does not make sense to use this inside \poldef or \xintexpr as it means basically executing pol(coeffs(..expression..)), which is but a convoluted way to obtain the same result as (..expression..) (the parentheses delimiting the polynomial expression).

For example, \xinteval{(1+pol([0,1]))^10} expands (in two steps) to:

pol([1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1])

You do need to load polexpr for this, else of course pol([]) remains unknown to \xinteval{}, as well as the polynomial algebra! This example can also be done as \xinteval{subs((1+x)^10, x=pol([0,1]))}.

I hesitated using as output the polynomial notation as produced by \PolToExpr{}, but finally opted for this.
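Two more small pol([]) examples in the same spirit (our own):

\xinteval{deg(pol([5,0,3]))}%     prints 2: pol([5,0,3]) is 5 + 3x^2
\xinteval{pol([1,2])*pol([1,2])}% prints pol([1, 4, 4]), i.e. (1+2x)^2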
evalp(<pol. expr.>, <pol. expr.>)

Evaluates the first argument as a polynomial function of the second. Usually the second argument will be scalar, but this is not required:

\poldef K(x):= evalp(-3x^3-5x+1,-27x^4+5x-2);

If the first argument is an already declared polynomial P, use rather the functional form P() (which can accept a numerical as well as a polynomial argument) as it is more efficient. One can also use subs() syntax [1] (see the xintexpr documentation):

\poldef K(x):= subs(-3y^3-5y+1, y = -27x^4+5x-2);

but evalp() will use a Horner evaluation scheme, which is usually more efficient.

name unstable: poleval? evalpol? peval? evalp? value? eval? evalat? eval1at2? evalat2nd? Life is so complicated when one asks questions. Not everybody does, though, as is amply demonstrated these days.

syntax unstable: I am hesitating about permuting the order of the arguments.

deg(<pol. expr.>)

As \xintexpr does not yet support infinities, the degree of the zero polynomial is -1. Beware that this breaks additivity of degrees, but deg(P)<0 correctly detects the zero polynomial, and deg(P)<=0 detects scalars.

coeffs(<pol. expr.>)

Produces the nutple [c0,c1,...,cN] of coefficients. The highest degree coefficient is always non-zero (except for the zero polynomial…).

name unstable: I am considering in particular using polcoeffs() to avoid having to overload coeffs() in future when the matrix type will be added to xintexpr.

lcoeffs(<pol. expr.>)

Produces the nutple [cN,...,c1,c0] of coefficients, starting with the highest degree coefficient. (new with 0.8.1)

coeff(<pol. expr.>, <num. expr.>)

As expected. Produces zero if the numerical index is negative or higher than the degree.

name, syntax and output unstable: I am hesitating with a coeff(n,pol) syntax and also perhaps using polcoeff() in order to avoid having to overload coeff() when the matrix type will be added to xintexpr. The current behaviour is at odds with legacy \PolNthCoeff{<polname>}{<index>} regarding negative indices. Accessing leading or sub-leading coefficients can be done with other syntax, see lc(<pol. expr.>), and in some contexts it is useful to be able to rely on the fact that coefficients with negative indices do vanish, so I am for the time being maintaining this.

lc(<pol. expr.>)

The leading coefficient. The same result can be obtained from coeffs(pol)[-1], which shows also how to generalize to access sub-leading coefficients. See the xintexpr documentation for Python-like indexing syntax.

monicpart(<pol. expr.>)

Divides by the leading coefficient, except that monicpart(0)==0. Currently the coefficients are reduced to lowest terms (contrarily to legacy behaviour of \PolMakeMonic), and additionally the xintfrac \xintREZ macro is applied, which extracts powers of ten from numerator or denominator and stores them internally separately. This is generally beneficial to the efficiency of multiplication.

cont(<pol. expr.>)

The (fractional) greatest common divisor of the polynomial coefficients. It is always produced as an irreducible (non-negative) fraction. According to the Gauss theorem, the content of a product is the product of the contents.

name and syntax unstable: At 0.8 it was created as icontent() to match the legacy macro \PolIContent, whose name in 2018 was chosen in relation to Maple's function icontent(), possibly because at that time I had not seen that Maple also had a content() function. Name changed at 0.8.1. It will change syntax if in future multivariate polynomials are supported, and icontent() will then make a come-back.

primpart(<pol. expr.>)

The quotient (except for the zero polynomial) by cont(<pol. expr.>). This is thus a polynomial with integer coefficients having 1 as greatest common divisor. The sign of the leading coefficient is the same as in the original. And primpart(0)==0. The trailing zeros of the integer coefficients are extracted into a power of ten exponent part, in the internal representation.
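To illustrate a few of these accessors on one concrete polynomial (a sketch of our own):

\poldef P(x) := 6x^2 + 4;
\xinteval{coeffs(P)}%   prints [4, 0, 6]
\xinteval{lc(P)}%       prints 6
\xinteval{cont(P)}%     prints 2, the gcd of the coefficients
\xinteval{primpart(P)}% prints pol([2, 0, 3]), i.e. 2 + 3x^2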
quorem(<pol. expr.>, <pol. expr.>)

Produces a nutple [Q,R] with Q the euclidean quotient and R the remainder.

quo(<pol. expr.>, <pol. expr.>)

The euclidean quotient. The deprecated pol1/pol2 syntax computes the same polynomial.

rem(<pol. expr.>, <pol. expr.>)

The euclidean remainder. If pol2 is a (non-zero) scalar, this is zero. There is no infix operator associated to this, for lack of evident notation. Please advise. /: can be used if one is certain that pol2 is of degree at least one. But read the warning about it being unstable even in that case.

prem(<pol. expr. 1>, <pol. expr. 2>)

Produces a nutple [m, spR] where spR is the (special) pseudo euclidean remainder. Its description is:

□ the standard euclidean remainder R is spR/m,
□ m = b^f with b equal to the absolute value of the leading coefficient of pol2,
□ f is the number of non-zero coefficients in the euclidean quotient, if deg(pol2)>0 (even if the remainder vanishes).

If pol2 is a scalar however, the function outputs [1,0].

With these definitions one can show that if both pol1 and pol2 have integer coefficients, then this is also the case of spR, which makes its interest (and also m*Q has integer coefficients, with Q the euclidean quotient, if deg(pol2)>0). Also, prem() is computed faster than rem() for such integer-coefficients polynomials.

If you want the euclidean remainder R evaluated via spR/m (which may be faster, even with non-integer coefficients) use the subs(last(x)/first(x),x=prem(P,Q)) syntax, as it avoids computing prem(P,Q) twice. This does the trick both in \poldef and in \xintdefvar. However, as is explained in the xintexpr documentation, using such syntax in an \xintdeffunc is (a.t.t.o.w) illusory, due to technicalities of how subs() gets converted into nested expandable macros. One needs an auxiliary function like this:

\xintdeffunc lastoverfirst(x):=last(x)/first(x);
\xintdeffunc myR(P,Q):=lastoverfirst(prem(P,Q));

Then, myR(pol1,pol2) will evaluate prem(pol1,pol2) only once and compute a polynomial identical to the euclidean remainder (internal representations of coefficients may differ).

In this case of integer-coefficients polynomials, the polexpr internal representation of the integer coefficients in the pseudo remainder will be with unit denominators only if that was already the case for those of pol1 and pol2 (no automatic reduction to lowest terms is made prior to or after computation).

Pay attention here that b is the absolute value of the leading coefficient of pol2. Thus the coefficients of the pseudo-remainder have the same signs as those of the standard remainder. This diverges from Maple's function with the same name.

divmod(<pol. expr. 1>, <pol. expr. 2>)

Overloads the scalar divmod() and associates it with the experimental // and /: as extended to the polynomial type. In particular, when both pol1 and pol2 are scalars, this is the usual divmod() (as in Python), and for pol1 and pol2 non-constant polynomials, this is the same as quorem(). Highly unstable overloading of \xinteval's divmod().

mod(<pol. expr. 1>, <pol. expr. 2>)

The R of the divmod() output. Same as the R of quorem() when the second argument pol2 is of degree at least one. Highly unstable overloading of \xinteval's mod().
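A small check of quorem() (our own example):

\poldef A(x) := x^4 - 1;
\poldef B(x) := x^2 + x;
\xinteval{quorem(A, B)}% prints [pol([1, -1, 1]), pol([-1, -1])]
% i.e. quotient x^2 - x + 1 and remainder -x - 1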
polgcd(<pol. expr. 1>, <pol. expr. 2>, ...)

Evaluates to the greatest common polynomial divisor of all the polynomial inputs. The output is a primitive (in particular, with integer coefficients) polynomial. It is zero if and only if all inputs vanish.

Attention, there must be either at least two polynomial variables, or alternatively only one argument, which then must be a bracketed list or some expression or variable evaluating to such a "nutple" whose items are polynomials (see the documentation of the scalar gcd() in xintexpr). The two-variable case could have been (and was, during development) defined at user level like this:

\xintdeffunc polgcd_(P,Q):=
\xintdeffunc polgcd(P,Q):=polgcd_(primpart(P),primpart(Q));%

This is basically what is done internally for two polynomials, up to some internal optimizations.

I hesitate between returning a primitive or a monic polynomial. Maple returns a primitive polynomial if all inputs [2] have integer coefficients, else it returns a monic polynomial, but it is complicated technically for us to add such a check and it would add serious overhead. Internally, computations are done using primitive integer-coefficients polynomials (as can be seen in the function template above). So I decided finally to output a primitive polynomial, as one can always apply monicpart() to it. Attention that this is at odds with the behaviour of the legacy \PolGCD (non-expandable) macro.

resultant(<pol. expr. 1>, <pol. expr. 2>)

polpowmod(<pol. expr. 1>, <num. expr.>, <pol. expr. 2>)

Modular exponentiation: mod(pol1^N, pol2), in a more efficient manner than first computing pol1^N then reducing modulo pol2. Attention that this is using the mod() operation, whose current experimental status is as follows:

□ if deg(pol2)>0, the euclidean remainder operation,
□ if pol2 is a scalar, coefficient-wise reduction modulo pol2.

This is currently implemented at high level via \xintdeffunc and recursive definitions, which were copied over from a scalar example in the xintexpr manual:

\xintdeffunc polpowmod_(P, m, Q) :=
    (m == 1)?
    % m=1: return P modulo Q
    { mod(P,Q) }
    % m > 1: test if odd or even and do recursive call
    { odd(m)?
      { mod(P*sqr(polpowmod_(P, m//2, Q)), Q) }
      { mod( sqr(polpowmod_(P, m//2, Q)), Q) }
    };%
\xintdeffunc polpowmod(P, m, Q) := (m)?{polpowmod_(P, m, Q)}{1};%

Negative exponents are not currently implemented. For example:

\xinteval{subs(polpowmod(1+x, 100, x^7), x=pol([0,1]))}
\xinteval{subs(polpowmod(1+x, 20, 10), x=pol([0,1]))}

produce respectively:

pol([1, 100, 4950, 161700, 3921225, 75287520, 1192052400])
pol([1, 0, 0, 0, 5, 4, 0, 0, 0, 0, 6, 0, 0, 0, 0, 4, 5, 0, 0, 0, 1])

rdcoeffs(<pol. expr.>)

This operates on the internal representation of the coefficients, reducing them to lowest terms.

rdzcoeffs(<pol. expr.>)

This operates on the internal representation of the coefficients, reducing them to lowest terms then extracting from numerator or denominator the maximal power of ten, to be stored as a decimal exponent. This is sometimes favourable to more efficient polynomial algebra computations.

diff1(<pol. expr.>)

The first derivative. name UNSTABLE: this name may be used in future for the partial derivative with respect to a first variable.

diff2(<pol. expr.>)

The second derivative. name UNSTABLE: this name may be used in future for the partial derivative with respect to a second variable.

diffn(<pol. expr. P>, <num. expr. n>)

The n-th derivative of P. For n<0 it computes iterated primitives vanishing at the origin. The coefficients are not reduced to lowest terms. name and syntax UNSTABLE: I am also considering reversing the order of the arguments.

antider(<pol. expr. P>)

The primitive of P with no constant term. Same as diffn(P,-1).

intfrom(<pol. expr. P>, <pol. expr. c>)

The primitive of P vanishing at c, i.e. \int_c^x P(t)dt. Also c can be a polynomial… so if c is the monomial x this will give zero!
Allowing a general polynomial variable for c adds a bit of overhead to the case of a pure scalar. So I am hesitating maintaining this feature, whose interest appears dubious. As the two arguments are both allowed to be polynomials, if by inadvertence one exchanges the two, there is no error, but the meaning of intfrom(c,P) is completely otherwise, as it produces c*(x - P) if c is a scalar:

>>> &pol
pol mode (i.e. function definitions use \poldef)
>>> P(x):=1+x^2;
P = x^2+1
--> &GenFloat(P) lets P become usable as function in fp mode
--> &ROOTS(P) (resp. &ROOTS(P,N)) finds all rational roots exactly and all
    irrational roots with at least 10 (resp. N) fractional digits
>>> intfrom(P,1);
@_1 pol([-4/3, 1, 0, 1/3])
>>> intfrom(1,P);
@_2 pol([-1, 1, -1])
>>> &bye

integral(<pol. expr. P>, [<pol. expr. a>, <pol. expr. b>])

\int_a^b P(t)dt. The brackets here are not denoting an optional argument but a mandatory nutple argument [a, b] with two items. No recoverable-from error check is done on the input syntax. The input can be an xintexpr variable which happens to be a nutple with two items, or any expression which evaluates to such a nutple.

a and b are not restricted to be scalars; they are allowed to be themselves polynomial variables or even polynomial expressions. To compute \int_{x-1}^x P(t)dt it is more efficient to use intfrom(P, x-1). Similarly, to compute \int_x^{x+1} P(t)dt, use -intfrom(P, x+1). Am I right to allow general polynomials a and b and hence add overhead to the pure scalar case?

Non-expandable macros

At 0.8 polexpr is usable with Plain TeX and not only with LaTeX. Some examples given in this section may be using LaTeX syntax such as \renewcommand.

\poldef polname(letter):= expression using the letter as indeterminate;

This evaluates the polynomial expression and stores the coefficients in a private structure accessible later via other package macros, used with argument polname. Of course the expression can make use of previously defined polynomials. Polynomial names must start with a letter and are constituted of letters, digits, underscores, the @ (see Technicalities) and the right tick '. The whole xintexpr syntax is authorized, as long as the final result is of polynomial type:

\poldef polname(z) := add((-1)^i z^(2i+1)/(2i+1)!, i = 0..10);

With fractional coefficients, beware the tacit multiplication issue.

□ a variable polname is defined which can be used in \poldef as well as in \xinteval for algebraic computations or as argument to polynomial-aware functions,
□ a function polname() is defined which can be used in \poldef as well as in \xinteval. It accepts there as argument scalars and also other polynomials (via their names, thanks to previous declarations).

Any function defined via \xintdeffunc using only algebraic operations, as well as ople indexing or slicing operations, should work fine in \xintexpr/\xinteval with such polynomial names as arguments.

In the case of a constant polynomial, the xintexpr variable (not the internal data structure on which the package macros operate) associated to it is indistinguishable from a scalar; it is actually a scalar and has lost all traces of its origins as a polynomial (so for example it can be used as argument to the cos() function). The function, on the other hand, remains a one-argument function, which simply has a constant value.
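Returning to the truncated sine series just defined: since the function is then available in \xinteval, one can for instance extract a decimal approximation (a sketch; trunc() is the xintexpr truncation function):

\poldef sine(z) := add((-1)^i z^(2i+1)/(2i+1)!, i = 0..10);
\xinteval{trunc(sine(1), 10)}% prints 0.8414709848, ten exact digits of sin(1)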
The function polname() is defined only for the \xintexpr/\xinteval context. It will be unknown to \xintfloateval. Worse, a previously existing floating-point function of the same name will be made undefined again, to avoid hard-to-debug mismatches between exact and floating-point polynomials. This also applies when the polynomial is produced not via \poldef or \PolDef but as a result of usage of the other package macros. See \PolGenFloatVariant{<polname>} to generate a function usable in \xintfloateval.

Using the variable mypol inside \xintfloateval will generate low-level errors, because the infix operators there are not polynomial-aware, and the polynomial-specific functions such as deg() are only defined for usage inside \xintexpr. In short, currently polynomials defined via polexpr can be used in floating point context only for numerical evaluations, via functions obtained from \PolGenFloatVariant{<polname>} usage. Changes to the original polynomial via package macros are not automatically mapped to the numerical floating point evaluator, which must be manually updated as necessary when the original rational-coefficient polynomial is modified.

The original expression is lost after parsing, and in particular the package provides no way to typeset it (of course the package provides macros to typeset the computed polynomial). Typesetting the original expression has to be done manually, if needed.

Syntax: \PolDef[<letter>]{<polname>}{<expr. using the letter as indeterminate>}

Does the same as \poldef in an undelimited macro format; the main interest is to avoid potential problems with the catcode of the semi-colon in presence of some packages. In absence of a [<letter>] optional argument, the variable is assumed to be x.

Syntax: \PolGenFloatVariant{<polname>}

Makes the polynomial also usable in the \xintfloatexpr/\xintfloateval parser. It will therein evaluate via a Horner scheme using polynomial coefficients already pre-rounded to the float precision. See also \PolToFloatExpr{<pol. expr.>}.

Any operation, for example generating the derivative polynomial, or dividing two polynomials, or using \PolLet, must be followed by explicit usage of \PolGenFloatVariant{<polname>} if the new polynomial is to be used in \xintfloateval.

Syntax: \PolTypeset[<letter>]{<pol. expr.>}

Typesets in descending powers, switching to math mode if in text mode, after evaluating the polynomial expression:

\PolTypeset{mul(x-i,i=1..5)}% possible since polexpr 0.8

The letter used in the input is by default assumed to be x, but can be modified by a redefinition of \PolToExprInVar. The letter used in the output is also by default x. This one can be changed on-the-fly via the optional <letter>:

\PolTypeset[z]{polname or polynomial expression}

By default zero coefficients are skipped (use \poltypesetalltrue to get all of them in output). The following macros (whose meanings will be found in the package code) can be re-defined for customization. Their default definitions are expandable, but this is not a requirement.

Syntax: \PolTypesetCmd{<raw_coeff>}

Its package definition checks if the coefficient is 1 or -1 and then skips printing the 1, except for the coefficient of degree zero. Also it sets the conditional deciding the behaviour of \PolIfCoeffIsPlusOrMinusOne. The actual printing of the coefficients, when not equal to plus or minus one, is handled by \PolTypesetOne{<raw_coeff>}.

Syntax: \PolIfCoeffIsPlusOrMinusOne{T}{F}

This macro is a priori undefined. It is defined via the default \PolTypesetCmd{<raw_coeff>} to be used if needed in the execution of \PolTypesetMonomialCmd, e.g. to insert a \cdot in front of \PolVar^{\PolIndex} if the coefficient is not plus or minus one.
The macro will execute T if the coefficient has been found to be plus or minus one, and F if not. It chooses expandably between T and F.

Syntax: \PolTypesetOne{<raw_coeff>}

Defaults to \xintTeXsignedFrac (LaTeX) or \xintTeXsignedOver (else). But these old xintfrac legacy macros are a bit annoying, as they insist on exhibiting a power of ten rather than using simpler decimal notation. As an alternative, one can redefine it, for example with LaTeX+siunitx (as \num of siunitx understands floating point notation); a sketch is given below, after the \PolAssign entry.

Syntax: \PolTypesetMonomialCmd

This decides how a monomial (in variable \PolVar and with exponent \PolIndex) is to be printed. The default does nothing for the constant term, \PolVar for the first degree, and \PolVar^{\PolIndex} for higher-degree monomials. Beware that \PolIndex expands to digit tokens and needs termination in \ifnum tests.

Syntax: \PolTypesetCmdPrefix{<raw_coeff>}

Expands to a + if the raw_coeff is zero or positive, and to nothing if raw_coeff is negative, as in the latter case the \xintTeXsignedFrac (or \xintTeXsignedOver) used by \PolTypesetCmd{<raw_coeff>} will put the - sign in front of the fraction (if it is a fraction) and this will thus serve as separator in the typeset formula. Not used for the first term.

Syntax: \PolTypeset*[<letter>]{<pol. expr.>}

Typesets in ascending powers. The <letter> optional argument (after the *) declares the letter to use in the output. As for \PolTypeset, it defaults to x. To modify the expected x in the input, see \PolToExprInVar. Extended at 0.8 to accept general expressions and not only polynomial names.

Syntax: \PolLet{<polname_2>}={<polname_1>}

Makes a copy of the already defined polynomial polname_1 to a new one polname_2. This has the same effect as \PolDef{<polname_2>}{<polname_1>(x)} or (better) \PolDef{<polname_2>}{<polname_1>}, but with less overhead. The = is optional.

Syntax: \PolGlobalLet{<polname_2>}={<polname_1>}

Syntax: \PolAssign{<polname>}\toarray{<\macro>}

Defines a one-argument expandable macro \macro{#1} which expands to the (raw) #1-th polynomial coefficient.

□ Attention, coefficients here are indexed starting at 1. This is an unfortunate legacy situation related to the original indexing convention in xinttools arrays.
□ With #1=-1, -2, …, \macro{#1} returns leading coefficients.
□ With #1=0, returns the number of coefficients, i.e. 1 + deg f for non-zero polynomials.
□ Out-of-range #1's return 0/1[0].

See also \PolNthCoeff{<polname>}{<index>}.
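Here is the kind of \PolTypesetOne redefinition alluded to above (our sketch, not the package's own example; it assumes siunitx is loaded and accepts the float-rounding trade-off):

\renewcommand\PolTypesetOne[1]{\num{\xintPFloat{#1}}}% sketch: float-round the coefficient, then let siunitx's \num print it in decimal notation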
Syntax: \PolMapCoeffs{\macro}{<polname>} It modifies (‘in-place’: original coefficients get lost) each coefficient of the defined polynomial via the expandable macro \macro. The degree is adjusted as necessary if some leading coefficients vanish after the operation. In the replacement text of \macro, \index expands to the coefficient index (starting at zero for the constant term). Notice that \macro will have to handle inputs in the xintfrac internal format. This means that it probably will have to be expressed in terms of macros from the xintfrac package. (or with \xintSqr{\index}) to replace n-th coefficient f_n by f_n*n^2. Syntax: \PolReduceCoeffs{<polname>} Reduces the internal representations of the coefficients to their lowest terms. Syntax: \PolReduceCoeffs*{<polname>} Reduces the internal representations of the coefficients to their lowest terms, but ignoring a possible separated “power of ten part”. For example, xintfrac stores an 30e2/50 input as 30/50 with a separate 10^2 part. This will thus get replaced by 3e^2/5 (or rather whatever xintfrac uses for internal representation), and not by 60 as would result from complete reduction. Evaluations with polynomials treated by this can be much faster than with those handled by the non-starred variant \PolReduceCoeffs{<polname>}: as the numerators and denominators remain generally Syntax: \PolMakeMonic{<polname>} Divides by the leading coefficient. It is recommended to execute \PolReduceCoeffs*{<polname>} immediately afterwards. This is not done automatically, in case the original polynomial had integer coefficients and the user wants to keep the leading one as common denominator for typesetting purposes. Syntax: \PolMakePrimitive{<polname>} Divides by the integer content see (\PolIContent). This thus produces a polynomial with integer coefficients having no common factor. The sign of the leading coefficient is not modified. Syntax: \PolDiff{<polname_1>}{<polname_2>} This sets polname_2 to the first derivative of polname_1. It is allowed to issue \PolDiff{f}{f}, effectively replacing f by f'. Coefficients of the result polname_2 are irreducible fractions (see Technicalities for the whole story.) Syntax: \PolDiff[N]{<polname_1>}{<polname_2>} This sets polname_2 to the N-th derivative of polname_1. Identical arguments is allowed. With N=0, same effect as \PolLet{<polname_2>}={<polname_1>}. With negative N, switches to using \ Syntax: \PolAntiDiff{<polname_1>}{<polname_2>} This sets polname_2 to the primitive of polname_1 vanishing at zero. Coefficients of the result polname_2 are irreducible fractions (see Technicalities for the whole story.) Syntax: \PolAntiDiff[N]{<polname_1>}{<polname_2>} This sets polname_2 to the result of N successive integrations on polname_1. With negative N, it switches to using \PolDiff. Syntax: \PolDivide{<polname_1>}{<polname_2>}{<polname_Q>}{<polname_R>} This sets polname_Q and polname_R to be the quotient and remainder in the Euclidean division of polname_1 by polname_2. Syntax: \PolQuo{<polname_1>}{<polname_2>}{<polname_Q>} This sets polname_Q to be the quotient in the Euclidean division of polname_1 by polname_2. Syntax: \PolRem{<polname_1>}{<polname_2>}{<polname_R>} This sets polname_R to be the remainder in the Euclidean division of polname_1 by polname_2. Syntax: \PolGCD{<polname_1>}{<polname_2>}{<polname_GCD>} This sets polname_GCD to be the (monic) GCD of polname_1 and polname_2. It is a unitary polynomial except if both polname_1 and polname_2 vanish, then polname_GCD is the zero polynomial. 
Root localization routines via the Sturm Theorem As \PolToSturm{<polname>}{<sturmname>} and \PolSturmIsolateZeros{<sturmname>} and variants declare additional polynomial or scalar variables with names based on <sturmname> as prefix, it is advisable to keep the <sturmname> namespace separate from the one applying to \xintexpr variables generally, or to polynomials. Syntax: \PolToSturm{<polname>}{<sturmname>} With <polname> being for example P, and <sturmname> being for example S, the macro starts by computing the derivative P', then computes the opposite of the remainder in the euclidean division of P by P', then the opposite of the remainder in the euclidean division of P' by the first obtained polynomial, etc… Up to signs following the --++--++... pattern, these are the same remainders as in the Euclide algorithm applied to the computation of the GCD of P and P'. The precise process differs from the above description: the algorithm first sets S_0_ to be the primitive part of P and S_1_ to be the primitive part of P' (see \PolIContent{<polname>}), then at each step the remainder is made primitive and stored for internal reference as S_k_, so only integer-coefficients polynomials are manipulated. This exact procedure will perhaps in future be replaced by a sub-resultant algorithm, which may bring some speed gain in obtaining a pseudo-Sturm sequence, but some experimenting is needed, in the context of realistically realizable computations by the package; primitive polynomials although a bit costly have the smallest coefficients hence are the best for the kind of computations done for root localization, after having computed a Sturm sequence. The last non-zero primitivized remainder S_N_ is, up to sign, the primitive part of the GCD of P and P'. Its roots (real and complex) are the multiple roots of the original P. The original P was “square-free” (i.e. did not have multiple real or complex roots) if and only if S_N_ is a constant, which is then +1 or -1 (its value before primitivization is lost). The macro then divides each S_k_ by S_N_ and declares the quotients S_k as user polynomials for future use. By Gauss theorem about the contents of integer-coefficients polynomials, these S_k also are primitive integer-coefficients polynomials. This step will be referred to as normalization, and in this documentation the obtained polynomials are said to constitute the “Sturm chain” (or “Sturm sequence”), i.e. by convention the “Sturm chain polynomials” are square-free and primitive. The possibly non-square-free ones are referred to as non-normalized. As an exception to the rule, if the original P was “square-free” (i.e. did not have multiple real or complex roots) then normalization is skipped (in that case S_N_ is either +1 or -1), so S_0_ is exactly the primitive part of starting polynomial P, in the “square-free” case. The next logical step is to execute \PolSturmIsolateZeros{S} or one of its variants. Be careful not to use the names sturmname_0, sturmname_1, etc… for defining other polynomials after having done \PolToSturm{<polname>}{<sturmname>} and before executing \PolSturmIsolateZeros{<sturmname>} or its variants else the latter will behave erroneously. The declaration of the S_k‘s will overwrite with no warning previously declared polynomials with identical names S_k, i.e. <sturmname>_k. This is why the macro was designed to expect two names: <polname> and <sturmname>. 
It is allowed to use the polynomial name P as Sturm chain name S: \PolToSturm{P}{P}, but this is considered bad practice for the reason mentioned in the previous paragraph. Furthermore, \PolSturmIsolateZeros creates xintexpr variables whose names start with <sturmname>L_, <sturmname>R_, and <sturmname>Z_, also <sturmname>M_ for holding the multiplicities, and this may overwrite pre-existing user-defined xintexpr variables. The reason why the S_k‘s are declared as polynomials is that the associated polynomial functions are needed to compute the sign changes in the Sturm sequence evaluated at a given location, as this is the basis mechanism of \PolSturmIsolateZeros (on the basis of the Sturm theorem). It is possible that in future the package will only internally construct such polynomial functions and only the starred variant will make the normalized (i.e. square-free) Sturm sequence public. The integer N giving the length of the Sturm chain S_0, S_1, …, S_N is available as \PolSturmChainLength{<sturmname>}. If all roots of original P are real, then N is both the number of distinct real roots and the degree of S_0. In the case of existence of complex roots, the number of distinct real roots is at most N and N is at most the degree of S_0. Syntax: \PolToSturm*{<polname>}{<sturmname>} Does the same as un-starred version and additionally it keeps for user usage the memory of the un-normalized (but still made primitive) Sturm chain polynomials sturmname_k_, k=0,1, ..., N, with N being \PolSturmChainLength{<sturmname>}. Syntax: \PolSturmIsolateZeros{<sturmname>} The macro locates, using the Sturm Theorem, as many disjoint intervals as there are distinct real roots. After its execution they are two types of such intervals (stored in memory and accessible via macros or xintexpr variables, see below): □ singleton {a}: then a is a root, (necessarily a decimal number, but not all such decimal numbers are exactly identified yet). □ open intervals (a,b): then there is exactly one root z such that a < z < b, and the end points are guaranteed to not be roots. The interval boundaries are decimal numbers, originating in iterated decimal subdivision from initial intervals (-10^E, 0) and (0, 10^E) with E chosen initially large enough so that all roots are enclosed; if zero is a root it is always identified as such. The non-singleton intervals are of the type (a/10^f, (a+1)/10^f) with a an integer, which is neither 0 nor -1. Hence either a and a+1 are both positive or they are both negative. One does not a priori know what will be the lengths of these intervals (except that they are always powers of ten), they vary depending on how many digits two successive roots have in common in their respective decimal expansions. If some two consecutive intervals share an end-point, no information is yet gained about the separation between the two roots which could at this stage be arbitrarily small. See \PolRefineInterval*{<sturmname>}{<index>} which addresses this issue. Let us suppose <sturmname> is S. The interval boundaries (and exactly found roots) are made available for future computations in \xintexpr/xinteval or \poldef as variables SL_1, SL_2, etc…, for the left end-points and SR_1, SR_2 , …, for the right end-points. Additionally, xintexpr variable SZ_1_isknown will have value 1 if the root in the first interval is known, and 0 otherwise. And similarly for the other intervals. The variable declarations are done with no check of existence of previously existing variables with identical names. 
Also, macros \PolSturmIsolatedZeroLeft{<sturmname>}{<index>} and \PolSturmIsolatedZeroRight{<sturmname>}{<index>} are provided which expand to these same values, written in decimal notation (i.e. pre-processed by \PolDecToString.) And there is also \PolSturmIfZeroExactlyKnown{<sturmname>}{<index>}{T}{F}. Trailing zeroes in the stored decimal numbers accessible via the macros are significant: they are also present in the decimal expansion of the exact root, so as to be able for example to print out bounds of real roots with as many digits as is significant, even if the digits are zeros. The start of the decimal expansion of the <index>-th root is given by \PolSturmIsolatedZeroLeft{<sturmname>}{<index>} if the root is positive, and by PolSturmIsolatedZeroRight{<sturmname>} {<index>} if the root is neagtive. These two decimal numbers are either both zero or both of the same sign. The number of distinct roots is obtainable expandably as \PolSturmNbOfIsolatedZeros{<sturmname>}. Furthermore \PolSturmNbOfRootsOf{<sturmname>}\LessThanOrEqualTo{<value>} and \PolSturmNbOfRootsOf{<sturmname>}\LessThanOrEqualToExpr{<num. expr.>}. will expandably compute respectively the number of real roots at most equal to value or expression, and the same but with multiplicities. These variables and macros are automatically updated in case of subsequent usage of \PolRefineInterval*{<sturmname>}{<index>} or other localization improving macros. The current polexpr implementation defines the xintexpr variables and xinttools arrays as described above with global scope. On the other hand the Sturm sequence polynomials obey the current This is perhaps a bit inconsistent and may change in future. The results are exact bounds for the mathematically exact real roots. Future releases will perhaps also provide macros based on Newton or Regula Falsi methods. Exact computations with such methods lead however quickly to very big fractions, and this forces usage of some rounding scheme for the abscissas if computation times are to remain reasonable. This raises issues of its own, which are studied in numerical mathematics. Syntax: \PolSturmIsolateZeros*{<sturmname>} The macro does the same as \PolSturmIsolateZeros{<sturmname>} and then in addition it does the extra work to determine all multiplicities of the real roots. After execution, \PolSturmIsolatedZeroMultiplicity{<sturmname>}{<index>} expands to the multiplicity of the root located in the index-th interval (intervals are enumerated from left to right, with index starting at 1). Furthermore, if for example the <sturmname> is S, xintexpr variables SM_1, SM_2… hold the multiplicities thus computed. Somewhat counter-intuitively, it is not necessary to have executed the \PolToSturm* starred variant: during its execution, \PolToSturm, even though it does not declare the non-square-free Sturm chain polynomials as user-level genuine polynomials, stores their data in private macros. See The degree nine polynomial with 0.99, 0.999, 0.9999 as triple roots example in polexpr-examples.pdf. Syntax: \PolSturmIsolateZerosAndGetMultiplicities{<sturmname>} Syntax: \PolSturmIsolateZeros**{<sturmname>} The macro does the same as \PolSturmIsolateZeros*{<sturmname>} and in addition it does the extra work to determine all the rational roots. After execution of this macro, a root is “known” if and only if it is rational. Furthermore, primitive polynomial sturmname_sqf_norr is created to match the (square-free) sturmname_0 from which all rational roots have been removed. 
The number of distinct rational roots is thus the difference between the degrees of these two polynomials (see also \PolSturmNbOfRationalRoots{<sturmname>}). And sturmname_norr is sturmname_0_ from which all rational roots have been removed, i.e. it contains the irrational roots of the original polynomial, with the same multiplicities. See A degree five polynomial with three rational roots in polexpr-examples.pdf. Syntax: \PolSturmIsolateZerosGetMultiplicitiesAndRationalRoots Syntax: \PolSturmIsolateZerosAndFindRationalRoots{<sturmname>} This works exactly like \PolSturmIsolateZeros**{<sturmname>} (inclusive of declaring the polynomials sturmname_sqf_norr and sturmname_norr with no rational roots) except that it does not compute the multiplicities of the non-rational roots. There is no macro to find the rational roots but not compute their multiplicities at the same time. This macro does not define xintexpr variables sturmnameM_1, sturmnameM_2, … holding the multiplicities and it leaves the multiplicity array (whose accessor is \PolSturmIsolatedZeroMultiplicity {<sturmname>}{<index>}) into a broken state, as all non-rational roots will supposedly have multiplicity one. This means that the output of \PolPrintIntervals* will be erroneous regarding the multiplicities of irrational roots. I decided to document it because finding multiplicities of the non rational roots is somewhat costly, and one may be interested only into finding the rational roots (of course random polynomials with integer coefficients will not have any rational root anyhow). Syntax: \PolRefineInterval*{<sturmname>}{<index>} The index-th interval (starting indexing at one) is further subdivided as many times as is necessary in order for the newer interval to have both its end-points distinct from the end-points of the original interval. As a consequence, the kth root is then strictly separated from the other roots. Syntax: \PolRefineInterval[N]{<sturmname>}{<index>} The index-th interval (starting count at one) is further subdivided once, reducing its length by a factor of 10. This is done N times if the optional argument [N] is present. Syntax: \PolEnsureIntervalLength{<sturmname>}{<index>}{<exponent>} The index-th interval is subdivided until its length becomes at most 10^E. This means (for E<0) that the first -E digits after decimal mark of the kth root will then be known exactly. Syntax: \PolEnsureIntervalLengths{<sturmname>}{<exponent>} The intervals as obtained from \PolSturmIsolateZeros are (if necessary) subdivided further by (base 10) dichotomy in order for each of them to have length at most 10^E. This means that decimal expansions of all roots will be known with -E digits (for E<0) after decimal mark. Syntax: \PolSetToSturmChainSignChangesAt{\foo}{<sturmname>}{<value>} Sets macro \foo to store the number of sign changes in the already computed normalized Sturm chain with name prefix <sturmname>, at location <value> (which must be in format as acceptable by the xintfrac macros.) The definition is made with global scope. For local scope, use [\empty] as extra optional argument. One can use this immediately after creation of the Sturm chain. Syntax: \PolSetToNbOfZerosWithin{\foo}{<sturmname>}{<value_left>}{<value_right>} Sets, assuming the normalized Sturm chain has been already computed, macro \foo to store the number of roots of sturmname_0 in the interval (value_left, value_right]. The macro first re-orders end-points if necessary for value_left <= value_right to hold. 
In accordance with the Sturm Theorem, this is computed as the difference between the number of sign changes of the Sturm chain at value_right and the one at value_left. The definition is made with global scope. For local scope, use [\empty] as extra optional argument. One can use this immediately after creation of a Sturm chain.

See also the expandable \PolSturmNbOfRootsOf{<sturmname>}\LessThanOrEqualTo{value}, which however requires prior execution of \PolSturmIsolateZeros. See also the expandable \PolSturmNbWithMultOfRootsOf{<sturmname>}\LessThanOrEqualTo{value}, which requires prior execution of \PolSturmIsolateZeros*.

Displaying the found roots

Syntax: \PolPrintIntervals[<varname>]{<sturmname>}

This is a convenience macro which prints the bounds for the roots Z_1, Z_2, … (the optional argument varname allows to specify a replacement for the default Z). This will be done (by default) in a math mode array, one interval per row, with pattern rcccl, where the second and fourth columns hold the < sign, except when the interval reduces to a singleton, which means the root is known exactly.

The explanations here and in this section are for LaTeX. With other TeX macro formats, the LaTeX syntax such as for example \begin{array}{rcccl} which appears in the documentation here is actually replaced with quasi-equivalent direct use of TeX primitives.

The next macros govern its output (their default definitions will be found in the package code):

• a macro executed in place of the array environment, when there are no real roots;
• the macros opening and closing the display (default definitions are given for LaTeX; Plain TeX has a variant). A simpler center environment provides a straightforward way to obtain a display allowing pagebreaks. Of course redefinitions must at any rate be kept in sync with \PolPrintIntervalsKnownRoot and \PolPrintIntervalsUnknownRoot. Prior to 0.8.6 it was not possible to use here for example \begin{align}, due to the latter executing its contents twice;
• the row separator (added at 0.8.6), which expands by default to \\ with LaTeX and to \cr with Plain TeX.

Syntax: \PolPrintIntervals*[<varname>]{<sturmname>}

This starred variant produces an alternative output (which displays the root multiplicity), and is provided as an example of customization. As replacement for \PolPrintIntervalsKnownRoot, \PolPrintIntervalsPrintExactZero, \PolPrintIntervalsUnknownRoot it uses its own \POL@@PrintIntervals... macros. We only reproduce here the auxiliary macro used to print the multiplicities, whose default definition is:

\newcommand\PolPrintIntervalsPrintMultiplicity{(\mbox{mult. }\PolPrintIntervalsTheMultiplicity)}

Expandable macros

At 0.8 polexpr is usable with Plain TeX and not only with LaTeX. Some examples given in this section may be using LaTeX syntax such as \renewcommand. Convert to TeX primitives as appropriate if testing with a non-LaTeX macro format.

These macros expand completely in two steps, except \PolToExpr and \PolToFloatExpr which need a \write, \edef or a \csname...\endcsname context.

Syntax: \PolToExpr{<pol. expr.>}

Produces expandably [3] the string coeff_N*x^N+..., i.e. the polynomial is output using descending powers. Since 0.8 the input is not restricted to be a polynomial name but is allowed to be an arbitrary expression. Then x is expected as indeterminate, but this can be customized via \PolToExprInVar. The output uses the letter x by default; this is customizable via \PolToExprVar.
Expandable macros

At 0.8 polexpr is usable with Plain TeX and not only with LaTeX. Some examples given in this section may be using LaTeX syntax such as \renewcommand. Convert to TeX primitives as appropriate if testing with a non-LaTeX macro format. These macros expand completely in two steps, except \PolToExpr and \PolToFloatExpr which need a \write, \edef or \csname...\endcsname context.

Syntax: \PolToExpr{<pol. expr.>}
Produces expandably [3] the string coeff_N*x^N+..., i.e. the polynomial is written in descending powers. Since 0.8 the input is not restricted to be a polynomial name but is allowed to be an arbitrary expression. Then x is expected as indeterminate, but this can be customized via \PolToExprInVar. The output uses the letter x by default; this is customizable via \PolToExprVar.

The default output is compatible both with:
□ Maple's input format,
□ the PSTricks \psplot[algebraic] input format.

Note that it is not compatible with Python; see \PolToExprCaret in this context. The following applies:
□ vanishing coefficients are skipped (issue \poltoexpralltrue to override this and produce output such as x^3+0*x^2+0*x^1+0),
□ negative coefficients are not prefixed by a + sign (else, Maple would not be happy),
□ coefficients numerically equal to 1 (or -1) are present only via their sign,
□ the letter x is used, and the degree one monomial is output as x, not as x^1,
□ (0.8) the caret ^ is of catcode 12. This means that one can for convenience typeset in regular text mode, for example using \texttt (in LaTeX), but TeX will not know how to break the expression across end-of-lines anyhow. Formerly ^ was suitable for math mode, but as the exponent is not braced this worked only for polynomials of degree at most 9. Anyhow, this is not supposed to be a typesetting macro.

Complete customization is possible, see the next macros. Any user redefinition must maintain the expandability property.

Syntax: \PolToExprVar
Defaults to x: the letter used in the macro output.

Syntax: \PolToExprInVar
Defaults to x: the letter used as the polynomial indeterminate in the macro input:
\def\PolToExprInVar{x}% (default)
Recall that declared polynomials are more efficiently used in algebraic expressions without the (x), i.e. P*Q is better than P(x)*Q(x). Thus the input, even if an expression, does not have to contain any x. (new with 0.8)

Syntax: \PolToExprCaret
Defaults to ^ of catcode 12. Set it to expand to ** for Python-compatible output. (new with 0.8)

Syntax: \PolToExprCmd{<raw_coeff>}
Defaults to \xintPRaw{\xintRawWithZeros{#1}}. This means that the coefficient value is printed out as a fraction a/b, skipping the /b part if b turns out to be one. Configure it to be \xintPRaw{\xintIrr{#1}} if the fractions must be in irreducible terms. An alternative is \xintDecToString{\xintREZ{#1}}, which uses integer or decimal fixed-point format such as 23.0071 if the internal representation of the number only has a power of ten as denominator (the effect of \xintREZ here is to remove trailing decimal zeros). The behaviour of \xintDecToString is not yet stable for other cases; for example, at the time of writing no attempt is made to identify inputs having a finite decimal expansion, so 23.007/2 or 23.007/25 can appear in output rather than their finite decimal expansions with no denominator.

Syntax: \PolToExprOneTerm{<raw_coeff>}{<exponent>}
This is the macro which, from the coefficient and the exponent, produces the corresponding term in the output, such as 2/3*x^7. For its default definition, see the source code. It uses \PolToExprCmd, \PolToExprTimes, \PolToExprVar and \PolToExprCaret.

Syntax: \PolToExprOneTermStyleA{<raw_coeff>}{<exponent>}
This holds the default package meaning of \PolToExprOneTerm.

Syntax: \PolToExprOneTermStyleB{<raw_coeff>}{<exponent>}
This holds an alternative meaning, which puts the fractional part of a coefficient after the monomial. \PolToExprCmd isn't used at all in this style, but \PolToExprTimes, \PolToExprVar and \PolToExprCaret are obeyed. To activate it, use \let\PolToExprOneTerm\PolToExprOneTermStyleB. To revert to the package default behaviour, issue \let\PolToExprOneTerm\PolToExprOneTermStyleA.
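A short sketch of the caret customization just described, assuming a polynomial P was declared beforehand with \poldef; the \edef supplies the expansion context which \PolToExpr requires:

% Sketch: Python-compatible output, e.g. 2*x**3-x+1/2
\def\PolToExprCaret{**}
\edef\mypolystring{\PolToExpr{P}}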
Syntax: \PolToExprTermPrefix{<raw_coeff>}
It receives the coefficient as argument. Its default behaviour is to produce a + if the coefficient is positive, which thus serves to separate the monomials in the output. This matches the default \PolToExprCmd{<raw_coeff>}, which in case of a positive coefficient does not output an explicit + prefix.

Syntax: \PolToFloatExpr{<pol. expr.>}
Similar to \PolToExpr{<pol. expr.>}, but using \PolToFloatExprCmd{<raw_coeff>}, which by default rounds and converts the coefficients to floating-point format. This is unrelated to \PolGenFloatVariant{<polname>}: \PolToFloatExprCmd{<raw_coeff>} operates on the exact coefficients anew (and may thus produce something other than the coefficients of the polynomial function acting in \xintfloateval, if the floating-point precision was changed in between). Extended at 0.8 to accept general expressions as input.

Syntax: \PolToFloatExprOneTerm{<raw_coeff>}{<exponent>}

Syntax: \PolToFloatExprCmd{<raw_coeff>}
The one-argument macro used by \PolToFloatExprOneTerm. It defaults to \xintPFloat{#1}, which trims trailing zeroes. (Changed at 0.8.2; formerly it used \xintFloat.)

Syntax: \PolToExpr*{<pol. expr.>}
Ascending powers: coeff_0+coeff_1*x+coeff_2*x^2+.... Extended at 0.8 to accept general expressions as input. Customizable with the same macros as for \PolToExpr{<pol. expr.>}.

Syntax: \PolToFloatExpr*{<pol. expr.>}
Ascending powers. Extended at 0.8 to accept general expressions as input.

Syntax: \PolNthCoeff{<polname>}{<index>}
It expands to the raw N-th coefficient (N=0 corresponds to the constant coefficient). If N is out of range, zero (in its default xintfrac format 0/1[0]) is returned. Negative indices N=-1, -2, … return the leading coefficient, sub-leading coefficient, …, and finally 0/1[0] for N<-1-degree.

Syntax: \PolLeadingCoeff{<polname>}
Expands to the leading coefficient.

Syntax: \PolDegree{<polname>}
It expands to the degree. This is -1 for the zero polynomial, but this may change in the future. Should it then expand to -\infty?

Syntax: \PolIContent{<polname>}
It expands to the content of the polynomial, i.e. to the positive fraction such that dividing by this fraction produces a polynomial with integer coefficients having no common prime divisor. See \PolMakePrimitive.

Syntax: \PolToList{<polname>}
Expands to {coeff_0}{coeff_1}...{coeff_N} with N = degree, and coeff_N the leading coefficient (the zero polynomial gives {0/1[0]} and not an empty output).

Syntax: \PolToCSV{<polname>}
Expands to coeff_0, coeff_1, coeff_2, ....., coeff_N, starting with the constant term and ending with the leading coefficient. Converse to \PolFromCSV{<polname>}{<csv>}.

Syntax: \PolEval{<polname>}\AtExpr{<num. expr.>}
Same output as \xinteval{polname(numerical expression)}.

Syntax: \PolEval{<polname>}\At{<value>}
Evaluates the polynomial at the given value, which must be in (or expand to) a format acceptable to the xintfrac macros.

Syntax: \PolEvalReduced{<polname>}\AtExpr{<num. expr.>}
Same output as \xinteval{reduce(polname(numerical expression))}.

Syntax: \PolEvalReduced{<polname>}\At{<value>}
Evaluates the polynomial at the value, which must be in (or expand to) a format acceptable to the xintfrac macros, and outputs an irreducible fraction.

Syntax: \PolFloatEval{<polname>}\AtExpr{<num. expr.>}
Same output as \xintfloateval{polname(numerical expression)}. To use the exact coefficients with exactly executed additions and multiplications, and do the rounding only as the final step, the following syntax can be used: [4]
\xintfloateval{3.27*\xintexpr f(2.53)\relax^2}

Syntax: \PolFloatEval{<polname>}\At{<value>}
Evaluates the polynomial at the value, which must be in (or expand to) a format acceptable to the xintfrac macros.
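A minimal sketch of the evaluation macros (the results shown in the comments follow from the definitions above):

\poldef P(x) := x^2 - 2;
\PolEval{P}\At{3/2}      % exact evaluation: 9/4 - 2 = 1/4
\PolFloatEval{P}\At{1.5} % floating-point evaluation at the same value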
Expandable macros in relation to root localization via Sturm's theorem

Syntax: \PolSturmChainLength{<sturmname>}
Expands to the index N such that sturmname_0, …, sturmname_N are the polynomials constituting the Sturm chain.

Syntax: \PolSturmIfZeroExactlyKnown{<sturmname>}{<index>}{T}{F}
Executes T if the index-th interval reduces to a singleton, i.e. the root is known exactly, else F.

Syntax: \PolSturmIsolatedZeroLeft{<sturmname>}{<index>}
Syntax: \PolSturmIsolatedZeroRight{<sturmname>}{<index>}
Expand to the left (respectively right) end-point of the index-th isolating interval.

Syntax: \PolSturmIsolatedZeroMultiplicity{<sturmname>}{<index>}
Expands to the multiplicity of the unique root contained in the index-th interval. See "The degree nine polynomial with 0.99, 0.999, 0.9999 as triple roots" in polexpr-examples.pdf.

Syntax: \PolSturmNbOfIsolatedZeros{<sturmname>}
Expands to the number of real roots of the polynomial <sturmname>_0, i.e. the number of distinct real roots of the polynomial originally used to create the Sturm chain via \PolToSturm{<polname>}.

The next few macros, counting roots (with or without multiplicities) less than or equal to some value, are under evaluation and may be removed from the package if their utility is judged not high enough. They can be re-coded at user level on the basis of the other documented package macros anyway.

Syntax: \PolSturmNbOfRootsOf{<sturmname>}\LessThanOrEqualTo{<value>}
Expands to the number of distinct roots (of the polynomial used to create the Sturm chain) less than or equal to the value (i.e. a number or fraction recognizable by the xintfrac macros). \PolSturmIsolateZeros{<sturmname>} must have been executed beforehand. The argument is a <sturmname>, not a <polname> (this is why the macro contains Sturm in its name), simply as a reminder of the above constraint.

Syntax: \PolSturmNbOfRootsOf{<sturmname>}\LessThanOrEqualToExpr{<num. expr.>}
Expands to the number of distinct roots (of the polynomial used to create the Sturm chain) which are less than or equal to the given numerical expression.

Syntax: \PolSturmNbWithMultOfRootsOf{<sturmname>}\LessThanOrEqualTo{<value>}
Expands to the number, counted with multiplicities, of the roots (of the polynomial used to create the Sturm chain) which are less than or equal to the given value.

Syntax: \PolSturmNbWithMultOfRootsOf{<sturmname>}\LessThanOrEqualToExpr{<num. expr.>}
Expands to the total number of roots (counted with multiplicities) which are less than or equal to the given expression.

Syntax: \PolSturmNbOfRationalRoots{<sturmname>}
Expands to the number of rational roots (without multiplicities).

Syntax: \PolSturmNbOfRationalRootsWithMultiplicities{<sturmname>}
Expands to the number of rational roots (counted with multiplicities).

Syntax: \PolSturmRationalRoot{<sturmname>}{<k>}
Expands to the k-th rational root. The rational roots are enumerated from left to right, starting at index value 1.

Syntax: \PolSturmRationalRootIndex{<sturmname>}{<k>}
Expands to the index of the k-th rational root among the ordered real roots (counted without multiplicities). So \PolSturmRationalRoot{<sturmname>}{<k>} is equivalent to querying the exactly-known root located at index \PolSturmRationalRootIndex{<sturmname>}{<k>}.

Syntax: \PolSturmRationalRootMultiplicity{<sturmname>}{<k>}
Expands to the multiplicity of the k-th rational root.
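A sketch of these expandable queries, assuming \PolToSturm{P}{P} followed by \PolSturmIsolateZeros*{P} was executed (the starred variant, so that the count with multiplicities is legitimate):

Distinct real roots: \PolSturmNbOfIsolatedZeros{P}\par
Distinct roots at most 1: \PolSturmNbOfRootsOf{P}\LessThanOrEqualTo{1}\par
Roots at most 1, with multiplicity: \PolSturmNbWithMultOfRootsOf{P}\LessThanOrEqualTo{1}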
Syntax: \PolIntervalWidth{<sturmname>}{<index>}
The 10^E width of the current index-th root localization interval. Output is in xintfrac raw 1/1[E] format (if not zero).

TeX booleans (with names enacting their defaults)

Syntax: \xintverbosefalse
This is actually an xintexpr configuration. Setting it to true triggers the writing of information to the log when new polynomial or scalar variables are defined. The macro and variable meanings as written to the log are to be considered unstable and undocumented internal structures.

Syntax: \polnewpolverbosefalse
When \poldef is used, both a variable and a function are defined. The default \polnewpolverbosefalse setting suppresses the print-out to the log and terminal of the function macro meaning, as it only duplicates the information contained in the variable, which is already printed out to the log and terminal. However, \PolGenFloatVariant{<polname>} does still print out the information relative to the polynomial function it defines for use in \xintfloateval{}, as there is no float polynomial variable, only the function, and this is the only way to see its rounded coefficients (\xintverbosefalse suppresses that info as well). If set to true, it overrides in both cases \xintverbosefalse. The setting only affects polynomial declarations. Scalar variables, such as those holding information on roots, obey only the \xintverbose... setting. (new with 0.8)

If the next boolean is set to true, \PolTypeset will also typeset the vanishing coefficients.

Syntax: \PolDecToString{<decimal number>}
This is a utility macro to print decimal numbers. It is an alias for \xintDecToString. For example, \PolDecToString{123.456e-8} expands to 0.00000123456 and \PolDecToString{123.450e-8} to 0.00000123450, which illustrates that trailing zeros are not trimmed. To trim trailing zeroes, one can use \PolDecToString{\xintREZ{#1}}. Note that, at the time of writing, if the argument is for example 1/5, the macro does not identify that this is in fact a number with a finite decimal expansion, and it outputs 1/5. See the current xintfrac documentation.

Syntax: \polexprsetup{<key>=<value>, ...}
Serves to customize the package. Currently only two keys are recognized:
□ norr: the postfix that \PolSturmIsolateZeros**{<sturmname>} should append to <sturmname> to declare the primitive polynomial obtained from the original one after removal of all rational roots. The default value is _norr (standing for "no rational roots").
□ sqfnorr: the postfix that \PolSturmIsolateZeros**{<sturmname>} should append to <sturmname> to declare the primitive polynomial obtained from the original one after removal of all rational roots and suppression of all multiplicities. The default value is _sqf_norr (standing for "square-free with no rational roots").
The package executes \polexprsetup{norr=_norr, sqfnorr=_sqf_norr} as default.
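For example, this sketch (with arbitrarily chosen replacement suffixes) changes the names under which \PolSturmIsolateZeros** declares the derived polynomials:

% After this, \PolSturmIsolateZeros**{f} declares f_noQ and f_sqf_noQ
% rather than f_norr and f_sqf_norr
\polexprsetup{norr=_noQ, sqfnorr=_sqf_noQ}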
• Do not use the underscore _ as the first character of a polynomial name, even if of catcode letter. This may cause an infinite loop.
• The @ is allowed in the names of polynomials, independently of whether it is of catcode letter or other. In defining macros which will use \poldef to create polynomials, it seems reasonable to adopt the convention that @ as the first character of a polynomial name is reserved for temporary auxiliary polynomials. Do not use @_ at the start of polynomial names; this is reserved for internal usage by the package.
• Catcodes are set temporarily by the \poldef macro to safe values prior to grabbing the polynomial expression up to the terminator ;, and also by \PolDef prior to grabbing the brace-enclosed polynomial expression. This gives a layer of protection in case some package (for example the babel-french module) has made some characters active. It will fail, though, if the whole thing is located inside the definition of a macro made at a time when the characters were active.
• Attention! Contrarily to \xintdefvar and \xintdeffunc from xintexpr, \poldef uses a naive delimited macro to fetch up to the expression terminator ";", hence it will be fooled if some ; is used inside the expression (which is possible, as ; appears in some xintexpr constructs). The work-around is to use curly braces around the inner semi-colons, or, simpler, to use \PolDef.
• As a consequence of xintfrac addition and subtraction always using least common multiples for the denominators, user-chosen common denominators (currently) survive additions and multiplications. For example, this:
\poldef P(x):= 1/2 + 2/2*x + 3/2*x^3 + 4/2*x^4;
\poldef Q(x):= 1/3 + (2/3)x + (3/3)x^3 + (4/3)x^4;
\poldef PQ(x):= P*Q;
gives internally the polynomial
1/6 + 4/6*x + 4/6*x^2 + 6/6*x^3 + 14/6*x^4 + 16/6*x^5 + 9/6*x^6 + 24/6*x^7 + 16/6*x^8
where all coefficients have the same denominator 6. Notice though that \PolToExpr{PQ} outputs the 6/6*x^3 as x^3, because (by default) it recognizes and filters out coefficients equal to one or minus one. One can use for example \PolToCSV{PQ} to see the internally stored coefficients.
• \PolDiff{<polname_1>}{<polname_2>} always applies \xintPIrr to the resulting coefficients, which means that fractions are reduced to lowest terms, but ignoring an already separated power-of-ten part [N] present in the internal representation. This is tentative and may change. The same remark applies to \PolAntiDiff{<polname_1>}{<polname_2>}.
• Currently, the package stores all coefficients from index 0 up to the polynomial degree inside a single macro, as a list. This data structure is obviously very inefficient for polynomials of high degree and few coefficients. As an example, with \poldef f(x):=x^1000 + x^500; the subsequent definition \poldef g(x):= f(x)^2; will do of the order of 1,000,000 multiplications and additions involving only zeroes, which does take time. This may change in the future.
• As is to be expected, the internal structures of the package are barely documented and unstable. Don't use them.
{"url":"https://ctan.um.ac.ir/macros/generic/polexpr/polexpr-ref.html","timestamp":"2024-11-05T10:04:21Z","content_type":"text/html","content_length":"259072","record_id":"<urn:uuid:812ddff2-1ffc-41c1-853b-6c844ece8919>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00446.warc.gz"}
Mastering pandas
Master the features and capabilities of pandas, a data analysis toolkit for Python
ISBN 1783981962, 9781783981960
Femi Anthony

Copyright © 2015 Packt Publishing
All rights reserved.
No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: June 2015
Production reference: 1150615
Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.
ISBN 978-1-78398-196-0
www.packtpub.com

Credits
Author: Femi Anthony
Reviewers: Opeyemi Akinjayeju, Louis Hénault, Carlos Marin
Commissioning Editor: Karthikey Pandey
Acquisition Editor: Kevin Colaco
Content Development Editor: Arun Nadar
Technical Editor: Mohita Vyas
Copy Editors: Tani Kothari, Jasmine Nadar, Vikrant Phadke
Project Coordinator: Neha Bhatnagar
Proofreader: Safis Editing
Indexer: Tejal Soni
Graphics: Jason Monteiro
Production Coordinator: Aparna Bhagat
Cover Work: Aparna Bhagat

About the Author
Femi Anthony is a seasoned and knowledgeable software programmer, with over 15 years' experience in a vast array of languages, including Perl, C, C++, Java, and Python. He has worked in both the Internet space and the financial services space for many years and is now working for a well-known financial data company. He holds a bachelor's degree in mathematics with computer science from MIT and a master's degree from the University of Pennsylvania. His pet interests include data science, machine learning, and Python. Femi is working on a few side projects in these areas. His hobbies include reading, soccer, and road cycling. You can follow him at @dataphanatik, and for any queries, contact him at [email protected].

First and foremost, I would like to thank my wife, Ene, for her support throughout my career and in writing this book. She has been my inspiration and motivation for continuing to improve my knowledge and helping me move ahead in my career. She is my rock, and I dedicate this book to her. I also thank my wonderful children, Femi, Lara, and our new addition, Temi, for always making me smile and for understanding on those days when I was writing this book instead of playing games with them. I would also like to thank my book reviewers, Opeyemi Akinjayeju, who is a dear friend of mine, as well as Louis Hénault and Carlos Marin, for their invaluable feedback and input toward the completion of this book. Lastly, I would like to thank my parents, George and Katie Anthony, for instilling a strong work ethic in me from an early age.

About the Reviewers
Opeyemi Akinjayeju is a risk management professional. He holds graduate degrees in statistics (Penn State University) and economics (Georgia Southern University), and has built predictive models for insurance companies, banks, captive automotive finance lenders, and consulting firms. He enjoys analyzing data and solving complex business problems using SAS, R, EViews/Gretl, Minitab, SQL, and Python.
Opeyemi is also an adjunct at Northwood University, where he designs and teaches undergraduate courses in microeconomics and macroeconomics.

Louis Hénault is a data scientist at OgilvyOne Paris. He loves combining mathematics and computer science to solve real-world problems in an innovative way. After getting a master's degree in engineering with a major in data sciences and another degree in applied mathematics in France, he entered the French start-up ecosystem, working on several projects. Louis has gained experience in various industries, including geophysics, application performance management, online music platforms, e-commerce, and digital advertising. He is now working for a leading customer engagement agency, where he helps clients unlock the complete value of customers using big data.

I've met many outstanding people in my life who have helped me become what I am today. A great thank you goes to the professors, authors, and colleagues who taught me many fantastic things. Of course, I can't end this without a special thought for my friends and family.

Carlos Marin is a software engineer at Rackspace, where he maintains and develops a suite of applications that manage networking devices in Rackspace's data centers. He has made contributions to OpenStack, and has worked with multiple teams and on multiple projects within Rackspace, from the Identity API to big data and analytics. Carlos graduated with a degree in computer engineering from the National Autonomous University of Mexico. Prior to joining Rackspace, he worked as a consultant, developing software for multiple financial enterprises in various programming languages. In Austin, Texas, he regularly attends local technology events and user groups. He also spends time volunteering and pursuing outdoor adventures.

I'm grateful to my parents and family, who have always believed in me.

www.PacktPub.com
Support files, eBooks, discount offers, and more
For support files and downloads related to your book, please visit www.PacktPub.com. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com, and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details. At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
Why subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via a web browser
Free access for Packt account holders
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.
Table of Contents

Preface

Chapter 1: Introduction to pandas and Data Analysis
Motivation for data analysis; We live in a big data world; 4 V's of big data (Volume, Velocity, Variety, and Veracity of big data); So much data, so little time for analysis; The move towards real-time analytics; How Python and pandas fit into the data analytics mix; What is pandas?; Benefits of using pandas; Summary

Chapter 2: Installation of pandas and the Supporting Software
Selecting a version of Python to use; Python installation (Linux: installing Python from compressed tarball, core Python installation, third-party Python software installation; Windows; Mac OS X: installation using a package manager); Installation of Python and pandas from a third-party vendor; Continuum Analytics Anaconda; Installing Anaconda (Linux, Mac OS X, Windows); Final step for all platforms; Other numeric or analytics-focused Python distributions; Downloading and installing pandas (Linux: Ubuntu/Debian, Red Hat, Fedora, OpenSuse; Mac: source installation, binary installation; Windows: binary installation, source installation); IPython; IPython Notebook; IPython installation (Linux, Windows, Mac OS X); Install via Anaconda (for Linux/Mac OS X); Wakari by Continuum Analytics; Virtualenv (installation and usage); Summary

Chapter 3: The pandas Data Structures
NumPy ndarrays; NumPy array creation (via numpy.array, numpy.arange, numpy.linspace, and various other functions); NumPy datatypes; NumPy indexing and slicing (array slicing, array masking, complex indexing); copies and views; operations (basic operations, reduction operations, statistical operators, logical operators); broadcasting; array shape manipulation (flattening a multidimensional array, reshaping, resizing, adding a dimension); array sorting; data structures in pandas: Series (Series creation, operations on Series), DataFrame (DataFrame creation, operations), Panel (using a 3D NumPy array with axis labels, using a Python dictionary of DataFrame objects, using the DataFrame.to_panel method, other operations); Summary

Chapter 4: Operations in pandas, Part I – Indexing and Selecting
Basic indexing; accessing attributes using the dot operator; range slicing; label, integer, and mixed indexing; label-oriented indexing; selection using a Boolean array; integer-oriented indexing; the .iat and .at operators; mixed indexing with the .ix operator; MultiIndexing; swapping and reordering levels; cross sections; Boolean indexing; the isin and any/all methods; using the where() method; operations on indexes; Summary

Chapter 5: Operations in pandas, Part II – Grouping, Merging, and Reshaping of Data
Grouping of data; the groupby operation; using groupby with a MultiIndex; using the aggregate method; applying multiple functions; the transform() method; filtering; merging and joining; the concat function; using append; appending a single row to a DataFrame; SQL-like merging/joining of DataFrame objects; the join function; pivots and reshaping data; stacking and unstacking; the stack() function; other methods to reshape DataFrames; using the melt function; Summary

Chapter 6: Missing Data, Time Series, and Plotting Using Matplotlib
Handling missing data; handling missing values; handling time series; reading in time series data; DateOffset and TimeDelta objects; time series-related instance methods; time series concepts and datatypes; shifting/lagging; frequency conversion; resampling of data; aliases for time series frequencies; Period and PeriodIndex; conversions between time series datatypes; a summary of time series-related objects; plotting using matplotlib; Summary

Chapter 7: A Tour of Statistics – The Classical Approach
Descriptive statistics versus inferential statistics; measures of central tendency and variability (the mean, the median, the mode; computing measures of central tendency of a dataset in Python); measures of variability, dispersion, or spread (range, quartile, deviation and variance); hypothesis testing – the null and alternative hypotheses (the alpha and p-values; Type I and Type II errors); statistical hypothesis tests (background, the z-test, the t-test, a t-test example); confidence intervals (an illustrative example); correlation and linear regression (correlation, linear regression, an illustrative example); Summary

Chapter 8: A Brief Tour of Bayesian Statistics
Introduction to Bayesian statistics; mathematical framework for Bayesian statistics; Bayes theory and odds; applications of Bayesian statistics; probability distributions (fitting a distribution; discrete probability distributions; discrete uniform distributions; continuous probability distributions); Bayesian statistics versus Frequentist statistics (what is probability?; how the model is defined; confidence (Frequentist) versus credible (Bayesian) intervals); conducting Bayesian statistical analysis; Monte Carlo estimation of the likelihood function and PyMC; Bayesian analysis example – switchpoint detection; References; Summary

Chapter 9: The pandas Library Architecture
Introduction to pandas' file hierarchy; description of pandas' modules and files (pandas/core, pandas/io, pandas/tools, pandas/sparse, pandas/stats, pandas/util, pandas/rpy, pandas/tests, pandas/compat, pandas/computation, pandas/tseries, pandas/sandbox); improving performance using Python extensions; Summary

Chapter 10: R and pandas Compared
R data types; R lists; R DataFrames; slicing and selection; R-matrix and NumPy array compared; R lists and pandas Series compared; specifying column name in R and in pandas; R's DataFrames versus pandas' DataFrames; multicolumn selection in R and in pandas; arithmetic operations on columns; aggregation and GroupBy (aggregation in R; the pandas GroupBy operator); comparing matching operators in R and pandas (the R %in% operator; the pandas isin() function); logical subsetting (in R and in pandas); split-apply-combine (implementation in R; implementation in pandas); reshaping using melt (the R melt() function; the pandas melt() function); factors/categorical data (an R example using cut(); the pandas solution); Summary

Chapter 11: Brief Tour of Machine Learning
Role of pandas in machine learning; installation of scikit-learn (installing via Anaconda; installing on Unix (Linux/Mac OS X); installing on Windows); introduction to machine learning; supervised versus unsupervised learning; illustration using document classification (supervised learning; unsupervised learning); how machine learning systems learn; application of machine learning – the Kaggle Titanic competition; the Titanic: machine learning from disaster problem; the problem of overfitting; data analysis and preprocessing using pandas (examining the data; handling missing values); a naïve approach to the Titanic problem; the scikit-learn ML/classifier interface; supervised learning algorithms (constructing a model using Patsy for scikit-learn; general boilerplate code explanation; logistic regression; support vector machine; decision trees; random forest); unsupervised learning algorithms (dimensionality reduction; K-means clustering); Summary
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: " Upon installation, the following folders should be added to the PATH environment variable: C:\Python27\ and C:\Python27\Tools\Scripts." Any command-line input or output is written as follows: brew install readline brew install zeromq pip install ipython pyzmq tornado pygments New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "The preceding image of PYMC pandas Example is taken from http://healthyalgorithms.files. wordpress.com/2012/01/pymc-pandas-example.png." Warnings or important notes appear in a box like this. Tips and tricks appear like this. Reader feedback Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors. [ xii ] Customer support Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase. Downloading the example code You can download the example code files from your account at http://www. packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http:/ /www.packtpub.com/support and register to have the files e-mailed directly to you. You can also download the code from the GitHub repository at: https://github.com/femibyte/mastering_pandas Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub. com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https:// www.packtpub.com/books/ content/support and enter the name of the book in the search field. The required information will appear under the Errata section. Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. [ xiii ] Please contact us at [email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content. If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem. 
[ xiv ] Introduction to pandas and Data Analysis In this chapter, we address the following: • Motivation for data analysis • How Python and pandas can be used for data analysis • Description of the pandas library • Benefits of using pandas Motivation for data analysis In this section, we will discuss the trends that are making data analysis an increasingly important field of endeavor in today's fast-moving technological landscape. We live in a big data world The term big data has become one of the hottest technology buzzwords in the past two years. We now increasingly hear about big data in various media outlets, and big data startup companies have increasingly been attracting venture capital. A good example in the area of retail would be Target Corporation, which has invested substantially in big data and is now able to identify potential customers by using big data to analyze people's shopping habits online; refer to a related article at http://nyti.ms/19LT8ic. Loosely speaking, big data refers to the phenomenon wherein the amount of data exceeds the capability of the recipients of the data to process it. Here is a Wikipedia entry on big data that sums it up nicely: http://en.wikipedia.org/wiki/Big_data. [1] Introduction to pandas and Data Analysis 4 V's of big data A good way to start thinking about the complexities of big data is along what are called the 4 dimensions, or 4 V's of big data. This model was first introduced as the 3V's by Gartner analyst Doug Laney in 2001. The 3V's stood for Volume, Velocity, and Variety, and the 4th V, Veracity, was added later by IBM. Gartner's official definition is as follows: "Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization." Laney, Douglas. "The Importance of 'Big Data': A Definition", Gartner Volume of big data The volume of data in the big data age is simply mind-boggling. According to IBM, by 2020, the total amount of data on the planet would have ballooned to 40 zettabytes. You heard that right-40 zettabytes is 43 trillion gigabytes, which is about 4 × 1021 bytes. For more information on this refer to the Wikipedia page on Zettabyte - http://en.wikipedia.org/wiki/Zettabyte. To get a handle of how much data this would be, let me refer to an EMC press release published in 2010, which stated what 1 zettabyte was approximately equal to: "The digital information created by every man, woman and child on Earth 'Tweeting' continuously for 100 years " or "75 billion fully-loaded 16 GB Apple iPads, which would fill the entire area of Wembley Stadium to the brim 41 times, the Mont Blanc Tunnel 84 times, CERN's Large Hadron Collider tunnel 151 times, Beijing National Stadium 15.5 times or the Taipei 101 Tower 23 times..." EMC study projects 45× data growth by 2020 The growth rate of data has been fuelled largely by a few factors, such as the following: • The rapid growth of the Internet. • The conversion from analog to digital media coupled with an increased capability to capture and store data, which in turn has been made possible with cheaper and more capable storage technology. There has been a proliferation of digital data input devices such as cameras and wearables, and the cost of huge data storage has fallen rapidly. Amazon Web Services is a prime example of the trend toward much cheaper storage. 
[2] Chapter 1 The Internetification of devices, or rather Internet of Things, is the phenomenon wherein common household devices, such as our refrigerators and cars, will be connected to the Internet. This phenomenon will only accelerate the above trend. Velocity of big data From a purely technological point of view, velocity refers to the throughput of big data, or how fast the data is coming in and is being processed. This has ramifications on how fast the recipient of the data needs to process it to keep up. Real-time analytics is one attempt to handle this characteristic. Tools that can help enable this include Amazon Web Services Elastic Map Reduce. At a more macro level, the velocity of data can also be regarded as the increased speed at which data and information can now be transferred and processed faster and at greater distances than ever before. The proliferation of high-speed data and communication networks coupled with the advent of cell phones, tablets, and other connected devices, are primary factors driving information velocity. Some measures of velocity include the number of tweets per second and the number of emails per minute. Variety of big data The variety of big data comes from having a multiplicity of data sources that generate the data, and the different formats of the data that are produced. This results in a technological challenge for the recipients of the data who have to process it. Digital cameras, sensors, the web, cell phones, and so on are some of the data generators that produce data in differing formats, and the challenge comes in being able to handle all these formats and extract meaningful information from the data. The ever-changing nature of the data formats with the dawn of the big data era has led to a revolution in the database technology industry, with the rise of NoSQL databases to handle what is known as unstructured data or rather data whose format is fungible or constantly changing. For more information on Couchbase, refer to "Why NoSQL- http://bit.ly/1c3iVEc. Introduction to pandas and Data Analysis Veracity of big data The 4th characteristic of big data – veracity, which was added later, refers to the need to validate or confirm the correctness of the data or the fact that the data represents the truth. The sources of data must be verified and the errors kept to a minimum. According to an estimate by IBM, poor data quality costs the US economy about $3.1 trillion dollars a year. For example, medical errors cost the United States $19.5 billion in 2008; for more information you can refer to a related article at http://bit.ly/1CTah5r. Here is an info-graphic by IBM that summarizes the 4V's of big data: IBM on the 4 V's of big data So much data, so little time for analysis Data analytics has been described by Eric Schmidt, the former CEO of Google, as the Future of Everything. For reference, you can check out a YouTube video called Why Data Analytics is the Future of Everything at http://bit.ly/1KmqGCP. Chapter 1 The volume and velocity of data will continue to increase in the big data age. Companies that can efficiently collect, filter, and analyze data results in information that allows them to better meet the needs of their customers in a much quicker timeframe will gain a significant competitive advantage over their competitors. For example, data analytics (Culture of Metrics) plays a very key role in the business strategy of http://www.amazon.com/. For more information refer to Amazon.com Case Study, Smart Insights at http://bit.ly/1glnA1u. 
The move towards real-time analytics As technologies and tools have evolved, to meet the ever-increasing demands of business, there has been a move towards what is known as real-time analytics. More information on Insight Everywhere, Intel available at http://intel.ly/1899xqo. In the big data Internet era, here are some examples: • Online businesses demand instantaneous insights into how the new products/features they have introduced in their online market are doing and how they can adjust their online product mix accordingly. Amazon is a prime example of this with their Customers Who Viewed This Item Also Viewed feature. • In finance, risk management and trading systems demand almost instantaneous analysis in order to make effective decisions based on data-driven insights. How Python and pandas fit into the data analytics mix The Python programming language is one of the fastest growing languages today in the emerging field of data science and analytics. Python was created by Guido von Russom in 1991, and its key features include the following: • Interpreted rather than compiled • Dynamic type system • Pass by value with object references • Modular capability • Comprehensive libraries • Extensibility with respect to other languages Introduction to pandas and Data Analysis • Object orientation • Most of the major programming paradigms-procedural, object-oriented, and to a lesser extent, functional. For more information, refer the Wikipedia page on Python at http:// en.wikipedia.org/wiki/ Python_%28programming_language%29. Among the characteristics that make Python popular for data science are its very user-friendly (human-readable) syntax, the fact that it is interpreted rather than compiled (leading to faster development time), and its very comprehensive library for parsing and analyzing data, as well as its capacity for doing numerical and statistical computations. Python has libraries that provide a complete toolkit for data science and analysis. The major ones are as follows: • NumPy: The general-purpose array functionality with emphasis on numeric computation • SciPy: Numerical computing • Matplotlib: Graphics • pandas: Series and data frames (1D and 2D array-like types) • Scikit-Learn: Machine learning • NLTK: Natural language processing • Statstool: Statistical analysis For this book, we will be focusing on the 4th library listed in the preceding list, pandas. What is pandas? The pandas is a high-performance open source library for data analysis in Python developed by Wes McKinney in 2008. Over the years, it has become the de-facto standard library for data analysis using Python. There's been great adoption of the tool, a large community behind it, (220+ contributors and 9000+ commits by 03/2014), rapid iteration, features, and enhancements continuously made. Some key features of pandas include the following: • It can process a variety of data sets in different formats: time series, tabular heterogeneous, and matrix data. • It facilitates loading/importing data from varied sources such as CSV and DB/SQL. [6] Chapter 1 • It can handle a myriad of operations on data sets: subsetting, slicing, filtering, merging, groupBy, re-ordering, and re-shaping. • It can deal with missing data according to rules defined by the user/ developer: ignore, convert to 0, and so on. • It can be used for parsing and munging (conversion) of data as well as modeling and statistical analysis. • It integrates well with other Python libraries such as statsmodels, SciPy, and scikit-learn. 
• It delivers fast performance and can be sped up even further by making use of Cython (C extensions for Python).
For more information, go through the official pandas documentation available at http://pandas.pydata.org/pandas-docs/stable/.

Benefits of using pandas
pandas forms a core component of the Python data analysis corpus. The distinguishing feature of pandas is the suite of data structures that it provides, which are naturally suited to data analysis: primarily the DataFrame and, to a lesser extent, Series (1D vectors) and Panel (3D tables). Simply put, pandas and statsmodels can be described as Python's answer to R, the data analysis and statistical programming language that provides both data structures, such as R data frames, and a rich statistical library for data analysis. The benefits of pandas over using a language such as Java, C, or C++ for data analysis are manifold:
• Data representation: It can easily represent data in a form naturally suited for data analysis, via its DataFrame and Series data structures, in a concise manner. Doing the equivalent in Java/C/C++ would require many lines of custom code, as these languages were not built for data analysis but rather for networking and kernel development.
• Data subsetting and filtering: It provides easy subsetting and filtering of data, procedures that are a staple of data analysis.
• Concise and clear code: Its concise and clear API allows the user to focus more on the core goal at hand, rather than having to write a lot of scaffolding code in order to perform routine tasks. For example, reading a CSV file into a DataFrame data structure in memory takes two lines of code, while doing the same task in Java/C/C++ would require many more lines of code, or calls to non-standard libraries, as illustrated in the following example. Here, let us suppose that we have data with the following columns: Country, Year, CO2 Emissions, Power Consumption, Fertility Rate, Internet Usage Per 1000 People, Life Expectancy, and Population.
In a CSV file, this data that we wish to read would look like the following:

Country,Year,CO2Emissions,PowerConsumption,FertilityRate,InternetUsagePer1000,LifeExpectancy,Population
Belarus,2000,5.91,2988.71,1.29,18.69,68.01,1.00E+07
Belarus,2001,5.87,2996.81,,43.15,,9970260
Belarus,2002,6.03,2982.77,1.25,89.8,68.21,9925000
...
Philippines,2000,1.03,514.02,,20.33,69.53,7.58E+07
Philippines,2001,0.99,535.18,,25.89,,7.72E+07
Philippines,2002,0.99,539.74,3.5,44.47,70.19,7.87E+07
...
Morocco,2000,1.2,489.04,2.62,7.03,68.81,2.85E+07
Morocco,2001,1.32,508.1,2.5,13.87,,2.88E+07
Morocco,2002,1.32,526.4,2.5,23.99,69.48,2.92E+07
...

The data here is taken from World Bank Economic data, available at http://data.worldbank.org.

In Java, we would have to write something like the following code:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CSVReader {
    public static void main(String[] args) {
        String csvFile = args[0];
        CSVReader csvReader = new CSVReader();
        List<Map<String, String>> dataTable = csvReader.readCSV(csvFile);
    }

    public List<Map<String, String>> readCSV(String csvFile) {
        String line;
        String delim = ",";
        // Initialize a list of maps, each map representing a line of the CSV file
        List<Map<String, String>> data = new ArrayList<Map<String, String>>();
        try {
            BufferedReader bReader = new BufferedReader(new FileReader(csvFile));
            // Read the CSV file, line by line
            while ((line = bReader.readLine()) != null) {
                String[] row = line.split(delim);
                Map<String, String> csvRow = new HashMap<String, String>();
                csvRow.put("Country", row[0]);
                csvRow.put("Year", row[1]);
                csvRow.put("CO2Emissions", row[2]);
                csvRow.put("PowerConsumption", row[3]);
                csvRow.put("FertilityRate", row[4]);
                csvRow.put("InternetUsage", row[5]);
                csvRow.put("LifeExpectancy", row[6]);
                csvRow.put("Population", row[7]);
                data.add(csvRow);
            }
            bReader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }
}

But, using pandas, it would take just two lines of code:

import pandas as pd
worldBankDF=pd.read_csv('worldbank.csv')

In addition, pandas is built upon the NumPy libraries and hence inherits many of the performance benefits of that package, especially when it comes to numerical and scientific computing. One oft-touted drawback of using Python is that, as a scripting language, its performance relative to languages like Java/C/C++ has been rather slow. However, this is not really the case for pandas.

Summary
We live in a big data era characterized by the 4 V's: volume, velocity, variety, and veracity. The volume and velocity of data are ever increasing for the foreseeable future. Companies that can harness and analyze big data to extract information, and take actionable decisions based on this information, will be the winners in the marketplace. Python is a fast-growing, user-friendly, extensible language that is very popular for data analysis. pandas is a core library of the Python toolkit for data analysis. It provides features and capabilities that make data analysis much easier and faster than with many other popular languages such as Java, C, C++, and Ruby. Thus, given the strengths of Python listed in the preceding section as a choice for the analysis of data, the data analysis practitioner utilizing Python should become quite adept at pandas in order to become more effective. This book aims to assist the user in achieving this goal.

Installation of pandas and the Supporting Software
Before we can start work on pandas for doing data analysis, we need to make sure that the software is installed and the environment is in proper working order. This chapter deals with the installation of Python (if necessary), the pandas library, and all necessary dependencies for the Windows, Mac OS X, and Linux platforms. The topics we address include the following:
• Selecting a version of Python
• Installing Python
• Installing pandas (0.16.0)
• Installing IPython and Virtualenv
The steps outlined in the following sections should work for the most part, but your mileage may vary depending upon your setup. On different operating system versions, the scripts may not always work perfectly, and the third-party software packages already on the system may sometimes conflict with the provided instructions.

Selecting a version of Python to use
Before proceeding with the installation and download of Python and pandas, we need to consider the version of Python we're going to use. Currently, there are two flavors of Python in use, namely Python 2.7.x and Python 3.x. If the reader is new to Python as well as to pandas, the question becomes which version of the language he/she should adopt. On the surface, Python 3.x would appear to be the better choice, since Python 2.7.x is supposed to be the legacy version and Python 3.x is supposed to be the future of the language. For reference, you can go through the documentation on this, titled Python2orPython3, at https://wiki.python.org/moin/Python2orPython3.
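If you are unsure which version of Python a given interpreter is running, you can check from within Python itself, as in the following minimal sketch (the exact output shown here is illustrative and depends on your installation):

In [1]: import sys
In [2]: sys.version_info  # major, minor, and micro version numbers of the running interpreter
Out[2]: sys.version_info(major=2, minor=7, micro=6, releaselevel='final', serial=0)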
The main differences between Python 2.x and 3 include better Unicode support in Python 3, print and exec being changed to functions, and integer division. For more details, see What's New in Python 3.0 at http://docs.python.org/3/whatsnew/3.0.html. However, for scientific, numeric, or data analysis work, Python 2.7 is recommended over Python 3 for the following reason: Python 2.7 is the preferred version for most current distributions, and the support for Python 3.x was not as strong for some libraries, although that is increasingly becoming less of an issue. For reference, have a look at the documentation titled Will Scientists Ever Move to Python 3? at http://bit.ly/1DOgNuX. Hence, this book will use Python 2.7. This does not preclude the use of Python 3; developers using Python 3 can easily make the necessary code changes to the examples by referring to the documentation titled Porting Python 2 Code to Python 3 at http://docs.python.org/2/howto/pyporting.html.

Python installation
Here, we detail the installation of Python on multiple platforms: Linux, Windows, and Mac OS X.

Linux
If you're using Linux, Python most probably came pre-installed. If you're not sure, type the following at the command prompt:
which python
Python is likely to be found in one of the following folders on Linux, depending upon your distribution and particular installation:
• /usr/bin/python
• /bin/python
• /usr/local/bin/python
• /opt/local/bin/python
You can determine which particular version of Python is installed by typing the following at the command prompt:
python --version
In the rare event that Python isn't already installed, you need to figure out which flavor of Linux you're using, then download and install it. Here are the install commands, as well as links to the various Linux Python distributions:
1. Debian/Ubuntu (14.04):
sudo apt-get install python2.7
sudo apt-get install python2.7-dev
See the Debian Python page at https://wiki.debian.org/Python.
2. Red Hat Fedora/CentOS/RHEL:
sudo yum install python
sudo yum install python-devel
See Fedora software installs at http://bit.ly/1B2RpCj.
3. openSUSE:
sudo zypper install python
sudo zypper install python-devel
More information on installing software can be found in the openSUSE documentation.
4. Slackware: For this distribution of Linux, it may be best to download a compressed tarball and install it from the source, as described in the following section.

Installing Python from a compressed tarball
If none of the preceding methods work for you, you can also download a compressed tarball (XZ or Gzip) and install it from the source. Here is a brief synopsis of the steps:
# Install dependencies
sudo apt-get install build-essential
sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
# Download the tarball
mkdir /tmp/downloads
cd /tmp/downloads
wget http://python.org/ftp/python/2.7.5/Python-2.7.5.tgz
tar xvfz Python-2.7.5.tgz
cd Python-2.7.5
# Configure, build and install
./configure --prefix=/opt/python2.7 --enable-shared
make
make test
sudo make install
echo "/opt/python2.7/lib" >> /etc/ld.so.conf.d/opt-python2.7.conf
ldconfig
cd ..
rm -rf /tmp/downloads
Information on this can be found at the Python download page at http://www.python.org/download/.

Windows
Unlike the Linux and Mac distributions, Python does not come pre-installed on Windows.

Core Python installation
The standard method is to use the Windows installers from the CPython team, which are MSI packages.
The MSI packages can be downloaded from http://www.python.org/download/releases/2.7.6/. Select the appropriate Windows package, depending upon whether your Windows version is 32-bit or 64-bit. Python, by default, gets installed to a folder containing the version number, so in this case, it will be installed to the following location: C:\Python27. This enables you to have multiple versions of Python running without problems. Upon installation, the following folders should be added to the PATH environment variable: C:\Python27\ and C:\Python27\Tools\Scripts.

Third-party Python software installation
There are a couple of Python tools that need to be installed in order to make the installation of other packages such as pandas easier: Setuptools and pip. Setuptools is very useful for installing other Python packages such as pandas. It adds to the packaging and installation functionality that is provided by the distutils tool in the standard Python distribution. To install Setuptools, download the ez_setup.py script from the following link: https://bitbucket.org/pypa/setuptools/raw/bootstrap. Then, save it to C:\Python27\Tools\Scripts and run it:
C:\Python27\Tools\Scripts\ez_setup.py
The associated command, pip, provides the developer with an easy-to-use command that enables a quick and easy installation of Python modules. Download the get-pip script from the following link: http://www.pip-installer.org/en/latest/. Then, run it from the following location:
C:\Python27\Tools\Scripts\get-pip.py
For reference, you can also go through the documentation titled Installing Python on Windows at http://docs.python-guide.org/en/latest/starting/install/win/. There are also third-party providers of Python on Windows that make the task of installation even easier. They are listed as follows:
• Enthought: https://enthought.com/
• Continuum Analytics: http://www.continuum.io/
• ActiveState Python: http://www.activestate.com/

Mac OS X
Python 2.7 comes pre-installed on the current and recent releases (the past 5 years) of Mac OS X. The pre-installed Apple-provided build can be found in the following folders on the Mac:
• /System/Library/Frameworks/Python.framework
• /usr/bin/python
However, you can install your own version from http://www.python.org/download/. The one caveat to this is that you will now have two installations of Python, and you have to be careful to make sure the paths and environments are cleanly separated.

Installation using a package manager
Python can also be installed using a package manager on the Mac, such as Macports or Homebrew. I will discuss installation using Homebrew here, as it seems to be the most user-friendly. For reference, you can go through the documentation titled Installing Python on Mac OS X at http://docs.python-guide.org/en/latest/starting/install/osx/. Here are the steps:
1. Install Homebrew and run:
ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
You then need to add the Homebrew folder at the top of your PATH environment variable.
2. Install Python 2.7 at the Unix prompt:
brew install python
3. Install third-party software, Distribute and pip. Installation of Homebrew automatically installs these packages.
Distribute and pip enable one to easily download and install/uninstall Python packages.

Installation of Python and pandas from a third-party vendor
The most straightforward way to install Python, pandas, and their associated dependencies is to install a packaged distribution from a third-party vendor such as Enthought or Continuum Analytics. I used to prefer Continuum Analytics Anaconda over Enthought, because Anaconda was given away free while Enthought used to charge a subscription for full access to all its numerical modules. However, with the latest release of Enthought Canopy, there is little to separate the two distributions. Nevertheless, my personal preference is for Anaconda, so it is the distribution whose installation I will describe. For reference, see Anaconda Python Distribution at http://bit.ly/1aBhmgH. I will now give a brief description of the Anaconda package and how to install it.

Continuum Analytics Anaconda
Anaconda is a free Python distribution focused on large-scale data processing, analytics, and numeric computing. The following are the key features of Anaconda:
• It includes the most popular Python packages for scientific, engineering, numerical, and data analysis.
• It is completely free and available on the Linux, Windows, and Mac OS X platforms.
• Installations do not require root or local admin privileges, and the entire package installs in a single folder.
• Multiple installations can coexist, and the installation does not affect pre-existing Python installations on the system.
• It includes modules such as Cython, NumPy, SciPy, pandas, IPython, and matplotlib, as well as homegrown Continuum packages such as Numba, Blaze, and Bokeh.
For more information on this, refer to the Anaconda page on the Continuum Analytics website.

Installing Anaconda
The following instructions detail how to install Anaconda on all three platforms. The download location is http://continuum.io/downloads. The version of Python in Anaconda is Python 2.7 by default.

Linux
Perform the following steps for installation:
1. Download the Linux installer (32/64-bit) from the download location.
2. In a terminal, run the following command on the downloaded file. For example:
bash Anaconda-1.8.0-Linux-x86_64.sh
3. Accept the license terms.
4. Specify the install location. I tend to use $HOME/local for my local third-party software installations.

Mac OS X
Perform the following steps for installation:
1. Download the Mac installer (.pkg file, 64-bit) from the download location.
2. Double-click on the .pkg file to install, and follow the instructions in the window that pops up. For example, the package file name would be: Anaconda-1.8.0-MacOSX-x86_64.pkg.

Windows
Perform the following steps for the Windows environment:
1. Download the Windows installer (.exe file, 32/64-bit) from the download location.
2. Double-click on the .exe file to install, and follow the instructions in the window that pops up. For example, the installer file name would be: Anaconda-1.8.0-Windows-x86_64.exe.

Final step for all platforms
As a shortcut, you can define ANACONDA_HOME to be the folder into which Anaconda was installed. For example, on my Linux and Mac OS X installations, I have the following environment variable setting:
export ANACONDA_HOME=$HOME/local/anaconda
On Windows, it would be as follows:
set ANACONDA_HOME=C:\Anaconda
Add the Anaconda bin folder to your PATH environment variable. If you wish to use Anaconda's Python by default, you can do this by making sure that $ANACONDA_HOME/bin is at the head of the PATH variable, before the folder containing the system Python.
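For example, on Linux or Mac OS X, a line such as the following in your shell startup file would accomplish this (a sketch; the exact path depends on the install location chosen earlier):
export PATH=$ANACONDA_HOME/bin:$PATH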
If you don't want to use Anaconda's Python by default, you have the following two options:
1. Activate the Anaconda environment each time, as needed. This can be done as follows:
source $HOME/local/anaconda/bin/activate $ANACONDA_HOME
2. Create a separate environment for Anaconda. This can be done by using the built-in conda command, as described here: https://github.com/pydata/conda.
For more information, read the Conda documentation at http://docs.continuum.io/conda/index.html. More detailed instructions on installing Anaconda can be obtained from the Anaconda Installation page at http://docs.continuum.io/anaconda/install.html.

Other numeric or analytics-focused Python distributions
The following is a synopsis of various third-party data analysis-related Python distributions. All of the following distributions include pandas:
• Continuum Analytics Anaconda: Free enterprise-ready Python distribution focused on large-scale data processing, analytics, and numeric computing. For details, refer to https://store.continuum.io/cshop/anaconda/.
• Enthought Canopy: Comprehensive Python data analysis environment. For more information, refer to https://www.enthought.com/products/canopy/.
• Python(x,y): Free scientific and engineering-oriented Python distribution for numerical computing, data analysis, and visualization. It is based on the Qt GUI package and the Spyder interactive scientific development environment. For more information, refer to https://code.google.com/p/pythonxy/.
• WinPython: Free open source distribution of Python for the Windows platform, focused on scientific computing. For more information, refer to http://winpython.sourceforge.net/.
For more information on Python distributions, go to http://bit.ly/1yOzB7o.

Downloading and installing pandas
With Python in place, we can now proceed to install the pandas library itself. At the time of writing this book, the latest stable version of pandas available is version 0.16. The various dependencies, along with the associated download locations, are as follows:
• NumPy (1.6.1 or higher), required: NumPy library for numerical operations. Download location: http://www.numpy.org/
• python-dateutil (1.5), required: Date manipulation and utility library. Download location: http://labix.org/
• pytz, required: Time zone support. Download location: http://sourceforge.net/
• numexpr, optional, recommended: Speeding up of numerical operations. Download location: https://code.google.com/
• bottleneck, optional, recommended: Speeding up of certain numerical operations. Download location: http://berkeleyanalytics.com/
• Cython, optional, recommended: C-extensions for Python, used for optimization. Download location: http://cython.org/
• SciPy, optional, recommended: Scientific toolset for Python. Download location: http://scipy.org/
• PyTables, optional, recommended: Library for HDF5-based storage. Download location: http://pytables.github.io/
• matplotlib, optional, recommended: Matlab-like Python plotting library. Download location: http://sourceforge.net/
• statsmodels, optional: Statistics module for Python. Download location: http://sourceforge.net/
• openpyxl, optional: Library to read/write Excel files. Download location: https://www.python.org/
• xlrd/xlwt, optional: Libraries to read/write Excel files. Download location: https://www.python.org/
• boto, optional: Library to access Amazon S3. Download location: https://www.python.org/
• BeautifulSoup4 and one of html5lib or lxml, optional: Libraries needed for the read_html() function to work. Download location: http://www.crummy.com/
• html5lib, optional: Library for parsing HTML. Download location: https://pypi.python.org/pypi/html5lib
• lxml, optional: Python library for processing XML and HTML. Download location: http://lxml.de/

Linux
Installing pandas is fairly straightforward for popular flavors of Linux. First, make sure that the Python .dev files are installed. If not, then install them as explained in the following section.
Ubuntu/Debian
For the Ubuntu/Debian environment, run the following command:
sudo apt-get install python-dev

Red Hat
For the Red Hat environment, run the following command:
yum install python-devel

Now, I will show you how to install pandas.

Ubuntu/Debian
For installing pandas in the Ubuntu/Debian environment, run the following command:
sudo apt-get install python-pandas

Fedora
For Fedora, run the following command:
sudo yum install python-pandas

openSUSE
Install python-pandas via YaST Software Management, or use the following command:
sudo zypper install python-pandas

Sometimes, additional dependencies may be needed for the preceding installation, particularly in the case of Fedora. In this case, you can try installing the additional dependencies:
sudo yum install gcc-gfortran gcc44-gfortran libgfortran lapack blas python-devel
sudo pip install numpy

Mac OS X
There are a variety of ways to install pandas on Mac OS X. They are explained in the following sections.

Source installation
pandas has a few dependencies for it to work properly; some are required, and the others are optional, although needed for certain desirable features to work properly. The following installs the required dependencies:
1. Install the easy_install program:
wget http://python-distribute.org/distribute_setup.py
sudo python distribute_setup.py
2. Install Cython:
sudo easy_install -U Cython
3. You can then install from the source code as follows:
git clone git://github.com/pydata/pandas.git
cd pandas
sudo python setup.py install

Binary installation
If you have installed pip as described in the Python installation section, installing pandas is as simple as the following:
pip install pandas

Windows
The following methods describe the installation in the Windows environment.

Binary installation
Make sure that numpy, python-dateutil, and pytz are installed first. The following commands need to be run for each of these modules:
• For python-dateutil: C:\Python27\Scripts\pip install python-dateutil
• For pytz: C:\Python27\Scripts\pip install pytz
Install from the binary download by running the binary for your version of Windows from https://pypi.python.org/pypi/pandas. For example, if your processor is an AMD64, you can download and install pandas by using the following steps:
1. Download the following file (this applies to pandas 0.16):
pandas-0.16.1-cp27-none-win_amd64.whl (md5)
2. Install the downloaded file via pip:
pip install pandas-0.16.1-cp27-none-win_amd64.whl
To test the install, run Python and type the following at the command prompt:
import pandas
If it returns with no errors, then the installation was successful.

Source installation
The steps here explain the installation completely:
1. Install the MinGW compiler by following the instructions in the documentation titled Appendix: Installing MinGW on Windows at http://docs.cython.org/src/tutorial/appendix.html.
2. Make sure that the MinGW binary location is added to the PATH variable, that is, that C:\MinGW\bin is appended to it.
3. Install Cython and NumPy. NumPy can be downloaded and installed from http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy. Cython can be downloaded and installed from http://www.lfd.uci.edu/~gohlke/pythonlibs/#cython. The steps to install Cython are as follows:
• Installation via pip: C:\Python27\Scripts\pip install Cython
• Direct download:
1. Download and install the pandas source from GitHub: http://github.com/pydata/pandas.
2. You can simply download and extract the zip file to a suitable folder.
3. Change to the folder containing the pandas download and run C:\python27\python setup.py install.
4. Sometimes, you may obtain the following error when running setup.py:
distutils.errors.DistutilsError: Setup script exited with error: Unable to find vcvarsall.bat
This may have to do with not properly specifying mingw as the compiler. Check that you have followed all the steps again.
Installing pandas on Windows from the source is prone to many bugs and errors, and is not really recommended.

IPython
Interactive Python (IPython) is a tool that is very useful for using Python for data analysis, and a brief description of the installation steps is provided here. IPython provides an interactive environment that is much more useful than the standard Python prompt. Its features include the following:
• Tab completion to help the user do data exploration.
• Comprehensive help functionality, using object_name? to print details about objects.
• Magic functions that enable the user to run operating system commands within IPython, and to run a Python script and load its data into the IPython environment by using the %run magic command.
• History functionality via the _, __, and ___ variables, the %history and other magic functions, and the up and down arrow keys.
For more information, see the documentation at http://bit.ly/1Is4zIW.

IPython Notebook
IPython Notebook is the web-enabled version of IPython. It enables the user to combine code, numerical computation, and the display of graphics and rich media in a single document, the notebook. Notebooks can be shared with colleagues and converted to the HTML/PDF formats. For more information, refer to the documentation titled The IPython Notebook at http://ipython.org/notebook.html. An example of such a notebook is the PyMC Pandas Example.

IPython installation
The recommended method to install IPython is to use a third-party package such as Continuum's Anaconda or Enthought Canopy. Assuming that pandas and the other tools for scientific computing have been installed as per the instructions, the following one-line commands should suffice.
For Ubuntu/Debian, use:
sudo apt-get install ipython-notebook
For Fedora, use:
sudo yum install python-ipython-notebook
If you have pip and setuptools installed, you can also install it via the following command for the Linux/Mac platforms:
sudo pip install ipython

Windows
IPython requires setuptools on Windows, and the PyReadline library. PyReadline is a Python implementation of the GNU readline library. To install IPython on Windows, perform the following steps:
1. Install setuptools as detailed in the preceding section.
2. Install pyreadline by downloading the MS Windows installer from the PyPI Readline package page at https://pypi.python.org/pypi/pyreadline.
3. Download and run the IPython installer from the GitHub IPython download location: https://github.com/ipython/ipython/downloads.
For more information, see the IPython installation page.

Mac OS X
IPython can be installed on Mac OS X by using pip or setuptools. It also needs the readline and zeromq libraries, which are best installed by using Homebrew. The steps are as follows:
brew install readline
brew install zeromq
pip install ipython pyzmq tornado pygments
The pyzmq, tornado, and pygments modules are necessary to obtain the full graphical functionality of IPython Notebook.
For more information, see the documentation titled Setup IPython Notebook and Pandas for OSX at http://bit.ly/1JG0wKA.

Install via Anaconda (for Linux/Mac OS X)
Assuming that Anaconda is already installed, simply run the following commands to update IPython to the latest version:
conda update conda
conda update ipython

Wakari by Continuum Analytics
If the user is not quite ready to install IPython, an alternative would be to use IPython in the cloud. Enter Wakari, a cloud-based analytics solution that provides full support for IPython notebooks hosted on Continuum's servers. It allows the user to create, edit, save, and share IPython notebooks, all within a browser on the cloud. More details can be found at http://continuum.io/wakari.

Virtualenv
Virtualenv is a tool that is used to create isolated Python environments. It can be useful if you wish to work in an environment to test out the latest version of pandas without affecting the standard Python build.

Virtualenv installation and usage
I would only recommend installing Virtualenv if you decide not to install and use the Anaconda package, as Anaconda already provides the Virtualenv functionality. The brief steps are as follows:
1. Install via pip:
pip install virtualenv
2. Use of Virtualenv:
• Create a virtual environment by using the following command:
virtualenv newEnv
• Activate the virtual environment by using the following command:
source newEnv/bin/activate
• Deactivate the virtual environment and go back to the standard Python environment by using the following command:
deactivate
For more information on this, you can go through the documentation titled Virtual Environments at http://docs.python-guide.org/en/latest/dev/virtualenvs/.

Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. You can also download the code from the GitHub repository at https://github.com/femibyte/mastering_pandas.

Summary
There are two main versions of Python available: Python 2.7.x and Python 3.x. At the moment, Python 2.7.x is preferable for data analysis and numerical computing, as it is more mature. The pandas library requires a few dependencies in order to be set up correctly: NumPy, SciPy, and matplotlib, to name a few. There are myriad ways to install pandas; the recommended method is to install one of the third-party distributions that include it. Such distributions include Anaconda by Continuum Analytics, Enthought Canopy, WinPython, and Python(x,y). Installation of the IPython package is highly recommended, as it provides a rich, highly interactive environment for data analysis. Thus, setting up our environment for learning pandas involves installing a suitable version of Python, installing pandas and its dependent modules, and setting up some useful tools such as IPython. To re-emphasize, I strongly advise readers to do themselves a favor and make their task easier by installing a third-party distribution, such as Anaconda or Enthought Canopy, so as to get their environment up and running trouble-free in the shortest possible timeframe. In our next chapter, we will start diving into pandas directly as we take a look at its key features.

The pandas Data Structures
This chapter is one of the most important ones in this book.
We will now begin to dive into the meat and bones of pandas. We start by taking a tour of NumPy ndarrays, a data structure that is not part of pandas but of NumPy. Knowledge of NumPy ndarrays is useful, as they form the foundation for the pandas data structures. Another key benefit of NumPy arrays is that they support vectorized operations, which are much faster than the equivalent traversal/looping over a Python array. The topics we will cover in this chapter include the following:
• Tour of the numpy.ndarray data structure
• The pandas.Series 1-dimensional (1D) pandas data structure
• The pandas.DataFrame 2-dimensional (2D) pandas tabular data structure
• The pandas.Panel 3-dimensional (3D) pandas data structure
In this chapter, I will present the material via numerous examples using IPython, an interface that allows the user to type in commands interactively to the Python interpreter. Instructions for installing IPython are provided in the previous chapter.

NumPy ndarrays
The NumPy library is a very important package used for numerical computing with Python. Its primary features include the following:
• The type numpy.ndarray, a homogeneous multidimensional array
• Access to numerous mathematical functions: linear algebra, statistics, and so on
• Ability to integrate C, C++, and Fortran code
For more information about NumPy, see http://www.numpy.org.
The primary data structure in NumPy is the array class ndarray. It is a homogeneous multi-dimensional (n-dimensional) table of elements, which are indexed by integers, just as in a normal array. However, numpy.ndarray (also known as numpy.array) is different from the standard Python array.array class, which offers much less functionality. More information on the various operations is provided at http://scipy-lectures.github.io/intro/numpy/array_object.html.

NumPy array creation
NumPy arrays can be created in a number of ways via calls to various NumPy methods.

NumPy arrays via numpy.array
NumPy arrays can be created via the numpy.array constructor directly:
In [1]: import numpy as np
In [2]: ar1=np.array([0,1,2,3])  # 1-dimensional array
In [3]: ar2=np.array([[0,3,5],[2,8,7]])  # 2D array
In [4]: ar1
Out[4]: array([0, 1, 2, 3])
In [5]: ar2
Out[5]: array([[0, 3, 5],
               [2, 8, 7]])
The shape of the array is given via ndarray.shape:
In [5]: ar2.shape
Out[5]: (2, 3)
The number of dimensions is obtained using ndarray.ndim:
In [7]: ar2.ndim
Out[7]: 2

NumPy arrays via numpy.arange
numpy.arange is the NumPy version of Python's range function:
In [10]: # produces the integers from 0 to 11, not inclusive of 12
         ar3=np.arange(12); ar3
Out[10]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
In [11]: # start, end (exclusive), step size
         ar4=np.arange(3,10,3); ar4
Out[11]: array([3, 6, 9])

NumPy arrays via numpy.linspace
numpy.linspace generates evenly spaced elements between the start and the end:
In [13]: # args - start element, end element, number of elements
         ar5=np.linspace(0,2.0/3,4); ar5
Out[13]: array([ 0.        ,  0.22222222,  0.44444444,  0.66666667])

NumPy arrays via various other functions
These functions include numpy.zeros, numpy.ones, numpy.eye, numpy.random.rand, numpy.random.randn, and numpy.empty. The argument must be a tuple in each case; for a 1D array, you can just specify the number of elements, with no need for a tuple. The following command line explains the numpy.ones function:
In [14]: # Produces a 2x3x2 array of 1's.
         ar7=np.ones((2,3,2)); ar7
Out[14]: array([[[ 1.,  1.],
                 [ 1.,  1.],
                 [ 1.,  1.]],
                [[ 1.,  1.],
                 [ 1.,  1.],
                 [ 1.,  1.]]])
The following command line explains the numpy.zeros function:
In [15]: # Produces a 4x2 array of zeros.
         ar8=np.zeros((4,2)); ar8
Out[15]: array([[ 0.,  0.],
                [ 0.,  0.],
                [ 0.,  0.],
                [ 0.,  0.]])
The following command line explains the numpy.eye function:
In [17]: # Produces an identity matrix
         ar9=np.eye(3); ar9
Out[17]: array([[ 1.,  0.,  0.],
                [ 0.,  1.,  0.],
                [ 0.,  0.,  1.]])
The following command line explains the numpy.diag function:
In [18]: # Create a diagonal array
         ar10=np.diag((2,1,4,6)); ar10
Out[18]: array([[2, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 4, 0],
                [0, 0, 0, 6]])
The following command lines explain the rand and randn functions:
In [19]: # rand(n) produces n uniformly distributed random numbers in the range 0 to 1
         np.random.seed(100)  # Set seed
         ar11=np.random.rand(3); ar11
Out[19]: array([ 0.54340494,  0.27836939,  0.42451759])
In [20]: # randn(n) produces n normally distributed (Gaussian) random numbers
         ar12=np.random.randn(5); ar12
Out[20]: array([ 0.35467445, -0.78606433, -0.2318722 , ...,  0.93580797])
Using np.empty to create an uninitialized array is a cheaper and faster way to allocate an array than using np.ones or np.zeros (malloc versus cmalloc). However, you should only use it if you're sure that all the elements will be initialized later:
In [21]: ar13=np.empty((3,2)); ar13
Out[21]: array([[ -2.68156159e+154,  ...],
                ...])
The values in the preceding output are arbitrary, since the memory is uninitialized. The np.tile function allows one to construct an array from a smaller array by repeating it several times on the basis of a parameter:
In [334]: np.array([[1,2],[6,7]])
Out[334]: array([[1, 2],
                 [6, 7]])
In [335]: np.tile(np.array([[1,2],[6,7]]),3)
Out[335]: array([[1, 2, 1, 2, 1, 2],
                 [6, 7, 6, 7, 6, 7]])
In [336]: np.tile(np.array([[1,2],[6,7]]),(2,2))
Out[336]: array([[1, 2, 1, 2],
                 [6, 7, 6, 7],
                 [1, 2, 1, 2],
                 [6, 7, 6, 7]])

NumPy datatypes
We can specify the type of contents of a numeric array by using the dtype parameter:
In [50]: ar=np.array([2,-1,6,3],dtype='float'); ar
Out[50]: array([ 2., -1.,  6.,  3.])
In [51]: ar.dtype
Out[51]: dtype('float64')
In [52]: ar=np.array([2,4,6,8]); ar.dtype
Out[52]: dtype('int64')
In [53]: ar=np.array([2.,4,6,8]); ar.dtype
Out[53]: dtype('float64')
The default dtype in NumPy is float. In the case of strings, dtype is the length of the longest string in the array:
In [56]: sar=np.array(['Goodbye','Welcome','Tata','Goodnight']); sar.dtype
Out[56]: dtype('S9')
You cannot create variable-length strings in NumPy, since NumPy needs to know how much space to allocate for the string. dtypes can also be Boolean values, complex numbers, and so on:
In [57]: bar=np.array([True, False, True]); bar.dtype
Out[57]: dtype('bool')
The datatype of an ndarray can be changed in much the same way as we cast in other languages such as Java or C/C++, for example, float to int and so on. The mechanism to do this is to use the numpy.ndarray.astype() function. Here is an example:
In [3]: f_ar = np.array([3,-2,8.18]); f_ar
Out[3]: array([ 3.  , -2.  ,  8.18])
In [4]: f_ar.astype(int)
Out[4]: array([ 3, -2,  8])
More information on casting can be found in the official documentation.

NumPy indexing and slicing
Array indices in NumPy start at 0, as in languages such as Python, Java, and C++, and unlike in Fortran, Matlab, and Octave, where they start at 1. Arrays can be indexed in the standard way, as we would index into any other Python sequence:
# print entire array, element 0, element 1, last element.
In [36]: ar = np.arange(5); print ar; ar[0], ar[1], ar[-1]
[0 1 2 3 4]
Out[36]: (0, 1, 4)
# 2nd, last and 1st elements
In [65]: ar=np.arange(5); ar[1], ar[-1], ar[0]
Out[65]: (1, 4, 0)
Arrays can be reversed using the ::-1 idiom, as follows:
In [24]: ar=np.arange(5); ar[::-1]
Out[24]: array([4, 3, 2, 1, 0])
Multi-dimensional arrays are indexed using tuples of integers:
In [71]: ar = np.array([[2,3,4],[9,8,7],[11,12,13]]); ar
Out[71]: array([[ 2,  3,  4],
                [ 9,  8,  7],
                [11, 12, 13]])
In [72]: ar[1,1]
Out[72]: 8
Here, we set the entry at row 1 and column 1 to 5:
In [75]: ar[1,1]=5; ar
Out[75]: array([[ 2,  3,  4],
                [ 9,  5,  7],
                [11, 12, 13]])
Retrieve row 2:
In [76]: ar[2]
Out[76]: array([11, 12, 13])
In [77]: ar[2,:]
Out[77]: array([11, 12, 13])
Retrieve column 1:
In [78]: ar[:,1]
Out[78]: array([ 3,  5, 12])
If an index is specified that is out of bounds of the range of an array, IndexError will be raised:
In [6]: ar = np.array([0,1,2])
In [7]: ar[5]
IndexError                         Traceback (most recent call last)
----> 1 ar[5]
IndexError: index 5 is out of bounds for axis 0 with size 3
Thus, for 2D arrays, the first dimension denotes rows and the second dimension denotes columns. The colon (:) denotes selection across all elements of the dimension.

Array slicing
Arrays can be sliced using the following syntax: ar[startIndex:endIndex:stepValue].
In [82]: ar=2*np.arange(6); ar
Out[82]: array([ 0,  2,  4,  6,  8, 10])
In [85]: ar[1:5:2]
Out[85]: array([2, 6])
Note that if we wish to include the endIndex value, we need to go above it, as follows:
In [86]: ar[1:6:2]
Out[86]: array([ 2,  6, 10])
Obtain the first n elements using ar[:n]:
In [91]: ar[:4]
Out[91]: array([0, 2, 4, 6])
The implicit assumption here is that startIndex=0, step=1. Start at element 4 and continue until the end:
In [92]: ar[4:]
Out[92]: array([ 8, 10])
Slice the array with stepValue=3:
In [94]: ar[::3]
Out[94]: array([0, 6])
To illustrate the scope of indexing in NumPy, let us refer to an illustration taken from a NumPy lecture given at SciPy 2013, which can be found at http://bit.ly/1GxCDpC. The illustration shows a 6x6 array a, in which the value of each element is its row number followed by its column number (so row 4 contains 40, 41, ..., 45). Let us now examine the meanings of the expressions in that illustration:
• The expression a[0,3:5] indicates the start at row 0, and columns 3-5, where column 5 is not included.
• In the expression a[4:,4:], the first 4: indicates the start at row 4 and gives all rows from there, that is, the array [[40, 41, 42, 43, 44, 45], [50, 51, 52, 53, 54, 55]]. The second 4: cuts off at the start of column 4 to produce the array [[44, 45], [54, 55]].
• The expression a[:,2] gives all rows from column 2.
• Now, in the last expression a[2::2,::2], 2::2 indicates that the start is at row 2 and the step value here is also 2. This gives us the array [[20, 21, 22, 23, 24, 25], [40, 41, 42, 43, 44, 45]]. Further, ::2 specifies that we retrieve columns in steps of 2, producing the end result array [[20, 22, 24], [40, 42, 44]].
Assignment and slicing can be combined, as shown in the following code snippet:
In [96]: ar
Out[96]: array([ 0,  2,  4,  6,  8, 10])
In [100]: ar[:3]=1; ar
Out[100]: array([ 1,  1,  1,  6,  8, 10])
In [110]: ar[2:]=np.ones(4); ar
Out[110]: array([1, 1, 1, 1, 1, 1])

Array masking
Here, NumPy arrays can be used as masks to select or filter out elements of the original array.
For example, see the following snippet:
In [146]: np.random.seed(10)
          ar=np.random.random_integers(0,25,10); ar
Out[146]: array([ 9,  4, 15,  0, 17, 25, 16, 17,  8,  9])
In [147]: evenMask=(ar % 2==0); evenMask
Out[147]: array([False,  True, False,  True, False, False,  True, False,
                  True, False], dtype=bool)
In [148]: evenNums=ar[evenMask]; evenNums
Out[148]: array([ 4,  0, 16,  8])
In the preceding example, we randomly generate an array of 10 integers between 0 and 25. Then, we create a Boolean mask array that is used to filter out only the even numbers. This masking feature can be very useful, say, for example, if we wished to eliminate missing values by replacing them with a default value. Here, the missing value '' is replaced by 'USA' as the default country. Note that '' is also an empty string:
In [149]: ar=np.array(['Hungary','Nigeria','Guatemala','','Poland','','Japan']); ar
Out[149]: array(['Hungary', 'Nigeria', 'Guatemala', '', 'Poland', '', 'Japan'], dtype='|S9')
In [150]: ar[ar=='']='USA'; ar
Out[150]: array(['Hungary', 'Nigeria', 'Guatemala', 'USA', 'Poland', 'USA', 'Japan'], dtype='|S9')
Arrays of integers can also be used to index an array to produce another array. Note that this produces multiple values; hence, the output must be an array of type ndarray. This is illustrated in the following snippet:
In [173]: ar=11*np.arange(0,10); ar
Out[173]: array([ 0, 11, 22, 33, 44, 55, 66, 77, 88, 99])
In [174]: ar[[1,3,4,2,7]]
Out[174]: array([11, 33, 44, 22, 77])
In the preceding code, the selection object is a list, and elements at indices 1, 3, 4, 2, and 7 are selected. Now, assume that we change it to the following:
In [175]: ar[1,3,4,2,7]
Modifying a view modifies the original array: In [118]:ar1=np.arange(12); ar1 Out[118]:array([ 0, 9, 10, 11]) In [119]:ar2=ar1[::2]; ar2 Out[119]: array([ 0, 8, 10]) 1, -1, In [120]: ar2[1]=-1; ar1 Out[120]: array([ 0, 9, 10, 11]) To force NumPy to copy an array, we use the np.copy function. As we can see in the following array, the original array remains unaffected when the copied array is modified: In [124]: ar=np.arange(8); ar Out[124]: array([0, 1, 2, 3, 4, 5, 6, 7]) In [126]: arc=ar[:3].copy(); arc Out[126]: array([0, 1, 2]) In [127]: arc[0]=-1; arc Out[127]: array([-1, In [128]: ar Out[128]: array([0, 1, 2, 3, 4, 5, 6, 7]) Here, we present various operations in NumPy. [ 40 ] Chapter 3 Basic operations Basic arithmetic operations work element-wise with scalar operands. They are - +, -, *, /, and **. In [196]: ar=np.arange(0,7)*5; ar Out[196]: array([ 0, 5, 10, 15, 20, 25, 30]) In [198]: ar=np.arange(5) ** 4 ; ar Out[198]: array([ 81, 256]) In [199]: ar ** 0.5 Out[199]: array([ Operations also work element-wise when another array is the second operand as follows: In [209]: ar=3+np.arange(0, 30,3); ar Out[209]: array([ 3, 9, 12, 15, 18, 21, 24, 27, 30]) In [210]: ar2=np.arange(1,11); ar2 Out[210]: array([ 1, 9, 10]) Here, in the following snippet, we see element-wise subtraction, division, and multiplication: In [211]: ar-ar2 Out[211]: array([ 2, 8, 10, 12, 14, 16, 18, 20]) In [212]: ar/ar2 Out[212]: array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]) In [213]: ar*ar2 Out[213]: array([ 75, 108, 147, 192, 243, 300]) It is much faster to do this using NumPy rather than pure Python. The %timeit function in IPython is known as a magic function and uses the Python timeit module to time the execution of a Python statement or expression, explained as follows: In [214]: ar=np.arange(1000) %timeit ar**3 [ 41 ] The pandas Data Structures 100000 loops, best of 3: 5.4 µs per loop In [215]:ar=range(1000) %timeit [ar[i]**3 for i in ar] 1000 loops, best of 3: 199 µs per loop Array multiplication is not the same as matrix multiplication; it is element-wise, meaning that the corresponding elements are multiplied together. For matrix multiplication, use the dot operator. For more information refer to http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html. 
In [228]: ar=np.array([[1,1],[1,1]]); ar Out[228]: array([[1, 1], [1, 1]]) In [230]: ar2=np.array([[2,2],[2,2]]); ar2 Out[230]: array([[2, 2], [2, 2]]) In [232]: ar.dot(ar2) Out[232]: array([[4, 4], [4, 4]]) Comparisons and logical operations are also element-wise: In [235]: ar=np.arange(1,5); ar Out[235]: array([1, 2, 3, 4]) In [238]: ar2=np.arange(5,1,-1);ar2 Out[238]: array([5, 4, 3, 2]) In [241]: ar < ar2 Out[241]: array([ True, True, False, False], dtype=bool) In [242]: l1 = np.array([True,False,True,False]) l2 = np.array([False,False,True, False]) np.logical_and(l1,l2) Out[242]: array([False, False, True, False], dtype=bool) [ 42 ] Chapter 3 Other NumPy operations such as log, sin, cos, and exp are also element-wise: In [244]: ar=np.array([np.pi, np.pi/2]); np.sin(ar) Out[244]: array([ Note that for element-wise operations on two NumPy arrays, the two arrays must have the same shape, else an error will result since the arguments of the operation must be the corresponding elements in the two arrays: In [245]: ar=np.arange(0,6); ar Out[245]: array([0, 1, 2, 3, 4, 5]) In [246]: ar2=np.arange(0,8); ar2 Out[246]: array([0, 1, 2, 3, 4, 5, 6, 7]) In [247]: ar*ar2 -------------------------------------------------------------------------ValueError recent call last) Traceback (most in () ----> 1 ar*ar2 ValueError: operands could not be broadcast together with shapes (6) (8) Further, NumPy arrays can be transposed as follows: In [249]: ar=np.array([[1,2,3],[4,5,6]]); ar Out[249]: array([[1, 2, 3], [4, 5, 6]]) In [250]:ar.T Out[250]:array([[1, 4], [2, 5], [3, 6]]) In [251]: np.transpose(ar) Out[251]: array([[1, 4], [2, 5], [3, 6]]) [ 43 ] The pandas Data Structures Suppose we wish to compare arrays not element-wise, but array-wise. We could achieve this as follows by using the np.array_equal operator: In [254]: ar=np.arange(0,6) ar2=np.array([0,1,2,3,4,5]) np.array_equal(ar, ar2) Out[254]: True Here, we see that a single Boolean value is returned instead of a Boolean array. The value is True only if all the corresponding elements in the two arrays match. The preceding expression is equivalent to the following: In [24]: np.all(ar==ar2) Out[24]: True Reduction operations Operators such as np.sum and np.prod perform reduces on arrays; that is, they combine several elements into a single value: In [257]: ar=np.arange(1,5) ar.prod() Out[257]: 24 In the case of multi-dimensional arrays, we can specify whether we want the reduction operator to be applied row-wise or column-wise by using the axis parameter: In [259]: ar=np.array([np.arange (1,6),np.arange(1,6)]);ar Out[259]: array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]) # Columns In [261]: np.prod(ar,axis=0) Out[261]: array([ 1, 9, 16, 25]) # Rows In [262]: np.prod(ar,axis=1) Out[262]: array([120, 120]) In the case of multi-dimensional arrays, not specifying an axis results in the operation being applied to all elements of the array as explained in the following example: In [268]: ar=np.array ([[2,3,4],[5,6,7],[8,9,10]]); ar.sum() [ 44 ] Chapter 3 Out[268]: 54 In [269]: ar.mean() Out[269]: 6.0 In [271]: np.median(ar) Out[271]: 6.0 Statistical operators These operators are used to apply standard statistical operations to a NumPy array. The names are self-explanatory: np.std(), np.mean(), np.median(), and np.cumsum(). 
In [309]: np.random.seed(10)
          ar=np.random.randint(0,10, size=(4,5)); ar
Out[309]: array([[9, 4, 0, 1, 9],
                 [0, 1, 8, 9, 0],
                 [8, 6, 4, 3, 0],
                 [4, 6, 8, 1, 8]])
In [310]: ar.mean()
Out[310]: 4.4500000000000002
In [311]: ar.std()
Out[311]: 3.4274626183227732
In [312]: ar.var(axis=0)  # across rows
Out[312]: array([ 12.6875,   4.1875,  11.    ,  10.75  ,  18.3125])
In [313]: ar.cumsum()
Out[313]: array([ 9, 13, 13, 14, 23, 23, 24, 32, 41, 41, 49, 55, 59, 62, 62, 66, 72, 80, 81, 89])

Logical operators
Logical operators can be used for array comparison/checking. They are as follows:
• np.all(): This is used for an element-wise AND of all of the elements
• np.any(): This is used for an element-wise OR of all of the elements
Generate a random 4 × 4 array of ints, and check if any element is divisible by 7 and if all elements are less than 11:
In [320]: np.random.seed(100)
          ar=np.random.randint(1,10, size=(4,4)); ar
Out[320]: array([[9, 9, 4, 8],
                 [8, 1, 5, 3],
                 [6, 3, 3, 3],
                 [2, 1, 9, 5]])
In [318]: np.any((ar%7)==0)
Out[318]: False
In [319]: np.all(ar<11)
Out[319]: True

Resizing
If an array is referenced by another array (for example, by a view), resizing it in place with ndarray.resize raises an error:
In [37]: ar.resize((8,))
ValueError                         Traceback (most recent call last)
----> 1 ar.resize((8,))
ValueError: cannot resize an array that references or is referenced
by another array in this way.  Use the resize function
The way around this is to use the numpy.resize function instead:
In [38]: np.resize(ar,(8,))
Out[38]: array([0, 1, 2, 3, 4, 0, 1, 2])

Adding a dimension
The np.newaxis function adds an additional dimension to an array:
In [377]: ar=np.array([14,15,16]); ar.shape
Out[377]: (3,)
In [378]: ar
Out[378]: array([14, 15, 16])
In [379]: ar=ar[:, np.newaxis]; ar.shape
Out[379]: (3, 1)
In [380]: ar
Out[380]: array([[14],
                 [15],
                 [16]])

Array sorting
Arrays can be sorted in various ways.
1. Sort the array along an axis; first, let us sort along the last axis (across each row):
In [43]: ar=np.array([[3,2],[10,-1]]); ar
Out[43]: array([[ 3,  2],
                [10, -1]])
In [44]: ar.sort(axis=1); ar
Out[44]: array([[ 2,  3],
                [-1, 10]])
2. Here, we sort along the first axis (down each column):
In [45]: ar=np.array([[3,2],[10,-1]]); ar
Out[45]: array([[ 3,  2],
                [10, -1]])
In [46]: ar.sort(axis=0); ar
Out[46]: array([[ 3, -1],
                [10,  2]])
3. Sorting can be done in-place (np.array.sort) or out-of-place (np.sort).
4. Other operations that are available for array analysis include the following:
• np.min(): It returns the minimum element of the array
• np.max(): It returns the maximum element of the array
• np.std(): It returns the standard deviation of the elements of the array
• np.var(): It returns the variance of the elements of the array
• np.argmin(): It returns the index of the minimum
• np.argmax(): It returns the index of the maximum
• np.all(): It returns the element-wise AND of all of the elements
• np.any(): It returns the element-wise OR of all of the elements

Data structures in pandas
pandas was created by Wes McKinney in 2008 as a result of frustrations he encountered while working on time series data in R. It is built on top of NumPy and provides features not available in it. It provides fast, easy-to-understand data structures and helps fill the gap between Python and a language such as R. A key reference for the various operations I demonstrate here is the official pandas data structure documentation: http://pandas.pydata.org/pandas-docs/dev/dsintro.html.
There are three main data structures in pandas:
• Series
• DataFrame
• Panel

Series
Series is really a 1D NumPy array under the hood. It consists of a NumPy array coupled with an array of labels.
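As a minimal sketch of this anatomy (the data values here are illustrative):
In [1]: import pandas as pd
        ser=pd.Series([10,20,30], index=['a','b','c'])
In [2]: ser.values  # the underlying NumPy array of data
Out[2]: array([10, 20, 30])
In [3]: ser.index   # the accompanying array of labels
Out[3]: Index([u'a', u'b', u'c'], dtype=object)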
Series creation
The general construct for creating a Series data structure is as follows:
import pandas as pd
ser=pd.Series(data, index=idx)
where data can be one of the following:
• An ndarray
• A Python dictionary
• A scalar value

Using numpy.ndarray
In this case, the index must be the same length as the data. If an index is not specified, the default index [0,... n-1] will be created, where n is the length of the data. The following example creates a Series structure of seven random numbers between 0 and 1; the index is not specified:
In [466]: import numpy as np
          np.random.seed(100)
          ser=pd.Series(np.random.rand(7)); ser
Out[466]: 0    0.543405
          1    0.278369
          2    0.424518
          3    0.844776
          4    0.004719
          5    0.121569
          6    0.670749
          dtype: float64
The following example creates a Series structure of the first 5 months of the year with a specified index of month names:
In [481]: import calendar as cal
          monthNames=[cal.month_name[i] for i in np.arange(1,6)]
          months=pd.Series(np.arange(1,6),index=monthNames); months
Out[481]: January     1
          February    2
          March       3
          April       4
          May         5
          dtype: int64
In [482]: months.index
Out[482]: Index([u'January', u'February', u'March', u'April', u'May'], dtype=object)

Using a Python dictionary
If the data is a dictionary and an index is provided, the labels will be constructed from it; else, the keys of the dictionary will be used for the labels. The values of the dictionary are used to populate the Series structure:
In [486]: currDict={'US' : 'dollar', 'UK' : 'pound',
                    'Germany': 'euro', 'Mexico':'peso',
                    'Nigeria':'naira',
                    'China':'yuan', 'Japan':'yen'}
          currSeries=pd.Series(currDict); currSeries
Out[486]: China        yuan
          Germany      euro
          Japan         yen
          Mexico       peso
          Nigeria     naira
          UK          pound
          US         dollar
          dtype: object
The index of a pandas Series structure is of type pandas.core.index.Index and can be viewed as an ordered multiset.
In the following case, we specify an index, but the index contains one entry that isn't a key in the corresponding dict. The result is that the value for that key is assigned NaN, indicating that it is missing. We will deal with handling missing values in a later section.
In [488]: stockPrices = {'GOOG':1180.97,'FB':62.57,
                         'TWTR': 64.50, 'AMZN':358.69,
                         'AAPL':500.6}
          stockPriceSeries=pd.Series(stockPrices,
                                     index=['GOOG','FB','YHOO','TWTR','AMZN','AAPL'],
                                     name='stockPrices')
          stockPriceSeries
Out[488]: GOOG    1180.97
          FB        62.57
          YHOO        NaN
          TWTR      64.50
          AMZN     358.69
          AAPL     500.60
          Name: stockPrices, dtype: float64
Note that a Series also has a name attribute that can be set, as shown in the preceding snippet. The name attribute is useful in tasks such as combining Series objects into a DataFrame structure.

Using scalar values
For scalar data, an index must be provided. The value will be repeated for as many index values as possible. One possible use of this method is to provide a quick and dirty method of initialization, with the Series structure to be filled in later. Let us see how to create a Series using scalar values:
In [491]: dogSeries=pd.Series('chihuahua',
                              index=['breed','countryOfOrigin','name','gender'])
          dogSeries
Out[491]: breed              chihuahua
          countryOfOrigin    chihuahua
          name               chihuahua
          gender             chihuahua
          dtype: object
Failure to provide an index just results in a scalar value being returned, as follows:
In [494]: dogSeries=pd.Series('pekingese'); dogSeries
Out[494]: 'pekingese'
In [495]: type(dogSeries)
Out[495]: str

Operations on Series
The behavior of Series is very similar to that of the numpy arrays discussed in a previous section, with one caveat being that an operation such as slicing also slices the index.
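As a quick illustration of this caveat, slicing the months Series created earlier returns both the values and their labels:
In [1]: months[:3]  # the index labels are sliced along with the data
Out[1]: January     1
        February    2
        March       3
        dtype: int64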
Values can be set and accessed using the index label in a dictionary-like manner:

In [503]: currDict['China']
Out[503]: 'yuan'

In [505]: stockPriceSeries['GOOG']=1200.0
          stockPriceSeries
Out[505]: GOOG    1200.00
          FB        62.57
          YHOO        NaN
          TWTR      64.50
          AMZN     358.69
          AAPL     500.60
          Name: stockPrices, dtype: float64

Just as in the case of dict, KeyError is raised if you try to retrieve a missing label:

In [506]: stockPriceSeries['MSFT']
KeyError: 'MSFT'

This error can be avoided by explicitly using get as follows:

In [507]: stockPriceSeries.get('MSFT',np.NaN)
Out[507]: nan

In this case, the default value of np.NaN is specified as the value to return when the key does not exist in the Series structure.

The slice operation behaves the same way as for a NumPy array:

In [498]: stockPriceSeries[:4]
Out[498]: GOOG    1180.97
          FB        62.57
          YHOO        NaN
          TWTR      64.50
          Name: stockPrices, dtype: float64

Logical slicing also works as follows:

In [500]: stockPriceSeries[stockPriceSeries > 100]
Out[500]: GOOG    1180.97
          AMZN     358.69
          AAPL     500.60
          Name: stockPrices, dtype: float64

Other operations
Arithmetic and statistical operations can be applied, just as with a NumPy array:

In [501]: np.mean(stockPriceSeries)
Out[501]: 433.46600000000001

In [502]: np.std(stockPriceSeries)

Element-wise operations can also be performed on a Series:

In [506]: ser
Out[506]: 0    0.543405
          ...
          6    0.670749
          dtype: float64

In [508]: ser*ser
Out[508]: 0    0.295289
          ...
          6    0.449904
          dtype: float64

In [510]: np.sqrt(ser)
Out[510]: 0    0.737160
          ...
          6    0.818993
          dtype: float64

An important feature of Series is that the data is automatically aligned on the basis of the label:

In [514]: ser[1:]
Out[514]: 1    0.278369
          2    0.424518
          3    0.844776
          4    0.004719
          5    0.121569
          6    0.670749
          dtype: float64

In [516]: ser[1:] + ser[:-2]
Out[516]: 0         NaN
          1    0.556739
          2    0.849035
          3    1.689552
          4    0.009438
          5         NaN
          6         NaN
          dtype: float64

Thus, we can see that NaN is inserted for non-matching labels. The default behavior is that the union of the indexes is produced for unaligned Series structures. This is preferable, as information is preserved rather than lost. We will handle missing values in pandas in a later chapter of the book.

DataFrame
DataFrame is a 2-dimensional labeled array. Its column types can be heterogeneous: that is, of varying types. It is similar to structured arrays in NumPy, with mutability added. It has the following properties:

• Conceptually analogous to a table or spreadsheet of data.
• Similar to a NumPy ndarray, but not a subclass of np.ndarray.
• Columns can be of heterogeneous types: float64, int, bool, and so on.
• A DataFrame column is a Series structure.
• It can be thought of as a dictionary of Series structures where both the columns and the rows are indexed, denoted as 'index' in the case of rows and 'columns' in the case of columns.
• It is size mutable: columns can be inserted and deleted.

Every axis in a Series/DataFrame has an index, whether default or not. Indexes are needed for fast lookups as well as for the proper aligning and joining of data in pandas. The axes can also be named, for example, 'month' for an array of columns Jan, Feb, Mar, ..., Dec.

[Figure: a representation of an indexed DataFrame with named columns nums, strs, bools, and decs across the top, and an index column of the characters V, W, X, Y, and Z.]

DataFrame Creation
DataFrame is the most commonly used data structure in pandas. The constructor accepts many different types of arguments:

• Dictionary of 1D ndarrays, lists, dictionaries, or Series structures
• 2D NumPy array
• Structured or record ndarray
• Series structures
• Another DataFrame structure

Row label indexes and column labels can be specified along with the data. If they're not specified, they will be generated from the input data in an intuitive fashion: column labels from the keys of the dict, and row labels by using np.arange(n), where n corresponds to the number of rows.
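As a minimal sketch of the heterogeneous-column property (the data is made up), each column carries its own dtype, which can be inspected via the dtypes attribute:

df = pd.DataFrame({'nums': [1.5, 2.5],
                   'strs': ['a', 'b'],
                   'bools': [True, False]})
df.dtypes   # nums: float64, strs: object, bools: bool -- one dtype per column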
Using dictionaries of Series
Here, we create a DataFrame structure by using a dictionary of Series objects.

In [97]: stockSummaries={
         'AMZN': pd.Series([346.15,0.59,459,0.52,589.8,158.88],
                  index=['Closing price','EPS','Shares Outstanding(M)',
                         'Beta','P/E','Market Cap(B)']),
         'GOOG': pd.Series([1133.43,36.05,335.83,0.87,31.44,380.64],
                  index=['Closing price','EPS','Shares Outstanding(M)',
                         'Beta','P/E','Market Cap(B)']),
         'FB': pd.Series([61.48,0.59,2450,104.93,150.92],
                index=['Closing price','EPS','Shares Outstanding(M)',
                       'P/E','Market Cap(B)']),
         'YHOO': pd.Series([34.90,1.27,1010,27.48,0.66,35.36],
                  index=['Closing price','EPS','Shares Outstanding(M)',
                         'P/E','Beta','Market Cap(B)']),
         'TWTR': pd.Series([65.25,-0.3,555.2,36.23],
                  index=['Closing price','EPS','Shares Outstanding(M)',
                         'Market Cap(B)']),
         'AAPL': pd.Series([501.53,40.32,892.45,12.44,447.59,0.84],
                  index=['Closing price','EPS','Shares Outstanding(M)',
                         'P/E','Market Cap(B)','Beta'])}

In [99]: stockDF=pd.DataFrame(stockSummaries); stockDF

The resulting DataFrame has one column per dict key (AAPL through YHOO, in sorted order) and a row index that is the union of the individual Series indexes; entries with no corresponding value, such as Beta for FB and TWTR, are filled with NaN.

In [100]: stockDF=pd.DataFrame(stockSummaries,
                               index=['Closing price','EPS',
                                      'Shares Outstanding(M)',
                                      'P/E','Market Cap(B)','Beta'])
          stockDF
Out[100]:                        AAPL    AMZN    FB       GOOG     TWTR    YHOO
          Closing price          501.53  346.15   61.48   1133.43   65.25    34.90
          EPS                     40.32    0.59    0.59     36.05   -0.30     1.27
          Shares Outstanding(M)  892.45  459.00  2450.00   335.83  555.20  1010.00
          P/E                     12.44  589.80   104.93    31.44     NaN    27.48
          Market Cap(B)          447.59  158.88   150.92   380.64   36.23    35.36
          Beta                     0.84    0.52      NaN      0.87    NaN     0.66

Here, the rows appear in the order given by the index argument rather than in sorted order.

In [102]: stockDF=pd.DataFrame(stockSummaries,
                               index=['Closing price','EPS',
                                      'Shares Outstanding(M)',
                                      'P/E','Market Cap(B)','Beta'],
                               columns=['FB','TWTR','SCNW'])
          stockDF

Since SCNW is not a key of stockSummaries, the SCNW column consists entirely of NaN values.

The row index labels and column labels can be accessed via the index and columns attributes:

In [527]: stockDF.index
Out[527]: Index([u'Closing price', u'EPS', u'Shares Outstanding(M)',
                 u'P/E', u'Market Cap(B)', u'Beta'], dtype=object)

In [528]: stockDF.columns
Out[528]: Index([u'AAPL', u'AMZN', u'FB', u'GOOG', u'TWTR', u'YHOO'],
                dtype=object)

The source for the preceding data is Google Finance, accessed on 2/3/2014: http://finance.google.com.

Using a dictionary of ndarrays/lists
Here, we create a DataFrame structure from a dictionary of lists. The keys become the column labels in the DataFrame structure and the data in each list becomes the column values. Note how the row label indexes are generated using np.arange(n).

In [529]: algos={'search':['DFS','BFS','Binary Search',
                           'Linear','ShortestPath (Djikstra)'],
                 'sorting': ['Quicksort','Mergesort','Heapsort',
                             'Bubble Sort','Insertion Sort'],
                 'machine learning':['RandomForest','K Nearest Neighbor',
                                     'Logistic Regression',
                                     'K-Means Clustering',
                                     'Linear Regression']}
          algoDF=pd.DataFrame(algos); algoDF
Out[529]:    machine learning     search                    sorting
          0  RandomForest         DFS                       Quicksort
          1  K Nearest Neighbor   BFS                       Mergesort
          2  Logistic Regression  Binary Search             Heapsort
          3  K-Means Clustering   Linear                    Bubble Sort
          4  Linear Regression    ShortestPath (Djikstra)   Insertion Sort

In [530]: pd.DataFrame(algos,index=['algo_1','algo_2','algo_3',
                                    'algo_4','algo_5'])
Out[530]:         machine learning     search                    sorting
          algo_1  RandomForest         DFS                       Quicksort
          algo_2  K Nearest Neighbor   BFS                       Mergesort
          algo_3  Logistic Regression  Binary Search             Heapsort
          algo_4  K-Means Clustering   Linear                    Bubble Sort
          algo_5  Linear Regression    ShortestPath (Djikstra)   Insertion Sort

Using a structured array
In this case, we use a structured array, which is an array of records or structs. For more information on structured arrays, refer to the following: http://docs.scipy.org/doc/numpy/user/basics.rec.html.
In [533]: memberData = np.zeros((4,),
                                dtype=[('Name','a15'), ('Age','i4'), ('Weight','f4')])
          memberData[:] = [('Sanjeev',37,162.4),
                           ('Yingluck',45,137.8),
                           ('Emeka',28,153.2),
                           ('Amy',67,101.3)]
          memberDF=pd.DataFrame(memberData); memberDF
Out[533]:    Name      Age  Weight
          0  Sanjeev   37   162.4
          1  Yingluck  45   137.8
          2  Emeka     28   153.2
          3  Amy       67   101.3

In [534]: pd.DataFrame(memberData, index=['a','b','c','d'])
Out[534]:    Name      Age  Weight
          a  Sanjeev   37   162.4
          b  Yingluck  45   137.8
          c  Emeka     28   153.2
          d  Amy       67   101.3

Using a Series structure
Here, we show how to construct a DataFrame structure from a Series structure:

In [540]: currSeries.name='currency'
          pd.DataFrame(currSeries)
Out[540]:          currency
          China    yuan
          Germany  euro
          Japan    yen
          Mexico   peso
          Nigeria  naira
          UK       pound
          US       dollar

There are also alternative constructors for DataFrame; they can be summarized as follows:

• DataFrame.from_dict: It takes a dictionary of dictionaries or sequences and returns a DataFrame.
• DataFrame.from_records: It takes a list of tuples or a structured ndarray.
• DataFrame.from_items: It takes a sequence of (key, value) pairs. The keys are the column or index names, and the values are the column or row values. If you wish the keys to be row index names, you must specify orient='index' as a parameter and specify the column names.
• pandas.io.parsers.read_csv: This is a helper function that reads a CSV file into a pandas DataFrame structure.
• pandas.io.parsers.read_table: This is a helper function that reads a delimited file into a pandas DataFrame structure.
• pandas.io.parsers.read_fwf: This is a helper function that reads a table of fixed-width lines into a pandas DataFrame structure.

DataFrame operations
Here, I will briefly describe the various DataFrame operations.

A specific column can be obtained as a Series structure:

In [543]: memberDF['Name']
Out[543]: 0     Sanjeev
          1    Yingluck
          2       Emeka
          3         Amy
          Name: Name, dtype: object

A new column can be added via assignment, as follows:

In [545]: memberDF['Height']=60; memberDF

A column can be deleted, as you would delete a key in the case of dict:

In [546]: del memberDF['Height']; memberDF
Out[546]:    Name      Age  Weight
          0  Sanjeev   37   162.4
          1  Yingluck  45   137.8
          2  Emeka     28   153.2
          3  Amy       67   101.3

It can also be popped, as with a dictionary:

In [547]: memberDF['BloodType']='O'
          bloodType=memberDF.pop('BloodType'); bloodType
Out[547]: 0    O
          1    O
          2    O
          3    O
          Name: BloodType, dtype: object

Basically, a DataFrame structure can be treated as if it were a dictionary of Series objects. Columns get inserted at the end; to insert a column at a specific location, you can use the insert function:

In [552]: memberDF.insert(2,'isSenior',memberDF['Age']>60); memberDF
Out[552]:    Name      Age  isSenior  Weight
          0  Sanjeev   37   False     162.4
          1  Yingluck  45   False     137.8
          2  Emeka     28   False     153.2
          3  Amy       67   True      101.3

Alignment
DataFrame objects align in a manner similar to Series objects, except that they align on both column and index labels. The resulting object is the union of the column and row labels:

In [559]: ore1DF=pd.DataFrame(np.array([[20,35,25,20],
                                        [11,28,32,29]]),
                              columns=['iron','magnesium','copper','silver'])
          ore2DF=pd.DataFrame(np.array([[14,34,26,26],
                                        [33,19,25,23]]),
                              columns=['iron','magnesium','gold','silver'])
          ore1DF+ore2DF
Out[559]:    copper  gold  iron  magnesium  silver
          0  NaN     NaN   34    69         46
          1  NaN     NaN   44    47         52

In the case where there are no row labels or column labels in common, the value is filled with NaN, for example, copper and gold.

If you combine a DataFrame object and a Series object, the default behavior is to broadcast the Series object across the rows:

In [562]: ore1DF + pd.Series([25,25,25,25],
                             index=['iron','magnesium','copper','silver'])
Out[562]:    iron  magnesium  copper  silver
          0  45    60         50      45
          1  36    53         57      54

Other mathematical operations
Mathematical operators can be applied element-wise on DataFrame structures:

In [565]: np.sqrt(ore1DF)
Out[565]:    iron      magnesium  copper    silver
          0  4.472136  5.916080   5.000000  4.472136
          1  3.316625  5.291503   5.656854  5.385165

Panel
Panel is a 3D array. It is not as widely used as Series or DataFrame, and it is not as easily displayed on screen or visualized as the other two because of its 3D nature.
The Panel data structure is the final piece of the data structure jigsaw puzzle in pandas. It is less widely used and is intended for 3D data. The three axis names are as follows:

• items: This is axis 0. Each item corresponds to a DataFrame structure.
• major_axis: This is axis 1. Each item corresponds to the rows of the DataFrame structure.
• minor_axis: This is axis 2. Each item corresponds to the columns of each DataFrame structure.

As with Series and DataFrame, there are different ways to create Panel objects. They are explained in the upcoming sections.

Using a 3D NumPy array with axis labels
Here, we show how to construct a Panel object from a 3D NumPy array:

In [586]: stockData=np.array([[[63.03,61.48,75],
                               [62.05,62.75,46],
                               [62.74,62.19,53]],
                              [[411.90, 404.38, 2.9],
                               [405.45, 405.91, 2.6],
                               [403.15, 404.42, 2.4]]])
          stockData
Out[586]: array([[[  63.03,   61.48,   75.  ],
                  [  62.05,   62.75,   46.  ],
                  [  62.74,   62.19,   53.  ]],
                 [[ 411.9 ,  404.38,    2.9 ],
                  [ 405.45,  405.91,    2.6 ],
                  [ 403.15,  404.42,    2.4 ]]])

In [587]: stockHistoricalPrices = pd.Panel(stockData,
                     items=['FB', 'NFLX'],
                     major_axis=pd.date_range('2/3/2014', periods=3),
                     minor_axis=['open price', 'closing price', 'volume'])
          stockHistoricalPrices
Out[587]: <class 'pandas.core.panel.Panel'>
          Dimensions: 2 (items) x 3 (major_axis) x 3 (minor_axis)
          Items axis: FB to NFLX
          Major_axis axis: 2014-02-03 00:00:00 to 2014-02-05 00:00:00
          Minor_axis axis: open price to volume

Using a Python dictionary of DataFrame objects
We construct a Panel structure by using a Python dictionary of DataFrame structures:

In [591]: USData=pd.DataFrame(np.array([[249.62, 8900],
                                        [282.16, 12680],
                                        [309.35, 14940]]),
                              columns=['Population(M)','GDP($B)'],
                              index=[1990,2000,2010])
          USData
Out[591]:       Population(M)  GDP($B)
          1990  249.62          8900
          2000  282.16         12680
          2010  309.35         14940

In [590]: ChinaData=pd.DataFrame(np.array([[1133.68, 390.28],
                                           [1266.83, 1198.48],
                                           [1339.72, 6988.47]]),
                                 columns=['Population(M)','GDP($B)'],
                                 index=[1990,2000,2010])
          ChinaData
Out[590]:       Population(M)  GDP($B)
          1990  1133.68         390.28
          2000  1266.83        1198.48
          2010  1339.72        6988.47

In [592]: US_ChinaData={'US' : USData, 'China': ChinaData}
          pd.Panel(US_ChinaData)
Out[592]: <class 'pandas.core.panel.Panel'>
          Dimensions: 2 (items) x 3 (major_axis) x 2 (minor_axis)
          Items axis: China to US
          Major_axis axis: 1990 to 2010
          Minor_axis axis: GDP($B) to Population(M)

Using the DataFrame.to_panel method
This method converts a DataFrame structure having a MultiIndex to a Panel structure:

In [617]: mIdx = pd.MultiIndex(levels=[['US', 'China'], [1990,2000,2010]],
                               labels=[[1,1,1,0,0,0],[0,1,2,0,1,2]])
          mIdx
Out[617]: MultiIndex
          [(u'China', 1990), (u'China', 2000), (u'China', 2010),
           (u'US', 1990), (u'US', 2000), (u'US', 2010)]

In [618]: ChinaUSDF = pd.DataFrame({'Population(M)' : [1133.68, 1266.83,
                                                       1339.72, 249.62,
                                                       282.16, 309.35],
                                    'GDB($B)': [390.28, 1198.48, 6988.47,
                                                8900, 12680, 14940]},
                                   index=mIdx)
          ChinaUSDF
Out[618]:               GDB($B)   Population(M)
          China 1990     390.28   1133.68
                2000    1198.48   1266.83
                2010    6988.47   1339.72
          US    1990    8900.00    249.62
                2000   12680.00    282.16
                2010   14940.00    309.35

In [622]: ChinaUSDF.to_panel()
Out[622]: <class 'pandas.core.panel.Panel'>
          Dimensions: 2 (items) x 2 (major_axis) x 3 (minor_axis)
          Items axis: GDB($B) to Population(M)
          Major_axis axis: US to China
          Minor_axis axis: 1990 to 2010

The sources of the US/China economic data are the following sites:

• http://www.multpl.com/us-gdp-inflation-adjusted/table
• http://www.multpl.com/united-states-population/table
• http://en.wikipedia.org/wiki/Demographics_of_China
• http://www.theguardian.com/news/datablog/2012/mar/23/china-gdpsince-1980

Other operations
Insertion, deletion, and item-wise operations behave the same as in the case of DataFrame. Panel structures can be re-arranged via transpose.
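A minimal sketch of re-arrangement via transpose, using the book-era API (the Panel class was removed in pandas 0.25, so treat this as illustrative only):

# pd.Panel existed in the pandas version this book targets
pn = pd.Panel(np.random.randn(2, 3, 4))
pn2 = pn.transpose(2, 0, 1)   # the old minor_axis (length 4) becomes the items axis
pn2.shape                     # (4, 2, 3)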
The feature set of Panel operations is relatively underdeveloped and not as rich as that of Series and DataFrame.

Summary
To summarize this chapter, numpy.ndarray is the bedrock data structure on which the pandas data structures are based. The pandas data structures at their heart consist of a NumPy ndarray of data and an array, or arrays, of labels. There are three main data structures in pandas: Series, DataFrame, and Panel. The pandas data structures are much easier to use and more user-friendly than NumPy ndarrays, since they provide row indexes and column indexes in the case of DataFrame and Panel. The DataFrame object is the most popular and widely used object in pandas. In the next chapter, we will cover the topic of indexing in pandas.

Operations in pandas, Part I – Indexing and Selecting
In this chapter, we will focus on the indexing and selection of data from pandas objects. This is important, since effective use of pandas requires a good knowledge of the indexing and selection of data. The topics that we will address in this chapter include the following:

• Basic indexing
• Label, integer, and mixed indexing
• MultiIndexing
• Boolean indexing
• Operations on indexes

Basic indexing
We have already discussed basic indexing on Series and DataFrames in the previous chapter, but here we will include some examples for the sake of completeness. Here, we list a time series of crude oil spot prices for the four quarters of 2013, taken from IMF data: http://www.imf.org/external/np/res/commod/pdf/monthly/011014.pdf.

In [642]: SpotCrudePrices_2013_Data={
              'U.K. Brent' : {'2013-Q1':112.9, '2013-Q2':103.0,
                              '2013-Q3':110.1, '2013-Q4':109.4},
              'Dubai': {'2013-Q1':108.1, '2013-Q2':100.8,
                        '2013-Q3':106.1, '2013-Q4':106.7},
              'West Texas Intermediate': {'2013-Q1':94.4, '2013-Q2':94.2,
                                          '2013-Q3':105.8, '2013-Q4':97.4}}
          SpotCrudePrices_2013=pd.DataFrame.from_dict(SpotCrudePrices_2013_Data)
          SpotCrudePrices_2013
Out[642]:          Dubai  U.K. Brent  West Texas Intermediate
          2013-Q1  108.1  112.9        94.4
          2013-Q2  100.8  103.0        94.2
          2013-Q3  106.1  110.1       105.8
          2013-Q4  106.7  109.4        97.4

We can select the prices for the available time periods of Dubai crude oil by using the [] operator:

In [644]: dubaiPrices=SpotCrudePrices_2013['Dubai']; dubaiPrices
Out[644]: 2013-Q1    108.1
          2013-Q2    100.8
          2013-Q3    106.1
          2013-Q4    106.7
          Name: Dubai, dtype: float64

We can pass a list of columns to the [] operator in order to select the columns in a particular order:

In [647]: SpotCrudePrices_2013[['West Texas Intermediate','U.K. Brent']]
Out[647]:          West Texas Intermediate  U.K. Brent
          2013-Q1   94.4                    112.9
          2013-Q2   94.2                    103.0
          2013-Q3  105.8                    110.1
          2013-Q4   97.4                    109.4

If we specify a column that is not listed in the DataFrame, we will get a KeyError exception:

In [649]: SpotCrudePrices_2013['Brent Blend']
---------------------------------------------------------
KeyError                  Traceback (most recent call last)
...
KeyError: u'no item named Brent Blend'

We can avoid this error by using the get operator and specifying a default value for the case when the column is not present, as follows:

In [650]: SpotCrudePrices_2013.get('Brent Blend','N/A')
Out[650]: 'N/A'

Note that rows cannot be selected with the bracket operator [] in a DataFrame. Hence, we get an error in the following case:

In [755]: SpotCrudePrices_2013['2013-Q1']
---------------------------------------------------------
KeyError                  Traceback (most recent call last)
...
KeyError: u'no item named 2013-Q1'

This was a design decision made by the creators in order to avoid ambiguity.
In the case of a Series, where there is no ambiguity, selecting rows by using the [] operator works:

In [756]: dubaiPrices['2013-Q1']
Out[756]: 108.1

We shall see how we can perform row selection by using one of the newer indexing operators later in this chapter.

Accessing attributes using the dot operator
One can retrieve values from a Series, DataFrame, or Panel directly as an attribute, as follows:

In [650]: SpotCrudePrices_2013.Dubai
Out[650]: 2013-Q1    108.1
          2013-Q2    100.8
          2013-Q3    106.1
          2013-Q4    106.7
          Name: Dubai, dtype: float64

However, this only works if the index element is a valid Python identifier. Otherwise, we get SyntaxError, as in the following case, because of the spaces in the column name:

In [653]: SpotCrudePrices_2013."West Texas Intermediate"
  File "<ipython-input-653>", line 1
    SpotCrudePrices_2013."West Texas Intermediate"
                                                 ^
SyntaxError: invalid syntax

A valid Python identifier must follow this lexical convention:

identifier ::= (letter|"_") (letter | digit | "_")*

Thus, a valid Python identifier cannot contain a space. See the Python Lexical Analysis documentation for more details at http://docs.python.org/2.7/reference/lexical_analysis.html#identifiers.

We can resolve this by renaming the column index names so that they are all valid identifiers:

In [655]: SpotCrudePrices_2013.columns=['Dubai','UK_Brent',
                                        'West_Texas_Intermediate']
          SpotCrudePrices_2013
Out[655]:          Dubai  UK_Brent  West_Texas_Intermediate
          2013-Q1  108.1  112.9      94.4
          2013-Q2  100.8  103.0      94.2
          2013-Q3  106.1  110.1     105.8
          2013-Q4  106.7  109.4      97.4

We can then select the prices for West Texas Intermediate as desired:

In [656]: SpotCrudePrices_2013.West_Texas_Intermediate
Out[656]: 2013-Q1     94.4
          2013-Q2     94.2
          2013-Q3    105.8
          2013-Q4     97.4
          Name: West_Texas_Intermediate, dtype: float64

We can also select prices by specifying a column index number; here we select column 1 (U.K. Brent):

In [18]: SpotCrudePrices_2013[[1]]
Out[18]:          U.K. Brent
         2013-Q1  112.9
         2013-Q2  103.0
         2013-Q3  110.1
         2013-Q4  109.4

Range slicing
As we saw in the section on NumPy ndarrays in Chapter 3, The pandas Data Structures, we can slice a range by using the [] operator. The syntax of the slicing operator exactly matches that of NumPy:

ar[startIndex:endIndex:stepValue]

where the default values, if not specified, are as follows:

• 0 for startIndex
• The array size for endIndex
• 1 for stepValue

For a DataFrame, [] slices across rows as follows:

Obtain the first two rows:

In [675]: SpotCrudePrices_2013[:2]
Out[675]:          Dubai  UK_Brent  West_Texas_Intermediate
          2013-Q1  108.1  112.9     94.4
          2013-Q2  100.8  103.0     94.2

Obtain all rows starting from index 2:

In [662]: SpotCrudePrices_2013[2:]
Out[662]:          Dubai  UK_Brent  West_Texas_Intermediate
          2013-Q3  106.1  110.1     105.8
          2013-Q4  106.7  109.4      97.4

Obtain rows at intervals of two, starting from row 0:

In [664]: SpotCrudePrices_2013[::2]
Out[664]:          Dubai  UK_Brent  West_Texas_Intermediate
          2013-Q1  108.1  112.9      94.4
          2013-Q3  106.1  110.1     105.8

Reverse the order of rows in the DataFrame:

In [677]: SpotCrudePrices_2013[::-1]
Out[677]:          Dubai  UK_Brent  West_Texas_Intermediate
          2013-Q4  106.7  109.4      97.4
          2013-Q3  106.1  110.1     105.8
          2013-Q2  100.8  103.0      94.2
          2013-Q1  108.1  112.9      94.4

For a Series, the behavior is just as intuitive:

In [666]: dubaiPrices=SpotCrudePrices_2013['Dubai']

Obtain the last three rows, or all rows but the first:

In [681]: dubaiPrices[1:]
Out[681]: 2013-Q2    100.8
          2013-Q3    106.1
          2013-Q4    106.7
          Name: Dubai, dtype: float64

Obtain all rows but the last:

In [682]: dubaiPrices[:-1]
Out[682]: 2013-Q1    108.1
          2013-Q2    100.8
          2013-Q3    106.1
          Name: Dubai, dtype: float64

Reverse the rows:

In [683]: dubaiPrices[::-1]
Out[683]: 2013-Q4    106.7
          2013-Q3    106.1
          2013-Q2    100.8
          2013-Q1    108.1
          Name: Dubai, dtype: float64
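One subtlety is worth a quick sketch here (reusing dubaiPrices from above): positional slices exclude their endpoint, whereas label-based slices include it:

dubaiPrices[1:3]                   # positions 1 and 2 only: 2013-Q2 and 2013-Q3
dubaiPrices['2013-Q2':'2013-Q4']   # labels 2013-Q2 through 2013-Q4, endpoint included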
Label, integer, and mixed indexing
In addition to the standard indexing operator [] and the attribute operator, there are operators provided in pandas to make the job of indexing easier and more convenient. By label indexing, we generally mean indexing by a header name, which tends to be a string value in most cases. These operators are as follows:

• The .loc operator: It allows label-oriented indexing
• The .iloc operator: It allows integer-based indexing
• The .ix operator: It allows mixed label- and integer-based indexing

We will now turn our attention to these operators.

Label-oriented indexing
The .loc operator supports pure label-based indexing. It accepts the following as valid inputs:

• A single label such as ['March'], [88], or ['Dubai']. Note that in the case where the label is an integer, it doesn't refer to the integer position of the index, but to the integer itself as a label.
• A list or array of labels, for example, ['Dubai','UK Brent'].
• A slice object with labels, for example, 'May':'Aug'.
• A Boolean array.

For our illustrative dataset, we use average snowy weather temperature data for New York City from the following:

• http://www.currentresults.com/Weather/
• http://www.currentresults.com/Weather/New-York/Places/new-york-city-temperatures-by-month-average.php

Create the DataFrame:

In [723]: NYC_SnowAvgsData={'Months' : ['January','February','March',
                                        'April','November','December'],
                            'Avg SnowDays' : [4.0,2.7,1.7,0.2,0.2,2.3],
                            'Avg Precip. (cm)' : [17.8,22.4,9.1,1.5,0.8,12.2],
                            'Avg Low Temp. (F)' : [27,29,35,45,42,32]}

In [726]: NYC_SnowAvgs=pd.DataFrame(NYC_SnowAvgsData,
                                    index=NYC_SnowAvgsData['Months'],
                                    columns=['Avg SnowDays','Avg Precip. (cm)',
                                             'Avg Low Temp. (F)'])
          NYC_SnowAvgs
Out[726]:           Avg SnowDays  Avg Precip. (cm)  Avg Low Temp. (F)
          January   4.0           17.8              27
          February  2.7           22.4              29
          March     1.7            9.1              35
          April     0.2            1.5              45
          November  0.2            0.8              42
          December  2.3           12.2              32

Using a single label:

In [728]: NYC_SnowAvgs.loc['January']
Out[728]: Avg SnowDays          4.0
          Avg Precip. (cm)     17.8
          Avg Low Temp. (F)    27.0
          Name: January, dtype: float64

Using a list of labels:

In [730]: NYC_SnowAvgs.loc[['January','April']]
Out[730]:          Avg SnowDays  Avg Precip. (cm)  Avg Low Temp. (F)
          January  4.0           17.8              27
          April    0.2            1.5              45

Using a label range:

In [731]: NYC_SnowAvgs.loc['January':'March']
Out[731]:           Avg SnowDays  Avg Precip. (cm)  Avg Low Temp. (F)
          January   4.0           17.8              27
          February  2.7           22.4              29
          March     1.7            9.1              35

Note that while using the .loc, .iloc, and .ix operators on a DataFrame, the row index must always be specified first. This is the opposite of the [] operator, where only columns can be selected directly. Hence, we get an error if we do the following:

In [771]: NYC_SnowAvgs.loc['Avg SnowDays']
KeyError: 'Avg SnowDays'

The correct way to do this is to specifically select all rows by using the colon (:) operator, as follows:

In [772]: NYC_SnowAvgs.loc[:,'Avg SnowDays']
Out[772]: January     4.0
          February    2.7
          March       1.7
          April       0.2
          November    0.2
          December    2.3
          Name: Avg SnowDays, dtype: float64

Here, we see how to select a specific coordinate value, namely the average number of snow days in March:

In [732]: NYC_SnowAvgs.loc['March','Avg SnowDays']
Out[732]: 1.7

This alternative style is also supported:

In [733]: NYC_SnowAvgs.loc['March']['Avg SnowDays']
Out[733]: 1.7

The following is the equivalent of the preceding case using the square bracket operator []:

In [750]: NYC_SnowAvgs['Avg SnowDays']['March']
Out[750]: 1.7

Note again, however, that specifying the row index value first, as is done with the .loc operator, will result in KeyError.
This is a consequence of the fact discussed previously, that the [] operator cannot be used to select rows directly. The columns must be selected first to obtain a Series, which can then be selected by rows. Thus, you will get KeyError: u'no item named March' if you use either of the following:

In [757]: NYC_SnowAvgs['March']['Avg SnowDays']

Or

In [758]: NYC_SnowAvgs['March']

We can use the .loc operator to select the rows instead:

In [759]: NYC_SnowAvgs.loc['March']
Out[759]: Avg SnowDays          1.7
          Avg Precip. (cm)      9.1
          Avg Low Temp. (F)    35.0
          Name: March, dtype: float64

Selection using a Boolean array
Now, we will show how to select the months that have less than one snow day on average by using a Boolean array:

In [763]: NYC_SnowAvgs.loc[NYC_SnowAvgs['Avg SnowDays']<1]
Out[763]:           Avg SnowDays  Avg Precip. (cm)  Avg Low Temp. (F)
          April     0.2           1.5               45
          November  0.2           0.8               42

Using the crude oil prices DataFrame, we can also select the columns whose price in the first quarter of 2013 was above 110:

In [768]: SpotCrudePrices_2013.loc[:,SpotCrudePrices_2013.loc['2013-Q1']>110]
Out[768]:          UK_Brent
          2013-Q1  112.9
          2013-Q2  103.0
          2013-Q3  110.1
          2013-Q4  109.4

Note that the preceding arguments involve the Boolean operators < and >, which actually evaluate to Boolean arrays, for example:

In [769]: SpotCrudePrices_2013.loc['2013-Q1']>110
Out[769]: Dubai                      False
          UK_Brent                    True
          West_Texas_Intermediate    False
          Name: 2013-Q1, dtype: bool

Integer-oriented indexing
The .iloc operator supports integer-based positional indexing. It accepts the following as inputs:

• A single integer, for example, 7
• A list or array of integers, for example, [2,3]
• A slice object with integers, for example, 1:4

Let us create the following:

In [777]: import scipy.constants as phys
          import math

In [782]: sci_values=pd.DataFrame([[math.pi, math.sin(math.pi), math.cos(math.pi)],
                                   [math.e, math.log(math.e), phys.golden],
                                   [phys.c, phys.g, phys.e],
                                   [phys.m_e, phys.m_p, phys.m_n]],
                                  index=list(range(0,20,5)))
          sci_values
Out[782]:     0             1             2
          0   3.141593e+00  1.224647e-16 -1.000000e+00
          5   2.718282e+00  1.000000e+00  1.618034e+00
          10  2.997925e+08  9.806650e+00  1.602177e-19
          15  9.109383e-31  1.672622e-27  1.674927e-27

We can select the non-physical constants in the first two rows by using integer slicing:

In [789]: sci_values.iloc[:2]
Out[789]:    0         1             2
          0  3.141593  1.224647e-16 -1.000000
          5  2.718282  1.000000e+00  1.618034

Alternatively, we can select the speed of light and the acceleration of gravity in the third row:

In [795]: sci_values.iloc[2,0:2]
Out[795]: 0    2.997925e+08
          1    9.806650e+00
          dtype: float64

Note that the arguments to .iloc are strictly positional and have nothing to do with the index values. Hence, consider a case where we mistakenly think that we can obtain the third row by using the following:

In [796]: sci_values.iloc[10]
------------------------------------------------------
IndexError                Traceback (most recent call last)
...
IndexError: index 10 is out of bounds for axis 0 with size 4

Here, we get IndexError in the preceding result, so we should use the label-indexing operator .loc instead, as follows:

In [797]: sci_values.loc[10]
Out[797]: 0    2.997925e+08
          1    9.806650e+00
          2    1.602177e-19
          Name: 10, dtype: float64

To slice out a specific row, we can use the following:

In [802]: sci_values.iloc[2:3,:]
Out[802]:     0             1        2
          10  2.997925e+08  9.80665  1.602177e-19

To obtain a cross-section using an integer position, use the following:

In [803]: sci_values.iloc[3]
Out[803]: 0    9.109383e-31
          1    1.672622e-27
          2    1.674927e-27
          Name: 15, dtype: float64

If we attempt to slice past the end of the array, we obtain IndexError as follows:

In [805]: sci_values.iloc[6,:]
------------------------------------------------------
IndexError                Traceback (most recent call last)
IndexError: index 6 is out of bounds for axis 0 with size 4
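By contrast, an out-of-range positional slice follows Python list semantics and simply returns an empty selection rather than raising an error; a quick sketch with the same DataFrame:

sci_values.iloc[6:8, :]   # returns an empty DataFrame instead of IndexError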
The .iat and .at operators
The .iat and .at operators can be used for the quick selection of scalar values. This is best illustrated as follows:

In [806]: sci_values.iloc[3,0]
Out[806]: 9.1093829099999999e-31

In [807]: sci_values.iat[3,0]
Out[807]: 9.1093829099999999e-31

In [808]: %timeit sci_values.iloc[3,0]
          10000 loops, best of 3: 122 µs per loop

In [809]: %timeit sci_values.iat[3,0]
          10000 loops, best of 3: 28.4 µs per loop

Thus, we can see that .iat is much faster than the .iloc/.ix operators. The same applies to .at versus .loc.

Mixed indexing with the .ix operator
The .ix operator behaves like a mixture of the .loc and .iloc operators, with the .loc behavior taking precedence. It takes the following as possible inputs:

• A single label or integer
• A list of integers or labels
• An integer slice or label slice
• A Boolean array

Let us re-create the following DataFrame by saving the stock index closing price data to a file (stock_index_closing.csv) and reading it in:

TradingDate,Nasdaq,S&P 500,Russell 2000
2014/01/30,4123.13,1794.19,1139.36
2014/01/31,4103.88,1782.59,1130.88
2014/02/03,3996.96,1741.89,1094.58
2014/02/04,4031.52,1755.2,1102.84
2014/02/05,4011.55,1751.64,1093.59
2014/02/06,4057.12,1773.43,1103.93

The source for this data is http://www.economagic.com/sp.htm#Daily. Here's how we read the CSV data into a DataFrame:

In [939]: stockIndexDataDF=pd.read_csv('./stock_index_data.csv')
In [940]: stockIndexDataDF
Out[940]:    TradingDate  Nasdaq   S&P 500  Russell 2000
          0  2014/01/30   4123.13  1794.19  1139.36
          1  2014/01/31   4103.88  1782.59  1130.88
          2  2014/02/03   3996.96  1741.89  1094.58
          3  2014/02/04   4031.52  1755.20  1102.84
          4  2014/02/05   4011.55  1751.64  1093.59
          5  2014/02/06   4057.12  1773.43  1103.93

What we see from the preceding example is that the DataFrame created has an integer-based row index. We promptly set the index to be the trading date so that we can use the .ix operator on it:

In [941]: stockIndexDF=stockIndexDataDF.set_index('TradingDate')
In [942]: stockIndexDF
Out[942]:              Nasdaq   S&P 500  Russell 2000
          TradingDate
          2014/01/30   4123.13  1794.19  1139.36
          2014/01/31   4103.88  1782.59  1130.88
          2014/02/03   3996.96  1741.89  1094.58
          2014/02/04   4031.52  1755.20  1102.84
          2014/02/05   4011.55  1751.64  1093.59
          2014/02/06   4057.12  1773.43  1103.93

We now show examples of using the .ix operator.

Using a single label:

In [927]: stockIndexDF.ix['2014/01/30']
Out[927]: Nasdaq          4123.13
          S&P 500         1794.19
          Russell 2000    1139.36
          Name: 2014/01/30, dtype: float64

Using a list of labels:

In [928]: stockIndexDF.ix[['2014/01/30']]
Out[928]:             Nasdaq   S&P 500  Russell 2000
          2014/01/30  4123.13  1794.19  1139.36

In [930]: stockIndexDF.ix[['2014/01/30','2014/01/31']]
Out[930]:             Nasdaq   S&P 500  Russell 2000
          2014/01/30  4123.13  1794.19  1139.36
          2014/01/31  4103.88  1782.59  1130.88

Note the difference in the output between using a single label versus using a list containing just a single label. The former results in a Series and the latter in a DataFrame:

In [943]: type(stockIndexDF.ix['2014/01/30'])
Out[943]: pandas.core.series.Series

In [944]: type(stockIndexDF.ix[['2014/01/30']])
Out[944]: pandas.core.frame.DataFrame

For the former, the indexer is a scalar; for the latter, the indexer is a list. A list indexer is used to select multiple items. A multi-item slice of a DataFrame can only result in another DataFrame, since it is 2D; hence, what is returned in the latter case is a DataFrame.
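The same scalar-versus-list distinction carries over to the .loc operator in current pandas, where .ix has since been deprecated; a minimal sketch with made-up data:

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]}, index=['x', 'y'])
type(df.loc['x'])     # pandas.core.series.Series -- scalar label indexer
type(df.loc[['x']])   # pandas.core.frame.DataFrame -- list-of-labels indexer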
Using a label-based slice:

In [932]: tradingDates=stockIndexDataDF.TradingDate
In [934]: stockIndexDF.ix[tradingDates[:3]]
Out[934]:             Nasdaq   S&P 500  Russell 2000
          2014/01/30  4123.13  1794.19  1139.36
          2014/01/31  4103.88  1782.59  1130.88
          2014/02/03  3996.96  1741.89  1094.58

Using a single integer:

In [936]: stockIndexDF.ix[0]
Out[936]: Nasdaq          4123.13
          S&P 500         1794.19
          Russell 2000    1139.36
          Name: 2014/01/30, dtype: float64

Using a list of integers:

In [938]: stockIndexDF.ix[[0,2]]
Out[938]:              Nasdaq   S&P 500  Russell 2000
          TradingDate
          2014/01/30   4123.13  1794.19  1139.36
          2014/02/03   3996.96  1741.89  1094.58

Using an integer slice:

In [947]: stockIndexDF.ix[1:3]
Out[947]:              Nasdaq   S&P 500  Russell 2000
          TradingDate
          2014/01/31   4103.88  1782.59  1130.88
          2014/02/03   3996.96  1741.89  1094.58

Using a Boolean array:

In [949]: stockIndexDF.ix[stockIndexDF['Russell 2000']>1100]
Out[949]:              Nasdaq   S&P 500  Russell 2000
          TradingDate
          2014/01/30   4123.13  1794.19  1139.36
          2014/01/31   4103.88  1782.59  1130.88
          2014/02/04   4031.52  1755.20  1102.84
          2014/02/06   4057.12  1773.43  1103.93

As in the case of .loc, the row index must be specified first for the .ix operator.

MultiIndexing
We now turn to the topic of MultiIndexing. Multi-level or hierarchical indexing is useful because it enables the pandas user to select and massage data in multiple dimensions by using data structures such as Series and DataFrame. In order to start, let us save the following data to a file, stock_index_prices.csv, and read it in:

TradingDate,PriceType,Nasdaq,S&P 500,Russell 2000
2014/02/21,open,4282.17,1841.07,1166.25
2014/02/21,close,4263.41,1836.25,1164.63
2014/02/21,high,4284.85,1846.13,1168.43
2014/02/24,open,4273.32,1836.78,1166.74
2014/02/24,close,4292.97,1847.61,1174.55
2014/02/24,high,4311.13,1858.71,1180.29
2014/02/25,open,4298.48,1847.66,1176
2014/02/25,close,4287.59,1845.12,1173.95
2014/02/25,high,4307.51,1852.91,1179.43
2014/02/26,open,4300.45,1845.79,1176.11
2014/02/26,close,4292.06,1845.16,1181.72
2014/02/26,high,4316.82,1852.65,1188.06
2014/02/27,open,4291.47,1844.9,1179.28
2014/02/27,close,4318.93,1854.29,1187.94
2014/02/27,high,4322.46,1854.53,1187.94
2014/02/28,open,4323.52,1855.12,1189.19
2014/02/28,close,4308.12,1859.45,1183.03
2014/02/28,high,4342.59,1867.92,1193.5

In [950]: sharesIndexDataDF=pd.read_csv('./stock_index_prices.csv')
In [951]: sharesIndexDataDF
Out[951]:     TradingDate PriceType  Nasdaq   S&P 500  Russell 2000
          0   2014/02/21  open       4282.17  1841.07  1166.25
          1   2014/02/21  close      4263.41  1836.25  1164.63
          2   2014/02/21  high       4284.85  1846.13  1168.43
          ...
          17  2014/02/28  high       4342.59  1867.92  1193.50

Here, we create a MultiIndex from the TradingDate and PriceType columns:

In [958]: sharesIndexDF=sharesIndexDataDF.set_index(['TradingDate','PriceType'])
In [959]: mIndex=sharesIndexDF.index; mIndex
Out[959]: MultiIndex
          [(u'2014/02/21', u'open'), (u'2014/02/21', u'close'),
           (u'2014/02/21', u'high'), (u'2014/02/24', u'open'),
           (u'2014/02/24', u'close'), (u'2014/02/24', u'high'),
           (u'2014/02/25', u'open'), (u'2014/02/25', u'close'),
           (u'2014/02/25', u'high'), (u'2014/02/26', u'open'),
           (u'2014/02/26', u'close'), (u'2014/02/26', u'high'),
           (u'2014/02/27', u'open'), (u'2014/02/27', u'close'),
           (u'2014/02/27', u'high'), (u'2014/02/28', u'open'),
           (u'2014/02/28', u'close'), (u'2014/02/28', u'high')]

In [960]: sharesIndexDF
Out[960]:                        Nasdaq   S&P 500  Russell 2000
          TradingDate PriceType
          2014/02/21  open       4282.17  1841.07  1166.25
                      close      4263.41  1836.25  1164.63
                      high       4284.85  1846.13  1168.43
          ...
          2014/02/28  high       4342.59  1867.92  1193.50

Upon inspection, we see that the MultiIndex consists of a list of tuples.
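An equivalent index can also be built directly from such tuples; a minimal sketch using the pd.MultiIndex.from_tuples constructor:

idx = pd.MultiIndex.from_tuples([('2014/02/21', 'open'),
                                 ('2014/02/21', 'close'),
                                 ('2014/02/21', 'high')],
                                names=['TradingDate', 'PriceType'])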
Applying the get_level_values function with the appropriate argument produces a list of the labels for each level of the index:

In [962]: mIndex.get_level_values(0)
Out[962]: Index([u'2014/02/21', u'2014/02/21', u'2014/02/21', u'2014/02/24',
                 u'2014/02/24', u'2014/02/24', u'2014/02/25', u'2014/02/25',
                 u'2014/02/25', u'2014/02/26', u'2014/02/26', u'2014/02/26',
                 u'2014/02/27', u'2014/02/27', u'2014/02/27', u'2014/02/28',
                 u'2014/02/28', u'2014/02/28'], dtype=object)

In [963]: mIndex.get_level_values(1)
Out[963]: Index([u'open', u'close', u'high', u'open', u'close', u'high',
                 u'open', u'close', u'high', u'open', u'close', u'high',
                 u'open', u'close', u'high', u'open', u'close', u'high'],
                dtype=object)

However, IndexError will be thrown if the value passed to get_level_values() is invalid or out of range:

In [88]: mIndex.get_level_values(2)
IndexError                Traceback (most recent call last)
...

You can achieve hierarchical indexing with a MultiIndexed DataFrame:

In [971]: sharesIndexDF.ix['2014/02/21']
Out[971]:            Nasdaq   S&P 500  Russell 2000
          PriceType
          open       4282.17  1841.07  1166.25
          close      4263.41  1836.25  1164.63
          high       4284.85  1846.13  1168.43

In [976]: sharesIndexDF.ix['2014/02/21','open']
Out[976]: Nasdaq          4282.17
          S&P 500         1841.07
          Russell 2000    1166.25
          Name: (2014/02/21, open), dtype: float64

We can slice using a MultiIndex:

In [980]: sharesIndexDF.ix['2014/02/21':'2014/02/24']
Out[980]:                        Nasdaq   S&P 500  Russell 2000
          TradingDate PriceType
          2014/02/21  open       4282.17  1841.07  1166.25
                      close      4263.41  1836.25  1164.63
                      high       4284.85  1846.13  1168.43
          2014/02/24  open       4273.32  1836.78  1166.74
                      close      4292.97  1847.61  1174.55
                      high       4311.13  1858.71  1180.29

We can try slicing at a lower level:

In [272]: sharesIndexDF.ix[('2014/02/21','open'):('2014/02/24','open')]
------------------------------------------------------------------
KeyError                  Traceback (most recent call last)
----> 1 sharesIndexDF.ix[('2014/02/21','open'):('2014/02/24','open')]
...
KeyError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'

However, this results in KeyError with a rather strange error message. The key lesson to be learned here is that the current incarnation of MultiIndex requires the labels to be sorted for the lower-level slicing routines to work correctly. In order to do this, you can utilize the sortlevel() method, which sorts the labels of an axis within a MultiIndex. To be on the safe side, sort first before slicing with a MultiIndex. Thus, we can do the following:

In [984]: sharesIndexDF.sortlevel(0).ix[('2014/02/21','open'):('2014/02/24','open')]
Out[984]:                        Nasdaq   S&P 500  Russell 2000
          TradingDate PriceType
          2014/02/21  open       4282.17  1841.07  1166.25
          2014/02/24  close      4292.97  1847.61  1174.55
                      high       4311.13  1858.71  1180.29
                      open       4273.32  1836.78  1166.74

We can also pass a list of tuples:

In [985]: sharesIndexDF.ix[[('2014/02/21','close'),('2014/02/24','open')]]
Out[985]:                        Nasdaq   S&P 500  Russell 2000
          TradingDate PriceType
          2014/02/21  close      4263.41  1836.25  1164.63
          2014/02/24  open       4273.32  1836.78  1166.74

          2 rows × 3 columns

Note that by specifying a list of tuples instead of a range, as in the previous example, we display only the values of the open PriceType rather than all three for the TradingDate 2014/02/24.
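For readers on current pandas: the sortlevel() method has since been replaced by sort_index(), and .ix by .loc; an equivalent sketch of the sorted slice above:

sharesIndexDF.sort_index(level=0).loc[('2014/02/21','open'):('2014/02/24','open')]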
Swapping and reordering levels
The swaplevel function enables levels within the MultiIndex to be swapped:

In [281]: swappedDF=sharesIndexDF[:7].swaplevel(0, 1, axis=0)
          swappedDF
Out[281]:                        Nasdaq   S&P 500  Russell 2000
          PriceType TradingDate
          open      2014/02/21   4282.17  1841.07  1166.25
          close     2014/02/21   4263.41  1836.25  1164.63
          high      2014/02/21   4284.85  1846.13  1168.43
          open      2014/02/24   4273.32  1836.78  1166.74
          close     2014/02/24   4292.97  1847.61  1174.55
          high      2014/02/24   4311.13  1858.71  1180.29
          open      2014/02/25   4298.48  1847.66  1176.00

          7 rows × 3 columns

The reorder_levels function is more general, allowing you to specify the order of the levels:

In [285]: reorderedDF=sharesIndexDF[:7].reorder_levels(['PriceType','TradingDate'],
                                                       axis=0)
          reorderedDF

This produces the same result as the preceding swaplevel call.

Cross sections
The xs method provides a shortcut means of selecting data based on a particular index level value:

In [287]: sharesIndexDF.xs('open',level='PriceType')
Out[287]:              Nasdaq   S&P 500  Russell 2000
          TradingDate
          2014/02/21   4282.17  1841.07  1166.25
          2014/02/24   4273.32  1836.78  1166.74
          2014/02/25   4298.48  1847.66  1176.00
          2014/02/26   4300.45  1845.79  1176.11
          2014/02/27   4291.47  1844.90  1179.28
          2014/02/28   4323.52  1855.12  1189.19

          6 rows × 3 columns

The more long-winded alternative to the preceding command would be to use swaplevel to switch between the TradingDate and PriceType levels and then perform the selection as follows:

In [305]: sharesIndexDF.swaplevel(0, 1, axis=0).ix['open']

This gives the same six rows of open prices. Using .xs achieves the same effect as obtaining a cross-section in the previous section on integer-oriented indexing.

Boolean indexing
We use Boolean indexing to filter or select parts of the data. The operators used are the logical operators | (or), & (and), and ~ (not). These operators must be grouped using parentheses when used together. Using the DataFrame from the previous section, here we display the trading dates for which the NASDAQ closed above 4300:

In [311]: sharesIndexDataDF.ix[(sharesIndexDataDF['PriceType']=='close') & \
                               (sharesIndexDataDF['Nasdaq']>4300)]
Out[311]:     TradingDate PriceType  Nasdaq   S&P 500  Russell 2000
          13  2014/02/27  close      4318.93  1854.29  1187.94
          16  2014/02/28  close      4308.12  1859.45  1183.03

You can also create Boolean conditions in which you use arrays to filter out parts of the data, for example, selecting the rows where the price type is high and the NASDAQ was below 4300:

In [316]: highSelection=sharesIndexDataDF['PriceType']=='high'
          NasdaqHigh=sharesIndexDataDF['Nasdaq']<4300
          sharesIndexDataDF.ix[highSelection & NasdaqHigh]
Out[316]:    TradingDate PriceType  Nasdaq   S&P 500  Russell 2000
          2  2014/02/21  high       4284.85  1846.13  1168.43

The where() method
The where() method is used to ensure that the result of Boolean filtering has the same shape as the original data. Given a Series of random normal values named normvals, plain Boolean selection returns only the elements that satisfy the condition, whereas where() returns a Series of the original length, with NaN substituted at the labels that do not:

In [381]: normvals[normvals>0]
In [382]: normvals.where(normvals>0)

This method appears to be useful only in the case of a Series, as we get this behavior for free in the case of a DataFrame:

In [393]: np.random.seed(100)
          normDF=pd.DataFrame([[round(np.random.normal(),3) for i in np.arange(5)]
                               for j in range(3)],
                              columns=['0','30','60','90','120'])
          normDF
Out[393]:    0       30     60      90      120
          0 -1.750   0.343  1.153  -0.252   0.981
          1  0.514   0.221 -1.070  -0.189   0.255
          2 -0.458   0.435 -0.584   0.817   0.673

In [394]: normDF[normDF>0]
Out[394]:    0      30     60     90     120
          0  NaN    0.343  1.153  NaN    0.981
          1  0.514  0.221  NaN    NaN    0.255
          2  NaN    0.435  NaN    0.817  0.673

In [395]: normDF.where(normDF>0)

This produces the same result as the preceding command. The inverse operation of the where method is mask:

In [396]: normDF.mask(normDF>0)
Out[396]:    0       30   60      90      120
          0 -1.750   NaN  NaN    -0.252   NaN
          1  NaN     NaN -1.070  -0.189   NaN
          2 -0.458   NaN -0.584   NaN     NaN
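As a related sketch, where() and mask() also accept an other argument, so a replacement value can be supplied instead of NaN for the cells that fail the condition:

normDF.where(normDF>0, other=0)   # negative entries become 0 rather than NaN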
Operations on indexes
To complete this chapter, we will discuss operations on indexes. We sometimes need to operate on indexes when we wish to re-align our data or select it in different ways. There are various operations.

The set_index function allows for the creation of an index on an existing DataFrame and returns an indexed DataFrame. As we have seen before:

In [939]: stockIndexDataDF=pd.read_csv('./stock_index_data.csv')
In [940]: stockIndexDataDF
Out[940]:    TradingDate  Nasdaq   S&P 500  Russell 2000
          0  2014/01/30   4123.13  1794.19  1139.36
          1  2014/01/31   4103.88  1782.59  1130.88
          2  2014/02/03   3996.96  1741.89  1094.58
          3  2014/02/04   4031.52  1755.20  1102.84
          4  2014/02/05   4011.55  1751.64  1093.59
          5  2014/02/06   4057.12  1773.43  1103.93

Now, we can set the index as follows:

In [941]: stockIndexDF=stockIndexDataDF.set_index('TradingDate')
In [942]: stockIndexDF
Out[942]:              Nasdaq   S&P 500  Russell 2000
          TradingDate
          2014/01/30   4123.13  1794.19  1139.36
          2014/01/31   4103.88  1782.59  1130.88
          2014/02/03   3996.96  1741.89  1094.58
          2014/02/04   4031.52  1755.20  1102.84
          2014/02/05   4011.55  1751.64  1093.59
          2014/02/06   4057.12  1773.43  1103.93

The reset_index function reverses set_index:

In [409]: stockIndexDF.reset_index()
Out[409]:    TradingDate  Nasdaq   S&P 500  Russell 2000
          0  2014/01/30   4123.13  1794.19  1139.36
          1  2014/01/31   4103.88  1782.59  1130.88
          ...
          6 rows × 4 columns

Summary
To summarize, there are various ways of selecting data from pandas:

• We can use basic indexing, which is closest to our understanding of accessing data in an array.
• We can use label- or integer-based indexing with the associated operators.
• We can use a MultiIndex, which is the pandas version of a composite key comprising multiple fields.
• We can use a Boolean/logical index.

For further references about indexing in pandas, please take a look at the official documentation at http://pandas.pydata.org/pandas-docs/stable/indexing.html.

In the next chapter, we will examine the topic of grouping, reshaping, and merging data using pandas.

Operations in pandas, Part II – Grouping, Merging, and Reshaping of Data
In this chapter, we tackle the question of rearranging data in our data structures. We examine the various functions that enable us to rearrange data by utilizing them on real-world datasets. Such functions include groupby, concat, aggregate, append, and so on. The topics that we'll discuss are as follows:

• Aggregation/grouping of data
• Merging and concatenating data
• Reshaping data

Grouping of data
We often have detailed, granular data that we wish to aggregate or combine based on a grouping variable. We will illustrate some ways of doing this in the following sections.

The groupby operation
The groupby operation can be thought of as part of a process that involves the following three steps:

• Splitting the dataset
• Analyzing the data
• Aggregating or combining the data

The groupby clause is an operation on DataFrames. A Series is a 1D object, so performing a groupby operation on it is not very useful. However, it can be used to obtain distinct rows of the Series. The result of a groupby operation is not a DataFrame but a GroupBy object that behaves like a dict of DataFrame objects.

Let us start with a dataset involving the world's most popular sport: soccer. This dataset, obtained from Wikipedia, contains data for the finals of the European club championship since its inception in 1955. For reference, you can go to http://en.wikipedia.org/wiki/UEFA_Champions_League.

Convert the .csv file into a DataFrame by using the following command:

In [27]: uefaDF=pd.read_csv('./euro_winners.csv')
In [28]: uefaDF.head()

The output shows the season, the nations to which the winning and runner-up clubs belong, the score, the venue, and the attendance figures. Suppose we wanted to rank the nations by the number of European club championships they had won. We can do this by using groupby. First, we apply groupby to the DataFrame and see what the type of the result is:

In [84]: nationsGrp=uefaDF.groupby('Nation'); type(nationsGrp)
Out[84]: pandas.core.groupby.DataFrameGroupBy

Thus, we see that nationsGrp is of the pandas.core.groupby.DataFrameGroupBy type. The column on which we use groupby is referred to as the key.
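As a minimal sketch of the concept on made-up data: the key's distinct values define the groups, and an aggregation then combines each group:

df = pd.DataFrame({'team': ['A', 'A', 'B'], 'goals': [1, 3, 2]})
df.groupby('team')['goals'].sum()   # the key 'team' splits the rows: A -> 4, B -> 2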
We can see what the groups look like by using the groups attribute on the resulting DataFrameGroupBy object:

In [97]: nationsGrp.groups
Out[97]: {'England': [12, 21, 22, 23, 24, 25, 26, 28, 43, 49, 52, 56],
          'France': [37],
          'Germany': [18, 19, 20, 27, 41, 45, 57],
          'Italy': [7, 8, 9, 13, 29, 33, 34, 38, 40, 47, 51, 54],
          'Netherlands': [14, 15, 16, 17, 32, 39],
          'Portugal': [5, 6, 31, 48],
          'Romania': [30],
          'Scotland': [11],
          'Spain': [0, 1, 2, 3, 4, 10, 36, 42, 44, 46, 50, 53, 55],
          'Yugoslavia': [35]}

This is basically a dictionary that just shows the unique groups and the axis labels corresponding to each group, in this case, the row numbers. The number of groups is obtained by using the len() function:

In [109]: len(nationsGrp.groups)
Out[109]: 10

We can now display the number of wins of each nation in descending order by applying the size() function to the group and subsequently the sort() function to order the result:

In [99]: nationWins=nationsGrp.size()
In [100]: nationWins.sort(ascending=False)
          nationWins
Out[100]: Nation
          Spain          13
          Italy          12
          England        12
          Germany         7
          Netherlands     6
          Portugal        4
          France          1
          Romania         1
          Scotland        1
          Yugoslavia      1
          dtype: int64

The size() function returns a Series with the group names as the index and the size of each group. The size() function is also an aggregation function. We will examine aggregation functions later in the chapter.

To do a further breakup of wins by country and club, we apply a multicolumn groupby function before applying size() and sort():

In [106]: winnersGrp=uefaDF.groupby(['Nation','Winners'])
          clubWins=winnersGrp.size()
          clubWins.sort(ascending=False)
          clubWins
Out[106]: Nation    Winners
          Spain     Real Madrid      9
          Italy     Milan            7
          Germany   Bayern Munich    5
          England   Liverpool        5
          ...
          England   Aston Villa      1
          dtype: int64

A multicolumn groupby specifies more than one column to be used as the key by specifying the key columns as a list. Thus, we can see that the most successful club in this competition has been Real Madrid of Spain.

We now examine a richer dataset that will enable us to illustrate many more features of groupby. This dataset is also soccer related and provides statistics for the top four European soccer leagues in the 2012-2013 season:

• English Premier League or EPL
• Spanish Primera Division or La Liga
• Italian First Division or Serie A
• German Premier League or Bundesliga

The source of this information is at http://soccerstats.com.

Let us now read the goal stats data into a DataFrame as usual. In this case, we create a row index on the DataFrame using the month:

In [68]: goalStatsDF=pd.read_csv('./goal_stats_euro_leagues_2012-13.csv')
         goalStatsDF=goalStatsDF.set_index('Month')

We look at a snapshot of the head and tail ends of our dataset:

In [115]: goalStatsDF.head(3)
In [116]: goalStatsDF.tail(3)

The head shows the MatchesPlayed rows for the first months of the season (starting 08/01/2012), and the tail shows the GoalsScored rows for the final months; each row carries the Stat label plus the EPL, La Liga, Serie A, and Bundesliga columns. There are two measures in this data frame, MatchesPlayed and GoalsScored, and the data is ordered first by Stat and then by Month. Note that the last row in the tail() output has NaN values for all the columns except La Liga, but we'll discuss this in more detail later.
We can use groupby to display the stats grouped by year instead. Here is how this is done:

In [117]: goalStatsGroupedByYear = goalStatsDF.groupby(
              lambda Month: Month.split('/')[2])

We can then iterate over the resulting groupby object and display the groups. In the following command, we see the two sets of statistics grouped by year. Note the use of the lambda function to obtain the year group from the first day of the month. For more information about lambda functions, go to http://bit.ly/1apJNwS:

In [118]: for name, group in goalStatsGroupedByYear:
              print name
              print group
2012
(the rows for the months 08/01/2012 through 12/01/2012)
2013
(the rows for the months 01/01/2013 through 06/02/2013)

If we wished to group by individual month instead, we would need to apply groupby with a level argument, as follows:

In [77]: goalStatsGroupedByMonth = goalStatsDF.groupby(level=0)

In [81]: for name, group in goalStatsGroupedByMonth:
             print name
             print group
             print "\n"
(one group is printed per month, from 01/01/2013 through 12/01/2012)

Note that since in the preceding commands we're grouping on an index, we need to specify the level argument as opposed to just using a column name. When we group by multiple keys, the resulting group name is a tuple, as shown in the upcoming commands. First, we reset the index to obtain the original DataFrame and define a MultiIndex in order to be able to group by multiple keys. If this is not done, it will result in a ValueError:

In [246]: goalStatsDF=goalStatsDF.reset_index()
          goalStatsDF=goalStatsDF.set_index(['Month','Stat'])

In [247]: monthStatGroup=goalStatsDF.groupby(level=['Month','Stat'])

In [248]: for name, group in monthStatGroup:
              print name
              print group
(one group is printed per (Month, Stat) tuple, for example
 ('01/01/2013', 'GoalsScored') and ('01/01/2013', 'MatchesPlayed'))

Using groupby with a MultiIndex
If our DataFrame has a MultiIndex, we can use groupby to group by different levels of the hierarchy and compute some interesting statistics. Here is the goal stats data using a MultiIndex consisting of Month and then Stat:

In [134]: goalStatsDF2=pd.read_csv('./goal_stats_euro_leagues_2012-13.csv')
          goalStatsDF2=goalStatsDF2.set_index(['Month','Stat'])

In [141]: print goalStatsDF2.head(3)
          print goalStatsDF2.tail(3)

The head shows the MatchesPlayed rows for 08/01/2012 through 10/01/2012, and the tail shows the GoalsScored rows for 04/01/2013 through 06/01/2013, each with the EPL, La Liga, Serie A, and Bundesliga columns.

Suppose we wish to compute the total number of goals scored and the total matches played for the entire season for each league; we can do this as follows:

In [137]: grouped2=goalStatsDF2.groupby(level='Stat')
In [139]: grouped2.sum()
Out[139]:                EPL   La Liga  Serie A  Bundesliga
          GoalsScored    1063  1133     1003      898
          MatchesPlayed   380   380      380      306

Incidentally, the same result as the preceding one can be obtained by using sum directly and passing the level as a parameter:

In [142]: goalStatsDF2.sum(level='Stat')
Out[142]:                EPL   La Liga  Serie A  Bundesliga
          GoalsScored    1063  1133     1003      898
          MatchesPlayed   380   380      380      306

Now, let us obtain a key statistic to determine how exciting the season was in each of the leagues: the goals per game ratio:

In [174]: totalsDF=grouped2.sum()
In [175]: totalsDF.ix['GoalsScored']/totalsDF.ix['MatchesPlayed']
Out[175]: EPL           2.797368
          La Liga       2.981579
          Serie A       2.639474
          Bundesliga    2.934641
          dtype: float64

This is returned as a Series, as shown in the preceding command. We can now display the goals per game ratio along with the goals scored and matches played to give a summary of how exciting the league was, as follows:

1. Obtain the goals per game data as a DataFrame. Note that we have to transpose it, since gpg is returned as a Series:

   In [234]: gpg=totalsDF.ix['GoalsScored']/totalsDF.ix['MatchesPlayed']
             goalsPerGameDF=pd.DataFrame(gpg).T
   In [235]: goalsPerGameDF
   Out[235]:    EPL       La Liga   Serie A   Bundesliga
             0  2.797368  2.981579  2.639474  2.934641

2. Reindex the goalsPerGameDF DataFrame so that the 0 index is replaced by GoalsPerGame:

   In [207]: goalsPerGameDF=goalsPerGameDF.rename(index={0:'GoalsPerGame'})
   In [208]: goalsPerGameDF
   Out[208]:               EPL       La Liga   Serie A   Bundesliga
             GoalsPerGame  2.797368  2.981579  2.639474  2.934641

3. Append the goalsPerGameDF DataFrame to the original one:

   In [211]: pd.options.display.float_format='{:.2f}'.format
             totalsDF.append(goalsPerGameDF)
   Out[211]:                EPL      La Liga  Serie A  Bundesliga
             GoalsScored    1063.00  1133.00  1003.00   898.00
             MatchesPlayed   380.00   380.00   380.00   306.00
             GoalsPerGame      2.80     2.98     2.64     2.93

[Figure: a graph showing goals per match for the European leagues discussed, from 1955 to 2012; source: http://mattstil.es/images/europe-football.png]
In the case of a grouped Series, we return to the nationsGrp example and compute some statistics on the attendance figures for the country of the tournament winners: In [297]: nationsGrp ['Attendance'].agg({'Total':np.sum, 'Average':np. mean, 'Deviation':np.std}) Out[297]: Nation England [ 111 ] Operations in pandas, Part II – Grouping, Merging, and Reshaping of Data Germany Netherlands 16048.58 For a grouped Series, we can pass a list or dict of functions. In the preceding case, a dict was specified and the key values were used for the names of the columns in the resulting DataFrame. Note that in the case of groups of a single sample size, the standard deviation is undefined and NaN is the result—for example, Romania. The transform() method The groupby-transform function is used to perform transformation operations on a groupby object. For example, we could replace NaN values in the groupby object using the fillna method. The resulting object after using transform has the same size as the original groupby object. Let us consider a DataFrame showing the goals scored for each month in the four soccer leagues: In[344]: goalStatsDF3= pd.read_csv('./goal_stats_euro_leagues_2012-13. csv') goalStatsDF3=goalStatsDF3.set_index(['Month']) goalsScoredDF=goalStatsDF3.ix[goalStatsDF3['Stat']=='GoalsScored'] goalsScoredDF.iloc[:,1:] Out La Liga Serie A Month 08/01/2012 101 106 104 [ 112 ] Chapter 5 05/01/2013 We can see that for June 2013, the only league for which matches were played was La Liga, resulting in the NaN values for the other three leagues. Let us group the data by year: In [336]: goalsScoredPerYearGrp=goalsScoredDF.groupby(lambda Month: Month.split('/')[2]) goalsScoredPerYearGrp.mean() Out[336]: La Liga Serie A The preceding function makes use of a lambda function to obtain the year by splitting the Month variable on the / character and taking the third element of the resulting list. If we do a count of the number of months per year during which matches were held in the various leagues, we have: In [331]: goalsScoredPerYearGrp.count() Out[331]: La Liga Serie A It is often undesirable to display data with missing values and one common method to resolve this situation would be to replace the missing values with the group mean. This can be achieved using the transform-groupby function. First, we must define the transformation using a lambda function and then apply this transformation using the transform method: In [338]: fill_fcn = lambda x: x.fillna (x.mean()) trans = goalsScoredPerYearGrp.transform(fill_fcn) tGroupedStats = trans.groupby(lambda Month: [2]) tGroupedStats.mean() Out[338]: La Liga Serie A [ 113 ] Operations in pandas, Part II – Grouping, Merging, and Reshaping of Data One thing to note from the preceding results is that replacing the NaN values with the group mean in the original group, keeps the group means unchanged in the transformed data. However, when we do a count on the transformed group, we see that the number of matches has changed from five to six for the EPL, Serie A, and Bundesliga: In [339]: tGroupedStats.count() Out[339]: La Liga Serie A The filter method enables us to apply filtering on a groupby object that results in a subset of the initial object. 
The filter() method
The filter method enables us to apply filtering on a groupby object, resulting in a subset of the initial object. Here, we illustrate how to display the months of the season in which more than 100 goals were scored in each of the four leagues:

In [391]: goalsScoredDF.groupby(level='Month').filter(
              lambda x: np.all([x[col] > 100
                                for col in goalsScoredDF.columns]))

The result retains only the months, such as 09/01/2012, in which every league scored more than 100 goals. Note the use of the np.all operator to ensure that the constraint is enforced for all the columns.

Merging and joining
There are various functions that can be used to merge and join pandas' data structures, which include the following:

• concat
• append
Here is another illustration of concat, but this time, it is on random statistical distributions. Note that in the absence of an axis argument, the default axis of concatenation is 0:

In [135]: np.random.seed(100)
          normDF=pd.DataFrame(np.random.randn(3,4)); normDF
Out[135]:
          0         1         2         3
0  ...
1  ...
2  ...

In [136]: binomDF=pd.DataFrame(np.random.binomial(100,0.5,(3,4))); binomDF
Out[136]:
   0   1   2   3
...

In [137]: poissonDF=pd.DataFrame(np.random.poisson(100,(3,4))); poissonDF
Out[137]:
   0   1   2   3
...

In [138]: rand_distribs=[normDF,binomDF,poissonDF]

In [140]: rand_distribsDF=pd.concat(rand_distribs, keys=['Normal','Binomial','Poisson']); rand_distribsDF
Out[140]:
              0    1    2    3
Normal    0  ...
          1  ...
          2  ...
Binomial  0  ...
...
Poisson   ...

Using append

The append function is a simpler version of concat that concatenates along axis=0. Here is an illustration of its use, where we slice out the first two rows and first three columns of the stockData DataFrame:

In [145]: stockDataA=stockDataDF.ix[:2,:3]
          stockDataA
Out[145]:
        Closing price  EPS  Shares Outstanding(M)
Symbol
AAPL    ...            ...  892.45
AMZN    ...            ...  459.00

And the remaining rows:

In [147]: stockDataB=stockDataDF[2:]
          stockDataB
Out[147]:
        Closing price  EPS  Shares Outstanding(M)  P/E  Market Cap(B)  Beta
Symbol
FB      ...
...

Now, we use append to combine the two data frames from the preceding commands:

In [161]: stockDataA.append(stockDataB)
Out[161]:
        Beta  Closing price  EPS  Market Cap(B)  P/E  Shares Outstanding(M)
Symbol
AMZN    ...
...

Note that the columns of the result are sorted lexicographically. In order to maintain an order of columns similar to the original DataFrame, we can apply the reindex_axis function:

In [151]: stockDataA.append(stockDataB).reindex_axis(stockDataDF.columns, axis=1)
Out[151]:
        Closing price  EPS  Shares Outstanding(M)  P/E  Market Cap(B)  Beta
Symbol
AAPL    ...            ...  ...                    NaN  NaN            NaN
...

Note that for the first two rows, the values of the remaining columns are NaN, since the first DataFrame contained only the first three columns. The append function does not work in place; it returns a new DataFrame with the second DataFrame appended to the first.

Appending a single row to a DataFrame

We can append a single row to a DataFrame by passing a series or dictionary to the append method:

In [152]: algos={'search':['DFS','BFS','Binary Search','Linear'],
                 'sorting':['Quicksort','Mergesort','Heapsort','Bubble Sort'],
                 'machine learning':['RandomForest','K Nearest Neighbor','Logistic Regression','K-Means Clustering']}
          algoDF=pd.DataFrame(algos); algoDF
Out[152]:
   machine learning     search         sorting
0  RandomForest         DFS            Quicksort
1  K Nearest Neighbor   BFS            Mergesort
2  Logistic Regression  Binary Search  Heapsort
3  K-Means Clustering   Linear         Bubble Sort

In [154]: moreAlgos={'search': 'ShortestPath', 'sorting': 'Insertion Sort', 'machine learning': 'Linear Regression'}
          algoDF.append(moreAlgos, ignore_index=True)
Out[154]:
   machine learning     search         sorting
0  RandomForest         DFS            Quicksort
1  K Nearest Neighbor   BFS            Mergesort
2  Logistic Regression  Binary Search  Heapsort
3  K-Means Clustering   Linear         Bubble Sort
4  Linear Regression    ShortestPath   Insertion Sort

In order for this to work, you must pass the ignore_index=True argument so that the index [0,1,2,3] in algoDF is ignored.

SQL-like merging/joining of DataFrame objects

The merge function is used to obtain joins of two DataFrame objects similar to those used in SQL database queries. The DataFrame objects are analogous to SQL tables.
The following command explains this:

merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=True, suffixes=('_x', '_y'), copy=True)

The following is the synopsis of the merge function:

• The left argument: This is the first DataFrame object.
• The right argument: This is the second DataFrame object.
• The how argument: This is the type of join and can be inner, outer, left, or right. The default is inner.
• The on argument: This shows the names of the columns to join on as join keys.
• The left_on and right_on arguments: These show the left and right DataFrame column names to join on.
• The left_index and right_index arguments: These have a Boolean value. If this is True, use the left or right DataFrame index/row labels to join on.
• The sort argument: This has a Boolean value. The default True setting results in a lexicographical sorting. Setting it to False may improve performance.
• The suffixes argument: The tuple of string suffixes to be applied to overlapping columns. The defaults are '_x' and '_y'.
• The copy argument: The default True value causes data to be copied from the passed DataFrame objects.

The source of the preceding information is the pandas documentation at http://pandas.pydata.org/pandas-docs/stable/merging.html.

Let us start to examine the use of merge by reading the U.S. stock index data into a DataFrame:

In [254]: USIndexDataDF=pd.read_csv('./us_index_data.csv')
          USIndexDataDF
Out[254]:
  TradingDate  Nasdaq  S&P 500  Russell 2000  DJIA
...

The source of this information can be found at http://finance.yahoo.com.

We can obtain slice1 of the data for rows 0 and 1 and the Nasdaq and S&P 500 columns by using the following command:

In [255]: slice1=USIndexDataDF.ix[:1,:3]
          slice1
Out[255]:
  TradingDate  Nasdaq   S&P 500
0 2014/01/30   ...      1794.19
1 2014/01/31   4103.88  1782.59

We can obtain slice2 of the data for rows 0 and 1 and the Russell 2000 and DJIA columns by using the following command:

In [256]: slice2=USIndexDataDF.ix[:1,[0,3,4]]
          slice2
Out[256]:
  TradingDate  Russell 2000  DJIA
0 2014/01/30   1139.36       15848.61
1 2014/01/31   1130.88       15698.85

We can obtain slice3 of the data for rows 1 and 2 and the Nasdaq and S&P 500 columns by using the following command:

In [248]: slice3=USIndexDataDF.ix[[1,2],:3]
          slice3
Out[248]:
  TradingDate  Nasdaq   S&P 500
1 2014/01/31   4103.88  1782.59
2 2014/02/03   3996.96  ...

We can now merge slice1 and slice2 as follows:

In [257]: pd.merge(slice1,slice2)
Out[257]:
  TradingDate  Nasdaq   S&P 500  Russell 2000  DJIA
0 2014/01/30   ...      1794.19  1139.36       15848.61
1 2014/01/31   4103.88  1782.59  1130.88       15698.85

As you can see, this results in a combination of the columns in slice1 and slice2. Since the on argument was not specified, the intersection of the columns in slice1 and slice2 was used, which is TradingDate, as the join column, and the rest of the columns from slice1 and slice2 were used to produce the output. Note that in this case, passing a value for how has no effect on the result, since the values of the TradingDate join key match for slice1 and slice2.

We now merge slice3 and slice2, specifying inner as the value of the how argument:

In [258]: pd.merge(slice3,slice2,how='inner')
Out[258]:
  TradingDate  Nasdaq   S&P 500  Russell 2000  DJIA
0 2014/01/31   4103.88  1782.59  1130.88       15698.85

The slice3 argument has the unique values 2014/01/31 and 2014/02/03 for TradingDate, and slice2 has the unique values 2014/01/30 and 2014/01/31 for TradingDate. The merge function uses the intersection of these values, which is 2014/01/31. This results in a single-row result.
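The same join semantics can be tried without the us_index_data.csv file by building the slices by hand from the index values quoted above. This is a minimal sketch:

import pandas as pd

slice_a = pd.DataFrame({'TradingDate': ['2014/01/31', '2014/02/03'],
                        'Nasdaq': [4103.88, 3996.96]})
slice_b = pd.DataFrame({'TradingDate': ['2014/01/30', '2014/01/31'],
                        'DJIA': [15848.61, 15698.85]})

# Inner join on the shared TradingDate column: only 2014/01/31 is in both
print(pd.merge(slice_a, slice_b, how='inner'))

# Outer join: the union of the keys; unmatched cells become NaN
print(pd.merge(slice_a, slice_b, how='outer'))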
Here, we specify outer as the value of the how argument:

In [269]: pd.merge(slice3,slice2,how='outer')
Out[269]:
  TradingDate  Nasdaq   S&P 500  Russell 2000  DJIA
0 2014/01/31   4103.88  1782.59  1130.88       15698.85
1 2014/02/03   3996.96  ...      NaN           NaN
2 2014/01/30   NaN      NaN      1139.36       15848.61

Specifying outer uses all the keys (union) from both DataFrames, which gives the three rows specified in the preceding output. Since not all the columns are present in the two DataFrames, the columns from the other DataFrame are NaN for each row that is not part of the intersection.

Now, we specify how='left', as shown in the following command:

In [271]: pd.merge(slice3,slice2,how='left')
Out[271]:
  TradingDate  Nasdaq   S&P 500  Russell 2000  DJIA
0 2014/01/31   4103.88  1782.59  1130.88       15698.85
1 2014/02/03   3996.96  ...      NaN           NaN

Here, we see that the keys from the left DataFrame, slice3, are used for the output. For columns that are not available in slice3, that is, Russell 2000 and DJIA, NaN is used for the row with TradingDate as 2014/02/03. This is equivalent to a SQL left outer join.

We specify how='right' in the following command:

In [270]: pd.merge(slice3,slice2,how='right')
Out[270]:
  TradingDate  Nasdaq   S&P 500  Russell 2000  DJIA
0 2014/01/31   4103.88  1782.59  1130.88       15698.85
1 2014/01/30   NaN      NaN      1139.36       15848.61

This is the corollary of how='left': the keys from the right DataFrame, slice2, are used. Therefore, rows with TradingDate as 2014/01/31 and 2014/01/30 are in the result. For columns that are not in slice2, Nasdaq and S&P 500, NaN is used. This is equivalent to a SQL right outer join. For a simple explanation of how SQL joins work, please refer to http://bit.ly/1yqR9vw.

The join function

The DataFrame.join function is used to combine two DataFrames that have different columns with nothing in common. Essentially, this joins the two DataFrames side by side on their indexes. Here is an example:

In [274]: slice_NASD_SP=USIndexDataDF.ix[:3,:3]
          slice_NASD_SP
Out[274]:
  TradingDate  Nasdaq   S&P 500
0 2014/01/30   ...      1794.19
1 2014/01/31   4103.88  1782.59
2 2014/02/03   3996.96  ...
3 ...

In [275]: slice_Russ_DJIA=USIndexDataDF.ix[:3,3:]
          slice_Russ_DJIA
Out[275]:
  Russell 2000  DJIA
0 1139.36       15848.61
1 1130.88       15698.85
2 ...
3 ...

Here, we call the join operator, as follows:

In [276]: slice_NASD_SP.join(slice_Russ_DJIA)
Out[276]:
  TradingDate  Nasdaq  S&P 500  Russell 2000  DJIA
...

In this case, we see that the result is a combination of the columns from the two DataFrames. Let us see what happens when we try to use join with two DataFrames that have a column in common:

In [272]: slice1.join(slice2)
------------------------------------------------------------
Exception                  Traceback (most recent call last)
...
Exception: columns overlap: Index([u'TradingDate'], dtype=object)

This results in an exception due to overlapping columns. You can find more information on using merge, concat, and join operations in the official documentation page at http://pandas.pydata.org/

Pivots and reshaping data

This section deals with how you can reshape data. Sometimes, data is stored in what is known as the stacked format. Here is an example of stacked data using the PlantGrowth dataset:

In [344]: plantGrowthRawDF=pd.read_csv('./PlantGrowth.csv')
          plantGrowthRawDF
Out[344]:
    observation  weight  group
0   ...
...

This data consists of results from an experiment to compare the dried weight yields of plants that were obtained under a control (ctrl) and two different treatment conditions (trt1, trt2). Suppose we wanted to do some analysis of this data by their group value. One way to do this would be to use a logical filter on the data frame:

In [346]: plantGrowthRawDF[plantGrowthRawDF['group']=='ctrl']
Out[346]:
    observation  weight  group
...

This can be tedious, so we would instead like to pivot/unstack this data and display it in a form that is more conducive to analysis.
We can do this using the DataFrame.pivot function as follows:

In [345]: plantGrowthRawDF.pivot(index='observation', columns='group', values='weight')
Out[345]:
group        ctrl  trt1  trt2
observation
1            4.17  4.81  6.31
...

Here, a DataFrame is created with columns corresponding to the different values of group or, in statistical parlance, the levels of the factor. The same result can be achieved via the pandas pivot_table function, as follows:

In [427]: pd.pivot_table(plantGrowthRawDF, values='weight', rows='observation', cols=['group'])
Out[427]:
group        ctrl  trt1  trt2
observation
1            4.17  4.81  6.31
...

The key difference between the pivot and pivot_table functions is that pivot_table allows the user to specify an aggregate function over which the values can be aggregated. So, for example, if we wish to obtain the mean for each group over the 10 observations, we would do the following, which results in a Series:

In [430]: pd.pivot_table(plantGrowthRawDF, values='weight', cols=['group'], aggfunc=np.mean)
Out[430]: group
          ctrl    5.032
          trt1    4.661
          trt2    5.526
          Name: weight, dtype: float64

The full synopsis of pivot_table is available at http://bit.ly/1QomJ5A. You can find more information and examples on its usage at http://bit.ly/1BYGsNn and https://www.youtube.com/watch?v=

Stacking and unstacking

In addition to the pivot functions, the stack and unstack functions are also available on Series and DataFrames; they work on objects containing MultiIndexes.

The stack() function

First, we set the group and observation column values to be the components of the row index, which results in a MultiIndex:

In [349]: plantGrowthStackedDF=plantGrowthRawDF.set_index(['group','observation'])
          plantGrowthStackedDF
Out[349]:
                    weight
group  observation
ctrl   1            4.17
...
trt1   ...
...
trt2   ...

Here, we see that the row index consists of a MultiIndex on group and observation, with the weight column as the data value. Now, let us see what happens if we apply unstack to the group level:

In [351]: plantGrowthStackedDF.unstack(level='group')
Out[351]:
            weight
group         ctrl  trt1  trt2
observation
1             4.17  4.81  6.31
2             5.58  4.17  5.12
3             5.18  4.41  5.54
4             6.11  3.59  5.50
5             4.50  5.87  5.37
6             4.61  3.83  5.29
7             5.17  6.03  4.92
8             4.53  4.89  6.15
9             5.33  4.32  5.80
10            5.14  4.69  5.26

The following call is equivalent to the preceding one: plantGrowthStackedDF.unstack(level=0). Here, we can see that the DataFrame is pivoted and group has changed from a row index (headers) to a column index (headers), resulting in a more compact-looking DataFrame.
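The same pivot from row index to column index can be reproduced on a small frame without the PlantGrowth CSV. The following is a minimal sketch with made-up weights, mirroring the unstack/stack calls used above:

import pandas as pd

df = pd.DataFrame({'group': ['ctrl', 'ctrl', 'trt1', 'trt1'],
                   'observation': [1, 2, 1, 2],
                   'weight': [4.2, 5.6, 4.8, 4.2]})
stacked = df.set_index(['group', 'observation'])

# Move the 'group' level out of the row MultiIndex into the columns
wide = stacked.unstack(level='group')
print(wide)

# stack() reverses the operation, but places 'group' as the lowest
# level of the resulting row MultiIndex
print(wide.stack('group'))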
To understand what's going on in more detail, note that we have a MultiIndex as a row index initially on group and observation:

In [356]: plantGrowthStackedDF.index
Out[356]: MultiIndex
          [(u'ctrl', 1), (u'ctrl', 2), (u'ctrl', 3), (u'ctrl', 4), (u'ctrl', 5),
           (u'ctrl', 6), (u'ctrl', 7), (u'ctrl', 8), (u'ctrl', 9), (u'ctrl', 10),
           (u'trt1', 1), (u'trt1', 2), (u'trt1', 3), (u'trt1', 4), (u'trt1', 5),
           (u'trt1', 6), (u'trt1', 7), (u'trt1', 8), (u'trt1', 9), (u'trt1', 10),
           (u'trt2', 1), (u'trt2', 2), (u'trt2', 3), (u'trt2', 4), (u'trt2', 5),
           (u'trt2', 6), (u'trt2', 7), (u'trt2', 8), (u'trt2', 9), (u'trt2', 10)]

In [355]: plantGrowthStackedDF.columns
Out[355]: Index([u'weight'], dtype=object)

The unstacking operation removes group from the row index, changing it into a single-level index:

In [357]: plantGrowthStackedDF.unstack(level='group').index
Out[357]: Int64Index([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=int64)

The MultiIndex is now on the columns:

In [352]: plantGrowthStackedDF.unstack(level='group').columns
Out[352]: MultiIndex
          [(u'weight', u'ctrl'), (u'weight', u'trt1'), (u'weight', u'trt2')]

Let us see what happens when we call the reverse operation, stack:

In [366]: plantGrowthStackedDF.unstack(level=0).stack('group')
Out[366]:
                    weight
observation  group
1            ...
...
10           ...

Here, we see that what we get isn't the original stacked DataFrame, since the stacked level, that is, 'group', becomes the new lowest level in the row MultiIndex. In the original stacked DataFrame, group was the highest level. The following sequence of calls to stack and unstack, however, is exactly reversible. The unstack() function by default unstacks the last level, which is observation, as shown here:

In [370]: plantGrowthStackedDF.unstack()
Out[370]:
            weight
observation      1     2     3     4     5     6     7     8     9    10
group
ctrl          4.17  5.58  5.18  6.11  4.50  4.61  5.17  4.53  5.33  5.14
trt1          4.81  4.17  4.41  3.59  5.87  3.83  6.03  4.89  4.32  4.69
trt2          6.31  5.12  5.54  5.50  5.37  5.29  4.92  6.15  5.80  5.26

The stack() function by default sets the stacked level as the lowest level in the resulting row MultiIndex:

In [369]: plantGrowthStackedDF.unstack().stack()
Out[369]:
                    weight
group  observation
ctrl   1            ...
...
trt1   ...
...
trt2   ...

Other methods to reshape DataFrames

There are various other methods that are related to reshaping DataFrames; we'll discuss them here.

Using the melt function

The melt function enables us to transform a DataFrame by designating some of its columns as ID columns. This ensures that they will always stay as columns after any pivoting transformations. The remaining non-ID columns can be treated as variables and can be pivoted to become part of a name-value two-column scheme. ID columns uniquely identify a row in the DataFrame.

The names of those non-ID columns can be customized by supplying the var_name and value_name parameters. The use of melt is perhaps best illustrated by an example, as follows:

In [385]: from pandas.core.reshape import melt

In [401]: USIndexDataDF[:2]
Out[401]:
  TradingDate  Nasdaq   S&P 500  Russell 2000  DJIA
0 2014/01/30   ...      1794.19  1139.36       15848.61
1 2014/01/31   4103.88  1782.59  1130.88       15698.85

In [402]: melt(USIndexDataDF[:2], id_vars=['TradingDate'], var_name='Index Name', value_name='Index Value')
Out[402]:
  TradingDate  Index Name    Index Value
0 2014/01/30   Nasdaq        ...
1 2014/01/31   Nasdaq        4103.88
2 2014/01/30   S&P 500       1794.19
3 2014/01/31   S&P 500       1782.59
4 2014/01/30   Russell 2000  1139.36
5 2014/01/31   Russell 2000  1130.88
6 2014/01/30   DJIA          15848.61
7 2014/01/31   DJIA          15698.85

The pandas.get_dummies() function

This function is used to convert a categorical variable into an indicator DataFrame, which is essentially a truth table of the possible values of the categorical variable.
An example of this is the following command:

In [408]: melted=melt(USIndexDataDF[:2], id_vars=['TradingDate'], var_name='Index Name', value_name='Index Value')
          melted
Out[408]:
  TradingDate  Index Name    Index Value
0 2014/01/30   Nasdaq        ...
1 2014/01/31   Nasdaq        4103.88
2 2014/01/30   S&P 500       1794.19
3 2014/01/31   S&P 500       1782.59
4 2014/01/30   Russell 2000  1139.36
5 2014/01/31   Russell 2000  1130.88
6 2014/01/30   DJIA          15848.61
7 2014/01/31   DJIA          15698.85

In [413]: pd.get_dummies(melted['Index Name'])
Out[413]:
   DJIA  Nasdaq  Russell 2000  S&P 500
0     0       1             0        0
1     0       1             0        0
2     0       0             0        1
3     0       0             0        1
4     0       0             1        0
5     0       0             1        0
6     1       0             0        0
7     1       0             0        0

The source of the PlantGrowth data used in the earlier examples can be found at http://vincentarelbundock.github.io/Rdatasets/csv/datasets/PlantGrowth.csv.

Summary

In this chapter, we saw that there are various ways to rearrange data in pandas. We can group data using the pandas.groupby operator and the associated methods on groupby objects. We can merge and join Series and DataFrame objects using the concat, append, merge, and join functions. Lastly, we can reshape and create pivot tables using the stack/unstack and pivot/pivot_table functions. This is very useful functionality for presenting data for visualization or preparing data for input into other programs or algorithms.

In the next chapter, we will examine some useful tasks in data analysis for which we can apply pandas, such as processing time series data and handling missing values in our data. To have more information on these topics in pandas, please take a look at the official documentation at http://pandas.pydata.org/pandas-docs/stable/.

Missing Data, Time Series, and Plotting Using Matplotlib

In this chapter, we take a tour of some topics that are necessary to develop expertise in using pandas. Knowledge of these topics is very useful for the preparation of data as input for programs or code that process data for analysis, prediction, or visualization. The topics that we'll discuss are as follows:

• Handling missing data
• Handling time series and dates
• Plotting using matplotlib

By the end of this chapter, the user should be proficient in these critical areas.

Handling missing data

Missing data refers to data points that show up as NULL or N/A in our datasets for some reason; for example, we may have a time series that spans all calendar days of the month that shows the closing price of a stock for each day, and the closing price for nonbusiness days would show up as missing. An example of corrupted data would be a financial dataset that shows the activity date of a transaction in the wrong format, for example, YYYY-MM-DD instead of YYYYMMDD, due to an error on the part of the data provider. In the case of pandas, missing values are generally represented by the NaN value.

Other than appearing natively in the source dataset, missing values can be added to a dataset by an operation such as reindexing, or changing frequencies in the case of a time series:

In [84]: import numpy as np
         import pandas as pd
         import matplotlib.pyplot as plt
         %matplotlib inline

In [85]: date_stngs = ['2014-05-01','2014-05-02',
                       '2014-05-05','2014-05-06','2014-05-07']
         tradeDates = pd.to_datetime(pd.Series(date_stngs))

In [86]: closingPrices=[531.35,527.93,527.81,515.14,509.96]

In [87]: googClosingPrices=pd.DataFrame(data=closingPrices,
                                        columns=['closingPrice'],
                                        index=tradeDates)
         googClosingPrices
Out[87]:
             closingPrice
tradeDates
2014-05-01   531.35
2014-05-02   527.93
2014-05-05   527.81
2014-05-06   515.14
2014-05-07   509.96

5 rows × 1 columns

The source of the preceding data can be found at http://yhoo.it/1dmJqW6.
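As noted above, reindexing is one operation that introduces missing values. The following is a minimal, self-contained sketch of this effect using the same closing prices:

import pandas as pd

prices = pd.DataFrame({'closingPrice': [531.35, 527.93, 527.81, 515.14, 509.96]},
                      index=pd.to_datetime(['2014-05-01', '2014-05-02',
                                            '2014-05-05', '2014-05-06',
                                            '2014-05-07']))

# Reindexing over every calendar day introduces NaN for the weekend
# dates (2014-05-03 and 2014-05-04) absent from the original index
all_days = pd.date_range('2014-05-01', '2014-05-07', freq='D')
print(prices.reindex(all_days))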
pandas also provides an API to read stock data from various data providers, such as Yahoo:

In [29]: import pandas.io.data as web

In [32]: import datetime
         googPrices = web.get_data_yahoo("GOOG",
                        start=datetime.datetime(2014, 5, 1),
                        end=datetime.datetime(2014, 5, 7))

In [38]: googFinalPrices=pd.DataFrame(googPrices['Close'], index=tradeDates)

In [39]: googFinalPrices
Out[39]:
            Close
2014-05-01  531.35
2014-05-02  527.93
2014-05-05  527.81
2014-05-06  515.14
2014-05-07  509.96

For more details, refer to http://pandas.pydata.org/pandas-docs/stable/remote_data.html.

We now have a time series that depicts the closing price of Google's stock from May 1, 2014 to May 7, 2014, with gaps in the date range since trading only occurs on business days. If we want to change the date range so that it shows calendar days (that is, along with the weekend), we can change the frequency of the time series index from business days to calendar days as follows:

In [90]: googClosingPricesCDays=googClosingPrices.asfreq('D')
         googClosingPricesCDays
Out[90]:
            closingPrice
2014-05-01  531.35
2014-05-02  527.93
2014-05-03  NaN
2014-05-04  NaN
2014-05-05  527.81
2014-05-06  515.14
2014-05-07  509.96

7 rows × 1 columns

Note that we have now introduced NaN values for the closingPrice for the weekend dates of May 3, 2014 and May 4, 2014. We can check which values are missing by using the isnull and notnull functions as follows:

In [17]: googClosingPricesCDays.isnull()
Out[17]:
            closingPrice
2014-05-01  False
2014-05-02  False
2014-05-03  True
2014-05-04  True
2014-05-05  False
2014-05-06  False
2014-05-07  False

7 rows × 1 columns

In [18]: googClosingPricesCDays.notnull()
Out[18]:
            closingPrice
2014-05-01  True
2014-05-02  True
2014-05-03  False
2014-05-04  False
2014-05-05  True
2014-05-06  True
2014-05-07  True

7 rows × 1 columns

A Boolean DataFrame is returned in each case. In datetime and pandas Timestamps, missing values are represented by the NaT value. This is the equivalent of NaN in pandas for time-based types.

In [27]: tDates=tradeDates.copy()
         tDates[1]=np.NaN
         tDates[4]=np.NaN

In [28]: tDates
Out[28]: 0   2014-05-01
         1          NaT
         2   2014-05-05
         3   2014-05-06
         4          NaT
         Name: tradeDates, dtype: datetime64[ns]

In [4]: FBVolume=[82.34,54.11,45.99,55.86,78.5]
        TWTRVolume=[15.74,12.71,10.39,134.62,68.84]

In [5]: socialTradingVolume=pd.concat([pd.Series(FBVolume),
                                       pd.Series(TWTRVolume),
                                       tradeDates], axis=1,
                                      keys=['FB','TWTR','TradeDate'])
        socialTradingVolume
Out[5]:
   FB     TWTR    TradeDate
0  82.34  15.74   2014-05-01
1  54.11  12.71   2014-05-02
2  45.99  10.39   2014-05-05
3  55.86  134.62  2014-05-06
4  78.50  68.84   2014-05-07

5 rows × 3 columns

In [6]: socialTradingVolTS=socialTradingVolume.set_index('TradeDate')
        socialTradingVolTS
Out[6]:
            FB     TWTR
TradeDate
2014-05-01  82.34  15.74
2014-05-02  54.11  12.71
2014-05-05  45.99  10.39
2014-05-06  55.86  134.62
2014-05-07  78.50  68.84

5 rows × 2 columns

In [7]: socialTradingVolTSCal=socialTradingVolTS.asfreq('D')
        socialTradingVolTSCal
Out[7]:
            FB     TWTR
2014-05-01  82.34  15.74
2014-05-02  54.11  12.71
2014-05-03  NaN    NaN
2014-05-04  NaN    NaN
2014-05-05  45.99  10.39
2014-05-06  55.86  134.62
2014-05-07  78.50  68.84

7 rows × 2 columns

We can perform arithmetic operations on data containing missing values. For example, we can calculate the total trading volume (in millions of shares) across the two stocks for Facebook and Twitter as follows:

In [8]: socialTradingVolTSCal['FB']+socialTradingVolTSCal['TWTR']
Out[8]: 2014-05-01     98.08
        2014-05-02     66.82
        2014-05-03       NaN
        2014-05-04       NaN
        2014-05-05     56.38
        2014-05-06    190.48
        2014-05-07    147.34
        Freq: D, dtype: float64

By default, any operation performed on an object that contains missing values will return a missing value at that position, as shown in the following command:

In [12]: pd.Series([1.0,np.NaN,5.9,6])+pd.Series([3,5,2,5.6])
Out[12]: 0     4.0
         1     NaN
         2     7.9
         3    11.6
         dtype: float64

In [13]: pd.Series([1.0,25.0,5.5,6])/pd.Series([3,np.NaN,2,5.6])
Out[13]: 0    0.333333
         1         NaN
         2    2.750000
         3    1.071429
         dtype: float64

There is a difference, however, in the way NumPy treats aggregate calculations versus what pandas does. In pandas, the default is to skip the missing values and perform the aggregate calculation over the remaining values, whereas for NumPy, NaN is returned if any of the values are missing.
Here is an illustration:

In [15]: np.mean([1.0,np.NaN,5.9,6])
Out[15]: nan

In [16]: np.sum([1.0,np.NaN,5.9,6])
Out[16]: nan

However, if this data is in a pandas Series, we will get the following output:

In [17]: pd.Series([1.0,np.NaN,5.9,6]).sum()
Out[17]: 12.9

In [18]: pd.Series([1.0,np.NaN,5.9,6]).mean()
Out[18]: 4.3

It is important to be aware of this difference in behavior between pandas and NumPy. However, if we wish to get NumPy to behave the same way as pandas, we can use the np.nanmean and np.nansum functions, which are illustrated as follows:

In [41]: np.nanmean([1.0,np.NaN,5.9,6])
Out[41]: 4.2999999999999998

In [43]: np.nansum([1.0,np.NaN,5.9,6])
Out[43]: 12.9

For more information on the NumPy np.nan* aggregation functions, refer to the NumPy documentation.

Handling missing values

There are various ways to handle missing values, which are as follows:

1. By using the fillna() function to fill in the NA values. This is an example:

In [19]: socialTradingVolTSCal
Out[19]:
            FB     TWTR
2014-05-01  82.34  15.74
2014-05-02  54.11  12.71
2014-05-03  NaN    NaN
2014-05-04  NaN    NaN
2014-05-05  45.99  10.39
2014-05-06  55.86  134.62
2014-05-07  78.50  68.84

7 rows × 2 columns

In [20]: socialTradingVolTSCal.fillna(100)
Out[20]:
            FB      TWTR
2014-05-01  82.34   15.74
2014-05-02  54.11   12.71
2014-05-03  100.00  100.00
2014-05-04  100.00  100.00
2014-05-05  45.99   10.39
2014-05-06  55.86   134.62
2014-05-07  78.50   68.84

7 rows × 2 columns

We can also fill forward or backward values using the ffill or bfill arguments:

In [23]: socialTradingVolTSCal.fillna(method='ffill')
Out[23]:
            FB     TWTR
2014-05-01  82.34  15.74
2014-05-02  54.11  12.71
2014-05-03  54.11  12.71
2014-05-04  54.11  12.71
2014-05-05  45.99  10.39
2014-05-06  55.86  134.62
2014-05-07  78.50  68.84

7 rows × 2 columns

In [24]: socialTradingVolTSCal.fillna(method='bfill')
Out[24]:
            FB     TWTR
2014-05-01  82.34  15.74
2014-05-02  54.11  12.71
2014-05-03  45.99  10.39
2014-05-04  45.99  10.39
2014-05-05  45.99  10.39
2014-05-06  55.86  134.62
2014-05-07  78.50  68.84

7 rows × 2 columns

The pad method is an alternative name for ffill. For more details, you can go to http://bit.ly/1f4jvDq.

2. By using the dropna() function to drop/delete rows and columns with missing values. The following is an example of this:

In [21]: socialTradingVolTSCal.dropna()
Out[21]:
            FB     TWTR
2014-05-01  82.34  15.74
2014-05-02  54.11  12.71
2014-05-05  45.99  10.39
2014-05-06  55.86  134.62
2014-05-07  78.50  68.84

5 rows × 2 columns

3. We can also interpolate and fill in the missing values by using the interpolate() function, as explained in the following commands:

In [27]: pd.set_option('display.precision',4)
         socialTradingVolTSCal.interpolate()
Out[27]:
            FB       TWTR
2014-05-01  82.34    15.74
2014-05-02  54.11    12.71
2014-05-03  51.4033  11.9367
2014-05-04  48.6967  11.1633
2014-05-05  45.99    10.39
2014-05-06  55.86    134.62
2014-05-07  78.50    68.84

7 rows × 2 columns

The interpolate() function also takes a method argument that denotes the interpolation method. These methods include linear, quadratic, cubic spline, and so on. You can obtain more information from the official documentation at http://pandas.pydata.org/pandas-docs/stable/missing_data.html#interpolation.

Handling time series

In this section, we show you how to handle time series data. We will start by showing how to create time series data using the data read in from a csv file.

Reading in time series data

Here, we demonstrate the various ways to read in time series data:

In [7]: ibmData=pd.read_csv('ibm-common-stock-closing-prices-1959_1960.csv')
        ibmData.head()
Out[7]:
   TradeDate   closingPrice
0  1959-06-29  448
...

5 rows × 2 columns

The source of this information can be found at http://datamarket.com.

We would like the TradeDate column to be a series of datetime values so that we can index it and create a time series. Let us first check the type of values in the TradeDate series:

In [16]: type(ibmData['TradeDate'])
Out[16]: pandas.core.series.Series

In [12]: type(ibmData['TradeDate'][0])
Out[12]: str

Next, we convert it to a Timestamp type:

In [17]: ibmData['TradeDate']=pd.to_datetime(ibmData['TradeDate'])
         type(ibmData['TradeDate'][0])
Out[17]: pandas.tslib.Timestamp

We can now use the TradeDate column as an index:

In [113]: # Convert DataFrame to TimeSeries
          # Resampling creates NaN rows for weekend dates, hence use dropna
          ibmTS=ibmData.set_index('TradeDate').resample('D')['closingPrice'].dropna()
          ibmTS
Out[113]: TradeDate
          1959-06-29    448
          1959-07-01    ...
          ...
          Name: closingPrice, Length: 255

DateOffset and Timedelta objects

A DateOffset object represents a change or offset in time. The key features of a DateOffset object are as follows:

• It can be added to/subtracted from a datetime object to obtain a shifted date
• It can be multiplied by an integer (positive or negative) so that the increment can be applied multiple times
• It has the rollforward and rollback methods to move a date forward to the next offset date or backward to the previous offset date

We illustrate how we use a DateOffset object as follows:

In [371]: xmasDay=pd.datetime(2014,12,25)
          xmasDay
Out[371]: datetime.datetime(2014, 12, 25, 0, 0)

In [373]: boxingDay=xmasDay+pd.DateOffset(days=1)
          boxingDay
Out[373]: Timestamp('2014-12-26 00:00:00', tz=None)

In [390]: today=pd.datetime.now()
          today
Out[390]: datetime.datetime(2014, 5, 31, 13, 7, 36, 440060)

Note that datetime.datetime is different from pd.Timestamp. The former is a Python class and is inefficient, while the latter is based on the numpy.datetime64 datatype. The pd.DateOffset object works with pd.Timestamp, and adding it to a datetime.datetime object casts that object into a pd.Timestamp object. The following illustrates the command for one week from today:

In [392]: today+pd.DateOffset(weeks=1)
Out[392]: Timestamp('2014-06-07 13:07:36.440060', tz=None)

The following illustrates the command for five years from today:

In [394]: today+2*pd.DateOffset(years=2, months=6)
Out[394]: Timestamp('2019-05-30 13:07:36.440060', tz=None)

Here is an example of using the rollforward functionality. QuarterBegin is a DateOffset object that is used to increment a given datetime object to the start of the next calendar quarter:

In [18]: lastDay=pd.datetime(2013,12,31)

In [24]: from pandas.tseries.offsets import QuarterBegin
         dtoffset=QuarterBegin()
         lastDay+dtoffset
Out[24]: Timestamp('2014-03-01 00:00:00', tz=None)

In [25]: dtoffset.rollforward(lastDay)
Out[25]: Timestamp('2014-03-01 00:00:00', tz=None)

Thus, we can see that the next quarter after December 31, 2013 starts on March 1, 2014. Timedeltas are similar to DateOffsets but work with datetime.datetime objects. The use of these is explained by the following commands:

In [40]: weekDelta=datetime.timedelta(weeks=1)
         weekDelta
Out[40]: datetime.timedelta(7)

In [39]: today=pd.datetime.now()
         today
Out[39]: datetime.datetime(2014, 6, 2, 3, 56, 0, 600309)

In [41]: today+weekDelta
Out[41]: datetime.datetime(2014, 6, 9, 3, 56, 0, 600309)

Time series-related instance methods

In this section, we explore various methods for Time Series objects, such as shifting, frequency conversion, and resampling.

Sometimes, we may wish to shift the values in a Time Series backward or forward in time. One possible scenario is when a dataset contains the list of start dates for last year's new employees in a firm, and the company's human resource program wishes to shift these dates forward by one year so that the employees' benefits can be activated. We can do this by using the shift() function as follows:

In [117]: ibmTS.shift(3)
Out[117]: TradeDate
          1959-06-29    NaN
          ...
          Name: closingPrice, ...

This shifts all the calendar days. However, if we wish to shift only business days, we must use the following command:

In [119]: ibmTS.shift(3, freq=pd.datetools.bday)
Out[119]: TradeDate
          1959-07-02    448
          ...

In the preceding snippet, we have specified the freq argument to shift; this tells the function to shift only the business days.
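The same behavior can be reproduced on a small synthetic series, without the IBM data. The following is a minimal sketch with made-up prices:

import pandas as pd

# Five business days of made-up prices
idx = pd.date_range('2014-06-02', periods=5, freq='B')
s = pd.Series([10, 11, 12, 13, 14], index=idx)

# Without freq, the values move relative to a fixed index,
# introducing NaN at the start of the series
print(s.shift(2))

# With freq, the index itself is shifted by two business days
# and no NaN is introduced
print(s.shift(2, freq='B'))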
The shift function has a freq argument whose value can be a DateOffset class, a timedelta-like object, or an offset alias. Thus, using ibmTS.shift(3, freq='B') would also produce the same result.

Frequency conversion

We can use the asfreq function to change frequencies, as explained:

In [131]: # Frequency conversion using asfreq
          ibmTS.asfreq('BM')
Out[131]: 1959-06-30    448
          1959-07-31    ...
          ...
          Freq: BM, Name: closingPrice, dtype: float64

In this case, we just obtain the values corresponding to the last business day of the month from the ibmTS time series. Here, BM stands for business month end frequency. For a list of all possible frequency aliases, go to http://bit.ly/1cMI3iA.

If we specify a frequency that is smaller than the granularity of the data, the gaps will be filled in with NaN values:

In [132]: ibmTS.asfreq('H')
Out[132]: 1959-06-29 00:00:00    448
          1959-06-29 01:00:00    NaN
          1959-06-29 02:00:00    NaN
          1959-06-29 03:00:00    NaN
          ...
          1960-06-29 23:00:00    NaN
          1960-06-30 00:00:00    ...
          Freq: H, Name: closingPrice, Length: 8809

We can also apply the asfreq method to Period and PeriodIndex objects, similar to how we do for the datetime and Timestamp objects. Period and PeriodIndex are introduced later and are used to represent time intervals.

The asfreq method accepts a method argument that allows you to forward fill (ffill) or back fill the gaps, similar to fillna:

In [140]: ibmTS.asfreq('H', method='ffill')
Out[140]: 1959-06-29 00:00:00    448
          1959-06-29 01:00:00    448
          1959-06-29 02:00:00    448
          1959-06-29 03:00:00    448
          ...
          1960-06-29 23:00:00    ...
          1960-06-30 00:00:00    ...
          Freq: H, Name: closingPrice, Length: 8809

Resampling of data

The TimeSeries.resample function enables us to summarize/aggregate more granular data based on a sampling interval and a sampling function. Downsampling is a term that originates from digital signal processing and refers to the process of reducing the sampling rate of a signal. In the case of data, we use it to reduce the amount of data that we wish to process. The opposite process is upsampling, which is used to increase the amount of data to be processed and requires interpolation to obtain the intermediate data points. For more information on downsampling and upsampling, refer to Practical Applications of Upsampling and Downsampling at http://bit.ly/1JC95HD and Downsampling Time Series for Visual Representation at http://bit.ly/1zrExVP.

Here, we examine some tick data for use in resampling. Before we examine the data, we need to prepare it. In doing so, we will learn some useful techniques for time series data, which are as follows:

• Epoch Timestamps
• Timezone handling

Here is an example that uses tick data for stock prices of Google for Tuesday, May 27, 2014:

In [150]: googTickData=pd.read_csv('./GOOG_tickdata_20140527.csv')

In [151]: googTickData.head()
Out[151]:
   Timestamp   close    high    low     open    volume
0  1401197402  555.008  556.41  554.35  556.38  81100
1  1401197460  556.250  556.30  555.25  555.25  18500
2  1401197526  556.730  556.75  556.05  556.39  9900
3  1401197582  557.480  557.67  556.73  556.73  14700
4  1401197642  558.155  558.66  557.48  557.59  15700

5 rows × 6 columns

The source for the preceding data can be found at http://bit.ly/1MKBwlB.

As you can see from the preceding section, we have a Timestamp column along with the columns for the close, high, low, and opening prices and the volume of trades of the Google stock. So, why does the Timestamp column seem a bit strange? Well, tick data Timestamps are generally expressed in epoch time (for more information, refer to http://en.wikipedia.org/wiki/Unix_epoch) as a more compact means of storage.
We'll need to convert this into a more human-readable time, and we can do this as follows:

In [201]: googTickData['tstamp']=pd.to_datetime(googTickData['Timestamp'], unit='s', utc=True)

In [209]: googTickData.head()
Out[209]:
   Timestamp   close    high    low     open    volume  tstamp
0  1401197402  555.008  556.41  554.35  556.38  81100   2014-05-27 13:30:02
1  1401197460  556.250  556.30  555.25  555.25  18500   2014-05-27 13:31:00
2  1401197526  556.730  556.75  556.05  556.39  9900    2014-05-27 13:32:06
3  1401197582  557.480  557.67  556.73  556.73  14700   2014-05-27 13:33:02
4  1401197642  558.155  558.66  557.48  557.59  15700   2014-05-27 13:34:02

5 rows × 7 columns

We would now like to make the tstamp column the index and eliminate the epoch Timestamp column:

In [210]: googTickTS=googTickData.set_index('tstamp')
          googTickTS=googTickTS.drop('Timestamp', axis=1)
          googTickTS.head()
Out[210]:
                      close    high    low     open    volume
tstamp
2014-05-27 13:30:02   555.008  556.41  554.35  556.38  81100
2014-05-27 13:31:00   556.250  556.30  555.25  555.25  18500
2014-05-27 13:32:06   556.730  556.75  556.05  556.39  9900
2014-05-27 13:33:02   557.480  557.67  556.73  556.73  14700
2014-05-27 13:34:02   558.155  558.66  557.48  557.59  15700

5 rows × 5 columns

Note that the tstamp index column has the times in UTC, and we can convert it to US/Eastern time using two operators, tz_localize and tz_convert:

In [211]: googTickTS.index=googTickTS.index.tz_localize('UTC').tz_convert('US/Eastern')

In [212]: googTickTS.head()
Out[212]:
                            close    high    low     open    volume
tstamp
2014-05-27 09:30:02-04:00   555.008  556.41  554.35  556.38  81100
2014-05-27 09:31:00-04:00   ...
2014-05-27 09:32:06-04:00   ...
2014-05-27 09:33:02-04:00   ...
2014-05-27 09:34:02-04:00   ...

5 rows × 5 columns

In [213]: googTickTS.tail()
Out[213]:
                            close   high    low      open     volume
tstamp
2014-05-27 15:56:00-04:00   ...     565.48  565.30   565.385  ...
2014-05-27 15:57:00-04:00   ...     565.46  565.20   565.400  ...
2014-05-27 15:58:00-04:00   ...     565.31  565.10   565.310  ...
2014-05-27 15:59:00-04:00   ...     566.00  565.08   565.230  ...
2014-05-27 16:00:00-04:00   ...     565.95  565.95   565.950  126000

5 rows × 5 columns

In [214]: len(googTickTS)
Out[214]: 390

From the preceding output, we can see ticks for every minute of the trading day, from 9:30 a.m., when the stock market opens, until 4:00 p.m., when it closes. This results in 390 rows in the dataset, since there are 390 minutes between 9:30 a.m. and 4:00 p.m.

Suppose we want to obtain a snapshot every 5 minutes instead of every minute. We can achieve this by using downsampling, as follows:

In [216]: googTickTS.resample('5Min').head(6)
Out[216]:
                            close      high       low        open       volume
2014-05-27 09:30:00-04:00   556.72460  557.15800  555.97200  556.46800  27980
2014-05-27 09:35:00-04:00   556.93648  557.64800  556.85100  557.34200  ...
2014-05-27 09:40:00-04:00   556.48600  556.79994  556.27700  556.60678  ...
2014-05-27 09:45:00-04:00   557.05300  557.27600  556.73800  556.96600  ...
2014-05-27 09:50:00-04:00   ...        ...        ...        ...        14560
2014-05-27 09:55:00-04:00   ...        ...        ...        ...        12400

6 rows × 5 columns

The default function used for resampling is the mean. However, we can also specify other functions, such as the minimum, and we can do this via the how parameter of resample:

In [245]: googTickTS.resample('10Min', how=np.min).head(4)
Out[245]:
                            close  high  low  open  volume
tstamp
2014-05-27 09:30:00-04:00   ...
2014-05-27 09:40:00-04:00   ...
2014-05-27 09:50:00-04:00   ...
2014-05-27 10:00:00-04:00   ...

Various function names can be passed to the how parameter, such as sum, ohlc, max, min, std, mean, median, first, and last. The ohlc function returns open-high-low-close values on time series data, that is, the first, maximum, minimum, and last values.
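The effect of downsampling can be tried on synthetic minute-level data without the tick file. The following is a minimal sketch with made-up prices; it is written with the method-chaining style of newer pandas, whereas in the older API used throughout this chapter the same call would be spelled resample('5T', how='ohlc'):

import numpy as np
import pandas as pd

# One made-up price per minute for 15 minutes
np.random.seed(0)
idx = pd.date_range('2014-05-27 09:30', periods=15, freq='T')
prices = pd.Series(555 + np.random.randn(15).cumsum(), index=idx)

# Downsample to 5-minute bars; ohlc gives the open, high, low,
# and close of each bar
bars = prices.resample('5T').ohlc()
print(bars)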
To specify whether the left or right interval is closed, we can pass the closed parameter as follows:

In [254]: pd.set_option('display.precision',5)
          googTickTS.resample('5Min', closed='right').tail(3)
Out[254]:
                            close  high  low  open  volume
tstamp
2014-05-27 15:45:00-04:00   ...                     12816.6667
2014-05-27 15:50:00-04:00   ...                     13325.0000
2014-05-27 15:55:00-04:00   ...                     40933.3333

3 rows × 5 columns

Thus, in the preceding command, we can see that the last row shows the tick at 15:55 instead of 16:00.

For upsampling, we need to specify a fill method to determine how the gaps should be filled, via the fill_method parameter:

In [263]: googTickTS[:3].resample('30s', fill_method='ffill')
Out[263]:
                            close  high  low  open  volume
2014-05-27 09:30:00-04:00   ...
2014-05-27 09:30:30-04:00   ...
2014-05-27 09:31:00-04:00   ...
2014-05-27 09:31:30-04:00   ...
2014-05-27 09:32:00-04:00   ...

5 rows × 5 columns

In [264]: googTickTS[:3].resample('30s', fill_method='bfill')
Out[264]:
                            close  high  low  open  volume
2014-05-27 09:30:00-04:00   ...
2014-05-27 09:30:30-04:00   ...
2014-05-27 09:31:00-04:00   ...
2014-05-27 09:31:30-04:00   ...
2014-05-27 09:32:00-04:00   ...

5 rows × 5 columns

Unfortunately, the fill_method parameter currently supports only two methods, forward fill and back fill. An interpolation method would be valuable.

Aliases for Time Series frequencies

To specify offsets, a number of aliases are available; some of the most commonly used ones are as follows:

• B, BM: These stand for business day and business month end. These are the working days of the month, that is, any day that is not a holiday or a weekend.
• D, W, M, Q, A: These stand for calendar day, week, month end, quarter end, and year end.
• H, T, S, L, U: These stand for hour, minute, second, millisecond, and microsecond.

These aliases can also be combined. In the following case, we resample every 7 minutes and 30 seconds:

In [267]: googTickTS.resample('7T30S').head(5)
Out[267]:
                            close     high      low       open      volume
tstamp
2014-05-27 09:30:00-04:00   556.8266  557.4362  556.3144  556.8800  28075.0
2014-05-27 09:37:30-04:00   556.5889  556.9342  556.4264  556.7206  11642.9
2014-05-27 09:45:00-04:00   556.9921  557.2185  556.7171  ...       ...
2014-05-27 09:52:30-04:00   556.1824  556.5375  556.0350  556.3896  14350.0
2014-05-27 10:00:00-04:00   555.2111  555.4368  554.8288  554.9675  12512.5

5 rows × 5 columns

Suffixes can be applied to the frequency aliases to specify when in a frequency period to start. These are known as anchoring offsets:

• W-SUN, W-MON, ...: For example, W-TUE indicates a weekly frequency starting on Tuesday.
• Q-JAN, Q-FEB, ... Q-DEC: For example, Q-MAY indicates a quarterly frequency with the year ending in May.
• A-JAN, A-FEB, ... A-DEC: For example, A-MAY indicates an annual frequency with the year ending in May.

These offsets can be used as arguments to the date_range and bdate_range functions, as well as in constructors for index types such as PeriodIndex and DatetimeIndex. A comprehensive discussion on this can be found in the pandas documentation at http://pandas.pydata.org/pandas-docs/stable/timeseries.html.

Time series concepts and datatypes

When dealing with time series, there are two main concepts that you have to consider: points in time and ranges, or time spans. In pandas, the former is represented by the Timestamp datatype, which is equivalent to Python's datetime.datetime datatype and is interchangeable with it. The latter (time span) is represented by the Period datatype, which is specific to pandas. Each of these datatypes has an index datatype associated with it: DatetimeIndex for Timestamp/datetime and PeriodIndex for Period.
These index datatypes are basically subtypes of numpy.ndarray that contain the corresponding Timestamp and Period datatypes and can be used as indexes for Series and DataFrame objects.

Period and PeriodIndex

The Period datatype is used to represent a range or span of time. Here are a few examples:

# Annual period ending in May 2014
In [287]: pd.Period('2014', freq='A-MAY')
Out[287]: Period('2014', 'A-MAY')

# Period representing a specific day: June 11, 2014
In [292]: pd.Period('06/11/2014')
Out[292]: Period('2014-06-11', 'D')

# Period representing 11 a.m., Nov 11, 1918
In [298]: pd.Period('11/11/1918 11:00', freq='H')
Out[298]: Period('1918-11-11 11:00', 'H')

We can add integers to Periods, which advances the period by the requisite number of units of the frequency:

In [299]: pd.Period('06/30/2014')+4
Out[299]: Period('2014-07-04', 'D')

In [303]: pd.Period('11/11/1918 11:00', freq='H') - 48
Out[303]: Period('1918-11-09 11:00', 'H')

We can also calculate the difference between two Periods and return the number of units of frequency between them:

In [304]: pd.Period('2014-04', freq='M')-pd.Period('2013-02', freq='M')
Out[304]: 14

A PeriodIndex object, which is an index type for Period objects, can be created in two ways:

1. From a series of Period objects using the period_range function, an analogue of date_range:

In [305]: perRng=pd.period_range('02/01/2014','02/06/2014', freq='D')
          perRng
Out[305]: freq: D
          [2014-02-01, ..., 2014-02-06]
          length: 6

In [306]: type(perRng[:2])
Out[306]: pandas.tseries.period.PeriodIndex

In [307]: perRng[:2]
Out[307]: freq: D
          [2014-02-01, 2014-02-02]

As we can confirm from the preceding command, when you pull back the covers, a PeriodIndex is really an ndarray of Period objects underneath.

2. Via a direct call to the PeriodIndex constructor:

In [312]: JulyPeriod=pd.PeriodIndex(['07/01/2014','07/31/2014'], freq='D')
          JulyPeriod
Out[312]: freq: D
          [2014-07-01, 2014-07-31]

The difference between the two approaches, as can be seen from the preceding output, is that period_range fills in the resulting ndarray, but the PeriodIndex constructor does not, and you have to specify all the values that should be in the index.
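Either form of PeriodIndex can then serve as the index of a Series. The following is a minimal sketch with made-up revenue figures:

import pandas as pd

# Quarterly revenue indexed by Period objects (the values are made up)
quarters = pd.period_range('2014Q1', '2014Q4', freq='Q')
revenue = pd.Series([1.2, 1.4, 1.1, 1.7], index=quarters)

print(revenue)            # the Period labels print as 2014Q1 ... 2014Q4
print(revenue['2014Q2'])  # label-based lookup by period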
Conversions between Time Series datatypes

We can convert the Period and PeriodIndex datatypes to the Datetime/Timestamp and DatetimeIndex datatypes via the to_period and to_timestamp functions, as follows:

In [339]: worldCupFinal=pd.to_datetime('07/13/2014', errors='raise')
          worldCupFinal
Out[339]: Timestamp('2014-07-13 00:00:00')

In [340]: worldCupFinal.to_period('D')
Out[340]: Period('2014-07-13', 'D')

In [342]: worldCupKickoff=pd.Period('06/12/2014','D')
          worldCupKickoff
Out[342]: Period('2014-06-12', 'D')

In [345]: worldCupKickoff.to_timestamp()
Out[345]: Timestamp('2014-06-12 00:00:00', tz=None)

In [346]: worldCupDays=pd.date_range('06/12/2014', periods=32, freq='D')
          worldCupDays
Out[346]: [2014-06-12, ..., 2014-07-13]
          Length: 32, Freq: D, Timezone: None

In [347]: worldCupDays.to_period()
Out[347]: freq: D
          [2014-06-12, ..., 2014-07-13]
          length: 32

A summary of Time Series-related objects

The following table gives a summary of Time Series-related objects:

Object              Summary
datetime.datetime   A standard Python datetime class
Timestamp           A pandas class derived from datetime.datetime
DatetimeIndex       A pandas class, implemented as an immutable numpy.ndarray of Timestamp/datetime objects
Period              A pandas class representing a time period
PeriodIndex         A pandas class, implemented as an immutable numpy.ndarray of Period objects
timedelta           A Python class expressing the difference between two datetime.datetime instances; implemented as datetime.timedelta
relativedelta       Implemented as dateutil.relativedelta; dateutil is an extension to the standard Python datetime module that provides extra functionality, such as timedeltas expressed in units larger than one day
DateOffset          A pandas class representing a regular frequency increment, with functionality similar to dateutil.relativedelta

Plotting using matplotlib

This section provides a brief introduction to plotting in pandas using matplotlib. The matplotlib API is imported using the standard convention, as shown in the following command:

In [1]: import matplotlib.pyplot as plt

Series and DataFrame have a plot method, which is simply a wrapper around plt.plot. Here, we will examine how we can do a simple plot of a sine and cosine function. Suppose we wished to plot the following functions over the interval -π to π:

• f(x) = cos(x) + sin(x)
• g(x) = sin(x) - cos(x)

We set this up as follows:

In [51]: import numpy as np

In [52]: X = np.linspace(-np.pi, np.pi, 256, endpoint=True)

In [54]: f,g = np.cos(X)+np.sin(X), np.sin(X)-np.cos(X)

In [61]: f_ser=pd.Series(f)
         g_ser=pd.Series(g)

In [31]: plotDF=pd.concat([f_ser,g_ser], axis=1)
         plotDF.index=X
         plotDF.columns=['sin(x)+cos(x)','sin(x)-cos(x)']
         plotDF.head()
Out[31]:
           sin(x)+cos(x)  sin(x)-cos(x)
-3.141593  ...
...

5 rows × 2 columns

We can now plot the DataFrame using the plot() command and the plt.show() command to display it:

In [94]: plotDF.plot()
         plt.show()

We can apply a title to the plot as follows:

In [95]: plotDF.columns=['f(x)','g(x)']
         plotDF.plot(title='Plot of f(x)=sin(x)+cos(x), \n g(x)=sin(x)-cos(x)')
         plt.show()

We can also plot the two series (functions) separately in different subplots using the following command:

In [96]: plotDF.plot(subplots=True, figsize=(6,6))
         plt.show()

There is a lot more to using the plotting functionality of matplotlib within pandas.
For more information, take a look at the documentation at http://pandas.pydata.org/pandas-docs/dev/.

Summary

To summarize, we have discussed how to handle missing data values and manipulate dates and time series in pandas. We also took a brief detour to investigate the plotting functionality in pandas using matplotlib. Handling missing data plays a very important part in the preparation of clean data for analysis and prediction, and the ability to plot and visualize data is an indispensable part of every good data analyst's toolbox.

In the next chapter, we will do some elementary data analysis on a real-world dataset, where we will analyze and answer basic questions about the data. For further references about these topics in pandas, please take a look at the official documentation at http://pandas.pydata.org/pandas-docs/stable/index.html.

A Tour of Statistics – The Classical Approach

In this chapter, we take a brief tour of classical statistics (also called the frequentist approach) and show how we can use pandas together with stats packages, such as scipy.stats and statsmodels, to conduct statistical analyses. This chapter and the following ones are not intended to be a primer on statistics; they just serve as an illustration of using pandas along with the stats packages. In the next chapter, we will examine an alternative approach to the classical view: Bayesian statistics. The various topics that are discussed in this chapter are as follows:

• Descriptive statistics and inferential statistics
• Measures of central tendency and variability
• Statistical hypothesis testing
• Z-test
• T-test
• Analysis of variance
• Confidence intervals
• Correlation and linear regression

Descriptive statistics versus inferential statistics

In descriptive or summary statistics, we attempt to describe the features of a collection of data in a quantitative way. This is different from inferential or inductive statistics, whose aim is to use the data to infer or draw conclusions about the population from which the sample is drawn, rather than merely to summarize the sample.

Measures of central tendency and variability

Some of the measures used in descriptive statistics include the measures of central tendency and the measures of variability. A measure of central tendency is a single value that attempts to describe a dataset by specifying a central position within the data. The three most common measures of central tendency are the mean, median, and mode. A measure of variability is used to describe the variability in a dataset. Measures of variability include variance and standard deviation.

Measures of central tendency

Let's take a look at the measures of central tendency and an illustration in the following sections.

The mean

The mean or sample mean is the most popular measure of central tendency. It is equal to the sum of all values in the dataset divided by the number of values in the dataset. Thus, in a dataset of n values, the mean is calculated as follows:

$$\bar{x} = \frac{x_1 + x_2 + x_3 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

We use $\bar{x}$ if the data values are from a sample and $\mu$ if the data values are from a population. The sample mean and the population mean are different. The sample mean is what is known as an unbiased estimator of the true population mean. By repeated random sampling of the population to calculate the sample mean, we can obtain a mean of sample means.
We can then invoke the law of large numbers and the central limit theorem (CLT) and denote the mean of sample means as an estimate of the true population mean. The population mean is also referred to as the expected value of the population.

The mean, as a calculated value, is often not one of the values observed in the dataset. The main drawback of using the mean is that it is very susceptible to outlier values, or to a dataset that is very skewed. For additional information, please refer to http://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance, http://en.wikipedia.org/wiki/Law_of_large_numbers, and http://en.wikipedia.org/wiki/Central_limit_theorem.

The median

The median is the data value that divides the set of sorted data values into two halves. It has exactly half of the population to its left and the other half to its right. In the case when the number of values in the dataset is even, the median is the average of the two middle values. It is less affected by outliers and skewed data.

The mode

The mode is the most frequently occurring value in the dataset. It is more commonly used for categorical data, in order to know which category is most common. One downside to using the mode is that it is not unique. A distribution with two modes is described as bimodal, and one with many modes is denoted as multimodal. Here is an illustration of a bimodal distribution with modes at two and seven, since they both occur four times in the dataset:

In [4]: import matplotlib.pyplot as plt
        %matplotlib inline

In [5]: plt.hist([7,0,1,2,3,7,1,2,3,4,2,7,6,5,2,1,6,8,9,7])
        plt.xlabel('x')
        plt.ylabel('Frequency')
        plt.title('Bimodal distribution')
        plt.show()

Computing measures of central tendency of a dataset in Python

To illustrate, let us consider the following dataset consisting of marks obtained by 15 pupils for a test scored out of 20:

In [18]: grades = [10, 10, 14, 18, 18, 5, 10, 8, 1, 12, 14, 12, 13, 1, 18]

The mean, median, and mode can be obtained as follows:

In [29]: %precision 3  # Set output precision to 3 decimal places
Out[29]: u'%.3f'

In [30]: import numpy as np
         np.mean(grades)
Out[30]: 10.933

In [35]: %precision
         np.median(grades)
Out[35]: 12.0

In [24]: from scipy import stats
         stats.mode(grades)
Out[24]: (array([ 10.]), array([ 3.]))

In [39]: import matplotlib.pyplot as plt

In [40]: plt.hist(grades)
         plt.title('Histogram of grades')
         plt.xlabel('Grade')
         plt.ylabel('Frequency')
         plt.show()

To illustrate how the skewness of data or an outlier value can drastically affect the usefulness of the mean as a measure of central tendency, consider the following dataset that shows the wages (in thousands of dollars) of the staff at a factory:

In [45]: %precision 2
         salaries = [17, 23, 14, 16, 19, 22, 15, 18, 18, 93, 95]

In [46]: np.mean(salaries)
Out[46]: 31.82
Now, if we take a look at the median, we see that it is better measure of central tendency in this case: In [47]: np.median(salaries) Out[47]: 18.00 [ 168 ] Chapter 7 We can also take a look at a histogram of the data: In [56]: plt.hist(salaries, bins=len(salaries)) plt.title('Histogram of salaries') plt.xlabel('Salary') plt.ylabel('Frequency') plt.show() The histogram is actually a better representation of the data as bar plots are generally used to represent categorical data while histograms are preferred for quantitative data, which is the case for the salaries' data. For more information on when to use histograms versus bar plots, refer to http://onforb.es/1Dru2gv. [ 169 ] A Tour of Statistics – The Classical Approach If the distribution is symmetrical and unimodal (that is, has only one mode), the three measures—mean, median, and mode—will be equal. This is not the case if the distribution is skewed. In that case, the mean and median will differ from each other. With a negatively skewed distribution, the mean will be lower than the median and vice versa for a positively skewed distribution: The preceding figure is sourced from http://www.southalabama.edu/coe/bset/ johnson/lectures/lec15_files/image014.jpg. Measures of variability, dispersion, or spread Another characteristic of distribution that we measure in descriptive statistics is variability. Variability specifies how much the data points are different from each other, or dispersed. Measures of variability are important because they provide an insight into the nature of the data that is not provided by the measures of central tendency. As an example, suppose we conduct a study to examine how effective a pre-K education program is in lifting test scores of economically disadvantaged children. We can measure the effectiveness not only in terms of the average value of the test scores of the entire sample but also with the dispersion of the scores. Is it useful for some students and not so much for others? The variability of the data may help us identify some steps to be taken to improve the usefulness of the program. [ 170 ] Chapter 7 The simplest measure of dispersion is the range. The range is the difference between the lowest and highest scores in a dataset. This is the simplest measure of spread. Range = highest value - lowest A more significant measure of dispersion is the quartile and related interquartile ranges. It also stands for quarterly percentile, which means that it is the value on the measurement scale below which 25, 50, 75, and 100 percent of the scores in the sorted dataset fall. The quartiles are three points that split the dataset into four groups, with each one containing one-fourth of the data. 
To illustrate, suppose we have a dataset of 20 test scores, which we rank as follows:

In [27]: import random
         random.seed(100)
         testScores = [random.randint(0,100) for p in xrange(0,20)]
         testScores
Out[27]: [14, 45, 77, 71, 73, 43, 80, 53, 8, 46, 4, 94, 95, 33, 31, 77, 20, 18, 19, 35]

In [28]: # data needs to be sorted for quartiles
         sortedScores = np.sort(testScores)

In [30]: rankedScores = {i+1: sortedScores[i] for i in xrange(len(sortedScores))}

In [31]: rankedScores
Out[31]: {1: 4, 2: 8, 3: 14, 4: 18, 5: 19, 6: 20, 7: 31, 8: 33, 9: 35, 10: 43,
          11: 45, 12: 46, 13: 53, 14: 71, 15: 73, 16: 77, 17: 77, 18: 80, 19: 94, 20: 95}

The first quartile (Q1) lies between the fifth and sixth scores, the second quartile (Q2) between the tenth and eleventh scores, and the third quartile between the fifteenth and sixteenth scores. Thus, we have (by using linear interpolation and calculating the midpoint):

Q1 = (19 + 20)/2 = 19.5
Q2 = (43 + 45)/2 = 44
Q3 = (73 + 77)/2 = 75

To see this in IPython, we can use the scipy.stats or numpy.percentile packages:

In [38]: from scipy.stats.mstats import mquantiles
         mquantiles(sortedScores)
Out[38]: array([ 19.45,  44.  ,  75.2 ])

In [40]: [np.percentile(sortedScores, perc) for perc in [25,50,75]]
Out[40]: [19.75, 44.0, 74.0]

The reason why the values don't match exactly with our previous calculations is due to the different interpolation methods. More information on the various types of methods to obtain quartile values can be found at http://en.wikipedia.org/wiki/Quartile. The interquartile range is the first quartile subtracted from the third quartile (Q3 - Q1). It represents the middle 50 percent of the values in a dataset. For more information, refer to http://bit.ly/1cMMycN.

For more details on the scipy.stats and numpy.percentile functions, see the documents at http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles.html and http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.percentile.html.

Deviation and variance

A fundamental idea in the discussion of variability is the concept of deviation. Simply put, a deviation measure tells us how far away a given value is from the mean of the distribution, that is, $X_i - \bar{X}$.

To quantify the deviation of a set of values, we sum the squared deviations and normalize this sum by dividing it by the size of the dataset. This is referred to as the variance. We need to use the sum of squared deviations, since taking the sum of deviations around the mean results in 0, as the negative and positive deviations cancel each other out. The sum of squared deviations is defined as follows:

$$SS = \sum_{i=1}^{N} (X_i - \bar{X})^2$$

It can be shown that the preceding expression is equivalent to:

$$SS = \sum_{i=1}^{N} X_i^2 - \frac{\left(\sum_{i=1}^{N} X_i\right)^2}{N}$$

Formally, the variance is defined as follows:

• For sample variance, use the following formula:

$$s^2 = \frac{SS}{N-1} = \frac{1}{N-1}\sum_{i=1}^{N} (X_i - \bar{X})^2$$

• For population variance, use the following formula:

$$\sigma^2 = \frac{SS}{N} = \frac{1}{N}\sum_{i=1}^{N} (X_i - \mu)^2$$

The reason why the denominator is N - 1 for the sample variance instead of N is that for the sample variance, we wish to use an unbiased estimator. For more details on this, take a look at http://en.wikipedia.org/wiki/Bias_of_an_estimator.

The values of this measure are in squared units. This emphasizes the fact that what we have calculated as the variance is the squared deviation.
Therefore, to obtain the deviation in the same units as the original points of the dataset, we must take the square root, and this gives us what we call the standard deviation. Thus, the standard deviation of a sample is given by the following formula:

$$s = \sqrt{\frac{SS}{N-1}} = \sqrt{\frac{\sum_{i=1}^{N}(X_i - \bar{X})^2}{N-1}}$$

However, for a population, the standard deviation is given by the following formula:

$$\sigma = \sqrt{\frac{SS}{N}} = \sqrt{\frac{\sum_{i=1}^{N}(X_i - \mu)^2}{N}}$$

Hypothesis testing – the null and alternative hypotheses

In the preceding section, we had a brief discussion of what is referred to as descriptive statistics. In this section, we will discuss what is known as inferential statistics, whereby we try to use characteristics of a sample dataset to draw conclusions about the wider population as a whole.

One of the most important methods in inferential statistics is hypothesis testing. In hypothesis testing, we try to determine whether a certain hypothesis or research question is true to a certain degree. One example of a hypothesis would be this: Eating spinach improves long-term memory.

In order to investigate this question using hypothesis testing, we can select a group of people as subjects for our study and divide them into two groups, or samples. The first group will be the experimental group, and it will eat spinach over a predefined period of time. The second group, which does not receive spinach, will be the control group. Over selected periods of time, the memory of individuals in the two groups will be measured and tallied.

Our goal at the end of the experiment would be to be able to make a statement such as "Eating spinach results in improvement in long-term memory, which is not due to chance". This is also known as significance.

In the preceding scenario, the collection of subjects in the study is referred to as the sample, and the general set of people about whom we would like to draw conclusions is the population. The ultimate goal of our study would be to determine whether any effects that we observed in the sample can be generalized to the population as a whole. In order to carry out hypothesis testing, we will need to come up with what are known as the null and alternative hypotheses.

The null and alternative hypotheses

By referring to the preceding spinach example, the null hypothesis would be: Eating spinach has no effect on long-term memory performance.

The null hypothesis is just that: it nullifies what we're trying to prove by running our experiment. It does so by asserting that some statistical metric (to be explained later) is zero. The alternative hypothesis is what we hope to support. It is the opposite of the null hypothesis, and we assume it to be true until the data provides sufficient evidence that indicates otherwise. Thus, our alternative hypothesis in this case is: Eating spinach results in an improvement in long-term memory.

Symbolically, the null hypothesis is referred to as H0 and the alternative hypothesis as H1. You may wish to restate the preceding null and alternative hypotheses as something more concrete and measurable for our study. For example, we could recast H0 as follows: The mean memory score for a sample of 1,000 subjects who ate 40 grams of spinach daily for a period of 90 days would not differ from that of a control group of 1,000 subjects who consumed no spinach within the same time period.

In conducting our experiment/study, we focus on trying to prove or disprove the null hypothesis. This is because we can calculate the probability that our results are due to chance.
However, there is no easy way to calculate the probability of the alternative hypothesis, since any improvement in long-term memory could be due to factors other than just eating spinach.

We test the null hypothesis by assuming that it is true and calculating the probability of getting the results we observe by chance alone. We set a threshold level, alpha (α): we reject the null hypothesis if the calculated probability is smaller than alpha, and accept (more precisely, fail to reject) it if the probability is greater. Rejecting the null hypothesis is tantamount to accepting the alternative hypothesis, and vice versa.

The alpha and p-values

In order to conduct an experiment to decide for or against our null hypothesis, we need to come up with an approach that enables us to make the decision in a concrete and measurable way. To do this test of significance, we have to consider two numbers: the p-value of the test statistic and the threshold level of significance, also known as alpha.

The p-value is the probability of obtaining the result we observe by chance alone, assuming that the null hypothesis is true. Equivalently, the p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the actually obtained test statistic, given that the null hypothesis is true.

The alpha value is the threshold against which we compare p-values. It gives us a cut-off point for accepting or rejecting the null hypothesis, and is a measure of how extreme the results we observe must be in order to reject the null hypothesis of our experiment. The most commonly used values of alpha are 0.05 and 0.01.

In general, the rule is as follows: if the p-value is less than or equal to alpha (p ≤ .05), we reject the null hypothesis and state that the result is statistically significant; if the p-value is greater than alpha (p > .05), we have failed to reject the null hypothesis, and we say that the result is not statistically significant.

The seemingly arbitrary values of alpha in use are one of the shortcomings of the frequentist methodology, and there are many questions concerning this approach. The following article in the journal Nature highlights some of the problems: http://www.nature.com/news/scientific-method-statistical-errors-1.14700.

For more details on this topic, refer to:
• http://statistics.about.com/od/Inferential-Statistics/a/What-Is-The-Difference-Between-Alpha-And-P-Values.htm
• http://bit.ly/1GzYX1P
• http://en.wikipedia.org/wiki/P-value

Type I and Type II errors

There are two types of errors, as explained here:
• Type I error: In this type of error, we reject H0 when in fact H0 is true. An example of this would be a jury convicting an innocent person for a crime that the person did not commit.
• Type II error: In this type of error, we fail to reject H0 when in fact H1 is true. This is equivalent to a guilty person escaping conviction.

Statistical hypothesis tests

A statistical hypothesis test is a method for making a decision using data from a statistical study or experiment. In statistics, a result is termed statistically significant if it is unlikely to have occurred only by chance, based on a predetermined threshold probability or significance level. There are two classes of statistical tests: 1-tailed and 2-tailed tests.

In a 2-tailed test, we allot half of our alpha to testing the statistical significance in one direction and the other half to testing statistical significance in the other direction.
In a 1-tailed test, the test is performed in one direction only. For more details on this topic, refer to http://www.ats.ucla.edu/stat/mult_pkg/faq/general/tail_tests.htm.

To apply statistical inference, it is important to understand the concept of what is known as a sampling distribution. A sampling distribution is the set of all possible values of a statistic, along with their probabilities, assuming we sample at random from a population where the null hypothesis holds true. A more simplistic definition is this: a sampling distribution is the set of values the statistic can assume (distribution) if we were to repeatedly draw samples from the population, along with their associated probabilities.

The value of a statistic is a random sample from the statistic's sampling distribution. The sampling distribution of the mean is calculated by obtaining many samples of various sizes and taking their mean. It has a mean, $\mu_{\bar{X}}$, equal to $\mu$, and a standard deviation, $\sigma_{\bar{X}}$, equal to $\sigma/\sqrt{N}$.

The CLT states that the sampling distribution is normally distributed if the original, or raw-score, population is normally distributed, or if the sample size is large enough. Conventionally, statisticians denote large-enough sample sizes as $N \ge 30$, that is, a sample size of 30 or more; this is still a topic of debate, though. For more details on this topic, refer to http://stattrek.com/sampling/samplingdistribution.aspx and http://en.wikipedia.org/wiki/Central_limit_theorem.

The standard deviation of the sampling distribution is often referred to as the standard error of the mean, or just the standard error.

The z-test

The z-test is appropriate under the following conditions:
• The study involves a single sample mean, and the parameters $\mu$ and $\sigma$ of the null hypothesis population are known
• The sampling distribution of the mean is normally distributed
• The size of the sample is $N \ge 30$

We use the z-test when the standard deviation of the population is known. In the z-test, we ask whether the population mean, $\mu$, is different from a hypothesized value. The null hypothesis in the case of the z-test is as follows:

$$H_0: \mu = \mu_0$$

where $\mu$ is the population mean and $\mu_0$ is the hypothesized value.

The alternative hypothesis, $H_a$, can be one of the following:

$$H_a: \mu < \mu_0 \qquad H_a: \mu > \mu_0 \qquad H_a: \mu \ne \mu_0$$

The first two are 1-tailed tests, while the last one is a 2-tailed test. In concrete terms, to test $H_0$, we calculate the test statistic:

$$z = \frac{\bar{X} - \mu_0}{\sigma_{\bar{X}}}$$

Here, $\sigma_{\bar{X}}$ is the true standard deviation of the sampling distribution of $\bar{X}$, that is, $\sigma/\sqrt{N}$. If $H_0$ is true, the z-test statistic will have the standard normal distribution.

Here, we present a quick illustration of the z-test. Suppose we have a fictional company, Intelligenza, that claims to have come up with a radical new method for improved memory retention and study. They claim that their technique can improve grades over traditional study techniques. Suppose the improvement in grades is 40 percent, with a standard deviation of 10 percent, using traditional study techniques. A random test was run on 100 students using the Intelligenza method, and this resulted in a mean improvement of 43.75 percent. Does Intelligenza's claim hold true?

The null hypothesis for this study states that there is no improvement in grades using Intelligenza's method over traditional study techniques.
The alternative hypothesis is that there is an improvement using Intelligenza's method over traditional study techniques.

The null hypothesis is given by the following:

$$H_0: \mu = \mu_0$$

The alternative hypothesis is given by the following:

$$H_a: \mu > \mu_0$$

std error = 10/sqrt(100) = 1
z = (43.75 − 40)/1 = 3.75 standard errors

Recall that if the null hypothesis is true, the test statistic z will have a standard normal distribution. (For a reference picture of the standard normal curve, go to http://mathisfun.com/data/images/normal-distrubutionlarge.gif.)

This value of z would be a random sample from the standard normal distribution, which is the distribution of z if the null hypothesis is true. The observed value of z = 3.75 corresponds to an extreme outlier p-value on the standard normal distribution curve, much less than 0.1 percent. The p-value is the area under the curve to the right of the value 3.75 on the normal distribution curve. This suggests that it would be highly unlikely for us to obtain the observed value of the test statistic if we were sampling from a standard normal distribution.

We can look up the actual p-value in Python by using the scipy.stats package, as follows:

In [104]: from scipy import stats
          1 - stats.norm.cdf(3.75)
Out[104]: 8.841728520081471e-05

Therefore, $P(z \ge 3.75) = 8.8 \times 10^{-5}$; that is, if the test statistic were normally distributed, the probability of obtaining the observed value would be close to zero. So, it would be almost impossible to obtain the value that we observe if the null hypothesis were actually true.

In more formal terms, we would normally define a threshold or alpha value and reject the null hypothesis if the p-value ≤ α, or fail to reject it otherwise. The typical values for α are 0.05 or 0.01. The following list gives a rough guide to interpreting p-values (the upper bullet is the only one that survived extraction; the rest follow the conventional scale):
• p-value > 0.1: There is little or no evidence against H0
• 0.05 < p-value ≤ 0.1: There is weak evidence against H0
• 0.01 < p-value ≤ 0.05: There is strong evidence against H0
• p-value ≤ 0.01: There is very strong evidence against H0

Therefore, in this case, we would reject the null hypothesis and give credence to Intelligenza's claim, stating that their claim is highly significant. The evidence against the null hypothesis in this case is significant. There are two methods that we use to determine whether to reject the null hypothesis:
• The p-value approach
• The rejection region approach

The approach that we used in the preceding example was the former one. The smaller the p-value, the less likely it is that the null hypothesis is true. In the rejection region approach, the rule is as follows: if the observed test statistic falls in the rejection region, that is, beyond the critical value corresponding to α, reject the null hypothesis; otherwise, retain it.

The t-test

The z-test is most useful when the standard deviation of the population is known. However, in most real-world cases, this is an unknown quantity. For these cases, we turn to the t-test of significance. For the t-test, given that the standard deviation of the population is unknown, we replace it with the standard deviation, s, of the sample. The standard error of the mean now becomes:

$$s_{\bar{X}} = \frac{s}{\sqrt{N}}$$

The standard deviation of the sample, s, is calculated as follows:

$$s = \sqrt{\frac{\sum (X - \bar{X})^2}{N-1}}$$

The denominator is N − 1 and not N. This value is known as the number of degrees of freedom. I will now state, without proof, that the t-distribution approximates the normal, Gaussian, or z-distribution as N, and hence N − 1, increases, that is, with increasing degrees of freedom (df). When df = ∞, the t-distribution is identical to the normal or z-distribution. This is intuitive since, as df increases, the sample size increases and s approaches σ, the true standard deviation of the population.
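We can visualize this convergence directly. The following is a minimal sketch (my own, not from the original text) that overlays the t-distribution PDF for a few degrees of freedom on the standard normal PDF using scipy.stats:

In [ ]: from scipy.stats import t, norm
        x = np.linspace(-4, 4, 200)
        plt.plot(x, norm.pdf(x), 'k--', lw=2, label='normal')  # limiting case
        for df in [1, 3, 30]:
            plt.plot(x, t.pdf(x, df), label='t, df=%d' % df)   # heavier tails for small df
        plt.legend()
        plt.title('t-distribution approaching the normal distribution')
        plt.show()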
There are an infinite number of t-distributions, each corresponding to a different value of df. This can be seen in the figure referenced at http://zoonek2.free.fr/UNIX/48_R/g593.png. A more detailed technical explanation of the relationship between the t-distribution, the z-distribution, and the degrees of freedom can be found at http://en.wikipedia.org/wiki/Student's_t-distribution.

Types of t-tests

There are various types of t-tests. The following are the most common ones; they typically formulate a null hypothesis that makes a claim about the mean of a distribution:

• One-sample t-test: This is used to compare the mean of a sample with a known population mean or known value. Let's assume that we're health researchers in Australia who are concerned about the health of the aboriginal population and wish to ascertain whether babies born to low-income aboriginal mothers have lower birth weight than normal. An example of a null hypothesis for a one-sample t-test would be this: the mean birth weight for our sample of 150 full-term, live baby deliveries from low-income aboriginal mothers is no different from the mean birth weight of babies in the general Australian population, that is, 3,367 grams. The reference for this information is http://bit.ly/1KY9T7f.

• Independent samples t-test: This is used to compare means from independent samples with each other. An example of an independent samples t-test would be a comparison of the fuel economy of automatic transmission versus manual transmission vehicles. This is what our real-world example will focus on. The null hypothesis for the t-test would be this: there is no difference between the average fuel efficiency of cars with manual and automatic transmissions, in terms of their average combined city/highway mileage.

• Paired samples t-test: In a paired/dependent samples t-test, we take each data point in one sample and pair it with a data point in the other sample in a meaningful way. One way to do this is to measure the same sample at different points in time. An example of this would be examining the efficacy of a slimming diet by comparing the weight of a sample of participants before and after the diet. The null hypothesis in this case would be this: there is no difference between the mean weights of participants before and after going on the slimming diet, or, more succinctly, the mean difference between paired observations is zero.

The reference for this information can be found at http://en.wikiversity.org/wiki/T-test.

A t-test example

In simplified terms, to do Null Hypothesis Significance Testing (NHST), we need to do the following:
1. Formulate our null hypothesis. The null hypothesis is our model of the system, assuming that the effect we wish to verify was actually due to chance.
2. Calculate our p-value.
3. Compare the calculated p-value with that of our alpha, or threshold, value and decide whether to reject or accept the null hypothesis. If the p-value is low enough (lower than alpha), we will draw the conclusion that the null hypothesis is likely to be untrue.

For our real-world illustration, we wish to investigate whether manual transmission vehicles are more fuel efficient than automatic transmission vehicles.
In order to do this, we will make use of the Fuel Economy data published by the US government for 2014 at http://www.fueleconomy.gov.

In [53]: import pandas as pd
         import numpy as np
         feRawData = pd.read_csv('2014_FEGuide.csv')

In [54]: feRawData.columns[:20]
Out[54]: Index([u'Model Year', u'Mfr Name', u'Division', u'Carline', u'Verify Mfr Cd', u'Index (Model Type Index)', u'Eng Displ', u'# Cyl', u'Trans as listed in FE Guide (derived from col AA thru AF)', u'City FE (Guide) - Conventional Fuel', u'Hwy FE (Guide) - Conventional Fuel', u'Comb FE (Guide) - Conventional Fuel', u'City Unadj FE - Conventional Fuel', u'Hwy Unadj FE - Conventional Fuel', u'Comb Unadj FE - Conventional Fuel', u'City Unrd Adj FE - Conventional Fuel', u'Hwy Unrd Adj FE - Conventional Fuel', u'Comb Unrd Adj FE - Conventional Fuel', u'Guzzler? ', u'Air Aspir Method'], dtype='object')

In [51]: feRawData = feRawData.rename(columns={'Trans as listed in FE Guide (derived from col AA thru AF)': 'TransmissionType',
                                               'Comb FE (Guide) - Conventional Fuel': 'CombinedFuelEcon'})

In [57]: transType = feRawData['TransmissionType']
         transType.head()
Out[57]: (the first five raw transmission strings are elided in this extract)
         Name: TransmissionType, dtype: object

Now, we wish to modify the preceding series so that the values contain just the Auto and Manual strings. We can do this as follows:

In [58]: transTypeSeries = transType.str.split('(').str.get(0)
         transTypeSeries.head()
Out[58]: (the first five cleaned values are elided in this extract)
         Name: TransmissionType, dtype: object

We now create a final modified DataFrame from Series objects consisting of the transmission type and the combined fuel economy figures:

In [61]: feData = pd.DataFrame([transTypeSeries, feRawData['CombinedFuelEcon']]).T
         feData.head()
Out[61]: 5 rows × 2 columns

We can now separate the data for vehicles with automatic transmission from those with manual transmission, as follows:

In [62]: feData_auto = feData[feData['TransmissionType']=='Auto']
         feData_manual = feData[feData['TransmissionType']=='Manual']

In [63]: feData_auto.head()
Out[63]: 5 rows × 2 columns

The following counts show that there were 987 vehicles with automatic transmission versus 211 with manual transmission:

In [64]: len(feData_auto)
Out[64]: 987

In [65]: len(feData_manual)
Out[65]: 211

In [87]: np.mean(feData_auto['CombinedFuelEcon'])
Out[87]: 22.173252279635257

In [88]: np.mean(feData_manual['CombinedFuelEcon'])
Out[88]: 25.061611374407583

In [84]: import scipy.stats as stats
         stats.ttest_ind(feData_auto['CombinedFuelEcon'].tolist(),
                         feData_manual['CombinedFuelEcon'].tolist())
Out[84]: (array(-6.5520663209014325), 8.4124843426100211e-11)

In [86]: stats.ttest_ind(feData_auto['CombinedFuelEcon'].tolist(),
                         feData_manual['CombinedFuelEcon'].tolist(),
                         equal_var=False)
Out[86]: (array(-6.949372262516113), 1.9954143680382091e-11)
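The output is left uninterpreted above, so here is a brief hedged sketch of the final NHST step (the variable names reuse the session above; the second element of the returned tuple is the two-sided p-value):

In [ ]: t_stat, p_value = stats.ttest_ind(feData_auto['CombinedFuelEcon'].tolist(),
                                          feData_manual['CombinedFuelEcon'].tolist(),
                                          equal_var=False)
        alpha = 0.05
        p_value < alpha   # True: we reject H0 and conclude the mean fuel
                          # economies of the two groups differ significantly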
Confidence intervals

In this section, we will address the issue of confidence intervals. A confidence interval enables us to make a probabilistic estimate of the value of the mean of a population, given sample data. This estimate, called an interval estimate, consists of a range of values (an interval) that acts as a good estimate of the unknown population parameter. The confidence interval is bounded by confidence limits. A 95 percent confidence interval is defined as an interval that contains the population mean with 95 percent probability. So, how do we construct a confidence interval?

Suppose we have a 2-tailed t-test and we wish to construct a 95 percent confidence interval. In this case, we want the sample t-value, $t_{samp}$, corresponding to the mean to satisfy the following inequality:

$$-t_{0.025} \le t_{samp} \le t_{0.025}$$

Given that $t_{samp} = \frac{\bar{X}_{samp} - \mu}{s_{\bar{X}}}$, we can substitute this in the preceding inequality to obtain:

$$\bar{X}_{samp} - s_{\bar{X}}\,t_{0.025} \le \mu \le \bar{X}_{samp} + s_{\bar{X}}\,t_{0.025}$$

This interval is our 95 percent confidence interval. Generalizing, any confidence interval for a percentage y can be expressed as $\bar{X}_{samp} - s_{\bar{X}}\,t_{crit} \le \mu \le \bar{X}_{samp} + s_{\bar{X}}\,t_{crit}$, where $t_{crit}$ is the critical t-value corresponding to the desired confidence level y.

We will now take the opportunity to illustrate how we can calculate the confidence interval using a dataset from the popular statistical environment R. The statsmodels module provides access to the datasets available in the core datasets package of R via the get_rdataset function.

An illustrative example

We will consider the dataset known as faithful, which consists of data obtained by observing the eruptions of the Old Faithful geyser in Yellowstone National Park in the U.S. The two variables in the dataset are eruptions, the length of time the geyser erupts, and waiting, the time interval until the next eruption. There were 272 observations.

In [46]: import statsmodels.api as sma
         faithful = sma.datasets.get_rdataset("faithful")
         faithful
Out[46]: (the dataset object representation is elided in this extract)

In [48]: faithfulDf = faithful.data
         faithfulDf.head()
Out[48]: 5 rows × 2 columns

In [50]: len(faithfulDf)
Out[50]: 272

Let us calculate a 95 percent confidence interval for the mean waiting time of the geyser. To do this, we first obtain the sample mean and standard deviation of the data:

In [80]: mean, std = (np.mean(faithfulDf['waiting']),
                      np.std(faithfulDf['waiting']))

We now make use of the scipy.stats package to calculate the confidence interval:

In [81]: from scipy import stats
         N = len(faithfulDf['waiting'])
         ci = stats.norm.interval(0.95, loc=mean, scale=std/np.sqrt(N))

In [82]: ci
Out[82]: (69.28440107709261, 72.509716569966201)

Thus, we can state with 95 percent confidence that the interval [69.28, 72.51] contains the actual mean waiting time of the geyser. References for this information: http://statsmodels.sourceforge.net/devel/datasets/index.html and http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.norm.html.
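Strictly speaking, the population standard deviation of the waiting times is unknown, so a t-based interval, as derived at the start of this section, is arguably more appropriate than the normal approximation; with 271 degrees of freedom the two are nearly identical. A minimal sketch (my own, reusing mean, std, and N from the session above):

In [ ]: stats.t.interval(0.95, N - 1, loc=mean, scale=std/np.sqrt(N))
        # very close to (69.28, 72.51), as expected for large df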
Correlation and linear regression

One of the most common tasks in statistics is determining the relationship between two variables: whether there is dependence between them. Correlation is the general term we use in statistics for variables that express dependence on each other. We can then use this relationship to try to predict the values of one set of variables from the other; this is termed regression.

The statistical dependence expressed in a correlation relationship does not imply a causal relationship between the two variables; the famous line on this is "Correlation does not imply causation". Thus, correlation between two variables or datasets implies an association rather than a causal relationship or dependence. For example, there is a correlation between the amount of ice cream purchased on a given day and the weather. For more information on correlation and dependence, refer to http://en.wikipedia.org/wiki/Correlation_and_dependence.

The correlation measure, known as the correlation coefficient, is a number that captures the size and direction of the relationship between two variables. It can vary from −1 to +1 in direction and 0 to 1 in magnitude. The direction of the relationship is expressed via the sign, with a + sign expressing positive correlation and a − sign negative correlation. The higher the magnitude, the greater the correlation, with 1 termed the perfect correlation.

The most popular and widely used correlation coefficient is the Pearson product-moment correlation coefficient, known as r. It measures the linear correlation or dependence between two variables, x and y, and takes values between −1 and +1. The sample correlation coefficient, r, is defined as follows:

$$r = \frac{\sum_{i=1}^{N}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{N}(X_i - \bar{X})^2 \sum_{i=1}^{N}(Y_i - \bar{Y})^2}}$$

This can also be written as follows:

$$r = \frac{N\sum X_iY_i - \sum X_i \sum Y_i}{\sqrt{N\sum X_i^2 - \left(\sum X_i\right)^2}\sqrt{N\sum Y_i^2 - \left(\sum Y_i\right)^2}}$$

Here, we have omitted the summation limits.

Linear regression

As mentioned earlier, regression focuses on using the relationship between two variables for prediction. In order to make predictions using linear regression, the best-fitting straight line must be computed. If all the points (values for the variables) lie on a straight line, the relationship is deemed perfect. This rarely happens in practice; the points do not all fit neatly on a straight line, and the relationship is then imperfect. In some cases, a linear relationship only occurs among log-transformed variables; this is a log-log model. An example of such a relationship would be a power law distribution in physics, where one variable varies as a power of another. Thus, an expression such as $Y = a^x$ results in the linear relationship $\ln(Y) = x \ln(a)$. For more information, see http://en.wikipedia.org/wiki/Power_law.

To construct the best-fit line, the method of least squares is used. In this method, the best-fit line is the optimal line constructed between the points, for which the sum of the squared distances from each point to the line is the minimum. This is deemed to be the best linear approximation of the relationship between the variables we are trying to model using linear regression, and the line in this case is called the least squares regression line. More formally, the least squares regression line is the line that has the minimum possible value for the sum of squares of the vertical distances from the data points to the line. These vertical distances are also known as residuals. Thus, by constructing the least squares regression line, we're trying to minimize the following expression:

$$\sum_{i=1}^{N}(Y_i - \hat{Y}_i)^2$$

where $\hat{Y}_i$ is the value predicted by the line.
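For concreteness, the closed-form least squares solution for the slope and intercept can be written in a few lines of NumPy. This is a sketch on hypothetical data (the x and y arrays below are made up purely for illustration); np.polyfit, used in the example that follows, performs the same computation:

In [ ]: x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical predictor values
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical response values
        # slope = sum of cross-deviations / sum of squared x-deviations
        slope = np.sum((x - x.mean())*(y - y.mean())) / np.sum((x - x.mean())**2)
        intercept = y.mean() - slope*x.mean()      # line passes through (x-bar, y-bar)
        slope, intercept                           # matches np.polyfit(x, y, 1)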
An illustrative example

We will now illustrate all the preceding points with an example. Suppose we're doing a study in which we would like to illustrate the effect of temperature on how often crickets chirp. The data for this example is obtained from the book The Song of Insects, written by George W. Pierce in 1948. George Pierce measured the frequency of chirps made by a ground cricket at various temperatures. We wish to investigate the relationship between chirp frequency and temperature, as we suspect that there is one.

The data consists of 15 data points, and we read it into a DataFrame:

In [38]: import pandas as pd
         import numpy as np
         chirpDf = pd.read_csv('cricket_chirp_temperature.csv')

In [39]: chirpDf
Out[39]: (table of chirpFrequency and temperature values elided in this extract)
         15 rows × 2 columns

As a start, let us do a scatter plot of the data, along with a regression line, or line of best fit:

In [29]: plt.scatter(chirpDf.temperature, chirpDf.chirpFrequency,
                     marker='o', edgecolor='b', facecolor='none', alpha=0.5)
         plt.xlabel('Temperature')
         plt.ylabel('Chirp Frequency')
         slope, intercept = np.polyfit(chirpDf.temperature, chirpDf.chirpFrequency, 1)
         plt.plot(chirpDf.temperature, chirpDf.temperature*slope + intercept, 'r')
         plt.show()

From the plot, we can see that there seems to be a linear relationship between temperature and chirp frequency. We can now investigate further by using the statsmodels ols (ordinary least squares) method:

In [37]: import statsmodels.formula.api as sm
         chirpDf = pd.read_csv('cricket_chirp_temperature.csv')
         chirpDf = np.round(chirpDf, 2)
         result = sm.ols('temperature ~ chirpFrequency', chirpDf).fit()
         result.summary()
Out[37]: OLS Regression Results (abridged; the full summary table was garbled in extraction)
         Dep. Variable: temperature      R-squared: 0.697
         Model: OLS                      Adj. R-squared: 0.674
         Method: Least Squares           No. Observations: 15
         Date: Wed, 27 Aug 2014          Df Residuals: 13, Df Model: 1

                          coef      std err    t        P>|t|    [95.0% Conf. Int.]
         Intercept        25.2323   10.060     2.508    0.026    3.499    46.966
         chirpFrequency   3.2911    0.601      5.475    0.000    1.992    4.590

We will ignore most of the preceding results, except for the R-squared, Intercept, and chirpFrequency values. From the result, we can conclude that the slope of the regression line is 3.29 and that the intercept on the temperature axis is 25.23. Thus, the regression line equation looks like this: temperature = 25.23 + 3.29 * chirpFrequency.

This means that as the chirp frequency increases by one, the temperature increases by about 3.29 degrees Fahrenheit. However, note that the intercept value is not really meaningful, as it is outside the bounds of the data. We can also only make predictions for values within the bounds of the data. For example, we cannot predict what the chirpFrequency is at 32 degrees Fahrenheit, as it is outside the bounds of the data; moreover, at 32 degrees Fahrenheit, the crickets would have frozen to death.

The value of R, the correlation coefficient, is given as follows:

In [38]: R = np.sqrt(result.rsquared)
         R
Out[38]: 0.83514378678237422

Thus, our correlation coefficient is R = 0.835, indicating a strong positive linear relationship. Note that it is R-squared, not R, that measures explained variance: with R² ≈ 0.70, about 70 percent of the variation in temperature can be explained by changes in chirp frequency.

References for this information: The Song of Insects, http://www.hup.harvard.edu/catalog.php?isbn=9780674420663; the data is sourced from http://bit.ly/1MrlJqR.

For a more in-depth treatment of single- and multi-variable regression, refer to the following websites:
• Regression (Part I): http://bit.ly/1Eq5kSx
• Regression (Part II): http://bit.ly/1OmuFTV

In this chapter, we took a brief tour of the classical, or frequentist, approach to statistics and showed you how to combine pandas with the stats packages scipy.stats and statsmodels to calculate, interpret, and make inferences from statistical data. In the next chapter, we will examine an alternative approach to statistics: the Bayesian approach.
For a deeper look at the statistics topics that we touched on, please take a look at Understanding Statistics in the Behavioral Sciences, which can be found at http://www.amazon.com/

A Brief Tour of Bayesian Statistics

In this chapter, we will take a brief tour of an alternative approach to statistical inference called Bayesian statistics. It is not intended to be a full primer, but simply to serve as an introduction to the Bayesian approach. We will also explore the associated Python-related libraries, and how to use pandas and matplotlib to help with the data analysis. The various topics that will be discussed are as follows:
• Introduction to Bayesian statistics
• Mathematical framework for Bayesian statistics
• Probability distributions
• Bayesian versus Frequentist statistics
• Introduction to PyMC and Monte Carlo simulation
• Illustration of Bayesian inference – Switchpoint detection

Introduction to Bayesian statistics

The field of Bayesian statistics is built on the work of Reverend Thomas Bayes, an 18th century statistician, philosopher, and Presbyterian minister. His famous Bayes' theorem, which forms the theoretical underpinning of Bayesian statistics, was published posthumously in 1763 as a solution to the problem of inverse probability. For more details on this topic, refer to the Wikipedia article on Thomas Bayes.

Inverse probability problems were all the rage in the early 18th century and were often formulated as follows:

Suppose you play a game with a friend. There are 10 green balls and 7 red balls in bag 1, and 4 green and 7 red balls in bag 2. Your friend turns away from your view, tosses a coin, picks a ball from one of the bags at random, and shows it to you. The ball is red. What is the probability that the ball was drawn from bag 1?

These problems are termed inverse probability problems because we are trying to estimate the probability of an event that has already occurred (which bag the ball was drawn from) in light of a subsequent event (that the ball is red).

Let us quickly illustrate how one would go about solving this inverse probability problem. We wish to calculate the probability that the ball was drawn from bag 1, given that it is red. This can be denoted as $P(\text{Bag 1} \mid \text{Red Ball})$.

Let us start by calculating the probability of selecting a red ball. There are two ways this can happen: via bag 1 or via bag 2 (the two red paths in the tree diagram of the original text, omitted here). Hence, we have:

$$P(\text{Red Ball}) = \frac{1}{2} \times \frac{7}{17} + \frac{1}{2} \times \frac{7}{11} = 0.524$$

Now, the probability of choosing a red ball from bag 1 is obtained via the upper path only and is given as follows:

$$P(\text{Red Ball, Bag 1}) = \frac{1}{2} \times \frac{7}{17} = \frac{7}{34} = 0.206$$

And the probability of choosing a red ball from bag 2 is given as follows:

$$P(\text{Red Ball, Bag 2}) = \frac{1}{2} \times \frac{7}{11} = \frac{7}{22} = 0.318$$

Note that this probability can be written as follows:

$$P(\text{Red Ball, Bag 1}) = P(\text{Red Ball} \mid \text{Bag 1}) \times P(\text{Bag 1})$$

By inspection, we can see that $P(\text{Bag 1}) = \frac{1}{2}$, and the final branch of the tree is traversed only if the ball is firstly in bag 1 and is a red ball. Hence, intuitively, we'll get the following outcome:

$$P(\text{Bag 1} \mid \text{Red Ball}) = \frac{P(\text{Red Ball, Bag 1})}{P(\text{Red Ball})} = \frac{P(\text{Red Ball} \mid \text{Bag 1}) \times P(\text{Bag 1})}{P(\text{Red Ball})} = \frac{0.206}{0.524} = 0.393$$
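Before formalizing this, we can sanity check the answer by simulation. The following is a minimal sketch (not part of the original text) that replays the game many times with NumPy and estimates the conditional probability empirically:

In [ ]: np.random.seed(42)
        n = 1000000
        bags = np.random.randint(0, 2, n)            # 0 = bag 1, 1 = bag 2 (fair coin)
        # bag 1 holds 7 red out of 17 balls; bag 2 holds 7 red out of 11 balls
        p_red = np.where(bags == 0, 7/17.0, 7/11.0)
        reds = np.random.random(n) < p_red           # which draws produced a red ball
        (bags[reds] == 0).mean()                     # P(bag 1 | red), roughly 0.393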
Mathematical framework for Bayesian statistics

With Bayesian methods, we present an alternative method of making statistical inference. We first introduce Bayes' theorem, the fundamental equation from which all Bayesian inference is derived.

A couple of definitions about probability are in order:
• A, B: These are events that can occur with a certain probability.
• P(A) and P(B): The probability of the occurrence of a particular event.
• P(A|B): The probability of A happening, given that B has occurred. This is known as a conditional probability.
• P(AB) = P(A and B): The probability of A and B occurring together.

We begin with the basic assumption, as follows:

$$P(AB) = P(B) \times P(A \mid B)$$

The preceding equation relates the joint probability P(AB) to the conditional probability P(A|B) and what is also known as the marginal probability, P(B). If we rewrite the equation, we have the expression for conditional probability, as follows:

$$P(A \mid B) = \frac{P(AB)}{P(B)}$$

This is somewhat intuitive: the probability of A given B is obtained by dividing the probability of both A and B occurring by the probability that B occurred. The idea is that B is given, so we divide by its probability. A more rigorous treatment of this equation can be found at http://bit.ly/1bCYXRd, which is titled Probability: Joint, Marginal and Conditional Probabilities.

Similarly, by symmetry, we have $P(AB) = P(BA) = P(A) \times P(B \mid A)$. Thus, $P(A) \times P(B \mid A) = P(B) \times P(A \mid B)$. By dividing both sides by P(B), and assuming P(B) != 0, we obtain this:

$$P(A \mid B) = \frac{P(A) \times P(B \mid A)}{P(B)}$$

The preceding equation is referred to as Bayes' theorem, the bedrock of all Bayesian statistical inference. In order to link Bayes' theorem to inferential statistics, we will recast the equation into what is called the diachronic interpretation, as follows:

$$P(H \mid D) = \frac{P(H) \times P(D \mid H)}{P(D)}$$

Here, H represents a hypothesis, and D represents an event that has already occurred, which we use in our statistical study and which is also referred to as data. Then:

• P(H) is the probability of our hypothesis before we observe the data. This is known as the prior probability. The use of prior probability is often touted as an advantage by Bayesian statisticians, since prior knowledge or previous results can be used as input for the current model, resulting in increased accuracy. For more information on this, refer to http://www.bayesian-inference.com/advantagesbayesian.
• P(D) is the probability of obtaining the data that we observe, regardless of the hypothesis. This is called the normalizing constant. The normalizing constant doesn't always need to be calculated, especially in many popular algorithms such as MCMC, which we will examine later in this chapter.
• P(H|D) is the probability that the hypothesis is true, given the data that we observe. This is called the posterior.
• P(D|H) is the probability of obtaining the data, given our hypothesis. This is called the likelihood.

Thus, Bayesian statistics amounts to applying Bayes' rule to solve problems in inferential statistics, with H representing our hypothesis and D the data. A Bayesian statistical model is cast in terms of parameters, and the uncertainty in these parameters is represented by probability distributions. This is different from the Frequentist approach, where the values are regarded as deterministic. An alternative representation is as follows:

$$P(\theta \mid x)$$

where $\theta$ is our unknown parameter and x is our observed data.

In Bayesian statistics, we make assumptions about the prior and use the likelihood to update to the posterior probability using Bayes' rule.
As an illustration, let us consider the following problem. Here is a classic case of what is commonly known as the urn problem:
• Two urns contain colored balls
• Urn one contains 50 red and 50 blue balls
• Urn two contains 30 red and 70 blue balls
• One of the two urns is randomly chosen (50 percent probability), and then a ball is drawn at random from one of the two urns

If a red ball is drawn, what is the probability that it came from urn one? We want $P(H \mid D)$, that is, $P(\text{ball came from urn one} \mid \text{red ball is drawn})$. Here, H denotes that the ball is drawn from urn one, and D denotes that the drawn ball is red:

$$P(H) = P(\text{ball is drawn from urn one}) = 0.5$$

We know that $P(H \mid D) = \frac{P(H) \times P(D \mid H)}{P(D)}$, with $P(D \mid H) = 0.5$ and

$$P(D) = \frac{50 + 30}{100 + 100} = 0.4$$

or, equivalently,

$$P(D) = P(H)P(D \mid H) + P({\sim}H)P(D \mid {\sim}H) = 0.5 \times 0.5 + 0.5 \times 0.3 = 0.25 + 0.15 = 0.4$$

Hence, we conclude that $P(H \mid D) = \frac{0.5 \times 0.5}{0.4} = \frac{0.25}{0.4} = 0.625$.

Bayes theory and odds

Bayes' theorem can sometimes be represented by a more natural and convenient form by using an alternative formulation of probability called odds. Odds are generally expressed in terms of ratios and are used heavily. A 3 to 1 odds (often written as 3:1) of a horse winning a race represents the fact that the horse is expected to win with 75 percent probability. Given a probability p, the odds can be computed as odds = p : (1 − p), which in the case of p = 0.75 becomes 0.75 : 0.25, which is 3:1.

We can rewrite the form of Bayes' theorem by using odds as:

$$o(A \mid D) = o(A)\,\frac{P(D \mid A)}{P(D \mid B)}$$

where B denotes the competing hypothesis.

Applications of Bayesian statistics

Bayesian statistics can be applied to many problems that we encounter in classical statistics, such as:
• Parameter estimation
• Prediction
• Hypothesis testing
• Linear regression

There are many compelling reasons for studying Bayesian statistics, one of them being the use of prior information to better inform the current model. The Bayesian approach works with probability distributions rather than point estimates, thus producing more realistic predictions. Bayesian inference bases a hypothesis on the available data, P(hypothesis|data), whereas the Frequentist approach tries to fit the data based on a hypothesis. It can be argued that the Bayesian approach is the more logical and empirical one, as it tries to base its belief on the facts rather than the other way round. For more information on this, refer to http://www.bayesian-inference.com/advantagesbayesian.

Probability distributions

In this section, we will briefly examine the properties of various probability distributions. Many of these distributions are used in Bayesian analysis; thus, a brief synopsis is needed. We will also illustrate how to generate and display these distributions using matplotlib. In order to avoid repeating import statements for every code snippet in each section, here is a standard set of Python imports that needs to be run before any of the code snippets that follow. You only need to run these imports once per session:

In [1]: import pandas as pd
        import numpy as np
        import matplotlib
        import matplotlib.pyplot as plt
        from matplotlib import colors
        %matplotlib inline

Fitting a distribution

One of the steps that we have to take in a Bayesian analysis is to fit our data to a probability distribution. Selecting the correct distribution can be somewhat of an art and often requires statistical knowledge and experience, but we can follow a few guidelines to help us along the way (a short fitting sketch follows the list):
• Determine whether the data is discrete or continuous
• Examine the skewness/symmetry of the data and, if skewed, determine the direction
• Determine the lower and upper limits, if any
• Determine the likelihood of observing extreme values in the distribution
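As a minimal, hedged illustration of the fitting step (the data here is synthetic, generated purely for demonstration), scipy.stats distributions expose a fit method that returns maximum likelihood estimates of their parameters:

In [ ]: from scipy import stats
        np.random.seed(0)
        data = np.random.normal(loc=5.0, scale=2.0, size=1000)  # synthetic sample
        loc_hat, scale_hat = stats.norm.fit(data)                # MLE of mu and sigma
        loc_hat, scale_hat                                       # approximately (5.0, 2.0)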
Selecting the correct distribution can be somewhat of an art and often requires statistical knowledge and experience, but we can follow a few guidelines to help us along the way; these are as follows: • Determine whether the data is discrete or continuous • Examine the skewness/ symmetry of the data and if skewed, determine the direction [ 203 ] A Brief Tour of Bayesian Statistics • Determine the lower and upper limits, if any • Determine the likelihood of observing extreme values in the distribution A statistical trial is a repeatable experiment with a set of well-defined outcomes that are known as the sample space. A Bernoulli trial is a Yes/No experiment where the random X variable is assigned the value of 1 in the case of a Yes and 0 in the case of a No. The event of tossing a coin and seeing whether it turns up heads is an example of a Bernoulli trial. There are two classes of probability distributions: discrete and continuous. In the following sections, we will discuss the differences between these two classes of distributions and take a tour of the main distributions. Discrete probability distributions In this scenario, the variable can take only certain distinct values such as integers. An example of a discrete random variable is the number of heads obtained when we flip a coin 5 times; the possible values are {0,1,2,3,4,5}. We cannot obtain 3.82 heads for example. The range of values the random variable can take is specified by what is known as a probability mass function (pmf). Discrete uniform distributions The discrete uniform distribution is a distribution that models an event with a finite set of possible outcomes where each outcome is equally likely to be observed. For n outcomes, each has a probability of occurrence of 1 n . An example of this is throwing a fair die. The probability of any of the six outcomes is 1 6 . The PMF is given by 1 n , and the expected value and variance are given by ( max + min ) 2 and n 2 − 1 12 respectively. In [13]: from matplotlib import pyplot as plt import matplotlib.pyplot as plt X=range(0,11) Y=[1/6.0 if x in range(1,7) else 0.0 for x in X] plt.plot(X,Y,'go-', linewidth=0, drawstyle='steps-pre', label="p(x)=1/6") plt.legend(loc="upper left") plt.vlines(range(1,7),0,max(Y), linestyle='-') plt.xlabel('x') plt.ylabel('p(x)') [ 204 ] Chapter 8 plt.ylim(0,0.5) plt.xlim(0,10) plt.title('Discrete uniform probability distribution with p=1/6') plt.show() discrete uniform distribution The Bernoulli distribution The Bernoulli distribution measures the probability of success in a trial; for example, the probability that a coin toss turns up a head or a tail. This can be represented by a random X variable that takes a value of 1 if the coin turns up as heads and 0 if it is tails. The probability of turning up heads or tails is denoted by p and q=1-p respectively. This can be represented by the following 1 − p, f (k ) = p, [ 205 ] k =0 k =1 A Brief Tour of Bayesian Statistics The expected value and variance are given by the following formula: E(X ) = p Var ( X ) = p (1 − p ) The reference for this information is at http://en.wikipedia.org/wiki/ Bernoulli_distribution. 
We now plot the Bernoulli distribution using matplotlib and scipy.stats, as follows:

In [20]: from scipy.stats import bernoulli
         a = np.arange(2)
         colors = matplotlib.rcParams['axes.color_cycle']
         plt.figure(figsize=(12,8))
         for i, p in enumerate([0.0, 0.2, 0.5, 0.75, 1.0]):
             ax = plt.subplot(1, 5, i+1)
             plt.bar(a, bernoulli.pmf(a, p), label=p, color=colors[i], alpha=0.5)
             ax.xaxis.set_ticks(a)
             plt.legend(loc=0)
             if i == 0:
                 plt.ylabel("PDF at $k$")
         plt.suptitle("Bernoulli probability for various values of $p$")
Out[20]: (figure: Bernoulli pmf bar charts for the five values of p)

The binomial distribution

The binomial distribution is used to represent the number of successes in n independent Bernoulli trials, that is, $Y = X_1 + X_2 + \cdots + X_n$. Using the coin toss example, this distribution models the chance of getting X heads over n trials. For 100 tosses, the binomial distribution models the likelihood of getting 0 heads (extremely unlikely) to 50 heads (highest likelihood) to 100 heads (also extremely unlikely). This makes the binomial distribution symmetrical when the odds are perfectly even and skewed when the odds are far less even. The pmf is given by the following expression:

$$f(k) = \binom{n}{k} p^k q^{n-k}, \quad 0 \le k \le n$$

The expectation and variance are given, respectively, by the following expressions:

$$E(X) = np, \quad Var(X) = np(1 - p)$$

In [5]: from scipy.stats import binom
        clrs = ['blue','green','red','cyan','magenta']
        plt.figure(figsize=(12,6))
        k = np.arange(0, 22)
        for p, color in zip([0.001, 0.1, 0.3, 0.6, 0.999], clrs):
            rv = binom(20, p)
            plt.plot(k, rv.pmf(k), lw=2, color=color, label="$p$=" + str(round(p,1)))
        plt.legend()
        plt.title("Binomial distribution PMF")
        plt.tight_layout()
        plt.ylabel("PDF at $k$")
        plt.xlabel("$k$")
Out[5]: (figure: binomial distribution pmfs for the five values of p)

The Poisson distribution

The Poisson distribution models the probability of a number of events occurring within a given time interval, assuming that these events occur with a known average rate and that successive events occur independently of the time since the previous event. A concrete example of a process that can be modeled by a Poisson distribution would be an individual receiving an average of, say, 23 e-mails per day. If we assume that the arrival times of the e-mails are independent of each other, the total number of e-mails the individual receives each day can be modeled by a Poisson distribution. Another example could be the number of trains that stop at a particular station each hour. The pmf for a Poisson distribution is given by the following expression:

$$f(k) = \frac{\lambda^k e^{-\lambda}}{k!}$$

where $\lambda$ is the rate parameter representing the expected number of events/arrivals that occur per unit time, and k is the random variable representing the number of events/arrivals. The expectation and variance are given, respectively, by the following formulas:

$$E(X) = \lambda, \quad Var(X) = \lambda$$

For more information, refer to http://en.wikipedia.org/wiki/Poisson_process.
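Continuing the e-mail example, a brief hedged sketch (my own): drawing a simulated year of daily e-mail counts from a Poisson distribution with λ = 23 and confirming that the sample mean and variance are both close to λ:

In [ ]: from scipy.stats import poisson
        np.random.seed(1)
        daily_emails = poisson.rvs(23, size=365)   # one simulated year of counts
        daily_emails.mean(), daily_emails.var()    # both approximately 23, since E(X) = Var(X) = lambda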
The pmf is plotted using matplotlib for various values of $\lambda$, as follows (the standard imports listed earlier are assumed):

In [11]: from scipy.stats import poisson
         colors = matplotlib.rcParams['axes.color_cycle']
         k = np.arange(15)
         plt.figure(figsize=(12,8))
         for i, lambda_ in enumerate([1, 2, 4, 6]):
             plt.plot(k, poisson.pmf(k, lambda_), '-o',
                      label="$\lambda$=" + str(lambda_), color=colors[i])
         plt.legend()
         plt.title("Poisson distribution PMF for various $\lambda$")
         plt.ylabel("PMF at $k$")
         plt.xlabel("$k$")
         plt.show()
Out[11]: (figure: Poisson distribution pmfs for the four values of lambda)

The Geometric distribution

For independent Bernoulli trials, the geometric distribution measures the number of trials X needed to get one success. It can also represent the number of failures, Y = X − 1, before the first success. The pmf is given by the following expression:

$$f(k) = p(1-p)^{k-1}$$

The preceding expression makes sense since $f(k) = P(X = k)$, and if it takes k trials to get one success (with probability p), this means that we must have had k − 1 failures, each with probability 1 − p. The expectation and variance are given as follows:

$$E(X) = \frac{1}{p}, \quad Var(X) = \frac{1-p}{p^2}$$

The following code plots the distribution for various values of p (a stray, redundant assignment to x before the loop has been removed):

In [12]: from scipy.stats import geom
         p_vals = [0.01, 0.2, 0.5, 0.8, 0.9]
         colors = matplotlib.rcParams['axes.color_cycle']
         for p, color in zip(p_vals, colors):
             x = np.arange(geom.ppf(0.01, p), geom.ppf(0.99, p))
             plt.plot(x, geom.pmf(x, p), '-o', ms=8, label='$p$=' + str(p))
         plt.legend(loc='best')
         plt.ylim(-0.5, 1.5)
         plt.xlim(0, 7.5)
         plt.ylabel("Pmf at $k$")
         plt.xlabel("$k$")
         plt.title("Geometric distribution PMF")
Out[12]: (figure: geometric distribution pmfs for the five values of p)

The negative binomial distribution

Also for independent Bernoulli trials, the negative binomial distribution measures the number of trials, X = k, needed before a specified number of successes, r, occurs. An example would be the number of coin tosses it would take to obtain 5 heads. The pmf is given as follows:

$$P(X = k) = f(k) = \binom{k-1}{r-1} p^r (1-p)^{k-r}$$

The expectation and variance (reconstructed here to be consistent with the trials-counting pmf above, since the extracted formulas were garbled) are given, respectively, by:

$$E(X) = \frac{r}{p}, \quad Var(X) = \frac{r(1-p)}{p^2}$$

We can see that the negative binomial is a generalization of the geometric distribution, with the geometric distribution being the special case of the negative binomial where r = 1. The code and plot are shown as follows (note that scipy's nbinom parametrizes the distribution by the number of failures before the r-th success, rather than by the total number of trials):

In [189]: from scipy.stats import nbinom
          clrs = matplotlib.rcParams['axes.color_cycle']
          x = np.arange(0, 11)
          n_vals = [0.1, 1, 3, 6]
          p = 0.5
          for n, clr in zip(n_vals, clrs):
              rv = nbinom(n, p)
              plt.plot(x, rv.pmf(x), label="$n$=" + str(n), color=clr)
          plt.legend()
          plt.title("Negative Binomial Distribution PMF")
          plt.ylabel("PMF at $x$")
          plt.xlabel("$x$")
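As a quick hedged check of the r = 1 special case (my own sketch, not from the text): scipy's geom counts trials while nbinom counts failures, so the geometric pmf at k trials should equal the nbinom(1, p) pmf at k − 1 failures:

In [ ]: from scipy.stats import geom, nbinom
        p = 0.3
        k = np.arange(1, 11)                              # trial counts
        np.allclose(geom.pmf(k, p), nbinom.pmf(k - 1, 1, p))
        # True: the geometric distribution is the negative binomial with r = 1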
Continuous probability distributions

In a continuous probability distribution, the variable can take on any real number. It is not limited to a finite set of values as with a discrete probability distribution. For example, the average weight of a healthy newborn baby can range approximately between 6 and 9 lbs; its weight can be 7.3 lbs, for example. A continuous probability distribution is characterized by a probability density function (PDF). The sum of all probabilities that the random variable can assume is 1; thus, the area under the graph of the probability density function is 1.

The continuous uniform distribution

The uniform distribution models a random variable X that can take any value within the range [a, b] with equal probability. The PDF is given by $f(x) = \frac{1}{b-a}$ for $a \le x \le b$, and 0 otherwise. The expectation and variance are given by the following expressions:

$$E(X) = \frac{a+b}{2}, \quad Var(X) = \frac{(b-a)^2}{12}$$

A continuous uniform probability distribution is generated and plotted for various sample sizes in the following code and figure:

In [11]: np.random.seed(100)  # seed the random number generator so plots are reproducible
         subplots = [111, 211, 311]
         ctr = 0
         fig, ax = plt.subplots(len(subplots), figsize=(10,12))
         nsteps = 10
         for i in range(0, 3):
             cud = np.random.uniform(0, 1, nsteps)  # generate the distribution
             count, bins, ignored = ax[ctr].hist(cud, 15, normed=True)
             ax[ctr].plot(bins, np.ones_like(bins), linewidth=2, color='r')
             ax[ctr].set_title('sample size=%s' % nsteps)
             ctr += 1
             nsteps *= 100
         fig.subplots_adjust(hspace=0.4)
         plt.suptitle("Continuous Uniform probability distributions for various sample sizes", fontsize=14)

The exponential distribution

The exponential distribution models the waiting time between two events in a Poisson process. A Poisson process is a process that follows a Poisson distribution, in which events occur unpredictably with a known average rate. The exponential distribution can be described as the continuous limit of the geometric distribution and is also Markovian (memoryless). A memoryless random variable exhibits the property whereby its future state depends only on relevant information about the current time, and not on information from further in the past. An example of modeling a Markovian/memoryless random variable is modeling short-term stock price behavior and the idea that it follows a random walk. This leads to what is called the Efficient Market Hypothesis in finance. For more information, refer to http://en.wikipedia.org/wiki/Random_walk_hypothesis.

The PDF of the exponential distribution is given by $f(x) = \lambda e^{-\lambda x}$. The expectation and variance are given by the following expressions:

$$E(X) = \frac{1}{\lambda}, \quad Var(X) = \frac{1}{\lambda^2}$$

For a reference, refer to the link at http://en.wikipedia.org/wiki/Exponential_distribution. The plot of the distribution and code are given as follows:

In [15]: import scipy.stats
         clrs = colors.cnames
         x = np.linspace(0, 4, 100)
         expo = scipy.stats.expon
         lambda_ = [0.5, 1, 2, 5]
         plt.figure(figsize=(12,4))
         for l, c in zip(lambda_, clrs):
             plt.plot(x, expo.pdf(x, scale=1./l), lw=2, color=c, label="$\lambda = %.1f$" % l)
         plt.legend()
         plt.ylabel("PDF at $x$")
         plt.xlabel("$x$")
         plt.title("Pdf of an Exponential random variable for various $\lambda$")

The normal distribution

The most important distribution in statistics is arguably the normal/Gaussian distribution. It models the probability distribution around a central value with no left or right bias. There are many examples of phenomena that follow the normal distribution, such as:
• The birth weights of babies
• Measurement errors
• Blood pressure
• Test scores

The normal distribution's importance is underlined by the central limit theorem, which states that the mean of many random variables drawn independently from the same distribution is approximately normal, regardless of the form of the original distribution. Its expected value and variance are given as follows:

$$E(X) = \mu, \quad Var(X) = \sigma^2$$

The PDF of the normal distribution is given by the following expression:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
Its expected value and variance are given as follows: E(X ) = µ Var ( X ) = σ 2 The PDF of the normal distribution is given by the following expression: − ( x − µ )2 f ( x) = exp 2σ 2 2πσ 2 1 [ 217 ] A Brief Tour of Bayesian Statistics The following code and plot explains the formula: In [54]: import matplotlib from scipy.stats import norm X = 2.5 dx = 0.1 R = np.arange(-X,X+dx,dx) L = list() sdL = (0.5,1,2,3) for sd in sdL: f = norm.pdf L.append([f(x,loc=0,scale=sd) for x in R]) colors = matplotlib.rcParams['axes.color_cycle'] for sd,c,P in zip(sdL,colors,L): plt.plot(R,P,zorder=1,lw=1.5,color=c, label="$\sigma$=" + str (sd)) plt.legend() ax = plt.axes() ax.set_xlim(-2.1,2.1) ax.set_ylim(0,1.0) plt.title("Normal distribution Pdf") plt.ylabel("PDF at $\mu$=0, $\sigma$") [ 218 ] Chapter 8 Reference for the Python code for the plotting of the distributions can be found at: http://bit.ly/1E17nYx. The normal distribution can also be regarded as the continuous limit of the binomial distribution and other distributions as n → ∞ . We can see this for the binomial distribution in the command and plots as follows: In [18]:from scipy.stats import binom from matplotlib import colors cols = colors.cnames n_values = [1, 5,10, 30, 100] subplots = [111+100*x for x in range(0,len(n_values))] ctr = 0 fig, ax = plt.subplots(len(subplots), figsize=(6,12)) k = np.arange(0, 200) p=0.5 for n, color in zip(n_values, cols): k=np.arange(0,n+1) rv = binom(n, p) ax[ctr].plot(k, rv.pmf(k), lw=2, color=color) ax[ctr].set_title("$n$=" + str(n)) ctr += 1 [ 219 ] A Brief Tour of Bayesian Statistics fig.subplots_adjust(hspace=0.5) plt.suptitle("Binomial distribution PMF (p=0.5) for various values of n", fontsize=14) As n increases, the binomial distribution approaches the normal distribution. In fact, for n>=30, this is clearly seen in the preceding plots. [ 220 ] Chapter 8 Bayesian statistics versus Frequentist statistics In statistics today, there are two schools of thought as to how we interpret data and make statistical inferences. The classic and more dominant approach to date has been what is termed the Frequentist approach (refer to Chapter 7, A Tour of Statistics – The Classical Approach), while we are looking at the Bayesian approach in this chapter. What is probability? At the heart of the debate between the Bayesian and Frequentist worldview is the question—how do we define probability? In the Frequentist worldview, probability is a notion that is derived from the frequencies of repeated events. For example, when we define the probability of getting heads when a fair coin is tossed as being equal to half. This is because when we repeatedly toss a fair coin, the number of heads divided by the total number of coin tosses approaches 0.5 when the number of coin tosses is sufficiently large. The Bayesian worldview is different, and the notion of probability is that it is related to one's degree of belief in the event happening. Thus, for a Bayesian statistician, having a belief that the probability of a fair die turning up 5 is 1 6 relates to our belief in the chances of that event occurring. How the model is defined From the model definition point of view Frequentists analyze how data and calculated metrics vary by making use of repeated experiments while keeping the model parameters fixed. 
Bayesian statistics versus Frequentist statistics
In statistics today, there are two schools of thought as to how we interpret data and make statistical inferences. The classic and more dominant approach to date has been what is termed the Frequentist approach (refer to Chapter 7, A Tour of Statistics – The Classical Approach), while we are looking at the Bayesian approach in this chapter.

What is probability?
At the heart of the debate between the Bayesian and Frequentist worldviews is the question: how do we define probability?
In the Frequentist worldview, probability is a notion that is derived from the frequencies of repeated events. For example, we define the probability of getting heads when a fair coin is tossed as being equal to half, because when we repeatedly toss a fair coin, the number of heads divided by the total number of coin tosses approaches 0.5 when the number of coin tosses is sufficiently large.
The Bayesian worldview is different: there, probability is related to one's degree of belief in the event happening. Thus, for a Bayesian statistician, the belief that the probability of a fair die turning up 5 is 1/6 relates to our belief in the chances of that event occurring.

How the model is defined
From the model definition point of view, Frequentists analyze how data and calculated metrics vary by making use of repeated experiments while keeping the model parameters fixed. Bayesians, on the other hand, utilize fixed experimental data, but vary their degrees of belief in the model parameters. This is summarized as follows:
• Frequentists: if the model is fixed, the data varies
• Bayesians: if the data is fixed, the model varies
The Frequentist approach uses what is known as the maximum likelihood method to estimate model parameters. It treats the data as a set of independent and identically distributed observations and fits the observed data to the model. The value of the model parameter that best fits the data is the maximum likelihood estimator (MLE), which can sometimes be a function of the observed data.
Bayesianism approaches the problem differently, from a probabilistic framework. A probability distribution is used to describe the uncertainty in the values. Bayesian practitioners estimate probabilities using observed data. In order to compute these probabilities, they make use of a single tool: the Bayes formula. This produces a full distribution rather than just a point estimate, as in the case of the Frequentist approach.

Confidence (Frequentist) versus Credible (Bayesian) intervals
Let us compare what is meant by a 95 percent confidence interval, a term used by Frequentists, with a 95 percent credible interval, used by Bayesian practitioners.
In a Frequentist framework, a 95 percent confidence interval means that if you repeat your experiment an infinite number of times, generating intervals in the process, 95 percent of these intervals would contain the parameter we're trying to estimate, which is often referred to as θ. In this case, the interval is the random variable, and not the parameter estimate θ, which is fixed in the Frequentist worldview.
In the case of the Bayesian credible interval, the interpretation is the intuitive one that is commonly (but incorrectly) ascribed to a Frequentist confidence interval. Given the observed data Y, we have:

    Pr(a(Y) < θ < b(Y) | Y) = 0.95

In this case, we can properly conclude that there is a 95 percent chance that θ lies within the interval.
For more information, refer to Frequentism and Bayesianism: What's the Big Deal? | SciPy 2014 | Jake VanderPlas at https://www.youtube.com/watch?v=KhAUfqhLakw.

Conducting Bayesian statistical analysis
Conducting a Bayesian statistical analysis involves the following steps:
1. Specifying a probability model: In this step, we fully describe the model using a probability distribution. Based on the distribution of a sample that we have taken, we try to fit a model to it and attempt to assign probabilities to unknown parameters.
2. Calculating a posterior distribution: The posterior distribution is a distribution that we calculate in light of observed data. In this step, we directly apply the Bayes formula. The posterior is specified as a function of the probability model that we specified in the previous step.
3. Checking our model: This is a necessary step where we review our model and its outputs before we make inferences.
Bayesian inference methods use probability distributions to assign probabilities to possible outcomes.
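To make these steps concrete, here is a minimal grid-based sketch (an illustrative addition, assuming a Bernoulli likelihood and a uniform prior, neither of which comes from the original example):

import numpy as np

theta = np.linspace(0, 1, 101)    # candidate values of the unknown parameter
prior = np.ones_like(theta)       # step 1: a flat (uniform) prior
heads, tosses = 7, 10
# likelihood of observing 7 heads in 10 tosses for each candidate theta
like = theta**heads * (1 - theta)**(tosses - heads)
post = prior * like
post /= post.sum()                # step 2: normalize, i.e. apply Bayes' formula
print(theta[post.argmax()])       # step 3: inspect; the posterior mode is ~0.7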
Monte Carlo estimation of the likelihood function and PyMC
Bayesian statistics isn't just another method. It is an entirely alternative paradigm for practicing statistics. It uses probability models for making inferences, given the data that we have collected. This can be expressed in a fundamental expression as P(H|D).
Here, H is our hypothesis, that is, the thing we're trying to prove, and D is our data or observations.
As a reminder from our previous discussion, the diachronic form of Bayes' theorem is as follows:

    P(H | D) = P(H) × P(D | H) / P(D)

Here, P(H) is an unconditional prior probability that represents what we know before we conduct our trial. P(D|H) is our likelihood function, that is, the probability of obtaining the data we observe, given that our hypothesis is true.
P(D) is the probability of the data, also known as the normalizing constant. This can be obtained by integrating the numerator over H.
The likelihood function is the most important piece in our Bayesian calculation and encapsulates all of the information concerning the unknowns in the data. It has some semblance to a reverse probability mass function.
One argument against adopting a Bayesian approach is that the calculation of the prior can be subjective. There are many arguments in favor of this approach; among them is that external prior information can be included, as mentioned previously.
The normalizing constant represents an unknown integral, which in simple cases can be obtained by analytic integration. Monte Carlo (MC) integration is needed for more complicated use cases involving higher-dimensional integrals, and it can be used to compute the likelihood function.
MC integration can be computed via a variety of sampling methods, such as uniform sampling, stratified sampling, and importance sampling. In Monte Carlo integration, we approximate the integral

    P(g) = ∫ g dP

by the following finite sum:

    Pₙ(g) = (1/n) Σᵢ₌₁ⁿ g(Xᵢ)

where the Xᵢ are a sample vector drawn from P. The proof that this estimate is a good one follows from the law of large numbers and from making sure that the simulation error is small.
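Here is a minimal sketch of that sum in NumPy (an illustrative addition): we estimate E[X²] for a standard normal, whose exact value is Var(X) = 1, by averaging g over samples drawn from P.

import numpy as np

np.random.seed(2)
x = np.random.normal(0, 1, size=1000000)  # samples X_i drawn from P = N(0, 1)
g = lambda t: t ** 2
print(g(x).mean())   # ~1.0; by the law of large numbers this approaches the integral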
In conducting Bayesian analysis in Python, we will need a module that enables us to calculate the likelihood function using the Monte Carlo method described earlier. The PyMC library fulfills that need. It provides a Monte Carlo method known commonly as Markov chain Monte Carlo (MCMC). I will not delve further into the technical details of MCMC, but the interested reader can find out more about the MCMC implementation in PyMC at the following references:
• Monte Carlo Integration in Bayesian Estimation at http://bit.ly/1bMALeu
• Markov Chain Monte Carlo Maximum Likelihood at http://bit.ly/1KBP8hH
• Bayesian Statistical Analysis Using Python - Part 1 | SciPy 2014, Chris Fonnesbeck at http://www.youtube.com/watch?v=vOBB_ycQ0RA
MCMC is not a universal panacea; there are some drawbacks to the approach, and one of them is the slow convergence of the algorithm.

Bayesian analysis example – Switchpoint detection
Here, we will try to use Bayesian inference to model an interesting dataset. The dataset in question consists of the author's Facebook (FB) post history over time. We have scrubbed the FB history data and saved the dates in the fb_post_dates.txt file. Here is what the data in the file looks like:

head -2 ../fb_post_dates.txt
Tuesday, September 30, 2014 | 2:43am EDT
Tuesday, September 30, 2014 | 2:22am EDT

Thus, we see a datetime series, representing the date and time at which the author posted on FB.
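As a brief aside (a sketch added here, using only the standard library), a single entry in this format can be parsed by splitting on the | separator; the stamp below is copied from the head output above:

import datetime as dt

stamp = "Tuesday, September 30, 2014 | 2:43am EDT"
date_part, time_part = [s.strip() for s in stamp.split('|')]
# %A = weekday name, %B = month name, %d = day, %Y = year
print(dt.datetime.strptime(date_part, "%A, %B %d, %Y").date())  # 2014-09-30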
First, we read the file into a DataFrame, separating the timestamp into Date and Time columns:

In [91]: filePath="./data/fb_post_dates.txt"
         fbdata_df=pd.read_csv(filePath, sep='|', parse_dates=[0],
                               header=None, names=['Date','Time'])

Next, we inspect the data as follows:

In [92]: fbdata_df.head()   # inspect the data
Out[92]:         Date        Time
         0 2014-09-30  2:43am EDT
         1 2014-09-30  2:22am EDT
         2        ...  2:06am EDT
         3        ...  1:07am EDT
         4        ...  9:16pm EDT

Now, we index the data by Date, creating a DatetimeIndex so that we can run resample on it to count by month, as follows:

In [115]: fbdata_df_ind=fbdata_df.set_index('Date')
          fbdata_df_ind.head(5)
Out[115]:                   Time
          Date
          2014-09-30  2:43am EDT
          2014-09-30  2:22am EDT
          ...         2:06am EDT
          ...         1:07am EDT
          ...         9:16pm EDT

We display information about the index as follows:

In [116]: fbdata_df_ind.index
Out[116]: [2014-09-30, ..., 2007-04-16]
          Length: 7713, Freq: None, Timezone: None

We now obtain the count of posts by month, using resample:

In [99]: fb_mth_count_=fbdata_df_ind.resample('M', how='count')
         fb_mth_count_.rename(columns={'Time':'Count'},
                              inplace=True)   # Rename
         fb_mth_count_.head()
Out[99]:             Count
         Date
         2007-04-30      1
         2007-05-31      0
         2007-06-30      5
         2007-07-31     50
         2007-08-31     24

The Date format is shown as the last day of the month. Now, we create a scatter plot of FB post counts from 2007-2015, and we make the size of the dots proportional to the values in matplotlib:

In [108]: %matplotlib inline
          import datetime as dt
          # Obtain the count data from the DataFrame as a dictionary
          year_month_count = fb_mth_count_.to_dict()['Count']
          size=len(year_month_count.keys())
          # get dates as list of strings
          xdates=[dt.datetime.strptime(str(yyyymm),'%Y%m')
                  for yyyymm in year_month_count.keys()]
          counts=year_month_count.values()
          plt.scatter(xdates,counts,s=counts)
          plt.xlabel('Year')
          plt.ylabel('Number of Facebook posts')
          plt.show()

The question we would like to investigate is whether there was a change in behavior at some point over the time period. Specifically, we wish to identify whether there was a specific period at which the mean number of FB posts changed. This is often referred to as the Switchpoint, or changepoint, of a time series.
We can make use of the Poisson distribution to model this. You might recall that the Poisson distribution can be used to model time series count data. (Refer to http://bit.ly/1JniIqy for more about this.)
If we represent our monthly FB post count by Cᵢ, we can represent our model as follows:

    (Cᵢ | s, e, l) ∼ Poisson(rᵢ)

The rᵢ parameter is the rate parameter of the Poisson distribution, but we don't know what its value is. If we examine the scatter plot of the FB time series count data, we can see that there was a jump in the number of posts sometime around mid-to-late 2010, perhaps coinciding with the start of the 2010 World Cup in South Africa, which the author attended.
The s parameter is the Switchpoint, which is when the rate parameter changes, while e and l are the values of the rᵢ parameter before and after the Switchpoint, respectively. This can be represented as follows:

    rᵢ = e   if i < s
    rᵢ = l   if i ≥ s

Note that the variables specified above (C, s, e, r, l) are all Bayesian random variables. For Bayesian random variables, which represent one's beliefs about their values, we need to model them using a probability distribution. We would like to infer the values of e and l, which are unknown. In PyMC, we can represent random variables using the Stochastic and Deterministic classes. We note that the exponential distribution models the amount of time between Poisson events.
Hence, in the case of e and l, we choose the exponential distribution to model them, since they can be any positive number:

    e ∼ Exp(r)
    l ∼ Exp(r)

In the case of s, we will choose to model it using the uniform distribution, which reflects our belief that it is equally likely that the Switchpoint can occur on any day within the entire time period. Hence, we have:

    s ∼ DiscreteUniform(t0, tf)

Here, t0 and tf correspond to the lower and upper boundaries of the time period, indexed by month i.
Let us now use PyMC to represent the model that we developed earlier, and to see whether we can detect a Switchpoint in the FB post data. In addition to the scatter plot, we can also display the data in a bar chart. In order to do that, first of all, we need to obtain a count of FB posts ordered by month in a list:

In [69]: fb_activity_data = [year_month_count[k] for k in
                             sorted(year_month_count.keys())]
         fb_activity_data[:5]
Out[70]: [1, 0, 5, 50, 24]

In [71]: fb_post_count=len(fb_activity_data)

We render the bar plot using matplotlib:

In [72]: from IPython.core.pylabtools import figsize
         import matplotlib.pyplot as plt
         figsize(8, 5)
         plt.bar(np.arange(fb_post_count), fb_activity_data,
                 color="#49a178")
         plt.xlabel("Time (months)")
         plt.ylabel("Number of FB posts")
         plt.title("Monthly Facebook posts over time")
         plt.xlim(0, fb_post_count);

Looking at the preceding bar chart, can one conclude whether there was a change in FB posting behavior over time? We can use PyMC on the model that we have developed to help us find out, as follows:

In [88]: # Define data and stochastics
         import pymc as pm
         switchpoint = pm.DiscreteUniform('switchpoint',
                                          lower=0,
                                          upper=len(fb_activity_data)-1,
                                          doc='Switchpoint[month]')
         avg = np.mean(fb_activity_data)
         early_mean = pm.Exponential('early_mean', beta=1./avg)
         late_mean = pm.Exponential('late_mean', beta=1./avg)

Here, we define a method for the rate parameter, r, and we model the count data using a Poisson distribution, as discussed previously:

In [89]: @pm.deterministic(plot=False)
         def rate(s=switchpoint, e=early_mean, l=late_mean):
             ''' Concatenate Poisson means '''
             out = np.zeros(len(fb_activity_data))
             out[:s] = e    # mean rate before the Switchpoint
             out[s:] = l    # mean rate from the Switchpoint onward
             return out

         fb_activity = pm.Poisson('fb_activity', mu=rate,
                                  value=fb_activity_data, observed=True)

In the preceding code snippet, @pm.deterministic is a decorator that denotes that the rate function is deterministic, meaning that its values are entirely determined by other variables (in this case, e, s, and l). The decorator is necessary in order to tell PyMC to convert the rate function into a deterministic object. If we do not specify the decorator, an error occurs. (For more information on Python decorators, refer to http://bit.ly/1zj8U0o.)
For more information, refer to the following web pages:
• http://en.wikipedia.org/wiki/Poisson_process
• http://pymc-devs.github.io/pymc/tutorial.html
• https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
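Before fitting, a toy illustration of what the rate function above computes may be useful (an added sketch; the values of s, e, and l here are made up purely for demonstration):

import numpy as np

s, e, l = 3, 2.0, 10.0          # assumed values, for illustration only
out = np.zeros(8)
out[:s] = e                     # Poisson mean before the Switchpoint
out[s:] = l                     # Poisson mean from the Switchpoint onward
print(out)                      # [ 2.  2.  2. 10. 10. 10. 10. 10.]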
We now create a model with the FB count data (fb_activity) and the e, s, l (early_mean, late_mean, and rate, respectively) parameters. Next, using PyMC, we create an MCMC object that enables us to fit our data using Markov chain Monte Carlo methods. We then call sample on the resulting MCMC object to do the fitting:

In [94]: fb_activity_model=pm.Model([fb_activity,early_mean,
                                     late_mean,rate])
In [95]: from pymc import MCMC
         fbM=MCMC(fb_activity_model)
In [96]: fbM.sample(iter=40000, burn=1000, thin=20)
         [-----------------100%-----------------] 40000 of 40000 complete in 11.0 sec

Fitting the model using MCMC involves using Markov chain Monte Carlo methods to generate a probability distribution for the posterior, P(s,e,l | D). It uses the Monte Carlo process to repeatedly simulate sampling of the data, and does this until the algorithm seems to converge to a steady state, based on multiple criteria. This is a Markov process because successive samples depend only on the previous sample. (For further reference on Markov chain convergence, refer to http://bit.ly/1IETkhC.)
The generated samples are referred to as traces. We can view what the marginal posterior distribution of a parameter looks like by viewing a histogram of its trace:

In [97]: from pylab import hist, show
         %matplotlib inline
         hist(fbM.trace('late_mean')[:])
Out[97]: (array([...]), array([ 102.29451192, ...]), ...)

In [98]: plt.hist(fbM.trace('early_mean')[:])
Out[98]: (array([...]), array([ 49.19781192, ..., 56.2361871 , 57.9957809 ]), ...)

Here, we see what the Switchpoint looks like in terms of the number of months:

In [99]: fbM.trace('switchpoint')[:]
Out[99]: array([38, 38, 38, ..., 35, 35, 35])

In [150]: plt.hist(fbM.trace('switchpoint')[:])
Out[150]: (array([ 1899., 0., 0., 0., ...]),
           array([ 35. , 35.6, ..., 37.1, 37.4, 37.7, 38. ]), ...)

(Figure: histogram of the Switchpoint trace.)

We can see that the Switchpoint is in the neighborhood of months 35-38. Here, we use matplotlib to display the marginal posterior distributions of e, s, and l in a single figure:

In [141]: early_mean_samples=fbM.trace('early_mean')[:]
          late_mean_samples=fbM.trace('late_mean')[:]
          switchpoint_samples=fbM.trace('switchpoint')[:]
In [142]: from IPython.core.pylabtools import figsize
          figsize(12.5, 10)
          # histogram of the samples:
          fig = plt.figure()
          fig.subplots_adjust(bottom=-0.05)
          n_mths=len(fb_activity_data)
          ax = plt.subplot(311)
          ax.set_autoscaley_on(False)
          plt.hist(early_mean_samples, histtype='stepfilled',
                   bins=30, alpha=0.85, label="posterior of $e$",
                   color="turquoise", density=True)
          # density=True replaces the normed=True of older matplotlib
          plt.legend(loc="upper left")
          plt.title(r"""Posterior distributions of the variables
                    $e, l, s$""", fontsize=16)
          plt.xlim([40, 120])
          plt.ylim([0, 0.6])
          plt.xlabel("$e$ value", fontsize=14)
          ax = plt.subplot(312)
          ax.set_autoscaley_on(False)
          plt.hist(late_mean_samples, histtype='stepfilled',
                   bins=30, alpha=0.85, label="posterior of $l$",
                   color="purple", density=True)
          plt.legend(loc="upper left")
          plt.xlim([40, 120])
          plt.ylim([0, 0.6])
          plt.xlabel("$l$ value", fontsize=14)
          plt.subplot(313)
          w = 1.0 / switchpoint_samples.shape[0] * np.ones_like(switchpoint_samples)
          plt.hist(switchpoint_samples, bins=range(0, n_mths), alpha=1,
                   label=r"posterior of $s$", color="green",
                   weights=w, rwidth=2.)
          plt.xlim([20, n_mths - 20])
          plt.xlabel(r"$s$ (in months)", fontsize=14)
          plt.ylabel("probability")
          plt.legend(loc="upper left")
          plt.show()

(Figure: marginal posterior distributions of e, l, and s.)
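As a brief follow-up (a sketch added here, assuming the fbM object from the fit above), point estimates can be read straight off the traces:

import numpy as np

print(fbM.trace('early_mean')[:].mean())       # posterior mean of e
print(fbM.trace('late_mean')[:].mean())        # posterior mean of l
print(np.median(fbM.trace('switchpoint')[:]))  # posterior median of s, in months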
PyMC also has plotting functionality (it uses matplotlib). In the following plots, we display a time series plot, an autocorrelation plot (acorr), and a histogram of the samples drawn for the early mean, the late mean, and the Switchpoint. The histogram is useful for visualizing the posterior distribution. The autocorrelation plot shows whether values in the previous period are strongly related to values in the current period.

In [100]: from pymc.Matplot import plot
          plot(fbM)
          Plotting late_mean
          Plotting switchpoint
          Plotting early_mean

The following is the late mean plot: (Figure: PyMC comprehensive plot for late_mean.)
Here, we display the Switchpoint plot: (Figure: PyMC comprehensive plot for switchpoint.)
Here, we display the early mean plot: (Figure: PyMC comprehensive plot for early_mean.)

From the output of PyMC, we can conclude that the Switchpoint is around 35-38 months from the start of the time series. This corresponds to sometime around March-July 2010. The author can testify that this was a banner year for him with respect to the use of FB, since it was the year of the football (soccer) World Cup finals that were held in South Africa, which he attended.
For a more in-depth look at the Bayesian statistics topics that we touched upon, please take a look at the following references:
• Probabilistic Programming and Bayesian Methods for Hackers at https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
• Bayesian Data Analysis, Third Edition, Andrew Gelman at http://www.amazon.com/Bayesian-Analysis-Chapman-Statistical-Science/dp/1439840954
• The Bayesian Choice, Christian P Robert (this is more theoretical) at http://www.springer.com/us/book/9780387952314
• PyMC documentation at http://pymc-devs.github.io/pymc/index.html

In this chapter, we undertook a whirlwind tour of one of the hottest trends in statistics and data analysis in the past few years: the Bayesian approach to statistical inference. We covered a lot of ground here. We examined what the Bayesian approach to statistics entails and discussed the various factors as to why the Bayesian view is a compelling one. We explained the key statistical distributions and showed how we can use the various statistical packages to generate and plot them in matplotlib. We tackled a rather difficult topic without too much oversimplification and demonstrated how we can use the PyMC package and Monte Carlo simulation methods to showcase the power of Bayesian statistics to formulate models, do trend analysis, and make inferences on a real-world dataset (Facebook user posts). In the next chapter, we will discuss the pandas library architecture.

The pandas Library Architecture
In this chapter, we examine the various libraries that are available to pandas' users. This chapter is intended to be a short guide to help the user navigate and find their way around the various modules and libraries that pandas provides. It gives a breakdown of how the library code is organized, and it also provides a brief description of the various modules. It will be most valuable to users who are interested in seeing the inner workings of pandas, as well as to those who wish to make contributions to the code base. We will also briefly demonstrate how you can improve performance using Python extensions.
The various topics that will be discussed are as follows:
• Introduction to pandas' library hierarchy
• Description of pandas' modules and files
• Improving performance using Python extensions

Introduction to pandas' file hierarchy
Generally, upon installation, pandas gets installed as a Python module in a standard location for third-party Python modules:

Platform      Standard installation location        Example
Unix/Mac OS   prefix/lib/pythonX.Y/site-packages    /usr/local/lib/python2.7/site-packages
Windows       prefix\Lib\site-packages              C:\Python27\Lib\site-packages

The installed files follow a specific hierarchy:
• pandas/core: This contains files for fundamental data structures such as Series/DataFrames and related functionality.
• pandas/src: This contains Cython and C code for implementing fundamental algorithms.
• pandas/io: This contains input/output tools (such as flat files, Excel, HDF5, SQL, and so on).
• pandas/tools: This contains auxiliary data algorithms: merge and join routines, concatenation, pivot tables, and more.
• pandas/sparse: This contains sparse versions of Series, DataFrame, Panel, and more.
• pandas/stats: This contains linear and Panel regression, and moving window regression. This should be replaced by functionality in statsmodels.
• pandas/util: This contains utilities, development, and testing tools.
• pandas/rpy: This contains the RPy2 interface for connecting to R.
For reference, see: http://pandas.pydata.org/developers.html.

Description of pandas' modules and files
In this section, we provide brief descriptions of the various submodules and files that make up pandas' library.

pandas/core
This module contains the core submodules of pandas. They are discussed as follows:
• api.py: This imports some key modules for later use.
• array.py: This isolates pandas' exposure to NumPy, that is, all direct NumPy usage.
• base.py: This defines fundamental classes, such as StringMixin and PandasObject, the latter being the base class for various pandas objects such as Period, PandasSQLTable, sparse.array.SparseArray/SparseList, internals.Block, internals.BlockManager, generic.NDFrame, groupby.GroupBy, base.FrozenList, base.FrozenNDArray, io.sql.PandasSQL, io.sql.PandasSQLTable, and tseries.period.Period. It also defines IndexOpsMixin and DatetimeIndexOpsMixin.
• common.py: This defines common utility methods for handling data structures. For example, isnull() detects missing values.
• config.py: This is the module for handling package-wide configurable objects. It defines the following: OptionError, DictWrapper, CallableDynamicDoc, option_context, and config_init.
• datetools.py: This is a collection of functions that deal with dates in Python.
• frame.py: This defines pandas' DataFrame class and its various methods. DataFrame inherits from NDFrame (see below).
• generic.py: This defines the generic NDFrame base class, which is a base class for pandas' DataFrame, Series, and Panel classes. NDFrame is derived from PandasObject, which is defined in base.py. An NDFrame can be regarded as an N-dimensional version of a pandas DataFrame. For more information on this, go to http://nullege.com/codes/search/pandas.core.generic.NDFrame.
• categorical.py: This defines Categorical, a class that derives from PandasObject and represents categorical variables a la R/S-plus. (We will expand on this a bit more later.)
• format.py: This defines a whole host of Formatter classes, such as CategoricalFormatter, SeriesFormatter, TableFormatter, DataFrameFormatter, HTMLFormatter, CSVFormatter, ExcelCell, ExcelFormatter, GenericArrayFormatter, FloatArrayFormatter, IntArrayFormatter, Datetime64Formatter, Timedelta64Formatter, and EngFormatter.
• groupby.py: This defines various classes that enable the groupby functionality. They are discussed as follows:
°° Splitter classes: These include DataSplitter, ArraySplitter, SeriesSplitter, FrameSplitter, and NDFrameSplitter.
°° Grouper/Grouping classes: These include Grouper, GroupBy, BaseGrouper, BinGrouper, Grouping, SeriesGroupBy, and NDFrameGroupBy.
• ops.py: This defines an internal API for arithmetic operations on PandasObjects. It defines functions that add arithmetic methods to objects. It defines a _create_methods meta method, which is used to create other methods using arithmetic, comparison, and Boolean method constructors. The add_methods method takes a list of new methods, adds them to the existing list of methods, and binds them to their appropriate classes. The add_special_arithmetic_methods and add_flex_arithmetic_methods methods call _create_methods and add_methods to add arithmetic methods to a class.
It also defines the _TimeOp class, which is a wrapper for datetime-related arithmetic operations. It contains wrapper functions for arithmetic, comparison, and Boolean operations on Series, DataFrame, and Panel functions: _arith_method_SERIES(..), _comp_method_SERIES(..), _bool_method_SERIES(..), _flex_method_SERIES(..), _arith_method_FRAME(..), _comp_method_FRAME(..), _flex_comp_method_FRAME(..), _arith_method_PANEL(..), and _comp_method_PANEL(..).
• index.py: This defines the Index class and its related functionality. Index is used by all pandas objects (Series, DataFrame, and Panel) to store axis labels. Underneath, it is an immutable array that provides an ordered set that can be sliced.
• internals.py: This defines multiple object classes. These are listed as follows:
°° Block: This is a homogeneously typed N-dimensional numpy.ndarray object with additional functionality for pandas. For example, it uses __slots__ to restrict the attributes of the object to 'ndim', 'values', and '_mgr_locs'. It acts as the base class for the other Block subclasses.
°° NumericBlock: This is the base class for Blocks with the numeric type.
°° FloatOrComplexBlock: This is the base class for FloatBlock and ComplexBlock, and it inherits from NumericBlock.
°° ComplexBlock: This is the class that handles Block objects with the complex type.
°° FloatBlock: This is the class that handles Block objects with the float type.
°° IntBlock: This is the class that handles Block objects with the integer type.
°° TimeDeltaBlock, BoolBlock, and DatetimeBlock: These are the Block classes for timedelta, Boolean, and datetime.
°° ObjectBlock: This is the class that handles Block objects for user-defined objects.
°° SparseBlock: This is the class that handles sparse arrays of the same type.
°° BlockManager: This is the class that manages a set of Block objects. It is not a public API class.
°° SingleBlockManager: This is the class that manages a single Block.
°° JoinUnit: This is a utility class for Block objects.
• matrix.py: This imports DataFrame as DataMatrix.
• nanops.py: These are the classes and functionality for handling NaN values.
• ops.py: This defines arithmetic operations for pandas objects. It is not a public API.
• panel.py, panel4d.py, and panelnd.py: These provide the functionality for the pandas Panel object.
• series.py: This defines the pandas Series class and its various methods; Series inherits from NDFrame and IndexOpsMixin.
• sparse.py: This defines imports for handling sparse data structures. Sparse data structures are compressed, whereby data points matching NaN or missing values are omitted. For more information on this, go to http://pandas.pydata.org/pandas-docs/stable/sparse.html.
• strings.py: This has various functions for handling strings.

pandas/io
This module contains various modules for data I/O. These are discussed as follows:
• api.py: This defines various imports for the data I/O API.
• auth.py: This defines various methods dealing with authentication.
• common.py: This defines the common functionality for the I/O API.
• data.py: This defines classes and methods for handling data. The DataReader method reads data from various online sources, such as Yahoo! and Google.
• date_converters.py: This defines date conversion functions.
• excel.py: This module parses and converts Excel data. It defines the ExcelFile and ExcelWriter classes.
• ga.py: This is the module for the Google Analytics functionality.
• gbq.py: This is the module for Google's BigQuery.
• html.py: This is the module for dealing with HTML I/O.
• json.py: This is the module for dealing with JSON I/O in pandas. It defines the Writer, SeriesWriter, FrameWriter, Parser, SeriesParser, and FrameParser classes.
• packer.py: This is msgpack serializer support for reading and writing pandas data structures to disk.
• parsers.py: This is the module that defines various functions and classes that are used in parsing and processing files to create pandas DataFrames. All three of the read_* functions discussed as follows have multiple configurable options for reading. See this reference for more details: http://bit.ly/1e4Xqo1.
°° read_csv(..): This defines the pandas.read_csv() function, which is useful for reading the contents of a CSV file into a DataFrame.
°° read_table(..): This reads a tab-separated table file into a DataFrame.
°° read_fwf(..): This reads a fixed-width format file into a DataFrame. A fixed-width data file contains fields in specific positions within the file.
°° TextFileReader: This is the class that is used for reading text files.
°° ParserBase: This is the base class for parser objects.
°° CParserWrapper, PythonParser: These are the parsers for C and Python, respectively. They both inherit from ParserBase.
°° FixedWidthReader: This is the class for reading fixed-width data.
°° FixedWidthFieldParser: This is the class for parsing fixed-width fields; it inherits from PythonParser.
°° PandasSQLAlchemy: This is the subclass of PandasSQL that PandasSQLTable class: This maps pandas tables (DataFrame) pandasSQL_builder(..): This returns the correct PandasSQL PandasSQLTableLegacy class: This is the legacy support version of PandasSQLTable. PandasSQLLegacy class: This is the legacy support version of PandasSQLTable. get_schema(..): This gets the SQL database table schema for a read_sql_table(..): This reads SQL db table into a DataFrame. read_sql_query(..): This reads SQL query into a DataFrame. read_sql(..): This reads SQL query/table into a DataFrame. enables conversions between DataFrame and SQL databases using SQLAlchemy. to SQL tables. subclass based on the provided parameters. given frame. • to_sql(..): This write records that are stored in a DataFrame to a SQL database. • stata.py: This contains tools for processing Stata files into pandas DataFrames. • wb.py: This is the module for downloading data from World Bank's website. [ 245 ] The pandas Library Architecture • util.py: This has miscellaneous util functions defined such as match(..), cartesian_product(..), and compose(..). • tile.py: This has a set of functions that enable quantization of input data and hence tile functionality. Most of the functions are internal, except for cut(..) and qcut(..). • rplot.py: This is the module that provides the functionality to generate trellis plots in pandas. • plotting.py: This provides a set of plotting functions that take a Series or DataFrame as an argument. °° scatter_matrix(..): This draws a matrix of scatter plots andrews_curves(..): This plots multivariate data as curves that are parallel_coordinates(..): This is a plotting technique that allows lag_plot(..): This is used to check whether a dataset or a time autocorrelation_plot(..): This is used for checking randomness bootstrap_plot(..): This plot is used to determine the uncertainty radviz(..): This plot is used to visualize multivariate data created using samples as coefficients for a Fourier series you to see clusters in data and visually estimate statistics series is random in a time series of a statistical measure such as mean or median in a visual manner Reference for the preceding information is from: http://pandas.pydata.org/pandas-docs/ stable/visualization.html • pivot.py: This function is for handling pivot tables in pandas. It is the main function pandas.tools.pivot_table(..) which creates a spreadsheet-like pivot table as a DataFrame Reference for the preceding information is from: http://pandas.pydata.org/pandas-docs/ stable/reshaping.html [ 246 ] Chapter 9 • merge.py: This provides functions for combining the Series, DataFrame, and Panel objects such as merge(..) and concat(..) • describe.py: This provides a single value_range(..) function that returns the maximum and minimum of a DataFrame as a Series. This is the module that provides sparse implementations of Series, DataFrame, and Panel. By sparse, we mean arrays where values such as missing or NA are omitted rather than kept as 0. For more information on this, go to http://pandas.pydata.org/pandas-docs/ version/stable/sparse.html. 
• api.py: It is a set of convenience imports • array.py: It is an implementation of the SparseArray data structure • frame.py: It is an implementation of the SparseDataFrame data structure • list.py: It is an implementation of the SparseList data structure • panel.py: It is an implementation of the SparsePanel data structure • series.py: It is an implementation of the SparseSeries data structure • api.py: This is a set of convenience imports. • common.py: This defines internal functions called by other functions in a module. • fama_macbeth.py: This contains class definitions and functions for the Fama-Macbeth regression. For more information on FM regression, go to http://en.wikipedia.org/wiki/Fama-MacBeth_regression. • interface.py: It defines ols(..) which returns an Ordinary Least Squares (OLS) regression object. It imports from pandas.stats.ols module. • math.py: This has useful functions defined as follows: °° rank(..), solve(..), and inv(..): These are used for matrix rank, is_psd(..): This checks positive-definiteness of matrix newey_west(..): This is for covariance matrix computation calc_F(..): This computes F-statistic solution, and inverse respectively [ 247 ] The pandas Library Architecture • misc.py: This is used for miscellaneous functions. • moments.py: This provides rolling and expanding statistical measures including moments that are implemented in Cython. These methods include: rolling_count(..), rolling_cov(..), rolling_corr(..), rolling_ corr_pairwise(..), rolling_quantile(..), rolling_apply(..), rolling_window(..), expanding_count(..), expanding_quantile(..), expanding_cov(..), expanding_corr(..), expanding_corr_ pairwise(..), expanding_apply(..), ewma(..), ewmvar(..), ewmstd(..), ewmcov(..), and ewmcorr(..). • ols.py: This implements OLS and provides the OLS and MovingOLS classes. OLS runs a full sample Ordinary Least-Squares Regression, whereas MovingOLS generates a rolling or an expanding simple OLS. • plm.py: This provides linear regression objects for Panel data. These classes are discussed as follows: °° PanelOLS: This is the OLS for Panel object MovingPanelOLS: This is the rolling/expanded OLS for Panel object NonPooledPanelOLS:- This is the nonpooled OLS for Panel object • var.py: This provides vector auto-regression classes discussed as follows: °° VAR: This is the vector auto-regression on multi-variate data in Series PanelVAR: This is the vector auto-regression on multi-variate data in and DataFrames Panel objects For more information on vector autoregression, go to: http://en.wikipedia.org/wiki/Vector_autoregression • testing.py: This provides the assertion, debug, unit test, and other classes/ functions for use in testing. It contains many special assert functions that make it easier to check whether Series, DataFrame, or Panel objects are equivalent. Some of these functions include assert_equal(..), assert_ series_equal(..), assert_frame_equal(..), and assert_panelnd_ equal(..). The pandas.util.testing module is especially useful to the contributors of the pandas code base. It defines a util.TestCase class. It also provides utilities for handling locales, console debugging, file cleanup, comparators, and so on for testing by potential code base contributors. [ 248 ] Chapter 9 • terminal.py: This function is mostly internal and has to do with obtaining certain specific details about the terminal. The single exposed function is get_terminal_size(). 
• print_versions.py: This defines the get_sys_info() function, which returns a dictionary of system information, and the show_versions(..) function, which displays the versions of available Python libraries.
• misc.py: This defines a couple of miscellaneous utilities.
• decorators.py: This defines some decorator functions and classes. The Substitution and Appender classes are decorators that perform substitution and appending on function docstrings. For more information on Python decorators, go to http://bit.ly/1zj8U0o.
• clipboard.py: This contains cross-platform clipboard methods to enable copy and paste from the keyboard. The pandas I/O API includes functions such as pandas.read_clipboard() and pandas.to_clipboard(..).

pandas/rpy
This module attempts to provide an interface to the R statistical package, if it is installed on the machine. It is deprecated in Version 0.16.0 and later. Its functionality is replaced by the rpy2 module, which can be accessed from http://rpy.sourceforge.net.
• base.py: This defines a class for the well-known lm function in R.
• common.py: This provides many functions to enable the conversion of pandas objects into their equivalent R versions.
• mass.py: This is an unimplemented version of rlm, R's robust lm function.
• var.py: This contains an unimplemented class, VAR.

pandas/tests
This is the module that provides many tests for the various objects in pandas. The names of the specific library files are fairly self-explanatory, and I will not go into further detail here, except to invite the reader to explore them.

pandas/compat
The functionality related to compatibility is explained as follows:
• chainmap.py, chainmap_impl.py: These provide a ChainMap class that can group multiple dicts or mappings, in order to produce a single view that can be updated.
• pickle_compat.py: This provides functionality for pickling pandas objects in versions earlier than 0.12.
• openpyxl_compat.py: This checks the compatibility of openpyxl.

pandas/computation
This is the module that provides functionality for computation, and it is discussed as follows:
• api.py: This contains imports for eval and expr.
• align.py: This implements functions for data alignment.
• common.py: This contains a couple of internal functions.
• engines.py: This defines AbstractEngine, NumExprEngine, and PythonEngine. PythonEngine evaluates an expression and is used mainly for testing purposes.
• eval.py: This defines the all-important eval(..) function, and also a few other important functions.
• expressions.py: This provides fast expression evaluation through numexpr. The numexpr function is used to accelerate certain numerical operations. It uses multiple cores as well as smart chunking and caching speedups. It defines the evaluate(..) and where(..) methods.
• ops.py: This defines the operator classes used by eval. These are Term, Constant, Op, BinOp, Div, and UnaryOp.
• pytables.py: This provides a query interface for the PyTables query.
• scope.py: This is a module for scope operations. It defines a Scope class, which is an object to hold scope.

For more information on numexpr, go to https://code.google.com/p/numexpr/. For information on the usage of this module, go to http://pandas.pydata.org/pandas-docs/stable/computation.html.

pandas/tseries
• api.py: This is a set of convenience imports.
• converter.py: This defines a set of classes that are used to format and convert datetime-related objects. Upon import, pandas registers a set of unit converters with matplotlib.
°° This is done via the register() function, explained as follows:

In [1]: import matplotlib.units as munits
In [2]: munits.registry
Out[2]: {}
In [3]: import pandas
In [4]: munits.registry
Out[4]: {pandas.tslib.Timestamp: <...>,
         pandas.tseries.period.Period: <...>,
         datetime.date: <...>,
         datetime.datetime: <...>,
         datetime.time: <...>}

°° Converter: This class includes TimeConverter, PeriodConverter, and DateTimeConverter.
°° Formatters: This class includes TimeFormatter, PandasAutoDateFormatter, and TimeSeries_DateFormatter.
°° Locators: This class includes PandasAutoDateLocator, MilliSecondLocator, and TimeSeries_DateLocator.
The Formatter and Locator classes are used for handling ticks in matplotlib plotting.
• frequencies.py: This defines the code for specifying frequencies (daily, weekly, quarterly, monthly, annual, and so on) of time series objects.
• holiday.py: This defines functions and classes for handling holidays; Holiday, AbstractHolidayCalendar, and USFederalHolidayCalendar are among the classes defined.
• index.py: This defines the DateTimeIndex class.
• interval.py: This defines the Interval, PeriodInterval, and IntervalIndex classes.
• offsets.py: This defines various classes, including Offsets, that deal with time-related periods. These are explained as follows:
°° DateOffset: This is an interface for classes that provide time period functionality, such as Week, WeekOfMonth, LastWeekOfMonth, QuarterOffset, YearOffset, Easter, FY5253, and FY5253Quarter.
°° BusinessMixin: This is the mixin class for business objects to provide functionality with time-related classes. It is inherited by the BusinessDay class. The BusinessDay subclass is derived from BusinessMixin and SingleConstructorOffset and provides an offset in business days.
°° MonthOffset: This is the interface for classes that provide the functionality for month time periods, such as MonthEnd, MonthBegin, BusinessMonthEnd, and BusinessMonthBegin.
°° MonthEnd and MonthBegin: This is the date offset of one month at the end or the beginning of a month.
°° BusinessMonthEnd and BusinessMonthBegin: This is the date offset of one month at the end or the beginning of a business day calendar.
°° YearOffset: This offset is subclassed by classes that provide year period functionality: YearEnd, YearBegin, BYearEnd, and BYearBegin.
°° YearEnd and YearBegin: This is the date offset of one year at the end or the beginning of a year.
°° BYearEnd and BYearBegin: This is the date offset of one year at the end or the beginning of a business day calendar.
°° Week: This provides the offset of one week.
°° WeekDay: This provides a mapping from a weekday (Tue) to a day of the week (=2).
°° WeekOfMonth and LastWeekOfMonth: These describe dates in a week of a month.
°° QuarterOffset: This is subclassed by classes that provide quarterly period functionality: QuarterEnd, QuarterBegin, BQuarterEnd, and BQuarterBegin.
• plotting.py: This defines various plotting functions such as tsplot(..), which plots a Series. • resample.py: This defines TimeGrouper, a custom groupby class for time-interval grouping. • timedeltas.py: This defines the to_timedelta(..) method, which converts its argument into a timedelta object. • tools.py: This defines utility functions such as to_datetime(..), parse_time_string(..), dateutil_parse(..), and format(..). • util.py: This defines more utility functions as follows: °° isleapyear(..): This checks whether the year is a leap year pivot_annual(..): This groups a series by years, accounting for leap years This module handles the integration of pandas DataFrame into the PyQt framework. For more information on PyQt, go to Improving performance using Python extensions One of the gripes of Python and pandas users is that the ease of use and expressiveness of the language and module comes with a significant downside—the performance—especially when it comes to numeric computing. [ 253 ] The pandas Library Architecture According to the programming benchmarks site, Python is often slower than compiled languages, such as C/C++ for many algorithms or data structure operations. An example of this would be binary tree operations. In the following reference, Python3 ran 104x slower than the fastest C++ implementation of an n-body simulation calculation: http://bit.ly/1dm4JqW. So, how can we solve this legitimate yet vexing problem? We can mitigate this slowness in Python while maintaining the things that we like about it—clarity and productivity—by writing the parts of our code that are performance sensitive. For example numeric processing, algorithms in C/C++ and having them called by our Python code by writing a Python extension module: http://docs.python.org/2/ extending/extending.html Python extension modules enable us to make calls out to user-defined C/C++ code or library functions from Python, thus enabling us to boost our code performance but still benefit from the ease of using Python. To help us understand what a Python extension module is, consider what happens in Python when we import a module. An import statement imports a module, but what does this really mean? There are three possibilities, which are as follows: • Some Python extension modules are linked to the interpreter when it is built. • An import causes Python to load a .pyc file into memory. The .pyc files contain Python bytecode.For example to the following command: In [3]: import pandas pandas.__file__ Out[3]: '/usr/lib/python2.7/site-packages/pandas/__init__.pyc' • The import statement causes a Python extension module to be loaded into the memory. The .so (shared object) file is comprised of machine code. For example refer to the following command: In [4]: import math math.__file__ Out[4]: '/usr/lib/python2.7/lib-dynload/math.so' We will focus on the third possibility. Even though we are dealing with a binaryshared object compiled from C, we can import it as a Python module, and this shows the power of Python extensions—applications can import modules from Python machine code or machine code and the interface is the same. Cython and SWIG are the two most popular methods of writing extensions in C and C++. In writing an extension, we wrap up C/C++ machine code and turn it into Python extension modules that behave like pure Python code. In this brief discussion, we will only focus on Cython, as it was designed specifically for Python. 
[ 254 ] Chapter 9 Cython is a superset of Python that was designed to significantly improve Python's performance by allowing us to call externally compiled code in C/C++ as well as declare types on variables. The Cython command generates an optimized C/C++ source file from a Cython source file, and compiles this optimized C/C++ source into a Python extension module. It offers built-in support for NumPy and combines C's performance with Python's usability. We will give a quick demonstration of how we can use Cython to significantly speed up our code. Let's define a simple Fibonacci function: In [17]: def fibonacci(n): a,b=1,1 for i in range(n): a,b=a+b,a return a In [18]: fibonacci(100) Out[18]: 927372692193078999176L In [19]: %timeit fibonacci(100) 100000 loops, best of 3: 18.2 µs per loop Using the timeit module, we see that it takes 18.2 µs per loop. Let's now rewrite the function in Cython, specifying types for the variables by using the following steps: 1. First, we import the Cython magic function to IPython as follows: In [22]: %load_ext cythonmagic 2. Next, we rewrite our function in Cython, specifying types for our variables: In [24]: %%cython def cfibonacci(int n): cdef int i, a,b for i in range(n): a,b=a+b,a return a [ 255 ] The pandas Library Architecture 3. Let's time our new Cython function: In [25]: %timeit cfibonacci(100) 1000000 loops, best of 3: 321 ns per loop In [26]: 18.2/0.321 Out[26]: 56.69781931464174 4. Thus, we can see that the Cython version is 57x faster than the pure Python version! For more references on writing Python extensions using Cython/SWIG or other options, please refer to the following references: • The pandas documentation titled Enhancing Performance at http://pandas. pydata.org/pandas-docs/stable/enhancingperf.html • Scipy Lecture Notes titled Interfacing with C at https://scipy-lectures. github.io/advanced/interfacing_with_c/interfacing_with_c.html • Cython documentation at http://docs.cython.org/index.html • SWIG Documentation at http://www.swig.org/Doc2.0/ SWIGDocumentation.html To summarize this chapter, we took a tour of the library hierarchy of pandas in an attempt to illustrate the internal workings of the library. We also touched on the benefits of speeding up our code performance by using a Python extension module. [ 256 ] R and pandas Compared This chapter focuses on comparing pandas with R, the statistical package on which much of pandas' functionality is modeled. It is intended as a guide for R users who wish to use pandas, and for users who wish to replicate functionality that they have seen in the R code in pandas. It focuses on some key features available to R users and shows how to achieve similar functionality in pandas by using some illustrative examples. This chapter assumes that you have the R statistical package installed. If not, it can be downloaded and installed from here: http:// www.r-project.org/. By the end of the chapter, data analysis users should have a good grasp of the data analysis capabilities of R as compared to pandas, enabling them to transition to or use pandas, should they need to. 
The various topics addressed in this chapter include the following: • R data types and their pandas equivalents • Slicing and selection • Arithmetic operations on datatype columns • Aggregation and GroupBy • Matching • Split-apply-combine • Melting and reshaping • Factors and categorical data R data types R has five primitive or atomic types: • Character • Numeric [ 257 ] R and pandas Compared • Integer • Complex • Logical/Boolean It also has the following, more complex, container types: • Vector: This is similar to numpy.array. It can only contain objects of the same type. • List: It is a heterogeneous container. Its equivalent in pandas would be a series. • DataFrame: It is a heterogeneous 2D container, equivalent to a pandas DataFrame • Matrix:- It is a homogeneous 2D version of a vector. It is similar to a numpy.matrix. For this chapter, we will focus on list and DataFrame, which have pandas equivalents as series and DataFrame. For more information on R data types, refer to the following document at: http://www.statmethods.net/input/datatypes.html. For NumPy data types, refer to the following document at: http:// docs.scipy.org/doc/numpy/reference/generated/numpy. array.html and http://docs.scipy.org/doc/numpy/ reference/generated/numpy.matrix.html. R lists R lists can be created explicitly as a list declaration as shown here: >h_lsth_lst [[1]] [1] 23 [[2]] [1] "donkey" [[3]] [1] 5.6 [ 258 ] Chapter 10 [[4]] [1] 1+4i [[5]] [1] TRUE >typeof(h_lst) [1] "list" Here is its series equivalent in pandas with the creation of a list and the creation of a series from it: In [8]: h_list=[23, 'donkey', 5.6,1+4j, True] In [9]: import pandas as pd h_ser=pd.Series (h_list) In [10]: h_ser Out[10]: 0 dtype: object Array indexing starts from 0 in pandas as opposed to R, where it starts at 1. Following is an example of this: In [11]: type(h_ser) Out[11]: pandas.core.series.Series R DataFrames We can construct an R DataFrame as follows by calling the data.frame() constructor and then display it as follows: >stocks_tablestocks_table Symbol GOOG 518.70 AMZN 307.82 AAPL 109.70 NFLX 334.48 LINKD 219.90 Here, we construct a pandas DataFrame and display it: In [29]: stocks_df=pd.DataFrame({'Symbol':['GOOG','AMZN','FB','AAPL', 'TWTR','NFLX','LNKD'], 'Price':[518.7,307.82,74.9,109.7,37.1, 334.48,219.9], 'MarketCap($B)' : [352.8,142.29,216.98,643.55, 23.54,20.15,27.31] }) stocks_df=stocks_df.reindex_axis(sorted(stocks_df.columns,reverse=True),a xis=1) stocks_df Out[29]: Symbol [ 260 ] Chapter 10 Slicing and selection In R, we slice objects in the following three ways: • [: This always returns an object of the same type as the original and can be used to select more than one element. • [[: This is used to extract elements of list or DataFrame; and can only be used to extract a single element,: the type of the returned element will not necessarily be a list or DataFrame. • $: This is used to extract elements of a list or DataFrame by name and is similar to [[. 
Here are some slicing examples in R and their equivalents in pandas:

R-matrix and NumPy array compared
Let's see matrix creation and selection in R:

> r_mat <- matrix(2:13, 4, 3)   # fill a 4x3 matrix column-wise
> r_mat
     [,1] [,2] [,3]
[1,]    2    6   10
[2,]    3    7   11
[3,]    4    8   12
[4,]    5    9   13

To select the first row, we write:

> r_mat[1,]
[1]  2  6 10

To select the second column, we use the following command:

> r_mat[,2]
[1] 6 7 8 9

Let's now see NumPy array creation and selection:

In [60]: a=np.array(range(2,6))
         b=np.array(range(6,10))
         c=np.array(range(10,14))
In [66]: np_ar=np.column_stack([a,b,c])
         np_ar
Out[66]: array([[ 2,  6, 10],
                [ 3,  7, 11],
                [ 4,  8, 12],
                [ 5,  9, 13]])

To select the first row, write the following command:

In [79]: np_ar[0,]
Out[79]: array([ 2,  6, 10])

Indexing is different in R and pandas/NumPy. In R, indexing starts at 1, while in pandas/NumPy, it starts at 0. Hence, we have to subtract 1 from all indexes when making the translation from R to pandas/NumPy.

To select the second column, write the following command:

In [81]: np_ar[:,1]
Out[81]: array([6, 7, 8, 9])

Another option is to transpose the array first and then select the column, as follows:

In [80]: np_ar.T[1,]
Out[80]: array([6, 7, 8, 9])

R lists and pandas series compared
Here is an example of list creation and selection in R:

> cal_lst <- list(weekdays=1:8, mth='jan')
> cal_lst
$weekdays
[1] 1 2 3 4 5 6 7 8
$mth
[1] "jan"

> cal_lst[1]
$weekdays
[1] 1 2 3 4 5 6 7 8

> cal_lst[[1]]
[1] 1 2 3 4 5 6 7 8

> cal_lst[2]
$mth
[1] "jan"

Series creation and selection in pandas is done as follows:

In [92]: cal_df= pd.Series({'weekdays':range(1,8), 'mth':'jan'})
In [93]: cal_df
Out[93]: mth                          jan
         weekdays   [1, 2, 3, 4, 5, 6, 7]
         dtype: object

In [97]: cal_df[0]
Out[97]: 'jan'

In [95]: cal_df[1]
Out[95]: [1, 2, 3, 4, 5, 6, 7]

In [96]: cal_df[[1]]
Out[96]: weekdays    [1, 2, 3, 4, 5, 6, 7]
         dtype: object

Here, we see a difference between an R list and a pandas series from the perspective of the [] and [[]] operators. We can see the difference by considering the second item, which is a character string.
In the case of R, the [] operator produces a container type, that is, a list containing the string, while [[]] produces an atomic type, in this case a character, as follows:

> typeof(cal_lst[2])
[1] "list"
> typeof(cal_lst[[2]])
[1] "character"

In the case of pandas, the opposite is true: [] produces the atomic type, while [[]] results in a complex type, that is, a series, as follows:

In [99]: type(cal_df[0])
Out[99]: str
In [101]: type(cal_df[[0]])
Out[101]: pandas.core.series.Series

In both R and pandas, the column name can be specified in order to obtain an element.

Specifying column name in R
In R, this can be done with the column name preceded by the $ operator, as follows:

> cal_lst$mth
[1] "jan"
> cal_lst$'mth'
[1] "jan"

Specifying column name in pandas
In pandas, we subset elements in the usual way, with the column name in square brackets:

In [111]: cal_df['mth']
Out[111]: 'jan'

One area where R and pandas differ is in the subsetting of nested elements. For example, to obtain day 4 from weekdays, we have to use the [[]] operator in R:

> cal_lst[[1]][[4]]
[1] 4
> cal_lst[[c(1,4)]]
[1] 4

However, in the case of pandas, we can just use a double []:

In [132]: cal_df[1][3]
Out[132]: 4
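As a small aside (a sketch added here, not from the original examples), pandas also offers attribute-style access that is loosely analogous to R's $ operator, provided the label is a valid Python identifier:

import pandas as pd

cal_df = pd.Series({'weekdays': list(range(1, 8)), 'mth': 'jan'})
print(cal_df.mth)    # 'jan', equivalent to cal_df['mth']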
R's DataFrames versus pandas' DataFrames

Selecting data in R DataFrames and pandas DataFrames follows a similar script. The following section explains how we perform multi-column selects from both.

Multicolumn selection in R

In R, we specify the multiple columns to select by stating them in a vector within square brackets:

>stocks_table[c('Symbol','Price')]
  Symbol  Price
1   GOOG 518.70
2   AMZN 307.82
3     FB  74.90
4   AAPL 109.70
5   TWTR  37.10
6   NFLX 334.48
7  LINKD 219.90
>stocks_table[,c('Symbol','Price')]

This second command produces the same output as the first.

Multicolumn selection in pandas

In pandas, we subset elements in the usual way with the column names in square brackets:

In [140]: stocks_df[['Symbol','Price']]
Out[140]:  Symbol   Price
        0    GOOG  518.70
        1    AMZN  307.82
        2      FB   74.90
        3    AAPL  109.70
        4    TWTR   37.10
        5    NFLX  334.48
        6    LNKD  219.90
In [145]: stocks_df.loc[:,['Symbol','Price']]

The .loc version returns the same Symbol and Price columns as the preceding output.

Arithmetic operations on columns

In R and pandas, we can apply arithmetic operations on data columns in a similar manner. Hence, we can perform arithmetic operations such as addition or subtraction on elements in corresponding positions in two or more DataFrames.

Here, we construct a DataFrame in R with columns labeled x and y, and subtract column y from column x:

>norm_df <- data.frame(x=rnorm(6), y=rnorm(6))
>norm_df$x - norm_df$y
[1] -1.3870730 -1.4620324  2.4681458 -4.6991395  0.2978311 -0.8492245

The with operator in R also has the same effect as arithmetic operations:

>with(norm_df,x-y)
[1] -1.3870730 -1.4620324  2.4681458 -4.6991395  0.2978311 -0.8492245

In pandas, the same arithmetic operations on columns can be done, and the equivalent operator is eval:

In [10]: import pandas as pd
         import numpy as np
         df = pd.DataFrame({'x': np.random.normal(0,1,size=7),
                            'y': np.random.normal(0,1,size=7)})
In [11]: df.x-df.y
Out[11]: (a series of seven differences)
         dtype: float64
In [12]: df.eval('x-y')
Out[12]: (the same series of differences)
         dtype: float64

Aggregation and GroupBy

Sometimes, we may wish to split data into subsets and apply a function such as the mean, max, or min to each subset. In R, we can do this via the aggregate or tapply functions.

Here, we will use the example of a dataset of statistics on the top five strikers of the four clubs that made it to the semi-final of the European Champions League Football tournament in 2014. We will use it to illustrate aggregation in R and its equivalent GroupBy functionality in pandas.

Aggregation in R

In R, aggregation is done using the following command:

> goal_stats=read.csv('champ_league_stats_semifinalists.csv')
>goal_stats

(a 20-row table with the columns Club, Player, Goals, and GamesPlayed, listing the top five strikers for Atletico Madrid, Real Madrid, Bayern Munich, and Chelsea)

We can now compute the goals per game ratio for each striker, to measure their deadliness in front of goal:

>goal_stats$GoalsPerGame <- goal_stats$Goals/goal_stats$GamesPlayed
>goal_stats

(the same table with an additional GoalsPerGame column)
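As an aside, the equivalent column computation in pandas can also be written non-destructively; a minimal sketch, assuming a frame with Goals and GamesPlayed columns and a pandas version that provides assign (0.16 and later):

In [26]: goal_stats_df.assign(GoalsPerGame=goal_stats_df['Goals'] /
                              goal_stats_df['GamesPlayed'])

This returns a new frame with the extra column, leaving the original untouched.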
Let's suppose that we wanted to know the highest goals per game ratio for each team. We would calculate this as follows:

>aggregate(x=goal_stats[,c('GoalsPerGame')], by=list(goal_stats$Club),FUN=max)
          Group.1         x
1 Atletico Madrid 0.8888889
2   Bayern Munich 0.4166667
3         Chelsea 0.5000000
4     Real Madrid 1.5454545

The tapply function is used to apply a function to a subset of an array or vector that is defined by one or more columns. The tapply function can also be used as follows:

>tapply(goal_stats$GoalsPerGame,goal_stats$Club,max)
Atletico Madrid   Bayern Munich         Chelsea     Real Madrid
      0.8888889       0.4166667       0.5000000       1.5454545

The pandas GroupBy operator

In pandas, we can achieve the same result by using the GroupBy function:

In [6]: import pandas as pd
        import numpy as np
In [7]: goal_stats_df=pd.read_csv('champ_league_stats_semifinalists.csv')
In [27]: goal_stats_df['GoalsPerGame']= goal_stats_df['Goals']/goal_stats_df['GamesPlayed']
In [28]: goal_stats_df

(the 20-row table of Club, Player, Goals, GamesPlayed, and GoalsPerGame)

In [30]: grouped = goal_stats_df.groupby('Club')
In [17]: grouped['GoalsPerGame'].aggregate(np.max)
Out[17]: Club
         Atletico Madrid    0.888889
         Bayern Munich      0.416667
         Chelsea            0.500000
         Real Madrid        1.545455
         Name: GoalsPerGame, dtype: float64
In [22]: grouped['GoalsPerGame'].apply(np.max)
Out[22]: Club
         Atletico Madrid    0.888889
         Bayern Munich      0.416667
         Chelsea            0.500000
         Real Madrid        1.545455
         Name: GoalsPerGame, dtype: float64

Comparing matching operators in R and pandas

Here, we will demonstrate the equivalence of matching operators between R (%in%) and pandas (isin()). In both cases, a logical vector or series (pandas) is produced, which indicates the position at which a match was found.

R %in% operator

Here, we will demonstrate the use of the %in% operator in R:

>stock_symbols=stocks_table$Symbol
>stock_symbols
[1] GOOG  AMZN  FB    AAPL  TWTR  NFLX  LINKD
Levels: AAPL AMZN FB GOOG LINKD NFLX TWTR
>stock_symbols %in% c('GOOG','NFLX')
[1]  TRUE FALSE FALSE FALSE FALSE  TRUE FALSE

The pandas isin() function

Here is an example of using the pandas isin() function:

In [11]: stock_symbols=stocks_df.Symbol
         stock_symbols
Out[11]: 0    GOOG
         1    AMZN
         2      FB
         3    AAPL
         4    TWTR
         5    NFLX
         6    LNKD
         Name: Symbol, dtype: object
In [10]: stock_symbols.isin(['GOOG','NFLX'])
Out[10]: 0     True
         1    False
         2    False
         3    False
         4    False
         5     True
         6    False
         Name: Symbol, dtype: bool
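In pandas, the resulting boolean series is typically used directly as a row filter; a minimal sketch, assuming the stocks_df frame defined earlier:

In [12]: stocks_df[stocks_df['Symbol'].isin(['GOOG','NFLX'])]
Out[12]:  Symbol   Price  MarketCap($B)
       0    GOOG  518.70         352.80
       5    NFLX  334.48          20.15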
Logical subsetting

In R as well as in pandas, there is more than one way to perform logical subsetting. Suppose that we wished to display all players with a goals per game ratio of greater than or equal to 0.5; that is, they average at least one goal every two games.

Logical subsetting in R

Here's how we can do this in R:

• Via a logical slice:

>goal_stats[goal_stats$GoalsPerGame>=0.5,]

(rows for Diego Costa, Cristiano Ronaldo, Gareth Bale, and Demba Ba, each with a GoalsPerGame value of at least 0.5)

• Via the subset() function:

>subset(goal_stats,GoalsPerGame>=0.5)

(the same four rows as the preceding output)

Logical subsetting in pandas

In pandas, we do something similar:

• Logical slicing:

In [33]: goal_stats_df[goal_stats_df['GoalsPerGame']>=0.5]

(rows for Diego Costa, Cristiano Ronaldo, Gareth Bale, and Demba Ba)

• The DataFrame.query() operator:

In [36]: goal_stats_df.query('GoalsPerGame>= 0.5')

(the same four rows as the preceding output)

Split-apply-combine

R has a library called plyr for split-apply-combine data analysis. The plyr library has a function called ddply, which can be used to apply a function to a subset of a DataFrame, and then combine the results into another DataFrame.

For more information on ddply, you can refer to the following: http://www.inside-r.org/packages/cran/plyr/docs/ddply

To illustrate, let us consider a subset of a recently created dataset in R, which contains data on flights departing NYC in 2013: http://cran.r-project.org/web/packages/nycflights13/index.html.

Implementation in R

Here, we will install the package in R and instantiate the library:

>install.packages('nycflights13')
...
>library('nycflights13')
>dim(flights)
[1] 336776     16
>head(flights,3)

(the first three rows, with the columns year, month, day, dep_time, dep_delay, arr_time, arr_delay, carrier, tailnum, flight, origin, dest, air_time, distance, hour, and minute)

We then drop the incomplete cases and draw a random sample, flights.sample, from the remaining data:

> flights.data=na.omit(flights[,c('year','month','dep_delay','arr_delay','distance')])
>head(flights.sample,5)

(five sampled rows of year, month, dep_delay, arr_delay, and distance)

The ddply function enables us to summarize the departure delays (mean, standard deviation) by year and month:

>ddply(flights.sample,.(year,month),summarize,
       mean_dep_delay=round(mean(dep_delay),2),
       sd_dep_delay=round(sd(dep_delay),2))

(one row per year/month group, with the columns mean_dep_delay and sd_dep_delay)

Let us save the flights.sample dataset to a CSV file so that we can use the data to show how to do the same thing in pandas:

>write.csv(flights.sample,file='nycflights13_sample.csv', quote=FALSE)

Implementation in pandas

In order to do the same thing in pandas, we read the CSV file saved in the preceding section:

In [40]: flights_sample_df=pd.read_csv('nycflights13_sample.csv')
In [41]: flights_sample_df.head()
Out[41]: (the first five rows of year, month, dep_delay, arr_delay, and distance)

We achieve the same effect as ddply by making use of the GroupBy() operator:

In [44]: pd.set_option('precision',3)
In [45]: grouped = flights_sample_df.groupby(['year','month'])
In [48]: grouped['dep_delay'].agg([np.mean, np.std])
Out[48]: (the mean and standard deviation of dep_delay for each year/month group)
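One small difference is worth noting: ddply returns a flat data frame, whereas the pandas result above is indexed by the grouping keys. A minimal sketch of flattening it, assuming the grouped object defined above:

In [49]: grouped['dep_delay'].agg([np.mean, np.std]).reset_index()

Here, reset_index() turns the year and month index levels back into ordinary columns, matching the shape of the ddply output.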
Reshaping using melt

The melt function converts data from a wide format into a single column consisting of unique ID-variable combinations.

The R melt() function

Here, we demonstrate the use of the melt() function in R. It produces long-format data in which the rows are unique variable-value combinations:

>sample4=head(flights.sample,4)[c('year','month','dep_delay','arr_delay')]
>sample4

(four rows of year, month, dep_delay, and arr_delay)

>melt(sample4,id=c('year','month'))

(eight rows — the four dep_delay values followed by the four arr_delay values — with the columns year, month, variable, and value)

For more information, you can refer to the following: http://www.statmethods.net/management/reshape.html.

The pandas melt() function

In pandas, the melt function is similar:

In [55]: sample_4_df=flights_sample_df[['year','month','dep_delay', \
         'arr_delay']].head(4)
In [56]: sample_4_df
Out[56]: (the same four rows as the R sample4 data frame)
In [59]: pd.melt(sample_4_df,id_vars=['year','month'])
Out[59]: (eight rows of year, month, variable, and value, matching the R melt output)

The reference for this information is from: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-melt.

Factors/categorical data

R refers to categorical variables as factors, and the cut() function enables us to break a continuous numerical variable into ranges and treat the ranges as factors or categorical variables, or to classify a categorical variable into a larger bin.

An R example using cut()

Here is an example in R:

>clinical.trial <- data.frame(patient=1:1000,
                              age=rnorm(1000, mean=50, sd=5),
                              year.enroll=sample(paste("19", 80:99, sep=""),
                                                 1000, replace=TRUE))
>summary(clinical.trial)

(a summary of the patient, age, and year.enroll columns; the 1,000 ages are centered around 50)

>ctcut <- cut(clinical.trial$age, 5)
>table(ctcut)
ctcut
(31.1,38.9] (38.9,46.7] (46.7,54.6] (54.6,62.4] (62.4,70.2]

(the count of patients in each bin; the first bin contains 15 patients)

The reference for the preceding data can be found at: http://www.r-bloggers.com/r-function-of-the-day-cut/.

The pandas solution

Here is the equivalent of the earlier explained cut() function in pandas (only applies to Version 0.15+):

In [79]: pd.set_option('precision',4)
         clinical_trial=pd.DataFrame({'patient':range(1,1001),
                                      'age' : np.random.normal(50,5,size=1000),
                                      'year_enroll': [str(x) for x in np.random.choice(range(1980,2000),size=1000,replace=True)]})
In [80]: clinical_trial.describe()
Out[80]: (summary statistics for the patient and age columns)
In [81]: clinical_trial.describe(include=['O'])
Out[81]: (summary statistics for the year_enroll column)
In [82]: clinical_trial.year_enroll.value_counts()[:6]
Out[82]: (the six most frequent enrollment years and their counts)
In [83]: ctcut=pd.cut(clinical_trial['age'], 5)
In [84]: ctcut.head()
Out[84]: 0    (43.349, 50.052]
         1    (50.052, 56.755]
         2    (50.052, 56.755]
         3    (43.349, 50.052]
         4    (50.052, 56.755]
         Name: age, dtype: category
         Categories (5, object): [(29.91, 36.646] < (36.646, 43.349] < (43.349, 50.052] < (50.052, 56.755] < (56.755, 63.458]]
In [85]: ctcut.value_counts().sort_index()
Out[85]: (the count of ages falling in each of the five bins)
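To mimic R's ability to attach names to factor levels, pd.cut also accepts an explicit labels argument; a minimal sketch, where the bin count and label names are hypothetical choices for this illustration:

In [86]: pd.cut(clinical_trial['age'], 3,
                labels=['young', 'middle', 'senior']).head(3)

Each age is then reported as one of the three named categories rather than as a numeric interval.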
Summary

In this chapter, we have attempted to compare key features in R with their pandas equivalents in order to achieve the following objectives:

• To assist R users who may wish to replicate the same functionality in pandas
• To assist any users who upon reading some R code may wish to rewrite the code in pandas

In the next chapter, we will conclude the book by giving a brief introduction to the scikit-learn library for doing machine learning and show how pandas fits within that framework. The reference documentation for this chapter can be found here: http://pandas.pydata.org/pandas-docs/

Brief Tour of Machine Learning

This chapter takes the user on a whirlwind tour of machine learning, focusing on using the pandas library as a tool that can be used to preprocess data used by machine learning programs. It also introduces the user to the scikit-learn library, which is the most popular machine learning toolkit in Python.

In this chapter, we illustrate machine learning techniques by applying them to a well-known problem about classifying which passengers survived the Titanic disaster at the turn of the last century. The various topics addressed in this chapter include the following:

• Role of pandas in machine learning
• Installation of scikit-learn
• Introduction to machine learning concepts
• Application of machine learning – Kaggle Titanic competition
• Data analysis and preprocessing using pandas
• Naïve approach to Titanic problem
• scikit-learn ML classifier interface
• Supervised learning algorithms
• Unsupervised learning algorithms

Role of pandas in machine learning

The library we will be considering for machine learning is called scikit-learn. The scikit-learn Python library provides an extensive library of machine learning algorithms that can be used to create adaptive programs that learn from data inputs. However, before this data can be used by scikit-learn, it must undergo some preprocessing. This is where pandas comes in. pandas can be used to preprocess and filter data before passing it to the algorithm implemented in scikit-learn.

Installation of scikit-learn

As was mentioned in Chapter 2, Installation of pandas and the Supporting Software, the easiest way to install pandas and its accompanying libraries is to use a third-party distribution such as Anaconda and be done with it. Installing scikit-learn should be no different. I will briefly highlight the steps for installation on various platforms and third-party distributions, starting with Anaconda. The scikit-learn library requires the following libraries:

• Python 2.6.x or higher
• NumPy 1.6.1 or higher
• SciPy 0.9 or higher

Assuming that you have already installed pandas as described in Chapter 2, Installation of pandas and the Supporting Software, these dependencies should already be in place.

Installing via Anaconda

You can install scikit-learn on Anaconda by running the conda Python package manager:

conda install scikit-learn

Installing on Unix (Linux/Mac OS X)

For Unix, it is best to install from the source (a C compiler is required). Assuming that pandas and NumPy are already installed and the required dependent libraries are already in place, you can install scikit-learn via Git by running the following commands:

git clone https://github.com/scikit-learn/scikit-learn.git
cd scikit-learn
python setup.py install

scikit-learn can also be installed on Unix by using pip from PyPI:

pip install scikit-learn

Installing on Windows

To install on Windows, you can open a console and run the following:

pip install -U scikit-learn

For more in-depth information on installation, you can take a look at the official scikit-learn docs at: http://scikit-learn.org/stable/install.html. You can also take a look at the README file for the scikit-learn Git repository at: https://github.com/scikit-learn/scikit-learn/blob/master/README.rst.
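Whichever route you take, a quick way to confirm that the installation succeeded is to import the package and print its version string; a minimal sketch:

In [1]: import sklearn
        print sklearn.__version__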
Introduction to machine learning

Machine learning is the art of creating software programs that learn from data. More formally, it can be defined as the practice of building adaptive programs that use tunable parameters to improve predictive performance. It is a sub-field of artificial intelligence.

We can separate machine learning programs based on the type of problems they are trying to solve. These problems are appropriately called learning problems. The two categories of these problems, broadly speaking, are referred to as supervised and unsupervised learning problems. Further, there are some hybrid problems that have aspects that involve both categories.

The input to a learning problem consists of a dataset of n rows. Each row represents a sample and may involve one or more fields referred to as attributes or features. A dataset can be canonically described as consisting of n samples, each consisting of m features.

A more detailed introduction to machine learning is given in the following paper: A Few Useful Things to Know about Machine Learning at http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf

Supervised versus unsupervised learning

For supervised learning problems, the input to a learning problem is a dataset consisting of labeled data. By this we mean that we have outputs whose values are known. The learning program is fed input samples and their corresponding outputs, and its goal is to decipher the relationship between them. Such input is known as labeled data. Supervised learning problems include the following:

• Classification: The learned attribute is categorical (nominal) or discrete
• Regression: The learned attribute is numeric/continuous

In unsupervised learning or data mining, the learning program is fed inputs but no corresponding outputs. This input data is referred to as unlabeled data. The learning program's goal is to learn or decipher the hidden label. Such problems include the following:

• Clustering
• Dimensionality reduction

Illustration using document classification

A common usage of machine learning techniques is in the area of document classification. The two main categories of machine learning can be applied to this problem: supervised and unsupervised learning.

Supervised learning

Each document in the input collection is assigned to a category, that is, a label. The learning program/algorithm uses the input collection of documents to learn how to make predictions for another set of documents with no labels. This method is known as classification.

Unsupervised learning

The documents in the input collection are not assigned to categories; hence, they are unlabeled. The learning program takes this as input and tries to cluster or discover groups of related or similar documents. This method is known as clustering.

How machine learning systems learn

Machine learning systems utilize what is known as a classifier in order to learn from data. A classifier is an interface that takes a matrix of what is known as feature values and produces an output vector, also known as the class. These feature values may be discrete or continuously valued. There are three core components of classifiers:

• Representation: What type of classifier is it?
• Evaluation: How good is the classifier?
• Optimization: How to search among the alternatives?
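To make the classifier abstraction concrete, here is a minimal, hedged sketch using scikit-learn; the feature matrix and labels are fabricated purely for illustration. The classifier is fit on a matrix of feature values and a vector of known classes, and then produces predicted classes for new samples:

In [2]: from sklearn.tree import DecisionTreeClassifier
        X = [[0, 0], [1, 1], [1, 0], [0, 1]]   # feature matrix: 4 samples, 2 features
        y = [0, 1, 1, 0]                       # known class labels
        clf = DecisionTreeClassifier().fit(X, y)
        clf.predict([[1, 1]])                  # predict the class of a new sample
Out[2]: array([1])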
Application of machine learning – Kaggle Titanic competition

In order to illustrate how we can use pandas to assist us at the start of our machine learning journey, we will apply it to a classic problem, which is hosted on the Kaggle website (http://www.kaggle.com). Kaggle is a competition platform for machine learning problems. The idea behind Kaggle is to enable companies that are interested in solving predictive analytics problems with their data to post their data on Kaggle and invite data scientists to come up with proposed solutions to their problems. The competition can be ongoing over a period of time, and the rankings of the competitors are posted on a leaderboard. At the end of the competition, the top-ranked competitors receive cash prizes.

The classic problem that we will study in order to illustrate the use of pandas for machine learning with scikit-learn is the Titanic: machine learning from disaster problem hosted on Kaggle as their classic introductory machine learning problem. The dataset involved in the problem is a raw dataset. Hence, pandas is very useful in the preprocessing and cleansing of the data before it is submitted as input to the machine learning algorithm implemented in scikit-learn.

The Titanic: machine learning from disaster problem

The dataset for the Titanic consists of the passenger manifest for the doomed trip, along with various features and an indicator variable telling whether the passenger survived the sinking of the ship or not. The essence of the problem is to be able to predict, given a passenger and his/her associated features, whether this passenger survived the sinking of the Titanic or not.

The data consists of two datasets: one training dataset and the other test dataset. The training dataset consists of 891 passenger cases, and the test dataset consists of 491 passenger cases.

The training dataset also consists of 11 variables, of which 10 are features and 1 dependent/indicator variable Survived, which indicates whether the passenger survived the disaster or not. The feature variables are as follows:

• PassengerID
• Cabin
• Sex
• Pclass (passenger class)
• Fare
• Parch (number of parents and children)
• Age
• Sibsp (number of siblings)
• Embarked

We can make use of pandas to help us preprocess data in the following ways:

• Data cleaning and categorization of some variables
• Exclusion of unnecessary features, which obviously have no bearing on the survivability of the passenger, for example, their name
• Handling missing data

There are various algorithms that we can use to tackle this problem. They are as follows:

• Decision trees
• Neural networks
• Random forests
• Support vector machines

The problem of overfitting

Overfitting is a well-known problem in machine learning, whereby the program memorizes the specific data that it is fed as input, leading to perfect results on the training data and abysmal results on the test data. In order to prevent overfitting, the 10-fold cross-validation technique can be used to introduce variability in the data during the training phase; a sketch of this technique follows.
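As a hedged illustration of k-fold cross-validation in scikit-learn (the data below is fabricated, and the import path shown is the modern model_selection module; older releases exposed the same function under sklearn.cross_validation):

In [3]: import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier
        X = np.random.rand(50, 3)          # 50 fabricated samples, 3 features
        y = np.array([0, 1] * 25)          # fabricated binary labels
        # train and evaluate on 10 different train/test splits of the data
        scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=10)
        print scores.mean()                # average accuracy across the 10 folds

Averaging the score across folds gives a more honest estimate of generalization than a single train/test split.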
Data analysis and preprocessing using pandas

In this section, we will utilize pandas to do some analysis and preprocessing of the data before submitting it as input to scikit-learn.

Examining the data

In order to start our preprocessing of the data, let us read in the training dataset and examine what it looks like. Here, we read the training dataset into a pandas DataFrame and display the first rows:

In [2]: import pandas as pd
        import numpy as np
        # For .read_csv, always use header=0 when you know row 0 is the header row
        train_df = pd.read_csv('csv/train.csv', header=0)
In [3]: train_df.head(3)

(the first three rows of the training dataset are displayed)

Thus, we can see the various features: PassengerId, PClass, Name, Sex, Age, Sibsp, Parch, Ticket, Fare, Cabin, and Embarked. One question that springs to mind immediately is this: which of the features are likely to influence whether a passenger survived or not? It should seem obvious that PassengerID, Ticket Code, and Name should not be influencers on survivability, since they're identifier variables. We will skip these in our analysis.

Handling missing values

One issue that we have to deal with in datasets for machine learning is how to handle missing values in the training set.

Let's visually identify where we have missing values in our feature set. For that, we can make use of an equivalent of the missmap function in R, written by Tom Augspurger, which shows how much data is missing for the various features in an intuitively pleasing manner. For more information and the code used to generate this data, see the following: http://bit.ly/1C0a24U.

We can also calculate how much data is missing for each of the features:

In [83]: missing_perc= train_df.apply(lambda x: 100*(1-x.count().sum()/(1.0*len(x))))
In [85]: sorted_missing_perc=missing_perc.order(ascending=False)
         sorted_missing_perc

(the percentage of missing values per feature, led by Cabin at about 77 percent and Age at about 20 percent)

Thus, we can see that most of the Cabin data is missing (77%), while around 20% of the Age data is missing. We then decide to drop the Cabin data from our learning feature set, as the data is too sparse to be of much use.

Let us do a further breakdown of the various features that we would like to examine. In the case of categorical/discrete features, we use bar plots; for continuous valued features, we use histograms:

In [137]: import random
          bar_width=0.1
          categories_map={'Pclass':{'First':1,'Second':2,'Third':3},
                          'Sex':{'Female':'female','Male':'male'},
                          'Survived':{'Perished':0,'Survived':1},
                          'Embarked':{'Cherbourg':'C','Queenstown':'Q','Southampton':'S'},
                          'SibSp': {str(x):x for x in range(8)},
                          'Parch': {str(x):x for x in range(7)}
                         }
          colors=['red','green','blue','yellow','magenta','orange']
          subplots=[111,211,311,411,511,611,711,811]
          cIdx=0
          fig,ax=plt.subplots(len(subplots),figsize=(10,12))
          keyorder = ['Survived','Sex','Pclass','Embarked','SibSp','Parch']
          for category_key,category_items in sorted(categories_map.iteritems(),
                                                    key=lambda i:keyorder.index(i[0])):
              num_bars=len(category_items)
              index=np.arange(num_bars)
              idx=0
              for cat_name,cat_val in sorted(category_items.iteritems()):
                  ax[cIdx].bar(idx,len(train_df[train_df[category_key]==cat_val]),
                               label=cat_name, color=np.random.rand(3,1))
                  idx+=1
              ax[cIdx].set_title('%s Breakdown' % category_key)
              xlabels=sorted(category_items.keys())
              ax[cIdx].set_xticks(index+bar_width)
              ax[cIdx].set_xticklabels(xlabels)
              ax[cIdx].set_ylabel('Count')
              cIdx +=1
          fig.subplots_adjust(hspace=0.8)
          for hcat in ['Age','Fare']:
              ax[cIdx].hist(train_df[hcat].dropna(),color=np.random.rand(3,1))
              ax[cIdx].set_title('%s Breakdown' % hcat)
              #ax[cIdx].set_xlabel(hcat)
              ax[cIdx].set_ylabel('Frequency')
              cIdx +=1
          fig.subplots_adjust(hspace=0.8)
          plt.show()

(bar plots and histograms showing the breakdown of each feature)

From the data and illustration in the preceding figure, we can observe the following (the first of these proportions can be confirmed directly from the frame, as shown in the sketch after this list):

• About twice as many passengers perished as survived (62% versus 38%).
• There were about twice as many male passengers as female passengers (65% versus 35%).
• There were about 20% more passengers in the third class versus the first and second together (55% versus 45%).
• Most passengers were solo, that is, had no children, parents, siblings, or spouse on board.
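A minimal sketch of that confirmation, assuming the train_df frame loaded above; the proportions match the roughly 62/38 split noted in the first observation:

In [138]: train_df['Survived'].value_counts(normalize=True)
Out[138]: 0    0.616
          1    0.384
          dtype: float64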
These observations might lead us to dig deeper and investigate whether there is some correlation between chances of survival and gender and also fare class, particularly if we take into account the fact that the Titanic had a women-and-children-first policy (http://en.wikipedia.org/wiki/Women_and_children_first) and the fact that the Titanic was carrying fewer lifeboats (20) than it was designed to (32).

In light of this, let us further examine the relationships between survival and some of these features. We start with gender:

In [85]: from collections import OrderedDict
         num_passengers=len(train_df)
         num_men=len(train_df[train_df['Sex']=='male'])
         men_survived=train_df[(train_df['Survived']==1) & (train_df['Sex']=='male')]
         num_men_survived=len(men_survived)
         num_men_perished=num_men-num_men_survived
         num_women=num_passengers-num_men
         women_survived=train_df[(train_df['Survived']==1) & (train_df['Sex']=='female')]
         num_women_survived=len(women_survived)
         num_women_perished=num_women-num_women_survived
         gender_survival_dict=OrderedDict()
         gender_survival_dict['Survived']={'Men':num_men_survived,'Women':num_women_survived}
         gender_survival_dict['Perished']={'Men':num_men_perished,'Women':num_women_perished}
         gender_survival_dict['Survival Rate']= {'Men' : round(100.0*num_men_survived/num_men,2),
                                                 'Women':round(100.0*num_women_survived/num_women,2)}
         pd.DataFrame(gender_survival_dict)

(a table of the survived and perished counts and the survival rate for men and women)

We now illustrate this data in a bar chart using the following command:

In [76]: #code to display survival by gender
         fig = plt.figure()
         ax = fig.add_subplot(111)
         perished_data=[num_men_perished, num_women_perished]
         survived_data=[num_men_survived, num_women_survived]
         N=2
         ind = np.arange(N)   # the x locations for the groups
         width = 0.35
         survived_rects = ax.barh(ind, survived_data, width,color='green')
         perished_rects = ax.barh(ind+width, perished_data, width,color='red')
         ax.set_xlabel('Count')
         ax.set_title('Count of Survival by Gender')
         yTickMarks = ['Men','Women']
         ax.set_yticks(ind+width)
         ytickNames = ax.set_yticklabels(yTickMarks)
         plt.setp(ytickNames, rotation=45, fontsize=10)
         ## add a legend
         ax.legend((survived_rects[0], perished_rects[0]), ('Survived', 'Perished'))
         plt.show()

The preceding code produces the following bar graph:

(a horizontal bar graph of survival counts by gender)

From the preceding plot, we can see that a majority of the women survived (74%), while most of the men perished (only 19% survived). This leads us to the conclusion that the gender of the passenger may be a contributing factor to whether a passenger survived or not.
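The same survival rates can be produced far more compactly with a GroupBy; a minimal sketch, assuming train_df as before, whose values correspond to the rates in the table above:

In [87]: train_df.groupby('Sex')['Survived'].mean()
Out[87]: Sex
         female    0.74
         male      0.19
         Name: Survived, dtype: float64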
Next, let us look at passenger class. First, we generate the survived and perished data for each of the three passenger classes, as well as survival rates, and show them in a table:

In [86]: from collections import OrderedDict
         num_passengers=len(train_df)
         num_class1=len(train_df[train_df['Pclass']==1])
         class1_survived=train_df[(train_df['Survived']==1) & (train_df['Pclass']==1)]
         num_class1_survived=len(class1_survived)
         num_class1_perished=num_class1-num_class1_survived
         num_class2=len(train_df[train_df['Pclass']==2])
         class2_survived=train_df[(train_df['Survived']==1) & (train_df['Pclass']==2)]
         num_class2_survived=len(class2_survived)
         num_class2_perished=num_class2-num_class2_survived
         num_class3= num_passengers-num_class1-num_class2
         class3_survived=train_df[(train_df['Survived']==1) & (train_df['Pclass']==3)]
         num_class3_survived=len(class3_survived)
         num_class3_perished= num_class3-num_class3_survived
         pclass_survival_dict=OrderedDict()
         pclass_survival_dict['Survived']={'1st Class':num_class1_survived,
                                           '2nd Class':num_class2_survived,
                                           '3rd Class':num_class3_survived}
         pclass_survival_dict['Perished']={'1st Class':num_class1_perished,
                                           '2nd Class':num_class2_perished,
                                           '3rd Class':num_class3_perished}
         pclass_survival_dict['Survival Rate']= {'1st Class' : round(100.0*num_class1_survived/num_class1,2),
                                                 '2nd Class':round(100.0*num_class2_survived/num_class2,2),
                                                 '3rd Class':round(100.0*num_class3_survived/num_class3,2)}
         pd.DataFrame(pclass_survival_dict)

(a table of the survived and perished counts and the survival rate for each passenger class)

We can then plot the data by using matplotlib in a similar manner to that for the survivor count by gender, as described earlier:

In [186]: fig = plt.figure()
          ax = fig.add_subplot(111)
          perished_data=[num_class1_perished, num_class2_perished, num_class3_perished]
          survived_data=[num_class1_survived, num_class2_survived, num_class3_survived]
          N=3
          ind = np.arange(N)   # the x locations for the groups
          width = 0.35
          survived_rects = ax.barh(ind, survived_data, width,color='blue')
          perished_rects = ax.barh(ind+width, perished_data, width,color='red')
          ax.set_xlabel('Count')
          ax.set_title('Survivor Count by Passenger class')
          yTickMarks = ['1st Class','2nd Class', '3rd Class']
          ax.set_yticks(ind+width)
          ytickNames = ax.set_yticklabels(yTickMarks)
          plt.setp(ytickNames, rotation=45, fontsize=10)
          ## add a legend
          ax.legend((survived_rects[0], perished_rects[0]),
                    ('Survived', 'Perished'), loc=10)
          plt.show()

This produces the following bar plot:

(a horizontal bar plot of survivor counts by passenger class)

It seems clear from the preceding data and illustration that the higher the passenger fare class is, the greater are one's chances of survival.

Given that both gender and fare class seem to influence the chances of a passenger's survival, let's see what happens when we combine these two features and plot a combination of both. For this, we shall use the crosstab function in pandas:

In [173]: survival_counts=pd.crosstab([train_df.Pclass,train_df.Sex],
                                      train_df.Survived.astype(bool))
          survival_counts

(a table of perished and survived counts, indexed by each combination of Pclass and Sex)
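Again, the survival rates for the combined grouping can be obtained in a single line; a minimal sketch:

In [174]: train_df.groupby(['Pclass', 'Sex'])['Survived'].mean()

This yields the survival rate for each of the six class/gender groups.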
Let us now display this data using matplotlib. First, let's do some re-labeling for display purposes:

In [183]: survival_counts.index=survival_counts.index.set_levels([['1st', '2nd', '3rd'], ['Women', 'Men']])
In [184]: survival_counts.columns=['Perished','Survived']

Now, we plot the data by using the plot function of a pandas DataFrame:

In [185]: fig = plt.figure()
          ax = fig.add_subplot(111)
          ax.set_xlabel('Count')
          ax.set_title('Survivor Count by Passenger class, Gender')
          survival_counts.plot(kind='barh',ax=ax,width=0.75,
                               color=['red','black'], xlim=(0,400))

(a horizontal bar plot of survivor counts broken down by passenger class and gender)

A naïve approach to the Titanic problem

Our first attempt at classifying the Titanic data is to use a naïve, yet very intuitive, approach. This approach involves the following steps:

1. Select a set of features S, which influence whether a person survives or not.
2. For each possible combination of features, use the training data to indicate whether the majority of cases survived or not. This can be evaluated in what is known as a survival matrix.
3. For each test example that we wish to predict survival for, look up the combination of features that corresponds to the values of its features and assign its predicted value to the survival value in the survival table. This approach is a naive K-nearest neighbor approach.

Based on what we have seen earlier in our analysis, there are three features that seem to have the most influence on the survival rate:

• Passenger class
• Gender
• Passenger fare (bucketed)

We include passenger fare as it is related to passenger class.

The survival table looks something similar to the following:

(a table keyed by Sex, Pclass, and fare-price bucket, giving the NumberOfPeople and the majority survival value for each combination)

The code for generating this table can be found in the file survival_data.py, which is attached. To see how we use this table, let us take a look at a snippet of our test data:

In [192]: test_df.head(3)[['PassengerId','Pclass','Sex','Fare']]

(the PassengerId, Pclass, Sex, and Fare of the first three test passengers)

For passenger 892, we see that he is male, his ticket price was 7.8292, and he travelled in third class. Hence, the key for the survival table lookup for this passenger is {Sex='male', Pclass=3, PriceBucket=0 (since 7.8292 falls in bucket 0)}. If we look up the survival value corresponding to this key in our survival table (row 17), we see that the value is 0 = Perished; this is the value that we will predict.

Similarly, for passenger 893, we have key={Sex='female', Pclass=3, PriceBucket=0}. This corresponds to row 16, and hence, we will predict 1, that is, survived.

Thus, our results look like the following:

> head -4 csv/surv_results.csv
PassengerId,Survived
892,0
893,1
894,0

The source of this information is at: http://bit.ly/1FU7mXj.

Using the survival table approach outlined earlier, one is able to achieve an accuracy of 0.77990 on Kaggle (http://www.kaggle.com). The survival table approach, while intuitive, is a very basic approach that represents only the tip of the iceberg of possibilities in machine learning. In the following sections, we will take a whirlwind tour of various machine learning algorithms that will help you, the reader, get a feel for what is available in the machine learning universe.
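Before moving on, here is a minimal, hedged sketch of the survival-table lookup described above, using a plain Python dict; the fare-bucket boundary and the two table entries are hypothetical stand-ins (the book's actual table is generated by survival_data.py):

In [193]: # a fragment of the survival-table idea; the boundary is an assumption
          def price_bucket(fare):
              return 0 if fare < 10 else 1
          survival_table = {('male', 3, 0): 0,     # row 17: majority perished
                            ('female', 3, 0): 1}   # row 16: majority survived
          survival_table[('male', 3, price_bucket(7.8292))]
Out[193]: 0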
The scikit-learn ML/classifier interface

We'll be diving into the basic principles of machine learning and demonstrate the use of these principles via the scikit-learn basic API. The scikit-learn library has an estimator interface. We illustrate it by using a linear regression model. For example, consider the following:

In [3]: from sklearn.linear_model import LinearRegression

The estimator interface is instantiated to create a model, which is a linear regression model in this case:

In [4]: model = LinearRegression(normalize=True)
In [6]: print model
LinearRegression(copy_X=True, fit_intercept=True, normalize=True)

Here, we specify normalize=True, indicating that the x-values will be normalized before regression. Hyperparameters (estimator parameters) are passed on as arguments in the model creation. This is an example of creating a model with tunable parameters. The estimated parameters are obtained from the data when the data is fitted with an estimator.

Let us first create some sample training data that is normally distributed about y = x/2. We first generate our x and y values:

In [51]: import random
         sample_size=500
         x = []
         y = []
         for i in range(sample_size):
             newVal = random.normalvariate(100,10)
             x.append(newVal)
             y.append(newVal / 2.0 + random.normalvariate(50,5))

sklearn takes a 2D array of num_samples × num_features as input, so we convert our x data into a 2D array:

In [67]: X = np.array(x)[:,np.newaxis]
         X.shape
Out[67]: (500, 1)

In this case, we have 500 samples and 1 feature, x. We now train/fit the model and display the slope (coefficient) and the intercept of the regression line, which is the prediction:

In [71]: model.fit(X,y)
         print "coeff=%s, intercept=%s" % (model.coef_,model.intercept_)
         coeff=[ 0.47071289], intercept=52.7456611783

This can be visualized as follows:

In [65]: plt.title("Plot of linear regression line and training data")
         plt.xlabel('x')
         plt.ylabel('y')
         plt.scatter(X,y,marker='o', color='green', label='training data');
         plt.plot(X,model.predict(X), color='red', label='regression line')
         plt.legend(loc=2)

(a scatter plot of the training data with the fitted regression line)
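Once fitted, the same estimator interface provides predictions for unseen inputs; a minimal sketch, where the two query values are arbitrary and the outputs follow, approximately, from the slope and intercept printed above:

In [72]: model.predict([[90.0], [110.0]])   # predicted y, roughly 0.4707*x + 52.75
Out[72]: array([  95.11,  104.52])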
{"url":"https://dokumen.pub/mastering-pandas-master-the-features-and-capabilities-of-pandas-a-data-analysis-toolkit-for-python-1783981962-9781783981960.html","timestamp":"2024-11-11T21:22:54Z","content_type":"text/html","content_length":"554361","record_id":"<urn:uuid:f9de4f76-4f0b-4a13-ac64-ccac557c4911>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00892.warc.gz"}
SGP07: Eurographics Symposium on Geometry Processing

Recent Submissions

• Robust Statistical Estimation of Curvature on Discretized Surfaces (The Eurographics Association, 2007) Kalogerakis, Evangelos; Simari, Patricio; Nowrouzezahrai, Derek; Singh, Karan; Alexander Belyaev and Michael Garland
A robust statistics approach to curvature estimation on discretely sampled surfaces, namely polygon meshes and point clouds, is presented. The method exhibits accuracy, stability and consistency even for noisy, non-uniformly sampled surfaces with irregular configurations. Within an M-estimation framework, the algorithm is able to reject noise and structured outliers by sampling normal variations in an adaptively reweighted neighborhood around each point. The algorithm can be used to reliably derive higher order differential attributes and even correct noisy surface normals while preserving the fine features of the normal and curvature field. The approach is compared with state-of-the-art curvature estimation methods and shown to improve accuracy by up to an order of magnitude across ground truth test surfaces under varying tessellation densities and types as well as increasing degrees of noise. Finally, the benefits of a robust statistical estimation of curvature are illustrated by applying it to the popular applications of mesh segmentation and suggestive contour rendering.
• Focal Surfaces of Discrete Geometry (The Eurographics Association, 2007) Yu, Jingyi; Yin, Xiaotian; Gu, Xianfeng; McMillan, Leonard; Gortler, Steven; Alexander Belyaev and Michael Garland
The differential geometry of smooth three-dimensional surfaces can be interpreted from one of two perspectives: in terms of oriented frames located on the surface, or in terms of a pair of associated focal surfaces. These focal surfaces are swept by the loci of the principal curvature radii. In this article, we develop a focal-surface-based differential geometry interpretation for discrete mesh surfaces. Focal surfaces have many useful properties.
For instance, the normal of each focal surface indicates a principal direction of the corresponding point on the original surface. We provide algorithms to robustly approximate the focal surfaces of a triangle mesh with known or estimated normals. Our approach locally parameterizes the surface normals about a point by their intersections with a pair of parallel planes.We show neighboring normal triplets are constrained to pass simultaneously through two slits, which are parallel to the specified parametrization planes and rule the focal surfaces. We develop both CPU and GPU-based algorithms to efficiently approximate these two slits and, hence, the focal meshes. Our focal mesh estimation also provides a novel discrete shape operator that simultaneously estimates the principal curvatures and principal directions. • Discrete Laplace operators: No free lunch (The Eurographics Association, 2007) Wardetzky, Max; Mathur, Saurabh; Kaelberer, Felix; Grinspun, Eitan; Alexander Belyaev and Michael Garland Discrete Laplace operators are ubiquitous in applications spanning geometric modeling to simulation. For robustness and efficiency, many applications require discrete operators that retain key structural properties inherent to the continuous setting. Building on the smooth setting, we present a set of natural properties for discrete Laplace operators for triangular surface meshes. We prove an important theoretical limitation: discrete Laplacians cannot satisfy all natural properties; retroactively, this explains the diversity of existing discrete Laplace operators. Finally, we present a family of operators that includes and extends well-known and widely-used operators. • Voronoi-based Variational Reconstruction of Unoriented Point Sets (The Eurographics Association, 2007) Alliez, Pierre; Cohen-Steiner, David; Tong, Yiying; Desbrun, Mathieu; Alexander Belyaev and Michael Garland We introduce an algorithm for reconstructing watertight surfaces from unoriented point sets. Using the Voronoi diagram of the input point set, we deduce a tensor field whose principal axes and eccentricities locally represent respectively the most likely direction of the normal to the surface, and the confidence in this direction estimation. An implicit function is then computed by solving a generalized eigenvalue problem such that its gradient is most aligned with the principal axes of the tensor field, providing a best-fitting isosurface reconstruction. Our approach possesses a number of distinguishing features. In particular, the implicit function optimization provides resilience to noise, adjustable fitting to the data, and controllable smoothness of the reconstructed surface. Finally, the use of simplicial meshes (possibly restricted to a thin crust around the input data) and (an)isotropic Laplace operators renders the numerical treatment simple and robust. • Reconstruction of Deforming Geometry from Time-Varying Point Clouds (The Eurographics Association, 2007) Wand, Michael; Jenke, Philipp; Huang, Qixing; Bokeloh, Martin; Guibas, Leonidas; Schilling, Andreas; Alexander Belyaev and Michael Garland In this paper, we describe a system for the reconstruction of deforming geometry from a time sequence of unstructured, noisy point clouds, as produced by recent real-time range scanning devices. Our technique reconstructs both the geometry and dense correspondences over time. Using the correspondences, holes due to occlusion are filled in from other frames. 
Our reconstruction technique is based on a statistical framework: The reconstruction should both match the measured data points and maximize prior probability densities that prefer smoothness, rigid deformation and smooth movements over time. The optimization procedure consists of an inner loop that optimizes the 4D shape using continuous numerical optimization and an outer loop that infers the discrete 4D topology of the data set using an iterative model assembly algorithm. We apply the technique to a variety of data sets, demonstrating that the new approach is capable of robustly retrieving animated models with correspondences from data sets suffering from significant noise, outliers and acquisition holes. • Data-Dependent MLS for Faithful Surface Approximation (The Eurographics Association, 2007) Lipman, Yaron; Cohen-Or, Daniel; Levin, David; Alexander Belyaev and Michael Garland In this paper we present a high-fidelity surface approximation technique that aims at a faithful reconstruction of piecewise-smooth surfaces from a scattered point set. The presented method builds on the Moving Least-Squares (MLS) projection methodology, but introduces a fundamental modification: While the classical MLS uses a fixed approximation space, i.e., polynomials of a certain degree, the new method is data-dependent. For each projected point, it finds a proper local approximation space of piecewise polynomials (splines). The locally constructed spline encapsulates the local singularities which may exist in the data. The optional singularity for this local approximation space is modeled via a Singularity Indicator Field (SIF) which is computed over the input data points. We demonstrate the effectiveness of the method by reconstructing surfaces from real scanned 3D data, while being faithful to their most delicate features. • Shape Reconstruction from Unorganized Cross-sections (The Eurographics Association, 2007) Boissonnat, Jean-Daniel; Memari, Pooran; Alexander Belyaev and Michael Garland In this paper, we consider the problem of reconstructing a shape from unorganized cross-sections. The main motivation for this problem comes from medical imaging applications where cross-sections of human organs are obtained by means of a free hand ultrasound apparatus. The position and orientation of the cutting planes may be freely chosen which makes the problem substantially more difficult than in the case of parallel cross-sections, for which a rich literature exists. The input data consist of the cutting planes and (an approximation of) their intersection with the object. Our approach consists of two main steps. First, we compute the arrangement of the cutting planes. Then, in each cell of the arrangement, we reconstruct an approximation of the object from its intersection with the boundary of the cell. Lastly, we glue the various pieces together. The method makes use of the Delaunay triangulation and generalizes the reconstruction method of Boissonnat and Geiger [BG93] for the case of parallel planes. The analysis provides a neat characterization of the topological properties of the result and, in particular, shows an interesting application of Moebius diagrams to compute the locus of the branching points. We have implemented our algorithm in C++, using the [CGAL] library. Experimental results show that the algorithm performs well and can handle complicated branching configurations. 
• A Streaming Algorithm for Surface Reconstruction (The Eurographics Association, 2007) Allegre, Remi; Chaine, Raphaelle; Akkouche, Samir; Alexander Belyaev and Michael Garland We present a streaming algorithm for reconstructing closed surfaces from large non-uniform point sets based on a geometric convection technique. Assuming that the sample points are organized into slices stacked along one coordinate axis, a triangle mesh can be efficiently reconstructed in a streamable layout with a controlled memory footprint. Our algorithm associates a streaming 3D Delaunay triangulation data-structure with a multilayer version of the geometric convection algorithm. Our method can process millions of sample points at the rate of 50k points per minute with 350 MB of main memory. • Multilevel Streaming for Out-of-Core Surface Reconstruction (The Eurographics Association, 2007) Bolitho, Matthew; Kazhdan, Michael; Burns, Randal; Hoppe, Hugues; Alexander Belyaev and Michael Garland Reconstruction of surfaces from huge collections of scanned points often requires out-of-core techniques, and most such techniques involve local computations that are not resilient to data errors. We show that a Poisson-based reconstruction scheme, which considers all points in a global analysis, can be performed efficiently in limited memory using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. We demonstrate scalable performance on several large datasets. • Elastic Secondary Deformations by Vector Field Integration (The Eurographics Association, 2007) Funck, Wolfram von; Theisel, Holger; Seidel, Hans-Peter; Alexander Belyaev and Michael Garland We present an approach for elastic secondary deformations of shapes described as triangular meshes. The deformations are steered by the simulation of a low number of simple mass-spring sets. The result of this simulation is used to define time-dependent divergence-free vector fields whose numerical path line integration gives the new location of each vertex. This way the deformation is guaranteed to be volume-preserving and without self-intersections, giving plausible elastic deformations. Due to a GPU implementation, the deformation can be obtained in real-time for fairly complex shapes. The approach also avoids unwanted intersections in the case of collisions in the primary animation. We demonstrate its accuracy, stableness and usefulness for different kinds of primary animations/deformations. • As-Rigid-As-Possible Surface Modeling (The Eurographics Association, 2007) Sorkine, Olga; Alexa, Marc; Alexander Belyaev and Michael Garland Modeling tasks, such as surface deformation and editing, can be analyzed by observing the local behavior of the surface. We argue that defining a modeling operation by asking for rigidity of the local transformations is useful in various settings. Such formulation leads to a non-linear, yet conceptually simple energy formulation, which is to be minimized by the deformed surface under particular modeling constraints. We devise a simple iterative mesh editing scheme based on this principle, that leads to detail-preserving and intuitive deformations. 
Our algorithm is effective and notably easy to implement, making it attractive for practical modeling applications.
• Example-Based Skeleton Extraction (The Eurographics Association, 2007) Schaefer, Scott; Yuksel, Can; Alexander Belyaev and Michael Garland
We present a method for extracting a hierarchical, rigid skeleton from a set of example poses. We then use this skeleton to not only reproduce the example poses, but create new deformations in the same style as the examples. Since rigid skeletons are used by most 3D modeling software, this skeleton and the corresponding vertex weights can be inserted directly into existing production pipelines. To create the skeleton, we first estimate the rigid transformations of the bones using a fast, face clustering approach. We present an efficient method for clustering by providing a Rigid Error Function that finds the best rigid transformation from a set of points in a robust, space efficient manner and supports fast clustering operations. Next, we solve for the vertex weights and enforce locality in the resulting weight distributions. Finally, we use these weights to determine the connectivity and joint locations of the skeleton.
• Triangulations with Locally Optimal Steiner Points (The Eurographics Association, 2007) Erten, Hale; Üngör, Alper; Alexander Belyaev and Michael Garland
We present two new Delaunay refinement algorithms, the second an extension of the first. For a given input domain (a set of points in the plane or a planar straight line graph), and a threshold angle α, the Delaunay refinement algorithms compute triangulations that have all angles at least α. Our algorithms have the same theoretical guarantees as the previous Delaunay refinement algorithms. The original Delaunay refinement algorithm of Ruppert is proven to terminate with size-optimal quality triangulations.
• Unconstrained Isosurface Extraction on Arbitrary Octrees (The Eurographics Association, 2007) Kazhdan, Michael; Klein, Allison; Dalal, Ketan; Hoppe, Hugues; Alexander Belyaev and Michael Garland
This paper presents a novel algorithm for generating a watertight level-set from an octree. We show that the level-set can be efficiently extracted regardless of the topology of the octree or the values assigned to the vertices. The key idea behind our approach is the definition of a set of binary edge-trees derived from the octree's topology. We show that the edge-trees can be used to define the positions of the isovalue-crossings in a consistent fashion and to resolve inconsistencies that may arise when a single edge has multiple isovalue-crossings. Using the edge-trees, we show that a provably watertight mesh can be extracted from the octree without necessitating the refinement of nodes or modification of their values.
• Linear Angle Based Parameterization (The Eurographics Association, 2007) Zayer, Rhaleb; Lévy, Bruno; Seidel, Hans-Peter; Alexander Belyaev and Michael Garland
In the field of mesh parameterization, the impact of angular and boundary distortion on parameterization quality have brought forward the need for robust and efficient free boundary angle preserving methods. One of the most prominent approaches in this direction is the Angle Based Flattening (ABF) which directly formulates the problem as a constrained nonlinear optimization in terms of angles. Since the original formulation of the ABF, a steady research effort has been dedicated to improving its efficiency.
As for any well posed numerical problem, the solution is generally an approximation of the underlying mathematical equations. The economy and accuracy of the solution are to a great extent affected by the kind of approximation used. In this work we reformulate the problem based on the notion of error of estimation. A careful manipulation of the resulting equations yields for the first time a linear version of angle based parameterization. The error induced by this linearization is quadratic in terms of the error in angles and the validity of the approximation is further supported by numerical results. Besides performance speedup, the simplicity of the current setup makes re-implementation and reproduction of our results straightforward.
• GPU-assisted Positive Mean Value Coordinates for Mesh Deformations (The Eurographics Association, 2007) Lipman, Yaron; Kopf, Johannes; Cohen-Or, Daniel; Levin, David; Alexander Belyaev and Michael Garland
In this paper we introduce positive mean value coordinates (PMVC) for mesh deformation. Following the observations of Joshi et al. [JMD*07] we show the advantage of having positive coordinates. The control points of the deformation are the vertices of a "cage" enclosing the deformed mesh. To define positive mean value coordinates for a given vertex, the visible portion of the cage is integrated over a sphere. Unlike MVC [JSW05], PMVC are computed numerically. We show how the PMVC integral can be efficiently computed with graphics hardware. While the properties of PMVC are similar to those of Harmonic coordinates [JMD*07], the setup time of the PMVC is only a few seconds for typical meshes with 30K vertices. This speed-up renders the new coordinates practical and easy to use.
• Developable Surfaces from Arbitrary Sketched Boundaries (The Eurographics Association, 2007) Rose, Kenneth; Sheffer, Alla; Wither, Jamie; Cani, Marie-Paule; Thibert, Boris; Alexander Belyaev and Michael Garland
• Generalized Surface Flows for Mesh Processing (The Eurographics Association, 2007) Eckstein, Ilya; Pons, Jean-Philippe; Tong, Yiying; Kuo, C.-C. Jay; Desbrun, Mathieu; Alexander Belyaev and Michael Garland
Geometric flows are ubiquitous in mesh processing. Curve and surface evolutions based on functional minimization have been used in the context of surface diffusion, denoising, shape optimization, minimal surfaces, and geodesic paths to mention a few. Such gradient flows are nearly always, yet often implicitly, based on the canonical L2 inner product of vector fields.
In this paper, we point out that changing this inner product provides a simple, powerful, and untapped approach to extend current flows. We demonstrate the value of such a norm alteration for regularization and volume-preservation purposes and in the context of shape matching, where deformation priors (ranging from rigid motion to articulated motion) can be incorporated into a gradient flow to drastically improve results. Implementation details, including a differentiable approximation of the Hausdorff distance between irregular meshes, are presented.

• Dynamic Geometry Registration (The Eurographics Association, 2007)
Mitra, Niloy J.; Flöry, Simon; Ovsjanikov, Maks; Gelfand, Natasha; Guibas, Leonidas; Pottmann, Helmut; Alexander Belyaev and Michael Garland
We propose an algorithm that performs registration of large sets of unstructured point clouds of moving and deforming objects without computing correspondences. Given as input a set of frames with dense spatial and temporal sampling, such as the raw output of a fast scanner, our algorithm exploits the underlying temporal coherence in the data to directly compute the motion of the scanned object and bring all frames into a common coordinate system. In contrast with existing methods, which usually perform pairwise alignments between consecutive frames, our algorithm computes a globally consistent motion spanning multiple frames. We add a time coordinate to all the input points based on the ordering of the respective frames and pose the problem of computing the motion of each frame as an estimation of certain kinematic properties of the resulting space-time surface. By performing this estimation for each frame as a whole we are able to compute rigid inter-frame motions, and by adapting our method to perform a local analysis of the space-time surface, we extend the basic algorithm to handle registration of deformable objects as well. We demonstrate the performance of our algorithm on a number of synthetic and scanned examples, each consisting of hundreds of scans.

• Shape Optimization Using Reflection Lines (The Eurographics Association, 2007)
Tosun, Elif; Gingold, Yotam I.; Reisman, Jason; Zorin, Denis; Alexander Belyaev and Michael Garland
Many common objects have highly reflective metallic or painted finishes. Their appearance is primarily defined by the distortion the curved shape of the surface introduces in the reflections of surrounding objects. Reflection lines are commonly used for surface interrogation, as they capture many essential aspects of reflection distortion directly, and clearly show surface imperfections that may be hard to see with conventional lighting. In this paper, we propose the use of functionals based on reflection lines for mesh optimization and editing. We describe a simple and efficient discretization of such functionals based on screen-space surface parameterization, and we demonstrate how such discrete functionals can be used for several types of surface editing.

• Constraint-based Fairing of Surface Meshes (The Eurographics Association, 2007)
Hildebrandt, Klaus; Polthier, Konrad; Alexander Belyaev and Michael Garland
We propose a constraint-based method for the fairing of surface meshes. The main feature of our approach is that the resulting smoothed surface remains within a prescribed distance to the input mesh. For example, specifying the maximum distance in the order of the measuring precision of a laser scanner allows noise to be removed while preserving the accuracy of the scan.
The approach is modeled as an optimization problem where a fairness measure is minimized subject to constraints that control the spatial deviation of the surface. The problem is efficiently solved by an active-set Newton method.

• Bayesian Surface Reconstruction via Iterative Scan Alignment to an Optimized Prototype (The Eurographics Association, 2007)
Huang, Qi-Xing; Adams, Bart; Wand, Michael; Alexander Belyaev and Michael Garland
This paper introduces a novel technique for joint surface reconstruction and registration. Given a set of roughly aligned noisy point clouds, it outputs a noise-free and watertight solid model. The basic idea of the new technique is to reconstruct a prototype surface at increasing resolution levels, according to the registration accuracy obtained so far, and to register all parts with this surface. We derive a non-linear optimization problem from a Bayesian formulation of the joint estimation problem. The prototype surface is represented as a partition-of-unity implicit surface, which is constructed from piecewise quadratic functions defined on octree cells and blended together using B-spline basis functions, allowing the representation of objects with arbitrary topology with high accuracy. We apply the new technique to a set of standard data sets as well as to especially challenging real-world cases. In practice, the novel prototype-surface-based joint reconstruction-registration algorithm avoids typical convergence problems in registering noisy range scans and substantially improves the accuracy of the final output.

• Symmetry-Enhanced Remeshing of Surfaces (The Eurographics Association, 2007)
Podolak, Joshua; Golovinskiy, Aleksey; Rusinkiewicz, Szymon; Alexander Belyaev and Michael Garland
While existing methods for 3D surface approximation use local geometric properties, we propose that more intuitive results can be obtained by considering global shape properties such as symmetry. We modify the Variational Shape Approximation technique to consider the symmetries, near-symmetries, and partial symmetries of the input mesh. This has the effect of preserving and even enhancing symmetries in the output model, if doing so does not increase the error substantially. We demonstrate that using symmetry produces results that are more aesthetically appealing and correspond more closely to human expectations, especially when simplifying to very few polygons.

• Laplace-Beltrami Eigenfunctions for Deformation Invariant Shape Representation (The Eurographics Association, 2007)
Rustamov, Raif M.; Alexander Belyaev and Michael Garland
A deformation invariant representation of surfaces, the GPS embedding, is introduced using the eigenvalues and eigenfunctions of the Laplace-Beltrami differential operator. Notably, since the definition of the GPS embedding completely avoids the use of geodesic distances, and is based on objects of global character, the obtained representation is robust to local topology changes. The GPS embedding captures enough information to handle various shape processing tasks such as shape classification, segmentation, and correspondence. To demonstrate the practical relevance of the GPS embedding, we introduce a deformation invariant shape descriptor called G2-distributions, and demonstrate their discriminative power, invariance under natural deformations, and robustness.

• Fast Normal Vector Compression with Bounded Error (The Eurographics Association, 2007)
Griffith, E. J.; Koutek, M.; Post, Frits H.; Alexander Belyaev and Michael Garland
We present two methods for lossy compression of normal vectors through quantization using base polyhedra. The first revisits subdivision-based quantization. The second uses fixed-precision barycentric coordinates. For both, we provide fast (de)compression algorithms and a rigorous upper bound on compression error. We discuss the effects of base polyhedra on the error bound and suggest polyhedra derived from spherical coverings. Finally, we present compression and decompression results, and we compare our methods to others from the literature.

• Surface Reconstruction using Local Shape Priors (The Eurographics Association, 2007)
Gal, Ran; Shamir, Ariel; Hassner, Tal; Pauly, Mark; Cohen-Or, Daniel; Alexander Belyaev and Michael Garland
We present an example-based surface reconstruction method for scanned point sets. Our approach uses a database of local shape priors built from a set of given context models that are chosen specifically to match a specific scan. Local neighborhoods of the input scan are matched with enriched patches of these models at multiple scales. Hence, instead of using a single prior for reconstruction, our method allows specific regions in the scan to match the most relevant prior that fits best. Such high-confidence matches carry relevant information from the prior models to the scan, including normal data and feature classification, and are used to augment the input point set. This makes it possible to resolve many ambiguities and difficulties that come up during reconstruction, e.g., distinguishing between signal and noise or between gaps in the data and boundaries of the model. We demonstrate how our algorithm, given suitable prior models, successfully handles noisy and under-sampled point sets, faithfully reconstructing smooth regions as well as sharp features.

• Delaunay Mesh Construction (The Eurographics Association, 2007)
Dyer, Ramsay; Zhang, Hao; Möller, Torsten; Alexander Belyaev and Michael Garland
We present algorithms to produce Delaunay meshes from arbitrary triangle meshes by edge flipping and geometry-preserving refinement, and prove their correctness. In particular, we show that edge flipping serves to reduce mesh surface area, and that a poorly sampled input mesh may yield unflippable edges, necessitating refinement to ensure a Delaunay mesh output. Multiresolution Delaunay meshes can be obtained via constrained mesh decimation. We further examine the usefulness of trading off the geometry-preserving feature of our algorithm with the ability to create fewer triangles. We demonstrate the performance of our algorithms through several experiments.

• Ridge Based Curve and Surface Reconstruction (The Eurographics Association, 2007)
Süßmuth, Jochen; Greiner, Günther; Alexander Belyaev and Michael Garland
This paper presents a new method for reconstructing curves and surfaces from unstructured point clouds, allowing for noise in the data as well as an inhomogeneous distribution of the point set. It is based on the observation that the curve/surface is located where the point cloud locally has the highest density. This idea is pursued by a differential-geometric analysis of a smoothed version of the density function. More precisely, we detect ridges of this function and single out the relevant parts.
An efficient implementation of this approach evaluates the differential-geometric quantities on a regular grid, performs a local analysis, and finally recovers the curve/surface by isoline extraction or a marching cubes algorithm, respectively. Compared to existing surface reconstruction procedures, this approach works well for noisy data and for data with a strongly varying sampling rate. Thus it can be applied successfully to reconstruct surface geometry from time-of-flight data, overlapping registered point clouds, and point clouds obtained by feature tracking from video streams. Corresponding examples are presented to demonstrate the advantages of our method.
{"url":"https://diglib.eg.org/collections/66ac6869-c0d9-46f8-8b76-c3c6115d0f9f","timestamp":"2024-11-12T03:57:13Z","content_type":"text/html","content_length":"1049033","record_id":"<urn:uuid:4e860fa5-0fca-44be-8425-23f863afc251>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00103.warc.gz"}
A right circular cone has a volume of 24π cubic inches. If the height of the cone is 2 inches, what is the radius, in inches, of the base of the cone?

Choice B is correct. The formula for the volume of a right circular cone is V = (1/3)πr²h, where r is the radius of the base and h is the height of the cone. It's given that the cone's volume is 24π cubic inches and its height is 2 inches. Substituting 24π for V and 2 for h yields 24π = (1/3)πr²(2). Rewriting the right-hand side of this equation yields 24π = (2/3)πr², which is equivalent to r² = 36. Taking the square root of both sides of this equation gives r = ±6. Since the radius is a measure of length, it can't be negative. Therefore, the radius of the base of the cone is 6 inches.

Choice A is incorrect and may result from using the formula for the volume of a right circular cylinder instead of a right circular cone. Choice C is incorrect; this is the diameter of the cone. Choice D is incorrect and may result from not taking the square root when solving for the radius.

Topic: Volume. Subject: Mathematics. Class: Grade 12.
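As a quick arithmetic check of the solution above, here is a short Python snippet (the variable names are ours) that solves V = (1/3)πr²h for r, with V = 24π and h = 2:

    import math

    # Cone volume: V = (1/3) * pi * r^2 * h. Solve for r given V and h.
    V = 24 * math.pi   # cubic inches
    h = 2.0            # inches

    r = math.sqrt(3 * V / (math.pi * h))  # r = sqrt(3V / (pi * h))
    print(r)           # 6.0 -> the radius is 6 inches (choice B)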
{"url":"https://askfilo.com/mathematics-question-answers/a-right-circular-cone-has-a-volume-of-24-pi-cubic-inches-if-the-height-of-the","timestamp":"2024-11-15T02:49:11Z","content_type":"text/html","content_length":"305268","record_id":"<urn:uuid:dd89e2bf-3983-45e8-bbb7-6ac8a250b6d5>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00602.warc.gz"}
Question #3b6cf | Socratic

1 Answer

The Bohr model does apply to $\text{He}^+$. It does not apply to $\text{H}^+$, $\text{Li}^+$, or $\text{Be}^{2+}$.

The Bohr model applies only to a hydrogen atom or to ions that contain only one electron (hydrogen-like ions). To get a hydrogen-like ion, we must remove all but one of the electrons from an atom. Such ions include $\text{He}^+$, $\text{Li}^{2+}$, $\text{Be}^{3+}$, etc. They do not include $\text{H}^+$ (no electron) or $\text{Li}^+$ and $\text{Be}^{2+}$ (two electrons each).

Why doesn't the model work for multi-electron atoms? If the species had two electrons, the mathematical calculations would have to include a term representing the repulsions between the electrons. This makes the equations extremely difficult to solve, and they are next to impossible with three or more electrons. Thus, we cannot apply the equations of the Bohr model to any other atom.
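To see what the model does give you for the one-electron species named above, here is a small Python sketch using the standard Bohr energy-level formula $E_n = -13.6 \cdot Z^2/n^2$ eV; the function name and the rounded 13.6 eV constant are our own choices for illustration:

    # Bohr-model energies for one-electron (hydrogen-like) species, using
    # E_n = -13.6 eV * Z^2 / n^2. The formula is meaningless for species
    # with zero electrons (H+) or more than one electron (Li+, Be2+).
    RYDBERG_EV = 13.6  # approximate hydrogen ground-state binding energy, eV

    def bohr_energy(Z, n=1):
        """Energy of level n for a one-electron ion with nuclear charge Z."""
        return -RYDBERG_EV * Z**2 / n**2

    for label, Z in [("H", 1), ("He+", 2), ("Li2+", 3), ("Be3+", 4)]:
        print(f"{label}: E_1 = {bohr_energy(Z):.1f} eV")
    # H: -13.6 eV, He+: -54.4 eV, Li2+: -122.4 eV, Be3+: -217.6 eV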
{"url":"https://socratic.org/questions/58764a27b72cff7612f3b6cf","timestamp":"2024-11-09T07:27:41Z","content_type":"text/html","content_length":"34618","record_id":"<urn:uuid:020b9de4-ab10-4675-95e0-facc7cecddcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00261.warc.gz"}
Analytical and numerical solutions of the mixed problem for generalized Prandtl equations

The paper examines generalized Prandtl equations for the two-dimensional flow of a viscous fluid; the equations are suitable both for core flows and for flows next to a solid surface. Group-invariant solutions of the equation describing flows in a plane diffuser are considered, and it is shown that linearized flow in a plane pipe tends to a Poiseuille flow. A semianalytic method for the solution of regularized hyperbolic generalized Prandtl equations applied to evolutionary problems is presented. The flow in a plane pipe is treated numerically.

PMTF (Zhurnal Prikladnoi Mekhaniki i Tekhnicheskoi Fiziki). Pub date: October 1976.

Keywords: Flow Equations; Numerical Flow Visualization; Pipe Flow; Prandtl-Meyer Expansion; Two Dimensional Flow; Viscous Flow; Boundary Layer Flow; Core Flow; Diffusers; Hyperbolic Differential Equations; Laminar Flow; Fluid Mechanics and Heat Transfer
{"url":"https://ui.adsabs.harvard.edu/abs/1976PMTF........81S/abstract","timestamp":"2024-11-07T16:45:17Z","content_type":"text/html","content_length":"34665","record_id":"<urn:uuid:358d3e5d-ed9b-4176-ad2b-284e29c4c35b>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00717.warc.gz"}
Markov Chain Monte Carlo

A Markov chain is a mathematical process that undergoes transitions from one state to another. Key properties of a Markov process are that it is random and that each step in the process is "memoryless;" in other words, the future state depends only on the current state of the process and not the past. A succession of these steps is a Markov chain. This stochastic process may be described in terms of the conditional probability

Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n).

The possible values of X_i form the countable set S, which is the state space of the chain. Markov chains are frequently represented by a directed graph (as opposed to our usual directed acyclic graph), where the edges are labeled with the probabilities of going from one state to another.

A simple and often used example of a Markov chain is the board game "Chutes and Ladders." The board consists of 100 numbered squares, with the objective being to land on square 100. The roll of the die determines how many squares the player will advance, with equal probability of advancing from 1 to 6 squares. Each move is determined only by the player's present position. However, the board is filled with chutes, which move a player backward if landed on, and ladders, which will move a player forward. Once the stochastic Markov matrix, used to describe the probability of transition from state to state, is defined, there are several languages such as R, SAS, Python or MatLab that will compute such parameters as the expected length of the game and the median number of rolls to land on square 100 (39.6 moves and 32 rolls, respectively).

In Bayesian analysis we make inferences on unknown quantities of interest (which could be parameters in a model, missing data, or predictions) by combining prior beliefs about the quantities of interest and evidence contained in an observed set of data. This is unlike so-called "frequentist" reasoning, where a hypothesis is tested without being assigned a probability; it must either be true or false, accepted or rejected. A Bayesian model has two parts: a statistical model that describes the distribution of the data, usually a likelihood function, and a prior distribution that describes the beliefs about the unknown quantities independent of the data. A posterior distribution is then derived from the "prior" and the likelihood function.

Markov Chain Monte Carlo (MCMC) simulations allow for parameter estimation, such as means, variances, and expected values, and for exploration of the posterior distribution of Bayesian models. To assess the properties of a "posterior," many representative random values should be sampled from that distribution. A Monte Carlo process refers to a simulation that samples many random values from a posterior distribution of interest. The name supposedly derives from the musings of mathematician Stan Ulam on the successful outcome of a game of cards he was playing, and from the Monte Carlo Casino in Monaco.

A Metropolis algorithm (named after Nicholas Metropolis, a poker buddy of Dr. Ulam) is a commonly used MCMC process. This algorithm produces a so-called "random walk," where a distribution is repeatedly sampled in small steps; each step depends only on the current position, not on the moves before, and so is memoryless. This process is then used, for example, to describe a distribution or to compute an expected value. A minimal code sketch of such a walk is given just below, followed by a worked example of a Metropolis algorithm, provided with permission of Dr. Charles DiMaggio, who himself gives credit to John K. Kruschke.
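The following short Python sketch implements the random-walk rule that the worked example below spells out in words. The district voter counts are hypothetical stand-ins (district i is given i units of likely voters), and the function and variable names are ours:

    import random

    # Minimal Metropolis random walk over 7 districts. District i is
    # assumed to have i units of likely voters (a stand-in target).
    voters = {d: d for d in range(1, 8)}

    def metropolis_walk(days=100_000, start=4, seed=0):
        rng = random.Random(seed)
        visits = {d: 0 for d in voters}
        current = start
        for _ in range(days):
            # Flip a coin: propose the adjacent district east or west.
            proposal = current + rng.choice([-1, 1])
            if proposal in voters:
                # Move with probability min(1, proposed / current);
                # always move if the proposed district has more voters.
                if rng.random() < min(1.0, voters[proposal] / voters[current]):
                    current = proposal
            visits[current] += 1   # stepping off the board means staying put
        return visits

    visits = metropolis_walk()
    total = sum(visits.values())
    for d in sorted(visits):
        # Time spent in each district is roughly proportional to its voters.
        print(d, round(visits[d] / total, 3))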
"A politician is campaigning in 7 districts, one adjacent to the other. She wants to spend time in each district, but due to financial constraints, would like to spend time in each district proportional to the number of likely voters in that district. The only information available is the number of voters in the district she is currently in, and in those that are directly adjacent to it on either side. Each day, she must decide whether to campaign in the same district, move to the adjacent eastern district, or move to the adjacent western district. On any given day, here's how the decision is made whether to move or not:

1. Flip a coin: heads to propose a move east, tails to propose a move west.
2. If the district indicated by the coin (east or west) has more likely voters than the present district, move there.
3. If the district indicated by the coin has fewer likely voters, make the decision based on a probability calculation: compute the probability of moving as the ratio of the number of likely voters in the proposed district to the number of voters in the current district,
Pr[move] = voters in indicated district / voters in present district.
Take a random sample between 0 and 1. If the value of the random sample is between 0 and the probability of moving, move. Otherwise, stay put.

This "random walk" will, after many repeated sampling steps, be such that the time spent in each district is proportional to the number of voters in that district. Our Metropolis algorithm produced the following distribution (figure omitted). A walk-through: at time = 1, she is in district 4.
• Flip a coin. The proposed move is to district 5. Accept the proposed move because the voter population in district 5 is greater than that in district 4.
• New day. Flip a coin. The proposed move is to district 6. Greater population, move.
• New day. Flip a coin. The proposed move is to district 7. Greater population, move.
• At time = 4, she is in district 7. Flip a coin. If the proposal is to move to district 6, base the decision to move on the probability criterion of 6/7. Draw a random sample between 0 and 1; if the value is between 0 and 6/7, move. Otherwise remain in district 7.
• Perform this procedure many times."

As it turns out, the actual target distribution of voters across the districts looks like this (figure omitted). What we see is that after many walks, the MCMC process "converges" to produce a distribution that is a mirror image of the actual distribution. Of course, this is a highly simplified example, and MCMC algorithms can be created for continuous and multi-parameter distributions. This example does highlight several features of an MCMC process:

• Burn-in: A random point was chosen to be the first sample. Note that in the distribution produced by the Metropolis algorithm, there is an increased density of samples around the starting district 4. It may take some time for the "walk" to move away from the initial starting point. If the target distribution has a sparser density in that region, the estimates produced from the MCMC will be biased. To mitigate this, an initial portion of a Markov chain sample is discarded so that the effect of initial values on inference is minimized. This is referred to as the "burn-in" period.

• Efficiency: A probability density, or proposal distribution, was assigned to suggest a candidate for the next sample value, given the previous sample value. A typical choice, as in this example, is to let the proposal distribution be such that points closer to the previous sample point are more likely to be visited next.
Whatever form (Gaussian or otherwise) the proposal distribution takes, the goal is for this function to adequately and efficiently explore the sample space where the target distribution has the greatest density. If the target distribution is very broad and the proposal distribution is too "narrow," it may take quite a while for the walk to find its way around the whole target distribution, and the MCMC will not be very efficient.

• An acceptance ratio was used to decide whether to accept or reject the next proposed sample. Remember that this ratio was proportional to the density of the target distribution. If the proposal distribution is too broad, the acceptance ratio may only infrequently be large enough to allow the walk to move from the current spot. The walk may then be trapped in a localized area of the target distribution.

There are many other sampling algorithms for MCMC. Another common technique is Gibbs sampling. Instead of choosing a candidate sample from a proposal distribution that represents the whole density, the Gibbs sampler chooses a random value for a single parameter while holding all the other parameters constant. This means that no separate proposal distribution needs to be "tuned" for the walk to function efficiently.

When an MCMC has reached a stable set of samples from a stationary posterior distribution, it is said to have converged. Some models may never converge, for some of the reasons discussed above. For example, a poorly fit proposal distribution may lead to the walk never leaving a small area of the target distribution, or doing so very slowly. A high degree of autocorrelation between samples (some is expected) may also lead to very small steps in the walk, and slow or no convergence. Errors in programming and syntax have been cited by many authors as another reason for failure of convergence. This is a perilous feature of MCMC algorithms: there is no one test or method to ensure that convergence has occurred. The danger is that the inferred posterior distribution may be absolutely wrong, and parameter estimates will then also be incorrect. Therefore, assessing convergence is considered a mandatory step in MCMC.

There are formal, but not definitive, statistical tests of convergence. The Gelman-Rubin statistic assesses parallel chains with dispersed initial values to test whether they converge to the same target distribution. Examining trace plots of samples versus the simulation or iteration number is a simple way to visually test for convergence:

(figure: trace plot showing excellent convergence, centered around a gamma of 3, with small fluctuations)
(figure: trace plot showing poor chain convergence)

Besides the important work of estimating the average length of a game of "Chutes and Ladders," MCMC can also be used for epidemiological analyses where one would want to simulate a posterior distribution. The advantage of simulating a posterior distribution is that, if done correctly, one can estimate virtually all summaries of interest directly from the simulations. For example, one can estimate means, variances, and posterior intervals for a quantity of interest. One situation where this would be particularly advantageous is a setting where the observed data is incomplete; simulations to complete that data allow you to generate "true" rather than "observed" estimates. As an example, Worby et al.
use MCMC in their assessment of the effectiveness of isolation and decolonization measures to reduce MRSA transmission in hospital wards, to account for imperfect and infrequent MRSA screening.

Another situation where one might want to simulate a posterior distribution is when there is missing data in a survey. When the desired posterior distribution is intractable due to missingness in the observed data, the missing data can be simulated to create a tractable posterior distribution. MCMC procedures can be used in which all missing data values are initially given plausible starting values. Then, based on certain parametric assumptions, a subsequent data value can be simulated based only on the previous value. Repeating this procedure generates an iterative Markovian process, which yields successive simulations of the distribution of missing values, conditioned on both the observed data and the previously simulated missing data. These are just two examples of the many applications of MCMC methods. More examples are presented under the Application Articles subheading below.

Textbooks & Chapters

Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition. Chapman & Hall/CRC, 2002, by Jeff Gill.

Doing Bayesian Data Analysis: A Tutorial Introduction with R, First Edition. Academic Press / Elsevier, 2011, by John K. Kruschke. This text comes recommended by Dr. Charles DiMaggio; it is written for those who want to perform real-life analysis using Bayesian concepts and methods, including MCMC. It provides comprehensive R script, and bills itself as accessible to non-statisticians, with chapter 1.1 entitled: "Real People Can Read This Book."

Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Second Edition. London: Chapman & Hall/CRC, 2006, by Gamerman, D. and Lopes, H. F. This book provides an introductory chapter on Markov Chain Monte Carlo techniques as well as a review of more in-depth topics, including a description of Gibbs sampling and the Metropolis algorithm.

Monte Carlo Strategies in Scientific Computing. Springer-Verlag: New York, 2001, by J. S. Liu. The fundamentals of Monte Carlo methods and theory are described. Strategies for conducting Markov Chain Monte Carlo analyses and methods for efficient sampling are discussed.

Monte Carlo Methods in Bayesian Computation. New York: Springer-Verlag, 2000, by Chen, M. H., Shao, Q. M., and Ibrahim, J. G. This book provides a thorough examination of Markov Chain Monte Carlo techniques. Sampling and Monte Carlo methods for estimation of posterior quantities are reviewed.

Markov Chain Monte Carlo in Practice. Chapman and Hall, 1996, W. R. Gilks, S. Richardson, D. J. Spiegelhalter (Eds.). This book gives an overview of MCMC, as well as worked examples from several different epidemiological disciplines. The text goes into more depth than the average student may need on the topic, and the programming has advanced since it was published in 1996.

Methodological Articles

A tutorial introduction to Bayesian inference for stochastic epidemic models using Markov chain Monte Carlo methods. O'Neill P., Mathematical Biosciences (2002) 180: 103–114. A good descriptive overview of MCMC methods for modeling infectious disease outbreaks. Examples include measles, influenza and smallpox.

Bayesian Modeling Using the MCMC Procedure. Chen F., SAS Global Forum 2009. Comprehensive tutorial in the use of SAS for MCMC models, as well as a good overview of MCMC methods in general.
Markov Chain Monte Carlo in Practice: A Roundtable Discussion. Moderator: Robert Kass; panelists Bradley Carlin, Andrew Gelman and Radford Neal. Edited discussion from the Joint Statistical Meetings in 1996. A "nuts and bolts" discussion of the common applications, limitations, uses and misuses of MCMC methods. The first 3-4 pages offer a basic background on MCMC.

An Introduction to MCMC for Machine Learning. Andrieu C., De Freitas N., Doucet A., Jordan M., Machine Learning (2003) 50: 5–43. Despite the title, this article is a comprehensive lesson in the "main building blocks" of MCMC methods. It delves into the mathematical assumptions in detail and is quite technical. It is referenced as a background article by many other sources on MCMC.

Application Articles

Estimating the Effectiveness of Isolation and Decolonization Measures in Reducing Transmission of Methicillin-resistant Staphylococcus aureus in Hospital General Wards. Worby C., Jeyaratnam D., Robotham J., Kypraios T., O'Neill P., De Angelis D., French G., and Cooper B., Am. J. Epidemiol., published online April 16th, 2013. The utility of isolation and decolonization protocols on the spread of MRSA in a hospital setting is demonstrated using an MCMC algorithm to model transmission dynamics.

A Markov chain Monte Carlo algorithm for multiple imputation in large surveys. Schunk D., AStA (2008) 92: 101–114. Describes the use of MCMC in multiple imputation of missing data.

An MCMC algorithm for haplotype assembly from whole-genome sequence data. Bansal V., Halpern A., Axelrod N., Bafna V., Genome Res. (2008) 18: 1336–1346. The authors create a novel MCMC algorithm to perform haplotype reconstruction/imputation.

Bayesian inference of hospital-acquired infectious diseases and control measures given imperfect surveillance data. Forrester M., Pettitt A., Biostatistics (2007) 8: 383–401. The authors use MCMC to model hospital MRSA transmission rates and the probability of patient colonization on admission when missing data is present, a topic close to Jen Duchon's heart. The reference section of this article is an excellent resource.

Decrypting Classical Cipher Text Using Markov Chain Monte Carlo. Chen J. and Rosenthal J., 2010. A cool use of MCMC: code breaking! Priming the algorithm using different reference texts (War and Peace, Oliver Twist, a Wikipedia page on ice hockey), the authors use MCMC to break different kinds of classical ciphers.

A Bayesian model for cluster detection. Wakefield, J.; Kim, A. Biostatistics (2013) 14: 752–765.

Modelling life course blood pressure trajectories using Bayesian adaptive splines. G Muniz-Terrera, E Bakra, R Hardy, FE Matthews, DJ Lunn. Statistical Methods in Medical Research (2014), Apr 25. [Epub ahead of print]

Another fun application: some undergraduate students from the University of Wisconsin demonstrate the use of a Markov chain analysis to find the expected length of a full game of "Chutes and Ladders."

Websites

• Dr. Iain Murray from the University of Edinburgh giving a lecture at the Machine Learning Summer School in 2009. This lecture is not geared toward epidemiologists or clinicians: http://
• Dr. Simon Jackman's work in Bayesian analysis for the social sciences, including fairly complete lecture notes for his course by the same name, can be found at: https://web.stanford.edu/class/
• The R bloggers chime in on how long you'll have to be sitting there with your 4 year old playing Chutes and Ladders. And how to figure out what square your kid was on if he has a meltdown and throws the pieces. Complete with R script (a Python sketch in the same spirit appears just below): http://www.r-bloggers.com/basics-on-markov-chain-for-parents/
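In the same spirit as those last two items, here is a short Monte Carlo sketch in Python that estimates the expected length of a solo game. The chute-and-ladder map is the classic commercial board layout, taken here as an assumption, and the rule that an overshooting roll is forfeited is one common convention; figures such as the 39.6 expected moves quoted earlier depend on exactly these rule choices, so your estimate may differ slightly:

    import random

    # Assumed classic board layout: ladders move you up, chutes move you down.
    JUMPS = {1: 38, 4: 14, 9: 31, 21: 42, 28: 84, 36: 44, 51: 67, 71: 91,
             80: 100, 16: 6, 47: 26, 49: 11, 56: 53, 62: 19, 64: 60,
             87: 24, 93: 73, 95: 75, 98: 78}

    def game_length(rng):
        """Number of rolls for one player to reach square 100 exactly."""
        square, rolls = 0, 0
        while square != 100:
            roll = rng.randint(1, 6)
            rolls += 1
            if square + roll <= 100:       # overshooting rolls are forfeited
                square = JUMPS.get(square + roll, square + roll)
        return rolls

    rng = random.Random(0)
    lengths = [game_length(rng) for _ in range(100_000)]
    print(sum(lengths) / len(lengths))     # mean solo-game length, ~39-40 rolls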
Courses

The International Society for Bayesian Analysis offers courses in general Bayesian statistical analysis. In past years, topics have included dedicated seminars on MCMC. Archived resources include videotaped lectures on MCMC methods (including Dr. Iain Murray's, listed above).

Monte Carlo Methods for Optimization. This graduate-level course, offered at the University of California, Los Angeles, focuses on Monte Carlo methodology. Lecture notes are posted and available for review.
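Finally, to make the convergence checks discussed earlier concrete, here is a small Python sketch of the Gelman-Rubin potential scale reduction factor in its textbook form (the function name and toy chains are ours); values near 1 are consistent with the parallel chains having mixed:

    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor R-hat for parallel chains.

        `chains` is an (m, n) array: m chains, each holding n post-burn-in
        samples of one scalar parameter. Values near 1 suggest convergence.
        """
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
        B = n * chain_means.var(ddof=1)         # between-chain variance
        var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
        return np.sqrt(var_hat / W)

    rng = np.random.default_rng(0)
    # Two well-mixed chains sampling the same target -> R-hat near 1.
    good = rng.normal(3.0, 1.0, size=(2, 5000))
    # Two chains stuck in different places -> R-hat well above 1.
    bad = np.stack([rng.normal(0, 1, 5000), rng.normal(5, 1, 5000)])
    print(gelman_rubin(good), gelman_rubin(bad))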
{"url":"https://www.publichealth.columbia.edu/research/population-health-methods/markov-chain-monte-carlo","timestamp":"2024-11-09T00:57:29Z","content_type":"text/html","content_length":"245230","record_id":"<urn:uuid:2770c2a8-07ef-47d1-9296-78297b2ac9da>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00753.warc.gz"}