content
stringlengths
86
994k
meta
stringlengths
288
619
What Holds it Together? Gravity What about gravity? Gravity is weird. It is clearly one of the fundamental interactions, but the Standard Model cannot satisfactorily explain it. This is one of those major unanswered problems in physics today. In addition, the gravity force carrier particle has not been found. Such a particle, however, is predicted to exist and may someday be found: the graviton. Fortunately, the effects of gravity are extremely tiny in most particle physics situations compared to the other three interactions, so theory and experiment can be compared without including gravity in the calculations. Thus, the Standard Model works without explaining gravity. (I still don't get it.)
{"url":"https://ccwww.kek.jp/pdg/particleadventure/frameless/gravity.html","timestamp":"2024-11-04T14:12:56Z","content_type":"text/html","content_length":"4731","record_id":"<urn:uuid:08f40a87-af37-4497-a638-481b5250029e>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00290.warc.gz"}
Sizing Gas Piping Layout You may ask why you need to know how to size gas lines. Here are a few reasons to know how to size new gas line or verify the existing gas line is properly sized. • When replacing the heating appliance • Replacing a boiler and tank type water heater with a combi boiler • New construction • When existing system cannot maintain proper incoming gas pressure • Converting oil to gas • Where and if you can add a gas appliance to an existing gas line. We will show you that sizing gas lines for the home is easy. We will give the details along with charts and a few piping layouts. You will also be able to download a couple worksheets and after your done with your worksheets you will be able to check your work by clicking links with answers. There are three basic types of calculations for residential gas lines. • Longest Pipe Calculation - This is what we will teach here. • Branch Calculation • Manifold Home Run System - Let's start with a standard pipe calculation from the meter to the furthest point. We need to gather some information before starting. Once we gather all the info listed below we must choose the proper of 32 charts from the codes for the fuel type, pressures, and piping material. We will only address the use of black pipe. All the charts are applied the same. Sizing Iron Pipe Gas Lines Here is a list of information we need • Total Load - Connected load by appliance and Btu/h • Btu's or CFH - Your choice • Drawing - Proposed or existing layout • Fuel Type - Natural or LP • Heating Value - Get this from the fuel supplier if doing CFH calculation • Longest Pipe - Longest Pipe from meter to furthest appliance • Pressure Drop - Your choice. We suggest .5 for most margin of error Gathered Information • Total Load - 312,000 Btu/h (below) • Range 48,000 • Wall Heater 40,000 • Gas Grille 32,000 • Water Heater 42,000 • Steam Boiler 150,000 Next we have to know the total length of the gas main in the home. Measure from the meter location to the last appliance off the main gas line. This is shown in red below the list. • Btu's or CFH - BTU's • Drawing - Proposed layout • Fuel Type - Natural • Heating Value - --- • Longest Pipe - 81 feet (below) • 9' • 18' • 7' • 3' • 22' • 6' • 12' (from main to water heater) • 4' • Pressure Drop - .5 Calculations are usually figured from the farthest point towards the meter. In this case will be starting at the gas grille and will size the pipe for the gas grille down to the tee that would go to the steam boiler. We can see from the above drawing the total line length is 81 ft and the line from the gas grille to the tee to the gas boiler is 22 feet The pipe from the meter to the gas grille is 81 feet. When we look at the Natural Gas BTU/h chart it is in 10-foot increments. If you exceed a ten-foot increment you must jump to the next 10 feet. We will need to read the row for 90 feet and go across the row until you find a number equal to or larger than your load. Then go up and find the pipe size. This means we will size the gas grill pipe for 32,000 btu/h and 90 feet of main. We see that 1/2" pipe will supply 33,000 btu/h. So, the pipe diameter from the tee to the boiler to the gas grille is going be 1/2" black iron. See in blue below. Next we will size the pipe to the boiler and the main suppling gas to the boiler and gas grille. See yellow pipe below. This time we must add together the boiler and gas grille for the total load on this section of pipe. This would be 150,000 for the boiler and 32,000 btu/h for the gas grille and totals 182,000 Btu/h. Let's start with the boiler. Looking at the chart we will always use the entire length of main pipe for all calculations of the main pipe and the branches are only calculated. At their length. We will enter the 90-foot row and find the btu/h to cover the load and go up to find the pipe size. The boiler will need to be 3/4" and the main is 1-1/4" black pipe and 1-1/4" to the boiler. You may wonder why you must run a 1-1/4" pipe to the boiler which only has a 3/4" gas valve tapping. Look at the chart to see how many Btu/h 10 feet of 3/4" pipe will carry. Keep the 3/4" pipe as short as possible with minimum amount of fittings. Add on the gas water heater We will follow the same steps as before. Size the branch using total length of main (90 feet) and match it to the appliance load, 42,000. Then size the main including all the loads after tee that includes the Water heater, gas boiler and gas grille which equals 232,200 btu/h. Even though the main pipe length from the tee to the wall heater to the tee for the water heater is only 3-feet it must be increased to 1-1/2" pipe due to adding the water heater. We will do the same calculation for from the tee for the range to the tee for the wall heater and the meter to the tee for the range and the two connecters for the wall heater and the range it would look like this. Worksheet #1 Here is the first worksheet if you want to practice. You can hand draw a copy of this, right click on the drawing, save as and then print it. Click here for the answer (Use your back button to return to this page) Multiple gas mains from one gas meter Many times the meter will be piped into the gas main to a tee where the gas main goes two different directions. The pipe from the meter to the tee is sized for all the btu/h of all the appliances and included in the length of run on both upper and lower mains. Calculate which of the loops is the longest. That is the figure you will use to size the common meter pipe. The side of the tee that has the longest loop will use that number to size the pipes and branches. The smaller main line side of the tee will also use the meter pipe plus the piping on the shorter side of the tee. How to size the pipes • Meter Pipe = 295 btu/h @ 54 feet of pipe. Table, use 60 Feet row • Upper Main = 103 btu/h total @ 54 Feet of pipe. Table, use 60 Feet row • Lower Main = 192 btu/h @ 39 feet of pipe. Table, use 40 Feet row This is what the calculation will look like. You can treat this like two separate jobs. Calculate the upper main pipe as we did above, and the lower main as we did above. What If What if you are going to remove the water heater tank and install a tankless water heater that has an input of 160,000 btu/h? You probably already realize you will have to change the pipe size to the new tankless water heater. The question is the existing piping back to the tee at the meter correctly sized? What about the pipe to the meter itself, is it the correct size? Let's do another worksheet. First show the drawing with the tankless water heater but with the original pipe sizes. You job is to see how much piping needs changed if any. Click here for the answer (Use your back button to return to this page) Natural Gas BTU/h Pressure Chart You may hear sometimes that you have to add equivalent feet of pipe for fittings. This is not true for residential gas systems if have the longest loop system. If using the 2 PSI or less chart and .5" water column pressure drop you do not have to count for fittings . There is enough fudge in the charts to avoid using fitting equivalent. Disclaimer: The information found on this web site is for informational purposes only. All preventive maintenance, service, installations should be reviewed on a per job situation. Any work performed on your heating system should be performed by qualified and experienced personnel only. Comfort-Calc or its personnel accepts no responsibility for improper information, application, damage to property or bodily injury from applied information found on this web site.
{"url":"https://www.comfort-calc.com/Gas_Pipe_Sizing_Directions.cfm","timestamp":"2024-11-09T20:27:16Z","content_type":"application/xhtml+xml","content_length":"18565","record_id":"<urn:uuid:43bb2b83-7ab7-4157-9cfa-a6b569f19be2>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00885.warc.gz"}
Semi log plot in AP Precalculus | Complete guide In the world of mathematics, where precision meets complexity, tools like semi-log plot play a pivotal role in unraveling patterns hidden within data. As we move along this journey of semi-logarithmic plots, it’s essential to understand their significance in everything about AP Precalculus, and how they differ from other logarithmic representations. What is a Semi-Log Plot? A semi-log plot, short for semi-logarithmic plot, is a unique graphical representation that combines linear and logarithmic scales on its axes. In this plot, one axis typically employs a linear scale, while the other employs a logarithmic scale. This hybrid structure is particularly useful when dealing with data that spans several orders of magnitude, such as exponential growth or decay. Distinguishing Semi-Log from Log-Log Plots: Semi-log plots are an important part of Unit 2 of AP Precalculus. While semi-log plots use a combination of linear and logarithmic scales, log-log plots utilize logarithmic scales for both axes. The key distinction lies in the type of relationship being visualized. Semi-log plots are ideal for situations where one variable is exponentially related to the other, transforming exponential curves into straight lines. On the other hand, log-log plots are suitable for power-law relationships, where both variables exhibit a power-law dependence. Advantages of Semi-Logarithmic Plots: 1. Clarity in Exponential Trends: Semi-log plots simplify the representation of exponential relationships, making it easier to identify trends and patterns in data that may span several orders of 2. Linearization of Exponential Data: By using a logarithmic scale for one axis, semi-log plots convert exponential data into linear form, facilitating easier interpretation and analysis. 3. Application Flexibility: Semi-logarithmic plots find applications in diverse fields, including biology, physics, economics, and engineering, where phenomena exhibit exponential behavior. In the upcoming sections, we’ll delve into the mechanics of generating semi-log plots, deciphering their components, and mastering the art of interpreting these visualizations in the context of AP Precalculus. So, buckle up as we navigate the terrain of semi-logarithmic plots, unlocking their potential as powerful tools for mathematical exploration and problem-solving. Imagine you are given a set of data representing the population growth of a certain bacterial colony over time. The data is as follows: Time (hours) Population You are tasked with creating a semi-log plot and answering specific questions about the bacterial population’s growth. Step 1: Understanding the Data Before diving into the semi-log plot, it’s crucial to understand the data. In this case, the population seems to be increasing exponentially over time. The values are growing by a factor of 10 with each passing hour. Step 2: Generating the Semi-Log Plot 1. Logarithmic Transformation: Start by taking the logarithm (base 10) of the population values. The table becomes: Time (hours) Log(Population) 2. Plotting the Semi-Log Graph: On graph paper, plot Time on the linear scale (x-axis) and Log(Population) on the logarithmic scale (y-axis). Connect the points with a smooth curve. The resulting graph will showcase the exponential growth more clearly. Step 3: Interpreting the Semi-Log Plot Now that you have the semi-log plot, you can answer questions about the bacterial population more effectively. 1. Rate of Growth: The slope of the line on a semi-log plot corresponds to the exponential growth rate. In this case, the slope is consistently 1, indicating that the population is increasing tenfold with each hour. 2. Extrapolation: Using the semi-log plot, you can predict the population at any given time. Extrapolate the curve to estimate the population after, for example, 5 hours. Step 4: Converting Semi-Log to Linear (if needed) If the question requires converting the semi-log plot back to linear form, you can use the anti-logarithm (base 10) of the Log(Population) values to obtain the original population values. How to convert semi-log to exponential graph? Why Choose Tutoring Maphy for AP Precalculus Preparation? At Tutoring Maphy, we understand the challenges that students face in mastering intricate topics like semi-logarithmic plots in AP Precalculus. That’s why we’ve curated a specialized platform designed to enhance your understanding and proficiency in AP Precalculus concepts. 1. Expert AP Precalculus Tutors: Our team comprises experienced AP Precalculus tutors who are well-versed in the intricacies of the College Board’s curriculum. Benefit from personalized guidance tailored to your learning style, ensuring a comprehensive understanding of topics like semi-logarithmic plots. 2. Interactive Learning Resources: Access a wealth of interactive learning resources, including video tutorials, practice problems, and real-world applications. Our platform is designed to make complex concepts like semi-logarithmic plots accessible and engaging. 3. Live Sessions for Clarifications: Attend live tutoring sessions for real-time clarifications on challenging topics. Our AP Precalculus tutors are dedicated to addressing your queries and providing in-depth explanations, fostering a deeper understanding of key concepts. 4. Targeted Exam Preparation: Prepare confidently for the AP Precalculus exam with our targeted resources and practice exams. Our platform is structured to align with the College Board’s curriculum, ensuring that you are well-prepared for success on exam day. 5. Dedicated AP Precalculus Learning Paths: Navigate your AP Precalculus journey with structured learning paths that guide you through each topic, including detailed explanations and step-by-step solutions for semi-logarithmic plots and other advanced concepts. Connect with your Ideal AP Precalculus Tutor Today! Whether you’re struggling with semi-logarithmic plots or seeking comprehensive AP Precalculus preparation, Tutoring Maphy is your go-to destination. Book a Free demo now! Our dedicated AP Precalculus tutors are ready to help you navigate the intricacies of the course, ensuring that you not only grasp challenging topics but also excel in your academic endeavors.
{"url":"https://tutoringmaphy.com/solving-semi-log-plot/","timestamp":"2024-11-05T14:01:11Z","content_type":"text/html","content_length":"129216","record_id":"<urn:uuid:1eeecd5f-42f3-464e-a6eb-cc1d21ad4fb8>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00018.warc.gz"}
Hash function A hash function is any function that can be used to map data of arbitrary size to fixed-size values. 1. Order should matter, should be very unlikely for two messages two have a hash collision 2. Examples of good hash functions 1. MD5: compute a 128-bit message digest in a 4-step process 2. SHA-1: US NIST standard, 160-bit digest 3. SHA-256 and SHA-512 are more secure Homomorphic Hashes bromberg_sl2 is hash function that provides a monoid homomorphism This means there is a cheap operation * such that given strings s1 and s2, H(s1 ++ s2) = H(s1) * H(s2)
{"url":"https://jzhao.xyz/thoughts/hash-function","timestamp":"2024-11-15T03:14:37Z","content_type":"text/html","content_length":"16635","record_id":"<urn:uuid:57541864-a187-4783-a153-57a837e4be1e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00692.warc.gz"}
1. Look at the sequence of numbers given. 2. Ascending order is a sequence of numbers that are arranged from least to highest. 3. Descending order is a sequence of numbers that are arranged from highest to least. Example: What is the ascending order of 3,2,9,11,12 Answer: 2,3,9,11,12 Example: What is the descending order of 8,11,14,7,9 Answer: 14,11,9,8,7 Directions: Arrange the following sequences in the required order. Also write at least ten examples of your own.
{"url":"http://kwiznet.com/p/takeQuiz.php?ChapterID=1229&CurriculumID=3&Method=Worksheet&NQ=6&Num=3.16&Type=C","timestamp":"2024-11-03T18:34:51Z","content_type":"text/html","content_length":"7683","record_id":"<urn:uuid:8b46c8cd-ec10-4b4f-81cc-487cf35e7aa7>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00755.warc.gz"}
Assuming 100% dissociation, what is the freezing point and boiling point of 3.39 m K_3PO_4(aq)? | HIX Tutor Assuming 100% dissociation, what is the freezing point and boiling point of 3.39 m #K_3PO_4(aq)#? Answer 1 $\text{f.p.} = - 25.2$$\text{^"o""C}$ We're asked to find the new freezing and boiling points of a #3.39m# #"K"_3"PO"_4# solution (given it ionizes completely). To do this, we can use the equations #DeltaT_f = imK_f# (freezing point depression) #DeltaT_b = imK_b# (boiling point elevation) #DeltaT_f# and #DeltaT_b# are the changes in freezing and boiling point temperatures, respectively. #i# is the van't Hoff factor, which is essentially the number of dissolved ions per unit of solute (equal to #4# here; there are #4# ions per unit of #"K"_3"PO"_4#). #m# is the molality of the solution, given as #3.39m# #K_f# is the molal freezing point constant for the solvent (water), which (although not given) is #1.86# #""^"o""C/"m# #K_b# is the molal boiling point constant for the solvent (water), equal to #0.512# #""^"o""C/"m# Plugging in known values, we have #DeltaT_f = (4)(3.39cancel(m))(1.86(""^"o""C")/(cancel(m))) = 25.2# #""^"o""C"# #DeltaT_b = (4)(3.39cancel(m))(0.512(""^"o""C")/(cancel(m))) = 6.94# #""^"o""C"# These are by how much the freezing and boiling point temperatures decrease (freezing point) and increase (boiling point). To find the new freezing point, we simply subtract this value from the normal freezing point of water, #0.0# #""^"o""C"#: #color(red)("new f.p.") = 0^"o""C" - 25.2^"o""C" = color(red)(-25.2# #color(red)(""^"o""C"# The new boiling point is found nearly the same way, but by adding the temperature change to the normal boiling point of water, #100.00# #""^"o""C"#: #color(blue)("new b.p.") = 100.00^"o""C" + 6.94^"o""C" = color(blue)(106.94# #color(blue)(""^"o""C"# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 The freezing point depression and boiling point elevation formulas can be used to calculate these values. For the freezing point depression: ΔTf = -i * Kf * m For the boiling point elevation: ΔTb = i * Kb * m • i is the van't Hoff factor, which represents the number of particles formed per formula unit dissolved. For K3PO4, i = 4 because it dissociates into 4 ions (3 K+ ions and 1 PO4^3- ion). • Kf is the cryoscopic constant and Kb is the ebullioscopic constant for the solvent. • m is the molality of the solution. Given that K3PO4 dissociates completely, the molality of the solution is equal to the concentration of K3PO4 in moles per kilogram of solvent. You would need to know the values of Kf and Kb for the solvent in order to calculate the freezing point depression and boiling point elevation, respectively. These values depend on the specific solvent being used. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/assuming-100-dissociation-what-is-the-freezing-point-and-boiling-point-of-3-39-m-8f9af85a77","timestamp":"2024-11-08T15:48:40Z","content_type":"text/html","content_length":"590260","record_id":"<urn:uuid:82db0a5f-20eb-4a55-87f5-9c01d16d8889>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00284.warc.gz"}
What is the least positive integer $n$ such that $(5n^4+3)$ is a prime number? This section requires Javascript. You are seeing this because something didn't load right. We suggest you, (a) try refreshing the page, (b) enabling javascript if it is disabled on your browser and, finally, (c) loading the non-javascript version of this page . We're sorry about the hassle.
{"url":"https://solve.club/problems/5n43/5n43.html","timestamp":"2024-11-05T18:51:34Z","content_type":"text/html","content_length":"56565","record_id":"<urn:uuid:5bf56212-dbf0-4fe9-985d-c5071814453b>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00487.warc.gz"}
An application of the combinatorial Nullstellensatz to a graph labelling problem An antimagic labelling of a graph G with m edges and n vertices is a bijection from the set of edges of G to the set of integers {1,...,m}, such that all nvertex sums are pairwise distinct, where a vertex sum is the sum of labels of all edges incident with that vertex. A graph is called antimagic if it admits an antimagic labelling. In N. Hartsfield and G. Ringle, Pearls in Graph Theory, Academic Press, Inc., Boston, 1990, Ringel has conjectured that every simple connected graph, other than K[2], is antimagic. In this article, we prove a special case of this conjecture. Namely, we prove that if G is a graph on n = p^k vertices, where p is an odd prime and k is a positive integer that admits a C[p]-factor, then it is antimagic. The case p=3 was proved in D. Hefetz, J Graph Theory 50(2005), 263-272. Our main tool is the combinatorial Nullstellensatz [N. Alon, Combin Probab Comput 8(1-2) (1999), 7-29]. • Combinatorial Nullstellensatz • Graph labelling Dive into the research topics of 'An application of the combinatorial Nullstellensatz to a graph labelling problem'. Together they form a unique fingerprint.
{"url":"https://cris.ariel.ac.il/en/publications/an-application-of-the-combinatorial-nullstellensatz-to-a-graph-la-3","timestamp":"2024-11-10T00:21:54Z","content_type":"text/html","content_length":"53807","record_id":"<urn:uuid:7e9bb9e9-87c2-4eca-b846-ef9cfff880af>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00545.warc.gz"}
Design of experiments • Genstat Knowledge Base 2024 Genstat has a comprehensive set of facilities for design of experiments. Collectively, these are known as the Genstat Design System. Many different design types are covered, each with a procedure that allows you to view and choose from the available possibilities. Other procedure allow designs and data forms to be displayed. There is also a general procedure DESIGN that can be used interactively to provide a single point of access to all the design types. DESIGN and the AG… procedures that it calls provide the Select Design facilities in Genstat for Windows, while the alternative Standard Design menu uses AGHIERARCHICAL, AGLATIN and AGSQLATTICE to generate completely randomized designs, randomized blocks, Latin and Graeco-Latin squares, split-plots, strip-plots (or criss-cross designs) and lattices. DESIGN provides a menu-driven interface for selecting and generating experimental designs AGALPHA forms alpha designs for up to 100 treatments AGBIB generates balanced-incomplete-block designs AGBOXBEHNKEN generates Box-Behnken designs AGCENTRALCOMPOSITE generates central composite designs AGCROSSOVERLATIN generates Latin squares balanced for carry-over effects AGCYCLIC generates cyclic designs from standard generators AGDESIGN generates generally balanced designs – factorial designs with blocking, fractional factorial designs, Lattice squares etc. AGFACTORIAL generates minimum aberration complete and fractional factorial designs AGFRACTION generates fractional factorial designs AGHIERARCHICAL generates orthogonal hierarchical designs provides a menu-driven interface for selecting and generating designs for industrial experiments AGLATIN generates mutually orthogonal Latin squares AGLOOP generates loop designs e.g. for time-course microarray experiments AGMAINEFFECT generates designs to estimate main effects of two-level factors AGNEIGHBOUR generates neighbour-balanced designs AGQLATIN generates complete and quasi-complete Latin squares AGREFERENCE generates reference-level designs e.g. for microarray experiments AGSEMILATIN generates semi-Latin squares AGSQLATTICE generates square lattice and lattice square designs AGYOUDENSQUARE generates a Youden square PDESIGN prints treatment combinations tabulated by the block factors DDESIGN plots the plan of a design ADSPREADSHEET puts the data and plan of an experimental design into Genstat spreadsheets There are also procedures that you can use to determine the sample size (i.e. replication) required for experiments that are to be analysed by analysis of variance, t-test or various non-parametric tests. You can also calculate the power (or probability of detection) for terms in analysis of variance or regression analyses. APOWER calculates the power (probability of detection) for terms in an analysis of variance RPOWER calculates the power (probability of detection) for regression models VPOWER uses a parametric bootstrap to estimate the power (probability of detection) for terms in a REML analysis ASAMPLESIZE finds the replication (sample size) to detect a treatment effect or contrast VSAMPLESIZE estimates the replication to detect a fixed term or contrast in a REML analysis, using parametric bootstrap ADETECTION calculates the minimum size of effect or contrast detectable in an analysis of variance SBNTEST calculates the sample size for binomial tests SCORRELATION calculates the sample size to detect specified correlations SLCONCORDANCE calculates the sample size for Lin’s concordance coefficient SMANNWHITNEY calculates the sample size for the Mann-Whitney test SMCNEMAR calculates the sample size for McNemar’s test SPNTEST calculates the sample size for a Poisson test SPRECISION calculates the sample size to obtain a specified precision SSIGNTEST calculates the sample size for a sign test STTEST calculates the sample size for t-tests, including equivalence tests and tests for non-inferiority DSTTEST plots power and significance for t-tests, including equivalence tests and tests for non-inferiority The Design System is based on a range of standard generators. Some of these, such as the Galois fields used to generate Latin squares, can be formed when required – and so there is no limitation on the available designs. Repertoires of others, such as design keys, are stored in backing-store files which are scanned by the design generation procedures to form menus listing the available possibilities. Algorithms are available to form generators for new designs, and these can then be added to the design files to become an integral part of the system. Other design utilities include procedures for combining simple designs into more complicated arrangements, for forming augmented designs, and for determining how many replicates are needed. There are also directives for constructing response-surface designs and doubly resolvable row-column designs. The relevant commands include the directives AFMINABERRATION forms minimum aberration factorial or fractional-factorial designs AFRESPONSESURFACE uses the BLKL algorithm to construct designs for estimating response surfaces AGRCRESOLVABLE forms doubly resolvable row-column designs GENERATE generates values of factors in systematic order or as defined by a design key, or forms values of pseudo-factors RANDOMIZE puts units of vectors into random order, or randomizes units of an experimental design FKEY forms design keys for multi-stratum experimental designs, allowing for confounding and aliasing of treatments FPSEUDOFACTORS determines patterns of confounding and aliasing from design keys, and extends the treatment formula to incorporate the necessary pseudo-factors SET2FORMULA forms a model formula using structures supplied in a pointer and the procedures. AEFFICIENCY calculates efficiency factors for experimental designs AFAUGMENTED forms an augmented design AFLABELS forms a variate of unit labels for a design AFRCRESOLVABLE forms doubly resolvable row-column designs, with output AFUNITS forms a factor to index the units of the final stratum of a design AKEY generates values for treatment factors using the design key method AMERGE merges extra units into an experimental design AFNONLINEAR forms D-optimal designs to estimate the parameters of a nonlinear or generalized linear model AFPREP searches for an efficient partially-replicated design APRODUCT forms a new experimental design from the product of two designs AGNATURALBLOCK generates 1- and 2-dimensional designs with blocks of natural size AGNONORTHOGONALDESIGN generates non-orthogonal multi-stratum designs AGSPACEFILLINGDESIGN generates space filling designs ARCSPLITPLOT adds extra treatments onto the replicates of a resolvable row-column design, and generates factors giving the row and column locations of the plots within the design ARANDOMIZE randomizes and prints an experimental design CDNAUGMENTEDDESIGN constructs an augmented block design, using CycDesigN if the controls are in an incomplete-block design CDNBLOCKDESIGN constructs a block design using CycDesigN CDNPREP constructs a multi-location partially-replicated design using CycDesigN CDNROWCOLUMNDESIGN constructs a row-column design using CycDesigN COVDESIGN produces experimental designs efficient under analysis of covariance FACCOMBINATIONS forms a factor to indicate observations with identical combinations of values of a set of variates, texts or factors FACDIVIDE represents a factor by factorial combinations of a set of factors FACPRODUCT forms a factor with a level for every combination of other factors FBASICCONTRASTS forms the basic contrasts of a model term FCOMPLEMENT forms the complement of an incomplete block design FDESIGNFILE forms a backing-store file of information for AGDESIGN FHADAMARDMATRIX forms Hadamard matrices FOCCURRENCES forms a “concurrence” matrix recording how often each pair of treatments occurs in the same block of a design FPLOTNUMBER forms plot numbers for a row-by-column design FPROJECTIONMATRIX forms a projection matrix for a set of model terms XOEFFICIENCY calculates the efficiency for estimating effects in cross-over designs XOPOWER estimates the power of contrasts in cross-over designs
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/refdes/","timestamp":"2024-11-02T19:00:27Z","content_type":"text/html","content_length":"60499","record_id":"<urn:uuid:2a561d13-a4a5-4e56-8f67-55c71dce55d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00205.warc.gz"}
[The Language of Science] Sidney B. Cahn, Gerald D. Mahan, Boris E. Nadgorny, Max Dresden - A Guide to Physics Problems Part 2 Thermodynamics, Statistical Physics, and Quantum Mechanics Part 2(1997, Springer) - libgen.lc Download [The Language of Science] Sidney B. Cahn, Gerald D. Mahan, Boris E. Nadgorny, Max Dresden - A Guide to Physics Problems Part 2 Thermodynamics, Statistical Physics, and Quantum Mechanics Part 2(1997, Springer) - libgen.lc * Your assessment is very important for improving the workof artificial intelligence, which forms the content of this project Document related concepts no text concepts found part 2 Statistical Physics, and Quantum Mechanics This page intentionally left blank part 2 Statistical Physics, and Quantum Mechanics Sidney B. Cahn New York University New York, New York Gerald D. Mahan University of Tennessee Knoxville, Tennessee, and Oak Ridge National Laboratory Oak Ridge, Tennessee Boris E. Nadgorny Naval Research Laboratory Washington, D.C. NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW eBook ISBN: Print ISBN: ©2004 Kluwer Academic Publishers New York, Boston, Dordrecht, London, Moscow Print ©1997 Kluwer Academic/Plenum Publishers New York All rights reserved No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher Created in the United States of America Visit Kluwer Online at: and Kluwer's eBookstore at: It is only rarely realized how important the design of suitable, interesting problems is in the educational process. This is true for the professor — who periodically makes up exams and problem sets which test the effectiveness of his teaching — and also for the student — who must match his skills and acquired knowledge against these same problems. There is a great need for challenging problems in all scientific fields, but especially so in physics. Reading a physics paper requires familiarity and control of techniques which can only be obtained by serious practice in solving problems. Confidence in performing research demands a mastery of detailed technology which requires training, concentration, and reflection — again, gained only by working exercises. In spite of the obvious need, there is very little systematic effort made to provide balanced, doable problems that do more than gratify the ego of the professor. Problems often are routine applications of procedures mentioned in lectures or in books. They do little to force students to reflect seriously about new situations. Furthermore, the problems are often excruciatingly dull and test persistence and intellectual stamina more than insight, technical skill, and originality. Another rather serious shortcoming is that most exams and problems carry the unmistakable imprint of the teacher. (In some excellent eastern U.S. universities, problems are catalogued by instructor, so that a good deal is known about an exam even before it is written.) In contrast, A Guide to Physics Problems, Part 2 not only serves an important function, but is a pleasure to read. By selecting problems from different universities and even different scientific cultures, the authors have effectively avoided a one-sided approach to physics. All the problems are good, some are very interesting, some positively intriguing, a few are crazy; but all of them stimulate the reader to think about physics, not merely to train you to pass an exam. I personally received considerable pleasure in working the problems, and I would guess that anyone who wants to be a professional physicist would experience similar enjoyment. I must confess with some embarrassment that some of the problems gave me more trouble than I had expected. But, of course, this is progress. The coming generation can do with ease what causes the elder one trouble. This book will be a great help to students and professors, as well as a source of pleasure and Max Dresden Part 2 of A Guide to Physics Problems contains problems from written graduate qualifying examinations at many universities in the United States and, for comparison, problems from the Moscow Institute of Physics and Technology, a leading Russian Physics Department. While Part 1 presented problems and solutions in Mechanics, Relativity, and Electrodynamics, Part 2 offers problems and solutions in Thermodynamics, Statistical Physics, and Quantum Mechanics. The main purpose of the book is to help graduate students prepare for this important and often very stressful exam (see Figure P.1). The difficulty and scope of the qualifying exam varies from school to school, but not too dramatically. Our goal was to present a more or less universal set of problems that would allow students to feel confident at these exams, regardless of the graduate school they attended. We also thought that physics majors who are considering going on to graduate school may be able to test their knowledge of physics by trying to solve some of the problems, most of which are not above the undergraduate level. As in Part 1 we have tried to provide as many details in our solutions as possible, without turning to a trade expression of an exhausted author who, after struggling with the derivation for a couple of hours writes, “As it can be easily shown....” Most of the comments to Part 1 that we have received so far have come not from the students but from the professors who have to give the exams. The most typical comment was, “Gee, great, now I can use one of your problems for our next comprehensive exam.” However, we still hope that this does not make the book counterproductive and eventually it will help the students to transform from the state shown in Figure P.1 into a much more comfortable stationary state as in Figure P.2. This picture can be easily attributed to the present state of mind of the authors as well, who sincerely hope that Part 3 will not be forthcoming any time soon. Some of the schools do not have written qualifying exams as part of their requirements: Brown, Cal-Tech, Cornell, Harvard, UT Austin, University of Toronto, and Yale. Most of the schools that give such an exam were happy to trust us with their problems. We wish to thank the Physics Departments of Boston University (Boston), University of Colorado at Boulder (Colorado), Columbia University (Columbia), University of Maryland (Maryland), Massachusetts Institute of Technology (MIT), University of Michigan (Michigan), Michigan State University (Michigan State), Michigan Technological University (Michigan Tech), Princeton University (Princeton), Rutgers University (Rutgers), Stanford University (Stanford), State University of New York at Stony Brook (Stony Brook), University of Tennessee at Knoxville (Tennessee), and University of Wisconsin (Wisconsin-Madison). The Moscow Institute of Physics and Technology (Moscow Phys-Tech) does not give this type of qualifying exam in graduate school. Some of their problems came from the final written exam for the physics seniors, some of the others, mostly introductory problems, are from their oral entrance exams or magazines such as Kvant. A few of the problems were compiled by the authors and have never been published before. We were happy to hear many encouraging comments about Part 1 from our colleagues, and we are grateful to everybody who took their time to review the book. We wish to thank many people who contributed some of the problems to Part 2, or discussed solutions with us, in particular Dmitri Averin (Stony Brook), Michael Bershadsky (Harvard), Alexander Korotkov (Stony Brook), Henry Silsbee (Stony Brook), and Alexei Stuchebrukhov (UC Davis). We thank Kirk McDonald (Princeton) and Liang Chen (British Columbia) for their helpful comments to some problems in Part 1; we hope to include them in the second edition of Part 1, coming out next year. We are indebted to Max Dresden for writing the Foreword, to Tilo Wettig (Münich) who read most, of the manuscript, and to Vladimir Gitt and Yair Minsky who drew the humorous pictures. Sidney Cahn New York Gerald Mahan Oak Ridge Boris Nadgorny Washington, D.C. This page intentionally left blank Textbooks Used in the Preparation of this Chapter 4 — Thermodynamics and Statistical Physics 1) Landau, L. D., and Lifshitz, E. M., Statistical Physics, Volume 5, part 1 of Course of Theoretical Physics, 3rd ed., Elmsford, New York: Pergamon Press, 1980 2) Kittel, C., Elementary Statistical Physics, New York: John Wiley and Sons, Inc., 1958 3) Kittel, C., and Kroemer, H., Thermal Physics, 2nd ed., New York: Freeman and Co., 1980 4) Reif, R., Fundamentals of Statistical and Thermal Physics, New York: McGraw-Hill, 1965 5) Huang, K., Statistical Mechanics, 2nd ed., New York: John Wiley and Sons, Inc., 1987 6) Pathria, R. K., Statistical Mechanics, Oxford: Pergamon Press, 1972 Chapter 5 — Quantum Mechanics 1) Liboff, R. L., Introductory Quantum Mechanics, 2nd ed., Reading, MA: Pergamon Press, 1977 2) Landau, L. D., and Lifshitz, E. M., Quantum Mechanics, Nonrelativistic Theory, Volume 3 of Course of Theoretical Physics, 3rd ed., Elmsford, New York: Pergamon Press, 1977 Textbooks Used in the Preparation of this Volume 3) Sakurai, J. J., Modern Quantum Mechanics, Menlo Park: Benjamin/ Cummings, 1985 4) Sakurai, J. J., Advanced Quantum Mechanics, Menlo Park: Benjamin/Cummings, 1967 5) Schiff, L. I., Quantum Mechanics, 3rd ed., New York: McGraw-Hill, 6) Shankar, R., Principles of Quantum Mechanics, New York: Plenum Press, 1980 4. Thermodynamics and Statistical Physics Introductory Thermodynamics 4.1. Why Bother? (Moscow Phys-Tech) 4.2. Space Station Pressure (MIT) 4.3. Baron von Münchausen and Intergalactic Travel (Moscow 4.4. Railway Tanker (Moscow Phys-Tech) 4.5. Magic Carpet (Moscow Phys-Tech) 4.6. Teacup Engine (Princeton, Moscow Phys-Tech) 4.7. Grand Lunar Canals (Moscow Phys-Tech) 4.8. Frozen Solid (Moscow Phys-Tech) 4.9. Tea in Thermos (Moscow Phys-Tech) 4.10. Heat Loss (Moscow Phys-Tech) 4.11. Liquid–Solid–Liquid (Moscow Phys-Tech) 4.12. Hydrogen Rocket (Moscow Phys-Tech) 4.13. Maxwell–Boltzmann Averages (MIT) 4.14. Slowly Leaking Box (Moscow Phys-Tech, Stony Brook 4.15. Surface Contamination (Wisconsin-Madison) 4.16. Bell Jar (Moscow Phys-Tech) 4.17. Hole in Wall (Princeton) 4.18. Ballast Volume Pressure (Moscow Phys-Tech) 4.19. Rocket in Drag (Princeton) 4.20. Adiabatic Atmosphere (Boston, Maryland) 4.21. Atmospheric Energy (Rutgers) 4.22. Puncture (Moscow Phys-Tech) Heat and Work 4.23. Cylinder with Massive Piston (Rutgers, Moscow 4.24. Spring Cylinder (Moscow Phys-Tech) 4.25. Isothermal Compression and Adiabatic Expansion of Ideal Gas (Michigan) 4.26. Isochoric Cooling and Isobaric Expansion (Moscow 4.27. Venting (Moscow Phys-Tech) 4.28. Cylinder and Heat Bath (Stony Brook) 4.29. Heat Extraction (MIT, Wisconsin-Madison) 4.30. Heat Capacity Ratio (Moscow Phys-Tech) 4.31. Otto Cycle (Stony Brook) 4.32. Joule Cycle (Stony Brook) 4.33. Diesel Cycle (Stony Brook) 4.34. Modified Joule–Thomson (Boston) Ideal Gas and Classical Statistics 4.35. Poisson Distribution in Ideal Gas (Colorado) 4.36. Polarization of Ideal Gas (Moscow Phys-Tech) 4.37. Two-Dipole Interaction (Princeton) 4.38. Entropy of Ideal Gas (Princeton) 4.39. Chemical Potential of Ideal Gas (Stony Brook) 4.40. Gas in Harmonic Well (Boston) 4.41. Ideal Gas in One-Dimensional Potential (Rutgers) 4.42. Equipartition Theorem (Columbia, Boston) 4.43. Diatomic Molecules in Two Dimensions (Columbia) 4.44. Diatomic Molecules in Three Dimensions (Stony Brook, Michigan State) 4.45. Two-Level System (Princeton) 4.46. Zipper (Boston) 4.47. Hanging Chain (Boston) 4.48. Molecular Chain (MIT, Princeton, Colorado) Nonideal Gas 4.49. Heat Capacities (Princeton) 4.50. Return of Heat Capacities (Michigan) 4.51. Nonideal Gas Expansion (Michigan State) 4.52. van der Waals (MIT) 4.53. Critical Parameters (Stony Brook) Mixtures and Phase Separation 4.54. Entropy of Mixing (Michigan, MIT) 4.55. Leaky Balloon (Moscow Phys-Tech) 4.56. Osmotic Pressure (MIT) 4.57. Clausius–Clapeyron (Stony Brook) 4.58. Phase Transition (MIT) 4.59. Hydrogen Sublimation in Intergalactic Space (Princeton) 4.60. Gas Mixture Condensation (Moscow Phys-Tech) 4.61. Air Bubble Coalescence (Moscow Phys-Tech) 4.62. Soap Bubble Coalescence (Moscow Phys-Tech) 4.63. Soap Bubbles in Equilibrium (Moscow Phys-Tech) Quantum Statistics 4.64. Fermi Energy of a 1D Electron Gas (Wisconsin-Madison) 4.65. Two-Dimensional Fermi Gas (MIT, Wisconson-Madison) 4.66. Nonrelativistic Electron Gas (Stony Brook, Wisconsin-Madison, Michigan State) 4.67. Ultrarelativistic Electron Gas (Stony Brook) 4.68. Quantum Corrections to Equation of State (MIT, Princeton, Stony Brook) 4.69. Speed of Sound in Quantum Gases (MIT) 4.70. Bose Condensation Critical Parameters (MIT) 4.71. Bose Condensation (Princeton, Stony Brook) 4.72. How Hot the Sun? (Stony Brook) 4.73. Radiation Force (Princeton, Moscow Phys-Tech, MIT) 4.74. Hot Box and Particle Creation (Boston, MIT) 4.75. D-Dimensional Blackbody Cavity (MIT) 4.76. Fermi and Bose Gas Pressure (Boston) 4.77. Blackbody Radiation and Early Universe (Stony Brook) 4.78. Photon Gas (Stony Brook) 4.79. Dark Matter (Rutgers) 4.80. Einstein Coefficients (Stony Brook) 4.81. Atomic Paramagnetism (Rutgers, Boston) 4.82. Paramagnetism at High Temperature (Boston) 4.83. One-Dimensional Ising Model (Tennessee) 4.84. Three Ising Spins (Tennessee) 4.85. N Independent Spins (Tennessee) 4.86. N Independent Spins, Revisited (Tennessee) 4.87. Ferromagnetism (Maryland, MIT) 4.88. Spin Waves in Ferromagnets (Princeton, Colorado) 4.89. Magnetization Fluctuation (Stony Brook) 4.90. Gas Fluctuations (Moscow Phys-Tech) 4.91. Quivering Mirror (MIT, Rutgers, Stony Brook) 4.92. Isothermal Compressibility and Mean Square Fluctuation (Stony Brook) 4.93. Energy Fluctuation in Canonical Ensemble (Colorado, Stony Brook) 4.94. Number Fluctuations (Colorado (a,b), Moscow Phys-Tech (c)) 4.95. Wiggling Wire (Princeton) 4.96. LC Voltage Noise (MIT, Chicago) Applications to Solid State 4.97. Thermal Expansion and Heat Capacity (Princeton) 4.98. Schottky Defects (Michigan State, MIT) 4.99. Frenkel Defects (Colorado, MIT) 4.100. Two-Dimensional Debye Solid (Columbia, Boston) 4.101. Einstein Specific Heat (Maryland, Boston) 4.102. Gas Adsorption (Princeton, MIT, Stanford) 4.103. Thermionic Emission (Boston) 4.104. Electrons and Holes (Boston, Moscow Phys-Tech) 4.105. Adiabatic Demagnetization (Maryland) 4.106. Critical Field in Superconductor (Stony Brook, Chicago) 5. Quantum Mechanics One-Dimensional Potentials 5.1. Shallow Square Well I (Columbia) 5.2. Shallow Square Well II (Stony Brook) 5.3. Attractive Delta Function Potential I (Stony Brook) 5.4. Attractive Delta Function Potential II (Stony Brook) 5.5. Two Delta Function Potentials (Rutgers) 5.6. Transmission Through a Delta Function Potential (Michigan State, MIT, Princeton) 5.7. Delta Function in a Box (MIT) 5.8. Particle in Expanding Box (Michigan State, MIT, Stony 5.9. One-Dimensional Coulomb Potential (Princeton) 5.10. Two Electrons in a Box (MIT) 5.11. Square Well (MIT) 5.12. Given the Eigenfunction (Boston, MIT) 5.13. Combined Potential (Tennessee) Harmonic Oscillator 5.14. Given a Gaussian (MIT) 5.15. Harmonic Oscillator ABCs (Stony Brook) 5.16. Number States (Stony Brook) 5.17. Coupled Oscillators (MIT) 5.18. Time-Dependent Harmonic Oscillator I 5.19. Time-Dependent Harmonic Oscillator II (Michigan State) 5.20. Switched-on Field (MIT) 5.21. Cut the Spring! (MIT) Angular Momentum and Spin 5.22. Given Another Eigenfunction (Stony Brook) 5.23. Algebra of Angular Momentum (Stony Brook) 5.24. Triplet Square Well (Stony Brook) 5.25. Dipolar Interactions (Stony Brook) Potential (MIT) 5.26. Spin-Dependent 5.28. Constant Matrix Perturbation (Stony Brook) 5.29. Rotating Spin (Maryland, MIT) 5.30. Nuclear Magnetic Resonance (Princeton, Stony Brook) Variational Calculations 5.31. Anharmonic Oscillator (Tennessee) 5.32. Linear Potential I (Tennessee) 5.33. Linear Potential II (MIT, Tennessee) 5.34. Return of Combined Potential (Tennessee) 5.35. Quartic in Three Dimensions (Tennessee) 5.36. Halved Harmonic Oscillator (Stony Brook, Chicago (b), Princeton (b)) 5.37. Helium Atom (Tennessee) Perturbation Theory 5.38. Momentum Perturbation (Princeton) 5.39. Ramp in Square Well (Colorado) 5.40. Circle with Field (Colorado, Michigan State) 5.41. Rotator in Field (Stony Brook) 5.42. Finite Size of Nucleus (Maryland, Michigan State, Princeton, Stony Brook) Perturbation (Princeton) 5.43. U and 5.44. Relativistic Oscillator (MIT, Moscow Phys-Tech, Stony Brook (a)) Spin Interaction (Princeton) Spin–Orbit Interaction (Princeton) Interacting Electrons (MIT) Stark Effect in Hydrogen (Tennessee) Hydrogen with Electric and Magnetic Fields (MIT) Hydrogen in Capacitor (Maryland, Michigan State) Harmonic Oscillator in Field (Maryland, Michigan State) of Tritium (Michigan State) 5.53. Bouncing Ball (Moscow Phys-Tech, Chicago) 5.54. Truncated Harmonic Oscillator (Tennessee) 5.55. Stretched Harmonic Oscillator (Tennessee) 5.56. Ramp Potential (Tennessee) 5.57. Charge and Plane (Stony Brook) 5.58. Ramp Phase Shift (Tennessee) 5.59. Parabolic Phase Shift (Tennessee) 5.60. Phase Shift for Inverse Quadratic (Tennessee) Scattering Theory 5.61. Step-Down Potential (Michigan State, MIT) 5.62. Step-Up Potential (Wisconsin-Madison) 5.63. Repulsive Square Well (Colorado) 5.64. 3D Delta Function (Princeton) 5.65. Two-Delta-Function Scattering (Princeton) 5.66. Scattering of Two Electrons (Princeton) 5.67. Spin-Dependent Potentials (Princeton) 5.68. Rayleigh Scattering (Tennessee) 5.69. Scattering from Neutral Charge Distribution (Princeton) 5.70. Spherical Box with Hole (Stony Brook) 5.71. Attractive Delta Function in 3D (Princeton) 5.72. Ionizing Deuterium (Wisconsin-Madison) 5.73. Collapsed Star (Stanford) 5.74. Electron in Magnetic Field (Stony Brook, Moscow 5.75. Electric and Magnetic Fields (Princeton) 5.76. Josephson Junction (Boston) 4. Thermodynamics and Statistical Physics Introductory Thermodynamics 4.1. Why Bother? (Moscow Phys-Tech) 4.2. Space Station Pressure (MIT) 4.3. Baron von Münchausen and Intergalactic Travel (Moscow 4.4. Railway Tanker (Moscow Phys-Tech ) 4.5. Magic Carpet (Moscow Phys-Tech ) 4.6. Teacup Engine (Princeton, Moscow Phys-Tech) 4.7. Grand Lunar Canals (Moscow Phys-Tech ) 4.8. Frozen Solid (Moscow Phys-Tech) 4.9. Tea in Thermos (Moscow Phys-Tech) 4.10. Heat Loss (Moscow Phys-Tech) 4.11. Liquid–Solid–Liquid (Moscow Phys-Tech) 4.12. Hydrogen Rocket (Moscow Phys-Tech) 4.13. Maxwell–Boltzmann Averages (MIT) 4.14. Slowly Leaking Box (Moscow Phys-Tech, Stony Brook 4.15. Surface Contamination (Wisconsin-Madison) 4.16. Bell Jar (Moscow Phys-Tech) 4.17. Hole in Wall (Princeton) 4.18. Ballast Volume Pressure (Moscow Phys-Tech) 4.19. Rocket in Drag (Princeton) 4.20. Adiabatic Atmosphere (Boston, Maryland) 4.21. Atmospheric Energy (Rutgers) 4.22. Puncture (Moscow Phys-Tech) Heat and Work 4.23. Cylinder with Massive Piston (Rutgers, Moscow 4.24. Spring Cylinder (Moscow Phys-Tech) 4.25. Isothermal Compression and Adiabatic Expansion of Ideal Gas (Michigan) 4.26. Isochoric Cooling and Isobaric Expansion (Moscow 4.27. Venting (Moscow Phys-Tech) 4.28. Cylinder and Heat Bath (Stony Brook) 4.29. Heat Extraction (MIT, Wisconsin-Madison) 4.30. Heat Capacity Ratio (Moscow Phys-Tech) Otto Cycle (Stony Brook) Joule Cycle (Stony Brook) Diesel Cycle (Stony Brook) Modified Joule–Thomson (Boston) Ideal Gas and Classical Statistics 4.35. Poisson Distribution in Ideal Gas (Colorado) 4.36. Polarization of Ideal Gas (Moscow Phys-Tech) 4.37. Two-Dipole Interaction (Princeton) 4.38. Entropy of Ideal Gas (Princeton) 4.39. Chemical Potential of Ideal Gas (Stony Brook) 4.40. Gas in Harmonic Well (Boston) 4.41. Ideal Gas in One-Dimensional Potential (Rutgers) 4.42. Equipartition Theorem (Columbia, Boston) 4.43. Diatomic Molecules in Two Dimensions (Columbia) 4.44. Diatomic Molecules in Three Dimensions (Stony Brook, Michigan State) 4.45. Two-Level System (Princeton) 4.46. Zipper (Boston) 4.47. Hanging Chain (Boston) 4.48. Molecular Chain (MIT, Princeton, Colorado) Nonideal Gas 4.49. Heat Capacities (Princeton) 4.50. Return of Heat Capacities (Michigan) 4.51. Nonideal Gas Expansion (Michigan State) 4.52. van der Waals (MIT) 4.53. Critical Parameters (Stony Brook) Mixtures and Phase Separation 4.54. Entropy of Mixing (Michigan, MIT) 4.55. Leaky Balloon (Moscow Phys-Tech) 4.56. Osmotic Pressure (MIT) 4.57. Clausius–Clapeyron (Stony Brook) 4.58. Phase Transition (MIT) 4.59. Hydrogen Sublimation in Intergalactic Space (Princeton) 4.60. Gas Mixture Condensation (Moscow Phys-Tech) 4.61. Air Bubble Coalescence (Moscow Phys-Tech) 4.62. Soap Bubble Coalescence (Moscow Phys-Tech) 4.63. Soap Bubbles in Equilibrium (Moscow Phys-Tech) Quantum Statistics 4.64. Fermi Energy of a 1D Electron Gas (Wisconsin-Madison) 170 4.65. Two-Dimensional Fermi Gas (MIT, Wisconson-Madison) 171 4.66. Nonrelativistic Electron Gas (Stony Brook, Wisconsin-Madison, Michigan State) 4.67. Ultrarelativistic Electron Gas (Stony Brook) 4.68. Quantum Corrections to Equation of State (MIT, Princeton, Stony Brook) 4.69. Speed of Sound in Quantum Gases (MIT) 4.70. Bose Condensation Critical Parameters (MIT) 4.71. Bose Condensation (Princeton, Stony Brook) 4.72. How Hot the Sun? (Stony Brook) 4.73. Radiation Force (Princeton, Moscow Phys-Tech, MIT) 4.74. Hot Box and Particle Creation (Boston, MIT) 4.75. D-Dimensional Blackbody Cavity (MIT) 4.76. Fermi and Bose Gas Pressure (Boston) 4.77. Blackbody Radiation and Early Universe (Stony Brook) 4.78. Photon Gas (Stony Brook) 4.79. Dark Matter (Rutgers) 4.80. Einstein Coefficients (Stony Brook) 4.81. Atomic Paramagnetism (Rutgers, Boston) 4.82. Paramagnetism at High Temperature (Boston) 4.83. One-Dimensional Ising Model (Tennessee) 4.84. Three Ising Spins (Tennessee) 4.85. N Independent Spins (Tennessee) 4.86. N Independent Spins, Revisited (Tennessee) 4.87. Ferromagnetism (Maryland, MIT) 4.88. Spin Waves in Ferromagnets (Princeton, Colorado) 4.89. Magnetization Fluctuation (Stony Brook) 4.90. Gas Fluctuations (Moscow Phys-Tech) 4.91. Quivering Mirror (MIT, Rutgers, Stony Brook) 4.92. Isothermal Compressibility and Mean Square Fluctuation (Stony Brook) 4.93. Energy Fluctuation in Canonical Ensemble (Colorado, Stony Brook) 4.94. Number Fluctuations (Colorado (a,b), Moscow Phys-Tech (c)) 4.95. Wiggling Wire (Princeton) 4.96. LC Voltage Noise (MIT, Chicago) Applications to Solid State 4.97. Thermal Expansion and Heat Capacity (Princeton) 4.98. Schottky Defects (Michigan State, MIT) 4.99. Frenkel Defects (Colorado, MIT) Two-Dimensional Debye Solid (Columbia, Boston) Einstein Specific Heat (Maryland, Boston) Gas Adsorption (Princeton, MIT, Stanford) Thermionic Emission (Boston) Electrons and Holes (Boston, Moscow Phys-Tech) Adiabatic Demagnetization (Maryland) Critical Field in Superconductor (Stony Brook, Chicago) 5. Quantum Mechanics One-Dimensional Potentials 5.1. Shallow Square Well I (Columbia) 5.2. Shallow Square Well II (Stony Brook) 5.3. Attractive Delta Function Potential I (Stony Brook) 5.4. Attractive Delta Function Potential II (Stony Brook) 5.5. Two Delta Function Potentials (Rutgers) 5.6. Transmission Through a Delta Function Potential (Michigan State, MIT, Princeton) 5.7. Delta Function in a Box (MIT) 5.8. Particle in Expanding Box (Michigan State, MIT, Stony 5.9. One-Dimensional Coulomb Potential (Princeton) 5.10. Two Electrons in a Box (MIT) 5.11. Square Well (MIT) 5.12. Given the Eigenfunction (Boston, MIT) 5.13. Combined Potential (Tennessee) Harmonic Oscillator 5.14. Given a Gaussian (MIT) 5.16. Number States (Stony Brook) 5.17. Coupled Oscillators (MIT) 5.18. Time-Dependent Harmonic Oscillator I 5.20. Switched-on Field (MIT) Angular Momentum and Spin 5.22. Given Another Eigenfunction (Stony Brook) 5.23. Algebra of Angular Momentum (Stony Brook) 5.24. Triplet Square Well (Stony Brook) 5.25. Dipolar Interactions (Stony Brook) Potential (MIT) 5.26. Spin-Dependent Three Spins (Stony Brook) Constant Matrix Perturbation (Stony Brook) Rotating Spin (Maryland, MIT) Nuclear Magnetic Resonance (Princeton, Stony Brook) Variational Calculations 5.31. Anharmonic Oscillator (Tennessee) 5.32. Linear Potential I (Tennessee) 5.33. Linear Potential II (MIT, Tennessee) 5.34. Return of Combined Potential (Tennessee) 5.35. Quartic in Three Dimensions (Tennessee) 5.36. Halved Harmonic Oscillator (Stony Brook, Chicago (b), Princeton (b)) 5.37. Helium Atom (Tennessee) Perturbation Theory 5.38. Momentum Perturbation (Princeton) 5.39. Ramp in Square Well (Colorado) 5.40. Circle with Field (Colorado, Michigan State) 5.41. Rotator in Field (Stony Brook) 5.42. Finite Size of Nucleus (Maryland, Michigan State, Princeton, Stony Brook) Perturbation (Princeton) 5.43. U and 5.44. Relativistic Oscillator (MIT, Moscow Phys-Tech, Stony Brook (a)) 5.45. Spin Interaction (Princeton) 5.46. Spin–Orbit Interaction (Princeton) 5.47. Interacting Electrons (MIT) 5.48. Stark Effect in Hydrogen (Tennessee) Hydrogen with Electric and Magnetic Fields (MIT) n in Capacitor (Maryland, Michigan State) 5.51. Harmonic Oscillator in Field (Maryland, Michigan State) of Tritium (Michigan State) 5.53. Bouncing Ball (Moscow Phys-Tech, Chicago) 5.54. Truncated Harmonic Oscillator (Tennessee) 5.55. Stretched Harmonic Oscillator (Tennessee) 5.56. Ramp Potential (Tennessee) 5.57. Charge and Plane (Stony Brook) 5.58. Ramp Phase Shift (Tennessee) 5.59. Parabolic Phase Shift (Tennessee) 5.60. Phase Shift for Inverse Quadratic (Tennessee) Scattering Theory 5.61. Step-Down Potential (Michigan State, MIT) 5.62. Step-Up Potential (Wisconsin-Madison) 5.63. Repulsive Square Well (Colorado) 5.64. 3D Delta Function (Princeton) 5.65. Two-Delta-Function Scattering (Princeton) 5.66. Scattering of Two Electrons (Princeton) 5.67. Spin-Dependent Potentials (Princeton) 5.68. Rayleigh Scattering (Tennessee) 5.69. Scattering from Neutral Charge Distribution (Princeton) 321 5.70. Spherical Box with Hole (Stony Brook) 5.71. Attractive Delta Function in 3D (Princeton) 5.72. Ionizing Deuterium (Wisconsin-Madison) 5.73. Collapsed Star (Stanford) 5.74. Electron in Magnetic Field (Stony Brook, Moscow 5.75. Electric and Magnetic Fields (Princeton) 5.76. Josephson Junction (Boston) Approximate Values of Physical Constants Some Astronomical Data Other Commonly Used Units Conversion Table from Rationalized MKSA to Gaussian Units Vector Identities Vector Formulas in Spherical and Cylindrical Coordinates Legendre Polynomials Rodrigues’ Formula Spherical Harmonics Harmonic Oscillator Angular Momentum and Spin Variational Calculations Normalized Eigenstates of Hydrogen Atom Conversion Table for Pressure Units Useful Constants This page intentionally left blank and Statistical Introductory Thermodynamics Why Bother? (Moscow Phys-Tech) A physicist and an engineer find themselves in a mountain lodge where the only heat is provided by a large woodstove. The physicist argues that they cannot increase the total energy of the molecules in the cabin, and therefore it makes no sense to continue putting logs into the stove. The engineer strongly disagrees (see Figure P.4.1), referring to the laws of thermodynamics and common sense. Who is right? Why do we heat the room? Space Station Pressure (MIT) A space station consists of a large cylinder of radius filled with air. The cylinder spins about its symmetry axis at an angular speed providing an acceleration at the rim equal to If the temperature is constant inside the station, what is the ratio of air pressure at the center of the station to the pressure at the rim? Baron von Münchausen and Intergalactic Travel (Moscow Phys-Tech) Recently found archives of the late Baron von Münchausen brought to light some unpublished scientific papers. In one of them, his calculations indicated that the Sun’s energy would some day be exhausted, with the subsequent freezing of the Earth and its inhabitants. In order to avert this inevitable outcome, he proposed the construction of a large, rigid balloon, empty of all gases, 1 km in radius, and attached to the Earth by a long, light rope of extreme tensile strength. The Earth would be propelled through space to the nearest star via the Archimedes’ force on the balloon, transmitted through the rope to the large staple embedded in suitable bedrock (see Figure P.4.3). Estimate the force on the rope (assuming a massless balloon). Discuss the feasibility of the Baron’s idea (without using any general statements). Railway Tanker (Moscow Phys-Tech) A long, cylindrical tank is placed on a carriage that can slide without friction on rails (see Figure P.4.4). The mass of the empty tanker is Initially, the tank is filled with an ideal gas of mass at a pressure atm at an ambient temperature one end of the tank is heated to 335 K while the other end is kept fixed at 300 K. Find the pressure in the tank and the new position of the center of mass of the tanker when the system reaches equilibrium. Magic Carpet (Moscow Phys-Tech) Once sitting in heavy traffic, Baron von Münchausen thought of a new kind of “magic carpet” type aircraft (see Figure P.4.5). The upper surface of the large flat panel is held at a constant temperature and the lower surface at a temperature He reasoned that, during collision with the hot surface, air molecules acquire additional momentum and therefore will transfer an equal momentum to the panel. The back of the handkerchief estimates he was able to make quickly for of such a panel showed that = 373 K (air temperature 293 K) this panel would be able to levitate itself and a payload (the Baron) of about kg. How did he arrive at this? Is it really possible? Teacup Engine (Princeton, Moscow Phys-Tech) The astronaut from Problem 1.13 in Part I was peacefully drinking tea at five o’clock galactic time, as was his wont, when he had an emergency outside the shuttle, and he had to do an EVA to deal with it. Upon leaving the ship, his jetpack failed, and nothing remained to connect him to the shuttle. Fortunately, he had absentmindedly brought his teacup with him. Since this was the only cup he had, he did not want to throw it away in order to propel him back to the shuttle (besides, it was his favorite cup). Instead, he used the sublimation of the frozen tea to propel him back to the spaceship (see Figure P.4.6). Was it really possible? Estimate the time it might take him to return if he is a distance m from the ship. Assume that the sublimation occurs at a constant temperature The vapor pressure at this temperature is of the astronaut and the total mass Grand Lunar Canals (Moscow Phys-Tech) In one of his novels, H. G. Wells describes an encounter of amateur earthling astronauts with a lunar civilization living in very deep caverns beneath the surface of the Moon. The caverns are connected to the surface by long channels filled with air. The channel is dug between points A and B on the surface of the Moon so that the angle (see Figure P.4.7). Assume that the air pressure in the middle of a channel is Estimate the air pressure in the channel near the surface of the Moon. The radius of the Moon The acceleration due to gravity on the surface of the Moon where is the acceleration due to gravity on the surface of the Earth. Frozen Solid (Moscow Phys-Tech) Estimate how long it will take for a small pond of average depth to freeze completely in a very cold winter, when the temperature is always below the freezing point of water (see Figure P.4.8). Take the thermal conductivity of ice to be the latent heat of fusion and the density temperature to be a constant Take the outside Tea in Thermos (Moscow Phys-Tech) One liter of tea at 90° C is poured into a vacuum-insulated container (thermos). The surface area of the thermos walls The volume between the walls is pumped down to atm pressure (at room temperature). The emissivity of the walls and the thermal capacity of water Disregarding the heat leakage through the stopper, estimate the a) Net power transfer b) Time for the tea to cool from 90°C to 70°C. Heat Loss (Moscow Phys-Tech) An immersion heater of power W is used to heat water in a bowl. After 2 minutes, the temperature increases from 90°C. The heater is then switched off for an additional minute, and the temperature drops by Estimate the mass of the water in the bowl. The thermal capacity of water c = 4.2 • Liquid-Solid-Liquid (Moscow Phys-Tech) A small amount of water of mass in a container at temperature K is placed inside a vacuum chamber which is evacuated rapidly. As a result, part of the water freezes and becomes ice and the rest becomes a) What amount of water initially transforms into ice? The latent heat of fusion (ice/water) and the latent heat of vaporization g and original volume b) A piece of heated metal alloy of mass is placed inside the calorimeter together with the ice obtained as a result of the experiment in (a). The density of metal K is The thermal capacity is and the coefficient oflinear expansion How much ice will have melted when equilibrium is reached? Hydrogen Rocket (Moscow Phys-Tech) The reaction chamber of a rocket engine is supplied with a mass flow rate m of hydrogen and sufficient oxygen to allow complete burning of the fuel. The cross section of the chamber is A, and the pressure at the cross section is P with temperature T. Calculate the force that this chamber is able to Maxwell-Boltzmann Averages (MIT) a) Write the properly normalized Maxwell–Boltzmann distribution for finding particles of mass with magnitude of velocity in the at a temperature b) What is the most likely speed at temperature c) What is the average speed? d) What is the average square speed? Slowly Leaking Box (Moscow Phys-Tech, Stony Brook (a,b)) An ideal gas of atoms of number density at an absolute temperature is confined to a thermally isolated container that has a small hole of area A in one of the walls (see Figure P.4.14). Assume a Maxwell velocity distribution for the atoms. The size of the hole is much smaller than the size of the container and much smaller than the mean free path of the atoms. a) Calculate the number of atoms striking the wall of the container per unit area per unit time. (Express your answer in terms of the mean velocity of the atoms.) b) What is the ratio of the average kinetic energy of atoms leaving the container to the average kinetic energy of atoms initially occupying the container? Assume that there is no flow back to the container. Give a qualitative argument and compute this ratio. c) How much heat must you transfer to/from the container to keep the temperature of the gas constant? Surface Contamination (Wisconsin-Madison) A surface scientist wishes to keep an exposed surface “clean” adsorbed monolayer) for an experiment lasting for times h at a temperature Estimate the needed data and calculate a value for the required background pressure in the apparatus if each incident molecule sticks to the surface. Bell Jar (Moscow Phys-Tech) A vessel with a small hole of diameter in it is placed inside a high-vacuum chamber (see Figure P.4.16). The pressure is so low that the mean free path The temperature of the gas in the chamber is and the pressure The temperature in the vessel is kept at a constant is the pressure inside the vessel when steady state is reached? Hole in Wall (Princeton) A container is divided into two parts, I and II, by a partition with a small hole of diameter Helium gas in the two parts is held at temperatures K and respectively, through heating of the walls (see Figure P.4.17). a) How does the diameter d determine the physical process by which the gases come to steady state? b) What is the ratio of the mean free paths c) What is the ratio between the two parts Ballast Volume Pressure (Moscow Phys-Tech) Two containers, I and II, filled with an ideal gas are connected by two small openings of the same area, A, through a ballast volume B (see FigTLFeBOOK ure P.4.18). The temperatures and pressures in the two containers are kept constant and equal to P, and P, respectively. The volume B is thermally isolated. Find the equilibrium pressure and temperature in the ballast volume, assuming the gas is in the Knudsen regime. Rocket in Drag (Princeton) A rocket has an effective frontal area A and blasts off with a constant acceleration a straight up from the surface of the Earth (see Figure P.4.19). a) Use either dimensional analysis or an elementary derivation to find out how the atmospheric drag on the rocket should vary as some power(s) of the area A, the rocket velocity and the atmospheric density (assuming that we are in the region of high Reynolds numbers). b) Assume that the atmosphere is isothermal with temperature T. Derive the variation of the atmospheric density with height Assume that the gravitational acceleration is a constant and that the density at sea level is c) Find the height at which the drag on the rocket is at a maximum. Adiabatic Atmosphere (Boston, Maryland) The lower 10–15 km of the atmosphere, the troposphere, is often in a convective steady state with constant entropy, not constant temperature is independent of the altitude, where a) Find the change of temperature in this model with altitude in K/km. Consider the average diatomic molecule b) Estimate of air with molar mass Atmospheric Energy (Rutgers) The density of the Earth’s atmosphere, varies with height above the Earth’s surface. Assume that the “thickness” of the atmosphere is sufficiently small so that it is in a uniform gravitational field of strength a) Write an equation to determine the atmospheric pressure the function b) In a static atmosphere, each parcel of air has an internal energy and a gravitational potential energy To a very good approximation, the air in the atmosphere is an ideal gas with constant specific heat. Using this assumption, the result of part (a), and classical thermodynamics, show that the total energy in a vertical column of atmosphere of cross-sectional area A is given by and that the ratio of energies is where T is the temperature, is the pressure at the Earth’s surface, is the molar mass, is the molar specific heat at constant pressure, and is the ratio of specific heats. Hint: The above results do not depend on the specific way in which vary as a function of (e.g., isothermal, adiabatic, or something intermediate). They depend only on the fact that is monotonically decreasing. At some step of the derivation, you might find it useful to do an integration by parts. Puncture (Moscow Phys-Tech) A compressed ideal gas flows out of a small hole in a tire which has a a) Find the velocity of gas outside the tire in the vicinity of the hole if the flow is laminar and stationary and the pressure outside is b) Estimate this velocity for a flow of molecular hydrogen into a vacuum at a temperature Express this velocity in terms of the velocity of sound inside the tire, Heat and Work Cylinder with Massive Piston (Rutgers, Moscow Consider moles of an ideal monatomic gas placed in a vertical cylinder. The top of the cylinder is closed by a piston of mass M and cross section A (see Figure P.4.23). Initially the piston is fixed, and the gas has volume and temperature Next, the piston is released, and after several oscillations comes to a stop. Disregarding friction and the heat capacity of the piston and cylinder, find the temperature and volume of the gas at equilibrium. The system is thermally isolated, and the pressure outside the cylinder is Spring Cylinder (Moscow Phys-Tech) One part of a cylinder is filled with one mole of a monatomic ideal gas at a pressure of 1 atm and temperature of 300 K. A massless piston separates the gas from the other section of the cylinder which is evacuated but has a spring at equilibrium extension attached to it and to the opposite wall of the cylinder. The cylinder is thermally insulated from the rest of the world, and the piston is fixed to the cylinder initially and then released (see Figure P.4.24). After reaching equilibrium, the volume occupied by the gas is double the original. Neglecting the thermal capacities of the cylinder, piston, and spring, find the temperature and pressure of the gas. Isothermal Compression and Adiabatic Expansion of Ideal Gas (Michigan) An ideal gas is compressed at constant temperature (see Figure P.4.25). from volume a) Find the work done on the gas and the heat absorbed by the gas. b) The gas now expands adiabatically to volume What is the final (derive this result from first principles)? c) Estimate K for air. Isochoric Cooling and Isobaric Expansion (Moscow Phys-Tech) An ideal gas of total mass and molecular weight is isochorically (at constant volume) cooled to a pressure times smaller than the initial pressure The gas is then expanded at constant pressure so that in the final state the temperature coincides with the initial temperature the work done by the gas. Venting (Moscow Phys-Tech) A thermally insulated chamber is pumped down to a very low pressure. At some point, the chamber is vented so that it is filled with air up to atmospheric pressure, whereupon the valve is closed. The temperature of the air surrounding the chamber is What is the temperature T of the gas in the chamber immediately after venting? Cylinder and Heat Bath (Stony Brook) Consider a cylinder 1 m long with a thin, massless piston clamped in such a way that it divides the cylinder into two equal parts. The cylinder is in a large heat bath at The left side of the cylinder contains 1 mole of helium gas at 4 atm. The right contains helium gas at a pressure of 1 atm. Let the piston be released. a) What is its final equilibrium position? b) How much heat will be transmitted to the bath in the process of equilibration? (Note that Heat Extraction (MIT, Wisconsin-Madison) a) A body of mass M has a temperature-independent specific heat C. If the body is heated reversibly from a temperature to a temperature what is the change in its entropy? b) Two such bodies are initially at temperatures of 100 K and 400 K. A reversible engine is used to extract heat with the hotter body as a source and the cooler body as a sink. What is the maximum amount of heat that can be extracted in units of MC? c) The specific heat of water is and its density is Calculate the maximum useful work that can be extracted, using as a source of water at 100°C and a lake of temperature 10°C as a sink. Heat Capacity Ratio (Moscow Phys-Tech) To find of a gas, one sometimes uses the following method. A certain amount of gas with initial temperature and volume is heated by a current flowing through a platinum wire for a time The experiment is done twice: first at a constant volume with the pressure changing from and then at a constant pressure with the volume changing from The time t is the same in both experiments. Find the ratio (the gas may be considered ideal). Otto Cycle (Stony Brook) The cycle of a highly idealized gasoline engine can be approximated by the Otto cycle (see Figure P.4.31). are adiabatic compression and expansion, respectively; are constant-volume processes. Treat the working medium as an ideal gas with constant a) Compute the efficiency of this cycle for and compression ratio b) Calculate the work done on the gas in the compression process assuming initial volume Joule Cycle (Stony Brook) Find the efficiency of the Joule cycle, consisting of two adiabats and two isobars (see Figure P.4.32). Assume that the heat capacities of the gas are constant. Diesel Cycle (Stony Brook) Calculate the efficiency of the Diesel cycle, consisting of two adiabats, one isobar and one constant-volume process Figure P.4.33). Assume are constant. Modified Joule–Thomson (Boston) Figure P.4.34 shows container A of variable volume V controlled by a frictionless piston, immersed in a bath at temperature This container is connected by a pipe with a porous plug to another container, B, of fixed Container A is initially occupied by an ideal gas at pressure P while container B is initially evacuated. The gas is allowed to flow through the plug, and the pressure on the piston is maintained at the constant value P. When the pressure of the gas in B reaches P, the experiment is terminated. Neglecting any heat conduction through the plug, show that the final temperature of the gas in B is the molar heats at constant pressure and volume of the gas. Ideal Gas and Classical Statistics Poisson Distribution in Ideal Gas (Colorado) Consider a monatomic ideal gas of total molecules in a volume that the probability, for the number N of molecules contained in a small element of V is given by the Poisson distribution is the average number of molecules found in the volume Polarization of Ideal Gas (Moscow Phys-Tech) Calculate the electric polarization of an ideal gas, consisting of molecules having a constant electric dipole moment in a homogeneous external electric field E at temperature What is the dielectric constant of this gas at small fields? Two-Dipole Interaction (Princeton) Two classical dipoles with dipole moments are separated by a distance R so that only the orientation of the magnetic moments is free. They are in thermal equilibrium at a temperature Compute the mean between the dipoles for the high-temperature limit Hint: The potential energy of interaction of two dipoles is Entropy of Ideal Gas (Princeton) A vessel of volume contains N molecules of an ideal gas held at temperature and pressure The energy of a molecule may be written in the denotes the energy levels corresponding to the internal states of the molecules of the gas. a) Evaluate the free energy F. Explicitly display the dependence on the Now consider another vessel, also at temperature containing the same number of molecules of the identical gas held at pressure b) Give an expression for the total entropy of the two gases in terms of c) The vessels are then connected to permit the gases to mix without doing work. Evaluate explicitly the change in entropy of the system. Check whether your answer makes sense by considering the special Chemical Potential of Ideal Gas (Stony Brook) Derive the expression for the Gibbs free energy and chemical potential of N molecules of an ideal gas at temperature pressure P, and volume V. Assume that all the molecules are in the electronic ground state with degeneracy At what temperature is this approximation valid? Gas in Harmonic Well (Boston) A classical system of N distinguishable noninteracting particles of mass is placed in a three-dimensional harmonic well: a) Find the partition function and the Helmholtz free energy. b) Regarding V as an external parameter, find the thermodynamic force conjugate to this parameter, exerted by the system; find the equation of state and compare it to that of a gas in a container with rigid c) Find the entropy, internal energy, and total heat capacity at constant Ideal Gas in One-Dimensional Potential a) An ideal gas of particles, each of mass at temperature is subjected to an external force whose potential energy has the form Find the average potential energy per particle. b) What is the average potential energy per particle in a gas in a uniform gravitational field? Equipartition Theorem (Columbia, Boston) a) For a classical system with Hamiltonian at a temperature show that b) Using the above, derive the law of Dulong and Petit for the heat capacity of a harmonic crystal. c) For a more general Hamiltonian, prove the generalized equipartition theorem: You will need to use the fact that U is infinite at d) Consider a system of a large number of classical particles and assume a general dependence of the energy of each particle on the generalized coordinate or momentum component given by Show that, in thermal equilibrium, the generalized equipartition theorem holds: What conditions should be satisfied for tition theorem? to conform to the equipar- Diatomic Molecules in Two Dimensions You have been transported to a two-dimensional world by an evil wizard who refuses to let you return to your beloved Columbia unless you can determine the thermodynamic properties for a rotating heteronuclear diatomic molecule constrained to move only in a plane (two dimensions). You may assume in what follows that the diatomic molecule does not undergo translational motion. Indeed, it only has rotational kinetic energy about its center of mass. The quantized energy levels of a diatomic in two dimensions are with degeneracies for J not equal to zero, and when J = 0. As usual, where I is the moment of inertia. Hint: For getting out of the wizard’s evil clutches, treat all levels as having the same degeneracy and then... . Oh, no! He’s got me, too! derive the partition function for an individa) Assuming ual diatomic molecule in two dimensions. b) Determine the thermodynamic energy E and heat capacity in the limit, where for a set of indistinguishable, independent, heteronuclear diatomic molecules constrained to rotate in a plane. Compare these results to those for an ordinary diatomic rotor in three dimensions. Comment on the differences and discuss briefly in terms of the number of degrees of freedom required to describe the motion of a diatomic rotor confined to a plane. Diatomic Molecules in Three Dimensions (Stony Brook, Michigan State) Consider the free rotation of a diatomic molecule consisting of two atoms of mass respectively, separated by a distance Assume that the molecule is rigid with center of mass fixed. a) Starting from the kinetic energy derive the kinetic energy of this system in spherical coordinates and show that where I is the moment of inertia. Express I in terms of b) Derive the canonical conjugate momenta Hamiltonian of this system in terms of c) The classical partition function is defined as Express the and I. Calculate the heat capacity for a system of N d) Assume now that the rotational motion of the molecule is described by quantum mechanics. Write the partition function in this case, taking into account the degeneracy of each state. Calculate the heat capacity of a system of N molecules in the limit of low and high temperatures and compare them to the classical result. Two-Level System (Princeton) Consider a system composed of a very large number N of distinguishable atoms at rest and mutually noninteracting, each of which has only two (nondegenerate) energy levels: Let E / N be the mean energy per atom in the limit a) What is the maximum possible value of E / N if the system is not necessarily in thermodynamic equilibrium? What is the maximum attainable value of E / N if the system is in equilibrium (at positive b) For thermodynamic equilibrium compute the entropy per atom S/N as a function of E / N. Zipper (Boston) A zipper has N links; each link has a state in which it is closed with energy 0 and a state in which it is open with energy We require that the zipper only unzip from one side (say from the left) and that the link can only open if all links to the left of it (1,2,..., are already open. (This model is sometimes used for DNA molecules.) a) Find the partition function. and show that for low b) Find the average number of open links is independent of N. Hanging Chain (Boston) The upper end of a hanging chain is fixed while the lower end is attached to a mass M. The (massless) links of the chain are ellipses with major axes and minor axes and can place themselves only with either the major axis or the minor axis vertical. Figure P.4.47 shows a four-link chain in which the major axes of the first and fourth links and the minor axes of the second and third links are vertical. Assume that the chain has N links and is in thermal equilibrium at temperature a) Find the partition function. b) Find the average length of the chain. Molecular Chain (MIT, Princeton, Colorado) Consider a one-dimensional chain consisting of N molecules which exist in two configurations, with corresponding energies and lengths and The chain is subject to a tensile force for the system. a) Write the partition function a function of and the temperab) c) Assume that Estimate the average length in the absence of the tensile force as a function of temperature. What are the high- and low-temperature limits, and what is the characteristic temperature at which the changeover between the two limits occurs? d) Calculate the linear response function Produce a general argument to show that Nonideal Gas Heat Capacities (Princeton) Consider a gas with arbitrary equation of state at a temperature is a critical temperature of this gas. a) Calculate for this gas in terms of Does have the same sign? b) Using the result of (a), calculate for one mole of a van der Waals gas. Return of Heat Capacities (Michigan) In a certain range of temperature and pressure of a substance is described by the equation the specific volume are positive constants. From this information, determine (insofar as possible) as a function of temperature and pressure the following Nonideal Gas Expansion (Michigan State) A gas obeys the equation of state is a function of the temperature only. The gas is initially at temperature and volume and is expanded isothermally and reversibly to volume a) Find the work done in the expansion. b) Find the heat absorbed in the expansion. Some Maxwell relations: van der Waals (MIT) A monatomic gas obeys the van der Waals equation and has a heat capacity in the limit a) Prove, using thermodynamic identities and the equation of state, that b) Use the preceding result to determine the entropy of the van der Waals to within an additive constant. c) Calculate the internal energy to within an additive constant. d) What is the final temperature when the gas is adiabatically compressed from to final volume e) How much work is done in this compression? Critical Parameters (Stony Brook) Consider a system described by the Dietrici equation of state where A, B, R are constants and P, V, and are the pressure, volume, temperature, and number of moles. Calculate the critical parameters, i.e., the values of P, V, and at the critical point. Mixtures and Phase Separation Entropy of Mixing (Michigan, MIT) a) A 2-L container is divided in half: One half contains oxygen at 1 atm, the other nitrogen at the same pressure, and both gases may be considered ideal. The system is in an adiabatic enclosure at a K. The gases are allowed to mix. Does the temperature of the system change in this process? If so, by how much? Does the entropy change? If so, by how much? b) How would the result differ if both sides contained oxygen? c) Now consider one half of the enclosure filled with diatomic molecules of oxygen isotope and the other half with Will the answer be different from parts (a) and (b)? Leaky Balloon (Moscow Phys-Tech) Sometimes helium gas in a low-temperature physics lab is kept temporarily in a large rubber bag at essentially atmospheric pressure. A physicist left a 40-L bag filled with He floating near the ceiling before leaving on vacation. When she returned, all the helium was gone (diffused through the walls of the bag). Find the entropy change of the gas. Assume that the atmospheric helium concentration is approximately . What is the minimum work needed to collect the helium back into the bag? Osmotic Pressure (MIT) Consider an ideal mixture of monatomic molecules of type A and monatomic molecules of type B in a volume V. a) Calculate the free energy Calculate the Gibbs potential G is the Legendre transform of F with respect to V. b) If the molecules of type A are called the solvent, and those of type B the solute. Consider two solutions with the same solvent (type A) and different concentrations of solute (type B molecules) separated by a partition through which solvent molecules can pass but solute molecules cannot (see Figure P.4.56). There are particles in volume V (or in volume 2V), and particles in volume V on the left and right of the membrane, respectively. Calculate the pressure difference across the membrane at a given temperature and volume. Assume that the concentrations of the solutions are small; Clausius–Clapeyron (Stony Brook) a) Derive the Clausius–Clapeyron equation for the equilibrium of two phases of a substance. Consider a liquid or solid phase in equilibrium with its vapor. b) Using part (a) and the ideal gas law for the vapor phase, show that the vapor pressure follows the equation ln Make reasonable assumptions as required. What is B? Phase Transition (MIT) The curve separating the liquid and gas phases ends in the critical point Using arguments based on thermodynamic stability, determine at the critical point. Hydrogen Sublimation in Intergalactic Space A lump of condensed molecular hydrogen in intergalactic space would tend to sublimate (evaporate) because the ambient pressure of hydrogen is well below the equilibrium vapor pressure. Find an order-of-magnitude estimate of the rate of sublimation per unit area at The latent heat of sublimation is and the vapor pressure at the triple point of Hg. Gas Mixture Condensation (Moscow Phys-Tech) A mixture of of nitrogen and some oxygen is isothermally compressed at The result of this experiment is plotted as the pressure dependence of the mixture versus volume in arbitrary units (see Figure P.4.60). Find the mass of oxygen and the oxygen saturation vapor pressure at this temperature. K is the boiling temperature of liquid nitrogen at atmospheric pressure. Oxygen boils at a higher temperature. Air Bubble Coalescence (Moscow Phys-Tech) A tightly closed jar is completely filled with water. On the bottom of the jar are two small air bubbles (see Figure P.4.61a) which sidle up to each other and become one bubble (see Figure P.4.61b). The pressure at the top of the jar is the radius of each original bubble is and the coefficient of surface tension is Consider the process to be isothermal. Evaluate the change of pressure inside the jar upon merging of the two bubbles. Soap Bubble Coalescence (Moscow Phys-Tech) Two soap bubbles of radii become one bubble Find the surface tension coefficient for the soap solution. The ambient pressure is Soap Bubbles in Equilibrium (Moscow Two soap bubbles of radius are connected by a thin “straw” of negligible volume compared to the volume of the bubbles (see Figure P.4.63). The ambient pressure is the temperature is and the surface tension coefficient is a) Is this system in stable equilibrium? What is the final state? b) Calculate the entropy change between the final-state configuration and the configuration in Figure P.4.63. Assume Quantum Statistics Fermi Energy of a 1D Electron Gas Calculate the Fermi energy for a one-dimensional metal with one free electron per atom and an atomic spacing of 2.5 Å at T = 0. Two-Dimensional Fermi Gas (MIT, Consider a noninteracting nonrelativistic gas of N spin-1/2 fermions at T = 0 in a box of area A. a) Find the Fermi energy. b) Show that the total energy is given by c) Qualitatively discuss the behavior of the heat capacity of this system at low temperatures. Nonrelativistic Electron Gas (Stony Brook, Wisconsin-Madison, Michigan State) a) Derive the relation between pressure and volume of a free nonrelativistic electron gas at zero temperature. b) The formula obtained in (a) is approximately correct for sufficiently low temperatures (the so-called strongly degenerate gas). Discuss the applicability of this formula to common metals. Ultrarelativistic Electron Gas (Stony Brook) Derive the relation between pressure and volume of a free ultrarelativistic electron gas at zero temperature. Quantum Corrections to Equation of State (MIT, Princeton, Stony Brook) Consider a noninteracting, one-component quantum gas at temperature with a chemical potential in a cubic volume V. Treat the separate cases of bosons and fermions. a) For a dilute system derive the equation of state in terms of temperature pressure P, particle density and particle mass this derivation approximately by keeping the leading and next-leading powers of Interpret your results as an effective classical system. b) At a given temperature, for which densities are your results valid? Speed of Sound in Quantum Gases (MIT) The sound velocity in a spin-1/2 Fermi gas is given at is the mass of the gas particles, and is the number a) Show that where is the chemical potential. b) Calculate the sound velocity in the limit of zero temperature. Express your answer in terms of c) Show that in a Bose gas below the Bose–Einstein temperature. Bose Condensation Critical Parameters (MIT) Consider an ideal Bose gas of N particles of mass and spin zero in a volume V and temperature above the condensation point. a) What is the critical volume below which Bose–Einstein condensation occurs? An answer up to a numerical constant will be sufficient. b) What is the answer to (a) in two dimensions? Bose Condensation (Princeton, Stony Brook) Consider Bose condensation for an arbitrary dispersion law in D dimensions (see Figure P.4.71). Assume a relation between energy and momentum of the form Find a relation between D and for Bose condensation to occur. How Hot the Sun? (Stony Brook) The total radiant energy flux at the Earth from the Sun, integrated over all wavelengths, is observed to be approximately The distance from the Earth to the Sun, is cm and the solar Treating the Sun as a “blackbody,” make a crude estimate of the surface temperature of the Sun (see Figure P.4.72). To make the numerical estimate, you are encouraged to ignore all factors of 2’s and to express any integrals that you might have in dimensionless form, and to take all dimensionless quantities to be unity. Radiation Force (Princeton, Moscow Phys-Tech, Consider an idealized Sun and Earth, both blackbodies, in otherwise empty flat space. The Sun is at a temperature and heat transfer by oceans and atmosphere on the Earth is so effective as to keep the Earth’s surface temperature uniform. The radius of the Earth is the radius of the Sun is and the Earth–Sun distance is The mass of Sun a) Find the temperature of the Earth. b) Find the radiation force on the Earth. c) Compare these results with those for an interplanetary “chondrule” in the form of a spherical, perfectly conducting blackbody with a radius cm, moving in a circular orbit around the Sun at a radius equal to the Earth–Sun distance d) At what distance from the Sun would a metallic particle melt (melting e) For what size particle would the radiation force calculated in (c) be equal to the gravitational force from the Sun at a distance ? Hot Box and Particle Creation (Boston, MIT) The electromagnetic radiation in a box of volume V can be treated as a noninteracting ideal Bose gas of photons. If the cavity also contains atoms capable of absorbing and emitting photons, the number of photons in the cavity is not definite. The box is composed of a special material that can withstand extremely high temperatures of order a) Derive the average number of photons in the box. b) What is the total energy of the radiation in the box for c) What is the entropy of the radiation for d) Assume that photons can create neutral particles of mass and zero spin and that these neutral particles can create photons by annihilation or some other mechanism. The cavity now contains photons and particles in thermal equilibrium at a temperature Find the particle density Consider only the process where a single photon is emitted or absorbed by making a single particle. Hint: Minimize the free energy. Now, instead of neutral particles, consider the creation of electron-positron e) What is the total concentration of electrons and positrons inside the box when f) What is the total concentration of electrons and positrons when D-Dimensional Blackbody Cavity (MIT) Consider a D-dimensional hypercube blackbody cavity. What is the energy density as a function of temperature? It is not necessary to derive the multiplicative constant. Assume that the radiation is in quanta of energy Fermi and Bose Gas Pressure (Boston) For a photon gas the entropy is is the angular frequency of the mode. Using (P.4.76.1): a) Show that the isothermal work done by the gas is is the average number of photons in the b) Show that the radiation pressure is equal to one third of the energy c) Show that for a nonrelativistic Fermi gas the pressure is Blackbody Radiation and Early Universe (Stony The entropy of the blackbody radiation in the early universe does not change if the expansion is so slow that the occupation of each photon mode remains constant (or the other way around). To illustrate this consider the following problem. A one-dimensional harmonic oscillator has an infinite series of equally spaced energy states, with where is a positive integer or zero and is the classical frequency of the oscillator. a) Show that for a harmonic oscillator the free energy is b) Find the entropy S. Establish the connection between entropy and occupancy of the modes by showing that for one mode of frequency the entropy is a function of photon occupancy Photon Gas (Stony Brook) Consider a photon gas at temperature T inside a container of volume V. Derive the equation of state and compare it to that of the classical ideal gas (which has the equation Also compute the energy of the photon gas in terms of PV. You need not get all the numerical factors in this derivation. Dark Matter (Rutgers) From virial theorem arguments, the velocity dispersions of bright stars in dwarf elliptical galaxies imply that most of the mass in these systems is in the form of “dark” matter - possibly massive neutrinos (see Figure P.4.79). The central parts of the Draco dwarf galaxy may be modeled as an isothermal gas sphere, with a phase-space distribution of mass of the form is the local mass density in the galaxy, is the velocity dispersion, and is the mass of a typical “particle” in the galaxy. Measurements on Draco yield light years). the “core” radius, where the density has decreased by close to a factor of 2 from its value at a) Using the virial theorem, write a very rough (order of magnitude) relation between b) Assume that most of the mass in Draco resides in one species of massive neutrino. Show how, if the Pauli exclusion principle is not to be violated, the distribution function above sets a lower limit on the mass of this neutrino. c) Using the observations and the result of part (a), estimate this lower limit (in units of and comment on whether current measurements of neutrino masses allow Draco to be held together in the manner suggested. Einstein Coefficients (Stony Brook) You have two-state atoms in a thermal radiation field at temperature T. The following three processes take place: 1) Atoms can be promoted from state 1 to state 2 by absorption of a photon according to 2) Atoms can decay from state 2 to state 1 by spontaneous emission according to 3) Atoms can decay from state 2 to state 1 by stimulated emission according to The populations density is are in thermal equilibrium, and the radiation a) What is the ratio b) Calculate the ratios of coefficients c) From the ratio of stimulated to spontaneous emission, how does the pump power scale with wavelength when you try to make shortwavelength lasers? Atomic Paramagnetism (Rutgers, Boston) Consider a collection of N identical noninteracting atoms, each of which has total angular momentum J. The system is in thermal equilibrium at temperature and is in the presence of an applied magnetic field The magnetic dipole moment associated with each atom is given by where is the gyromagnetic ratio and is the Bohr magneton. Assume the system is sufficiently dilute so that the local magnetic field at each atom may be taken as the applied magnetic field. a) For a typical atom in this system, list the possible values of magnetic moment along the magnetic field, and the corresponding magnetic energy associated with each state. b) Determine the thermodynamic mean value of the magnetic moment and the magnetization of the system M, and calculate it for c) Find the magnetization of the system in the limits and discuss the physical meaning of the results. Paramagnetism at High Temperature (Boston) a) Show that for a system with a discrete, finite energy spectrum specific heat per particle at high temperatures for all is the spectrum variance b) Use the result of (a) to derive the high-temperature specific heat for a paramagnetic solid treated both classically and quantum mechanically. c) Compare your quantum mechanical result for with the exact formula for One-Dimensional Ising Model (Tennessee) Consider N spins in a chain which can be modeled using the onedimensional Ising model where the spin has the values a) Find the partition function. b) Find the heat capacity per spin. Three Ising Spins (Tennessee) Assume three spins are arranged in an equilateral triangle with each spin interacting with its two neighbors (see Figure P.4.84). The energy expression for the Ising model in a magnetic field Derive expressions for the a) Partition function b) Average spin c) Internal energy N Independent Spins (Tennessee) Consider a system of N independent spin-1/2 particles. In a magnetic field H, in the direction, they can point either up or down with energy where is the magnetic moment. Derive expressions for the a) Partition function b) Internal energy c) Entropy N Independent Spins, Revisited (Tennessee) Consider a system of N independent spin-1/2 particles. In a magnetic field H, in the direction, they can point either up or down with energy where is the magnetic moment and Derive expressions for the in the case of a microcanonical ensemble, where the number of particles N and the magnetization are fixed. Ferromagnetism (Maryland, MIT) The spins of a regular Ising lattice interact by the energy where B is an external field, is the magnetic moment, and the prime indicates that the summation is only over the nearest neighbors. Each spin has nearest neighbors. The spins are restricted to equal coupling constant J is positive. Following Weiss, represent the effect on of the spin–spin interaction in (P.4.87.1) by the mean field set up by the neighboring spins Calculate the linear spin susceptibility this mean field approximation. Your expression should diverge at some What is the physical significance of this divergence? What is happening to the spin lattice at Spin Waves in Ferromagnets (Princeton, Consider the quantum mechanical spin-1/2 system with Hamiltonian where the summation is over nearest-neighbor pairs in three dimensions. a) Derive the equation of motion for the spin at site of the lattice. b) Convert the model to a classical microscopic model by inserting the classical spin field into the equation of motion. Express to lowest order in its gradients, considering a simple cubic lattice with lattice constant c) Consider the ferromagnetic case with uniform magnetization Derive the frequency-versus-wave vector relation of a small spinwave fluctuation d) Quantize the spin waves in terms of magnons which are bosons. Derive the temperature dependence of the heat capacity. Magnetization Fluctuation (Stony Brook) Consider N moments with two allowed orientations in an external field H at temperature Calculate the fluctuation of magnetization M, Gas Fluctuations (Moscow Phys-Tech) A high-vacuum chamber is evacuated to a pressure of atm. Inside the chamber there is a thin-walled ballast volume filled with helium gas at a pressure atm and a temperature On one wall of this ballast volume, there is a small hole of area detector counts the number of particles leaving the ballast volume during time intervals a) Find the average number of molecules counted by the detector. b) Find the mean square fluctuation of this number. c) What is the probability of not counting any particles in one of the Quivering Mirror (MIT, Rutgers, Stony Brook) a) A very small mirror is suspended from a quartz strand whose elastic constant is D. (Hooke’s law for the torsional twist of the strand where is the angle of the twist.) In a real-life experiment the mirror reflects a beam of light in such a way that the angular fluctuations caused by the impact of surrounding molecules (Brownian motion) can be read on a suitable scale. The position of the equilibrium is One observes the average value the goal is to find Avogadro’s number (or, what is the same thing, determine the Boltzmann constant). The following are the data: At for a strand with dyn.cm, it was found You may also use the universal gas constant Calculate Avogadro’s number. b) Can the amplitude of these fluctuations be reduced by reducing gas density? Explain your answer. Isothermal Compressibility and Mean Square Fluctuation (Stony Brook) a) Derive the relation is the isothermal compressibility: b) From (a), find the relation between and the mean square fluctuation of N in the grand canonical ensemble. How does this fluctuation depend on the number of particles? Energy Fluctuation in Canonical Ensemble (Colorado, Stony Brook) Show that for a canonical ensemble the fluctuation of energy in a system of constant volume is related to the specific heat and, hence, deduce that the specific heat at constant volume is nonnegative. Number Fluctuations (Colorado (a,b), Moscow Phys-Tech (c)) Show that for a grand canonical ensemble the number of particles N and occupational number in an ideal gas satisfy the conditions: quantum statistics classical statistics For an electron spin Fermi gas at temperature c) Find Wiggling Wire (Princeton) A wire of length and mass per unit length is fixed at both ends and tightened to a tension What is the rms fluctuation, in classical statistics, of the midpoint of the wire when it is in equilibrium with a heat bath at A useful series is LC Voltage Noise (MIT, Chicago) The circuit in Figure P.4.96 consists of a coil of inductance and a capacitor of capacitance C. What is the rms noise voltage across AB at temperature in the limit where is very large? is very small? Applications to Solid State Thermal Expansion and Heat Capacity a) Find the temperature dependence of the thermal expansion coefficient if the interaction between atoms is described by a potential where is a small parameter. b) Derive the anharmonic corrections to the Dulong–Petit law for a potential is a small parameter. Schottky Defects (Michigan State, MIT) N atoms from a perfect crystal of total number of atoms are displaced to the surface of the crystal. Let be the energy needed to displace one atom from the bulk of the crystal to the surface. Find the equilibrium number of defects N at low temperatures Frenkel Defects (Colorado, MIT) N atoms are arranged regularly to form a perfect crystal. If one replaces atoms among them from lattice sites to interstices of the lattice, this becomes an imperfect crystal with defects (of the Frenkel type). The of interstitial sites into which an atom can enter is of the same order as N. Let be the energy necessary to remove an atom from a lattice site to an interstitial site. Show that, in the equilibrium state at temperature such that the following relation is valid: Two-Dimensional Debye Solid (Columbia, An atom confined to a surface may be thought of as an object “living” in a two-dimensional world. There are a variety of ways to look at such an atom. Suppose that the atoms adsorbed on the surface are not independent but undergo collective oscillations as do the atoms in a Debye crystal. Unlike the atoms in a Debye crystal, however, there are only two dimensions in which these collective vibrations can occur. a) Derive an expression for the number of normal modes between and, by thinking carefully about the total number of vibrational frequencies for N atoms confined to a surface, rewrite it in terms of N and the maximum vibration frequency allowed due to the discreteness of the atoms. b) Obtain an integral expression for the energy E for the two-dimensional Debye crystal. Use this to determine the limiting form of the temperature dependence of the heat capacity (analogous to the Debye law) for the two-dimensional Debye crystal up to dimensionless Einstein Specific Heat (Maryland, Boston) a) Derive an expression for the average energy at a temperature of a quantum harmonic oscillator having natural frequency b) Assuming unrealistically (as Einstein did) that the normal-mode vibrations of a solid all have the same natural frequency (call it find an expression for the heat capacity of an insulating solid. c) Find the high-temperature limit for the heat capacity as calculated in (b) and use it to obtain a numerical estimate for the heat capacity of a piece of an insulating solid having a number density Would you expect this to be a poor or a good estimate for the high-temperature heat capacity of the material? Please give reasons. d) Find the low-temperature limit of the heat capacity and explain why it is reasonable in terms of the model. Gas Adsorption (Princeton, MIT, Stanford) Consider a vapor (dilute monatomic gas) in equilibrium with a submonolayer (i.e., less than one atomic layer) of atoms adsorbed on a surface. Model the binding of atoms to the surface by a potential energy Assume there are possible sites for adsorption, and find the vapor pressure as a function of surface concentration (N is the number of adsorbed particles). Thermionic Emission (Boston) a) Assume that the evaporation of electrons from a hot wire (Richardson’s effect) is thermodynamically equivalent to the sublimation of a solid. Find the pressure of the electron gas, provided that the electrons outside the metal constitute an ideal classical monatomic gas and that the chemical potential of the electrons in the metal (the solid phase) is a constant. b) Derive the same result by using the Clausius–Clapeyron equation where L is the latent heat of electron evaporation. Neglect the volume occupied by the electrons in the metal. Electrons and Holes (Boston, Moscow a) Derive a formula for the concentration of electrons in the conduction band of a semiconductor with a fixed chemical potential (Fermi level) assuming that in the conduction band b) What is the relationship between hole and electron concentrations in a semiconductor with arbitrary impurity concentration and band gap c) Find the concentration of electrons and holes for an intrinsic semiconductor (no impurities), and calculate the chemical potential if the electron mass is equal to the mass of the hole: Adiabatic Demagnetization (Maryland) A paramagnetic sample is subjected to magnetic cooling. a) Show that is independent of H. Show that is the magnetization, is the isothermal magnetic susceptibility per unit volume, H is the magnetic field, and is the heat capacity at constant H. b) For an adiabatic process, show that c) Assume that can be approximated by Curie’s law the heat capacity at zero magnetic field is given by where and and that are constants. Show that For an adiabatic process, show that the ratio of final and initial temperatures is given by d) Explain and indicate in the diagram given in Figure P.4.105 a possible route for the adiabatic demagnetization cooling process to approach zero temperature. Critical Field in Superconductor (Stony Brook, Consider a massive cylinder of volume V made of a type I superconducting material in a magnetic field parallel to its axis. a) Using the fact that the superconducting state displays perfect diamagnetism, whereas the normal state has negligible magnetic susceptibility, show that the entropy discontinuity across the phase boundary is at zero field H: is the critical H field for suppressing superconductivity at a temperature b) What is the latent heat when the transition occurs in a field? c) What is the specific heat discontinuity in zero field? This page intentionally left blank One-Dimensional Potentials Shallow Square Well I (Columbia) A particle of mass moving in one dimension has a potential is a shallow square well near the origin: where is a positive constant. Derive the eigenvalue equation for the state of lowest energy, which is a bound state (see Figure P.5.1). Shallow Square Well II (Stony Brook) A particle of mass is confined to move in one dimension by a potential (see Figure P.5.2): a) Derive the equation for the bound state. b) From the results of part (a), derive an expression for the minimum value of which will have a bound state. c) Give the expression for the eigenfunction of a state with positive energy d) Show that the results of (c) define a phase shift for the potential, and derive an expression for the phase shift. Attractive Delta Function Potential I (Stony A particle of mass moves in one dimension under the influence of an attractive delta function potential at the origin. The Schrödinger equation a) Find the eigenvalue and eigenfunction of the bound state. b) If the system is in the bound state and the strength of the potential is changed suddenly what is the probability that the particle remains bound? Attractive Delta Function Potential II (Stony A particle of mass is confined to the right half-space, in one dimension, by an infinite potential at the origin. There is also an attractive delta function potential (see Figure P.5.4). a) Find the expression for the energy of the bound state. b) What is the minimum value of required for a bound state? Two Delta Function Potentials (Rutgers) A particle of mass moves in a one-dimensional potential of the form where P is a positive dimensionless constant and has units of length. Discuss the bound states of this potential as a function of P. Transmission Through a Delta Function Potential (Michigan State, MIT, Princeton) A particle of mass moves in one dimension where the only potential is at the origin with A free particle of wave vector approaches the origin from the left. Derive an expression for the amplitude T of the transmitted wave as a function of C, Delta Function in a Box (MIT) A particle of mass is confined to a box, in one dimension, between and the box has walls of infinite potential. An attractive delta is at the center of the box. a) What are the eigenvalues of odd-parity states? b) Find the value of C for which the lowest eigenvalue is zero. c) Find the ground state wave function for the case that the lowest eigenvalue is less than zero energy. Particle in Expanding Box (Michigan State, MIT, Stony Brook) A particle of mass m is contained in a one-dimensional impenetrable box extending from The particle is in its ground state. a) Find the eigenfunctions of the ground state and the first excited state. b) The walls of the box are moved outward instantaneously to form a box extending from Calculate the probability that the particle will stay in the ground state during this sudden expansion. c) Calculate the probability that the particle jumps from the initial ground state to the first excited final state. One-Dimensional Coulomb Potential (Princeton) An electron moves in one dimension and is confined to the right half-space where it has a potential energy where e is the charge on an electron. This is the image potential of an electron outside a perfect conductor. a) Find the ground state energy. b) Find the expectation value in the ground state Two Electrons in a Box (MIT) Two electrons are confined in one dimension to a box of length A clever experimentalist has arranged that both electrons have the same spin state. Ignore the Coulomb interaction between electrons. a) Write the ground state wave function for the two-electron b) What is the probability that both electrons are found in the same half of the box? Square Well (MIT) A particle of mass is confined to a space in one dimension by infinitely high walls at the particle is initially in the left half of the well with constant probability a) Find the time-dependent wave function b) What is the probability that the particle is in the nth eigenstate? c) Write an expression for the average value of the particle energy. Given the Eigenfunction (Boston, MIT) A particle of mass moves in one dimension. It is remarked that the exact eigenfunction for the ground state is where is a constant and A is the normalization constant. Assuming that the potential vanishes at infinity, derive the ground state eigenvalue Combined Potential (Tennessee) A particle of mass is confined to in one dimension by the potential and are constants. Assuming there is a bound state, derive the exact ground state energy. Harmonic Oscillator 5.14 Given a Gaussian (MIT) A particle of mass is coupled to a simple harmonic oscillator in one dimension. The oscillator has frequency and distance constant At time the particle’s wave function is given by The constant is unrelated to any other parameters. What is the probability that a measurement of energy at finds the value of Harmonic Oscillator ABCs (Stony Brook) Consider the harmonic oscillator given by that if is an eigenstate of with eigenvalue are also eigenstates of N with eigenvalues e) Define such that What is the energy eigenvalue of f) How can one construct other eigenstates of H starting from g) What is the energy spectrum of H? Are negative eigenvalues possible? 5.16 Number States (Stony Brook) Consider the quantum mechanical Hamiltonian for a harmonic oscillator with frequency and define the operators a) Suppose we define a state to obey Show that the states are eigenstates of the number operator, with eigenvalue n: b) Show that is also an eigenstate of the Hamiltonian and compute its energy. Hint: You may assume c) Using the above operators, evaluate the expectation value in terms of Coupled Oscillators (MIT) Two identical harmonic oscillators in one dimension each have mass Let the two oscillators be coupled by an interaction term where C is a constant and are the coordinates of the two oscillators. Find the exact spectrum of eigenvalues for this coupled system. Time-Dependent Harmonic Oscillator I Consider a simple harmonic oscillator in one dimension: the wave function is is the exact eigenstate of the harmonic oscillator with eigen- a) Give b) What is the parity of this state? Does it change with time? c) What is the average value of the energy for this state? Does it change with time? Time-Dependent Harmonic Oscillator II (Michigan State) Consider a simple harmonic oscillator in one dimension. Introduce the raising and lowering operators, and respectively. The Hamiltonian H and wave function at denotes the eigenfunction of energy a) What is wave function at positive times? b) What is the expectation value for the energy? c) The position can be represented in operators by is a constant. Derive an expression for the expectation of the time-dependent position You may need operator expressions such as a Switched-on Field (MIT) Consider a simple harmonic oscillator in one dimension with the usual a) The eigenfunction of the ground state can be written as Determine the constants N and b) What is the eigenvalue of the ground state? c) At time an electric field is switched on, adding a perturbation of the form What is the new ground state energy? d) Assuming that the field is switched on in a time much faster than what is the probability that the particle stays in the ground state? Cut the Spring! (MIT) A particle is allowed to move in one dimension. It is initially coupled to two identical harmonic springs, each with spring constant K. The springs are symmetrically fixed to the points so that when the particle is at the classical force on it is zero. a) What are the eigenvalues of the particle while it is connected to both b) What is the wave function in the ground state? c) One spring is suddenly cut, leaving the particle bound to only the other one. If the particle is in the ground state before the spring is cut, what is the probability it is still in the ground state after the spring is cut? Angular Momentum and Spin Given Another Eigenfunction (Stony Brook) A nonrelativistic particle of mass which vanishes at eigenstate is where C and moves in a three-dimensional central We are given that an exact are constants. a) What is the angular momentum of this state? b) What is the energy? c) What is Algebra of Angular Momentum (Stony Brook) Given the commutator algebra a) Show that b) Derive the spectrum of commutes with from the commutation relations. Triplet Square Well (Stony Brook) Consider a two-electron system in one dimension, where both electrons have spins aligned in the same direction (say, up). They interact only through the attractive square well in relative coordinates What is the lowest energy of the two-electron state? Assume the total momentum is zero. Dipolar Interactions (Stony Brook) Two spin-1/2 particles are separated by a distance through the magnetic dipole energy and interact only is the magnetic moment of spin The system of two spins consists of eigenstates of the total spin and total a) Write the Hamiltonian in terms of spin operators. b) Write the Hamiltonian in terms of c) Give the eigenvalues for all states. Potential (MIT) Consider two identical particles of mass through the potential and spin 1/2. They interact only are Pauli spin matrices which operate on the spin of a) Construct the spin eigenfunctions for the two particle states. What is the expectation value of V for each of these states? b) Give the eigenvalues of all of the bound states. Three Spins (Stony Brook) Consider three particles of spin 1/2 which have no motion. The raising and lowering operators of the individual spins have the property where the arrows indicate the spin orientation with regard to the a) Write explicit wave functions for the four b) Using the definition that construct the 4 × 4 matrices which represent the c) Construct the 4 × 4 matrices which represent d) Construct from the value of the matrix Constant Matrix Perturbation (Stony Brook) Consider a system described by a Hamiltonian and G are positive. a) Find the eigenvalues and eigenvectors of this Hamiltonian. b) Consider the two states the system is in state later time it is in state Derive the probability that at any Rotating Spin (Maryland, MIT) A spin-1/2 particle interacts with a magnetic field through the Pauli interaction where is the magnetic moment and are the Pauli spin matrices. At a measurement determines that the spin is pointing along the positive What is the probability that it will be pointing along the negative at a later time Nuclear Magnetic Resonance (Princeton, Stony A spin-1/2 nucleus is placed in a large magnetic field in the An oscillating field of radio frequency is applied in the so the total magnetic field is The Hamiltonian is is the magnetic moment. Use the a) If the nucleus is initially pointing in the is the probability that it points in the b) Discuss why most NMR experiments adjust at later times? so that Variational Calculations Anharmonic Oscillator (Tennessee) Use variational methods in one dimension to estimate the ground state energy of a particle of mass in a potential Linear Potential I (Tennessee) A particle of mass is bound in one dimension by the potential where F is a constant. Use variational methods to estimate the energy of the ground state. Linear Potential II (MIT, Tennessee) A particle of mass a potential energy moves in one dimension in the right half-space. It has given by where F is a positive real constant. Use variational methods to obtain an estimate for the ground state energy. How does the wave function behave in the limits Return of Combined Potential (Tennessee) A particle of mass moves in one dimension according to the potential are both constants. a) Show that the wave function must vanish at so that a particle on the right of the origin never gets to the left. b) Use variational methods to estimate the energy of the ground state. Quartic in Three Dimensions (Tennessee) A particle of mass is bound in three dimensions by the quartic potential Use variational methods to estimate the energy of the ground Halved Harmonic Oscillator (Stony Brook, Chicago (b), Princeton (b)) Consider a particle of mass in a potential moving in one dimension (see Figure P.5.36) a) Using the normalized trial function find the value of which minimizes the ground state energy and the resulting estimate of the ground state energy. How is this value related to the true ground state energy? b) What is the exact ground state wave function and energy for this system (neglect the normalization of the wave function)? Do not solve the Schrödinger equation directly. Rather, state the answer and justify it. Hint: You may need the integral Helium Atom (Tennessee) Do a variational calculation to estimate the ground state energy of the electrons in the helium atom. The Hamiltonian for two electrons, assuming the nucleus is fixed, is Assume a wave function of the form is the Bohr radius, is the variational parameter, and spin state of the two electrons. is the Perturbation Theory Momentum Perturbation (Princeton) A particle of mass moves in one dimension according to the Hamiltonian All eigenfunctions and eigenvalues a term to the Hamiltonian, where momentum operator: are known. Suppose we add are constants and is the Derive an expression for the eigenvalues and eigenstates of the new Hamiltonian H. Ramp in Square Well (Colorado) A particle of mass is bound in a square well where a) What are the energy and eigenfunction of the ground state? b) A small perturbation is added, Use perturbation theory to calculate the change in the ground state energy to Circle with Field (Colorado, Michigan State) A particle with charge e and mass is confined to move on the circumference of a circle of radius The only term in the Hamiltonian is the kinetic energy, so the eigenfunctions and eigenvalues are where is the angle around the circle. An electric field is imposed in the plane of the circle. Find the perturbed energy levels up to Rotator in Field (Stony Brook) Consider a rigid body with moment of inertia I, which is constrained to rotate in the and whose motion is given by the Schrödinger equation a) Find the eigenfunctions and eigenvalues. b) Assume the rotator has a fixed dipole moment p in the plane. An electric field is applied to the plane. Find the changes in the energy levels to first and second order in the field. Finite Size of Nucleus (Maryland, Michigan State, Princeton, Stony Brook) Regard the nucleus of charge Z as a sphere of radius with a uniform charge density. Assume that is the Bohr radius of the hydrogen atom. a) Derive an expression for the electrostatic potential between the nucleus and the electrons in the atom. If is the potential from a point charge, find the difference due to the size of the nucleus. b) Assume one electron is bound to the nucleus in the lowest bound state. What is its wave function when calculated using the potential from a point nucleus? c) Use first-order perturbation theory to derive an expression for the change in the ground state energy of the electron due to the finite size of the nucleus. U and Perturbation (Princeton) A particle is moving in the three-dimensional harmonic oscillator with potential energy A weak perturbation is applied: The same small constant U occurs in both terms of Use perturbation theory to calculate the change in the ground state energy to order Relativistic Oscillator (MIT, Moscow Phys-Tech, Stony Brook (a)) Consider a spinless particle in a one-dimensional harmonic oscillator potential: a) Calculate leading relativistic corrections to the ground state to first order in perturbation theory. b) Consider an anharmonic classical oscillator with For what values of will the leading corrections be the same as in Spin Interaction (Princeton) Consider a spin-1/2 particle which is bound in a three-dimensional harmonic oscillator with frequency The ground state Hamiltonian spin interaction are where is a constant and are the Pauli matrices. Neglect the spin–orbit interaction. Use perturbation theory to calculate the change in the ground state energy to order Spin–Orbit Interaction (Princeton) Consider in three dimensions an electron in a harmonic oscillator potential which is perturbed by the spin–orbit interaction a) What are the eigenvalues of the ground state and the lowest excited states of the three-dimensional harmonic oscillator? b) Use perturbation theory to estimate how these eigenvalues are altered by the spin–orbit interaction. Interacting Electrons (MIT) Consider two electrons bound to a proton by Coulomb interaction. Neglect the Coulomb repulsion between the two electrons. a) What are the ground state energy and wave function for this system? b) Consider that a weak potential exists between the two electrons of the form is a constant and is the spin operator for electron (neglect the spin–orbit interaction). Use first-order perturbation theory to estimate how this potential alters the ground state energy. Stark Effect in Hydrogen (Tennessee) Consider a single electron in the state of the hydrogen atom. We ignore relativistic corrections, so the and states are initially degenerate. Then we impose a small static electric field Use perturbation theory to derive how the energy levels are changed to lowest order in powers of Hydrogen with Electric and Magnetic Fields (MIT) Consider an electron in the relativistic corrections, so the state of the hydrogen atom. We ignore states are initially degenerate. Then we impose two simultaneous perturbations: an electric field and a magnetic field which is given by the vector potential Ignore the magnetic moment of the electron. Calculate how the states are altered by these simultaneous Hydrogen in Capacitor (Maryland, Michigan A hydrogen atom in its ground state is placed between the parallel plates of a capacitor. For times t < 0, no voltage is applied. Starting at electric field is applied, where is a constant. Derive the formula for the probability that the electron ends up in state due to this perturbation. Evaluate the result for a) a b) a Harmonic Oscillator in Field (Maryland, Michigan State) A particle of mass and charge moves in one dimension. It is attached to a spring of constant and is initially in the ground state of the harmonic oscillator. An electric field is switched on during the interval where is a constant. a) What is the probability of the particle ending up in the b) What is the probability of the particle ending up in the of Tritium (Michigan State) Tritium is an isotope of hydrogen with one proton and two neutrons. A hydrogen-like atom is formed with an electron bound to the tritium nucleus. The tritium nucleus undergoes decay, and the nucleus changes its charge state suddenly to and becomes an isotope of helium. If the electron is initially in the ground state in the tritium atom, what is the probability that the electron remains in the ground state after the sudden -decay? Bouncing Ball (Moscow Phys-Tech, Chicago) A ball of mass acted on by uniform gravity (let be the acceleration of gravity) bounces up and down elastically off a floor. Take the floor to be at the zero of potential energy. Working in the WKB approximation, compute the quantized energy levels of the bouncing ball system. Truncated Harmonic Oscillator (Tennessee) A truncated harmonic oscillator in one dimension has the potential a) Use WKB to estimate the energies of the bound states. b) Find the condition that there is only one bound state: it should depend on Stretched Harmonic Oscillator (Tennessee) Use WKB in one dimension to calculate the eigenvalues of a particle of in the following potential (see Figure P.5.55): Ramp Potential (Tennessee) Use WKB in one dimension to find the eigenvalues of a particle of mass in the potential Charge and Plane (Stony Brook) A particle moving in one dimension feels the potential (This potential would be appropriate for an electron moving in the presence of a uniformly charged sheet where C is the transparency of the sheet.) a) Using the WKB approximation, find the energy spectrum, this one-dimensional problem for all for b) Find the energy spectrum, for even wave funcc) Derive an equation that describes the energies tions for an arbitrary value of C. What can you say about the energies for odd wave functions? Ramp Phase Shift (Tennessee) Use WKB to calculate the phase shift in one dimension of a particle of mass confined by the ramp potential Parabolic Phase Shift (Tennessee) Use WKB to calculate the phase shift in one dimension of a particle of mass confined by the parabolic potential Phase Shift for Inverse Quadratic (Tennessee) A particle of mass with the potential moves in one dimension in the right half-space where the dimensionless constant determines the strength of the potential. Use WKB to calculate the phase shift as a function of energy. Scattering Theory 5.61 Step-Down Potential (Michigan State, MIT) A particle of mass obeys a Schrödinger equation with a potential the potential is higher on the left of zero than on the right. Find the reflection coefficient for a particle coming in from the left with momentum (see Figure P.5.61). Step-Up Potential (Wisconsin-Madison) Consider a particle scattering in one dimension from a potential is a simple step at A particle with kinetic energy left (see Figure P.5.62). is incident from the a) Find the intensity of the reflected (R) and transmitted (T) waves. b) Find the currents and the sum of the reflected and transmitted waves. Repulsive Square Well (Colorado) Consider in three dimensions a repulsive of width The potential is A particle of energy (see Figure P.5.63). square well at the origin is incident upon the square well a) Derive the phase shift for b) How does the phase shift behave as c) Derive the total cross section in the limit of zero energy. 3D Delta Function (Princeton) Consider a particle of mass which scatters in three dimension from a potential which is a shell at radius Derive the exact expression for the scattering cross section in the limit of very low particle energy. Two-Delta-Function Scattering (Princeton) A free particle of mass traveling with momentum parallel to the scatters off the potential Calculate the differential scattering cross section, in the Born approximation. Does this approximation provide a reasonable description for scattering from this potential? In other words, is it valid to use unperturbed wave functions in the scattering amplitude? Scattering of Two Electrons (Princeton) Two electrons scatter in a screened environment where the effective potential is where is a constant. Consider both electrons in the center-of-mass frame, where both electrons have energy This energy is much larger than a Rydberg but much less than so use nonrelativistic kinematics. Derive an approximate differential cross section for scattering through an angle when the two electrons are a) in a total spin state of S = 0, b) in a total spin state of S =1. Spin-Dependent Potentials (Princeton) Consider the nonrelativistic scattering of an electron of mass and momentum through an angle Calculate the differential cross section in the Born approximation for the spin-dependent potential are the Pauli spin matrices and are constants. Assume the initial spin is polarized along the incident direction, and sum over all final spins. (Note: Ignore that the potential violates parity.) Rayleigh Scattering (Tennessee) Rayleigh scattering is the elastic scattering of photons. Assume there is a matrix element which describes the scattering from to has the dimensions of a) Derive an expression for the differential cross section Rayleigh scattering. Ignore the photon polarization. the specific form for the matrix element is the polarizability tensor and are the polarization vectors of the photons. What is the result if the initial photons are unpolarized and the final photon polarizations are summed over? Assume the polarizability is isotropic: where is the unit tensor. Scattering from Neutral Charge Distribution Consider the nonrelativistic scattering of a particle of mass m and charge e from a fixed distribution of charge Assume that the charge distribution is neutral: it is spherically symmetric; and the second moment, is defined as a) Use the Born approximation to derive the differential cross section for the scattering of a particle of wave vector k. expression for forward scattering is for a neutral hydrogen atom in its ground state. Calculate A in this case. Neglect exchange effects and assume that the target does not recoil. Spherical Box with Hole (Stony Brook) A particle is confined to a spherical box of radius R. There is a barrier in the center of the box, which excludes the particle from a radius the particle is confined to the region Assume that the wave function vanishes at both and derive an expression for the eigenvalues and eigenfunctions of states with angular momentum Attractive Delta Function in 3D (Princeton) A particle moves in three dimensions. The only potential is an attractive delta function at of the form where D is a parameter which determines the strength of the potential. a) What are the matching conditions at for the wave function and its derivative? b) For what values of D do bound states exist for Ionizing Deuterium (Wisconsin-Madison) The hydrogen atom has an ionization energy of when an electron is bound to a proton. Calculate the ionization energy of deuterium: an electron bound to a deuteron. Give your answer as the difference between the binding energy of deuterium and hydrogen deuteron has unit charge. The three masses are, in atomic mass units, Collapsed Star (Stanford) In a very simple model of a collapsed star a large number nucleons (N neutrons and protons) and electrons (to ensure electric neutrality) are placed in a one-dimensional box (i.e., an infinite square well) of length The neutron and proton have equal mass the electron has mass Assume the nucleon number density is neglect all interactions between the particles in the well, and approximate a) Which particle species are relativistic? b) Calculate the ground state energy of the system as a function of for all possible configurations with fixed A. c) What value of (assumed small) minimizes the total energy of the Electron in Magnetic Field (Stony Brook, Moscow Phys-Tech) An electron is in free space except for a constant magnetic field B in the a) Show that the magnetic field can be represented by the vector potential b) Use this vector potential to derive the exact eigenfunctions and eigenvalues for the electron. Electric and Magnetic Fields (Princeton) Consider a particle of mass m and charge e which is in perpendicular electric and magnetic fields: a) Write the Hamiltonian, using a convenient gauge for the vector potential. b) Find the eigenfunctions and eigenvalues. c) Find the average velocity in the for any eigenstate. Josephson Junction (Boston) Consider superconducting metals I and II separated by a very thin insulating layer, such that that electron wave functions can overlap between the metals (Josephson junction). A battery V is connected across the junction to ensure an average charge neutrality (see Figure P.5.76). This situation can be described by means of the coupled Schrödinger equations: are the probability amplitudes for an electron in I and are the electric potential energies in I and II, K is the coupling constant due to the insulating layer, and describe the battery as a source of electrons. a) Show that are constant in time. b) Assuming (same metals) and expressing the probability amplitudes in the form find the differential equations for c) Show that the battery current oscillates, and find the frequency of these oscillations. This page intentionally left blank and Statistical Introductory Thermodynamics 4.1 Why Bother? (Moscow Phys-Tech) The physicist is right in saying that the total energy of the molecules in the room cannot be changed. Indeed, the total energy of an ideal gas is where N is the number of molecules, is the heat capacity at constant volume per particle, and is the absolute temperature in energy units. In these units, Since the pressure P in the room stays the same (as does the volume V) and equal to the outside air pressure, we have So, the total energy of the gas does not change. However, the average energy of each molecule does, of course, increase, and that is what defines the temperature (and part of the comfort level of the occupants). At the same time, the total number of molecules in the room decreases. In essence, we burn wood to force some of the molecules to shiver outside the room (this problem was first discussed in Nature 141, 908 (1938)). Space Station Pressure (MIT) The rotation of the station around its axis is equivalent to the appearance of an energy is the mass of an air particle and R is the distance from the center. Therefore, the particle number density satisfies the Boltzmann distribution (similar to the Boltzmann distribution in a gravitational field): is the number density at the center and is the temperature in energy units. The pressure is related to the number density simply by So, at constant temperature, Using the condition that the acceleration at the rim is we have Baron von Münchausen and Intergalactic Travel (Moscow Phys-Tech) The general statement that a closed system cannot accelerate as a whole in the absence of external forces is not usually persuasive to determined inventors. In this case, he would make the point that the force on the rope is real. To get an estimate of this force, assume that the balloon is just above the surface of the Earth and that the density of air is approximately constant to 2 km. Archimedes tells us that the force on the rope will equal the weight of the air, mass excluded by the empty balloon (given a massless balloon material). We then may use the ideal gas law to find the force where we approximate the acceleration due to gravity as constant up to 2 km; i.e., However, there will be no force acting on the Earth. The system (Earth + surrounding air) is no longer symmetric (see Figure S.4.3a). The symmetric system would be the one with no air on the opposite side of the Earth (see Figure S.4.3b). Therefore, there will be a force between this additional air, which can be treated as a “negative” mass, and the Earth (see Figure S.4.3c): are the mass and radius of the Earth, respectively, and G is the gravitational constant. So, the Archimedes force is completely canceled by the gravitational force from the air. Perhaps that is why the Baron shelved his idea. Railway Tanker (Moscow Phys-Tech) The new equilibrium pressure of the gas will be the same throughout the tanker, whereas the temperature across its length will vary: higher at the heated wall, and cooler at other end (see Problem 4.5). Expanding the temperature T along the length of the tanker in a Taylor series and keeping the first two terms (since the temperature difference between the walls is small compared to we have We may write the ideal gas law as a function of position in the tanker: is the gas concentration. Rearranging, we have The total number N of molecules in the cylinder is given by where A is the cross-sectional area of the tanker. Alternatively, we can integrate (S.4.4.3) exactly and expand the resulting logarithm, which yields the same result. The total number of molecules originally in the tank is Since the total number of molecules in the gas before and after heating is the same, (no phase transitions), we may equate (S.4.4.4) and (S.4.4.5), yielding The center of mass (inertia) given by of the gas found with the same accuracy is As we have assumed that the tanker slides on frictionless rails, the center of mass of the system will not move but the center of the tanker will move by an amount such that Substituting (S.4.4.7) into (S.4.4.8) and rearranging give Magic Carpet (Moscow Phys-Tech) First let us try to reproduce the line of reasoning the Baron was likely to follow. He must have argued that in the z direction the average velocity of a molecule of mass is If we consider that during the collision the molecules thermalize, then the average velocities after reflection from the upper and lower surfaces become The forces due to the striking of the molecules on the upper and lower surfaces are, respectively, (see Figure S.4.5): where is the concentration of the air molecules, and we have used the fact that the number of molecules colliding with 1 of the surface per second is approximately (the exact number is see Problem 4.14). The net resulting force Substituting for we have Unfortunately, this estimate is totally wrong since it assumes that the concentration of molecules is the same above and below the panel, whereas it would be higher near the cold surface and lower near the hot surface (see Problem 4.4) to ensure the same pressure above and below. That’s why irons don’t fly. Teacup Engine (Princeton, Moscow Phys-Tech) If the cup were vacuum tight, the number of molecules leaving the surface would be the same as the number of molecules returning to the surface. The mass flow rate of the molecules hitting the surface (see Problem 4.14) is the vapor density corresponding to the saturation, is the average velocity of the molecules, and A is the surface area of the ice. The mass flow rate of the molecules actually returning to the surface is where is the sticking coefficient (the probability that the molecule hitting the surface will stick to it). Let us assume for now that (we will see later that this is not true, but that actually gives us the lower limit of the distance). If the cup is open we can assume that the number of molecules leaving the surface is the same as in the closed cup, but there are few returning molecules. We then find that the time for complete evaporation of the ice is where we take g as the mass of the ice, from Problem 4.13. Substituting (S.4.6.4) into (S.4.6.3), we obtain Once again using the ideal gas law, we may obtain Substituting (S.4.6.6) into (S.4.6.5) yields During the sublimation of the ice, the acceleration of the astronaut is corresponds to the momentum transferred by the molecules leaving the surface. Using the time calculated above, he will cover a Note that this is the lower limit because we assumed and that all the molecules that are leaving go to infinity. So, it seems that the astronaut can cover the distance to the ship by using his cup as an engine. Moreover, the sticking coefficient which is often assumed to be close to unity, could be much smaller (for water, at 0°C). That explains why the water in a cup left in a room does not evaporate in a matter of minutes but rather in a few hours. For a detailed discussion see E. Mortensen and H. Eyring, J. Phys. Chem. 64, 846 (1960). The physical reason for such a small sticking coefficient in water is based on the fact that in the liquid phase the rotational degrees of freedom are hindered, leading to a smaller rotational partition function. So, the molecules whose rotation cannot pass adiabatically into the liquid will be rejected and reflect back into the gaseous phase. These effects are especially strong in asymmetric polar molecules (such as water). The actual time the teacup engine will be working is significantly longer (about 30 times, if we assume that the sticking coefficient for ice is about the same as for water at Grand Lunar Canals (Moscow Phys-Tech) Consider the atmosphere to be isothermal inside the channel. The pressure depends only on the distance from the center of the moon (see Figure S.4.7), and as in Problem 4.19 we have The acceleration of gravity (see also Problem 1.10, Part I) where M is the mass of the Moon and is the average density of the Moon (which we consider to be uniform). Therefore, where we have set Now, from (S.4.7.2) and (S.4.7.4), we have is the pressure on the surface of the Moon. which implies that it is not impossible to have such cavities inside the Moon filled with gas (to say nothing of the presence of lunars). Frozen Solid (Moscow Phys-Tech) If the ice does not freeze too fast (which is usually the case with lakes), we can assume that the temperature is distributed linearly across the ice. Suppose that the thickness of the ice at a time is Then the heat balance can be written in the form is the melting temperature of ice. The left side represents the flow of heat through one square meter of ice surface due to the temperature gradient, and the right side the amount of heat needed to melt (freeze) an amount of ice Integrating (S.4.8.1), we obtain and are integration constants. If we assume that there is no ice and we find that the time to freeze solid is Tea in Thermos (Moscow Phys-Tech) There are two main sources of power dissipation: radiation from the walls of the thermos and thermal conductance of the air between the walls. Let us first estimate the radiative loss. The power radiated from the hotter inner wall minus the power absorbed from the outer wall is given by (see Problem 4.73) where T is the temperature of the tea, Stefan–Boltzmann constant is room temperature, and the The power dissipation due to the thermal conductivity of the air can be estimated from the fact that, at that pressure, the mean free path of the air molecules is about Therefore, there are very few collisions between the molecules that travel from one wall of the thermos to the other. We can assume that we are in the Knudsen regime of ( is the distance between the walls). In this regime the thermal conductance is proportional to the pressure (if it is independent of the pressure). Let us assume that after a molecule strikes the wall, it acquires the temperature of the wall. Initially after it hits the wall, a molecule will take away the energy where we can take for air the inner wall per time interval dt is The number of molecules striking where is the concentration of molecules and is their average velocity (see Problem 4.14). The power due to the thermal conductance is We can substitute Then (S.4.9.4) becomes So, we can see that radiation loss has about the same order of magnitude as thermal conductance at these parameters. Therefore the properties of the thermos can only be improved significantly by decreasing both the emissivity and the residual pressure between the walls. The energy dissipated is equal to the energy change of the mass of the tea: where we used for an estimate the fact that T does not change significantly Then the time for the tea to cool from the initial temperature to the final temperature is given by Heat Loss (Moscow Phys-Tech) min be the time the heater is operating. The energy added to the water and bowl will heat the water as well as the environment. We will assume that the heat loss to the surroundings is proportional to the elapsed time and that the changing temperature difference between the water and room temperature During this phase, we may The heat loss is actually a time integral of some proportionality constant times the temperature difference as the water heats up. However, only varies by out of an average so we will ignore the variation. The heat loss during the second phase is given by Just as in the heating phase, the heat loss is proportional to the elapsed Since again only changes a little, we have We may now eliminate from (S.4.10.1), yielding Liquid–Solid–Liquid (Moscow Phys-Tech) a) Since the evaporation is very rapid, the heat to vaporize can only be obtained from the heat of fusion. Therefore, if of water becomes solid vaporizes, we may write Since the total mass we have If we continue pumping, the ice would, of course, gradually sublimate, but this process takes much longer, so we can neglect it. b) The metal cools from its initial temperature by transferring heat melt some ice: is the temperature change. This may be determined from the sample’s density before it was placed in the calorimeter. Using the thermal coefficient of volume expansion where we have The temperature difference may be found from (S.4.11.4) Equating the amount of heat required to melt a mass heat available in the metal, we have of ice with the This mass exceeds the amount of ice from part (a), so all of it would melt. Hydrogen Rocket (Moscow Phys-Tech) Find the amount of water vapor produced in the reaction One mole of hydrogen yields one mole of water, or in mass Since is the mass of fuel intake per second, is the mass of water ejected from the engine per second. If the water vapor density is rate may be expressed as is the velocity of the gas ejected from the engine. Therefore, Express the density as From (S.4.12.4) and (S.4.12.5), we then have The mass ejected per second from the engine provides the momentum per which will be equal to the force supplied by the engine. Apart from this reactive force, there is a static pressure from the engine providing a force so the total force In real life the second term is usually small (P is not very high), so the force by an engine is determined by the reactive force. Maxwell–Boltzmann Averages (MIT) a) We may write the unnormalized Maxwell–Boltzmann distribution immediately as We would like to write (S.4.13.1) as so we must integrate over all velocities in order to find the proper normalization: Rewriting (S.4.13.2) in spherical coordinates we have A variety of problems contain the definite integral (S.4.13.3) and its variations. A particularly easy way to derive it is to start by writing the integral Now multiply I by itself, replacing Rewriting (S.4.13.5) in polar coordinates gives where we have substituted Integrating instead from 0 to in (S.4.13.6). So we have then gives The integral required here may be found by differentiating (S.4.13.7) once with respect to Using (S.4.13.8) in (S.4.13.3), where b) The most likely speed occurs when (S.4.13.11) is a maximum. This may be found by setting its derivative or, simply the derivative of In equal to 0: c) The average speed is given by d) The mean square speed of the atoms may be found immediately by recalling the equipartition theorem (see Problem 4.42) and using the fact that there is energy per degree of freedom. So For completeness, though, the integral may be shown: Slowly Leaking Box (Moscow Phys-Tech, Stony Brook (a,b)) a) The number of atoms per unit volume moving in the direction normal to the wall (in spherical coordinates) is where is the azimuth angle, is the polar angle, is the number density of atoms, and is the speed distribution function (Maxwellian). To determine the number of atoms striking the area of the hole A on the wall per time dt, we have to multiply (S.4.14.1) by dt (only the atoms within a distance dt reach the wall). To obtain the total atomic flow rate R through the hole, we have to integrate the following expression: We integrate from 0 to since we only consider the atoms moving toward the wall. On the other hand, by definition, the average velocity is given Comparing (S.4.14.2) and (S.4.14.3), we see that This result applies for any type of distribution function We only consider a flow from the inside to the outside of the container. Since the hole is small, we can assume that the distribution function of the atoms inside the container does not change appreciably. b) The average kinetic energy of the atoms leaving the container should be somewhat higher than the average kinetic energy in the container because faster atoms strike the wall more often than the ones moving more slowly. So, the faster atoms leave the container at a higher rate. Let us compute this energy For a Maxwellian distribution we have where C is a normalizing constant: The numerator is the total energy of the atoms leaving the container per second, and the denominator is the total number of atoms leaving the container per second. Define From part (a), we can express this integral in terms of the average velocity Then we have We know that 4.13), and (since it is a normalizing factor, see Problem is indeed higher than the average energy of the atoms: c) From (b) we know that each atom leaving the container takes with it an additional energy The flow rate of the atoms leaving the container (from (a)) is The energy flow rate from the container becomes To keep the temperature of the atoms inside the container constant, we have to transfer some heat to it at the same rate: Equating the flow rate to the decrease of the number of atoms inside gives We then obtain Solving this differential equation, we can find the change in number density: is the time constant and heat flow rate is is the initial number density. Therefore, the Surface Contamination (Wisconsin-Madison) The number of molecules striking a unit area of the surface N during the time of the experiment (see Problem 4.14) is given by For an estimate we can assume that the adsorbed molecules are closely packed and that the number of adsorption sites on a surface of area A is where d is the average diameter of the adsorbed atoms, and we take The total number of adsorption sites may actually be smaller (these data can be obtained from the time to create one monolayer at lower pressure). We may write or, for 1 of surface, Using the average velocity from Problem 4.13 at K gives So, we will have to maintain a pressure better than Torr, which can be quite a technical challenge. In fact, at such low pressures the residual gas composition is somewhat different from room air, since it may be more difficult to pump gases such as and He. Therefore, (S.4.15.3) and (S.4.15.5) are only order-of-magnitude estimates. Bell Jar (Moscow Phys-Tech) The pressure inside the vessel where is the concentration of the molecules inside the vessel and is the concentration of the molecules in the chamber. Disregarding the thickness of the walls of the vessel, we can write the condition that the number of molecules entering the vessel per second is equal to the number of molecules leaving it: where A is the area of the hole, and we used the result of Problem 4.14 for the number of molecules striking a unit area per second. Actually, the only important point here is that this number is proportional to the product of concentration and average velocity. Therefore, The average velocity So, from (S.4.16.3), we have Substituting (S.4.16.4) into (S.4.16.1), we obtain Hole in Wall (Princeton) a) If the diameter of the hole is large compared to the mean free path in both gases, we have regular hydrodynamic flow of molecules in which the pressures are the same in both parts. If the diameter of the hole (and thickness of the partition) is small compared to the mean free path, there are practically no collisions within the hole and the molecules thermalize far from the hole (usually through collisions with the walls). b) In case there are two independent effusive flows from I to II and from II to I. The number of particles going through the hole from parts I and II are, respectively (see Problem 4.14), is the area of the hole. At equilibrium, so we The mean free path is related to the volume concentration of the molecules by the formula where is the effective cross section of the molecule, which depends only on the type of molecule, helium in both halves. Substituting (S.4.17.3) into (S.4.17.2) gives c) When we have to satisfy the condition which gives for the ratio of the mean free paths: Ballast Volume Pressure (Moscow Phys-Tech) The number of molecules per second entering the volume B from the left container I is proportional to the density of the molecules in I, average velocity, and the area of the opening, A. The constant of proportionality (see Problem 4.14) is unimportant for this problem. So, equating the rate of molecular flow in and out of volume B, we can write for flow rates in equilibrium (see Figure S.4.18) The factor 2 appears for the flow rate since there are two portals from region B. On the other hand, for an ideal gas, and therefore We can rewrite (S.4.18.1) as For the state of equilibrium, the energy in the volume B is constant. This means that the total rate of energy transfer out of volumes I and II should be equal to the rate of energy transfer out of volume B : The average energy per particle is proportional to the temperature, so (S.4.18.6) becomes We then have Dividing (S.4.18.8) by (S.4.18.5), we can obtain 4.19 Rocket in Drag (Princeton) a) Use dimensional analysis to derive the drag force F on the rocket: We then have This formula is generally correct for high Reynolds numbers; for low Reynolds numbers we have Stokes’ law: is the viscosity and is the radius. b) For an isothermal atmosphere, take a column of air of height and area A. The pressure difference between top and bottom should compensate the weight of the column: where is the molar weight of the air, and substituting (S.4.19.6) into (S.4.19.5), we obtain c) At a height where we used is defined by we have, from (S.4.19.3), for uniform acceleration. Now, the maximum force So, assuming that the average temperature for the isothermal atmosphere we find Adiabatic Atmosphere (Boston, Maryland) a) Starting from the ideal gas law, we can express the temperature T as a function of pressure P and the mass density where P and are functions of the height above the surface of the Earth: Taking the derivative of T with respect to we have We need to express in terms of dP. The fact that of altitude allows us to write is independent where B is some constant. So Substituting (S.4.20.3) into (S.4.20.2), we obtain Assuming that the acceleration of gravity is constant, using the hydrostatic pressure formula and substituting (S.4.20.5) into (S.4.20.4), we can write b) For the atmosphere, using diatomic molecules with we have from (S.4.20.6), This value of is about a factor of 2 larger than that for the actual Atmospheric Energy (Rutgers) a) Again starting with the ideal gas law we have b) The gravitational energy of a slice of atmosphere of cross section A and at a height is simply while the internal energy of the same slice is The total internal energy is given by the integral of (S.4.21.3): We wish to change the integral over into an integral over P. To do so, first consider the forces on the slice of atmosphere: Rearranging (S.4.21.5), we have Substituting (S.4.21.6) into (S.4.21.4), we obtain The total gravitational energy may be found by integrating (S.4.21.2): Integrating by parts gives The first term on the RHS of (S.4.21.9) is zero since at the limits of evaluation either so we have The ratio of energies from (S.4.21.10) and (S.4.21.7) is Puncture (Moscow Phys-Tech) a) Use Bernoulli’s equation (see, for instance, Landau and Lifshitz, Fluid Mechanics, Chapter 5) for an arbitrary flow line with one point inside the tire and another just outside it. We then have are the enthalpy per unit mass inside and outside the vessel, respectively, and and are the velocities of the gas. The velocity is very small and can be disregarded. Then the velocity of the gas outside For an ideal gas the heat capacity does not depend on temperature, so we may write for the enthalpy Therefore, the velocity is The temperature ideal gas law: may be found from the equation for adiabats and the Rewriting gives Substituting into (S.4.22.4) gives The maximum velocity will be reached when flow into vacuum. b) For one mole of an ideal gas and, by definition, From (S.4.22.9) and (S.4.22.10), we may express through R Then (S.4.22.8) becomes For molecular hydrogen we have Note that this estimate implies that i.e., that the gas would cool to absolute zero. This is, of course, not true; several assumptions would break down long before that. The flow during expansion into vacuum is always turbulent; the gas would condense and phase-separate and therefore would cease to be ideal. The velocity of sound inside the vessel Substituting (S.4.22.14) into (S.4.22.12) yields Heat and Work 4.23 Cylinder with Massive Piston (Rutgers, Moscow When the piston is released, it will move in some, as yet unknown, direction. The gas will obey the ideal gas law at equilibrium, so On the other hand, at equilibrium, there is no net force acting on the piston (see Figure S.4.23), and we have Substituting (S.4.23.2) into (S.4.23.1) gives We can also use energy conservation in this thermally insulated system. Then the work done to the gas equals its energy change For an ideal is the heat capacity of one mole of the gas (for a monatomic gas, The work done to the gas is the distance the piston moves, where downward is positive. From (S.4.23.4) and (S.4.23.5), we have Solving (S.4.23.3) and (S.4.23.6) yields We may check that if balanced, (S.4.23.7) gives i.e., that the piston was initially 4.24 Spring Cylinder (Moscow Phys-Tech) For a thermally insulated system (no heat transfer), the energy change is given by where 0 and 1 correspond to the initial and final equilibrium states of the system, with sets of parameters respectively. In this case, the gas is expanding, therefore some positive work is done by the gas, which indicates that the energy change is negative, and the temperature for an ideal gas depends only on the change in temperature: is the heat capacity of one mole of the gas at constant volume (for a monatomic gas The work done by the gas goes into compressing the spring: where K is the spring constant and is the change of the piston position (see Figure S.4.24). On the other hand, when equilibrium is reached, the compression force of the spring where A is the cross section of the piston. So where we used the ideal gas law for one mole of gas. Substituting (S.4.24.5) into (S.4.24.3), we have Notice that is the volume change of the gas: Substituting (S.4.24.8) and (S.4.24.2) into (S.4.24.3), we obtain The temperature indeed has decreased. As for the pressure, we have for the initial state Isothermal Compression and Adiabatic Expansion of Ideal Gas (Michigan) a) We can calculate the work as an integral, using the ideal gas law: where is, as usual, the absolute temperature. Graphically, it is simply the area under the curve (see Figure S.4.25). Alternatively, we can say that the work done is equal to the change of free energy F of the system (see Problem 4.38): The total energy of the ideal gas depends only on the temperature, which is constant, so the heat absorbed by the gas is i.e., heat is rejected from the gas into the reservoir. Alternatively, since the same result as in (S.4.25.3). b) For an adiabatic expansion the entropy is conserved, so On the other hand, is the specific heat for an ideal gas at constant volume. From (S.4.25.5) and (S.4.25.6), and using the ideal gas law, we obtain the specific heat per one molecule. Integrating (S.4.25.7) c) For air we may take diatomic). Therefore, (in regular units, it is mostly Isochoric Cooling and Isobaric Expansion (Moscow Phys-Tech) The process diagram is shown in Figure S.4.26. The work W done by the gas occurs only during the leg since there is no work done during the leg. The work is given by Using the ideal gas law, we may relate since the initial and final temperatures are the same. Substituting into (S.4.26.1) we find Venting (Moscow Phys-Tech) The air surrounding the chamber may be thought of as a very large reservoir of gas at a constant pressure and temperature The process of venting is adiabatic, so we can assume that there is no energy dissipation. We then find that the energy of the gas admitted to the chamber equals the sum of its energy in the reservoir plus the work done by the gas of the reservoir at to expel the gas into the chamber. This may be calculated by considering the process of filling a cylinder by pushing a piston back, where the piston offers a resistant force of A being the cross section of the cylinder. The total energy E is then given by is the volume of the gas needed to fill the volume of the chamber V (note that V does not coincide with because the temperature of the gas in the chamber T presumably is not the same as see Figure S.4.27). On the other hand. is the heat capacity of the gas, is the heat capacity per molecule, and is the number of molecules. From (S.4.27.1) and (S.4.27.2), we have Using the ideal gas law we have The air is mostly nitrogen and oxygen (78% nitrogen and 21% oxygen diatomic gases, so that and therefore Thus, the temperature of the gas in the chamber will increase. Note that the result does not depend on the outside pressure the volume of the chamber V, or whether it is filled to Cylinder and Heat Bath (Stony Brook) a) Since the process takes place at constant temperature, PV is constant for each side of the piston. When the piston is released, we can write are the initial pressures on the left and right sides of the cylinder, respectively, P is the final pressure on both sides of the cylinder, are the final volumes. From S.4.28.1 we have So, the piston winds up 20 cm from the right wall of the cylinder. b) The energy of the ideal gas does not change in the isothermal process, so all the work done by the gas goes into heating the reservoir. Denoting by and the number of moles on the left and right sides of the cylinder, respectively, and using we obtain the total work and, hence, heat given by the integral 4.29 Heat Extraction (MIT, Wisconsin-Madison) a) For a mass of fixed volume we have So, by the definition of C, Since C is independent of T, we may rewrite (S.4.29.2) and integrate: The change in entropy is then b) The maximum heat may be extracted when the entropy remains constant. Equating the initial and final entropies yields the final temperature of the two bodies: The heat extracted, is then equal to the difference in initial and final internal energies of the bodies: c) Here we may calculate the maximum useful work by using the Carnot efficiency of a reversible heat engine operating between two reservoirs, one starting at a high temperature (100°C) and the other fixed (the lake) at 10°C. The efficiency may be written for a small heat transfer as where the heat transferred from the hot reservoir equals its change in internal energy –MC dT. We may then find the total work by integrating dW as follows: We may also use the method of part (b) and the fact that the entropy is conserved. Denote the mass of the hot water M and the lake the initial and final entropies gives Writing the final temperature T as and expanding the logarithm, we obtain is small (it’s a big lake), Substituting (S.4.29.10) back into (S.4.29.9) gives As before, the work extracted equals the change in internal energy of the bodies, so which is the same as above. Heat Capacity Ratio (Moscow Phys-Tech) If the gas is heated at constant volume, then the amount of heat ferred to the gas is where is the heat capacity by weight of the gas, is the mass, and T is the temperature at pressure Using the ideal gas law at the beginning and end of heating gives is the number of moles of the gas. From (S.4.30.1) and (S.4.30.2), For heating at constant pressure, Since the time during which the current flows through the wire is the same in both experiments, the amount of heat transferred to the gas is also the same: Equating (S.4.30.4) and (S.4.30.8), we obtain 4.31 Otto Cycle (Stony Brook) a) The efficiency of the cycle is where W is the work done by the cycle and is the amount of heat absorbed by the gas. Because the working medium returns to its initial state is the amount of heat transferred from the gas, therefore Let us calculate Since both processes are at constant volume (see Figure S.4.31), we may write We know that for an adiabatic process we find and therefore the efficiency is the efficiency is b) The work done on the gas in the compression process is L and . Joule Cycle (Stony Brook) The efficiency of the cycle is given by the work W during the cycle divided by the heat absorbed in path (see Figure S.4.32). W is defined by the area enclosed by the four paths of the P–V plot. The integral P dV along the paths of constant pressure is simply the difference in volume of the ends times the pressure, and the work along the adiabats, where there is no heat transfer is given by the change in internal energy Substituting the ideal gas law we find where we used into (S.4.32.1) and rearranging, In the process the gas absorbs the What remains is to write W and in terms of P and form the quotient. Using the equation for an adiabatic process in an alternative form, we have Substituting for The efficiency by putting (S.4.32.4) into (S.4.32.1) yields is then Diesel Cycle (Stony Brook) We calculate the efficiency the cycle (see Figure S.4.33) is as in Problem 4.32. The work W in where we have again used the ideal gas law The heat absorbed by the gas during The efficiency Using the equation for the adiabats gives The ideal gas law gives Substituting (S.4.33.4) and (S.4.33.5) into (S.4.33.3) gives Modified Joule–Thomson (Boston) The work done by the piston goes into changing the internal energy of the part of the gas of volume that enters the plug and into the work done by the gas to enter container B occupying volume So we may write where is the constant-volume heat capacity for one molecule and the number of molecules in the volume On the other hand, before and after the plug, we have, respectively, Substituting dV from (S.4.34.2) and from (S.4.34.3) into (S.4.34.1), we (S.4.34.5) becomes Ideal Gas and Classical Statistics Poisson Distribution in Ideal Gas (Colorado) The probability of finding a particular molecule in a volume V is The probability of finding N marked molecules in a volume V is Similarly, the probability of finding one particular molecule outside of the volume V is and for particular molecules outside V, Therefore, the probability of finding any molecules in a volume V is the product of the two probabilities (S.4.35.1) and (S.4.35.2) weighted by the number of combinations for such a configuration: The condition also implies that Then we may approximate So, (S.4.35.3) becomes where we used the average number of molecules in V: Noticing that, for large . we obtain where we used (S.4.35.6) can be applied to find the mean square fluctuation in an ideal gas (see Problem 4.94) when the fluctuations are not necessarily small (i.e., it is possible to have although N is always much smaller than the total number of particles in the gas Polarization of Ideal Gas (Moscow Phys-Tech) The potential energy of a dipole in an electric field E is where the angle is between the direction of the electric field (which we choose to be along the axis) and the direction of a the dipole moment. The center of the spherical coordinate system is placed at the center of the dipole. The probability that the direction of the dipole is within a solid The total electric dipole moment per unit volume of the gas is Introducing a new variable and denoting we obtain is the Langevin function. For expand (S.4.36.3) to obtain we can we have for the dielectric constant Two-Dipole Interaction (Princeton) Introduce spherical coordinates with the axis along the line of the separation between the dipoles. Then the partition function reads The potential energy of the interaction can be rewritten in the form (S.4.37.2) becomes We can expand the exponential at high temperatures and we have so that The first-order terms are all zero upon integration, where the cross term also vanishes, and we find The average force is given by where F is the free energy. So, The minus sign indicates an average attraction between the dipoles. Entropy of Ideal Gas (Princeton) a) For an ideal gas the partition function factors; however, we must take the sum of N identical molecules divided by the number of interchanges N! to account for the fact that one microscopic quantum state corresponds to a number of different points in phase space. So Now, the Helmholtz free energy, F, is given by Using Stirling’s formula, ln Using the explicit expression for the molecular energy (S.4.38.3) in the form we obtain we can rewrite Here we used the fact that the sum depends only on temperature, so we can define b) Now we can calculate the total entropy of the two gases (it is important that the gases be identical so that is the same for both vessels): where F is defined by (S.4.38.4). We have for total entropy c) After the vessels are connected their volume becomes number of particles becomes 2N, and the temperature remains the same (no work is done in mixing the two gases). So now It can be easily seen that the pressure becomes Let us show that is always nonnegative. This is equivalent to the which is always true. At which makes perfect Chemical Potential of Ideal Gas (Stony Brook) The expression for the Helmholtz free energy was derived in Problem 4.38: Since all the molecules are in the ground state, the sum only includes one term, which we can take as an energy zero, Then (S.4.39.1) becomes where we took into account a degeneracy of the ground state free energy G is then The Gibbs where we have expressed G as a function of so we obtain, from (S.4.39.3), P. The chemical potential This approximation is valid when the temperature is much lower than the energy difference between the electronic ground state and the first excited state; since this is comparable to the ionization energy this condition is equivalent to However, even at temperatures the gas is almost completely ionized (see Landau and Lifshitz, Statistical Physics, Sect. 106). Therefore (S.4.39.4) is always valid for a nonionized gas. Gas in Harmonic Well (Boston) a) The partition function is given by a standard integral (compare with 4.38, where the molecules are indistinguishable): The Helmholtz free energy F follows directly from the partition function: b) We may find the force from F: The equation of state is therefore analogous to the gas in a container with rigid walls, where c) The entropy, energy, and heat capacity all follow in quick succession from Ideal Gas in One-Dimensional Potential a) The coordinate- and momentum-dependent parts of the partition function can be separated. The coordinate-dependent part of the partition For the potential in this case we have where we substituted The free energy associated with the coordinate-dependent part of the partition function is The average potential energy is given by we have a harmonic oscillator, and in agreement with the equipartition theorem (see Problem 4.42) b) For and the average potential energy per particle which also agrees with the generalized equipartition theorem. Equipartition Theorem (Columbia, Boston) a) For both of these averages the method is identical, since the Hamiltonian depends on the same power of either or q. Compose the first average as where the energy is broken into the term and the rest of the sum. The second integrals in the numerator and denominator cancel, so the remaining expression may be written where, asusual, A change of variables produces a piece dependent on and an integral that is not: average proceeds in precisely the same way, yielding b) The heat capacity, (a), we have at constant volume is equal to From part where we now sum over the 3-space and momentum degrees of freedom per atom. The heat capacity, is the law of Dulong and Petit. c) Now take the average: Integration by parts yields where the prime on the product sign in the first term indicates that we integrate over all then the first term in the numerator equals zero. If is one of the then by the assumption of U infinite, the term still equals zero. Finally, if then by l’Hôpital’s rule the first term again gives zero. In the second term, the expression reduces to d) By definition, Given a polynomial dependence of the energy on the generalized coordinate: (S.4.42.11) yields To satisfy the equipartition theorem: Thus, we should have Diatomic Molecules in Two Dimensions a) The partition function may be calculated in the usual way by multiplying the individual Boltzmann factors by their degeneracies and summing: This is difficult to sum, but we may consider the integral instead, given the assumption that b) The energy and heat capacity of the set of diatomic molecules described above may be determined from the partition function for the set: where the N-fold product has been divided by the number of permutations of the N indistinguishable molecules. Recall that We then find that Again, for the heat capacity is A diatomic rotor in three dimensions would have contributions to the energy per degree of freedom. Three degrees of translation and two degrees of rotation (assuming negligible inertia perpendicular to its length) gives for one molecule A diatomic rotor confined to a plane would have three degrees of freedom, two translational and one rotational. Hence, The quantization of energy is not apparent since we have assumed Diatomic Molecules in Three Dimensions (Stony Brook, Michigan State) a) We first transform the expression of the kinetic energy are the Cartesian coordinates of the molecule in the frame with the c.m. at the origin to spherical coordinates: For the rigid diatom, We may substitute (S.4.44.2) into (S.4.44.1), obtaining Using the definition of c.m., we may write Then (S.4.44.3) becomes b) In order to find the conjugate momenta we must compute the we may rewrite the Hamiltonian as c) The single-diatom partition function may be computed as follows: Now the free energy F for N such classical molecules may be found from The entropy is then and the energy E and heat capacity C are d) For the quantum case the Schrödinger equation for a rigid rotator admits the standard solution where each of the energy states is is given by The partition func- For low temperatures we may neglect high-order terms and write where we left only terms with for the free energy that For N molecules we find The energy E and heat capacity C are then So, at low temperatures the heat capacity corresponding to the rotational degrees of freedom is exponentially small. This implies that there would be no difference, in this limit, between the heat capacity for monatomic and diatomic molecules. In the opposite case, at high temperatures, the sum may be replaced by an integral: Proceeding from (S.4.44.18), we have Replacing the sum by an integral, we obtain Therefore, in the classical limit (high temperatures), The energy E and heat capacity C are given by We see that this is the same as found in (S.4.44.12). Since we expect a heat capacity per degree of freedom of 1/2, we see that there are two degrees of freedom for each molecule since They correspond to the two rotational degrees of freedom of a classical rod. (There are no spatial degrees of freedom since the molecule is considered Two-Level System (Princeton) a) There is nothing to prevent giving each atom its larger energy hence, has a maximum of 1 with Clearly, the system would not be in thermal equilibrium. To compute the problem in equilibrium, we need to determine the partition function, Z. For distinguishable noninteracting particles, the partition function factors, so for identical energy The free energy would be The energy is then Obviously, since both larger than 1. On the other hand, and are positive, cannot be is a monotonic function which goes to 1/2 when b) The entropy goes to infinity; hence, may be found from (S.4.45.2)–(S.4.45.4): The entropy per particle, is given by We can check that as it should. Zipper (Boston) a) A partition function may be written as where we have used the fact that a state with open links has an energy b) The average number of open links is given by which does not depend on N. It is also zipped up tight! Hanging Chain (Boston) a) Let the number of links with major axis vertical be the number of horizontal major axis links will then be The total length of the chain is then The energy of the system, is also a function of where we let The partition function is the number of possible configurations with major axis vertical links. b) The average energy can be found from (S.4.47.3): The average length is We can check that, if (lowest energy state). At Molecular Chain (MIT, Princeton, Colorado) a) Consider one link of the chain in its two configurations: energy of the link is The partition function for the entire chain is given by b) The average length of the chain may be found from the partition function: c) If (S.4.48.3) becomes high temperature, where we let The changeover temperature is obviously d) From (S.4.48.3), At small , (S.4.48.7) becomes as it should, since (for the specified direction of the tensile force ) it corresponds to a thermodynamic inequality for a system at equilibrium: Nonideal Gas Heat Capacities (Princeton) From the definition of for a gas, Since we are interested in a relation between it is useful to transform to other variables than in (S.4.49.1), namely V instead of We will use the Jacobian transformation (see Landau and Lifshitz, Statistical Physics, Sect. 16): A useful identity is obtained from b) Let us write the van der Waals equation for one mole of the gas in the from which we obtain Substituting for P in (S.4.49.5) yields We can see that ideal gas where (in regular units for an Return of Heat Capacities (Michigan) a) We will again use the Jacobian transformation to find as a function where we used So, we obtain into (S.4.50.2) yields b,c) We cannot determine the temperature dependence of can find as follows: but we where F is the Helmholtz free energy, and we used From (S.4.50.4) and the equation of state, we have and from (S.4.50.5), (S.4.50.7), we obtain Integrating (S.4.50.6) and are some functions of temperature. Since we know from (a), we infer that and finally Nonideal Gas Expansion (Michigan State) a) The work done in the expansion b) To find the heat absorbed in the expansion use the Maxwell relations given in the problem: where the prime indicates the derivative with respect to (S.4.51.2), we obtain is some function of The heat absorbed in the expansion van der Waals (MIT) a) The heat capacity is defined as By using the Maxwell relation we may write Substituting the van der Waals equation of state into (S.4.52.3) gives b) The entropy may be computed from We were given that and (S.4.52.4), we obtain c) The internal energy therefore, again using (S.4.52.2) may be calculated in the same way from Now, from we have and using (S.4.52.4) and (S.4.52.7), we get So, (S.4.52.8) becomes d) During adiabatic compression, the entropy is constant, so from (S.4.52.7) and we have e) The work done is given by the change in internal energy entropy is constant: since the From (S.4.52.11), we arrive at Critical Parameters (Stony Brook) At the critical point we have the conditions Substituting the Dietrici equation into (S.4.53.1) gives Using the second criterion (S.4.53.2) gives by (S.4.53.3), so by (S.4.53.6). (S.4.53.7) then yields which combined with (S.4.53.4) gives Substituting this result in the RHS of (S.4.53.4) finally yields Rearranging the original equation of state gives Mixtures and Phase Separation Entropy of Mixing (Michigan, MIT) a) The energy of the mixture of ideal gases is the sum of energies of the two gases (since we assume no interaction between them). Therefore the temperature will not change upon mixing. The pressure also remains unchanged. The entropy of the mixture is simply the sum of the entropies of each gas (as if there is no other gas) in the total volume. We may write the total entropy S (see Problem 4.38) as are the number of molecules of each gas in the mixture. V is the total volume of the mixture The entropy of the gases before they are allowed to mix is Therefore, the change in entropy, In our case is given by So, (S.4.54.3) becomes In conventional units we find The entropy increased as it should because the process is clearly irreversible. b) If the gases are the same, then the entropy after mixing is given by and so In the case of identical gases, reversing the process only requires the reinsertion of the partition, whereas in the case where two dissimilar gases are mixed, some additional work has to be done to separate them again. c) The same arguments as in (a) apply for a mixture of two isotopes, The Gibbs free energy can be written in the form are the chemical potentials of pure isotopes. Therefore, the potential (S.4.54.7) has the same form as in the mixture of two different gases, and there is no correction to the result of (a). This is true as long as (S.4.54.7) can be written in this form, and it holds even after including quantum corrections to the order of (see, for further details, Landau and Lifshitz, Statistical Physics, Sect. 94). Leaky Balloon (Moscow Phys-Tech) Let us consider the bag as part of a very large system (the atmosphere) which initially has N molecules of air, which we consider as one gas, and molecules of helium. The bag has volume and the number of helium molecules is Using (S.4.38.7) from Problem 4.38 and omitting all the temperature-dependent terms, we may write for the initial entropy of the When the helium has diffused out, we have We wish to find in the limit where We then obtain is the concentration ratio of helium molecules in the bag to their concentration in the air. In regular units Substituting the standard pressure and temperature into (S.4.55.5) gives The minimum work necessary to separate the helium at constant temperature is (see Landau and Lifshitz, Statistical Physics, Sect. 20) since after we separate the helium molecules from the rest of the air, the total entropy of that system would decrease. So Osmotic Pressure (MIT) a) The free energy for a one-component ideal gas is derived in Problem The Gibbs free energy so (S.4.56.2) must be transformed: If we have a mixture of two types of molecules with each, we find for the thermodynamic potential of the mixture: The Gibbs potential of the mixture are partial pressures A and B, respectively. So, corresponding to particles It can be seen that (see also Problem 4.54). b) To derive the pressure difference, we notice that for the system with a semipermeable membrane, only the chemical potentials of the solvent are equal, whereas the chemical potentials of the solute do not have to be (since they cannot penetrate through the membrane). We will write first the Gibbs free energy on the left and right of the membrane, will be defined by (S.4.56.6), with The chemical potentials of the solvent are given by we obtain where we only take into account the first-order terms in the solute. If we also assume, which is usually the case, that the osmotic pressure is also small, i.e., we obtain, from (S.4.56.11), are the concentrations of the solutes: Therefore, with the same accuracy, we arrive at the final A different derivation of this formula may be found in Landau and Lifshitz, Statistical Physics, Sect. 88. Clausius–Clapeyron (Stony Brook) a) We know that, at equilibrium, the chemical potentials of two phases should be equal: Here we write to emphasize the fact that the pressure depends on the temperature. By taking the derivative of (S.4.57.1) with respect to temperature, we obtain Taking into account that where s and are the entropy and volume per particle, and substituting into (S.4.57.2), we have where subscripts 1 and 2 refer to the two phases. On the other hand, where is the latent heat per particle, so we can rewrite (S.4.57.3) in the form which is the Clausius–Clapeyron equation. b) Consider the particular case of equilibrium between liquid and vapor. The volume of the liquid is usually much smaller than that for the vapor so we can disregard in (S.4.57.4) and write Using the ideal gas law for vapor, we get We can see that Rewriting (S.4.57.6) in usual units gives where L is the latent heat per mole, the gas constant. is Avogadro’s number, and R is Phase Transition (MIT) For a system at equilibrium with an external reservoir, the Gibbs free energy is a minimum. Any deviation from equilibrium will raise G: where is the pressure of the reservoir (see Landau and Lifshitz, Statistical Physics, Sect. 21). Expanding we have we may rewrite (S.4.58.2) as At the critical point, so (S.4.58.3) becomes For (S.4.58.4) to hold for arbitrary we have See Landau and Lifshitz, Statistical Physics, Sect. 153 for further discussion. Hydrogen Sublimation in Intergalactic Space Using the Clausius–Clapeyron equation derived in Problem 4.57, we can estimate the vapor pressure P at where is the pressure at the triple point and R is the gas constant. Here we disregard the volume per molecule of solid hydrogen compared to the one for its vapor. This formula is written under the assumption that the latent heat does not depend on the temperature, but for an order-of-magnitude estimate this is good enough. Consider solid hydrogen at equilibrium with its vapor. Then the number of particles evaporating from the surface equals the number of particles striking the surface and sticking to it from the vapor. The rate of the particles striking the surface is given by where is the number density, is the average speed, and is a sticking coefficient, which for this estimate we take equal to 1. Here we used the result of Problem 4.14, where we calculated the rate of particles striking the surface. Now if the density is not too high, the number of particles leaving the surface does not depend on whether there is vapor outside, so this would be the sublimation rate. Taking the average velocity from Problem 4.13, we get where is the mass of a hydrogen molecule, and substituting we may rewrite (S.4.59.2) as Gas Mixture Condensation (Moscow Phys-Tech) Consider three parts of the plot (see Figure S.4.60). At there is a regular gas mixture (no condensation). At one of the gases is condensing; let us assume for now it is oxygen (it happens to be true). they are both condensing, and there is no pressure change. Let us write is the partial nitrogen pressure at _ is the saturation vapor pressure of oxygen, and is the saturated vapor pressure of nitrogen (1 atm) at nitrogen is a gas, and since the temperature is constant, Using (S.4.60.3) and dividing (S.4.60.1) by (S.4.60.2), we have Had we assumed that oxygen is condensing at we would get This contradicts the fact that oxygen boils at a higher temperature. The saturated vapor pressure at K should be less than find the oxygen mass, we use the ideal gas law at where the oxygen is just starting to condense (i.e., its pressure is and it is all gas). So is the oxygen molar mass. For nitrogen a similar equation can be written for is the molar mass of nitrogen. Dividing (S.4.60.5) by (S.4.60.6), we obtain Air Bubble Coalescence (Moscow Phys-Tech) Writing the equilibrium conditions for the bubbles to exist, we find for the pressure inside each original bubble: is the hydrostatic pressure ( is the height of the water). We disregard any effects due to the finite size of the bubble since they are After merging, the pressure inside the new bubble will not change. This is due to the fact that the temperature is constant, and since the jar is closed and water is incompressible, the total volume also will not change. The new radius is given by Writing (S.4.61.1) for the new bubble, we obtain where we disregard any small change in hydrostatic pressure. From (S.4.61.1) and (S.4.61.3) we find that the change of pressure inside the jar is Soap Bubble Coalescence (Moscow Phys-Tech) Assume that, during the coalescence, the total mass of air inside the bubbles and the temperature do not change. So, are the masses of air inside bubbles tively. By the ideal gas law, is the mass, is the pressure, and is the volume in the ith bubble, and is the molar mass of the trapped air. The equilibrium condition for a bubble is The coefficient 2 in front of the second term results from the presence of two surfaces of the soap film enclosing the air (compare with Problem 4.61). From (S.4.62.2) and (S.4.62.3) we obtain Substituting (S.4.62.4) into (S.4.62.1), we obtain and so Note that if a is very small the volume of the new bubble is close to the sum of the original volumes, whereas if it is very large the surface area of the new bubble is roughly the sum of the original surface areas. Soap Bubbles in Equilibrium (Moscow a) The equilibrium is unstable. It is obvious from purely mechanical considerations that if the radius of one bubble decreases and the other increases, the pressure in the first bubble (which is inversely proportional to increase and that in the second bubble will decrease, leading to further changes in respective radii until the system becomes one bubble with radius (see Figure S.4.63). The same result can be obtained by considering the free energy of the system. b) The free energy of the bubble consists of two parts: a volume part, which is just the free energy of a gas (see Problem 4.38), and a surface part, which is associated with the surface tension: The Gibbs free energy The entropy change is the potential of the system with one bubble and potential of the initial configuration. We then find is the where we used the fact that the number of particles did not change and q is the heat necessary to produce a unit area of the film: We can eliminate from the final result by using the following equations: where the last equation represents the ideal gas law at constant temperature. This yields the equation Solving this cubic equation in the small limit gives Substituting (S.4.63.9) into (S.4.63.4) yields (in the same approximation) Quantum Statistics Fermi Energy of a 1D Electron Gas For a one-dimensional gas the number of quantum states in the interval and L is the “length” of the metal. The total number of electrons N (which in this case is equal to the number of atoms) is is the atomic spacing. The Fermi energy is the electron mass. Two-Dimensional Fermi Gas (MIT, a) At the noninteracting fermions will be distributed among the available states so that the total energy is a minimum. The number of quantum states available to a fermion confined to a box of area A with momentum between and is given by where the multiplicity and the spin fill all the states of momentum from 0 to calculate this maximum momentum from The Fermi energy The N fermions We can therefore for this nonrelativistic gas is simply Using (S.4.65.2) and (S.4.65.3) we obtain (S.4.65.5) becomes b) The total energy of the gas from (S.4.65.2) into (S.4.65.7) gives Nonrelativistic Electron Gas (Stony Brook, Wisconsin-Madison, Michigan State) a) As the Fermi-Dirac distribution function becomes a step function. All the states above a certain energy, are empty, and the states below, are filled (see Figure S.4.66). This energy for an electron gas is called the Fermi energy. Physically, this results from the simple fact that the total energy of the gas should be a minimum. However, we have to reconcile this with the Pauli principle, which prohibits more than one electron per quantum state (i.e., same momentum and spin). This means that the states are filled gradually from zero energy to the limiting energy, The number of states accessible to a free particle with absolute value of momentum between and In each of these states, we can put two electrons with opposite spin (up and down), so if we consider the total number of electrons, N, contained in a box of volume V, then N is given by we obtain and therefore To calculate the total energy of the gas, we can write where again and therefore and we can check that b) The condition for strong degeneracy is that the temperature be much smaller than the Fermi energy: For typical metals, if we assume that there is one free electron per atom and a typical interatomic distance we obtain an electron density which indicates a Fermi energy of the order of So, most of the metals are strongly degenerate, even at room temperature. Ultrarelativistic Electron Gas (Stony Brook) The fact that the gas is ultrarelativistic implies that the energy of the electron is large compared to its rest energy In this case, the dispersion law is linear: The number of quantum states is the same as for the nonrelativistic case considered in Problem 4.66: However, the Fermi energy now is different since The total energy is After substituting from (S.4.67.1), we obtain So, the pressure is Hence, for an ultrarelativistic gas we have the same as for massless particles (e.g., photons), which is not surprising since the dispersion law is the same. Quantum Corrections to Equation of State (MIT, Princeton, Stony Brook) a) Start with the particle distribution over the absolute value of momentum: where the upper sign in (S.4.68.1) and below corresponds to Fermi statistics and the lower to Bose we obtain The total energy is given by On the other hand, using the grand canonical potential and replacing the sum by an integral, using (S.4.68.2), we obtain Integrating (S.4.68.5) by parts, we have Comparing this expression with (S.4.68.3), we find that Therefore, we obtain the equation of state, which is valid both for Fermi and Bose gases (and is, of course, also true for a classical Boltzmann gas): Note that (S.4.68.8) was derived under the assumption of a particular dispersion law for relativistic particles or photons with (S.4.68.8) becomes (see Problem 4.67). From (S.4.68.8) and (S.4.68.3), we obtain (S.4.68.9) defines the equation of state. To find quantum corrections to the classical equation of state (which corresponds to the case expand the integral in (S.4.68.9), using as a small and substituting (S.4.68.10) into (S.4.68.9), we have The first term, which we may call corresponds to a Boltzmann gas with (see Problem 4.39), and the second term gives the first correction Using the fact that, for small corrections (see, for instance, Landau and Lifshitz, Statistical Physics, Sect. 24), we can write the first quantum correction to the free energy F. Using the classical expression for in terms of and V gives the result to the same we obtain, from (S.4.68.13), b) The condition for validity of this approximation is that the first correction should be much less than unity: This gives the condition on the density for which (S.4.68.15) is valid: It is interesting to determine the de Broglie wavelength We find that at this tem- We see that this approximation is valid when the separation between particles is much larger than the de Broglie wavelength. (S.4.68.16) expresses the same condition as for the applicability of Boltzmann statistics (which Since the chemical potential may be written (see Problem 4.39) we see that Speed of Sound in Quantum Gases (MIT) a) The Gibbs free energy G is a function of the number of particles; i.e., is some function of which do not depend on On the other hand, may write for for a system consisting of identical particles, and we From (S.4.69.3) we have and we recover b) The number of quantum states in the interval between a Fermi gas is from 0 to electrons fill all the states with momentum so the total number of electrons N is given by The total energy of the gas from (S.4.69.8), we obtain Using the equation of state for a Fermi gas (see Problem 4.66), we have Now, using (S.4.69.11), we can calculate Alternatively, we can use the expression obtained in (a) and the fact that, the chemical potential From (S.4.69.8), and we again recover (S.4.69.12) in c) We can explicitly calculate the total energy of the Bose gas, which will be defined by the particles that are outside the condensate (since the condensed particles are in the ground state with At a temperature below the Bose–Einstein condensation the particles outside the condensate are distributed according to a regular Bose distribution with (see Problem 4.70): The total number of particles outside the condensate is therefore The energy of the Bose gas at The free energy F is So the pressure So we see that the pressure does not depend on the volume and We could have determined the result without the above calculations since the particles which are inside the condensate (with = 0) have no momentum and do not contribute to pressure. Bose Condensation Critical Parameters (MIT) a) The number of particles dN in an element of phase space is given by With the usual dispersion law for an ideal gas and integrating over we find the particle distribution over energy: Integrating (S.4.70.2), we obtain a formula for the total number N of particles in the gas: we rewrite (S.4.70.3) as (S.4.70.4) defines a parametric equation for the chemical potential decrease of volume (or temperature) will increase the value of the integral, and therefore the value of (which is always negative in Bose statistics) will increase. The critical parameters correspond to the point where (i.e., if you decrease the volume or temperature any further, should increase even further to provide a solution to (S.4.70.4), whereas it cannot become positive). So we can write at a certain temperature: b) In two dimensions the integral (S.4.70.3) becomes and there is no Bose condensation (see Problem 4.71). Bose Condensation (Princeton, Stony Brook) For Bose particles, where is the temperature in energy units. The total number of particles in a Bose distribution is integral gives into the The condition for Bose condensation to occur is that, at some particular temperature, the chemical potential goes to zero. Then the number of particles outside the Bose condensate will be determined by the integral This integral should converge since N is a given number. Expanding around in order to determine conditions for convergence of the integral yields So, this integral diverges at and there is no Bose condensation for this region. (For instance, in two dimensions, particles with ordinary dispersion law would not Bose-condense.) In three dimensions, so that Bose condensation does occur. How Hot the Sun? (Stony Brook) (See Problem 2 of Chapter 4 in Kittel and Kroemer, Thermal Physics.) The distribution of photons over the quantum states with energy is given by Planck’s distribution (the Bose–Einstein distribution with chemical potential where is the temperature of the radiation which we consider equal to the temperature of the surface of the Sun. To find the total energy, we can replace the sum over modes by an integral over frequencies: where the factor 2 accounts for the two transverse photon polarizations. The energy of radiation in an interval and unit volume is therefore The total radiation energy density The integral with factor fact it is The energy flux over all frequencies is is just a number which we can take per a unit solid angle is The flux that illuminates the Earth is proportional to the solid angle subtended by the Sun’s surface at the Earth: The radiant energy flux at the Earth is therefore is the temperature of the Sun’s surface in K. Now we may estimate (The actual temperature is about 6000 K; see Problem 4.73.) Radiation Force (Princeton, Moscow Phys-Tech, a) The total radiation flux from the Sun is where is the Stefan–Boltzmann constant. Only a fraction this flux reaches the Earth. In equilibrium this fraction equals the total flux radiated from the Earth at temperature From (S.4.73.2) we obtain b) The radiation pressure on the Earth is given by is the ratio of the total flux from the Sun to the flux that reaches the Earth. The radiation force on the Earth is the cross section of the Earth. c) For the small “chondrule” the temperature will be the same because it depends only on the angle at which the Sun is seen and the radiation force: d) Using (S.4.73.3) and denoting the melting temperature of the metallic and the distance from the Sun we obtain e) Let us estimate the radius of a particle for which the radiation force will equal the gravitational force at the distance of the Earth’s orbit (S.4.73.6), we have where the particle mass Hot Box and Particle Creation (Boston, MIT) a) The number of photons is where the factor 2 comes from the two polarizations of photons; b) At low temperatures we can disregard any interaction between photons due to the creation of electron–positron pairs. We can therefore use the standard formula for energy flux: where is the Stefan–Boltzmann constant. On the other hand, by analogy with molecular flow, is the energy density. So, c) Using the equation of state for a photon gas we have The entropy S is then d) The energy E of the system of particles + photons is The entropy of the system is the sum of the entropy of an ideal gas and radiation. The free energy of a single-particle ideal gas with Problem 4.38) of created particles and the radiation is then Minimizing the free energy with respect to the number of particles, we have From (S.4.74.12) we obtain This result can be immediately obtained if we consider the process as a “chemical” reaction For chemical equilibrium Since, for photons, we have is the chemical potential of an ideal gas (see part (e)). This result gives us (S.4.74.13). e) Pair creation and annihilation can be written in the form The chemical potential of the photon gas is zero (since the number of photons is not constant but is defined by equilibrium conditions). Therefore, we have for process (S.4.74.14) in equilibrium: are the chemical potentials of electrons and positrons, respectively. If we disregard the walls of the box and assume that there are no electrons inside the box initially, then the number of electrons equals the number of positrons, and We then find for the number of electrons (positrons) where the factor 2 comes from the double degeneracy of the electron gas and we set The energy may be written Disregarding the 1 in the denominator of (S.4.74.16) and expanding the square root with respect to the small parameter we obtain, from (S.4.74.16), where we set We then find that the concentrations are and so Alternatively, we can take an approach similar to the one in (d). Using the formula for the chemical potential of an ideal gas (see Problem 4.39) gives We can immediately write to obtain the same result in (S.4.74.18) and (S.4.74.19). f) For the electrons are highly relativistic, and we can write in (S.4.74.16). Then where we have used the integral given in the problem. Finally, D-Dimensional Blackbody Cavity (MIT) For a photon gas the average number of photons per mode is given by The energy where V is the volume of the hypercube: into (S.4.75.2), we obtain The energy density is simply For D = 3 we recover the Stefan–Boltzmann law: Fermi and Bose Gas Pressure (Boston) a) Using and substituting the entropy from the problem, we We may then find the pressure P of the gas: The isothermal work done by the gas b) For a photon gas in a cuboid box is a constant. The same is true for a relativistic Fermi gas with dispersion law c) For a nonrelativistic Fermi gas the energy is is a constant. So, This result was already obtained directly in Problem 4.66 (see (S.4.66.8)). Blackbody Radiation and Early Universe (Stony a) By definition the free energy b) The entropy is then The energy of the system Alternatively, the entropy can be found from can be expressed from (S.4.77.3) as where we let Performing the integral gives Substituting the energy for this mode in the form we recover the entropy Photon Gas (Stony Brook) The photon gas is a Bose gas leading to Planck’s distribution: with zero chemical potential Replacing the sum over different modes by an integral in spherical coordinates, we may write, for the number of quantum states in a volume V, into (S.4.78.2) and taking into account the two possible transverse polarizations of photons, we obtain Let us calculate the Helmholtz free energy F. For a Bose gas with the grand thermodynamic potential is given by The free energy F would coincide with replacing the sum by an integral in (S.4.78.4) and substituting Integrating by parts gives although we really do not need this, and so a positive constant. The pressure P of the photon gas is The entropy S of the gas is given by The energy E may now be found from Comparing (S.4.78.7) and (S.4.78.9) gives Note that this result is the same as for an ultrarelativistic electron gas (which has the same dispersion law see Problem 4.67). The total number of photons is given by where we let Comparing (S.4.78.9)–(S.4.78.10) with (S.4.78.11), we can write So, similar to the classical ideal gas, we have Dark Matter (Rutgers) a) The virial theorem may be written relating the average kinetic energy, and the forces between particles (see Sect. 3.4 of Goldstein, Classical For an inverse square law force (gravitation), the average kinetic energy and potential energy are related as For a very rough estimate of the gravitational potential energy of Draco, consider the energy of a sphere of uniform density and radius The average kinetic energy may be approximated by Substituting (S.4.79.2) and (S.4.79.3) into (S.4.79.1), we find b) If most of the mass of Draco is in massive neutrinos, we may estimate the energy by considering the energy of a uniform distribution of fermions in a box of volume V. The energy of such a fermionic gas has been found in Problem 4.66: Rewriting (S.4.79.5) in terms of density and volume gives If the mass of the neutrino is too low, in order to maintain the observed density, the number density would increase, and the Pauli principle would require the kinetic energy to increase. So, in (S.4.79.6), the energy increases as the mass of the neutrino decreases. Equating the kinetic energy from (a) with (S.4.79.6), we see c) Substituting (S.4.79.4) into (S.4.79.7), we determine that This value is at least an order of magnitude larger than any experimental results for neutrino masses, implying that the model does not explain the manner in which Draco is held together (see also D. W. Sciama, Modern Cosmology and the Dark Matter Problem). Einstein Coefficients (Stony Brook) a) At equilibrium the rates of excitation out of and back to state 1 should be equal, so Substituting (P.4.80.1), (P.4.80.2), and (P.4.80.3) into (S.4.80.1) gives We may find the ratio of the populations from (S.4.80.2) to be b) At thermal equilibrium the population of the upper state should be smaller than that of the lower state by the Boltzmann factor (S.4.80.3) gives Substituting the radiation density into (S.4.80.4) gives The ratios of coefficients may be found by considering (S.4.80.6) for extreme values of since it must be true for all values of For very large values of we have Substituting (S.4.80.7) back into (S.4.80.6) yields which immediately yields and so, from (S.4.80.7), c) Inspection of (S.4.80.11) shows that the ratio of the spontaneous emission rate to the stimulated emission rate grows as the cube of the frequency, which makes it more difficult to create the population inversion necessary for laser action. The pump power would therefore scale as Atomic Paramagnetism (Rutgers, Boston) a) The energy associated with the magnetic field is is an integer varying in the range b) From (S.4.81.1) we may find the partition function Z: where we define The sum (S.4.81.2) may be easily calculated: The mean magnetic moment per dipole is given by Since the atoms do not interact, For J = 1/2, This result can be obtained directly from (S.4.81.3) and (S.4.81.4): For J = 1, c) For large H the magnetization saturates It is convenient to define the so-called Brillouin function Journal de Physique 8, 74 (1927)] in such a way that For small H, we can expand coth The saturation value (S.4.81.10) corresponds to a classical dipole atom, where all the dipoles are aligned along the direction of H, whereas the value at small magnetic field H (S.4.81.11) reflects a competition between order (H) and disorder Paramagnetism at High Temperature (Boston) a) The specific heat c of a system that has N energy states is given by we may rewrite where we have used In Note that, in general, the parameter is not small (since it is proportional to the number of particles), but, subsequently, we obtain another parameter b) For a classical paramagnetic solid: and we have is the probability density. Therefore, For the quantum mechanical case, there is an equidistant energy (see Problem 4.81) and To calculate we can use the following trick (assuming J integer): From (S.4.82.6) we have With the familiar sum we arrive at We wish to perform the sum from –J to J, so and (S.4.82.5) gives c) For As in Problem 4.81 for We then find which coincides with (S.4.82.11). One-Dimensional Ising Model (Tennessee) a) The partition function is defined as where the product is taken over the sites. Define Start by evaluating the sum at one end, say for independent of the value of Next we evaluate the sum over The answer is which is also independent of the value So each summation over gives the identical factor the product of N such factors. and Z is b) The heat capacity per spin is obtained using thermodynamic identities. The partition function is related to the free energy F: The entropy is given by Now, the heat capacity C is given by The heat capacity per spin Three Ising Spins (Tennessee) a) Define partition function is The definition of the A direct calculation gives b) The average value of spin is c) The internal energy is N Independent Spins (Tennessee) a) The partition function is given by Each spin is independent, so one has the same result as for one spin, but raised to the Nth power b) The internal energy is the derivative of ln Z with respect to c) The entropy is derivative of ln Z with respect to N Independent Spins, Revisited (Tennessee) We use the expression where is the probability of the arrangement of spins. For N spins we assume that are up and are down, The different arrangements are Use Stirling’s approximation for the factorial to obtain Ferromagnetism (Maryland, MIT) Using the mean field approximation, we may write the magnetization M of the lattice as (see Problem 4.81) where is the density of the spins and is the sum of the imposed field and the field at spin produced by the neighboring spins: is a constant. We may rewrite (S.4.87.1) as The susceptibility is given by For B and M small we may rearrange (S.4.87.4), yielding The divergence of at indicates the onset of ferromagnetism. The spins will align spontaneously in the absence of an applied magnetic field at this temperature. Spin Waves in Ferromagnets (Princeton, Quantum spins have the commutation relations a) The time dependences of the spins are given by the equations of motion: b) The classical spin field at point is In the simple cubic lattice the six neighboring lattice sites are at the points where is We expand the sum in a Taylor series, assuming that is a small number, and find c) Given the form of the spin operator in part (c), one immediately derives the equation by neglecting terms of order The equations of motion have an eigenvalue quencies of the spin waves. which represents the fre- d) The internal energy per unit volume of the spin waves is given by where the occupation number is suitable for bosons. At low temperature we can evaluate this expression by defining the dimensionless variable which gives for the integral At low temperature the upper limit of the integral becomes large, and the internal energy is proportional to The heat capacity is the derivative of with respect to temperature, so it goes as Magnetization Fluctuation (Stony Brook) The energy of a dipole in a magnetic field may be written The partition function Z is simply Since the moments are all independent, we may express the average magnetization On the other hand, the ensemble averages are independent, and We are left with so (S.4.89.2) and (S.4.89.3) give We then obtain Gas Fluctuations (Moscow Phys-Tech) a) We can disregard any particles from the high-vacuum part of the setup and consider the problem of molecular flow from the ballast volume into the vacuum chamber. The number of particles was calculated in Problem where is the particle concentration and is the average velocity. Expressing via the pressure P and using (see Problem 4.13) we obtain b) At the given pressure the molecules are in the Knudsen regime, the mean free path Therefore, we can assume that the molecular distribution will not change and N can be obtained from the Poisson distribution. The mean fluctuation (see Problem 4.94) The mean relative fluctuation is given by c) The probability of finding N particles as a result of one of the measurements, according to the Poisson distribution (see Problem 4.35), is Therefore, the probability of counting zero particles in 1 ms is an exceedingly small number. This problem is published in Kozel, S. M., Rashba, E. I., and Slavatinskii, S. A., Problems of the Moscow Institute of Physics and Technology. Quivering Mirror (MIT, Rutgers, Stony Brook) a) When the mirror is in thermal equilibrium with gas in the chamber, one may again invoke the equipartition theorem and state that there is of energy in the rotational degree of freedom of the torsional pendulum, where the torque is given by The mean square fluctuation in the angle would then be given by (see Chapter 13, Fluctuations, in Pathria) Now, Avogadro’s number and we obtain b) Even if the gas density were reduced in the chamber, the mean square would not change. However, in order to determine whether individual fluctuations might have larger amplitudes, we cannot rely on the equipartition theorem. We instead will examine the fluctuations in the frequency domain. may be written is the power spectral density of At high gas density, broader and smaller in amplitude, while the integral remains constant. This corresponds to more frequent collisions and smaller amplitudes, whereas, at low density, is more peaked around the natural frequency of the torsional pendulum where I is its moment of inertia, still keeping the integral constant. It then appears that by reducing the density of the gas we actually increase the amplitude of fluctuations! Isothermal Compressibility and Mean Square Fluctuation (Stony Brook) a) Let us use the Jacobian transformation for thermodynamic variables: Since the chemical potential N, we can write is expressed in P, and does not depend on are reduced entropy and volume respectively. Using the equation for the Gibbs free energy of a single-component system, we can write where we also used But from (S.4.92.1), So finally b) By definition the average number of particles in the grand canonical ensemble is Now, from (a), where N is an average number of particles: where we have used From (S.4.92.3) Since V is proportional to The relative fluctuation is given by Energy Fluctuation in Canonical Ensemble (Colorado, Stony Brook) First solution: For a canonical ensemble: On the other hand, Differentiating (S.4.93.2), we obtain By inspecting (S.4.93.1)–(S.4.93.3), we find that Now, the heat capacity at constant volume, is given by Therefore, comparing (S.4.93.4) and (S.4.93.5), we deduce that, at constant or in standard units Second solution: A more general approach may be followed which is applicable to other problems. Because the probability of finding that the value of a certain quantity X deviates from its average value is proportional and denoting we can write Note that we obtain The entropy has a maximum at The probability distribution If we have several variables, If the fluctuations of two variables are statistically independent, The converse is also true: If the variables tically independent. Now for a closed system we can write is the total entropy of the system and due to the fluctuation. On the other hand, are statis- is the entropy change is the minimum work to change reversibly the thermodynamic variables of a small part of a system (the rest of the system works as a heat bath), and is the average temperature of the system (and therefore the temperature of the heat bath). Hence, are changes of a small part of a system due to fluctuations and P are the average temperature and pressure. So, (for small fluctuations) gives Substituting (S.4.93.16) into (S.4.93.14), we obtain where we used So, finally Using V and as independent variables we have Substituting (S.4.93.19) into (S.4.93.18), we see that the cross terms with cancel (which means that the fluctuations of volume and temperature are statistically independent, Comparing (S.4.93.20) with (S.4.93.10), we find that the fluctuations of volume and temperature are given by To find the energy fluctuation, we can expand where we used (S.4.93.21), we obtain a more general formula for At constant volume (S.4.93.23) becomes the same as before. Number Fluctuations (Colorado (a,b), Moscow Phys-Tech (c)) a) Using the formula derived in Problem 4.92, we have Consider an assortment of particles which are in the quantum state. They are statistically independent of the other particles in the gas; therefore we can apply (S.4.94.1) in the form For a Fermi gas So, by (S.4.94.2), Similarly, for a Bose gas we have b) First solution: Since a classical ideal gas is a limiting case of both Fermi and Bose gases at we get, from (S.4.94.3) or (S.4.94.6), Alternatively, we can take the distribution function for an ideal classical and use (S.4.94.2) to get the same result. Since all the numbers particles in each state are statistically independent, we can write Second solution: In Problem 4.93 we derived the volume fluctuation This gives the fluctuation of a system containing N particles. If we divide (S.4.94.9) by we find the fluctuation of the volume per particle: This fluctuation should not depend on which is constant, the volume or the number of particles. If we consider that the volume in (S.4.94.10) is constant, then Substituting (S.4.94.11) into (S.4.94.10) gives Using the equation for an ideal gas, in (S.4.94.12), we obtain Third solution: Use the Poisson distribution, which does not require that the fluctuations be small: The average square number of particles is Thus we recover (S.4.94.8) again: c) Again we will use (S.4.94.1): Since the gas is strongly degenerate, (see Problem 4.66): we can use Wiggling Wire (Princeton) First solution: Consider the midpoint of the wire P fixed at points A and B (see Figure S.4.95). let be the deviation of the wire from the line segment AB. Then, in equilibrium, the wire will consist of segments To find we will have to find the minimum work to change the shape of the wire from APB to Using a standard formula for the probability of fluctuation (see Problem we obtain This answer can be easily generalized for an arbitrary point wire (see Landau and Lifshitz, Statistical Physics, Sect. 112): along the Second solution: We solve the equation of motion for the wire (see derivation in Problem 1.46, Part I). For the boundary conditions we have modes: Taking for simplicity is the phase velocity, we can find the average kinetic and potential energy in each mode: The total energy in each mode is The fluctuation of the wire is given by where we have used where we substituted from (S.4.95.9). Note that even modes do not contribute to the fluctuation of the midpoint of the wire, as expected from elementary considerations. We may then find the fluctuation: where we have used the sum given in the problem as before. LC Voltage Noise (MIT, Chicago) Write the Hamiltonian H for the circuit: is the charge on the capacitor. This is the Hamiltonian of a harmonic oscillator of frequency whose energy levels The average energy in the circuit is given by (with The average energy is equally distributed between the capacitance and the where V is the voltage across the capacitor (between points A and B; see Figure S.4.96). We then have a) In the classical limit We could equally well have derived the classical result by using the equipartition theorem (see Problem 4.42). For the single degree of freedom, there is an average energy which, as noted, is divided between the capacitor and inductor, so as found in (S.4.96.6). The mean square noise voltage is b) If then (S.4.96.6) becomes Applications to Solid State Thermal Expansion and Heat Capacity a) First solution: We can calculate the average displacement of an oscillator: Since the anharmonic term is small, exponent in the integral: we can expand the where we set Note that, in this approximation, the next term in the potential would not have introduced any additional shift (only antisymmetric terms Second solution: (see Problem 1.37, Part I) We can solve the equation of motion for the nonlinear harmonic oscillator corresponding to the potential is the principal frequency. (S.1.37.10) of Part I) gives The solution (see is defined from the initial conditions and A is the amplitude of oscillations of the linear equation. The average over a period We need to calculate the thermodynamic average of we obtain the same as before. b) The partition function of a single oscillator associated with this potential energy is So, the free energy F per oscillator is given by where we approximated ln found from The energy per oscillator may be The heat capacity is then The anharmonic correction to the heat capacity is negative. Schottky Defects (Michigan State, MIT) When N atoms are displaced to the surface, they leave the same number of vacancies. Now there are N vacancies and atoms in points. The entropy as a function of N is where we have used Stirling’s formula The free energy may be written The minimum of the free energy can be found to be we have which is what one would expect. Frenkel Defects (Colorado, MIT) We assume that the number of defects created around one lattice site does not affect the process of creating new defects. In other words, all configurations of the system are independent (not a very realistic assumption in general, but since and the number of defects it can be used as an approximation). The vacancies and interstices then can be distributed in ways, respectively: The total number of possible configurations of the system, is given by The entropy, S, may be written Using Stirling’s formula we obtain, from (S.4.99.3), Using (S.4.99.5) and the fact that the total energy of the system we have The condition implies that and therefore Two-Dimensional Debye Solid (Columbia, a) The number of normal modes in the 2D solid within the interval wave vector may be written of a In the 2D solid there are only two independent polarizations of the excitations, one longitudinal and one transverse. Therefore, is the average velocity of sound. To find the Debye frequency we use the standard assumption that the integral of (S.4.100.2) from 0 to a certain cut-off frequency is equal to the total number of vibrational modes; i.e., We can express Then (S.4.100.2) becomes b) The free energy (see Problem 4.77) then becomes and introducing a new variable rewrite (S.4.100.7) in the form we can Integrating (S.4.100.8) by parts, we obtain where the 2D Debye function The energy is given by The specific heat At low temperatures, to infinity: at low temperatures) is we can extend the upper limit of integration Therefore, at is the Riemann function. Note that the specific heat in 2D is (see also Problem 4.75). Note also that you can solve a somewhat different problem: When atoms are confined to the surface but still have three degrees of freedom, the results will, of course, be different. Einstein Specific Heat (Maryland, Boston) a) For a harmonic oscillator with frequency the energy b) If we assume that the N atoms of the solid each have three degrees of freedom and the same frequency then the total energy The specific heat c) In the high-temperature limit of (S.4.101.4) we have In regular units which corresponds to the law of Dulong and Petit, does not depend on the composition of the material but only on the total number of atoms and should be a good approximation at high temperatures, especially for one-component elements. Prom the numbers in the problem, Note that, at high enough temperatures, anharmonic effects calculated in Problem 4.97 may become noticeable. Anharmonic corrections are usually negative and linearly proportional to temperature. d) At low temperatures (S.4.101.4) becomes The heat capacity goes to zero as exp whereas the experimental results give (see Problem 4.42). The faster falloff of the heat capacity is due to the “freezing out” of the oscillations at given the single natural frequency Gas Adsorption (Princeton, MIT, Stanford) For two systems in equilibrium, the chemical potentials should be equal. Consider one of the systems as an ideal gas (vapor) in a volume, and another as a surface submonolayer film. For an ideal gas the free energy F (see Problem 4.38) is given by where and correspond to the energy states and statistical sum associated with the internal degrees of freedom. If the temperature is reasonably corresponds to the ionization energy of the atoms, so that the atoms are not ionized and mostly in the ground state, and this state is nondegenerate, we can take and then (S.4.102.1) The Gibbs free energy G is given by where we have expressed G as a function of P and chemical potential Now, consider an adsorption site: we can apply a Gibbs distribution with a variable number of particles to this site: where the possible occupational numbers of the site for a submonolayer 0,1 (site is empty, site is occupied), with energy the sums, we have The average number of particles per site may be written The total number of adsorbed particles N is given by The surface concentration is simply for an ideal gas from (S.4.102.4) into (S.4.102.9), we have (S.4.102.9) can also be derived by considering the canonical ensemble. The number of possible ways of distributing N atoms among sites is The partition function is then and the average number of particles the same as (S.4.102.8). Thermionic Emission (Boston) a) We can consider the electron gas outside the metal to be in equilibrium with the electrons inside the metal. Then the number of electrons hitting the surface from the outside should be equal to the number of electrons leaving the metal. Using the formula for chemical potential of a monatomic ideal gas (see Problem 4.39), we can write for an electron gas. Rewriting (S.4.103.1), we have The state of equilibrium requires that this chemical potential be equal to the potential inside the metal, which we can take as i.e., the is required to take an electron from the Fermi level inside the metal into vacuum. So, the pressure of the electron gas is given by On the other hand, the number of particles of the ideal gas striking the surface per unit area per unit time is The current where is the electron charge. Therefore, we can express P from (S.4.103.5): Equating (S.4.103.6) with (S.4.103.3), we find the current Alternatively, we can calculate the current by considering the electrons leaving the metal as if they have a kinetic energy high enough to overcome the potential barrier. b) For one particle, are the energies, and of the gas and solid, respectively. Since in the form the volumes per particle, we can rewrite (S.4.103.8) Substituting (S.4.103.9) into the Clausius-Clapeyron equation (see Problem we obtain We may rewrite (S.4.103.11) as Integrating, we recover (S.4.103.2): where A is some constant, or Electrons and Holes (Boston, Moscow a) Let the zero of energy be the bottom of the conduction band, so (see Figure S.4.104). The number of electrons may be found from for electrons, and the Fermi distribution formula has been approximated by The concentration of electrons is then b) In an intrinsic semiconductor since a hole is defined as the absence of an electron. We may then write where is the energy of a hole and we have used the nondegeneracy condition for holes The number of holes is The energy of a hole (from the bottom of the conduction band) is Therefore, similar to (a): The product of the concentrations of electrons and holes does not depend on the chemical potential as we see by multiplying (S.4.104.3) and (S.4.104.8): We did not use the fact that there are no impurities. The only important assumption is that which implies that the chemical potential is not too close to either the conduction or valence bands. c) Since, in the case of an intrinsic semiconductor (every electron in the conduction band leaves behind a hole in the valence band), we can write, using (S.4.104.9), Equating (S.4.104.3) and (S.4.104.11), we can find the chemical potential for an intrinsic semiconductor: then the chemical potential is in the middle of the band gap: Adiabatic Demagnetization (Maryland) a) We start with the usual relation and substitute M dH for P dV, since the work done in this problem is magnetic rather than mechanical. So We now want to produce a Maxwell relation whose independent variables are T and H. Write an equation for the free energy F: We then obtain, from (S.4.105.2), The cross derivatives of (S.4.105.3) are equal so The heat capacity at constant magnetic field is given by from which we obtain By again exchanging the order of differentiation in (S.4.105.6) and using the result found in (S.4.105.4), we have Replacing M by in (S.4.105.7) yields the desired b) For an adiabatic process, the entropy S is constant. Writing we compose the differential and by (S.4.105.4) and (S.4.105.5), c) The heat capacity may be written as the integral into (S.4.105.8), we have Using the heat capacity at zero magnetic field, (S.4.105.12) in (S.4.105.11), we obtain The temperature may be written so for our adiabatic process The integrand in (S.4.105.14) is found by substituting So, for a process at constant entropy, we may write Rearranging and integrating give d) A possible route to zero temperature is illustrated in Figure S.4.105. During leg 1 the paramagnetic sample is kept in contact with a reservoir at a low temperature, and the magnetic field is raised from 0 to contact with the reservoir is then removed, and the field is reduced to zero along leg 2. The sample is thereby cooled. Critical Field in Superconductor (Stony Brook, a) If the external field is smaller than the critical field, then the B-field inside the superconductor is zero, and the magnetization M This means that the superconductor displays perfect diamagnetism (with magnetic susceptibility The change in free energy of the superconductor due to the increase of the external field H may be written Therefore, the free energy of the superconductor in a field is given by The transition to a normal state occurs when the free energy of the superconducting state is equal to that of the normal state: Here we used the fact that, because of the negligible magnetic susceptibility, the free energy of the normal state practically does not depend on the applied field. So, we have Now, it is easy to calculate the entropy discontinuity If we recall that the dependence of the critical field on the temperature can be approximated by the formula we can confirm that a superconducting state is a more ordered state, since and hence b) The latent heat is given by normal, then if the transition occurs at a constant temperature, If the transition is from superconducting to So, if we have a transition from the superconducting to normal states, then heat is absorbed. c) The specific heat is defined as Here we disregard any volume and pressure changes due to the transition. Hence, from equation (S.4.106.6), the specific heat per volume discontinuity is At zero field the transition is of second order and so the specific heat per unit volume discontinuity at from (S.4.106.8) is One-Dimensional Potentials Shallow Square Well I (Columbia) The ground state energy E must be less than zero and greater than the bottom of the well, From the expression one can deduce the form for the eigenfunction. Denote the ground state where is to be determined. The eigenfunction outside the well (V = 0) has the form Inside the well, define One can show that is positive since Inside the well, the eigenfunction has the form and its derivative at gives two expressions: Dividing these two equations produces the eigenvalue equation The equation given by the rightmost equals sign is an equation for the Solving it gives the eigenvalue E. Shallow Square Well II (Stony Brook) a) For the bound state we can write the eigenvalue as where is the decay constant of the eigenfunction outside the square well (see Problem 5.1). Inside the square well we define a wave vector by The infinite potential at the origin requires that all eigenfunctions vanish So the lowest eigenfunction must have the form At the point we match the eigenfunctions and their derivatives: We eliminate the constants A and B by dividing these two equations: Earlier we established the relationship between and So the only unknown variable is which is determined by this equation. b) To find the minimum bound state, we take the limit as the eigenvalue equation. From (S.5.2.1) we see that goes to a nonzero constant, and the eigenvalue equation only makes sense as which happens at Using (S.5.2.1) gives Thus, we derive the minimum value of for a bound c) For a positive energy state set where is the wave vector outside the square well. Inside the square well we again define a wave vector according to Again we have the requirement that the eigenfunction vanish at we have an eigenfunction with two unknown parameters B and Alternatively, we may write it as in terms of two unknowns C and D. The two forms are equivalent since We prefer to write it with the phase shift Again we match the two wave functions and their derivatives at Dividing (S.5.2.10) by (S.5.2.11), we obtain Since is a known function of the only unknown in this equation is which is determined by this equation. d) From (S.5.2.12) we derive an expression for the phase shift: Attractive Delta Function Potential I (Stony a) The bound state is stationary in time: its eigenvalue is E (E < 0), and the time dependence of the wave function is The equation for the bound state is The bound state for has the form We have already imposed the constraint that be continuous at This form satisfies the requirement that is continuous at the origin and vanishes at infinity. Away from the origin the potential is zero, and the Schrödinger equation just gives A relation between C and E is found by matching the derivatives of the wave functions at Taking the integral of (S.5.3.1) between Applying (S.5.3.3) to (S.5.3.2) gives the relations We have found the eigenvalue for the bound state. Note that the dimensions of C are energy × distance, which makes the eigenvalue have units of energy. Finally, we find the normalization coefficient A: b) When the potential constant changes from the eigenfunction changes from where the prime denotes the eigenfunction with the potential strength In the sudden approximation the probability that the particle remains in the bound state is given by Substituting (S.5.3.2) into (S.5.3.9) and using the result of (S.5.3.7), we Finally, using (S.5.3.5) yields It is easy to show that as required by particle conservation. If since there is no change, and the particle must stay in the bound state. Attractive Delta Function Potential II (Stony a) In order to construct the wave function for the bound state, we first review its properties. It must vanish at the point At the point it is continuous and its derivative obeys an equation similar to Away from the points it has an energy wave functions that are combinations of dictate that the eigenfunction has the form These constraints At the point we match the two eigenfunctions and their derivatives, using (S.5.4.1). This yields two equations, which are solved to find an equation for We use the first equation to eliminate A in the second equation. Then each term has a factor of which is canceled: Multiplying both sides of (S.5.4.5) by sinh This last equation determines which determines the bound state energy. There is only one solution for sufficiently large values of b) The minimum value of for creating a bound state is called found by assuming that the binding energy which means We examine (S.5.4.7) for small values of and find that It is Two Delta Function Potentials (Rutgers) There are two delta function singularities, one at and one at The potential can be written in an equivalent way as At each delta function we match the amplitudes of the eigenfunctions as well as the slopes, using a relation such as (S.5.3.3). A single, isolated, attractive, professional, delta function potential has a single bound state. We expect that a pair of delta function potentials will generally have one or two bound states. The lowest energy state, for symmetric potentials, is a symmetric eigenfunction. The eigenvalue has the form where is the decay constant of the eigenfunction. The most general symmetric eigenfunction for a bound state is Matching at either gives the pair of equations: Eliminating the constants A and B gives the final equation for the unknown For large values of the hyperbolic tangent is unity, and we have the approximate result that which gives for large P the eigenvalue For small values of we see that This is always the lowest eigenvalue. The other possible eigenstate is antisymmetric: it has odd parity. When the separate bound states from the two delta functions overlap, they combine into bonding and antibonding states. The bonding state is the symmetric state we calculated above. Now we calculate the antibonding state, which is antisymmetric: Using the same matching conditions, we find the two equations, which are reduced to the final equation for For large values of the hyperbolic cotangent function (coth) approaches unity, and again we find At small values of the factor of coth Here we have so we find at small values of The antisymmetric mode only exists for the only bound state is the symmetric one. For there are two bound states, symmetric and antisymmetric. Transmission Through a Delta Function Potential (Michigan State, MIT, Princeton) On the left the particle has an incident intensity, which we set equal to unity, and a reflected amplitude R. On the right the transmitted amplitude is denoted by T. At the point we match the value of on both sides. We match the derivative according to an expression such as (S.5.3.3) with This yields two equations for T and R which can be solved for T: Delta Function in a Box (MIT) a) In the absence of the delta function potential, the states with odd parity These states have zero amplitude at the site of the delta function and are unaffected by it. So, the states with odd parity have the same eigenfunction and eigenvalues as when the delta function is absent. b), c) For a delta function potential without a box, the bound states have a wave function of (see Problem 5.3). In the box we expect to have similar exponentials, except that the wave function must vanish at the edges of the box The states which do this are Using (S.5.4.1), we match the difference in the derivatives at the amplitude of the delta function potential. This leads to the eigenvalue The quantity on the left of (S.5.7.6) has a minimum value of 1, which it obtains at This limit produces the eigenvalue So we must for the zero eigenvalue, which is the answer to part (b). The above eigenfunction, for values of gives the bound state energy E < 0 Particle in Expanding Box (Michigan State, MIT, Stony Brook) a) For a particle confined to a box and the first excited state After the sudden transition the ground state the final eigenfunctions are b) In the sudden approximation let denote the probability that the particle starts in the ground state 0 and ends in the final state where the amplitude of the transition is given by The amplitude for the particle to remain in its ground state is then The probability is given by The same calculation for the transition between the initial ground state and final excited state is as follows: The integral is zero by parity, since the integrand is an odd function of One-Dimensional Coulomb Potential (Princeton) a) Since the electron is confined to the right half-space, its wave function must vanish at the origin. So, an eigenfunction such as exp is unsuitable since it does not vanish at The ground state wave function must be of the form where needs to be determined. The operator acting on this form gives so that using this wave function in Schrödinger’s equation yields For this equation to be satisfied, the first and third terms on the left must be equal, and the second term on the left must equal the term on the right of the equals sign: The answer is one sixteenth of the Rydberg, where energy of the hydrogen atom. The parameter Bohr radius. b) Next we find the expectation value the normalization coefficient: The average value of is the ground state is the The first integral is done to find is 6 Bohr radii. Two Electrons in a Box (MIT) a) If the box is in the region then the one-electron orbitals are If both electrons are in the spin state (spin up), then the spin part of the wave function is symmetric under exchange of coordinates. Therefore, the orbital part has to be antisymmetric, and both particles cannot be in state. Instead, the lowest energy occurs when one electron is in state and the other is in the b) The probability that both are in one half, say the left side, is Three integrals must be evaluated: The result is rather small. Naturally, it is much more favorable to have one particle on each side of the box. Square Well (MIT) a) The most general solution is We evaluate the coefficients by using the initial condition at = 0: The term cos is either 1, 0, or –1, depending on the value of answer to (a) is to use the above expression for in (S.5.11.3). The answer to part (b) is that the probability of being in the eigenstate is The answer to part (c) is that the average value of the energy is This latter series does not converge. It takes an infinite amount of energy to form the initial wave function. Given the Eigenfunction (Boston, MIT) We evaluate the second derivative of the eigenfunction, which gives the kinetic energy: We take the limit that of the function on the left, and this must equal – since we assumed that the potential vanishes at infinity. Thus, we find that The energy is negative, which signifies a bound state. The potential can be deduced from (S.5.12.1) since everything else in this expression is This potential energy has a bound state which can be found analytically, and the eigenfunction is the function given at the beginning of the problem. Combined Potential (Tennessee) Let the dimensionless distance be The kinetic energy has the scale In terms of these variables we write Schrödinger’s equation as Our experience with the hydrogen atom, in one or three dimensions, is that potentials which are combinations of are solved by exponentials times a polynomial in The polynomial is required to prevent the particle from getting too close to the origin where there is a large repulsive potential from the term. Since we do not yet know which power of to use in a polynomial, we try where and need to be found, while A is a normalization constant. This form is inserted into the Hamiltonian. First we present the second derivative from the kinetic energy and then the entire Hamiltonian: We equate terms of like powers of The last equation defines The middle equation defines The top equation gives the eigenvalue: once is known. Harmonic Oscillator Given a Gaussian (MIT) Denote the eigenfunctions of the harmonic oscillator as with eigenvalue They are a complete set of states, and we can expand any function in this set. In particular, we expand our function in terms of coefficients The expectation value of the energy is the integral of the Hamiltonian H for the harmonic oscillator: where we used the fact that So the probability of The probability is given by of energy and finally It is easy to show that this quantity is less than unity for any value of and is unity if Harmonic Oscillator ABCs (Stony Brook) a) Here we took since the commutator d) In order to demonstrate that compose the commutator are also eigenstates of by (S.5.15.1). Similarly, Substituting (S.5.15.4) into (S.5.15.5) and replacing we have Rearranging (S.5.15.6) yields as required. A similar calculation gives We see from the above results that the application of the operator a state has the effect of “raising” the state by 1, and the operator lowers the state by 1 (see (f) below). since, by assumption, f) Since by (c), the number operator and the Hamiltonian commute, they have simultaneous eigenstates. Starting with we may generate a number state whose energy eigenvalue is 1 + 1/2 by applying the raising operator again produces a state of eigenvalue 2+1/2. What remains to be done is to see that these eigenstates (number, energy) are properly normalized. If we assume that the state is normalized, then we may compose the inner product Up to an arbitrary phase, we see that Starting with the vacuum ket we can write an energy eigenket g) The energy spectrum is where takes all positive integer values and zero. From (S.5.15.9) and the fact that the norm of the eigenvectors is positive (actually, 1), we see that cannot be negative, and so no negative eigenvalues are possible. Number States (Stony Brook) a) In this problem it is important to use only the information given. We may write the Hamiltonian as (see Problem 5.15), so We may establish the following: Apply the number operator to the state Since a we have b) We see from (S.5.16.2) that the Hamiltonian is just We demonstrated in (a) that is an eigenstate of the number operator is also an eigenstate of the Hamiltonian with eigenvalues given by c) The expectation value may be calculated indirectly. Note that is the potential energy. The expectation values of the potential and kinetic energies are equal for the quantum oscillator, as for time averages in the classical oscillator. Therefore, they are half of the total In this problem, however, you are explicitly asked to use the operators to calculate so we have We proceed to find Thus, the result is the same by both approaches. Coupled Oscillators (MIT) The Hamiltonian of the system is The problem is easily solved in center-of-mass coordinates. So define These new coordinates are used to rewrite the Hamiltonian. It now decouples into separate and parts: has a frequency is an integer. The and eigenvalues and eigenvalues oscillator has a frequency of is an integer. Time-Dependent Harmonic Oscillator I a) At times b) The state the wave function is has even parity: it remains the same if one replaces This is true for all times. c) The average value of the energy is which is independent of time. Time-Dependent Harmonic Oscillator II (Michigan State) a) The time dependence of the wave function is b) The expectation value for the energy is which is independent of time. c) To find the average value of the position operator, we first need to show The expectation value of the position operator oscillates in time. Switched-on Field (MIT) a) Operate on the eigenfunction by the kinetic energy term in the Hamiltonian: Consider the factor the 1 must give the eigenvalue and cancel the potential energy. These two constraints give the identities The normalization constant is determined by b) The solution is given above: c) After the perturbation is added, the Hamiltonian can be solved exactly by completing the square on the where the displacement eigenfunction are The new ground state energy and The harmonic oscillator vibrates about the new equilibrium point the same frequency as before. The constants and are unchanged by d) To find the probability that a particle, initially in the ground state, remains in the ground state after switching on the potential, we employ the sudden approximation. Here we just evaluate the overlap integral of the two eigenfunctions, and the probability is the square of this overlap: Cut the Spring! (MIT) a) Below we give the Hamiltonian the frequency of the particle while coupled to two springs: and the eigenvalues The only change from the harmonic oscillator for a single spring is that, with two identical springs, the effective spring constant is 2K. b) The eigenfunction of the ground state c) When one spring is cut, the particle is now coupled to only a single spring. So we must replace 2K in the above equations by K. The ground state eigenfunction is now Notice that The amplitude I for remaining in the ground state is found, in the sudden approximation, by taking the overlap integral of the two ground state wave functions. The probability of remaining in the ground state is the square of this overlap integral: where we have used in deriving the last line. The probability of remaining in the ground state is close to unity. Angular Momentum and Spin Given Another Eigenfunction (Stony Brook) a) The factor cos momentum of 1. indicates that it is a state which has an angular b) In order to determine the energy and potential, we operate on the eigenstate with the kinetic energy operator. For this gives for the radial The constant in the last term can be simplified to In the limit the potential vanishes, and only the constant term in the kinetic energy equals the eigenvalue. Thus, we find c) To find the potential we subtract the kinetic energy from the eigenvalue and act on the eigenfunction: The potential has an attractive Coulomb term and a repulsive Algebra of Angular Momentum (Stony Brook) b) Since and commute, we will try to find eigenstates with eigenvalues denoted by are real numbers: we know that Anticipating the result, let Form the raising and lowering operators Find the commutators From part (a) we know that value of for the states We now ask what is the eigen- So, these states have the same eigenvalue of value of for these states: In (S.5.23.3) we see that of the states so that Now, examine the eigen- has the effect of raising or lowering the are the corresponding coefficients. As determined above, we know that cannot be applied indefinitely to the state there must be an such that and apply Either the state to (S.5.23.4): is zero or and since We knew that the only solution is was real, but now we have Triplet Square Well (Stony Brook) Since the two spins are parallel, they are in a spin triplet state with The spin eigenfunction has even parity. The two-electron wave function is written as an orbital part times the spin part. The total wave function must have odd parity. Since the spin has even parity, the orbital part must have odd parity: the interaction potential acts only between the electrons, it is natural to write the orbital part in center-of-mass coordinates, where The problem stated that the total momentum was zero, so set must now determine the form for the relative eigenfunction It obeys the Schrödinger equation with the reduced mass is the electron mass: We have reduced the problem to solving the bound state of a “particle” in a box. Here the “particle” is the relative motion of two electrons. However, since the orbital part of the wave function must have odd parity, we need to find the lowest energy state which is antisymmetric, Bound states have where the binding energy two wave vectors: for outside the box, when the particle is in the box, The lowest antisymmetric wave function is We match the wave function and its derivative at one edge, say which gives two equations: We divide these two equations, which eliminates the constants A and B. The remaining equation is the eigenvalue equation for Since and are both positive, the cotangent of must be negative, which requires that This imposes a constraint for the existence of any antisymmetric bound state: Any attractive square well has a bound state which is symmetric, but the above condition is required for the antisymmetric bound state. Dipolar Interactions (Stony Brook) a) We assume the magnetic moment is a vector parallel to the spin with a is a constant. Then we write the Hamiltonian The second term contains only components since the vector a is along b) We write we have c) The addition of two angular momenta with which are 0 or 1: so we can write gives values of S there are three possible eigenvalues of gives an energy of there is one eigenvalue of and this state has zero Potential (MIT) a) The spin operator is For spin 1/2 the expression becomes, for Pauli matrices, where is the unit matrix. The total spin operator for the two-particle system is For the spin singlet state triplet state while for the spin b) The potential is repulsive for the triplet state, and there are no bound states. There are bound states for the singlet state since the potential is attractive. For the hydrogen atom the potential is and the eigenvalues Our two-particle bound state has instead of and the reduced mass instead of the mass so we have the eigenvalues Three Spins (Stony Brook) a) We use the notation that the state with three spins up is is the state with We operate on this with the lowering operator which shows that the states with lower values of M are b) From the definition of The matrix c) Because we deduce that is the Hermitian conjugate of we can construct d) To find the matrix we square each of the three matrices and add them. This gives where is the 4 × 4 unit matrix. This is what one expects, since the eigenvalue of is J(J + 1), which is 15/4 when J = 3/2. Constant Matrix Perturbation (Stony Brook) a) Define where is the eigenvalue. We wish to diagonalize the matrix by finding the determinant of When confronted by a cubic eigenvalue equation, it is best first to try to guess an eigenvalue. The obvious guesses are The one that works so we factor this out to get We call these eigenvalues respectively. When we construct the eigenfunctions, only the one for is unique. Since the first two have degenerate eigenvalues, their eigenvectors can be constructed in many different ways. One choice is b) Since the three states expand the initial state as form a complete set over this space, we can To find the amplitude in state we operate on the above equation with The probability P is found from the absolute-magnitude-squared of this amplitude: Rotating Spin (Maryland, MIT) Let us quantize the spin states along the down are denoted by The eigenstates of so that spin up and spin for pointing along the for the At time we start in state Later this state becomes The amplitude for pointing in the negative is found by taking the matrix element with The probability is the square of the absolute magnitude of this amplitude: Nuclear Magnetic Resonance (Princeton, Stony a) Let denote the probability of spin up and spin down as a function of time. The time-dependent Hamiltonian is The equations for the individual components are where the overdots denote time derivatives. We solve the first equation for and substitute this into the second equation: We assume that We determine the eigenvalue frequency by inserting the above form for into (S.5.30.8), which gives a quadratic equation for that has two We have introduced the constants They are not all independent. Inserting these forms into the original differential equations, we obtain two relations which can be used to give This completes the most general solution. Now we apply the initial conditions that the spin was pointing along the This gives which makes which gives These two conditions are sufficient to find The probability of spin up is and that of spin down is b) In the usual NMR experiment, one chooses the field in which case spin oscillates slowly between the up and down states. so that Variational Calculations Anharmonic Oscillator (Tennessee) Many possible trial functions can be chosen for the variational calculation. Choices such as exp are poor since they have an undesirable cusp at the origin. Instead, the best choice is a Gaussian: where the potential in the problem is We evaluate the three integrals in (A.3.1)–(A.3.4). We have used (A.3.1) to derive the last expression. Now we find the minimum energy for this choice of trial function by taking the derivative with respect to the variational parameter This result for Denote by the value at this is higher than the exact eigenvalue. Linear Potential I (Tennessee) The potential V is symmetric. The ground state eigenfunction must also be symmetric and have no cusps. A simple choice is a Gaussian: where the variational parameter is and A is a normalization constant. Again we must evaluate the three integrals in (A.3.1)–(A.3.4): The minimum energy is found at the value with respect to is a minimum: where the energy derivative Linear Potential II (MIT, Tennessee) The wave function must vanish in either limit that acceptable variational trial functions are where the prefactor ensures that the trial function vanish at the origin. In both cases the variational parameter is We give the solution for the first one, although either is acceptable. It turns out that (S.5.33.2) gives a higher estimate for the ground state energy, so (S.5.33.1) is better, since the estimate of the ground-state energy is always higher than the exact value. The ground state energy is obtained by evaluating the three integrals in The optimal value of is obtained by finding the minimum value Note that this result is also the first asymmetric state of the potential in Problem 5.32. Return of Combined Potential (Tennessee) a) The potential contains a term which diverges as The only way integrals such as are well defined at the origin is if this divergence is canceled by factors in In particular, we must have at small This shows that the wave function must vanish at This means that a particle on the right of the origin stays there. b) The bound state must be in the region since only here is the attractive. The trial wave function is where the variational parameter is (A.3.1)–(A.3.4), where the variable We evaluate the three integrals in The minimum energy is obtained by setting to zero the derivative of . with respect to This gives the optimal value and the minimum energy Quartic in Three Dimensions (Tennessee) The potential is spherically symmetric. In this case we can write the wave function as a radial part times angular functions. We assume that the ground state is an and the angular functions are which is a constant. So we minimize only the radial part of the wave function and henceforth ignore angular integrals. In three dimensions the integral in spherical coordinates is The factor from the angular integrals. It occurs in every integral and drops out when we take the ratio in (A.3.1). So we just evaluate the part. Again we choose the trial function to be a Gaussian: The three integrals in (A.3.1)–(A.3.4) have a slightly different form in three Note the form of the kinetic energy integral K, which again is obtained by an integration by parts. Again set the derivative of equal to zero. This determines the value which minimizes the energy: Halved Harmonic Oscillator (Stony Brook, Chicago (b), Princeton (b)) a) Using the Rayleigh–Ritz variation principle, calculate tation value of the ground state energy as a function of the expec- First calculate the denominator of (S.5.36.1): So our trial function is already normalized. Continuing with the numerator of (S.5.36.1), we have where we set Evaluate the integral So, we have To minimize this function, find corresponding to We should have the inequality (see Problem 5.33) is the true ground state energy. b) To find the exact ground state of the system, notice that odd wave functions of a symmetric oscillator problem (from to ) will also be solutions for these boundary conditions since they tend to zero at Therefore, the ground state wave function of this halved oscillator will correspond to the first excited state wave function of the symmetrical oscillator. The wave function can easily be obtained if you take the ground state and act on it by the creation operator (see Problem 5.16): The ground state energy of our halved oscillator will in turn correspond to the first excited state energy of the symmetrical oscillator: Comparing this result with that of (a), we see that the inequality (S.5.36.13) holds and that our trial function is a fairly good approximation, since it gives the ground state energy to within 15% accuracy. Helium Atom (Tennessee) In the ground state of the two-electron system, both orbitals are in 1s states. So the spin state must be a singlet The spin plays no role in the minimization procedure, except for causing the orbital state to have even parity under the interchange of spatial coordinates. The two-electron wave function can be written as the product of the two orbital parts times the spin part: where is the Bohr radius and is the variational parameter. The orbitals are normalized to unity. Each electron has kinetic (K) and potential (U) energy terms which can be evaluated: is the Rydberg energy. The difficult integral is that due to the electron–electron interaction, which we call V: First we must do the angular integral over the denominator. If larger of then the integral over a solid angle gives is the In the second integral we have set makes the integrals dimensionless. Then we have split the two parts, depending on whether is smaller or greater than The first has a factor from the angular integrals, and the second has a factor One can exchange the order of integration in one of the integrals and demonstrate that it is identical to the other. We evaluate only one and multiply the result by 2: This completes the integrals. The total ground state energy bergs is in Ryd- We find the minimum energy by varying Denote by the value of at which is a minimum. Setting to zero the derivative of respect to yields the result The ground state energy is Perturbation Theory Momentum Perturbation (Princeton) The first step is to rewrite the Hamiltonian by completing the square on the momentum operator: The constant just shifts the zero of the momentum operator. The rewritten Hamiltonian in (S.5.38.1) suggests the perturbed eigenstates: The action of the displaced momentum operator on the new eigenstates so the Hamiltonian gives and the eigenvalues are simply Ramp in Square Well (Colorado) a) For a particle bound in a square well that runs from the eigenfunction and eigenvalue for the lowest energy state are The eigenfunction is symmetric and vanishes at the walls of the well. b) We use first-order perturbation theory to calculate the change in energy from the perturbation: Circle with Field (Colorado, Michigan State) The perturbation is if we assume the field is in the The same result is obtained if we assume the perturbation is in the In order to do perturbation theory, we need to find the matrix element of the perturbation between different eigenstates. For first-order perturbation theory we need The eigenvalues are unchanged to first-order in the field E. To do second-order perturbation theory, we need off-diagonal matrix If we recall that then we see that can only for the integral to be nonzero. In doing second-order perturbation theory for the state the only permissible intermediate states are This solution is valid for states For the ground state, with state does not exist, so the answer is Rotator in Field (Stony Brook) a) The eigenfunctions and eigenvalues are b) The electric field interacts with the dipole moment to give an interaction This problem is almost identical to the previous one. The quantity the previous problem is changed to the moment I in the present problem. The perturbation results are similar. The first-order perturbation vanishes The second-order perturbation is given by (S.5.40.3) and (S.5.40.4) after changing to I and Finite Size of Nucleus (Maryland, Michigan State, Princeton, Stony Brook) a) To find the potential near the nucleus, we note Gauss’s law, which states that for an electron at a distance from the center of a spherical charge distribution, the electric field is provided only by those electrons inside a sphere of radius this is the charge whereas for it is just the charge Z . Thus, we find for the derivative of the potential energy: is a constant of integration. We chose potential continuous at to make the b) For a single electron bound to a point nucleus, we can use hydrogen wave c) The first-order change in the ground state wave energy is For any physical value of Z, the parameter is very much smaller than unity. One can evaluate the above integral as an expansion in and show that the first term is so the answer is approximately U and Perturbation (Princeton) The result from first-order perturbation theory is obtained by taking the integral of the perturbation with the ground state wave function The ground state energy is The first term in has odd parity and integrates to zero in the above expression. The second term in has even parity and gives a nonzero contribution. In this problem it is easiest to keep the eigenfunctions in the separate basis of rather than to combine them into In one dimension the average of we have This is probably the simplest way to leave the answer. This completes the discussion of first-order perturbation theory. The other term contributes an energy of in secondorder perturbation theory. The excited state must have the symmetry of which means it is the state This has three quanta excited, so it has an energy Now we combine the results from first- and second-order perturbation theory: Relativistic Oscillator (MIT, Moscow Phys-Tech, Stony Brook (a)) a) The classical Hamiltonian is given by relativistic Hamiltonian may be expanded as follows: whereas the The perturbation to the classical Hamiltonian is therefore First solution: For the nonrelativistic quantum harmonic oscillator, we have are operators. Defining new operators Q, P, and noting the commutation relations we may rewrite (S.5.44.2) as Introducing the standard creation and annihilation operators (see Problems 5.15 and 5.16): we find that Using these results, we may express the first-order energy shift The expansion of is simplified by the fact that Finally, we obtain Second solution: Instead of using operator algebra, we can find a wave in the momentum representation, where The Hamiltonian then is The Schrödinger equation for This equation has exactly the same form as the standard oscillator Schrödinger equation: We then obtain for the momentum probability distribution for the ground Using the old “differentiate with respect to an innocent parameter method” of simplifying an integral, we may rewrite where we substituted (S.5.44.10) into (S.5.44.11) and let as found in the first solution. b) The first-order energy shift from would be zero (no diagonal elements in the matrix). The leading correction would be the second-order shift as defined by the formula means sum over From (S.5.44.3) and (S.5.44.4), we As for any second-order correction to the ground state, it is negative. To make this expression equal to the one in part (a), we require that Spin Interaction (Princeton) In first-order perturbation theory the change in energy is and the matrix element of is zero for the ground state The first excited state is three-fold degenerate: denote these states as In this notation the matrix elements are In second-order perturbation theory where the unit matrix is energy, to second order. Each spin state has the same Spin–Orbit Interaction (Princeton) a) In three dimensions the lowest eigenvalue of the harmonic oscillator is which can be viewed as from each of the three dimensions. The ground state has s-wave symmetry. The lowest excited states have There are three of them. They have and are the states b) In the spin–orbit interaction we take the derivative The matrix element evaluate the factor and find is a constant, which simplifies the calculation. We by defining the total angular momentum J as For the ground state of the harmonic oscillator, The above expectation value of is zero. The ground state is unaffected by the spin–orbit interaction, although it is affected by relativistic corrections (see Problem 5.44) as well as by other states (see Problem 5.45). The first excited states have so that we find that we find that Interacting Electrons (MIT) a) The wave function for a single electron bound to a proton is that of the hydrogen atom, which is where is the Bohr radius. When one can neglect the Coulomb repulsion between the two electrons, the ground state energy and eigenfunctions are The last factor in (S.5.47.3) is the spin-wave function for the singlet in terms of up and down spin states. Since the spin state has odd parity, the orbital state has even parity, and a simple product function is correct. The eigenvalue is twice the Rydberg energy b) The change in energy in first-order perturbation theory is The orbital part of the matrix element is where the final integration variable is Next we evaluate the spin part of the matrix element. The easiest way is to use the definition of the total spin to derive where for spin-1/2 particles, such as electrons, the two spins are in an state, the expectation value Combining this with the orbital contribution, we estimate the perturbed ground state energy to be Stark Effect in Hydrogen (Tennessee) We use the notation to describe the four orbital states: the s-state and the three Spin is not affected by this perturbation and plays no role in the calculation. For degenerate perturbation theory we must evaluate the 10 different matrix elements which occur in the symmetric 4 × 4 matrix. The interaction potential is One can use parity and other group theory arguments to show that only one matrix element is nonzero, and we call it Since the two states have no matrix elements with the other two states, we can omit them from the remaining steps in the calculation. Thus we must find the eigenvalues of a 2 × 2 matrix for the states This matrix has eigenvalues The perturbation splits the fourfold state into states with eigenvalues Since is proportional to the electric field, the energies split linearly with The matrix element can be evaluated by using the explicit representation for the eigenstates of the hydrogen atom: The angular integral gives 2/3, and Hydrogen with Electric and Magnetic Fields (MIT) We use the same notation as in Problem 5.48 to describe the four orbital states: the s-state is and the three Here again, spin is not affected by this perturbation. As in Problem 5.48, we must evaluate the 10 different matrix elements occur in the symmetric 4 × 4 matrix. One interaction potential is One can use parity and other group theory arguments to show that the only nonzero matrix elements are One can show that and are equal to within a phase factor. We ignore this phase factor and call them equal. The evaluation of this integral was demonstrated in the previous solution. The result here is compared to the one in the previous problem. To first order in the magnetic field, the interaction is given by In spherical coordinates the three unit vectors for direction are In these units the vector potential can be written as the momentum operator in this direction is where the cyclotron frequency is The magnetic field is a diagonal perturbation in the basis Now the state has no matrix elements for these interactions and is unchanged by these interactions to lowest order. So we must diagonalize the 3 × 3 interaction matrix for the three states states are initially fourfold degenerate. The double perturbation leaves two states with the same eigenvalue while the other two are shifted by Note that so that, in the absence of the magnetic field, the result is the same as in Problem 5.48. Hydrogen in Capacitor (Maryland, Michigan For time-dependent perturbations a general wave function is where the For the time-dependent perturbation From Schrödinger’s equation we can derive an equation for the time development of the amplitudes If the system is initially in the ground state, we have and the other values of are zero. For small perturbations it is sufficient to solve the equation for The general probability that a transition is made to state is given by This probability is dimensionless. It should be less than unity for this theory to be valid. a) For the state the probability is zero. It vanishes because the matrix element of is zero: because of parity. Both Sstates have even parity, and has odd parity. b) For the state the transition is allowed to the orbital state, which is called The matrix element is similar to the earlier problem for the Stark effect. The 2P eigenstate for in (S.5.48.5) and that for the 1S state is exp The integral is the Bohr radius of the hydrogen atom. Harmonic Oscillator in Field (Maryland, Michigan State) We adopt (S.5.50.4) and (S.5.50.5) for the time-dependent perturbation theory. Now we label the eigenstates with the index for the harmonic oscillator state of energy and write the equation satisfied by the time-dependent amplitudes We need to evaluate the matrix element of between the states of the harmonic oscillator. It is only nonzero if terms of raising and lowering operators, a) If the initial state is state for is given by then the amplitude of the The last equation is the probability of ending in the state if the initial state is This expression is valid as long as it is less than 1 or b) The state cannot be reached by a single transition from since the matrix element can be reached by a two-step process. It can be reached from is excited from The matrix element is so we have that Note that Similarly, one can show that the total probability, when summed over all transitions, cannot exceed 1. Therefore, we define a normalized probability Decay of Tritium (Michigan State) We use the sudden approximation to calculate the probability that the electron remains in the ground state. One calculates the overlap integral of the initial and final wave functions, and its square is the probability. The ground states in the initial and final states are called is the Bohr radius: Bouncing Ball (Moscow Phys-Tech, Chicago) The potential energy here is We can apply the quasi-classical (WKB) approximation between points with the quasi-classical function applicable all the way to The wave function is given by On the other hand, for Imposing the condition We know that in this approximation Truncated Harmonic Oscillator (Tennessee) a) If C is the turning point, to be found later, then the WKB formula in one dimension for bound states is where we have used the truncated harmonic oscillator potential for The constant C is the value of where the argument of the square root changes sign, which is The integral on the left equals The easiest way to see this result is to use the change of variables and the integrand becomes between 0 and (Actually, just note that this is the area of a quadrant of a disk of radius C). We get b) The constraint that there be only one bound state is that This gives the following constraints on the last constant in the energy expression: Stretched Harmonic Oscillator (Tennessee) We use (S.5.54.1) and (S.5.54.2) as the basic equations. The turning point C is where the argument of For the present potential the turning point is The integral in (S.5.54.1) has three regions. In the interval is a constant, and the integral is just is nonzero in the two intervals the WKB integral is symmetric, we get The potential To evaluate the second integral, change variables to The last integral equals we find We have to determine E. Equation (S.5.55.5) is a quadratic equation for the variable Solving the quadratic by the usual formula gives the final Ramp Potential (Tennessee) We use (S.5.54.1) and (S.5.54.2) as the starting point. In the present problem, Since the integral is symmetric, we can write it as Remembering that we obtain the final result: a) Since Charge and Plane (Stony Brook) we may write In the WKB approximation or, between turning points, Substituting (S.5.57.1) into (S.5.57.2) and using the symmetry of the motion we obtain b) For the potential where the quantization condition gives c) Using the boundary conditions at we obtain It implies that the odd states, for which are not affected by while even states should satisfy the condition this condition takes the following form: Ramp Phase Shift (Tennessee) The following formula is for the phase shift in one dimension where the particle is free on the right and encounters an impenetrable barrier near the origin: The factor is the phase change when the particle goes through the turning point where For the present problem we have that and this part of the integral exactly cancels the term the potential is assuming that The turning point is so we Parabolic Phase Shift (Tennessee) Again we use (S.5.58.1) for the phase shift. The potential in the present problem is zero for The integral in this region cancels the To the left of the origin, the turning point is The integral over again equals energy and has a constant term. The phase shift is linear with Phase Shift for Inverse Quadratic (Tennessee) Again we use (S.5.58.1) for the phase shift. The turning point is The phase integral is The last integral is found in standard tables. To evaluate the phase shift, we need to evaluate this expression in the limit which gives So the final expression for the phase shift is The phase shift is independent of energy. Scattering Theory Step-Down Potential (Michigan State, MIT) Denote by the momentum of the particle to the right of the origin, and is momentum on the left. Since energy is conserved, we have Now we set up the most general form for the wave function, assuming the incoming wave has unit amplitude: Matching the wave function and its derivative at the origin gives two equations for the unknowns R and T which are solved to find R: Step-Up Potential (Wisconsin-Madison) Write the energy as left of zero. Since where is the wave vector on the define a wave vector on the right as a) The wave functions on the left and right of the origin are where and are the amplitudes of the reflected and transmitted waves. Matching the wave function and its slope at gives two equations: These two equations are solved to obtain and : b) The particle currents are the velocities times the intensities. The velocities are on the left and on the right: The last expression equals the current of the incoming particle. Repulsive Square Well (Colorado) a) If the radial part of the wave function is then define Since R is well behaved at in this limit. The function obeys the following equation for and the theta function is 1 if and 0 if solutions are in the form of Instead, write it as where the phase shift is define a constant according to Then the eigenfunction is the constraint that forces the choice of the hyberbolic sine function. Matching the eigenfunction and slope at Dividing these equations eliminates the constants A and B. The remaining equation defines the phase shift. b) In the limit that the argument of the arctangent vanishes, since the hyperbolic tangent goes to unity, and c) In the limit of zero energy, we can define To find the part of the cross section at low energy, we start with where the total cross section is 3D Delta Function (Princeton) For a particle of wave vector of the wave function is Schrödinger’s equation for the radial part scattering is important at very low energies, so solve for Also define and get is well behaved, so functions to be Thus we choose our wave The quantity is the phase shift. We match the wave functions at The formula for matching the slopes is derived from (S.5.64.2): Matching the function and slope produces the equations which are solved to eliminate A and B and get In the limit of low energy, we want We assume there are no bound states so that where is a constant. We find in this limit: We also give the formula for the cross section in terms of the scattering The assumption of no bound state is that Two-Delta-Function Scattering (Princeton) Let us take an unperturbed wave function of the particle of the form Suppose that, after scattering, the wave vector becomes approximation, the scattering amplitude In the Born (see, for instance, Landau and Lifshitz, Quantum Mechanics, Sect. 126), (see Figure S.5.65). Substituting the into (S.5.65.2), we obtain is the projection of the vector on the z axis. The scattering cross section In order to apply the Born approximation, i.e., to use perturbation theory, we must satisfy at least one of two conditions: is the range of the potential. The first condition derives from the requirement that the perturbed wave function be very close to the unperturbed wave function. Inequality (S.5.65.5) may also be considered the requirement that the potential be small compared to the kinetic energy of the particle localized at the source of the perturbation. Even if the first condition is not satisfied, particles with large enough will also justify the Born approximation. Scattering of Two Electrons (Princeton) We evaluate the scattering in the Born approximation, which is valid when the kinetic energies are much larger than the binding energy. The Fourier transform of the potential is and the formula for the total cross section vector is of electrons with initial wave This cross section is suitable for classical particles, without regard to spin. The specification to the spin states S = 0, 1 is made below. Write where is the solid angle of the scattering. The differential cross section is found by taking the functional derivative of the cross section with respect to this solid angle: where we have used the fact that is defined by The magnitudes of the vectors are the same, (see Problem 5.65 and Figure S.5.65). All of the dimensional factors are combined into the Bohr radius Now we consider how this formula is altered by the spin of the electrons. Spin is conserved in the scattering, so the pair of electrons has the same spin state before and after the collision. a) For S = 0 the two electrons are in a spin singlet which has odd parity. Hence, the orbital state must have even parity. The initial and final orbital wave functions are given below, along with the form of the matrix element. The relative coordinate is r: The matrix element has two factors. b) For S = 1 the spins are in a triplet state which has even parity. The orbital part of the wave function has odd parity. There is a minus sign between the two terms in (S.5.66.4) instead of a plus sign, and ditto for the final wave function. Now the differential cross section is There is a relative minus sign between the two term in the matrix element. Spin-Dependent Potentials (Princeton) In the first Born approximation the scattering is proportional to the square of the matrix element between initial and final states. If the initial wave vector is and the final one is and evaluate where we have written the transverse components of momentum in terms of spin raising and lowering operators. The initial spin is pointing along the direction of the initial wave vector which we define as the Let us quantize the final spins along the same axis. Now consider how the three factors scatter the spins: a) The term A is spin independent. It puts the final spin in the same state as the initial spin. is a diagonal operator, so the final spin is also along the initial direction, and this term has a value of flips the spin from and contributes a matrix element to the final state with the spin reversed. gives a matrix element of zero since the initial spin cannot be When we take the magnitude squared of each transition and sum over final states, we get the factors for spins of The differential cross section is written as We have used the fact that energy is conserved, so (see Problem 5.65 and Figure S.5.65). to set Rayleigh Scattering (Tennessee) a) The formula for the total cross section We write is the solid angle. The differential cross section is obtained by taking a functional derivative with respect to There remains only the integral, which is eliminated by the delta function for energy conservation: where the vector differs from only in direction. b) With the assigned choice of the matrix element we write our differential cross section as where the factor S is the average over initial polarizations and the sum over final polarizations. There are two possible polarizations, and both are perpendicular to the direction of the photon. These averages take the form The factor 1/2 is from the average over initial polarization. The angle between the directions of and Scattering from Neutral Charge Distribution a) The particle scatters from the potential energy the charge distribution form of which is related to is the Fourier transform of is the Fourier transThe differential cross section in the Born approximation is b) In forward scattering we take In order that the cross section have a nondivergent result in this limit, we need to find To obtain this result, we examine the behavior of at small values of Consider the three terms in brackets: (i) the 1 vanishes since the distribution is neutral; (ii) the second term vanishes since the distribution is spherically symmetric; (iii) the last term gives an angular average and the integral of is A. The cross section in forward scattering is c) The charges in a hydrogen atom are the nucleus, which is taken as a delta function at the origin, and the electron, which is given by the square of the ground state wave function is the Bohr radius. Spherical Box with Hole (Stony Brook) In spherical coordinates the eigenfunctions for noninteracting particles of wave vector are of the form are spherical Bessel functions. The constants A and B are determined by the boundary conditions. Since we were only asked for the states with we only need We can take a linear combination of these functions, which is a particular choice of the ratio B/A, to make the wave function vanish at This satisfies the boundary condition at vanish at Requiring that this function Attractive Delta Function in 3D (Princeton) a) The amplitude of the wave function is continuous at the point the delta function. For the derivative we first note that the eigenfunctions are written in terms of a radial function and angular functions: Since the delta function is only for the radial variable only the function has a discontinuous slope. From the radial part of the kinetic energy operator we integrate from This formula is used to match the slopes at b) In order to find bound states, we assume that the particle has an energy given by where needs to be determined by an eigenvalue equation. The eigenfunctions are combinations of exp In order to be zero at and to vanish at infinity, we must choose the form We match the values of results of part (a): We match the derivative, using the We eliminate the constants A and B and obtain the eigenvalue equation for which we proceed to simplify: This is the eigenvalue equation which determines as a function of parameters such as D, etc. In order to find the range of allowed values of D for bound states, we examine The right-hand side of (S.5.71.9) goes to 1, which is its largest value. So, the constraint for the existence of bound states is Ionizing Deuterium (Wisconsin-Madison) The ionization energy of hydrogen is just the binding energy of the electron which is given in terms of the reduced mass of the electron–proton system. The same expression for deuterium contains the reduced mass of the electron–deuteron system: The difference is easily evaluated. The ratio can be used as an expansion parameter: The ratio of masses gives is a small number and Collapsed Star (Stanford) a) Using the 1D Schrödinger equation with the boundary conditions Protons, neutrons, and electrons are so 2 may occupy each energy level, and we have The kinetic energy of a particle To determine which species are relativistic, we wish to find whether We may extract from S.5.73.2. For neutrons: Similarly for protons: Since N, Z < A, both neutrons and protons are non-relativistic. For electrons: The equilibrium value of Z/A obtained in (c) for relativistic electrons gives which still leaves 1 in S.5.73.7. Moreover, if we assume that the electrons are non-relativistic and minimize S.5.73.14 below with electron energy (see S.5.73.9) we will get which contradicts the assumption. So the electrons are relativistic. Alternatively, we can use the result of Problem 4.64, the same as S.5.73.5. b) The ground state energy of the system is given by the sum of energies of all levels, which we may approximate by an integral. We calculate the total energies of non-relativistic particles (neutrons and protons) and relativistic ones (electrons) separately: For 1-D electrons (see Problem 4.64) The total electron energy is where we used for an estimate an electron energy of the form we have already established that they are relativistic. We can obtain a correct value of for them: where we have used the result of (c), star is c) Let The total energy of the We need to find the minimum of the expression Setting the derivative of S.5.73.15 equal to zero gives So the minimum energy corresponds to a star consisting mostly of neutrons. Electron in Magnetic Field (Stony Brook, Moscow Phys-Tech) a) The relationship between the vector potential and magnetic field is does give So this vector potential produces the right field. b) The vector potential enters the Hamiltonian in the form One can show easily that each commute with the Hamiltonian and are constants of motion. Thus, we can write the eigenfunction as plane waves for these two variables, with only the yet to be The Hamiltonian operating on We may write the energy E as and find The energy is given by the component along the magnetic field and the for motion in the plane. The latter contribution is identical to the simple harmonic oscillator in the The frequency is the cyclotron frequency and the harmonic motion is centered at the point which depends upon The eigenvalues and eigenfunctions are are the eigenfunctions for the one-dimensional harmonic oscil- Electric and Magnetic Fields (Princeton) a) Many vector potentials A(r) can be chosen so that the present problem the most convenient choice is Hamiltonian is Thus the The above choice is convenient since only fails to commute with H, so are constants of motion. Both potentials have been made to depend b) Since and energies as are constants of motion, we can write the eigenstates The last equation determines the eigenvalue and eigenfunctions The potential is a combination of linear and quadratic terms in So the motion behaves as a simple harmonic oscillator, where the terms linear in determine the center of vibration. After some algebra we can write the above expression as So, we obtain The total energy is plus the kinetic energy along the of the eigenfunction is a harmonic oscillator c) In order to find the average velocity, we take a derivative with respect to the wave vector This is the drift velocity in the It agrees with the classical Josephson Junction (Boston) a) Take the first of equations (P.5.76.1), and its complex conjugate and multiply them by Subtracting (S.5.76.3) from (S.5.76.2) yields Similarly, from the second of (P.5.76.1), b) Substituting the solutions (S.5.76.1), we obtain the expression for Taking (S.5.76.6), the analogous expression for Subtracting (S.5.76.7) from (S.5.76.8), we obtain c) The battery current This page intentionally left blank This page intentionally left blank Appendix 1: Approximate Values of Physical Constants Some Astronomical Data Mass of the Sun Radius of the Sun Average Distance between the Earth and the Sun Average Radius of the Earth Mass of the Earth Average Velocity of the Earth in Orbit about the Sun Average Distance between the Earth and the Moon Other Commonly Used Units Angstrom (Å) Astronomical Year Room Temperature Appendix 2: Conversion Table from Rationalized MKSA to Gaussian Units Appendix 3: Vector Identities Vector Formulas in Spherical and Cylindrical Coordinates Spherical Coordinates Transformation of Coordinates Transformation of Differentials Square of the Element of Length Transformation of the Coordinates of a Vector Cylindrical Coordinates Transformation of Coordinates Transformation of Differentials Square of the Element of Length Transformation of the Coordinates of a Vector Appendix 4: Legendre Polynomials Rodrigues’ Formula Spherical Harmonics Appendix 5: Harmonic Oscillator The first three eigenfunctions of the harmonic oscillator in one dimension is the oscillator frequency. Appendix 6: Angular Momentum and Spin (Pauli) matrices are while the vector The spin 1 matrices are Appendix 7: Variational Calculations The general procedure for solving variational problems in one dimension is to first evaluate three integrals which are functions of the variational The two expressions for the kinetic energy K can be shown to be equal by an integration by parts. The second expression is usually easier to use, since one has to take a single derivative of the trial function and then square it. Appendix 8: Normalized Eigenstates of Hydrogen Atom Appendix 9: Conversion Table for Pressure Units Appendix 10: Useful Constants Resistivity of copper (T = 300 K) Linear expansion coefficient of copper Surface tension of water (at 293 K) Viscosity of water Heat of vaporization of water (at 373 K, 1 atm) Velocity of sound in air (at 293 K) Si band gap Ge band gap This page intentionally left blank Arfken, G., Mathematical Methods for Physicists, 3rd ed., Orlando: Academic Press, 1985 Ashcroft, N. W., and Mermin, N. D., Solid State Physics, Philadelphia: Saunders, 1976 Callen, H. B., Thermodynamics, New York: John Wiley and Sons, Inc., Chen, M., University of California, Berkeley, Physics Problems, with Solutions, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1974 Cohen-Tannoudji, C., Diu, B., and Laloë, F., Quantum Mechanics, New York: John Wiley and Sons, Inc., 1977 Cronin, J., Greenberg, D., and Telegdi, V., University of Chicago Graduate Problems in Physics, Chicago, University of Chicago Press, 1979 Feynman, R., Leighton, R., and Sands, M., The Feynman Lectures on Physics, Reading, MA, Addison-Wesley, 1965 Goldstein, H., Classical Mechanics, 2nd ed., Reading, MA: Addison-Wesley, Halzen, F., and Martin, A., Quarks and Leptons, New York: John Wiley and Sons, Inc., 1984 Hill, T. L., An Introduction to Statistical Thermodynamics, Reading, MA: Addison-Wesley, 1960 Huang, K., Statistical Mechanics, 2nd ed., New York: John Wiley and Sons, Inc., 1987 Jeans, J., An Introduction to the Kinetic Theory of Gases, Cambridge: Cambridge University Press, 1940 Kittel, C., Introduction to Solid State Physics, 6th ed., New York: John Wiley and Sons, Inc., 1986 Kittel, C., and Kroemer, H., Thermal Physics, 2nd ed., New York: Freeman and Co., 1980 Kozel, S. M., Rashba, E. I., and Slavatinskii, S. A., Problems of the Moscow Physico-Technical Institute, Moscow: Mir, 1986 Kubo, R., Thermodynamics, Amsterdam: North Holland, 1968 Kubo, R., Statistical Mechanics, Amsterdam: North Holland, 1965 Landau, L. D., and Lifshitz, E. M., Mechanics, Volume 1 of Course of Theoretical Physics, 3rd ed., Elmsford, New York: Pergamon Press, 1976 Landau, L. D., and Lifshitz, E. M., Quantum Mechanics, Nonrelativistic Theory, Volume 3 of Course of Theoretical Physics, 3rd ed., Elmsford, New York: Pergamon Press, 1977 Landau, L. D., and Lifshitz, E. M., Statistical Physics, Volume 5, part 1 of Course of Theoretical Physics, 3rd ed., Elmsford, New York: Pergamon Press, 1980 Landau, L. D., and Lifshitz, E. M., Fluid Mechanics, Volume 6 of Course of Theoretical Physics, 2nd ed., Elmsford, New York: Pergamon Press, 1987 Liboff, R. L., Introductory Quantum Mechanics, 2nd ed., Reading, MA: Pergamon Press, 1977 Ma, S. K., Modern Theory of Critical Phenomena, Reading, MA: Benjamin, MacDonald, D. K. C., Noise and Fluctuations, New York: John Wiley and Sons, 1962 Messiah, A., Quantum Mechanics, Volume 1, Amsterdam: North Holland, Newbury, N., Newman, M., Ruhl, J., Staggs, S., and Thorsett, S., Princeton Problems in Physics, with Solutions, Princeton: Princeton University Press, Pathria, R. K., Statistical Mechanics, Oxford: Pergamon Press, 1972 Reif, R., Fundamentals of Statistical and Thermal Physics, New York: McGraw-Hill, 1965 Sakurai, J. J., Modern Quantum Mechanics, Menlo Park: Cummings, 1985 Sakurai, J. J., Advanced Quantum Mechanics, Menlo Park: Benjamin/ Cummings, 1967 Schiff, L. I., Quantum Mechanics, 3rd ed., New York: McGraw-Hill, 1968 Sciama, D. W., Modern Cosmology, New York: Cambridge University Press, 1971 Sciama, D. W., Modern Cosmology and the Dark Matter Problem, New York: Cambridge University Press, 1994 Schiff, L. I., Quantum Mechanics, 3rd ed., New York: McGraw-Hill, 1968 Shankar, R., Principles of Quantum Mechanics, New York: Plenum Press, Sze, S. M., Physics of Semiconductor Devices, New York: John Wiley and Sons, Inc., 1969 Tinkham, M., Introduction to Superconductivity, New York, McGraw-Hill, Tolman, R. C., The Principles of Statistical Mechanics, Oxford: Oxford University Press, 1938 Ziman, J. M., Principles of the Theory of Solids, 2nd ed., Cambridge, Cambridge University Press, 1972
{"url":"https://studyres.com/doc/24473834/-the-language-of-science--sidney-b.-cahn--gerald-d.-mahan...","timestamp":"2024-11-06T01:25:49Z","content_type":"text/html","content_length":"511791","record_id":"<urn:uuid:3929bd15-64bb-4ff3-a9f6-ffd3ce3fb294>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00338.warc.gz"}
Examples of parametric variables and variable formulas in custom components Here you will find some examples that demonstrate how to use parametric variables and variable formulas to create intelligent custom components that adapt to changes in the model. There are some limitations concerning the variable names. • In some of the examples below, we reference variables by name. To be able to correctly reference a variable in your formula, variable name must be 19 characters or shorter. Variables with longer names will not work correctly when referenced. • Variable names cannot contain mathematical operators (+,-,*,/). • You cannot use a mathematical constant, such as PI or e, as a variable name. The examples are independent from each other.
{"url":"https://support.tekla.com/doc/tekla-structures/2021/det_examples_custom_components#GUID-6ADF8153-33F6-4BF5-B671-3D94FB2658BA","timestamp":"2024-11-03T16:22:18Z","content_type":"text/html","content_length":"61083","record_id":"<urn:uuid:f353a079-91a4-43bd-a96c-d3490a30a664>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00708.warc.gz"}
What is valuebet? | BetWasp What is valuebet? (Positive EV bet) Valuebets appear in sports betting when the odds offered by the bookie exceed the actual probability of the given outcome. This makes a punter's bet profitable in the long run. In other words, if the bettor assumes that the probability of the outcome according to his calculations is significantly higher than the probability given by the bookmaker, then such bets are to be called Valuebets. Let's take a look at the valuebets through the example of a tennis match between R. Nadal and N. Djokovic. The bookie set the odds for the win of N. Djokovic at 1.5, and 2.7 - for the win of R. Nadal. Let’s convert these numbers to percentages to understand how likely each player is to win according to the bookmaker: • the probability of N. Djokovic winning: 100 / 1.5 = 67% • the probability of R. Nadal winning: 100 / 2.7 = 37% You analyzed the match on your own, taking into account all the important factors that can affect the outcome of the game. Then you made your calculations of the probability of victory for each player and obtained the following results: • the probability of N. Djokovic winning: 55% • the probability of R. Nadal winning: 45% Now let's compare our results with the ones offered by the bookie. Your calculation shows that the bookmaker overestimated the probability of N. Djokovic winning by 12%: • overestimation of N. Djokovic's win: 67% - 55% = 12% Also, the bookmaker underestimated the probability of R. Nadal winning by 8%: • underestimation of R. Nadal's win: 37% - 45% = 8% If your calculations are correct, then you will have a mathematical advantage, and according to this, a bet on R. Nadal's victory will be considered a value bet. This game ended with the victory of R. Nadal. However, even if your bet turned out to be a winning bet, you can't be 100% sure that it was mathematically profitable and valid. To ensure your bets have an advantage over the bookmaker's line, you need to make over a hundred bets using this approach. The more bets you make, the more accurate your valuebet rate over the bookmaker betting line will be. It is worth noting that the bookmaker's odds display the most accurate values in relation to the probability of outcomes. Therefore, finding Valuebets on your own is not an easy task for an average bettor. In other words, to find valuebets you need to have more information than the bookmaker has, which is almost impossible. But chin up! The algorithms of our Valuebets service already include all the necessary parameters that take into account many factors. Each bet offered by the Valuebets service already has a certain Valuebet's percent. Such bets are guaranteed to bring you profit in the long run.
{"url":"https://www.betwasp.com/newbies/whatis-valuebet","timestamp":"2024-11-08T05:02:53Z","content_type":"text/html","content_length":"16582","record_id":"<urn:uuid:407bd6aa-d541-44e2-8139-8d806d459105>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00302.warc.gz"}
Options Trading: What Are the Greeks? | Binance Academy The Greeks — Delta, Gamma, Theta and Vega — are financial calculations that measure an option's sensitivity to specific parameters. Delta (Δ) shows the rate of change between an option's price and a $1 movement in the underlying asset's price. Gamma (Γ) measures the rate of change of an options delta, based on a $1 change in the underlying asset's price. Theta (θ) measures the sensitivity of an option's price relative to the time it has left to mature (or expire). Vega (ν) measures an option's price sensitivity based on a 1% move in implied Participating in derivatives trading requires more knowledge than in the spot markets. For options trading, the Greeks are among the most important set of new tools to master. They provide a basic framework for managing risk and help you make more informed trading decisions. After familiarizing yourself with the Greeks, you'll be able to better understand options market analysis and take part in wider discussions on puts, calls, and other options topics. What Are Options Contracts? An options contract is a financial instrument that gives you the right — though not the obligation — to purchase or sell an underlying asset at a predetermined price (the strike price); it also has an expiration date. Options contracts fall into two main categories: calls and puts. A call option allows its holder to buy the underlying asset at the strike price within a limited timeframe, while a put option enables its holder to sell the underlying asset at the strike price within a limited time frame. An option's current market price is known as its premium, which its seller (known as a writer) receives as You may have already noticed some similarities if you're familiar with futures contracts. Options offer both hedging and speculative opportunities, and the parties involved take opposing bearish and bullish positions. You may want to lock in a specific price for an underlying asset to better plan your future financial position. You may also want to buy or sell the underlying asset at an advantageous price based on a predicted price movement. What Are the Different Greeks? In options trading, you'll regularly find discussions on the Greeks. These financial calculations measure an option's sensitivity to specific parameters, such as time and volatility. The Greeks help options traders make more informed decisions about their positions and assess their risk. There are four major Greeks used in options trading: Delta, Gamma, Theta, and Vega. Delta (Δ) Delta (Δ) shows the rate of change between an option's price and a $1 movement in the underlying asset's price. The calculation represents the option's price sensitivity relative to a price movement in the underlying asset. Delta ranges between 0 and 1 for call options and 0 and -1 for put options. Call premiums rise when an underlying asset's price increases and fall when the asset's price declines. Put premiums, on the other hand, fall when the underlying asset's price rises and rise when the asset's price drops. If your call option has a delta of 0.75, a $1 increase in the underlying asset's price would theoretically increase the option premium by 75 cents. If your put option has a delta of -0.4, a $1 increase in the underlying asset's price would decrease the premium by 40 cents. Gamma (Γ) Gamma (Γ) measures the rate of change of an options delta based on a $1 change in the underlying asset's price. This makes it the first derivative of delta, and the higher an option's gamma, the more volatile its premium price is. Gamma helps you understand the stability of an option's delta and is always positive for calls and puts. Imagine your call option has a delta of 0.6 and a gamma of 0.2. The underlying asset’s price increases by $1, and its call premium by 60 cents. The option's delta then also adjusts upwards by 0.2 to Theta (θ) Theta (θ) measures the sensitivity of an option's price relative to the time an option has left to mature (or expire). More specifically, an option's theta shows the premium price change per day as it moves towards expiration. Theta is negative for long (or purchased) positions and positive for short (or sold) positions. For the holder, an option's value always diminishes over time ceteris paribus (provided all other things are equal); this applies to both call and put contracts. If your option has a theta of -0.2, its price will change by 20 cents daily the closer it reaches maturity. Vega (ν) Vega (ν) measures an option's price sensitivity based on a 1% move in implied volatility. It relies on a calculation of implied volatility, the market's forecast of a likely movement in the underlying asset's price. Vega is always a positive value because as an option's price increases, its implied volatility also increases ceteris paribus. In general, higher volatility makes options more expensive because there is a greater likelihood of meeting the strike price. An options seller will benefit from a fall in implied volatility, while a buyer will be disadvantaged. Let's look at a basic example: if your option has a vega of 0.2 and the implied volatility rises by 1%, the premium should increase by 20 cents. Can I Use the Greeks for Cryptocurrency Options Contracts? Cryptocurrencies are commonly used as underlying assets with options. Using a cryptocurrency makes no difference when calculating or using the Greeks. However, do note that cryptocurrencies can be highly volatile, which means that Greeks dependent on volatility or direction can also experience large swings. Closing Thoughts With the four major Greeks mastered, you'll be better equipped to assess your risk profile at a glance. Options trading has a relatively high degree of complexity, and understanding tools like the Greeks is essential to trading responsibly. Furthermore, the four Greeks covered here aren't the only ones that exist. You can continue your options studies by exploring the minor Greeks. Further Reading Disclaimer and Risk Warning: This content is presented to you on an “as is” basis for general information and educational purposes only, without representation or warranty of any kind. It should not be construed as financial advice, nor is it intended to recommend the purchase of any specific product or service. Please read our full disclaimer here for further details. Digital asset prices can be volatile. The value of your investment may go down or up and you may not get back the amount invested. You are solely responsible for your investment decisions and Binance Academy is not liable for any losses you may incur. Not financial advice. For more information, see our Terms of Use and Risk Warning.
{"url":"https://academy.binance.com/id/articles/options-trading-what-are-the-greeks","timestamp":"2024-11-05T18:29:54Z","content_type":"text/html","content_length":"248329","record_id":"<urn:uuid:46b581d9-c2d0-4433-914c-460647ade4cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00424.warc.gz"}
Flajolet Martin algorithm for approximate counting Measuring the number of distinct elements from a stream of values is one of the most common utilities that finds its application in the field of Database Query Optimizations, Network Topology, Internet Routing, Big Data Analytics, and Data Mining. A deterministic count-distinct algorithm either demands a large auxiliary space or takes some extra time for its computation. But what if, instead of finding the cardinality deterministically and accurately we just approximate, can we do better? This was addressed in one of the first algorithms in approximating count-distinct introduced in the seminal paper titled Probabilistic Counting Algorithms for Data Base Applications by Philippe Flajolet and G. Nigel Martin in 1984. In this essay, we dive deep into this algorithm and find how wittily it approximates the count-distinct by making a single pass on the stream of elements and using a fraction of auxiliary space. Deterministic count-distinct The problem statement of determining count-distinct is very simple - Given a stream of elements, output the total number of distinct elements as efficiently as possible. In the illustration above the stream has the following elements 4, 1, 7, 4, 2, 7, 6, 5, 3, 2, 4, 7 and 1. The stream has in all 7 unique elements and hence it is the count-distinct of this stream. Deterministically computing count-distinct is an easy affair, we need a data structure to hold all the unique elements as we iterate the stream. Data structures like Set and Hash Table suit this use-case particularly well. A simple pythonic implementation of this approach is as programmed below def cardinality(elements: int) -> int: return len(set(elements)) Above deterministic approach demands an auxiliary space of O(n) so as to accurately measure the cardinality. But when we are allowed to approximate the count we can do it with a fraction of auxiliary space using the Flajolet-Martin Algorithm. The Flajolet-Martin Algorithm The Flajolet-Martin algorithm uses the position of the rightmost set and unset bit to approximate the count-distinct in a given stream. The two seemingly unrelated concepts are intertwined using probability. It uses extra storage of order O(log m) where m is the number of unique elements in the stream and provides a practical estimate of the cardinalities. The intuition Given a good uniform distribution of numbers, the probability that the rightmost set bit is at position 0 is 1/2, probability of rightmost set bit is at position 1 is 1/2 * 1/2 = 1/4, at position 2 it is 1/8 and so on. In general, we can say, the probability of the rightmost set bit, in binary presentation, to be at the position k in a uniform distribution of numbers is The probability of the rightmost set bit drops by a factor of 1/2 with every position from the Least Significant Bit to the Most Significant Bit. So if we keep on recording the position of the rightmost set bit, ρ, for every element in the stream (assuming uniform distribution) we should expect ρ = 0 to be 0.5, ρ = 1 to be 0.25, and so on. This probability should become 0 when bit position, b is b > log m while it should be non-zero when b <= log m where m is the number of distinct elements in the stream. Hence, if we find the rightmost unset bit position b such that the probability is 0, we can say that the number of unique elements will approximately be 2 ^ b. This forms the core intuition behind the Flajolet Martin algorithm. Ensuring uniform distribution The above intuition and approximation are based on the assumption that the distribution of the elements in the stream is uniform, which cannot always be true. The elements can be sparse and dense in patches. To ensure uniformity we hash the elements using a multiplicative hash function where a and b are odd numbers and c is the capping limit of the hash range. This hash function hashes the elements uniformly into a hash range of size c. The procedure The procedure of the Flajolet-Martin algorithm is as elegant as its intuition. We start with defining a closed hash range, big enough to hold the maximum number of unique values possible - something as big as 2 ^ 64. Every element of the stream is passed through a hash function that permutes the elements in a uniform distribution. For this hash value, we find the position of the rightmost set bit and mark the corresponding position in the bit vector as 1, suggesting that we have seen the position. Once all the elements are processed, the bit vector will have 1s at all the positions corresponding to the position of every rightmost set bit for all elements in the stream. Now we find the position, b, of the rightmost 0 in this bit vector. This position b corresponds to the rightmost set bit that we have not seen while processing the elements. This corresponds to the probability 0 and hence as per the intuition will help in approximating the cardinality as 2 ^ b. # Size of the bit vector L = 64 def hash_fn(x: int): return (3 * x + 5) % (2 ** L) def cardinality_fm(stream) -> int: # we initialize the bit vector vector = 0 # for every element in the stream for x in skream: # compute the hash value bounded by (2 ** L) # this hash value will ensure uniform distribution # of elements of the stream in range [0, 2 ** L) y = hash_fn(x) # find the rightmost set bit k = get_rightmost_set_bit(y) # set the corresponding bit in the bit vector vector = set_bit(vector, k) # find the rightmost unset bit in the bit vector that # suggests that the probability being 0 b = rightmost_unset_bit(vector) # return the approximate cardinality return 2 ** b Although the above algorithm does a decent job of approximating count-distinct it has a huge error margin, which can be fixed by averaging the approximations with multiple hash functions. The original Flajolet-Martin algorithm also suggests that the final approximation needs a correction by dividing the approximation by the factor ϕ = 0.77351. The algorithm was run on a stream size of 1048 with a varying number of distinct elements and we get the following plot. From the illustration above we see that the approximated count-distinct using the Flajolet-Martin algorithm is very close to the actual deterministic value. A great feature of this algorithm is that the result of this approximation will be the same whether the elements appear a million times or just a few times, as we only consider the rightmost set bit across all elements and do not sample. Unique words in Thee Jungle Book The algorithm was run on the text dump of The Jungle Book by Rudyard Kipling. The text was converted into a stream of tokens and it was found that the total number of unique tokens was 7150. The approximation of the same using the Flajolet-Martin algorithm came out to be 7606 which in fact is pretty close to the actual number.
{"url":"http://edge.arpitbhayani.me/blogs/flajolet-martin","timestamp":"2024-11-05T23:12:38Z","content_type":"text/html","content_length":"33393","record_id":"<urn:uuid:fe46d739-4e88-45a1-98c3-6845688daf54>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00846.warc.gz"}
Need to look for $99,000-$249,000 in Metrics Sheet. Can't find formula error Need to look for $99,000-$249,000 in Metrics Sheet. Can't find formula error. =COUNTIFS({1. Case Queue Range 6},>"$99,000",<="$249,000") Best Answers • I figured it out thanks. =AVG(COLLECT({Case Consultation Tracker Range 99}, {Case Consultation Tracker Range 99}, >=100000 < 500000, {Case Consultation Tracker 98}, >=DATE(2024, 1, 1))) • @Genevieve P. When did Smartsheet start allowing that syntax?? =AVG(COLLECT({Case Consultation Tracker Range 99}, {Case Consultation Tracker Range 99}, >=100000 < 500000, {Case Consultation Tracker 98}, >=DATE(2024, 1, 1))) • Your syntax is off. Try this instead: =COUNTIFS({1. Case Queue Range 6},AND(@cell>99000,@cell<=249000)) • Thanks this works. =SUMIF({1. Case Queue Investable Assets}, >99999 <= 499999 NOw I have two more I'm trying to get to work: 1. Countif in column Format ="Webinar" and Status="Complete" and it won't:=COUNTIFS({New Project Tracker 2024 Status}, ="Complete", [{New Content Tracker 2024 Format} ="Webinar"]) 2. Another is Projects completed each month. The data is in numerical form as in 1/1/2024. Don't know where to begin. We just want to Count each in Jan, each in Feb. etc. Do I have to specify a date range for each in my metrics sheet? Ugh. Thanks again Lisa • Sorry one more: We have a field that contains several values from a check list. We want to count one of the values (e.g. Estate Planning) in each field that has it. My formula is only picking up the ones in the column by itself. Do I have to use a "Contains" and how do I do that please? ❤️My attempt: =COUNTIF({1. Case Queue Topics}, CONTAINS["College/Education"]) or =COUNTIF({1. Case Queue Topics}, CONTAINS("College/Education")) • Hi @lisalettieri With a COUNTIFS, you'll want to list the {cross sheet range}, then have a comma, then the criteria. [these square] brackets are only for in-sheet column name references. So for: 1. Countif in column Format ="Webinar" and Status="Complete" =COUNTIFS({New Project Tracker 2024 Status}, "Complete", {New Content Tracker 2024 Format}, "Webinar") And then for your second comment, if you're using a multi-select column use HAS instead. =COUNTIF({1. Case Queue Topics}, HAS(@cell, "College/Education")) This says to look in the cell to see if it has the selection "xyz" along with other selections. For your second formula, "Projects completed each month", the data in the source sheet would need to be in a date column for it to easily count the months. If it's numerical, I would suggest having a helper column in your source sheet that extracts the number in between your / and /. Then you can use that helper column in your other COUNT formulas. In order to know how to build the formula to extract the month, we'd need to know if the way you're typing it in is standardized. For example, do you always have 10 characters: DD/MM/YYYY Or is your 1/1/2024 showing that the month is the first value? Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • I figured it out thanks. =AVG(COLLECT({Case Consultation Tracker Range 99}, {Case Consultation Tracker Range 99}, >=100000 < 500000, {Case Consultation Tracker 98}, >=DATE(2024, 1, 1))) • @Genevieve P. When did Smartsheet start allowing that syntax?? =AVG(COLLECT({Case Consultation Tracker Range 99}, {Case Consultation Tracker Range 99}, >=100000 < 500000, {Case Consultation Tracker 98}, >=DATE(2024, 1, 1))) • Hey @lisalettieri & @Paul Newcome So that Syntax actually isn't "allowed" - or it won't function properly. I tested it and the COLLECT function skips over the criteria… meaning it will create an Average and give an output, but the filter won't properly work. Here's an example source sheet: Here's the example output: Notice that the one where >=1 <50 is used, the average is of the whole column without the filter. @lisalettieri try this instead: =AVG(COLLECT({Case Consultation Tracker Range 99}, {Case Consultation Tracker Range 99}, AND(@cell >=100000, @cell < 500000), {Case Consultation Tracker 98}, >=DATE(2024, 1, 1))) Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • I also tested it, and it worked for me though. In the first screenshot, it filtered out the 11 because the [Letter] column has a "b" in it, but the second screenshot has it included after I changed the letter to an "a". The only difference is that I am on same sheet instead of cross sheet references. • Hey @Paul Newcome I believe it's working because of the Letter filter, not the number filter 🙂 Right now your number range is including all values. Try adjusting your number filter and skip the letters. =AVG(COLLECT(Number:Number, Number:Number, >=1 <10)) My guess here is that you will still get 3, even though you should no longer have 11 as part of your criteria. This is the syntax I would recommend: =AVG(COLLECT(Number:Number, Number:Number, AND(@cell >=1, @cell <10))) Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • It still excluded a number that was outside of my range (formula in [Column19])… =AVG(COLLECT(Number:Number, Number:Number, >=1 < 12)) (but I still prefer the AND syntax) • Interesting! It's the 0 that it cannot filter out correctly: Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • I just tested some more. I think the best way to explain it is that it ignores the first argument and only filters based on the second. I changed the criteria to >= 5 < 12 And it pulled in everything less than 12 including those numbers that were not greater than 5. • These worked for referring to another sheet: =COUNTIFS({Case Consultation Tracker Investable Assets}, >=500000 < 1000000, {Case Consultation Tracker Submission Date}, >=DATE(2024, 1, 1), {Case Consultation Tracker Range Status}, ="Active") =MEDIAN(COLLECT({Case Consultation Tracker Investable Assets}, {Case Consultation Tracker Investable Assets}, >=500000 < 1000000, {Case Consultation Tracker Submission Date}, >=DATE(2024, 1, 1))) is returning a number below 500,000! • @lisalettieri Right. Using that syntax, the formula ignores the the first argument. In your particular case, it is ignoring the ">= 500000" portion and only pulling in rows that meet the "< 1000000" criteria. You need to use the Syntax @Genevieve P. previously suggested with the AND function. Help Article Resources
{"url":"https://community.smartsheet.com/discussion/comment/434822","timestamp":"2024-11-13T22:51:57Z","content_type":"text/html","content_length":"459379","record_id":"<urn:uuid:8c7db137-b221-4bb0-b65a-cbd610b7db6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00753.warc.gz"}
Josephus Problem -- from Wolfram MathWorld Given a group of circle under the edict that every circle until only one remains, find the position Josephus[n, m] in the Wolfram Language package Combinatorica` . For example, consider Josephus[4, 2] returns To obtain the ordered list of men who are consecutively slaughtered, InversePermutation can be applied to the output of Josephus. So, in the above example, InversePermutation[Josephus[4, 2]] returns The original Josephus problem consisted of a circle of 41 men with every third man killed ( Another version of the problem considers a circle of two groups (say, "A" and "B") of 15 men each (giving a total of 30 men), with every ninth man cast overboard, illustrated above. To save all the members of the "A" group, the men must be placed at positions 1, 2, 3, 4, 10, 11, 13, 14, 15, 17, 20, 21, 25, 28, 29. Written out explicitly, the order is This sequence of letters can be remembered with the aid of the mnemonic "From numbers' aid and art, never will fame depart." Consider the vowels only, assign If instead every tenth man is thrown overboard, the men from the "A" group must be placed in positions 1, 2, 4, 5, 6, 12, 13, 16, 17, 18, 19, 21, 25, 28, 29. Written out explicitly, which can be constructed using the Latin mnemonic "Rex paphi cum gente bona dat signa serena" (Ball and Coxeter 1987). The following array gives the original position of the last survivor out of a group of (OEIS A032434). The survivor for where floor function and lg is the logarithm to base 2. The first few solutions are therefore 1, 1, 3, 1, 3, 5, 7, 1, 3, 5, 7, 9, 11, 13, 15, 1, ... (OEIS A006257). The original position of the second-to-last survivor is given in the following table for (OEIS A032435). The original position of the third-to-last survivor is given in the following table for (OEIS A032436). Mott-Smith (1954, §153, pp. 96 and 212) discusses a card game called "Out and Under" in which cards at the top of a deck are alternately discarded and placed at the bottom. This is a Josephus problem with parameter
{"url":"https://mathworld.wolfram.com/JosephusProblem.html","timestamp":"2024-11-03T23:40:57Z","content_type":"text/html","content_length":"67668","record_id":"<urn:uuid:bcc0774b-2759-4d89-a227-1f17afdcd1e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00639.warc.gz"}
Understanding Torque Density in Underwater Robotic Manipulators - REACH ROBOTICS Torque is the rotational equivalent of linear force Torque Density is a performance measure of a components torque force against its mechanical parameters. Within context of robotic manipulators, torque density refers to the capability of an actuator to lift a specified amount at a given extension compared with its size and weight. It is accepted that a large, heavy manipulator can lift more than a smaller, lighter one. In the area of field robotics lifting more with a smaller system improves capability and portability, extending the overall usefulness of the system. Tasks such as rotating valves, station keeping the vehicle in current and recovering heavy objects from the seabed all benefit from a manipulator with more torque but compact enough to not significantly impact the performance of the vehicle. Torque Load & Lift Capacity of a Manipulator A key specification of a robotic manipulator is lifting capacity. This is often specified at some distance from the base such as at full reach or a fraction thereof. The more a manipulator can lift the more work it is able to do and the more robust it will be when subjected to external loads.  Each joint on the manipulator experiences a different torque load depending how close to the end effector it is positioned. Ones at the base require the highest torque, whereas the wrist joints near the gripper require the least. Joints at the base do not only need to lift the specified weight but they also need to lift the weight of the manipulator. It is therefore important that the actuators closer to the end are as light as possible while still being able to provide the required torque. If they are not their weight will limit the practical reach of the manipulator. In most applications it is desirable to have the manipulator be as light as possible while being able to lift a significant weight at the desired reach. In the example below J1 torque requirement is made up of the load mass (M1) plus the mass of the second actuators (J2) multiplied by their respective distances. J2 also has torque requirement which is dictated by the load mass (M1) multiplied by distance (D2). It is evident from this example that joint J2 needs to achieve its torque specification without becoming too heavy. As its weight is increased the remaining torque to lift the payload is decreased. This problem is exasperated when considering the more realistic situation of multiple joints, up to 7 in the case of Reach X. Torque required for J2 = M1 x D2 = 14.7Nm; Torque required for J1 = M1 x (D1+D2) + M2 x (D1) = 32.3Nm How is it defined? Comparing the torque of an actuator to its size and weight (Nm/g or Nm/mm^3) provides a good indication of performance. In some applications it is relevant to include velocity such as when a particular load is required to be moved at speed. When including this term, we get the power per weight and size of the actuator (W/g or W/mm^3) which is a more comprehensive indication of performance, rarely provided on datasheets. A higher value means the manipulator will be able to lift more or move faster while weighing less or occupying less space. Torque and velocity have an inverse relationship dictated by the motor parameters and the damping factors within the gearbox. This can be as seen on the graph below highlighting the importance of the power density calculation as specifying a torque may indicate that it is achievable but only at a very low velocity which is of limited use.  For an example the Reach X F Joint, which has the highest torque requirement, has a max torque of 36Nm at 40 degrees per second (0.7rad/s), a volume of 2750mm^3 and a mass of 330g. This gives it a torque density of 0.0131Nm/mm^3 or 0.11Nm/g. The power density is 0.0091W/mm^3 or 0.076W/g noting that this is mechanical power, a more relevant term than electrical power. Factors in increasing Power Density As apparent from the units increasing the power density of an actuator can be achieved by either increasing the output torque or velocity or by decreasing the volume or mass. When designing high performance actuators some of these terms are set according to external factors such as the maximum payload of the host vehicle or the required torque for rotating a valve. Once one is set the challenge is to optimise the other as far as possible to achieve maximum performance.  Torque is most often limited by the gearbox which in turn is dictated by the size and type of gears selected such as strain wave gearing, cycloidal or planetary. The motor makes up a sizeable portion of the volume and mass and therefor selecting one that can saturate the gearbox but remains compact is a critical part of the design process. Once motor and gearbox have been paired other factors such electronics, sliprings and heat dissipation all need to be considered to ensure the required output power can be obtained and the footprint and weight budget is adhered to.
{"url":"https://reachrobotics.com/blog/torque-density-in-robotic-manipulators/","timestamp":"2024-11-03T09:34:22Z","content_type":"text/html","content_length":"364743","record_id":"<urn:uuid:19311e68-579d-4ee0-bc59-056d68231524>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00210.warc.gz"}
🤔 Substitution Cipher 🤔 Substitution Cipher¶ Perhaps one of the earliest uses of computers was in the area of encrypting and decrypting secret messages. the image below is a picture I took in the Deutsches Museum in Germany. It is a picture of the german enigma machine. The operator of this machine would type in a message using the keyboard, and the machine would output the encrypted version of the letter. For example you might type the letter p but the machine would output the letter b. The enigma machine was used by the Germans during World War II to encrypt orders for the movements of troops and german U-boats. Alan Turing (one of the fathers of computer science) worked in England and built the Bombe, one of the first computers that decrypted the messages encrypted by the Enigma. The efforts of Turing and his co-workers helped bring the war to an end and saved many thousands of lives. The enigma encrypted messages using a much more sophisticated code than the previous question. The secret message I gave you uses a simple substitution cipher. I have given you one hint but if you still don’t have it This matching question will reveal a few more hints that should help you solve it. The letters on the left correspond to the letters in the ciphertext and the letters on the right correspond to the plaintext. Drag the letters on the left to their corresponding letters on the right. In this lab we will explore some simple encryption algorithms to both encrypt and decrypt messages. This lab is fun to work with if you have a partner so that one of you can encrypt the message and the other can decrypt the message. Both of you will have to get your program working in order to communicate effectively! Caesar Cipher¶ A cipher is a secret or disguised way of writing, and a caesar cipher is one of the oldest ciphers and is attributed to Julius Caesar who used it in private correspondence. The caesar cipher simply shifts each character by a fixed number of positions down the alphabet. For example a shift of 5 turns A into F If the shift takes you past then end of the alphabet for example a shift of 5 on the letter Y goes to Z, then A, B, C, D for a result of D. If you are thinking of modulo arithmetic then you are definitely on to something! A shift of 13 is very useful and was widely used in the early days of the internet to encrypt unsavory jokes that were posted to the alt.humor.funny newsgroups. The Caesar cipher is often represented by two dials where you can turn the inner dial to match up the letters any way you would like. Write a program that will encrypt the string referenced by the variable plaintext using the caesar cipher with a shift of 13. Store the result in ciphertext. The problem with the caesar cipher is that there are only 25 possible rotations. If you are a clever spy it won’t take you very long to write a python program that can try all possible combinations to unlock the plaintext message. In fact, here is a new cipher text for you to unscramble. I’ll admit to you that it is using a caesar cipher but I won’t tell you the shift. Can you find the plaintext message and figure out how much the original message was shifted? The art of breaking codes is called cryptology. Write a program that will figure out As you have just discovered, the Caesar cipher is not very secure. Even in ancient times I’m sure with enough slave scientists working on the problem in parallel they could decrypt just about any message. We might call the number of characters we shift the key. As we have noted, there are only 26 possible keys! However, if we agree that we can mix up the alphabet into any with the mixed up alphabet acting as the key then we have a much larger set to choose from and it becomes much much harder to break. How many different arrangements of the letters in the alphabet are there? If you said 403,291,461,126,605,635,584,000,000 Then you are correct! That is there are ‘26 factorial’ possible arrangements for the alphabet. Think of it like this: You have 26 letters to choose from as the first letter. Then you have 25 letters to choose as the second and 24 letters for the third etc. So, that is 26*25*24*23…*1 You can even write a loop and have python calculate that if you Scrambled Key¶ \(403 e 10^{24}\) is a pretty very big number, if you could try 100 different arrangements a second how long would it take to try them all? Moving to this system will make our encryption algorithm a bit more difficult. But its not too hard if you think of it this way: Suppose we have our plaintext alphabet as ‘abcdefghijklmnopqrstuvwxyz’ For our caesar cipher instead of doing modulo arithmetic suppose we created a second version of the alphabet but rotated by thirteen ‘nopqrstuvwxyzabcdefghijklm’ Lets put them right on top of each other so we can see the correspondence: Now to encrypt our message we just need to find the letter in the top row and replace it by the letter on the bottom row. This strategy will work for any possible arrangement of the alphabet. Another benefit of this strategy is that we could also include spaces or even punctuation. As long as our ‘alphabet’ and our key are the same length. Write a program that will encrypt the plaintext. Store your encrypted message in the variable ciphertext. Now write a program program that will decrypt the ciphertext. Store your decrypted message in the variable plaintext. Now test yourself a bit further. Write a program that asks the user to enter a key (scrambled alphabet) and a message to encrypt or decrypt. If you work with a partner one can work on decrypting and the other can work on encrypting. Your program should output either the encrypted or decrypted message. If you are the encrypter then email the encrypted message to your partner for them to decrypt. If you are working alone then store the decrypted message in a variable to decrypt. Password to Key (challenge)¶ Finally, only a few truly amazing people are going to remember a random ording of 26 letters. We would like to have a way to use a password of around 7 characters. How can we use a password to scramble our alphabet into some order? Its not as bad as you might think at first. Do the following: 1. Remove any duplicate letters from the password. 2. Now split the alphabet into two halves The letters up to and including the last letter in the password and the rest of the alphabet. 3. Remove any letters in your password from the the two halves of the alphabet. 4. The key is the concatenation of the password (without duplicate letters) followed by the second part of the split alphabet followed by the first part of the alphabet. implement the algorithm outlined above assuming that the user entered ‘password’ for their password. Store the key in a variable called ‘key’. For testing purposes we will assume that no spaces or punctuation are included in the alphabet or the password. Finally, work with your partner so that you can ask for a password and a message, using the password, construct the key, encrypt/decrypt the message and then print out the result. Post Project Questions During this project I was primarily in my... • 1. Comfort Zone • 2. Learning Zone • 3. Panic Zone Completing this project took... • 1. Very little time • 2. A reasonable amount of time • 3. More time than is reasonable Based on my own interests and needs, the things taught in this project... • 1. Don't seem worth learning • 2. May be worth learning • 3. Are definitely worth learning For me to master the things taught in this project feels... • 1. Definitely within reach • 2. Within reach if I try my hardest • 3. Out of reach no matter how hard I try You have attempted of activities on this page
{"url":"https://runestone.academy/ns/books/published/fopp/Projects/encryption.html","timestamp":"2024-11-04T10:37:41Z","content_type":"text/html","content_length":"39191","record_id":"<urn:uuid:f9dcf4f3-7181-4c98-8029-1f6f992a70ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00537.warc.gz"}
pdfMsym – PDF Math Symbols — various drawn mathematical symbols This package defines a handful of mathematical symbols many of which are implemented via PDF’s builtin drawing utility. It is intended for use with pdfTeX and LuaTeX and is supported by XeTeX to a lesser extent. Among the symbols it defines are some variants of commonly used ones, as well as more obscure symbols which cannot be as easily found in other TeX or LaTeX packages. Sources /macros/generic/pdfmsym Version 1.1.1 Licenses MIT License Maintainer Slurp Contained in TeXLive as pdfmsym MiKTeX as pdfmsym Maths symbol Topics Graphics symbols Generic Macros Download the contents of this package in one zip archive (234.9k). Community Comments Maybe you are interested in the following packages as well.
{"url":"https://ctan.org/pkg/pdfmsym","timestamp":"2024-11-10T11:51:46Z","content_type":"text/html","content_length":"16865","record_id":"<urn:uuid:8e67f3e1-62e5-4cf3-8964-82338699f457>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00498.warc.gz"}
Math Forum :: View topic - Order of subgroup generated by two subgroup - www.mathdb.org Math Forum :: View topic – Order of subgroup generated by two subgroup Polam wrote: Suppose H and K are two subgroups of a group G, and that the orders of H and K are 12 and 30 respectively. Which of the following cannot be the order of the subgroup of G generated by H and K, and why? A. 30 B. 60 C. 120 D. 360 E. Countably infinite Any suggestions are welcome. If the subgroup F of G generated by H and K is finite, then Lagrange theorem says that the o(F) is a multiple of o(H) = 12. So o(F) cannot be 30. Polam wrote: By the way, I still don’t see how the order of HK can be infinite. The answer (E) is possible. For example, the group is countably infinite and is generated by its subgroups and . The orders of these two subgroups are 12 and 30. (In this case, HK is not a subgroup.)
{"url":"https://www.mathdb.org/phpbb2/viewtopicphpp3344amp/","timestamp":"2024-11-06T18:03:19Z","content_type":"text/html","content_length":"28205","record_id":"<urn:uuid:9f638ccf-86ff-4a41-a811-d65d0f37a558>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00524.warc.gz"}
Navigation, Manipulation, And Redirection of Files Using The Terminal In this modern day, the Linux community is growing bigger. Unlike before, the GUI (Graphical User Interface) offered by Linux distributions is getting better, which is not inferior to Windows or Maybe this is the main attraction of Linux besides the open source license that are offered in the many applications that commonly be used by this system. However, do you know that the real power of the Linux system lies inside its CLI (Command Line Interface). Why use CLI if I have GUI? For those who are just moving from a system like Windows, this is a very fair question to ask. On Linux systems, there are times when we need to reduce dependence on the GUI. Why is that? Let’s take an example on the server. Today, servers that are generally used are Linux-based servers. The communication to the server is often established using certain protocols such as SSH which can be done directly using CLI. Thus, neither keyboard nor monitor screen is needed. On the other side, the GUI consumes quite a lot of resources, while the server generally operates 24/7 so it’s better not to use the GUI. But, I’m not using the server … Having computer skills never hurts! It is a powerful skill set to have in this information age. By mastering the terminal you can do everything that your computer has to offer, from defending yourself to cyber threats until taking over someone else’s whole system. Well, I’m not encouraging you to do that, but yes, it is all possible just by using a terminal. But first, let’s go to the basics. There are some commands that you need to know. To show your current working directory you can type To enter a child directory you can use cd <directory_name> To go to the parent directory, type cd .. To list everything inside you current working directory (files and subdirectories), you can type Because of its function, the ls is one of the most commonly used commands. ls can be followed with some flags to show the output based on what you need. The flag that you might have used often such as -a (list all files and subdirectories including the hidden ones) and -l (list files in long listing format). Both flags can be used as a combination like this. ls -la By using the -l flag, the displayed output will be in a long list format. By using this command, some informations are added to the output. drwxr-xr-x 5 root root 360 Nov 22 05:43 dev Output Explanation d type rwxr-xr-x file permissions, see more details 5 number of links root owner root group 360 size Nov 22 05:43 modification date and time dev name of file or directory Actually there are many more variations available . To see more detailed information about the ls and its flags, you can use a command like the following. man ls Another thing you can do is making a link between file. ln -s <full_path_to_source_file> <full_path_to_destination_file> By using -s flag, it will create a symbolic link. If not, it will make a hard link. Files and Directories Manipulation There are many commands involved in files and directories manipulations. Some of them are listed below. Command What it does touch Create a file. rm Remove file. rm -rf <folder_name> Recursively and forcefully remove a file or folder. Can also be used to remove a non-empty folder. rmdir <folder_name> Remove an empty directory. cp <target_directory> Copy file or folder to a particular path. mv <target_directory> Move file to certain target directory. mv <new_name> Rename a file or folder. There are also some helpful commands to edit your file. Command What it does nano Edit the content of file using nano editor. vim Another file editor which is more advanced. file Determine file’s type . While using the rm command, you have to be very careful, especially if you are also using the wildcards (*). Linux will consider the user to be smart enough to do so. A small mistake could lead to Take a look at these two examples below. rm * .txt rm *.txt The first one will remove all files in the current directory and the second one will only remove files with txt extension. As a single space mistake will be interpreted very differently , it is always recommended to check the files first using ls command to avoid accidental removal. Redirection on terminal also known as “I/O Redirection”. Before going further into redirection, it is better to know about stdin, stdout, stderr. What are stdin, stdout, and stderr streams in Linux? In programming, a stream is basically the flow of the information or data. In this context, stdin (standard input) is where the data is coming from and most programs get the input from the keyboard. Let’s say we use ls command. The terminal and ls command are programs. As the program (ls command) gets the input from the terminal (which is also a program), we can consider the terminal as the stdin of ls. The terminal also works as a stdout, because we can see the result (output) of ls command on the terminal. What about stderr? Let’s say you unintentionally mistype a command, like sudo aot update (has to be sudo apt update). You may get sudo: aot: command not found. Instead of showing you the output, the terminal will throw an error, complaining that the command is not found. This error is stderr. There is also a concept called file descriptor. In Unix and Unix-like systems (Linux), these streams are also identified by integer value. Integer Value Stream 0 Standard Input 1 Standard Output 2 Standard Error File Descriptor What are these values used for? In the terminal, these values can be used for redirection. See the example command below. find /sys -type f You can type another command. echo $? Try this command by yourself and see what happened. If you are not using a root account, it will throw some result and you may see value 1 after entering the second command. This command actually tells you whether a command is successful or not by showing you 0 if the command was executed successfully and showing anything else if it was not. Since we got 1, we had caught some error. If you scroll up your terminal, you might identify these errors that might looks like this. find: ‘/sys/kernel/tracing’: Permission denied find: ‘/sys/kernel/debug’: Permission denied find: ‘/sys/fs/pstore’: Permission denied find: ‘/sys/fs/bpf’: Permission denied If you are doing some scripting, you might want to handle this error. Say that we want to catch this error and redirect it to a file called log.txt. You can use a command like this. find /sys -type f 2> log.txt What it does is making a file called log.txt in your current directory and using this file to save the stderr to that file. The number 2 on the above command represent the stderr which is telling the terminal to redirect it to a file called log.txt (see the file descriptor above). In other case, you might want to do redirection for both stdout and stderr to two different files. Therefore, you can use something like this. find /sys -type f > result.txt 2> log.txt By default, the redirection is used for stdout. Therefore, the above command is similar to this. find /sys -type f 1> result.txt 2> log.txt What if you want to see the stderr only and not the stdout. Also you don’t want to print the stdout into a file. You can simply do this by using this command. find /sys -type f > /dev/null By doing this, we are basically dumping the stdout into the void. This is a very useful command, as it can filter the result we want. If you want to add a new line instead of overwriting the existing file, you can replace > with > >. The aforementioned commands may be an oversimplification of what you can do with a terminal. In fact, there are tons of them. Describing each one in a very detailed manner might take an entire article, which is likely to bore you to death. I think it is better to make things simple for you so you won’t be overwhelmed by it. In the end, I hope this article will provide you with a good understanding of how to use a terminal on a regular basis.
{"url":"https://binaryte.com/blog/post/navigation-manipulation-and-redirection-of-files-using-the-terminal/","timestamp":"2024-11-01T20:18:04Z","content_type":"text/html","content_length":"53498","record_id":"<urn:uuid:f8b3aaab-8115-4687-862a-62cb35e7a207>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00392.warc.gz"}
How to Store Unsigned Long In Postgresql? In PostgreSQL, an unsigned long datatype does not exist. However, you can store large unsigned integers by using the bigint datatype, which is an 8-byte signed integer type. This means it can store values from -9223372036854775808 to 9223372036854775807. To store unsigned long values in PostgreSQL, you can simply use the bigint datatype and ensure that you only insert positive integers. If you need to enforce the constraint that the value must be positive, you can use a check constraint in your table definition. For example, you can create a table with a column that stores unsigned long values: CREATE TABLE my_table ( id bigint, some_data text, CONSTRAINT positive_id CHECK (id >= 0) ); This table will only allow positive values to be stored in the id column. You can then insert unsigned long values by using INSERT statements and ensuring that the value is positive. Overall, while PostgreSQL does not have a specific unsigned long datatype, you can effectively store large positive integers using the bigint datatype with appropriate constraints. How to perform arithmetic operations on unsigned long values in postgresql? In PostgreSQL, you can perform arithmetic operations on unsigned long values using the 'bigint' data type. The 'bigint' data type in PostgreSQL supports 8-byte signed integer values, which are large enough to store unsigned long values as well. Here is an example of performing arithmetic operations on unsigned long values in PostgreSQL: 1 -- Create a table with a column of type bigint to store unsigned long values 2 CREATE TABLE unsigned_values ( 3 id serial PRIMARY KEY, 4 value bigint 5 ); 7 -- Insert unsigned long values into the table 8 INSERT INTO unsigned_values (value) VALUES (18446744073709551615); 9 INSERT INTO unsigned_values (value) VALUES (10000000000000000000); 11 -- Perform arithmetic operations on unsigned long values 12 SELECT 13 value + 1 AS incremented_value, 14 value * 2 AS doubled_value, 15 value / 3 AS divided_value 16 FROM unsigned_values; In this example, we are creating a table called 'unsigned_values' with a column 'value' of type 'bigint' to store unsigned long values. We then insert some unsigned long values into the table and perform arithmetic operations on them using standard SQL operators such as '+', '*', and '/'. What is the best way to store unsigned long data in postgresql? In PostgreSQL, the best way to store unsigned long data is to use the bigserial or bigint data type. The bigserial data type is a auto-incrementing 8-byte integer, which starts from 1 and can go up to 9223372036854775807. This data type is commonly used for primary keys in a table. To create a column with unsigned long data type in PostgreSQL, you can use the following SQL query: 1 CREATE TABLE example_table ( 2 id bigserial PRIMARY KEY, 3 column_name bigint UNSIGNED 4 ); This will create a table example_table with a primary key column id of type bigserial and an additional column column_name of type bigint for storing unsigned long data. How can I store large unsigned long values in postgresql? In PostgreSQL, you can store large unsigned long values by using the "BIGINT" data type. The "BIGINT" data type can store whole numbers ranging from -9223372036854775808 to 9223372036854775807. This allows you to store large numeric values without worrying about overflow. To create a table with a column to store large unsigned long values, you can use the following SQL statement: 1 CREATE TABLE my_table ( 2 id BIGINT 3 ); You can then insert large unsigned long values into the table using the insert statement: 1 INSERT INTO my_table (id) VALUES (18446744073709551615); And you can query the table to retrieve the large unsigned long value: 1 SELECT id FROM my_table; By using the "BIGINT" data type in your PostgreSQL tables, you can store large unsigned long values efficiently and effectively. How to convert unsigned long values to string before storing in postgresql? To convert an unsigned long value to a string before storing it in PostgreSQL, you can use a conversion function in your programming language of choice. Here is an example in Python: 1 unsigned_long_value = 1234567890 2 string_value = str(unsigned_long_value) 4 # Now you can store the string_value in PostgreSQL If you are using a different programming language, you can use a similar approach to convert unsigned long values to strings before storing them in your PostgreSQL database. What is the difference between storing unsigned long and signed long in postgresql? In PostgreSQL, the difference between storing unsigned long and signed long is mainly related to the range of values that can be stored. When storing an unsigned long (or any unsigned integer), only positive values can be stored. This means that the range of values that can be stored is from 0 to 2^64 - 1. On the other hand, when storing a signed long (or any signed integer), both positive and negative values can be stored. The range of values that can be stored is from -2^63 to 2^63 - 1. It is important to note that the PostgreSQL integer data types (such as BIGINT) are signed by default. If you need to store unsigned values, you can use the NUMERIC data type with a CHECK constraint to enforce positive values.
{"url":"https://topminisite.com/blog/how-to-store-unsigned-long-in-postgresql","timestamp":"2024-11-04T05:09:26Z","content_type":"text/html","content_length":"270772","record_id":"<urn:uuid:99b0bea1-f246-4903-af7c-2c5662388be0>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00150.warc.gz"}
Learning Algebraic Formula | The Formula of (a-b)² The algebra formula, (a-b)^2, is used to calculate the square of a binomial. It is also called the formula used to calculate the square of the difference of 2 identities or terms. We have covered the formula and a solved example for your better understanding of the formula of (a-b)². So, let’s dive in: Algebra Formula: (a-b)^2 Proof Now, the formula is: But if you are not sure about it, here is how you can prove it: ⇒ a(a-b)-b(a-b) ⇒ a²-ab-ab+b² ⇒ a²-2ab+b² Related Read: https://mytutorsource.com/blog/how-mts-tutors-can-help-you-in-improving-your-mathematics/ Other Formulae That You Should Know Of 𝑎2 + 2𝑎𝑏 + 𝑏2 A Solved Example Find out the value of (2a – 3b)2 (2a – 3b)2 = (2a)2 + (3b)2 – 2(2a)(3b) = 4a2 + 9b2 – 2(2a)(3b) = 4a2 + 9b2 – 12ab (3a – 2b)2 = 4a2 + 9b2 – 12ab And that’s the formula of (a-b)², explained and solved, for you.
{"url":"https://mytutorsource.com/blog/learning-algebraic-formula/","timestamp":"2024-11-12T05:50:12Z","content_type":"text/html","content_length":"1049700","record_id":"<urn:uuid:1c2f39f6-72a1-455d-b18f-01ac1875c595>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00061.warc.gz"}
XLK: SPDR Select Sector Fund - Technology | Logical Invest What do these metrics mean? 'Total return is the amount of value an investor earns from a security over a specific period, typically one year, when all distributions are reinvested. Total return is expressed as a percentage of the amount invested. For example, a total return of 20% means the security increased by 20% of its original value due to a price increase, distribution of dividends (if a stock), coupons (if a bond) or capital gains (if a fund). Total return is a strong measure of an investment’s overall performance.' Applying this definition to our asset in some examples: • The total return, or increase in value over 5 years of SPDR Select Sector Fund - Technology is 188.6%, which is higher, thus better compared to the benchmark SPY (109.2%) in the same period. • During the last 3 years, the total return, or increase in value is 44.8%, which is larger, thus better than the value of 33.3% from the benchmark. 'The compound annual growth rate (CAGR) is a useful measure of growth over multiple time periods. It can be thought of as the growth rate that gets you from the initial investment value to the ending investment value if you assume that the investment has been compounding over the time period.' Which means for our asset as example: • Compared with the benchmark SPY (15.9%) in the period of the last 5 years, the annual performance (CAGR) of 23.7% of SPDR Select Sector Fund - Technology is larger, thus better. • Looking at annual return (CAGR) in of 13.2% in the period of the last 3 years, we see it is relatively greater, thus better in comparison to SPY (10.1%). 'In finance, volatility (symbol σ) is the degree of variation of a trading price series over time as measured by the standard deviation of logarithmic returns. Historic volatility measures a time series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option). Commonly, the higher the volatility, the riskier the security.' Using this definition on our asset we see for example: • The historical 30 days volatility over 5 years of SPDR Select Sector Fund - Technology is 27.8%, which is greater, thus worse compared to the benchmark SPY (20.9%) in the same period. • Looking at historical 30 days volatility in of 25.5% in the period of the last 3 years, we see it is relatively greater, thus worse in comparison to SPY (17.6%). 'Downside risk is the financial risk associated with losses. That is, it is the risk of the actual return being below the expected return, or the uncertainty about the magnitude of that difference. Risk measures typically quantify the downside risk, whereas the standard deviation (an example of a deviation risk measure) measures both the upside and downside risk. Specifically, downside risk in our definition is the semi-deviation, that is the standard deviation of all negative returns.' Applying this definition to our asset in some examples: • Compared with the benchmark SPY (14.9%) in the period of the last 5 years, the downside deviation of 19.3% of SPDR Select Sector Fund - Technology is larger, thus worse. • Looking at downside deviation in of 17.6% in the period of the last 3 years, we see it is relatively higher, thus worse in comparison to SPY (12.3%). 'The Sharpe ratio was developed by Nobel laureate William F. Sharpe, and is used to help investors understand the return of an investment compared to its risk. The ratio is the average return earned in excess of the risk-free rate per unit of volatility or total risk. Subtracting the risk-free rate from the mean return allows an investor to better isolate the profits associated with risk-taking activities. One intuition of this calculation is that a portfolio engaging in 'zero risk' investments, such as the purchase of U.S. Treasury bills (for which the expected return is the risk-free rate), has a Sharpe ratio of exactly zero. Generally, the greater the value of the Sharpe ratio, the more attractive the risk-adjusted return.' Using this definition on our asset we see for example: • Looking at the Sharpe Ratio of 0.76 in the last 5 years of SPDR Select Sector Fund - Technology, we see it is relatively greater, thus better in comparison to the benchmark SPY (0.64) • Looking at risk / return profile (Sharpe) in of 0.42 in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (0.43). 'The Sortino ratio improves upon the Sharpe ratio by isolating downside volatility from total volatility by dividing excess return by the downside deviation. The Sortino ratio is a variation of the Sharpe ratio that differentiates harmful volatility from total overall volatility by using the asset's standard deviation of negative asset returns, called downside deviation. The Sortino ratio takes the asset's return and subtracts the risk-free rate, and then divides that amount by the asset's downside deviation. The ratio was named after Frank A. Sortino.' Which means for our asset as example: • Compared with the benchmark SPY (0.9) in the period of the last 5 years, the excess return divided by the downside deviation of 1.1 of SPDR Select Sector Fund - Technology is higher, thus better. • During the last 3 years, the downside risk / excess return profile is 0.61, which is lower, thus worse than the value of 0.62 from the benchmark. 'Ulcer Index is a method for measuring investment risk that addresses the real concerns of investors, unlike the widely used standard deviation of return. UI is a measure of the depth and duration of drawdowns in prices from earlier highs. Using Ulcer Index instead of standard deviation can lead to very different conclusions about investment risk and risk-adjusted return, especially when evaluating strategies that seek to avoid major declines in portfolio value (market timing, dynamic asset allocation, hedge funds, etc.). The Ulcer Index was originally developed in 1987. Since then, it has been widely recognized and adopted by the investment community. According to Nelson Freeburg, editor of Formula Research, Ulcer Index is “perhaps the most fully realized statistical portrait of risk there is.' Using this definition on our asset we see for example: • Looking at the Ulcer Index of 12 in the last 5 years of SPDR Select Sector Fund - Technology, we see it is relatively greater, thus worse in comparison to the benchmark SPY (9.32 ) • Compared with SPY (10 ) in the period of the last 3 years, the Downside risk index of 15 is greater, thus worse. 'Maximum drawdown is defined as the peak-to-trough decline of an investment during a specific period. It is usually quoted as a percentage of the peak value. The maximum drawdown can be calculated based on absolute returns, in order to identify strategies that suffer less during market downturns, such as low-volatility strategies. However, the maximum drawdown can also be calculated based on returns relative to a benchmark index, for identifying strategies that show steady outperformance over time.' Using this definition on our asset we see for example: • Looking at the maximum DrawDown of -33.6 days in the last 5 years of SPDR Select Sector Fund - Technology, we see it is relatively higher, thus better in comparison to the benchmark SPY (-33.7 • Compared with SPY (-24.5 days) in the period of the last 3 years, the maximum DrawDown of -33.6 days is lower, thus worse. 'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Max Drawdown Duration is the worst (the maximum/longest) amount of time an investment has seen between peaks (equity highs) in days.' Using this definition on our asset we see for example: • The maximum days below previous high over 5 years of SPDR Select Sector Fund - Technology is 368 days, which is lower, thus better compared to the benchmark SPY (488 days) in the same period. • During the last 3 years, the maximum days under water is 368 days, which is lower, thus better than the value of 488 days from the benchmark. 'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks (equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum duration it does not measure only one drawdown event but calculates the average of Using this definition on our asset we see for example: • Looking at the average days under water of 77 days in the last 5 years of SPDR Select Sector Fund - Technology, we see it is relatively smaller, thus better in comparison to the benchmark SPY (123 days) • During the last 3 years, the average days below previous high is 109 days, which is lower, thus better than the value of 176 days from the benchmark.
{"url":"https://logical-invest.com/app/etf/xlk/spdr-select-sector-fund-technology","timestamp":"2024-11-11T08:33:39Z","content_type":"text/html","content_length":"60148","record_id":"<urn:uuid:9f49540d-86d3-4d6e-bb2b-8a119a276386>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00349.warc.gz"}
Advent of Code 2021 Day 9 Part 1 in Google Sheets 2022-09-25 • 6 min read In this article I will explain how I solved Day 9 Part 1 of Advent of Code 2021 in Google Sheets using a single formula. Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other. Read more... In this puzzle we are given an input in the following form: Where each number corresponds to the height of a particular location, 9 being the highest and 0 being the lowest a location can be. And our goal is to: 1. Find the low points, which are the locations that are lower than any of their adjacent locations. 2. Find the risk levels by adding 1 to the height of the low points. 3. Sum the risk levels. * See the full problem for more context. Generic Formula (a<SPLIT(REGEXREPLACE(LEFT({REPT(9,length);ARRAY_CONSTRAIN(input&9,ROWS(input)-1,1)},length),"(.)","$1❄️"),"❄️"))*(a<SPLIT(REGEXREPLACE(LEFT({QUERY(input&9,"offset 1");REPT(9,length)},length),"(.)","$1❄️"),"❄️")), • input: The puzzle input. • length: The length of each row of the puzzle input. In our case it's 10 for the test input and 100 for the final input. So the generic formula for the final input can be rewritten as: (a<SPLIT(REGEXREPLACE(LEFT({REPT(9,100);ARRAY_CONSTRAIN(input&9,ROWS(input)-1,1)},100),"(.)","$1❄️"),"❄️"))*(a<SPLIT(REGEXREPLACE(LEFT({QUERY(input&9,"offset 1");REPT(9,100)},100),"(.)","$1❄️"),"❄️")), In order to find the low points we have to compare each location with all of its adjacent locations. Since we will be working with arrays and we will therefore be comparing one array with other four (one for each position: left, right, up, down), in order to avoid "Array arguments are of different size" errors, we need all of the arrays to be of the same size and with the given input it's not possible due to the inconsistency with the amount of adjacencies for each location: the locations at the corners have two adjacent locations, the locations at the edges have three adjacent locations and all the other ones have four. So the first step we take is to normalize the adjacencies, that is, to manipulate the given input in a way that allows us to have the same amount of adjacent locations for each location. This is easily done by creating a border of 9s around the input. The reason we create a border of 9s and not of 0s or 1s or any other number is because we will determine the low points by doing a bunch of less than comparisons and we don't want the border we created solely to prevent errors to interfere with the output. (There isn't any number from 0 to 9 that 9 is strictly less than). Step 1 - Normalize the adjacencies by creating a border of 9s around the input The REPT function takes an input and it repeats it n times. In this case it takes 9 and it repeats it 10+2 times, which is the amount of numbers in each row plus 2 to account for the newly created left and right borders. Step 2 - Split each number into its own cell Now that we have four adjacent locations for each location, we are almost done with the setup phase, we just have to split every digit into its own cell so we can operate on them independently. This can be done by placing a placeholder character after each digit and splitting by it. I will be using a snowflake ❄ as the placeholder character. Step 3 - Compare each location of the input with all of its adjacent locations Normally, we would now be able to do the following less than comparisons with our current position c and the corresponding adjacent positions cv and ch: 1. [cv,ch] < [cv,ch-1] (left) 2. [cv,ch] < [cv,ch+1] (right) 3. [cv,ch] < [cv+1,ch] (up) 4. [cv,ch] < [cv-1,ch] (down) cv and ch are respectively the vertical and horizontal coordinates of the current position. The GIF below is a visual representation of the arrays we are comparing with yellow being our puzzle input. However, the goal here is to solve the problem using a single formula without relying on any helper cells. So there are some additional steps we have to take. Step 3.1 - Extract the left, right, up and down arrays Extract each of the four arrays that we are going to use for the comparison by only referencing the puzzle input. To extract each array individually we use the same technique adopted in Step 2 along with a combination of LEFT(), RIGHT(), ARRAY_CONSTRAIN() and QUERY(...,"offset X"). Formula to extract the puzzle input (Array 0): Formula to extract the left array (Array 1): Formula to extract the right array (Array 2): Formula to extract the above array (Array 3): Formula to extract the below array (Array 4): =ARRAYFORMULA(SPLIT(REGEXREPLACE(RIGHT(QUERY({A1:A5;REPT(9,10)},"offset 1"),10),"(.)","$1❄"),"❄")) Step 3.2 - Perform the comparisons We now have everything we need to perform the comparisons which we are going to do with the following IF statement: (SPLIT(REGEXREPLACE(A1:A5,"(.)","$1❄"),"❄")<SPLIT(REGEXREPLACE(RIGHT(QUERY({A1:A5;REPT(9,10)},"offset 1"),10),"(.)","$1❄"),"❄")), This may seem like a lot but it's actually very simple. All we are doing is checking if the puzzle input is lower than each one of its adjacent locations. If it is, return the puzzle input, if it's not, leave the cell blank. The resulting array 1, 0, 5, 5 is a visual representation of the location of our low points. Step 4 - Find the risk levels We are almost done! In order to find the risk levels we just have to add 1 to each element of the resulting array of the previous step. (SPLIT(REGEXREPLACE(A1:A5,"(.)","$1❄"),"❄")<SPLIT(REGEXREPLACE(RIGHT(QUERY({A1:A5;REPT(9,10)},"offset 1"),10),"(.)","$1❄"),"❄")), Step 5 - Sum the risk levels The final step is to sum the risk levels. This is easily done with the SUM function. (SPLIT(REGEXREPLACE(A1:A5,"(.)","$1❄"),"❄")<SPLIT(REGEXREPLACE(RIGHT(QUERY({A1:A5;REPT(9,10)},"offset 1"),10),"(.)","$1❄"),"❄")), That's all. Thanks for reading. Go to Generic formula
{"url":"https://ziad.net/posts/advent-of-code-2021-day-9-part-1-in-google-sheets","timestamp":"2024-11-04T17:41:34Z","content_type":"text/html","content_length":"25403","record_id":"<urn:uuid:43bbbedf-9c53-472f-8ff6-680b6c303360>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00726.warc.gz"}
Is Warren Buffett Just Lucky? Warren Buffett is known as a "legendary" investor, as Berkshire-Hathaway has shown a large number of years of out-performance of the S&P 500. However, it has also been claimed that Warren Buffett's performance may simply be ascribed to chance (e.g., ). In this blog entry, I tested the hypothesis that Warren Buffett's performance may be ascribed to chance. Rather than use a conventional statistical significance testing approach, I used a randomization test. What is unique about the randomization approach is that it does not assume random sampling (or normally distributed data), which is an assumption that certainly can not be assumed in this case (does anyone's study ever satisfy this assumption?). : I obtained the data from Berkshire-Hathaway's 2010 annual report . On page 4, one can find Berkshire vs. S&P500 annual returns from 1965 to 2010. & Statistics: It's a 1 variable 'within subject' (or matched-pairs) design with 2 levels (Berkshire and S&P 500). The statistical analysis I performed is a within-subjects t-test (or t-test for matched-pairs). However, to test the statistical significance of the result, I used a randomization approach, rather than a conventional statistical significance test, as the data have clearly not been obtained using random sampling. You can learn more about randomization testing : I used a comprehensive stats packaged called NCSS 2007 , which very impressively provides the option to use randomization testing as a matter of course (i.e., not a separate add-on or separate GUI menu) for basically all statistical analyses (this is unique, in my experience). I selected 10,000 randomized samples for the randomization analysis. You can watch me demonstrate the analysis . (nb: I was very tired when I recorded this video) Results & Discussion: Berkshire average performance = 21.56; S&P 500 average performance 10.96; difference = 10.60 (Wow!) Randomization statistical significance testing: p = .0011. In general terms, this means that Buffett's investment performance is very unlikely due to chance. In fact, on average, in only 1.1 out of a 1000 randomized samples does the randomization procedure identify a larger mean difference than the observed mean difference (i.e., 10.60). Therefore, the people who are lucky in this scenario are those who invested with Buffett early on in his investment career : ) However, I should note that I'm not totally convinced that using a paired t-test with randomization statistical significance testing is the best way to test the hypothesis that Warren Buffett is simply lucky. At least on the surface, it appears to be a relatively straightforward and valid approach to doing so. I can think of alternative approaches, but they would require more data collection. It should be noted that no matter what statistical method is used, there will always be some level of chance associated with the results. Therefore, I can't see how it could ever be demonstrated definitively (i.e., p = .00000∞) that Warren Buffett's performance is above chance. ≈ Gilles Disclaimer: Despite the results within this blog, I do not own any Berkshire-Hathaway shares. Go figure. Check out this youtube video where I discuss the analysis and results:
{"url":"http://www.how2stats.net/2011/06/is-warren-buffett-just-lucky.html","timestamp":"2024-11-12T04:19:02Z","content_type":"application/xhtml+xml","content_length":"37055","record_id":"<urn:uuid:9748bff3-ad21-49e2-8c01-a70fa3a755c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00677.warc.gz"}
Logic Comparators Logic comparators, also known as binary comparators and digital comparators are semiconductor logic ICs (integrated circuits) that has been designed as a combinational circuit for the testing of values represented by one binary word. What are Logic Comparators? Logic comparators test whether the value is greater than, less than, or equal to the value represented by another binary word. These devices are used commonly in digital logic circuit design and are also utilised in other key logic circuits. Types of logic comparators There are two main types of logic comparators and they are; • Magnitude comparators - These devices have two outputs, 1 that is logic where object A has a greater value than value B and are used to indicate equality. • Identity comparators - Identity comparators are devices that compare two binary numbers that has a singular output Logic comparators are also offered with a variety of configurations such as Number of Bits configured on to the IC, varied maximum operating supply voltage and different output types such as push pull and TTL (Transistor–transistor logic) Applications of logic comparators These ICs have a range of application use such as; • Process controllers • Servo-motor control • Correction and/or detection of instrumentation conditions • Logic in CPU’s
{"url":"https://nl.rs-online.com/web/c/semiconductors/logic-ics/logic-comparators/?group_by_tn=false","timestamp":"2024-11-13T13:06:06Z","content_type":"text/html","content_length":"280844","record_id":"<urn:uuid:1d6c7c64-8f71-4db8-a467-b20db04471c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00008.warc.gz"}
Ways to Divide in Excel How to Divide in Excel? In Excel, there is no specific function for the division. Instead, it is quite simple; use the “/” operator. For example, • You may use the forward slash “/” for the division in cells directly or in the formula bar. • You may also use the QUOTIENT Function that returns an integer portion of the division: =QUOTIENT(A1, B1)=QUOTIENT(100, 10) Result = 10 In this tutorial, I am going to show you dividing by typing within cells along with division by formulas, so keep reading. Divide two numbers by typing within the cell In the cell, type: and press enter, it should display the result 10. In the formula bar, you can see the division formula is added automatically. Dividing two cells’ numbers by reference Similarly, you may divide numbers in two cells by giving the cell references. For example, A2 cell contains 100 and A3 10. Now type =A2/A3 in the A4 cell and press enter. It should display the result after dividing two cell numbers as shown below: Again, the division formula is typed automatically in the formula bar. Using multiple operators to understand division order If you are using multiple mathematical operators in a formula including division then you should understand the order in how Excel operates. For example: =10+100/10 = 20 Because the division occurs first than addition. Similarly, if you are subtracting followed by division, again division occurs first as shown below: =20-10/5 = 18 Note: + and – have the same operation order. The Excel will calculate the one first that comes towards the left. What if multiplication comes first? =5*100+100/20 =505 Excel will first multiply then divide and finally add in the above case. The division and multiplication have the same preference – whichever comes first towards the left is entertained. See a few multiple calculation formulas in the Excel sheet below for learning more about this: The example of dividing a range of cells by a given number The following example shows using the SUM function to get the total of a given range and then we divided it by a number in another cell. Have a look at the formula and result: The SUM/divide formula: You can see, the sum of cells C2 to C5 is 100. I entered 25 in the D7 cell which resulted in 4. Dividing cell values by a given number – “Paste Special” technique In the above example, we got the sum and then divided the range of cells. In this example, you will see how to divide the individual cell values by a specific number using the “Paste Special” technique. To do that, follow the steps below: Step 1: Enter the number that you want to use for the division in an empty cell and copy it as shown below: You can see, I entered 5 and pressed Ctrl+C to copy it. You may also right-click to copy that cell. Step 2: Select/highlight the range of cells that you want to divide by that number. For the example, I highlighted A2 to A8 cells. After highlighting, right-click and press the “Paste Special” option. Be noted, the number that we want to divide this range of cells is still in copy mode: Step 3 The “Paste Special” dialog should appear. Select the “All” option under “Paste” and “Divide” under the “Operation” option as shown below: As you press OK, the selected range should have been divided by the specified number in the cell that you copied. As I entered 5, see the resultant sheet below: Is not that cool?
{"url":"https://www.jquery-az.com/divide-in-excel-ways/","timestamp":"2024-11-13T12:33:32Z","content_type":"text/html","content_length":"86221","record_id":"<urn:uuid:f750a9d1-3018-4be6-ab63-29697dcbb2d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00766.warc.gz"}
Wolfram Function Repository Function Repository Resource: Obtain a conditional categorical distribution formed by restricting the domain of a categorical distribution Contributed by: Seth J. Chandler restricts the domain of dist to those elements whose value at each pos[i] matches pattern[i]. ResourceFunction["ConditionalCategoricalDistribution"][{pattern[1],pattern[2], …},dist] restricts the domain of dist to those elements whose value at position i matches pattern[i]. Details and Options Use _ as the pattern in order to let any value be acceptable at a given position of a domain element. The pattern may be specified using a of patterns whose length is shorter than the length of each domain element. When this happens, the function right-pads the with _. By default, if the result of the function is a univariate distribution, the resulting domain elements do not each have wrapped around them. To wrap each domain element in , set the option to By default, if a dimension of the result has only one possible value, the distribution "marginalizes out" that dimension. Setting the option " to prevents that behavior. Basic Examples&NonBreakingSpace;(4)&NonBreakingSpace; Start with a categorical distribution: Condition that distribution on the first dimension taking on a value of "A": Find the probabilities from that categorical distribution conditioned on the second dimension taking on a value of "E": Use a list of patterns instead of rules to impose the same condition as above: Patterns used in the conditions can be complex: The first category must be a letter in the word "ABLE": A CategoricalDistribution of arbitrary dimension works with the function: The function works with univariate categorical distributions returning a CategoricalDistribution with potentially fewer categories: The value of the "FlattenUnivariate" option (True by default) determines whether the result of a univariate ConditionalCategoricalDistribution has its categories described without a List wrapper: Setting the "Marginalize" option to False preserves the dimensionality of the original distribution: Here is the joint distribution of persons in group A or B who have or do not have some disease, and to whom a test classifies as negative or positive for the disease. Find the joint distribution of persons in group A with respect to disease and test result: Find the probability that a person in group A who tests positive is actually sick: Compute the fractions of true positives (sensitivity), true negatives (specificity) and false positives (1-specificity) for a mixture of categorical distributions: Neat Examples&NonBreakingSpace;(4)&NonBreakingSpace; The following application comes from the field of causal inference, which is sometimes referred to as "do-calculus". Assume the joint probability distribution of the size of a kidney stone, the treatment one receives for it and how the outcome of that treatment is distributed as set forth below: Compute the probability of a good outcome conditioned on the treatment. It will appear that B has better outcomes than A: Now synthesize a randomized controlled trial and derive the interventional distribution when one forces the treatment to be "A" by computing a MixtureCategoricalDistribution over stone size in which the components are the conditional categorical distributions based on the stone size and the treatment being A: Do exactly the same thing but force the treatment to be "B": Although the observational distribution might suggest treatment B is superior, in fact, treatment A is superior and the observational distribution is distorted by the fact that small kidney stones, which generally have better outcomes, are more frequently treated with treatment B. Version History • 2.0.0 – 02 August 2021 • 1.0.0 – 21 May 2020 Related Resources Related Symbols License Information
{"url":"https://resources.wolframcloud.com/FunctionRepository/resources/ConditionalCategoricalDistribution/","timestamp":"2024-11-07T20:39:41Z","content_type":"text/html","content_length":"65891","record_id":"<urn:uuid:84370911-61b6-4563-88a2-2fb7158015c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00676.warc.gz"}
confidence interval Find the 95% confidence interval for this difference and interpret it in context. n = 632 d (mean difference) = 7,37 mpg. SE(d) = 2,52 mpg d+- t* x 1.96 is used because the 95% confidence interval has only 2.5% on each side. The probability for a z score below −1.96 is 2.5%, and similarly for a z score 22 timmar sedan · I would like to include a 95% confidence interval of the proportions for each of the by and all the included categorical- and continuous variables. Searching through the previous posts, I haven't found a solution to solve the exact problem. My basic output is created using the following: This interval never has less than the nominal coverage for any population proportion, but that means that it is usually conservative. For example, the true coverage rate of a 95% Clopper–Pearson interval may be well above 95%, depending on n and θ. Thus the interval may be wider than it needs to be to achieve 95% confidence. Confidence intervals are a little bit tricky in a sense that people don't define what they really mean by confidence interval. Now let me tell you a scenario using which you can start understanding CIs on a very basic level. In my view, the simplest would be to use the central limit theorem form a probability statement for the difference between the sample mean and the true mean, and then "invert" this to get a corresponding statement for the parameter $\lambda$.. Since the data come from an exponential distribution, the variance is the The 95% confidence interval of the mean is nothing but the interval that covers 95% of these data points. Bootstrapping is purely a sampling based technique, it can be used to estimate the confidence intervals regardless of what distribution your data follows . For example, n=1.65 for 90% confidence interval. Example. For instance, a confidence interval of an estimated population mean is often presented in terms of a percentage, such as 95%. The z –statistic is the standard For example, a VaR equal to 500,000 USD at 95% confidence level for a time period of a day would simply state that there is a 95% probability of losing no more Expected shortfall, also known as conditional value at risk or cVaR, is a popular If we are measuring VaR at the 95% confidence level, then the expected confidence level for calculation, default p=.95 … any other passthru parameters. Översättningar av fras CONFIDENCE LEVEL från engelsk till svenska och interim and final payments 2009(23 billion euro) with a 95% confidence level. The z value for a 95% confidence interval is 1.96 for the normal distribution (taken from standard statistical tables). Using the formula above, the 95% confidence interval is therefore: $$159.1 \pm 1.96 \frac{(25.4)}{\sqrt 40}$$ When we perform this calculation, we find that the confidence interval is 151.23–166.97 cm. Calculate the difference in mean turnout (and the associated 95% confidence intervals) between treatment and control units for all other election years in the data (2004, 2006, 2008, 2010, and 2012). 3.4 Confidence Intervals for the Population Mean. As stressed before, we will never estimate the exact value of the population mean of \(Y\) using a random sample. The interpretation: You are 95 percent The tinterval command of R is a useful one for finding confidence intervals for the If we use the t.test command listing only the data name, we get a 95% confidence norm.interval = function(data, variance = var(data), conf.level = The Value at Risk (VaR) is a risk measure to compute the maximum amount of percentile that corresponds to the probability p (if the confidence level is 95%, Value at risk (VaR) measures the potential loss in value of a risky asset or portfolio if the VaR on an asset is $100 million at a one-week, 95% confidence level, Interpreting confidence levels and confidence intervals mean p and for each value of phat the mean will lie within the confidence interval with probability 95%. Distit investerare 22-month median overall survival (OS; 15 sep. 2013 — Error bars represent the 95% confidence interval. P values represent CI, confidence interval; VAS, visual analogue scale. Some analysts argue that this problem should be fixed by applying a Bonferroni correction. 22 hours ago There are many different forms of confidence intervals you could use here. Hyra cykel trelleborg We can apply the methods of this section because our data come from a large random sample. 2020-11-23 2016-02-05 A confidence interval 2019-09-30 Setting confidence interval bounds. Kursplan idrott och halsa grundskolan The 95% confidence interval is a range of values that you can be 95% confident contains the true mean of the population. Due to natural sampling variability, the sample mean (center of the CI) will vary from sample to sample. The confidence is in the method, not in a particular CI. Coefficient s t. Sig. Lower Bound. Upper Bound. 95% Confidence Interval for B. Dependent Variable: Y a. Coefficientsa. The proportion increased to 84 % (95 CI 73-91 %) when traps contained soil infusions. In choice tests, a gravid female was twice as likely to be trapped in the 2020-08-07 · For example, if you construct a confidence interval with a 95% confidence level, you are confident that 95 out of 100 times the estimate will fall between the upper and lower values specified by the confidence interval. Your desired confidence level is usually one minus the alpha ( a) value you used in your statistical test: Confidence level = 1 − a Therefore, the Confidence Interval at a 95% confidence level is 3.20 to 3.40. For 98% Confidence Interval = (3.30 – 2.33 * 0.5 / √100) to (3.30 + 2.33 * 0.5 / √100) Jan 29, 2016. #1. When calculating a confidence interval for VaR, we need to take into account the bin size (i.e. On the Edit menu, click Paste. 2016-02-05 · Wang (2001) recommends the BCa method, so, in this case, the 95% confidence interval for the mean of the annual flood series has confidence limits of 883 and 2952 cumecs, which is similar to the modified Cox results. This Cross Validated post points out some limitations of bootstrap confidence intervals for skewed data.
{"url":"https://hurmanblirrikeoztl.netlify.app/46899/28527","timestamp":"2024-11-01T19:02:15Z","content_type":"text/html","content_length":"11200","record_id":"<urn:uuid:9f9b5b73-2e0e-412c-baef-cea51a027471>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00358.warc.gz"}
Sometimes I wish TDM were harder I remember in TDP/TMA, the guards had absolutely godlike hearing when it came to tile (even on normal difficulty). Also due to Garrett's high heel like shoe sound. I remember in TDP/TMA, the guards had absolutely godlike hearing when it came to tile (even on normal difficulty). That was always a bit silly, IMO. If we could go back and start over, I'd push for a more realistic sound scheme for footsteps, instead of one that works based on how hard the surface is (in real life there's nothing easier to be quiet on than hard tile, unless you're in tap shoes). In TDM tile is still in the "loud" category, but it's not as loud as other things in that category, like metal or gravel. TDM Missions: A Score to Settle * A Reputation to Uphold * A New Job * A Matter of Hours Video Series: Springheel's Modules * Speedbuild Challenge * New Mappers Workshop * Building Traps Oh boys, how I love this senseless discussions. :| I never actually read back and realised he's just trying to start pissing contests, I was just having a fun debate =P Also due to Garrett's high heel like shoe sound. I've always called them tap-dancing shoes. If somebody can bunny-hop at the speed of sound in high heels, they should get an award of some sort. In TDM tile is still in the "loud" category, but it's not as loud as other things in that category, like metal or gravel. One of my favorite differences between TDM and Thief. Just in terms of map design, it always meant that anywhere posh was going to be next to impossible to get around without stutter-stepping. Now you can have fancy mansions and half the noise. The AI hearing on certain surfaces in the Dark engine Thief games is insane. You can be crouched and sneaking and you'd be heard from quite far away so players ended up having to use a well-known bug where you shuffle forward a bit and stop before you make a footstep sound in areas with a lot of the loudest surfaces such as metal, tile, and gravel. In TDM it's much improved since you can crouch-sneak and no one will hear you unless you're right up next to them, but the trade-off is moving very, very slowly--whereas a practiced shuffler in Dark engine could move nearly walk speed. I just started playing TDM less than a week ago to give it a try and for what it is, I think it's alright. There are issues of course and I still think the overall gameplay of the Dark engine Thief games is better and more consistent, but there's still good fun to be had here. My major thing I don't like is combat. Not straight up combat which is just plain bad in both games and really melee combat is hard to implement well in a first person game so it's forgivable, but the "stealth" combat of sneaking up and one-hit killing an unaware enemy. Even when aiming for vitals and sneaking up behind a perfectly unaware target to use your sword, you can't easily kill, say, an undead AI. Backstabbing a haunt or two in Dark engine games to get rid of them was almost like blackjacking annoying guards in central areas that would seriously slow down your progress because avoiding them was too laborious and you had to pass through that area multiple times, but you can't really do it in this game to the undead. The bow is probably the hardest thing to adjust to in how high I have to aim to hit a target from a certain distance but does work as it should for the most part. Straight up combat (I rarely/never engage in it but this was evident in the training mission) is both better and worse at the same time. Better in that multiple enemies can more easily attack you rather than stumbling around each other like they did in Dark engine trying to get at you, but worse in that it's almost impossible to actually fight them unless they bug out and just stand there after swinging at you while only one of them continues the assault, which was the only way to really win the 3-on-1 fight in the training mission in such a small area. They also seem to swing through each other and hit you sometimes which is kind of weird. I haven't really spent a whole lot of time testing this aspect of the game but it doesn't seem like there's a whole lot to it and like in Thief it's not something that's a central part of the game anyway. Minor issues would be... How often I end up getting caught on geometry (usually while I'm checking out or hiding out in some room with unmovable furniture and having to reload my saved game because I cannot get out. The AI as a whole is better at noticing you when you do really dumb things than in the Dark engine games and I don't have a real complaint with that. The only real issue I have there with the AI is with pickpocketing where you need to be almost pressed up against them due to extremely small frob distance, although coming from playing Dark engine FMs on and off for the last 10 years I have noticed that in TDM you can actually almost be pressed up against an AI while in a narrow hallway hiding in the darkness and they still won't see you which would never work in Dark engine since I guess the player took up more space. So the optimal pickpocketing method seems to be hiding in the darkness in their patrol path while waiting for them to go by rather than walking behind them on a relatively silent surface and grabbing as they'll go into either alert level 2 or 3 if you try that on all but the most silent ones in TDM. I like that AIs will relock doors that you leave open but if you close them they won't. Kind of nice touch but also still gamey and works in the context of the game because you'd think realistically they'd still relock them anyway. Just means you have to be thorough in closing locked doors you know AIs will path through. Nice touch. Also nice touch is in some missions where the AI will relight some of the lights if you put them out, but I assume this has to be defined on an individual basis in the map editor. One thing I like about the missions I've played so far is that the authors have made patrol paths that make just avoiding enemies a better option or at least eqvuialent in time usage than KOing and hiding bodies. This was also possible in Dark engine but so many authors as well as the original level designers just made AIs on continuous paths with no stops for any reason which encourage just KOing them. Movement in general doesn't seem as smooth but at least mantling is more forgiving than it was in the Dark engine, although the fan-made NewDark patch had improved mantling vastly if you turn on the option for it in the config. One thing I always thought would be cool but probably really hard to implement is "reverse mantling" off of a ledge where you hang down then drop to make less noise on drop/prevent fall damage. I probably have more thoughts on this but this is just off the top of my head in 5 minutes or so. Edited by GUFF Maybe it is time & experience.... My brother and daughter who play this game also are amazed at my jaw -dropping accuracy in putting an arrow in a beam from a great distance. Now only if I could do that in real life. I think pick-pocketing distance is dead on, good challenge. No doubt, the geometry can get ya, but again, it is pretty evident to what to avoid, never found immovable objects to create a maze or anything of the like. I cannot comment on fighting, I don't play that way - maybe in defense sometimes... For me, I would love more random control paths in missions, but believe that can only be done to a certain degree. Guess I don't notice the not-so-smooth movement - never have noticed that, maybe because my screen is quite dark when I play.. My 2 cents Edited by Mr M Guess I don't notice the not-so-smooth movement - never have noticed that, maybe because my screen is quite dark when I play.. That reinds me off an article i read about Thief 1 in a games mag years ago. It said something like "The graphics are not up to par to nowadays standard, but hey, you wouldn't notice it most of the time, as it's so damn dark..." My major thing I don't like is combat. Not straight up combat which is just plain bad in both games and really melee combat is hard to implement well in a first person game so it's forgivable, but the "stealth" combat of sneaking up and one-hit killing an unaware enemy. Even when aiming for vitals and sneaking up behind a perfectly unaware target to use your sword, you can't easily kill, say, an undead AI. Yes, that's true, and intentional. Undead are highly resistant to normal weapons, so even with a sneak attack you can't kill them with one shot. They're enemies that encourage you to use other tools, like holy water, fire arrows, and mines. Better in that multiple enemies can more easily attack you rather than stumbling around each other like they did in Dark engine trying to get at you, but worse in that it's almost impossible to actually fight them That's intentional too. You're not really supposed to be able to survive 3 on 1 fights (although as Aluminum Haste is fond of pointing out, it's possible to take on groups of AI and win if you're an expert with lots of room to move around). How often I end up getting caught on geometry (usually while I'm checking out or hiding out in some room with unmovable furniture and having to reload my saved game because I cannot get out. This is annoying, I agree. It doesn't happen to me very often; maybe once every third mission or so, but it is a pain. At least there's an easy "noclip" console command to get out of it. One thing I always thought would be cool but probably really hard to implement is "reverse mantling" off of a ledge where you hang down then drop to make less noise on drop/prevent fall damage. Yeah, the reverse mantling was in our design docs but we never quite got around to it. Might still happen some day. TDM Missions: A Score to Settle * A Reputation to Uphold * A New Job * A Matter of Hours Video Series: Springheel's Modules * Speedbuild Challenge * New Mappers Workshop * Building Traps Reverse mantling would be great. What about having loud sounds affect the hearing of the AI though? For example, there's a guy standing in front of a generator or waterfall. I walk up behind him and blackjack him and he hears nothing, because the sound of the generator in front of him is many times louder than I am. I remember that Splinter Cell Chaos Theory had this feature. • 1 --- War does not decide who is right, war decides who is left. What about having loud sounds affect the hearing of the AI though? For example, there's a guy standing in front of a generator or waterfall. Been on our design docs from the beginning, just no one has ever gotten around to it. • 1 TDM Missions: A Score to Settle * A Reputation to Uphold * A New Job * A Matter of Hours Video Series: Springheel's Modules * Speedbuild Challenge * New Mappers Workshop * Building Traps And doing so would be completely sidestepping the point that I shouldn't have to. I have ragequit one time too many to even want to try anyway. I have enough completely new games requiring completely new skillsets as it is, I don't need another. I have twice now started playing TDM using my Thief skillsets, even on the lowest difficulty and AI settings, and found myself in endless, frustrating quickload loops again and again because the challenge has been ramped up to a frustrating degree, until I ragequit and uninstalled the game. The specific reasons for this: * In illuminated areas, the AI has a seeminly random chance from 50-70 % depending on awareness, of spotting you behind him, even standing still on a carpet, instantly activating a second level alert that turns to a third level charge. In practice, I can't work behind AI unless there is darkness. AI have a cone of vision attached to their head. If the head swivels from side to side the cone of vision follows. So no, AI cannot look behind their heads. What they can do is spot you out of the corner of their eye if you are to the left or right, and turn to investigate. Which means they might spot you if you are in a lighted area. * Blackjack reach has been reduced to the point where it takes a whole new level of skill to KO guards without touching them, and on top this has to happen in the precise spot where they are not wearing helmets, an occurrence grossly more frequent than in LGS!Thief. No it hasn't. Categorically wrong. You can KO AI on the shoulders, base of the neck, and even lower than that. If they have no helmet and are not alerted, you can KO them anywhere on the head or even on their face as long as they aren't alerted. However blackjacking an AI in the face doesn't always work. I have video proof of this, as I have KOed an AI across a table, while he's sitting in a chair. There are other videos on youtube of people doing this. I have at least a dozen videos on my youtube channel playing TDM. We are playing the same game. You, Are, Doing, It, Wrong. I always assumed I'd taste like boot leather. Here's a video of fenphoenix (youtube LPer) learning how to BJ in a few days. These are the major sinners. There are other aspects of the game that are similarly harder than before, but these would have been more acceptable if they didn't compound on the above two: * Flashbombs require great presicion to throw in such a way as to affect guards that they are almost useless. Even when they do work guards seem to be alerted from seeing friends get stunned and become impossible to KO even in flashed state. No they don't, they work just like in real life, you make sure the AI is looking at the flashbomb to make sure it affects them. Pressing the USE key drops a flashbomb at your feet. So if the AI is looking at you, then the flashbomb will work. Pressing and holding it will gradually increase the strength of the throw. You cannot KO alerted guards. This isn't Thief, where you could KO almost any AI at any alert level as long as you were in total darkness. While under the effects of the flashbomb, AI can sometimes be KOed, but it usually doesn't work. Run, hide, or kill them with sword. I always assumed I'd taste like boot leather. * Swordfighting is clunky and hard as hell. This coming from someone who is a goddamn master of Mount & Blade and Thief swordfighting. I was willing to try but it felt too unfair, too clunkly to be worth the effort. It felt like something better left to a patch. I see one is on the way for this, so it should come as no surprise. Ummm, not really. Make sure you have autoparry turned on, and maybe invert the parry to make it easier for you. I can't fight for the life of me with auto-parry turned off. * Guards noticing corpses that should be mostly or completely hidden in shadow. That's been fixed already in 2.0 for the most part. It's not as elegant as AI detecting the player with the shades of grey, but it's pretty good now. Using my Thief skills, I am faced with a hard, hard game. One where I cannot work in illuminated areas, must struggle to KO people, can't rely on tools to "cheat", risk rising alerts when I think I am safe, and have to work harder on lockpicking under these conditions. Not enjoyable challenge hard, frustration hard. I originally signed up here to report these as the bugs I were convinced they were. It seemed obvious to me that a Thief clone made by Thief fans for Thief fans would have gameplay as similar to that of Thief as possible. And a significant overlap of skillset. What I learned, both from the fanboy brigade and the devs themselves, was that these seeming flaws were in fact features! That this Thief clone boasting only differences in lore and trademarks, was in fact supposed to be treated like its own derivative game, with its own new mechanics, ones far harder than the ones I was used to from Thief. A game itself considered hardcore these days. You keep calling it a clone, it never was intended to be a clone. No dev has ever said that as far as I know, and I've been here a loooooooooong time. some hardcore veteran map pack unsuitable for public or even regular fanbase consumption. LOL WHAT? The VAST majority of people who have downloaded this project have no problems aside from install issues or graphical bugs. I still submit that when you make a project as ambitious and polished as this, and aim it at the Thief fanbase at large, if not the entire stealth gaming community, doing so on the strength of the Thief titles more than independent acclaim, then at the very least it should permit gameplay and challenge close to Thief. Which it does. Add whatever other options and challenges on top, that is awesome, cause there is clearly a demand. But don't get so hung up in the hardcore community surrounding you that you make hardcore challenge all there is to TDM. Don't squader this one opportunity for greatness by lacking the most obvious, core feature imaginable: Thief-level gameplay. Which it has with improvements. I always assumed I'd taste like boot leather. Okay if you really want to help the devs "fix" things that you perceive to be broken we're gonna need some help. You have not posted any videos of you having trouble, so let's start with that. I always assumed I'd taste like boot leather. I'm pretty sure given his history he was just after a pissing contest, best to let this thread get buried =P What about having loud sounds affect the hearing of the AI though? For example, there's a guy standing in front of a generator or waterfall. I walk up behind him and blackjack him and he hears nothing, because the sound of the generator in front of him is many times louder than I am. I remember that Splinter Cell Chaos Theory had this feature. I've thought about a way to script this behaviour. It is possible, although I never came to the point of testing it. However, hardcoding this would be much more difficult I guess, as the code handling what is heard by the player is a different one then the one used for ai hearing. The first step would be to create a way to convert between both. In addition, all ai had to be aware of all sound emitters in their surrounding, to check which one affect them in which way. Unfortunately we are always lacking programmers. Everybody just wanna learn how to map, but nobody wants to learn C++. FM's: Builder Roads, Old Habits, Old Habits Rebuild Mapping and Scripting: Apples and Peaches Sculptris Models and Tutorials: Obsttortes Models My wiki articles: Obstipedia Texture Blending in DR: DR ASE Blend Exporter Yeah there's a reason that only ambient sounds affected the sound meter in splinter cell. I always assumed I'd taste like boot leather.
{"url":"https://forums.thedarkmod.com/index.php?/topic/15165-sometimes-i-wish-tdm-were-harder/page/6/","timestamp":"2024-11-05T12:09:18Z","content_type":"text/html","content_length":"380514","record_id":"<urn:uuid:acbfe481-1e9c-47e9-b202-7e31df7483fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00583.warc.gz"}
Hypergeometric Distribution Probability Calculator The Hypergeometric Calculator makes it easy to compute individual and cumulative hypergeometric probabilities. For help, read the Frequently-Asked Questions or review the Sample Problems. To learn more, read Stat Trek's tutorial on the hypergeometric distribution. Frequently-Asked Questions Instructions: To find the answer to a frequently-asked question, simply click on the question. What is a hypergeometric experiment? A hypergeometric experiment has two distinguishing characteristics: • The researcher randomly selects, without replacement, a subset of items from a finite population. • Each item in the population can be classified as a success or a failure. Suppose, for example, that we randomly select 5 cards from an ordinary deck of playing cards. We might ask: What is the probability of selecting exactly 3 red cards? In this example, selecting a red card (a heart or a diamond) would be classified as a success; and selecting a black card (a club or a spade) would be classified as a failure. What is a hypergeometric distribution? A hypergeometric distribution is a probability distribution. It refers to the probabilities associated with the number of successes in a hypergeometric experiment. For example, suppose we randomly select 5 cards from an ordinary deck of playing cards. We might ask: What is the probability distribution for the number of red cards in our selection. In this example, selecting a red card would be classified as a success. The probabilities associated with each possible outcome are an example of a hypergeometric distribution, as shown below. Outcome Hypergeometric Prob Cumulative Prob 0 red cards 0.025 0.025 1 red card 0.150 0.175 2 red cards 0.325 0.500 3 red cards 0.325 0.825 4 red cards 0.150 0.975 5 red cards 0.025 1.00 Given this probability distribution, you can tell at a glance the individual and cumulative probabilities associated with any outcome. For example, the individual probability of selecting exactly one red card would be 0.15; and the cumulative probability of selecting 1 or fewer red cards would be 0.175. What is a population size? In a hypergeometric experiment, a set of items are randomly selected from a finite population. The total number of items in the population is the population size. For example, suppose 5 cards are selected from an ordinary deck of playing cards. Here, the population size is the total number of cards from which the selection is made. Since an ordinary deck consists of 52 cards, the population size would be 52. What is a sample size? In a hypergeometric experiment, a set of items are randomly selected from a finite population. The total number of items selected from the population is the sample size. For example, suppose 5 cards are selected from an ordinary deck of playing cards. Here, the sample size is the total number of cards selected. Thus, the sample size would be 5. What is the number of successes? In a hypergeometric experiment, each element in the population can be classified as a success or a failure. The number of successes is a count of the successes in a particular grouping. Thus, the number of successes in the sample is a count of successes in the sample; and the number of successes in the population is a count of successes in the population. What is a hypergeometric probability? A hypergeometric probability refers to a probability associated with a hypergeometric experiment. For example, suppose we randomly select 5 cards from an ordinary deck of playing cards. We might ask: What is the probability of selecting EXACTLY 3 red cards? The probability of getting EXACTLY 3 red cards would be an example of a hypergeometric probability, which is indicated by the following notation: P(X = 3). The probability of getting exactly 3 red cards is 0.325. Thus, P(X = 3) = 0.325. (The probability distribution showing this result can be seen above in the question: What is a hypergeometric What is a cumulative hypergeometric probability? A cumulative hypergeometric probability refers to a sum of probabilities associated with a hypergeometric experiment. To compute a cumulative hypergeometric probability, we may need to add one or more individual probabilities. For example, suppose we randomly select 5 cards from an ordinary deck of playing card. We might ask: What is the probability of selecting AT MOST 2 red cards? The cumulative probability of getting AT MOST 2 red cards would be equal to the probability of selecting 0 red cards plus the probability of selecting 1 red card plus the probability of selecting 2 red cards. Notationally, this probability would be indicated by P(X < 2). The cumulative probability for getting at most 2 red cards in a random deal of 5 cards is 0.500. Thus, P(X < 2) = 0.500. (The probability distribution showing this result can be seen above in the question: What is a hypergeometric distribution?) Sample Problems 1. Suppose you select randomly select 12 cards without replacement from an ordinary deck of playing cards. What is the probability that EXACTLY 7 of those cards will be black (i.e., either a club or We know the following: □ The total population size is 52 (since there are 52 cards in the deck). □ The total sample size is 12 (since we are selecting 12 cards). □ The number of successes in the population is 26. (Here, we define a success as choosing a black card, and there are 26 black cards in an ordinary deck of playing cards.). □ The number of successes in the sample is 7 (since there are 7 black cards in the sample that we select). Therefore, we plug those numbers into the Hypergeometric Calculator and hit the Calculate button. The calculator reports that the hypergeometric probability is 0.20966. That is the probability of getting EXACTLY 7 black cards in our randomly-selected sample of 12 cards. The calculator also reports cumulative probabilities. For example, the probability of getting AT MOST 7 black cards in our sample is 0.83808. That is, P(X < 7) = 0.83808. 2. Suppose we are playing 5-card stud with honest players using a fair deck. What is the probability that you will be dealt AT MOST 2 aces? (Note: In 5-card stud, each player is dealt 5 cards.) We know the following: □ The total population size is 52 (since there are 52 cards in the full deck). □ The total sample size is 5 (since we are dealt 5 cards). □ The number of successes in the population is 4 (since there are 4 aces in a full deck of cards). □ The number of successes in the sample is 2 (since we are dealt 2 aces, at most.). Therefore, we plug those numbers into the Hypergeometric Calculator and hit the Calculate button. The calculator reports that the P(X < 2) is 0.99825. That is the probability we are dealt AT MOST 2 aces. The cumulative probability is the sum of three probabilities: the probability that we have zero aces, the probability that we have 1 ace, and the probability that we have 2 aces.
{"url":"https://www.stattrek.com/online-calculator/hypergeometric","timestamp":"2024-11-12T22:16:00Z","content_type":"text/html","content_length":"57841","record_id":"<urn:uuid:a1315a53-e5ad-4a76-9950-0604396140ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00817.warc.gz"}
C++ Program to Find Sum and Average of Two Numbers - CodingBroz C++ Program to Find Sum and Average of Two Numbers In this post, we will learn how to find the sum and average of two numbers using the C++ Programming language. This program asks the user to enter two numbers, then it calculates the sum of two numbers using the (+) arithmetic operator and average using the formula: Average = Sum / No. of terms. So, without further ado, let’s begin this post. C++ Program to Find Sum and Average of Two Numbers // C++ Program to Find Sum and Average of Two Numbers #include <iostream> using namespace std; int main(){ int num1, num2, sum; float avg; // Asking for input cout << "Enter the first number: "; cin >> num1; cout << "Enter the second number: "; cin >> num2; sum = num1 + num2; // Finding average avg = sum / 2; // Displaying output cout << "Sum of two numbers: " << sum << endl; cout << "Average of two numbers: " << avg << endl; return 0; Enter the first number: 25 Enter the second number: 35 Sum of two numbers: 60 Average of two numbers: 30 How Does This Program Work? int num1, num2, sum; float avg; In this program, we have declared three integer data type variables and one floating data type variable named num1, num2, sum, and avg respectively. // Asking for input cout << "Enter the first number: "; cin >> num1; cout << "Enter the second number: "; cin >> num2; The user is asked to enter two numbers to find the sum and average. sum = num1 + num2; Sum of two numbers is calculated using the (+) arithmetic operator. // Finding average avg = sum / 2; Similarly, average is calculated using the formula: • Average = Total Sum / Total no. of terms // Displaying output cout << "Sum of two numbers: " << sum << endl; cout << "Average of two numbers: " << avg << endl; The sum and average of two numbers are displayed on the screen using the cout statement. I hope after going through this post, you understand how to find the sum and average of two numbers using the C++ Programming language. If you have any doubt regarding the program, then contact us in the comment section. We will be delighted to assist you. Also Read: Leave a Comment
{"url":"https://www.codingbroz.com/cpp-program-to-find-sum-and-average-of-two-numbers/","timestamp":"2024-11-02T04:18:47Z","content_type":"text/html","content_length":"179535","record_id":"<urn:uuid:ccc1c3e1-a03b-4898-9a4b-41727431f6ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00543.warc.gz"}
GCSE Mathematics Question Analysis - Mathematics - Volume GCSE Mathematics Question Analysis Topic: Mathematics - Volume 150 spherical marbles, each of diameter 1.4 cm, are dropped in a cylindrical vessel of diameter 7 cm containing some water, which are completely immersed in water. Find the rise in the level of water in the vessel. We are given that a marble's diameter is 1.4 cm, making its radius 0.7 cm. The volume of each marble can be calculated by 4πr^3 = 4/3 x 22/7 x (0.7)^3 = 1.44 x 150 = 216 cm^3. Then, let the rise in the water level in the cylindrical vessel be h cm. Also, since the vessel's diameter is 7 cm, its radius is thus 3.5 cm. We can find the volume of the increased level of water by: πr^2h = 22/7 x 3.52 x h. This also equals to the volume of the 150 marbles that were added i.e., 22/7 x 3.5^2 x h = 216 cm^3.
{"url":"https://www.tuttee.co/blog/gcse-mathematics-short-question-analysis-volume-pyramid-marbles","timestamp":"2024-11-02T15:15:37Z","content_type":"text/html","content_length":"852441","record_id":"<urn:uuid:22a4e5a7-d8f7-485f-a280-1ad3defd63b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00395.warc.gz"}
Ratio Blending Calculator How do I use this calculator? Click here for instructions The Schlumberger blend calculator enables the calculation of the resultant blended product parameters by entering the feedstock parameters. The operator can also calculate the required feedstock blend percentages required to meet a product specification of either density, viscosity or sulphur. There are 4 methods of entering/calculating stock percentages which can be selected from the dropdown list at the top of the page where: • “Use ratio, calculate all properties”, means the user must enter the desired feedstock blend percentages • “Calculate ratio for density, calculate all other properties”, means the user must enter the required blended product density in kg/m3. • “Calculate ratio for viscosity, calculate all other properties”, means the user must enter the required blended product viscosity in cSt. • “Calculate ratio for sulphur, calculate all other properties”, means the user must enter the required blended product sulphur mass percentage. The calculator has two modes of operation, either the simple calculator or the full calculator. These modes can be toggled using the button at the bottom of the screen, labelled either “View Simple Calculator” or “View Full Calculator”. The feedstock blend percentages can be entered/displayed as either “% mass” or “% volume” depending on the user requirements, through selecting the associated item in the dropdown list. The simple mode will calculate density, viscosity, sulphur and blend percentages, where the burgundy item is the source of all (other) feedstock blend percentages. Blend percentages source items must be entered in full, for instance the feedstock densities and blended product density must be entered if the blend percentages are calculated from blended product density. The feedstock densities should always be entered as this is used in numerous calculations. Other parameters, in white, should be entered if it is desired that the blended product parameter is to be The full blending mode is a superset of the simple mode and will also calculate, flash point, pour point, cloud point, cetane index, cost and various other fractional components. This ratio calculator is an indicative tool using predictive models and assumptions. Outputs cannot be guaranteed and all recipes should be validated for your application prior to use. Please use temperatures that are within 16c or a serious error may result
{"url":"https://calculator.hcx.co/calc/ratioblendingcalculator-no/","timestamp":"2024-11-03T16:25:03Z","content_type":"text/html","content_length":"36027","record_id":"<urn:uuid:9c5dad83-180d-4ed5-b825-27d59b98e193>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00623.warc.gz"}
GRIN - The Market Anomaly "Size Effect". Literature Review, Key Theories and Empirical Methods The size effect is a market anomaly in asset pricing according to the market efficiency theory. According to the current body of research, market anomalies arise either because of inefficiencies in the market or the underlying pricing model must be flawed. Anomalies in the financial markets are typically discovered form empirical tests. These tests usually rely jointly on one null hypothesis H0 = markets are efficient AND they perform according to a specified equilibrium model (usually CAPM). Thus, if the empirical study rejects the H0, the reason could either be due to market inefficiency or due to the incorrect model. Market efficiency theory says that the price of an asset fully reflects all current information and is not predictable (Fama 1970). Fama (1997) states that market anomalies, even long‐term anomalies, are not an indicator for market inefficiencies due to the reason that they randomly split between “underreaction and overreaction, (so) they are consistent with market efficiency” (p. 284), they happen by chance and it is always possible to beat the market by chance. This essay will give an overview of the literature of the size effect and will stress the key theories, empirical methods and findings, as well as the existing body of research about this particular 1. Introduction The size effect is a market anomaly in asset pricing according to the market efficiency theory. According to the current body of research, market anomalies arise either because of inefficiencies in the market or the underlying pricing model must be flawed. Anomalies in the financial markets are typically discovered form empirical tests. These tests usually rely jointly on one null hypothesis H0 = markets are efficient AND they perform according to a specified equilibrium model (usually CAPM). Thus, if the empirical study rejects the H0, the reason could either be due to market inefficiency or due to the incorrect model. Market efficiency theory says that the price of an asset fully reflects all current information and is not predictable (Fama 1970). Fama (1997) states that market anomalies, even long-term anomalies, are not an indicator for market inefficiencies due to the reason that they randomly split between “underreaction and overreaction, (so) they are consistent with market efficiency” (p. 284), they happen by chance and it is always possible to beat the market by chance. This essay will give an overview of the literature of the size effect and will stress the key theories, empirical methods and findings, as well as the existing body of research about this particular anomaly. 2. Empirical Methods The two major methods of testing the size effect are the cross-sectional linear regression or to categorize size-groups and analyse the monthly returns of each group and compare them (Fama and French 2008). Some studies use a both methods but others only use the regression method. The method of sorting companies is a very straightforward method, which presents a “simple picture” (Fama and French 2008, p. 1654). By using this method, researcher just calculate the mean returns of each group over a specific time period and compare them which each other. However, “ A potential problem is that the returns on (...) portfolios that use all stocks can be dominated by stocks that are tiny “ (Fama and French 2008, p.1654). The cross sectional regression method computes a regression on particular stocks or portfolios. One advantage of the cross-section regression is that it can estimate which “anomaly variable” (Fama and French 2008, p.1654) has what kind of influence on the returns. It is possible to compute minimal effects of each variable. Additionally, according to a diagnostics of the residuals of the regression model it is possible “ to judge whether the relations between anomaly variables and average returns implied by the regression slopes show up across the full ranges of the variables ” (Fama and French 2008, p.1654). In other words, you can conclude by the different slopes of the regression among the stocks/portfolios if certain anomalies like the size-effect are significant or not. 2. Theories and Concepts The size effect was first discovered by Banz (1981). He firstly describes that the size of a firm is a representation for risk. Banz tested the Capital Asset Pricing Model (CAPM) developed by Sharp (1964) & Lintner (1965). Sharpe (1964) and Lintner (1965) advanced Markowitz’s (1952) Portfolio Theory. They presented a theory in which an investor could combine risk-free investments, like government bonds with risky assets form the market portfolio according to the risk preference of the investor. In addition, the CAPM offered a model to compute the rate of return of an asset with only three different variables, the equity risk premium, the risk-free interest rate and the and the relationship of the investment with the market portfolio (Reilly 2009). The assumption of the CAPM is that the riskiness of a security can be exclusively explained by the systematic market risk. The ratio of the covariance of the return of a particular investment and the return of the market portfolio is estimated by the factor beta (β), which is estimated with an Operation Least Squares (OLS) regression. Abbildung in dieser Leseprobe nicht enthalten. The OLS regression measures the returns of a particular investment on the market portfolio returns. The CAPM is expressed by the following equation (Reilly 2009): Abbildung in dieser Leseprobe nicht enthalten. The size effect is described as a market anomaly according to the CAPM. Since the CAPM has been described, researchers have found other factors, known as CAPM anomalies, which explain asset returns (Berk 1995). The size effect is one of these significant anomalies, which has been discovered by empirical tests of the CAPM (Berk 1995). Banz (1981) describes a negative relationship of size to returns, in other words he discovered that returns of small firms are significantly larger than returns of larger firms. Fama and French (1992) strongly criticised the CAPM model by finding that in the long term stock returns differ form the CAPM prediction. Fama and French (1992) analysed numerous of previous empirical work. They combined size, , Book / Market Value and β (CAPM) in their study (Brooks 2008). They introduced “ set of cross-sectional regressions ” (p.653): Abbildung in dieser Leseprobe nicht enthalten. They show that the positive relation between β and the average return is due to the negative correlation between a company’s size and beta (Fama and French 1992) “(…) average return increases with β and decreases with size “ (p. 452). The relationship of β and returns dissolve when you consider this correlation. The positive relationship between return and beta is linear, as predicted by the CAPM. According to this indication, it seems that the CAPM explains the higher returns of small firms. However, when is allowed to differ unconnected to size, the positive, linear -return relationship disappears. This result controverts the prediction of the one-period CAPM. They also indicated that the book / market value and the frim size are those variables, which have the highest explanatory power to returns (Fama and French 1992). In the following study Fama and French (1993) are describing three factors, which are significant in describing asset returns: the market risk from the CAPM, the firm size and book to market value of an asset. This model is now a factor-based model, which works “ in the context of a time series regression which is now run separately on each portfolio I ” (Brooks 2008, p. 653). Abbildung in dieser Leseprobe nicht enthalten. To sum up, Fama and French were able to prove “ variables that have no special standing in asset pricing theory show reliable power to explain the cross-section of average returns ” (Fama and French 1992, p. 3) . 3. Empirical Evidence Van Dijk (2011) shows in his paper that various studies, which examined stocks in the U.S. market in the period between 1936 and 1989, showed a size premium. Abbildung in dieser Leseprobe nicht enthalten. Source: Van Dijk (2001) However, since the 1980’s the size effect seems have disappeared in the U.S. Various studies could not determine a significant impact of company size to their stock returns. Horowitz et al. (2000) showed that small companies outperformed large companies in the period before the discovery of the size-effect (Banz 1981), however underperformed large companies in the period form 1982 - 1997. The empirical test results are shown in the table below (Horowitz et al. 2000): Avg. Monthly returns of small and large companies: Abbildung in dieser Leseprobe nicht enthalten. Excerpt out of 8 pages - scroll top Quote paper Arthur Ritter (Author), 2014, The Market Anomaly "Size Effect". Literature Review, Key Theories and Empirical Methods, Munich, GRIN Verlag, https://www.grin.com/document/299135 Look inside the ebook
{"url":"https://www.grin.com/document/299135","timestamp":"2024-11-08T15:42:29Z","content_type":"text/html","content_length":"71220","record_id":"<urn:uuid:fca709fd-2026-404f-893c-9dfd02758675>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00808.warc.gz"}
Exogenous Variables and Endogenous Variables In economics, models use two types of variables: endogenous variables, which are the variables the model seeks to explain, and exogenous variables, which are the variables the model does not explain and takes as "given." Endogenous Variables Endogenous variables are those whose values are determined within the economic model. In other words, they result from interactions between other variables within the system. Their behavior and values depend on the structure of the model and the relationships it establishes between the different variables. Exogenous Variables Exogenous variables are those whose values are determined outside the model or which the model does not explain. These variables are not affected by interactions occurring within the model. In other words, they are considered "given," and the economic model is not concerned with explaining them. However, they are used as initial conditions for the model. Exogenous and Endogenous Variables in Economic Models The main objective of economic models is to show how exogenous variables influence or affect endogenous variables. Economic models are nothing more than simplified theories of reality that demonstrate the relationship between different economic variables. Exogenous variables come from outside the model, meaning the model is not concerned with explaining these variables or determining what drives them. Instead, the model uses them to try to explain the behavior of endogenous variables, which are precisely the ones the model is concerned with. As a result, economic models show how changes in exogenous variables affect endogenous variables.
{"url":"https://econfina.net/en/macroeconomics/exogenous-variables-and-endougenous-variables","timestamp":"2024-11-10T15:13:34Z","content_type":"text/html","content_length":"7915","record_id":"<urn:uuid:35d05294-59c7-4b1f-97a6-21e0ea678c50>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00796.warc.gz"}
Online Tutoring for ICSE VIII Mathematics | Live Class 8 ICSE MathsTuition Course Description : The course is designed in a way that strengthens the pillars of a child’s career and goals. It focuses on the value addition of the mathematical knowledge already owned by the student. Definition of terms, advanced concepts, multiple steps, and new formulas are introduced enhancing analytical and rational thinking. The pupil is in a position to grasp the knowledge of Arithmetic, Basic Algebra, Basic trigonometry, geometry, etc. The recommended Mathematics book is Essential ICSE Mathematics for Class 8 by Bharati Bhavan/A.D. Gupta & A. Kumar, ICSE New Mathematics Today Class 8 published by S.Chand/O.P.Malhotra, S.K.Gupta and Anubhuti Gangal, Concise Mathematics – Middle School Class 8 by Selina and Longman ICSE Mathematics Class 8 by Pearson/ V.K. Sehgal. Number System Theme 1:Rational Numbers · Rational numbers · Representation of Number Line · Perfect Prime Factorization · Long Division Methods · Word Problems based on real-life situations using rational numbers Theme 2:Exponents powers · Laws of exponents with integral powers · Square and square roots using the division method · Cubes and Cube roots Theme 3:Sets · Union and the intersection of sets · Disjoint - set · Complement a set Theme 4:Algebraic operations · Algebraic expressions · Calculations on algebraic expressions (Addition, Subtraction, Division, Multiplication) · Inequalities and solutions to simple inequalities · Factorization · Solving Linear equations Theme 5: Understanding shapes · Properties of Quadrilaterals and Parallelogram (square, rectangle, and rhombus) · Mensuration · Concept of cube, cuboid, and cylinder · Total surface area of cube and cuboid · Concept of Volume and capacity and their measurement · Data Handling · Simple Pie charts Edugraff’s Methodology The most important problem in higher-level mathematics is to obtain the solutions by following a rigorous pattern of steps. Marks are accorded to each step and hence students need to have a good hold on them. Edugraff helps the child to understand the concept clearly and to solve the sums painlessly without forgetting the steps. The teachers’ in-depth knowledge strives for every child’s perfection through personal interactions and helps them to familiarize themselves with new word problems. A 10-minute revision before every mathematics lecture will help to retain core data and clear doubts by conducting doubt-solving sessions. Skills that our students learn We, at Edugraff, develop the pupil’s problem-solving capacity by enabling them to apply mathematical knowledge and solve various day–to–day problems. This application helps the student to comprehend the importance of mathematics in our lives thereby creating a lifelong interest in mathematics. The student can handle abstractions and read tables, charts, graphs, etc. The pupil develops computation and logical thinking ability because they are well aware of the omnipresence that mathematics has in their growing years. They will come across different ways and multiple methods to arrive at the same solution. Edugraff provides video lessons, question banks, and sample papers with fully worked-out solutions.
{"url":"https://edugraff.com/icse/viii-mathematics?teacherID=YmEzMDRmMzgwOWVkMzFkMGFkOTdiNWEyYjVkZjJhMzk%3D","timestamp":"2024-11-09T01:13:23Z","content_type":"text/html","content_length":"95918","record_id":"<urn:uuid:a51bfd1b-fc0a-44ab-9c9f-8e1728742556>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00177.warc.gz"}
Fundamental Principles Archives - Production Technology A good understanding of counterbalance is vital to the successful operation of surface sucker rod pumping units. Poor counterbalance practices can cause early failure of the gear reducer gearing and will result in excessive energy cost. Non-counterbalanced lever system: The up-stroke: The figure above represents a simple non-counterbalanced lever system. On the up-stroke, by pulling down on the end of a beam, a man is lifting a bucket full of water having a combined weight of 150 Note that the upstroke effort of the man is a substantial 150 lbs. Why Rod Lift? Rod Pumping System is a system of artificial lift using a surface pumping unit to impart reciprocating motion to a string of rods. Rod string then extends to a positive displacement pump placed in well near producing formation. In other words, the primary function of a rod pumping system is to convert the energy supplied at the prime mover into the reciprocating motion of the pumping unit required to transmit energy through the rod pumping to the downhole pump in order to artificially lift the reservoir. Rod Pumping System: The rod pumping system is made up of three components: • The surface pumping unit: which provides the means of turning the rotating power and motion of the motor into the reciprocating motion at the correct speed needed at the pump. • The rod string: that connects the surface unit to the pump and provides the force at the pump to lift the fluid to the surface. • The pump: which pumps the fluid to the surface. The integrity of this pumping system is only as good as each of the links or components. Torque and Maximum Counterbalance Moment Torque is defined as twisting force. To calculate the torque around the rotation of a crank caused by a weight at the end of the crank, you need to multiply the weight times the horizontal distance from the center of gravity of the weight to point of rotation. Unit of Torque: The International System of Units, SI, (the French Système International (d’unités)), suggests using the unit newton meter (N⋅m). The unit newton meter is properly denoted N⋅m or N m. This avoids ambiguity with mN (millinewtons). In Imperial units, “pound-force-feet” (lbf-ft), “foot-pounds-force”, “inch-pounds-force” are used. Other non-SI units of torque include “meter-kilograms-force” are also used. For all these units, the word “force” is often left out. For example, abbreviating “pound-force-foot” to simply “pound-foot” (in this case, it would be implicit that the “pound” is pound-force and not pound-mass). Maximum and Minimum Counterbalance Moment: The crank generates maximum torque when it is in a horizontal position. This maximum torque is known as the “Maximum Counterbalance Moment” ( maximum CBM) expressed in inch-pounds. NB: In rod pumping, the CBM is expressed in thousands of inch pounds. Fundamental principles of physics used in artificial lift design In the present article, we discuss a couple of physical quantities used in artificial lift design such as the concept of stress, pressure, work, power, energy, machine, and efficiency. Stress is defined as force divided by area. So that, to reduce the stress you can either reduce the force or increase the cross sectional area it acts on. Stress = Force / Area
{"url":"https://production-technology.org/category/artificial-lift/beam-pump/fundamental-principles/","timestamp":"2024-11-07T19:27:34Z","content_type":"text/html","content_length":"71937","record_id":"<urn:uuid:d2ecb96b-7a61-4713-9d03-c6b1bef28a17>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00462.warc.gz"}
George Pólya - (Thinking Like a Mathematician) - Vocab, Definition, Explanations | Fiveable George Pólya from class: Thinking Like a Mathematician George Pólya was a Hungarian mathematician renowned for his contributions to mathematics education and problem-solving techniques. His work emphasized the importance of understanding mathematical concepts and developing effective strategies for tackling complex problems, particularly through methods such as working backwards, which is essential for finding solutions in a logical manner. congrats on reading the definition of George Pólya. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Pólya's book 'How to Solve It' outlines his four-step problem-solving process: understanding the problem, devising a plan, carrying out the plan, and looking back to review the solution. 2. He advocated for using working backwards as a powerful strategy, especially when direct approaches to problems seem daunting or complex. 3. Pólya believed that teaching mathematics should focus on fostering students' understanding and intuition rather than just rote memorization of formulas. 4. His emphasis on heuristics provides students with tools to approach problems creatively, allowing them to think critically about various possible solutions. 5. Pólya's work has influenced not only mathematics but also fields like science and engineering, where problem-solving plays a crucial role. Review Questions • How did George Pólya’s four-step problem-solving process enhance the understanding of mathematical concepts? □ George Pólya’s four-step problem-solving process enhances the understanding of mathematical concepts by providing a clear structure that encourages deeper engagement with the material. The steps—understanding the problem, devising a plan, carrying out the plan, and looking back—help students break down complex problems into manageable parts. This method promotes critical thinking and allows learners to reflect on their approaches, fostering a more intuitive grasp of mathematical principles. • In what ways does Pólya's strategy of working backwards apply to solving real-world problems? □ Pólya's strategy of working backwards is highly applicable to real-world problems as it allows individuals to start from the desired outcome and trace their steps back to find a viable solution. This method is particularly useful when faced with complex issues where a straightforward approach may not be evident. By envisioning the end goal first, problem-solvers can identify necessary conditions and actions needed to achieve that goal, thus making the process more organized and efficient. • Evaluate how Pólya's contributions have influenced modern mathematics education and problem-solving approaches in various disciplines. □ Pólya's contributions have profoundly influenced modern mathematics education by emphasizing the importance of teaching problem-solving strategies over mere computation. His methods have encouraged educators to incorporate heuristic techniques and foster an environment where students are motivated to explore various approaches. This shift has not only improved students' performance in mathematics but has also been adopted in disciplines like science and engineering, where analytical thinking and structured problem-solving are crucial. The legacy of Pólya's work continues to shape educational practices, making learning more engaging and effective. "George Pólya" also found in: © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/thinking-like-a-mathematician/george-polya","timestamp":"2024-11-13T19:00:25Z","content_type":"text/html","content_length":"155392","record_id":"<urn:uuid:150a7180-c4bc-4986-b964-d32116f37d22>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00652.warc.gz"}
Is it advisable to seek help for simulating thermal-structural analysis in electronic components subjected to heat dissipation using Finite Element Analysis (FEA)? | Hire Someone To Do Mechanical Engineering Assignment Is it advisable to seek help for simulating thermal-structural analysis in electronic components subjected to heat dissipation using Finite Element Analysis (FEA)? Heat dissipation refers to a type of fluid or gas where magnetic stresses and displacements work together to create a change in the material properties or behavior. It is easy to figure out that if this application is done in properly designed components, the heat is effectively dissipated; however, the flow will have small domains and such deviations seem to be easily seen in physical phenomena. The reason for such a phenomenon is related to heat spreading, since this process can significantly change materials due to the forces affecting flow patterns. In other words, a strong force is required to ensure its effective operation in a fluid under relatively low heat dissipation. Nevertheless, it is difficult to provide sufficient heat dissipation for electrical components. Because the heat dissipation is highly dependent on the temperature of the component, it will typically be over the range from −1200 to −300 K when it is carried out over high heat temperatures. (Treatment of the components in controlled temperature and/or applied pressure Source a common technique for such processes.) According to the technique above, the components will certainly be heated above the applied temperature, and the induced effects on large volumes may decrease the homogeneity and temperature characteristic. For instance, as the temperature of the components is diminished, the applied pressure becomes significant. In addition, it can be realized that the use of electronic components tends to be ineffective in a cooling example due to nonlinear nonadiabatic influences appearing as a result of temperature gradients during operation. Since the applied pressure alone does not result in its heat transport, that application is unlikely to significantly change components behavior on subjected to internal heat. But for higher frequencies, a higher effective temperature could have a damaging effect on the heterogeneous heat transport, since this operation is to low fields that flow up into smaller areas. Such propagation of these effects from mechanical and electronic components should be considered as the most troublesome parameters for the design and implementation of the computer-based heat and power control that can be utilized in the parts andIs it advisable to seek help for simulating thermal-structural analysis in electronic components subjected to heat dissipation using Finite Element Analysis (FEA)? Despite the merits, there are still some uncertainties in the implementation of the approach, which are being carefully examined at two levels: (1) whether Finite Element analysis is warranted in the design of electronic components, (2) with the aim of improving device reliability due to higher data fidelity of the simulated electronic components; and (3) where the modelling and simulation parameters are at least as the input parameters of the analysis process. First, the simulation parameter $Q_{\rm tot}$ can be evaluated through three methods: standard approaches such as linear regression of $Q_{\rm tot}$ (Leistler, 1994); SVD methods using least square discriminant analysis (Lee and Lin, 1994); and multivariate regression with the coefficients of principal components (Kim, Lee and Lin, 1996). The classical SVD approach see this page selecting $Q_{\rm tot}$ from a kernel form; applying the optimal $L_{0}$ penalty that minimizes the difference between potential and observed components of the $L_{0}^{\rm N}$ residuals. A multivariate regression approach combines ridge regression, maximum likelihood score-based parametric decomposition, and classifiers (Kissuk, Heg$\rm \rm n,\ u$n, and Soed, 1996) in an iterative manner – only requiring the output of each step corresponds to its input. Since, all of useful reference $Q_{\rm tot}$ are computed already (after being convolutions of $Q_{\rm tot}$), the least-squares solution and the SVD can be made. The detailed methodology including the required fitting strategies is given in Farzareghani & Ebert, 1992: the dimensionality reduction of an EDA approach is carried out on three different dimensions. In particular, the dimensionality reduction of the first-order feature of the simulated modes is treated; the fitting strategies include two sets of fit-freeIs it advisable to seek help for simulating thermal-structural analysis in electronic components subjected to heat dissipation using Finite Element Analysis (FEA)?\[[@B1][@B2][@B3]\] In order to solve the practical problems mentioned above, we solve this problem by expanding the list of possible answers in other studies. In Figure [1](#F1){ref-type=”fig”} we compare the results of two different kinds of heat generation when applying a Finite Element Analysis (FEA) device with a 1W temperature. Someone To Do My Homework For Me In most of these studies, first, on the basis of the results of some of the previous studies ( [@B2], [@B7][@B8][@B9][@B10][@B11], [@B12][@B13]), we tried to solve the problem of thermal conduction of electricity through an electron tube shown as solid wire with EEA as initial condition. In this study, we had not tried to apply this type of temperature-dependent thermally-conductive technique ( [@B15]) since the experiments were not affected by the difficulty in dealing with the results. However, we tried to optimize the sample voltage and current. The current would change by simply modulating the current, but none of the individual currents had any significant effect on the results. The main reason is that this kind of operation could not be tailored by introducing any kinds of heat generation. To solve the problem, we initially searched the literature, and found that some results can be obtained using only the FEA device. In Figure [2](#F2){ref-type=”fig”}a, we compare the first results obtained by the electronic device to the results by thermally conductive fluid treatment (TFC), which can be explained roughly. The first results show that the change in the current (∼4 MVA/K) can be totally beneficial for the heating effect of FEA devices. In this literature, thermally-conductive fluid treatment is described as a thermal processing technique to get good working conditions for using a device without any other material. Therefore, we did try this web-site intend any particular studies on thermally-conductive fluid treatment applied to TFC itself. Instead, we looked for the ideal thermal-conductive fluid to stimulate the mechanical contact between the sample and the sample and find that it could be beneficial for the heating effect of FEA devices. And we found it can be beneficial because in the experiments, the current would change by simply modulating the current. The experiments in this paper show that the new characteristics obtained by our analysis could open doors to their applications in many related areas including their applicability in technical fields, work on the theoretical studies, and practical applications. Moreover, we can offer the advantages of our experiments in many problems involving thermally-conductive fluids coming from physics in general, and for thermally-conductive fluids like electronic devices and temperature-sensitive materials such as micro and pellet forceors as they can give us advantages of using them in engineering.
{"url":"https://mechanicalassignments.com/is-it-advisable-to-seek-help-for-simulating-thermal-structural-analysis-in-electronic-components-subjected-to-heat-dissipation-using-finite-element-analysis-fea","timestamp":"2024-11-07T09:34:27Z","content_type":"text/html","content_length":"135520","record_id":"<urn:uuid:45f5659b-6ebb-42f5-8713-65ad940fb767>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00136.warc.gz"}
Bionl Blog | Your Guide to Understanding Statistical Significance Your Guide to Understanding Statistical Significance 9 min. read February 5, 2023 Ahmad Jadallah Photo by Erol Ahmed What does it mean to provide evidence using data? Here is where statistical significance comes in hand. Imagine yourself as the dean of a prestigious institution, you get an alarming report indicating that your students average 6.80 hours of sleep each night, compared to the national average of 7.02 hours. The student body president is concerned about the kids’ health and cites this report as evidence that homework should be lowered. In contrast, the university president dismissed the report as nonsense: “Back in my day, we considered ourselves fortunate if we got four hours of sleep a night.” You must determine whether this is a serious matter. You are well-versed in statistics and see an opportunity to put your knowledge to work! Statistical significance is one of those concepts that we frequently hear but rarely comprehend. When someone asserts that data validate their thesis, we nod and accept it, assuming that statisticians have performed intricate calculations that have produced an unquestionable outcome. In reality, statistical significance is not a complex phenomenon that requires years of study to acquire; rather, it is a simple concept that everyone can and should grasp. As is the case with the majority of technical topics, statistical significance is based on a few elementary principles: hypothesis testing, the normal distribution, and p value. As we work towards resolving the aforementioned problem, we will touch briefly on each of these principles in this post. Testing hypotheses, a method for assessing a theory using facts, is the first concept we must address. The “hypothesis” refers to the researcher’s pre-study belief of the issue. This first notion is referred to as the alternative hypothesis, whereas its opponent is referred to as the null hypothesis. In our case, they include: • Alternative Hypothesis: The average quantity of sleep that students at our university get is less than the national average for college students. • Null Hypothesis: The average number of hours slept by students at our institution is not less than the national average for college students. Observe how cautious we must be with the language: we are searching for a very particular impact, which must be specified in the hypotheses so that we cannot later argue that we were testing anything different! (This is an example of a one-sided hypothesis test because we are only interested in one direction of change.) Hypothesis testing are one of the pillars of statistics and are used to evaluate the outcomes of the vast majority of investigations. These studies might range from a clinical trial to test the efficacy of a medicine to an observational study to evaluate an exercise regimen. All studies are focused with drawing comparisons between two groups or between one group and the total population. In the medical example, we may compare the average time to recover between groups using two different medications, but in our situation as deans, we wish to compare sleep between our students and those of the entire nation. The testing portion of hypothesis tests helps us to identify whether theory, the null or alternative, is supported by the data more strongly. There are several hypothesis tests; we will utilize the z-test. However, before we can evaluate our data, we must discuss two other essential concepts. The second component of statistical significance is the normal distribution, commonly known as the Gaussian distribution or bell curve. The normal distribution is used to depict the distribution of data from a process and is characterized by the mean, denoted by the Greek letter (mu), and the standard deviation, denoted by the letter (sigma) (sigma). The mean indicates the location of the data’s center, while the standard deviation represents the data’s dispersion. The normal distribution is used when evaluating data points in terms of the standard deviation. Based on the number of standard deviations a data point deviates from the mean, we may evaluate its degree of anomaly. The normal distribution has the following advantageous characteristics: • 68% of the data falls within one standard deviation of the mean. • 95% of the data fall within two standard deviations of the mean. • 99.7 percent of the data falls within three standard deviations of the mean. If a statistic has a normal distribution, each point may be described in terms of standard deviations from the mean. For instance, the mean female height in the United States is 65 inches with a standard variation of 4 inches. If a new acquaintance is 73 inches tall, she is two standard deviations above the mean and among the tallest 2.5% of females. (2.5% of females will be shorter than average—2(sigma) (57 in) and 2.5% will be taller than average + 2(sigma) (59 in)). Instead of stating that our data is two standard deviations from the mean, we evaluate it using a z-score, which simply reflects the number of standard deviations a point is from the mean. Subtracting the mean of the distribution from the data point and dividing by the standard deviation yields a z-score. In the height example, you can verify that our friend’s z-score would be 2. The resulting distribution is the standard normal with a mean of 0 and a standard deviation of 1, as seen in the figure below. Every time we do a hypothesis test, we must assume a distribution for the test statistic, which in our case is the mean (average) number of hours our kids sleep. The normal curve is used as an estimate for the distribution of the test statistic when conducting a z-test. According to the central limit theorem, when more averages are extracted from a data distribution, the averages converge toward a normal distribution. Nonetheless, this is always an approximation, as real-world data never fully adhere to a normal distribution. The assumption of a normal distribution enables us to assess the significance of the results of a research. The higher or lower the z-score, the less probable it is that a result occurred by chance and the more likely it is that the result is significant. To quantify the significance of the results, we employ one additional notion. The final fundamental concept is that of p-values. A p-value is the likelihood of observing outcomes that are at least as severe as those observed under the null hypothesis. This may appear quite complicated, so let’s examine an illustration. Suppose we are comparing the average IQ of Florida and Washington. The null hypothesis is that the average IQs in Washington are not greater than those in Florida. Using a p-value of 0.346, we determine that IQs in Washington are higher by 2.2 points. In a universe in which the null hypothesis—that average IQs in Washington are not higher than average IQs in Florida—holds true, there is a 34.6% likelihood that we would test IQs that are at least 2.2 points higher in Washington. Therefore, if IQs in Washington are not genuinely higher, we would still find them to be at least 2.2 points higher around one-third of the time owing to random noise. The lower the p-value, the more significant the finding, as it is less likely to be produced by noise. Whether or not a result may be deemed statistically significant relies on the significance p-value (also known as alpha) established before to the experiment. The results are statistically significant if the observed p-value is smaller than alpha. If we waited until after the experiment to determine alpha, we could just choose a value that ensures our results are significant regardless of what the data indicates! The choice of alpha depends on the context and the topic of research, but the most frequently employed number is 0.05, which corresponds to a 5 percent possibility that the results are random. In my laboratory, numbers between 0.1 and 0.001 are utilized often. The researchers who found the Higgs Boson particle, as an extreme example, chose a p-value of 0.0000003, or a 1 in 3.5 million probability that the finding was due to noise. (Statisticians are hesitant to acknowledge that a p-value of 0.05 is arbitrary. R.A. Fischer, the inventor of modern statistics, arbitrarily chose a p-value of 0.05, and it stayed)! To convert a normal distribution z-score to a p-value, we can use a table or statistical program such as R. The result will indicate the likelihood of a z-score less than the computed number. For instance, given a z-score of 2, the p-value is 0.977, indicating that there is only a 2.3% chance of randomly seeing a z-score greater than 2. In summary, we have thus far discussed three concepts: • Hypothesis Testing: A method for evaluating an idea • Normal Distribution: An approximation of the data used to test a hypothesis. • p-value: The likelihood that a result at least as dramatic as the observed one would have occurred if the null hypothesis were true. Now, let’s put the parts of our example together. Here are several essentials: According to the National Sleep Foundation, students throughout the country sleep 7.02 hours each night on average. In a survey of 202 university students, the average number of hours slept per night was 6.90, with a standard deviation of 0.80 hours. Our alternate hypothesis is that students at our institution sleep less than the national average for college students. We will choose an alpha value of 0.05, which indicates that the results are statistically significant if the p-value is less than 0.05. First, we must transform our measurement into a z-score, which indicates the number of standard deviations from the mean. This is accomplished by subtracting the population mean (the national average) from the measured value and dividing by the standard deviation over the square root of the sample size. (As the number of samples rises, the variance and standard deviation drop. This is taken into consideration by dividing the standard deviation by the square root of the sample size.) The z-score is referred to as the test statistic. Once we have a test statistic, we may compute the p-value using a table or a programming language such as R. I utilize code to demonstrate how simple it is to implement our approach with free software. (# denotes comments; bold indicates output) # Calculate the results z_score = (6.90 - 7.02) / (0.84 / sqrt(202)) p_value = pnorm(z_score)# Print our results sprintf('The p-value is %0:5f for a z-score of %0.5f.', p_value, z_score)"The p-value is 0.02116 for a z-score of -2.03038." Based on the p-value of 0.02116, the null hypothesis may be rejected. (Statisticians prefer that we reject the null hypothesis instead than accepting the alternative.) At a significance level of 0.05, there is statistically significant evidence that our pupils sleep less on average than college students in the United States. The p-value indicates that there is a 2.12% probability that our results are due to random noise. In this presidential debate, the student was correct. Before banning all schoolwork, we must be cautious not to place too much emphasis on this result. Notice that our p-value, 0.02116, would not be statistically significant if we had used a cutoff of 0.01. Those who wish to refute the findings of our study can easily modify the p-value. In addition to the conclusion, we should consider the p-value and sample size whenever we review a study. With a sample size of 202, our study may have statistical significance, but this does not imply that it is practically significant. In addition, because this was an observational research, there is just evidence for association and not causality. We demonstrated that there is a link between students at our school and decreased average sleep, but not that attending our school causes sleep loss. There may be other variables influencing sleep, and only a randomized controlled research can confirm causality. As is the case with the majority of technical notions, statistical significance is comprised of several simple principles. The most of the difficulty is in acquiring the vocabulary! After assembling the necessary components, you may begin using these statistical principles. As you master the fundamentals of statistics, you will be better equipped to examine studies and the news with a healthy degree of skepticism. You are able to see what the data truly says, as opposed to what someone claims it implies. The most effective strategy against dishonest politicians and businesses is an informed and suspicious populace!
{"url":"https://www.bionl.ai/blog/your-guide-to-understanding-statistical-significance","timestamp":"2024-11-04T12:37:48Z","content_type":"text/html","content_length":"62363","record_id":"<urn:uuid:7345346f-d13d-4b8f-b688-2149ddf35ac9>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00842.warc.gz"}
Simple arithmetic and comparisons Simple arithmetic and comparisons Basic arithmetic and logical comparisons can be carried out using nctoolkit with the standard Python operators: +, -, *, /, >, <, >=, <=, ==, and !=. Basic arithmetic using constants and datasets Often you might want to subtract datasets from each other, or add or subtract a dataset by a constant. The former is potentially made complicated as datasets can take different forms. For example, you might want to subtract a dataset which contains annual means from a dataset that contains monthly values. In this case you want to subtract the annual mean from the relevant month in each year. To deal with this problem, nctoolkit offers the ability to use standard Python operations +, -, * and / to carry out these operations, and in most use-cases it will be able to carry out an appropriate calculation. Let’s illustrate this using a dataset of monthly sea surface temperature from 1850 to the present day. We will start by looking at the first time step: import nctoolkit as nc ds = nc.open_data("sst.mon.mean.nc") ds.subset(time = 0) nctoolkit is using Climate Data Operators version 2.1.0
{"url":"https://nctoolkit.readthedocs.io/en/latest/adding.html","timestamp":"2024-11-06T17:21:28Z","content_type":"text/html","content_length":"1049962","record_id":"<urn:uuid:0044e643-0862-48b8-a246-49b88cb33870>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00134.warc.gz"}
How To Find The Height Of A Trapezoid Because the height of the trapezoid does not usually lie along an edge of the shape, students have a challenge when it comes to finding the exact height. By learning the geometric equation that relates the trapezoid's area to its bases and height, you can play some algebraic shuffling to directly calculate the height. Step 1 Set up the equation for the area of a trapezoid. Write A=h(b1+b2)/2, where A represents the trapezoid's area, b1 represents one of the base lengths, b2 represents the other base length and h represents the height. Step 2 Rearrange the equation to get h alone. Multiply both sides of the equation by 2 to get. 2A=h(b1+b2). Divide both sides of the equation by the sum of the bases to get 2A/(b1+b2)=h. This equation gives the representation of h in terms of the other traits of the trapezoid. Step 3 Plug in the values of the trapezoid into the equation for height. For example, if the bases are 4 and 12 and the trapezoid's area is 128, plug them into the equation to reveal h=2*128/(4+12). Simplifying to a single number gives the height as 16. Cite This Article Verial, Damon. "How To Find The Height Of A Trapezoid" sciencing.com, https://www.sciencing.com/find-height-trapezoid-2323089/. 13 March 2018. Verial, Damon. (2018, March 13). How To Find The Height Of A Trapezoid. sciencing.com. Retrieved from https://www.sciencing.com/find-height-trapezoid-2323089/ Verial, Damon. How To Find The Height Of A Trapezoid last modified August 30, 2022. https://www.sciencing.com/find-height-trapezoid-2323089/
{"url":"https://www.sciencing.com:443/find-height-trapezoid-2323089/","timestamp":"2024-11-13T19:37:15Z","content_type":"application/xhtml+xml","content_length":"69856","record_id":"<urn:uuid:0c2fc86f-ac35-4c41-97c7-7c5e62477b6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00172.warc.gz"}
A metallic sheet is of the rectangular shape with dimension 48 ... | Filo A metallic sheet is of the rectangular shape with dimension . From each one of corners, a square of is cut off. An open box is made of the remaining sheet. Find the volume of the box. Not the question you're searching for? + Ask your question In order to make an open box, a square of side 8 cm is cut off from each of the four corners and the flaps are folded up. Thus, the box will have the following dimensions : Was this solution helpful? Found 4 tutors discussing this question Discuss this question LIVE 6 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions from Surface Areas and Volumes View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text A metallic sheet is of the rectangular shape with dimension . From each one of corners, a square of is cut off. An open box is made of the remaining sheet. Find the volume of the box. Topic Surface Areas and Volumes Subject Mathematics Class Class 9 Answer Type Text solution:1 Upvotes 123
{"url":"https://askfilo.com/math-question-answers/a-metallic-sheet-is-of-the-rectangular-shape-with-dimension-48-mathrm~cm-times","timestamp":"2024-11-02T12:08:10Z","content_type":"text/html","content_length":"158677","record_id":"<urn:uuid:cdf7917c-9cf8-4f5c-b024-2ab6f0726bbb>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00122.warc.gz"}
How do you use a linear approximation or differentials to estimate tan44º? | HIX Tutor How do you use a linear approximation or differentials to estimate #tan44º#? Answer 1 The derivative of #tan theta# is #sec^2 theta#, so that to a first order in #delta theta# we have #tan(theta - delta theta) = tan(theta) -sec^2 theta xx delta theta# To estimate #tan 44^circ# we must use #theta = pi/4# and #delta theta = pi/180# (remember that in calculus angles must be measured in radians) so that #tan 44^circ ~~ tan (pi/4)-sec^2 (pi/4) xx pi/180 = 1-2 xx pi/180 ~~ 0.965# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To estimate ( \tan(44^\circ) ) using linear approximation or differentials, you can use the fact that for small angles, the tangent function is approximately equal to the angle itself measured in Since ( 44^\circ ) is not very large, we can use the linear approximation: ( \tan(x) \approx x ) for small ( x ) in radians. Converting ( 44^\circ ) to radians: ( 44^\circ \times \left( \frac{\pi}{180} \right) = \frac{44\pi}{180} ) radians. Thus, ( \tan(44^\circ) \approx \frac{44\pi}{180} ). You can then calculate the numerical value using this approximation. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-use-a-linear-approximation-or-differentials-to-estimate-tan44-8f9afa01ed","timestamp":"2024-11-08T11:40:46Z","content_type":"text/html","content_length":"579531","record_id":"<urn:uuid:a15ac769-d4e0-4b46-b2f5-5b0f7aaa8816>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00473.warc.gz"}
Solve it! (Grades 7-12) Teaching Mathematical Problem Solving in Inclusive Classrooms (Grades 7-12). Author: Marjorie Montague ISBN# 1-931311-11-0 Printed: $89.00 / #SIM003 Add to Cart eBook: $79.00 / #e-SIM003 Add to Cart Math word problem-solving skills instruction is a challenge, but that challenge can be met. Teachers can help ensure that students (Grades 7-12) in inclusive classrooms succeed when solving mathematical word problems by using the Solve It! instructional approach. Solve It! is designed to help students develop the processes and strategies used by good problem solvers. Instruction in mathematical problem solving is provided in lessons that: • Teach critical cognitive and meta cognitive processes. • Improve students’ motivation to solve problems. Even more important is that this approach is based on evidence collected from research done in a standard mathematics curriculum. The Evidence on Solve It! Solve It! was first validated and refined in three separate intervention studies with students between 12 and 18 years of age with mathematical learning disabilities. More recently, Solve It! was validated in two large research studies with a total of 1,059 students, conducted in inclusive general education math classrooms with average-achieving students, low-achieving students, and students with learning disabilities. Students were administered Solve It! math word problem-solving skills instruction. Researchers found that these students evidenced significantly greater growth in math problem-solving ability than the comparison group students who received only the district curriculum. In contrast with the comparison group, they also experienced greater growth and improvement on the state math assessment. [Read a summary of the research base.] Teachers who use Solve It! for math word problem-solving skills instruction can help students develop the processes and strategies used by good problem solvers. The instructional guide and accompanying CD provide instructional guidelines, curriculum-based measures, scripted lessons, extension activities, student materials, and an Excel® spreadsheet application for creating student progress charts. What’s in the Solve It! Package The instructional guide contains everything you need to use Solve It! Here is the table of contents: • Overview of Solve It! Instruction • Solve It! Instructional Components • Solve It! Class Charts • Solve It! Cue Cards • Solve It! Scripted Lessons and Implementation Calendar • Guidelines to Embed Solve It! in the Curriculum • Solve It! Practice Sessions • Monitoring Progress Using Curriculum-Based Measures • CBM Administration, Scoring, and Graphing Procedures • Techniques to Foster Skill Maintenance and Generalization • Solve It! Booster Sessions • Solve It! Problem-Solving Extension Activities • References The included CD, in addition, contains the following items: • Solve It! Scripted Lessons 1–3 • Solve It! Class Chart (RPV-HECC) • Solve It! Class Chart (Say-Ask-Check) • Solve It! Student Cue Card Assembly Instructions • Solve It! Student Cue Cards • Solve It! Class Star Chart • Solve It! Curriculum-Based Measures (CBM–1 through CBM–6) • Solve It! CBM Answer Keys (CBM–1 through CBM–6) • Solve It! Student Progress Graph (PDF format) • Solve It! Directions for Using Excel® Spreadsheet • Solve It! Student Progress Graph (Excel format) What’s more is that Solve It! processes align with the Common Core State Standards (CCSS) (Click to see alignment). Solve It! also aligns with standards in states that have not adopted CCSS (Alaska, Indiana, Minnesota, Nebraska, Oklahoma, Texas , Virginia). Click on the individual state to review alignment. NOTE: If you purchased the eBook version, you will download a zip file that contains a folder (“Files Folder”) which includes the individual files for instructional use that are referenced in the book and a pdf of the Solve It! Grades 7-12 book. Read an excerpt of this publication in PDF.
{"url":"https://www.exinn.net/solve-it/solve-it-grades-7-12/","timestamp":"2024-11-03T12:23:19Z","content_type":"text/html","content_length":"194238","record_id":"<urn:uuid:21e24da3-55d3-41e7-b931-6c4fdd4e19e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00266.warc.gz"}
How to Check That All Gradients Weights Are Zeros In Pytorch? In PyTorch, you can check whether all gradient weights are zeros by first accessing the gradients of your model's parameters using the .grad attribute. You can iterate over the parameters of your model and check if all gradients are zero. If any of the gradients are not zero, it indicates that the weight has been updated during backpropagation. You can use a simple loop to iterate over your model's parameters and check if all gradients are zeros. What is the performance benefit of early stopping based on zero gradients in PyTorch? The performance benefit of early stopping based on zero gradients in PyTorch is that it allows the training process to stop early if the model has converged and is no longer improving. This can help prevent overfitting and save computational resources by not continuing to train the model unnecessarily. By monitoring the gradients and stopping when they are close to zero, the training process can be optimized to achieve the best possible performance without wasting time on further training iterations that may not yield significant improvements. What is the connection between parameter initialization and zero gradients in PyTorch? In PyTorch, when initializing the parameters of a neural network, it is important to ensure that the gradients of these parameters are set to zero. This is because during the training process, the gradients of the parameters are accumulated over the various batches of data. If the gradients are not initialized to zero before each iteration of training, the accumulated gradients from previous iterations can interfere with the current gradients, leading to inaccurate and unstable updates to the parameters. By initializing the gradients of the parameters to zero before each iteration of training, it ensures that the gradients are properly computed and updated in a clean and consistent manner, leading to more effective and efficient training of the neural network. This is especially important in deep learning models where the network may have many layers and parameters, and ensuring stable and accurate gradient updates is crucial for successful training. How to check if a specific layer's weights are zero in PyTorch? You can check if a specific layer's weights are all zeros in PyTorch by using the following code snippet: 1 import torch 3 # Create a sample model with a specific layer 4 model = torch.nn.Linear(10, 5) 6 # Access the weights of the specific layer 7 layer_weights = model.weight.data 9 # Check if all the weights are zero 10 if torch.all(layer_weights == 0): 11 print("All weights in the layer are zero") 12 else: 13 print("Weights in the layer are not all zero") In this code snippet, we first create a sample model with a specific layer using torch.nn.Linear. We then access the weights of that specific layer using model.weight.data. Finally, we check if all the weights in the layer are zero by using torch.all(layer_weights == 0) and print an appropriate message based on the result.
{"url":"https://studentprojectcode.com/blog/how-to-check-that-all-gradients-weights-are-zeros","timestamp":"2024-11-02T05:58:53Z","content_type":"text/html","content_length":"264174","record_id":"<urn:uuid:36393468-da2d-4eb2-995c-26f18d485609>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00186.warc.gz"}
What our customers say... Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences: The step-by-step process used for solving algebra problems is so valuable to students and the software hints help students understand the process of solving algebraic equations and fractions. C.K., Delaware Algebrator is far less expensive then my old math tutor, and much more effective. Lee Wyatt, TX I am very particular about my son's academic needs and keep a constant watch. Recently, I found that he is having trouble in understanding algebra equations. I was not able to devote much time to him because of my busy schedule. Then this software came as God sent gift. The simple way of explaining difficult concepts made my son grasp the subject quickly. I recommend this software to all. Brian Johnson, VA No offense, but Ive always thought that math, especially algebra, just pretty much, well, was useless my whole life. Now that I get it, I have a whole new appreciation for its purpose and need in todays technological world! Plus, I can now honestly pursue my dream of being a video game creator and I probably would have realized too late that, without advanced math, you just cant do it! Maria Chavez, TX Search phrases used on 2010-10-14: Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among • prentice hall algebra 2 illinois solutions manual • algerbra factions • excel second order quadratic formula • 8th grade algebra 1 free study sheets • what is the formula for solving fractions? • college algebra problem solving • radical quadratic equations • percent formulas • combinations and permutations mathematics examples • algebra 9th grade lesson • Least Common Denominator Calculator • Geometry nth term worksheet quiz test • calculator program+java script+emac • "translation worksheets" maths • convert java time • hyperbola graphs school • dividing equations with exponents • how canyou use quadratic equations in our career or everyday life • Algebra Holt • free algebra test sheets • math-quadratic equation grouping • free aptitude ebooks • Printable Explanation of 6th Grade TAKS skills • physics math test • solving quadratic equation involving perfect square expressions • miami-dade public school algebra 1 pretest • slop and intercept MATLAB • different math trivias • holt pre algebra teachers textbook online free • simplify expressions worksheet • online examination test paper project in java • ordering negative fractions from least to greatest • problem solving with more than one fraction calculator • download a trigonometry calculator • aptitude test questions.pdf downloads • free copy of algebra solver • solve algebra problems with graphs • 5th order polynomial • algebraic expression for elementary student • discrete mathematics solved problems free download • mcdougal littell pre algbra work book • beginner complex fraction • california GED math printable worksheet • how to graph an ellipse on a graphing calculator • adding and simplifying square roots calculator • scale factor formula • Algebra expanding brackets worksheets • teach me math printable sheets free • start of multipling and dividing problems • pre algebra homework sheets • amiga rkrm manual • free algebra warm ups • 8th grade pre algebra free worksheets • non-homogeneous partial differential equation • how to rewrite a percentage as a fraction • fraction multiplier calculator • equation • free e books on Common Admission Test • how to solve an equation for two or more variable • free easy pre algebra worksheets • free ti-84 downloads • ebooks maths 6th standard • Past Exam Papers O Levels 1980 • ascending order worksheets • elementary statistics fourth edition chapter quiz answers • how does the number on the right of the equals sign help you to find the exponents? • calculator converting fractions to decimals • algebra 1 homework help • how to solve limits problems on a graphing calculator • radical expressions calculator • order numbers least to greatest • mathematica solving nonlinear differential equations • "simplifying variable expressions" activity • advance algebra • answers to study guide chapter 1 introduction to multimedia glencoe/mcgraw-hill • gmat algebra concepts and quick solving methods • free printable math worksheets for sixth graders • graphing real life functions • equation find out square root • first order partial differential equations • college algebra dugopolski
{"url":"https://softmath.com/math-book-answers/multiplying-fractions/how-do-you-solve-basic.html","timestamp":"2024-11-11T14:34:36Z","content_type":"text/html","content_length":"35587","record_id":"<urn:uuid:53aa6a51-08d1-414f-9ccf-ea879b5ba177>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00512.warc.gz"}
Diagnosis Scenario A father brought in his 2 year old boy because of a “cold” and also concern about his hearing and speech development. He thought the boy sometimes heard well and other days appeared deaf. Clinical examination showed dull tympanic membranes with retraction on the left. I wondered how accurate the clinics microtympanometer was for the diagnosis of otitis media with effusion and eustrachian tube dysfunction and formulated the question, what is the accuracy of microtympanometry for the diagnosis of hearing loss from a middle ear effusion in young children? Searching terms and evidence source: (tympanomet* OR impedance) AND (otitis media OR middle ear) AND audiomet* (MEDLINE) Read the article and decide: • Is the evidence from this article valid? • If valid, is this evidence important? • If valid and important, can you apply this evidence in caring for your patient? Completed Diagnosis Worksheet for Evidence-Based General Practice Holty I, Forster DP. Evaluation of pure tone audiometry and impedance screening in infant schoolchildren. J Epidemiol Community Health 1992; 46: 21-25. Are the results of this diagnostic study valid? Was there an independent, blind comparison with a reference (“gold”) standard of diagnosis? Yes – the audiometry and tympanometry were done by different examiners 5 days apart and without knowledge of the previous result. Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom it would be used in practice)? School-aged children were screened. Was the reference standard applied regardless of the diagnostic test result? Yes – all children were supposed to undergo both tests, and 94.1% did. Are the valid results of this diagnostic study important? Your calculations: Target Disorder (abnormal audiometry) Totals Present Absent Positive (type B or C) 99 92 191 Diagnostic Test Result a b a + b (tympanometry) Negative (type A) 73 310 383 c d c + d Totals a + c b + d a + b + c + d begin{align} mathit{Sensitivity} &= a/(a+c) \ &= 99/172 \ &= 58% end{align} begin{align} mathit{Specificity} &= d/(b+d) \ &= 310/402 \ &= 77% end{align} begin{align} text{Likelihood Ratio for a positive test result ($LR+$)} &= mathit{sens}/(1-mathit{spec}) \ &= 58%/23% \ &= 2.5 end{align} begin{align} text{Likelihood Ratio for a negative test result ($LR-$)} &= (1-mathit{sens})/mathit{spec} \ &= 42%/77% \ &= 0.54 end{align} begin{align} text{Positive Predictive Value} &= a/(a+b) \ &= 99/191 \ &= 52% end{align} begin{align} text{Negative Predictive Value} &= d/(c+d) \ &= 310/383 \ &= 77% end{align} begin{align} text{Pre-test Probability ($prevalence$)} &= (a+c)/(a+b+c+d) \ &= 172/574 \ &= 81% end{align} begin{align} mathit{Pre-test-odds} &= mathit{prevalence}/(1-mathit{prevalence}) \ &= 30%/70% \ &= 0.43 end{align} $$ text{Post-test odds} = text{Pre-test odds} times text{Likelihood Ratio} $$ $$ text{Post-test Probability} = text{Post-test odds}/(text{Post-test odds} + 1) $$ Can you apply this valid, important evidence about a diagnostic test in caring for your patient? Is the diagnostic test available, affordable, accurate, and precise in your setting? Yes. Many practices, including ours, have one of these simple cheap instruments. Can you generate a clinically sensible estimate of your patient’s pre-test probability (from practice data, from personal experience, from the report itself, or from clinical speculation) Parental concern is a poor predictor of hearing problems (Rosenfeld, Arch Otolaryngol Head Neck Surg 1998 Sep;124(9):989-92). I would adjust the prevalence slightly to a pre-test value of 40%. Will the resulting post-test probabilities affect your management and help your patient? (Could it move you across a test-treatment threshold?; Would your patient be a willing partner in carrying it A positive test would predict about a 63% chance of an abnormal audiogram (and warrant an audiogram); a negative test a 27% chance (and warrant a repeat test in a several weeks). Would the consequences of the test help your patient? Yes – a recent trial (Maw, Lancet 1999 353: 960-3) of delayed versus immediate surgery for OME showed a benefit in language development but that the delayed group also later caught up. Additional Notes While the “disease” of interest is otitis media with effusion, I have taken audiometry as the gold standard since it is really the hearing impairment that is important to the child, not the presence of some middle ear fluid. Tympanometry is a moderate predictor of audiometric hearing loss in school children Clinical Bottom Line Tympanometry is moderately helpful in the assessment of possible childhood hearing problems, but cannot rule out problems (sensitivity 58%). Holty I, Forster DP. Evaluation of pure tone audiometry and impedance screening in infant schoolchildren. J Epidemiol Community Health 1992; 46: 21-25. Clinical Question What is the accuracy of microtympanometry for the diagnosis of hearing loss from a middle ear effusion in young children? Search Terms (tympanomet* OR impedance) AND (otitis media OR middle ear) AND audiomet* (MEDLINE) The Study The study attempted to screen 610 school aged children with both audiometry and tympanometry done 5 days apart (half in each sequence) by different examiners unaware of the previous result. 94.1% had both tests. The Evidence Target Disorder (abnormal audiometry) Totals Present Absent Positive 99 92 191 Diagnostic Test Result (type B or C) a b a + b (tympanometry) Negative 73 310 383 (type A) c d c + d Totals a + c b + d a + b + c + d begin{align} mathit{Sensitivity} &= a/(a+c) \ &= 99/172 \ &= 58% end{align} begin{align} mathit{Specificity} &= d/(b+d) \ &= 310/402 \ &= 77% end{align} begin{align} text{Likelihood Ratio for a positive test result ($LR+$)} &= mathit{sens}/(1-mathit{spec}) \ &= 58%/23% \ &= 2.5 end{align} begin{align} text{Likelihood Ratio for a negative test result ($LR-$)} &= (1-mathit{sens})/mathit{spec} \ &= 42%/77% \ &= 0.54 end{align} Comparison of the 94.1% of children who completed both tympanometry and audiometry showed a sensitivity (for hearing loss) of 58% and a specificity of 77%, suggesting only a modest accuracy for tympanometry. However, the same study showed that audiometry had a test-retest repeatability of 67%. This inaccuracy in the reference standard will lead to an underestimate of the accuracy of The results for Type B and Type C tympanometries have been combined here as “abnormal”, but may have differential accuracy. Appraised By Paul Glasziou
{"url":"https://ktbooks.ca/evidence-based-medicine/additional-resources/syllabi-for-practising-ebm/general-practice/diagnosis-scenario/","timestamp":"2024-11-06T21:04:26Z","content_type":"text/html","content_length":"32980","record_id":"<urn:uuid:d11b5d8f-4c4e-43ba-801d-21466f51a470>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00842.warc.gz"}
Interest rate formula for loan Understand how interest is calculated and what fees are associated with your federal student loan. Remember that If you know the interest rate i, loan amount A, and payment P, you can use equation 1 to find the current balance remaining after n Dec 7, 2018 Learn how interest rates/APR affect monthly payments, and how the A monthly payment can be calculated using an EMI formula similar to the Interest is the fee paid for borrowing money. There are two common types of interest charged by lenders: simple interest and compound interest. To calculate Instead of algebra, just use the calculation steps as per the 2 images. The algebra is stuffed into a capsule. Assume loan amount 3000000, interest rate 9℅, Sep 4, 2019 Why is it important to know how to calculate an interest rate for a loan? the one being calculated in the rate, it is applied in different periods of Interest is the fee paid for borrowing money. There are two common types of interest charged by lenders: simple interest and compound interest. To calculate The loan payment formula shown is used for a standard loan amortized for a specific period of time with a fixed rate. Examples of specialized loans that do not apply to this formula include graduated payment, negatively amortized, interest only, option, and balloon loans. You are given a line of credit that can be reused as you repay the loan. The interest rate is usually variable and tied to an index such as the prime rate. Interest rate is the percentage of a loan paid by borrowers to lenders. For most loans, interest is paid in addition to principal repayment. Loan interest is usually expressed in APR, or annual percentage rate, which include both interest and fees. The rate usually published by banks for saving accounts, money market accounts, and CDs is the There are various methods banks use to calculate interest rates, and each method will change the amount of interest you pay. If you know how to calculate interest rates, you will better understand your loan contract with your bank. You also will be in a better position to negotiate your interest rate. When you know the principal amount, the rate, and the time, the amount of interest can be calculated by using the formula: I = Prt. For the above calculation, you have $4,500.00 to invest (or borrow) with a rate of 9.5 percent for a six-year period of time. Interest rate is the percentage of a loan paid by borrowers to lenders. For most loans, interest is paid in addition to principal repayment. Loan interest is usually expressed in APR, or annual percentage rate, which include both interest and fees. The rate usually published by banks for saving accounts, money market accounts, and CDs is the How to calculate monthly mortgage payments, loan balances at the end of a amortize a loan of L dollars over a term of n months at a monthly interest rate of c. Calculate the Interest (= "Loan at Start" × Interest Rate); Add the Interest to the Let us make a formula for the above just looking at the first year to begin with:. Sep 23, 2010 Suppose you take out a loan that requires monthly payments. As a result, interest is calculated monthly as well. The nominal interest rate, also Dec 5, 2017 Tools and Calculators. Generally, interest on student loans is calculated daily. Use this calculator to figure out the interest amount owed since Apr 14, 2019 If the interest amount is deducted from the loan amount at the start of the loan period as in discount loans, the periodic rate is calculated by If you know the interest rate i, loan amount A, and payment P, you can use equation 1 to find the current balance remaining after n Dec 7, 2018 Learn how interest rates/APR affect monthly payments, and how the A monthly payment can be calculated using an EMI formula similar to the Interest is the fee paid for borrowing money. There are two common types of interest charged by lenders: simple interest and compound interest. To calculate Instead of algebra, just use the calculation steps as per the 2 images. The algebra is stuffed into a capsule. Assume loan amount 3000000, interest rate 9℅, Sep 4, 2019 Why is it important to know how to calculate an interest rate for a loan? the one being calculated in the rate, it is applied in different periods of One use of the RATE function is to calculate the periodic interest rate when the amount, number of payment periods, and payment amount are known. For this example, we want to calculate the interest rate for $5000 loan, and with 60 payments of $93.22 each. The NPER function is configured as follows: The simple interest formula is: Interest = Principal x rate x time. Interest = $100 x .06 x 1. Interest = $6. The effective rate of interest on the loan (as with almost on any other financial instrument) – this is the expression of all future cash payments (incomes from a the interest rate per period, not per year (For instance, if the loan payments are made monthly and the interest rate is 9%, then i = 9%/12 = 0.75% = 0.0075.) n : the number of time periods elapsed at any given point: N : the total number of payments for the entire loan or investment: P : the amount of each equal payment May 8, 2019 A basic simple interest definition is the money paid on a loan or money earned on a deposit. For instance, when you borrow money, you must Apr 14, 2019 If the interest amount is deducted from the loan amount at the start of the loan period as in discount loans, the periodic rate is calculated by Nov 18, 2009 Methods for Calculating Interest on Loans: 360/365 vs. The method used for interest rate calculations in promissory notes is one such issue. If you know the interest rate i, loan amount A, and payment P, you can use equation 1 to find the current balance remaining after n Dec 7, 2018 Learn how interest rates/APR affect monthly payments, and how the A monthly payment can be calculated using an EMI formula similar to the Interest is the fee paid for borrowing money. There are two common types of interest charged by lenders: simple interest and compound interest. To calculate
{"url":"https://brokeregvjwge.netlify.app/vanconant42044par/interest-rate-formula-for-loan-xe.html","timestamp":"2024-11-11T06:14:01Z","content_type":"text/html","content_length":"32746","record_id":"<urn:uuid:e64a8059-3537-4fee-aba0-015d18ea6bb9>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00527.warc.gz"}
Loury (1979) - Market Structure And Innovation Jump to navigation Jump to search Has bibtex key Has article title Market Structure And Innovation Has author Loury Has year 1979 In journal In volume In number Has pages Has publisher © edegan.com, 2016 • Loury G.C.(1979), "Market structure and innovation", Quarterly Journal of Economics, 93, pp. 395-410. (pdf) title={Market structure and innovation}, author={Loury, G.C.}, journal={The Quarterly Journal of Economics}, In the application of conventional economic theory to the regulation of industry, there often arises a conflict between two great traditions. Adam Smith's "invisible hand" doctrine formalized in the First Fundamental Theorem of Welfare Economics supports the prescription that monopoly should be restrained and competitive market structures should be promoted. On the other hand, Schumpeter, in his classic Capitalism, Socialism and Democracy, takes a dynamic view of the economy in which momentary monopoly power is functional and is naturally eroded over time through entry, imitation, and innovation. Indeed the possibility of acquiring monopoly power and associated quasi rents is necessary to provide entrepreneurs an incentive to pursue innovative activity. As Schumpeter put it, progress occurs through a process of "creative destruction." An antitrust policy that actively promotes static competition is not obviously superior to laissez faire in such a world. This leads one to ponder what degree of competition within an industry leads to performance that is in some sense optimal. This question has been extensively studied in the literature concerning the relationship between industrial concentration and firm investment in research and development.' Both theoretical and empirical studies have suggested the existence of a degree of concentration intermediate between pure monopoly and atomistic (perfect) competition that is best in terms of R & D performance... The Model Basic Setup and Assumptions The basic setup is as follows: • There are [math]n\;[/math] identical firms, indexed by [math]i\;[/math] • Each firm invests [math]x_i\;[/math] to buy a random variable [math]\tau(x_i)\;[/math] which gives a completion date • The firm with the earliest realised completion date wins [math]V\;[/math] • [math]\tau \sim F_{\tau}(h(x_i))\;[/math] where [math]F_{\tau}\;[/math] is the CDF for the exponential distribution: [math]F_{\tau}(h(x_i)) = 1 - e^{-h(x_i)t}\;[/math] • [math]h(x_i)\;[/math] is the rate parameter, or the instantaneous probability of the innovation occuring. [math]h(x_i)\;[/math] is assumed to have the following properties: • [math]h(0) = 0 = \lim_{x \to \infty} h'(x)\;[/math] • For some [math]\overline{x} \ge 0\;[/math], [math]h''(x) \ge 0\;[/math] for [math]x \le \overline{x}\;[/math], and [math]h''(x) \le 0\;[/math] for [math]x \ge \overline{x}\;[/math]. This says that h is weakly convex prior to some point (possibly zero, so never convex) and concave after that point. If the point is away from zero then there is an initial range of increasing returns to scale, but after the point there is always diminishing returns to scale. • [math]\tilde{x}\;[/math] is defined as the point where [math]\frac{h(x)}{x}\;[/math] is greatest - this is the point where a firm is using its full capactity. Let [math]\hat{\tau_i}\;[/math] be an random variable giving the date of the earliest other firm: [math]\hat{\tau_i} = \min_{j \ne i} \{ \tau(x_j) \}\;[/math] Assuming iid tau's (no externalities in innovation!), then we can use a nice feature of the exponential distribution which is that if [math]X_1,\ldots,X_N\;[/math] are iid exponential with rates [math]\lambda_1,\ldots,\lambda_N\;[/math], then [math]\min(X_1,\ldots,X_N)\;[/math] is distributed exponential with rate [math]\sum_1^N \lambda_i\;[/math]. Therefore [math]\hat{\tau_i} \sim F_{\hat{\tau}}\;[/math], where [math]F_{\hat{\tau}} = 1 - e^{-\left( \sum_{j\ne i} h(x_j) \right) t}\;[/math]. For convenience we denote [math]a_i= \sum_{j\ne i} h(x_j)\; [/math] The firm discounts the future reciepts at a rate [math]r\;[/math] (note that using continuous compounding, [math]PV = FV \cdot e^{-rt}\;[/math], but apparently this paper values the future price at The firm wins the prize at time [math]t\;[/math] with probability: [math]pr(\tau(x_i) \le \min(\hat{\tau_i},t) = e^{-a_i t}(1-e^{-h)x_i)t}) + a_i \int_0^t (1-e^{-h(x_i)s})e^{-a_i s})ds\;[/math] [math]\therefore pr(\tau(x_i) \le \min(\hat{\tau_i},t) = \frac{h(x_i)}{a_i + h(x_i)} (1-e^{-(a_i+h(x_i))t})\;[/math] This is directly comparable to a contest success function: [math]pr(\tau(x_i) \le \min(\hat{\tau_i},t)) = \underbrace{\left( \frac{h(x_i)}{\sum_{i=1}^{n} h(x_i)} \right) }_{\mbox{Firm i relative effort}} \cdot \underbrace{ \left ( 1-e^{-\left(\sum_{i=1}^ {n} h(x_i)\right)t}\right ) }_{\mbox{Prob of innov at t}}\;[/math] Solution concept The model is not actually solved, but comparative statics can be performed on an implicit solution. The implicit solution is arrived at by noting that: 1. If a firms expectations are rational then the beliefs about the fastest competing firm are indeed formed using [math]\hat{\tau_i}\;[/math] 2. [math]a_i\;[/math] can be taken as constant by firm [math]i\;[/math] (i.e. in equilibrium [math]a_i\;[/math] will be correct) 3. [math]V\;[/math] and [math]r\;[/math] are exogenously given 4. As the firms are identical we can look for a symmetric solution! Each firm maximizes profit: [math]\max_x \Pi (a_i,x,V,r) = \max_x \left (\frac{V h(x_i)}{r(a_i + r +h (x_i))} - x \right)\;[/math] This is presumably constructed by taking: [math]\Pi = \int_0^{\infty} \left( \underbrace{pr(\tau_i \le \min(\hat{\tau_i},t)}_{\mbox{Prob of winning at t}} \cdot \underbrace{PV_t (V)}_{\mbox{PV of V at t}} \right ) dt - \underbrace{x}_{\ The FOC for the profit maximization implicitly defines the equilibrium solution. [math]\frac{h'(\hat{x})(a+r)}{(a+r+h(\hat{x}))^2} - \frac{r}{V} = 0\;[/math] The SOC must also hold (the paper has the first term missing) [math]\frac{a+r}{(a+r+h(\hat{x}))^3} \cdot \left ( h''(\hat{x}) (a+r+h(\hat{x})) - 2h'(\hat{x})^2 \right) \le 0\;[/math] However, this only defines the partial equilibrium. To complete the equilibrium we need to use the symmetry (which is also why the subscripts are dropped above): [math]a = \sum_{j \ne i} h(x_j) = (n-1)h(x^*)\;[/math] This equilibrium exists providing R&D is profitable absent rivalry (otherwise their may be a corner, not an internal solution). Comparative Statics With the partial equilibrium result the greater rivalry could lead to greater or lesser R&D: [math]h(\hat{x}) \ge a + r \implies \frac{\partial \hat{x}}{\partial a} \ge 0\;[/math] [math]h(\hat{x}) \le a + r \implies \frac{\partial \hat{x}}{\partial a} \le 0\;[/math] However, the full equilibrium result is unambiguous: [math]h(\hat{x}) \le a, \;\mbox{ as }\;a = (n-1)h(x^*)\quad \therefore \frac{\partial \hat{x}}{\partial a} \le 0 \quad\mbox{ if} n \ge 2\;[/math] The date of innovation (by the first firm) is always earlier as more firms compete, even though each firm is expending less, because (as the mean of the exponential distribution is the inverse of the rate parameter): [math]\mathbb{E} \tau(n) = (n h(x^*(n)))^{-1}\;[/math] This holds providing a reasonable stability condition holds: That a marginal increase in R&D by one firm causes a corresponding small drop in R&D by all other firms. This is proved easily in proposition 2 in the paper, and is intuitive. Competitive Entry Rearranging the FOC which characterizes the equilibrium for [math]\frac{V}{r}\;[/math], and subbing into the profit equation we get: [math]\Pi(a,x) = \frac{h(x^*)}{h'(x^*)} \left ( \frac{a+r+h(x^*)}{(a+r)} \right ) - x^* \quad \mbox{where}\; a = (n-1)h(x^*)\;[/math] Now if [math]h\;[/math] is concave (i.e. diminishing returns to scale throughout) then [math]\frac{h(x)}{x} \ge h'{x}\;[/math] and expected profits are always positive. They are only driven to zero in the limit of an infinite number of firms. With an initial range of increasing returns to scale then returns can go to zero with a finite number of firms. To see this we examine the change in profit with respect the number of firms, remembering that the expenditure each firm will make will depend upon the total number of competitors. [math]\frac{d \Pi}{d n} = \frac{\partial \pi }{\partial a}\cdot (h(x^*) + (n-1)h'(x^*)) + \frac{\partial \Pi}{\partial x} \frac{\partial x}{\partial n} \lt 0\;[/math] We know, from the envelope theorem, that [math]\frac{\partial \Pi}{\frac \partial x} = 0\;[/math], and from the original profit function that [math]\frac{\partial \Pi}{\partial a} \lt 0\;[/math]. By rearranging the other terms we can see that equilibrium profits decrease with more competition. There is a proof in the paper that shows that with initial increasing returns to scale the finite number of competitors in a zero profit equilibrium will be below [math]\tilde(x)\;[/math], which is the point where firms are using their capacity. Welfare Considerations Ignoring the problem that social benefits may not equal private benefits, there are two other inefficiencies. The first arises from duplication of effort. Given a fixed market structure, social welfare is maximized with a choice [math]x^{**}\;[/math] characterized by: [math]\frac{\partial \pi}{\partial x}((n-1)h(x),x) + (n-1)h'(x) \cdot \frac{\partial \pi}{\partial a}((n-1)h(x),x) = 0\;[/math] Whereas the individual firms choose an [math]x^*\;[/math] characterized by: [math]\frac{\partial \pi}{\partial x}((n-1)h(x),x)= 0\;[/math] Since [math]\frac{\partial \pi}{\partial a} \lt 0\;[/math] it follows that [math]x^*(n) \gt x^{**}(n)\;[/math]. The second inefficiency is that there are too many firms. If [math]\overline{x}\;[/math] (the point where increasing returns to scale stop) is at zero then infinite firms enter the competitive race. If [math]\overline{x} \gt 0\;[/math] a finite firms enter, but continue to enter until all profits are dissipated.
{"url":"https://www.edegan.com/mediawiki/index.php?title=Loury_(1979)_-_Market_Structure_And_Innovation&mobileaction=toggle_view_mobile","timestamp":"2024-11-11T00:50:11Z","content_type":"text/html","content_length":"38319","record_id":"<urn:uuid:c76f59c3-ec46-44ec-8753-8680ab56bec8>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00266.warc.gz"}
Fraction Division Word Problems Worksheet | 5.NF.B.7.A,5.NF.B.7 | Workybooks This worksheet reinforces division of unit fractions by whole numbers through word problems involving keywords such as recipes, sharing equally, measurements, and quantities. Students apply their understanding of division of unit fractions by solving real-life word problems related to food, measurements, quantities, and distributions, enhancing their problem-solving skills in a practical Publisher: Workybooks Written by:Neha Goel Tripathi Illustrated by: Sagar Kumar
{"url":"https://www.workybooks.com/worksheet/5.NF.B.7.A-4/fraction-word-problems","timestamp":"2024-11-09T00:21:07Z","content_type":"text/html","content_length":"145875","record_id":"<urn:uuid:55c21d3e-3dcf-4e41-810d-8e43a352829b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00007.warc.gz"}
How to Use the Dividend Discount Model to Find Stock Price |… Investors use the Dividend Discount Model, or DDM, as a way to find the value of a stock that pays dividends. Those dividends are valuable but since they are in the future it's not as simple as adding them up. The calculation isn't perfect but it can work as a guide to let investors know if a stock is under or overvalued. What Is The Dividend Discount Model (DDM)? DDM, or dividend discount model, is used to value a stock, based on the premise that it should be equal to the sum of its current and future cash flows (but after taking the 'time value of money' into account). In other words, DDM is calculating what the stock “should” cost considering its current value and future dividend payments. What Is the Purpose of the Dividend Discount Model? The dividend discount model can be used to determine the current price of a stock, as well as whether it’s under- or over-valued. Understanding the stock’s fair value will help investors determine whether to purchase it based on their goals. For instance, if the DDM results in a value that is more than a stock’s current trading price, investors may choose to purchase the stock since it’s potentially undervalued. That is, they expect the stock price to rise to the DDM result. Estimating Dividends & Stock Value Estimating dividends – or even the stock value of a company – is often relatively simple. To estimate potential changes, investors can make assumptions or attempt to seek trends based on past dividend payments. However, this isn’t a guarantee. What Is the Time Value of Money? The time value of money (TVM) is the concept that money you currently have will be worth more in the future. More specifically, money an investor holds now can potentially earn more money in the form of interest or investment returns. This relates to the DDM because a dividend payment that will be paid in the future is worth less today. You wouldn’t pay $100 to receive a $100 payment in two years. But if you would pay $80 today to receive $100 in the future then the present value of that $100 is $80. How Is the Dividend Discount Model Calculated? The dividend discount model is calculated using a basic valuation model that is the foundation for many other investing techniques. It combines expected future cash flows and the time value of money into one formula. The Formula for Calculating Stock Price Using the Dividend Discount Model There are several variations of the dividend discount model, but the most common is the Gordon Growth Model (GGM). The GGM model is as follows: How to Find Stock Prices with the Dividend Discount Model? The dividend discount model assumes that a stock price reflects the present value of all future dividend payments. In essence, the dividend discount model is a simple method to calculate stock prices, and it uses a formula that doesn’t require a lot of input variables compared to other formulas. Example of The Dividend Discount Model Let’s say the stock for Company ABC is trading at $50 per share. The company has a 10% rate of return and pays a $5 dividend per share in a year, expected to increase by 5% each year. Using the formula, we can now calculate the stock’s value: Value of stock = $5 / (0.10 - 0.05) = $100 What this means is that the stock has a current price of $50 but an intrinsic value of $100, so currently the stock is undervalued. Based on this information, an investor may decide to purchase the stock, hoping that the price goes up to $100. Is the Dividend Discount Model Reliable? The dividend discount model can be extremely useful when determining the value of a stock based on the present value of dividends. However, there are several flaws that can make it unreliable (or even unusable) for investors. Based on a Constant-Growth Model The DDM model assumes that dividend payments will go up at a consistent rate for the foreseeable future. Unfortunately, most – if not all – stocks won’t have dividend payouts that increase at a constant rate. Sensitive to Changes Changing any of the input values in the DDM formula by even a fraction of a percent can result in huge differences in the resulting stock value. Doesn’t Apply to All Stocks Investors can’t use DDM to value growth stocks that pay small dividends, as well as stocks that don’t pay dividends. Alternative Dividend Formulas to Find Stock Price There are several types of dividend formulas that investors can use to determine stock prices, including the discounted cash flow model and the dividend growth model. Discounted Cash Flow Model vs Dividend Discount Model Both the discounted cash flow and dividend discount models determine a stock’s value. The main difference is that the discounted cash flow model focuses on cash flow whereas the dividend discount model looks at a stock’s dividends. Dividend Discount Model vs Dividend Growth Model There are essentially no differences between the dividend growth model and the dividend discount model. Both look at a stock’s fair value and are based on current and future dividend payments. These terms tend to be used interchangeably. Dividend Discount Model vs Gordon Growth Model The Gordon growth model is the most commonly used formula for the dividend discount model. Named after Myron J. Gordon, an American economist, this model is based on looking at a stock's value based on the constant rate of growth of its dividends. Should Investors Use the Dividend Discount Model? Like any valuation method used to determine the value of a stock, the best way to use a dividend discount model is as one piece of the puzzle. In other words, don't buy a stock just because the dividend discount model tells you that it's cheap. Conversely, don't avoid a stock just because the model makes it look expensive. To determine whether an asset is worth investing in, investors should use metrics like return on equity, price-to-earnings, and other key financial ratios.
{"url":"https://investinganswers.com/articles/how-find-stocks-value-using-dividend-discount-model","timestamp":"2024-11-05T02:54:52Z","content_type":"text/html","content_length":"75568","record_id":"<urn:uuid:0b054a4d-24ac-47ff-8f56-d2b7bcdc4141>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00143.warc.gz"}
Summary: Equations of Lines Key Concepts • Given two points, we can find the slope of a line using the slope formula. • We can identify the slope and y-intercept of an equation in slope-intercept form. • We can find the equation of a line given the slope and a point. • We can also find the equation of a line given two points. Find the slope and use point-slope form. • The standard form of a line has no fractions. • Horizontal lines have a slope of zero and are defined as [latex]y=c[/latex], where c is a constant. • Vertical lines have an undefined slope (zero in the denominator) and are defined as [latex]x=c[/latex], where c is a constant. • Parallel lines have the same slope and different y-intercepts. • Perpendicular lines have slopes that are negative reciprocals of each other unless one is horizontal and the other is vertical. the change in y-values over the change in x-values Did you have an idea for improving this content? We’d love your input.
{"url":"https://courses.lumenlearning.com/waymakercollegealgebra/chapter/summary-equations-of-lines/","timestamp":"2024-11-08T14:24:06Z","content_type":"text/html","content_length":"47577","record_id":"<urn:uuid:35ba017d-02ed-4a3a-90ff-4ce73d8e1fc1>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00089.warc.gz"}
A stationary value of f(x)=x(Inx)2 is... | Filo Question asked by Filo student Not the question you're searching for? + Ask your question Filo tutor solution Learn from their 1-to-1 discussion with Filo tutors. Generate FREE solution for this question from our expert tutors in next 60 seconds Don't let anything interrupt your homework or exam prep with world’s only instant-tutoring, available 24x7 Found 7 tutors discussing this question Discuss this question LIVE 8 mins ago Students who ask this question also asked View more Question Text A stationary value of is Updated On Sep 21, 2022 Topic Calculus Subject Mathematics Class Class 12
{"url":"https://askfilo.com/user-question-answers-mathematics/a-stationary-value-of-is-31363030323739","timestamp":"2024-11-11T13:13:47Z","content_type":"text/html","content_length":"199614","record_id":"<urn:uuid:61f43aeb-b81f-4ebe-babf-48de2f400362>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00215.warc.gz"}
Explanations in this section should contain no formulas, but instead colloquial things like you would hear them during a coffee break or at a cocktail party. In this section things should be explained by analogy and with pictures and, if necessary, some formulas. In the path integral approach to gauge theory, observables are gauge invariant functions on the space $\mathcal A$ of a $G$-connections on $E$, where $G$ denotes the structure group and $E$ the fiber bundle. Therefore, an observable $f$ is a function on the space $\mathcal A / \mathcal G$, of connections modulo gauge transformations. As a result, vacuum expectation values are no longer defined as integrals with Lebesgue measure $ \mathcal A$, but instead with a Lebesgue measure on $ \mathcal A/ \mathcal G$. We obtain this measure by pushing forward the Lebesgue measure on $ \mathcal A$ by the map $ \mathcal A \to \mathcal A/ \mathcal G$ that sends each connection to its gauge equivalence class, and then $ A$ denotes a gauge equivalence class of connections in the integral. The simplest example of an observable in gauge theory are Wilson loops. Take note that this procedure of modding out $\mathcal G$ from $\mathcal A$ is what leads to Ghosts. To do this properly requires to make use of the BRST formalism. (Source: Baez, Munian; Gauge Fields, Knots and Gravity, page 342)
{"url":"https://physicstravelguide.com/advanced_notions/observable","timestamp":"2024-11-11T22:43:00Z","content_type":"text/html","content_length":"77033","record_id":"<urn:uuid:8baa5afe-73b0-4b40-84a6-56226bb82ddf>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00684.warc.gz"}
Monthly Archive - February 2022 brain teasers, puzzles, riddles, mathematical problems, mastermind, cinemania... These are the tasks listed 1 to 10. with your Account and start collecting points. Check your ranking on Calculate the number 2501 NUMBERMANIA: Calculate the number 2501 using numbers [2, 5, 7, 9, 62, 515] and basic arithmetic operations (+, -, *, /). Each of the numbers can be used only once. Correct answers: 1 Find number abc If 8b6c9 - c5348 = 533a find number abc. Multiple solutions may exist. Correct answers: 9 What a winning combination? The computer chose a secret code (sequence of 4 digits from 1 to 6). Your goal is to find that code. Black circles indicate the number of hits on the right spot. White circles indicate the number of hits on the wrong spot. Correct answers: 12 Fish trap This fisherman goes to the river to check an illegal fish trap that he owns. He looks around to make sure there are no Fishing Inspectors about and proceeds to pull the fish trap out to check it. An Inspector steps out of the bushes, “Ahha!” he said and the fisherman spun around and yelled “Shiiiit!”. The Inspector, who wasn't expecting such a response said “Settle down, I'm the Fishing Inspector”. “Thank God for that” said the fisherman, “I thought you were the bugger who owned this fish trap”. Calculate the number 7790 NUMBERMANIA: Calculate the number 7790 using numbers [7, 9, 9, 4, 81, 881] and basic arithmetic operations (+, -, *, /). Each of the numbers can be used only once. Correct answers: 2 MAGIC SQUARE: Calculate A*B-C The aim is to place the some numbers from the list (6, 8, 9, 10, 12, 13, 16, 18, 19, 70, 87) into the empty squares and squares marked with A, B an C. Sum of each row and column should be equal. All the numbers of the magic square must be different. Find values for A, B, and C. Solution is A*B-C. Correct answers: 2 Find number abc If ac202 - caa0a = bc2ab find number abc. Multiple solutions may exist. Correct answers: 8 Which is a winning combination of digits? The computer chose a secret code (sequence of 4 digits from 1 to 6). Your goal is to find that code. Black circles indicate the number of hits on the right spot. White circles indicate the number of hits on the wrong spot. Correct answers: 8 Calculate the number 1164 NUMBERMANIA: Calculate the number 1164 using numbers [8, 1, 1, 5, 47, 984] and basic arithmetic operations (+, -, *, /). Each of the numbers can be used only once. Correct answers: 3 Find number abc If 8a9a3 - a8389 = cbc3b find number abc. Multiple solutions may exist. Correct answers: 10 What a winning combination? The computer chose a secret code (sequence of 4 digits from 1 to 6). Your goal is to find that code. Black circles indicate the number of hits on the right spot. White circles indicate the number of hits on the wrong spot. Correct answers: 3
{"url":"http://geniusbrainteasers.com/month/2022/02/","timestamp":"2024-11-11T05:21:07Z","content_type":"text/html","content_length":"72495","record_id":"<urn:uuid:4db13846-1b5b-48f7-98eb-ce3ddc46def2>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00090.warc.gz"}
The What Is a Coefficient in Math Cover Up • / • / • The What Is a Coefficient in Math Cover Up The What Is a Coefficient in Math Cover Up Posted By Brad Perry on March 26, 2019 The Benefits of What Is a Coefficient in Math SummaryStatistics does not store the input data values in memory, therefore the statistics included within this aggregate are restricted to those which can be computed in 1 pass through the data without access to the entire collection of values. A global variable might be declared to live in a target-specific numbered address space. The termination is put at the destination. Nearly all symbols essay writer utilize exactly the same commands as LaTeX. Sometimes, when ranking data, there are at least two numbers which are the exact same. The individual size is just the selection of input variables. Further, it’s a horizontal line. If you’re interested in at least two variables, you are probably going to wish to have a look at the correlations between all unique variable pairs. A regression involving multiple associated variables can create a curved line in some scenarios. It makes a fine arc and then comes back to the ground. It is a significant investment if you need to have into long range shooting and will be particularly helpful if you handload. There are times that you’ve limited control. What What Is a Coefficient in Math Is – and What it Is Not Let’s take a good look. essay writing I posted this elsewhere, but it may assist you in general. Well, perhaps you do and perhaps you don’t. Life After What Is a Coefficient in Math In the calculator, you simply should enter the question in the mandatory field. In more complicated math problems, the expressions can secure a bit more involved. If you’re seeing this for the very first time, I strongly encourage you to begin with my prior article, Matrices for Coffee Lovers. Get the Scoop on What Is a Coefficient in Math Before You’re Too Late For a pure gas there are lots of references that offer CP and CV values at various problems. If every term in an expression has a lot of aspects, and if every term has a minumum of one factor that’s the exact same, then that factor is known as a frequent element. In the event the value is a negative number, then there’s a negative correlation of relationship strength, and in the event the value is a good number, then there’s a positive correlation of relationship strength. This calculator built around an internet form will offer you a very good estimation in regards to what your bullets ballistic coefficient is. These units run at various speeds, which permit them to supply you with an even increased efficiency. By comparison, the true value of the CV is independent of the unit where the measurement was taken, so it’s a dimensionless number. What Is a Coefficient in Math – the Conspiracy For example fractions will use a more compact font. Quite simply, it is a tangent function analysis. On occasion, it’s much more useful to use the differential sort of this equation. The angle isn’t just any ‘ole angle, but instead an extremely specific angle. The stack alignment has to be a multiple of 8-bits. The times provided are related to the trace length. While statistical inference provides many benefits it also will come with some vital pitfalls. Now it’s time to understand your data. For businessman, as an example, the correlation coefficient may be used to appraise the success or failure of a specific advert or company strategy. The Basics of What Is a Coefficient in Math Finding out how to deal with a linear equation will provide you a simple comprehension of algebra so that you will have the ability to handle more elaborate equations later. It may assist you to memorize the melodic mathematics, as opposed to the formula. Algebra can resemble a confusing subject. Details of What Is a Coefficient in Math Type the equal sign to tell Excel you’re going to use a formula to figure out the value of the cell. The hyperlink below should aid a whole lot. If it does not contain a variable, it is called a Ok, I Think I Understand What Is a Coefficient in Math, Now Tell Me About What Is a Coefficient in Math! A course like Mastering Excel will teach you all of the basics you should get started with Excel, in addition to teaching you more advanced functions that are included within the application. The reward of the CV is it is unitless. Any details that are pertinent to learn about the graph ought to be mentioned. Numbers might appear abstract, so utilize anything visual that you’re in a position to so as to come across the point across. Reserved words in LLVM are extremely much like reserved words in different languages. As soon as you hit okay, the reply will show up in the cell. A Secret Weapon for What Is a Coefficient in Math A typical task in math is to compute what is called the absolute value of a specific number. You will discover urge for constant involvement in your house’s construction. One has to be sound in mathematics as a way to start machine learning. If you still have questions after reading the following article, you should check with a seasoned family law. Obviously, there can well be like terms that you will want to combine. For example the year of the movie. New Ideas Into What Is a Coefficient in Math Never Before Revealed The calculation of the normal deviation is tedious enough by itself. Based on what sort of shrinkage is performed, some of the coefficients may be estimated to be exactly zero. After clearing denominators, you are going to have polynomial equation. Calculating the base of the the equation Lesson Summary The correlation coefficient is an excellent way to establish the level of correlation between two variables. The perfect way to demonstrate how Binomial Expansion works is to use a good example. A great means to do this is by utilizing long division. There isn’t any way that individual can aspire to save 100 folk by themselves. The final result will be a great approximation to our original
{"url":"https://dccassociation.com/2019/03/26/the-what-is-a-coefficient-in-math-cover-up/","timestamp":"2024-11-03T21:30:30Z","content_type":"text/html","content_length":"48941","record_id":"<urn:uuid:de28735d-e6cb-4767-875f-39063949083e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00194.warc.gz"}
From cppreference.com template< class BidirIt > void inplace_merge( BidirIt first, BidirIt middle, BidirIt last ); (1) template< class ExecutionPolicy, class BidirIt > void inplace_merge( ExecutionPolicy&& policy, BidirIt first, BidirIt middle, BidirIt last ); (2) (since C++17) template< class BidirIt, class Compare> void inplace_merge( BidirIt first, BidirIt middle, BidirIt last, Compare comp ); (3) template< class ExecutionPolicy, class BidirIt, class Compare> void inplace_merge( ExecutionPolicy&& policy, BidirIt first, BidirIt middle, BidirIt last, (4) (since C++17) Compare comp ); Merges two consecutive sorted ranges [first, middle) and [middle, last) into one sorted range [first, last). A sequence [first, last) is said to be sorted with respect to a comparator comp if for any iterator it pointing to the sequence and any non-negative integer n such that it + n is a valid iterator pointing to an element of the sequence, comp(*(it + n), *it) evaluates to false. This merge is stable, which means that for equivalent elements in the original two ranges, the elements from the first range (preserving their original order) precede the elements from the second range (preserving their original order). 1) Elements are compared using operator< and the ranges must be sorted with respect to the same. 3) Elements are compared using the given binary comparison function comp and the ranges must be sorted with respect to the same. Same as , but executed according to . These overloads do not participate in overload resolution unless std::is_execution_policy_v<std::decay_t<ExecutionPolicy>> (until C++20) std::is_execution_policy_v<std::remove_cvref_t<ExecutionPolicy>> (since C++20) is true. first - the beginning of the first sorted range middle - the end of the first sorted range and the beginning of the second last - the end of the second sorted range policy - the execution policy to use. See execution policy for details. comparison function object (i.e. an object that satisfies the requirements of Compare) which returns true if the first argument is less than (i.e. is ordered before) the second. The signature of the comparison function should be equivalent to the following: comp - bool cmp(const Type1 &a, const Type2 &b); While the signature does not need to have const &, the function must not modify the objects passed to it and must be able to accept all values of type (possibly const) Type1 and Type2 regardless of value category (thus, Type1 & is not allowed, nor is Type1 unless for Type1 a move is equivalent to a copy (since C++11)). The types Type1 and Type2 must be such that an object of type BidirIt can be dereferenced and then implicitly converted to both of them. Type requirements -BidirIt must meet the requirements of ValueSwappable and LegacyBidirectionalIterator. -The type of dereferenced BidirIt must meet the requirements of MoveAssignable and MoveConstructible. Return value Given N = std::distance(first, last)}, 1,3) Exactly N-1 comparisons if enough additional memory is available. If the memory is insufficient, O(N log N) comparisons. 2,4) O(N log N) comparisons. The overloads with a template parameter named ExecutionPolicy report errors as follows: • If execution of a function invoked as part of the algorithm throws an exception and ExecutionPolicy is one of the standard policies, std::terminate is called. For any other ExecutionPolicy, the behavior is implementation-defined. • If the algorithm fails to allocate memory, std::bad_alloc is thrown. This function attempts to allocate a temporary buffer. If the allocation fails, the less efficient algorithm is chosen. Possible implementation See the implementations in libstdc++ and libc++. The following code is an implementation of merge sort. #include <vector> #include <iostream> #include <algorithm> template<class Iter> void merge_sort(Iter first, Iter last) if (last - first > 1) { Iter middle = first + (last - first) / 2; merge_sort(first, middle); merge_sort(middle, last); std::inplace_merge(first, middle, last); int main() std::vector<int> v{8, 2, -2, 0, 11, 11, 1, 7, 3}; merge_sort(v.begin(), v.end()); for(auto n : v) { std::cout << n << ' '; std::cout << '\n'; See also merge merges two sorted ranges (function template) sort sorts a range into ascending order (function template) stable_sort sorts a range of elements while preserving order between equal elements (function template) merges two ordered ranges in-place
{"url":"https://cppreference.patrickfasano.com/en/cpp/algorithm/inplace_merge.html","timestamp":"2024-11-02T23:41:45Z","content_type":"text/html","content_length":"68475","record_id":"<urn:uuid:245eaf5c-b8ae-4f76-be18-efb8d6542a8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00546.warc.gz"}
If A is square matrix, then (An)−1=... | Filo Question asked by Filo student If is square matrix, then Not the question you're searching for? + Ask your question Filo tutor solution Learn from their 1-to-1 discussion with Filo tutors. Generate FREE solution for this question from our expert tutors in next 60 seconds Don't let anything interrupt your homework or exam prep with world’s only instant-tutoring, available 24x7 Found 8 tutors discussing this question Discuss this question LIVE for FREE 9 mins ago Practice more questions on Matrices and Determinant View more Students who ask this question also asked View more Question Text If is square matrix, then Updated On Jan 19, 2024 Topic Matrices and Determinant Subject Mathematics Class Class 12
{"url":"https://askfilo.com/user-question-answers-mathematics/if-is-square-matrix-then-36373236323732","timestamp":"2024-11-09T09:38:48Z","content_type":"text/html","content_length":"302688","record_id":"<urn:uuid:f33e6097-eccf-4597-abbe-f08599cfd62c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00757.warc.gz"}
Real Numbers A real number is any that represents a certain size and is found on the number line.- A real number can be decimal- A real number can be both positive and negative- A real number can be a fraction- Real numbers are denoted by the letter (R) In this article, you will learn everything you need to know about real numbers and practice exercises on the topic.Shall we begin? Real numbers are denoted by the letter (R) and, in fact, are all the numbers that are found on the number line.The number line includes all numbers, from negative infinity to positive infinity, therefore, every number, whether positive, negative, whole, fraction or decimal is called real. We use real numbers all the time, when we calculate our height, the floor in the elevator, even how many apples we buy.And what about $0$?$0$ is also real. Observe - Real numbers are, in fact, a set that contains all numbers: natural, whole, rational, and irrational numbers.Therefore, all numbers are real.With them, we can count everything we can think of and also add, subtract, multiply, and divide. Solution:According to the definition, every number is real, whether it is an integer, positive, negative, fraction, or decimal. Therefore:All the numbers in the set are real. Solution:In this exercise, there are real numbers, we have to discover which would correspond to place in the square's place.First, let's find out what the result of the parentheses is and rewrite the exercise.Additionally, instead of writing the fraction, we can write an integer with the same value.We will obtain:$(1.5+2) \times ⬜ +2=37$Now we can say that: $3.5$ times something + $2$ equals $37$.If we subtract $2$ from both sides we will get:$3.5 \times ⬜=35$Now let's think, what number multiplied by $3.5$ will give us $35$.Divide both sides by $3.5$ and we will obtain that:$10 = ⬜$ The price of a vase and four kitchen towels is $210$.The price of a vase is equivalent to the price of $3$ kitchen towels. Solution:First, let's organize the data.We know that the price of a vase is equivalent to the price of $3$ kitchen towels.If we denote the price of a kitchen towel as $X$ we can say that the price of a vase is $3X$.We also know that a vase and $3$ kitchen towels together cost $210$.We will write it in the form of an exercise and obtain:$3X+4x=210$Let's solve the equation and it will give us:$7X= Pay attention –> $X$ represents the price of a kitchen towel.$3X$ is the price of a vase.If $X=30$then the answer will be: Find the real numbers that are equivalent to the given real number:$\frac{6}{3}=⬜=⬜$ Solution:First, we know that the fraction $6\over 3$ is equivalent to the number $2$.Furthermore,By expanding the fraction, we can arrive at another fraction with the same value.Let's expand $6\over 3$ by $2$. Remember, we must expand both the numerator and the denominator to not alter the value of the fraction.We will obtain: $\frac{6}{3} = \frac{12}{6}$Therefore, we can write $2$ and $\frac
{"url":"https://www.tutorela.com/math/real-numbers","timestamp":"2024-11-02T11:09:33Z","content_type":"text/html","content_length":"67518","record_id":"<urn:uuid:54832259-1183-4d72-8141-6e8b3839aad0>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00241.warc.gz"}
(PDF) A/B Testing Intuition Busters: Common Misunderstandings in Online Controlled Experiments Author content All content in this area was uploaded by Ron Kohavi on Jun 11, 2022 Content may be subject to copyright. © Kohavi, Deng, Vermeer 2022. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version will be published in KDD 2022 at https://doi.org/10.1145/3534678.3539160 A/B Testing Intuition Busters Common Misunderstandings in Online Controlled Experiments Ron Kohavi Los Altos, CA Alex Deng Airbnb Inc Seattle, WA Lukas Vermeer Delft, The Netherlands A/B tests, or online controlled experiments, are heavily used in industry to evaluate implementations of ideas. While the statistics behind controlled experiments are well documented and some basic pitfalls known, we have observed some seemingly intuitive concepts being touted, including by A/B tool vendors and agencies, which are misleading, often badly so. Our goal is to describe these misunderstandings, the “intuition” behind them, and toexplain and bust that intuition with solid statistical reasoning. We provide recommendations that experimentation platform designers can implement to make it harder for experimenters to make these intuitive mistakes. General and Reference →Cross-computing tools and techniques → Experimentation; Mathematics of computing → Probability and statistics → Probabilistic inference problems → Hypothesis testing and confidence interval computation A/B Testing, Controlled experiments, Intuition busters ACM Reference format: Ron Kohavi, Alex Deng, Lukas Vermeer. A/B Testing Intuition Busters: Common Misunderstandings in Online Controlled Experiments. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’22), August 14-18, 2022, Washington DC, USA. https://doi.org/10.1145/3534678.3539160 1. Introduction Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof -- Greenland et al (2016) A/B tests, or online controlled experiments (see appendix for references), are heavily used in industry to evaluate implementations of ideas, with the larger companies starting over 100 experiment treatments every business day (Gupta, et al. 2019). While the statistics behind controlled experiments are well documented and some pitfalls were shared (Crook, et al. 2009, Dmitriev, Frasca, et al. 2016, Kohavi, Tang and Xu 2020, Dmitriev, Gupta, et al. 2017), we see many erroneous applications and misunderstanding of the statistics, including in books, papers, and software. The appendix shows the impact of these misunderstood concepts in courts and legislation. The concepts we share appear intuitive yet hide unexpected complexities. Although some amount of abstraction leakage is usually unavoidable (Kluck and Vermeer 2015), our goal is to share these common intuition busters so that experimentation platforms can be designed to make it harder for experimenters to misuse them. Our contributions are as follows: • We share a collection of important intuition busters. Some well-known commercial vendors of A/B testing software have focused on “intuitive” presentations of results, resulting in incorrect claims to their users instead of addressing their underlying faulty intuitions. We believe that these solutions exacerbate the situation, as they reinforce incorrect intuitions. • We drill deeply into one non-intuitive result, which to the best of our knowledge has not been studied before: the distribution of the treatment effect under non- uniform assignment to variants. Non-uniform assignments have been suggested in the statistical literature. We highlight several concerns. • We provide recommendations as well as deployed examples for experimentation platform designers to help address the underlying faulty intuitions identified in our collection. 2. Motivating Example You win some, you learn some -- Jason Mraz GuessTheTest is a website that shares “money-making A/B test case studies.” We believe such efforts to share ideas evaluated using A/B tests are useful and should be encouraged. That said, Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org. KDD ’22, August 14–18, 2022, Washington, DC, USA. © 2022 Copyright is held by the authors. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9385-0/22/08...$15.00. KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 2 - some of the analyses could be improved with the recommendations shared in this paper (indeed, some were already integrated into the web site based on feedback from one of the authors). This site is not unique and represents common industry practices in sharing ideas. We are using it as a concrete example that shows several patterns where the industry can A real A/B test was shared on December 16, 2021, in GuessTheTest’snewsletter and website with the title: “Which design radically increased conversions 337%?” (O'Malley 2021). The A/B test described two landing pages for a website (the specific change is not important). The test ran for 35 days, and traffic was split 50%/50% for maximum statistical power. The surprising results are shown in Table 1 below. Table 1: Results of a real A/B Test The analysis showed a massive lift of 337% for the Treatment with a p-value of 0.009 (using Fisher’s exact test, which is more appropriate for small numbers, the p-value is 0.013), which the article said is “far below the standard < 0.05 cut-off,” and with observed power of 97%, “well beyond the accepted 80% Given the data presented, we strongly believe that this result should not be trusted, and we hope to convince the readers and improve industry best practices so that similar experiment results will not be shared without additional validation. Based on our feedback and feedback from others, GuessTheTest added that the experiment was underpowered and suggested doing a replication run. 3. Surprising Results Require Strong Evidence—Lower P-Values Extraordinary claims require extraordinary evidence" (ECREE) -- Carl Sagan Surprising results make great story headlines and are often remembered even when flaws are found, or the results do not replicate. Many of the most cited psychology findings failed to replicate (Open Science Collaboration 2015).Recently, the term Bernoulli’s Fallacy has been used to describe the issue as a “logical flaw in the statistical methods” (Clayton 2021). While controlled experiments are the gold standard in science for claiming causality, many people misunderstand p-values. A very common misunderstanding is that a statistically significant result with p-value 0.05 has a 5% chance of being a false positive Some authors prefer to use the semicolon notation; see discussion at: (Goodman 2008, Greenland, Senn, et al. 2016, Vickers 2009). A common alternative to p-values used by commercial vendors is “confidence,” which is defined as (1-p-value)*100%, and often misinterpreted as the probability that the result is a true positive. Vendors who sell A/B testing software and should know better, get this concept wrong. For example, Optimizely’s documentation equates p-value of 0.10 with “10% error rate” (Optimizely 2022): …to determine whether your results are statistically significant: how confident you can be that the results actually reflect a change in your visitors' behavior, not just noise or randomness… In statistical terms, it's 1-[p-value]. If you set a significance threshold of 90%...you can expect a 10% error rate. Book authors about A/B Testing also get it wrong. The book A/B Testing: The Most Powerful Way to Turn Clicks Into Customers (Siroker and Koomen 2013) incorrectly defines p-value: …we can compute the probability that our observed difference (- 0.007) is due to random chance. This value, called the p-value... The book You Should Test That: Conversion Optimization for More Leads, Sales and Profit (Goward 2012) incorrectly states …when statistical significance (that is, it’s unlikely the test results are due to chance) has been achieved. Even Andrew Gelman, a Statistics professor at Columbia University, has gotten it wrong in one of his published papers (due to an editorial change) and apologized (Gelman 2014). The above examples, and several more in the appendix, show that p-values and confidence are often misunderstood, even among experts who should know better. What is the p-value then? The p-value is the probability of obtaining a result equal to or more extreme than what was observed, assuming that all the modeling assumptions, including the null hypothesis, , are true (Greenland, Senn, et al. 2016). Conditioning on the null hypothesis is critical and most often misunderstood. In probabilistic terms, we have This conditional probability is not what is being described in the examples above. All the explanations above are variations of the opposite conditional probability: what is the probability of the null hypothesis given the delta observed: Bayes Rule can be used for inverting between these two, but the crux of the problem is that it requires the prior probability of the null hypothesis. Colquhoun (2017) makes a similar point and writes that “we hardly ever have a valid value for this prior.” However, in companies running online controlled experiments at scale, we can construct good prior estimates based on historical experiments. KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 3 - One useful metric to look at is the False Positive Risk (FPR), which is the probability that the statistically significant result is a false positive, or the probability that is true (no real effect) when the test was statistically significant (Colquhoun 2017). Using the following terminology: • SS is a statistically significant result • is the threshold used to determine statistical significance (SS), commonly 0.05 for a two-tailed t-test. • is the type-II error (usually 0.2 for 80% power) • is the prior probability of the null hypothesis, that is Using Bayes Rule, we can derive the following (Wacholder, et al. 2004, Ioannidis 2005, Kohavi, Deng and Longbotham, et al. 2014, Benjamin, et al. 2017): 󰇛󰇜 󰇛󰇜󰇛󰇜 󰇛 󰇜 󰇛 󰇜 Several estimates of historical success rates (what the org believes are true improvements to the Overall Evaluation Criterion) have been published. These numbers may involve different accounting schemes, and we never know the true rates, but they suffice as ballpark estimates. The table below summarizes the corresponding implied FPR, assuming , experiments were properly powered at 80%, and using a p-value of 0.05 but plugging in 0.025 into the above formula because only statistically significant improvements are considered successful in two-tailed t-tests. In practice, some results will have a significantly lower p-value than the threshold, and those have a lower FPR, while results close to the threshold have a higher FPR, as this is the overall FPR for p-value <= 0.05 in a two-tailed t-test (Goodman and Greenland 2007). Also, other factors like multiple variants, iterating on ideas several times, and flexibility in data processing increase the FPR due to multiple hypothesis testing. What Table 2 summarizes is how much more likely it is to have a false positive stat-sig result than what people intuitively think. Moving from the industry standard of 0.05 to 0.01 or 0.005 aligns with the threshold suggested by the 72-author paper (Benjamin, et al. 2017) for “claims of new discoveries.” Finally, if the result of an experiment is highly unusual or surprising, one should invoke Twyman’s law—any figure that looks interesting or different is usually wrong (Kohavi, Tang and Xu 2020)—and only accept the result if the p-value is very low. In our motivating example, the lift to overall conversion was over 300%. We have been involved in tens of thousands of A/B tests that ran at Airbnb, Booking, Amazon, and Microsoft, and have never seen any change that improves conversions Permission to include statistic was given by Airbnb anywhere near this amount. We think it’s appropriate to invoke Twyman’s law here. In the next section, we show that the pre- experiment power is about 3% (highly under-powered). Plugging that number in, even with the highest success rate of 33% from Table 2, we end up with an FPR of 63%, so likely to be false. Alternatively, to override such low power, if we want the false positive probability, 󰇛󰇜 to be 0.05, we would need to set the p-value threshold as follows: 󰇛 󰇜 󰇛 󰇜 or = 0.0016, much lower than the 0.009 reported. Table 2: False Positive Risk given the Success Rate, p-value threshold of 0.025 (successes only), and 80% power (Kohavi, Crook and Longbotham 2009) (Kohavi, Deng and Longbotham, et al. Google Ads, (Manzi 2012, Thomke 2020, Moran We recommend that experimentation platforms show the FPR or estimates of the posterior probability in addition to p-values, and that surprising results be replicated. At Microsoft, the experimentation platform, ExP, provides estimates that the treatment effect is not zero using Bayes Rule with priors from historical data. In other organizations, FPR was used to set . 4. Experiments with Low Statistical Power are NOT Trustworthy When I finally stumbled onto power analysis… it was as if I had died and gone to heaven -- Jacob Cohen (1990) Statistical power is the probability of detecting a meaningful difference between the variants when there really is one, that is, rejecting the null when there is a true difference of . When running controlled experiments, it is recommended that we pick the sample size to have sufficient statistical power to detect a minimum delta of interest. With an industry standard power of 80%, and p-value threshold of 0.05, the sample size for each of two equally sized variants can be determined by this simple formula (van Belle 2002): KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 4 - Where is the number of users in each variant, and the variants are assumed to be of equal size, is the variance of the metric of interest, and is the sensitivity, or the minimum amount of change you want to detect. The derivation of the formula is useful for the rest of the section and the next section, so we will summarize its derivation (van Belle 2002). Given two variants of size each with a standard deviation of , we reject the null hypothesis that there is no difference between Control and Treatment (treatment effect is zero) if the observed value is larger than (e.g., for in a two-tailed test is ); , the standard error for the difference is . We similarly reject the alternative hypothesis that the difference is if the observed value issmaller than from . (Without loss of generality, we evaluate the left tail ofa normal distribution centered on a positive as the alternative; the same mirror computation can be made with a normal centered on .)The critical value is, therefore, when these two rejection criteria are equal (the approximation ignores rejection based on the wrong tail, sometimes called type III error, a very reasonable and common approximation): Equation 1 󰇛 󰇜 Equation 2 󰇛 󰇜 󰇛 󰇜 For 80% power, , , and , so the numerator is , conservatively rounded to 16. Another way to look at Equation 2, is that with 80% power, the detectable effect, , is 2.8SE (0.84SE+1.96SE). From our GuessTheTest motivating example, a conservative pre-test statistical power calculation would be to detect a 10% relative change. In Optimizely’s survey (2021) of 808 companies, about half said experimentation drove 10% uplift in revenue over time from multiple experiments. At Bing, monthly improvements in revenue from multiple experiments were usually in the low single digits (Kohavi, Tang and Xu 2020, Figure 1.4). A large relative percentage, such as 10% for a single experiment, is conservative in that it will require a smaller sample than attempting to detect smaller changes. Assuming historical data showed 3.7% as the conversion rate (what we see for Control), we can plug-in 󰇛 󰇜 󰇛 󰇜 = 3.563% and The sample size recommended for each variant to achieve 80% power is therefore: The above-mentioned test was run with about 80 users per variant, and thus grossly underpowered even for detecting a large 10% change. The power for detecting a 10% relative change with 80 users in this example is 3% (formula in the next section). With so little power, the experiment is meaningless. Gelman et al. (2014) show that when power goes below 0.1, the probability of getting the sign wrong (e.g., concluding that the effect is positive when it is in fact negative) approaches 50% as shown in Figure 1. Figure 1: Type S (sign) error of the treatment effect as a function of statistical power (Gelman and Carlin 2014) The general guidance is that A/B tests are useful to detect effects of reasonable magnitudes when you have, at least, thousands of active users, preferably tens of thousands (Kohavi, Deng and Frasca, et al. 2013). Table 3 shows the False Positive Risk (FPR) for different levels of power. Running experiments at 20% power with similar success rate to Booking.com, Google ads, Netflix, or Airbnb search, more than half of your statistically significant results will be false positives! Table 3: False Positive Risk as in Table 2, but with 80% power, 50% power, and 20% power Google Ads, Ioannidis (2005) made this point in a highly cited paper: Why Most Published Research Findings Are False. With many low statistical power studies published, we should expect many false positives when studies show statistically significant results. Moreover, power is just one factor; other factors that can lead to incorrect findings include: flexibility in designs, financial incentives, and simply multiple hypothesis testing. Even if there is no ethical concern, many researchers are effectively p- KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 5 - A seminal analysis of 78 articles in the Journal Abnormal and Social Psychology during 1960 and 1961 showed that researchers had only 50% power to detect medium-sized effects and only 20% power to detect small effects (Cohen 1962). With such low power, it is no wonder that published results are often wrong or exaggerated. In a superb paper by Button et al. (2013), the authors analyzed 48 articles that included meta-analyses in the neuroscience domain. Based on these meta-analyses, which evaluated 730 individual studies published, they were able to assess the key parameters for statistical power. Their conclusion: the median statistical power in neuroscience is conservatively estimated at 21%. With such low power, many false positive results are to be expected, and many true effects are likely to be The Open Science Collaboration (2015) attempted to replicate 100 studies from three major psychology journals, where studies typically have low statistical power. Of these, only 36% had significant results compared to 97% in the original studies. When the power is low, the probability of detecting a true effect is small, but another consequence of low power, which is often unrecognized, is that a statistically significant finding with low power is likely to highly exaggerate the size of the effect. The winner’s curse says that the “lucky” experimenter who finds an effect in a low power setting, or through repeated tests, is cursed by finding an inflated effect (Lee and Shen 2018, Zöllner and Pritchard 2007, Deng, et al. 2021). For studies in neuroscience, where power is usually in the range of 8% to 31%, initial treatment effects found are estimated to be inflated by 25% to 50% (Button, et al. 2013). Gelman and Carlin (2014) show that when power is below 50%, the exaggeration ratio, defined as the expectation of the absolute value of the estimate, divided by the true effect size, becomes so high as to be meaningless, as shown in Figure 2. Figure 2: Exaggeration ratio as a function of statistical power (Gelman and Carlin 2014) Our recommendation is that experimentation platforms should discourage experimenters from starting underpowered experiments. With high probability, nothing statistically significant will be found, and in the unlikely case (e.g., by multiple running iterations) a statistically significant result is obtained, it is likely to be a false positive with an overestimated effect size. 5. Post-hoc Power Calculations are Noisy and Misleading This power is what I mean when I talk of reasoning backward -- Sherlock Holmes, A Study in Scarlet Given an observed treatment effect , one canassume that it is the true effect and compute the “observed power” or “post-hoc power” from Equation 1 above as follows: The term is the observed Z-value used for the test statistic. It is hence , and we can derive the ad-hoc power as 󰇛 󰇜. Note that power is thus fully determined by the p-value and , and the graph is shown in Figure 3. If the p-value is greater than 0.05, then the power is less than 50% (technically as noted above, this ignores type-III errors, which are tiny). Figure 3: post-hoc power is determined by p-value In our motivating example, the p-value was 0.009, translating into Z of 2.61. Subtracting 1.96 gives 0.65, which translates into 74% post-power, which may seem reasonable. However, compare this number to the calculation in Section 4, where the pre-experiment power was estimated at 3%. In low- power experiments, the p-value has enormous variation, and translating it into post-hoc power results in a very noisy estimate (a video of p-values in a low power simulation is at https://tiny.cc/dancepvals). Gelman (2019) wrote that “using observed estimated of effect size is too noisy to be useful.” Greenland (2012) wrote: “for a study as completed (observed), it is analogous to giving odds on a horse race after seeing the outcome” and “post hoc power is unsalvageable as an analytic tool, despite any value it has for study planning.” A key use of statistical power is to claim that for a non- significant result, the true treatment effect is bounded by a small region of because otherwise there is a high probability (e.g., 80%) that the observation wouldhave been significant. This 0.001, 91% 0.005, 80% 0.01, 73% 0.015, 68% 0.05, 50% 0.1, 38% 0.15, 30% 0.2, 25% 0.3, 18%0.4, 13% 0.5, 10% 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 6 - claim holds true for pre-experiment power calculations, but it fails spectacularly for post-hoc, or observed power, calculations. In The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis(Hoenig and Heisey 2001), the authors share what they call a “fatal logical flow” and the “power approach paradox” (PAP). Suppose two experiments gave rise to nonrejected null hypotheses, and the observed power was larger in the first than the second. The intuitive interpretation is that the first experiment gives stronger support favoring the null hypothesis, as with high power, failure to reject the null hypothesis implies that it is probably true. However, this interpretation is only correct for pre-experiment power. As shown above, post-hoc power is determined by the p-value and , so the first experiment has a lowerp-value, providing stronger support against the null hypothesis! Experimenters who get a non-significant result will sometimes do a post-hoc power analysis and write something like this: the non-significant result is due to a small sample size, as our power was only 30%. This claim implies that they believe they have made a type-II error and if only they had a larger sample, the null would be rejected. This is catch-22—the claim cannot be made from the data using post-hoc power, as a non-significant result will always translate to low post-hoc power. Given the strong evidence that post-hoc power is a noisy and misleading tool, we strongly recommend that experimentation systems (e.g., https://abtestguide.com/calc) not show it at all. Instead, if power calculations are desired, such systems should encourage their users to pre-register the minimum effect size of interest ahead of experiment execution, and then base their calculations on this input rather than the observed effect size. At Booking.com, the deployed experimentation platform— Experiment Tool—asks users to enter this information when creating a new experiment. 6. Minimize Data Processing Options in Experimentation Platforms Statistician: you have already calculated the p-value? Surgeon: yes, I used multinomial logistic regression. Statistician: Really? How did you come up with that? Surgeon: I tried each analysis on the statistical software dropdown menus, and that was the one that gave the smallest p-value -- Andrew Vickers (2009) In an executive review, a group presented an idea that, they said, was evaluated in an A/B test and resulted in a significant increase to a key business metric. When one of us (Kohavi) asked to see the scorecard, and the metric’s p-value was far from significant. Why did you say it was statistically significant, he asked? The response was that it was statistically significant once you turn on the option for extreme outlier removal. We had inadvertently allowed users to do multiple-comparisons and inflate type-I error rates. Outlier removal must be blind to the hypothesis. André (2021) showed that outlier removal within a variant (e.g., removal of the 1% extreme values, determined for each variant separately), rather than across the data, can result in false-positive rates as high as 43%. Optimizely’s initial A/B system was showing near-real-time results, so their users peeked at the data and chose to stop when it was statistically significant, a procedure recommended by the company at the time. This type of multiple testing significantly inflates the type-I error rates (Johari, et al. 2017). Flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates (Simmons, Nelson and Simonsohn 2011). The culprit is researcher degrees of freedom, which include: 1. Should more data be collected, or should we stop now? 2. Should some observations be excluded (e.g., outliers, 3. Segmentation by variables (e.g., gender, age, geography) and reporting just those as statistically significant. The authors write that “In fact, it is unacceptably easy to publish ‘statistically significant’ evidence consistent with any Gelman and Loken (2014) discuss how data-dependent analysis, called the “garden of forking paths,” leads to statistically significant comparisons that do not hold up. Even without intentional p-hacking, researchers make multiple choices that lead to a multiple-comparison problem and inflate type-I errors. For example, Bem’s paper (2011) providing evidence of extrasensory perception (ESP) presented nine different experiments and had multiple degrees of freedom that allowed him to keep looking until he could find what he was searching for. The author found statistically significant results for erotic pictures, but performance could have been better overall, or for non-erotic pictures, or perhaps erotic pictures for men but not women. If results were better in the second half, one could claim evidence of learning; if it’s the opposite, one could claim For research, preregistration seems like a simple solution, and organizations like the Center for Open Science support such For experimentation systems, we recommend that data processing should be standardized. If there is a reason to modify the standard process, for example, outlier removal, it should be pre-specified as part of the experiment configuration and there should be an audit trail of changes to the configuration, as is done at Booking.com. Finally, the benefit in doing A/B testing in software is that replication is much cheaper and easier. If insight leads to a new hypothesis about an interesting segment, pre-register it and run a replication study. KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 7 - 7. Beware of Unequal Variants The difference between theory and practice is larger in practice than the difference between theory and practice in theory -- Benjamin Brewster In theory, a single control can be shared with several treatments, and the theory says that a larger control will be beneficial to reduce the variance (Tang, et al. 2010). Assuming equal variances, theeffective sample size of a two-sample test is the harmonic mean 󰇛 󰇜. When there is one control taking a proportion of users and equally sized treatments with size , the optimal control size should be chosen by minimizing the sum . We differentiate to get 󰇛 󰇜 The optimal control proportion x is the positive solution to 󰇛 󰇜 , which is For example, when k = 3, instead of using 25% of users for all four variants, we could use 36.6% for control and 21.1% for the treatments, making control more than 1.5x larger. When k = 9, control would get 25% and each treatment only 8.3%, making control 3 times the size of treatment. Ramp-up is another scenario leading to more extreme unequal treatment vs. control sample size. When a treatment starts at a small percentage, say 2%, the remaining 98% traffic may seem to be the obvious control. There are several reasons why this seemingly intuitive direction fails in practice: 1. Triggering. As organizations scale experimentation, they run more triggered experiments, which give a stronger signal for smaller populations, great for testing initial ideas and for machine learning classifiers (Kohavi, Tang and Xu 2020, Chapter 20, Triggering). It is practically too hard to share a control and compute for each treatment whether to trigger, especially for experiment treatments that start at different times and introduce performance overhead (e.g., doing inference on both control and treatment to determine if the results differ in order to trigger). 2. Because of cookie churn, unequal variants will cause a larger percentage of users in the smaller variants to be contaminated and be exposed to different variants (their probability of being re-randomized into a larger variant is higher than to their original variant). If there are mechanisms to map multiple cookies to users (e.g., based on logins), this mapping will cause sample-ratio mismatches (Kohavi, Tang and Xu 2020, Fabijan, et al. 3. Shared resources, such as Least Recently Used (LRU) caches will have more cache entries for the larger variant, giving it a performance advantage (Kohavi, Tang and Xu Here we raise awareness of an important statistical issue mentioned in passing by Kohavi et al (2012). When distributions are skewed, in an unequal assignment, the t-test cannotmaintain the nominal Type-I error rate on both tails. When a metric is positively skewed, and the control is larger than the treatment, the t-test will over-estimate the Type-I error on one tail and under-estimate on the other tail because the skewed distribution convergence to normal is different. But when equal sample sizes are used, the convergence is similar and the (observed delta) is represented well by a Normal- or t-distribution. Two common sources of skewness are 1) heavy-tailed measurements such as revenue and counts, often zero-inflated at the same time; and 2) binary/conversion metric with very small positive rate. We ran two simulated A/A studies. In the first study, we drew 100,000 random samples from a heavy-tailed distribution, D1, ofcounts, like nights booked at a reservation site. This distribution is both zero inflated (about 5% nonzero) and a skewed non-zero component, with a skewness of 35. The second study drew 1,000,000 samples from a Bernoulli distribution, D2, with a small p of 0.01%, which implies a skewness of 100. In each study, we allocated 10% samples to the treatment. We then compared two cases: in one, the control also allocated 10%; in the second, the remaining 90% were allocated to the control. We did 10,000 simulation trials andcounted number of times was rejected at the right tail and left tail at 2.5% level for each side (5% two-sided). Skewness of and metric value from the 10% treatment group are also reported. Table 4 shows the results with the following observations: 1. The realized Type-I error is close to the nominal 2.5% rate when control is the same size as treatment. 2. When control is larger, Type-I error at the left tail is greater than 2.5%, while smaller than 2.5% at the right tail. 3. Skewness of the is very close to 0 when control and treatment are equally sized. It iscloser to the skewness of treatment metric when control is much larger. Table 4: Type I errors at left and right tails from 10,000 simulation runs for two skewed distributions Skewness of a metric decreases with the rate of as the sample size increases. Kohavi, Deng, et al. (2014) recommended that sample sizes for each variant large enough such that the skewness of metrics be no greater than = 0.053. Because the skewness of is more critical for the t-test, note how in equally sized variants, the skewness is materially smaller. Table 4 shows that even when the skewness of the metric itself is above 0.3, the skewness of these for equal sized cases were all smaller than 0.053. Because the ratio of skewness is so high (e.g., 0.2817/0.0142=~19.8), achieving the same KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 8 - skewness, that is, convergence to normal, with unequal variants requires times more users. For experiment ramp-up, where the focus is to reject at the left tail so we can avoid degradation of experiences to users, using a much larger control can lead to higher-than-expected false rejections, so a correction should be applied (Boos and Hughes- Oliver 2000). For using a shared control to increase statistical power, the real statistical power can be lower than the expected. For a number of treatments ranging from two to four, the reduced variance from using the optimal shared control size is less than 10%. We do notthink this benefit justifies all the potential issues with unequal variants, and therefore recommend against the use of a large (shared) control. 8. Summary We shared five seemingly intuitive concepts that are heavily touted in the industry, but are very misleading. We then shared our recommendations for how to design experimentation platforms to make it harder for experimenters to be misled by these. The recommendations were implemented in some of the deployed platforms in our organizations. We thank Georgi Georgiev, Somit Gupta, Roger Longbotham, Deborah O’Malley, John Cutler, Pavel Dmitriev, Aleksander Fabijan, Matt Gershoff, Adam Gustafson, Bertil Hatt, Michael Hochster, Paul Raff, Andre Richter, Nathaniel Stevens, Wolfe Styke, and Eduardo Zambrano for valuable feedback. André, Quentin. 2021. "Outlier exclusion procedures mustbe blind to the researcher’s hypothesis." Journal of Experimental Psychology: General. Bem, Daryl J. 2011. "Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect." Journal of Personality and Social Psychology100 (3): 407- 425. doi:https://psycnet.apa.org/doi/10.1037/a0021524. Benjamin, Daniel J., James O. Berger, Magnus Johannesson, Brian A. Nosek, E.-J. Wagenmakers, Richard Berk, Kenneth A. Bollen, et al. 2017. "Redefine Statistical Significance." Nature Human Behaviour 2 (1): 6-10. Boos, Dennis D, and Jacqueline M Hughes-Oliver. 2000. "How Large Does n Have to be for Z and t Intervals?" The American Statistician, 121-128. Button, Katherine S, John P.A. Ioannidis, Claire Mokrysz, Brian A Nosek, Jonathan Flint, Emma S.J. Robinson, and Marcus R Munafò. 2013. "Power failure: why small sample size undermines the reliability of neuroscience." Nature Reviews Neuroscience 14: 365-376. https://doi.org/10.1038/nrn3475. Clayton, Aubrey. 2021. Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science.Columbia University Press. Cohen, Jacob. 1962. "The Statistical Power for Abnormal-Social Psychological Research: A Review." Journal of Abnormal and Social Psychology 65 (3): 145-153. Cohen, Jacob. 1990. "Things I have Learned (So Far)." American Psychologist 45 (12): 1304-1312. Colquhoun, David. 2017. "The reproducibility of research and the misinterpretation of p-values." Royal Society Open Science (4). https://doi.org/10.1098/rsos.171085. Crook, Thomas, Brian Frasca, Ron Kohavi, and Roger Longbotham. 2009. "Seven Pitfalls to Avoid when Running Controlled Experiments on the Web." KDD '09: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, 1105-1114. Deng, Alex, Yicheng Li, Jiannan Lu, and Vivek Ramamurthy. 2021. "On Post-Selection Inference in A/B Tests." Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2743-2752. Dmitriev, Pavel, Brian Frasca, Somit Gupta, Ron Kohavi, and Garnet Vaz. 2016. "Pitfalls of long-term online controlled experiments." IEEE International Conference on Big Data. Washington, DC. 1367-1376. Dmitriev, Pavel, Somit Gupta, Dong Woo Kim, and Garnet Vaz. 2017. "A Dirty Dozen: Twelve Common Metric Interpretation Pitfalls in Online Controlled Experiments." Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2017). Halifax, NS, Canada: ACM. 1427-1436. Fabijan, Aleksander, Jayant Gupchup, Somit Gupta, Jeff Omhover, Wen Qin, Lukas Vermeer, and Pavel Dmitriev. 2019. "Diagnosing Sample Ratio Mismatch in Online Controlled Experiments: A Taxonomy and Rules of Thumb for Practitioners." KDD '19: The 25th SIGKDD International Conference on Knowledge Discovery and Data Mining. Anchorage, Alaska, USA: ACM. Gelman, Andrew. 2019. "Don’t Calculate Post-hoc Power Using Observed Estimate of Effect Size." Annals of Surgery269 (1): e9-e10. doi:10.1097/SLA.0000000000002908. —. 2014. "I didn’t say that! Part 2." Statistical Modeling, Causal Inference, and Social Science. October 14. Gelman, Andrew, and Eric Loken. 2014. "The Statistical Crisis in Science." American Scientist 102 (6): 460-465. Gelman, Andrew, and John Carlin. 2014. "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors." Perspectives on Psychological Science 9 (6): 641 –651. doi:10.1177/1745691614551642. Goodman, Steven. 2008. "A Dirty Dozen: Twelve P-Value Misconceptions." Seminars in Hematology. Goodman, Steven, and Sander Greenland. 2007. Assessing the unreliability of the medical literature: aresponse to "Why most published research findings are false".Johns Hopkins KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 9 - University, Department of Biostatistics. Goward, Chris. 2012. You Should Test That: Conversion Optimization for More Leads, Sales and Profit or The Art and Science of Optimized Marketing. Sybex. Greenland, Sander. 2012. "Nonsignificance Plus High Power Does Not Imply Support for the Null Over the Alternative." Annals of Epidemiology 22 (5): 364-368. Greenland, Sander, Stephen J Senn, Kenneth J Rothman, John B Carlin, Charles Poole, Steven N Goodman, and Douglas G Altman. 2016. "Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations." European Journal of Epidemiology 31: 337-350. Gupta, Somit, Ronny Kohavi, Diane Tang, Ya Xu, Reid Anderson, Eytan Bakshy, Niall Cardin, et al. 2019. "Top Challenges from the first Practical Online Controlled Experiments Summit." 21 (1). Hoenig, John M, and Dennis M Heisey. 2001. "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis." American Statistical Association 55 (1): 19-24. Ioannidis, John P. 2005. "Why Most Published Research Findings Are False." PLoS Medicine 2 (8): e124. Johari, Ramesh, Leonid Pekelis, Pete Koomen, and David Walsh. 2017. "Peeking at A/B Tests." KDD '17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Halifax, NS, Canada: ACM. 1517-1525. Kaushik, Avinash. 2006. "Experimentation and Testing: A Primer." Occam’s Razor by Avinash Kaushik. May 22. Kluck, Timo, and Lukas Vermeer. 2015. "Leaky Abstraction In Online Experimentation Platforms: A Conceptual Framework To Categorize Common Challenges." The Conference on Digital Experimentation (CODE@MIT). Boston, MA. Kohavi, Ron, Alex Deng, Brian Frasca, Toby Walker, Ya Xu, and Nils Pohlmann. 2013. "Online Controlled Experiments at Large Scale." KDD 2013: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. http://bit.ly/ExPScale. Kohavi, Ron, Alex Deng, Roger Longbotham, and Ya Xu. 2014. "Seven Rules of Thumb for Web Site Experimenters." Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '14). http://bit.ly/expRulesOfThumb. Kohavi, Ron, Diane Tang, and Ya Xu. 2020. Trustworthy Online Controlled Experiments: A Practical Guide to A/BTesting. Cambridge University Press. https://experimentguide.com. Kohavi, Ron, Thomas Crook, and Roger Longbotham. 2009. "Online Experimentation at Microsoft." Third Workshop on Data Mining Case Studies and Practice Prize. Lee, Minyong R, and Milan Shen. 2018. "Winner’s Curse: Bias Estimation for Total Effects of Features in Online Controlled Experiments." KDD 2018: The 24th ACM Conference on Knowledge Discovery and Data Mining. London: ACM. Manzi, Jim. 2012. Uncontrolled: The Surprising Payoff of Trial- and-Error for Business, Politics, and Society. Basic Books. Moran, Mike. 2007. Do It Wrong Quickly: How the Web Changes the Old Marketing Rules . IBM Press. O'Malley, Deborah. 2021. "Which design radically increased conversions 337%?" GuessTheTest. December 16. Open Science Collaboration. 2015. "Estimating the Reproducibility of Psychological Science." Science 349 (6251). doi:https://doi.org/10.1126/science.aac4716. Optimizely. 2022. "Change the statistical significance setting." Optimizely Help Center. January 10. —. 2021. "How to win in the Digital Experience Economy." Optimizely. https://www.optimizely.com/insights/digital- Simmons, Joseph P, Leif D Nelson, and Uri Simonsohn. 2011. "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant." Psychological Science 22 (11): 1359-1366. Siroker, Dan, and Pete Koomen. 2013. A/B Testing: The Most Powerful Way to Turn Clicks Into Customers. Wiley. Tang, Diane, Ashish Agarwal, Deirdre O'Brien, and Mike Meyer. 2010. "Overlapping Experiment Infrastructure: More, Better, Faster Experimentation." Proceedings 16th Conference on Knowledge Discovery and Data Mining. Thomke, Stefan H. 2020. Experimentation Works: The Surprising Power of Business Experiments.Harvard Business Review Press. van Belle, Gerald. 2002. Statistical Rules of Thumb. Wiley- Vickers, Andrew J. 2009. What is a p-value anyway? 34 Stories to Help You Actually Understand Statistics. Pearson. Wacholder, Sholom, Stephen Chanock, Montserrat Garcia- Closas, Nathaniel Rothman, and Laure Elghormli. 2004. "Assessing the Probability That a Positive Report is False: An Approach for Molecular Epidemiology Studies." Journal of the National Cancer Institute. Zöllner, Sebastian, and Jonathan K Pritchard. 2007. "Overcoming the Winner’s Curse: Estimating Penetrance Parameters from Case-Control Data." The American Journal of Human Genetics 80 (4): 605-615. © Kohavi, Deng, Vermeer 2022. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version will be published in KDD 2022 at https://doi.org/10.1145/3534678.3539160 - 10 - A/B Testing Intuition Busters: Appendix This appendix provides additional support and useful references to several sections in the main paper. There are many references for A/B tests, or online controlled experiments (Kohavi, Tang and Xu 2020, Luca and Bazerman 2020, Thomke 2020, Georgiev 2019, Kohavi, Longbotham, et al. 2009, Goward 2012, Siroker and Koomen 2013); (Box, Hunter and Hunter 2005, Imbens and Rubin 2015, Gerber and Green 2012). Statistical concepts that are misunderstood have not only caused businesses to make incorrect decisions, hurting user experiences and the businesses themselves, but have also resulted in innocent people being convicted of murder and serving years in jail. In courts, incorrect use of conditional probabilities is called the Prosecutor’s fallacy and “The use of p-values can also lead to the prosecutor’s fallacy”(Fenton, Neil and Berger 2016). Sally Clark and Angela Cannings were convicted of the murder of their babies, in part based on a claim presented by eminent British pediatrician, Professor Meadow, who incorrectly stated that the chance of two babies dying in those circumstances are 1 in 73 million (Hill 2005). The Royal Statistical Society issued a statement saying that the “figure of 1 in 73 million thus has no statistical basis” and that “This (mis-)interpretation is a serious error of logic known as Prosecutor’s Fallacy” (2001). In the US, right turn on red was studied in the 1970s but “these studies were underpowered” and the differences on key metrics were not statistically significant, so right turn on red was adopted; later studies showed “60% more pedestrians were being run over, and twice as many bicyclists were struck” (Reinhart 2015). Surprising Results Require Strong Evidence—Lower P-Values Eliason (2018) shares 16 popular myths that persist despite evidence they are likely false. In the Belief in the Law of Small Numbers (Tvesrky and Kahneman 1971), the authors take the reader through intuition busting exercises in statistical power and replication. Additional examples where concepts are incorrectly stated by people or organizations in the field of A/B testing include: Until December 2021, Adobe’s documentation stated that The confidence of an experience or offer represents the probability that the lift of the associated experience/offer over the control experience/offer is “real” (not caused by random chance). Typically, 95% is the recommended level of confidence for the lift to be considered significant. This statement is wrong and was likely fixed after a LinkedIn post from one of us that highlighted this error. The book Designing with Data: Improving the User Experience with A/B Testing (King, Churchill and Tan 2017) incorrectly states p-values represent the probability that the difference you observed is due to random chance GuessTheTest defined confidence incorrectly (GuessTheTest 2022) as A 95% confidence level means there’s just a 5% chance the results are due to random factors -- and not the variables that changed within the A/B test The owner is in the process of updating its definitions based on our feedback. The web site AB Test Guide (https://abtestguide.com/calc/) uses the following incorrect wording when the tool is used, and the result is statistically significant: You can be 95% confident that this result is a consequence of the changes you made and not a result of random chance The industry standard threshold of 0.05 for p-value is stated in medical guidance (FDA 1998, Kennedy-Shaffer 2017). Minimize Data Processing Options in Experimentation Platforms Additional discussion of ESP following up on Bem’s paper (2011) are in Schimmack et. al. (2018). In Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results(Silberzhan, et al. 2018), the authors shared how 29 teams involving 61 analysts used the same data set to address the same research question. Analytic approaches varied widely, and estimated effect sizes ranged from 0.89 to 2.93. Twenty teams (69%) found a statistically significant positive effect, and nine teams (31%) did not. Many subjective decisions are part of the data processing and analysis and can materially impact the In the online world, we typically deal with a larger number of units than in domains like psychology. Simmons et al. (2011) recommend at least 20 observations per cell, whereas in A/B testing we recommend thousands to tens of thousands of users (Kohavi, Deng, et al. 2013). On the one hand, this larger sample size results in less dramatic swings in p-values because experiments are adequately powered, but on the other hand KDD ’22, August 14-18, 2022, Washington DC, USA Kohavi, Deng, and Vermeer - 11 - online experiments offer more opportunities for optional stopping and post-hoc segmentation, which suffer from multiple hypothesis testing. Resources for Reproducibility The key tables and simulations are available for reproducibility at https://bit.ly/ABTestingIntuitionBustersExtra . Bem, Daryl J. 2011. "Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect." Journal of Personality and Social Psychology 100 (3): 407-425. Box, George E.P., J Stuart Hunter, and William G Hunter. 2005. Statistics for Experimenters: Design, Innovation, and Discovery. 2nd. John Wiley & Sons, Inc. Eliason, Nat. 2018. 16 Popular Psychology Myths You Probably Still Believe. July 2. FDA. 1998. "E9 Statistical Principles for Clinical Trials." U.S. Food & Drug Administration. September. Fenton, Norman, Martin Neil, and Daniel Berger. 2016. "Bayes and the Law." Annual Review of Statistics and Its Application3: 51-77. doi:https://doi.org/10.1146/annurev- Georgiev, Georgi Zdravkov. 2019. Statistical Methods in Online A/B Testing: Statistics for data-driven business decisions and risk management in e-commerce. Independently published. https://www.abtestingstats.com/. Gerber, Alan S, and Donald P Green. 2012. Field Experiments: Design, Analysis, and Interpretation.W. W. Norton & Company. Goward, Chris. 2012. You Should Test That: Conversion Optimization for More Leads, Sales and Profit or The Art and Science of Optimized Marketing. Sybex. GuessTheTest. 2022. "Confidence." GuessTheTest. January 10. https://guessthetest.com/glossary/confidence/. Hill, Ray. 2005. "Reflections on the cot death cases." Significance 13-16. doi:https://doi.org/10.1111/j.1740- Imbens, Guido W, and Donald B Rubin. 2015. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge University Press. Kennedy-Shaffer, Lee. 2017. "When the Alpha is the Omega: P-Values, “Substantial Evidence,” and the 0.05 Standard at FDA." Food Drug Law J. 595-635. King, Rochelle, Elizabeth F Churchill, and Caitlin Tan. 2017. Designing with Data: Improving the User Experience with A/B Testing. O'Reilly Media. Kohavi, Ron, Alex Deng, Brian Frasca, Toby Walker, Ya Xu, and Nils Pohlmann. 2013. "Online Controlled Experiments at Large Scale." KDD 2013: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. http://bit.ly/ExPScale. Kohavi, Ron, Diane Tang, and Ya Xu. 2020. Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing. Cambridge University Press. Kohavi, Ron, Roger Longbotham, Dan Sommerfield, and Randal M. Henne. 2009. "Controlled experiments on the web: survey and practical guide." Data Mining and Knowledge Discovery 18: 140-181. http://bit.ly/expSurvey. Luca, Michael, and Max H Bazerman. 2020. The Power of Experiments: Decision Making in a Data-Driven World. The MIT Press. Reinhart, Alex. 2015.Statistics Done Wrong: The Woefully Complete Guide. No Starch Press. Royal Statistical Society. 2001. Royal Statistical Society concerned by issues raised in Sally. London, October 23. Schimmack, Ulrich, Linda Schultz, Rickard Carlsson, and Stefan Schmukle. 2018. "Why the Journal of Personality and Social Psychology Should Retract Article DOI: 10.1037/a0021524 “Feeling the Future: Experimental evidence for anomalous retroactive influences on cognition and affect” by Daryl J. Bem." Replicability-Index.January 30. https://replicationindex.com/2018/01/05/bem- Silberzhan, R, E L Uhlmann, D P Martin, and et. al. 2018. "Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results." Advances in Methods and Practices in Psychological Science1 (3): Simmons, Joseph P, Leif D Nelson, and Uri Simonsohn. 2011. "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant." Psychological Science 22 (11): 1359-1366. Siroker, Dan, and Pete Koomen. 2013. A/B Testing: The Most Powerful Way to Turn Clicks Into Customers. Wiley. Thomke, Stefan H. 2020. Experimentation Works: The Surprising Power of Business Experiments. Harvard Business Review Press. Tvesrky, Amos, and Daniel Kahneman. 1971. "Belief in the Law of Small Numbers." Psychological Bulletin 76 (2): 105-110. https://psycnet.apa.org/record/1972-01934-001.
{"url":"https://www.researchgate.net/publication/361226478_AB_Testing_Intuition_Busters_Common_Misunderstandings_in_Online_Controlled_Experiments","timestamp":"2024-11-13T14:29:32Z","content_type":"text/html","content_length":"795752","record_id":"<urn:uuid:6d860b1c-56eb-438d-8aa3-fe60d00fc071>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00598.warc.gz"}
What is the radius of a circle given by the equation (x+1)^2+(y-2)^2=64? | HIX Tutor What is the radius of a circle given by the equation #(x+1)^2+(y-2)^2=64#? Answer 1 The radius of this circle is $8$ (units). The equation of a circle is: #(x-a)^2+(y-b)^2=r^2#, where #r# is the radius, and #P=(a,b)# is the circle's centre, so the given circle has: Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/what-is-the-radius-of-a-circle-given-by-the-equation-x-1-2-y-2-2-64-8f9afa40a3","timestamp":"2024-11-05T18:44:08Z","content_type":"text/html","content_length":"569058","record_id":"<urn:uuid:dd8b46a0-08f4-4921-afea-92512f9c3246>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00326.warc.gz"}
Re: Problem with supplying custom tick marks • To: mathgroup at smc.vnet.net • Subject: [mg82986] Re: Problem with supplying custom tick marks • From: Thomas E Burton <tburton at brahea.com> • Date: Tue, 6 Nov 2007 03:40:50 -0500 (EST) Private communication with David Park leads us to conclude that we are supposed to interpret "across" as left-to-right in the following bit of one-line help: "Tick mark lengths are given as a fraction of the distance across the whole plot." In other words, the lengths of ticks for both axes are to be specified as a fraction of the total horizontal extent, so a y-tick length of 1 would extend across to the far edge of the frame, and x- tick length of 1 would be equally long. That's what we see in exported plots, and that's what we see on screen, once we have resized the graphic (no matter how trivially) with the mouse. What we see on screen before resizing, however, is that the x-tick length is proportional to the vertical extent of the plot. These x-ticks jump to their correct (we presume) lengths the first time one resizes the graph with the mouse. Which leads to this question: Is there a programmatic way to do the equivalent of resizing? PS. I prepared a notebook demonstrating most of the above, but didn't feel it worth sending to MathGroup. Write me if interested. David Park had written: > Here are a set of tick specifications for an x axis: ... What > bothers me is that the x-ticks are so much shorter in the second > plot (at least they distinctly look that way to me) even though > they are exactly the same ticks. ...
{"url":"https://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00173.html","timestamp":"2024-11-06T07:38:09Z","content_type":"text/html","content_length":"31300","record_id":"<urn:uuid:cb5595ac-6046-4d18-8b4e-be943d1ff6a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00538.warc.gz"}
Search | ERA Skip to Search Results Author / Creator / Contributor Fall 2022 The Helmholtz equation is a fundamental wave propagation model in the time-harmonic setting, which appears in many applications such as electromagnetics, geophysics, and ocean acoustics. It is challenging and computationally expensive to solve due to (1) its highly oscillating solution and (2)... Fall 2022 Interface problems arise in many applications such as modeling of underground waste disposal, oil reservoirs, composite materials, and many others. The coefficient $a$, the source term $f$, the solution $u$ and the flux $a\nabla u\cdot \vec{n}$ are possibly discontinuous across the interface... Spring 2021 Generalizing wavelets by adding desired redundancy and flexibility, framelets (a.k.a. wavelet frames) are of interest and importance in many applications such as image processing and numerical algorithms. Several key properties of framelets are high vanishing moments for sparse multi-scale... Fall 2017 One main goal of this thesis is to bring forth a systematic and simple construction of a multiwavelet basis on a bounded interval. The construction that we present possesses orthogonality in the derivatives of the multiwavelet basis among all scale levels. Since we are mainly interested in Riesz... Spring 2016 In this thesis we will study the robustness property of sub-gaussian random matrices. We first show that the nearly isometry property will still hold with high probability if we erase a certain portion of rows from a sub-gaussian matrix, and we will estimate the erasure ratio with a given small... Fall 2015 This thesis concentrates on the construction of directional tensor product complex tight framelets. It uses a complex tight framelet filter bank in one dimension and the tensor product of the one-dimensional filter bank to obtain high-dimensional filter bank. It has a number of advantages over...
{"url":"https://era.library.ualberta.ca/search?direction=desc&facets%5Bitem_type_with_status_sim%5D%5B%5D=thesis&facets%5Bmember_of_paths_dpsim%5D%5B%5D=db9a4e71-f809-4385-a274-048f28eb6814&facets%5Bsupervisors_sim%5D%5B%5D=Han%2C+Bin+%28Mathematical+and+Statistical+Sciences%29&sort=sort_year","timestamp":"2024-11-09T01:02:10Z","content_type":"text/html","content_length":"43336","record_id":"<urn:uuid:2b2d70bd-f804-4e07-b264-e1fe07a0d499>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00169.warc.gz"}
Purchasing Points On A Mortgage Using that example, to buy down your interest rate by 1% the mortgage points would cost $10, One mortgage discount point usually lowers your monthly. Discount points are a type of prepaid interest or fee that mortgage borrowers can purchase from mortgage lenders to lower the amount of interest on their. For example, if you take out a $, loan, one point would cost 1% of the loan amount, or $5, Two points would cost 2% of the loan amount, or $10, Mortgage points are a way to save on your monthly payments by putting up more money than required towards interest during closing. You pay these fees directly. Discount points are a one-time fee paid directly to the lender in exchange for a reduced mortgage interest rate: an exercise also known as “buying down the. If buying down the rate with one discount point, your interest rate could be lowered by at least % depending on the product and your specific loan scenario. Discount points are a type of prepaid interest or fee that mortgage borrowers can purchase from mortgage lenders to lower the amount of interest on their. Mortgage points are calculated as a percentage of your loan amount: One point equals 1% of the amount you borrow. For example, one point on a $, loan. A mortgage point equals 1 percent of your total loan amount — for example, on a $, loan, one point would be $1, Mortgage points are essentially a. Buying points when you close your mortgage can reduce its interest rate, which in turn reduces your monthly payment. But each 'point' will cost you 1% of your. A point or discount point is a one-time fee equal to 1 percent of your mortgage loan amount. The point is typically included in your closing costs in exchange. Mortgage points are a way to save on your monthly payments by putting up more money than required towards interest during closing. You pay these fees directly. There are two kinds of mortgage points: origination points and discount points. · Buyers pay origination points to the lender as a type of fee for processing the. Mortgage points are calculated as a percentage of your loan amount: One point equals 1% of the amount you borrow. For example, one point on a $, loan. But each "point" will cost you 1% of your mortgage balance. The mortgage points calculator helps you determine if you should pay for points, or use the money to. This calculator helps you determine if you should pay for points, or use the money to increase your down payment. Breaking Even: Should You Buy Points? Buying points is betting that you are going to stay in your home without altering the loan for many years. Points are an. Mortgage points, also known as discount points, are fees a homebuyer pays directly to the lender (usually a bank) in exchange for a reduced interest rate. Buying mortgage points when you close can reduce the interest rate, which in turn reduces the monthly payment. But each point will cost 1 percent of your. Should you buy points? Use the mortgage points calculator to see how buying points can reduce your interest rate, which in turn reduces your monthly payment. "Points," also called, loan discount or discount points, describe costs which are a form of prepaid interest. Each mortgage discount point paid lowers the. The idea behind mortgage points is that you pay a one-time and usually optional fee to reduce the rate. That way, you pay less in the long run. A mortgage point is equal to 1 percent of your total loan amount. For example, on a $, loan, one point would be $1, Learn more about what mortgage. We can buy down points at per point, and apparently there's no limit. It's about $k per point (or less actually) but I think but they haven't been. When you buy points (also known as discount points), you're paying your way to a lower mortgage interest rate. Think of it as pre-paid interest. Mortgage points, also known as discount points, are fees a homebuyer pays directly to the lender (usually a bank) in exchange for a reduced interest rate. Mortgage points come in two types: origination points and discount points. In both cases, each point is typically equal to 1% of the total amount mortgaged. Buying mortgage points may be your secret weapon to reducing the cost of your mortgage and saving a ton of money. Below, I explain everything you need to know. Points (also known as discount points and mortgage points) are a way to lower the interest rate on your home loan by agreeing to pay more at closing. Key facts about mortgage points · The lender and marketplace determine the interest rate reduction you receive for purchasing points so it's never fixed. We can buy down points at per point, and apparently there's no limit. It's about $k per point (or less actually) but I think but they haven't been. Mortgage points are used to lower your interest rate and monthly payment. Buying points is essentially like paying interest up-front. Discount points are a one-time fee paid directly to the lender in exchange for a reduced mortgage interest rate: an exercise also known as “buying down the. Discount points are a type of prepaid interest or fee that mortgage borrowers can purchase from mortgage lenders to lower the amount of interest on their. When To Buy Points on a Mortgage · You're planning to stay in your home past the break-even point. · You choose a fixed-rate mortgage. · You have enough cash on. When you buy points (also known as discount points), you're paying your way to a lower mortgage interest rate. Think of it as pre-paid interest. Key takeaways · Discount points are a cost you can pay to get a lower interest rate on your mortgage. · Generally speaking, paying for one point would lower your. "Points," also called, loan discount or discount points, describe costs which are a form of prepaid interest. Each mortgage discount point paid lowers the. You can purchase points on any loan as long as the cost of the points for that specific rate do not exceed % of the loan amount. Buying mortgage points may be your secret weapon to reducing the cost of your mortgage and saving a ton of money. Below, I explain everything you need to know. Buying mortgage points when you close can reduce the interest rate, which in turn reduces the monthly payment. But each point will cost 1 percent of your. Should you buy points? Use the mortgage points calculator to see how buying points can reduce your interest rate, which in turn reduces your monthly payment. For example, if you take out a $, loan, one point would cost 1% of the loan amount, or $5, Two points would cost 2% of the loan amount, or $10, Buying points when you close your mortgage can reduce its interest rate, which in turn reduces your monthly payment. But each 'point' will cost you 1% of your. Buying mortgage points can help you earn a lower interest rate on your mortgage. Having a lower rate, in turn, helps you save money over the life of the loan. Buying points can save you a lot of money, provided you keep the mortgage long enough. In the above example, your monthly mortgage payment would be $ without. Buying points is a great way to get a better interest rate and more manageable monthly payments, but if you're currently in the home purchase process and. You're more likely to benefit from paying points to buy down your mortgage rate if you plan on staying in your home for a while. That's because there's a break-. One discount point is equal to 1% of the loan amount (or $1, for every $,), and you can buy one or more points. However, the amount a point can reduce. This calculator helps you determine if you should pay for points, or use the money to increase your down payment. You need to consider how long it will take you to break even on the cost of buying points. To figure this out, divide the cost of the points by how much you'll. Using that example, to buy down your interest rate by 1% the mortgage points would cost $10, One mortgage discount point usually lowers your monthly. But each "point" will cost you 1% of your mortgage balance. The mortgage points calculator helps you determine if you should pay for points, or use the money to. If buying down the rate with one discount point, your interest rate could be lowered by at least % depending on the product and your specific loan scenario. How are mortgage discount points calculated? One point costs one percent of your loan amount (or $1, for every $,). Also, points don't have to be round. A point or discount point is a one-time fee equal to 1 percent of your mortgage loan amount. The point is typically included in your closing costs in exchange. A mortgage point is equal to 1 percent of your total loan amount. For example, on a $, loan, one point would be $1, Learn more about what mortgage. Mortgage points are a way to save on your monthly payments by putting up more money than required towards interest during closing. You pay these fees directly. Apple After Hours Stock Price | How Much Interest Does Wells Fargo Pay
{"url":"https://cazinobitcoin1.site/tools/purchasing-points-on-a-mortgage.php","timestamp":"2024-11-03T20:17:05Z","content_type":"text/html","content_length":"16593","record_id":"<urn:uuid:b920e027-9c52-4b85-ab84-55b208acfcc4>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00010.warc.gz"}
3,054 research outputs found Fluctuation Theorem(FT) has been studied as far from equilibrium theorem, which relates the symmetry of entropy production. To investigate the application of this theorem, especially to biological physics, we consider the FT for tilted rachet system. Under, natural assumption, FT for steady state is derived.Comment: 6 pages, 2 figure We experimentally demonstrate the fluctuation theorem, which predicts appreciable and measurable violations of the second law of thermodynamics for small systems over short time scales, by following the trajectory of a colloidal particle captured in an optical trap that is translated relative to surrounding water molecules. From each particle trajectory, we calculate the entropy production/ consumption over the duration of the trajectory and determine the fraction of second law–defying trajectories. Our results show entropy consumption can occur over colloidal length and time scales Probing the structures of stellar winds is of prime importance for the understanding of massive stars. Based on their optical spectral morphology and variability, the stars of the Oef class have been suggested to feature large-scale structures in their wind. High-resolution X-ray spectroscopy and time-series of X-ray observations of presumably-single O-type stars can help us understand the physics of their stellar winds. We have collected XMM-Newton observations and coordinated optical spectroscopy of the O6Ief star lambda Cep to study its X-ray and optical variability and to analyse its high-resolution X-ray spectrum. We investigate the line profile variability of the He II 4686 and H-alpha emission lines in our time series of optical spectra, including a search for periodicities. We further discuss the variability of the broadband X-ray flux and analyse the high-resolution spectrum of lambda Cep using line-by-line fits as well as a code designed to fit the full high-resolution X-ray spectrum consistently. During our observing campaign, the He II 4686 line varies on a timescale of ~18 hours. On the contrary, the H-alpha line profile displays a modulation on a timescale of 4.1 days which is likely the rotation period of the star. The X-ray flux varies on time-scales of days and could in fact be modulated by the same 4.1 days period as H-alpha, although both variations are shifted in phase. The high-resolution X-ray spectrum reveals broad and skewed emission lines as expected for the X-ray emission from a distribution of wind-embedded shocks. Most of the X-ray emission arises within less than 2R* above the photosphere.Comment: Accepted for publication in Astronomy & Astrophysic We investigate the phase diagram of a three-component system of particles on a one-dimensional filled lattice, or equivalently of a one-dimensional three-state Potts model, with reflection asymmetric mean field interactions. The three types of particles are designated as $A$, $B$, and $C$. The system is described by a grand canonical ensemble with temperature $T$ and chemical potentials $T\ lambda_A$, $T\lambda_B$, and $T\lambda_C$. We find that for $\lambda_A=\lambda_B=\lambda_C$ the system undergoes a phase transition from a uniform density to a continuum of phases at a critical temperature $\hat T_c=(2\pi/\sqrt3)^{-1}$. For other values of the chemical potentials the system has a unique equilibrium state. As is the case for the canonical ensemble for this $ABC$ model, the grand canonical ensemble is the stationary measure satisfying detailed balance for a natural dynamics. We note that $\hat T_c=3T_c$, where $T_c$ is the critical temperature for a similar transition in the canonical ensemble at fixed equal densities $r_A=r_B=r_C=1/3$.Comment: 24 pages, 3 figure The authors investigate the solution of a nonlinear reaction-diffusion equation connected with nonlinear waves. The equation discussed is more general than the one discussed recently by Manne, Hurd, and Kenkre (2000). The results are presented in a compact and elegant form in terms of Mittag-Leffler functions and generalized Mittag-Leffler functions, which are suitable for numerical computation. The importance of the derived results lies in the fact that numerous results on fractional reaction, fractional diffusion, anomalous diffusion problems, and fractional telegraph equations scattered in the literature can be derived, as special cases, of the results investigated in this article.Comment: LaTeX, 16 pages, corrected typo Depression is a common comorbidity in cardiac patients. This study sought to document fluctuations of depressive symptoms in the 12months after a first major cardiac event. In all, 310 patients completed a battery of psychosocial measures including the depression subscale of the Symptom Check List-90-Revised. A total of 252 of them also completed follow-up measures at 3 and 12months. Trajectories of depressive symptoms were classified as none, worsening symptoms, sustained remission, and persistent symptoms. Although the prevalence of depressive symptoms was consistent at each assessment, there was considerable fluctuation between symptom classes. Regression analyses were performed to identify predictors of different trajectories.Oskar Mittag, Hanna Kampling, Erik Farin and Phillip J Tull The ferromagnetic q-state Potts model on a square lattice is analyzed, for q>4, through an elaborate version of the operatorial variational method. In the variational approach proposed in the paper, the duality relations are exactly satisfied, involving at a more fundamental level, a duality relationship between variational parameters. Besides some exact predictions, the approach is very effective in the numerical estimates over the whole range of temperature and can be systematically improved.Comment: 20 pages, 5 EPS figure The surface and bulk properties of the two-dimensional Q > 4 state Potts model in the vicinity of the first order bulk transition point have been studied by exact calculations and by density matrix renormalization group techniques. For the surface transition the complete analytical solution of the problem is presented in the $Q \to \infty$ limit, including the critical and tricritical exponents, magnetization profiles and scaling functions. According to the accurate numerical results the universality class of the surface transition is independent of the value of Q > 4. For the bulk transition we have numerically calculated the latent heat and the magnetization discontinuity and we have shown that the correlation lengths in the ordered and in the disordered phases are identical at the transition point.Comment: 11 pages, RevTeX, 6 PostScript figures included. Manuscript substantially extended, details on the analytical and numerical calculations added. To appear in Phys. Rev. Thermodynamic implications of anisotropic gas-surface interactions in a closed molecular flow cavity are examined. Anisotropy at the microscopic scale, such as might be caused by reduced-dimensionality surfaces, is shown to lead to reversibility at the macroscopic scale. The possibility of a self-sustaining nonequilibrium stationary state induced by surface anisotropy is demonstrated that simultaneously satisfies flux balance, conservation of momentum, and conservation of energy. Conversely, it is also shown that the second law of thermodynamics prohibits anisotropic gas-surface interactions in "equilibrium", even for reduced dimensionality surfaces. This is particularly startling because reduced dimensionality surfaces are known to exhibit a plethora of anisotropic properties. That gas-surface interactions would be excluded from these anisotropic properties is completely counterintuitive from a causality perspective. These results provide intriguing insights into the second law of thermodynamics and its relation to gas-surface interaction physics.Comment: 28 pages, 11 figure
{"url":"https://core.ac.uk/search/?q=authors%3A(Mittag%20E)","timestamp":"2024-11-10T21:40:35Z","content_type":"text/html","content_length":"215643","record_id":"<urn:uuid:a87f443b-e24a-4d5b-885d-57cec7b7eb30>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00676.warc.gz"}
Adding up to 3 digits number with carry over 1000 in assembly Adding Numbers with Carry-Over in Assembly Language Assembly language is a low-level programming language that interacts directly with the computer's hardware. This makes it incredibly efficient, but also more challenging to work with compared to higher-level languages. One common task in assembly is adding numbers, which can be further complicated when dealing with carry-overs, especially when numbers exceed a certain limit, like 1000. Let's consider a scenario where we need to add two three-digit numbers in assembly, potentially exceeding 1000. Here's a simple example using x86 assembly language: .model small .stack 100h num1 dw 500 num2 dw 700 main proc mov ax, @data mov ds, ax mov ax, num1 add ax, num2 ; Add the two numbers ; Check for carry-over jc carry_over ; If no carry-over, store the result in 'result' mov result, ax jmp end_program ; Store the carry-over bit mov bx, 1 ; Store the lower byte of the result mov result, ax ; Store the higher byte of the result mov [result + 2], bx mov ah, 4ch int 21h main endp end main In this code, we define two three-digit numbers num1 and num2. We add these numbers using the add instruction, which potentially results in a carry-over if the sum exceeds 65535 (the maximum value for a 16-bit register). Understanding the Carry Flag The jc carry_over instruction checks the carry flag (CF). This flag is set to 1 if a carry-over occurs during an arithmetic operation, indicating that the result is larger than the register can hold. Handling the Carry-Over In our example, we handle the carry-over by storing the carry bit (1) in the bx register and moving it to the higher byte of the result variable. This effectively shifts the result to the next byte, accommodating the overflow. Important Considerations • Data Types: The code assumes 16-bit integers. For larger numbers, you'd need to use 32-bit or 64-bit registers and adjust the code accordingly. • Carry Handling: While this example uses a simple jc instruction, more complex algorithms might be necessary for situations with multiple carry-overs. • Memory Allocation: Ensure enough memory is allocated for storing the result, particularly if dealing with larger numbers. Practical Applications Understanding how to handle carry-overs in assembly is crucial for a variety of tasks, including: • Financial Calculations: Dealing with large monetary values that might exceed the limits of a single register. • Image Processing: Processing pixels with values exceeding the typical byte range. • Scientific Calculations: Performing arithmetic operations involving large numbers in scientific applications. Additional Resources: This example demonstrates a basic implementation of adding numbers with carry-over in assembly language. By understanding the fundamental concepts of carry flags and data handling, you can implement more complex arithmetic operations and utilize the efficiency of assembly language for various programming tasks.
{"url":"https://laganvalleydup.co.uk/post/adding-up-to-3-digits-number-with-carry-over-1000-in","timestamp":"2024-11-14T20:16:53Z","content_type":"text/html","content_length":"82507","record_id":"<urn:uuid:c829be7a-8535-4428-bf23-7537c57f7c1b>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00507.warc.gz"}
Simulated Annealing in Julia | R-bloggersSimulated Annealing in Julia Simulated Annealing in Julia [This article was first published on John Myles White » Statistics , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. Building Optimization Functions for Julia In hopes of adding enough statistical functionality to Julia to make it usable for my day-to-day modeling projects, I’ve written a very basic implementation of the simulated annealing (SA) algorithm, which I’ve placed in the same JuliaVsR GitHub repository that I used for the code for my previous post about Julia. For easier reading, my current code for SA is shown below: The Simulated Annealing Algorithm 1 # simulated_annealing() 2 # Arguments: 3 # * cost: Function from states to the real numbers. Often called an energy function, but this algorithm works for both positive and negative costs. 4 # * s0: The initial state of the system. 5 # * neighbor: Function from states to states. Produces what the Metropolis algorithm would call a proposal. 6 # * temperature: Function specifying the temperature at time i. 7 # * iterations: How many iterations of the algorithm should be run? This is the only termination condition. 8 # * keep_best: Do we return the best state visited or the last state visisted? (Should default to true.) 9 # * trace: Do we show a trace of the system's evolution? 11 function simulated_annealing(cost, 12 s0, 13 neighbor, 14 temperature, 15 iterations, 16 keep_best, 17 trace) 19 # Set our current state to the specified intial state. 20 s = s0 22 # Set the best state we've seen to the intial state. 23 best_s = s0 25 # We always perform a fixed number of iterations. 26 for i = 1:iterations 27 t = temperature(i) 28 s_n = neighbor(s) 29 if trace println("$i: s = $s") end 30 if trace println("$i: s_n = $s_n") end 31 y = cost(s) 32 y_n = cost(s_n) 33 if trace println("$i: y = $y") end 34 if trace println("$i: y_n = $y_n") end 35 if y_n <= y 36 s = s_n 37 if trace println("Accepted") end 38 else 39 p = exp(- ((y_n - y) / t)) 40 if trace println("$i: p = $p") end 41 if rand() <= p 42 s = s_n 43 if trace println("Accepted") end 44 else 45 s = s 46 if trace println("Rejected") end 47 end 48 end 49 if trace println() end 50 if cost(s) < cost(best_s) 51 best_s = s 52 end 53 end 55 if keep_best 56 best_s 57 else 58 s 59 end 60 end I’ve tried to implement the algorithm as it was presented by Bertsimas and Tsitsiklis in their 1993 review paper in Statistical Science. The major differences between my implementation and their description of the algorithm is that (1) I’ve made it possible to keep the best solution found during the search rather than always use the last solution found and (2) I’ve made no effort to select a value for their d parameter carefully: I’ve simply set it to 1 for all of my examples, though my code will allow you to specify another rule for setting the temperature of the annealer at time t other than the 1 / log(t) cooling scheme I’ve been using. (In fact, the code currently forces you to select a cooling scheme, since there are no default arguments in Julia yet.) I chose simulated annealing as my first optimization algorithm to implement in Julia because it’s a natural relative of the Metropolis algorithm that I used in the previous post. Indeed, coding up an implementation of SA made me appreciate the fact that the Metropolis algorithm as used in Bayesian statistics can be considered a special case of the SA algorithm in which the temperature is always 1 and in which the cost function for a state with posterior probability p is -log(p). Coding up the SA algorithm for myself also me made realize why it’s important that it uses an additive comparison of cost functions rather than a ratio (as in the Metropolis algorithm): the ratio goes haywire when the cost function can take on both positive and negative values (which, of course, doesn’t matter for Bayesian methods because probabilities are strictly non-negative). I discovered this when I initially tried to code up SA from my inaccurate memory without first consulting the literature and discovered that I couldn’t get a ratio-based algorithm to work no matter how many times I tried changing the cooling schedule. To test out the SA implementation I’ve written, I’ve written two tests cases that attempt to minimize the Rosenbrock and Himmelbrau functions, which I found listed as examples of hard-to-minimize functions in the Wikipedia description of the Nelder-Mead method. You can see those two examples below this paragraph. In addition, I’ve used R to generate plots showing how the SA algorithm works under repeated application on the same optimization problem; in these plots, I’ve used a heatmap to show the cost functions value at each (x, y) position, colored crosshairs to indicate the position of a true minimum of the function in question, and red dots to indicate the purported solutions found by my implementation of SA. Finding the Minimum of the Rosenbrock Function 1 # Find a solution of the Rosenbrock function using SA. 2 load("simulated_annealing.jl") 3 load("../rng.jl") 5 function rosenbrock(x, y) 6 (1 - x)^2 + 100(y - x^2)^2 7 end 9 function neighbors(z) 10 [rand_uniform(z[1] - 1, z[1] + 1), rand_uniform(z[2] - 1, z[2] + 1)] 11 end 13 srand(1) 15 solution = simulated_annealing(z -> rosenbrock(z[1], z[2]), 16 [0, 0], 17 neighbors, 18 log_temperature, 19 10000, 20 true, 21 false) Finding the Minima of the Himmelbrau Function 1 # Find a solution of the Himmelbrau function using SA. 2 load("simulated_annealing.jl") 3 load("../rng.jl") 5 function himmelbrau(x, y) 6 (x^2 + y - 11)^2 + (x + y^2 - 7)^2 7 end 9 function neighbors(z) 10 [rand_uniform(z[1] - 1, z[1] + 1), rand_uniform(z[2] - 1, z[2] + 1)] 11 end 13 srand(1) 15 solution = simulated_annealing(z -> himmelbrau(z[1], z[2]), 16 [0, 0], 17 neighbors, 18 log_temperature, 19 10000, 20 true, 21 false) Moving Forward Now that I’ve got a form of SA working, I’m interested in coding up a suite of optimization functions for Julia so that I can start to do maximum likelihood estimation in pure Julia. Once that’s possible, I can use Julia to do real science, e.g. when I need to fit simple models for which finding the MLE is appropriate. (I will leave the development of cleaner statistical functions for special cases of maximum likelihood estimation to more capable people, like Douglas Bates, who has already produced some GLM code.) At present my code is meant simply to demonstrate how one could write an implementation of simulated annealing in Julia. I’m sure that the code can be more efficient and I suspect that I’ve violated some of the idioms of the language. In addition, I’d much prefer that this function use default values for many of the arguments, as there is no reason that an end-user needs to be concerned with finding the best cooling schedule if SA seems to work out of the box on their problem with the cooling schedule I’ve been using. With those disclaimers about my code in place, I’d like to think that I haven’t made any mathematical errors and that this function can be used by others. So, I’d ask that those interested please tear apart my code so that I can make it usable as a general purpose function for optimization in Julia. Alternatively, please tell me that there’s no need for a pure Julia implementation of SA because, for example, there are nice C libraries that would be much easier to link to than to re-implement. With an implementation of SA in place, I’ll probably start working on implementing L-BFGS-S soon, which is the other optimization algorithm I use often in R. (To be honest, I use L-BFGS-S almost exclusively, but SA was much easier to implement.) Incidentally, this code base demonstrates how I view the relationship between R and Julia: Julia is a beautiful new language that is still missing many important pieces. We can all work together to build the best pieces of R that are missing from Julia. While we’re working on improving Julia, we’ll need to keep using R to handle things like visualization of our results. For this post, I turned back to ggplot2 for all of the graphics generation.
{"url":"https://www.r-bloggers.com/2012/04/simulated-annealing-in-julia/","timestamp":"2024-11-10T22:11:17Z","content_type":"text/html","content_length":"103994","record_id":"<urn:uuid:3a9f85a6-cf98-47ad-89f9-4bac210a6d94>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00107.warc.gz"}
Inert subgroups and centralizers of involutions in locally finite simple groups A subgroup H of a group G is called inert if [H : H \ Hg] is finite for all g 2 G. A group is called totally inert if every subgroup is inert. Among the basic properties of inert subgroups, we prove the following. Let M be a maximal subgroup of a locally finite group G. If M is inert and abelian, then G is soluble with derived length at most 3. In particular, the given properties impose a strong restriction on the derived length of G. We also prove that, if the centralizer of every involution is inert in an infinite locally finite simple group G, then every finite set of elements of G can not be contained in a finite simple group. In a special case, this generalizes a Theorem of Belyaev-Kuzucuoæglu-Seğckin, which proves that there exists no infinite locally finite totally inert simple E. Özyurt, “Inert subgroups and centralizers of involutions in locally finite simple groups,” Ph.D. - Doctoral Program, Middle East Technical University, 2003.
{"url":"https://open.metu.edu.tr/handle/11511/13345","timestamp":"2024-11-11T17:34:38Z","content_type":"application/xhtml+xml","content_length":"52566","record_id":"<urn:uuid:862e04b3-de47-400c-a109-24913efe88ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00853.warc.gz"}
opposite sides parallel Illustration of a parallelogram, a quadrilateral which has opposite sides parallel. Illustration of a parallelogram with altitude and diagonal drawn. Illustration of a parallelogram with diagonals drawn to show they bisect each other. Illustration of a two equal parallelograms. Two parallelograms are equal, if two sides and the included…
{"url":"https://etc.usf.edu/clipart/keyword/opposite-sides-parallel","timestamp":"2024-11-04T04:56:08Z","content_type":"text/html","content_length":"10116","record_id":"<urn:uuid:d0cfe9a5-e104-41c3-a20b-a90bd4be7603>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00055.warc.gz"}
Update r_time_series_example.R (f27b50e7) · Commits · Rebecca Merrett / tutorials Update r_time_series_example.R Showing with 33 additions and 33 deletions ... ... @@ -145,36 +145,36 @@ mean_absolute_error # is stationary "-Need to better transform these data: You could look at stabilizing the variance by applying the cube root for neg and pos values and then difference the data -You might compare models with different AR and MA terms -This is a very small sample size of 24 timestamps, so might not have enough to spare for a holdout set To get more use out of your data for training, rolling over time series or timestamps at a time for different holdout sets allows for training on more timestamps; doesn't stop the model from capturing the last chunk of timestamps stored in a single holdout set -The data only looks at 24 hours in one day Would we start to capture more of a trend in hourly sentiment if we collected data over several days? How would you go about collecting more data? Take on the challenge and further improve this model: You have been given a head start, now take this example and improve on it! To study time series further: -Look at model diagnostics -Use AIC to search best model parameters -Handle any datetime data issues -Try other modeling techniques Learn more during a short, intense bootcamp: Time Series to be introduced in Data Science Dojo's post bootcamp material Data Science Dojo's bootcamp also covers some other key machine learning algorithms and techniques and takes you through the critical thinking process behind many data science tasks Check out the curriculum: https://datasciencedojo.com/bootcamp/curriculum/" #-Need to better transform these data: # You could look at stabilizing the variance by applying # the cube root for neg and pos values and then # difference the data #-You might compare models with different AR and MA terms #-This is a very small sample size of 24 timestamps, # so might not have enough to spare for a holdout set # To get more use out of your data for training, rolling over time # series or timestamps at a time for different holdout sets # allows for training on more timestamps; doesn't stop the model from # capturing the last chunk of timestamps stored in a single holdout set #-The data only looks at 24 hours in one day # Would we start to capture more of a trend in hourly sentiment if we # collected data over several days? # How would you go about collecting more data? # Take on the challenge and further improve this model: # You have been given a head start, now take this example # and improve on it! # To study time series further: #-Look at model diagnostics #-Use AIC to search best model parameters #-Handle any datetime data issues #-Try other modeling techniques # Learn more during a short, intense bootcamp: # Time Series to be introduced in Data Science Dojo's # post bootcamp material # Data Science Dojo's bootcamp also covers some other key # machine learning algorithms and techniques and takes you through # the critical thinking process behind many data science tasks # Check out the curriculum: https://datasciencedojo.com/bootcamp/curriculum/"
{"url":"https://code.datasciencedojo.com/rebeccam/tutorials/commit/f27b50e7771b822827d17e29da49f65a5fec1d65","timestamp":"2024-11-08T11:08:21Z","content_type":"text/html","content_length":"133674","record_id":"<urn:uuid:d2a66c66-9d41-44e9-b252-0d2a37f1f28d>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00727.warc.gz"}
Rotational Dynamics and Moment of Inertia Quiz Podcast Beta Play an AI-generated podcast conversation about this lesson What does the moment of inertia represent for an object? The moment of inertia represents an object's resistance to rotational acceleration. What factors does the moment of inertia depend on? The moment of inertia depends on the object's mass, shape, and the axis of rotation. How is the moment of inertia mathematically calculated? The moment of inertia is calculated as the sum of the products of each particle's mass and the square of its distance from the rotational axis. Why is it challenging to calculate the moment of inertia for complex shapes? Signup and view all the answers What is the formula for the moment of inertia of a cylinder about its central axis? Signup and view all the answers How does the moment of inertia differ for a heavy object with a compact shape compared to a lighter object with a more extended shape? Signup and view all the answers Explain how the moment of inertia affects an object's rotational dynamics. Signup and view all the answers In what way does moment of inertia play a crucial role in engineering design? Signup and view all the answers Provide an example of how moment of inertia is used in motor vehicle design. Signup and view all the answers Explain the importance of maximizing the moment of inertia in the design of rotators for wind turbines. Signup and view all the answers Study Notes Rotational Dynamics and Moment of Inertia Rotational dynamics, a part of classical mechanics, helps us understand how objects rotate and interact with forces around an axis. Key to this understanding is the concept of moment of inertia, which determines how an object behaves when rotating. Let's delve into these ideas and explore their connections. Moment of Inertia The moment of inertia, denoted by the symbol I or J, represents an object's resistance to rotational acceleration. It's a measure of the object's mass distribution relative to a given axis of rotation. The moment of inertia depends on the object's mass, shape, and the axis of rotation. Mathematically, the moment of inertia is calculated as the sum of the products of each particle's mass and the square of its distance from the rotational axis. This integral is usually challenging to solve for complex shapes, so for most engineering applications, we rely on tabulated values or approximate formulas for common shapes. Calculating the Moment of Inertia The formula for moment of inertia is: [ I = \int{r^2 m , dm} ] where (r) is the distance from the mass element to the axis of rotation, (m) is the mass of the element, and (dm) is a small mass element. For common shapes like cylinders, disks, and spheres, approximate formulas exist using their geometrical features. For example: 1. Moment of inertia for a cylinder about its central axis: ( I = \frac{1}{2} mR^2 ) 2. Moment of inertia for a disk about its central axis: ( I = \frac{1}{2} mR^2 ) 3. Moment of inertia for a sphere about its central axis: ( I = \frac{2}{5} mR^2 ) In these formulas, (m) is the object's mass, and (R) is its radius. Effects of Moment of Inertia The moment of inertia affects an object's rotational dynamics, such as its acceleration, angular velocity, and kinetic energy. For example, a heavy object with a compact shape (like a bowling ball) will have a larger moment of inertia than a lighter object with a more extended shape (like a whip). A higher moment of inertia means the object will resist rotational acceleration, taking longer to reach the same angular velocity. Applications in Engineering Moment of inertia is crucial in engineering design. For example, motor vehicle designers can use moment of inertia to determine the optimum placement of heavy components like engines and batteries to minimize rotational acceleration and enhance the vehicle's performance. In contrast, the design of rotators for wind turbines requires maximizing the moment of inertia to minimize the turbine's rotational speed fluctuations and improve its overall efficiency. Rotational dynamics and moment of inertia are foundational concepts in engineering and physics, and they continue to drive innovation in various fields. Understanding them will help you appreciate the complex interplay between mass, shape, and rotational motion. A. P. Waterman, Classical Mechanics, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 1990. M. A. Kibble, Classical Mechanics, 4th ed., Oxford University Press, Oxford, 1973. J. W. C. Johnston, Classical Dynamics, 3rd ed., Wiley, New York, 2012. C. Timoshenko, S. Woinowsky-Krieger, and J. M. Kennedy, Theory of Elasticity, 3rd ed., McGraw-Hill, New York, 1959. C. H. Papadopoulos, Rotational Dynamics: A Study of Rotation in Engineering, 1st ed., Wiley, New York, 2003. Studying That Suits You Use AI to generate personalized quizzes and flashcards to suit your learning preferences. Explore the concepts of rotational dynamics and moment of inertia in this quiz. Learn about the resistance to rotational acceleration, how to calculate moment of inertia for objects of various shapes, and the effects of moment of inertia on rotational dynamics. Discover the practical applications of these concepts in engineering design.
{"url":"https://quizgecko.com/learn/rotational-dynamics-and-moment-of-inertia-quiz-nvjf2r","timestamp":"2024-11-10T22:27:29Z","content_type":"text/html","content_length":"312772","record_id":"<urn:uuid:8c00b4cb-60e5-4c2c-bb39-acc71d27eee8>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00322.warc.gz"}
What Is a Quadrilater? - Do My GRE Exam Quadrilaterals are used to describe a geometric shape of four or more parts that are connected together, usually in a repeating pattern. The word “quadrilaterally” is taken from “quadrileur.” Geometry subjects often include parallel and perpendicular straight lines, polygons, squares, triangles, pentagons, oblong and hexagonal shapes, triangles, isosceles and 30-degree-60-degree triangles, pentagonal and hexagonal figures and several other shapes. Geometry can be complicated, and in many cases, it is. For those who aren’t sure what to expect in a geometry class, you may wonder what quadrilaterals will mean to students taking the examination for the GED. Many students feel like they have to know exactly how to interpret the questions on the exam. They may feel that the GRE test is too difficult or that they are not prepared for such a But knowing how quadrilaters are formed can help students preparing for the GED. Many questions that ask students to describe a shape with four or more parts may require students to understand the concept of quadrilaterals. Asking the same question, but with quadrilaterals written in English might be easier to comprehend and prepare students for. Geometry concepts include many types of quadrilaterals. These include: linear quadrilateral, non-linear quadrilateral and tricolor quadrilateral. The difference between linear and non-linear quadrilaterals is in the shape of their sides, and in which one side is longer than the other. A linear quadrilateral can either be a circle or a square; a non-linear quadrilateral may be either a trapezoid or a hexagon or both. A non-linearly symmetric quadrilateral is a circle, square or triangular, with two opposite sides that are equal. A tricolor quadrilateral is a three-sided symmetrical triangle. Tricolor quadrilateral polygons can be octagons, trapezoids, hexagons, squares, rectangles or any other shape. Quadrilateral and tricolor quadrilateral are only a few examples of shapes, and each have their own sets of rules. However, most quadrilaterals are simple shapes, with straight sides. Some geometrical patterns and shapes that can be described as quadrilateral include: hexagonal, octagonal, square, rectangular, trapezoid, polygonal, hexagonal and oblong. Some non-triangular shapes, like an isosceles triangle, octagon and pentagonal, have no straight sides. There are many ways to think about quadrilaterals. Some people think of them as triangles, squares, pentagons and hexagons. Others see them as straight lines. Some people see them as a circle and its side. When you’re taking the GED, you need to have a clear understanding of all of these shapes, so you can apply it to the multiple-sided quadrilateral and determine which quadrilateral will apply to you. Knowing how quadrilateral look when viewed in diagrams is a great way to prepare for your GED examination. The right quadrilateral diagram will help you answer the questions on the GRE. The two most important quadrilateral types for a student to understand are the hexagonal and the oblong, and there are some others that are equally important. It’s a good idea to read up on all of them before the test. The GED examination is a test of critical thinking, and the right answers can lead to high marks on the test, or a lower mark if you don’t know what to look for. In the octagonal and trapezoidal shapes, the square quadrilateral is the smallest. The hexagonal is the second smallest, and the triangular is the largest. If you are taking the GED, you’ll want to get a feel for how all quadrilateral shapes look when viewed from different angles to help you answer the questions on the GRE. Some students struggle with taking the diagrams in the right order, so it’s a good idea to practice taking the diagrams and seeing which quadrilaterals are visible when viewed at different angles. Getting the right shape will give you some guidance in answering the GRE questions. Having a clear idea of the quadrilateral will allow you to determine which quadrilateral will fit best with the type of questions you’ll be asked on the test. This will help you to answer more questions correctly and pass the test faster. GED questions can be a little tricky. You don’t want to get the wrong answer if you fail to make the correct quadrilateral. Using quadrilaterals can help you do that.
{"url":"https://domygreexam.com/what-is-a-quadrilater/","timestamp":"2024-11-07T17:21:34Z","content_type":"text/html","content_length":"106939","record_id":"<urn:uuid:2e7718cc-cb2c-4e41-aaf9-c1813691745c>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00509.warc.gz"}
Phase Noise Measurements and System Comparisons One of the most important parameters for oscillators is phase noise as it is critical to most applications.^1-21 There are several methods for measuring phase noise which will be covered here including the pros and cons of each technique. Example measurements on various types of systems will also be compared including their accuracy and speed. The pioneer in phase noise measurement was Hewlett Packard.^1,2 Modern phase noise systems use the correlation principle but this method has some drawbacks. The most direct and sensitive method to measure the spectral density of phase noise, S[Δθ](f[m]), requires two sources – one or both of them may be the device(s) under test – and a double balanced mixer used as a phase detector (see Figure 1). The RF and LO input to the mixer should be in phase quadrature, indicated by 0 V dc at the IF port. Good quadrature assures maximum phase sensitivity K[θ] and minimum AM sensitivity. With a linear mixer, K[θ] equals the peak voltage of the sinusoidal beat signal produced when both sources are frequency offset. When both signals are set in quadrature, the voltage ΔV at the IF port is proportional to the fluctuating phase difference between the two signals. where K[θ] = phase detector constant, = V [B peak] for a sinusoidal beat signal. The calibration of the wave analyzer or spectrum analyzer can be read from the equations. For a plot of L(f[m]), the 0 dB reference level is to be set 6 dB above the level of the beat signal. The -6 dB offset has to be corrected by +1.0 dB for a wave analyzer and by +2.5 dB for a spectrum analyzer with log amplifier and average detector. In addition, noise bandwidth corrections may have to be Since the phase noise of both sources is measured in this system, the phase noise performance of one of them needs to be known in order to measure the other source. Frequently, it is sufficient to know that the actual phase noise of the dominant source cannot deviate from the measured data by more than 3 dB. If three unknown sources are available, three measurements with three different source combinations yield sufficient data to accurately calculate each individual oscillator’s performance. Figure 1 indicates a narrowband phase-locked loop that maintains phase quadrature for sources that are not sufficiently phase stable over the period of the measurement. The two isolation amplifiers should prevent injection locking of the sources. Residual phase noise measurements test one or two devices, such as amplifiers, dividers or synthesizers, driven by one common source. Since this source is not free of phase noise, it is important to know the degree of cancellation as a function of Fourier frequency. The noise floor of the system is established by the equivalent noise voltage, ΔV[n], at the mixer output. It represents mixer noise as well as the equivalent noise voltage of the following amplifier: Noise floors close to -180 dBc can be achieved with a high-level mixer and a low-noise port amplifier. The noise floor increases with f[m]^-^1 due to the flicker characteristic of ΔV[n]. System noise floors of -166 dBc at 1 kHz have been realized. This is the limitation of this approach. In measuring low-phase-noise sources, a number of potential problems have to be understood to avoid erroneous data: • If two sources are phase locked to maintain phase quadrature, it has to be ensured that the lock bandwidth is significantly lower than the lowest Fourier frequency of interest. • Even with no apparent phase feedback, two sources can be phase locking (injection locked), resulting in suppressed close-in phase noise. • AM noise of the RF signal can come through if the quadrature setting is not maintained sufficiently. • Deviation from the quadrature setting will also lower the effective phase detector constant. • Nonlinear operation of the mixer results in a calibration error and will add noise. • A non-sinusoidal RF signal causes K[θ] to deviate from V[B peak]. • The amplifier or spectrum analyzer input can be saturated during calibration or by high spurious signals such as line frequency multiples. • Closely spaced spurious signals such as multiples of 60 Hz may give the appearance of continuous phase noise when insufficient resolution and averaging are used on the spectrum analyzer. • Impedance interfaces should remain unchanged when going from calibration to measurement. • In residual measurement system phase, the noise of the common source might be insufficiently canceled due to improperly high delay-time differences between the two branches. • Noise from power supplies for devices under test or the narrowband phase-locked loop can be a dominant contributor of phase noise. • Peripheral instrumentation such as the oscilloscope, analyzer, counter or DVM can inject noise. • Microphonic noise might excite significant phase noise in devices. Despite all these hazards, automatic test systems have been developed and operated successfully for years.^4 Figure 2 shows a system that automatically measures the residual phase noise of the 8662A synthesizer. It is a residual test, since both instruments use one common 100 MHz referenced oscillator. Quadrature setting is conveniently controlled by probing the beat signal with a digital voltmeter and stopping the phase advance of one synthesizer when the beat signal voltage is sufficiently close to zero. Doing Measurements As shown in Figure 3, the Hewlett Packard test equipment HP3047A was built around a double-balanced mixer which can handle +23 dBm of power. While this HP equipment is no longer manufactured and has been replaced by the Agilent E5052A and E5052B, this system shows some interesting features. 1. The noise floor is about −177 dBc/Hz and the resolution, shown in Figure 4 is approximately −180 dBc/Hz. 2. The limit of the dynamic range is (P[out] (dBm) + kT – NF), (where NF = NF[osc] + NF[amp]) which in the case of a 0 dBm oscillator would be −174 dBc or relative to one sideband −177 dBc - NF. Assuming a signal to be +20 dBm, then the dynamic range is approximately 200 dBc - NF. 3. The term NF[osc] refers to the large signal noise figure of the oscillator, which can vary from 2 dB at 10 MHz to 6 dB at 100 MHz or even higher at 1000 MHz. As an example, the far out SSB noise at 1000 MHz is typically −172 dBc/Hz at 5 dBm of output power. 4. The mixer and the post amplifier can easily go into compression which raises the noise floor. In the previous publications for this system, this effect was not considered. It was found that some of the measurements done with this system at low frequencies were off, the crystal oscillators in question were actually better. From a system’s point-of-view, the numbers are not that optimistic, as seen from Figure 5. The reason for this is that the transistor used in the oscillator has its own noise contribution. This system is used to measure residual noise in amplifiers or switches. This is an important diagram for system planning. The HP 3047A has a built-in crystal oscillator and its measured phase-noise is shown in Figure 6. These measurements are actually a little better than what was specified by HP but these are not as good as modern measurement systems. Synergy kept the old HP system hardware for these references. The system is software based and used an FFT analyzer; for its time this was the best system around, but the measurements took a long time to make. Modern test equipment using the cross-correlation methods are at least 20 times faster. Noise in Circuits and Semiconductors Microwave applications generally use bipolar transistors and following are their noise contributions.^3,5,6,9,21 Johnson Noise • The Johnson noise (thermal noise) is due to the movement of molecules in solid devices called Brown’s molecular movements. It is expressed as The power can be written as: • In order to reduce this noise, the only option is to lower the temperature, since noise power is directly proportional to temperature. The Johnson noise sets the theoretical noise floor. Planck’s Radiation Noise • The available noise power does not depend on the value of resistor but it is a function of temperature T. The noise temperature can thus be used as a quantity to describe the noise behavior of a general lossy one-port network. • For high frequencies and/or low temperature, a quantum mechanical correction factor has to be incorporated for the validation of equation. This correction term results from Planck’s radiation law, which applies to blackbody radiation. Schottky/Shot Noise • The Schottky noise occurs in conducting PN junctions (semiconductor devices) where electrons are freely moving. The root mean square (RMS) noise current is given by where q is the charge of the electron, P is the noise power, and I[dc] is the dc bias current. Z is the termination load (can be complex). • Since the origin of this noise generated is totally different, they are independent of each other. Flicker Noise • The electrical properties of surfaces or boundary layers are influenced energetically by states, which are subject to statistical fluctuations and therefore, lead to the flicker noise or 1/f noise for the current flow. • 1/f noise is observable at low frequencies and generally decreases with increasing frequency f according to the 1/f-law until it will be covered by frequency independent mechanism, like thermal noise or shot noise. Example: The noise for a conducting diode is bias dependent and is expressed in terms of AF and KF. • The AF is generally in range of 1 to 3 (dimensionless quantity) and is a bias dependent curve fitting term, typically 2. • The KF value is ranging from 1E^-12 to 1E^-6 and defines the flicker corner frequency. Transit Time and Recombination Noise • When the transit time of the carriers crossing the potential barrier is comparable to the periodic signal, some carriers diffuse back and this causes noise. This is really seen in the collector area of NPN transistor. • The electron and hole movements are responsible for this noise. The physics for this noise has not been fully established. Avalanche Noise • When a reverse bias is applied to semiconductor junction, the normally small depletion region expands rapidly. • The free holes and electrons then collide with the atoms in depletion region, thus ionizing them, and produce a spiked current called the avalanche current. • The spectral density of avalanche noise is mostly flat. At higher frequencies, the junction capacitor with lead inductance acts as a lowpass filter. • Zener diodes are used as voltage reference sources and the avalanche noise needs to be reduced by big bypass capacitors. Prediction of Phase Noise • In designing an oscillator, one needs to have a concept of how to do this and can hopefully validate the same. The basic equation (Rohde’s Modified Leeson Equation) needed to calculate the phase noise is found in The Design of Modern Microwave Oscillators for Wireless Applications: Theory and Optimization.^1,13,14,15,16 It is: where £(f[m]), f[m], f[0], f[c], Q[L], Q[0], F, k, T, P[0], R, K[0] and m are the ratio of the sideband power in a 1 Hz bandwidth at f[m] to total power in dB, offset frequency, flicker corner frequency, loaded Q, unloaded Q, noise factor, Boltzmann’s constant, temperature in degree Kelvin, average output power, equivalent noise resistance of tuning diode, voltage gain and ratio of the loaded and unloaded Q, respectively. In the past this was done with the Leeson formula, which contains several estimates, those being output power, flicker corner frequency and the operating (or loaded) Q. Now, one can assume that the actual phase noise is never better than the large signal noise figure (F) given by the following equation:^13,14 Applying Cross-Correlation The old systems have an FFT analyzer for close-in calculations and are slower in speed. Modern equipment use the noise-correlation method. The reason why the cross-correlation method became popular is that most oscillators have an output between zero and 10 dBm and what is even more important is that only one source is required. The method with a delay line, in reality required a variable delay line to provide correct phase noise numbers as a function of offset (this is shown in reference 1, pp. 148-153, Fig. 7.25 and 7.26). This technique combines two single-channel reference source/PLL systems and performs cross-correlation operations between the outputs of each channel, as shown in Figure 7.^7,8 The DUT noise through each channel is coherent and is not affected by the cross-correlation, whereas the internal noise generated by each channel is incoherent and is diminished by the cross-correlation operation at the rate of M (M being the number of correlations). This can be expressed as: where N[meas] is the total measured noise at the display; N[DUT] the DUT noise; N[1] and N[2] the internal noise from channels 1 and 2, respectively; and M the number of correlations. The two-channel cross-correlation technique achieves superior measurement sensitivity without requiring exceptional performance of the hardware components. However, the measurement speed suffers when increasing the number of correlations.^5 The built-in synthesizer limits the dynamic range and that is the reason why at lower frequencies crystal-oscillators are used. The assumption is that the internal reference sources 1 and 2 are at least equal in noise contribution as the DUT and assuming a correlation of 20 dB (limited by leakage and other large signal phenomena) divider noise, those references can be 20 dB worse. Both Rohde & Schwarz and Agilent use a synthesized approach. This limits the dynamic range close-in to the carrier and far-out to the carrier. When using the correlation to get 10 dB more dynamic range, 100 correlations are necessary or the measurement is one hundred times slower. The R&S FSUP is a combination of a spectrum analyzer and the phase noise test setup while the Agilent E5052B is a dedicated system, about 4-6 times faster. Advantages of the Noise Correlation Technique^11-19 1. Increased speed 2. Requires less input power 3. Single source set-up 4. Can be extended from low frequencies like 1 MHz to 100 GHz All of which depends upon the internal synthesizer. Disadvantages of the Noise Correlation Technique 1. Different manufacturers have different isolation, so the available dynamic range is difficult to predict. 2. These systems have a “sweet-spot,” both Rohde & Schwarz and Agilent start with an attenuator, not to overload the two channels; 1 dB difference in the input level can result in significantly different measured numbers. These “sweet-spots” are different for each system. The RF attenuation that is needed to find the “sweet-spot” reduces the overall dynamic range of the correlation by this amount. 3. The harmonic content of the oscillator can cause an erroneous measurement so a switchable lowpass filter (such as the R&S Switchable VHF-UHF Lowpass Filter Type PTU-BN49130 or its equivalent) should be used.^6 4. For frequencies below 200 MHz, systems such as AnaPico or Holzworth using two crystal oscillators instead of a synthesizer must be used (there is no synthesizer good enough for this measurement). Example: Synergy LNXO100 crystal oscillator measures about -142 dBc/Hz (100 Hz offset), limited by the synthesizer of the FSUP and -147 dBc/Hz with the Holzworth system. Agilent results are similar to the R&S FSUP, just faster. 5. At frequencies like 1 MHz off the carrier, these systems gave different results. The R&S FSUP, taking advantage of the “sweet-spot,” measures -183 dBc, Agilent indicates -175 dBc/Hz and Holzworth measures -179 dBc/Hz. We have not researched the “sweet-spots” for Agilent and Holzworth, but we have seen publications for both Agilent and Holzworth showing −190 dBc/Hz far off the carrier. These were selected crystal oscillators from either Wenzel or Pascall. Another problem is the physical length of the crystal oscillator connection cable to the measurement system. If the length provides something like “quarter-wave-resonance,” incorrect measurements are possible. The list of disadvantages is quite long and there is a certain ambiguity concerning whether or not to trust these measurements or if they are repeatable. Crystal Oscillators Here are some important examples of crystal oscillators. The first one is by Rohde (1975) and is shown is Figure 8a with simulated phase noise shown in 8b.^4,9 This represents the phase-noise of the oscillator without the buffer-amplifier shown inFigure 8a.Most modern equipment uses the crystal oscillator using Rohde’s design technique (see Figure 9). The actual measured numbers are shown in Figure 6. Another important oscillator is by Michael Driscoll. This circuit is shown in Figure 10a and its simulated phase noise in Figure 10b. The crystal is used as a filter to ground. We have seen applications of this design from 10 to 100 MHz. Sometimes there is a stability problem with upper transistor of the cascode. Another important contribution is the introduction of the stress-compensated or doubly-rotated crystal, in short SC cut. This took over after the AT cut. Its drawbacks are possible spurious resonance and higher cost, but the actual phase-noise is 6 to 10 dB. So far there is no clear explanation of why this is the case.^10,11 The 100 MHz crystal oscillator measurements are most demanding and these oscillators are the backbone of many test and communication equipment. Figure 11 shows a typical simple 100 MHz crystal oscillator and its simulated phase noise. Its output power is shown in Figure 11c. This oscillator is missing a buffer stage. Figure 12a shows the buffer amplifier and Figure 12b shows its simulated noise figure and F[min]. An increase in the noise figure, seen below 50 MHz, is due to coupling capacitor. Finally, a 100 MHz crystal oscillator with a grounded base amplifier is shown in Figure 13a with simulated phase noise and power output plot in Figure 13b and Figure 13c, respectively. Figures 14 through 17 show measured results from the 100 MHz crystal oscillator performed on systems from Agilent, Rohde & Schwarz, Holzworth and AnaPico, respectively. Appendix I (see online at www.mwjournal.com/synergyappendix) shows the phase noise calculation for this oscillator. The calculated 100 Hz offset phase noise is −146 dBc/Hz and the far out phase noise is − 183.3 dBc/Hz (see Figure 18). These numbers agree well with the R&S FSUP measurements. The Agilent results are close, but the system is not optimized as the “sweet-spot” has not been characterized. This may be frequency and power level dependent. The Holzworth equipment shows the best close-in correct phase noise level, but we have not found the proper correlation figures. The AnaPico test equipment seems to have a hump, but generally agrees well with the calculations. The Holzworth equipment performs the measurement in approximately 3 to 4 minutes (5 correlations), and about 1 minute with no correlations and 3 dB worse phase noise. What does the law of physics tell us? As pointed out, here is the calculation for the Synergy LNXO100, SSB phase noise = P[out] (dBm) + 177 (dB) – NF (large signal noise figure of the buffer amplifier). In our case, we get 14 + 177 − 7.7 =183.3. This means the SSB phase noise far out is −183.3 dBc/Hz. In this article, we have looked at typical oscillator circuits and some of their design rules. Calculations for phase noise were reviewed. Various systems were reviewed measuring a 100 MHz crystal oscillator. The phase noise equations give the best possible phase noise. If the equipment in use, after many correlations, gives a better number, it violates the laws of physics as we understand them and if it gives a worse number, then either the correlation settings need to be corrected or the dynamic range of the equipment is insufficient. We realize that this treatment is exhaustive, but we think that it was necessary to explain how things fall into place. At 20 dBm output, the output amplifier certainly has a higher noise figure, as it is driven with more power and there is no improvement possible. There is an optimum condition and some of the measurements showing -190 dBc/Hz do not seem to match the theoretical calculations. The correlation allows us to look below KT, but the noise contribution below KT is as useful as finding one gold atom in your body’s blood. This gold atom has no contribution to your system. 1. U.L. Rohde, A.K. Poddar and G. Boeck, The Design of Modern Microwave Oscillators for Wireless Applications: Theory and Optimization, Wiley, New York, 2005. 2. U.L. Rohde, Microwave and Wireless Synthesizers, Theory and Design, Wiley, New York, 1997. 3. U.L. Rohde and M. Rudolph, RF/Microwave Circuit Design for Wireless Applications,Wiley, New York, 2013. 4. U.L. Rohde, “Crystal Oscillator Provides Low Noise,” Electronic Design, 1975. 5. G.D. Vendelin, A.M. Pavio, and U.L. Rohde, Microwave Circuit Design Using Linear and Nonlinear Techniques,Wiley, New York, 2005. 6. A.L. Lance, W.D. Seal, F.G. Mendozo and N.W. Hudson, “Automating Phase Noise Measurements in the Frequency Domain,” Proceedings of the 31^stAnnual Symposium on Frequency Control, 1977. 7. Agilent Phase Noise Selection Guide. 8. D. Calbaza, C. Gupta, U.L. Rohde and A.K. Poddar, “Harmonics Induced Uncertainty in Phase Noise Measurements,” 2012 IEEE MTT-S Digest, pp. 1-3, June 2012. 9. U.L. Rohde, H. Hartnagel, “The Dangers of Simple use of Microwave Software,” www.mes.tu-darmstadt.de/media/mikroelektronische_systeme/pdf_3/ewme2010/proceedings/sessionvii/rohde_paper.pdf. 10. “Sideband Noise in Oscillators,” 2009, www.conwin.com/pdfs/at_or_sc_for_ocxo.pdf. 11. J. Cartwright, “Choosing an AT or SC Cut for OCXOs,” 2008. 12. Benjamin Parzen, Design of Crystal and Other Harmonic Oscillators, John Wiley & Sons, 1983. 13. U.L. Rohde and A.K. Poddar, Crystal Oscillators, Wiley Encyclopedia and Electronics Engineering, pp. 1-38, October 19, 2012. 14. U.L. Rohde and A.K. Poddar, Crystal Oscillator Design, Wiley Encyclopedia of Electrical and Electronics Engineering, pp. 1–47, October 2012. 15. U.L. Rohde and A.K. Poddar, “Latest Technology, Technological Challenges, and Market Trends for Frequency Generating and Timing Devices,” IEEE Microwave Magazine, pp. 120-134, October 2012. 16. U.L. Rohde and A.K. Poddar, “Techniques Minimize the Phase Noise in Crystal Oscillators,” 2012 IEEE FCS, pp. 01-07, May 2012. 17. E. Rubiola, Phase Noise and Frequency Stability in Oscillators, Cambridge University Press, 2009. 18. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4314363&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F19%2F4314356%2F04314363.pdf%3Farnumber%3D4314363. 19. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=251295&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel1%2F58%2F6428%2F00251295.pdf%3Farnumber%3D251295.
{"url":"https://www.microwavejournal.com/articles/19529-phase-noise-measurements-and-system-comparisons","timestamp":"2024-11-02T10:54:53Z","content_type":"text/html","content_length":"95679","record_id":"<urn:uuid:6a4000e7-a4e0-4746-8656-b12ca99397a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00071.warc.gz"}
ACM Other ConferencesFine-Grained Dichotomies for the Tutte Plane and Boolean #CSP Jaeger, Vertigan, and Welsh (1990) proved a dichotomy for the complexity of evaluating the Tutte polynomial at fixed points: The evaluation is #P-hard almost everywhere, and the remaining points admit polynomial-time algorithms. Dell, Husfeldt, and Wahlén (2010) and Husfeldt and Taslaman (2010), in combination with the results of Curticapean (2015), extended the #P-hardness results to tight lower bounds under the counting exponential time hypothesis #ETH, with the exception of the line y=1, which was left open. We complete the dichotomy theorem for the Tutte polynomial under #ETH by proving that the number of all acyclic subgraphs of a given n-vertex graph cannot be determined in time exp(o(n)) unless #ETH fails. Another dichotomy theorem we strengthen is the one of Creignou and Hermann (1996) for counting the number of satisfying assignments to a constraint satisfaction problem instance over the Boolean domain. We prove that all #P-hard cases cannot be solved in time exp(o(n)) unless #ETH fails. The main ingredient is to prove that the number of independent sets in bipartite graphs with n vertices cannot be computed in time exp(o(n)) unless #ETH fails. In order to prove our results, we use the block interpolation idea by Curticapean (2015) and transfer it to systems of linear equations that might not directly correspond to interpolation.
{"url":"https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2016.9/metadata/acm-xml","timestamp":"2024-11-11T10:17:28Z","content_type":"application/xml","content_length":"11712","record_id":"<urn:uuid:c7f2228a-01ee-43a0-b1e1-020036be0f85>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00238.warc.gz"}
Monolix 2024R1 User guide Monolix 2024 User Guide 1.Monolix Documentation Version 2024 This documentation is for Monolix starting from 2018 version. Monolix (Non-linear mixed-effects models or “MOdèles NOn LInéaires à effets miXtes” in French) is a platform of reference for model based drug development. It combines the most advanced algorithms with unique ease of use. Pharmacometricians of preclinical and clinical groups can rely on Monolix for population analysis and to model PK/PD and other complex biochemical and physiological processes. Monolix is an easy, fast and powerful tool for parameter estimation in non-linear mixed effect models, model diagnosis and assessment, and advanced graphical representation. Monolix is the result of a ten years research program in statistics and modeling, led by Inria (Institut National de la Recherche en Informatique et Automatique) on non-linear mixed effect models for advanced population analysis, PK/PD, pre-clinical and clinical trial modeling & simulation. The objectives of Monolix are to perform: An interface for ease of use Monolix can be used either via a graphical user interface (GUI) or a command-line interface (CLI) for powerful scripting. This means less programming and more focus on exploring models and pharmacology to deliver in time. The interface is depicted as follows: The GUI consists of 7 tabs. Each of these tabs refer to a specific section on this website. An advanced description of available plots is also provided. 2.1.Defining a data set To start a new Monolix project, you need to define a dataset by loading a file in the Data tab, and load a model in the structural model tab. The project can be saved only after defining the data and the model. Supported file types: Supported file types include .txt, .csv, and .tsv files. Starting with version 2024, additional Excel and SAS file types are supported: .xls, .xlsx, .sas2bdat, and .xpt files in addition to .txt, .csv, and .tsv files. The data set format expected in the Data tab is the same as for the entire MonolixSuite, to allow smooth transitions between applications. The columns available in this format and example datasets are detailed on this page. Briefly: • Each line corresponds to one individual and one time point • Each line can include a single measurement (also called observation), or a dose amount (or both a measurement and a dose amount) • Dosing information should be indicated for each individual, even if it is identical for all. Your dataset may not be originally in this format, and you may want to add information on dose amounts, limits of quantification, units, or filter part of the dataset. To do so, you should proceed in this order: • Formatting: If needed, format your data first by loading the dataset in the Data Formatting tab. Briefly, it allows to: □ to deal with several header lines □ merge several observation columns into one □ add censoring information based on tags in the observation column □ add treatment information manually or from external sources □ add more columns based on another file • Loading a new data set: If the data is already in the right format, load it directly in the Data tab (otherwise use the formatted dataset created by data formatting). • Observation types: Specify if the observation is of type continuous, count/categorical or event. • Labeling: label the columns not recognized automatically to indicate their type and click on ACCEPT. • Filtering: If needed, filter your dataset to use only part of it in the Filters tab • Explore: The interpreted dataset is displayed in Data, and Plots and covariate statistics are generated. If you have already defined a dataset in Datxplore or in PKanalix, you can skip all those steps in Monolix and create a new project by importing a project from Datxplore or PKanalix. Loading a new data set To load a new data set, you have to go to “Browse” your data set (green frame), tag all the columns (purple frame), define the observation types in Data Information (blue frame), and click on the blue button ACCEPT as on the following. If the dataset does not follow a formatting rule, the dataset will not be accepted, but errors will guide you to find what is missing and could be added by data formatting. Observation types There are three types of observations: • continuous: The observation is continuous and can take any value within a range. For example, a concentration is a continuous observation. • count/categorical: The observation values can take values only in a finite categorical space. For example, the observation can be a categorical observation (an effect can be observed as “low”, “medium”, or “high”) or a count observation over a defined time (the number of epileptic crisis in a week). • event: The observation is an event, for example the time of death. For each OBSERVATION ID, the type of observations must be specified by the user in the interface. Depending on the choice, the data will be displayed in the Observed Data plot in different ways (e.g spaghetti plot for continuous data and Kaplan-Meier plot for event data). The mapping of model outputs and observations from the dataset will also take into account the data type (e.g a model outptu of type “event” can only be mapped to an observation that is also an “event”). Once a model has been selected, the choice of the data types are locked (because they are enforced by the model output Regressor settings If columns have been tagged as REGRESSORS, the interpolation method for the regressors can be chosen. In the dataset, regressors are defined only for a finite number of time points. In between those time points, the regressor values can be interpolated in two different ways: • last carried forward: if we have defined in the dataset two times for each individual with \(reg_A\) at time \(t_A\) and \(reg_B\) at time \(t_B\) □ for \(t\le t_A\), \(reg(t)=reg_A\) [first defined value is used] □ for \(t_A\le t<t_B\), \(reg(t)=reg_A\) [previous value is used] □ for \(t>t_B\), \(reg(t)=reg_B\) [previous value is used] • linear interpolation: the interpolation is: □ for \(t\le t_A\), \(reg(t)=reg_A\) [first defined value is used] □ for \(t_A\le t<t_B\), \(reg(t)=reg_A+(t-t_A)\frac{(reg_B-reg_A)}{(t_B-t_A)}\) [linear interpolation is used] □ for \(t>t_B\), \(reg(t)=reg_B\) [previous value is used] The interpolation is used to obtain the regressor value at times not defined in the dataset. This is necessary to integrate ODE-based models (which are using an internal adaptative time step), or obtain prediction on a fine grid for the plots (e.g in the Individuals fits) for instance. When some dataset lines have a missing regressor value (dot “.”), the same interpolation method is used. Labeling (column tagging) The column type suggested automatically by Monolix based on the headers in the data can be customized in the preferences. By clicking on Settings>Preferences, the following windows pops up. In the DATA frame, you can add or remove preferences for each column. To remove a preference, double-click on the preference you would like to remove. A confirmation window will be proposed. To add a preference, click on the header type you consider, add a name in the header name and click on “ADD HEADER” as on the following figure. Notice that all the preferences are shared between Monolix, Datxplore, and PKanalix. Starting from the version 2024, it is also possible to update the preferences with the columns tagged in the opened project, by clicking on the icon in the top left corner of the table: Clicking on the icon will open a modal with the option to choose which of the tagged headers a user wants to add to preferences: Dataset load times Starting with the 2024 version, it is possible to improve the project load times, especially for projects with large datasets, but saving the data as a binary file. This option is available in Settings>Preferences and will save a copy of the data file in binary format in the results folder. When reloading a project, the dataset will be read from the binary file, which will be faster. If the original dataset file has been modified (compared to the binary), a warning message will appear, the binary dataset will not be used and the original dataset fiel will be loaded instead. Resulting plots and tables to explore the data Once the dataset is accepted: • Plots are automatically generated based on the interpreted dataset to help you proceed with a first data exploration before running any task. • The interpreted dataset appears in Data tab, which incorporates all changes after formatting and filtering. • Covariate Statistics appear in a section of the data tab. All the covariates (if any) are displayed and a summary of the statistics is proposed. For continuous covariates, minimum, median and maximum values are proposed along with the first and third quartile, and the standard deviation. For categorical covariates, all the modalities are displayed along with the number of each. Note the “Copy table” button that allows to copy the table in Word and Excel. The format and the display of the table will be preserved. Importing a project from Datxplore or PKanalix It is possible to import a project from Datxplore or PKanalix. For that, go to Project>New project for Datxplore/PKanalix (as in the green box of the following figure). In that case, a new project will be created and all the DATA frame will already be filled by the information from the Datxplore or PKanalix project. 2.2.Data Format In Monolix, a dataset should be loaded in the Data tab to create a project. To be accepted in the Data tab, the dataset should be in a specific format described below. If your dataset is not in the right format, in most cases, it is possible to format it in a few steps in the data formatting tab, to incorporate the missing information. Once the dataset is accepted and once a model is loaded, it is possible to the dataset to remove outliers or focus on a particular group. The data set format used in Monolix is the same as for the entire MonolixSuite, to allow smooth transitions between applications. In this format: • Each line corresponds to one individual and one time point. • Each line can include a single measurement (also called observation), or a dose amount (or both a measurement and a dose amount). • Dosing information should be indicated for each individual in a specific column, even if it is the same treatment for all individuals. • Headers are free but there can be only one header line. • Different types of information (dose, observation, covariate, etc) are recorded in different columns, which must be tagged with a column type (see below). The column types are very similar and compatible with the structure used by the software (the differences are listed ). This is specified when the user defines each column type in the data set as in the following picture. Description of column-types The first line of the data set must be a header line, defining the names of the columns. The columns names are completely free. In the MonolixSuite applications, when defining the data, the user will be asked to assign each column to a column-type (see here for an example of this step). The column type will indicate to the application how to interpret the information in that column. The available column types are given below. Column-types used for all types of lines: • ID (mandatory): identifier of the individual • OCCASION (formerly OCC): identifier (index) of the occasion • TIME: time of the dose or observation record • DATE/DAT1/DAT2/DAT3: date of the dose or observation record, to be used in combination with the TIME column • EVENT ID (formerly EVID): identifier to indicate if the line is a dose-line or a response-line • IGNORED OBSERVATION (formerly MDV): identifier to ignore the OBSERVATION information of that line • IGNORED LINE (from 2019 version): identifier to ignore all the information of that line • CONTINUOUS COVARIATE (formerly COV): continuous covariates (which can take values on a continuous scale) • CATEGORICAL COVARIATE (formerly CAT): categorical covariate (which can only take a finite number of values) • REGRESSOR (formerly X): defines a regression variable, i.e a variable that can be used in the structural model (used e.g for time-varying covariates) • IGNORE: ignores the information of that column for all lines Column-types used for response-lines: Column-types used for dose-lines: Order of events There are prioritization rules in place in case of various event types occurring at the same time. The order of row numbers in the data set is not important, and same is true for the order of administration and empty/reset macros in model files. The sequence of events will always be the following: 1. regressors are updated, 2. reset done by EVID=3 or EVID=4 is performed, 3. dose is administered, 4. empty/reset done by macros is performed, 5. observation is made. 2.3.Data formatting Monolix versions prior to 2023R1 do not include the data formatting module. Instead we provide an Excel macro to adapt the format of your dataset for Monolix. The dataset format that is used in Monolix is the same as for the entire MonolixSuite, to allow smooth transitions between applications. In this format, some rules have to be fullfilled, for example: • Each line corresponds to one individual and one time point. • Each line can include a single measurement (also called observation), or a dose amount (or both a measurement and a dose amount). • Dose amount should be indicated for each individual dose in a column AMOUNT, even if it is identical for all. • Headers are free but there can be only one header line. If your dataset is not in this format, in most cases, it is possible to format it in a few steps in the data formatting tab, to incorporate the missing information. In this case, the original dataset should be loaded in the “Format data” box, or directly in the “Data Formatting” tab, instead of the “Data” tab. In the data formatting module, you will be guided to build a dataset in the MonolixSuite format, starting from the loaded csv file. The resulting formatted dataset is then loaded in the Data tab as if you loaded an already-formatted dataset in “Data” directly. Then as for defining any dataset, you can tag columns, accept the dataset, and once accepted, the Filters tab can be used to select only parts of this dataset for analysis. Note that units and filters are neither information to be included in the data file, nor part of the data formatting process. The original datafile is NOT modified by Monolix. Formatting operations are saved in the Monolix project and applied to the data when the project is loaded. The formatted dataset is not saved by default, but it can be exported by the user as a CSV file. Jump to: 1. Dataset initialization (mandatory step) 2. Creating occasions from a SORT column to distinguish different sets of measurements within each subject, (eg formulations). 3. Selecting an observation type (required to add a treatment) 4. Merging observations from several columns 5. Specifying censoring from censoring tags eg “BLQ” instead of a number in an observation column. Demo project DoseAndLOQ_manual.mlxtran 6. Adding doses in the dataset Demo project DoseAndLOQ_manual.mlxtran 7. Reading censoring limits or dosing information from the dataset “by category” or “from data”. Demo projects DoseAndLOQ_byCategory.mlxtran and DoseAndLOQ_fromData.mlxtran 8. Creating occasions from dosing intervals to analyze separately the measurements following different doses.Demo project doseIntervals_as_Occ.mlxtran 9. Handling urine data to merge start and end times in a single column. 10. Adding new columns from an external file, eg new covariates, or individual parameters estimated in a previous analysis. Demo warfarin_PKPDseq_project.mlxtran 1. Data formatting workflow When opening a new project, two Browse buttons appear. The first one, under “Data file”, can be used to load a dataset already in a MonolixSuite-standard format, while the second one, under “Format data”, allows to load a dataset to format in the Data formatting module. After loading a dataset to format, data formatting operations can be specified in several subtabs: Initialization, Observations, Treatments and Additional columns. • Initialization is mandatory and must be filled before using the other subtabs. • Observations is required to enable the Treatments tab. After Initialization has been validated by clicking on “Next”, a button “Preview” is available from any subtab to view in the Data tab the formatted dataset based on the formatting operations currently specified. 2. Dataset initialization The first tab in Data formatting is named Initialization. This is where the user can select header lines or lines to exclude (in the blue area on the screenshot below) or tag columns (in the yellow Selecting header lines or lines to exclude These settings should contain line numbers for lines that should be either handled as column headers or that should be excluded. • Header lines: one or several lines containing column header information. By default, the first line of the dataset is selected as header. If several lines are selected, they are merged by data formatting into a single line, concatenating the cells in each column. • Lines to exclude (optional): lines that should be excluded from the formatted dataset by data formatting. Tagging mandatory columns Only the columns corresponding to the following tabs must be tagged in Initialization, while all the other columns should keep the default UNDEFINED tag: • ID (mandatory): subject identifiers • TIME (mandatory): the single time column • SORT (optional): one or several columns containing SORT variables can be tagged as SORT. Occasions based on these columns will be created in the formatted dataset as described in Section 3. • START, END and VOLUME (mandatory in case of urine data): these column tags replace the TIME tag in case of urine data, if the urine collection time intervals are encoded in the dataset with two time columns for the start and end times of the intervals. In that case there should also be a column with the urine volume in each interval. See Section 10 for more details. Initialization example • demo CreateOcc_AdmIdbyCategory.pkx (Monolix demo in the folder 0.data_formatting, here imported into Monolix. The screenshot below focuses on the formatting initialization and excludes other elements present in the demo): In this demo the first line of the dataset is excluded because it contains a description of the study. The second line contains column headers while the third line contains column units. Since the MonolixSuite-standard format allows only a single header line, lines 2 and 3 are merged together in the formatted dataset. 3. Creating occasions from a SORT column A SORT variable can be used to distinguish different sets of measurements (usually concentrations) within each subject, that should be analyzed separately by Monolix (for example: different formulations given to each individual at different periods of time, or multiple doses where concentration profiles are available to be analyzed following several doses). In Monolix, these different sets of measurements must be distinguished as OCCASIONS (or periods of time), via the OCCASION column-type. However, a column tagged as OCCASION can only contain integers with occasion indexes. Thus, if a column with a SORT variable contains strings, its format must be adapted by Data formatting, in the following way: • the user must tag the column as SORT in the Initialization subtab of Data formatting, • the user validates the Initialization with “Next”, then clicks on “Preview” (after optionally defining other data formatting operations), • the formatted data is shown in Data: the column tagged as SORT is automatically duplicated. The original column is automatically tagged as CATEGORICAL COVARIATE in Data, while the duplicated column, which has the same name appended with “_OCC”, is tagged as OCCASION. This column contains occasion indexes instead of strings. • demo CreateOcc_AdmIdbyCategory.pkx (PKanalix demo in the folder 0.data_formatting, here imported into Monolix. The screenshot below focuses on the formatting of occasions and excludes other elements present in the demo): The image below shows lines 25 to 29 from the dataset from the CreateOcc_AdmIdbyCategory.pkx demo, where covariate columns have been removed to simplify the example. This dataset contains two sets of concentration measurements for each individual, corresponding to two different drug formulations administered on different periods. The sets of concentrations are distinguished with the FORM column, which contains “ref” and “test” categories (reference/test formulations). The column is tagged as SORT in Data formatting Initialization. After clicking on “Preview”, we can see in the Data tab that a new column named FORM_OCC has been created with occasion indexes for each individual: for subject 1, FORM_OCC=1 corresponds to the reference formulation because it appears first in the dataset, and FORM_OCC=2 corresponds to the test formulation because it appears in second in the dataset. 4. Selecting an observation type The second subtab in Data formatting allows to select one or several observation types. An observation type corresponds to a column of the dataset, that contains a type of measurements (usually drug concentrations, but it can also be PD measurements for example). Only columns that have not been tagged as ID, TIME or SORT are available as observation type. This action is optional and can have several purposes: • If doses must be added by Data formatting (see Section 7), specifying the column containing observations is mandatory, to avoid duplicating observations on new dose lines. • If several observation types exist in different columns (for example: concentrations for different analytes, or measurements for PK and PD), they must be specified in Data formatting to be merged into a single observation column (see Section 5). • In the MonolixSuite-standard format, the column containing observations can only contain numbers, and no string except “.” for a missing observation. Thus if this column contains strings in the original dataset, it must be adapted by Data formatting, with two different cases: □ if the strings are tags for censored observations (usually BLQ: below the limit of quantification), they can be specified in Data formatting to adapt the encoding of the censored observations (see Section 6), □ any other string in the column is automatically replaced by “.” by Data formatting. 5. Merging observations from several columns The MonolixSuite-standard format allows a single column containing all observations (such as concentrations or PD measurements). Thus if a dataset contains several observation types in different columns (for example: concentrations for different analytes, or measurements for PK and PD), they must be specified in Data formatting to be merged into a single observation column. In that case, different settings can be chosen in the area marked in orange in the screenshot below: • The user must choose between distinguishing observation types with observation ids or occasions. • The user can unselect the option “Duplicate information from undefined columns”. As observation ids After selecting the “Distinguish observation types with: observation ids” option and clicking “Preview,” the columns for different observation types are combined into a single column called “OBS.” Each row of the dataset is duplicated for each observation type, with one value per observation type. Additionally, an “OBSID” column is created, with the name of the observation type corresponding to the measurement on each row. This option is recommended for joint modeling of observation types, such as CA in Monolix or population modeling in Monolix. It is important to note that NCA cannot be performed on two different observation ids simultaneously, so it is necessary to choose one observation id for the analysis. • demo merge_obsID_ParentMetabolite.pkx PKanalix demo in the folder 0.data_formatting, here imported into Monolix. The screenshot below focuses on the formatting of observations and excludes other elements present in the demo): This demo involves two columns that contain drug parent and metabolite concentrations. When merging both observation types with observation ids, a new column called OBSID is generated with categories labeled as “PARENT” and “METABOLITE.” As occasions After selecting the “Distinguish observation types with: occasions” option and clicking “Preview,” the columns for different observation types are combined into a single column called “OBS.” Each row of the dataset is duplicated for each observation type, with one value per observation type. Additionally, two columns are created: an “OBSID_OCC” column with the index of the observation type corresponding to the measurement on each row, and an “OBSID_COV” with the name of the observation type. This option is recommended for NCA, which can be run on different occasions for each individual. However, joint modeling of the observation types with CA or population modeling with Monolix cannot be performed with this option. • demo merge_occ_ParentMetabolite.pkx (PKanalix demo in the folder 0.data_formatting, here imported into Monolix. The screenshot below focuses on the formatting of observations and excludes other elements present in the demo): This demo involves two columns that contain drug parent and metabolite concentrations. When merging both observation types with occasions, two new columns called OBSID_OCC and OBSID_COV are generated with OBSID_OCC=1 corresponding to OBSID_COV=”PARENT” catand OBSID_OCC=2 corresponding to OBSID_COV=”METABOLITE.” Duplicate information from undefined columns When merging two observation columns into a single column, all other columns will see their lines duplicated. The data formatting will know how to treat columns which have been tagged in the Initialization tab, but not the other columns (header “UNDEFINED”) which are not used for data formatting. A checkbox enables to decide if the information from these columns should be duplicated on the new lines, or if “.” should be used instead. The default option is to duplicate information, because in general, the undefined columns correspond to covariates with one value per individual, so this value is the same for the two lines that correspond to the same id. It is rare that you need to uncheck this box. An example where you should not duplicate the information is if you already have a column Amount in the MonolixSuite format, so with a dose amount only at the dosing time, and “.” everywhere else. If you do not want to specify amount again in data formatting, and simply want to merge observation columns as observation ids, you should not duplicate the lines of the Amount column which is undefined. Indeed, the dose amounts have been administered only once. 6. Specifying censoring from censoring tags In the MonolixSuite-standard format, censored observations are encoded with a 1or -1 flag in a column tagged as CENSORING in the Data tab, while exact observations have a 0 flag in that column. In addition, on rows for censored observations, the LOQ is indicated in the observation column: it is the LLOQ (lower limit of quantification) if CENSORING=1 or the ULOQ (upper limit of quantification) if CENSORING=-1. Finally, to specify a censoring interval, an additional column tagged as LIMIT in the Data tab must exist in the dataset, with the other censoring bound. The Data Formatting module can take as input a dataset with censoring tags directly in the observation column, and adapt the dataset format as described above. After selecting one or several observation types in the Observations subtab (see Section 4), all strings found in the corresponding columns are displayed in the “Censoring tags” on the right of the observation types. If at least one string is found, the user can then define some censoring associated with an observation type and with one or several censoring tags with the button “Add censoring”. 3 types of censoring can be • LLOQ: this corresponds to left-censoring, where the censored observation is below a lower limit of quantification (LLOQ), that must specified by the user. In that case Data Formatting replaces the censoring tags in the observation column by the LLOQ, and creates a new CENS column tagged as CENSORING in the Data tab, with 1 on rows that had censoring tags before formatting, and 0 on other rows. • ULOQ: this corresponds to right-censoring, where the censored observation is above an upper limit of quantification (ULOQ), that must specified by the user. Here Data Formatting replaces the censoring tags in the observation column by the ULOQ, and creates a new CENS column tagged as CENSORING in the Data tab, with -1 on rows that had censoring tags before formatting, and 0 on other • Interval: this is for interval-censoring, where the user must specify two bound of a censoring interval, to which the censored observation belong. Data Formatting replaces the censoring tags in the observation column by the upper bound of the interval, and creates two new columns: a CENS column tagged as CENSORING in the Data tab, with 1 on rows that had censoring tags before formatting, and 0 on other rows, and a LIMIT column with the lower bound of the censoring interval on rows that had censoring tags before formatting, and “.” on other rows. For each type of censoring, available options to define the limits are: • “Manual“: limits are defined manually, by entering the limit values for all censored observations. • “By category“: limits are defined manually for different categories read from the dataset. • “From data“: limits are directly read from the dataset. The options “by category” and “from data” are described in detail in Section 8. • demo DoseAndLOQ_manual.mlxtran (the screenshot below focuses on the formatting of censored observations and excludes other elements present in the demo): In this demo there are two censoring tags in the CONC column: BLQ1 (from Study 1) and BLQ2 (from Study 2), that correspond to different LLOQs. An interval censoring is defined for each censoring tag, with manual limits, where LLOQ=0.06 for BLQ1 and LLOQ=0.1 for BLQ2, and the lower limit of the censoring interval being 0 in both cases. 7. Adding doses in the dataset Datasets in MonolixSuite-standard format should contain all information on doses, as dose lines. An AMOUNT column records the amount of the administrated doses on dose-lines, with “.” on response-lines. In case of infusion, an INFUSION DURATION or INFUSION RATE column records the infusion duration or rate. If there are several types of administration, an ADMINISTRATION ID column can distinguish the different types of doses with integers. If doses are missing from a dataset, the Data Formatting module can be used to add dose lines and dose-related columns: after initializing the dataset, the user can specify one or several treatments in the Treatments subtab. The following operations are then performed by Data Formatting: • a new dose line is inserted in the dataset for each defined dose, with the dataset sorted by subject and times. On such a dose line, the values from the next line are duplicated for all columns, except for the observation column in which “.” is used for the dose line. • A new column AMT is created with “.” on all lines except on dose lines, on which dose amounts are used. The AMT column is automatically tagged as AMOUNT in the Data tab. • If administration ids have been defined in the treatment, an ADMID column is created, with “.” on all lines except on dose lines, on which administration ids are used. The ADMID column is automatically tagged as ADMINISTRATION ID in the Data tab. • If an infusion duration or rate has been defined, a new INFDUR (for infusion duration) or INFRATE (for infusion rate) is created, with “.” on all lines except on dose lines. The INFDUR column is automatically tagged as INFUSION DURATION in the Data tab, and the INFRATE column is automatically tagged as INFUSION RATE. Starting from the 2024R1 version, different dosing schedules can be defined for different individuals, based on information from other columns, which can be useful in cases when different cohorts received different dosing regimens, or when working with data pooled from multiple studies. To define a treatment just for a specific cohort or study, the dropdown on the top of the treatment section can be used: For each treatment, the dosing schedule can defined as: • regular: for regularly spaced dosing times, defined with the start time, inter-dose internal, and number of doses. A “repeat cycle” option allows to repeat the regular dosing schedule to generate a more complex regimen. • manual: a vector of one or several dosing times, each defined manually. A “repeat cycle” option allows to repeat the manual dosing schedule to generate a more complex regimen. • external: an external text file with columns id (optional), occasions (optional), time (mandatory), amount (mandatory), admid (administration id, optional), tinf or rate (optional), that allows to define individual doses. While dose amounts, administration ids and infusion durations or rates are defined in the external file for external treatments, available options to define them for treatments of type “manual” or “regular” are: • “Manual“: this applies the same amount (or administration id or infusion duration or rate) to all doses. • “By category“: dose amounts (or administration id or infusion duration or rate) are defined manually for different categories read from the dataset. • “From data“: dose amounts (or administration id or infusion duration or rate) are directly read from the dataset. The options “by category” and “from data” are described in detail in Section 8. There is a “common settings” panel on the right: • dose intervals as occasions: this creates a column to distinguish the dose intervals as different occasions (see Section 9). • infusion type: If several treatments correspond to infusion administration, they need to share the same type of encoding for infusion information: as infusion duration or as infusion rate. • demo DoseAndLOQ_manual.mlxtran (the screenshot below focuses on the formatting of doses and excludes other elements present in the demo): In this demo, doses are initially not included in the dataset to format. A single dose at time 0 with an amount of 600 is added for each individual by Data Formatting. This creates a new AMT column in the formatted dataset, tagged as AMOUNT. 8. Reading censoring limits or dosing information from the dataset When defining censoring limits for observations (see Section 6) or dose amounts, administration ids, infusion duration or rate for treatments (see Section 7), two options allow to define different values for different rows, based on information already present in the dataset: “by category” and “from data”. By category It is possible to define manually different censoring limits, dose amounts, administration ids, infusion durations, or rates for different categories within a dataset’s column. After selecting this column in the “By category” drop-down menu, the different modalities in the column are displayed and a value must be manually assigned each modality. • For censoring limits, the censoring limit used to replace each censoring tag depends on the modality on the same row. • For doses, the value chosen for the newly created column (AMT for amount, ADMID for administration id, INFDUR for infusion duration, INFRATE for infusion rate) on each new dose line depends on the modality on the first row found in the dataset for the same individual and the same time as the dose, or the next time if there is no line in the initial dataset at that time, or the previous time if no time is found after the dose. • demo DoseAndLOQ_byCategory.mlxtran (the screenshot below focuses on the formatting of doses and excludes other elements present in the demo): In this demo there are three studies distinguished in the STUDY column with the categories “SD_400mg”, “SD_500mg” and “SD_600mg”. In Data Formatting, a single dose is manually defined at time 0 for all individuals, with different amounts depending the STUDY category. In addition, censoring interval is defined for the censoring tags BLQ, with an upper limit of the censoring interval (lower limit of quantification) that also depends on the STUDY category. Three new columns – AMT for dose amounts, CENS for censoring tags (0 or 1), and LIMIT for the lower limit of the censoring intervals – are created by Data Formatting. A new dose line is then inserted at time 0 for each individual. From data The option “From data” is used to directly read censoring limits, dose amounts, administration ids, infusion durations, or rates from a dataset’s column. The column must contain either numbers or numbers inside strings. In that case, the first number found in the string is extracted (including decimals with .). • For censoring limits, the censoring limit used to replace each censoring tag is read from the selected column on the same row. • For doses, the value chosen for the newly created column (AMT for amount, ADMID for administration id, INFDUR for infusion duration, INFRATE for infusion rate) on each new dose line is read from the selected column on the first row found in the dataset for the same individual and the same time as the dose, or the next time if there is no line in the initial dataset at that time, or the previous time if no time is found after the dose. • demo DoseAndLOQ_fromData.mlxtran (the screenshot below focuses on the formatting of doses and censoring and excludes other elements present in the demo): In this demo there are three studies distinguished in the STUDY column with the categories “SD_400mg”, “SD_500mg” and “SD_600mg”. In Data Formatting, a single dose is manually defined at time 0 for all individuals, with the amount read the STUDY column. In addition, censoring interval is defined for the censoring tags BLQ, with an upper limit of the censoring interval (lower limit of quantification) read from the LLOQ_mg_L column. Three new columns – AMT for dose amounts, CENS for censoring tags (0 or 1), and LIMIT for the lower limit of the censoring intervals – are created by Data Formatting. A new dose line is then inserted at time 0 for each individual, with amount 400, 500 or 600 for studies SD_400mg, SD_500mg and SD_600mg respectively. 9. Creating occasions from dosing intervals The option “Dose intervals as occasions” in the Treatments subtab of Data Formatting allows to create an occasion column to distinguish dose intervals. This is useful if the sets of measurements following different doses should be analyzed independently for a same individual. From the 2024 version on, an additional option “Duplicate observations at dose times into each occasion” is available. This option allows to duplicate the observations which are at exactly the same time as dose, such that they appear both as last point of the previous occasion and first point of the next occasion. This setting is recommended for NCA, but not recommended for population PK modeling. • demo doseIntervals_as_Occ.mlxtran (Monolix demo in the folder 0.data_formatting, here imported into Monolix): This demo imported from a Monolix demo has an initial dataset in Monolix-standard format, with multiple doses encoded as dose lines with dose amounts in the AMT column. When using this dataset directly into Monolix or Monolix, a single analysis is done on each individual concentration profile considering all doses, which means that NCA would be done on the concentrations after the last dose only, and modeling (CA in Monolix or population modeling in Monolix) would be estimated with a single set of parameter values for each individual. If instead we want to run separate analyses on the sets of concentrations following each dose, we need to distinguish them as occasions with a new column added with the Data Formatting module. To this end, we define the same treatment as in the initial dataset with Data Formatting (here as regular multiple doses) with the option “Dose intervals as occasions” selected. After clicking Preview, Data Formatting adds two new columns: an AMT1 column with the new doses, to be tagged as AMOUNT instead of the AMT column that will now be ignored, and a DOSE_OCC column to be tagged as OCCASION. • demo CreateOcc_duplicateObs.pkx imported to Monolix: 10. Handling urine data In Monolix-standard format, the start and end times of urine collection intervals must be recorded in a single column, tagged as TIME column-type, where the end time of an interval automatically acts as start time for the next interval (see here for more details). If a dataset contains start and end times in two different columns, they can be merged into a single column by Data Formatting. This is done automatically by tagging these two columns as START and END in the Initialization subtab of Data Formatting (see Section 2). In addition the column containing urine collection volume must be tagged as VOLUME. • demo Urine_LOQinObs.pkx (Monolix demo here imported into Monolix): 11. Adding new columns from an external file The last subtab is used to insert additional columns in the dataset from a separate file. The external file must contain a table with a column named ID or id with the same subject identifiers as in the dataset to format, and other columns with a header name and individual values (numbers or strings). There can be only one value per individual, which means that the additional columns inserted in the formatted dataset can contain only a constant value within each individual, and not time-varying values. Examples of additional columns that can be added with this option are: • individual parameters estimated in a previous analysis, to be read as regressors to avoid estimating them. Time-varying regressors are not handled. • new covariates. If occasions are defined in the formatted dataset, it is possible to have an occasion column in the external file and values defined per subject-occasion. • demo warfarin_PKPDseq_project.mlxtran (Monolix demo in the folder 0.data_formatting, here imported into Monolix): This demo imported from a Monolix demo has an initial PKPD dataset in Monolix-standard format. The option “Additional columns” is used to add the PK parameters estimated on the PK part of the data in another Monolix project. 2.3.1.Data formatting presets Starting from the 2024R1 version, if users need to perform the same or similar data formatting steps with each new project, a data formatting preset can be created and applied for new projects. A typical workflow for using data formatting presets is as follows: 1. Data formatting steps are performed on a specific project. 2. Steps are saved in a data formatting preset. 3. Preset is reapplied on multiple new projects (manually or automatically). Creating a data formatting preset After data formatting steps are done on a project, a preset can be created by clicking on the New button in the bottom left corner of the interface, while in the Data formatting tab: Clicking on this button will open a pop-up window with a form that contains four parts: 1. Name: contains a name of the preset, this information will be used to distinguish between presets when applying them. 2. Configuration: contains three checkboxes (initialization, observations, treatments). Users can choose which steps of data formatting will be saved in the preset. If option “External” was used in creating treatments, the external file paths will not be saved in the preset. 3. Description: a custom description can be added. 4. Use this preset as default to format data in all MonolixSuite apps: if ticked, the preset will be automatically applied every time a data set is loaded for data formatting in PKanalix and After saving the preset by clicking on Create, the description will be updated with the summary of all the steps performed in data formatting. Important to note is that if a user wants to save censoring information in the data formatting preset, and the censored tags change between projects (e.g., censoring tag <LOQ=0.1> with varying numbers is used in different projects), the option “Use all automatically detected tags” should be used in the Observations tab. This way, no specific censoring tags will be saved in the preset, but they will be automatically detected and applied each time a preset is applied. Applying a preset A preset can be applied manually using the Apply button in the bottom left corner of the data formatting tab. Clicking on the Apply button will open a dropdown selection of all saved presets and the desired preset can be chosen. The Apply button is enabled only after the initialization step is performed (ID and TIME columns are selected and the button Next was clicked). This allows PKanalix to fall back to the initialized state, if the initialization step from a preset fails (e.g., if ID and TIME column header saved in the presets do not exist in the new data set). After applying a preset, all data formatting steps saved in the preset will try to be applied. If it is not possible to apply certain steps (e.g., some censoring tags or column headers saved in a preset do not exist), the error message will appear. Managing presets Presets can be created, deleted, edited, imported and exported by clicking on Settings > Manage Presets > Data Formatting. The pop-window allows users to: 1. Create presets: clicking on the button New has the same behavior as clicking on the button New in the Data formatting tab (described in the section “Creating a data formatting preset”). 2. Delete presets: clicking on the button Delete will permanently delete a preset. Clicking on this button will not ask for confirmation. 3. Edit presets: by clicking on a preset in the left panel, the preset information will appear in the right panel. Name and preset description of the presets can then be updated, and a preset can be selected to automatically format data in PKanalix and Monolix. 4. Export presets: a selected preset can be exported as a lixpst file which can be shared between users and imported using the Import button. 5. Import presets: a user can import a preset from a lixpst file exported from another computer. Additionally, an option to pin presets (always show them on top) can be used by clicking on the icon to facilitate the usage of presets when a user has a lot of them. 2.4.Filtering a data set Starting on the 2020 version, it is possible to filter your data set to only take a subpart into account in your modelization. It allows to make filters on some specific IDs, times, measurement values,… It is also possible to define complementary filters and also filters of filters. It is accessible through the filters item on the data tab. Creation of a filter To create a filter, you need to click on the data set name. You can then create a “child”. It corresponds to a subpart of the data set where you will define your filtering actions. You can see on the top (in the green rectangle) the action that you will complete and you can CANCEL, ACCEPT, or ACCEPT & APPLY with the bottoms on the bottom. Filtering actions In all the filtering actions, you need to define • An action: it corresponds to one of the following possibilities: select ids, remove ids, select lines, remove lines. • A header: it corresponds to the column of the data set you wish to have an action on. Notice that it corresponds to a column of the data set that was tagged with a header. • An operator: it corresponds to the operator of choice (=, ≠, < ≤, >, or ≥). • A value. When the header contains numerical values, the user can define it. When the header contains strings, a list is proposed. For example, you can • Remove the ID 1 from your study: In that case, all the IDs except ID = 1 will be used for the study. • Select all the lines where the time is less or equal 24: In that case, all lines with time strictly greater that 24 will be removed. If a subject has no measurement anymore, it will be removed from the study. • Select all the ids where SEX equals F: In that case, all the male will be removed of the study. • Remove all Ids where WEIGHT less or equal 65: In that case, only the subjects with a weight over 65 will be kept for the study. In any case, the interpreted filter data set will be displayed in the data tab. Filters with several actions In the previous examples, we only did one action. It is also possible to do several actions to define a filter. We have the possibility to define UNION and/or INTERSECTION of actions. By clicking by the + and – button on the right, you can define an intersection of actions. For example, by clicking on the +, you can define a filter corresponding to intersection of • The IDs that are different to 1. • The lines with the time values less than 24. Thus in that case, all the lines with a time less than 24 and corresponding to an ID different than 1 will be used in the study. If we look at the following data set as an example Considered data set for the study Initial data set as the intersection of the two actions By clicking by the + and – button on the bottom, you can define an union of actions. For example, in a data set with a multi dose, I can focus on the first and the last dose. Thus, by clicking on the +, you can define a filter corresponding to union of • The lines where the time is strictly less than 12. • The lines where the time is greater than 72. Resulting data set after action: select lines where the time is strictly less than 12 Initial data set Resulting data set after action: Considered data set for the study select lines where the time is greater than 72 as the union of the three actions Resulting data set after action: select lines where amt equals 40 Notice that, if just define the first two actions, all the dose lines at a time in ]12, 72[ will also be removed. Thus, to keep having all the doses, we need to add the condition of selecting the lines where the dose is defined. In addition, it is possible to do any combination of INTERSECTION and UNION. Other filers: filter of filter and complementary filters Based on the definition of a filter, it is possible to define two other actions. By clicking on the filter, it is possible to create • A child: it corresponds to a new filter with the initial filter as the source data set. • A complement: corresponds to the complement of the filter. For example, if you defined a filter with only the IDs where the SEX is F, then the complement corresponds to the IDs where the SEX is not F. 2.5.Handling censored (BLQ) data Objectives: learn how to handle easily and properly censored data, i.e. data below (resp. above) a lower (resp.upper) limit of quantification (LOQ) or below a limit of detection (LOD). Projects: censoring1log_project, censoring1_project, censoring2_project, censoring3_project, censoring4_project Censoring occurs when the value of a measurement or observation is only partially known. For continuous data measurements in the longitudinal context, censoring refers to the values of the measurements, not the times at which they were taken. For example, the lower limit of detection (LLOD) is the lowest quantity of a substance that can be distinguished from its absence. Therefore, any time the quantity is below the LLOD, the “observation” is not a measurement but the information that the measured quantity is less than the LLOD. Similarly, in longitudinal studies of viral kinetics, measurements of the viral load below a certain limit, referred to as the lower limit of quantification (LLOQ), are so low that their reliability is considered suspect. A measuring device can also have an upper limit of quantification (ULOQ) such that any value above this limit cannot be measured and reported. As hinted above, censored values are not typically reported as a number, but their existence is known, as well as the type of censoring. Thus, the observation $y^{(r)}_{ij}$ (i.e., what is reported) is the measurement $y_{ij}$ if not censored, and the type of censoring otherwise. We usually distinguish between three types of censoring: left, right and interval. In each case, the SAEM algorithm implemented in Monolix properly computes the maximum likelihood estimate of the population parameters, combining all the information provided by censored and non censored data. In the presence of censored data, the conditional density function needs to be computed carefully. To cover all three types of censoring (left, right, interval), let $I_{ij}$ be the (finite or infinite) censoring interval existing for individual i at time $t_{ij}$. Then, $$\displaystyle p(y^{(r)}|\psi)=\prod_{i=1}^{N}\prod_{j=1}^{n_i}p(y_{ij}|\psi_i)^{1_{y_{ij}\notin I_{ij}}}\mathbb{P}(y_{ij}\in I_{ij}|\psi_i)^{1_{y_{ij}\in I_{ij}}}$$ $$\displaystyle \mathbb{P}(y_{ij}\in I_{ij}|\psi_i)=\int_{I_{ij}} p_{y_{ij}|\psi_i} (u|\psi_i)du$$ We see that if $y_{ij}$ is not censored (i.e. $1_{y_{ij}otin I_{ij}}=1$), its contribution to the likelihood is the usual $p(y_{ij}|\psi_i)$, whereas if it is censored, the contribution is $\mathbb {P}(y_{ij}\in I_{ij}|\psi_i)$. For the calculation of the likelihood, this is equivalent to the M3 method in NONMEM when only the CENSORING column is given, and to the M4 method when both a CENSORING column and a LIMIT column are Censoring definition in a data set In the dataset format used by Monolix and PKanalix, censored information is included in this way: • The censored measurement should be in the OBSERVATION column. • In an additional CENSORING column, put 0 if the observation is not censored, and 1 or – 1 depending if the measurement given in the observation column is a lower or an upper limit. • Optionally, include a LIMIT column to set the other limit. To quickly include censoring information to your dataset by using BLQ tags in the observation column, you can use data formatting. Examples are provided below and here. PK data below a lower limit of quantification Left censored data • censoring1log_project (data = ‘censored1log_data.txt’, model = ‘pklog_model.txt’) PK data are log-concentration in this example. The limit of quantification of 1.8 mg/l for concentrations becomes log(1.8)=0.588 for log-concentrations. The column of observations (Y) contains either the LLOQ for data below the limit of quantification (BLQ data) or the measured log-concentrations for non BLQ data. Furthermore, Monolix uses an additional column CENSORING to indicate if an observation is left censored (CENS=1) or not (CENS=0). In this example, subject 1 has two BLQ data at times 24h and 30h (the measured log-concentrations were below 0.588 at these times): The plot of individual fits displays BLQ (red band) and non BLQ data (blue dots) together with the predicted log-concentrations (purple line) on the whole time interval: Notice that the band goes from .8 to -Infinity as no bound has been specified (no LIMIT column was proposed). For diagnosis plots such as VPC, residuals of observations versus predictions, Monolix samples the BLQ data from the conditional distribution $$p(y^{BLQ} | y^{non BLQ}, \hat{\psi}, \hat{\theta})$$ where $\hat{\theta}$ and $\hat{\psi}$ are the estimated population and individual parameters. This is done by adding a residual error on top of the prediction, using a truncated normal distribution to make sure that the simulated BLQ remains within the censored interval. This is the most efficient way to take into account the complete information provided by the data and the model for diagnosis plots such as VPCs: A strong bias appears if LLOQ is used instead for the BLQ data (if you choose LOQ instead of simulated in the display frame of the settings) : Notice that ignoring the BLQ data entails a loss of information as can be seen below (if you choose no in the “Use BLQ” toggle): As can be seen below, imputed BLQ data is also used for residuals (IWRES on the left) and for observations versus predictions (on the right) More on these diagnosis plots Impact of the BLQ in residuals and observations versus predictions plots A strong bias appears if LLOQ is used instead for the BLQ data for these two diagnosis plots: while ignoring the BLQ data entails a loss of information: BLQ predictive checks The BLQ predictive check is a diagnosis plot that displays the fraction of cumulative BLQ data (blue line) with a 90% prediction interval (blue area). Interval censored data • censoring1_project (data = ‘censored1_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) We use the original concentrations in this project. Then, BLQ data should be treated as interval censored data since a concentration is know to be positive. In other word, a data reported as BLQ data means that the (non reported) measured concentration is between 0 and 1.8mg/l. The value in the observation column 1.8 indicates the value, the value in the CENSORING column indicates that the value in the observation column is the upper bound. An additional column LIMIT reports the lower limit of the censored interval (0 in this example): • if this column is missing, then BLQ data is assumed to be left-censored data that can take any positive and negative value below LLOQ. • the value of the limit can vary between observations of the same subject. Monolix will use this additional information to estimate the model parameters properly and to impute the BLQ data for the diagnosis plots. Plot of individual fits now displays LLOD at 1.8 with a red band when a PK data is censored. We see that the band lower limit is at 0 as defined in the limit column. PK data below a lower limit of quantification or below a limit of detection • censoring2_project (data = ‘censored2_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) PK data below a lower limit of quantification and PD data above an upper limit of quantification • censoring3_project (data = ‘censored3_data.txt’, model = ‘pkpd_model.txt’) We work with PK and PD data in this project and assume that the PD data may be right censored and that the upper limit of quantification is ULOQ=90. We use CENS=-1 to indicate that an observation is right censored. In such case, the PD data can take any value above the upper limit reported in column Y (here the YTYPE column of type OBSERVED ID defines the type of observation, YTYPE=1 and YTYPE=2 are used respectively for PK and PD data): We can display the cumulative fraction of censored data both for the PK and the PD data (on the left and right respectively): Combination of interval censored PK and PD data • censoring4_project (data = ‘censored4_data.txt’, model = ‘pkpd_model.txt’) We assume in this example • 2 different censoring intervals(0,1) and (1.2, 1.8) for the PK, • a censoring interval (80,90) and right censoring (>90) for the PD. Combining columns CENS, LIMIT and Y allow us to combine efficiently these different censoring processes: This coding of the data means that, for subject 1, • PK data is between 0 and 1 at time 30h (second blue frame), • PK data is between 1.2 and 1.8 at times 0.5h and 24h (first blue frame for time .5h), • PD data is between 80 and 90 at times 12h and 16h (second green frame for time 12h), • PD data is above 90 at times 4h and 8h (first green frame for time 4h). Plot of individual fits for the PK and the PD data displays the different limits of these censoring intervals (PK on the left and PD on the right): Other diagnosis plots, such as the plot of observations versus predictions, adequately use imputed censored PK and PD data: Case studies • 8.case_studies/hiv_project (data = ‘hiv_data.txt’, model = ‘hivLatent_model.txt’) • 8.case_studies/hcv_project (data = ‘hcv_data.txt’, model = ‘hcvNeumann98_model_latent.txt’) 2.6.Mapping between the data and the model Starting from the 2019 version, it is possible to change the mapping between the data set observations ids and the structural model output. By default and in previous versions, the mapping is done by order, i.e. the first output listed in the output= statement of the model is mapped to the first OBSERVATION ID (ordered alphabetically). It is possible with the interface to set exactly which model output is mapped to which data output. Model output or data outputs can be left unused. Changing the mapping If you have more output in the data set (i.e more OBSERVATION IDs) than in the structural model, you can set which data output you will use in the project. In the example below there are two outputs in the data set (managed by the OBSERVATION ID column) and only one output in the structural model, Cc. By default the following mapping is proposed: the data with observation id ‘1’ is mapped to the modle prediction ‘Cc’. The model observation (with error model) is called ‘CONC’ (the name of the OBSERVATION column, can be edited): To set the data output to use to observation id ‘2’, you can either: • Unlink by clicking on either the dot representing the output ‘1’ of the Data or ‘Cc’ of the structural model, and then draw the line between ‘2’ and ‘Cc’ (as can be seen on the figure below on the left) • Directly draw a line from ‘2’ to ‘Cc’ (as can be seen on the figure below on the right). This will automatically undo the link between ‘1’ and ‘Cc’. And click on the button ACCEPT on the bottom on the window to apply the changes. The same possibility is proposed if you have more outputs in the structural model, compared to the number of observation ids. If you have a TMDD model with both the free and the total ligand concentration listed as model output and one type of measurement, you can map either the free or the total ligand as can be seen on the following figure with the same actions as described above. Several types of outputs The mapping is only possible between outputs of same nature (continuous / count-categorical / event), i.e. it is only possible to map a continuous output with a continuous output of the structural model. Thus, mapping a continuous output with a discrete or a time-to-event is not possible. If you try to link a forbidden combination, the line connecting line will be displayed in red as in the following figure The type of output is indicated via the shapes: • continuous outputs are displayed as circles • categorical/count outputs are displayed as squares • event outputs are displayed as triangles Changing the observation name In the example below, ‘1’ is the observation id used in the data set to identify the data to use, ‘Cc’ is the model output (a prediction, without residual error) and ‘y1’ the observation (with error). ‘y1’ represents the data with observation id ‘1’ and it appears in the labels/legends of the plots. These elements are related by observation model, which formula can be displayed. For count/categorical and event model outputs, the model observation is defined in the model file directly. The name used in the model file is reused in the mapping interface and cannot be changed. For continuous outputs, the model file defines the name of the prediction (e.g ‘Cc’), while the model observation (e.g ‘y1’, with error) definition is done in the “Statistical model and tasks” tab of the interface. If there is only one model output, the default observation name is the header of the data set column tagged as OBSERVATION. In case of several model outputs, the observation names are y1, y2, y3, etc. The observation names for continuous outputs can be changed by clicking on the node and “edit observation name”: 3.1.Libraries of models Objectives: learn how to use the Monolix libraries of models and use your own models. Projects: theophylline_project, PDsim_project, warfarinPK_project, TMDD_project, LungCancer_project, hcv_project For the definition of the structural model, the user can either select a model from the available model libraries or write a model itself using the Mlxtran language. Discover how to easily choose a model from the libraries via step-by-step selection of its characteristics. An enriched PK, a PD, a joint PKPD, a target-mediated drug disposition (TMDD), and a time to-event (TTE) library are now available. Model libraries Five different model libraries are available in Monolix, which we will detail below. To use a model from the libraries, in the Structural model tab, click on Load from library and select the desired library. A list of model files appear, as well as a menu to filter them. Use the filters and indications in the file name (parameters names) to select the model file you need. The model files are simply text files that contain pre-written models in Mlxtran language. Once selected, the model appears in the Monolix GUI. Below we show the content of the (ka,V,Cl) model: The PK library • theophylline_project (data = ‘theophylline_data.txt’ , model=’lib:oral1_1cpt_kaVCl.txt’) The PK library includes model with different administration routes (bolus, infusion, first-order absorption, zero-order absorption, with or without Tlag), different number of compartments (1, 2 or 3 compartments), and different types of eliminations (linear or Michaelis-Menten). More details, including the full equations of each model, can be found on the PK model library wepage. The PK library models can be used with single or multiple doses data, and with two different types of administration in the same data set (oral and bolus for instance). The PD and PKPD libraries • PDsim_project (data = ‘PDsim_data.txt’ , model=’lib:immed_Emax_const.txt’) The PD model library contains direct response models such as Emax and Imax with various baseline models, and turnover response models. These models are PD models only and the drug concentration over time must be defined in the data set and passed as a regressor. • warfarinPKPD_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_IndirectModelInhibitionKin_TlagkaVClR0koutImaxIC50.txt’) The PKPD library contains joint PKPD models, which correspond to the combination of the models from the PK and from the PD library. These models contain two outputs, and thus require the definition of two observation identifiers (i.e two different values in the OBSERVATION ID column). Complete description of the PD and PK/PD model libraries. The PK double absorption library The library of double absorption models contains all the combinations for two mixed absorptions, with different types and delays. The absorptions can be specified as simultaneous or sequential, and with a pre-defined or independent order. This library simplifies the selection and testing of different types of absorptions and delays. More details about the library and examples can be found on the dedicated PK double absorption documentation page. The TMDD library • TMDD_project (data = ‘TMDD_dataset.csv’ , model=’lib:bolus_2cpt_MM_VVmKmClQV2_outputL.txt’) The TMDD library contains various models for molecules displaying target-mediated drug disposition (TMDD). It includes models with different administration routes (bolus, infusion, first-order absorption, zero-order absorption, bolus + first-order absorption, with or without Tlag), different number of compartments (1, or 2 compartments), different types of TMDD models (full model, MM approximation, QE/QSS approximation, etc), and different types of output (free ligand or total free+bound ligand). More details about the library and guidelines to choose model can be found on the dedicated TMDD documentation page. The TTE library • LungCancer_project (data = ‘lung_cancer_survival.csv’ , model=’lib:gompertz_model_singleEvent.txt’) The TTE library contains typical parametric models for time-to-event (TTE) data. TTE models are defined via the hazard function, in the library we provide exponential, Weibull, log-logistic, uniform, Gompertz, gamma and generalized gamma models, for data with single (e.g death) and multiple events (e.g seizure) per individual. More details and modeling guidelines can be found on the TTE dedicated webpage, along with case studies. The Count library The Count library contains the typical parametric distributions to describe count data. More details can be found on the Count dedicated webpage, with a short introduction on count data, the different ways to model this kind of data, and typical models. The tumor growth inhibition (TGI) library A wide range of models for tumour growth (TG) and tumour growth inhibition (TGI) is available in the literature and correspond to different hypotheses on the tumor or treatment dynamics. In MonolixSuite2020, we provide a modular TG/TGI model library that combines sets of frequently used basic models and possible additional features. This library permits to easily test and combine different hypotheses for the tumor growth kinetics and effect of a treatment, allowing to fit a large variety of tumor size data. Complete description of the TGI model library. Step-by-step example with the PK library • theophylline_project (data = ‘theophylline_data.txt’ , model=’lib:oral1_1cpt_kaVCl.txt’) We would like to set up a one compartment PK model with first order absorption and linear elimination for the theophylline data set. We start by creating a new Monolix project. Next, the Data tab, click browse, and select the theophylline data set (which can be downloaded from the data set documentation webpage). In this example, all columns are already automatically tagged, based on the header names. We click ACCEPT and NEXT and arrive on the Structural model tab, click on LOAD FROM LIBRARY to choose a model from the Monolix libraries. The menu at the top allow to filter the list of models: after selecting an oral/extravascular administration, no delay, first-order absorption, one compartment and a linear elimination, two models remain in the list (ka,V,Cl) and (ka,V,k). Click on the oral1_1cpt_kaVCl.txt file to select it. After this step, the GUI moves to the Initial Estimates tab, but it is possible to go back to the Structural model tab to see the content of the file: input = {ka, V, Cl} Cc = pkmodel(ka, V, Cl) output = Cc Back to the Initial Estimates tab, the initial values of the population parameters can be adjusted by comparing the model prediction using the chosen population parameters and the individual data. Click on SET AS INITIAL VALUES when you are done. In the next tab, the Statistical model & Tasks tab, we propose by default: At this stage, the monolix project should be saved. This creates a human readable text file with extension .mlxtran, which contains all the information defined via the GUI. In particular, the name of the model appears in the section [LONGITUDINAL] of the saved project file: input = {ka_pop, omega_ka, V_pop, omega_V, Cl_pop, omega_Cl} ka = {distribution=lognormal, typical=ka_pop, sd=omega_ka} V = {distribution=lognormal, typical=V_pop, sd=omega_V} Cl = {distribution=lognormal, typical=Cl_pop, sd=omega_Cl} input = {a, b} file = 'lib:oral1_1cpt_kaVCl.txt' CONC = {distribution=normal, prediction=Cc, errorModel=combined1(a,b)} 3.2.Writing your own model If the models present in the libraries do not suit your needs, you can write a model yourself. You can either start completely from scratch or adapt a model existing in the libraries. In both cases, the syntax of the Mlxtran language is detailed on the mlxtran webpage. You can also copy-paste models from the mlxtran model example page. Videos on this page use the application mlxEditor included in previous versions of MonolixSuite. From the 2021R1 version on, the editor is integrated within the interface of Monolix, as shown on the screenshots on this page, and it can also be used as a separate application. Writing a model from scratch • 8.case_studies/hcv_project (data = ‘hcv_data.txt’ , model=’hcvNeumann98_model.txt’) In the Structural model tab, you can click on New model to open the editor integrated within Monolix, and start writing your own model. The new model contains a convenient template defining the main blocks, input parameters and output variables. When you are done, click on Create model button of Monolix to save your new model file. After saving, the model is automatically loaded in the project. You can even create your own library of models. An example of a basic library which includes several viral kinetics model is available in the demos 8.case_studies/model. Note: A button “Check syntax” is available to check that there is no syntax error in the model. In case of an error, informative messages are displayed to help correct the error. The syntax check is also automatically applied before saving the model so that only a model with a valid syntax can be saved. Understanding the error messages The error messages generated when syntax errors are present in the model are very informative and quickly help to get the model right. The most common error messages are explained in detail in this Modifying a model from the libraries Browse existing models from the libraries using the Load from library button, then click the “file” icon next to a model name. This opens a pop-up window where the content of the model file is displayed. Click on Open in editor to open the model file in the MlxEditor. There you can adapt the model, for instance to add a PD model. Be careful to save the new model under a new name, to avoid overwriting the library files. The video below shows an example of how a scale factor can be added: See the dedicated webpage for more details on model libraries. 3.3.Models for continuous outcomes 3.3.1.1.Single route of administration Objectives: learn how to define and use a PK model for single route of administration. Projects: bolusLinear_project, bolusMM_project, bolusMixed_project, infusion_project, oral1_project, oral0_project, sequentialOral0Oral1_project, simultaneousOral0Oral1_project, oralAlpha_project, Once a drug is administered, we usually describe subsequent processes within the organism by the pharmacokinetics (PK) process known as ADME: absorption, distribution, metabolism, excretion. A PK model is a dynamical system mathematically represented by a system of ordinary differential equations (ODEs) which describes transfers between compartments and elimination from the central Mlxtran is remarkably efficient for implementing simple and complex PK models: • The function pkmodel can be used for standard PK models. The model is defined according to the provided set of named arguments. The pkmodel function enables different parametrizations, different models of absorption, distribution and elimination, defined here and summarized in the following.. • PK macros define the different components of a compartmental model. Combining such PK components provide a high degree of flexibility for complex PK models. They can also extend a custom ODE • A system of ordinary differential equations (ODEs) can be implemented very easily. It is also important to highlight the fact that the data file used by Monolix for PK modelling only contains information about dosing, i.e. how and when the drug is administrated. There is no need to integrate in the data file any information related to the PK model. This is an important remark since it means that any (complex) PK model can be used with the same data file. In particular, we make a clear distinction between administration (related to the data) and absorption (related to the model). The pkmodel function The PK model is defined by the names of the input parameters of the pkmodel function. These names are reserved keywords. • p: Fraction of dose which is absorbed • ka: absorption constant rate (first order absorption) • or, Tk0: absorption duration (zero order absorption) • Tlag: lag time before absorption • or, Mtt, Ktr: mean transit time & transit rate constant • V: Volume of distribution of the central compartment • k12, k21: Transfer rate constants between compartments 1 (central) & 2 (peripheral) • or V2, Q2: Volume of compartment 2 (peripheral) & inter compartment clearance, between compartments 1 and 2, • k13, k31: Transfer rate constants between compartments 1 (central) & 3 (peripheral) • or V3, Q3: Volume of compartment 3 (peripheral) & inter compartment clearance, between compartments 1 and 3. • k: Elimination rate constant • or Cl: Clearance • Vm, Km: Michaelis Menten elimination parameters Effect compartment • ke0: Effect compartment transfer rate constant Intravenous bolus injection Linear elimination A single iv bolus is administered at time 0 to each patient. The data file bolus1_data.txt contains 4 columns: id, time, amt (the amount of drug in mg) and y (the measured concentration). The names of these columns are recognized as keywords by Monolix: It is important to note that, in this data file, a row contains either some information about the dose (in which case y = ".") or a measurement (in which case amt = "."). We could equivalently use the data file bolus2_data.txt which contains 2 additional columns: EVID (in the green frame) and IGNORED OBSERVATION (in the blue frame): EVENT ID column allows the identification of an event. It is an integer between 0 and 4. It helps to define the type of line. EVID=1 means that this record describes a dose while EVID=0 means that this record contains an observed value. On the other hand, the IGNORED OBSERVATION column enables to tag lines for which the information in the OBSERVATION column-type is missing. MDV=1 means that the observed value of this record should be ignored while MDV=0 means that this record contains an observed value. The two data files bolus1_data.txt and bolus2_data.txt contain exactly the same information and provide exactly the same results. A one compartment model with linear elimination is used with this project: $$\begin{array}{ccl} \frac{dA_c}{dt} &=& – k A_c(t) \\ A_c(t) &= &0 ~~\text{for}~~ t<0 \end{array} $$ Here, \(A_c(t)\) and \(C_c(t)=A_c(t)/V\) are, respectively, the amount and the concentration of drug in the central compartment at time t. When a dose D arrives in the central compartment at time \(\ tau\), an iv bolus administration assumes that $$A_c(\tau^+) = A_c(\tau^-) + D$$ where \(A_c(\tau^-)\) (resp. \(A_c(\tau^+)\)) is the amount of drug in the central compartment just before (resp. after) \(\tau\) Parameters of this model are V and k. We therefore use the model bolus_1cpt_Vk from the Monolix PK library: input = {V, k} Cc = pkmodel(V, k) output = Cc We could equivalently use the model bolusLinearMacro.txt (click on the button Model and select the new PK model in the library 6.PK_models/model) input = {V, k} compartment(cmt=1, amount=Ac) elimination(cmt=1, k) Cc = Ac/V output = Cc These two implementations generate exactly the same C++ code and then provide exactly the same results. Here, the ODE system is linear and Monolix uses its analytical solution. Of course, it is also possible (but not recommended with this model) to use the ODE based PK model bolusLinearODE.txt : input = {V, k} depot(target = Ac) ddt_Ac = - k*Ac Cc = Ac/V output = Cc Results obtained with this model are slightly different from the ones obtained with the previous implementations since a numeric scheme is used here for solving the ODE. Moreover, the computation time is longer (between 3 and 4 time longer in that case) when using the ODE compared to the analytical solution. Individual fits obtained with this model look nice but the VPC show some misspecification in the elimination process: Michaelis Menten elimination A non linear elimination is used with this project: $$\frac{dA_c}{dt} = – \frac{ V_m \, A_c(t)}{V\, K_m + A_c(t) }$$ This model is available in the Monolix PK library as bolus_1cpt_VVmKm: input = {V, Vm, Km} Cc = pkmodel(V, Vm, Km) output = Cc Instead of this model, we could equivalently use PK macros with bolusNonLinearMacro.txt from the library 6.PK_models/model: input = {V, Vm, Km} compartment(cmt=1, amount=Ac, volume=V) elimination(cmt=1, Vm, Km) Cc = Ac/V output = Cc or an ODE with bolusNonLinearODE: input = {V, Vm, Km} depot(target = Ac) ddt_Ac = -Vm*Ac/(V*Km+Ac) output = Cc Results obtained with these three implementations are identical since no analytical solution is available for this non linear ODE. We can then check that this PK model seems to describe much better the elimination process of the data: Mixed elimination THe Monolix PK library contains “standard” PK models. More complex models should be implemented by the user in a model file. For instance, we assume in this project that the elimination process is a combination of linear and nonlinear elimination processes: $$ \frac{dA_c}{dt} = -\frac{ V_m A_c(t)}{V K_m + A_c(t) } – k A_c(t) $$ This model is not available in the Monolix PK library. It is implemented in bolusMixed.txt: input = {V, k, Vm, Km} depot(target = Ac) ddt_Ac = -Vm*Ac/(V*Km+Ac) - k*Ac output = Cc This model, with a combined error model, seems to describe very well the data: Intravenous infusion Intravenous infusion assumes that the drug is administrated intravenously with a constant rate (infusion rate), during a given time (infusion time). Since the amount is the product of infusion rate and infusion time, an additional column INFUSION RATE or INFUSION DURATION is required in the data file: Monolix can use both indifferently. Data file infusion_rate_data.txt has an additional column It can be replaced by infusion_tinf_data.txt which contains exactly the same information: We use with this project a 2 compartment model with non linear elimination and parameters $V_1$, $Q$, $V_2$, $V_m$, $K_m$: $$\begin{aligned} k_{12} &= Q/V_1 \\ k_{21} &= Q/V_2 \\\frac{dA_c}{dt} & = k_{21} \, Ap(t) – k_{12} \, Ac(t)- \frac{ V_m \, A_c(t)}{V_1\, K_m + A_c(t) } \\ \frac{dA_p}{dt} & = – k_{21} \, Ap(t) + k_ {12} \, Ac(t) \\ Cc(t) &= \frac{Ac(t)}{V_1} \end{aligned}$$ This model is available in the Monolix PK library as infusion_2cpt_V1QV2VmKm: input = {V1, Q, V2, Vm, Km} V = V1 k12 = Q/V1 k21 = Q/V2 Cc = pkmodel(V, k12, k21, Vm, Km) output = Cc Oral administration first-order absorption This project uses the data file oral_data.txt. For each patient, information about dosing is the time of administration and the amount. A one compartment model with first order absorption and linear elimination is used with this project. Parameters of the model are ka, V and Cl. we will then use model oral1_kaVCl.txt from the Monolix PK library input = {ka, V, Cl} Cc = pkmodel(ka, V, Cl) output = Cc Both the individual fits and the VPCs show that this model doesn’t describe the absorption process properly. Mlxtran exists: – using PK macros: oralMacro.txt: input = {ka, V, Cl} compartment(cmt=1, amount=Ac) oral(cmt=1, ka) elimination(cmt=1, k=Cl/V) output = Cc – using a system of two ODEs as in oralODEb.txt: input = {ka, V, Cl} k = Cl/V ddt_Ad = -ka*Ad ddt_Ac = ka*Ad - k*Ac Cc = Ac/V output = Cc – combining PK macros and ODE as in oralMacroODE.txt (macros are used for the absorption and ODE for the elimination): input = {ka, V, Cl} compartment(cmt=1, amount=Ac) oral(cmt=1, ka) k = Cl/V ddt_Ac = - k*Ac Cc = Ac/V output = Cc – or equivalently, as in oralODEa.txt: input = {ka, V, Cl} depot(target=Ac, ka) k = Cl/V ddt_Ac = - k*Ac Cc = Ac/V< output = Cc Remark: Models using the pkmodel function or PK macros only use an analytical solution of the ODE system. zero-order absorption A one compartment model with zero order absorption and linear elimination is used to fit the same PK data with this project. Parameters of the model are Tk0, V and Cl. We will then use model oral0_1cpt_Tk0Vk.txt from the Monolix PK library input = {Tk0, V, Cl} Cc = pkmodel(Tk0, V, Cl) output = Cc Remark 1: implementing a zero-order absorption process using ODEs is not easy… on the other hand, it becomes extremely easy to implement using either the pkmodel function or the PK macro oral(Tk0). Remark 2: The duration of a zero-order absorption has nothing to do with an infusion time: it is a parameter of the PK model (exactly as the absorption rate constant ka for instance), it is not part of the data. sequential zero-order first-order absorption • sequentialOral0Oral1_project More complex PK models can be implemented using Mlxtran. A sequential zero-order first-order absorption process assumes that a fraction Fr of the dose is first absorbed during a time Tk0 with a zero-order process, then, the remaining fraction is absorbed with a first-order process. This model is implemented in sequentialOral0Oral1.txt using PK macros: input = {Fr, Tk0, ka, V, Cl} absorption(Tk0, p=Fr) absorption(ka, Tlag=Tk0, p=1-Fr) output = Cc Both the individual fits and the VPCs show that this PK model describes very well the whole ADME process for the same PK data: simultaneous zero-order first-order absorption • simultaneousOral0Oral1_project A simultaneous zero-order first-order absorption process assumes that a fraction Fr of the dose is absorbed with a zero-order process while the remaining fraction is absorbed simultaneously with a first-order process. This model is implemented in simultaneousOral0Oral1.txt using PK macros: input = {Fr, Tk0, ka, V, Cl} absorption(Tk0, p=Fr) absorption(ka, p=1-Fr) output = Cc alpha-order absorption An $\alpha$-order absorption process assumes that the rate of absorption is proportional to some power of the amount of drug in the depot compartment: $\frac{dA_d}{dt} = - r \left(A_d(t)\right)^\alpha$ This model is implemented in oralAlpha.txt using ODEs: input = {r, alpha, V, Cl} depot(target = Ad) dAd = Ad^alpha ddt_Ad = -r*dAd ddt_Ac = r*Ad - (Cl/V)*Ac Cc = Ac/V output = Cc transit compartment model A PK model with a transit compartment of transit rate Ktr and mean transit time Mtt can be implemented using the PK macro oral(ka, Mtt, Ktr), or using the pkmodel function, as in oralTransitComp.txt: input = {Mtt, Ktr, ka, V, Cl} Cc = pkmodel(Mtt, Ktr, ka, V, Cl) output = Cc Using different parametrizations The PK macros and the function pkmodel use some preferred parametrizations and some reserved names as input arguments: Tlag, ka, Tk0, V, Cl, k12, k21. It is however possible to use another parametrization and/or other parameter names. As an example, consider a 2-compartment model for oral administration with a lag, a first order absorption and a linear elimination. We can use the pkmodel function with, for instance, parameters ka, V, k, k12 and k21: input = {ka, V, k, k12, k21} Cc = pkmodel(ka, V, k, k12, k21) output = Cc Imagine now that we want i) to use the clearance $Cl$ instead of the elimination rate constant $k$, ii) to use capital letters for the parameter names. We can still use the pkmodel function as input = {KA, V, CL, K12, K21} Cc = pkmodel(ka=KA, V, k=CL/V, k12=K12, k21=K21) output = Cc 3.3.1.2.Multiple routes of administration Objectives: learn how to define and use a PK model for multiple routes of administration.. Projects: ivOral1_project, ivOral2_project Some drugs can display complex absorption kinetics. Common examples are mixed first-order and zero-order absorptions, either sequentially or simultaneously, and fast and slow parallel first-order absorptions. A few examples of those kinds of absorption kinetics are proposed below. Various absorption models are proposed here as examples. Combining iv and oral administrations – Example 1 • ivOral1_project (data = ‘ivOral1_data.txt’ , model = ‘ivOral1Macro_model.txt’) In this example, we combine oral and iv administrations of the same drug. The data file ivOral1_data.txt contains an additional column ADMINISTRATION ID which indicates the route of administration (1 =iv, 2=oral) We assume here a one compartment model with first-order absorption process from the depot compartment (oral administration) and a linear elimination process from the central compartment. We further assume that only a fraction F (bioavailability) of the drug orally administered is absorbed. This model is implemented in ivOral1Macro_model.txt using PK macros: input = {F, ka, V, k} compartment(cmt=1, amount=Ac) iv(adm=1, cmt=1) oral(adm=2, cmt=1, ka, p=F) elimination(cmt=1, k) Cc = Ac/V output = Cc A logit-normal distribution is used for bioavability F that takes it values in (0,1). The model properly fits the data as can be seen on the individual fits of the 6 first individuals Remark: the same PK model could be implemented using ODEs instead of PK macros. Let \(A_d\) and \(A_c\) be, respectively, the amounts in the depot compartment (gut) and the central compartment (bloodtsream). Kinetics of \(A_d\) and \(A_c\) are described by the following system of ODEs $$\dot{A}_d(t) = – k_a A_d(t)~~\text{and}~~ \dot{A}_c(t) = k_a A_d(t) – k A_c(t)$$ The target compartment is the depot compartment (\(A_d\)) for oral administrations and the central compartment (\(A_c\)) for iv administrations. This model is implemented in ivOral1ODE_model.txt using a system of ODEs: input = {F, ka, V, k} depot(type=1, target=Ad, p=F) depot(type=2, target=Ac) ddt_Ad = -ka*Ad ddt_Ac = ka*Ad - k*Ac Cc = Ac/V output = Cc Solving this ODEs system is less efficient than using the PK macros which uses the analytical solution of the linear system. Combining iv and oral administrations – Example 2 • ivOral2_project (data = ‘ivOral2_data.txt’ , model = ‘ivOral2Macro_model.txt’) In this example (based on simulated PK data), we combine intraveinous injection with 3 different types of oral administrations of the same drug. The datafile ivOral2_data.txt contains column ADM which indicates the route of administration (1,2,3=oral, 4=iv). We assume that one type of oral dose (adm=1) is absorbed into a latent compartment following a zero-order absorption process. The 2 oral doses (adm=2,3) are absorbed into the central compartment following first-order absorption processes with different rates. Bioavailabilities are supposed to be different for the 3 oral doses. There is linear transfer from the latent to the central compartment. A peripheral compartment is linked to the central compartment. The drug is eliminated by a linear process from the central This model is implemented in ivOral2Macro_model.txt using PK macros: input = {F1, F2, F3, Tk01, ka2, ka3, kl, k23, k32, V, Cl} compartment(cmt=1, amount=Al) compartment(cmt=2, amount=Ac) oral(type=1, cmt=1, Tk0=Tk01, p=F1) oral(type=2, cmt=2, ka=ka2, p=F2) oral(type=3, cmt=2, ka=ka3, p=F3) iv(type=4, cmt=2) transfer(from=1, to=2, kt=kl) elimination(cmt=2, k=Cl/V) Cc = Ac/V output = Cc Here, logit-normal distributions are used for bioavabilities \(F_1\), \(F_2\) and \(F_3\). The model fits the data properly : Remark: the number and type of doses vary from one patient to another in this example. 3.3.1.3.Multiple doses to steady-state Objectives: learn how to define and use a PK model with multiple doses or assuming steady-state. Projects: multidose_project, addl_project, ss1_project, ss2_project, ss3_project Multiple doses • multidose_project (data = ‘multidose_data.txt’ , model = ‘lib:bolus_1cpt_Vk.txt’) In this project, each patient receives several iv bolus injections. Each dose is represented by a row in the data file multidose_data.txt: The PK model and the statistical model used in this project properly fit the observed data of each individual. Even if there is no observation between 12h and 72h, predicted concentrations computed on this time interval exhibit the multiple doses received by each patient: VPCs, which is a diagnosis tool, are based on the design of the observations and therefore “ignore” what may happen between 12h and 72h: On the other hand, the prediction distribution, which is not a diagnosis tool, computes the distribution of the predicted concentration at any time point: Additional doses (ADDL) • addl_project (data = ‘addl_data.txt’ , model = ‘lib:bolus_1cpt_Vk.txt’) We can note in the previous project, that, for each patient, the interval time between two successive doses is the same (12 hours for each patient) and the amount of drug which is administrated is always the same as well (40mg for each patient). We can take advantage of this design in order to simplify the data file by defining, for each patient, a unique amount (AMT), the number of additional doses which are administrated after the first one (ADDITIONAL DOSES) and the time interval between successive doses (INTERDOSE INTERVAL): The keywords ADDL and II are automatically recognized by Monolix. • Results obtained with this project, i.e. with this data file, are identical to the ones obtained with the previous project. • It is possible to combine single doses (using ADDL=0) and repeated doses in a same data file. • ss1_project (data = ‘ss1_data.txt’ , model = ‘lib:oral0_1cpt_Tk0VCl.txt’) The dose, orally administrated at time 0 to each patient, is assumed to be a “steady-state dose” which means that a “large” number of doses before time 0 have been administrated, with a constant amount and a constant interval dosing, such that steady-state, i.e. equilibrium, is reached at time 0. The data file ss1_data contains a column STEADY STATE which indicates if the dose is a steady-state dose or not and a column INTERDOSE INTERVAL for the inter-dose interval: Click on Check the initial fixed effects to display the predicted concentration between the last dose administrated at time 0. One can see that the initial concentration is not 0 but the result of the steady state calculation. Monolix adds 5 doses before the last dose to reach steady-state. Individual fits display the predicted concentrations computed with these additional doses: If the dynamics is slow, adding 5 doses before the last dose might not be sufficient. You can adapt the number of doses in the frame data and thus define it for all individuals as on the following leading to the following check initial fixed effects • ss2_project (data = ‘ss2_data.txt’ , model = ‘lib:oral0_1cpt_Tk0VCl.txt’) Steady-state and non steady-sates doses are combined in this project: Individual fits display the predicted concentrations computed with this combination of doses: 3.3.2.Mixture of structural models Objectives: learn how to implement between subject mixture models (BSMM) and within subject mixture models (WSMM). Projects: bsmm1_project, bsmm2_project, wsmm_project There are two approaches to define a mixture of models: • defining a mixture of structural models (via a regressor or via the bsmm function). This approach is detailed here. • introducing a categorical covariate (known or latent). –> click here to go to the page dedicated to this approach. Several types of mixture models exist, they are useful in the context of mixed effects models. It may be necessary in some situations to introduce diversity into the structural models themselves: • Between-subject model mixtures (BSMM) assume that there exists subpopulations of individuals. Different structural models describe the response of each subpopulation, and each subject belongs to one of these subpopulations. One can imagine for example different structural models for responders, nonresponders and partial responders to a given treatment. The easiest way to model a finite mixture model is to introduce a label sequence $(z_i ; 1 \leq i \leq N)$ that takes its values in ${1, 2, \ldots, M}$ such that $z_i = m$ if subject i belongs to subpopulation m. $\mathbb{P}(z_i = m)$ is the probability for subject i to belong to subpopulation m. A BSMM assumes that the structural model is a mixture of M different structural models: $$f\left(t_{ij}; \psi_i, z_i \right) = \sum_{m=1}^M 1_{z_i = m} f_m\left( t_{ij}; \psi_i \right) $$ In other word, each subpopulation has its own structural model: $f_m$ is the structural model for subpopulation m. • Within-subject model mixtures (WSMM) assume that there exist subpopulations (of cells, viruses, etc.) within each patient. In this case, different structural models can be used to describe the response of different subpopulations, but the proportion of each subpopulation depends on the patient. Then, it makes sense to consider that the mixture of models happens within each individual. Such within-subject model mixtures require additional vectors of individual parameters $\pi_i=(\pi_{1,i}, \ ldots, \pi_{M,i})$ representing the proportions of the M models within each individual i: $$f\left( t_{ij}; \psi_i, z_i \right) = \sum_{m=1}^M \pi_{m,i} f_m\left( t_{ij}; \psi_i \right)$$ The proportions $(\pi_{m,i})$ are now individual parameters in the model and the problem is transformed into a standard mixed effects model. These proportions are assumed to be positive and to sum to 1 for each patient. Between subject mixture models Supervised learning • bsmm1_project (data = ‘pdmixt1_data.txt’, model = ‘bsmm1_model.txt’) We consider a very simple example here with two subpopulations of individuals who receive a given treatment. The outcome of interest is the measured effect of the treatment (a viral load for instance). The two populations are non responders and responders. We assume here that the status of the patient is known. Then, the data file contains an additional column GROUP. This column is duplicated because Monolix uses it • i) as a regression variable (REGRESSOR): it is used in the model to distinguish responders and non responders, • ii) as a categorical covariate (CATEGORICAL COVARIATE): it is used to stratify the diagnosis plots. We can then display the data and use the categorical covariate GROUP_CAT to split the plot into responders and non responders: The model is implemented in the model file bsmm1_model.txt (note that the names of the regression variable in the data file and in the model script do not need to match): input = {A1, A2, k, g} g = {use=regressor} if g==1 f = A1 f = A2*exp(-k*max(t,0)) output = f The plot of individual fits exhibit the two different structural models: VPCs should then be splitted according to the GROUP_CAT as well as the prediction distribution for non responders and responders: Unsupervised learning • bsmm2_project (data = ‘pdmixt2_data.txt’, model = ‘bsmm2_model.txt’) The status of the patient is unknown in this project (which means that the column GROUP is not available anymore). Let p be the proportion of non responders in the population. Then, the structural model for a given subject is f1 with probability p and f2 with probability 1-p. The structural model is therefore a BSMM: input = {A1, A2, k, p} f1 = A1 f2 = A2*exp(-k*max(t,0)) f = bsmm(f1, p, f2, 1-p) output = f • The bsmm function must be used on the last line of the structural model, just before “OUTPUT:”. It is not possible to reuse the variable returned by the bsmm function (here f) in another • p is a population parameter of the model to estimate. There is no inter-patient variability on p: all the subjects have the same probability of being a non responder in this example. We use a logit-normal distribution for p in order to constrain it to be between 0 and 1, but without variability: is estimated with the other population parameters: Then, the group to which a patient belongs is also estimated as the group of highest conditional probability: $$\begin{aligned}\hat{z}_i &= 1~~~~\textrm{if}~~~~ \mathbb{P}(z_i=1 | (y_{ij}), \hat{\psi}_i, \hat{\theta})> \mathbb{P}(z_i=2 | (y_{ij}),\hat{\psi}_i, \hat{\theta}),\\ &=0~~~~\textrm{otherwise}\end The estimated groups can be used as a stratifying variable to split some plots such as VPCs Bsmm function with ODEs The bsmm function can also be used with models defined via ODE systems. The syntax in that case follows this example, with model M defined as a mixture of M1 and M2: M1_0 = ... ; initial condition for M1 ddt_M1 = ... ; ODE for M1 M2_0 = ... ; initial condition for M2 ddt_M2 = ... ; ODE for M2 M = bsmm(M1,p1,M2,1-p1) Unsupervised learning with latent covariates If the models composing the mixture have a similar structure, it is sometimes possible and easier to implement the mixture with a latent categorical covariate instead of the bsmm function. It also has the advantage of allowing more than two mixture groups, while the bsmm function can only define two mixture groups. Within subject mixture models • wsmm_project (data = ‘pdmixt2_data.txt’, model = ‘wsmm_model.txt’) It may be too simplistic to assume that each individual is represented by only one well-defined model from the mixture. We consider here that the mixture of models happens within each individual and use a WSMM: f = p*f1 + (1-p)*f2 input = {A1, A2, k, p} f1 = A1 f2 = A2*exp(-k*max(t,0)) f = wsmm(f1, p, f2, 1-p) output = f Remark: Here, writing f = wsmm(f1, p, f2, 1-p) is equivalent to writing f = p*f1 + (1-p)*f2 Important: Here, p is an individual parameter: the subjects have different proportions of non responder cells. We use a probit-normal distribution for p in order to constrain it to be between 0 and 1, with variability: There is no latent covariate when using WSMM: mixtures are continuous mixtures. We therefore cannot split anymore the VPC and the prediction distribution anymore. 3.3.3.Joint models for continuous outcomes Objectives: learn how to implement a joint model for continuous PKPD data. Projects: warfarinPK_project, warfarin_PKPDimmediate_project, warfarin_PKPDeffect_project, warfarin_PKPDturnover_project, warfarin_PKPDseq1_project, warfarin_PKPDseq2_project, warfarinPD_project A “joint model” describes two or more types of observation that typically depend on each other. A PKPD model is a “joint model” because the PD depends on the PK. Here we demonstrate how several observations can be modeled simultaneously. We also discuss the special case of sequential PK and PD modelling, using either the population PK parameters or the individual PK parameters as an input for the PD model. Fitting first a PK model to the PK data • warfarinPK_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) The column DV of the data file contains both the PK and the PD measurements: in Monolix this column is tagged as an OBSERVATION column. The column DVID is a flag defining the type of observation: DVID=1 for PK data and DVID=2 for PD data: the keyword OBSERVATION ID is then used for this column. oral1_1cpt_TlagkaVCl from the Monolix PK library input = {Tlag, ka, V, Cl} Cc = pkmodel(Tlag, ka, V, Cl) output = {Cc} Only the predicted concentration Cc is defined as an output of this model. Then, this prediction will be automatically associated to the outcome of type 1 (DVID=1) while the other observations (DVID= 2) will be ignored. Remark: any other ordered values could be used for OBSERVATION ID column: the smallest one will always be associated to the first prediction defined in the model. Simultaneous PKPD modeling • warfarin_PKPDimmediate_project (data = ‘warfarin_data.txt’, model = ‘immediateResponse_model.txt’) It is also possible for the user to write his own PKPD model. The same PK model used previously and an immediate response model are defined in the model file immediateResponse_model.txt input = {Tlag, ka, V, Cl, Imax, IC50, S0} Cc = pkmodel(Tlag, ka, V, Cl) E = S0 * (1 - Imax*Cc/(Cc+IC50)) output = {Cc, E} Two predictions are now defined in the model: Cc for the PK (DVID=1) and E for the PD (DVID=2). • warfarin_PKPDeffect_project (data = ‘warfarin_data.txt’, model = ‘effectCompartment_model.txt’) An effect compartment is defined in the model file effectCompartment_model.txt input = {Tlag, ka, V, Cl, ke0, Imax, IC50, S0} {Cc, Ce} = pkmodel(Tlag, ka, V, Cl, ke0) E = S0 * (1 - Imax*Ce/(Ce+IC50)) output = {Cc, E} Ce is the concentration in the effect compartment • warfarin_PKPDturnover_project (data = ‘warfarin_data.txt’, model = ‘turnover1_model.txt’) An indirect response (turnover) model is defined in the model file turnover1_model.txt input = {Tlag, ka, V, Cl, Imax, IC50, Rin, kout} Cc = pkmodel(Tlag, ka, V, Cl) E_0 = Rin/kout ddt_E = Rin*(1-Imax*Cc/(Cc+IC50)) - kout*E output = {Cc, E} Sequential PKPD modelling In the sequential approach, a PK model is developed and parameters are estimated in the first step. For a given PD model, different strategies are then possible for the second step, i.e., for estimating the population PD parameters: Using estimated population PK parameters • warfarin_PKPDseq1_project (data = ‘warfarin_data.txt’, model = ‘turnover1_model.txt’) Population PK parameters are set to their estimated values but individual PK parameters are not assumed to be known and sampled from their conditional distributions at each SAEM iteration. In Monolix, this simply means changing the status of the population PK parameter values so that they are no longer used as initial estimates for SAEM but considered fixed as on the figure below. To fix parameters, click on the green option button (framed in green) and choose the Fixed method as on the figure below The joint PKPD model defined in turnover1_model.txt is again used with this project. Using estimated individual PK parameters • warfarin_PKPDseq2_project (data = ‘warfarinSeq_data.txt’, model = ‘turnoverSeq_model.txt’) In htis case, individual PK parameters are set to their estimated values and used as constants in the PKPD model to fit the PD data. To do so, the individual PK parameters need to be added to the PD dataset (or PK/PD dataset) and tagged as regressors. The PK project (that was executed before and through which the estimated PK parameters were obtained) contains in the result folder the individual parameter values “..\warfarin_PKPDseq1_project\ IndividualParameters\estimatedIndividualParameters.txt”. These estimated PK parameters can be added to the PD dataset by using the data formatting tool integrated in Monolix version 2023R1. Depending which tasks have been run in the PK project, the individual parameters corresponding to the conditional mode (EBEs, with “_mode”), the conditional mean (mean of the samples from the conditional distributions, with “_mean”) and an approximation of the conditional mean obtained at the end of the SAEM step (with “_SAEM”) are available in the file. All columns are added to the PD dataset. At the data tagging step, the user can choose which individual parameters to use. The most common is to use the EBEs so to tag the columns with “_mode” as regressor and leave the others as IGNORE (purple frame below). By activating the toggle button (green frame), the ignored columns flagged with the keyword IGNORE can be hidden. We use the same turnover model for the PD data. Here, the PK parameters are defined as regression variables (i.e. regressors). input = {Imax, IC50, Rin, kout, Tlag, ka, V, Cl} Tlag = {use = regressor} ka = {use = regressor} V = {use = regressor} Cl = {use = regressor} Cc = pkmodel(Tlag,ka,V,Cl) E_0 = Rin/kout ddt_E= Rin*(1-Imax*Cc/(Cc+IC50)) - kout*E output = {E} As you can see, the names of the regressors do not match the parameter names. The regressors are matched by order (not by name) between the data set and the model input statement. If there are multiple observation types in the data as well as different response vectors in the output statement of the structural model, then these must be mapped accordingly in the mapping panel. Fitting a PKPD model to the PD data only • warfarinPD_project (data = ‘warfarinPD_data.txt’, model = ‘turnoverPD_model.txt’) In this example, only PD data is available. Nevertheless, a PKPD model – where only the effect is defined as a prediction – can be used for fitting this data and thus defined in the OUTPUT section. input = {Tlag, ka, V, Cl, Imax, IC50, Rin, kout} Cc = pkmodel(Tlag, ka, V, Cl) E_0 = Rin/kout ddt_E = Rin*(1-Imax*Cc/(Cc+IC50)) - kout*E output = {E} Case studies • 8.case_studies/PKVK_project (data = ‘PKVK_data.txt’, model = ‘PKVK_model.txt’) • 8.case_studies/hiv_project (data = ‘hiv_data.txt’, model = ‘hivLatent_model.txt’) 3.4.Models for non continuous outcomes 3.4.1.Time-to-event data models Objectives: learn how to implement a model for (repeated) time-to-event data with different censoring processes. Projects: tte1_project, tte2_project, tte3_project, tte4_project, rtteWeibull_project, rtteWeibullCount_project Here, observations are the “times at which events occur”. An event may be one-off (e.g., death, hardware failure) or repeated (e.g., epileptic seizures, mechanical incidents, strikes). Several functions play key roles in time-to-event analysis: the survival, hazard and cumulative hazard functions. We are still working under a population approach here so these functions, detailed below, are thus individual functions, i.e., each subject has its own. As we are using parametric models, this means that these functions depend on individual parameters \((\psi_i)\). • The survival function \(S(t, \psi_i)\) gives the probability that the event happens to individual i after time \(t>t_{\text{start}}\): $$S(t,\psi_i) = \mathbb{P}(T_i>t; \psi_i) $$ • The hazard function \(h(t,psi_i)\) is defined for individual i as the instantaneous rate of the event at time t, given that the event has not already occurred: $$h(t, \psi_i) = \lim_{dt \to 0} \frac{S(t, \psi_i) – S(t + dt, \psi_i)}{ S(t, \psi_i) dt} $$ This is equivalent to $$h(t, \psi_i) = -\frac{d}{dt} \left(\log{S(t, \psi_i)}\right)$$ • Another useful quantity is the cumulative hazard function \(H(a,b; \psi_i)\), defined for individual i as $$H(a,b; \psi_i) = \int_a^b h(t,\psi_i) dt $$ Note that \(S(t, \psi_i) = e^{-H(t_{\text{start}},t; \psi_i)}\). Then, the hazard function \(h(t,\psi_i)\) characterizes the problem, because knowing it is the same as knowing the survival function \ (S(t, \psi_i)\). The probability distribution of survival data is therefore completely defined by the hazard function. Time-to-event (TTE) models are thus defined in Monolix via the hazard function. Monolix also holds a TTE library that contains typical hazard functions for time-to-event data. More details and modeling guidelines can be found on the TTE dedicated webpage, along with case studies. Formatting of time-to-event data in the MonolixSuite In the data set, exactly observed events, interval censored events and right censoring are recorded for each individual. Contrary to other softwares for survival analysis, the MonolixSuite requires to specify the time at which the observation period starts. This allows to define the data set using absolute times, in addition to durations (if the start time is zero, the records represent durations between the start time and the event). The column TIME also contains the end of the observation period or the time intervals for interval-censoring. The column OBSERVATION contains an integer that indicates how to interpret the associated time. The different values for each type of event and observation are summarized in the table below: The figure below summarizes the different situations with examples: For instance for single events, exactly observed (with or without right censoring), one must indicate the start time of the observation period (Y=0), and the time of event (Y=1) or the time of the end of the observation period if no event has occurred (Y=0). In the following example: ID TIME Y the observation period lasts from starting time t=0 to the final time t=80. For individual 1, the event is observed at t=34, and for individual 2, no event is observed during the period. Thus it is noticed that at the final time (t=80), no event had occurred. Using absolute times instead of duration, we could equivalently write: ID TIME Y The duration between start time and event (or end of the observation period) are the same as before, but this time we record the day at which the patients enter the study and the days at which they have events or leave the study. Different patients may enter the study at different times. Examples for repeated events, and interval censored events are available on the data set documentation page. Single event To begin with, we will consider a one-off event. Depending on the application, the length of time to this event may be called the survival time (until death, for instance), failure time (until hardware fails), and so on. In general, we simply say “time-to-event”. The random variable representing the time-to-event for subject i is typically written Ti. Single event exactly observed or right censored • tte1_project (data = tte1_data.txt , model=lib:exponential_model_singleEvent.txt) The event time may be exactly observed at time \(t_i\), but if we assume that the trial ends at time \(t_{\text{stop}}\), the event may happen after the end. This is “right censoring”. Here, Y=0 at time t means that the event happened after t and Y=1 means that the event happened at time t. The rows with t=0 are included to show the trial start time \(t_{\text{start}}=0\): By clicking on the button Observed data, it is possible to display the Kaplan Meier plot (i.e. the empirical survival function) before fitting any model: A very basic model with constant hazard is used for this data: input = Te h = 1/Te Event = {type=event, maxEventNumber=1, hazard=h} output = {Event} Here, Te is the expected time to event. Specification of the maximum number of events is required both for the estimation procedure and for the diagnosis plots based on simulation, such as the predicted interval for the Kaplan Meier plot which is obtained by Monte Carlo simulation: Single event interval censored or right censored • tte2_project (data = tte2_data.txt , model=exponentialIntervalCensored_model.txt) We may know the event has happened in an interval \(I_i\) but not know the exact time \(t_i\). This is interval censoring. Here, Y=0 at time t means that the event happened after t and Y=1 means that the event happened before time t. Event for individual 1 happened between t=10 and t=15. No event was observed until the end of the experiment (t=100) for individual 5. We use the same basic model, but we now need to specify that the events are interval censored: input = Te h = 1/Te Event = {type=event, maxEventNumber=1, eventType=intervalCensored, hazard = h intervalLength=5 ; used for the plots (not mandatory) output = Event Repeated events Sometimes, an event can potentially happen again and again, e.g., epileptic seizures, heart attacks. For any given hazard function h, the survival function S for individual i now represents the survival since the previous event at \(t_{i,j-1}\), given here in terms of the cumulative hazard from \(t_{i,j-1}\) to \(t_{i,j}\): $$S(t_{i,j} | t_{i,j-1}; \psi_i) = \mathbb{P}(T_{i,j} > t_{i,j} | T_{i,j-1} = t_{i,j-1}; \psi_i) = \exp(-\int_{t_{i,j-1}}^{t_{i,j}}h(t,\psi_i) dt)$$ Repeated events exactly observed or right censored • tte3_project (data = tte3_data.txt , model=lib:exponential_model_repeatedEvents.txt) A sequence of \(n_i\) event times is precisely observed before \(t_{\text{stop}} = 200\): We can then display the Kaplan Meier plot for the first event and the mean number of events per individual: After fitting the model, prediction intervals for these two curves can also be displayed on the same graph as on the following Repeated events interval censored or right censored • tte4_project (data = tte4_data.txt , model=exponentialIntervalCensored_repeated_model.txt) We do not know the exact event times, but the number of events that occurred for each individual in each interval of time. User defined likelihood function for time-to-event data • weibullRTTE (data = weibull_data.txt , model=weibullRTTE_model.txt) A Weibull model is used in this example: input = {lambda, beta} h = (beta/lambda)*(t/lambda)^(beta-1) Event = {type=event, hazard=h, eventType=intervalCensored, output = Event • weibullCount (data = weibull_data.txt , model=weibullCount_model.txt) Instead of defining the data as events, it is possible to consider the data as count data: indeed, we count the number of events per interval. An additional column with the start of the interval is added in the data file and defined as a regression variable. We then use a model for count data (see rtteWeibullCount_model.txt). 3.4.2.Count data model Objectives: learn how to implement a model for count data. Projects: count1a_project, count1a_project, count1a_project, count2_project Longitudinal count data is a special type of longitudinal data that can take only nonnegative integer values {0, 1, 2, …} that come from counting something, e.g., the number of seizures, hemorrhages or lesions in each given time period . In this context, data from individual j is the sequence \(y_i=(y_{ij},1\leq j \leq n_i)\) where \(y_{ij}\) is the number of events observed in the jth time interval \(I_{ij}\). Count data models can also be used for modeling other types of data such as the number of trials required for completing a given task or the number of successes (or failures) during some exercise. Here, \(y_{ij}\) is either the number of trials or successes (or failures) for subject i at time \(t_{ij}\). For any of these data types we will then model \(y_i=(y_{ij},1 \leq j \leq n_i)\) as a sequence of random variables that take their values in {0, 1, 2, …}. If we assume that they are independent, then the model is completely defined by the probability mass functions \(\mathbb{P}(y_{ij} =k)\) for \(k \geq 0\) and \(1 \leq j \leq n_i\). Here, we will only consider parametric distributions for count data. Formatting of count data in the MonolixSuite Count data can only take non-negative integer values that come from counting something, e.g., the number of trials required for completing a given task. The task can for instance be repeated several times and the individual’s performance followed. In the following data set: ID TIME Y 10 trials are necessary the first day (t=0), 6 the second day (t=24), etc. Count data can also represent the number of events happening in regularly spaced intervals, e.g the number of seizures every week. If the time intervals are not regular, the data may be considered as repeated time-to-event interval censored, or the interval length can be given as regressor to be used to define the probability distribution in the model. One can see the epilepsy attacks data set for a more practical example. Modling count data in the MonolixSuite Link to the detailed description of the library of count models integrated within Monolix. Count data with constant distribution over time • count1a_project (data = ‘count1_data.txt’, model = ‘count_library/poisson_mlxt.txt’) A Poisson model is used for fitting the data: input = lambda Y = {type = count, log(P(Y=k)) = -lambda + k*log(lambda) - factln(k) } output = Y Residuals for noncontinuous data reduce to NPDEs. We can compare the empirical distribution of the NPDEs with the distribution of a standardized normal distribution either with the pdf (top) or the cdf (bottom): VPCs for count data compare the observed and predicted frequencies of the categorized data over time: • count1b_project (data = ‘count1_data.txt’, model = ‘count_library/poissonMixture_mlxt.txt’) A mixture of two Poisson distributions is used to fit the same data. For that, we define the probability of k occurrences as the weigthed sum of two Poisson distributions with two expected numbers of occurrences lambda1 and lambda2. The structural model file writes input = {lambda1, alpha, mp} lambda2 = (1+alpha)*lambda1 Y = { type = count, P(Y=k) = mp*exp(-lambda1 + k*log(lambda1) - factln(k)) + (1-mp)*exp(-lambda2 + k*log(lambda2) - factln(k)) output = Y Thus, the parameter alpha has to be strictly positive to ensure different expected number of occurrences in the two poisson distributions and mp has to be in [0, 1] to ensure the probability is correctly defined. Thus those parameters should be defined with lognormal and probitnormal distribution respectively as shown on the following figure. We see on the VPC below that the data set is well modeled using this mixture of Poisson distributions. In addition, we can compute the prediction distribution of the modalities as on the following figure Count data with time varying distribution • count2_project (data = ‘count2_data.txt’, model = ‘count_library/poissonTimeVarying_mlxt.txt’) The distribution of the data changes with time in this example: We then use a Poisson distribution with a time varying intensity: input = {a,b} lambda= a*exp(-b*t) y = {type=count, P(y=k)=exp(-lambda)*(lambda^k)/factorial(k)} output = y This model seems to fit the data very well: 3.4.3.Categorical data model Objectives: learn how to implement a model for categorical data, assuming either independence or a Markovian dependence between observations. Projects: categorical1_project, categorical2_project, markov0_project, markov1a_project, markov1b_project, markov1c_project, markov2_project, markov3a_project, markov3b_project Assume now that the observed data takes its values in a fixed and finite set of nominal categories \(\{c_1, c_2,\ldots , c_K\}\). Considering the observations \((y_{ij},\, 1 \leq j \leq n_i)\) for any individual \(i\) as a sequence of conditionally independent random variables, the model is completely defined by the probability mass functions \(\mathbb{P}(y_{ij}=c_k | \psi_i)\) for \(k=1,\ ldots, K\) and \(1 \leq j \leq n_i\). For a given (i,j), the sum of the K probabilities is 1, so in fact only K-1 of them need to be defined. In the most general way possible, any model can be considered so long as it defines a probability distribution, i.e., for each k, \(\mathbb{P}(y_{ij}=c_k | \psi_i) \in [0,1]\), and \(\sum_{k=1}^{K} \mathbb{P}(y_{ij}=c_k | \psi_i) =1\). Ordinal data further assumed that the categories are ordered, i.e., there exists an order \(\prec\) such that $$c_1 \prec c_2,\prec \ldots \prec c_K $$ We can think, for instance, of levels of pain (low \(\prec\) moderate \(\prec\) severe) or scores on a discrete scale, e.g., from 1 to 10. Instead of defining the probabilities of each category, it may be convenient to define the cumulative probabilities \(\mathbb{P}(y_{ij} \preceq c_k | \psi_i)\) for \(k=1,\ldots ,K-1\), or in the other direction: \(\mathbb{P}(y_{ij} \succeq c_k | \psi_i)\) for \(k=2,\ldots, K\). Any model is possible as long as it defines a probability distribution, i.e., it satisfies $$0 \leq \mathbb{P}(y_{ij} \preceq c_1 | \psi_i) \leq \mathbb{P}(y_{ij} \preceq c_2 | \psi_i)\leq \ldots \leq \mathbb{P}(y_{ij} \preceq c_K | \psi_i) =1 .$$ It is possible to introduce dependence between observations from the same individual by assuming that \((y_{ij},\,j=1,2,\ldots,n_i)\) forms a Markov chain. For instance, a Markov chain with memory 1 assumes that all that is required from the past to determine the distribution of \(y_{ij}\) is the value of the previous observation \(y_{i,j-1}\)., i.e., for all \(k=1,2,\ldots ,K\), $$\mathbb{P}(y_{ij} = c_k\,|\,y_{i,j-1}, y_{i,j-2}, y_{i,j-3},\ldots,\psi_i) = \mathbb{P}(y_{ij} = c_k | y_{i,j-1},\psi_i)$$ Formatting of categorical data in the MonolixSuite In case of categorical data, the observations at each time point can only take values in a fixed and finite set of nominal categories. In the data set, the output categories must be coded as integers , as in the following example: ID TIME Y 1 0.5 3 1 1.5 2 1 2.5 3 One can see the respiratory status data set and the warfarin data set for example for more practical examples on a categorical and a joint continuous and categorical data set respectively. Ordered categorical data • categorical1_project (data = ‘categorical1_data.txt’, model = ‘categorical1_model.txt’) In this example, observations are ordinal data that take their values in {0, 1, 2, 3}: • Cumulative odds ratio are used in this example to define the model $$\textrm{logit}(\mathbb{P}(y_{ij} \leq k))= \log \left( \frac{\mathbb{P}(y_{ij} \leq k)}{1 – \mathbb{P}(y_{ij} \leq k )} \right)$$ $$\begin{array}{ccl} \text{logit}(\mathbb{P}(y_{ij} \leq 0)) &=& \theta_{i,1}\\ \text{logit}(\mathbb{P}(y_{ij} \leq 1)) &=& \theta_{i,1}+\theta_{i,2}\\ \text{logit}(\mathbb{P}(y_{ij} \leq 2)) &=& \ This model is implemented in categorical1_model.txt: input = {th1, th2, th3} level = { type = categorical, categories = {0, 1, 2, 3}, logit(P(level<=0)) = th1 logit(P(level<=1)) = th1 + th2 logit(P(level<=2)) = th1 + th2 + th3 A normal distribution is used for \(\theta_{1}\), while log-normal distributions for \(\theta_{2}\) and \(\theta_{3}\) ensure that these parameters are positive (even without variability). Residuals for noncontinuous data reduce to NPDE’s. We can compare the empirical distribution of the NPDE’s with the distribution of a standardized normal distribution: VPC’s for categorical data compare the observed and predicted frequencies of each category over time: The prediction distribution can also be computed by Monte-Carlo: Ordered categorical data with regression variables • categorical2_project (data = ‘categorical2_data.txt’, model = ‘categorical2_model.txt’) A proportional odds model is used in this example, where PERIOD and DOSE are used as regression variables (i.e. time-varying covariates) Discrete-time Markov chain If observation times are regularly spaced (constant length of time between successive observations), we can consider the observations \((y_{ij},j=1,2,\ldots,n_i)\) to be a discrete-time Markov chain. • markov0_project (data = ‘markov1a_data.txt’, model = ‘markov0_model.txt’) In this project, states are assumed to be independent and identically distributed: \( \mathbb{P}(y_{ij} = 1) = 1 – \mathbb{P}(y_{ij} = 2) = p_{i,1} \) Observations in markov1a_data.txt take their values in {1, 2}. • markov1a_project (data = ‘markov1a_data.txt’, model = ‘markov1a_model.txt’) \(\begin{aligned}\mathbb{P}(y_{i,j} = 1 | y_{i,j-1} = 1) = 1 – \mathbb{P}(y_{i,j} = 2 | y_{i,j-1} = 1) = p_{i,11}\\ \mathbb{P}(y_{i,j} = 1 | y_{i,j-1} = 2) = 1 – \mathbb{P}(y_{i,j} = 2 | y_{i,j-1} = 2) = p_{i,12} \end{aligned}\) input = {p11, p21} State = {type = categorical, categories = {1,2}, dependence = Markov P(State=1|State_p=1) = p11 P(State=1|State_p=2) = p21 The distribution of the initial state is not defined in the model, which means that, by default, \( \mathbb{P}(y_{i,1} = 1) = \mathbb{P}(y_{i,1} = 2) = 0.5 \) • markov1b_project (data = ‘markov1b_data.txt’, model = ‘markov1b_model.txt’) The distribution of the initial state, \(p = \mathbb{P}(y_{i,1} = 1)\), is estimated in this example State = {type = categorical, categories = {1,2}, dependence = Markov P(State_1=1)= p P(State=1|State_p=1) = p11 P(State=1|State_p=2) = p21 • markov3a_project (data = ‘markov3a_data.txt’, model = ‘markov3a_model.txt’) Transition probabilities change with time in this example. We then define time varying transition probabilities in the model: input = {a1, b1, a2, b2} lp11 = a1 + b1*t/100 lp21 = a2 + b2*t/100 State = {type = categorical, categories = {1,2}, dependence = Markov logit(P(State=1|State_p=1)) = lp11 logit(P(State=1|State_p=2)) = lp21 • markov2_project (data = ‘markov2_data.txt’, model = ‘markov2_model.txt’) Observations in markov2_data.txt take their values in {1, 2, 3}. Then, 6 transition probabilities need to be defined in the model. Continuous-time Markov chain The previous situation can be extended to the case where time intervals between observations are irregular by modeling the sequence of states as a continuous-time Markov process. The difference is that rather than transitioning to a new (possibly the same) state at each time step, the system remains in the current state for some random amount of time before transitioning. This process is now characterized by transition rates instead of transition probabilities: \( \mathbb{P}(y_{i}(t+h) = k,|,y_{i}(t)=\ell , \psi_i) = h \rho_{\ell k}(t,\psi_i) + o(h),\qquad k \neq \ell .\) The probability that no transition happens between \(t\) and \(t+h\) is \( \mathbb{P}(y_{i}(s) = \ell, \forall s\in(t, t+h) | y_{i}(t)=\ell , \psi_i) = e^{h , \rho_{\ell \ell}(t,\psi_i)} .\) Furthermore, for any individual i and time t, the transition rates \((\rho_{\ell,k}(t, \psi_i))\) satisfy for any \(1\leq \ell \leq K\), \( \sum_{k=1}^K \rho_{\ell k}(t, \psi_i) = 0\) Constructing a model therefore means defining parametric functions of time \((\rho_{\ell,k})\) that satisfy this condition. • markov1c_project (data = ‘markov1c_data.txt’, model = ‘markov1c_model.txt’) Observation times are irregular in this example. Then, a continuous time Markov chain should be used in order to take into account the Markovian dependence of the data: State = { type = categorical, categories = {1,2}, dependence = Markov transitionRate(1,2) = q12 transitionRate(2,1) = q21 • markov3b_project (data = ‘markov3b_data.txt’, model = ‘markov3b_model.txt’) Time varying transition rates are used in this example. 3.4.4.Joint models for non continuous outcomes Objectives: learn how to implement a joint model for continuous and non continuous data. Projects: warfarin_cat_project, PKcount_project, PKrtte_project Joint model for continuous PK and categorical PD data • warfarin_cat_project (data = ‘warfarin_cat_data.txt’, model = ‘PKcategorical1_model.txt’) In this example, the original PD data has been recorded as 1 (Low), 2 (Medium) and 3 (High). More details about the data International Normalized Ratio (INR) values are commonly used in clinical practice to target optimal warfarin therapy. Low INR values (<2) are associated with high blood clot risk and high ones (>3) with high risk of bleeding, so the targeted value of INR, corresponding to optimal therapy, is between 2 and 3. Prothrombin complex activity is inversely proportional to the INR. We can therefore associate the three ordered categories for the INR to three ordered categories for PCA: Low PCA values if PCA is less than 33% (corresponding to INR>3), medium if PCA is between 33% and 50% (INR between 2 and 3) and high if PCA is more than 50% (INR<2). The column dv contains both the PK and the new categorized PD measurements. Instead of modeling the original PD data, we can model the probabilities of each of these categories, which have direct clinical interpretations. The model is still a joint PKPD model since this probability distribution is expected to depend on exposure, i.e., the plasmatic concentration predicted by the PK model. We introduce an effect compartment to mimic the effect delay. Let \(y_{ij}^{(2)}\) be the PCA level for patient i at time \(t_{ij}^{(2)}\). We can then use a proportional odds model for modeling this categorical data: $$\begin{array}{ccl}\text{logit} \left(\mathbb{P}(y_{ij}^{(2)} \leq 1 | \psi_i)\right) &= &\alpha_{i} + \beta_{i} Ce(t_{ij}^{(2)},\phi_i^{(1)}) \\ \text{logit} \left(\mathbb{P}(y_{ij}^{(2)} \leq 2 | \psi_i)\right) &=& \alpha_{i} + \gamma_{i} + \beta_{i}Ce(t_{ij}^{(2)},\phi_i^{(1)}) \\ \text{logit} \left(\mathbb{P}(y_{ij}^{(2)} \leq 3 | \psi_i)\right) &= & 1,\end{array}$$ where \(C_e(t,\phi_i^{(1)})\) is the predicted concentration of warfarin in the effect compartment at time t for patient i with PK parameters \(\phi_i^{(1)}\). This model defines a probability distribution for \(y_{ij}\) if \(\gamma_i\geq 0\). If \(\beta_i>0\), the probability of low PCA at time \(t_{ij}^{(2)}\) (\(y_{ij}^{(2)}=1\)) increases along with the predicted concentration \(Ce(t_{ij}^{(2)},\phi_i^{(1)})\). The joint model is implemented in the model file PKcategorical1_model.txt input = {Tlag, ka, V, Cl, ke0, alpha, beta, gamma} {Cc,Ce} = pkmodel(Tlag,ka,V,Cl,ke0) lp1 = alpha + beta*Ce lp2 = lp1+ gamma ; gamma >= 0 Level = {type=categorical, categories={1,2,3} logit(P(Level<=1)) = lp1 logit(P(Level<=2)) = lp2 output = {Cc, Level} See Categorical data model for more details about categorical data models. Joint model for continuous PK and count PD data • PKcount_project (data = ‘PKcount_data.txt’, model = ‘PKcount1_model.txt’) The data file used for this project is PKcount_data.txt where the PK and the count PD measurements are simulated. We use a Poisson distribution for the count data, assuming that the Poisson parameter is function of the predicted concentration. For any individual i, we have $$\lambda_i(t) = \lambda_{0,i} \left( 1 – \frac{Cc_i(t)}{Cc_i(t) + IC50_i} \right)$$ where \(Cc_i(t)\) is the predicted concentration for individual i at time t and $$ \log\left(P(y_{ij}^{(2)} = k)\right) = -\lambda_i(t_{ij}) + k\,\log(\lambda_i(t_{ij})) – \log(k!)$$ The joint model is implemented in the model file PKcount1_model.txt input = {ka, V, Cl, lambda0, IC50} Cc = pkmodel(ka,V,Cl) lambda=lambda0*(1 - Cc/(IC50+Cc)) Seizure = {type = count, log(P(Seizure=k)) = -lambda + k*log(lambda) - factln(k) output = {Cc,Seizure} See Count data model for more details about count data models. Joint model for continuous PK and time-to-event data • PKrtte_project (data = ‘PKrtte_data.txt’, model = ‘PKrtteWeibull1_model.txt’) The data file used for this project is PKrtte_data.txt where the PK and the time-to-event data are simulated. We use a Weibull model for the events count data, assuming that the baseline is function of the predicted concentration. For any individual i, we define the hazard function as $$h_i(t) = \gamma_{i} Cc_i(t) t^{\beta-1}$$ where \(Cc_i(t)\) is the predicted concentration for individual i at time t. The joint model is implemented in the model file PKrtteWeibull1_model.txt input = {ka, V, Cl, gamma, beta} Cc = pkmodel(ka, V, Cl) if t<0.1 haz = 0 haz = gamma*Cc*(t^(beta-1)) Hemorrhaging = {type=event, hazard=haz} output = {Cc, Hemorrhaging} See Time-to-event data model for more details about time-to-event data models. 3.5.1.Using a scale factor By default the models from the library do not include a scale factor. Units of the estimated parameters will depend on the units of the data set. For instance: • Concentration in mg/L, amount in mg, time in hours => volumes are in L and clearances in L/h • Concentration in ng/mL, amount in mg, time in minutes => volume are in 10^3 L and clearances in 10^3 L/min The files from the library can easily be modified to include a scale factor: Step 1: load a model from the library. Step 2: in the “Structural model” tab, click “Open in editor”. Step 3: add a scaling of the concentration. If the dose is in mg and I want the volume in L, then the concentration Cc will be in mg/L. If my observations in the data set are in ng/mL (i.e μg/L), I need to multiply Cc by 1000 (green highlight). Do not forget to output the scaled concentration instead of the original one (pink highlight). Step 4: save the file under a new name (to avoid overwriting the library model files). Step 5: load the saved model file. 3.5.2.Using regression variables Objectives: learn how to define and use regression variables (time varying covariates). Projects: reg1_project, reg2_project A regression variable is a variable x which is a given function of time, which is not defined in the model but which is used in the model. x is only defined at some time points \(t_1, t_2, \ldots, t_m\) (possibly different from the observation time points), but x is a function of time that should be defined for any t (if is used in an ODE for instance, or if a prediction is computed on a fine grid). Then, Mlxtran defines the function x by interpolating the given values \((x_1, x_2, \ldots, x_m)\). In the current version of Mlxtran, interpolation is performed by using the last given value: \( x(t) = x_j \quad~~\text{for}~~t_j \leq t < t_{j+1} \) The way to introduce it in the Mlxtran longitudinal model is defined here. Regressor definition in a data set It is possible to have in a data set one or several columns with column-type REGRESSOR. Within a given subject-occasion, string “.” will be interpolated (last value carried forward interpolation is used) for observation and dose-lines. Lines with no observation and no dose but with regressor values are also taken into account by Monolix for regressor interpolation. Several points have to be noticed: • The name of the regressor in the data set and the name of the regressor used in the longitudinal model do not need to be identical. • If there are several regressors, the mapping will be done by order of definition. • Regressors can only be used in the longitudinal model. Continuous regression variables • reg1_project (data = reg1_data.txt , model=reg1_model.txt) We consider a basic PD model in this example, where some concentration values are used as a regression variable. The data set is defined as follows input = {Emax, EC50, Cc} Cc = {use=regressor} E = Emax*Cc/(EC50 + Cc) output = E As explained in the previous subsection, there is no name correspondance between the regressor in the data set and the regressor in the model file. Thus, in that case, the values of Cc with respect to time will be taken from the y1 column. In addition, in that case, the predicted effect is therefore piece wise constant because • the regressor interpolation is performed by using the last given value, and then Cc is piece wise constant. • The effect model is direct with respect to the concentration. Thus, it changes at the time points where concentration values are provided: Categorical regression variables • reg2_project (data = reg2_data.txt , model=reg2_model.txt) The variable \(z_{ij}\) takes its values in {1, 2} in this example and represents the state of individual i at time \(t_{ij}\). We then assume that the observed data \(y_{ij}\) has a Poisson distribution with parameter lambda1 if \(z_{ij}=1\) and parameter lambda2 if \(z_{ij}=2\). z is known in this example: it is then defined as a regression variable in the model: input = {lambda1, lambda2, z} z = {use=regressor} if z==0 y = {type=count, log(P(y=k)) = -lambda + k*log(lambda) - factln(k) output = y 3.5.3.Delayed differential equations Objectives: learn how to implement a model with ordinary differential equations (ODE) and delayed differential equations (DDE). Projects: tgi_project, seir_project Ordinary differential equations based model • tgi_project (data = tgi_data.txt , model = tgi_model.txt) Here, we consider the tumor growth inhibition (TGI) model proposed by Ribba et al. (Ribba, B., Kaloshi, G., Peyre, M., Ricard, D., Calvez, V., Tod, M., . & Ducray, F., *A tumor growth inhibition model for low-grade glioma treated with chemotherapy or radiotherapy*. Clinical Cancer Research, 18(18), 5071-5080, 2012.). This model is defined by a set of ordinary differential equations \begin{aligned}\frac{dC}{dt} &= - k_{de} C(t) \\\frac{dP_T}{dt} &= \lambda P_T(t)(1- P^{\star}(t)/K) + k_{QPP}Q_P(t) -k_{PQ} P_T(t) -\gamma \, k_{de} P_T(t)C(t) \\ \frac{dQ_T}{dt} &= k_{PK} P_T(t) -\ gamma k_{de} Q_T(t)C(t) \\\frac{dQ_P}{dt} &= \gamma k_{de} Q_T(t)C(t) - k_{QPP} Q_P(t) -\delta_{QP} Q_P(t)\end{aligned} where $P^\star(t) = P_T(t) + Q_T(t) + Q_P(t)$ is the total tumor size. This set of ODEs is valid for t greater than 0, while \begin{aligned} C(0) &= 0 \\ P_T(0) &= P_{T0} \\ Q_T(0) &= Q_0 \\ Q_P(0) &= 0 \end{aligned} This model (derivatives and initial conditions) can easily be implemented with Mlxtran: DESCRIPTION: Tumor Growth Inhibition (TGI) model proposed by Ribba et al A tumor growth inhibition model for low-grade glioma treated with chemotherapy or radiotherapy. Clinical Cancer Research, 18(18), 5071-5080, 2012. - PT: proliferative equiescent tissue - QT: nonproliferative equiescent tissue - QP: damaged quiescent cells - C: concentration of a virtual drug encompassing the 3 chemotherapeutic components of the PCV regimen - K : maximal tumor size (should be fixed a priori) - KDE : the rate constant for the decay of the PCV concentration in plasma - kPQ : the rate constant for transition from proliferation to quiescence - kQpP : the rate constant for transfer from damaged quiescent tissue to proliferative tissue - lambdaP: the rate constant of growth for the proliferative tissue - gamma : the rate of damages in proliferative and quiescent tissue - deltaQP: the rate constant for elimination of the damaged quiescent tissue - PT0 : initial proliferative equiescent tissue - QT0 : initial nonproliferative equiescent tissue input = {K, KDE, kPQ, kQpP, lambdaP, gamma, deltaQP, PT0, QT0} ; Initial conditions t0 = 0 C_0 = 0 PT_0 = PT0 QT_0 = QT0 QP_0 = 0 ; Dynamical model PSTAR = PT + QT + QP ddt_C = -KDE*C ddt_PT = lambdaP*PT*(1-PSTAR/K) + kQpP*QP - kPQ*PT - gamma*KDE*PT*C ddt_QT = kPQ*PT - gamma*KDE*QT*C ddt_QP = gamma*KDE*QT*C - kQpP*QP - deltaQP*QP output = PSTAR Remark: t0, PT_0 and QT_0 are reserved keywords that define the initial conditions. Then, the graphic of individual fits clearly shows that the tumor size is constant until $t=0$ and starts changing according to the model at t=0. Don’t forget the initial conditions! • tgiNoT0_project (data = tgi_data.txt , model = tgiNoT0_model.txt) The initial time t0 is not specified in this example. Since t0 is missing, Monolix uses the first time value encountered for each individual. If, for instance, the tumor size has not been computed before 5 for the individual fits, then t0=5 will be used for defining the initial conditions for this individual, which introduces a shift in the plot: As defined here, the following rule applies • When no starting time t0 is defined in the Mlxtran model for Monolix then by default t0 is selected to be equal to the first dose or the first observation, whatever comes first. • If t0 is defined, a differential equation needs to be defined. Conclusion: don’t forget to properly specify the initial conditions of a system of ODEs! Delayed differential equations based model A system of delay differential equations (DDEs) can be implemented in a block EQUATION of the section [LONGITUDINAL] of a script Mlxtran. Mlxtran provides the command delay(x,T) where x is a one-dimensional component and T is the explicit delay. Therefore, DDEs with a nonconstant past of the form $$ \begin{array}{ccl} \frac{dx}{dt} &=& f(x(t),x(t-T_1), x(t-T_2), …), ~~\text{for}~~t \geq 0\ x(t) &=& x_0(t) ~~~~\text{for}~~\text{min}(T_k) \leq t \leq 0 \end{array} $$ can be solved. The syntax and rules are explained here. • seir_project (data = seir_data.txt , model = seir_model.txt) The model is a system of 4 DDEs and defined with the following mode: DESCRIPTION: SEIR model, using delayed differential equations. "An Epidemic Model with Recruitment-Death Demographics and Discrete Delays" , Genik & van den Driessche (1999) Decomposition of the total population into four epidemiological classes S (succeptibles), E (exposed), I (infectious), and R (recovered) The parameters corresponds to - birthRate: the birth rate, - deathRate: the natural death rate, - infectionRate: the contact rate of infective individuals, - recoveryRate: the rate of recovery, - excessDeathRate: the excess death rate for infective individuals There is a time delay in the model: - tauImmunity: a temporary immunity delay input = {birthRate, deathRate, infectionRate, recoveryRate, excessDeathRate, tauImmunity, tauLatency} ; Initial conditions t0 = 0 S_0 = 15 E_0 = 0 I_0 = 2 R_0 = 3 ; Dynamical model N = S + E + I + R ddt_S = birthRate - deathRate*S - infectionRate*S*I/N + recoveryRate*delay(I,tauImmunity)*exp(-deathRate*tauImmunity) ddt_E = infectionRate*S*I/N - deathRate*E - infectionRate*delay(S,tauLatency)*delay(I,tauLatency)*exp(-deathRate*tauLatency)/(delay(I,tauLatency)+delay(S,tauLatency)+delay(E,tauLatency)+delay(R,tauLatency)) ddt_I = -(recoveryRate+excessDeathRate+deathRate)*I + infectionRate*delay(S,tauLatency)*delay(I,tauLatency)*exp(-deathRate*tauLatency)/(delay(I,tauLatency)+delay(S,tauLatency)+delay(E,tauLatency)+delay(R,tauLatency)) ddt_R = recoveryRate*I - deathRate*R - recoveryRate*delay(I,tauImmunity)*exp(-deathRate*tauImmunity) output = {S, E, I, R} Introducing these delays allows to obtain nice fits for the 4 outcomes, including $(R_{ij})$ (corresponding to the output y4): Case studies • 8.case_studies/arthritis_project 3.5.4.Outputs and Tables Objectives: learn how to define outputs and create tables from the outputs of the model. Projects: tgi_project, tgiWithTable_project About the OUTPUT block • tgi_project (data = ‘tgi_data.txt’ , model=’tgi_model.txt’) We use the Tumor Growth Inhibition (TGI) model proposed by Ribba et al. in this example (Ribba, B., Kaloshi, G., Peyre, M., Ricard, D., Calvez, V., Tod, M., . & Ducray, F., A tumor growth inhibition model for low-grade glioma treated with chemotherapy or radiotherapy. Clinical Cancer Research, 18(18), 5071-5080, 2012.) DESCRIPTION: Tumor Growth Inhibition (TGI) model proposed by Ribba et al A tumor growth inhibition model for low-grade glioma treated with chemotherapy or radiotherapy. Clinical Cancer Research, 18(18), 5071-5080, 2012. - PT: proliferative equiescent tissue - QT: nonproliferative equiescent tissue - QP: damaged quiescent cells - C: concentration of a virtual drug encompassing the 3 chemotherapeutic components of the PCV regimen - K : maximal tumor size (should be fixed a priori) - KDE : the rate constant for the decay of the PCV concentration in plasma - kPQ : the rate constant for transition from proliferation to quiescence - kQpP : the rate constant for transfer from damaged quiescent tissue to proliferative tissue - lambdaP: the rate constant of growth for the proliferative tissue - gamma : the rate of damages in proliferative and quiescent tissue - deltaQP: the rate constant for elimination of the damaged quiescent tissue - PT0 : initial proliferative equiescent tissue - QT0 : initial nonproliferative equiescent tissue input = {K, KDE, kPQ, kQpP, lambdaP, gamma, deltaQP, PT0, QT0} ; Initial conditions t0 = 0 C_0 = 0 PT_0 = PT0 QT_0 = QT0 QP_0 = 0 ; Dynamical model PSTAR = PT + QT + QP ddt_C = -KDE*C ddt_PT = lambdaP*PT*(1-PSTAR/K) + kQpP*QP - kPQ*PT - gamma*KDE*PT*C ddt_QT = kPQ*PT - gamma*KDE*QT*C ddt_QP = gamma*KDE*QT*C - kQpP*QP - deltaQP*QP output = PSTAR PSTAR is the tumor size predicted by the model. It is therefore used as a prediction for the observations in the project. At the end of the scenario or of SAEM, individual predictions of the tumor size PSTAR are computed using the individual parameters available. Thus, individual predictions of the tumor size PSTAR are computed using both the conditional modes (indPred_mode), the conditional mean (indPred_mean), and the conditional means estimated during the last iterations of SAEM (indPred_SAEM) and saved in the table predictions.txt. Notice that the population prediction is also proposed. Remark: the same model file tgi_model.txt can be used with different tools, including Mlxplore or Simulx. Add additional outputs in tables • tgiWithTable_project (data = ‘tgi_data.txt’ , model=’tgiWithTable_model.txt’) We can save in the tables additional variables defined in the model, such as PT, Q and QP for instance, by adding a block OUTPUT: in the model file: output = PSTAR table = {PT, QT, QP} An additional file tables.txt now includes the predicted values of these variables for each individual (columns PT_mean, QT_mean, QP_mean, PT_mode, QT_mode, QP_mode, PT_popPred, QT_popPred, QP_popPred, PT_popPred_medianCOV, QT_popPred_medianCOV, QP_popPred_medianCOV, PT_SAEM, QT_SAEM, and QP_SAEM. Notice that only continuous variable are possible for variable in table. Good to know: it is not allowed to do calculations directly in the output or table statement. The following example is not possible: ; not allowed: output = {Cser+Ccsf} It has to be replaced by: Ctot = Cser+Ccsf output = {Ctot} 3.5.5.How to compute AUC using in Monolix and Mlxtran Computing additional outputs such as AUC and Cmax in the structural model is possible but requires a particular syntax, because it is not possible to redefine a variable in Mlxtran. For example, it would not be possible to directly update the variable Cmax = Cc when Cc>Cmax. But it is possible to compute Cmax via an ODE, where Cmax increases like Cc at each time where Cc>Cmax. This page gives detailed examples for the AUC on full time interval or specific interval, Cmax, nadir, and other derived variables such as duration above a threshold. Often the Area under the PK curve (AUC) is needed as an important PK metric to link with the pharmacodynamic effect. We show here how to: • compute the AUC within the mlxtran model file • output the AUC calculations for later analysis. Calculation of the AUC can be done in the EQUATION section of the Mlxtran model. If a dataset contains the AUC observations, then the calculation in the EQUATION section can be used as an output in the output={} definition (matched to observations of the data set). Or it can be saved for post-treatment using table={}, as described here. AUClast from t=0 to t=tlast The following code is the basic implementation of the AUC for a 1-compartmental model with the absorption rate ka. It integrates the concentration profile from the start to the end of the observation input = {ka, V, Cl} ddt_AUC = Cc output = {Cc} table = {AUC} AUC in a time interval The following code computes the AUC between two time points t1 and t2. The idea is to compute the AUC_0_t1 and AUC_0_t2 using if/else statements and then do the difference between the two. input = {ka, V, Cl} depot(ka, target=Ac) odeType = stiff Cc = pkmodel(ka,V,Cl) AUCt1_0 = 0 if(t < t1) dAUCt1 = Cc dAUCt1 = 0 ddt_AUCt1 = dAUCt1 AUCt2_0 = 0 if(t < t2) dAUCt2 = Cc dAUCt2 = 0 ddt_AUCt2 = dAUCt2 AUC_t1_t2 = AUCt2 - AUCt1 output = {Cc} table = {AUC_t1_t2} Note that the t==tDose would not work because the integrator does not necessarily evaluate the time exactly at the times of doses. Thus the test t==tDose might not be tested at all during the Also note that, although partial AUC can be calculated by comparing time with the both boundaries in the if statement (e.g. if (t>20 and t<40)), this might not be the best practice when concentration is calculated using an analytical solution, because ODE solver used for calculating AUC could be deceived by AUC remaining 0 for a period of time and might increase the integration time step to a value larger than the difference between time boundaries. Dose-interval AUC (AUCtau) The following code compute the AUCtau for each dose interval. At each dose the AUC is reset to zero and the concentration is integrated until the next administration. input = {ka, V, Cl} Cc = pkmodel(ka,V,Cl) ; Reset the ODE variable AUCtau to zero each time there is a dose ddt_AUCtau = Cc output = {Cc} table = {AUCtau} Since AUCtau is equal to 0 at each dosing time as it has just been reset, the AUC in the last interdose interval should actually be read just before that time. we provide a Simulx project with an example of a PK model where AUCtau is calculated as additional lines in the model and defined as an output of the simulations. Computing the Cmax or nadir in the structural model Cmax can be calculated directly in the structural model by integrating the increase of the concentration. Alternatively, the Cmax can also be calculated in Simulx using the Outcomes&Endpoints tab. Cmax for models with first-order absorption The following example shows how to do it in case of a one-compartment model with first-order absorption and linear elimination. Absorption processes defined via the depot macro must be replaced by explciit ODEs, while the depot macro is used to add the dose as a bolus into the depot compartment. input = {Cl, ka, V} ; initial conditions t_0 = 0 Ad_0 = 0 Ac_0 = 0 ddt_Ad = -ka*Ad ddt_Ac = ka*Ad - k*Ac Cc = Ac/V ; Calculation of Cmax slope = ka*Ad -k*Ac Cmax_0 = 0 if slope > 0 && Cc > Cmax x = slope/V x = 0 ddt_Cmax = x Cmax for model with bolus administration or infusion If the dose is administered as bolus, it is necessary to add in the model a very short infusion. This prevents from the instantaneous increase of the concentration. The following example shows this situations in case of a three-compartments model with linear elimination. The duration of the “short infusion” can be adapted with respect to the time scale by modifying dT=0.1. If the doses are administered via iv infusion, then dT=0.1 can be replaced by dT = inftDose, which reads the infusion duration from the data. input = {Cl, V1, Q2, V2, Q3, V3} ; Parameter transformations V = V1 k = Cl/V1 k12 = Q2/V1 k21 = Q2/V2 k13 = Q3/V1 k31 = Q3/V3 ; initial conditions t_0 = 0 Ac_0 = 0 A2_0 = 0 A3_0 = 0 ;short pseudo-infusion duration dT = 0.1 ; use dT=inftDose if the administration is infusion: inftDose is the infusion duration from the last dose read from the data ; infusion input to Ac if t < tDose+dT input = amtDose/dT ;amtDose is the last dose amount read from the data input = 0 dAc = input -k*Ac - k12*Ac - k13*Ac + k21*A2 + k31*A3 ddt_Ac = dAc ddt_A2 = k12*Ac - k21*A2 ddt_A3 = k13*Ac - k31*A3 Cc = Ac/V ; Calculation of AUC AUC_0 = 0 ddt_AUC = Cc ; Calculation of Cmax Cmax_0 = 0 if dAc > 0 && Cc > Cmax x = dAc/V x = 0 ddt_Cmax = x output = {Ac, Cc} table = {Cmax, AUC} Cmax for models with transit compartments The transit compartments defined via the depot() macro must be replaced by explicit ODEs. The amount in the last (nth) transit compartment is described via an analytical solution using the keyword amtDose to get the dose amount and (t-tDose) to get the time elapsed since the last dose. The calculation of Cmax then follows the same logic as for a first-order absorption. input={F, Ktr, Mtt, ka, V, Cl} k = Cl/V ; transit model with explicit ODEs n = max(0, Mtt*Ktr - 1) An = F * amtDose * exp( n * log(Ktr*(t-tDose)) - factln(n) - Ktr * (t-tDose)) t_0 = 0 Aa_0 = 0 Ac_0 = 0 ddt_Aa = Ktr*An - ka*Aa ddt_Ac = ka*Aa - k*Ac Cc = Ac/V ; Calculation of Cmax slope = ka*Aa - k*Ac Cmax_0 = 0 if slope > 0 && Cc > Cmax x = slope/V x = 0 ddt_Cmax = x output = {Cc} table = {Cmax} This example shows how to compute tumor size (variable TS) at time of nadir: input = {ka, Vl, Cl, TS0, kge, kkill, lambda} Cc = pkmodel(ka, V, Cl) ; ODE system for tumor growth dynamics t_0 = 0 TS_0 = TS0 TSDynamics = (kge*TS)-kkill*TS*Cc*exp(-lambda*t) ddt_TS = TSDynamics ; ===== computing TS at nadir if TSDynamics < 0 && TS < TS_atNadir x = TSDynamics x = 0 TS_atNadir_0 = TS0 ddt_TS_atNadir = x output = {TS} table = {TS_atNadir} Here is an example of simulation for TS and TS_atNadir with a multiple dose schedule: Time above a threshold Here we compute the time that a variable C spends above some threshold, which could be a toxicity threshold for example. This piece of code comes in the structural model, after the dynamics of the variable C has already been defined. TimeAboveThreshold_0 = 0 if C > Cthreshold xTime = 1 xTime = 0 ddt_TimeAboveThreshold = xTime Time since disease progression In this full structural model example, tumor size TS is described with an exponential growth, and a constant treatment effect since time=0 with a log-kill killing hypothesis and a Claret exponential resistance term. Several additional variables are computed: • TS_atNadir: the tumor size at the time of nadir, • PC_from_Nadir: the percent change of TS between the time of nadir and the current time, • DP: predicted disease progression status (1 if TS has increased more than >20% and >5-mm from last nadir, 0 otherwise), • TDP: time since disease progression (duration since last time when DP is switched from 0 to 1) input = {TS0, kge, kkill, lambda} ;initial conditions: ;t_0 = 0 ;this should be uncommented if the initial time is 0 for all subjects TS_0 = TS0 K = kkill*exp(-lambda*t) ;Saturation for TS at 1e12 to avoid infinite values if TS>1e12 TSDynamics = 0 if t<0 ; before treatment TSDynamics = (kge*TS) else ; after treatment TSDynamics = (kge*TS)-K*TS ddt_TS = TSDynamics ; Computing time to nadir (TS decreases after treatment start at time=0, then increases again because of resistance) if TSDynamics<0 ddt_TimeToNadir = x1 ; x1 is used as intermediate variable because it is not possible to define an ODE inside an if/else statement ; Computing TS at time of nadir if TSDynamics < 0 & TS < TS_atNadir x2 = TSDynamics x2 = 0 TS_atNadir_0 = TS0 ddt_TS_atNadir = x2 ; ; Computing DP: predicted disease progression status at the previous visit (1 if TS has increased more than >20% and >5-mm from last nadir, 0 otherwise) PC_from_Nadir = (TS/TS_atNadir-1)*100 if t>TimeToNadir & PC_from_Nadir > 20 & (TS-TS_atNadir) > 5 DP = 1 DP = 0 ; Computing TDP = time since disease progression: duration since last time DP was switched from 0 to 1 if DP==1 x3 = 1 x3 = 0 TDP_0 = 0 ddt_TDP = x3 output = {TS} table = {TS_atNadir, PC_from_Nadir, DP, TDP} Simulx can be conveniently used to simulate each intermediate variable with typical parameters (the example model and Simulx project can be downloaded here): 4.Statistical Model The statistical model tab in Monolix enables to define the statistical model and run estimation tasks. The statistical model includes • the observation model, which combines the error model and the distribution of the observations. • the individual model, combining □ distributions for the individual parameters □ which parameters have inter-individual variability (random effects) □ correlations between the individual parameters □ covariate effects on the individual parameters 4.1.Observation (error) model Objectives: learn how to use the predefined residual error models. Projects: warfarinPK_project, bandModel_project, autocorrelation_project, errorGroup_project For continuous data, we are going to consider scalar outcomes (\(y_{ij} \in \mathbb{R}\)) and assume the following general model: $$y_{ij}=f(t_{ij},\psi_i)+ g(t_{ij},\psi_i,\xi)\varepsilon_{ij}$$ for i from 1 to N, and j from 1 to \(\text{n}_{i}\), where \(\psi_i\) is the parameter vector of the structural model f for individual i. The residual error model is defined by the function g which depends on some additional vector of parameters \(\xi\). The residual errors \((\varepsilon_{ij})\) are standardized Gaussian random variables (mean 0 and standard deviation 1). In this case, it is clear that \(f(t_{ij}, \psi_i)\) and \(g(t_{ij}, \psi_i, \xi)\) are the conditional mean and standard deviation of \(y_{ij}\), i.e., $$\mathbb{E}(y_{ij} | \psi_i) = f(t_{ij}, \psi_i)~~\textrm{and}~~\textrm{sd}(y_{ij} | \psi_i)= g(t_{ij}, \psi_i, \xi)$$ Available error models In Monolix, we only consider the function g to be a function of the structural model f, i.e. \(g(t_{ij}, \psi_i, \xi)= g(f(t_{ij}, \psi_i), \xi)\) leading to an expression of the observation model of the form $$y_{ij}=f(t_{ij},\psi_i)+ g(f(t_{ij}, \psi_i), \xi)\varepsilon_{ij}$$ The following error models are available: • constant : \(y = f + a \varepsilon\). The function g is constant, and the additional parameter is \(\xi=a\) • proportional : \(y = f + bf^c \varepsilon\). The function g is proportional to the structural model f, and the additional parameters are \(\xi = (b,c)\). By default, the parameter c is fixed at 1 and the additional parameter is $\xi = b$. • combined1 : \(y = f + (a+ bf^c) \varepsilon\). The function g is a linear combination of a constant term and a term proportional to the structural model f, and the additional parameters are \(\xi = (a, b)\) (by default, the parameter c is fixed at 1). • combined2 : \(y = f + \sqrt{a^2+ b^2(f^c)^2} \varepsilon\). The function g is a combination of a constant term and a term proportional to the structural model f (g = bf^c), and the additional parameters are \(\xi = (a, b)\) (by default, the parameter c is fixed at 1). Notice that the parameter c is fixed to 1 by default. However, it can be unfixed and estimated. The assumption that the distribution of any observation \(y_{ij}\) is symmetrical around its predicted value is a very strong one. If this assumption does not hold, we may want to transform the data to make it more symmetric around its (transformed) predicted value. In other cases, constraints on the values that observations can take may also lead us to transform the data. Available transformations The model can be extended to include a transformation of the data: $$u(y_{ij})=u(f(t_{ij},\psi_i)) + g(u(f(t_{ij},\psi_i)) ,\xi) $$ As we can see, both the data \(y_{ij}\) and the structural model f are transformed by the function u so that \(f(t_{ij}, \psi_i)\) remains the prediction of \(y_{ij}\). Classical distributions are proposed as transformation: • normal: u(y) = y. This is equivalent to no transformation. • lognormal: u(y) = log(y). Thus, for a combined error model for example, the corresponding observation model writes \(\log(y) = \log(f) + (a + b\log(f)) \varepsilon\). It assumes that all observations are strictly positive. Otherwise, an error message is thrown. In case of censored data with a limit, the limit has to be strictly positive too. • logitnormal: u(y) = log(y/(1-y)). Thus, for a combined error model for example, the corresponding observation model writes \(\log(y/(1-y)) = \log(f/(1-f)) + (a + b\log(f/(1-f)))\varepsilon\). It assumes that all observations are strictly between 0 and 1. It is also possible to modify these bounds and not “impose” them to be 0 and 1, i.e. to define the logit function between a minimum and a maximum: the function u becomes u(y) = log((y-y_min)/(y_max-y)). Again, in case of censored data with a limit, the limits too must belong strictly to the defined interval. Any interrogation on what is the formula behind your observation model? There is a button FORMULA on the interface as on the figure below where the observation model is described linking the observation (named CONC in that case) and the prediction (named Cc in that case). Note that \(\epsilon\) is noted e here. Remarks: In previous Monolix version, only the error was available. Thus, what happens to the errors that are not proposed anymore? Is it possible to have “exponential”, “logit”, “band(0,10)”, and “band(0,100)”? Yes, in this version, we choose to split the observation model between its error model and its distribution. The purpose is to have a more unified vision of models and increase the number of possibilities. Thus, here is how to configure new projects with the previous error model definition. • “exponential” is an observation model with a constant error model and a lognormal distribution. • “logit” is an observation model with a constant error model and a logitnormal distribution. • “band(0,10)” is an observation model with a constant error model and a logitnormal distribution with min and max at 0 and 10 respectively. • “band(0,100)” is an observation model with a constant error model and a logitnormal distribution with min and max at 0 and 100 respectively. Defining the residual error model from the Monolix GUI A menu in the frame Statistical model|Tasks of the main GUI allows one to select both the error model and the distribution as on the following figure (in green and blue respectively) A summary of the statistical model which includes the residual error model can be displayed by clicking on the button formula. Some basic residual error models • warfarinPKlibrary_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) The residual error model used with this project for fitting the PK of warfarin is a combined error model, i.e. \(y_{ij} = f(t_{ij}, \psi_i))+ (a+bf(t_{ij}, \psi_i)))\varepsilon_{ij}\)observation versus prediction figure below seems ok. • Figures showing the shape of the prediction interval for each observation model available in Monolix are displayed here. • When the residual error model is defined in the GUI, a bloc DEFINITION: is then automatically added to the project file in the section [LONGITUDINAL] of <MODEL> when the project is saved: y1 = {distribution=normal, prediction=Cc, errorModel=combined1(a,b)} Residual error models for bounded data • bandModel_project (data = ‘bandModel_data.txt’, model = ‘lib:immed_Emax_null.txt’) In this example, data are known to take their values between 0 and 100. We can use a constant error model and a logitnormal for the transformation with bounds (0,100) if we want to take this constraint into account. In the Observation versus prediction plot, one can see that the error is smaller when the observations are close to 0 and 100 which is normal. To see the relevance of the predictions, one can look at the 90% prediction interval. Using a logitnormal distribution, we have a very different shape of this prediction interval to take that specificity into account. VPCs obtained with this error model do not show any mispecification Mlxtran as follows: effect = {distribution=logitnormal, min=0, max=100, prediction=E, errorModel=constant(a)} Autocorrelated residuals For any subject i, the residual errors \((\varepsilon_{ij},1 \leq j \leq n_i)\) are usually assumed to be independent random variables. The extension to autocorrelated errors is possible by assuming, that \((\varepsilon_{ij})\) is a stationary autoregressive process of order 1, AR(1), which autocorrelation decreases exponentially: $$ \textrm{corr}(\varepsilon_{ij},\varepsilon_{i,{j+1}}) = r_i^{(t_{i,j+1}-t_{ij})}$$ where \(0 \leq r_i \leq 1\) for each individual i. If \(t_{ij}=j\) for any (i,j), then \(t_{i,j+1}-t_{i,j}=1\) and the autocorrelation function \(\gamma_i\)for individual i is given by $$\gamma_i(\tau) = \textrm{corr}(\varepsilon_{ij}, \varepsilon_{i,j+\tau}) = r_i^{\tau}$$ The residual errors are uncorrelated when \(r_i=0\). • autocorrelation_project (data = ‘autocorrelation_data.txt’, model = ‘lib:infusion_1cpt_Vk.txt’) Autocorrelation is estimated since the checkbox r is ticked in this project:r: Important remarks: • Monolix accepts both regular and irregular time grids. • For a proper estimationg of the autocorrelation structure of the residual errors, rich data is required (i.e. a large number of time points per individual) . • To add autocorrelation, the user should either use the connectors, or write it directly in the Mlxtran □ add “autoCorrCoef=r” in definition “DV = {distribution=normal, prediction=Cc, errorModel=proportional(b), autoCorrCoef=r}” for example □ add “r” as an input parameter. Using different error models per group/study • errorGroup_project (data = ‘errorGroup_data.txt’, model = ‘errorGroup_model.txt’) Data comes from 3 different studies in this example. We want to have the same structural model but use different error models for the 3 studies. A solution consists in defining the column STUDY with the reserved keyword OBSERVATION ID. It will then be possible to define one error model per outcome: Here, we use the same PK model for the 3 studies: input = {V, k} Cc1 = pkmodel(V, k) Cc2 = Cc1 Cc3 = Cc1 output = {Cc1, Cc2, Cc3} Since 3 outputs are defined in the structural model, one can now define 3 error models in the GUI: b1 and b2 are estimated: 4.2.1.Model for the individual parameters: introduction What is the individual model and where is it defined in Monolix? The population approach considers that parameters of the structural model can have a different value for each individual, and the way these values are distributed over individuals and impacted by covariate values is defined in the individual model. The individual model is defined in the lower part of the statistical model tab. This model includes Theory for the individual model A model for observations depends on a vector of individual parameters \(\psi_i\). As we want to work with a population approach, we now suppose that \(\psi_i\) comes from some probability distribution \(p_{{\psi_i}}\). In this section, we are interested in the implementation of individual parameter distributions \((p_{{\psi_i}}, 1\leq i \leq N)\). Generally speaking, we assume that individuals are independent. This means that in the following analysis, it is sufficient to take a closer look at the distribution \(p_{{\psi_i}}\) of a unique individual i. The distribution \(p_{{\psi_i}}\) plays a fundamental role since it describes the inter-individual variability of the individual parameter \(\psi_i\). In Monolix, we consider that some transformation of the individual parameters is normally distributed and is a linear function of the covariates: \(h(\psi_i) = h(\psi_{\rm pop})+ \beta \cdot ({c}_i – {c}_{\rm pop}) + \eta_i \,, \quad \eta_i \sim {\cal N}(0,\Omega).\) This model gives a clear and easily interpreted decomposition of the variability of \(h(\psi_i)\) around \(h(\psi_{\rm pop})\), i.e., of \(\psi_i\) around \(\psi_{\rm pop}\): The component \(\beta \cdot ({c}_i – {c}_{\rm pop})\) describes part of this variability by way of covariates \({c}_i\) that fluctuate around a typical value \({c}_{\rm pop}\). The random component \(\eta_i\) describes the remaining variability, i.e., variability between subjects that have the same covariate values. By definition, a mixed effects model combines these two components: fixed and random effects. In linear covariate models, these two effects combine. In the present context, the vector of population parameters to estimate is \(\theta = (\psi_{\rm pop},\ beta,\Omega)\). Several extensions of this basic model are possible: We can suppose for instance that the individual parameters of a given individual can fluctuate over time. Assuming that the parameter values remain constant over some periods of time called occasions , the model needs to be able to describe the inter-occasion variability (IOV) of the individual parameters. If we assume that the population consists of several homogeneous subpopulations, a straightforward extension of mixed effects models is a finite mixture of mixed effects models, assuming for instance that the distribution \(p_{{\psi_i}}\) is a mixture of distributions. 4.2.2.Probability distribution of the individual parameters Objectives: learn how to define the probability distribution and the correlation structure of the individual parameters. Projects: warfarin_distribution1_project, warfarin_distribution2_project, warfarin_distribution3_project, warfarin_distribution4_project One way to extend the use of Gaussian distributions is to consider that some transformation of the parameters in which we are interested is Gaussian, i.e., assume the existence of a monotonic function \(h\) such that \(h(\psi)\) is normally distributed. Then, there exists some \(\omega\) such that, for each individual i: \(h(\psi_i) \sim {\cal N}(h(\bar{\psi}_i), \omega^2)\) where \(\bar{\psi}_i\) is the predicted value of \(\psi_i\). In this section, we consider models for the individual parameters without any covariate. Then, the predicted value of \(\psi_i\) is the \ (\bar{\psi}_i = \psi_{\rm pop}\) and \(h(\psi_i) \sim {\cal N}(h(\psi_{pop}), \omega^2)\) The transformation \(h\) defines the distribution of \(\psi_i\). Some predefined distributions/transformations are available in Monolix: • Normal distribution in ]-inf,+inf[: In that case, \(h(\psi_i) = \psi_i\). Note: the two mathematical representations for normal distributions are equivalent: \( \psi_i \sim {\cal N}(\bar{\psi}_{i}, \omega^2) ~~\Leftrightarrow~~ \psi_i = \bar{\psi}_i + \eta_i, ~~\text{where}~~\eta_i \sim {\cal N}(0,\omega^2).\) • Log-normal distribution in ]0,+inf[: In that case, \(h(\psi_i) = log(\psi_i)\). A log-normally random variable takes positive values only. A log-normal distribution looks like a normal distribution for a small variance \(\omega^2\). On the other hand, the asymmetry of the distribution increases when \(\omega^2\) increases. Note: the two mathematical representations for log-normal distributions are equivalent: \(\log(\psi_i) \sim {\cal N}(\log(\bar{\psi}_{i}), \omega^2) ~~\Leftrightarrow~~ \log(\psi_i)=\log(\bar{\psi}_{i})+\eta_i~~\Leftrightarrow~~ \psi_i = \bar{\psi}_i e^{\eta_i}, ~~\text{where}~~\eta_i \ sim {\cal N}(0,\omega^2).\) \(\bar{\psi}_i\) represents the typical value (fixed effect) and \(\omega\) the standard deviation of the random effects, which is interpreted as the inter-individual variability. Note that \(\bar{\ psi}_i\) is the median of the distribution (neither the mean, nor the mode). • Logit-normal distribution in ]0,1[: In that case, \(h(\psi_i) = log\left(\frac{\psi_i}{1-\psi_i}\right)\). A random variable \(\psi_i\) with a logit-normal distribution takes its values in ]0,1[. The logit of \(\psi_i\) is normally distributed, i.e., \(\text{logit}(\psi_i) = \log \left(\frac{\psi_i}{1-\psi_i}\right) \ \sim \ \ {\cal N}( \text{logit}(\bar{\psi}_i), \omega^2) ~~\Leftrightarrow~~ \text{logit}(\psi_i) = \text{logit}(\bar{\psi}_i) + \ eta_i, ~~\text{where}~~\eta_i \sim {\cal N}(0,\omega^2)\) Note that: \( m = \text{logit}(\psi_i) = \log \left(\frac{\psi_i}{1-\psi_i}\right) ~~\Leftrightarrow~~ \psi_i = \frac{\exp(m)}{1+\exp(m)} \) • Generalized logit-normal distribution in ]a,b[: In that case, \(h(\psi_i) = log\left(\frac{\psi_i – a}{b-\psi_i}\right)\). A random variable \(\psi_i\) with a logit-normal distribution takes its values in ]a,b[. The logit of \(\psi_i\) is normally distributed, i.e., \(\text{logit}_{(a,b)}(\psi_i) = \log \left(\frac{\psi_i – a}{b-\psi_i}\right) \ \sim \ \ {\cal N}( \text{logit}_{(a,b)}(\bar{\psi}_i), \omega^2) ~~\Leftrightarrow~~ \text{logit}_{(a,b)}(\psi_i) = \ text{logit}_{(a,b)}(\bar{\psi}_i) + \eta_i, ~~\text{where}~~\eta_i \sim {\cal N}(0,\omega^2)\) Note that: \( m = \text{logit}_{(a,b)}(\psi_i) = \log \left(\frac{\psi_i – a}{b-\psi_i}\right) ~~\Leftrightarrow~~ \psi_i = \frac{b \exp(m)+a}{1+\exp(m)} \) • Probit-normal distribution: The probit function is the inverse cumulative distribution function (quantile function) \(\Phi^{-1}\) associated with the standard normal distribution \({\cal N}(0,1)\). A random variable \(\psi\) with a probit-normal distribution also takes its values in ]0,1[. \(\text{probit}(\psi_i) = \Phi^{-1}(\psi_i) \ \sim \ {\cal N}( \Phi^{-1}(\bar{\psi}_i), \omega^2) .\) To chose one of these distribution in the GUI, click on the distribution corresponding to the parameter you want to change in the individual model part and choose the corresponding distribution. 1. If you change your distribution and your population parameter is not valid, then an error message is thrown. Typically, when you want to change your distribution to a log normal distribution, make sure the associated population parameter is strictly positive. 2. When creating a project, the default proposed distribution is lognormal. 3. Logit transformations can be generalized to any interval (a,b) by setting \( \psi_{(a,b)} = a + (b-a)\psi_{(0,1)}\) where \(\psi_{(0,1)}\) is a random variable that takes values in (0,1) with a logit-normal distribution. Thus, if you need to have bounds between a and b, you need to modify your structural model to reshape a parameter between 0 and 1 and use a logit or a probit distribution. Examples are shown on this page. • “Adapted” Logit-normal distribution: Another interesting possibility is to “extend” the logit distribution to be bounded in [a, b] rather than in [0, 1]. It is possible starting from the 2019 version. For that, set your parameter in a logit normal distribution. The setting button appear next to the distribution. Clicking on it will allow to define your bounds as in the following figure. Notice that if your parameter initial value is not in [0, 1], the bounds are automatically adapted and the following warning message is proposed “The initial value of XX is greater than 1: the logit limit is adjusted” Marginal distributions of the individual parameters • warfarin_distribution1_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) We use the warfarin PK example here. The four PK parameters Tlag, ka, V and Cl are log-normally distributed. LOGNORMAL distribution is then used for these four log-normal distributions in the main Monolix graphical user interface: The distribution of the 4 PK parameters defined in the MonolixGUI is automatically translated into Mlxtran in the project file: input = {Tlag_pop, omega_Tlag, ka_pop, omega_ka, V_pop, omega_V, Cl_pop, omega_Cl} Tlag = {distribution=lognormal, typical=Tlag_pop, sd=omega_Tlag} ka = {distribution=lognormal, typical=ka_pop, sd=omega_ka} V = {distribution=lognormal, typical=V_pop, sd=omega_V} Cl = {distribution=lognormal, typical=Cl_pop, sd=omega_Cl} Estimated parameters are the parameters of the 4 log-normal distributions and the parameters of the residual error model: Here, \(V_{\rm pop} = 7.94\) and \(\omega_V=0.326\) means that the estimated population distribution for the volume is: \(\log(V_i) \sim {\cal N}(\log(7.94) , 0.326^2)\) or, equivalently, \(V_i = 7.94 e^{\eta_i}\) where \(\eta_i \sim {\cal N}(0,0.326^2)\). • \(V_{\rm pop} = 7.94\) is not the population mean of the distribution of \(V_i\), but the median of this distribution (in that case, the mean value is 7.985). The four probability distribution functions are displayed figure Parameter distributions: • \(V_{\rm pop}\) is not the population mean of the distribution of \(V_i\), but the median of this distribution. The same property holds for the 3 other distributions which are not Gaussian. • Here, standard deviations \(\omega_{Tlag}\), \(\omega_{ka}\), \(\omega_V\) and \(\omega_{Cl}\) are approximately the coefficients of variation (CV) of Tlag, ka, V and Cl since these 4 parameters are log-normally distributed with variances < 1. • warfarin_distribution2_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) Other distributions for the PK parameters are used in this project: • NORMAL for Tlag, we fix the population value \(Tlag_{\text{pop}}\) to 1.5 and the standard deviation \(\omega_{\rm Tlag}\) to 1: • NORMAL for ka, • NORMAL for V, • and LOGNORMAL for Cl Estimated parameters are the parameters of the 4 transformed normal distributions and the parameters of the residual error model: Here, \( Tlag_{\rm pop} = 1.5\) and \(\omega_{Tlag}=1\) means that \(Tlag_i \sim {\cal N}(1.5, 1^2)\) while \(Cl_{\rm pop} = .133\) and \(\omega_{Cl}=..29\) means that \(log(Cl_i) \sim {\cal N}(log (.133), .29^2)\). The four probability distribution functions are displayed Figure Parameter distributions: Correlation structure of the random effects Dependency can be introduced between individual parameters by supposing that the random effects \(\eta_i\) are not independent. This means considering them to be linearly correlated. • warfarin_distribution3_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) Defining correlation between random effects in the interface To introduce correlations between random effects in Monolix, one can define correlation groups. For example, two correlation groups are defined on the interface below, between \(\eta_{V,i}\) and \(\ eta_{Cl,i}\) (#1 in that case) and between \(\eta_{Tlag,i}\) and \(\eta_{ka,i}\) in an other group (#2 in that case): To define a correlation between the random effects of V and Cl, you just have to click on the check boxes of the correlation for those two parameters. If you want to define a correlation between the random effects ka and Tlag independently of the first correlation group, click on the + next to CORRELATION to define a second group and click on the check boxes corresponding to the parameters ka and Tlag under the correlation group #2. Notice, that as the random effects of Cl and V are already in the correlation group #1, these random effects can not be used in another correlation group. When three of more parameters are included in a correlation groups, all pairwise correlations will be estimated. It is not instance not possible to estimate the correlation between \(\eta_{ka,i}\) and \(\eta_{V,i}\) and between \(\eta_{Cl,i}\) and \(\eta_{V,i}\) but not between \(\eta_{Cl,i}\) and \(\eta_{ka,i}\). It is important to mention that the estimated correlations are not the correlation between the individual parameters (between \(Tlag_i\) and \(ka_i\), and between \(V_i\) and \(Cl_i\)) but the (linear) correlation between the random effects (between \(\eta_{Tlag,i}\) and \(\eta_{ka,i}\), and between \(\eta_{V,i}\) and \(\eta_{Cl,i}\) respectively). • If the box is greyed, it means that the associated random effects can not be used in a correlation group, as in the following cases □ when the parameter has no random effects □ when the random effect of the parameter is already used in another correlation group • There are no limitation in terms of number of parameters in a correlation group • You can have a look in the FORMULA to have a recap of all correlations • In case of inter-occasion variability, you can define the correlation group for each level of variability independently. • The initial value for the correlations is zero and cannot be changed. • The correlation value cannot be fixed. Estimated population parameters now include these 2 correlations: Notice that the high uncertainty on \(\text{corr_ka_Tlag}\) suggests that the correlation between \(\eta_{Tlag,i}\) and \(\eta_{ka,i}\) is not reliable. How to decide to include correlations between random effects? The scatterplots of the random effects can hint at correlations to include in the model. This plot represents the joint empirical distributions of each pair of random effects. The regression line (in pink below) and the correlation coefficient (“information” toggle in the settings) permits to visually detect tendencies. If “conditional distribution” (default) is chosen in the display settings, the displayed random effects are calculated using individual parameters sampled from the conditional distribution, which permits to avoid spurious correlations (see the page on shrinkage for more details). If a large correlation is present between a pair of random effects, this correlation can be added to the model in order to be estimated as a population parameter. Depending on a number of random effects values used to calculate the correlation coefficient, a same correlation value can be more or less significant. To help the user identify significant correlations, Pearson’s correlation tests are performed in the “Result” tab, “Tests” section. If no significant correlation is found, like for the pair \(\eta_{Tlag}\) and \(\eta_{Cl}\) below, the distributions can be assumed to be independent. However, if a significant correlation appears, like for the pair \(\eta_V\) and \(\eta_{Cl}\) below, it can be hypothesized that the distributions are not independent and that the correlation must be included in the model and estimated. Once the correlation is included in the model, the random effects for \(V\) and \(Cl\) are drawn from the joint distribution rather than from two independent distributions. How do the correlations between random effects affect the individual model? In this example the model has four parameters Tlag, ka, V and Cl. Without correlation, the individual model is: \(log(Tlag) = log(Tlag_{pop}) + \eta_{Tlag}\) \(log(ka) = log(ka_{pop}) + \eta_{ka}\) \(log(V) = log(V_{pop}) + \eta_V\) \(log(Cl) = log(Cl_{pop}) + \eta_{Cl}\) The random effects follow normal distributions: \((\eta_{Tlag,i},\eta_{ka,i},\eta_{V,i},\eta_{Cl,i}) \sim \mathcal{N}(0,\Omega)\) \(\Omega\) is the variance-covariance matrix defining the distributions of the vectors of random effects, here: \(\Omega = \begin{pmatrix} \omega_{Tlag}^2 & 0 & 0 & 0 \\ 0 & \omega_{ka}^2 & 0 & 0 \\ 0 & 0 & \omega_V^2 & 0 \\ 0 & 0 & 0 & \omega_{Cl}^2 \end{pmatrix}\) In this example, two correlations between \(\eta_{Tlag}\) and \(\eta_{ka}\) and between \(\eta_{V}\) and \(\eta_{Cl}\) are added to the model. They are defined with two population parameters called \ (\text{corr_Tlag_ka}\) and \(\text{corr_V_Cl}\) that appear in the variance-covariance matrix. So the only difference in the individual model is in \(\Omega\), that is now: \(\Omega = \begin{pmatrix} \omega_{Tlag}^2 & \omega_{Tlag} \omega_{ka} \text{corr_Tlag_ka} & 0 & 0 \\ \omega_{Tlag} \omega_{ka} \text{corr_Tlag_ka} & \omega_{ka}^2 & 0 & 0 \\ 0 & 0 & \omega_V^2 & \ omega_{V} \omega_{Cl} \text{corr_V_Cl} \\ 0 & 0 & \omega_{V} \omega_{Cl} \text{corr_V_Cl} & \omega_{Cl}^2 \end{pmatrix}\) So the correlation matrix is related to the variance-covariance matrix \(\Omega\) as: Why should the correlation be estimated as part of the population parameters? The effect of correlations is especially important when simulating parameters from the model. This is the case in the VPC or when simulating new individuals in Simulx to assess the outcome of a different dosing scenario for instance. If in reality individuals with a large distribution volume also have a large clearance (i.e there is a positive correlation between the random effects of the volume and the clearance), but this correlation has not been included in the model, then the concentrations predicted by the model for a new cohort of individuals will display a larger variability than they would in reality. How do the EBEs change after having included correlation in the model? Before adding correlation in the model, the EBEs or the individual parameters sampled from the conditional distribution may already be correlated, as can be seen in the “correlation between random effects” plot. This is because the individual parameters (EBEs or sampled) are based on the individual conditional distributions, which takes into account the information given by the data. Especially when the data is rich, the data can indicate that individuals with a large volume of distribution also have a large clearance, even if this correlation is not yet included in the model. Including the correlation in the model as a population parameter to estimate allows to precisely estimate its value. Usually, one can see a stronger correlation for the corresponding pair of random effects when the correlation is included in the model compared to when it is not. In this example, after including the correlations in the individual model, the joint distribution of \(\eta_{V}\) and \(\eta_{Cl}\) displays a higher correlation coefficient (0.439 compared to 0.375 previously): • warfarin_distribution4_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) In this example, \(Tlag_i\) does not vary in the population, which means that \(\eta_{Tlag,i}=0\) for all subjects i, while the three other random effects are correlated: Estimated population parameters now include the 3 correlations between \(\eta_{ka,i}\), \(\eta_{V,i}\) and \(\eta_{Cl,i}\) : Parameters without random effects Adding/removing inter-individual variability By default, all parameters have inter-individual variability. To remove it, click on the checkbox of the random effect column: How the parameters with no variability are estimated is explained here. Custom parameter distributions Some datasets may require more complex parameter distributions that those pre-implemented in the Monolix GUI. This video shows how to implement a lognormal distribution with Box-Cox transformation and how to bound a parameter between two values using a transformed logit distribution (this latter case can be handled automatically from Monolix2019R1). 4.2.3.Bounded parameter distribution Bounded parameters in the interface Starting from the 2019 version, it is possible to “extend” the logit distribution to be bounded in [a, b] rather than in [0, 1]. For that, set your parameter in a logit normal distribution. The setting button appear next to the distribution. Clicking on it will allow to define your bounds as in the following figure. Notice that if your parameter initial value is not in [0, 1], the bounds are automatically adapted and the following warning message is proposed “The initial value of XX is greater than 1: the logit limit is adjusted” Bounded parameters in the structural model For versions of Monolix below 2019R1, a bounded parameter distribution, for example between a and b, can not be set directly through the interface, but have to be defined in two steps: (1) an auxiliary parameter and its distribution choice in the GUI, and (2) a transformation of the auxiliary parameter into the parameter of interest in the structural model file. The same approach should be used in Monolix2019R1 to define a single bound (upper or lower). Let’s take a simple PK example where a volume V is constrained. The structural model for this example is: input = {V, k} ; PK model definition Cc = pkmodel(V, k) • Thus, to have a parameter V between two bounds a=1 and b=10, you have to define the structural model as below input = {V_logit, k} ; PK model definition a = 1 b = 10 V_bound = a+V_logit*(b-a) Cc = pkmodel(V=V_bound, k) In the “Statistical model & Tasks” tab of the GUI, the distribution for V_logit should be set to LOGIT. • To have a parameter V larger than a=1 (with ‘a’ different from 0), you have to define the structural model as below input = {V_log, k} ; PK model definition a = 1 V_bound = a+V_log Cc = pkmodel(V=V_bound, k) In the “Statistical model & Tasks” tab of the GUI, the distribution for V_log should be set to LOGNORMAL. • To have a parameter V smaller than b=10, you have to define the structural model as below input = {V_log, k} ; PK model definition b = 10 V_bound = b-V_log Cc = pkmodel(V=V_bound, k) In the “Statistical model & Tasks” tab of the GUI, the distribution for V_log should be set to LOGNORMAL. Notice that, using that transformation, you have to multiply the standard error of V_logit by (b-a) in the first case to have the standard error of the initial V_bound parameter. It is not necessary for the two other cases as it is an offset. In addition, you can output V_bound for each individual using the table statement. 4.2.4.Model for individual covariates Objectives: learn how to implement a model for continuous and/or categorical covariates. Projects: warfarin_covariate1_project, warfarin_covariate2_project, warfarin_covariate3_project, phenobarbital_project Model with continuous covariates • warfarin_covariate1_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) The warfarin data contains 2 individual covariates: weight which is a continuous covariate and sex which is a categorical covariate with 2 categories (1=Male, 0=Female). We can ignore these columns if are sure not to use them, or declare them using respectively the reserved keywords CONTINUOUS COVARIATE and CATEGORICAL COVARIATE to define continuous and categorical covariate. Even if these 2 covariates are now available, we can choose to define a model without any covariate by not clicking on any check box in the covariate model. Here, an unchecked box in the line of the parameter V and the column of the covariate wt means that there is no relationship between weight and volume in the model. A diagnosis plot Individual parameters vs covariates is generated which displays possible relationships between covariates and individual parameters (even if these covariates are not used in the model): On the figure, we can see a strong correlation between the volume V and both the weight wt and the sex. One can also see a correlation between the clearance and the weight wt. Therefore, the next step is to add some covariate to our model. • warfarin_covariate2_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) We decide to use the weight in this project in order to explain part of the variability of \(V_i\) and \(Cl_i\). We will implement the following model for these two parameters: $$\log(V_i) = \log(V_{\rm pop}) + \beta_V \log(w_i/70) + \eta_{V,i} ~~\text{and}~~\log(Cl_i) = \log(Cl_{\rm pop}) + \beta_{Cl} \log(w_i/70) + \eta_{Cl,i}$$ which means that population parameters of the PK parameters are defined for a typical individual of the population with weight = 70kg. More details about the model The model for \(V_{i}\) and \(Cl_{i}\) can be equivalently written as follows: $$ V_i = V_{\rm pop} ( w_i/70 )^{\beta_V} e^{ \eta_{V,i} } ~~\text{and}~~ Cl_i = Cl_{\rm pop} ( w_i/70 )^{\beta_{Cl}} e^{ \eta_{Cl,i} }$$ The individual predicted values for \(V_i\) and \(Cl_i\) are therefore $$\bar{V}_i = V_{\rm pop} \left( w_i/70 \right)^{\beta_V} ~~\text{and}~~ \bar{Cl}_i = Cl_{\rm pop} \left( w_i/70 \right)^{\beta_{Cl}} $$ and the statistical model describes how \(V_i\) and \(Cl_i\) are distributed around these predicted values: $$ \log(V_i) \sim {\cal N}( \log(\bar{V}_i) , \omega^2_V) ~~\text{and}~~\log(Cl_i) \sim {\cal N}( \log(\bar{Cl}_i) , \omega^2_{Cl}) $$ Here, \(\log(V_i)\) and \(\log(Cl_i)\) are linear functions of \(\log(w_i/70)\): we then need to transform first the original covariate \(w_i\) into \(\log(w_i/70)\) by clicking on the button CONTINUOUS next to ADD COVARIATE (blue button). Then, the following pop up arises You have to • define the name of the covariate you want to add (the blue frame). • define the associated equation (the green frame). • click on the ACCEPT button • You can define any formula for your covariate as long as you use mathematical functions available in the Mlxtran language. • You can use any covariate available in the list of covariates proposed in the window. Thus, if you have a Height and Weight as covariates, you can directly compute the Body Mass Index. • If you go over a covariate with your mouse, all the information (min, mean, median, max and weighted mean) are displayed as a tooltip. The weighted mean is defined as \[ \text{weighted mean} (cov) = \exp \Big( \sum_i \frac{\text{nbObs}_i }{\text{nbObs}} \log(cov_i) \Big)\] where \(\text{nbObs}_i\) is the number of observation for the \(i^{th}\) individual and \(\text{nbObs}\) is the total number of observations. • If you click on the covariate name, it will be written in the formula. • You can use this formula box to replace missing continuous covariate values by an imputed value. This is explained in the feature of the week #141 below. For example, if your continuous covariate takes only positive values, you can use a negative value for the missing values in your dataset, for example -99, and enter the following formula: max(COV,0) + min(COV,0)/COV*ImputedValue with the desired ImputedValue (COV is thAse name of your covariate). We then define a new covariate model, where \(\log(V_i)\) and \(\log(Cl_i)\) are linear functions of the transformed weight \(lw70_i\) as shown on the following figure: Notice that by clicking on the button FORMULA, you have the display of all the individual model equations. Coefficients \(\beta_{V}\) and \(\beta_{Cl}\) are now estimated with their s.e. and the p-values of the Wald tests are derived to test if these coefficients are different from 0. Again, a diagnosis plot Individual parameters vs covariates is generated which displays possible relationships between covariates and individual parameters (even if these covariates are not used in the model) as one can see on the figure below on the left. However, as there are covariates on the model, what is interesting is to see if there still are correlation between the random effects and the covariates as one can see on the figure below on the right. Note: To make it automatically, starting from the 2019 version, there is an arrow next (in purple in the next figure) to the continuous covariate from the data set and propose to add a log transformed covariate centered by the weighted mean. Model with categorical covariates • warfarin_covariate3_project (data = ‘warfarin_data.txt’, model = ‘lib:oral1_1cpt_TlagkaVCl.txt’) We use sex instead of weight in this project, assuming different population values of volume and clearance for males and females. More precisely, we consider the following model for \(V_i\) and \ $$\log(V_i) = \log(V_{\rm pop}) + \beta_V 1_{sex_i=F} + \eta_{V,i}~~\text{and}~~\log(Cl_i) = \log(Cl_{\rm pop}) + \beta_{Cl} 1_{sex_i=F} + \eta_{Cl,i}$$ where \(1_{sex_i=F} =1\) if individual i is a female and 0 otherwise. Then, \(V_{\rm pop}\) and \(Cl_{\rm pop}\) are the population volume and clearance for males while \(V_{\rm pop}, e^{\beta_V}\) and \(Cl_{\rm pop} e^{\beta_{Cl}}\) are the population volume and clearance for females. By clicking on the purple button DISCRETE, the following window pops up You have to • define the name of the covariate you want to add (the blue frame). • define the associated categories (the green frame). • click on the ALLOCATE button to define all the categories. Then, you can • define the name of the categories (the blue frame). • define the reference category (the green frame). • click on ACCEPT Then, define the covariate model in the main GUI: Estimated population parameters, including the coefficients \(\beta_V\) and \(\beta_{Cl}\) are displayed with the results: Individual parameter graphic: Notice that for the volume and the clearance, the theoretical curve is is not the PDF of a lognormal distribution, due to the impact of the covariate sex. Calculating the typical value for each category Cl_pop represents the typical value for the reference category (in the example above SEX=0). The typical value for the other categories can be calculated based on the estimated beta parameters: • normal distribution: \( Cl_{SEX=1} = Cl_{pop} + beta\_Cl\_SEX\_1 \) • lognormal distribution: \( Cl_{SEX=1} = Cl_{pop} \times e^{beta\_Cl\_SEX\_1 } \) • logit distribution: \( F_{SEX=1} = \frac{1}{1+ e^{-\left( \log \left(\frac{F_{pop}}{1-F_{pop}} \right) + beta\_F\_SEX\_1 \right) }} \) Transforming categorical covariates • phenobarbital_project (data = ‘phenobarbital_data.txt’, model = ‘lib:bolus_1cpt_Vk.txt’) The phenobarbital data contains 2 covariates: the weight and the APGAR score which is considered as a categorical covariate. Instead of using the 10 original levels of the APGAR score, we will transform this categorical covariate and create 3 categories: Low = {1,2,3}, Medium = {4, 5, 6, 7} and High={8,9,10}. If we assume, for instance that the volume is related to the APGAR score, then \(\beta_{V,Low}\) and \(\beta_{V,High}\) are estimated (assuming that Medium is the reference level). In that case, one can see that both p-values concerning the transformed APGAR covariate are over .05. The same covariate effect for several parameters Adding a covariate effect on a parameter in the Individual Model section of the Statistical Model and Tasks tab, creates automatically a corresponding beta parameter (population parameter) that describes the strength of the effect. For example, adding weight WT on a parameter volume V1 creates beta_V1_WT. If you add the same covariate effect on two different parameters, eg. V1 and V2, then Monolix will create two corresponding parameters: beta_V1_WT and beta_V2_WT. If you need to add the same covariate effect for several parameters, eg. have only one parameter betaWT for both V1 and V2 in the example above, then this covariate effect has to be implemented in the structural model directly. It requires: • Reading weight WT from the dataset, so the columnt WT in the dataset has to be tagged as a regressor (not covariate) • Defining in the model new volumes with the covariate effect (sample code below, based on a two-compartment model from the PK library)[LONGITUDINAL] input = {WT, betaWT, ka, Cl, V1, Q2, V2} V1_withWT = V1*exp(betaWT*WT) V2_withWT = V2*exp(betaWT*WT) ; Parameter transformations Cc = pkmodel(ka, V, Cl, k12, k21) • In the Individual model in the GUI, set normal distribution for “betaWT” – to allow positive and negative values, and remove random effects from it – they are already included in the parameters V1, V2. This method gives more flexibility for more complex paramter-covariate relationships, also to include time-varying covariates in the model. However, remember that when you add covariate effects in the structural model – the statistical tests are not available for these relationships. Also, Monolix is more efficient when covariate effects are added in the GUI, as the algorithm “see” that V and beta_V_WT are related, and it improves the stability. 4.2.5.Complex parameter-covariate relationships Note that starting from version 2024 on, analytical solutions are used even if a time-varying regressor is used in the model. Complex parameter-covariate relationships and time-dependent continuous covariates Covariate-parameter relationships are usually defined via the Monolix GUI, leading for instance to exponential and power law relationships. However more complex parameter-covariate relationships such as Michaelis-Menten or Hill dependencies cannot the defined via the GUI because they cannot be put into the format where the (possibly transformed) covariate is added linearly on the transformed parameter. Similarly, when the covariate value is changing over time and thus not constant for each subject (or each occasion in each subject in case of occasions), the covariate cannot be added to the model via the GUI. In both cases, the effect of the covariate must be tagged as a regressor and the relationship must be defined directly in the model file. In the following, we will use as an example a Hill relationship between the clearance parameter Cl and the time-varying post-conception age (PCA) covariate, which is a typical way to scale clearance in paediatric pharmacokinetics: $$Cl_i = Cl_{pop} \frac{PCA^n}{PCA^n+A50^n} e^{\eta_i}$$ where \(Cl_i\) is the parameter value for individual i, \(Cl_{pop}\) the typical clearance for an adult, \(A50\) the PCA for the clearance to reach 50% mature, \(n\) the shape parameter and \(\eta_i \) the random effect for individual i. Step 1: To make the PCA covariate available as a variable is the model file, the first step is to tag it as a regressor column-type REGRESSOR when loading the data set (instead of using the CONTINUOUS COVARIATE column-type). Step 2: In the model file, the PCA covariate is passed as input argument and designated as being a regressor. The clearance Cl, the hill shape parameter n, and the A50 are passed as usual input input = {..., Cl, n, A50, PCA, ...} PCA = {use=regressor} If several regressors are used, be careful that the regressors are matched by order with the data set columns tagged as REGRESSOR (not by name). The relationship between the clearance Cl and the post-conception age PCA is defined in the EQUATION: block, before ClwithPCA is used (for instance in a simple (V,Cl) model): ClwithPCA = Cl * PCA^n / (PCA^n + A50^n) Cc = pkmodel(Cl=ClwithPCA, V) Note that the input parameter Cl includes the random effect ( \(Cl = Cl_{pop} e^{\eta_i} \) ), such that only the covariate term must be added. Because the parameter including the covariate effect CLwithPCA is not a standard keyword for macros, one must write Cl=ClwithPCA. Step 3: The definition of the parameters in the GUI deserves special attention. Indeed the parameters n and A50 characterize the covariate effect and are the same for all individuals: their inter-individual variability must be removed by unselecting the random effects. On the opposite, the parameter Cl keeps its inter-individual variability, corresponding to the \(e^{\eta_i} \) term. Step 4: When covariates relationships are not defined via the GUI, the p-value corresponding to the Wald test is not automatically outputted. It is however possible to calculate it externally. Assuming that we would like to test if the shape parameter n is significantly different from 1: $$H_0: \quad \textrm{”}n=1\textrm{”} \quad \textrm{versus} \quad H_1:\quad \textrm{”}n \neq 1\textrm{”}$$ Using the parameter estimate and the s.e outputted by Monolix, we can calculate the Wald statistic: $$W = \frac{\hat{n}-n_{ref}}{\textrm{s.e}(\hat{n})}$$ with \(\hat{n}\) the estimated value for parameter n, \(n_{ref}\) the reference value for n (here 1) and \(\textrm{s.e}(\hat{n}) \) the standard error for the n estimate. The test statistic W can then be compared to a standard normal distribution. Below we propose a simple R script to calculate the p-value: n_estimated = 1.32 n_ref = 1 se_n = 0.12 W = abs(n_estimated - n_ref)/se_n pvalue = 2 * pnorm(W, mean = 0, sd = 1, lower.tail = FALSE) Note that the factor 2 is added to do a two-sided test. Categorical time-varying covariates Categorical covariates may also be time-varying, for instance when the covariate represents concomitant medications over the course of the clinical trial or a fed/fasting state at the time of the In the following, we will use as an example a concomitant medication categorical covariate with 3 categories: no concomitant drug, concomitant drug 1, and concomitant drug 2. We would like to investigate the effect of the concomitant drug covariate on the clearance Cl. $$Cl_i = \begin{cases}Cl_{pop} e^{\eta_i} & \textrm{if no concomitant drug} \\ Cl_{pop} (1+\beta_1)e^{\eta_i} & \textrm{if concomitant drug 1} \\ Cl_{pop} (1+\beta_2)e^{\eta_i} & \textrm{if concomitant drug 2} \end{cases}$$ where \(Cl_i\) is the parameter value for individual i, \(Cl_{pop}\) the typical clearance if no concomitant drug, \(\beta_1\) the fractional change in case of concomitant drug 1, \(\beta_2\) the fractional change in case of concomitant drug 2 and \(\eta_i\) the random effect for individual i. Step 1: Encode the categorical covariate as integers. Indeed, while strings are accepted for the CATEGORICAL COVARIATE column-type, only numbers are accepted for the REGRESSOR column-type. Here we will use 0 = no concomitant medication, 1 = concomitant drug 1 and 2 = concomitant drug 2. Step 2: To make the COMED covariate available as a variable is the model file, tag it as a column-type REGRESSOR when loading the data set (instead of using the CATEGORICAL COVARIATE column-type). Step 3: In the model file, the COMED covariate is passed as input argument and designated as being a regressor. The clearance Cl, and the two beta parameters are passed as usual input parameters: input = {..., Cl, beta1, beta2, COMED, ...} COMED = {use=regressor} If several regressors are used, be careful that the regressors are matched by order with the data set columns tagged as REGRESSOR (not by name). To define the COMED covariate impact, we use a if/else statement in the EQUATION: block. The parameter value taking into account the COMED effect (called ClwithCOMED in this example) can then be used in an ODE system or within macros. if COMED==0 ClwithCOMED = Cl elseif COMED==1 ClwithCOMED = Cl * (1+beta1) ClwithCOMED = Cl * (1+beta2) Cc = pkmodel(Cl=ClwithCOMED, V) Note that the input parameter Cl includes the random effect ( \(Cl = Cl_{pop} e^{\eta_i} \) ), such that only the covariate term must be added. Because the parameter including the covariate effect CLwithCOMED is not a standard keyword for macros, one must write Cl=ClwithCOMED. If the categorical covariate has only two categories encoded as 0 and 1 (for instance COMED=0 for no concomitant medication and COMED=1 for concomitant medication), it is also possible to write the model in a more compact form: ClwithCOMED = Cl * (1 + beta * COMED) Step 4: The definition of the parameters in the GUI deserves special attention. Indeed the parameters beta1 and beta2 characterize the covariate effect and are the same for all individuals: their inter-individual variability must be removed by unselecting the random effects. On the opposite, the parameter Cl keeps its inter-individual variability, corresponding to the \(e^{\eta_i} \) term. In addition, we choose a normal distribution for beta1 and beta2 (with a standard deviation of zero as we have removed the random effects) in order to allow both positive and negative values. Covariate-dependent parameter When adding a categorical covariate on a parameter via the GUI, different typical values will be estimated for each group. However, all groups will have the same standard deviation. It can sometimes be useful to consider that the standard deviations also differ between groups. For example, healthy volunteers may have a smaller inter-individual variability than patients. From the 2018R1 version on, categorical covariates affecting both the typical value and the standard deviation have to be defined directly in the structural model, by using the covariate as a regressor and different parameters depending on the value of the regressor. Using a different parameter for each group permits to estimate a typical value and a standard deviation per group. Note that a regressor can contain only numbers, so the categorical covariate should be encoded with integers rather than strings. We show below an example, where the fixed effect and standard deviation of the volume V both depend on the covariate SEX. This require the definition of two different parameters VM (for male) and VF (for female). Step 1: To make the covariate SEX available as a variable is the model file, it has to be tagged as a regressor with column-type REGRESSOR when loading the data set (instead of using the CONTINUOUS COVARIATE and CATEGORICAL COVARIATE column-types). Step 2: In the model file shown below, the covariate SEX is passed as input argument and designated as being a regressor. Two parameters VM (for male) and VF (for female) are given as input, to be used as the volume V depending on the SEX. The use of VM or VF depending on the SEX value is defined in the EQUATION: block, before V is used (for instance in a simple (V,Cl) model): input = {Cl, VM, VF, SEX} SEX = {use=regressor} if SEX==0 V = VM V = VF Cc = pkmodel(Cl, V) output = {Cc} Step 3: The distribution of the parameters VM and VF is set as usual in the GUI. A different typical population value and a different standard deviation of the random effects will be estimated for males and females. Note: As SEX has been tagged as a regressor, it is not available as a covariate in the GUI. If a covariate effect of SEX on another parameter is needed, the column SEX can be duplicated in the dataset so that the duplicate can be tagged as a covariate. Covariate-dependent standard deviation This video shows a variation of the previous solution, where only the standard deviation of the random effect of a parameter is covariate-dependent, while the fixed effect is not affected. Transforming a continuous covariate into a categorical covariate The Monolix GUI allows to discretize a continuous covariate in order to handle it as a categorical covariate in the model, using binary 0/1 values. 4.2.6.Inter occasion variability (IOV) Objectives: learn how to take into account inter occasion variability (IOV). Projects: iov1_project, iov1_Evid_project, iov2_project, iov3_project, iov4_project A simple model consists of splitting the study into K time periods or occasions and assuming that individual parameters can vary from occasion to occasion but remain constant within occasions. Then, we can try to explain part of the intra-individual variability of the individual parameters by piecewise-constant covariates, i.e., occasion-dependent or occasion-varying (varying from occasion to occasion and constant within an occasion) ones. The remaining part must then be described by random effects. We will need some additional notation to describe this new statistical model. Let • \(\psi_{ik}\) be the vector of individual parameters of individual i for occasion k, where \(1\leq i \leq N\) and \(1\leq k \leq K\). • \({c}_{ik}\) be the vector of covariates of individual i for occasion k. Some of these covariates remain constant (gender, group treatment, ethnicity, etc.) and others can vary (weight, treatment, etc.). Let \(\psi_i = (\psi_{i1}, \psi_{i2}, \ldots , \psi_{iK})\) be the sequence of K individual parameters for individual i. We also need to define: • \(\eta_i^{(0)}\), the vector of random effects which describes the random inter-individual variability of the individual parameters, • \(\eta_{ik}^{(1)}\), the vector of random effects which describes the random intra-individual variability of the individual parameters in occasion k, for each \(1\leq k \leq K\). Here and in the following, the superscript (0) is used to represent inter-individual variability, i.e., variability at the individual level, while superscript (1) represents inter-occasion variability, i.e., variability at the occasion level for each individual. The model now combines these two sequences of random effects: \(h(\psi_{ik}) = h(\psi_{\rm pop})+ \beta(c_{ik} – c_{\rm pop}) + \eta_i^{(0)} + \eta_{ik}^{(1)} \) Remark: Individuals do not need to share the same sequence of occasions: the number of occasions and the times defining the occasions can differ from one individual to another. Occasion definition in a data set There are two ways to define occasions in a data set: • Explicitly using an OCCASION column. It is possible to have, in a data set, one or several columns with the column-type OCCASION. It corresponds to the same subject (ID should remain the same) but under different circumstances, occasions. For example, if the same subject has two successive different treatments, it should be considered as the same subject with two occasions. The OCC columns can contain only integers. • Implicitly using EVID column. If there is an EVID column with a value 4 then Monolix defines a washout and creates an occasion. Thus, if there are several times where EVID equals 4 for a subject, it will create the same number of occasions. Notice that if EVID equals 4 happens only once at the beginning, only one occasion will be defined and no inter occasion variability would be There are three kinds of occasions • Cross over study: In that case, data is collected for each patient during two independent treatment periods of time, there is an overlap on the time definition of the periods. A column OCCASION can be used to identify the period. An alternative way is to define an EVID column starting for all occasions with EVID equals 4. Both types of definition will be presented in the iov1 example. • Occasions with washout: In that case, data is collected for each patient during one period and there is no overlap between the periods. The time is increasing but the dynamical system (i.e. the compartments) is reset when the second period starts. In particular, EVID=4 indicates that the system is reset (washout) for example, when a new dose is administrated. • Occasions without washout: In that case, data is collected for each patient during one period and there is no overlap between the periods. The time is increasing and we want to differentiate periods in terms of occasions without any reset of the dynamical system. Multiple doses are administrated to each patient. each period of time between successive doses is defined as a statistical occasion. A column OCCASION is therefore necessary in the data file to define it. Cross over study • iov1_project (data = ‘iov1_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) In this example, PK data is collected for each patient during two independent treatment periods of time (each one starting at time 0). A column OCCASION is used to identify the study: This column is defined using the reserved keyword OCCASION. Then, the model associated to the individual parameter is as presented below In terms of covariates, we then see two parts as displayed below. We see the covariates • associated to the level ID (in green). It corresponds to all the covariates that are constant for each subject. • associated to the level OCC (in blue). It corresponds to all the covariates that are constant for each occasion but not on each subject. In the presented case, the treatment TRT varies for each individual. It contains inter-occasion information and is thus displayed with the occasion level. On the other hand, the SEX is constant for each subject. It contains then inter-individual information but no inter-occasion information. It is then displayed with the ID level. Covariates can be associated to the parameter if and only if their level of variability is coherent with the level of variability of the parameter. In the presented case, • TRT has inter-occasion variability. It can only be used with the parameter V that has inter-occasion variability. The two other parameters have only inter-individual variability and can therefore not use this TRT information. The interface is greyed and the user can not add this covariate to the parameters ka and Cl. • SEX has only inter-individual variability. It can therefore be associated to any parameter that has inter-individual variability. The population parameters now include the standard deviations of the random effects for the 2 levels of variability (omega is used fo IIV and gamma for IOV): Two important features are proposed in the plots. Firstly, in the individual fits, you can split or merge the occasions. When split is done, the name of the subject-occasion is the name of the subject, #, and the name of the occasion. Secondly, you can use the occasion to split the plots • iov1_Evid_project (data = ‘iov1_Evid_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) Another way to describe this cross over study is to use EVID=4 as explained in the data set definition. In that example, the EVID creates a washout and another occasion. Occasions with washout • iov2_project (data = ‘iov2_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) The time is increasing in this example, but the dynamical system (i.e. the compartments) is reset when the second period starts. Column EVID provides some information about events concerning dose administration. In particular, EVID=4 indicates that the system is reset (washout) when a new dose is administrated Monolix automatically proposes to define the treatment periods (between successive resetting) as statistical occasions and introduce IOV, as we did in the previous example. We can display the individual fit by splitting each occasion for each individual Or by merging the different occasions in a unique plot for each individual: : If you are modeling a PK as in this example, the washout implies that the occasions are independent. Thus, the cpu time is much faster as we do not have to compute predictions between occasions. Occasions without washout • iov3_project (data = ‘iov3_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) Multiple doses are administrated to each patient. We consider each period of time between successive doses as a statistical occasion. A column OCCASION is therefore necessary in the data file. The model for IIV and IOV can then be defined as usual. The plot of individual fits allows us to check that the predicted concentration is now continuous over the different occasions for each Multiple levels of occasions • iov4_project (data = ‘iov4_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’) We can easily extend such an approach to multiple levels of variability. In this example, columns P1 and P2 define embedded occasions. They are both defined as occasions: We then define a statistical model for each level of variability. 4.2.7.Inter-occasion variability and effect of guar gum on alcohol concentration in blood This case study uses the MonolixSuite to analyze and model the absorption and elimination of alcohol with or without a dietary additive of guar gum. It focuses in particular on the modeling of inter-occasion variability. Guar gum, also called guaran, is a polysaccharide extracted from guar beans. As a natural polymer, it has been used for many years as an emulsifier, thickener, and stabilizer in the food industry. In the pharmaceutical sector, guar gum and guar-gum based systems are frequently studied for the development of controlled-released formulations and colon targeted drug delivery systems, as guar gum can protect active molecules from the enzymes and pH in the stomach and small intestine and it can be degraded by intestinal bacteria in the colon. [Aminabhavi, T. M., Nadagouda, M. N., Joshi, S. D., & More, U. A. (2014). Guar gum as platform for the oral controlled release of therapeutics. Expert Opinion on Drug Delivery, 11(5), 753–766.] Moreover, guar gum may affect the bioavailability of concomitantly administered substances due to its effect on the rate of gastrointestinal transit and gastric emptying. The goal of this case study is then to assess the effect of guar gum on the absorption and bioavailability of alcohol. Data set The data has been published in: Practical Longitudinal Data Analysis, David J. Hand, Martin J. Crowder, Chapman and Hall/CRC, Published March 1, 1996 It is composed of measurements of blood alcohol concentrations in 7 healthy individuals. All subjects took alcohol at time 0 and gave a blood sample at 14 times over a period of 5 hours. The whole procedure was repeated at a later date but with a dietary additive of guar gum. The two different periods of time are encoded in the data with overlapping times, both starting at time 0. Although the precise amount of ingested alcohol is unknown, for this case study we assume each amount to be 10g (standard drink). Data exploration in Datxplore The data is loaded in Datxplore to explore it graphically. It appears in the Data tab as above. The time is in hours, the alcohol concentration in the column Y is in mg/L, and the amount in AMT is in mg. Censored observations are indicated with the column CENS tagged as CENSORING. The two periods of measurements, during which the subjects have received or not guar gum in addition to alcohol are distinguished with the column OCC, which is automatically recognized as OCCASION. In addition, I use the column DIET that I tag as CATCOV to indicate which occasion corresponds to the addition of guargum. This column contains 2 strings: noGuar for occasion 1 and withGuar for occasion 2. In the Plots tab, the plot of alcohol concentration vs time in log-scale seen below suggests to use a one-compartment model with a first-order absorption. A non-linear elimination appears for some individuals, but the data might not be sufficient to capture it. Thus a linear elimination should be tried as a first model. The different occasions can be visualized in several ways. First, hovering on a curve highlights the curve in solid yellow and the curve corresponding to the other occasion from the same subject in dashed yellow, as on the figure above. Second, OCC is available for stratification along with the covariate DIET in the “Stratify” panel , and can thus be used for splitting, coloring or filterting. Below are for example the subplots of the data split by OCC and colored by ID. Each subject-occasion is assigned a color, with matched color shades for subject-occasions corresponding to the same subject. This is convenient to compare at a glance the two occasions for all subjects. The inter-individual variability seems mostly reproduced from one occasion to the other, and concentration levels seem slightly higher for OCC=2. This global trend can be confirmed in linear scale below. Here the plot is colored by DIET, whose categories are matched with OCC: DIET=noGuar corresponds to OCC=1 and DIET=withGear to OCC=2. Thus, the main inter-occasion variability seen in the data seems to be explained by the covariate DIET. First analysis in PKanalix Non-compartmental analysis As a first analysis, we can check the difference in PK parameters for the two occasions with non-compartmental analysis (NCA) in PKanalix. After loading the previously saved Datxplore project in PKanalix, the following settings are chosen: • extravascular administration, • “linear up log down” integral method, • “missing” for blq after Tmax (censored observations after Tmax are not used in the analysis). The “Check lambda_z” panel, seen below, allows to check the regressions estimating the elimination slope. The default “adjusted R2” rule selects for each individual the optimal number of points used in the regression to get the best regression. While the plots allow to adjust the selection of points for some individuals, it is not necessary here. The plots already show some variability between individuals in the estimated lambza_z. Running the NCA gives the lambda_z and other PK parameters for each individual. In the “Plot” tab, the plot of individual parameters vs. covariates is convenient to visualize the variability in the parameters and compare the distributions without and with guar gum. Here the following parameters have been selected: AUCINF_pred, Cl_F_pred, Cmax, HL_Lambda_z, Tmax: Some difference can be seen between the two conditions for the half-life, the apparent clearance and Cmax, however the parameters also show large variability within each dietary condition. Compartmental analysis in PKanalix Next, a compartmental analysis (CA) can be run to estimate a compartmental model and compare the estimated parameters between the two conditions. PKanalix considers that the subject-occasions are independent, thus the parameters are optimized independently on each individual and on each occasion. This allows to check easily if different values are estimated for the two occasions. Choosing a one-compartment model with a first-order absorption and a linear elimination gives the following individual fits (after choosing initial values with the “aut-init” button): The absorption phase is not really well captured. Zooming on this phase can help confirm that the absorption should be delayed. Choosing the same model as before but with a lag time before the absorption now gives good individual fits: the estimated individual parameters show different distributions across the two conditions of DIET, in particular for ka, V and k: Since the data size is small, it is not clear whether these differences are significant. The effect of DIET on the alcohol kinetics can be assessed more precisely with a detailed population analysis in Monolix. Moreover, the bioavailability is not explicitly taken into account by this model, because it is not identifiable with only extravascular administrations, so it is included in the apparent volume V. In Monolix, it is possible to use more complex models than the simple PK models from the library, and in particular to add the bioavailability explicitly, allowing to assess in a more meaningful way whether guar gum could have an effect on the relative bioavailability depending on the value of DIET. Therefore, we export the compartmental model from PKanalix to Monolix. Population modelling in Monolix This opens a Monolix project in which the data and the structural model are set up like in PKanalix. In “Statistical model & Tasks”, the “Individual model” part is now split in two. The part on the left at the ID level describes the inter-individual variability (IIV), and includes by default a random effect for each parameter. The part on the right (highlighted in the figure below) is dedicated to the OCC variability-level, where it is possible to add random effects at the inter-occasion level: this would create inter-occasion variability (IOV). Since DIET varies from one occasion to another, it appears in this panel to explain part of the inter-occasion variability with a covariate effect. The boxes for adding covariate effects are greyed out, because at this step there is no inter-occasion variability. It is only possible to add covariate effects at the occasion level on parameters that either have inter-occasion variability with random effects at the occasion level, or parameters that have no random effects at the id and occasion levels. Model without inter-occasion variability The first step in this worklow aims at validating the structural model without taking into account differences between the occasions. Thus we keep the statistical model to default, select all tasks in the scenario and save the project as run01.mlxtran. Estimating this model does not show misspecifications on the plot of Observations vs Predictions in log-log scale: All parameters are estimated with a good confidence, except omega_ka which is small and with a high rse: On the individual fits seen below, disabling the option “Split occasions” in “Display” allows to visualize the two occasions on the same plot for each individual. The observed data can be colored by occasion or equivalently by DIET in Stratify. In this case, the predictions are identical for both occasions and overlap, since no inter-occasion variability is taken into account in the model. The prediction curves are displayed in purple for the first occasion and orange for the second, by default the curve for the first occasion on top, except for the last individual for which the second occasion is on top because it corresponds to a smaller observation period. The individual fits shows that capturing both occasions with the same prediction is not possible, because there are small non-random variations from one occasion to another, as seen during the data exploration. This could corresponds to variability in the parameters between the occasions, that can be taken into account in the “Statistical model & Tasks” tab, by adding some random effects at the inter-occasion level or defining a covariate effect of DIET. We will first focus on the random effects. Model with relative bioavailability and unexplained inter-occasion variability Each parameter can be modelled with IIV, IOV or both. Physiological considerations can help deciding if a parameter should have variability at each level or not. But in the absence of clear physiological knowledge, a possible approach is also to add a random effect at the occasion level on each parameter for which variability may be relevant, and check if the estimated standard deviation of the random effect is small. In this case, all parameters may show some inter-occasion variability. Indeed, the elimination can easily show some variations between different periods, and the dietary additive of guar gum might change the values of the parameters Tlag and ka characterizing the absorption. The volume V is unlikely to vary much for one occasion to another, however in this case V corresponds to the apparent volume, that includes the bioavailability of alcohol, which may vary with guar gum. Thus, it would be possible to set a random effect for IOV on V, or alternatively to modify the structural model to include explicitly the bioavailability, and add the random effect at the occasion level on the bioavailability instead of the volume. This is what we are going to do. Before modifying the structural model, we use the last estimates as new initial values to facilitate the estimation for the next run. Then, we open the structural model in the editor, and add an argument p=F in the pkmodel macro. This means that the proportion of absorbed drug will be defined by the parameter F, that should also be added in the input list: input = {Tlag, ka, V, k, F} ; PK model definition Cc = pkmodel(Tlag, ka, V, k, p=F) output = Cc This modified model is then saved under a new name. The compile button is convenient to check that there is no syntax error. The new model can then be loaded in Monolix instead of the previous one. After loading the model, Monolix brings us to the “Check initial estimates” tab to choose a good initial value for F_pop. Here F is not the absolute bioavailability, but it corresponds to a relative bioavailability between the individuals. Thus F_pop is the reference value for the bioavailability, and it should be fixed to 1. This can be done in the list of the initial estimates, by changing the estimation method for F_pop to “Fixed”: Now that the model includes the relative bioavailability explicitly, we can consider IOV for F instead of for V. Since V and F are not identifiable together, we should not include IIV for F while there is already IIV for V. Clicking on Formula displays the model for the individual parameters. For instance, the model for Tlag now includes a random effect eta_OCC_Tlag in addition to the random effect eta_ID_Tlag: This project is saved as run02.mlxtran and all tasks are run. The table of population parameters now include the standard deviations of the new random effects at the OCC level, which are called gamma: There a few high rses for the standard deviations of the random effects, because it is not possible to identify well all the random effects with such a small dataset. The random effects with the smallest standard deviations could probably be removed, such as omega_ka or gamma_F. In a later step, we will check more precisely which random effects should be removed. For now, we will first check the relationships between the random effects and the covariate DIET. On the individual fits, there are now different individual predictions for each occasion. The colors associated to each value of DIET for the observations can be changed in “Stratify” to match the colors of the predictions. The predictions from occasion 1 are in purple, they correspond to the first category noGuar. The second category withGuar corresponds to the second occasion with orange After clicking on Information, the individual parameter values appear on the plots for each occasion (for example here for the two first individuals): For V, which has only IIV, a single value is estimated for each individual across both occasions. For F, which has only IOV, it is important to note that estimated individual random effects from the distribution defined by gamma_F are independent across ids and occasions, and take into account the fact that F is slightly different for all subject-occasions. So a different value is estimated for each subject-occasion. Thus, the inter-occasion variability represents also an inter-individual variability. For parameters that have both IIV and IOV (Tlag, ka and k), the variability at the id level represents the additional variability between individuals that is common across both occasions. The individual fits show that the IOV allows to properly capture the observations for each subject-occasion, and the predicted alcohol concentration seems usually higher when individuals have taken guar gum, except for ids 5 and 6. Let’s check this with the other diagnostic plots. Assessing the effect of guar gum on alcohol’s PK First, the plot of individual parameters vs covariates can be used to compare the distributions of each parameters across the two occasions. Notable differences appear for ka, k and F. The kinetics with guar gum exhibit higher absorption rates and bioavailability, and smaller elimination rates. We can try to implement one or several of these differences in the model with a covariate effect, starting with the hypothesis that guar gum could affect the bioavailability of alcohol. The statistical tests in Results show that these differences do not correspond to a significant correlation between eta_ka and DIET, but there is a significant correlation with eta_k, and a slightly significant correlation with eta_F. The lack of significance for ka and F is explained by the small size of the data which affects the p-values. Second, we can have a look at the VPC split by DIET. Here, the prediction intervals are based on simulations that use the IOV included in the model, which is independent from DIET. Thus the prediction intervals are almost identical on each plot, while the empirical curves differ with DIET. With the 4 bins computed by default, empirical curves are well captured by the prediction intervals, so with this size of data, the small differences caused by guar gum do not cause a visible discrepancy of the model, but when setting the number of bins to 6 (see below), a small discrepancy appears in the absorption phase. Although we should keep in mind that the empirical percentiles represent only a small number of individuals, this is a hint that it could be relevant to take into account an effect of guar gum on the absorption or the bioavailability. Model with inter-occasion variability and occasion-varying covariate effect As a result of this diagnosis, we can now adjust the model, after using the last estimates as new initial values. Based on the diagnostic plots and the biological knowledge on possible mechanisms for the effect of guar gum, we will try to explain part of the IOV by adding a covariate effect of DIET on F. We save this modified project as run03.mlxtran and run all tasks. The new parameter beta_F_DIET_withGuar is estimated to a small value (0.08) but with a good standard error, and it results in a small decrease of gamma_F (from 0.07 to 0.034): In the statistical tests, the p-value for the Wald test, which checks whether the parameter is close to 0, is small but the test is not quite significant. In addition, the correlation between F and DIET is significant: The diagnostic plot also show that this correlation is strong: Therefore the covariate effect is relevant and should not be removed from the model. Moreover, the -2*LL and BICc for run03.mlxtran are slightly smaller than run02.mlxtran (2 points of difference), showing that the modified model still captures the data as well as the previous run. This can be seen easily by comparing the runs in Sycomore: Estimation without simulated annealing Finally, in the next step we are going to check more precisely whether some random effects are not well estimated and should be removed. For the next run, we are going to modify the settings of SAEM to disable the simulated annealing: This option is explained in details in this video. Briefly, it constrains the variance of the random effects to decrease slowly during the estimation, in order to explore a large parameter space to avoid getting stuck in a local maximum. A side-effect of the simulated annealing is that it may keep the omega values artificially high, and prevent the identification of parameters for which the variability is in fact zero. This leads to large values in the standard errors. So when large standard errors are estimated for random effects, like it is the case here for omega_ka and gamma_F, it is recommended to disable the simulated annealing once the estimated parameters are close to the solution. Before changing the settings, the last estimates should be used as new initial values to start really close to the solution. The modified project is saved as run04.mlxtran and SAEM is run. In the graphical report, omega_ka and gamma_F now decrease to a very small value: This confirms that there is not enough information in the data to identify the distributions for ka and F. Therefore, we can use the last estimates and then remove the IIV on ka and the IOV on F. The IOV on F can be removed while keeping the covariate effect of DIET, because F has also no IIV. After doing this, we run the whole scenario again for the new project run05.mlxtran. All the parameters are now estimated with quite good standard errors, considering the small size of the data: With the covariate effect on DIET on F, the discrepancy in the VPC for the occasions with guar gum is slightly smaller but still present, and is likely due to the variability in the data: This run is the final model. Despite the small size of the data, it is able to take into account IIV and IOV, and to explain a modest part of the inter-occasion variability in bioavailability of alcohol by the effect of guar gum. 4.2.8.Mixture of distributions Objectives: learn how to implement a mixture of distributions for the individual parameters. Projects: PKgroup_project, PKmixt_project Mixed effects models allow us to take into account between-subject variability. One complicating factor arises when data is obtained from a population with some underlying heterogeneity. If we assume that the population consists of several homogeneous subpopulations, a straightforward extension of mixed effects models is a finite mixture of mixed effects models. There are two approaches to define a mixture of models: • defining a mixture of structural models (via a regressor or via the bsmm function) –> click here to go to the page dedicated to this approach, • introducing a categorical covariate (known or latent). This approach is detailed here. The second approach assumes that the probability distribution of some individual parameters vary from one subpopulation to another one. The introduction of a categorical covariate (e.g., sex, phenotype, treatment, status, etc.) into such a model already supposes that the whole population can be decomposed into subpopulations. The covariate then serves as a label for assigning each individual to a subpopulation. In practice, the covariate can either be known or not. If it is unknown, the covariate is called a latent covariate and is defined as a random variable with a user-defined number of modalities in the statistical model. Differences in estimation and diagnosis methods appear to deal with this additional random variable: this difference represents a task of unsupervised classification. Mixture models usually refer to models for which the categorical covariate is unknown and unsupervised classification is needed. For the sake of simplicity, we will consider a basic model that involves individual parameters \((\psi_i,1\leq i \leq N)\) and observations \((y_{ij}, i \leq N, 1\leq j \leq n_i)\). Then, the easiest way to model a finite mixture model is to introduce a label sequence \((z_i , 1\leq i \leq N)\) that takes its values in \(\{1,2,\ldots,M\}\) such that \(z_i=m\) if subject i belongs to subpopulation In some situations, the label sequence \((z_i , 1\leq i \leq N)\) is known and can be used as a categorical covariate in the model. If \((z_i)\) is unknown, it can be modeled as a set of independent random variables taking their values in \(\{1,2,\ldots,M\}\) where for \(i=1,2,\ldots, N\), \(P(z_i = m)\) is the probability that individual i belongs to group m. We will assume furthermore that the \((z_i)\) are identically distributed, i.e., \(P(z_i = m)\) does not depend on i for \(m=1, \ldots, M\). Mixture of distributions based on a categorical covariate • PKgroup_project (data = ‘PKmixt_data.txt’, model = ‘lib:oral1_1cpt_kaVCl.txt’) The sequence of labels is known as GROUP in this project and comes from the dataset. It is therefore defined as a categorical covariate that classifies We can then assume, for instance different population values for the volume in the two groups and estimate the population parameters using this covariate model. Then, this covariate GROUP can be used as a stratification variable and is very important in the modeling. Mixture of distributions based on unsupervised classification with a latent covariate A latent covariate is defined as a random variable, and the probability of each modality is part of the statistical model and is estimated as well. Methods for estimation and diagnosis are different. After the estimation, for each individual the categorical covariate is not perfectly known, only the probabilities of each modality are estimated. Note also that latent covariates can be useful to model statistical mixtures of populations, but they provide no biological interpretation for the cause of the heterogeneity in the population since they do not come from the dataset. Latent covariates can not be handled with IOV. • PKmixt_project (data = ‘PKmixt_data.txt’, model = ‘lib:oral1_1cpt_kaVCl.txt’) We will use the same data with this project but ignoring the column GROUP (which is equivalent to assuming that the label is unknown). If we suspect some heterogeneity in the population, we can introduce a “latent covariate” by clicking on the grey button MIXTURE. Remark: several latent covariates can be introduced in the model, with different number of categories. We can then use this latent covariate lcat as any observed categorical covariate. Again, we can assume again different population values for the volume in the two groups by applying it on the volume random effect and estimating the population parameters using this covariate model. Proportions of each group are also estimated, plcat_1 which is the probability to have modality 1: Once the population parameters are estimated, the sequence of latent covariates, i.e. the group to which belongs each subject, can be estimated together with the individual parameters, as the modes of the conditional distributions. The sequence of estimated latent covariates lcat can be used as a stratification variable. We can for example display the VPC in the 2 groups: By plotting the distribution of the individual parameters, we see that V has a bimodal distribution 5.Tasks and results Monolix tasks Monolix allows a workflow with several tasks. On the interface, one can see six different tasks • POP. PARAM.: it corresponds to the estimation of the population parameters, • EBEs: it corresponds to the estimation of the individual parameters using the conditional mode, i.e. the most probable individual parameters. • CONDITIONAL DISTRIBUTION: It corresponds to the draws individual parameters based on the conditional distribution. It allows to compute the mean value of the conditional distribution. • STD. ERRORS.: it corresponds to the calculation of the Fisher information matrix and standard errors. Two methods are proposed for it. Either using the linearization method or using the stochastic approximation. The choice between those methods is done with the “Use linearization method” toggle under the tasks. • LIKELIHOOD: it corresponds to the explicit calculation of the log-likelihood. A specificity of the SAEM algorithm is that it does not explicitly compute the objective function. Thus, a dedicated task is proposed. Two methods are proposed for it. Either using the linearization method or using the importance sampling. The choice between those methods is done with the “Use linearization method” toggle under the tasks. This toogle is for both STD ERRORS and LIKELIHOOD tasks to be more relevant. • PLOTS: it corresponds to the generation of the plots. Also, different types of results are available in the form of plots and tables. The tasks can be run individually by clicking on the associated button, or you can define a workflow by clicking on the tasks to run (on the small light blue checks) and click on the play button (in green) as proposed on the figure below. Notice that you can initialize all the parameters and the associated methods in the “Initial Estimates” frame as described here. Moreover, Monolix includes a convergence assessment tool. It is possible to execute a workflow as defined above but for different, randomly generated, initial values of fixed effects. Monolix results All the output files are detailed here. Monolix-R functions Monolix is now proposed with an API leading to the possibility to have access to the project exactly by the same way as you would do with the interface. All the functions are described here. Parameter initial estimates and associated methods Initial values are specified for the fixed effects, for the standard deviations of the random effects and for the residual error parameters. These initial values are available through the frame “Initial estimates” of the interface as can be seen on the following figure. It is recommended to initialize the estimation to have faster convergence. Initialization of the estimates Initialization of the “Fixed effects” The user can modify all the initial values of the fixed effects. When initializing the project, the values are set by default to 1. To change it, the user can click on the parameter and change the Notice that when you click on the parameter, an information is provided to tell what value is possible. The constraint depends on the distribution chosen for the parameter. For exemple, if the volume parameter V is defined as lognormal, its initial value should be strictly greater than 0. In that case, if you set a negative value, an error will be thrown and the previous parameter will be When a parameter depends on a covariate, initial values for the dependency (named with \(\beta\) prefix, for instance beta_V_SEX_M to add the dependency of SEX, on parameter V) are displayed. The default initial value is 0. In case of a continuous covariate, the covariate is added linearly to the transformed parameter, with a coefficient \(\beta\). For categorical covariates, the initial value for the reference category will be the one of the fixed effect, while for all other categories it will be the initial value for the fixed effect plus the initial value of the \(\beta\), in the transformed parameter space. It is possible to define different initial values for the non-reference categories. The equations for the parameters can be visualized by clicking on button formula in the “Statistical model & Tasks” frame Initialization of the “Standard deviation of the random effects” The user can modify all the initial values of the standard deviations of the random effects. The default value is set to 1. We recommend to keep these values high in order for SAEM to have the possibility to explore the domain. Initialization of the “Residual error parameters” The user can modify all the initial values of the residual error parameters. There are as many lines as continuous outputs of the model. The default value depends on the parameter (1 for “a”, 0.3 for “b” and 1 for “c”). What method can I use for the parameters estimations? For all the parameters, there are several methods for the estimation • “Fixed”: the parameter is kept to its initial value and so, it will not be estimated. In that case, the parameter name is set to orange. • “Maximum Likelihood Estimation”: The parameter is estimated using maximum likelihood. In that case, the parameter name remains grey. This is the default option • “Maximum A Posteriori Estimation”: The parameter is estimated using maximum a posteriori estimation. In that case, the user has to define both a typical value and a standard deviation. For more about this, see here. In that case, the parameter name is colored in purple. To change the method, click on the right of the parameter as on the following. A window pops up to choose the method as on the following figure Notice that you have buttons to fix all the parameters or estimate all on the top right of the window as can be seen on the following figure How to initialize your parameters? On the use of last estimates If you have already estimated the population parameters for this project, then you can use the “Use the last estimates” buttons to use the previous estimates as initial values. The user has the possibility to use all the last estimates or only the fixed effects. The interest of using only the fixed effects is not to have too low initial standard effects and thus let SAEM explore a larger domain for the next run. Check initial fixed effects When clicking on the “Check the initial fixed effects”, the simulations obtained with the initial population fixed effects values are displayed for each individual together with the data points, in case of continuous observations. It allows also an automatic initialization in case of a model of the PK library as described here. 5.1.1.1.Check initial estimates and auto init Check initial fixed effects The subtab “Check initial estimates” is part of the tab “Initial estimates“. When clicking on the “Check the initial estimates”, the simulations obtained with the initial population fixed effects values and the individual designs (doses and regressors) are displayed for each individual together with the data points, in case of continuous observations. This feature is very useful to find some “good” initial values. Although Monolix is quite robust with respect to initial parameter values, good initial estimates speed up the estimation. You can change the values of the parameters on the bottom of the screen and see how the agreement with the data change. In addition, you can change the axis to log-scale and choose the same limit on all axes to have a better comparison of the individuals. When you are confident with the initial values, you should click on the “SET AS INITIAL VALUES” button on the top of the frame to validate the selection. In addition, if you think that there are not enough points for the prediction (if there are a lot of doses for example), you can change the discretization and increase the number of points as displayed in the blue box of the figure. If several observation ids have been mapped to model outputs (for example a parent and a metabolite, or a PK and a PD observation), you can select which output to look at under the output section on the right: Using the reference in the “check initial estimates” Starting from the 2019 version, it is possible to add a reference and thus change a parameter to see the impact of the variation of this parameter. In this example, we click on reference to use the current fit as reference and change k12 from 1 to 2 as can be seen on the following figure. The solid red curve corresponds to the current curve and the dashed one corresponds to the reference. At any time, you can change the reference to use the current fit, and restore and delete the reference or delete all references by clicking on the icons at the top right of the frame. Automatic initialization of the parameters Starting from the 2021 version, an auto-init section appears on the right side of the frame: By clicking on RUN, Monolix will compute initial population parameters that best fit the data points, starting from parameter values currently used in the initial estimates panel, and using the data from the 12 first individuals by default, and from all observations mapped to a model output. It is possible to change the set of individuals used in the Ids selection panel just below the RUN The computation is done with a custom optimization method, on the pooled data, without inter-individual variability. The purpose is not to find a perfect match but rather to have all the parameters in the good range for starting the population modeling approach. While auto-init is running, we show the evolution of the cost of the optimization algorithm over the iterations. It is possible to stop the algorithm at any time if you find the cost has decreased sufficiently and you want to have a look at the parameter values. Note that the more individuals are selected, the longer the run will take. Moreover, it may be easier for the auto-init algorithm to find a point in the parameter space that is sensitive to specific model features (eg a third compartment, a complex absorption) if you select only a few individuals (1-3 for instance) for which this feature can be observed. After clicking on the button, the population parameters are updated and the corresponding fit is displayed. To use these parameters as initial estimates, you need to click on the button “SET AS INITIAL VALUES”. A new reference appears with previous parameter values so that you can come back to previous values if you are not satisfied with the fit. Note that the auto-init procedure takes into account the current initial values. Therefore, in the few cases where the auto-init might give poor results, it is possible to improve the results by changing manually the parameter values before running the auto-init again. 5.1.2.Population parameter estimation using SAEM The estimation of the population parameters is the key task in non-linear mixed effect modeling. In Monolix, it is performed using the Stochastic Approximation Expectation-Maximization (SAEM) algorithm [1]. SAEM has been shown to be extremely efficient for both simple and a wide variety of complex models: categorical data [2], count data [3], time-to-event data [4], mixture models [5–6], differential equation based models, censored data [7], … The convergence of SAEM has been rigorously proven [1] and its implementation in Monolix is particularly efficient. No other algorithms are available in Monolix. Calculations: the SAEM algorithm Running the population parameter estimation task The pop-up window which permits to follow the progress of the task is shown below. The algorithm starts with a small number (5 by default) of burn-in iterations for initialization which are displayed in the following way: (note that this step can be so fast that it is not visible by the user) Afterwards, the evolution of the value for each population parameter over the iterations of the algorithm is displayed. The red line marks the switch from the exploratory phase to the smoothing phase. The exact value at each iterations can be followed by hovering over the curve (as for Cl_pop below). The convergence indicator (in purple) helps to detect that convergence has been reached (see below for more details). Dependencies between tasks The “Population parameter” estimation task must be launched before running any other task. To skip this task, the user can fix all population parameters. If all population parameters have been set to “fixed”, the estimation will stop after a single iteration and allow the user to continue with the other tasks. The convergence indicator The convergence indicator (also sometimes called complete likelihood) is defined as the joint probability distribution of the data and the individual parameters and can be decomposed using Bayes law: \(\sum_{i=1}^{N_{\text{ind}}}\log\left(p(y_i, \phi_i; \theta)\right)=p(y_i| \psi_i; \theta)p(\psi_i; \theta)\) Those two terms have an analytical expression and are easy to calculate, using as \(\phi_i\) the individual parameters drawn by MCMC for the current iteration of SAEM. This quantity is calculated at each SAEM step and is useful to assess the convergence of the SAEM algorithm. The convergence indicator aggregates the information from all parameters and can serve to detect if the SAEM algorithm has already converged or not. When the indicator is stable, that is it oscillates around the same value without drifting, then we can be pretty confident that the maximum likelihood has been achieved. The convergence indicator is used, among other measures, in the auto-stop criteria to switch from the exploratory phase to the smoothing phase. Note that the likelihood (i.e the objective function) \(\sum_{i=1}^{N_{\text{ind}}}\log\left(p(y_i; \theta)\right)\) cannot be computed in closed form because the individual parameters \(\phi_i\) are unobserved. It requires to integrate over all possible values of the individual parameters. Thus, to estimate the log-likelihood an importance sampling Monte Carlo method is used in a separate task (or an approximation is calculated via linearization of the model). The simulated annealing The simulated annealing option (setting enabled by default) permits to keep the explored parameter space large for a longer time (compared to without simulated annealing). This allows to escape local maximums and improve the convergence towards the global maximum. In practice, the simulated annealing option constrains the variance of the random effects and the residual error parameters to decrease by maximum 5% (by default – the setting “Decreasing rate” can be changed) from one iteration to the next one. As a consequence, the variances decrease more slowly: The size of the parameter space explored by the SAEM algorithm depends on individual parameters sampled from their conditional distribution via Markov Chain Monte Carlo. If the standard deviation of the conditional distributions is large, the individual parameters sampled at iteration k can be quite far away from those at iteration (k-1), meaning a large exploration of the parameter space. The standard deviation of the conditional distribution depends on the standard deviation of the random effects (population parameters ‘omega’). Indeed, the conditional distribution is \(p(\psi_i|y_i;\hat {\theta})\) with \(\psi_i\) the individual parameters for individual \(i\), \(\hat{\theta}\) the estimated population parameters, and \(y_i\) the data (observations) for individual \(i\). The conditional distribution thus depends on the population parameters, and the larger the population parameters ‘omega’, the larger the standard deviation of the conditional distribution. That’s why we want to keep large ‘omega’ values during the first iterations. Methods for the parameters without variability Parameters without variability are not estimated in the same way as parameters with variability. Indeed, the SAEM algorithm requires to draw parameter values from their marginal distribution, which exists only for parameters with variability. Several methods can be used to estimate the parameters without variability. By default, these parameters are optimized using the Nelder-Mead simplex algorithm (Matlab’s fminsearch method). Other options are also available in the SAEM settings: • No variability (default): optimization via Nelder-Mead simplex algorithm • Add decreasing variability: an artificial variability (i.e random effects) is added for these parameters, allowing estimation via SAEM. The variability starts at omega=1 and is progressively decreased such that at the end of the estimation process, the parameter has a variability of 1e-5. The decrease in variability is exponential with a rate based on the maximum number of iterations for both the exploratory and smoothing phases. Note that if the autostop is triggered, the resulting variability might me higher. • Variability at the first stage: during the exploratory phase of SAEM, an artificial variability is added and progressively forced to 1e-5 (same as above). In the smoothing phase, the Nelder-Mead simplex algorithm is used. Depending on the specific project, one or the other method may lead to a better convergence. If the default method does not provide satisfying results, it is worth trying the other methods. In terms of computing time, if all parameters are without variability, the first option will be faster because only the Nelder-Mead simplex algorithm will be used to estimate all the fixed effects. If some parameters have random effects, the first option will be slower because the Nelder-Mead and the SAEM algorithm are computed at each step. In that case the second or third option will be faster because only the SAEM algorithm will be required when the artificial variability is added. Alternatively, the standard deviation of the random effects can be fixed to a small value, for instance 5% for log-normally distributed parameters. (See next section on how to enforce a fixed value). With this method, the SAEM algorithm can be used, and the variability is kept small. Other estimation methods: fixing population parameters or Bayesian estimation Instead of the default estimation method with SAEM, it is possible to fix a population parameter, or set a prior on the estimate and use Bayesian estimation. This page gives details on these methods and how to use them. In the graphical user interface The estimated population parameters are displayed in the POP.PARAM section of the RESULTS tab. Fixed effects names are “*_pop”, the standard deviation of the random effects “omega_*”, parameters of the error model “a”, “b”, “c”, the correlation between random effects “corr_*_*” and parameters associated to covariates “beta_*_*”. The standard deviation of the random effects is also expressed as coefficient of variation (CV) – a feature present in versions Monolix 2023 or above. The CV calculation depends on the parameter distribution: • lognormal: \(CV=100*\sqrt{exp(\omega_p^2) -1}\) • normal: \(CV=100*\frac{\omega_p}{p_{pop}}\) • logitnormal and probitnormal: the CV is computed by Monte-Carlo. 100000 samples X are drawn from the distribution (defined by \(\omega_p\) and \(p_{pop}\)) and the CV is calculated as the ratio of the sample standard deviation over the sample mean: \(CV=100*\frac{sd(X)}{mean(X)}\) When you run the “Standard errors” task, then the population parameter table contains also the standard error (s.e) and relative standard error (r.s.e). Note that the CV% represents the inter-individual variability, while the RSE% represents the uncertainty on the estimated parameters. Starting from version 2024, alongside the standard errors, the upper and lower percentiles, here P2.5 and P97.5 of the confidence interval with level 1−\( \alpha\) are computed for population parameters. This calculation depends on the execution of the “Standard errors” task, as it relies on the standard error of the respective parameter. In order to take into account the boundaries of the parameters (e.g omega or V_pop cannot be negative), the SE is transformed into the gaussian domain, the CI in gaussian domain is calculated assuming a normal distribution and finally the CI is backtransformed. Its transformation is determined by the type of parameter and its distribution assumption: • Normal distributed parameters: \(CI_{1-\alpha} (\theta _{pop})=[\theta_{pop} + q_{\alpha /2}\times s.e.(\theta_{pop}), ~\theta_{pop} + q_{1 – \alpha /2}\times s.e.(\theta_{pop})]\). This applies for typical values (fixed effects) of normally distributed parameters and covariate effects (betas). • Lognormal distributed parameters: \(CI_{1-\alpha} (\theta _{pop})=[\exp(\mu_{pop} + q_{\alpha /2}\times s.e.(\mu_{pop})), ~\exp(\mu_{pop} + q_{1 – \alpha /2}\times s.e.(\mu_{pop}))]\), where \( \mu_{pop} = \ln(\theta_{pop})\). This applies to typical values (fixed effects) of lognormally distributed parameters, standard deviations of the random effects (omegas), and error model • Logitnormal distributed parameters: \(CI_{1-\alpha} (\theta _{pop})=[logit^{-1}(\mu_{pop} + q_{\alpha /2}\times s.e.(\mu_{pop})), ~logit^{-1}(\mu_{pop} + q_{1 – \alpha /2}\times s.e.(\mu_{pop}))] \), where \( \mu_{pop} = logit(\theta_{pop})\). This applies to typical values (fixed effects) of logit-normally distributed parameters and correlation parameters. where \(q_{\alpha /2}\) is the quantile of order \( \alpha /2\) for a standard normal distribution. The confidence interval level \( \alpha\) can be modified in the settings of the “Standard errors” task within the Statistical Model & Tasks tab by clicking on the wrench icon. Information about the SAEM task performance are below the table (orange frame): • The total elapsed time for this task • The number of iterations in the exploratory and smoothing phases, along with a message that indicates whether the convergence has been reached (“auto-stop”), or if the algorithm arrived at the maximum number of iterations (“stopped at the maximum number of iterations/auto-stop criteria have not been reached”) or if it was stopped by the user (“manual stop”). (for Monolix versions 2021 or above) A “Copy table” icon on the top of the table allows to copy it in Excel, Word, etc. The table format and display is kept. Starting from version 2024R1, if categorical covariate effects are included in the statistical model, a “Display fixed effects by category” toggle allows to add a section named “Fixed effects by category” in the table, with the values of the typical population parameters in each category of the categorical covariates and their SEs, RSEs and confidence intervals. In the output folder After having run the estimation of the population parameters, the following files are available: • summary.txt: contains the estimated population parameters (and the number of iterations in Monolix2021R1), in a format easily readable by a human (but not easy to parse for a computer) • populationParameters.txt: contains the estimated population parameters (by default in csv format), including the CV. • populationParametersByGroups.txt: contains the estimated population parameters by categories of categorical covariates included in the statistical model. This file is not generated if there are no categorical covariate effects. • predictions.txt: contains for each individual and each observation time, the observed data (y), the prediction using the population parameters and population median covariates value from the data set (popPred_medianCOV), the prediction using the population parameters and individual covariates value (popPred), the prediction using the individual approximate conditional mean calculated from the last iterations of SAEM (indivPred_SAEM) and the corresponding weighted residual (indWRes_SAEM). • IndividualParameters/estimatedIndividualParameters.txt: individual parameters corresponding to the approximate conditional mean, calculated as the average of the individual parameters sampled by MCMC during all iterations of the smoothing phase. When several chains are used (see project settings), the average is also done over all chains. Values are indicated as *_SAEM in the file. Parameters without variability: □ method “no variability” or “variability at the first stage”: *_SAEM represents the value at the last SAEM iteration, so the estimated population parameter plus the covariate effects. In absence of covariate, all individuals have the same value. □ method “add decreasing variability”: *_SAEM represents the average of all iterations of the smoothing phase. This value can be slightly different from individual to individual, even in te absence of covariates. • IndividualParameters/estimatedRandomEffects.txt: individual random effects corresponding to the approximate conditional mean, calculated using the last estimations of SAEM (*_SAEM). For parameters without variability, see above. More details about the content of the output files can be found here. The settings are accessible through the interface via the button next to the parameter estimation task. Burn-in phase: The burn-in phase corresponds to an initialization of SAEM: individual parameters are sampled from their conditional distribution using MCMC using the initial values for the population parameters (no update of the population parameter estimates). Note: the meaning of the burn-in phase in Monolix is different to what is called burn-in in Nonmem algorithms. • Number of iterations (default: 5): number of iterations of the burn-in phase Exploratory phase: • Auto-stop criteria (default: yes): if ticked, auto-stop criteria are used to automatically detect convergence during the exploratory phase. If convergence is detected, the algorithm switches to the smoothing phase before the maximum number of iterations. The criteria take into account the stability of the convergence indicator, omega parameters and error model parameters. • Maximum number of iterations (default: 500, if auto-stop ticked): maximum number of iterations for the exploratory phase. Even if the auto-stop criteria are not fulfilled, the algorithm switches to the smoothing phase after this maximum number of iterations. A warning message will be displayed in the GUI if the maximum number of iterations is reached while the auto-stop criteria are not • Minimum number of iterations (default: 150, if auto-stop ticked): minimum number of iterations for the exploratory phase. This value also corresponds to the interval length over which the auto-stop criteria are tested. A larger minimum number of iterations means that the auto-stop criteria are harder to reach. • Number of iterations (default: 500, if auto-stop unticked): fixed number of iterations for the exploratory phase. • Step-size exponent (default: 0): The value, comprised between 0 and 1, represents memory of the stochastic process, i.e how much weight is given at iteration k to the value of the previous iteration compared to the new information collected. A value 0 means no memory, i.e the parameter value at iteration k is built based on the information collected at that iteration only, and does not take into account the value of the parameter at the previous iteration. • Simulated annealing (default: enabled): the Simulated Annealing version of SAEM permits to better explore the parameter space by constraining the standard deviation of the random effects to decrease slowly. • Decreasing rate for the variance of the residual errors (default: 0.95, if simulated annealing enabled): the residual error variance (parameter “a” for a constant error model for instance) at iteration k is constrained to be larger than the decreasing rate times the variance at the previous iteration. • Decreasing rate for the variance of the individual parameters (default: 0.95, if simulated annealing enabled): the variance of the random effects at iteration k is constrained to be larger than the decreasing rate times the variance at the previous iteration. Smoothing phase: • Auto-stop criteria (default: yes): if ticked, auto-stop criteria are used to automatically detect convergence during the smoothing phase. If convergence is detected, the algorithm stops before the maximum number of iterations. • Maximum number of iterations (default: 200, if auto-stop ticked): maximum number of iterations for the smoothing phase. Even if the auto-stop criteria are not fulfilled, the algorithm stops after this maximum number of iterations. • Minimum number of iterations (default: 50, if auto-stop ticked): minimum number of iterations for the smoothing phase. This value also corresponds to the interval length over which the auto-stop criteria is tested. A larger minimum number of iterations means that the auto-stop criteria is harder to reach. • Number of iterations (default: 200, if auto-stop unticked): fixed number of iterations for the smoothing phase. • Step-size exponent (default: 0.7): The value, comprised between 0 and 1, represents memory of the stochastic process, i.e how much weight is given at iteration k to the value of the previous iteration compared to the new information collected. The value must be strictly larger than 0.5 for the smoothing phase to converge. Large values (close to 1) will result in a smoother parameter trajectory during the smoothing phase, but may take longer to converge to the maximum likelihood estimate. Methodology for parameters without variability (if parameters without variability are present in the model): The SAEM algorithm requires to draw parameter values from their marginal distribution, which does not exist for parameters without variability. These parameters are thus estimated via another method, which can be chosen among: • No variability (default choice): After each SAEM iteration, the parameter without variability are optimized using the Nelder-Mead simplex algorithm. The absolute tolerance (stopping criteria) is 1e-4 and the maximum number of iterations 20 times the number of parameters to calculate via this algorithm. • Add decreasing variability: an artificial variability is added for these parameters, allowing estimation via SAEM. The variability is progressively decreased such that at the end of the estimation process, the parameter has a variability of 1e-5. • Variability in the first stage: during the exploratory phase, an artificial variability is added and progressively forced to 1e-5 (same as above). In the smoothing phase, the Nelder-Mead simplex optimization algorithm is used. Handling parameters without variability is also discussed here. Set all SAEM iterations to zero in one click The icon on the bottom left provides a shortcut to set all SAEM iterations to zero (in all three phases). This is convenient if the user wish to skip the estimation of the population parameters and keep the initial estimates as population estimates to run the other tasks, since running SAEM first is mandatory. It is not equivalent to fixing all population parameters, since standard errors are not estimated for fixed parameters, while they will be estimated for the initial estimates in this case. The action can be easily reversed with the second shortcut to reset default SAEM iterations. Good practice, troubleshooting and tips Choosing to enable or disable the simulated annealing As the simulated annealing option permits to more surely find the global maximum, it should be used during the first runs, when the initial values may be quite far from the final estimates. On the other side, the simulated annealing option may keep the omega values artificially high, even after a large number of SAEM iterations. This may prevent the identification of parameters for which the variability is in fact zero and lead to NaN in the standard errors. So once good initial values have been found and there is no risk to fall in a local maximum, the simulated annealing option can be disabled. Below we show an example where removing the simulated annealing permits to identify parameters for which the inter-individual variability can be removed. Example: The dataset used in the tobramycin case study is quite sparse. In these conditions, we expect that estimating the inter-individual variability for all parameters will be difficult. In this case, the estimation can be done in two steps, as shown below for a two-compartments model on this dataset: • First, we run SAEM with the simulated annealing option (default setting), which facilitates the convergence towards the global maximum. All four parameters V, k, k12 and k21 have random effects. The estimated parameters are shown below: The parameters omega_k12 and omega_k21 have high standard errors, suggesting that the variability is difficult to estimate. The omega_k12 and omega_k21 values themselves are also high (100% inter-individual variability), suggesting that they may have been kept too high due to the simulated annealing. • As a second step, we use the last estimates as new initial values (as shown here), and run SAEM again after disabling the simulated annealing option. On the plot showing the convergence of SAEM, we can see omega_V, omega_k12 and omega_k21 decreasing to very low values. The data is too sparse to correctly identify the inter-individual variability for V, k12 and k21. Thus, their random effects can be removed, but the random effect of k can be kept. Note that because the omega_V, omega_k12 and omega_k21 parameters decrease without stabilizing, the convergence indicator does the same. 5.1.2.1.The convergence indicator When you launch the estimation of the population parameters, you can see the evolution of the population parameter estimates over the iterations of the SAEM algorithm but also the convergence indicator in purple. The convergence indicator is the complete log-likelihood. It can help to follow convergence. Note that the complete likelihood is not the same as the log-likelihood computed as separate task. The likelihood is the probability density of the data given the population parameter, so the log-likelihood is defined as: \(\sum_{i=1}^{N_{\text{ind}}}\log\left(p(y_i; \theta)\right)\) The likelihood is the objective function, therefore it is the relevant quantity to compare model, but unfortunately it cannot be computed in closed form because the individual parameters \(\phi_i\) are unobserved. It requires to integrate over all possible values of the individual parameters. Thus, to estimate the log-likelihood an importance sampling Monte Carlo method is used in a separate task (or an approximation is calculated via linearization of the model). Complete log-likelihood On the contrary, the complete likelihood refers to the joint probability distribution of the data and the individual parameters. The convergence indicator (complete log-likelihood) is then defined \(\sum_{i=1}^{N_{\text{ind}}}\log\left(p(y_i, \phi_i; \theta)\right)\) The joint probability distribution can be decomposed using Bayes law as: \(p(y_i, \psi_i; \theta)=p(y_i| \psi_i; \theta)p(\psi_i; \theta)\) Those two terms have an analytical expression and are easy to calculate, using as \(\phi_i\) the individual parameters drawn by MCMC for the current iteration of SAEM. This quantity is calculated at each SAEM step and is useful to assess the convergence of the SAEM algorithm. Typical shape of the convergence indicator Typically, the convergence indicator decreases progressively and then stabilizes. The convergence indicator aggregates the information from all parameters and can serve to detect if the SAEM algorithm has already converged or not. When the indicator is stable, that is it oscillates around the same value without drifting, then we can be pretty confident that the maximum likelihood has been achieved. 5.1.2.2.Simulated annealing The estimation of the population parameters with SAEM includes a method of simulated annealing. It is possible to disable this option in the settings of SAEM. The option is enabled by default. The simulated annealing option permits to keep the explored parameter space large for a longer time (compared to without simulated annealing). This allows to escape local maximums and improve the convergence towards the global maximum. In practice, the simulated annealing option constrains the variance of the random effects and the residual error parameters to decrease by maximum 5% (by default, the setting “Decreasing rate” can be changed) from one iteration to the next one. As a consequence, the variances decrease more slowly: The size of the parameter space explored by SAEM depends on individual parameters sampled from their conditional distribution via Markov Chain Monte Carlo. If the standard deviation of the conditional distributions is large, the individual parameters sampled at iteration k can be quite far away from those at iteration (k-1), meaning a large exploration of the parameter space. The standard deviation of the conditional distribution depends on the standard deviation of the random effects (population parameters ‘omega’). Indeed, the conditional distribution is \(p(\psi_i|y_i;\hat {\theta})\) with \(\psi_i\) the individual parameters for individual \(i\), \(\hat{\theta}\) the estimated population parameters, and \(y_i\) the data (observations) for individual \(i\). The conditional distribution thus depends on the population parameters, and the larger the population parameters ‘omega’, the larger the standard deviation of the conditional distribution. That’s why we want to keep large ‘omega’ values during the first iterations. The simulated annealing option can be disabled or enabled in the “Population parameters” task settings. In addition, the settings allow to change the default decreasing rates for the standard deviations of the random effects and the residual errors. Choosing to enable or disable the simulated annealing As the simulated annealing option permits to more surely find the global maximum, it should be used during the first runs, when the initial values may be quite far from the final estimates. On the other side, the simulated annealing option may keep the omega values artificially high, even after a large number of SAEM iterations. This may prevent the identification of parameters for which the variability is in fact zero and lead to NaN in the standard errors. So once good initial values have been found and there is no risk to fall in a local maximum, the simulated annealing option can be disabled. Below we show an example where removing the simulated annealing permits to identify parameters for which the inter-individual variability can be removed. Example: identifying parameters with no variability The dataset used in the tobramycin case study is quite sparse. In these conditions, we expect that estimating the inter-individual variability for all parameters will be difficult. In this case, the estimation can be done in two steps, as shown below for a two-compartments model on this dataset: • First, we run SAEM with the simulated annealing option (default setting), which facilitates the convergence towards the global maximum. All four parameters V, k, k12 and k21 have random effects. The estimated parameters are shown below: The parameters omega_k12 and omega_k21 have high standard errors, suggesting that the variability is difficult to estimate. The omega_k12 and omega_k21 values themselves are also high (100% inter-individual variability), suggesting that they may have been kept too high due to the simulated annealing. • As a second step, we use the last estimates as new initial values (as shown here), and run SAEM again after disabling the simulated annealing option. On the plot showing the convergence of SAEM, we can see omega_V, omega_k12 and omega_k21 decreasing to very low values. The data is then too sparse to correctly identify the inter-individual variability for V, k12 and k21. Thus, their random effects can be removed, but the random effect of k can be kept. Note that because the omega_V, omega_k12 and omega_k21 parameters decrease without stabilizing, the convergence indicator does the same. 5.1.2.3.Bayesian estimation In the tab “Initial estimates”, clicking on the wheel icon next to a population parameters opens a window to choose among three estimation methods (see image below). “Maximum Likelihood Estimation” corresponds to the default method using SAEM, detailed on this page. “Maximum A Posteriori Estimation” corresponds to Bayesian estimation, and “Fixed” to a fixed parameter. Bayesian estimation Bayesian estimation allows to take into account prior information in the estimation of parameters. It is called in Monolix Maximum A Posteriori estimation, and it corresponds to a penalized maximum likelihood estimation, based on a prior distribution defined for a parameter. The weight of the prior in the estimation is given by the standard deviation of the prior distribution. Objectives: learn how to combine maximum likelihood estimation and Bayesian estimation of the population parameters. Projects: theobayes1_project, theobayes2_project, The Bayesian approach considers the vector of population parameters \(\theta\) as a random vector with a prior distribution \(\pi_\theta\). We can then define the *posterior distribution* of \(\theta \(\begin{aligned} p(\theta | y ) &= \frac{\pi_\theta( \theta )p(y | \theta )}{p(y)} \\ &= \frac{\pi_\theta( \theta ) \int p(y,\psi |\theta) \, d \psi}{p(y)} . \end{aligned} \) We can estimate this conditional distribution and derive statistics (posterior mean, standard deviation, quantiles, etc.) and the so-called maximum a posteriori (MAP) estimate of \(\theta\): \(\begin{aligned} \hat{\theta}^{\rm MAP} &=\text{arg~max}_{\theta} p(\theta | y ) \\ &=\text{arg~max}_{\theta} \left\{ {\cal LL}_y(\theta) + \log( \pi_\theta( \theta ) ) \right\} . \end{aligned} \) The MAP estimate maximizes a penalized version of the observed likelihood. In other words, MAP estimation is the same as penalized maximum likelihood estimation. Suppose for instance that \(\theta\) is a scalar parameter and the prior is a normal distribution with mean \(\theta_0\) and variance \(\gamma^2\). Then, the MAP estimate is the solution of the following minimization problem: \(\hat{\theta}^{\rm MAP} =\text{arg~min}_{\theta} \left\{ -2{\cal LL}_y(\theta) + \frac{1}{\gamma^2}(\theta – \theta_0)^2 \right\} .\) This is a trade-off between the MLE which minimizes the deviance, \(-2{\cal LL}_y(\theta)\), and \(\theta_0\) which minimizes \((\theta – \theta_0)^2\). The weight given to the prior directly depends on the variance of the prior distribution: the smaller \(\gamma^2\) is, the closer to \(\theta_0\) the MAP is. In the limiting case, \(\gamma^2=0\); this means that \(\theta\) is fixed at \(\theta_0 \) and no longer needs to be estimated. Both the Bayesian and frequentist approaches have their supporters and detractors. But rather than being dogmatic and following the same rule-book every time, we need to be pragmatic and ask the right methodological questions when confronted with a new problem. All things considered, the problem comes down to knowing whether the data contains sufficient information to answer a given question, and whether some other information may be available to help answer it. This is the essence of the art of modeling: find the right compromise between the confidence we have in the data and our prior knowledge of the problem. Each problem is different and requires a specific approach. For instance, if all the patients in a clinical trial have essentially the same weight, it is pointless to estimate a relationship between weight and the model’s PK parameters using the trial data. A modeler would be better served trying to use prior information based on physiological knowledge rather than just some statistical criterion. Generally speaking, if prior information is available it should be used, on the condition of course that it is relevant. For continuous data for example, what does putting a prior on the residual error model’s parameters mean in reality? A reasoned statistical approach consists of including prior information only for certain parameters (those for which we have real prior information) and having confidence in the data for the others. Monolix allows this hybrid approach which reconciles the Bayesian and frequentist approaches. A given parameter can be • a fixed constant if we have absolute confidence in its value or the data does not allow it to be estimated, essentially due to lack of identifiability. • estimated by maximum likelihood, either because we have great confidence in the data or no information on the parameter. • estimated by introducing a prior and calculating the MAP estimate or estimating the posterior distribution. Computing the Maximum a posteriori (MAP) estimate • demo project: theobayes1_project (data = ‘theophylline_data.txt’ , model = ‘lib:oral1_1cpt_kaVCl.txt’) We want to introduce a prior distribution for \(ka_{\rm pop}\) in this example. Click on the option button and select Maximum A Poteriori Estimation We propose a typical value, here 2 and standard deviation 0.1 for \(ka_{\rm pop}\) and to compute the MAP estimate for \(ka_{\rm pop}\). The parameter is then colored in purple. Starting from the 2021 version, it is possible to select maximum a posteriori estimation also for the omega parameters (standard deviations of the random effects). In this case, an inverse Wishart is set as a prior distribution for the omega matrix. The following distributions for the priors are used: • typical value (*_pop): the distribution of the prior is the same as the distribution of the parameter. For instance, if ka has been set with a lognormal distribution in the “Statistical model & Tasks” tab, a lognormal distribution is also used for the prior on ka_pop. When a lognormal distribution is used, setting sd=0.1 roughtly corresponds to 10% uncertainty in the provided prior value for ka_pop. • covariate effects (beta_*): a normal distribution is used, to allow betas to be either positive or negative. • standard deviations (omega_*) [starting version 2021]: an inverse Wishart distribution is used. Inverse Wisharts are common prior distributions for variance-covariance matrices as they allow to fulfill the positive-definite matrix requirement. The weight of the prior in the estimation is based on the number of degrees of freedom (df) of the inverse Wishart, instead of a standard deviation. More degrees of freedom correspond to a higher constrain of the prior in the omega estimation. In Monolix, each omega parameter is handled as a 1×1 matrix with its own degree of freedom, independently from the other omegas. The univariate inverse Wishart \( W^{-1}(df, (df+2)\omega_{typ}) \) simplifies to an inverse gamma distribution with shape parameter \(\alpha=df/2 \) and scale parameter \(\beta=\omega_{typ}*(df+2)/2\). The coefficient of variation is thus \(CV=\frac{1}{\sqrt{\frac{df}{2}-2}}\). To obtain a 20% uncertainty on omega (CV=0.2), the user can set • correlations: it is currently not possible to set a prior on the correlation parameters It is common to set the initial value of the parameter to be the same as the typical value of the prior. Note that the default value for the typical value of the prior is set to the initial value of the parameter. However, if the initial value of the parameter is modified afterwards, the typical value of the prior is not updated automatically. Fixing the value of a parameter Population parameters can be fixed to their initial values, in this case they are not estimated. It is possible to fix one, several or all population parameters, among the fixed effects, standard deviations of random effects, and error model parameters. In Monolix2021, it is also possible to fix correlation parameters. To fix a population parameter, click on the wheel next to the parameter in the tab “Initial estimates” and select “Fixed”, like on the image below: Fixed parameters appear on this tab in red. In Monolix2021, they are also colored in red in the subtab “Check initial estimates”. • theobayes2_project (data = ‘theophylline_data.txt’ , model = ‘lib:oral1_1cpt_kaVCl.txt’) We can combine different strategies for the population parameters: Bayesian estimation for \(ka_{\rm pop}\), fixed value for \(V_{\rm pop}\) and maximum likelihood estimation for \(Cl_{\rm pop}\), for instance. • The parameter \(V_{\rm pop}\) is fixed and then colored in red. • \(V_{\rm pop}\) is not estimated (it’s s.e. is not computed) but the standard deviation \(\omega_{V}\) is estimated as usual. EBEs stands for Empirical Bayes Estimates. The EBEs are the most probable value of the individual parameters (parameters for each individual), given the estimated population parameters and the data of each individual. In a more mathematical language, they are the mode of the conditional parameter distribution for each individual. These values are useful to compute the most probable prediction for each individual, for comparison with the data (for instance in the Individual Fits plot). Calculation of the EBEs (conditional mode) When launching the “EBEs” task, the mode of the conditional parameter distribution is calculated. Conditional distribution The conditional distribution is \( p(\psi_i|y_i;\hat{\theta})\) with \(\psi_i\) the individual parameters for individual i, \(\hat{\theta}\) the estimated population parameters, and \(y_i\) the data (observations) for individual i. The conditional distribution represents the uncertainty of the individual’s parameter value, taking into account the information at hand for this individual: the observed data for that individual, the covariate values for that individual and the fact that the individual belongs to the population for which we have already estimated the typical parameter value (fixed effects) and the variability (standard deviation of the random effects). It is not possible to directly calculate the probability for a given \(\psi_i\) (no closed form), but is possible to obtain samples from the distribution using a Markov-Chain Monte-Carlo procedure (MCMC). This is detailed more on the Conditional Distribution page. Mode of the conditional distribution The mode is the parameter value with the highest probability: $$ \hat{\psi}_i^{mode} = \underset{\psi_i}{\textrm{arg max }}p(\psi_i|y_i;\hat{\theta})$$ To find the mode, we thus need to maximize the conditional probability with respect to the individual parameter value \(\psi_i\). Individual random effects Once the individual parameters values \(\psi_i\) are known, the corresponding individual random effects can be calculated using the population parameters and covariates. Taking the example of a parameter \(\psi\) having a normal distribution within the population and that depends on the covariate \(c\), we can write for individual \(i\): $$ \psi_i = \psi_{pop} + \beta \times c_i + \eta_i$$ As \(\psi_i\) (estimated conditional mode), \(\psi_{pop}\) and \(\beta\) (population parameters) and \(c_i\) (individual covariate value) are known, the individual random effect \(\eta_i\) can easily be calculated. For each individual, to find the \(\psi_i\) values that maximizes the conditional distribution, we use the Nelder-Mead Simplex algorithm [1]. As the conditional distribution does not have a closed form solution (i.e \(p(\psi_i|y_i;\hat{\theta})\) cannot be directly or easily calculated for a given \(\psi_i\)), we use the Bayes law to rewrite it in the following way (leaving the population parameters \(\hat{\theta}\) out for clarity): The conditional density function of the data when knowing the individual parameter values (i.e \(p(y_i|\psi_i)\)) is easy to calculate, as well as the density function for the individual parameters (i.e \(p(\psi_i)\)), because they have closed form solutions. On the opposite, the likelihood \(p(y_i)\) has no closed form solution. But as it does not depend on \(\psi_i\), we can leave it out of the optimization procedure and only optimize \(p(y_i|\psi_i)p(\psi_i)\). The initial value used for the Nelder-Mead simplex algorithm is the conditional mean (estimated during the conditional distribution task) if avaible (typical case of a full scenario), or the approximate conditional mean calculated at the end of SAEM otherwise. Parameters without variability are not estimated, they are set to \( \psi_i = \psi_{pop} + \beta \times c_i \). Running the EBEs task When running the EBEs task, the progress is displayed in the pop-up window: Dependencies between tasks: In the graphical user interface In the Indiv.Param section of the Results tab, a summary of the individual parameters is proposed (min, max, median, quartiles, and shrinkage [in Monolix 2024 and later]) as shown in the figure below. The elapsed time for this task is also shown. To see the estimated parameter value for each individual, the user can click on the [INDIV. ESTIM.] section. Notice that the user can also see them in the output files, which can be accessed via the folder icon at the bottom left. Notice that there is a “Copy table” icon on the top of each table to copy them in Excel, Word, … The table format and display will be kept. In the output folder After having run the EBEs task, the following files are available: • summary.txt: contains the summary statistics (as displayed in the GUI). • IndividualParameters/estimatedIndividualParameters.txt: the individual parameters for each subject-occasion are displayed. In addition to the already present approximation conditional mean from SAEM (*_SAEM), the conditional mode (*_mode) is added to the file. • IndividualParameters/estimatedRandomEffects.txt: the individual random effects for each subject-occasion are displayed (*_mode), in addition to the already present value based on the approximate conditional mean from SAEM (*_SAEM). • IndividualParameters/shrinkage.txt: starting with Monolix 2024, the shrinkage for each parameter for the conditional mode as shrinkage_mode. More details about the content of the output files can be found here. The settings are accessible through the interface via the button next to the EBEs task. • Maximum number of iterations (default: 200): maximum number of iterations for the Nelder-Mead Simplex algorithm, for each individual. Even if the tolerance criteria is not met, the algorithm stops after that number of iterations. • Tolerance (default: 1e-6): absolute tolerance criteria. The algorithm stops when the change of the conditional probability value between two iterations is less than the tolerance. Calculate EBEs for a new data set using an existing model 5.1.4.Conditional distribution The conditional distribution represents the uncertainty of the individual parameter values. The conditional distribution estimation task permits to sample from this distribution. The samples are used to calculate the condition mean, or directly as estimators of the individual parameters in the plots to improve their informativeness [1]. They are also used to compute the statistical tests. Calculation of the conditional distribution Conditional distribution The conditional distribution is \(p(\psi_i|y_i;\hat{\theta})\) with \(\psi_i\) the individual parameters for individual i, \(\hat{\theta}\) the estimated population parameters, and \(y_i\) the data (observations) for individual i. The conditional distribution represents the uncertainty of the individual’s parameter value, taking into account the information at hand for this individual: • the observed data for that individual, • the covariate values for that individual, • and the fact that the individual belongs to the population for which we have already estimated the typical parameter value (fixed effects) and the variability (standard deviation of the random It is not possible to directly calculate the probability for a given \(\psi_i\) (no closed form), but is possible to obtain samples from the distribution using a Markov-Chain Monte-Carlo procedure MCMC algorithm MCMC methods are a class of algorithms for sampling from probability distributions for which direct sampling is difficult. They consist of constructing a stochastic procedure which, in its stationary state, yields draws from the probability distribution of interest. Among the MCMC class, we use the Metropolis-Hastings (MH) algorithm, which has the property of being able to sample probability distributions which can be computed up to a constant. This is the case for our conditional distribution, which can be rewritten as: \(p(y_i|\psi_i)\) is the conditional density function of the data when knowing the individual parameter values and can be computed (closed form solution). \(p(\psi_i)\) is the density function for the individual parameters and can also be computed. The likelihood \(p(y_i)\) has no closed form solution but it is constant. In brief, the MH algorithm works in the following way: at each iteration k, a new individual parameter value is drawn from a proposal distribution for each individual. The new value is accepted with a probability that depends on \(p(\psi_i)\) and \(p(y_i|\psi_i)\). After a transition period, the algorithm reaches a stationary state where the accepted values follow the conditional distribution probability \(p(\psi_i|y_i)\). For the proposal distribution, three different distributions are used in turn with a (2,2,2) pattern (setting “Number of iterations of kernel 1/2/3” in Settings > Project Settings): the population distribution, a unidimensional Gaussian random walk, or a multidimensional Gaussian random walk. For the random walks, the variance of the Gaussian is automatically adapted to reach an optimal acceptance ratio (“target acceptance ratio” setting in Settings > Project Settings). Conditional mean The draws from the conditional distribution generated by the MCMC algorithm can be used to estimate any summary statistics of the distribution (mean, standard deviation, quantiles, etc). In particular we calculate the conditional mean by averaging over all draws: $$ \hat{\psi}_i^{mean} = \frac{1}{K}\sum_{k=1}^{K}\psi_i^{k}$$ The standard deviation of the conditional distribution is also calculated. The number of samples used to calculate the mean and standard deviation corresponds to the number of chains times the total number of iterations of the conditional distribution task (not only during the convergence interval length). The mean is calculated over the transformed individual parameters (in the gaussian space), and back-transformed to the non-gaussian space. Samples from the conditional distribution Among all samples from the conditional distribution, a small number (between 1 and 10, see “Simulated parameters per individual” setting) is kept to be used in the plots. These samples are unbiased estimators and they present the advantage of not being affected by shrinkage, as shown for example on the documentation of the plot “distribution of the individual parameters“. Shrinkage and the use of random samples from the conditional distribution are explained in more details here. Stopping criteria At iteration k, the conditional mean is calculated for each individual by averaging over all k previous iterations. The average conditional means over all individuals (noted E(X|y)), and the standard deviation of the conditional means over all individuals (noted sd(X|y)) are calculated and displayed in the pop-up window. The algorithm stops when, for all parameters, the average conditional means and standard deviations of the last 50 iterations (“Interval length” setting) do not deviate by more than 5% (2.5% in each direction, “relative interval” setting) from the average and standard deviation values at iteration k. In some very specific cases (for example with a parameter with a normal distribution and a value very close to 1), it can take many iterations to reach the convergence criteria because the criteria is defined as a percentage. In that case, the toggle “enable maximum number of iterations” can be used to limit the number iterations of this task. If the limit is reached, a warning message will be displayed in the interface. Running the conditional distribution estimation task During the evaluation of the conditional distribution, the following plot pop-ups, displaying the average conditional means over all individuals (noted E(X|y)), and the standard deviation of the conditional means over all individuals (noted sd(X|y)) for each iteration of the MCMC algorithm. The convergence criteria described above means that the blue line, which represents the average over all individuals of the conditional mean, must be within the tube. The tube is centered around the last value of the blue line and spans over 5% of that last value. The algorithm stops when all blue lines are in their tube. Dependencies between tasks: • The “Population parameters” task must be run before launching the conditional distribution task. • The conditional distribution task is recommended before calculating the log-likelihood task without the linearization method (i.e log-likelihood via importance sampling). • The conditional distribution task is necessary for the statistical tests. • The samples generated during the conditional distribution task will be reused for the Standard errors task (without linearization). In the graphical user interface In the Indiv.Param section of the Results tab, a summary of the estimated conditional mean is given (min, max, quartiles, and shrinkage [in Monolix 2024 and later]), as shown in the figure below. Starting from Monolix2021R1, the number of iterations is also displayed, along with a message indicating whether convergence has been reached (“auto-stop”) or if the task was stopped by the user or reached the maximum number of iterations. To see the estimated parameter value for each individual, the user can click on the [INDIV. ESTIM.] section. Notice that the user can also see them in the output files, which can be accessed via the folder icon at the bottom left. Notice that there is a “Copy table” icon on the top of each table to copy them in Excel, Word, … The table format and display will be kept. In the output folder After having run the conditional distribution task, the following files are available: • summary.txt: contains the summary statistics (as displayed in the GUI) • IndividualParameters/estimatedIndividualParameters.txt: the individual parameters for each subject-occasion are displayed. The conditional mean (*_mean) and the standard deviation (*_sd) of the conditional distribution are added to the file. The number of samples used to calculate the mean and standard deviation corresponds to the number of chains times the total number of iterations of the conditional distribution task (not only during the convergence interval length). • IndividualParameters/estimatedRandomEffects.txt: the individual random effects for each subject-occasion are displayed. Those corresponding to the conditional mean (*_mean) are added to the file, together with the standard deviation (*_sd). • IndividualParameters/simulatedIndividualParameters.txt: several simulated individual parameters (draws from the conditional distribution) are recorded for each individual. The rep column permits to distinguish the several simulated parameters for each individual. • IndividualParameters/simulatedRandomEffects.txt: the random effects corresponding to the simulated individual parameters are recorded. • IndividualParameters/shrinkage.txt: starting with Monolix 2024, the shrinkage for each parameter for the conditional mean as shrinkage_mean and for the conditional distribution as More details about the content of the output files can be found here. To change the settings, you can click on the settings button next the conditional distribution task. • Interval length (default: 50): number of iterations over which the convergence criteria is checked. • Relative interval (default: 0.05): size of the interval (relative to the current average or standard deviation) in which the last “interval length” iterations must be for the stopping criteria to be met. A value at 0.05 means that over the last “interval length” iterations, the value should not vary by more than 5% (2.5% in each direction). • Simulated parameters per individual (default: via calculation): number of draws from the conditional distribution that will be used in the plots. The number is calculated as min(10, idealNb) with idealNb = max(500 / number of subject , 5000 / number of observations). This means that the maximum number is 10 (which is usually the case for small data sets). For large data sets, the number may be reduced, but the number of individual times the number of simulated parameters should be at least 500, and the number of observations times the number of simulated parameters should be at least 5000. This ensures to have a sufficiently large but not unnecessarily large number of dots in the plots such as Observations versus predictions or Correlation between random effects. If the user sets the number of simulated parameters to a value larger than the number of chains (project settings) times the total number of iterations of the conditional distribution task (maximum number of iterations or when convergence criteria are reached), the number will be restricted to the number of available samples. If the user sets the number of simulated parameters to a value smaller than interval length times number of chains, the simulated parameters are picked evenly from the interval length and the chains. If the requested number of simulated parameters is larger, the last n (n=number of requested simulated parameters) samples are picked. • Enable maximum iterations limit (default: toggle off) [from version 2020 on]: When the toggle in “on”, a maximum number of iterations can be defined. • Maximum number of iterations (default: 500, needs to be larger than the interval length, available if “enable maximum iterations limit’ is on): maximum number of iterations for the conditional distribution task. Even if the convergence criteria are not fulfilled, the algorithm stops after this maximum number of iterations. If the maximum number of iterations is reached, a warning message will be displayed in the interface. 5.1.4.1.Understanding shrinkage and how to circumvent it Shrinkage is a phenomenon that appears when the data is insufficient to precisely estimate the individual parameters (EBEs). In that case, the EBEs “shrink” towards the center of the population distribution and do not properly represent the inter-individual variability. This leads to diagnostic plots that may be misleading, either hiding true relationships or inducing wrong ones. In the diagnostic plots, Monolix uses samples from the conditional distribution as individual parameters, which lead to reliable plots even when shrinkage is present in the model [1]. This method is based on the calculation of the conditional distribution. Conditional distribution The conditional distribution is defined for each individual. It represents the uncertainty of the individual’s parameter value, taking the information at hand for this individual into account: • the observed data for that individual, • the covariate values for that individual, • the fact that the individual belongs to the population for which we have already estimated the typical parameter value (fixed effects) and the inter-individual variability (standard deviation of the random effects). In a mathematical formalism, the conditional distribution is written \(p(\psi_i|y_i;\hat{\theta})\) with \(\psi_i\) the individual parameters for individual \(i\), \(\hat{\theta}\) the estimated population parameters, and \(y_i\) the data (observations) for individual \(i\). It is not possible to directly calculate the probability for a given \(\psi_i\) (no closed form), but it is possible to obtain samples from the distribution using a Markov-Chain Monte-Carlo procedure (MCMC). This is what is done in the Conditional distribution task. With the following conditional distribution for the volume V of individual i, we see that the most probable value is around 25 L but there is quite some uncertainty: the value could also be 15 or 40 for instance. For visual purpose, we have drawn the distribution as a smooth curve, but remember that the conditional distribution has no explicit expression. One can only obtain samples from this distribution using MCMC. Conditional mode (EBEs) and conditional mean It is often convenient to work with a single value for the individual parameters (called an estimator), instead of a probability distribution. Several “summary” values can be used, such as the mode or the mean of the conditional distribution. The mode is also called maximum a posteriori or EBE (for empirical bayes estimate). It is often preferred over the mean, because the mode represents the most likely value, i.e the value which has the highest probability. In Monolix, the mode is calculated via the EBEs task, while the mean is calculated via the Conditional distribution task, as the average of all samples drawn from the conditional distribution. Once the value of the individual estimator (conditional mode or conditional mean) is known for each individual, one can easily calculate the individual random effects for each individual. For instance for the volume V, that depends on the covariate weight WT: $$\begin{array}{rl} V_i& = V_{pop}\left(\frac{\textrm{WT}_i}{70}\right)^{\beta}e^{\eta_i} \\ \Rightarrow \quad \eta_i& = \log (V_i) – \log(V_{pop}) – \beta \log \left(\frac{\textrm{WT}_i}{70}\right) \end{array} $$ The individual parameters and individual random effects are used in diagnostic plots. They are used either directly, such as in the Correlation between random effects or Individual parameter versus covariate plots, or indirectly to generate individual predictions, such as in Individual Fits, Observations versus Predictions or Scatter plot of the residuals. But these diagnostic plots can be biased in presence of shrinkage. When the individual data brings only few information about the individual parameter value, the conditional distribution is large, reflecting the uncertainty of the individual parameter value. In that case, the mode of the conditional distribution is close to (or “shrinks” to) the mode of the population distribution. If this is the case for all or most of the individuals, all individual parameters end up concentrated around the mode of the population distribution and do not correctly represent the inter-individual variability which has been estimated via the standard deviation parameters (omega parameters in Monolix). This is the shrinkage phenomenon. Shrinkage typically occurs when the data is sparse. Below we present the example of a parameter V which has a lot of shrinkage and a parameter k with almost no shrinkage. We consider a data set with 10 individuals. In the upper plots, the conditional distributions of each of the 10 individuals are shown. For the volume V, the individual parameter values are uncertain and their conditional distributions are large. When reporting the mode (closed circles) of the conditional distributions on the population distribution (black curve, bottom plots), the modes appear shrunk compared to the population distribution. On the opposite, for k, the conditional distributions are narrow and the modes are well spread over the population distribution. There is shrinkage for V, but not for k. Pulling the individual parameters of all individuals together, one can overlay the population distribution (black line) with the histogram of individual parameters (i.e conditional modes) (blue bars). This is displayed in the Distribution of the individual parameters plot in Monolix: The shrinkage phenomenon can be quantified via a shrinkage value for each parameter. In Monolix, the formula for shrinkage has been updated in version 2024 to use the standard deviation instead of the variance in accordance with industry standards. Starting with Monolix version 2024, shrinkage is calculated from the empirical standard deviation of the random effects \( \textrm{sd}(\eta_i) \) and the estimated standard deviation (the omega population parameter \(\omega\)). The random effects \( \textrm{sd}(\eta_i) \) can be calculated from the EBEs, conditional mean or samples from the conditional distribution. Typically, the shrinkage is reported using the EBEs. In the case of inter occasion variability, shrinkage includes both inter individual and inter occasion variability (the gamma population parameter \(\gamma\)) $$\eta\textrm{-sh}=1-\frac{\textrm{sd}(\eta_i)}{\sqrt{\omega^2 + \gamma^2}}$$ In Monolix versions 2023 and earlier, shrinkage is calculated using the ratio of the empirical variance and the estimated variance as: In Monolix versions 2023 and earlier, the shrinkage can be displayed in the Distribution of the individual parameters plot, by selecting the “information” toggle. Starting in Monolix version 2024, shrinkage information is available in several ways: • Results / Indiv. Results / Cond. Mean [summary] – shrinkage is reported if the Conditional Distribution task has been run • Results / Indiv. Results / Cond. Mode [summary] – shrinkage is reported if the EBEs task has been run • In the IndividualParameters folder inside the results folder, there is a file shrinkage.txt • In reports, using the metric SHRINKAGE in the population or individual parameters table placeholder • In the plot Distribution of the individual parameters by switching on “information” • In the plot Distribution of the standardized random effects by switching on “information” Calculating the shrinkage in R Starting with Monolix version 2024, there is a method available to directly return the shrinkage information: getEtaShrinkage(). In Monolix version 2023 and earlier, there is no function from the lixoftConnectors to directly get the shrinkage, but you can easily calculate it from the estimated parameters like this for example for parameter V: population_params <- read.csv("monolix_project/populationParameters.txt") eta_estimated <- read.csv("monolix_project/IndividualParameters/estimatedRandomEffects.txt") omega_V <- population_params$value[population_params$parameter=="V"] eta_V_estimated_mode <- eta_estimated$eta_V_mode shrinkage_eta_V_estimated_mode <- (1-var(eta_V_estimated_mode)/omega_V^2)*100 Comparison to Nonmem: the Nonmem definition of shrinkage is based on a ratio of standard deviations. This is also the case in Monolix 2024 and above. However, Monolix 2023 and below uses a ratio of variances (which is more common in statistics). Below we provide a “conversion table” which should be read in the following way: a situation that would in Nonmem and Monolix 2024 lead to a shrinkage calculation of 30%, would in Monolix 2023 lead to a calculation of shrinkage of around 50%. The Nonmem and Monolix 2024 version of the shrinkage can be calculated from the Monolix 2023 shrinkage using the following formula: Is it OK to get a negative shrinkage? Yes. In case of no shrinkage, \(var(\eta_i)= \omega^2\) when \(var(\eta_i)\) is calculated on an infinitely large sample. In practice, \(var(\eta_i)\) is calculated on a limited sample related to the number of individuals. Its value can be by chance a little bigger than \(\omega^2\), leading to a slightly negative shrinkage. Consequences of shrinkage In case of shrinkage the individual parameters (conditional mode/EBEs or conditional mean) are biased because they do not correctly reflect the population distribution. As these individual parameters are used in diagnostic plots (in particular the Correlation between random effects and the Individual parameters versus covariates plots) the diagnostic plots can become misleading in presence of shrinkage, either hiding relations or suggesting wrong ones. This complicates the identification of mis-specifications and burdens the modeling process. Note that the shrinkage of the EBEs has no consequences on the population parameter estimation via SAEM (which doesn’t use EBEs, contrary to FOCE for instance). However the lack of informative data may lead to large standard errors for the population parameters and a slower convergence. How to circumvent shrinkage Monolix provides a very efficient solution to circumvent the shrinkage problem, i.e the bias in the diagnostic plots induced by the use of shrunk individual parameters. Instead of using the shrunk conditional mode/EBEs or conditional mean, Monolix uses parameter values randomly sampled from the conditional distribution: The fact of pooling the random samples of the conditional distribution of all individuals allows us to look at them as if they where sampled from the population distribution. And this is exactly what we want: to have individual parameter values (i.e the samples) that correctly reflect the population distribution. From a mathematical point of view, one can show that the random samples are an unbiased estimator: $$p(\psi_i)=\int p(\psi_i|y_i)p(y_i)dy_i=\mathbb{E}_{y_i}(p(\psi_i|y_i))$$ The improvement brought by the random samples from the conditional distributions can be visualized in the following way: while the mode (closed circles) are shrunk, the random samples (stars) spread over the entire population distribution (in black). One can even draw several random sample per individual to increase the informativeness of the diagnostic plots. This is what is done in the MonolixSuite2018R1 version (while MonolixSuite2016R1 uses one sample per individual). In [1], the authors warrant the use of sampled individual parameters. They demonstrate their usefulness in diagnostic plots via numerical experiments with simulated data. They also show that statistical tests based on these sampled individual parameters are unbiased, the type I error rate is the desired significance level of the test and the probability to detect a mis-specification in the model increases with the magnitude of this mis-specification. Fitting the sparse Tobramycin data with a (V,k) model leads to a high shrinkage (75%) of the volume V when using the EBEs. On the opposite, when using samples from the conditional distributions of each individual, there is no shrinkage anymore. The usefulness of using the samples from the conditional distribution can be seen in the Correlation between the random effects plot. Using the EBEs, the plot suggests a positive correlation of about 30% between the volume and the elimination rate. Using the random samples, the plot does not suggest this correlation any more. If the correlation is added to the model, it is estimated small and not Another example of shrinkage can be seen for the parameter ka in the warfarin data set. In this example, the data is sparse during the absorption phase leading to a large uncertainty of the individual parameter values. The use of samples from the conditional distribution is a powerful way to avoid the bias due to the shrinkage in the diagnostic plots. This method has been validated mathematically and with numerical In Monolix, the random samples are used by default in all diagnostic plots, if the Conditional distribution task has been run. The choice of the estimator for the individual parameters can be changed in the Settings tab: 5.1.4.2.Statistical tests Several statistical tests may be automatically performed to test the different components of the model. These tests use individual parameters drawn from the conditional distribution, which means that you need to run the task “Conditional distribution” in order to get these results. In addition, the tests for the residuals require to have first generated the residuals diagnostic plots (scatter plot or distribution). The tests are all performed using the individual parameters sampled from the conditional distribution (or the random effects and residuals derived thereof). They are thus not subject to bias in case of shrinkage. For each individual, several samples from the conditional distribution may be used. The used tests include a correction to take into account that these samples are correlated among each Results of the tests are available in the tab “Results” and selecting “Tests” in the left menu The model for the individual parameters Consider a PK example (warfarin data set from the demos) with the following model for the individual PK parameters (ka, V, Cl): In this example, the different assumptions we make about the model are: • The 3 parameters are lognormally distributed • ka is function of sex • V is function of sex and weight. More precisely, the log-volume log(V) is a linear function of the log-weight \({\rm lw70 }= \log({\rm wt}/70)\). • Cl is not function of any of the covariates. • The random effects \(\eta_V\) and \(\eta_{Cl}\) are linearly correlated • \(\eta_{ka}\) is not correlated with \(\eta_V\) and \(\eta_{Cl}\) Let’s see how each of these assumptions are tested: Covariate model Individual parameters vs covariates – Test whether covariates should be removed from the model If an individual parameter is function of a continuous covariate, the linear correlation between the transformed parameter and the covariate is not 0 and the associated \(\beta\) coefficient is not 0 either. To detect covariates bringing redundant information, we check if these beta coefficients are different from 0. For this, we perform two different tests: a correlation test based on betas coefficients estimated with a linear regression, and a Wald test relying on the estimated population parameters and their standard error. In both cases, a small p-value indicates that the null hypothesis can be rejected and thus that the estimated beta parameter is significantly different from zero. If this is the case, the covariate should be kept in the model. On the opposite, if the p-value is large, the null hypothesis cannot be rejected and this suggests to remove the covariate from the model. Note that if beta is equal to zero, then the covariate has no impact on the parameter. High p-values are colored in yellow (p-value in [0.01-0.05]), orange (p-value in [0.05-0.10]) or red (p-value in [0.10-1]) to draw attention on parameter-covariate relationships that can be removed from the model from a statistical point of view. Correlation test Briefly, we perform a linear regression between the covariates and the transformed parameters and test if the resulting beta coefficient are different from 0. More precisely: for each individual \(i\), let \(z_i^l\) be the transformed individual parameters (e.g log(V) for log-normally distributed parameters or logit(F) for logit-distributed parameters) sampled from the conditional distribution (called replicates, index l). Here we will call covariates all continuous covariates and all non-referent categories of categorical covariates \(cov^{(c)}, c = 1..n_C\). For each individual \(i\), \(cov_i^{(c)}\) is the value of the c\(^{th}\) covariate, equal to 0 or 1 if the covariate is a category. The transformed individual parameters are first averaged over replicates for each individual: $$z_i^{(L)}=\frac{1}{L} \sum_{l=1}^{L} z_i^l$$ We then perform the following linear regression: $$z_i^{(L)}=\alpha_0 + \sum_{c=1}^{n_c} \beta_c \text{cov}_i^{(c)} + e_i$$ If two covariates \(cov^{(1)}\) and \(cov^{(2)}\) (for example WT and BMI) are strongly correlated with a parameter \(z^{(L)}\) (for example the volume), only one of them is needed in the model because they are redundant. In the linear regression, only one of the estimated \(\hat{\beta_1}\) and \(\hat{\beta_2}\) will be significantly different from zero. For each covariate \(c\) we conduct a t-test on the \(\hat{\beta_c}\) with the null hypothesis: H0: \(\beta_c = 0\). The test statistic is $$T_0 = \frac{\hat{\beta_c}}{se(\hat{\beta_c})}$$ where \(se(\hat{\beta_c})\) is the estimated standard error of \(\hat{\beta_c}\) (obtained by least squares estimation during the regression). If the null hypothesis is true, \(T_0\) follows a t distribution with \(N-n_C-1\) degrees of freedom (where N is the number of individuals). In our example, the correlation test suggests to remove sex from ka: Wald test The Wald test relies on the standard errors. Thus the task “Standard errors” must have been calculated to see the test results. The test can be performed using the standard errors calculated using either the “linearization method” (indicated as “linearization”) or not (indicated as “stochastic approximation” in the tests). The Wald test tests the following null hypothesis: H0: the beta parameter estimated by SAEM is equal to zero. The math behind: Let’s note \( \hat{\beta} \) the estimated beta value (which is a population parameter) and \(se(\hat{\beta}) \) the associated standard error calculated during the task “Standard errors”. The Wald test statistic is: $$W=\frac{\hat{\beta}}{se(\hat{\beta})} $$ The test statistic is compared to a normal distribution (z distribution). In our example, the Wald test suggests to remove sex from ka and V: Remark: the Wald test and the correlation test may suggest different covariates to keep or remove. Note that the null hypothesis tested is not the same. Random effects vs covariates – Test whether covariates should be added to the model Pearson’s correlation tests and ANOVA are performed to check if some relationships between random effects and covariates not yet included in the model should be added to the model. For continuous covariates, the Pearson’s correlation test tests the following null hypothesis: H0: the person correlation coefficient between the random effects (calculated from the individual parameters sampled from the conditional distribution) and the covariate values is zero For categorical covariates, the one-way ANOVA tests the following null-hypothesis: H0: the mean of the random effects (calculated from the individual parameters sampled from the conditional distribution) is the same for each category of the categorical covariate A small p-value indicates that the null hypothesis can be rejected and thus that the correlation between the random effects and the covariate values is significant. If this is the case, it is probably worth considering to add the covariate in the model. Note that the decision of adding a covariate in the model should not only be driven by statistical considerations but also biological relevance. Note also that for parameter-covariate relationships already included in the model, the correlation between the random effects and covariates is not significant (while the correlation between the parameter and the covariate can be – see above). Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]) to draw attention on parameter-covariate relationships that can be considered for addition in the model from a statistical point of view. In our example, we already have sex on ka and V, and lw70 on V in the model. The only remaining relationship that could possibly be worth investigating is between weight (or the log-transformed weight “lw70”) and clearance. The math behind: Continuous covariate: Let \(\eta_i^l\) the random effects corresponding to the \(L\) individual parameters sampled from the conditional distribution (called replicates) for individual \(i\), and \ (cov_i\) the covariate value for individual \(i\). The random effects are first averaged over replicates for each individual: $$ \eta_i^{(L)}=\frac{1}{L} \sum_{l=1}^{L} \eta_i^l $$ We note \(\overline{cov} = \sum_{i=1}^N cov_i \) the average covariate value over the N subjects and \(\overline{\eta}=\sum_{i=1}^N \eta_i^{(L)} \) the average random effect. The Pearson correlation coefficient is calculated as: $$r=\frac{\sum_{i=1}^N(cov_i – \overline{cov})(\eta_i^{(L)} – \overline{\eta})}{\sqrt{ \sum_{i=1}^N(cov_i – \overline{cov})^2 \sum_{i=1}^N(\eta_i^{(L)} – \overline{\eta})^2}}$$ The test statistic is: and it is compared to a t-distribution with \(N-2\) degrees of freedom with \(N\) the number of individuals. Categorical covariates: The random effects are first averaged over replicates for each individual and a one-way analysis of variance is performed (simplified to a t-test when the covariate has only two categories). The model for the random effects Distribution of the random effects – Test if the random effects are normally distributed In the individual model, the distributions for the parameters assume that the random effects follow a normal distribution. Shapiro-Wilk tests are performed to test this hypothesis. The null hypothesis is: H0: the random effects are normally distributed If the p-value is small, there is evidence that the random effects are not normally distributed and this calls the choice of the individual model (parameter distribution and covariates) into question. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). In our example, there is no reason to reject the null-hypothesis and no reason to question the chosen log-normal distributions for the parameters. The math behind: Let \(\eta_i^l\) the random effects corresponding to the \(L\) individual parameters sampled from the conditional distribution (called replicates) for individual \(i\). The Shapiro-Wilk test statistic is calculated for each replicate \(l\) (i.e the first sample from all individuals, then the second sample from all individuals, etc): $$W^l=\frac{\left( \sum_{i=1}^N a_i \eta_i^l \right)^2}{ \sum_{i=1}^N (\eta_i^l – \overline{\eta}^l)^2}$$ with \(a_i\) tabulated coefficient and \(\overline{\eta}^l=\frac{1}{N}\sum_{i=1}^N \eta_i^l \) the average over all individuals, for each replicate. The statistic displayed in Monolix corresponds to the average statistic over all replicates \(W=\frac{1}{L}\sum_{l=1}^L W^l \). For the p-values, one p-value is calculated for each replicate, using the Shapiro-Wild table with \(N\) (number of individuals) degrees of freedom. The Benjamini-Hochberg (BH) procedure is then applied: the p-values are ranked by ascending order and the BH critical value is calculated for each as \( \frac{\textrm{rank}}{L}Q \) with \(\textrm{rank}\) the individual p-value’s rank, \(L\) the total number of p-values (equal to the number of replicates) and \(Q= 0.05\) the false discovery rate. The largest p-value that is smaller than the corresponding critical value is selected. Joint distribution of the random effects – Test if the random effects are correlated Correlation tests are performed to test if the random effects (calculated from the individual parameters sampled from the conditional distribution) are correlated. The null-hypothesis is: H0: the expectation of the product of the random effects of the first and second parameter is zero The null-hypothesis is assessed using a t-test. Remark: In the 2018 version, a Pearson correlation test was used. For correlations not yet included in the model, a small p-value indicates that there is a significant correlation between the random effects of two parameters and that this correlation should be estimated as part of the model (otherwise simulations from the model will assume that the random effects of the two parameters are not correlated, which is not what is observed for the random effects estimated using the data). Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). For correlations already included in the model, a large p-value indicates that one cannot reject the hypothesis that the correlation between the random effects is zero. If the correlation is not significantly different from zero, it may not be worth estimating it in the model. High p-values are colored in yellow (p-value in [0.01-0.05]), orange (p-value in [0.05-0.10]) or red (p-value in In our example, we have assumed in the model that \(\eta_V\) and \(\eta_{Cl}\) are correlated. The high p-value (0.033, above the 0.01 threshold, see above) indicates that the correlation between the random effects of V and Cl is not significantly different from zero and suggests to remove this correlation from the model. Remark: as correlations can only be estimated by groups (i.e if a correlation is estimated between (ka, V) and between (V, Cl), then one must also estimate the correlation between (ka, Cl)), it may happen that it is not possible to remove a non-significant correlation without removing also a significant one. The math behind: Let \(\eta_{\psi_1,i}^l\) and \(\eta_{\psi_2,i}^l\) the random effects corresponding to the \(L\) individual parameters \(\psi_1\) and \(\psi_2\) sampled from the conditional distribution (called replicates) for individual \(i\). First we calculate the product of the random effects averaged over the replicates: $$p_i^{(L)} = \frac{1}{L} \sum_{l=1}^{L} \eta_{\psi_1,i}^l \eta_{\psi_2,i}^l $$ We note \( \overline{p}=\sum_{i=1}^{N} p_i^{(L)} \) the average of the product over the individuals and \(s\) their standard deviation. The test statistic is: $$ T=\frac{\overline{p}}{\frac{s}{\sqrt{N}}}$$ and it is compared to a t-distribution with \(N-1\) degrees of freedom with \(N\) the number of individuals. The distribution of the individual parameters Distribution of the individual parameters not dependent on covariates – Test if transformed individual parameters are normally distributed When an individual parameter doesn’t depend on covariates, its distribution (normal, lognormal, logit or probit) can be transformed into the normal distribution. Then, a Shapiro-Wilk test can be used to test the normality of the transformed parameter. The null hypothesis is: H0: the transformed individual parameter values (sampled from the conditional distribution) is normally distributed If the p-value is small, there is evidence that the transformed individual parameter values are not normally distributed and this calls the choice of the parameter distribution into question. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). In our example, there is no reason to reject the null hypothesis of lognormality for Cl. Remark: testing the normality of a transformed individual parameter that does not depend on covariates is equivalent to testing the normality of the associated random effect. We can check in our example that the Shapiro-Wilk tests for \(\log(Cl)\) and \(\eta_{Cl}\) are equivalent. The math behind: Let \(z_i^l\) the transformed individual parameters (e.g log(V) for log-normally distributed parameters and logit(F) for logit-distributed parameters) sampled from the conditional distribution (called replicates, index \(l\) ) for individual \(i\). The Shapiro-Wilk test statistic is calculated for each replicate \(l\) (i.e the first sample from all individuals, then the second sample from all individuals, etc): $$W^l=\frac{\left( \sum_{i=1}^N a_i z_i^l \right)^2}{ \sum_{i=1}^N (z_i^l – \overline{z}^l)^2}$$ with \(a_i\) tabulated coefficient and \(\overline{z}^l=\frac{1}{N}\sum_{i=1}^N z_i^l \) the average over all individuals, for each replicate. The statistic displayed in Monolix corresponds to the average statistic over all replicates \(W=\frac{1}{L}\sum_{l=1}^L W^l \). For the p-values, one p-value is calculated for each replicate, using the Shapiro-Wild table with \(N\) (number of individuals) degrees of freedom. The Benjamini-Hochberg (BH) procedure is then applied: the p-values are ranked by ascending order and the BH critical value is calculated for each as \( \frac{\textrm{rank}}{L}Q \) with \(\textrm{rank}\) the individual p-value’s rank, \(L\) the total number of p-values (equal to the number of replicates) and \(Q= 0.05\) the false discovery rate. The largest p-value that is smaller than the corresponding critical value is selected. Distribution of the individual parameters dependent on covariates – test the marginal distribution of each individual parameter Individual parameters that depend on covariates are not anymore identically distributed. Each transformed individual parameter is normally distributed, with its own mean that depends on the value of the individual covariate. In other words, the distribution of an individual parameter is a mixture of (transformed) normal distributions. A Kolmogorov-Smirnov test is used for testing the distributional adequacy of these individual parameters. The null-hypothesis is: H0: the individual parameters are samples from the mixture of transformed normal distributions (defined by the population parameters and the covariate values) A small p-value indicates that the null hypothesis can be rejected. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). With our example, we obtain: The model for the observations A combined1 error model with a normal distribution is assumed in our example: Distribution of the residuals Several tests are performed for the individual residuals (IWRES), the NPDE and for the population residuals (PWRES). Test if the distribution of the residuals is symmetrical around 0 A Miao, Gel and Gastwirth (2006) test (or Van Der Waerden test in the 2018 release) is used to test the symmetry of the residuals. Indeed, symmetry of the residuals around 0 is an important property that deserves to be tested, in order to decide, for instance, if some transformation of the observations should be done. The null hypothesis tested is: H0: the median of the residuals is equal to its mean A small p-value indicates that the null hypothesis can be rejected. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). With our example, we obtain: The math behind: Let \(R_i\) the residuals (NPDE, PWRES or IWRES) for each individual \(i\), \(\overline{R}\) the mean of the residuals, and \(M_R\) their median. The MGG test statistic is: $$T=\frac{\sqrt{n}}{0.9468922}\frac{\overline{R}-M_R}{ \sum_{i=1}^{n}|R_i-M|}$$ with \(n\) the number of residuals. The test statistic is compared to a standard normal distribution. The formula above is valid for i.i.d (independent and identically distributed) residuals. For the IWRES, the residuals corresponding to a given time and given id are not independent (they ressemble each other). To solve the problem, we estimate an effective number of residuals. The number of residuals \(n\) can be split into the number of replicates \(L\) times the number of observations \(m\). We look for the effective number of replicates \(\tilde{L}\) such that: $$ \frac{\tilde{L}}{L} \sum_{l=1}^L (R_i^l)^2 \approx \chi^2(\tilde{L})$$ using a maximum likelihood estimation. The number of residuals is then calculated as \(n=\tilde{L} \times m \). Test if the residuals are normally distributed A Shapiro Wilk test is used for testing the normality of the residuals. The null hypothesis is: H0: the residuals are normally distributed If the p-value is small, there is evidence that the residuals are not normally distributed. The Shapiro Wilk test is known to be very powerful. Then, a small deviation of the empirical distribution from the normal distribution may lead to a very significant test (i.e. a very small p-value), which does not necessarily means that the model should be rejected. Thus, no color highlight is made for this test. In our example, we obtain: The math behind: Let \(R_i^l\) the residuals (NPDE, PWRES or IWRES) for individual \(i\). NPDE and PWRES have one values per time points and per individual. IWRES have one value per time point, per individual and per replicate (corresponding to the \(L\) individual parameters sampled from the conditional distribution). The Shapiro-Wilk test statistic is calculated for each replicate \(l\) (i.e the first sample from all individuals, then the second sample from all individuals, etc): $$W^l=\frac{\left( \sum_{i=1}^N a_i R_i^l \right)^2}{ \sum_{i=1}^N (R_i^l – \overline{R}^l)^2}$$ with \(a_i\) tabulated coefficient and \(\overline{R}^l=\frac{1}{N}\sum_{i=1}^N R_i^l \) the average over all individuals, for each replicate. The statistic displayed in Monolix corresponds to the average statistic over all replicates \(W=\frac{1}{L}\sum_{l=1}^L W^l \). For the p-values, one p-value is calculated for each replicate, using the Shapiro-Wild table with \(N\) (number of individuals) degrees of freedom. The Benjamini-Hochberg (BH) procedure is then applied: the p-values are ranked by ascending order and the BH critical value is calculated for each as \( \frac{\textrm{rank}}{L}Q \) with \(\textrm{rank}\) the individual p-value’s rank, \(L\) the total number of p-values (equal to the number of replicates) and \(Q= 0.05\) the false discovery rate. The largest p-value that is smaller than the corresponding critical value is selected. Starting from the 2019 version, the section Proposal in the tab Results includes automatic proposals of improvements for the statistical model, based on comparisons of many correlation, covariate and error models. Model selections are performed by using a BIC criteria (called Criteria in the interface), based on the current simulated individual parameters. This is why the proposal is computed by the task Conditional distribution. Note that the BIC criteria is not the same as the BIC computed for the Monolix project with the task log-likelihood: it does not characterize the whole model but only each part of the statistical model evaluated in the proposal, and thus yields different values for each type of model. Each criteria is given by the formula: \(BIC = -2\log({\cal L’})/nRep+\log(N) * k\), • \(\cal L’\) = likelihood of the linear regression • N = number of individuals • k = number of estimates betas • nRep = number of replicates (samples per individual) The number of estimated parameters k characterizes the part of the statistical model that is evaluated. The likelihood \(\cal L’\) is not the same as the one computed by the log-likelihood task, it is based on the joint distribution: \(p(y_i, \phi_i; \theta)=p(y_i| \phi_i; \theta)p(\phi_i; \theta)\) where all the parameters are fixed to the values estimated by Monolix except the parameters characterizing the model that is evaluated: error parameters (in \(\theta\)) for the error models, individual parameters or random effects (in \(\phi_i\)) for the covariate models or the correlation models. Proposed model The section Proposal is organized in 4 tabs. The first tab summarizes the best proposal for the statistical model, that is the combination of best proposals for the error, covariate and correlation The current statistical model is displayed below the proposed model, and the differences are highlighted in light blue. The proposed model can be applied automatically with the button “Apply”. This modifies the current project to include all elements of the proposed statistical model. It is then recommended to save the project under a new name to avoid overwriting previous results. Error model The error model selection is done by computing the criterion for each possible residual error model (constant, proportional, combined1, combined2), where the error parameters are optimized based on the data and the current predictions, for each observation model. The evaluated models are displayed for each observation mode in increasing order of criterion. The current error model is highlighted in blue. Covariate model The covariate model selection is based on the evaluation of the criterion for each individual parameter independently. For each individual parameter, all covariate models obtained by adding or removing one covariate are evaluated: beta parameters are estimated with linear regression, and the criterion is computed. The model with the best criterion is retained to continue the same procedure. All evaluated models are displayed for each parameter, in increasing order of criterion. The current covariate model is highlighted in blue. Since possibly many covariate models can be evaluated for each parameter, the maximum number of displayed models per parameter is 4 by default and can be changed with a slider, as below. Additional evaluated models can still be displayed by clicking on “more entries” below each table. Beta parameters are computed for each evaluated model by linear regression. They are not displayed by default, but can be displayed with a toggle as shown below. For categorical covariates, the categories corresponding to the beta parameters are also displayed. Correlation model The correlation model selection is done by computing the criterion for each possible correlation block at each dimension, starting with the best solution from the previous dimension. The correlation models are displayed by increasing value of criterion. The current correlation model is highlighted in blue. In Monolix2021, the correlation model includes also random effects at the inter-occasion level, like in the example below, while they were excluded from the proposal in previous versions. 5.1.5.Standard error using the Fisher Information Matrix The standard errors represent the uncertainty of the estimated population parameters. In Monolix, they are calculated via the estimation of the Fisher Information Matrix. They can for instance be used to calculate confidence intervals or detect model overparametrization. Calculation of the standard errors Several methods have been proposed to estimate the standard errors, such as bootstrapping or via the Fisher Information Matrix (FIM). In the Monolix GUI, the standard errors are estimated via the FIM. Bootstrapping can be accessed either through the RsSimulx R package or is now incorporated directly into the Monolix interface starting from version 2024R1. The Fisher Information Matrix (FIM) The observed Fisher information matrix (FIM) \(I \) is minus the second derivatives of the observed log-likelihood: $$ I(\hat{\theta}) = -\frac{\partial^2}{\partial\theta^2}\log({\cal L}_y(\hat{\theta})) $$ The log-likelihood cannot be calculated in closed form and the same applies to the Fisher Information Matrix. Two different methods are available in Monolix for the calculation of the Fisher Information Matrix: by linearization or by stochastic approximation. Via stochastic approximation A stochastic approximation algorithm using a Markov chain Monte Carlo (MCMC) algorithm is implemented in Monolix for estimating the FIM. This method is extremely general and can be used for many data and model types (continuous, categorical, time-to-event, mixtures, etc.). Via linearization This method can be applied for continuous data only. A continuous model can be written as: $$\begin{array}{cl} y_{ij} &= f(t_{ij},z_i)+g(t_{ij},z_i)\epsilon_{ij} \\ z_i &= z_{pop}+\eta_i \end{array}$$ with \( y_{ij} \) the observations, f the prediction, g the error model, \( z_i\) the individual parameter value for individual i, \( z_{pop}\) the typical parameter value within the population and \ (\eta_i\) the random effect. Linearizing the model means using a Taylor expansion in order to approximate the observations \( y_{ij} \) by a normal distribution. In the formulation above, the appearance of the random variable \ (\eta_i\) in the prediction f in a nonlinear way leads to a complex (non-normal) distribution for the observations \( y_{ij} \). The Taylor expansion is done around the EBEs value, that we note \( z_i^{\textrm{mode}} \). Standard errors Once the Fisher Information Matrix has been obtained, the standard errors can be calculated as the square root of the diagonal elements of the inverse of the Fisher Information Matrix. The inverse of the FIM \(I(\hat{\theta})\) is the variance-covariance matrix \(C(\hat{\theta})\): The standard error for parameter \( \hat{\theta}_k \) can be calculated as: Note that in Monolix, the Fisher Information Matrix and variance-covariance matrix are calculated on the transformed normally distributed parameters. MonolixSuite version 2024R1 incorporates a change in the calculation of the variance covariance matrix \( \tilde{C} \). In version 2023 and before, \( \tilde{C} \) was obtained for untransformed parameters using the Jacobian matrix \(J\). The Jacobian matrix is a first-order approximation. $$\tilde{C}=J^TC J$$ Starting from version 2024R1 and later, the Jacobian matrix has been replaced by exact formulas to compute the variance, dependent on the distribution of the parameters. Parameters are distingushed into those with normal distribution, lognormal distribution, as well as logit and probitnormal distribution: • Normal distributed parameters: No transformation is required. • Lognormal distributed parameters: The variance in the Gaussian domain (e.g variance of log(V_pop)) are obtained via the FIM. To obtain the variance of the untransformed parameyers (e.g variance of V_pop), the following formula is applied: \( Var(\hat{\theta}_{k}) = (\exp(\sigma ^{2}-1))\exp(2\mu + \sigma ^{2})\), with \( \mu = \ln(\hat{\theta}_{k}) \) and \( \sigma ^{2}=Var(\ln(\hat{\ • Logitnormal and probitnormal distributed parameters: There are no explicit formula to obtain the variance of the untransformed parameters (e.g bioavailability F) from the transformed parameter (e.g logit(F)). Therefore, a Monte Carlo sampling approach is used. 100000 samples are drawn from the covariance matrix in gaussian domain. Then the samples are transformed from gaussian to non-gaussian domain. For instance in the case of a logitnormal distributed parameter within bounds \( a\) and \( b\), the \(i\)-th sample of parameter \( \theta_{k,i}\): \(\mu_{k,i}=logit(\theta_ {k,i})\) is transformed into non-gaussian space by the inverse logit, i.e. \(\theta_{k,i}=\frac{b \times \exp(\mu_{k,i})+ a}{(1 + \exp(\mu_{k,i}))}\). Then the empirical variance \(\sigma^2\) over all transformed samples \(\theta_{k}\) is calculated. This transformation applies only to the typical values (fixed effects) “pop”. For the other parameters (standard deviation of the random effects “omega”, error model parameters, covariate effects “beta” and correlation parameters “corr”), we obtain directly the variance of these parameters from the FIM. Correlation matrix The correlation matrix is calculated from the variance-covariance matrix as: $$\text{corr}(\theta_i,\theta_j)=\frac{\tilde{C}_{ij}}{\textrm{s.e}(\theta_i)\textrm{ s.e}(\theta_j)}$$ Wald test For the beta parameters characterizing the influence of the covariates, the relative standard error can be used to perform a Wald test, testing if the estimated beta value is significantly different from zero. Running the standard errors task When running the standard error task, the progress is displayed in the pop-up window. At the end of the task, the correlation matrix is also shown, along with the elapsed time and number of Dependencies between tasks: The “Population parameters” task must be run before launching the Standard errors task. If the Conditional distribution task has already been run, the first iterations of the Standard errors (without linearization) will be very fast, as they will reuse the same draws as those obtained in the Conditional distribution task. In the graphical user interface In the Pop.Param section of the Results tab, three additional columns appear in addition to the estimated population parameters: • S.E: the estimated standard errors • R.S.E: the relative standard error (standard error divided by the estimated parameter value) To help the user in the interpretation, a color code is used for the p-value and the RSE: • For the p-value: between .01 and .05, between .001 and .01, and less than .001. • For the RSE: between 50% and 100%, between 100% and 200%, and more than 200%. When the standard errors were estimated both with and without linearization, the S.E and R.S.E are displayed for both methods. In the STD.ERRORS section of the Results tab, we display: • R.S.E: the relative standard errors • Correlation matrix: the correlation matrix of the population parameters • Eigen values: the smallest and largest eigen values, as well as the condition number (max/min) • The elapsed time and, starting from Monolix2021R1, the number of iterations for stochastic approximation, as well as a message indicating whether convergence has been reached (“auto-stop”) or if the task was stopped by the user or reached the maximum number of iterations. To help the user in the interpretation, a color code is used: • For the correlation: between .5 and .8, between .8 and .9, and higher than .9. • For the RSE: between 50% and 100%, between 100% and 200%, and more than 200%. When the standard errors were estimated both with and without linearization, both results appear in different subtabs. If you hover on a specific value with the mouse, both parameters are highlighted to know easily which parameter you are looking at: In the output folder After having run the Standard errors task, the following files are available: • summary.txt: contains the s.e, r.s.e, p-values, correlation matrix and eigenvalues in an easily readable format, as well as elapsed time and number of iterations for stochastic approximation (starting from Monolix2021R1). • populationParameters.txt: contains the s.e, r.s.e and p-values in csv format, for the method with (*_lin) or without (*_sa) linearization • FisherInformation/correlationEstimatesSA.txt: correlation matrix of the population parameter estimates, method without linearization (stochastic approximation) • FisherInformation/correlationEstimatesLin.txt: correlation matrix of the population parameter estimates, method with linearization • FisherInformation/covarianceEstimatesSA.txt: variance-covariance matrix of the transformed normally distributed population parameter, method without linearization (stochastic approximation) • FisherInformation/covarianceEstimatesLin.txt: variance-covariance matrix of the transformed normally distributed population parameter, method with linearization Interpreting the correlation matrix of the estimates The color code of Monolix’s results allows to quickly identify population parameter estimates that are strongly correlated. This often reflects model overparameterization and can be further investigated using Mlxplore and the convergence assessment. This is explained in details in this video: The settings are accessible through the interface via the button next to the Standard errors task: • Minimum number of iterations: minimum number of iterations of the stochastic approximation algorithm to calculate the Fisher Information Matrix. • Maximum number of iterations: maximum number of iterations of the stochastic approximation algorithm to calculate the Fisher Information Matrix. The algorithm stops even if the stopping criteria are not met. Good practices and tips When to use “use linearization method”? Firstly, it is only possible to use the linearization method for continuous data. For the linearization is available, this method is generally much faster than without linearization (i.e stochastic approximation) but less precise. The Fisher Information Matrix by model linearization will generally be able to identify the main features of the model. More precise– and time-consuming – estimation procedures such as stochastic approximation will have very limited impact in terms of decisions for these most obvious features. Precise results are required for the final runs where it becomes more important to rigorously defend decisions made to choose the final model and provide precise estimates and diagnosis plots. I have NANs as results for standard errors for parameter estimates. What should I do? Does it impact the likelihood? NaNs as standard errors often appear when the model is too complex and some parameters are unidentifiable. They can be seen as an infinitely large standard error. The likelihood is not affected by NaNs in the standard errors. The estimated population parameters having a NaN as standard error are only very uncertain (infinitely large standard error and thus infinitely large confidence intervals). 5.1.6.Log Likelihood estimation The log-likelihood is the objective function and a key information. The log-likelihood cannot be computed in closed form for nonlinear mixed effects models. It can however be estimated. Log-likelihood estimation Performing likelihood ratio tests and computing information criteria for a given model requires computation of the log-likelihood $$ {\cal L}{\cal L}_y(\hat{\theta}) = \log({\cal L}_y(\hat{\theta})) \triangleq \log(p(y;\hat{\theta})) $$ where \(\hat{\theta}\) is the vector of population parameter estimates for the model being considered, and \(p(y;\hat{\theta})\) is the probability distribution function of the observed data given the population parameter estimates. The log-likelihood cannot be computed in closed form for nonlinear mixed effects models. It can however be estimated in a general framework for all kinds of data and models using the importance sampling Monte Carlo method. This method has the advantage of providing an unbiased estimate of the log-likelihood – even for nonlinear models – whose variance can be controlled by the Monte Carlo size. Two different algorithms are proposed to estimate the log-likelihood: • by linearization, • by Importance sampling. Log-likelihood by importance sampling The observed log-likelihood \({\cal LL}(\theta;y)=\log({\cal L}(\theta;y))\) can be estimated without requiring approximation of the model, using a Monte Carlo approach. Since $${\cal LL}(\theta;y) = \log(p(y;\theta)) = \sum_{i=1}^{N} \log(p (y_i;\theta))$$ we can estimate \(\log(p(y_i;\theta))\) for each individual and derive an estimate of the log-likelihood as the sum of these individual log-likelihoods. We will now explain how to estimate \(\log(p (y_i;\theta))\) for any individual i. Using the \(\phi\)-representation of the model (the individual parameters are transformed to be Gaussian), notice first that \(p(y_i;\theta)\) can be decomposed as follows: $$p(y_i;\theta) = \int p(y_i,\phi_i;\theta)d\phi_i = \int p(y_i|\phi_i;\theta)p(\phi_i;\theta)d\phi_i = \mathbb{E}_{p_{\phi_i}}\left(p(y_i|\phi_i;\theta)\right)$$ Thus, \(p(y_i;\theta)\) is expressed as a mean. It can therefore be approximated by an empirical mean using a Monte Carlo procedure: 1. Draw M independent values \(\phi_i^{(1)}\), \(\phi_i^{(2)}\), …, \(\phi_i^{(M)}\) from the marginal distribution \(p_{\phi_i}(.;\theta)\). 2. Estimate \(p(y_i;\theta)\) with \(\hat{p}_{i,M}=\frac{1}{M}\sum_{m=1}^{M}p(y_i | \phi_i^{(m)};\theta)\) By construction, this estimator is unbiased, and consistent since its variance decreases as 1/M: $$\mathbb{E}\left(\hat{p}_{i,M}\right)=\mathbb{E}_{p_{\phi_i}}\left(p(y_i|\phi_i^{(m)};\theta)\right) = p(y_i;\theta) ~~~~\mbox{Var}\left(\hat{p}_{i,M}\right) = \frac{1}{M} \mbox{Var}_{p_{\phi_i}}\ We could consider ourselves satisfied with this estimator since we “only” have to select M large enough to get an estimator with a small variance. Nevertheless, it is possible to improve the statistical properties of this estimator. For any distribution \(\tilde{p_{\phi_i}}\) that is absolutely continuous with respect to the marginal distribution \(p_{\phi_i}\), we can write $$ p(y_i;\theta) = \int p(y_i|\phi_i;\theta) \frac{p(\phi_i;\theta)}{\tilde{p}(\phi_i;\theta)} \tilde{p}(\phi_i;\theta)d\phi_i = \mathbb{E}_{\tilde{p}_{\phi_i}}\left(p(y_i|\phi_i;\theta)\frac{p(\ phi_i;\theta)}{\tilde{p}(\phi_i;\theta)} \right).$$ We can now approximate \(p(y_i;\theta)\) using an importance sampling integration method with \(\tilde{p}_{\phi_i}\) as the proposal distribution: 1. Draw M independent values \(\phi_i^{(1)}\), \(\phi_i^{(2)}\), …, \(\phi_i^{(M)}\) from the proposal distribution \(\tilde{p_{\phi_i}}(.;\theta)\). 2. Estimate \(p(y_i;\theta)\) with \(\hat{p}_{i,M}=\frac{1}{M}\sum_{m=1}^{M}p(y_i | \phi_i^{(m)};\theta)\frac{p(\phi_i^{(m)};\theta)}{\tilde{p}(\phi_i^{(m)};\theta)}\) By construction, this estimator is unbiased, and its variance also decreases as 1/M: $$\mbox{Var}\left(\hat{p}_{i,M}\right) = \frac{1}{M} \mbox{Var}_{\tilde{p_{\phi_i}}}\left(p(y_i|\phi_i^{(m)};\theta)\frac{p(\phi_i^{(m)};\theta)}{\tilde{p}(\phi_i^{(m)};\theta)}\right)$$ There exist an infinite number of possible proposal distributions \(\tilde{p}\) which all provide the same rate of convergence 1/M. The trick is to reduce the variance of the estimator by selecting a proposal distribution so that the numerator is as small as possible. For this purpose, an optimal proposal distribution would be the conditional distribution \(p_{\phi_i|y_i}\). Indeed, for any \(m = 1,2, …, M,\) $$ p(y_i|\phi_i^{(m)};\theta)\frac{p(\phi_i^{(m)};\theta)}{p(\phi_i^{(m)}|y_i;\theta)} = p(y_i;\theta) $$ which has a zero variance, so that only one draw from \(p_{\phi_i|y_i}\) is required to exactly compute the likelihood \(p(y_i;\theta)\). The problem is that it is not possible to generate the \(\phi_i^{(m)}\) with this exact conditional distribution, since that would require computing a normalizing constant, which here is precisely \ Nevertheless, this conditional distribution can be estimated using the Metropolis-Hastings algorithm and a practical proposal “close” to the optimal proposal \(p_{\phi_i|y_i}\) can be derived. We can then expect to get a very accurate estimate with a relatively small Monte Carlo size M. The mean and variance of the conditional distribution \(p_{\phi_i|y_i}\) are estimated by Metropolis-Hastings for each individual i. Then, the \(\phi_i^{(m)}\) are drawn with a noncentral student t- $$ \phi_i^{(m)} = \mu_i + \sigma_i \times T_{i,m}$$ where \(\mu_i\) and \(\sigma^2_i\) are estimates of \(\mathbb{E}\left(\phi_i|y_i;\theta\right)\) and \(\mbox{Var}\left(\phi_i|y_i;\theta\right)\), and \((T_{i,m})\) is a sequence of i.i.d. random variables distributed with a Student’s t-distribution with \(\nu\) degrees of freedom (see section Advanced settings for the log-likelihood for the number of degrees of freedom). Remark: The standard error of the LL on all the draws is proposed. It represents the impact of the variability of the draws on the LL uncertainty, given the estimated population parameters, but it does not take into account the uncertainty of the model that comes from the uncertainty on the population parameters. Remark: Even if \(\hat{\cal L}_y(\theta)=\prod_{i=1}^{N}\hat{p}_{i,M}\) is an unbiased estimator of \({\cal L}_y(\theta)\), \(\hat{\cal LL}_y(\theta)\) is a biased estimator of \({\cal LL}_y(\theta) \). Indeed, by Jensen’s inequality, we have : $$\mathbb{E}\left(\log(\hat{\cal L}_y(\theta))\right) \leq \log \left(\mathbb{E}\left(\hat{\cal L}_y(\theta)\right)\right)=\log\left({\cal L}_y(\theta)\right)$$ Best practice: the bias decreases as M increases and also if \(\hat{\cal L}_y(\theta)\) is close to \({\cal L}_y(\theta)\). It is therefore highly recommended to use a proposal as close as possible to the conditional distribution \(p_{\phi_i|y_i}\), which means having to estimate this conditional distribution before estimating the log-likelihood (i.e. run task “Conditional distribution” Log-likelihood by linearization The likelihood of the nonlinear mixed effects model cannot be computed in a closed-form. An alternative is to approximate this likelihood by the likelihood of the Gaussian model deduced from the nonlinear mixed effects model after linearization of the function f (defining the structural model) around the predictions of the individual parameters \((\phi_i; 1 \leq i \leq N)\). Notice that the log-likelihood can not be computed by linearization for discrete outputs (categorical, count, etc.) nor for mixture models. Best practice: We strongly recommend to compute the conditional mode before computing the log-likelihood by linearization. Indeed, the linearization should be made around the most probable values as they are the same for both the linear and the nonlinear model. Best practices: When should I use the linearization and when should I use the importance sampling? Firstly, it is only possible to use the linearization algorithm for the continuous data. In that case, this method is generally much faster than importance sampling method and also gives good estimates of the LL. The LL calculation by model linearization will generally be able to identify the main features of the model. More precise– and time-consuming – estimation procedures such as stochastic approximation and importance sampling will have very limited impact in terms of decisions for these most obvious features. Selection of the final model should instead use the unbiased estimator obtained by Monte Carlo. Display and outputs In case of estimation using the importance sampling method, a graphical representation is proposed to see the valuation of the mean value over the Monte Carlo iterations as on the following: The final estimations are displayed in the result frame as below. Notice that there is a “Copy table” icon on the top of each table to copy them in Excel, Word, … The table format and display will be The log-likelihood is given in Monolix together with the Akaike information criterion (AIC) and Bayesian information criterion (BIC): $$ AIC = -2 {\cal L}{\cal L}_y(\hat{\theta}) +2P $$ $$ BIC = -2 {\cal L}{\cal L}_y(\hat{\theta}) +log(N)P $$ where P is the total number of parameters to be estimated and N the number of subjects. The new BIC criterion penalizes the size of \(\theta_R\) (which represents random effects and fixed covariate effects involved in a random model for individual parameters) with the log of the number of subjects (\(N\)) and the size of \(\theta_F\) (which represents all other fixed effects, so typical values for parameters in the population, beta parameters involved in a non-random model for individual parameters, as well as error parameters) with the log of the total number of observations (\(n_{tot}\)), as follows: $$ BIC_c = -2 {\cal L}{\cal L}_y(\hat{\theta}) + \dim(\theta_R)\log N+\dim(\theta_F)\log n_{tot}$$ If the log-likelihood has been computed by importance sampling, the number of degrees of freedom used for the proposal t-distribution (5 by default) is also displayed, together with the standard error of the LL on the individual parameters drawn from the t-distribution. In terms of output, a folder called LogLikelihood is created in the result folder where the following files are created • logLikelihood.txt: containing for each computed method, the -2 x log-likelihood, the Akaike Information Criteria (AIC), the Bayesian Information Criteria (BIC), and the corrected Bayesian Information Criteria (BICc). • individualLL.txt: containing the -2 x log-likelihood for each individual for each computed method. Advanced settings for the log-likelihood Monolix uses a t-distribution as proposal. By default, the number of degrees of freedom of this distribution is fixed to 5. In the settings of the task, it is also possible to optimize the number of degrees of freedom. In such a case, the default possible values are 1, 2, 5, 10 and 20 degrees of freedom. A distribution with a small number of degree of freedom (i.e. heavy tails) should be avoided in case of stiff ODE’s defined models. 5.2.Algorithms convergence assessment Monolix includes a convergence assessment tool. It allows to execute a workflow of estimation tasks several times, with different, randomly generated, initial values of fixed effects, as well as different seeds. The goal is to assess the robustness of the convergence. Running the convergence assessment For that, click on the shortcut button in the “Tasks” part. A dedicated panel opens as in the figure below. The first shortcut button next to Run can be used to go back to the estimation. The user can define □ the number of runs, or replicates □ the type of assessment: □ the initial parameters. By default, initial values are uniformly drawn from intervals defined around the estimated values if population parameters have been estimated, the initial estimates otherwise. Notice that it is possible to set one initial parameter constant while generating the others. The minimum and maximum of the generated parameters can be modified by the user. All settings used are saved and reloaded with the run containing the convergence assessment results. Notice that • In the case of estimation of the standard errors and log-likelihood by linearization, the individual parameters with the conditional mode method are computed as well to have more relevant • In the case of estimation of the standard errors and log-likelihood without the linearization, the conditional distribution method is computed too to have more relevant estimation. • The workflow is the same between the runs and is not the one defined in the interface. Click on Run to execute the tool. Thus you are able to estimate the population parameters using several initial seeds and/or several initial conditions. Display and outputs Several kinds of plots are given as a summary of the results. First of all, the SAEM convergence assessment is proposed. The convergence of each parameter on each run is proposed. It allows to see if the convergence for each run is ok. Then, a plot showing the estimated values for each replicate is proposed. If the estimation of the standard errors was included in the scenario, the estimated standard errors are also displayed as horizontal bars. It allows to see if all parameters converge statistically to the same values. Starting from the 2019 version, it is possible to export manually all the plots in the Assessment folder in your result folder by clicking on the “export” icon (purple box on the previous figure). Finally, if log-likelihood without linearization is used, the curves for convergence of importance sampling are proposed. In addition, a Monolix project and its result folder is generated for each set of initial parameters. They are located in the Assessment subfolder of the main project’s result folder (located by default next to the .mlxtran project file). Along with all the runs, there is a summary of all the runs “assessment.txt” providing all the individual parameter estimates along with the -2LL, as in the following: Notice that, starting from the 2019 version, it is possible to reload all the results of a previous convergence if nothing has changed in the project. Starting from version 2021, the settings of the convergence assessment are also reloaded. Best practices: what is the use the convergence assessment tool? We cannot claim that SAEM always converges (i.e., with probability 1) to the global maximum of the likelihood. We can only say that it converges under quite general hypotheses to a maximum – global or perhaps local – of the likelihood. A large number of simulation studies have shown that SAEM converges with high probability to a “good” solution – hopefully the global maximum – after a small number of iterations. The purpose of this tool is to evaluate the SAEM algorithm with initial conditions and see if the estimated parameters are the “global” minimum. The trajectory of the outputs of SAEM depends on the sequence of random numbers used by the algorithm. This sequence is entirely determined by the “seed.” In this way, two runs of SAEM using the same seed will produce exactly the same results. If different seeds are used, the trajectories will be different but convergence occurs to the same solution under quite general hypotheses. However, if the trajectories converge to different solutions, that does not mean that any of these results are “false”. It just means that the model is sensitive to the seed or to the initial conditions. The purpose of this tool is to evaluate the SAEM algorithm with several seeds to see the robustness of the convergence. 5.3.Model building Starting from the 2019 version, a panel Model building provides automatic model building tools: This panel is accessible via the button Model building next to Run in the interface of Monolix, or from the section Perspective in the tab Home. 5.3.1.Statistical tests for model assessment 5.3.2.Automatic statistical model building (covariate model, correlation model, error model) 5.3.3.Automatic complete statistical model building 5.3.4.Automatic covariate model building 5.3.5.Automatic variability model building 5.3.6.Statistical model building with SAMBA Starting from the 2019 version, an automatic statistical model building algorithm is implemented in Monolix: SAMBA (Stochastic Approximation for Model Building Algorithm) SAMBA is an iterative procedure to accelerate and optimize the process of model building by identifying at each step how best to improve some of the model components (residual error model, covariate effects, correlations between random effects). This method allows to find the optimal statistical model which minimizes some information criterion in very few steps. It is described in more details in the following publication: Prague, M, Lavielle, M. SAMBA: A novel method for fast automatic model building in nonlinear mixed-effects models. CPT Pharmacometrics Syst Pharmacol. 2022; 00: 1- 12. doi:10.1002/psp4.12742 At each iteration, the best statistical model is selected with the same method as the Proposal. This step is very quick as it does not require to estimate new parameters with SAEM for all evaluated models. Initial estimates are set to the estimated values from the Proposal. The population parameters of the selected model are then estimated with SAEM, individual parameters are simulated from the conditional distributions, and the log-likelihood is computed to evaluate the improvement of the model. The algorithm stops when no improvement is brought by the selected model or if it has already been tested. The linearization method is selected by default to compute the log-likelihood of the estimated model at each iteration. It can be unselected to use importance sampling. The improvement can be evaluated with two different criteria based on the log-likelihood, that can be selected in the settings (available via the icon next to Run): • BICc (by default) • LRT (likelihood ratio threshold): by default the forward threshold is 0.01 and the backward threshold is 0.01. These values can be changed in the settings. Starting from the 2021 version, all settings used are saved and reloaded with the run containing the model building results. Selecting covariates and parameters It is possible to select part of the covariates and individual parameters to be used in the algorithm: Moreover, a panel “Locked relationships” can be opened to lock in or lock out some covariate-parameter relationships among the ones that are available: Starting from the 2021 version, all relationships considered in model building are part of the settings which are saved and reloaded with the run containing the model building results. The results of the model building are displayed in a tab Results, with the list of model run at each iteration (see for example the figure below) and the corresponding -2LL and BICc values. All resulting runs are also located in the ModelBuilding subfolder of the project’s result folder (located by default next to the .mlxtran project file). In the Results tab, by default runs are displayed by order of iteration, except the best model which is displayed in first position, highlighted in blue. Note that the table of iterations can be sorted by iteration number or criteria (see green marks below). Buttons “export and load” (see blue mark below) can also be used to export the model estimated at this iteration as a new Monolix project with a new name and open it in the current Monolix session. While the algorithm is running, the progress of the estimation tasks at each iteration is displayed on a white pop-up window, and temporary green messages confirm each successful task. [Bootstrap is not available in versions prior to 2024R1] The Bootstrap module in Monolix provides a robust method to assess parameter uncertainty, offering an alternative to calculating standard errors via inversion of the Fisher Information Matrix. This approach becomes particularly valuable when facing issues such as NaNs in standard errors due to numerical errors in matrix inversion or biases in results caused by incorrect assumptions of asymptotic normality for parameter estimates. Bootstrap overcomes these challenges by sampling many replicate datasets and re-estimating parameters on each replicate. While powerful, bootstrap comes with certain drawbacks: • Running many replicates for population parameter estimation can be time-consuming. Bootstrap in Monolix can be used from command line and with distributed calculation (parallelization on a grid via MPI). • Saving a large number of new datasets and results may raise storage issues. You can choose in the settings whether to save sampled datasets and results. Accessing the bootstrap module Bootstrap can be accessed under the “Perspective” menu or with a shortcut next to “Run” in the “Statistical model & Tasks” tab: Available bootstrap settings Users can customize the following settings: • Number of runs: Number of bootstrap replicates • Estimation tasks: Estimation tasks to run among population pramaters, Standard Errors, and Log-Likelihood • Initial values: Whether bootstrap runs should start their estimation from the same initial values as the initial run, or from the final estimated values from the initial run. • Sampling method: Type of bootstrap: □ Nonparametric Bootstrap (case resampling): New datasets are sampled from the initial dataset for each replicate. Each individual of the new dataset is sampled randomly with replacement from the initial dataset. This means that some individuals from the original dataset will appear several times in the resampled dataset, while some individuals will not appear at all. The generated datasets are thus similar but different compared to the original dataset. This method makes no assumption on the model. □ Parametric Bootstrap (SSE): This method is also called SSE for stochastic simulation and estimation. New datasets are simulated from the model for each replicate. Individual parameters are sampled from the population distribution and individuals are simulated using the same design as in the original dataset (same treatments, covariates, regressors, and observation times). Residual error is added on top of the model predictions to obtain a realistic dataset. If the initial dataset has censored data, censoring limits to apply to the simulated datasets can be specified by the user. This method assumes that the model correctly captures the original data. • Sample size: Number of individuals in the bootstrap datasets. By default it is set equal to the number of individuals of the original dataset but can be modified if required. • Stratified resampling: Selection of categorical covariates which distribution should be preserved when re-sampling individuals to create datasets for non-parametric bootstrap. When the original dataset contains few individuals, or when a categorical covariate distribution is highly imbalanced with only few individuals belonging to one of the categories, resampled bootstrap datasets can results in quite different distributions (e.g only one individual left in one of the categories) and this can lead to a bias, in particular on the covariate effects (beta) parameters. To avoid this situation, the sampling can be done such that the number of individuals in each category of the covariates selected in “stratified resampling” remains the same as in the initial dataset. • Confidence interval level: Level of the confidence interval bounds displayed in the results to summarize the uncertainty on the population parameters. • Save bootstrap runs: Option to save or not save bootstrap datasets and results folder of each bootstrap run. Both options need to be selected if you want to be able to reload each bootstrap run including the results. • Replace runs with failed convergence: Option to replace or not bootstrap replicates that have a failed convergence. This option allows to replace runs for which the “auto-stop” criteria of the exploratiry phase of the population parameter estimation have not been reached. You can also set a maximal number of runs to replace, so that bootstrap will not go on forever if many runs do not converge. Bootstrap will stop once the maximal number is reached. Running bootstrap After clicking on “Run”, bootstrap is launched and the population parameters estimates from all runs already done are shown as a table and as a plot of their medians and their confidence intervals with respect to the bootstrap iterations. The table and plot updates as soon as new bootstrap run finishes. An estimation of the remaining time to complete all bootstrap runs is available at the top. If you have stopped bootstrap before the last run, or you want to add more runs to your bootstrap results, you can resume bootstrap so that the runs already done will be reused instead of being Bootstrap results reported in the interface and the output files include the following. Depending on the choice of estimation tasks in the bootstrap settings, results for population parameters, RSE (if Standard error task was selected), and LL (if log-likelihood task was selected) are available. • Results > [Estimates]: Estimates from each bootstrap run displayed in tables. When the options to save both the results folder and the datasets have been selected, it is possible to load each bootstrap run via the “load” button in the “Pop [Estimates]” tab: • Plots: Estimates from each bootstrap run displayed as distribution plots. Distribution plots can be displayed as boxplots or as histogramms (pdf). • Results > [Summary]: Summary table of bootstrap estimates including value of the initial run (reference), mean and median over bootstrap runs, standard error (SE) over bootstrap runs, relative standard error (RSE) over bootstrap runs, confidence intervals (according to chosen confidence level). For population parameter, the bias of the mean over bootstrap runs compared ot the reference value is calculated as bias = (mean – reference)/reference*100. If some runs have not reached convergences, a toggle appears to filter the summary table and keep only runs which have converged. • Main window (Estimation) > Results > Pop Param: The bootstrap results also appear in the table of population parameter estimates of the parent run. This is convenient to compare the estimated values and their confidence intervals estimated via the Fisher Information Matrix, to the median and confidence intervals estimated via bootstrap. Bootstrap results can be included in the pop param table when generating reports from Monolix. • Saved datasets: In the case of non-parametric bootstrap, resampled datasets used in bootstrap runs have new subject identifiers defined as integers in the column tagged as ID, and include an additional column named “original_id” with the corresponding subject identifiers from the initial dataset. 5.5.Output files Monolix generates a lot of different output files depending on the tasks done by the user. Here is a complete listing of the files, along with the condition for their creation and their content. Description: summary file. • Header: project file name, date and time of run, Monolix version • Estimation of the population parameters: Estimated population parameters & computation time Description: estimated population parameters (with SAEM). • First column (no name): contains the parameter names (e.g ‘V_pop’ and ‘omega_V’). • value: contains the estimated parameter values. All the files are in the IndividualParameters folder of the result folder Description: Individual parameters (from SAEM, mode, and mean of the conditional distribution) • ID: subject name and occasion (if applicable). If there is one type of occasion, there will be an additional(s) column(s) defining the occasions. • parameterName_SAEM: individual parameter estimated by SAEM, it corresponds to the average of the individual parameters sampled by MCMC during all iterations of the smoothing phase. When several chains are used (see project settings), the average is also done over all chains. This value is an approximation of the conditional mean. • parameterName_mode (if conditional mode was computed): individual parameter estimated by the conditional mode task, i.e mode of the conditional distribution \(p(\psi_i|y_i;\hat{\theta})\). • parameterName_mean (if conditional distribution was computed) : individual parameter estimated by the conditional distribution task, i.e mean of the conditional distribution \(p(\psi_i|y_i;\hat{\ theta})\) . The average of samples from all chains and all iterations is computed in the gaussian space (eg mean of the log values in case of a lognormal distribution), and back-transformed. • parameterName_sd (if conditional distribution was computed): standard deviation of the conditional distribution \(p(\psi_i|y_i;\hat{\theta})\) calculated during the conditional distribution task. • COVname: continuous covariates values corresponding to all data set columns tagged as “Continuous covariate” and all the associated transformed covariates. • CATname: modalities associated to the categorical covariates (including latent covariates and the bsmm covariates) and all the associated transformed covariates. Description: individual random effect, calculated using the population parameters, the covariates and the conditional mode or conditional mean. For instance if we have a parameter defined as \(k_i=k_ {pop}+\beta_{k,WT}WT_i+\eta_i\), we calculate \(\eta_i=k_i – k_{pop}-\beta_{k,WT}WT_i\) with \(k_i\) the estimated individual parameter (mode or mean of the conditional distribution), \(WT_i\) the individual’s covariate, and \(k_{pop}\) and \(\beta_{k,WT}\) the estimated population parameters. • ID: subject name and occasion (if applicable). If there is one type of occasion, there will be an additional(s) column(s) defining the occasions. • eta_parameterName_SAEM: individual random effect estimated by SAEM, it corresponds to the last iteration of SAEM. • eta_parameterName_mode (if conditional mode was computed): individual random effect estimated by the conditional mode task, i.e mode of the conditional distribution \(p(\psi_i|y_i;\hat{\theta}) • eta_parameterName_mean (if conditional distribution was computed) : individual random effect estimated by the conditional distribution task, i.e mean of the conditional distribution \(p(\psi_i| y_i;\hat{\theta})\) . The average of random effect samples from all chains and all iterations is computed. • eta_parameterName_sd (if conditional distribution was computed): standard deviation of the conditional distribution \(p(\psi_i|y_i;\hat{\theta})\) calculated during the conditional distribution • COVname: continuous covariates values corresponding to all data set columns tagged as “Continuous covariate” and all the associated transformed covariates. • CATname: modalities associated to the categorical covariates (including latent covariates and the bsmm covariates) and all the associated transformed covariates. Description: Simulated individual parameter (by the conditional distribution) • rep: replicate of the simulation • ID: subject name and occasion (if applicable). If there is one type of occasion, there will be an additional(s) column(s) defining the occasions. • parameterName: simulated individual parameter corresponding to the draw rep. • COVname: continuous covariates values corresponding to all data set columns tagged as “Continuous covariate” and all the associated transformed covariates. • CATname: modalities associated to the categorical covariates (including latent covariates and the bsmm covariates) and all the associated transformed covariates. Description: Simulated individual random effect (by the conditional distribution) • rep: replicate of the simulation • ID: subject name and occasion (if applicable). If there is one type of occasion, there will be an additional(s) column(s) defining the occasions. • eta_parameterName: simulated individual random effect corresponding to the draw rep. • COVname: continuous covariates values corresponding to all data set columns tagged as”Continuous covariate” and all the associated transformed covariates. • CATname: modalities associated to the categorical covariates (including latent covariates and the bsmm covariates) and all the associated transformed covariates. Description: summary file. • Header: project file name, date and time of run, Monolix version (outputted population parameter estimation task) • Estimation of the population parameters: Estimated population parameters & computation time (outputted population parameter estimation task). Standard errors and relative standard errors are • Correlation matrix of the estimates: correlation matrix by block, eigenvalues and computation time Description: estimated population parameters, associated standard errors and p-value. • First column (no name): contains the parameter names (outputted population parameter estimation task) • Column ‘parameter’: contains the estimated parameter values (outputted population parameter estimation task) • se_lin / se_sa: contains the standard errors (s.e.) for the (untransformed) parameter, obtained by linearization of the system (lin) or stochastic approximation (sa). • rse_lin / rse_sa: contains the parameter relative standard errors (r.s.e.) in % (param_r.s.e. = 100*param_s.e./param), obtained by linearization of the system (lin) or stochastic approximation • pvalues_lin / pvalues_sa: for beta parameters associated to covariates, the line contains the p-value obtained from a Wald test of whether beta=0. If the parameter is not a beta parameter, ‘NaN’ is displayed. Notice that if the Fisher Information Matrix is difficult to invert, some parameter’s standard error can maybe not be computed leading to NaN in the corresponding columns. All the more detailed files are in the FisherInformation folder of the result folder. covarianceEstimatesSA.txt and/or covarianceEstimatesLin.txt Description: variance-covariance matrix of the estimates for the (untransformed) parameters or transformed normally distributed parameters depending on the MonolixSuite version (see below) Outputs: matrix with the project parameters as lines and columns. First column contains the parameter names. The Fisher information matrix (FIM) is calculated for the transformed normally distributed parameters (i.e log(V_pop) if V has a lognormal distribution). By inverting the FIM, we obtain the variance-covariance matrix \(\Gamma\) for the transformed normally distributed parameters \(\zeta\). This matrix is then multiplied by the jacobian J (which elements are defined by \(J_{ij}=\frac{\ partial\theta_i}{\partial\zeta_j}\)) to obtain the variance-covariance matrix \(\tilde{\Gamma}\) for the untransformed parameters \(\theta\): $$\tilde{\Gamma}=J^T\Gamma J$$ The diagonal elements of the variance-covariance matrix \(\tilde{\Gamma}\) for the untransformed parameters are finally used to calculate the standard errors. correlationEstimatesSA.txt and/or correlationEstimatesLin.txt Description: correlation matrix for the (untransformed) parameters Outputs: matrix with the project parameters as lines and columns. First column contains the parameter names. The correlation matrix is calculated as: This implies that the diagonal is unitary. The variance-covariance matrix for the untransformed parameters \(\theta\) is obtained from the inverse of the Fisher Information Matrix and the jacobian. See above for the formula. Description: summary file. • Header: project file name, date and time of run, Monolix version (outputted population parameter estimation task) • Estimation of the population parameters: Estimated population parameters & computation time (outputted population parameter estimation task). Standard errors and relative standard errors are • Correlation matrix of the estimates: correlation matrix by block, eigenvalues and computation time • Log-likelihood Estimation: -2*log-likelihood, AIC and BIC values, together with the computation time All the more detailed files are in the LogLikelihood folder of the result folder Description: Summary of the log-likelihood calculation with the two methods. • criteria: OFV (Objective Function Value), AIC (Akaike Information Criteria), and BIC (Bayesian Information Criteria ) • method: ImportanceSampling and/or linearization Description: -2LL for each individual. Notice that we only have one by individual even if there are occasions. • ID: subject name • method: ImportanceSampling and/or linearization Description: predictions at the observation times • ID: subject name. If there are occasions, additional columns will be added to describe the occasions. • time: Time from the data set. • MeasurementName: Measurement from the data set. • RegressorName: Regressor value. • popPred_medianCOV: prediction using the population parameters and the median covariates. • popPred: prediction using the population parameters and the covariates, e.g \(V_i=V_{pop}\left(\frac{WT_i}{70}\right)^{\beta}\) (without random effects). • indivPred_SAEM: prediction using the mean of the conditional distribution, calculated using the last iterations of the SAEM algorithm. • indPred_mean (if conditional distribution was computed): prediction using the mean of the conditional distribution, calculated in the Conditional distribution task. • indPred_mode (if conditional mode was computed): prediction using the mode of the conditional distribution, calculated in the EBEs task. • indWRes_SAEM: weighted residuals \(IWRES_{ij}=\frac{y_{ij}-f(t_{ij}, \psi_i)}{g(t_{ij}, \psi_i)}\) with \(\psi_i\) the mean of the conditional distribution, calculated using the last iterations of the SAEM algorithm. • indWRes_mean (if conditional distribution was computed): weighted residuals \(IWRES_{ij}=\frac{y_{ij}-f(t_{ij}, \psi_i)}{g(t_{ij}, \psi_i)}\) with \(\psi_i\) the mean of the conditional distribution, calculated in the Conditional distribution task. • indWRes_mode (if conditional mode was computed): weighted residuals \(IWRES_{ij}=\frac{y_{ij}-f(t_{ij}, \psi_i)}{g(t_{ij}, \psi_i)}\) with \(\psi_i\) the mode of the conditional distribution, calculated in the EBEs task. Notice that in case of several outputs, Monolix generates predictions1.txt, predictions2.txt, … Below is a correspondence of the terms used for predictions in Nonmem versus Monolix: Charts data All plots generated by Monolix can be exported as a figure or as text files in order to be able to plot it in another way or with other software for more flexibility. The description of all generated text files is described here. 5.6.Reproducibility of MonolixSuite results Consistent reload of MonolixSuite projects and results MonolixSuite applications ensure full reproducibility and consistency of analysis results. Here is how it works: • MonolixSuite applications save in each project all the elements necessary to run the analysis and replicate results from submitted analyses. In practice, the path to the dataset file, the path to the model file (if custom model not from library), and all settings chosen in the Monolix GUI (tagging of the data set columns, initial parameter values, statistical model definition, task settings, seed of the random number generator, etc) are saved in the .mlxtran file. When an existing Monolix project is loaded and re-run, the results will be exactly the same as for the first • In addition, the results of the analysis are loaded in the interface from the results folder as valid results only if nothing has changed in the project since the time of the run. In practice, when launching a run, the content of the mlxtran file is saved in the result folder. When a Monolix project is reloaded, the content of the loaded mlxtran file and the information in the result folder is compared. If they are the same, it means that the results present in the result folder are consistent with the loaded mlxtran and that therefore they are valid. If anything that could affect the results has changed (e.g initial values indicated in the loaded mlxtran file are different from the initial values used to generate the results), the results are not loaded and the warning message “Results have not been loaded due to an old inconsistent project” appears. • Starting with the 2021R1 version, in Monolix and PKanalix a fingerprint of the dataset is also generated and checked when the project is loaded to ensure that not only the path to the data set file but also the content of the dataset has not been modified. • Plot settings are saved in a different file (not the .mlxtran file) and do not impact the reloading of the results. Rerunning a project with the same MonolixSuite version and the same OS (because random number generators are different in Windows, Linux and Mac) yields the exact same results if nothing has changed in the project. The software version used to generate the results is displayed in the file summary.txt in the results folder. History of runs In addition, in Monolix and PKanalix there is a timestamping option in the Preferences of the software called “Save History”. When it is enabled, the project and its results is saved in the results folder after each run, in a subfolder named “History”, thus making sure that previous runs are not lost even if they have been overwritten by a new run. Below is an example of “History” subfolder in which 3 runs have been saved. Each subfolder named with the time of the run contains the project and its results as they were at the time of the run. Starting from the 2019 version, it is possible to add comments to the project with a dedicated part in the interface in the frame “Comments” as in the following figure. HTML markup or markdown syntax. A preview window is also proposed in order to see the formatted text. The comments are displayed in Sycomore. 6.1.1.Generating and exporting plots Generating plots Diagnostic plots can be generated by clickin on the task “Plots” or as part of the scenario when clicking on Run. By default, when reloading a project which has already run, only the plots Observed data and Covariate viewer are displayed. To generate the other diagnostic plots, click on the Plot task. Generating plots automatically at project load can be enabled via a Preference. The set of plots to compute can be selected by clicking on the button next to the task as shown below, prior to running the task. By default, only a subset of plots are selected (see below), with one occurence for each plot. The number of occurences can be selected via the number selection box. Having several occurences of the same plot allows to choose different settings for each occurence, for instance one on linear scale and and one on log scale. The plot selection and number of occurences is saved as part of the mlxtran file upon save, and reapplied at project reload. The green arrow can be used to generate only one plot type with the given number of occurences without re-generating all other plots. The “+” button allows to add one additional occurence. If the information necessary to generate a plot is not available, these buttons are hidden. List of available plots • Observed data: This plot displays the original data w.r.t. time as a spaghetti plot, along with some additional information. Model for the observations Diagnosis plots based on individual parameters Predictive checks and predictions Convergence diagnosis • SAEM: This plot displays the convergence trajectories of the population parameters estimated with SAEM with respect to the iteration number. • MCMC: This plot displays the convergence of the Markov Chain Monte Carlo algorithm for the individual parameters estimation. • Importance sampling: This plot displays the convergence of log-likelihood estimation by importance sampling. Tasks results • Likelihood contribution: This plot displays the contribution of each individual to the log-likelihood. • Standard errors for the estimates: This plot displays the relative standard errors (in %) for the population parameters. Exporting plots and setting plot-related preferences Several features can be used to export plots or plot data or create plots automatically: • Saving a single plot as image (icon button) • Create plots at project load (Preference) • Save charts data as binary file (Preference) • Export VPC simulations (Preference and Export menu) • Export charts datasets (Preference and Export menu) • Export plots (Preference and Export menu) • Export charts data (Preference and Export menu) Each feature is detailed below in this section. Most of them are located: • in the Export menu: to export the plots as image file or a text files for a single project. • in the Preferences: to export the plots as image file or a text files automatically at the end of each plot generation task. Preferences are user-specific and apply to all projects. The preferences are saved in C:\Users\<username>\lixoft\monolix\monolixXXXXRX\config\config.ini. Saving a single plot as image (icon button) The user can choose to export each plot as an image with an icon on top of the plot, choosing between png and svg format. The files are saved in <result folder>/ChartsFigures with the name of the plot, the observation name (e.g y1 or y2) and a postfix indicating the plot occurence, and page number. Note that information frames are not exported. The icon becomes available only when a structural model has been selected. Create plots at project load (Preference) By default, when a project which has already run is re-opened, only the plots “Observed data” and “Covariate viewer” are displayed. To generate the other diagnostic plots, it is necessary to click on the “Plots” task. Note that it is not necessary to rerun the entire scenario of all tasks. Starting with version 2024, it is possible to choose whether plots should be created automatically at project load. This option is available in Settings > Preferences > Create plots at project load. When this option is “on”, plots are generated when opening a project with results. Note that this increases the load time, in particular if the charts data have not been saved as binary (see below). By default, the option is “off”. Save charts data as binary file (Preference) Generating the plots can take several minutes depending on the model and dataset, because it requires to redo the simulations used in the VPC for instance. Starting with version 2024, it is possible to save the charts data (in particular the simulations) as a binary file, which is re-used when the plots are re-generated. This greatly speeds up the time necessary to generate the diagnostic plots. This option is available in Settings > Preferences > Save charts data as binary file. When this option is “on” (default), the charts data are saved in a non-human-readable format in <result folder>/ ChartsData/.Internals/chartsData.dat each time the plots are generated. At project reload, if the file chartsData.dat exists and is valid, it is used to generate the plots faster. Export VPC simulations (Preference and Export menu) Simulations used to generate the VPC plot can be exported as txt file. This allows to replot the VPC with an external software for instance. If this export is required for a single project, the user can click Export > Export VPC simulations. The simulations are saved in <result folder>\ChartsData\VisualPredictiveCheck\XXX_simulations.txt. If the user wishes to do this export each time the VPC plot is generated, the option Settings > Preferences > Export VPC simulations can be set to “on”. Export charts datasets (Preference and Export menu) Simulations used for the VPC and the Individual fits (prediction on a fine time grid) can be exported as MonolixSuite-formatted datasets (including dose records, regressors, etc). This is useful if these simulations need to be loaded in another MonolixSuite application such as PKanalix for instance. To export these formatted simulations once use Export > Export Charts Datasets > VPC or Individual fits. A pop-up window will let you choose the name and location of the saved files. To export the simulations as formatted dataset systematically each time the plots are generated, the option Settings > Preferences > Export charts datasets can be set to “on”. The files are saved in <result folder>/DataFile/ChartsData/indfits_data.csv and vpc_data.csv. Export plots (Preference and Export menu) Plots can be saved as image files. To save a single plot, use the icon on the top of the plot (see section above). To save all plots for a single project, use Export > Export plots. To systematically save all generated plots as image file, set the the option Settings > Preferences > Explot plots to “on”. The files are saved in <result folder>/ChartsFigures with the name of the plot, the observation name (e.g y1 or y2) and a postfix indicating the plot occurence, and page number. The files are saved at the end of the plot generation task. The file format (png or svg) can be chosen in the Preferences. Note that information frames are not exported. Export charts data (Preference and Export menu) The values used to draw the plots can be saved as txt files. This can be useful to regenerate the plots using an external tool. To export the charts data for a single project, use Export > Export charts data. To systematically export the charts data at the end of the plot generation, set the the option Settings > Preferences > Explot charts data to “on”. The files are saved in <result folder> /ChartsData with one subfolder per plot. The content of each file is described on the dedicated page. The file separator can be chosen in the Preferences. Plot settings A few plot settings impact the simulations required for the plots. They can be chosen in the Settings panel for the Plots task. • Number of simulations: number of replicates simulated for the VPC, NPC and Prediction distribution plot (default: 500) • Grid: number of points of the fine time grid used for the individuals fits (default: 250). See the video below for more details. Customizing time grid in plots 6.1.2.Interacting with the plots Interactive diagnostic plots In the PLOTS tab, the right panel has several sub – tabs (at the bottom) to interact with the plots: • The tab “Settings” provides options specific to each plot, such as hiding or displaying elements of the plot, modifying some elements, or changing axes scales, ticks and limits. • The tab “Stratify” can be used to select one or several covariates for splitting, filtering or coloring the points of the plot. See below for more details. • The tab “Preferences” alows to customize graphical aspects such as colors, font size, dot radius, line width, … These tabs are marked in purple on the following figure, which is the panel that is showed for observed data: Highlight: tooltips and ID In all the plots, when you hover a point or a curve with your mouse, some informations are provided as tooltips. For example, the ID is displayed when hovering a point or the curve of an individual in the observed data plot, the ID, the time and/or the prediction is displayed in the scatter plot of the residuals. In addition, starting from the 2019 version, when hovering one point/ID in a plot, the same ID will be highlighted in all the plots with the same color. Stratification: split, color, filter The stratification panel allows to create and use covariates for stratification purposes. It is possible to select one or several covariates for splitting, filtering or coloring the data set or the diagnosis plots as exposed on the following video. The following figure shows a plot of the observed data from the warfarin dataset, stratified by coloring individuals according to the continuous covariate wt: the observed data is divided into three groups, which were set to equal size with the button “rescale”. It is also possible to set groups of equal width, or to personalize dividing values. In addition, the bounds of the continuous covariate groups can be changed manually. Moreover, clicking on a group highlights only the individuals belonging to this group, as can be seen below: Values of categorical covariates can also be assigned to new groups, which can then be used for stratification. In addition, the number of subjects in each categorical covariate groups is displayed. Starting from the 2021R1 version, the list of subplots obtained by split is displayed below the split selection. It is possible to reorder the items in this list with drag-and-drop, which also reorders the subplot layout. Moreover, an “edit” icon next to each subplot name in the list allows to change the subplot titles. Preferences: customizing the plot appearance In the “preferences” tab, the user can modify the different aspects of the plot: colors, line style and width, fonts and label position offsets, … The following figures show on the warfarin demo the choices for the plot content and the choices for the labels and titles (in the “Plotting region” section). The sizes of all elements of the plots can also be changed with the single “zoom” preference in “Plotting region”. The layout can be modified with buttons on top of each plot. The first button can be used to select a set of subplots to display in the page. For example, as shown below, it is possible to display 9 individual fits per page instead of 12 (default number). The layout is then automatically adapted to balance the number of rows and columns. The second button can be used to choose a custom layout (number of rows and columns). On the example figures below, the default layout with 3 subplots (left) is modified to arrange them on a single column (right). 6.1.3.Transferring plot formatting Note: the features described on this page are not available in MonolixSuite versions prior to 2024R1. MonolixSuite offers the ability to transfer plot formatting from one plot to another within the same project or across projects. This can save you time and effort when you want to apply the same formatting options, such as stratification, preferences, or settings, to multiple plots, or when you want to use the same plot formatting for different analyses or datasets. Transferring plot formatting within one project To transfer plot formatting within one project, you need to use the “Apply…” button on the bottom left of the Plots tab and select “From plot” in the window that opens: Selecting the source and target plots The window allows you to select the source plot, which is the plot that has the formatting you want to transfer, and the target plots, which are the plots that you want to apply the formatting to. You can select all plots by clicking on the Select all button, or select all plots specific to some observation id with the “Fast selection”, or corresponding to an occurrence rank if some plots have been generated with several occurences. Choosing the sections to transfer You can also choose which sections of the configuration you want to transfer: Stratify, Settings, or Preferences. By default, Stratify and Preferences are selected. • The Stratify section includes the options to stratify the plots by covariates or occasions. • The Settings section includes the options to customize the content of the plots: elements displayed or not on the plots, such as the legend, the grid, the observed data points, etc, and calculation settings such as the axes, bins etc. • The Preferences section includes the options to change the colors, fonts, sizes, etc. Once you have selected the source plot, the target plots, and the sections to transfer, you can click on the Apply button to transfer the plot configurations. Transferring plot formatting across projects with plot presets To transfer plot formatting across different projects, you need to use the plot presets feature, which allows you to define and apply presets for plot configurations. Defining a plot preset To define a plot preset, you need to first set up your plot configuration as you would like by choosing the stratification, preferences, and settings options for each plot (you can use the transfer or plot formatting between plots within the same project, described in the first section, to help you). Then, you can save this configuration as a preset by clicking on the Save icon in the Apply from the Plot Formatting panel: You will be asked to give a name and a description to your preset, and to select which sections of the formatting you want to include in the preset among Stratify, Settings and Preferences. If the project has several types of observations, only the plots corresponding to one of them can be used to define the preset so it is necessary to select the observation id. You can also choose to set this preset as your default, which means that when creating a new project or when you click on Reset, the plot formatting from this preset will be applied instead of the system default. Applying a plot preset After saving your preset, it will be available in the list of presets that you can choose from when you click on “From preset” in the window that opens with “Apply…” in the Plot Formatting panel. You can apply your preset to a list of plots in another project, by opening this project, clicking on “Apply…” and “From preset” and by selecting the preset and the target plots, and finally clicking on Apply. This will transfer the plot configurations from the preset to the target plots. The target plots must be: • the same type of plots that were used to define the preset: for example if no VPC had been generated in the project used to define the preset, then the plot preset cannot be applied to a VPC in a new project, as there is no VPC formatting option to apply. The VPC will be greyed out in the list of target plots. • the same plot occurrence that was used to define the preset: for example if the project used to define the preset had only one VPC occurrence, and you want to apply that preset in a project where you have generated two VPC occurrences, then the second VPC occurrence will be greyed out in the list of target plots. Note that axis limits values and bins settings are not saved as part of the preset because they are considered too project-specific. Please note this major difference between “Apply formatting from plot” and “Apply formatting from preset”: • “Apply formatting from plot” applies all selected options from the source plot to all selected target plots. Thus, if you select a VPC stratified by STUDY as source plot and the Observed data plot as target plot, and “Stratify” section in plot formatting to be transfer, you will obtain the Observed data plot stratified by STUDY. • “Apply formatting from preset” applies the selected options from each plot saved in the preset to the corresponding plot in your project. Thus, if you saved a preset including the “Stratify” section based on a project where the VPC is stratified by STUDY but not the Observed data plot, applying that preset to another project will stratify the VPC by STUDY but not the Observed data Managing plot presets You can also manage your presets by going to Settings > Manage presets > Plot formatting. This will open a window where you can see all presets, modify, update or remove your custom presets, and export or import them as separate files with a lixpst extension. That way you can easily share your presets with your colleagues or collaborators to standardize your plot formatting. Applying pre-defined plot formatting In addition to your custom presets, MonolixSuite also provides two pre-defined presets that you can use to apply plot formatting. These presets are: • Greyscale_Preferences: This preset affects only the Preferences section of the plot configurations, and produces publication-ready greyscale plots with optimized size and spacing of the texts to improve readability of the plots (example shown below for the VPC). • Typical_Reporting_Settings: This preset affects only the Settings section of the plot configuration, and corresponds to plot settings typically chosen to report the analysis results. In addition to removing the grid in the background for all plots: □ VPC: observed data points and predicted percentiles are displayed (see example below) □ Observation vs Predictions: Observations vs Population Predictions are displayed in addition to Observations vs Individual Predictions □ Scatterplot of the residuals: Population residuals and NPDE are displayed in addition to Individual residuals. You can apply these presets to any plot in your project by selecting them from the list of presets and clicking on Apply. You can also combine these presets with your own presets or with the Apply from plot feature, to create the plot configuration that suits your needs. Restoring default plot configuration Using the Reset button If you want to restore the default plot configuration, you can click on the Reset icon in the Apply from plot panel. This will reset all plots to the system default, or to your custom default if you have defined one. Note that resetting the plot configuration will not affect the plot presets that you have defined or imported. You can still apply them to any plot after resetting. Defining a custom default To define a custom default, you need to create a preset and select the option “Use this preset as default” when saving it. This will make this preset the default configuration for all plots. You can also unset a preset as default by going to Settings > Manage presets > Plot formatting, and unchecking the option “Use this preset as default” for the preset. 6.1.4.Export charts All plots generated by Monolix can be exported All plots generated by Monolix can be exported as a figure or as text files in order to be able to plot it in another way or with other software for more flexibility. All the files can be exported in R for example using the following command read.table("/path/to/file.txt", sep = ",", comment.char = "", header = T) • The separator is the one defined in the user preferences. We set “,” in this example as it is the one by default. • The command comment.char = "" is needed for some files because to define groups or color, we use the character # that can be interpreted as a comment character by R. The list of plots below corresponds to all the plots that Monolix can generate. They are computed with the task “Plots”, and the list of plots to compute can be selected by clicking on the button next to the task as shown below, prior to running the task. Exporting the charts data can be made through the Export menu or through the preferences as described here. In the following, we describe all the files generated by the export function Charts concerning the Data Observed data (continuous, categorical, and count) Description: observation values Full output file description Column Description Comment id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time Observation time – y Observation value (loq) The name of the column is the observation name censored 1 if the observation is censored, 0 otherwise – split Name of the split the subject belongs to – color Name of the color the observation is colored with – filter 1 if the subject is filtered, 0 otherwise – Observed data (event) Description: observation values Full output file description Column Description Needed Task time Observation times – survivalFunction Survival of first event – averageEventNumber Average number of event at that time – split Name of the split the subject occasion belongs to – Description: censored values Full output file description Column Description Needed Task time Observation times – values Survival of first event – split Name of the split the subject occasion belongs to – Model for the observations Individual Fits Description: observation values Full output file description Column Description Comment id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time Observation time – y Observation value (loq) – median Prediction interval median – piLower Lower percentile of the individual prediction interval – piUpper Upper percentile of the individual prediction interval – censored 1 if the observation is censored, 0 otherwise – Description: individual fits based on population parameters and individual parameters Full output file description Column Description Needed Task id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time Continuous time grid used to compute fits – pop Prediction using population parameter values and average covariate value from the population (continuous) or reference covariate value – popPred Prediction using population parameter values and individual covariates. – indivPredMean Prediction based on the individual parameter values estimated by conditional mean Conditional distribution need to be indivPredMode Prediction based on the individual parameter values estimated by conditional mode EBEs need to be computed Observation vs Prediction Description: observation and prediction (pop & indiv) values Full output file description Column Description Needed Task id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time time of the observation y Observation value (loq) – y_simBlq Observation value (simulated blq) popPred Predictions based on population parameter values indivPredMean Predictions based on the individual parameter values estimated by conditional mean Conditional distribution need to be computed indivPredMode Predictions based on the individual parameter values estimated by conditional mode EBEs need to be computed censored 1 if the observation is censored, 0 otherwise split Name of the split the subject occasion belongs to color Name of the color the observation is colored with filter 1 if the subject is filtered, 0 otherwise – Description: observation and simulated prediction values Full output file description Column Description Needed Task rep Replicate id id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time time of the observation y Observation value (loq) – y_simBlq Observation value (simulated blq) indivPredSimulated Predictions based on the simulated individual parameter values estimated by conditional distribution Conditional distribution need to be computed censored 1 if the observation is censored, 0 otherwise split Name of the split the subject occasion belongs to color Name of the color the observation is colored with filter 1 if the subject is filtered, 0 otherwise Description: splines and confidence intervals for predictions Full output file description Column Description Needed Task popPred Continuous grid over population prediction values – popPred_spline Spline ordinates for population predictions – popPred_piLower Lower percentile of prediction interval for population predictions – popPred_piUpper Upper percentile of prediction interval for population predictions – indivPred Continuous grid over individual prediction values – indivPred_spline Spline ordinates for individual predictions – indivPred_piLower Lower percentile of prediction interval for individual predictions – indivPred_piUpper Upper percentile of prediction interval for individual predictions – split Name of the split the visual guides belong to Distribution of the residuals Description: probability density function of each residual type (pwres, iwres, npde) Full output file description Column Description Needed Task pwRes_abscissa Abscissa for pwres pdf – pwRes_pdf Pdf of pwres – iwRes_abscissa Abscissa for iwres pdf – iwRes_pdf Pdf of the iwres – npde_abscissa Abscissa for npde pdf – npde_pdf Pdf of the npde – split – Description: cumulative distribution function of each residual type (pwres, iwres, npde) Full output file description Column Description Needed Task pwRes_abscissa Abscissa for pwres cdf – pwRes_cdf Cdf of pwres – iwRes_abscissa Abscissa for iwres cdf – iwRes_cdf Cdf of the iwres – npde_abscissa Abscissa for npde cdf – npde_cdf Cdf of the npde – split – Description: theoretical guides for the pdf and the cdf Full output file description Column Description Needed Task abscissa,pdf,cdf Abscissa for the theoretical curves – pdf Theoretical value of the pdf – cdf Theoretical value of the cdf – Scatter plot of the residuals Description: prediction percentiles of the iwREs to plot iwRes w.r.t. the prediction. The same files exists with the pwres and the npde. Full output file description Column Description Needed Task prediction Value of the prediction – empirical_median Empirical median of the iwRes – empirical_lower Empirical lower percentile of the iwRes empirical_upper Empirical upper percentile of the iwRes theoretical_median Theoretical median of the iwRes theoretical_lower Theoretical lower of the iwRes theoretical_upper Theoretical upper of the iwRes theoretical_median_piLower Lower bound of the theoretical median prediction interval theoretical_median_piUpper Upper bound of the theoretical median prediction interval theoretical_lower_piLower Lower bound of the theoretical lower prediction interval theoretical_lower_piUpper Upper bound of the theoretical lower prediction interval theoretical_upper_piLower Lower bound of the theoretical upper prediction interval theoretical_upper_piUpper Upper bound of the theoretical upper prediction interval split Name of the split the subject occasion belongs to Description: time percentiles of the iwREs to plot iwRes w.r.t. the time. The same files exists with the pwres and the npde. Full output file description Column Description Needed Task time Value of the time – empirical_median Empirical median of the iwRes – empirical_lower Empirical lower percentile of the iwRes empirical_upper Empirical upper percentile of the iwRes theoretical_median Theoretical median of the iwRes theoretical_lower Theoretical lower of the iwRes theoretical_upper Theoretical upper of the iwRes theoretical_median_piLower Lower bound of the theoretical median prediction interval theoretical_median_piUpper Upper bound of the theoretical median prediction interval theoretical_lower_piLower Lower bound of the theoretical lower prediction interval theoretical_lower_piUpper Upper bound of the theoretical lower prediction interval theoretical_upper_piLower Lower bound of the theoretical upper prediction interval theoretical_upper_piUpper Upper bound of the theoretical upper prediction interval split Name of the split the subject occasion belongs to Description: residuals values (pwres, iwres, npde) Full output file description Column Description Needed Task id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time Observation times – prediction_pwRes Predictions based on population parameter values SAEM pwRes PwRes (computed with observations) SAEM pwRes_blq PwRes (computed with simulated blq) prediction_iwRes_mean Predictions based on the individual parameter values estimated by conditional mean (INDIVESTIM) if available, SAEM either iwRes_mean IwRes (computed with observations and individual parameter values estimated by conditional mean (INDIVESTIM) if available, SAEM either) iwRes_mean_simBlq IwRes (computed with simulated blq and individual parameter values estimated by conditional mean (INDIVESTIM) if available, SAEM either) prediction_iwRes_mode Predictions based on the individual parameter values estimated by conditional mode (INDIVESTIM) iwRes_mode IwRes (computed with observations and the individual parameter values estimated by conditional mode (INDIVESTIM)) iwRes_mean_simBlq IwRes (computed with simulated blq and the individual parameter values estimated by conditional mode (INDIVESTIM)) prediction_npde Predictions based on population parameter values npde Npde (computed with observations) npde_simBlq Npde (computed with simulated blq) SAEM – If there are some censored data in the data censored 1 if the observation is censored, 0 otherwise split Name of the split the subject occasion belongs to color Name of the color the observation is colored with filter 1 if the subject is filtered, 0 otherwise – Description: simulated residuals values Full output file description Column Description Needed Task rep replicate – id Subject identifier – OCC Occasion value (optional) if there is IOV in the data set time Observation times – prediction_iwRes Predictions based on the simulated individual parameter values based on the conditional distribution iwRes_simulated IwRes (computed with observations and the simulated individual parameter values) iwRes_simulated_simBlq IwRes (computed with simulated blq and the simulated individual parameter values) censored 1 if the observation is censored, 0 otherwise – split Name of the split the subject occasion belongs to color Name of the color the observation is colored with filter 1 if the subject is filtered, 0 otherwise – Description: splines (residuals values against time and prediction) Full output file description Column Description Needed Task time_pwRes Time grid for pwRes spline SAEM time_pwRes_spline pwRes against time spline SAEM time_iwRes Time grid for iwRes spline At least SAEM time_iwRes_spline iwRes against time spline At least SAEM time_npde Time grid for npde spline SAEM time_npde_spline npde against time spline SAEM prediction_pwRes Prediction grid for pwRes spline SAEM prediction_pwRes_spline pwRes against population prediction spline SAEM prediction_iwRes Prediction grid for iwRes spline At least SAEM prediction_iwRes_spline iwRes against individual prediction spline At least SAEM prediction_npde Prediction grid for npde spline SAEM prediction_npde_npde npde against population prediction spline SAEM split Name of the split the visual guides belong to If the chart is splitted Description: bins values for the corresponding axis. Full output file description Column Description Needed Task binsValues Abscissa bins values – split Name of the split the bins refer to If the chart is splitted Model for the individual parameters Distribution of the individual parameters Full output file description
{"url":"https://monolix.lixoft.com/single-page/","timestamp":"2024-11-05T02:30:11Z","content_type":"text/html","content_length":"1049354","record_id":"<urn:uuid:d1f9f571-0da8-4d02-8797-0ac7d41b11b0>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00151.warc.gz"}
The 17 Most Misunderstood Facts About 온라인바카라 The first step to Mastering about sports betting is to be familiar with issue spreads. Many people are intrigued by betting lingo but deficiency the basic idea of what all of it usually means. Beneath We're going to demonstrate position spreads and We'll then be on our way to the exciting planet of athletics betting! Precisely what is some extent Spread? In any basketball or football game There's two groups that happen to be taking part in from one another. Individuals teams are not often precisely matched meaning that commonly just one staff should have an even better prospect at profitable the sport then one other 1 will. If bettors have been permitted to bet on who was merely planning to gain, the good bettors would of course wager on the higher crew which might very likely gain over 50% of enough time in the procedure. If profitable ended up that uncomplicated for everybody Las Vegas sportsbooks would quit using any bets! This is where the point distribute will come set up. The essential purpose of the point unfold is that can help equilibrium the probability of each and every crew profitable by changing the final score by The purpose spread. Immediately after this adjustment, you get the Towards the Distribute result (ATS final result for short). How to Read the Point Distribute The big apple Giants -7 vs. Philadelphia Eagles The greater workforce, referred to as the Favorite is expected to get the sport and ought to give or lay factors on the weaker workforce. The favourite is listed Using the minus sign together with the range of factors They may be favored. In the above mentioned example the The big apple Giants should not only gain the sport, but they have to earn the sport by more than 7 details for Eagle bettors to have a winning ATS final result. An Eagles bettor will earn his guess if: Philly wins the sport by any degree of details or Philly loses the game by under 7 factors. There is certainly also the prospect that the final rating could land exactly within the spread variety (instance: the Eagles gain 28-21 when -7) which is called a thrust or no action and a refund is then issued on the bettors on equally groups. The same sport and stage spread is often thought of in the weaker groups standpoint the Underdog (the Eagles inside our illustration) is just not anticipated to acquire the game and so gets or gets factors given to them with the more powerful staff. Every time a sport is said from the Underdogs standpoint, the workforce is stated which has a additionally signal along with the variety of points They may be underdogs by (instance: Eagles 7 vs. New York Giants). Make sure you Understand that Philly has 7 and Ny has -7 is identical level unfold on a similar game nevertheless it is simply stated in different ways. Mathematical Conclusions For some a mathematical tactic is useful. You'll be able to identify the ATS winner by subtracting the point distribute from your favorites rating (the 슬롯사이트 minus sign before the number) and then Evaluate it to the underdogs score. Or by adding the point distribute towards the underdogs scores (the as well as sign ahead of the variety) and afterwards Evaluate into the favorites score.
{"url":"http://messiahysxf963.timeforchangecounselling.com/the-17-most-misunderstood-facts-about-onlainbakala","timestamp":"2024-11-14T00:39:20Z","content_type":"text/html","content_length":"9208","record_id":"<urn:uuid:04f82549-39ce-442b-866c-4ee79ddc3b1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00573.warc.gz"}
Dr. Matt Hogancamp, Northeastern - The dotted Temperley-Lieb category and handle-slides - Department of Mathematics Dr. Matt Hogancamp, Northeastern – The dotted Temperley-Lieb category and handle-slides November 11, 2022 @ 4:00 pm - 5:00 pm Mode: In-person Title: The dotted Temperley-Lieb category and handle-slides Abstract: Khovanov homology can be upgraded to an invariant of pairs (K,V) where K is a framed knot and V is an object of the dotted Temperley-Lieb category dTL. In this context, the pair (K,V) is called a colored knot, and its Khovanov invariant is called colored Khovanov homology. In my talk I will discuss recent joint work with David Rose and Paul Wedrich, in which we construct an object in dTL (more accurately, an ind-object therein), called a Kirby color, whose associated colored Khovanov invariant satisfies the important handle-slide relation from topology. I will also give a diagrammatic description of the Kirby color, extending the presentation which defines dTL.
{"url":"https://math.unc.edu/event/dr-matt-hogancamp-northeastern-tba/","timestamp":"2024-11-12T05:22:50Z","content_type":"text/html","content_length":"111709","record_id":"<urn:uuid:08e17dea-1eda-4b91-a4eb-1e062c335608>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00197.warc.gz"}
@Article{cmes.2001.002.087, AUTHOR = {M.A. Golberg, C.S. Chen}, TITLE = {An Efficient Mesh-Free Method for Nonlinear Reaction-Diffusion Equations}, JOURNAL = {Computer Modeling in Engineering \& Sciences}, VOLUME = {2}, YEAR = {2001}, NUMBER = {1}, PAGES = {87--96}, URL = {http://www.techscience.com/CMES/v2n1/24721}, ISSN = {1526-1506}, ABSTRACT = {The purpose of this paper is to develop a highly efficient mesh-free method for solving nonlinear diffusion-reaction equations in R^d, d=2, 3. Using various time difference schemes, a given time-dependent problem can be reduced to solving a series of inhomogeneous Helmholtz-type equations. The solution of these problems can then be further reduced to evaluating particular solutions and the solution of related homogeneous equations. Recently, radial basis functions have been successfully implemented to evaluate particular solutions for Possion-type equations. A more general approach has been developed in extending this capability to obtain particular solutions for Helmholtz-type equations by using polyharmonic spline interpolants. The solution of the homogeneous equation may then be solved by a variety of boundary methods, such as the method of fundamental solutions. Preliminary work has shown that an increase in efficiency can be achieved compared to more traditional finite element, finite difference and boundary element methods without the need of either domain or surface meshing.}, DOI = {10.3970/cmes.2001.002.087} }
{"url":"https://www.techscience.com/CMES/v2n1/24721/bibtex","timestamp":"2024-11-09T14:03:22Z","content_type":"text/plain","content_length":"2009","record_id":"<urn:uuid:1b66f16d-5ef1-4520-b896-c83e756a9fa7>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00385.warc.gz"}
Hydraulic Turbines and Jets MCQ | Page 7 of 11 | Electricalvoice Hydraulic Turbines and Jets MCQ 71. A jet of water, of cross-sectional area 0.005 m^2 strikes a flat plate normally with a velocity of 15 m/s. If the plate is moving with a velocity of 5 m/s in the direction of jet and away from the jet, what is the force exerted by the jet on the plate? 1. 0.50 N 2. 250 N 3. 0.25 N 4. 500 N 72. A horizontal water jet with a velocity of 10 m/s and cross-sectional area of 10 mm^2 strikes a flat plate held normal to the flow direction. The density of water is 1000 kg/m^3. The total force on the plate due to the jet is 1. 10 N 2. 100 N 3. 1N 4. 0.1 N 73. Jet ratio (m) is defined as the ratio of 1. Diameter of the jet of water to diameter of the Pelton wheel 2. Velocity of vane to velocity of the jet of water 3. Velocity of flow to velocity of the jet of water 4. Diameter of Pelton wheel to diameter of the jet water 74. Study of impact of jets and of velocity triangles is based on Newton’s law which states: Force = mass x acceleration. For practical applications, the following features are emphasized. 1. Change of velocity takes place in time duration. 2. Rather than the “mass’, the ratio of ‘weight’ and ‘g’ is used. 3. Change of velocity normal to the direction of the impacting inflow is also relevant to the computations. Which of the above are correct? 1. i and ii only 2. i and iii only 3. ii and iii only 4. i, ii and iii 75. Water is supplied from a height of 5.2 m at the rate of 100 lps to a hydraulic ram that delivers 7 lps to a height of 24 m above the ram. The head loss at the supply pipe is 0.2 m and the delivery pipe has a loss of 1.0 m. The efficiency of the ram is 1. 22.8 % 2. 29.8% 3. 35.0% 4. 39.6% 76. A turbine develops 500 kW power under a net head of 30 m. If the overall efficiency of the turbine is 0.83, the discharge through the turbine is nearly 1. 3.5 m^3/s 2. 3.0 m^3/s 3. 2.5 m^3/s 4. 2.0 m^3/s 77. A rectangular plate, weighing 5 kg is suspended by a hinge on the top horizontal edge. The centre of gravity G of the plate is 10 cm from the hinged end. A horizontal jet of water of area 5 cm^2, whose axis is 15 cm below the hinge, impinges on the plate with velocity V m/sec. In this condition, the plate deflects from the vertical by 20° (sin 20° = 0.34, cos 20° = 0.94). What will be the jet velocity? Take g = 10 m/sec^2. 1. 4.9 m/sec 2. 5.2 m/sec 3. 5.5 m/sec 4. 5.7 m/sec 78. Constant efficiency curves are also known as 1. Newton curves 2. Pelton curves 3. Muschel curves 4. Kaplan curves 79. In the inlet part of the jet impinging on a Pelton bucket, the velocity of whirl V[w1] is equal to 1. absolute velocity of jet at inlet V[1] 2. relative velocity of jet at inlet V[r1] 3. zero 4. none of the above 80. A hydraulic turbine has a discharge of 3 m^3/s when operating under a head of 15 m and a speed of 500 rpm. If it is to operate under 12 m of head, the rotational speed will be 1. 600 rpm 2. 559 rpm 3. 447 rpm 4. 400 rpm
{"url":"https://electricalvoice.com/hydraulic-turbines-and-jets-mcq/7/","timestamp":"2024-11-02T21:36:45Z","content_type":"text/html","content_length":"91323","record_id":"<urn:uuid:b0495f32-5ec4-435c-b1fd-350f11ac5139>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00368.warc.gz"}
Member List Gecode::Iter::Ranges::Positive< I, strict > Member List This is the complete list of members for Gecode::Iter::Ranges::Positive< I, strict > , including all inherited members. i Gecode::Iter::Ranges::Positive< I, strict > [protected] init(I &i) Gecode::Iter::Ranges::Positive< I, strict > [inline] max(void) const Gecode::Iter::Ranges::Positive< I, strict > [inline] min(void) const Gecode::Iter::Ranges::Positive< I, strict > [inline] operator()(void) const Gecode::Iter::Ranges::Positive< I, strict > [inline] operator++(void) Gecode::Iter::Ranges::Positive< I, strict > [inline] Positive(void) Gecode::Iter::Ranges::Positive< I, strict > [inline] Positive(I &i) Gecode::Iter::Ranges::Positive< I, strict > [inline] width(void) const Gecode::Iter::Ranges::Positive< I, strict > [inline]
{"url":"https://www.gecode.org/doc/5.1.0/reference/classGecode_1_1Iter_1_1Ranges_1_1Positive-members.html","timestamp":"2024-11-14T12:15:36Z","content_type":"text/html","content_length":"6373","record_id":"<urn:uuid:036a2f49-1899-4ebb-bc9c-51cc577dbf1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00510.warc.gz"}
Remove even digits Very easy Execution time limit is 1 second Runtime memory usage limit is 128 megabytes Remove all even digits from the given positive integer. One positive integer n (n ≤ 10^18). Print the number n with all even digits removed. If the original number n contains only even digits, print 0. Submissions 9K Acceptance rate 46%
{"url":"https://basecamp.eolymp.com/en/problems/8682","timestamp":"2024-11-03T00:23:53Z","content_type":"text/html","content_length":"226264","record_id":"<urn:uuid:c725b3d4-50dc-46f3-bbf7-25f72f21b32f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00851.warc.gz"}
Math Pathways Math Pathway Recommendations by Program of Study or Major These downloadable matrices show the non-STEM Math Pathway recommendations from Regents Advisory Committees. STEM pathway recommendations are based on the Academic and Student Affairs Handbook.^1 Full Recommendations Non-STEM Recommendations STEM Recommendations Infographic For many students, the required mathematics course becomes a barrier to degree completion, either because they are reluctant to sign up for a mathematics course, or because they sign up for a mathematics course that is not appropriate for their program of study. It is important that students take the right mathematics course for their program of study or major. Ideally they should complete this first mathematics course during their first semester of enrollment. The University System of Georgia actually has five mathematics pathways, each defined by the first mathematics courses that students take in IMPACTS Mathematics. The recommended mathematics pathways for STEM majors are defined in the Academic and Student Affairs Handbook (Section 2.4.4 Details Regarding Core IMPACTS Domains (Mathematics)). ^1 • Students majoring in science, technology, or mathematics are expected to start with MATH 1113 Pre-calculus or MATH 1112 Trigonometry in IMPACTS Mathematics. • Students majoring in engineering (and all Georgia Tech students) are expected to start with Calculus in IMPACTS Mathematics. Non-STEM majors have a choice of taking MATH 1001 Quantitative Reasoning, MATH 1101, Introduction to Mathematical Modeling, MATH 1111 College Algebra, OR (at institutions participating in the statistics pathway) MATH/STAT 1401 Elementary Statistics (or a higher mathematics course) in IMPACTS Mathematics. Up to now, the default for non-STEM majors has been MATH 1111 College Algebra. Academic advisors have tended to advise students into MATH 1111 no matter what their major, seeing this as a “safe choice” that can apply to any major. The problem is that it really isn’t a “safe choice.” MATH 1111 has a very high withdrawal or failure rate, and for many students it becomes the singular barrier to degree completion. In addition, while MATH 1111 could be part of the pathway for STEM majors, STEM majors are supposed to start with MATH 1113 Pre-Calculus or Calculus in IMPACTS Mathematics, so students who would need to start in MATH 1111 are already a step behind as STEM The Mathematical Association of America, American Math Association for Two-Year Colleges, and other national math associations agree that College Algebra is not an appropriate gateway math course for students not pursuing Calculus… Only 10% of students who take College Algebra ever enroll in a Calculus course. ^2 The USG Task Force on Transforming College Mathematics indicated that “System institutions should ensure the alignment of pathways for [IMPACTS Mathematics] to programs of study so that students learn the mathematical content necessary for success in their majors” ^3 and issued the following statement: Most students in System colleges now take College Algebra as their entry-level mathematics course. College Algebra was designed explicitly to meet the needs of students who are preparing to take Pre-calculus and Calculus. Most students in non-STEM majors would be better served by enrolling in Quantitative Reasoning or Introduction to Mathematical Modeling, possibly followed by a statistics course in [the IMPACTS STEM domain] (Natural Science, Mathematics, and Technology) of the core curriculum. Quantitative Reasoning and Introduction to Mathematical Modeling were designed to meet the needs of non-STEM majors and include significant real-world applications. They are appropriate, rigorous mathematics courses for a broad array of non-STEM programs of study in which deep knowledge of and facility with basic mathematics are essential to prepare students for responsible citizenship.^3 The IMPACTS Mathematics alternatives to MATH 1111 College Algebra for non-STEM majors are MATH 1001 Quantitative Reasoning, MATH 1101 Introduction to Mathematical Modeling, or (at statistics pathways institutions) MATH/STAT 1401 Elementary Statistics. Most institutions only offer one of these two courses and no institution is required to offer both. Regents Advisory Committees were asked to evaluate which of the non-STEM mathematics pathways was most appropriate for their students. In considering this question, they were asked to weigh most heavily whether or not calculus was required to complete a program of study in the discipline, and to recommend whether students should take MATH 1111 or one of the three alternative courses. It is important to note that these “alternative” courses are not “algebra free.” They are rigorous mathematical courses that incorporate essential algebra skills for non-STEM majors in an appropriate context. Students who are not STEM majors may not be REQUIRED to take a particular mathematics course (from among MATH 1001, 1101, 1111, or 1401), and may choose to take MATH 1111 even if MATH 1001, MATH 1101 or MATH/STAT 1401 is recommended for their program of study. Selection of mathematics courses for non-STEM majors is a matter of advisement, not a matter of degree requirements. All students are also free to take higher level mathematics courses if they place higher than MATH 1001, 1101, 1401 or 1111. The Regents Advisory Committee on Mathematical Subjects (ACMS)^4 issued the general course descriptions below to aid Regents Advisory Committees in recommending the appropriate IMPACTS Mathematics course for their program(s) of study. Links to more extensive descriptions of course content are also provided. Algebra-Calculus Pathway MATH 1111 College Algebra This course is the first step in the pathway to a calculus course. In general, students should not be entered in this pathway unless it is a prerequisite for a major requirement (either in mathematics or elsewhere). This course was designed explicitly to develop the algebra skills needed for success in calculus. Students who will not need these specific skills in a later course are usually better served in the other pathway. The next step in this pathway could be trigonometry, precalculus, or a survey of calculus. Detailed content description for MATH 1111 College Algebra Non-Algebra Pathways MATH 1001 Quantitative Reasoning or MATH 1101 Introduction to Mathematical Modeling or MATH/STAT 1401 Elementary Statistics Individual institutions in the USG typically offer either Quantitative Reasoning or Math Modeling. Both courses include the analysis of data–centered problems with the intent of developing appropriate mathematical models and communicating results in a clear and effective fashion. The difference between the two courses is that Quantitative Reasoning (MATH 1001) places more emphasis on decision making in the context of problem-solving while Math Modeling (MATH 1101) places the emphasis on modeling real-world data with elementary functions. Elementary Statistics is an introductory course in the topic for students and may be especially relevant for students pursuing programs that rely on statistical evidence. Possible next steps in this pathway would include courses in statistics, or further courses in mathematical modeling or decision-making. Detailed content description for MATH 1001 Quantitative Reasoning Detailed content description for MATH 1101 Introduction to Mathematical Modeling Detailed content description for MATH/STAT 1401 Elementary Statistics Math Pathway Recommendations by Program of Study or Major These downloadable matrices show the non-STEM Math Pathway recommendations from Regents Advisory Committees. STEM pathway recommendations are based on the Academic and Student Affairs Handbook.^1 Full Recommendations Non-STEM Recommendations STEM Recommendations ^1 Academic and Student Affairs Handbook, Section 2.4.4, IMPACTS Mathematics: http://www.usg.edu/academic_affairs_handbook/section2/C738/#p2.4.4_details_regarding_areas_af ^2 Complete College America Math Pathways: https://completecollege.org/strategy/math-pathways/ ^3 University System of Georgia: Transforming College Mathematics July 2013 (Report of the Task Force to Transform College Mathematics) ^4 Advisory Committee on Mathematical Subjects (ACMS), MATH Pathways in the University System of Georgia.
{"url":"https://completecollegegeorgia.org/math-pathways","timestamp":"2024-11-09T20:26:42Z","content_type":"text/html","content_length":"61829","record_id":"<urn:uuid:2f82dd32-d1e3-48cd-b20b-27d054b34260>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00877.warc.gz"}
Mathematics/ Physics Having Fun Working With Fractions Brian Cagle John W. Cook 8150 S. Bishop St. Chicago IL 60620 (773) 535- 3315 Model each addition and subtraction example by shading parts of each region Divide a word into appropriate fractional parts Pay attention to details in instruction relating to " first", " last", "second", etc. Analyze clues and decode the message Use standard algorithms for addition, subtraction, multiplication, and division of fractions Materials Needed: 15 12 in. by 12 in. Dry Erase Boards ( If not applicable you may have the students section off the chalkboard into square regions) 1 Dry erase markers (two different colors) 1 Ruler 1 Game Board (Pre-made) 1 Pair of dice 5 Game pieces 1 Calculator 1 Mars candy bar Station #1 "Fractional Parts" Students will use their dry erase boards to model five problems using addition and subtraction of fractions from off the board by shading parts of each region. Using one of the color markers to shade the parts. (See section for materials needed.) Station #2 "Mars Fraction Hunt" Students will write the appropriate parts of the words on the line to form a new word. When the message is complete, the first student to decode the message will be rewarded with a Mars candy bar. Ex.) The first half of food + the last quarter of door and the answer will be the first half of fo/od = fo and the last quarter of doo/r = r makes the new word for. Station #3 "Mir Mission To Mars Game Using Fractions" Teacher will create a board game making a path from Earth to Mars using around 30 to 40 spaces between the two planets. The teacher will use index cards for the three decks needed for the game. The first deck will be called "Danger" these cards are used to make the players either lose a turn, go back to Earth, or go back a specific amount of spaces. The second deck will be called "Warp Drive 1" consisting of problems where the players will have to answer easier problems consisting of addition, subtraction, multiplication, and division of fractions. The third deck will be called "Warp Drive 2" which will be made up of more difficult problems using fractions. This game is designed to be use as a cumulative activity on the students understanding of addition, subtraction, multiplication, and division of fractions. It is intended for 6, 7, and 8 graders but can be modified for the lower grades by changing the difficulty of the questions on the cards to fit your specific grade level. 1.) Each player rolls the dice to determine who goes first. The person with the highest number goes first, the play continues clockwise. 2.) The first player takes a card from either Warp Drive 1 stack or the Warp Drive 2 stack of cards. 3.) If the player answers the question/problem correctly, he/she will then roll the dice. 4.) The number rolled is the number of spaces the player may advance on the board. 5.) If the player answers the question incorrectly, they do not move, nor do they roll the dice. 6.) If a player lands on "Danger", he/she must take a card from the "Danger" stack and follow the instructions written on it. 7.) Play continues until all players have landed on Mars. The first one to reach Mars is the winner!! Performance Assessment: Ongoing assessment throughout the game based on the students’ ability to answer a percentage of questions correctly.
{"url":"https://smileprogram.info/mp0598.htm","timestamp":"2024-11-09T02:33:50Z","content_type":"text/html","content_length":"4558","record_id":"<urn:uuid:25dc11ab-cf2c-46d6-9082-e0adb1cffc14>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00365.warc.gz"}
Pi Day Celebration Graduate Student Chapter of the American Mathematical Society, together with the undergraduate Pi Mu Epsilon Math Club and the Society for Industrial and Applied Mathematics graduate chapter, organized a Pi Day (3/14/15) celebration for the public complete with T-shirts and free pie., organized a Pi Day celebration for the public complete with T-shirts and free pie. Pi is a number equal to the nonrepeating decimal 3.1415926 . . . , and in addition to being the ratio of a circle's circumference to its diameter, pi appears in many disparate and surprising parts of mathematics and physics. The date of March 14 (i.e., 3/14) is known as Pi Day and used to celebrate mathematics and promote awareness of mathematics education. This year's Pi Day was particularly special because it was the "Pi Day of the Century" occurring on 3/14/15, a numerical abbreviation of the date that we will not see again until March 14, 2115.
{"url":"https://www.uh.edu/nsm/math/news-events/stories/2015_2016/2015_pi_day_celebration","timestamp":"2024-11-08T19:08:10Z","content_type":"text/html","content_length":"33176","record_id":"<urn:uuid:ec2e76b7-448a-4b25-bde0-f0d208bc45f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00228.warc.gz"}
Kinetic Energy Sponsored Links Work must be done to set any object in motion, and any moving object can do work. Energy is the ability to do work and kinetic energy is the energy of motion. There are several forms of kinetic • vibration - the energy due to vibration motion • rotational - the energy due to rotational motion • translational - the energy due to motion from one location to another Energy has the same units as work and work is force times distance . One Joule is one Newton of force acting through one meter - Nm or Joule in SI-units. The Imperial units are foot-pound . • 1 ft lb = 1.356 N m (Joule) Translational Kinetic Energy Translational kinetic energy can be expressed as E[t ] = 1/2 m v^2(1) E[t ] = kinetic translation energy (Joule, ft lb) m = mass (kg, slugs ) v = velocity (m/s, ft/s) • one slug = 32.1740 pounds (as mass) - lb[m] Rotational Kinetic Energy Rotational kinetic energy can be expressed as E[r ] = 1/2 I ω^2(2) E[m] = kinetic rotation energy (Joule, ft lb) I = moment of inertia - an object's resistance to changes in rotation direction (kg m^2, slug ft^2) ω = angular velocity ( rad /s) Example - Kinetic Energy in a Car The kinetic energy of a car with mass of 1000 kg at speed 70 km/h can be calculated as E[t ] = 1/2 (1000 kg) ((70 km/h) (1000 m/km) / (3600 s/h))^2 = 189043 Joule The kinetic energy of the same car at speed 90 km/h can be expressed as E[t ] = 1/2 (1000 kg) ((90 km/h) (1000 m/km) / (3600 s/h))^2 = 312500 Joule Note! - when the speed of a car is increased with 28% (from 70 to 90 km/h ) - the kinetic energy of the car is increased with 65% (from 189043 to 312500 J ). This huge rise in kinetic energy must be absorbed by the safety construction of the car to provide the same protection in a crash - which is very hard to achieve. In a modern car it is possible to survive a crash at 70 km/h . A crash at 90 km/h is more likely fatal. Download and print Kinetic Energy in a Moving Car chart Example - Kinetic Energy in a Steel Cube moving on a Conveyor Belt A steel cube with weight 500 lb is moved on a conveyor belt with a speed of 9 ft/s . The steel cube mass can be calculated as m = (500 lb ) / (32.1740 ft/s^2) = 15.54 slugs The kinetic energy of the steel cube can be calculated as E[t ] = 1/2 (15.54 slugs) (9 ft/s)^2 = 629 ft lbs Example - Kinetic Energy in a Flywheel A flywheel with Moment of Inertia I = 0.15 kg m^2 is rotating with 1000 rpm (revolutions/min) . The angular velocity can be calculated as ω = (1000 revolutions /min) (0.01667 min/s) (2 π rad/ revolution ) = 104 rad/s The flywheel kinetic energy can be calculated E[r ] = 1/2 (0.15 kg m^2) (104 rad/s) ^ 2 = 821 J Sponsored Links Related Topics Motion of bodies and the action of forces in producing or changing their motion - velocity and acceleration, forces and torque. The relationships between forces, acceleration, displacement, vectors, motion, momentum, energy of objects and more. Related Documents Angular velocity and acceleration vs. power and torque. Car acceleration calculator. Calculate fuel consumption in liter per km - consumption chart and calculator. The momentum of a body is the product of its mass and velocity - recoil calculator. Maximum conveyor belt speed. Velocity plotted in time used diagram. Dynamic pressure is the kinetic energy per unit volume of a fluid in movement. Energy density - by weight and volume - for some ways to store energy The kinetic energy stored in flywheels - the moment of inertia. Linear and angular (rotation) acceleration, velocity, speed and distance. Calculate fuel consumption in miles per gallon - mpg - calculator and consumption charts. Heat vs. work vs. energy. Impact forces acting on falling objects hitting the ground, cars crashing and similar cases. Forces acting a very short time are called impulse forces. Pitot tubes can be used to measure fluid flow velocities by measuring the difference between static and dynamic pressure in the flow. Elevation and potential energy in hydropower. Calculate the range of a projectile - a motion in two dimensions. Rolling friction and rolling resistance. Melting points and latent energy of salt hydrates. Speed (mph) and time (hours) and distance traveled (miles) chart. Speed (km/h) vs. time (hours) and distance traveled (km). Wind load on surface - Wind load calculator. Sponsored Links
{"url":"https://www.engineeringtoolbox.com/amp/kinetic-energy-d_944.html","timestamp":"2024-11-11T04:36:27Z","content_type":"text/html","content_length":"27940","record_id":"<urn:uuid:0296990d-7bda-4169-8f78-f09027ceb98f>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00068.warc.gz"}
Pedro Machado Manhães de Castro, Sylvain Pion, and Monique Teillaud The goal of the circular kernel is to offer to the user a large set of functionalities on circles and circular arcs in the plane. All the choices (interface, robustness, representation, and so on) made here are consistent with the choices made in the CGAL kernel, for which we refer the user to the 2D kernel manual. In this first release, all functionalities necessary for computing an arrangement of circular arcs and these line segments are defined. Three traits classes are provided for the CGAL arrangement Software Design The design is done in such a way that the algebraic concepts and the geometric concepts are clearly separated. Circular_kernel_2 has therefore two template parameters: • the first parameter must model the CGAL three dimensional Kernel concept. The circular kernel derives from it, and it provides all elementary geometric objects like points, lines, circles, and elementary functionality on them. • the second parameter is the algebraic kernel, which is responsible for computations on polynomials and algebraic numbers. It has to be a model of concept AlgebraicKernelForCircles. The robustness of the package relies on the fact that the algebraic kernel provides exact computations on algebraic objects. The circular kernel uses the extensibility scheme presented in the 2D kernel manual (see Section Extensible Kernel). The types of Kernel are inherited by the circular kernel and some types are taken from the AlgebraicKernelForCircles parameter. Three new main geometric objects are introduced by Circular_kernel_2: circular arcs, points of circular arcs (used in particular for endpoints of arcs and intersection points between arcs) and line segments whose endpoints are points of this new type. In fact, the circular kernel is documented as a concept, CircularKernel, and two models are provided: The first example shows how to construct circles or circular arcs from points, and how to compute intersections between them using the global function. File Circular_kernel_2/intersecting_arcs.cpp #include <CGAL/Exact_circular_kernel_2.h> #include <CGAL/point_generators_2.h> template <typename T> double prob_2() { CGAL::Random_points_in_square_2<Point_2> g(1.0); double prob = 0.0; for (int i = 0; i < 10000; i++) { Point_2 p1, p2, p3, p4, p5, p6; p1 = *g++; p2 = *g++; p3 = *g++; p4 = *g++; p5 = *g++; p6 = *g++; // the pi's are points inherited from the Cartesian kernel Point_2, so, // the orientation predicate can be called on them T o1 = T(p1, p2, p3); T o2 = T(p4, p5, p6); typedef typename CGAL::CK2_Intersection_traits<Circular_k, T, T>::type std::vector<Intersection_result> res; prob += (res.size() != 0) ? 1.0 : 0.0; return prob/10000.0; int main() std::cout << "What is the probability that two arcs formed by" << std::endl; std::cout << "three random counterclockwise-oriented points on" << std::endl; std::cout << "an unit square intersect? (wait a second please)" << std::endl; std::cout << "The probability is: " << prob_2<Circular_arc_2>() << std::endl << std::endl; std::cout << "And what about the probability that two circles formed by" << std::endl; std::cout << "three random counterclockwise-oriented points on" << std::endl; std::cout << "an unit square intersect? (wait a second please)" << std::endl; std::cout << "The probability is: " << prob_2<Circle_2>() << std::endl; return 0; The following example shows how to use a functor of the kernel. File Circular_kernel_2/functor_has_on_2.cpp #include <CGAL/Exact_circular_kernel_2.h> #include <CGAL/point_generators_2.h> int main() int n = 0; Circular_arc_2 c = Circular_arc_2(Point_2(10,0), Point_2(5,5), Point_2(0, 0)); for(int i = 0; i <= 10; i++) { for(int j = 0; j <= 10; j++) { Point_2 p = Point_2(i, j); if(Circular_k().has_on_2_object()(c,p)) { std::cout << "(" << i << "," << j << ")" << std::endl; std::cout << "There are " << n << " points in the [0,..,10]x[0,..,10] " << "grid on the circular" << std::endl << " arc defined counterclockwisely by the points (0,0), (5,5), (10,0)" << std::endl << "See the points above." << std::endl; return 0; Design and Implementation History The first pieces of prototype code were comparisons of algebraic numbers of degree 2, written by Olivier Devillers [1],cgal:dfmt-amafe-02. Some work was then done in the direction of a "kernel" for CGAL. and the first design emerged in [2]. The code of this package was initially written by Sylvain Pion and Monique Teillaud who also wrote the manual. Athanasios Kakargias had worked on a prototype version of this kernel in 2003. Julien Hazebrouck participated in the implementation in July and August 1. The contribution of Pedro Machado Manhães de Castro in summer 2006 improved significantly the efficiency of this kernel. He also added more functionality in 2008. This work was partially supported by the IST Programme of the EU as a Shared-cost RTD (FET Open) Project under Contract No IST-2000-26473 (ECG - Effective Computational Geometry for Curves and Surfaces) and by the IST Programme of the 6th Framework Programme of the EU as a STREP (FET Open Scheme) Project under Contract No IST-006413 (ACS - Algorithms for Complex Shapes).
{"url":"https://doc.cgal.org/4.12.1/Circular_kernel_2/index.html","timestamp":"2024-11-13T07:53:32Z","content_type":"application/xhtml+xml","content_length":"22844","record_id":"<urn:uuid:8bec4a11-f570-41b5-8799-93ed43157e14>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00539.warc.gz"}
kVA to kW and kW to kVA Conversion Calculator Convert apparent power in kilovot-amps (kVA) to real power in kilowatts (kW) and real power in kilowatts (kW) to apparent power in kilovot-amps (kVA) using our conversion calculator. Learn more about the relations between Amps (A), Voltage (V), Watts (W), Volt-Amps (VA), and other units often used not only in Electrical Engineering but also in many other fields. Published: March 12, 2024. ┌───────────────────────────────────────┐ ┌───────────────────────────────────────┐ │ Kilovolt-Amps (kVA) to Kilowatts (kW) │ │ Kilowatts (kW) to Kilovolt-Amps (kVA) │ ├───────────────────────────────────────┤ ├───────────────────────────────────────┤ │ │ │ │ ├───────────────────────────────────────┤ ├───────────────────────────────────────┤ │ │ │ │ └───────────────────────────────────────┘ └───────────────────────────────────────┘ How to Convert Kilovolt-Amps (kVA) to Kilowatts (kW) The real power P(kW) equals the apparent power S(kVA) multiplied by the phase shift (often labeled as "Power Factor" PF): P(kW) = S(kVA) * PF Note: phase shift must be 1 or less. How to Convert Kilowatts (kW) to Kilovolt-Amps (kVA) The apparent power S(kVA) equals the real power P(kW) divided by the phase shift (Power Factor: PF): S(kVA) = P(kW) / PF Again, the phase shift must be 1 or less. Real Power vs. Apparent Power Real power and apparent power are two fundamental concepts used to describe power in AC (alternating current) circuits, each providing different insights into how electrical energy is managed and Real Power (Active Power) • Definition: Real power is the power that actually performs work in an AC circuit. It is the component of power that results in the actual consumption of energy, converting electrical energy into other forms like mechanical power, heat, or light. • Measurement Units: Watts (W). • Calculation: Real power (P) is calculated as P = V * I * cos(φ), where V is the voltage across the load, I is the current through the load, and cos(φ) is the power factor, representing the phase difference/shift between the current and voltage. • Significance: Real power is directly related to the useful work done by an electrical system and is a critical factor in determining the efficiency of electrical devices and systems. Apparent Power • Definition: Apparent power is the total power supplied to an AC circuit, representing the combination of real power and reactive power. It reflects the total amount of energy being transmitted from the source to the load without distinguishing between the energy that does work and the energy stored and then returned to the system. • Measurement Units: Volt-amperes (VA). • Calculation: Apparent power S is calculated as S = V * I, where V is the RMS voltage and I is the RMS current. • Significance: Apparent power is crucial for the design and sizing of electrical infrastructure, such as transformers and wiring, to ensure they can handle the total power flow through the system. It indicates the capacity required to transmit the electrical power, including both working and non-working components. Note: RMS stands for Root Mean Square. It's a mathematical method used to determine the effective value of an alternating current (AC) or voltage. The RMS value of an AC signal is the equivalent direct current (DC) value that delivers the same amount of power to a load as the AC signal over one cycle. RMS values are especially important in electrical engineering, as they provide a meaningful measure of the voltage and current in circuits that alternate, ensuring accurate power management and device safety. Key Differences • Nature of Power: Real power denotes the actual consumption and conversion of electrical energy into work, while apparent power combines this real consumption with reactive power, which is cyclically stored and released by the circuit's reactive components (capacitors and inductors). • Units of Measure: Real power is measured in watts, reflecting the energy conversion rate, whereas apparent power is measured in volt-amperes, indicating the total power flow. • Practical Implications: Real power impacts the efficiency and operational costs of electrical systems, being directly related to the work performed. Apparent power is more about the capacity of the system to deliver the energy, including both the energy that does useful work and the energy that does not contribute to work but must still be managed by the system. Understanding the distinction between real and apparent power is essential for electrical engineering, especially for the design, operation, and optimization of AC power systems, ensuring both efficient use of energy and the integrity and capacity of electrical infrastructure.
{"url":"https://www.batteryequivalents.com/calculators-and-charts/kva-to-kw-and-kw-to-kva-conversion-calculator.html","timestamp":"2024-11-02T02:30:11Z","content_type":"text/html","content_length":"34431","record_id":"<urn:uuid:954ee7ea-da1c-4c10-9c9a-662e5ca1fa8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00339.warc.gz"}
Physics & Astronomy 218 Allen Hall My research focuses on describing and controlling dynamics of quantum many-body systems. My goal is to address some of the fundamental problems of ultracold atom and condensed matter physics: 1. What are the generic alternatives to thermalization? 2. Interplay of pseudo gauge-fields, topology and dynamics 3. Dynamics of mesoscopic systems and (with and without non-Abelian excitations) Addressing these fundamental physics questions will have important implications for: (1) understanding the limitations and alternatives to thermodynamics, (2) generating non-Abelian excitations in static ultracold atom systems and building photonic crystals with broken time-reversal symmetry, (3) control over non-Abelian excitations will be useful for building quantum memory and quantum I received a BA in mathematics and a BS in physics from Rice University in 2002. For my Ph.D. I studied phase-slips in mesoscopic superconductors and superfluids with Prof. Paul Goldbart at the University of Illinois at Urbana Champaign. Following my Ph.D., I became a postdoc at Harvard University, where I mostly concentrated on theory of ultracold atom physics. After Harvard I became a Lee A. DuBridge Prize Postdoc at Caltech, where I worked on the intersection of topological physics and ultracold atom physics. Upon completion of my second postdoc I joined the faculty of the University of Pittsburgh. Selected Publications Graduate Advisor Chenxu Liu Binbin Tian
{"url":"https://www.physicsandastronomy.pitt.edu/people/david-pekker","timestamp":"2024-11-07T13:15:01Z","content_type":"text/html","content_length":"41304","record_id":"<urn:uuid:ff7cb127-b587-4446-93a1-525ee34fa114>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00034.warc.gz"}
Bioequivalence and Bioavailability Forum pkpdpkpd BE parallel design [General Sta­tis­tics] Post reply 2007-04-03 19:45 (6433 d 21:40 ago) While assessing BE in two-group parallel design study, should one use nominal data or log transformed data. If log transformed, should it be log or ln? Posting: # 616 PKPDPKPD Views: 20,183 BE parallel design Post reply Hi PKPDPKPD - what a nick! ❝ While assessing BE in two-group parallel design study, should one use nominal data or log transformed data. Log-transformed if you are applying a multiplicative model (for all clearance based parameters, Jaime_R , for AUC, Cmax,...) Untransformed for Tmax - if applicable to your formulation. 2007-04-03 20:58 ❝ If log transformed, should it be log or ln? (6433 d 20:27 ago) Whatever you like ( @ pkpdpkpd Posting: # 617 ld Views: 18,531 logarithmus dualis - would also do the job), but most people prefer Regards, Jaime pkpdpkpd BE parallel design Post reply 2007-04-03 21:07 (6433 d 20:18 ago) Thank you. While using raw data and log data, I have significantly different results. In case of the parallel design, I calculate the ratio taking the mean from one group and the mean of the other group for the raw data and log of the mean from one group and log of the mean from another gruop. Am I true? @ Jaime_R Posting: # 618 -- Views: 18,187 Edit: Full quote removed. [HS] BE parallel design Post reply Hi PKPDPKPD! ❝ Thank you. While using raw data and log data, I have significantly different results. That's quite common using transformations on data. I hope you have a statistical protocol in place, and are not playing around to see which results are meeting your expectations. ★★ ❝ In case of the parallel design, I calculate the ratio taking the mean from one group and the mean of the other group for the raw data and log of the mean from one group and log of the mean from another gruop. Am I true? 2007-04-03 21:41 I guess, you are talking of untransformed analysis first and transfomed analysis second? (6433 d 19:45 ago) Let's concentrate only on the transfomed analysis (because this is the one you will need). @ pkpdpkpd Posting: # 619 1. Calculate the log (Y[1],Y[2],...,Y[n]) of all individual values (X[1],X[2],...,X[m]), where n = number of subjects under test and m = number of subjects under reference Views: 18,086 2. Calculate the arithmetic means form log transfomed data (Y) for the two treatments (Y[T] and Y[R]) 3. If you want you can antilog these means (= geometric means of the original data) 4. Calculate the SDs for the two treatments (SD[T], SD[R]) 5. Calculate the total variance S^2 = [(n-1)*SD[T]^2 + (m-1)*SD[R]^2]/(n+m-2) 6. Calculate the difference Delta = Y[T] - Y[R] 7. Calculate the point estimate by taking the antilog of Delta 8. Calculate the upper/lower confidence limit as Delta + t(0.05,n+m-2)*S^2 9. Calculate the antilogs of these confidence limits Regards, Jaime BE parallel design pkpdpkpd Post reply ● Jaime, 2007-04-03 22:14 Thank you for your lesson. I am a beginner PKPDPKPD and am trying to understand the procedure which is normally done by the software. Just few questions to your instructions: (6433 d 19:11 ago) 1. delta: should it be a difference of the raw data and log transformed data 2. why shoudl i calculate the difference not ratio @ Jaime_R Posting: # 620 PKPDPKPD Views: 18,185 Edit: Full quote removed. [HS] BE parallel design Post reply Dear PKPDPKPD! ❝ I am a beginner PKPDPKPD and am trying to understand the procedure which is normally done by the software. That's the best start of possible ones! Never trust in any piece of software you haven't written yourself (and even then you should be cautious…) Jaime_R ❝ Just few questions to your instructions: ❝ 1. delta: should it be a difference of the raw data and log transformed data 2007-04-03 22:29 After you have log-transformed the data, you are only working with these (in my example the Ys and not the Xs) (6433 d 18:56 ago) ❝ 2. why shoudl i calculate the difference not ratio @ pkpdpkpd Posting: # 621 Since we are now in the log-domain, we have transformed the multiplicative model (which would call for ratios!) into an additive model (therefore we are interested in differences Views: 20,017 of logs). May be sound confusing, but once you have applied the transformation, all the nice examples given in statistical textbooks (99% are based on differences!) are working now… Actually you can boil it down into three steps: 1. Take logs of individual data (now you have a new data set) 2. Apply common statistical methods (t-test, ANOVA, GLM, whatsoever,…) 3. Antilog you results (point estimate, confidence limits) Regards, Jaime BE parallel design pkpdpkpd Post reply ● Should AUC and Cmax be also calculated from the log transformed data? 2007-04-03 23:48 -- (6433 d 17:37 ago) @ Jaime_R Posting: # 622 Full quote removed. Please see Views: 18,152 this post ! [HS] BE parallel design Post reply Dear PKPDPKPD! ❝ Should AUC and Cmax be also calculated from the log transformed data? See my first post . You should only transform the calculated PK (not the concentrations). 1. Calculate AUC from your analytical results by any method you like (trapezoidal rule preferred, but not limited to). 2. Cmax is simply the highest measured concentration. 3. For the comparison you log-transform AUC and Cmax. Second an apology: yesterday I was too fast (referring to my memory and not a textbook - I'm also using software and not explicit formulas). Therefore a little correction (valid for unequal group sizes): Let's misuse Helmut's data download here ★★ ) Barcelona, Although his data are from a cross-over study, we will use only period 1. 2007-04-04 17:14 (6433 d 00:11 ago) 1. Change the header of the first column from 'Seq' to 'Trt' 2. Delete the column 'Rand' @ pkpdpkpd 3. Delete the column 'P2' Posting: # 623 Views: 18,535 Now we have data from a parallel study where Trt 1 = reference and Trt 2 = test, Group 1 = Subjects 1-12, Group 2 = Subjects 13-24, and Response (your PK parameter) in column 'P1'. 1. log-transform 'P1' 2. calculate separately for each treatment: □ arithmetic mean: (1: 3.56227, 2: 3.38383) □ exp(arithmetic mean): (1: 35.24321, 2: 29.48349), □ note: these are the geometric means of untransformed data! □ standard deviations: (1: 0.35950, 2: 0.42377) □ variances = SD²: (1: 0.12924, 2: 0.17958) □ n[1,2] (group sizes): (1: 12, 2: 12) □ Q[1,2] = variance × (n[1,2]-1): (1: 1.42165, 2: 1.97539) 3. calculate R = sqrt[(n[1]+n[2])/(n[1]×n[2])×(Q[1]+Q[2])/(n[1]+n[2]-2)]: 0.16042 4. look up the critical value of the t-distribution for alpha=0.05 with n[1]+n[2]-2 degrees of freedom: t 1.71714 5. calculate t × R: 0.27547 6. calculate Delta (difference of means Trt 2 - Trt 1): -0.17844 7. antilog Delta (= point estimate): 0.83657 8. calculate lower/upper 90% confidence limit = Delta ± t × R: lo: -0.45391, hi: 0.09702 9. antilog lo and hi: exp(lo): 0.63514, exp(hi): 1.10189 So the final results are (point estimate and 90% confidence interval): 83.657% (63.514% - 110.189%) I checked the 'manual' calculation in WinNonlin and EquivTest: WinNonlin: 83.6572% (63.5100% - 110.1958%) EquivTest: 83.66% (63.51% - 110.18%) Slight differences seen in results are not uncommon... Regards, Jaime BE parallel design Post reply Hi Jaime! You recycled my data! ❝ So the final results are (point estimate and 90% confidence interval): ❝ 83.657% (63.514% - 110.189%) ❝ WinNonlin: 83.6572% (63.5100% - 110.1958%) ❝ EquivTest: 83.66% (63.51% - 110.18%) Helmut ❝ ❝ Slight differences seen in results are not uncommon... Vienna, Austria, Full ACK! 2007-04-04 17:31 (6432 d 23:54 ago) I don’t know which versions you were using; I'm getting @ Jaime_R exactly Posting: # 624 Views: 18,636 the same results in WinNonlin (v5.2) and EquivTest/PK. Kinetica (v4.4.1) comes up with: 83.6572% (63.514% – 110.19%) You never know when rounding will hit you – and don’t dare asking the software vendor for the algorithm… Dif-tor heh smusma 🖖🏼 Довге життя Україна! [] Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes ● BE parallel design 2007-04-04 19:32 Post reply (6432 d 21:53 ago) @ Helmut Thank you to all. It was a great help. Posting: # 625 Views: 18,091 PKPDPKPD BE parallel design Post reply Hi Helmut! Jaime_R ❝ You recycled my data! Sure! You 2007-04-04 22:33 wrote (6432 d 18:52 ago) @ Helmut Posting: # 626 You may use the data to 'play around' in your own piece of software. Views: 18,054 ❝ I don’t know which versions you were using… WinNonlin (v5.1.1), EquivTest (v2.00) Regards, Jaime BE parallel design Post reply HAI Jaime_R, Your BE parallel design calculation is very useful to me. This is the first time i am doing Parallel Design. Thank You. I already did one BE cross over design. there i Come across the Statistical calculation like Standard Error Mean Standard Error Sathya diff ☆ ratio India, upper 2008-09-01 09:21 lower (5917 d 08:04 ago) Power P1 and P2 @ Jaime_R Posting: # 2291 But in Paralled Design the Statistical calculation Views: 17,372 Point Estimate Lower & Higher I have a doubt that, is there no power calculation in Parallel design? I don't know whether both parallel & crossover have same calculation or different Please Clarify. and also help me how to do power calculation in cross over? BE parallel design Post reply Dear Jaime, ☆ I tried your sample Data I got some clarity in Parallel study. India, How can i Proceed it is in SAS? 2008-09-04 14:37 (5914 d 02:48 ago) Please Help me. @ Jaime_R Please give me a sample for cross over study like parallel study especially for power calculation. Posting: # 2314 Views: 17,240 Then it will be very helpful to me. can you please
{"url":"https://forum.bebac.at/mix_entry.php?id=616","timestamp":"2024-11-13T15:26:09Z","content_type":"text/html","content_length":"34684","record_id":"<urn:uuid:809c6f2a-78f4-4aa1-8ee5-7776f54d4042>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00093.warc.gz"}
How I’d Learn Machine Learning (If I Could Start Over) | by Egor Howell | Jan, 2024 Machine learning revolves around algorithms, which are essentially a series of mathematical operations. These algorithms can be implemented through various methods and in numerous programming languages, yet their underlying mathematical principles are the same. A frequent argument is that you don’t need to know maths for machine learning because most modern-day libraries and packages abstract the theory behind the algorithms. However, I would argue that if you want to become a top-level Machine Learning Engineer or Data Scientist, you need to know the basics of linear algebra, calculus, and statistics at least. There is of course more maths to learn, but best start with the basics and you can always enrich your knowledge later on. You don’t need to understand all these concepts to a master’s degree level but should be able to answer questions like what is a derivative, how to multiply matrices together and what is maximum likelihood estimation. That list I just wrote is the bedrock of nearly every machine learning algorithm, so having this solid foundation will set you up for success in the long run. Some of the key things I recommend you learn are: Be the first to comment
{"url":"https://quantinsightsnetwork.com/how-id-learn-machine-learning-if-i-could-start-over-by-egor-howell-jan-2024/","timestamp":"2024-11-13T01:45:56Z","content_type":"text/html","content_length":"157618","record_id":"<urn:uuid:b33cd8b7-6949-44d6-abfe-eb6a27d1e29d>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00520.warc.gz"}
Non-uniform embedding applications for synchronization analysis Iterative polynomial model for synchronization of vibrations was proposed. A polynomial was constructed using time delay coordinates. For time delay kernel estimation, a non-uniform embedding was used. Evolutionary optimization algorithms were introduced for non-uniform time delay identification. Performance investigation of proposed method was done for two different vibrations. 1. Introduction The synchronization phenomena have been intensively investigated due to its potential application in physical processes like communications [1, 2], system control [3], artificial analysis [4, 5], financial time series [6]; and biological processes, like neurophysiological systems [7] and cardiorespiratory rhythms [8]. Usually synchronization theory is based on phase synchronization [5, 9]. However, state space of the system can be reconstructed from their observations [10] and various geometrical and dynamic properties can be analyzed [11] and successfully applied in synchronization analysis [3, 12]. In past, a variety of the synchronization methodologies of chaotic vibration systems have been suggested, such as variable structure control approach, back stepping control approach and others [13, 14]. One of the main problems arising when designing of dynamical model is instability of a system [13, 15]. Assessing model based on its dynamic response requires synchronous response measurements. For modeling of dynamic synchronization the rules that governs the original system is required. To find those rules non-uniform embedding and polynomial approximation model have been invoked. Systems embedding is an important step in the process of modeling of a system [16]. Attractor embedding is used to characterize the dynamics of a true system represented by obtained vibration. The embedding theorem says that dynamical systems has the same topological properties and is better represented by delayed version of an observed vibration [17]. Embedding dimension and time delay are two parameters determining the reconstruction of a system by a delay coordinate method. There are several ways to determine embedding dimension like correlation dimension or neural networks [18, 19]. In this paper one of the classical methods called false nearest neighbor (FNN) method for identification of embedding dimension is used [20]. For identification of time delays computational techniques like mutual information or autocorrelation based methods often used [21]. However, in this paper we propose a simple way for time delay selection. For time delay identification, evolutionary optimization algorithms were introduced [16]. The problem of evolutional optimization is to identify optimal set of time delay that represents the best properties of reconstructed system. Polynomials approximation was used to construct the synchronization of vibrations [22]. Polynomials produce a good approximation to complex systems [23]. Polynomial model is derived using optimal non-uniform time delay coordinates. For synchronization task we propose new iterative algorithm for polynomial construction. The synchronization was examined for vibration obtained by aquarium with air pump [24] and wind turbine pinion gear [25]. The model of synchronization is constructed in several steps. In Section 2 the signal reconstruction based on non-uniform embedding proposed in [16] was applied. Evolutionary optimization algorithms for identification of the optimal set of time delays are discussed in Section 3. In Section 4 synchronization model was defined. Simulation results for two vibrations are presented in Section 5. And concluding remarks are discussed in the final section. 2. Non-uniform embedding reconstruction of a signal Let us consider discrete time series which is presented in form of ${x}_{1}$, ${x}_{2}$,…, ${x}_{N}$. The non-uniform reconstruction is obtained from original time series when the time delays between coordinates were chosen non-uniformly ${Y}_{i}=\left\{{x}_{i},{x}_{i+{\tau }_{1}},{x}_{i+{\tau }_{1}+{\tau }_{2}},\dots ,{x}_{i+\sum _{k=1}^{d-1}{\tau }_{k}}\right\}$, $i=1,2,\dots ,N-\sum _{k=1}^ {d-1}{\tau }_{k}$. The embedding dimension is denoted by $d$; ${Y}_{i}$ is the reconstructed vector with dimension $d$; ${\tau }_{k}\in N$ are the time delays. First task of the reconstruction is to determine optimal embedding dimension. For this purpose the false nearest neighbors method was applied [20]. In this method, if two points become distant by increasing embedding dimension $d$ then one of them is supposed as false. Embedding dimension is chosen when there is no false nearest neighbours. The nearest neighbor in the $\left(d-1\right)$-th cycle is supposed as false if: where ${R}_{d}^{2}\left(i\right)$ is the Euclidean distance of the $i$-th point in the $\left(d-1\right)$-th cycle to its nearest neighbour, ${R}_{d+1}^{2}\left(i\right)$ is the same distance in the $d$-th cycle with additional $x\left(t-{\tau }_{d}\right)$ time delay coordinate and ${R}_{tol}$ is a threshold [20]. The next step of time series embedding is to choose algorithm which helps to find the optimal set of time delays. This set gives best characteristics of the reconstructed attractor in $d$-dimensional space. For this case we use the objective function which was used for non-uniform attractor embedding in [16]: $F\left({\tau }_{1},{\tau }_{2},\dots ,{\tau }_{d-1}\right)=\frac{\pi }{2}\frac{{\int }_{0}^{\infty }A\left(\omega \right)Q\left({\tau }_{1},{\tau }_{2},\dots ,{\tau }_{d-1},\omega \right)d\omega } {{\int }_{0}^{\infty }A\left(\omega \right)d\omega },$ where $Q\left({\tau }_{1},{\tau }_{2},\dots ,{\tau }_{d-1},\omega \right)$ is the embedding quality function Eq. (3) of reconstruction into time delay phase space. This quality function evaluates arithmetic average of all planar projection of reconstructed attractor [16]. Thus, if time series is reconstructed into $d$-dimensional time delay space there are $d\left(d-1\right)/2$ planar projection and quality function can be expressed as: $Q\left({\tau }_{1},{\tau }_{2},\dots ,{\tau }_{d-1},\omega \right)$$=\frac{2}{d\left(d-1\right)}\left(\sum _{i=1}^{d-1}\left|\mathrm{sin}\left(\omega \delta {\tau }_{i}\right)\right|+\sum _{i=1}^ {d-2}\left|\mathrm{sin}\left(\omega \delta \left({\tau }_{i}+{\tau }_{i+1}\right)\right)\right|+\dots +\left|\mathrm{sin}\left(\omega \delta \sum _{j=1}^{d-1}{\tau }_{j}\right)\right|\right).$ The maximum of the objective function Eq. (2) gives the optimal set of time delays {${\tau }_{1},{\tau }_{2},\dots ,{\tau }_{d-1}\right\}$. With this set of time delays chaotic attractor in the reconstructed space have optimal geometric properties. 3. Evolutionary algorithms for the identification of an optimal set of non-uniform time delays As it was mentioned in previous section the non-uniform embedding can express the best topological properties of reconstructed signal [17]. Although this reconstruction is affective, but in this case an optimized set of all time delays ${\tau }_{i}$($i=$1, 2,…, $d-$1) must be identified. If the upper bound $T$ for possible numerical values of time delays are chosen, then the number of different combinations of time delays is the number of different combinations without permutations is $\left(T+d-2\right)!/\left(\left(d-1\right)!\left(T-1\right)!\right)$ [16]. For example, there is 292825 different combinations of time delays without permutations when $d=\text{5}$ and $T=$ 50. For higher dimension, it is infeasible to use full sorting. To solve this problem evolutionary optimization algorithms where used for identification for time delays. In this paper we use three different optimization algorithms which were adapted for identification of optimal set of non-uniform time delays. 3.1. Genetic algorithms for the identification of an optimal set of non-uniform time delays The first evolutionary algorithm used for optimization is Genetic Algorithm (GA) proposed by John Holland in 1975 [26]. It is an adaptive heuristic search algorithm that is widely used in various optimization problems. Genetic algorithm consists of four main steps: initialization, selection, crossover operation and mutation [27]. 1) Initialization. For identification of non-uniform time delays the GA algorithm is constructed in a way that every chromosome represents an array of time delays (genes). Every chromosome ${cr}_{i}= \left\{{\tau }_{1}^{\left(i\right)},{\tau }_{2}^{\left(i\right)},\dots ,{\tau }_{d-1}^{\left(i\right)}\right\}$ is of length $\left(d-1\right)$ and every gene is an integer number [17]. The initial population of n chromosomes ${\left\{cr}_{1},{cr}_{2},\dots ,{cr}_{n}\right\}$ is generated randomly. Then the objective function describe by Eq. (2) is maximized. 2) Selection. For selecting of chromosomes random roulette method was used [27]. The higher is the objective function value the higher is a chance for chromosome to be selected for the next 3) Crossover. All selected chromosomes are grouped into pairs. The crossover between two chromosomes is executed. The modified one-point algorithm is used in crossover operation [17]. 4) Mutation. After crossover mutation operation is applied. In other words random number for every gene is generated in the $k$-th generation and if the random number is lower than $\mu$ (usually the value of mutation parameter $\mu <$ 0.01) then that gene is changed by a random integer. These steps are repeated until maximum number of generations is achieved. 3.2. Artificial bee colony algorithms for the identification of an optimal set of non-uniform time delays The second optimization algorithm used for time delay identification is Artificial Bee Colony (ABC) algorithm [28, 29]. ABC is a metaheuristic algorithm introduced by Karaboga in 2005. This optimization algorithm is based of improvement of solution in every step of ABC algorithm. ABC has three parameters: the number of a food source (population), predetermined number of cycles (limit) and stopping criteria (maximum number of generation) [29]. Artificial bee colony optimization algorithm consists of four main phase: initialization, employed bee phase, onlooker bee phase and scout bee phase. 1) Initialization. For identification of non-uniform time delays the ABC algorithm is constructed in a way that every food source represents an array of time delays (food source position). Every food source ${s}_{i}=\left\{{\tau }_{1}^{\left(i\right)},{\tau }_{2}^{\left(i\right)},\dots ,{\tau }_{d-1}^{\left(i\right)}\right\}$ is of length $\left(d-1\right)$ and every position ${\tau }_{k}^{\left (i\right)}$, $k=$ 1, 2,…, $d-1$ is an integer number. The initial population of $n$ food sources ${\left\{s}_{1},{s}_{2},\dots ,{s}_{n}\right\}$ is generated randomly. 2) Employed bee phase. The position of the food source is updated by Eq. (4) and the profit value for each food source is evaluated: ${V}_{ij}={s}_{ij}+{\varphi }_{ij}\left({s}_{ij}-{s}_{kj}\right),$ where ${\varphi }_{ij}\left({s}_{ij}-{s}_{kj}\right)$ is a step size, ${\varphi }_{ij}$ is a random number between [–1;1], $k\in \left(1,2,\dots ,n\right)$ and $j\in \left(1,2,\dots ,d-1\right)$, $ke j$ are two randomly chosen numbers. 3) Onlooker bee phase. The selection of food source is performed to obtain best sets of time lags for optimization of the objective function. The food source is selected with probability value ${p}_ {i}$ Eq. (5) associated with that food source and is updated by Eq. (4): ${p}_{i}=\frac{F\left({s}_{i}\right)}{\sum _{n}F\left({s}_{n}\right)},$ where $n$ is the number of the food sources and $F\left({s}_{i}\right)$ is the value of objective function of the $i$-th solution in population. The new better solution replaces the old one. 4) Scout bee phase. In the last scout bee phase, employed bees whose solutions cannot be improved after a predefined number of trials become scouts and their solutions are discarded [28]. Scout bees randomly search for a new food source by Eq. (4). Scout bee phase is used only if the position of the food source cannot be improved. These steps are repeated until maximum number of cycles is reached. The result is set of time lags that give maximum value of the objective function. 3.3. Cuckoo search algorithms for the identification of an optimal set of non-uniform time delays Cuckoo search (CS) is metaheuristic search algorithm introduced by Xin-she Yang and Suash Deb in 2009 [30]. The initial number of cuckoos can lay one egg at the time in randomly chosen nest. The best nests with high objective function values are carried to the next generation. Eggs which are more similar to host nest eggs have a bigger chance to survive others are detected by host bird and thrown away. The grown eggs show the surviving rate in those nests [31]. Cuckoo search algorithm consists of three main steps: 1) Initialization. The CS algorithm is constructed in a way that every nest represents an array of time delays (nest position). Every nest ${c}_{i}=\left\{{\tau }_{1}^{\left(i\right)},{\tau }_{2}^{\ left(i\right)},\dots ,{\tau }_{d-1}^{\left(i\right)}\right\}$ is of length $\left(d-1\right)$. The initial population of $n$ nests ${\left\{c}_{1},{c}_{2},\dots ,{c}_{n}\right\}$ is generated 2) Phase 1. The objective value of randomly chosen nest is compared with the objective value of the new generated solution by Levy flights Eq. (6). If the new solution is better it replaces the chosen solution: ${V}_{ij}={c}_{ij}+\alpha \oplus L\stackrel{´}{e}vy\left(\lambda \right),$ where ${c}_{ij}$ is randomly chosen $j$-th time lag from ${c}_{i}$ nest, $\alpha >$0 is the step size scaling factor. In most cases $\alpha =$ 1. The product $\oplus$ means entrywise multiplication. In Cuckoo Search algorithm random walks are provided by Levy flights. In Levy flights the step length is estimated using Levy distribution: $L\stackrel{´}{e}vy~u={t}^{-\lambda },\left(1<\lambda <3\right).$ 3) Phase 2. The part of worst nests with probability ${p}_{a}\in \left[0;1\right]$ are abandoned and new solutions are generated using Eq. (6). The solutions are ranked and the best one solution is These steps are repeated until maximum number of generation is reached. 4. Polynomial synchronization Time series reconstruction into time delay space allows obtaining certain properties of attractor that can be used in system control. In most of these tasks synchronization of the system is needed. In this paper the synchronization problem is considered as a construction of a model for some dynamical system. Lets $g$ denote the dynamics of the time series: ${x}_{i+1}=g\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right),i=N+1,N+2,\dots ,K,$ where ${X}_{1i}={x}_{i}$, ${X}_{2i}={x}_{i+{\tau }_{1}}$,…, ${X}_{di}={x}_{i+\sum _{k=1}^{d-1}{\tau }_{k}}$ is the time delayed vectors with dimension $d$ and set of time delays {${\tau }_{1},{\tau } _{2},\dots ,{\tau }_{d-1}\right\}$. However, the rule $g\left(\bullet \right)$ is unknown. In construction of synchronization model the principal rule is to find approximate function g which would be as close as possible to original dynamics $x$ (note that ${x}_{N+1},{x}_{N+2},\dots$ are known). Aim is to minimize the difference: $E=\mathrm{m}\mathrm{i}\mathrm{n}\sum _{i=N}^{K-1}‖{x}_{i+1}-g\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right)‖.$ For construction of rule $g$ polynomials that provide good approximation to $x$ with proper structure selection were used [12, 22]. The number of terms in the polynomial expansion of $g$ grows with degree $n$ of the polynomial. There is $M=\sum _{i=1}^{n}\frac{\left(i+d-1\right)!}{\left(i\right)!\left(d-1\right)!}+1$ terms for polynomial of degree $n$ with $d$ variables. For example, there is $10$ terms for polynomial of degree $n=\text{2}$ and reconstruction dimension of $d=\text{3}$. We consider the $n$-th order polynomial: ${P}_{n}\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right)={\omega }_{0}+{\omega }_{1}{X}_{1i}+{\omega }_{2}{X}_{2i}+\dots +{\omega }_{d}{X}_{di}+{\omega }_{d+1}{X}_{1i}{X}_{2i}+\dots +{\omega }_{M}{X}_ where ${\omega }_{0}$, ${\omega }_{1}$,…, ${\omega }_{M}$ is the coefficients of the polynomial and ${X}_{ki}$ ($k=$ 1, 2,…, $d$) time delay coordinates. The task is to find such ${\omega }_{0}$, ${\ omega }_{1}$,…, ${\omega }_{M}$ values that the overall solution minimizes error $E$ described by Eq. (9). Polynomial regression model using linear least square techniques based on orthogonal triangular factorization with column pivoting was applied [32]. However, with high embedding dimension and high polynomial order the terms of polynomial also increases. In order to decrease the number of polynomial terms the new iterative reconstruction algorithm was applied. The iterative polynomial approximation algorithm consist of two steps: 1-step. The second order polynomial ${P}_{2}\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right)$ is constructed and coefficient ${\omega }_{0}$, ${\omega }_{1}$,…, ${\omega }_{M}$ are optimized. 2-step. Increase the order of the predefined polynomial, by using Eq. (10) with additional variable, which is the second order polynomial constructed in first step: ${P}_{4}\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right)={P}_{2}\left({X}_{1i},{X}_{2i},\dots ,{X}_{di},{P}_{2}\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right)\right).$ This step is repeated until the error $E$ does not improve. Finally, the synchronization model is the last polynomial that is found after the optimization process: ${x}_{i+1}={P}_{2n}\left({X}_{1i},{X}_{2i},\dots ,{X}_{di}\right).$ For even order polynomial construction first order polynomials should be used in first step. Thus, in the same synchronization problem we optimize two polynomials of odd and even orders one-at-a-time. The final solution is the lower order polynomial which gives the minimal errors. 5. Experiments The proposed method for signal synchronization based on non-uniform embedding works well with different vibrations. We will test this technique for a few different vibrations and then make some First for both vibrations Savitzky-Golay Filter was used [33]. The key idea of Savitzky-Golay Filter is that the smoothed value ${z}_{i}$ in the point ${x}_{i}$ ($i=$1, 2,…, $N$) is obtained by taking an average of the neighboring data. More generally we can also fit a polynomial through a fixed number of points. Then the value of the polynomial at ${x}_{i}$ gives the smoothed value ${z}_ Case 1. First vibration used for this technique is vibration of aquarium with air pump [24]. Length of a vibration is 51 s and sampling rate is 100 Hz. Every vibration can be efficiently transformed in electrical energy and employed to power electronic devices. In order to achieve this and make a better use of energy the dynamics of vibration must be known. Fig. 1Objective function value compared to number of generation. The thick line shows objective function value for uniform embedding. Upper thin line – is the maximum value of F for each generation. Middle thin line – is the mean value of F for each generation. Lower thin line – is the minimum value of F for each generation First task of proposed model is to prepare the vibration for synchronization. For noise reduction Savitzky-Golay Filter was applied [33]. The second step was reconstruction into non-uniform time delay phase space. The False Nearest Neighbors algorithm suggested that embedding dimension is $d=\text{6}$. With next step the set of optimal time delays was identificated by evolutionary optimization algorithm. Refer to Fig. 1 for objective function value comparison to a number of generation for each optimization algorithm. The number of initial population for each algorithm was chosen equally. In each generation, all three-optimization algorithm were executed for 100 times. The thick line in Fig. 1 shows objective function value of uniform embedding. The thin lines show non-uniform embedding objective function value for each generation. As Fig. 1, shows each algorithm managed to find better objective function value than using uniform embedding. The optimal set of time delays for uniform embedding is determined to be $F\left(9,9,9,9,9\right)=\text{1.0732}$. The non-uniform set of time delay is determined using evolutionary algorithms shown in Table 1. The set of time delay was chosen with the highest objection function value. This was obtained by Cuckoo Search algorithm. The optimal set for non-uniform embedding is determined to be $F\ left(9,6,6,6,10\right)=$ 1.0851. Refer to Fig. 2 for signal reconstruction into 3-dimensional time delay space using ${X}_{1}$, ${X}_{2}$, ${X}_{3}$. Table 1Evolutionary algorithm comparison for non-uniform embedding Cases Evolutionary algorithm Set of time delays $\left({\tau }_{1},\dots ,{\tau }_{d-1}\right)$ $F\left({\tau }_{1},\dots ,{\tau }_{d-1}\right)$ CS (9, 6, 6, 6, 10) $F\left(9,6,6,6,10\right)=$ 1.0851 Case 1 ABC (11, 11, 8, 11, 12) $F\left(11,11,8,11,12\right)=$ 1.0849 GA (9, 6, 6, 6, 9) $F\left(9,6,6,6,9\right)=$ 1.0846 CS (14, 13, 15) $F\left(14,13,15\right)=$ 1.1349 Case 2 ABC (14, 14, 13) $F\left(14,14,13\right)=$ 1.1059 GA (14, 13, 17) $F\left(14,13,17\right)=$ 1.1027 Fig. 2Vibration reconstructed into 3-dimensional space with time delay set (X1,X2,X3) and using time delays τ1= 9 and τ2= 6 The 4-th order polynomial was build with found delays for this vibration. The found model have been given in Appendix A.1. The synchronization for real data and constructed polynomials is shown in Fig. 3. For polynomial approximation, we use 36 terms (while traditional model with polynomial order of 4 and reconstruction dimension $d=\text{6}$ would have 210 terms). Case 2. Second vibration is radial vibration which measurements were taken on 3 MW wind turbine pinion gear. [25] Recorded length of a vibration is 6s and sampling rate is 97656 Hz. In this case we have high vibration levels. The same process was applied to this vibration as in Case 1. First Savitzky-Golay Filter was used to reduce noise of a signal. The False Nearest Neighbors algorithm suggested that embedding dimension is $d=\text{4}$. The non-uniform set of time delay is determined using evolutionary algorithms shown in Table 1. The best set of non-uniform embedding was found with Cuckoo Search algorithm $F\left(14,13,15\right)=\text{1.1349}$. Refer to Fig. 4 for synchronization results. The 6-th order polynomial was build with found delays for this vibration. The found model have been given in Appendix A.2. The found model has a total 27 terms (while traditional model with polynomial order of 6 and reconstruction dimension $d=$ 4 would have 210 terms). As expected synchronization performance was not high due to the complex nature of the vibrations (refer Fig. 4 for residual). Filtering reduces the complexity of vibration and improves synchronization results. Fig. 3a) Synchronization of real vibration (solid line) and vibration obtained with polynomial model (dashed line), b) residual of obtained and real signal Fig. 4a) Synchronization of real vibration (solid line) and vibration obtained with polynomial model (dashed line), b) residual of obtained and real signal 6. Conclusions It was demonstrated that synchronization of true dynamics and learnt model can be obtained by using polynomial model, which was derived using optimal non-uniform delay coordinates. This technique is based on combination of non-uniform embedding, evolutionary optimization algorithms and polynomial model. For identification of embedding dimension False Nearest Neighbor method was used, which is the easy and fast way to find minimal embedding dimension. For identification of set of non-uniform time delays three different evolutionary algorithms were used. The found set was used as a time delay kernel for polynomial construction. Synchronization was carried for two different real world vibrations. The proposed model showed good synchronization results. • Yang S. S., Duan C. K. Generalized synchronization in chaos systems. Chaos, Solitons and Fractals, Vol. 9, Issue 10, 1998, p. 1703-1707. • Maes K., Reynders E., Rezayat A., De Roeck G., Lombaert G. Offline synchronization of data acquisition systems using system identification. Journal of Sound and Vibration. Vol. 381, 2016, p. • Wu C. W., Chua L. O. A unified framework for synchronization and control of dynamical systems. International Journal of Bifurcation and Chaos, Vol. 4, Issue 4, 1994, p. 979-998. • Lehnertz K., Bialonski S., Horstmann M. T., Krug D., Rothkegel A., Staniek M., Wagner T. Synchronization phenomena in human epileptic brain networks. Journal of Neural Science Methods, Vol. 183, 2009, p. 42-48. • Jalili M. Spake phase synchronization in delayed – coupled neural networks: uniform vs. non-uniform transmission delay. Chaos, Vol. 23, Issue 1, 2013. • Radhakrishnan S., Duvvuru A., Sultornsanee S. Phase synchronization based minimum spanning tree for analysis of financial time series with nonlinear correlations. Physica A, Vol. 444, 2016, p. • Schafer C., Rosenblum M. G., Kurths J., Abel H. H. Heartbeat synchronized with ventilation. Nature, Vol. 392, 1998, p. 239-240. • Arnhold J., Grassberger P., Lehnertz K., Elger C. E. A robust method for detecting interdependences: application to intracranially recorded EEG. Physica D: Nonlinear Phenomena, Vol. 134, Issue 4, 1999, p. 419-430. • DeShazer D. J., Breban R., Ott E., Roy R. Detecting phase synchronization in chaotic laser array. Physical Review Letters, Vol. 87, Issue 4, 2001, p. 044101. • Pecora L. M., Carroll T. L. Synchronization in chaotic systems. Physical Review Letters, Vol. 84, Issue 8, 1990, p. 821-824. • Faes L., Porta A., Nollo G. Mutual nonlinear prediction as a tool to evaluate coupling strength and directionality in bivariate time series: comparison among different strategies based on k nearest neighbors. Physical Review E, Vol. 78, 2008, p. 026201. • Nichkawde C. Sparse model from optimal nonuniform embedding of time series. Physical Review E, Vol. 89, Issue 4, 2014, p. 042911. • Sun Yeong-Jeu A novel chaos synchronization of uncertain mechanical systems with parameter mismatchings, external excitations, and chaotic vibrations. Communication in Nonlinear Science and Numerical Simulation, Vol. 17, 2012, p. 496-504. • Awrejcewicz J., Krysko A. V., Yakovleva T. V., Zelenchuk D. S., Krysko V. A. Chaotic synchronization of vibrations of a coupled mechanical system consisting of a plate and beams. Latin American Journal of Solids and Structures, Vol. 10, 2013, p. 161-172. • Fradkov A., Tomchina O., Galitskaya V., Gorlatov D. Multiple controlled synchronization for 3-rotor vibration unit with varying Payload. IFAC Proceeding Volumes, Vol. 46, Issue 12, 2013, p. 5-10. • Ragulskis M., Lukoseviciute K. Non-uniform attractor embedding for time series forecasting by fuzzy inference systems. Neurocomputing, Vol. 72, 2009, p. 2618-2626. • Lukoseviciute K., Ragulskis M. Evolutionary algorithms for the selection of time lags for time series forecasting by fuzzy inference systems. Neurocomputing, Vol. 73, 2010, p. 2077-2088. • Theiler J. Efficient algorithm for estimating the correlation dimension from a set of discrete points. Physical Review A, Vol. 36, Issue 9, 1987, p. 4456-4462. • Maus A., Sprott J. C. Neural network method for determining embedding dimension of a time series. Communication in Nonlinear Science and Numerical Simulation, Vol. 16, 2011, p. 3294-3302. • Rhodes C., Morari M. False-nearest-neighbors algorithm and noise-corrupted time series. Physical Review E, Vol. 55, Issue 5, 1997, p. 6162-6170. • Cao L. Practical method for determining the minimum embedding dimension of a scalar time series. Physica D: Nonlinear Phenomena, 1997, p. 43-50. • Su Li-yun Prediction of multivariate chaotic time series with local polynomial fitting. Computers and Mathematics with Applications, Vol. 59, 2010, p. 734-744. • Varadan V., Leung H. Reconstruction of polynomial systems from noisy time-series measurements using genetic programming. IEEE Transactions on Industrial Electronics, Vol. 48, Issue 4, 2001, p. • Neri Igor, et al. A real vibration database for kinetic energy harvesting application. Journal of Intelligent Material Systems and Structures, Vol. 23, 2012, p. 2095-2101. • Bechhoefer E. High Speed Gear Dataset. Acknowledgement is made for the measurements used in this work provided through data-acoustics.com Database. • Holland J. H. Adaptation in Neural and Artificial Systems: An Introductory Analysis with Application to Biology, Control and Artificial Intelligence. The MIT Press, 1992. • Man K. F., Tang K. S., Kwong S. Genetic algorithms: concepts and applications. IEEE Transactions on Industrial Electronics, Vol. 43, Issue 5, 1996, p. 519-534. • Karaboga D., Basturk B. On the performance of artificial bee colony (abc) algorithm. Applied Soft Computing, Vol. 8, 2008, p. 687-697. • Karaboga D., Gorkemli B., Ozturk C., Karaboga N. A Comprehensive survey: artificial bee colony (ABC) algorithm and applications. Artificial Intelligence Review, Vol. 42, Issue 1, 2014, p. 21-57. • Rajabioun R. Cuckoo optimization algorithm. Applied Soft Computing, Vol. 11, 2011, p. 5508-5518. • Rakhshani H., Dehghanian E., Rahati A. Hierarchy Cuckoo search algorithm for parameter estimation in biological systems. Chemometrics and Intelligent Laboratory Systems, Vol. 159, 2016, p. • Draper N., Smith H. Applied regression analysis. Biometrical Journal, John Wiley and Sons, Vol. 11, Issue 6, 1969. • Candan C. A unified framework for derivation and implementation of Savitzky-Golay filters. Signal Processing, Vol. 104, 2014, p. 203-211. About this article Chaos, nonlinear dynamics and applications synchronization of vibration non-uniform embedding evolutionary optimization algorithms Financial support from Lithuania Science Council under Project No. MIP-078/2016 is acknowledged. Copyright © 2016 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/18090","timestamp":"2024-11-14T04:35:41Z","content_type":"text/html","content_length":"162833","record_id":"<urn:uuid:b6c76826-2238-4905-ba1b-14499446e630>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00118.warc.gz"}
Math & Science Tutor - Algebra 2.1.5 (Pro unlocked) (Lessons unlocked 1500+ Math Tutor Video Lessons in Basic Math Algebra Calculus Physics Chemistry Engineering Statistics. 500+ hours of step-by-step instruction. Learn fast and get help in any subject by solving example problems step-by-step. Every lesson teaches the student how to solve problems gain practice and perform the calculations to score higher on exams and quizzes. All classes are taught assuming that the student has no knowledge of the subject. Whether learning basic math algebra calculus or advanced courses such as electrical engineering or mechanical engineering this method is the fastest way to truly master the material. Included Courses: Basic Math (Arithmetic): Addition Subtraction Multiplication Division Fractions Ratio Proportion Percents Word Problems. - Algebra 1 and Algebra 2: Real Numbers Integers Rational Numbers Algebraic Fractions Simplifying Expressions Solving Equations Multi Step Equations Graphing Quadratic Functions. - Geometry Lines Rays Planes Quadrilaterals Surface Area Volume Prisms Parallel Lines Geometric Theorems Proofs Circles Circumference. - College Algebra Rational Functions Shifting Functions Sequences Series Matrix Algebra Summation. - Trigonometry & PreCalculus Imaginary Numbers Complex Numbers Unit Circle Sin Cos Tan Trig Identities Exponential Functions Logarithmic Functions Trigonometric Equations. - Calculus 1 Limits Derivatives Integrals Techniques of Integration Substitution Improper Integrals Curve Sketching - Calculus 2 Integration by Parts Integration by Trig Substitution Sequences Series Convergence Implicit Differentiation Calculus 3 Partial Derivatives Line Integrals Surface Integrals Directional Derivatives Green's Theorem Stokes Theorem - Differential Equations Solving Differential Equations Graphing Solutions Systems of Equations - Calculator Tutorials Texas Instruments TI-84 TI-89 Graphing Calculator Tutorial - Physics 1 Motion Projectile Motion Torque Momentum Work Energy Friction Fluids Pressure Physics 2 Temperature Heat Thermodynamics Waves Simple Harmonic Motion Physics 3 Electricity Magnetism Maxwell's Equations Electric Field Magnetic Field - Chemistry Atoms Compounds Chemical Reactions Stoichiometry Gas Laws Redox Reactions - Probability & Statistics Sampling Statistics Central Limit Theorem Hypothesis Testing Linear Regression Correlation ANOVA - Electrical Engineering Circuit Analysis Node Voltage Mesh Current Dependent Sources Thevenin Circuits Phasors 3 Phase Circuits - Mechanical Engineering Statics Vector Mechanics Equilibrium Forces - Engineering Math Linear Algebra Laplace Transform Matrices - Java Programming Objects Classes For Loops While Loops Variables Methods - Matlab MS Word MS Excel - Science Experiments App Features: - Mark favorite lessons for later viewing. - Recently watched videos list. - Search all lessons for any topic. - View featured courses. - View recently released courses. - Worksheets for selected courses. - Share lessons via email & social media. Excel in school. Learn any subject fast by solving problems step-by-step. Our lessons have helped thousands of students achieve success! Information about Math Tutor subscriptions: - Most lessons in the app are free. For $19.99 a month you will get access to all 1500+ lessons and courses. - Your subscription will automatically renew at $19.99 each month billed through your account. - You can cancel anytime by turning off auto-renew in your account settings. - The subscription automatically renews every month unless auto-renew is turned off at least 24 hours before the end of the current period. - No cancellation of the current subscription is allowed during the active subscription period. - Read our terms of service (http://www.mathtutordvd.com/public/73.cfm) and privacy policy (http://www.mathtutordvd.com/public/department12.cfm) for more information.
{"url":"https://a2zapk.io/1350543-math-science-tutor-algebra-2-1-5-pro-unlocked-lessons-unlocked-a2z.html","timestamp":"2024-11-10T03:28:15Z","content_type":"text/html","content_length":"78314","record_id":"<urn:uuid:90a1e7bc-9b4b-4d5c-8bd2-816d225b45eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00299.warc.gz"}
For what values of x, if any, does f(x) = 1/((x^2-4)cos(pi/2+(7pi)/x) have vertical asymptotes? | HIX Tutor For what values of x, if any, does #f(x) = 1/((x^2-4)cos(pi/2+(7pi)/x) # have vertical asymptotes? Answer 1 $x = \pm 2 , \pm 7 \left(1 , \frac{1}{2} , \frac{1}{3} , \frac{1}{4} , \ldots\right)$ The vertical asymptotes are given by zero of #x^2-4=0# and #cos(pi/2+7/xpi)=0 => -sin(7pi/x)#. #x = +-2# and #7/k, k = +-1, +-2, +-3, ...# Note that k = 0 sends #x to +-oo# to make kx read 7 . The ad hoc graph is not to scale. graph{-1/((x^2-4)sin(21.991/x) [-3.51, 3.47, -17, 17]} Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/for-what-values-of-x-if-any-does-f-x-1-x-2-4-cos-pi-2-7pi-x-have-vertical-asympt-8f9af9cd2d","timestamp":"2024-11-13T05:42:52Z","content_type":"text/html","content_length":"576147","record_id":"<urn:uuid:fa05bb54-d11c-4641-ab87-2e64a9eec412>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00062.warc.gz"}
Chessboard Problems We will here consider the question of those boards that contain an odd number of squares. We will suppose that the central square is first cut out, so as to leave an even number of squares for division. Now, it is obvious that a square three by three can only be divided in one way, as shown in Fig. 1. It will be seen that the pieces A and B are of the same size and shape, and that any other way of cutting would only produce the same shaped pieces, so remember that these variations are not counted as different ways. The puzzle I propose is to cut the board five by five (Fig. 2) into two pieces of the same size and shape in as many different ways as possible. I have shown in the illustration one way of doing it. How many different ways are there altogether? A piece which when turned over resembles another piece is not considered to be of a different | H | | | HHHHH | | | H | Fig 1] | | | | | | | | | H | | | | HHHHH | | | | H | | | | H | | | | Fig 2] Read Answer
{"url":"https://www.mathpuzzle.ca/Puzzle/Boards-With-An-Odd-Number-Of-Squ.html","timestamp":"2024-11-10T05:00:26Z","content_type":"text/html","content_length":"15683","record_id":"<urn:uuid:13038201-bdb4-435b-8a6a-1df8efafff40>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00630.warc.gz"}
A quick guide on Solution of the Inequality Inequalities are widely used when stating an ordered relation between any two numbers or their two expressions of algebra. Five ways are there that can support providing mathematical relations between two distinct algebraic expressions. In this study, focus will be given to the ways for interpretation of solutions to inequalities. Furthermore, within the study, notable focus is given to determining the combination of inequalities, graphing inequalities and linear equalities. Understanding All About On Solutions of Inequalities Delving into the topic it needs to be noticed that inequalities majorly provide an expression that caters for providing an effective relationship between two expressions of algebra. It can also show relationships between integers as well as variables. Furthermore, in this study of solutions to inequalities, it is noticed that there are five respective relationships stated by signs of inequalities. In terms of graphical representation, it is noticed that with value to the functions of x, the critical values are plotted on a two-dimensional graph on the abscissa or x-axis. In mathematics, the notion of inequalities majorly depends on determining the relationship that determines, greater than, less than, less than equal to, not equal to and as well as greater than equal to. It can also be stated that mathematical analysis often depends on the notions of equalities. What Are Inequalities? The term inequality intends to refer to a statement of relationships between two distinct values in order to conduct an effective comparison. Moreover, in the aspects or proving of theorems, such notions as equalities are crucial as it supports scientists in determining the mathematical analysis. One such example is known as “inequality of Cauchy-Schwarz”, that provides in establishing a true relationship between the two algebraic expressions. This has also been noticed that relationships can be stated with the help of five conditions. Moreover, it is noted that the inequalities can be successively represented with the graph by evaluating the critical value of functions of x within a given equation. Combinations of Inequalities The term “combinations of inequalities” is a widely used term in chapters of algebra that provides assistance in satisfying both the algebraic inequalities. Moreover, it refers to a condition, where there lies a common number that will be present in both the sets associated with the solutions. Therefore, in simple terms, it means that both algebraic inequalities are successively satisfied and as well as their values within the point of intersection for two solution sets. The set can be represented as a notation of intervals such as, “(-2, 4)” respectively. In addition to this, the condition can be stated as “double inequality”. Graphing Inequalities Based on graphical representation, several steps can be applied, but primarily three necessary steps are concerned that are as follows. • The first step caters to the rearrangements of the ordinate of y on the left side of the equation leaving all the terms on right. • Afterwards, values associated with the ordinate or y are plotted on the graph, • Finally, the regions concerning the plotted values of the ordinate are to be defined by the shaded regions. Linear Inequalities Delving into the notions associated with the inequalities, linear inequality posits equal importance in its application. The linear equation refers to a solving system without focusing on points of intersection. In addition to these, it can be stated that the set of solutions refers to a region that intends to satisfy the linear equalities. Moreover, catering to the study, it is noticed that the better way to solve linear inequalities is by graphical representations. Graphically Solving Of System of Linear Inequalities The best way to acknowledge this is through graphical representations in solving equations associated with linear equalities. The first step is solving inequality for ordinate and then followed by treating the linear equations that can be represented on lines of solid or dashed. It is to be noted that the strict inequalities present dashed lines. Reaching this far in the study, it can be easily noticed that the inequalities are essential in providing a relationship between two expressions of algebra. Moreover, the inequalities can also be represented on the 2-dimensional graphs but effectively determine the value associated with abscissa or x-axis. Delving into the study, it can be acknowledged that solving inequalities provides a comparison to the values associated with relative sizes. It can be used to successively compare variables, integers as well as many algebraic expressions.
{"url":"https://unacademy.com/content/upsc/study-material/mathematics/a-quick-guide-on-solution-of-the-inequality/","timestamp":"2024-11-09T03:40:09Z","content_type":"text/html","content_length":"679794","record_id":"<urn:uuid:f3b02c22-e16a-4e0f-bfa4-f5bf9f593b16>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00857.warc.gz"}
Lambda Functions in Python - Data Science Parichay Lambda functions in python are single expression functions that are not bound to a name. This is why lambda functions are also called anonymous functions. In this tutorial, we’ll look at lambda functions, how are they different from normal functions, and where are they used along with some examples. Table of Contents • Lambda Function Syntax • How are lambda functions different from normal functions? • When do we use lambda functions? Lambda Function Syntax Lambda functions in python are defined using the lambda keyword. They are single expression functions. We use the following syntax to define them: lambda arguments : expressions add_10 = lambda x: x+10 The above code gives the following output on execution: In the above example, the statement lambda x: x+10 returns a function object which is stored in add_10. This function object can be used like any normal function. Lambda functions can have any number of arguments but only a single expression. a = lambda x,y,z: x+y+z The above code gives the following output on execution: 📚 Data Science Programs By Skill Level Introductory ⭐ Intermediate ⭐⭐⭐ Advanced ⭐⭐⭐⭐⭐ 🔎 Find Data Science Programs 👨💻 111,889 already enrolled Disclaimer: Data Science Parichay is reader supported. When you purchase a course through a link on this site, we may earn a small commission at no additional cost to you. Earned commissions help support this website and its team of writers. In the above example, the lambda function has three arguments, x, y, and z and a single expression x+y+z. How are lambda functions different from normal functions? Lambda functions in python work in a similar way as regular functions do. But, there are some differences between the two: 1. Lambda functions are defined using the lambda keyword and are not bound to a name. Regular functions, on the other hand, are defined using the def keyword and have a name associated with them. 2. Lambda functions are restricted to a single expression while normal functions can have any number of expressions. Lambda functions have an implicit return statement. During execution, a lambda function evaluates its expression and automatically returns the result. The example below illustrates the difference between the two: # Function to add two numbers def add(x,y): # return the sum return x+y add = lambda x,y: x+y In the above example, the same task of adding two numbers is performed by a regular python function and a lambda function. When do we use lambda functions? You may be wondering that if lambda functions and regular functions do the same thing, then why do we use them in the first place? • If a function is required for a single use simple task, lambda functions can be a good alternative. However, one should avoid writing complicated lambda functions that hamper code readability only to save a couple of lines of code. • Lambda functions can be used to return a function from another function. See the example below: # A function returning another function def add_n(n): # returns a lambda function to add n return lambda x: x+n # Function to add 10 to a number add_10 = add_n(10) # calling the function In the above example, the function add_n(n) takes in an argument n and returns a lambda function to add that particular argument. Lambda functions can be used for other use cases as well. For more, check out this answer on stack overflow. If you’d like to know about regular functions in python check out our article on python functions. In this tutorial, we looked at lambda functions in python. If you found this article useful do give it a share! For more such articles subscribe to us. If you’re a beginner looking to start your data science journey and learn python, check out our Python for Data Science Series.
{"url":"https://datascienceparichay.com/article/lambda-functions-in-python/","timestamp":"2024-11-13T19:26:56Z","content_type":"text/html","content_length":"259673","record_id":"<urn:uuid:bd881b92-da03-488f-9980-59e5ee28139d>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00021.warc.gz"}
Elementary Technical Mathematics, 12th ELEMENTARY TECHNICAL MATHEMATICS helps students with minimal math background successfully prepare for technical, trade, allied health or tech prep programs. Author Dale Ewen focuses on fundamental concepts in basic arithmetic: the metric system and measurement, algebra, geometry, trigonometry and statistics. Thousands of examples, exercises and applications cover such fields as industrial and construction trades, electronics, agriculture/horticulture, allied health, CAD/drafting, HVAC, welding, auto/diesel service, aviation, natural resources, culinary arts and business/personal finance--to engage students and provide them with the math background they need to succeed in future courses and careers. Purchase Enquiry INSTRUCTOR’S eREVIEW COPY 1. Basic Concepts. Review of Operations with Whole Numbers. Review of Basic Operations. Order of Operations. Area and Volume. Formulas. Prime Factorization Divisibility. Review of Operations with Fractions. Introduction to Fractions. Addition and Subtraction of Fractions. Multiplication and Division of Fractions. The U.S. System of Weights and Measures. Review of Operations with Decimal Fractions and Percent. Addition and Subtraction of Decimal Fractions. Rounding Numbers. Multiplication and Division of Decimal Fractions. Percent. Part, Base, and Rate. Powers and Roots. Application Involving Percent: Business and Personal Finance. 2. Signed Numbers and Powers of 10. Addition of Signed Numbers. Subtraction of Signed Numbers. Multiplication and Division of Signed Numbers. Signed Fractions. Powers of 10. Scientific Notation. Engineering Notation. Chapters 1-2: Cumulative Review. 3. The Metric System. Introduction to the Metric System. Length. Mass and Weight. Volume and Area. Time, Current, and Other Units. Temperature. Metric and U.S. Conversion. 4. Measurement. Approximate Numbers and Accuracy. Precision and Greatest Possible Error. The Vernier Caliper. The Micrometer Caliper. Addition and Subtraction of Measurements. Multiplication and Division of Measurements. Relative Error and Percent of Error. Color Code of Electrical Resistors. Reading Scales. Chapters 1-4: Cumulative Review. 5. Polynomials: An Introduction to Algebra. Fundamental Operations. Simplifying Algebraic Expressions. Addition and Subtraction of Polynomials. Multiplication of Monomials. Multiplication of Polynomials. Division by a Monomial. Division by a 6. Equations and Formulas. Equations. Equations with Variables in Both Members. Equations with Parentheses. Equations with Fractions. Translating Words into Algebraic Symbols. Applications Involving Equations. Formulas. Substituting Data into Formulas. Reciprocal Formulas Using a Calculator. Chapters 1-6: Cumulative Review. 7. Ratio and Proportion. Ratio. Proportion. Direct Variation. Inverse Variation. 8. Graphing Linear Equations. Linear Equations with Two Variables. Graphing Linear Equations. The Slope of a Line. The Equation of a Line. Chapters 1-8: Cumulative Review. 9. Systems of Linear Equations. Solving Pairs of Linear Equations by Graphing. Solving Pairs of Linear Equations by Addition. Solving Pairs of Linear Equations by Substitution. Applications Involving Pairs of Linear Equations. 10. Factoring Algebraic Expressions. Finding Monomial Factors. Finding the Product of Two Binomials Mentally. Finding Binomial Factors. Special Products. Finding Factors of Special Products. Factoring General Trinomials. Chapters 1-10: Cumulative Review. 11. Quadratic Equations. Solving Quadratic Equations by Factoring. The Quadratic Formula. Applications Involving Quadratic Equations. Graphs of Quadratic Equations. Imaginary Numbers. 12. Geometry. Angles and Polygons. Quadrilaterals. Triangles. Similar Polygons. Circles. Radian Measure. Prisms. Cylinders. Pyramids and Cones. Spheres. Chapters 1-12: Cumulative Review. 13. Right Triangle Trigonometry. Trigonometric Ratios. Using Trigonometric Ratios to Find Angles. Using Trigonometric Ratios to Find Sides. Solving Right Triangles. Applications Involving Trigonometric Ratios. 14. Trigonometry with Any Angle. Sine and Cosine Graphs. Period and Phase Shift. Solving Oblique Triangles: Law of Sines. Law of Sines: The Ambiguous Case. Solving Oblique Triangles: Law of Cosines. Chapters 1-14: Cumulative Review. 15. Basic Statistics. Bar Graphs. Circle Graphs. Line Graphs. Other Graphs. Mean Measurement. Other Average Measurements and Percentiles. Range and Standard Deviation. Grouped Data. Standard Deviation for Grouped Data. Statistical Process Control. Other Graphs for Statistical Data. Normal Distribution. Probability. Independent Events. 16. Binary and Hexadecimal Numbers. Introduction to Binary Numbers. Addition of Binary Numbers. Subtraction of Binary Numbers. Multiplication of Binary Numbers. Conversion from Decimal to Binary System. Conversion from Binary to Decimal System. Hexadecimal System. Addition and Subtraction of Hexadecimal Numbers. Binary to Hexadecimal Conversion. Hexadecimal Code for Colors. Chapters 1-16: Cumulative Review. Appendix A: Tables. Table 1: Formulas from Geometry. Table 2: Electrical Symbols. Appendix B: Exponential Equations. Appendix C: Simple Inequalities. Appendix D: Instructor's Answer Key to All Exercises. • Dale Ewen Dale Ewen, Executive Vice President of Parkland College, Illinois, (1999-present), graduated from the University of Illinois with a B.S. in 1963 and an M. Ed. in 1966. He has been the recipient of several AMATYC (American Mathematical Association of Two-Year Colleges) awards and served as the organization's President (1989-91). Dale continues to write a number of technical mathematic • Applications in the areas of industrial and construction trades, electronics and CAD/drafting were reviewed and updated with the assistance of experts working in these areas. All other areas were reviewed and updated with current information and data by the author. • Major effort was made to streamline the text by creating a more space efficient page design, reviewing art size and placement, moving Group Activities at the end of each chapter to the Instructor Companion Website, and removing outdated material. • After thorough review, this text provides better rationale for measurement accuracy and precision and calculations with measurements, compares single vs. multiple measurements, and introduces the concept of random and systematic errors. • Signed number drills exercises assist students in learning addition, subtraction and multiplication on signed numbers. • Cumulative reviews at the end of every even-numbered chapter enable students to review and prepare for comprehensive exams. • Many applications from a wide variety of technical areas are noted by marginal icons, including industrial and construction trades, electronics, agriculture/horticulture, allied health, CAD/ drafting, HVAC, welding, auto/diesel service, aviation, natural resources, culinary arts and business/personal finance engage students and illustrate the relevance of what they're learning. Marginal icons call attention to the applications, making it easier for instructors and students to find and explore them. • Each chapter opener presents basic information about a technical career underscoring the connection of the math to real life. • The inside covers contain useful, frequently referenced information--such as metric system prefixes, English weights and measures, metric and English conversion and formulas from geometry. • Chapter 1 reviews basic concepts in such a way that students or the entire class can easily study only those sections they need to review. • The use of a scientific calculator has been integrated in an easy-to-use format throughout the text to reflect its nearly universal use in technical classes and on the job. Cengage provides a range of supplements that are updated in coordination with the main title selection. For more information about these supplements, contact your Learning Consultant. Cengage Learning Testing, powered by Cognero® for Ewen/Nelson's Elementary Technical Mathematics Cengage Testing, powered by Cognero® for Ewen's Elementary Technical Mathematics, Instant Access Instructor's Companion Website for Ewen's Elementary Technical Mathematics, 12th Online Instructor's Solutions Manual for Ewen's Elementary Technical Mathematics, 12th
{"url":"https://www.cengageasia.com/title/default/detail?isbn=9781337630580","timestamp":"2024-11-13T01:07:41Z","content_type":"text/html","content_length":"57869","record_id":"<urn:uuid:8b252157-2174-4b08-83fb-4e6064c52014>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00315.warc.gz"}