GMAT vs. GRE
The content provides guidance for individuals torn between taking the GMAT or the GRE for business school applications, highlighting the differences, advantages, and strategic considerations for choosing the right test.
• All business schools accept the GMAT, and many also accept the GRE, presenting a choice for applicants.
• The hardest math questions on the GMAT are slightly more challenging than those on the GRE, but this is mitigated by the computer-adaptive testing format of the GMAT.
• GRE verbal sections focus more on vocabulary, whereas GMAT verbal sections emphasize grammar.
• To decide which test to take, it's recommended to take a cold practice test for both the GMAT and the GRE and compare performances.
• Generally, the GMAT is recommended for business school applicants unless there's a significant performance disparity favoring the GRE in a practice scenario.
Sections: Introduction to GMAT vs. GRE; Understanding Test Acceptance; Key Differences Between GMAT and GRE.
This lesson compares the current GMAT with the current GRE. In late 2023, the GMAT is becoming shorter, and on September 22, 2023, the GRE is also becoming shorter. Please see the lesson "Shorter GRE vs. GMAT Focus Content Differences" for more details about the two new exams.
{"url":"https://gmat.magoosh.com/lessons/1062-gmat-vs-gre?utm_source=gmatblog&utm_medium=blog&utm_campaign=gmatstudyschedule&utm_content=1-month-gmat-study-schedule&utm_term=inline","timestamp":"2024-11-13T05:26:15Z","content_type":"text/html","content_length":"64842","record_id":"<urn:uuid:7736158e-463d-4e12-8d1a-cc7a410e3d02>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00738.warc.gz"}
Teaching 2017-2018 – Curriculum in Mathematics for Engineering
Meeting on Fractional Derivatives – Fractional Calculus and its Applications – 15 Dec 2017 – La Sapienza, S.B.A.I. Department
November 8th 2017 – Seminar, Prof. Patera/Pontrelli
Basic Course – reference for the timetable: Prof. Fabrizio Frezza (fabrizio.frezza@uniroma1.it)
Course in technical-scientific writing (3 CFU), Emilio Matricciani (Politecnico di Milano), January-February 2018
Course M1
Giovanni Cerulli Irelli, Andrea Vietri – 30 hours, second semester (15+15 hours)
TITLE: Graphs: from combinatorics to representation theory
First part: Graceful labellings and edge-colourings of graphs. After a general introduction to graphs, with no specific background required, the first part of the course proceeds with two distinct topics, Graceful Labellings and Edge-Critical Graphs, which have attracted much interest for decades and provide numerous open questions. Classical constructions are shown along the course. Some of the current research issues are presented together with the state of the art and the known techniques. The students are encouraged to give their personal contribution to the development of these themes.
Basic definitions on graphs.
Topic 1) Graceful labellings. Decomposition of a complete graph using a graceful labelling. The Ringel conjecture on trees. Rosa's necessary condition. Graceful collages. Graceful polynomials.
Topic 2) Edge colouring and critical graphs. Vizing's theorem and the Classification problem. Colouring of bipartite graphs. Planar graphs. Critical graphs. Geometrical interpretation of criticality. Construction of critical graphs.
References: V. Bryant, Aspects of Combinatorics: A Wide-ranging Introduction, Cambridge University Press, 1993. J. A. Gallian, A Dynamic Survey of Graph Labeling, Electr. J. Comb. 16, DS6 (online source).
Second part: Representations. The second part is more algebraic. We will develop the theory of representations of oriented graphs (which in this context are called quivers). This theory has been developed since the late 60s, and it is now a central topic of research in algebra and representation theory. We will provide an introduction to the theme, starting from basic notions of homological algebra. The goal of the course is the proof of Gabriel's famous theorem: "a quiver has only finitely many isoclasses of indecomposable representations if and only if it is an orientation of a simply-laced Dynkin graph of type A, D or E". We will mainly follow the book "Quiver Representations" by R. Schiffler. Time permitting, we will also develop some basics of the theory of quiver
Prerequisites: Linear algebra.
Objectives: the category of quiver representations is a perfect object with which to start working with functors and derived functors. The student will acquire familiarity with those concepts through several examples. Standard facts of linear algebra will be applied in unexpected ways and hence rediscovered.
Exams (for both parts): The exam will consist of the solution of weekly exercises and a short talk on a theme compatible with the student's interests.
Course M2
Boundary value problems in domains with irregular boundaries.
Professors Maria Rosaria Lancia and Maria Agostina Vivaldi (Sapienza Università di Roma), 20/24 hours.
Starting day: February 27; Final day: April 05; Tuesday and Thursday 14:00-16:00, room 1B1, Pal.002.
Program (a list of the topics):
- Variational solutions to boundary value problems
- Regularity results
- Homogenization and asymptotic analysis
- Numerical approximation with the finite element method
- Problems with dynamic boundary conditions
- Parabolic boundary value problems
Course M3
"Nonlinear diffusion in inhomogeneous environments"
Anatoli Tedeev (South Mathematical Institute of VSC, Russian Academy of Sciences, Vladikavkaz, Russia) and Daniele Andreucci (Sapienza Università di Roma)
May-June 2018, Tuesday-Thursday, room 1B1, 11:00-13:00.
In many problems of diffusion the spatially inhomogeneous character of the medium is important and affects the qualitative behavior of the solutions. The course will cover problems in unbounded domains of Euclidean space or in non-compact Riemannian manifolds, providing a general introduction to the basic theory and then focusing on the asymptotic behavior of solutions for large times. The prerequisites are a standard knowledge of Sobolev spaces and the basic theory of Riemannian manifolds. Many results in this field will be recalled in the course.
1) Linear and non-linear diffusion equations; the concept of solutions and the variety of possible behaviors. The energy method.
2) Sobolev spaces on manifolds. Interplay between geometry and embedding results.
3) Criteria of stabilization for inhomogeneous linear parabolic equations.
4) Specific properties of nonlinear, possibly degenerate, diffusion.
5) The case of space-dependent coefficients; blow-up of interfaces.
6) Asymptotics for large times: classical results in Euclidean space. The asymptotic profile in linear and nonlinear diffusion.
7) The case of the Neumann problem in subdomains of Euclidean space.
8) Asymptotic behavior in manifolds.
Course M4
An introduction to the functions of bounded variation, and existence results for elliptic equations with p-Laplacian principal part (p greater than or equal to 1) and singular lower order terms.
Professors Virginia De Cicco (Sapienza Università di Roma) and Daniela Giachetti (Sapienza Università di Roma)
From January to March 2018. Room 1B1, 10:00-12:00, on January 10, 12, 16, 19, 23, 26, 2018; then March 27, 30; April 17, 20, 24; May 2 - 10:00/12:00, room 1B1, Pal. RM002.
January 2018 calendar: Wed 10/1, Fri 12/1, Tue 16/1, Fri 19/1, Tue 23/1, Fri 26/1.
The first part of the course deals with an introduction to the functions of bounded variation. We first consider only functions of one variable: we give some definitions and examples, present some characterizations and properties, and recall a brief history. Then we consider the functions of bounded variation of several variables: we present the definition, point out some properties and the main theorems, and give some applications. The second part of the course deals with some existence results for elliptic equations with p-Laplacian principal part (p greater than or equal to 1) and singular lower order terms, with the following program:
1) the case p = 2: mild singularity and strong singularity, definition of solutions, existence, stability and uniqueness.
2) results for the case 1 < p, p different from 2.
3) the case p = 1.
{"url":"https://www.sbai.uniroma1.it/didattica/dottorati/teaching-2017-2018-curriculum-mathematics-for-engineering","timestamp":"2024-11-14T12:01:56Z","content_type":"text/html","content_length":"38605","record_id":"<urn:uuid:8e474992-334e-4dbc-a17f-434defbc4805>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00818.warc.gz"}
The word "blastoff" I thought this might be an operational word from the 1950s, but I can't find any reference. Where does it come from, and how did it get into the public patois of the day? The field of rocketry started with solid propellants, so that might be a clue for where "blastoff" got started.
{"url":"https://forum.nasaspaceflight.com/index.php?PHPSESSID=3obv3gc1dspqj78i4n9j7p6m0b&topic=61454.0","timestamp":"2024-11-03T06:13:39Z","content_type":"text/html","content_length":"58301","record_id":"<urn:uuid:ad2a237d-9cf9-4457-bfa6-cfcb41459109>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00622.warc.gz"}
Discontinuity between number of births (and deaths) in first year compared to subsequent years
There will often be a small discontinuity between the number of births (and deaths) in the first year compared to subsequent years. Since Spectrum calculates mid-year populations (July 1 to June 30), we determine the number of women of reproductive age during this period as the average between women last July 1 and this July 1: WRA = [females15-49(t-1) + females15-49(t)] / 2. With this formula you cannot calculate births (or deaths) in the base year, since you don't know how many women there were in the previous year. Professional demographers always want us to leave the first year blank to show that you cannot calculate that value. Even though that is technically correct, we don't do that because it is not helpful to programs. So we estimate births in the first year using the base-year population. Since the previous year's population was probably somewhat smaller, this may tend to over-estimate births by some small amount. These births are not used in any subsequent calculations; they are just for display. So the slight drop you see from the first to the second year is just due to this approximation.
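To make the approximation concrete, here is a minimal sketch of the idea (this is not Spectrum's actual code; the population numbers and the fertility-rate constant are purely hypothetical):

from __future__ import annotations

# Hypothetical female population aged 15-49 on July 1 of each year.
wra_july1 = {2019: 980_000, 2020: 1_000_000, 2021: 1_020_000}
births_per_woman = 0.08  # illustrative aggregate rate, not a real Spectrum parameter

def births(year, base_year=2020):
    """Births for the mid-year period starting July 1 of `year`."""
    if year == base_year:
        # Previous July 1 is unknown in the base year, so fall back to the
        # base-year population alone (this is the approximation described above).
        wra = wra_july1[year]
    else:
        # Normal case: average of last July 1 and this July 1.
        wra = (wra_july1[year - 1] + wra_july1[year]) / 2
    return wra * births_per_woman

print(births(2020))  # base-year approximation, uses a single July-1 population
print(births(2021))  # regular mid-year average

In this toy example the base-year figure is computed from a single, typically larger population than the true two-year average would use, which is exactly the small, display-only over-estimate described above.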
{"url":"https://support.avenirhealth.org/hc/en-us/articles/217088647-Discontinuity-between-number-of-births-and-deaths-in-first-year-compared-to-subsequent-years","timestamp":"2024-11-05T17:15:52Z","content_type":"text/html","content_length":"16427","record_id":"<urn:uuid:6c736ba9-dd85-47e2-9d8e-3f861f824b97>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00690.warc.gz"}
Are units the same as credit hours?
Yes. They're both in common usage to describe:
Semester Hours - The number of credits or units awarded for courses on the semester system.
Quarter Hours - The same, but for those schools on a quarterly calendar.
Is units same as credits?
The term "unit" is often used interchangeably with the term "credit." A 4-unit course, for example, might very well be the same thing at your school as a 4-credit course. Regardless of how the terms are used, it's smart to see how your particular school assigns units (or credits) to the classes offered.
How many credit hours is 4 units?
Four credit units require students to work on that course for about 180 (45x4) hours in some combination of class/instructional time and out-of-class time.
Is units the same as hours?
A unit represents approximately three hours of work per week. Thus a 3 unit course will probably require 9 hours of work per week, a 5 unit course will require 15 hours per week, and so forth. (A rough conversion sketch follows at the end of this list.)
How do you convert credit hours to units?
37.5 clock hours = 1 unit of credit
Understanding Credit Hours
How many credit hours is a unit?
"(a) One credit hour of community college work (one unit of credit) shall require a minimum of 48 semester hours of total student work or 33 quarter hours of total student work which may include inside and/or outside-of-class hours."
How many hours is 12 units?
In other words, a student taking 12 units is spending 12 hours in class and 24-36 hours outside of class in order to succeed. A full-time student has a full-time job just being a student. Courses and students vary. However, students should consider these guidelines when planning for each semester.
How many hours is 1 credit hour?
The general rule provided by the U.S. Department of Education and regional accreditors is that one academic credit hour is composed of 15 hours of direct instruction (50-60 minute hours) and 30 hours of out-of-class student work (60-minute hours).
How many classes is 12 units in college?
Twelve credit hours usually translates to four courses worth three credits apiece. Some students take more than 12 credit hours a semester.
What is 12 semester units?
In the U.S. higher education system, a typical college course is worth 3 credit hours, so "12 semester hours" would usually equate to taking 4 courses, each worth 3 credit hours, in that specific semester.
What does 24 units mean?
In the United States, one college unit is usually equivalent to one semester credit. Therefore, 24 semester credits would be equivalent to 24 college units.
Is 17 credit hours a lot?
17 semester hours is not an unusual course load for a college freshman/first-year student. It usually happens because a student takes the usual five-course load and has one-credit labs for science and foreign language.
Does units also mean credits?
A unit is an academic module which forms part of your course of study and represents a credit point value that contributes towards your course. Your course program will state the total number of credit points you need to achieve (and often the specific units required) to attain your award.
Are credits also called units?
Also known as 'units', or commonly regarded as 'credit hours'. Credits are a numerical value indicating the number of hours assigned to each class per week. For example, a three credit course meets three hours per week.
Is 20 units in college a lot?
20 units would be a full-time job and a half. This is further complicated should you want to pursue research, employment, athletics, or extracurricular opportunities as well. We encourage students to think of any activity that they regularly participate in as counting for 1 unit for every 3 hours.
How many hours is 2 credit hours?
During the course of the semester, a credit hour is equivalent to one of the following: 15 hours of classroom contact, plus appropriate outside preparation (30 hours).
How many hours is 1 unit in college?
Hours to Units: defines one unit of credit as a minimum of 48 hours of total student work for colleges on the semester system, and as a minimum of 33 hours for colleges on the quarter system.
What does 3 credit hours mean?
Every hour that a student spends in the class typically corresponds to a credit hour. For example, if a student enrolls in a class that meets for one hour on Monday, Wednesday, and Friday, that course would be worth three credit hours, which is common in many college courses.
Is 17 units a lot in college?
Taking 17 college units can be a significant workload, especially if you have other commitments such as work or extracurricular activities. It's important to consider your ability to manage your time effectively, as well as your capacity to handle the academic workload.
Is 10 units a lot in college?
Depends on major, but the average full-time student is taking 12 to 15 units.
How many hours is 2 units in college?
The general formula for contact hours is as follows: a 2-unit, 15-week course requires a minimum of 1500 minutes or 25 hours; a 3-unit, 15-week course requires a minimum of 2250 minutes or 37.5 hours; and a 4-unit, 15-week course requires 3000 minutes or 50 hours.
What is one unit of credit?
A unit of college credit represents three hours of student time each week for a semester; one hour of scheduled classroom lecture and two hours in outside preparation. A longer time is scheduled for laboratory courses since more of the work is done in the classroom.
How do credit units work?
A college credit is a unit that measures learning at accredited colleges and universities in the United States. According to federal guidelines, one college credit hour "reasonably approximates" one hour of classroom learning plus two hours of independent work [1].
How many hours is 1 semester unit?
The student is only awarded an "A" when they reach the minimum threshold of 90 percent. Units for Cooperative Work Experience courses are calculated as follows: each 75 hours of paid work equals one semester credit, or 50 hours equals one quarter credit.
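Since several of the answers above boil down to the same arithmetic (roughly 1 unit ~ 3 hours of combined work per week, or about 45 hours per unit over a 15-week semester), here is a small illustrative sketch of that rule of thumb. The numbers are the commonly cited approximations from the answers above, not an official formula, and actual policies vary by school:

def estimated_workload(units, weeks=15, hours_per_unit_per_week=3):
    """Rough workload estimate using the '1 unit ~ 3 hours/week' rule of thumb."""
    weekly_hours = units * hours_per_unit_per_week   # e.g. 12 units -> ~36 hours/week
    total_hours = weekly_hours * weeks               # e.g. 12 units -> ~540 hours/semester
    in_class = units * weeks                         # roughly 1 contact hour per unit per week
    out_of_class = total_hours - in_class            # the remaining independent work
    return weekly_hours, total_hours, in_class, out_of_class

print(estimated_workload(12))  # (36, 540, 180, 360)
print(estimated_workload(4))   # (12, 180, 60, 120)

For example, 4 units comes out to roughly 180 total hours, in line with the "about 180 (45x4) hours" figure quoted earlier.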
{"url":"https://www.spainexchange.com/faq/are-units-the-same-as-credit-hours","timestamp":"2024-11-07T16:22:37Z","content_type":"text/html","content_length":"346432","record_id":"<urn:uuid:feec3590-e761-40ac-bdcd-901254dbec1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00506.warc.gz"}
• Code Dependent (cs) in reply to Comp Sci Student Comp Sci Student: Are we going to need a GodFactory, which returns an array of God pointers? Horrible pseudocode follows: GodFactory.getGods(GodFactory.JEWISH) = [God] GodFactory.getGods(GodFactory.CHRISTIAN) = [God, God, God] GodFactory.getGods(GodFactory.ATHIEST) = null GodFactory.getGods(GodFactory.ROMAN) = [Zeus, ...] GodFactory.getGods(GodFactory.PASTAFARIAN) = [FSM] and so on I tried to call GodFactory.getGods(GodFactory.AGNOSTIC), but the call keeps blocking. Frisbeetarian isn't working, either... it keeps getting stuck somewhere. • corvi (unregistered) HAI 1.2 BTW used some syntax liberties for areas underdefined around arrays HOW DUZ I JosephusCircle YR soldiers AN YR skip I HAS A soldier soldier r 1 I HAS A dead dead r 0 I HAS A deadsoldiers deadsoldiers r DIFF OF soldiers AN 1 I HAS A circle IM IN YR LOOP UPPIN YR soldier TIL BOTH SAEM soldier AN soldiers soldier IN MAH circle r WIN IM OUTTA YR LOOP I HAS A skipnum skipnum r 1 soldier r 1 IM IN YR KILLLOOP UPPIN YR soldier WILE DIFFRINT dead AN BIGGR OF dead AN deadsoldiers BOTH SAEM soldier IN MAH circle AN WIN, O RLY? YA RLY BOTH SAEM skipnum AN skip, O RLY? YA RLY soldier IN MAH circle r FAIL dead r SUM OF 1 AN dead skipnum r 1 NO WAI, skipnum r SUM OF skipnum AN 1 BOTH SAEM soldier AN soldiers, O RLY? YA RLY, soldier r 0 IM OUTTA YR KILLLOOP BOTH SAEM dead AN deadsoldiers, O RLY? YA RLY IM IN YR LOOP UPPIN YR soldier TIL BOTH SAEM soldier AN soldiers BOTH SAEM soldier IN MAH circle AN WIN, O RLY? YA RLY, safespot r soldier IM OUTTA YR LOOP NO WAI, safespot r 0 FOUND YR safespot IF U SAY SO • samanddeanus (cs) in reply to Dr. Batch @Dr. Batch: I have posted code in Scheme which yielded (josephus 32767 2) [of 32767 people, skip 2 and then kill the next person] in 27 iterations and (josephus (largest-fixnum) 2) in 98 iterations. [largest-fixnum is 144115188075855871] Even (josephus (expt (largest-fixnum) 100) 2) returns immediately. The ceiling function in Scheme works on rational numbers as well as floats, so it returns exact answers. I also made some Scheme code that "models" the situation using circular lists or message-passing, which probably runs slow as !@#$$ • Kevin M (unregistered) in reply to Kevin M An iterative version of my previous solution, also handles arbitrary count value k (kill every k'th element): int josephus(Vector<Integer> v, int k) { while (v.size() > 1) { for (int i = 0; i < k - 1; i++) { return v.elementAt(0).intValue(); • Dr. Batch (unregistered) in reply to samanddeanus @Dr. Batch: I have posted code in Scheme which yielded (josephus 32767 2) [of 32767 people, skip 2 and then kill the next person] in 27 iterations and (josephus (largest-fixnum) 2) in 98 iterations. [largest-fixnum is 144115188075855871] Even (josephus (expt (largest-fixnum) 100) 2) returns immediately. The ceiling function in Scheme works on rational numbers as well as floats, so it returns exact answers. I also made some Scheme code that "models" the situation using circular lists or message-passing, which probably runs slow as !@#$$ My arrogance and I stand corrected. 
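For readers following along: several of the submissions above and below (Kevin M's iterative Java version, whose loop body appears cut off on this page, and the later queue- and list-rotation entries) use the same simulation idea of rotating past skip-1 survivors and removing the next person. Here is a minimal Python sketch of that approach, written purely as an illustration and not as any poster's original code:

from collections import deque

def josephus_survivor(n, skip):
    """Return the 1-based safe position when every `skip`-th person is removed."""
    circle = deque(range(1, n + 1))   # people numbered 1..n
    while len(circle) > 1:
        circle.rotate(-(skip - 1))    # move skip-1 survivors to the back
        circle.popleft()              # the skip-th person is removed
    return circle[0]

print(josephus_survivor(12, 3))  # 10, matching the article's animation (1-based)
print(josephus_survivor(41, 3))  # 31, Josephus' spot in the classic telling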
• dvrslype (unregistered) public class JosephusCircle { public static int lastManStanding(int soldiers, int skip) { if (soldiers == 1) return 0; return (lastManStanding(soldiers - 1, skip) + skip) % soldiers; public static void main(String[] args) { System.out.println(lastManStanding(12, 3)); • phil (unregistered) Just for fun, here's a plot of the solutions for 2 <= num_soldiers <= 100 and 1 <= step_size <= num_soldiers Results are plotted as intensity values from 0 (black) to 100 (white). I was surprised how random they look. [image] • Iago (cs) in reply to Sue D. Nymme Sue D. Nymme: I love a good perl golf challenge! perl -E '$k=pop;$s=($s+$k)%$_ for 1..pop;say$s+1' 12 3 10 39 characters between the quotes; 54 on the whole command line. I can see two ways to save another character: $k=pop;($s+=$k)%=$_ for 1..pop;say$s+1 but I can't see a way to combine both tricks. Addendum (2009-07-29 19:13): Two more characters can be saved if the calling convention is reversed: ($s+="@ARGV")%=$_ for 1..pop;say$s+1 called with e.g. 3 40 rather than 40 3. • Edinburgh Mike (unregistered) Reasonably good Python solution. def circle(N, skip): p = 0 solders = range(0, N) while len(solders) > 1: p = (p + skip - 1) % len(solders) solders.pop(p) return solders[0] + 1 Just wanted to say I'm enjoying these little problems. Sure you can find solutions for them in minutes but your only cheating yourself if you do! • Qwertyuiopas (unregistered) PHB method: " Okay now everyone, please stand in a circle. Now take a number. Here is a stack of papers, every third a memo firing the recipient. Take the top one and pass the stack. No cheating. By fired, we mean leave NOW. I'll be standing over here off to the side by the doughnuts. " When it is down to one person: " I see it is finished. What is your number? (person: "42") Oh, by the way, you are fired too. " • mol1111 (cs) ZX Spectrum BASIC version. Screenshot for finished program (soldiers=10, skip=2): [image] 1 REM ********************** 2 REM * * 3 REM * Josephus' CIRCLE * 4 REM * * 5 REM * TheDailyWTF.com * 6 REM * * 7 REM * by mol1111 * 8 REM * * 9 REM * Greets go to * 10 REM * Alex,the organizer * 11 REM * Yorick * 12 REM * Lamac, for his ROM * 13 REM * * 14 REM ********************** 15 REM . 45 REM Bigger number makes 46 REM it slower. Set to zero 47 REM to wait for keypress 48 REM after each step. 50 LET wait=20 60 PAPER 0 70 INK 7 80 BORDER 0 90 CLS 100 INPUT "How many soldiers?",n 110 INPUT "How many to skip", skip 112 IF n>0 AND skip>=0 THEN GO TO 119 114 PRINT "There should be at least one soldier and the skip should be 0 or more." 116 GO TO 100 119 REM . 120 REM Draw soldiers 121 REM . 125 CLS 130 LET angle=2*PI/n 132 LET scale=60 140 FOR i=1 TO n 150 GO SUB 1000 160 CIRCLE x, y, 10 165 GO SUB 1100 170 NEXT i 180 REM . 190 REM Play. 200 REM . 204 REM Modulo function. 205 DEF FN m(a,b)=a-(INT (a/b)*b) 208 INK 2 210 DIM s(n) 220 LET i=1 230 LET lasts=n 240 IF lasts=0 THEN GO TO 500 250 LET stilltoskip=skip+1 260 IF stilltoskip=0 THEN GO TO 320 270 LET i=1+FN m(i,n) 280 IF s(i) THEN GO TO 270 300 LET stilltoskip=stilltoskip-1 310 GO TO 260 320 LET s(i)=1 322 LET lasts=lasts-1 324 IF lasts=0 THEN INK 4 330 GO SUB 1000 335 PAUSE wait 340 CIRCLE x,y,10 345 GO SUB 1100 350 GO TO 240 500 REM The End. 980 INK 7: REM reset to white 990 STOP 1000 REM . 1001 REM Compute middle of the CIRCLE . 1002 REM . 1010 LET x=128+scale*COS (angle*i) 1020 LET y=96+scale*SIN (angle*i) 1030 RETURN 1100 REM . 
1101 REM Prints soldier's number 1102 REM It's not exact because 1103 REM print works just with 1104 REM rows and columns, not 1105 REM pixels. 1106 REM . 1110 LET column=x*(33/256)-1 1120 LET row=22-y*(25/192) 1130 PRINT AT row,column;i 1140 RETURN Addendum (2009-07-29 19:36): If you want to start counting including the first soldier (as in article's picture) change the line 220 to LET i=0 . • ponky (unregistered) First input is the number of soldiers. Second input is the number to skip. Output is the (zero based) index of the survivor. • Bradley Dean (unregistered) C# / generics: static int Josephus(int numSoldiers, int soldiersToSkip) // create soldiers List<int> Soldiers = new List<int>(); for (int i = 0; i < numSoldiers; i++) // kill soldiers int iSoldier = 0; while (Soldiers.Count() > 1) // skip ahead by the requested number of soldiers iSoldier = iSoldier + soldiersToSkip; // if we've gone over, then wrap back around to the top while (iSoldier > Soldiers.Count()) iSoldier -= Soldiers.Count(); // kill the choosen soldier Soldiers.RemoveAt(iSoldier - 1); // return answer return Soldiers[0] - 1; • Utunga (unregistered) in reply to IV I am taking the Schroedinger approach - instead of using an axe, have everyone climb into a box with a bottle of poison. At that point, everyone is both alive and dead. Thus, the suicide is successful (each person is dead), but our hero has survived (he is alive). No, no, no. For Scrödinger's approach to work you need quantum. Doesn't work without one, everyone knows that. • Goglu (unregistered) in reply to Code Dependent If it were, many wars would have been avoided... • bwross (unregistered) In dc: [ Lks. Lns. ] SR [ lRx 0 q ] SX Sk Sn 1 ln =X ln 1- lk lJx lk+ ln% ] SJ • Veltyen (unregistered) Shellscript. With graphical output. :) USAGE = <scriptname> <number in circle> Error handling non-existant. echo "Starting Number = " $NUM for (( i = $NUM ; i != 0; i-- )) do ARRAY="a $ARRAY" done echo $ARRAY #Killing time INCREMENTOR=0 ALLDEAD=NUP while [ $ALLDEAD != YUP ] do for ELEMENT in $ARRAY do if [ $ELEMENT = a ] then INCREMENTOR=expr $INCREMENTOR + 1 #echo "Incrementor is $INCREMENTOR" fi if [ $INCREMENTOR -eq 3 ] then INCREMENTOR=0 ARRAY2="$ARRAY2 d " else ARRAY2="$ARRAY2 $ELEMENT" fi #echo $ARRAY2 done ARRAY=$ARRAY2 echo $ARRAY STILLALIVE=0 for ELEMENT in $ARRAY do if [ $ELEMENT = a ] then STILLALIVE=expr $STILLALIVE + 1 fi if [ $STILLALIVE -eq 1 ] then ALLDEAD=YUP fi echo "still alive = $STILLALIVE" LUCKYNUM=1 for ELEMENT in $ARRAY do if [ $ELEMENT = d ] then LUCKYNUM=expr $LUCKYNUM + 1 else echo "Survivor is number $LUCKYNUM" fi • Strange Quark (unregistered) int josephus(int size, int skip) { if(size <= 1) return 1; // You have no reason to worry if its just you // Allocate array of size @size bool *point = new bool[size]; int axed = 0; // People axed int seq = 0; // Current skip sequence int pos = 0; // Current position in circle // Loop while there is more than 1 person alive while(axed < size - 1) // if its reached the sequence and that person is still alive... 
if(seq == (skip - 1) && point[pos]) // Axe the person point[pos] = false; seq = 0; seq++; // Else increase the sequence pos++; // Move to next position if(pos == size) pos = 0; // Reset to beginning if reached the end // return the single living position for(int i = 0; i < size; i++) return i; return -1; • John Stracke (unregistered) j delta n = step [1..n] 1 where step [x] _ = x step (x:xs) i = if i == delta then step xs 1 else step (xs++[x]) (i+1) • Chris (unregistered) in reply to steenbergh Okay, that does it-- Almost killed me. Too funny. • John Stracke (unregistered) in reply to Addison Alex Mont: Thus our formula is (code in Python): def J(n,k): return 0 return (J(n-1,k)+k)%n This is sexy. Sexy, but wrong. J(12,3) returns 0; look at the animation and you'll see that it should be either 9 or 10 (depending on whether you're counting from 0 or 1). • Paddles (cs) Here it is, in awk. A more awk-ish implementation would replace the troops string with an associative array, deleting (how appropriate!) elements until there was only one left - but that would come at the cost of making the snuff movie a bit harder. Sample output: C:\Temp>gawk -f josephus.awk -vVERBOSE=1 And the code... # josephus.awk # set VERBOSE to 1 for a very-low-budget snuff ASCII-art snuff movie. NF == 0 {exit; } NF >= 2 { interval = int($2); } NF == 1 || interval <0 { interval = 3;} int($1)>0 { troopsize =$1; troops = ""; for (i=1;i<=troopsize;i++) troops = troops "^"; lastmember = 0; for (i=1;i<troopsize;i++) { for (j=1;j<=interval;j++) { if (nextmember==0) nextmember=index(troops,"^") else nextmember += lastmember; lastmember = nextmember; troops = substr(troops,1,lastmember-1) "v" substr(troops,lastmember+1); if (VERBOSE>0) { printf("%s\r",troops); # Prints a pretty progressive display of alive and dead members marked by ^ and v. for (k=1;k<=300000;k++) {foo=sin(k);} # In lieu of a sleep command. if (VERBOSE >0) print ""; print index(troops,"^"); • horuskol (unregistered) My PHP offering: function allbutone($num, $skip) { $circle = array(); for ($i = 0; $i < $num; $i++) { $circle[$i] = $i+1; $step = $skip - 1; $idx = $step; while (sizeof($circle) > 1) { do { $idx += $step; } while ($idx < sizeof($circle) && sizeof($circle) > 1); while ($idx >= sizeof($circle)) { $idx -= sizeof($circle); return $circle[0]; $tests = array( array(12, 3), array(40, 3), array(41, 3), array(5, 1), foreach ($tests as $t) { echo $t[0] . "\t" . $t[1] . "\t" . allbutone($t[0], $t[1]) . "\n"; • Strilanc (unregistered) I feel like pointing out that O(n) is a horrible running time in this case. It's pseudo-linear (linear in the numeric value of the input but exponential in the size of the input). We can do much better than exponential in N's representation. Here is an algorithm which runs in O(lg(N)*S) time, assuming arithmetic operations are constant time. function f(int n, int s) as int: if n < 1 or s < 1 then error if s == 1 then return n - 1 #special case if n == 1 then return 0 #end condition #recurse on sub-group obtained after a cycle completed var t = f(n - ceiling(n / s), s) #map sub-result to current group t += -n mod s #assumes mod result is in [0, s) t += floor(t / (s-1)) return t mod n It achieves the massive speed up by performing an entire trip (and a bit more) around the circle in each recursion. I haven't figured out any way to make it non-exponential in S's representation • Anonymous Coward (unregistered) in reply to Code Dependent Code Dependent: Shouldn't God be static? He's a singleton instance. 
That way, you can still pass him into any CargoCultFollowing(IGod g) function using the IGod interface. • Anonymous Coward (unregistered) in reply to John Stracke John Stracke: Alex Mont: Thus our formula is (code in Python): def J(n,k): return 0 return (J(n-1,k)+k)%n This is sexy. Sexy, but wrong. J(12,3) returns 0; look at the animation and you'll see that it should be either 9 or 10 (depending on whether you're counting from 0 or 1). What are you smoking? >>> def j(n,k): ... if(n==1): return 0 ... else: return (j(n-1,k)+k)%n >>> j(12,3) • JayC (unregistered) Yet another simulator version, but maybe it's a little different from the others. public static IEnumerable<int> JosepheusInverseOrder(int requiredpeeps) Queue<int> people = null; if (requiredpeeps <= 0) people = new Queue<int>(new int[] {}); people = new Queue<int>(new int[] { 0 }); int numpeeps = people.Count() ; //work it backwards while (requiredpeeps > numpeeps) int temp; // use Queue as circular list temp = people.Dequeue(); temp = people.Dequeue(); return people.ToArray().Reverse(); Which, instead of killing the guys, pushes the guys to die in the proper order from Josepheus back and up till the first guy dies. Then all we need to do to get the index of Josepheus (Mr. System.Console.WriteLine("Josepheus needs to be at place {0}.", Array.IndexOf(people.ToArray(), 0)+1); • xeno (unregistered) Sandbox grid solution. This isn't the most efficient algorithm (a computer doesn't need multiple rows), you may even be able to express this problem as a mathematical equation. However, a person can execute this algorith by drawing a grid in the sand, as Josephus might have done, using the rows to avoid losing your place (which would fatally alter the result). The multiple rows help you keep your place. As you work your way along each row, "killing soldiers" you strike out the rest of the column (downwards, from the current row). When you run of the end of the row, go down to the next. I've allocated enough rows in the function, but if you're working in the sand you can just add more if you run out. That's just as well, as division was considered complex maths in the 1st century. $n = $argv[1]; $skip = $argv[2]; echo "The safe spot is #" . josephs_circle($n, $skip) . "\n"; function josephs_circle($n = 40, $skip = 3) $num_rows = ceil($n / $skip) + 1; $row = array_pad(array(), $n, ''); $grid = array_pad(array(), $num_rows, $row); $x = $skip -1; // count zero $y = 0; $kills = 0; while (true) while ($grid[$y][$x] and $x < $n) // column is "dead", still on this row if ($x >= $n) // have run off the end of the row (above) $x = 0; $grid[$y] = $grid[$y -1]; // strike out downwards $kills++; // now we're going to kill somebody, unless ... if ($kills == $n) // ... we're the last man standing return $x +1; // convert from count-zero to count-one and return $grid[$y][$x] = '*'; // hit the soldier with the axe $x += $skip; if ($x >= $n) // have skipped off the end of the row $x = $x % $n; $grid[$y] = $grid[$y -1]; // strike out downwards I have no idea why this is coming out double-spaced. Anyway, Josephus was the third man, like Orson Welles. • Sterge (unregistered) import java.util.HashMap; public class Josephus { public static void main(String[] args) { // The example of 12 soldiers in a circle is won by spot 10 (index 9). 
HashMap<Integer, Boolean> livingSoldiers = new HashMap <Integer, Boolean>(); int numSoldiers = 0; int numToSkip = 0; int skipped = 0; if (args.length != 2) System.out.println("Usage: java Josephus [soldiers] [to skip]"); numSoldiers = Integer.parseInt(args[0]); numToSkip = Integer.parseInt(args[1]); catch (NumberFormatException nfe) System.out.println("Usage: java Josephus [soldiers] [to skip]"); int[] soldiers = new int[numSoldiers]; for (int i = 0; i < numSoldiers; i++) soldiers[i] = 1; // Alive. livingSoldiers.put(i, true); while (livingSoldiers.size() > 1) for (int i = 0; i < numSoldiers; i++) if (soldiers[i] != 0 && skipped == numToSkip) soldiers[i] = 0; // Dead. skipped = 0; else if (soldiers[i] == 1) System.out.println("FTW choose index " + livingSoldiers.keySet().iterator().next() + " in a ZERO based array!"); • Ransom (unregistered) Not counting my linked list implementation - not as fast as the functional ones - but maybe a little more comprehensible def josephl(num_peeps, num_skip) ll = LinkList.new num_peeps.times {|i| ll.append(i)} iterator = ll.head (num_peeps - 1).times do (num_skip - 1).times do iterator = iterator.next delete = iterator iterator = iterator.next • Christian (cs) Here's an attempt with VBScript that simulates a linked list with an array. Function JosephusSafeSpot(intNumberOfSoldiers, intNumberToSkip) Dim arrPopulation(), intIndex, intPrevious, intSubIndex ReDim arrPopulation(intNumberOfSoldiers) For intIndex = 1 To intNumberOfSoldiers arrPopulation(intIndex) = (intIndex mod intNumberOfSoldiers) + 1 intIndex = 1 Do Until intIndex = arrPopulation(intIndex) For intSubIndex = 2 To intNumberToSkip intPrevious = intIndex intIndex = arrPopulation(intIndex) arrPopulation(intPrevious) = arrPopulation(intIndex) intIndex = arrPopulation(intPrevious) JosephusSafeSpot = intIndex End Function At least it works. • Stalker (cs) in reply to Dr. Batch Dr. Batch: Go ahead and try it with any of the other looping ones. I'll wait. My very bad F# looping version did that in 42 ms, (25 iterations) if I remove the prints during the loop. Just under a second with them. Not a very long wait either way. Raising the skip makes it much worse though, then the memory reallocations start to hurt. • SurturZ (unregistered) Visual Basic 2008, using generic list.... Function joe(ByVal soldiercount As Integer, ByVal skipcount As Integer) As Integer Dim i As Integer Dim lst As New List(Of Integer) For i = 1 To soldiercount Next i Do Until lst.Count = 1 'move top two to end i = lst(0) : lst.RemoveAt(0) : lst.Add(i) i = lst(0) : lst.RemoveAt(0) : lst.Add(i) 'kill third guy Return lst(0) 'last man standing's original position End Function • SurturZ (unregistered) in reply to SurturZ Oops, got it wrong, here is correct version: Function joe(ByVal soldiercount As Integer, ByVal skipcount As Integer) As Integer Dim i As Integer Dim lst As New List(Of Integer) For i = 1 To soldiercount Next i Do Until lst.Count = 1 'move top guys to end For j As Integer = 0 To skipcount - 2 i = lst(0) : lst.RemoveAt(0) : lst.Add(i) Next j 'kill appropriate guy Return lst(0) End Function • Firestryke31 (unregistered) Perhaps to calculate it quickly he used assembly? 
;; Josephus' Circle - Intel 80386-compatible assembly ;; Parameters: eax = number of soldiers, ebx = kill stride ;; returns: eax = safe spot, ecx = original number of soldiers ;; Stuff to make it work with C global _joseC; global joseC; ;; I didn't really look into doint this more efficiently ;; but it's not the focus, so it doesn't matter push ebp mov ebp, esp push ebx push ecx push edx mov eax, [ebp + 8 ] mov ebx, [ebp + 12] call joseASM pop edx pop ecx pop ebx pop ebp ;; Here's the meat of the program: ;; Save the original number of soldiers for later on mov ecx, eax ;; we need to check if eax was one, and we need eax - 1 later ;; so subtract the one now dec eax ;; and check for zero or eax, eax ;; if it is, we're done. jz return ;; it wasn't, so let's save the original number of soldiers push ecx ;; and go through it all again with n-1 soldiers call joseASM ;; Now we need to add the kill stride add eax, ebx ;; get the original number of soldiers back pop ecx ;; zero out edx for the divide xor edx, edx ;; perform the modulus (div stores % in edx and / in eax) div eax, ecx ;; put the result in eax to make recursion easier mov eax, edx ;; leave the function • robsonde (unregistered) so it has been a few years since I did any code on the C64. but an answer in 6502 assembler is: *=$0800 jmp START NUM_PEOPLE .byte 41 NUM_JUMP .byte 3 NUM_ALIVE .byte 00 TEMP .byte 00 START LDA NUM_PEOPLE STA NUM_ALIVE LDA #01 LDX #00 CLEAR STA $0700,x INX CPX NUM_PEOPLE BNE CLEAR LDA #00 LDY #00 TOP LDX #$ff KILL_LOOP INX CPX NUM_PEOPLE BEQ TOP LDA $0700,x CMP #00 BEQ KILL_LOOP INC TEMP LDY TEMP CPY NUM_JUMP BNE KILL_LOOP LDA #00 STA $0700,x STA TEMP DEC NUM_ALIVE LDA NUM_ALIVE CMP #01 BNE KILL_LOOP BRK OBJECT CODE: □ = $0800 0800 JMP START 4C 07 08 0803 NUM_PE 0803 .BYTE $29 29 0804 NUM_JU 0804 .BYTE $03 03 0805 NUM_AL 0805 .BYTE $00 00 0806 TEMP 0806 .BYTE $00 00 0807 START 0807 LDA NUM_PE AD 03 08 080A STA NUM_AL 8D 05 08 080D LDA #$01 A9 01 080F LDX #$00 A2 00 0811 CLEAR 0811 STA $0700,X 9D 00 07 0814 INX E8 0815 CPX NUM_PE EC 03 08 0818 BNE CLEAR D0 F7 081A LDA #$00 A9 00 081C LDY #$00 A0 00 081E TOP 081E LDX #$FF A2 FF 0820 KILL_L 0820 INX E8 0821 CPX NUM_PE EC 03 08 0824 BEQ TOP F0 F8 0826 LDA $0700,X BD 00 07 0829 CMP #$00 C9 00 082B BEQ KILL_L F0 F3 082D INC TEMP EE 06 08 0830 LDY TEMP AC 06 08 0833 CPY NUM_JU CC 04 08 0836 BNE KILL_L D0 E8 0838 LDA #$00 A9 00 083A STA $0700,X 9D 00 07 083D STA TEMP 8D 06 08 0840 DEC NUM_AL CE 05 08 0843 LDA NUM_AL AD 05 08 0846 CMP #$01 C9 01 0848 BNE KILL_L D0 D6 084A BRK 00 HEX: 4C 07 08 29 03 00 00 AD 03 08 8D 05 08 A9 01 A2 00 9D 00 07 E8 EC 03 08 D0 F7 A9 00 A0 00 A2 FF E8 EC 03 08 F0 F8 BD 00 07 C9 00 F0 F3 EE 06 08 AC 06 08 CC 04 08 D0 E8 A9 00 9D 00 07 8D 06 08 CE 05 08 AD 05 08 C9 01 D0 D6 00 I have not written clean input / output code. but the basic idea is there and the code will run and give correct answer. • Stalker (cs) Didn't feel right to pack all soldiers into one process, so here they each get their own. 
#include <mpi.h> #include <stdio.h> #define SKIP 3 typedef struct _Axe int lastToHold; int counter; int goAway; } Axe; int main( int argc, char** argv ) int soldierId, numSoldiers, stillAlive, neighborR, neighborL; Axe axe; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &soldierId ); neighborR = soldierId - 1; MPI_Comm_size( MPI_COMM_WORLD, &numSoldiers ); neighborL = (soldierId + 1) % numSoldiers; stillAlive = 1; if( soldierId == 0 ) neighborR = numSoldiers - 1; axe.counter = SKIP - 1; axe.lastToHold = 0; axe.goAway = 0; printf( "Axe starting to go round\n" ); printf( "[%d] Sending to %d\n", soldierId, neighborL ); MPI_Send( &axe, 3, MPI_INT, neighborL, 0, MPI_COMM_WORLD ); while( 1 ) printf( "[%d] Waiting to receive from %d\n", soldierId, neighborR ); MPI_Recv( &axe, 3, MPI_INT, neighborR, 0, MPI_COMM_WORLD, &status ); if( axe.lastToHold == soldierId ) // Only one left axe.goAway = 1; MPI_Send( &axe, 3, MPI_INT, (soldierId + 1) % numSoldiers, 0, MPI_COMM_WORLD ); printf( "[%d] I won!\n", soldierId ); if( stillAlive && --axe.counter == 0 ) // Our turn to die axe.counter = SKIP; stillAlive = 0; printf( "[%d] Oh noes! I died!\n", soldierId ); if( stillAlive ) axe.lastToHold = soldierId; printf( "[%d] Sending to %d\n", soldierId, neighborL ); MPI_Send( &axe, 3, MPI_INT, neighborL, 0, MPI_COMM_WORLD ); if( axe.goAway ) printf( "[%d] Game over\n", soldierId ); return( 0 ); Not really how MPI is supposed to be used, but it works. At least for smaller values, falls apart with larger. • ikegami (cs) Functional programming solution in Perl. use strict; use warnings; use List::Util qw( reduce ); # Silence warnings. $a=$b if 0; sub step { my ($k) = @_; return sub { ($a + $k) % $b }; sub J { my ($n, $stepper) = @_; return reduce \&$stepper, 0, 2..$n; print( J(12,step(3)), "\n"); • [email protected] (unregistered) In the tacit form of the J programming language the solution looks like this: :@(}.@|.^:({:@]))i. 20 characters so 2 >:@(}.@|.^:({:@]))i. 41 - returns 31 If you allow an answer for index 0 the solution shortens to: }.@|.^:({:@])i. 15 characters and 2 }.@|.^:({:@])i. 41 - returns 30 for more information on J -- Jsoftware.com Reading the index 1 version right to left i. creates a list of 0 to n-1 soldiers ({:@]) is a loop counter that initializes to the last number in the soldier list (n-1) ^: symbolizes the power conjunction that will apply a function a specified number of times (in this case n-1). }.@|. is the function that is applied. The LHS arg shifts the soldier list 2 spaces and then the first soldier in the newly ordered line is dropped. After this has been done n-1 times the remaining number is the soldier number in index 0. For some reason this makes me think of a Monty Python sketch with suicidal Scots (but that would be the case with 0 skips). :@ then adds one to the number to convert it to index 1 Cheers, bob • bynio (unregistered) in reply to Scope Code Dependent: Shouldn't God be static? But then we wouldn't be able to mock God in a unit test! Well, you could use Supersede Instance Variable pattern for this legacy system. 
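The closed-form recurrence that keeps resurfacing in this thread (dvrslype's Java, the disputed Python snippet, and others) can be cross-checked in a few lines. This is only an illustrative verification of the numbers quoted above, not anyone's submission:

def josephus_index(n, k):
    """0-based index of the survivor when every k-th person is eliminated."""
    pos = 0                       # survivor's index in a circle of 1 person
    for size in range(2, n + 1):  # grow the circle back up to n people
        pos = (pos + k) % size    # shift the survivor's index as one person is added
    return pos

print(josephus_index(12, 3))  # 9  -> position 10 counting from 1
print(josephus_index(41, 3))  # 30 -> position 31, agreeing with the J one-liner above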
• Chris (unregistered) In Javascript:
function SafeSpot(nrOfSoldiers, soldierSkip)
{
    var a = new Array(nrOfSoldiers);
    for (i = 0; i < nrOfSoldiers; i++) a[i] = i + 1;
    while (a.length > 1)
    {
        if (a.length > soldierSkip)
            a = a.slice(soldierSkip).concat(a.slice(0, soldierSkip - 1));
        else if (a.length == soldierSkip)
            a = a.slice(0, soldierSkip - 1);
        else if (a.length > 1 && (soldierSkip % a.length == 0))
            a = a.slice(0, a.length - 1);
        else
            a = a.slice(soldierSkip % a.length).concat(a.slice(0, (soldierSkip % a.length) - 1));
    }
    return a[0];
}
• xeno (unregistered) in reply to Wongo
Wongo: Mmh, these solutions overlook one important point: "Josephus somehow managed to figure out — very quickly — where to stand with forty others". Unless Ole Joe had a phenomenal stack-based brain, he simply couldn't recurse through the 860 iterations required for a 41-soldier circle (starting at 2). So there must be some kind of shortcut (e.g. to know whether a number is divisible by 5, you don't have to actually divide it, just check whether the final digit is 5 or 0). Indeed, here is the list of safe spots according to the number of soldiers (assuming a skip of 3): (deleted for the sake of brevity) The series on the right looks conspicuously susceptible to a shortcut. But I can't find it for the life of me... Anyone?
I think you're assuming some sort of mathematical trick that hadn't been invented in 1 AD. This is the time before Arabic numerals or algebra (and certainly before recursion), and division was still an arcane art form. That's why I went for a "grid in sand" algorithm. There are some cleverer algorithms here, but I think it more likely that he might have had a minute or so to scratch in the sand while the other soldiers were talking to God. If I was in his sandals I'd much rather do it like that than solve equations or execute recursive functions in my head. Of course, you shouldn't mind me if you just want to find that equation. I just think it's unlikely Josephus might have done it that way. Incidentally, whatever method you're using is periodically wrong from 6 soldiers and up. Captcha: nulla. A count-zero error perhaps?
• BTM (unregistered) in reply to Code Dependent
I'd say it even should be Abstract ;-)
• BTM (unregistered) in reply to Code Dependent
Shouldn't God be static? I'd say it even should be Abstract ;-)
This model would solve the problem and would seem to be within the technology of the times. Cheers, bob • [email protected] (unregistered) in reply to xeno I think your last prime solution fails for 3 soldiers when killing of every third soldier :) In this case I think #41 would be the 23rd to die. I'd still want to be #31 Cheers, bob • xeno (unregistered) in reply to [email protected] The original WTF graphic that described this problem so well could be the clue to how Josephus would solve this problem. Draw a circle and place 41 stones around it just touching the circle. Starting at one move every third stone that is outside the circle across the line to the inside of the circle so that the order of the stones is maintained. Continue til there is only one stone left outside the circle. That is the last soldier position and that is where you should stand. This model would solve the problem and would seem to be within the technology of the times. Snap :-) Or he might have chosen #41 out of mathematical superstition (a belief that prime numbers are lucky), or just hung back and joined the circle last. • KnoNoth (unregistered) //Good luck, trying to understand this function code( I used almost every mistake here, what I have seen ) int whos_alive(int varaibles, int varaibels){ char variabels[varaibles]; int variables [3]={0,1,0}; for(variables[0]=0;variables[0]<varaibles;variables[0]++)variabels[variables[0]]=1; for(variables[0]=0;1;variables[0]++)if(variables[0]>=varaibles) variables[0]=-1; else if(variabels [variables[0]]){if(variables[2]==varaibles-1)return variables[0]+1;if(variables[1]==varaibels){variabels[variables[0]]=0;variables[1]=1;variables[2]++;}else variables[1]++;} } int main(int argc, const char **argv){ printf("safe spot is %d( if you start counting positions from 1 not 0 )",whos_alive(atoi(argv[1]),atoi(argv[2]))); }
{"url":"https://thedailywtf.com/articles/comments/Programming-Praxis-Josephus-Circle/5","timestamp":"2024-11-06T14:15:27Z","content_type":"application/xhtml+xml","content_length":"102138","record_id":"<urn:uuid:a5388ded-fc27-4c85-aaee-dfc8d8f26986>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00528.warc.gz"}
Why Is 3 1/2 Multiplied By X In The Equation
3 1/2 is multiplied by x in the equation because it is the coefficient, the number being multiplied by the variable.
Step-by-step explanation: I don't know the equation, but I will assume that 3 1/2 is the slope. For example, in the equation y = 10x - 9, the slope is 10.
The angle x has a measurement of 40°.
Step-by-step explanation:
Step 1: Define complementary angles. Two angles are complementary when their sum is equal to 90°. For example, 30° and 60° are complementary because they add up to 90°.
Step 2: Figure out the complementary angle. Since ∠ABD and ∠DBC are complementary, they should add up to 90°. Since the angle value of ∠ABD is 50°, the angle value of ∠DBC should be 90° - 50° = x, so x = 40°.
Answer: The angle x has a measurement of 40°.
{"url":"https://diemso.unix.edu.vn/question/why-is-3-12-multipiled-by-x-in-the-equation-lknb","timestamp":"2024-11-15T00:24:07Z","content_type":"text/html","content_length":"66102","record_id":"<urn:uuid:f01b90db-7f03-4616-830c-7fb094435bb1>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00558.warc.gz"}
This is an addition to a previous post, introducing the reader to different ways of calculating the moment of a force and the torque of a couple. This information will be useful in aircraft dynamics models.
Calculating the moment of force by George Lungu – This tutorial presents a few ways of calculating the moment of force or torque. It… Read More... "Moment of Force and Torque Calculation"
Longitudinal Aircraft Dynamics #5 – finishing the aircraft
This section finalizes the aircraft (glider) by inserting the wing, the horizontal stabilizer and a center of gravity (CG) sprite in the layout.
Longitudinal Aircraft Dynamics #5 – putting the glider together by George Lungu – This section puts together the fuselage, main wing and stabilizer with the proper scale, shift and rotation determined by the input parameters. Scaling and… Read More... "Longitudinal Aircraft Dynamics #5 – finishing the aircraft"
Newton Generalized Treatment
Most people have heard of Newton's second law, mass, moment of inertia or the definition of acceleration, both linear and angular. The material presented here is elementary (9th grade), yet it is generally not properly understood. What happens when one applies a bunch of arbitrary forces to an arbitrarily shaped body? The resultant force vector produces a linear acceleration… Read More... "Newton Generalized Treatment"
Longitudinal Aircraft Dynamics #4 – virtual aircraft definition
This section of the tutorial explains how to create the 2D aircraft components for the animated longitudinal stability model. The first part deals with extracting the x-y coordinates for the fuselage, canopy, vertical stabilizer and rudder. The second part handles the main wing airfoil and the horizontal stabilizer airfoil. All these parts will be put together in the next section.… Read More... "Longitudinal Aircraft Dynamics #4 – virtual aircraft definition"
Longitudinal Aircraft Dynamics #3 – layout parameters and wireframe fuselage generation
This section discusses the layout of the virtual plane and provides the worksheet implementation of the plane dimensions as input parameters controlled by spin buttons and macros. In the final part a freeform is used to generate raw data for the fuselage.
Longitudinal Aircraft Dynamics #3 – defining the virtual aircraft by George Lungu – This section of the tutorial… Read More... "Longitudinal Aircraft Dynamics #3 – layout parameters and wireframe fuselage generation"
Longitudinal Aircraft Dynamics #2 – 2D polynomial interpolation of parameters cl, cd and cm
In the previous section, the main wing airfoil and the horizontal stabilizer airfoil were simulated using Xflr5. The three coefficients, lift, drag and moment were then interpolated on charts in Excel using 4th and 5th order polynomials. This section shows a few tricks about how to easily introduce those 60 equations as spreadsheet formulas in Excel ranges. It also presents a simple linear interpolation method across the Reynolds… Read More...
"Longitudinal Aircraft Dynamics #2 – 2D polynomial interpolation of parameters cl, cd and cm" Longitudinal Aircraft Dynamics #1 – using Xflr5 to model the main wing, the horizontal stabilizer and extracting the polynomial trendlines for cl, cd and cm This is a tutorial about using a free aerodynamic modeling package (Xflr5) to simulate two airfoils in 2D (the main wing and the horizontal stabilizer) for ten different Reynolds numbers, then using Excel to extract the approximate polynomial equations of those curves (cl, cd and cm) and based on them, simulate a 2D aircraft as an animated model. This section deals with… Read More... "Longitudinal Aircraft Dynamics #1 – using Xflr5 to model the main wing, the horizontal stabilizer and extracting the polynomial trendlines for cl, cd and cm" Aerodynamics Naive #3 – a brief introduction to Xflr5, a virtual wind tunnel The previous section implemented and charted the ping-pong polar diagrams in a spreadsheet and showed a reasonable similarity, for moderate angles of attack, between these diagrams and the ones modeled using Xflr5, a virtual wind tunnel. This section introduce the concept Reynolds number and it also contains a very brief introduction to Xflr5, the free virtual wind tunnel software. Aerodynamics… Read More... "Aerodynamics Naive #3 – a brief introduction to Xflr5, a virtual wind tunnel" Aerodynamics Naive #2 – spreadsheet implementation of the Ping-Pong polar diagrams This section of the tutorial implements the lift and drag formulas in a worksheet, creating and charting the polar diagrams for an ultra simplified ping-pong model of an airfoil. Comparing these diagrams with ones obtained by using a virtual wind tunnel (XFLR5) we can see a decent resemblance for moderate angles of attack (smaller than about 8 degrees in absolute value).… Read More... "Aerodynamics Naive #2 – spreadsheet implementation of the Ping-Pong polar diagrams" Aerodynamics Naive #1 – deriving the Ping-Pong airfoil polar diagrams This is the ping-pong aerodynamic analogy. The wing is a ping pong bat and the air is a bunch of evenly spaced array of ping pong balls. It is a naive model but, as we will see in a later post, the polar diagrams derived from this analogy (between -12 to +12 degrees of angle of attack) are surprisingly close shape wise to the real diagrams of a thin,… Read More... "Aerodynamics Naive #1 – deriving the Ping-Pong airfoil polar diagrams" How Do They Fly? – an intuitive look into lift generation and flight stability Have you ever wondered why the flight attendants of a half empty airliner talk people into moving to the front half of the plane? Have you ever wondered why a flying wing can fly without a tail or why the stability of some of these flying wing can be controlled only by computer? Or why a 12 pack stored in at… Read More... "How Do They Fly? – an intuitive look into lift generation and flight Flight Simulator Tutorial #7 – upgrading the joystick chart, adding a reset button and a throttle scroll bar This section displays the landscape on a 2D scatter chart and also upgrades the joystick chart by adding a dial behind the joystick image. This technique of using a stack of a back chart to display dial sprites and a front chart with transparent background to display various control devices, indicator needles and text will extensively be used in this… Read More... 
"Flight Simulator Tutorial #7 – upgrading the joystick chart, adding a reset button and a throttle scroll bar" Flight Simulator Tutorial #6 – macro review, scene derivation and integration, mapping of the 2D u-v data into a chartable 1D array, rejecting image artifacts This section finishes the macro analysis and continues with the conversion of the u-v 2D formula array into a chart-able 1D array. It also adds two columns to the chart-able array, a masking condition for each triangle and masked u-coordinate which will throw out of the visible portion of the chart any shape which has a minimum of one vertex… Read More... "Flight Simulator Tutorial #6 – macro review, scene derivation and integration, mapping of the 2D u-v data into a chartable 1D array, rejecting image artifacts" Flight Simulator Tutorial #5 – the worksheet implementation of the perspective handling formulas and VBA the macros This section explains the spreadsheet implementation of the perspective rotation and translation formulas within the Present array. It also shows the implementation of the 3D-2D conversion formulas within the Past array, then it goes on to presenting the VBA macros used (the Reset and JoyStick macros). [sociallocker][/sociallocker] A Basic Flight Simulator in Excel #5 – the worksheet implementation of the… Read More... "Flight Simulator Tutorial #5 – the worksheet implementation of the perspective handling formulas and VBA the macros"
{"url":"https://excelunusual.com/category/aerodynamics/page/2/","timestamp":"2024-11-09T15:30:28Z","content_type":"text/html","content_length":"173929","record_id":"<urn:uuid:bab8b334-d480-49ae-964a-60d90eced5fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00078.warc.gz"}
Holiday Gift Giveaway Hop I'm happy to participate in the Holiday Gift Giveaway Hop with Simply Stacie . My sponsor for the Holiday Gift Giveaway Hop is . Jasmere will give one of my readers $25 gift credit "US winner". Each day Jasmere offer discounts on the finest clothing, accessories, menswear, specialty foods, services, toys, jewelry, housewares, etc, on the web. Ends (December 5, 2010 at 11:59 pm EST) Mandatory Entry: Follow my blog on Google Friend Connect on facebook. Follow me on Twitter. Like my blog on Facebook. Subscribe to my blog Disclosure: This giveaway is presented by the sponsor in the Holiday Gift Giveaway Hop. 232 comments: GFC follower Following on twitter @bethwillis01 Like Jasmere on FB email subscriber follow google friend amyfedorchak1 AT gmail DOT com Like Jasmere on facebool amyfedorchak1 AT gmail DOT com follow you on twitter @highlight4u amyfedorchak1 AT gmail DOT com Like you on facebook Amy F amyfedorchak1 AT gmail DOT com subscribe email amyfedorchak1 AT gmail DOT com I follow via gfc! chacoyaguayo at yahoo dot com I follow you on Twitter chacoyaguayo at yahoo dot com gfc follower megankyser at gmail dot com I like jasmere on facebook megankyser at gmail dot com I follow you on twitter - mekyser megankyser at gmail dot com i'm a new follower I follow on GFC! GFC follower jmcghee2024 AT yahoo DOT com GFC follower jmcghee2024 AT yahoo DOT com I like Jasmere on FB jmcghee2024 AT yahoo DOT com Subscribe via Email jmcghee2024 AT yahoo DOT com I follow in GFC! Thanks for a chance to win this great giveaway! Best wishes, I'm an email subscriber, too! Thanks for a chance to win this great giveaway! Best wishes, i follow via GFC ardtiffany@yahoo.com i follow you on twitter @tiffany053p ardtiffany@yahoo.com i sent you a request on facebook tiffany kelly ard ardtiffany@yahoo.com i follow Jasmere on facebook and twitter ardtiffany@yahoo.com I follow your blog on GFC! I like Jasmere on Facebook! I follow you on Twitter! I like your blog on Facebook Amy Dee Eidson Clark I subscribe to your blog Like Jasmere on FB Chelsei Ryan Added you as a Friend on FB Chelsei Ryan I follow on twitter as humanecats. GFC follower follow Jasmere on facebook follow you on twitter subscribe via email follow gfc kendra22 thank u I follow via gfc. I follow on GFC lexigurl_17 (at) hotmail (dot) com I follow Jasmere on FB Alex Herring I follow you on GFC: Jen-Eighty MPH Mom mandjregan at gmail dot com Im in US Like Jarmese on FB I'm following you on GFC. Thanks! I Like Jasmere on Facebook. (Jodi Brown) Follow via GFC Like Jasmere on FB (Jessica Edgar Worley) I Like you -- on FB (Jessica Edgar Worley) Subscribed via email I Follow your blog on Google Friend Connect t_freckleton at yahoo dot com I Like Jasmere on facebook. t_freckleton at yahoo dot com @Mommy2Marlie follows on Twitter. I Like your blog on Facebook. t_freckleton at yahoo dot com I follow you via google friend connect Like Jasmere on facebook Follow u on Twitter @vickiecouturier Like your blog on Facebook I'm a GFC follower - alicia715 ohmiss14 at yahoo dot com I follow you on Twitter - @amccrenshaw ohmiss14 at yahoo dot com Congratulations, Laurie Harrison You are now following Life is a Sandcastle New follower I am on the Holiday Hop Giveaway and a new participant. I'm sponsoring myself so would appreciate a visit. I would be tickled to mail a US entrant a certificate for a free night's stay in a LaQuinta Inn of their choice (tax included) If you have holiday travel planned, this could come in handy-dandy! 
Hop over to my blog and enter up if you are interested. Thanks. geneveve2 at gmail dot com I like Jasmere on FB geneveve2 at gmail dot com Laurie Harrison I am a new follower on Twitter of yours. geneveve2 at gmail dot com I like Jasmere on Facebook (Elena Shkinder-Gugel). I am your Facebook fan (Elena Shkinder-Gugel). Email subscriber. GFC follower I like jasmere on FB I like you on facebook I like Jasmere on Facebook (Betsy Hoff) I follow you on Twitter @bukaeyes I am your friend on Facebook (Betsy Hoff) Hello, I am on GFC Twitter I am Moms Reviews and on Facebook I am Mommies Point of View. Thanks, Glenda I follow via google connect I subscribe via email I follow on Twitter @pittsy82 I "LIKE" Jasmere on FB (Nicole Pitts) following you on GFC pinkdandyshop at yahoo dot com following Jasmere on FB pinkdandyshop at yahoo dot com like you on FB pinkdandyshop at yahoo dot com following you on Twitter @pinkdandy pinkdandyshop at yahoo dot com Follower GFC Dana R dmraca at gmail dot com I follow you on twitter twitter name DayLeeDana dmraca at gmail dot com GFC follower jmatek AT wi DOT rr DOT com LIKE jasmere on FB: JUlie Matek jmatek AT wi DOT rr DOT com subscribed in google reader jmatek AT wi DOT rr DOT com I follow on GFC I follow you through GFC vibrantchick29 at gmail.com I am a follower on gfc. Thanks! jackievillano at gmail dot com I subscribe via email jackievillano at gmail dot com new GFC follower atexasranger(at)gmail(dot)com Like Jasmere on FB under name Beth March follow you on twitter-atexasranger like you on FB under name Beth March subscribe to your blog via email GFC follower-atexasranger(at)gmail(dot)com email subscriber-atexasranger follow you on Twitter-atexasranger I like Jasmere on FB Heather Saving couponinggirl at gmail dot com I ask you to be my friend on FB Heather Saving couponinggirl at gmail dot com I subscribe via email couponinggirl at gmail dot com I gfc Like Jasmere on facebook. Kristina W. Follow on twitter I follower Merry Xmas! Thanks! Janna Johnson google friend follower (Snowflake07) hotpepper71 at bell south dot Net following on twitter as SnowflakeDay email subscriber hotpepper71 at bell south dot Net I'm a GFC follower as sablelexi jlynettes @ hotmail . com I follow on twitter as @sablelexi jlynettes @ hotmail . com GFC Follower Jasmere facebook fan Email subscriber GFC follower (Lona728) Lona728 @yahoo dot com Like Jasmere on FB (Lona SausmaN Wibbels) Lona728@ yahoo dot com subscribe via email Lona728@ yahoo dot com follow you on FB (Lona Sausman Wibbels) Lona728@ yahoo dot com follow you on twitter (Lona728) Lona728@ yahoo dot com FB Follower of your blog Rhonda W-C I follow you on Twitter! @rhonjerm Like Jasmere on facebook. Rhonda W-C GFC Follower (Cathy) I subscribe to your email Like Jasmere on FB Twitter follower TraciK66 ozzykelley1(at)yahoo(dot)com FB Traci Kelley ozzykelley1(at)yahoo(dot)com google follower tbarrettno1 at gmail dot com like jasmere on fb tbarrettno1 at gmail dot com like on fb (michelle b) tbarrettno1 at gmail dot com email subscriber tbarrettno1 at gmail dot com Like Jasmere on FB (Estell Aekz) I follow your blog publicly via GFC (catalinak). i am a GFC follower! a_chilson(AT)hotmail(DOT)com i subscribe to your blog via email i follow you on twitter! 
- poae i added you on facebook - poaes I follow via gfc couponclippinmommy at yahoo dot com follow on gfc pr at sjunkie dot com follow u on twitter @SurveyJunky like Jasmere on FB Stacy Oleskiewicz GFC follower of yours- Leah As Tee Jay I follow this company on facebook: Jasmere facebook fan of yours Tee Jay follow you on twitter @HavensTwit subscribe via google reader I'm a GFC follower. txhottie_86 at yahoo dot com Suzanne Lewis follow on GFC at ginamichelleperez at gmail dot com follow jasmere on twitter @schoune follow on twitter @schoune like on FB @gina perez subscribed via email at schoune_wolf at Yahoo dot com follow on GFC -kylei chang Kyleichang at gmail dot com i follow on twitter GFC follower (Nicole H/MamaNYC) nhartmann54 at gmail dot co Jasmere FB Fan (Nicole H/MamaNYC) nhartmann54 at gmail dot com Twitter follower @nhartmann54 nhartmann54 at gmail dot com FB Fan (Nicole H/MamaNYC) nhartmann54 at gmail dot com Email subscriber nhartmann54 at gmail dot com I follow via GFC Thank You! I am a fan on FB User name~Jacqueline Taylor Griffin Thank You! I am a fan of Jasmere on FB User name~Jacqueline Taylor Griffin Thank You! I subscribe to your email Thank You! I follow you on twitter Thank You! I follow you on GFC. m_wohlk GFC follower Follow you on twitter Like Jasmere on fb Teresa Inscoe Choplin Following you via GFC Guide to Smart Shopping follow you (redfuzzycow) like jasmere on fb (jen reda) follow you on twitter (Redfuzzycow) like you on fb (jen reda) i follow you on gfc-heatheranya i follow you on twitter @1589m I follow you on GFC manthas24 I added you on FB I like Jasmere on FB I follow you on Twitter @manthas24 I get your feed in google reader Follow on twitter @HendyMartin hmhenderson AT yahoo DOT com follow your blog via google connect gfc follower. hewella1 at gmail dot com I like Jasmere on facebook. (Kelly Deaton) Kelly D. ~ dkad23(at)gmail(dot)com I follow you on twitter. (@dkad23) Kelly D. ~ dkad23(at)gmail(dot)com GFC follower: epblack at zoominternet dot net Like Jasmere on FB: Connie Cawthern Black epblack at zoominternet dot net I follow you (publicly) on GFC.. I "Like" Jasmere on Facebook! Larry H. on Facebook I'm your Facebook Friend Larry H. on Facebook I Like Jasmere on facebook. public GF follower! ejrichter60 at gmail dot com Following Jasmere on Facebook! eileen r. ejrichter60 at gmail dot com Im a follower via GFC I follow you on twitter! ejrichter60 at gmail dot com I LIKE you on FAcebook! eileen r. ejrichter60 at gmail dot com I follow GFC itzacin@aol dot com I follow on twitter as 1blessedmommi itzacin@aol dot com Follow your blog showmemama at ymail dot com Like Jasmere on facebook. showmemama at ymail dot com Follow me on Twitter. showmemama at ymail dot com
{"url":"https://lifeisasandcastle.blogspot.com/2010/12/holiday-gift-giveaway-hop.html?showComment=1291414840204","timestamp":"2024-11-05T09:30:06Z","content_type":"application/xhtml+xml","content_length":"341285","record_id":"<urn:uuid:8abecb3c-4648-42ff-bd3e-deaad825f9f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00469.warc.gz"}
Nikola Kovachki

Operator Learning and Conditional Measure Transport

In the first part of this talk, we consider the problem of learning general non-linear operators acting between two infinite-dimensional spaces of functions defined on bounded domains. We generalize the notion of a neural network from a map acting between Euclidean spaces to one acting between function spaces. The resulting architecture has the property that it can act on any arbitrary discretization of a function and produce a consistent answer. We show that the popular transformer architecture is a special case of our model once discretized. We prove universal approximation theorems for our network when the input and output spaces are Lebesgue, Sobolev, or continuously differentiable functions of any order. Furthermore, we show dimension-independent rates of approximation, breaking the curse of dimensionality, for operators arising from Darcy flow, the Euler equations, and the Navier-Stokes equations. Numerical experiments show state-of-the-art results for various PDE problems and successful applications in downstream tasks such as multi-scale modeling and solving inverse problems.

In the second part of this talk, we consider the problem of sampling a conditional distribution given data from the joint using a transport map. We show that a block-triangular structure in the transport map is sufficient for this task, even in infinite dimensions, and prove posterior-consistency results for measures with unbounded support. We obtain explicit rates of approximation when the transport map is parameterized by a ReLU network. Numerically, we successfully apply our approach to uncertainty quantification for supervised learning and Bayesian inverse problems arising in imaging and PDEs.

Our speaker
Nikola is a final-year PhD student in applied mathematics at Caltech working on machine learning methods for the physical sciences in theory and practice.

To become a member of the Rough Path Interest Group, register here for free.
{"url":"https://datasig.ac.uk/2021-11-11-nikola-kovachki","timestamp":"2024-11-07T06:08:08Z","content_type":"text/html","content_length":"93867","record_id":"<urn:uuid:96512d4e-7869-4658-a813-17f8594c98e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00055.warc.gz"}
Kelvin to Celsius conversion

Kelvin and Celsius are two temperature scales. The size of the "degree" on each scale is the same, but the Kelvin scale starts at absolute zero (the lowest temperature theoretically attainable), while the Celsius scale sets its zero point just below the triple point of water (0.01 °C, or about 32.02 °F — the point at which water can exist in solid, liquid, and gaseous states at once).

Conversion Formula
The formula to convert Kelvin into Celsius is C = K - 273.15. All that is needed to convert Kelvin to Celsius is one simple step: take your Kelvin temperature and subtract 273.15. Your answer will be in degrees Celsius. The kelvin does not use the word "degree" or the ° symbol; a Celsius temperature, depending on the context, is generally reported as °C (or simply C).

Kelvin to Celsius
How many degrees Celsius is 500 K?
C = 500 - 273.15, so 500 K = 226.85 °C

Let's convert normal body temperature from Kelvin to Celsius. Human body temperature is 310.15 K. Put the value into the equation to solve for degrees Celsius:
C = K - 273.15 = 310.15 - 273.15, so human body temperature = 37 °C

Reverse Conversion: Celsius to Kelvin
Similarly, it's easy to convert a Celsius temperature to the Kelvin scale: simply rearrange the formula to K = C + 273.15. For example, let's convert the boiling point of water to Kelvin. The boiling point of water is 100 °C. Plug the value into the formula:
K = 100 + 273.15 = 373.15 K

About Absolute Zero
While typical temperatures experienced in daily life are often expressed in Celsius or Fahrenheit, many phenomena are described more easily using an absolute temperature scale. The Kelvin scale starts at absolute zero (the coldest temperature attainable) and is based on energy measurement (the movement of molecules). The kelvin is the international standard for scientific temperature measurement and is used in many fields, including astronomy and physics. While it's perfectly normal to get negative values for Celsius temperatures, the Kelvin scale only goes down to zero. Zero K is also known as absolute zero. It is the point at which no further heat can be removed from a system because there is no molecular movement, so no lower temperature is possible.
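A minimal, self-contained sketch of both conversions described above (the function names are mine, not from any particular library):

```python
def kelvin_to_celsius(k):
    """C = K - 273.15"""
    return k - 273.15

def celsius_to_kelvin(c):
    """K = C + 273.15"""
    return c + 273.15

print(round(kelvin_to_celsius(500), 2))     # 226.85
print(round(kelvin_to_celsius(310.15), 2))  # 37.0   (normal body temperature)
print(round(celsius_to_kelvin(100), 2))     # 373.15 (boiling point of water)
```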
{"url":"https://www.pustudy.com/calculators/convert/temperature/kelvin-to-celsius.html","timestamp":"2024-11-14T00:07:06Z","content_type":"text/html","content_length":"13929","record_id":"<urn:uuid:89cf9e41-30f6-4969-932d-8b0d92c16330>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00081.warc.gz"}
Hahn-Banach Theorems for Convex Functions Marc Lassonde Total Page:16 File Type:pdf, Size:1020Kb [email protected] We start from a basic version of the Hahn-Banach theorem, of which we provide a proof based on Tychonoff’s theorem on the product of compact intervals. Then, in the first section, we establish conditions ensuring the existence of affine functions lying between a convex function and a concave one in the setting of vector spaces — this directly leads to the theorems of Hahn-Banach, Mazur-Orlicz and Fenchel. In the second section, we car- acterize those topological vector spaces for which certain convex functions are continuous — this is connected to the uniform boundedness theorem of Banach-Steinhaus and to the closed graph and open mapping theorems of Banach. Combining both types of results readily yields topological versions of the theorems of the first section. In all the text, X stands for a real vector space. For A ⊂ X, we denote by cor(A) the core (algebraic interior) of A : a ∈ cor(A) if and only if A − a is absorbing. Given a function f : X → IR ∪ {+∞}, we call domain of f the set dom f = { x ∈ X | f(x) < +∞ } and we declare f convex if f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) for all x, y in dom f and 0 ≤ t ≤ 1. A real-valued function p : X → IR is said to be sublinear if it is convex and positively homogeneous. We use standard abbreviations and notations: TVS for topological vec- tor space, LCTVS for locally convex topological vector space, lcs for lower semicontinuous, X∗ for the algebraic dual of X, X′ for its topological dual, σ(X∗, X) for the topology of pointwise convergence on X∗, etc. Prologue The following basic theorem is the starting point, and crucial part, of the theory. It retains the essence of both the Hahn-Banach theorem — 2 MARC LASSONDE non-emptiness assertion — and the Banach-Alaoglu theorem — σ(X∗, X)- compactness assertion. Its proof combines the key arguments of the proofs of these theorems. Basic Theorem For any sublinear function p : X → IR, the set { x∗ ∈ X∗ | x∗ ≤ p } is non-empty and σ(X∗, X)-compact. Proof. In the space E = IRX supplied with the product topology, the set Xν of all sublinear forms is closed and the set ν ν K := { q ∈ X | q ≤ p } = Y [−p(−x), p(x)] ∩ X x∈X is compact by Tychonoff’s theorem. For x ∈ X, put F (x) := { q ∈ K | q(x)+ q(−x) = 0 }. ∗ ∗ ∗ Clearly Tx∈X F (x)= { x ∈ X | x ≤ p }. Since each F (x) is closed in the compact set K, to obtain the desired result it only remains to show that for any finite family {x0,x1,...,xn} in X, the intersection of the F (xi)’s is not empty. We first observe that F (x0) is not empty; that is, we observe ν that there exists q0 ∈ X verifying q0 ≤ p and q0(x0)+ q0(−x0) = 0. Indeed, it suffices to take for q0 the sublinear hull of p and of the function equal to −p(x0) at −x0 and to +∞ elsewhere, namely: q0(x) := inf (p(x + λx0) − λp(x0)) . λ≥0 We then apply the argument again, with q0 and x1 in lieu of p and x0, to obtain q1 in F (x0) ∩ F (x1), and so forth until obtaining qn in F (x0) ∩ F (x1) ∩ ... ∩ F (xn). The non-emptiness assertion is Theorem 1 in Banach [2]. Its original proof, as well as the proof given in most textbooks, relies on the axiom of choice. The fact that it can also be derived from Tychonoff’s theorem on the product of compact intervals was observed for the first time by Lo´s and Ryll-Nardzewski [9]. 
It is now well-known, after the works of Luxemburg [10] and Pincus [12], that Banach’s theorem (and hence the Hahn-Banach theorem) is logically weaker than Tychonoff’s theorem on the product of compact intervals (and hence the Banach-Alaoglu theorem and the above theorem), which itself is weaker than the axiom of choice. From this point of view, the above statement is therefore optimal with respect to its proof. HAHNBANACH THEOREMS FOR CONVEX FUNCTIONS 3 1. Separation of convex functions We first extend the basic theorem to the case of convex functions. For 1 f : X → IR ∪ {+∞}, we denote by pf : x → inft>0 t f(tx) the homogeneous hull of f, and we set S(f) := { x∗ ∈ X∗ | x∗ ≤ f } = { x∗ ∈ X∗ | 0 ≤ inf(f − x∗) }. Because of the following elementary facts, the extension is a straightforward consequence of the basic theorem. Lemma 1 If f : X → IR ∪ {+∞} is convex such that f(0) ≥ 0 and 0 ∈ cor(dom f), then the function pf is real-valued and sublinear. Lemma 2 For any f : X → IR ∪ {+∞}, S(f)= S(pf ). Theorem 1 (Minoration of convex functions) Let f : X → IR∪ {+∞} be convex such that f(0) ≥ 0. If 0 ∈ cor(dom f), then S(f) is non-empty and σ(X∗, X)-compact. Proof. Apply the basic theorem to pf . Theorem 1’ (ε-subdifferential) Let f : X → IR ∪ {+∞} be convex. If x0 ∈ cor(dom f), then for every ε ≥ 0 the set ∗ ∗ ∗ { x ∈ X | x (x − x0) ≤ f(x) − f(x0)+ ε for all x ∈ X } is non-empty and σ(X∗, X)-compact. Proof. Apply Theorem 1 to the function f˜(x) : = f(x + x0) − f(x0)+ ε. Corollary If in Theorem 1 we suppose further that X is a TVS and that f is continuous at some point of its domain, then S(f) is non-empty, equicon- tinuous and σ(X′, X) -compact. Proof. The set S(f) is clearly equicontinuous, hence contained in X′, and it is also clearly σ(X′, X)-closed. More generally, we now search for separating a convex function from a concave one by an affine form. For f, g : X → IR ∪ {+∞}, we denote by f +e g : x → infy∈X (f(y)+ g(x − y)) the epi-sum (or inf-convolution) of f and g, and we set S(f, g) := { x∗ ∈ X∗ | − g ≤ x∗ + r ≤ f for some r ∈ IR } = { x∗ ∈ X∗ | 0 ≤ inf(f − x∗)+inf(g + x∗) }. 4 MARC LASSONDE As above, two elementary facts similarly reduce the argument to a simple invocation of the previous theorem. Lemma 3 If f, g : X → IR ∪ {+∞} are convex such that (f +e g)(0) is finite and 0 ∈ cor(dom f +dom g), then f +e g takes its values in IR∪{+∞} and is convex. − Lemma 4 For any f, g : X → IR ∪ {+∞}, S(f, g) = S(f +e g ), where g− : x → g(−x). Theorem 2 (Separation of convex functions) Let f, g : X → IR ∪ {+∞} be convex such that −g ≤ f. If 0 ∈ cor(dom f − dom g), then S(f, g) is non-empty and σ(X∗, X)-compact. − Proof. Apply Theorem 1 to f +e g . Theorem 2’ (Decomposition of the infimum of a sum) Let f, g : X → IR ∪ {+∞} be convex such that inf(f + g) is finite. If 0 ∈ cor(dom f − dom g), then for every ε ≥ 0 the set { x∗ ∈ X∗ | inf(f + g) ≤ inf(f − x∗)+inf(g + x∗)+ ε } is non-empty and σ(X∗, X)-compact. Proof. Apply Theorem 2 to the functions f˜ := f − inf(f + g)+ ε and g. Corollary If in Theorem 2 we suppose further that X is a TVS and that f +e g is continuous at some point of its domain (this is the case if f is con- tinuous at some point of its domain), then S(f, g) is non-empty, equicon- tinuous and σ(X′, X)-compact. Proof. 
If f is continuous at some point of its domain, it is actually contin- uous on the non-empty set int(dom f), and since 0 belongs to cor(dom f − dom g) = int(dom f) − dom g, we infer that f is continuous at some point − of dom f ∩ dom g, which implies at once that f +e g is continuous at 0. The result now follows from the corollary of Theorem 1 because S(f, g)= − S(f +e g ). The literature on the Hahn-Banach theorem is too broad to give any fair account in this short article. We refer to Buskes [5] for a comprehensive survey and an extensive bibliography, and to K¨onig [8] for a deep discus- sion on the theorem and its various applications. For variants of the above results, see, e.g., Holmes [6, p. 23 and p. 42], Vangeld`ere [21], Th´era [19]. HAHNBANACH THEOREMS FOR CONVEX FUNCTIONS 5 Before proceeding, we mention several simple consequences.
{"url":"https://docslib.org/doc/4816209/hahn-banach-theorems-for-convex-functions-marc-lassonde","timestamp":"2024-11-13T22:06:40Z","content_type":"text/html","content_length":"64593","record_id":"<urn:uuid:4bf122ee-6794-4ec7-899d-4f8d96a9cb50>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00612.warc.gz"}
Study: Intervals level 1

We've already discovered semitones and tones between adjacent notes, so we'll start with a quick recap on those. Next we'll take a look at the intervals between notes that are not adjacent.

A quick recap
Let's begin with a quick reminder of some things we've learned in the previous study guide, and start with a list of the natural notes: C, D, E, F, G, A, B, C
Remember that there are gaps "between" some of these notes, which allow us to have sharp and flat versions of the natural notes.

Tones and semitones
The gap between a natural note and its sharpened or flattened version is an interval called a semitone, and the reason that there is no sharp (or flat) note between B and C or between E and F is because these natural notes are already a semitone apart, and so there is no "room" for a sharp or flat between them. The interval between two natural notes which is large enough to contain a flat or sharp is called a tone. Therefore, there are tones between these pairs of natural notes:
• C and D
• D and E
• F and G
• G and A
• A and B

Wider intervals
An interval doesn't just have to be a semitone or a tone. In fact, we can describe the gap between any two notes with a system of describing intervals that is very important to the study of music. Let's consider the list of natural notes again, and this time let's number them: 1. C 2. D 3. E 4. F 5. G 6. A 7. B
If C (number 1) is the first note, we can see that D is the 2nd note that we come to, and that E is the 3rd. We can therefore say that D is an interval of a 2nd above C, and that E is an interval of a 3rd above C. Similarly, we can construct the intervals of a 4th, 5th, 6th and 7th. Carrying on with this, we can describe any interval between C and a note above in the same way, as shown in this example:

The octave
If the interval between C and B is a 7th, what about the interval between C and the C above? Is that an "8th"? The interval between two notes an "8th" apart is given a special name: an octave. It is not correct to use the term "an 8th", so you'll need to remember the word "octave" - fortunately, you'll come across it a lot, so it's an easy one to remember! Here are some examples of octaves:

The unison
Similarly, if the interval between C and D is a 2nd, what about the "interval" between C and itself? Is that a "1st"? This is another special case. The interval between any two notes that are the same is called a unison. Just as with the octave and "8th", it is not correct to call this interval a "1st", so you'll also need to remember to say "unison" instead. You'll also come across the word "unison" frequently - for example, we might say that two people are "singing in unison", when we mean that they are singing the same notes at the same time - in other words, the interval between them is always a unison!

Trap: Equivalents
Do you remember the word "equivalents" from the flats and sharps study guide - describing, for example, how C sharp is equivalent to D flat? This can lead to an important trap! C sharp is not the same note as D flat, it is simply equivalent. Therefore the interval between these two notes - even though they sound the same - is not a unison.
Remember that equivalent notes are written differently even though they sound the same.
C sharp is a "type of" C, and D flat is a "type of" D, and we know that the interval between C and D is a 2nd. Therefore, the interval between C sharp and D flat is also a "type of" 2nd. We'll discover exactly what kind of 2nd in a later study guide, but for now, watch out for this trap! Starting elsewhere So far we have just looked at intervals starting on C, in the C major scale. We can use this system for any scale. Here are the intervals in the scales of G major, D major, and F major. Counting up Always remember to count up from the lower note when describing an interval, even if the first note in the pair is higher. Both of the following intervals are a 3rd: If you don't count up from the lower note, in the example above you might wrongly conclude that the second interval (B to G) is a 6th! Read more... With a subscription to Clements Theory you'll be able to read this and dozens of other study guides, along with thousands of practice questions and more! Why not subscribe now? Are you sure you've understood everything in this study guide? Why not try the following practice questions, just to be sure!
{"url":"https://thecorleyconspiracy.com/study/intervals-level-1/","timestamp":"2024-11-03T01:20:41Z","content_type":"text/html","content_length":"16149","record_id":"<urn:uuid:c71160ea-bd37-43e5-9ae2-612753ef1016>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00382.warc.gz"}
The Power of Log2(100) – Understanding the Benefits and Applications

Understanding log2(100)
Logarithms play an essential role in mathematics and various scientific fields. They are especially useful for solving exponential equations and understanding the relationship between numbers. Among the numerous logarithms, log2, with a base of 2, is of particular significance in computer science and information theory. In this article, we will explore the concept of logarithms, delve into the specifics of log2, and discuss its benefits and applications, with a focus on the value log2(100).

Explanation of logarithms and their purpose
Before delving into the intricacies of log2, let's first understand what logarithms are and why they are important. A logarithm is the inverse operation of exponentiation. It helps us solve equations where the unknown value is an exponent. In simpler terms, logarithms tell us what power we need to raise a base number to in order to get a specific result. The general logarithm equation can be expressed as:
log[b](x) = y
Here, 'b' is the base of the logarithm, 'x' is the result of the exponentiation, and 'y' is the power to which the base should be raised. Common logarithms, indicated by log(x), typically have a base of 10, while natural logarithms, denoted by ln(x), have a base of the mathematical constant e (approximately 2.71828).

Introduction to log2 and its significance
Now that we have a basic understanding of logarithms, let's focus on log2. Log2 represents logarithms with a base of 2. In other words, it tells us to what power 2 needs to be raised to obtain a given result. This logarithm's significance stems from its applications in computer science and information theory. Computer systems use binary digits, or bits, to represent data and perform computations. Since the binary system is based on powers of 2, log2 is widely used for calculations involving computer systems. It helps simplify complex operations and provides crucial insights into algorithms, information storage, and data compression.

Benefits of log2(100)
Now that we have a solid understanding of log2, let's explore the benefits of using log2(100). By simplifying complex calculations, understanding data compression, and evaluating run-time complexities, log2(100) proves its worth in various applications.

Simplifying complex calculations
One of the primary advantages of log2(100) is its ability to simplify calculations involving exponential relationships. As mentioned earlier, binary systems are extensively used in computer science. Instead of working directly with large numbers, logarithms allow us to break complex problems down into smaller, more manageable parts. For example, consider expressing the number 100 as a power of 2. Rather than hunting for the exponent by repeated multiplication, log2(100) tells us directly that 100 ≈ 2^6.6438561898. This simplified representation proves helpful when dealing with very large quantities or when developing algorithms that work by repeated halving or doubling.

Understanding data compression
Data compression refers to the process of reducing the size of data to save storage space or transmit it efficiently. It plays a vital role in various applications, including file archiving, internet data transfer, and multimedia streaming. Log2(100) offers insights into the efficiency of data compression algorithms. When analyzing data compression, we often measure it using bits per symbol.
If a data representation uses fewer bits per symbol, it means more information can be stored or transmitted using the same number of bits. Log2(100) plays a key role in calculating the number of bits required to represent a set number of symbols. Evaluating run-time complexities In computer science, understanding the run-time complexity of algorithms is crucial for optimizing performance. Log2(100) aids in evaluating the time complexities of algorithms, specifically those that exhibit logarithmic growth. Logarithmic time complexity, often denoted as O(log n), indicates that the algorithm’s running time increases proportionally to the logarithm of the input size. By incorporating log2(100) into the analysis, we gain a deeper understanding of how the algorithm performs and scales with larger inputs. This knowledge helps us select or design efficient algorithms for different computational tasks. Applications of log2(100) Now that we have explored the benefits of log2(100), let’s examine its applications in computer science and information theory. Computer science and programming Log2(100) finds extensive application in computer science and programming languages. Understanding this logarithm proves valuable when dealing with algorithms, data structures, and search operations. In algorithms, log2(100) often appears in divide and conquer strategies, where the input is repeatedly divided into smaller pieces. It helps determine the number of steps or iterations necessary to solve the problem optimally. Additionally, log2(100) aids in understanding the efficiency of various sorting and searching algorithms, such as binary search. Data structures, such as balanced binary trees, also rely on log2(100) for efficient operations. The logarithm helps maintain balanced structures and ensures fast search and insertion times. Information theory and signal processing Information theory deals with the quantification, storage, and communication of information. Log2(100) plays a vital role in coding theory, which involves creating efficient codes to transmit information reliably. Coding theory utilizes log2(100) to determine the number of bits required to represent a specific number of symbols. The logarithm helps in designing codes that minimize the required storage or transmission space while maintaining the ability to retrieve the original information accurately. In signal processing, log2(100) aids in quantifying the signal-to-noise ratio, encoding data, and optimizing compression algorithms. The logarithm helps determine the minimum number of bits per quantization level, ensuring accurate representation and minimizing data loss. Log2(100) holds immense significance in various scientific fields, particularly in the realms of computer science and information theory. By simplifying complex calculations, aiding data compression analysis, and evaluating run-time complexities, log2(100) proves to be a valuable tool in solving real-world problems. Its applications in computer science, programming, information theory, and signal processing highlight the depth and versatility of this logarithm. Understanding log2(100) unlocks immense potential for optimizing algorithms, data storage, and communication protocols. Embracing the power of logarithms positions us to navigate the increasingly complex world of technology with greater efficiency and accuracy.
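A few standard-library one-liners illustrating the quantities discussed above:

```python
import math

# The value itself: 100 is roughly 2 to the power 6.64
print(math.log2(100))             # 6.643856189774724

# Bits needed to give each of 100 distinct symbols its own binary code
print(math.ceil(math.log2(100)))  # 7

# Worst-case probes of a binary search over 100 sorted items (logarithmic growth)
lo, hi, probes = 0, 99, 0
while lo <= hi:
    probes += 1
    mid = (lo + hi) // 2
    lo = mid + 1                  # pretend the target is always in the upper half
print(probes)                     # 7
```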
{"url":"https://skillapp.co/blog/the-power-of-log2100-understanding-the-benefits-and-applications/","timestamp":"2024-11-14T10:27:58Z","content_type":"text/html","content_length":"112051","record_id":"<urn:uuid:3e1b75a5-c514-4603-88d8-ae4886857125>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00762.warc.gz"}
Many datasets are intrinsically hierarchical: geographic entities, such as census blocks, census tracts, counties and states; the command structure of businesses and governments; file systems; software packages. And even non-hierarchical data may be arranged hierarchically as with k-means clustering or phylogenetic trees. A good hierarchical visualization facilitates rapid multiscale inference: micro-observations of individual elements and macro-observations of large groups. This module implements several popular techniques for visualizing hierarchical data: Node-link diagrams show topology using discrete marks for nodes and links, such as a circle for each node and a line connecting each parent and child. The “tidy” tree is delightfully compact, while the dendrogram places leaves at the same level. (These have both polar and Cartesian forms.) Indented trees are useful for interactive browsing. Adjacency diagrams show topology through the relative placement of nodes. They may also encode a quantitative dimension in the area of each node, for example to show revenue or file size. The “icicle” diagram uses rectangles, while the “sunburst” uses annular segments. Enclosure diagrams also use an area encoding, but show topology through containment. A treemap recursively subdivides area into rectangles. Circle-packing tightly nests circles; this is not as space-efficient as a treemap, but perhaps more readily shows topology. See one of:
{"url":"https://d3js.org/d3-hierarchy","timestamp":"2024-11-02T08:25:31Z","content_type":"text/html","content_length":"81094","record_id":"<urn:uuid:f0285341-fffc-4cb0-ad5d-48bb03c0f4ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00376.warc.gz"}
For beneficiaries of trusts subject to the new law Determine your share of the capital gain of the trust You will need to determine whether you have a share of each capital gain made by the trust that has been included in the trust's net income for tax purposes. For every capital gain you have a share of, your statement of distribution or advice from the trust should advise you of: • your share of that gain • how much of the net income of the trust for tax purposes relates to each gain (or what is the 'attributable gain' to which your share relates) and • the type of capital gain to which your share relates and the method used by the trustee to calculate it (including any CGT discount or small business concessions applied). Your share of a capital gain is any amount of the capital gain to which you are 'specifically entitled' plus your 'adjusted Division 6 percentage' share of any amount of the capital gain to which no beneficiary is 'specifically entitled'. Divide by the total capital gain That amount is then divided by the total capital gain to give you your ‘fraction’ of the total capital gain. Multiply your fraction of the capital gain by the trust's taxable income relating to the capital gain Your fraction is then multiplied by the net income for tax purposes of the trust that relates to the capital gain. The result is your ‘attributable gain’. In certain circumstances where the trust's net capital gain and total net franked distributions exceed the net income of the trust for tax purposes, the amount of the trust's taxable income relating to the capital gain is rateably reduced. This ensures that beneficiaries and the trustee cannot be assessed on more than the total net income of the trust. Extra Capital gains you are taken to have made If you are a beneficiary who is taken to have an 'attributable gain' (your share of a trust’s capital gain included in its net income for tax purposes), you are taken to have made extra capital gains in addition to any other capital gains you may have made from your own CGT events. These extra capital gains are taken into account in working out your net capital gain for the income year. You include them at step 2 in part B or part C. In order to work out the amount of extra capital gains that are taken into account in working out your own net capital gain, you will need to know the method used by the trustee in calculating the trust’s capital gains that were included in the trust’s net capital gain. Your statement of distribution or advice should show this information. If you are a unit holder in a managed fund, the trustee or manager will generally advise you of your share of the trust’s net capital gain, together with details of your share of any other income distributed to you. In other cases, the trustee may have advised you what your share is or you may need to contact them to obtain details. Trust distributions to which the CGT discount or the small business 50% active asset reduction apply Your 'attributable gain' is then grossed up as appropriate for any CGT concessions (the general CGT discount or the small business 50% reduction) applied by the trustee to that capital gain. You have an extra capital gain equal to the grossed up amount. Where the trustee reduced the capital gain by the CGT discount or the small business 50% active asset reduction - you need to gross up your 'attributable gain' by multiplying it by two. This grossed-up amount is an extra capital gain. 
You multiply by four your share of any capital gain that the trust has reduced by both the CGT discount and the small business 50% active asset reduction. This grossed-up amount is an extra capital If the capital gain has not been reduced by either the CGT discount or the small business 50% active asset reduction, then your 'attributable gain' is an extra capital gain. You are then able to reduce your extra capital gains by any current or prior year capital losses that you have, and then apply any relevant discounts to work out your own net capital gain. No double taxation You are not taxed twice on these extra capital gains because you did not include your capital gains from trusts at item 13 on you tax return (supplementary section). End of attention Example 16: Applying the new trust provisions Step 1: determine the beneficiary’s share of the capital gain of the trust The Cropper Trust generated $100 of rent and a $500 capital gain (which was a discount capital gain). The trust also had a capital loss of $100. The trust deed does not define ‘income’ and therefore capital gains do not form part of the trust income. As a result, the income of the trust estate is $100 (being an amount equal to the rent), whereas the net income of the trust for tax purposes is $300. The $300 net income for tax purposes comprises the $200 net capital gain (which is the $500 capital gain less the $100 capital loss, reduced by the 50% CGT discount) plus the $100 rent income. The trustee resolves to distribute $200 related to the capital gain (after absorbing the capital loss) to Shane and the $100 of rent to Andrea. Shane is specifically entitled to 50% of the $500 capital gain because he can reasonably be expected to receive the economic benefit of 50% ($200) of the $400 capital gain remaining after accounting for the $100 capital loss. Shane’s share of the capital gain is $250 (50% of the $500 capital gain). Andrea’s share of the capital gain is also $250 because, being entitled to all of the $100 income of the trust (none of the capital gain being treated as trust income), she has an adjusted Division 6 percentage of 100% and there is $250 of the $500 capital gain to which no one is specifically entitled. Step 2: divide by the total capital gain Shane divides his share of the capital gain ($250) by the total capital gain ($500) and therefore has a fraction share of 1/2 of the capital gain. Andrea divides her share of the capital gain ($250) by the total capital gain ($500) and therefore also has a fraction share of 1/2 of the capital gain. Step 3: multiply the beneficiary’s fraction of the capital gain by the trust’s taxable income relating to the capital gain The net income of the trust for tax purposes relating to the capital gain is $200. Shane’s attributable gain is $100 ($200 x 1/2). Andrea’s attributable gain is $100 ($200 × 1/2). Step 4: gross up the amount for CGT discounts applied by the trustee As was the case under the former provisions, Shane is required to double his attributable gain of $100 to an extra capital gain of $200 because the trustee had applied the 50% CGT discount. Andrea similarly doubles her attributable gain to $200 which is her extra capital gain. Both Shane and Andrea will take their extra capital gain of $200 into account in working out their own net capital gain at 18. Shane and Andrea are individuals entitled to claim the 50% CGT discount. Neither have other capital gains or capital losses of their own to apply against their extra capital gains. 
Therefore, after applying the 50% CGT discount to their $200 extra capital gain, they will have made a net capital gain of $100 ($200 extra capital gain x 50%). They will write $100 at A item 18 Capital gains on their tax returns (supplementary section). They also write $200 (which is $100 grossed up) at H item 18.

Note that Shane and Andrea's statement of distribution or advice from the trust advised each of them that the trust had made a capital gain of $500, that only $200 of this had been included in the net income of the trust estate for tax purposes, that the 50% discount had been applied and that their share of the gain was $250. Alternatively, it could have advised them that they each had an extra capital gain of $200 that was a discount capital gain.

End of example

Example 17: Distribution where the trust claimed concessions

Serge is the sole beneficiary in the Shadows Unit Trust. His statement of distribution or advice from the trust shows that his 100% share of the net income of the Shadows Unit Trust for income tax purposes was $2,000. The $2,000 includes a net capital gain of $250 (made up of a $1,000 capital gain that was reduced by the CGT discount and the small business 50% active asset reduction). His statement advises that he has a $1,000 (100%) share of the $1,000 capital gain.

Because he has a 100% share of the capital gain, Serge will have an 'attributable gain' of $250 (that is, the whole of the net income of the trust estate for tax purposes that relates to the gain). Due to the application of the CGT discount and the small business 50% active asset reduction, Serge then grosses up his 'attributable gain' of $250 by multiplying it by 4 to $1,000, which is his extra capital gain. Serge has also made a capital loss of $100 from the sale of shares. He calculates his own net capital gain as follows:

• Serge's extra capital gain (his $250 attributable gain x 4): $1,000
• Deduct capital losses: $100
• Capital gains before applying discounts: $900
• Apply the CGT discount of 50%: $450
• Apply the 50% active asset reduction: $225
• Net capital gain: $225

Serge will write $1,000 at H item 18 on his tax return (supplementary section), which is his total current year capital gain. His net capital gain to be written at A item 18 on his tax return (supplementary section) is $225. He will write a trust distribution of $1,750 ($2,000 - $250) at U item 13 on his tax return (supplementary section).

End of example

Applying the concessions
Remember that you must use the same method as the trust to calculate your capital gain. This means you cannot apply the CGT discount to capital gains distributed to you from the trust calculated using the indexation method or 'other' method. Also, you can only apply the small business 50% active asset reduction to grossed-up capital gains to which the trust applied that concession.

End of attention
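A short sketch of the four-step attribution calculation set out above, using the figures from Example 16 (the function and variable names are mine, and this is only an illustration of the arithmetic, not tax advice or tax software):

```python
def extra_capital_gain(share_of_gain, total_gain, taxable_income_of_gain,
                       cgt_discount=False, active_asset_reduction=False):
    """Steps 1-4: share -> fraction -> attributable gain -> grossed-up extra gain."""
    fraction = share_of_gain / total_gain                # step 2
    attributable = fraction * taxable_income_of_gain     # step 3
    gross_up = (2 if cgt_discount else 1) * (2 if active_asset_reduction else 1)
    return attributable * gross_up                       # step 4

# Example 16: Shane's share of the $500 gain is $250, $200 of the trust's net
# income relates to the gain, and the trustee applied the 50% CGT discount.
extra = extra_capital_gain(250, 500, 200, cgt_discount=True)
print(extra)          # 200.0

# Shane has no capital losses and can apply the 50% CGT discount himself.
print(extra * 0.5)    # 100.0 -> the net capital gain written at A item 18
```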
{"url":"https://www.ato.gov.au/forms-and-instructions/capital-gains-tax-guide-2011/part-a-about-capital-gains-tax/trust-distributions/capital-gains-made-by-a-trust/for-beneficiaries-of-trusts-subject-to-the-new-law","timestamp":"2024-11-04T06:00:42Z","content_type":"text/html","content_length":"369538","record_id":"<urn:uuid:6620ae78-bf01-4987-9881-b6e96342ce76>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00506.warc.gz"}
Nilakantha's accelerated series for π

David Brink (København)

2010 Mathematics Subject Classification: Primary 65B10; Secondary 40A25.
Key words and phrases: series for π, convergence acceleration.

1. Introduction. Unbeknownst to its European discoverers—Gregory (1638–1675) and Leibniz (1646–1716)—the formula

(1) $\frac{\pi}{4} = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}$

had been found in India already in the fourteenth or fifteenth century. It first appeared in Sanskrit verse in the book Tantrasangraha from about 1500 by the Indian mathematician, astronomer and universal genius Nilakantha (1445–1545). Unlike Gregory and Leibniz, Nilakantha also gave approximations of the tail sums and found a more rapidly converging series,

(2) $\pi = 3 + \sum_{n=0}^{\infty} \frac{(-1)^n \, 4}{(2n+3)^3 - (2n+3)}$.

The reader is referred to Roy [14] for more details on this fascinating story. We show here that (2) is the first step of a certain series transformation that eventually leads to the accelerated series

(3) $\pi = \sum_{n=0}^{\infty} \frac{(5n+3)\, n!\, (2n)!}{2^{\,n-1}\, (3n+2)!}$,

in much the same way as the difference operator leads to the Euler transform

(4) $\pi = \sum_{n=0}^{\infty} \frac{2^{\,n+1}\, n!\, n!}{(2n+1)!}$.

We call (3) the Nilakantha transform of (1) and note that it converges roughly as $13.5^{-n}$, whereas the Euler transform converges as $2^{-n}$. Applying the Nilakantha transformation to the Newton–Euler formula [8]

(5) $\frac{\pi}{2\sqrt{2}} = \sum_{n=0}^{\infty} \frac{(-1)^{\lfloor n/2 \rfloor}}{2n+1}$

gives the accelerated series (A.5) (see the Appendix), also with convergence $13.5^{-n}$. Similar transformations of these and other formulas lead to other accelerated series for π which are collected in the Appendix in a standardized form, with (3) corresponding to (A.1).

Several of the formulas in the Appendix are well known from the literature. A series equivalent to (A.1) is attributed to Gosper in [2, eq. (16.81)]. The series (A.2) is due to Adamchik and Wagon [1] and is one of the simplest of all "BBP-like" formulas [2, Chapter 10]. The even simpler BBP-formula (A.14) appears in [3, eq. (18)] with two terms at a time, attributed to Knuth. I note that the formulas in the Appendix emerged quite naturally as accelerations of simple series, and all were derived by hand.

As we shall see, this acceleration method can also be used to transform divergent series into convergent ones, in a process more properly called series deceleration. For example, decelerating the divergent series

(6) $\frac{\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} \frac{(-1)^n\, 3^n}{2n+1}$

gives the convergent, fractional BBP-formula (A.18). This argument, of course, is no proof, but I also give an alternative, rigorous demonstration.

The general principle behind these formulas is an acceleration scheme that allows one to approximate an alternating, "sporadic" sum $S = a_0 - a_k + a_{2k} - a_{3k} + \cdots$ from a finite number of terms $a_0, a_1, \ldots, a_n$ of the complete sequence. The general theory—in particular the idea of letting the $a_i$ be the moments of a measure, writing $S$ as an integral with respect to that measure, and approximating the integrand by means of Chebyshev polynomials—is strongly indebted to that in [7] which it generalizes from $k = 1$ to arbitrary $k$. As another example of this method, we show how to compute numerically the constant

(7) $K = \sum_{n=0}^{\infty} \frac{(-1)^n\, \zeta(n+1/2)}{2n+1} = -2.1577829966\ldots,$

making the most of a small number of given integer and half-integer zeta-values, $\zeta(n)$ and $\zeta(n+1/2)$. The constant $K$ is known as the Schneckenkonstante and arises in connection with the Spiral of Theodorus [6].

2. Alternating, sporadic series.
Let µbe a finite, signed measure on [0,1] with moments (8) ai= xidµ, i ≥0, converging to zero for i→ ∞, and consider the alternating, sporadic series (9) S= 1 + xk= for some integer k≥1. Let there be given a polynomial (10) P(x) = with P(u) = 1 for uk=−1, and write (11) Q(x) = 1−P(x) 1 + xk= Define a new measure µ0with density P(x) with respect to µ, i.e., dµ0= P(x)dµ, and moments and consider the transformed series 1 + xk= Write the difference between the old and the new series as Q(x)dµ = Repeating this process gives a sequence of measures µ(n)with densities dµ(n)=P(x)ndµ and moments (12) a(n) as well as a sequence of transformed series (13) S(n)= 1 + xk= 4 D. Brink with differences (14) ∇S(n)=S(n)−S(n+1) = After nsteps one has i=0 ∇S(i)+S(n). If Mis the maximum of |P(x)|on the interval [0,1], then (13) gives the (15) |S(n)| ≤ Mn 1 + xk, cf. Remark 1 below. So if M < 1, one gets the accelerated series (16) S= n=0 ∇S(n) with convergence Mn, i.e., ∇S(n)=O(Mn). Remark 1.Asigned measure may take negative as well as positive values. Even if µwere required to be positive, the transformed measure µ0 would still take negative values if the density function P(x) did. By Jordan’s Decomposition Theorem [4], one can write µ=µ+−µ−with unique positive measures µ+and µ−with disjoint supports. The theory of integration with respect to a signed measure can thus be reduced to that of a usual, positive measure. Absolute integrals such as the one appearing in (15) are defined by means of the total variation |µ|=µ++µ−. The assumption that the moments (8) converge to zero is equivalent to µ({1}) = 0 and guarantees that 1 + xk= 1 −xk+x2k− · · · can be integrated termwise, say, by Lebesgue’s Dominated Convergence The- It is a result of Hausdorff [12, Satz I] that a sequence of real numbers a0, a1, a2, . . . is the sequence of moments of a finite, positive measure on [0,1] if and only if it is totally monotonic, i.e., (17) ∇nai≥0 for all i, n ≥0. As above, ∇ai=ai−ai+1 denotes the negated forward difference operator. Similarly, also by Hausdorff [12, Satz II], the aiare the moments of a finite, Nilakantha’s accelerated series for π5 signed measure on [0,1] if and only if (18) sup The latter condition thus implies that (ai) is the difference between two totally monotonic sequences. It is seen directly from the identity i=0 n that (17) implies (18). Note that the moments (8) need not be of the same sign, not even even- tually, so that in reality the series (9) is not necessarily alternating. For later use we also note that ai= 1/(i+ 1) and a∗ i= 1/(2i+ 1) are the moments of the usual Lebesgue measure µand the measure µ∗with density dµ∗=dµ/2√x, respectively. Remark 2.In order to compute Snumerically, we approximate it by the first difference ∇S. To minimize the error S0, we have to choose P(x) as a polynomial of high degree that approximates zero uniformly on [0,1]. In light of (11), this suggests taking P(x)=1−(1 + xk)Q(x), where the polynomial Q(x) is a Chebyshev approximation to 1/(1 + xk). This will be carried out in more detail in Sections 4 and 6. Remark 3.On the other hand, if we wish to transform Sinto an exact, accelerated series (16), we have to choose P(x) as a simple polynomial of low degree, so that the terms (14) can be computed explicitly. If, for example, P(x) is of the form xj(1 −x)n, then the terms of the transformed series S0 are a0 The binomial sum j=0 n x(x+ 1) · ·· (x+n) is well known. It holds as an identity in Q(x) and can be easily shown as a partial fraction decomposition. 
The substitution x=i+ 1 thus gives (19) ∇nai=n!i! (n+i+ 1)! for the sequence ai= 1/(i+ 1). Similarly, letting x=i+ 1/2 gives (20) ∇na∗ i!(2n+ 2i+ 1)! for a∗ i= 1/(2i+ 1). This will be of use later. 6 D. Brink 3. The Nilakantha transform. Let k= 2 throughout this section. Then P(x) must satisfy P(±i) = 1. We can take for P(x) any product of −x2,x(1 −x)2 2,−(1 −x)4 (21) P(x) = x(1 −x)2 so that Q(x) = 1 −x The transformed series S0has terms 2=ai+1 −2ai+2 +ai+3 hence the first step of the transformation becomes (22) S=a0−a1 ai+1 −2ai+2 +ai+3 The ntimes transformed series S(n)has terms so that we get the Nilakantha transform (23) S= with convergence 13.5−n, since (24) M=P(1/3) = 2/27. Example 4.To accelerate the Gregory–Leibniz series (1), we let ai= 1/(i+ 1). The first step (22) of the transformation is precisely Nilakantha’s series (2). To compute the fully accelerated series (23), we use the iden- tity (19) and get (A.1). Example 5.Taking instead P(x) as (25) −(1 −x)4 4,−x3(1 −x)2 2,−x4(1 −x)4 gives the three series (A.2), (A.3), (A.4) with M= 1/4, 54/3125, 1/1024, Example 6.The Newton–Euler series (5) can be rewritten as 4n+ 1 + 4n+ 3 Nilakantha’s accelerated series for π7 with two alternating, sporadic series corresponding to a∗ i= 1/(2i+ 1) and i= 1/(2i+ 3). Accelerating each series separately using P(x) = x(1 −x)2 2,−(1 −x)4 4,−x3(1 −x)2 and adding the results gives (A.5), (A.6), (A.7), respectively. Remark 7.It follows in advance from (24) that (A.1) and (A.5) con- verge as 13.5−n, but this is also evident from the expressions themselves, by Stirling’s Formula. A similar remark applies to all other formulas in the Remark 8.The factors n!(6n)! ,(2n)!(5n)!(6n)! appearing in (A.5) and (A.7) happen to be reciprocal integers by a criterion of Landau (1900), anticipated by Chebyshev (1852) and Catalan (1874). Such expressions are not too common and have been completely classified (in a suitable sense) [5]. Remark 9.I stress that there is no evidence that Nilakantha derived (2) the way we have done here, much less that he knew (3). Of course, the transformation (22) is straightforward to verify directly, making (1) and (2) essentially equivalent. 4. Numerical approximations. Let k≥1 be given. The Chebyshev polynomials of the first kind, Tm(x), are given recursively by T0(x) = 1, T1(x) = xand Tm(x) = 2xTm−1(x)−Tm−2(x). The zeros of Tm(x) are ηi= cos (2i−1)π 2m, i = 1, . . . , m. Let Q(x) be the Chebyshev approximation of order mof 1/(1 + xk) on the interval [0,1], i.e., Q(x) is the polynomial of degree less than magreeing with 1/(1 + xk) at the mpoints (1 + ηi)/2. Since (1 + ηi)/2 are the zeros of Tm(1 −2x), Q(x) satisfies 1 + xkmodulo Tm(1 −2x) and can be computed from this congruence by the Euclidean Algorithm. Thus, P(x) will be the polynomial of degree less than m+kwith zeros (1 + ηi)/2 and P(u) = 1 for uk=−1. Lagrange interpolation gives the 8 D. Brink explicit expression P(x) = Tm(1 −2x)X βmk·1 + xk with βm=Tm(1 −2u). In order to evaluate the maximum Mof |P(x)|for 0 ≤x≤1 as m→ ∞, first note |Tm(1−2x)| ≤ 1. For a fixed u, the numbers βmsatisfy β0= 1, β1= 1−2uand the recursion βm= (2−4u)βm−1−βm−2.Hence, βm= (λm with the roots λiof the characteristic polynomial λ2−(2 −4u)λ+ 1.We may suppose |λ1|>|λ2|, and conclude that βm∼ |λ1|m/2.Finally, let λbe the minimum of |λ1|as uruns through the roots of unity with uk=−1. Then M=O(λ−m) as m→ ∞. Some values of λare given in Table 1. Note that the value λ= 5.828 for k= 1 was found in [7]. Table 1. 
Values of λ λ5.828 4.612 3.732 3.220 2.890 2.659 2.488 2.356 2.250 2.164 Example 10.Suppose we want to compute numerically the alternating, sporadic sum 1 + x2= and that we have at our disposal the terms a0, a1, a2, . . . , a99. Letting k= 1 and m= 50 and using only every second term, a0, a2, a4, . . . , a98, we expect a relative error of 5.828−50, or 38 correct, significant digits of S. Letting k= 2 and m= 100, using all 100 available terms, we expect an error of 4.612−100, or 66 correct digits. Consider the constant (7), and rewrite it as (−1)iζ(i+ 3/2) 2i+ 3 in order to bypass the singularity at z= 1. Suppose that the zeta-values ζ(i/2+3/2) are available for i= 0,1,...,99. Using the first method gives 42 digits of S, while the second method gives 70 digits, in agreement with our expectations. The second method can be carried out in Pari as follows: We return to this example in Section 6. Nilakantha’s accelerated series for π9 Remark 11.In the above example, it was assumed that the terms a0, a1, ...,a99 were simply given in advance. In practice, one might obviously have to compute them first. Using k= 2 has the advantage that the integer zeta-values ζ(n) are much faster to compute than the half-integer values ζ(n+ 1/2). On the other hand, these values then need to be computed to a precision of up to 10 extra digits due to the numerically larger coefficients ci. Also, the cican be computed particularly efficiently for k= 1 (cf. [7]). Remark 12.For k= 2, we can compare the (optimal) value λ= 4.612 obtained from Chebyshev polynomials with the values λ=M−1/deg Pfrom the polynomials P(x) given in Section 3. Of these, the Nilakantha trans- formation (21) has the best convergence, i.e., λ= 2.381. The other three transformations (25) have λ= 1.414, 2.252, 2.378, respectively. 5. Geometrically converging series. Let µbe a finite, signed mea- sure on [0,1] with arbitrary moments (8), and consider the alternating, ge- ometrically converging series (26) S= 1 + θxk= for k≥1 and 0 < θ < 1. Let P(x) be given as in (10) with P(u/θ1/k) = 1 for uk=−1, and write Q(x) = 1−P(x) 1 + θxk. As before, we define a sequence of measures µ(n)with dµ(n)=P(x)ndµ and moments (12), and we get a sequence of transformed series 1 + θxk with differences (14) as well as an accelerated series (16) with ∇S(n)= O(Mn), where Mis the maximum of |P(x)|on [0,1]. Example 13.The arcus tangent series (27) arctan √θ 2i+ 1 has the form (26) with k= 1 and ai= 1/(2i+ 1). To accelerate it, P(x) must satisfy P(−1/θ) = 1, and we can take any product of −θx, θ(1 −x) θ+ 1 . 10 D. Brink (28) P(x) = θ(1 −x) θ+ 1 and using (20) gives Euler’s accelerated series (29) arctan √θ θ+ 1 θ+ 1nn!n! (2n+ 1)! with M=θ/(θ+ 1). Note that the original series (27) converges for |θ|<1, whereas the accelerated series (29) converges for |θ|<|θ+ 1|, or Re(θ)>−1/2. The pre- ceding discussion shows that the two series agree for 0 < θ < 1. The Identity Theorem for holomorphic functions and the fact that uniform convergence preserves holomorphicity show that (29) holds for Re(θ)>−1/2. Inserting θ= 1, 1/3, 3 gives three classical formulas such as (4). Also note that (4) is the Euler transform of the Gregory–Leibniz series, i.e., the acceleration corresponding to the negated difference operator ∇, or P(x) = 1−x Historical note 14.Euler develops the Euler transformation and de- rives (4) and (29) from (1) and (27) in his Institutiones Calculi Differentialis [9, Part II, Chapter 1] from 1755. 
Much earlier, he had given a series for arcsin2xessentially equivalent to (29) in a letter to Johann Bernoulli dated 10 December 1737 [15]. Euler proves (29) again (twice) as well as the Machin- like formula π= 20 arctan 1/7 + 8 arctan 3/79, and computes the two terms with 13 and 17 correct decimals, respectively, but without adding them, in 1779 [10]. He extends this calculation and computes 21 correct decimals of πin [11]. Several sources on the chronology of πstate that Euler did this calculation in 1755 and/or in less than an hour. It seems from the above, though, that the calculation could not have been carried out before 1779. Regarding the duration, the relevant passage reads: “totusque hic calculus laborem unius circiter horae consum[p]sit” (and the entire calculation took about an hour’s work). Example 15.Taking P(x) = −θ2x(1 −x) θ+ 1 rather than (28) gives the accelerated series arctan √θ 4(θ+ 1) θ+ 1n4n 2n−13θ+ 4 4n+ 1 −θ 4n+ 3 with M=θ2/4(θ+ 1). Nilakantha’s accelerated series for π11 By the same argument as before, this formula holds on a complex domain bounded by the curve |θ|2= 4|θ+ 1|, or (x2+y2)2= 16((x+ 1)2+y2) in real variables. This quartic, algebraic curve is a lima¸con of Pascal, named after ´ Etienne Pascal, the father of Blaise Pascal, and first studied in 1525 by Albrecht D¨urer [13] (1). Fig. 1. Lima¸con of Pascal Inserting θ= 1, 1/3, 3 gives (A.8), (A.9), (A.10). Note that the small loop around –1 is not included in the domain of convergence, corresponding nicely to the fact that arctan has a singularity at ±i. Example 16.Letting P(x) = −θ3x(1 −x)2 (θ+ 1)2 gives the formidable expression arctan √θ 9(θ+ 1)2 (θ+ 1)2n n!(6n)! 5θ2+ 15θ+ 9 6n+ 1 −θ2 6n+ 5 with M= 4θ3/27(θ+ 1)2. The domain of convergence is bounded by the sextic, lima¸con-like curve 16(x2+y2)3= 729((x+ 1)2+y2)2. Inserting θ= 1, 1/3, 3 gives (A.11), (A.12), (A.13). (1) I am grateful to my friend Kasper K. S. Andersen for identifying this curve. 12 D. Brink Formulas (A.8) and (A.11) are examples of van Wijngaarden’s trans- formation [7], i.e., they are the accelerations of the Gregory–Leibniz series corresponding to the polynomials P(x) = −x(1 −x) 2,−x(1 −x)2 Example 17.For k= 2 and ai= 1/(i+1), the general arctan series (27) cannot be accelerated as in the previous examples. It may, however, for specific choices of θ. Let θ= 1/3. Then we must have P(±i√3) and can take any product of 3,−(1 −x)3 P(x) = −(1 −x)3 8,x2(1 −x)3 gives the series (A.14), (A.15) with M= 1/8, 9/6250, respectively. Example 18.The convergence of the accelerated series (16), and its identity with (26), was proved under Hausdorff’s condition (18) and 0 < θ < 1. It is a common phenomenon, however, that acceleration techniques work in more general settings and even for divergent series [7, Remark 6]. Consider the divergent series (6), obtained by inserting θ= 3 into (27). Let k= 2 and θ= 3. Then P(x) must satisfy √3= 1. We can take for P(x) any product of −3x2,9x(1 −x)3 8,−27(1 −x)6 64 . Letting P(x) be 9x(1 −x)3 8,−27x3(1 −x)3 8,−27(1 −x)6 gives the three series (A.16), (A.17), (A.18) with M= 243/2048, 27/512, 27/64, respectively. These formulas can be checked numerically to many digits, but of course the above argument is no proof (although I like to think that Euler would have appreciated it). Remark 19.A quick, rigorous proof of (A.18) could go as follows. Write Mercator’s Formula with six terms at a time, −log(1 −z) = 6n+ 1 +· ·· +z6 6n+ 6. Insert z=eiπ/6√3/2 and take imaginary parts to get (A.18), q.e.d. 
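As a quick sanity check on the Machin-like formula quoted in Historical note 14 (an illustration added here, not part of the original text), the following short Python sketch evaluates it in double precision.

import math

# Euler's Machin-like formula quoted above: pi = 20*arctan(1/7) + 8*arctan(3/79)
approx = 20 * math.atan(1 / 7) + 8 * math.atan(3 / 79)
print(approx)                 # 3.14159265358979...
print(abs(approx - math.pi))  # difference at the level of double-precision rounding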
Nilakantha’s accelerated series for π13 Similar proofs of (A.2) and (A.14) are possible: Insert z= (1 + i)/2 and z=eiπ/3/2 into Mercator’s Formula with four and three terms at a time, 6. Numerical approximations again. To approximate the geomet- rically converging series (26) numerically, let k≥1 be given, but now let Q(x) agree with 1/(1 + θxk) at the points (1 + ηi)/2, i.e., 1 + θxkmodulo Tm(1 −2x). P(x) = Tm(1 −2x)X βmk·1 + θxk with βm=Tm(1 −2uθ−1/k ). Again, βm∼ |λ1|m/2 with λ1the numerically greater root of the characteristic polynomial λ2−(2 −4uθ−1/k )λ+ 1. We conclude that M=O(λ−m) as m→ ∞ with λ= min{|λ1|:uk=−1}. Table 2 gives λ=λθfor various values of kand θ. Table 2. Values of λθ k λ1/2λ1/3λ1/4λ1/5λ1/6 1 9.899 13.93 17.94 21.95 25.96 2 6.129 7.328 8.352 9.263 10.09 3 4.607 5.254 5.782 6.236 6.636 4 3.829 4.264 4.612 4.905 5.160 5 3.357 3.685 3.942 4.157 4.343 6 3.040 3.303 3.508 3.678 3.823 7 2.811 3.031 3.202 3.342 3.462 8 2.636 2.827 2.973 3.094 3.196 9 2.499 2.667 2.796 2.902 2.991 10 2.388 2.539 2.654 2.748 2.828 Example 20.We return to the computation of the constant Kfrom Example 10. Write 4−1 + ζ1 (−1)iζ(i+ 3/2) −1 2i+ 3 to get a geometrically converging series, with θ= 1/2. Suppose again the 14 D. Brink zeta-values ζ(i/2 + 3/2) are given for i= 0,1,...,99. Using only the terms a0, a2, a4, . . . , a98, we expect an error of 9.899−50, or 50 correct digits (cf. Ta- ble 2). Using all 100 available terms, we expect an error of 6.129−100, or 79 correct digits. In practice, the two methods give 53 and 82 digits, respec- tively, confirming the theory. The second method can be carried out in Pari as follows: Appendix. Series for π (A.1) 3π 3n+ 1 +1 3n+ 2, (A.2) π= 4n+ 1 +2 4n+ 2 +1 4n+ 3, (A.3) 125π 5n+ 1 −22 5n+ 2 +8 5n+ 3 −7 5n+ 4, (A.4) 1024π= 8n+ 1 +117 8n+ 3 −15 8n+ 5 −5 8n+ 7, (A.5) 9π n!(6n)! 19 6n+ 1 +1 6n+ 5, (A.6) 16√2π= 8n+ 1 +13 8n+ 3 −3 8n+ 5 −5 8n+ 7, (A.7) 625π 10n+ 1 +184 10n+ 3 −16 10n+ 7 −11 10n+ 9, Nilakantha’s accelerated series for π15 (A.8) 2π= 4n+ 1 −1 4n+ 3, (A.9) 8π 4n+ 1 −1 4n+ 3, (A.10) 16π 4n+ 1 −3 4n+ 3, (A.11) 9π= n!(6n)! 29 6n+ 1 −1 6n+ 5, (A.12) 24√3π= n!(6n)! 131 6n+ 1 −1 6n+ 5, (A.13) 16π n!(6n)! 11 6n+ 1 −1 6n+ 5, (A.14) 4π 3n+ 1 +1 3n+ 2, (A.15) 500π 5n+ 1 +57 5n+ 2 +12 5n+ 3 +7 5n+ 4 , (A.16) 256π 4n+ 1 +72 4n+ 2 +15 4n+ 3, (A.17) 1024π 6n+ 1 +6 6n+ 3 −27 6n+ 5, (A.18) 64π 64 n 6n+ 1 +24 6n+ 2 +24 6n+ 3 +18 6n+ 4 +9 6n+ 5. [1] V. Adamchik and S. Wagon, A simple formula for π, Amer. Math. Monthly 104 (1997), 852–855. [2] J. Arndt and C. Haenel, Pi—Unleashed, 2nd ed., Springer, Berlin, 2001. [3] D. H. Bailey, A compendium of BBP-type formulas for mathematical constants, 2013; www.davidhbailey.com/dhbpapers/bbp-formulas.pdf. [4] P. Billingsley, Probability and Measure, 3rd ed., Wiley, New York, 1995. [5] J. W. Bober, Factorial ratios, hypergeometric series, and a family of step functions, J. London Math. Soc. 79 (2009), 422–444. 16 D. Brink [6] D. Brink, The spiral of Theodorus and sums of zeta-values at the half-integers, Amer. Math. Monthly 119 (2012), 779–786. [7] H. Cohen, F. Rodriguez Villegas and D. Zagier, Convergence acceleration of alter- nating series, Experiment. Math. 9 (2000), 3–12. [8] L. Euler, De summis serierum reciprocarum, Comment. Acad. Sci. Petropol. 7 (1740), 123–134; online: eulerarchive.maa.org, Enestr¨om index E41. [9] L. Euler, Institutiones Calculi Differentialis. . . , St. Petersburg, 1755; [E212]. [10] L. 
Euler, Investigatio quarundam serierum, quae ad rationem peripheriae circuli ad diametrum vero proxime definiendam maxime sunt accommodatae, Nova Acta Acad. Sci. Imp. Petropol. 11 (1798), 133–149; [E705].
[11] L. Euler, Series maxime idoneae pro circuli quadratura proxime invenienda, in: Opera Postuma I, St. Petersburg, 1862, 288–298; [E809].
[12] F. Hausdorff, Momentprobleme für ein endliches Intervall, Math. Z. 16 (1923), 220–
[13] J. D. Lawrence, A Catalog of Special Plane Curves, Dover, New York, 1972.
[14] R. Roy, The discovery of the series formula for π by Leibniz, Gregory and Nilakantha, Math. Mag. 63 (1990), 291–306.
[15] P. Stäckel, Eine vergessene Abhandlung Leonhard Eulers über die Summe der reziproken Quadrate der natürlichen Zahlen, Bibliotheca Math. 8 (1908), 37–60.

David Brink
Akamai Technologies
Larslejsstræde 6
1451 København K, Denmark
E-mail: dbrink@akamai.com

Received on 28.10.2014
and in revised form on 22.8.2015 (7975)
{"url":"https://www.researchgate.net/publication/283579663_Nilakantha's_accelerated_series_for_pi","timestamp":"2024-11-10T19:07:57Z","content_type":"text/html","content_length":"720249","record_id":"<urn:uuid:4af9dda4-7abb-4a0b-831a-f0c5810647ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00797.warc.gz"}
What is an astronomical unit? An astronomical unit (AU) is a fundamental measurement used in astronomy to denote distances within our solar system. It provides a convenient way to express distances on a scale relevant to planetary orbits and is particularly useful when dealing with the vast expanse of space. The concept of the astronomical unit is pivotal for understanding the relationships between celestial bodies in our solar system and plays a crucial role in astronomical calculations. The astronomical unit is defined as the average distance between Earth and the Sun. This distance is not constant due to the elliptical shape of Earth’s orbit, but taking the average provides a stable reference point. The International Astronomical Union (IAU) formally defined the astronomical unit as exactly 149,597,870.7 kilometers (about 93 million miles) in 2012. This definition replaced the earlier approximation based on radar reflections from the inner planets. One of the primary motivations for introducing the concept of the astronomical unit is the need for a standard unit of measurement within our solar system. Distances in space are immense, and using familiar terrestrial units like kilometers or miles becomes impractical when dealing with such vast scales. By anchoring measurements to the average Earth-Sun distance, astronomers have a more manageable reference for expressing distances within our solar system. The historical development of the astronomical unit is intertwined with the evolution of our understanding of the solar system. Early astronomers, including Claudius Ptolemy and Nicolaus Copernicus, proposed models of the solar system that did not rely on precise measurements of distances. It was only with the advancement of observational techniques and the work of astronomers like Johannes Kepler and Tycho Brahe that more accurate determinations of planetary orbits became possible. Kepler’s laws of planetary motion, formulated in the early 17th century, provided a crucial foundation for understanding the geometry of planetary orbits. His third law, in particular, relates the orbital period of a planet to its average distance from the Sun. This laid the groundwork for later astronomers to estimate the relative distances between planets. The first attempts to measure the Earth-Sun distance directly were made using triangulation during the 17th and 18th centuries. Astronomers aimed to observe the position of a planet from different points on Earth’s surface, forming a triangle with known side lengths. Although these efforts were pioneering, they lacked the precision needed for an accurate determination of the astronomical unit. It was not until the 19th century that technological advancements and more sophisticated observational techniques allowed for more precise measurements. The transits of Venus, where the planet passes across the face of the Sun, provided an opportunity to determine the Earth-Sun distance. By observing the transit from multiple locations on Earth and timing the duration of the transit, astronomers could use parallax to calculate the distance to Venus and, consequently, the astronomical unit. Notable expeditions were organized to observe the transits of Venus in 1761, 1769, 1874, and 1882. The results from these expeditions significantly improved the accuracy of the astronomical unit. Notable astronomers like Edmond Halley and Jean-Baptiste Joseph Delambre contributed to these efforts, refining our understanding of the solar system’s geometry. 
The introduction of radar technology in the mid-20th century provided another means of measuring the Earth-Sun distance. By bouncing radar signals off planets and timing their round-trip travel, astronomers could determine the distance to those planets with high precision. This method allowed for a more direct and continuous measurement of the astronomical unit, independent of planetary transits.

In the latter half of the 20th century, as space exploration advanced, spacecraft were equipped with instruments to measure distances to planets and other celestial bodies accurately. Pioneer, Voyager, and later space missions contributed valuable data that refined our understanding of planetary orbits and improved the accuracy of the astronomical unit.

While the definition of the astronomical unit has evolved over time, its fundamental purpose remains constant: to provide a standardized measure for expressing distances within our solar system. The concept becomes especially relevant when considering the vast distances between planets and the challenges of interplanetary exploration.

In addition to its role in measuring distances within the solar system, the astronomical unit serves as a fundamental parameter in Kepler's laws and other celestial mechanics equations. Expressing planetary distances in terms of astronomical units simplifies calculations and allows astronomers to focus on the underlying dynamics of the solar system.

As our understanding of the cosmos has expanded beyond the confines of our solar system, new units and measurements have been introduced to describe interstellar and intergalactic distances. The light-year, for example, represents the distance light travels in one year and is commonly used for expressing distances between stars. However, within the context of our solar system, the astronomical unit remains a crucial and widely used reference.
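To make the scale concrete, here is a small back-of-the-envelope sketch (an addition, not from the article) that uses the article's 2012 IAU value for the astronomical unit together with standard values for the speed of light and the length of a Julian year.

# Assumptions: IAU 2012 value for the AU (from the article); the speed of light
# and a 365.25-day Julian year are standard reference values added here.
AU_KM = 149_597_870.7            # kilometres per astronomical unit
LIGHT_SPEED_KM_S = 299_792.458   # speed of light in km/s
LIGHT_YEAR_KM = LIGHT_SPEED_KM_S * 60 * 60 * 24 * 365.25

print(f"Light takes about {AU_KM / LIGHT_SPEED_KM_S / 60:.1f} minutes to cross 1 AU")
print(f"One light-year is roughly {LIGHT_YEAR_KM / AU_KM:,.0f} AU")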
{"url":"https://www.thedailyscience.org/what-is-an-astronomical-unit.html","timestamp":"2024-11-13T11:41:18Z","content_type":"text/html","content_length":"61520","record_id":"<urn:uuid:ae8af55b-9772-424a-bf06-7e0e14089a51>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00052.warc.gz"}
Band Elimination, m-derived sections m-derived filter Band Elimination, m-derived sections m-derived filter: m-derived filters or m-type filters are a type of electronic filter designed using the image method. They were invented by Otto Zobel in the early 1920s. This filter type was originally intended for use with telephone multiplexing and was an improvement on the existing constant k type filter. The main problem being addressed was the need to achieve a better match of the filter into the terminating impedances. In general, all filters designed by the image method fail to give an exact match, but the m-type filter is a big improvement with suitable choice of the parameter m. The m-type filter section has a further advantage in that there is a rapid transition from the cut-off frequency of the pass band to a pole of attenuation just inside the stop band. Despite these advantages, there is a drawback with m-type filters; at frequencies past the pole of attenuation, the response starts to rise again, and m-types have poor stop band rejection. For this reason, filters designed using m-type sections are often designed as composite filters with a mixture of k-type and m-type sections and different values of m at different points to get the optimum performance from both types. The building block of m-derived filters, as with all image impedance filters, is the "L" network, called a half-section and composed of a series impedance Z, and a shunt admittance Y. The m-derived filter is a derivative of the constant k filter. The starting point of the design is the values of Z and Y derived from the constant k prototype and are given by where k is the nominal impedance of the filter, or R0. The designer now multiplies Z and Y by an arbitrary constant m (0 < m < 1). There are two different kinds of m-derived section; series and shunt. To obtain the m-derived series half section, the designer determines the impedance that must be added to 1/mY to make the image impedance ZiT the same as the image impedance of the original constant k section. From the general formula for image impedance, the additional impedance required can be shown to be To obtain the m-derived shunt half section, an admittance is added to 1/mZ to make the image impedance ZiΠ the same as the image impedance of the original half section. The additional admittance required can be shown to be The general arrangements of these circuits are shown in the diagrams to the right along with a specific example of a low pass section. A consequence of this design is that the m-derived half section will match a k-type section on one side only. Also, an m-type section of one value of m will not match another m-type section of another value of m except on the sides which offer the Zi of the k-type. Operating frequency For the low-pass half section shown, the cut-off frequency of the m-type is the same as the k-type and is given by From this it is clear that smaller values of m will produce closer to the cut-off frequency and hence will have a sharper cut-off. Despite this cut-off, it also brings the unwanted stop band response of the m-type closer to the cut-off frequency, making it more difficult for this to be filtered with subsequent sections. The value of m chosen is usually a compromise between these conflicting requirements. There is also a practical limit to how small m can be made due to the inherent resistance of the inductors. 
This has the effect of causing the pole of attenuation to be less deep (that is, it is no longer a genuinely infinite pole) and the slope of cut-off to be less steep. This effect becomes more marked as is brought closer to , and there ceases to be Image impedance m-derived prototype shunt low-pass filter ZiTm image impedance for various values of m. Values below cut-off frequency only shown for clarity. The following expressions for image impedances are all referenced to the low-pass prototype section. They are scaled to the nominal impedance R0 = 1, and the frequencies in those expressions are all scaled to the cut-off frequency ωc = 1. Series sections The image impedances of the series section are given by As with the k-type section, the image impedance of the m-type low-pass section is purely real below the cut-off frequency and purely imaginary above it. From the chart it can be seen that in the passband the closest impedance match to a constant pure resistance termination occurs at approximately m = 0.6 Transmission parameters m-Derived low-pass filter transfer function for a single half-section For an m-derived section in general the transmission parameters for a half-section are given by For the particular example of the low-pass L section, the transmission parameters solve differently in three frequency bands.
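Several of the symbolic expressions above did not survive conversion, so as a practical illustration here is a hedged sketch of how element values for an m-derived low-pass section are commonly worked out. It uses the standard textbook constant-k and m-derived low-pass prototype formulas from memory rather than anything stated in this article, and the numerical inputs (R0 = 600 ohms, fc = 3 kHz, m = 0.6) are arbitrary example values; verify the formulas against a filter-design reference before relying on them.

import math

def m_derived_lowpass(r0, fc, m=0.6):
    # Constant-k low-pass prototype values (textbook formulas, to be verified):
    L = r0 / (math.pi * fc)        # total series inductance
    C = 1 / (math.pi * r0 * fc)    # total shunt capacitance
    # m-derived T-section built from the prototype:
    section = {
        "series_L_each_arm": m * L / 2,          # two series arms of the T
        "shunt_C": m * C,                        # capacitor in the shunt branch
        "shunt_L": (1 - m ** 2) * L / (4 * m),   # inductor in series with shunt_C
        "f_infinity": fc / math.sqrt(1 - m ** 2) # pole of attenuation just above cut-off
    }
    return L, C, section

L, C, sec = m_derived_lowpass(r0=600, fc=3000, m=0.6)
print(f"k-type prototype: L = {L * 1e3:.2f} mH, C = {C * 1e9:.1f} nF")
for name, value in sec.items():
    print(name, value)

For m = 0.6 the pole of attenuation lands at 1.25 times the cut-off frequency, which is consistent with the sharp transition just inside the stop band described above.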
{"url":"https://www.brainkart.com/article/Band-Elimination,-m-derived-sections-m-derived-filter_12506/","timestamp":"2024-11-10T12:48:40Z","content_type":"text/html","content_length":"36495","record_id":"<urn:uuid:e836b0a4-36ad-4d7e-b129-19afb020a391>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00449.warc.gz"}
The simplest rocket ever
This is a nice calculation on dropping balls stacked on top of each other; it was also an activity of my PGCHE. Many thanks to my Year 1 students for discussing this topic with me.
Before we start, grab these things
• Pen and paper.
• Two balls of different weight (like a basketball and a tennis ball, or a tennis and a squash ball, but any two bouncing objects would work).
Try it yourself
I hope you were able to find the balls. Now stack them on top of each other, with the heavy ball at the bottom and the light ball on top, and drop them. If you couldn't find the balls, this is what I meant:
Why is the small ball shooting up that fast? The tennis ball is going much, much higher than it would if you dropped it alone. Somehow, it's being helped by the big basketball! As we will see, the big ball provides "fuel" to the small ball. And this is conceptually the same thing that happens in a rocket.
Key lesson
Today's key concept is the conservation of energy and linear momentum. We believe these are among the most fundamental laws of Nature, at the backbone of our entire understanding of the physical world. Stacking balls is a neat example that shows them at play.
Setting up the stage
Let us recall that the kinetic energy of a particle of mass \(m\) and velocity \(v\) is \(E=\frac{1}{2}m v^2\). The particle's linear momentum is \(p = mv\).
In our case we have two objects, so let's indicate the mass of the big ball with \(M\) and that of the small ball with \(m\). Their velocities are \(v_M\) and \(v_m\). Initially, both balls are falling down with the same velocity. Let's call this \(v\). So, we start with both velocities directed downwards and equal in magnitude: \(v_M = v_m = v\).
If you're not sure why the two velocities must be the same, it's time to revise the famous experiment by Galileo Galilei.
Two collisions
To understand why the tennis ball shoots up, we now need to track what happens to energy and momentum during the various collisions. Here is a schematic representation:
1. The first collision that takes place is that of the big ball and the ground (forget about the small ball for a second). We can very safely assume that the mass of the Earth is much much (much) bigger than the mass of the ball (how much? that's a nice calculation you could do if you're interested!). In other terms, the Earth does not move! Because the bounce off the effectively immovable ground is (ideally) elastic, the big ball keeps its kinetic energy: its speed is unchanged and its velocity is simply reversed. After hitting the floor it moves upwards, still with speed \(v\).
2. Ok, now the big ball is bouncing up while the small ball is still falling down. We need to study the head-on collision between the two balls. The unknowns of the problem are the final velocities of the balls; let's call them \(v'_m\) and \(v'_M\). Here is where energy and momentum conservation become crucial. The energy before and after the collision must be the same, so one has
\(\frac{1}{2} M v_M^2 + \frac{1}{2} m v_m^2 = \frac{1}{2} M v_M'^2 + \frac{1}{2} m v_m'^2.\)
Linear momentum is also conserved, which means
\(M v_M - m v_m = M v_M' + m v_m'.\)
The minus sign in front of the second term is there because the small ball is going down, not up. We have all the ingredients! We know the velocities before the second collision (easy, it's just \(v_M = v_m = v\)) and we have two simple equations for \(v'_m\) and \(v'_M\).
Up to you now
Grab pen and paper and roll up your sleeves. Solve those two coupled equations.
To simplify things, we are really only interested in \(v'_m\). You can find \(v'_M\) from one equation, plug it into the other, and derive \(v'_m\). This activity should take you about 5 minutes.
Check your work
You should have obtained a second-degree equation for \(v'_m\). Second-degree equations have two solutions, which in this case are
\(v'_m = -v\) and \(v'_m = \frac{3M - m}{M + m}\, v.\)
The first solution cannot possibly be right (can you say why? Hint: is the small ball going up or down in the experiment we started with?). So the second expression must be the physical solution. That's how fast the small ball is shooting up.
Sum up
If the first ball is much more massive than the second one (\(M \gg m\)), the final velocity is close to \(v'_m \simeq 3v\) (can you see why? Formally, this is a mathematical limit). The small ball goes up approximately three times faster! In other terms, the small ball is stealing some of the energy and momentum from the big ball. This is the same thing that happens in a rocket: fuel is pushed down such that the capsule with the astronauts can gain energy and momentum and reach, say, the International Space Station.
More about rockets: did you know they can steal momentum even from other planets? That's called a gravitational slingshot and it's the only way rockets can reach the outer Solar System relatively quickly.
Stretching you further
Now, try to think about what happens if you were to put a third ball on top (you can try! Basketball + tennis ball + golf ball, but go outside or the golf ball will easily damage your ceiling!). The second ball goes up three times faster than the first one, so the third ball must go up three times faster than the second one. That is nine times the initial velocity! For one ball we have \(v'_m \simeq 3v\); for two balls \(v'_m \simeq 3^2 v = 9v\). Let's stretch this further: if you imagine stacking \(N\) balls such that those at the bottom are always much heavier than those at the top, the final ball will receive a velocity \(v'_m \simeq 3^{N-1} v\). The velocity increases exponentially with the number of balls!
Can we really make a rocket out of this? Yes! At least conceptually. To escape the gravitational pull of the Earth and reach outer space one needs a velocity of about 11 km/s (that is called the escape velocity; do you know how to compute it?).
Imagine we were dropping our balls from a height \(h\) of 1 m. The velocity \(v\) with which they hit the ground is given by (again: energy conservation) \(\frac{1}{2} m v^2 = mgh\). The gravitational acceleration \(g\) is about 9.8 m/s\(^2\), which means that the velocity \(v\) is about 4.5 m/s. Now, plug this number into the equation we derived, \(v'_m \simeq 3^{N-1} v\), for a few values of \(N\). For \(N = 9\) the final velocity is about 30 km/s, which is enough to send the smallest ball out into space! So: 9 balls on top of each other and you make a real rocket! That's a great idealized experiment, but back to reality now. Do you think this is really practical? Think critically about all the approximations we made that might invalidate the calculation.
And how about exploding stars? This simple problem also has an exciting analogy with supernova explosions! Let's finish this activity off with the video below. You see now why I said you shouldn't try the three-ball experiment inside?
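If you want to check these numbers with a few lines of code (my addition, not part of the original activity), here is a quick sketch of the idealized N-ball rocket using the same inputs as above: a 1 m drop, g = 9.8 m/s², and a factor of 3 per collision.

import math

g = 9.8          # gravitational acceleration, m/s^2
h = 1.0          # drop height, m
v = math.sqrt(2 * g * h)   # impact speed from energy conservation, about 4.4 m/s
v_escape = 11.2e3          # Earth's escape velocity, m/s (approximate)

for N in range(1, 10):
    v_top = 3 ** (N - 1) * v   # idealized speed of the top ball after all collisions
    flag = "escapes!" if v_top > v_escape else ""
    print(f"N = {N}: top ball leaves at about {v_top / 1000:.2f} km/s {flag}")
# Around N = 9 the top ball exceeds the escape velocity -- on paper, at least.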
{"url":"https://davidegerosa.com/simplestrocket/","timestamp":"2024-11-09T19:37:38Z","content_type":"text/html","content_length":"61954","record_id":"<urn:uuid:eb6f9d0f-356c-4787-8189-c93ab3be22e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00198.warc.gz"}
What is standard deviation in statistics PPT?
Definition:
• Standard Deviation is the positive square root of the average of the squared deviations taken from the arithmetic mean.
• The standard deviation is represented by the Greek letter σ (sigma).
• Formula: σ = √( Σ(x − x̄)² / n )
What is standard deviation PDF?
Standard deviation is a measurement designed to find the spread of the data around the calculated mean. It is one of the tools for measuring dispersion. To understand it well, it helps to first be clear about the related terms (mean, median, mode, and variance) and their uses.
What is mean deviation in statistics Slideshare?
The mean deviation is the first measure of dispersion that we will use that actually uses each data value in its computation. It is the mean of the distances between each value and the mean. It gives us an idea of how spread out from the center the set of values is.
What are the uses of standard deviation?
The standard deviation is used to measure the spread of values in a dataset. Individuals and companies use standard deviation all the time in different fields to gain a better understanding of their data.
What are the properties of standard deviation?
6 Important Properties of Standard Deviation
• It cannot be negative.
• It is only used to measure spread or dispersion around the mean of a data set.
• It shows how much variation or dispersion exists from the average value.
• It is sensitive to outliers.
What is the importance of standard deviation?
The answer: Standard deviation is important because it tells us how spread out the values are in a given dataset.
What is mean and STD?
How are standard deviation and standard error of the mean different? Standard deviation measures the variability from specific data points to the mean. Standard error of the mean measures the precision of the sample mean as an estimate of the population mean.
What is a good standard deviation?
Statisticians have determined that values no greater than plus or minus 2 SD represent measurements that are closer to the true value than those that fall outside ±2 SD. Thus, most QC programs require that corrective action be initiated for data points routinely outside of the ±2 SD range.
What is the difference between mean deviation and standard deviation?
Measures of Dispersion:
• Mean Deviation: either the mean or the median can be used in calculating the mean deviation.
• Standard Deviation: only the mean is used in calculating the standard deviation.
Why is it called standard deviation?
The name "standard deviation" for SD came from Karl Pearson. I would guess no more than that he wanted to recommend it as a standard measure. If anything, I guess that references to standardization either are independent or themselves allude to SD.
What is the concept of standard deviation?
A standard deviation (or σ) is a measure of how dispersed the data is in relation to the mean. Low standard deviation means data are clustered around the mean, and high standard deviation indicates data are more spread out.
What are properties of standard deviation?
The standard deviation of a random variable X, denoted σ or σ(X), is defined as the square root of the variance: σ(X) = √(Var[X]).
What is standard deviation used for? What is standard deviation?
Standard deviation tells you how spread out the data is. It is a measure of how far each observed value is from the mean. In any distribution, about 95% of values will be within 2 standard deviations of the mean. What is standard deviation in real life? Standard deviation is a measure of how far away individual measurements tend to be from the mean value of a data set. The standard deviation of company A’s employees is 1, while the standard deviation of company B’s wages is about 5. Where is standard deviation used? The standard deviation is used in conjunction with the mean to summarise continuous data, not categorical data. In addition, the standard deviation, like the mean, is normally only appropriate when the continuous data is not significantly skewed or has outliers. What is standard deviation explain with example? The standard deviation measures the spread of the data about the mean value. It is useful in comparing sets of data which may have the same mean but a different range. For example, the mean of the following two is the same: 15, 15, 15, 14, 16 and 2, 7, 14, 22, 30.
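To make the last example concrete, here is a short sketch (not part of the original Q&A) that computes the mean, the mean deviation, and the population standard deviation of the two data sets quoted above; using the population rather than the sample standard deviation is a choice made here for simplicity.

from statistics import mean, pstdev

a = [15, 15, 15, 14, 16]
b = [2, 7, 14, 22, 30]

for data in (a, b):
    m = mean(data)
    md = sum(abs(x - m) for x in data) / len(data)   # mean (absolute) deviation
    sd = pstdev(data)                                # population standard deviation
    print(f"data={data}: mean={m}, mean deviation={md:.2f}, std dev={sd:.2f}")
# Both sets have mean 15, but the second is far more spread out.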
{"url":"https://www.bearnaiserestaurant.com/writing-advice/what-is-standard-deviation-in-statistics-ppt/","timestamp":"2024-11-04T21:19:51Z","content_type":"text/html","content_length":"56206","record_id":"<urn:uuid:153ab054-48cc-43a1-8646-f3731b64e0df>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00155.warc.gz"}
#2: Notes on ‘the desertification of signs and men’ #2 Notes on ‘the desertification of signs and men’ Banality is a theme Baudrillard returns to again and again in his writings and interviews. It is for him quite simply an inevitable product of modernity—everywhere modernity spreads, banality follows. “The desertifiation of signs and men” is a phrase that appears in one of Baudrillard’s most celebrated books, America; it pertains to this process of banalisation. The paragraph in question begins, “The natural deserts tell me what I need to know about the deserts of the sign.” If I recall the printed book correctly, the italicisation of ‘at’ is a scanning error. The unceasing, viral-like spread of banality is intimately tied to ‘the end of transcendence’, another important Baudrillardian theme. Increasingly flattened out—metaphysically speaking—the world of the late twentieth and early twenty-first centuries seemingly offers us no ladders up to other spiritual planes. No side doors either. Life now is, or at any rate appears to be, decidedly earth-bound, confined to the space-time of the quotidian. However, it would be a mistake to assume Baudrillard views the entire modern era as one of only negative developments. In a later book he applauds the way modernity has “liberated us from the feudal and the religious”. “Liberation from the religious” should, I think, be conceived as liberation from Christian piety and religious certainty—it is a freeing from that medieval conviction that life has a fixed and known meaning. But for Baudrillard modern science is equally mistaken in its certainty that there is no meaning to life or the universe, and that the spiritual dimension of existence is a delusion. Baudrillard’s vision is perhaps best understood as a mystic one. Yet this is a sceptical, Pyrrhonian mysticism, highly doubtful that the ultimate truth of the world can ever be known. America, out of all his books, is the one with the most optimistic take on modernity, or at any rate the modernity to be found in the United States of the late twentieth century. As JB says, “America is the original version of modernity. We [ie Europe] are the dubbed or subtitled version.” With the benefit of a certain vision, life in eighties America, in all its banality, can be seen to possess a peculiar poetry and mystery (a rich vein that vaporwave would mine over two decades One of the odder things about America is that banalisation and entropy are not viewed, in this book, as exclusively or even mostly negative. Recall that that vision of ‘the desertification of signs and men’ is exalting. It is as if Baudrillard has unexpectedly arrived at a serene acceptance, not only of the United States’ civilisational trajectory, but of the world’s. Here is a state of being not far from those we encounter in the writings of Eckhart and Lao Tzu, indeed in the works of many mystics throughout history. For Jean Baudrillard, though, it was not to last. For whatever reason, he returned to rage: “What you get to read in the papers today makes your blood boil” as he put it in one 2001 interview. So what are we to make of this process of homogenisation and banalisation we are all still very much caught up in, a process which has clearly progressed much further since America hit book stores back in 1986? I think serene acceptance is beyond the capabilities of most of us, so we have to find other ways of surviving in an entropic culture that increasingly resembles a desert. 
Certainly it would help if we could arrive at a plausible theory of what is behind this remorseless spread of sameness and banality. There are the usual suspects at which we can point the finger: liberalism, capitalism, science and technology. But these kinds of explanations are by now overfamiliar to the point of being, dare I say it, banal. There’s an idea Baudrillard puts forward in a later interview that strikes me as a lot more interesting: the notion of an uncertainty revolution having taken place in Western societies, an epistemological revolution which is now in the process of going global. This turn toward a generalised state of uncertainty in all fields was something that was lying in wait for us on the road of History. The postmodern turn, which I date to the late 1950s and early 1960s, was not the beginning of this revolution, but rather the moment when its effects really began to make themselves felt across the whole culture. Deep epistemological uncertainty had been creeping up on us for some time—just consider the way Kant called into question the very possibility of objective knowledge. But how is the uncertainty revolution tied to banalisation? I believe the link is found in the way all convictions, all deeply held beliefs, have been undermined, and a conviction is usually the spur, the motivator of a passion. The decline of the passions we have witnessed in the decades since the Second World War is, I presume, obvious to almost everyone—along with the homogenisation of life which has accompanied it. In itself, then, the uncertainty revolution was and is neither good nor bad—it was simply inevitable. I would even say this revolution should have been a step forward for our civilisation, toward lucidity, toward an acceptance that certainty is forever unlikely in this world and that we should learn to live passionately even in uncertainty. But this has not happened—we don’t know how to cope with not knowing, and increasingly we instead neurotically insist on new certainties—I’m especially thinking of the dogmas of the current iteration of Left-wing ideology. But extreme neurosis shouldn’t be mistaken for passion, and there’s something deeply unconvincing about the ‘convictions’ of today’s Left. You get the sense it isn’t passion that drives these people, but desperation. Groupthink, conformism, safety in numbers—there’s nothing more banal, and little more abject, than unthinking herd behaviour like this. These ‘radical individualists’ believe what they believe because large numbers of others believe it, and because these beliefs are backed up by power—for these two reasons above all. After America, Baudrillard often returns to this theme of banalisation. With his Stateside serenity having vanished, he frames the unstoppable advance of the desert of banality in increasingly bleak terms. He speaks often of loss—of the Westerner having ‘lost his alterity’, or sometimes ‘his shadow’ (a clear allusion to the Jungian concept of the Shadow). Without these spiritual aspects of his being, the Westerner falls into a state of being ‘identical to himself’. He becomes ‘the banalised individual’, or as we might dub him, the bugman. Maybe I'm out of my depth, but do you think people can even survive without certainty/a narrative? Isn't that what makes the world intelligible in the first place? Without "what for?" the "what?" becomes just white noise - a bunch of trivia and factoids with no possible interpretation. 
Even the bodily senses of the world only have meaning through the evolutionary "telos" of survival of the fittest via trial-and-error, whether consciously recognized as such or not. Eg. a rabbit might not be conscious of this telos, but its smell, vision, hearing, interpretation, ie its entire phenomenology, is determined by this telos. Functionally, a "what for?" is inherently built into all of us from the ground up. And all this entirely within the empirical, scientific, kantian phenomenal realm. So, in all this uncertainty, must there be and is there somewhere a backstop of intelligiblity, a narrative? Expand full comment 2 more comments...
{"url":"https://www.someprivatediagonal.com/p/2-notes-on-the-desertification-of","timestamp":"2024-11-13T19:22:54Z","content_type":"text/html","content_length":"194495","record_id":"<urn:uuid:4fcae25a-67fa-45ef-abb6-6499d30e8ba8>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00792.warc.gz"}
SAT Math Prep - Option Education Dubai This article intends to explore various areas of the SAT Math component, designed to align with the math you are learning in school. Working hard in your math class and applying those skills to your science classes will give you the foundation for the SAT Math Test. The raw score, which is the number of correct answers, on the test is converted into a scaled score between 200 and 800 for each section. As there is no negative marking on the test, skipped or wrong questions do not add or subtract from your raw score. The Math Test has two portions. In the first section, you are given 55 minutes to complete 38 questions wherein calculators are permitted. This test section includes questions involving a more multifaceted mathematical approach. Using a calculator would give you a chance to work more efficiently. Some problems, however, might be easier to solve without a calculator. So, it will be up to you to decide whether or not to use one. The subsequent portion of the Math Test contains 20 questions, and 25 minutes will be allotted to complete it. Here, the focus is the test of your fluency with individual topics and concepts. This section does not permit a calculator, which adds a challenge to the exam. Multiple-choice forms the major chunk of the examination – around 80 percent – while the other 20% are gridded responses, which can include non-negative integers, fractions, or decimals. So, if you get a negative answer, rechecking your work is always a good idea. A set of reference formulas will be provided at the beginning of the test. You may find these facts and formulas helpful as you answer some of the questions, but depending on them alone without having sufficient practice would not bring out good results as it slows you down. To do well, you should ensure you are already comfortable using them. The Math test emphasizes three main areas: Questions from the area we call “Heart of Algebra” necessitate you to create, manipulate, and solve algebraic equations. The next area is Problem-Solving and Data Analysis, which examines your ability to use percentages, proportions, and ratios appropriately to solve real-world problems and construe logic from graphs and tables. The third area, the Passport to Advanced Math, involves questions requiring you to demonstrate an acquaintance with more complex equations or functions. These are math skills you will want to master if you want to pursue a career in science, technology, engineering, or math. On the Math test, there are a small number of questions that fall outside of the three main areas. These questions, collectively classified as ‘Additional Topics in Math,’ will focus on key concepts, including area and volume, coordinate geometry, and basic trigonometry. During the Math Test, some of the questions will be included from science and social science frameworks and other realistic settings. You are in the right place to learn more about the SAT Math Test at Option Training Institute. Time for some hands-on practice and real-time ‘smart’ training. Let’s begin!
{"url":"https://optioneducation.ae/tag/sat-math/","timestamp":"2024-11-03T18:21:57Z","content_type":"text/html","content_length":"141323","record_id":"<urn:uuid:bc171df4-c81a-4261-ba8f-b2c05aeee376>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00371.warc.gz"}
Solvable lattice models for metals with SciPost Submission Page Solvable lattice models for metals with Z2 topological order by Brin Verheijden, Yuhao Zhao, Matthias Punk This Submission thread is now published as Submission summary Authors (as registered SciPost users): Matthias Punk Submission information Preprint Link: https://arxiv.org/abs/1908.00103v2 (pdf) Date accepted: 2019-11-26 Date submitted: 2019-10-23 02:00 Submitted by: Punk, Matthias Submitted to: SciPost Physics Ontological classification Academic field: Physics • Condensed Matter Physics - Theory Specialties: • Quantum Physics Approach: Theoretical We present quantum dimer models in two dimensions which realize metallic ground states with Z2 topological order. Our models are generalizations of a dimer model introduced in [PNAS 112,9552-9557 (2015)] to provide an effective description of unconventional metallic states in hole-doped Mott insulators. We construct exact ground state wave functions in a specific parameter regime and show that the ground state realizes a fractionalized Fermi liquid. Due to the presence of Z2 topological order the Luttinger count is modified and the volume enclosed by the Fermi surface is proportional to the density of doped holes away from half filling. We also comment on possible applications to magic-angle twisted bilayer graphene. Author comments upon resubmission We thank the referee for the very positive report and the helpful comments on our manuscript. Below we give a detailed response to the referee’s questions and comments. We updated our manuscript accordingly and hope this facilitates a timely publication of our work. With many thanks and best regards, Brin Verheijden, Yuhao Zhao, Matthias Punk Response to referee’s requested changes: System Message: WARNING/2 (<string>, line 6) Title overline too short. Response to referee’s requested changes: 1.) So far we haven’t studied confinement transitions in our dimer model, but this is indeed a very interesting question. Such transitions would be manifest in the spontaneous breaking of lattice symmetries, leading to valence bond solid order for the purely bosonic RK model. In the presence of fermionic dimers we would expect a small Fermi surface which satisfies the conventional Luttinger count, since the Fermi surface is reconstructed due to the breaking of lattice symmetries. Unfortunatley we are not aware of definitive statements about the nature of confinement transitions between the Z2 fractionalized Fermi liquid studied in our work, and a confining phase in terms of an ordinary Fermi liquid with broken symmetries. However, related questions have been investigated recently in several numerical works, which studied square lattice models of fermionic matter coupled to Z2 gauge fields. We’ve added a corresponding comment in the discussion and conclusions section of our revised manuscript. 2.) We are not aware of a symmetry based argument which would explain why the dispersion minimum is at the M points of the Brillouin zone. Note, however, that a change of the sign of the dimer resonance amplitude delta t_1 shifts the position of the dispersion minimum back to the Gamma point. Moreover, a perturbation of the amplitude t_2 from the exactly solvable line (which we didn’t compute here, because we identified t_1 as the important perturbation) can shift the position of the dispersion minimum to the K points at the Brillouin zone corners. The position of the dispersion minima thus clearly depends on microscopic details. 3.) 
We want to emphasize that our model is at best a very simplistic toy model for the unconventional metallic state that has been observed in magic-angle twisted bilayer graphene (TBG) on the hole-doped side of the Mott-like insulator at a filling of nu = -2. Note that microscopic details of TBG are rather complex, as evidenced by the subtleties encountered in previous works that tried to construct a faithful tight-binding description. For this reason we wanted to refrain from making strong claims about the applicability of our model to TBG, besides pointing a few basic observations. As suggested by the referee, we now discuss relations to TBG in a new section of our manuscript, which contains an extended discussion of what was previously found in the conclusions section. List of changes 1.) New section 5 with an extended discussion of the relation of our results to TBG 2.) A new comment on confinement transitions in the Conclusions & Discussions section 3.) new references 34, 37, 38, 39, 40 Published as SciPost Phys. 7, 074 (2019)
{"url":"https://scipost.org/submissions/1908.00103v2/","timestamp":"2024-11-05T06:17:00Z","content_type":"text/html","content_length":"32975","record_id":"<urn:uuid:087164f8-e9d2-44ff-9bf6-711b5405bf6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00005.warc.gz"}
Top Unsupervised Machine Learning Courses - Learn Unsupervised Machine Learning Online Filter by The language used throughout the course, in both instruction and assessments. Results for "unsupervised machine learning" Skills you'll gain: Algorithms, Machine Learning, Machine Learning Algorithms, Dimensionality Reduction, Applied Machine Learning, Human Learning, Statistical Machine Learning Skills you'll gain: Algorithms, Machine Learning, Machine Learning Algorithms, Applied Machine Learning, Human Learning, Deep Learning, Machine Learning Software, Mathematics, Reinforcement Learning, Statistical Machine Learning Skills you'll gain: Machine Learning, Machine Learning Algorithms, Applied Machine Learning, Algorithms, Deep Learning, Machine Learning Software, Artificial Neural Networks, Human Learning, Statistical Machine Learning, Python Programming, Regression, Mathematics, Tensorflow, Critical Thinking, Network Model, Network Architecture, Reinforcement Learning Skills you'll gain: Machine Learning, Machine Learning Algorithms, Human Learning, Statistical Machine Learning, Data Analysis, Applied Machine Learning, Algorithms, Probability & Statistics, Forecasting, Statistical Analysis, Regression, Deep Learning, General Statistics, Python Programming, Machine Learning Software, Artificial Neural Networks, Exploratory Data Analysis, Feature Engineering, Dimensionality Reduction, Statistical Tests, Reinforcement Learning, Databases, Network Model, Tensorflow University of Colorado Boulder Skills you'll gain: Dimensionality Reduction Skills you'll gain: Machine Learning, Algorithms, Data Analysis, Human Learning, Python Programming, Regression • Skills you'll gain: Machine Learning, Python Programming • Skills you'll gain: Algorithms, Applied Machine Learning, Human Learning, Machine Learning, Machine Learning Algorithms, Machine Learning Software, Microsoft Azure, Cloud Computing, Regression, • Skills you'll gain: Algebra, Linear Algebra, Mathematics, Machine Learning, Mathematical Theory & Analysis, Computer Programming, Python Programming, Machine Learning Algorithms, Calculus, Algorithms, Differential Equations, Problem Solving, Statistical Analysis, Data Visualization, Dimensionality Reduction, Computer Programming Tools, Statistical Programming, Probability & Statistics, Regression • Skills you'll gain: Machine Learning, Calculus, Differential Equations, Mathematics, Machine Learning Algorithms, Regression, Algebra, Algorithms, Artificial Neural Networks, General Statistics, Linear Algebra, Probability & Statistics, Statistical Analysis • Skills you'll gain: Algorithms, Machine Learning, Machine Learning Algorithms, Python Programming, Applied Machine Learning, Statistical Machine Learning, Data Analysis, Regression, Human Learning, Statistical Programming, Computer Programming Searches related to unsupervised machine learning In summary, here are 10 of our most popular unsupervised machine learning courses
{"url":"https://www.coursera.org/courses?query=unsupervised%20machine%20learning","timestamp":"2024-11-02T09:39:56Z","content_type":"text/html","content_length":"787188","record_id":"<urn:uuid:1db7a062-46a7-4fc0-9540-4ea74783556f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00652.warc.gz"}
Probability of Detection Probability of detection (POD) evaluation is a commonly used method of quantifying the reliability of an inspection. Inspections in practice have multiple factors which affect the detectability of flaws (e.g. human factors, uncertainty in the geometry of the part under test, or sensor/instrument noise). A POD study seeks to quantify the effect of these factors by answering the question: What is the largest flaw that could be missed when this inspection is implemented in production? POD provides an answer to this question by assessing the probability of detecting a flaw as a function of some defining characteristic (typically flaw length or depth). Along with estimating false call rate, POD has been successfully implemented in multiple industries as the primary method of reliability quantification for NDI inspections. This article is intended to introduce key aspects of POD that will help in understanding the scope and limitations of a POD study.
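The article stops short of showing how a POD curve is actually estimated. One common approach (one of several, and not described in the article itself) is to fit a logistic model to hit/miss inspection data and read off quantities such as a90, the flaw size detected with 90% probability. The sketch below uses made-up data and a crude gradient-ascent fit purely as an illustration of the idea.

import math
import random

random.seed(0)
# Synthetic hit/miss data: larger flaws are detected more often (assumed parameters).
sizes = [random.uniform(0.5, 5.0) for _ in range(200)]              # flaw size, e.g. in mm
def true_pod(a): return 1 / (1 + math.exp(-(-4.0 + 2.0 * a)))       # curve used only to simulate
hits = [1 if random.random() < true_pod(a) else 0 for a in sizes]

# Fit POD(a) = 1 / (1 + exp(-(b0 + b1*a))) by full-batch gradient ascent on the
# log-likelihood; slow but adequate for an illustration.
b0 = b1 = 0.0
for _ in range(20000):
    q = [1 / (1 + math.exp(-(b0 + b1 * a))) for a in sizes]
    b0 += 0.02 * sum(h - p for h, p in zip(hits, q)) / len(sizes)
    b1 += 0.02 * sum((h - p) * a for h, p, a in zip(hits, q, sizes)) / len(sizes)

a90 = (math.log(0.9 / 0.1) - b0) / b1   # flaw size with 90% probability of detection
print(f"fitted b0={b0:.2f}, b1={b1:.2f}, a90 ≈ {a90:.2f}")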
{"url":"https://www.nde-ed.org/NDEEngineering/POD/index.xhtml","timestamp":"2024-11-12T00:15:07Z","content_type":"application/xhtml+xml","content_length":"17813","record_id":"<urn:uuid:b4361387-3a6e-4671-862d-441c813f5027>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00883.warc.gz"}
I Beam Strength Calculator - CivilGang What is an I-Beam Strength Calculator? An I-Beam Strength Calculator is a tool used to estimate the maximum bending stress in an I-beam structure under a given load. It helps engineers ensure that the I-beam can safely support the applied loads without exceeding allowable stresses. Why use an I-Beam Strength Calculator? • Structural Analysis: It assists in analyzing the structural behavior of I-beams under different loads. • Design Considerations: Helps in designing I-beams that meet structural requirements and safety standards. • Material Selection: Allows for the selection of appropriate materials based on calculated stresses and loads. I-Beam Strength Calculator
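Under the hood, a calculator like this typically evaluates the flexure formula σ = M·c / I. The sketch below (an illustration added here, not the calculator's actual implementation) applies it to one specific loading case — a simply supported beam with a central point load, for which the maximum moment is M = P·L/4 — and the section properties I and c are placeholder values, not data for any particular beam.

def max_bending_stress(P, L, I, c):
    # Simply supported beam, central point load P over span L (assumed loading case).
    M_max = P * L / 4.0      # maximum bending moment at mid-span, N*m
    return M_max * c / I     # flexure formula sigma = M*c/I, result in Pa

# Example with placeholder numbers: 10 kN load on a 4 m span,
# I = 8.5e-5 m^4 and c = 0.15 m for some I-section.
sigma = max_bending_stress(P=10e3, L=4.0, I=8.5e-5, c=0.15)
print(f"max bending stress ≈ {sigma / 1e6:.1f} MPa")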
{"url":"https://civil-gang.com/i-beam-strength-calculator/","timestamp":"2024-11-05T22:41:27Z","content_type":"text/html","content_length":"90092","record_id":"<urn:uuid:3248a2df-ceef-4df9-bbfa-dac397712513>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00112.warc.gz"}
120 Months in Years: Understanding Time Conversions
Converting 120 Months in Years
To convert 120 months to years, we can use the following formula:
Number of years = Number of months / Number of months per year
There are 12 months in a year. Therefore:
Number of years = 120 months / 12 months per year
Number of years = 10 years
So, 120 months is equivalent to 10 years.
Accounting for Leap Years
Leap years add an extra day to the calendar approximately every four years. This does not change the number of years in 120 months — 120 calendar months is always exactly 10 calendar years — but it does change the number of days those 120 months contain. Here's why:
• A standard year has 365 days.
• A leap year has 366 days (one extra day).
Since a leap year has one more day, a 12-month stretch that includes 29 February is one day longer than a 12-month stretch that does not.
Impact on Conversion: This difference is negligible when converting a small number of months, and it never changes the year count itself. However, if you need to know how many days a span of 120 months covers, the leap days matter slightly:
• To count the days exactly, you would need to know which specific calendar years fall within the 120 months.
• Without this information, assuming the long-run average frequency of leap years provides an estimate.
There are two common approaches to estimate the impact of leap days:
1. Statistical Approach: Statistically, a leap year occurs roughly every four years, so a 10-year span contains about 10 / 4 = 2.5 leap days — in practice either 2 or 3, depending on where the span starts. That gives roughly 10 × 365 + 2.5 ≈ 3,652.5 days in 120 months.
2. Conservative Approach: For a simpler estimate, you could ignore leap days altogether and use 10 × 365 = 3,650 days. This slightly undercounts the actual number of days.
Applying the approaches to 120 months:
• Statistical Approach: 3,652 or 3,653 days, depending on whether 2 or 3 leap days fall within the span.
• Conservative Approach: 3,650 days, ignoring leap days.
Choosing the Right Approach: The most appropriate approach depends on the level of precision required. If a rough estimate suffices, the conservative approach might be sufficient. For scenarios demanding greater accuracy, count the actual leap days that fall within the specific 120-month window. Either way, the year count is unaffected: 120 months is 10 years.
So in many personal, cultural, and historical contexts, 120 months represents a substantial period of time when visualized as a decade or 10 years. Converting months to years helps provide a more relatable and understandable perspective on long durations of time.
Read: How Many Years in a Decade? : Calculating the Duration
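Going back to the conversion formula at the top of the article, the same arithmetic can be expressed as a tiny script (added here as an illustration):

def months_to_years(months, months_per_year=12):
    # Whole years plus any leftover months, mirroring the formula above.
    years, remainder = divmod(months, months_per_year)
    return years, remainder

years, leftover = months_to_years(120)
print(f"120 months = {years} years and {leftover} months")   # 10 years and 0 months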
{"url":"https://www.jocalendars.com/120-months-in-years/","timestamp":"2024-11-13T06:00:29Z","content_type":"text/html","content_length":"88666","record_id":"<urn:uuid:19f3f035-5b8b-4646-803b-c08788d2c69b>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00516.warc.gz"}
209. Minimum Size Subarray Sum
• If the sum of all elements of nums is < s, there is no proper subarray.
My first solution: Just precompute the sum of the 0-th element through the k-th element as sum[k+1]. Then the sum of the i-th element through the j-th element will be sum[j] - sum[i]. Then traverse i from 0 to nums.size() to find the start of the subarray. Then traverse j from i+1 to nums.size() + 1 to find the end of the subarray. Find the min length, replacing the older one if a shorter subarray is found. However, the timing is not good (this is O(n^2)).
Should we compute the prefix sums initially? I remember some subarray problems that did. However, we don't need it in this problem.
Second solution ( O(n) ): Count from the 0-th element and find the smallest j such that the running sum >= s. Gradually shorten the subarray by subtracting the element at the beginning of the subarray until the sum < s. Extend at the tail and shorten at the beginning.

class Solution {
public:
    int minSubArrayLen(int s, vector<int>& nums) {
        int begin = 0;
        int end = 0;
        int n = nums.size();
        int sum = 0;
        int length = INT_MAX;
        while (end < n) {
            sum += nums[end++];
            while (sum >= s) {
                length = min(end - begin, length);
                sum -= nums[begin++];
            }
        }
        return length == INT_MAX ? 0 : length;
    }
};

Well, you may wonder how can it be O(n) since it contains an inner while loop. Well, the key is that the while loop executes at most once for each starting position start. Then start is increased by 1 and the while loop moves to the next element. Thus the inner while loop runs at most O(n) times during the whole for loop from 0 to n - 1. Thus both the for loop and while loop has O(n) time complexity in total and the overall running time is O(n). (Cited from: https://discuss.leetcode.com/topic/17063/4ms-o-n-8ms-o-nlogn-c)
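For reference, here is a minimal sketch of the "first solution" described above (prefix sums plus a double loop, O(n^2)); it is reconstructed from the description rather than taken from the original post, so the function name and exact loop bounds are my own. It assumes the same headers as the class above (<vector>, <climits>, <algorithm>) and using namespace std.

// Reconstruction of the prefix-sum idea described above (not the author's code).
// sum[k] = nums[0] + ... + nums[k-1], so nums[i..j-1] sums to sum[j] - sum[i].
int minSubArrayLenPrefixSum(int s, vector<int>& nums) {
    int n = nums.size();
    vector<int> sum(n + 1, 0);
    for (int k = 0; k < n; ++k) sum[k + 1] = sum[k] + nums[k];
    int length = INT_MAX;
    for (int i = 0; i < n; ++i) {              // i = start index of the subarray
        for (int j = i + 1; j <= n; ++j) {     // j = one past the end index
            if (sum[j] - sum[i] >= s) {        // nums[i..j-1] reaches the target
                length = min(length, j - i);
                break;                         // any larger j only gives a longer subarray
            }
        }
    }
    return length == INT_MAX ? 0 : length;
}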
{"url":"http://xiaoxumeng.com/minimum-size-subarray-sum/","timestamp":"2024-11-08T21:36:07Z","content_type":"text/html","content_length":"22944","record_id":"<urn:uuid:f16ca38e-7c42-492b-812e-34d549283f7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00801.warc.gz"}
Screen many simulation inputs: a parent function to do_nmb_sim() — screen_simulation_inputs

cutpoint_methods = get_inbuilt_cutpoint_methods(),
pair_nmb_train_and_evaluation_functions = FALSE,
meet_min_events = TRUE,
min_events = NA,
show_progress = FALSE,
cl = NULL

A value (or vector of values): Sample size of training set. If missing, a sample size calculation will be performed and the calculated size will be used.
A value (or vector of values): Number of simulations to run.
A value (or vector of values): Sample size for evaluation set.
A value (or vector of values): Simulated model discrimination (AUC).
A value (or vector of values): Simulated event rate of the binary outcome being predicted.
Cutpoint methods to include. Defaults to use the inbuilt methods. This doesn't change across calls to do_nmb_sim().
A function or NMBsampler (or list of) that returns a named vector of NMB assigned to classifications, used for obtaining the cutpoint on the training set.
A function or NMBsampler (or list of) that returns a named vector of NMB assigned to classifications, used for obtaining the cutpoint on the evaluation set.
Logical. Whether or not to pair the lists of functions passed for fx_nmb_training and fx_nmb_evaluation. If two treatment strategies are being used, it may make more sense to pair these because selecting a value-optimising or cost-minimising threshold using one strategy but evaluating another is likely unwanted.
Whether or not to incrementally add samples until the expected number of events (sample_size * event_rate) is met. (Applies to sampling of training data only.)
A value: the minimum number of events to include in the training sample. If less than this number are included in a sample of size sample_size, additional samples are added until min_events is met. The default (NA) will use the expected value given the event_rate and the sample_size.
Logical. Whether to display a progress bar.
A cluster made using parallel::makeCluster(). If a cluster is provided, the simulation will be done in parallel.

# Screen for optimal cutpoints given increasing values of
# model discrimination (sim_auc)
# \donttest{
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_screen_obj <- screen_simulation_inputs(
  n_sims = 50,
  n_valid = 10000,
  sim_auc = seq(0.7, 0.9, 0.1),
  event_rate = 0.1,
  fx_nmb_training = get_nmb,
  fx_nmb_evaluation = get_nmb
)
# }
{"url":"https://docs.ropensci.org/predictNMB/reference/screen_simulation_inputs.html","timestamp":"2024-11-10T08:20:15Z","content_type":"text/html","content_length":"18157","record_id":"<urn:uuid:dbc33fbb-bfa7-4943-b877-f87d5af091e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00123.warc.gz"}
Microbiology - Online Tutor, Practice Problems & Exam Prep In this video, we're going to begin our lesson on concentration gradients and diffusion. A concentration gradient is defined as a difference in the concentration of a substance between two different areas. If we're comparing two areas and one area has a higher or lower concentration than another area, then a concentration gradient exists because there is a difference in the concentration between those two areas. However, if we're comparing two areas, the difference in the concentration of a substance between them is what defines a concentration gradient. Now a molecule will be moving with or down its concentration gradient when that molecule is going from an area of high concentration down to an area of low concentration. On the other hand, a molecule will be moving against or up its concentration gradient when that molecule is going from an area of low concentration to an area of high concentration. So, let’s observe the image below to clarify this idea. Notice that the image highlights concentration gradients and is divided into two halves. On the left-hand side, we see a high concentration of pink molecules, whereas on the right-hand side, there is a significantly lower concentration of these pink molecules. If one of these pink molecules is moving from an area of high concentration towards an area of low concentration, represented by the biker here, then the molecule will be moving down or with its concentration gradient. Just like it doesn't take much energy for a biker to cruise down a hill, it also does not require any energy for a molecule to move down or with its concentration gradient from areas of high concentration toward areas of low concentration. On the right-hand side, we have an image that shows an area of low concentration of pink molecules on the left and an area of much higher concentration of the pink molecules on the right. So, if a molecule is trying to move from an area of low concentration towards an area of high concentration and the molecule's movement is represented by this biker going uphill, then the molecule would be moving up or against its concentration gradient. Similarly, just as it takes energy for a biker to bike up a hill, it takes energy for a molecule to move up or against its concentration gradient from an area of low concentration to an area of high concentration. Notice here we are pointing out that energy is required. This introduction to the concentration gradient and molecules moving with or down and against or up the concentration gradient wraps up our lesson. We'll have the chance to practice applying these concepts as we move forward in our course. In the next lesson video, we'll discuss more about diffusion. I'll see you guys in the next video.
{"url":"https://www.pearson.com/channels/microbiology/learn/jason/ch-6-cell-membrane-transport/concentration-gradients-and-diffusion-Bio-1?chapterId=49adbb94","timestamp":"2024-11-04T01:59:41Z","content_type":"text/html","content_length":"433610","record_id":"<urn:uuid:0b0136b7-8fd2-4cbb-8c6d-643f211d29b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00730.warc.gz"}
This glossary gives brief definitions of all the key terms used in the book. A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | Y adjusted R2: a measure of how well a model fits the sample data that automatically penalises models with large numbers of parameters. Akaike information criterion (AIC): a metric that can be used to select the best fitting from a set of competing models and that incorporates a weak penalty term for including additional parameters. alternative hypothesis: a formal expression as part of a hypothesis testing framework that encompasses all of the remaining outcomes of interest aside from that incorporated into the null hypothesis. arbitrage: a concept from finance that refers to the situation where profits can be made without taking any risk (and without using any wealth). asymptotic: a property that applies as the sample size tends to infinity. autocorrelation: a standardised measure, which must lie between −1 and +1, of the extent to which the current value of a series is related to its own previous values. autocorrelation function: a set of estimated values showing the strength of association between a variable and its previous values as the lag length increases. autocovariance: an unstandardised measure of the extent to which the current value of a series is related to its own previous values. autoregressive conditional heteroscedasticity (ARCH) model: a time series model for volatilities. autoregressive (AR) model: a time series model where the current value of a series is fitted with its previous values. autoregressive moving average (ARMA) model: a time series model where the current value of a series is fitted with its previous values (the autoregressive part) and the current and previous values of an error term (the moving average part). autoregressive volatility (ARV) model: a time series model where the current volatility is fitted with its previous values. auxiliary regression: a second stage regression that is usually not of direct interest in its own right, but rather is conducted in order to test the statistical adequacy of the original regression balanced panel: a dataset where the variables have both time series and cross-sectional dimensions, and where there are equally long samples for each cross-sectional entity (i.e. no missing data). Bayes information criterion: see Schwarz’s Bayesian information criterion (SBIC). BDS test: a test for whether there are patterns in a series, predominantly used for determining whether there is evidence for nonlinearities. BEKK model: a multivariate model for volatilities and covariances between series that ensures the variance–covariance matrix is positive definite. BHHH algorithm: a technique that can be used for solving optimisation problems including maximum likelihood. backshift operator: see lag operator. Bera–Jarque test: a widely employed test for determining whether a series closely approximates a normal distribution. best linear unbiased estimator (BLUE): is one that provides the lowest sampling variance and which is also unbiased. between estimator: is used in the context of a fixed effects panel model, involving running a cross-sectional regression on the time averaged values of all the variables in order to reduce the number of parameters requiring estimation. biased estimator: where the expected value of the parameter to be estimated is not equal to the true value. 
bid–ask spread: the difference between the amount paid for an asset (the ask or offer price) when it is purchased and the amount received if it is sold (the bid). binary choice: a discrete choice situation with only two possible outcomes. bivariate regression: a regression model where there are only two variables – the dependent variable and a single independent variable. bootstrapping: a technique for constructing standard errors and conducting hypothesis tests that requires no distributional assumptions and works by resampling from the data. Box–Jenkins approach: a methodology for estimating ARMA models. Box–Pierce Q-statistic: a general measure of the extent to which a series is autocorrelated. break date: the date at which a structural change occurs in a time series or in a model’s parameters. Breusch–Godfrey test: a test for autocorrelation of any order in the residuals from an estimated regression model, based on an auxiliary regression of the residuals on the original explanatory variables plus lags of the residuals. broken trend: a process which is a deterministic trend with a structural break. calendar effects: the systematic tendency for a series, especially stock returns, to be higher at certain times than others. capital asset pricing model (CAPM): a financial model for determining the expected return on stocks as a function of their level of market risk. capital market line (CML): a straight line showing the risks and returns of all combinations of a risk-free asset and an optimal portfolio of risky assets. Carhart model: a time series model for explaining the performance of mutual funds or trading rules based on four factors: excess market returns, size, value and momentum. causality tests: a way to examine whether one series leads or lags another. censored dependent variable: where values of the dependent variable above or below a certain threshold cannot be observed, while the corresponding values for the independent variables are still central limit theorem: the mean of a sample of data having any distribution converges upon a normal distribution as the sample size tends to infinity. chaos theory: an idea taken from the physical sciences whereby although a series may appear completely random to the naked eye or to many statistical tests, in fact there is an entirely deterministic set of non-linear equations driving its behaviour. Chow test: an approach to determine whether a regression model contains a change in behaviour (structural break) part-way through based on splitting the sample into two parts, assuming that the break-date is known. Cochrane–Orcutt procedure: an iterative approach that corrects standard errors for a specific form of autocorrelation. coefficient of multiple determination: see R2. cointegration: a concept whereby time series have a fixed relationship in the long run. cointegrating vector: the set of parameters that describes the long-run relationship between two or more time series. common factor restrictions: these are the conditions on the parameter estimates that are implicitly assumed when an iterative procedure such as Cochrane–Orcutt is employed to correct for conditional expectation: the value of a random variable that is expected for time t + s (s = 1, 2, . . .) given information available until time t. conditional mean: the mean of a series at a point in time t fitted given all information available until the previous point in time t − 1. 
conditional variance: the variance of a series at a point in time t fitted given all information available until the previous point in time t − 1. confidence interval: a range of values within which we are confident to a given degree (e.g. 95% confident) that the true value of a given parameter lies. confidence level: one minus the significance level (expressed as a proportion rather than a percentage) for a hypothesis test. consistency: the desirable property of an estimator whereby the calculated value of a parameter converges upon the true value as the sample size increases. contemporaneous terms: those variables that are measured at the same time as the dependent variable – i.e. both are at time t. continuous variable: a random variable that can take on any value (possibly within a given range). convergence criterion: a pre-specified rule that tells an optimiser when to stop looking further for a solution and to stick with the best one it has already found. copulas: a flexible way to link together the distributions for individual series in order to form joint distributions. correlation: a standardised measure, bounded between −1 and +1, of the strength of association between two variables. correlogram: see autocorrelation function. cost of carry (COC) model: shows the equilibrium relationship between spot and corresponding futures prices where the spot price is adjusted for the cost of ‘carrying’ the spot asset forward to the maturity date. covariance matrix: see variance–covariance matrix. covariance stationary process: see weakly stationary process. covered interest parity (CIP): states that exchange rates should adjust so that borrowing funds in one currency and investing them in another would not be expected to earn abnormal profits. credit rating: an evaluation made by a ratings agency of the ability of a borrower to meet its obligations to meet interest costs and to make capital repayments when due. critical values (CV): key points in a statistical distribution that determine whether, given a calculated value of a test statistic, the null hypothesis will be rejected or not. cross-equation restrictions: a set of restrictions needed for a hypothesis test that involves more than one equation within a system. cross-sectional regression: a regression involving series that are measured only at a single point in time but across many entities. cumulative distribution: a function giving the probability that a random variable will take on a value lower than some pre-specified value. CUSUM and CUSUMSQ tests: tests for parameter stability in an estimated model based on the cumulative sum of residuals (CUSUM) or cumulative sum of squared residuals (CUSUMSQ) from a recursive daily range estimator: a crude measure of volatility calculated as the difference between the day’s lowest and highest observed prices. damped sine wave: a pattern, especially in an autocorrelation function plot, where the values cycle from positive to negative in a declining manner as the lag length increases. data generating process (DGP): the true relationship between the series in a model. data mining: looking very intensively for patterns in data and relationships between series without recourse to financial theory, possibly leading to spurious findings. data revisions: changes to series, especially macroeconomic variables, that are made after they are first published. data snooping: see data mining. day-of-the-week effect: the systematic tendency for stock returns to be higher on some days of the week than others. 
degrees of freedom: a parameter that affects the shape of a statistical distribution and therefore its critical values. Some distributions have one degree of freedom parameter, while others have two. degree of persistence: the extent to which a series is positively related to its previous values. dependent variable: the variable, usually denoted by y, that the model tries to explain. deterministic: a process that has no random (stochastic) component. Dickey–Fuller (DF) test: an approach to testing whether a series contains a unit root (i.e., whether it is non-stationary).
{"url":"https://www.cambridge.org/us/universitypress/textbooks/introductory-econometrics/glossary","timestamp":"2024-11-05T21:59:01Z","content_type":"text/html","content_length":"157210","record_id":"<urn:uuid:2621838f-19e0-4137-a53c-8812ba37f4ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00008.warc.gz"}
So here we're going to say that our standard deviation measures how close data results are in relation to the mean or average value. So basically, the smaller your standard deviation is, the more precise your measurements will be in relation to the mean or average value. So here, the formula for standard deviation is \( s = \sqrt{\dfrac{\sum_i (x_i - \overline{x})^2}{n-1}} \), where \( s \) stands for standard deviation: the square root of the summation of each measurement minus the average, squared, divided by \( n-1 \). In terms of this equation, we're going to say here that \( x_i \) represents an individual measurement that we're undertaking in terms of our data set. We're going to say that our average or mean value is represented by \( \overline{x} \). Variance is just our standard deviation squared, \( s^2 \). Later on, when we get more into statistical analysis, we'll see that the \( F \)-test has a close relationship to the variance of our calculations. Next, we have \( n \), which represents our number of measurements, and \( n-1 \) represents our degrees of freedom. Finally, we have our relative standard deviation, also called our coefficient of variation. That is just our standard deviation divided by our mean or average value, times 100: \( \mathrm{RSD} = \dfrac{s}{\overline{x}} \times 100\% \). At some point, we're going to run into using one of these variables in terms of the standard deviation equation. Just remember, the smaller your standard deviation is, the more precise all your measurements are within your data set. Now, your measurements can be precise, but that doesn't necessarily mean they will be accurate. Remember, accuracy is how close you are to a true value. Your measurements themselves may be close to one another, but still far off from the actual true value. So the accuracy may not be good. Now that you know the basics of standard deviation, we'll take a look at the example left below. So, click on the next video and see how I approach this question, which deals with standard deviation.
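As a quick illustration (my own numbers, not the example referred to in the video): for three replicate measurements 10.1, 10.2 and 10.3, the mean is \( \overline{x} = 10.2 \), the squared deviations are \( 0.01, 0, 0.01 \), and so
\( s = \sqrt{\dfrac{0.01 + 0 + 0.01}{3-1}} = \sqrt{0.01} = 0.1 \), giving a relative standard deviation of \( \dfrac{0.1}{10.2} \times 100\% \approx 1\% \).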
{"url":"https://www.pearson.com/channels/analytical-chemistry/learn/jules/ch-4-statistics/mean-evaluation","timestamp":"2024-11-14T14:55:08Z","content_type":"text/html","content_length":"222434","record_id":"<urn:uuid:cb8240e9-6661-4170-a9ca-777115968401>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00306.warc.gz"}
Propellers - PDFCOFFEE.COM Citation preview POWERING AND PROPULSION BY Prof. Dr. Galal Younis for 3rd. Year - Naval Architecture & Marine Engineering Students February 2002 1 POWERING OF SHIPS _____________________ Introduction : When a ship moves through water at a certain speed , she experiences resisting forces due to water and air . These resisting forces must be overcome by a thrust - producing mechanism . This mechanism is mostly a PROPELLER . The convergence into screw propellers was the last step in long and hard centuries of human inventions towards the development of sea transport systems . Oars and sails were the first elements in propulsion series followed by paddle wheels until about 1845 , when the first screw propelled English steamer " Great Britain " entered into service. From that time the screw propeller has reigned supreme in the realm of marine propulsion . Although the paddle wheels were still used for a long time after 1845 , they proved less popular than the screw propeller due to the following: 1. While the screw propeller is well protected from damage, the paddle wheel is projected outside the hull which makes it liable to damage in rough seas , also , the immersion of the paddle wheel varies with displacement and the wheel comes out of water during rolling causing erratic course keeping . 2. The paddle wheel increases the overall width of the ship and increases the resistance of the ship, while the propeller has not such defects . 3. The paddle wheel is generally less efficient than screw propeller . 4. The paddle wheel must be driven at low RPM , that requires a big and heavy machinery . For the a.m. reasons, it can be said that there is no real competitor to the screw propeller . Propelling Machineries. ---------------------------The propeller , whatever its type , needs an engine to provide it with the necessary power for rotation , the propelling machinery may be one of the following : 1. Steam Engines. 2. Internal combustion or Diesel engines,(constant torque). This type of engines is divided into groups according to the speed of the engine as follows : a. Slow speed engines directly coupled to propeller, and use low quality fuel but the size and weight of the engine are bigger than other types . b. Medium speed and high speed engines : Geared coupled to propeller , and use light fuels , but the size and weight of the engine are less than slow speed engines . 3. Marine Turbines . This type of engines is a high fuel consumer , with a high number of rotation , used mainly in war ships where the economy meets less concern . Factors Influencing the Choice of Propelling Machinery : -------------------------------------------------------1. Weight and size of engine , 2. Cost and reliability , 3. Fuel consumption and cost of upkeep , and 4. Suitability for the ship and propeller . Marine Ratings : -----------------------Rating Definitions : Ratings are based on ISO 8665 Conditions ( 100 Kpa , 25oC , and 30% relative humidity) 1. Continuous Duty (CON) The continuous duty engines are those intended for continuous use requiring uninterrupted service at full power . Typical application include : ocean going displacement hulls such as fishing trawlers , merchant ships , tugboats , towboats and all ships requires uninterrupted power operation . 2. Heavy Duty (HD) The heavy duty engines are those intended continuous use in variable load applications where full power is limited to eight hours out of every ten hours of operation. 
Also , reduced power operation must be at or below 200 rpm of the maximum rated RPM ( medium speed engines) . [ 5000 hours/year] Typical vessel applications include : mid-water trawlers ,ferries , crewboats . 3. Medium Continuous Duty (MCD) The medium Continuous Duty Engine are those intended for continuous use in variable load applications where full power is limited to 6 hours out of every twelve hours of operation. Also , reduced power operation must be at or below 200 rpm of the maximum rated RPM ( Medium speed engines) ,[3000 hours/year]. Typical vessel application include : Planning hull ferries , fishing vessels designed for high speed to and from fishing grounds , off-shore service boats , yachts ,and short trip coastal freighters 4. Intermittent Duty (INT) The intermittent duty engine are those intended for intermittent use in variable load applications where full power is limited to two hours out of every eight hours of operation . Also , reduced power operation must be at or below 200 rpm of the maximum RPM , [1500 hours/year ]. Typical vessel applications include custom boats , police vessels , pilot boats . 5. High Output (HO) The high output engine are those intended for use in variable load operation where full power operation is limited to one hour out of 8 hours operation . Also , reduced power must be at or below 200 rpm of the maximum rated RPM . [ 300 hours/year ] . Typical vessel applications include: pleasure crafts and sport fishers . POWER DEFINITIONS: ------------------------------------1. Indicated Power (Steam engines) PI . The power of steam engines is determined by measuring the steam stress cycle in the cylinder . This power is called the indicated power . PI = P.L.A.N / 1000 K.W P = Pressure intensity (Pa) L = Length of stroke (m) A = Area of piston N = Number of revolutions (rps). 2. Brake Power ( Internal combustion engines ) PB . The power measured at the fly wheel of internal combustion engines outside the cylinder by means of mechanical or electrical brake is called the brake power . PB = M . 2 π n / 1000 M = Engine torque (N.m) n = rps 3. Shaft Power, PS ( Turbines and Diesel ) The power measured at the tail shaft close to the propeller is called the shaft power . In diesel engines it is determined from the brake power by reducing bearing , transmission , gearing and mechanical losses . 4. Delivered Power ( Developed Power ) PD . It is the power actually delivered to the propeller , it is somewhat less than the power measured at the tail shaft due to the losses in stern tube bearing and the bearing between stern tube and the position where the shaft power is measured . 5. Thrust Power , PT . It is the power developed by the propeller thrust at the speed of advance Va . PT = S . Va /1000 K.W. 6. Effective Power , PE . It is the power required to tow a ship at a constant speed V without its propulsive device . PE = R . V /1000 K.W. R = Ship's total resistance . PT < PD < PS < PB Locations of Powers Measurement PROPULSION EFFICIENCIES The efficiency of any engineering object is defined as the ratio between the useful power output and the input power into the system . 1. The Open Water Efficiency of Propeller ηo It is the ratio between the power developed by the thrust of the propeller and that absorbed by propeller when operating in open water with uniform inflow velocity Va . η o = PT / PD = S . Va /( 2 . π. n ) . Mo Mo = the torque in open water 2. 
The Behind Efficiency η B It is the ratio between the power developed by the thrust of propeller and that absorbed by the propeller when operating behind a model or ship . ηB = PT / PD = S . Va /( 2 . π. n ) . M M = the torque in behind condition . 3. The Relative Rotative Efficiency ηR It is the ratio between propeller efficiency behind the hull and the efficiency in open water . η R = η B / η o = Mo / M 4. Transmission Efficiency ηt It is the ratio between the delivered power to the propeller and the shaft power . ηt = PD / PS 5. Hull Efficiency ηH It is the ratio between the useful work done on the ship and the work done by the propeller . ηH = PE / PT = R . V / S . Va 6. Quasi-Propulsive Efficiency η D It is the ratio between the useful power or effective power and the power delivered to the propeller . ηD = PE / PD = ηo ηR η H 7. Propulsive Efficiency η P It is the ratio between the useful power and the shaft power . ηP = PE / PS = ηo ηH ηt 8. The Overall Propulsive Efficiency η o.a It is the ratio between the effective power and the brake power , the gearing and mechanical losses are considered . ηo.a = PE / PB = ηP ηG = ηo η R η t η H η G η m η G = the gearing efficiency η m = the mechanical efficiency INTERACTION BETWEEN HULL AND PROPELLER The wake phenomenon : ----------------------------- The wake is the phenomenon of dead water behind the ship . The wake speed is the difference between the ship speed Vs and the speed of advance Va. The wake components: The wake can be split up into three components :1- Potential wake : It is the wake obtained if the ship moves in an ideal fluid without friction and wave making .It is influenced by form of stern ( full , U shaped ), increased pressure and decreased speed. 2 - Wave wake : It is the wake component origination from the movement of the water particles in the gravity waves .The orbital motion may be added or reduced from the wave depending on whether a crest or trough of wave is existing . Wave Crest Wave Trough 3 - Frictional wake : It is the wake created due to friction . It depends on the thickness of boundary layer and speed distribution through it . The loss of kinetic energy of water particles resists the homogeneity of flow behind the ship . The wake fraction w : ----------------------The wake speed ( Vs - Va ) as a fraction of ship's speed is called the wake fraction . w = ( Vs - Va ) / Vs Va = Vs ( 1 - w ) w is called Taylor's wake fraction . Froude expressed the wake speed as a fraction of speed of advance . wf = ( Vs - Va ) / Va Va = Vs / ( 1 + wf ) The more popular wake fraction is Taylor's ( w ) . Directional inequalities of wake : --------------------------------1 - Circumferential inequality of wake . 2 - Radial inequality of wake . 3 - Axial inequality of wake . The circumferential and radial components are the important ones ,they are measured by Pitot tubes located in the screw disc . If the measuring devices located in absence of propeller ,the measured wake will be the Nominal wake , while if in presence of propeller it will be the Effective wake . The distribution of wake speed after being measured , integrated throughout the disc area and divided by the area , gives the mean effective wake . The propeller design depends primarily on the wake distribution in the disk of propeller . The open water charts of propeller series were made on the basis of the homogeneous wake distribution ( w = constant ) . The determination of (w) for preliminary design purposes may be performed using approximate formulae . 
1 - Taylor's :
w = 0.5 Cb - 0.05
w = 0.5 Cb - 0.10
w = 0.55 Cb - 0.2
1910 Single screw ship
1923 Twin screw ship
2 - Heckscher :
w = 0.70 Cp - 0.18   Single screw ship
w = 0.70 Cp - 0.30   Twin screw ship
w = 0.77 Cp - 0.28   Trawler
These formulae are for normal cargo ships at 0.54
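As a sketch of how these estimates are used (the block coefficient value here is only an assumed, typical figure): for a single screw cargo ship with \( C_b = 0.70 \), Taylor's single-screw formula gives \( w = 0.5 \times 0.70 - 0.05 = 0.30 \), so the speed of advance is \( V_a = V_s (1 - w) = 0.70\,V_s \), i.e. the propeller effectively advances through the water at only 70% of the ship's speed.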
{"url":"https://pdfcoffee.com/propellers-pdf-free.html","timestamp":"2024-11-03T09:07:55Z","content_type":"text/html","content_length":"45637","record_id":"<urn:uuid:9c59cadc-ce76-4f8b-a8c0-c0d2928cbe01>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00375.warc.gz"}
Instructor Page
Initial Publication Date: August 16, 2024
Guiding students through Semi-log and Log-log Plots
An instructor's guide to Log Plots
Kyle Fredrick (Pennsylvania Western University - California, PA)
Yongli Gao (The University of Texas at San Antonio)
What should students get out of this module?
After completing this module, a student should be able to:
• Evaluate data for the spread, or range, to determine the appropriateness of Semi-log or Log-log plots;
• Develop graphical relationships;
• Interpret graphical data to determine optimal fit of mathematical functions;
• Analyze mathematical models to explain and predict the behavior of a system.
Why are these math skills challenging to incorporate into courses?
Undergraduate Earth Science students are generally familiar with graphs and answering questions based on graphical data. However, when pressed, they often can't explain why the axes of a graph may not use a conventional numbering (linear) sequence, often starting at zero. In our experience, simply asking students what a log axis or exponential relationship is generates blank stares or confused responses. This presents a challenge because we use semi-log and log-log representations throughout geoscience courses, often assuming students know what they're looking at or that it is self-evident. Remediating their lack of familiarity takes time. But making sure students can read AND create or manipulate linear/semi-log/log-log/exponential graphs is critical to their ability to analyze and synthesize data.
What we don't include in the page?
The focus of the module is heavily weighted toward reading and interpreting graphs. There are additional The Math You Need for Majors modules addressing Exponential relationships, Logarithms, and Orders of Magnitude. Our module defers to those, especially Logarithms, which emphasizes log rules and calculations involving logarithmic equations. Both authors are Hydrogeologists by training, so the problems and examples tend to fall within that area, though we have made a concerted effort to include examples from across the Earth Science disciplinary spectrum.
Instructor resources
Support for teaching this quantitative skill
• One or more other resources that can help instructors teach about this topic. SERC collections such as Teaching Quantitative Skills can be a helpful place to look.
Examples of activities that use this quantitative skill
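One compact way to convey to students why these axes matter (a standard identity, offered here as a supplement rather than part of the module itself): a power law \( y = a x^{b} \) plots as a straight line on log-log axes, since \( \log y = \log a + b \log x \), while an exponential \( y = a\,10^{kx} \) plots as a straight line on semi-log axes, since \( \log y = \log a + kx \). The choice between semi-log and log-log therefore follows directly from which of the two forms straightens the data.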
{"url":"https://serc.carleton.edu/mathyouneed/geomajors/log-log/instructor.html","timestamp":"2024-11-07T09:11:28Z","content_type":"text/html","content_length":"49699","record_id":"<urn:uuid:44d5a3cd-d31f-4e2b-be13-1c97b078b46c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00100.warc.gz"}
782 Arcmin/Square Year to Radian/Square Year Arcmin/Square Year [arcmin/year2] Output 782 arcmin/square year in degree/square second is equal to 1.3087224984669e-14 782 arcmin/square year in degree/square millisecond is equal to 1.3087224984669e-20 782 arcmin/square year in degree/square microsecond is equal to 1.3087224984669e-26 782 arcmin/square year in degree/square nanosecond is equal to 1.3087224984669e-32 782 arcmin/square year in degree/square minute is equal to 4.7114009944807e-11 782 arcmin/square year in degree/square hour is equal to 1.696104358013e-7 782 arcmin/square year in degree/square day is equal to 0.000097695611021552 782 arcmin/square year in degree/square week is equal to 0.004787084940056 782 arcmin/square year in degree/square month is equal to 0.090509259259259 782 arcmin/square year in degree/square year is equal to 13.03 782 arcmin/square year in radian/square second is equal to 2.2841516593173e-16 782 arcmin/square year in radian/square millisecond is equal to 2.2841516593173e-22 782 arcmin/square year in radian/square microsecond is equal to 2.2841516593173e-28 782 arcmin/square year in radian/square nanosecond is equal to 2.2841516593173e-34 782 arcmin/square year in radian/square minute is equal to 8.2229459735423e-13 782 arcmin/square year in radian/square hour is equal to 2.9602605504752e-9 782 arcmin/square year in radian/square day is equal to 0.0000017051100770737 782 arcmin/square year in radian/square week is equal to 0.000083550393776613 782 arcmin/square year in radian/square month is equal to 0.0015796845776152 782 arcmin/square year in radian/square year is equal to 0.22747457917659 782 arcmin/square year in gradian/square second is equal to 1.4541361094076e-14 782 arcmin/square year in gradian/square millisecond is equal to 1.4541361094076e-20 782 arcmin/square year in gradian/square microsecond is equal to 1.4541361094076e-26 782 arcmin/square year in gradian/square nanosecond is equal to 1.4541361094076e-32 782 arcmin/square year in gradian/square minute is equal to 5.2348899938674e-11 782 arcmin/square year in gradian/square hour is equal to 1.8845603977923e-7 782 arcmin/square year in gradian/square day is equal to 0.00010855067891284 782 arcmin/square year in gradian/square week is equal to 0.0053189832667289 782 arcmin/square year in gradian/square month is equal to 0.1005658436214 782 arcmin/square year in gradian/square year is equal to 14.48 782 arcmin/square year in arcmin/square second is equal to 7.8523349908012e-13 782 arcmin/square year in arcmin/square millisecond is equal to 7.8523349908012e-19 782 arcmin/square year in arcmin/square microsecond is equal to 7.8523349908012e-25 782 arcmin/square year in arcmin/square nanosecond is equal to 7.8523349908012e-31 782 arcmin/square year in arcmin/square minute is equal to 2.8268405966884e-9 782 arcmin/square year in arcmin/square hour is equal to 0.000010176626148078 782 arcmin/square year in arcmin/square day is equal to 0.0058617366612931 782 arcmin/square year in arcmin/square week is equal to 0.28722509640336 782 arcmin/square year in arcmin/square month is equal to 5.43 782 arcmin/square year in arcsec/square second is equal to 4.7114009944807e-11 782 arcmin/square year in arcsec/square millisecond is equal to 4.7114009944807e-17 782 arcmin/square year in arcsec/square microsecond is equal to 4.7114009944807e-23 782 arcmin/square year in arcsec/square nanosecond is equal to 4.7114009944807e-29 782 arcmin/square year in arcsec/square minute is equal to 1.696104358013e-7 782 
arcmin/square year in arcsec/square hour is equal to 0.0006105975688847 782 arcmin/square year in arcsec/square day is equal to 0.35170419967759 782 arcmin/square year in arcsec/square week is equal to 17.23 782 arcmin/square year in arcsec/square month is equal to 325.83 782 arcmin/square year in arcsec/square year is equal to 46920 782 arcmin/square year in sign/square second is equal to 4.3624083282229e-16 782 arcmin/square year in sign/square millisecond is equal to 4.3624083282229e-22 782 arcmin/square year in sign/square microsecond is equal to 4.3624083282229e-28 782 arcmin/square year in sign/square nanosecond is equal to 4.3624083282229e-34 782 arcmin/square year in sign/square minute is equal to 1.5704669981602e-12 782 arcmin/square year in sign/square hour is equal to 5.6536811933768e-9 782 arcmin/square year in sign/square day is equal to 0.0000032565203673851 782 arcmin/square year in sign/square week is equal to 0.00015956949800187 782 arcmin/square year in sign/square month is equal to 0.003016975308642 782 arcmin/square year in sign/square year is equal to 0.43444444444444 782 arcmin/square year in turn/square second is equal to 3.6353402735191e-17 782 arcmin/square year in turn/square millisecond is equal to 3.6353402735191e-23 782 arcmin/square year in turn/square microsecond is equal to 3.6353402735191e-29 782 arcmin/square year in turn/square nanosecond is equal to 3.6353402735191e-35 782 arcmin/square year in turn/square minute is equal to 1.3087224984669e-13 782 arcmin/square year in turn/square hour is equal to 4.7114009944807e-10 782 arcmin/square year in turn/square day is equal to 2.7137669728209e-7 782 arcmin/square year in turn/square week is equal to 0.000013297458166822 782 arcmin/square year in turn/square month is equal to 0.0002514146090535 782 arcmin/square year in turn/square year is equal to 0.036203703703704 782 arcmin/square year in circle/square second is equal to 3.6353402735191e-17 782 arcmin/square year in circle/square millisecond is equal to 3.6353402735191e-23 782 arcmin/square year in circle/square microsecond is equal to 3.6353402735191e-29 782 arcmin/square year in circle/square nanosecond is equal to 3.6353402735191e-35 782 arcmin/square year in circle/square minute is equal to 1.3087224984669e-13 782 arcmin/square year in circle/square hour is equal to 4.7114009944807e-10 782 arcmin/square year in circle/square day is equal to 2.7137669728209e-7 782 arcmin/square year in circle/square week is equal to 0.000013297458166822 782 arcmin/square year in circle/square month is equal to 0.0002514146090535 782 arcmin/square year in circle/square year is equal to 0.036203703703704 782 arcmin/square year in mil/square second is equal to 2.3266177750522e-13 782 arcmin/square year in mil/square millisecond is equal to 2.3266177750522e-19 782 arcmin/square year in mil/square microsecond is equal to 2.3266177750522e-25 782 arcmin/square year in mil/square nanosecond is equal to 2.3266177750522e-31 782 arcmin/square year in mil/square minute is equal to 8.3758239901879e-10 782 arcmin/square year in mil/square hour is equal to 0.0000030152966364676 782 arcmin/square year in mil/square day is equal to 0.0017368108626054 782 arcmin/square year in mil/square week is equal to 0.085103732267663 782 arcmin/square year in mil/square month is equal to 1.61 782 arcmin/square year in mil/square year is equal to 231.7 782 arcmin/square year in revolution/square second is equal to 3.6353402735191e-17 782 arcmin/square year in revolution/square millisecond is equal to 
3.6353402735191e-23 782 arcmin/square year in revolution/square microsecond is equal to 3.6353402735191e-29 782 arcmin/square year in revolution/square nanosecond is equal to 3.6353402735191e-35 782 arcmin/square year in revolution/square minute is equal to 1.3087224984669e-13 782 arcmin/square year in revolution/square hour is equal to 4.7114009944807e-10 782 arcmin/square year in revolution/square day is equal to 2.7137669728209e-7 782 arcmin/square year in revolution/square week is equal to 0.000013297458166822 782 arcmin/square year in revolution/square month is equal to 0.0002514146090535 782 arcmin/square year in revolution/square year is equal to 0.036203703703704
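As a check on how these figures are produced (the general method, inferred from the units rather than stated on the converter page): one arcminute is \( \pi / 10800 \) radians, so
\( 782\ \text{arcmin/year}^2 \times \dfrac{\pi}{10800}\ \text{rad/arcmin} \approx 782 \times 2.9089 \times 10^{-4} \approx 0.22747\ \text{rad/year}^2 \),
which matches the radian/square year entry above. Conversions that also change the time unit additionally divide by the square of the number of seconds (or minutes, hours, days, months) in a year.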
{"url":"https://hextobinary.com/unit/angularacc/from/arcminpy2/to/radpy2/782","timestamp":"2024-11-09T20:14:53Z","content_type":"text/html","content_length":"113744","record_id":"<urn:uuid:eeccacfc-f2e5-42aa-89cc-bc0411684ad5>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00049.warc.gz"}
Conditional Statements C++ | HackerRank Solutions
Conditional Statements C++
Problem Statement :
if and else are two of the most frequently used conditionals in C/C++, and they enable you to execute zero or one conditional statement among many such dependent conditional statements. We use them in the following ways:
1. if: This executes the body of bracketed code starting with statement1 if condition evaluates to true.
if (condition) {
    statement1;
}
2. if - else: This executes the body of bracketed code starting with statement1 if condition evaluates to true, or it executes the body of code starting with statement2 if condition evaluates to false. Note that only one of the bracketed code sections will ever be executed.
if (condition) {
    statement1;
}
else {
    statement2;
}
3. if - else if - else: In this structure, dependent statements are chained together and the condition for each statement is only checked if all prior conditions in the chain evaluated to false. Once a condition evaluates to true, the bracketed code associated with that statement is executed and the program then skips to the end of the chain of statements and continues executing. If each condition in the chain evaluates to false, then the body of bracketed code in the else block at the end is executed.
if(first condition) {
    statement1;
}
else if(second condition) {
    statement2;
}
else if((n-1)'th condition) {
    statement(n-1);
}
else {
    statement(n);
}
Input Format
A single integer, n.
1 <= n <= 10^9
Output Format
If (1 <= n <= 9), then print the lowercase English word corresponding to the number (e.g., one for 1 , two for 2 , etc.); otherwise, print Greater than 9.
Solution :
Solution in C++ :
#include <iostream>
#include <cstdio>
#include <string>
#include <vector>
using namespace std;

int main() {
    // words for 1..9; index 0 is never used because n >= 1
    vector<string> arr = {"", "one", "two", "three", "four",
                          "five", "six", "seven", "eight", "nine"};
    int n;
    cin >> n;
    if (n > 9)
        cout << "Greater than 9" << endl;
    else
        cout << arr[n] << endl;
    return 0;
}
{"url":"https://hackerranksolution.in/conditionalstatementsccc/","timestamp":"2024-11-15T04:20:15Z","content_type":"text/html","content_length":"39858","record_id":"<urn:uuid:7cde92de-322f-4b0d-92c6-e9d1888ca029>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00211.warc.gz"}
The period of 1/k The meaning of the term ‘period’ is as follows. Take the case of 1/11 = 0.090909. Here the two digits 0 and 9 are repeated forever. So the period of 1/11 is 2. Another example is 1/7 = 0.142857142857… Here the 6 digits 142857 are repeated forever. So the period of 1/7 is 6. (There are lots of interesting properties for the number 1/7. For more details see this link) There is a Theorem about the period: The period of 1/k for integer k is always < k. I have a proof for this below.
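The proof itself is not included in the excerpt above, but the standard argument (offered here as a sketch, not necessarily the author's version) is a pigeonhole argument on the long-division remainders:
When 1 is divided by k by long division, the remainder at each step is one of the k values \( 0, 1, \ldots, k-1 \). If a remainder of 0 ever occurs the decimal terminates; otherwise only the \( k-1 \) nonzero remainders are available, so after at most \( k-1 \) steps some remainder must repeat, and from that point the quotient digits repeat as well. Hence the repeating block has length at most \( k-1 \), i.e. the period of \( 1/k \) is always \( < k \).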
{"url":"https://medielectronics.com/the-period-of-1-k/","timestamp":"2024-11-10T12:43:38Z","content_type":"text/html","content_length":"101790","record_id":"<urn:uuid:5e4f2a94-aef4-4ee4-9313-7127496ada2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00536.warc.gz"}
plotmatrix(X,Y) creates a matrix of subaxes containing scatter plots of the columns of X against the columns of Y. If X is p-by-n and Y is p-by-m, then plotmatrix produces an n-by-m matrix of plotmatrix(X) is the same as plotmatrix(X,X) except that the subaxes along the diagonal are replaced with histogram plots of the data in the corresponding column of X. For example, the subaxes along the diagonal in the ith column is replaced by histogram(X(:,i)). The tick labels along the edges of the plots align with the scatter plots, not the histograms. plotmatrix(___,LineSpec) specifies the line style, marker symbol, and color for the scatter plots. The option LineSpec can be preceded by any of the input argument combinations in the previous plotmatrix(ax,___) plots into the specified target axes, where the target axes is an invisible frame for the subaxes. [S,AX,BigAx,H,HAx] = plotmatrix(___) returns the graphic objects created as follows: • S – Chart line objects for the scatter plots • AX – Axes objects for each subaxes • BigAx – Axes object for big axes that frames the subaxes • H – Histogram objects for the histogram plots • HAx – Axes objects for the invisible histogram axes BigAx is left as the current axes (gca) so that a subsequent title, xlabel, or ylabel command centers text with respect to the big axes. Create Scatter Plot Matrix with Two Matrix Inputs Create X as a matrix of random data and Y as a matrix of integer values. Then, create a scatter plot matrix of the columns of X against the columns of Y. X = randn(50,3); Y = reshape(1:150,50,3); The subplot in the ith row, jth column of the figure is a scatter plot of the ith column of Y against the jth column of X. Create Scatter Plot Matrix with One Matrix Input Create a scatter plot matrix of random data. The subplot in the ith row, jth column of the matrix is a scatter plot of the ith column of X against the jth column of X. Along the diagonal are histogram plots of each column of X. X = randn(50,3); Specify Marker Type and Color Create a scatter plot matrix of random data. Specify the marker type and the color for the scatter plots. X = randn(50,3); The LineSpec option sets properties for the scatter plots. To set properties for the histogram plots, return the histogram objects. Modify Scatter Plot Matrix After Creation Create a scatter plot matrix of random data. rng default X = randn(50,3); [S,AX,BigAx,H,HAx] = plotmatrix(X); To set properties for the scatter plots, use S. To set properties for the histograms, use H. To set axes properties, use AX, BigAx, and HAx. Use dot notation to set properties. Set the color and marker type for the scatter plot in the lower left corner of the figure. Set the color for the histogram plot in the lower right corner. Use the title command to title the figure. S(3).Color = 'g'; S(3).Marker = '*'; H(3).EdgeColor = 'k'; H(3).FaceColor = 'g'; title(BigAx,'A Comparison of Data Sets') Input Arguments X — Data to display Data to display, specified as a matrix. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical Y — Data to plot against X Data to plot against X, specified as a matrix. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical LineSpec — Line style, marker, and color string scalar | character vector Line style, marker, and color, specified as a string scalar or character vector containing symbols. The symbols can appear in any order. 
You do not need to specify all three characteristics (line style, marker, and color). For example, if you omit the line style and specify the marker, then the plot shows only the marker and no line. Example: "--or" is a red dashed line with circle markers. Line Style Description Resulting Line "-" Solid line "--" Dashed line ":" Dotted line "-." Dash-dotted line Marker Description Resulting Marker "o" Circle "+" Plus sign "*" Asterisk "." Point "x" Cross "_" Horizontal line "|" Vertical line "square" Square "diamond" Diamond "^" Upward-pointing triangle "v" Downward-pointing triangle ">" Right-pointing triangle "<" Left-pointing triangle "pentagram" Pentagram "hexagram" Hexagram Color Name Short Name RGB Triplet Appearance "red" "r" [1 0 0] "green" "g" [0 1 0] "blue" "b" [0 0 1] "cyan" "c" [0 1 1] "magenta" "m" [1 0 1] "yellow" "y" [1 1 0] "black" "k" [0 0 0] "white" "w" [1 1 1] ax — Target axes Axes object Target axes that frames all the subaxes, specified as an Axes object. If you do not specify this argument, then plotmatrix uses the current axes. Output Arguments S — Chart line objects for scatter plots Chart line objects for the scatter plots, returned as a matrix. These are unique identifiers, which you can use to query and modify the properties of a specific scatter plot. AX — Axes objects for subaxes Axes objects for the subaxes, returned as a matrix. These are unique identifiers, which you can use to query and modify the properties of a specific subaxes. BigAx — Axes object for big axes Axes object for big axes, returned as a scalar. This is a unique identifier, which you can use to query and modify properties of the big axes. H — Histogram objects vector | [] Histogram objects, returned as a vector or []. These are unique identifiers, which you can use to query and modify the properties of a specific histogram object. If no histogram plots are created, then H is returned as empty brackets. HAx — Axes objects for invisible histogram axes vector | [] Axes objects for invisible histogram axes, returned as a vector or []. These are unique identifiers, which you can use to query and modify the properties of a specific axes. If no histogram plots are created, then HAx is returned as empty brackets. Extended Capabilities GPU Arrays Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. The plotmatrix function supports GPU array input with these usage notes and limitations: • This function accepts GPU arrays, but does not run on a GPU. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). Distributed Arrays Partition large arrays across the combined memory of your cluster using Parallel Computing Toolbox™. Usage notes and limitations: • This function operates on distributed arrays, but executes in the client MATLAB^®. For more information, see Run MATLAB Functions with Distributed Arrays (Parallel Computing Toolbox). Version History Introduced before R2006a R2015b: H returned as vector of histogram objects The H output argument is now a vector of histogram objects. In previous releases, it was a vector of patch objects.
{"url":"https://it.mathworks.com/help/matlab/ref/plotmatrix.html","timestamp":"2024-11-12T18:26:45Z","content_type":"text/html","content_length":"118232","record_id":"<urn:uuid:c16f9809-bb35-4329-8b42-c9e06f43f592>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00734.warc.gz"}
SQL get one result of counted column
Understanding and Extracting a Single Result from a Counted Column in SQL
Imagine you have a table called "Orders" storing information about customer purchases. You want to know how many orders were placed in total. You could use the COUNT() function in SQL to count the number of orders. However, you might only need the total count, not the entire table with every row. How can you extract this single value?
Let's look at a simple example:
SELECT COUNT(*) AS TotalOrders FROM Orders;
This query will give you a table with a single column named "TotalOrders" and a single row containing the total number of orders.
The Problem: You might want to use just this one value in another query or in your application code, rather than a multi-row result set. For an ungrouped aggregate like the query above this is already the case, because COUNT(*) without a GROUP BY always returns exactly one row. The question becomes more interesting once you group or sort the data and only want the top result.
The Solution: You can use the LIMIT clause to restrict the number of rows returned by your query.
SELECT COUNT(*) AS TotalOrders FROM Orders LIMIT 1;
This query returns the same single row with the single column "TotalOrders"; here the LIMIT 1 is redundant, but it makes the one-row intent explicit and becomes essential in grouped queries such as the "most popular product" example below.
Why This Works:
• The COUNT(*) function aggregates all rows in the table and returns a single value representing the total count, so an ungrouped count already produces exactly one row.
• The LIMIT clause is used to specify the maximum number of rows that should be returned by the query. By setting it to 1, you ensure that at most one row is returned: the first row of the result, after any ORDER BY has been applied.
Practical Examples:
• Calculating Total Sales: You can use this method to get the total number of sales for a particular period:
SELECT COUNT(DISTINCT order_id) AS TotalSales FROM Orders WHERE order_date >= '2023-01-01' AND order_date <= '2023-03-31' LIMIT 1;
• Finding the Most Popular Product: You could calculate the count of each product sold and use LIMIT to retrieve the product with the highest count:
SELECT product_name, COUNT(order_id) AS product_count FROM Orders GROUP BY product_name ORDER BY product_count DESC LIMIT 1;
Additional Tips:
• In most SQL environments, the LIMIT clause can also be used with OFFSET to skip a certain number of rows before retrieving the desired row.
• If you're working with more complex queries involving joins or subqueries, the LIMIT clause should be applied at the outermost level of your query.
By applying LIMIT in your SQL queries, you can easily extract a single result value from a counted column for further processing or analysis.
{"url":"https://laganvalleydup.co.uk/post/sql-get-one-result-of-counted-column","timestamp":"2024-11-15T01:27:22Z","content_type":"text/html","content_length":"81742","record_id":"<urn:uuid:490a20bc-edb8-4fd4-8870-69ea928248e0>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00347.warc.gz"}
The Intersection of Mysticism and Mathematics: An Introduction

syndu | Aug. 29, 2024, 10:58 p.m.

Throughout history, the realms of mysticism and mathematics have often been seen as distinct and separate fields. Mysticism, with its focus on spiritual experiences and the search for deeper truths, seems worlds apart from the logical, structured nature of mathematics. However, a closer examination reveals a rich tapestry of connections between these two domains. This introductory post will explore the historical and philosophical intersections of mysticism and mathematics, setting the stage for a series that delves into the lives and works of key figures who have bridged these worlds.

Historical Connections

Ancient Civilizations
In ancient civilizations, mathematics and mysticism were often intertwined. The Egyptians, for example, used sacred geometry in the construction of their pyramids, believing that geometric shapes held spiritual significance. Similarly, the Babylonians and Greeks saw mathematics as a way to understand the cosmos and the divine order.

Pythagoras and the Harmony of the Spheres
One of the most notable figures who bridged the gap between mysticism and mathematics was Pythagoras. Known for the Pythagorean theorem, Pythagoras also founded a mystical school that believed in the harmony of the spheres—a concept that the universe is governed by mathematical ratios and harmonies. This idea laid the groundwork for later developments in both mathematics and mystical thought.

The Middle Ages and the Renaissance
During the Middle Ages and the Renaissance, scholars like Isaac Newton and Johannes Kepler continued to explore the connections between mathematics and mysticism. Newton, known for his laws of motion and gravity, was also deeply interested in alchemy and biblical prophecy. Kepler, who formulated the laws of planetary motion, was influenced by his mystical beliefs in the harmony of the cosmos.

Philosophical Intersections

Sacred Geometry
Sacred geometry is a key philosophical concept that illustrates the intersection of mysticism and mathematics. It posits that geometric shapes and patterns are fundamental to the creation and structure of the universe. These shapes are often imbued with spiritual significance and are used in religious art and architecture to symbolize deeper truths.

The Golden Ratio
The golden ratio, a mathematical constant approximately equal to 1.618, appears in various natural phenomena, art, and architecture. Mystics and mathematicians alike have been fascinated by this ratio, seeing it as a bridge between the physical and spiritual worlds. The golden ratio's prevalence in nature and its aesthetic appeal in art have led to its association with divine proportion and harmony.

Quantum Mysticism
In the 20th and 21st centuries, the advent of quantum mechanics has brought new intersections between mysticism and mathematics. Quantum mysticism explores the parallels between the principles of quantum mechanics—such as uncertainty, interconnectedness, and the observer effect—and mystical traditions that emphasize the fluid and interconnected nature of reality.

Setting the Stage for the Series
This introductory post has provided a glimpse into the rich interplay between mysticism and mathematics.
In the upcoming posts, we will delve deeper into the lives and works of key figures who have contributed to both fields. From Pythagoras and his mystical mathematical school to contemporary mathematicians who integrate spiritual practices into their work, this series will explore how these individuals have bridged the gap between the rational and the transcendent. The intersection of mysticism and mathematics is a fascinating and multifaceted topic that spans centuries and cultures. By exploring the historical and philosophical connections between these fields, we can gain a deeper understanding of how they have influenced each other and continue to do so. Stay tuned for the next post in the series, where we will delve into the life and teachings of Pythagoras, the mystic mathematician. SEO Optimization Keyword Research • Primary Keywords: Mysticism and mathematics, historical connections, sacred geometry, golden ratio, quantum mysticism. • Secondary Keywords: Pythagoras, harmony of the spheres, Isaac Newton, Johannes Kepler, mystical traditions. Meta Description "Explore the historical and philosophical connections between mysticism and mathematics. Discover how these fields have influenced each other throughout history and set the stage for a deeper • H1: The Intersection of Mysticism and Mathematics: An Introduction • H2: Introduction • H2: Historical Connections □ H3: Ancient Civilizations □ H3: Pythagoras and the Harmony of the Spheres □ H3: The Middle Ages and the Renaissance • H2: Philosophical Intersections □ H3: Sacred Geometry □ H3: The Golden Ratio □ H3: Quantum Mysticism • H2: Setting the Stage for the Series • H2: Conclusion Alt Text for Images • Image 1: "Ancient geometric patterns symbolizing the intersection of mysticism and mathematics." • Image 2: "A depiction of Pythagoras teaching his students about the harmony of the spheres." • Image 3: "The golden ratio illustrated in nature and art, symbolizing the bridge between science and spirituality."
{"url":"https://syndu.com/blog/the-intersection-of-mysticism-and-mathematics-an-introduction/","timestamp":"2024-11-05T19:04:30Z","content_type":"text/html","content_length":"47322","record_id":"<urn:uuid:9bcab92a-2604-42f4-8f65-896456be2ef5>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00107.warc.gz"}
パーツ材料 (Parts and Materials) | Kyogetsuya
{"url":"https://www.kyogetsuya.jp/%E3%83%91%E3%83%BC%E3%83%84","timestamp":"2024-11-04T13:24:23Z","content_type":"text/html","content_length":"896841","record_id":"<urn:uuid:66760350-fe1a-4ad9-925d-31fc8f5b1056>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00050.warc.gz"}
Google Interview Question: Integer to 1 Problem and Word Predictor Problem – 谷歌面试题 – interview proxy – 代面试 - csOAhelp|代码代写|面试OA助攻|面试代面|作业实验代写|考试高分代考 Interview Process Breakdown Problem 1: Integer to 1 Problem Clarification Stage The interviewer starts by explaining the problem: “Given an integer n, you are allowed two operations: divide the integer by 2 if it's divisible by 2, or add 1 to the integer. Your goal is to determine the smallest number of operations needed to reduce n to 1.” The candidate asks for clarification on the input: "Can I assume the input integer is always positive?" The interviewer confirms: "Yes, the input is always a positive integer." The candidate follows up: "If the integer is divisible by 2, should I prioritize dividing by 2 over incrementing it by 1, or is it up to me?" The interviewer responds: "That’s up to you to decide. You can approach it in a way that minimizes the number of operations." Discussion of Solution Strategy The candidate outlines their approach: "I’m thinking of using a recursive approach where I recursively try both operations. I’ll divide by 2 if it’s even or increment by 1. Then I’ll pick the option that leads to fewer operations." The interviewer asks: "What happens if the recursion leads to a cycle, for example, if you end up at the same value more than once?" The candidate realizes the potential issue: "Good point. I’ll keep track of visited numbers to avoid cycles, because revisiting the same number will only lead to an infinite loop." The interviewer agrees: "That sounds reasonable. Can you walk through an example with a number like 5?" Follow-Up Questions and Example Walkthrough The candidate explains with an example: *"Sure, if I start with 5: • First, I’ll increment 5 by 1 to make it 6, because dividing by 2 is not possible. • Then I divide 6 by 2, making it 3. • Next, I increment 3 to 4. • Divide 4 by 2 to get 2, and finally divide 2 by 2 to get 1. So, the total number of operations is 5."* The interviewer responds: "Looks good. What’s the time and space complexity of this approach?" Time and Space Complexity Explanation The candidate breaks it down: "For the recursive version, the time complexity will depend on how many times I can divide n by 2 before reaching 1, which is approximately O(log n). As for space complexity, if I’m using recursion and tracking visited numbers, it’s O(n) for the recursive call stack and the set of visited numbers. But, if I switch to an iterative, greedy approach, I can reduce the space complexity to O(1)." The interviewer acknowledges the explanation and asks the candidate to implement the greedy approach. Problem 2: Word Predictor Problem Clarification Stage The interviewer introduces the second problem: “Now for the second problem, you need to build a word predictor based on a bigram frequency model. Given some training data, your job is to predict the next most likely word based on the input.” The candidate confirms their understanding: "So the input will be a word, and I need to return the most likely next word based on the training data, right?" The interviewer responds: "Correct. And the prediction should be optimized for fast response times, so you might want to use a frequency-based heuristic." Solution Strategy Discussion The candidate explains their approach: "I’m thinking of building a model where I count the occurrences of each word pair (bigram) in the training data. Then, for a given input word, I’ll return the most frequent word that follows it in the data. Does that sound reasonable?" 
The interviewer asks: "What if the input word doesn’t exist in the training data?" The candidate suggests a fallback mechanism: "If a word doesn’t appear in the training data or doesn’t have a successor, I’ll return a default value, maybe an empty string or a placeholder like None." The interviewer asks: "What if two words have the same frequency? How would you handle that?" Handling Tie Scenarios The candidate responds: "If there’s a tie, I’ll break the tie by returning the word that appears first in the training data. This way, the model remains deterministic." The interviewer says: "That’s a fair approach. Let’s test it with an example. Here’s some training data: ['I', 'am', 'Sam'], ['Sam', 'I', 'am'], and ['I', 'like', 'green', 'eggs', 'and', 'ham']. Now, if I input the word Sam, what would you predict?" The candidate responds: "Based on the data, Sam is followed by I, so I would predict I." The interviewer follows up: "Correct. Now, what if I input like?" The candidate answers: "Like is followed by green, so the prediction should be green." The interviewer concludes: "Yes. What’s the time complexity of this solution?" Time Complexity Explanation The candidate provides a complexity analysis: "The time complexity for building the bigram frequency table is O(n), where n is the total number of words in the training data. For each prediction, the time complexity is O(1) because I’m just looking up the word in the frequency table." The interviewer seems satisfied and moves on to behavioral questions. Behavioral Questions (BQ) Question 1: Handling Difficult Problems The interviewer asks: "Tell me about a time when you faced a particularly difficult technical problem. How did you handle it?" The candidate responds: "There was a project where we were integrating a third-party API, and the documentation was incomplete. After many failed attempts to get it working, I reached out to the support team and scoured forums. Eventually, I figured out that the issue was caused by a minor but critical version mismatch in one of the dependencies." Question 2: Handling Pressure The interviewer follows up: "How did you handle the pressure of solving the problem under a deadline?" The candidate shares their approach: "I broke the problem down into smaller, manageable parts and kept communication open with the team. That helped alleviate some of the pressure because everyone knew I was working through it This interview tested the candidate’s problem-solving abilities through two distinct algorithmic problems: optimizing operations to reduce an integer to 1 and predicting the next word using bigram frequency. The candidate successfully clarified the problems, communicated their solution strategies, and walked through examples while addressing potential edge cases. Additionally, the candidate effectively handled behavioral questions, showcasing their ability to manage technical challenges and pressure. With the interview assistance provided by CSOAHelp, the candidate successfully achieved excellent results in this interview. If you are also looking for professional guidance and support during your interview process, feel free to contact us. We offer customized interview assistance services to help you confidently tackle interview challenges and succeed.
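As an appendix to the walkthrough above, here is a minimal Python sketch of the two solutions the candidate described: a greedy loop for the integer-to-1 problem, and a bigram predictor with the frequency-then-first-appearance tie-break. The function and class names are illustrative, not taken from the interview itself.

```python
from collections import defaultdict


def min_ops_to_one(n: int) -> int:
    """Greedy reduction: divide by 2 when possible, otherwise add 1."""
    ops = 0
    while n > 1:
        n = n // 2 if n % 2 == 0 else n + 1
        ops += 1
    return ops


class WordPredictor:
    """Bigram frequency model; ties broken by first appearance in the training data."""

    def __init__(self, sentences):
        self.counts = defaultdict(lambda: defaultdict(int))  # word -> {next_word: count}
        self.first_seen = {}                                  # (word, next_word) -> position
        pos = 0
        for sentence in sentences:
            for word, nxt in zip(sentence, sentence[1:]):
                self.counts[word][nxt] += 1
                self.first_seen.setdefault((word, nxt), pos)
                pos += 1

    def predict(self, word):
        followers = self.counts.get(word)
        if not followers:
            return None  # fallback when the word has no successor in the data
        # Highest count wins; earlier first appearance wins on a tie.
        return max(followers, key=lambda w: (followers[w], -self.first_seen[(word, w)]))


if __name__ == "__main__":
    print(min_ops_to_one(5))  # 5 -> 6 -> 3 -> 4 -> 2 -> 1, i.e. 5 operations
    data = [["I", "am", "Sam"], ["Sam", "I", "am"],
            ["I", "like", "green", "eggs", "and", "ham"]]
    model = WordPredictor(data)
    print(model.predict("Sam"))   # I
    print(model.predict("like"))  # green
```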
{"url":"https://csoahelp.com/2024/10/10/google-interview-question-integer-to-1-problem-and-word-predictor-problem-%E8%B0%B7%E6%AD%8C%E9%9D%A2%E8%AF%95%E9%A2%98-interview-proxy-%E4%BB%A3%E9%9D%A2%E8%AF%95/","timestamp":"2024-11-13T11:35:03Z","content_type":"text/html","content_length":"96582","record_id":"<urn:uuid:4f37e97b-6108-421e-8088-b73aa58773a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00465.warc.gz"}
mL to fl oz Calculator - Convert Milliliters to Fluid Ounces!

Ah, the eternal battle between metric and imperial measuring systems. One uses liters and grams, the other relies on cups and ounces. But fear not, we have the solution to all your conversion woes! Our handy-dandy online calculator will help you convert milliliters to ounces with just a few clicks. No more kitchen disasters – let us take the guesswork out of your culinary, medical, and cosmetic measurements.

mL to fl oz Use Cases

• Cooking connoisseurs: Perfect your recipes by converting milliliters to ounces for precise measurements of ingredients. Say goodbye to confusion and hello to deliciousness!
• Medical professionals: Convert milliliters to ounces for accurate dosage of medications. Trust us, your patients will thank you.
• Beauty gurus: DIY skincare enthusiasts can easily convert milliliters to ounces for precise mixing and measuring of ingredients. Your skin will thank you, too!
• Bartenders: Impress your customers with perfectly crafted cocktails by converting milliliters to ounces. Cheers to accurate measurements and happy customers!
• Science lovers: Conduct experiments with ease by converting milliliters to ounces. No more failed experiments due to incorrect measurements – just accurate results!

From mL to fl oz and Back Again: How to Master Our Converter in 3 Easy Steps!

Step-by-Step Guide:
Step 1: Enter the number of milliliters you want to convert into the first box.
Step 2: Voila! The number of fluid ounces appears in the second box - no need to click anything!
Step 3: Want to switch things up? Simply click "Reset" and try a new conversion, or enter the number of fluid ounces you want to convert to milliliters in the second box and watch the magic happen. You can also select other conversion options from the drop-down menus for different metric and imperial conversions.

Formula for Converting mL to fl oz:
1 milliliter = 0.033814 fluid ounces

Let's say you have 500 milliliters of water you want to convert to ounces. Simply multiply 500 by 0.033814 to get 16.907 ounces. Easy peasy!

Types of Ounces:
It's important to note that there are two different types of ounces: fluid ounces and weight ounces. Fluid ounces are typically used for measuring liquids, while weight ounces are used for measuring solids. When converting ml to ounces, it's usually referring to fluid ounces. However, if you need to convert weight ounces to milliliters or vice versa, you'll need to use a different formula.

Here's a useful table for converting fluid ounces to milliliters and vice versa:

Fluid Ounces (fl oz)    Milliliters (ml)
1 fl oz                 29.5735 ml
2 fl oz                 59.1471 ml
3 fl oz                 88.7206 ml
4 fl oz                 118.2941 ml
5 fl oz                 147.8677 ml
6 fl oz                 177.4412 ml
7 fl oz                 207.0147 ml
8 fl oz                 236.5882 ml
9 fl oz                 266.1618 ml
10 fl oz                295.7353 ml
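If you would rather do the same conversion in code than in the widget, a tiny Python sketch using the factor from the table above looks like this:

```python
ML_PER_FL_OZ = 29.5735  # US fluid ounce, as in the table above

def ml_to_fl_oz(ml: float) -> float:
    return ml / ML_PER_FL_OZ

def fl_oz_to_ml(fl_oz: float) -> float:
    return fl_oz * ML_PER_FL_OZ

print(round(ml_to_fl_oz(500), 3))  # ~16.907 fl oz, matching the worked example
print(round(fl_oz_to_ml(8), 1))    # ~236.6 ml, one US cup
```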
Fluid ounces are used to measure liquids, while weight ounces are used to measure solids. 5. Is ml the same as cc? Yes, ml (milliliters) and cc (cubic centimeters) are equivalent. 6. How many ounces are in 1 liter of water? 1 liter of water is equal to 33.814 fluid ounces. 7. How many ounces are in a shot glass? The size of a shot glass varies, but the standard size in the US is 1.5 fluid ounces. 💡 Fun Fact: Did you know that a jigger is a measuring tool used in bartending that measures 1.5 fluid ounces? It's named after a traditional English unit of measure, the "jigger," which was equivalent to 1.5 fluid ounces. Thanks for choosing our ML to Fl Oz converter to help you with all your cooking, medical, and cosmetic needs! With this handy tool, you can easily convert milliliters to fluid ounces and vice versa, making all your measuring tasks a breeze. With our FAQs and fun facts, you'll be a pro at converting ml to oz and impressing your friends with your knowledge of all things measurement-related! Happy measuring! 😉
{"url":"https://calculatorly.cc/convert/ml-to-oz/","timestamp":"2024-11-10T15:44:22Z","content_type":"text/html","content_length":"170797","record_id":"<urn:uuid:f89720b6-610f-4486-b929-7f5d9e86f5f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00276.warc.gz"}
Deterministic decremental reachability, SCC, and shortest paths via directed expanders and congestion balancing

Let G = (V, E, w) be a weighted, directed graph subject to a sequence of adversarial edge deletions. In the decremental single-source reachability problem (SSR), we are given a fixed source s and the goal is to maintain a data structure that can answer path-queries s ↣ v for any v in V. In the more general single-source shortest paths (SSSP) problem the goal is to return an approximate shortest path to v, and in the SCC problem the goal is to maintain strongly connected components of G and to answer path queries within each component.

All of these problems have been very actively studied over the past two decades, but all the fast algorithms are randomized and, more significantly, they can only answer path queries if they assume a weaker model: they assume an oblivious adversary which is not adaptive and must fix the update sequence in advance. This assumption significantly limits the use of these data structures, most notably preventing them from being used as subroutines in static algorithms. All the above problems are notoriously difficult in the adaptive setting. In fact, the state of the art is still the Even and Shiloach tree, which dates back all the way to 1981 [1] and achieves total update time O(mn). We present the first algorithms to break through this barrier:

• deterministic decremental SSR/SCC with total update time mn^{2/3+o(1)}
• deterministic decremental SSSP with total update time n^{2+2/3+o(1)}

To achieve these results, we develop two general techniques for working with dynamic graphs. The first generalizes expander-based tools to dynamic directed graphs. While these tools have already proven very successful in undirected graphs, the underlying expander decomposition they rely on does not exist in directed graphs. We thus need to develop an efficient framework for using expanders in directed graphs, as well as overcome several technical challenges in processing directed expanders. We establish several powerful primitives that we hope will pave the way for other expander-based algorithms in directed graphs. The second technique, which we call congestion balancing, provides a new method for maintaining flow under adversarial deletions. The results above use this technique to maintain an embedding of an expander.

Publication series
Name: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
Volume: 2020-November
ISSN (Print): 0272-5428

Conference: 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020
Country/Territory: United States
City: Virtual, Durham
Period: 11/16/20 → 11/19/20

Keywords
• dynamic algorithm
• single-source reachability
• single-source shortest paths
• strongly-connected components

ASJC Scopus subject areas
Dive into the research topics of 'Deterministic decremental reachability, SCC, and shortest paths via directed expanders and congestion balancing'. Together they form a unique fingerprint.
{"url":"https://nyuscholars.nyu.edu/en/publications/deterministic-decremental-reachability-scc-and-shortest-paths-via","timestamp":"2024-11-12T03:12:50Z","content_type":"text/html","content_length":"61048","record_id":"<urn:uuid:e1c2e707-688a-44cc-841a-890cee0af027>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00482.warc.gz"}
Gesine Reinert

Gesine Reinert is a University Professor in Statistics at the University of Oxford. She is a Fellow of Keble College, Oxford, a Fellow of the Alan Turing Institute,[1] and a Fellow of the Institute of Mathematical Statistics.[2] Her research concerns the probability theory and statistics of biological sequences and biological networks. Reinert has also been associated with the M. Lothaire pseudonymous mathematical collaboration on combinatorics on words.[3]

Reinert earned a diploma in mathematics from the University of Göttingen in 1989.[4] She went on to graduate study in applied mathematics at the University of Zurich, completing her Ph.D. in 1994. Her dissertation, in probability theory, was A Weak Law of Large Numbers for Empirical Measures via Stein's Method, and Applications, and was supervised by Andrew Barbour.[4][5]

Reinert worked as a lecturer at the University of California, Los Angeles from 1994 to 1998, and as a senior research fellow at King's College, Cambridge from 1998 to 2000. She joined the Oxford faculty in 2000, and was given a professorship there in 2004.[4]
{"url":"https://static.hlt.bme.hu/semantics/external/pages/v%C3%A9ges_%C3%A1llapot%C3%BA_transzducereket_(FST)/en.wikipedia.org/wiki/Gesine_Reinert.html","timestamp":"2024-11-13T05:53:39Z","content_type":"text/html","content_length":"45876","record_id":"<urn:uuid:2b6dd7d0-dfd3-4218-a504-965ee1b91f7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00023.warc.gz"}
PPT - Webs of soft gluons from QCD to N=4 super-Yang-Mills theory PowerPoint Presentation - ID:3326635

1. Webs of soft gluons from QCD to N = 4 super-Yang-Mills theory – Lance Dixon (SLAC), KEK Symposium “Towards precision QCD physics” in memory of Jiro Kodaira, March 10, 2007

2. 26 years ago…

3. 26 years ago…
• I was also at SLAC in the summer of 1981 – but as an undergraduate working on the Mark III experiment at SPEAR
• I had no idea what “Summing Soft Emission” meant – although I did sneak into some of the SLAC Summer Institute lectures on The Strong Interactions
• So I could not yet appreciate the beauty of this formula:

4. Outline
• The two-loop soft anomalous dimension matrix in QCD – it’s all about K
• Multi-loop analogs of K (cusp anomalous dim.) – we now (probably) know them to all loop orders in large-N_c N = 4 super-Yang-Mills theory, and tantalizing pieces of them in QCD as well
Aybat, LD, Sterman, hep-ph/0606254, 0607309; Bern, Czakon, LD, Kosower, Smirnov, hep-th/0610248; Eden, Staudacher, hep-ph/0603157; Beisert, Eden, Staudacher, hep-th/0610251; Kotikov, Lipatov, Onishchenko, Velizhanin, hep-th/0404092; Benna, Benvenuti, Klebanov, Scardicchio, hep-th/0611135

5. IR Structure of QCD Amplitudes [Massless Gauge Theory Amplitudes]
• Expand multi-loop amplitudes in d = 4 − 2ε around d = 4 (ε = 0)
• Overlapping soft (1/ε) + collinear (1/ε) divergences at each loop order imply leading poles are ~ 1/ε^{2L} at L loops
• Pole terms are predictable, due to soft/collinear factorization and exponentiation, in terms of a collection of constants (anomalous dimensions)
• Same constants control resummation of large logarithms near kinematic boundaries – as Jiro Kodaira understood so well
Mueller (1979); Akhoury (1979); Collins (1980), hep-ph/0312336; Kodaira, Trentadue (1981); Sen (1981, 1983); Sterman (1987); Botts, Sterman (1989); Catani, Trentadue (1989); Korchemsky (1989); Magnea, Sterman (1990); Korchemsky, Marchesini, hep-ph/9210281; Giele, Glover (1992); Kunszt, Signer, Trócsányi, hep-ph/9401294; Kidonakis, Oderda, Sterman, hep-ph/9801268, 9803241; Catani, hep-ph/9802439; Dasgupta, Salam, hep-ph/0104277; Sterman, Tejeda-Yeomans, hep-ph/0210130; Bonciani, Catani, Mangano, Nason, hep-ph/0307035; Banfi, Salam, Zanderighi, hep-ph/0407287; Jantzen, Kühn, Penin, Smirnov, hep-ph/0509157

6. Soft/Collinear Factorization – Magnea, Sterman (1990); Sterman, Tejeda-Yeomans, hep-ph/0210130
• S = soft function (only depends on color of i-th particle; matrix in “color space”)
• J = jet function (color-diagonal; depends on i-th spin)
• H = hard remainder function (finite as ε → 0; vector in color space)
• color: Catani, Seymour, hep-ph/9605323; Catani, hep-ph/9802439

7. The Sudakov form factor
• For the case n = 2, gg → 1 or q q̄ → 1, the color structure is trivial, so the soft function S = 1
• Thus the jet function is the square root of the Sudakov form factor (up to finite terms):

8. Jet function – Mueller (1979); Collins (1980); Sen (1981); Korchemsky, Radyushkin (1987); Korchemsky (1989); Magnea, Sterman (1990)
• By analyzing structure of soft/collinear terms in axial gauge, find differential equation for jet function J[i] (~ Sudakov form factor):
finite as ε → 0; contains all Q² dependence – pure counterterm (series of 1/ε poles); like β(ε, α_s), single poles in ε determine completely – also obey differential equations (ren. group): cusp anomalous dimension

9. Jet function solution – Magnea, Sterman (1990)
ᾱ_s = running coupling in D = 4 − 2ε
• Solution to differential equations can be extracted from fixed-order calculations of form factors or related objects
E.g. at three loops: Moch, Vermaseren, Vogt, hep-ph/0507039, hep-ph/0508055

10. Soft function – Kidonakis, Oderda, Sterman, hep-ph/9803241
Solution is a path-ordered exponential: depends on massless 4-velocities; momenta are …
• For generic processes, need soft function S
• Much less well-studied than J
• Also obeys a (matrix) differential equation: soft anomalous dimension matrix

11. Computation of soft anomalous dimension matrix
Equivalently, consider web function W or eikonal amplitude of n Wilson lines. E.g. for n = 4, 1 + 2 → 3 + 4:
• Only soft gluons
• couplings classical, spin-independent
• Take hard external partons to be scalars
• Expand vertices and propagators
Remove jet function contributions by dividing by appropriate Sudakov factors

12. 1-loop soft anomalous dim. matrix – Kidonakis, Oderda, Sterman, hep-ph/9803241
Expansion of 1-loop amplitude. 1/ε poles in 1-loop graph yield:
Agrees with known divergences of generic one-loop amplitudes: Giele, Glover (1992); Kunszt, Signer, Trócsányi, hep-ph/9401294; Catani, hep-ph/9802439
Finite, hard parts scheme-dependent!

13. 2-loop soft anomalous dim. matrix
• Classify web graphs according to number of eikonal lines (nE)
• 4E graphs factorize trivially into products of 1-loop graphs.
• 1-loop counterterms cancel all 1/ε poles, leave no contribution to …
Two 3E graphs – each looks as if it might give a complicated color structure depending on 3 legs!

14. But: vanishes due to antisymmetry after changing to light-cone variables with respect to A, B = 0, and factorizes into 1-loop factors, allowing its divergences to be completely cancelled by 1-loop counterterms

15. 2-loop soft anomalous dimension – it’s all about K
The 2E graphs – Korchemsky, Radyushkin (1987); Korchemskaya, Korchemsky, hep-ph/9409446
All were previously analyzed for the cusp anomalous dimension. Same analysis can be used here (although color flow is generically different).
All color factors become proportional to the one-loop ones; proportionality constant dictated by cusp anomalous dimension.

16. Implications for resummation
• To resum a generic hadronic event shape requires diagonalizing the exponentiated soft anomalous dimension matrix in color space
• Because of the proportionality relation, same diagonalization at one loop (NLL) still works at two loops (NNLL), and eigenvalue shift is trivial!
• Result foreshadowed in the bremsstrahlung (CMW) scheme – Catani, Marchesini, Webber (1991) – for redefining the strength of parton showering using …
Kidonakis, Oderda, Sterman, hep-ph/9801268, 9803241; Dasgupta, Salam, hep-ph/0104277; Bonciani, Catani, Mangano, Nason, hep-ph/0307035; Banfi, Salam, Zanderighi, hep-ph/0407287

17. Why N = 4 super-Yang-Mills theory?
• Most supersymmetric theory possible without gravity
• Uniquely specified by local internal symmetry group – e.g., number of colors N_c for SU(N_c)
• An exactly scale-invariant (conformal) field theory: for any coupling g, β(g) = 0
• Connected to gravity and/or string theory by AdS/CFT correspondence, a weak/strong duality
• Remarkable “transcendentality” relations with QCD

18. “Leading transcendentality” relation between QCD and N = 4 SYM
• KLOV (Kotikov, Lipatov, Onishschenko, Velizhanin, hep-th/0404092) noticed (at 2 loops) a remarkable relation between kernels for BFKL evolution (strong rapidity ordering) and DGLAP evolution (pdf evolution = strong collinear ordering) in QCD and N = 4 SYM:
• Set fermionic color factor C_F = C_A in the QCD result and keep only the “leading transcendentality” terms. They coincide with the full N = 4 SYM result (even though theories differ by scalars)
• Conversely, N = 4 SYM results predict pieces of the QCD result
• transcendentality (weight): n for π^n, n for ζ_n. Similar counting for HPLs and for related harmonic sums used to describe DGLAP kernels at finite j

19. In QCD through 3 loops: K from Kodaira, Trentadue (1981); Moch, Vermaseren, Vogt (MVV), hep-ph/0403192, hep-ph/0404111

20. In N = 4 SYM through 3 loops: KLOV prediction
• Finite-j predictions confirmed (with assumption of integrability) – Staudacher, hep-th/0412188
• Confirmed at infinite j using on-shell amplitudes, unitarity – Bern, LD, Smirnov, hep-th/0505205
• and with all-orders asymptotic Bethe ansatz – Beisert, Staudacher, hep-th/0504190 – leading to an integral equation – Eden, Staudacher, hep-th/0603157

21. An all-orders proposal
Perturbative expansion: ?
Integrability, plus an all-orders asymptotic Bethe ansatz, led to the following proposal for the cusp anomalous dimension in large-N_c N = 4 SYM (Eden, Staudacher, hep-ph/0603157), where … is the solution to an integral equation with Bessel-function kernel

22. ES proposal (cont.) – Eden, Staudacher, hep-ph/0603157
Because of various assumptions made, particularly an overall dressing factor, which could affect the entire “world-sheet S-matrix”, and which was known to be non-trivial at strong coupling, the ES proposal needed checking via another perturbative method, particularly at 4 loops.

23. Cusp anomalous dimension via AdS/CFT – Maldacena, hep-th/9711200; Gubser, Klebanov, Polyakov, hep-th/9802109
• AdS/CFT duality suggests that weak-coupling perturbation series for planar N = 4 SYM should have very special properties: strong-coupling limit is equivalent to weakly-coupled strings in large-radius AdS₅ × S⁵ background
– σ-model classically integrable too
– world-sheet σ-model coupling is …
• Cusp anomalous dimension should be given semi-classically, by energy of a long string, a soliton in the σ-model, spinning in AdS₅
• First two strong-coupling terms known – Bena, Polchinski, Roiban, hep-th/0305116; Gubser, Klebanov, Polyakov, hep-th/0204051; Frolov, Tseytlin, hep-th/0204226

24. Four-loop planar N = 4 SYM amplitude – BCDKS, hep-th/0610248
Very simple – only 8 loop integrals required!

25. Soft/collinear simplification in large-N_c (planar) limit
Soft function only defined up to a multiple of the identity matrix in color space
• Planar limit is color-trivial; can absorb S into J_i
• If all n particles are identical, say gluons, then each “wedge” is the square root of the “gg → 1” process (Sudakov form factor):

26. Sudakov form factor in planar N = 4 SYM
β = 0, so running coupling in D = 4 − 2ε has only trivial (engineering) dependence on scale μ, simplifying differential equations
• Expand in terms of …

27. General amplitude in planar N = 4 SYM
Insert result for form factor into n-point amplitude; extract cusp anomalous dimension from coefficient of pole.
We found a numerical result consistent with: compared with ES prediction, a single sign flip at four loops! We also argued that at order … the signs of terms containing … should be flipped as well, …

28. Independently… – Arutyunov, Frolov, Staudacher, hep-th/0406256; Hernandez, Lopez, hep-th/0603204; …
At the same time, investigating the strong-coupling properties of the dressing factor led Beisert, Eden and Staudacher [hep-th/0610251] to propose an integral equation with a new kernel. With the “2”, the result is to flip signs of odd-zeta terms in the ES prediction, to all orders (actually, ζ_{2k+1} → i ζ_{2k+1})

29. Soon thereafter …
Benna, Benvenuti, Klebanov, Scardicchio [hep-th/0611135] solved the BES integral equation numerically, by expanding in a basis of Bessel functions. Solution agrees stunningly well with the “KLV approximate formula,” which incorporates the known strong-coupling behavior

30. Conclusions for part 2
• Combining a number of approaches, an exact solution for the cusp anomalous dimension in planar N = 4 SYM appears to be in hand.
• Result provides a very interesting test of the AdS/CFT correspondence.
• Through the KLOV conjecture, the exact solution provides all-loop information about certain “most transcendental” terms in γ_K(α_s) in perturbative QCD.
• The multi-loop analogs of K are related to the energy of a spinning string in anti-de Sitter space!
• What would Jiro Kodaira make of all this?!

31. Extra Slides

32. Soft computation (cont.)
• Regularize collinear divergences by removing Sudakov-type factors (in eikonal approximation) from the web function, defining soft function S by:
• Soft anomalous dimension matrix determined by single ultraviolet poles in ε of S:

33. Proportionality at 3 loops?
Again classify web graphs according to number of eikonal lines (nE):
• 6E and 5E graphs factorize trivially into products of lower-loop graphs; no contribution to …
• thanks to 2-loop result, 4E graphs use same (A, B) change of variables – ??? – also trivial
• and then there are more 4E graphs, and the 3E and 2E graphs…

34. Consistency with explicit multi-parton 2-loop computations
• Results for … organized according to Catani, hep-ph/9802439
• After making adjustments for different schemes, everything is consistent – Anastasiou, Glover, Oleari, Tejeda-Yeomans (2001); Bern, De Freitas, LD (2001-2); Garland et al. (2002); Glover (2004); De Freitas, Bern (2004); Bern, LD, Kosower, hep-ph/0404293
• And electroweak Sudakov logs for 2 → 2 also match – Jantzen, Kühn, Penin, Smirnov, hep-ph/0509157
{"url":"https://www.slideserve.com/sylvia/webs-of-soft-gluons-from-qcd-to-n-4-super-yang-mills-theory","timestamp":"2024-11-06T14:05:59Z","content_type":"text/html","content_length":"102684","record_id":"<urn:uuid:e222a3fd-c991-4fcf-ba0e-34d6fe79d114>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00227.warc.gz"}
Contents: Introduction · The Theoretical Model (Mechanical Model; Marine Environmental Load) · The Discrete Numerical Method · Numerical Model Set-Up (Coordinate System; Model Set-Up in OrcaFlex) · Validation for the Lumped Mass Method · Results and Discussion (Different Wave Heights; Different Current Velocities) · Conclusion · References

Figures and tables: Fig. 1 Pipeline lifting construction field [9] · Fig. 2 Mechanical model of pipeline · Fig. 3 The mechanical properties of section i + 1 pipeline · Fig. 4 The lumped mass model · Fig. 5 The time domain simulation steps · Fig. 6 The global and vessel coordinate system · Fig. 7 The direction and headings of waves and current · Fig. 8 The schematic model of the pipeline lifting operation: (a) Simplified model in OrcaFlex; (b) General schematic diagram · Table 1 Characteristics of towed cable system · Fig. 9 Validation for the lumped mass model · Fig. 10 The results of different wave heights: (a) The bending moment; (b) The curvature; (c) Effective tension · Fig. 11 The results of different current velocities: (a) The bending moment; (b) The curvature; (c) Effective tension

With the further development of the offshore oil industry, offshore oil and gas transportation are becoming more and more prosperous as well [1,2]. The pipeline transportation system plays an increasingly prominent role in oil and gas transportation and has been widely used in developing offshore oil fields [3,4]. It is also the most practical type of deep-sea offshore oil system [5]. The pipeline system typically consists of a surface vessel, lifting pipe, lifting pump sets, buffer, lifting hose, and subsea collector [6,7]. In this system, the lifting pipe must be equipped with pump sets for lifting the mineral resources to the vessel and a buffer to regulate the density of the oil-water mixture. The pipeline stress-strength analysis is an important stage in overall construction design [8].

In mechanical analysis, the deformation and stress analysis of pipelines in the process of laying, sinking, and lifting are the key problems. During the lifting or sinking stage, one end of the pipeline is placed on the seabed, the other is lifted and suspended, and the middle suspension span is long, as shown in Fig. 1. The stress of the pipeline is complex, and the bending deformation is great. Therefore, in order to ensure that the deformation of the pipeline during laying or lifting is controlled within the elastic deformation range, it is necessary to conduct stress analysis on the pipeline sections at different heights to determine the deformation of the pipeline itself or to change the construction parameters to ensure that the deformation of the pipeline is controlled within the elastic deformation range. Therefore, the analysis of the mechanical properties of the suspended pipeline is the key to the construction analysis of the pipeline.

Scholars have done a lot of research on the mechanical behavior of the pipeline already. For example, Preston et al. [10] reported and analyzed a new pipeline global buckling control method by laying a pipeline with a zig-zag shape to trigger controllable mitigatory lateral global buckling. Polak et al. [11] studied the influence of the curvature radius of the curved segment and pipeline stiffness on pipeline load. Cheng et al. [12] elaborated a theoretical model of the pull-back pipeline. Wu et al. [13] tested the pipeline's mechanical behavior in the pipeline laying process through experimental methods, and the experiment test results are compared with the theoretical results. Guo et al.
[9] used the continuous beam theory to analyze the mechanical behavior of pipelines during lifting construction and implemented the mechanical model of pipelines during the lifting construction process. The analysis showed that the lifting height varies linearly with the pipeline length. Liu et al. [14–16] carried out a series of research on pipe lifting projects in horizontal directional drilling. Their research has guiding significance for the actual engineering implementation of the pipeline lifting operation. Hong et al. [17] analyzed the feature of lifting deformation for a pipeline laid on a sleeper and studied nine influential factors of the variation in the lifting displacement. Zan et al. [18] proposed a coupled time-domain numerical model for the behavior study of pipeline laying. The model was solved by the Newmark method and verified with OrcaFlex software. Chung [19] conducted offshore tests of deep-sea mining systems to measure the response of full-scale pipelines. Guo et al. [20] studied the effect of pipeline surface roughness on the interaction between submarine landslides and pipelines. The effect of surface roughness is primarily reflected in the peak load of the impact forces on the pipelines. Puckett [21] used a new theoretical method to calculate the pull-back load in process of pull-back pipeline. Podbevsek et al. [22] proved that the pipeline has a large reaction force in the curved segment through a theoretical method. Erol [23] established the dynamic model of the stepped lifting pipe, and studied the longitudinal vibration characteristics of the lifting system with and without dynamic vibration absorber by using the method of separating variables. Prpíc-Ořsíc et al. [24] made a related study of the interaction between the submarine pipeline and construction vessel by nonlinear differential equations with the Runge-Kutta method, in which the extension of the general two dimensional formulations of pipeline dynamics was presented by accounting for the effects of wave excitation, cable/ship interaction and cable elasticity. Reda et al. [25] showed the process for the simulation of the pipeline laying tension and bending radius in OrcaFlex. Then the simulation models in OrcaFlex and compression test for compression limit state of HVAC submarine pipeline were made by Reda et al. [26]. As can be seen from the brief review of the most advanced research, the current research on the mechanical behavior of the pipeline focuses on modeling and simulation analysis of the whole deep-sea system cooperative operation, the dynamic characteristics analysis of the lifting pipe, and the process of laying from water surface to seabed [27–31]. However, there are few reports on the dynamic analysis of the pipeline lifting operation. In pipeline construction and maintenance, lifting the submerged pipeline to the sea surface is often necessary, weld and inspect the pipeline or riser joint for further laying or maintenance. At this stage, due to the large deflection deformation, the complex environment and working load such as wind and waves, the pipeline will produce great bending stress, which makes the pipeline vulnerable to serious damage. Therefore, the mechanical analysis at this stage is very important for the laying and construction of the pipeline. In this paper, the pipeline lifting operation is modeled in OrcaFlex, and the hydrodynamic response in the process of pipe lifting is calculated by the time-domain coupling dynamic analysis method. 
To keep the simulation as faithful as possible, the simulation time step must be smaller than the shortest natural period of any node in the model; it should not exceed 1/10 of that shortest natural period. Combined with the calculated hydrodynamic performance, some guiding suggestions are then given.

When the lifting force is applied to a pipeline, the pipeline will deflect. In an actual engineering project, the lifting height and angle changes are minimal compared with the length of the pipe section to be lifted. Therefore, small-deflection linear theory can be used to calculate the deflection of the pipeline. The lifting process is a moving-boundary problem in the pull-back step, so a multipoint lifting model of the pipeline can be built using a polynomial interpolation function. To find the position of the moving boundary, displacement correction methods such as load concentration correction, horizontal spacing correction, and iterative calculation are used to convert the large-deformation, geometrically nonlinear problem of the pipeline into a piecewise linear problem.

Fig. 2 shows the mechanical model. The uniform weight q is applied to the pipeline. H[i] and L[i] represent each point's vertical and horizontal displacements, respectively. If the pipeline is segmented according to the pipeline's end points and lifting points, the bending deformation is small. Based on small-deflection beam theory, the relationships among the physical parameters are as follows:

Bending moment: M = EI·d^2y/dx^2

Uniform load: q = EI·d^4y/dx^4

Selecting the linear interpolation function as the calculation equation, the differential equation for bending deformation is as follows:

d^4y[1]/dx^4 = q/(EI)    (4)

The solution of Eq. (4) is as follows:

y[1](x) = a[10] + a[11]·x + a[12]·x^2 + a[13]·x^3 − q[1]·x^4/(24EI)    (5)

The interpolation function of section i+1 is as follows:

y[i+1](x) = y[i](x) + a[(i+1)0] + a[(i+1)1]·(x − L[i]) + a[(i+1)2]·(x − L[i])^2 + a[(i+1)3]·(x − L[i])^3 − q[i+1]·(x − L[i])^4/(24EI)    (6)

where a[(i+1)0], a[(i+1)1], a[(i+1)2], a[(i+1)3] are the unknown coefficients, L[i] is the horizontal coordinate value of point x, y[i+1](x) and y[i](x) are the interpolation functions on either side of the boundary point, and q[i+1] is the weight increment of segment i+1. At the same time, Fig. 3 shows the mechanical properties of the section, calculated as follows:

q[i+1] = q·cos θ[i+1] − q·cos θ[i],  i = 1, 2, …, n

where θ[i+1] is the angle between the horizontal and the oblique line through points i and i+1, and likewise for θ[i].

According to the mechanics of materials, the deflection, rotation angle, and bending moment are all zero at the end point of the pipeline, i.e., y(0) = y′(0) = y″(0) = 0. So the corresponding equations can be written at the boundary points. The coefficients a[i0], a[i1], a[i2] (i = 1, 2, …, n) can be obtained by solving these equations, and Eqs. (5) and (6) can then be transformed accordingly.

After the simplification, there are still some unknown coefficients, which can be calculated from the lifting height of each point and the pipeline end point H[i]. Since the pipeline is hinged (gimbaled) at its end point, which means the bending moment there is zero, the additional condition is as follows:

M[n] = EI·d^2y[n]/dx^2

The value of parameter A[i] is as follows:

A[i] = [H[i] − y[i−1](L[i]) + q[i]·(L[i] − L[i−1])^4/(24EI)] / (L[i] − L[i−1])^3

The interpolation function of each segment is as follows:

y[i](x) = y[i−1](x) + A[i]·(x − L[i−1])^3 − q[i]·(x − L[i−1])^4/(24EI),  i = 2, 3, …, n    (13)
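To make the recursive interpolation above concrete, here is a minimal Python sketch that evaluates the piecewise deflection curve, assuming the first-segment deflection y1(x) is already known and that the lists H, L and q are indexed consistently with the text. This is only an illustration of the A[i] and y[i](x) relations above, not code from the paper, and the numbers in the usage example are made up.

```python
def make_deflection(y1, H, L, q, EI):
    """Piecewise deflection curve built from the recursive interpolation above.

    y1 : callable, deflection of the first segment (assumed known)
    H  : lifting heights H[i];  L : horizontal coordinates L[i]
    q  : per-segment load increments q[i];  EI : bending stiffness
    Lists use the same index i as the text (entry 0 is unused).
    """
    def y(i, x):
        if i == 1:
            return y1(x)
        span = L[i] - L[i - 1]
        # A[i] from the lifting height of point i and the previous segment's curve
        A = (H[i] - y(i - 1, L[i]) + q[i] * span**4 / (24 * EI)) / span**3
        return y(i - 1, x) + A * (x - L[i - 1])**3 - q[i] * (x - L[i - 1])**4 / (24 * EI)

    return y


if __name__ == "__main__":
    # Hypothetical usage: three lifting points, a trivial first segment, invented numbers.
    deflection = make_deflection(
        y1=lambda x: 0.0,            # placeholder first-segment curve
        H=[None, 0.0, 1.0, 2.5],     # lifting heights (m)
        L=[None, 10.0, 40.0, 70.0],  # horizontal coordinates (m)
        q=[None, 0.0, 0.05, 0.05],   # load increments (kN/m)
        EI=9.16e4,                   # bending stiffness of the modeled pipe (kN·m^2)
    )
    print(deflection(3, 55.0))
```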
The rotation angle and bending moment can be obtained by solving Eq. (13), and the distance L[1] can be found from the following equation:

y[n]″(L[n]) = y[n−1]″(L[n]) + 6·[H[n] − y[n−1](L[n]) + q[n]·(L[n] − L[n−1])^4/(24EI)] / (L[n] − L[n−1])^2 − q[n]·(L[n] − L[n−1])^2/(2EI)

After calculating the deflection curve of the pipeline, the equations for the rotation angle, bending moment, and shear force of each pipeline segment can be obtained according to linear beam theory. After obtaining the interpolation function, the maximum stress can be calculated using the following equation:

σ[max] = M[max]/W

where W is the section (bending) modulus, W = I/y[max]; I is the moment of inertia, I = π(D^4 − d^4)/64; and y[max] is the distance from the neutral axis to the outermost fibre of the pipe cross-section (half the outer diameter). Using the deflection equation of the last segment y[n] and the boundary conditions at the end point, the corresponding angle can then be obtained.

Due to the particularity of the marine environment, the load on the subsea pipeline system is relatively complex. For subsea pipelines, waves and currents are the most important external loads. Since the pipe lifting operation is rarely carried out under strong wind conditions, the wind load is not considered in this paper in order to save calculation time. When calculating and analyzing the pipeline, it is assumed that the pipeline is a flexible structure. The calculation and analysis mainly include the axial tension borne by the pipeline, the action of the environmental load, and the coupled dynamic response of the whole system. The lumped mass method is used for modeling. The pipeline is treated as equivalent to a nonlinear spring, which is discretized into the lumped mass model [33]. The pipeline is simulated as a combination of axial and rotational springs and dampers. Each node concentrates half of the mass of the two adjacent segments, and the forces and moments act on the nodes; this is the mathematical basis for establishing the pipeline load calculation model in OrcaFlex.

The wave and current forces on the submarine pipeline can be calculated by the Morison equation:

Normal component of the current force: F[nc] = (1/2)·C[n]·ρ·D·v[n]^2

Tangential component of the current force: F[tc] = (1/2)·C[t]·ρ·D·v[t]^2

Normal component of the wave force: F[nw] = (1/2)·C[n]·ρ·D·u[n]^2 + C[m]·ρ·(πD^2/4)·(∂u/∂t)

Tangential component of the wave force: F[tw] = (1/2)·C[t]·ρ·D·u[t]^2

where C[n] is the normal drag coefficient and C[t] is the tangential drag coefficient (the values of the drag coefficients change with the Reynolds number); C[m] is the inertia force coefficient, taken as 2; v[n] = v·sinθ and v[t] = v·cosθ are the normal and tangential velocities of the current, respectively; u[n] = u·sinθ and u[t] = u·cosθ are the normal and tangential components of the wave-induced water particle velocity, respectively.

As an important marine environmental load, the submarine surface sediments will also affect the dynamic behavior of the pipeline lifting operation [34]. The undrained shear strength is the main characteristic of the submarine surface sediments. In pipeline lifting operations, the undrained shear strength of surficial marine clays is a significant parameter for engineering construction and geological disaster assessment [35]. In this paper, the submarine surface is considered non-smooth.

The hydrodynamic analysis includes static and dynamic analysis. Static analysis has two main functions: the first is to check whether the system structure reaches static equilibrium under the action of gravity, buoyancy, and flow viscosity; the other is to provide an initial state for the dynamic analysis. The dynamic analysis starts from the steady state provided by the static analysis. It includes the self-construction stage and the model maintenance analysis stage.
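Before describing those two analysis stages in detail, here is a quick numerical illustration of the Morison-type load expressions given above, written as a small Python sketch. The drag and inertia coefficients and the flow numbers are invented for illustration; only the formulas themselves follow the text.

```python
import math

RHO_SEAWATER = 1025.0  # kg/m^3, a typical value (assumption)

def normal_current_force(Cn, D, v, theta):
    """F_nc = 1/2 * C_n * rho * D * v_n^2, with v_n = v * sin(theta). Force per unit length."""
    v_n = v * math.sin(theta)
    return 0.5 * Cn * RHO_SEAWATER * D * v_n**2

def normal_wave_force(Cn, Cm, D, u, du_dt, theta):
    """Drag term plus inertia term C_m * rho * (pi D^2 / 4) * du/dt. Force per unit length."""
    u_n = u * math.sin(theta)
    drag = 0.5 * Cn * RHO_SEAWATER * D * u_n**2
    inertia = Cm * RHO_SEAWATER * math.pi * D**2 / 4.0 * du_dt
    return drag + inertia

# Hypothetical numbers: 0.4 m pipe, 1.5 m/s current hitting the pipe at 90 degrees.
print(normal_current_force(Cn=1.2, D=0.4, v=1.5, theta=math.pi / 2))
print(normal_wave_force(Cn=1.2, Cm=2.0, D=0.4, u=0.8, du_dt=0.5, theta=math.pi / 2))
```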
The self-construction stage is where the wave and ship motion gradually increase from rest to the given values. This stage generally lasts about one wave period. After the self-construction stage, the model can enter the maintenance analysis stage.

The dynamic simulation adopts two calculation methods, explicit and implicit. Both recalculate the system's geometry at each time step and fully account for nonlinear geometric factors, including the spatial variation of wave loads and contact loads. The equation of motion is solved by explicit forward Euler integration with a fixed step size. The initial model parameters are obtained through static analysis, and the forces and moments on each free body and node are then calculated, including gravity, buoyancy, hydrodynamic and air resistance, hydrodynamic added mass, tension and shear force, bending moment, seabed friction, object contact forces, and the forces exerted by hinges and winches.

The boundary-value problem for the pipeline is solved numerically with the discrete lumped mass method [36]. The basic idea of this model is to divide the pipeline into N segments, with the mass of each element concentrated at a node, so that there are N+1 nodes. The tension T and shear V acting at the ends of each segment can be concentrated at a node, and any external hydrodynamic load is concentrated at the node. The equation of motion of the i-th node (i = 0, 1, …, N) is:

M[Ai]·R̈[i] = T[e,i] − T[e,i−1] + F[dI,i] + V[i] − V[i−1] + w[i]·Δs̄[i]

Here R[i] is the position of node i of the pipeline, and

M[Ai] = Δs̄[i]·( m[i] + (π/4)·D[i]^2·(C[an] − 1) )·I − Δs̄[i]·(π/4)·D[i]^2·(C[an] − 1)·(τ[i] ⊗ τ[i−1])

is the mass matrix of a node, where I is a 3×3 identity matrix. T[e,i] = EA·ε[i] = EA·(Δs[ε,i] − Δs[0,i])/Δs[0,i] stands for the effective tension at a node, where Δs[0,i] = L[0]/(N−1) represents the original length of each segment, Δs[ε,i] = |R[i+1] − R[i]| is the stretched length of each segment, and EA is the axial stiffness of the pipeline. F[dI,i] represents the external hydrodynamic load on each node, which is calculated according to the Morison equation:

F[dI,i] = (1/2)·ρ·[D[i]/(1+ε[i])]·Δs̄[i]·( C[dn,i]·|v[n,i]|·v[n,i] + π·C[dt,i]·|v[t,i]|·v[t,i] ) + (π/4)·D[i]^2·ρ·C[an,i]·Δs̄[i]·( a[w,i] − (a[w,i]·τ[i])·τ[i] )

where ρ is the density of sea water, D[i] is the diameter of each cable, C[dn,i] is the normal drag coefficient, C[dt,i] is the tangential drag coefficient, and C[an,i] is the inertia (added-mass) coefficient. The shear force at the node is

V[i] = EI[i+1]·[τ[i] × (τ[i] × τ[i+1])]/(Δs[ε,i]·Δs[ε,i+1]) − EI[i]·[τ[i] × (τ[i−1] × τ[i])]/Δs[ε,i]^2 + H[i+1]·(τ[i] × τ[i+1])/Δs[ε,i]

where H is the torsion. The lumped mass model is shown in Fig. 4.

In OrcaFlex, there are two temporal discretization schemes, explicit and implicit integration. The explicit time integration takes a constant time step to integrate forward. At the beginning of the simulation, after a preliminary static analysis, the initial positions and orientations of all nodes in the model are known. The forces and moments of all free bodies and nodes are then calculated, including gravity, buoyancy, hydrodynamic force, hydrodynamic added mass, tension and shear, bending moment, seabed friction, etc. The motion control equation for each free body and node in the model is as follows:

M(s)·a = F(s, v, t) − C(s, v) − K(s)

where s, v, a, t represent the displacement, velocity, acceleration, and time, respectively, and M, F, C, K represent the mass, load, drag force, and stiffness, respectively. For implicit integration, the generalized-α method is used in OrcaFlex [37]. Forces, moments, damping, and weight are calculated in the same way as for explicit integration. Since the forces, displacements, velocities, and accelerations are unknown at the end of each time step, an iterative approach is required.
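As a schematic illustration of the explicit scheme described above, the following Python sketch advances the motion equation M(s)·a = F(s, v, t) − C(s, v) − K(s) by one forward-Euler step for a single scalar degree of freedom. This is only a conceptual sketch under simplifying assumptions; it is not OrcaFlex code, and the numbers in the usage example are made up.

```python
def euler_step(s, v, t, dt, mass, load, drag, stiffness):
    """One explicit forward-Euler step of M(s)*a = F(s, v, t) - C(s, v) - K(s).

    Everything is scalar here purely for illustration; a real model uses
    vectors and matrices per node and free body.
    """
    a = (load(s, v, t) - drag(s, v) - stiffness(s)) / mass(s)
    return s + v * dt, v + a * dt


# Hypothetical single-degree-of-freedom system: constant mass and load,
# linear drag and stiffness, fixed time step.
s, v, t, dt = 0.0, 0.0, 0.0, 0.01
for _ in range(10000):
    s, v = euler_step(
        s, v, t, dt,
        mass=lambda s: 10.0,
        load=lambda s, v, t: 100.0,
        drag=lambda s, v: 2.0 * v,
        stiffness=lambda s: 50.0 * s,
    )
    t += dt
print(round(s, 3), round(v, 3))  # settles near the static value 100/50 = 2
```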
To improve computational efficiency, a pre-simulation stage is usually preset in OrcaFlex. The simulation time at this stage is set to be no less than one wave period. In the modeling preparation stage, the wave dynamic parameters, ship motion, and current parameters are increased from 0 to a fixed value. In this way, the simulation can have a smooth start, reduce the transient response, and avoid long simulation runs. Although it is easy to achieve stability using implicit integration, the corresponding calculation results are often inaccurate. For rapidly changing physical phenomena, such as fast collisions, more attention should be paid to the accuracy of the calculation results. In this case, it is necessary to compare the calculation results of the implicit and explicit integration schemes in order to study the sensitivity to the time step. Both methods recalculate the geometry of the system after each time step, so the numerical simulations are adequate for geometric nonlinearities, including spatial variations of wave loads and contact loads. The time domain simulation steps in this paper are shown in Fig. 5.

The coordinate system can be divided into the global coordinate system and the vessel coordinate system. Both use the right-hand rule. The origin of the global coordinate system is set at sea level, the Z direction is vertically upward, and the X and Y directions follow the right-hand rule. The vessel coordinate system is generally used when establishing the numerical model, while the global coordinates are adopted when determining the position of the global model. In this paper, the position of the whole model is determined with respect to the global coordinates. The global and vessel coordinate systems are shown in Fig. 6. The directions and headings of waves and currents are shown in Fig. 7.

The upper boundary condition of the pipeline model mainly depends on the motion of the vessel to which it is connected. The motion of the vessel depends on RAOs. RAOs (response amplitude operators) are a concept of engineering statistics in the field of ship or floating body design, which can be used to calculate the behavior of ships working in sea conditions. Vessel RAOs can generally be obtained by model experiment or CFD (Computational Fluid Dynamics). It is usually necessary to calculate the motion of a floating body under various wave conditions. Its essence is a transfer function from wave excitation to vessel motion. In OrcaFlex, once the RAOs are determined, the vessel's motion is determined. The vessel length is 103m, the width is 16m, and the depth is 13.32m. The design draft is 6.66m, the transverse stability radius is 1.84m, the longitudinal stability radius is 114m, the displacement is 8800T, the front projection on the water surface is 191m^2, and the side projection on the water surface is 927m^2. The block coefficient C[B] is 0.804, and the moment of inertia of yaw (head swing) rotation is 5.83×10^9 kg·m^2. The data of RAOs, wave drift force, added mass coefficient, and damping coefficient of the ship are from a diffraction analysis of a 103m long ship in a 400m water depth pool.

The seabed has an important impact on the touchdown part of the pipeline. The seabed is non-smooth, and there is friction between the seabed and the pipeline touchdown area. The friction has a certain positive effect on the pipeline, in that it hinders the low-frequency, slow movement of the pipeline. However, accurately simulating the seabed friction is very difficult.
Accurate modeling would require measured seabed data, which are generally unavailable, so the seabed friction model provided by OrcaFlex is used. The advantage of this model is that it is straightforward: when the resultant velocity V of the X and Y velocity components of the touchdown segment is less than a critical value, the seabed friction varies linearly with V; when V equals the critical value, the friction reaches its maximum and does not increase further as the resultant velocity increases.

In OrcaFlex, the parameters of the pipeline are as follows [38]: the pipeline length is 300 m; the outer diameter is 0.4 m; the inner diameter is 0.36 m; the density is 7.85 t/m³; the weight per unit length is 0.187 t/m; the bending stiffness is 9.16×10^4 kN·m²; the axial stiffness is 5.06×10^6 kN; the water depth is 100 m. The effects of current velocity and wave height are simulated separately. The current velocity ranges from 0 to 4 m/s, sampled every 0.5 m/s, and the wave height ranges from 0 to 3 m, sampled every 0.5 m. The current direction and wave direction are both taken as 0°. The schematic model of the pipeline lifting operation is shown in Fig. 8.

To verify the correctness of the lumped mass method, a towed cable based on the mathematical formulation above is simulated under specified boundary conditions [39], and the results are compared with previous studies. The towed cable includes three sections: Cable, Array, and Drogue. The properties of the towed cable are given in Table 1.

Table 1. Properties of the towed cable
Parameter                      Cable     Array    Drogue
Length (m)                     723       273.9    30.5
Mass per length (kg/m)         1.5895    5.07     0.58
Wet weight per length (N/m)    2.33      0        0.57
Diameter (m)                   0.041     0.079    0.025
Axial stiffness EA (N)         1×10^8    1×10^8   5×10^6
Bending stiffness EI (N·m)     1000      1000     0.01
ρ_w                            2         1.8      1.8

Point A, located 8.2 m along the Array section, is selected for comparison between the lumped mass model and the previous research. The variation of the depth of point A is compared with the results of Gobat et al. [40,41] and Ablow et al. [42]. Fig. 9 indicates that the results from the lumped mass model are consistent with the previous work, and the minimum depth of point A is closer to the measured depth. These comparisons validate the lumped mass method.

Fig. 10 shows the changes of bending moment, curvature, and effective tension along the pipeline length at different wave heights. As the wave height increases from 0 to 3 m, the bending moment along the length of the pipeline increases, but the change is not pronounced, and the bending moment and curvature curves overlap to a large extent. Fig. 10c shows that the effective tension along the length of the pipeline increases markedly with wave height. The bending moment, curvature, and effective tension all change abruptly at the positions where the pipeline length is 20, 40, and 70 m: at these three positions they first increase sharply, then decrease, and then increase sharply again. This behavior is caused by the three lifting cables, which are used to facilitate the operation and to prevent the damage that a large arc bow of the pipeline would otherwise cause during lifting. The contact positions between the lifting cables and the pipeline are analyzed next.
The regions to the left and right of each contact point are affected by gravity and hydrodynamic damping. To the left of the 20 m contact point the tension along the pipeline increases gradually, and at the contact point it rises sharply because of the loads acting on both sides. Between the 20 m and 40 m contact points, and between the 40 m and 70 m contact points, gravity produces a small concave arc bow. The gravity components of the pipeline on either side of this arc bow, acting along the axial centerline, produce a compressive effect that offsets part of the tension and reduces its value. At the 70 m contact point the pipeline is acted on by the lifting cable and the pipeline to its left, and also by the weight of the longer pipeline section to its right. The tension and bending moment at this position are therefore very large, which can easily cause pipeline damage or fracture, especially when the pipeline is lifted to its highest position. At the 100 m position the pipeline is at the transition between the upward convex arc bow of the lifted section and the downward concave arc bow of the section that has not been lifted. At this critical point the pipeline is essentially straight, so the bending moment decreases sharply; the tension at this position remains large, however, because of the lifting force and the weight of the unlifted pipeline.

Fig. 11 shows the changes in bending moment, curvature, and effective tension along the length of the pipeline at different flow velocities. The variations of tension and bending moment at different velocities are similar to those in the previous analysis, but some special features deserve attention. In the bending-moment diagram, over roughly the first 75 m of the pipeline the bending moment without flow and waves is much greater than that with flow. Before the 20 m contact point, increasing the flow velocity has little effect on the effective tension, and the tension curves coincide within this range. In the range 20 to 70 m, the greater the velocity, the smaller the effective tension: a negative correlation. After the 70 m contact point, velocity and tension are positively correlated. This behavior is readily explained by considering how the current acts on the pipeline, taking the 70 m contact point as the boundary between two parts. The section before the 70 m contact point forms an upward convex arc bow. Facing the current, this section experiences lift, which balances part of the tension produced by the lifting cable acting against gravity; the greater the flow velocity, the greater the lift and the stronger this balancing effect, so the tension in this section is negatively correlated with the flow velocity. Conversely, the arc bow of the section after the 70 m contact point is concave, so the current exerts a downward component on this section, which adds to the tensile effect of the lifting operation; hence the greater the flow velocity, the greater the tension. The same pattern is seen in the bending moment and curvature diagrams: the bending moment before and after the convex-to-concave transition point follows a similar trend, but one opposite to that of the tension.
Before the transition point, the lift deepens the convexity of the arc bow, so the greater the flow velocity, the greater the bending moment. After the transition point, the current flows against the concave arc bow, which straightens this section of the pipeline near the seabed, so the bending moment decreases as the velocity increases.
{"url":"https://cdn.techscience.cn/ueditor/files/fdmp/TSP_FDMP-19-3/TSP_FDMP_23919/TSP_FDMP_23919.xml?t=20220620","timestamp":"2024-11-09T19:21:43Z","content_type":"application/xml","content_length":"118509","record_id":"<urn:uuid:cd44676b-c52d-4a82-ba07-a936e2170c52>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00203.warc.gz"}
How to Report Chi-Square Test Results in APA Style: A Step-By-Step Guide

In this article, we guide you through how to report Chi-Square Test results, including essential components like the Chi-Square statistic (χ²), degrees of freedom (df), p-value, and effect size, aligning with established guidelines for clarity and reproducibility.

The Chi-Square Test of Independence is a cornerstone in the field of statistical analysis when researchers aim to examine associations between categorical variables. For instance, in healthcare research, it could be employed to determine whether smoking status is independent of lung cancer incidence within a particular demographic. This statistical technique can decipher the intricacies of frequencies or proportions across different categories, thereby providing robust conclusions on the presence or absence of significant associations.

Conforming to the American Psychological Association (APA) guidelines for statistical reporting not only bolsters the credibility of your findings but also facilitates comprehension among a diversified audience, which may include scholars, healthcare professionals, and policy-makers. Adherence to the APA style is imperative for ensuring that the statistical rigor and the nuances of the Chi-Square Test are communicated effectively and unequivocally.

• The Chi-Square Test evaluates relationships between categorical variables.
• Reporting the Chi-Square, degrees of freedom, p-value, and effect size enhances scientific rigor.
• A p-value under the significance level (generally 0.01 or 0.05) signifies statistical significance.
• For tables larger than 2×2, use adjusted residuals; 5% thresholds are -1.96 and +1.96.
• Cramer's V and Phi measure effect size and direction.

Guide to Reporting Chi-Square Test Results

1. State the Chi-Square Test Purpose

Before you delve into the specifics of the Chi-Square Test, clearly outline the research question you aim to answer. The research question will guide your analysis, and it generally revolves around investigating how certain categorical variables might be related to one another.

Once you have a well-framed research question, you must state your hypothesis clearly. The hypothesis will predict what you expect to find in your study. The researcher needs to have a clear understanding of both the null and alternative hypotheses. These hypotheses function as the backbone of the statistical analysis, providing the framework for evaluating the data.

2. Report Sample Size and Characteristics

The sample size is pivotal for the reliability of your results. Indicate how many subjects or items were part of your study and describe the method used for sample size determination. Offer any relevant demographic information, such as age, gender, socioeconomic status, or other categorical variables that could impact the results. Providing these details will enhance the clarity and comprehensibility of your report.

3. Present Observed Frequencies

For each category or class under investigation, present the observed frequencies. These are the actual counts of subjects or items in each category collected through your research. The expected frequencies are what you would anticipate if the null hypothesis is true, suggesting no association between the variables. If you prefer, you can also present these expected frequencies in your report to provide additional context for interpretation.

4. Report the Chi-Square Statistic and Degrees of Freedom

Clearly state the Chi-Square value that you calculated during the test.
This is often denoted as χ². It is the test statistic that you'll compare to a critical value to decide whether to reject the null hypothesis.

In statistical parlance, degrees of freedom refer to the number of values in a study that are free to vary. When reporting your Chi-Square Test results, it is vital to mention the degrees of freedom, typically denoted as "df."

5. Indicate the p-value

The p-value is a critical component in statistical hypothesis testing, representing the probability that the observed data would occur if the null hypothesis were true. It quantifies the evidence against the null hypothesis. Values below 0.05 are commonly considered indicators of statistical significance. This suggests that there is less than a 5% probability of observing a test statistic at least as extreme as the one observed, assuming that the null hypothesis is true. It implies that the association between the variables under study is unlikely to have occurred by random chance alone.

6. Report Effect Size

While a statistically significant p-value can inform you of an association between variables, it does not indicate the strength or magnitude of the relationship. This is where effect size comes into play. Effect size measures such as Cramer's V or the Phi coefficient offer a quantifiable method to determine how strong the association is.

Cramer's V and the Phi coefficient are the most commonly used effect size measures in Chi-Square Tests. Cramer's V is beneficial for tables larger than 2×2, whereas Phi is generally used for 2×2 tables. Both are derived from the Chi-Square statistic and help compare results across different studies or datasets. Effect sizes are generally categorized as small (0.1), medium (0.3), or large (0.5). These categories help the audience in making practical interpretations of the study findings.

7. Interpret the Results

Based on the Chi-Square statistic, degrees of freedom, p-value, and effect size, you need to synthesize all this data into coherent and clear conclusions. Here, you must state whether your results support the null hypothesis or suggest that it should be rejected. Interpreting the results also involves detailing the real-world relevance or practical implications of the findings. For instance, if a Chi-Square Test in a medical study finds a significant association between a particular treatment and patient recovery rates, the practical implication could be that the treatment is effective and should be considered in clinical guidelines.

8. Additional Information

When working with contingency tables larger than 2×2, analyzing the adjusted residuals for each combination of categories between the two nominal qualitative variables becomes necessary. Suppose the significance level is set at 5%. In that case, adjusted residuals with values less than -1.96 or greater than +1.96 indicate an association in the analyzed combination. Similarly, at a 1% significance level, adjusted residuals with values less than -2.576 or greater than +2.576 indicate an association.

Charts, graphs, or tables can be included as supplementary material to represent the statistical data visually. This helps the reader grasp the details and implications of the study more effectively.

Vaccine Efficacy in Two Age Groups

Suppose a study aims to assess whether a new vaccine is equally effective across different age groups: those aged 18-40 and those aged 41-60. A sample of 200 people is randomly chosen, half from each age group.
After administering the vaccine, it is observed whether or not the individuals contracted the disease within a specified timeframe.

Observed Frequencies
• Age 18-40: Contracted Disease: 12; Did Not Contract Disease: 88
• Age 41-60: Contracted Disease: 28; Did Not Contract Disease: 72

Expected Frequencies
If there were no association between age group and vaccine efficacy, we would expect an equal proportion of individuals in each group to contract the disease. The expected frequencies would then be:
• Age 18-40: Contracted Disease: (12+28)/2 = 20; Did Not Contract Disease: (88+72)/2 = 80
• Age 41-60: Contracted Disease: 20; Did Not Contract Disease: 80

Chi-Square Test Results
• Chi-Square Statistic (χ²): 10.8
• Degrees of Freedom (df): 1
• p-value: 0.001
• Effect Size (Cramer's V): 0.23

• Statistical Significance: The p-value being less than 0.05 indicates a statistically significant association between age group and vaccine efficacy.
• Effect Size: The effect size of 0.23, although statistically significant, is on the smaller side, suggesting that while age does have an impact on vaccine efficacy, the practical significance is moderate.
• Practical Implications: Given the significant but moderate association, healthcare providers may consider additional protective measures for the older age group but do not necessarily need to rethink the vaccine's distribution strategy entirely.

Results Presentation

To evaluate the effectiveness of the vaccine across two different age groups, a Chi-Square Test of Independence was executed. The observed frequencies revealed that among those aged 18-40, 12 contracted the disease, while 88 did not. Conversely, in the 41-60 age group, 28 contracted the disease, and 72 did not. Under the assumption that there was no association between age group and vaccine efficacy, the expected frequencies were calculated to be 20 contracting the disease and 80 not contracting the disease for both age groups. The analysis resulted in a Chi-Square statistic (χ²) of 10.8, with 1 degree of freedom. The associated p-value was 0.001, below the alpha level of 0.05, suggesting a statistically significant association between age group and vaccine efficacy. Additionally, an effect size was calculated using Cramer's V, which was found to be 0.23. While this effect size is statistically significant, it is moderate in magnitude.

Alternative Results Presentation

To assess the vaccine's effectiveness across different age demographics, we performed a Chi-Square Test of Independence. In the age bracket of 18-40, observed frequencies indicated that 12 individuals contracted the disease, in contrast to 88 who did not (expected frequencies: Contracted = 20, Not Contracted = 80). Similarly, for the 41-60 age group, 28 individuals contracted the disease, while 72 did not (expected frequencies: Contracted = 20, Not Contracted = 80). The Chi-Square Test yielded significant results (χ²(1) = 10.8, p = .001, V = .23). These results imply a statistically significant, albeit moderately sized, association between age group and vaccine efficacy.

Reporting Chi-Square Test results in APA style involves multiple layers of detail. From stating the test's purpose, presenting sample size, and explaining the observed and expected frequencies to elucidating the Chi-Square statistic, p-value, and effect size, each component serves a unique role in building a compelling narrative around your research findings.
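For readers who want to script this kind of analysis, here is a hedged Python sketch using scipy. It uses the contingency table from the example above; note that whether a continuity correction is applied changes the exact χ² value, so the printed figures will not necessarily match those quoted in the example.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from the example above
# rows: age 18-40, age 41-60; columns: contracted, did not contract
observed = np.array([[12, 88],
                     [28, 72]])

# Pearson chi-square test of independence (no Yates correction here)
chi2, p, dof, expected = chi2_contingency(observed, correction=False)

# Cramer's V effect size for an r x c table
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, Cramer's V = {cramers_v:.2f}")
```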
By diligently following this comprehensive guide, you empower your audience to gain a nuanced understanding of your research. This not only enhances the validity and impact of your study but also contributes to the collective scientific endeavor of advancing knowledge.

Recommended Articles

Interested in learning more about statistical analysis and its vital role in scientific research? Explore our blog for more insights and discussions on relevant topics.

Frequently Asked Questions (FAQs)

Q1: What is a Chi-Square Test of Independence?
The Chi-Square Test of Independence is a statistical method used to evaluate the relationship between two or more categorical variables. It is commonly employed in various research fields to determine if there are significant associations between variables.

Q2: When should I use a Chi-Square Test?
Use a Chi-Square Test to examine the relationship between two or more categorical variables. This test is often applied in healthcare, social sciences, and marketing research, among other fields.

Q3: What is the p-value in a Chi-Square Test?
The p-value represents the probability that the observed data occurred by chance if the null hypothesis is true. A p-value less than 0.05 generally indicates a statistically significant relationship between the variables being studied.

Q4: How do I report the results in APA style?
To report the results in APA style, state the purpose, sample size, observed frequencies, Chi-Square statistic, degrees of freedom, p-value, effect size, and interpretation of the findings. Additional information, such as adjusted residuals and graphical representations, may also be included.

Q5: What is the effect size in a Chi-Square Test?
Effect size measures like Cramer's V or the Phi coefficient quantify the strength and direction of the relationship between variables. Effect sizes are categorized as small (0.1), medium (0.3), or large (0.5).

Q6: How do I interpret the effect size?
Interpret the effect size in terms of its practical implications. For example, a small effect size, although statistically significant, might not be practically important. Conversely, a large effect size would likely have significant real-world implications.

Q7: What are adjusted residuals?
In contingency tables larger than 2×2, adjusted residuals are calculated to identify which specific combinations of categories are driving the observed associations. Thresholds commonly used are -1.96 and +1.96 at a 5% significance level.

Q8: Can I use Chi-Square Tests for small samples?
Chi-square tests are more reliable with larger sample sizes. For small sample sizes, it is advisable to use an alternative test like Fisher's Exact Test.

Q9: What is the difference between a Chi-Square Test and a t-test?
While a t-test is used to compare the means of two groups, a Chi-Square Test is used to examine the relationship between two or more categorical variables. Both tests provide different types of information and are used under different conditions.

Q10: Are there any alternatives to the Chi-Square Test?
Yes, alternatives like Fisher's Exact Test for small samples and the Kruskal-Wallis test for ordinal data are available. These are used when the assumptions for a Chi-Square Test cannot be met.
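As a companion to Q8 and Q10, a minimal sketch of Fisher's Exact Test on a small 2×2 table is shown below; the counts are made up purely for illustration.

```python
from scipy.stats import fisher_exact

# Hypothetical small-sample 2x2 table (rows: group A / group B, columns: outcome yes / no)
table = [[3, 9],
         [7, 5]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```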
{"url":"https://statisticseasily.com/report-chi-square/","timestamp":"2024-11-12T22:22:57Z","content_type":"text/html","content_length":"200970","record_id":"<urn:uuid:e5383ffc-efed-49aa-b9d8-8ef2c53ac1de>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00054.warc.gz"}
Current fluctuations for TASEP: A proof of the Prähofer-Spohn conjecture

We consider the family of two-sided Bernoulli initial conditions for TASEP which, as the left and right densities (ρ-, ρ+) are varied, give rise to shock waves and rarefaction fans, the two phenomena which are typical to TASEP. We provide a proof of Conjecture 7.1 of [Progr. Probab. 51 (2002) 185-204], which characterizes the order of and scaling functions for the fluctuations of the height function of two-sided TASEP in terms of the two densities ρ-, ρ+ and the speed y around which the height is observed. In proving this theorem for TASEP, we also prove a fluctuation theorem for a class of corner growth processes with external sources, or equivalently for the last passage time in a directed last passage percolation model with two-sided boundary conditions: ρ- and 1-ρ+. We provide a complete characterization of the order of and the scaling functions for the fluctuations of this model's last passage time L(N,M) as a function of three parameters: the two boundary/source rates ρ- and 1-ρ+, and the scaling ratio γ^2 = M/N. The proof of this theorem draws on the results of [Comm. Math. Phys. 265 (2006) 1-44] and extensively on the work of [Ann. Probab. 33 (2005) 1643-1697] on finite rank perturbations of Wishart ensembles in random matrix theory.

• Asymmetric simple exclusion process
• Interacting particle systems
• Last passage percolation

ASJC Scopus subject areas
• Statistics and Probability
• Statistics, Probability and Uncertainty
{"url":"https://nyuscholars.nyu.edu/en/publications/current-fluctuations-for-tasep-a-proof-of-the-pr%C3%A4hofer-spohn-conj","timestamp":"2024-11-01T19:55:39Z","content_type":"text/html","content_length":"53997","record_id":"<urn:uuid:8e6f5560-5e26-4d52-a820-eac493fb7ae6>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00240.warc.gz"}
Problems extracting phase of dual FFT

5 years ago ● 4 replies ● latest reply 5 years ago ● 164 views

I am performing dual FFT analysis on a system.

1) I generate a Log Sine Sweep signal and save it as my reference array.
2) I then produce an inverse filter to correct for amplitude and save this as my inverse array.
3) I play this signal through my device under test, and record the result from a measurement microphone and save this as my measurement array.
4) I perform an FFT on all three arrays.
5) I multiply both the measurementFFT and the referenceFFT with the inverseFFT.
6) I then divide the corrected measurementFFT by the corrected referenceFFT.
7) Finally I perform an inverse FFT on this division product to get back to the time domain, and this gives me my impulse response.

If I plot this I have a clear impulse response. If I try to extract the phase from this I have issues. To do this I find the reference delay time by finding the max absolute value of the impulse. This will be the number of samples I need to delay the reference signal by if I am to extract minimal

So, I go back to square one:

1) I zero pad the array.
2) I then move the reference array so it is delayed by the impulse sample number. (The zero padding in the previous step results in zeros each side to equal the same length as the measurement array.)
3) I then FFT both of these, repeating steps 4-6 from before.
4) I then extract the phase from the impulse in the frequency domain, by using phase(i) = atan2(im(i), re(i)) for each value of the division product.
5) I then convert to degrees by multiplying each value by 180 and then dividing by pi.

If I plot this I end up with a lot of wraps in phase. Playing around a bit, I have discovered that if I subtract my impulse delay time from the total count and use that as my new delay time, it gives the expected result.

Have I missed something here? (i.e. if I have 32768 input samples, and my impulse is at sample 12, I would delay my reference by 32756 samples using the above method to give me the resultant phase I would expect). Have I missed anything obvious?

Reply by ● July 4, 2019

Hey there Samp17,

I am not sure I understand what the problem might be, but I would suggest a couple of "tests" to acquire a better understanding. By delaying the reference signal by this (big amount of) time, it looks like you could instead advance the measured signal in time to align them. Would you care to check this too? One more thing you could try is to perform the delay in the frequency domain (adding a phase offset to all bins) and see if you get the same results. Again, I would suggest trying it both ways (delay/advance both signals and check the results). Finally, just a side note: you could possibly try to find the peak from the squared impulse response (energy method), which in close-to-ideal situations will yield similar (if not the same) results, but in some situations may slightly improve the noise robustness (beware, it is still quite susceptible to noise, like the absolute impulse response method).

Reply by ● July 4, 2019

Just curious: are you trying to do speaker or room EQ?

Reply by ● July 4, 2019

The aim is to make loudspeaker measurements within a venue. This is my second post on this forum. I would very much like to be able to take a log sine sweep of two loudspeakers (with the mic in the same position), window the impulse response to filter out some of the room, smooth the frequency response, and then I can time align the sources and EQ them.
But a big part is producing a reliable phase trace.

Reply by ● July 4, 2019

I'm having a hard time understanding what you are saying at the end. Would you mind sharing images of the impulse/phase plots before and after you manually adjusted the delay?
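For anyone following the thread, below is a minimal numpy sketch of the dual-FFT workflow described in the first post (illustrative names only, not the original poster's code). The delay is removed with a phase ramp in the frequency domain rather than by shifting the reference array; note that with FFT processing all shifts are circular, so a shift of d samples one way is equivalent to a shift of N - d the other way, which may be worth checking against the observation above.

```python
import numpy as np

def dual_fft_phase(reference, measured):
    """Sketch of a dual-FFT measurement: transfer function, impulse response,
    delay estimate, and delay-compensated phase (illustrative only)."""
    n = len(measured)
    X = np.fft.rfft(reference, n)              # reference sweep spectrum
    Y = np.fft.rfft(measured, n)               # measured (DUT) spectrum
    H = Y / X                                  # equivalent to applying the inverse
                                               # filter to both and dividing
    h = np.fft.irfft(H, n)                     # impulse response
    delay = int(np.argmax(np.abs(h)))          # propagation delay in samples
    k = np.arange(len(H))
    H0 = H * np.exp(2j * np.pi * k * delay / n)  # remove the pure delay (phase ramp)
    phase_deg = np.degrees(np.angle(H0))       # wrapped phase in degrees
    return h, delay, phase_deg
```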
{"url":"https://www.dsprelated.com/thread/8961/problems-extracting-phase-of-dual-fft","timestamp":"2024-11-13T16:12:20Z","content_type":"text/html","content_length":"36627","record_id":"<urn:uuid:845c82f2-341f-462b-bfec-51835d6feac5>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00231.warc.gz"}
Math AI - Free & Powerful AI Math Solver | GeniusTutor

GeniusTutor AI Math Solver

Learn Math Like a Genius With Our Math AI

Smart Math AI That Simplifies Math Learning

Struggling with math problems? Looking desperately for ways to improve your math grade? GeniusTutor's math AI is here to help! Our AI math solver is your ticket to math confidence and success. From elementary school to college math learning, we can offer tailored, easy-to-grasp guidance on any concept and detailed, step-by-step solutions to any problem.

Comprehensive AI Math Solver

Our AI-driven platform will be the key to unlocking your full learning potential, making everything about math a breeze. Here's a quick overview of the types of math problems GeniusTutor can help you solve:

Math AI You Can Trust

GeniusTutor's math AI maintains a high level of accuracy and reliability, helping you learn math like a genius.

Consistently High Accuracy
You want an AI math solver that provides precise solutions. And GeniusTutor delivers just the help you need. Our math AI has been rigorously tested and has a 98% accuracy in offering correct solutions.

Clarity Guaranteed
With GeniusTutor, you not only get the answer. It takes you through a step-by-step process, breaking down complex problems into understandable parts. So you'll know clearly where that answer comes from.

Key Concepts Demystified
GeniusTutor explains key math concepts in an understandable way. This doesn't only help you solve the problems at hand, but also equips you with the ability to tackle similar problems in the future.

AI Math Solver That Benefits Everyone

Whoever you are, GeniusTutor provides tailored support to enhance your math learning.

• Students: GeniusTutor can assist with homework, solve practice problems, and offer explanations for complex math concepts that are difficult to understand.
• Lifelong Learners: Lifelong learners can also benefit from our AI math solver, as it provides a convenient and accessible way to refresh their math skills or solve math problems in their work.
• Teachers: More than providing math learning assistance, GeniusTutor can also serve as a teaching aid for teachers to supplement their lessons and provide additional support to their students.

How Our AI Math Solver Works

We know the challenge of math has already worn you out and you need quick help. So we've designed our AI math solver to be as easy to use as counting 1-2-3.

• 01 Input Your Problem: Type or upload the math problem you're having trouble with.
• 02 GeniusTutor Analyzes the Problem: Our math AI will start looking into the problem and preparing the solution.
• 03 Get Detailed Solution: Get step-by-step instructions and clear explanations to understand the solution process.

Why Choose GeniusTutor for Your Math Learning

• Highly precise: 98% accurate math AI
• Comprehensive: Covering all math problem types
• Step-by-step guides: Make sure you know how and why
• Benefits everyone: Students, lifelong learners, educators, ...

• Can GeniusTutor help with math homework assignments?
Absolutely! GeniusTutor is designed to assist with homework by guiding you through problems with detailed explanations, making it an invaluable homework companion.

• Is GeniusTutor suitable for all educational levels?
Yes, GeniusTutor caters to a broad spectrum of educational levels, from elementary school math to college-level courses.

• How does GeniusTutor adapt to my learning style?
GeniusTutor uses AI to analyze the level of your problem and your requirements.
Then it tailors explanations to ensure you're grasping concepts in a way that makes the most sense to you.

• How does your math AI ensure I understand the math concepts?
We demonstrate the explanations and solutions in a clear, step-by-step manner, ensuring you not only know the answer but also understand the process.

• Can I access GeniusTutor on multiple devices?
Yes, GeniusTutor is accessible on various devices, including smartphones, tablets, and computers, so you can learn on the go or from the comfort of your home.

• How much does GeniusTutor cost?
We strive to make math education accessible. GeniusTutor offers various subscription plans to fit different needs and budgets, along with a free version so you can try out the platform before committing.

Become a Math Whiz Today!

Get started with GeniusTutor now and transform your math learning experience into a rewarding journey!
{"url":"https://geniustutor.ai/math","timestamp":"2024-11-01T19:00:54Z","content_type":"text/html","content_length":"82147","record_id":"<urn:uuid:cbe0d5b8-6eca-4a9e-8256-67a0b3400ade>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00808.warc.gz"}
The UW Math Placement Test

This page provides a comprehensive overview of our policies and procedures regarding the UW math placement test. New and incoming students that have questions regarding enrollment eligibility, placement, course selection, transfer credit, etc. may refer to our New Student Placement & Enrollment page for more detailed and nuanced information.

General Information

Taking the UW Math Placement Test

Information on the UW math placement test and retakes (including how to request retake permission) can be found below. Note that retakes should be requested as soon as possible to allow time for communications to be received and access to be processed in time for the above deadline.

Incoming Students

Incoming students are notified upon admission on which placement test(s) they need to take upon entry to the University and before their SOAR date. UW-Madison students are required to take the Math B exam. Math B is the proctored version of the UW System Math placement test, while Math A is the non-proctored version of the placement test. Both versions are designed to test the same set of skills. The only difference is that Math B is proctored and UW-Madison does not accept Math A results. The Math B exam is offered both in-person as a paper and pencil test, and online via Live Remote Proctoring. Both freshman and transfer students taking the test after March 2023 can select any of the options labeled "Math B".

All incoming freshmen must take the math placement test (Math B version), and transfer students are notified of their need to take it based on previous coursework. Students who do not have college transfer credits or college credits in mathematics and want to take math classes need to take the math placement test. The SOAR website has more information on how to register for placement tests. The SOAR FAQs page has a section specific to placement tests.

Continuing Students

• Continuing students who have not taken the placement test yet are welcome to do so. Directions for continuing students can be found here. Students who took their last math course over a year ago are strongly encouraged to take the math placement exam on an advisory basis, even if they are not required to. Placement based on their past work alone is often inaccurate. Students that would like to retake the math placement test will find more information further down the page.

Viewing Scores

Students can view their placement test scores within two weeks after taking the exam by following these directions.

Placement Algorithm

Math placement (in lieu of transfer credit for an equivalent math course) is based upon an algorithm. A student's placement test scores are used in combination with the algorithm to determine appropriate math placement. Students with questions about how course selection and eligibility works based on placement test scores should refer to our New Student Placement & Enrollment page.

Current Placement Algorithm

│ MFUND │ AALG │ TAG │ Mathematics Course Options │
│150-355│150-850│150-850│MATH 96 (this course does not count for degree credit)│
│356-465│150-850│150-850│MATH 96 (this course does not count for degree credit) or MATH 141. Must take MATH 96 if additional math courses are required (for major or as prerequisite to other courses)│
│       │150-485│150-555│MATH 112 (followed by MATH 113 for MATH 221)│
│       │150-485│556-850│MATH 112 (will not need MATH 113 for MATH 221) or MATH 114 or MATH 171/217 sequence (must take both courses and is equivalent to MATH 114 and MATH 221)│
│       │       │150-555│MATH 114 or MATH 112 (followed by MATH 113 for MATH 221) or MATH 171/217 sequence (must take both courses and is equivalent to MATH 114 and MATH 221). [QR-A satisfied]│
│       │       │556-850│MATH 112 (will not need MATH 113 for MATH 221) or MATH 114 or MATH 171/217 sequence (must take both courses and is equivalent to MATH 114 and MATH 221) [QR-A satisfied]│
│       │536-850│150-555│MATH 113 (will not need MATH 112 for MATH 221) or MATH 211 or MATH 114 or MATH 171/217 sequence (must take both courses and is equivalent to MATH 114 and MATH 221). [QR-A satisfied]│
│       │536-850│556-850│MATH 211 or MATH 221 [QR-A satisfied]│

Pre-Summer 2017 Placement Algorithm

│ MBSC │ ALG │ TRG │ Mathematics Course Options │
│150-355│150-850│150-850│Remedial status: If SAT-M>540 or ACT-M>21 then Math 101 OR Math 141, other Math 95 (See Note 1)│
│356-405│150-850│150-850│Math 101 OR Math 141 (See Note 1)│
│       │150-415│150-850│Math 101 OR Math 141 (See Note 1)│
│       │416-495│150-565│Math 112 (followed by Math 113 for calculus 221)│
│       │416-495│566-850│Math 112 (will not need Math 113 for calculus 221) OR Math 114 OR Math 171-217 sequence (See Note 2)│
│406-850│496-565│150-565│Math 114 OR Math 171-217 sequence (See Notes 2 and 3) [QR-A satisfied]│
│       │496-565│566-850│Math 112 (will not need Math 113 for calculus 221) OR Math 114 OR Math 171-217 sequence (See Note 2) [QR-A satisfied]│
│       │566-850│150-565│Math 113 OR Math 210 OR Math 211 OR Math 114 OR Math 171-217 sequence (See Note 2) [QR-A satisfied]│
│       │566-850│566-850│Math 210 or 211 or 221 [QR-A satisfied]│

Retaking the Placement Test

Important Considerations
We strongly suggest that you retake the math placement test only if you think that your placement level is inaccurate. It may not be in your interest to place into a higher-level course if, in fact, you are not quite prepared for that class. Starting SOAR Summer 2024, the best (highest) math course placement (based on the scores from a single administration of the test) will be accepted in course requisites and degree requirements. This is a change from the previous policy that accepted the most recent math placement test results only. Be sure you have adequately reviewed your math skills so you will do well on the placement test. Review resources are below.

Retest Permission

Students need to get authorization from a math consultant for all retakes. We have two ways to go about getting retake permission depending on where a student is at:

During SOAR

During SOAR, students should ask their advisor about retake permission. Advisors will be able to fill out a retake request form on behalf of their student, which will send an automated email with more information.

After SOAR or During Academic Year

After the conclusion of a student's SOAR appointment or during the academic year, an email should be sent to placement@math.wisc.edu requesting permission to retake the placement test. Retest permission will not be granted before a SOAR session has been attended.

Where, When & How to Retake the Placement Test

Students have the option to retake the placement test after having received permission from an advisor or the Math Department. Immediately below are policies and considerations for retaking. If you are in need of or would like to request disability accommodations, please see the Retake Exam Accommodations section further down the page for more information.

Retake Scoring Timeline, Format/Modality & Schedule

IMPORTANT NOTE for Fall 2024 ENROLLMENT

Retake scores can take up to 2 business days to post. For students looking to retake and adjust their schedule for Fall 2024, we suggest that they retake as soon as possible after having done some review, so that they can make appropriate schedule adjustments with respect to the add/drop deadlines. Please keep the following in mind:

1. The first day of the Fall 2024 semester is September 4.
2. If you retake your exam before the beginning of classes, your scores may be posted by the time classes begin, allowing you to change your schedule.
3. We suggest students keep the add/drop deadlines in mind when planning their retake date. They should ideally retake the exam so scores can be posted and schedules can be adjusted before September 4.
4. If the retake exam is taken after September 11, the scores may not be posted in time for the add deadline on September 13. Any course enrollment after that time is based on department permission and is not guaranteed to be granted.
5. With any schedule adjustments after September 11, a DR grade will be reflected on a student's record. However, this is generally not a significant issue and simply reflects an academic action. More information on DR grades can be found here.

Format/Modality & Schedule

Regular retesting will be held in-person. There are numerous opportunities for students to retake in time for the initial semester enrollment deadlines. Students can view the retake schedule for available exam dates/times.
A few notes on retake dates:
• Retakes are available periodically throughout the summer.
• Retake dates are available more frequently in the week leading up to the Fall 2024 semester.
• Retake dates are available daily during the first two weeks of class.

Once retake permission is given, advance registration for a testing session is required. More information about how and where to register will be shared as dates are posted. Students should also bring a government-issued photo ID, a couple of #2 pencils and a non-graphing calculator.

If you intend on retaking in preparation for summer course enrollment: please plan to retake in-person and in time for scores to be posted for the summer term. If you are not able to retake in-person for summer enrollment, please contact placement@math.wisc.edu to discuss your options.

There is no fee to retake the placement test.

Viewing Your Retake Scores & Adjusting Your Schedule

After the retake exam, students should check their Student Center daily by following these directions. If you receive your new scores and wish to make a schedule adjustment, you should be able to do so. If you are having issues with making adjustments despite your new scores being posted, please review the Enrollment, Scheduling & Placement Help page for info on how to get some help and who to contact.

If your scores are not posted within two business days of taking the exam, please contact placement@math.wisc.edu to see about getting your scores updated to your Student Center.

Retake Exam Accommodations

Any requests or questions regarding disability accommodations can be sent to Tim O'Connor (with Testing and Evaluation Services) at tnoconno@wisc.edu. Students that would like to request disability accommodations for the regular academic year can reach out to the McBurney Disability Resource Center.

Review Resources

We offer a number of optional resources for students that wish to review before retaking the placement test.

Math Learning Center (MLC) Resources

Students are welcome to consider the free Boot Camp modules as review resources for math placement test retakes. You may consider the Precalculus and/or Calculus Boot Camp modules for preparation and review of relevant concepts. We recommend that you work through both Boot Camp modules if you are trying to place into calculus. These modules are only meant as a review of materials, and do not guarantee that you will be placed into a specific course. After retaking the placement test and adjusting your enrollment based on the results, you may consider reviewing our review resources page for some suggestions on reviewing and preparing for whichever math course you'd like to enroll in.

Cengage Webassign

UW Math Placement Practice Exam course
• A free practice exam geared toward the UW math placement test; students get three tries
• Questions 1-35 are MFUND, questions 36-62 are AALG, and questions 63-85 are TRIG.
• Includes class insights so students will know what areas they struggle in
• See here for directions on how to access the course
• Class Key: wisc 4312 6471

UW Math Placement Remediation course
• An optional, paid targeted practice module geared toward the UW math placement test; students pay a fee then receive access to the module. The fee is $23.99.
• Includes class insights so students will know what areas they struggle in.
• See here for directions on how to access and purchase the optional module, and additional information.
• Class Key: wisc 3339 0538

Testing and Evaluation Services
• A link to a practice mathematics placement test with sample test items
• A breakdown of the general characteristics of the test and a percentage scale for mathematics concepts

Advising & Enrollment Help

Students that would like some help with questions/issues regarding enrollment, class scheduling, and placement can review the Enrollment, Scheduling & Placement Help page. If you need to readjust your math coursework after you retake your placement test, please reach out to the placement advisor. The placement advisor can assist you with math course scheduling changes and selection based on your new placement test scores. For specific math course selection and planning questions based on your program/major of interest, your assigned academic advisor or an advisor in your major/program of interest is likely your best resource.
{"url":"https://math.wisc.edu/undergraduate/placement/placement-test/","timestamp":"2024-11-03T05:55:45Z","content_type":"text/html","content_length":"100498","record_id":"<urn:uuid:c32b48ca-942b-4bf3-bd55-343e64d12c8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00892.warc.gz"}
Ve203 Assignment 5: Sunzi, Fermat, Legendre

Exercise 5.1 Binary Insertion Sort
The binary insertion sort is a variation of the insertion sort that uses a binary search technique rather than a linear search technique to insert the i-th element in the correct place among the previously sorted elements.
i) Express the binary insertion sort in pseudocode. (2 Marks)
ii) Compare the number of comparisons of elements used by the insertion sort and the binary insertion sort when sorting the list 7, 4, 3, 8, 1, 5, 4, 2. (2 Marks)
iii) Show that the insertion sort uses O(n^2) comparisons of elements. (2 Marks)
iv) Find the complexity of the binary insertion sort. Is it significantly faster? (2 Marks)

Exercise 5.2
Order the letters M, I, C, H, I, G, A, N alphabetically using
i) merge sort, (2 Marks)
ii) insertion sort, (2 Marks)
iii) bubble sort (2 Marks)
algorithms. (Note that it does not matter that the letter "I" is repeated.) For each algorithm, show what the arrangement is after each pass/merge. How many comparisons of letters are made using each algorithm?

Exercise 5.3 Application to Feng Shui
The iterated integer sum of n ∈ N \ {0} is calculated as follows: the decimal digits of n are added to yield a sum n_1. If n_1 is greater than 9, the digits of n_1 are added. This process is repeated until a number between 0 and 9 is obtained. For example, the iterated integer sum of 54469 is calculated as follows: 5 + 4 + 4 + 6 + 9 = 28, 2 + 8 = 10, 1 + 0 = 1.
i) Give the worst-case number of additions that need to be performed to calculate the iterated integer sum of n ∈ N \ {0} in this way. (2 Marks)
ii) How is the iterated integer sum of a number n related to n mod 9? Prove your assertion! (3 Marks)
iii) Generalize (ii) to integers represented in arbitrary base b. (2 Marks)
(The iterated integer sum plays a role in Feng Shui; see, for example, http://fengshui.about.com/od/fengshuicures/qt/kua number.htm.)
(2+3+2 Marks)

Exercise 5.4 Modular Exponentiation
Find 4^1021042 mod 2014 using the algorithm for modular exponentiation given in the lecture. Show all the steps in the algorithm. (2 Marks)

Exercise 5.5 Stein's Algorithm for the GCD in Base 2
Let a > b be two natural numbers.
i) How can multiplication or division by 2 be efficiently performed in base 2? (1 Mark)
ii) If a and b are both even, express gcd(a, b) in terms of gcd(a/2, b/2). (1 Mark)
iii) If a and b are both odd, express gcd(a − b, b) in terms of gcd(a, b). (1 Mark)
iv) Work out an algorithm to calculate the gcd of two natural numbers in base 2. (2 Marks)
(According to D. Knuth, this algorithm, called Stein's algorithm, was probably known in 1st-century China.)

Exercise 5.6
Solve the following recurrence relations:
a_n = a_{n−1} + 6a_{n−2}, n ≥ 2, a_0 = 3, a_1 = 6,
a_{n+2} = −4a_{n+1} + 5a_n, n ≥ 0, a_0 = 2, a_1 = 8.
(4 Marks)

Exercise 5.7
Prove Theorem 2.3.8 of the lecture, which states that all solutions of a linear homogeneous recurrence relation of degree two are of the form
a_n = α_1 · r_0^n + α_2 · n r_0^n, α_1, α_2 ∈ R, n ∈ N,
if there is only a single characteristic root r_0. (2 Marks)

Exercise 5.8
Find all solutions of the following recurrence relations:
a_n = 5a_{n−1} − 6a_{n−2} + 42 · 4^n,
a_n = −5a_{n−1} − 6a_{n−2} + 2^n + 3n,
a_n = 7a_{n−1} − 16a_{n−2} + 12a_{n−3} + n · 4^n.
(6 Marks)
{"url":"https://codingprolab.com/answer/ve203-assigment-5-sunzi-fermat-legendre/","timestamp":"2024-11-12T03:56:15Z","content_type":"text/html","content_length":"108266","record_id":"<urn:uuid:c29eff0f-1443-4af4-a14d-8564a660310c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00171.warc.gz"}
Let \(T=\left\{(n, i) ; 0 \leq i \leq 2^{n}-1, n \in \mathbf{N}_{0}\right\}\) be equipped with a partial ordering \(<\) determined by the relation \((n, i)<(n+1, j)\) iff \(j \in\{2 i, 2 i+1\}\). Then \((T,<)\) is a binary tree. By a segment we mean a subset \(S=\{t \in T ; (n, i) \leq t \leq (m, j)\}\) for some \((n, i),(m, j) \in T\). A maximal linearly ordered subset of \(T\) is called a branch. Let \(\Gamma\) denote the set of all branches in \(T\). Show that \(\operatorname{card}(\Gamma)=c\), the cardinality of the continuum.

Short Answer

The cardinality of the set of all branches \(\Gamma\) in \(T\) is \(\mathbf{c}\), the cardinality of the continuum.

Step by step solution

Understand the Structure of T

The set \(T=\left\{(n, i) ; 0 \leq i \leq 2^{n}-1, n \in \mathbf{N}_{0}\right\}\) is a set of pairs \((n, i)\) where \(n\) is a non-negative integer and \(i\) is an integer between 0 and \(2^n - 1\). This sets up a binary tree structure where \((n, i) < (n+1, j)\) if and only if \(j \in \{2i, 2i+1\}\).

Define Branches

A branch in this context is a maximal linearly ordered subset of \(T\). This means it is a path from the root node at \((0,0)\) through the tree, continuing infinitely.

Define the Continuum and Cantor's Diagonal Argument

The continuum \(c\) refers to the cardinality of the set of real numbers, which is equivalent to the power set of the natural numbers, \(\mathcal{P}(\mathbf{N})\). Cantor's diagonal argument shows that this set is uncountably infinite.

Establish the Cardinality of Branches

Each branch in \(T\) can be uniquely represented by an infinite binary sequence, where each step down the branch corresponds to choosing either \(2i\) or \(2i+1\). The number of all such infinite binary sequences is \(2^{\mathbf{N}}\), which has the cardinality of the continuum, \(c\).

Conclude the Solution

Therefore, the set \(\Gamma\) of all branches in \(T\) has the same cardinality as the continuum. This is because the number of infinite binary sequences (which corresponds to the number of branches in \(T\)) is equal to the cardinality of the real numbers.

Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Binary Tree
The power of binary trees lies in their ability to efficiently manage and organize data, which is especially useful in search algorithms and hierarchical data representations. Cardinality refers to the number of elements in a set. In mathematical terms, it's the measure of the 'size' of a set. We encounter two primary types of cardinality: finite and infinite. In our context, we are particularly interested in infinite cardinalities. For instance, the set of natural numbers \(\mathbf{N}\) has a cardinality denoted by \(\aleph_0\) (aleph-null), representing the smallest infinity. Comparatively, the set of real numbers \(\mathbf{R}\) has a larger cardinality, denoted by \(\mathfrak{c}\), which is also known as the cardinality of the continuum. In our exercise, we show that the set of all branches in a binary tree, \(\Gamma\), has the same cardinality as the real numbers, \(\mathfrak{c}\). This fascinating result illustrates how certain infinite sets can be surprisingly vast, connecting to the broader topic of different sizes of infinity. Continuum Hypothesis The Continuum Hypothesis (CH) is a fascinating and widely debated topic in set theory and mathematical logic. It concerns the possible sizes of infinite sets. Specifically, it posits that there is no set whose cardinality is strictly between that of the integers (\(\aleph_0\)) and the real numbers (\(\mathfrak{c}\)). In simpler terms, CH suggests that there's no 'intermediate' size of infinity between the countable infinity of the natural numbers and the uncountable infinity of the real numbers. Despite its profound implications, the hypothesis remains unproven within the standard Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC). It has been shown to be independent of these axioms, meaning it can neither be proven nor disproven from them. The Continuum Hypothesis plays a critical role in understanding the structure and hierarchy of infinite sets, providing a deeper insight into the nature of mathematical infinity. Cantor's Diagonal Argument Cantor's Diagonal Argument is a groundbreaking proof by mathematician Georg Cantor that demonstrates the uncountability of the set of real numbers. The argument shows that the real numbers between 0 and 1 (or any interval) cannot be listed in a complete sequence, meaning they have a greater cardinality than the set of natural numbers. Here's a simplified outline of the argument: • Assume that we can list all real numbers between 0 and 1 in a sequence. • Each number in the list is represented by its decimal (or binary) expansion. • Construct a new number by changing the nth digit of the nth number in our list, ensuring this new number differs from every number in the list at least in one digit. This newly constructed number cannot be part of the original list, contradicting our assumption. Hence, the set of real numbers is uncountably infinite, illustrating there are 'more' real numbers than natural numbers. This diagonal argument is a fundamental proof in understanding different sizes of infinity and the concept of uncountability, forming a cornerstone of real analysis and set theory.
{"url":"https://www.vaia.com/en-us/textbooks/math/functional-analysis-and-infinite-dimensional-geometry-0-edition/chapter-6/problem-49-let-tleftn-i-0-leq-i-leq-2n-1-n-in-mathbfn0rightb/","timestamp":"2024-11-03T15:31:02Z","content_type":"text/html","content_length":"253854","record_id":"<urn:uuid:357b7129-07d4-4ee6-8332-e360ed1282ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00270.warc.gz"}
Thyra::ModelEvaluator< Scalar >
Pure abstract base interface for evaluating a stateless "model" that can be mapped into a number of different types of problems.
template<class Scalar> class Thyra::ModelEvaluator< Scalar >
Pure abstract base interface for evaluating a stateless "model" that can be mapped into a number of different types of problems.
This interface defines a very loosely mathematically typed interface to a very wide variety of simulation-based models that can support a very wide range of simulation-based numerical algorithms. For the most part, a model represented by this interface is composed of:
• State vector function: (x_dot_dot,x_dot,x,{p(l)},t,...) -> f <: f_space
• Auxiliary response vector functions: (x_dot_dot,x_dot,x,{p(l)},t,...) -> g(j) <: g_space(j), for j=0...Ng-1
• Other outputs: A model can compute other objects as well (see derivatives below).
given the general input variables/parameters:
• State variables vector: x <: x_space
• State variables derivative w.r.t. t vector: x_dot <: x_space
• State variables second derivative w.r.t. t vector: x_dot_dot <: x_space
• Auxiliary parameter vectors: p(l) <: p_space(l), for l=0...Np-1
• Time point (or some other independent variable): t <: Scalar
• Other inputs: A model can accept additional input objects as well (see below).
where x_space <: RE^n_x, f_space <: RE^n_x, p_space(l) <: RE^n_p_l (for l=0...Np-1), and g_space(j) <: RE^n_g_j (for j=0...Ng-1) are Thyra vector spaces of the given dimensions. Above, the notation {p(l)} is shorthand for the set of parameter vectors { p(0), p(1), ..., p(Np-1) }. All of the above variables/parameters and functions are represented as abstract Thyra::VectorBase objects. The vector spaces associated with these vector quantities are returned by get_x_space(), get_p_space(), get_f_space(), and get_g_space(). All of the input variables/parameters are specified as a ModelEvaluatorBase::InArgs object, all functions to be computed are specified as a ModelEvaluatorBase::OutArgs object, and evaluations of all functions at a single set of variable values are performed in a single call to evalModel(). A particular ModelEvaluator subclass object can support any subset of these inputs and outputs and it is up to the client to map these variables/parameters and functions into abstract mathematical problems. Some of the different types of abstract mathematical problems that can be represented through this interface are given in the next section. This interface can also support the computation of various derivatives of these functions w.r.t. the input arguments (see the section Function derivatives and sensitivities below).
Examples of Abstract Problem Types
There are a number of different types of mathematical problems that can be formulated using this interface. In the following subsections, a few different examples of specific abstract problem types are given.
Nonlinear Equations
f(x) = 0
Here it is assumed that D(f)/D(x) is nonsingular in general but this is not strictly required. If W=D(f)/D(x) is supported, the nature of D(f)/D(x) may be given by this->createOutArgs().get_W_properties().
Explicit ODEs
x_dot = f(x,t)
Here it is assumed that D(f)/D(x) is nonsingular in general but this is not strictly required. If W=D(f)/D(x) is supported, the nature of D(f)/D(x) may be given by this->createOutArgs().get_W_properties().
Above, the argument t may or may not be accepted by the model (i.e. createInArgs().supports(IN_ARG_t) may return false).
Implicit ODEs or DAEs
f(x_dot,x,t) = 0
Here it is assumed that D(f)/D(x) is nonsingular in general but this is not strictly required. The problem is either an implicit ODE or DAE depending on the nature of the derivative matrix D(f)/D(x_dot):
• ODE: D(f)/D(x_dot) is full rank
• DAE: D(f)/D(x_dot) is not full rank
If supported, the nature of W=W_x_dot_dot_coeff*D(f)/D(x_dot_dot)+alpha*D(f)/D(x_dot)+beta*D(f)/D(x) may be given by this->createOutArgs().get_W_properties(). Here the argument t may or may not be accepted by *this.
Unconstrained optimization
min g(x,{p(l)})
where the objective function g(x,{p(l)}) is some aggregated function built from some subset of the auxiliary response functions g(j)(x,{p(l)}), for j=0...Ng-1. In general, it would be assumed that the Hessian D^2(g)/D(x^2) is symmetric semidefinite but this is not strictly required.
Equality constrained optimization
min g(x,{p(l)}) s.t. f(x,{p(l)}) = 0
where the objective function g(x,{p(l)}) is some aggregated function built from some subset of the auxiliary response functions g(j)(x,{p(l)}), for j=0...Ng-1. Here it is assumed that D(f)/D(x) is nonsingular in general but this is not strictly required. If W=D(f)/D(x) is supported, the nature of D(f)/D(x) may be given by this->createOutArgs().get_W_properties().
General constrained optimization
min g(x,{p(l)})
s.t. f(x,{p(l)}) = 0
r(x,{p(l)}) = 0
hL <= h(x,{p(l)}) <= hU
xL <= x <= xU
pL(l) <= p(l) <= pU(l)
where the objective function g(x,{p(l)}) and the auxiliary equality r(x,{p(l)}) and inequality h(x,{p(l)}) constraint functions are aggregated functions built from some subset of the auxiliary response functions g(j)(x,{p(l)}), for j=0...Ng-1. The auxiliary response functions for a particular model can be interpreted in a wide variety of ways and can be mapped into a number of different optimization problems. Here it is assumed that D(f)/D(x) is nonsingular in general but this is not strictly required. If W=D(f)/D(x) is supported, the nature of D(f)/D(x) may be given by this->createOutArgs().get_W_properties().
Function derivatives and sensitivities
A model can also optionally support the computation of various derivatives of the underlying model functions. The primary use for these derivatives is in the computation of various types of sensitivities. Specifically, direct and adjoint sensitivities will be considered. To illustrate the issues involved, consider a single auxiliary parameter p and a single auxiliary response function g of the form (x,p) => g. Assuming that (x,p) ==> f defines the state equation f(x,p)=0 and that D(f)/D(x) is full rank, then f(x,p)=0 defines the implicit function p ==> x(p). Given this implicit function, the reduced auxiliary function is
g_hat(p) = g(x(p),p)
The reduced derivative D(g_hat)/D(p) is given as:
D(g_hat)/D(p) = D(g)/D(x) * D(x)/D(p) + D(g)/D(p)
D(x)/D(p) = - [D(f)/D(x)]^{-1} * [D(f)/D(p)]
Restated, the reduced derivative D(g_hat)/D(p) is given as:
D(g_hat)/D(p) = - [D(g)/D(x)] * [D(f)/D(x)]^{-1} * [D(f)/D(p)] + D(g)/D(p)
The reduced derivative D(g_hat)/D(p) can be computed using the direct or the adjoint approaches. The direct sensitivity approach first solves for D(x)/D(p) = - [D(f)/D(x)]^{-1} * [D(f)/D(p)] explicitly, then computes D(g_hat)/D(p) = D(g)/D(x) * D(x)/D(p) + D(g)/D(p). In this case, D(f)/D(p) is needed as a multivector since it forms the RHS for a set of linear equations. However, only the action of D(g)/D(x) on the multivector D(x)/D(p) is needed and therefore D(g)/D(x) can be returned as only a linear operator (i.e. a LinearOpBase object).
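As a concrete illustration of the algebra just described, the direct and adjoint evaluations of D(g_hat)/D(p) can be compared with a small, self-contained NumPy sketch (dense arrays stand in for the abstract operators; the sizes and matrices are invented, this is not the Thyra API, and the adjoint form used here is derived in the next paragraph):

import numpy as np

rng = np.random.default_rng(0)
n_x, n_p = 5, 3                                           # assumed problem sizes
dfdx = rng.normal(size=(n_x, n_x)) + 5.0 * np.eye(n_x)    # D(f)/D(x), assumed nonsingular
dfdp = rng.normal(size=(n_x, n_p))                        # D(f)/D(p) as a multivector
dgdx = rng.normal(size=(1, n_x))                          # D(g)/D(x)
dgdp = rng.normal(size=(1, n_p))                          # D(g)/D(p)

# Direct approach: solve for D(x)/D(p) (n_p right-hand sides), then apply the chain rule.
dxdp = -np.linalg.solve(dfdx, dfdp)                       # D(x)/D(p) = -[D(f)/D(x)]^{-1} D(f)/D(p)
dghat_dp_direct = dgdx @ dxdp + dgdp

# Adjoint approach: one transposed solve per response, then apply [D(f)/D(p)]^T.
lam = -np.linalg.solve(dfdx.T, dgdx.T)                    # Lambda = -[D(f)/D(x)]^{-T} [D(g)/D(x)]^T
dghat_dp_adjoint = (dfdp.T @ lam).T + dgdp

assert np.allclose(dghat_dp_direct, dghat_dp_adjoint)

With one response and many parameters the adjoint approach needs only one linear solve, while the direct approach needs one solve per parameter; the situation reverses when there are many responses and few parameters.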
Note that in Thyra a multivector is a linear operator and therefore every derivative object returned as a multivector automatically implements the forward and adjoint linear operators for the derivative operator. The final derivative D(g)/D(p) should be returned as a multivector that can be added to the multivector D(g)/D(x)*D(x)/D(p).
The adjoint sensitivity approach computes
D(g_hat)/D(p)^T = [D(f)/D(p)]^T * ( - [D(f)/D(x)]^{-T} * [D(g)/D(x)]^T ) + [D(g)/D(p)]^T
by first solving the adjoint system
Lambda = - [D(f)/D(x)]^{-T} * [D(g)/D(x)]^T
for the multivector Lambda and then computing D(g_hat)/D(p)^T = [D(f)/D(p)]^T * Lambda + [D(g)/D(p)]^T. In this case, [D(g)/D(x)]^T is needed as an explicit multivector since it forms the RHS for the linear adjoint equations. Also, only the adjoint operator application of [D(f)/D(p)] is needed. And in this case, the multivector form of the adjoint [D(g)/D(p)]^T is required.
As demonstrated above, general derivative objects (e.g. D(f)/D(p), D(g)/D(x), and D(g)/D(p)) may be needed as either only a linear operator (where its forward or adjoint application is required) or as a multivector for its forward or adjoint forms. A derivative D(h)/D(z) for some function h(z) can be supported in any of the following forms:
• D(h)/D(z) as a LinearOpBase object where the forward and/or adjoint operator applications are supported
• D(h)/D(z) as a MultiVectorBase object where each column i in the multivector represents D(h)/D(z(i)), the derivatives of all of the functions h for the single variable z(i)
• [D(h)/D(z)]^T as a MultiVectorBase object where each column k in the multivector represents [D(h(k))/D(z)]^T, the derivatives of the function h(k) for all of the variables z
A model can sign up to compute any, or all, or none of these forms of a derivative and this information is returned from this->createOutArgs().supports(OUT_ARG_blah,...) as a ModelEvaluatorBase::DerivativeSupport object, where blah is either DfDp, DgDx_dot, DgDx, or DgDp. The LinearOpBase form of a derivative is supported if this->createOutArgs().supports(OUT_ARG_blah,...).supports(DERIV_LINEAR_OP)==true. The forward MultiVectorBase form of the derivative is supported if this->createOutArgs().supports(OUT_ARG_blah,...).supports(DERIV_MV_BY_COL)==true while the adjoint form of the derivative is supported if this->createOutArgs().supports(OUT_ARG_blah,...).supports(DERIV_TRANS_MV_BY_ROW)==true.
In order to accommodate these different forms of a derivative, the simple class ModelEvaluatorBase::Derivative is defined that can store either a LinearOpBase or one of two MultiVectorBase forms (i.e. forward or adjoint) of the derivative. A ModelEvaluatorBase::Derivative object can only store one (or zero) forms of a derivative object at one time.
We now describe each of these derivative objects in more detail:
• State function f(x_dot_dot,x_dot,x,{p(l)},t,...) derivatives
□ State variable derivatives
W = alpha*D(f)/D(x_dot) + beta*D(f)/D(x)
This derivative operator is a special object that is derived from the LinearOpWithSolveBase interface and therefore supports linear solves. Objects of this type are created with the function create_W() and set on the OutArgs object before they are computed in evalModel(). Note that if the model does not define or support an x_dot vector then the scalars alpha and beta need not be supported.
The LinearOpWithSolveBase form of this derivative is supported if this->createOutArgs().supports(OUT_ARG_W)==true. The LinearOpBase-only form (i.e.
no solve operation is given) of this derivative is supported if this->createOutArgs().supports(OUT_ARG_W_op)==true. The W_op form of W is to be preferred when no linear solve with W will ever be needed. A valid implementation may support none, either, or both of these forms (LOWSB and/or LOB) of W.
Also note that an underlying model may only support a single copy of W or W_op at one time. This is required to accommodate some types of underlying applications that are less flexible in how they maintain their memory and how they deal with their objects. Therefore, to accommodate these types of less ideal application implementations, if a client tries to create and maintain more than one W and/or W_op object at one time, then create_W() and create_W_op() may return null (or may throw an undetermined exception). However, every "good" implementation of this interface should support the creation and maintenance of as many W and/or W_op objects at one time as will fit into memory.
□ State variable Taylor coefficients
If x_poly is a given polynomial of given degree with vector coefficients, x_dot_poly is its derivative with respect to t, and f_poly = f(x_poly, x_dot_poly, t) is the corresponding expansion of f. x_poly, x_dot_poly, and f_poly are represented by Teuchos::Polynomial objects where each coefficient is a Thyra::VectorBase object. The Taylor series coefficients of f are then the coefficients of f_poly.
□ Auxiliary parameter derivatives
DfDp(l) = D(f)/D(p(l)) for l=0...Np-1.
These are derivative objects that represent the derivative of the state function f with respect to the auxiliary parameters p(l). This derivative is manipulated as a ModelEvaluatorBase::Derivative object.
• Auxiliary response function g(j)(x,{p(l)},t,...) derivatives
□ State variable derivatives
DgDx_dot(j) = D(g(j))/D(x_dot) for j=0...Ng-1.
These are derivative objects that represent the derivative of the auxiliary function g(j) with respect to the state variables derivative x_dot. This derivative is manipulated as a ModelEvaluatorBase::Derivative object.
DgDx(j) = D(g(j))/D(x) for j=0...Ng-1.
These are derivative objects that represent the derivative of the auxiliary function g(j) with respect to the state variables x. This derivative is manipulated as a ModelEvaluatorBase::Derivative object.
□ Auxiliary parameter derivatives
DgDp(j,l) = D(g(j))/D(p(l)) for j=0...Ng-1, l=0...Np-1.
These are derivative objects that represent the derivative of the auxiliary function g(j) with respect to the auxiliary parameters p(l). This derivative is manipulated as a ModelEvaluatorBase::Derivative object.
• Second-order derivatives
□ Second-order derivatives of the state function f(x_dot,x,{p(l)},t,...)
hess_f_xx = sum(f_multiplier * D^2(f)/D(x)^2). This is a derivative object that represents the second-order derivative of the state function f with respect to the state variables x. This derivative is manipulated as a LinearOpBase object. Objects of this type are created with the function create_hess_f_xx().
hess_f_xp(l) = sum(f_multiplier * D^2(f)/(D(x)D(p(l)))) for l=0...Np-1. These are derivative objects that represent the second-order mixed partial derivative of the state function f with respect to both the state variables x and the auxiliary parameters p(l). This derivative is manipulated as a LinearOpBase object. Objects of this type are created with the function create_hess_f_xp(l).
hess_f_pp(l1,l2) = sum(f_multiplier * D^2(f)/(D(p(l1))D(p(l2)))) for l1=0...Np-1, l2=0...Np-1.
These are derivative objects that represent the second-order mixed partial derivative of the state function f with respect to both the auxiliary parameters p(l1) and the auxiliary parameters p(l2). This derivative is manipulated as a LinearOpBase object. Objects of this type are created with the function create_hess_f_pp(l1,l2).
□ Second-order derivatives of the auxiliary response function g(j)(x,{p(l)},t,...)
hess_g_xx(j) = sum(g_multiplier(j) * D^2(g(j))/D(x)^2) for j=0...Ng-1. These are derivative objects that represent the second-order derivative of the auxiliary function g(j) with respect to the state variables x. This derivative is manipulated as a LinearOpBase object. Objects of this type are created with the function create_hess_g_xx(j).
hess_g_xp(j,l) = sum(g_multiplier(j) * D^2(g(j))/(D(x)D(p(l)))) for j=0...Ng-1, l=0...Np-1. These are derivative objects that represent the second-order mixed partial derivative of the auxiliary function g(j) with respect to both the state variables x and the auxiliary parameters p(l). This derivative is manipulated as a LinearOpBase object. Objects of this type are created with the function create_hess_g_xp(j,l).
hess_g_pp(j,l1,l2) = sum(g_multiplier(j) * D^2(g(j))/(D(p(l1))D(p(l2)))) for j=0...Ng-1, l1=0...Np-1, l2=0...Np-1. These are derivative objects that represent the second-order mixed partial derivative of the auxiliary function g(j) with respect to both the auxiliary parameters p(l1) and the auxiliary parameters p(l2). This derivative is manipulated as a LinearOpBase object. Objects of this type are created with the function create_hess_g_pp(j,l1,l2).
Nominal values
A model can optionally define a nominal value for any of the input arguments and these are returned from the getNominalValues() function as a const ModelEvaluatorBase::InArgs object. These nominal values can be used as initial guesses, as typical values (e.g. for scaling), or for other purposes. See evalModel() for a discussion of how nominal values are treated in an evaluation where the client does not pass in the values explicitly.
Variable and Function Bounds
A model can optionally define a set of upper and/or lower bounds for each of the input variables/parameters. These bounds are returned as const ModelEvaluatorBase::InArgs objects from the functions getLowerBounds() and getUpperBounds(). These bounds are typically used to define regions in space where the model functions are well defined. A client algorithm is free to ignore these bounds if it can not handle these types of constraints. The fact that a model can define reasonable bounds but most numerical algorithms can not handle bounds is no reason to leave bounds out of this interface. Again, if the client algorithm can not handle bounds then they can simply be ignored and there is no harm done (except that the client algorithm might run into lots of trouble computing functions with undefined values).
Significance of Parameter Subvectors
The parameters for any particular model are partitioned into different subvectors p(l) for several different reasons:
• Parameters are grouped together into subvectors p(l) to allow an ANA to manipulate an entire set at a time for different purposes. It is up to someone to select which parameters from a model will be exposed in a parameter subvector and how they are partitioned into subvectors. For example, one parameter subvector may be used as design parameters, while another subvector may be used for uncertain parameters, while still another may be used as continuation parameters.
• Grouping parameters together into subvectors p(l) implies that certain derivatives will be supplied for all of the parameters in the subvector or for none. If an ANA wants the flexibility to get the derivative for any individual scalar parameter by itself, then the parameters must be segregated into different parameter subvectors p(l) with one component each (e.g. get_p_space(l)->dim()==1).
Failed evaluations
The way for a ModelEvaluator object to return a failed evaluation is to set NaN in one or more of the output objects. If an algebraic model executes and happens to pass a negative number to a square root or something, then a NaN is what will get created anyway (even if the ModelEvaluator object does not detect this). Therefore, clients of the ME interface really need to be searching for a NaN to see if an evaluation has failed. Also, an ME object can set the isFailed flag on the outArgs object on the return from the evalModel() function if it knows that the evaluation has failed for some reason.
Inexact function evaluations
The ModelEvaluator interface supports inexact function evaluations for f and g(j). This is supported through the use of the type ModelEvaluatorBase::Evaluation which associates an enum ModelEvaluatorBase::EEvalType with each VectorBase object. By default, the evaluation type is ModelEvaluatorBase::EVAL_TYPE_EXACT. However, a client ANA can request or allow more inexact (and faster) evaluations for different purposes. For example, ModelEvaluatorBase::EVAL_TYPE_APPROX_DERIV would be used for finite difference approximations to the Jacobian-vector products and ModelEvaluatorBase::EVAL_TYPE_VERY_APPROX_DERIV would be used for finite difference Jacobian preconditioner approximations.
The type ModelEvaluatorBase::Evaluation is designed so that it can be seamlessly copied to and from a Teuchos::RCP object storing the VectorBase object. That means that if a client ANA requests an evaluation and just assumes an exact evaluation, it can just set an RCP<VectorBase> object and ignore the presence of the evaluation type. Likewise, if a ModelEvaluator subclass does not support inexact evaluations, it can simply grab the vector object out of the OutArgs object as an RCP<VectorBase> object and ignore the evaluation type. However, any generic software (e.g. DECORATORS and COMPOSITES) must handle these objects as Evaluation objects and not RCP objects or the eval type info will be lost (see below). If some bit of software converts from an Evaluation object to an RCP and then converts back to an Evaluation object (perhaps by accident), this will slice off the evaluation type enum value and the resulting eval type will be the default EVAL_TYPE_EXACT. The worst thing that will likely happen in this case is that a more expensive evaluation will occur. Again, general software should avoid doing this.
Design Consideration: By putting the evaluation type into the OutArgs output object itself instead of using a different data object, we avoid the risk of having some bit of software setting the evaluation type enum as inexact for one part of the computation (e.g. a finite difference Jacobian mat-vec) and then another bit of software reusing the OutArgs object and computing an inexact evaluation when it really wanted an exact evaluation (e.g. a line search algorithm). That could be a very difficult defect to track down in the ANA. For example, this could cause a line search method to fail due to an inexact evaluation.
Therefore, the current design has the advantage of being biased toward exact evaluations and allowing clients and subclasses to ignore inexactness, but has the disadvantage of allowing general code to slice off the evaluation type enum without so much as a compile-time or any other type of warning.
Compile-Time and Run-Time Safety and Checking
The ModelEvaluator interface is designed to allow for great flexibility in how models are defined. The idea is that, at runtime, a model can decide what input and output arguments it will support and client algorithms must decide how to interpret what the model provides in order to form an abstract problem to solve. As a result, the ModelEvaluator interface is weakly typed mathematically. However, the interface is strongly typed in terms of the types of objects involved. For example, while a single ModelEvaluator object can represent anything and everything from a set of nonlinear equations to a full-blown constrained transient optimization problem, if the state vector x is supported, it must be manipulated as a VectorBase object, and this is checked at compile time. In order for highly dynamically configurable software to be safe and usable, a great deal of runtime checking and good error reporting is required. Practically all of the runtime checking and error reporting work associated with a ModelEvaluator object is handled by the concrete ModelEvaluatorBase::InArgs and ModelEvaluatorBase::OutArgs classes. Once a concrete ModelEvaluator object has set up what input and output arguments the model supports, as returned by the createInArgs() and createOutArgs() functions, all runtime checking for the proper setting of input and output arguments and error reporting is automatically handled.
Notes to subclass developers
Nearly every subclass should directly or indirectly derive from the node subclass ModelEvaluatorDefaultBase since it provides checking for correct specification of a model evaluator subclass and provides default implementations for various features. Subclass developers should consider deriving from one (but not more than one) of the following subclasses rather than directly deriving from ModelEvaluatorDefaultBase:
• StateFuncModelEvaluatorBase makes it easy to create models that start with the state function evaluation x -> f(x).
• ResponseOnlyModelEvaluatorBase makes it easy to create models that start with the non-state response evaluation p -> g(p).
• ModelEvaluatorDelegatorBase makes it easy to develop and maintain different types of decorator subclasses.
When deriving from any of the above intermediate base classes, the subclass can override any of the virtual functions in any way that it would like. Therefore these subclasses just make the job of creating concrete subclasses easier without removing any of the flexibility of creating a subclass.
ToDo: Finish Documentation!
Definition at line 741 of file Thyra_ModelEvaluator.hpp.
{"url":"https://docs.trilinos.org/dev/packages/thyra/doc/html/classThyra_1_1ModelEvaluator.html","timestamp":"2024-11-03T09:02:49Z","content_type":"application/xhtml+xml","content_length":"178733","record_id":"<urn:uuid:82b173c4-ac5c-42f4-afee-7349a66611f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00769.warc.gz"}
Slack variables - (Nonlinear Optimization) - Vocab, Definition, Explanations | Fiveable Slack variables from class: Nonlinear Optimization Slack variables are additional variables introduced into a mathematical optimization model to transform inequality constraints into equality constraints. They represent the difference between the left-hand side and right-hand side of an inequality, allowing for a more flexible approach in finding optimal solutions. By incorporating slack variables, optimization techniques can effectively navigate the feasible region defined by these constraints, which is particularly useful in various algorithms designed for solving nonlinear problems. congrats on reading the definition of slack variables. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. In the context of optimization, slack variables are added to convert 'less than or equal to' constraints into equations, making it easier to apply various solution techniques. 2. Slack variables can take on non-negative values, indicating how much 'slack' or unused capacity exists in the constraint. 3. In path-following algorithms, slack variables help maintain feasibility as the algorithm navigates toward the optimal solution, allowing it to stay within the defined constraints. 4. Interior penalty methods utilize slack variables to enforce constraint satisfaction by penalizing violations and guiding the solution toward feasible regions. 5. The use of slack variables can simplify the computational process and improve the convergence properties of iterative methods in nonlinear optimization. Review Questions • How do slack variables facilitate the transformation of inequality constraints into equality constraints, and why is this important for solving optimization problems? □ Slack variables help in transforming inequality constraints into equality constraints by allowing us to express any excess or unused capacity explicitly. This transformation is crucial for solving optimization problems as it enables algorithms to work with a consistent set of equations rather than inequalities. By converting these constraints, we can apply various mathematical techniques more effectively and ensure that we are exploring the entire feasible region defined by our problem. • Discuss the role of slack variables in path-following algorithms and how they contribute to maintaining feasibility during the optimization process. □ In path-following algorithms, slack variables play a key role in ensuring that the solution remains feasible as it navigates through the solution space. As the algorithm progresses toward an optimal solution, these variables allow for adjustments that help maintain compliance with inequality constraints. By incorporating slack variables, path-following algorithms can efficiently traverse boundaries between feasible regions while ensuring that no constraint violations occur throughout the iterative process. • Evaluate the impact of incorporating slack variables within interior penalty methods and how they influence convergence behavior in nonlinear optimization problems. □ Incorporating slack variables within interior penalty methods significantly impacts the convergence behavior of nonlinear optimization problems by providing a structured way to handle constraint violations. By introducing penalties for slack variable deviations, these methods guide solutions toward feasible regions while optimizing the objective function. 
This approach not only enhances stability during iterations but also accelerates convergence by ensuring that solutions adhere closely to constraints, ultimately leading to more efficient problem-solving.
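As a small worked example (added for illustration, not part of the original definition): suppose a constraint reads x1 + 2*x2 <= 10. Introducing a slack variable s >= 0 turns it into the equality x1 + 2*x2 + s = 10, where s measures the unused capacity. At the point (x1, x2) = (2, 3) the slack is s = 10 - 2 - 6 = 2, so the constraint is satisfied with room to spare; s = 0 means the constraint is active (binding), and a negative s would signal infeasibility. A 'greater than or equal to' constraint is handled the same way with a subtracted surplus variable: x1 + 2*x2 >= 4 becomes x1 + 2*x2 - s = 4 with s >= 0.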
{"url":"https://library.fiveable.me/key-terms/nonlinear-optimization/slack-variables","timestamp":"2024-11-05T00:50:35Z","content_type":"text/html","content_length":"150549","record_id":"<urn:uuid:e1a4cea6-0110-4b84-9722-f6bd33f8a569>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00389.warc.gz"}
Algebra 1: Common Core (15th Edition) Chapter 9 - Quadratic Functions and Equations - Chapter Review - Page 604 8 Work Step by Step We are given: $y=5x^2+8$ Let's list some values: $x=-2 \rightarrow y=28$ $x=-1 \rightarrow y=13$ $x=0 \rightarrow y=8$ $x=1 \rightarrow y=13$ $x=2 \rightarrow y=28$ The x-coordinate of the vertex is given by $x=\frac{-b}{2a}=0$ Find the y-coordinate of the vertex $y=5(0)^2+8=8$ Hence, the vertex is $(0,8)$.
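A few lines of code can double-check the table of values and the vertex (this is an added verification, not part of the textbook answer; the variable names are ours):

a, b, c = 5, 0, 8                          # y = 5x^2 + 0x + 8

def y(x):
    return a * x**2 + b * x + c

print([(x, y(x)) for x in range(-2, 3)])   # [(-2, 28), (-1, 13), (0, 8), (1, 13), (2, 28)]

x_vertex = -b / (2 * a)                    # x-coordinate of the vertex: 0.0
print((x_vertex, y(x_vertex)))             # (0.0, 8.0), so the vertex is (0, 8)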
{"url":"https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-9-quadratic-functions-and-equations-chapter-review-page-604/8","timestamp":"2024-11-07T22:49:17Z","content_type":"text/html","content_length":"98975","record_id":"<urn:uuid:aa32f965-1135-44a8-bb1c-5d860397232f>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00639.warc.gz"}
Least square mean calculation for the fully replicate design
Least square mean calculation for the fully replicate design [General Statistics]
Hi Jaimik, Hi all!
❝ One question for the least square mean calculation for the fully replicate design as per USFDA in SAS.
❝ ....
❝ Please share your thoughts …
I think that when one value is changed, the change is not only in the formulation estimate; the sequence and period estimates change as well. And if you look at the model coefficients you will probably find changes in the sequence coefficient.
The estimate is calculated as L*β, where L is a vector of known constants. For example, if we have 2 sequences, 2 periods, and 2 formulations, the length of the β vector is 4; for one formulation L = [1; 1/2; 1/2; 0] and for the other L = [1; 1/2; 1/2; 1].
When a value is changed, this leads to changes in the sequence part of β, and then to the marginal value of each formulation. I imagine it like this: one part of the change goes to the mean value of the current formulation, and some part goes to the sequence and period effects (because it is one model); and because sequence is crossed with the other formulation, it affects the other formulation's level as well.
Edit: Unnecessary quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5
Complete thread:
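To make the L*β arithmetic concrete, here is a minimal fixed-effects sketch in NumPy (ordinary least squares on invented data with 0/1 effect coding; this only illustrates the marginal-mean calculation, not the mixed model recommended for a replicate design):

import numpy as np

# Columns of X: [intercept, sequence, period, formulation (T = 1, R = 0)]
X = np.array([
    [1, 0, 0, 0], [1, 0, 1, 1],   # two subjects in sequence RT: period 1 = R, period 2 = T
    [1, 0, 0, 0], [1, 0, 1, 1],
    [1, 1, 0, 1], [1, 1, 1, 0],   # one subject in sequence TR: period 1 = T, period 2 = R
], dtype=float)
y = np.array([1.00, 1.10, 0.98, 1.12, 1.05, 0.95])     # made-up responses

beta, *_ = np.linalg.lstsq(X, y, rcond=None)

L_R = np.array([1.0, 0.5, 0.5, 0.0])                   # average over sequence and period levels
L_T = np.array([1.0, 0.5, 0.5, 1.0])
print("LSM(R) =", L_R @ beta, " LSM(T) =", L_T @ beta)

Because the sequence and period coefficients enter both L vectors, the marginal means of both formulations depend on the same shared effects, which is the point made above.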
{"url":"https://forum.bebac.at/forum_entry.php?id=20747&order=time","timestamp":"2024-11-02T07:47:36Z","content_type":"text/html","content_length":"14857","record_id":"<urn:uuid:736584d2-879f-4c5f-b959-fd7a37b109d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00849.warc.gz"}
Sum of Weibull variates and performance of diversity systems
The sum of Weibull random variables (RVs) is naturally of prime importance in wireless communications and related areas. Through the selection of poles as orthogonal Laguerre polynomials in the Cauchy residue theorem, the moment-generating function (MGF), the probability density function (PDF) and the cumulative distribution function (CDF) of the sum of L ≥ 2 mutually independent random variables (RVs) of any kind are represented in terms of fast convergent series, and the obtained results are applied to the sum of Weibull RVs in order to find the symbol error rate (SER) and outage probability (OP).
Publication series
Name Proceedings of the 2009 ACM International Wireless Communications and Mobile Computing, Connecting the World Wirelessly, IWCMC 2009
Conference 2009 ACM International Wireless Communications and Mobile Computing Conference, IWCMC 2009
Country/Territory Germany City Leipzig Period 21/06/09 → 24/06/09
• Cumulative distribution function
• Outage probability
• Probability density function
• Sum of random variables
• Symbol error rate
• Weibull distribution
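Since the abstract only states the quantities of interest, a quick Monte Carlo sketch may help fix ideas (this is our own illustration with invented parameters, not the fast-convergent series method of the paper): it estimates the distribution of the sum of L independent Weibull variates and the corresponding outage probability.

import numpy as np

rng = np.random.default_rng(1)
L, shape, scale, n = 3, 2.0, 1.0, 200_000              # assumed diversity order and Weibull parameters
samples = scale * rng.weibull(shape, size=(n, L))       # i.i.d. Weibull(shape), scaled
s = samples.sum(axis=1)                                 # sum of the L variates (e.g. combiner output)

threshold = 1.5                                         # assumed outage threshold
print("outage probability P[sum < threshold] =", np.mean(s < threshold))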
{"url":"https://research.itu.edu.tr/en/publications/sum-of-weibull-variates-and-performance-of-diversity-systems","timestamp":"2024-11-07T16:15:02Z","content_type":"text/html","content_length":"57677","record_id":"<urn:uuid:02bd1e1e-a0e5-4351-a7f8-68a901aff93f>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00124.warc.gz"}
Vane Shear Test
Avg. Shear Strength
Avg. Cohesion (Cu)
Result of Test-I: Shear Strength
Undrained Shear Strength of the Soil
$Torque\left(T\right)=K\times\frac{\mathrm{Difference\ in\ Degrees}}{180}$
$Cohesion\left(Cu\right)=\frac{T}{\pi D^{2}\left(\frac{D}{6}+\frac{H}{2}\right)}$
$Cu=\frac{0.67}{3.14\times 14.06\left(\frac{3.75}{6}+\frac{7.5}{2}\right)}$
Result of Test-II: Shear Strength
Undrained Shear Strength of the Soil
$Torque\left(T\right)=K\times\frac{\mathrm{Difference\ in\ Degrees}}{180}$
$Cohesion\left(Cu\right)=\frac{T}{\pi D^{2}\left(\frac{D}{6}+\frac{H}{2}\right)}$
$Cu=\frac{1.33}{3.14\times 14.06\left(\frac{3.75}{6}+\frac{7.5}{2}\right)}$
Result of Test-III: Shear Strength
Undrained Shear Strength of the Soil
$Torque\left(T\right)=K\times\frac{\mathrm{Difference\ in\ Degrees}}{180}$
$Cohesion\left(Cu\right)=\frac{T}{\pi D^{2}\left(\frac{D}{6}+\frac{H}{2}\right)}$
$Cu=\frac{2.00}{3.14\times 14.06\left(\frac{3.75}{6}+\frac{7.5}{2}\right)}$
The vane shear test for the measurement of shear strength of cohesive soils is useful for soils of low shear strength of less than about 0.5 kgf/cm². This test gives the undrained strength of the soil, and the undisturbed and remoulded strengths obtained are used for evaluating the sensitivity of the soil.
• The apparatus may be either of the hand-operated type or motorized. Provisions should be made in the apparatus for the following:
• a) Fixing of vane and shaft to the apparatus in such a way that the vane can be lowered gradually and vertically into the soil specimen.
• b) Fixing the tube containing the soil specimen to the base of the equipment, for which it should have a suitable hole.
• c) Arrangement for lowering the vane into the soil specimen (contained in the tube fixed to the base) gradually and vertically and for holding the vane properly and securely in the lowered position.
• d) Arrangement for rotating the vane steadily at a rate of approximately 1/60 rev/min (0.1°/s) and for measuring the rotation of the vane.
• e) A torque applicator to rotate the vane in the soil and a device for measuring the torque applied to an accuracy of 0.05 cm.kgf.
• f) A set of springs capable of measuring shear strength of 0.5 kgf/cm².
• Vane - The vane shall consist of four blades each fixed at 90°. The vane should not deform under the maximum torque for which it is designed. The penetrating edge of the vane blades shall be sharpened having an included angle of 90°. The vane blades shall be welded together suitably to a central rod, the maximum diameter of which should preferably not exceed 25 mm in the portion of the rod which goes into the specimen during the test. The vane should be properly treated to prevent rusting and corrosion.
Principle of Vane Shear Test
• The specimen in the tube should be at least 37.5 mm in diameter and 75 mm long. Mount the specimen container with the specimen on the base of the vane shear apparatus and fix it securely to the base. If the specimen container is closed at one end it should be provided at the bottom with a hole of about 1 mm diameter. Lower the shear vanes into the specimen to their full length gradually with minimum disturbance of the soil specimen so that the top of the vane is at least 10 mm below the top of the specimen. Note the readings of the strain and torque indicators.
Rotate the vane at a uniform rate of approximately 0.1°/s by suitably operating the torque applicator handle until the specimen fails. Note the final reading of the torque indicator. Torque readings and the corresponding strain readings may also be noted at desired intervals of time as the test proceeds.
• Just after the determination of the maximum torque, rotate the vane rapidly through a minimum of ten revolutions. The remoulded strength should then be determined within 1 minute after completion of the revolutions.
Laboratory Vane Shear Apparatus
Vane Shear Test Calculations
$Torque\left(T\right)=K\times\frac{\mathrm{Difference\ in\ Reading}}{180}$
$Cohesion\left(Cu\right)=\frac{T}{\pi D^{2}\left(\frac{D}{6}+\frac{H}{2}\right)}$
• Difference in Reading is Final Reading - Initial Reading
• D is the Diameter of specimen in the tube
• H is Height of specimen in the tube
• T = Torque in cm.kgf.
• NOTE 1 - This formula is based on the following assumptions: a. shearing strength in the horizontal and vertical directions is the same; b. at the peak value, shear strength is equally mobilized at the end surfaces as well as at the cylindrical surface; and c. the shear surface is cylindrical and has a diameter equal to the diameter of the vane.
• NOTE 2 - It is important that the dimensions of the vane are checked periodically to ensure that the vane is not distorted or worn.
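The calculation above is easy to script; here is a small Python sketch of Test-I (the spring constant and dial readings are assumed values for illustration, chosen so that the torque matches the 0.67 cm.kgf used above):

import math

K = 6.0                        # assumed spring constant of the torque head, cm.kgf per 180 degrees
reading_difference = 20.0      # assumed final minus initial dial reading, degrees

T = K * reading_difference / 180.0                    # torque in cm.kgf (about 0.67 here)
D, H = 3.75, 7.5                                      # diameter and height used by the calculator, cm
Cu = T / (math.pi * D**2 * (D / 6.0 + H / 2.0))       # undrained cohesion, kgf/cm^2
print(T, Cu)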
{"url":"https://www.civil-engineering-calculators.com/Soil-Test/Vanes-Shear-Test-Calculator","timestamp":"2024-11-09T07:07:34Z","content_type":"application/xhtml+xml","content_length":"105498","record_id":"<urn:uuid:cce7681c-b930-49c6-916d-a7cb783d63d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00032.warc.gz"}
Global Fuel Sources and How To Reach Energy Sustainability
By Sterling Ericsson and Preston Hurst
Energy demand and supply have been a continuous topic in the world of fuel production. As populations expand and advanced technologies become more ubiquitous, the demand for energy in the form of electricity, transportation fuel, and more multiplies. From 1971 to 2014, worldwide total energy supply grew from just under 6,000 Mtoe (millions of tons of oil equivalent) to nearly 14,000 Mtoe. Then, as now, fossil fuels composed most of that production, with renewables and biofuels contributing only 15% to global supply. But the gears have slowly begun to shift. The OECD region, known for its fossil fuel production, furnished over 60% of global needs in the 1970s. Now that has shrunk to less than 40%, with China and the rest of Asia being the new up-and-comers. The core of that change, the story of the transition to finding more sustainable options, is the story of fuels themselves. So, in light of that, we shall investigate each piece of that story in turn.^[1][2]
Fossil Fuels Around The World
Fossil fuel consumption continues to be the primary source of power for much of the world. With average energy consumption increasing by 1-2% per year, the greatest strain falls on the three main fossil fuel sources: oil, natural gas, and coal. Their output therefore needs to increase to cover this deficiency, or other energy sources must take their place.
Of these, oil remains the most consequential in energy supplies, making up 32.9% of global consumption in 2016. In general, oil has seen a steady decline in usage over the past two decades, with 2016 being the first increase (1.9%) since 1999. This can likely be attributed to a rebound of oil prices in early 2015, which accelerated consumption, primarily in Europe, enough to offset the USA's continued decline in oil utilization.^[3]
Natural gas also saw an uptick of 1.7%, with a 5.4% boost in the US alone. But the OECD region remained the primary driver of natural gas expenditures at 53.5% of all gas consumed in the world. This all coincided with a boom in international gas trade, especially in pipeline importation.^[3]
Out of the three available options, coal fared the worst in 2015-16, with global consumption and production falling by several percentage points. The US saw the greatest volumetric decline at 12.7%, possibly due to the replacement of coal sources with natural gas as a viable alternative.^[3]
A variety of measurements have been employed to determine the future of energy usage and what that might entail for the future of fossil fuels. One recent study attempted a model that combined economic impacts with global energy consumption and how climate change effects will play a role. It was estimated that global energy requirements will more than double by 2100 and that fossil fuels will likely play some sort of dominant role in total energy supply until the mid-century mark.^[4] Oil will feature sustained growth until 2060, but thereafter will have its energy input fall precipitously. Coal will see a small amount of increase, but will ultimately fall off the energy map by 2030. Natural gas will see a longer lasting lifespan, due to its involvement in electrical output, but will hit a similar 2060 cap as seen by oil.
The cause of the eventual dropoff will be the increasingly untenable prices of the three fuel sources, which will persist until their swift replacement.^[5] This model more or less corroborates the expected diminishing of fossil fuel reserves, with oil and gas reaching their limits within 35-40 years. The fuels will still exist after this point, but the quantities needed will not be possible to supply and will force other fuels to come to the forefront. Other models suggest that this will happen even sooner, with all three being phased out by 2025 or 2030. Whether such rapidity in fuel substitution will come to fruition is uncertain at this point.^[6]
A final thing to note about fossil fuels in a global setting is the subsidies that many nations pay to keep them competitive and operational. Without these, the decline of such fuels would surely have happened sooner than any modern model predicts. So, these subsidies, sitting at over $5 trillion worldwide (over 6% of total GDP internationally), should be mentioned as an unpredictable element in all the predictive systems we apply to the problem. If multiple nations decided to reduce or entirely rescind their subsidies for fossil fuels, that would cause a commensurate shift in the timeline models just discussed.^[7]
The Productivity of Renewables
In a world dealing with an increasingly volatile energy crisis and a reliance on a fuel source producing harmful byproducts in the atmosphere, renewable sources of energy with minimal impact have been a long sought-after goal. Therefore, expanding production of electrical energy sources such as wind, solar, hydroelectric, and nuclear has become a variable focus depending on the country and region of the world.
As an example, smaller nations with abundant waterways have beelined for hydroelectric power as their renewable of choice. Lesotho, Albania, and Paraguay have managed to achieve nearly 100% renewable energy reliance thanks to the dams and waterways in their areas of influence, with other countries like Iceland having a mixture of hydroelectric power and their unique geothermal energy production due to the volcanic activity in the region.
Renewables as a whole, outside of nuclear and hydroelectric sources, grew to 2.8% of global energy consumption (and 6.7% of global power generation) in 2015, with a power generation increase of 15.2% within the electricity supplied by renewables themselves. This is close to the decade-long average of a 15.9% increase per year. The countries that saw the most improvement in this field were Germany (23.5%) and China (20.9%), reflecting the intense attention both have been paying to funding renewables over the past several years.^[3]
For wind and solar installation, the United States has seen significant improvement, with wind making up 27% of the energy production increase for the country in 2016. Germany, in the previous year and in keeping with its position as the largest investor in renewables, saw roughly double that amount at 53.4%. In solar, the US also came third in the world in 2015, with a 41.8% change, surpassed by China and Japan.^[3]
Nuclear as an energy production option saw more modest alterations, with only a 1.3% increase in global output, centered almost entirely on China with its 28.9% gain in nuclear power. This has resulted in it reaching fourth in the world, past South Korea, as a nuclear electricity provider.^[3]
Hydroelectric did the worst out of the available options, growing by less than a percent globally.
China remains the largest producer of hydroelectric power and even it only saw a 5% difference in power output in 2015. One of the likely primary reasons for this stagnation in many regions is an ongoing worldwide drought caused by higher and fluctuating temperatures. When dealing with the effect and aftermath of such conditions, there is little need or desire to increase hydroelectric productivity.^[3]
A major question facing renewables is how effective they can be at covering the energy demand requirements currently controlled by the fossil fuel market. Based on how many renewables work, whereby power generation is restricted by environmental conditions, there is concern that they will never be able to be used beyond incidental and outlying energy, when what is needed is a replacement base power system that can successfully shoulder the energy demand load and minimize or eliminate the need for fossil fuels.^[8]
Hydroelectric can fill this gap, as several countries have shown, but not every region has enough sources of such power to make up a base power system. Thanks to this, out of all the available choices in renewables, nuclear will likely prove the best option for base power. But improvement in this realm is reliant on the enactment of new nuclear power projects by governments, something that is not in favor in many nations today. Regardless, the energy production of other renewables will continue into the future, picking up a necessary chunk of the annual increase in demand. By 2020, electricity generation from renewables is expected to increase by 50-75% over the 2010 amount, and it is expected to double that amount by 2035. The vast majority of this increase will be in wind power and a lesser amount in solar.^[9]
At minimum, any amount of power generation that is taken from fossil fuels will result in a reduction in greenhouse gas emissions. So, even if renewables won't directly be the source for base power in the future, they will benefit the planet anyway by providing low-cost, environmentally friendly electrical alternatives.
First Generation Biofuels Break Open Energy Options
As technology continues to advance apace, new energy solutions will emerge to fill gaps and openings in global demand. Biofuels, in turn, have come onto the scene as one of the most rapidly ballooning fields in energy production. In many places, their usage is still limited and not totally adopted by national governments, but that has also been changing swiftly over the past two years.
Biofuel production over the past decade alone has seen annual gains of 14.3% on average, with the US and Brazil having a strong focus on expansion. Though first generation biofuels, being so reliant on crop farming, are subject to the whims of the global market as well. In 2015, this resulted in only a 0.9% change, far off from the decade average. This was due to Indonesia and Argentina seeing huge 25 and 45 percent losses in their output.^[3] For the former, this was thanks to a massive influx of palm oil as an energy source, killing the biofuel market for that year, though this was later offset with a government subsidy. Argentina, meanwhile, saw a freeze in its soy exports to China in 2015 that caused a serious hit to biofuels.
Though, in both cases, the issues were resolved by 2016, allowing the biofuel field's growth to continue.^[11]
First generation biofuels, with subfields covering the different uses of plant matter for bioethanol, biodiesel, or solid fuels, as well as the side products of animal agriculture used to mitigate methane release to the atmosphere (as with biogas), are targeted to the different kinds of agriculture and the commodities they produce. It is no surprise, then, that greenhouse gases (GHGs) and influence on climate change are a main topic of discussion for them.^[12]
Since 2000, biofuels have grown to make up, in part, 4% of transportation fuels worldwide. Second generation biofuels have in recent years begun to quickly outstrip their predecessor, but first generation fuels continue to account for a majority of the 25 billion gallons of bioethanol created in 2016, with, as previously noted, the US and Brazil producing 85% of that using corn and sugarcane.^[13][14]
Even so, many scientists and industry professionals currently view first generation biofuels as just an initial step toward better technologies and a way to perfect things like fermentation machines and other devices that can be used with second generation biofuels and beyond. Because of the interactions between food and feed costs, land use, and water requirements (2-3% of global water and arable land use go to biofuels), there are several restrictions on wider scale application of bioethanol, biodiesel, and biogas. Increasing yield is one method to make the costs more efficient and to drive down concerns of GHG emissions, along with utilizing bioenergy crop species directly to avoid entanglement with food supplies.^[15]
For current usage, at least, first generation biofuels will continue to play a considerable role in the biofuels field as a whole. And biogas, as a fuel production option that only deals with already existing animal raising anyway, will likely be expanded on farms using animal agriculture. Italy saw a 10-fold increase in livestock farms using biogas alternatives, from 56 farms to 521 farms, in a period between 2001 and 2011, with more recent years seeing a greater and greater jump. Developed and developing nations that have a reliance on meat production may see a turn to biogas as an energy source to complement this.^[16]
The biofuel field, even though its beginning dates to several decades back, is still seen as an emerging energy option in the world today. It has yet to reach the heights that many recognize it eventually will. But first generation biofuels have certainly left their mark on energy dependency in the world, particularly as a replacement for oil and diesel as a transportation fuel. The future for these fuels is uncertain. They may continue to grow and overtake other industries as they do so, or they may end up being outclassed by the later generations of energy production in their very own field. At this point in time, there are too many possibilities to tell, and disparate regions of the world may make their own choices that change this outcome. Even so, it is clear that first generation biofuels will continue to be used for several years to come.
From a sustainability perspective, this category of biofuel removes the tradeoff between direct food supply and fuel production that is faced by first generation biofuel production. However, other sustainability challenges are involved in their production. We will look at two prominent sources of second generation biofuels, and discuss their application, and the implications on global energy sustainability. Lignocellulosic Ethanol There are four general steps to obtaining ethanol from lignocellulosic feedstocks^[18]: First, is breakdown of the lignin-cellulose matrix; second, the enzymatic breakdown of cellulose to form glucose; next, the glucose is converted to ethanol; and lastly, the mixture is processed/refined into a form of usable fuel^[18]. What makes second generation biofuels a more sustainable option than first generation is the type of feedstock used. Rather than using an edible product, such as maize grain, we may use biomass products that are not a component of the food supply. Feedstock examples include straw and stover left behind after grain harvest, as well as woody by-products of the lumber industry^[19]. This may also include ‘energy crops’; species that are planted for the harvest of their biomass, but which are not sources of food, thus will not increase global food demand. These may be perennial species such as some trees^[20]. The use of energy crops may pose a problem in terms of sustainability. If arable land is planted with these species, then it will have the same effect on the global food economy as traditional bio-ethanol. As previously stated, leftovers and by-products are viable feedstock sources, but specific crops will likely be needed for these second generation biofuels to compete with fossil fuels and be produced at a scale that is economically viable^[20]. The solution is to farm energy crops on land which is not productive for typical food crops. Lignocellulosic crops show promise in increasing our world’s sustainable energy on both the environmental level as well as with food security. The question is can we implement the technology in an economically viable way^[20]. Discovering ways to use existing infrastructure in the production chain, as well as finding value incentives for landowners to produce energy crops will help in making lignocellulosic ethanol a usable option for creating sustainable energy. Algal Derived Biofuel Another approach to producing fuel from non-edible plant sources is the use of algae. Algal cells are capable of producing biomass that contains high levels of sugars and lipids^[21][22][23][26], which may be used as energy carriers for biofuel production^[22][23]. Algae provide solutions to some of the sustainability challenges we face, yet they are not without their drawbacks. For example, compared to lignocellulosic sources, algal derived biofuel can produce ten to one-hundred times the amount of energy, yet it is more expensive^[24]. Perhaps this trade-off is inherent due to the ability to harvest algae multiple times a year, as compared to the few, if not single, annual harvests we expect from typical cropping systems. A key benefit of algal feedstock is the ability to utilize marginal land. Even deserts are able to produce algal growth, if the production infrastructure is built^[22][23][25]. In addition, freshwater is not necessarily required, as seawater may be utilized by some species^[26]. 
The ability to use marginal land and seawater are key to food supply sustainability, as neither of these resources will cause resource competition with traditional agriculture. Limitations do exist in algal biofuel production. The efficiency of algal feedstock cultivation has been the target of much scrutiny^[23][26][27][28]. Though there are several set-ups that can be used to grow algae, from raceway ponds to glass bioreactors, all need an external source of energy to be operational^[23][27][28]. Targeted improvements to various process checkpoints are needed to make algal biodiesel a viable source of sustainable energy. For example, using a harvesting system that uses sedimentation allows gravity to be a source of energy, thus increases the efficiency and the cost of the system as a whole^[27]. The trade-off here is a lower harvest concentration, or in other words, a less efficient cultivation. Nonetheless, the use of lipids from algal extraction over soy/other oleoginous crops provides added sustainability to the food economy, if not to the fuel economy. Further advances in production and extraction technologies will allow algal biofuel to play a sustainable role in the global energy marketplace. The growing human population will surely lead to not only an increased demand for energy, but for food as well. We face a great challenge in satiating the energy demand, without encroaching on resources needed for food production. Ultimately, renewable sources of fuel will need to be utilized if we are to have a 100% sustainable pool of energy. Fossil fuels cannot be considered a sustainable option. They are a finite resource, not to mention environmentally deleterious due to CO2 emissions contribution to climate effects. Wind, solar and nuclear sources provide optimism that truly renewable, sustainable energy sources exist. Although they are not presently capable of relieving dependence on fossil fuels, we expect they will reduce total consumption as their use is broadened in coming years. However, traditional renewables cannot fill the need for liquid fuel, such as that used for transportation. This is where renewable biofuel, produced from crops, is important. The sustainability issue facing bioethanol production is related to the food supply. Increases in global demand for food crops will lead to higher prices, raising ethical questions of how we should use the crops we grow. Innovations in the development of biofuels may provide answers. The second generation of biofuels are not made from edible plant products, so this makes them much more sustainable than corn ethanol and soy biodiesel, assuming they can be produced at a proper scale. There is still a great amount of research needed, across the fields of engineering, chemistry and biology, in order for us to move to the theoretical maximum of 100% sustainable energy. 1. International Energy Agency. (2016, September). Key World Energy Statistics. Retrieved September 2, 2017, from https://www.iea.org/publications/freepublications/publication/KeyWorld2016.pdf 2. Doman, L. E., Arora, V., Singer, L. E., Zaretskaya, V., Jones, A., Huetteman, T., . . . Lindstrom, P. (2016). International Energy Outlook 2016 (Vol. 21) [IEO2016]. Washington D.C., NY: U.S. Energy Information Administration. Retrieved September 2, 2017, from https://www.eia.gov/outlooks/ieo/world.php 3. BP. (2016, June). BP Statistical Review of World Energy June 2016. 
Retrieved September 2, 2017, from https://www.bp.com/content/dam/bp/pdf/energy-economics/statistical-review-2016/ 4. Bauer, N., Mouratiadou, I., Luderer, G., Baumstark, L., Brecha, R. J., Edenhofer, O., & Kriegler, E. (2016). Global fossil energy markets and climate change mitigation – an analysis with REMIND. Climatic Change, 136 (1), 69-82. doi:10.1007/s10584-013-0901-6 5. Shafiee, S. & Topal, E. (2009). When will fossil fuel reserves be diminished? Energy Policy, 37, 181–189. doi:10.1016/j.enpol.2008.08.016 6. Mohr, S. H., Wang, J., Ellem, G., Ward, J., & Giurco, D. (2015). Projection of world fossil fuels by country. Fuel, 141, 120-135. doi:10.1016/j.fuel.2014.10.030 7. Coady, D., Parry, I., Sears, L., & Shang, B. (2017). How Large Are Global Fossil Fuel Subsidies?. World Development, 91, 11-27. doi:10.1016/j.worlddev.2016.10.004 8. Foster, E., Contestabile, M., Blazquez, J., Manzano, B., Workman, M., & Shah, N. (2017). The unstudied barriers to widespread renewable energy deployment: Fossil fuel price responses. Energy Policy, 103, 258-264. doi:10.1016/j.enpol.2016.12.050 9. Bhattacharya, M., Paramati, S. R., Ozturk, I. & Bhattacharya, S. (2016). The effect of renewable energy consumption on economic growth: Evidence from top 38 countries. Applied Energy, 162, 733–741. doi:10.1016/j.apenergy.2015.10.104 10. Ellabban, O., Abu-Rub, H. & Blaabjerg, F. (2014). Renewable energy resources: Current status, future prospects and their enabling technology. Renewable and Sustainable Energy Reviews 39, 748–764. 11. Wright, T., & Rahmanulloh, A. (2016, July 28). Indonesia Biofuels Annual 2016. Retrieved November 15, 2017, from https://gain.fas.usda.gov/Recent%20GAIN%20Publications/ 12. Rathor, D., Nizami, A.-S., Singh, A. & Pant, D. (2016). Key issues in estimating energy and greenhouse gas savings of biofuels: challenges and perspectives. Biofuel Research Journal 10, 380–393. doi: 10.18331/BRJ2016.3.2.3 13. Araújo, K., Mahajan, D., Kerr, R. & Silva, M. D. (2017). Global Biofuels at the Crossroads: An Overview of Technical, Policy, and Investment Complexities in the Sustainability of Biofuel Development. Agriculture 7, 32. doi: 10.3390/agriculture7040032 14. Bertrand, E., Vandenberghe, L. P. S., Soccol, C. R., Sigoillot, J.-C. & Faulds, C. (2016). First Generation Bioethanol. Green Fuels Technology, 175–212. doi: 10.1007/978-3-319-30205-8_8 15. Rulli, M. C., Bellomi, D., Cazzoli, A., Carolis, G. D. & D’Odorico, P. (2016). The water-land-food nexus of first-generation biofuels. Scientific Reports 6. doi: 10.1038/srep22521 16. Torquati, B., Venanzi, S., Ciani, A., Diotallevi, F. & Tamburi, V. (2014). Environmental Sustainability and Economic Benefits of Dairy Farm Biogas Energy Production: A Case Study in Umbria. Sustainability 6, 6696–6713. doi: 10.3390/su6106696 17. Nanda, Sonil, Azargohar R., Dalai A.J., Kozinski J.A.. (2015) An assessment on the sustainability of lignocellulosic biomass for biorefining. Renewable and Sustainable Energy Reviews 50, 925-941. 18. Margeot, Antoine, et al.(2009) New improvements for lignocellulosic ethanol. Current Opinion in Biotechnology 20 (3), 372–380. doi: 10.1016/j.copbio.2009.05.009. 19. Sanderson, Katharine. (2011) Lignocellulose: A chewy problem. Nature 474 (7352). doi:10.1038/474s012a. 20. Robertson, G. Philip, Stephen K. Hamilton, Bradford L. Barham, Bruce E. Dale, R. Cesar Izaurralde, Randall D. Jackson, Douglas A. Landis, Scott M. Swinton, Kurt D. Thelen, and James M. Tiedje. 
(2017) Cellulosic biofuel contributions to a sustainable energy future: Choices and outcomes. Science 356 (6345). doi:10.1126/science.aal2324. 21. Hu, Qiang, Milton Sommerfeld, Eric Jarvis, Maria Ghirardi, Matthew Posewitz, Michael Seibert, and Al Darzins. (2008) Microalgal triacylglycerols as feedstocks for biofuel production: perspectives and advances. The Plant Journal 54 (4), 621-39. doi:10.1111/j.1365-313x.2008.03492.x. 22. Laura L Beer, Eric S Boyd, John W Peters, Matthew C Posewitz. (2009) Engineering algae for biohydrogen and biofuel production. Current Opinion in Biotechnology 20, 264-271. 23. Brennan, Liam, and Philip Owende. (2010) Biofuels from microalgae—A review of technologies for production, processing, and extractions of biofuels and co-products. Renewable and Sustainable Energy Reviews 14 (2), 557-77. doi:10.1016/j.rser.2009.10.009. 24. Biofuels: What are they? (2010). Retrieved November 18, 2017, from http://www.biofuel.org.uk/ 25. Georgianna, D. Ryan, and Stephen P. Mayfield. (2012) Exploiting diversity and synthetic biology for the production of algal biofuels. Nature 488 (7411), 329-35. doi:10.1038/nature11479. 26. Borowitzka WA, Moheimani NR. (2013) Sustainable biofuels from algae. Mitigation and Adaptation Strategies for Global Change 18, 13–25. doi:10.1007/s11027-010-9271-9 27. Milledge, John J., and Sonia Heaven. (2012) A review of the harvesting of micro-algae for biofuel production. Reviews in Environmental Science and Bio/Technology 12 (2), 165-78. doi:10.1007/ 28. Gouveia, Luisa, and Ana Cristina Oliveira. (2008) Microalgae as a raw material for biofuels production. Journal of Industrial Microbiology & Biotechnology 36 (2), 269-74. doi:10.1007/ Photo CCs: My Energetical from Wikimedia Commons
{"url":"https://bioscriptionblog.com/2017/12/24/global-fuel-energy-sustainability/","timestamp":"2024-11-02T15:35:33Z","content_type":"text/html","content_length":"75669","record_id":"<urn:uuid:4474734a-c8d7-4afa-9234-63bb6bd13fa3>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00157.warc.gz"}
Equations devs & users

Hi, I was curious whether there is real added value in having both Equations and Equations?.

• I know it behaves differently concerning obligations, but I am not really sure what the advantages would be of using Equations and dealing with obligations using Next Obligation rather than the proof mode.
• Is there a reason why Equations? returns a warning if you use it to define a function that does not generate obligations? It would make sense to me to always use Equations?, since this way I would see directly when obligations are left to deal with.

I think it's because Equations? doesn't open proof mode when it doesn't produce any obligation?

I'm also a user of Equations? because VSCoq doesn't support obligations well.

From what I have gathered, it is because you have to open the proof mode before knowing if there are obligations left unsolved or not. But if you can raise a warning when there are none left, I don't see why you couldn't just close the proof mode.
{"url":"https://coq.gitlab.io/zulip-archive/stream/237659-Equations-devs-.26-users/topic/Equations.20vs.20Equations.3F.html","timestamp":"2024-11-10T17:26:09Z","content_type":"text/html","content_length":"3577","record_id":"<urn:uuid:d5188946-9148-4d20-9b63-2149f2358eb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00169.warc.gz"}
Is the Mercator map cylindrical? Is the Mercator map cylindrical? Mercator is a cylindrical projection. The meridians are vertical lines, parallel to each other, and equally spaced, and they extend to infinity when approaching the poles. The poles project to infinity and cannot be shown on the map. The graticule is symmetric across the equator and the central meridian. What does a Mercator projection map show? Mercator projection, type of map projection introduced in 1569 by Gerardus Mercator. This projection is widely used for navigation charts, because any straight line on a Mercator projection map is a line of constant true bearing that enables a navigator to plot a straight-line course. What are the 4 types of map projections? What Are the Different Types of Map Projections? Rank Map Projection Name Examples 1 Cylindrical Mercator, Cassini, Equirectangular 2 Pseudocylindrical Mollweide, Sinusoidal, Robinson 3 Conic Lambert conformal conic, Albers conic 4 Pseudoconical Bonne, Bottomley, Werner, American polyconic What is a cylindrical map? Cylindrical projection, in cartography, any of numerous map projections of the terrestrial sphere on the surface of a cylinder that is then unrolled as a plane. Originally, this and other map projections were achieved by a systematic method of drawing the Earth’s meridians and latitudes on the flat surface. What is the most famous cylindrical projection map? the Mercator Some preserve area, some shape, and some true distance along their meridians. The most famous of all map projections—the Mercator—is a cylindrical projection. Like the Central Cylindrical, the Mercator is also unable to project the poles and creates severe area distortion at latitudes near the poles. What are the disadvantages of cylindrical projection? The downsides of cylindrical map projections are that they are severely distorted at the poles. While the areas near the Equator are the most likely to be accurate compared to the actual Earth, the parallels and meridians being straight lines don’t allow for the curvature of the Earth to be taken into consideration. What are the 3 main map projections? This group of map projections can be classified into three types: Gnomonic projection, Stereographic projection and Orthographic projection. What are the 5 map projections? Top 10 World Map Projections • Mercator. This projection was developed by Gerardus Mercator back in 1569 for navigational purposes. • Robinson. This map is known as a ‘compromise’, it shows neither the shape or land mass of countries correct. • Dymaxion Map. • Gall-Peters. • Sinu-Mollweide. • Goode’s Homolosine. • AuthaGraph. • Hobo-Dyer. What is simple cylindrical projection? A cylindrical projection can be imagined in its simplest form as a cylinder that has been wrapped around a globe at the equator. If the graticule of latitude and longitude are projected onto the cylinder and the cylinder unwrapped, then a grid-like pattern of straight lines of latitude and longitude would result. What is the most accurate flat map projection to use? Winkel tripel The lower the score, the smaller the errors and the better the map. A globe of the Earth would have an error score of 0.0. We found that the best previously known flat map projection for the globe is the Winkel tripel used by the National Geographic Society, with an error score of 4.563. What are the 3 common map projections? Why is the Mercator projection the standard map projection? 
It became the standard map projection for navigation because it is unique in representing north as up and south as down everywhere while preserving local directions and shapes. The map is thereby conformal. As a side effect, the Mercator projection inflates the size of objects away from the equator. How is the Transverse Mercator used on a map? The Universal Transverse Mercator (UTM) projection is used to define horizontal positions worldwide by dividing the earth’s surface into 6-degree zones, each mapped by the Transverse Mercator projection with a central meridian in the center of the zone. What was the latitude of the Mercator 1569 map? Mercator 1569 world map (Nova et Aucta Orbis Terrae Descriptio ad Usum Navigantium Emendate Accommodata) showing latitudes 66°S to 80°N. The Mercator projection (/mərˈkeɪtər/) is a cylindrical map projection presented by Flemish geographer and cartographer Gerardus Mercator in 1569. Who was the inventor of the map projection? The best known map projection is named for its inventor, Gerardus Mercator, who developed it in 1569. The Mercator projection is a cylindrical projection that was developed for navigation purposes. The Mercator projection was used for its portrayal of direction and shape, so it was helpful to the sailors of that time.
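The spherical Mercator formulas behind these answers are short enough to state directly. The following Python sketch is an illustration added here (it is not part of the original article); the Earth radius and the example coordinates are assumed values chosen only for demonstration.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, assumed value for illustration

def mercator_xy(lon_deg: float, lat_deg: float, radius: float = EARTH_RADIUS_M):
    """Project longitude/latitude onto Mercator x/y on a spherical Earth.

    x grows linearly with longitude, while y stretches toward the poles,
    which is why areas are inflated away from the equator and the poles
    themselves (y -> infinity) cannot be shown on the map.
    """
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# Example points (assumed coordinates): same longitude, different latitudes.
print(mercator_xy(151.2, -33.9))   # mid-latitude point
print(mercator_xy(151.2, -66.0))   # closer to the pole: |y| grows rapidly
```

A straight line drawn on the resulting chart corresponds to a constant compass bearing, which is the property that made the projection useful for navigation.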
{"url":"https://bridgitmendlermusic.com/is-the-mercator-map-cylindrical/","timestamp":"2024-11-11T04:55:37Z","content_type":"text/html","content_length":"43367","record_id":"<urn:uuid:bc0b0c94-070d-4f9c-ad6c-9593124c15b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00260.warc.gz"}
Week3 Assignment-unittests.test_shortest_path(Graph_Advanced). How to use feedback from the tests? I get following feedback Failed test case: Failed to find the optimal solution for path between nodes 25 and 759 in a graph. To replicate the graph, you may run generate_graph(nodes = 1000, edges = 100, seed = 42), index = Expected: 31 Got: 30 Failed test case: Failed to find the optimal solution for path between nodes 654 and 114 in a graph. To replicate the graph, you may run generate_graph(nodes = 1000, edges = 100, seed = 43), index = Expected: 36 Got: 35 How do I rerun with debug to find out the issue ? Hi @genaicoder , You can do this either by printing intermediate values to track the algorithm’s progress, or create graphs with fewer nodes and edges to isolate potential issues more easily. Hope it helps! Let me know if you have any questions. That is probably because you are not using the right algorithm for the large graph. At this point I would say to probe the LLM for other algorithms to implement the large graph case, giving it necessary information about the size of your graph and the time it needs to run…
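For readers hitting the same small distance discrepancies, an exact single-source shortest-path search is the usual fix. The sketch below is a generic Dijkstra implementation over an adjacency-dictionary graph; the course's `generate_graph` helper and `Graph_Advanced` class are not reproduced here, so the graph format and function name are assumptions made only for illustration.

```python
import heapq

def dijkstra(adjacency, source, target):
    """Exact shortest path on a non-negatively weighted graph.

    `adjacency` is assumed to map node -> {neighbour: edge_weight}.
    Returns (distance, path); (inf, []) if the target is unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == target:
            path = [node]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in adjacency.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Debugging tip from the thread: test on a tiny graph first.
tiny = {0: {1: 2, 2: 5}, 1: {2: 1}, 2: {}}
print(dijkstra(tiny, 0, 2))   # -> (3.0, [0, 1, 2])
```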
{"url":"https://community.deeplearning.ai/t/week3-assignment-unittests-test-shortest-path-graph-advanced-how-to-use-feedback-from-the-tests/711659","timestamp":"2024-11-05T06:27:45Z","content_type":"text/html","content_length":"35289","record_id":"<urn:uuid:a37db70c-dd5e-4764-a58f-11c1b3d45da8>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00012.warc.gz"}
Projectile Dynamics

Target Practice

One classical type of dynamics problem is that of a projectile. A video solution is provided below. In Part B of the solution, we investigate the meaning of the mathematical answer.

The Trajectory is Parabolic!

Here we show that the trajectory of a projectile, being pulled down to Earth by gravity, has the shape of a parabola. This fact might help, a lot, in the Espoo Challenge.

Jumping Off a Cliff

Suppose you want to jump off a cliff and land in the water. The problem is that the water is rather far away. Let’s suppose that the cliff is a height \(h\) and the water is a distance \(d\) from the bottom of a vertical cliff. How fast does one need to run off the cliff \(v_0\) in order to clear the beach and make it into the water?
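A sketch of the standard answer (added here for reference, assuming the runner leaves the cliff horizontally and air resistance is ignored): the fall time follows from the vertical motion,

\[ h = \frac{1}{2} g t^2 \quad\Longrightarrow\quad t = \sqrt{\frac{2h}{g}}, \]

and in that time the horizontal distance travelled is \(v_0 t\), so clearing the beach requires

\[ v_0 \ge \frac{d}{t} = d\sqrt{\frac{g}{2h}}. \]

For example, with \(h = 20\) m and \(d = 10\) m this gives \(v_0 \gtrsim 10\sqrt{9.8/40} \approx 5\) m/s, a comfortable running speed.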
{"url":"https://www.spumone.org/courses/dynamics-notes/projectile-dynamics/","timestamp":"2024-11-07T19:11:39Z","content_type":"text/html","content_length":"49182","record_id":"<urn:uuid:68562c50-0419-4113-b7ff-b2acb5c3f695>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00880.warc.gz"}
Notes on set cardinality Credit: deviantART / Sc0t0ma I'm working through LMU's Introduction to Mathematical Philosophy on Coursera at the moment. I stumbled across something that confused me (specifically, the claim that “has fewer elements than” defines a total order), so I'm using this blog post as a medium in which to work through it methodically enough to explain it to someone else. (Warning: maths below the fold.) Some boring technical definitions of function properties: Definition (injectivity): we say \(f \in (X \rightarrow Y)\) is injective iff no two elements map to the same result, i.e. \(\forall x_0, x_1 \in X: f(x_0) = f(x_1) \Leftrightarrow x_0 = x_1\). Definition (surjectivity): we say \(f \in (X \rightarrow Y)\) is surjective iff the preimage of \(Y\) in \(f\) is \(X\), i.e. \(\forall y \in Y: \exists x \in X: f(x)=y\). Definition (bijectivity): we say \(f \in (X \rightarrow Y)\) is bijective (or “one-to-one”) iff it is injective and surjective. Some boring technical definitions of relations on sets: Definition: the sets \(X\) and \(Y\) have equal size (“\(X =_\Sigma Y\)”) iff there exists a bijection (one-to-one function) between them. Example: \(\{0,1,2,3,\cdots\} =_\Sigma \{0,2,4,6,\cdots\}\), with one possible bijection being \(\lambda x \cdot 2x\). \(\mathbb{N} \neq_\Sigma \mathbb{R}\), due to Cantor's classic proof Definition: \(X\) is “no larger than” \(Y\) (“\(X \leq_\Sigma Y\)”) iff \(X =_\Sigma Y' \subset Y\). Definition: \(X\) is “smaller than” \(Y\) (“\(X <_\Sigma Y\)”) iff \(X \leq_\Sigma Y \land X \neq_\Sigma Y\). Example: \(\mathbb{N} <_\Sigma \mathbb{R}\). We already have non-equality; we also have \(\mathbb{N} \leq_\Sigma \mathbb{R}\) due to \(\mathbb{N} =_\Sigma \mathbb{N} \subset \mathbb{R}\). At this point, the lecturer continued with, “And so this is a total order!”. My immediate reaction to this was “Okay!”, followed half a minute later by “Wait... we're talking about an ordering which implies some very non-intuitive things I don't really understand, like all the transfinite ordinals. Maybe I should double check this makes sense.” Transitivity and reflexivity of \(\leq_\Sigma\) are obvious. It seems that we can break down the “hard part” of the proof that \(\leq_\Sigma\) is a total order into two parts: Conjecture 1 (totality): for all sets \(X\) and \(Y\), at least one of \(X \leq_\Sigma Y\) or \(Y \leq_\Sigma X\). Conjecture 2 (antisymmetry): for all sets \(X\) and \(Y\), if \(X \leq_\Sigma Y\) and \(Y \leq_\Sigma X\), then \(X =_\Sigma Y\). Let's plow right in: Proof of Conjecture 1: Assume \(\lnot(Y \leq_\Sigma X)\). It suffices to show that there exists some injective function from \(X\) to \(Y\). By our initial assumption, there are no injective functions from \(Y\) to \(X\). Thus we can invoke Zorn's lemma (over the set of \(X \rightarrow Y\) functions, and the “repeated operation” of “find some \(f(x_0) = f(x_1)\), and redefine \(f(x_0)\) to some value not in the image of \(f\)”). The resulting function can't be surjective (otherwise its “inverses” would be injective functions from \(Y \) to \(X\), a contradiction); hence it must be injective. The second half took me far longer than it should have (the better part of a day): Proof of Conjecture 2: Assume \(X \leq_\Sigma Y\) and \(Y \leq_\Sigma X\). It follows that there exist functions \(u, v \in (X \rightarrow Y)\), such that \(u\) is injective and \(v\) is surjective. (The latter results by “inverting” some injective function going in the opposite direction.) 
Then, we invoke Zorn's lemma (over the set of surjective \(X \rightarrow Y\) functions, and the “repeated operation” of “find some \(f(x_0) = f(x_1) \neq u(x_1)\) and redefine \(f(x_1)\) as \(u(x_1) \)”.) The resulting function must be a bijection, as desired. Thus, set cardinalities are a total order. It's not clear to me whether the Axiom of Choice was necessary to prove this, though. ...well, fuck. Credit: Abstruse Goose From here, the lecture notes went on to provide a definition of infinity attributed to Dedekind: Definition (infinitude): \(X\) is infinite iff \(X =_\Sigma X'\) for some \(X' \subset X\). (That is, it is equal-sized to a strict subset of itself.) Just for kicks (and going completely off-track from the lecture) let's prove that \(\mathbb{N}\) is the (well, “a”) smallest infinite set. Lemma: \(X\) is infinite if and only if \(\mathbb{N} \leq_\Sigma X\). Proof (\(\Rightarrow\)): We prove the contrapositive. Assume \(X <_\Sigma \mathbb{N}\). Then we can show that \(X =_\Sigma \{0, 1, \cdots, n-1\}\) for some \(n \in \mathbb{N}\) (“\(|X| = n\)”). By induction, we can show that any set of size \(n \in \mathbb{N}\) is not infinite. Proof (\(\Leftarrow\)): Let \(f\) be an injective function from \(\mathbb{N} \rightarrow X\). Let \(X' = X \setminus \{f(0)\}\). It remains to provide a bijection \(g \in (X \rightarrow X')\): \[ g (x) = \left\{ \begin{array}{ll} f(f^{-1}(x) + 1), & x \in f(\mathbb{N}) \\ x, & \textrm{otherwise} \end{array} \right. \] Finally, let's prove a couple of simple properties about Hilbert's hotel: Lemma (infinity plus infinity): If \(X\) is infinite, then \(X =_\Sigma \mathbb{Z}_2 \times X\). Proof (sketch): By infinitude, we can take an element out of \(X\) without decreasing its size. Combined with Zorn's lemma, this gives a simple construction. Lemma (infinity times infinity): If \(X\) is infinite, then \(X =_\Sigma X^2\). Example: \(\mathbb{N} =_\Sigma \mathbb{N^2}\). One bijection is: \[f(x_0,x_1) = x_0 + \dfrac{(x_0 + x_1)(x_0 + x_1 + 1)}{2}\] Example: \(\mathbb{R} =_\Sigma \mathbb{R^2}\). We could construct a bijection by taking two real numbers and “interlacing” their digits (with suitable fudging to remove [countably many] duplicates). Let \(X\) be given. To show that \(X^2 \leq_\Sigma X\), we want to find a partition function \(f \in (X \rightarrow \mathcal{P}X)\) such that all \(f(x)\) are disjoint and all \(f(x) =_\Sigma X\). (The construction follows fairly easily from that.) Zorn's lemma again. This time, the set under consideration is the set of partition functions in \(X \rightarrow \mathcal{P}X\) such that all \(f(x)\) satisfy \(f(x) =_\Sigma X \lor f(x) = \emptyset \). The repeated operation is taking some \(f(x_0) = X', f(x_1) = \emptyset\), picking a bijection \(g \in (\mathbb{Z}_2 \times X' \leftrightarrow X')\), and redefining \(f(x_i) = g(i,X')\). The resulting function is a partition with all \(f(x) =_\Sigma X\) as desired. Corollary: If \(X \leq_\Sigma Y\) and \(Y\) is infinite, then \(X \times Y =_\Sigma Y\). Proof: \(Y \leq_\Sigma X \times Y \leq_\Sigma Y \times Y \leq_\Sigma Y\).
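As a quick sanity check of the pairing function above (an added illustration, not part of the original post), the short Python script below verifies that \(f(x_0,x_1) = x_0 + \dfrac{(x_0 + x_1)(x_0 + x_1 + 1)}{2}\) maps the finite triangle \(x_0 + x_1 < n\) exactly onto \(\{0, 1, \cdots, n(n+1)/2 - 1\}\), with no repeats.

```python
def pair(x0: int, x1: int) -> int:
    """The Cantor pairing function quoted above: N^2 -> N."""
    s = x0 + x1
    return x0 + s * (s + 1) // 2

# The pairs with x0 + x1 < n should map exactly onto {0, ..., n(n+1)/2 - 1}:
# each anti-diagonal fills one consecutive block of integers.
n = 50
values = {pair(x0, s - x0) for s in range(n) for x0 in range(s + 1)}
assert values == set(range(n * (n + 1) // 2)), "not a bijection on this triangle"
print("bijection verified on", len(values), "pairs")
```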
{"url":"http://blog.openendings.net/2013/08/notes-on-set-cardinality.html","timestamp":"2024-11-12T12:40:49Z","content_type":"application/xhtml+xml","content_length":"51610","record_id":"<urn:uuid:87566b12-cf52-47ca-9882-6ed39cb555ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00607.warc.gz"}
Mathematical Optical Illusions

There are five mathematical optical illusions that you can see by clicking on the tabs above. Each illusion comes with a question which you can answer using the dropdown boxes below. When you think you have answered all five questions correctly click on the Check button. You need to get all five answers correct first time to win a trophy. If you get one or more answers wrong the diagrams will change to show the correct answer.

• What colour is the arc that passes through the centre of the black circle?
• Which line is longer: red or blue?
• What is the colour of the longest line: red, blue or green?
• Which line is longest: red or blue?
• Which line is longest: red or blue?

Illusion tabs: Midarc, Trapezia, Triangle Inequalities, Dumbbells.

More mathematical optical illusions: Christmas Tables, Missing Square, Chess Board Paradox, Parallel or not?
{"url":"https://www.transum.org/Maths/Display/Optical_illusion/","timestamp":"2024-11-05T19:27:34Z","content_type":"text/html","content_length":"43402","record_id":"<urn:uuid:def6b5a2-62f7-4fdd-b4c0-4c59a1abd2fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00518.warc.gz"}
Comparison of 4-dimensional variational and ensemble optimal interpolation data assimilation systems using a Regional Ocean Modeling System (v3.4) configuration of the eddy-dominated East Australian Current system Articles | Volume 17, issue 6 © Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License. Comparison of 4-dimensional variational and ensemble optimal interpolation data assimilation systems using a Regional Ocean Modeling System (v3.4) configuration of the eddy-dominated East Australian Current system Ocean models must be regularly updated through the assimilation of observations (data assimilation) in order to correctly represent the timing and locations of eddies. Since initial conditions play an important role in the quality of short-term ocean forecasts, an effective data assimilation scheme to produce accurate state estimates is key to improving prediction. Western boundary current regions, such as the East Australia Current system, are highly variable regions, making them particularly challenging to model and predict. This study assesses the performance of two ocean data assimilation systems in the East Australian Current system over a 2-year period. We compare the time-dependent 4-dimensional variational (4D-Var) data assimilation system with the more computationally efficient, time-independent ensemble optimal interpolation (EnOI) system, across a common modelling and observational framework. Both systems assimilate the same observations: satellite-derived sea surface height, sea surface temperature, vertical profiles of temperature and salinity (from Argo floats), and temperature profiles from expendable bathythermographs. We analyse both systems' performance against independent data that are withheld, allowing a thorough analysis of system performance. The 4D-Var system is 25 times more expensive but outperforms the EnOI system against both assimilated and independent observations at the surface and subsurface. For forecast horizons of 5d, root-mean-squared forecast errors are 20%–60% higher for the EnOI system compared to the 4D-Var system. The 4D-Var system, which assimilates observations over 5d windows, provides a smoother transition from the end of the forecast to the subsequent analysis field. The EnOI system displays elevated low-frequency (>1d) surface-intensified variability in temperature and elevated kinetic energy at length scales less than 100km at the beginning of the forecast windows. The 4D-Var system displays elevated energy in the near-inertial range throughout the water column, with the wavenumber kinetic energy spectra remaining unchanged upon assimilation. Overall, this comparison shows quantitatively that the 4D-Var system results in improved predictability as the analysis provides a smoother and more dynamically balanced fit between the observations and the model's time-evolving flow. This advocates the use of advanced, time-dependent data assimilation methods, particularly for highly variable oceanic regions, and motivates future work into further improving data assimilation schemes. Received: 13 Oct 2023 – Discussion started: 24 Oct 2023 – Revised: 05 Jan 2024 – Accepted: 24 Jan 2024 – Published: 22 Mar 2024 • The predictive performances of two ocean data assimilation systems (EnOI and 4D-Var) are assessed in a Regional Ocean Modeling System (ROMS) configuration of the East Australian Current over 5d forecast horizons. 
• The forecast skill of the 4D-Var system surpasses the EnOI system against both assimilated and independent observations at the surface and subsurface. • The EnOI system has greater analysis increments, elevated low-frequency (>1d) surface-intensified variability in temperature, and elevated kinetic energy at length scales less than 100km at the beginning of the forecast windows. • The dynamically balanced 4D-Var system displays elevated energy in the near-inertial range throughout the water column, with the wavenumber kinetic energy spectra remaining unchanged upon Data assimilation (DA), the combination of numerical modelling and observations, is essential to produce accurate forecasts of the atmosphere or ocean circulation. The goal of any DA scheme is to combine observations and a numerical model such that the result is a better estimate of the ocean circulation than either alone. Observations provide sparse data points, while the model provides context. Since initial conditions play an important role in forecast quality, accurate and dynamically consistent state estimates are key to improving prediction. This study focuses on the comparison of two DA techniques applied to forecasting the ocean mesoscale circulation in a highly dynamic oceanic region. Mesoscale eddies exist throughout the global ocean and contain more than half of the kinetic energy of the ocean circulation. Western boundary current (WBC) regions are hotspots of high eddy variability as eddies emerge due to instabilities in the strong boundary current flow. The high mesoscale eddy variability (Stammer, 1997; Mata et al., 2000) and the complexities of eddy shedding processes and evolution (Mata et al., 2006; Bull et al., 2017) make WBCs challenging to model and predict (Feron, 1995; Imawaki et al., 2013; Roughan et al., 2017). Due to the chaotic nature of the mesoscale circulation, ocean models must be regularly updated through the assimilation of observations in order to correctly represent the timing and locations of eddies (e.g. Kerry et al., 2016; Li and Roughan, 2023), and accurate forecasts of eddies as they shed, evolve, and interact in WBC regions are lacking. The East Australian Current (EAC), the WBC of the South Pacific subtropical gyre (Fig. 1a), and its associated eddies dominate the circulation along the southeastern coast of Australia. The southward-flowing current is most coherent off 27°S (Sloyan et al., 2016) and intensifies at around 31°S (Kerry and Roughan, 2020a). The current typically separates from the coast between 31 and 32.5°S (Cetina Heredia et al., 2014) and turns eastward to form the EAC eastern extension, shedding large warm-core eddies in the Tasman Sea (Oke and Middleton, 2000; Cetina Heredia et al., 2014; Oke et al., 2019). In the EAC, eddies can directly influence shelf circulation (Schaeffer et al., 2014; Schaeffer and Roughan, 2015; Malan et al., 2023) and often intensify as the jet separates from the coast. After shedding, eddies propagate and evolve (Pilo et al., 2015b, a) and can display a complex vertical structure including tilting and stacking (Oke and Griffin, 2011; Macdonald et al., 2013; Roughan et al., 2017; Pilo et al., 2018). As such, the EAC is a challenging region to predict and provides an ideal test bed for comparison of DA methods. There are various DA techniques, by which a model estimate of the ocean state can be combined with ocean observations, that vary in complexity. 
Simpler, computationally efficient, time-independent methods such as 3-dimensional variational data assimilation (3D-Var) and ensemble optimal interpolation (EnOI) centre the observations and model on a single time and are capable of resolving slowly evolving flows governed by simple balance relationships at synoptic scales. These methods have provided useful state estimates and predictions. For example, the European Centre for Medium-Range Weather Forecasts uses 3D-Var to produce initial conditions for its coupled ocean–atmosphere modelling system (Mogensen et al., 2012), and EnOI was effectively employed in Australia's Bluelink Ocean Data Assimilation System (Oke et al., 2008a). In Oke et al. (2010) a case was presented for the use of EnOI, weighing up the predictive skill against its computational efficiency. Specifically, EnOI is highly computationally efficient as it does not represent the errors of the day; rather it assumes that the background error covariances are well represented by a stationary or seasonally varying ensemble. More recent work has shown that combining flow-dependent background error covariances (from an ensemble of model solutions) with a static ensemble achieves improved predictive skill ( Brassington et al., 2023). With increasing computational capacity and the pursuit of more accurate weather and ocean forecasts over the last 2 decades, a shift has been made to more advanced, time-dependent DA techniques ( Edwards et al., 2015; Moore et al., 2019). Advanced DA methods make use of the time-variable dynamics of the model, allowing the observations to be assimilated over a time interval given the temporal evolution of the circulation. In the atmosphere, these methods have provided considerable improvement compared to the earlier, time-independent DA techniques, particularly for forecasts (e.g. Lorenc and Rawlins, 2005; Brousseau et al., 2012) and for highly intermittent flows with irregularly sampled observations (e.g. Xu, 2013). Indeed, the two techniques that are the most promising in numerical weather prediction (NWP) are 4-dimensional variational data assimilation (4D-Var) and the ensemble Kalman filter (EnKF), and ocean DA is following suit (Moore et al., 2019). In 4D-Var the model and observations are combined using subsequent iterations of the tangent linear and adjoint models to compute increments in the forecast model (initial conditions, boundary conditions, and surface forcing) such that the difference between the new model solution and the observations is minimised over a time window (Moore et al., 2004). With 4D-Var, a continual and full estimate of the ocean over the assimilation window is created. This is ideal for both accuracy and timeliness of current state estimates and future predictions, as a continuous field evolves by the nonlinear primitive equations. The Kalman filter (KF) can be formally posed in the same way as 4D-Var (Lorenc, 1986) and in practice uses an ensemble of perturbed model simulations to approximate the model error covariances and their temporal evolution, and the ensemble mean is considered the best estimate of the state of the system (Evensen, 2002). An advantage of generating an ensemble of forecasts is that probabilistic forecasts can be derived from the ensemble spread. Indeed, with the shift to more advanced DA techniques in ocean forecasting, it is important to quantify the improvements gained. 
Here we use a Regional Ocean Modeling System (ROMS) configuration of a dynamic WBC (the EAC) to compare two DA methods in a quantifiable manner. We compare the time-independent DA technique (EnOI) with the time-dependent technique (4D-Var) using the same numerical model configuration and suite of observations. We quantify the differences in predictive skill achieved by the two systems against assimilated and independent observations at the surface and subsurface. We focus our analysis on the performance of the short-range (5d) forecasts. After presenting the experiments (Sect. 2), we begin by comparing forecast performance against assimilated observations (Sect. 3.1). Then we employ a suite of independent observations to assess the forecast skill of the two systems (Sect. 3.2). The model energetics (Sect. 4.2) and the temporal and spatial scales of variability (Sect. 4.3) are then compared to understand what may drive differences in predictive skill. Finally we summarise and discuss the way forward for improvements in Sect. 5. 2Model and data assimilation system configuration 2.1The Regional Ocean Modeling System configuration We use the Regional Ocean Modeling System (ROMS) to simulate the eddying ocean circulation off the southeastern coast of Australia between January 2012 and December 2013. This modelling suite is named the South East Australian Coastal Ocean Forecast System (SEA-COFS, Roughan and Kerry, 2023). ROMS is a widely used free-surface, hydrostatic, terrain-following, primitive equation ocean model and is described by Haidvogel et al. (2000), Marchesiello and Middleton (2000), and Shchepetkin and McWilliams (2005). The model configuration used in this study has been used in various past studies of the EAC and is described in detail in Kerry et al. (2016, 2020a) and Roughan and Kerry (2023). The study domain covers SE Australia from 25.25 to 41.55°S and approximately 1000km offshore (Fig. 1a). The domain covers the latitudinal extent of the EAC system from where the current jet is most coherent, the EAC separation region, the region of high eddy activity associated with the EAC eastern extension, and the EAC southern extension. The grid is rotated 20° clockwise such that the domain y axis is oriented roughly parallel with the coastline. The cross-shore horizontal resolution varies from 2.5km over the continental shelf and gradually increases to 6km offshore. The horizontal resolution is 5km in the along-shore direction. Higher resolution over the shelf allows the steep topography to be maintained while minimising pressure gradient errors that emerge in terrain-following coordinate schemes, which otherwise may result in artificial along-slope flow for steep topography (Haney, 1991; Mellor et al., 1994). As such, less topographic smoothing is required to ensure low horizontal pressure gradient errors while still representing the shelf and seamount structures in the model. The model utilises 30 vertical s layers with higher resolution in the upper 500m to resolve mesoscale dynamics and higher resolution near the seabed for improved representation of the bottom boundary layer. To better resolve surface currents, a near-constant-depth surface layer is provided by applying the vertical stretching scheme of De Souza et al. (2015). Initial conditions and boundary forcing are derived from the Bluelink ReANalysis version 3 (BRAN3; Oke et al., 2013). 
The boundary forcing is applied daily, and misfits in baroclinic energy to the BRAN3 condition are absorbed at the boundary via a flow-relaxation scheme. The model is forced at the surface with realistic atmospheric forcing derived from the 12km resolution Bureau of Meteorology (BOM) Australian Community Climate and Earth-System Simulation (ACCESS) analysis (Puri et al., 2013). The atmospheric forcing fields are applied every 6h and used to compute the surface wind stress and surface net heat and freshwater fluxes using the bulk flux parameterisation of Fairall et al. (1996). The free-running configuration, while unable to reproduce the temporal evolution of the mesoscale eddies, has been shown to accurately represent the mean dynamical features of the EAC and both the surface and subsurface (0–2000m) variability (Kerry and Roughan, 2020a). Specifically, they show that the model accurately represents the mesoscale eddy-related variability in sea surface height (SSH), the frequency in occurrence of EAC separation latitude, the seasonal cycle in sea surface temperature (SST), the ocean's subsurface structure based on data from Argo profiling floats, EAC transport, and the temperature depth structure across the EAC. Thus, using data assimilation, we aim to constrain the model to reproduce the temporal evolution of the mesoscale eddies and examine the forecast skill achieved. The same set of observations are assimilated into the ROMS model configuration using the two DA systems (EnOI and 4D-Var) for comparison in this study. These include satellite-derived SSH, SST, sea surface salinity (SSS), vertical profiles of temperature and salinity from profiling Argo floats, and vertical profiles of temperature from expendable bathythermographs (XBTs) (refer to Fig. 1b). The number of processed observations assimilated for each 5d assimilation window is shown in Fig. 1d and e. These observations are referred to as the “traditionally” available observations (TRAD) ( Siripatana et al., 2020). We describe the observations used and the observation uncertainties specified below. For a detailed description of the observations, the processing performed prior to assimilation, and the prescribed observation uncertainties, the reader is referred to Kerry et al. (2016). 2.2.1Satellite-derived sea surface height Archiving, Validation and Interpretation of Satellite Oceanographic Data (AVISO), France, produces global, daily, gridded ($\mathrm{1}/\mathrm{4}$°×$\mathrm{1}/\mathrm{4}$°) mean sea level anomaly (SLA) data by merging of all available along-track satellite altimetry data, computed with respect to a 7-year mean. We add the AVISO SLA data to the dynamic SSH mean from a long free run such that the sea level data are consistent with the ROMS model configuration. The AVISO delayed-time global SLA product error for the region is estimated at 2cm (AVISO, 2015). We prescribe an additional 4cm of uncertainty to account for imbalances between this statistical field and a dynamically balanced SSH field required by the model, as well as the smaller spatial-scale processes resolved by the model compared to the gridded product. As such, we prescribe an observation uncertainty of 6cm. As the AVISO gridded product poorly resolves continental shelf processes, we exclude SSH observations over water depths less than 1000m. We use the gridded AVISO product to constrain SSH, rather than the along-track altimetry, for this comparison study. 
Current work including the development of a high-resolution coastal ocean forecast system (Roughan and Kerry, 2023) is now making use of along-track SSH data successfully with 4D-Var. 2.2.2Satellite-derived sea surface temperature SST data from the US Naval Oceanographic Office's Global Area Coverage Advanced Very High Resolution Radiometer level-2 product (NAVOCEANO's GAC AVHRR L2P SST) are used for this study. Data are available 2–3 times per day. We remove day-time SST observations and any night-time observations when wind speed <2ms^−1 (Donlon et al., 2002). The percentage of SST observations removed per 5d cycle is 0.33%–54.3% (mean of 20.77%). As the resolution of the data is similar to the resolution of the model, the observation uncertainty for the assimilation is chosen to be equal to the specified product error (Andreu-Burillo et al., 2010), which is 0.4–0.5°C. 2.2.3Satellite-derived sea surface salinity We use the Level-3 gridded sea surface salinity (SSS) product derived from the National Aeronautics and Space Administration (NASA) Aquarius satellite (http://www.aquarius.umaine.edu/, last access: 6 March 2024). This product provides daily fields at a 1° resolution. We set the observation uncertainty to 0.4. The specified Aquarius SSS product error is ∼0.2, and 0.4 is chosen to account for representation errors. The value is considerably higher than the uncertainties specified for other in situ salinity observations, so SSS provides little constraint to the system (Kerry et al., 2016, 2.2.4Argo floats Argo (free-drifting profiling) floats measure temperature and salinity of the upper 2000m of the global ocean (http://www.argo.ucsd.edu, last access: 6 March 2024, Fig. 1b). The Argo data points are averaged to the model grid (in the horizontal and vertical) and a 5min time step. Uncertainty profiles are defined to specify the nominal minimal uncertainties for subsurface temperature and salinity (method described in Kerry et al., 2016). The profiles provide greater uncertainties in the depth ranges of greatest variability where representation errors are likely to be the largest. The observation error variance is specified as the maximum of this nominal minimum error variance and the variance of the observations from the same model cell. 2.2.5Expendable bathythermographs Expendable bathythermographs (XBTs) collect temperature profiles along repeat lines sampled by merchant ships; the Sydney–Wellington (PX34) and the Brisbane–Fiji (PX30) routes intersect our model domain (Fig. 1b). Four PX30 lines and seven PX34 lines took place over the assimilation period (2012–2013; Fig. 1e). XBT casts are performed at 10km intervals along the sections, and the XBT data points are averaged to the model grid and a 5min time step. The nominal minimal uncertainty variance profiles used for the Argo temperature observations are doubled for the XBT observations, and the observation error variance is specified as the maximum of the nominal minimum error variance and the variance of the observations from the same model cell. 2.2.6Independent observations used for system assessment A suite of additional observations were also available over the simulation period (2012–2013) that were collected as part of Australia's Integrated Marine Observing System (IMOS). 
These include surface velocity measurements from high-frequency coastal radar (HF radar); temperature, salinity, and velocity observations from continental-shelf moorings off the coast of New South Wales (NSW) and South East Queensland (SEQ); temperature, salinity, and velocity observations from five deep-water moorings across the core of the EAC at 28°S (EAC array); and temperature and salinity observations from ocean gliders (refer to Fig. 1c). These products provide independent observations against which we assess the performance of the two systems. Furthermore, these observations were assimilated into the ROMS model (along with the TRAD observations) using 4D-Var (Kerry et al., 2016, 2018; Siripatana et al., 2020). Given the full suite of available observations were assimilated, this system is referred to as the FULL system and considered the “best estimate” of the ocean state over the 2012–2013 period. As such, the FULL system is also used in this paper as a benchmark against which to compare the performance of the two systems presented in this study (4D-Var and EnOI systems that assimilate TRAD observations). 2.3Data assimilation experiments In this paper, we refer to three different configurations of the SEA-COFS model which differ in DA type and/or the observations assimilated. Each case is performed over the 2-year period from January 2012 and December 2013 and is described below. 1. 4D-Var TRAD refers to the 4D-Var system that assimilates “traditionally” available observations (SSH, SST, SSS, Argo, and XBT). This system is similar to the system described in Kerry et al. ( 2016) expect that it only assimilates the TRAD observations. 2. EnOI TRAD refers to the system that assimilates the same observations as the 4D-Var TRAD but using the EnOI DA method described in Sect. 2.4.1 below. 3. 4D-Var FULL refers to the 4D-Var system that assimilates all available observations (SSH, SST, SSS, Argo, XBT, HF radar, shelf and deep moorings, and glider data). It is similar to the system described in detail in Kerry et al. (2016, 2020b, 2018). A detailed comparison of the 4D-Var TRAD and the FULL systems was presented in Siripatana et al. (2020). The purpose of this paper is to compare the 4D-Var TRAD and the EnOI TRAD systems, in order to provide a comparison of the two DA schemes using a common suite of traditionally available observations. We introduce the 4D-Var FULL system as a benchmark when comparing against observations that are independent to the TRAD experiments in Sect. 3.2. 2.4Data assimilation methods The classic state estimation problem can be given by $\begin{array}{}\text{(1)}& {\mathbf{X}}_{\mathrm{a}}={\mathbf{X}}_{\mathrm{f}}+\mathbf{K}\left(\mathbit{y}-H\left({\mathbf{X}}_{\mathrm{f}}\right)\right),\end{array}$ where X is the state estimate; subscripts f and a refer to forecast and analysis, respectively; K is the Kalman gain; y is the observation vector; H is the observation operator that samples the background circulation to observation points in space and time. The y−H(X[f]) term is referred to as the innovation vector and describes the difference between the observations and the forecast model mapped to observation space. The difference in DA techniques lies in the formulation of K, which determines how the forecast innovations are mapped into model space to produce the new state estimate (X[a]). 
For the standard analysis equation that is solved by the Kalman filter and the dual form of 4D-Var, K can be expressed as $\begin{array}{}\text{(2)}& \mathbf{K}={\mathbf{BG}}^{\mathrm{T}}\left({\mathbf{GBG}}^{\mathrm{T}}+\mathbf{R}{\right)}^{-\mathrm{1}},\end{array}$ where B is the background covariance, R is the observation error covariance, and G performs the mapping from model space to observation space. For time-dependent methods (4D-Var and EnKF), observations are assimilated over a time window respecting the dynamics of the model. The observation operator H samples the nonlinear forecast model X [f] at the observation locations in space and time over an assimilation cycle time interval. In 4D-Var, the background error covariance matrix B is typically assumed to be unchanging in time, so there is no explicit flow dependence of the B. Flow dependence is implicit via the terms BG^T and GBG^T in Eq. (2), since G is the operator that maps the tangent linear model solution to the observation points and G^T is the adjoint ocean model forced at observation points (Moore et al., 2011c, 2020). In EnKF, the background error covariance matrix and its evolution in time is estimated from an ensemble of nonlinear model solutions (Houtekamer and Zhang, 2016). For 3D-Var and EnOI, observations are all centred at a single time and, rather than using the model physics to constrain the model versus observation error, time-invariant covariances are prescribed. Ensemble methods (which include the time-dependent EnKF and the time-independent EnOI) use an ensemble of model anomalies to estimate the background error covariances. The EnKF allows for the time-varying statistics by using a fixed number of nonlinear model members (ensembles) to provide a statistical representation of K. The ensembles are generated for every assimilation period so as to capture the state-dependent “errors of the day”. For EnOI, the ensemble of model anomalies is generated from a long non-assimilating model run. This makes the assumption that the background error covariances are not state-dependent and are well represented by a stationary or seasonally varying ensemble. This method is considerably less expensive than the time-dependent EnKF or 4D-Var methods as, once the stationary ensemble is generated, EnOI requires only a single integration of the nonlinear model to generate a background state and only a single solution of the analysis equations to update the background. In contrast, to generate an analysis field using EnKF, the forward nonlinear model must be integrated m times (where m is the number of ensemble members) to represent the time-varying background error covariances and a background state (often based on the ensemble mean). All ensemble members are then updated, requiring m solutions of the analysis equations. Therefore EnOI is m times less expensive than EnKF. A challenge of ensemble methods is to determine the sufficient number of ensemble members to capture the entirety of the state space, and techniques such as localisation and inflation are used to ensure unrealistic covariances are not applied (Houtekamer and Zhang, 2016). Specifically, localisation is used for three reasons: it reduces the fictitious large covariances at large distance due to sampling error; it improves the rank of the matrix inversion; and, with the use of a parametric form to taper to zero over the localisation distance, the inversions become perfectly parallel, improving computational efficiency (Gaspari and Cohn, 1999). 
Inflation is only applied to EnKF, not EnOI, with inflation of 5% being typical. The localisation and inflation techniques however remove some dynamical consistency from the solution. Recent work by the Australian BOM uses a hybrid ensemble transform Kalman filter (Sakov and Oke, 2008) based on 48 dynamic and 96 stationary ensemble members (Brassington et al., 2023). With EnOI, there is less constraint on the number of ensemble members, as the ensembles are only performed once to generate the stationary or seasonally varying For EnOI, there is no time dependence in K (Eq. 2). The mapping from model space to observation space performed by G is time-independent; all observations are co-located at a single time, and the analysis equation (Eq. 1) is considered only at that time. The background error covariance matrix is estimated from a static ensemble of model state anomalies and is given by $\begin{array}{}\text{(3)}& \mathbf{B}=\frac{\mathrm{1}}{m-\mathrm{1}}{\mathbf{AA}}^{\mathrm{T}},\end{array}$ where A is the matrix of background ensemble anomalies, and m is the ensemble size. In the EnOI system used in this study, we use a stationary ensemble to represent the intraseasonal model anomalies. Each member is calculated as a difference between a 2-week model average and a 2d average, centred at the same time. This is repeated every 30d to ensure the anomalies are independent, generating 266 ensemble members. The DA system is run with a 1d cycle and centred observation window, so an analysis is generated every day. For SSH, temperature, and salinity, the observation time is assumed to coincide with the analysis time, and innovations are calculated as the difference between observation and model state at the analysis time. The localisation method applied is based on local analysis (Ott et al., 2004); that is, an analysis of a local region is produced with a local background error covariance matrix that has lower dimension than the full state vector. The local analyses are then used to construct complete model states for advancement to the next forecast time. Performing the data assimilation analysis locally is convenient for parallelising the solver. In addition to this, a polynomial taper function is applied to bring the covariance to exactly zero on a specified length scale (Gaspari and Cohn, 1999). The localisation radius is set to 250km for SSH, temperature, and salinity observations and to 100km for SST observations. The observation errors are set equal to those described in Sect. 2.2 (identical for both EnOI and 4D-Var systems), except for SST for which the error variance is increased by a factor of 2 for the EnOI system to prevent overfitting to SST. The observation impact was moderated with an adaptive quality control procedure via the so-called K factor (Sandery and Sakov, 2017) with the value of K=2. For comparison with the 4D-Var system we perform 5d forecasts based on the EnOI analyses every 4d. Initial conditions for each subsequent 5d forecast are taken from the EnOI analysis. In this paper we focus on the forecast skill between the 4D-Var and EnOI systems (not the analysis skill). The 4D-Var system uses variational calculus to solve for increments in model initial conditions, boundary conditions, and forcing such that the differences between the observations and the new model trajectory are minimised – in a least-squares sense – over a specific assimilation window. 
The goal is for the model to represent all of the observations in time and space using the physics of the model and accounting for the uncertainties in the observations and background model state, producing a description of the ocean state that is dynamically balanced and a complete solution of the nonlinear model equations. This is achieved by minimising an objective cost function, J, that measures normalised deviations of the modelled ocean state (given the increment adjustments to model initial conditions, boundary conditions, and forcing) from the observations as well as from the modelled background state (the model prior). The cost function is a function of the increment vector $\begin{array}{}\text{(4)}& \mathit{\delta }\mathbit{z}=\left(\mathit{\delta }\mathbf{X}\left({t}_{\mathrm{0}}{\right)}^{\mathrm{T}},\mathit{\delta }{\mathbit{f}}^{\mathrm{T}}\left({t}_{\mathrm{1}}\ right),\mathrm{\dots },\mathit{\delta }{\mathbit{f}}^{\mathrm{T}}\left({t}_{n}\right),\mathit{\delta }{\mathbit{b}}^{\mathrm{T}}\left({t}_{\mathrm{1}}\right),\mathrm{\dots },\mathit{\delta }{\mathbit representing the increments to the initial conditions (time t[0]) and the surface forcing and boundary conditions for model times t[1] to t[n]. The cost function can then be written as $\begin{array}{}\text{(5)}& \begin{array}{rl}J\left(\mathit{\delta }\mathbit{z}\right)& =\frac{\mathrm{1}}{\mathrm{2}}\sum _{i=\mathrm{0}}^{n}\left(\mathbf{G}\mathit{\delta }\mathbit{z}-{\mathbit{d}} _{i}{\right)}^{\mathrm{T}}{\mathbf{R}}_{i}^{-\mathrm{1}}\left(\mathbf{G}\mathit{\delta }\mathbit{z}-{\mathbit{d}}_{i}\right)+\frac{\mathrm{1}}{\mathrm{2}}\left(\mathit{\delta }\mathbit{z}{\right)}^{\ mathrm{T}}{\mathbf{B}}^{-\mathrm{1}}\left(\mathit{\delta }\mathbit{z}\right)\\ & ={J}_{o}+{J}_{\mathrm{b}}\end{array},\end{array}$ where $\mathbf{G}={H}_{i}\mathbf{M}\left({t}_{i},{t}_{\mathrm{0}}\right)$, and M(t[i],t[0]) represents the tangent linear version of the nonlinear model equations ℳ, integrated from t[0] to t[i]. The difference between the modelled background state and the observations is represented by the innovation vector, introduced above, given at each time t[i] by ${\mathbit{d}}_{i}={\mathbit{y}}_{i}-{H}_ {i}\left({\mathbf{X}}_{\mathrm{f}}\left({t}_{i}\right)\right)$, where y are the observations and H[i] is the operator that samples the background circulation to observation points in space and time. As such, the Gδz−d[i] term represents the difference between the model and the observations given the increment adjustment integrated through the tangent linear model. R is the observation error covariance matrix, and B is the background error covariance matrix. We seek to minimise the cost function by equating the gradient to zero. The gradient of the cost function is given by $\begin{array}{}\text{(6)}& {\mathrm{abla }}_{\mathit{\delta }z}J=\sum _{i=\mathrm{0}}^{n}{\mathbf{G}}^{\mathrm{T}}{\mathbf{R}}_{i}^{-\mathrm{1}}\left(\mathbf{G}\mathit{\delta }\mathbit{z}-{\mathbit {d}}_{i}\right)+{\mathbf{B}}^{-\mathrm{1}}\left(\mathit{\delta }\mathbit{z}\right),\end{array}$ where G^T encompasses the adjoint of the tangent linear model equations. The desired analysis increment, δz[a], that minimises Eq. (5) corresponds to the solution of equation ∇[δz]J=0 and is given by $\begin{array}{}\text{(7)}& \mathit{\delta }{\mathbit{z}}_{\mathrm{a}}={\mathbf{BG}}^{\mathrm{T}}\left({\mathbf{GBG}}^{\mathrm{T}}+\mathbf{R}{\right)}^{-\mathrm{1}}\mathbit{d}\end{array}$ for the dual form (in observation space). 
In practice, with 4D-Var, subsequent integrations of the adjoint and tangent linear models are performed to solve for an increment vector that minimises (or acceptably reduces) J. This is performed in the inner loops. After the last inner loop, the final increment is applied to the initial conditions and boundary and surface forcing, and the new integration of the nonlinear model is performed. The integration of the nonlinear model given the increment adjustments that were solved for in the inner loops is referred to as the outer loop. The analysis field is given by the final integration of the nonlinear model (the final outer loop), which provides a model state estimate that is constrained to satisfy the nonlinear model equations (strong constraint) and better represent the observations over the assimilation window. The analysis provides an improved estimate of the initial conditions for the next assimilation window. In this study we find that 15 inner loops and a single outer loop give an acceptable reduction in J (rather than a true minimum).

To solve for the nonlinear ocean solution that better represents the observations, we must take into account the uncertainties in the system. As such, the background (prior model) error covariance matrix, B, and the observation error covariance matrix, R, are important scaling factors in the cost function, J (Eq. 5). The background error covariance matrix should represent the expected uncertainties in the model initial conditions and surface and boundary forcings. We estimate B by factorisation, as described in Weaver and Courtier (2001), such that

$$\mathbf{B}=\mathbf{K}_{\mathrm{b}}\,\boldsymbol{\Sigma}\,\boldsymbol{\Lambda}\,L_{v}^{1/2}L_{h}L_{v}^{1/2}\,\boldsymbol{\Lambda}\,\boldsymbol{\Sigma}\,\mathbf{K}_{\mathrm{b}}^{\mathrm{T}},\tag{8}$$

where $\mathbf{K}_{\mathrm{b}}$ are the covariance operators of the balanced dynamics, Σ and Λ are the diagonal matrices of the background error standard deviations and normalisation factors respectively, and $L_{v}$ and $L_{h}$ are the univariate correlations in the vertical and horizontal directions. We prescribe $\mathbf{K}_{\mathrm{b}}=\mathbf{I}$ such that the dynamics are coupled through the use of the tangent linear and adjoint models but not in the statistics of B. The correlation matrices, $L_{v}$ and $L_{h}$, and the normalisation factors, Λ, are computed as solutions to diffusion equations following Weaver and Courtier (2001). The characteristic length scales chosen for $L_{v}$ and $L_{h}$ are assumed to be homogeneous and isotropic (Table 1), and their choice is justified in Kerry et al. (2016). The specification of the observation error covariances is described in Sect. 2.2 above and in more detail in Kerry et al. (2016).
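To make the structure of Eq. (8) concrete, the sketch below builds a simplified univariate background error covariance for a single 1-D vertical column. A repeated 1-2-1 smoothing operator stands in for the diffusion-equation correlation operator of Weaver and Courtier (2001), the prescribed standard deviations and the correlation length are illustrative, only the vertical operator is included, and $\mathbf{K}_{\mathrm{b}}=\mathbf{I}$ as in the text.

```python
import numpy as np

# vertical levels (hypothetical) and prescribed background error standard deviations
nz = 80
z = np.linspace(0.0, 2000.0, nz)                  # depth (m)
sigma = 1.0 * np.exp(-z / 500.0) + 0.05           # larger errors near the surface (illustrative)
Sigma = np.diag(sigma)

def smooth_half(n_pass=40):
    """Crude stand-in for L_v^{1/2}: repeated 1-2-1 smoothing passes with no-flux boundaries."""
    S = 0.5 * np.eye(nz) + 0.25 * np.eye(nz, k=1) + 0.25 * np.eye(nz, k=-1)
    S[0, 0] = S[-1, -1] = 0.75
    return np.linalg.matrix_power(S, n_pass)

L_half = smooth_half()
C_raw = L_half @ L_half.T                         # un-normalised correlation operator

# normalisation factors Lambda so the implied correlations have unit diagonal
Lam = np.diag(1.0 / np.sqrt(np.diag(C_raw)))

# Eq. (8) with K_b = I and a single (vertical) correlation operator
B = Sigma @ Lam @ C_raw @ Lam @ Sigma
print("max |diag(B) - sigma^2| =", np.abs(np.diag(B) - sigma**2).max())   # ~0 by construction
```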
Because we use the linearised model equations, the assimilation window length is limited by the time over which the tangent linear assumption remains reasonable (although longer windows have been shown to produce useful results). For the 4D-Var system presented in this study, we find that a 5d assimilation window is reasonable. We adjust the model initial conditions, boundary conditions, and surface forcing such that the new model solution (the analysis) better represents the observations over the assimilation interval. Open boundary conditions are adjusted every 12h and surface forcing every 3h. A 5d analysis is generated every 4d (that is, there is a 1d overlap between the analyses). Initial conditions for the subsequent 5d forecast are taken from day 4 of the previous analysis.

The ROMS 4D-Var formulation and implementation are well described by Moore et al. (2011d, a, b), and the system has been used successfully in many applications (e.g. Di Lorenzo et al., 2007; Powell et al., 2008; Powell and Moore, 2008; Broquet et al., 2009; Matthews et al., 2012; Zavala-Garay et al., 2012; Janeković et al., 2013; Souza et al., 2014; Kerry et al., 2016; Gwyther et al., 2022; Wilkin et al., 2022). This work adopts the same 4D-Var configuration as described in detail in Kerry et al. (2016).

2.4.3 System comparison

As discussed above, the way by which the observations and the model background are combined to generate the analysis is quite different for the 4D-Var and EnOI methods. Another significant difference is the computational expense. For the 15 inner loops and single outer loop used in this study, the 4D-Var data assimilation process is approximately 50 times more expensive than a single free run, making it 25 times more expensive than the EnOI system (once the stationary ensemble has been generated). This is comparable to the expense of an EnKF using 50 ensemble members. The advantage of EnKF (over 4D-Var) is that the tangent linear and adjoint models are not required, all calculations are performed in nonlinear space, and the ensemble members can be run simultaneously if sufficient computing resources are available. The drawback is underdispersion of the ensemble and the loss of dynamic consistency introduced through localisation and inflation. With a 4D-Var system, the use of the adjoint model can provide useful insight into the sensitivity of the ocean state to prior changes in state variables or forcings (e.g. Powell et al., 2013; Kerry et al., 2022) and the direct quantification of observation impacts (e.g. Powell, 2017; Kerry et al., 2018). Observation impacts can also be computed from ensemble methods (Liu and Kalnay, 2008). Future work aims to compare the EnKF and 4D-Var methods and explore hybrid ensemble–4D-Var methods that capitalise on the advantages of both (i.e. the dynamical interpolation properties of the adjoint used in 4D-Var and the explicit flow-dependent error covariances of the EnKF (Lorenc et al., 2015; Lorenc and Jardak, 2018)). This paper sets a baseline for future work by first comparing the existing and commonly used EnOI method with the 4D-Var method, across a common modelling framework and observational network.

3 System performance: assessing predictive skill

3.1 Assimilated observations

We begin by assessing the performance of the EnOI and 4D-Var systems relative to the observations that the systems assimilate. The 5d model forecast is compared to the observations that become available over those 5d (that is, they have not yet been assimilated) to quantitatively assess the performance of the model forecasts over time. Comparing forecasts against observations provides objective assessment of the system performance. Table 2 presents the mean innovation (mean absolute difference, MAD), innovation bias (mean difference, MD), and number of observations for the 2-year period. Both systems have an identical number of observations. Compared to the EnOI, the 4D-Var improves the SST forecast error from 0.42 to 0.36°C, the SSH forecast error from 10.3 to 8.3cm, in situ temperature from 0.90 to 0.71°C, in situ salinity from 0.079 to 0.056PSU, and SSS from 0.214 to 0.183PSU. Overall, the improvement of the MAD for the 4D-Var over the EnOI is 9%–21%.
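The innovation statistics in Table 2 are simple aggregates of observation-minus-forecast differences; a minimal sketch of how MAD, MD, and RMSD can be computed, using synthetic inputs rather than the system output, is given below.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(20.0, 2.0, size=5000)                    # synthetic observations (e.g. SST, degC)
fcst_at_obs = obs + rng.normal(0.1, 0.4, size=obs.size)   # forecast sampled at observation points

innov = obs - fcst_at_obs                                 # innovations (observation minus forecast)
mad = np.mean(np.abs(innov))                              # mean absolute difference (MAD)
md = np.mean(innov)                                       # mean difference (bias, MD)
rmsd = np.sqrt(np.mean(innov**2))                         # root-mean-square difference (RMSD)

print(f"MAD = {mad:.3f}, MD = {md:.3f}, RMSD = {rmsd:.3f}, n = {innov.size}")
```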
The percentage differences in forecast error between the two systems are less for the surface observations (SLA, SST, and SSS) compared to the in situ observations, indicating that the advantages of 4D-Var extend through the water column. In WBC regions, the parent model displayed MADs between reanalysed and observed SST values on day 1 of each assimilation of 0.2–0.6°C and MADs of 6–12cm for SSH (Chamberlain et al., 2021b). The performance of the two systems relative to SSH, SST, and Argo observations is presented in more detail using the root-mean-square difference (RMSD) between the model forecasts at the observation locations and the observation values. Figure 2a and b show the RMSD between the forecasts (4D-Var and EnOI, respectively) and observations for SSH across the model domain, averaged over the 2-year period. The EnOI forecasts display higher SSH errors across the model domain, with both systems showing higher errors in the eddy-dominated region compared to the rest of the domain. Figure 2c shows that the spatially averaged RMSD between the forecast and the observations is consistently higher for the EnOI forecasts over the 2-year period. As each forecast is initialised from the previous analysis, forecast errors typically increase over the forecast horizon. SSH forecast errors are averaged across the model domain (Fig. 2d) and for the eddy-dominated region (Fig. 2e) for each day of the 5d forecast horizon. With SSH, the forecast errors are consistently lower for the 4D-Var system due to lower errors in the initial conditions, while the rate of error increase is similar between the 4D-Var and EnOI systems. At day 5, the domain-averaged (eddy-dominated region averaged) root-mean-squared (rms) SSH forecast errors are 61% (64%) higher for the EnOI system compared to the 4D-Var system. In a similar manner to the SSH forecast errors in Fig. 2, the forecast errors relative to SST observations are presented in Fig. 3. Both systems display higher errors in the core of the EAC upstream of the typical separation region and in the eddy-dominated region. The EnOI forecasts display higher SST errors across the model domain, with the most pronounced difference in the eddy-dominated region (Fig. 3a, b). The time series of RMSD for EnOI and 4D-Var (Fig. 3c) are highly correlated as the statistics are sensitive to the number of observations and the coverage in the high variability area. While the EnOI analyses provide a slightly improved fit to SST (Fig. 3d, e at day 0), SST forecast errors grow more quickly than in the 4D-Var system and the 4D-Var system outperforms the EnOI system for SST forecasts after 1d. At day 5, the domain-averaged (eddy-dominated region averaged) rms SST forecast errors are 21% (29%) higher for the EnOI system compared to the 4D-Var system. To assess the subsurface predictive skill, we extract the 5d model forecast values at the observation times and locations for all Argo floats that were observed in the region over the forecast window. Binning these observations with depth, we present profiles for temperature and salinity of the mean (Fig. 4a, e), bias (Fig. 4b, f), and the RMSD between the forecasts and the observations for all observations that fall on the first day of the forecasts (Fig. 4c, g) and all observations that fall on day 5 of the forecasts (Fig. 4d, h). The magnitude of the RMSDs can be compared to the root-mean-squared (rms) observation anomaly, which describes the variability of the observations within each depth bin. 
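A sketch of the depth-binned profile statistics used for the Argo comparisons (bias, RMSD, bias-corrected RMSD, and the rms observation anomaly within each depth bin) is given below. The arrays, bin widths, and synthetic profile shapes are illustrative assumptions; in the paper these quantities are computed from co-located forecast and Argo values.

```python
import numpy as np

def profile_stats(depth, obs, model, bin_edges):
    """Depth-binned bias, RMSD, bias-corrected RMSD, and rms observation anomaly."""
    out = []
    for z0, z1 in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (depth >= z0) & (depth < z1)
        if not np.any(sel):
            out.append((np.nan,) * 4)
            continue
        diff = model[sel] - obs[sel]
        bias = diff.mean()
        rmsd = np.sqrt((diff**2).mean())
        rmsd_bc = np.sqrt(max(rmsd**2 - bias**2, 0.0))                 # bias-corrected RMSD
        rms_anom = np.sqrt(((obs[sel] - obs[sel].mean())**2).mean())   # observation variability
        out.append((bias, rmsd, rmsd_bc, rms_anom))
    return np.array(out)

# synthetic example: temperature-like profiles down to 2000 m
rng = np.random.default_rng(3)
depth = rng.uniform(0.0, 2000.0, 20000)
obs = 20.0 * np.exp(-depth / 700.0) + rng.normal(0.0, 0.5, depth.size)
model = obs + rng.normal(0.2, 0.6, depth.size)

stats = profile_stats(depth, obs, model, bin_edges=np.arange(0.0, 2001.0, 100.0))
print(stats[:3])   # bias, RMSD, bias-corrected RMSD, rms anomaly for the top three bins
```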
For in situ temperature, both the 4D-Var and EnOI forecasts display similar skill on the first day of the forecasts (Fig. 4c); however by day 5 the 4D-Var forecasts display lower errors compared to the EnOI forecasts over the upper 600m, with a maximum difference in RMSD (bias-corrected RMSD) of 0.56°C (0.34°C) at 200m (Fig. 4d). For salinity, forecast errors at day 5 are of similar magnitude throughout the water column for the two systems (Fig. 4h). Both systems have rms errors considerably less than the rms observation anomaly. Salinity bias dominates the RMSD deeper than 600m, so bias-corrected RMSD values are less than the total RMSD (Fig. 4g, h).

3.2 Independent observations

As described in Sect. 2.2, a number of observations were withheld from the 4D-Var and EnOI DA systems presented in this paper, allowing the system performances to be assessed against independent observations. In this section, forecasts from the 4D-Var and EnOI systems (which assimilate the traditional suite of observations, TRAD) are compared to the analyses and forecasts produced by assimilating the full suite of observations (FULL). Comparisons are made between the observations and the model solutions extracted at the observation times and locations, and predictive skill is assessed for days 1 to 5 of the forecast horizons (and analysis windows in the case of the FULL analysis). Under the HF radar footprint at 30°S, surface radial velocity observations from two sources are combined to compute surface velocities to about 100km offshore, covering the shelf and shelf slope circulation. This coverage typically includes the EAC as a coherent jet and the intermittent formation of cyclonic frontal eddies inshore of the EAC (Archer et al., 2017; Schaeffer et al., 2017; Kerry et al., 2020a). The complex correlations between the observed and model velocities are presented in Fig. 5. At forecast day 5, the 4D-Var TRAD displays similar predictive skill to the FULL forecasts. The EnOI forecasts are worse than the 4D-Var TRAD across the 5d, showing that the 4D-Var system provides better representation of the circulation under the HF radar footprint in the analyses and forecasts.
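The velocity comparisons in Figs. 5 and 7 use a complex correlation, in which each velocity vector is written as w = u + iv and the model and observed series are correlated; the magnitude measures vector agreement and the phase gives a mean rotation angle. The sketch below shows one common form of this metric (the exact convention used for the figures, e.g. whether means are removed, is an assumption here), applied to synthetic co-located time series.

```python
import numpy as np

def complex_correlation(u_obs, v_obs, u_mod, v_mod):
    """Complex (vector) correlation between observed and modelled velocities; means removed."""
    w_obs = (u_obs - u_obs.mean()) + 1j * (v_obs - v_obs.mean())
    w_mod = (u_mod - u_mod.mean()) + 1j * (v_mod - v_mod.mean())
    num = np.mean(np.conj(w_obs) * w_mod)
    den = np.sqrt(np.mean(np.abs(w_obs) ** 2) * np.mean(np.abs(w_mod) ** 2))
    rho = num / den
    return np.abs(rho), np.degrees(np.angle(rho))   # magnitude and mean veering angle

# synthetic example: the "model" is a rotated, noisy copy of the observed current
rng = np.random.default_rng(4)
t = np.arange(0, 240)                        # hypothetical hourly record
u_obs = 0.5 * np.sin(2 * np.pi * t / 24.0) + 0.1 * rng.standard_normal(t.size)
v_obs = 0.3 * np.cos(2 * np.pi * t / 24.0) + 0.1 * rng.standard_normal(t.size)
theta = np.deg2rad(15.0)
u_mod = u_obs * np.cos(theta) - v_obs * np.sin(theta) + 0.05 * rng.standard_normal(t.size)
v_mod = u_obs * np.sin(theta) + v_obs * np.cos(theta) + 0.05 * rng.standard_normal(t.size)

mag, ang = complex_correlation(u_obs, v_obs, u_mod, v_mod)
print(f"complex correlation magnitude = {mag:.2f}, phase = {ang:.1f} deg")
```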
Glider data over the study period (2012–2013) were predominantly available over the NSW continental shelf in water depths <200m; however, from May–July 2012, several glider missions extended offshore into eddies and sampled down to below 1000m. These glider observations were shown to be particularly impactful in constraining transport and EKE estimates in the FULL simulation (Kerry et al., 2018). These observations represent independent data for the 4D-Var and EnOI TRAD systems, and Fig. 6 shows how the simulations represent temperature and salinity as measured by the gliders. Errors are lowest near the surface compared to over the thermocline region due to the assimilation of SST and SSS data in all three systems (4D-Var TRAD, EnOI TRAD, and 4D-Var FULL). The 4D-Var TRAD has rms forecast errors for temperature of a similar magnitude and depth structure as the rms observation anomalies, and the errors do not considerably change from day 1 to day 5 of the forecast window. The EnOI errors are of similar magnitude to the 4D-Var near the surface (∼1°C), but they are 20% greater between 100–200m for day 1 and 40% greater for that depth range at day 5 (Fig. 6c, d). Temperature bias plays a considerable part in the EnOI RMSD values below 100m, but the bias-corrected RMSD for EnOI still exceeds the bias-corrected RMSD for 4D-Var TRAD at both day 1 and day 5 (Fig. 6c, d). For salinity, the 4D-Var and EnOI display similar forecast errors in the upper 200m. This depth range corresponds to where the many shelf glider observations exist. Below 200m (the off-shelf missions into the Tasman Sea), forecast errors peak at 300m, reaching 0.30 for EnOI at day 5, compared to 0.23 for 4D-Var. Similar to the Argo-observed salinity (Fig. 4f, g, h), salinity bias dominates the errors associated with glider-observed salinity below 500m for 4D-Var TRAD and below 200m for EnOI.

Subsurface velocities are measured by acoustic Doppler current profilers mounted on moorings in the EAC array, the SEQ shelf and slope, and on the NSW shelf (Fig. 1c). In Fig. 7 we present the complex correlation between the modelled and observed velocities for selected moorings extending from 28°S to 34°S. The mooring locations are shown in Fig. 1c, with EAC2 and SEQ400 being in 1500 and 400m water depth at 28°S, CH100 being in 100m water depth at 30°S, and SYD100 being in 100m water depth at 34°S. At EAC2 and SEQ400, the 4D-Var TRAD displays similar predictive skill to the FULL after 5d and considerably outperforms the EnOI system throughout the water column. This indicates the benefit of 4D-Var including the northern boundary conditions in the cost function. On the shelf at 30°S (CH100) and 34°S (SYD100), the EnOI and 4D-Var systems show similar predictive skill. As shown in both Figs. 5 and 7, the 4D-Var FULL complex correlations display a rapid reduction in correlation by day 3–5 of the forecast. As discussed in Siripatana et al. (2020), while the analysis fits the velocity observations along the continental shelf, the forecast model is unable to resolve the complexities of the shelf circulation such as the cyclonic vorticity inshore of the EAC. As such, the forecast skill of the TRAD system is similar to that of the FULL system for 5d forecast horizons.

We have shown that the 4D-Var TRAD system outperforms the EnOI TRAD system at the surface and subsurface when compared against both assimilated and independent observations. Improvements to temperature forecasts with 4D-Var are more pronounced in the subsurface (the upper ∼400m) compared to at the surface (Figs. 4 and 6). We now examine the model forecasts to elucidate the differences between the representation of the ocean state (in model space, rather than observation space) across the two DA systems.

4 Comparisons in model space

4.1 Initial condition increments

The model forecast, $\mathbf{X}_{\mathrm{f}}$, is adjusted by the assimilation of observations (as per Eq. 1) to produce an analysis, $\mathbf{X}_{\mathrm{a}}$. This model state estimate should provide a better representation of the observations and provides updated (improved) initial conditions for the subsequent model forecast. In the 4D-Var system used in this study we perform a 5d forecast and a 5d analysis every 4d, such that the initial conditions for the subsequent forecast are taken from day 4 of the previous analysis. For the EnOI system, an analysis is generated every day. For consistent comparison across the two systems, we take the analysis every 4d as initial conditions and perform a 5d forecast. In both cases there are discontinuities in the ocean state between day 4 of the previous forecast and the beginning of the subsequent forecast (which correspond to concurrent times).
This is illustrated in Fig. 8i, which shows a time series of temperature at the surface at 34°S. Assimilated (SST) and independent (SYD140 mooring near-surface temperature data) observations are shown for reference. The discontinuities between the forecasts are less pronounced for the 4D-Var system compared to the EnOI system. Over the entire 2-year test period, the RMSD between the initial conditions (from the analysis) and the previous forecast field at that time illustrates greater discontinuities for the EnOI system compared to the 4D-Var system for SSH, SST, and subsurface temperature (Fig. 8a–h).

The discontinuities presented here do not exactly correspond to the analysis increments. We have presented the differences in the ocean state between day 4 of the previous (5d) forecast and the beginning of the subsequent forecast (which correspond to concurrent times). For 4D-Var, the ocean state at the beginning of the forecast is taken from the previous cycle analysis, and so the difference presented here represents the difference between the forecast (or the background) at day 4 and the analysis at day 4 (once data assimilation has been performed on that assimilation cycle). This is essentially the “analysis increment at day 4”. However, for a 4D-Var system the analysis increments typically refer to the adjustments to the initial conditions, boundary forcing, and surface forcing that are made to generate the analysis. For EnOI, the analysis increments refer to the difference between the background model and the analysis (both centred on a single time and computed daily in this case). However, here we take the analyses every 4d and perform 5d forecasts, and the differences presented here refer to the difference between day 4 of the forecast and the analysis that provides initial conditions for the subsequent forecast.

With 4D-Var we are able to represent the entirety of the observations collected over a time window (in this case 5d), placing them in dynamical context using the (linearised) model equations. In contrast, EnOI performs discrete minimisations with observations centred on a single time (in this case every day). The estimate of the ocean over the observation window that is created with the 4D-Var assimilation system results in smaller discontinuities between forecast cycles, on average, compared to the EnOI system, as a continuous field evolves by the nonlinear primitive equations as opposed to starting a forecast from a discrete estimate, which can “shock” the system. Our results of the improved predictability achieved by the 4D-Var system support the understanding that a continuous and dynamically balanced analysis field is advantageous to the quality of future predictions.

The modelled velocities are used to compute eddy kinetic energy (EKE) and mean kinetic energy (MKE) over the 2012–2013 simulation period. MKE is given by $\mathrm{MKE}=\frac{1}{2}\left(\overline{U}^{2}+\overline{V}^{2}\right)$, where $\overline{U}$ and $\overline{V}$ are the time mean velocity components, and the EKE is given by $\mathrm{EKE}=\frac{1}{2}\left(U'^{2}+V'^{2}\right)$, where $U'$ and $V'$ are the velocity anomalies. The MKE describes the energy associated with the mean currents, and the EKE describes the energy associated with the perturbations from the mean. Figure 9 shows the MKE and EKE averaged over the upper 400m and from 400–1200m.
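A minimal sketch of the MKE and EKE calculation from a velocity time series is given below; the fields here are synthetic, whereas in the paper they are the model forecast velocities, subsequently averaged over the stated depth ranges.

```python
import numpy as np

rng = np.random.default_rng(5)
nt, ny, nx = 730, 50, 40                              # daily fields for two years on a small grid (illustrative)
U = 0.3 + 0.2 * rng.standard_normal((nt, ny, nx))     # zonal velocity (m/s)
V = -0.1 + 0.2 * rng.standard_normal((nt, ny, nx))    # meridional velocity (m/s)

U_mean, V_mean = U.mean(axis=0), V.mean(axis=0)       # time means
U_prime, V_prime = U - U_mean, V - V_mean             # anomalies

MKE = 0.5 * (U_mean**2 + V_mean**2)                   # mean kinetic energy (per unit mass)
EKE = 0.5 * (U_prime**2 + V_prime**2).mean(axis=0)    # eddy kinetic energy (per unit mass)

print("domain-mean MKE:", MKE.mean(), " domain-mean EKE:", EKE.mean())
```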
Comparisons of MKE above 400m show that the EAC core is narrower and more confined to the slope in the 4D-Var system, while MKE for the EnOI system is more spread out and with higher MKE directly over the continental shelf (Fig. 9a, e, i). This difference is despite the identical SSH observations being assimilated, noting that SSH observations in water depth <1000m are not assimilated, and the identical forward numerical model. In the 4D-Var simulation, the MKE is greater below 400m than in the EnOI simulation downstream of 27.5°S to the typical EAC separation zone (Fig. 9b, f, j). This is consistent with Kerry and Roughan (2020a), who use a long-term integration of the free-running simulation to describe a downstream deepening of the EAC before separation. The spatial structure of the EKE is similar across the two systems. Above 400m, the EnOI system has elevated EKE over the EAC jet (Fig. 9k, blue regions), while the 4D-Var system has elevated EKE in the eddy-dominated regions (Fig. 9k, red regions). The elevated EKE for the EnOI system (in the more coherent region) relates to the greater discontinuities between the subsequent forecasts, which manifest as greater low-frequency (periods >1d) variability over the 5d forecasts as the 5d model run adjusts to the “shocks” to the system. In contrast, the elevated EKE in the 4D-Var system outside of the coherent jet relates to the greater near-inertial variability. This is explored in Sect. 4.3 and Fig. 12. At depth (400–1200m), EKE is elevated for EnOI compared to 4D-Var in the EAC southern extension.

Eddies can form through barotropic instability in the mean flow or baroclinic instability in the vertical density structure. It is important for a model to correctly represent these instabilities, as they represent the pathways by which eddies are generated. Following Kang and Curchitser (2015), we calculate the barotropic conversion rate (KmKe) as

$$\mathrm{KmKe}=\rho_{0}\left[\overline{U'U'}\,\frac{\partial\overline{U}}{\partial x}+\overline{U'V'}\,\frac{\partial\overline{U}}{\partial y}+\overline{V'U'}\,\frac{\partial\overline{V}}{\partial x}+\overline{V'V'}\,\frac{\partial\overline{V}}{\partial y}\right],\tag{9}$$

where $\rho_{0}=1025$ kg m$^{-3}$. The baroclinic conversion rate (PeKe), from eddy potential energy to EKE, is calculated as

$$\mathrm{PeKe}=-g\,\overline{\rho' W'},\tag{10}$$

where the acceleration due to gravity is $g=9.81$ m s$^{-2}$, and $\rho'$ and $W'$ are the density and vertical velocity anomalies. KmKe and PeKe have been previously used to explore eddy generation rates in the EAC (e.g. Li et al., 2021, 2022; Gwyther et al., 2023).
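A sketch of the barotropic (Eq. 9) and baroclinic (Eq. 10) conversion-rate calculation on a regular grid is given below, assuming arrays of velocity, vertical velocity, and density with time as the leading dimension; the grid spacing and field values are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
nt, ny, nx = 365, 40, 30
dx = dy = 5.0e3                                   # grid spacing (m), illustrative
rho0, g = 1025.0, 9.81

U = 0.4 + 0.2 * rng.standard_normal((nt, ny, nx))   # zonal velocity (m/s)
V = 0.1 * rng.standard_normal((nt, ny, nx))         # meridional velocity (m/s)
W = 1e-4 * rng.standard_normal((nt, ny, nx))        # vertical velocity (m/s)
rho = 1025.0 + 0.5 * rng.standard_normal((nt, ny, nx))  # density (kg/m^3)

def anomalies(f):
    """Deviations from the time mean."""
    return f - f.mean(axis=0)

Um, Vm = U.mean(axis=0), V.mean(axis=0)
Up, Vp, Wp, rhop = anomalies(U), anomalies(V), anomalies(W), anomalies(rho)

# mean-flow shears (axis 0 of the horizontal plane is y, axis 1 is x)
dUdy, dUdx = np.gradient(Um, dy, dx)
dVdy, dVdx = np.gradient(Vm, dy, dx)

# Eq. (9): barotropic conversion rate (mean KE -> eddy KE)
KmKe = rho0 * ((Up * Up).mean(axis=0) * dUdx + (Up * Vp).mean(axis=0) * dUdy
               + (Vp * Up).mean(axis=0) * dVdx + (Vp * Vp).mean(axis=0) * dVdy)

# Eq. (10): baroclinic conversion rate (eddy PE -> eddy KE)
PeKe = -g * (rhop * Wp).mean(axis=0)

print("mean KmKe:", KmKe.mean(), " mean PeKe:", PeKe.mean())
```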
Barotropic and baroclinic energy conversions are computed from the model forecast fields and averaged over the 2-year period (Fig. 10). Both the 4D-Var and EnOI systems show similar magnitude and overall spatial structure of the barotropic and baroclinic energy conversions, as well as similar partitioning between barotropic and baroclinic instabilities. The similarities are likely due to the common model and atmospheric forcing. The barotropic conversion (compare Fig. 10a, c) represents instabilities in the depth-mean flow, which 4D-Var and EnOI represent similarly. The baroclinic conversion (compare Fig. 10b, d) is also similar between the DA configurations in overall spatial structure and the zonally integrated magnitudes (Fig. 10e), although the EnOI baroclinic conversion rate contains more high-wavenumber spatial patterns, which likely relate to unbalanced adjustments upon assimilation. This is further explored in Fig. 14 and the associated discussion.

4.3 Temporal and spatial scales of variability

When observations are assimilated the goal is to provide an improved fit to the observations while retaining a dynamically consistent ocean state that can be used as initial conditions for the subsequent forecast. The background numerical model produces an estimate of the ocean state whose frequency and wavenumber spectra are limited by the resolution of the model and the processes resolved. If the observations sample time and space scales that cannot be resolved by the model, it is standard DA practice to either remove these scales of variability from the observations or account for them in the observation error terms (e.g. Kerry and Powell, 2022). If the model background is deficient at some space scale and/or timescale (which it is able to resolve), then these may be corrected by DA so that the analyses and forecasts are better. However, if the assimilation process introduces energy at different, non-physical scales, this may negatively impact the forecast skill. By presenting the temporal and spatial scales of variability of the forecast ocean state, we can understand how the assimilation has changed the ocean's energy distribution and understand the differences in error growth across the two DA systems.

The subsurface structure of the model fields and their variability is shown in Fig. 11. The EnOI system has more temperature variability near the surface (upper 200–500m) compared to 4D-Var. The greater near-surface temperature variability in EnOI compared to 4D-Var is more pronounced in the eddy-dominated region (34°S), where adjustments are greater (Fig. 8d, f), compared to the more coherent, upstream region (28°S). For velocity variability, 4D-Var shows elevated variability almost everywhere except in the upper 250m near the shelf at 34°S. Both data-assimilating configurations show elevated variability in temperature and velocity over the upper ∼1000m compared to the non-data-assimilating simulation (hereafter referred to as the Free-run).

The differences are further illustrated in Figs. 12 and 13, where the frequencies of the variability are revealed. Frequency spectral analysis is first performed for all 5d forecast windows and then averaged (Fig. 12). A 5d window with 4-hourly model output gives a frequency range from 1/(5d) to 1/(8h), with 15 points in frequency space due to the short time series (31 points). Surface velocity (but not temperature) and subsurface temperature and velocity display elevated energy in the 16–24h band for the 4D-Var system compared to EnOI and the Free-run (Fig. 12), corresponding to the near-inertial band. This inertial energy is introduced through the assimilation adjustments which, due to the nature of 4D-Var, must satisfy the model equations. Increased near-inertial variability upon 4D-Var data assimilation was also shown in Matthews et al. (2012) and Kerry and Powell (2022). Matthews et al. (2012) found that the increased inertial energy had minimal impact on the mesoscale circulation.
Using observing system simulation experiments, Kerry and Powell (2022) showed that, while the 4D-Var system displayed elevated near-inertial variability (compared to their free-running truth simulation), near-inertial frequencies did not influence energy at other frequencies, and predictability at both higher frequencies (in their case internal tides) and lower frequencies (associated with the mesoscale circulation) was good. The differences between EnOI and the Free-run and EnOI and 4D-Var (as revealed in Fig. 11) are difficult to decipher from Fig. 12 as they exist at low frequencies (periods greater than 1d). In order to resolve the low frequencies, we concatenate the forecast cycles in time to produce a full 2-year time series. As a longer time series allows a higher resolution in frequency space, Fig. 13 shows a higher frequency resolution compared to Fig. 12, for which the spectra are computed for all 5d periods and averaged. Concatenation of the time series requires removal of the 1d overlap (the last day of each cycle is excluded) such that time is monotonically increasing. Because of the assimilation updates, discontinuities exist between the cycles every 4d (this is not the case for the Free-run). These discontinuities (displayed in Fig. 8 and discussed in Sect. 4.1) manifest as harmonics of the 1/(4d) frequency and are most pronounced for EnOI in temperature at 34°S. For example, the surface temperature spectra for the EnOI system at 34°S show spikes at harmonics of 0.25 (0.5, 0.75, 1.0, 1.25, 1.5, etc. cycles per day). Nevertheless, the spectra are useful in showing the differences in variability across the DA systems and the Free-run, particularly for low frequencies.

The Free-run displays less energy than both DA systems in both temperature and velocity in the eddy-dominated region, consistent with the reduced variability shown in Fig. 11. This relates to less variability at low frequencies (periods greater than 1d) in the Free-run compared to the DA systems and, for the 4D-Var system, less variability at inertial frequencies also. The elevated energy in the EnOI system compared to 4D-Var and the Free-run relates to periods greater than 1d for both temperature and velocity (Fig. 13). Greater variability at low frequencies in the EnOI system compared to 4D-Var exists in both temperature and velocity and is most pronounced in the upper 500m and in the eddy-dominated region (34°S compared to the more coherent region at 28°S, Fig. 13). This increased low-frequency variability in EnOI compared to 4D-Var dominates the total variability (displayed in Fig. 11) for near-surface temperature, but it is masked by greater inertial-period variability in the 4D-Var system for velocity. That is, despite the low-frequency velocity variability being greater in EnOI (Fig. 13), the total velocity variability is greater for 4D-Var (Fig. 11). We find that the greater low-frequency variability for EnOI compared to 4D-Var is associated with greater discontinuities between the subsequent forecasts. The discontinuities also exist in 4D-Var, but they are less pronounced (Fig. 8).

The spatial scales of the forecast ocean state can be represented by wavenumber spectra. Here we present cross-shore wavenumber kinetic energy spectra through sections at 28 and 34°S (Fig. 1a) for days 1 and 5 of the forecasts, for the Free-run and for AVISO gridded geostrophic velocities (Fig. 14).
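A sketch of how a cross-shore wavenumber kinetic energy spectrum can be computed for a single section and time is given below; the detrending, windowing, and normalisation choices (and the synthetic section) are illustrative assumptions, and the paper averages such spectra over all forecast cycles.

```python
import numpy as np

def ke_wavenumber_spectrum(u, v, dx_km):
    """1-D wavenumber kinetic energy spectrum along a cross-shore section."""
    n = u.size
    window = np.hanning(n)
    norm = (window**2).sum()
    spec = np.zeros(n // 2 + 1)
    for comp in (u, v):
        comp = comp - comp.mean()                      # remove the section mean
        F = np.fft.rfft(comp * window)
        spec += 0.5 * np.abs(F)**2 / norm              # KE contribution per component
    k = np.fft.rfftfreq(n, d=dx_km)                    # cycles per km
    return k[1:], spec[1:] * dx_km                     # drop the k = 0 (mean) bin

# synthetic section: 2.5 km spacing, 150 points across the section
rng = np.random.default_rng(7)
dx_km = 2.5
x = np.arange(150) * dx_km
u = 0.5 * np.sin(2 * np.pi * x / 300.0) + 0.1 * rng.standard_normal(x.size)
v = 0.3 * np.sin(2 * np.pi * x / 120.0) + 0.1 * rng.standard_normal(x.size)

k, E = ke_wavenumber_spectrum(u, v, dx_km)
print("peak energy near wavelength (km):", 1.0 / k[np.argmax(E)])
```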
The observational data product used is the AVISO gridded velocities from altimetry and drifters using multiscale interpolation, version 0100 (Ballarotta et al., 2022), with a 1/10° spatial resolution and temporal coverage from 1 July 2016–30 June 2020. Note that the spatial resolution is the same as that of the assimilated SSH observations (Sect. 2.2.1). For the model forecasts, wavenumber kinetic energy spectra are computed for days 1 and 5 of all (186) cycles, and the averages are plotted. For the AVISO observations, wavenumber kinetic energy spectra are computed for every day of the available time period and the average plotted. Model spectra are shown at the surface and at depths of 400 and 1000m; AVISO data provide geostrophic velocities, and the corresponding spectra are plotted on the surface velocity panels.

At the surface all systems, except the EnOI at day 1, display consistent kinetic energy spectra at 28°S. The AVISO velocities show less energy at spatial scales between 15–80km compared to the Free-run, the 4D-Var system across all forecast days, and the EnOI system at day 5. At 34°S, where eddy variability is high, the Free-run underrepresents the kinetic energy across all spatial scales at all depths. At the surface, the 4D-Var system across all forecast days and the EnOI system at day 5 represent the AVISO spectrum well, with the AVISO velocities again showing slightly lower energy at spatial scales between 15–80km. For the first day of the EnOI forecasts (representative of the analyses), there is elevated kinetic energy at finer length scales, and this energy dissipates by day 5 of the forecast. This elevated energy is most pronounced at the surface and near-surface (upper 200m, not shown). Specifically, elevated kinetic energy exists in the EnOI initial states at length scales less than 100km at 28°S and between 20–80km at 34°S. For the 4D-Var system the wavenumber kinetic energy spectra remain relatively unchanged over the forecast window, with the day 1 and day 5 wavenumber spectra tracking closely. Compared to the Free-run, both the 4D-Var and EnOI assimilation systems introduce more kinetic energy across all spatial scales throughout the water column in the eddy-dominated region (illustrated by the sections through 34°S in Fig. 14).

We include the idealised spectral slopes of $k^{-5/3}$ and $k^{-3}$ in Fig. 14 for reference. The wavenumber kinetic energy spectra approximately match the $k^{-5/3}$ slope in the mesoscale range and the $k^{-3}$ slope in the submesoscale range for the 4D-Var ocean state on days 1 and 5 and for the EnOI forecasts on day 5. However, we note that the submesoscale range is only partially resolved by the 2.5–6km resolution model and even less so by the AVISO observations. The $k^{-5/3}$ and $k^{-3}$ slopes have been shown to represent surface quasi-geostrophic and quasi-geostrophic dynamics, respectively; however, realistic simulations show that other slopes are possible (Xu and Fu, 2011). We have shown that energy is elevated for shorter (less than 100km) length scales in the EnOI analyses, and upon integration of the forecast model this energy dissipates to match the energy associated with the 4D-Var system. Wavenumber kinetic energy analysis of the atmosphere by Skamarock (2004) showed the contrary: the increase of energy at small scales upon integration of the forecast model.
They showed that the initial states of high-resolution NWP model forecasts lacked the fine-scale (mesoscale in the case of the atmosphere) energy because “observations to initialise the fine scales are not generally available and data assimilation methods that can use high-resolution observations are not yet mature”. The fine-scale portion of the kinetic energy spectrum was spun up in the forecasts in 6–12h, providing increased value to the NWP forecasts. In our study we observe the introduction of energy at small spatial scales upon DA with EnOI, and this elevated small-scale energy is lost by day 5. This implies that the small-scale energy dissipates over the 5d forecast. The elevated kinetic energy at scales less than 100km is not a physical space scale that is resolved by the observations, as shown by the AVISO kinetic energy spectra, and does not exist in the 4D-Var system (Fig. 14). Rather, it comes about due to EnOI's adjustments upon assimilation. It is likely that the increased error growth (hence poorer forecast skill) for the EnOI system (compared to 4D-Var) in this study relates to these adjustments. The consistency of the wavenumber spectra over the 4D-Var 5d forecast windows likely relates to the constraint that the analysis is a complete solution of the model nonlinear equations, requiring dynamically balanced adjustments.

This study shows in a quantified manner that the smoother and more dynamically balanced fit between the observations and the model's time-evolving flow achieved by the 4D-Var system results in improved predictability against both assimilated and non-assimilated observations. The EnOI system does not produce as tight a fit to the SSH data as the 4D-Var system (although this may be related to tuneable parameters in the DA formulation); however, the SSH error grows at the same rate in the EnOI and 4D-Var forecasts (Fig. 2). The surface expression of the EAC and its associated eddies is associated with the barotropic mode, and our results show that the barotropic energy conversion rates are generally consistent across the two systems (Fig. 10a, c). However, the baroclinic conversion rate displays more small-spatial-scale variability in the EnOI forecasts than in the 4D-Var (Fig. 10b, d), and the EnOI analyses (the forecast initial conditions) display elevated energy at fine (<100km) spatial scales (Fig. 14). This is accompanied by reduced predictive skill for both surface and in situ temperature, in situ salinity, and surface velocities (Figs. 3, 4, 5, 6, 7). For SST (Fig. 3) and temperature in the upper 600m (Fig. 4c, d), the analyses have errors of similar magnitude for the EnOI and 4D-Var systems, but error growth is considerably greater in the EnOI forecasts. Note that the upper 600m is the region of greatest variability in both temperature and salinity (Fig. 4c, d, g, h, blue lines). The improved forecasts of SST and in situ temperature in the upper 600m for 4D-Var after 5d (Figs. 3, 4d) are a demonstration of improved dynamical balance of the model initial conditions. This is evident by the smaller magnitude of the increments for 4D-Var (Fig. 8a, c, e, g) compared to EnOI (Fig. 8b, d, f, h). The bias-corrected salinity errors also show similar errors at forecast day 1 for both systems, with greater error growth in the EnOI system compared to 4D-Var by day 5 (Fig. 4g, h).
Independent surface velocity observations as measured by the high-frequency radar array at 30°S are less well represented by the EnOI system compared to the 4D-Var system from day 1 through to day 5 of the forecasts (Fig. 5). Independent in situ temperature observations from gliders show only slightly lower analysis errors for 4D-Var compared to EnOI, and the subsurface temperature forecasts degrade faster over the 5d window for EnOI compared to 4D-Var (Fig. 6), consistent with the forecast errors associated with assimilated in situ temperature observations (Fig. 4). For salinity, EnOI and 4D-Var perform equally well on the shelf (observations above 200m in Fig. 6g, h are dominated by shelf gliders), but EnOI displays higher errors below 200m by day 5. The 4D-Var system displays improved velocity forecasts compared to the EnOI system for the upstream moorings (EAC2 and SEQ400, Fig. 7), while downstream and on the shelf the forecasts are comparable. This indicates the benefit of 4D-Var including the northern boundary conditions in the cost function.

Generally, we show that the benefits of 4D-Var over EnOI are most pronounced in the (5d) forecasts, rather than in the fit of the analyses to the observations, consistent with the paper “Why does 4D-Var beat 3D-Var?” by Lorenc and Rawlins (2005). The EnOI system displays greater discontinuities between the end of the forecast and the subsequent analysis, particularly for near-surface temperature (about the thermocline), and the discontinuities have greater magnitude in the downstream eddy-dominated region (Fig. 8). These assimilation “shocks” manifest as increased low-frequency variability (periods greater than 1d, Figs. 11 and 13). The 4D-Var system displays elevated energy in the near-inertial frequency band for both temperature and velocity (Figs. 12 and 13). Consistent with Kerry and Powell (2022) and Matthews et al. (2012), the energy at near-inertial frequencies does not appear to affect the mean low-frequency energetics associated with the mesoscale circulation. While the EnOI DA system introduces elevated energy at fine (<100km) spatial scales, 4D-Var maintains the kinetic energy distribution in wavenumber space upon assimilation (Fig. 14).

This study chose to compare two DA methods across a common modelling framework and observational network. The two methods were chosen as EnOI has been widely used by the Australian ocean forecasting community (Oke et al., 2008b, 2010; Chamberlain et al., 2021b), and 4D-Var has been implemented to study predictability and observation impact in the EAC (Kerry et al., 2016, 2018; Siripatana et al., 2020; Gwyther et al., 2022, 2023). It made sense for the two user groups (operational and research) to come together to objectively compare the two methods. Each system was tuned by its developers (Australian Bureau of Meteorology for EnOI and UNSW for 4D-Var). We note that the degree of fit between an analysis and the assimilated observations of a specific DA system is sensitive to the prior choice of various parameters, such as the observation and background error covariances, and that the system performance is influenced by the DA system configuration, such as the size of the ensemble for ensemble methods and the assimilation window length for 4D-Var (Moore et al., 2020; Santana et al., 2023). For example, the EnOI system presented here could be further tuned to provide an improved fit to SSH observations (Fig. 2), and different ensemble sizes could be tested.
For the 4D-Var system, different window lengths could be tested and the sensitivity to changes in B could be studied. However, the goal of this study was not to compare various versions of each DA method. Rather, we compare a single version of the two methods, carefully tuned by each user group, and set a baseline for future comparisons. The focus of this paper is not the fit in the analyses but the rate of forecast error growth and the response of the ocean state to the assimilation methodology. As such, the study's utility and relevance is significant without a large number of comparisons with different prior specified parameters or DA system configurations. The EnOI system is ∼25 times cheaper than the 4D-Var system presented here. It is noted that EnOI has been effective for long-term reanalysis products where analyses were created every day (Oke et al., 2008b; Chamberlain et al., 2021b) and forecasts were not required. With increasing computational capacity and the pursuit of more accurate ocean forecasts, this study's comparison motivates the use of 4D-Var over EnOI for ocean forecasts of the EAC region. This result is likely to be applicable over similar, highly variable oceanic regions such as WBCs. More generally, the comparison advocates for the use of advanced time-dependent DA schemes over time-independent methods. We illustrate how a DA scheme can influence forecast skill, which motivates future development of DA methods. It is noted that Australia's operational ocean model (OceanMAPS) recently transitioned to an EnKF DA method (from EnOI). The new system achieves lower mean error and error variance in WBC extension regions (Chamberlain et al., 2021a; Brassington et al., 2023), with lower increments to SSH and subsurface velocities, and less kinetic energy at depth in the analyses, due to more dynamically balanced adjustments, compared to the EnOI system. Our future work specifically aims to directly address the need to improve predictive skill in WBC regions.

Time-independent schemes (e.g. 3D-Var and EnOI) are useful for intermittent cycling DA at synoptic scales and are capable of resolving slowly evolving flows governed by simple balance relationships. Time-dependent DA methods (e.g. 4D-Var and EnKF) are greatly beneficial for highly intermittent flows with irregularly sampled observations, as the time-variable dynamics of the model are used to evolve the error covariances. Furthermore, these methods allow the misfit to the entirety of the observations over a time interval to be minimised, rather than performing discrete minimisations. The time-evolving state is required to truly exploit many novel observation types that are nonlinearly or indirectly related to the model state. Indeed, the two techniques that are the most promising in NWP and ocean DA are 4D-Var and EnKF (Moore et al., 2019). In recent years it has been recognised that a marriage of 4D-Var and EnKF perhaps represents a more optimal approach since it capitalises on the advantages of both approaches (i.e. the dynamical interpolation properties of the adjoint and the explicit flow-dependent error covariances that capture the “errors of the day”). The relative performance of 4D-Var and EnKF methods in regional ocean models has been assessed by Moore et al. (2020), and the differences are due primarily to the properties of the background error covariances, so it is anticipated that the performance of a system using a hybrid covariance will be superior to either 4D-Var or the EnKF alone.
Such ensemble-variational methods have been studied extensively for atmospheric DA (e.g. Lorenc et al., 2015) with improvements in forecast skill achieved particularly in dynamically active systems (Raynaud et al., 2011; Lorenc and Jardak, 2018). Code and data availability The ROMS model code is available from https://www.myroms.org (last access: 6 March 2024; DOI: https://doi.org/10.5281/zenodo.8294716, Roughan and Kerry, 2023). SEA-COFS model configuration is accessible at https://doi.org/10.26190/5e683944e1369 (Kerry and Roughan, 2020b), https://doi.org/10.26190/5ebe1f389dd87 (Kerry et al., 2020b), and https://doi.org/10.5281/zenodo.8294716 (Roughan and Kerry, 2023). The observations were sourced from the Integrated Marine Observing System (IMOS). IMOS is a national collaborative research infrastructure, supported by the Australian Government (https:// www.imos.org.au, last access: 6 March 2024). Observations are available at https://portal.aodn.org.au/ (last access: 6 March 2024). Argo data were collected and made freely available by the international Argo programme and the national programmes that contribute to it. (http://www.argo.ucsd.edu, last access: 6 March 2024). The Argo programme is part of the Global Ocean Observing System (https://doi.org/10.17882/42182, Notarstefano, 2020). We acknowledge AVISO for the delayed-time SLA data. The Ssalto/Duacs altimeter products were produced and distributed by the Copernicus Marine and Environment Monitoring Service (CMEMS) (https:// marine.copernicus.eu, last access: 6 March 2024). CGK developed the ROMS model configuration of the EAC system, processed the observations, and developed the 4D-Var DA configuration. CGK performed the 5d forecasts, given the EnOI analyses. CGK analysed the results to produce Figs. 1–8 and 11–14. DG produced Figs. 9–10. CGK wrote the manuscript with some original input from AS. AS generated the results in Table 1. We acknowledge Pavel Sakov, who generated the EnOI analyses, given the ROMS model configuration and the processed observations from CGK. MR, SK, GB, and JMACS provided useful guidance and input into the scope of the project and interpretation of results. The contact author has declared that none of the authors has any competing interests. Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. For this research, David Gwyther and Adil Siripatana were partially supported by the Australian Research Council Industry Linkage grant no. LP170100498 to Moninya Roughan, Colette Gabrielle Kerry, and Shane Keating. Prior model development was supported by the Australian Research Council grant nos. DP140102337 and LP160100162. CSIRO Marine and Atmospheric Research and Wealth from Oceans Flagship Program, Hobart, Tasmania, Australia, provided BRAN2020 output for boundary conditions. This research has been supported by the Australian Research Council (grant nos. LP170100498, DP140102337 and LP160100162). This paper was edited by Deepak Subramani and reviewed by two anonymous referees. Andreu-Burillo, I., Brassington, G., Oke, P., and Beggs, H.: Including a new data stream in the BLUElink Ocean Data Assimilation System, Aust. Meteorol. 
Ocean., 59, 77–86, 2010.a Archer, M., Roughan, M., Keating, S., and Schaeffer, A.: On the variability of the East Australian Current: Jet structure, meandering, and influence on shelf circulation, J. Geophys. Res.-Oceans, 122, 8464–8481, 2017.a AVISO: SSALTO/DUACS user handbook: (M)SLA and (M)ADT near-real time and delayed time products. CLS-DOS-NT-06-034, SALP-MU-P-EA-21065-CLS, CNES, 66 pp., http://www.aviso.oceanobs.com/fileadmin/ documents/data/tools/hdbk_duacs.pdf (last access: 6 March 2024), 2015.a Ballarotta, M., Ubelmann, C., Veillard, P., Prandi, P., Etienne, H., Mulet, S., Faugère, Y., Dibarboure, G., Morrow, R., and Picot, N.: Improved global sea surface height and current maps from remote sensing and in situ observations, Earth Syst. Sci. Data, 15, 295–315, https://doi.org/10.5194/essd-15-295-2023, 2023.a Brassington, G. B., Sakov, P., Divakaran, P., Aijaz, S., Sweeney-Van Kinderen, J., Huang, X., and Allen, S.: OceanMAPS v4. 0i: a global eddy resolving EnKF ocean forecasting system, in: OCEANS 2023-Limerick, IEEE, 1–8, https://doi.org/10.1109/OCEANSLimerick52467.2023.10244383, 2023.a, b, c Broquet, G., Edwards, C. A., Moore, A., Powell, B. S., Veneziani, M., and Doyle, J. D.: Application of 4D-Variational data assimilation to the California Current System, Dynam. Atmos. Oceans, 48, 69–92, 2009.a Brousseau, P., Berre, L., Bouttier, F., and Desroziers, G.: Flow-dependent background-error covariances for a convective-scale data assimilation system, Q. J. Roy. Meteor. Soc., 138, 310–322, 2012.a Bull, C. Y. S., Kiss, A. E., Jourdain, N. C., England, M. H., and Van Sebille, E.: Wind forced variability in eddy formation, eddy shedding, and the separation of the East Australian Current., J. Geophys. Res.-Oceans, 122, 9980–9998, 2017.a Cetina Heredia, P., Roughan, M., Van Sebille, E., and Coleman, M.: Long-term trends in the East Australian Current separation latitude and eddy driven transport, J. Geophys. Res., 119, 4351–4366, https://doi.org/10.1002/2014JC010071, 2014.a, b Chamberlain, M., Oke, P., Brassington, G., Sandery, P., Divakaran, P., and Fiedler, R.: Multiscale data assimilation in the Bluelink ocean reanalysis (BRAN), Ocean Model., 166, 101849, https:// doi.org/10.1016/j.ocemod.2021.101849, 2021a.a Chamberlain, M. A., Oke, P. R., Fiedler, R. A. S., Beggs, H. M., Brassington, G. B., and Divakaran, P.: Next generation of Bluelink ocean reanalysis with multiscale data assimilation: BRAN2020, Earth Syst. Sci. Data, 13, 5663–5688, https://doi.org/10.5194/essd-13-5663-2021, 2021b.a, b, c De Souza, J. M. A. C., Powell, B., Castillo-Trujillo, A. C., and Flament, P.: The Vorticity Balance of the Ocean Surface in Hawaii from a Regional Reanalysis, J. Phys. Oceanogr., 45, 424–440, 2015.a Di Lorenzo, E., Moore, A. M., Arango, H. G., Cornuelle, B. D., Miller, A. J., Powell, B. S., Chua, B. S., and Bennett, A. F.: Weak and Strong Constraint Data Assimilation in the inverse Regional Ocean Modelling System (ROMS): development and application for a baroclinic coastal upwelling system, Ocean Model., 16, 160–187, 2007.a Donlon, C., Minnett, P., Gentemann, C., Nightingale, T., Barton, I., Ward, B., and Murray, M.: Toward improved validation of satellite sea surface skin temperature measurements for climate research, J. Climate, 15, 353–369, 2002.a Edwards, C. A., Moore, A. M., Hoteit, I., and Cornuelle, B. D.: Regional ocean data assimilation, Annu. Rev. Mar. 
Sci., 7, 21–42, 2015.a Evensen, G.: Sequential data assimilation for nonlinear dynamics: the ensemble Kalman filter, in: Ocean Forecasting: Conceptual basis and applications, 97–116, Springer, https://doi.org/10.1007/ 978-3-662-22648-3_6, 2002.a Fairall, C. W., Bradley, E. F., Rogers, D. P., Edson, J. B., and Young, G. S.: Bulk parameterization of air-sea fluxes for tropical ocean-global atmosphere Coupled-Ocean Atmosphere Response Experiment, J. Geophys. Res., 101, 3747–3764, 1996.a Feron, R. C. V.: The southern ocean Western Boundary Currents: Comparison of fine resolution Antarctic model results with Geosat altimeter data, J. Geophys. Res., 100, 4959–4975, 1995.a Gaspari, G. and Cohn, S. E.: Construction of correlation functions in two and three dimensions, Q. J. Roy. Meteor. Soc., 125, 723–757, 1999.a, b Gwyther, D. E., Kerry, C., Roughan, M., and Keating, S. R.: Observing system simulation experiments reveal that subsurface temperature observations improve estimates of circulation and heat content in a dynamic western boundary current, Geosci. Model Dev., 15, 6541–6565, https://doi.org/10.5194/gmd-15-6541-2022, 2022.a, b Gwyther, D. E., Keating, S. R., Kerry, C., and Roughan, M.: How does 4DVar data assimilation affect the vertical representation of mesoscale eddies? A case study with observing system simulation experiments (OSSEs) using ROMS v3.9, Geosci. Model Dev., 16, 157–178, https://doi.org/10.5194/gmd-16-157-2023, 2023.a, b Haidvogel, D. B., Arango, H. G., Hedstrom, K., Beckmann, A., Malanotte-Rizzoli, P., and Shchepetkin, A. F.: Model evaluation experiments in the North Atlantic Basin: simulations in nonlinear terrain-following coordinates, Dynam. Atmos. Oceans, 32, 239–281, 2000.a Haney, R. L.: On the Pressure Gradient Force over Steep Topography in Sigma Coordinate Ocean Models, J. Phys. Oceanogr., 21, 610–619, 1991.a Houtekamer, P. L. and Zhang, F.: Review of the ensemble Kalman filter for atmospheric data assimilation, Mon. Weather Rev., 144, 4489–4532, 2016.a, b Imawaki, S., Bower, A. S., Beal, L., and Qiu, B.: Chapter 13 – Western Boundary Currents, in: Ocean Circulation and Climate, in: International Geophysics, edited by: Siedler, G., Griffies, S. M., Gould, J., and Church, J. A., vol. 103, 305–338, https://doi.org/10.1016/B978-0-12-391851-2.00013-1, 2013.a Janeković, I., Powell, B. S., Matthews, D., McManus, M. A., and Sevadjian, J.: 4D-Var Data Assimilation in a Nested, Coastal Ocean Model: A Hawaiian Case Study, J. Geophys. Res., 118, 5022–5035, https://doi.org/10.1002/jgrc.20389, 2013.a Kang, D. and Curchitser, E. N.: Energetics of Eddy–Mean Flow Interactions in the Gulf Stream Region, J. Phys. Oceanogr., 45, 1103–1120, https://doi.org/10.1175/jpo-d-14-0200.1, 2015.a Kerry, C. and Roughan, M.: Downstream Evolution of the East Australian Current System: Mean Flow, Seasonal, and Intra-annual Variability, J. Geophys. Res.-Oceans, 125, e2019JC015227, https://doi.org/ 10.1029/2019JC015227, 2020a.a, b, c, d Kerry, C. and Roughan, M.: A high-resolution, 22-year, free-running, hydrodynamic simulation of the EAC System using the ROMS, unsworks [data set], https://doi.org/10.26190/5e683944e1369, 2020b.a Kerry, C., Powell, B., Roughan, M., and Oke, P.: Development and evaluation of a high-resolution reanalysis of the East Australian Current region using the Regional Ocean Modelling System (ROMS 3.4) and Incremental Strong-Constraint 4-Dimensional Variational (IS4D-Var) data assimilation, Geosci. 
NCERT Solutions For Class 7 Maths Chapter 15: Visualising Solid Shapes

Answer:
(i) The given net cannot be folded into a cube, as the folding shown below demonstrates.
(ii) The given net can be folded into a cube, as shown below.
(iii) The given net can be folded into a cube, as shown below.
(iv) The given net can be folded into a cube, as shown below.
(v) The given net cannot be folded into a cube, as the folding shown below demonstrates.
(vi) The given net can be folded into a cube, as shown below.
Concurrency: 2024-2025
Lecturer: Bill Roscoe
Schedule A2 (CS&P) — Computer Science and Philosophy
Schedule B1 (CS&P) — Computer Science and Philosophy
Schedule A2 — Computer Science
Schedule B1 — Computer Science
Schedule A2 (M&CS) — Mathematics and Computer Science
Schedule B1 (M&CS) — Mathematics and Computer Science
Term: Michaelmas Term 2024 (16 lectures)

Computer networks, multiprocessors and parallel algorithms, though radically different, all provide examples of processes acting in parallel to achieve some goal. All benefit from the efficiency of concurrency yet require careful design to ensure that they function correctly. The concurrency course introduces the fundamental concepts of concurrency using the notation of Communicating Sequential Processes (CSP). By introducing communication, parallelism, deadlock, livelock, etc., it shows how CSP represents, and can be used to reason about, concurrent systems. Students are taught how to design communicating processes, how to construct realistic models of real systems, and how to write specifications that can be used to verify the correctness of the system models. One important feature of the module is its use of both algebraic laws and semantic models to reason about reactive and concurrent designs. Another is its use of FDR to animate models and verify that they meet their specifications.

Learning outcomes
At the end of the course the student should:
• understand some of the issues and difficulties involved in concurrency;
• be able to specify and model concurrent systems using CSP;
• be able to reason about CSP models of systems using both algebraic laws and semantic models;
• be able to analyse CSP models of systems using the model checker FDR.

Synopsis
• Processes and observations of processes; point synchronisation, events, alphabets. Sequential processes: prefixing, choice, nondeterminism. Operational semantics; traces; algebraic laws.
• Recursion. Fixed points as a means of explaining recursion; approximation, limits, least fixed points; guardedness and unique fixed points.
• Concurrency. Hiding. Renaming.
• Non-deterministic behaviours, refusals, failures.
• Hiding and divergence, the failures-divergences model.
• Specification and correctness.
• Communication, buffers, sequential composition.

Deterministic processes: traces, operational semantics; prefixing, choice, concurrency and communication. Nondeterminism: failures and divergences; nondeterministic choice, hiding and interleaving. Advanced CSP operators. Refinement, specification and proof. Process algebra: equational and inequational reasoning.

Reading list
Lecture Notes: The lecture notes for this year's course appear online on the course materials page.
Course Text:
• A.W. Roscoe, The Theory and Practice of Concurrency, Chapters 1-7, Prentice-Hall International, 1997; http://www.cs.ox.ac.uk/oucl/work/bill.roscoe/publications/68b.pdf.
• C. A. R. Hoare, Communicating Sequential Processes, Prentice-Hall International, 1985; http://www.usingcsp.com.
• S. A. Schneider, Concurrent and Real-time Systems, Chapters 1-8, Wiley, 2000; http://www.computing.surrey.ac.uk/personal/st/S.Schneider/books/CRTS.pdf.
Background reading:
Overheads: This year's slides will appear on the course materials page over the course of the term.
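To give a concrete flavour of the trace semantics mentioned above, here is a small illustrative sketch in Python (it is not the CSPM notation used with FDR, and the process and event names are invented for the example): a process is represented by its set of finite traces, and trace refinement is then just set inclusion.

# Minimal sketch of CSP trace semantics for STOP, event prefixing and external choice.
# A process is modelled here as its (finite) set of traces, each trace a tuple of events.

def stop():
    """STOP performs no events: its only trace is the empty trace."""
    return {()}

def prefix(event, process):
    """a -> P: either nothing has happened yet, or `event` occurred and then P ran."""
    return {()} | {(event,) + t for t in process}

def external_choice(p, q):
    """P [] Q: the environment may pick either branch, so the trace sets are unioned."""
    return p | q

def trace_refines(spec, impl):
    """spec [T= impl: every trace of the implementation is allowed by the specification."""
    return impl <= spec

SPEC = prefix("coin", external_choice(prefix("tea", stop()), prefix("coffee", stop())))
IMPL = prefix("coin", prefix("tea", stop()))
print(trace_refines(SPEC, IMPL))   # True: IMPL only does what SPEC allows
print(trace_refines(IMPL, SPEC))   # False: SPEC can serve coffee, IMPL cannot

FDR automates checks of this kind (and the finer failures and failures-divergences refinements) on full CSPM models without enumerating trace sets explicitly.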
Link to downloadable CSPM scripts from Roscoe's book: http://www.cs.ox.ac.uk/ucs/CSPM.html
Positional Number Systems The first positional number system in human history was invented by the Babylonians. They used a base 60 system. Remnants of this system can be found in the way that we measure time (60 seconds in a minute, 60 minutes in an hour). Nowadays, we use a base 10 system, the decimal system. With the exception of the Babylonian and Mayan systems (the Mayans used a base of 20), all positional notations in human history were base 10. Probably the reason for this is because humans have 10 fingers. However, base 10 is not the only base that we could use. We could use base 8, or base 12, or base 16, or... Of course, when using different bases, you need to convert from one base to another.
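As a concrete illustration of converting between bases (our own example, not part of the original page), the repeated-division method can be written in a few lines of Python; it handles any base from 2 to 36 by using letters for digit values above 9.

# Convert a non-negative integer to and from an arbitrary base (2 to 36).
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n, base):
    """Return the base-`base` digit string of the non-negative integer n."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, base)
        digits.append(DIGITS[remainder])
    return "".join(reversed(digits))

def from_base(text, base):
    """Parse a digit string written in base `base` back into an integer."""
    value = 0
    for ch in text:
        value = value * base + DIGITS.index(ch.upper())
    return value

print(to_base(255, 16))      # 'FF'  (hexadecimal)
print(to_base(100, 8))       # '144' (octal)
print(from_base("144", 8))   # 100

A pure base-60 system like the Babylonians' needs 60 distinct digit symbols, which is why sexagesimal values are usually written as lists of numbers, one per position, rather than as single characters.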
Find point inside polygon

Determining whether a point lies within a polygon is a common problem in computer graphics, geographical information systems (GIS), and computational geometry. In this article, we will explore the concepts and methods used to solve this problem, including the original code that demonstrates the logic behind the solution.

The Problem Scenario

The challenge is to check if a given point lies inside a defined polygon. Below is an example of code that attempts to solve this problem using the ray-casting algorithm.

def is_point_in_polygon(point, polygon):
    x, y = point
    n = len(polygon)
    inside = False
    p1x, p1y = polygon[0]
    for i in range(n + 1):
        # Next vertex, wrapping around to the first one at the end.
        p2x, p2y = polygon[i % n]
        # Does a horizontal ray cast to the right from (x, y) cross the edge (p1, p2)?
        if y > min(p1y, p2y):
            if y <= max(p1y, p2y):
                if x <= max(p1x, p2x):
                    if p1y != p2y:
                        # x-coordinate where the edge reaches the ray's height.
                        xints = (y - p1y) * (p2x - p1x) / (p2y - p1y) + p1x
                        if p1x == p2x or x <= xints:
                            # Each crossing toggles inside/outside.
                            inside = not inside
        p1x, p1y = p2x, p2y
    return inside

Understanding the Code

The is_point_in_polygon function checks if a specified point is inside the given polygon:

1. Initialization: We initialize variables to get the coordinates of the point and the number of vertices in the polygon.
2. Ray-Casting Logic: The algorithm works by drawing an imaginary horizontal ray from the point towards the right. The number of times this ray intersects with the edges of the polygon determines whether the point is inside (odd number of intersections) or outside (even number of intersections).
3. Loop through edges: We loop through each edge of the polygon and check the conditions for an intersection with the horizontal ray.

Practical Example

Consider a polygon defined by the vertices (1, 1), (1, 5), (5, 5), (5, 1), forming a square. If we want to check whether the point (3, 3) is inside the polygon, we call the function as follows:

polygon = [(1, 1), (1, 5), (5, 5), (5, 1)]
point = (3, 3)

if is_point_in_polygon(point, polygon):
    print("The point is inside the polygon.")
else:
    print("The point is outside the polygon.")

Optimization and Alternatives

While the ray-casting method is effective, there are other algorithms such as the winding number algorithm which can also determine point-in-polygon relationships and might be more suitable in specific scenarios. Additionally, if you are dealing with a large number of points or polygons, consider using spatial data structures like Quad-trees or R-trees for faster point location and collision detection.

In this article, we've covered how to determine if a point is inside a polygon using a straightforward approach with the ray-casting algorithm. Understanding this fundamental problem can greatly assist in various applications ranging from game development to geographic computations.

By implementing and adapting these strategies, you can enhance your applications and analyses involving polygonal shapes and points. Remember to always consider the specific needs of your project when selecting an algorithm or method.
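For completeness, the winding-number alternative mentioned above can be sketched as follows. This is an illustrative implementation of our own (it is not from the original article): it sums the signed angles subtended at the point by each edge, giving a total of about ±2π for an interior point and about 0 for an exterior one. Production code would normally rely on a well-tested geometry library such as Shapely instead.

import math

def is_point_in_polygon_winding(point, polygon):
    x, y = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        # Edge endpoints translated so that the query point sits at the origin.
        x1, y1 = polygon[i][0] - x, polygon[i][1] - y
        x2, y2 = polygon[(i + 1) % n][0] - x, polygon[(i + 1) % n][1] - y
        # atan2(cross, dot) is the signed angle between the two direction vectors.
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return abs(total) > math.pi   # about 2*pi when inside, about 0 when outside

print(is_point_in_polygon_winding((3, 3), [(1, 1), (1, 5), (5, 5), (5, 1)]))   # True
print(is_point_in_polygon_winding((9, 9), [(1, 1), (1, 5), (5, 5), (5, 1)]))   # False

Unlike the even-odd rule used by ray casting, the winding number also distinguishes points enclosed more than once by a self-intersecting polygon, at the cost of a few trigonometric calls per edge.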
1 Gradian/Square Month to Revolution/Square Second Gradian/Square Month [grad/month2] Output 1 gradian/square month in degree/square second is equal to 1.3013588424653e-13 1 gradian/square month in degree/square millisecond is equal to 1.3013588424653e-19 1 gradian/square month in degree/square microsecond is equal to 1.3013588424653e-25 1 gradian/square month in degree/square nanosecond is equal to 1.3013588424653e-31 1 gradian/square month in degree/square minute is equal to 4.6848918328749e-10 1 gradian/square month in degree/square hour is equal to 0.000001686561059835 1 gradian/square month in degree/square day is equal to 0.00097145917046494 1 gradian/square month in degree/square week is equal to 0.047601499352782 1 gradian/square month in degree/square month is equal to 0.9 1 gradian/square month in degree/square year is equal to 129.6 1 gradian/square month in radian/square second is equal to 2.2712996550961e-15 1 gradian/square month in radian/square millisecond is equal to 2.2712996550961e-21 1 gradian/square month in radian/square microsecond is equal to 2.2712996550961e-27 1 gradian/square month in radian/square nanosecond is equal to 2.2712996550961e-33 1 gradian/square month in radian/square minute is equal to 8.1766787583459e-12 1 gradian/square month in radian/square hour is equal to 2.9436043530045e-8 1 gradian/square month in radian/square day is equal to 0.000016955161073306 1 gradian/square month in radian/square week is equal to 0.000830802892592 1 gradian/square month in radian/square month is equal to 0.015707963267949 1 gradian/square month in radian/square year is equal to 2.26 1 gradian/square month in gradian/square second is equal to 1.4459542694058e-13 1 gradian/square month in gradian/square millisecond is equal to 1.4459542694058e-19 1 gradian/square month in gradian/square microsecond is equal to 1.4459542694058e-25 1 gradian/square month in gradian/square nanosecond is equal to 1.4459542694058e-31 1 gradian/square month in gradian/square minute is equal to 5.205435369861e-10 1 gradian/square month in gradian/square hour is equal to 0.00000187395673315 1 gradian/square month in gradian/square day is equal to 0.0010793990782944 1 gradian/square month in gradian/square week is equal to 0.052890554836425 1 gradian/square month in gradian/square year is equal to 144 1 gradian/square month in arcmin/square second is equal to 7.8081530547915e-12 1 gradian/square month in arcmin/square millisecond is equal to 7.8081530547915e-18 1 gradian/square month in arcmin/square microsecond is equal to 7.8081530547915e-24 1 gradian/square month in arcmin/square nanosecond is equal to 7.8081530547915e-30 1 gradian/square month in arcmin/square minute is equal to 2.810935099725e-8 1 gradian/square month in arcmin/square hour is equal to 0.0001011936635901 1 gradian/square month in arcmin/square day is equal to 0.058287550227897 1 gradian/square month in arcmin/square week is equal to 2.86 1 gradian/square month in arcmin/square month is equal to 54 1 gradian/square month in arcmin/square year is equal to 7776 1 gradian/square month in arcsec/square second is equal to 4.6848918328749e-10 1 gradian/square month in arcsec/square millisecond is equal to 4.6848918328749e-16 1 gradian/square month in arcsec/square microsecond is equal to 4.6848918328749e-22 1 gradian/square month in arcsec/square nanosecond is equal to 4.6848918328749e-28 1 gradian/square month in arcsec/square minute is equal to 0.000001686561059835 1 gradian/square month in arcsec/square hour is equal to 
0.0060716198154059 1 gradian/square month in arcsec/square day is equal to 3.5 1 gradian/square month in arcsec/square week is equal to 171.37 1 gradian/square month in arcsec/square month is equal to 3240 1 gradian/square month in arcsec/square year is equal to 466560 1 gradian/square month in sign/square second is equal to 4.3378628082175e-15 1 gradian/square month in sign/square millisecond is equal to 4.3378628082175e-21 1 gradian/square month in sign/square microsecond is equal to 4.3378628082175e-27 1 gradian/square month in sign/square nanosecond is equal to 4.3378628082175e-33 1 gradian/square month in sign/square minute is equal to 1.5616306109583e-11 1 gradian/square month in sign/square hour is equal to 5.6218701994499e-8 1 gradian/square month in sign/square day is equal to 0.000032381972348831 1 gradian/square month in sign/square week is equal to 0.0015867166450927 1 gradian/square month in sign/square month is equal to 0.03 1 gradian/square month in sign/square year is equal to 4.32 1 gradian/square month in turn/square second is equal to 3.6148856735146e-16 1 gradian/square month in turn/square millisecond is equal to 3.6148856735146e-22 1 gradian/square month in turn/square microsecond is equal to 3.6148856735146e-28 1 gradian/square month in turn/square nanosecond is equal to 3.6148856735146e-34 1 gradian/square month in turn/square minute is equal to 1.3013588424653e-12 1 gradian/square month in turn/square hour is equal to 4.6848918328749e-9 1 gradian/square month in turn/square day is equal to 0.000002698497695736 1 gradian/square month in turn/square week is equal to 0.00013222638709106 1 gradian/square month in turn/square month is equal to 0.0025 1 gradian/square month in turn/square year is equal to 0.36 1 gradian/square month in circle/square second is equal to 3.6148856735146e-16 1 gradian/square month in circle/square millisecond is equal to 3.6148856735146e-22 1 gradian/square month in circle/square microsecond is equal to 3.6148856735146e-28 1 gradian/square month in circle/square nanosecond is equal to 3.6148856735146e-34 1 gradian/square month in circle/square minute is equal to 1.3013588424653e-12 1 gradian/square month in circle/square hour is equal to 4.6848918328749e-9 1 gradian/square month in circle/square day is equal to 0.000002698497695736 1 gradian/square month in circle/square week is equal to 0.00013222638709106 1 gradian/square month in circle/square month is equal to 0.0025 1 gradian/square month in circle/square year is equal to 0.36 1 gradian/square month in mil/square second is equal to 2.3135268310493e-12 1 gradian/square month in mil/square millisecond is equal to 2.3135268310493e-18 1 gradian/square month in mil/square microsecond is equal to 2.3135268310493e-24 1 gradian/square month in mil/square nanosecond is equal to 2.3135268310493e-30 1 gradian/square month in mil/square minute is equal to 8.3286965917776e-9 1 gradian/square month in mil/square hour is equal to 0.000029983307730399 1 gradian/square month in mil/square day is equal to 0.01727038525271 1 gradian/square month in mil/square week is equal to 0.84624887738279 1 gradian/square month in mil/square month is equal to 16 1 gradian/square month in mil/square year is equal to 2304 1 gradian/square month in revolution/square second is equal to 3.6148856735146e-16 1 gradian/square month in revolution/square millisecond is equal to 3.6148856735146e-22 1 gradian/square month in revolution/square microsecond is equal to 3.6148856735146e-28 1 gradian/square month in revolution/square 
nanosecond is equal to 3.6148856735146e-34 1 gradian/square month in revolution/square minute is equal to 1.3013588424653e-12 1 gradian/square month in revolution/square hour is equal to 4.6848918328749e-9 1 gradian/square month in revolution/square day is equal to 0.000002698497695736 1 gradian/square month in revolution/square week is equal to 0.00013222638709106 1 gradian/square month in revolution/square month is equal to 0.0025 1 gradian/square month in revolution/square year is equal to 0.36
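The values above follow from two facts: one revolution is 400 gradians, and the "month" used by the converter is evidently the mean Gregorian month of 30.4375 days (2,629,800 seconds), since that choice reproduces the quoted figures. A short Python sketch of our own illustrates the headline conversion:

# Derive the gradian/month^2 -> revolution/second^2 factor used in the table,
# assuming a mean month of 30.4375 days (an assumption that matches the table).
GRADIANS_PER_REVOLUTION = 400
SECONDS_PER_MONTH = 30.4375 * 24 * 3600        # 2,629,800 s

def grad_per_month2_to_rev_per_s2(value):
    """Convert an angular acceleration from gradian/month^2 to revolution/second^2."""
    return value / GRADIANS_PER_REVOLUTION / SECONDS_PER_MONTH ** 2

print(grad_per_month2_to_rev_per_s2(1))        # ~3.6149e-16, matching the table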
Comparing addition or subtraction statements involving decimals and fractions

When we compare statements, we are thinking of whether one part of our statement is greater than ($>$), equal to ($=$), or less than ($<$) the other part of our statement. We might have only fractions in our statement, or only decimals, but other times we might have a mixture of fractions and decimals to work with.

Before we compare

If we are comparing one fraction to another fraction, we may notice that our fractions have different denominators. That means we may need an extra step, so that they are expressed with the same denominator. The fractions are still equivalent, and have the same value, but are expressed differently.

If we are comparing one decimal to another decimal, we might also need to rename one of them. This allows us to think about the value of our numbers, so we can compare tenths with tenths, or hundredths with hundredths, for example.

If we are comparing a decimal and a fraction, it may be useful to convert the fraction to a decimal, or the decimal to a fraction. Let's look at how to do this now.

Comparing values with whole numbers and decimals or fractions

With those tools under our belt, we can now look at comparing numbers with a mixture of fractions and decimals. Video 2 works through some examples of how to use those tools.

Making statements true with equality and inequality symbols

You may have already looked at how to make statements true with whole numbers. The process for doing this with decimals and fractions is just the same. Let's look at how to do it now in this video.

When a decimal has a zero at the end, to the right of the decimal point, it doesn't change the value of the number.

Worked Examples

Write the decimal $0.3$ as a fraction.

Choose the larger decimal.

Write $\frac{49}{10}$ as a decimal.
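As an aside (our own illustration, not part of the lesson), Python's fractions module performs these conversions and comparisons exactly, which is a handy way to check answers to questions like the worked examples above:

from fractions import Fraction

# The decimal 0.3 written as a fraction (Fraction parses decimal strings exactly).
print(Fraction("0.3"))                      # 3/10

# 49/10 written as a decimal.
print(float(Fraction(49, 10)))              # 4.9

# Comparing a fraction with a decimal by converting both to fractions.
print(Fraction(3, 4) > Fraction("0.7"))     # True, because 3/4 = 0.75

# Comparing two fractions by (implicitly) putting them over a common denominator.
print(Fraction(2, 3) < Fraction(3, 4))      # True, because 8/12 < 9/12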
Cantor Type Basic Sets of Surface $A$-endomorphisms
Received 30 July 2021; accepted 24 August 2021
2021, Vol. 17, no. 3, pp. 335-345
Author(s): Grines V. Z., Zhuzhoma E. V.

The paper is devoted to an investigation of the genus of an orientable closed surface $M^2$ which admits $A$-endomorphisms whose nonwandering set contains a one-dimensional strictly invariant contracting repeller $\Lambda_r$ with a uniquely defined unstable bundle and with an admissible boundary of finite type. First, we prove that, if $M^2$ is a torus or a sphere, then $M^2$ admits such an endomorphism. We also show that, if $\Omega$ is a basic set with a uniquely defined unstable bundle of the endomorphism $f\colon M^2\to M^2$ of a closed orientable surface $M^2$ and $f$ is not a diffeomorphism, then $\Omega$ cannot be a Cantor type expanding attractor. At last, we prove that, if $f\colon M^2\to M^2$ is an $A$-endomorphism whose nonwandering set consists of a finite number of isolated periodic sink orbits and a one-dimensional strictly invariant contracting repeller of Cantor type $\Omega_r$ with a uniquely defined unstable bundle and such that the lamination consisting of stable manifolds of $\Omega_r$ is regular, then $M^2$ is a two-dimensional torus $\mathbb{T}^2$ or a two-dimensional sphere $\mathbb{S}^2$.
Rules question: what move for restraining someone?

I recently started running a game of AW, and we had a situation come up which was a bit of a head-scratcher (surprisingly; normally we don't struggle with this kind of thing, since we've all played a bunch of AW). I'd like to hear some opinions on what might work well, and what you would do with your home group.

The situation is as follows: A violent individual is trying to catch a thief, who is hiding inside a house. The PC thinks it's a case of mistaken identity: the thief they're looking for is someone else! The person inside is someone they don't want to get hurt.

Now, this violent individual - let's say Dremmer - is in a rage, and doesn't care about the PC - he just wants to get in there and get at the person inside, who he thinks stole from him.

The PC is standing behind the violent individual at the door to the house. She doesn't want to hurt Dremmer, but she does want to protect her friend, inside. She says, "I reach out from behind, and put my pipe (she's carrying one, just in case!) around this guy's neck. I want to keep him from going inside!"

How would you resolve this? The PC wants to stop Dremmer from getting inside, and she's willing to be physical with him, but not willing to hurt him. Dremmer is pissed, but he's not looking to hurt the PC, or even really paying any attention to her - he's focused on his target.

It feels like it's a move of some sort (if not, what MC move would you respond with?), and several options are possible, but the choice isn't obvious. Sure, we can always fall back on "act under fire", but, with the fire being "he gets into the house", that feels a bit... not quite right.

I'd love to hear some takes on this. How would you handle it, at your table?
Linear Equation In Two Variables Solution of TS & AP Board Class 9 Mathematics Exercise 6.3 Question 1. Draw the graph of each of the following linear equations. 2y = -x + 1 For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 2. Draw the graph of each of the following linear equations. –x + y = 6 For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 3. Draw the graph of each of the following linear equations. 3x + 5y = 15 For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 4.Answer: For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 5. Draw the graph of each of the following linear equations. y = x For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 6. Draw the graph of each of the following linear equations. y = 2x For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 7. Draw the graph of each of the following linear equations. y = -2x For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 8. Draw the graph of each of the following linear equations. y = 3x For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- Question 9. Draw the graph of each of the following linear equations. 
y = -3x
For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.)
Table of solutions for the given equation-
Question 10. Answer the following questions related to the above graphs.
i) Are all these equations of the form y = mx, where m is a real number?
ii) Are all these graphs passing through the origin?
iii) What can you conclude about these graphs?
(i) Yes, all these are equations of the form y = mx, where m is a real number and m = 1, 2, -2, 3, -3 respectively in the above equations.
(ii) Yes, all these are graphs passing through the origin, i.e., pt. A in every graph.
(iii) ∴ we can conclude that every graph of type y = mx passes through the origin, where m is a real number.
Question 11. Draw the graph of the equation 2x + 3y = 11. Find from the graph the value of y when x = 1.
For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.)
Table of solutions for the given equation-
From the graph (pt. E) we can see that for x = 1, y = 3.
(Note: We can also put x = 1 in the given equation and find the value of y. We have 2x + 3y = 11; at x = 1, 2 + 3y = 11 ⇒ 3y = 9 ⇒ y = 3.)
Question 12. Draw the graph of the equation y - x = 2. Find from the graph
i) the value of y when x = 4
ii) the value of x when y = -3
For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.)
Table of solutions for the given equation-
i) the value of y when x = 4 is y = 6 (pt. E)
ii) the value of x when y = -3 is x = -5 (pt. F)
Question 13. Draw the graph of the equation 2x + 3y = 12. Find the solutions from the graph
i) Whose y-coordinate is 3
ii) Whose x-coordinate is -3
For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.)
Table of solutions for the given equation-
(i) From the graph, we can see that for y = 3 is pt. E and the corresponding x = 1.5 for that.
(ii) From the graph, we can see that for x = -3 is pt. F and the corresponding y = 6 for that.
Question 14. Draw the graph of each of the equations given below and also find the coordinates of the points where the graph cuts the coordinate axes
6x - 3y = 12
For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.)
Table of solutions for the given equation-
⇒ ∴ the pts. where the graph cuts the co-ordinate axes (i.e., where x = 0 and where y = 0) are pt. A = (2,0) and pt. B = (0,-4)
Question 15.
Draw the graph of each of the equations given below and also find the coordinates of the points where the graph cuts the coordinate axes - x + 4y = 8 For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- ⇒ ∴ the pts. Where graph cuts the co-ordinate axis(i.e., where x = 0 and where y = 0) are pt. A = (-8,0) and pt. B = (0,2). Question 16. Draw the graph of each of the equations given below and also find the coordinates of the points where the graph cuts the coordinate axes 3x + 2y + 6 = 0 For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation- ⇒ ∴ the pts. Where graph cuts the co-ordinate axis(i.e., where x = 0 and where y = 0) are pt. A = (-2,0) and pt. B = (0,-3). Question 17. Rajiya and Preethi two students of Class IX together collected ₹ 1000 for the Prime Minister Relief Fund for victims of natural calamities. Write a linear equation and draw a graph to depict the Given that together Rajiya and Preethi collected Rs.1000. Now, Let the amount collected by Rajiya be Rs. x and by Preethi be Rs. y. ∴ the linear equation will be- ⇒ x + y = 1000 For graph, we’ll first make the table of solutions by putting some random values of x and thereafter we’ll find corresponding values of y and then we’ll plot these points on graph, join them and extend them in straight line to find the graph. (Note: ∵ equation is linear graph will always be straight line.) Table of solutions for the given equation-
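The tables of solutions referred to throughout this exercise do not survive in this text version of the answer key. As an illustration of the same procedure (pick a few x values, compute the corresponding y values, plot the points, join them and extend them in a straight line), here is a small Python sketch for one of the equations above, 3x + 5y = 15. The sample x values and variable names are ours, not part of the textbook solution.

import matplotlib.pyplot as plt

# Equation: 3x + 5y = 15, so y = (15 - 3x) / 5
xs = [-5, 0, 5]                      # a few sample x values
ys = [(15 - 3 * x) / 5 for x in xs]  # corresponding y values

print("Table of solutions (x, y):")
for x, y in zip(xs, ys):
    print(f"  x = {x}, y = {y}")

# Plot the points, join them and extend in a straight line
plt.plot(xs, ys, marker="o")
plt.axhline(0, color="gray")
plt.axvline(0, color="gray")
plt.title("Graph of 3x + 5y = 15")
plt.xlabel("x")
plt.ylabel("y")
plt.show()

The printed table gives the points (-5, 6), (0, 3) and (5, 0), which all lie on the same straight line, as expected for a linear equation.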
{"url":"https://www.tgariea.in/2022/08/linear-equation-in-two-variables.html","timestamp":"2024-11-07T09:14:46Z","content_type":"application/xhtml+xml","content_length":"1049032","record_id":"<urn:uuid:788f89df-18de-446b-acec-d1cfcc004e65>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00350.warc.gz"}
PhD thesis of Samuel F. Rial Lesaga

On the 11th of October 2019, Samuel F. Rial Lesaga defended his PhD thesis: Temporal evolution of MHD wave in solar coronal arcades.

The aim of this thesis is to go a step further in the theoretical modeling of coronal loops. Following that direction, our main goal is to be able to theoretically reproduce part of the observed complexity that these structures display when a sudden release of energy occurs in the solar corona. Until not long ago, the theoretical models of these features were rather simple. In order to have simple solutions, these models use big approximations, such as straight instead of curved structures, one-dimensional or two-dimensional structures, etc. In order to go beyond those simple models, our aim is to increase the complexity of the theoretical models by adding what we think are key ingredients, such as curvature or three-dimensionality. In this work we will adopt the approach of increasing the complexity of the model step by step to create a solid base of knowledge that helps us understand the underlying physics. Therefore we will begin with a well-known two-dimensional problem and then we will allow perturbations to propagate in the third direction as a first step towards three-dimensionality. These are known as 2.5-dimensional models. Afterwards, we will add more ingredients such as a sharp model of a coronal loop, different density profiles, curvature, etc.

The study of coronal loop oscillations can be done from several points of view, but in this thesis we will focus on two of them. The first one is to solve the time-dependent MHD equations by means of a temporal code. In the first two papers, Rial et al. (2010) and Rial et al. (2013), we use this approach. The second approach consists in solving the normal modes of the system. The standard method to do so can in general be a difficult task because a specially designed numerical code is needed. For that reason, another goal of this thesis is to develop a technique that gives us an alternative way to find the normal modes of any system. In Rial et al. (2019) we explain how this technique works as well as its advantages and disadvantages.

The format of this thesis is a compendium of these articles:
• Rial, S., Arregui, I., Oliver, R. and Terradas, J.: 2019, Determining normal mode features from numerical simulations using CEOF analysis: I. Test case using transverse oscillations of a magnetic slab, ApJ 876(1), 86, doi:10.3847/1538-4357/ab1417
• Rial, S., Arregui, I., Terradas, J., Oliver, R. and Ballester, J. L.: 2010, Three-dimensional Propagation of Magnetohydrodynamic Waves in Solar Coronal Arcades, ApJ 713, 651-661, doi:10.1088/
• Rial, S., Arregui, I., Terradas, J., Oliver, R. and Ballester, J. L.: 2013, Wave Leakage and Resonant Absorption in a Loop Embedded in a Coronal Arcade, ApJ 763, 16, doi:10.1088/0004-637X/763/1/

File PhD thesis:
• Title: Temporal evolution of MHD wave in solar coronal arcades
• Author: Samuel F. Rial Lesaga
• Date: 11/10/2019
• PhD program: Física
• Department: Física
• Directors: Dr. Íñigo Arregui Uribe-Echevarría and Dr. Ramón Oliver Herrero

Figure 5 of the thesis. Snapshots of the two-dimensional distribution of the normal and perpendicular velocity components for ky L = 5 (top panels) and ky L = 60 (bottom panels). Some magnetic field lines (black curves) and the edge of the coronal loop (white lines) have been represented. (Animations and color version of this figure are available in the online journal.)
{"url":"https://iac3.uib.es/2019/10/11/phd-thesis-of-samuel-f-rial-lesaga/","timestamp":"2024-11-14T21:22:25Z","content_type":"text/html","content_length":"82783","record_id":"<urn:uuid:5664a649-55ab-4dcc-8f14-11f00dfc793e>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00027.warc.gz"}
CBSE Sample Paper Class 8 Mathematics Annual Examination - CBSE News CBSE Sample Paper Class 8 Mathematics Annual Examination CBSE Sample Paper Class 8 Mathematics Annual Examination Annual Examination Subject- Mathematics Time: 2:30 hour/M.M.-80 Note: All questions are compulsory – Q.1 to 8 – 1 mark, Q.9 to 16 – 2 marks, Q.17 to 24 – 3 marks and Q.25 to 32 – 4 marks each. (Multiple choice questions) 1- 3/5 =? a) 30% b) 40% c) 45% d) 60% 2- Rajan buys a toy for Rs 75 and sells it for Rs 100. His gain percent is – a) 25% b) 20% c) % d) % 3- If 14 kg of pulses cost Rs 882, what is the cost of 22 kg of pulses? a) Rs 1254 b)Rs 1298 c) Rs 1342 d) Rs 1386 4- In which of the following quadrants does the point P(3,6) lie? a) I b) II c) III d) IV 5- The abscissa of a point is its distance from the a) Origin b) X- axis c) Y- axis d) None of these 6- A die is thrown what is the probability of getting 6? a) 1 b) 1/6 d) None of these 7- 8% when expressed as a decimal is a) 08 b) 0.008 c) 8 d) 0.8 8- What percent of 90 is 120 ? a) 75% b) % c) % d) None of these (Solve the following sums) 9- Find the gain or loss percent when —C.P = Rs 620 and P = Rs 713 10- Find the area of a trapezium whose parallel sides are 24 cm and 20 am distance between them is 15 cm. 11- Find the volume of the cylinder whose dimensions are – radius = 7cm and height = 50cm. 12- A coin is tossed. What are all possible outcomes? 13- Convert the ratio 4:5 to percentage. 14- The number of members in 20 families are given- 4,6,5,5,4,6,3,3,5,5,3,5,4,4,6,7,3,5,5,7 Prepare a frequency distribution of the data. 15- Write the formula of area of triangle and area of a trapezium. 16- Fill up- a) Each interior angle of a regular octagon is ______. b) A pentagon has ____ diagonals. 17- If 16% of a number is 72, find the number. 18- The marked price of a water cooler is Rs 4650. The shopkeeper offers an off-season discount of 18% on it. Find its selling price. 19- A truck covers a distance of 510 km in 34 liters of diesel. How much distance would it cover in 20 litres of diesels? 20- If 35 men can reap a field in 8 days, in how many days can 20 men reap the same field? 21- Construct a quadrilateral ABCD in which AB = 4.2cm, BC=6cm, CD=5.2cm, and AC =8cm. 22- Construct a parallelogram PQRS in which QR =6cm, PQ= 4cm, and ⦤PQR= 60^0 23- The following table depicts the maximum temperature on the seven days of a particular week. Study the table and draw a line graph for the same. Day Sun Mon Tues Wed Thurs Fri Sat Maximum tem. (^0C) 25 28 26 32 29 34 31 24- In a single throw of two coins, find the probability of getting a) Both tails b) at least 1 tail c) at the most 1 tail 25- A football team wins 7 games, which is 345% of the total games played. How many games were played in all? 26- The cost price of 12 candles is equal to the selling price 15 candles. Find the loss percent. 27- Find the amount and the compound interest on Rs 2500 for 2 years at 10% perineum, compounded annually. 28- Rajan can do a piece of work in 24 days while Amit can do it in 30 days. In how many days can they complete it, if they work together? 29- A solid rectangular piece of iron measures 1.05m x 70cm x 1.5am. Find the weight of this piece in kg if 1 cm^3 of iron weighs 8 grams. 30- On a graph paper, plot each the following points. 
a) A (4,3) b) B (-2,5) c) C (0,4) d) D (7,0) e) E (-3,-5) f) F (5, -3) g) G (-5,-5) h) H (0,0) 31- There are 900 creatures in a zoo as per list given below – Best animals Other land animals Birds Water animals Reptiles Represent the above data by a pie chart. 32- Find the amount and the compound interest on Rs8000 for 1 year at 10% per annum, compounded half-yearly.
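For reference (this is an illustration added here, not part of the original question paper), the compound-interest formula used in questions 27 and 32 is

A = P(1 + r/100)^n,    C.I. = A - P

Worked for question 27: A = 2500(1 + 10/100)^2 = 2500 x 1.21 = Rs 3025, so C.I. = 3025 - 2500 = Rs 525. For half-yearly compounding, as in question 32, the rate is halved and the number of periods doubled: A = 8000(1 + 5/100)^2 = Rs 8820, so C.I. = Rs 820.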
{"url":"https://cbsenews.in/cbse-sample-paper-class-8-mathematics-annual-examination/","timestamp":"2024-11-02T21:46:48Z","content_type":"text/html","content_length":"49278","record_id":"<urn:uuid:af699f25-c9da-423e-bf86-89187135107d>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00487.warc.gz"}
Ordinary Line

Given an arrangement of n points, a Line containing just two of them is called an ordinary line. Moser (1958) proved that at least 3n/7 lines must be ordinary (Guy 1989, p. 903).

See also General Position, Near-Pencil, Ordinary Point, Special Point, Sylvester Graph

Guy, R. K. "Unsolved Problems Come of Age." Amer. Math. Monthly 96, 903-909, 1989.

© 1996-9 Eric W. Weisstein
{"url":"http://drhuang.com/science/mathematics/math%20word/math/o/o101.htm","timestamp":"2024-11-08T19:21:31Z","content_type":"text/html","content_length":"3278","record_id":"<urn:uuid:f0ee6aa7-8f5f-4dd3-b784-623f0443d0d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00229.warc.gz"}
Success rates of appeals to the Supreme Court by Circuit | R-bloggersSuccess rates of appeals to the Supreme Court by Circuit Success rates of appeals to the Supreme Court by Circuit [This article was first published on Peter's stats stuff - R , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. In the chaos of the last month or so of United States of America governance, one item that grabbed my attention was the claim by President Trump that 80% of appeals decided by the Ninth Circuit Court of Appeal are overturned by the Supreme Court of the United States (SCOTUS): “In fact, we had to go quicker than we thought because of the bad decision we received from a circuit that has been overturned at a record number. I have heard 80 percent, I find that hard to believe, that is just a number I heard, that they are overturned 80 percent of the time. I think that circuit is – that circuit is in chaos and that circuit is frankly in turmoil. But we are appealing that, and we are going further.” (Donald Trump during his 77 minute 16 February 2017 press conference) Put aside whether it is a good thing for the head of state in a press conference to be speculating on things that are “just a number I heard” and making casual judgments about whether a Federal Circuit Court of Appeal is in chaos (would the Queen do this for one of her realms’ courts? Probably not). This particular number sounded bad, and having eighty percent of appeals overturned by the higher appeals court surely would be a sign of chaos. The reality however is that this is eighty percent of the subset of cases from that appeals circuit that are considered by the Supreme Court, not eighty percent of the circuit’s total cases. The potential to mislead is nicely explained by the Snopes fact-checking/legend-busting site about a widely cited blog post arguing the 9th Circuit has known liberal bias and quoting a Congress member describing it as “presumptively reversible”: “So, although correctly worded, the blog post left many readers with the mistaken impression that 80 percent of the Ninth Circuit Court’s decisions were being overturned by SCOTUS. What it actually said was that of the very tiny fraction of decisions by federal courts of appeal that SCOTUS agrees to review each year (0.1%), 80 percent of that small portion of appeals originating with the Ninth Circuit Court were overturned.” Any reasonable view of the performance of one of the Circuit appeals courts should use as its denominator the number of cases decided by the Circuit, not the number of decided cases that are then accepted for consideration by the Supreme Court. As the Trump administration is I suspect currently discovering, just disagreeing with an appeals decision does not mean that there is any prospect of putting together a case worthy of consideration by the Supreme Court. Most of the discussion on the web is based on analysis in Roy E. Hofer’s 2010 paper Supreme Court Reversal Rates: Evaluating the Federal Courts of Appeals, which is a perfectly respectable piece of analysis. His results are presented in tables so to better understand them I tried a visual: I’m afraid it’s pretty ugly; lots of information but too wordy. Basically, in the period considered in Hofer’s analysis, about one in a thousand of the appeals decided by the Ninth Circuit was successfully overturned; a far cry from “presumptively reversible”. 
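To make the arithmetic behind that contrast concrete, here is a small Python check using the Ninth Circuit figures quoted later in this post (175 cases that went to SCOTUS, roughly 80% of them overturned, out of 114,199 appeals decided in 1999-2008). This snippet is an illustration added here, not part of the original post, which uses R throughout.

# Ninth Circuit, 1999-2008 (figures from Hofer's tables as quoted in this post)
total_cases = 114_199     # appeals terminated by the Ninth Circuit
taken_by_scotus = 175     # of those, cases the Supreme Court agreed to hear
overturned = 140          # roughly 80% of the cases SCOTUS heard

print(f"Share of SCOTUS-heard cases overturned: {overturned / taken_by_scotus:.0%}")
print(f"Share of ALL Ninth Circuit cases overturned: {overturned / total_cases:.2%}")

The first number is about 80%, the second about 0.12%: both are true at the same time, which is exactly why quoting only the first one misleads.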
True however that if you get as far as putting together a coherent appeal and the Supreme Court agreeing to consider it, you have a good chance of success at the final hurdle.

Sometimes more basic visualizations are better. So here's a good old-fashioned stacked bar chart: This makes the tiny proportion of cases considered and overturned by the Supreme Court visually obvious. The graphic is scalable, but you will have to zoom in a long way to see the slivers of green or pink that indicate a Supreme Court affirmation or overturn.

The data

Here's how I went about exploring today's data. After some hunting around I found that more up to date (and not particularly interesting for today's purposes) data are available from the Supreme Court's own webpage. More confusion is perpetuated by describing as a "Circuit Scorecard" a table that refers again to the percentage of cases that get to SCOTUS that are overturned, the same false view of the performance of Circuits that started this. As the Supreme Court's data, although commendably open, was not in a particularly convenient structure, I decided to confine myself for today's purposes to just examining Hofer's original data.

Hofer's paper is only available as a PDF, so the first task was to extract his tables into R. This is easy with the amazing tabulizer package, part of the ropensci project. The code snippet below downloads the file and extracts all of the tables in the document into a list called tabs.

library(tabulizer)   # PDF table extraction, part of rOpenSci
library(tidyverse)   # dplyr, tidyr, ggplot2, etc.
library(stringr)
library(forcats)
library(scales)
library(grid)

#-----------------download file and extract the tables---------------------
thefile <- tempfile()
download.file("http://www.americanbar.org/content/dam/aba/migrated/intelprop/magazine/LandslideJan2010_Hofer.authcheckdam.pdf",
              destfile = thefile, mode = "wb")
tabs <- extract_tables(thefile)

The first table in the paper was of the total number of cases decided by each Circuit by year. This is of some interest in itself and I produced this chart while familiarising myself with the data. Here's the code that grooms that data - which effectively forms the correct denominator to use in considering the degree to which appeals end up being overturnable - and produces the chart:

#----------------------prepare data---------------
total_cases <- as.data.frame(tabs[[1]][ -(1:2), ])
names(total_cases) <- c("Court", 1999:2008, "Total")

total_cases <- total_cases %>%
   gather(Year, Cases, -Court) %>%
   mutate(Cases = as.numeric(gsub(",", "", Cases)),
          Court = str_trim(Court))

total_cases %>%
   filter(Year != "Total") %>%
   mutate(Year = as.numeric(Year)) %>%
   ggplot(aes(x = Year, y = Cases)) +
   geom_point() +
   geom_line() +
   facet_wrap(~Court, scales = "free_y", ncol = 3) +
   scale_y_continuous("Total appeals terminated\n", label = comma)

Next task was to extract the third table of the paper, which has the numbers of cases taken to the Supreme Court and what happened to them. This data comes out of the tabulizer process a little messier and needed a bit of hand treatment.
Once stored in object called scotus, I combined this with the total cases denominator and drew the two graphs I presented in this post earlier: scotus <- as.data.frame(tabs[[3]][-(1:3), 1:5]) scotus <- cbind(scotus[ , -5], str_split(scotus[ , 5], " ", simplify = TRUE)[ , 1:2]) names(scotus) <- c("Court", "Reversed", "Vacated", "Affirmed", "Reversed + Vacated", "Appealed") scotus <- scotus %>% mutate(Court = str_trim(Court)) %>% gather(Result, Number, - Court) %>% mutate(Number = as.numeric(as.character(Number))) %>% spread(Result, Number) combined <- total_cases %>% filter(Year == "Total" & Court != "Annual Totals") %>% select(Court, Cases) %>% left_join(scotus, by = "Court") %>% mutate(rv_prop_cases = `Reversed + Vacated` / Cases, rv_prop_appealed = `Reversed + Vacated` / Appealed) %>% arrange(desc(rv_prop_cases)) %>% mutate(Court = factor(Court, levels = Court)) combined %>% select(Court, Cases, rv_prop_cases, rv_prop_appealed, Appealed) %>% gather(Denominator, Proportion, -Court, -Cases, -Appealed) %>% mutate(Denominator = ifelse(grepl("appealed", Denominator), "Percentage of total appeals", "Percentage of total cases"), Denominator_value = ifelse(grepl("total cases", Denominator), Cases, Appealed)) %>% ggplot(aes(x = Proportion, y = Court, size = Cases, label = Denominator_value)) + # force x axis to go to zero by drawing an invisible line: geom_vline(xintercept = 0, alpha = 0) + facet_wrap(~Denominator, scales = "free_x") + geom_text(colour = "steelblue") + scale_x_continuous(label = percent) + scale_size_area("Total original cases:", label = comma) + theme(legend.position = "none") + ggtitle("Federal courts of appeals cases reversed or vacated by the Supreme Court 1999 - 2008", "From the 9th Circuit, 80% of 175 cases that went to SCOTUS were overturned, but that was only 0.12% of the 114,199 total cases originally decided by that circuit in the period.") + labs(caption = "Data from http://www.americanbar.org/content/dam/aba/migrated/intelprop/magazine/LandslideJan2010_Hofer.authcheckdam.pdf Analysis at http://ellisp.github.io", y = "Appeals circuit\n\n", x = "Horizontal position indicates the proportion of cases overturned by the Supreme Court. The printed number is the denominator - total cases that could have been overturned at that stage. 
The size is proportionate to the total cases decided by each circuit.") grid.text(0.75, 0.26, label = "But only a tiny percentage of the number\nof cases originally decided by the circuit judges.", gp = gpar(cex = 0.75, family = "myfont", col = "grey50")) grid.text(0.35, 0.3, label = "A high proportion of the small number of\ncases that are taken to the Supreme Court\ndo end up reversed or vacated.", gp = gpar(cex = 0.75, family = "myfont", col = "grey50")) #--------------barchart version------------ combined %>% mutate(`Not considered by SCOTUS` = Cases - Appealed) %>% select(Court, `Not considered by SCOTUS`, Affirmed, `Reversed + Vacated`) %>% gather(Result, Number, -Court) %>% mutate(Result = ifelse(Result == "Reversed + Vacated", "Overturned", Result), Result = factor(Result, levels = c("Overturned", "Affirmed", "Not considered by SCOTUS"))) %>% ggplot(aes(x = Court, weight = Number, fill = Result)) + geom_bar(position = "stack") + coord_flip() + scale_y_continuous(label = comma) + scale_fill_discrete(guide = guide_legend(reverse = TRUE)) + ggtitle("Federal courts of appeals cases reversed or vacated by the Supreme Court 1999 - 2008") + labs(caption = "Data from http://www.americanbar.org/content/dam/aba/migrated/intelprop/magazine/LandslideJan2010_Hofer.authcheckdam.pdf Analysis at http://ellisp.github.io", x = "Appeals circuit", y = "Number of appeal cases") By the way, you might have asked (as I did) what do “vacated” and “reversed” mean? As far as I can tell, they have a similar effect in overturning the original decision. According to Josh Blackman, “Reverse is when things are really, really wrong. Vacate is when it is somewhat wrong.” Inferential analysis I was interested in whether the different proportions overturned of Circuits’ total cases were statistically significant evidence of different rates. I expected that they would be; not so much because of differing Circuit culture and competence (although these would be expected to vary at least somewhat), as that different types of cases go to different Circuits. I would summarise the conclusion as significant evidence of a difference (p value from the ANOVA test below is basically zero, thanks to the large sample size of cases), but not a particularly material difference. I fitted a generalized linear model with a binomial response (also known as logistic regression) to test this. Here are the confidence intervals for the effect on total overturnability (to coin a phrase) of the different Circuits, relative to the Eighth Circuit which I chose as a reference point because of its near-average performance. 
And here’s the code for the model fitting and presentation: model_data <- combined %>% mutate(Court = relevel(factor(Court), ref = "Eighth Circuit")) #----------------proportion of original------------ model1 <- glm(rv_prop_cases ~ Court, family = "binomial", weights = Cases, data = model_data) anova(model1, test = "Chi") res <- confint(model1)[-1, ] res_df <- as.data.frame(res) %>% mutate(Court = gsub("Court", "", rownames(res)), center = (`2.5 %` + `97.5 %`) / 2) %>% mutate(Court = fct_reorder(Court, center)) ggplot(res_df, aes(y = Court, yend = Court, x = `2.5 %`, xend = `97.5 %`)) + geom_segment() + geom_point(aes(x = center)) + labs(x = "95% confidence interval for increase in logarithm of odds of a completed case being appealed successfully by Supreme Court, relative to the Eighth Circuit", y = "", caption = "Data from http://www.americanbar.org/content/dam/aba/migrated/intelprop/magazine/LandslideJan2010_Hofer.authcheckdam.pdf Analysis at http://ellisp.github.io")
{"url":"https://www.r-bloggers.com/2017/02/success-rates-of-appeals-to-the-supreme-court-by-circuit/","timestamp":"2024-11-08T04:08:07Z","content_type":"text/html","content_length":"111958","record_id":"<urn:uuid:d71d1f12-d3d1-43aa-bef1-a75faf6ad986>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00420.warc.gz"}
What is the code to run an ordinal logistic regression in SAS Version: 9.4 (onDemand for Academics) Hi! I am a thesis student and I do not have a strong background in statistics. I need to run an ordinal logistic regression and I was not taught how to run this in my courses. The best way I can describe the data is that I have four independent variables that are continuous and one dependent variables that is ordinal. I believe I am on SAS version 9.4 (I was not sure how to find it so I looked up code that would tell me the version). Any help is greatly appreciated! 10-10-2023 03:26 PM
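Not from the original thread, but as a starting point: in SAS 9.4, PROC LOGISTIC fits a cumulative-logit (proportional odds) model by default whenever the response variable has more than two levels, which is the standard ordinal logistic regression. A minimal sketch is below; the dataset and variable names (mydata, outcome, x1-x4) are placeholders for your own ordinal dependent variable and four continuous predictors.

proc logistic data=mydata;
   model outcome = x1 x2 x3 x4;   /* ordinal response: cumulative logit fitted by default */
run;

The output includes a score test for the proportional odds assumption, which is worth checking before interpreting the coefficients.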
{"url":"https://communities.sas.com/t5/Statistical-Procedures/What-is-the-code-to-run-an-ordinal-logistic-regression-in-SAS/td-p/898005","timestamp":"2024-11-11T11:08:22Z","content_type":"text/html","content_length":"229103","record_id":"<urn:uuid:40a20618-b726-4081-9c96-caa8119ddd38>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00492.warc.gz"}
HI TO ALL! EXCUSE ME, I AM REALLY NEW!! SO OBVIOUSLY MY QUESTION IS STUPID, BUT PLEASE HELP ME! I would like to create a cube with these measures: 14 x 16, height 2 m, but I am not able to make it right. I double click on the square, but when I put in the cube parameters, what are the right numbers: 1200, 12000 or 120 for 12 meters? Please give me a hand, because when I export them as 3ds and try to import them in SketchUp the measures are completely wrong. Why?? Do I need to set something in animator??
{"url":"https://www.anim8or.com/smf/index.php?topic=4255.msg31367","timestamp":"2024-11-03T06:51:41Z","content_type":"application/xhtml+xml","content_length":"31777","record_id":"<urn:uuid:fd5d4d63-7b8c-4983-89cf-28bdc678b1a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00868.warc.gz"}
LM 14.5 Momentum in three dimensions Collection 14.5 Momentum in three dimensions by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license. 14.5 Momentum in three dimensions In this section we discuss how the concepts applied previously to one-dimensional situations can be used as well in three dimensions. Often vector addition is all that is needed to solve a problem: Example 17: An explosion velocity components `v_"1y"=1.0×10^4 km"/"hr `, and the second with `v_"2x"=1.0×10^4 km"/"hr `, (all in the center of mass frame) what is the magnitude of the third one's velocity? `=>` In the center of mass frame, the planet initially had zero momentum. After the explosion, the vector sum of the momenta must still be zero. Vector addition can be done by adding components, so `mv_"1x"+mv_"2x"+mv_"3x"=0, and ` where we have used the same symbol `m`, and we find `v_"3x"=-1.0×10^4 km"/"hr ` `v_"3y"=-1.0×10^4 km"/"hr` which gives a magnitude of `=1.4×10^4 km"/"hr` The center of mass In three dimensions, we have the vector equations `and p_"total"=m_"total"v_"cm"`. The following is an example of their use. Example 18: The bola The bola, similar to the North American lasso, is used by South American gauchos to catch small animals by tangling up their legs in the three leather thongs. The motion of the whirling bola through the air is extremely complicated, and would be a challenge to analyze mathematically. The motion of its center of mass, however, is much simpler. The only forces on it are gravitational, so Using the equation `F_"total"=Deltap_"total""/"Deltat`, we find `Deltap_"total" "/"Deltat =m_"total"g`, and since the mass is constant, the equation `p_"total"=m_"total"v_"cm"` allows us to change this to The mass cancels, and `Deltav_(cm)"/"Deltat` is simply the acceleration of the center of mass, so In other words, the motion of the system is the same as if all its mass was concentrated at and moving with the center of mass. The bola has a constant downward acceleration equal to `g`, and flies along the same parabola as any other projectile thrown with the same initial center of mass velocity. Throwing a bola with the correct rotation is presumably a difficult skill, but making it hit its target is no harder than it is with a ball or a single rock. [Based on an example by Kleppner and Kolenkow.] Counting equations and unknowns vector has three components, so an unknown momentum vector counts as three unknowns. Conservation of momentum is a single vector equation, but it says that all three components of the total momentum vector stay constant, so we count it as three equations. Of course if the motion happens to be confined to two dimensions, then we need only count vectors as having two components. Example 19: A two-car crash with sticking Suppose two cars collide, stick together, and skid off together. If we know the cars' initial momentum vectors, we can count equations and unknowns as follows: unknown #1: `x` component of cars' final, total momentum unknown #2: `y` component of cars' final, total momentum equation #1: conservation of the total `p_x` equation #2: conservation of the total `p_y` Since the number of equations equals the number of unknowns, there must be one unique solution for their total momentum vector after the crash. In other words, the speed and direction at which their common center of mass moves off together is unaffected by factors such as whether the cars collide center-to-center or catch each other a little off-center. 
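A quick numerical check of Example 17's vector addition, added here for illustration. The components not stated in the example (v1x and v2y) are taken as zero, which is what the worked solution implies, and the mass m cancels out of the momentum balance exactly as in the text.

from math import hypot

# Example 17: three equal-mass fragments, zero total momentum in the c.m. frame
v1 = (0.0, 1.0e4)   # (v1x, v1y) in km/hr; v1x = 0 implied by the worked solution
v2 = (1.0e4, 0.0)   # (v2x, v2y) in km/hr; v2y = 0 implied by the worked solution

# m*v1 + m*v2 + m*v3 = 0, and the common mass m cancels:
v3 = (-(v1[0] + v2[0]), -(v1[1] + v2[1]))

print("v3 =", v3)             # (-1.0e4, -1.0e4) km/hr
print("|v3| =", hypot(*v3))   # about 1.4e4 km/hr, matching the example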
Example 20: Shooting pool Two pool balls collide, and as before we assume there is no decrease in the total kinetic energy, i.e., no energy converted from KE into other forms. As in the previous example, we assume we are given the initial velocities and want to find the final velocities. The equations and unknowns are: unknown #1: `x` component of ball #1's final momentum unknown #2: `y` component of ball #1's final momentum unknown #3: `x` component of ball #2's final momentum unknown #4: `y` component of ball #2's final momentum equation #1: conservation of the total `p_x` equation #2: conservation of the total `p_y` equation #3: no decrease in total `KE` Note that we do not count the balls' final kinetic energies as unknowns, because knowing the momentum vector, one can always find the velocity and thus the kinetic energy. The number of equations is less than the number of unknowns, so no unique result is guaranteed. This is what makes pool an interesting game. By aiming the cue ball to one side of the target ball you can have some control over the balls' speeds and directions of motion after the collision. It is not possible, however, to choose any combination of final speeds and directions. For instance, a certain shot may give the correct direction of motion for the target ball, making it go into a pocket, but may also have the undesired side-effect of making the cue ball go in a pocket. Calculations with the momentum vector force is required to change the direction of the momentum vector, just as one would be required to change its magnitude. Example 21: A turbine `=>` In a hydroelectric plant, water flowing over a dam drives a turbine, which runs a generator to make electric power. The figure shows a simplified physical model of the water hitting the turbine, in which it is assumed that the stream of water comes in at a 45° angle with respect to the turbine blade, and bounces off at a 90° angle at nearly the same speed. The water flows at a rate `R`, in units of kg/s, and the speed of the water is`v`. What are the magnitude and direction of the water's force on the turbine? `=>` In a time interval `Deltat`, the mass of water that strikes the blade is `RDeltat`, and the magnitude of its initial momentum is `mv=vRDeltat`. The water's final momentum vector is of the same magnitude, but in the perpendicular direction. By Newton's third law, the water's force on the blade is equal and opposite to the blade's force on the water. Since the force is constant, we can use the equation `F_"blade on water"=(Deltap_"water")/(Deltat)`. Choosing the `x` axis to be to the right and the `y` axis to be up, this can be broken down into components as `F_"blade on water",x=(Deltap_"water",x)/(Deltat)` `F_"blade on water",y=(Deltap_"water",y)/(Deltat)` The water's force on the blade thus has components `F_"water on blade",x=vR` `F_"water on blade",y=-vR`. In situations like this, it is always a good idea to check that the result makes sense physically. The `x` component of the water's force on the blade is positive, which is correct since we know the blade will be pushed to the right. The `y` component is negative, which also makes sense because the water must push the blade down. The magnitude of the water's force on the blade is `|F_"water on blade"|=sqrt(2)vR` and its direction is at a 45-degree angle down and to the right. Discussion Questions A The figures show a jet of water striking two different objects. How does the total downward force compare in the two cases? 
How could this fact be used to create a better waterwheel? (Such a waterwheel is known as a Pelton wheel.) 14.5 Momentum in three dimensions by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license.
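As a numerical illustration of Example 21's result, |F| = sqrt(2) v R, here is a short sketch added for this text version; the flow rate and speed below are made-up sample values, not numbers from the original example.

from math import sqrt

R = 30.0   # water flow rate in kg/s (illustrative value only)
v = 20.0   # water speed in m/s (illustrative value only)

Fx, Fy = v * R, -v * R       # components of the water's force on the blade
F = sqrt(Fx**2 + Fy**2)      # magnitude, equal to sqrt(2) * v * R

print(f"F = ({Fx:.0f} N, {Fy:.0f} N), |F| = {F:.0f} N")   # (600, -600), about 849 N

The positive x component pushes the blade to the right and the negative y component pushes it down, at a 45-degree angle, exactly as argued in the example.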
{"url":"https://www.vcalc.com/collection/?uuid=1e574b64-f145-11e9-8682-bc764e2038f2","timestamp":"2024-11-07T00:45:56Z","content_type":"text/html","content_length":"58886","record_id":"<urn:uuid:f659e5a7-5265-4c69-ae9e-48cfb8a76179>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00430.warc.gz"}
Interception Points of a Line Interception Points of a Line Before we go on to looking at some math regarding 2-variable linear equations (or lines), we must first become familiar with some important terms that applies to all sorts of 2D equations. Definition: The Origin in $2$-dimensional space is the point in which the $x$-axis and the $y$-axis intersect, and has coordinates $(0, 0)$. Definition A line is said to have an $x$-Interception, Root, or Solution at all points where the line intersects the $x$-axis. Similarly, a line is said to have a $y$-Interception at all points where the line intersects the $y$-axis. More generally, any curve that passes through the $x$-axis is said to have an $x$-interception at that point. Similarly, any curve that passes through the $y$-axis is said to have a $y$-interception at that point. Definition: Two lines are said to Intersect if they cross each other.
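For example (an illustration added here, not part of the original page), consider the line $2x + 3y = 6$. Setting $y = 0$ gives $x = 3$, so the line has an $x$-Interception at $(3, 0)$; setting $x = 0$ gives $y = 2$, so it has a $y$-Interception at $(0, 2)$. The line does not pass through the Origin, since $(0, 0)$ does not satisfy the equation.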
{"url":"http://mathonline.wikidot.com/interception-points-of-a-line","timestamp":"2024-11-13T18:45:43Z","content_type":"application/xhtml+xml","content_length":"14956","record_id":"<urn:uuid:25b26e9d-32e9-428b-a646-acf055b194c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00526.warc.gz"}
Minimum Candy Distribution - Interview Algorithm Problem | Codingeek Minimum Candy Distribution – Interview Algorithm Problem Recently in an online test I faced this question i.e. on how to minimize the number of candies/toffee to be distributed by a teacher and found that a lot of similar questions are frequently asked in a lot of interviews. Read More – Possible beautiful arrangements So the problem statement is – A teacher has some students in class. She has to distribute some candies to these students. Every student has some grades/rank that he/she has acquired over a series of tests. Now students are sitting in a line in a random order(will be provided in input) and there are few rules that teacher has to follow while distributing the candies. Rules are – • Among any two students who sit adjacent to each other and have different grades, student with the higher grades must get extra candies. • At least one candy is given to every student. • Students sitting adjacent to each other and have same grades then there is no condition on the number of candies they get i.e. if one get 5 candies other can get 15 or even 1 or 5 or any other positive number(but we have to minimize this). There might be other conditions on the input but for now, we will focus on the logic on how to do this – The approach that I actually used in the test was different than the solution that I will be sharing today as it was the more complicated one and required some extra operations as well but it worked for me. So in a brief what I did was that I kept the last location upto which I need to increase the candies distributed in case there is a long chain of students sitting with decreasing order of For example – If there are 7 students and their grades are 2, 3, 4, 4, 3, 2, 1 If there are 7 students and their grades are 2, 3, 4, 4, 3, 2, 1 Then distribution is like- After step 1,2,3 quantities distributed are– 1, 2, 3, 0, 0, 0, 0 (at this time 3rd student is marked as the last location upto which candy counter increment is needed because if we have to increase the number of candies of 3rd we need not worry about 2nd student because condition of higher candy does not breaks.) After 4th step – 1, 2, 3, 1, 0, 0, 0 (because 3rd and 4th student have the same grade and we have to minimize the toffee, now 4th student is marked) 5th Step – 1, 2, 3, 2, 1, 0, 0 (increase count till marked because we have to give at least one candy to every student and also maintain another condition of students with different grades, still 4th is marked) 6th step – 1, 2, 3, 3, 2, 1, 0 (still 4th is marked) 7th step – 1, 2, 3, 4, 3, 2, 1 So this was the solution that I try. You can also try this by yourself but now let’s jump to a better solution and this will solve this problem in O(2n) time complexity and is much easier to implement and understand. Theoretically in this approach first we go left to right and set “next” candy value to either “previous+1” or “1”. This way we get the up trends i.e. checking condition as per next student and not as the previous student sitting adjacently. Then we go right to left and do the same, this way getting the down trends. Implementation of Candy problem So lets suppose the input format is • First line – A value for number of students (n) • Next n lines – Grade of student. 
import java.util.Scanner;

public class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int arr[] = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = in.nextInt();
        }

        int candies[] = new int[n];
        candies[0] = 1;

        // First loop for up trends
        for (int i = 1; i < n; i++) {
            if (candies[i] == 0) {
                if (arr[i] > arr[i - 1]) {
                    candies[i] = candies[i - 1] + 1;
                }
            }
        }

        // Second loop for down trends
        for (int i = n - 1; i > 0; i--) {
            if (arr[i - 1] > arr[i] && candies[i - 1] <= candies[i]) {
                candies[i - 1] = candies[i] + 1;
            }
        }

        // Calculating the sum - This step can be avoided by
        // addition and substraction in previous loops, but for
        // simplicity it is seperated out.
        long sum = 0l;
        for (int i = 0; i < n; i++) {
            sum += candies[i];
        }

        System.out.println("Minumum number of candies required to distribute are - " + sum);
    }
}

Minumum number of candies required to distribute are - 16

So that's all for this tutorial. Hope this helps and you like the tutorial. Do ask for any queries in the comment box and provide your valuable feedback. Do come back for more because learning paves way for a better understanding. Keep Coding!! Happy Coding!! 🙂

Reader comment: The second loop has to be something like this. for (int i = A.size() - 1; i > 0; i--) { if (candies[i - 1] A.get(i)) { candies[i - 1] = candies[i] + 1;
{"url":"https://www.codingeek.com/practice-examples/interview-programming-problems/minimum-candy-distribution-interview-algorithm-problem/","timestamp":"2024-11-11T04:35:36Z","content_type":"text/html","content_length":"64144","record_id":"<urn:uuid:9e7bb0b1-9bd9-4340-9dbd-1e461d59058e>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00143.warc.gz"}
Effects of aggregate type on concrete density in context of concrete density 31 Aug 2024 Title: The Impact of Aggregate Type on Concrete Density: A Comprehensive Analysis Abstract: Concrete density is a critical property that affects the overall performance and durability of concrete structures. Among various factors influencing concrete density, aggregate type plays a significant role. This article reviews the effects of different aggregate types on concrete density, providing insights into the underlying mechanisms and theoretical frameworks. Introduction: Concrete density is defined as the mass per unit volume of concrete (ρ = m/V), where ρ is the density, m is the mass, and V is the volume [1]. The density of concrete is influenced by various factors, including aggregate type, water content, cement content, and admixture usage. Aggregate type, in particular, has a profound impact on concrete density due to its significant contribution to the overall mass and volume of the mixture. Aggregate Types: Aggregates can be broadly classified into two categories: natural aggregates (NA) and manufactured aggregates (MA). NA includes gravel, sand, crushed stone, and other naturally occurring materials, while MA encompasses recycled aggregate, fly ash, and other artificially produced materials [2]. Effects of Aggregate Type on Concrete Density: 1. Natural Aggregates (NA) The density of concrete made with NA is generally lower than that made with MA due to the inherent porosity and voids present in natural aggregates [3]. The density of NA can be expressed as: ρ_NA = ρ_sand + ρ_gravel + ρ_voids where ρ_sand, ρ_gravel, and ρ_voids are the densities of sand, gravel, and voids, respectively. 2. Manufactured Aggregates (MA) The density of concrete made with MA is generally higher than that made with NA due to the absence of voids and the presence of a more uniform particle size distribution [4]. The density of MA can be expressed as: ρ_MA = ρ_flyash + ρ_recycledaggregate where ρ_flyash and ρ_recycledaggregate are the densities of fly ash and recycled aggregate, respectively. Discussion: The effects of aggregate type on concrete density are complex and multifaceted. The choice of aggregate type can significantly impact the overall performance and durability of concrete structures. While NA provides a more natural and aesthetically pleasing appearance, MA offers improved mechanical properties and reduced environmental impact. Conclusion: In conclusion, the type of aggregate used in concrete production has a significant impact on concrete density. Understanding the effects of different aggregate types is crucial for optimizing concrete performance and durability. Further research is needed to explore the underlying mechanisms and theoretical frameworks governing the relationship between aggregate type and concrete density. [1] ACI Committee 211 (2008). “Standard Practice for Selecting Proportions for Concrete.” American Concrete Institute, Farmington Hills, MI. [2] ASTM C33/C33M-18. “Standard Specification for Concrete Aggregates.” American Society for Testing and Materials, West Conshohocken, PA. [3] Neville, A. M. (1995). “Properties of Concrete.” Longman Scientific & Technical, Harlow, UK. [4] ACI Committee 233 (2011). “Guide to the Use of Recycled Aggregate in Concrete.” American Concrete Institute, Farmington Hills, MI. Related articles for ‘concrete density’ : Calculators for ‘concrete density’
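As a small numerical illustration of the basic definition ρ = m/V used in the introduction above (a sketch added here, not part of the article; the batch numbers are made up), density can be computed directly from a measured mass and volume of fresh concrete.

# Density of a trial concrete sample from the basic definition rho = m / V
mass_kg = 57.6      # measured mass of the fresh concrete sample (illustrative value)
volume_m3 = 0.024   # volume of the sample container in cubic metres (illustrative value)

density = mass_kg / volume_m3
print(f"Concrete density: {density:.0f} kg/m^3")   # 2400 kg/m^3 for these sample numbers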
{"url":"https://blog.truegeometry.com/tutorials/education/90d1b151a4fb135163f8dacf83be6458/JSON_TO_ARTCL_Effects_of_aggregate_type_on_concrete_density_in_context_of_concre.html","timestamp":"2024-11-06T08:43:39Z","content_type":"text/html","content_length":"17282","record_id":"<urn:uuid:1ccd3707-ddd9-4df7-acf0-0283916c3440>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00876.warc.gz"}
Cyclic numbers - Project Euler problem 358

Cyclic numbers, 1/7 specifically, are something that amazed me while I was a child. However, I never really thought about cyclic numbers other than 1/7 until this problem. I decided to spend an evening researching and resolving the latest Project Euler problem, #358 Cyclic Number.

To solve this problem, we need an integer number X satisfying the following criteria, based on my understanding of cyclic numbers:

1. 1/X starts with 0.00000000137
2. X is dividable by 10^X-1
3. The result of (10^X-1/X) ends with 56789
4. X is a prime number
5. ?

I put a question mark for the fifth criterion because I haven't figured out what it is, while with only the four others the problem can almost be solved. The first hint limits candidates to between 724637681 and 729927007. Given the second and third hints, the last 5 digits of 56789 * X should be 99999, which means 56789 * (X-1) ends with 43210. X must end with 09891 to satisfy such a requirement. There are 53 numbers within the range that end with 09891, of which only 3 are prime numbers: 725509891, 726509891 and 729809891. The final answer hides in these three numbers. All three of these numbers meet requirement 2. I tried answering them on Project Euler and found the last one is the correct answer. However, I haven't figured out how to filter out the two others programmatically. This still bothers me.

Java code is in GitHub. It's quite efficient as far as I can tell. Most of the time is spent on prime number checking.

>>>>>> Runnining solution of problem 358
Last five digits result is 9891
Searching numbers between 724637681 and 729927007
Checking candidate: 725509891 with sum of digits 3264794505
Checking candidate: 726509891 with sum of digits 3269294505
Checking candidate: 729809891 with sum of digits 3284144505
53 numbers are verified
<<<<<< Solution 358 took 878.149399 ms
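A small Python sketch, added here for illustration (the post's own implementation is the Java code linked above), reproducing the candidate filtering described in the post: take the numbers in the hinted range whose last five digits are 09891, then keep the primes with a plain trial-division check.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

lo, hi = 724637681, 729927007  # range implied by 1/X starting with 0.00000000137
candidates = [n for n in range(lo, hi + 1) if n % 100000 == 9891]  # last five digits 09891
primes = [n for n in candidates if is_prime(n)]

print(len(candidates), "candidates end with 09891")  # 53, matching the output above
print("prime candidates:", primes)                   # [725509891, 726509891, 729809891]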
{"url":"https://blog.cyclopsgroup.org/2011/11/cyclic-numbers-project-euler-problem.html","timestamp":"2024-11-04T13:20:55Z","content_type":"text/html","content_length":"137517","record_id":"<urn:uuid:608f6e79-0698-4d49-8b8e-b97b1b362d0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00762.warc.gz"}
A note on weak w-projective modules

Keywords: projective module, weak $w$-projective module, $w$-flat, $GV$-torsion, finitely presented type, $DW$-ring, coherent ring, $w$-coherent ring

Abstract: Let $R$ be a ring. An $R$-module $M$ is a weak $w$-projective module if ${\rm Ext}_R^1(M,N)=0$ for all $N$ in the class of $GV$-torsion-free $R$-modules with the property that ${\rm Ext}^k_R(T,N)=0$ for all $w$-projective $R$-modules $T$ and all integers $k\geq1$. In this paper, we introduce and study some properties of weak $w$-projective modules. We use these modules to characterise some classical rings. For example, we will prove that a ring $R$ is a $DW$-ring if and only if every weak $w$-projective is projective; $R$ is a von Neumann regular ring if and only if every FP-projective module is weak $w$-projective if and only if every finitely presented $R$-module is weak $w$-projective; and $R$ is $w$-semi-hereditary if and only if every finite type submodule of a free module is weak $w$-projective if and only if every finitely generated ideal of $R$ is weak $w$-projective.

How to Cite: Assaad, R. A. K. (2024). A note on weak w-projective modules. New Zealand Journal of Mathematics, 55, 43–52. https://doi.org/10.53733/336
{"url":"https://nzjmath.org/index.php/NZJMATH/article/view/336","timestamp":"2024-11-10T15:40:13Z","content_type":"text/html","content_length":"24044","record_id":"<urn:uuid:dac772db-dce5-4793-b9d3-dc74be73b9ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00080.warc.gz"}
Texas Go Math Grade 1 Lesson 14.1 Answer Key Classify and Sort Two-Dimensional Shapes Refer to our Texas Go Math Grade 1 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 1 Lesson 14.1 Answer Key Classify and Sort Two-Dimensional Shapes. Texas Go Math Grade 1 Lesson 14.1 Answer Key Classify and Sort Two-Dimensional Shapes Draw to sort the shapes. Write the sorting rule. Answer: Sorting rules: We can sort two-dimensional shapes into groups. A sorting rule is a rule that tells us how to sort objects. Read the sorting rule. Circle the shapes that follow the rule. FOR THE TEACHER • Read the following aloud. Devon wants to sort these shapes to show a group of triangles and a group of rectangles. Draw and write to show how Devon sorts the shapes. Answer: Here the question was asked that we need to sort a group of triangles and rectangles. We will represent by diagrammatically. First, we can sort two-dimensional shapes by comparing: 1. How many sides do they have. 2. How many vertices(corners) do they have. 3. If they have curves or not. 4. The length of their sides. Math Talk Mathematical Processes Is there a shaper that did not go in your groups? Explain. Answer: No No, there are no shapes that did not go in your group, because here we are dealing with two-dimensional shapes and we have triangle, square, circle, rectangle and so on. from the above questions so all of them are grouped. Here are some ways to sort two-dimensional shapes. Model and Draw Here are some ways to classify and sort two-dimensional shapes. A square is a special kind of a rectangle. Answer: Closed shape with no sides and no corners are the circles. closed shapes with ____3____ sides Answer: 3 sides closed shapes with ____4____ vertices Answer: 4 vertices Share and Show Read the sorting rule. Circle the shapes that follow the rule. THINK: Vertices (corners) are where the sides meet. Question 1. 4 vertices (corners) Answer: Rectangle, square, trapezium are having the 4 vertices. The coloured ones are meeting the two sides and also have 4 vertices. Question 2. not curved Answer: There are 4 shapes that are not curved. They are parallelogram, triangles, trapezium. The coloured ones are not curved. These shapes are having sides and corners. Question 3. Answer: In the given diagram, there are two triangles. The coloured ones are the triangles. Question 4. more than 3 sides Answer: There are 3 shapes that are more than 3 sides. The coloured ones are square, rectangle, trapezium. These shapes are having more than 3 sides. Problem Solving Circle the special rectangles in Exercise 6. Question 5. Color the shapes that are circles. Answer: In the below diagram, the shapes of circles are coloured. Question 6. Color the squares red. Color other rectangles blue. Answer: In the below diagram, the shapes of circles are coloured. H.O.T. Draw 2 different two-dimensional shapes that follow both parts of the sorting rule.        Explanation:                   – Both are having the same characteristics that follow the sorting rule are having 4 vertices and 4 sides. Moreover, length and breadth are also as follows. Question 7. 3 sides and 3 vertices (corners) Answer: Triangle Triangle is having 3 sides and 3 vertices. Question 8. 2 sides are long and 2 sides are short Answer: Rectangle Question 9. H.O.T. Multi-Step Write sorting rules to show two different ways to sort some of these shapes. Then color the shapes that follow one of your rules. Answer: Sorting rules. 1. 
I have coloured shapes having with 4 sides. 2. Others that are not coloured are having with 3 sides. Daily Assessment Task Choose the correct answer. Question 10. Analyze Which shape is a rectangle? Answer: Rectangle Question 11. Multi-Step Circle the triangles. Then circle the sorting rule you used. Answer: The representation has shown in the below diagram. Question 12. Texas Test Prep Which shape would not be sorted into this group? Answer: The middle one cannot be sorted. Explanation: The rhombus cannot be sorted because it has 4 sides and 4 corners but the other two was the triangle that are having 3 sides. TAKE HOME ACTIVITY • Gather some household objects such as photos. coins, and napkins. Ask your child to sort them by shape. Answer: Make your child do the activity. Ask them to do square, rectangle, triangle, rhombus, trapezium, parallelogram, etc. Texas Go Math Grade 1 Lesson 14.1 Homework and Practice Answer Key Read the sorting rule. Circle the shapes that follow the rule. Question 1. more than 3 sides Answer: Rectangle, square. Here, the question was asking more than 3 sides. So it was a rectangle and square that are having 4 sides. Question 2. Problem Solving Draw 2 different two-dimensional shapes that follow both parts of the sorting rule. Question 3. 4 sides are the same length Answer: Square Square: All sides are equal. • All four interior angles are equal to 90° • All four sides of the square are congruent or equal to each other • The opposite sides of the square are parallel to each other • The diagonals of the square bisect each other at 90° • The two diagonals of the square are equal to each other Question 4. Multi-Step Write sorting rules to show two different ways to sort some of these shapes. Answer: Triangle and rectangle. Rectangle sorting rules: The fundamental properties of rectangles are: • A rectangle is a quadrilateral • The opposite sides are parallel and equal to each other • Each interior angle is equal to 90 degrees • The sum of all the interior angles is equal to 360 degrees • The diagonals bisect each other • Both the diagonals have the same length Triangle sorting rules: • The sum of all the angles of a triangle (of all types) is equal to 180°. • The sum of the length of the two sides of a triangle is greater than the length of the third side. • In the same way, the difference between the two sides of a triangle is less than the length of the third side. • The side opposite the greater angle is the longest side of all the three sides of a triangle. Lesson Check Choose the correct answer. Question 5. Which shape is not curved? Answer: Rectangle Question 6. Which shape is a special rectangle? Answer: The middle one. Question 7. Multi-Step Kara is sorting shapes. Read the labels Circle the shapes that belong in each group. Leave a Comment You must be logged in to post a comment.
{"url":"https://gomathanswerkey.com/texas-go-math-grade-1-lesson-14-1-answer-key/","timestamp":"2024-11-14T07:04:09Z","content_type":"text/html","content_length":"267885","record_id":"<urn:uuid:cb0b94ef-3e2b-48b5-80b7-1b9a788e7b63>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00765.warc.gz"}
: Matrix MathObjects class and non-numerical elements Am I correct in assuming that the elements of the Matrix MathObjects class must be perl numbers, and cannot be math objects? If I try to use a formula as an element, I have problems with using the methods with the Matrix class. These are outlined below. If there is a way of using variables or named constants and retaining these functions, how could I do that? If there isn't a way, I do know that I could manage to write these problems without the MO matrix class. I tried these by using a matrix A = [x 1](as a perl variable, $A) with one element x, which I think is a Math Object formula here. Display (in text): The matrix displays perfectly normally as if it were completely numeric. Scalar Multiplication: If I multiply A by 5, and display this in the text section, I find that this is not multiplied through as it would be if A were completely numeric. So it shows [x 1]*5 rather than [5x 5]. Further, if I try to call the method "ans_array()" on it, this produces a singular blank rather than several for a matrix answer. Transpose, Trace, etc.: If I call any of these type of methods on A, I get an error like this: So $A->transpose; gives me an error. So I deduce from this that having x in the matrix might make this variable $A be interpreted as a Formula rather than a Matrix. Attached are a screencap and the code. Screencap here Am I correct in assuming that the elements of the Matrix MathObjects class must be perl numbers, and cannot be math objects? If I try to use a formula as an element, I have problems with using the methods with the Matrix class. That is correct; if you want to manipulate matrices, the entries must be reals. (They can't be fractions, either.) If you only want to display a matrix, then it can contain formulas. This seems to be the source of all of your problems. Scalar Multiplication: If I multiply A by 5, and display this in the text section, I find that this is not multiplied through as it would be if A were completely numeric. So it shows [x 1]*5 rather than [5x 5]. Further, if I try to call the method "ans_array()" on it, this produces a singular blank rather than several for a matrix answer. I'm guessing here that the product is not recognized as a Matrix, but something else, and so the ans_array() function produces a single blank by default. (If the object is a Matrix or Vector, then ans_array() of course will produce an appropriate display.) Transpose, Trace, etc.: If I call any of these type of methods on A, I get an error like this: Once again, part of the perl code that does the transposing is also used for the complex transpose, and bottoms down at the transpose (= conjugate) of a complex number. Since only real and complex numbers have "transposes", not formulas or fractions, this causes an error. Ah that is unfortunate! I was really hoping to be able to use the superior functionality of the Matrix class for a rather large set of questions using matrices that use symbolic variables or named constants in place of randomized numeric values. Thank you for your help nonetheless! Though, if you have any suggestions, feel free to let me know - I'm sure they would be useful. Two comments that may or may not be of any use to you: • You could look into having Sage do the manipulation of the matrices with formula entries, and take the Sage output and put it back into a Matrix MO for displaying. You'd get the added bonus of simplification of formulas (like x^2 + x^2 to 2x^2), which the MO Parser would not be able to do. 
• For numeric fraction values in the matrix entries, what I have done in the past is to let them fall down to decimal reals and do all of the matrix manipulation. Then, at the last step, if I know the entries should be fractions with reasonable denominators, use a loop to grab each entry and "fractionify" it with Fraction() before putting these entries back into a Matrix MO (see the sketch at the end of this thread). The result can display nicely as fractions rather than decimals. To do this, the current contextFraction.pl is needed to recognize things like 0.3333333 as 1/3. (Older versions will turn 0.3333333 into 333333/…)

These are great suggestions. I'm looking more into using Sage as we speak. Thank you!

The basic insight is that you can usually have Matrix-valued Formulas but not Matrices with formula entries. Alex's suggestion about using Sage is a good one (certainly for now). It might be possible to extend MathObjects slightly to cover edge cases that currently don't quite work, but it's probably not worth trying to convert MathObjects into a full-blown CAS. Sage already does that, and we should figure out how to harness its power in WeBWorK. I think AskSage might be appropriate here. It's still relatively new and needs more problem examples. There are some simple examples in the UR demo course on hosted2. GeoGebra also has a CAS built into it, but I don't think anyone has built problems combining that capability of GeoGebra with WeBWorK questions.

I'll look into it and see if I can put it to use. Thank you for the advice and examples.

Well, I tried making a simple problem with AskSage before trying to make it do anything more complex, but I can't seem to make it work even in the simplest fashion. I keep getting errors that make it seem that Sage isn't responding, but I can't be sure because I'm not experienced with the type of error messages I would get in this situation. So, to be sure it wasn't me, I literally copied and pasted the code for the example at https://hosted2.webwork.rochester.edu/webwork2/2014_07_UR_demo/askSage/1/ into the text editor that I am working in and tried that. But even that failed. I read that this isn't available in the standard distribution yet, only in the PG develop version, but I think both places are on PG develop. The only thing I can think of that might be different is the machines - one is hosted2 and the other devel2. Could this cause a problem? I don't know how this works internally, but my thinking was that perhaps Sage was installed on hosted2 but not on devel2. Do you have an idea of what could be causing these errors? Screencap of the error messages is below (the code is exactly the same as the example problem).

Hi Ian, I do not have an answer to your question that uses AskSage. Instead, I have a low-tech solution given as a PG file below my signature. I wrote a little subroutine that will perform the transpose of a MathObject matrix with non-numeric entries. Basically, this subroutine exports a MathObject matrix $M as a Perl array of (unnamed) arrays @m, transposes the Perl array of arrays @m into another Perl array of arrays @mt, and then promotes the result to a MathObject matrix via Matrix(@mt). I have used low-tech tricks like this a lot as a way to work around limitations. They usually require a little bit more programming, but if done right, the code can be reused again and again.
Best regards,
Paul Pearson

############### Begin PG file #############

$M = Matrix(
  [ Compute("5 x"), Compute("x"), Compute("3") ],
  [ Compute("2 x"), Compute("x"), Compute("2") ],
);

sub transpose_matrix_of_formulas {
  my $M = shift;
  my @m = $M->value;
  my $nrows = scalar(@m);          # number of rows
  my $ncols = scalar(@{$m[0]});    # number of columns
  my @mt = ();
  foreach my $i (0..$nrows-1) {
    foreach my $j (0..$ncols-1) {
      $mt[$j][$i] = $m[$i][$j];
    }
  }
  return Matrix(@mt);
}

$Mt = transpose_matrix_of_formulas($M);

\( $M = \) \{ $M->ans_array(5) \}

\( $Mt = \) \{ $Mt->ans_array(5) \}

ANS( $M->cmp, $Mt->cmp );

############### End PG file #############

I have gotten AskSage working, though unfortunately not to the point where I am comfortable using it - there still seem to be some problems I'm encountering that are unavoidable at the moment. For instance, I attempted to create a 3x1 matrix, a vertical column, and use this in a problem, but converting between the output of Sage and the Matrix constructor always makes it a 1x3 matrix, a horizontal row. So I'll be using some lower-tech solutions like you suggested, Paul, and try to use some Perl subs to make the operations easier. Thanks for the suggestion!

Hi Ian,

Good insight into the cause. AskSage requires the latest changes in the develop branch in order to work on FreeBSD machines. I've updated devel2 to that level. I also put a copy of the UR demo course on devel2 so that you can check your results against the results in that course.

-- Mike
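For completeness, here is a rough sketch of the "fractionify" round trip suggested earlier in the thread: do the matrix arithmetic while the entries are plain reals, then rebuild the result with Fraction() entries just before displaying it. This is only an illustration in the spirit of Paul's subroutine above, not code from the original thread: the subroutine name and the sample matrix are made up, and it assumes a current contextFraction.pl that recognizes decimals such as 0.3333333 as 1/3, as Alex described.

############### Begin sketch (illustration only) #############

loadMacros("PGstandard.pl", "MathObjects.pl", "contextFraction.pl");

Context("Fraction");

# Do the manipulation while the entries are plain reals ...
$A  = Matrix( [ 0.5, 0.25 ], [ 1.5, 0.3333333 ] );
$At = $A->transpose;                       # works, since every entry is a real

# ... then convert each entry to a Fraction for display.
sub fractionify_matrix {
  my $M = shift;
  my @m  = $M->value;                      # rows of $M as arrays of entries
  my @mf = ();
  foreach my $i (0 .. scalar(@m) - 1) {
    foreach my $j (0 .. scalar(@{$m[0]}) - 1) {
      $mf[$i][$j] = Fraction($m[$i][$j]);  # e.g. 0.3333333 becomes 1/3
    }
  }
  return Matrix(@mf);
}

$Atf = fractionify_matrix($At);            # displays with fraction entries

############### End sketch #############

The loop-and-rebuild pattern is the same one Paul's transpose subroutine uses, so the two ideas can be combined when both a symbolic-style workaround and a fraction display are needed.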
Optimization in Medicine

Springer Series in Optimization and Its Applications, Volume 12

Managing Editor: Panos M. Pardalos (University of Florida)
Editor (Combinatorial Optimization): Ding-Zhu Du (University of Texas at Dallas)

Advisory Board: J. Birge (University of Chicago), C.A. Floudas (Princeton University), F. Giannessi (University of Pisa), H.D. Sherali (Virginia Polytechnic and State University), T. Terlaky (McMaster University), Y. Ye (Stanford University)

Aims and Scope
Optimization has been expanding in all directions at an astonishing rate during the last few decades. New algorithmic and theoretical techniques have been developed, the diffusion into other disciplines has proceeded at a rapid pace, and our knowledge of all aspects of the field has grown even more profound. At the same time, one of the most striking trends in optimization is the constantly increasing emphasis on the interdisciplinary nature of the field. Optimization has been a basic tool in all areas of applied mathematics, engineering, medicine, economics and other sciences. The Springer Series in Optimization and Its Applications publishes undergraduate and graduate textbooks, monographs and state-of-the-art expository works that focus on algorithms for solving optimization problems and also study applications involving such problems. Some of the topics covered include nonlinear optimization (convex and nonconvex), network flow problems, stochastic optimization, optimal control, discrete optimization, multi-objective programming, description of software packages, approximation techniques and heuristic approaches.

Carlos J. S. Alves, Panos M. Pardalos, Luis Nunes Vicente, Editors
Optimization in Medicine

Editors:
Carlos J. S. Alves, Instituto Superior Técnico, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
Panos M. Pardalos, Department of Industrial and Systems Engineering, University of Florida, 303 Weil Hall, Gainesville, FL 32611, USA
Luis Nunes Vicente, Departamento de Matemática, Faculdade de Ciências e Tecnologia, Universidade de Coimbra, 3001-454 Coimbra, Portugal

Managing Editor: Panos M. Pardalos (University of Florida)
Editor (Combinatorial Optimization): Ding-Zhu Du (University of Texas at Dallas)

ISBN 978-0-387-73298-5    e-ISBN 978-0-387-73299-2
DOI: 10.1007/978-0-387-73299-2
Library of Congress Control Number: 2007934793
Mathematics Subject Classification (2000): 49XX, 46N60

© 2008 Springer Science+Business Media, LLC
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper.
springer.com

Preface

Optimization has become pervasive in medicine. The application of computing to medical applications has opened many challenging issues and problems for both the medical computing field and the mathematical community.
Mathematical techniques (continuous and discrete) are playing a key role with increasing importance in understanding several fundamental problems in medicine. Naturally, optimization is a fundamentally important tool due to the limitation of the resources involved and the need for better decision making. The book starts with two papers on Intensity Modulated Radiation Therapy (IMRT). The first paper, by R. Acosta, M. Ehrgott, A. Holder, D. Nevin, J. Reese, and B. Salter, discusses an important subproblem in the design of radiation plans, the selection of beam directions. The manuscript compares different heuristic methods for beam selection on a clinical case and studies the effect of various dose calculation grid resolutions. The next paper, by M. Ehrgott, H. W. Hamacher, and M. Nußbaum, reviews several contributions on the decomposition of matrices as a model for rearranging leaves on a multileaf collimator. Such a process is essential for block radiation in IMRT in order to achieve desirable intensity profiles. Additionally, they present a new approach for minimizing the number of decomposition segments by sequentially solving this problem in polynomial time with respect to fixed decomposition times. The book continues with a paper by G. Deng and M. Ferris on the formulation of the day-to-day radiation therapy treatment planning problem as a dynamic program. The authors consider errors due to variations in the positioning of the patient and apply neuro-dynamic programming to compute approximate solutions for the dynamic optimization problems. The fourth paper, by H. Fohlin, L. Kliemann, and A. Srivastav, considers the seed reconstruction problem in brachytherapy as a minimum-weight perfect matching problem in a hypergraph. The problem is modeled as an integer linear program for which the authors develop an algorithm based on a randomized rounding scheme and a greedy approach. The book also covers other types of medical applications. For instance, in the paper by S. Sabesan, N. Chakravarthy, L. Good, K. Tsakalis, P. Pardalos, and L. Iasemidis, the authors propose an application of global optimization in the selection of critical brain sites prior to an epileptic seizure. The paper shows the advantages of using optimization (in particular nonconvex quadratic programming) in combination with measures of EEG dynamics, such as Lyapunov exponents, phase and energy, for long-term prediction of epileptic seizures. E. K. Lee presents the optimization-classification models within discriminant analysis, to develop predictive rules for large heterogeneous biological and medical data sets. As mentioned by the author, classification models are critical to medical advances as they can be used in genomic, cell molecular, and system level analysis to assist in early prediction, diagnosis and detection of diseases, as well as for intervention and monitoring. A wide range of applications are described in the paper. This book also includes two papers on inverse problems with applications to medical imaging. The paper by A. K. Louis presents an overview of several techniques that lead to robust algorithms for imaging reconstruction from the measured data. In particular, the inversion of the Radon transform is considered as a model case of inversion. In this paper, a reconstruction of the inside of a surprise egg is presented as a numerical example for 3D X-Ray reconstruction from real data. In the paper by M. Malinen, T. Huttunen, and J. 
Kaipio, an inverse problem related to ultrasound surgery is considered in an optimization framework that aims to control the optimal thermal dose to apply, for instance, in the treatment of breast cancer. Two alternative procedures (a scanning path optimization algorithm and a feedforward-feedback control method) are discussed in detail with numerical examples in 2D and 3D.

We would like to thank the authors for their contributions. It would not have been possible to reach the quality of this publication without the contributions of the many anonymous referees involved in the revision and acceptance process of the submitted manuscripts. Our gratitude is extended to them as well.

This book was generated mostly from invited talks given at the Workshop on Optimization in Medicine, July 20-22, 2005, which took place at the Institute of Biomedical Research in Light and Image (IBILI), University of Coimbra, Portugal. The workshop was organized under the auspices of the International Center for Mathematics (CIM, http://www.cim.pt) as part of the 2005 CIM Thematic Term on Optimization. Finally, we would like to thank Ana Luísa Custódio (FCT/UNL) for her help in the organization of the workshop and Pedro C. Martins (ISCAC/IPC) and João M. M. Patrício (ESTT/IPT) for their invaluable editorial support.

Coimbra, May 2007
C. J. S. Alves
P. M. Pardalos
L. N. Vicente

Contents

The influence of dose grid resolution on beam selection strategies in radiotherapy treatment design
Ryan Acosta, Matthias Ehrgott, Allen Holder, Daniel Nevin, Josh Reese, and Bill Salter

Decomposition of matrices and static multileaf collimators: a survey
Matthias Ehrgott, Horst W. Hamacher, and Marc Nußbaum

Neuro-dynamic programming for fractionated radiotherapy planning
Geng Deng and Michael C. Ferris

Randomized algorithms for mixed matching and covering in hypergraphs in 3D seed reconstruction in brachytherapy
Helena Fohlin, Lasse Kliemann, and Anand Srivastav

Global optimization and spatial synchronization changes prior to epileptic seizures
Shivkumar Sabesan, Levi Good, Niranjan Chakravarthy, Kostas Tsakalis, Panos M. Pardalos, and Leon Iasemidis

Optimization-based predictive models in medicine and biology
Eva K. Lee

Optimal reconstruction kernels in medical imaging
Alfred K. Louis

Optimal control in high intensity focused ultrasound surgery
Tomi Huttunen, Jari P. Kaipio, and Matti Malinen

List of Contributors

Ryan Acosta, Institute for Computational and Mathematical Engineering, Stanford University, Stanford, California, USA.
Niranjan Chakravarthy, Department of Electrical Engineering, Fulton School of Engineering, Arizona State University, Tempe, AZ 85281, USA.
Geng Deng, Department of Mathematics, University of Wisconsin at Madison, 480 Lincoln Dr., Madison, WI 53706, USA.
Matthias Ehrgott, Department of Engineering Science, The University of Auckland, Auckland, New Zealand.
Michael C.
Ferris, Computer Sciences Department, University of Wisconsin at Madison, 1210 W. Dayton Street, Madison, WI 53706, USA.
Helena Fohlin, Department of Oncology, Linköping University Hospital, 581 85 Linköping, Sweden.
Levi Good, The Harrington Department of Bioengineering, Fulton School of Engineering, Arizona State University, Tempe, AZ 85281, USA.
Horst W. Hamacher, Fachbereich Mathematik, Technische Universität Kaiserslautern, Kaiserslautern, Germany.
Allen Holder, Department of Mathematics, Trinity University, and Department of Radiation Oncology, University of Texas Health Science Center, San Antonio, Texas, USA.
Tomi Huttunen, Department of Physics, University of Kuopio, P.O. Box 1627, FIN-70211, Finland.
Leon Iasemidis, The Harrington Department of Bioengineering, Fulton School of Engineering, Arizona State University, Tempe, AZ 85281, USA.
Jari P. Kaipio, Department of Physics, University of Kuopio, P.O. Box 1627, FIN-70211, Finland.
Lasse Kliemann, Institut für Informatik, Christian-Albrechts-Universität zu Kiel, Christian-Albrechts-Platz 4, D-24098 Kiel, Germany.
Eva K. Lee, Center for Operations Research in Medicine and HealthCare, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205, USA.
Alfred K. Louis, Department of Mathematics, Saarland University, 66041 Saarbrücken, Germany.
Matti Malinen, Department of Physics, University of Kuopio, P.O. Box 1627, FIN-70211, Finland.
Daniel Nevin, Department of Computer Science, Texas A&M University, College Station, Texas, USA.
Marc Nußbaum, Fachbereich Mathematik, Technische Universität Kaiserslautern, Kaiserslautern, Germany.
Panos M. Pardalos, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL 32611, USA.
Josh Reese, Department of Mathematics, Trinity University, San Antonio, Texas, USA.
Shivkumar Sabesan, Department of Electrical Engineering, Fulton School of Engineering, Arizona State University, Tempe, AZ 85281, USA.
Bill Salter, Department of Radiation Oncology, University of Utah Huntsman Cancer Institute, Salt Lake City, Utah, USA.
Anand Srivastav, Institut für Informatik, Christian-Albrechts-Universität zu Kiel, Christian-Albrechts-Platz 4, D-24098 Kiel, Germany.
Kostas Tsakalis, Department of Electrical Engineering, Fulton School of Engineering, Arizona State University, Tempe, AZ 85281, USA.

The influence of dose grid resolution on beam selection strategies in radiotherapy treatment design

Ryan Acosta (1), Matthias Ehrgott (2), Allen Holder (3), Daniel Nevin (4), Josh Reese (5), and Bill Salter (6)

(1) Institute for Computational and Mathematical Engineering, Stanford University, Stanford, California, USA.
(2) Department of Engineering Science, The University of Auckland, Auckland, New Zealand.
(3) Department of Mathematics, Trinity University, and Department of Radiation Oncology, University of Texas Health Science Center, San Antonio, Texas, USA.
(4) Department of Computer Science, Texas A&M University, College Station, Texas, USA.
(5) Department of Mathematics, Trinity University, San Antonio, Texas, USA.
(6)
Department of Radiation Oncology, University of Utah Huntsman Cancer Institute, Salt Lake City, Utah, USA. [email protected]. Summary. The design of a radiotherapy treatment includes the selection of beam angles (geometry problem), the computation of a fluence pattern for each selected beam angle (intensity problem), and finding a sequence of configurations of a multileaf collimator to deliver the treatment (realization problem). While many mathematical optimization models and algorithms have been proposed for the intensity problem and (to a lesser extent) the realization problem, this is not the case for the geometry problem. In clinical practice, beam directions are manually selected by a clinician and are typically based on the clinician’s experience. Solving the beam selection problem optimally is beyond the capability of current optimization algorithms and software. However, heuristic methods have been proposed. In this paper we study the influence of dose grid resolution on the performance of these heuristics for a clinical case. Dose grid resolution refers to the spatial arrangement and size of dose calculation voxels. In particular, we compare the solutions obtained by the heuristics with those achieved by a clinician using a commercial planning system. Our results show that dose grid resolution has a considerable influence on the performance of most heuristics. Keywords: Intensity modulated radiation therapy, beam angle selection, heuristics, vector quantization, dose grid resolution, medical physics, optimization. R. Acosta et al. 1 Introduction Radiotherapy is the treatment of cancerous and displasiac tissues with ionizing radiation that can damage the DNA of cells. While non-cancerous cells are able to repair slightly damaged DNA, the heightened state of reproduction that cancerous cells are in means that small amounts of DNA damage can render them incapable of reproducing. The goal of radiotherapy is to exploit this therapeutic advantage by focusing the radiation so that enough dose is delivered to the targeted region to kill the cancerous cells while surrounding anatomical structures are spared and maintained at minimal damage levels. In the past, it was reasonable for a clinician to design radiotherapy treatments manually due to the limited capabilities of radiotherapy equipment. However, with the advent of intensity modulated radiotherapy (IMRT), the number of possible treatment options and the number of parameters have become so immense that they exceed the capabilities of even the most experienced treatment planner. Therefore, optimization methods and computer assisted planning tools have become a necessity. IMRT treatments use multileaf collimators to shape the beam and control, or modulate the dose that is delivered along a fixed direction of focus. IMRT allows beams to be decomposed into a (large) number of sub-beams, for which the intensity can be chosen individually. In addition, movement of the treatment couch and gantry allows radiation to be focused from almost any location on a (virtual) sphere around the target volume. For background on radiotherapy and IMRT we refer to [24] and [29]. Designing an optimal treatment means deciding on a huge number of parameters. 
The design process is therefore usually divided into three phases, namely 1) the selection of directions from which to focus radiation on the patient, 2) the selection of fluence patterns (amount of radiation delivered) for the directions selected in phase one, and 3) the selection of a mechanical delivery sequence that efficiently administers the treatment. Today there are many optimization methods for the intensity problem, with suggested models including linear (e.g., [21, 23]), integer (e.g., [13, 19]), and nonlinear (e.g., [15, 27]) formulations as well as models of multiobjective optimization (e.g., [7, 9, 22]). Similarly, algorithms have been proposed to find good multileaf collimator sequences to reduce treatment times and minimize between-leaf leakage and background dose [3, 25, 31]. Such algorithms are in use in existing radiotherapy equipment. Moreover, researchers have studied the mathematical structure of these problems to improve algorithm design or to establish the optimality of an algorithm [1, 2, 11]. In this paper we consider the geometry problem. The literature on this topic reveals a different picture than that of the intensity and realization problems. While a number of methods were proposed, there was a lack of understanding of the underlying mathematics. The authors in [4] propose a mathematical framework that unifies the approaches found in the literature. Influence of dose grid resolution on beam selection The focus of this paper is how different approximations of the anatomical dose affect beam selection. The beam selection problem is important for several reasons. First, changing beam directions during treatment is time consuming, and the number of directions is typically limited to reduce the overall treatment time. Since most clinics treat patients steadily throughout the day, patients are usually treated in daily sessions of 15-30 minutes to make sure that demand is satisfied. Moreover, short treatments are desirable because lengthy procedures increase the likelihood of a patient altering his or her position on the couch, which can lead to inaccurate and potentially dangerous treatments. Lastly, and perhaps most importantly, beam directions must be judiciously selected so as to minimize the radiation exposure to life-critical tissues and organs, while maximizing the dose to the targeted tumor. Selecting the beam directions is currently done manually, and it typically requires several trial-and-error iterations between selecting beam directions and calculating fluence patterns until a satisfactory treatment is designed. Hence, the process is time intensive and subject to the experience of the clinician. Finding a suitable collection of directions can take as much as several hours. The goal of using an optimization method to identify quality directions is to remove the dependency on a clinician’s experience and to alleviate the tedious repetitive process of selecting angles. To evaluate the dose distribution in the patient, it is necessary to calculate how radiation is deposited into the patient. There are numerous dose models in the literature, with the gold standard being a Monte Carlo technique that simulates each particle’s path through the anatomy. We use an accurate 3D dose model developed in [18] and [17]. This so-called finite sized pencil beam approach is currently in clinical use in numerous commercial planning systems in radiation treatment clinics throughout the world. 
Positions within the anatomy where dose is calculated may be referred to as dose points. Because each patient image represents a slice of the anatomy of varying thickness, and hence, each dose point represents a 3D hyper-rectangle whose dimensions are decided by both the slice thickness and the spacing of the dose points within a slice, such dose calculation points are also referred to as voxels in recognition of their 3D, or volumetric, nature. We point out that the terms dose point and dose voxel are used interchangeably throughout this text. The authors in [16] study the effects of different dose (constraint) point placement algorithms on the optimized treatment planning solution (for given beam directions) using open and wedged beams. They find very different dose patterns and conclude that 2000-9000 points are needed for 10 to 30 CT slices in order to obtain good results. The goal of this paper is to evaluate the influence of dose voxel spacing on automated beam selection. In Section 2 we introduce the beam selection problem, state some of its properties and define the underlying fluence map optimization problem used in this study. In Section 3 we summarize the beam selection methods considered R. Acosta et al. in the numerical experiments. These are set covering and scoring methods as well as a vector quantization technique. Section 4 contains the numerical results and Section 5 briefly summarizes the 2 The beam selection problem First we note that throughout this paper the terms beam, direction, and angle are used interchangeably. The beam selection problem is to find N positions for the patient and gantry from which the treatment will be delivered. The gantry of a linear accelerator can rotate around the patient in a great circle and the couch can rotate in the plane of its surface. There are physical restrictions on the directions that can be used because some couch and gantry positions result in collisions. In this paper we consider co-planar treatments. That is, beam angles are chosen on a great circle around the CT-slice of the body that contains the center of the tumor. We let A = {aj : j ∈ J} be a candidate collection of angles from which we will select N to treat the patient, where we typically consider A = {iπ/36 : i = 0, 1, 2, . . . , 71}. To evaluate a collection of angles, a judgment function is needed that describes how well a patient can be treated with that collection of angles [4]. We denote the power set of A by P(A) and the nonnegative extended reals by R∗+ . A judgment function is a function f : P(A) → R∗+ with the property that A ⊇ A implies f (A ) ≤ f (A ). The value of f (A ) is the optimal value of an optimization problem that decides a fluence pattern for angles A , i.e., for any A ∈ P(A), f (A ) = min{z(x) : x ∈ X(A )}, where z maps a fluence pattern x ∈ X(A ), the set of feasible fluence patterns for angles A , into R∗+ . As pointed out above, there is a large amount of literature on modeling and calculating f , i.e., solving the intensity problem. In fact, all commercial planning systems use an optimization routine to decide a fluence pattern, but the model and calculation method differ from system to system [30]. We assume that if a feasible treatment cannot be achieved with a given set of angles A (X(A ) = ∅) then f (A ) = ∞. We further assume that x is a vector in R|A|×I , where I is the number of sub-beams of a beam, and make the tacit assumptions that x(a,i) = 0 for all sub-beams i of any angle a ∈ A \ A . 
The non-decreasing behavior of f with respect to set inclusion is then modeled via the set of feasible fluence patterns X(A) by assuming that X(A ) ⊆ X(A ) whenever A ⊆ A . We say that the fluence pattern x is optimal for A if f (A ) = z(x) and x ∈ X(A ). All fluence map optimization models share the property that the quality of a treatment cannot deteriorate if more angles are used. The result that a judgment function is non-decreasing Influence of dose grid resolution on beam selection with respect to the number of angles follows from the definition of a judgment function and the above assumptions, see [4]. A judgment function is defined by the data that forms the optimization problem in (1). This data includes a dose operator D, a prescription P , and an objective function z. We let d(k,a,i) be the rate at which radiation along subbeam i in angle a is deposited into dose point k, and we assume that d(k,a,i) is nonnegative for each (k, a, i). These rates are patient-specific constants and the operator that maps a fluence pattern into anatomical dose (measured in Grays, Gy) is linear. We let D be the matrix whose elements are d(k,a,i) , where the rows are indexed by k and the columns by (a, i). The linear operator x → Dx maps the fluence pattern x to the dose that is deposited into the patient (see, e.g., [12] for a justification of the linearity). To avoid unnecessary notation we use i to indicate that we are summing over the sub-beams in an angle. So, i x(a,i) is the total exposure (or fluence) for angle a, and d is the aggregated rate at which dose is deposited into dose point k i (k,a,i) from angle a. There are a variety of forms that a prescription can have, each dependent on what the optimization problem is attempting to accomplish. Since the purpose of this paper is to compare the effect of dose point resolution on various approaches to the beam selection problem, we focus on one particular judgment function. Let us partition the set of dose voxels into those that are being targeted for dose deposition (i.e., within a tumor), those that are within a critical structure (i.e., very sensitive locations, such as brainstem, identified for dose avoidance), and those that represent normal tissue (i.e., non-specific healthy tissues which should be avoided, but are not as sensitive or important as critical structures). We denote the set of targeted dose voxels by T , the collection of dose points in the critical regions by C, and the remaining dose points by N . We further let DT , DC , and DN be the submatrices of D such that DT x, DC x, and DN x map the fluence pattern x into the targeted region, the critical structures, and the normal tissue, respectively. The prescription consists of T LB and T U B, which are vectors of lower and upper bounds on the targeted dose points, CU B, which is a vector of upper bounds on the critical structures, and N U B, which is a vector of upper bounds on the normal tissue. The judgment function is defined by the following linear program [8]. ⎫ f (A ) = min ωα + β + γ ⎪ ⎪ ⎪ ⎪ T LB − eα ≤ DT x ⎪ ⎪ ⎪ ⎪ DT x ≤ T U B ⎪ ⎪ ⎪ ⎪ DC x ≤ CU B + eβ ⎬ DN x ≤ N U B + eγ (2) ⎪ ⎪ T LB ≥ eα ⎪ ⎪ ⎪ ⎪ −CU B ≤ eβ ⎪ ⎪ ⎪ ⎪ x, γ ≥ 0 ⎪ ⎪ ⎭ x = 0 for all a ∈ A\A . i (a,i) R. Acosta et al. Here e is the vector of ones of appropriate dimension. The scalars α, β, and γ measure the worst deviation from T LB, CU B, and N U B for any single dose voxel in the target, the critical structures, and the normal tissue, respectively. 
For a fixed judgment function such as (2), the N -beam selection problem is min{f (A ) − f (A) : A ∈ P(A), |A | = N } = min{f (A ) : A ∈ P(A), |A | = N } − f (A). Note that the beam selection problem is the minimization of a judgment function f . The value of the judgment function itself is the optimal value of an optimization problem such as (2) that in turn has an objective function z(x) to be minimized. The minimization problem (3) can be stated as an extension of the optimization problem that defines f using binary variables. Let ya = 1 angle a is selected, 0 otherwise. Then the beam selection problem becomes ⎫ min z(x) ⎪ ⎪ ⎬ y = N a∈A a i x(i,a) ≤ M ya for all a ∈ A ⎪ ⎪ ⎭ x ∈ X(A), where M is a sufficiently large constant. While (4) is a general model that combines the optimal selection of beams with the optimization of their fluence patterns, such problems are currently intractable because they are beyond modern solution capabilities. Note that there are between 1.4×107 and 5.4×1011 subsets of {iπ/36 : i = 0, 1, 2, . . . , 71} for clinically relevant values of N ranging from 5 to 10. In any study where the solution of these MIPs is attempted [5, 13, 14, 19, 28], the set |A| is severely restricted so that the number of binary variables is manageable. This fact has led researchers to investigate heuristics. In the following section we present the heuristics that we include in our computational results in the framework of beam selectors introduced in [4]. The function g : W → V is a beam selector if W and V are subsets of P(A) and g(W ) ⊆ W for all W ∈ W. A beam selector g : W → V maps every collection of angles in W to a subcollection of selected angles. An N -beam selector is a beam selector with | ∪W ∈W g(W )| = N . A beam selector is informed if it is defined in terms of the value of a judgment function and it is weakly informed if it is defined in terms of the data (D, P, z). A beam selector is otherwise uninformed. If g is defined in terms of a random variable, then g is stochastic. An important observation is that for any collection of angles A ⊂ A there is not necessarily a unique optimal fluence pattern, which means that informed Influence of dose grid resolution on beam selection beam selectors are solver dependent. An example in Section 5 of [4] shows how radically different optimal fluence patterns obtained by different solvers for the same judgment function can be. There are several heuristic beam selection techniques in the literature. Each heuristic approach to the problem can be interpreted as choosing a best beam selector of a specified type as described in [4]. Additional references on methods not used in this study and methods for which the original papers do not provide sufficient detail to reproduce their results can be found in [4]. 3 The beam selection methods We first present the set covering approach developed by [5]. Anε angle a covers the dose point k if i d(k,a,i) ≥ ε. For each k ∈ T , let Ak = {a ∈ A : a cover dose point k}. A (set-covering) SC-N -beam selector is an N -beam selector having the form gsc : {Aεk : k ∈ T } → (P(Aεk )\∅) . k∈T Two observations are important: 1. We have Aεk = A for all k ∈ T if and only if 0 ≤ ε ≤ ε∗ := min{ i d(k,a,i) : k ∈ T, a ∈ A}. The most common scenario is that each targeted dose point is covered by every angle. 2. Since gsc cannot map to ∅, the mapping has to select at least one angle to cover each targeted dose point. 
It was shown in [4] that for 0 ≤ ε ≤ ε∗ , the set covering approach to beam selection is equivalent to the beam selection problem (3). This equivalence means that we cannot solve the set-covering beam selection problem efficiently. However, heuristically it is possible to restrict the optimization to subsets of SC-N -beam selectors. This was done in [5]. The second observation allows the formulation of a traditional set covering problem to identify a single gsc . For each targeted dose point k, let q(k,a,i) be 1 if sub-beam i in angle a covers dose point k, and 0 otherwise. For each angle a, define q(k,a,i) CUBk if C = ∅, ca = k∈C i (5) 0 if C = ∅, and q(k,a,i) ·d(k,a,i) cˆa = k∈C i if C = ∅, if C = ∅, where CU B is part of the prescription in (2). The costs ca and cˆa are large if sub-beams of a intersect a critical structure that has a small upper bound. R. Acosta et al. Cost coefficients cˆa are additionally scaled by the rate at which the dose is deposited into dose point k from sub-beam (a, i). The associated set covering problems are ca y a : q(k,a) ya ≥ 1, k ∈ T, ya = N, ya ∈ {0, 1} (7) min a cˆa ya : q(k,a) ya ≥ 1, k ∈ T, ya = N, ya ∈ {0, 1} . The angles for which ya∗ = 1 in an optimal solution y ∗ of (7) or (8) are selected and define a particular SC-N -beam selector. Note that such N -beam selectors are weakly informed, if not at all informed, as they use the data but do not evaluate f . These particular set covering problems are generally easy to solve. In fact, in the common situation of Aεk = A for k ∈ T , (7) and (8) reduce to selecting N angles in order of increasing ca or cˆa , respectively. This leads us to scoring techniques for the beam selection problem. We can interpret ca or cˆa as a score of angle a. A (scoring) S-N -beam selector is an N -beam selector gs : {A} → P(A). It is not surprising that the scoring approach is equivalent to the beam selection problem. The difficulty here lies in defining scores that accurately predict angles that are used in an optimal treatment. The first scoring approach we consider is found in [20], where each angle is assigned the score ˆ(a,i) 2 1 d(k,a,i) · x , (9) ca = |T | TG i k∈T where xˆ(a,i) = min{min{CU Bk /d(k,a,i) : k ∈ C}, min{N U Bk /d(k,a,i) : k ∈ N }} and T G is a goal dose to the target and T LB ≤ T G ≤ T U B. An angle’s score increases as the sub-beams that comprise the angle are capable of delivering more radiation to the target without violating the restrictions placed on the non-targeted region(s). Here, high scores are desirable. The scoring technique uses the bounds on the non-targeted tissues to form constraints, and the score represents how well the target can be treated while satisfying these constraints. This is the reverse of the perspective in (7) and (8). Nevertheless, mathematically, every scoring technique is a set covering problem [4]. Another scoring method is found in [26]. Letting x∗ be an optimal fluence pattern for A, the authors in [26] define the entropy of an angle by δa := − i x∗(a,i) ln x∗(a,i) and the score of a is Influence of dose grid resolution on beam selection ca = 1 − δa − min{δa : a ∈ A} . max{δa : a ∈ A} In this approach, an angle’s score is high if the optimal fluence pattern of an angle’s sub-beams is uniformly high. So, an angle with a single high-fluence sub-beam would likely have a lower score than an angle with a more uniform fluence pattern. Unlike the scoring procedure in [20], this technique is informed since it requires an evaluation of f . 
The last of the techniques we consider is based on the image compression technique called vector quantization [10] (see [6] for further information on vector quantization). A is a contiguous subset of A if A is an ordered subset of the form {aj , aj+1 , . . . , aj+r }. A contiguous partition of A is a collection of contiguous subsets of A that partition A, and we let Wvq (N ) be the collection of N element contiguous partitions of A. A VQ-N -beam selector is a function of the form gvq : {Wj : j = 1, 2, . . . , N } → {{aj } : aj ∈ Wj }, where {Wj : j = 1, 2, . . . , N } ∈ Wvq (N ). The image of Wj is a singleton {aj }, and we usually write aj instead of {aj }. The VQ-N -beam selector relies on the probability that an angle is used in an optimal treatment. Letting α(a) be this probability, we have that the distortion of a quantizer is N α(a) · a − gvq (Wj ) 2 . j=1 a∈Wj Once the probability distribution α is known, a VQ-N -beam selector is calculated to minimize distortion. In the special case of a continuous A, the authors in [6] show that the selected angles are the centers-of-mass of the contiguous sets. We mimic this behavior in the discrete setting by defining a∈W a · α(a) . (11) gvq (Wj ) = j a∈Wj α(a) This center-of-mass calculation is not exact for discrete sets since the centerof-mass may not be an element of the contiguous set. Therefore angles not in A are mapped to their nearest neighbor, with ties being mapped to the larger element of A. Vector quantization heuristics select a contiguous partition from which a single VQ-N -beam selector is created according to condition (11). The process in [10] starts by selecting the zero angle as the beginning of the first contiguous set. The endpoints of the contiguous sets are found by forming the cumulative density and evenly dividing its range into N intervals. To improve this, we could use the same rule and rotate the starting angle through the 72 R. Acosta et al. candidates. We could then evaluate f over these sets of beams and take the smallest value. The success of the vector quantization approach directly relies on the ability of the probability distribution to accurately gauge the likelihood of an angle being used in an optimal N -beam treatment. An immediate idea is to make a weakly informed probability distribution by normalizing the scoring techniques in (5), (6) and (9). Additionally, the scores in (10) are normalized to create an informed model of α. We test these methods in Section 4. An alternative informed probability density is suggested in [10], where the authors assume that an optimal fluence pattern x∗ for f (A) contains information about which angles should and should not be used. Let ∗ x(a,i) i α(a) = . x∗ (a,i) a∈A i Since optimal fluence patterns are not unique, these probabilities are solverdependent. In [4] an algorithm is given to remove this solver dependency. The algorithm transforms an optimal fluence x∗ into a balanced probability density α, i.e., one that is as uniform as possible, by solving the problem lexmin (z(x), sort(x)) , where sort is a function that reorders the components of the vector x in a non-increasing order. The algorithm that produces the balanced solution iteratively reduces the maximum exposure time of the sub-beams that are not fixed, which intuitively means that we are re-distributing fluence over the remaining sub-beams. As the maximum fluence decreases, the fluences for some angles need to increase to guarantee an optimal treatment. 
The algorithm terminates as soon as the variables that are fixed by this “equalizing” process attain one of the bounds that describe an optimal treatment. At the algorithm’s termination, a further reduction of sub-beam fluences whose α value is high will no longer allow an optimal treatment. 4 Numerical comparisons In this section we numerically compare how the resolution of the dose points affects set cover (SC), scoring (S), and vector quantization (VQ) 9-beam selectors. The Radiotherapy optimAl Design software (RAD) at http://www.trinity.edu/aholder/research/oncology/rad.html was altered to accommodate the different beam selectors. This system is written in Matlabc and links to the CPLEXc solvers (CPLEX v. 6.6. was used). The code, except for commercial packages, and all figures used in this paper (and more) are available at http://lagrange.math.trinity.edu/tumath/ Influence of dose grid resolution on beam selection Fig. 1. The target is immediately to the left of the brainstem. The critical structures are the brain stem and the two eye sockets. The clinical example is an acoustic neuroma in which the target is immediately adjacent to the brain stem and is desired to receive between 48.08 and 59.36 Gy. The brain stem is restricted to no more than 50 Gy and the eye sockets to less than 5 Gy. Each image represents a 1.5 mm swath of the patient, and the 7 images in Figure 1 were used, creating a 10.5 mm thickness. The full clinical set contained 110 images, but we were unable to handle the full complement because of inherent memory limitations in Matlab. Angles are selected from {iπ/36 : i = 1, 2, . . . , 71}. These candidate angles were assigned twelve different values as follows. An optimal treatment (according to judgment function (2)) for the full set of candidate angles was found with CPLEX’s primal, dual, and interior-point methods and a balanced solution according to (12) was also calculated. The angle values were either the average sub-beam exposure or the maximal sub-beam exposure. So, “BalancedAvg” indicates that the angle values were created from the balanced solution of a 72-angle optimal treatment, where the angle values were the average sub-beam exposure. Similar nomenclature is used for “DualMax,” “PrimalAvg,” and so on. This yields eight values. The scaled and unscaled set cover values in (5) and (6) were also used and are denoted by “SC1” and “SC2.” The informed entropy measure in (10) is denoted by “Entropy,” and the scoring technique in (9) is denoted by “S.” We used T G = 0.5(T LB + T U B) in (9). So, in total we tested twelve different angle values for each of the beam selectors. The dose points were placed on 3 mm and 5 mm grids throughout the 3D patient space, and each dose point was classified by the type of tissue it represented. Since the images were spaced at 1.5 mm, we point out that dose points were not necessarily centered on the images in the superior inferior direction. The classification of whether or not a dose point was targeted, critical, or normal was accomplished by relating the dose point to the hyperrectangle in which it was contained. In a clinical setting, the anatomical dose is typically approximated by a 1 to 5 mm spacing, so the experiments are similar to clinical practice. However, as with the number of images, Matlab’s R. Acosta et al. 40 0.8 Fig. 2. The isodose contours for the balanced 72-angle treatment with 5 mm spacing. 70 Fig. 3. The DVH for the balanced 72angle treatment with 5 mm spacing. 60 0.8 50 0.6 20 0.2 10 10 Fig. 4. 
The isodose contours for the balanced 72-angle treatment with 3 mm spacing. Fig. 5. The DVH for the balanced 72angle treatment with 3 mm spacing. memory limitation did not allow us to further increase the resolution (i.e., decrease the dose point spacing). Treatments are judged by viewing the level curves of the radiation per slice, called isodose curves, and by their cumulative dose volume histogram (DVH). A dose volume histogram is a plot of percent dose (relative to T LB) versus the percent volume. The isodose curves and DVHs for the balanced 72angle treatment are shown for the 3 mm and 5 mm resolutions in Figures 2 through 5. An ideal DVH would have the target at 100% for the entire volume and then drop immediately to zero, indicating that the target is treated exactly as specified with no under or over dosing. The curves for the critical structures would instead drop immediately to zero, meaning that they receive no radiation. The DVHs in Figures 3 and 5 follow this trend and are, therefore, clinically reasonable. The curves from upper-right to lower left are for the target, the brain stem, normal tissue, and the eye sockets. The eye socket curves drop immediately to zero as desired and appear on the axes. The 3 mm brain stem curve indicates that this structure is receiving more radiation than with the 5 mm resolution. While the fluence maps generated for Influence of dose grid resolution on beam selection Fig. 6. The isodose contours for a clinical serial tomotherapy treatment. Fig. 7. The DVH for a clinical serial tomotherapy treatment. these two treatments are different, the largest part of this discrepancy is likely due to the 3 mm spacing more accurately representing the dose variation. Figures 6 and 7 are from a commercially available, clinically used serial tomotherapy treatment system (Corvus v6.1 – Nomos Inc., Cranberry Township, PA), which uses 72 equally spaced angles (the curve for the normal tissue is not displayed). Two observations are important. First, the similarity between the DVHs of our computed solutions and Corvus’ DVHs suggests that our dose model and judgment function are reasonable. Second, if our resolutions were decreased to 2 or 1.5 mm, it is likely that we would observe a brain stem curve more closely akin to that in Corvus’ DVH. We point out that the judgment function and solution procedure are different for the Corvus system (and are proprietary). A natural question is whether or not the dose point resolution affects the angle values. We expected differences, but were not clear as to how much of an effect to expect. We were intrigued to see that some of the differences were rather dramatic. The 3 mm and 5 mm “average” values are shown in Table 1. The selected angles and solution times are shown in Tables 2 and 3. The angles vary significantly from beam selector to beam selector and for the same beam selector with different resolutions. This variability of performance of the heuristics explored here is likely attributable to the redefinition of the solution space that occurs when the judgment function is made “aware” of dose voxels at the interface region of targeted and avoided structures. Measuring the quality of the selected angles is not obvious. One measure is of course the value of the judgment function. This information is shown in Table 4. The judgment values indicate that the 5 mm spacing is too course for the fluence model to adequately address the trade-offs between treating the tumor and not treating the brain stem. 
The 5 mm spacing so crudely approximates R. Acosta et al. Table 1. The angle values. The top rows are with 5 mm resolution and the bottom rows are with 3 mm resolution. BalancedAvg BalancedMax PrimalAvg 25 Entropy 1.2 1 0.8 0.6 1 S 2.5 x 10 0.2 10 20 30 40 50 60 70 80 1.2 1 0.8 0.6 0.4 0.2 10 20 30 40 50 60 70 80 the anatomical structures that it was always possible to design a 9-beam treatment that treated the patient as well as a 72-beam treatment. The problem is that the boundaries between target and critical structures, which is where over and under irradiating typically occurs, are not well defined, and hence, the regions that are of most importance are largely ignored. These boundaries are better defined by the 3 mm grid, and a degradation in the judgment value is observed. Judgment values do not tell the entire story, though, and are only one of many ways to evaluate the quality of treatment plans. The mean judgment values of the different techniques all approach the goal value of −5.0000, and claiming that one technique is better than another based on these values is tenuous. However, there are some outliers, and most significantly the scoring values did poorly with a judgment value of 3.0515 in the scoring and set cover beam selectors. The resulting 3 mm isodose curves and DVH for the scoring 9-beam selector are seen in Figures 8 and 9. These treatments are clearly less than desirable, especially when compared to Figures 4 and 5. Besides the judgment value, another measure determines how well the selected angles represent the interpretation of the angle values. If we think of the angle values as forming a probability density, then the expected value of the nine selected angles represents the likelihood of the angle collection being optimal. These expected values are found in Table 5. Influence of dose grid resolution on beam selection Table 2. The angles selected by the different beam selectors with 3 mm resolution. The times are in seconds and include the time needed to select angles and design a treatment with these angles. Angle Value Selected Angles Set Cover BalancedAvg 15 BalancedMax 10 PrimalAvg 15 PrimalMax 15 DualAvg 10 DualMax 15 InteriorAvg 15 InteriorMax 10 SetCover 1 20 SetCover 2 20 Scoring 245 Entropy 10 113.51 126.23 43.96 45.52 34.02 68.80 115.75 128.66 90.91 134.43 108.19 144.43 BalancedAvg 15 BalancedMax 10 PrimalAvg 15 PrimalMax 15 DualAvg 10 DualMax 15 InteriorAvg 15 InteriorMax 10 SetCover1 20 SetCover2 20 Scoring 245 Entropy 10 104.93 108.29 48.59 46.22 36.24 66.56 105.91 107.92 83.87 104.36 122.59 235.84 BalancedAvg BalancedMax PrimalAvg PrimalMax DualAvg DualMax InteriorAvg InteriorMax SetCover1 SetCover2 Scoring Entropy 197.62 71.93 55.27 121.91 115.53 126.94 198.43 71.98 52.56 187.10 134.33 56.14 The trend to observe is that the set cover and scoring techniques select angles with higher expected values than the vector quantization technique, meaning that the angles selected more accurately represent the intent of the angle values. This is not surprising, as the set cover and scoring methods can be interpreted as attempting to maximize their expected value. However, if R. Acosta et al. Table 3. The angles selected by the different beam selectors with 5 mm resolution. The times are in seconds and include the time needed to select angles and design a treatment with these angles. 
Angle Value Selected Angles Set Cover BalancedAvg 55 70 75 110 155 250 BalancedMax 110 120 155 225 245 250 PrimalAvg 45 55 100 150 190 250 PrimalMax 45 55 100 150 190 250 DualAvg 20 45 110 160 230 250 DualMax 20 45 110 160 230 250 InteriorAvg 55 70 75 110 155 250 InteriorMax 110 120 155 225 245 250 SetCover 1 20 145 150 155 200 320 SetCover 2 20 140 145 150 155 200 Scoring 95 185 230 260 265 270 Entropy 70 75 110 155 225 4.32 4.46 4.81 4.89 4.96 5.04 4.67 4.90 5.43 5.79 5.10 5.32 BalancedAvg 55 70 75 110 155 250 BalancedMax 110 120 155 225 245 250 PrimalAvg 45 55 100 150 190 250 PrimalMax 45 55 100 150 190 250 DualAvg 20 45 110 160 230 250 DualMax 20 45 110 160 230 250 InteriorAvg 55 70 75 110 155 250 InteriorMax 110 120 155 225 245 250 SetCover1 20 145 150 155 200 320 SetCover2 20 140 145 150 155 200 Scoring 95 185 230 260 265 270 Entropy 70 75 110 155 225 250 2.12 2.34 2.68 2.72 2.88 2.94 2.48 2.78 3.31 3.53 3.01 3.24 BalancedAvg 40 75 105 BalancedMax 40 80 115 PrimalAvg 40 85 105 PrimalMax 30 80 105 DualAvg 20 75 130 DualMax 20 75 130 InteriorAvg 40 75 105 InteriorMax 40 80 115 SetCover1 40 75 110 SetCover2 40 75 110 Scoring 185 190 195 Entropy 45 75 105 3.77 3.41 3.32 4.11 3.99 4.11 4.40 4.03 4.70 4.88 5.58 4.75 the angle assignments do not accurately gauge the intrinsic value of an angle, such accuracy is misleading. As an example, both the set cover and scoring methods have an expected value of 1 with respect to the scoring angle values in the 5 mm case. In this case, the only angles with nonzero values are 185 Influence of dose grid resolution on beam selection Table 4. The judgment values of the selected angles. SC VQ 5 mm 3 mm 5 mm 3 mm 5 mm 3 mm BalancedAvg BalancedMax PrimalAvg PrimalMax DualAvg DualMax InteriorAvg InteriorMax SC1 SC2 S Entropy -5.0000 -4.8977 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -4.8977 -4.9841 -4.9820 3.0515 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -4.8714 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -4.8714 -4.9841 -4.9820 3.0515 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -4.9194 -5.0000 -5.0000 -5.0000 -3.5214 -4.8909 -4.9194 -5.0000 -5.0000 -4.9984 -4.9967 -5.0000 -4.3092 -5.0000 -4.3048 -5.0000 -4.8538 -5.0000 3 mm Mean -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -5.0000 -4.9731 -4.9230 -5.0000 -5.0000 -4.5071 -4.9636 -4.9731 -4.9230 -4.9894 -4.9875 0.3688 -5.0000 Table 5. The expected values of the selected angles. BalancedAvg BalancedMax PrimalAvg PrimalMax DualAvg DualMax InteriorAvg InteriorMax SC1 SC2 S Entropy SC 3 mm 5 mm 3 mm 5 mm VQ 3 mm 5 mm 0.2157 0.2613 0.4191 0.4194 0.6144 0.5264 0.2157 0.2613 0.1492 0.1523 0.2352 0.3176 0.2176 0.2673 0.4191 0.4194 0.6144 0.5264 0.2176 0.2673 0.1492 0.1523 0.2352 0.3303 0.2059 0.3045 0.8189 0.7699 0.7443 0.7443 0.2059 0.3045 0.1461 0.1491 1.0000 0.332 0.1506 0.1234 0.1600 0.1362 0.0394 0.0359 0.1506 0.1234 0.1251 0.1234 0.1673 0.1399 0.2059 0.3045 0.8189 0.7699 0.7443 0.7443 0.2059 0.3045 0.1461 0.1491 1.0000 0.3320 0.1189 0.1344 0.0487 0.0429 0.3207 0.3207 0.1189 0.1344 0.1248 0.1273 0.5058 0.1402 and 275, and the perfect expected value only indicates that these two angles are selected. A scoring technique that only scores 2 of the 72 possible angles is not meaningful, and in fact, the other 7 angles could be selected at random. The expected values in Table 5 highlight how the angle assignments differ in philosophy. 
The weakly informed angle values attempt to measure each angle’s individual worth in an optimal treatment, regardless of which other angles are selected. The informed values allow the individual angles to compete through the optimization process for high values, and hence, these values are tempered with the knowledge that other angles will be used. The trend in Table 5 is that informed expected values are lower than weakly informed values, although this is not a perfect correlation.

Fig. 8. The 3 mm isodose contours for the balanced treatment when 9 angles were selected with a scoring method and scoring angle values.
Fig. 9. The 3 mm DVH for the balanced treatment when 9 angles were selected with a scoring method and scoring angle values.

From the previous discussions, it is clear that beam selectors depend on the dose point resolution, but none of this discussion attempts to quantify the difference. We conclude with such an attempt. For each of the selected sets of angles, we calculated (in degrees) the difference between consecutive angles. These distances provide a measure of how the angles are spread around the great circle without a concern about specific angles. These values were compared in the 3 mm and 5 mm cases. For example, the nine angles selected by the VQ selector with the BalancedAvg angle values were {30, 60, 90, 120, 155, 205, 255, 295, 340} and {40, 75, 105, 140, 185, 230, 270, 305, 345} for the 3 mm and 5 mm cases, respectively. The associated relative spacings are {30, 30, 30, 35, 50, 50, 40, 45, 50} and {35, 30, 35, 45, 45, 40, 35, 40, 55}. This information allows us to ask whether or not one set of angles can be rotated to obtain the other. We begin by taking the absolute values of the differences between corresponding relative spacings, so for this example we obtain

3 mm relative spacing   30  30  30  35  50  50  40  45  50
5 mm relative spacing   35  30  35  45  45  40  35  40  55
Difference               5   0   5  10   5  10   5   5   5

Depending on how the angles from the 3 mm and 5 mm cases interlace, we rotate (or shift) the first set to either the left or the right and repeat the calculation. In our example, the first angle in the 3 mm selection is 30, which is positioned between angles 40 and 345 in the 5 mm case. So we shift the 3 mm relative spacings to the left to obtain the following differences (notice that the first 30 of the 3 mm above is now compared to the last 55 of the 5 mm case).

3 mm relative spacing   30  30  35  50  50  40  45  50  30
5 mm relative spacing   35  30  35  45  45  40  35  40  55
Difference               5   0   0   5   5   0  10  10  25

The smallest aggregate difference, which is 50 in the first comparison versus 60 in the second, is used in our calculations.

Table 6. The mean and standard deviation of the (minimum) difference between the 3 mm and 5 mm cases.
                      Mean                     Variance
              SC       S       VQ       SC        S         VQ
BalancedAvg   45.56   47.78    5.55   2465.30   4706.90     9.03
BalancedMax   40.00   45.56   11.11   2125.00   3346.50    73.61
PrimalAvg     28.89   28.89   14.44    236.11    236.11   165.28
PrimalMax     16.67   16.67   13.33    325.00    325.00   131.25
DualAvg       37.78   37.78   16.67   1563.20   1563.20   150.00
DualMax       36.67   36.67   15.56   1050.00   1050.00    84.03
InteriorAvg   45.56   47.78    5.56   2465.30   4706.90     9.03
InteriorMax   40.00   45.56   11.11   2125.00   3346.50    73.61
SC1            0.00    0.00    1.11      0.00      0.00     4.86
SC2            0.00    0.00    0.00      0.00      0.00     0.00
S             40.00   40.00   35.56   3481.20   3481.20  1909.00
Entropy       44.44   44.44   13.33   1259.00   1552.80    81.25
Average       31.30   32.59   11.94   1424.60   2026.30
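The relative-spacing comparison in this example is easy to automate. The sketch below is our own code, with names of our choosing; it reproduces the two comparisons of the worked example, the aligned one and the one with the 3 mm spacings shifted one position to the left, and returns the aggregate differences 50 and 60 quoted above. The entries of Table 6 are statistics of the smaller of such comparisons over the angle-value/selector combinations.

```python
def spacings(angles):
    """Consecutive angular gaps (degrees) around the full circle."""
    a = sorted(angles)
    return [(a[(i + 1) % len(a)] - a[i]) % 360 for i in range(len(a))]

def aggregate_difference(s1, s2):
    return sum(abs(x - y) for x, y in zip(s1, s2))

def shift_left(s):
    return s[1:] + s[:1]

set_3mm = [30, 60, 90, 120, 155, 205, 255, 295, 340]
set_5mm = [40, 75, 105, 140, 185, 230, 270, 305, 345]
s3, s5 = spacings(set_3mm), spacings(set_5mm)

print(s3)   # [30, 30, 30, 35, 50, 50, 40, 45, 50]
print(s5)   # [35, 30, 35, 45, 45, 40, 35, 40, 55]
print(aggregate_difference(s3, s5))               # 50 (aligned comparison)
print(aggregate_difference(shift_left(s3), s5))   # 60 (3 mm spacings shifted left)
```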
We do not include all possible shifts of the first set because some spatial positioning should be respected, and our calculation honors this by comparing spacing between neighboring angles. Table 6 contains the means and standard deviations of the relative spacing differences. A low standard deviation indicates that the selected angles in one case are simply rotated versions of the other. For example, the VQ selector with the InteriorAvg angle values has a low standard deviation of 9.03, which means that we can nearly rotate the 3 mm angles of {30, 60, 90, 120, 155, 205, 255, 295, 340} to obtain the 5 mm angles of {40, 75, 105, 140, 185, 230, 270, 305, 345}. In fact, if we rotate the first set 15 degrees, the average discrepancy is the stated mean value of 5.56. A low mean value but a high standard deviation means that it is possible to rotate the 3 mm angles so that several of the angles nearly match but only at the expense of making the others significantly different. Methods with high mean and standard deviations selected substantially different angles for the 3 mm and 5 mm cases. The last row of Table 6 lists the column averages. These values lead us to speculate that the VQ techniques are less susceptible to changes in the dose point resolution. We were surprised that the SC1 and SC2 angle values were unaffected by the dose point resolution, and that each corresponding beam selector chose (nearly) the same angles independent of the resolution. In any event, it is clear that the dose point resolution generally affects each of the beam selectors. Besides the numerical comparisons just described, a basic question is whether or not the beam selectors produce clinically adequate angles. Figures R. Acosta et al. Fig. 10. Isodose contours for initial design of a nine angle clinical treatment plan. Fig. 11. The DVH for the balanced 72-angle treatment with 5 mm spacing. Fig. 12. The isodose contours for a clinically designed treatment based on the 9 angles selected by the set cover method with BalancedAvg angle values and 3 mm spacing. Fig. 13. The DVH for a clinically designed treatment based on the 9 angles selected by the set cover method with BalancedAvg angle values and 3 mm spacing. 10 and 11 depict the isodose contours and a DVH of a typical clinical 9-angle treatment. This is not necessarily a final treatment plan, but rather what might be typical of an initial estimate of angles to be used. Treatment planners would typically adjust these angles in an attempt to improve the design. Using the BalancedAvg angle values, we used Nomos’ commercial software to design the fluence patterns for 9-angle treatments with the angles produced by the three different techniques with 3 mm spacing. Figures 12 through 17 contain the isodose contours and DVHs from the Corvus software. The set cover and scoring treatment plans in Figures 12 through Figures 15 are clearly inferior to the initial clinical design in that they encroach significantly onto critical structures and normal healthy tissue with high isodose Influence of dose grid resolution on beam selection Fig. 14. The isodose contours for a clinically designed treatment based on the 9 angles selected by the scoring method with BalancedAvg angle values and 3 mm spacing. Fig. 15. The DVH for a clinically designed treatment based on the 9 angles selected by the scoring method with BalancedAvg angle values and 3 mm spacing. Fig. 16. 
The isodose contours for a clinically designed treatment based on the 9 angles selected by the vector quantization method with BalancedAvg angle values and 3 mm spacing. Fig. 17. The DVH for a clinically designed treatment based on the 9 angles selected by the vector quantization method with BalancedAvg angle values and 3 mm spacing. with 5 mm spacing. levels. The problem is that the 9 angles are selected too close to each other. The fact that these are similar treatments is not surprising since the angle sets only differed by one angle. The vector quantization treatment in Figures 16 and 17 appears to be clinically relevant in that it compares favorably with the initial design of the 9 angle clinical plan (i.e., Figures 10 to 16 comparison and Figures 11 to 17 comparison). 5 Conclusions We have implemented several heuristic beam selection techniques to investigate the influence of dose grid resolution on these automated beam selection strategies. Testing the heuristics on a clinical case with two different dose point resolutions we have for the first time studied this effect and have found it to be R. Acosta et al. significant. We have also (again for the first time) compared the results with those from a commercial planning system. We believe that the effect of dose grid resolution becomes smaller as resolution increases, but further research is necessary to test that hypothesis. References 1. R. K. Ahuja and H. W. Hamacher. A network flow algorithm to minimize beam-on-time for unconstrained multileaf collimator problems in cancer radiation therapy. Networks, 45:36–41, 2004. 2. D. Baatar, H. W. Hamacher, M. Ehrgott, and G. J. Woeginger. Decomposition of integer matrices and multileaf collimator sequencing. Discrete Applied Mathematics, 152:6–34, 2005. 3. T. R. Bortfeld, A. L. Boyer, D. L. Kahler, and T. J. Waldron. X-ray field compensation with multileaf collimators. International Journal of Radiation Oncology, Biology, Physics, 28:723–730, 1994. 4. M. Ehrgott, A. Holder, and J. Reese. Beam selection in radiotherapy design. Linear Algebra and its Applications, doi: 10.1016/j.laa.2007.05.039, 2007. 5. M. Ehrgott and R. Johnston. Optimisation of beam directions in intensity modulated radiation therapy planning. OR Spectrum, 25:251–264, 2003. 6. A. Gersho and R. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, MA, 1991. 7. H. W. Hamacher and K. H. K¨ ufer. Inverse radiation therapy planing – A multiple objective optimization approach. Discrete Applied Mathematics, 118: 145–161, 2002. 8. A. Holder. Designing radiotherapy plans with elastic constraints and interior point methods. Health Care Management Science, 6:5–16, 2003. 9. A. Holder. Partitioning multiple objective optimal solutions with applications in radiotherapy design. Optimization and Engineering, 7:501–526, 2006. 10. A. Holder and B. Salter. A tutorial on radiation oncology and optimization. In H. Greenberg, editor, Emerging Methodologies and Applications in Operations Research, chapter 4. Kluwer Academic Press, Boston, MA, 2004. 11. S. Kamath, S. Sahni, J. Li, J. Palta, and S. Ranka. Leaf sequencing algorithms for segmented multileaf collimation. Physics in Medicine and Biology, 48:307– 324, 2003. 12. P. Kolmonen, J. Tervo, and P. Lahtinen. Use of the Cimmino algorithm and continuous approximation for the dose deposition kernel in the inverse problem of radiation treatment planning. Physics in Medicine and Biology, 43:2539–2554, 1998. 13. E. K. Lee, T. Fox, and I. Crocker. 
Integer programming applied to intensitymodulated radiation therapy treatment planning. Annals of Operations Research, 119:165–181, 2003. 14. G. J. Lim, M. C. Ferris, S. J. Wright, D. M. Shepard, and M. A. Earl. An optimization framework for conformal radiation treatment planning. INFORMS Journal on Computing, 13:366–380, 2007. 15. J. L¨ of. Development of a general framework for optimization of radiation therapy. PhD thesis, Department of Medical Radiation Physics, Karolinska Institute, Stockholm, Sweden, 2000. Influence of dose grid resolution on beam selection 16. S. Morrill, I. Rosen, R. Lane, and J. Belli. The influence of dose constraint point placement on optimized radiation therapy treatment planning. International Journal of Radiation Oncology, Biology, Physics, 19:129–141, 1990. 17. P. Nizin, A. Kania, and K. Ayyangar. Basic concepts of corvus dose model. Medical Dosimetry, 26:65–69, 2001. 18. P. Nizin and R. Mooij. An approximation of central-axis absorbed dose in narrow photon beams. Medical Physics, 24:1775–1780, 1997. 19. F. Preciado-Walters, R. Rardin, M. Langer, and V. Thai. A coupled column generation, mixed integer approach to optimal planning of intensity modulated radiation therapy for cancer. Mathematical Programming, 101:319–338, 2004. 20. A. Pugachev and L. Xing. Pseudo beam’s-eye-view as applied to beam orientation selection in intensity-modulated radiation therapy. International Journal of Radiation Oncology, Biology, Physics, 51:1361–1370, 2001. 21. H. E. Romeijn, R. K. Ahuja, J. F. Dempsey, A. Kumar, and J. G. Li. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning. Physics in Medicine and Biology, 48:3521–3542, 2003. 22. H. E. Romeijn, J. F. Dempsey, and J. G. Li. A unifying framework for multicriteria fluence map optimization models. Physics in Medicine and Biology, 49:1991–2013, 2004. 23. I. J. Rosen, R. G. Lane, S. M. Morrill, and J. Belli. Treatment planning optimisation using linear programming. Medical Physics, 18:141–152, 1991. 24. W. Schlegel and A. Mahr. 3D-Conformal Radiation Therapy: A Multimedia Introduction to Methods and Techniques. Springer Verlag, Heidelberg, 2002. Springer Verlag, Berlin. 25. R. A. C. Siochi. Minimizing static intensity modulation delivery time using an intensity solid paradigm. International Journal of Radiation Oncology, Biology, Physics, 43:671–689, 1999. 26. S. S¨ oderstr¨ om and A. Brahme. Selection of beam orientations in radiation therapy using entropy and fourier transform measures. Physics in Medicine and Biology, 37:911–924, 1992. 27. S. V. Spirou and C. S. Chui. A gradient inverse planning algorithm with dosevolume constraints. Medical Physics, 25:321–333, 1998. 28. C. Wang, J. Dai, and Y. Hu. Optimization of beam orientations and beam weights for conformal radiotherapy using mixed integer programming. Physics in Medicine and Biology, 48:4065–4076, 2003. 29. S. Webb. Intensity-modulated radiation therapy (Series in Medical Physics). Institute of Physics Publishing, 2001. 30. I. Winz. A decision support system for radiotherapy treatment planning. Master’s thesis, Department of Engineering Science, School of Engineering, University of Auckland, New Zealand, 2004. 31. P. Xia and L. Verhey. Multileaf collimator leaf sequencing algorithm for intensity modulated beams with multiple segments. Medical Physics, 25:1424–1434, 1998. Decomposition of matrices and static multileaf collimators: a survey Matthias Ehrgott1∗ , Horst W. 
Hamacher2† , and Marc Nußbaum2 1 Department of Engineering Science, The University of Auckland, Auckland, New Zealand. [email protected] Fachbereich Mathematik, Technische Universit¨ at Kaiserslautern, Kaiserslautern, Germany. [email protected] Summary. Multileaf Collimators (MLC) consist of (currently 20-100) pairs of movable metal leaves which are used to block radiation in Intensity Modulated Radiation Therapy (IMRT). The leaves modulate a uniform source of radiation to achieve given intensity profiles. The modulation process is modeled by the decomposition of a given non-negative integer matrix into a non-negative linear combination of matrices with the (strict) consecutive ones property. In this paper we review some results and algorithms which can be used to minimize the time a patient is exposed to radiation (corresponding to the sum of coefficients in the linear combination), the set-up time (corresponding to the number of matrices used in the linear combination), and other objectives which contribute to an improved radiation therapy. Keywords: Intensity modulated radiation therapy, multileaf collimator, intensity map segmentation, complexity, multi objective optimization. 1 Introduction Intensity modulated radiation therapy (IMRT) is a form of cancer therapy which has been used since the beginning of the 1990s. Its success in fighting cancer is based on the fact that it can modulate radiation, taking specific patient data into consideration. Mathematical optimization has contributed considerably since the end of the 1990s (see, for instance, [31]) concentrating mainly on three areas, ∗ † The research has been partially supported by University of Auckland Researcher’s Strategic Support Initiative grant 360875/9275. The research has been partially supported by Deutsche Forschungsgemeinschaft (DFG) grant HA 1737/7 “Algorithmik großer und komplexer Netzwerke” and by New Zealand’s Julius von Haast Award. M. Ehrgott et al. Fig. 1. Realization of an intensity matrix by overlaying radiation fields with different MLC segments. • the geometry problem, • the intensity problem, and • the realization problem. The first of these problems finds the best selection of radiation angles, i.e., the angles from which radiation is delivered. A recent paper with the most up to date list of references for this problem can be found in [17]. Once a solution of the geometry problem has been found, an intensity profile is determined for each of the angles. These intensity profiles can be found, for instance, with the multicriteria approach of [20] or many other intensity optimization methods (see [30] for more references). In Figure 1, an intensity profile is shown as greyscale coded grid. We assume that the intensity profile has been discretized such that the different shades in this grid can be represented by non-negative integers, where black corresponds to 0 and larger integers are used for lighter colors. In the following we will therefore think of intensity profiles and N × M intensity matrices A as one and the same. In this paper, we assume that solutions for the geometry and intensity problems have been found and focus on the problem of realizing the intensity matrix A using so-called (static) multileaf collimators (MLC). Radiation is blocked by M (left, right) pairs of metal leaves, each of which can be positioned between the cells of the corresponding intensity profile. The opening corresponding to a cell of the segment is referred to as a bixel or beamlet. 
On the right-hand-side of Figure 1, three possible segments for the intensity profile on the left of Figure 1 are shown, where the black areas in the three rectangles correspond to the left and right leaves. Radiation passes (perpendicular to the plane represented by the segments) through the opening between the leaves (white areas). The goal is to find a set of MLC segments such that the intensity matrix A is realized by irradiating each of these segments for a certain amount of time (2, 1, and 3 in Figure 1).

In the same way as intensity profiles and integer matrices correspond to each other, each segment in Figure 1 can be represented by a binary M × N matrix Y = (y_mn), where y_mn = 1 if and only if radiation can pass through bixel (m, n). Since the area left open by each pair of leaves is contiguous, the matrix Y possesses the (strict) consecutive-ones (C1) property in its rows, i.e., for all m ∈ M := {1, . . . , M} and n ∈ N := {1, . . . , N} there exists a pair l_m ∈ N, r_m ∈ N ∪ {N + 1} such that

y_mn = 1 ⇐⇒ l_m ≤ n < r_m.

Hence the realization problem can be formulated as the following C1 decomposition problem. Let K be the index set of all M × N consecutive-ones matrices and let K′ ⊆ K. A C1 decomposition (with respect to K′) is defined by non-negative integers α_k, k ∈ K′, and M × N C1 matrices Y^k, k ∈ K′, such that

A = Σ_{k∈K′} α_k Y^k.    (2)

The coefficients α_k are often called the monitor units, MU, of Y^k. In order to evaluate the quality of a C1 decomposition various objective functions have been used in the literature. The beam-on-time (BOT), total number of monitor units, or decomposition time (DT) objective

DT(α) := Σ_{k∈K′} α_k    (3)

is a measure for the time a patient is exposed to radiation. Since every change from one segment of the MLC to another takes time, the number of segments or decomposition cardinality (DC)

DC(α) := |{α_k : α_k > 0}|

is used to evaluate the (constant) set-up time SU_const(α) := τ DC(α) for the MLC. Here we assume that it takes constant time τ to move from one segment to the next. If, on the other hand, τ_kl is a variable time to move from Y^k to Y^l and Y^1, . . . , Y^K are the C1 matrices used in a decomposition, then one can also consider the variable set-up time

SU_var(α) = Σ_{k=1}^{K−1} τ_{π(k),π(k+1)}.

Obviously, this objective depends on the sequence π(1), . . . , π(K) of these C1 matrices. The treatment time is finally defined for each radiation angle by

TT(α) := DT(α) + SU(α),

where SU(α) ∈ {SU_var(α), SU_const(α)}. Since the set-up time SU(α) can be of the constant or variable kind, two different definitions of treatment time are possible. For therapeutic and economic reasons, it is desirable to find decompositions with small beam-on, set-up, and treatment times. These will be the optimization problems considered in the subsequent sections.

In this paper we will summarize some basic results and present the ideas of algorithms to solve the decomposition time (Section 2) and the decomposition cardinality (Section 3) problem. In Section 4 we will deal with combined objective functions and mention some current research questions.

2 Algorithms for the decomposition time problem

In this section we consider a given M × N non-negative integer matrix A corresponding to an intensity profile and look for the decomposition (2) of A into a non-negative linear combination A = Σ_{k∈K′} α_k Y^k of C1 matrices such that the decomposition time (3), DT(α) := Σ_{k∈K′} α_k, is minimized.
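Before turning to the unconstrained and constrained cases, the objectives just introduced can be made concrete with a small sketch. It evaluates DT, DC, the constant set-up time, the variable set-up time for a given delivery order, and TT for a decomposition stored simply as a list of monitor units; the data layout, names, and the transition-time matrix are our own illustrative choices, not part of the survey.

```python
def dt(alphas):
    """Decomposition time (beam-on time), eq. (3): total monitor units."""
    return sum(alphas)

def dc(alphas):
    """Decomposition cardinality: number of segments actually used."""
    return sum(1 for a in alphas if a > 0)

def su_const(alphas, tau):
    """Constant set-up time: a fixed time tau per segment."""
    return tau * dc(alphas)

def su_var(order, tau_kl):
    """Variable set-up time along the delivery sequence 'order'."""
    return sum(tau_kl[order[i]][order[i + 1]] for i in range(len(order) - 1))

# Hypothetical example: three segments with monitor units 2, 1 and 3 (as in the
# Figure 1 illustration) and a made-up matrix of transition times tau_kl.
alphas = [2, 1, 3]
tau_kl = [[0, 4, 2],
          [4, 0, 3],
          [2, 3, 0]]
order = [0, 2, 1]                              # one possible delivery sequence
setup = su_var(order, tau_kl)
print(dt(alphas), dc(alphas), su_const(alphas, tau=1), setup, dt(alphas) + setup)
# 6 3 3 5 11  (the last value is the treatment time TT with variable set-up)
```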
First, we review results of the unconstrained DT problem in which all C1 matrices can be used, i.e., K′ = K. Then we discuss the constrained DT problem, where technical requirements exclude certain C1 matrices, i.e., K′ ⊊ K.

2.1 Unconstrained DT problem

The most important argument in the unconstrained case is the fact that it suffices to solve the DT problem for single row matrices.

Lemma 1. A = Σ_{k∈K} α_k Y^k is a decomposition with decomposition time DT(α) := Σ_{k∈K} α_k if and only if each row A_m of A has a decomposition A_m = Σ_{k∈K} α^m_k Y^k_m into C1 row matrices with decomposition time DT(α^m) := Σ_{k∈K} α^m_k, such that

DT(α) = max_{m=1,...,M} DT(α^m).

The proof of this result follows from the fact that in the unconstrained DT problem, the complete set of all C1 matrices can be used. Hence, the decomposition of the row with largest DT(α^m) can be extended in an arbitrary fashion by decompositions of the other rows to yield a decomposition of the matrix A with DT(α) = DT(α^m).

The most prominent reference in which the insight of Lemma 1 is used is [8], which introduces the sweep algorithm. Each row is considered independently and then checked from left to right to see whether a position of a left or right leaf needs to be changed in order to realize given intensities a_mn. While most practitioners agree that the sweep algorithm provides decompositions with short DT(α), the optimality of the algorithm was only proved several years later. We will review some of the papers containing proofs below.

Fig. 2. Representation of intensity row A_m = (2, 3, 3, 5, 2, 2, 4, 4) by rods (a) and the corresponding left and right trajectories (b).

An algorithm which is quoted very often in the MLC optimization literature is that of [32]. Each entry a_mn of the intensity map is assigned to a rod, the length of which represents the value a_mn (see Figure 2). The standard step-and-shoot approach, which is shared by all static MLC algorithms, is implemented in two parts, the rod pushing and the extraction. While the objective in [32] is to minimize total treatment time TT_var, the proposed algorithm is only guaranteed to find a solution that minimizes DT(α).

The authors in [1] prove the optimality of the sweep algorithm by transforming the DT problem into a linear program. The decomposition of a row A_m into C1 row-matrices is first reformulated in a transposed form, i.e., the column vector A_m^T is decomposed into C1 column-matrices (columns with 1s in a single block). This yields a linear system of equations, where the columns of the coefficient matrix are all possible N(N − 1)/2 C1 column-matrices, the variables are the (unknown) decomposition times and the right-hand-side vector is the transpose A_m^T of row A_m. The objective of the linear program is the sum of the MUs. Such a linear program is well known (see [2]) to be equivalent to a network flow problem in a network with N nodes and N(N − 1)/2 arcs. The authors in [1] use the special structure of the network and present a shortest augmenting path algorithm which saturates at least one of the nodes in each iteration. Since each of the paths can be constructed in constant time, the complexity for computing DT(α^m) is O(N). This algorithm is applied to each of the rows of A, such that Lemma 1 implies the following result.

Theorem 1 ( [1]). The unconstrained decomposition time problem for a given non-negative integer M × N matrix A can be solved in O(NM) time.
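A minimal sketch of the single-row computation underlying Lemma 1 and Theorem 1 is given below: the optimal beam-on time of one row is the sum of its positive left-to-right increments (this closed form reappears as Theorem 2 below), and one DT-optimal decomposition can be read off by opening a unit-weight interval over every maximal run of cells at or above each intensity level. This is a simplified reading of the sweep idea for illustration, not the implementation of [8] or [1].

```python
def row_dt(row):
    """Minimum beam-on time of a single intensity row: sum of positive increments."""
    prev, total = 0, 0
    for a in row:
        total += max(0, a - prev)
        prev = a
    return total

def unit_level_segments(row):
    """One DT-optimal decomposition of a row into 1-MU C1 segments: for every
    intensity level u, open an interval over each maximal run of cells >= u."""
    segments = []                      # (left, right): cells left..right-1 open
    for u in range(1, max(row) + 1):
        left = None
        for n, a in enumerate(row + [0]):          # the sentinel closes any run
            if a >= u and left is None:
                left = n
            elif a < u and left is not None:
                segments.append((left, n))
                left = None
    return segments

row = [2, 3, 3, 5, 2, 2, 4, 4]                     # the row shown in Figure 2
print(row_dt(row), len(unit_level_segments(row)))  # 7 7: seven unit-weight segments
```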
It is important to notice that the identification of the flow augmenting path and the determination of the flow value which is sent along this path can be interpreted as the two phases of the step-and-shoot process in the sweep algorithm of [8], thus establishing its optimality. An alternative optimality proof of the sweep algorithm can be found in [23]. Their methodology is based on analyzing the left and right leaf trajectories for M. Ehrgott et al. each row Am , m ∈ M. These trajectory functions are at the focus of research in dynamic MLC models. For static MLC in which each leaf moves from left to right, they are monotonously non-decreasing step functions with an increase of |am,n+1 − am,n | in the left or right trajectory at position n if am,n+1 − am,n increases or decreases, respectively. Figure 2 illustrates an example with row Am = (2, 3, 3, 5, 2, 2, 4, 4), the representation of each entry amn as rod, and the corresponding trajectories. By proving that the step size of the left leaf trajectory in any position n is an upper bound on the number of MUs of any other feasible decompositions, the authors in [23] establish the optimality of the decomposition delivered by their algorithm SINGLEPAIR for the case of single row DT problems. In combination with Lemma 1, this yields the optimality of their solution algorithm MULTIPAIR for the unconstrained DT problem, which is, again, a validity proof of the sweep algorithm. The same bounding argument as in [23] is used by the author in [18] in his TNMU algorithm (total number of monitor units). Instead of using trajectories, he bases his work directly on the M × (N + 1) difference matrix D = (dmn ) with dmn := amn − am(n−1) for all m = 1, . . . , M, n = 1, . . . , N + 1. Here, am0 := am(n+1) := 0. In each iteration, the TNMU algorithm reduces the TNMU complexity of A C(A) := max Cm (A), m∈M +1 where Cm (A) := N n=1 max{0, dm,n } is the row complexity of row Am . More precisely, in each iteration the algorithm identifies some integer p > 0 and some C1 matrix Y such that A = A − pY has non-negative entries and its TNMU complexity satisfies C(A ) = C(A) − p. Various strategies are recommended to find suitable p and Y , one version of which results in an O(N 2 M 2 ) algorithm. As a consequence of its proof, the following closed form expression for the optimal objective value of the DT problem in terms of the TNMU complexity is attained. Theorem 2 ( [18]). The unconstrained decomposition time problem for a given non-negative integer M ×N matrix A has optimal objective value DT (α) = C(A). As will be seen in Section 3.2, this idea also leads to algorithms for the decomposition cardinality problem. 2.2 Constrained DT problem Depending on the type of MLC, several restrictions may apply to the choice of C1 matrices Y k which are used in decomposition (2), i.e. K K. For example, the mechanics of the multileaf collimator may require that left and right leaf Decomposition of matrices and static multileaf collimators: a survey pairs (lm−1 , rm−1 ) and (lm , rm ) in adjacent rows Ym−1 and Ym of any C1 matrix Y must not overlap (interleaf motion constraints). More specifically, we call a C1 matrix Y shape matrix if lm−1 ≤ rm and rm−1 ≥ lm holds for all m = 2, . . . , M . The matrix ⎛ 0 ⎜0 Y =⎜ ⎝0 1 ⎞ 0 0⎟ ⎟ 0⎠ 0 is, for instance, a C1 matrix, but not a shape matrix, since there are two violations of (11), namely r1 = 4 < 5 = l2 and l3 = 3 > 2 = r4 . 
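The shape-matrix condition is easy to test once every row of a C1 matrix is reduced to its leaf pair (l_m, r_m). The sketch below does exactly that; the example matrix is a hypothetical one of our own (the matrix printed in the source is not fully legible), chosen so that its rows have r_1 = 4, l_2 = 5, l_3 = 3 and r_4 = 2 and hence exhibit the same two violations discussed above.

```python
def leaf_pair(row):
    """Leaf pair (l, r), 1-based, of a binary C1 row: ones exactly at l..r-1.
    An all-zero row is encoded here as a closed pair with l = r."""
    ones = [n + 1 for n, y in enumerate(row) if y == 1]
    return (ones[0], ones[-1] + 1) if ones else (1, 1)

def is_shape_matrix(Y):
    """Interleaf motion check: l_{m-1} <= r_m and r_{m-1} >= l_m for adjacent rows."""
    pairs = [leaf_pair(row) for row in Y]
    for (l_prev, r_prev), (l_cur, r_cur) in zip(pairs, pairs[1:]):
        if l_prev > r_cur or r_prev < l_cur:
            return False
    return True

# Hypothetical 4 x 6 C1 matrix with leaf pairs (2,4), (5,7), (3,5), (1,2):
# rows 1/2 violate r_1 >= l_2 (4 < 5) and rows 3/4 violate l_3 <= r_4 (3 > 2).
Y = [[0, 1, 1, 0, 0, 0],
     [0, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 0],
     [1, 0, 0, 0, 0, 0]]
print(is_shape_matrix(Y))   # False, although every row individually is C1
```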
By drawing the left and right leaves corresponding to the left and right sets of zeros in each row of Y , it is easy to understand why the constraints (11) are called interleaf motion constraints. Another important restriction is the width or innerleaf motion constraint rm − lm ≥ δ for all m ∈ M, where δ > 0 is a given (integer) constant. A final constraint may be enforced to control tongue-and-groove (T&G) error which often makes the decomposition model (2) inaccurate. Since several MLC types have T&G joints between adjacent leaf pairs, the thinner material in the tongue and the groove causes a smaller or larger radiation than predicted in model 2 if a leaf covers bixel m, n (i.e., ymn = 0), but not m + 1, n (i.e., ym+1,n = 1), or vice versa. Some of this error is unavoidable, k k k k but a decomposition with ymn = 1, ym+1,n = 0 and ymn = 0, ym+1,n = 1 can th k k often be avoided by swapping the m rows of Y and Y . The authors in [7] present a polynomial algorithm for the DT problem with interleaf motion and width constraints by reducing it to a network flow problem with side constraints. They first construct a layered graph G = (V, E), the shape matrix graph which has M layers of nodes. The nodes in each layer represent left-right leaf set-ups in an MLC satisfying the width constraint or — equivalently — a feasible row in a shape matrix (see Figure 3). More precisely, node (m, l, r) stands for a possible row m in a C1 matrix with left leaf in position l and right leaf in position r, where the width constraint is modeled by allowing only nodes (m, l, r) with r − l ≥ δ. Hence, in each layer there are O(N (N − 1)) nodes, and the network has O(M N 2 ) nodes. Interleaf motion constraints are modeled by the definition of the arc set E according to ((m, l, r), (m + 1, l , r )) ∈ E if and only if r − l ≥ δ and r − l ≥ δ. It should be noted that the definition of the arcs can also be adapted to include the extended interleaf motion constraint M. Ehrgott et al. D Fig. 3. Shape matrix graph with two paths corresponding to two shape matrices. (Both paths are extended by the return arc (D , D).) rm − lm−1 ≥ γ and lm − rm−1 ≥ γ for all m ∈ M, where γ > 0 is a given (integer) constant. Also, T&G constraints can be modeled by the network structure. If we add a supersource D and a supersink D connected to all nodes (1, l, r) of the first layer and from all nodes (M, l, r) of the last layer, respectively (see Figure 3), the following result is easy to show. Lemma 2 ( [7]). Matrix Y with rows y1 , . . . , yM is a shape matrix satisfying width (with respect to given δ) and extended interleaf motion (with respect to given γ) constraints if and only if P (Y ) is a path from D to D in G where node (m, l, r) in layer m corresponds to row m of matrix Y . In the example of Figure 3 the two paths correspond to the two shape matrices ⎛ ⎞ ⎛ ⎞ 10 01 ⎜0 1⎟ ⎜1 1⎟ k ⎟ ⎜ ⎟ Yk =⎜ ⎝ 1 1 ⎠ and Y = ⎝ 1 0 ⎠ . 10 01 Since paths in the shape matrix graph are in one-to-one correspondence with shape matrices, the scalar multiplication αk Y k in decomposition (2) is equivalent to sending αk units of flow along path PY k from D to D . Hence, the DT problem is equivalent to a network flow problem. Decomposition of matrices and static multileaf collimators: a survey Theorem 3 ( [7]). 
The decomposition time problem with respect to a given non-negative integer valued matrix A is equivalent to the decomposition network flow problem: Minimize the flow value from source D to sink D subject to the constraints that for all m ∈ M and n ∈ N , the sum of the flow through nodes (m, l, r) with l ≤ n < r equals the entry am,n . In particular, the DT problem is solvable in polynomial time. The polynomiality of the decomposition network flow algorithm follows, since it is a special case of a linear program. Its computation times are very short, but it generally produces a non-integer set of decomposition times as solution, while integrality is for various practical reasons a highly desirable feature in any decomposition. The authors in [7] show that there always exists an alternative integer solution, which can, in fact, be obtained by a modification of the shape matrix graph. This version of the network flow approach is, however, not numerically competitive. An improved network flow formulation is given by [4]. A smaller network is used with O(M N ) nodes instead of the shape matrix graph G with O(M N 2 ) nodes. This is achieved by replacing each layer of G by two sets of nodes, representing a potential left and right leaf position, respectively. An arc between two of these nodes represents a row of a C1 matrix. The resulting linear programming formulation has a coefficient matrix which can be shown to be totally unimodular, such that the linear program yields an integer solution. Numerical experiments show that this double layer approach improves the running time of the algorithm considerably. In [5] a further step is taken by formulating a sequence of integer programs, each of which can be solved by a combinatorial algorithm, i.e., does not require any linear programming solver. The variables in these integer programs correspond to the incremental increases in decomposition time which are caused by the interleaf motion constraint. Using arguments from multicriteria optimization, the following complexity result shows that compared with the unconstrained case of Theorem 1, the complexity only worsens by a factor of M . Theorem 4 ( [5]). The constrained decomposition time problem with (extended) interleaf and width constraint can be solved in O(N M 2 ) time. While the preceding approaches maintain the constraints throughout the course of the algorithm, [23, 24] solve the constrained decomposition time problem by starting with a solution of the unconstrained problem. If this solution satisfies all constraints it is obviously optimal. If the optimal solution violates the width constraint, there does not exist a solution which does. Violations of interleaf motion and tongue-and-groove constraints are eliminated by a bounded number of modification steps. A similar correction approach is taken by [32] starting from his rod-pushing and extraction algorithm for the unconstrained case. M. Ehrgott et al. In the paper of [22] the idea of the unconstrained algorithm of [18] is carried over to the case of interleaf motion constraints. First, a linear program (LP) is formulated with constraints (2). Hence, the LP has an exponential number of variables. Its dual is solved by a maximal path problem in an acyclic graph. The optimal dual objective value is proved to correspond to a feasible C1 decomposition, i.e., a primally feasible solution of the LP, thus establishing the optimality of the decomposition using the strong LP duality theorem. 
3 Algorithms for the decomposition cardinality problem 3.1 Complexity of the DC problem In contrast to the decomposition time problem, we cannot expect an efficient algorithm which solves the decomposition cardinality problem exactly. Theorem 5. The decomposition cardinality problem is strongly NP-hard even in the unconstrained case. In particular, the following results hold. 1. [5] The DC problem is strongly NP-hard for matrices with a single row. 2. [14] The DC problem is strongly NP-hard for matrices with a single column. The first NP-hardness proof for the DC problem is due to [9], who shows that the subset sum problem can be reduced to the DC problem. His proof applies to the case of matrices A with at least two rows. Independently, the authors in [12] use the knapsack problem to prove the (non-strong) NP-hardness in the single-row case. The stronger result of Theorem 5 uses a reduction from the 3-partition problem for the single row case. The result for single column matrices uses a reduction from a variant of the satisfiability problem, NAE3SAT(5). A special case, for which the DC problem can be solved in polynomial time, is considered in the next result. Theorem 6 ( [5]). If A = pB is a positive integer multiple of a binary matrix B, then the C1 decomposition cardinality problem can be solved in polynomial time for the constrained and unconstrained case. If A is a binary matrix, this result follows from the polynomial solvability of DT (α), since αk is binary for all k ∈ K and thus DT (α) = DC(α). If A = pB with p > 1, it can be shown that the DC problem for A can be reduced to the solution of the DT problem for B. Theorem 6 is also important in the analysis of the algorithm of [33]. The main idea is to group the decomposition into phases where in phase k, only matrix elements with values amn ≥ 2R−k are considered, i.e., the matrix elements can be represented by ones and zeros depending on whether amn ≥ 2k Decomposition of matrices and static multileaf collimators: a survey or not (R = log2 (max amn )). By Theorem 6 each of the decomposition cardinality problems can be solved in polynomial time using a DT algorithm. Hence, the Xia-Verhey algorithm runs in polynomial time and gives the best decomposition cardinality, but only among all decompositions with the same separation into phases. In view of Theorem 5, most of the algorithms in the literature are heuristic or approximative (with performance guarantee). Most often, they guarantee minimal DT (α) and minimize DC(α) heuristically or exactly subject to DT optimality. The few algorithms that are able to solve the problem exactly have exponential running time and are limited to small instances, as evident in Section 5. 3.2 Algorithms for the unconstrained DC problem The author in [18] applies a greedy idea to his TNMU algorithm. In each of his extraction steps A = A − pY , p is computed as maximal possible value such that the pair (p, Y ) is admissible, i.e., amn ≥ 0 for all m, n and C(A ) = C(A)−p. Since the algorithm is a specialized version of Engel’s decomposition time algorithm, it will only find good decomposition cardinalities among all optimal solutions of the DT problem. Note, however (see Example 1), that none of the optimal solutions of the DT problem may be optimal for the DC problem. The author in [21] shows the validity of an algorithm which solves the lexicographic problem of finding among all optimizers of DT one with smallest decomposition cardinality DC. 
The complexity of this algorithm is O(M N 2L+2 ), i.e., it is polynomial in M and N , but exponential in L (where L is a bound for the entries amn of the matrix A). It should be noted that this algorithm does not, in general, solve DC. This is due to the fact that among the optimal solutions for DT there may not be an optimal solution for DC (see Sections 4 and 5). The idea of Kalinowski’s algorithm can, however, be extended to solve DC. The main idea of this approach is to treat the decomposition time as a parameter c and to solve the problem of finding a decomposition with smallest cardinality such that its decomposition time is bounded by c. For c = min DT (α), this can be done by Kalinowski’s algorithm in O(M N 2L+2 ). For c = 1, . . . , M N L, the author in [28] shows that the complexity increases to O((M N )2L+2 ). We thus have the following result. Theorem 7 ( [28]). The problem of minimizing the decomposition cardinality DC(α) in an unconstrained problem can be solved in O((M N )2L+3 ). The authors in [27] present approximation algorithms for the unconstrained DC problem. They define matrices Pk whose elements are the k th digits in the binary representation of the entries in A. The (easy) segmentation of Pk for k = 1, . . . , log L then results in a O(M N log (L)) time (logL + 1)approximation algorithm for DC. They show that the performance guarantee M. Ehrgott et al. can be improved to log D + 1 by choosing D as the maximum of a set of numbers containing all absolute differences between any two consecutive row entries over all rows and the first and last entries of each row. In the context of approximation algorithms we finally mention the following result by [6]. Theorem 8. The DC problem is APX-hard even for matrices with a single row with entries polynomially bounded in N . 3.3 Algorithms for the constrained DC problem A similar idea as in [18] is used in [5] for the constrained decomposition cardinality problem. Data from the solution of the DT problem (see Section 2) is used as input for a greedy extraction procedure. The author in [22] also generalizes the idea of Engel to the case of DC problems with interleaf motion constraints. The authors in [10–13] consider the decomposition cardinality problem with interleaf motion, width, and tongue-and-groove constraints. The first two groups of constraints are considered by a geometric argumentation. The given matrix A is — similar to [32] — interpreted as a 3-dimensional set of rods, or as they call it a 3D-mountain, where the height of each rod is determined by the value of its corresponding matrix entry amn . The decomposition is done by a mountain reduction technique, where tongue-and-groove constraints are taken into consideration using a graph model. The underlying graph is complete with its node set corresponding to all feasible C1 matrices. The weight of the edges is determined by the tongue-and-groove error occurring if both matrices are used in a decomposition. Matching algorithms are used to minimize the tongue-and-groove error. In order to speed up the algorithm, smaller graphs are used and the optimal matchings are computed using a network flow algorithm in a sparse graph. The authors in [19] propose a difference-matrix metaheuristic to obtain solutions with small DC as well as small DT values. The metaheuristic uses a multiple start local search with a heuristic that sequentially extracts segments Yk based on results of [18]. 
They consider multiple constraints on the segments, including interleaf and innerleaf motion constraints. Reported results clearly outperform the heuristics implemented in the Elekta MLC system. 4 Combined objective functions A first combination of decomposition time and cardinality problems is the treatment time problem with constant set-up times T T (α) := DT (α) + SU (α) = DT (α) + τ DC (α). For τ suitably large, it is clear that the DC problem is a special case of the TT problem. Thus the latter is strongly NPhard due to Theorem 5. Decomposition of matrices and static multileaf collimators: a survey The most versatile approach to deal with the TT problem including different kinds of constraints, is by integer programming as done by [25]. They first formulate the decomposition time problem as an integer linear program (IP), where interleaf motion, width, or tongue-and-groove constraints can easily be written as linear constraints. The optimal objective z = DT (α) can then be used in a modified IP as upper bound for the decomposition time which is now treated as variable (rather than objective) and in which the number of C1 matrices is to be minimized. This approach can be considered as an ε-constraint method to solve bicriteria optimization problems (see, for instance, [16]). The solutions in [25] can thus be interpreted as Pareto optimal solutions with respect to the two objective functions DT (α) and DC(α). Due to the large number of variables, the algorithm presented in [25] is, however, not usable for realistic problem instances. The importance of conflict between the DT and DC objectives has not been investigated to a great extent. The author in [3] showed that for matrices with a single row there is always a decomposition that minimizes both DC(α) and DT (α). The following examples show that the optimal solutions of the (unconstrained) DT , DC and T Tvar problems are in general attained in different decompositions. As a consequence, it is not enough to find the best possible decomposition cardinality among all decompositions with minimal decomposition time as is done in most papers on the DC problem (see Section 3). We will present next an example which is the smallest possible one for different optimal solutions of the DT and DC problems. Example 1. Let Since the entries 1, . . . , 6 can only be uniquely represented by the numbers 1, 2 and 4, the unique optimal decomposition of the DC problem is given by A = 1Y 1 + 2Y 2 + 4Y 3 where 1 Y = ,Y = and Y = Hence, the optimal value of the DC problem is 3, with DT = 7. Since the optimal solution of the DT problem has DT = 6, we conclude that DC ≥ 4. It is not clear whether this example is of practical value. In Section 5 we see that in our tests the optimal solution of the DC problem examples was not among the DT optimal solutions in only 5 out of 32 examples. In these cases the difference in the DC objective was only 1. This is also emphasized by [26] who confirm that the conflict between DT and DC is often small in practice. M. Ehrgott et al. Another possible combination of objective functions is the treatment time problem with variable set-up time T Tvar (α) := DT (α)+SUvar (α) = DT (α)+ K−1 k=1 τπ(k),π(k+1) (see (6)). Minimizing T Tvar (α) is strongly NP-hard when looking at the special case τkl = τ for all k, l, which yields the objective function of T Tconst(α). 
Here, we consider π(k) π(k+1) π(k) π(k+1) − lm |, |rm − rm | , τπ (k),π(k+1) = max max |lm m∈M i.e., the maximal number of positions any leave moves between two consecutive matrices Y π(k) and Y π(k+1) in the sequence. Extending Example 1, the following example shows that the three objective functions DT (α), DC(α), and T Tvar (α) yield, in general, different optimal solutions. Example 2. Let The optimal decomposition for DC is This decomposition yields DT = 14, DC = 3 and T Tvar = DT + SUvar = 14 + 3 = 17, where SUvar = 1 + 2 = 3. The optimal decomposition for DT is Here we obtain DT = 9, DC = 4, SUvar = 2 + 2 + 2 = 6 and thus T Tvar = 15. The optimal decomposition for T Tvar is We get DT = 10, DC = 4 and SUvar = 2 + 1 + 1 = 4, leading to T Tvar = 14. If the set of C1 matrices Y 1 , . . . , Y K in the formulation T Tvar (α) is given, one can apply a traveling salesman algorithm to minimize SUvar (α). Since the number L of C1 matrices is in general rather small, the TSP can be solved exactly in reasonable time. If the set of C1 matrices is not given, the problem becomes a simultaneous decomposition and sequencing problem which is currently under research. Decomposition of matrices and static multileaf collimators: a survey 5 Numerical results Very few numerical comparisons are available in the literature. The author in [29] compares in his numerical investigations eight different heuristics for the DC problem. He concludes that the Algorithm of Xia and Verhey [33] outperforms its competitors. With new algorithms developed since the appearance of Que’s paper, the dominance of the Xia-Verhey algorithm is no longer true, as observed by [15] and seen below. In this section we present results obtained with the majority of algorithms mentioned in this paper for constrained and unconstrained problems. We consider only interleaf motion constraints, since these are the most common and incorporated in most algorithms. As seen in Section 2 the unconstrained and constrained DT problems can be solved in O (N M ), respectively O(N M 2 ) time. Moreover, we found that algorithms that guarantee minimal DT (α) and include a heuristic to reduce DC(α) do not require significantly higher CPU time. Therefore we exclude algorithms that simply minimize DT (α) without control over DC(α). Table 1 shows the references for the algorithms, and some remarks on their properties. We used 47 clinical examples varying in size from 5 to 23 rows and 6 to 30 columns, with L varying between 9 and 40. In addition, we used 15 instances of size 10×10 with entries randomly generated between 1 and 14. In all experiments we have applied an (exact) TSP algorithm to the resulting matrices to minimize the total treatment time for the given decomposition. Table 2 presents the results for the unconstrained and Table 3 presents those for the constrained problems. All experiments were run on a Pentium 4 PC with 2.4 GHz and 512 MB RAM. In both tables we first show the number of instances for which the algorithms gave the best values for DT, DC and T Tvar after application of the TSP to the matrices produced by the algorithms. Next, we list the maximal CPU time (in seconds) the algorithm took for any of the instances. The next four rows show the minimum, maximum, median, and average relative deviation from the best DC value found by any of the algorithms. The next four rows show the same for T Tvar . Finally, we list the improvement of variable setup time according to (14) obtained by applying the TSP to the matrices found by the algorithms. 
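The TSP post-processing applied throughout these experiments can be pictured with a small brute-force sketch: the segments are fixed, every delivery order is enumerated, and the order with the smallest variable set-up time is kept, with the transition time between two segments taken as the maximal leaf travel defined above. This is our own illustrative code (exhaustive enumeration is only viable because the number of segments per instance is small) and not the exact TSP implementation used to produce the tables that follow.

```python
from itertools import permutations

def leaf_pairs(Y):
    """1-based leaf pairs (l, r) of the rows of a binary C1 matrix."""
    pairs = []
    for row in Y:
        ones = [n + 1 for n, y in enumerate(row) if y == 1]
        pairs.append((ones[0], ones[-1] + 1) if ones else (1, 1))
    return pairs

def tau(Yk, Yl):
    """Transition time: maximal number of positions any leaf must move."""
    return max(max(abs(lk - ll), abs(rk - rl))
               for (lk, rk), (ll, rl) in zip(leaf_pairs(Yk), leaf_pairs(Yl)))

def best_order(segments):
    """Enumerate all delivery orders and return (SU_var, order) of the best one."""
    best = None
    for order in permutations(range(len(segments))):
        cost = sum(tau(segments[order[i]], segments[order[i + 1]])
                   for i in range(len(order) - 1))
        if best is None or cost < best[0]:
            best = (cost, order)
    return best

# Three hypothetical single-row segments, given as binary C1 matrices.
segs = [[[1, 1, 0, 0, 0]],
        [[0, 0, 0, 1, 1]],
        [[0, 1, 1, 1, 0]]]
print(best_order(segs))   # (4, (0, 2, 1)): smallest total leaf travel
```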
Table 1. List of algorithms tested. Algorithm Baatar et al. [5] Engel [18] Xia and Verhey [33] Baatar et al. [5] Kalinowski [22] Siochi [32] Xia and Verhey [33] unconstrained unconstrained unconstrained constrained constrained constrained constrained guarantees min DT , guarantees min DT , heuristic for DC guarantees min DT , guarantees min DT , guarantees min DT , heuristic for DC heuristic for DC heuristic for DC heuristic for DC heuristic for DC heuristic for T T M. Ehrgott et al. Table 2. Numerical results for the unconstrained algorithms. Baatar et al. [5] Best Best Best Best DT DC T Tvar CPU Max CPU Engel [18] Xia and Verhey [33] ∆ DC Min Max Median Mean 0.00% 33.33% 18.18% 17.08% 0.00% 0.00% 0.00% 0.00% 0.00% 86.67% 36.93% 37.82% ∆ TT Min Max Median Mean 0.00% 21.30% 0.00% 3.14% 0.00% 42.38% 5.66% 8.74% 0.00% 83.82% 14.51% 17.23% ∆ SU Min Max Median Mean 0.83% 37.50% 14.01% 13.91% 1.43% 27.27% 10.46% 12.15% 7.89% 43.40% 25.41% 25.74% Table 3. Numerical results for the constrained algorithms. Baatar et al. [5] Best Best Best Best DT DC T Tvar CPU Max CPU Kalinowski [22] 62 62 43 0 Siochi [32] 62 1 11 0 Xia and Verhey [33] 0 0 0 62 ∆ DC Min Max Median Mean 0.00% 160.00% 70.71% 71.37% 0.00% 0.00% 0.00% 0.00% 0.00% 191.67% 108.12% 102.39% 11.11% 355.56% 70.71% 86.58% ∆ TT Min Max Median Mean 0.00% 50.74% 5.23% 7.97% 0.00% 45.28% 0.00% 4.95% 0.00% 26.47% 8.49% 8.26% 10.66% 226.42% 51.03% 61.56% ∆ SU Min Max Median Mean 0.00% 18.18% 4.45% 5.34% 2.27% 35.25% 22.45% 21.66% 0.00% 20.00% 2.11% 3.24% 5.00% 24.05% 14.20% 14.42% Decomposition of matrices and static multileaf collimators: a survey Table 4. Comparison of Kalinowski [21] and Nußbaum [28]. A * next to the DC value indicates a difference between the algorithms. Data Sets Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Clinical Kalinowski [21] Nußbaum [28] 7 6 7* 6 8* 8 9 8 9 9 6 7 8 7 7 7 8 7 7 8 9* 9 9 9 9 10 7* 7 8 6 8* 10 Table 2 shows that Xia and Verhey [33] is the fastest algorithm. However, it never found the optimal DT value and found the best DC value for only one instance. Since the largest CPU time is 0.116 seconds, computation time is not an issue. Thus we conclude that Xia and Verhey [33] is inferior to the other algorithms. Baatar et al. [5] and Engel [18] are roughly equal in speed. Both guarantee optimal DT , but the latter performs better in terms of DC, finding the best value for all instances. However, the slightly greater amount of matrices used by the former method appears to enable better T Tvar values M. Ehrgott et al. and a slightly bigger improvement of the variable setup time by reordering the segments. We observe that applying a TSP algorithm is clearly worthwhile, reducing the variable setup time by up to 40%. The results for the constrained problems underline that the algorithm of [33], despite being the fastest for all instances, is not competitive. It did not find the best DT, DC, or T Tvar values for any example. The other three algorithms guarantee DT optimality. The algorithm of [22] performs best, finding the best DC value in all cases, and the best T Tvar value in 43 of the 62 tests. Baatar et al. [5] and Siochi [32] are comparable, with the former being slightly better in terms of DC, T Tvar and CPU time. 
Again, the application of a TSP algorithm is well worth the effort to reduce the variable setup time. Finally, the results of comparing the algorithm of [21] with its new iterative version of [28] on a subset of the clinical instances are given in Table 4. These tests were performed on a PC with Dual Xeon Processor, 3.2 GHz and 4 GB RAM. In the comparison of 32 clinical cases there were only five cases (3, 5, 22, 40, 46) where the optimal solution of the DC problem was not among the optimal solutions of the DT problem — and thus found by the algorithm of [21]. In these five cases, the DC objective was only reduced by a value of 1. Since the iterative algorithm performs at most N M L − DT applications of Kalinowski-like procedures, the CPU time is obviously considerably larger. Acknowledgements The authors thank David Craigie, Zhenzhen Mu, and Dong Zhang, who implemented most of the algorithms, and Thomas Kalinowski for providing the source code of his algorithms. References 1. R. K. Ahuja and H. W. Hamacher. A network flow algorithm to minimize beam-on-time for unconstrained multileaf collimator problems in cancer radiation therapy. Networks, 45:36–41, 2004. 2. R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms and Applications. Prentice-Hall, 1993. 3. D. Baatar. Matrix Decomposition with Time and Cardinality Objectives: Theory, Algorithms, and Application to Multileaf Collimator Sequencing. PhD thesis, Department of Mathematics, Technical University of Kaiserslautern, 2005. 4. D. Baatar and H. W. Hamacher. New LP model for multileaf collimators in radiation therapy planning. In Proceedings of the Operations Research Peripatetic Postgraduate Programme Conference ORP3 , Lambrecht, Germany, pages 11–29, 2003. 5. D. Baatar, H. W. Hamacher, M. Ehrgott, and G. J. Woeginger. Decomposition of integer matrices and multileaf collimator sequencing. Discrete Applied Mathematics, 152:6–34, 2005. Decomposition of matrices and static multileaf collimators: a survey 6. Nikhil Bansal, Don Coppersmith, and Baruch Schieber. Minimizing setup and beam-on times in radiation therapy. In Josep D´ıaz, Klaus Jansen, Jos´e D. P. Rolim, and Uri Zwick, editors, APPROX-RANDOM. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 9th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2006 and 10th International Workshop on Randomization and Computation, RANDOM 2006, Barcelona, Spain, August 28-30 2006, Proceedings, volume 4110 of Lecture Notes in Computer Science, pages 27–38. Springer Verlag, Berlin, 2006. 7. N. Boland, H. W. Hamacher, and F. Lenzen. Minimizing beam-on-time in cancer radiation treatment using multileaf collimators. Networks, 43:226–240, 2004. 8. T. R. Bortfeld, A. L. Boyer, D. L. Kahler, and T. J. Waldron. X-ray field compensation with multileaf collimators. International Journal of Radiation Oncology, Biology, Physics, 28:723–730, 1994. 9. R. E. Burkard. Open Problem Session, Oberwolfach Conference on Combinatorial Optimization, November 24–29, 2002. 10. D. Z. Chen, X. S. Hu, S. Luan, C. Wang, S. A. Naqvi, and C. X. Yu. Generalized geometric approaches for leaf sequencing problems in radiation therapy. In Procedings of the 15th Annual International Symposium on Algorithms and Computation (ISAAC), Hong Kong, December 2004, volume 3341 of Lecture Notes in Computer Science, pages 271–281. Springer Verlag, Berlin, 2004. 11. D. Z. Chen, X. S. Hu, S. Luan, C. Wang, S. A. Naqvi, and C. X. Yu. 
Generalized geometric approaches for leaf sequencing problems in radiation therapy. International Journal of Computational Geometry and Applications, 16(2-3):175–204, 2006. 12. D. Z. Chen, X. S. Hu, S. Luan, C. Wang, and X. Wu. Geometric algorithms for static leaf sequencing problems in radiation therapy. International Journal of Computational Geometry and Applications, 14:311–339, 2004. 13. D. Z. Chen, X. S. Hu, S. Luan, X. Wu, and C. X. Yu. Optimal terrain construction problems and applications in intensity-modulated radiation therapy. Algorithmica, 42:265–288, 2005. 14. M. J. Collins, D. Kempe, J. Saia, and M. Young. Nonnegative integral subset representations of integer sets. Information Processing Letters, 101:129–133, 2007. 15. S. M. Crooks, L. F. McAven, D. F. Robinson, and L. Xing. Minimizing delivery time and monitor units in static IMRT by leaf-sequencing. Physics in Medicine and Biology, 47:3105–3116, 2002. 16. M. Ehrgott. Multicriteria Optimization. Springer Verlag, Berlin, 2nd edition, 2005. 17. M. Ehrgott, A. Holder, and J. Reese. Beam selection in radiotherapy design. Linear Algebra and its Applications, doi: 10.1016/j.laa.2007.05.039, 2007. 18. K. Engel. A new algorithm for optimal MLC field segmentation. Discrete Applied Mathematics, 152:35–51, 2005. 19. A. D. A. Gunawardena, W. D’Souza, L. D. Goadrick, R. R. Meyer, K. J. Sorensen, S. A. Naqvi, and L. Shi. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot imrt delivery. Physics in Medicine and Biology, 51:2517–2536, 2006. 20. H. W. Hamacher and K.-H. K¨ ufer. Inverse radiation therapy planing – A multiple objective optimization approach. Discrete Applied Mathematics, 118:145– 161, 2002. M. Ehrgott et al. 21. T. Kalinowski. Algorithmic complexity of the minimization of the number of segments in multileaf collimator field segmentation. Technical report, Department of Mathematics, University of Rostock, 2004. Preprint 2004/1. 22. T. Kalinowski. A duality based algorithm for multileaf collimator field segmentation with interleaf collision constraint. Discrete Applied Mathematics, 152:52–88, 2005. 23. S. Kamath, S. Sahni, J. Li, J. Palta, and S. Ranka. Leaf sequencing algorithms for segmented multileaf collimation. Physics in Medicine and Biology, 48:307– 324, 2003. 24. S. Kamath, S. Sahni, S. Ranka, J. Li, and J. Palta. A comparison of stepand-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects. Physics in Medicine and Biology, 49:3137–3143, 2004. 25. M. Langer, V. Thai, and L. Papiez. Improved leaf sequencing reduces segments of monitor units needed to deliver IMRT using MLC. Medical Physics, 28:2450– 58, 2001. 26. M. P. Langer, V. Thai, and L. Papiez. Tradeoffs between segments and monitor units are not required for static field IMRT delivery. International Journal of Radiation Oncology, Biology, Physics, 51:75, 2001. 27. S. Luan, J. Saia, and M. Young. Approximation algorithms for minimizing segments in radiation therapy. Information Processin Letters, 101:239–244, 2007. 28. M. Nußbaum. Min cardinality c1-decomposition of integer matrices. Master’s thesis, Department of Mathematics, Technical University of Kaiserslautern, 2006. 29. W. Que. Comparison of algorithms for multileaf collimator field segmentation. Medical Physics, 26:2390–2396, 1999. 30. L. Shao. A survey of beam intensity optimization in imrt. In T. 
Halliburton, editor, Proceedings of the 40th Annual Conference of the Operational Research Society of New Zealand, Wellington, 2-3 December 2005, pages 255–264, 2005. Available online at http://secure.orsnz.org.nz/conf40/content/paper/Shao.pdf.
31. D. M. Shepard, M. C. Ferris, G. H. Olivera, and T. R. Mackie. Optimizing the delivery of radiation therapy to cancer patients. SIAM Review, 41:721–744, 1999.
32. R. A. C. Siochi. Minimizing static intensity modulation delivery time using an intensity solid paradigm. International Journal of Radiation Oncology, Biology, Physics, 43:671–689, 1999.
33. P. Xia and L. Verhey. Multileaf collimator leaf sequencing algorithm for intensity modulated beams with multiple segments. Medical Physics, 25:1424–1434, 1998.

Appendix: The instances

Tables 5 and 6 show the size (N, M, L) of the instances, the optimal value of DT(α) in the constrained and unconstrained problems, and the best DC(α) and TTvar(α) values found by any of the tested algorithms, with a * indicating proven optimality for DC in the unconstrained case.

Table 5. The 15 random instances (data sets Random 1–15).

Table 6. The 47 clinical instances (data sets Clinical 1–47; a * marks DC values with proven optimality in the unconstrained case).

Neuro-dynamic programming for fractionated radiotherapy planning∗

Geng Deng¹ and Michael C. Ferris²

¹ Department of Mathematics, University of Wisconsin at Madison, 480 Lincoln Dr., Madison, WI 53706, USA, [email protected]
² Computer Sciences Department, University of Wisconsin at Madison, 1210 W. Dayton Street, Madison, WI 53706, USA, [email protected]

Summary. We investigate an on-line planning strategy for the fractionated radiotherapy planning problem, which incorporates the effects of day-to-day patient motion. On-line planning demonstrates significant improvement over off-line strategies in terms of reducing registration error, but it requires extra work in the replanning procedures, such as in the CT scans and the re-computation of a deliverable dose profile. We formulate the problem in a dynamic programming framework and solve it based on the approximate policy iteration techniques of neuro-dynamic programming. In initial limited testing, the solutions we obtain outperform existing solutions and offer an improved dose profile for each fraction of the treatment.

Keywords: Fractionation, adaptive radiation therapy, neuro-dynamic programming, reinforcement learning.

1 Introduction

Every year, nearly 500,000 patients in the United States are treated with external beam radiation, the most common form of radiation therapy. Before receiving irradiation, the patient is imaged using computed tomography (CT) or magnetic resonance imaging (MRI).
The physician contours the tumor and surrounding critical structures on these images and prescribes a dose of radiation to be delivered to the tumor. Intensity-Modulated Radiotherapy (IMRT) is one of the most powerful tools to deliver conformal dose to a tumor target [6, 17, 23]. The treatment process involves optimization over specific parameters, such as angle selection and (pencil) beam weights [8, 9, 16, 18]. The organs near the tumor will inevitably receive radiation as well; the physician places constraints on how much radiation each organ should receive. The dose is then delivered by radiotherapy devices, typically in a fractionated regime consisting of five doses per week for a period of 4-9 weeks [10].

∗ This material is based on research partially supported by the National Science Foundation Grants DMS-0427689 and IIS-0511905 and the Air Force Office of Scientific Research Grant FA9550-04-1-0192.

Generally, the use of fractionation is known to increase the probability of controlling the tumor and to decrease damage to normal tissue surrounding the tumor. However, the motion of the patient or the internal organs between treatment sessions can result in failure to deliver adequate radiation to the tumor [14, 21]. We classify the delivery error into the following types:

1. Registration Error (see Figure 1 (a)). Registration error is due to the incorrect positioning of the patient in day-to-day treatment. This is the interfraction error we primarily consider in this paper. Accuracy in patient positioning during treatment set-up is a requirement for precise delivery. Traditional positioning techniques include laser alignment to skin markers. Such methods are highly prone to error and in general show a displacement variation of 4-7 mm depending on the site treated. Other advanced devices, such as electronic portal imaging systems, can reduce the registration error by comparing real-time digital images to facilitate a time-efficient patient repositioning [17].
2. Internal Organ Motion Error (Figure 1 (b)). The error is caused by the internal motion of organs and tissues in a human body. For example, intracranial tissue shifts up to 1.5 mm when patients change position from prone to supine. The use of implanted radio-opaque markers allows physicians to verify the displacement of organs.
3. Tumor Shrinkage Error (Figure 1 (c)). This error is due to tumor area shrinkage as the treatment progresses. The originally prescribed dose delivered to target tissue does not reflect the change in tumor area. For example, the tumor can shrink up to 30% in volume within three treatments.
4. Non-rigid Transformation Error (Figure 1 (d)). This type of intrafraction motion error is internally induced by non-rigid deformation of organs, including, for example, lung and cardiac motion in normal breathing conditions.

Fig. 1. Four types of delivery error in hypo-fraction treatment: (a) registration error, (b) internal organ shifts, (c) tumor area shrinks, (d) non-rigid organ transformation.

In our model formulation, we consider only the registration error between fractions and neglect the other three types of error. Internal organ motion error occurs during delivery and is therefore categorized as an intrafraction error. Our methods are not real-time solution techniques at this stage and consequently are not applicable to this setting.
Tumor shrinkage error and non-rigid transformation error mainly occur between treatment sessions and are therefore called interfraction errors. However, the changes in the tumor in these cases are not volume preserving, and incorporating such effects remains a topic of future research. The principal computational difficulty arises in that setting from the mapping of voxels between two stages.

Off-line planning is currently widespread. It only involves a single planning step and delivers the same amount of dose at each stage. It was suggested in [5, 15, 19] that an optimal inverse plan should incorporate an estimated probability distribution of the patient motion during the treatment. Such a distribution of patient geometry can be estimated [7, 12], for example, using a few pre-scanned images, by techniques such as Bayesian inference [20]. The probability distributions vary among organs and patients. An alternative delivery scheme is so-called on-line planning, which includes multiple planning steps during the treatment. Each planning step uses feedback from images generated during treatment, for example, by CT scans. On-line replanning accurately captures the changing requirements for radiation dose at each stage, but it inevitably consumes much more time during every replanning procedure.

This paper aims at formulating a dynamic programming (DP) framework that solves the day-to-day on-line planning problem. The optimal policy is selected from several candidate deliverable dose profiles, compensating over time for movement of the patient. The techniques are based on neuro-dynamic programming (NDP) ideas [3]. In the next section, we introduce the model formulation and in Section 3, we describe several types of approximation architecture and the NDP methods we employ. We give computational results on a real patient case in Section 4.

2 Model formulation

To describe the problem more precisely, suppose the treatment lasts N periods (stages), and the state x_k(i), k = 0, 1, …, N, i ∈ T, contains the actual dose delivered to all voxels after k stages (x_k is obtained through a replanning process). Here T represents the collection of voxels in the target organ. The state evolves as a discrete-time dynamic system:

x_{k+1} = φ(x_k, u_k, ω_k),  k = 0, 1, …, N − 1,

where u_k is the control (namely the dose applied) at the kth stage, and ω_k is a (typically three-dimensional) random vector representing the uncertainty of patient positioning. Normally, we assume that ω_k corresponds to a shift transformation of u_k. Hence the function φ has the explicit form

φ(x_k(i), u_k(i), ω_k) = x_k(i) + u_k(i + ω_k),  ∀ i ∈ T.

Since each treatment is delivered separately and in succession, we also assume the uncertainty vectors ω_k are i.i.d. In the context of voxelwise shifts, ω_k is regarded as a discretely distributed random vector. The control u_k is drawn from an applicable control set U(x_k). Since there is no recourse for dose delivered outside of the target, an instantaneous error (or cost) g(x_k, x_{k+1}, u_k) is incurred when evolving between the states x_k and x_{k+1}. Let the final state x_N represent the total dose delivered on the target during the treatment period. At the end of N stages, a terminal cost J_N(x_N) will be evaluated. Thus, the plan chooses controls u = {u_0, u_1, …, u_{N−1}} so as to minimize an expected total cost:

J_0(x_0) = min E[ Σ_{k=0}^{N−1} g(x_k, x_{k+1}, u_k) + J_N(x_N) ]
s.t. x_{k+1} = φ(x_k, u_k, ω_k), u_k ∈ U(x_k), k = 0, 1, …, N − 1.
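To make the dynamics above concrete, the following sketch simulates one stage of the model on the one-dimensional voxel grid that is used later in Section 4.1. It is only an illustration under our own reading of the shift convention and of the behaviour at the grid boundary; all names (deliver, terminal_cost, and the helper arrays) are ours, not the authors'.

```python
import numpy as np

# Minimal sketch of the dose-evolution model, written for the 1-D example of
# Section 4.1 (15 voxels, target voxels 3-13, weights p).  The handling of the
# shift at the grid boundary is one possible reading, not the authors' code.

n_vox = 15
target = np.arange(2, 13)            # voxels 3..13 in the paper's 1-based numbering
T_dose = np.zeros(n_vox)
T_dose[target] = 1.0                 # prescribed dose per target voxel (normalized)
p = np.ones(n_vox)
p[target] = 10.0                     # weighting vector used in Section 4.1

shifts = np.array([-2, -1, 0, 1, 2])
probs = np.array([0.02, 0.08, 0.80, 0.08, 0.02])   # low-volatility distribution

def deliver(x, u, omega):
    """One stage of x_{k+1}(i) = x_k(i) + u_k(i + omega_k).

    Returns the new cumulative dose and the instantaneous cost: the p-weighted
    dose that lands outside the target because of the shift.
    """
    shifted = np.roll(u, -omega)     # shifted[i] = u[i + omega]
    if omega > 0:
        shifted[-omega:] = 0.0       # planned positions i + omega beyond the grid
    elif omega < 0:
        shifted[:abs(omega)] = 0.0   # planned positions i + omega < 0
    x_new = x + shifted
    outside = np.setdiff1d(np.arange(n_vox), target)
    g = float(np.sum(p[outside] * shifted[outside]))
    return x_new, g

def terminal_cost(x):
    """Final cost: sum over target voxels of p(i) * |x_N(i) - T(i)|."""
    return float(np.sum(p[target] * np.abs(x[target] - T_dose[target])))
```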
We use the notation J_0(x_0) to represent an optimal cost-to-go function that accumulates the expected optimal cost starting at stage 0 with the initial state x_0. Moreover, if we extend the definition to a general stage, the cost-to-go function J_j defined at the jth stage is expressed in a recursive pattern,

J_j(x_j) = min E[ Σ_{k=j}^{N−1} g(x_k, x_{k+1}, u_k) + J_N(x_N) | x_{k+1} = φ(x_k, u_k, ω_k), u_k ∈ U(x_k), k = j, …, N − 1 ]
        = min E[ g(x_j, x_{j+1}, u_j) + J_{j+1}(x_{j+1}) | x_{j+1} = φ(x_j, u_j, ω_j), u_j ∈ U(x_j) ].

For ease of exposition, we assume that the final cost function is a linear combination of the absolute differences between the current dose and the ideal target dose at each voxel. That is,

J_N(x_N) = Σ_{i ∈ T} p(i) |x_N(i) − T(i)|.  (4)

Here, T(i), i ∈ T, represents the required final dosage in voxel i of the target, and the vector p weights the importance of hitting the ideal value for each voxel. We typically set p(i) = 10 for i ∈ T, and p(i) = 1 elsewhere, in our problem to emphasize the importance of the target volume. Other forms of final cost function could be used, such as the sum of least squares errors [19]. A key issue to note is that the controls are nonnegative since dose cannot be removed from the patient. The immediate cost g at each stage is the amount of dose delivered outside of the target volume due to the random shift,

g(x_k, x_{k+1}, u_k) = Σ_{i + ω_k ∉ T} p(i + ω_k) u_k(i + ω_k).  (5)

It is clear that the immediate cost is only associated with the control u_k and the random term ω_k. If there is no displacement error (ω_k = 0), the immediate cost is 0, corresponding to the case of accurate delivery.

The control most commonly used in the clinic is the constant policy, which delivers u_k = T/N at each stage and ignores the errors and uncertainties. (As mentioned in the introduction, when the planner knows the probability distribution, an optimal off-line planning strategy calculates a total dose profile D, which is later divided by N and delivered using the constant policy, so that the expected delivery after N stages is close to T.) We propose an on-line planning strategy, the reactive policy, that attempts to compensate for the error over the remaining time stages. At each time stage, we divide the residual dose required by the number of remaining time stages:

u_k = max(0, T − x_k)/(N − k).

Since the reactive policy takes into consideration the residual at each time stage, we expect this reactive policy to outperform the constant policy. Note the reactive policy requires knowledge of the cumulative dose x_k and replanning at every stage — a significant additional computation burden over current practice.

We illustrate later in this paper how the constant and reactive heuristic policies perform on several examples. We also explain how the NDP approach improves upon these results. The NDP makes decisions among several candidate policies (so-called modified reactive policies), which account for a variation of intensities on the reactive policy. At each stage, given an amplifying parameter a on the overall intensity level, the policy delivers

u_k = a · max(0, T − x_k)/(N − k).

We will show that the amplifying range a > 1 is preferable to a = 1, which is equivalent to the standard reactive policy. The parameter a should be confined by an upper bound, so that the total delivery does not exceed the tolerance level of normal tissue.
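The three idealized policies just described translate directly into code. The sketch below reuses the arrays and helpers from the previous sketch (T_dose, shifts, probs, deliver, terminal_cost); the function names are ours, and the Monte-Carlo driver is only meant to show how the policies could be compared, not to reproduce the experiments of Section 4.

```python
def constant_policy(x, k, N):
    """u_k = T/N: the same fraction of the prescription at every stage."""
    return T_dose / N

def reactive_policy(x, k, N):
    """u_k = max(0, T - x_k)/(N - k): spread the residual over the remaining stages."""
    return np.maximum(0.0, T_dose - x) / (N - k)

def modified_reactive_policy(a):
    """u_k = a * max(0, T - x_k)/(N - k): reactive delivery amplified by a >= 1."""
    return lambda x, k, N: a * np.maximum(0.0, T_dose - x) / (N - k)

def expected_cost(policy, N, n_runs=300, seed=0):
    """Monte-Carlo estimate of the expected total cost of a policy."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        x, cost = np.zeros(n_vox), 0.0
        for k in range(N):
            omega = int(rng.choice(shifts, p=probs))
            x, g = deliver(x, policy(x, k, N), omega)
            cost += g
        total += cost + terminal_cost(x)
    return total / n_runs
```

For example, comparing expected_cost(reactive_policy, N=10) with expected_cost(constant_policy, N=10) gives a quick sanity check of the qualitative ordering reported later in Section 4.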
Note that we assume these idealized policies u_k (the constant, reactive and modified reactive policies) are valid and deliverable in our model. However, in practice they are not, because u_k has to be a combination of dose profiles of beamlets fired from a gantry. In Voelker's thesis [22], some techniques to approximate u_k are provided. Furthermore, as delivering devices and planning tools become more sophisticated, such policies will become attainable.

So far, the fractionation problem is formulated in a finite horizon³ dynamic programming framework [1, 4, 13]. Numerous techniques for such problems can be applied to compute optimal decision policies. But unfortunately, because of the immensity of these state spaces (Bellman's "curse of dimensionality"), the classical dynamic programming algorithm is inapplicable. For instance, in a simple one-dimensional problem with only ten voxels involving 6 time stages, the DP solution times are around one-half hour. To address these complex problems, we design sub-optimal solutions using approximate DP algorithms — neuro-dynamic programming [3, 11].

³ Finite horizon means a finite number of stages.

3 Neuro-dynamic programming

3.1 Introduction

Neuro-dynamic programming is a class of reinforcement learning methods that approximate the optimal cost-to-go function. Bertsekas and Tsitsiklis [3] coined the term neuro-dynamic programming because it is associated with building and tuning a neural network via simulation results. The idea of an approximate cost function helps NDP avoid the curse of dimensionality and distinguishes the NDP methods from earlier approximation versions of DP methods. Sub-optimal DP solutions are obtained at significantly smaller computational costs.

The central issue we consider is the evaluation and approximation of the reduced optimal cost function J_k in the setting of the radiation fractionation problem — a finite horizon problem with N periods. We will approximate a total of N optimal cost-to-go functions J_k, k = 0, 1, …, N − 1, by simulation and training of a neural network. We replace the optimal cost J_k(·) with an approximate function J̃_k(·, r_k) (all of the J̃_k(·, r_k) have the same parametric form), where r_k is a vector of parameters to be ascertained from a training process. The function J̃_k(·, r_k) is called a scoring function, and the value J̃_k(x, r_k) is called the score of state x. We use the optimal control û_k that solves the minimum problem in the (approximation of the) right-hand side of Bellman's equation, defined using

û_k(x_k) ∈ argmin_{u_k ∈ U(x_k)} E[ g(x_k, x_{k+1}, u_k) + J̃_{k+1}(x_{k+1}, r_{k+1}) | x_{k+1} = φ(x_k, u_k, ω_k) ].  (6)

The policy set U(x_k) is a finite set, so the best û_k is found by direct comparison of a set of values. In general, the approximate function J̃_k(·, r_k) has a simple form and is easy to evaluate. Several practical architectures of J̃_k(·, r_k) are described below.

3.2 Approximation architectures

Designing and selecting suitable approximation architectures are important issues in NDP. For a given state, several representative features are extracted and serve as input to the approximation architecture. The output is usually a linear combination of features or a transformation via a neural network structure. We propose using the following three types of architecture:

1. A neural network/multilayer perceptron architecture. The input state x is encoded into a feature vector f with components f_l(x), l = 1, 2, …, L, which represent the essential characteristics of the state. For example, in the fractionation radiotherapy problem, the average dose distribution and the standard deviation of the dose distribution are two important components of the feature vector associated with the state x, and it is a common practice to add the constant 1 as an additional feature. A concrete example of such a feature vector is given in Section 4.1. The feature vector is then linearly mapped with coefficients r(j, l) to P 'hidden units' in a hidden layer,

Σ_{l=1}^{L} r(j, l) f_l(x),  j = 1, 2, …, P,  (7)

as depicted in Figure 2. The values of each hidden unit are then input to a sigmoidal function that is differentiable and monotonically increasing. For example, the hyperbolic tangent function

σ(ξ) = tanh(ξ) = (e^ξ − e^{−ξ})/(e^ξ + e^{−ξ}),

or the logistic function

σ(ξ) = 1/(1 + e^{−ξ})

can be used. The sigmoidal functions should satisfy

−∞ < lim_{ξ→−∞} σ(ξ) < lim_{ξ→∞} σ(ξ) < ∞.

The output scalars of the sigmoidal function are linearly mapped again to generate one output value of the overall architecture,

J̃(x, r) = Σ_{j=1}^{P} r(j) σ( Σ_{l=1}^{L} r(j, l) f_l(x) ).  (8)

Coefficients r(j) and r(j, l) in (7) and (8) are called the weights of the network. The weights are obtained from the training process of the algorithm.

Fig. 2. An example of the structure of a neural network mapping.

2. A feature extraction mapping. An alternative architecture directly combines the feature vector f(x) in a linear fashion, without using a neural network. The output of the architecture involves coefficients r(l), l = 0, 1, 2, …, L,

J̃(x, r) = r(0) + Σ_{l=1}^{L} r(l) f_l(x).  (9)

An application of NDP that deals with playing strategies in a Tetris game involves such an architecture [2]. While this is attractive due to its simplicity, we did not find this architecture effective in our setting. The principal difficulty was that the iterative technique we used to determine r failed to converge.

3. A heuristic mapping. A third way to construct the approximate structure is based on existing heuristic controls. Heuristic controls are easy to implement and produce decent solutions in a reasonable amount of time. Although not optimal, some of the heuristic costs H_u(x) are likely to be fairly close to the optimal cost function J(x). H_u(x) is evaluated by averaging results of simulations, in which policy u is applied in every stage. In the heuristic mapping architecture, the heuristic costs are suitably weighted to obtain a good approximation of J. Given a state x and heuristic controls u_i, i = 1, 2, …, I, the approximate form of J is

J̃(x, r) = r(0) + Σ_{i=1}^{I} r(i) H_{u_i}(x),

where r is the overall tunable parameter vector of the architecture. The more heuristic policies that are included in the training, the more accurate the approximation is expected to be. With proper tuning of the parameter vector r, we hope to obtain a policy that performs better than all of the heuristic policies. However, each evaluation of H_{u_i}(x) is potentially expensive.

3.3 Approximate policy iteration using Monte-Carlo simulation

The method we consider in this subsection is an approximate version of policy iteration. A sequence of policies {u_k} is generated and the corresponding approximate cost functions J̃(x, r) are used in place of J(x). The NDP algorithms are based on the architectures described previously.
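Before turning to the training procedure, here is a sketch of the third (heuristic mapping) architecture above, built directly on top of the earlier simulation sketch (deliver, terminal_cost, shifts, probs). The names are ours, the base policies are whatever heuristics the caller supplies, and the sub-simulation count is an arbitrary placeholder.

```python
def heuristic_cost(x, k, N, policy, n_sims, rng):
    """H_u(x): average cost of following a fixed base policy from state x at stage k."""
    total = 0.0
    for _ in range(n_sims):
        y, cost = x.copy(), 0.0
        for j in range(k, N):
            omega = int(rng.choice(shifts, p=probs))
            y, g = deliver(y, policy(y, j, N), omega)
            cost += g
        total += cost + terminal_cost(y)
    return total / n_sims

def heuristic_score(x, k, N, r, base_policies, n_sims, rng):
    """Heuristic mapping architecture: J~(x, r) = r[0] + sum_i r[i] * H_{u_i}(x)."""
    H = [heuristic_cost(x, k, N, u, n_sims, rng) for u in base_policies]
    return r[0] + float(np.dot(r[1:], H))
```

Each call to heuristic_score triggers one sub-simulation per base policy, which is the computational expense noted above.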
The training of the parameter vector r for the architecture is performed using a combination of Monte-Carlo simulation and least squares fitting. The NDP algorithm we use is called approximate policy iteration (API) using Monte-Carlo simulation. API alternates between approximate policy evaluation steps (simulation) and policy improvement steps (training). Policies are iteratively updated from the outcomes of simulation. We expect the policies will converge after several iterations, but there is no theoretical guarantee. Such an iteration process is illustrated in Figure 3.

Simulation step

Simulating sample trajectories starts with an initial state x_0 = 0, corresponding to no dose delivery. At the kth stage, an approximate cost-to-go function J̃_{k+1}(x_{k+1}, r_{k+1}) for the next stage determines the policy û_k via Equation (6), using the knowledge of the transition probabilities. We can then simulate x_{k+1} using the calculated û_k and a realization of ω_k. This process can be repeated to generate a collection of sample trajectories. In this simulation step, the parameter vectors r_k, k = 0, 1, …, N − 1 (which induce the policy û_k), remain fixed as all the sample trajectories are generated.

Fig. 3. Simulation and training in API. Starting with an initial policy, the Monte-Carlo simulation generates a number of sample trajectories. The sample costs at each stage are input into the training unit, in which the r_k are updated by minimizing the least squares error. New sample trajectories are simulated using the policy based on the approximate structure J̃(·, r_k) and (6). This process is repeated.

Simulation generates sample trajectories {x_{0,i} = 0, x_{1,i}, …, x_{N,i}}, i = 1, 2, …, M. The corresponding sample cost-to-go for every transition state is equal to the cumulative instantaneous costs plus a final cost,

c(x_{k,i}) = Σ_{j=k}^{N−1} g(x_{j,i}, x_{j+1,i}, û_j) + J_N(x_{N,i}).

Training step

In the training process, we evaluate the cost and update the r_k by solving a least squares problem at each stage k = 0, 1, …, N − 1,

min_{r_k} (1/2) Σ_{i=1}^{M} |J̃_k(x_{k,i}, r_k) − c(x_{k,i})|².  (11)

The least squares problem (11) penalizes the difference of the approximate cost-to-go estimation J̃_k(x_{k,i}, r_k) and the sample cost-to-go value c(x_{k,i}). It can be solved in various ways. In practice, we divide the M generated trajectories into M_1 batches, with each batch containing M_2 trajectories, M = M_1 · M_2. The least squares formulation (11) is equivalently written as

min_{r_k} Σ_{m=1}^{M_1} ( (1/2) Σ_{x_{k,i} ∈ Batch_m} |J̃_k(x_{k,i}, r_k) − c(x_{k,i})|² ).  (12)

We use a gradient-like method that processes each least squares term

Σ_{x_{k,i} ∈ Batch_m} |J̃_k(x_{k,i}, r_k) − c(x_{k,i})|²  (13)

incrementally. The algorithm works as follows: given a batch of sample state trajectories (M_2 trajectories), the parameter vector r_k is updated by

r_k := r_k − γ Σ_{x_{k,i} ∈ Batch_m} ( J̃_k(x_{k,i}, r_k) − c(x_{k,i}) ) ∇J̃_k(x_{k,i}, r_k),  k = 0, 1, …, N − 1.  (14)

Here γ is a stepsize length that should decrease monotonically as the number of batches used increases (see Proposition 3.8 in [3]). A suitable step length choice is γ = α/m, m = 1, 2, …, M_1, in the mth batch, where α is a constant scalar. The summation on the right-hand side of (14) is a gradient evaluation corresponding to (13) in the least squares formulation. The parametric vectors r_k are updated via the iteration (14), as a batch of trajectories becomes available.
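The scoring function (8) and the batch update (14) are straightforward to prototype. The sketch below is our own generic illustration of those two formulas, with the network weights stored as a hidden-layer matrix R and an output vector r_out (it reuses numpy from the earlier sketches); the feature map is left to the caller, and this is not the authors' implementation.

```python
def score(f, R, r_out):
    """Equation (8): J~(x, r) = sum_j r(j) * tanh( sum_l r(j, l) f_l(x) )."""
    return float(r_out @ np.tanh(R @ f))

def score_gradient(f, R, r_out):
    """Gradient of (8) with respect to the weights (R, r_out)."""
    hidden = np.tanh(R @ f)
    grad_r_out = hidden
    grad_R = ((1.0 - hidden**2) * r_out)[:, None] * f[None, :]
    return grad_R, grad_r_out

def batch_update(R, r_out, batch, gamma):
    """One step of (14) on a batch of (feature vector, sample cost c) pairs."""
    gR, gr = np.zeros_like(R), np.zeros_like(r_out)
    for f, c in batch:
        err = score(f, R, r_out) - c
        dR, dr = score_gradient(f, R, r_out)
        gR += err * dR
        gr += err * dr
    return R - gamma * gR, r_out - gamma * gr
```

With a stepsize γ = α/m in the mth batch, repeated calls to batch_update over the M_1 batches would carry out one training pass of the API scheme.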
The incremental updating scheme is motivated by the stochastic gradient algorithm (more details are given in [3]). In API, the r_k are kept fixed until all the M sample trajectories are generated. In contrast to this, another form of the NDP algorithm, called optimistic policy iteration (OPI), updates the r_k more frequently, immediately after a batch of trajectories is generated. The intuition behind OPI is that new changes in the policies are incorporated rapidly. This 'optimistic' way of updating r_k is subject to further investigation.

Ferris and Voelker [10] applied a rollout policy to solve this same problem. The approximation is built by applying the particular control u at stage k and a control (base) policy at all future stages. This procedure ignores the training part of our algorithm. The rollout policy essentially suggests a simple form of

J̃(x) = H_base(x).

The simplification results in a biased estimation of J(x), because the optimal cost-to-go function strictly satisfies J(x) ≤ H_base(x). In our new approach, we use an approximate functional architecture for the cost-to-go function, and the training process will determine the parameters in the architecture.

4 Computational experimentation

4.1 A simple example

We first experiment on a simple one-dimensional fractionation problem with several variations of the approximating architectures described in the preceding section. As depicted in Fig. 4, the setting consists of a total of 15 voxels {1, 2, …, 15}, where the target voxel set, T = {3, 4, …, 13}, is located in the center. Dose is delivered to the target voxels, and due to the random positioning error of the patient, a portion of dose is delivered outside of the target. We assume a maximum shift of 2 voxels to the left or right.

Fig. 4. A simple one-dimension problem. x_k is the dose distribution over voxels in the target: voxels 3, 4, …, 13.

In describing the cost function, our weighting scheme assigns relatively high weights on the target, and low weights elsewhere:

p(i) = 10 if i ∈ T,  p(i) = 1 if i ∉ T.

Definitions of the final error and the one-step error refer to (4) and (5). For the target volume above, we also consider two different probability distributions for the random shift ω_k. In the low volatility examples, we have, for every stage k,

ω_k = −2 with probability 0.02, −1 with probability 0.08, 0 with probability 0.8, 1 with probability 0.08, 2 with probability 0.02.

The high volatility examples have, for every stage k,

ω_k = −2 with probability 0.05, −1 with probability 0.25, 0 with probability 0.4, 1 with probability 0.25, 2 with probability 0.05.

While it is hard to estimate the volatilities present in the given application, the results are fairly insensitive to these choices.

To apply the NDP approach, we should provide a rich collection of policies for the set U(x_k). In our case, U(x_k) consists of a total number of A modified reactive policies,

U(x_k) = {u_{k,1}, u_{k,2}, …, u_{k,A} | u_{k,i} = a_i · max(0, T − x_k)/(N − k)},

where a_i is a numerical scalar indicating an augmentation level to the standard reactive policy delivery; here A = 5 and a = {1, 1.4, 1.8, 2.2, 2.6}. We apply two of the approximation architectures in Section 3.2: the neural network/multilayer (NN) perceptron architecture and the linear architecture using a heuristic mapping. The details follow.

1. API using Monte-Carlo simulation and neural network architecture. For the NN architecture, after experimentation with several different sets of features, we used the following six features f_j(x), j = 1, 2, …, 6 (a small sketch of this feature computation is given below):
a) Average dose distribution in the left rind of the target organ: mean of {x(i), i = 3, 4, 5}.
b) Average dose distribution in the center of the target organ: mean of {x(i), i = 6, 7, …, 10}.
c) Average dose distribution in the right rind of the target organ: mean of {x(i), i = 11, 12, 13}.
d) Standard deviation of the overall dose distribution in the target.
e) Curvature of the dose distribution. The curvature is obtained by fitting a quadratic curve over the values {x_i, i = 3, 4, …, 13} and extracting the curvature.
f) A constant feature f_6(x) = 1.
In features (a)-(c), we distinguish the average dose on different parts of the structure, because the edges commonly have both underdose and overdose issues, while the center is delivered more accurately. In the construction of the neural network formulation, a hyperbolic tangent function was used as the sigmoidal mapping function. The neural network has 6 inputs (6 features), 8 hidden sigmoidal units, and 1 output, such that the weight vector r_k of the neural network has length 56. In each simulation, a total of 10 policy iterations were performed. Running more policy iterations did not show further improvement. The initial policy used was the standard reactive policy u: u_k = max(0, T − x_k)/(N − k). Each iteration involved M_1 = 15 batches of sample trajectories, with M_2 = 20 trajectories in each batch, to train the neural network. To train the r_k in this approximate architecture, we started with r_{k,0} as a vector of ones, and used an initial step length γ = 0.5.

2. API using Monte-Carlo simulation and the linear architecture of heuristic mapping. Three heuristic policies were involved as base policies: (1) the constant policy u_1: u_{1,k} = T/N for all k; (2) the standard reactive policy u_2: u_{2,k} = max(0, T − x_k)/(N − k) for all k; (3) the modified reactive policy u_3 with the amplifying parameter a = 2 applied at all stages except the last one. For the stage k = N − 1, it simply delivers the residual dose:

u_{3,k} = 2 · max(0, T − x_k)/(N − k) for k = 0, 1, …, N − 2, and u_{3,k} = max(0, T − x_k)/(N − k) for k = N − 1.

This third choice facilitates a more aggressive treatment in early stages. To evaluate the heuristic cost H_{u_i}(x_k), i = 1, 2, 3, 100 sub-trajectories starting with x_k were generated for periods k to N. The training scheme was analogous to the above method. A total of 10 policy iterations were performed. The policy used in the first iteration was the standard reactive policy. All iterations involved M_1 = 15 batches of sample trajectories, with M_2 = 20 trajectories in each batch, resulting in a total of 300 trajectories. Running the heuristic mapping architecture entails a great deal of computation, because it requires evaluating the heuristic costs by sub-simulations.

The fractionation radiotherapy problem is solved using both techniques with N = 3, 4, 5, 10, 14 and 20 stages. Figure 5 shows the performance of API using a heuristic mapping architecture in a low volatility case. The starting policy is the standard reactive policy, which has an expected error (cost) of 0.48 (over M = 300 sample trajectories). The policies u_k converge after around 7 policy iterations, taking around 20 minutes on a PIII 1.4GHz machine.
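As promised above, here is a sketch of the six-feature map a)-f) for the 1-D example. The quadratic-fit curvature is one plausible realisation of feature e), and the function name is ours.

```python
def features_1d(x):
    """Map a cumulative dose vector x (15 voxels) to the 6-dimensional feature vector."""
    tgt = x[2:13]                                    # target voxels 3..13 (1-based numbering)
    left, center, right = tgt[:3], tgt[3:8], tgt[8:]
    quad = np.polyfit(np.arange(tgt.size), tgt, 2)   # quadratic fit over the target profile
    curvature = 2.0 * quad[0]                        # second derivative of the fitted parabola
    return np.array([left.mean(), center.mean(), right.mean(),
                     tgt.std(), curvature, 1.0])
```

A vector of this form is what the 6-8-1 network of architecture 1 above would consume.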
After the training, the expected error decreases to 0.30, a reduction of about 40% compared to the standard reactive policy.

Fig. 5. Performance of API using the heuristic cost mapping architecture, N = 20. For every iteration, we plot the average (over M_2 = 20 trajectories) of each of the M_1 = 15 batches. The broken line represents the mean cost in each iteration.

The main results of training and simulation with the two probability distributions are plotted in Figure 6. This one-dimension example is small, but the revealed patterns are informative. For each plot, the results of the constant policy, reactive policy and NDP policy are displayed. Due to the significant randomness in the high volatility case, it is more likely to induce underdose in the rind of the target, which is penalized heavily with our weighting scheme. Thus, as volatility increases, so does the error. Note that in this one-dimensional problem, an ideal total amount of dose delivered to the target is 11, which can be compared with the values on the vertical axes of the plots (which are multiplied by the vector p).

Fig. 6. Comparing the constant, reactive and NDP policies in low and high volatility cases: (a) NN architecture in low volatility, (b) NN architecture in high volatility, (c) heuristic mapping architecture in low volatility, (d) heuristic mapping architecture in high volatility.

Comparing the figures, we note remarkable similarities. Common to all examples is the poor performance of the constant policy. The reactive policy performs better than the constant policy, but not as well as the NDP policy in either architecture. The constant policy does not change much with the number of total fractions. The level of improvement depends on the NDP approximate structure used. The NN architecture performs better than the heuristic mapping architecture when N is small. When N is large, they do not show a significant difference.

4.2 A real patient example: head and neck tumor

In this subsection, we apply our NDP techniques to a real patient problem — a head and neck tumor. In the head and neck tumor scenario, the tumor volume covers a total of 984 voxels in space. As noted in Figure 7, the tumor is circumscribed by two critical organs: the mandible and the spinal cord. We apply the same techniques as in the simple example above. The weight setting is the same:

p(i) = 10 if i ∈ T,  p(i) = 1 if i ∉ T.

Fig. 7. Target tumor, cord and mandible in the head and neck problem scenario.

In our problem setting, we do not distinguish between critical organs and other normal tissue. In reality, a physician also takes into account radiation damage to the surrounding critical organs. For this reason, a higher penalty weight is usually assigned to these organs.

The shifts ω_k are now three-dimensional random vectors. By the assumption of independence of each component direction, we have

Pr(ω_k = [i, j, k]) = Pr(ω_{k,x} = i) · Pr(ω_{k,y} = j) · Pr(ω_{k,z} = k).  (16)
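Under this independence assumption, drawing a three-dimensional shift and computing its probability only needs the per-axis marginal. The short sketch below is our own illustration; the marginal argument stands for the low- or high-volatility distributions given next.

```python
shift_vals = np.array([-2, -1, 0, 1, 2])

def sample_shift_3d(marginal, rng):
    """Draw omega_k = (omega_x, omega_y, omega_z) with i.i.d. components."""
    return np.array([int(rng.choice(shift_vals, p=marginal)) for _ in range(3)])

def shift_probability(omega, marginal):
    """Pr(omega) as the product of the three marginal probabilities, as in (16)."""
    return float(np.prod(marginal[np.asarray(omega) + 2]))   # shift -2..2 -> index 0..4
```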
In the low and high volatility cases, each component of ω_k follows a discrete distribution (also with a maximum shift of two voxels). In the low volatility case,

ω_{k,i} = −2 with probability 0.01, −1 with probability 0.06, 0 with probability 0.86, 1 with probability 0.06, 2 with probability 0.01,

and in the high volatility case,

ω_{k,i} = −2 with probability 0.05, −1 with probability 0.1, 0 with probability 0.7, 1 with probability 0.1, 2 with probability 0.05.

We adjust the ω_{k,i} by smaller amounts than in the one-dimension problem, because the overall probability is the product of each component (16); the resulting volatility therefore grows. For each stage, U(x_k) is a set of modified reactive policies, whose augmentation levels include a = {1, 1.5, 2, 2.5, 3}. For the stage k = N − 1 (when there are two stages to go), setting the augmentation level a > 2 is equivalent to delivering more than the residual dose, which is unnecessary for treatment. In fact, the NDP algorithm will ignore these choices.

The approximate policy iteration algorithm uses the same two architectures as in Section 4.1. However, for the neural network architecture, we need an extended 12-dimensional input feature space:

(a) Features 1-7 are the mean values of the dose distribution of the left, right, up, down, front, back and center parts of the tumor.
(b) Feature 8 is the standard deviation of the dose distribution in the tumor volume.
(c) Features 9-11. We extract the dose distribution on three lines through the center of the tumor. Lines are from left to right, from up to down, and from front to back. Features 9-11 are the estimated curvature of the dose distribution on the three lines.
(d) Feature 12 is a constant feature, set as 1.

In the neural network architecture, we build 1 hidden layer, with 16 hidden sigmoidal units. Therefore, each r_k for J̃(x, r_k) is of length 208. We still use 10 policy iterations. (Later experimentation shows that 5 policy iterations are enough for policy convergence.) In each iteration, simulation generates a total of 300 sample trajectories that are grouped in M_1 = 15 batches of sample trajectories, with M_2 = 20 in each batch, to train the parameter r_k. One thing worth mentioning here is that the initial step length scalar γ in (14) is set to a much smaller value in the 3D problem. In the head and neck case, we set γ = 0.00005 as compared to γ = 0.5 in the one-dimension example. A plot, Figure 8, shows the reduction of the expected error as the number of policy iterations increases. The alternative architecture for J̃(x, r), using a linear combination of heuristic costs, is implemented precisely as in the one-dimension example.

Fig. 8. Performance of API using the neural-network architecture, N = 11. For every iteration, we plot the average (over M_2 = 20 trajectories) of each of the M_1 = 15 batches. The broken line represents the mean cost in each policy iteration.
ωk,1 =−2 ωk,2 =−2 ωk,2 =−2 For a large portion of ωk , the value of P r(ωk ) almost vanishes to zero when it makes a two-voxel shift in each direction. Thus, we only compute the sum of costs over a subset of possible ωk , 1 P r(ωk )[g(xk , xk+1 , uk ) + J˜k+1 (xk+1 , rk+1 )]. ωk,1 =−1 ωk,2 =−1 ωk,2 =−1 A straightforward calculation shows that we reduce a total of 125(= 53 ) evaluations of state xk+1 to 27(= 33 ). The final time involved in training the architecture is around 10 hours. Again, we plot the results of constant policy, reactive policy and NDP policy in the same figure. We still investigate on the cases where N = 3, 4, 5, 14, 20. As we can observe in all sub-figures in Figure 9, the constant policy still performs the worst in both high and low volatility cases. The reactive policy is better and the NDP policy is best. As the total number of stages increases, the constant policy remains almost at the same level, but the reactive and NDP continue to improve. The poor constant policy is a consequence of significant underdose near the edge of the target. The two approximating architectures perform more or less the same, though the heuristic mapping architecture takes significantly more time to train. Focusing on the low volatility cases, Figure 9 (a) and (c), we see the heuristic mapping architecture outperforms the NN architecture when N is small, i.e., N = 3, 4, 5, 10. When N = 20, the expected error is reduced to the lowest, about 50% from reactive policy to NDP policy. When N is small, the improvement ranges from 30% to 50%. When the volatility is high, it undoubtedly induces more error than in low volatility. Not only the expected error, but the variance escalates to a large value as well. For the early fractions of the treatment, the NDP algorithm intends to select aggressive policies, i.e., the augmentation level a > 2, while in the later stage time, it intends to choose more conservative polices. Since the weighting factor for target voxels is 10, aggressive policies are preferred in the early stage because they leave room to correct the delivery error on the target in the later stages. However, it may be more likely to cause delivery error on the normal tissue. G. Deng and M.C. Ferris 800 1600 Constant Policy Reactive Policy NDP Policy Constant Policy Reactive Policy NDP Policy Expected Error Expected Error Constant Policy Reactive Policy NDP Policy (b) NN architecture in high volatility. Expected Error Expected Error (a) NN architecture in low volatility. Total Number of Stages Total Number of Stages Constant Policy Reactive Policy NDP Policy Total Number of Stages (c) Heuristic mapping architecture in low volatility. Total Number of Stages (d) Heuristic mapping architecture in high volatility. Fig. 9. Head and neck problem — comparing constant, reactive and NDP policies in two probability distributions. 4.3 Discussion The number of candidate policies used in training is small. Once we have the optimal rk after simulation and training procedures, we can select uk from an extended set of policies U (xk ) (via (6)) using the approximate cost-to-go ˜ rk ), improving upon the current results. functions J(x, For instance, we can introduce a new class of policies that cover a wider delivery region. This class of clinically favored policies includes a safety margin around the target. The policies deliver the same dose to voxels in the margin as delivered to the nearest voxels in the target. 
As an example policy in the class, a constant-w1 policy (where 'w1' means '1 voxel wider') is an extension of the constant policy, covering a 1-voxel thick margin around the target. As in the one-dimensional example in Section 4.1, the constant-w1 policy is defined as

u_k(i) = T(i)/N for i ∈ T,  u_k(2) = T(3)/N,  u_k(14) = T(13)/N,  and u_k(i) = 0 elsewhere,

where the voxel set {2, 14} represents the margin of the target. The reactive-w1 policies and the modified reactive-w1 policies are defined accordingly. (We prefer to use 'w1' policies rather than 'w2' policies because 'w1' policies are observed to be uniformly better.)

The class of 'w1' policies is preferable in the high volatility case, but not in the low volatility case (see Figure 10). For the high volatility case, the policies reduce the underdose error significantly, which is penalized 10 times as heavily as the overdose error, easily compensating for the overdose error they introduce outside of the target. In the low volatility case, when the underdose is not as severe, they inevitably introduce redundant overdose error.

Fig. 10. In the one-dimensional problem, NDP policies with extended policy set U(x_k): (a) heuristic mapping architecture in low volatility, (b) heuristic mapping architecture in high volatility.

The NDP technique was applied to an enriched policy set U(x_k), including the constant, constant-w1, reactive, reactive-w1, modified reactive and modified reactive-w1 policies. It automatically selected an appropriate policy at each stage based on the approximated cost-to-go function, and outperformed every component policy in the policy set. In Figure 10, we show the result of the one-dimensional example using the heuristic mapping architecture for NDP. As we have observed, in the low volatility case the NDP policy tends to be the reactive or the modified reactive policy, while in the high volatility case it is more likely to be the reactive-w1 or the modified reactive-w1 policy. Comparing with the NDP policies in Figure 6, we see that increasing the choice of policies in U(x_k) lets the NDP policy generate a lower expected error.

Another question concerns the amount of difference that occurs when switching to another weighting scheme. Setting a high weighting factor on the target is rather arbitrary. This will also influence the NDP in selecting policies. In addition, we changed the setting of the weighting scheme to

p(i) = 1 for i ∈ T,  p(i) = 1 for i ∉ T,

and ran the experiment on the real example (Section 4.2) again. In Figure 11, we discovered the same pattern of results, while this time all the error curves were scaled down accordingly. The difference between the constant and reactive policy decreased. The NDP policy showed an improvement of around 12% over the reactive policy when N = 10.

Fig. 11. Head and neck problem. Using API with a neural network architecture, in a low volatility case, with identical weight on the target and normal tissue.

We even tested the weighting scheme

p(i) = 1 for i ∈ T,  p(i) = 10 for i ∉ T,

which reversed the importance of the target and the surrounding tissue.
It resulted in a very small amount of delivered dose in the earlier stages, and at the end the target was severely underdosed. The result was reasonable because the NDP policy was cautious to deliver any dose outside of the target at each stage. 5 Conclusion Solving an optimal on-line planning strategy in fractionated radiation treatment is quite complex. In this paper, we set up a dynamic model for the Neuro-dynamic programming for fractionated radiotherapy planning day-to-day planning problem. We assume that the probability distribution of patient motion can be estimated by means of prior inspection. In fact, our experimentation on both high and low volatility cases displays very similar patterns. Although methods such as dynamic programming obtain exact solutions, the computation is intractable. We exploit neuro-dynamic programming tools to derive approximate DP solutions that can be solved with much fewer computational resources. The API algorithm we apply iteratively switches between Monte-Carlo simulation steps and training steps, whereby the feature based approximating architectures of the cost-to-go function are enhanced as the algorithm proceeds. The computational results are based on a finite policy set for training. In fact, the final approximate cost-to-go structures can be used to facilitate selection from a larger set of candidate policies extended from the training set. We jointly compare the on-line policies with an off-line constant policy that simply delivers a fixed dose amount in each fraction of treatment. The on-line policies are shown to be significantly better than the constant policy in terms of total expected delivery error. In most of the cases, the expected error is reduced by more than half. The NDP policy performs preferentially, enhancing the reactive policy for all our tests. Future work needs to address further timing improvement. We have tested two approximation architectures. One uses a neural network and the other is based on existing heuristic policies, both of which perform similarly. The heuristic mapping architecture is slightly better than the neural network based architecture, but it takes significantly more computational time to evaluate. As these examples have demonstrated, neurodynamic programming is a promising supplement to heuristics in discrete dynamic optimization. References 1. D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Belmont, Massachusetts, 1995. 2. D. P. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Technical report, Lab. for Information and Decision Systems, MIT, 1996. 3. D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996. 4. J. R. Birge and R. Louveaux. Introduction to Stochastic Programming. Springer, New York, 1997. 5. M. Birkner, D. Yan, M. Alber, J. Liang, and F. Nusslin. Adapting inverse planning to patient and organ geometrical variation: Algorithm and implementation. Medical Physics, 30:2822–2831, 2003. 6. Th. Bortfeld. Current status of IMRT: physical and technological aspects. Radiotherapy and Oncology, 61:291–304, 2001. G. Deng and M.C. Ferris 7. C. L. Creutzberg, G. V. Althof, M. de Hooh, A. G. Visser, H. Huizenga, A. Wijnmaalen, and P. C. Levendag. A quality control study of the accuracy of patient positioning in irradiation of pelvic fields. International Journal of Radiation Oncology, Biology and Physics, 34:697–708, 1996. 8. M. C. Ferris, J.-H. Lim, and D. 
M. Shepard. Optimization approaches for treatment planning on a Gamma Knife. SIAM Journal on Optimization, 13:921– 937, 2003. 9. M. C. Ferris, J.-H. Lim, and D. M. Shepard. Radiosurgery treatment planning via nonlinear programming. Annals of Operations Research, 119:247–260, 2003. 10. M. C. Ferris and M. M. Voelker. Fractionation in radiation treatment planning. Mathematical Programming B, 102:387–413, 2004. 11. A. Gosavi. Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning. Kluwer Academic Publishers, Norwell, MA, USA, 2003. 12. M. A. Hunt, T. E. Schultheiss, G. E. Desobry, M. Hakki, and G. E. Hanks. An evaluation of setup uncertainties for patients treated to pelvic fields. International Journal of Radiation Oncology, Biology and Physics, 32:227–233, 1995. 13. P. Kall and S. W. Wallace. Stochastic Programming. John Wiley & Sons, Chichester, 1994. 14. K. M. Langen and T. L. Jones. Organ motion and its management. International Journal of Radiation Oncology, Biology and Physics, 50:265–278, 2001. 15. J. G. Li and L. Xing. Inverse planning incorporating organ motion. Medical Physics, 27:1573–1578, 2000. 16. A. Niemierko. Optimization of 3D radiation therapy with both physical and biological end points and constraints. International Journal of Radiation Oncology, Biology and Physics, 23:99–108, 1992. 17. W. Schlegel and A. Mahr, editors. 3D Conformal Radiation Therapy - A Multimedia Introduction to Methods and Techniques. Springer-Verlag, Berlin, 2001. 18. D. M. Shepard, M. C. Ferris, G. Olivera, and T. R. Mackie. Optimizing the delivery of radiation to cancer patients. SIAM Review, 41:721–744, 1999. 19. J. Unkelback and U. Oelfke. Inclusion of organ movements in IMRT treatment planning via inverse planning based on probability distributions. Institute of Physics Publishing, Physics in Medicine and Biology, 49:4005–4029, 2004. 20. J. Unkelback and U. Oelfke. Incorporating organ movements in inverse planning: Assessing dose uncertainties by Bayesian inference. Institute of Physics Publishing, Physics in Medicine and Biology, 50:121–139, 2005. 21. L. J. Verhey. Immobilizing and positioning patients for radiotherapy. Seminars in Radiation Oncology, 5:100–113, 1995. 22. M. M. Voelker. Optimization of Slice Models. PhD thesis, University of Wisconsin, Madison, Wisconsin, December 2002. 23. S. Webb. The Physics of Conformal Radiotherapy: Advances in Technology. Institute of Physics Publishing Ltd., 1997. Randomized algorithms for mixed matching and covering in hypergraphs in 3D seed reconstruction in brachytherapy Helena Fohlin2 , Lasse Kliemann1∗ , and Anand Srivastav1 1 Institut f¨ ur Informatik Christian–Albrechts–Universit¨ at zu Kiel Christian-Albrechts-Platz 4, D–24098 Kiel, Germany {lki,asr}@numerik.uni-kiel.de Department of Oncology Link¨ oping University Hospital 581 85 Link¨ oping, Sweden [email protected] Summary. Brachytherapy is a radiotherapy method for cancer. In its low dose radiation (LDR) variant a number of radioactive implants, so-called seeds, are inserted into the affected organ through an operation. After the implantation, it is essential to determine the locations of the seeds in the organ. A common method is to take three X-ray photographs from different angles; the seeds show up on the X-ray photos as small white lines. In order to reconstruct the three-dimensional configuration from these X-ray photos, one has to determine which of these white lines belong to the same seed. 
We model the problem as a mixed packing and covering hypergraph optimization problem and present a randomized approximation algorithm based on linear programming. We analyse the worst-case performance of the algorithm by discrete probabilistic methods and present results for data of patients with prostate cancer from the university clinic of Schleswig-Holstein, Campus Kiel. These examples show an almost optimal performance of the algorithm which presently cannot be matched by the theoretical analysis. Keywords: Prostate cancer, brachytherapy, seed reconstruction, combinatorial optimization, randomized algorithms, probabilistic methods, concentration inequalities. 1 Introduction Brachytherapy is a method developed in the 1980s for cancer radiation in organs like the prostate, lung, or breast. At the Clinic of Radiotherapy (radiooncology), University Clinic of Schleswig-Holstein, Campus Kiel, among others, ∗ Supported by the Deutsche Forschungsgemeinschaft (DFG), Grant Sr7-3. H. Fohlin et al. low dose radiation therapy (LDR therapy) for the treatment of prostate cancer is applied, where 25-80 small radioactive seeds are implanted in the affected organ. They have to be placed so that the tumor is exposed with sufficiently high radiation and adjacent healthy tissue is exposed to as low a radiation dose as possible. Unavoidably, the seeds can move due to blood circulation, movements of the organ, etc. For the quality control of the treatment plan, the locations of the seeds after the operation have to be checked. This is done by taking usually 3 X-ray photographs from three different angles (so-called 3-film technique). On the films the seeds appear as white lines. To determine the positions of the seeds in the organ the task now is to match the three different images (lines) representing the same seed. 1.1 Previous and related work The 3-film technique was independently applied by Rosenthal and Nath [22], Biggs and Kelley [9] and Altschuler, Findlay, Epperson [2], while Siddon and Chin [12] applied a special 2-film technique that took the seed endpoints as image points rather than the seed centers. The algorithms invoked in these papers are matching heuristics justified by experimental results. New algorithmic efforts were taken in the last 5 years. Tubic, Zaccarin, Beaulieu and Pouliot [8] used simulated annealing, Todor, Cohen, Amols and Zaider [3] combined several heuristic approaches, and Lam, Cho, Marks and Narayanan [13] introduced the so-called Hough transform, a standard method in image processing and computer vision for the seed reconstruction problem. Recently, Narayanan, Cho and Marks [14] also addressed the problem of reconstruction with an incomplete data set. These papers essentially focus on the improvement of the geometric projections. From the mathematical programming side, branch-and-bound was applied by Balas and Saltzman [7] and Brogan [10]. These papers provide the link to integer programming models of the problem. None of these papers give a mathematical analysis or provable performance guarantee of the algorithms in use. In particular, since different projection techniques essentially result in different objective functions, it would be desirable to have an algorithm which is independent of the specific projection technique and thus is applicable to all such situations. 
Furthermore, it is today considered a challenging task in algorithmic discrete mathematics and theoretical computer science to give fast algorithms for N P -hard problems, which provably (or at least in practice) approximate the optimal solution. This is sometimes a fast alternative to branch-and-bound methods. A comprehensive treatment of randomized rounding algorithms for packing and covering integer programs has been given by Srivastav [27] and Srivastav and Stangier [28]. The presented algorithm has also been studied in [15]. Experimental results on an algorithm based on a different LP formulation combined with a visualization technique have recently been published [26]. Randomized algorithms for mixed matching and covering in hypergraphs 1.2 Our contribution In this paper we model the seed reconstruction problem as a minimum-weight perfect matching problem in a hypergraph: we consider a complete 3-uniform hypergraph, where its nodes are the seed images on the three films, and each of its hyperedges contains three nodes (one from each X-ray photo). We define a weight function for the hyperedges, which is close to zero if the three lines from a hyperedge belong to the same seed and increases otherwise. The goal is to find a matching, i.e., a subset of pairwise disjoint hyperedges, so that all nodes are covered and the total weight of these hyperedges is minimum. This is nothing other than the minimum-weight perfect matching problem in a hypergraph. Since this problem generalizes the N P -hard 3-dimensional assignment problem (see [16]), it is N P -hard as well. Thus we can only hope to find an algorithm which solves the problem approximately in polynomial time, unless P = N P . We model the problem as an integer linear program. To solve this integer program, an algorithm based on the so-called randomized rounding scheme introduced by Raghavan and Thompson [24] is designed and applied. This algorithm is not only very fast, but accessible at least in part for a mathematical rigorous analysis. We give a partial analysis of the algorithm combining probabilistic and combinatorial methods, which shows that in the worst-case the solution produced is in some strong sense close to a minimum-weight perfect matching. The heart of the analytical methods are tools from probability theory, like large deviation inequalities. All in all, our algorithm points towards a mathematically rigorous analysis of heuristics for the seed reconstruction problem and is practical as well. Furthermore, the techniques developed here are promising for an analysis of mixed integer packing and covering problems, which are of independent interest in discrete optimization. Moreover, we show that an implementation of our algorithm is very effective on a set of patient data from the Clinic of Radiotherapy, University Clinic of Schleswig-Holstein, Campus Kiel. In fact, the algorithm for a certain choice of parameters outputs optimal or nearly optimal solutions where only a few seeds are unmatched. It is interesting that the practical results are much better than the results of the theoretical analysis indicate. Here we have the challenging situation of closing the gap between the theoretical analysis and the good practical performance, which should be addressed in future work. In conclusion, while in previous work on the seed reconstruction problem only heuristics were used, this paper is a first step in designing mathematical analyzable and practically efficient algorithms. The paper is organized as follows. 
In Section 2 we describe the seed reconstruction problem more precisely and give a mathematical model. For this we introduce the notion of (b, k)matching which generalizes the notions of b-matching in hypergraphs and partial k-covering in hypergraphs. In fact, a (b, k)-matching is a b-matching, i.e., a subset of hyperedges such that no node is incident in more than b of H. Fohlin et al. them, covering at least k nodes. So for a hypergraph with n nodes, (1, n)matching is a perfect matching problem. Furthermore, some large deviation inequalities are listed as well. In Section 3 we give an integer linear programming formulation for the (b, k)-matching problem and state the randomized rounding algorithm. This algorithm solves the linear programming (LP) relaxation up to optimality and then generates an integer solution by picking edges with the probabilities given by the optimal LP-solution. After this procedure we remove edges in a greedy way to get a feasible b-matching. In Section 4 we analyze the algorithm with probabilistic tools. In Section 5 we test the practical performance of the algorithm on real patient data for five patients treated in the Clinic of Radiotherapy in Kiel. The algorithm is implemented in C++ , and is iterated for each patient data set 100 times. For most of the patients all seeds are matched if we choose good values of the parameters, i.e., letting them be close to the values enforcing a minimum-weight perfect matching. The algorithm is very fast: within a few seconds of CPU time on a PC, it delivers the solution. 2 Hypergraph matching model of 3D seed reconstruction Brachytherapy is a cancer radiation therapy developed in the 1980s. In the low dose variant of brachytherapy, about 25 to 80 small radioactive implants called seeds are placed in organs like the prostate, lung or breast, and remain there. A seed is a titan cylinder of length approximately 4.5 mm encapsulating radioactive material like Iod-125 or Pd-103. The method allows an effective continuous radiation of tumor tissue with a relatively low dose for a long time in which radiation is delivered at a very short distance to the tumor by placing the radioactive source in the affected organ. Today it is a widely spread technique and an alternative to the usual external radiation. A benefit for the patient is certainly that he/she does not have to suffer from a long treatment with various radiation sessions. For the treatment of prostate cancer, brachytherapy has been reported as a very effective method [6]. At the Clinic of Radiotherapy, University Clinic of Schleswig-Holstein, Campus Kiel, brachytherapy has become the standard radiation treatment of prostate cancer. 2.1 The optimization problem In LDR brachytherapy with seeds two mathematical optimization problems play a central role: Placement problem The most important problem is to determine a minimum number of seeds along with their placement in the organ. The placement must be such that Randomized algorithms for mixed matching and covering in hypergraphs a) the tumor tissue is exposed with sufficient dose avoiding cold spots (regions with insufficient radiation) and hot spots (regions with too much radiation) and b) normal tissue or critical regions like the urethra are exposed with a minimum possible, medical tolerable dose. The problem thus appears as a combination of several N P -hard multicriteria optimization problems, e.g., set covering and facility location with restricted areas. 
Since the dose distribution emitted by the seeds is highly nonlinear, the problem is further complicated beyond set covering with regular geometric objects, like balls, ellipsoids, etc. Intense research has been done in this area in the last 10 years. Among the most effective placement tools are the mixed-integer programming methods proposed by Lee [20]. At the Clinic of Radiotherapy, University Clinic of Schleswig-Holstein, Campus Kiel, a commercial placement software (VariSeed® of the company VARIAN) is applied. The software offers a two-film and a three-film technique. According to the manual of the software, the three-film technique is an ad hoc extension of the two-film technique of Chin and Siddon [12].

3D seed reconstruction problem

After the operative implantation of the seeds, due to blood circulation and movements of the organ or patient, the seeds can change their original position. Usually 1-2 hours after the operation a determination of the actual seed positions in the organ is necessary in order to control the quality and to take further steps. In the worst case a short high dose radiation (HDR brachytherapy) has to be conducted. The seed locations are determined by three X-ray films of the organ taken from three different angles, see Figures 1, 2, and 3. This technique was introduced by Amols and Rosen [4] in 1981. The advantage of the 3-film technique compared with the 2-film technique is that it seems to be less ambiguous in identifying seed locations. So, each film shows the seeds from a different 3-dimensional perspective. The task is to determine the location of the seeds in the organ by matching seed images on the three films.

Fig. 1. X-ray, 0 degrees.
Fig. 2. X-ray, 20 degrees.
Fig. 3. X-ray, 340 degrees.
(Figures 1, 2, and 3 were provided by Dr. F.-A. Siebert, Clinic of Radiotherapy, University Clinic of Schleswig-Holstein, Campus Kiel, Kiel, Germany.)

To formalize the seed reconstruction problem, an appropriate geometrical measure as a cost function for matching three seed images from each film is introduced. We now show how the cost function is computed for the upper endpoint of the seed (see Figure 4). The cost of the lower endpoint is calculated in the same way. For the three seed images we have three lines P1, P2, P3 connecting the lower respectively upper endpoint of the seed images with the X-ray source. We determine the shortest connections between the lines Pi and Pj for all i, j. Let ri = (xi, yi, zi) be the centers of these shortest connections and let x̄, ȳ, z̄ be the mean values of the x-, y-, z-coordinates of r1, r2, r3. We define the standard deviation

∆r = √( (1/3) Σ_{i=1}^{3} (xi − x̄)² ) + √( (1/3) Σ_{i=1}^{3} (yi − ȳ)² ) + √( (1/3) Σ_{i=1}^{3} (zi − z̄)² ).

The cost for the upper (respectively lower) endpoint of any choice of three seed images from the three X-ray photos is the ∆r of the associated lines. It is clear that ∆r is close to zero if the three seed images represent the same seed. The total cost for three seed images is the sum of the standard deviation ∆r for the upper endpoint and the standard deviation for the lower endpoint. (By appropriately scaling ∆r to ∆r/α, with some α ≥ 1, one can assume that the total cost is in [0, 1].) An alternative cost measure could be the area spanned by the triangle r1, r2, r3, but this cost function is not considered in this paper.

Fig. 4. Cost function for the upper endpoint.
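To make the endpoint cost concrete, the following small C++ sketch (C++ being the language of the implementation reported in Section 5) evaluates ∆r for one endpoint from the three midpoints r1, r2, r3 of the pairwise shortest connections. It assumes the reconstruction of ∆r given above, i.e., the sum of the per-coordinate standard deviations of the three midpoints; the type and function names are illustrative and not taken from the authors' implementation.

```cpp
#include <array>
#include <cmath>
#include <iostream>

// A 3D point (x, y, z); here the midpoint of the shortest connection
// between two of the three projection lines. Names are illustrative.
using Point3 = std::array<double, 3>;

// Delta_r for one seed endpoint, following the reconstruction above:
// the sum, over the x-, y- and z-coordinates, of the standard deviation
// of the three midpoints r1, r2, r3 around their mean.
double deltaR(const Point3& r1, const Point3& r2, const Point3& r3) {
    const std::array<Point3, 3> r = {r1, r2, r3};
    double cost = 0.0;
    for (int c = 0; c < 3; ++c) {                       // coordinate x, y, z
        double mean = (r[0][c] + r[1][c] + r[2][c]) / 3.0;
        double meanSq = 0.0;
        for (int i = 0; i < 3; ++i)
            meanSq += (r[i][c] - mean) * (r[i][c] - mean) / 3.0;
        cost += std::sqrt(meanSq);                      // per-coordinate std
    }
    return cost;
}

int main() {
    // Midpoints that almost coincide -> cost close to zero, as expected
    // when the three seed images belong to the same seed.
    Point3 a{10.0, 5.0, 3.0}, b{10.1, 5.0, 2.9}, c{9.9, 5.1, 3.1};
    std::cout << "Delta_r = " << deltaR(a, b, c) << "\n";
}
```

In this sketch the total cost of a triple of seed images would be deltaR evaluated at the upper-endpoint midpoints plus deltaR at the lower-endpoint midpoints, scaled into [0, 1] as noted above.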
If the cost function is well posed, the optimal solution of the problem should be in one-to-one correspondence to the real seed locations in the organ. Thus the problem reduces to a three-dimensional assignment (or matching) problem, where we minimize the cost of the matching. In the literature the problem is also known as the AP3 problem, which is NP-hard. Thus, under the hypothesis P ≠ NP, we cannot expect an efficient, i.e., polynomial time, algorithm solving the problem to optimality.

2.2 Hypergraph matching and seed reconstruction

We use the standard notions of graphs and hypergraphs. A finite graph G = (V, E) is a pair of a finite set V (the set of vertices or nodes) and a subset E of the set of all 2-element subsets of V. The elements of E are called edges. A hypergraph (or set system) H = (V, E) is a pair of a finite set V and a subset E of the power set P(V). The elements of E are called hyperedges. Let H = (V, E) be a hypergraph. For v ∈ V we define

deg(v) := |{E ∈ E; v ∈ E}|  and  ∆ = ∆(H) := max_{v∈V} deg(v).

We call deg(v) the vertex degree of v, and ∆(H) is the maximum vertex degree of H. The hypergraph H is called r-regular respectively s-uniform if deg(v) = r for all v ∈ V respectively |E| = s for all E ∈ E. It is convenient to order the vertices and hyperedges, V = {v1, ..., vn} and E = {E1, ..., Em}, and to identify vertices and edges with their indices. The hyperedge-vertex incidence matrix of a hypergraph H = (V, E), with V = {v1, ..., vn} and E = {E1, ..., Em}, is the matrix A = (aij) ∈ {0,1}^{m×n}, where aij = 1 if vj ∈ Ei, and 0 else. Sometimes the vertex-hyperedge incidence matrix A^T is used.

We proceed to the formulation of a mathematical model for the seed reconstruction problem.

Definition 1. Let H = (V, E) be a hypergraph with |V| = n, |E| = m. Let w : E → Q ∩ [0, 1] be a weight function. Let b, k ∈ N.
(i) A b-matching in H is a subset E* ⊆ E such that each v ∈ V is contained in at most b edges of E*.
(ii) A (b, k)-matching E* is a b-matching such that at least k vertices are covered by edges of E*.
(iii) For a subset E* ⊆ E, we define its weight w(E*) as the sum of the weights of the edges from E*.

We consider the following optimization problem.

Problem 1. Min-(b, k)-Matching: Find a (b, k)-matching with minimum weight, if such a matching exists.

This problem, for certain choices of b and k, specializes to well-known problems in combinatorial optimization:
1. Min-(1, n)-Matching is the minimum-weight perfect matching problem in hypergraphs.
2. Min-(m, n)-Matching is the set covering problem in hypergraphs.
3. Min-(m, k)-Matching is the partial set covering (or k-set covering) problem in hypergraphs.

The seed reconstruction problem can be modeled as a minimum-weight perfect matching problem in a 3-uniform hypergraph as follows: let V1, V2, V3 be the seed images on the X-ray photos 1, 2, 3. With V = V1 ∪ V2 ∪ V3 and E = V1 × V2 × V3, the hypergraph under consideration is H = (V, E). Given a weight function w : E → Q ∩ [0, 1], the seed reconstruction problem is just the problem of finding the minimum-weight perfect matching in H.
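To connect the model to the data, the following C++ sketch enumerates the hyperedge set E = V1 × V2 × V3: every triple consisting of one seed image per film becomes a hyperedge, weighted by the cost function of Section 2.1. The identifiers are illustrative; the sketch only assumes that a weight in [0, 1] can be computed for each triple.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// One hyperedge of the 3-uniform hypergraph: one seed-image index per film,
// together with its weight. Identifiers are illustrative.
struct Hyperedge {
    std::size_t i1, i2, i3;  // indices into V1, V2, V3
    double weight;           // cost of matching these three seed images
};

// Build E = V1 x V2 x V3. n1, n2, n3 are the numbers of seed images on the
// three films; 'cost' returns the weight of a triple (e.g., the sum of the
// Delta_r values of the two endpoints, scaled into [0, 1]).
std::vector<Hyperedge> buildHyperedges(
    std::size_t n1, std::size_t n2, std::size_t n3,
    const std::function<double(std::size_t, std::size_t, std::size_t)>& cost) {
    std::vector<Hyperedge> edges;
    edges.reserve(n1 * n2 * n3);
    for (std::size_t a = 0; a < n1; ++a)
        for (std::size_t b = 0; b < n2; ++b)
            for (std::size_t c = 0; c < n3; ++c)
                edges.push_back({a, b, c, cost(a, b, c)});
    return edges;
}
```

Note that the hypergraph is complete: with roughly 80 seeds per film this already gives on the order of 80³ ≈ 5·10⁵ hyperedges, which gives a feeling for the instance sizes the LP-based algorithm of Section 3 has to handle.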
2.3 Some probabilistic tools

Throughout this article we consider only finite probability spaces (Ω, P), where Ω is a finite set and P is a probability measure with respect to the power set P(Ω) as the sigma field. We recall the basic Markov and Chebyshev inequalities.

Theorem 1 (Markov Inequality). Let (Ω, P) be a probability space and X : Ω → R⁺ a random variable with expectation E(X) < ∞. Then for any λ ∈ R⁺

P[X ≥ λ] ≤ E(X)/λ.

An often sharper bound is the well-known inequality of Chebyshev:

Theorem 2 (Chebyshev Inequality). Let (Ω, P) be a probability space and X : Ω → R a random variable with finite expectation E(X) and variance Var(X). Then for any λ ∈ R⁺

P[|X − E(X)| ≥ λ √Var(X)] ≤ 1/λ².

For one-sided deviation the following Chebyshev-Cantelli inequality (see [1]) gives better bounds:

Theorem 3. Let X be a non-negative random variable with finite expectation E(X) and variance Var(X). Then for any a > 0

P[X ≤ E(X) − a] ≤ Var(X)/(Var(X) + a²).

The following estimate on the variance of a sum of dependent random variables can be proved as in [1], Corollary 4.3.3. Let X be the sum of any 0/1 random variables, i.e., X = X1 + ... + Xn, and let pi = E(Xi) for all i = 1, ..., n. For a pair i, j ∈ {1, ..., n} we write i ∼ j if Xi and Xj are dependent. Let Γ be the set of all unordered dependent pairs i, j, i.e., the 2-element sets {i, j}, and let

γ = Σ_{{i,j}∈Γ} E(Xi Xj).

Proposition 1. Var(X) ≤ E(X) + 2γ.

Proof. We have

Var(X) = Σ_{i} Var(Xi) + Σ_{i≠j} Cov[Xi, Xj],   (1)

where the second sum is over ordered pairs. Since Xi² = Xi, and Var(Xi) = E(Xi²) − E(Xi)² = E(Xi)(1 − E(Xi)) ≤ E(Xi), (1) gives

Var(X) ≤ E(X) + Σ_{i≠j} Cov[Xi, Xj].   (2)

If i ≁ j, then Cov[Xi, Xj] = 0. For i ∼ j we have

Cov[Xi, Xj] = E(Xi Xj) − E(Xi)E(Xj) ≤ E(Xi Xj),   (3)

so (3) implies the assertion of the proposition. □

We proceed to the statement of standard large deviation inequalities for a sum of independent random variables. Let X1, ..., Xn be 0/1 valued mutually independent (briefly: independent) random variables, where P[Xj = 1] = pj, P[Xj = 0] = 1 − pj for probabilities pj ∈ [0, 1] for all 1 ≤ j ≤ n. For 1 ≤ j ≤ n let wj denote rational weights with 0 ≤ wj ≤ 1 and let

X = Σ_{j=1}^{n} wj Xj.   (4)

The sum X = Σ_{j=1}^{n} wj Xj with wj = 1 and pj = p for all j ∈ {1, ..., n} is the well-known binomially distributed random variable with mean np. The inequalities given below can be found in the books of Alon, Spencer and Erdős [1], Habib, McDiarmid, Ramirez-Alfonsin and Reed [17], and Janson, Łuczak, Ruciński [19]. The following basic large deviation inequality is implicitly given in Chernoff [11] in the binomial case. In explicit form it can be found in Okamoto [23]. Its generalization to arbitrary weights is due to Hoeffding [18].

Theorem 4 ([18]). Let λ > 0 and let X be as in (4). Then
(a) P(X > E(X) + λ) ≤ e^{−2λ²/n},
(b) P(X < E(X) − λ) ≤ e^{−2λ²/n}.

In the literature Theorem 4 is well known as the Chernoff bound. For small expectations, i.e., E(X) ≤ n/6, the following inequalities due to Angluin and Valiant [5] give better bounds.

Theorem 5. Let X1, ..., Xn be independent random variables with 0 ≤ Xi ≤ 1 and E(Xi) = pi for all i = 1, ..., n. Let X = Σ_{i=1}^{n} Xi and µ = E(X). For any β > 0
(i) P[X ≥ (1 + β) · µ] ≤ e^{−β²µ / (2(1+β/3))},
(ii) P[X ≤ (1 − β) · µ] ≤ e^{−β²µ / 2}.

Note that for 0 ≤ β ≤ 3/2 the bound in (i) is at most exp(−β²µ/3). We will also need the Landau symbols O, o, Θ and Ω.

Definition 2. Let f : N → R≥0 and g : N → R≥0 be functions.
Then
• f(n) = O(g(n)) if there exist c1, c2 ∈ R>0 such that f(n) ≤ c1 g(n) + c2 for all n ∈ N.
• f(n) = Ω(g(n)) if g(n) = O(f(n)).
• f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
• f(n) = o(g(n)) if f(n)/g(n) → 0 as n → ∞ (provided that g(n) ≠ 0 for all n large enough).

3 Simultaneous matching and covering algorithms

In this section we present a randomized algorithm for the (b, k)-matching problem.

3.1 Randomized algorithms for (b, k)-matching

Let H = (V, E), |V| = n, |E| = m be a hypergraph. We identify the nodes and edges of H by their indices, so V = {1, ..., n} and E = {1, ..., m}. Let b ≥ 1. An integer programming formulation of the minimum-weight (b, k)-matching is the following:

Min-(b, k)-ILP

min Σ_{i=1}^{m} wi Xi
s.t. Σ_{i=1}^{m} aij Xi ≤ b    ∀ j ∈ {1, ..., n}   (5)
     Σ_{i=1}^{m} aij Xi ≥ Yj   ∀ j ∈ {1, ..., n}   (6)
     Σ_{j=1}^{n} Yj ≥ k                            (7)
     Xi, Yj ∈ {0, 1}   ∀ i ∈ {1, ..., m}, ∀ j ∈ {1, ..., n}.   (8)

Note that Min-(b, n)-ILP is equivalent to the minimum-weight perfect b-matching problem and Min-(b, k)-ILP is a b-matching problem with a k-partial covering of the vertices. For the minimum-weight perfect b-matching problems in hypergraphs where a perfect b-matching exists, for example the 3-uniform hypergraph associated to the seed reconstruction problem, an alternative integer linear programming formulation using local covering conditions is useful. We add the condition Σ_{i=1}^{m} aij Xi ≥ θ for some θ ∈ (0, 1] for all j ∈ {1, ..., n} to Min-(b, k)-ILP. Then, by integrality, all vertices are covered and any feasible solution of such an ILP is a perfect b-matching. For the integer program the additional condition is redundant, but since the LP-relaxation of Min-(b, k)-ILP together with the inequality has a smaller feasible region than the LP-relaxation of Min-(b, k)-ILP alone, the gap between the integer optimum and the feasible LP-optimum might be smaller as well. This leads to a better "approximation" of the integer optimum by the LP-optimum. Furthermore, we will see in the theoretical analysis (Section 4) that we can cover significantly more nodes if we add this condition.

Min-(b, k, θ)-ILP

min Σ_{i=1}^{m} wi Xi
s.t. Σ_{i=1}^{m} aij Xi ≤ b    ∀ j ∈ {1, ..., n}   (9)
     Σ_{i=1}^{m} aij Xi ≥ Yj   ∀ j ∈ {1, ..., n}   (10)
     Σ_{i=1}^{m} aij Xi ≥ θ    ∀ j ∈ {1, ..., n}   (11)
     Σ_{j=1}^{n} Yj ≥ k                            (12)
     Xi, Yj ∈ {0, 1}   ∀ i ∈ {1, ..., m}, ∀ j ∈ {1, ..., n}.   (13)

We have

Proposition 2. Let H = (V, E) be a hypergraph with edge weights w : E → Q_0^+. The integer linear programs Min-(b, n)-ILP and Min-(b, k, θ)-ILP, θ > 0, are equivalent to the minimum-weight perfect b-matching problem in H.

In the following we need some notations which we fix through the next remark.

Remark 1. Let Min-(b, k, θ)-LP be the linear programming relaxation of Min-(b, k, θ)-ILP. Let (b, k, θ)-ILP be the system of inequalities built by the constraints (9)-(13) of Min-(b, k, θ)-ILP, and let (b, k, θ)-LP be the LP-relaxation of (b, k, θ)-ILP, where Xi ∈ [0, 1] ∩ Q and Yj ∈ Q_0^+ for all i, j.

3.2 The randomized algorithm

Before we state the randomized algorithm, we have to check whether or not a Min-(b, k, θ)-matching exists. For a given b, a choice of k = 0 and θ = 0 always makes the problem feasible. However, for some k and θ there might be no solution. Then we would like to find the maximum k such that a solution exists, given b and θ. Actually, for the integer programs we have to distinguish only between the cases θ = 0 and θ > 0 (which is the perfect b-matching problem).

Algorithm LP-Search(θ)
Input: θ ≥ 0.
1) Test the solvability of (b, 0, θ)-LP. If it is not solvable, return “(b, 0, θ)-LP is not feasible.” Otherwise set k := 1 and go to 2. 2) a) Test solvability of (b, k, θ)-LP and (b, k + 1, θ)-LP. b) If both are solvable, set k := k + 2 and go to 2a. If (b, k, θ)-LP is solvable, but (b, k + 1, θ)-LP is not solvable, return k. 2 If (b, 0, θ)-LP is solvable, we define k ∗ := max {k ∈ N ; k ≤ n ; (b, k, θ)-LP has a solution}. Obviously we have Proposition 3. The algorithm LP-Search (θ) either outputs “(b, 0, θ)-LP is not feasible” or solving at most n LPs, it returns k ∗ . It is clear that the number of iterations can be dropped to at most log(n) using binary search. In the following we work with a k ∈ N, returned by the algorithm LP-Search(θ), if it exists. The randomized rounding algorithm for the Min(b, k, θ)-matching problem is the Randomized algorithms for mixed matching and covering in hypergraphs Algorithm Min-(b, k, θ)-RR 1. Solve the LP-relaxation Min-(b, k, θ)-ILP optimally, x∗ = mwith solutions ∗ ∗ ∗ ∗ ∗ ∗ ∗ (x1 , . . . , xm ) and y = (y1 , . . . , yn ). Let OP T = i=1 wi xi . 2. Randomized Rounding: Choose δ ∈ (0, 1]. For i = 1, . . . , m, independently set the 0/1 random variable Xi to 1 with probability δx∗i and to 0 with probability 1 − δx∗i . So Pr[Xi = 1] = δx∗i and Pr [Xi = 0] = 1 − δx∗i , ∀i ∈ {1, . . . , m}. 3. Output X1 , . . . , Xm , the set of hyperedges M = {i ∈ E; xi = 1}, and its weight w(M ). 2 One can combine the algorithm Min-(b, k, θ)-RR with a greedy approach in order to get a feasible b-matching: Algorithm Min-(b, k, θ)-Round 1) Apply the algorithm Min-(b, k, θ)-RR and output a set of hyperedges M . 2) List the nodes in a randomized order. Passing through this list and arriving at a node for which the b-matching condition is violated, we enforce the b-matching condition at this node by removing incident edges from M with highest weights. 3) Output is the so obtained set M ⊆ M . 2 Variants of this algorithm are possible, for example, one can remove edges incident in many nodes, etc.. 4 Main results and proofs 4.1 The main results We present an analysis of the algorithm Min-(b, k, θ)-RR. Our most general result is the following theorem. C1 and C2 are positive constants depending only on l, δ, and θ. They will be specified more precisely later. Theorem 6. Let δ ∈ (0, 1) and OP T ∗ ≥ &m 2 ln(4n) we have: ln(4)(1 + 2δ)(1 − δ)−2 . For λ = (a) Let ∆ ≤ c1 · kb . For θ = 0, the algorithm Min-(b, k, θ)-RR returns a (δb + λ)-matching M in H of weight w(M ) ≤ OP T ∗ which covers at least ' 3b(∆(l − 1) + 3) k −δb (1 − e ) 1 − (14) b 2k(1 − e−δb ) nodes of H with a probability of at least 1/4. H. Fohlin et al. (b) Let ∆ ≤ c2 ·n. For θ > 0 the algorithm Min-(b, k, θ)-RR returns a (δb+λ)matching M in H of weight w(M ) ≤ OP T ∗ which covers at least ( 2.38(∆(l − 1) + 3) 0.632δθn 1 − (15) δθn nodes of H with a probability of at least 1/4. For special b, we have a stronger result. Theorem 7. Let δ ∈ (0, 1). Assume that i) b ≥ 23 ln(4n)(1 + 2δ)(1 − δ)−2 . ii) OP T ∗ ≥ 23 ln(4)(1 + 2δ)(1 − δ)−2 . (a) Let ∆ ≤ c1 · kb . For θ = 0, the algorithm Min-(b, k, θ)-RR returns a b-matching M in H of weight w(M ) ≤ OP T ∗ which covers at least ' k 3b(∆(l − 1) + 3) −δb (1 − e ) 1 − b 2k(1 − e−δb ) nodes of H with a probability of at least 1/4. (b) Let ∆ ≤ c2 · n. For θ > 0 the algorithm Min-(b, k, θ)-RR returns a b-matching M in H of weight w(M ) ≤ OP T ∗ which covers at least ( 2.38(∆(l − 1) + 3) 0.632δθn 1 − δθn nodes of H with a probability of at least 1/4. Remark 2. 
In Theorem 7 (a), for fixed δ, we have b = Ω(ln(n)). For b = Θ(ln(n)), and k = Ω(n) and ∆ ≤ c1 · kb , the number of covered nodes is at n (1 − o(1)) . (16) Ω ln(n) In this case we have an approximation of the maximum number of covered nodes k up to a factor of 1/ ln(n). From the techniques applied so far it is not clear whether the coverage can be improved towards Ω(k). 4.2 Proofs We will first prove Theorem 7 and then 6. We start with a technical lemma. Lemma 1. Let X1 , . . . , Xm be independent 0/1 random variables with E(Xi )= pi ∀i = 1, . . . , m. For wi ∈ [0, 1], i = 1, . . . , n, w(X) := m i=1 wi Xi . Let z ≥ 0 be an upper bound on E(w(X)), i.e., E(w(X)) ≤ z. Then Randomized algorithms for mixed matching and covering in hypergraphs β2 z i) P[w(X) ≥ z(1 + β)] ≤ e− 2(1+β/3) for any β > 0. ii) P[w(X) ≥ z(1 + β)] ≤ e− β2 z 3 for 0 ≤ β ≤ 1. Proof. Let z := z − E(w(X)), p = z − E(w(X)) − z , and let Y0 , Y1 , . . . , Yz be independent 0/1 random variables with E(Y0 ) = p and Yj = 1 ∀j ≥ 1. The random variable X := w(X) + Y0 + Y1 + . . . + Yz satisfies E(X ) = z and X ≥ w(X) and we may apply the Angluin-Valiant inequality (Theorem 5) to it: i) For any β > 0 we have P[w(X) ≥ z(1 + β)] ≤ P[X ≥ z(1 + β)] ≤ e β2 z ii) For 0 ≤ β ≤ 1 it is easy to see that e− 2(1+β/3) ≤ e− β2 z 3 β2 z 2(1+β/3) Let X1 , . . . , Xm and M be the output of the algorithm Min-(b, k, θ)-RR. Further let OP T and OP T ∗ be the integer respectively LP-optima for Min(b, k, θ)-ILP. Lemma 2. Suppose that δ ∈ (0, 1) and b ≥ 23 ln(4n)(1 + 2δ)(1 − δ)−2 . Then * ) m 1 aij Xi ≥ b ≤ . P ∃j ∈V : 4 i=1 Proof. First we compute the expectation m m m m E aij Xi = aij E(Xi ) = aij δx∗i = δ · aij x∗i ≤ δb. i=1 1 δ − 1. With Lemma 1 we get: * )m * m aij Xi ≥ b = P aij Xi ≥ (1 + β)δb P Set β := −β 2 δb 2(1 + β/3) 3 (1 − δ)2 ·b = exp − · 2 1 + 2δ 1 ≤ (using the assumption on b). 4n ≤ exp So ) P ∃j ∈V : m i=1 * aij Xi ≥ b ≤ n j=1 )m i=1 * aij Xi ≥ b ≤ n · 1 4n 1 . 4 H. Fohlin et al. Lemma 3. Suppose that δ ∈ (0, 1) and OP T ∗ ≥ 23 ln(4)(1 + 2δ)(1 − δ)−2 . Then )m * 1 ∗ P wi Xi ≥ OP T ≤ . 4 i=1 Proof. We have wi Xi wi E(Xi ) = wi δx∗i wi x∗i = δ · OP T ∗ . (18) Choose β = 1δ − 1. Then * )m * )m wi Xi ≥ OP T ∗ = P wi Xi ≥ δ(1 + β) OP T ∗ P i=1 wi Xi ≥ E wi Xi * (1 + β) m −β 2 E( i=1 wi Xi ) ≤ exp (Theorem 5(i)) 2(1 + β/3) −β 2 δ OP T ∗ = exp 2(1 + β/3) 1 ≤ 4 where the last inequality follows from the assumption on OP T ∗ . We now come to a key lemma, which controls the covering quality of the randomized algorithm. m n Let Yj := i=1 aij Xi for all j and Y := j=1 Yj . Lemma 4. For any δ ∈ (0, 1], m n ∗ i) E(Y ) ≥ n − j=1 e−δ i=1 aij xi , ii) If θ > 0, then E(Y ) ≥ n(1 − e−δθ ) ≥ 0.632δθn, iii) For Min-(b, k, θ)-RR with θ = 0 we have E(Y ) ≥ kb (1 − e−δb ). Proof. i) Define Ej := {E ∈ E; j ∈ E}. We have ⎛ ⎞ n n n E(Y ) = E ⎝ Yj ⎠ = E(Yj ) = P[Yj = 1] j=1 n j=1 (1 − P[Yj = 0]) = n − j=1 n j=1 P[Yj = 0]. Randomized algorithms for mixed matching and covering in hypergraphs Now P[Yj = 0] = P * aij Xi = 0 = P[(a1j X1 = 0) ∧ . . . ∧ (amj Xm = 0)] m + = P[aij Xi = 0] i=1 P[Xi = 0] (1 − δx∗i ). For u ∈ R we have the inequality 1 − u ≤ e−u . Thus + (1 − δx∗i ) ≤ e−δxi = e−δ· aij x∗ i Hence, with (19),(20) and (21) E(Y ) ≥ n − aij x∗ i m ∗ ii) Since i=1 aij xi ≥ θ for all j ∈ {1, . . . , n}, the first inequality immediately follows from (i). For the second inequality, observe that for x ∈ [0, 1], e−x ≤ 1 − x + x/e. This is true, because the linear function 1 − x + x/e is an upper bound for the convex function e−x in [0, 1]. 
So E(Y ) ≥ (1 − e−δθ )n ≥ (1 − 1/e)δθn ≥ 0.632δθn. iii) Since e−x is convex, the linear function 1 − x (δb)−1 + xe−δb (δb)−1 is an upper bound for e−x in [0, δb]. With (22) we get E(Y ) ≥ n − n + (1 − e n m aij x∗i ≥ (1 − e−δb ) · j=1 i=1 k . b An upper bound for the variance of Y can be computed directly, via covariance and dependent pairs: Lemma 5. Let ∆ be the maximum vertex degree of H and let l be the maximum cardinality of a hyperedge. Then Var(Y ) ≤ 1 · (∆(l − 1) + 3)E(Y ). 2 H. Fohlin et al. Proof. By Proposition 1 Var(Y) ≤ E(Y) + 2γ where γ is the sum of E(Yi Yj ) of all unordered dependent pairs i, j. Since the Yi s are 0/1 random variables, we have for pairs {i, j} with i ∼ j E(Yi Yj ) = P[(Yi = 1) ∧ (Yj = 1)] ≤ min(P[Yi = 1], P[Yj = 1]) 1 ≤ (P[Yi = 1] + P[Yj = 1]) 2 1 = (E(Yi ) + E(Yj )). 2 Hence 1 (E(Yi ) + E(Yj )) 2 {i,j}∈Γ {i,j}∈Γ ⎛ ⎞ n n 1 ⎝ 1 1 ≤ E(Yi ) + E(Yj )⎠ = E(Y ) + E(Yj ) 4 i=1 4 4 i=1 j∼i j∼i E(Yi Yj ) ≤ 1 1 1 1 E(Yi )∆(l − 1) = E(Y ) + ∆(l − 1)E(Y ) E(Y ) + 4 4 i=1 4 4 1 (∆(l − 1) + 1)E(Y ), 4 and (23) concludes the proof. Let c1 and c2 be positive constants depending only on l, δ, and θ such that for ∆ ≤ c1 · kb respectively ∆ ≤ c2 · n we have ' ( 3b(∆(l − 1) + 3) 2.38(∆(l − 1) + 3) < 1 respectively < 1. (24) −δb 2k(1 − e ) δθn Note that in the following we assume l, δ, and θ to be constants. Lemma 6. i) For a := ii) Let ∆ ≤ 3 2 (∆(l − 1) c1 · kb . Then + 3)E(Y ) : P[Y ≤ E(Y ) − a] ≤ 14 . ' 3b(∆(l − 1) + 3) k −δb E(Y ) − a ≥ (1 − e ) 1 − b 2k(1 − e−δb ) θ = 0. Randomized algorithms for mixed matching and covering in hypergraphs iii) Let ∆ ≤ c2 · n. Then E(Y ) − a ≥ 0.632δθn 1 − 2.38(∆(l − 1) + 3) δθn θ > 0. Proof. i) With the Chebyshev-Cantelli inequality (Theorem 3) we have P[Y ≤ E(Y ) − a] ≤ ≤ Var(Y ) 1 = a2 Var(Y ) + a2 1 + Var(Y) 1 1+ a2 0.5(∆(l−1)+3)E (Y ) 1.5(∆(l−1)+3)E (Y ) 0.5(∆(l−1)+3)E (Y ) (Lemma 5) = 1 . 4 ii) and iii): ⎛ E(Y ) − a = E(Y ) ⎝1 − 3 2 (∆(l − 1) + 3)E(Y ) E(Y ) ⎞ ⎠ & 3(∆(l − 1) + 3) & = E(Y ) 1 − 2E(Y ) ⎧ 0 3b(∆(l−1)+3) k −δb ⎪ ⎪ (1 − e ) 1 − if −δb ⎨b 2k(1−e ) 0 ≥ ⎪ 2.38(∆(l−1)+3) ⎪ if ⎩ 0.632δθn 1 − δθn (26) θ = 0, (27) θ > 0. Note that to the upper bound condition for ∆, the lower bounds in (27) are positive. 2 Proof (Theorem 7). Lemma 2, 3 and 6 imply Theorem 7. This theorem holds only for b = Ω(ln(n)). In the rest of this section we give an analysis also for the case of arbitrary b, losing a certain amount of feasibility. &m Lemma 7. Let δ > 0, µj = E( m i= 1 aij Xi ) for all j, and λ = 2 ln(4n). Then ) * m 1 P ∃j : aij Xi > δb + λ ≤ . 4 i=1 H. Fohlin et al. m Proof. As in (17), µj = E( i=1 aij Xi ) ≤ δb for all j. With the ChernoffHoeffding bound (Theorem 4) )m i=1 aij Xi > δb + λ ≤ P ≤ exp −2λ2 m * aij Xi > µj + λ = exp (− ln(4n)) = 1 . 4n So, ) P ∃j ∈ V : * aij Xi > δb + λ aij Xi > δb + λ ≤ n · 1 4n 1 . 4 Proof (Theorem 6). Lemma 3, 6 and 7 imply Theorem 6. 5 Experimental results 5.1 Implementation We run the algorithm Min-(b, k, θ)-Round for different values of b, k, and θ in a C++ -implementation. Recall that the algorithm Min-(b, k, θ)-Round has 3 steps: 1. It solves the linear program and delivers a fractional solution, 2. applies randomized rounding on the fractional solution and delivers an integer solution, which we call primary solution, 3. removes edges from the primary solution. In the primary solution the nodes might be covered by more than b edges. The superfluous edges are removed in step 3. Edges are removed in a randomized greedy approach. 
The nodes are chosen in a randomized order and if the considered node is covered by more than one edge, the ones with the greatest cost are removed. In the following tables we use 100 runs of the randomized rounding algorithm. As the final solution, we choose the one with the fewest number of uncovered nodes. (If this choice is not unique, we pick one with the smallest cost.) The LPs are solved with a simplex-method with the CLP-solver from the free COIN-OR library [21]. Randomized algorithms for mixed matching and covering in hypergraphs The columns in the tables in Sections 5.2 and 5.3 are organized as follows: 1: represents patient data (Patients 1-5 are real patient data, whereas Patient 6 is a phantom.) 2: represents number of seeds to be matched 3: represents the cost of the LP-solution 4: represents the cost of the matching returned by the algorithm 5: represents the running time in CPU seconds of the program 6: represents number of unmatched seeds 5.2 Results for the algorithm MIN-(b, k, θ)-ROUND Table 1. b = 1.20, k = 0.90 · n, θ = 0.00. Patient Seeds LP-OPT Time Unmatched 48.30 14.23 66.02 14.61 Table 2. b = 1.20, k = 1.00 · n, θ = 0.00. Patient Seeds LP-OPT Cost Time Unmatched 1 58.20 14.73 78.22 14.54 H. Fohlin et al. Table 3. b = 1.10, k = 0.90 · n, θ = 0.00. Patient Seeds LP-OPT Cost Time Unmatched 1 54.11 14.23 75.61 14.60 Table 4. b = 1.10, k = 1.00 · n, θ = 0.00. Patient Seeds LP-OPT Cost Time Unmatched 1 63.12 15.23 88.89 15.52 Table 5. b = 1.00, k = 0.90 · n, θ = 0.00. Patient Seeds LP-OPT Cost Time Unmatched 1 60.56 14.81 86.11 14.34 Randomized algorithms for mixed matching and covering in hypergraphs Table 6. b = 1.00, k = 1.00 · n, θ = 0.00. Patient Seeds LP-OPT Time Unmatched 5.3 Results for the algorithm MIN-(b, k, θ)-ROUND with θ > 0 Table 7. b = 1.20, k = 0.90 · n, θ = 0.20. Patient Seeds LP-OPT Cost Time Unmatched 1 55.10 14.26 74.11 14.34 Table 8. b = 1.20, k = 1.00 · n, θ = 0.20. Patient Seeds LP-OPT Cost Time Unmatched 1 60.76 14.31 78.11 14.73 H. Fohlin et al. Table 9. b = 1.10, k = 0.90 · n, θ = 0.20. Patient Seeds LP-OPT Cost Time Unmatched 1 57.01 14.17 75.54 14.48 Table 10. b = 1.10, k = 1.00 · n, θ = 0.20. Patient Seeds LP-OPT Cost Time Unmatched 1 63.12 14.88 93.04 15.32 Table 11. b = 1.00, k = 0.90 · n, θ = 0.20. Patient Seeds LP-OPT Cost Time Unmatched 1 63.64 14.19 87.97 14.43 Randomized algorithms for mixed matching and covering in hypergraphs Table 12. b = 1.00, k = 1.00 · n, θ = 0.20. Patient Seeds LP-OPT Time Unmatched Table 13. b = 1.20, k = 0.90 · n, θ = 1.00. Patient Seeds LP-OPT Time Unmatched Table 14. b = 1.20, k = 1.00 · n, θ = 1.00. Patient Seeds LP-OPT Time Unmatched H. Fohlin et al. Table 15. b = 1.10, k = 0.90 · n, θ = 1.00. Patient Seeds LP-OPT Time Unmatched Table 16. b = 1.10, k = 1.00 · n, θ = 1.00. Patient Seeds LP-OPT Time Unmatched Table 17. b = 1.00, k = 0.90 · n, θ = 1.00. Patient Seeds LP-OPT Time Unmatched Randomized algorithms for mixed matching and covering in hypergraphs Table 18. b = 1.00, k = 1.00 · n, θ = 1.00. Patient Seeds LP-OPT Time Unmatched 5.4 Discussion — implementation vs. theory With the algorithm Min-(b, k, θ)-Round for θ = 0, we get optimal results (except Patient 2) if the constraints are most restrictive: b = 1.00, k = 1.00 ·n, see Table 6. With the algorithm Min-(b, k, θ)-Round for θ > 0, the same observation holds: optimal results (except Patient 2) are achieved with the most restrictive constraints: b = 1.00, k = 1.00 · n, θ = 0.2 (Table 12) and b = 1.00, k = 1.00 · n, θ = 1.00 (Table 18). 
Obviously, a high θ can compensate for a low k, and vice versa (see Table 5 in comparison to 17, and 5 in comparison to 6). This clearly shows that the practical results for the instances are much better than the analysis of Section 4 indicates. However, to close the gap between theory and practice seems to be a challenging problem in the area of randomized algorithms, where the so far developed probabilistic tools seem to be insufficient. The non-optimal results for Patient 2 could be explained by the bad image quality of the X-rays and movement of the patient in the time between taking two different X-rays. Since it is important to find the correct matching of the seeds and not just any minimum-weight perfect matching, the question of whether this is the right matching is legitimate. This is difficult to prove, but results with help of a graphical 3D-program seem to be promising: we take the proposed seed positions in 3D and produce pictures, showing how the seeds would lie on the X-rays if these were the real positions. A comparison between the pictures and the real X-rays shows that the positions agree. This observation is supported by the results for the phantom (Patient 6), where we know the seed positions, and where the algorithm returns the optimal solution, see, e.g., Table 18. The running times of our algorithm are of the same order of magnitude R presently used at the Clinic as those of the commercial software VariSeed of Radiotherapy. These range between 4 and 20 seconds for instances with H. Fohlin et al. 43 to 82 seeds respectively. However — due to technical and licensing issues — we had to measure these times on a different computer (different CPU and operating system, but approximately the same frequency) than the one where the tests for our algorithm were performed, and we also had no exact method of measurement available (just a stopwatch). As we are dealing with an offline application, a few seconds in running time are unimportant. Also our implementation can likely be improved (especially the part for reading in the large instance files) to gain even a few seconds in running time and possibly outperform the commercial software with respect to running time. Our main advantage, however, lies in the quality of the solution delivered. Our algorithm also delivered the correct solution in certain cases where the R (versions commercial one failed. As shown by Siebert et al. [25], VariSeed 6.7/7.1) can compute wrong 3D seed distributions if seeds are arranged in certain ways, and these errors cannot be explained by the ambiguities inherent to the three-film technique. Our algorithm, however, performs well on the phantom instance studied in [25] (as well as on the tested patient data, except for Patient 2, which had a poor image quality). As a consequence, the immigration of our algorithm in the brachytherapy planning process at the Clinic of Radiotherapy in Kiel is 6 Open problems Most interesting are the following problems, which we leave open but would like to discuss in future work. 1. At the moment, we can analyze the randomized rounding algorithm, but we are not able to analyze the repairing step of the algorithm Min-(b, k, θ)Round. But this of course $ % is a major challenge for future work. 2. Can the coverage of Ω kb in Theorem 7 be improved towards Ω(k)? 3. Can the b-matching lower bound assumption b = Ω(ln(n)) in Theorem 7 be dropped towards b = O(1)? 4. What is the approximation complexity of the minimum-weight perfect matching problem in hypergraphs? 
Is there a complexity-theoretic threshold? References 1. N. Alon, J. Spencer, and P. Erd˝ os. The Probabilistic Method. John Wiley & Sons, Inc., 1992. 2. M. D. Altschuler, R. D. Epperson, and P. A. Findlay. Rapid, accurate, threedimensional location of multiple seeds in implant radiotherapy treatment planning. Physics in Medicine and Biology, 28:1305–1318, 1983. Randomized algorithms for mixed matching and covering in hypergraphs 3. H. I. Amols, G. N. Cohen, D. A. Todor, and M. Zaider. Operator-free, filmbased 3D seed reconstruction in brachytherapy. Physics in Medicine and Biology, 47:2031–2048, 2002. 4. H. I. Amols and I. I. Rosen. A three-film technique for reconstruction of radioactive seed implants. Medical Physics, 8:210–214, 1981. 5. D. Angluin and L. G. Valiant. Fast probabilistic algorithms for Hamiltonian circuits and matchings. Journal of Computer and System Sciences, 18:155–193, 1979. 6. D. Ash, J. Battermann, L. Blank, A. Flynn, T. de Reijke, and P. Lavagnini. ESTRO/EAU/EORTC recommendations on permanent seed implantation for localized prostate cancer. Radiotherapy and Oncology, 57:315–321, 2000. 7. E. Balas and M. J. Saltzman. An algorithm for the three-index assignment problem. Operations Research, 39:150–161, 1991. 8. L. Beaulieu, J. Pouliot, D. Tubic, and A. Zaccarin. Automated seed detection and three-dimensional reconstruction. II. Reconstruction of permanent prostate implants using simulated annealing. Medical Physics, 28:2272–2279, 2001. 9. P. J. Biggs and D. M. Kelley. Geometric reconstruction of seed implants using a three-film technique. Medical Physics, 10:701–705, 1983. 10. W. L. Brogan. Algorithm for ranked assignments with applications to multiobject tracking. IEEE Journal of Guidance, 12:357–364, 1989. 11. H. Chernoff. A measure of asymptotic efficiency for test of a hypothesis based on the sum of observation. Annals of Mathematical Statistics, 23:493–509, 1952. 12. L. M. Chin and R. L. Siddon. Two-film brachytherapy reconstruction algorithm. Medical Physics, 12:77–83, 1985. 13. P. S. Cho, S. T. Lam, R. J. Marks II, and S. Narayanan. 3D seed reconstruction for prostate brachytherapy using hough trajectories. Physics in Medicine and Biology, 49:557–569, 2004. 14. P. S. Cho, R. J. Marks, and S. Narayanan. Three-dimensional seed reconstruction from an incomplete data set for prostate brachytherapy. Physics in Medicine and Biology, 49:3483–3494, 2004. 15. H. Fohlin. Randomized hypergraph matching algorithms for seed reconstruction in prostate cancer radiation. Master’s thesis, CAU Kiel and G¨ oteborg University, 2005. 16. M. R. Garey and D. S. Johnson. Computers and Intractability. W.H. Freeman and Company, New York, 1979. 17. M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, and B. Reed. Probabilistic methods for algorithmic discrete mathematics, volume 16 of Springer Series in Algorithms and Combinatorics. Springer-Verlag, 1998. 18. W. Hoeffding. Probability inequalities for sums of bounded random variables. American Statistical Association Journal, 58:13–30, 1963. 19. S. Janson, T. L uczak, and A. Ruci´ nski. Random Graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, Inc., New York, Toronto, 2000. 20. E. K. Lee, R. J. Gallagher, D. Silvern, C. S. Wu, and M. Zaider. Treatment planning for brachytherapy: an integer programming model, two computational approaches and experiments with permanent prostate implant planning. Physics in Medicine and Biology, 44:145–165, 1999. 21. R. Lougee-Heimer. 
The common optimization interface for operations research. IBM Journal of Research and Development, 47:75–66, 2003. H. Fohlin et al. 22. R. Nath and M. S. Rosenthal. An automatic seed identification technique for interstitial implants using three isocentric radiographs. Medical Physics, 10:475–479, 1983. 23. M. Okamoto. Some inequalities relating to the partial sum of binomial probabilities. Annals of the Institute of Statistical Mathematics, 10:29–35, 1958. 24. P. Raghavan and C. D. Thompson. Randomized rounding: a technique for provably good algorithms and algorithmic proofs. Combinatorica, 7:365–374, 1987. 25. F.-A. Siebert, P. Kohr, and G. Kov´ acs. The design and testing of a solid phantom for the verification of a commercial 3D seed reconstruction algorithm. Radiotherapy and Oncology, 74:169–175, 2005. 26. F.-A. Siebert, A. Srivastav, L. Kliemann, H. Fohlin, and G. Kov´acs. Threedimensional reconstruction of seed implants by randomized rounding and visual evaluation. Medical Physics, 34:967–957, 2007. 27. A. Srivastav. Derandomization in combinatorial optimization. In S. Rajasekaran, P. M. Pardalos, J. H. Reif, and J. D. Rolim, editors, Handbook of Randomized Computing, volume II, pages 731–842. Kluwer Academic Publishers, 2001. 28. A. Srivastav and P. Stangier. Algorithmic Chernoff-Hoeffding inequalities in integer programming. Random Structures & Algorithms, 8:27–58, 1996. Global optimization and spatial synchronization changes prior to epileptic seizures Shivkumar Sabesan1 , Levi Good2 , Niranjan Chakravarthy1, Kostas Tsakalis1 , Panos M. Pardalos3, and Leon Iasemidis2 1 Department of Electrical Engineering, Fulton School of Engineering, Arizona State University, Tempe, AZ, 85281 [email protected], [email protected], [email protected] The Harrington Department of Bioengineering, Fulton School of Engineering, Arizona State University, Tempe, AZ, 85281 [email protected], [email protected]. Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL, 32611 [email protected] Summary. Epileptic seizures are manifestations of intermittent spatiotemporal transitions of the human brain from chaos to order. In this paper, a comparative study involving a measure of chaos, in particular the short-term Lyapunov exponent (ST Lmax ), a measure of phase (φmax ) and a measure of energy (E) is carried out to detect the dynamical spatial synchronization changes that precede temporal lobe epileptic seizures. The measures are estimated from intracranial electroencephalographic (EEG) recordings with sub-dural and in-depth electrodes from two patients with focal temporal lobe epilepsy and a total of 43 seizures. Techniques from optimization theory, in particular quadratic bivalent programming, are applied to optimize the performance of the three measures in detecting preictal synchronization. It is shown that spatial synchronization, as measured by the convergence of ST Lmax , φmax and E of critical sites selected by optimization versus randomly selected sites leads to long-term seizure predictability. Finally, it is shown that the seizure predictability period using ST lmax is longer than that of the phase or energy synchronization measures. This points out the advantages of using synchronization of the ST lmax measure in conjunction with optimization for long-term prediction of epileptic seizures. Keywords: Quadratic bivalent programming, dynamical entrainment, spatial synchronization, epileptic seizure predictability. S. Sabesan et al. 
1 Introduction Epilepsy is among the most common disorders of the nervous system. It occurs in all age groups, from infants to adults, and continues to be a considerable economic burden to society [6]. Temporal lobe epileptic seizures are the most common types of seizures in adults. Seizures are marked by abrupt transitions in the electroencephalographic (EEG) recordings, from irregular (chaotic) patterns before a seizure (preictal state) to more organized, rhythmic-like behavior during a seizure (ictal state), causing serious disturbances in the normal functioning of the brain [10]. The epileptiform discharges of seizures may begin locally in portions of the cerebral hemispheres (partial/focal seizures, with a single or multiple foci), or begin simultaneously in both cerebral hemispheres (primary generalized seizures). After a seizure’s onset, partial seizures may remain localized and cause relatively mild cognitive, psychic, sensory, motor or autonomic symptoms (simple partial seizures), or may spread to cause altered consciousness, complex automatic behaviors, bilateral tonic-clonic (convulsive) movements (complex partial seizures) etc.. Generalized seizures cause altered consciousness at the onset and are associated with a variety of motor symptoms, ranging from brief localized body jerks to generalized tonic-clonic activity. If seizures cannot be controlled, the patient experiences major limitations in family, social, educational, and vocational activities. These limitations have profound effects on the patient’s quality of life, as well as on his or her family [6]. In addition, frequent and long, uncontrollable seizures may produce irreversible damage to the brain. A condition called status epilepticus, where seizures occur continuously and the patient typically recovers only under external treatment, constitutes a life-threatening situation [9]. Until recently, the general belief in the medical community was that epileptic seizures could not be anticipated. Seizures were assumed to occur randomly over time. The 80s saw the emergence of new signal processing methodologies, based on the mathematical theory of nonlinear dynamics, optimal to deal with the spontaneous formation of organized spatial, temporal or spatiotemporal patterns in various physical, chemical and biological systems [3–5, 13, 40]. These techniques quantify the signal structure and stability from the perspective of dynamical invariants (e.g., dimensionality of the signal using the correlation dimension, or divergence of signal trajectories using the largest Lyapunov exponent), and were a drastic departure from the signal processing techniques based on the linear model (Fourier analysis). Applying these techniques on EEG data recorded from epileptic patients, a long-term, progressive, preictal dynamical change was observed [26, 27]. This observation triggered a special interest in the medical field towards early prediction of seizures with the expectation that it could lead to prevention of seizures from occurring, and therefore to a new mode of treatment for epilepsy. Medical device companies have already started off designing and implementing intervention devices for various neurodegenerative diseases (e.g., stimulators for Parkinsonian patients) in addition to the existing ones for cardiovascular applications (e.g., Global optimization and spatial synchronization pacemakers, defibrillators). 
Along the same line, there is currently an explosion of interest for epilepsy in academic centers and medical industry, with clinical trials underway to test potential seizure prediction and intervention methodology and devices for Food and Drug Administration (FDA) approval. In studies on seizure prediction, Iasemidis et al. [28] first reported a progressive preictal increase of spatiotemporal entrainment/synchronization among critical sites of the brain as the precursor of epileptic seizures. The algorithm used was based on the spatial convergence of short-term maximum Lyapunov exponents (ST Lmax) estimated at these critical electrode sites. Later, this observation was successfully implemented in the prospective prediction of epileptic seizures [29, 30]. The key idea in this implementation was the application of global optimization techniques for adaptive selection of groups of electrode sites that exhibit preictal (before a seizure’s onset) entrainment. Seizure anticipation times of about 71.7 minutes with a false prediction rate of 0.12 per hour were reported across patients with temporal lobe epilepsy. In the present paper, three different measures of dynamical synchronization/entrainment, namely amplitude, phase and ST Lmax are compared on the basis of their ability to detect these preictal changes. Due to the current interest in the field, and the proposed measures of energy and phase as alternatives to ST Lmax [33–36] for seizure prediction, it was deemed important to comparatively evaluate all three measures’ seizure predictability (anticipation) capabilities in a retrospective study. Quadratic integer programming techniques of global optimization were applied to select critical electrode sites per measure for every recorded seizure. Results following such an analysis with 43 seizures recorded from two patients with temporal lobe epilepsy showed that: 1) Critical electrode sites selected on the basis of their synchronization per measure before a seizure outperform randomly selected ones in the ability to detect long-term preictal entrainment, and 2) critical sites selected on the basis of ST Lmax have longer and more consistent preictal trends before a majority of seizures than the ones from the other two measures of synchronization. We describe the three measures of synchronization utilized in the analysis herein in Section 2. In Section 3 we explain the formulation of a quadratic integer programming problem to select critical electrode sites for seizure prediction by each of the three measures. Statistical yardsticks used to quantify the performance of each measure in detecting preictal dynamics are given in Section 4. Results from the application of these methods to EEG are presented in Section 5, followed by conclusions in Section 6. 2 Synchronization changes prior to epileptic seizures There has not been much of an effort to relate the measurable changes that occur before an epileptic seizure to the underlying synchronization changes that take place within areas and/or between different areas of the epileptic S. Sabesan et al. brain. Such information can be extracted by employing methods of spatial synchronization developed for coupled dynamical systems. Over the past decade, different frameworks for the mathematical description of synchronization between dynamical systems have been developed, which subsequently have led to the proposition of different concepts of synchronization [12, 14, 19, 21]. 
Apart from the case of complete synchronization, where the state variables x1 and x2 of two approximately identical, strongly coupled systems 1 and 2 attain identical values (x1 (t) = x2 (t)), the term lag synchronization has been used to describe the case where the state variables of two interacting systems 1 and 2 attain identical values with a time lag (x1 (t) = x2 (t + τ )) [42, 43]. The classical concept of phase synchronization was extended from linear to nonlinear and even chaotic systems by defining corresponding phase variables φ1 , φ2 (see Section 2.2) [43]. The concept of generalized synchronization was introduced to cope with systems that may not be in complete, lag or phase synchronization, but nevertheless depend on each other (e.g., driver-response systems) in a more complicated manner. In this case, the state variables of the systems are connected through a particular functional relationship [2, 44]. Finally, a new type of synchronization that is more in alignment with the generalized synchronization was introduced through our work on the epileptic brain [24, 39]. We called it dynamical entrainment (or dynamical synchronization). In this type of synchronization, measures of dynamics of the systems involved attain similar values. We have shown the existence of such a behavior through measures of chaos (ST Lmax) at different locations of the epileptic brain long prior to the onset of seizures. Measures for each of these types of synchronization have been tested on models and real systems. In the following subsections, we present three of the most frequently utilized dynamical measures of EEG and compare their performance in the detection of synchronization in the epileptic human brain. 2.1 Measure of energy (E) profiles A classical measure of a signal’s strength is calculated as the sum of its amplitudes squared over a time period T = N ∆t, E= x2 (i · ∆t) where ∆t is the sampling period, t = i · ∆t and xi are the amplitude values of a scalar, real valued, sampled x signal in consideration. For EEG analysis, the Energy (E) values are calculated over consecutive non-overlapping windows of data, each window of T second in duration, from different locations in the brain over an extended period of time. Examples of E profiles over time from two electrode sites that show entrainment before a seizure are given in Figures 1(a) and 2(a) (left panels) for Patient 1 and 2 respectively. The highest Global optimization and spatial synchronization Fig. 1. Long-term synchronization prior to a seizure (Patient 1; seizure 15). Left Panels: (a) E profiles over time of two electrode sites (LST1, LOF2) selected to be mostly synchronized 10 min prior to the seizure. (b) φmax profiles of two electrode sites (RST1, ROF2) selected to be mostly synchronized 10 min prior to the seizure. (c) ST Lmax profiles of two electrode sites (RTD3, LOF2) selected to be mostly synchronized 10 min prior to the seizure (seizure’s onset is depicted by a vertical line). Right Panels: Corresponding T-index curves for the sites and measures depicted in the left panels. Vertical lines illustrate the period over which the effect of the ictal period is present in the estimation of the T-index values, since 10 min windows move forward in time every 10.24 sec over the values of the measure profiles in the left panels. Seizure lasted for 2 minutes, hence the period between vertical lines is 12 minutes. E values were observed during the ictal period. 
This pattern roughly corresponds to the typical observation of higher amplitudes in the original EEG signal ictally (during a seizure). As we show below (Section 3), even though no other discernible characteristics exist in each individual E profile per electrode, synchronization trends between the E profiles across electrodes over time in the preictal period exist. 2.2 Measure of maximum phase (φmax ) profiles The notion of phase synchronization was introduced by Huygens [22] in the 17th century for two coupled frictionless harmonic oscillators oscillating at m 1 different angular frequencies of ω1 and ω2 respectively, such that ω ω2 = n . In this classical case, phase synchronization is usually defined as the locking of the phases of the two oscillators: ϕn,m = nφ1 (t) − mφ2 (t) = constant S. Sabesan et al. Fig. 2. Long-term synchronization prior to a seizure (Patient 2; seizure 5). Left Panels: (a) E profiles over time of two electrode sites (RST1, LOF2) selected to be mostly synchronized 10 min prior to the seizure. (b) φmax profiles of two electrode sites (RTD1, LOF3) selected to be mostly synchronized 10 min prior to the seizure. (c) ST Lmax profiles of two electrode sites (RTD2, ROF3) selected to be mostly synchronized 10 min prior to the seizure (seizure’s onset is depicted by a vertical line). Right Panels: Corresponding T-index curves for the sites and measures depicted in the left panels. Vertical lines illustrate the period over which the effect of the ictal period is present in the estimation of the T-index values, since 10 min windows move forward every 10.24 sec over the values of the measures in the left panels. Seizure lasted 3 minutes, hence the period between vertical lines is 13 minutes. where n and m are integers, φ1 and φ2 denote the phases of the oscillators, and ϕn,m is defined as their relative phase. In order to investigate synchronization in chaotic systems, Rosenblum et al. [42] relaxed this condition of 1 phase locking by a weaker condition of phase synchronization (since ω ω2 may be an irrational real number and each system may contain power and phases at many frequencies around one dominant frequency): |ϕn,m | = |nφ1 (t) − mφ2 (t)| < constant. The estimation of instantaneous phases φ1 (t) and φ2 (t) is nontrivial for many nonlinear model systems, and even more difficult when dealing with noisy time series of unknown characteristics. Different approaches have been proposed in the literature for the estimation of instantaneous phase of a signal. In the analysis that follows, we take the analytic signal approach for phase estimation [15, 38] that defines the instantaneous phase of an arbitrary signal s(t) as φ(t) = arctan s˜(t) s(t) Global optimization and spatial synchronization where 1 s˜(t) = P.V. π +∞ −∞ s(τ ) dτ t−τ is the Hilbert transform of the signal s(t) (P.V. denotes the Cauchy Principal Value). From Equation (5), the Hilbert transform of the signal can be interpreted as a convolution of the signal s(t) with a non-causal filter h(t) = 1/πt. The Fourier transform H(ω) of h(t) is −jsgn(ω) where sgn(ω) is often called the signum function and ⎧ ⎪ ⎪ ⎨ 1, ω > 0, sgn(ω) = 0, ω = 0, (6) ⎪ ⎪ ⎩ 1, ω < 0. Hence, Hilbert transformation is equivalent to a type of filtering of s(t) in which amplitudes of the spectral components are left unchanged, while their phases are altered by π/2, positively or negatively according to the sign of ω. Thus, s˜(t) can then be obtained by the following procedure. 
First, a onesided spectrum Z(ω) in which the negative half of the spectrum is equal to zero is created by multiplying the Fourier transform S(ω) of the signal s(t) with that of the filter H(ω) (i.e., Z(ω) = S(ω)H(ω)). Next, the inverse Fourier transform of Z(ω) is computed to obtain the complex-valued “analytic” signal z(t). Since Z(ω) only has a positive-sided spectrum, z(t) is given by: 1 +∞ 1 +∞ 1 1 Z(ω)dω = Z(ω)dω. (7) z(t) = 2π −∞ 2π 0 The imaginary part of z(t) then yields s˜ (t). Mathematically, s˜(t) can be compactly represented as 1 +∞ 1 s˜(t) = −i (S(ω)H(ω))dω. (8) 2π 0 It is important to note that the arctangent function used to estimate the instantaneous phase in Equation (4) could be either a two-quadrant inverse tangent function (ATAN function in MATLAB) or a four-quadrant inverse tangent function (ATAN2 function in MATLAB). The ATAN function gives phase values that are restricted to the interval [−π/2, +π/2] and, on exceeding the value of +π/2, fall to the value of −π/2 twice in each cycle of oscillation, while the ATAN2 function when applied to the same data gives phase values that are restricted to the interval [−π, +π] and, on exceeding the value of +π, fall to the value of −π once during every oscillation’s cycle. In order to track instantaneous phase changes over long time intervals, this generated disjoint phase sequence has to be “unwrapped” [41] by adding either π, when using the ATAN function, or 2π, when using the ATAN2 function, at each phase discontinuity. Thus a continuous phase profile φ(t) over time can be generated. S. Sabesan et al. The φ(t) from EEG data were estimated within non-overlapping moving windows of 10.24 seconds in duration per electrode site. Prior to the calculation of phase, to avoid edge effects in the estimation of the Fourier transform, each window was tapered with a Hamming window before Fourier transforming the data. Per window, a set of phase values are generated that are equal in number to the number of data points in this window. The maximum phase value (φmax ), minimum phase value (φmin ), mean phase value (φmean ) and the standard deviation of the phase values (φstd ) were estimated per window. Only the dynamics of φmax were subsequently followed over time herein, because they were found to be more sensitive than the other three phase measures to dynamical changes before seizures. Examples of synchronized φmax profiles over time around a seizure in Patients 1 and 2 are given in the left panels of Figures 1(b) and 2(b) respectively. The preictal, ictal and postictal states correspond to medium, high and low values of φmax respectively. The highest φmax values were observed during the ictal period, and higher φmax values were observed during the preictal period than during the postictal period. This pattern roughly corresponds to the typical observation of higher frequencies in the original EEG signal ictally, and lower EEG frequencies postictally. 2.3 Measure of chaos (ST Lmax ) profiles Under certain conditions, through the method of delays described by Packard et al. [37] and Takens [46], sampling of a single variable of a system over time can determine all state variables of the system that are related to the observed state variable. In the case of the EEG, this method can be used to reconstruct a multidimensional state space of the brain’s electrical activity from a single EEG channel at the corresponding brain site. 
2.3 Measure of chaos (STLmax) profiles

Under certain conditions, through the method of delays described by Packard et al. [37] and Takens [46], sampling of a single variable of a system over time can determine all state variables of the system that are related to the observed state variable. In the case of the EEG, this method can be used to reconstruct a multidimensional state space of the brain's electrical activity from a single EEG channel at the corresponding brain site. Thus, in such an embedding, each state in the state space is represented by a vector X(t) whose components are delayed versions of the original single-channel EEG time series x(t), that is,

    X(t) = (x(t), x(t + τ), ..., x(t + (d − 1)τ)),

where τ is the time delay between successive components of X(t) and d is a positive integer denoting the embedding dimension of the reconstructed state space. Plotting X(t) in the state space thus created produces the state portrait of a spatially distributed system at the subsystem (brain location) where x(t) is recorded. The most complicated steady state a nonlinear deterministic system can have is a strange and chaotic attractor, whose complexity is measured by its dimension D, and its chaoticity by its Kolmogorov entropy (K) and Lyapunov exponents (Ls) [16, 17]. A steady state is chaotic if at least the maximum of all Lyapunov exponents is positive. According to Takens, in order to properly embed a signal in the state space, the embedding dimension d should be at least equal to (2D + 1). Of the many different methods used to estimate D of an object in the state space, each has its own practical problems [32]. The measure most often used to estimate D is the state space correlation dimension ν. Methods for calculating ν from experimental data have been described in [1] and were employed in our work to approximate D in the ictal state.

The brain, being nonstationary, is never in a steady state at any location in the strict dynamical sense. Arguably, activity at brain sites is constantly moving through "steady states," which are functions of certain parameter values at a given time. According to bifurcation theory [18], when these parameters change slowly over time, or the system is close to a bifurcation, dynamics slow down and conditions of stationarity are better satisfied. In the ictal state, temporally ordered and spatially synchronized oscillations in the EEG usually persist for a relatively long period of time (in the range of minutes). Dividing the ictal EEG into short segments ranging from 10.24 sec to 50 sec in duration, the estimation of ν from ictal EEG has produced values between 2 and 3 [25, 45], implying the existence of a low-dimensional manifold in the ictal state, which we have called the "epileptic attractor." Therefore, an embedding dimension d of at least 7 has been used to properly reconstruct this epileptic attractor. Although d of interictal (between seizures) "steady state" EEG data is expected to be higher than that of the ictal state, a constant embedding dimension d = 7 has been used to reconstruct all relevant state spaces over the ictal and interictal periods at different brain locations. The advantages of this approach are that a) irrelevant information present in dimensions higher than 7 might not influence much the estimated dynamical measures, and b) reconstruction of the state space with a low d suffers less from the short length of the moving windows used to handle stationarity of the data. The disadvantage is that information relevant to the transition to seizures that resides in higher dimensions may not be captured. The Lyapunov exponents measure the information flow (bits/sec) along local eigenvectors of the motion of the system within such attractors. Theoretically, if the state space is of d dimensions, we can estimate up to d Lyapunov exponents. However, as expected, only D + 1 of these will be real; the others are spurious [38].
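The following minimal sketch illustrates the delay embedding X(t) and a crude average log-divergence of nearest neighbors in the reconstructed state space. It is only meant to convey the idea behind an STLmax-type estimate; the actual STLmax algorithm imposes additional temporal and spatial constraints on the neighbor selection and on the parameters [25, 26]. The embedding dimension d = 7 is taken from the text, while τ, the evolution time and the exclusion window for temporally close neighbors are illustrative values of ours.

```python
import numpy as np

def delay_embed(x, d=7, tau=4):
    """Delay-coordinate embedding X(t) = (x(t), x(t+tau), ..., x(t+(d-1)*tau))."""
    n = len(x) - (d - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(d)])

def avg_log_divergence(x, d=7, tau=4, dt=12, exclude=50, fs=200.0):
    """Crude largest-Lyapunov-style estimate (bits/sec), for illustration only.

    For each state X(t_i), find its nearest neighbor X(t_j) outside a
    temporal exclusion window, evolve both for dt samples, and average
    log2 of the ratio of final to initial distances over the evolution time.
    """
    X = delay_embed(x, d, tau)
    n = len(X) - dt
    exps = []
    for i in range(n):
        dists = np.linalg.norm(X[:n] - X[i], axis=1)
        lo, hi = max(0, i - exclude), min(n, i + exclude + 1)
        dists[lo:hi] = np.inf            # ignore temporally close states
        j = int(np.argmin(dists))
        if not np.isfinite(dists[j]):
            continue
        d0 = np.linalg.norm(X[i] - X[j])
        d1 = np.linalg.norm(X[i + dt] - X[j + dt])
        if d0 > 0 and d1 > 0:
            exps.append(np.log2(d1 / d0))
    return np.mean(exps) / (dt / fs)     # bits per second
```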
Methods for calculating these dynamical measures from experimental data have been published in [26, 45]. The estimation of the largest Lyapunov exponent (Lmax) in a chaotic system has been shown to be more reliable and reproducible than the estimation of the remaining exponents [47], especially when D is unknown and changes over time, as in the case of high-dimensional and nonstationary EEG data. A method developed to estimate an approximation of Lmax from nonstationary data is called STL (Short-Term Lyapunov) [25, 26]. STLmax, defined as the average of the maximum local Lyapunov exponents in the state space, is calculated as

    STLmax = (1 / (Na Δt)) Σ_{i=1}^{Na} log2( |δX_{i,j}(Δt)| / |δX_{i,j}(0)| ),

where δX_{i,j}(0) = X(ti) − X(tj) is the displacement vector at time ti, that is, a perturbation of the vector X(ti) in the fiducial orbit at ti, and δX_{i,j}(Δt) = X(ti + Δt) − X(tj + Δt) is the evolution of this perturbation after time Δt. Δt is the evolution time for δX_{i,j}, that is, the time one allows δX_{i,j} to evolve in the state space. Temporal and spatial constraints for the selection of the neighbor X(tj) of X(ti) are applied in the state space. These constraints were necessary for the algorithm to work in the presence of transients in the EEG (e.g., epileptic spikes); for details see [25]. If the evolution time Δt is given in seconds, STLmax has units of bits per second. Na is the number of local Lyapunov exponents that are estimated within a duration T of the data segment. Therefore, if Δt is the sampling period of the time-domain data, T = (N − 1)Δt ≈ Na Δt − (d − 1)τ.

The STLmax algorithm is applied to sequential EEG epochs of 10.24 seconds recorded from electrodes at multiple brain sites to create a set of STLmax profiles over time (one STLmax profile per recording site) that characterize the spatio-temporal chaotic signature of the epileptic brain. Long-term profiles of STLmax, obtained by analysis of continuous EEG at two electrode sites in Patients 1 and 2, are shown in the left panels of Figures 1(c) and 2(c) respectively. These figures show the evolution of STLmax as the brain progresses from interictal to ictal to postictal states. There is a gradual drop in STLmax values over tens of minutes preceding a seizure at some sites, with no observable gradual drop at other sites. The seizure itself is characterized by a sudden drop in STLmax values, followed by a steep rise. This behavior of STLmax indicates a gradual preictal reduction in chaoticity at some sites, reaching a minimum within the seizure state, and a postictal rise in chaoticity that corresponds to the reversal of the preictal behavior. What is most interesting and consistent across seizures and patients is an observed synchronization of STLmax values between electrode sites prior to a seizure. We have called this phenomenon preictal dynamical entrainment, and it has constituted the basis for the development of epileptic seizure prediction algorithms [7, 23, 29–31].

2.4 Quantification of synchronization

A statistical distance between the values of dynamical measures at two channels i and j, estimated per EEG data segment, is used to quantify the synchronization between these channels.
Specifically, the T-index Tij between electrode sites i and j for each measure (STLmax, E and φmax) at time t is defined as

    T_ij^t = |D̄_ij^t| / ( σ̂_ij^t / √m ),     (10)

where D̄_ij^t and σ̂_ij^t denote the sample mean and standard deviation, respectively, of all m differences between a measure's values at electrodes i and j within a moving window wt = [t − m·10.24 sec, t] over the measure profiles. If the true mean μ_ij^t of the differences is equal to zero, and the differences are independent and normally distributed, T_ij^t asymptotically follows the t-distribution with (m − 1) degrees of freedom. We have shown that these independence and normality conditions are satisfied [30]. We define desynchronization between electrode sites i and j when μ_ij^t is significantly different from zero at a significance level α. The desynchronization condition between electrode sites i and j, as detected by the paired t-test, is

    T_ij^t > t_{α/2, m−1} = Tth,

where t_{α/2, m−1} is the 100(1 − α/2)% critical value of the t-distribution with m − 1 degrees of freedom. If T_ij^t ≤ t_{α/2, m−1} (which means that we do not have satisfactory statistical evidence at the α level that the differences between the values of a measure at electrode sites i and j within the time window wt are nonzero), we consider sites i and j to be synchronized with each other at time t. Using α = 0.01 and m = 60, the threshold is Tth = 2.662.

It is noteworthy that similar STLmax, E or φmax values at two electrode sites do not necessarily mean that these sites also interact. However, when there is a progressive convergence over time of the measures at these sites, the probability that they are unrelated diminishes. This is exactly what occurs before seizures, and it is illustrated in the right panels of Figures 1 and 2 for all three measures considered herein. A progressive synchronization in all measures, as quantified by Tij, is observed preictally. Note that synchronization occurs at different sites per measure. The sites per measure are selected according to the procedure described below in Section 3.
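A minimal sketch of the T-index of Equation (10), computed between two per-window measure profiles with a moving window of m = 60 values (about 10 minutes of 10.24-s estimates), is given below; the function name and array handling are our own.

```python
import numpy as np

M = 60  # number of 10.24 s estimates per moving window (about 10 minutes)

def t_index(profile_i, profile_j, m=M):
    """T-index of Equation (10) between two measure profiles.

    profile_i, profile_j: 1-D arrays of per-window measure values
    (STLmax, E or phi_max) at electrode sites i and j.  Returns one
    T-index value per position of the m-point moving window.
    """
    diff = np.asarray(profile_i, float) - np.asarray(profile_j, float)
    out = np.empty(len(diff) - m + 1)
    for k in range(len(out)):
        w = diff[k:k + m]
        out[k] = abs(w.mean()) / (w.std(ddof=1) / np.sqrt(m))
    return out

# Sites i and j are considered synchronized at window k when
# t_index(...)[k] <= 2.662 (the alpha = 0.01, m = 60 threshold of the text).
```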
3 Optimization of spatial synchronization

Not all brain sites are progressively synchronized prior to a seizure. The selection of the ones that are (the critical sites) is a global optimization problem that minimizes the distance between the dynamical measures at these sites. For many years, the Ising model [8] has been a powerful tool for studying phase transitions in statistical physics. The model is described by a graph G(V, E) having n vertices {v1, ..., vn}, with each edge e(i, j) ∈ E having a weight Jij (interaction energy). Each vertex vi has a magnetic spin variable σi ∈ {−1, +1} associated with it. A spin configuration σ of minimum energy is obtained by minimizing the Hamiltonian

    H(σ) = Σ_{1 ≤ i ≤ j ≤ n} Jij σi σj   over all σ ∈ {−1, +1}^n.     (13)

This optimization problem is equivalent to the combinatorial problem of quadratic bivalent programming. Its solution gives vertices with the proper spin at the global minimum energy. Motivated by the application of the Ising model to phase transitions, we have adapted quadratic bivalent (zero-one) programming techniques to optimally select the critical electrode sites during the preictal transition [23, 30], that is, the sites that minimize the objective function of the distance of STLmax, E or φmax between pairs of brain sites. More specifically, we considered the integer bivalent 0-1 problem

    min x^T T x   subject to   Σ_{i=1}^{n} xi = k,   x ∈ {0, 1}^n,     (14)

where n is the total number of available electrode sites, k is the number of sites to be selected, and the xi are the (zero/one) elements of the n-dimensional vector x. The elements Tij, i = 1, ..., n, j = 1, ..., n, of the T matrix were previously defined in Equation (10). If the constraint Σ_{i=1}^{n} xi = k in Equation (14) is included in the objective function x^T T x by introducing the penalty

    μ = Σ_{j=1}^{n} Σ_{i=1}^{n} |Tij| + 1,

the optimization problem in Equation (14) becomes equivalent to the unconstrained global optimization problem

    min [ x^T T x + μ ( Σ_{i=1}^{n} xi − k )^2 ],   x ∈ {0, 1}^n.     (16)

Electrode site i is selected if the corresponding element x*_i of the n-dimensional solution x* of Equation (14) is equal to 1. The optimization for the selection of critical sites was performed in the preictal window w1(t*) = [t* − 10 min, t*] over a measure's profiles, where t* is the time of a seizure's onset, separately for each of the three considered measures. For k = 5, the corresponding T-index is depicted in Figures 3 and 4. After the optimal site selection, the average T-index across all possible pairs of the selected sites is generated and followed backward in time from each seizure's onset t*. In the following sections, for simplicity, we denote these spatially averaged T-index values by "T-index."

In the estimation of the average T-index curves depicted in Figures 3 and 4 for a seizure recorded from Patients 1 and 2, the 5 critical sites selected from the E profiles were [LST1, LOF2, ROF1, RST1, ROF2] and [LST1, LOF2, LST3, RST1, RTD1] respectively; from the STLmax profiles, [RST1, ROF2, RTD2, RTD3, LOF2] and [RST3, LOF3, RTD3, RTD4, ROF2]; and from the φmax profiles, [LST2, LOF2, ROF2, RTD1, RTD2] and [LOF1, LOF2, LTD1, RST2, RTD3] (see Figure 6 for the electrode montage). These T-index trends are then compared with the average T-index of 100 non-optimal tuples of five sites, selected randomly from the space of all (n choose 5) tuples of five sites, where n is the maximum number of available recording sites. The algorithm for the random selection of one tuple involves the generation of (n choose 5) Gaussian random numbers between 0 and 1, the reordering of the T-indices of the tuples of five sites according to the order indicated by the generated random numbers, and, finally, the selection of the top tuple from the sorted list. Repetition of the algorithm with 100 different seeds gives 100 different randomly selected tuples of 5 sites per seizure. For comparison purposes, the T-index profile of these non-optimal tuples of sites, averaged across all 100 randomly selected tuples, is also shown in Figures 3 and 4.

Fig. 3. Dynamical synchronization of optimal vs. non-optimal sites prior to a seizure (Patient 1; seizure 15). (a) The T-index profile generated by the E profiles of five optimal (critical) electrode sites selected by the global optimization technique 10 minutes before the seizure (solid line), and the average of the T-index profiles of 100 tuples of five randomly selected (non-optimal) sites (dotted line). (b) Same as (a), but for the φmax profiles. (c) Same as (a), but for the STLmax profiles (the average over the 100 random tuples is shown for illustration purposes only). Vertical lines represent the ictal state of the seizure, which lasted 2 minutes.

Fig. 4. Dynamical synchronization of optimal vs. non-optimal sites prior to a seizure (Patient 2; seizure 5). (a) The T-index profile generated by the E profiles of five optimal (critical) electrode sites selected by the global optimization technique 10 minutes before the seizure (solid line), and the average of the T-index profiles of 100 tuples of five randomly selected (non-optimal) sites (dotted line). (b) Same as (a), but for the φmax profiles. (c) Same as (a), but for the STLmax profiles (the average over the 100 random tuples is shown for illustration purposes only). Vertical lines represent the ictal state of the seizure, which lasted 3 minutes.
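The sketch below selects k = 5 critical sites by exhaustively minimizing x^T T x over all 0/1 vectors with Σ xi = k, i.e., by scanning every 5-tuple of sites and keeping the one with the smallest sum of pairwise T-indices. For the electrode counts used here (n = 28, k = 5, roughly 98,000 tuples) this brute-force search is practical; the chapter instead solves the equivalent quadratic 0-1 program (16), so this is an illustration of the objective rather than of the authors' solver.

```python
import numpy as np
from itertools import combinations

def select_critical_sites(T, k=5):
    """Exhaustively minimize x^T T x over all 0/1 vectors x with sum(x) = k.

    T: symmetric (n x n) matrix of pairwise T-index values averaged over the
    10-minute preictal window w1(t*).  Returns the k site indices whose sum of
    pairwise T-indices is smallest (i.e., the most synchronized tuple).
    """
    n = T.shape[0]
    best_tuple, best_cost = None, np.inf
    for tup in combinations(range(n), k):
        idx = np.array(tup)
        cost = T[np.ix_(idx, idx)].sum()   # equals x^T T x for this 0/1 vector x
        if cost < best_cost:
            best_cost, best_tuple = cost, tup
    return best_tuple, best_cost
```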
4 Estimation of seizure predictability time

The predictability time Tp for a given seizure is defined as the period before the seizure's onset during which the synchronization between the critical sites is highly statistically significant (i.e., T-index < 2.662 = Tth). Each measure of synchronization gives a different Tp for a seizure. To compensate for possible oscillations of the T-index profiles, we smooth them with a window w2(t) moving backward in time from the seizure's onset. The length of this window is the same as that of w1(t), so that the threshold Tth is the same.

Fig. 5. Estimation of the seizure predictability time Tp. The time average of the T-index within the moving window w2(t), computed on the T-index profiles of the critical sites selected as being mostly entrained in the 10-min preictal window w1(t), is continuously estimated moving backwards from the seizure onset. When the time-averaged T-index becomes > Tth = 2.662, Tp is set equal to the right endpoint of w2.

Tp is then estimated by the following procedure: the time average of the T-index within a 10-minute moving window w2(t) = [t − 10 min, t], with t decreasing from the time t* of the seizure's onset up to (t* − t) = 3 hours, is continuously estimated for as long as it remains less than or equal to Tth. When, at t = t0, the average T-index first exceeds Tth, we set Tp = t* − t0. This predictability-time estimation is portrayed in Figure 5. The longer the Tp, the longer the observed synchronization prior to a seizure. A comparison of the Tp estimated by the three measures STLmax, E and φmax is given in the next section.
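A minimal sketch of the backward-moving-window estimation of Tp is given below. The threshold Tth = 2.662, the 10.24-s step and the 3-hour cap follow the text; the exact window bookkeeping (59 T-index values per 10-minute window) and the function interface are our own assumptions.

```python
import numpy as np

T_TH = 2.662      # alpha = 0.01, m = 60 threshold
STEP = 10.24      # seconds between successive T-index values
WIN_PTS = 59      # ~10 minutes of T-index values per smoothing window w2

def predictability_time(t_index, onset_idx, max_lookback_s=3 * 3600,
                        win=WIN_PTS, t_th=T_TH):
    """Seizure predictability time Tp (in seconds), per Section 4.

    t_index: spatially averaged T-index profile of the critical sites.
    onset_idx: index of the seizure onset t* within that profile.
    Moving backward from onset, average the T-index over a 10-minute
    window; Tp is the lag at which this average first exceeds Tth,
    capped at 3 hours.
    """
    max_steps = int(max_lookback_s / STEP)
    for back in range(max_steps):
        right = onset_idx - back            # right endpoint of w2(t)
        left = max(0, right - win)
        if right <= 0:
            break
        if np.mean(t_index[left:right]) > t_th:
            return back * STEP              # Tp = t* - t0
    return max_steps * STEP                 # synchronized over the whole 3 h
```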
5 Results

5.1 EEG data

A total of 43 seizures (see Table 1) from two epileptic patients with temporal lobe epilepsy were analyzed by the methodology described above. The EEG signals were recorded from six different areas of the brain by 28 electrodes (see Figure 6 for the electrode montage). Typically, 3 hours before (preictal period) and 1 hour after (postictal period) each seizure were analyzed with the methods described in Sections 2, 3 and 4, in search of dynamical synchronization and for the estimation of seizure predictability periods.

Table 1. Patients and EEG data characteristics. Columns: patient ID; number of electrode sites; location of epileptogenic focus; seizure types (C & SC); duration of EEG recordings (days); number of seizures recorded.

The patients in the study underwent stereotactic placement of bilateral depth electrodes (RTD1 to RTD6 in the right hippocampus, with RTD1 adjacent to the right amygdala; LTD1 to LTD6 in the left hippocampus, with LTD1 adjacent to the left amygdala; the remaining LTD and RTD electrodes extend posteriorly through the hippocampi). Two subdural strip electrodes were placed bilaterally over the orbitofrontal lobes (LOF1 to LOF4 in the left and ROF1 to ROF4 in the right lobe, with LOF1 and ROF1 being most mesial and LOF4 and ROF4 most lateral). Two subdural strip electrodes were placed bilaterally over the temporal lobes (LST1 to LST4 in the left and RST1 to RST4 in the right, with LST1 and RST1 being more mesial and LST4 and RST4 more lateral).

Fig. 6. Schematic diagram of the depth and subdural electrode placement. This view from the inferior aspect of the brain shows the approximate location of the depth electrodes, oriented along the anterior-posterior plane in the hippocampi (RTD - right temporal depth, LTD - left temporal depth), and of the subdural electrodes located beneath the orbitofrontal and subtemporal cortical surfaces (ROF - right orbitofrontal, LOF - left orbitofrontal, RST - right subtemporal, LST - left subtemporal).

Video/EEG monitoring was performed using the Nicolet BMSI 4000 EEG machine. EEG signals were recorded using an average common reference with band-pass filter settings of 0.1 Hz - 70 Hz. The data were sampled at 200 Hz with 10-bit quantization and recorded continuously over days on VHS tapes via three time-interleaved VCRs. Decoding of the data from the tapes and transfer to computer media (hard disks, DVDs, CD-ROMs) was subsequently performed off-line. The seizure predictability analysis was also performed retrospectively (off-line).

5.2 Predictability of epileptic seizures

For each of the 43 recorded seizures, the five most synchronized sites were selected within 10 minutes (window w1(t)) prior to each seizure onset by the optimization procedure described in Section 3 (the critical sites). The spatially averaged T-index profiles over these critical sites were estimated per seizure. Then the predictability time Tp for each seizure and dynamical measure was estimated according to the procedure described in Section 4. Using Equation (15), predictability times were obtained for all 43 recorded seizures from the two patients for each of the three dynamical measures. The algorithm for the estimation of Tp delivered visually agreeable predictability times for all profiles that decrease in a near-monotonic fashion. The average predictability times obtained across seizures in our analysis for Patients 1 and 2 were 61.6 and 71.69 minutes respectively (see Table 3). The measure of classical energy, applied to single EEG channels, was previously shown to lack consistent predictive ability for a seizure [11, 20].
Furthermore, its predictive performance was shown to deteriorate due to postictal changes and changes during sleep-wake cycles [34]. By studying the spatiotemporal synchronization of the energy profiles between multiple EEG signals, we found average predictability times of 13.72 and 27.88 minutes for Patients 1 and 2 respectively, a significant improvement in performance over what has been reported in the literature. For the measure of phase synchronization, the average predictability times were 39.09 and 47.33 minutes for Patients 1 and 2 respectively. The study of the performance of all three measures in a prospective fashion (prediction) is currently underway.

Improved predictability via global optimization

Figures 3 and 4 show the T-index profiles generated by the STLmax, E and φmax profiles of five optimal (critical) electrode sites selected by the global optimization technique (solid lines) and of five randomly selected (non-optimal) sites (dotted lines) before a seizure. In these figures, a trend of the T-index profiles toward low values (synchronization) can be observed preictally only when the optimal sites were selected for a synchronization measure. The null hypothesis that the average value of Tp obtained from the optimal sites across all seizures is statistically smaller than or equal to the average Tp from the randomly selected ones was then tested. Tp values were obtained for a total of 100 randomly selected tuples of five sites per seizure per measure. Using a two-sample t-test for every measure, the null hypothesis that the Tpopt values (average Tp values obtained from the optimal electrode sites) were smaller than or equal to the mean of the Tprandom values (average Tp values obtained from the randomly selected electrode sites) was tested at α = 0.01 (2422 degrees of freedom for the t-test in Patient 1, that is, 100 random tuples of sites per seizure for all 24 seizures, minus 1 (2399 degrees of freedom), plus one optimal tuple of sites per seizure for all 24 seizures, minus 1 (23 degrees of freedom); similarly, 1917 degrees of freedom for Patient 2). The Tpopt values were significantly larger than the Tprandom values for all three measures (see Tables 2 and 3). This result was consistent across both patients and further supports the hypothesis that the spatiotemporal dynamics of synchronization of the critical (optimal) brain sites per synchronization measure should be followed in time to observe significant preictal changes predictive of an upcoming seizure.

Table 2. Mean and standard deviation of the seizure predictability time Tp of 100 groups of five randomly selected sites per seizure and measure in Patients 1 and 2 (Tprandom, in minutes; rows: STLmax, E, φmax; columns: mean and standard deviation for Patient 1 (24 seizures) and Patient 2 (19 seizures)).

Table 3. Mean and standard deviation of the seizure predictability time Tp of the optimal sites per measure across all seizures in Patients 1 and 2, and statistical comparison with the Tp from the 100 groups of non-optimal sites (Tpopt, in minutes).

  Measure | Patient 1 (24 seizures): Mean, std., P(Tpopt ≤ Tprandom) | Patient 2 (19 seizures): Mean, std., P(Tpopt ≤ Tprandom)
  STLmax  | 61.60, 45.50, P < 0.0005                                 | 71.69, 33.62, P < 0.0005
  E       | 13.72, 11.50, P < 0.002                                  | 27.88, 26.97, P < 0.004
  φmax    | 39.09, 20.88, P < 0.0005                                 | 47.33, 33.34, P < 0.0005

Comparative performance of the energy, phase and STLmax measures in the detection of preictal synchronization

Dynamical synchronization using STLmax consistently resulted in longer predictability times Tp than those given by the other two measures (see Table 3).
Among the other two measures, the phase synchronization measure outperformed the linear, energy-based measure and, for some seizures, it even had comparable performance to that of ST Lmax-based synchronization. These results are consistent with the synchronization observed in coupled non-identical chaotic oscillator models: an increase in coupling between two oscillators initiates generalized synchronization (best detected by ST Lmax), followed by phase synchronization (detected by phase measures), and upon further increase in coupling, amplitude synchronization (detected by energy measures) [2, 14, 42, 43]. 6 Conclusion The results of this study show that the analyzed epileptic seizures could be predicted only if optimization and synchronization were combined. The key underlying principle for such a methodology is the existence of dynamical entrainment among critical sites of the epileptic brain prior to seizures. Synchronization of non-critical sites does not show any statistical significance for seizure prediction and inclusion of these sites may mask the phenomenon. This study suggests that it may be possible to predict focal-onset epileptic seizures by analysis of linear, as well as nonlinear, measures of dynamics of multichannel EEG signals (namely the energy, phase and Lyapunov exponents), but at different time scales. Previous studies by our group have shown that a preictal transition exists, in which the values of the maximum Lyapunov exponents (ST Lmax) of EEG recorded from critical electrode sites converge long before a seizure’s onset [26]. The electrode sites involved in such a dynamical spatiotemporal interaction vary from seizure to seizure even in the same patient. Thus, the ability to predict a given seizure depends upon the ability to identify the critical electrode sites that participate in the preictal period of that seizure. Similar conclusions can be derived from the spatiotemporal analysis of the EEG with the measures of energy and phase employed herein. By applying a quadratic zero-one optimization technique for the selection of critical brain sites from the estimated energy and the maximum phase profiles, we demonstrated that mean predictability times of 13 to 20 minutes for the energy and 36 to 43 minutes for the phase are attained, which are smaller than the ones obtained from the employment of the ST Lmax measure. For example, the mean predictability time across the two patients for the measure of phase (43.21 minutes) and energy (20.88 minutes) was worse than that of the STLmax (66.64 minutes). In the future, we plan to further study the S. Sabesan et al. observed spatiotemporal synchronization and the long-term predictability periods before seizures. For example, it would be worthy to investigate if similar synchronization exists at time points of the EEG recordings unrelated to the progression to seizures. Such a study will address how specific our present findings are to epileptic seizures. The proposed measures may also become valuable for on-line, real-time seizure prediction. Such techniques could be incorporated into diagnostic and therapeutic devices for long-term monitoring and treatment of epilepsy. Potential diagnostic applications include a seizure warning system from long-term EEG recordings in a hospital setting (e.g., in a diagnostic epilepsy monitoring unit). This type of system could be used to timely warn the patient or professional staff of an impending seizure in order to take precaution measures or to trigger certain preventive action. 
Also, such a seizure warning algorithm, being implemented in digital signal processing chips, could be incorporated into implantable therapeutic devices to timely activate deep brain stimulators (DBS) or implanted drug-release reservoirs to interrupt the route of the epileptic brain towards seizures. These types of devices, if they are adequately sensitive and specific to impending seizures, could revolutionize the treatment of epilepsy. Acknowledgement This project was supported by the Epilepsy Research Foundation and the Ali Paris Fund for LKS Research and Education, and National Institutes of Health (R01EB002089). References 1. H. D. I. Abarbanel. Analysis of Observed Chaotic Data. Springer Verlag, 1996. 2. V. S. Afraimovich, N. N. Verichev, and M. I. Rabinovich. General synchronization. Radiophysics and Quantum Electronics, 29:747, 1986. 3. A. M. Albano, A. I. Mees, G. C. de Guzman, P. E. Rapp, H. Degn, A. Holden, and L. F. Isen. Chaos in biological systems, 1987. 4. A. Babloyantz and A. Destexhe. Low-Dimensional Chaos in an Instance of Epilepsy. Proceedings of the National Academy of Sciences, 83:3513–3517, 1986. 5. H. Bai-Lin. Directions in Chaos Vol. 1. World Scientific Press, 1987. 6. C. E. Begley and E. Beghi. Laboratory Research The Economic Cost of Epilepsy: A Review of the Literature. Epilepsia, 43:3–10, 2002. 7. W. Chaovalitwongse, L. D. Iasemidis, P. M. Pardalos, P. R. Carney, D. S. Shiau, and J. C. Sackellares. Performance of a seizure warning algorithm based on the dynamics of intracranial EEG. Epilepsy Research, 64:93–113, 2005. 8. C. Domb and M. S. Green. Phase Transitions and Critical Phenomena. Academic Press, New York, 1974. 9. J. Engel. Seizures and Epilepsy. FA Davis, 1989. Global optimization and spatial synchronization 10. J. Engel Jr, P. D. Williamson, and H. G. Wieser. Mesial temporal lobe epilepsy. Epilepsy: a comprehensive textbook. Philadelphia: Lippincott-Raven, pages 2417–2426, 1997. 11. R. Esteller, J. Echauz, M. D’Alessandro, G. Worrell, S. Cranstoun, G. Vachtsevanos, and B. Litt. Continuous energy variation during the seizure cycle: towards an on-line accumulated energy. Clin Neurophysiol, 116:517–26, 2005. 12. L. Fabiny, P. Colet, R. Roy, and D. Lenstra. Coherence and phase dynamics of spatially coupled solid-state lasers. Physical Review A, 47:4287–4296, 1993. 13. W. J. Freeman. Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biological Cybernetics, 56:139–150, 1987. 14. H. Fujisaka and T. Yamada. Stability theory of synchronized motion in coupledoscillator systems. Prog. Theor. Phys, 69:32–47, 1983. 15. D. Gabor. Theory of communication. Proc. IEE London, 93:429–457, 1946. 16. P. Grassberger and I. Procaccia. Characterization of Strange Attractors. Physical Review Letters, 50:346–349, 1983. 17. P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D: Nonlinear Phenomena, 9:189–208, 1983. 18. H. Haken. Principles of Brain Functioning: A Synergetic Approach to Brain Activity, Behavior and Cognition. Springer–Verlag, Berlin, 1996. 19. S. K. Han, C. Kurrer, and Y. Kuramoto. Dephasing and Bursting in Coupled Neural Oscillators. Physical Review Letters, 75:3190–3193, 1995. 20. M. A. Harrison, M. G. Frei, and I. Osorio. Accumulated energy revisited. Clin Neurophysiol, 116(3):527–31, 2005. 21. J. F. Heagy, T. L. Carroll, and L. M. Pecora. Synchronous chaos in coupled oscillator systems. Physical Review E, 50:1874–1885, 1994. 22. C. Hugenii. Horoloquim Oscilatorium. Paris: Muguet. 
Reprinted in English as: The pendulum clock. Ames, IA: Iowa State UP, 1986. 23. L. D. Iasemidis, P. Pardalos, J. C. Sackellares, and D. S. Shiau. Quadratic Binary Programming and Dynamical System Approach to Determine the Predictability of Epileptic Seizures. Journal of Combinatorial Optimization, 5:9–26, 2001. 24. L. D. Iasemidis, A. Prasad, J. C. Sackellares, P. M. Pardalos, and D. S. Shiau. On the prediction of seizures, hysteresis and resetting of the epileptic brain: insights from models of coupled chaotic oscillators. Order and Chaos, T. Bountis and S. Pneumatikos, Eds. Thessaloniki, Greece: Publishing House K. Sfakianakis, 8:283–305, 2003. 25. L. D. Iasemidis, J. C. Principe, and J. C. Sackellares. Measurement and quantification of spatio-temporal dynamics of human epileptic seizures. In M. Akay, editor, Nonlinear Biomedical Signal Processing, volume II, pages 294–318. IEEE Press, 2000. 26. L. D. Iasemidis and J. C. Sackellares. The temporal evolution of the largest Lyapunov exponent on the human epileptic cortex. Measuring Chaos in the Human Brain. Singapore: World Scientific, pages 49–82, 1991. 27. L. D. Iasemidis and J. C. Sackellares. Chaos theory and epilepsy. The Neuroscientist, 2:118–125, 1996. 28. L. D. Iasemidis, J. C. Sackellares, H. P. Zaveri, and W. J. Williams. Phase space topography of the electrocorticogram and the Lyapunov exponent in partial seizures. Brain Topography, 2:187–201, 1990. S. Sabesan et al. 29. L. D. Iasemidis, D. S. Shiau, W. Chaovalitwongse, P. M. Pardalos, P. R. Carney, and J. C. Sackellares. Adaptive seizure prediction system. Epilepsia, 43:264–265, 2002. 30. L. D. Iasemidis, D. S. Shiau, W. Chaovalitwongse, J. C. Sackellares, P. M. Pardalos, J. C. Principe, P. R. Carney, A. Prasad, B. Veeramani, and K. Tsakalis. Adaptive epileptic seizure prediction system. IEEE Transactions on Biomedical Engineering, 50:616–627, 2003. 31. L. D. Iasemidis, D. S. Shiau, P. M. Pardalos, W. Chaovalitwongse, K. Narayanan, A. Prasad, K. Tsakalis, P. R. Carney, and J. C. Sackellares. Long-term prospective on-line real-time seizure prediction. Clin Neurophysiol, 116:532–44, 2005. 32. Eric J. Kostelich. Problems in estimating dynamics from data. Physica D: Nonlinear Phenomena, 58:138–152, 1992. 33. M. Le Van Quyen, J. Martinerie, V. Navarro, P. Boon, M. D’Hav´e, C. Adam, B. Renault, F. Varela, and M. Baulac. Anticipation of epileptic seizures from standard EEG recordings. The Lancet, 357:183–188, 2001. 34. B. Litt, R. Esteller, J. Echauz, M. D’Alessandro, R. Shor, T. Henry, P. Pennell, C. Epstein, R. Bakay, M. Dichter, et al. Epileptic Seizures May Begin Hours in Advance of Clinical Onset A Report of Five Patients. Neuron, 30:51–64, 2001. 35. F. Mormann, T. Kreuz, R. G. Andrzejak, P. David, K. Lehnertz, and C. E. Elger. Epileptic seizures are preceded by a decrease in synchronization. Epilepsy Research, 53:173–185, 2003. 36. I. Osorio, M. G. Frei, and S. B. Wilkinson. Real-time automated detection and quantitative analysis of seizures and short-term prediction of clinical onset. Epilepsia, 39:615–627, 1998. 37. N. H. Packard, J. P. Crutchfield, J. D. Farmer, and R. S. Shaw. Geometry from a Time Series. Physical Review Letters, 45:712–716, 1980. 38. P.F. Panter. Modulation, noise, and spectral analysis: applied to information transmission. McGraw-Hill, 1965. 39. A. Prasad, L. D. Iasemidis, S. Sabesan, and K. Tsakalis. Dynamical hysteresis and spatial synchronization in coupled non-identical chaotic oscillators. Pramana–Journal of Physics, 64:513–523, 2005. 40. L. 
Rensing, U. an der Heiden, and M. C. Mackey. Temporal Disorder in Human Oscillatory Systems: Proceedings of an International Symposium, University of Bremen, 8-13 September 1986. Springer-Verlag, 1987. 41. M. G. Rosenblum and J. Kurths. Analysing synchronization phenomena from bivariate data by means of the Hilbert transform. In H. Kantz, J. Kurths, and G. Mayer-Kress, editors, Nonlinear Analysis of Physiological Data, pages 91–99. Springer, Berlin, 1998. 42. M. G. Rosenblum, A. S. Pikovsky, and J. Kurths. Phase Synchronization of Chaotic Oscillators. Physical Review Letters, 76:1804–1807, 1996. 43. M. G. Rosenblum, A. S. Pikovsky, and J. Kurths. From Phase to Lag Synchronization in Coupled Chaotic Oscillators. Physical Review Letters, 78:4193–4196, 1997. 44. N. F. Rulkov, M. M. Sushchik, L. S. Tsimring, and H. D. I. Abarbanel. Generalized synchronization of chaos in directionally coupled chaotic systems. Physical Review E, 51:980–994, 1995. 45. J. C. Sackellares, L. D. Iasemidis, D. S. Shiau, R. L. Gilmore, and S. N. Roper. Epilepsy when chaos fails. Chaos in the brain? K. Lehnertz, J. Arnhold, Global optimization and spatial synchronization P. Grassberger and C. E. Elger, Eds. Singapore: World Scientific, pages 112–133, 2000. 46. F. Takens. Detecting strange attractors in turbulence. In D. A. Rand and L. S. Young, editors, Dynamical Systems and Turbulence, Lecture Notes in Mathematics. Springer–Verlag, Heidelburg, 1991. 47. J. A. Vastano and E. J. Kostelich. Comparison of algorithms for determining lyapunov exponents from experimental data. In G. Mayer-Press, editor, Dimensions and entropies in chaotic systems: quantification of complex behavior. Springer–Verlag, 1986. Optimization-based predictive models in medicine and biology Eva K. Lee1,2,3 1 Center for Operations Research in Medicine and HealthCare, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205 [email protected] Center for Bioinformatics and Computational Genomics, Georgia Institute of Technology, Atlanta, Georgia 30332 Winship Cancer Institute, Emory University School of Medicine, Atlanta, GA 30322 Summary. We present novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and (5) successive multi-stage classification capability to handle data points placed in the reserved judgment region. Application of the predictive model to a broad class of biological and medical problems is described. 
Applications include: the differential diagnosis of the type of erythemato-squamous diseases; genomic analysis and prediction of aberrant CpG island meythlation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneracy and tumor metastasis, and prediction of protein localization sites. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool. Keywords: Classification, prediction, predictive health, discriminant analysis, machine learning, discrete support vector machine, multi-category classification models, optimization, integer programming, medical diagnosis. E.K. Lee 1 Introduction A fundamental problem in discriminant analysis, or supervised learning, concerns the classification of an entity into one of G(G ≥ 2) a priori, mutually exclusive groups based upon k specific measurable features of the entity. Typically, a discriminant rule is formed from data collected on a sample of entities for which the group classifications are known. Then new entities, whose classifications are unknown, can be classified based on this rule. Such an approach has been applied in a variety of domains, and a large body of literature on both the theory and applications of discriminant analysis exists (e.g., see the bibliography in [60]). In experimental biological and medical research, very often, experiments are performed and measurements are recorded under different conditions and/or on different cells/molecules. A critical analysis involves the discrimination of different features under different conditions that will reveal potential predictors for biological and medical phenomena. Hence, classification techniques play an extremely important role in biological analysis, as they facilitate systematic correlation and classification of different biological and medical phenomena. A resulting predictive rule can assist, for example, in early disease prediction and diagnosis, identification of new target sites (genomic, cellular, molecular) for treatment and drug delivery, disease prevention and early intervention, and optimal treatment design. There are five fundamental steps in discriminant analysis: a) Determine the data for input and the predictive output classes. b) Gather a training set of data (including output class) from human experts or from laboratory experiments. Each element in the training set is an entity with a corresponding known output class. c) Determine the input attributes to represent each entity. d) Identify discriminatory attributes and develop the predictive rule(s); e) Validate the performance of the predictive rule (s). In our Center for Operations Research in Medicine, we have developed a general-purpose discriminant analysis modeling framework and computational engine for various biological and biomedical informatics analyses. 
Our model, the first discrete support vector machine, offers distinct features (e.g., the ability to classify any number of groups, management of the curse of dimensionality in data attributes, and a reserved judgment region to facilitate multi-stage classification analysis) that are not simultaneously available in existing classification software [27, 28, 49, 42, 43]. Studies involving tumor volume identification, ultrasonic cell disruption in drug delivery, lung tumor cell motility analysis, CpG island aberrant methylation in human cancer, predicting early atherosclerosis using biomarkers, and fingerprinting native and angiogenic microvascular networks using functional perfusion data indicate that our approach is adaptable and can produce effective and reliable predictive rules for various biomedical and bio-behavior phenomena [14, 22, 23, 44, 46, 48, 50]. Optimization-based predictive models in medicine and biology Section 2 briefly describes the background of discriminant analysis. Section 3 describes the optimization-based multi-stage discriminant analysis predictive models for classification. The use of the predictive models on various biological and medical problems are presented in Section 4. This is followed by a brief summary in Section 5. 2 Background The main objective in discriminant analysis is to derive rules that can be used to classify entities into groups. Discriminant rules are typically expressed in terms of variables representing a set of measurable attributes of the entities in question. Data on a sample of entities for which the group classifications are known (perhaps determined by extraordinary means) are collected and used to derive rules that can be used to classify new yet-to-be-classified entities. Often there is a trade-off between the discriminating ability of the selected attributes and the expense of obtaining measurements on these attributes. Indeed, the measurement of a relatively definitive discriminating feature may be prohibitively expensive to obtain on a routine basis, or perhaps impossible to obtain at the time that classification is needed. Thus, a discriminant rule based on a selected set of feature attributes will typically be an imperfect discriminator, sometimes misclassifying entities. Depending on the application, the consequences of misclassifying an entity may be substantial. In such a case, it may be desirable to form a discrimination rule that allows less specific classification decisions, or even non-classification of some entities, to reduce the probability of misclassification. To address this concern, a number of researchers have suggested methods for deriving partial discrimination rules [10, 31, 35, 63, 65]. A partial discrimination rule allows an entity to be classified into some subset of the groups (i.e., rule out membership in the remaining groups), or be placed in a “reservedjudgement” category. An entity is considered misclassified only when it is assigned to a nonempty subset of groups not containing the true group of the entity. Typically, methods for deriving partial discrimination rules attempt to constrain the misclassification probabilities (e.g., by enforcing an upper bound on the proportion of misclassified training sample entities). For this reason, the resulting rules are also sometimes called constrained discrimination rules. Partial (or constrained) discrimination rules are intuitively appealing. A partial discrimination rule based on relatively inexpensive measurements can be tried first. 
If the rule classifies the entity satisfactorily according to the needs of the application, then nothing further needs to be done. Otherwise, additional measurements — albeit more expensive — can be taken on other, more definitive, discriminating attributes of the entity. One disadvantage of partial discrimination methods is that there is no obvious definition of optimality among any set of rules satisfying the constraints on the misclassification probabilities. For example, since some correct E.K. Lee classifications are certainly more valuable than others (e.g., classification into a small subset containing the true group versus a large subset), it does not make sense to simply maximize the probability of correct classification. In fact, to maximize the probability of correct classification, one would merely classify every entity into the subset consisting of all the groups — clearly, not an acceptable rule. A simplified model, whereby one incorporates only the reserved-judgment region (i.e., an entity is either classified as belonging to exactly one of the given a priori groups, or it is placed in the reserved-judgment category), is amenable to reasonable notions of optimality. For example, in this case, maximizing the probability of correct classification is meaningful. For the two-group case, the simplified model and the more general model are equivalent. Research on the two-group case is summarized in [60]. For three or more groups, the two models are not equivalent, and most work has been directed towards the development of heuristic methods for the more general model (e.g., see [10, 31, 63, 65]). Assuming that the group density functions and prior probabilities are known, the author in [1] showed that an optimal rule for the problem of maximizing the probability of correct classification subject to constraints on the misclassification probabilities must be of a specific form when discriminating among multiple groups with a simplified model. The formulae in Anderson’s result depend on a set of parameters satisfying a complex relationship between the density functions, the prior probabilities, and the bounds on the misclassification probabilities. Establishing a viable mathematical model to describe Anderson’s result, and finding values for these parameters that yield an optimal rule are challenging tasks. The authors in [27, 28] presented the first computational model for Anderson’s results. A variety of mathematical-programming models have been proposed for the discriminant-analysis problem [2–4, 15, 24, 25, 30, 32–34, 37, 54, 56, 58, 64, 70, 71]. None of these studies deal formally with measuring the performance of discriminant rules specifically designed to allow allocation to a reservedjudgment region. There is also no mechanism employed to constrain the level of misclassifications for each group. Many different techniques and methodologies have contributed to advances in classification, including artificial neural networks, decision trees, kernel-based learning, machine learning, mathematical programming, statistical analysis, and support vector machines [5, 8, 19, 20, 55, 61, 73]. There are some review papers for classification problems with mathematical programming techniques. The author in [69] summarizes basic concepts and ideas and discusses potential research directions on classification methods that optimize a function of the Lp -norm distances. 
The paper focuses on continuous models and includes normalization schemes, computational aspects, weighted formulations, secondary criteria, and extensions from two-group to multigroup classifications. The authors in [77] review the research conducted on the framework of the multicriteria decision aiding, covering different classification Optimization-based predictive models in medicine and biology models. The author in [57] and the authors in [7] give an overview of using mathematical programming approaches to solve data mining problems. Most recently, the authors in [53] provide a comprehensive overview of continuous and discrete mathematical programming models for classification problems. 3 Discrete support vector machine predictive models Since 1997, we have been developing in our computational center a generalpurpose discriminant analysis modeling framework and a computational engine that is applicable to a wide variety of applications, including biological, biomedical and logistics problems. Utilizing the technology of large-scale discrete optimization and support-vector machines, we have developed novel predictive models that simultaneously include the following features: 1) the ability to classify any number of distinct groups; 2) the ability to incorporate heterogeneous types of attributes as input; 3) a high-dimensional data transformation that eliminates noise and errors in biological data; 4) constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and 5) successive multi-stage classification capability to handle data points placed in the reserved judgment region. Based on the descriptions in [27, 28, 42, 43, 49], we summarize below some of the classification models we have developed. 3.1 Modeling of reserved-judgment region for general groups When the population densities and prior probabilities are known, the constrained rules with a reject option (reserved-judgment), based on Anderson’s results, calls for finding a partition {R0 , ..., RG } of Rk that maximizes the probability of correct allocation subject to constraints on the misclassification probabilities; i.e., 1 G max πg fg (w) dw (1) g=1 1 fh (w)dw ≤ αhg , h, g = 1, ..., G, h = g, where fh , h = 1, ..., G, are the group conditional density functions, πg denotes the prior probability that a randomly selected entity is from group g, g = 1, ..., G, and αhg , h = g are constants between zero and one. Under quite general assumptions, it was shown that there exist unique (up to a set of measure zero) nonnegative constants λih , i, h ∈ {1, ..., G}, i = h, such that the optimal rule is given by Rg = {x ∈ Rk : Lg (x) = maxh∈{0,1,...,G}Lh (x)}, g = 0, ..., G E.K. Lee where L0 (x) = 0 Lh (x) = πh fh (x) − λih fi (x), h = 1, ..., G. For G = 2 the optimal solution can be modeled in a rather straightforward manner. However, finding optimal λih ’s for the general case G ≥ 3 is a difficult problem, with the difficulty increasing as G increases. Our model offers an avenue for modeling and finding the optimal solution in the general case. It is the first such model to be computationally viable [27, 28]. Before proceeding, we note that Rg can be written as Rg = {x ∈ Rk : Lg (x) ≥ Lh (x) for all h = 0, ..., G}. 
3.2 Mixed integer programming formulations

Assume that we are given a training sample of N entities whose group classifications are known; say ng entities are in group g, where Σ_{g=1}^{G} ng = N. Let the k-dimensional vectors x^{gj}, g = 1, ..., G, j = 1, ..., ng, contain the measurements on the k available characteristics of the entities. Our procedure for deriving a discriminant rule proceeds in two stages. The first stage is to use the training sample to compute estimates, f̂h, either parametrically or nonparametrically, of the density functions fh (e.g., see [60]), and estimates, π̂h, of the prior probabilities πh, h = 1, ..., G. The second stage is to determine the optimal λih's given these estimates. This stage requires being able to estimate the probabilities of correct classification and misclassification for any candidate set of λih's. One could, in theory, substitute the estimated densities and prior probabilities into equations (5), and directly use the resulting regions Rg in the integral expressions given in (1) and (2). This would involve, even in simple cases such as normally distributed groups, the numerical evaluation of k-dimensional integrals at each step of a search for the optimal λih's. Therefore, we have designed an alternative approach. After substituting the f̂h's and π̂h's into equation (5), we simply calculate the proportion of training sample points that fall in each of the regions R1, ..., RG. The mixed integer programming (MIP) models discussed below attempt to maximize the proportion of training sample points correctly classified while satisfying constraints on the proportions of training sample points misclassified. This approach has two advantages. First, it avoids having to evaluate the potentially difficult integrals in Equations (1) and (2). Second, it is nonparametric in controlling the training sample misclassification probabilities. That is, even if the densities are poorly estimated (by assuming, for example, normal densities for non-normal data), the constraints are still satisfied for the training sample. Better estimates of the densities may allow a higher correct classification rate to be achieved, but the constraints will be satisfied even if poor estimates are used. Unlike most support vector machine models, which minimize the sum of errors, our objective is driven by the number of correct classifications and will not be biased by the distance of the entities from the supporting hyperplane.

A word of caution is in order. In traditional unconstrained discriminant analysis, the true probability of correct classification of a given discriminant rule tends to be smaller than the rate of correct classification for the training sample from which it was derived. One would expect to observe such an effect for the method described herein, as well as an analogous effect with regard to the constraints on misclassification probabilities: the true probabilities are likely to be greater than any limits imposed on the proportions of training sample misclassifications. Hence, the αhg parameters should be carefully chosen for the application at hand.

Our first model is a nonlinear 0/1 MIP model with the nonlinearity appearing in the constraints. Model 1 maximizes the number of correct classifications of the given N training entities. Similarly, the constraints on the misclassification probabilities are modeled by ensuring that the number of group g training entities in region Rh is less than or equal to a pre-specified percentage, αhg (0 < αhg < 1), of the total number, ng, of group g entities, for h, g ∈ {1, ..., G}, h ≠ g. For notational convenience, let G = {1, ..., G} and Ng = {1, ..., ng}, for g ∈ G. Also, analogous to the definition of pi, define p̂i by p̂i = f̂i(x) / Σ_{t=1}^{G} f̂t(x). In our model, we use binary indicator variables to denote the group classification of entities. Mathematically, let uhgj be a binary variable indicating whether or not x^{gj} lies in region Rh, i.e., whether or not the j-th entity from group g is allocated to group h. Then Model 1 can be written as follows:

    max Σ_{g∈G} Σ_{j∈Ng} uggj

subject to

    Lhgj = π̂h p̂h(x^{gj}) − Σ_{i∈G, i≠h} λih p̂i(x^{gj}),   h, g ∈ G, j ∈ Ng,     (7)
    ygj = max{0, Lhgj : h = 1, ..., G},   g ∈ G, j ∈ Ng,     (8)
    ygj − Lggj ≤ M(1 − uggj),   g ∈ G, j ∈ Ng,     (9)
    ygj − Lhgj ≥ ε(1 − uhgj),   h, g ∈ G, j ∈ Ng, h ≠ g,     (10)
    Σ_{j∈Ng} uhgj ≤ ⌊αhg ng⌋,   h, g ∈ G, h ≠ g,     (11)
    −∞ < Lhgj < ∞,  ygj ≥ 0,  λih ≥ 0,  uhgj ∈ {0, 1}.
Our first model is a nonlinear 0/1 MIP model with the nonlinearity appearing in the constraints. Model 1 maximizes the number of correct classifications of the given N training entities. Similarly, the constraints on the misclassification probabilities are modeled by ensuring that the number of group g training entities in region Rh is less than or equal to a pre-specified percentage, αhg (0 < αhg < 1), of the total number, ng , of group g entities, h, g ∈ {1, ..., G}, h = g. For notational convenience, let G = {1, ..., G} and Ng = {1, ..., ng }, for g ∈ G. Also, analogous to the definition of pi , define pˆi by pˆi = G fˆi (x) t=1 fˆt (x). In our model, we use binary indicator variables to denote the group classification of entities. Mathematically, let uhgj be a binary variable indicating whether or not xgj lies in region Rh ; i.e., whether or not the j th entity from group g is allocated to group h. Then Model 1 can be written as follows: max uggj g∈G j∈Ng Lhgj = π ˆh pˆh (xgj ) − λih pˆi (xgj ), h, g ∈ G, j ∈ Ng g ∈ G, j ∈ Ng ygj = max{0, Lhgj : h = 1, ..., G}, ygj − Lggj ≤ M (1 − uggj ), g ∈ G, j ∈ Ng ygj − Lhgj ≥ ε(1 − uhgj ), h, g ∈ G, j ∈ Ng , h = g (9) (10) E.K. Lee uhgj ≤ αhg ng , h, g ∈ G, h = g −∞ < Lhgj < ∞, ygj ≥ 0, λih ≥ 0, uhgj ∈ {0, 1}. Constraint (7) defines the variable Lhgj as the value of the function Lh evaluated at xgj . Therefore, the continuous variable ygj , defined in constraint (8), represents max{Lh (xgj ) : h = 0, ..., G}; and consequently, xgj lies in region Rh if, and only if, ygj = Lhgj . The binary variable uhgj is used to indicate whether or not xgj lies in region Rh ; i.e., whether or not the j th entity from group g is allocated to group h. In particular, constraint (9), together with the objective, force uggj to be 1 if, and only if, the j th entity from group g is correctly allocated to group g; and constraints (10) and (11) ensure that at most αhg ng (i.e., the greatest integer less than or equal to αhg ng ) group g entities are allocated to group h, h = g. One caveat regarding the indicator variables uhgj is that although the condition uhgj = 0, h = g, implies (by constraint (10)) that xgj ∈ / Rh , the converse need not hold. As a consequence, the number of misclassifications may be overcounted. However, in our preliminary numerical study we found that the actual amount of overcounting is minimal. For example, one could force the converse (thus, uhgj = 1 if and only if xgj ∈ Rh ) by adding constraints ygj − Lhgj ≤ M (1 − uhgj ). Finally, we note that the parameters M and ε are extraneous to the discriminant analysis problem itself, but are needed in the model to control the indicator variables uhgj . The intention is for M and ε to be, respectively, large and small positive constants. 3.3 Model variations We explore different variations in the model to grasp the quality of the solution and the associated computational effort. A first variation involves transforming Model 1 to an equivalent linear mixed integer model. In particular, Model 2 replaces the N constraints defined in (8) with the following system of 3GN + 2N constraints: ygj ≥ Lhgj , h, g ∈ G, j ∈ Ng h, g ∈ G, j ∈ Ng y˜hgj ≤ π ˆh pˆh (x )vhgj , h, g ∈ G, j ∈ Ng vhgj ≤ 1, g ∈ G, j ∈ Ng y˜hgj = ygj , g ∈ G, j ∈ Ng y˜hgj − Lhgj ≤ M (1 − vhgj ), gj where y˜hgj ≥ 0 and vhgj ∈ {0, 1}, h, g ∈ G, j ∈ Ng . These constraints, together with the non-negativity of ygj force ygj = max{0, Lhgj : h = 1, ..., G}. 
The second variation involves transforming Model 1 into a heuristic linear MIP model. This is done by replacing the nonlinear constraint (8) with

  ygj ≥ Lhgj,   h, g ∈ G, j ∈ Ng,

and including penalty terms in the objective function. In particular, Model 3 has the objective

  max  β Σ_{g∈G} Σ_{j∈Ng} uggj − γ Σ_{g∈G} Σ_{j∈Ng} ygj,

where β and γ are positive constants. This model is heuristic in that there is nothing to force ygj = max{0, Lhgj : h = 1, ..., G}. However, since in addition to trying to force as many uggj's to one as possible the objective in Model 3 also tries to make the ygj's as small as possible, the optimizer tends to drive ygj towards max{0, Lhgj : h = 1, ..., G}. We remark that β and γ could be stratified by group (i.e., one could introduce possibly distinct βg, γg, g ∈ G) to model the relative importance of correctly classifying certain groups.

A reasonable modification to Models 1, 2 and 3 involves relaxing the constraints specified by (11). Rather than placing restrictions on the number of type g training entities classified into group h, for all h, g ∈ G, h ≠ g, one could simply place an upper bound on the total number of misclassified training entities. In this case, the G(G − 1) constraints specified by (11) would be replaced by the single constraint

  Σ_{g∈G} Σ_{h∈G\{g}} Σ_{j∈Ng} uhgj ≤ αN,   (17)

where α is a constant between 0 and 1. We will refer to Models 1, 2 and 3, modified in this way, as Models 1T, 2T and 3T, respectively. Of course, other modifications are also possible. For instance, one could place restrictions on the total number of type g points misclassified for each g ∈ G. Thus, in place of the constraint specified in (17), one would include the constraints

  Σ_{h∈G\{g}} Σ_{j∈Ng} uhgj ≤ αg N,   g ∈ G,

where 0 < αg < 1.

We also explore a heuristic linear model of Model 1. In particular, consider the linear program (DALP):

  min Σ_{g∈G} Σ_{j∈Ng} (c1 wgj + c2 ygj)

subject to

  Lhgj = π̂h p̂h(x^{gj}) − Σ_{i=1, i≠h}^G λih p̂i(x^{gj}),   h, g ∈ G, j ∈ Ng   (19)
  Lggj − Lhgj + wgj ≥ 0,   h, g ∈ G, h ≠ g, j ∈ Ng   (20)
  Lggj + wgj ≥ 0,   g ∈ G, j ∈ Ng   (21)
  −Lhgj + ygj ≥ 0,   h, g ∈ G, j ∈ Ng   (22)
  −∞ < Lhgj < ∞,  wgj, ygj, λih ≥ 0.

Constraint (19) defines the variable Lhgj as the value of the function Lh evaluated at x^{gj}. As the optimization solver searches through the set of feasible solutions, the λih variables will vary, causing the Lhgj variables to assume different values. Constraints (20), (21) and (22) link the objective-function variables with the Lhgj variables in such a way that correct classification of training entities, and allocation of training entities into the reserved-judgment region, are captured by the objective-function variables. In particular, if the optimization solver drives wgj to zero for some g, j pair, then constraints (20) and (21) imply that Lggj = max{0, Lhgj : h ∈ G}. Hence, the jth entity from group g is correctly classified. If, on the other hand, the optimal solution yields ygj = 0 for some g, j pair, then constraint (22) implies that max{0, Lhgj : h ∈ G} = 0. Thus, the jth entity from group g is placed in the reserved-judgment region. (Of course, it is possible for both wgj and ygj to be zero. One should decide prior to solving the linear program how to interpret the classification in such cases.) If both wgj and ygj are positive, the jth entity from group g is misclassified. The optimal solution yields a set of λih's that best allocates the training entities (i.e., "best" in terms of minimizing the penalty objective function).
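A minimal sketch of DALP in the same modeling layer is given below (illustrative only; the data structures mirror those of the previous sketch, and the weights c1, c2 are placeholders):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def build_dalp(p_hat, priors, c1=1.0, c2=1.0):
    """Sketch of the DALP linear program, constraints (19)-(22).
    w_gj = 0 marks a correct classification; y_gj = 0 marks allocation
    to the reserved-judgment region."""
    G = len(p_hat)
    prob = LpProblem("DALP", LpMinimize)
    lam = {(i, h): LpVariable(f"lam_{i}_{h}", lowBound=0)
           for i in range(G) for h in range(G) if i != h}
    w = {(g, j): LpVariable(f"w_{g}_{j}", lowBound=0)
         for g in range(G) for j in range(len(p_hat[g]))}
    y = {(g, j): LpVariable(f"y_{g}_{j}", lowBound=0)
         for g in range(G) for j in range(len(p_hat[g]))}

    def L(h, g, j):                                   # (19)
        p = p_hat[g][j]
        return priors[h] * p[h] - lpSum(lam[(i, h)] * p[i] for i in range(G) if i != h)

    prob += lpSum(c1 * w[k] + c2 * y[k] for k in w)   # penalty objective

    for g in range(G):
        for j in range(len(p_hat[g])):
            prob += L(g, g, j) + w[(g, j)] >= 0                        # (21)
            for h in range(G):
                prob += y[(g, j)] - L(h, g, j) >= 0                    # (22)
                if h != g:
                    prob += L(g, g, j) - L(h, g, j) + w[(g, j)] >= 0   # (20)
    return prob
```

Because this is a pure LP, it solves in a fraction of the time the MIP variants require, at the cost of the guarantees discussed above.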
The optimal λih's can then be used to define the functions Lh, h ∈ G, which in turn can be used to classify a new entity with feature vector x ∈ R^k by simply computing the index at which max{Lh(x) : h ∈ {0, 1, ..., G}} is achieved. Note that Model DALP places no a priori bound on the number of misclassified training entities. However, since the objective is to minimize a weighted combination of the variables wgj and ygj, the optimizer will attempt to drive these variables to zero. Thus, the optimizer is, in essence, attempting either to correctly classify training entities (wgj = 0), or to place them in the reserved-judgment region (ygj = 0). By varying the weights c1 and c2, one has a means of controlling the optimizer's emphasis for correctly classifying training entities versus placing them in the reserved-judgment region. If c2/c1 < 1, the optimizer will tend to place a greater emphasis on driving the wgj variables to zero than driving the ygj variables to zero (conversely, if c2/c1 > 1). Hence, when c2/c1 < 1, one should expect to get relatively more entities correctly classified, fewer placed in the reserved-judgment region, and more misclassified, than when c2/c1 > 1. An extreme case is when c2 = 0. In this case, there is no emphasis on driving ygj to zero (the reserved-judgment region is thus ignored), and the full emphasis of the optimizer is on driving wgj to zero.

Table 1 summarizes the number of constraints, the total number of variables, and the number of 0/1 variables in each of the discrete support vector machine models and in the heuristic LP model (DALP).

Table 1. Model size.

  Model   Type             Constraints           Total variables       0/1 variables
  1       nonlinear MIP    2GN + N + G(G−1)      2GN + N + G(G−1)      GN
  2       linear MIP       5GN + 2N + G(G−1)     4GN + N + G(G−1)      2GN
  3       linear MIP       3GN + G(G−1)          2GN + N + G(G−1)      GN
  1T      nonlinear MIP    2GN + N + 1           2GN + N + G(G−1)      GN
  2T      linear MIP       5GN + 2N + 1          4GN + N + G(G−1)      2GN
  3T      linear MIP       3GN + 1               2GN + N + G(G−1)      GN
  DALP    linear program   3GN                   GN + N + G(G−1)       0

Clearly, even for moderately-sized discriminant analysis problems, the MIP instances are relatively large. Also, note that Model 2 is larger than Model 3, both in terms of the number of constraints and the number of variables. However, it is important to keep in mind that the difficulty of solving an MIP problem cannot, in general, be predicted solely by its size; problem structure has a direct and substantial bearing on the effort required to find optimal solutions. The LP relaxations of these MIP models pose computational challenges, as commercial LP solvers return (optimal) LP solutions that are infeasible due to the equality constraints and the use of big M and small ε in the formulation.

It is interesting to note that the set of feasible solutions for Model 2 is "tighter" than that for Model 3. In particular, if Fi denotes the set of feasible solutions of Model i, then

  F1 = {(L, λ, u, y) : there exist ỹ, v such that (L, λ, u, y, ỹ, v) ∈ F2} ⊆ F3.   (23)

The novelties of the classification models developed herein are: 1) they are suitable for discriminant analysis given any number of groups; 2) they accept heterogeneous types of attributes as input; 3) they use a parametric approach to reduce high-dimensional attribute spaces; and 4) they allow constraints on the number of misclassifications and utilize a reserved judgment to facilitate the reduction of misclassifications. The latter point opens the possibility of performing multistage analyses.
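Once optimal λih's are available, applying the rule to a new entity reduces to evaluating the Lh's, for example as in the following sketch (reusing the density estimates from the earlier fragment; the convention of reporting the reserved-judgment region as label 0 is our own illustrative choice):

```python
import numpy as np

def classify(x, densities, priors, lam):
    """Allocate x to argmax_h L_h(x), or to the reserved-judgment region
    (returned as 0) when every L_h(x) <= 0.

    lam is a (G, G) array with lam[i, h] the trained multipliers (diagonal unused).
    """
    f = np.array([d.pdf(x) for d in densities])
    p = f / f.sum()                                   # normalized densities p_i(x)
    G = len(priors)
    L = np.array([priors[h] * p[h]
                  - sum(lam[i, h] * p[i] for i in range(G) if i != h)
                  for h in range(G)])
    return 0 if L.max() <= 0 else int(L.argmax()) + 1   # groups reported as 1..G
```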
Clearly, the advantage of an LP model over an MIP model is that the associated problem instances are computationally much easier to solve. However, the most important criterion in judging a method for obtaining discriminant rules is how the rules perform in correctly classifying new unseen entities. Once the rule is developed, applying it to a new entity to determine its group is trivial. Extensive computational experiments have been performed to gauge the qualities of solutions of different models [28, 49, 42, 43, 12, 13]. 3.4 Computational strategies The mixed integer programming models described herein offer a computational avenue for numerically estimating optimal values for the λih parameters in Anderson’s formulae. However, it should be emphasized that mixed integer programming problems are themselves difficult to solve. Anderson [1] himself noted the extreme difficulty of finding an optimal set of λih s. Indeed, E.K. Lee MIP is an NP-hard problem (e.g., see [29]). Nevertheless, due to the fact that integer variables — and in particular, 0/1 variables — are a powerful modeling tool, a wide variety of real-world problems have been modeled as mixed integer programs. Consequently, much effort has been invested in developing computational strategies for solving MIP problem instances. The numerical work reported in Section 4 is based on an MIP solver which is built on top of a general-purpose mixed integer research code, MIPSOL [38]. (A competitive commercial solver (CPLEX) was not effective in solving the problem instances considered.) The general-purpose code integrates state-ofthe-art MIP computational devices such as problem preprocessing, primal heuristics, global and local reduced-cost fixing, and cutting planes into a branch-and-bound framework. The code has been shown to be effective in solving a wide variety of large-scale real-world instances [6]. For our MIP instances, special techniques such as variable aggregation, a heuristic branching scheme, and hypergraphic cut generations are employed [28, 21, 12]. 4 Classification results on real-world applications The main objective in discriminant analysis is to derive rules that can be used to classify entities into groups. Computationally, the challenge lies in the effort expended to develop such a rule. Once the rule is developed, applying it to a new entity to determine its group is trivial. Feasible solutions obtained from our classification models correspond to predictive rules. Empirical results [28, 49] indicate that the resulting classification model instances are computationally very challenging, and even intractable by competitive commercial MIP solvers. However, the resulting predictive rules prove to be very promising, offering correct classification rates on new unknown data ranging from 80% to 100% on various types of biological/medical problems. Our results indicate that the general-purpose classification framework that we have designed has the potential to be a very powerful predictive method for clinical settings. The choice of mixed integer programming (MIP) as the underlying modeling and optimization technology for our support vector machine classification model is guided by the desire to simultaneously incorporate a variety of important and desirable properties of predictive models within a general framework. MIP itself allows for the incorporation of continuous and discrete variables and linear and nonlinear constraints, providing a flexible and powerful modeling environment. 
4.1 Validation of model and computational effort We performed ten-fold cross validation, and designed simulation and comparison studies on our preliminary models. The results, reported in [28, 49], show the methods are promising, based on applications to both simulated data and Optimization-based predictive models in medicine and biology datasets from the machine learning database repository [62]. Furthermore, our methods compare well and at times superior to existing methods, such as artificial neural networks, quadratic discriminant analysis, tree classification, and other support vector machines, on real biological and medical data. 4.2 Applications to biological and medical problems Our mathematical modeling and computational algorithm design shows great promise as the resulting predictive rules are able to produce higher rates of correct classification on new biological data (with unknown group status) compared to existing classification methods. This is partly due to the transformation of raw data via the set of constraints in (7). While most support vector machines [53] directly determine the hyperplanes of separation using raw data, our approach transforms the raw data via a probabilistic model, before the determination of the supporting hyperplanes. Further, the separation is driven by maximizing the sum of binary variables (representing correct or incorrect classification of entities), instead of maximizing the margin between groups, or minimizing a sum of errors (representing distances of entities from hyperplanes) as in other support vector machines. The combination of these two strategies offers better classification capability. Noise in the transformed data is not as profound as in raw data. And the magnitudes of the errors do not skew the determination of the separating hyperplanes, as all entities have equal importance when correct classification is being counted. To highlight the broad applicability of our approach, in this paper we briefly summarize the application of our predictive models and solution algorithms to eight different biological problems. Each of the projects was carried out in close partnership with experimental biologists and/or clinicians. Applications to finance and other industry applications are described elsewhere [12, 28, 49]. Determining the type of Erythemato-Squamous disease The differential diagnosis of erythemato-squamous diseases is an important problem in dermatology. They all share the clinical features of erythema and scaling, with very little differences. The six groups are psoriasis, seboreic dermatitis, lichen planus, pityriasis rosea, chronic dermatitis, and pityriasis rubra pilaris. Usually a biopsy is necessary for the diagnosis, but unfortunately these diseases share many histopathological features as well. Another difficulty for the differential diagnosis is that a disease may show the features of another disease at the beginning stage and may have the characteristic features at the following stages [62]. The six groups consist of 366 subjects (112,61,72,49,52,20 respectively) with 34 clinical attributes. Patients were first evaluated clinically with 12 features. Afterwards, skin samples were taken for the evaluation of 22 histopathological features. The values of the histopathological features are E.K. Lee by an analysis of the samples under a microscope. 
The 34 attributes include 1) clinical attributes: erythema, scaling, definite borders, itching, koebner phenomenon, polygonal papules, follicular papules, oral mucosal involvement, knee and elbow involvement, scalp involvement, family history, age; and 2) histopathological attributes: melanin incontinence, eosinophils in the infiltrate, PNL infiltrate, fibrosis of the papillary dermis, exocytosis, acanthosis, hyperkeratosis, parakeratosis, clubbing of the rete ridges, elongation of the rete ridges, thinning of the suprapapillary epidermis, spongiform pustule, munro microabcess, focal hypergranulosis, disappearance of the granular layer, vacuolisation and damage of basal layer, spongiosis, saw-tooth appearance of retes, follicular horn plug, perifollicular parakeratosis, inflammatory monoluclear infiltrate, band-like infiltrate. Our multi-group classification model selected 27 discriminatory attributes, and successfully classified the patients into six groups, each with an unbiased correct classification of greater than 93% (with 100% correct rate for groups 1, 3, 5, 6) with an average overall accuracy of 98%. Using 250 subjects to develop the rule, and testing the remaining 116 patients, we obtain a prediction accuracy of 91%. Predicting aberrant CpG island methylation in human cancer [22, 23] Epigenetic silencing associated with aberrant methylation of promoter region CpG islands is one mechanism leading to loss of the tumor suppressor function in human cancer. Profiling of CpG island methylation indicates that some genes are more frequently methylated than others, and that each tumor type is associated with a unique set of methylated genes. However, little is known about why certain genes succumb to this aberrant event. To address this question, we used Restriction Landmark Genome Scanning (RLGS) to analyze the susceptibility of 1749 unselected CpG islands to de novo methylation driven by overexpression of DNMT1. We found that, whereas the overall incidence of CpG island methylation increased in cells overexpressing DNMT1, not all loci were equally affected. The majority of CpG islands (69.9%) were resistant to de novo methylation, regardless of DNMT1 overexpression. In contrast, we identified a subset of methylation-prone CpG islands (3.8%) that were consistently hypermethylated in multiple DNMT1 overexpressing clones. Methylation-prone and methylation-resistant CpG islands were not significantly different with respect to size, C+G content, CpG frequency, chromosomal location, or gene- or promoter-association. To discriminate methylation-prone from methylation-resistant CpG islands, we developed a novel DNA pattern recognition model and algorithm [45], and coupled our predictive model described herein with the patterns found. We were able to derive a classification function based on the frequency of seven novel sequence patterns that was capable of discriminating methylationprone from methylation-resistant CpG islands with 90% correctness upon Optimization-based predictive models in medicine and biology cross-validation, and 85% accuracy when tested against blind CpG islands unknown to us on the methylation status. The data indicate that CpG islands differ in their intrinsic susceptibility to de novo methylation, and suggest that the propensity for a CpG island to become aberrantly methylated can be predicted based on its sequence context. The significance of this research is two-fold. 
First, the identification of sequence patterns/attributes that distinguish methylation-prone CpG islands will lead to a better understanding of the basic mechanisms underlying aberrant CpG island methylation. Because genes that are silenced by methylation are otherwise structurally sound, the potential for reactivating these genes by blocking or reversing the methylation process represents an exciting new molecular target for chemotherapeutic intervention. A better understanding of the factors that contribute to aberrant methylation, including the identification of sequence elements that may act to target aberrant methylation, will be an important step in achieving this long-term goal. Secondly, the classification of the more than 29,000 known (but as yet unclassified) CpG islands in human chromosomes will provide an important resource for the identification of novel gene targets for further study as potential molecular markers that could impact both cancer prevention and treatment. Extensive RLGS fingerprint information (and thus potential training sets of methylated CpG islands) already exists for a number of human tumor types, including breast, brain, lung, leukemias, hepatocellular carcinomas, and PNET [17, 18, 26, 67]. Thus, the methods and tools developed are directly applicable to CpG island methylation data derived from human tumors. Moreover, new microarray-based techniques capable of ’profiling’ more than 7000 CpG islands have been developed and applied to human breast cancers [9, 74, 75]. We are uniquely poised to take advantage of the tumor CpG island methylation profile information that will likely be generated using these techniques over the next several years. Thus, our general-predictive modeling framework has the potential to lead to improved diagnosis and prognosis and treatment planning for cancer patients. Discriminant analysis of cell motility and morphology data in human lung carcinoma [14] This study focuses on the differential effects of extracellular matrix proteins on the motility and morphology of human lung epidermoid carcinoma cells. The behavior of carcinoma cells is contrasted with that of normal L-132 cells, resulting in a method for the prediction of metastatic potential. Data collected from time-lapsed videomicroscopy were used to simultaneously produce quantitative measures of motility and morphology. The data were subsequently analyzed using our discriminant analysis model and algorithm to discover relationships between motility, morphology, and substratum. Our discriminant analysis tools enabled the consideration of many more cell attributes than is customary in cell motility studies. The observations correlate with behaviors seen in vivo and suggest specific roles for the extracellular matrix proteins and E.K. Lee their integrin receptors in metastasis. Cell translocation in vitro has been associated with malignancy, as has an elongated phenotype [76] and a rounded phenotype [66]. Our study suggests that extracellular matrix proteins contribute in different ways to the malignancy of cancer cells, and that multiple malignant phenotypes exist. Ultrasonic assisted cell disruption for drug delivery [48] Although biological effects of ultrasounds must be avoided for safe diagnostic applications, an ultrasound’s ability to disrupt cell membranes has attracted interest in it as a method to facilitate drug and gene delivery. 
This preliminary study seeks to develop rules for predicting the degree of cell membrane disruption based on specified ultrasound parameters and measured acoustic signals. Too much ultrasound destroys cells, while cell membranes will not open up for absorption of macromolecules when too little ultrasound is applied. The key is to increase cell permeability to allow absorption of macromolecules, and to apply ultrasound transiently to disrupt viable cells so as to enable exogenous material to enter without cell damage. Thus our task is to uncover a “predictive rule” of ultrasound-mediated disruption of red blood cells using acoustic spectrums and measurements of cell permeability recorded in experiments. Our predictive model and solver for generating prediction rules are applied to data obtained from a sequence of experiments on bovine red blood cells. For each experiment, the attributes consist of 4 ultrasound parameters, acoustic measurements at 400 frequencies, and a measure of cell membrane disruption. To avoid over-training, various feature combinations of the 404 predictor variables are selected when developing the classification rule. The results indicate that the variable combination consisting of ultrasound exposure time and acoustic signals measured at the driving frequency and its higher harmonics yields the best rule. Our method compares favorably with the classification tree and other ad hoc approaches, with a correct classification rate of 80% upon cross-validation and 85% when classifying new unknown entities. Our methods used for deriving the prediction rules are broadly applicable, and could be used to develop prediction rules in other scenarios involving different cell types or tissues. These rules and the methods used to derive them could be used for real-time feedback about ultrasound’s biological effects. For example, it could assist clinicians during a drug delivery process, or could be imported into an implantable device inside the body for automatic drug delivery and monitoring. Identification of tumor shape and volume in treatment of sarcoma [46] This project involves the determination of tumor shape for adjuvant brachytherapy treatment of sarcoma, based on catheter images taken after surgery. In this application, the entities are overlapping consecutive triplets of catheter Optimization-based predictive models in medicine and biology markings, each of which is used for determining the shape of the tumor contour. The triplets are to be classified into one of two groups: Group 1 = [triplets for which the middle catheter marking should be bypassed], and Group 2 = [triplets for which the middle marking should not be bypassed]. To develop and validate a classification rule, we used clinical data collected from fifteen soft tissue sarcoma (STS) patients. Cumulatively, this comprised 620 triplets of catheter markings. By careful (and tedious) clinical analysis of the geometry of these triplets, 65 were determined to belong to Group 1, the “bypass” group, and 555 were determined to belong to Group 2, the “do-not-bypass” group. A set of measurements associated with each triplet is then determined. The choice of what attributes to measure to best distinguish triplets as belonging to Group 1 or Group 2 is non trivial. The attributes involved distance between each pair of markings, angles, and curvature formed by the three triplet markings. Based on the selected attributes, our predictive model was used to develop a classification rule. 
The resulting rule provides 98% correct classification on cross-validation, and was capable of correctly determining/predicting 95% of the shape of the tumor on new patients’ data. We remark that the current clinical procedure requires manual outline based on markers in films of the tumor volume. This study was the first to use automatic construction of tumor shape for sarcoma adjuvant brachytherapy [46, 47]. Discriminant analysis of biomarkers for prediction of early atherosclerosis [44] Oxidative stress is an important etiologic factor in the pathogenesis of vascular disease. Oxidative stress results from an imbalance between injurious oxidant and protective antioxidant events in which the former predominate [59, 68]. This results in the modification of proteins and DNA, alteration in gene expression, promotion of inflammation, and deterioration in endothelial function in the vessel wall, all processes that ultimately trigger or exacerbate the atherosclerotic process [16, 72]. It was hypothesized that novel biomarkers of oxidative stress would predict early atherosclerosis in a relatively healthy nonsmoking population who are free from cardiovascular disease. One hundred and twenty seven healthy non-smokers, without known clinical atherosclerosis had carotid intima media thickness (IMT) measured using ultrasound. Plasma oxidative stress was estimated by measuring plasma lipid hydroperoxides using the determination of reactive oxygen metabolites (d-ROMs) test. Clinical measurements include traditional risk factors such as age, sex, low density lipoprotein (LDL), high density lipoprotein (HDL), triglycerides, cholesterol, body-mass-index (BMI), hypertension, diabetes mellitus, smoking history, family history of CAD, Framingham risk score, and Hs-CRP. For this prediction, the patients are first clustered into two groups: (Group 1: IMT >= 0.68, Group 2: IMT < 0.68). Based on this separator, 30 patients belong to Group 1 and 97 belong to Group 2. Through each iteration, the classification method trains and learns from the input training set and returns the E.K. Lee most discriminatory patterns among the 14 clinical measurements; ultimately resulting in the development of a prediction rule based on observed values of these discriminatory patterns among the patient data. Using all 127 patients as a training set, the predictive model identified age, sex, BMI, HDLc, Fhx CAD < 60, hs-CRP and d-ROM as discriminatory attributes that together provide unbiased correct classification of 90% and 93%, respectively, for Group 1 (IMT >= 0.68) and Group 2 (IMT < 0.68) patients. To further test the power of the classification method for correctly predicting the IMT status on new/unseen patients, we randomly selected a smaller patient training set of size 90. The predictive rule from this training set yields 80% and 89% correct rates for predicting the remaining 37 patients into Group 1 and Group 2, respectively. The importance of d-ROM as a discriminatory predictor for IMT status was confirmed during the machine learning process. This biomarker was selected in every iteration as the “machine” learned and trained to develop a predictive rule to correctly classify patients in the training set. We also performed predictive analysis using Framingham Risk Score and d-ROM; in this case the unbiased correct classification rates (for the 127 individuals) for Groups 1 and 2 are 77% and 84%, respectively. 
This is the first study to illustrate that this measure of oxidative stress can be effectively used along with traditional risk factors to generate a predictive rule that can potentially serve as an inexpensive clinical diagnostic tool for the prediction of early atherosclerosis. Fingerprinting native and angiogenic microvascular networks through pattern recognition and discriminant analysis of functional perfusion data [50] The cardiovascular system provides oxygen and nutrients to the entire body. Pathological conditions that impair normal microvascular perfusion can result in tissue ischemia, with potentially serious clinical effects. Conversely, development of new vascular structures fuels the progression of cancer, macular degeneration and atherosclerosis. Fluorescence-microangiography offers superb imaging of the functional perfusion of new and existent microvasculature, but quantitative analysis of the complex capillary patterns is challenging. We developed an automated pattern-recognition algorithm to systematically analyze the microvascular networks, and then apply our classification model herein to generate a predictive rule. The pattern-recognition algorithm identifies the complex vascular branching patterns, and the predictive rule demonstrates 100% and 91% correct classification on perturbed (diseased) and normal tissue perfusion, respectively. We confirmed that transplantation of normal bone marrow to mice in which genetic deficiency resulted in impaired angiogenesis eliminated predicted differences and restored normal-tissue perfusion patterns (with 100% correctness). The pattern recognition and classification method offers an elegant solution for the automated fingerprinting of microvascular networks that could contribute to better understanding of angiogenic Optimization-based predictive models in medicine and biology and be utilized to diagnose and monitor microvascular deficiencies. Such information would be valuable for early detection and monitoring of functional abnormalities before they produce obvious and lasting effects, which may include improper perfusion of tissue, or support of tumor development. The algorithm can be used to discriminate between the angiogenic response in a native healthy specimen compared to groups with impairment due to age, chemical or other genetic deficiency. Similarly, it can be applied to analyze angiogenic responses as a result of various treatments. This will serve two important goals. First, the identification of discriminatory patterns/attributes that distinguish angiogenesis status will lead to a better understanding of the basic mechanisms underlying this process. Because therapeutic control of angiogenesis could influence physiological and pathological processes such as wound and tissue repairing, cancer progression and metastasis, or macular degeneration, the ability to understand it under different conditions will offer new insight in developing novel therapeutic interventions, monitoring and treatment, especially in aging, and heart disease. Thus, our study and the results form the foundation of a valuable diagnostic tool for changes in the functionality of the microvasculature and for discovery of drugs that alter the angiogenic response. The methods can be applied to tumor diagnosis, monitoring and prognosis. In particular, it will be possible to derive microangiographic fingerprints to acquire specific microvascular patterns associated with early stages of tumor development. 
Such “angioprinting” could become an extremely helpful early diagnostic modality, especially for easily accessible tumors such as skin cancer. Prediction of protein localization sites The protein localization database consists of 8 groups with a total of 336 instances (143, 77, 52, 35, 20, 5, 2, 2, respectively) with 7 attributes [62]. The eight groups are eight localization sites of protein, including cp (cytoplasm), im (inner membrane without signal sequence), pp (perisplasm), imU (inner membrane, uncleavable signal sequence), om (outer membrane), omL (outer membrane lipoprotein), imL (inner membrane lipoprotein), and imS (inner membrane, cleavable signal sequence). However, the last four groups are taken out from our classification experiment since the population sizes are too small to ensure significance. The seven attributes include mcg (McGeoch’s method for signal sequence recognition), gvh (von Heijne’s method for signal sequence recognition), lip (von Heijne’s Signal Peptidase II consensus sequence score), chg (Presence of charge on N-terminus of predicted lipoproteins), aac (score of discriminant analysis of the amino acid content of outer membrane and periplasmic proteins), alm1 (score of the ALOM membrane spanning region prediction program), and alm2 (score of ALOM program after excluding putative cleavable signal regions from the sequence). E.K. Lee In the classification we use 4 groups, 307 instances, with 7 attributes. Our classification model selected the discriminatory patterns mcg, gvh, alm1, and alm2 to form the predictive rule with unbiased correct classification rates of 89%, compared to the results of 81% by other classification models [36]. 5 Summary and conclusion In the article, we present a class of general-purpose predictive models that we have developed based on the technology of large-scale optimization and support-vector machines [28, 49, 42, 43, 12, 13]. Our models seek to maximize the correct classification rate while constraining the number of misclassifications in each group. The models incorporate the following features: 1) the ability to classify any number of distinct groups; 2) allowing incorporation of heterogeneous types of attributes as input; 3) a high-dimensional data transformation that eliminates noise and errors in biological data; 4) constraining the misclassification in each group and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule); and 5) successive multi-stage classification capability to handle data points placed in the reserved-judgment region. The performance and predictive power of the classification models is validated through a broad class of biological and medical applications. Classification models are critical to medical advances as they can be used in genomic, cell, molecular, and system level analyses to assist in early prediction, diagnosis and detection of disease, as well as for intervention and monitoring. As shown in the CpG island study for human cancer, such prediction and diagnosis opens up novel therapeutic sites for early intervention. The ultrasound application illustrates its application to a novel drug delivery mechanism, assisting clinicians during a drug delivery process, or in devising implantable devices into the body for automated drug delivery and monitoring. 
The lung cancer cell motility offers an understanding of how cancer cells behave under different protein media, thus assisting in the identification of potential gene therapy and target treatment. Prediction of the shape of a cancer tumor bed provides a personalized treatment design, replacing manual estimates by sophisticated computer predictive models. Prediction of early atherosclerosis through inexpensive biomarker measurements and traditional risk factors can serve as a potential clinical diagnostic tool for routine physical and health maintenance, alerting doctors and patients to the need for early intervention to prevent serious vascular disease. Fingerprinting of microvascular networks opens up the possibility of early diagnosis of perturbed systems in the body that may trigger disease (e.g., genetic deficiency, diabetes, aging, obesity, macular degeneracy, tumor formation), identifying the target site for treatment, and monitoring the prognosis and success of treatment. Thus, classification models serve as a basis for predictive medicine where the desire is to diagnose early and provide personalized target intervention. This has the Optimization-based predictive models in medicine and biology potential to reduce healthcare costs, improve the success of treatment and quality-of-life of patients. In [11], we have showed that our multi-category constrained discrimination analysis predictive model is strongly universally consistent. Further theoretical studys will be performed on these models to understand their characteristics and the sensitivity of the predictive patterns to model/ parameter variations. The modeling framework for discrete support vector machines offers great flexibility, enabling one to simultaneously incorporate the features as listed above, as well as many other features. However, deriving the predictive rules for such problems can be computationally demanding, due to the NP-hard nature of mixed integer programming [29]. We continue to work on improving optimization algorithms utilizing novel cutting plane and branch-and-bound strategies, fast heuristic algorithms, and parallel algorithms [6, 21, 38–41, 51, 52]. Acknowledgement This research was partially supported by the National Science Foundation. References 1. J. A. Anderson. Constrained discrimination between k populations. Journal of the Royal Statistical Society, Series B, 31:123–139, 1969. 2. S. M. Bajgier and A. V. Hill. An experimental comparison of statistical and linear programming approaches to the discriminant problems. Decision Sciences, 13:604–618, 1982. 3. K. P. Bennett and E. J. Bredensteiner. A parametric optimization method for machine learning. INFORMS Journal on Computing, 9:311–318, 1997. 4. K. P. Bennett and O. L. Mangasarian. Multicategory discrimination via linear programming. Optimization Methods and Software, 3:27–39, 1993. 5. C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, 1995. 6. R. E. Bixby, W. Cook, A. Cox, and E. K. Lee. Computational experience with parallel mixed integer programming in a distributed environment. Annals of Operations Research, Special Issue on Parallel Optimization, 90:19–43, 1999. 7. P. S. Bradley, U. M. Fayyad, and O. L. Mangasarian. Mathematical programming for data mining: Formulations and challenges. INFORMS Journal on Computing, 11:217–238, 1999. 8. J. Breiman, R. Friedman, A. Olshen, and C. J. Stone. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1984. 9. G. J. Brock, T. H. Huang, C. M. 
Chen, and K. J. Johnson. A novel technique for the identification of CpG islands exhibiting altered methylation patterns (ICEAMP). Nucleic Acids Research, 29, 2001. 10. J. D. Broffit, R. H. Randles, and R. V. Hogg. Distribution-free partial discriminant analysis. Journal of the American Statistical Association, 71:934–939, 1976. E.K. Lee 11. J. P. Brooks and E. K. Lee. Analysis of the consistency of a mixed integer programming-based multi-category constrained discriminant model. Annals of Operations Research – Data Mining. Submitted, 2006. 12. J. P. Brooks and E. K. Lee. Solving a mixed-integer programming formulation of a multi-category constrained discrimination model. Proceedings of the 2006 INFORMS Workshop on Artificial Intelligence and Data Mining, Pittsburgh, PA, Nov 2006. 13. J. P. Brooks and E. K. Lee. Mixed integer programming constrained discrimination model for credit screening. Proceedings of the 2007 Spring Simulation Multiconference, Business and Industry Symposium, Norfolk, VA, March 2007. ACM Digital Library, pages 1–6. 14. J. P. Brooks, Adele Wright, C. Zhu, and E. K. Lee. Discriminant analysis of motility and morphology data from human lung carcinoma cells placed on purified extracellular matrix proteins. Annals of Biomedical Engineering, in review, 2006. 15. T. M. Cavalier, J. P. Ignizio, and A. L. Soyster. Discriminant analysis via mathematical programming: certain problems and their causes. Computers and Operations Research, 16:353–362, 1989. 16. M. Chevion, E. Berenshtein, and E. R. Stadtman. Human studies related to protein oxidation: protein carbonyl content as a marker of damage. Free Radical Research, 33:S99–S108, 2000. 17. J. F. Costello, M. C. Fruhwald, D. J. Smiraglia, L. J. Rush, G. P. Robertson, X. Gao, F. A. Wright, J. D. Feramisco, P. Peltomaki, J. C. Lang, D. E. Schuller, L. Yu, C. D. Bloomfield, M. A. Caligiuri, A. Yates, R. Nishikawa, H. H. Su, N. J. Petrelli, X. Zhang, M. S. O’Dorisio, W. A. Held, W. K. Cavenee, and C. Plass. Aberrant CpG-island methylation has non-random and tumour-typespecific patterns. Nature Genetics, 24:132–138, 2000. 18. J. F. Costello, C. Plass, and W. K. Cavenee. Aberrant methylation of genes in low-grade astrocytomas. Brain Tumor Pathology, 17:49–56, 2000. 19. N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. 20. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern classification. Wiley, 2nd edition, New York, 2001. 21. T. Easton, K. Hooker, and E. K. Lee. Facets of the independent set polytope. Mathematical Programming B, 98:177–199, 2003. 22. F. A. Feltus, E. K. Lee, J. F. Costello, C. Plass, and P. M. Vertino. Predicting aberrant CpG island methylation. Proceedings of the National Academy of Sciences, 100:12253–12258, 2003. 23. F. A. Feltus, E. K. Lee, J. F. Costello, C. Plass, and P. M. Vertino. DNA signatures associated with CpG island methylation states. Genomics, 87:572– 579, 2006. 24. N. Freed and F. Glover. A linear programming approach to the discriminant problem. Decision Sciences, 12:68–74, 1981. 25. N. Freed and F. Glover. Evaluating alternative linear programming models to solve the two-group discriminant problem. Decision Sciences, 17:151–162, 1986. 26. M. C. Fruhwald, M. S. O’Dorisio, L. J. Rush, J. L. Reiter, D. J. Smiraglia, G. Wenger, J. F. Costello, P. S. White, R. Krahe, G. M. Brodeur, and C. Plass. Optimization-based predictive models in medicine and biology 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 
Gene amplification in NETs/medulloblastomas: mapping of a novel amplified gene within the MYCN amplicon. Journal of Medical Genetics, 37:501–509, 2000. R. J. Gallagher, E. K. Lee, and D. Patterson. An optimization model for constrained discriminant analysis and numerical experiments with iris, thyroid, and heart disease datasets. in: Cimino jj. In Proceedings of the 1996 American Medical Informatics Association, pages 209–213, 1996. R. J. Gallagher, E. K. Lee, and D.A. Patterson. Constrained discriminant analysis via 0/1 mixed integer programming. Annals of Operations Research, Special Issue on Non-Traditional Approaches to Statistical Classification and Regression, 74:65–88, 1997. M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York, 1979. W. V. Gehrlein. General mathematical programming formulations for the statistical classification problem. Operations Research Letters, 5:299–304, 1986. M. P. Gessaman and P.H. Gessaman. A comparison of some multivariate discrimination procedures. Journal of the American Statistical Association, 67:468–472, 1972. F. Glover. Improved linear programming models for discriminant analysis. Decision Sciences, 21:771–785, 1990. F. Glover, S. Keene, and B. Duea. A new class of models for the discriminant problem. Decision Sciences, 19:269–280, 1988. W. Gochet, A. Stam, V. Srinivasan, and S. Chen. Multigroup discriminant analysis using linear programming. Operations Research, 45:213–225, 1997. J. D. F. Habbema, J. Hermans, and A. T. Van Der Burgt. Cases of doubt in allocation problems. Biometrika, 61:313–324, 1974. P. Horton and K. Nakai. A probablistic classification system for predicting the cellular localization sites of proteins. Intelligent Systems in Molecular Biology, pages 109–115, 1996. St. Louis, United States. G. J. Koehler and S. S. Erenguc. Minimizing misclassifications in linear discriminant analysis. Decision Sciences, 21:63–85, 1990. E. K. Lee. Computational experience with a general purpose mixed 0/1 integer programming solver (MIPSOL). Software report, School of Industrial and Systems Engineering, Georgia Institute of Technology, 1997. E. K. Lee. A linear-programming based parallel cutting plane algorithm for mixed integer programming problems. Proceedings for the Third Scandinavian Workshop on Linear Programming, pages 22–31, 1999. E. K. Lee. Branch-and-bound methods. In Mauricio G. C. Resende and Panos M. Pardalos, editors, Handbook of Applied Optimization. Oxford University Press, 2001. E. K. Lee. Generating cutting planes for mixed integer programming problems in a parallel distributed memory environment. INFORMS Journal on Computing, 16:1–28, 2004. E. K. Lee. Discriminant analysis and predictive models in medicine. In S. J. Deng, editor, Interdisciplinary Research in Management Science, Finance, and HealthCare. Peking University Press, 2006. To appear. E. K. Lee. Large-scale optimization-based classification models in medicine and biology. Annals of Biomedical Engineering, Systems Biology and Bioinformatics, 35:1095–1109, 2007. E.K. Lee 44. E. K. Lee, S. Ashfaq, D. P. Jones, S. D. Rhodes, W. S. Weintrau, C. H. Hopper, V. Vaccarino, D. G. Harrison, and A. A. Quyyumi. Prediction of early atherosclerosis in healthy adults via novel markers of oxidative stress and d-roms. Working Paper, 2007. 45. E. K. Lee, T. Easton, and K. Gupta. Novel evolutionary models and applications to sequence alignment problems. 
Operations Research in Medicine – Computing and Optimization in Medicine and Life Sciences, 148:167–187, 2006. 46. E. K. Lee, A. Y. C. Fung, J. P. Brooks, and M. Zaider. Automated tumor volume contouring in soft-tissue sarcoma adjuvant brachytherapy treatment. International Journal of Radiation Oncology, Biology and Physics, 47:1891–1910, 2002. 47. E. K. Lee, A. Y. C. Fung, and M. Zaider. Automated planning volume contouring in soft-tissue sarcoma adjuvant brachytherapy treatment. International Journal of Radiation Oncology Biology Physics, 51, 2001. 48. E. K. Lee, R. Gallagher, A. Campbell, and M. Prausnitz. Prediction of ultrasound-mediated disruption of cell membranes using machine learning techniques and statistical analysis of acoustic spectra. IEEE Transactions on Biomedical Engineering, 51:1–9, 2004. 49. E. K. Lee, R. J. Gallagher, and D. Patterson. A linear programming approach to discriminant analysis with a reserved judgment region. INFORMS Journal on Computing, 15:23–41, 2003. 50. E. K. Lee, S. Jagannathan, C. Johnson, and Z. S. Galis. Fingerprinting native and angiogenic microvascular networks through pattern recognition and discriminant analysis of functional perfusion data. Submitted, 2006. 51. E. K. Lee and S. Maheshwary. Facets of conflict hypergraphs. Submitted to Mathematics of Operations Research, 2005. 52. E. K. Lee and J. Mitchell. Computational experience of an interior-point SQP algorithm in a parallel branch-and-bound framework. In J. Franks, J. Roos, J. Terlaky, and J. Zhang, editors, High Performance Optimization Techniques, pages 329–347. Kluwer Academic Publishers, 1997. 53. E. K. Lee and T. L. Wu. Classification and disease prediction via mathematical programming. In O. Seref, O. Kundakcioglu, and P. Pardalos, editors, Data Mining, Systems Analysis, and Optimization in Biomedicine, AIP Conference Proceedings, 953: 1–42, 2007. 54. J. M. Liittschwager and C. Wang. Integer programming solution of a classification problem. Management Science, 24:1515–1525, 1978. 55. T. S. Lim, W. Y. Loh, and Y. S. Shih. A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 40:203–228, 2000. 56. O. L. Mangasarian. Mathematical programming in neural networks. ORSA Journal on Computing, 5:349–360, 1993. 57. O. L. Mangasarian. Mathematical programming in data mining. Data Mining and Knowledge Discovery, 1:183–201, 1997. 58. O. L. Mangasarian, W. N. Street, and W. H. Wolberg. Breast cancer diagnosis and prognosis via linear programming. Operations Research, 43:570–577, 1995. 59. J. M. McCord. The evolution of free radicals and oxidative stress. The American Journal of Medicine, 108:652–659, 2000. 60. G. J. McLachlan. Discriminant Analysis and Statistical Pattern Recognition. Wiley, New York, 1992. Optimization-based predictive models in medicine and biology 61. K. R. M¨ uller, S. Mika, G. R¨ atsch, K. Tsuda, and B. Sch´ olkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12:181–201, 2001. 62. P. M. Murphy and D. W. Aha. UCI repository of machine learning databases. Technical report, Department of Information and Computer Science, University of California, Irvine, California, 1994. 63. T.-H. Ng and R. H. Randles. Distribution-free partial discrimination procedures. Computers and Mathematics with Applications, 12A:225–234, 1986. 64. R. Pavur and C. Loucopoulos. 
Examining optimal criterion weights in mixed integer programming approaches to the multiple-group classification problem. Journal of the Operational Research Society, 46:626–640, 1995. 65. C. P. Quesenberry and M. P. Gessaman. Nonparametric discrimination using tolerance regions. Annals of Mathematical Statistics, 39:664–673, 1968. 66. A. Raz and A. Ben-Z´eev. Cell-contact and -architecture of malignant cells and their relationship to metastasis. Cancer and Metastasis Reviews, 6:3–21, 1987. 67. L. J. Rush, Z. Dai, D. J. Smiraglia, X. Gao, F. A. Wright, M. Fruhwald, J. F. Costello, W. A. Held, L. Yu, R. Krahe, J. E. Kolitz, C. D. Bloomfield, M. A. Caligiuri, and C. Plass. Novel methylation targets in de novo acute myeloid leukemia with prevalence of chromosome 11 loci. Blood, 97:3226–3233, 2001. 68. H. Sies. Oxidative stress: introductory comments. H. Sies, Editor, Oxidative stress, Academic Press, London, 1–8, 1985. 69. A. Stam. Nontraditional approaches to statistical classification: Some perspectives on Lp-norm methods. Annals of Operations Research, 74:1–36, 1997. 70. A. Stam and E. A. Joachimsthaler. Solving the classification problem in discriminant analysis via linear and nonlinear programming. Decision Sciences, 20:285–293, 1989. 71. A. Stam and C. T. Ragsdale. On the classification gap in mathematicalprogramming-based approaches to the discriminant problem. Naval Research Logistics, 39:545–559, 1992. 72. S. Tahara, M. Matsuo, and T. Kaneko. Age-related changes in oxidative damage to lipids and DNA in rat skin. Mechanisms of Ageing and Development, 122:415–426, 2001. 73. V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1999. 74. P. S. Yan, C. M. Chen, H. Shi, F. Rahmatpanah, S. H. Wei, C. W. Caldwell, and T. H. Huang. Dissecting complex epigenetic alterations in breast cancer using CpG island microarrays. Cancer Research, 61:8375–8380, 2001. 75. P. S. Yan, M. R. Perry, D. E. Laux, A. L. Asare, C. W. Caldwell, and T. H. Huang. CpG island arrays: an application toward deciphering epigenetic signatures of breast cancer. Clinical Cancer Research, 6:1432–1438, 2000. 76. A. Zimmermann and H. U. Keller. Locomotion of tumor cells as an element of invasion and metastasis. Biomedicine & Pharmacotherapy, 41:337–344, 1987. 77. C. Zopounidis and M. Doumpos. Multicriteria classification and sorting methods: A literature review. European Journal of Operational Research, 138:229–246, 2002. Optimal reconstruction kernels in medical imaging Alfred K. Louis Department of Mathematics, Saarland University, 66041 Saarbr¨ ucken Germany [email protected] Summary. In this paper we present techniques for deriving inversion algorithms in medical imaging. To this end we present a few imaging technologies and their mathematical models. They essentially consist of integral operators. The reconstruction is then recognized as the solution of an inverse problem. General strategies, the socalled approximate inverse, for deriving a solution are adapted. Results from real data are Keywords: 3D-Tomography, optimal algorithms (accuracy, efficiency, noise reduction), error bounds for influence of data noise, approximate inverse. 1 Introduction The task in medical imaging is to provide, in a non-invasive way, information about the internal structure of the human body. The basic principle is that the patient is scanned by applying some sort of radiation and its interaction with the body is measured. This result is the data whose origin has to be identified. Hence we face an inverse problem. 
There are several different imaging techniques and also different ways to characterize them. For the patient, a very substantial difference is whether the source is inside or outside the body, whether we have emission or transmission tomography. From the diagnostic point of view the resulting information is a way to distinguish the different techniques. Some methods provide information about the density of the tissue, as x-ray computer tomography, ultrasound computer tomography, or diffuse tomography. A distinction between properties of the tissues is possible with magnetic resonance imaging and impedance computer tomography. Finally the localization of activities is possible with biomagnetism (electrical activities) and emission computer tomography (nuclear activities of injected pharmaceuticals).

From a physical point of view the applied wavelengths can serve as a classification. The penetration of electromagnetic waves into the body is sufficient only for wavelengths smaller than 10^{−11} m or larger than a few cm, respectively. In the extremely short ranges are x-rays, single particle emission tomography and positron emission computer tomography. MRI uses wavelengths larger than 1 m; extremely long waves are used in biomagnetism. In the range of a few mm to a few cm are microwaves, ultrasound and light.

In this paper we present some principles in designing inversion algorithms in tomography. We concentrate on linear problems arising in connection with the Radon and the x-ray transform. In the original 2D x-ray CT problem, the Radon transform served as a mathematical model. Here one integrates over lines and the problem is to recover a function from its line integrals. The same holds in the 3D x-ray case, but in 3D the Radon transform integrates over planes, in general over (N − 1)-dimensional hyperplanes in R^N. Hence here the so-called x-ray transform is the mathematical model. Further differences are in the parametrization of the lines. The 3D Radon transform merely appears as a tool to derive inversion formulae. In the early days of MRI (magnetic resonance imaging), in those days called NMR, nuclear magnetic resonance, it served as a mathematical model, see for example Marr-Chen-Lauterbur [26]. But then, due to the limitations of computer power in those days, one changed the measuring procedure and scanned the Fourier transform of the searched-for function in two dimensions. The Radon transform has reappeared, now in three and even four dimensions, as a mathematical model in EPRI (electron paramagnetic resonance imaging) where spectral-spatial information is the goal, see, e.g., Kuppusamy et al. [11]. Here also incomplete data problems play a central role, see e.g. [12, 23].

The paper is organized as follows. We start with a general principle for reconstructing information from measured data, the so-called approximate inverse, see [16, 20]. The well-known inversion of the Radon transform is considered a model case for inversion. Finally, we consider a 3D x-ray problem and present reconstructions from real data.

2 Approximate inverse as a tool for deriving inversion algorithms

The integral operators appearing in medical imaging are typically compact operators between suitable Hilbert spaces. The inverse operator of such a compact operator with infinite dimensional range is not continuous, which means that the unavoidable data errors are amplified in the solution.
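The following toy computation illustrates this amplification with a discrete smoothing operator standing in for a compact integral operator (illustrative only; the kernel width and noise level are arbitrary):

```python
import numpy as np

# A discrete convolution with a narrow Gaussian plays the role of a compact
# integral operator: its singular values decay rapidly, so naive inversion
# amplifies even tiny data errors enormously.
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

f = np.sin(2 * np.pi * t)                 # "true" object
g = A @ f                                  # ideal data
g_noisy = g + 1e-3 * np.random.randn(n)    # tiny measurement error

f_naive = np.linalg.solve(A, g_noisy)      # unregularized inversion
print(np.linalg.cond(A))                   # huge condition number
print(np.abs(f_naive - f).max())           # error amplified far beyond 1e-3
```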
Hence one has to be very careful in designing inversion algorithms and has to balance the demand for the highest possible accuracy against the necessary damping of the influence of unavoidable data errors. From the theoretical point of view, exact inversion formulae are nice, but they do not take care of data errors. The way out of this dilemma is the use of approximate inversion formulas, whose principles are explained in the following.

For approximating the solution of Af = g we apply the method of approximate inverse, see [16]. The basic idea works as follows: choose a so-called mollifier eγ(x, y) which, for a fixed reconstruction point x, is a function of the variable y and which approximates the delta distribution for the point x. The parameter γ acts as regularization parameter. Simply think, in the case of one spatial variable x, of

  eγ(x, y) = (1/(2γ)) χ_{[x−γ, x+γ]}(y),

where χ_Ω denotes the characteristic function of Ω. Then the mollifier fulfills

  ∫ eγ(x, y) dy = 1   (1)

for all x, and the function

  fγ(x) = ∫ f(y) eγ(x, y) dy

converges for γ → 0 to f. The larger the parameter γ, the larger the interval where the averaging takes place, and hence the stronger the smoothing. Now solve, for fixed reconstruction point x, the auxiliary problem

  A* ψγ(x, ·) = eγ(x, ·)   (2)

where eγ(x, ·) is the chosen approximation to the delta distribution for the point x, and put

  fγ(x) = ⟨f, eγ(x, ·)⟩ = ⟨f, A* ψγ(x, ·)⟩ = ⟨Af, ψγ(x, ·)⟩ = ⟨g, ψγ(x, ·)⟩ =: Sγ g(x).

The operator Sγ is called the approximate inverse and ψγ is the reconstruction kernel. To be precise, it is the approximate inverse for approximating the solution f of Af = g. If we choose, instead of an eγ fulfilling (1), a wavelet, then fγ can be interpreted as a wavelet transform of f. Wavelet transforms are known to approximate in a certain sense derivatives of the transformed function f, see [22]. Hence this is a possibility to find jumps in f, as used in contour reconstructions, see [16, 21].

The advantage of this method is that ψγ can be pre-computed independently of the data. Furthermore, invariances and symmetries of the operator A* can be directly transformed into corresponding properties of Sγ, as the following consideration shows, see Louis [16]. Let T1 and T2 be two operators intertwining with A*:

  A* T2 = T1 A*.

If we choose a standard mollifier E and solve A* Ψ = E, then the solution of Equation (2) for the special mollifier eγ = T1 E is given as ψγ = T2 Ψ. As an example we mention that if A* is translation invariant; i.e., T1 f(x) = T2 f(x) = f(x − a), then the reconstruction kernel is also translation invariant. Sometimes it is easier to check these conditions for A itself. Using A T1* = T2* A we get the above relations by using the adjoint operators. This method is presented in [17] as a general regularization scheme to solve inverse problems. Generalizations are also given. The application to vector fields is derived by Schuster [31]. If the auxiliary problem is not solvable, then its minimum norm solution leads to the minimum norm solution of the original problem.
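In a fully discrete setting the method can be sketched as follows (an illustrative Python fragment; in practice one exploits the invariances discussed above, so that a single precomputed kernel is shifted and reused rather than solving the auxiliary problem anew at every reconstruction point as done here):

```python
import numpy as np

def approximate_inverse(A, mollifier, points, g):
    """Discrete sketch of the approximate inverse S_gamma.

    A         : (m, n) matrix discretizing the forward operator
    mollifier : callable returning the discretized e_gamma(x, .) as an n-vector
    points    : iterable of reconstruction points x
    g         : measured data (m-vector)

    For each x, solve the auxiliary problem A^T psi = e_gamma(x, .) in the
    least-squares (minimum-norm) sense, then evaluate f_gamma(x) = <g, psi>.
    The kernels psi depend only on A and e_gamma, so they can be precomputed
    once and applied to every new data set.
    """
    f_gamma = np.empty(len(points))
    for k, x in enumerate(points):
        e = mollifier(x)                                # e_gamma(x, .)
        psi, *_ = np.linalg.lstsq(A.T, e, rcond=None)   # reconstruction kernel for x
        f_gamma[k] = g @ psi
    return f_gamma
```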
Its inverse is R−1 = cN R∗ I 1−N where R∗ is the adjoint operator from L2 to L2 , also called the backprojection, defined as 1 g(θ, x θ)dθ, R∗ g(x) = S N −1 α I is the Riesz potential defined via the Fourier transform as 2 α g)(ξ) = |ξ|−α g(ξ), (I acting on the second variable of Rf and the constant cN = 1 (2π)1−N , 2 see, e.g., [27]. We start with a mollifier eγ (x, ·) for the reconstruction point x and get R∗ ψγ (x, ·) = eγ (x, ·) = cN R∗ I 1−N Reγ (x, ·) Optimal reconstruction kernels in medical imaging leading to ψγ (x; θ, s) = cN I 1−N Reγ (x; θ, s). The Radon transform for fixed θ is translational invariant; i.e., if we denote by Rθ f (s) = Rf (θ, s), then Rθ T1a f = T2a Rθ f with the shift operators T1a f (x) = f (x − a) and T2t g(s) = g(s − t). If we chose a mollifier e¯γ supported in the unit ball centered around 0 that is shifted to x as x−y ) eγ (x, y) = 2−N e¯γ ( 2 then also eγ is supported in the unit ball and the reconstruction kernel fulfills ψγ (x; θ, s) = 1¯ s − x θ ψγ (θ, ) 2 2 as follows from the general theory in [16] and as was used for the 2D case in [24]. Furthermore, the Radon transform is invariant under rotations; i.e., RT1U = T2U R for the rotation T1U f (x) = f (U x) with unitary U and T2 U g(θ, s) = g(U θ, s). If the mollifier is invariant under rotation; i.e., e¯γ (x) = e¯γ ( x ) then the reconstruction kernel is independent of θ leading to the following observation. Theorem 1. Let the mollifier eγ (x, y) be of the form eγ (x, y) = 2−N e¯γ ( x − y /2) then the reconstruction kernel is a function only of the variable s and the algorithm is of filtered backprojection type 1 1 fγ (x) = ψγ (x θ − s)Rf (θ, s)dsdθ . (4) S n−1 First references to this technique can be found in the work of Gr¨ unbaum [2] and Solmon [8]. Lemma 1. The function fγ from Theorem 3.1 can be represented as a smoothed inversion or as a reconstruction of smoothed data as −1 ˜g fγ = R−1 g = R−1 M γ g = Mγ R A.K. Louis where Mγ f (x) = f, eγ (x, ·) 1 and ˜ γ g(θ, s) = M g(θ, t)˜ eγ (s − t)dt where e˜γ (s) = Reγ (s) for functions eγ fulfilling the conditions of Theorem 3.1. 4 Optimality criteria There are several criteria which have to be optimized. The speed of the reconstruction is an essential issue. The scanning time has to be short for the sake of the patients. In order to guarantee a sufficiently high patient throughput, the time for the reconstruction cannot slow down the whole system, but has to be achieved in real-time. The above mentioned invariances adapted to the mathematical model give acceptable results. The speed itself is not sufficient, therefore the accuracy has to be the best possible to ensure the medical diagnosis. This accuracy is determined by the amount of data and of unavoidable noise in the data. To optimise with respect to accuracy and noise reduction, we consider the problem in suitable Sobolev spaces H α = H α (RN ) 1 H α = {f ∈ S : f 2H α = (1 + |ξ|2 )α |fˆ(ξ)|2 dξ < ∞}. RN The corresponding norm on the cylinder C N = S N −1 × R is evaluated as 1 1 2 g H α (C N ) = (1 + |σ|2 )α |ˆ g(θ, σ)|2 dσdθ S N −1 where the Fourier transform is computed with respect to the second variable. We make the assumption that there is a number α > 0 such that c1 f −α ≤ Af L2 ≤ c2 f −α for all f ∈ N (A)⊥ . For the Radon transform in RN this holds with α = (N − 1)/2, see, e.g., [14, 27]. We assume the data to be corrupted by noise; i.e., g ε = Rf + n where the true solution f ∈ Hβ Optimal reconstruction kernels in medical imaging and the noise n ∈ Ht with t ≤ 0. 
In the case of white noise, characterized by equal intensity at all frequencies, see, e.g., [10, 15], we hence have |ˆ n(θ, σ)| = const, and this leads to n ∈ H t with t < −1/2. As mollifier we select a low-pass filter in the Fourier domain, resulting in two dimensions in the so-called RAM-LAK-filter. Its disadvantages are described in the next section. The theoretical advantage is that we get information about the frequencies in the solution and therefore the achievable resolution. This means we select a cut-off 1/γ for γ sufficiently small and eˆ ˜γ (σ) = (2π)−1/2 χ[−1/γ,1/γ](σ) where χA denotes the characteristic function of A. Theorem 2. Let the true solution be f ∈ H β with f β = ρ and the noise be n ∈ H t (C N ) with n t = ε. Then the total error in the reconstruction is for s < β (β−s)/(β−t+(N −1)/2) R−1 γ g − f s ≤ c n t (s−t+(N −1)/2)/(β−t+(N −1)/2) f β when the cut-off frequency is chosen as 1/(β−t+(N −1)/2) n t γ=η . f β Proof. We split the error in the data error and the approximation error as ε −1 −1 R−1 γ g − f s ≤ Rγ n s + Rγ Rf − f s . In order to estimate the data error we introduce polar coordinates and apply the so-called projection theorem 2(θ, σ) fˆ(σθ) = (2π)(1−N )/2 Rf 2 ˜ γ g = (2π)1/2 e˜3γ gˆ we get relating Radon and Fourier transform. With M 1 1 −1 2 −1 2 1−N Rγ n s = (2π) (1 + |σ|2 )s σ N −1 |RR γ n| dσdθ S N −1 = (2π)1−N S N −1 ˜ γ n|2 dσdθ (1 + |σ|2 )s−t σ N −1 (1 + |σ|2 )t |M ≤ (2π)1−N sup ((1 + |σ|2 )s−t |σ|N −1 ) n 2t |σ|≤1/γ = (2π)1−N (1 + γ −2 )s−t γ 1−N n 2t ≤ (2π)1−N 2s−t γ 2(t−s)+1−N n 2t A.K. Louis where we have used γ ≤ 1. Starting from eˆ ˜γ = Reγ we compute the Fourier transform of eγ via the projection theorem as eˆγ (ξ) = (2π)−N χ[0,1/γ] (|ξ|) and compute the approximation error as 1 Rf − = (1 + |ξ|2 )s |fˆ(ξ)|2 dξ R−1 s γ RN 1 = (1 + |ξ|2 )(s−β) (1 + |ξ|2 )β |fˆ(ξ)|2 dξ ≤ sup (1 + |ξ|2 )(s−β) f 2β |ξ|≥1/γ ≤ γ 2(β−s) f 2β . The total error is hence estimated as ε (1−N )/2 (s−t)/2 (t−s)+(1−N )/2 2 γ n t + γ (β−s) f β . R−1 γ g − f s ≤ (2π) Next we minimize this expression with respect to γ where we put with a = s − t + (N − 1)/2 and ϕ(γ) = c1 γ −a ε + γ β−s ρ. Differentiation leads to the minimum at 1/(β−s+a) c1 aε . γ= (β − s)ρ Inserting in ϕ completes the proof. This result shows that if the data error goes to zero, the cut-off goes to infinity. It is related to the inverse of the signal-to-noise ratio. 5 The filtered backprojection for the Radon transform in 2 and 3 dimensions In the following we describe the derivation of the filtered backprojection, see Theorem 3.1, for two and three dimensions. As seen in Formula (3.1) the inverse operator of the Radon transform in RN has the representation R−1 = R∗ B with Hence, using B = cN I 1−N . e = R−1 Re = R∗ BRe = R∗ ψ Optimal reconstruction kernels in medical imaging this can easily be solved as ψγ = cN I 1−N Reγ . As mollifier we choose a translational and rotational invariant function e¯γ (x, y) = eγ ( x − y ) whose Radon transform then is a function of the variable s only. Taking the Fourier transform of Equation (4.1) we get (Reγ ))(σ) ψˆγ (σ) = cN (I 1−N 1 = (2π)(1−N )/2 |σ|N −1 eˆγ (σ), 2 where in the last step we have again used the projection theorem 2 fˆ(σθ) = (2π)(1−N )/2 R θ f (σ). So, we can proceed in the following two ways. Either we prescribe the mollifier eγ , where the Fourier transform is then computed to 1 ∞ eγ (s)sN/2 JN/2−1 (sσ)ds eˆγ (σ) = σ 1−N/2 0 where Jν denotes the Bessel function of order ν. 
On the other hand we prescribe eˆγ (σ) = (2π)−N/2 Fγ (σ) with a suitably chosen filter Fγ leading to 1 ψˆγ (σ) = (2π)1/2−N |σ|N −1 Fγ (σ). 2 If Fγ is the ideal low-pass; i.e., Fγ (σ) = 1 for |σ| ≤ γ and 0 otherwise, then the mollifier is easily computed as eγ (x, y) = (2π)−N/2 γ N JN/2 (γ x − y ) . (γ x − y )N/2 In the two-dimensional case, the calculation of ψ leads to the so called RAMLAK filter, which has the disadvantage of producing ringing artefacts due to the discontinuity in the Fourier domain. More popular for 2D is the filter ⎧ ⎨ sinc σπ , |σ| ≤ γ, 2γ Fγ (σ) = ⎩ 0, |σ| > γ. A.K. Louis From this we compute the kernel ψγ by inverse Fourier transform to get γ = π/h where h is the stepsize on the detector; i.e., h = 1/q if we use 2q + 1 points on the interval [−1, 1] and s = s = h, = −q, . . . , q ψγ (s ) = 1 γ2 , 4 π 1 − 42 known as Shepp-Logan kernel. The algorithm of filtered backprojection is a stable discretization of the above described method using the composite trapezoidal rule for computing the discrete convolution. Instead of calculating the convolution for all points θ x, the convolution is evaluated for equidistant points h and then a linear interpolation is applied. Nearest neighbour interpolation is not sufficiently accurate, and higher order interpolation is not bringing any improvement because the interpolated functions are not smooth enough. Then the composite trapezoidal rule is used for approximating the backprojection. Here one integrates a periodic function, hence, as shown with the Euler-Maclaurin summation formula, this formula is highly accurate. The filtered backprojection then consists of two steps. Let the data Rf (θ, s) be given for the directions θj = (cos ϕj , sin ϕj ), ϕj = π(j − 1)/p, j = 1, ..., p and the values sk = kh, h = 1/q and k = −q, ..., q. Step 1: For j=1,...,p, evaluate the discrete convolutions vj, = h ψγ (s − sk )Rf (θj , sk ), = −q, ..., q. Step 2: For each reconstruction point x compute the discrete backprojection p 2π $ ˜ (1 − η)vj, + ηvj,+1 f (x) = p j=1 where, for each x and j, and η are determined by s = θj x, ≤ s/h < + 1, η = s/h − see, e.g., [27]. In the three-dimensional case we can use the fact, that the operator I −2 is local, ∂2 I −2 g(θ, s) = 2 g(θ, s). ∂s If we want to keep this local structure in the discretization we choose Fγ (σ) = 2(1 − cos(hσ))/(hσ)2 leading to ψγ (s) = (δγ − 2δ0 + δ−γ ) (s). Optimal reconstruction kernels in medical imaging Hence, the application of this reconstruction kernel is nothing but the central difference quotient for approximating the second derivative. The corresponding mollifier then is ⎧ ⎨ (2π)−1 h−2 |y|−1 , for |y| < h, eγ (y) = ⎩ 0, otherwise, see [13]. The algorithm has the same structure as mentioned above for the 2D case. In order to get reconstruction formulas for the fan beam, geometry coordinate transforms can be used, and the structure of the algorithms does not change. 6 Inversion formula for the 3D cone beam transform In the following we consider the X-ray reconstruction problem in three dimensions when the data is measured by firing an X-ray tube emitting rays to a 2D detector. The movement of the combination source-detector determines the different scanning geometries. In many real-world applications the source is moved on a circle around the object. From a mathematical point of view this has the disadvantage that the data are incomplete and the condition of Tuy-Kirillov is not fulfilled. 
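Before turning to the cone beam problem, the two-step structure of the filtered backprojection just described (discrete convolution in Step 1, backprojection with linear interpolation in Step 2) can be summarized in a short sketch. This is an illustration, not the author's code: the kernel values psi are assumed to be precomputed, for instance from the Shepp-Logan formula quoted above, the angles are taken equispaced in [0, pi), and the 2*pi/p backprojection weight follows the discrete formula in the text, so the overall normalization should be validated on a simple phantom against [27].

import numpy as np

def filtered_backprojection(radon_data, psi, h, xs, ys):
    """Schematic 2D filtered backprojection.

    radon_data : (p, 2q+1) samples Rf(theta_j, s_k), s_k = k*h, k = -q..q
    psi        : (4q+1,) kernel values psi_gamma(l*h), l = -2q..2q
    h          : detector step size
    xs, ys     : 1D arrays of reconstruction grid coordinates
    """
    p, ns = radon_data.shape
    q = (ns - 1) // 2
    phis = np.pi * np.arange(p) / p                      # directions theta_j in [0, pi)

    # Step 1: discrete convolutions v[j, l] = h * sum_k psi(s_l - s_k) Rf(theta_j, s_k)
    v = np.zeros((p, ns))
    for j in range(p):
        full = h * np.convolve(radon_data[j], psi, mode="full")
        v[j] = full[2 * q : 2 * q + ns]                  # keep lags l = -q..q

    # Step 2: backprojection with linear interpolation at s = theta_j . x
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    f = np.zeros_like(X, dtype=float)
    s_grid = (np.arange(ns) - q) * h
    for j in range(p):
        s = np.cos(phis[j]) * X + np.sin(phis[j]) * Y
        f += np.interp(s, s_grid, v[j])
    return 2.0 * np.pi / p * f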
This condition says that essentially the data are complete for the three-dimensional Radon transform. More precisely, all planes through a reconstruction point x have to cut the scanning curve Γ . We base our considerations on the assumptions that this condition is fulfilled, the reconstruction from real data is then nevertheless from the above described circular scanning geometry, because other data is not available to us so far. A first theoretical presentation of the reconstruction kernel was given by Finch [5], and invariances were then used in the group of the author to speedup the computation time considerably, so that real data could be handled, see [18]. See also the often used algorithm from Feldkamp et al. [4] and the contribution of Defrise and Clack [3]. The approach of Katsevich [9] differs from our approach in that he avoids the Crofton symbol by restricting the backprojection to a range dependent on the reconstruction point x. An overview of the so far existing reconstruction algorithms is given by [34], it is based on a relation between the Fourier transform and the cone beam transform, derived by Tuy [33] generalizing the so-called projection theorem for the Radon transform, see Formula (4.3). The presentation follows Louis [19]. The mathematical model here is the so-called X-ray transform, where we denote with a ∈ Γ the source position, Γ ⊂ R3 is a curve, and θ ∈ S 2 is the direction of the ray: 1 ∞ f (a + tθ)dt. Df (a, θ) = 0 A.K. Louis The adjoint operator of D as mapping from L2 (R3 ) −→ L2 (Γ ×S 2 ) is given as 1 x−a D∗ g(x) = |x − a|−2 g a, da. |x − a| Γ Most attempts to find inversion formulae are based on a relation between X-ray transform and the 3D Radon transform, the so-called Formula of Grangeat, first published in Grangeat’s PhD thesis [6], see also [7]: 1 ∂ Rf (ω, a ω) = − Df (a, θ)δ (θ ω)dθ. ∂s S2 Proof. We copy the proof from [28]. It consists of the following two steps. i) We apply the adjoint operator of Rθ 1 1 Rf (θ, s)ψ(s)ds = f (x)ψ(x θ)dx. IR3 ii) Now we apply the adjoint operator D for fixed source position a 1 1 $ x−a % |x − a|−2 dx. Df (a, θ)h(θ)dθ = f (x)h 3 |x − a| 2 S IR Putting in the first formula ψ(s) = δ (s − a ω), using in the second h(θ) = δ (θ ω), and the fact that δ is homogeneous of degree −2 in IR3 , then this completes the proof. 2 We note the following rules for δ : i) 1 1 ψ(a ω)δ (θ ω)dω = −a θ S2 ψ (a ω)dω. S 2 ∩θ ⊥ ψ(ω)δ (θ ω)dω = − S 2 ∩θ ⊥ ∂ ψ(ω)dω. ∂θ Starting point is now the inversion formula for the 3D Radon transform 1 ∂2 1 f (x) = − 2 Rf (ω, x ω)dω (13) 8π S 2 ∂s2 rewritten as f (x) = 1 8π 2 1 S2 1 R ∂ Rf (ω, s)δ (s − x ω)dsdω. ∂s We assume in the following equation that the Tuy-Kirillov condition is fulfilled. Then we can change the variables as: s = a ω, n is the Crofton symbol; i.e., the number of source points a ∈ Γ such that a ω = x ω, m = 1/n and get Optimal reconstruction kernels in medical imaging 1 (Rf ) (ω, a ω)δ ((a − x) ω)|a ω|m(ω, a ω)dadω 8π 2 S 2 Γ 1 1 1 1 Df (a, θ)δ (θ ω)dθδ ((a−x) ω)|a ω|m(ω, a ω)dadω =− 2 8π S 2 Γ S 2 1 1 1 1 (x − a) =− 2 |x − a|−2 Df (a, θ)δ (θ ω)dθδ ω 8π Γ |x − a| S2 S2 f (x) = ×|a ω|m(ω, a ω)dadω where again δ is homogeneous of degree −2. We now introduce the following operators 1 g(θ)δ (θ ω)dθ (14) T1 g(ω) = S2 and we use T1 acting on the second variable as T1,a g(ω) = T1 g(a, ω) . We also use the multiplication operator MΓ,a h(ω) = |a ω|m(ω, a ω)h(ω), and state the following result. Theorem 3. Let the condition of Tuy-Kirillov be fulfilled. 
Then the inversion formula for the cone beam transform is given as f =− 1 D∗ T1 MΓ,a T1 Df 8π 2 with the adjoint operator D∗ of the cone beam transform and T1 and MΓ,a as defined above. Note that the operators D∗ and M depend on the scanning curve Γ . This form allows for computing reconstruction kernels. To this end we have to solve the equation D∗ ψγ = eγ in order to write the solution of Df = g as f (x) = ψγ (x, ·). In the case of exact inversion, formula eγ is the delta distribution. In the case of the approximate inversion formula it is an approximation of this distribution, see the method of approximate inverse. Using D−1 = − 8π1 2 D∗ T1 MΓ,a T1 we get 1 D∗ ψ = δ = − 2 D∗ T1 MΓ,a T1 Dδ 8π A.K. Louis Fig. 1. Reconstruction of a surprise egg with a turtle inside. and hence 1 T1 MΓ,a T1 Dδ. (17) 8π 2 We can explicitly give the form of the operators T1 and T2 = M T1 . The index at ∇ indicates the variable with respect to how the differentiation is performed. 1 T1 g(a, ω) = g(a, θ)δ (θ ω)dθ 2 S 1 = −ω ∇2 g(a, θ)dθ ψ=− S 2 ∩ω ⊥ δ (ω α)|a ω|m(ω, a ω)h(a, ω)dω 1 = −a α sign(a ω)m(ω, a ω)h(a, ω)dω 2 ⊥ 1 S ∩α −α |a α|∇1 m(ω, a ω)h(a, ω)dω S 2 ∩α⊥ 1 −a α |a ω|∇2 m(a, a ω)h(a, ω)dω S 2 ∩α⊥ 1 ∂ − |a ω|m(ω, a ω) h(a, ω)dω. ∂α 2 ⊥ S T1 MΓ,a h(a, α) = Note that the function m is piecewise constant and the derivatives are then Delta-distributions at the discontinuities with factor equal to the height of the jump; i.e., 1/2. Optimal reconstruction kernels in medical imaging Depending on the scanning curve Γ , invariances have to be used. For the circular scanning geometry this leads to similar results as mentioned in [18]. In Fig. 1 we present a reconstruction from data provided by the Fraunhofer Institut for Nondestructive Testing (IzfP) in Saarbr¨ ucken. The detector size was (204.8mm)2 with 5122 pixels and 400 source positions on a circle around the object. The number of data is 10.4 million. The mollifier used is 1 4 y 42 4 4 eγ (y) = (2π)−3/2 γ −3 exp − 4 4 . 2 γ Acknowledgement The author was supported in part by a Grant of the Hermann und Dr. Charlotte Deutsch Stiftung and by the Deutsche Forschungsgemeinschaft under grant LO 310/8-1. References 1. A. M. Cormack. Representation of a function by its line integral, with some radiological applications II. Journal of Applied Physics, 35:195–207, 1964. 2. M. E. Davison and F. A. Gr¨ unbaum. Tomographic reconstruction with arbitrary directions. IEEE Transactions on Nuclear Science, 26:77–120, 1981. 3. M. Defrise and R. Clack. A cone-beam reconstruction algorithm using shiftinvariant filtering and cone-beam backprojection. IEEE Transactions on Medical Imaging, 13:186–195, 1994. 4. L. A. Feldkamp, L. C. Davis, and J. W. Kress. Practical cone beam algorithm. Journal of the Optical Society of America A, 6:612–619, 1984. 5. D. Finch. Approximate reconstruction formulae for the cone beam transform, I. Preprint, 1987. 6. P. Grangeat. Analyse d’un syst`eme d’imagerie 3D par reconstruction ` a partir de Radiographics X en g´eom´etrie conique. Dissertation, Ecole Nationale Sup´erieure des T´el´ecommunications, 1987. 7. P. Grangeat. Mathematical framework of cone beam 3-D reconstruction via the first derivative of the radon transform. In G. T. Herman, A. K. Louis, and F. Natterer, editors, Mathematical Methods in Tomography, pages 66–97. Springer, Berlin, 1991. 8. I. Hazou and D. C. Solmon. Inversion of the exponential X-ray transform. I: analysis. Mathematical Methods in the Applied Sciences, 10:561–574, 1988. 9. A. Katsevich. 
Analysis of an exact inversion algorithm for spiral-cone beam CT. Physics in Medicine and Biology, 47:2583–2597, 2002. 10. H. H. Kuo. Gaussian measures in Banach spaces. Number 463 in Lecture Notes in Mathematics. Springer, Berlin, 1975. 11. P. Kuppusamy, M. Chzhan, A. Samouilov, P. Wang, and J. L. Zweier. Mapping the spin-density and lineshape distribution of free radicals using 4D spectralspatial EPR imaging. Journal of Magnetic Resonance, Series B, 197:116–125, 1995. A.K. Louis 12. A. K. Louis. Picture reconstruction from projections in restricted range. Mathematical Methods in the Applied Sciences, 2:209–220, 1980. 13. A. K. Louis. Approximate inversion of the 3D radon transform. Mathematical Methods in the Applied Sciences, 5:176–185, 1983. 14. A. K. Louis. Orthogonal function series expansion and the null space of the Radon transform. SIAM Journal on Mathematical Analysis, 15:621–633, 1984. 15. A. K. Louis. Inverse und schlecht gestellte Probleme. Teubner, Stuttgart, 1989. 16. A. K. Louis. The approximate inverse for linear and some nonlinear problems. Inverse Problems, 12:175–190, 1996. 17. A. K. Louis. A unified approach to regularization methods for linear ill-posed problems. Inverse Problems, 15:489–498, 1999. 18. A. K. Louis. Filter design in three-dimensional cone beam tomography: circular scanning geometry. Inverse Problems, 19:S31–S40, 2003. 19. A. K. Louis. Development of algorithms in computerized tomography. AMS Proceedings of Symposia in Applied Mathematics, 63:25–42, 2006. 20. A. K. Louis and P. Maass. A mollifier method for linear operator equations of the first kind. Inverse Problems, 6:427–440, 1990. 21. A. K. Louis and P. Maass. Contour reconstruction in 3D X-ray CT. IEEE Transactions on Medical Imaging, TMI12:764–769, 1993. 22. A. K. Louis, P. Maass, and A. Rieder. Wavelets : Theory and Applications. Wiley, Chichester, 1997. 23. A. K. Louis and A. Rieder. Incomplete data problems in X-ray computerized tomography, II: Truncated projections and region-of-interest tomography. Numerische Mathematik, 56:371–383, 1989. 24. A. K. Louis and T. Schuster. A novel filter design technique in 2D computerized tomography. Inverse Problems, 12:685–696, 1996. 25. P. Maass. The X-ray transform: singular value decomposition and resolution. Inverse Problems, 3:729–741, 1987. 26. R. B. Marr, C. N. Chen, and P. C. Lauterbur. On two approaches to 3D reconstruction in NMR zeugmatography. In G. T. Herman and F. Natterer, editors, Mathematical Aspects of Computerized Tomography, Berlin, 1981. Springer. 27. F. Natterer. The mathematics of computerized tomography. Teubner-Wiley, Stuttgart, 1986. 28. F. Natterer and F. W¨ ubbeling. Mathematical Methods in Image Reconstruction. SIAM, Philadelphia, 2001. 29. E. T. Quinto. Tomographic reconstruction from incomplete data – numerical inversion of the exterior Radon transform. Inverse Problems, 4:867–876, 1988. 30. A. Rieder. Principles of reconstruction filter design in 2d-computerized tomography. Contemporary Mathematics, 278:207–226, 2001. 31. T. Schuster. The 3D-Doppler transform: elementary properties and computation of reconstruction kernels. Inverse Problems, 16:701–723, 2000. 32. D. Slepian. Prolate spheroidal wave functions, Fourier analysis and uncertainty - V: the discrete case. Bell System Technical Journal, 57:1371–1430, 1978. 33. H. K. Tuy. An inversion formula for the cone-beam reconstruction. SIAM Journal on Applied Mathematics, 43:546–552, 1983. 34. S. Zhao, H. Yu, and G. Wang. 
A unified framework for exact cone-beam reconstruction formulas. Medical Physics, 32:1712–1721, 2005. Optimal control in high intensity focused ultrasound surgery Tomi Huttunen, Jari P. Kaipio, and Matti Malinen Department of Physics, University of Kuopio, P.O. Box 1627, FIN-70211, Finland [email protected] Summary. When an ultrasound wave is focused in biological tissue, a part of the energy of the wave is absorbed and turned into heat. This phenomena is used as a distributed heat source in ultrasound surgery, in which the aim is to destroy cancerous tissue by causing thermal damage. The main advantages of the ultrasound surgery are that it is noninvasive, there are no harmful side effects and spatial accuracy is good. The main disadvantage is that the treatment time is long for large cancer volumes when current treatment techniques are used. This is due to the undesired temperature rise in healthy tissue during the treatment. The interest for optimization of ultrasound surgery has been increased recently. With proper mathematical models and optimization algorithms the treatment time can be shortened and temperature rise in tissues can be better localized. In this study, two alternative control procedures for thermal dose optimization during ultrasound surgery are presented. In the first method, the scanning path between individual foci is optimized in order to decrease the treatment time. This method uses the prefocused ultrasound fields and predetermined focus locations. In the second method, combined feedforward and feedback controls are used to produce desired thermal dose in tissue. In the feedforward part, the phase and amplitude of the ultrasound transducers are changed as a function of time to produce the desired thermal dose distribution in tissue. The foci locations do not need to be predetermined. In addition, inequality constraint approximations for maximum input amplitude and maximum temperature can be used with the proposed method. The feedforward control is further expanded with a feedback controller which can be used during the treatment to compensate the modeling errors. All of the proposed control methods are tested with numerical simulations in 2D or 3D. Keywords: Ultrasound surgery, optimal control, minimum time control, feedforward control, feedback control. 1 Introduction In high intensity focused ultrasound surgery (HIFU), the cancerous tissue in the focal region is heated up to 50–90◦C. Due to the high temperature, thermal dose in tissue raises in a few seconds to the level that causes necrosis [43, 44]. T. Huttunen et al. Furthermore, the effect of the diffusion and perfusion can be minimized with high temperature and short sonication time [26]. In the current procedure of ultrasound surgery, the tissue is destroyed by scanning the cancerous volume point by point using predetermined individual foci [14]. The position of the focus is changed either by moving the transducer mechanically, or by changing the phase and amplitude of individual transducer elements when a phased array is used. The thermal information during the treatment is obtained via magnetic resonance imaging (MRI) [7]. This procedure is efficient for the treatment of small tumor volumes. However, as the tumor size increases and treatment is accomplished by temporal switching between foci, the temperature in healthy tissue accumulates and can cause undesired damage [9, 17]. This problem has increased the interest toward the detailed optimization of the treatment. 
With control and optimization methods, it is possible to decrease the treatment time as well as to control the temperature or thermal dose in both healthy and cancerous tissue. The different approaches have been proposed to control and optimize temperature or thermal dose in ultrasound surgery. For temperature control, a linear quadratic regulator (LQR) feedback controller was proposed in [21]. In that study, controller parameters were adjusted as a function of time according to absorption in focus. The controller was designed to keep temperature in focus point at a desired level. Another LQR controller was proposed in [46]. That controller was also designed to keep the temperature in single focus point at a predetermined level, and the tissue parameters for the controller were estimated with MRI temperature data before the actual treatment. The direct control of the thermal dose gives several advantages during ultrasound surgery. These advantages are reduced peak temperature, decreased applied power and decreased overall treatment time [13]. The proposed thermal dose optimization approaches include power adjusted focus scans [47], weighting approach [22] and temporal switching between single [17] or multiple focus patterns [13]. In all of these studies, only a few predetermined focus patterns were used, i.e., thermal dose was optimized by choosing the treatment strategy from the small set of possible paths or focus distributions. Finally, model predictive control (MPC) approach for thermal dose optimization was proposed in [1]. In the MPC approach, the difference between the desired thermal dose and current thermal dose was weighted with a quadratic penalty. Furthermore, the modeling errors in perfusion can be decreased with MRI temperature data during the control. However, the MPC approach was proposed for the predetermined focus points and scanning path, and it is computationally expensive, especially in 2D and 3D. In this study, alternative methods for optimization and control of the thermal dose are presented. The first method concerns scanning path optimization between individual foci. In this approach, the cancer volume is filled with a predetermined set of focal points, and focused ultrasound fields are computed for each focus. The optimization algorithm is then constructed Optimal control in high intensity focused ultrasound surgery as the minimum time formulation of the optimal control theory [20, 42]. The proposed algorithm optimizes the scanning path, i.e., finds the order in which foci are treated. The proposed optimization method uses the linear state equation and it is computationally easy to implement to current clinical machinery. The scanning path optimization method can be also used with MRI temperature feedback. The details of this method can be found from [34]. The simulations from the optimized scanning path show that treatment time can be efficiently decreased as compared to the current scanning technique. In the current technique, the treatment is usually started from the outermost focus and foci are scanned by the decreasing order of the distance between the outermost focus and transducer. The second method investigated here is a combination of model based feedforward control and feedback control to compensate modeling errors. In feedforward control the thermal dose distribution in tissue is directly optimized by changing the phase and amplitude of the ultrasound transducers as a function of time. 
The quadratic penalty is used to weight the difference between the current thermal dose and desired thermal dose. The inequality constraint approximations for the maximum input amplitude and maximum temperature are included in the design. This approach leads to a large dimension nonlinear control problem which is solved using gradient search [42]. The proposed feedforward control method has several advantages over other optimization procedures. First, the thermal dose can be optimized in both healthy and cancerous tissue. Second, the variation of diffusion and perfusion values in different tissues is taken into account. Third, the latent thermal dose which accumulates after the transducers have been turned off can be taken into account. The feedforward control method is discussed in detail in [32] for temperature control and in [33] for thermal dose optimization. In the second part of the overall control procedure, a linear quadratic Gaussian (LQG) controller with Kalman filter for state estimation is used to compensate the modeling errors which may appear in the feedforward control. The temperature data for the feedback can be adopted from MRI during the treatment. The feedback controller is derived by linearizing the original nonlinear control problem with respect to the feedforward trajectories for temperature and control input. The LQG controller and Kalman filter are then derived from these linearized equations. The details of the LQG feedback control can be found in [31]. In this study, numerical examples for each control procedure are presented. All examples concern the ultrasound surgery of breast cancer, and the modeling is done either in 2D or 3D. The potential of ultrasound surgery for the treatment of breast cancer is shown in clinical studies in [18] and [27]. Although all examples concern the ultrasound surgery of the breast, there are no limitations to using derived methods in the ultrasound surgery of other organs, see for example [33]. T. Huttunen et al. 2 Mathematical models 2.1 Wave field model The first task in the modeling of ultrasound surgery is to compute the ultrasound field. If acoustic parameters of tissue are assumed to be homogeneous, the time harmonic ultrasound field can be computed from the Rayleigh integral [39]. If the assumption of the homogeneity is not valid, the pressure field can be obtained as a solution of the Helmholtz equation. The Helmholtz equation in inhomogeneous absorbing media can be written as 1 κ2 ∇p + p = 0, (1) ∇· ρ ρ where ρ is density, c is the speed of sound and κ = 2πf /c + iα, where f is the frequency and α is the absorption coefficient [4]. The Helmholtz equation with suitable boundary conditions can be solved with a variety of methods. Traditional approaches include the low-order finite element method (FEM) and the finite difference method (FD) [28]. The main limitation of these methods is that they require several elements per wavelength to obtain a reliable solution. At high frequency ultrasound computations, this requirement leads to very large dimensional numerical problems. To avoid this problem ray approximations have been used to compute ultrasound field [5, 16, 29]. However, the complexity of ray approximation increases dramatically in complex geometries in the presence of multiple material interfaces. An alternative approach for ultrasound wave modeling is to use the improved full wave methods, such as the pseudo-spectral [48] and k-space methods [35]. 
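For reference, the lossy Helmholtz model of Section 2.1 reads, in display form,
$$ \nabla \cdot \left( \frac{1}{\rho}\, \nabla p \right) + \frac{\kappa^2}{\rho}\, p = 0, \qquad \kappa = \frac{2\pi f}{c} + i\alpha, $$
with $p$ the acoustic pressure; for constant density $\rho$ it reduces to the familiar lossy Helmholtz equation $\Delta p + \kappa^2 p = 0$.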
In addition, there are methods in which a priori information of the approximation subspace can be used. In the case of the Helmholtz equation, a priori information is usually plane wave basis which is a solution of the homogeneous Helmholtz equation. The methods which use plane wave basis include the partition of unity method (PUM) [2], least squares method [37], wave based method (Trefftz method) [45] and ultra weak variational formulation (UWVF) [6, 24]. In this study, the Helmholtz equation (1) is solved using the UWVF. The computational issues of UWVF are discussed in detail in [24], and UWVF approximation is used in the related ultrasound surgery control problems in [32] and [33]. The main idea in UWVF is to use plane wave basis functions from different directions in the elements of standard FEM mesh. The variational form is formulated in the element boundaries, thus reducing integration task in assembling the system matrices. Finally, the resulting UWVF matrices have a sparse block structure. These properties make the UWVF a potential solver for high frequency wave problems. Optimal control in high intensity focused ultrasound surgery 2.2 Thermal evolution model The temperature in biological tissues can be modeled with the Pennes bioheat equation [38] ρCT ∂T = ∇ · k∇T − wB CB (T − TA ) + Q, ∂t where T = T (r, t) is the temperature in which r = r(x, y, z) is the spatial variable. Furthermore, in Equation (2) CT is the heat capacity of tissue, k is the diffusion coefficient, wB is the arterial perfusion, CB is the heat capacity of blood, TA is the arterial blood temperature and Q is the heat source term. The heat source for time-harmonic acoustic pressure can be defined as [39] Q=α |p|2 . ρc If the wave fields for the heat source are computed from the Helmholtz equation, the heat source term can be written as 42 4 m 4 4 α(r) α(r) 4 5k (r)44 , |p(r, t)|2 = Q= u ˜k (t)C (4) 4 4 ρ(r)c(r) ρ (r)c(r) 4 k=1 where u ˜k (t) ∈ C determines the amplitude and phase of the transducer num5k (r) ∈ CN is the time-harmonic solution of the Helmholtz problem, ber k and C where N is the number of spatial discretization points. The bioheat equation can be solved using the standard FEM [12, 36] or FD-time domain methods [8, 11]. In this study, the semi-discrete FEM with the implicit Euler time integration is used to solve the bioheat equation. The detailed FEM formulation of the bioheat equation can be found in [32] and [33]. The implicit Euler form of the bioheat equation can be written as Tt+1 = ATt + P + MD (But )2 , where Tt ∈ RN is the FEM approximation of temperature, matrix A ∈ RN ×N arises from the discretization of FEM and vector P is related to perfusion term. The heat source term MD (But )2 ∈ RN is constructed from the precomputed ultrasound fields as follows. The real and imaginary parts of the variable u˜k (t) ˜k and um+k = Im u ˜k , k = 1, ..., m, in Equation (4) are separated as uk = Re u resulting in the control variable vector u(t) ∈ R2m . Furthermore, solutions 3k = (C 5k (r1 ), ..., C 5k (rN ))T and of the Helmholtz problem are arranged as C N ×m 3 3 3 ˆ C = (C1 , ..., Cm ) ∈ C . For control purposes, the matrix C is written in the form where real and imaginary parts of the wave fields are separated as ⎛ ⎞ 3 −Im C 3 Re C ⎠ ∈ R2N ×2m . B=⎝ (6) 3 Re C 3 Im C T. Huttunen et al. In Equation (5), matrix MD ∈ RN ×2m is the modified mass matrix which is constructed as MD = [I, I]M , where I is the unit matrix. 
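With the matrices assembled, the update (5) is a single matrix recursion per time step. The following sketch is only an illustration of that recursion (it is not taken from [32, 33]); in particular, M_D is taken here as an N x 2N block mass matrix so that the products typecheck, and all matrices are assumed precomputed from the FEM discretization and the UWVF pressure fields.

import numpy as np

def bioheat_step(T, u, A, P, M_D, B):
    """One step of the recursion (5): T_{t+1} = A T_t + P + M_D (B u_t)^2.

    T   : (N,)    nodal temperatures at time t
    u   : (2m,)   stacked real and imaginary parts of the m transducer excitations
    A   : (N, N)  propagation matrix from the implicit Euler FEM discretization
    P   : (N,)    perfusion contribution
    M_D : (N, 2N) modified mass matrix mapping the stacked squared field to the heat load
    B   : (2N, 2m) stacked real/imaginary parts of the precomputed pressure fields
    """
    source = (B @ u) ** 2          # square of the acoustic source term, taken element-wise
    return A @ T + P + M_D @ source

Iterating this map with a prescribed excitation schedule u_t produces the temperature history on which the thermal dose functional of Section 2.3 is evaluated.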
In addition, the square of the heat source term in Equation (5) is computed element wisely. With this procedure, it is possible to control the real and imaginary parts (phase and amplitude) of each transducer element separately. For detailed derivation of the heat source term, see example [33]. In this study, the boundary condition for the FE bioheat equation (5) was chosen as the Dirichlet condition in all simulations. In the Dirichlet condition, the temperature on the boundaries of the computational domain was set to 37◦ C. Furthermore, the initial condition for the implicit Euler iteration was set as T0 = 37◦ C in all simulated cases. 2.3 Thermal dose model The combined effect of the temperature and the treatment time can be evaluated using the thermal dose. For biological tissues thermal dose is defined as [40] 1 tf $ % 0.25 for T (r, t) < 43◦ C 43−T (r,t) dt , where R = R D(T (r, ·)) = (7) 0.50 for T (r, t) ≥ 43◦ C 0 and tf is the final time where thermal dose is integrated. The unit of the thermal dose is equivalent minutes at 43◦ C. In most of the soft tissues the thermal dose that causes thermal damage is between 50 and 240 equivalent minutes at 43◦ C [10, 11]. 3 Control and optimization algorithms for ultrasound surgery In the following, different control and optimization algorithms for thermal dose and temperature control in ultrasound surgery are presented. The numerical simulations are given after the theoretical part of each algorithm. 3.1 Scanning path optimization method In the scanning path optimization algorithm, the heat source term in the implicit Euler FEM form of the bioheat equation (5) is linearized. In this 5 ∈ RN ×Nf is constructed from focused ultrasound fields, case, a new matrix B where the number of foci is Nf . The mass matrix M is also included to matrix 5 With these changes, the bioheat equation is written as the B. 5t ut , Tt+1 = ATt + P + B Optimal control in high intensity focused ultrasound surgery 5t ∈ RN is the active field at time t and ut is the input power. The where B 5 in which 5t ) at time t is taken as a column from the matrix B active field (B focused fields for predetermined foci are set as columns. The cost function for the scanning path optimization can be set as a terminal condition J(D) = (D − Dd )T W (D − Dd ), where the difference between thermal dose and desired thermal dose Dd is penalized using positive definite matrix W . The Hamiltonian form for the state equation (8) and the cost function (9) can be written as [38] 5 t ), H(D, T, u) = D − Dd 2W + λTt (ATt − P + Bu where λt ∈ RN is the Lagrange multiplier for the state equation. The optimization problem can be solved from the costate equation [42] λt−1 = ∂H = AT λt + log(R)R43−Tt W (D − Dd ), ∂Tt where is the element wise (Hadamard) product of two vectors. The costate equation is computed backwards in time. The focus which minimizes the cost function (9) at time t can be found as 5 min{λTt so the focus which is chosen makes Equation (12) most negative at time t [20, 42]. The feedback law can be chosen as maximum effort feedback 5 < 0, Td − Ti,t , if λTt B ut+1 = (13) T 5 0, if λt B ≥ 0, where Ti,t is temperature at ith focus point at time t, and Td is the desired temperature in the cancer region. In this study, the desired temperature in the cancer region was set to Td =70◦C. The scanning path optimization algorithm consists of the following steps: 1) Solve the state equation (8) from time t upwards in a predetermined time window. 
2) Solve the Lagrange multiplier from Equation (11) backwards in the same time window. 3) Find the next focus point from Equation (12). 4) Compute the input value from Equation (13). If the target is not fully treated, return to step 1). 3.2 Scanning path optimization simulations The scanning path optimization method was evaluated in two schematic 3D geometries, which are shown in Figure 1. In both geometries, the ultrasound surgery of the breast was simulated. In the first geometry, there are skin, T. Huttunen et al. Fig. 1. Computation domains for scanning path optimization. Left: Domain with the slice target. Right: Domain with the sphere target. The subdomains from left to right are skin, healthy breast and cancer. Table 1. Thermal parameters for subdomains. Subdomain α(Nep/m) k(W/mK) CT (J/kgK) wB (kg/m3 s) skin 1 0.5 10 healthy breast and slice shaped target, with the radius of 1 cm. In the second geometry, the subdomains are the same, but the target is a sphere with the radius of 1 cm. Both targets were located so that the center of the target was at the point (12,0,0) cm. The computation domains were partitioned into the following meshes. With the slice target, the mesh consists of 13,283 vertices and the 70,377 elements and with the spherical target the mesh consists of 24,650 vertices and 13,4874 elements. The transducer system in simulations was a 530-element phased array (Figure 2). The transducer was located so that the center of the target was in the geometrical focus. The ultrasound fields with the frequency of 1 MHz were computed for each element using the Rayleigh integral. The acoustical properties of tissue were set as c=1500 m/s, ρ=1000 kg/m3 and α=5 Nep/m [15, 19]. The thermal properties of tissue are given in Table 1, and these properties were also adopted from the literature [25, 30, 41]. In the control problem, the objective was to obtain the thermal dose of 300 equivalent minutes at 43◦ C in the whole target domain and keep the thermal dose in healthy regions as low as possible. A transition zone with the thickness of 0.5 cm was used around the target volume. In this region, the thermal dose was not limited. The maximum temperature in healthy tissue was limited to 44◦ C. If this temperature was reached, the tissue was allowed to cool to 42.5◦C or below. Optimal control in high intensity focused ultrasound surgery Fig. 2. 530-element phased array used in simulations. The weighting matrix W for the thermal dose difference was set to a diagonal matrix. The weights on the diagonal were set adaptively in the following way. The total number of foci was denoted with Nf and the number of foci in which the desired thermal dose was reached was denoted with Nd . The vertices in healthy subdomains were weighted with the function 10, 000×(1−Nd/Nf )2 , and the vertices in the cancerous domain with (1− Nd/Nf )−2 , i.e., the weighting from the healthy region was decreased and correspondingly increased in the target during the treatment. In addition, when the thermal dose of 300 equivalent minutes was reached, the weighting from this focus was removed. The implicit Euler form of the bioheat equation (8) was adopted by setting the time step as h=0.25 s for the slice target and h=0.5 s for the sphere target. The scanning path was chosen using the algorithm described in the previous section. The time window for state and costate equations were chosen to be 10 s upwards from the current time. 
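The dose targets above are evaluated with the functional (7). As a small illustrative helper (not the authors' code), the cumulative dose in equivalent minutes at 43 degrees C can be computed from a sampled temperature history as follows; for the step lengths h given in seconds, dt_minutes = h/60.

import numpy as np

def thermal_dose(T_history, dt_minutes):
    """Cumulative thermal dose (7) in equivalent minutes at 43 degrees C per node.

    T_history  : (n_steps, N) nodal temperatures in degrees Celsius
    dt_minutes : time step in minutes
    """
    R = np.where(T_history < 43.0, 0.25, 0.50)
    return np.sum(R ** (43.0 - T_history), axis=0) * dt_minutes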
Simulated results were compared to the treatment where scanning is started from the outermost focus (in x-coordinate) and the target volume is then scanned in decreasing order of the x-coordinate. For example, in the 3D case, the outermost untreated location in the x-coordinate was chosen and then the corresponding slice in y- and z-directions was sonicated. The feedback law and temperature constraints were the same for the optimized scanning path and this reference method. Furthermore, if the dose at the next focus location was above the desired level, this focus was skipped (i.e., power was not wasted). In the following, the results from this kind of sonication are referred to as “standard sonication.” For both of the methods, the treatment was terminated when the thermal dose of 300 equivalent minutes was reached in the whole target. The foci in target volumes were chosen so that the minimum distance in each direction from focus to focus was 1 mm. For the slice target, the foci were located in z=0 plane, while with the spherical target, the whole volume was covered with foci. T. Huttunen et al. Table 2. Results from the scanning path optimization. The number of the foci in target is Nf and t is the treatment time. Subscript O refers to optimized scanning path and S to standard sonication. tO (s) tS (s) Fig. 3. Thermal dose contours for the slice scan in xy-plane. Left: Thermal dose contours with optimized scanning path. Right: Thermal dose contours with standard sonication. The contour lines are for 240 and 120 equivalent minutes at 43◦ C. The treatment times for the optimized scanning path and standard sonication are given in Table 2. The sonication time is 30% shorter for the slice target and 44% shorter for the sphere shaped target as compared to standard sonication. The treatment time is reduced more for the spherical target, since the degrees of freedom for the optimization algorithm are increased in 3D. The thermal dose contours in xy-plane for the slice shaped target are shown in Figure 3. With both of the methods, the desired thermal dose is achieved well into the target region. In addition, the thermal dose decreases efficiently in the transition zone and there are no undesired thermal doses in healthy regions. The maximum temperature trajectories for the target and healthy domains for the slice target are shown in Figure 4. This figure shows that the whole target volume can be treated using a single sonication burst with both of the methods. With scanning path optimization, the maximum temperature in healthy domains is smaller than with the standard sonication. The thermal dose contours for the spherical target in different planes are shown in Figure 5. Again, the therapeutically relevant thermal dose is achieved in the whole target volume, and there are no big differences in dose contours between optimized and standard scanning methods. Optimal control in high intensity focused ultrasound surgery T ( C) T ( C) t (s) t (s) Fig. 4. Maximum temperatures for the slice scan. Left: Maximum temperature in cancer. Right: Maximum temperature in healthy tissue. Solid line is for optimized scan and dotted for standard Fig. 5. Thermal dose contours for the spherical scan in different planes. Left column: Thermal dose contours from optimized scanning path. Right column: Thermal dose contours from the standard sonication. The contour lines are for 240 and 120 equivalent minutes at 43◦ C. 
The maximum temperature trajectories for the target and healthy tissue from the spherical scan are shown in Figure 6. This figure indicates that the treatment can be accomplished much faster by using the optimized scanning path. The optimized scanning path needs three sonication bursts to treat the T. Huttunen et al. 46 44 T ( C) T ( C) t (s) t (s) Fig. 6. Maximum temperatures for the sphere scan. Left: Maximum temperature in cancer. Right: Maximum temperature in healthy tissue. Solid line is for optimized scan and dotted for standard whole cancer, while seven bursts are needed with the standard sonication. This is due to the fact that temperature in healthy tissue rises more rapidly with standard sonication, and tissue must be allowed to cool to prevent undesired damage. 3.3 Feedforward control method The first task in the feedforward control formulation is to define the cost function. In the thermal dose optimization, the cost function can be written as 1 1 1 tf T T J(D, u; ˙ t) = (D − Dd ) W (D − Dd ) + u˙ t S u˙ t dt, (14) 2 2 0 where the difference between the accumulated thermal dose D and the desired thermal dose Dd is weighted with the positive definite matrix W and the time derivative of the input is penalized with the positive definite matrix S. The maximum input amplitude of ultrasound transducers is limited. This limitation can be handled with an inequality constraint approximation, in which k th component c1,k (ut ) is c1,k (ut ) = c1,m+k (ut ) (15) ⎧ 2 % % $ $ 1/2 1/2 ⎨ K u2 + u2 − umax,i , if u2k,t + u2m+k,t ≥ umax,i , k,t m+k,t (t) = $ 2 % 1/2 ⎩ 0, if uk,t + u2m+k,t < umax,i , where K is the weighting scalar, uk,t and um+k,t are the real and imaginary parts of the control input for the k th transducer, respectively, umax,i is the maximum amplitude during the ith interval of the sonication and k = 1, . . . , m. With this manner it is possible to split treatment into several parts when transducers are alternatively on or off. For example, when large cancer volumes are treated, the healthy tissue can be allowed to cool between the sonication bursts. Furthermore, in feedforward control, it is useful to set the maximum amplitude limitation lower than what transducers can actually produce. With this manner it is possible to leave some reserve power for feedback purposes to compensate for the modeling errors. Optimal control in high intensity focused ultrasound surgery In practice, there are also limitations for the maximum temperature in both healthy and cancerous tissue. The pain threshold is reported to be approximately 45◦ C. In addition, the temperature in cancerous tissue must be below the water boiling temperature (100◦ C). These limitations can be made in the form of an inequality constraint approximation c2 , whose ith component is ⎧ 2 ⎪ ⎪ ⎨ K(Ti,t − Tmax,C ) , if Ti,t ∈ ΩC and Ti,t ≥ Tmax,ΩC , c2,i (Tt ) = K(Ti,t − Tmax,H )2 , if Ti,t ∈ ΩH and Ti,t ≥ Tmax,ΩH , (16) ⎪ ⎪ ⎩ 0, otherwise. where Ti is the temperature in the FE vertex i, the subset of the vertices in cancerous region is denoted by ΩC and the subset of the vertices in the healthy region is denoted by ΩH . The maximum allowed temperature is denoted in cancerous and healthy tissue by Tmax,ΩC and Tmax,ΩH , respectively. The feedforward control problem solution can be obtained via the Hamiltonian form [42]. 
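As a small illustration of the amplitude-constraint approximation (15), the penalty for one time instant can be computed as in the sketch below. This follows the formula as quoted above, with K, u and u_max as described there; the paper assigns the same value to the real and imaginary components of each transducer, so the function returns one value per transducer.

import numpy as np

def amplitude_penalty(u, u_max, K):
    """Quadratic penalty approximating the maximum-amplitude constraint (15).

    u     : (2m,) real parts u[:m] and imaginary parts u[m:] of the excitations
    u_max : maximum allowed amplitude during the current sonication interval
    K     : penalty weight
    """
    m = u.size // 2
    amp = np.hypot(u[:m], u[m:])            # amplitude of each transducer excitation
    excess = np.maximum(amp - u_max, 0.0)   # zero where the constraint is satisfied
    return K * excess ** 2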
Combining equations (14), (5), (15) and (16) gives the Hamiltonian 1 tf 1 u˙ Tt S u˙ t dt H(T, u, u; ˙ t) = (D − Dd )T W (D − Dd ) + 2 0 $ % $ % $ % T 2 +λt ATt − P − MD (But ) + µTt c1 ut + νtT c2 Tt , (17) where µt is the Lagrange multiplier for the control input inequality constraint approximation and νt is the Lagrange multiplier for the temperature inequality constraint approximation. The feedforward control problem can be now solved by using a gradient search algorithm. This algorithm consists of following steps: 1) Compute the state equation (5). 2) Compute the Lagrange multiplier for the state as −λt = ∂H/∂Tt backwards in time. 3) Compute the Lagrange multiplier for the control input inequality constraint as µt = (∂c1 /∂ut )−1 (∂H/∂ut ). 4) Compute the Lagrange multiplier for the temperature inequality constraint using the penalty function method as νt = ∂c2 /∂Tt . 5) Compute the stationary condition. For the th iteration round, the stationary condition (input update) (+1) () can be computed as ut = ut + () ∂H/∂ut () , where () is the iteration step length. 6) Compute the value of the cost function from Equation (14). If the change in the cost function is below a predetermined value, stop iteration, otherwise return to step 1. 3.4 Feedforward control simulations 2D example The computational domain in this example was chosen as a part of a cancerous breast, see Figure 7. The domain was divided into four subdomains which are T. Huttunen et al. 0.08 1 4 0.06 y (m) 8 0.02 19 −0.02 x (m) Fig. 7. Computational domain. Cancerous region is marked with the dashed line (Ω4 ). Twenty ultrasound transducers are numbered on the left hand side. Table 3. The acoustic and thermal parameters for the feedforward control simulation. Domain α (Nep/m) c (m/s) ρ (kg/m3 s) k (W/mK) CT (J/kgK) wB (kg/m3 s) Ω1 water (Ω1 ), skin (Ω2 ) a part of a healthy breast (Ω3 ) and the breast tumor (Ω4 ). The domain was partitioned into a mesh having 2108 vertices and 4067 elements. The transducers system was chosen as a 20-element phased array (see Figure 7). The transducer was located so that the center of the cancer was in the geometrical focus. The frequency of ultrasound fields was set to 500 kHz. The wave fields were computed using the UWVF for each transducer element. The acoustic and thermal parameters for the subdomains were adopted from the literature [3, 30, 41] and they are given in Table 3. It is worth noting that the frequency in this example was chosen lower than in scanning path optimization simulations, and the absorption coefficient in skin is therefore lower. The feedforward control objective was to obtain the thermal dose of 300 equivalent minutes at 43◦ C in the cancer region and below 120 equivalent minutes in healthy regions. The transient zone near the cancer, where thermal dose is allowed to rise, was not included to this simulation. The reason for this Optimal control in high intensity focused ultrasound surgery is the testing of the spatial accuracy of controller. The weighting for thermal dose distribution was chosen as follows. The weighting matrix W was set to diagonal matrix and the nodes in the skin, healthy breast and cancer were weighted with 500, 2500 and 2000, respectively. For feedforward control problem, the time interval t=[0,180] s was discretized with the step length h=0.5 s and the treatment was split into two parts. 
During the first part of the sonication (i.e., when t ∈[0,50] s) the maximum amplitude was limited with umax,1 =0.8 MPa, and during the second part (i.e., when t ∈ [50,180] s) the maximum amplitude was limited with umax,2 =0.02 MPa. In the inequality constraint approximation for maximum amplitude, the weighting was set to K=10,000. The smoothing of the transducer excitations was achieved by setting the weighting matrix for time derivative of the control input as S=diag(5000). In this simulation, the maximum temperature inequality constraint approximation was not used, i.e., c2,t = 0 for all t. The thermal dose was optimized using the algorithm described in previous section. The iteration was stopped when the relative change in cost function was below 10−4 . The thermal dose contours for the region of interest are shown in Figure 8. These contours indicate that the major part of the thermal dose is in the cancer area and only a small fraction of the dose is in the healthy breast. The thermal dose of 240 equivalent minutes at 43◦ C is achieved in 74% of the target area and 120 equivalent minutes at 92% of the cancer area. In the breast, only 2.4% of the area has thermal dose of 120 equivalent minutes. The maximum thermal dose peak in the breast is quite high. However, this dose peak is found only in a small part of the breast. In this simulation the modeling of cooling period between [50 180] s is crucial, since 75% of the thermal dose is accumulated during this time. The phase and amplitude trajectories for the transducers number 4 and 16 are shown in Figure 9. There are no oscillations in the phase and amplitude trajectories, so design criterion concerning this limitation is fulfilled. 0.04 0.03 0.02 120 0.01 0 −0.01 −0.02 −0.03 −0.04 −0.05 −0.04 −0.03 −0.02 −0.01 Fig. 8. The feedforward controlled dose at the final time (tf =180 s). Contour lines are for 120 and 240 equivalent minutes at 43◦ C. T. Huttunen et al. −0.5 phase amplitude −1 20 100 t (s) phase amplitude −2 20 100 t (s) Fig. 9. Phase and amplitude trajectories from the feedforward control for transducer number 4 (left) and 16 (right). Fig. 10. Left: Computation domain. Subdomains from the left are skin, healthy breast and the sphere shaped cancer. Right: 200-element phased array. Furthermore, the maximum input amplitude during the first part of sonication was 0.801 MPa and during the second part 0.0203 MPa, so the maximum amplitude inequality constraint approximation limits the amplitude with a tolerable accuracy. 3D example The computation domain for the 3D feedforward control problem is shown in Figure 10. The domain was divided into three subdomains which were skin, healthy breast and a sphere shaped cancer with the radius of 1 cm at the point (7,0,0) cm. The computational domain was partitioned into a mesh consisting of 23,716 vertices and 120,223 elements. The transducer system was a hemispherical phased array with 200 elements (see Figure 7). The transducer was located so that the center of the target was in the geometrical focus. The ultrasound fields with the frequency of 1 MHz were computed using the Rayleigh integral for each transducer element. The acoustical and thermal parameters were chosen as Section 3.2 (see Table 1). The control problem was to obtain the thermal dose of 300 equivalent minutes or greater at 43◦ C in the cancer region. The temperature in the healthy tissue was limited to 45◦ C and to 80◦ C in cancer, with the Optimal control in high intensity focused ultrasound surgery constraint approximation. 
In this simulation, a 0.5 cm transient zone was set between the cancer and healthy tissue, where temperature or thermal dose was not limited, since temperature in the simulation in this region was less than 80◦ C. The weighting matrix W was set to diagonal matrix, and the vertices in cancer region were weighted with 10,000 and other nodes had zero weights. For the feedforward control problem, the time interval t=[0,50] s was discretized with step length h=0.5 s and the treatment was split to two parts. During the first part of the sonication (i.e., when t ∈ [0,30] s), the maximum amplitude was limited with umax,1 =100 kPa and during the second part (i.e., when t ∈[30,50] s), the maximum amplitude was limited with umax,2 =2 kPa. The diagonal weighting matrix S for the time derivative of the input was set to S=diag(10,000). The weighting scalar for both state and input inequality constraint approximations was set to K = 2 × 106 . The stopping criterion for feedforward control iteration was that the thermal dose of 240 equivalent minutes was achieved in the whole cancer. The thermal dose contours from the feedforward control are shown in Figure 11. As it can be seen, the thermal dose of 240 equivalent minutes is achieved in the whole cancer region. Furthermore, the thermal dose is sharply localized in the cancer region. There are no undesired doses in the healthy regions. In this simulation, the thermal dose accumulation during the cooling period (t ∈[30, 50]) was 11% of the whole thermal dose. The temperature trajectories for cancer and healthy tissue are shown in Figure 12. From this figure it can be seen that the temperature in cancer regions is limited to 80◦ C. Also, the maximum temperature in healthy regions is near 45◦ C. The maximum temperature in the cancer was 80.3◦ C and in the healthy region 45.5◦ C. Furthermore, the maximum input amplitude inequality constraint approximation was found to be effective. The maximum amplitude during the first part of the sonication was 101 kPa and 2.02 kPa during the second part. Fig. 11. Feedforward controlled thermal dose contours for 3D simulation. Left: The dose in xy-plane. Middle: The dose in xz-plane. Right: The dose in yz-plane. Contour lines are for 120 and 240 equivalent minutes. T. Huttunen et al. 80 T (°C) t (s) Fig. 12. Maximum temperature trajectories in cancer (solid line) and in healthy subdomains (dotted line) for the 3D feedforward control simulation. 3.5 Feedback control method The modeling of ultrasound therapy is partly approximate. The main source of error in ultrasound therapy treatment planning is in the acoustic parameters in the Helmholtz equation and in the thermal parameters in the bioheat equation. These errors affect the obtained temperature or thermal dose distribution if the treatment is accomplished by using only the model-based feedforward control. Since the MRI temperature measurements are available during the treatment, it is natural to use this information as a feedback to compensate for the modeling errors. The feedback controller can be derived by linearizing the nonlinear state equation (5) with respect to feedforward control trajectories for temperature and control input. In this step, the time discretization is also changed. The feedforward control is computed with the time discretization of the order of a second. During ultrasound surgery, the temperature feedback from MRI is obtained in a few second intervals. 
Due to this mismatch, it is natural to consider the case when feedback is computed with a larger time discretization than the feedforward part. This also reduces the computation task of the feedback controller and filter. Let the step length of the time discretization in feedforward control be h. In feedback control, d steps of the feedforward control are taken at once, giving new step length dh. With these changes the multi-step implicit Euler form of the linearized state equation with the state noise w_k is

$$\Delta T_{k+1} = \tilde{F}\,\Delta T_k + \tilde{B}_k\,\Delta u_k + w_k, \qquad (19)$$

where $\tilde{F} = F^{d}$, $\tilde{B}_k = h \sum_t F^{\,t-kd-1}\, G(u_{0,t})$, and where G(u_{0,t}) is the Jacobian matrix with respect to the feedforward input trajectory u_{0,t}. The discrete time cost function for the feedback controller can be formulated as

$$\Delta J = \frac{1}{2} \sum_k \left[ (\Delta T_k - T_{0,k})^{\mathsf{T}} Q\, (\Delta T_k - T_{0,k}) + \Delta u_k^{\mathsf{T}} R\, \Delta u_k \right],$$

where the error between the feedforward and actual temperature is weighted with matrix Q, and the matrix R weights the correction to the control input. The solution to the control problem can be obtained by computing the associated Riccati difference equation [42]. For the state estimation, the multi-step implicit Euler state equation and the measurement equation are written as

$$\Delta T_{k+1} = \tilde{F}\,\Delta T_k + \tilde{B}_k\,\Delta u_k + w_k, \qquad (22)$$
$$y_k = C\,\Delta T_k + v_k, \qquad (23)$$

where y_k is the MRI measured temperature, C ∈ R^{P×N} is the linear interpolation matrix and v_k is the measurement noise. When state and measurement noises are independent Gaussian processes with a zero mean, the optimal state estimation can be computed using the Kalman filter. Furthermore, the covariance matrices in this study are assumed to be time independent, so the Kalman filter gain can be computed from the associated Riccati difference equation [42]. The overall feedback control and filtering schemes are applied to the original system via separation principle [42]. In this study, the zero-order hold feedback control is tested using synthetic data. In feedforward control, the acoustic and thermal parameters are adopted from the literature, i.e., they are only approximate values. As the real system is simulated, these parameters are varied. In this case, the original nonlinear state equation (5) with varied parameters and feedback correction can be written as

$$T_{t+1} = A_r T_t + P_r + M_{D,r}\,\big(B_r (u_{0,t} + \Delta u_k)\big)^2, \qquad (24)$$

where the feedback correction Δu_k is held constant over the time interval t ∈ [k, k+1] and subscript r denotes the associated FE matrices which are constructed by using the real parameters. The state estimate is computed for the same discretization (with step length h) as the state equation with original feedforward control matrices, since errors are considered as unknown disturbances to the system. During the time interval t ∈ [k, k+1], the state estimate is

$$\hat{T}_{t+1} = A\hat{T}_t + P + M_D\,\big(B(u_{0,t} + \Delta u_k)\big)^2.$$

The corrections for the state estimate and the input are updated after every step k from the measurements and the state estimated feedback as

$$y_k = C T_k + v_k, \qquad (26)$$
$$\hat{T}_{k+1} = A\hat{T}_k + P + M_D\,\big(B(u_{0,k} + \Delta u_k)\big)^2 + L\,(y_k - C\hat{T}_k), \qquad (27)$$
$$\Delta u_{k+1} = -K_{k+1}\,(\hat{T}_{k+1} - T_{0,k+1}), \qquad (28)$$

where L is the Kalman gain and K_{k+1} is the LQG feedback gain. The feedback correction is constant during the time interval t ∈ [k, k+1] and piecewise constant during the whole treatment.

3.6 Feedback control simulations

The LQG feedback control algorithm was tested for the 2D example of the feedforward control. The corresponding feedforward control problem is defined in Section 3.4.
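Before the simulation details, a brief aside to make the update cycle of equations (26)-(28) concrete: one zero-order-hold feedback step consists of a Kalman-style correction of the state estimate followed by the LQG input correction. In the sketch below the nonlinear propagation is replaced by a generic linear model for brevity, and all matrices are small random stand-ins rather than the FE and Riccati quantities of the paper, so the numbers mean nothing; only the structure of the step is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 6, 3, 4                        # state, input and measurement sizes (toy)

# Stand-ins for the linearized model and the precomputed gains; in the paper
# these come from the FE matrices and the two Riccati difference equations.
F = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
L = 0.1 * rng.standard_normal((n, p))    # Kalman gain (stand-in)
K = 0.1 * rng.standard_normal((m, n))    # LQG feedback gain (stand-in)
T_ref = rng.standard_normal(n)           # feedforward reference temperature T_0

def feedback_step(T_hat, du, y_meas):
    """One zero-order-hold step in the spirit of eqs. (26)-(28), with the
    nonlinear propagation replaced by a linear model for brevity."""
    # Propagate the estimate and correct it with the measurement residual (27).
    T_hat_next = F @ T_hat + B @ du + L @ (y_meas - C @ T_hat)
    # The new input correction pulls the estimate toward the feedforward plan (28).
    du_next = -K @ (T_hat_next - T_ref)
    return T_hat_next, du_next

T_hat, du = np.zeros(n), np.zeros(m)
y = C @ rng.standard_normal(n)           # one simulated MRI measurement, cf. (26)
T_hat, du = feedback_step(T_hat, du, y)
print("input correction:", du)
```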
The time discretization for the feedback controller was set according to the data acquisition time of MRI during the ultrasound surgery of the breast [27]. The time lag between the MRI measurements was set to 4 s to simulate MRI sequences, and the temperature measurement in each vertex was taken as a mean value during each 4 s interval. The multi-step implicit Euler equation (18) was adopted by setting d = 8, since h in the feedforward control was 0.5 s. The LQG feedback controller was derived by setting the weighting matrices as Q = W/1000 for the state weighting and R = diag(1000) for the input correction weighting. The Kalman filter was derived by setting the state covariance matrix to diag(4) and the measurement disturbance covariance matrix to the identity matrix. The LQG procedure was tested with simulations, where the maximum error in the absorption coefficient was ±50% and ±30% in the other acoustic and thermal parameters. New FEM matrices were constructed using these values (matrices with subscript r in Equation (24)). In this study, results from the two worst case simulations are given. In case A, absorption in the subdomains is dramatically higher than in the feedforward control. In case B, absorption in tissue is lower than in the feedforward control. In addition, the other thermal and acoustic parameters are varied in tissue. In both cases, new ultrasound fields were computed with the UWVF. The acoustic and thermal parameters for the feedback case A are given in Table 4. As compared to Table 3, the acoustic and thermal parameters are changed so that the new parameters result in inhomogeneous errors in the temperature trajectories in different subdomains. The thermal dose contours with and without feedback are shown in Figure 13. This figure indicates that the feedback controller decreases the undesired thermal dose in healthy regions, while without feedback the healthy breast suffers from undesired damage. The area where a thermal dose of 240 equivalent minutes is achieved covers 72% of the cancer region with feedback and 99.4% without feedback. In the healthy breast, the area where a thermal dose of 240 equivalent minutes is achieved is 0.7% of the whole region with feedback and 7.9% without feedback. The maximum temperature trajectories for the feedback case A are shown in Figure 14. The maximum temperature in cancerous and healthy tissue is decreased when the feedback controller is used. The phase and amplitude trajectories for transducers number 4 and 16 for feedback case A are shown in Figure 15.

Table 4. The acoustic and thermal parameters for the feedback case A (columns: Domain, α (Nep/m), c (m/s), ρ (kg/m³), k (W/mK), C_T (J/kgK), w_B (kg/(m³ s))).

Fig. 13. Thermal dose contours for the feedback case A. Left: Thermal dose with feedback. Right: Thermal dose without feedback.

Fig. 14. Temperature trajectories for the feedback case A. Left: Maximum temperature in cancer. Right: Maximum temperature in healthy breast.

Fig. 15. Phase and amplitude trajectories from the feedback case A for transducer number 4 (left) and 16 (right).

As compared to the original input
trajectories (see Figure 9), the feedback controller decreases the amplitude during the first part of the sonication. This is due to the increased absorption in tissue. In addition, the phase is also altered throughout the treatment, since the modeling errors are not homogeneously distributed between the subdomains. The acoustic and thermal parameters for the feedback case B are given in Table 5. Again, there are inhomogeneous changes in the parameters. Furthermore, the absorption in the healthy breast is higher than in the cancer, which makes the task for the feedback controller more challenging.

Table 5. The acoustic and thermal parameters for the feedback case B (columns: Domain, α (Nep/m), c (m/s), ρ (kg/m³), k (W/mK), C_T (J/kgK), w_B (kg/(m³ s))).

The thermal dose contours for feedback case B are shown in Figure 16. Without feedback, the thermal dose is dramatically lower than in the feedforward control (see Figure 8). With feedback control, the thermal dose distribution is therapeutically relevant in a large part of the cancer, while high thermal dose contours appear in a small part of the healthy breast. The area where a thermal dose of 240 equivalent minutes is achieved covers 60% of the cancer region with feedback, while without feedback a therapeutically relevant dose is not achieved in any part of the target. In the healthy breast, the area where a thermal dose of 240 equivalent minutes is achieved is 1.8% of the whole region with feedback. In this example, slight damage to the healthy breast was allowed. However, if an undesired thermal dose is not allowed in healthy regions, it is possible to increase the weighting of the healthy vertices when the feedback controller is derived.

Fig. 16. Thermal dose contours for the feedback case B. Left: Thermal dose with feedback. Right: Thermal dose without feedback.

Maximum temperature trajectories for feedback case B are shown in Figure 17. The feedback controller increases the temperature in the cancer effectively. In addition, the temperature in the healthy breast does not increase dramatically. However, during the second part of the sonication, the feedback controller cannot increase the temperature in the cancer to compensate for the modeling errors. This is due to the fact that during this period the transducers were turned effectively off, and the feedback gain is proportional to the feedforward control amplitude (for details, see [31]). The phase and amplitude trajectories for the feedback case B are shown in Figure 18. The feedback controller increases the amplitude to compensate for the decreased absorption in tissue. Furthermore, as compared to Figure 9, the phase trajectories are also changed with feedback. This is due to the inhomogeneous modeling errors in the subdomains.

Fig. 17. Temperature trajectories for the feedback case B. Left: Maximum temperature in cancer. Right: Maximum temperature in healthy breast.

Fig. 18. Phase and amplitude trajectories from the feedback case B for transducer number 4 (left) and 16 (right).

4 Conclusions

In this study, alternative control procedures for thermal dose optimization in ultrasound surgery were presented.
The presented methods are scanning path optimization methods if prefocused ultrasound fields are used and combined feedforward and feedback control approaches in which the phase and amplitude of the ultrasound transducers are changed as a function of time. The presented scanning path optimization algorithm is relatively simple. If any kind of treatment planning is made, it would be worth using this kind of approach to find the optimal scanning path. The numerical simulations show that the approach significantly decreases the treatment time, especially T. Huttunen et al. when a 3D volume is scanned. The given approach can be used with a single element transducer (where cancer volume is scanned by moving the transducer mechanically) as well as with a phased array. Furthermore, the presented algorithm is also tested with simulated MRI feedback data in [34]. Results from that study indicate that the optimized scanning path is robust even if there are modeling errors in tissue parameters. The combined feedforward and feedback control method can be applied in cases when the phased array is used in ultrasound surgery treatment. In feedforward control, the phase and amplitude of the transducers are computed as a function of time to optimize the thermal dose. With inequality constraint approximations, it is possible to limit the maximum input amplitude and maximum temperature in tissue. Furthermore, the diffusion and perfusion are taken into account in the control iteration. Finally, the latent accumulating thermal dose is taken into account if sonication is split to parts in which transducers are first on and then turned off. However, as the feedback simulations show, the model based feedforward control is not robust enough if modeling errors are present. For this case, the LQG feedback controller with Kalman filter for state estimation was derived to compensate modeling errors. The main advantage of the proposed feedback controller is that it can change not only the amplitude of the transducers but also the phase. As the results from the simulations show, the phase correction is needed to compensate inhomogeneous modeling errors. The feedback controller increases the robustness of the overall control scheme dramatically. As the computational task between the proposed approaches are compared, the combined feedforward and feedback approach is computationally much more demanding than the scanning path optimization method. The feedforward control iteration in particular is quite slow due to the large dimensions of the problem. In addition, the associated Riccati matrix equations for feedback controller and Kalman filter have very large dimensions. However, these Riccati equations, as well as the feedforward controller, can be computed off line before actual treatment. The modeling errors in the model based control of ultrasound surgery can be decreased with pretreatment. In this stage, it is possible to heat tissue with low ultrasound power levels and then measure the thermal response of tissue with MRI. From this data, thermal parameters of tissue can be estimated [23, 46]. References 1. D. Arora, M. Skliar, and R. B. Roemer. Model-predictive control of hyperthermia treatments. IEEE Transactions on Biomedical Engineering, 49:629–639, 2002. 2. I. Babuˇska and J. M. Melenk. The partition of unity method. International Journal for Numerical Methods in Engineering, 40:727–758, 1997. Optimal control in high intensity focused ultrasound surgery 3. J. C. Bamber. Ultrasonic properties of tissue. In F. A. 
Duck, A. C. Baker, , and H. C. Starrit, editors, Ultrasound in Medicine, pages 57–88. Institute of Physics Publishing, 1998. chapter 4. 4. A. B. Bhatia. Ultrasonic Absorption: An Introduction to the Theory of Sound Absorption and Dispersion in Gases, Liquids and Solids. Dover, 1967. 5. Y. Y. Botros, J. L. Volakis, P. VanBare, and E. S. Ebbini. A hybrid computational model for ultrasound phased-array heating in the presence of strongly scattering obstacles. IEEE Transactions on Biomedical Engineering, 44:1039– 1050, 1997. 6. O. Cessenat and B. Despr´es. Application of an ultra weak variational formulation of elliptic PDEs to the two-dimensional Helmholtz problem. SIAM Journal of Numerical Analysis, 35:255–299, 1998. 7. A. Chung, F. A. Jolesz, and K. Hynynen. Thermal dosimetry of a focused ultrasound beam in vivo by magnetic resonance imaging. Medical Physics, 26:2017–2026, 1999. 8. F. P. Curra, P. D. Mourad, V. A. Khokhlova, R. O. Cleveland, , and L. A. Crum. Numerical simulations of heating patterns and tissue temperature response due to high-intensity focused ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 47:1077–1088, 2000. 9. C. Damianou and K. Hynynen. Focal spacing and near-field heating during pulsed high temperature ultrasound hyperthermia. Ultrasound in Medicine & Biology, 19:777–787, 1993. 10. C. Damianou and K. Hynynen. The effect of various physical parameters on the size and shape of necrosed tissue volume during ultrasound surgery. The Journal of the Acoustical Society of America, 95:1641–1649, 1994. 11. C. A. Damianou, K. Hynynen, and X. Fan. Evaluation of accuracy of a theoretical model for predicting the necrosed tissue volume during focused ultrasound surgery. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 42:182–187, 1995. 12. S. K. Das, S. T. Clegg, and T. V. Samulski. Computational techniques for fast hyperthermia optimization. Medical Physics, 26:319–328, February 1999. 13. D. R. Daum and K. Hynynen. Thermal dose optimization via temporal switching in ultrasound surgery. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 45:208–215, 1998. 14. D. R. Daum and K. Hynynen. Non-invasive surgery using ultrasound. IEEE Potentials, December 1998/January 1999, 1999. 15. F. A. Duck, A. C. Baker, and H. C. Starrit. Ultrasound in Medicine. Institute of Physics Publishing, 1998. 16. X. Fan and K. Hynynen. The effect of wave reflection and refraction at soft tissue interfaces during ultrasound hyperthermia treatments. The Journal of the Acoustical Society of America, 91:1727–1736, 1992. 17. X. Fan and K. Hynynen. Ultrasound surgery using multiple sonications – treatment time considerations. Ultrasound in Medicine and Biology, 22:471–482, 1996. 18. D. Gianfelice, K. Khiat, M. Amara, A. Belblidia, and Y. Boulanger. MR imaging-guided focused US ablation of breast cancer: histopathologic assessment of effectiveness – initial experience. Radiology, 227:849–855, 2003. 19. S. A. Goss, R. L. Johnston, and F. Dunn. Compilation of empirical ultrasonic properties of mammalian tissues II. The Journal of the Acoustical Society of America, 68:93–108, 1980. T. Huttunen et al. 20. L. M. Hocking. Optimal Control: An Introduction to the theory and applications. Oxford University Press Inc., 1991. 21. E. Hutchinson, M. Dahleh, and K. Hynynen. The feasibility of MRI feedback control for intracavitary phased array hyperthermia treatments. International Journal of Hyperthermia, 14:39–56, 1998. 22. K. Hutchinson and E. B. 
Hynynen. Intracavitary ultrasound phased arrays for noninvasive prostate surgery. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 43:1032–1042, 1996. 23. J. Huttunen, T. Huttunen, M. Malinen, and J. P. Kaipio. Determination of heterogeneous thermal parameters using ultrasound induced heating and MR thermal mapping. Physics in Medicine and Biology, 51:1102–1032, 2006. 24. T. Huttunen, P. Monk, and J. P. Kaipio. Computational aspects of the ultraweak variational formulation. Journal of Computational Physics, 182:27–46, 2002. 25. K. Hynynen. Biophysics and technology of ultrasound hyperthermia. In M. Gautherie, editor, Methods of External Hyperthermic Heating, pages 61–115. Springer-Verlag, 1990. Chapter 2. 26. K. Hynynen. Focused ultrasound surgery guided by MRI. Science & Medicine, pages 62–71, September/October 1996. 27. K. Hynynen, O. Pomeroy, D. N. Smith, P. E. Huber, N. J. McDannold, J. Kettenbach, J. Baum, S. Singer, and F. A. Jolesz. MR imaging-guided focused ultrasound surgery of fibroadenomas in the breast: A feasibility study. Radiology, 219:176–185, 2001. 28. F. Ihlenburg. Finite Element Analysis of Acoustic Scattering. Springer-Verlag, 1998. 29. E. K¨ uhnicke. Three-dimensional waves in layered media with nonparallel and curved interfaces: A theoretical approach. The Journal of the Acoustical Society of America, 100:709–716, 1996. 30. K. Mahoney, T. Fjield, N. McDannold, G. Clement, and K. Hynynen. Comparison of modeled and observed in vivo temperature elevations induced by focused ultrasound: implications for treatment planning. Physics in Medicine and Biology, 46:1785–1798, 2001. 31. M. Malinen, S. R. Duncan, T. Huttunen, and J. P. Kaipio. Feedforward and feedback control of the thermal dose in ultrasound surgery. Applied Numerical Mathematics, 56:55–79, 2006. 32. M. Malinen, T. Huttunen, and J. P. Kaipio. An optimal control approach for ultrasound induced heating. International Journal of Control, 76:1323–1336, 2003. 33. M. Malinen, T. Huttunen, and J. P. Kaipio. Thermal dose optimization method for ultrasound surgery. Physics in Medicine and Biology, 48:745–762, 2003. 34. M. Malinen, T. Huttunen, J. P. Kaipio, and K. Hynynen. Scanning path optimization for ultrasound surgery. Physics in Medicine and Biology, 50:3473–3490, 2005. 35. D. T. Mast, L. P. Souriau, D.-L. D. Liu, M. Tabei, A. I. Nachman, and R. C. Waag. A k-space method for large scale models of wave propagation in tissue. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 48:341–354, 2001. 36. P. M. Meaney, R. L. Clarke, G. R. ter Haar, and I. H. Rivens. A 3-D finite element model for computation of temperature profiles and regions of thermal damage during focused ultrasound surgery exposures. Ultrasound in Medicine and Biology, 24:1489–1499, 1998. Optimal control in high intensity focused ultrasound surgery 37. P. Monk and D. Wang. A least squares method for the Helmholtz equation. Computer Methods in Applied Mechanics and Engineering, 175:121–136, 1999. 38. H. H. Pennes. Analysis of tissue and arterial blood temperatures in the resting human forearm. Journal of Applied Physiology, 1:93–122, 1948. 39. A. D. Pierce. Acoustics: An Introduction to its Physical Principles and Applications. Acoustical Society of America, 1994. 40. S. A. Sapareto and W. C. Dewey. Thermal dose determination in cancer therapy. International Journal of Radiation Oncology, Biology, Physics, 10:787–800, June 1984. 41. M. G. Skinner, M. N. Iizuka, M. C. Kolios, and M. D. Sherar. 
A theoretical comparison of energy sources -microwave, ultrasound and laser- for interstitial thermal therapy. Physics in Medicine and Biology, 43:3535–3547, 1998. 42. R .F. Stengel. Optimal Control and Estimation. Dover Publications, Inc., 1994. 43. G. ter Haar. Acoustic surgery. Physics Today, pages 29–34, December 2001. 44. G. R. ter Haar. Focused ultrasound surgery. In F. A. Duck, A. C. Baker, and H. C. Starrit, editors, Ultrasound in Medicine, pages 177–188. Institute of Physics Publishing, 1998. 45. B. Van Hal. Automation and performance optimization of the wave based method for interior structural-acoustic problems. PhD thesis, Katholieke Universitet Leuven, 2004. 46. A. Vanne and K. Hynynen. MRI feedback temperature control for focused ultrasound surgery. Physics in Medicine and Biology, 48:31–43, 2003. 47. H. Wan, P. VanBaren, E. S. Ebbini, and C. A. Cain. Ultrasound surgery: Comparison of strategies using phased array systems. IEEE Transactions on Ultrasonics Ferroelectrics, and Frequency Control, 43:1085–1098, 1996. 48. G. Wojcik, B. Fornberg, R. Waag, L. Carcione, J. Mould, L. Nikodym, and T. Driscoll. Pseudospectral methods for large-scale bioacoustic models. IEEE Ultrasonic Symposium Proceedings, pages 1501–1506, 1997.
Building a Neural Network & Making Predictions (Summary) – Real Python

Congratulations! You built a neural network from scratch using NumPy. With this knowledge, you’re ready to dive deeper into the world of artificial intelligence in Python. In this course, you learned:

• What deep learning is and what differentiates it from machine learning
• How to represent vectors with NumPy
• What activation functions are and why they’re used inside a neural network
• What the backpropagation algorithm is and how it works
• How to train a neural network and make predictions

The process of training a neural network mainly consists of applying operations to vectors. Today, you did it from scratch using only NumPy as a dependency. This isn’t recommended in a production setting because the whole process can be unproductive and error-prone. That’s one of the reasons why deep learning frameworks like Keras, PyTorch, and TensorFlow are so popular. For additional information on topics covered in this course, check out these resources: Congratulations, you made it to the end of the course! What’s your #1 takeaway or favorite thing you learned? How are you going to put your newfound skills to use? Leave a comment in the discussion section and let us know.

Fantastic. Brilliantly explained.

Good explanation, but why, why there is no final example, where one vector would be given to the trained neuronal net with explanation: -This Vector was given because… -We await the prediction of… -The trained model predicted x, because… Short, to run the model and interpret the results by random input.

nice overview course, that demystifies some of the deep-learning scarecrows. Two things: 1. Every tutorial that one might take after this one will be so much more advanced. Something intermediate building on top of this one here, but not yet being on the level of Tensorflow, Keras, PyTorch deep learning, would help flatten the steep learning curve 2. Something minor: Yeah, the dot-product measures similarities, and I know similarities are important for deep-learning. However, I guess the example in this course misses a bit the point. To my understanding it is the similarity of two different input vectors that matters, not the similarity of one input vector to the
Arithmetic Sequences and Series, including Sigma Notation Task 1 - Arithmetic Sequence Understand and work with arithmetic sequences, including the formulae for nth term. Before we begin I want you to think about what you already know about sequences. Try to write a definition of sequence down, write it down do not just think of a definition. A sequence is a set of numbers written in a particular order. We sometimes write U1 for the first term of the sequence, U2 for the second term, and so on. We write the nth term as Un. You should already be familiar with sequences from Years 10 and 11 maths. Refresh your memory by completing these questions. 1. Un = 4n+5, use the formula to find the first five terms of the sequence where n = 1,2,3... 2. Un = 1/n4, use the formula to find the first four terms of the sequence where n = 1,2,3... What is the tenth term? 3. Can you remember the first eight terms of the Fibonacci Sequence? 4. Un = (−1)n+2/n, write the first five terms. A series is a sum of the terms in a sequence. If there are n terms in the sequence and we evaluate the sum then we often write Sn for the result, so that Sn=U1+U2+U3+...+Un. Working with the series below S1 = 1 The sum of the series to the first term is the first the, 1. S2 = 1+2 = 3 The sum of the series to the second term is the first and second term combined. S3 = 1+2+3 = 6 The sum of the series to the third term is the first, second and third term combined. Your go. Answer these questions in your book. Find S1, S2, S3, S6 for these sequences. 1. 1,3,5,7,9... 2. 6,4,2,0,-2... 3. 0,10,20,30... Useful Calculator Skills Arithmetic Sequences An arithmetic sequence, or AS, is a sequence where each new term after the first is obtained by adding a constant d, called the common difference, to the preceding term. If the first term of the sequence is a then the arithmetic sequence a, a+d, a+2d, a+3d, ... where the nth term is You are going to work through a series of video lessons from an Australian teacher as well as TLMaths. I chose these videos because they are really easy to follow and very good at covering the topic. However, in Australia, they use T1, T2... instead of U1, U2... so bear that in mind. Watch Introduction to Arithmetic Sequences by TLMaths. Watch Introduction to Arithmetic Sequences by McClatchey Maths. Complete this exercise on arithmetic sequences on Transum. You should be able to finish level 2 comfortably but you will need to finish the unit to complete level 4. No Common Difference, No Problem! So far you have been working with problems where you have been given consecutive numbers or the common difference. But what if you do not know the common difference and are not given consecutive To work out the nth term in these problems you will need to use simultaneous equations. Some of you will love simultaneous equations some of you will not. If you are in the not group please take a deep breath and do not panic. I am going to walk you through step by step and give you plenty of practice so that you feel more comfortable moving forward. Begin by watching the Arithmetic Sequences and Simultaneous Equations video by McClatchey Maths. For a second explanation go to TLMaths: 5.19 Arithmetic: 3rd term is 10, 25th term is 142. I am not going into simultaneous equations in depth here but it is important you know how to solve them with and without a calculator. Check out the calculator unit of work if you need a recap. What if it asks for nth Term? You already know how to find a specific term e.g. 
find the 80th term of a sequence but you may be asked to find the nth term. In these questions, you are just being asked to give a formula for the nth term. Watch TLMaths: Finding the nth term. How Many Terms Are in a Sequence? You may also be given a section of a finite sequence and the value for the nth term and asked how many terms are in the sequence. Again, this is all about using the formula and substituting in the Watch TLMaths: How many terms in an arithmetic sequence? Task 2 - Arithmetic Series Understand and work with arithmetic sequences and series, including the formulae for nth term and the sum to n terms. There are two useful formulas for working out the sum of a finite arithmetic sequence, a series. Which one you use will depend on the information you are given. For one formula you need to know d for the second you need to know the start and end terms. Watch TLMaths: Introducing Arithmetic Series. Watch the following three TLMaths videos, they are short so they won't take long. The sum of the terms of an arithmetic sequence gives an arithmetic series. If the starting value is a and the common difference is d then the sum of the first n terms is If we know the value of the last term ℓ instead of the common difference d then we can write the sum as Task 3 -Sigma Use of sigma notation for sums of arithmetic sequences. Watch TLMaths: Introducing Sigma Notation and then Writing a Sum Using Sigma Notation. Task 4 - Practice Go and work through Unit 9 of Algebra 1 at Khans Academy. Go all the way down to Quiz 1. Some terms they use may be different but that is ok. It is good to understand mathematical terminology from other countries. Then go to Unit 18 Algebra at Khans Academy and work through Basic Sigma notation to the end of Arithmetic Series. Complete this exercise on arithmetic sequences on Transum. Finish levels 3 and 4 this time. Task 5 - Using Arithmetic Sequences and Series We can use our knowledge of Arithmetic sequences and series to help us with several real-life situations. We are going to look briefly at two situations: simple interest and depreciation. Both of these are covered more fully in the module on financial maths. Watch MacClutchey Maths: Simple Interest. Again, in Australia, they use the same formula but different letters in the UK the formula for simple interest is A = final amount P = Principal balance r = annual interest rate t = years Watch McClutchey Math Depreciation video. Here is a great textbook unit on simple interest and depreciation. You can choose to work through the examples and problems and master depreciation now or just look it over to understand the relationship between sequences and financial maths. Task 6 - Consolidate In your journal use a double page to begin writing about sequences, include: 1. Definitions for arithmetic sequence and series. 2. Important formulas. 3. Examples of using the formulas. 4. Real-life examples. 5. Note anything you have difficulty with (this will help you when it comes to revising). Add this sheet to your notebook.
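To gather the formulas from this unit in one place, here is a short Python sketch using the standard results U_n = a + (n − 1)d, S_n = n/2 (2a + (n − 1)d) (equivalently n/2 (a + ℓ) when the last term ℓ is known), and simple interest A = P(1 + rt). The example numbers are made up for illustration.

```python
def nth_term(a, d, n):
    """U_n = a + (n - 1)d for an arithmetic sequence with first term a."""
    return a + (n - 1) * d

def series_sum(a, d, n):
    """S_n = n/2 * (2a + (n - 1)d); with last term l this equals n/2 * (a + l)."""
    return n * (2 * a + (n - 1) * d) / 2

def simple_interest_amount(P, r, t):
    """A = P(1 + rt): principal P, annual rate r as a decimal, t years."""
    return P * (1 + r * t)

# The sequence 1, 3, 5, 7, ... has a = 1 and d = 2:
print(nth_term(1, 2, 80))                            # 80th term -> 159
print(series_sum(1, 2, 6))                           # S_6 = 1+3+5+7+9+11 -> 36.0
# Sigma notation is just a sum, so S_6 can also be written term by term:
print(sum(nth_term(1, 2, k) for k in range(1, 7)))   # -> 36
# 500 invested at 4% simple interest for 3 years:
print(simple_interest_amount(500, 0.04, 3))          # -> 560.0
```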
Van Inwagen on the Rate of Time’s Passage This post is co-authored by Hud Hudson, Ned Markosian, Ryan Wasserman, and Dennis Whitcomb. It is based on an unpublished paper by the four of us that is available online here. In the 2^nd edition of his book, Metaphysics (Boulder, CO: Westview Press, 2002), Peter van Inwagen offers a new argument against the passage of time. In the 3^rd edition of the book (Westview Press, 2009) the same argument appears, and it also appears in a recent Analysis paper by Eric Olson (“The Rate of Time’s Passage,” Analysis 61: pp. 3-9). Here’s a quote from van Inwagen. Does the apparent “movement” of time… raise a problem? Yes, indeed… the problem is raised by a simple question. If time is moving (or if the present is moving, or if we are moving in time) how fast is whatever it is that is moving moving? No answer to this question is possible. “Sixty seconds per minute” is not an answer to this question, for sixty seconds is one minute, and – if x is not 0 – x /x is always equal to 1 (and ‘per’ is simply a special way of writing a division sign). And ‘1’ is not, and cannot ever be, an answer to a question of the form, ‘How fast is such-and-such moving?’ – no matter what “such-and-such” may be… ‘One’, ‘one’ “all by itself,” ‘one’ period, ‘one’ full stop, can be an answer only to a question that asks for a number; typically these will be questions that start ‘How many…’… ‘one’ can never be an answer, not even a wrong one, to any other sort of question – including those questions that ask ‘how fast?’ or ‘at what rate?’. Therefore, if time is moving, it is not moving at any rate or speed. And isn’t it essential to the idea of motion that anything moving be moving at some speed…? (2002: 59) Here’s the gist of van Inwagen’s argument. If time passes, then it has to pass at some rate. And even if that rate is expressible in a number of different ways (e.g., 60 minutes per hour, 24 hours per day, etc.), it must also be true (if time passes at all) that time passes at a rate of one minute per minute. But one minute per minute is equivalent to one minute divided by one minute. And when you divide one minute by one minute, you get one (since, van Inwagen says, “if x is not 0 – x/x is always equal to 1”). But ‘one’ (not ‘one’ of anything, but just plain old ‘one’) is the wrong kind of answer to any question of the form “How fast…?” So it must be that time does not pass after all. QED. We can put the reductio part of van Inwagen’s argument a bit more carefully as follows. (1) The rate of time’s passage = 1 minute per minute. (2) 1 minute per minute = 1 minute ÷ 1 minute. (3) 1 minute ÷ 1 minute = 1. (4) The rate of time’s passage = 1. We have several problems with this argument, but will discuss only two of them here. (We discuss some other problems, and the two problems raised here in more detail, in the paper linked to above.) First problem: It’s not true that for any x distinct from 0, x ÷ x = 1. Take for example the Eiffel Towel. If you divide the Eiffel Tower by itself, you don’t get 1. You don’t get anything, because division is not defined for national landmarks. Division is an operation on numbers, and a minute – like a meter or a tower or a car – is not a number. So 1 minute ÷ 1 minute is undefined, and thus (3) is false. (One can, of course, say things like: 10kg divided by 5 kg is 2 kg. But we take this to be loose talk – it is the numbers, not the quantities, that are being divided. 
Similarly, one can show that a rate of one kilometer per minute is equal to sixty kilometers per hour by multiplying fractions and canceling out units: 1k/1m x 60m/1hour = 60k/1hour. Once again, we take this to be a loose way of speaking – it is the fractions, not the rates, that are being multiplied.) Second problem: (2) is also false. Van Inwagen supports it by saying that “…‘per’ is simply a special way of writing the division sign.” (2002: 59) We disagree. The forward-slash (‘/’) can be used to abbreviate both ‘per’ (i.e., ‘for every’) and ‘divided by’, but it is a mistake to treat ‘per’ as synonymous with ‘divided by’. To see this, consider the claim that time passes at a rate of one minute per minute. This may be uninformative, but that doesn’t make it untrue. A minute does pass every time a minute passes, just as a car passes every time a car passes. So ‘1 minute per minute’ expresses a genuine rate. But now consider the claim that time passes at a rate of 1 minute ÷ 1 minute. This is worse than uninformative – it is nonsensical. That is because 1 minute ÷ 1 minute is a division problem (without a defined answer) and a division problem is not a rate of change. One might as well say that time passes at a rate of orange x banana. So ‘1 minute ÷ 1 minute’, unlike ‘1 minute per minute’, does not express a rate. We conclude that van Inwagen’s anti-passage argument fails, for (2) and (3) are both false. 22 comments: 1. At most van Inwagen's argument seems to imply that the rate of passage of time is a dimensionless quantity, but there is nothing wrong with dimensionless quantities. Maybe the idea is that rates (like velocities) usually are not dimensionless quantities. But the fact that most rates are not dimensionless, does not mean that all rates must have dimensions. 2. I might agree with some of the points about division. But I take it that the main thrust of this problem is as follows. Loosely, to ask for a rate of change is to assess variation in one dimension against variation in another. To ask for the rate of change of time itself is to attempt to asses one dimension of variation against itself. So the only answers you can get are utterly You reply "this may be uninformative, but that doesn't make it untrue". But if the answer to the question "what is the rate of passage?" is necessarily uninformative, how can you claim to have given any content to the idea that time passes? For example: Have you given any more sense to that idea, than to the rival claim that space passes at a rate of 1 metre per metre? If not, then what do you mean when you claim that time, unlike space, passes? 3. I had a discussion with Joe Melia a few days ago that had me thinking about the van Inwagen (/Olson) argument --- and thinking it was wrong --- but I'm not sure how it squares with your response to the argument. Joe reminded me that physicists often like to use so-called "natural units", which have the result that certain physical constants end up getting the value "1" (with no further unit attached: not 1 cm or 1 ohm or whatever, but just 1). One system is used in relativity, in which the speed of light in a vacuum comes out as 1. So, for natural units L (length) and T (time), the speed of light = L/T = 1. So that's bad for van Inwagen's argument; if "how fast does light travel in a vacuum" can sensibly be answered with an unadorned "1", then "how fast does time pass" should be sensibly answered the same way. 
But the reasoning that lets the physicists get to the point of assigning light the value (1) seems suspiciously similar to the sort of reasoning that goes on in the (1)--(4) argument. (It's a little more complex, because the natural units for length and time aren't obviously the same units, but the result still goes via the thought that certain units can "cancel" each other.) So I'm not sure how to think about physicists' uses of natural units if your criticisms of (2) and (3) go through. 4. You say, "One can, of course, say things like: 10kg divided by 5 kg is 2 kg." However that's not true - 10 kg divided by 5 kg is 2, not 2 kg. The sorts of things that can be divided are quantities, some of which have units and others don't. To see that it's the quantities and not the numbers that are being divided, note that 10 kg is the same as 10,000 g, so 10,000 g divided by 5 kg should be the same as 10 kg divided by 5 kg. If we were dividing the numbers, we would get 2,000 in one case and 2 in the other, which would be problematic. Admittedly, the mathematical operations applied to quantities don't seem to be quite the same as the operations applied to numbers, but I think they're generalizations of these operations. Quantities come in various types, and operations change the type of quantity involved, so a distance times a distance is an area, and a force divided by a mass is an acceleration. Notably, a mass divided by a mass is a number, and a number times a volume is a volume. The operations of addition and subtraction only work when the quantities being operated on are of the same type, while multiplication and division always make sense. Each type has various characteristic units that can be used, like meters, minutes, Newtons, kilograms per second, etc. But I'm not exactly sure where this leaves van Inwagen's argument. A "rate" isn't a single type - some rates (like speeds) have units of meters per second, others (like flows) have units of cubic meters per second, and you could have others (like frequencies) in numbers per second, and so on. I don't see why it isn't the case that the rate of time's passage is just 1. Unfortunately, that's not a very informative answer, but I'm not quite sure whether that means it isn't an answer. 5. Oh, and I forgot to mention - I'm sure all this stuff about units and quantities and the like is discussed much better in the Luce, Krantz, Suppes, and Tversky Foundations of Measurement, though I haven't read it. I got these ideas from reading Field's Science Without Numbers and some idle thought about dimensionless quantities in physics. 6. First, I think Jason, Kenny, and I are trying to make more or less the same point. (Am I wrong guys?) Second, the fact that the rate of passage of time is given by a dimensionless quantity is not particularly surprising. Those who think time passes plausibly think that the rate of its passage is a fundamental physical constant and such constants are often dimensionless quantities (their value is usually an artifact of our fundamental units of measurement). This last further suggest that the value of a physical constant is rarely informative (about the world). Third, what Kenny is saying about dividing 1m/1m is completely right and I think Ned et al. are wrong in saying that '/' is ambiguous between 'divided by' and 'per'. If it takes me 4 hours to travel 400km, my average speed (the rate at which I travel) is 400km/4h or, dividing both numerator and denominator by 4, 100km/(1)h or 400km/h. 
So, even in this context, '/' has the meaning 'divided by'. Finally, if you are interested, the branch of mathematics that studies this kinds of problems is called dimensional analysis. 7. This comment has been removed by the author. 8. Gabrielle, I think we're all roughly making the same point, yes (but I wasn't tracking that at first, as I wasn't tracking "dimensions" as having to do with units). Only, I think I'm less confident than the two of you that it's on the money. For instance -- in the absence of natural units -- we might always think that multiplication or division of units is really two operations: division (which on this view is only defined between numbers) and a structurally similar unit-conversion-type operation. If we said this, we might think that in the "natural units" cases, the values aren't really 1; they're instead 1 something-or-other (maybe the "vel" is a fundamental unit of velocity, defined so that 1 vel = the speed of light in a vacuum; then (spatial and temporal) distance are defined from it in such a way that we can always just drop the "vel" from the unit names, leaving it as I'm pretty sure these two ways of looking at things aren't just notational variants of each other, and that if the way I just mentioned is right, then Ned-et-al's argument hits the nail on the head. (They can say that the conversion "rule" for s/s leaves it as it is: that is, 1 s/s is different than just the value 1.) If the other way is right, then the argument is confused about units. But I don't have a good grip on which way is the (metaphysically) better way of thinking about units, so I'm not sure what to say about Net-et-al's argument. 9. This argument against passage isn't new with van Inwagen. Price offers it in his (1996). But I suspect it's a v old chesnut which keeps getting rediscovered. As no doubt does the reply, my version of which (targetted at Olson) is forthcoming in this summer's Analysis here: I absolutely agree with Tim that this kind of reply doesn't answer deeper objections to the notion of passage. But I do think that it answers the objection as it's given. I also agree with Kenny's points about the division of quantities. But I don't see why we should think of rates as division operations. Only if we do that do we end up with a rate equal to 1. I'm no expert on natural units but it's not at all clear to me that they provide any reason to think that (e.g.) the speed of light is just one. No doubt the speed of light can be one _in some system of natural units_. But as the link Jason gives makes clear there is always a metrical value one can convert into. Whether the right metrical conversion is metres or coulombs isn't something that just magically appears out of thin air. 10. Jason - I think I see the distinction you're making, but there may yet be arguments that there isn't a separate unit-conversion operation (or if there is, that it does the expected thing with s/s and cancels the units). For instance, consider something like the efficiency of an engine - it has as an input some amount of heat energy, and as output some amount of kinetic energy. If the machine takes in 10 J of energy for every 5 J of energy it puts out, then it seems right to say that it's efficiency is 50%. It doesn't seem right to ask of that 50% number what units it's expressed in. No matter what units the original question of engine efficiency was phrased in, the answer will be "50%". 
And it's clear that there is some natural sense in which the actual number .5 is expressed in this physical system, in a sense that 10 and 5 are only conventionally present. As for the case of natural units, I'm really not sure what to think. On the one hand, it seems that speeds and distances are just two different types of quantity, and they can't be added. But if natural units mean that these things are really dimensionless, then they should be the sorts of things that can be added. Therefore, natural units can't mean that things are dimensionless. On the other hand, from the little I understand, it sounds like general relativity says that mass and energy really are two aspects of the same thing. Observers from two different frames will disagree on the mass and energy of a given object, just as they'll disagree about its velocity. But they'll agree about its mass-energy, so therefore mass-energy seems to be one thing. But then Einstein's equation shows that e=mc^2, and if mass and energy are actually quantities with the same dimension, then the speed of light must be dimensionless. I don't know what to believe here. I suppose I just need to study dimensional analysis more. (I've heard from physicist friends that huge amounts of physics can be derived quite simply just from dimensional analysis, with the empirical work only required to fill in values of some constants, while the form of the equation is derived a priori.) 11. The obvious analogue of "How fast is time?" would be "How big is space?" and this harmless game can be extended a bene placito. What is the duration of duration? What is the speed of speed? Well-educated readers, if any, might eventually recognize the "third man" under so many disguises... all' heteron ti tês metabolês aition. "Something else is the cause of this association," or maybe a more appropriate translation here would be "Something else is to blame for this hodge-podge." Special relativity provides the most facile solution for this puzzle, and there it makes sense to ask how much time passed, and how fast it passed, inside the inevitable example of a spaceship as it traveled from A to B, and then the annoying "third man" has apparently disappeared. But just as Alida Valli is eternally unimpressed by the Anglo-American earnestness of Joseph Cotten, and patiently awaits the return of her unruly lover Orson Welles, a "third time" intrudes to unite relatively compressed and uncompressed durations, and if we accordingly define "philosophical time" to measure progress with these mysteries since 420 BCE, no time has passed at all. 12. Dimensionless numbers are all over physics---in fluid mechanics the ratio of inertial forces to viscous forces is the reynolds number, a dimensionless quantity which tells you a lot about the behavior of a fluid. So it certainly makes sense to divide a force by a force. You can divide my velocity by the velocity of light, to get a useful (dimensionless) measure of my velocity. It doesn't seem to me like a good response to the price/van inwagen/olson argument to say that you cannot meaningfully divide forces, or velocities. I suspect the problem with their argument is somewhere else. Each quantity has an associated dimension: speed has dimension (length)/(time), energy has dimension (mass)(length)^2/(time)^2, the reynolds number of the Hudson river is dimensionless. Seems like their argument turns on the claim that: A quantity is not a RATE (or a rate of change) unless it has dimension (OTHER DIMENSIONS)^n/(time). 
The "rate" at which time passes, though, is the ratio of two quantities which each have dimension (time); so the ratio is dimensionless. So this claim entails that this ratio is not a rate (or not a rate of change). But I don't see why the claim is at all appealing. I would have thought the correct claim was: A quantity is not a rate unless it is the quotient of two quantities, the second of which has dimension (time). Then the rate of time's passage can be a rate. 13. Brad -- you invoke the principle that if a ratio is a ratio of two quantities each with the same unit/dimension, then the ratio is dimensionless. But why accept that? In the time case, why not think that the ratio is a ratio of time to time. I'm not suggesting that you can't divide times (forces, velocities etc.). But why assume that a ratio is simply a division operation? 14. Tim: It sounds like you are saying that you agree with us about PVI’s argument, but think that there are two deeper problems for the passage view: (1) the problem of explaining what it means to say that time passes, and (2) an argument about the view’s inability to give a coherent and informative answer to the question ‘How fast does time pass?’ I agree that these are deeper problems for the passage view. (For what it’s worth, my 1993 paper, “How Fast Does Time Pass?” is an attempt to deal with exactly these two problems.) Gabriele, Kenny, Jason, and Brad: My view is that division is an operation on numbers, and not defined for any other entities. (I’m pretty sure that my co-authors have the same view.) So I agree with the position spelled out by Jason on our behalf in his post at 3:38AM on April 26th. (Perhaps it wasn’t 3:38AM where Jason was when he made the post.) Notice, though, that if this is wrong, and if you guys are right that 1 is a legitimate rate for the passage of time, then PVI’s argument still fails (but for a different reason). Ian: I am on your side when you ask, “why assume that a ratio is simply a division operation?” In fact, I would go further, and say that there are excellent reasons to deny that a ratio (like 10km per hour, for example) is simply a division problem. Here is one. If 10km/hour is a division problem, then so is 1km/hour. But 1km/hour = 1km/1hour. And if that were a division problem, then the 1’s would cancel out, which means that 1km/hour would be equal to km/hour, i.e., kilometers divided by hours. But kilometers cannot be divided by hours. (“How many hours are there in a kilometer?” sounds like something Chico Marx would say. And not in a good way.) Here’s another reason to deny that a ratio like 1 minute per minute is a division problem. If it is, then we have to say that the rate of time’s passage = 1, that the rate of time’s passage = 5 - 4, that the rate of time’s passage = the speed of light, and that the rate of time’s passage + 1 = an even prime number. Here’s a third reason. When we say that Abibe Bikila’s rate over the course of the marathon is 12 miles per hour, we are saying that his position changes by 12 miles for every one hour change in the passage of time. (And that’s all we’re saying.) But there is no temptation to think of this last claim as any kind of division operation. It is simply a comparison of one change to another. 15. Ian--- I may agree with both you and Ned. Here is how I was thinking. First, there are quantities, like the mass of this computer, or the volume of water in Bellingham Bay. 
Each quantity has a dimension, and each dimension can be written as a product of powers of the fundamental dimensions (length, time, mass). We use numbers to measure magnitudes of quantities---though the number used to measure the magnitude depends on a choice of a system of units of measurement. The ratio of any two quantities is itself a quantity. The dimension of this new quantity is got by dividing the dimension of the first quantity by the dimension of the second, and then canceling. The magnitude of this new quantity is also measured by a number, relative to a choice of a system of units. Some quantities are dimensionless. Then the number that measures their magnitude is the same in all systems of units of measurement. But the number is still being used to measure the magnitude of the quantity. The Reynolds number of the water in the hudson river is a dimensionless quantity. Maybe it is 33. But it remains true that the Reynolds number is the ratio of two forces. That fact doesn't disappear when you say that the quantity is dimensionless. So I think it's correct to say that the rate of time's passage is 1. I don't think you need to add "seconds per second," though you can; seconds is a particular unit for measuring time, and the rate of time's passage is 1 no matter what the choice of unit. I also think that, even though the rate of time's passage of 1, this quantity is still the ratio of two quantities, the second of which has dimension (time), so I still think it qualifies as a rate. But, like Ned (I think), I do not think that the rate of time's passage is a number. The rate of time's passage is a quantity that is measured by a number. (So "The rate of time's passage is 1" is not an identity statement; it asserts that a certain relation of measurement holds between the rate (a quantity) and the number 1.) 16. There's an analogy between asking how fast time passes and asking how dense space is. Other things' rates can be given in units of quantity per unit time interval, and other things' densities can be given in units per unit volume, e.g. kg/m3. If someone tells you that space has a density of 1 m3 per m3 surely you'd think they were talking some kind of nonsense. Both cases are unlike those dimensionless quantities like Reynolds number which result from the cancelling out of dimensions. Reynolds number quantifies a ratio of two different kinds of forces, inertial force and viscous force, the ratio of which is physically significant although they have the same dimensions and so the dimensions disappear in the ratio. Even angle, which some authorities say is dimensionless, because it is a ratio of turns, is meaningful in a way which neither time's rate nor space's density is. 17. I think now that the argument about the absurdity (or otherwise) of the rate of passage of time is no more problematic for the idea of time passing than for a B-theorist. Suppose a tap drips at a constant rate of 5 drips per minute. A B-theorist doesn't have to think of the drips and their rate as passing from future to past via the present. In any temporal interval there are so many events of dripping, in a minute, five. (Drips, it seems, don't have their own physical quantity dimension, but still they are different from other kinds of event). A rate is just a temporal density. So it is just as absurd to ask how much time there is per unit time as it is to ask how much space there is per unit space. Time no more passes than space is spread out. 
If space were spread out like honey on toast, it would make sense to wonder whether it is denser in some places than others. 18. How about relativity? It may be uninformative to say "The flow of time is one minute per minute", but it may be informative to say about a traveller who went faster than light that he got older 1 minute of his time in, say, 1 hour of our time. This seems to be a difference to the question "how big is space?" If one allows for different rates of change in different places one could also allow for changing velocity of the passage of time. This only makes sense (to me at least) if there is a rate of passage. Hhm. Needs further thinking 19. I agree about the need for more thought but relativistic differences apply to space too: suppose two fast spaceships a mile long pass each other in opposite directions. From the reference frame of either one, the other is shorter than it is. 20. "However that's not true - 10 kg divided by 5 kg is 2, not 2 kg." Technically, isn't it 2 kg/kg? After all, there's a difference between "two items" and "two kilograms of salt per kilogram of sand." "Even angle, which some authorities say is dimensionless, because it is a ratio of turns" Isn't angle dimensionless because it is a ratio of lengths? Subtended arc length to radius length for a fixed sector of a circle centered at the angle's vertex? 21. Saying 2kg/kg is different then two on the basis of "two kilograms of salt per kilogram of sand" is just to sneak in a different unit of measurement. The ratio indicated is 2 (kg salt)/(kg sand). Regardless, I think there's reason to believe there's more to scalar measurements than the units. Consider torque (the scalar). T = (distance)X(Force)X(angle) the units of which is (distance)X(distance)X(mass)/(time)X(time) Consider kinetic energy: E = .5X(mass)X(velocity)X(velocity) the units of which is: Perhaps there's some special relationship between torque and energy that I'm unaware of, but it seems to me that a measurement is more than its units. 22. VS Bandaneer here, Well how do you know time is passing? The sun rising passing and setting is one way---also thoughts coming and going, actions, events. I submit that there is no time---,sans events, and that indeed events are what time consists of. Time is abstracted from the coming and going of events--in my view. How could you--or why would you--have such a notion as time if nothing new showed up? How would the notion come about? We trace events with a clock---a clock is cycles--actions--ticks-- that we compare with other actions or events; the travel of the sun--the length of the day-----is what a clock tracks---the sun's travel is so many actions--clicks of a clock. Whether there is time as an independent entity or no is an ontological question----Is there some entity apart from the comparison of actions--clock actions compared to non-clock actions--we call time --or is time just a notion absracted from this set up and there is no actual entity. WHether time is an independent entity involves down to the issue of medeival scholasticism--the prolem of universals---(Abelard pronounced on this,what progress philosophy has made!)------is time just a name or is it a thing? And ultimately the question arises from the mind/body split. How you gonna decide that? Whether abstraction or entity--material thing--whatever that means) ---seems more an assumption. You all are considering time as a material thing apparently----why should I assume as you do? I'd love to read your argument.
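Several comments above turn on how unit bookkeeping behaves under division. Purely as an illustration of that bookkeeping (it settles nothing philosophical about whether "1 minute per minute" names a rate), here is a toy Python sketch in which dividing one quantity by another subtracts unit exponents, so kilometres per hour keeps its units while minute over minute comes out dimensionless. The class is a made-up helper, not a real units library.

```python
from collections import Counter
from fractions import Fraction

class Quantity:
    """Toy dimensional bookkeeping: a magnitude plus integer exponents per unit."""
    def __init__(self, value, **units):
        self.value = Fraction(value)
        self.units = Counter(units)

    def __truediv__(self, other):
        # Divide magnitudes and subtract unit exponents, dropping any zeros.
        units = self.units.copy()
        units.subtract(other.units)
        return Quantity(self.value / other.value,
                        **{u: e for u, e in units.items() if e != 0})

    def __repr__(self):
        tail = " ".join(f"{u}^{e}" if e != 1 else u for u, e in self.units.items())
        return f"{self.value} {tail or '(dimensionless)'}"

print(Quantity(60, km=1) / Quantity(1, h=1))            # 60 km h^-1
print(Quantity(1, minute=1) / Quantity(1, minute=1))    # 1 (dimensionless)
```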
{"url":"https://substantialmatters.blogspot.com/2009/04/van-inwagen-on-rate-of-times-passage.html","timestamp":"2024-11-03T17:05:18Z","content_type":"text/html","content_length":"178621","record_id":"<urn:uuid:b4ead0db-70b4-4669-9932-80204412b34c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00086.warc.gz"}
How to Implement Deep Learning in C++

Learn how to implement deep learning in C++ with these easy-to-follow tips. You'll be able to use deep learning algorithms to improve your C++ programming skills in no time!

Introduction to Deep Learning

Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, i.e. "neural networks". Neural networks are a kind of machine learning algorithm used to model complex patterns in data. Deep learning algorithms are able to learn such patterns by using a large number of processing layers built from artificial "neurons".

What is Deep Learning?

Deep learning is a subset of machine learning that uses algorithms to model high-level abstractions in data. These algorithms are able to learn complex patterns in data and make predictions about new data. Deep learning is often used for supervised learning tasks, such as image classification and object detection.

How Deep Learning Works

Deep learning is a subset of machine learning in which artificial neural networks, algorithms inspired by the brain, learn from large amounts of data. Deep learning is mainly used for computer vision, natural language processing, and speech recognition.

C++ is a good choice for deep learning because it is a fast language and can be used for highly parallelizable code. Additionally, there are many open source deep learning libraries available for C++, such as TensorFlow, PyTorch, and Caffe.

If you're interested in using deep learning in your own projects, there are a few things you need to know before getting started. First, you'll need to decide what kind of data you want to use. Deep learning models can be trained on both labelled and unlabelled data. Labelled data has been manually labelled by humans, such as images that have been tagged with labels such as "cat" or "dog." Unlabelled data has not been manually labelled, such as raw video footage or text documents.

Once you've decided what kind of data you want to use, you'll need to gather it and format it so that it can be used by a deep learning model. This usually involves using a pre-processing tool to convert the data into a form the model can read. For example, if you're using image data, you'll need to convert the images into numeric form (such as RGB values).

After the data has been formatted, you'll need to choose a deep learning algorithm and train the model on the data. This usually involves adjusting some parameters so that the model can learn properly from the data. Once the model has been trained, you can test it on new data to see how well it performs.
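To make the talk of "neurons" above concrete, here is a minimal, self-contained C++ sketch of what a single artificial neuron computes in a forward pass. It is a hypothetical toy, not code from any of the libraries mentioned in this article, and the input values, weights, and bias are made-up illustrative numbers.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// One artificial neuron: a weighted sum of its inputs plus a bias,
// passed through a nonlinear activation (here the logistic sigmoid).
double neuron(const std::vector<double>& inputs,
              const std::vector<double>& weights,
              double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        sum += inputs[i] * weights[i];
    }
    return 1.0 / (1.0 + std::exp(-sum));  // sigmoid squashes the sum into (0, 1)
}

int main() {
    // Made-up example: three input features, three weights, one bias.
    std::vector<double> x = {0.5, -1.2, 3.0};
    std::vector<double> w = {0.8, 0.1, -0.4};
    double b = 0.2;
    std::cout << "neuron output: " << neuron(x, w, b) << "\n";
    return 0;
}

A deep network is nothing more than many such units arranged in layers; training adjusts the weights and biases so that the network's outputs match the labels in the dataset.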
Benefits of Deep Learning

Deep learning is a powerful tool for machine learning and has been shown to achieve state-of-the-art results in many areas. In this article, we'll explore some of the benefits of using deep learning in C++.

Deep learning allows machines to learn from data without being explicitly programmed. This is accomplished by training an artificial neural network on a large dataset. The neural network learns to recognize patterns in the data and can then be used to make predictions about new data.

Deep learning is particularly well suited for tasks that are hard to capture with hand-written rules, such as image recognition and classification, speech recognition, and natural language processing. Deep learning networks can learn to perform these tasks with high accuracy after being trained on a large dataset.

C++ is a great choice for implementing deep learning algorithms due to its performance and efficiency. C++ code can be optimized to run very quickly on modern CPUs and GPUs. Additionally, C++ lets you work directly with low-level details such as memory management, which can be important for getting the most out of your hardware.

There are many open source deep learning libraries available for C++, such as TensorFlow, Caffe2, and MXNet. These libraries make it easy to get started with deep learning in C++.

Applications of Deep Learning

Deep learning is a type of machine learning that can be used to model complex patterns in data. Deep learning algorithms are able to learn from data without being explicitly programmed to do so. This makes them well suited for tasks such as image recognition and natural language processing.

Deep learning algorithms have been shown to be effective at a variety of tasks, including:

- Image recognition
- Natural language processing
- Speech recognition
- Predicting consumer behavior
- Detecting fraud

Implementing Deep Learning in C++

Deep learning is a branch of machine learning that specializes in using data to train artificial neural networks. Neural networks are a type of algorithm that can learn to recognize patterns. Deep learning allows neural networks to learn from data in a way that is loosely similar to the way humans learn.

There are many programming languages that can be used to implement deep learning, but C++ is a good choice because it is fast and efficient. Additionally, there are many open source libraries available for C++ that make it easier to implement deep learning algorithms.

To get started with deep learning in C++, you will need to choose a deep learning library and install it on your computer. Then, you will need to create a dataset that you will use to train your neural network. After your dataset is created, you will need to write code that creates and trains your neural network. Finally, you will need to test your neural network on data that it has never seen before to make sure it is working correctly. (A toy end-to-end version of this workflow is sketched after the tips below.)

Tips for Implementing Deep Learning in C++

C++ is a powerful object-oriented language that enables developers to create sophisticated applications. In recent years, C++ has gained popularity in the field of deep learning due to its ability to handle complex algorithms and data structures.

When implementing deep learning in C++, there are a few things to keep in mind in order to get the most out of the language. First, it is important to use the right tools and libraries. Second, you should be familiar with the concepts of object-oriented programming and template metaprogramming. Finally, it helps to have a solid understanding of linear algebra and computational mathematics. By following these tips, you can maximize your productivity when implementing deep learning algorithms in C++.
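Following the workflow just described (create a dataset, build and train a model, then test it), here is a minimal, self-contained C++ sketch that trains a single sigmoid neuron by gradient descent to approximate the logical AND function. It is a from-scratch toy under assumed settings (the learning rate of 0.5 and the 5000 epochs are arbitrary illustrative choices), not a template for TensorFlow, Caffe2, or MXNet.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Toy "deep learning" workflow: a single sigmoid neuron learns logical AND
// by gradient descent on a squared-error loss.
int main() {
    // 1. Dataset: the four input pairs for AND and their labels.
    std::vector<std::vector<double>> X = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    std::vector<double> y = {0, 0, 0, 1};

    // 2. Model parameters, initialised to zero.
    double w0 = 0.0, w1 = 0.0, b = 0.0;
    const double lr = 0.5;  // learning rate (arbitrary illustrative choice)

    // 3. Training loop.
    for (int epoch = 0; epoch < 5000; ++epoch) {
        for (std::size_t i = 0; i < X.size(); ++i) {
            double z = w0 * X[i][0] + w1 * X[i][1] + b;
            double p = 1.0 / (1.0 + std::exp(-z));      // forward pass
            double grad = (p - y[i]) * p * (1.0 - p);   // d(0.5*(p - y)^2)/dz
            w0 -= lr * grad * X[i][0];                  // gradient descent updates
            w1 -= lr * grad * X[i][1];
            b  -= lr * grad;
        }
    }

    // 4. "Testing": outputs should approach 0, 0, 0 and 1.
    for (std::size_t i = 0; i < X.size(); ++i) {
        double z = w0 * X[i][0] + w1 * X[i][1] + b;
        std::cout << X[i][0] << " AND " << X[i][1] << " -> "
                  << 1.0 / (1.0 + std::exp(-z)) << "\n";
    }
    return 0;
}

A real project would stream a far larger dataset, delegate the linear algebra to a library, and evaluate on held-out data, but the skeleton (forward pass, loss gradient, parameter update, evaluation) is the same one the libraries automate.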
Troubleshooting Deep Learning Implementations in C++

If you're having trouble getting your deep learning implementation in C++ to work correctly, there are a few things you can try.

First, make sure that you are using the correct compile flags. The most important flags for deep learning implementations are -Ofast and -march=native; be aware that -Ofast relaxes strict IEEE floating-point semantics, so use -O3 instead if you need strictly standard-conforming arithmetic. These flags ensure that your code is optimized for your specific processor architecture. You may also need to specify additional linker flags, such as -lstdc++ and -lm.

Once you have the correct compiler flags set, you will need to ensure that your code is correctly parallelized. Deep learning algorithms depend heavily on matrix operations, which can be parallelized using libraries such as OpenMP and MPI (a minimal OpenMP sketch appears at the end of this article). Make sure that your code is correctly linked against these libraries and that the number of threads you are using is appropriate for your system.

If you are still having problems, there are a number of online forums and mailing lists where you can get help from other C++ programmers. You can also hire a programmer to help you with your specific issue.

Deep learning is a powerful tool that can be used to solve many complex problems. While it has traditionally been implemented in Python, there are many ways to incorporate deep learning into C++ programs. In this article, we have discussed some of the most popular deep learning frameworks and libraries that can be used for C++ development, and we have outlined the basic steps for building and training a simple neural network in C++. With the help of these tools, you can develop sophisticated deep learning applications in C++.

Further Reading

If you want to learn more about how to implement deep learning in C++, here are some useful resources:

- The Deep Learning 101 series by Corey Lewis is a great introduction to the basics of deep learning.
- For a more technical overview, check out Deep Learning 101: A Step-by-Step Guide for Beginners by Michael Nielsen.
- If you're looking for a more hands-on approach, consider the Deep Learning Nanodegree program from Udacity.
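To illustrate the parallelization advice in the troubleshooting section, here is a minimal OpenMP sketch of the kind of matrix-vector product that dominates neural-network workloads. The matrix size and the compile command in the comment are illustrative assumptions; any OpenMP-capable compiler should accept the pragma.

#include <cstddef>
#include <iostream>
#include <omp.h>
#include <vector>

// One possible compile line: g++ -O3 -march=native -fopenmp matvec.cpp -o matvec
int main() {
    const std::size_t rows = 1000, cols = 1000;   // illustrative sizes
    std::vector<double> A(rows * cols, 0.5);      // dense matrix, row-major
    std::vector<double> x(cols, 2.0), y(rows, 0.0);

    // Each row's dot product is independent of the others,
    // so the outer loop parallelizes cleanly across threads.
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(rows); ++i) {
        double sum = 0.0;
        for (std::size_t j = 0; j < cols; ++j) {
            sum += A[static_cast<std::size_t>(i) * cols + j] * x[j];
        }
        y[i] = sum;
    }

    std::cout << "y[0] = " << y[0]
              << ", max threads: " << omp_get_max_threads() << "\n";
    return 0;
}

In practice you would usually call into an optimized BLAS or one of the libraries named earlier rather than hand-rolling these loops, and you would tune the thread count (for example via the OMP_NUM_THREADS environment variable) to your machine.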
{"url":"https://reason.town/deep-learning-cpp/","timestamp":"2024-11-07T12:59:32Z","content_type":"text/html","content_length":"99127","record_id":"<urn:uuid:a9b2b3c7-075b-4d17-90c1-da79f6b2ae90>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00605.warc.gz"}
Re: Re: Simplifying with KroneckerDelta

• To: mathgroup at smc.vnet.net
• Subject: [mg101535] Re: [mg101462] Re: [mg101426] Simplifying with KroneckerDelta
• From: Bob Hanlon <hanlonr at cox.net>
• Date: Thu, 9 Jul 2009 01:59:29 -0400 (EDT)
• Reply-to: hanlonr at cox.net

sub = expr_*KroneckerDelta[c_, d_] :>
   Simplify[expr, c == d]*KroneckerDelta[c, d];

Exp[a - b] KroneckerDelta[a, b] /. sub

KroneckerDelta[a, b]

Bob Hanlon

---- "Francisco Rojas Fernández" <fjrojas at gmail.com> wrote:

That helps, but maybe I should have been more specific, because I actually have long and more involved expressions. What would be really useful for me in this case would be a replacement rule

krule = f[b_] KroneckerDelta[a_, b_] -> f[a] KroneckerDelta[a, b]

where f[ ] is any function. For example, if I had an expression like:

Exp[a - b] KroneckerDelta[a, b]

I would like Mathematica to give me

Exp[0] KroneckerDelta[a, b]

which after simplifying is of course just

KroneckerDelta[a, b]

Is there any way I can do this? I tried some ways but they didn't work (maybe writing f[b_] is not the right way?) Thanks in advance again for your help.

On Tue, Jul 7, 2009 at 7:43 AM, David Park <djmpark at comcast.net> wrote:
> Why not use a rule?
>
> krule = b_ KroneckerDelta[a_, b_] -> a KroneckerDelta[a, b]
>
> (m + n) KroneckerDelta[k, m + n] /. krule
>
> k KroneckerDelta[k, m + n]
>
> But the real work might be if you have a sum before the KroneckerDelta that
> only includes m + n as sub-terms. You could write a routine that collected
> on the KroneckerDelta forms, then wrapped Hold around the m+n sub-terms
> (both places), then applied the rule and released the Hold.
>
> David Park
> djmpark at comcast.net
> http://home.comcast.net/~djmpark/
>
> From: frojasf [mailto:fjrojas at gmail.com]
>
> Hello,
> Does anybody know how to tell Mathematica to use the KroneckerDelta in
> order to simplify expressions? For example, it would be great if it could
> receive something like this:
>
> (m+n) KroneckerDelta[k,m+n] and give
> k KroneckerDelta[k,m+n]
>
> Of course this is a simple example, but I have an expression which is about
> 200 terms long, so if it could use the delta to simplify things by itself,
> that would be great.
>
> Thanks to all in advance,
> Francisco
> --
> View this message in context:
> http://www.nabble.com/Simplifying-with-KroneckerDelta-tp24362541p24362541.html
> Sent from the MathGroup mailing list archive at Nabble.com.
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00199.html","timestamp":"2024-11-06T17:35:57Z","content_type":"text/html","content_length":"32755","record_id":"<urn:uuid:942b4b52-bbca-4b43-ae71-13cf89a31c3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00143.warc.gz"}