content
stringlengths
86
994k
meta
stringlengths
288
619
High Precision for Hard Processes (HP2 2024) Fragmentation of heavy quarks into heavy-flavoured hadrons receives both perturbative and non-perturbative contributions. We consider perturbative QCD corrections to heavy quark production in $e^+e^ -$ collisions to next-to-next-to-leading order accuracy in QCD with next-to-next-to-leading-logarithmic resummation of quasi-collinear and soft emissions. We study multiple matching schemes, and multiple regularisations of the soft resummation, and observe a significant dependence of the perturbative results on these ingredients, suggesting that NNLO+NNLL perturbative accuracy may not lead to real gains unless the interface with non-perturbative physics is properly analysed. We confirm previous evidence that $D^{*+}$ experimental data from CLEO/BELLE and from LEP are not reconcilable with perturbative predictions employing standard DGLAP evolution. We extract non-perturbative contributions from $e^+e^-$ experimental data for both $D$ and $B$ meson fragmentation. Such contributions can be used to predict heavy-quark fragmentation in other processes, e.g. DIS and proton-proton collisions.
{"url":"https://agenda.infn.it/event/35067/timetable/?view=standard_inline_minutes","timestamp":"2024-11-07T23:29:04Z","content_type":"text/html","content_length":"455900","record_id":"<urn:uuid:3a713477-dc0d-43d9-b9a5-15773d27395d>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00402.warc.gz"}
ECG Interpretation So let's start with the most basic way of interpreting the ECG the first thing you do is you have to check the voltage calibration what's this about this is about the amplification of the image that the ECG machine is going sometimes with arrhythmias we want to enlarge the complexes so we will double the the amplification of the image but usually for almost all routine ECGs the calibration is so-called 10 millimeters that's 10 little boxes and we'll show you examples of that and it's always good to check that to begin with because if they had been using the Machine at a higher amplification that complexes are gonna look bizarre they're gonna be big and so forth and you might make a mistake in reading so first make sure that that the calibration is correct usually the technician takes care of that that but it's there's a little box that shows you the calibration is okay and I'll point that out later the next thing is to determine the rhythm and how do you do that you're looking for P waves followed by qrs's in other words the normal progression remember starting up with the sinus node atrial depolarization P wave QRS ventricular depolarization T wave resetting once you know what the rhythm is in other words is this a normal sinus rhythm or is it not normal sinus rhythm but uh but in arrhythmia and we're gonna have whole lectures about the kinds of abnormal electrical events that can occur when it's not sinus rhythm you then calculate the heart rate which is a there's a normals there. I'm going to be going over the normals as we go along you then do the timing intervals what's the PR interval from the beginning of the P wave to the beginning of the QRS what's the QRS duration that's the period of ventricular depolarization and then how long does it take for the whole contraction and resetting of the ventricle from the Q wave to the end of the T wave we would then determine the electrical axis in other words what is the main vector force of the electrical depolarization wave what direction is it going in and there's a certain normal area for that and for example if you have abnormalities of a ventricular mass you can get abnormal of vectors abnormal electrical vectors we want to look at the p-wave morphology to see if it's normal we look at its voltage and its shape because certain abnormalities of p-wave morphology can occur with certain diseases. We want to do the same for the QRS morphology again certain diseases and for example heart blocks things that can lead to pacemakers will change the morphology of the QRS will then look at the ST segment and the T wave morphology ischemia lack of blood flow in the heart that can lead to angina or heart attacks changes the ST segment and T wave morphology and if we're lucky enough to have an earlier baseline EKG we compare it to see hey have there been changes is something going on here that's something acute so let's start here is a normal ECG notice in the upper left corner there is a little green box that's the standard and that standard is if you count the little tiny boxes ten little boxes each one of the big boxes has five smaller boxes so there'll be two of the big boxes which constitute ten small boxes each one of those boxes is one millimeter and corresponds to 1 millivolt of electrical activity so again this is the standard on all ECG machines and if you had set the standard to 1/2 then of course it would be each one of those little boxes would be less voltage if you doubled it they would be more voltage but usually again almost all EKGs use this green box of the 10 standard you'll notice how the leads are placed here on the left-hand side there's lead 1 2 & 3 remember zero Degree + 60 + 120 the next 3 leads AV RL + F AV r + 210 AV el - 30 AV f + 90 and then come the cordial leads the ones that are sticking through the heart like like needles in the sagittal plane v 1 v 2 v 3 4 5 and 6 and this is a normal ECG notice there's a p-wave in front of each QRS the QRS is are nice and narrow there's a nice T wave up right and and not way prolonged after each QRS this is sinus rhythm normal sinus rhythm that that's set off by the sinus node passes normally through the heart there's no evidence here of ischemia or heart attack or or hypertrophy of the heart muscle as I said you'll notice that there's sinus rhythm in the green box you'll see that each QRS is preceded by a P wave the a true the atrium depolarizes before the ventricle all the P waves are followed by a QRS there's no blockage of the beat as it goes down through the heart each QRS is preceded by a P and the peas are all identical they're upright in leads to and AVF and they're nice and narrow they're not prolonged if any of the comments just made the answer was no then you're talking about an arrhythmia and as I said we're going to have whole lectures on the arrhythmias later so right now we're just worrying about the normal also you notice how nicely of the QRS is progress in fact that whole strip along the bottom even though they're different leads it's continuous so you're actually seeing one set of P waves after another in order to obtain the heart rate you count the number of big boxes between two qrs's and you divide by 300 so if there were two big boxes between the two qrs's that would be 2 into 300 or 150 if there were three big boxes in between a QRS that would be a rate of 103 into 300 if there were four boxes in between the two qrs's that would be 75 4 into 375 you can also do it by counting the number of qrs's in 3 seconds remember the ECG is moving at a certain rate you can calculate a number of seconds and then multiplied by 20 but usually what we do is we use the rule of 300 mentioned before it's important to note that the normal heart beat is sixty to a hundred beats per minute so now we have a heart rate by the way the computer is almost always right on the heart rate again let's talk about the intervals so a normal PR interval is 0.2 seconds to 0.20 seconds that's three little boxes two five little boxes alright you remember there's five little boxes within the big bigger box so the normal PR interval from the beginning of the P wave to the beginning of the QRS is somewhere between three and five little boxes 0.122 0.20 seconds the normal QRS interval is less than 0.1 that's two and a half little boxes and the normal QT which is corrected for heart rate in a formula is somewhere between point three and point four six seconds remember the Qt from the beginning of the Q wave to the end of the T wave remember each small box is 0.04 seconds so a large box is five times 0.04 seconds or 0.20 seconds and there are five small boxes in each large box as I've said before again what about the axis well there's a general rule that most med students use and that is if the QRS is upright and leads one and two it's a normal axis you can actually calculate the axis because the axis is perpendicular to any lead where the R and the S or the upstroke and the down stroke are equal in this example lead three you see that pretty much the amount above the line and below the line is about the same so the axis is going to be 90 degrees from lead three so 90 degrees plus 120 lead 3 is 120 remember that's 210 and that would be in other words the axis would be towards AVR or minus 90 from 120 would be 30 that would be somewhere needly near lead 2 which is +60 well you can see how do we tell where is the maximum our wave well the maximum our wave is a round lead to it's there's no up right our wave and AV are so therefore the axis is actually something like a plus 30 so again the rule of thumb is you look to see where the amount of voltage up and down is equal it's 90 to the axis is 90 degrees from that then you look for the lead with the maximum R wave that's the direction because you could go this way on the 90 degrees or you could go that way what tells you which way you go is where's the maximum R wave in this one it's lead to and again normal axis is between minus 30 and plus 90 and if the axis is not between plus minus 30 and plus 90 then it's an axis deviation if it goes more - that's so-called left axis deviation if it goes more plus than 90 it's called right axis deviation and we'll talk about how that's used in reading various electrocardiographic diagnosis so again just to reiterate the axis for the mean frontal plane electrical vector of the heart is near the limb lead with the tallest R wave and perpendicular to the lead where the size of the upward deflection and the downward deflection are equal remember the upward deflection is called an R wave the downward deflection is called an S wave so let's take a look at the P wave itself the normal P wave is going to be three little boxes or less in duration remember that the PR interval that's the duration from the beginning of the P wave to the beginning of the QRS is going to be less than five little boxes but the length of the P wave itself should be only three little boxes and it should be upright in lead one and two and a negative deflection of less than one box wide or one box deep in v1 if the P wave in lead v1 is more negative than one box and wider than one box that suggests that the atrium is the left eight trim is enlarged so-called left atrial enlargement if the box is pointed and higher than two millimeters then and usually wider than two then that defines right atrial enlargement now these numbers are not anatomically perfect the echo and the MRI and so forth would be more perfect but they carry prognostic information they're very important because when they appear it really means that there's quite significant either left atrial dilatation or right atrial dilatation let's look at the R wave now the normal R wave should transit in the precordial lead starting with lead v1 there should be a very small R wave and then it gets a little bigger in v2 and somewhere between v3 and v4 you have a dominant R wave with not much s wave and then it progresses out to v6 with usually the maximum R wave somewhere in v4 five and six with the transition from more negative to positive somewhere around v3 or v for small Q's that is initial downward deflections of less than one little box can occur but that they are never longer than one box if they're wider than one box it suggests that there's been damage to the myocardium and the voltage should be within a normal range also the ST segment should be isoelectric that as it should be flat you can have a little bit of depression but if it's substantially depressed more than a tiny amount it suggests a number of things let's look at what it suggests if there's a sort of curved sagging of the ST segment as in this example that means the patient is usually taking digitalis did you tell us has that affect if there's a ski MIA or lack of blood flow you see a squared off flattening of the ST segment we see that with a positive exercise test and we see that when patients come in and have a so-called non-st elevation myocardial infarct and we'll talk more about those definitions later and then in hypo Kaleigh Mia where the potassium is low you may see a mildly down sloping ST segment the T wave is often flattened and as I mentioned before you may see a little additional wave after the T wave the you wave again here's the normal EKG look at the ST segments here they're all fine they're not depressed they're not elevated they're in exactly the right sequence.
{"url":"https://www.drmusmanjaved.com/2023/07/ecg-interpretation.html","timestamp":"2024-11-08T02:09:25Z","content_type":"application/xhtml+xml","content_length":"180848","record_id":"<urn:uuid:6cc3c45f-1d70-4f99-93e2-377189b0f4d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00583.warc.gz"}
NP on Logarithmic Space Download PDFOpen PDF in browserCurrent version NP on Logarithmic Space EasyChair Preprint 9555, version 9 8 pages•Date: July 21, 2023 P versus NP is considered as one of the most important open problems in computer science. This consists in knowing the answer of the following question: Is P equal to NP? It was essentially mentioned in 1955 from a letter written by John Nash to the United States National Security Agency. However, a precise statement of the P versus NP problem was introduced independently by Stephen Cook and Leonid Levin. Since that date, all efforts to find a proof for this problem have failed. Another major complexity classes are L and NL. Whether L = NL is another fundamental question that it is as important as it is unresolved. We prove that if L = NL, then every NP problem is in L with oracle access to L. This means that proving the separation from NP to L is as hard as proving that L is not equal to NL. Keyphrases: completeness, complexity classes, logarithmic space, polynomial time, reduction Links: https://easychair.org/publications/preprint/C5GJ Download PDFOpen PDF in browserCurrent version
{"url":"https://eraw.easychair.org/publications/preprint/C5GJ","timestamp":"2024-11-06T11:20:53Z","content_type":"text/html","content_length":"6618","record_id":"<urn:uuid:4cca5f2c-e32e-46f9-9f1c-cbe55d11ee97>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00079.warc.gz"}
Search Constraints Number of results to display per page Search Results • Video • Others This sequence is ideal for students or early data science professionals who want to strengthen their knowledge of fundamental probability and statistics concepts. Mastery of Mathematical Fundamentals is a prerequisite. Course related: Resource Type: • Others This learning material covers the topic of differentiation by considering first principles (gradients of chords and tangents). Course related: Resource Type: • Others
{"url":"https://oer.lib.polyu.edu.hk/catalog?f%5Bpolyu_course_relatedTo_sim%5D%5B%5D=AMA1110+Basic+Mathematics+I+%E2%80%93+Calculus+and+Probability+%26+Statistics&locale=en","timestamp":"2024-11-05T12:31:36Z","content_type":"text/html","content_length":"39678","record_id":"<urn:uuid:ce4d7688-cbac-48be-aa86-8c97109eb788>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00371.warc.gz"}
Solving Quadratic Equations by Completing the Square: Combination with analytic geometry A circle has the following equation:\( x^2-8ax+y^2+10ay=-5a^2 \)Point O is its center and is in the second quadrant (\( a\neq0 \)) Use the completing the square method to find the center of the circle and its radius in terms of \( a \). Which of them are O and M in the figure: A circle has the following equation:$x^2-8ax+y^2+10ay=-5a^2$Point O is its center and is in the second quadrant ($aeq0$) Use the completing the square method to find the center of the circle and its radius in terms of $a$. Let's recall that the equation of a circle with its center at $O(x_o,y_o)$ and its radius $R$ is: $(x-x_o)^2+(y-y_o)^2=R^2$Now, let's now have a look at the equation for the given circle: $x^2-8ax+y^2+10ay=-5a^2$We will try rearrange this equation to match the circle equation, or in other words we will ensure that on the left side is the sum of two squared binomial expressions, one for x and one for y. We will do this using the "completing the square" method: $(c\pm d)^2=c^2\pm2cd+d^2$We'll deal separately with the part of the equation related to x in the equation (underlined): We'll isolate these two terms from the equation and deal with them separately. We'll present these terms in a form similar to the form of the first two terms in the shortcut formula (we'll choose the subtraction form of the binomial squared formula since the term in the first power we are dealing with is$8ax$ , which has a negative sign): $\underline{ x^2-8ax} \textcolor{blue}{\leftrightarrow} \underline{ c^2-2cd+d^2 }\\ \downarrow\\ \underline{\textcolor{red}{x}^2\stackrel{\downarrow}{-2 }\cdot \textcolor{red}{x}\cdot \textcolor {green}{4a}} \textcolor{blue}{\leftrightarrow} \underline{ \textcolor{red}{c}^2\stackrel{\downarrow}{-2 }\textcolor{red}{c}\textcolor{green}{d}\hspace{2pt}\boxed{+\textcolor{green}{d}^2}} \\$Notice that compared to the short formula (which is on the right side of the blue arrow in the previous calculation), we are actually making the comparison: $\begin{cases} x\textcolor{blue}{\leftrightarrow}c\\ 4a\textcolor{blue}{\leftrightarrow}d \end{cases}$ Therefore, if we want to get a squared binomial form from these two terms (underlined in the calculation), we will need to add the term$(4</span><span class="katex">a)^2$, but we don't want to change the value of the expression, and therefore we will also subtract this term from the That is, we will add and subtract the term (or expression) we need to "complete" to the binomial squared form, In the following calculation, the "trick" is highlighted (two lines under the term we added and subtracted from the expression), Next, we'll put the expression in the squared binomial form the appropriate expression (highlighted with colors) and in the last stage we'll simplify the expression: $x^2-2\cdot x\cdot 4a\\ x^2-2\cdot x\cdot4a\underline{\underline{+(4a)^2-(4a)^2}}\\ \textcolor{red}{x}^2-2\cdot \textcolor{red}{x}\cdot \textcolor{green}{4a}+(\textcolor{green}{4a})^2-16a^2\\ \ downarrow\\ \boxed{ (\textcolor{red}{x}-\textcolor{green}{4a})^2-16a^2}\\$Let's summarize the steps we've taken so far for the expression with x. $x^2-8ax+y^2+10ay=-5a^2 \\ \textcolor{red}{x}^2-2\cdot \textcolor{red}{x}\cdot\textcolor{green}{4a}\underline{\underline{+\textcolor{green}{(4a)}^2-(4a)^2}}+y^2+10ay=-5a^2\\ \downarrow\\ (\textcolor {red}{x}-\textcolor{green}{4a})^2-16a^2+y^2+10ay=-5a^2\\$We'll continue and do the same thing for the expressions with y in the resulting equation: (Now we'll choose the addition form of the squared binomial formula since the term in the first power we are dealing with $10ay$ has a positive sign) $(x-4a)^2-16a^2+\underline{y^2+10ay}=-5a^2\\ \downarrow\\ (x-4a)^2-16a^2+\underline{y^2+2\cdot y \cdot 5a}=-5a^2\\ (x-4a)^2-16a^2+\underline{y^2+2\cdot y \cdot 5a\underline{\underline{+(5a)^2-(5a)^ 2}}}=-5a^2\\ \downarrow\\ (x-4a)^2-16a^2+\underline{\textcolor{red}{y}^2+2\cdot\textcolor{red}{ y}\cdot \textcolor{green}{5a}+\textcolor{green}{(5a)}^2-25a^2}=-5a^2\\ \downarrow\\ (x-4a)^2-16a^2+(\ textcolor{red}{y}+\textcolor{green}{5a})^2-25a^2=-5a^2\\ \boxed{(x-4a)^2+(y+5a)^2=36a^2}$In the last step, we move the free numbers to the second side and combine like terms. Now that the given circle equation is in the form of the general circle equation mentioned earlier, we can easily extract both the center of the given circle and its radius: In the last step, we made sure to get the exact form of the general circle equation—that is, where only subtraction is performed within the squared expressions (emphasized with an arrow) Therefore, we can conclude that the center of the circle is at:$\boxed{O(x_o,y_o)\leftrightarrow O(4a,-5a)}$ and extract the radius of the circle by solving a simple equation: Remember that the radius of the circle, by its definition is the distance between any point on the diameter and the center of the circle. Since it is positive, we must disqualify one of the options we got for the radius. To do this, we will use the remaining information we haven't used yet—which is that the center of the given circle O is in the second quadrant. O(x_o,y_o)\leftrightarrow x_o<0,\hspace{4pt}y_o>0 (Or in words: the x-value of the circle's center is negative and the y-value of the circle's center is positive) We concluded that a<0 and since the radius of the circle is positive we conclude that necessarily: In the given problem, we are asked to determine where the center of a certain circle is located in relation to the other circle, To do this, we need to find first the characteristics of the given circles, that is - their center coordinates and their radius, let's remember first that the equation of a circle with center at In addition, let's remember that we can easily determine whether a certain point is inside/outside or on a given circle by calculating the distance of the point from the center of the circle in question and comparing the result to the given circle's radius, Let's now return to the problem and the equations of the given circles and examine them: and find its center and radius, we'll do this using the "completing the square" method, We'll try to give this equation a form identical to the form of the circle equation, that is - we'll ensure that on the left side there will be the sum of two squared binomial expressions, one for x and one for y, we'll do this using the "completing the square" method: To do this, first let's recall again the shortened multiplication formulas for binomial squared: and we'll deal separately with the part of the equation related to x in the equation (underlined): We'll continue, for convenience and clarity of discussion - we'll separate these two terms from the equation and deal with them separately, We'll present these terms in a form similar to the form of the first two terms in the shortened multiplication formula (we'll choose the subtraction form of the binomial squared formula since the term with the first power 4x in the expression we're dealing with is negative): $\underline{ x^2-4x}+y^2+6y=12 \\ \underline{ x^2-4x}\textcolor{blue}{\leftrightarrow} \underline{ c^2-2cd+d^2 }\\ \downarrow\\ \underline{\textcolor{red}{x}^2\stackrel{\downarrow}{-2 }\cdot \ textcolor{red}{x}\cdot \textcolor{green}{2}} \textcolor{blue}{\leftrightarrow} \underline{ \textcolor{red}{c}^2\stackrel{\downarrow}{-2 }\textcolor{red}{c}\textcolor{green}{d}\hspace{2pt}\boxed{+\ textcolor{green}{d}^2}} \\$ It can be noticed that compared to the shortened multiplication formula (on the right side of the blue arrow in the previous calculation) we are actually making the analogy: Therefore, we'll identify that if we want to get from these two terms (underlined in the calculation) a binomial squared form, we'll need to add to these two terms the term However, we don't want to change the value of the expression in question, and therefore - we'll also subtract this term from the expression, that is - we'll add and subtract the term (or expression) we need to "complete" to the binomial squared form, Next - we'll insert into the binomial squared form the appropriate expression (highlighted using colors) and in the last stage we'll simplify the expression further: $x^2-2\cdot x\cdot 2\\ x^2-2\cdot x\cdot2\underline{\underline{+2^2-2^2}}\\ \textcolor{red}{x}^2-2\cdot \textcolor{red}{x}\cdot \textcolor{green}{2}+\textcolor{green}{2}^2-4\\ \downarrow\\ \boxed{ (\ Let's summarize the development stages so far for the expression related to x, we'll do this now within the given circle equation: We'll continue and perform an identical process for the expressions related to y in the resulting equation: (Now we'll choose the addition form of the binomial squared formula since the term with the first power 6y in the expression we're dealing with is positive) $(x-2)^2-4+\underline{y^2+6y}=12\\ \downarrow\\ (x-2)^2-4+\underline{y^2+2\cdot y \cdot 3}=12\\ (x-2)^2-4+\underline{y^2+2\cdot y \cdot 3\underline{\underline{+3^2-3^2}}}=12\\ \downarrow\\ (x-2)^2-4+ \underline{\textcolor{red}{y}^2+2\cdot\textcolor{red}{ y}\cdot \textcolor{green}{3}+\textcolor{green}{3}^2-9}=12\\ \downarrow\\ ( (x-2)^2-4+(\textcolor{red}{y}+\textcolor{green}{3})^2-9=12\\ C_O:\ boxed{ (x-2)^2+(y+5a)^2=25}$ In the last stage, we moved the free numbers to the other side and combined similar terms, Now that we've changed the given circle equation to the form of the general circle equation mentioned earlier, we can easily extract both the center of the given circle and its radius: In the last stage, we made sure to get the exact form of the general circle equation - that is, where only subtraction is performed within the squared expressions (highlighted by arrow) Therefore we can conclude that the center of the circle is at point: and extract the circle's radius by solving a simple equation: Now let's approach the equation of the given second circle and find its center and radius through an identical process, here we'll do it in parallel for both variables: $C_M: x^2+2x+y^2-2y=7 \\ \downarrow\\ C_M: x^2+2\cdot x\cdot 1+y^2-2\cdot y\cdot 1=7 \\\\ \downarrow\\ C_M: (x+1)^2-1^2+(y-1)^2-1^2=7 \\ C_M:\boxed{ (x+1)^2+(y-1)^2=9} \\ \downarrow\\ C_M:\boxed{ (x- (1))^2+(y-1)^2=3^2} \\\\$ Therefore we'll conclude that the circle's center and radius are: Now in order to determine which of the options is the most correct, that is - to understand where the centers of the circles are in relation to the circles themselves, all we need to do is calculate the distance between the centers of the circles (using the distance formula between two points) and check the result in relation to the radii of the circles, let's first present the data of the two Let's remember that the distance between two points in a plane with coordinates: And therefore, the distance between the centers of the circles is: That is, we got that the distance between the centers of the circles is 5, Let's note that the distance between the centers of the circles (And this follows from the definition of a circle as the set of all points in a plane that are at a distance equal to the radius of the circle from the center of the circle, therefore necessarily a point at a distance from the center of the circle equal to the radius of the circle - is on the circle) In addition, let's note that the distance between the centers of the circles In the given problem, we are asked to determine where a certain point is located in relation to a given circle, To do this, we need to first find the characteristics of the given circle, namely- its center and radius, Let's remember first that the equation of a circle with center at point Additionally, let's recall that we can easily determine whether a certain point is inside/outside the circle or on it, by calculating the distance of the point from the center of the circle in question and comparing the result to the given circle's radius, Let's now return to the problem and the given circle equation and examine them: Let's find its center and radius, we'll do this using the "completing the square" method, We'll try to give this equation a form identical to the general circle equation, meaning- we'll ensure that on the left side there will be a sum of two squared binomial expressions, one for x and one for y, we'll do this using the "completing the square" method: For this, let's first recall the shortened multiplication formulas for squared binomials: We'll continue, for convenience and clarity of discussion- we'll separate these two terms from the equation and deal with them separately, We'll present these terms in a form similar to the form of the first two terms in the shortened multiplication formula (we'll choose the addition form of the squared binomial formula since the term with the first power we're dealing with 8x has a positive sign): $\underline{ x^2+8x}+y^2+6y=12 \\ \underline{ x^2+8x}\textcolor{blue}{\leftrightarrow} \underline{ c^2+2cd+d^2 }\\ \downarrow\\ \underline{\textcolor{red}{x}^2\stackrel{\downarrow}{+2 }\cdot \ textcolor{red}{x}\cdot \textcolor{green}{4}} \textcolor{blue}{\leftrightarrow} \underline{ \textcolor{red}{c}^2\stackrel{\downarrow}{+2 }\textcolor{red}{c}\textcolor{green}{d}\hspace{2pt}\boxed{+\ textcolor{green}{d}^2}} \\$ We can notice that compared to the shortened multiplication formula (on the right side of the blue arrow in the previous calculation) we are actually making the analogy: Therefore, we'll identify that if we want to get from these two terms (underlined in the calculation) a squared binomial form, However, we don't want to change the value of the expression in question, and therefore- we'll also subtract this term from the expression, meaning- we'll add and subtract the term (or expression) needed to "complete" to a squared binomial form, In the following calculation, the "trick" is highlighted (two lines under the term we added and subtracted from the expression) , Next- we'll insert into the squared binomial form the appropriate expression (highlighted with colors) and in the last stage we'll further simplify the expression: $x^2+2\cdot x\cdot 4\\ x^2+2\cdot x\cdot4\underline{\underline{+4^2-4^2}}\\ \textcolor{red}{x}^2+2\cdot \textcolor{red}{x}\cdot \textcolor{green}{4}+\textcolor{green}{4}^2-16\\ \downarrow\\ \boxed{ (Now we'll choose the subtraction form of the squared binomial formula since the term with the first power we're dealing with 4y has a negative sign) $(x+4)^2-16+\underline{y^2-4y}=-4\\ \downarrow\\ (x+4)^2-16+\underline{y^2-2\cdot y \cdot 2}=-4\\ (x+4)^2-16+\underline{y^2-2\cdot y \cdot 2\underline{\underline{+2^2-2^2}}}=-4\\ \downarrow\\ (x+4)^ 2-16+\underline{\textcolor{red}{y}^2-2\cdot\textcolor{red}{ y}\cdot \textcolor{green}{2}+\textcolor{green}{2}^2-4}=-4\\ \downarrow\\ (x+4)^2-16+(\textcolor{red}{y}-\textcolor{green}{2})^2-4=-4\\ \ boxed{ (x+4)^2+(y-2)^2=16}$ Now that we've changed the given circle equation to the form of the general circle equation mentioned earlier, we can extract from the given equation both the center of the given circle and its radius simply: In the last stage, we made sure to get the exact form of the general circle equation - meaning- where only subtraction is performed within the squared expressions (highlighted by arrow) Therefore we can conclude that the center of the circle is at point : And extract the circle's radius by solving a simple equation: Now in order to determine which of the options is most correct, meaning- to understand where the given point is located: In relation to the given circle, all we need to do is to calculate the distance between the given point and the center of the given circle (using the distance formula between two points) and check the result in relation to the circle's radius , first- Let's remember that the distance between two points in a plane with coordinates : And therefore, the distance between the given point and the center of the given circle is: $\begin{cases} O(-4,2)\\ A(0,2) \end{cases}\\ \downarrow\\ d_{OA}=\sqrt{(-4-0)^2+(2-2)^2} \\ d_{OA}=\sqrt{16+0} =\sqrt{16} \\ \boxed{d_{OA}=4}$ Meaning we got that the distance between the given point and the center of the given circle is 4, Let's note that the distance between the given point and the center of the given circle$d_{OA}$equals exactly the circle's radius : (This follows from the definition of a circle as the set of all points in a plane that are at a distance equal to the circle's radius from the circle's center, therefore necessarily a point located at a distance from the circle's center equal to the circle's radius - is on the circle)
{"url":"https://www.tutorela.com/math/completing-the-square-in-a-quadratic-equation/examples-exercises/solving-quadratic-equations-by-completing-the-square--combination-with-analytic-geometry","timestamp":"2024-11-02T11:30:57Z","content_type":"text/html","content_length":"545063","record_id":"<urn:uuid:f5a20da7-3246-4c20-9da5-e62847582c4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00814.warc.gz"}
Poisson Geometry and Artin-Schelter Regular Algebras Schedule for: 24w5505 - Poisson Geometry and Artin-Schelter Regular Algebras Beginning on Sunday, October 13 and ending Friday October 18, 2024 All times in Hangzhou, China time, CST (UTC+8). Sunday, October 13 14:00 - 18:00 Check-in begins at 14:00 on Sunday and is open 24 hours (Front desk - Yuxianghu Hotel(御湘湖酒店前台)) Dinner ↓ 18:00 - 20:00 A set dinner is served daily between 5:30pm and 7:30pm in the Xianghu Lake National Tourist Resort. (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Monday, October 14 Breakfast ↓ - Breakfast is served daily between 7 and 9am in the Xianghu Lake National Tourist Resort (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) - Introduction and Welcome (Lecture Hall - Academic island(定山院士岛报告厅)) Quanshui Wu: Skew Calabi-Yau algebras and Poisson algebras via filtered deformations ↓ For any positively filtered algebra, the property of skew Calabi-Yau or having Van den Bergh duality can be lifted as usual, but not for Calabi-Yau property. Calabi-Yau property often emerges 09:30 form the deformation of unimodular Poisson structure. Suppose A is a filtered algebra such that the associated graded algebra gr(A) is commutative Calabi-Yau. Then gr(A) has a canonical Poisson - structure with a modular derivation. We describe the connection between the Nakayama automorphism of A and the modular derivation of gr(A) by using homological determinants as a bridge. In 10:30 particular, it is proved that A is Calabi-Yau if and only if gr(A) is unimodular as Poisson algebra under some mild assumptions. As an application, we derive that the ring of differential operators over a smooth variety is Calabi-Yau. I will start from the definitions of (skew) Calabi-Yau algebras and homological determinants of (Hopf) actions on them. Some applications will also be given in the talk. This talk is based on a joint work with Ruipeng Zhu. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Jianghua Lu: Polynomial integrable systems and cluster structures ↓ We present a general framework for constructing polynomial integrable systems with respect to linearizations of Poisson varieties that admit log-canonical coordinate systems. Our construction 11:00 is in particular applicable to Poisson varieties with compatible cluster or generalized cluster structures. As special cases, we consider an arbitrary standard complex semi- simple Poisson Lie - group G or Schubert cells in the flag variety of G, equipped with the standard Poisson structures and the compatible Berenstein-Fomin-Zelevinsky cluster structures, as well as the dual Poisson 12:00 Lie group of GL(n, C) equipped with the Gekhtman-Shapiro-Vainshtein generalized cluster structure. In each of the three cases, we show that every extended cluster in the respective cluster structure gives rise to a polynomial integrable system with respect to linearizations of the Poisson structures. This is joint work with Yanpeng Li and Yu Li. (Lecture Hall - Academic island(定山院士岛报告厅)) Lunch ↓ - Lunch is served daily between 11:30am and 1:30pm in the Xianghu Lake National Tourist Resort (Dining Hall - Academic island(定山院士岛餐厅)) Mykola Matviichuk: New quantum projective spaces from deformations of q-polynomial algebras ↓ 13:45 I will discuss how to construct a large collection of “quantum projective spaces”, in the form of Koszul, Artin-Schelter regular quadratic algebras with the Hilbert series of a polynomial ring. - I will do so by starting with the toric ones (the q-polynomial algebras), and then deforming their relations using a diagrammatic calculus, proving unobstructedness of such deformations under 14:45 suitable nondegeneracy conditions. Time permitting, I will show that these algebras coincide with the canonical quantizations of corresponding families of quadratic Poisson structures. This provides new evidence to Kontsevich's conjecture about convergence of his deformation quantization formula. This is joint work with Brent Pym and Travis Schedler. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Ryo Kanda: Symplectic leaves for the Feigin-Odesskii-Polishchuk Poisson bracket ↓ 15:00 This talk is based on joint work with Alex Chirvasitu and S. Paul Smith. Feigin and Odesskii's elliptic algebras constitute a deformation of the polynomial algebra, which induces a Poisson - structure on projective space. Hua and Polishchuk showed that this Poisson structure is the same as the one Polishchuk defined in abstract terms. We describe the symplectic leaves for this 16:00 Poisson structure in terms of higher secant varieties to an elliptic curve. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (soft drink only) (Lecture Hall - Academic island(定山院士岛报告厅)) Jason Gaddis: Log ozone groups of polynomial Poisson algebras ↓ 16:15 The ozone group of an associative algebra $A$ is defined as the group of automorphisms of $A$ which fix every element of its center. The ozone group has been utilized to study the center of PI - skew polynomial rings, and to characterize skew polynomial rings in the class of connected graded algebras. In this talk I will discuss work on adapting these ideas to polynomial Poisson 17:15 algebras in positive characteristic. This includes connections between the ozone group and the unimodularity condition, centers of skew symmetric Poisson structures, and work on characterizing skew symmetric Poisson structures. This is joint work with Kenneth Chan, Robert Won, and James J. Zhang. (Lecture Hall - Academic island(定山院士岛报告厅)) - Dinner (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Tuesday, October 15 Breakfast ↓ - Breakfast is served daily between 7 and 9am in the Xianghu Lake National Tourist Resort (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Dan Rogalski: Homological Integrals for Weak Hopf Algebras ↓ 09:30 The integral is an important structure in a finite-dimensional Hopf algebra. Lu, Wu, and Zhang generalized this to define a homological integral for any Artin-Schelter Gorenstein Hopf algebra. - This homological integral has many applications in the study of Hopf algebras of small GK-dimension. A weak Hopf algebra is a generalization of a Hopf algebra in which the comultiplication does 10:30 not necessarily preserve the unit, and the counit is not necessarily multiplicative, but weaker axioms are satisfied. Weak Hopf algebras arise naturally in the study of tensor categories, for example. We report on joint work with Rob Won and James Zhang that shows how to define a homological integral for an AS Gorenstein weak Hopf algebra, and discuss its applications. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Rui Loja Fernandes: Non-formal Deformation Quantization and 3-Associativity ↓ 11:00 This is a report on my current work with Alejandro Cabrera (Rio de Janeiro) on star products given by semi-classical Fourier integral operators. I will sketch our definition of these types of - star products and discuss our main result: the Lagrangian underlying such a star product is the graph of the multiplication of a local symplectic groupoid integrating the deformed Poisson 12:00 structure. As a consequence, I will argue that, in general, one should expect quantization to involve partial algebras rather than algebras. (Lecture Hall - Academic island(定山院士岛报告厅)) - Group Photo (Academic island(定山院士岛)) - Lunch (Dining Hall - Academic island(定山院士岛餐厅)) Frank Moore: Geometry of some Artin-Schelter Regular Algebras of Dimension Four ↓ 13:45 In his Master's Thesis, Vashaw identified two interesting quadratic AS-regular algebras of dimension four, which we call R and S, which were graded by groups of order 16 such that the identity - component was also an AS-regular algebra. In joint work with Goetz, Kirkman, and Vashaw, we study the geometry (namely, the point and line schemes and their incidence relations) of the algebras 14:15 R and S, as well as an algebra related to S which we denote by T. (Lecture Hall - Academic island(定山院士岛报告厅)) Jiwei He: The adjunction map associated to a semisimple Hopf algebra action ↓ 14:15 Let $H$ be a semisimple Hopf algebra, and let $e$ be the integral of $H$ such that $\varepsilon(e)=1$. Suppose that $H$ acts on an algebra $A$. Let $A\#H$ be the smash product, and let $A^H$ be - the invariant subalgebra of $A$. There is a natural $A\#H$-bimodule map $\beta_{A,H}:A\otimes_{A^H} A\longrightarrow A\#H$, defined by $\beta_{A,H}(a\otimes b)=(a\#e)(1\#b)$. We call $\beta_ 14:45 {A,H}$ the adjunction map associated to the $H$-action on $A$. In this talk, I will report some categorical properties of $A^H$ determined by $\beta_{A,H}$. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Stéphane Launois: Derivations of quantum algebras ↓ - In this talk, I will discuss derivations of a class of noncommutative polynomial algebras, the so-called, quantum nilpotent algebras, and their primitive quotients. This is joint work in 16:00 progress with Samuel Lopes (Porto) and Isaac Oppong (Greenwich). (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (soft drink only) (Lecture Hall - Academic island(定山院士岛报告厅)) Manuel Reyes: When is a Koszul algebra a domain? ↓ 16:15 In the early 1990s, Artin, Tate, and Van den Bergh conjectured that all Artin-Schelter regular algebras are domains. To date there are not many available tools to tackle this conjecture, even - in the special case of Koszul algebras. In this talk I will discuss joint work with Daniel Rogalski that provides one such tool that may be helpful: a necessary and sufficient condition for a 17:15 Koszul algebra to be a domain. The condition is stated in terms of properties of syzygy modules over the quadratic dual algebra. These techniques are also sufficient to prove that graded twisted CY-2 algebras defined by many quivers are prime. (Lecture Hall - Academic island(定山院士岛报告厅)) - Dinner (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Wednesday, October 16 - Breakfast (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Alexey Bondal: Poisson brackets and non-commutative deformations of algebraic varieties ↓ - I will survey some aspects of the theory of holomorphic Poisson brackets on projective algebraic varieties and related noncommutative deformations. In particular, some conjectures about these 10:30 objects will be discussed. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Shinnosuke Okawa: Noncommutative del Pezzo surfaces via AS-regular algebras ↓ 11:00 Noncommutative deformations of the projective plane and those of the quadric are obtained from 3-dimensional AS-regular quadratic Z-algebras and cubic Z-algebras, respectively. In this talk I - will explain how these classes of Z-algebras arise from helices of the derived category of the commutative P2 and the commutative quadric, respectively. This observation allows us to introduce 12:00 (infinitely) many classes of AS-regular Z-algebras, with which we can cover all deformation types of del Pezzo surfaces equally well. I will illustrate this with some examples and then try to summarize what we know about and expect from these Z-algebras. (Lecture Hall - Academic island(定山院士岛报告厅)) - Lunch (Dining Hall - Academic island(定山院士岛餐厅)) - Free afternoon (IASM will offer a free guiding tour including dinner) (Academic island(定山院士岛)) Thursday, October 17 - Breakfast (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Izuru Mori: ASF-regular Z-algebras and noncommutative quadric hypersurfaces ↓ 09:30 This talk is based on a joint work with Adam Nyman. In this talk, we will define an ASF-regular Z-algebra and characterize a noncommutative projective scheme associated to an ASF-regular - Z-algebra. As an application, we will show that a skew quadric hypersurface has an ASF-regular Z-algebra as a homogeneous coordinate ring. If time permits, we will discuss noncommutative conics 10:30 and noncommutative quadric surfaces. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Xingting Wang: Poisson Valuations ↓ - We will discuss Poisson valuation and its application in computing Poisson automorphism groups and other related topics. It is a joint work with Hongdi Huang, Xin Tang, and James Zhang. (Lecture Hall - Academic island(定山院士岛报告厅)) - Lunch (Dining Hall - Academic island(定山院士岛餐厅)) Guisong Zhou: Connected Hopf algebras of finite Gelfand-Kirillov dimension ↓ 13:45 Connected Hopf algebras of finite Gelfand-Kirillov dimension (over a field of characteristic zero) can be viewed as generalizations of the universal enveloping algebras of finite dimensional - Lie algebras as well as the noncommutative counterpart of unipotent algebraic groups. They enjoy many nice ring-theoretical and homological properties. Particularly, they are Noetherian 14:15 domains, Artin-Schelter regular and twisted Calabi-Yau. In this talk, I will discuss the structure of them and their coideal subalgebras from the perspective of Ore extensions. (Lecture Hall - Academic island(定山院士岛报告厅)) Xin Tang: Cohomology for Some Unimodular Poisson Polynomial Algebras in Three Variables ↓ - We will present results on the computation of Poisson cohomology groups for several classes of unimodular Poisson polynomial algebras in three variables. If time permits, we will also discuss a 14:45 couple of technical tools introduced to aid the computation. This is joing work with Hongdi Huang, Xingting Wang and James Zhang. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Ulrich Krähmer: The ring of differential operators on a monomial curve is a Hopf algebroid ↓ 15:00 The ring of differential operators on a cuspidal curve whose coordinate ring is a numerical semigroup algebra is shown to be a cocommutative and cocomplete left Hopf algebroid, which - essentially means that the category of D-modules is closed monoidal. If the semigroup is symmetric so that the curve is Gorenstein, it is a full Hopf algebroid (admits an antipode), which means 16:00 that the subcategory of those D-modules that are finite rank vector bundles over the curve is rigid. Based on joint work with Myriam Mahaman. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (soft drink only) (Lecture Hall - Academic island(定山院士岛报告厅)) Chunyi Li: Higher dimensional moduli spaces on the Kuznetsov components of cubic/Fano threefolds ↓ 16:15 Moduli spaces of stable sheaves on Fano threefolds are known to exhibit pathological behavior in general. Meanwhile, for certain specific cases—such as ideal sheaves of curves with small degree - and genus in the cubic threefold, or moduli spaces of lower-rank ACM bundles—these spaces are well-behaved. From a modern derived categorical perspective, we have the so-called Kuznetsov 17:15 component $Ku(X)$ in $D(X)$. The well-behaved moduli spaces mentioned above actually parametrize stable objects within $Ku(X)$. In this talk, I will begin by recapping this framework with a detailed overview of known results. I will then present our recent work on higher-dimensional moduli spaces of stable objects in $Ku(X)$. (Lecture Hall - Academic island(定山院士岛报告厅)) - Dinner (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Friday, October 18 - Breakfast (Restaurant - Yuxianghu Hotel(御湘湖酒店餐厅)) Ellen Kirkman: Constructing Artin-Schelter regular algebras with Hopf algebra actions ↓ 09:30 Results of Etingof and Walton show that there are algebras $A$ (e.g. commutative domains) with no quantum symmetries, i.e. if $H$ is a semisimple Hopf algebra acting inner faithfully on $A$, - then $H$ is a group algebra. We discuss circumstances, where given a Hopf algebra $H$, an AS regular algebra $A$ that supports a non-trivial $H$ action can be constructed. In some cases the 10:30 subalgebra of invariants is also AS regular. (Lecture Hall - Academic island(定山院士岛报告厅)) - Coffee Break (Lecture Hall - Academic island(定山院士岛报告厅)) Honglei Lang: The Lie 2-algebra of multiplicative forms on a quasi-Poisson groupoid ↓ 11:00 We present a construction of weak graded Lie 2-algebras associated with quasi-Poisson groupoids. We also establish a morphism between this weak graded Lie 2-algebra of multiplicative forms and - the strict graded Lie 2-algebra of multiplicative multivector fields, allowing us to compare and relate different aspects of Lie 2-algebra theory within the context of quasi-Poisson geometry. 12:00 As an infinitesimal analogy, we explicitly determine the associated weak graded Lie 2-algebra structure of IM forms for any quasi-Lie bialgebroid. This is joint work with Zhuo Chen and Zhangju (Lecture Hall - Academic island(定山院士岛报告厅)) - Lunch (Dining Hall - Academic island(定山院士岛餐厅))
{"url":"http://www.birs.ca/events/2024/5-day-workshops/24w5505/schedule","timestamp":"2024-11-08T08:22:52Z","content_type":"application/xhtml+xml","content_length":"43403","record_id":"<urn:uuid:8e7aaef5-0c60-4617-b115-3f1f4d2841f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00412.warc.gz"}
How to toggle all bits after MSB Problem statement Given a number n, toggle all bits after the most significant bit including the most significant bit. Example 1 • Input: n=11 • Output: 4 Example 2 • Input: n=15 • Output: 0 To toggle a specific bit, we take the XOR of the bit with 1. We can achieve this two ways: 1. By using a bit mask. 2. By looping via bit by bit until the MSB is reached. Here, we discuss the first approach by using a bit mask. The steps of the first approach are as follows: 1. Find the number of bits of the number. The bit mask is 2^n - 1. We can find the number of bits of the number by taking the log of the number and adding one to it. 2. Do a bitwise XOR with the bit mask and the number. Let’s understand this with the help of an example. Consider n=11. The binary representation of 11 is 1011, that is, 4 bits are used to represent 11. The bit mask is 2^4 - 1, that is, 15. • 11: 1011 • 15: 1111 • 11 XOR 15: 0100 Hence, the output is 4. using namespace std; int toggle(int num) { int n = (int)log2(num) + 1; int mask = pow(2, n) - 1; return num ^ mask; int main(){ int num = 11; cout << toggle(num); • Line 5: The toggle function is initialized. • Line 6: We get the noOfBits by taking the log base 2 of the num and adding 1 to it. • Line 7: We get the bit mask by taking 2 raised to the power of noOfBits and then subtracting by 1 • Lines 11–15: We initialize the num and print the resulting output.
{"url":"https://www.educative.io/answers/how-to-toggle-all-bits-after-msb","timestamp":"2024-11-11T11:20:23Z","content_type":"text/html","content_length":"145974","record_id":"<urn:uuid:3143b08e-7c18-4d68-b79d-b65913b9ec99>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00238.warc.gz"}
Materials | Titanium | Sigma Aerospace Metals View our materials below. View our materials below. Titanium is a chemical element with the symbol Ti and atomic number 22. It is a lustrous transition metal with a silver color, low density, and high strength. Titanium is resistant to corrosion in seawater, aqua regia, and chlorine. Discovered in Cornwall, Great Britain, by William Gregor in 1791, titanium was named by Martin Heinrich Klaproth for the Titans of Greek mythology. The element occurs within a number of mineral deposits, principally rutile and ilmenite, which are widely distributed in the Earth’s crust and lithosphere. Additionally, it is found in almost all living things, water bodies, rocks, and soils. The metal is extracted from its principal mineral ores by the Kroll and Hunter processes. The most common compound, titanium dioxide, is a popular photocatalyst and is used in the manufacture of white pigments. Other compounds include titanium tetrachloride (TiCl4), a component of smoke screens and catalysts; and titanium trichloride (TiCl3), which is used as a catalyst in the production of Titanium can be alloyed with iron, aluminum, vanadium, and molybdenum, among other elements, to produce strong, lightweight alloys with multiple uses: ● Aerospace (jet engines, missiles, and spacecraft) ● Military ● Industrial process (chemicals and petrochemicals, desalination plants, pulp, and paper) ● Automotive ● Agri-food ● Medical prostheses ● Orthopedic implants ● Dental and endodontic instruments and files ● Dental implants ● Sporting goods ● Jewelry ● Mobile phones ● Many other applications The two most useful properties of the metal are corrosion resistance and strength-to-density ratio, the highest of any metallic element. Another feature is that in its unalloyed condition, titanium is as strong as some steels but less dense. Titanium Cross-Reference List Type MIL-T-9046H Designation MIL-T-9046J Designation ASTM-B265 Designation Composition Form Condition Specifications COMMERCIALLY PURE – CP4 GRADE 1 UNALLOYED SHEET/PLATE 25 KSI MIN YEILD AMS-4940 COMMERCIALLY PURE TYPE I COMP A CP3 GRADE 2 UNALLOYED SHEET/PLATE 40 KSI MIN YEILD AMS-4942 COMMERCIALLY PURE TYPE I COMP C CP2 GRADE 3 UNALLOYED SHEET/PLATE 55 KSI MIN YEILD AMS-4900 COMMERCIALLY PURE TYPE I COMP B CP1 GRADE 4 UNALLOYED SHEET/PLATE 70 KSI MIN YEILD AMS-4901 COMMERCIALLY PURE TYPE I COMP B CP1 GRADE 4 UNALLOYED BAR 70 KSI MIN YEILD AMS-4921 ALPHA ALLOYS TYPE II COMP A A-1 GRADE 6 5AI-2.55n SHEET/PLATE ANNEALED AMS-4910 ALPHA ALLOYS TYPE II COMP A A-1 GRADE 6 5AI-2.55n BAR ANNEALED AMS-4926, AMS-6900 ALPHA ALLOYS TYPE II COMP B A-2 – 5AI-2.55n ELI SHEET/PLATE ANNEALED AMS-4909 ALPHA ALLOYS TYPE II COMP B A-2 – 5AI-2.55n ELI BAR ANNEALED AMS-4924, AMS-6901 ALPHA ALLOYS TYPE II COMP F A-4 – 8AI-1Mo-1V SHEET/PLATE ANNEALED AMS-4972, AMS-4915 ALPHA ALLOYS TYPE II COMP F A-4 – 8AI-1Mo-1V SHEET/PLATE DUPLEX ANNEALED AMS-4916 ALPHA ALLOYS TYPE II COMP F A-4 – 8AI-1Mo-1V BAR DUPLEX ANNEALED AMS-6910 ALPHA -BETA ALLOYS TYPE III COMP A AB-6 – 8MN – – AMS-4908 ALPHA -BETA ALLOYS TYPE III COMP C AB-1 GRADE 5 6AI-4V SHEET/PLATE ANNEALED AMS-4911 ALPHA -BETA ALLOYS TYPE III COMP C AB-1 – 6AI-4V SHEET/PLATE SOLUTION TREATED AMS-4903 ALPHA -BETA ALLOYS TYPE III COMP C AB-1 – 6AI-4V SHEET/PLATE STA AMS-6930 ALPHA -BETA ALLOYS TYPE III COMP D AB-2 – 6AI-4V ELI SHEET/PLATE ANNEALED AMS-4907 ALPHA -BETA ALLOYS TYPE III COMP D AB-2 – 6AI-4V ELI SHEET/PLATE ANNEALED AMS-4907 ALPHA -BETA ALLOYS TYPE III COMP D AB-2 – 6AI-4V ELI SHEET/PLATE ANNEALED AMS-4907 ALPHA -BETA ALLOYS TYPE III COMP D AB-2 – 6AI-4V ELI BAR ANNEALED AMS-4930, AMS-6932 ALPHA -BETA ALLOYS TYPE III COMP E AB-3 – 6AI-6V-25n SHEET/PLATE ANNEALED AMS-4918 ALPHA -BETA ALLOYS TYPE III COMP E AB-3 – 6AI-6V-25n SHEET/PLATE SOLUTION TREATED AMS-4988 ALPHA -BETA ALLOYS TYPE III COMP E AB-3 – 6AI-6V-25n SHEET/PLATE STA AMS-4990 ALPHA -BETA ALLOYS TYPE III COMP E AB-3 – 6AI-6V-25n BAR ANNEALED AMS-4978, AMS-4979, AMS-6936 ALPHA -BETA ALLOYS TYPE III COMP E AB-3 – 6AI-6V-25n BAR STA AMS-3935 ALPHA -BETA ALLOYS TYPE III COMP G AB-4 – 6Al-2Sn-4Zr-2Mo SHEET/PLATE DUPLEX ANNEALED AMS-4919 ALPHA -BETA ALLOYS TYPE III COMP G AB-4 – 6Al-2Sn-4Zr-2Mo BAR DUPLEX ANNEALED AMS-4975, AMS-6905 ALPHA -BETA ALLOYS TYPE III COMP G AB-5 – 3AI-2.5V SHEET/PLATE ANNEALED AMS-4989 ALPHA -BETA ALLOYS TYPE III COMP G AB-5 – 3AI-2.5V TUBE ANNEALED AMS-4943, AMS-4944, AMS-4945 BETA ALLOYS TYPE IV COMP A B-1 – 13V-11Cr-3Al SHEET/PLATE – AMS-4917 BETA ALLOYS TYPE IV COMP A B-1 – 13V-11Cr-3Al WIRE – AMS-4959 BETA ALLOYS TYPE IV COMP C B-3 – 3Al-8V-6Cr-4Mo-4Zr SHEET/PLATE SOLUTION TREATED AMS-4939 BETA ALLOYS TYPE IV COMP C B-3 – 3Al-8V-6Cr-4Mo-4Zr BAR ST AMS-6920 BETA ALLOYS TYPE IV COMP C B-3 – 3Al-8V-6Cr-4Mo-4Zr BAR STA AMS-6921 Some of Our Partners Include:
{"url":"https://sigmaaero.com/titanium/","timestamp":"2024-11-14T20:50:39Z","content_type":"text/html","content_length":"101949","record_id":"<urn:uuid:93e7a612-81ff-47d9-b861-7b8127594866>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00376.warc.gz"}
ThmDex – An index of mathematical definitions, results, and conjectures. Assume to the contrary that there is $n \in \mathbb{N}$ such that $x_n = z$. Then, in particular, $x_{n, n} = z_n$. But by definition of $z$, one has $z_n = 1 - x_{n, n}$. That is, by transitivity of equality, one has $x_{n, n} = 1 - x_{n, n}$. Adding $x_{n, n}$ to each side, we have $2 x_{n, n} = 1$ and thus $x_{n, n} = 1 / 2$. But $x$ was defined to be an element in $\{ 0, 1 \}^{\mathbb{N}}$, which necessitates that every term of $x_n$ takes a value in $\{ 0, 1 \}$, a set which does not contain $1/2$. This is a contradiction. Hence, there is no $n \in \mathbb{N}$ such that $x_n = z$. In other words, $x_n \neq z$ for all $n \in \mathbb{N}$. $\square$
{"url":"https://theoremdex.org/p/2776","timestamp":"2024-11-10T11:15:03Z","content_type":"text/html","content_length":"4905","record_id":"<urn:uuid:b96db59c-ba99-43b1-871e-5380df35f3d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00813.warc.gz"}
18: Digital Signal Processing Problems Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) Sampling and Filtering=? The signal s(t) is bandlimited to 4 kHz. We want to sample it, but it has been subjected to various signal processing manipulations. 1. What sampling frequency (if any works) can be used to sample the result of passing s(t) through an RC highpass filter with R = 10kΩ and C = 8nF? 2. What sampling frequency (if any works) can be used to sample the derivative of s(t)? 3. The signal s(t) has been modulated by an 8 kHz sinusoid having an unknown phase: the resulting signal is \[s(t)\sin (2\pi f_{0}t+\varphi ) \nonumber \] with f[0] = 8kHz and φ=? Can the modulated signal be sampled so that the original signal can be recovered from the modulated signal regardless of the phase value φ? If so, show how and find the smallest sampling rate that can be used; if not, show why not. Non-Standard Sampling Using the properties of the Fourier series can ease finding a signal's spectrum. 1. Suppose a signal s(t) is periodic with period T. If c[k] represents the signal's Fourier series coefficients, what are the Fourier series coefficients of \[s\left ( t-\frac{T}{2} \right ) \ nonumber \] 2. Find the Fourier series of the signal p(t) shown in Figure \(\PageIndex{1}\). 3. Suppose this signal is used to sample a signal bandlimited to 1/T Hz. Find an expression for and sketch the spectrum of the sampled signal. 4. Does aliasing occur? If so, can a change in sampling rate prevent aliasing; if not, show how the signal can be recovered from these samples. Figure \(\PageIndex{1}\) A Different Sampling Scheme A signal processing engineer from Texas A&M claims to have developed an improved sampling scheme. He multiplies the bandlimited signal by the depicted periodic pulse signal to perform sampling. Figure \(\PageIndex{2}\) 1. Find the Fourier spectrum of this signal. 2. Will this scheme work? If so, how should T[S] be related to the signal's bandwidth? If not, why not? Bandpass Sampling The signal s(t) has the indicated spectrum. Figure \(\PageIndex{3}\) 1. What is the minimum sampling rate for this signal suggested by the Sampling Theorem? 2. Because of the particular structure of this spectrum, one wonders whether a lower sampling rate could be used. Show that this is indeed the case, and find the system that reconstructs s(t) from its samples. Sampling Signals If a signal is bandlimited to W Hz, we can sample it at any rate \[\frac{1}{T_{S}}> 2W \nonumber \] and recover the waveform exactly. This statement of the Sampling Theorem can be taken to mean that all information about the original signal can be extracted from the samples. While true in principle, you do have to be careful how you do so. In addition to the rms value of a signal, an important aspect of a signal is its peak value, which equals \[\max \left \{ \left | s(t) \right | \right \} \nonumber \] 1. Let s(t) be a sinusoid having frequency W Hz. If we sample it at precisely the Nyquist rate, how accurately do the samples convey the sinusoid's amplitude? In other words, find the worst case 2. How fast would you need to sample for the amplitude estimate to be within 5% of the true value? 3. Another issue in sampling is the inherent amplitude quantization produced by A/D converters. Assume the maximum voltage allowed by the converter is V[max] volts and that it quantizes amplitudes to b bits. We can express the quantized sample \[Q(s(nT_{S}))\; as\; s(nT_{S})+\varepsilon (t) \nonumber \] where ε(t) represents the quantization error at the n^th sample. Assuming the converter rounds, how large is maximum quantization error? 4. We can describe the quantization error as noise, with a power proportional to the square of the maximum error. What is the signal-to-noise ratio of the quantization error for a full-range sinusoid? Express your result in decibels. Hardware Error An A/D converter has a curious hardware problem: Every other sampling pulse is half its normal amplitude. Figure \(\PageIndex{4}\) 1. Find the Fourier series for this signal. 2. Can this signal be used to sample a bandlimited signal having highest frequency \[W=\frac{1}{2T} \nonumber \] Simple D/A Converter Commercial digital-to-analog converters don't work this way, but a simple circuit illustrates how they work. Let's assume we have a B-bit converter. Thus, we want to convert numbers having a B-bit representation into a voltage proportional to that number. The first step taken by our simple converter is to represent the number by a sequence of B pulses occurring at multiples of a time interval T. The presence of a pulse indicates a “1” in the corresponding bit position, and pulse absence means a “0” occurred. For a 4-bit converter, the number 13 has the binary representation 1101 \[13_{10}=1\times 2^{3}+1\times 2^{2}+0\times 2^{1}+1\times 2^{0} \nonumber \] and would be represented by the depicted pulse sequence. Note that the pulse sequence is “backwards” from the binary representation. We'll see why that is. Figure \(\PageIndex{5}\) This signal serves as the input to a first-order RC lowpass filter. We want to design the filter and the parameters Δ and T so that the output voltage at time 4T (for a 4-bit converter) is proportional to the number. This combination of pulse creation and filtering constitutes our simple D/A converter. The requirements are • The voltage at time t = 4T should diminish by a factor of 2 the further the pulse occurs from this time. In other words, the voltage due to a pulse at 3T should be twice that of a pulse produced at 2T, which in turn is twice that of a pulse at T, etc. • The 4-bit D/A converter must support a 10 kHz sampling rate. 1. Show the circuit that works. How do the converter's parameters change with sampling rate and number of bits in the converter? Discrete-Time Fourier Transforms Find the Fourier transforms of the following sequences, where s(n) is some sequence having Fourier transform \[S(e^{i2\pi f}) \nonumber \] 1. \[(-1)^{n}s(n) \nonumber \] 2. \[s(n)\cos (2\pi f_{0}n) \nonumber \] 3. \[x(n)=\begin{cases} s(\frac{n}{2}) & \text{ if } n\; even \\ 0 & \text{ if } n\; odd \end{cases} \nonumber \] 4. \[ns(n) \nonumber \] Spectra of Finite-Duration Signals Find the indicated spectra for the following signals. 1. The discrete-time Fourier transform of \[s(n)=\begin{cases} \cos ^{2} \left ( \frac{\pi }{4} n\right )& \text{ if } n= \left \{ -1,0,1 \right \}\\ 0 & \text{ if } otherwise \end{cases} \nonumber 2. The discrete-time Fourier transform of \[s(n)=\begin{cases} n & \text{ if } n= \left \{ -2,-1,0,1,2 \right \}\\ 0 & \text{ if } otherwise \end{cases} \nonumber \] 3. The discrete-time Fourier transform of \[s(n)=\begin{cases} \sin \left ( \frac{\pi }{4} n\right ) & \text{ if } n= \left \{ 0,...,7 \right \}\\ 0 & \text{ if } otherwise \end{cases} \nonumber \] 4. The length-8 DFT of the previous signal. Just Whistlin' Sammy loves to whistle and decides to record and analyze his whistling in lab. He is a very good whistler; his whistle is a pure sinusoid that can be described by \[s_{a}(t)=\sin (4000t) \nonumber \] To analyze the spectrum, he samples his recorded whistle with a sampling interval of \[T_{S}=2.5\times 10^{-4} \nonumber \] to obtain \[s(n)=s_{a}(nT_{S}) \nonumber \] Sammy (wisely) decides to analyze a few samples at a time, so he grabs 30 consecutive, but arbitrarily chosen, samples. He calls this sequence x(n) and realizes he can write it as \[x(n)=\sin (4000nT_{s}+\theta ),\; n=\left \{ 0,...,29 \right \} \nonumber \] 1. Did Sammy under- or over-sample his whistle? 2. What is the discrete-time Fourier transform of x(n) and how does it depend on θ? 3. How does the 32-point DFT of x(n) depend on θ? Discrete-Time Filtering We can find the input-output relation for a discrete-time filter much more easily than for analog filters. The key idea is that a sequence can be written as a weighted linear combination of unit 1. Show that \[x(n)=\sum _{i}x(i)\delta (n-i) \nonumber \] where δ(n) is the unit-sample. \[\delta (n)=\begin{cases} 1 & \text{ if } n=0 \\ 0 & \text{ if } otherwise \end{cases} \nonumber \] 2. If h(n) denotes the unit-sample response—the output of a discrete-time linear, shift-invariant filter to a unit-sample input—find an expression for the output. 3. In particular, assume our filter is FIR, with the unit-sample response having duration q+1. If the input has duration N, what is the duration of the filter's output to this signal? 4. Let the filter be a boxcar averager: \[h(n)=\frac{1}{q+1}\; for\; n=\left \{ 0,...,q \right \} \nonumber \] and zero otherwise. Let the input be a pulse of unit height and duration N. Find the filter's output when \[N=\frac{q+1}{2} \nonumber \] q an odd integer. A Digital Filter A digital filter has the depicted unit-sample reponse. Figure \(\PageIndex{6}\) 1. What is the difference equation that defines this filter's input-output relationship? 2. What is this filter's transfer function? 3. What is the filter's output when the input is \[\sin \left ( \frac{\pi n}{4} \right ) \nonumber \] A Special Discrete-Time Filter Consider a FIR filter governed by the difference equation \[y(n)=\frac{1}{3}x(n+2)+\frac{2}{3}x(n+1)+x(n)+\frac{2}{3}x(n-1)+\frac{1}{3}x(n-2) \nonumber \] 1. Find this filter's unit-sample response. 2. Find this filter's transfer function. Characterize this transfer function (i.e., what classic filter category does it fall into). 3. Suppose we take a sequence and stretch it out by a factor of three. \[x(n)=\begin{cases} s(\frac{n}{3}) & \text{ if } \forall m,m=\left \{ ...,-1,0,1,... \right \} :(n-3m)\\ 0 & \text{ if } otherwise \end{cases} \nonumber \] 1. Sketch the sequence x(n) for some example s(n). 2. What is the filter's output to this input? 3. In particular, what is the output at the indices where the input x(n) is intentionally zero? 4. Now how would you characterize this system? Simulating the Real World Much of physics is governed by differntial equations, and we want to use signal processing methods to simulate physical problems. The idea is to replace the derivative with a discrete-time approximation and solve the resulting differential equation. For example, suppose we have the differential equation \[\frac{\mathrm{d} y(t)}{\mathrm{d} t}+ay(t)=x(t) \nonumber \] and we approximate the derivative by \[\frac{\mathrm{d} y(t)}{\mathrm{d} t}\mid\ _{t=nT}\simeq \frac{y(nT)-y((n-1)T)}{T} \nonumber \] where T essentially amounts to a sampling interval. 1. What is the difference equation that must be solved to approximate the differential equation? 2. When x(t) = u(t), the unit step, what will be the simulated output? 3. Assuming x(t) is a sinusoid, how should the sampling interval T be chosen so that the approximation works well? The derivative of a sequence makes little sense, but still, we can approximate it. The digital filter described by the difference equation \[y(n)=x(n)-x(n-1) \nonumber \] resembles the derivative formula. We want to explore how well it works. 1. What is this filter's transfer function? 2. What is the filter's output to the depicted triangle output? Figure \(\PageIndex{7}\) 3. Suppose the signal x(n) is a sampled analog signal: \[x(n)=x(nT_{S}) \nonumber \] Under what conditions will the filter act like a differentiator? In other words, when will y(n) be proportional to \[\frac{\mathrm{d} x(t)}{\mathrm{d} t}\mid\ _{t=nT_{S}} \nonumber \] The DFT Let's explore the DFT and its properties. 1. What is the length-K DFT of length-N boxcar sequence, where N < K? 2. Consider the special case where K = 4. Find the inverse DFT of the product of the DFTs of two length-3 boxcars. 3. If we could use DFTs to perform linear filtering, it should be true that the product of the input's DFT and the unit-sample response's DFT equals the output's DFT. So that you can use what you just calculated, let the input be a boxcar signal and the unit-sample response also be a boxcar. The result of part (b) would then be the filter's output if we could implement the filter with length-4 DFTs. Does the actual output of the boxcar-filter equal the result found in the previous part? 4. What would you need to change so that the product of the DFTs of the input and unit-sample response in this case equaled the DFT of the filtered output? DSP Tricks Sammy is faced with computing lots of discrete Fourier transforms. He will, of course, use the FFT algorithm, but he is behind schedule and needs to get his results as quickly as possible. He gets the idea of computing two transforms at one time by computing the transform of \[s(n)=s_{1}(n)+is_{2}(n) \nonumber \] where s[1](n) and s[2](n) are two real-valued signals of which he needs to compute the spectra. The issue is whether he can retrieve the individual DFTs from the result or not. 1. What will be the DFT S(k) of this complex-valued signal in terms of S[1](k) and S[2](k), the DFTs of the original signals? 2. Sammy's friend, an Aggie who knows some signal processing, says that retrieving the wanted DFTs is easy: “Just find the real and imaginary parts of S(k) .” Show that this approach is too 3. While his friend's idea is not correct, it does give him an idea. What approach will work? Hint: Use the symmetry properties of the DFT. 4. How does the number of computations change with this approach? Will Sammy's idea ultimately lead to a faster computation of the required DFTs? Discrete Cosine Transform (DCT) The discrete cosine transform of a length-N sequence is defined to be \[S_{c}(k)=\sum_{n=0}^{N-1}s(n)\cos \left ( \frac{2\pi nk}{2N} \right ) \nonumber \] Note that the number of frequency terms is \[2N-1:k=\left \{ 0,...,2N-1 \right \} \nonumber \] 1. Find the inverse DCT. 2. Does a Parseval's Theorem hold for the DCT? 3. You choose to transmit information about the signal s(n) according to the DCT coefficients. You could only send one, which one would you send? A Digital Filter A digital filter is described by the following difference equation: \[y(n)=ay(n-1)+ax(n)-x(n-1),a=\frac{1}{\sqrt{2}} \nonumber \] 1. What is this filter's unit sample response? 2. What is this filter's transfer function? 3. What is this filter's output when the input is \[\sin \left ( \frac{\pi n}{4} \right ) \nonumber \] Another Digital Filter A digital filter is determined by the following difference equation. \[y(n)=y(n-1)+x(n)-x(n-4) \nonumber \] 1. Find this filter's unit sample response. 2. What is the filter's transfer function? How would you characterize this filter (lowpass, highpass, special purpose, ...)? 3. Find the filter's output when the input is the sinusoid \[\sin \left ( \frac{\pi n}{2} \right ) \nonumber \] 4. In another case, the input sequence is zero for n < 0, then becomes nonzero. Sammy measures the output to be \[y(n)=\delta (n)+\delta (n-1) \nonumber \] Can his measurement be correct? In other words, is there an input that can yield this output? If so, find the input x(n) that gives rise to this output. If not, why not? Yet Another Digital Filter A filter has an input-output relationship given by the difference equation \[y(n)=\frac{1}{4}x(n)+\frac{1}{2}x(n-1)+\frac{1}{4}x(n-2) \nonumber \] 1. What is the filter's transfer function? How would you characterize it? 2. What is the filter's output when the input equals \[\cos \left ( \frac{\pi n}{2} \right ) \nonumber \] 3. What is the filter's output when the input is the depicted discrete-time square wave below? Figure \(\PageIndex{8}\) A Digital Filter in the Frequency Domain We have a filter with the transfer function \[H(e^{i2\pi f})=e^{-(i2\pi f)}\cos (2\pi f) \nonumber \] operating on the input signal \[x(n)=\delta (n)-\delta (n-2) \nonumber \] that yields the output y(n). 1. What is the filter's unit-sample response? 2. What is the discrete-Fourier transform of the output? 3. What is the time-domain expression for the output? Digital Filters A discrete-time system is governed by the difference equation \[y(n)=y(n-1)+\frac{x(n)+x(n-1)}{2} \nonumber \] 1. Find the transfer function for this system. 2. What is this system's output when the input is \[\sin \left ( \frac{\pi n}{2}\right ) \nonumber \] 3. If the output is observed to be \[y(n)=\delta (n)+\delta (n-1) \nonumber \] then what is the input? Digital Filtering A digital filter has an input-output relationship expressed by the difference equation \[y(n)=\frac{x(n)+x(n-1)+x(n-2)+x(n-3)}{4} \nonumber \] 1. Plot the magnitude and phase of this filter's transfer function. 2. What is this filter's output when \[x(n)=cos \left ( \frac{\pi n}{2}\right )+2sin \left ( \frac{2\pi n}{3}\right ) \nonumber \] Detective Work 1. The signal \[x(n)=\delta (n)-\delta (n-1) \nonumber \] 1. Find the length-8 DFT (discrete Fourier transform) of this signal. 2. You are told that when x(n) served as the input to a linear FIR (finite impulse response) filter, the output was \[y(n)=\delta (n)-\delta (n-1)+2\delta (n-2) \nonumber \] Is this statement true? If so, indicate why and find the system's unit sample response; if not, show why not. 2. A discrete-time, shift invariant, linear system produces an output \[y(n)=\left \{ 1,-1,0,0,... \right \} \nonumber \] when its input x(n) equals a unit sample. 1. Find the difference equation governing the system. 2. Find the output when \[x(n)=\cos (2\pi f_{0}n) \nonumber \] 3. How would you describe this system's function? Time Reversal has Uses A discrete-time system has transfer function \[H(e^{i2\pi f}) \nonumber \] A signal x(n) is passed through this system to yield the signal w(n). The time-reversed signal w(-n) is then passed through the system to yield the time-reversed output y(-n). What is the transfer function between x(n) and y(n)? Removing “Hum” The slang word “hum” represents power line waveforms that creep into signals because of poor circuit construction. Usually, the 60 Hz signal (and its harmonics) are added to the desired signal. What we seek are filters that can remove hum. In this problem, the signal and the accompanying hum have been sampled; we want to design a digital filter for hum removal. 1. Find filter coefficients for the length-3 FIR filter that can remove a sinusoid having digital frequency f[0] from its input. 2. Assuming the sampling rate is f[s] to what analog frequency does f[0 ]correspond? 3. A more general approach is to design a filter having a frequency response magnitude proportional to the absolute value of a cosine: \[\left | H(e^{i2\pi f}) \right |\propto \left | \cos (\pi fN) \right | \nonumber \] In this way, not only can the fundamental but also its first few harmonics be removed. Select the parameter N and the sampling rate so that the frequencies at which the cosine equals zero correspond to 60 Hz and its odd harmonics through the fifth. 4. Find the difference equation that defines this filter. Digital AM Receiver Thinking that digital implementations are always better, our clever engineer wants to design a digital AM receiver. The receiver would bandpass the received signal, pass the result through an A/D converter, perform all the demodulation with digital signal processing systems, and end with a D/A converter to produce the analog message signal. Assume in this problem that the carrier frequency is always a large even multiple of the message signal's bandwidth W. 1. What is the smallest sampling rate that would be needed? 2. Show the block diagram of the least complex digital AM receiver. 3. Assuming the channel adds white noise and that a b-bit A/D converter is used, what is the output's signal-to-noise ratio? A problem on Samantha's homework asks for the 8-point DFT of the discrete-time signal \[\delta (n-1)+\delta (n-7) \nonumber \] 1. What answer should Samantha obtain? 2. As a check, her group partner Sammy says that he computed the inverse DFT of her answer and got \[\delta (n+1)+\delta (n-1) \nonumber \] Does Sammy's result mean that Samantha's answer is wrong? 3. The homework problem says to lowpass-filter the sequence by multiplying its DFT by \[H(k)=\begin{cases} 1 & \text{ if }k=\left \{ 0,1,7 \right \} \\ 0 & \text{ if } otherwise \end{cases} \ nonumber \] and then computing the inverse DFT. Will this filtering algorithm work? If so, find the filtered output; if not, why not? Stock Market Data Processing Because a trading week lasts five days, stock markets frequently compute running averages each day over the previous five trading days to smooth price fluctuations. The technical stock analyst at the Buy-Lo--Sell-Hi brokerage firm has heard that FFT filtering techniques work better than any others (in terms of producing more accurate averages). 1. What is the difference equation governing the five-day averager for daily stock prices? 2. Design an efficient FFT-based filtering algorithm for the broker. How much data should be processed at once to produce an efficient algorithm? What length transform should be used? 3. Is the analyst's information correct that FFT techniques produce more accurate averages than any others? Why or why not? Echoes not only occur in canyons, but also in auditoriums and telephone circuits. In one situation where the echoed signal has been sampled, the input signal x(n) emerges as \[x(n)+a_{1}x(n-n_{1})+a_{2}x(n-n_{2}) \nonumber \] 1. Find the difference equation of the system that models the production of echoes. 2. To simulate this echo system, ELEC 241 students are asked to write the most efficient (quickest) program that has the same input-output relationship. Suppose the duration of x(n) is 1,000 and that \[a_{1}=\frac{1}{2},n_{1}=10,a_{2}=\frac{1}{5},n_{2}=25 \nonumber \] Half the class votes to just program the difference equation while the other half votes to program a frequency domain approach that exploits the speed of the FFT. Because of the undecided vote, you must break the tie. Which approach is more efficient and why? 3. Find the transfer function and difference equation of the system that suppresses the echoes. In other words, with the echoed signal as the input, what system's output is the signal x(n)? Digital Filtering of Analog Signals RU Electronics wants to develop a filter that would be used in analog applications, but that is implemented digitally. The filter is to operate on signals that have a 10 kHz bandwidth, and will serve as a lowpass filter. 1. What is the block diagram for your filter implementation? Explicitly denote which components are analog, which are digital (a computer performs the task), and which interface between analog and digital worlds. 2. What sampling rate must be used and how many bits must be used in the A/D converter for the acquired signal's signal-to-noise ratio to be at least 60 dB? For this calculation, assume the signal is a sinusoid. 3. If the filter is a length-128 FIR filter (the duration of the filter's unit-sample response equals 128), should it be implemented in the time or frequency domain? 4. Assuming \[H(e^{i2\pi f}) \nonumber \] is the transfer function of the digital filter, what is the transfer function of your system? Signal Compression Because of the slowness of the Internet, lossy signal compression becomes important if you want signals to be received quickly. An enterprising 241 student has proposed a scheme based on frequency-domain processing. First of all, he would section the signal into length-N blocks, and compute its N-point DFT. He then would discard (zero the spectrum) at half of the frequencies, quantize them to b-bits, and send these over the network. The receiver would assemble the transmitted spectrum and compute the inverse DFT, thus reconstituting an N-point block. 1. At what frequencies should the spectrum be zeroed to minimize the error in this lossy compression scheme? 2. The nominal way to represent a signal digitally is to use simple b-bit quantization of the time-domain waveform. How long should a section be in the proposed scheme so that the required number of bits/sample is smaller than that nominally required? 3. Assuming that effective compression can be achieved, would the proposed scheme yield satisfactory results?
{"url":"https://eng.libretexts.org/Courses/Arkansas_Tech_University/Discrete-Time_Signal_Processing/18%3A_Digital_Signal_Processing_Problems","timestamp":"2024-11-11T00:15:27Z","content_type":"text/html","content_length":"167831","record_id":"<urn:uuid:49cfe5bf-d146-4494-a69c-9269490a1bd7>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00121.warc.gz"}
If the function f given by f(x)=x3−3(a−2)x2+3ax+7, for some a∈R... | Filo Question asked by Filo student If the function given by for some is increasing in and decreasing , then a root of the equation, ) (2019 Main, b. 6 c. 7 d. 5 Not the question you're searching for? + Ask your question Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors. 11 mins Uploaded on: 8/27/2023 Was this solution helpful? Found 8 tutors discussing this question Discuss this question LIVE for FREE 9 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text If the function given by for some is increasing in and decreasing , then a root of the equation, ) (2019 Main, Updated On Aug 27, 2023 Topic Calculus Subject Mathematics Class Class 12 Answer Type Video solution: 1 Upvotes 78 Avg. Video Duration 11 min
{"url":"https://askfilo.com/user-question-answers-mathematics/if-the-function-given-by-for-some-is-increasing-in-and-32343439313437","timestamp":"2024-11-11T10:00:33Z","content_type":"text/html","content_length":"294871","record_id":"<urn:uuid:43b63b14-e27e-4810-95eb-02c1727f7232>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00618.warc.gz"}
Random sets The course is not on the list Without time-table Code Completion Credits Range XP01NAM ZK 4 2+2 Course guarantor: Students get the basic overview on the theory of random sets, their probabilistic models, statistical analyses and applications. Syllabus of lectures: 1. Introduction to random sets 2. Steiner formula 3. Boolean model 4. Contact distribution function, spatial correlation function 5. Shape characteristics 6. Random sets given by a density 7. Quermass-interaction process 8. MCMC simulations of random sets 9. Simulation-based inference 10. Maximum likelihood for random sets 11. Takacs-Fiksel method 12. Marked random sets 13. Digital image analysis 14. Applications of random sets Syllabus of tutorials: Study Objective: Study materials: 1. Molchanov I. (2005): Theory of Random Sets. Springer Verlag, London. 2. Stoyan D., Kendall W.S., Mecke J. (1995): Stochastic Geometry and Its Applications. Wiley, Chichester. Further information: No time-table has been prepared for this course The course is a part of the following study plans:
{"url":"https://bilakniha.cvut.cz/en/predmet3063406.html","timestamp":"2024-11-05T22:45:07Z","content_type":"text/html","content_length":"7624","record_id":"<urn:uuid:98d2a28b-a01f-43d7-b809-c7510be45340>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00861.warc.gz"}
Voltage Divider Calculator Created by CalcKit Admin Last updated: 26 Jun 2024 What is a Voltage Divider? A voltage divider is a simple and commonly used circuit in electronics, designed to reduce a higher input voltage (Vin) to a lower output voltage (Vout). It consists of two resistors (R1 and R2) connected in series across a voltage supply. The output voltage is taken from the junction of the two resistors. Voltage dividers are fundamental in creating reference voltages and signal How Does a Voltage Divider Work? In a voltage divider, the input voltage is applied across the series combination of the resistors R1 and R2. According to Ohm's Law, the voltage drop across a resistor in a series circuit is proportional to its resistance. Therefore, the output voltage (Vout) is a fraction of the input voltage (Vin), determined by the ratio of the resistances R1 and R2. The formula for calculating the output voltage is: Vout = Vin * R2 / (R1 + R2) This equation shows that by choosing appropriate values for R1 and R2, you can set the output voltage to a desired value. The Voltage Divider Calculator To facilitate the design and analysis of voltage divider circuits, we have created a "Voltage Divider Calculator." This tool allows you to input known values and automatically calculates the unknown parameters. Here are the fields available in the calculator: • Input Voltage (Vin): The voltage supplied to the voltage divider. • Resistor (R1): The resistance of the first resistor. • Resistor (R2): The resistance of the second resistor. • Output Voltage (Vout): The desired output voltage. Additionally, the calculator includes optional fields for more advanced analysis: • Load Resistance (RL): The resistance of the load connected to the output. Including this in calculations helps account for the effect of the load on the output voltage. • Total Power (Ptot): The total power consumed by the voltage divider circuit. • Load Power (PRL): The power dissipated by the load resistance. These fields can be used both as inputs and outputs, providing flexibility in designing your circuit. How to Use the Calculator 1. Determine the Known Values: Start by identifying the values you know. These could be the input voltage (Vin), the desired output voltage (Vout), and one of the resistances (R1 or R2). 2. Input the Values: Enter the known values into the calculator. For example, if you know Vin, Vout, and R1, input these values into the respective fields. 3. Calculate the Unknowns: The calculator will automatically compute the unknown values. If you input Vin, Vout, and R1, it will calculate R2 for you. 4. Optional Parameters: If you have a load connected to the output, input the load resistance (RL) to see how it affects the other components of the circuit. Practical Applications Voltage dividers are used in various applications, including: • Signal Conditioning: Adjusting signal levels to match the input range of analog-to-digital converters (ADCs). • Biasing Transistors: Setting the correct operating point for transistors in amplifier circuits. • Voltage Reference: Providing a stable reference voltage in precision circuits. • Volume Control: Adjusting audio signal levels in potentiometer-based volume controls. Important Considerations When designing a voltage divider, consider the following: • Load Effect: The presence of a load (RL) can affect the output voltage. A high load resistance minimizes this effect, while a low load resistance can significantly alter the output. • Power Dissipation: Ensure that the resistors can handle the power dissipation to avoid overheating and damage. • Accuracy: Precision resistors may be required for applications where accuracy is critical. A voltage divider is a versatile and essential component in electronics, useful for reducing voltage levels and creating reference voltages. Our Voltage Divider Calculator simplifies the design process by allowing you to input known values and automatically calculating the unknown parameters. Whether you are a hobbyist or a professional, this tool will aid you in building efficient and accurate voltage divider circuits. By understanding and utilizing voltage dividers effectively, you can enhance the performance and reliability of your electronic designs.
{"url":"https://calckit.io/tool/electronics-voltage-divider","timestamp":"2024-11-11T11:18:43Z","content_type":"text/html","content_length":"32341","record_id":"<urn:uuid:2fb3f768-7268-4989-b41b-123db408bc04>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00637.warc.gz"}
How do I count days between 2 dates but break them out by month. For example. 9/30-10/1. This is a c How do I count days between 2 dates but break them out by month. For example. 9/30-10/1. This is a count of 2 days but i want to capture it as 1 day in October and 1 day on September • I do this using 4 IF formulas and add them together. IF date range starts before 1st and ends before last day of month, calculate days between 1st and end. IF date range starts after 1st and ends before last day of month, calculate days between start and end. IF date range starts before 1st and ends after last day of month, calculate days in month. IF date range starts after 1st and ends after last day of month, calculate days between start and last day of month. Adding these 4 together will give you the days in the month. To make it easier you probably want to make the month a reference another cell so that you aren't adding the month in the formula multiple times and can reuse the formula for any month. Or you could set up another sheet where you define the start and end of each period and refer to that. Help Article Resources
{"url":"https://community.smartsheet.com/discussion/131464/how-do-i-count-days-between-2-dates-but-break-them-out-by-month-for-example-9-30-10-1-this-is-a-c","timestamp":"2024-11-08T17:47:10Z","content_type":"text/html","content_length":"390205","record_id":"<urn:uuid:e20482de-a95d-40ac-be1c-5b58f2b5f498>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00396.warc.gz"}
A Car Travels at 60 Miles Per Hour How Much Time Does It Take the Car to Travel 30 Miles? - greentravelguides.tvA Car Travels at 60 Miles Per Hour How Much Time Does It Take the Car to Travel 30 Miles? A Car Travels at 60 Miles Per Hour How Much Time Does It Take the Car to Travel 30 Miles? You might also be thinking, How long does it take to drive a mile at 60 mph? a single minute But then this question also arises, When going 60 mph How far will a car travel? It is assumed that the vehicle will go 60 miles in one hour. In addition, 1 hour equals 60 minutes, which equals 3600 seconds. As a result, the automobile may be considered to traverse 60 miles in 3600 seconds. As a result, it can traverse 60/3600 miles in one second, or roughly 0.017 miles. How long does it take to drive 60 miles at 90 mph? 2 Expert Tutors’ Answers If you went 60 miles at 90 miles per hour, r*t=d, therefore 90t = 60, t = 2/3 of an hour or 40 minutes. 07.09.2014 How many feet do you travel at 60 mph? So, to determine the speed of a vehicle traveling at sixty miles per hour, multiply (60 x 5280) by (60 x 60) to get 88 feet per second. As a result, this formula applies for every query including “how far do you travel?” How long does it take a car travelling at 60 mph to cover 5 miles? 5 minutes is 1/12 of an hour. Best wishes! 22.04.2021 How many hours is 100 mi? Let’s say you have to drive 100 miles and it takes you 1 1/2 hours. Then divide 100 miles by 1.5 hours to get 66.67 miles per hour as your average speed. You convert the number of minutes to fractions of an hour when calculating miles per hour for distances that take just minutes. 13.03.2018 What does it mean to travel at 60 mph? Miles per hour speed (mph) Car speeds are often expressed in miles per hour. For instance, the average highway speed is roughly 60 miles per hour. One mile may be covered in one minute at 60 mph. How far will a vehicle travel in just one second while driving at 65 mph? Keeping one second of following distance at 65 mph implies the car is 100 feet behind the vehicle in front of it. It takes at least 150 feet for a car to come to a complete stop. With a 100-foot gap and a 150-foot stopping distance, a collision would happen within 250 feet. How far does a car travel at 55 mph? How long does 20 miles take to drive? around 20 to 40 minutes How long does it take to travel 1 mile at 15 mph? For example, if you drive 60 mph, you’ll go 60 miles in an hour, which means one mile will take only one minute. Here are a few more examples: 1 mile every 4 minutes at 15 mph. 1 mile every 2 minutes at 30 mph. 24.01.2022 How long does it take to drive 40 miles? Answer: It took 40 minutes to go 40 miles at 60 miles per hour. The unit of speed is always the same as the time and distance units. How long does it take to drive 16 miles? A 16-mile drive takes how long? To put it another way, driving 16 miles at 25 miles per hour would take you 38 minutes and 24 seconds. How long does it take to drive 200 miles? The time it takes to go 200 miles is determined on your speed. It will take 4.4 hours to go at 45 mph, 3.3 hours at 60 mph, and 2.67 hours at 75 mph. How far can I travel in 20 minutes? If you’re going 60 miles per hour and you’re on the highway, you’ll get 20 miles in 20 minutes. If you were driving in the city at 30 miles per hour, 20 minutes would only get you 10 miles. Use a map or GPS app to keep track of how many kilometers you travel. 19.01.2022 How far does a car travel in 1 second at 70 mph? m/s = m/s = m/s = m/s 70 miles per hour, 31.5 meters (104ft) 36.0 meters at 80 mph (119ft) 40.5 meters at 90 mph (133ft) 45.0 meters at 100 mph (148ft) How much miles are in an hour? 60 miles / 1 hour = 60 miles / 60 minutes = 1 mile / minute = rate = distance / time As a result, it takes one minute to traverse one mile. 27.11.2021 How long does it take to travel 1 mile at 80 mph? 1/80 of an hour How long does it take to pass a truck? If a truck is simply driving one to three miles per hour quicker than another, it will take roughly two minutes and twelve seconds, or 3.7 percent of a mile, to assure safe passing and return to the lane with a safe gap. 24.01.2017 How long will it take you to pass a passenger car at 60 mph without oncoming traffic quizlet? It will take roughly 13 seconds to accomplish the pass if the driver does all of the visual inspections, communicates intentions, and begins the pass two seconds behind the car ahead (at 50 and 40 mph, about 16 seconds, and at 60 and 50 mph, about 19 seconds). The “how many hours is 60 miles by car” is a question that asks how long it would take the car to travel 30 miles. The answer is 6 hours. Watch This Video: The “what is your average speed in mph if you travel 15 miles in 2 hours and 30 minutes” is a question that asks how long it takes to travel 15 miles. The answer is 60 mph, so the car would take 2 hours and 30 minutes to go 30 miles. Related Tags • how long does it take to drive 1 mile at 30 mph • if you are driving 60 miles an hour, how far do you end up traveling • how long is 60 miles in hours • how long does it take to drive 1 mile at 40 mph • how long does it take to drive 15 miles at 30 mph
{"url":"https://greentravelguides.tv/a-car-travels-at-60-miles-per-hour-how-much-time-does-it-take-the-car-to-travel-30-miles/","timestamp":"2024-11-07T04:02:54Z","content_type":"text/html","content_length":"156393","record_id":"<urn:uuid:692583c1-92a6-4a3d-900f-c719fe1577da>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00057.warc.gz"}
What is the theory behind PEC? What is the theory behind PEC?# Probabilistic error cancellation (PEC) [5, 10, 13] is a noise-aware error mitigation technique which is based on two main ideas: • The first idea is to express ideal gates as linear combinations of implementable noisy gates. These linear combinations are called quasi-probability representations [14]; • The second idea is to probabilistically sample from the previous quasi-probability representations to approximate quantum expectation values via a Monte Carlo average. Note: In this section we follow the same notation of [1]. Quasi-probability representations# In PEC, each ideal gate \(\mathcal G_i\) of a circuit of interest \(\mathcal U = {\mathcal G}_t \circ \dots \circ {\mathcal G}_2 \circ {\mathcal G}_1 \) is represented as a linear combination of noisy implementable operations \({\mathcal O_{i, \alpha}}\) (i.e., operations that can be directly applied with a noisy backend): \[ \mathcal G_i = \sum_\alpha \eta_{i, \alpha} \mathcal O_{i, \alpha}, \quad \eta_{i, \alpha} \in \mathbb R, \] where the calligraphic symbols (\(\mathcal U\), \(\mathcal G_i\), \(\mathcal O_{i, \alpha}\)) stand for super-operators acting on the density matrix of the qubits as linear quantum channels. The real coefficients \({\eta_{i,\alpha}}\) form a quasi-probability distribution [14] with respect to the index \(\alpha\). Their sum is normalized but, differently from standard probabilities, they can take negative values: \[ \sum_\alpha \eta_{i,\alpha}=1, \qquad \gamma_i = \sum_\alpha |\eta_{i, \alpha}| \ge 1.\] The constant \(\gamma_i\) quantifies the negativity of the quasi-probability distribution which is directly related to the error mitigation cost associated to the gate \(\mathcal G_i\). Note: In principle, the gate index “\(i\)” in the noisy operations \(\mathcal O_{i, \alpha}\) could be dropped. However, we keep it to explicitly define gate-dependent basis of implementable operations, consistently with the structure of the OperationRepresentation class discussed in What additional options are available in PEC?. Error cancellation# The aim of PEC is estimating the ideal expectation value of some observable \(A=A^\dagger\) with respect to the quantum state prepared by an ideal circuit of interest \(\mathcal U\) acting on some initial reference state \(\rho_0\) (typically \(\rho_0= |0\dots 0 \rangle \langle 0 \dots 0 |\)). Replacing each gate \(\mathcal G_i\) with its noisy representation, we can express the ideal expectation value as a linear combination of noisy expectation values: \[ \langle A \rangle_{\rm ideal}= {\rm tr}[A \mathcal U (\rho_0)] = \sum_{\vec{\alpha}} \eta_{\vec{\alpha}} \langle A_{\vec{\alpha}}\rangle_{\rm noisy} \] where we introduced the multi-index \(\vec{\alpha}=(\alpha_1, \alpha_2, \dots ,\alpha_t)\) and \[ \eta_{\vec{\alpha}} := \prod_{i=1}^t \eta_{i, \alpha_i}, \quad \langle A_{\vec{\alpha}}\rangle_{\rm noisy} := {\rm tr}[A \Phi_{\vec{\alpha}}(\rho_0)], \quad \Phi_{\vec{\alpha}} := \mathcal O_{t, \ alpha_t} \circ \dots \circ \mathcal O_{2, \alpha_2} \circ \mathcal O_{1, \alpha_1}. \] The coefficients \(\{ \eta_{\vec{\alpha}} \}\) form a quasi-probability distribution for the global circuit over the noisy circuits. Indeed it is easy to check that, at the level of super-operators, we have: \[ \mathcal U = \sum_{\vec{\alpha}} \eta_{\vec{\alpha}} \Phi_{\vec{\alpha}}. \] The one-norm \(\gamma\) of the global quasi-probability distribution is the product of those of the gates: \[ \sum_{\vec \alpha} \eta_{\vec{\alpha}}=1, \qquad \gamma = \sum_{\vec{\alpha}} |\eta_{\vec \alpha}| = \prod_{i=1}^{t} \gamma_i. \] All the noisy expectation values \(\langle A_{\vec{\alpha}}\rangle_{\rm noisy}\) can be directly measured with a noisy backend since they only require circuits composed of implementable noisy operations. In principle, by combining all the noisy expectation values, one could compute the ideal result \(\langle A \rangle_{\rm ideal}\). Unfortunately this approach requires executing a number of circuits which grows exponentially with the circuit depth and which is typically unfeasible. An important fact at the basis of PEC is that, for weak noise, only a small number of noisy expectation values actually contribute to the linear combination because many of the coefficients \(\eta_{\ vec \alpha}\) are negligible. For this reason, it is more efficient to estimate \(\langle A \rangle_{\rm ideal}\) using an importance-sampling Monte Carlo approach as described in the next section. Monte Carlo estimation# To apply a Monte Carlo estimation, we need to replace quasi-probabilities with positive probabilities. This can be achieved as follows: \[ \mathcal{G_i} = \sum_{\alpha} \eta_{i, \alpha} \mathcal{O}_{i, \alpha} = \gamma_i \sum_{\alpha} p_i(\alpha) \, {\rm sgn}(\eta_{i, \alpha})\, \mathcal{O}_{i, \alpha},\] where \(p_{i}(\alpha)=|\eta_{i, \alpha}|/\gamma_i\) is a valid probability distribution with respect to \(\alpha\). If for each gate \(\mathcal G_i\) of the circuit we sample a value of \(\alpha\) from \(p_{i}(\alpha)\) and we apply the corresponding noisy operation \(\mathcal O_{i, \alpha}\), we are effectively sampling a noisy circuit \(\Phi_{\vec{\alpha}}\) from the global probability distribution \(p(\vec{\alpha})= |\eta_{\vec{\alpha}}| / \gamma\). Therefore, at the level of quantum channels, we have: \[ \mathcal U = \gamma \mathbb E \left\{ {\rm sgn}(\eta_{i, \vec{\alpha}}) \Phi_{\vec{\alpha}} \right\},\] where \(\mathbb E\) is the sample average over many repetitions of the previous probabilistic procedure and \({\rm sgn}(\eta_{\vec{ \alpha}}) = \prod_i {\rm sgn}(\eta_{i, \alpha})\). As a direct consequence, we can express the ideal expectation value as follows: \[\langle A \rangle_{\text{ideal}} = \gamma\, \mathbb E \left\{ {\rm sgn}(\eta_{\vec{\alpha}}) \langle A_{\vec{\alpha}}\rangle_{\rm noisy} \right\}.\] By averaging a finite number \(N\) of samples we obtain an unbiased estimate of \(\langle A \rangle_{\text{ideal}}\). Assuming a bounded observable \(|A|\le 1\), the number of samples \(N\) necessary to approximate \(\langle A\rangle_{\text{ideal}}\) within an absolute error \(\delta\), scales as [11]: \[ N \propto \frac{\gamma^2}{\delta^2}. \] The term \(\delta^2\) in the denominator is due to the stochastic nature of quantum measurements and is present even when directly estimating an expectation value without error mitigation. The \(\ gamma^2\) factor instead represents the sampling overhead associated to PEC. For weak noise and short circuits, \(\gamma\) is typically small and PEC is applicable with a reasonable cost. On the contrary, if a circuit is too noisy or too deep, the value of \(\gamma\) can be so large that PEC becomes unfeasible.
{"url":"https://mitiq.readthedocs.io/en/stable/guide/pec-5-theory.html","timestamp":"2024-11-11T10:18:10Z","content_type":"text/html","content_length":"40641","record_id":"<urn:uuid:934b905b-74ba-4531-84d8-74262e604206>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00553.warc.gz"}
Graphing Linear Functions MCQ [PDF] Questions Answers | Graphing Linear Functions MCQs App e-Book Download: Test 122 Business Mathematics Certification Exam Tests Business Mathematics Practice Test 122 Graphing Linear Functions MCQs (Multiple Choice Questions) PDF Download - 122 The Graphing Linear Functions Multiple Choice Questions (MCQ) with Answers PDF (Graphing Linear Functions MCQs PDF e-Book) download Ch. 4-122 to solve Business Mathematics Practice Tests. Study Linear Function Applications quiz answers PDF, Graphing Linear Functions Multiple Choice Questions (MCQ Quiz) for bachelors in business online. The Graphing Linear Functions MCQs App Download: Free educational app for graphing linear functions, gaussian elimination in mathematics test prep for online business administration courses. The MCQs: The function which states that value of variable y is directly proportional to any change in the variable x value is called; "Graphing Linear Functions" App (Android, iOS) with answers: Change function; Exchange function; Direct function; Linear function; for bachelors in business online. Practice Linear Function Applications Questions and Answers, Google eBook to download free sample for online master's degree in business management. Graphing Linear Functions MCQ with Answers PDF Download: Quiz 122 MCQ 606: The function which states that value of variable y is directly proportional to any change in the variable x value is called 1. exchange function 2. change function 3. direct function 4. linear function MCQ 607: The Gaussian elimination procedure is one of the several methods to solve the 1. inverse of matrix 2. determinant matrix 3. procedure matrix 4. eliminated matrix Business Mathematics Exam Prep Tests Graphing Linear Functions Textbook App: Free Download iOS & Android The App: Graphing Linear Functions MCQs App to study Graphing Linear Functions Textbook, Business Mathematics MCQ App, and Human Resource Management (BBA) MCQs App. The "Graphing Linear Functions MCQs" App to free download Android & iOS Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with subscriptions!
{"url":"https://mcqslearn.com/applied/mathematics/quiz/quiz.php?page=122","timestamp":"2024-11-13T22:44:41Z","content_type":"text/html","content_length":"93971","record_id":"<urn:uuid:ca0d965a-cd2b-4b5e-beb5-fdfb0c9d0937>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00776.warc.gz"}
Tree Age Calculator - Treeier Ever wondered how old that towering tree in your yard might be? Our Tree Age Calculator helps you estimate the age of a tree quickly and easily based on its trunk circumference and species. By measuring the circumference at breast height (CBH) and applying a growth factor specific to the tree species, you can calculate its approximate age in just a few steps. This tool is perfect for tree enthusiasts, gardeners, and homeowners who want to learn more about their trees and ensure proper care. Simply enter your tree’s details below to get started on uncovering its history! Tree Age Calculator Use this calculator to estimate the age of a tree based on its circumference and the tree species’ growth factor. Enter Circumference at Breast Height: Select Tree Species: Estimated Tree Age (years): How to Use the Tree Age Calculator Using our Tree Age Calculator is quick and easy! Simply select your tree species and input its circumference or diameter to find out its approximate age. Let’s say you have a red maple with a circumference of 6 feet 3 inches, equivalent to 2 feet in diameter. Follow these simple steps: 1. Start by selecting “Red Maple” from the tree species drop-down menu. To calculate age using circumference: • Choose the appropriate unit for circumference at breast height (CBH). In this case, select feet/inches. • Input the circumference — enter 6 in the feet box and 3 in the inches box. The calculator will display the diameter at breast height (DBH) — 61 cm — and your tree’s age — 107 years. You’ve got yourself a centenarian tree! To calculate using diameter: • Select the unit for DBH (feet/inches). • Input your tree’s diameter of 2 feet into the box. If your tree species isn’t listed, simply enter the growth factor (usually 3, 4, or 5) to get the results. Additionally, if you want to estimate your tree’s birthday, you can use our date calculator to convert its age into days. Just input today’s date and the tree’s age to get an approximate Tree Growth Factor Chart — The Smaller, the Faster! Our tree growth factor chart helps not only in calculating your tree’s age but also in understanding the growth rate. The smaller the tree growth factor, the faster the tree grows. Check out our tree spacing calculator for guidance on proper tree planting distances. Tree Species Growth Factor American beech 6 American elm 4 American sycamore 4 Austrian pine 4.5 Basswood 3 Black cherry 5 Black maple 5 Black walnut 4.5 Black willow 2 Box elder 3 Bradford pear 3 Common horse chestnut 8 Colorado blue spruce 4.5 Cottonwood 2 Dogwood 7 Douglas fir 5 European beech 4 European white birch 5 Green ash 4 Honey locust 3 Ironwood 7 Kentucky coffee tree 3 Littleleaf linden 3 Northern red oak 4 Norway maple 4.5 Norway spruce 5 Pin oak 3 Quaking aspen 2 Redbud 7 Red maple 4.5 Red pine (Norway pine) 5.5 River birch 3.5 Scarlet oak 4 Scotch pine 3.5 Shagbark hickory 7.5 Shingle oak 6 Shumard oak 3 Silver maple 3 Sugar maple 5.5 Sweetgum 4 Tulip tree 3 White ash 5 White fir 7.5 White oak 5 White pine 5 Yellow buckeye 5 With this chart, you can easily compare different tree species and their respective growth rates. The lower the growth factor, the quicker your tree is expected to grow! Example 1: Determining the Age of an Oak Tree Let’s say you have a magnificent Northern Red Oak in your backyard, and you’re curious about its age. Using the tree age calculator, you can estimate its age with just a few measurements and simple inputs. Here’s how you do it step by step: Measure the Tree’s Circumference at Breast Height (CBH): The first step is to measure the circumference of the tree at breast height, which is 4.5 feet (about 1.3 meters) from the ground. This is a standard height used by arborists and scientists to measure tree diameters. In this case, you take a flexible measuring tape and wrap it around the trunk of your Northern Red Oak at that height. Let’s say you find the circumference to be 8 feet. Convert Circumference to Diameter: Once you have the circumference, the next step is to convert it to diameter. You can do this using the formula: Using a value of Ï€ (approximately 3.14), divide the circumference (8 feet) by Ï€: So, the diameter at breast height (DBH) of your tree is around 2.55 feet, or approximately 30.6 inches. Input the Diameter and Tree Species into the Calculator: With the diameter calculated, head to the tree age calculator and select “Northern Red Oak” from the list of tree species. Each species has a specific growth factor, which is a number used to estimate a tree’s age based on its diameter. For Northern Red Oak, the growth factor is 4. Calculate the Age of the Tree: The tree age calculator will multiply the diameter (in inches) by the growth factor for Northern Red Oak. In this case, the calculation is as follows: So, your Northern Red Oak is approximately 122 years old! That’s nearly a century and a quarter of standing strong. This example shows how easy it is to determine the age of your oak tree with just a simple measurement and a few clicks in the tree age calculator. Example 2: Estimating the Age of a Maple Tree Now, let’s imagine you have a beautiful Red Maple tree growing in your front yard, and you’re curious to know how old it is. Unlike the previous example, this time you’ll measure the diameter directly instead of the circumference. Measure the Tree’s Diameter at Breast Height (DBH): If you have access to the diameter measurement tools or already know the tree’s diameter, you can directly input it into the tree age calculator. Let’s assume you measure the diameter at 20 inches. This is the width of the tree’s trunk at 4.5 feet (breast height) above the ground. Select the Tree Species in the Calculator: After measuring the diameter, go to the tree age calculator and find “Red Maple” in the list of tree species. The Red Maple has a growth factor of 4.5. This growth factor helps account for how fast Red Maples typically grow, allowing the calculator to estimate the tree’s age based on its diameter. Input the Diameter and Calculate the Age: After selecting Red Maple from the list, enter the diameter of 20 inches into the calculator. The calculator will now multiply the diameter by the growth factor for Red Maple (which is 4.5) to estimate the age. The formula looks like this: So, your Red Maple is approximately 90 years old. That’s almost a century of growth, offering shade, beauty, and environmental benefits for decades. What If You Don’t Know the Tree Species? Suppose you can’t identify the tree species but still want to estimate the age. The tree age calculator offers an option to enter a generic growth factor. Most tree species have growth factors in the range of 3 to 5. If you suspect your tree grows fast, you might choose a smaller growth factor (closer to 3), while a slower-growing tree might have a growth factor of 5 or higher. Simply multiply the diameter by the average growth factor to get an approximate age. Understanding the Importance of Knowing Tree Age Estimating the age of your trees isn’t just about curiosity. Knowing the approximate age of your trees can provide important information for several reasons: • Tree Health: Older trees can be more susceptible to diseases, pests, and structural weaknesses. Knowing the tree’s age can help you understand what kind of maintenance it might need to keep it • Historical Significance: If your tree has been standing for over a century, it may have historical value. Some communities preserve ancient trees as landmarks or for their ecological importance. • Planning and Landscaping: The age of a tree can influence landscaping decisions. For instance, an older tree may not have the same growth potential as a younger one, so you may need to plan accordingly when planting nearby trees or adding structures. By using the tree age calculator, you can easily determine the age of your trees and make informed decisions about their care and role in your landscape. Whether you have towering oaks or vibrant maples, understanding their age is a step toward ensuring their health and longevity. Do trees die of old age? Yes, trees can die of old age. As they grow older, their ability to maintain themselves diminishes. Over time, the tree allocates more of its energy to respiration rather than growth, making it susceptible to diseases, pests, and environmental factors. This leads to a decline in health and eventually, the tree may die and become a snag. How can I identify fast-growing trees in my area? You can easily determine fast-growing trees using the tree growth factor chart. The smaller the growth factor, the faster the tree grows. This chart can be helpful in identifying which species are best suited for rapid growth in your region. How do I calculate the age of an oak tree by its diameter? To calculate the age of an oak tree based on its diameter, follow these steps: 1. Measure the tree’s diameter at breast height (dbh) in inches. 2. Multiply the diameter by the species’ growth factor. For example, if a pin oak has a diameter of 3 feet, or 36 inches, multiply it by its growth factor of 3. The tree would be approximately 108 years old. How do I count tree rings to estimate its age? Tree rings are a reliable method for determining age. Each dark ring on the tree trunk represents one year of growth, caused by seasonal changes. To estimate the age, simply count from the innermost dark ring to the outermost ring. This total number represents the tree’s age in years. What is the best way to measure the circumference of a tree? To measure the circumference of a tree, use a measuring tape and wrap it around the trunk at 4.5 feet (1.37 meters) above the ground. This height is known as “breast height” and provides a standard measurement point for determining tree age and size. Can a tree’s age affect its value? Yes, a tree’s age can significantly impact its value. Older, mature trees often contribute more to a property’s aesthetic and environmental value, making them more valuable. Additionally, certain tree species gain value as they age due to their rarity or size. What factors influence a tree’s growth rate? Several factors can influence a tree’s growth rate, including species type, climate, soil quality, and access to water and sunlight. Fast-growing species like cottonwood or black willow tend to have smaller growth factors, while slower-growing trees like oaks have larger growth factors. Can I estimate the age of a tree without cutting it down? Yes, you can estimate a tree’s age without cutting it down by measuring its diameter at breast height and using the species’ growth factor. This method provides an approximation of the tree’s age, without the need for invasive techniques. What are the growth factors for popular tree species? Growth factors vary by species. For example, the growth factor for a red maple is 4.5, while a faster-growing cottonwood has a growth factor of 2. Refer to a tree growth factor chart to find the specific growth factor for the species you’re interested in. Is it possible to extend the lifespan of an aging tree? Yes, proper care can extend the lifespan of an aging tree. Regular pruning, proper watering, and disease control can help maintain the health of an older tree. Consult a professional arborist to assess the condition of the tree and recommend appropriate actions.
{"url":"https://treeier.com/tree-age-calculator/","timestamp":"2024-11-08T12:21:40Z","content_type":"text/html","content_length":"298696","record_id":"<urn:uuid:5755c7ad-82c3-41bf-89cc-3c3594569d76>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00112.warc.gz"}
The Easiest Problem Is This One Problem B The Easiest Problem Is This One Some people think this is the easiest problem in today’s problem set. Some people think otherwise, since it involves sums of digits of numbers and that’s difficult to grasp. If we multiply a number $N$ with another number $m$, the sum of digits typically changes. For example, if $m = 26$ and $N=3029$, then $N\cdot m = 78754$ and the sum of the digits is $31$, while the sum of digits of $N$ is 14. However, there are some numbers that if multiplied by $N$ will result in the same sum of digits as the original number $N$. For example, consider $m = 37, N=3029$, then $N\cdot m = 112073$, which has sum of digits 14, same as the sum of digits of $N$. Your task is to find the smallest positive integer $p$ among those that will result in the same sum of the digits when multiplied by $N$. To make the task a little bit more challenging, the number must also be higher than ten. The input consists of several test cases. Each case is described by a single line containing one positive integer number $N, 1\leq N\leq 100\, 000$. The last test case is followed by a line containing zero. For each test case, print one line with a single integer number $p$ which is the minimal number such that $N\cdot p$ has the same sum of digits as $N$ and $p$ is bigger than 10. Sample Input 1 Sample Output 1
{"url":"https://open.kattis.com/contests/nekafn/problems/easiest","timestamp":"2024-11-02T22:13:17Z","content_type":"text/html","content_length":"30611","record_id":"<urn:uuid:380e9229-3e3a-4fef-818b-bac7938c5c44>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00299.warc.gz"}
Riemann Surfaces of Infinite Genussearch Item Successfully Added to Cart An error was encountered while trying to add the item to the cart. Please try again. Please make all selections above before adding to cart Riemann Surfaces of Infinite Genus Joel Feldman : University of British Columbia, Vancouver, BC, Canada Horst Knörrer : Eidgenössische Technische Hochschule, Zurich, Switzerland A co-publication of the AMS and Centre de Recherches Mathématiques Hardcover ISBN: 978-0-8218-3357-5 Product Code: CRMM/20 List Price: $115.00 MAA Member Price: $103.50 AMS Member Price: $92.00 eBook ISBN: 978-1-4704-3865-4 Product Code: CRMM/20.E List Price: $110.00 MAA Member Price: $99.00 AMS Member Price: $88.00 Hardcover ISBN: 978-0-8218-3357-5 eBook: ISBN: 978-1-4704-3865-4 Product Code: CRMM/20.B List Price: $225.00 $170.00 MAA Member Price: $202.50 $153.00 AMS Member Price: $180.00 $136.00 Click above image for expanded view Riemann Surfaces of Infinite Genus Joel Feldman : University of British Columbia, Vancouver, BC, Canada Horst Knörrer : Eidgenössische Technische Hochschule, Zurich, Switzerland A co-publication of the AMS and Centre de Recherches Mathématiques Hardcover ISBN: 978-0-8218-3357-5 Product Code: CRMM/20 List Price: $115.00 MAA Member Price: $103.50 AMS Member Price: $92.00 eBook ISBN: 978-1-4704-3865-4 Product Code: CRMM/20.E List Price: $110.00 MAA Member Price: $99.00 AMS Member Price: $88.00 Hardcover ISBN: 978-0-8218-3357-5 eBook ISBN: 978-1-4704-3865-4 Product Code: CRMM/20.B List Price: $225.00 $170.00 MAA Member Price: $202.50 $153.00 AMS Member Price: $180.00 $136.00 • CRM Monograph Series Volume: 20; 2003; 296 pp MSC: Primary 30 In this book, the authors geometrically construct Riemann surfaces of infinite genus by pasting together plane domains and handles. To achieve a meaningful generalization of the classical theory of Riemann surfaces to the case of infinite genus, one must impose restrictions on the asymptotic behavior of the Riemann surface. In the construction carried out here, these restrictions are formulated in terms of the sizes and locations of the handles and in terms of the gluing maps. The approach used has two main attractions. The first is that much of the classical theory of Riemann surfaces, including the Torelli theorem, can be generalized to this class. The second is that solutions of Kadomcev-Petviashvilli equations can be expressed in terms of theta functions associated with Riemann surfaces of infinite genus constructed in the book. Both of these are developed here. The authors also present in detail a number of important examples of Riemann surfaces of infinite genus (hyperelliptic surfaces of infinite genus, heat surfaces and Fermi surfaces). The book is suitable for graduate students and research mathematicians interested in analysis and integrable systems. Titles in this series are co-published with the Centre de recherches mathématiques. Graduate students and research mathematicians interested in analysis and integrable systems. □ Chapters □ $L^2$-cohomology, exhaustions with finite charge and theta series □ The Torelli Theorem □ Examples □ The Kadomcev–Petviashvilli equation • Book Details • Table of Contents • Additional Material • Requests Volume: 20; 2003; 296 pp MSC: Primary 30 In this book, the authors geometrically construct Riemann surfaces of infinite genus by pasting together plane domains and handles. To achieve a meaningful generalization of the classical theory of Riemann surfaces to the case of infinite genus, one must impose restrictions on the asymptotic behavior of the Riemann surface. In the construction carried out here, these restrictions are formulated in terms of the sizes and locations of the handles and in terms of the gluing maps. The approach used has two main attractions. The first is that much of the classical theory of Riemann surfaces, including the Torelli theorem, can be generalized to this class. The second is that solutions of Kadomcev-Petviashvilli equations can be expressed in terms of theta functions associated with Riemann surfaces of infinite genus constructed in the book. Both of these are developed here. The authors also present in detail a number of important examples of Riemann surfaces of infinite genus (hyperelliptic surfaces of infinite genus, heat surfaces and Fermi surfaces). The book is suitable for graduate students and research mathematicians interested in analysis and integrable systems. Titles in this series are co-published with the Centre de recherches mathématiques. Graduate students and research mathematicians interested in analysis and integrable systems. • Chapters • $L^2$-cohomology, exhaustions with finite charge and theta series • The Torelli Theorem • Examples • The Kadomcev–Petviashvilli equation Please select which format for which you are requesting permissions.
{"url":"https://bookstore.ams.org/CRMM/20","timestamp":"2024-11-02T11:43:32Z","content_type":"text/html","content_length":"92544","record_id":"<urn:uuid:597507be-b559-4afb-9863-7a313233db29>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00770.warc.gz"}
Multi-Membership Covariates — mmc Specify covariates that vary over different levels of multi-membership grouping factors thus requiring special treatment. This function is almost solely useful, when called in combination with mm. Outside of multi-membership terms it will behave very much like cbind. One or more terms containing covariates corresponding to the grouping levels specified in mm. A matrix with covariates as columns. See also if (FALSE) { # simulate some data dat <- data.frame( y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100), g1 = sample(1:10, 100, TRUE), g2 = sample(1:10, 100, TRUE) # multi-membership model with level specific covariate values dat$xc <- (dat$x1 + dat$x2) / 2 fit <- brm(y ~ xc + (1 + mmc(x1, x2) | mm(g1, g2)), data = dat)
{"url":"http://paulbuerkner.com/brms/reference/mmc.html","timestamp":"2024-11-13T15:29:16Z","content_type":"text/html","content_length":"12770","record_id":"<urn:uuid:26d64b34-446a-4b05-a2d8-4359dc2196f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00156.warc.gz"}
math - Question Answer HubQuestion Answer Hub Sign Up to our social questions and Answers Engine to ask questions, answer people’s questions, and connect with other people. Login to our social questions & Answers Engine to ask questions answer people’s questions & connect with other people. Lost your password? Please enter your email address. You will receive a link and will create a new password via email. Please briefly explain why you feel this question should be reported. Please briefly explain why you feel this answer should be reported. Please briefly explain why you feel this user should be reported.
{"url":"https://questionanswerhub.com/question-tag/math/","timestamp":"2024-11-02T17:53:49Z","content_type":"text/html","content_length":"202853","record_id":"<urn:uuid:715a8738-2690-42ec-9a41-0612777ad28c>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00486.warc.gz"}
IRC log for #minetest, 2019-05-30 Time Nick Message 00:34 scr267_ joined #minetest 01:04 Ruslan1 joined #minetest 01:06 Taoki joined #minetest 01:12 el joined #minetest 01:13 Pie-jacker875 joined #minetest 01:18 cdde joined #minetest 01:43 ssieb joined #minetest 01:44 Cornelia joined #minetest 02:05 twoelk|2 joined #minetest 02:25 Miner_48er joined #minetest 02:47 cdde joined #minetest 03:25 cdde joined #minetest 04:02 twoelk|2 left #minetest 04:20 nowhere_man joined #minetest 05:22 milkt joined #minetest 05:48 bingfengfsx joined #minetest 06:15 bingfengfsx joined #minetest 06:36 CWz joined #minetest 06:51 Krock joined #minetest 07:05 jluc joined #minetest 07:15 fwhcat joined #minetest 07:31 pyrollo joined #minetest 08:32 ensonic joined #minetest 08:37 SNDBX joined #minetest 08:46 SNDBX hi 08:46 SNDBX today I am going to stream day to of my new survival world (MineClone 2) 08:46 SNDBX *day 2 08:59 SNDBX I am live with a new survival world of minetest / MineClone (day 2), please join me and help build a new world! https://www.youtube.com/channel/UCkWAn0Wt8Gw4p3m_y37gZuw/live 09:18 ensonic joined #minetest 09:26 proller joined #minetest 09:49 proller joined #minetest 10:14 milkt joined #minetest 10:23 Krock joined #minetest 10:26 milkt joined #minetest 10:26 Copenhagen_Bram joined #minetest 10:27 DachshundLord joined #minetest 10:38 Fixer joined #minetest 10:52 proller joined #minetest 11:06 proller joined #minetest 11:14 milkt joined #minetest 11:19 Beton joined #minetest 11:34 proller joined #minetest 11:49 Fixer_ joined #minetest 11:52 Krock joined #minetest 11:57 OxE7 joined #minetest 12:11 epoch joined #minetest 12:13 milkt joined #minetest 12:26 lisac joined #minetest 12:37 lisac joined #minetest 12:39 proller joined #minetest 12:47 Flitzpiepe joined #minetest 12:55 greeter joined #minetest 12:59 epoch anyone want a bookmarklet to linkify the address column of https://servers.minetest.net/ ? 13:00 Flitzpiepe joined #minetest 13:01 proller joined #minetest 13:05 bingfengfsx joined #minetest 13:09 rubenwardy joined #minetest 13:18 Joseph15 joined #minetest 13:18 Joseph15 Hello! I have a modding question. 13:18 epoch ok 13:19 Krock ok 13:19 Joseph15 If I have 2 craftitems or tools and I want to replace item one with item two when I release Left Mouse Button, How would I do that? 13:19 Joseph15 or right mouse button 13:19 Joseph15 I meant right 13:19 Joseph15 I know about the controls mod but idk how to make that work 13:20 Joseph15 also, I looked at some bow mods that do this for charging the bow, but I got confused by the code 13:20 Krock local old_rmb = player:get_player_controls().rmb 13:20 seastack joined #minetest 13:20 Joseph15 and use that as a function of the craftitem? 13:21 Krock then get the new rmb value to see whether it got released 13:21 Krock you'll have to keep that variable stored somewhere 13:21 Joseph15 Hmmm ok 13:21 Krock thing is: there's no callback when releasing a mouse button, so you could register a globalstep callback 13:22 Joseph15 ok, but is there docs on that? 13:22 Joseph15 I'm still new to lua coding 13:22 Joseph15 I understand callbacks just not globalstep callbacks 13:24 Krock hmm.. what if you try after_use? https://github.com/minetest/minetest/blob/master/doc/lua_api.txt#L5973 13:25 Flitzpiepe joined #minetest 13:25 Joseph15 this would be only left click though, right? 13:26 Joseph15 I'm trying to make a blocking mod, where if you are holding the mod's sword and right click, it will replace with another item that is the blocking sword version 13:26 Krock well then.. on_place to record the first time when the button was pressed 13:26 Joseph15 I got that working 13:27 Joseph15 the issue is putting the non-blocking sword back once you release rightclick 13:27 Krock https://github.com/minetest/minetest/blob/master/doc/lua_api.txt#L3692 13:27 Krock 1) as soon on_place is called, put the player name into a mod's local table 13:27 Krock 2) iterate through these entires each step 13:28 Krock 3) when the player control "rmb" is no longer set, reset the wielded item 13:28 Krock 4) ??? 13:28 Krock 5) profit 13:29 Joseph15 ok, so somethinglike this? minetest.register_globalstep(function(dtime)) if player:get_player_controls().rmb <insert replacement code here> 13:29 Joseph15 probably not that exact code but somethinglike it right? 13:30 JosephOnPhone joined #minetest 13:33 illwieckz joined #minetest 13:37 Joseph15 sorry but I'm still on the lesser side of understandig, oof 13:41 Krock https://pastebin.com/raw/4rdeVLsm 13:41 Krock untested 13:44 xSmurf joined #minetest 13:45 greeter joined #minetest 13:54 Tux[Qyou] joined #minetest 13:54 greeter joined #minetest 14:02 bingfengfsx joined #minetest 14:03 bingfengfsx 共和有两个层面意思:一是,指的是不同的政治机构间的各安其分、和谐共处,它包括政治体系的立法、行政和司法等主要部门;二是,指的是不同政治群体和政治力量,如政党、区域、民族,乃至国家之 14:04 bingfengfsx Sorry, I sent it in the wrong place. 14:05 greeter joined #minetest 14:16 scr267 joined #minetest 14:20 bashir joined #minetest 14:21 scr267 joined #minetest 14:21 greeter joined #minetest 14:34 bingfengfsx joined #minetest 14:48 illwieckz joined #minetest 14:56 proller joined #minetest 14:58 Joseph15 Thanks Krock, I will test it later. 15:05 ensonic joined #minetest 15:10 bingfengfsx joined #minetest 15:17 rubenwardy joined #minetest 15:26 riff-IRC joined #minetest 16:02 Joseph15_ joined #minetest 16:02 nri joined #minetest 16:19 FrostRanger joined #minetest 16:20 nowhereman joined #minetest 16:27 milkt joined #minetest 16:28 epoch https://en.wikipedia.org/wiki/Breeder_reactor one of these in technic would be neat 16:33 troller joined #minetest 17:11 pauloue joined #minetest 17:19 Joseph15_ joined #minetest 17:42 YuGiOhJCJ joined #minetest 17:43 troller joined #minetest 17:44 Joseph15_ joined #minetest 17:45 Joseph15_ How could I make it so if you are holding a certain item in your hand you take less damage when hit by another player? 17:47 Joseph15_ Like if I were to hold a shield item in my hand take half damage only when holding it 17:49 Multi_ left #minetest 17:52 rubenwardy register_on_hpchange, Joseph15_ 17:54 Joseph15_ so on the register hp change I will then give hearts back? 17:55 Joseph15_ Like local hp change? 17:57 Joseph15_ Local hp_gain 18:02 MattJ joined #minetest 18:03 rubenwardy you can return a new HP 18:04 Joseph15_ So like, if I take 4 hearts of damage, return two? 18:29 Joseph15_ joined #minetest 18:32 puzzlecube joined #minetest 18:47 ssieb joined #minetest 18:49 troller joined #minetest 19:22 Unit193 joined #minetest 19:51 FreeFull joined #minetest 20:03 proller joined #minetest 20:29 Edgy1 joined #minetest 20:29 DI3HARD139-m43 joined #minetest 20:39 Edgy1 joined #minetest 21:17 pauloue joined #minetest 21:29 cdde joined #minetest 21:47 sec^nd joined #minetest 21:59 Ruslan1 joined #minetest 22:00 Ruslan1 VanessaE 22:01 VanessaE yes? 22:01 Ruslan1 I was pm you 22:01 VanessaE don't. 22:01 Ruslan1 Why 22:02 VanessaE I don't like to use pm's. anything you need to say can be said in public. 22:02 VanessaE (mostly) 22:03 Ruslan1 Ok I join your channel 22:04 VanessaE you can't. 22:04 VanessaE you must use a real IRC client, and you must register your nick with nickserv. 22:04 VanessaE webchat won't do. 22:04 Ruslan1 Oh 22:05 Ruslan1 I’ll switch it 22:06 Ruslan1 I’m gonna change my nick 22:07 Ruslan1 joined #minetest 22:11 Ruslan1 joined #minetest 22:11 VanessaE kiwiirc is not a real irc client 22:11 Ruslan1 VanessaE I’m back 22:13 Ruslan1 VanessaE give me link 22:13 VanessaE google it :P 22:13 VanessaE if you don't know what a real IRC client is versus webchat, you don't need to be in my server channel. 22:13 VanessaE no offense. 22:14 Ruslan1 Ok 22:16 Ruslan1 joined #minetest 22:17 Ruslan1 This one 22:18 Ruslan1 Here is real one 22:19 Ruslan1 I join real one 22:19 Ruslan1 VanessaE 22:20 VanessaE that's a webchat. 22:20 VanessaE not an irc client. 22:21 Ruslan1 Yes 22:22 VanessaE no. 22:22 VanessaE it is not. 22:23 Ruslan1 Ok maybe I won’t join your channel 22:26 Ruslan1 VanessaE 22:31 kayky joined #minetest 22:32 tuedel_ joined #minetest 22:37 tuedel joined #minetest 22:39 Blo0D joined #minetest 22:40 ircSparky is the dir from on_punchplayer different than vector.direction(hitterpos, playerpos) for any particular reason? it seems to just point from one players feet to the others head, the y value increases to ~1 when the players are at the same position, and lowers as they get farther away 22:41 ircSparky if the remove the y value and normalize they end up the same 22:44 ircSparky not sure, it's just weird. It seems like it would be better to just send the hitters look direction 22:46 Fritigern joined #minetest 22:49 VanessaE sounds like a bug to m 22:49 VanessaE me* 22:50 ircSparky ill see if there anything related in the git issues 23:13 turtleman joined #minetest 23:13 benrob0329 joined #minetest 23:27 ircSparky dosnt look like there is one 23:38 mothrah joined #minetest 23:42 mothrah Hello. Not sure if this server is more for modders or players, but if anyone has info about manipulating the "heat" and "humidity" values in the biomes API to better space my biomes, I would love it hear it! 23:42 VanessaE "this server"? 23:43 VanessaE you're on a general Minetest chat channel :) 23:43 epoch I bet they came from discord 23:43 clavi nope, the glorious webchat 23:44 Unioll joined #minetest 23:44 mothrah I did come from discord, yes. This general, abstractly-defined place. 23:45 p_gimeno mothrah: there's a thread in the forums dedicated to Q&A relative to map generation, I'd say that's your best shot 23:46 p_gimeno https://forum.minetest.net/viewtopic.php?f=47&amp;t=15272 23:48 tune it's called a channel, it's on the freenode server 23:49 clavi and discord has no servers, just chatrooms 23:49 tune iirc the things people call servers are internally called guilds 23:49 tune never used it personally but it's hard to ignore the unsightly monster 23:50 mothrah Thank you for the link. Browsing it now. 23:50 galaxie tune: Discord? Yeah, it's hideous. Try using it in Tor. 23:50 mothrah So. . . this is a guild? 23:50 tune this is an irc channel 23:50 galaxie mothrah: IRC channel. 23:50 tune was talking about discord for a sec 23:50 mothrah Gotcha. 23:52 p_gimeno freenode is a network of servers actually, the individual servers have little relevance because their only purpose is to distribute load, e.g. I'm on leguin.freenode.net, tune is on livingstone.freenode.net, mothrah is on herbert.freenode.net... 23:54 VanessaE and by servers, p_gimeno means the underlying chat network, not minetest.
{"url":"https://irc.minetest.net/minetest/2019-05-30","timestamp":"2024-11-13T12:01:30Z","content_type":"application/xhtml+xml","content_length":"64573","record_id":"<urn:uuid:255ca12b-220b-4bdd-970f-b92362552811>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00229.warc.gz"}
master equality polyhedr The master equality polyhedron (MEP) is a canonical set that generalizes the Master Cyclic Group Polyhedron (MCGP) of Gomory. We recently characterized a nontrivial polar for the MEP, i.e., a polyhedron T such that an inequality denotes a nontrivial facet of the MEP if and only if its coefficient vector forms a vertex of T. … Read more
{"url":"https://optimization-online.org/tag/master-equality-polyhedra/","timestamp":"2024-11-14T07:27:09Z","content_type":"text/html","content_length":"82302","record_id":"<urn:uuid:08cd9039-e703-4fea-8b29-56c172a2abae>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00556.warc.gz"}
Vector Spaces: The Foundation of Modern Machine Learning | Python, AI, and NLP Insights In the vast realm of artificial intelligence and machine learning, few concepts are as fundamental as vector spaces. These mathematical constructs form the bedrock upon which many of our most sophisticated algorithms are built. From the natural language processing systems that power your favorite chatbots to the image recognition software in your smartphone's camera, vector spaces are the hidden framework that makes modern AI possible. But what exactly are vector spaces, and why are they so crucial to machine learning? Let's embark on a journey to demystify this concept, bridging the gap between overly simplistic explanations and PhD-level complexity. A Brief History of Vector Spaces Before we dive into the nitty-gritty of vector spaces, it's worth taking a moment to appreciate their historical context. The concept of vectors has roots that stretch back to ancient times, with early mathematicians using geometric representations to solve problems. However, the modern understanding of vector spaces as we know them today began to take shape in the late 19th and early 20th centuries. Mathematicians like Giuseppe Peano, David Hilbert, and Hermann Weyl played crucial roles in formalizing the concept of vector spaces. Their work laid the groundwork for the field of linear algebra, which is the mathematical backbone of much of modern machine learning and the bane of every American high school student's existence. Fast forward to the present day, and vector spaces have become an indispensable tool in computer science, particularly in the realm of artificial intelligence. They provide a way to represent complex data in a form that computers can efficiently process and analyze. Vector Spaces: The Basics At its core, a vector space is a collection of objects called vectors, which can be added together and multiplied by scalars (regular numbers). These operations must satisfy certain strict rules, but we won't delve too deep into the mathematical formalism here. Instead, let's focus on understanding vectors intuitively. You can think of a vector as an arrow pointing in a specific direction. This arrow has both magnitude (length) and direction. But vector spaces aren't limited to just two or even three dimensions. In machine learning, we often work with vectors that have hundreds or even thousands of dimensions - ChatGPT 3 uses 2,048 dimensions. While this might seem mind-bending at first, it's this high-dimensionality that gives vector spaces their power in representing complex data. Visualizing Multi-Dimensional Spaces One of the challenges in working with vector spaces in machine learning is that they often exist in dimensions far beyond what we can visualize. While we can easily picture a 2D or 3D space, trying to imagine a 2,048-dimension space is, well, nearly impossible for my finite human brain. However, we can use some tricks to help us conceptualize these high-dimensional spaces. One approach is to use projection techniques that map high-dimensional data onto lower-dimensional spaces. Imagine using a flashlight to cast the shadow of a teddy bear onto a wall. Another approach would be to use clustering, where you would imagine each value as an attribute, "favorite color", "favorite Another way to think about high-dimensional spaces is to consider how properties of spaces change as dimensions increase. For instance, in high-dimensional spaces, most of the volume of a hypersphere is concentrated near its surface, a phenomenon known as the "curse of dimensionality." This has important implications for machine learning algorithms that operate in these spaces. Vector Spaces in Action: Real-World Applications Now that we have a basic understanding of vector spaces, let's explore how they're used in various machine learning applications. 1. Natural Language Processing (NLP) In NLP, words are often represented as vectors in a high-dimensional space. This technique, known as word embedding, allows us to capture semantic relationships between words. For example, in a well-trained word embedding space, the vector for "puppy" minus "dog" plus "cat" might result in a vector very close to "kitten." 2. Image Recognition In computer vision, images are often represented as vectors. Each pixel in an image can be thought of as a dimension in a vector space. Convolutional Neural Networks (CNNs), which are the backbone of many image recognition systems, essentially learn to navigate this high-dimensional space to classify images. 3. Recommendation Systems Many recommendation systems use a technique called collaborative filtering, which can be implemented using vector spaces. Users and items (like movies or products) are represented as vectors in a shared space, and recommendations are made based on the proximity of these vectors. Vector Spaces vs. LLM Embeddings As we venture into the cutting-edge territory of Large Language Models (LLMs) like GPT-3 and its successors, it's important to understand how the concept of vector spaces evolves. While LLM embeddings share some similarities with traditional vector spaces, there are some key differences. Traditional vector spaces in NLP, like those used in word2vec, typically have a fixed dimensionality and a static mapping between words and vectors. Once trained, the vector for a word like "cat" remains constant. LLM embeddings, on the other hand, are more dynamic and context-dependent. In models like BERT or GPT, the embedding for a word can change based on its context within a sentence. This allows these models to capture more nuanced meanings and handle polysemy (words with multiple meanings) more effectively. The Power and Limitations of Vector Spaces Vector spaces are incredibly powerful tools in machine learning, but they're not without their limitations. Understanding these can help us appreciate where vector spaces excel and where other approaches might be needed. 1. Efficient Computation: Vector operations can be highly optimized, allowing for fast processing of large datasets. 2. Dimensionality Reduction: Techniques like PCA allow us to compress high-dimensional data while preserving important features. 3. Intuitive Representation: Many real-world phenomena can be naturally represented as vectors, making vector spaces a good fit for various problems. 1. Curse of Dimensionality: As the number of dimensions increases, the volume of the space increases so fast that available data becomes sparse, which can lead to overfitting in machine learning 2. Lack of Interpretability: High-dimensional vector spaces can be difficult for humans to interpret, making it challenging to understand why a model made a particular decision. 3. Assumptions of Linearity: Many vector space methods assume linear relationships, which may not always hold in complex, real-world scenarios. Vector spaces are the unsung heroes of modern machine learning. They provide a powerful framework for representing and manipulating data, enabling the sophisticated algorithms that drive today's AI systems. From the word embeddings that power natural language processing to the high-dimensional spaces navigated by image recognition systems, vector spaces are ubiquitous in the field of artificial As we've seen, understanding vector spaces involves balancing abstract mathematical concepts with practical applications. While the math can get complex, the fundamental ideas are intuitive: we're representing objects as points in a multi-dimensional space, where the dimensions correspond to features or attributes of those objects. As machine learning continues to evolve, so too does our understanding and use of vector spaces. The rise of Large Language Models has introduced new, more dynamic ways of thinking about embeddings and representations. Yet, the core principles of vector spaces remain as relevant as ever. Whether you're just starting your journey in machine learning or you're a seasoned practitioner, a solid grasp of vector spaces will serve you well. They're not just a mathematical curiosity, but a practical tool that underpins much of what makes modern AI so powerful. In future installments of "Machine Learning for Smart People," we'll build on this foundation, exploring how vector spaces interact with other key concepts in machine learning. From neural networks to reinforcement learning, the insights we've gained here will prove invaluable. Remember, the goal isn't just to understand these concepts in isolation, but to see how they fit into the broader landscape of artificial intelligence. As you continue your learning journey, keep asking questions, experimenting with code, and seeking out new challenges. The field of AI is vast and ever-changing, but with a solid understanding of fundamentals like vector spaces, you'll be well-equipped to navigate its complexities.
{"url":"https://www.sethserver.com/ai/machine-learning-for-smart-people-vector-spaces.html","timestamp":"2024-11-10T17:55:08Z","content_type":"text/html","content_length":"29864","record_id":"<urn:uuid:a8367a61-7b01-43f9-8f9c-c0f221a53b0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00622.warc.gz"}
Distinction Between Rational and Irrational Numbers: Methods Explained With Simple Examples - Alltimespost -Creativity Never End. Home education Distinction Between Rational and Irrational Numbers: Methods Explained With Simple Examples Distinction Between Rational and Irrational Numbers: Methods Explained With Simple Examples In science, we should know about various numbers. These numbers are ideal squares, surds, non-ending decimals, ending decimals, non-rehashing decimals and rehashing decimals, and so forth. They are regularly isolated into two classes. The main classification is alluded to as objective numbers, while the subsequent one is considered irrational. Understanding the differentiation between rational and irrational numbers can be somewhat extreme but charming when perceived for understudies. We will endeavour to characterise the contrast among rational and irrational numbers utilising the guide of models. So, let’s start by learning the difference between Rational and Irrational numbers. What is a Rational Number? The number made out of numbers that can be communicated in structures p/q is known as the rational number. It is alluded to by (Q) that is in the genuine number (R), and their member’s numbers (Z) are called normal numbers (N). In this situation, the numbers comprise two sections: denominator q that can’t be zero, just as numerator p. Every member’s number likewise has a place with reasonable numbers, for example, 7=7/1. Thus, the rehashing or ending decimal series is known as an objective number. Likewise, the duplication and whole numbers of the various quantities of the reasonable number may change into the expression “levelheaded number”. What is an Irrational Number? Irrational numbers aren’t rational and belong to real numbers. They cannot be described in terms of an equation of 2 integers. A common illustration of the numbers is √3, √2, etc. The square roots of all natural numbers, except perfect squares, are not rational. Difference Between Rational and Irrational Numbers A majority of students cannot comprehend the distinction between rational and irrational numbers by using their definitions. They need more information to comprehend the distinction between rational and Irrational numbers. The main difference between them can be found below: 1. Perfect Squares are Rational Numbers, and Surds are Irrational Numbers The perfect squares are all rational numbers. The perfect squares are those that correspond to the circumferences and squares of an integer. In other words, you can multiply an integer by the same number of numbers and have the perfect square. The correct squares are √ 4, √ 49, √ 324, √ 1089 as well as √ 1369. After taking those square roots for these perfect squares, we will get 2, 7,18, 33, and 37. 2, 7, 18, 33, and 37 represent both However, Surds, in general, are Irrational Numbers. Surds are numbers that do not represent the numbers that are the squares that make up an integer. Also, they aren’t any of the multiplications that a number with the integer. Surds examples comprise 2, 3, and 7. When we take those surds’ square roots, we will get 1.41, 1.73, and 2.64. 1.41, 1.73, and 2.64 aren’t integers. 2. Terminating Decimals are Rational Numbers All terminable decimals have rational values. These are decimals that have a limited number of digits that follow that decimal mark. For instance, 1.25, 2.34, and 6.94 are all rational numbers. On the other hand, non-terminating decimals refer to numbers with an infinite number of digits following the decimal mark. For instance, 1.235434 …, 3.4444…, and 6.909090… are all non-terminating decimals. Non-terminating decimals are either rational or irrational. They are discussed in the next section. 3. Repeating Decimals are Rational Numbers, and Non-Repeating Decimals are Irrational Numbers All repeating decimals are rational numbers. The decimals that repeat are decimals that have digits that repeat on and off. Examples of repeating decimals include 222222, 33333333, and 555555. On the contrary, all non-repeating decimals are irrational numbers. The non-repeating decimals are the ones that don’t repeat over and. Examples of non-repeating decimals include 3426452, 0435623, and 908612. Can we be able to find Irrational Numbers Between Two Rational Numbers? It is simple to discover the irrational number between 2 rational numbers. It is our goal to master this idea with the help of a case study. Find irrational numbers in the range of the numbers 3-4. It is possible to find the irrational numbers that lie between the two numbers using these steps 1. The first step is to find squares that correspond to the numbers given. In this instance, squares that are 3 and 4 are 9 and 16, respectively. 2. In the second step, you must look for the prime numbers in their squares. Prime numbers that lie between 9 and 16 are 11 and 13. 3. When we take their square roots, we can get the necessary irrational numbers. Square roots for 11 and 13 equal 3.316624… as well as 3.6055512… each. In other words, 3. 3.6055512… 316624… and 3.6055512… is a non-repeating decimal. These are therefore irrational numbers. Practical Examples After better understanding the distinction between rational and non-rational numbers, we attempt to distinguish rational and irrational numbers from the given numbers. Divide the rational numbers from the next numbers; √5, 6/5, √25, 5/4, √36, √8, 16/3. 5/4 (1.25) is also an arithmetic number. 5 is an irrational number because 5 is an irrational number. After all, it’s a surd, and it isn’t the factor of an integer by itself. However, 25 is considered a rational figure because it is a quadrilateral of an integer 5 on its own. The number is considered a terminating decimal since it has a finite number of digits following the decimal mark. 6.5% (1.2) can also be considered a rational decimal since a limited number of digits follow that decimal mark. The number 36 also is a rational one since it is an ideal square. 8 . is an irrational value because it is unsure. The answer to 16.3 is 5.33333… This indicates that it’s the result of a repeating decimal. We are aware that a repetition decimal is also a rational number. The answer to six-sevenths is 0.85714… This indicates that it’s not a repeating decimal. We are aware that all non-repeating decimals can be considered irrational. At the final point, you will be able to easily discern the difference between rational and non-rational numbers with the aid of these important factors: Rational numbers = Perfect squares + Terminating decimals + Repeating decimals Still have any questions or confusion, please feel free to reach out.
{"url":"https://alltimespost.com/distinction-between-rational-and-irrational-numbers-methods-explained-with-simple-examples/","timestamp":"2024-11-05T02:49:46Z","content_type":"text/html","content_length":"155229","record_id":"<urn:uuid:e746a030-d3e6-4be9-bd9e-73ad524610b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00739.warc.gz"}
Case study problem probability 6 chapter 13 class 12 Case study Chapter 13 (Probability) A coach is training 3 players. He observes that the player A can hit a target 4 times in 5 shots, player B can hit 3 times in 4 shots and the player C can hit 2 times in 3 shots A coach is training 3 players. He observes that the player A can hit a target 4 from this situation answer the following(Case study problem probability 6 chapter 13 class 12) (a) Let A: the target is hit by A, B: the target is hit by B and C: The target is hit by C. Then find the probability that A, B and C all will hit (b) Reffering to (a),what is the probability that B, C will hit and A will lose ? (c) With reference to the events mentioned in (a), what is the probability that any two of A, B and C will hit ? (d) What is the probability that none of them will hit the target ? (e) What is the probability that at least one of A, B or C will hit the target ? Solution: P(A) = 4/5 P(B) = 3/4 and P(C) = 2/3 P(A not hitting the target) = P(A’) = 1 – 4/5 = 1/5 P(B not hitting the target) = P(B’) = 1 – 3/4 = 1/4 P(C not hitting the target) = P(C’) = 1 – 2/3 = 1/3 (a) P(The probability that A, B and C all will hit) = P(A).P(B).P(C) = 2/5 (b) P(B and C hit and A lose) (c) P(Any two of A, B and C will hit) = P(A∩B∩C’) + P(A∩B’∩C) + P(A’∩B∩C) = P(A)×P(B)×P(C’) + P(A)×P(B’)×P(C) + P(A’)×P(B)×P(C) = 4/5×3/4×1/3 + 4/5×1/4×2/3 + 1/5×3/4×2/3 = 12/60 + 8/60 + 6/60 = 26/60 = 13/30 (d) probability that none of them will hit the target = P(A’∩B’∩C’) = P(A’)×P(B’)×P(C’) = 1/5×1/4×1/3 = 1/60 (e) probability that at least one of A, B or C will hit the target ⇒ P(A∪B∪C) = 1 – P(A’∩B’∩C’) = 1 – 1/ 60 = 59/60 Question : Mahindra tractors is India’s leading farm equipment manufacturer. It is the largest tractor selling factory in the world. Mahindra tractors is India’s leading farm equipment manufacturer. It is the largest tractor selling factory This factory has two machines A and B. Past recordshows that machine A produced 60% and machine B produced 40% of the output(tractors). Further 2% of the tractors produced by machine A and 1% produced by machine B were defective. All the tractors are put into one big store hall and one tractor is chosen at random.(Case study problem probability 5 chapter 13 class 12) Based on the above information answer the following question : (i) Find the total probability of chosen tractor ( at random) is defective. (ii)(a) If in random choosing, chosen tractor is defective, then find the probability that the chosen tractor is produced by machine ‘A’. (b) If in random choosing, chosen tractor is defective, then find the probability that the chosen tractor is produced by machine ‘B’ Solution: For solution click here Question: Nisha and Arun appeared for first round of an competitive examination for two vacancies. The probability of Nisha’s selection is 1/6 and that of Arun’s selection is 1/4 Nisha and Arun appeared for first round of an competitive examination based on the above information, answer the following question:(Case study problem probability 7 chapter 13 class 12) (a) Find the probability that at least one of them is selected. (b) Find the probability that both of them are selected. (c) Find the probability that none of them is selected (d) Find the probability that only one of them is selected. (e) Suppose Nisha is selected by the director and told her about two posts X and Y for which posting is independent. If the probability of posting for post X is 1/6 and for post Y is 1/7, then find the probability that Nisha is selected for at least one post. Solution : Let A = The event that Nisha is selected B = The event that Arun is selected We have, For solution click here Leave a Comment
{"url":"https://gmath.in/case-study-problem-probability-6-chapter-13-class-12","timestamp":"2024-11-08T18:14:11Z","content_type":"text/html","content_length":"197954","record_id":"<urn:uuid:7057e167-9c2b-41e4-852e-13ff4c5b1bce>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00258.warc.gz"}
Log Periodic Antenna Calculator Online Home » Simplify your calculations with ease. » Telecom Calculators » Log Periodic Antenna Calculator Online When it comes to designing or assessing log-periodic antennas, the task can be a little daunting due to the complex mathematical equations involved. Fortunately, a calculator simplifies these calculations, making the process far less complicated. A Log-Periodic Antenna Calculator is a digital tool that helps calculate the length of an antenna element using the frequency of operation and the number of elements in the antenna. This device falls under the calculator category of engineering and electronics. Understanding the Log-Periodic Antenna Calculator’s Working The calculator uses a precise formula to determine the length of an antenna element. Users enter the frequency of operation and the number of elements in the antenna. The calculator then employs these inputs to output the antenna length. The Formula The formula used in this calculator is L = C / (f * (2^n – 1)) where L is the length of the element, C is the speed of light (approximately 3 x 10^8 meters per second), f is the frequency of operation, and n is the number of elements in the antenna. A Practical Example For example, if the frequency of operation is 100Hz and the antenna has three elements, the calculator will give the length of the antenna element by substituting these values into the formula. A. Broadcasting: Broadcasters can utilize the calculator to determine the optimal antenna length for their transmissions, thus maximizing signal strength and range. B. Telecommunication: In the telecom industry, the calculator can aid in designing antennas for cellphone towers or satellite communications. Frequently Asked Questions What is a Log-Periodic Antenna Calculator? A Log-Periodic Antenna Calculator is a digital tool used in engineering and electronics to calculate the length of an antenna element based on the frequency of operation and the number of elements in the antenna. How does the Log-Periodic Antenna Calculator work? The calculator uses the formula L = C / (f * (2^n – 1)) to determine the length of an antenna element. Users input the frequency of operation and the number of elements in the antenna, and the calculator outputs the length. The Log-Periodic Antenna Calculator is a vital tool in the field of engineering and electronics. By simplifying complex calculations, it aids in the design and assessment of log-periodic antennas, thereby contributing to advancements in broadcasting and telecommunications. Leave a Comment
{"url":"https://calculatorshub.net/telecom-calculators/log-periodic-antenna-calculator/","timestamp":"2024-11-09T17:44:22Z","content_type":"text/html","content_length":"113361","record_id":"<urn:uuid:767447dd-400d-426f-8136-4b26413008ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00654.warc.gz"}
dia and encasing length Available online at www.sciencedirect.com Soils and Foundations 59 (2019) 1579–1590 Technical Paper Short-term and long-term behavior of geosynthetic-reinforced stone columns Ahad Ehsaniyamchi ⇑, Mahmoud Ghazavi Civil Engineering Department, K.N. Toosi University of Technology, Tehran, Iran Received 28 December 2018; received in revised form 10 May 2019; accepted 31 July 2019 Available online 14 September 2019 Stone columns are often used to improve the load-carrying characteristics of weak soils. In very soft soils, however, the bearing capacity of stone columns may not significantly improve the load-carrying characteristics due to the very low confinement of the surrounding soil. In such cases, encased stone columns (ESCs) or horizontally reinforced stone columns (HRSCs) may be used. Although ESCs have been studied extensively, few studies have been done on HRSCs. In addition, very limited studies are available on ESCs and HRSCs under the same conditions. Moreover, no studies have been carried out to compare the long-term and short-term behavior of HRSCs with that of ESCs. In this research, therefore, numerical analyses are performed on various types of reinforced end-bearing stone columns to compare their behavior under both long-term and short-term conditions under various loading conditions. The Advanced Modified Cam-clay model for clay and the Hardening Soil model for stone column materials are used. The results show that with proper reinforcing stone columns, in addition to a considerable reduction in settlement, the consolidation time can be greatly decreased and most of the settlement will occur during the loading period. Also, the consolidation settlement rate may be increased by using a smaller column diameter and a larger area replacement ratio for the unit cell, stiffer geosynthetic reinforcements, and greater values for the internal friction angle of the stone column materials. &Oacute; 2019 Production and hosting by Elsevier B.V. on behalf of The Japanese Geotechnical Society. This is an open access article under the CC BYNC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Keywords: Geosynthetics; Reinforced stone columns; Numerical analysis; Short-term and long-term behavior; Consolidation settlement 1. Introduction Stone columns are often used as a ground-improvement method to improve the bearing capacity, to reduce the settlement of saturated clayey soil, to increase the consolidation rate of fine soils, and to decrease the liquefaction potential. In very soft soils, however, the bearing capacity of ordinary stone columns (OSCs) is small due to the very low lateral confinement of the surrounding soil that leads to bulging failure at a depth of D-2.5D (Nazariafshar Peer review under responsibility of The Japanese Geotechnical Society. ⇑ Corresponding author. E-mail addresses: aehsani@mail.kntu.ac.ir (A. Ehsaniyamchi), ghazavi_ma@kntu.ac.ir (M. Ghazavi). and Ghazavi, 2014). In such situations, the loadsettlement behavior of the stone columns can be improved by geosynthetic reinforcements. Fig. 1 shows two main reinforcing methods of stone columns. As seen in the figure, a stone column may be reinforced by an encasement, called an encased stone column (ESC), wrapped with a geosynthetic like a wick drain, or by placing horizontal sheets of a geosynthetic within the column body at regular intervals, called a horizontally reinforced stone column (HRSC). The encasement may be wrapped around the whole length of the stone column (Le = L), called a fulllength ESC, or just wrapped around the upper portion of the stone column, for example, the half-length of the column (Le = 0.5L), where Le is the encased length of the 0038-0806/&Oacute; 2019 Production and hosting by Elsevier B.V. on behalf of The Japanese Geotechnical Society. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Applied pressure Soft soil Geosynthetic encasement Soft soil Stone column reinforcing layers Fig. 1. Examples of various models used in numerical analyses. Unit cell models of: (a) OSC, (b) full-length ESC, (c) half-length ESC, (d) HRSC with Sr = 0.5D, and (e) single HRSC with Sr = 0.25D. The encasing of stone columns has been studied using analytical solutions (Pulko et al., 2011; Zhang and Zhao, 2015), experiments (Gniel and Bouazza, 2009; Murugesan and Rajagopal, 2010; Ghazavi and Nazariafshar, 2013; Ali et al., 2012, 2014; Miranda and Da Costa, 2016; Hong et al., 2016), and numerical methods (Murugesan and Rajagopal, 2006; Khabbazian et al., 2010; Keykhosropur et al., 2012; Elsawy, 2013; Hosseinpour et al., 2014; Yu et al., 2016). Most of the analytical and numerical studies used the unit cell concept, assuming an infinitely wide loaded area with end-bearing stone columns having a constant diameter and spacing, where the stone column and the surrounding soil were treated as axisymmetric forms (Pulko et al., 2011). Murugesan and Rajagopal (2006) and Khabbazian et al. (2010) reported that encasing the top portion of ESCs may be sufficient for preventing bulging failure and enhancing the bearing capacity. However, Gniel and Bouazza (2009) and Ghazavi and Nazariafshar (2013) reported that, in very soft soils, reinforcing the upper half part of ESCs may lead to the relocation of the bulging failure to the lower unencased parts of the columns; and thus, it may be more useful to encase the full length of the ESCs. HRSCs have been studied by Sharma et al. (2004), Wu and Hong (2008), Ali et al. (2012, 2014), Nazariafshar and Ghazavi (2014), Hosseinpour et al. (2014) and Ghazavi et al. (2018). Their results showed that the beneficial effect of HRSCs mainly depends on the vertical spacing between the horizontal reinforcing sheets and that the bearing capacity of HRSCs increases with a decrease in the spacing between the reinforcing layers (Ghazavi et al., 2018). Although various studies have been conducted on ESCs and HRSCs, only a very limited number of studies have compared the two methods under the same conditions (Ali et al., 2012, 2014; Hosseinpour et al., 2014). Moreover, although several studies have been conducted on the con- solidation of OSCs (Wang, 2009; Cimentada et al., 2011; Ng and Tan, 2014; Lu et al., 2017; Deb and Behera, 2017), most of the studies on ESCs have been focused on either the short-term or the long-term behavior and only a few studies investigated the consolidation of ESCs (Castro and Sagaseta, 2011; Zhang et al., 2012; Castro et al., 2013; Pulko and Logar, 2017). Castro and Sagaseta (2011) presented an advanced analytical method for predicting the consolidation settlement of ESCs based on Barron’s solution. Pulko and Logar (2017) used Biot’s theory and presented a fully coupled semi-analytical solution in order to account for the consolidation settlement of ESCs. In addition, to the best knowledge of the authors, there have been no studies that compared the long-term behavior and the consolidation settlement of HRSCs with those of Although the behavior of the granular aggregates and geosynthetic reinforcements of stone columns is almost independent of the loading speed, due to the presence of soft compressible clay around columns with very low permeability, after applying the initial loading on the stone columns and the surrounding soil, horizontal and vertical consolidation deformation is generated in the soil around the stone columns. This will cause additional deformation and the regeneration of stress in both the stone columns and the reinforcements. Therefore, the effects of consolidation on the soft clay surrounding the columns should be taken into account when calculating the stress and deformation of the various elements of the stone columns. This paper performs numerical analyses to compare both the long-term and short-term behavior and the consolidation settlements of end-bearing ESCs and HRSCs. To this aim, advanced constitutive models are used to compare the long-term and short-term behavior of ESCs and HRSCs. The present results may assist practicing engineers in choosing the best reinforcement method for stone A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 columns with respect to the site, the loading conditions, the available materials, and the soil-improvement target. 2. Finite element analyses linear-elastic behavior was used for reinforcing the material simulation. To allow the mobilization between the reinforcement and the soil materials, interface elements were used by applying a strength reduction factor of 0.67, as suggested by the PLAXIS manual and used by Khabbazian et al. (2010). 2.1. Model description and boundary conditions 2.2. Numerical analysis validation The finite element model was verified for both reinforcing methods using data reported by others in the literature (Figs. 2 and 3). The predicted ESC data were verified by the data reported by Khabbazian et al. (2010) for an OSC and a full-length ESC, both with a diameter of 80 cm and a length of 5 m. The predicted HRSC data were verified by the experimental data reported by Ghazavi et al. (2018) for an OSC and an HRSC, both with a diameter of 10 cm and a length of 50 cm. As seen in Fig. 3, there is a good agreement between the test data and the simulations. Therefore, the adopted numerical analysis methods can be used to further discover the behavior of HRSCs and ESCs. Fig. 2. Validation of numerical analysis for ESCs. Vertical stress (kPa) Settlement (mm) Finite element analyses were performed using PLAXIS 2D in an axisymmetric condition. In the numerical analyses, two configurations of full-length ESCs and halflength ESCs were adopted, and their characteristics were compared with HRSCs with Sr = 0.25D and Sr = 0.5D, where Sr is the spacing of the horizontal reinforcing strips (Fig. 1d) and D denotes the stone column diameter. The reinforcing material used for the two cases of full-length ESC and HRSC with Sr = 0.25D was the same and equal to p.D.L, where L is the column length. In the same way, the area of the reinforcing material used for the two cases of half-length ESC and HRSC with Sr = 0.5D was equal to p.D.L/2. This facilitated a comparison between ESCs and HRSCs in terms of the consumption of the reinforcing material. Two types of stone column configurations for a single column and the unit cell concept, representing the stone column group, were studied by means of various numerical parametric analyses (Fig. 1). The length of all the stone columns was assumed to be 5 m. All the stone columns were located on a rigid stratum. Fig. 1 also shows some examples of the geometric models adopted for various types of stone columns with various loading conditions. Fig. 1a to 1d show the configurations of the unit cell conditions for the interior column conditions in the group of stone columns supporting a rigid spread footing. Fig. 1e shows a single HRSC supporting a rigid footing. In all the numerical analyses, the initial in-situ stress levels were predicted by considering a value of 0.5 for the at-rest pressure coefficient. Then, the analyses were carried out by removing the hole, replacing the column materials, and applying vertical pressure on the top of the rigid footing or adopting a prescribed displacement on the top of the model to simulate the rigid footing condition on the top of the stone column and the tributary area. To remove the effects of the element size, a fine mesh discretization was considered for all the models. As seen in Fig. 1, the boundary conditions in modeling single stone column loading were sufficiently extended. However, the boundary conditions of the unit cell models were adopted according to three assumed area replacement ratios, namely, 0.15, 0.25, and 0.35, which are the ratios normally used in practice. The soft soil and stone column materials were modeled using 15-noded triangular elements, and the geosynthetic reinforcements were simulated using 5-noded geogridtype elements. The Modified Cam-Clay (MCC) model and the Hardening Soil (HS) model were used for the clay and stone column materials, respectively. Moreover, a PLAXIS, OSC, D=100 mm PLAXIS, HRSC, D=100 mm, S =D Experiment by Ghazavi et al. (2018), OSC, D=100 mm Experiment by Ghazavi et al. (2018), HRSC, D=100 mm, S =D Fig. 3. Validation of numerical analysis for HRSCs. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 3. Numerical results In the literature, most of the tests performed on ordinary or reinforced stone columns were under quick undrained loading conditions for soft clay materials. However, the presence of a stone column causes the rapid dissipation of the excess pore pressure that is generated due to the undrained loading conditions. As a result, the insufficient column-bearing capacity and the changes in settlement are due to the consolidation of the surrounding clay. In this research, the Advanced Modified Cam-Clay and Hardening Soil constitutive models for clay and stone column materials, respectively, were used in the numerical analyses to study the long-term and short-term behavior of both HRSCs and ESCs. Also, to consider the free drainage effects of the stone column materials, a fully coupled flowdeformation analysis was used to simulate quick loading conditions and free drainage was assumed for the geosynthetic reinforcements. The clay parameters in the numerical analyses were the same as those used by Khabbazian et al. (2010) for Bangkok clay. For the stone column material, typical values were used (Table 1). For the various types of reinforced stone columns in the unit cell, various series of numerical analyses were conducted, details of which are given Table 2. To assess the effect of each parameter, all other parameters were kept constant according to the underlined values given in Table 2. Table 1 Material parameters for numerical models. Property name Soft soil Stone column csat (kN/m3) / (deg) c (kPa) w (deg) 50 (MPa) oed (MPa) ur (MPa) kx (m/day) Ky (m/day) Two different series of numerical analyses were performed to study the short-term bearing capacity and the long-term consolidation settlement of various types of single stone columns. The first series of analyses consisted of a short-term two-day coupled loading, performed by applying a prescribed settlement of 20 cm. However, the second series of analyses included a short-term two-day coupled loading performed by applying vertical stress that caused a 20-cm settlement at the end of the two-day loading period determined from the first series of analyses that was followed by a 100-day consolidation period. In all the analyses, constant values of 80 cm and 500 cm were considered for the diameter and the length of the stone columns, respectively. However, three different ratios of footing diameter (D0 ) to stone column diameter (D), namely, D0 / D = 1, 2, and 3, were adopted for the single stone columns. To determine the efficiency of the geosynthetic reinforcement on the load-bearing capacity of the stone columns, the bearing improvement factor (B.I.F.) is defined as the ratio of the bearing capacity of a reinforced stone column to the bearing capacity of an ordinary stone column with the same conditions at the same settlement values. In addition, to evaluate the settlement improvement of the reinforcements, the settlement improvement factor (S.I.F.) is defined as the ratio of the settlement caused by an ordinary stone column to the settlement caused by a reinforced stone column with the same conditions. The definition of both parameters, B.I.F. and S.I.F., are illustrated in Fig. 4. Moreover, as shown in Fig. 4b, parameter SEOL is defined as the settlement value of a stone column at the end of the loading stage, while parameter SF is defined as the final settlement value of a stone column at end of the consolidation 3.1. Unit cell modeling 3.1.1. Influence of loading rate and consolidation time To study the behavior of various types of reinforced stone columns at various loading rates, 15 coupled numerical analyses were carried out using three loading durations, namely, 2, 20, and 100 days, for applying 200 kPa of vertical stress on the top of the unit cell. These durations were selected to simulate rapid, medium, and slow loading rates, respectively. All analyses were followed by a consolidation analysis until 200 days had passed from the start of the loading period, in order to assess and compare the long-term behavior of all the stone column types. Table 2 Various numerical parametric analyses for unit cell concept of stone column. Series of parametric Parameter details Loading duration and consolidation time = 2 days of loading + 198 days of consolidation, 20 days of loading + 180 days of consolidation, 100 days of loading + 100 days of consolidation Column diameter = 50, 80, and 110 cm Area replacement ratio = 0.15, 0.25, and 0.35 Reinforcement stiffness = 1000, 3000, and 5000 kN/m Internal friction angle of stone column materials = 35, 40, and 45&deg; A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Fig. 4. Definition of reinforcement improvement parameters: (a) B.I.F. and (b) S.I.F. Fig. 5 shows the variation in the time-settlement behavior at the top of the unit cell in the OSC and various types of reinforced stone columns, while Fig. 6 compares the final settlement (sF) and the S.I.F. values at the end of 200 days. As seen in Figs. 5 and 6, all types of reinforcements were able to reduce the settlement of the OSC. The HRSC with Sr = 0.25D and the full-length ESC show the best reinforcement performances, providing high confining effects on the stone column materials. However, the halflength ESC and the HRSC with Sr = 0.5D show smaller improvement effects with low confining effects on the stone column material. Also, as seen in Fig. 6, the final long-term settlement of the OSC or the reinforced stone columns is independent of the column loading rate. In fact, the amount of final settlement of the loaded stone columns is seen to depend on the soil and the stone column material properties, the reinforcement stiffness, and the geometric conditions, and it is independent of the loading rate. Fig. 7 shows the excess pore water pressure levels at the end of the 2-day loading duration. As seen in the figure, by reinforcing the columns, the excess pore pressure decreases and the greatest decreases in excess pore pressure occur for the full-length ESC and the HRSC with Sr = 0.25D. Therefore, due to the small excess pore pressure generation with these types of reinforcements, minimum consolidation settlement is expected. Fig. 8 shows the variation in the ratio of the settlement at the end of the loading time (sEOL) to the final settlement at the end of consolidation (sF). As seen from Figs. 6 and 8, the loading rate has a very minimal effect on the final long-term settlements for all types of stone columns. However, the ratio of sEOL/sF varies greatly Fig. 5. Time-settlement behavior of various stone columns with different loading rates. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Fig. 6. Variation in (a) final settlement (sF) and (b) S.I.F. values at end consolidation vs loading duration. for the various types of stone columns and the various loading rates. For example, for the 2-day loading rate, the value of sEOL/sF for both the full-length ESC and the HRSC with Sr = 0.25D is about 0.66. This indicates that these reinforcing types can reduce the consolidation settlement to the minimum value, even with very rapid loading. It is also seen that, by using the moderate time of 20 days of loading, the value of sEOL/sF for both the full-length ESC and the HRSC with Sr = 0.25D is more than 0.93. This means that most of consolidation settlement occurs within the first 20 days for these stone columns. However, the consolidation settlement rates for the half-length ESC and the HRSC with Sr = 0.5D are remarkably lower than those in the above cases, especially at rapid loading rates. Fig. 7. Excess pore pressure generated in various stone columns at end of 2-day loading duration: (a) OSC, (b) full-length ESC, (c) half-length ESC, (d) HRSC with Sr = 0.25D, and (e) HRSC with Sr = 0. 5D. Fig. 8. Variation in (sEOL/sF) vs loading duration for various stone 3.1.2. Influence of stone column diameter Fig. 9 shows the time-settlement behavior of various diameters of 50, 80, and 110 cm for stone columns loaded up to 200 kPa of vertical stress in 2 days, followed by 198 days of consolidation time. As seen in the figure, with an increase in the diameter of all types of stone columns in the unit cell with a constant area replacement ratio, the consolidation settlement rate decreases and more time is required to reach the final settlement for larger diameters. The difference in the consolidation time for various diameters of the full-length ESC and the HRSC with Sr = 0.25D is very small, ranging from 2 days for D = 50 cm to 10 days for D = 110 cm. However, the consolidation time for the half-length ESC increases from 4 days for D = 50 cm to 20 days for D = 110 cm. Moreover, for the OSC and the HRSC with Sr = 0.5D, the consolidation time has the largest increase from about 15 days for D = 50 cm to about 60 days for D = 110 cm. In fact, the load-bearing behavior of the full-length ESC and the HRSC with Sr = 0.25D has minimum dependency on the surrounding clay behavior, and a minimum amount of excess pore pressure is generated for these types of reinforced stone columns. As a result, they experience minimum consolidation settlements. Fig. 10 shows the variation in the S.I.F. versus the diameter of the stone columns for various reinforcement types. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Fig. 9. Time-settlement behavior of various stone columns with different column diameters: (a) D = 50 cm, (b) D = 80 cm, and (c) D = 110 cm. the benefit of the encasement decreases with an increase in the diameter of these columns. It should be noted that, with an increase in the stone column diameter from 50 to 80 and 110 cm, the area ratio of the reinforcing material to the volume for the unit cell decreases from 2 to 1.25 and 0.91, respectively. In other words, with an increase in the diameter of the full-length ESC and the HRSC with Sr = 0.25D from 50 to 110 cm, the use of reinforcement material brings about a two-fold decrease. However, the S.I.F. decreases just about 19% for the HRSC with Sr = 0.25D. Therefore, from the viewpoint of the amount of consumption of the reinforcing material, the best reinforcement type is the HRSC with Sr = 0.25D for larger stone column diameters. Fig. 10. Variation in S.I.F. with column diameter for various stone As shown in the figure, for all reinforcement types, the S.I. F. value decreases with an increase in the stone column diameter. However, the rate of decrease is much larger for the full-length ESC than for the HRSCs. Murugesan and Rajagopal (2010) and Castro and Sagaseta (2011) reported the same results for ESCs and concluded that 3.1.3. Influence of area reinforcement ratio Fig. 11 shows the time-settlement behavior of stone columns with various area replacement ratios. As seen in the figure, the rate of consolidation increases with an increase in the area replacement ratio for all types of stone columns. This is due to a reduction in the drainage path length and to bearing a larger part of the applied load by the stiffer stone column. In addition, the time it takes to reach the final settlement for the full-length ESC and the HRSC with Fig. 11. Time-settlement behavior of stone columns with different area replacement ratios: (a) 0.15, (b) 0.25, and (c) 0.35. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Fig. 12. Variation in S.I.F. with area replacement ratio for various stone Sr = 0.25D is minimum. This is because their behavior has only minimum dependency on the surrounding clay. Fig. 12 shows the variation in the S.I.F. with the area replacement ratio for various reinforced stone columns. As seen in the figure, the S.I.F. increases with an increase in the area replacement ratio for all cases. However, the rate of increase in the S.I.F. decreases for larger area replacement ratios. 3.1.4. Influence of reinforcement stiffness Fig. 13 shows the time-settlement behavior of various stone columns with reinforcement stiffness. As seen in the figure, for the full-length ESC and the HRSC with Sr = 0.25D, the total consolidation time decreases from about 10 days for J = 1000 kN/m to about 3 days for J = 5000 kN/m. However, the variation in reinforcement stiffness has no sensitive effect on the total consolidation time in the half-length ESC and or the HRSC with Sr = 0.5D. Fig. 14 shows the variation in the S.I.F. with reinforcement stiffness for various types of stone columns. As seen in the figure, the S.I.F. values increase with an increase in the reinforcement stiffness for all reinforcement types. However, the increases in the S.I.F. for the Fig. 14. Variation in S.I.F. with reinforcement stiffness for various stone full-length ESC and the HRSC with Sr = 0.25D are much larger than those for the half-length ESC and the HRSC with Sr = 0.5D. In fact, the loading behavior of the fulllength ESC and the HRSC with Sr = 0.25D has a strong dependency on the reinforcement material stiffness and has much less dependency on the properties of the surrounding clay. However, the loading behavior of the half-length ESC and the HRSC with Sr = 0.5D not only depends on the reinforcement material stiffness, but also on the properties of the surrounding clay. Moreover, for the half-length ESC and the HRSCs, the rates of increase in the S.I.F. decrease with an increase in the reinforcement stiffness. This is because, with these reinforcement types, a moderate level of reinforcement stiffness of about 3000 kN/ m can produce a sufficient level of confinement effect on the column material in the reinforced parts of these columns and, by increasing the reinforcement stiffness from 3000 kN/m to 5000 kN/m, the main effective parameter on the behavior of stone columns is the bulging of the column materials at the unreinforced parts located between the horizontal layers of the HRSCs or at the lower unreinforced part of the half-length ESC. Therefore, the use of high stiffness for the reinforcements cannot bring about Fig. 13. Time-settlement behavior of various stone columns with reinforcement stiffness: (a) J = 1000 kN/m, (b) J = 3000 kN/m, and (c) J = 5000 kN/m. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 greater improvement for either the half-length ESCs or the HRSC with Sr = 0.5D. However, in the cases of the fulllength ESCs and the HRSC with Sr = 0.25D, due to the full confinement of all parts of the columns, the horizontal displacements of the columns of all lengths were limited and the behavior of the stone columns was seen to mainly depend on the reinforcement stiffness. Therefore, with an increase in the stiffness of the reinforcements, even for high stiffness values, the S.I.F. value will increase. 3.2. Single stone column loading 3.2.1. Short-term bearing capacity of single stone columns To study the short-term bearing capacity of single reinforced stone columns and to predict the B.I.F. of various types of reinforcements, some coupled flow-deformation Fig. 16. Variation in S.I.F. with internal friction angle of column material for various stone columns. Vertical stress (kPa) Settlement (mm) 3.1.5. Influence of internal friction angle (/) of stone column Fig. 15 shows the time-settlement behavior of various stone columns with three values of 35, 40, and 45&deg; for the internal friction angle of the stone column material. As seen in the figure, with an increasing /, the consolidation settlement rate increases slightly for all types of stone columns. In fact, by using a stronger material for the stone columns, a greater part of the load is tolerated by the column material, leading to a lower generation of excess pore pressure in the surrounding clay, and thus, the occurrence of lower consolidation settlement. Fig. 16 shows the variation in the S.I.F. with the internal friction angle of the column material for various types of reinforced stone columns. As seen in the figure, the S.I.F. value for all the stone columns increases with an increase in the internal friction. However, the rate of increase for the HRSCs is greater than that for the ESCs. This is due to the interlocking effect between the horizontal reinforcement layers and the column materials. In practice, this may help in choosing the type of reinforcement. Thus, HRSCs may be used when stronger column materials are available. Also, ESCs are preferable when poor materials are present in the stone columns. HRSC, S =0.5D Half-length ESC Full-length ESC HRSC, S =0.25D Fig. 17. Variation in vertical stress-settlement behavior of various types of single stone columns. analyses were performed by applying a prescribed settlement of 20 cm for the duration of one day. In these analyses, three ratios, namely, D0 /D = 1, 2, and 3, and D = 80 cm, were considered for all cases. Figs. 17 and 18 show the variations in the vertical stress-settlement behavior and the excess pore pressure generated under Fig. 15. Time-settlement behavior of various stone columns with internal friction angle for stone column material: (a) 35&deg;, (b) 40&deg;, and (c) 45&deg;. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Fig. 18. Excess pore pressure under footing area for case of D0 /D = 2, due to 20-cm settlement of footing for various stone columns: (a) OSC, (b) HRSC with Sr = 0.5D, (c) half-length ESC, (d) full-length ESC, and (e) HRSC with Sr = 0.25D. the footing area, respectively, of various types of stone columns for D0 /D = 2. Table 3 presents the bearing capacity and the B.I.F. values at the end of 1 day of loading for various cases. As seen in Fig. 17 and Table 3, for all the D0 /D values, the HRSC with Sr = 0.25D and the full-length ESC have the best B.I.F., while the HRSC with Sr = 0.5D has the lowest B.I.F. value. In addition, the half-length ESC has a moderate effect on the B.I.F. In fact, as shown in Fig. 18, using a full encasement along the column or horizontal reinforcing layers with a low interval spacing provides a full confining effect on the stone column material. As a result, most of the applied load on the footing is tolerated by the stone column and a minimum amount of vertical stress is transferred to the surrounding clay. Thus, low excess pore pressure is generated in the soft soil. However, for the cases of the half-length ESC and the HRSC with Sr = 0.5D, no sufficient confinement is provided for the stone column materials. As a result, greater vertical stress is transferred to the surrounding clay; and thus, greater excess pore pressure is generated. This leads to an increase in the long-term consolidation settlement in such types of reinforced stone columns compared with the other cases. 3.2.2. Long-term consolidation settlement of single stone To investigate the long-term behavior, various types of single reinforced stone columns were initially modeled using a rigid plate on the top of the stone columns with various D0 /D and applying the amount of vertical stress that would lead to the same 20-cm settlement for all cases, according to bearing capacity values mentioned in Table 3. The vertical stress is applied for 1 day with a coupled analysis and, after that, a consolidation analysis is conducted to dissipate the excess pore pressure in the clay medium and to reach a steady state condition of the settlements. 1. The results of the time-settlement analysis for D0 /D = 2 are shown in Fig. 19. As seen in the figure, a maximum settlement of 4 cm occurs during consolidation for the HRSC with Sr = 0.5D. This is approximately equal to the consolidation settlement of an OSC. However, for the full-length ESC and the HRSC with Sr = 0.25D, the minimum consolidation settlements of 0.4 cm and 1.2 cm, respectively, occur. Therefore, by using fulllength ESCs or HRSCs with Sr = 0.25D, the long-term settlement of single stone columns can be significantly reduced, in addition to there being an improvement in the short-term load-bearing behavior. Fig. 19. Time-settlement variations for various types of single-stone columns during short-term, 1-day of loading, and long-term consolidation time to 200 days. Table 3 Bearing capacity and B.I.F. values at end of two days of loading for various single stone columns. D0 /D (D = 80 cm) Bearing capacity (kPa) HRSC, Sr = 0.5D Half-length ESC Full-length ESC HRSC, Sr = 0.25D A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 4. Conclusions In this paper, various short-term coupled flowdeformation analyses and long-term consolidation analyses have been performed to investigate the behavior of various types of reinforced stone columns. Based on the numerical analyses, the following concluding remarks can be made: 2. All types of reinforcements can improve the short-term load-settlement behavior of OSCs and reduce their long-term consolidation settlement in both the unit cell configuration and the single stone column configuration. 3. The final long-term settlement of OSCs or reinforced stone columns is approximately independent of the column loading rate. This means that for all types of stone columns under a constant vertical load, the final longterm settlement will be equal for quick, medium, or slow loading rates. 4. By considering the results of the parametric analyses under both short-term and long-term conditions, the HRSC with Sr = 0.25D is the most efficient type of reinforcement for stone columns with the greatest B.I.F. and S.I.F. The full-length ESC is the second most efficient type, with a minimum difference between them. 5. With proper reinforcements, such as full-length ESCs or HRSCs with Sr = 0.25D, in addition to a considerable reduction in the settlement of the OSCs, the consolidation time can be greatly decreased and most of the settlement will occur during the loading time. 6. From the viewpoint of the amount of consumption of the reinforcing material, the best reinforcement type for stone columns is the HRSC with Sr = 0.25D. 7. For the unit cell concept, the long-term consolidation settlement rate of stone columns may be increased by using stone columns with smaller diameters and a larger area replacement ratio for the unit cell, stiffer geosynthetics for the reinforcements, and a greater internal friction angle for the stone column material. 8. Due to the interlocking effects between the reinforcement and the stone materials, the bearing capacity of HRSCs is more dependent on the internal friction angle of the stone column material than ESCs. This means that HRSCs are more effective when using stone materials with higher internal friction angles. However, when only poor materials are available, the use of full-length ESCs is preferable. 9. The bearing capacity of full-length ESCs and HRSCs with Sr = 0.25D is more dependent on the stiffness of the reinforcement material than the half-length ESCs and HRSCs with Sr = 0.5D. 10.By using the proper type of reinforcement, such as fulllength ESCs or HRSCs with Sr = 0.25D, the long-term settlement behavior of single stone columns will be decreased significantly, in addition to there being an improvement in the short-term load-bearing behavior. Ali, K., Shahu, J.T., Sharma, K.G., 2012. Model tests on geosyntheticreinforced stone columns: a comparative study. Geosynth. Int. 19 (4), Ali, K., Shahu, J.T., Sharma, K.G., 2014. Model tests on single and groups of stone columns with different geosynthetic reinforcement arrangement. Geosynth. Int. 21 (2), 103–118. Castro, J., Sagaseta, C., 2011. Deformation and consolidation around encased stone columns. Geotext. Geomembranes 29, 268–276. Castro, J., Cimentada, A., Costa, A., Canizal, J., Sagaseta, C., 2013. Consolidation and deformation around stone columns: comparison of theoretical and laboratory results. Comput. Geotech. 49, 326–337. Cimentada, A., Costa, A.D., Izal, J.C., Sagaseta, C., 2011. Laboratory study on radial consolidation and deformation in clay reinforced with stone columns. Can. Geotech. J. 48 (1), 36–52. Deb, K., Behera, A., 2017. Rate of consolidation of stone columnimproved ground considering change in permeability and compressibility during consolidation. Appl. Math. Model. 48, 548–566. Elsawy, M.B.D., 2013. Behavior of soft ground improved by conventional and geogrid-encased stone columns, based on FEM study. Geosynth. Int. 20 (4), 276–285. Ghazavi, M., Ehsaniyamchi, A., Nazariafshar, J., 2018. Bearing capacity of horizontally layered geosynthetic reinforced stone columns. Geotext. Geomembranes 46 (3), 312–318. Ghazavi, M., Nazariafshar, J., 2013. Bearing capacity of geosynthetic encased stone columns. Geotext. Geomembranes 38, 26–36. Gniel, J., Bouazza, A., 2009. Improvement of soft soils using geogrid encased stone columns. Geotext. Geomembranes 27 (3), 167–175. Hong, Y.S., Wu, C.S., Yu, Y.S., 2016. Model tests on geotextile-encased granular columns under 1-g and undrained conditions. Geotext. Geomembranes 44, 13–27. Hosseinpour, I., Riccio, M., Almeida, M.S.S., 2014. Numerical evaluation of a granular column reinforced by geosynthetics using encasement and laminated disks. Geotext. Geomembranes 42 (4), 363–373. Keykhosropur, L., Soroush, A., Imam, R., 2012. 3D numerical analyses of geosynthetic encased stone columns. Geotext. Geomembranes 35, 61– Khabbazian, M., Kaliakin, V.N., Meehan, C.L., 2010. Numerical study of the effect of geosynthetic encasement on the behavior of granular columns. Geosynth. Int. 17 (3), 132–143. Lu, M., Jing, H., Wang, B., Xie, K., 2017. Consolidation of composite ground improved by granular columns with medium and high replacement ratio. Soils. Founds. 57 (6), 1088–1095. Miranda, M., Da Costa, A., 2016. Laboratory analysis of encased stone columns. Geotext. Geomembranes 44 (3), 269–277. Murugesan, S., Rajagopal, K., 2006. Geosynthetic-encased stone columns: numerical evaluation. Geotext. Geomembranes 24, 349–358. Murugesan, S., Rajagopal, K., 2010. Studies on the behavior of single and group of geosynthetic encased stone columns. J. Geotech. Geoenviron., ASCE 136 (1), 129–139. Nazariafshar, J., Ghazavi, M., 2014. Experimental studies on bearing capacity of geosynthetic reinforced stone columns. Arab. J. Sci. Eng. 39, 1559–1571. Ng, K.S., Tan, S.A., 2014. Design and analyses of floating stone columns. Soils Founds 54 (3), 478–487. Pulko, B., Majes, B., Logar, J., 2011. Geosynthetic-encased stone columns: analytical calculation model. Geotext. Geomembranes 29 (1), 29–39. Pulko, B., Logar, J., 2017. Fully coupled solution for the consolidation of poroelastic soil around geosynthetic encased stone columns. Geotext. Geomembranes 45 (6), 616–626. Sharma, S.R., Kumar, B.R.P., Ngendra, G., 2004. Compressive load response of granular piles reinforced with geogrids. Can. Geotech. J. 41 (1), 187–192. A. Ehsaniyamchi, M. Ghazavi / Soils and Foundations 59 (2019) 1579–1590 Wang, G., 2009. Consolidation of soft clay foundations reinforced by stone columns under time-dependent loadings. J, Geotech. Geoenviron. Eng. ASCE 135 (12), 1922–1931. Wu, C.S., Hong, Y.S., 2008. The behaviour of a laminated reinforced granular column. Geotext. Geomembranes 26 (4), 302–316. Yu, Y., Bathurst, R.J., Damians, I.P., 2016. Modified unit cell approach for modeling geosynthetic-reinforced column-supported embankments. Geotext. Geomembranes 44 (3), 332–343. Zhang, L., Zhao, M., 2015. Deformation analysis of geotextile-encased stone columns. ASCE Int. J. Geomech. 15 (3), 04014053. Zhang, Y., Chan, D., Wang, Y., 2012. Consolidation of composite foundation improved by geosynthetic-encased stone columns. Geotext. Geomembranes 32, 10–17.
{"url":"https://studylib.net/doc/25865162/dia-and-encasing-length","timestamp":"2024-11-12T13:05:35Z","content_type":"text/html","content_length":"89708","record_id":"<urn:uuid:8db83263-a965-4bc9-997d-ca30ead72357>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00199.warc.gz"}
January 2023 – Dhunicorn The paper provides a brief history of recent developments in machine learning and the “New AI”. This sets the scene for a review of debates over machine learning and scientific practice, which brings to the forefront the hubris of those appealing to a naïve form of materialism in this specific domain at the intersection between philosophy and sociology of science. The paper then explores the “unreasonable effectiveness” of machine learning to shine a spot-light on the limitations of contemporary techniques. The resulting insights are subsequently applied to the particular question of whether current machine learning platforms could capture key elements responsible for the complexity of real-world macroeconomic phenomena as these have been understood by Post Keynesian economists. After concluding in the negative, the paper goes on to examine whether efforts to extend deep learning through differential programming could overcome some of the previously discussed limitations and stumbling blocks. Keywords: machine learning, the “New AI”, macroeconomic modelling, fixed-point theorems, backpropagation, the capital debates, uncertainty, financial instability, differential programming An avalanche of recent publications (Zuboff, 2019; Gershenfeld, Gershenfeld & Gershenfeld, 2017; Carr, 2010; Lovelock, 2019; and Tegmark, 2017) reflect the emotional range of our current obsessions about the Digital Economy, which are concerned, respectively, with: its inherent capacity for surveillance, domination, and control; its opportunities for extending the powers of digital fabrication systems to all members of the community; its retarding effects on deep concept formation and long-term memory; the prospect of being watched over by “machines of loving grace” that control our energy grids, transport and weapon systems; and, the limitless prospects for the evolution of AI, through procedures of “recursive self-improvement”. In my own contribution to the analysis of the digital economy (Juniper, 2018), I discuss machine learning and AI from a philosophical perspective that is informed by Marx, Schelling, Peirce and Steigler, arguing for the development of new semantic technologies based on diagrammatic reasoning, that could provide users with more insight and control over applications.[1] AI and Machine Learning practitioners have also embraced the new technology of Deep Learning Convolution Neural Networks (DLCNNs), Recursive Neural Networks, and Reservoir Neural Networks with a mixture of both hubris and concern[2]. In an influential 2008 article in Wired magazine, Chris Anderson claimed that these new techniques no longer required a resort to scientific theories, hypotheses, or processes of causal inference because the data effectively “speak for themselves”. In his response to Anderson’s claims, Mazzochi (2015) has observed that although the new approaches to machine learning have certainly increased our capacity to find patterns (which are often non-linear in nature), correlations are not all there is to know. Mazzochi insists that they cannot tell us precisely why something is happening, although they may alert us to the fact that something may be happening. Likewise, Kitchin (2014) complains that the data never “speak for themselves”, as they shaped by the platform, data ontology, chosen algorithms and so forth. Moreover, not only do scientists have to explain the “what”, they also have to explain the “why”. For Lin (2015) the whole debate reflects a confusion between the specific goal of (i) better science; and that of, (ii) better engineering (understood in computational terms). While the first goal may be helpful, it is certainly not necessary for the second, which he argues has certainly been furthered by the emerging deep-learning techniques[3]. In what follows, I want to briefly evaluate these new approaches to machine learning, from the perspective of a Post Keynesian economist, in terms of how they could specifically contribute to a deeper understanding of macroeconomic analysis. To this end, I shall investigate thoughtful explanations for the “unreasonable effectiveness” of deep-learning techniques, which will therefore focus on the modelling, estimation, and (decentralised) control of system (-of systems) rather than image classification or natural language processing. The “Unreasonable effectiveness” of the New AI Machine learning is but one aspect of Artificial Intelligence. In the 1980s, DARPA temporarily withdrew funding for US research in this field because it wasn’t delivering on what it had promised. Rodney Brooks has explained that this stumbling block was overcome by the development of the New AI, which coincided with the development of Deep Learning techniques characterised by very large neural networks featuring multiple hidden layers and weight sharing. In Brooks’ case, the reasoning behind his own contributions to the New AI were based on the straightforward idea that previous efforts had foundered on the attempt to combine perception, action, and logical inference “subsystems” into one integrated system. Accordingly, logical “inference engines” were removed from the whole process so that system developers and software engineers could just focus on more straightforward modules for perception and action. Intelligence would then arise spontaneously at the intersection between perception and action in a decentralized, but effective manner. One example of this would be the ability of social media to classify and label images. Donald Trump could then, perhaps, be informed about those images having the greatest influence over his constituency, without worrying about the truth-content that may be possessed by any of the individual images (see Bengio et al., 2014, for a technical overview of this machine learning capability). Another example of relevance to the research of Brooks, would be an autonomous rover navigating its way along a Martian dust plain, that is confronted by a large rock in its path. Actuators and motors could then move the rover away from the obstacle so that it could once again advance unimpeded along its chosen trajectory—this would be a clear instance of decentralized intelligence! In their efforts to explain the effectiveness of machine learning in a natural science context, Lin, Tegmark, and Rodnick (2017), consider the capacity of deep learning techniques in reproducing Truncated Taylor series for Hamiltonians. As Poggio et al., (2017) demonstrate, this can be accomplished because a multi-layered neural network can be formally interpreted as a machine representing a function of functions of functions… : At the end of the chain we arrive at simple, localized functions, with more general and global functions situated at higher levels in the hierarchy. Lin, Tegmark, and Rodnick (2017) observe that this formalism would suffice for the representation of a range of simple polynomials that are to be found in the mathematical physics literature (of degree 2-4 for the Navier-Stokes equations or Maxwell’s equations). They explain why such simple polynomials characterise a range of empirically observable phenomena in the physical sciences, in terms of three dominant features, namely: sparseness, symmetry, and low-order[4]. Poggio et al., (2017) examine this polynomial approximating ability of DLCNNs, also noting that sparse polynomials are easier to learn than generic ones owing to the parsimonious number of terms, trainable parameters, and the associated VC dimension of the equations (which are all exponential in the number of variables). The same thing applies to highly variable Boolean functions (in the sense of having high frequencies in their Fourier spectrum). Lin, Tegmark, and Rodnick (2017) go on to consider noise from a cosmological perspective, noting that background radiation, operating as a potential source of perturbations to an observed system, can be described as a relatively well-behaved Markov process. In both of these cases, we can discern nothing that is strictly comparable with the dynamics Post Keynesian theory, once we have abandoned the Ramsey-Keynes (i.e. neoclassical) growth model as the driver of long -run behaviour in a macroeconomy. From a Post Keynesian perspective, the macroeconomy can only ever be provisionally described by a system of differential equations characterised by well-behaved asymptotic properties of convergence to a unique and stable equilibrium. The Macroeconomy from a Post Keynesian Perspective: In The General Theory, Keynes (1936) argued that short-run equilibrium could be described by the “Point of Effective Demand”, which occurs in remuneration-employment space, at the point of intersection between aggregate expenditure ( in the form of expected proceeds associated with a certain level of employment) and aggregate supply (in the form of actual proceeds elicited by certain level of employment). At this point of intersection, the expectation of proceeds formed by firms in aggregate is fulfilled, so that there is no incentive for firms to change their existing offers of employment. However, this can occur at a variety of different levels of employment (and thus unemployment). For Keynes, short-run equilibrium is conceived in terms of a simple metaphor of a glass rolling on a table rather than that of a ball rolling along in a smooth bowl with a clearly defined minimum. When it comes to the determination of adjustments to some long-run full-employment equilibrium, Keynes was no less skeptical. Against the “Treasury-line” of Arthur Pigou, Keynes argued that there were no “automatic stabilizers” that could come into operation. Pigou claimed that with rising unemployment wages would begin to fall, and prices along with them. This would make consumers and firms wealthier in real terms, occasioning a rise in aggregate levels of spending. Instead, Keynes insisted that two other negative influences would come into play, detracting from growth. First, he introduced Irving Fisher’s notion of debt-deflation. According to Fisher’s theory, falling prices would transfer income from high-spending borrowers to low-spending lenders, because each agent was locked in to nominal rather than real or indexed contracts. Second, the increasing uncertainty occasioned by falling aggregate demand and employment, would increase the preference for liquid assets across the liquidity spectrum ranging from money or near-money (the most liquid), through short-term fixed interest securities through to long-term fixed interest securities and equities and, ultimately, physical plant and equipment (the least liquid of assets). In formal terms, the uncertainty responsible for this phenomenon of liquidity preference can be represented by decision-making techniques based on multiple priors, sub-additive distributions, or fuzzy measure theory (Juniper, 2005). Let us take the first of these formalisms, incorporated into contemporary models of risk-sensitive control in systems characterised by a stochastic uncertainty constraint (measuring the gap between free and bound entropy) accounting for some composite of observation error, external perturbations, and model uncertainty. While the stochastic uncertainty constraint can be interpreted in ontological terms as one representing currently unknown but potentially knowable information (i.e. ambiguity), it can also be interpreted in terms of information that could never be known (i.e. fundamental uncertainty). For Keynes, calculations of expected returns were mere “conventions” designed to calm our disquietude, but they could never remove uncertainty by converting it into certainty equivalents. Another source of both short-run and long-run departure from equilibrium has been described in Hyman Minsky’s (1992) analysis of Financial Instability, which was heavily influenced by both Keynes Michal Kalecki. As the economy began to recover from a period of crisis or instability, Minsky argued that endogenous forces would come into play that would eventually drive the system back into crisis. Stability would gradually be transformed into instability and crisis. On the return to a stable expansion path, after firms and households had repaired their balance-sheet structures, financial fragility would begin to increase as agents steadily came to rely more on external sources of finance, as firms began to defer the break-even times of their investment projects, and as overall levels of diversification in the economy steadily came to be eroded (see Barwell and Burrows, 2011, for an influential Bank of England study of Minskyian financial instability). Minsky saw securitization (e.g. in the form of collateralized debt obligations etc.) as an additional source of fragility due to its corrosive effects on the underwriting system (effects that could never be entirely tamed through a resort to credit default swaps or more sophisticated hedging procedures). For Minsky, conditions of fragility, established preceding and during a crisis may only be partially overcome in the recovery stage, thus becoming responsible for ever deeper (hysteric) crises in the future[5]. An additional, perhaps more fundamental, reason for long-run instability is revealed by Piero Sraffa’s (1960) insights into the structural nature of shifts in the patterns of accumulation, within a multisectoral economy, as embodied in the notion of an invariant standard of value. Sraffa interprets David Ricardo’s quest for a standard commodity—one whose value would not change when the distribution of income between wages and profits was allowed to vary—as a quest that was ultimately self-defeating. This is because any standard commodity would have to be formally constructed with weights determined by the eigenvalue-structure of the input-output matrix. Nevertheless, changes in income distribution would lead to shifts in the composition of demand that, in turn would induce increasing or decreasing returns to scale. This would feed back onto the eigen-value structure of the input-output matrix, in turn requiring the calculation of another standard commodity (see Andrews, 2015, and Martins, 2019, for interpretations of Sraffa advanced along these lines). If we return to the case of the neoclassical growth model, Sraffa’s contribution to the debates in capital theory has completely undermined any notion of an optimal or “natural rate of interest” (Sraffa, 1960; Burmeister, 2000). From a policy perspective, this justifies an “anchoring” role for government policy interventions which aim to provide for both stability and greater equity in regard to both the minimum-wage (as an anchor for wage relativities) and determination of the overnight or ‘target’ rate of interest (as an anchor for relative rates-of-return). From a modelling perspective, Martins (2019) insists that Sraffa drew a sharp distinction between a notion of ‘logical’ time (which is of relevance to the determination of “reproduction prices” on the basis of the labour theory of value, on the basis of a “snapshot” characterization of current input-output relations) and it’s counterpart, historical time (which is of relevance to the determination of social norms such as the subsistence wage, or policies of dividend-retention). When constructing stock-flow-consistent macroeconomic model this same distinction carries over to the historical determination of key stock-flow norms, which govern long-run behaviour in the model. Of course, in a long-run macroeconomic setting, fiscal and monetary policy interventions are also crucial inputs into the calculation of benchmark rates of accumulation (a feature which serves to distinguish these Post-Keynesian models from their neoclassical counterparts).[6] Machine Learning and Fixed-point Theorems In this paper’s discussion of macroeconomic phenomena, I have chosen to focus heavily on the determinants of movements away from stable, unique equilibria, in both the short-run and the long-run. Notions of equilibrium are central to issues of effectiveness in both econometrics and machine-learning. Of pertinence to the former, is the technique of cointegration and error-correction modelling. While the cointegrating vector represents a long-equilibrium, the error-correction process represents adjustment towards this equilibrium. In a machine-learning context, presumptions of equilibrium underpin a variety of fixed-point theorems that play a crucial role in: (i) techniques of data reduction; (ii) efforts to eliminate redundancy within the network itself with the ultimate aim of overcoming the infamous “curse of dimensionality”, while preserving “richness of interaction”; and, (iii) the optimal tuning of parameters (and hyper-parameters that govern the overall model architecture). Specific techniques of data compression, such as Randomized Numerical Linear Algebra (Drineas and Mahoney, 2017), rely on mathematical techniques such as Moore-Penrose inverses and Tikhanov regularization theory (Barata and Hussein, 2011). Notions of optimization are a critical element in the application of these techniques. This applies, especially, to the gradient descent algorithms that are deployed for the tuning of parameters (and sometimes hyper-parameters) within the neural network. Techniques of tensor contraction and singular value decomposition are also drawn upon for dimensionality reduction is complex tensor networks (Cichoki et al., 2016, 2017). Wherever and whenever optimization techniques are required, some kind of fixed-point theorem comes into play. The relationship between fixed-point theorems, asymptotic theory, and notions of equilibrium in complex systems is not straightforward. See both Prokopenko et al., 2019 and Yanofsky, 2003, for a wide-ranging discussion of this issue, which opens onto a discussion of many inter-related “paradoxes of self-referentiality”. For example, a highly-specialized literature on neural tangent kernels focuses on how kernel-based techniques can be applied in a machine learning context, to ensure that local rather than global maxima or minima are avoided during the whole process of gradient descent (see Yang, 2019). Here, the invariant characteristics of the kernel guarantee that tuning would satisfy certain robustness properties. An associated body of research on the tuning of parameters at the “edge of chaos”, highlights the importance of applying optimization algorithms close to the boundary of, but never within the chaotic region of dynamic flow (see Bietti and Mairal 2019, and Bertschinger and Natschläger, 2004). There are subtle formal linkages between the properties of neural tangent kernels and notions of optimization at the edge-of-chaos that I am unable to do justice to in this paper. From a Post Keynesian perspective and despite this evolution in our understanding of optimization in a machine learning context, it would seem that efforts to apply the existing panoply of deep learning techniques may be thwarted by contrariwise aspects of the behaviour of dynamic macroeconomic system. For macroeconomists working with Real Business Cycle Models and their derivatives, none of this is seen as a problem because unreasonably-behaved dynamics are usually precluded by assumption. Although perturbations are seen to drive the business cycle in these models, agents are assumed to make optimal use of information, in the full knowledge of how the economy operates, so that government interventions simply pull the economy further away from equilibrium by adding more noise to the system. Although more recent dynamic stochastic general equilibrium (DSGE) models allow for various forms of market failure, notions of long-run equilibrium still play a fundamental role[7]. Instead, in a more realistic, Post Keynesian world, optimization algorithms would have to work very hard in their pursuit of what amounts to a “will-o-the-wisp”: namely, a system characterised by processes of shifting and non-stationary (hysteretic) equilibria[8]. Differential Programming Recent discussions of machine learning and AI, have emphasized the significance of developments in differential programming. Yann LeCun (2018), one of the major contributors to the new Deep learning paradigm has noted that, An increasingly large number of people are defining the networks procedurally in a data-dependent way (with loops and conditionals), allowing them to change dynamically as a function of the input data fed to them. It’s really very much like a regular program, except it’s parameterized, automatically differentiated, and trainable/optimizable. One way of understanding this approach is to think of something that is a cross between a dynamic network of nodes and edges and a spread sheet. Each node contains a variety of functional formulas that draw on the inputs from other nodes and provides outputs that in turn, either feed into other nodes or can be observed by scopes. However, techniques of backpropagation and automatic differentiation can be applied to the entire network (using the chain rule while unfurling each of the paths in the network on the basis of Taylors series representations of each formula). This capability promises to overcome the limitations of econometric techniques when it comes to the estimation of large-scale models. For example, techniques of structural vector autoregression, which are multivariate extensions to univariate error-correction modelling techniques can only be applied to highly parsimonious, small-scale systems of equations. Based on the initial work of Ehrhard and Regnier (2003), a flurry of research papers now deal with extensions to functional programming techniques to account for partial derivatives (Plotkin, 2020), higher-order differentiation and tensor calculus on manifolds (Cruttwell, Gallagher, & MacAdam, 2019), how best to account for computational effects (which are described in Rivas, 2018), and industrial-scale software engineering (The Statebox Team, 2019). Members of the functional programming and applied category theory community have drawn on the notion of a lens, as means for accommodating the bidirectional[9] nature of backpropagation[10] (Clarke et al., 2020; Spivak, 2019; Fong, Spivak and Tuyéras, 2017). The potential flexibility and power of differential programming, could usher in a new era of policy-driven modelling, by allowing researchers to combine (i) traditionally aggregative macroeconomic models with multi-sectoral models of price and output determination (e.g. stock-flow-consistent Post Keynesian models and Sraffian or Marxian models of inter-sectoral production relationships); discrete-time and continuous-time models (i.e. hybrid systems represented integro-differential equations), and both linear and non-linear dynamics. This would clearly support efforts to develop more realistic models of economic phenomena. The development of network-based models of dynamic systems has been given impetus by research in three main domains: brain science imaging, quantum tensor networks, and Geographical Information Systems in each case, tensor analysis of multiple-input and multiple-output nodes has played a key role. In each of these cases, the complexity associated with tensor algebra has been ameliorated by the deployment of diagrammatic techniques based on the respective use of Markov-Penrose’ diagrams, the diagrammatic Z-X calculus, and the development of “region-” rather than “point”-based topologies and mereologies. These same diagrammatic techniques have been taken up by the Applied Category Theory community to achieve both a deeper and more user-friendly understanding of lenses and other optics (Boisseau, 2020; Riley, 2018), alongside diagrammatic approaches to simply-typed, differential, and integral, versions of the lambda calculus (Lemay, 2017, Zeilberger and Giorgetti, 2015). As I have argued, in more general terms, in Juniper (2018), the development of new software platforms based on diagrammatic reasoning could mean that differential programming techniques could potentially be disseminated to a much larger number of users who might have limited programming knowledge or skill (to some extent, today’s spreadsheets provide an example of this)[11]. In the case of AI, this could allow workers to regain control over machines which had previously either operated “behind their backs” or else, on the basis of highly specialized expertise. Improvements of this kind also have the potential to support higher levels of collaboration in innovation at the point-of-production. In the more restricted macroeconomic context, modelling could become less of a “black-box” and more of an “art” than a mystifying “science”. Diagrammatic approaches to modelling could help to make all of this more transparent. Of course, there are a lot of “coulds” in this paragraph. The development and use of technology can and should never be discussed in isolation form its political and organizational context. To a large extent, this political insight, was one of the main drivers and motivating forces for this paper. [1] One intuitive way of thinking about this is that it would extend principles of “human centred manufacturing” into some of the more computational elements of the digital economy. [2] See Christopher Olah’s blog entry for a helpful overview of various deep-learning architectures. [3] For this reason, I will avoid any further discussion of convolution-based techniques and kernel methods, which have contributed, respectively, to rapid progress in image-classification and in applications of support-vector machines. An animated introduction to convolution-based techniques is provided by Cornellis (2018) while kernel-based techniques and the famous “kernel trick” deployed in support vector machines is lucidly described in Wright (2018). Rectified Linear Units or ReLU’s—the activation functions most commonly-used in deep learning neural networks—are examined in Brownlee (2019). [4] The importance of symmetries in mathematical physics is examined in a recent paper by John Baez (2020), who investigates the source of symmetries in relation to Noether’s theorem. [5] Some of these components of fragility, such as loss of diversification and deferment of breakeven times, would obviously be hard to capture in a highly aggregative macroeconomic model, but certain proxies could be constructed to this end. [6] Of course, the rate at which labour—dead and living—is pulled out of production, also determines intra- and inter-sectoral economic performance, growth in trade, and overall rates of accumulation. It is also one of the key drivers of fundamental uncertainty for investors. [7] See Stiglitz (2018) for a critical review of DSGE models, and Andrle and Solmaz (2017) for an empirical analysis of the business cycle, which raises doubts about the dynamic assumptions implied by a variety of macroeconomic models. The contribution of non-discretionary expenditure to instability in the business cycle has been highlighted by the recent Post Keynesian theoretical literature on the so-called “Sraffa super-multiplier” (Fiebiger, 2017; Fiebiger and Lavoie, 2017). [8] Important sources of hysteresis, additional to those of a Minskyian nature, include those associated with rising unemployment, with its obvious impacts on physical and mental health, crime rates, and scarring in the eyes of prospective employers. Rates of innovation (and thus, productivity growth) are also adversely affected by declining levels of aggregate demand. [9] The implementation function takes the vector of parameters and inputs and transforms them into outputs, while the request function takes parameters, inputs and outputs and emits a new set of inputs, whereas the update function takes parameters, inputs and outputs and transforms them into a new set of parameter values. Together, the update and request functions perform gradient descent with the request function passing back the inverted value of the gradient of total error with respect to the input. Each parameter is updated so that it moves a given step-size in the direction that most reduces the specified total error function [10] For an introduction to some of the mathematical and programming-based techniques required for working with optics see Loregian (2019), Boisseau and Gibbons (2018), Culbertson and Kurtz (2013), and Román (2019). [11] Software suites such as AlgebraicJulia and Statebox can already recognise the role of different types of string diagrams in representing networks, dynamical systems, and (in the latter case) commercial processes and transactions. Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired, 23 June. Available at: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory (accessed 18 July, 2019). Andrews, David (2015) . Natural price and the long run: Alfred Marshall’s misreading of Adam Smith. Cambridge Journal of Economics, 39: 265–279. Andrle, Michal, Jan Brůha, Serhat Solmaz (2017). On the sources of business cycles: implications for DSGE models. ECB Working Paper, No 2058, May. Baez, John (2020). Getting to the Bottom of Noether’s Theorem. arXiv:2006.14741v1 [math-ph] 26 Jun 2020. Barata, J. C. A. & M. S. Hussein (2011). The Moore-Penrose Pseudoinverse. A Tutorial Review of the Theory. arXiv:1110.6882v1 [math-ph] 31 Oct 2011. Barwell, R., & Burrows, O. (2011). Growing fragilities? Balance sheets in The Great Moderation. Financial Stability Paper No. 10, Bank of England. Bengio, Yoshua; Aaron Courville; and Pascal Vincent (2014). Representation Learning: A Review and New Perspectives. arXiv:1206.5538v3 [cs.LG] 23 Apr 2014. Bertschinger, N. & T. Natschläger (2004). Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks. Neural Computation, July, 16(7): 1413-36. Bietti, Alberto and Julien Mairal (2019). On the Inductive Bias of Neural Tangent Kernels. HAL Archive. https://hal.inria.fr/hal-02144221 (accessed 18 July, 2019) Boisseau, Guillaume and Jeremy Gibbons (2018). What you needa know about yoneda: Profunctor optics and the yoneda lemma (functional pearl). Proc. ACM Program. Lang., 2(ICFP):84:1–84:27, July 2018. Boisseau, Guillaume (2020). String diagrams for optics, arXiv:2002.11480v1 [math.CT] 11 Feb 2020. Brownlee, J. (2019). A Gentle Introduction to the Rectified Linear Unit (ReLU) for Deep Learning Neural Networks. 9 Jan in Better Deep Learning: https://machinelearningmastery.com/category/ better-deep-learning/ . Burmeister, Edwin (2000) The Capital Theory Controversy. Critical Essays on Piero Sraffa’s Legacy in Economics, edited by Heinz D. Kurz. Cambridge: Cambridge University Press. Carr, Nicholas (2010). The Shallows: How the Internet Is Changing the Way We Think, Read and Remember. New York: W.W. Norton and Company Inc. Cichocki, Andrzej; Namgil Lee; Ivan Oseledets; Anh-Huy Phan; Qibin Zhao; and Danilo P. Mandic (2016). Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions. Foundations and Trends in Machine Learning. 9(4-5), 249-429. Cichocki, Andrzej ; Anh-Huy Phan; Qibin Zhao; Namgil Lee; Ivan Oseledets; Masashi Sugiyama; and Danilo P. Mandic (2017). Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives. Foundations and Trends in Machine Learning. 9(6), 431-673. Clarke, B., D. Elkins, J. Gibbons, F. Loregian, B. Milewski, E. Pillore, & M. Roman (2020). Profunctor Optics, a Categorical Update. arXiv:2001.07488v1 [cs.PL] 21 Jan 2020. Cornelisse, Daphne (2018). “An intuitive guide to Convolutional Neural Networks”, available at FreeCodeCamp, https://www.freecodecamp.org/news/ an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050/ . Cruttwell, Gallagher, & MacAdam (2019). Towards formalizing and extending differential programming using tangent categories. Extended Abstract, Proc. ACT 2019, available at: http://www.cs.ox.ac.uk/ ACT2019/preproceedings/Jonathan%20Gallagher,%20Geoff%20Cruttwell%20and%20Ben%20MacAdam.pdf . Culbertson, J. & K. Sturtz (2013). Bayesian Machine Learning via Category Theory. arXiv:1312.1445v1 [math.CT] 5 Dec2013. Ehrhard, Thomas and Laurent Regnier (2003). The differential lambda calculus. Theoretical Computer Science, 309 (1-3):1-41. Drineas, Petros and Michael W. Mahoney (2017). Lectures on Randomized Numerical Linear Algebra. arXiv:1712.08880v1 [cs.DS] 24 Dec 2017. Fiebiger, B. (2017). Semi-autonomous household expenditures as the causa causans of postwar US business cycles: the stability and instability of Luxemburg-type external markets. Cambridge Journal of Economics, vol. 42, Issue 1, 2018, pp. 155–175. Fiebiger, B., & Lavoie, M. (2017). Trend and business cycles with external markets: Non-capacity generating semi-autonomous expenditures and effective demand. Metroeconomica.2017;00:1–16. Fong, Brendan, David Spivak and Rémy Tuyéras’s (2017). Backpropagation as Functor: A compositional perspective on supervised learning. https://arxiv.org/abs/1711.10455v3. Gershenfeld, Neil, Alan Gershenfeld, and Joel Cutcher-Gershenfeld (2018). Designing Reality: How to Survive and Thrive in the Third Digital Revolution . New York: Basic Books. Hedges Jules, Jelle Herold (2019). Foundations of brick diagrams. rXiv:1908.10660v1 [math.CT] 28 Aug 2019. Juniper, J. (2018). Economic Philosophy of the Internet-of-Things. London: Routledge. Juniper, J. (2005). A Keynesian Critique of Recent Applications of Risk-Sensitive Control Theory in Macroeconomics, Contemporary Post Keynesian Analysis, proceedings of the 7th International Post Keynesian Workshop, Northhampton: Edward Elgar, UK. Keynes, J. M. (1936). The General Theory of Employment, Interest and Money, London, Macmillan, Retrieved from: http://www.hetwebsite.net/het/texts/keynes/gt/gtcont.htm . Lin, H. W., M. Tegmark & D. Rodnick (2017). Why does deep and cheap learning work so well? J. of Stat. Physics. arXiv:1608.08225v4 [cond-mat.dis-nn] 3 Aug 2017. LeCun, Yann (2018). Deep Learning est mort. Vive Differentiable Programming! Facebook blog entry, January 6, 2018: https://www.facebook.com/yann.lecun/posts/10155003011462143 020-01-07 Lemay Jean-Simon Pacaud (2017). Integral Categories and Calculus Categories. Master of Science Thesis, University of Calgary, Alberta. Loregian, Fosco (2019). Coend calculus—the book formerly known as ‘This is the co/end’. arXiv:1501.02503v5 [math.CT] 21 Dec 2019. Lovelock, James (2019). Novacene: The Coming Age of Hyperintelligence. London: Allen Lane. Martins, Nuno Ornelas (2019). The Sraffian Methodenstreit and the revolution in economic theory. Cambridge Journal of Economics, 43: 507–525. Minsky, Hyman P. (May 1992). The Financial Instability Hypothesis. The Jerome Levy Economics Institute of Bard College, Working Paper No. 74: 6–8. http://www.levy.org/pubs/wp74.pdf . Olah, Christopher (2015). Colah, Blog entry on “Neural Networks, Types, and Functional Programming”. Posted on September 3, http://colah.github.io/posts/2015-09-NN-Types-FP/ . Plotkin, Gordon (2020). A complete axiomatisation of partial differentiation. The Spring Applied Category Theory Seminar at University of California, Riverside, 7 June, 2020, http://math.ucr.edu/ home/baez/ACT@UCR/index.html#plotkin . Poggio, T., H. Mhaskar, L. Rosasco, B. Miranda & Q. Liao (2017). Why and When Can Deep—but not Shallow—Networks Avoid the Curse of Dimensionality: A Review. International Journal of Automation and Computing, 14(5), October 2017, 503-519. Prokopenko, Harre, Lizier, Boschetti, Peppas, Kauffman (2019). Self-referential basis of undecidable dynamics: from the Liar paradox and The Halting Problem to The Edge of Chaos. arXiv:1711.02456v2 [cs.LO] 21 Mar 2019. Riley, M. (2018). Categories of Optics. arXiv:1809.00738v2 [math.CT] 7 Sep 2018. Rivas, E. (2018). Relating Idioms, Arrows and Monads from Monoidal Adjunctions. Chapter in R. Atkey and S. Lindley (Eds.): Mathematically Structured Functional Programming (MSFP 2018) EPTCS 275, 2018, pp. 18–33. Román, Mario (2019). Profunctor optics and traversals. MSc Thesis in Mathematics and Foundations of Computer Science, Trinity, Oxford University. arXiv:2001.08045v1 [cs.PL] 22 Jan 2020. Spivak, David I. (2019). Generalized Lens Categories via Functors C^op → Cat. arXiv:1908.02202v2 [math.CT] 7 Aug 2019. Sraffa, Piero (1960) Production of Commodities by means of Commodities: A Prelude to the Critique of Neo-Classical Economics. Cambridge: Cambridge University Press. Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. London: Penguin Books. The Statebox Team (2019). The Mathematical Specification of the Statebox Language, Version June 27, 2019, https://statebox.org/research/ . Stiglitz, J. E., (2018) Where modern macroeconomics went wrong, Oxford Review of Economic Policy, 34(1-2), pp. 70–106. Wright, A. (?). Appendix A-Brief Introduction to Kernels. Mimeo. University of Lancaster. https://www.lancaster.ac.uk/pg/wrighta3/STOR603_Appendix_A.pdf . Yang, G. (2019). Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, Yanofsky (2003). A universal approach to self-referential paradoxes, incompleteness and fixed-points. arXiv:math/0305282v1 [math.LO] 19 May 2003. Zeilberger, Noam and Alain Giorgetti (2015). A correspondence between rooted planar maps and normal planar lambda terms. Logical Methods in Computer Science, Vol. 11, 3(22): 1–39. Zuboff, Shoshana (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books. The Latest Trends in Last-mile Delivery Last-mile delivery, the movement of products from a hub of transportation to the last delivery destination, has become one of the important e-commerce systems. Because of the growing demand for integrated delivery systems around the world, there is an upsurge in last-mile logistics because this is a key differentiator for ecommerce companies. It is expected that the market size of the last-mile delivery market is to increase by USD 165.6 billion between 2023 and 2027. The growth of the market may be based on several factors such as the rising number of various e-commerce companies, consumer behaviours, emergencies, and the growing need for warehouses. Why Last-mile delivery? 1. 73% of the global population is shopping online. 2. The market is expected to grow continuously even after this decade. 3. 45% of online shoppers are interested to buy from a company again if an order is late. 4. 41% of consumers are ready to pay an extra charge for same-day delivery. 5. According to 45% of online shoppers, retailers meet delivery expectations. 6. 56% of Gen Xers and 55% of Millennials like online shopping. 7. In Countries like the US, almost 157.4 million shoppers expect free two-day shipping on their orders, and some areas even prefer one-day or same-day shipping. Without vehicles’ existence, shoppers won’t be able to receive their orders, and that is the reason why a trustworthy fleet of transportation vehicles is essential for the success of last mile delivery. It might be challenging to deliver on fast turnaround times, but it’s worth it from some brands’ perspectives such as Amazon Prime became an example wherein consumers don’t mind paying in excess for quicker delivery. Businesses compete in such a way that their tactics and eCommerce operations are customer-centric, 100%. Here are the latest trends in last-mile delivery that supports companies to keep up with growing consumer expectations to get their orders faster. If you want to take your company forward to the next step, these trends may help you find out the right techniques. Environment-friendly Deliveries Environment-friendly deliveries have created a big impact on retailers and logistics services providers and the approach to eco-friendly last-mile delivery strategies is a relevant business and societal challenge. By encouraging green deliveries, corporates want to achieve zero greenhouse gas emissions. More importantly, creating solid sustainability goals which consist of an eco-friendly last mile is a vital step in the fight against climate change. Businesses with business models that promote sustainability often win customers and it boosts their brand image. Nowadays, companies keep focusing on providing environment-friendly deliveries with the support of reusable packaging material and reducing CO2 emissions by utilizing delivery trucks that use green energy. According to a study conducted by the World Economic Forum, the need for last-mile deliveries may grow by 78% by 2030. The following are some environment-friendly techniques that companies can adopt for sustainable delivery methods to support an eco-friendly last mile. • Monitoring an inventory of your current carbon footprintcan make you aware of what sustainability goals your company should adopt to save the planet. • Discovering some essential steps to decrease your carbon footprint is the next important aspect of a sustainability initiative. • Since general packing requirements may become a threat to our environment, identifying sustainable packing is a critical aspect of last-mile sustainability. • Adopting strategies for utilizing green energy to reduce emissions as it makes a great impact in your company atmosphere as well. • Communicate and give consumers a slower-speed choice which can contribute to decreasing harmful emissions. Researchers at MIT discovered that fast shipping not only increase expenses by up to 68%, but it also increases total carbon emissions by 15%. Drones and Autonomous Vehicles According to a study, almost 3250 parcels are transported globally every second. It is expected that it may be doubled by 2026. With a traditional approach to delivery, companies are not able to manage such a big volume of parcels. Consequently, e-commerce companies have taken modern approaches through autonomous vehicles, drones, and delivery bots. Drones and autonomous vehicles have been excellent solutions for last mile delivery. They have given benefits such as efficient delivery, cheaper operational cost, and environment-friendly technology and these are the primary reasons why they will have a positive impact on logistics. In the future, e-commerce companies may adopt much more smart technologies and tools to enhance customer experience with last-mile delivery. It is expected that the global market size of autonomous last-mile delivery may reach $68+ Billion by 2028. E-commerce companies also keep trying to invent new devices and vehicles that can support fast and sustainable deliveries. It was forecasted that last mile delivery will get more programmed through the execution of delivery drones. The heightened use of unmanned aerial vehicles (UAVs) for quicker delivery of goods is one of the main factors in the drone market’s growth. Fast Deliveries The rising number of consumer demand for receiving anything quicker has resulted in e-commerce companies focusing more on faster deliveries. “The faster the happier” has become a watchword for every customer in the digital world. Fast deliveries are a must for businesses that deliver the products of quick needs. It is expected that the market size of global same-day delivery may grow at a compound annual growth rate of 20.3% from 2020 to 2027. Because of increasing urbanization, fast delivery is a must for companies such as: • Direct-to-consumer model retail companies • Food delivery companies • Florists • Pharmacies To conclude, last-mile challenges can be hard to handle if you are not knowledgeable about them. It is important to keep in mind that there’s no such thing as last-mile solution but you can apply these practices that can get positive results: 1. Work harder to meet consumer demands efficiently 2. Make on-time deliveries 3. Delivery logistics’ smart automation These right practices, when combined with an appropriate app, can lessen the risk of last-mile problems and start flourishing during the most challenging phase of the delivery business. Urban Warehouses When it comes to online purchases, most customers demand same-day deliveries. Hence, companies need to arrange warehouses and hubs closer to the city. Urban warehouses have become a key place for guaranteeing fast logistics services in every urban hub. It is strategically positioned for the businesses, between its distribution center and the bulk of its clients, being an effective solution for complying with ecommerce business standards: same-day deliveries and trouble-free product returns. With the support of urban warehouses, companies are able to minimize the risk of damaged goods. It is an innovative approach to provide fast and professional services based on customer demands since retailers can gain quick access to a large volume of customers. Urban warehouses and fulfilment centres are two essential sources that can play a great role in fast deliveries anywhere in the world. The rise of ecommerce is one of the important reasons why urban warehouses have become well-known in supply chains nowadays. Smart Tracking E-commerce companies have adopted advanced technologies such as smart tracking to improve their last-mile delivery. With smart tracking, some important features such as real-time tracking, visibility, and route optimization are supporting them with seamless deliveries and fleet management. It also contributes to increasing cost-effectiveness, enhancing driving management, inhibiting auto theft, lessening vehicle idle time by route optimization, and boosting consumer satisfaction. By keeping the customers informed about the correct location of an item in transit by providing more visibility, e-commerce companies have developed a sense of secure deliveries among customers. As it helps ensure that products are delivered to customers on time and professionally, smart tracking helps improve customer satisfaction levels. When it comes to last-mile delivery, technology plays a key role to break boundaries. With the support of the latest technologies, the last-mile delivery system keeps growing to meet consumer demands. It is expected that this system will continue with the growth of e-commerce because of its increased needs. And also, to keep ahead of competitors, it is a must for every company that wants to achieve in the digital world. Hence, companies are to stay on the latest trends in Last-mile delivery to advance sales and marketing. An MMT Perspective on how Agenda 30 could be Implemented in Australia Covid-19 has shown that governments with monetary sovereignty can turn the tap off quickly, if they must, and just as easily turn the tap back on. This has been coupled with a new appreciation for the ability of a sovereign economy to operate effectively despite large levels of net government (and net foreign) debt as a proportion of GDP, reconfirming the experience of those governments during WWII, when debt was used as an instrument to curb consumption and to redirect productive resources and research activity into investment in new capacity and new technology to support the war effort (viz the Agenda 30 strategic policy goals). A similar imperative now confronts nations as they direct resources into a sustainable transformation of the economy. This paper will contribute to these policy objectives by examining the respective economic roles to be played in this transformation by the Job Guarantee, the Green New Deal, and what Mazzucato chooses to call “ mission-oriented finance”! In this context, a range of metrics for guiding policy is also evaluated. Keywords: Modern Monetary Theory, Agenda 30, Green New Deal, Job Guarantee, Mission-oriented Finance, Short-changing Nature. 1. Introduction The main purpose of this paper is to clarify both the rationale for, and policy objectives underpinning, a range of interventions recently advocated by Modern Monetary Theorists (MMTs), within the context of the UN’s Agenda 30 strategic policy goals. Specifically, it will address the Job Guarantee (JG) as an anti-inflationary instrument and the Green New Deal (GND) as a means for redirection of resources and capital investment. However, I intend to achieve this clarification within an academic context where it has become fashionable to question MMT for its on-going adherence to supposedly inadequate or erroneous theoretical principles. Much like much like Marc Antony in Shakespeare’s Julius Caesar, who guilefully claimed that he came “to bury Caesar not to praise him”, for although Brutus (along with Georg Friedrich Knapp and Abba Lerner) was purportedly an honourable man, MMT is faulted on a fundamental level for its less than honourable fidelity to the principles of (i) neo-Chartalism; and, (ii) Functional The first theoretical allegiance is criticised on the basis of a broader Post Keynesian or Marxist interpretation of “money with no intrinsic value”, which questions the notion that stability in the value of money, when it functions as both a unit of account and a store of value, can be guaranteed solely by the legislated requirement that it be used for the payment of outstanding tax obligations (Lapavitsas & Aguila, 2020, is representative on this strand of critique). The second allegiance is questioned by so-called Structuralists, on the basis that current account deficits and cumulative net foreign indebtedness do matter, especially for emerging economies, which suffer from being situated low in the global currency hierarchy, plagued by a narrow, commodity-based admixture of exports, while subject to a rapidly destabilising pass-through of exchange rate fluctuations onto tradeable goods prices (for examples of this Structuralist critique, see Prates, 2020; Vernengo & Caldentey, 2019, for critiques, and Carnevali et al., 2020 for a discussion of strategic pass-through as a generalization of the Marshall-Lerner conditions). With the intention of clearing the way for a more focused discussion of macroeconomic policy options, I wanted to briefly respond to the above-mentioned criticism of MMT’s theoretical foundations. To begin with, I wanted to highlight the fact that, in the 1980s, the Australian tradition of MMT developed within an environment where many mainstream and more left-wing economists adopted what were effectively Structuralist arguments to argue that a return to policies of full employment that were temporarily abandoned in the last year of the Whitlam Labour Government, was prevented by a “Balance-of-Payments Constraint”. When Hawke-Keating Labor Government was returned to power in the early 80s, Treasurer Paul Keating, largely mirrored then ex-Prime-Minister John Howard’s obsession with the rising level of foreign debt. In Australia, back in the 80s, a series of inter-linked Structuralist arguments legitimised a sustained assault on the wages and conditions of Australian workers, and ultimately, the level of trade union influence. However, the biggest impact on the industrial working class could be sheeted home to rising labour underutilisation (a combination of rising unemployment and ‘precariousness’). With the clear intention of reducing the “real wage overhang,” workers were encouraged to trade-off increases in the ‘social wage’ for cuts to real wages as a means of restoring the international competitiveness of Australian goods and services. In grappling with these problems, progressive economists often (incorrectly) applied Kaldor-Thirlwall multiplier models of trade to the case of floating rather than fixed exchange rates (McCombie & Thirlwall, 1994)[1]. On this view, income elasticities of demand dominate in their effects over exchange-rate related price-elasticities. A country like Australia is seen to have a high income-elasticity of demand for imports whereas the rest of the world has relatively low income-elasticities of demand for Australian exports. Accordingly, if Australia were to grow too rapidly compared with rates of growth exhibited by our major trading partners, the current account deficit would widen dramatically. “Stop-Go” policies would be the inevitable result. Within the Commonwealth Government bureaucracy, it was commonplace for economists to refer to the “twin deficits” hypothesis, which viewed total public sector debt as the main driver of deficits on the current account. Similar views were actively promoted by supposedly ‘left wing’ economists in the National Institute of Economic and Industry Research, officials at the OECD, and members of Secretariat of the tripartite Australian Economic Planning and Advisory Commission. At around the same time, there was much-heated debate about “Dutch Disease” (i.e., the “crowding out” of other industries when the resource-sector expanded) and the “J-curve” effect in Australia (which arises when an exchange-rate depreciation initially worsens the trade deficit before contributing to a gradual increase in net exports). Both Marxist and Post Keynesian critics of MMT have emphasised the importance of the global currency hierarchy, the determinants of effective sovereignty, and the influence of conventions and confidence in the whole monetary system as having some bearing on the value of money. And Chartalist views have been questioned on the dubious basis that spot/forward contracts were developed before effective principles of taxation were formalised. It has been claimed that many developing economies simply “will not find foreign demand for their currencies”. Kaltenbrunner (2012) has attempted to achieve an integration of what she calls the ‘horizontalist’ or structuralist position and ‘verticalist’ interpretations of monetary policy in open economies (the work of Lavoie, 2000, 2002-03 can be seen as illustrative of the ‘verticalist’ position, especially in his interpretation of the covered interest parity condition). To this end, she has identified three structural factors that determine the ability of a country to meet outstanding external obligations (and thus, the liquidity premium on its currency); namely: (i) a country’s total stock of net (short-term) external obligations (expressed as a proportion of GDP); (ii) the proportion of its liabilities denominated in foreign currency and the possible existence of other liabilities to foreign investors; (iii) a country’s ability to meet its outstanding liabilities through “forcing a cash flow in its favour” either through the income generation process (including income from previous rounds of lending) and/or dealing and trading in capital assets and financial instruments; and finally, if current cash flows are insufficient to meet outstanding obligations, (iv) the ability to “make positions” (i.e. to refinance existing debt and/or to liquidate assets). The question for policy makers is whether a country exposed to external pressures in each of these three ways, can put together a cluster of policy interventions, including capital controls, to counter any likely shocks (while recognzing the fact that floating exchange rates ensure greater levels of autonomy in the pursuit of effective fiscal policy). This is where consideration must be given to a range of policy instruments that help to develop and diversify the economic and trading base. Personally, I see a strong resonance between Marxist views on the credit system, when it fails to work as a means of payment, and Minsky’s notions of financial instability, which have long been accepted by MMT advocates. By the same token, I see little difference between Marx’s conception of money with no intrinsic value and Chartalist efforts to explain how a stable value for the national currency can be established. In the next section of the paper, I will examine the Job Guarantee (JG). This will be followed by an interpretation of the Green New Deal (GND) as a policy for controlling inflationary pressures in the long run, while achieving dramatic changes in the resource base. Australian MMT researchers would insist that a raft of supplementary policies (apart from, but including capital controls) can also be adopted as supplements in the pursuit of full employment, including tax policy, industry policy and a strategic commitment to industrial development on the basis of competencies. 2. The Job Guarantee in a “Nutshell” The JG is premised on the fact that only the national government (as issuer of fiat currency) can create Net Financial Assets (NFA) through deficit spending. To avoid any under-employment of labour and productive capacity, the flow of NFA must match non-Government sector’s desire to net save. Otherwise, there would be a shortfall in effective demand. Jobs created through the issue of NFA would be paid at minimum wage and designed so that they do not directly compete with those to be subsequently created within the domestic private sector via the multiplier. The JG labour-force thus functions as a “buffer stock” whose primary role is that of anti-inflationary instrument This is because the uneven distribution and persistence of underemployment means that traditional policies of public investment would otherwise meet inflationary bottlenecks well before full employment is reached (Mitchell & Juniper, 2007). Mitchell (2020) explains how a JG operates as a superior means for the control of inflation when compared to the mainstream pursuit of a non-accelerating inflation rate of unemployment (NAIRU). The effectiveness of inflation policies based on NAIRU can and has been undermined by: (i) the continual movement of workers out of short term into long-term unemployment; and, (ii) the dramatic rise in the proportion of those in precarious employment. In the more technical literature on inflation, these combined effects are said to have contributed to the development of a “horizontal” Phillips Fig. 1., below, suggests how the JG could operate by comparing three positions on the traditional Phillips Curve, which depicts trade-offs between realized inflation and unemployment. Governments increase effective demand in response to high unemployment in position A, moving to position B, at the cost of a rise in inflation from I[A] to I[B]. If a JG were put into place, the economy could instead move to position C, achieving full employment at the original rate of inflation. 3. The Green New Deal in a “Nutshell” Where the JG is a short-run anti-inflationary mechanism, the Green New Deal (GND) is a log-run anti-inflationary mechanism for achieving a dramatic transformation in the economy through intervention in the process of capital accumulation. The GND adopts the methodology originally proposed by J. M. Keynes in his pamphlet on How to Pay for the War in the context of responding on a massive scale to environmental problems such as climate change (Nersisyan & Wray, 2019). Under this modern version of the scheme, the stages to be followed are first to estimate the “costs” of the GND in terms of resource requirements; second, to assesses resource availability that can be devoted to implementing GND projects. This includes mobilisation of unutilized and underutilised resources, plus shifting of resources away from current destructive and inefficient uses into GND projects. Here, the main problem that could arise is that of inflation if sufficient resources cannot be diverted to the GND. Accordingly, the scheme also proposes a series of anti-inflationary measures, which could include well-targeted taxes, wage and price controls, rationing, and voluntary saving. During WWII, voluntary saving was accomplished, both Great Britain and the US, through issue of war bonds to all classes in society. This combination of policy interventions is summarised in Fig. 2., below. 4. Industry-Policies and Economic Development Through an historical and political analysis of the East Asian development model, have Amsden and Wade have highlighted the difficulties faced by developing economies that are located at some distance from the frontier of best practice, yet still want to tilt the “playing field” away from existing configurations of comparative advantage. Amsden (1989) emphasises the need for a strong state to impose binding condition of reciprocity on corporations and sectors that benefit from a variety of subsidies designed to influence the path of capital accumulation. Wade (1990) attends to the complexity of “governed market” interventions that might appear to be even-handed in regard to trade versus non-traded, or import-substituting rather than export-oriented industries (conditions which he describes as those of a “simulated free-market”), yet nevertheless still provide incentives for advancement. The work of Felipe et al., (2012) builds on the competency-based economic analysis of strategic development. This research updates work originally conducted by Hidalgo and Hausman using another set of data encompassing 5107 products and 124 countries. A minimal spanning tree is constructed for global trade based on proximity links between different products. Production of traded goods located at the centre of the network is seen to require a more diverse and non-ubiquitous but unobservable set of competencies. In countries such as India and China, policy makers seem to have been able to exploit available proximity links in their efforts to expand both the scale and scope of what is being produced and traded. More recently, Barry Naughten (2021) has identified a shift in Chinese industry policy away from sectoral policies for strategic emerging industries towards policies that promote core digital technologies that, if successful, would enable China to leap-frog ahead of EU and US industries in a selected range of key domains (including digital fabrication and production, quantum computation, AI, and machine-learning, big data and the internet-of-things). Understandably, Naughten is reluctant to evaluate the success or failure of these initiatives, remarking that insufficient evidence has yet been amassed to make such judgements. He describes at some length the Industry Guidance Funds (IGF) that China deploys to coordinate different forms of investment at all levels of government—national, provincial, and local—in innovation, infrastructure, and commercialisation of these technologies—while identifying potential sites of failure and emerging risk. Along similar lines, Mazzucato and Wray (2015) have emphasised the important policy role of State Investment Banks for “entrepreneurial states” wishing to engage in counter-cyclical expenditure, capital development, and new venture support. In particular, they describe an over-arching process of “mission-oriented” finance instantiated by Eisenhower’s efforts to “land a man on the moon” before the Soviet Union. The interventions of a variety of agencies—both public and private, including the newly formed NASA and DARPA—were orchestrated to achieve this set of aims, through the injection of finance at each stage in the innovation chain (i.e., from research, through concept invention, early-stage technology development, and product development, into final production and marketing). If successful, China’s IGFs would fulfil all of these requirements. This same kind of coordinated approach could readily be harnessed to achieve ecological rather than military and geo-political goals. 5. Metrics for Short-Changing Nature In a talk I recently gave to members of MMT-Australia I focused on the limitations of mainstream approaches to Ecological Economics focusing on the modified neoclassical framework of Pearce and Turner (1990). My major concern was to question those who saw policies for full-employment as being in contradiction with interventions designed to achieve ecological sustainability. However, I also questioned the notion of environmental capital, which featured in Pearce and Turner’s ‘4 Capitals’ model of sustainability. Accordingly, I turned to Marx’s concept of ‘fictitious capital’, which he applied both to human capital (with labour services being capitalised into a ‘stock’ using a discount rate that simply reflected the rate of exploitation) and to the capitalisation of fictitious structures of money taking the form of credit as a means of payment, that were increasingly divorced from real processes of capital accumulation. I suggested that environmental capital could be viewed as an equally fictitious concept, insofar as it attempted to ‘capitalise’ ill-defined flows of ‘environmental services. I moved on to the need to build more rigorous bridges between Value Theory (understood in terms of Classical rather than Neoclassical Political Economy) and sustainability metrics (which adequately accounted for the ‘short-changing’ of nature). In the Classical system, prices are determined by socially necessary labour time, including the application of the labour embodied in productive capital. However, from a sustainability perspective, this should include the labour time required to recycle renewable resources, reduce other forms of waste, mitigate the impact of pollution, and discover substitutes for non-renewable resources whose stocks were being depleted (as argued by Moore, 2017). To this end, I emphasised the proximity between this Classical theory of reproduction pricing, Leontief’s Input-Output Analysis (which has been taken up by a whole generation of Industrial Ecologists), and national accounting conventions for the measurement of GDP (on the former see Schmelev, 2012, along with Suh and Kagawa, 2005; on the latter see Flaschel, 2010). I then suggested that metrics for sustainability could be constructed by adopting techniques of linear programming that had been developed by mathematical economists and planners in the former Soviet Union, because these techniques were also grounded in the labour theory of value. At the time I was unaware that Paul Cockshott (2010) and his PhD student, Jan Dapprich (2020), had already pursued this approach to sustainability modelling, using modern software, while building on the research of Kantorovich (1939, 1965) and Novozhilov (1970) (also see Ellman, 1968 and Holubnychy, 1982). For convenience, the various elements of what has been proposed above, are brought together in the Fig. 3., below. 6. Conclusion By way of a recapitulation, let me suggest that policies such as the JG and the GND complement one another and, in combination, demonstrate ways that Agenda 30 can be successfully implemented both in Australia and elsewhere. I went on to argue that the Job Guarantee could serve as a short-run inflation control mechanism, while promoting full-allocation and processes of capital accumulation, to achieve sustainability objectives, while avoiding inflationary pressures over the long-run. In arguing for this position, I also wanted to highlight the fact that MMT is and has always been cognisant of difficulties faced by ‘emerging economies. For this reason, I also considered a raft of industry policies that could assist developing nations in their efforts to progress up the technology and productivity ladder (even leaving the existing technology frontier behind them in their wake), while diversifying their trade activity. Industry policy can take a long time to come to fruition and some merging economies may be exposed to difficulty when servicing ballooning foreign debt. Under these circumstances capital controls may also fail to stem the tide of increasing financial obligation. However, as Kaltenbrunner (2019), explains, only a certain number of emerging economies would fall into this category. MMT advocates maintain that the loss of fiscal autonomy that would result from any move towards a fixed or ‘dollarised’ exchange-rate, would unfortunately be a heavy price to pay for the achievement of currency stability, even in the short-run. Finally, I touched on ways that sustainability metrics could be developed using techniques of linear programming that deployed a modified labour theory of value approach to account for various ways in which nature was being ‘short-changed’. In this way, programmes to achieve full-employment could be designed to complement efforts to transform productive activity in ways that met ecological sustainability objectives. Author: Professor Dr. James Juniper – Conjoint Academic, University of Newcastle; PhD in Economics, University of Adelaide Amsden, Alice H. (1989). Asia’s next giant : South Korea and late industrialization. New York; Oxford: Oxford University Press. Filho, S. F. M., F. G. Jayme Jr., and G. Libânio (2013). Balance-of-payments constrained growth: a post Keynesian model with capital inflows. Journal of Post Keynesian Economics/Spring, 35(3): Campbell, Martha (2017). Marx’s Transition to Money with no Intrinsic Value in Capital, Chapter 3. Continental Thought and Freedom: 150 years of Capital, 1(4): 207-230. Carnevali, Emilio, Giuseppe Fontana & Marco Veronese Passarella (2020). Assessing the Marshall–Lerner condition within a stock-flow consistent model, Cambridge Journal of Economics, 44: 891–918. Cockshott, Paul (2010). Von Mises, Kantorovich and in-natura calculation. European Journal of Economics and Economic Policies, INTERVENTION, January 7 (1): 167 – 199. Dapprich, Jan Philipp (2020) Rationality and distribution in the socialist economy. PhD thesis. http://theses.gla.ac.uk/81793/ . Ellman (1968). Novozhilov’s Book: A Note. Soviet Studies, 20(1), 152–154. Felipe, Jesus, Utsav Kumar, Arnelyn Abdon, and Marife Bacate (2012). “Product Complexity and Economic Development”, Structural Change and Economic Dynamics, 23(1): 36-68. (available from Journals – Jesus Felipe ) Flaschel, Peter (2010). Topics in Classical Micro- and Macroeconomics Elements of a Critique of Neoricardian Theory. Berlin, Heidelberg: Springer-Verlag. Hagendorf, Klaus (2013). Victor Valentinovich Novozhilov: A Marxian Mathematical Economist In Honour of the 120th Anniversary of His Birth. Available from authors RepEc website: https:// Harvey, J.T. (1991). “A Post Keynesian View of Exchange Rate Determination.” Journal of Post Keynesian Economics, Fall, 14(1): 61-71. Hidalgo, C. & R. Hausmann (2009). The building blocks of economic complexity. Proceedings of the National Academy of Sciences of the United States of America, 106 (26): 10570–10575. Holubnychy, Vsevolod (1982). Novozhilov’s Theory of Value. In (ed. Ivan Koropckyj) Soviet Regional Economics: Selected Works of Vsevolod Holubnychy. Edmonton: Canadian Institute of Ukranian Studies. Totonto: Hignell Printing. Available at https://diasporiana.org.ua/wp-content/uploads/books/20677/file.pdf : 381-484. Kaltenbrunner, Annina (2019). How to interpret the forward rate in the foreign exchange market? Horizontalists vs. Structuralists in the Open Economy. Revista de Economí Política y Desarrollo, November – April, 1(2): 7-23. Kantorovich, Leonid V. (1939 [1960]). Mathematicheskie Metody Organizatsii i Planirovania Proizvodstva, Leningrad: Leningrad State University Publishers. (Trans.) Mathematical methods in the organization and planning of production. Management Science, 6(4): 366–422. Kantorovich, Leonid V. (1965). (trans. P.F. Knightsfield), The Best Use of Economic Resources. Oxford: Pergamon Press. Lapavitsas, Costas & Nicolás Aguila (2020). Modern monetary theory on money, sovereignty, and policy: A marxist critique with reference to the Eurozone and Greece, The Japanese Political Economy, 46 (4): 300-326. Lavoie, Marc (2000). A Post Keynesian View of Interest Parity Theorems, Journal of Post Keynesian Economics, 23:1, 163-179. Lavoie, M. (2002). “Interest Parity, Risk Premia, and Post Keynesian Analysis.” Journal of Post Keynesian Economics, 25(2): 237-249. Mazzucato, M. & L. R. Wray (2015). Financing the capital development of the economy: a Keynes-Schumpeter-Minsky synthesis, Levy Economics Institute, Working Paper No. 837. McCombie, J.S.L. and Thirlwall, A.P. (1994). Economic Growth and the Balance of Payments Constraint. London: Macmillan. Mitchell, William (2015). Modern Monetary Theory: Macroeconomic research, teaching and advocacy, The Roots of MMT do not Lie in Keynes. Blog entry, 25 August. http://bilbo.economicoutlook.net/blog/?p =31681 . Mitchell, William (2020), The Job Guarantee and the Phillips Curve, The Japanese Political Economy, 46(4): 240-260. Mitchell, W. & J, Juniper (2007). Towards a Spatial Keynesian Macroeconomics, Chapter 10 in Advances in Monetary Policy and Macroeconomics (eds.) Philip Arestis and Gennaro Zezza, Houndsmill, Hampshire: Palgrave Macmillan. Mitchell, William, L. Randall Wray, & Martin Watts (2019). Macroeconomics, London: Macmillan-Red Globe Press. Moore, Jason W. (2017). Metabolic rift or metabolic shift? Theory & Society, 46:285–318. Naughten, B. (2021). The Rise of China’s Industrial Policy 1978 to 2020. Mexico City: D.R. Universidad Nacional Autónoma de México. Nersisyan, Yeva & L. Randall Wray (2019). How to Pay for the Green New Deal. Levy Economics Institute Working Paper No. 931, May. Novozhilov, V. V. (1970). Problems of cost-benefit analysis in optimal planning (Trans. H. McQuiston), White Plains, N.Y.: International Arts and Sciences Press. Pearce, David W. R. and Kerry Turner (1990). Economics of Natural Resources and the Environment. Baltimore: Johns Hopkins University Press. Prates, D. (2020). Beyond Modern Money Theory: a Post-Keynesian approach to the currency hierarchy, monetary sovereignty, and policy space. Review of Keynesian Economics, 8(4): 494–511. Shmelev, S. E. (2012) Industrial Economics, Chapter 2 in Ecological Economics: Sustainability in Practice. Springer Publishers: 19-34. Suh, Sangwon and Shigemi Kagawa (2005). Industrial Ecology and Input-Output Economics: An Introduction. Economic Systems Research, December, 17(4): 349–364. United Nations Statistics Division (UNSD) (2020). SDG Indicators – Global indicator framework for the Sustainable Development Goals and targets of the 2030 Agenda for Sustainable Development. https:/ Vernengo, Matías and Esteban Pérez Caldentey (2019). Modern Money Theory (MMT) in the Tropics: Functional Finance in Developing Countries. Political Economy Research Institute. University of Massachusetts, Amhearst, Working Paper No. 495. Wade, Robert (1990). Governing the Market: Economic Theory and the Role of Government in East Asian Industrialization. Princeton, NJ: Princeton University Press. Semantic Technologies for Disaster Management: Network Models and Methods of Diagrammatic Reasoning The Chapter will provide a brief and informal introduction to diagrammatic reasoning (DR) and network modelling (NM) using string diagrams, which can be shown to possess the same degree of rigor as symbolic algebra, while achieving greater abbreviative power (and pedagogical insight) than more conventional techniques of diagram-chasing. This review of the research literature will set the context for a detailed examination of two case-studies of semantic technologies which have been applied to the management of emergency services and search-and-rescue operations. The next section of the Chapter will consider the implications of contemporary and closely related developments in software engineering for disaster management. Conclusions will follow. This Chapter is concerned with developments in applied mathematics and theoretical computing that can provide a formal and technical support for practices of disaster management. To this end it will draw on recent developments in applied category theory , which inform semantic technologies. In the interests of brevity, it will be obliged to eschew formal exposition of these techniques, but to this end, comprehensive references will be provided. The justification for what might at first seem to be an unduly narrow focus, is that applied category theory facilitates translation between different mathematical, computational and scientific domains. For its part, Semantic Technology (ST) can be loosely conceived as an approach treating the World-Wide-Web as a “giant global graph”, so that valuable and timely information can be extracted from it using rich structured-query languages and extended description logics. These query languages must be congruent with pertinent (organizational, application, and database) ontologies so that the extracted information can be converted into intelligence. Significantly, database instances can extend beyond relational or graph databases, to include Boolean matrices, relational data embedded within the category of linear relations, and that pertaining to systems of differential equations in finite vector space, or even quantum tensor networks within a finite Hilbert space. More specifically, this chapter will introduce the formalism of string diagrams, which were initially derived from the work of the mathematical physicists, Roger Penrose (1971) and Richard Feynman (1948). However, this diagrammatic approach has since been extended and re-interpreted by category theorists such as Andre Joyal and Roy Street (1988, 1991). For example, Feynman diagrams can be viewed as morphisms in the category Hilb of Hilbert spaces and bounded linear operators (Westrich, 2006, fn. 3: 8), while Baez and Lauda (2009) interpret them as “a notation for intertwining operators between positive-energy representations of the Poincaré group”. Penrose diagrams can be viewed as a representation of operations within a tensor category. Joyal and Street have demonstrated that when these string diagrams are manipulated in accordance with certain axioms—the latter taking the form of a set of equivalence relations established between related pairs of diagrams—the movements from one diagram to another can be shown to reproduce the algebraic steps of a non-diagrammatic proof. Furthermore, they can be shown to possess a greater degree of abbreviative power. This renders an approach using string diagrams extremely useful for teaching, experimentation, and exposition. In addition to these conceptual and pedagogical advantages, however, there are additional implementation advantages associated with string diagrams including: (i) those of compositionality and layering (e.g. in Willems’s 2007 behavioural approach to systems theory, complex systems can be construed as the composites of smaller and simpler building blocks, which are then linked together in accordance with certain coherence conditions); (ii) a capacity for direct translation into functional programming (and thus, into propositions within a linear or resource-using logic); and, (iii) the potential for the subsequent application of software design and verification tools. It should be appreciated that these formal attributes will become increasingly important as the correlative features of what some have described as the digital economy. This chapter will consider the specific role of string diagrams in the development and deployment of semantic technologies, which in turn have been developed for applications of relevance to disaster management practices. Techniques based on string diagrams have been developed to encompass a wide variety of dynamic systems and application domains, such as Petri nets, the π-calculus, and Bigraphs (Milner, 2009), Bayesian networks (Kissinger & Uijlen, 2017), thermodynamic networks (Baez and Pollard, 2017), and quantum tensor networks (Biamonte & Bergholm, 2017), as well as reaction-diffusion systems (Baez and Biamonte, 2012). Furthermore, they have the capacity to encompass graphical forms of linear algebra (Sobociński, Blog), universal algebras (Baez, 2006), and signal flow graphs (Bonchi, Sobociński and Zanasi (2014, 2015), along with computational logics based on linear logic and graph rewriting (on this see Mellies, 2018; and Fong and Spivak, 2018, for additional 1. Applied Category Theory Category theory and topos theory have taken over large swathes in the field of formal or theoretical computation, because categories serve to link together the structures found in algebraic topology, and with the logical connectives and inferences to be found in formal logic, as well as with recursive processes and other operations in computation. The following diagram taken from Baez and Stay (2011), highlights this capability. John Bell (1988: 236) succinctly explains why it is that category theory also possesses enormous ormous powers of generalization: A category may be said to bear the same relation to abstract algebra as does the latter to elementary algebra. Elementary algebra results from the replacement of constant quantities (i.e. numbers) by variables, keeping the operations on these quantities fixed. Abstract algebra, in its turn, carries this a stage further by allowing the operations to vary while ensuring that the resulting mathematical structures (groups, rings, etc) remain of a prescribed kind. Finally, category theory allows even the kind of structure to vary: it is concerned with structure in general. Category theory can also be interpreted as a universal approach to the analysis of process, across various domains including: (a) mathematic practice (theorem proving); (b) physical systems (their evolution and measurement); (c) computing (data types and programs); (d) chemistry (chemicals and reactions); (e) finance (currencies and various transactions); (f) engineering (flows of materials and production). This way of thinking about processes now serves as a unifying interdisciplinary framework that researchers within business and the social sciences have also taken up. Alternative approaches to those predicated on optimizing behaviour on the part of individual economic agents include the work evolutionary economists and those in the business world who are obliged to work with computational systems designed for the operational management of commercial systems. However, these techniques are also grounded in conceptions of process Another way of thinking about dynamic processes is in terms of circuit diagrams, which can represent displacement, flow, momentum and effort—phenomenon modelled by the Hamiltonians and Lagrangians of Classical Mechanics. It can be appreciated that key features of economic systems are also amenable to diagrammatic representations of this kind, including asset pricing based on notion of arbitrage, a concept initially formalized by Augustin Cournot in 1838. Cournot’s analysis arbitrage conditions is grounded in Kirchoff voltage law (Ellerman, 1984). The analogs of displacement, flow, momentum and effort are depicted below for a wide range of disciplines. Applied Category Theory: in the US, contemporary developments in applied category theory (ACT) have been spurred along and supported by a raft of EU, DARPA and ONR Grants. A key resource on ACT is Fong and Spivak’s (2018) downloadable text on compositionality. This publication explores the relationship between wiring diagrams or string diagrams and a wide variety of mathematical and categorical constructs, including as a means for representing symmetric monoidal preorders, signal flow graphs, along with functorial translation between signal flow graphs and matrices and other aspects of functorial semantics, graphical linear algebra, hypergraph categories and operads, applied to electric circuits and network compositionality. Topos theory is introduced to characterise the logic of system behaviour on the basis of indexed sets, glueings, and sheaf conditions for every open cover. 2. Diagrammatic Reasoning Authors such as Sáenz-Ludlow and Kadunz (2015), Shin (1995), Sowa (2000), and Stjernfelt (2007), who have published research on knowledge representation and diagrammatic approaches to reasoning, tend to work within a philosophical trajectory that stretches from F. W. Schelling and C. S. Peirce, through to E. Husserl and A. N. Whitehead, then on to M. Merleau-Ponty and T. Adorno. Where Kant and Hegel privileged symbolic reasoning over the iconic or diagrammatic, Peirce, Whitehead, and Merleau-Ponty followed the lead of Schelling for whom ‘aesthetics trumps epistemology’! It is, in fact, this shared philosophical allegiance that not only links diagrammatic research to the semantic (or embodied) cognition movement (Stjernfeld himself refers to the embodied cognition theorists Eleanor Rosch, George Lakoff, Mark Johnson, Leonard Talmy, Mark Turner, and Gilles Fauconnier), but also to those researchers who have focused on issues of educational equity in the teaching of mathematics and computer science, including Ethnomathematics and critical work on ‘Orientalism’ specialized to emphasize a purported division between the ‘West and the Rest’ in regard to mathematical and computational thought and practice. As such, insights from this research carry over to questions of ethnic ‘marginalization’ or ‘positioning’ in the mathematical sciences (see the papers reproduced in Forgasz and Rivera, eds., 2012 and Herbel-Eisenmann et al., 2012). In a nutshell, diagrammatic reasoning is sensitive to both context and positioning and, thus, is closely allied to this critical axis of mathematics education. The following illustration of the elements and flows associated with diagrammatic forms of reasoning comes from Michael Hoffman’s (2011) explication of the concept first outlined by the American philosopher and logician, Charles Sanders Peirce. The above Figure depicts three stages in the process of diagrammatic reasoning: (i) constructing a diagram as a consistent representation of key relations; (ii) analysing a problem on the basis of this representation; and (iii) experimenting with the diagram and then observing the results. Consistency is ensured in two ways. First, the researcher or research team develop an ontology specifying elements of the problem and the relations holding between these elements, along with pertinent rules of operation. Second, language is specified in terms of both syntactical and semantic properties. Furthermore, in association with this language, a rigorous axiomatic system is specified, which both constrains and enables any pertinent diagrammatic transformations. 3a. Case-Study One: A 2010 paper by SAP Professors, Paulheim and Probst reviews an application of STs to the management and coordination of emergency services in the Darmstadt region of Germany. The aim of the following diagram, reproduced from their work, is to highlight the fact that, from a computational perspective, the integrative effort of STs can apply to different organizational levels: that of the common user interface, shared business logics and that of data sources. In their software engineering application, the upper-level ontology DOLCE is deployed to link a core domain ontology together with a user-interface interaction ontology. In turn, each of these ontologies draws on inputs from an ontology on deployment regulations and various application ontologies. Improved search capabilities across this hierarchy of computational ontologies, are achieved through the adoption of the ONTOBROKER and F-Logic systems. 3b. Case-study Two: An important contribution to the field of network modelling has come from the DARPA-funded CASCADE Project (Complex Adaptive System Composition and Design Environment), which has invested in long-term research into the “system-of-systems” perspective (see John Baez’s extended discussion of this project on his Azimuth blog). This research has been influenced by Willems’s (2007) behavioural approach to systems, which in turn, is based on the notion that large and complex systems can be built up from simple building blocks. Baez et al. (2020) introduce ‘network models’ to encode different ways of combining networks both through overlaying one model on top of another and by setting each model side by side. In this way, complex networks can be constructed using simple networks as components. Vertices in the network represent fixed or moving agents, while edges represent communication channels. The components of their networks are constructed using coloured operads, which include vertices representing entities of various types and edges representing the relationships between these entities. Each network model gives rise to a typed operad with an associated canonical algebra, whose operations represent ways of assembling a more complex network from smaller parts. The various different ways to compose these operations characterize a more general notion of an operation, which must be complemented by ways of permuting the arguments of an operation a process yielding a permutation group of inputs and outputs). In research conducted under the auspices of the CASCADE Project, Baez, Foley, Moeller, and Pollard (2020) have worked out how to combine two formalisms. First, there are Petri nets, commonoly used as an alternative to process algebras as a foralism for business process management. The vertices in a Petri net represent collections of different types of entities (species) with morphisms between them used to describe processes (transitions) that can be carried out by combining various sets of entities (conceived as resources or inputs into a transition node or process of production) together to make new sets of entities (concived as outputs or vertices are positioned after the relevant transition node). The stocks of each type of entity that is available is enumerated as a ‘marking’ specific to each type or colour together with the set of outputs that can be produced by activated the said transition. Second, there are network models, which describe processes that a given collection of agents (say, cars, boats, people, planes in a search-and-rescue operation) can carry out. However, in this kind of network, while each type of object or vertex can move around within a delineated space, they are not allowed to turn into other types of agent or object. In these networks, morphisms are functors (generalised functions) which describe everything that can be done with a specific collection of agents. The following Figure depicts this kind of operational network in an informal manner, where icons represent helicopters, boats, victims floating in the sea, and transmission towers with communication thresholds. By combining Petri nets with an underlying network model resource-using operations can be defined. For example, a helicopter may be able to drop supplies gathered from different depots and packaged into pallets, onto the deck of a sinking ship or to a remote village cut off by an earthquake or flood. The formal mechanism for combining a network model with a Petri net relies on treating different type of entities as catalysts, in the sense that the relevant species are neither increased nor decreased in number by any given transition. The derived category is symmetric monoidal and possesses a tensor product (representing processes for each catalyst that occur side-by-side), a coproduct (or disjoint union of amounts of each catalyst present), and within each subcategory of a particular catalyst, an internal tensor product describes how one process can follow another while reusing the pertinent catalysts. The following diagram taken from Baez et al. (2020), illustrates the overlaying process which enables more complex networks to be constructed from simpler components. The use of the Grothendieck construction in this research ensures that when two or more diagrams are overlayed there will be no ‘double-counting’ of edges and vertices. When components are ‘tensored’ each of the relevant blocks would be juxtaposed “side-by-side”. Each network model is characterized by a “plug-and-play” feature based on an algebraic component called an operad. The operad serves as the construct for a canonical algebra, whose operations are ways of assembling a network of the given kind from smaller parts. This canonical algebra, in turn, accommodates a set of types, a set of operations, ways to compose these operations to arrive at more general operations, and ways to permute an operation’s arguments (i.e. via a permutation group), along with a set of relevant distance constraints (e.g. pertinent communication thresholds for each type of entity) . One of Baez’s co-authors, John Foley, works for Metron, Inc., VA, a company which specializes in applying the advanced mathematics of network models to such phenomena as “search-and-rescue” operations, the detection of network incursions, and sports analytics. Their 2017 paper mentions a number of formalisms that have relevance to “search-and-rescue” applications, especially the ability to distinguish between different communication channels (different radio frequencies and capacities) and vertices (e.g. planes, boats, walkers, individuals in need of rescue etc.) and the capacity to impose distance constraints over those agents who may fall outside the reach of communication networks. In related research paper, Schultz, Spivak, Vasilakopoulou, Wisnesky (2016) argue thay dynamical systems can be gainfully thought of as ‘machines’ with inputs and outputs, carrying some sort of signal that occurs through some notion of time”. Special cases of this general approach include discrete, continuous, and hybrid dynamical systems. The authors deploy lax functors out of monoidal categories, which provide them with a language of compositionality. As with Baez and his co-authors, Schultz et al. (2016) draw on an operadic construct so as to understand systems that result from an “arbitrary interconnection of component subsystems”. They also draw on the mathematics of sheaf theory, to flexibly capture the crucial notion of time. The resulting sheaf-theoretic perspective relates continuous- and discrete-time systems together via functors (a kind of generalized ‘function of functions’, which preserves structure). Their approach can also account for synchronized continuous time, in which each moment is assigned a specific phase within the unit interval. 4. Related Developments in Software Engineering This section of the Chapter examines contemporary advances in software engineering that have implications for ‘system-of-sytems’ approaches to semantic technology. The work of the Statebox group at the University of Oxford and that of Evan Patterson, from Stanford University, who is also affiliated with researchers from the MIT company, Categorical Informatics, will be discussed to indicate where these new developments are likely to be moving in the near future. This will be supplemented by an informal overview of some recent innovations in functional programming, which have been informed by the notion of a derivative applied to an algorithmic step. These initiatives have the potential to transform software for machine-learning and the optimization of networks The Statebox team based at Oxford University have developed a language for software engineering that uses diagrammatic representations of generalized Petri nets. In this context, transitions in the net are morphisms between data-flow objects represent terminating functional programming algorithms. In Statebox (integer and semi-integer) Petri nets are constructed with both positive and negative tokens to account for contracting. Negative tokens represent borrowing while positive tokens represent lending and, likewise, the taking of short and long positions in asset markets. This allows for the representation of smart contracts, conceived as separable nets. Nets are also endowed with interfaces that allow for channelled communications through user-defined addresses. Furthermore, guarded and timed nets, with side-effects (which are mapped to standard nets using the Grothendieck construction), offer greater expressive power in regard to the conditional behaviour affecting transitions (The Statebox Team, 2018). Patterson (2017) begins his paper with a discussion of description logics (e.g. OWL, WC3), which he interprets as calculi for knowledge representation (KR). These logics, which are the actual substrates responsible for the World-Wide-Web (WWW), lie somewhere between propositional logic and first-order predicate logic possessing the capability to express the (∃,∧,T,=) fragment of first-order logic. Patterson highlights the trade-off that must be made between computational tractability and expressivity before introducing a third knowledge representation formalism that interpolates between description logic and ontology logs (see Spivak and Kent, 2012, for an the extensive description of ologs, which express key constructs from category theory, such as products and coproducts, pullbacks and pushforwards, and representations of recursive operations using diagrams labelled with concepts drawn from everyday conversation). Patterson (2017) calls this construct the relational ontology log, or relational olog, because it is based on, Rel, the category of sets and relations and, as such, draws on relational algebra, which is the (∃,∧, , T,⊥,=) fragment of first-order logic. He calls Spivak and Kent’s, 2012, version, a functional olog to avoid any confusion, because these are solely based on Set, the category of sets and functions. Relational ologs achieve their expressivity through categorical limits and colimits (products, pullbacks, pushforwards, and so forth The advantages of Patterson’s framework are that functors allow instance data to be associated with a computational ontology in a mathematically precise way, by interpreting it as a relational or graph database, Boolean matrix, or category of linear relations. Moreover, relational ologs are, by default, typed, which he suggests can mitigate the maintainability challenges posed by the open world semantics of description logic. String diagrams (often labelled Markov-Penrose diagramsby those working in the field of brain science imaging) are routinely deployed by data-scientists used to represent the structure of deep-learning convolution neural networks. However, string diagrams can also serve as a tool for representing the computational aspects of machine-learning. For example, influenced by the program idioms of machine-learning, Ghica and Muroya (2017) have developed what they choose to call a ‘Dynamic Geometry of Interaction Machine’, which can be defined as a state transition system operating whose transitions not only account for ‘token passing’ but also for ‘graph rewriting’ (where the latter can be construed as a graph-based approach to the proving of mathematical hypotheses and theories). Their proposes system is supported by diagrammatic implementation based on the proof structures of the multiplicative and exponential fragment of linear logic (MELL). In Muroya, Cheung and Ghica (2017), this logical approach is complemented by a sound call-by-value lambda calculus inspired, in turn by Peircean notions of abductive inference. The resulting bimodal programming model operates in both: (a) direct mode, with new inputs provided, new outputs obtained; and, (b) learning mode, with special inputs applied for which outputs are known; to achieve optimal tuning of parameters to ensure desired outputs approach actual outputs. The authors contend that their holistic approach is superior to that of the TensorFlow software package developed for machine-learning, which they describe as a ‘shallow embedding’ of a domain specific language (DSL) into PYTHON” rather than a ‘stand-alone’ programming language. Adopting a somewhat different approach, Cruttwell, Gallagher and MacAdam (2019) extend Plotkin’s differential programming framework, which is itself a generalization of differential neural computers, where arbitrary programs with control structures encode smooth functions also represented as programs. Within this generalized domain, the derivative can be directly applied to programs or to algorithmic steps and, furthermore, can be rendered entirely congruent with categorical approaches to Riemannian and Differential geometry such as Lawvere’s Synthetic Differential Geometry. Cruttwell and his colleagues go on to observe that, when working in a simple neural network, back-propagation takes the derivative of the error function, then uses the chain rule to push errors backwards. They point out that, for convolution neural networks, the necessary procedure is less straightforward due to the presence of looping constructs. In this context, the authors further note that attempts to work with the usual ‘if-then-else’ and ‘while’ commands can also be problematic. To overcome these problems associated with recursion, they deploy what have been called ‘join restriction tangent categories’, which express the requisite domain of definition and detect and achieve disjointness of domains, while expressing iteration using the join of disjoint domains (i.e. in technical terms, this is the trace of a coproduct in the idempotent splitting). The final mathematical construct they arrive at, is that of a differential join restriction category along with the associated join restriction functor which, they suggest, admits a coherent interpretation of differential programming. It should be stressed that each of these category-theoretic initiatives to formalize the differential of an algorithmic step will become important in future efforts to develop improved, yet diagrammatically-based forms of software for machine learning that have greater capability and efficiency than existing software suites. The fact that both differential and integral categories can be provided with a coherent string diagram formalism (Lemay, 2017) provides a link back to the earlier discussion about the role of diagrammatic reasoning in semantic technologies. It is clear that techniques of this kind could also be applied to a wide variety of network models (e.g. for the centralized and decentralized control of hybrid cyber-physical systems), where optimization routines may be required (including those for effective disaster management). 5. Conclusion In conclusion, the innovations in software engineering described above, have obvious implications for those attempting to develop new semantic technologies for the effective management of emergency services and search-and-rescue operations in the aftermath of a major disaster. Hopefully, the material surveyed in this Chapter should serve to highlight the advantages of a category-theoretic approach to the issue at hand, along with the specific benefits of adopting an approach that is grounded in the pedagogical, computational, and formal representational power of string-diagrams, especially within a networked computational environment charactrised by Big Data, parallel processing, hybridity, and some degree of decentralized control. While a Chapter of this kind cannot go into too much detail about the formalisms that have been discussed, it is to be hoped that enough pertinent references have been provided for those who would like to find out more about the mathematical detail. Of course, it is not always necessary to be a computer programmer both to understand and to effectively deploy powerful suites of purpose-built software. It is also to be hoped that diagrammatic reasoning may assist the interested reader in acquiring a deeper understanding of the requisite mathematical techniques. Author: Professor Dr. James Juniper – Conjoint Academic, University of Newcastle; PhD in Economics, University of Adelaide Chapter References Baez, John (2006). Course Notes on Universal Algebra and Diagrammatic Resoning. Date accessed 15/11/19. Available at http://math.ucr.edu/home/baez/universal/ Baez, John C. & Jacob D. Biamonte (2012). A Course on Quantum Techniques for Stochastic Mechanics. arXiv:1209.3632v1 [quant-ph] 17 Sep 2012. Baez, John C., Brandon Coya and Franciscus Rebro (2018). Props in Network Theory. Theory and Applications of Categories, 33(25): 727-783. Baez, J., J. Foley, J. Moeller, and B. Pollard (2020). Network Models. (accessed 1/7/2020) arXiv:1711.00037v3 [math.CT] 27 Mar 2020. Baez, John and Brendan Fong (2018). A Compositional Framework for Passive Linear Networks. arXiv:1504.05625v6 [math.CT] 16 Nov 2018 Baez, John C. & Aaron Lauda (2009). A Prehistory of n-Categorical Physics. Date accessed 5/02/2018. https://arxiv.org/abs/0908.2469. Baez, John C. and Blake Pollard (2017). A compositional framework for reaction networks. Reviews in Mathematical Physics, 29 (2017), 1750028. Baez, John C. and Michael Stay (2011). Physics, Topology, Logic and Computation: A Rosetta Stone. New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, Bell J. T. (1998). A Primer of Infinitesimal Analysis, Cambridge, U.K. Cambridge University Press. Biamonte, J. and V. Bergholm (2017). Quantum Tensor Networks in a Nutshell. Cornell University Archive. Date accessed 15/11/19. arXiv:1708.00006v1 [quant-ph] 31 Jul 2017. Blinn, James F. (2002). Using Tensor diagrams to Represent and solve Geometric Problems. Microsoft Research, Publications, Jan. 1. Date accessed 15/11/19. https://www.microsoft.com/en-us/research/ publication/using-tensor-diagrams-to-represent-and-solve-geometric-problems/ . Bonchi, F., P. Sobociński and F. Zanasi (2015). Full Abstraction for Signal Flow Graphs. In Principles of Programming Languages, POPL’15, 2015. Bonchi, F., P. Sobociński and F. Zanasi (2014). A Categorical Semantics of Signal Flow Graphs. CONCUR 2014, Ens de Lyon. Cichocki, Andrzej; Namgil Lee; Ivan Oseledets; Anh-Huy Phan; Qibin Zhao; and Danilo P. Mandic (2016). Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions. Foundations and Trends in Machine Learning. 9(4-5), 249-429. Cichocki, Andrzej ; Anh-Huy Phan; Qibin Zhao; Namgil Lee; Ivan Oseledets; Masashi Sugiyama; and Danilo P. Mandic (2017). Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives. Foundations and Trends in Machine Learning. 9(6), 431-673. Cruttwell, Gallagher & MacAdam (2019). Towards formulating and extending differential programming using tangent categories. Extended abstract, ACT 2019. Date accessed 15/11/19. Available at: http:// www.cs.ox.ac.uk/ACT2019/preproceedings/Jonathan%20Gallagher,%20Geoff%20Cruttwell%20and%20Ben%20MacAdam.pdf . Ehrhard T., and L. Regnier (2003). The differential lambda-calculus. Theoretical Computer Science. 309, 1–41. Ellerman, David (2000). Towards an Arbitrage Interpretation of Optimization Theory. (accessed 1/7/20), http://www.ellerman.org/Davids-Stuff/Maths/Math.htm . Feynman, R. P. (1948). “Space-time approach to nonrelativistic quantum mechanics,” Review of Modern Physics, 20, 367. Fong, Brendan and David I. Spivak (2018). Seven Sketches in Compositionality:An Invitation to Applied Category Theory. Date accessed 15/11/19. Available at http://math.mit.edu/~dspivak/teaching/sp18/ 7Sketches.pdf . Forgasz, Helen and Ferdinand Rivera (eds.) (2012). Towards Equity in Mathematics Education: Gender, Culture, and Diversity. Advances in Mathematics Education Series. Dordrecht, Heidelburg: Springer. Herbel-Eisenmann, Beth, Jeffrey Choppin, David Wagner, David Pimm (eds.) (2012). Equity in Discourse for Mathematics Education Theories, Practices, and Policies. Mathematics Education Library, Vol. 55. Dordrecht, Heidelburg: Springer. Hoffman, M. H. G. (2011). Cognitive conditions of diagrammatic reasoning. Semiotica, 186 (1/4), 189–212. Joyal, A. and R. Street (1988). Planar diagrams and tensor algebra. Unpublished manuscript. Date accessed 15/11/19. Available from Ross Street’s website: http://maths.mq.edu.au/~street/. Joyal, A. and R. Street (1991). The geometry of tensor calculus, I. Advances in Mathematics, 88, 55–112. Kissinger, Aleks and Sander Uijlen (2017). A categorical semantics for causal structure. https://arxiv.org/abs/1701.04732v3 . Lemay, Jean-Simon Pacaud (2017). Integral Categories and Calculus Categories. PhD Thesis, University of Calgary, Alberta. Melliès, Paul-André (2018). Categorical Semantics of Linear Logic. Date accessed 15/11/19. Available at: https://www.irif.fr/~mellies/mpri/mpri-ens/biblio/categorical-semantics-of-linear-logic.pdf . Milner, Robin (2009). The Space and Motion of Communicating Agents. Cambridge University Press. Moeller, Joe & Christina Vasilakopolou (2019). Monoidal Grothendieck Construction. arXiv:1809.00727v2 [math.CT] 18 Feb 2019. Muroya, Koko and Dan Ghica (2017). The Dynamic Geometry of Interaction Machine: A Call-by-need Graph Rewriter. arXiv:1703.10027v1 [cs.PL] 29 Mar 2017. Muroya, Koko; Cheung, Steven and Dan R. Ghica (2017). Abductive functional programming, a semantic approach. arXiv:1710.03984v1 [cs.PL] 11 Oct 2017. Patterson, Evan (2017). Knowledge Representation in Bicategories of Relations. ArXiv. 1706.00526v1 [cs.AI] 2 Jun 2017. Paulheim, H. and F. Probst (2010). Application integration on the user interface level: An ontology-based approach. Data and Knowledge Engineering, 69, 1103-1116. Penrose, Roger (1971). Applications of negative dimensional tensors. Combinatorial mathematics and its applications, 221244. Penrose, R.; Rindler, W. (1984). Spinors and Space-Time: Vol I, Two-Spinor Calculus and Relativistic Fields. Cambridge University Press. pp. 424-425. Sáenz-Ludlow, Adalira and Gert Kadunz (2015). Semiotics as a Tool for Learning Mathematics. Berlin: Springer. Shin, S-J. (1994) The Logical Status of Diagrams, Cambridge: Cambridge University Press. Sobociński, Pawel. Date accessed 15/11/19. Blog on Graphical Linear Algebra Blog. http://graphicallinearalgebra.net/. Sowa, John F. (2000). Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove, CA: Brooks Cole Publishing. Spivak, David I., Christina Vasilakopoulou,and Patrick Schultz (2019). Dynamical Systems and Sheaves. arXiv:1609.08086v4 [math.CT] 15 Mar 2019.Statebox Team, University of Oxford. Statebox. Date accessed 15/11/19. https://statebox.org/ . Schultz, P., D. Spivak, C. Vasilakopoulou, & R. Wisnesky (2016). Algebraic Databases. arXiv:1602.03501v2 [math.CT] 15 Nov 2016. Stjernfelt, Frederick (2007) Diagrammatology: An Investigation on the Borderlines of Phenomenology, Ontology, and Semiotics, Synthese Library, V. 336, Dordrecht, the Netherlands: Springer. Vagner, D., Spivak, D. I. & E. Lerman (2014). Algebra of Open Systems on the Operad of Wiring Digrams, Date accessed 15/11/19. arXiv:1408.1598v1[math.CT] 7 Aug 2014. Westrich, Q. (2006). Lie Algebras in Braided Monoidal Categories. Thesis, Karlstads Universitet, Karlstad, Sweden. http://www.diva-portal.org/smash/get/diva2:6050/FULLTEXT01.pdf Willems, J.C. (2007). The behavioral approach to open and interconnected systems: Modeling by tearing, zooming, and linking. Control Systems Magazine, 27(46): 99.
{"url":"https://dhunicorn.com/2023/01/","timestamp":"2024-11-09T23:13:01Z","content_type":"text/html","content_length":"212819","record_id":"<urn:uuid:037dbb91-9c49-43b9-9e2b-dc773ede723e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00086.warc.gz"}
Geometry of Isotropic Convex Bodiessearch Item Successfully Added to Cart An error was encountered while trying to add the item to the cart. Please try again. Please make all selections above before adding to cart Geometry of Isotropic Convex Bodies Hardcover ISBN: 978-1-4704-1456-6 Product Code: SURV/196 List Price: $129.00 MAA Member Price: $116.10 AMS Member Price: $103.20 eBook ISBN: 978-1-4704-1526-6 Product Code: SURV/196.E List Price: $125.00 MAA Member Price: $112.50 AMS Member Price: $100.00 Hardcover ISBN: 978-1-4704-1456-6 eBook: ISBN: 978-1-4704-1526-6 Product Code: SURV/196.B List Price: $254.00 $191.50 MAA Member Price: $228.60 $172.35 AMS Member Price: $203.20 $153.20 Click above image for expanded view Geometry of Isotropic Convex Bodies Hardcover ISBN: 978-1-4704-1456-6 Product Code: SURV/196 List Price: $129.00 MAA Member Price: $116.10 AMS Member Price: $103.20 eBook ISBN: 978-1-4704-1526-6 Product Code: SURV/196.E List Price: $125.00 MAA Member Price: $112.50 AMS Member Price: $100.00 Hardcover ISBN: 978-1-4704-1456-6 eBook ISBN: 978-1-4704-1526-6 Product Code: SURV/196.B List Price: $254.00 $191.50 MAA Member Price: $228.60 $172.35 AMS Member Price: $203.20 $153.20 • Mathematical Surveys and Monographs Volume: 196; 2014; 594 pp MSC: Primary 52; 46; 60; 28 The study of high-dimensional convex bodies from a geometric and analytic point of view, with an emphasis on the dependence of various parameters on the dimension stands at the intersection of classical convex geometry and the local theory of Banach spaces. It is also closely linked to many other fields, such as probability theory, partial differential equations, Riemannian geometry, harmonic analysis and combinatorics. It is now understood that the convexity assumption forces most of the volume of a high-dimensional convex body to be concentrated in some canonical way and the main question is whether, under some natural normalization, the answer to many fundamental questions should be independent of the dimension. The aim of this book is to introduce a number of well-known questions regarding the distribution of volume in high-dimensional convex bodies, which are exactly of this nature: among them are the slicing problem, the thin shell conjecture and the Kannan-Lovász-Simonovits conjecture. This book provides a self-contained and up to date account of the progress that has been made in the last fifteen years. Graduate students and research mathematicians interested in geometric and analytic study of convex bodies. □ Chapters □ Chapter 1. Background from asymptotic convex geometry □ Chapter 2. Isotropic log-concave measures □ Chapter 3. Hyperplane conjecture and Bourgain’s upper bound □ Chapter 4. Partial answers □ Chapter 5. $L_q$-centroid bodies and concentration of mass □ Chapter 6. Bodies with maximal isotropic constant □ Chapter 7. Logarithmic Laplace transform and the isomorphic slicing problem □ Chapter 8. Tail estimates for linear functionals □ Chapter 9. $M$ and $M*$-estimates □ Chapter 10. Approximating the covariance matrix □ Chapter 11. Random polytopes in isotropic convex bodies □ Chapter 12. Central limit problem and the thin shell conjecture □ Chapter 13. The thin shell estimate □ Chapter 14. Kannan-Lovász-Simonovits conjecture □ Chapter 15. Infimum convolution inequalities and concentration □ Chapter 16. Information theory and the hyperplane conjecture • Permission – for use of book, eBook, or Journal content • Book Details • Table of Contents • Additional Material • Requests Volume: 196; 2014; 594 pp MSC: Primary 52; 46; 60; 28 The study of high-dimensional convex bodies from a geometric and analytic point of view, with an emphasis on the dependence of various parameters on the dimension stands at the intersection of classical convex geometry and the local theory of Banach spaces. It is also closely linked to many other fields, such as probability theory, partial differential equations, Riemannian geometry, harmonic analysis and combinatorics. It is now understood that the convexity assumption forces most of the volume of a high-dimensional convex body to be concentrated in some canonical way and the main question is whether, under some natural normalization, the answer to many fundamental questions should be independent of the dimension. The aim of this book is to introduce a number of well-known questions regarding the distribution of volume in high-dimensional convex bodies, which are exactly of this nature: among them are the slicing problem, the thin shell conjecture and the Kannan-Lovász-Simonovits conjecture. This book provides a self-contained and up to date account of the progress that has been made in the last fifteen years. Graduate students and research mathematicians interested in geometric and analytic study of convex bodies. • Chapters • Chapter 1. Background from asymptotic convex geometry • Chapter 2. Isotropic log-concave measures • Chapter 3. Hyperplane conjecture and Bourgain’s upper bound • Chapter 4. Partial answers • Chapter 5. $L_q$-centroid bodies and concentration of mass • Chapter 6. Bodies with maximal isotropic constant • Chapter 7. Logarithmic Laplace transform and the isomorphic slicing problem • Chapter 8. Tail estimates for linear functionals • Chapter 9. $M$ and $M*$-estimates • Chapter 10. Approximating the covariance matrix • Chapter 11. Random polytopes in isotropic convex bodies • Chapter 12. Central limit problem and the thin shell conjecture • Chapter 13. The thin shell estimate • Chapter 14. Kannan-Lovász-Simonovits conjecture • Chapter 15. Infimum convolution inequalities and concentration • Chapter 16. Information theory and the hyperplane conjecture Permission – for use of book, eBook, or Journal content You may be interested in... Please select which format for which you are requesting permissions.
{"url":"https://bookstore.ams.org/SURV/196","timestamp":"2024-11-06T12:52:27Z","content_type":"text/html","content_length":"110498","record_id":"<urn:uuid:ebde3388-ac02-4d23-a548-dcbf25095c23>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00829.warc.gz"}
Quiz Chapter 03.05: Secant Method of Solving Nonlinear Equations Pick the most appropriate answer 1. The secant method of finding roots of nonlinear equations falls under the category of _____ methods. 2. The secant method formula for finding the square root of a real number $R$ from the equation $x^{2} - R = 0$ is 3. The next iterative value of the root of $x^{2} - 4 = 0$ using secant method, if the initial guesses are $3$ and $4$, is 4. The root of the equation $f(x) = 0$ is found by using the secant method. Given one of the initial estimates is $x_{0} = 3$ and $f(3) = 5$, and the angle the secant makes with the function $f(x)$ is $57^{\circ}$, the next estimate of the root, $x_{1}$, is 5. For finding the root of $\sin{\left( x \right)} = 0$, the following choice of initial guesses would not be appropriate. 6. When drugs are given orally to a patient, the drug concentration $c$ in the blood stream is given by a formula $c=Kte^{-at}$ where $K$ is dependent on parameters such as the dose administered while $a$ is dependent on the absorption and elimination rates of the drug. If $K=2$ and $a=0.25$, where $t$ is in seconds and $c$ is in mg/ml, the time at which the maximum concentration is reached is given by the solution of the equation
{"url":"https://nm.mathforcollege.com/quiz-chapter-03-05-secant-method-of-solving-nonlinear-equations/","timestamp":"2024-11-03T17:11:18Z","content_type":"text/html","content_length":"55649","record_id":"<urn:uuid:695b7392-e812-4f21-9ac1-128eb94bb699>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00590.warc.gz"}
Light Microscope Correlation: Focal Length & Magnification In this article, I explain the correlation between the magnification factor and the focal lenght / focal distance of a convex collecting lens. You will see both mathematical explainations and graphical demonstrations. 1. For the optical illustration of an object (=real image) – a higher focal length means a higher magnification factor. 2. For the optical enlargement of an object (=virtual image) – a brief focal length means a higher magnification. This difference depends on the placement of the object – within or outside of the focal length / focal distance of the convex collecting lens. Object outside of the focal distance – real image The corellation between magnification an focal lenght is described in the lens equation formula. But this calculation is too abstract to illustrate the topic. It is the easier way to exemplify it graphically and then to “brood” over the formula and always to calculate from new, when a variable has changed. For the optical illustration of an object, these rules apply: Ruke 1: The higher the focal lenght, the higher the magnification The pictures shows the optical illustration of an object with a convex collecting lens. Optical illustration object convex collecting lens On the left side is the object. On the other side of the lens, there is a point, where the parallel beam, the focal ray and the center ray meet. This is where the illustration of the object is projected on an surface, like the cinema screen, ot a sheet of paper. See the example: optical illustration = real image Betrachten wir jetzt einmal nur den Brennpunktstrahl, dann erschließt sich der Zusammenhang zwischen Brennweite und Vergrößerung sofort. Now, let´s just analize the focal ray, so you can see easily, the correlation between the focal distance and the magnification factor. The follwing graphic shows, the the focal ray becomes a parallel beam, after its exit from the lens. It is parallel to the optical axis and it runs in a certain distance to it. This distance is equal to the prospective height of the optical illustration / real image. Let´s see, what happens if the distance of the objetc to the lens stays the same and only the focal distance is changing: Corellation focal length and magnification We see, that the longer the the focal length is, the bigger will be the height of the real image. This proves, already, what I said at the beginning: a larger focal distance, increases the magnification factor. Rule 2: The larger the focal distance, the larger the distance of the real image form the lens Now we take a look an what happens with the real image, when the distance between object and lens stays the same and we change the focal length. focal length and magnification – real image position The result is, that the position of the real image is farther, the higher the magnification is. In a microscope, we find the other situation. The distance of lens & image has to be the same, as one can´t change the length of the lens tube. Thus, we have to change the distance between the the object and the objective lens, by moving the stage / cross table to get a clear image. Rule 3: Enlargement only within 2X of the focal distance Now we begin to change the distance between the object and the lens. This also influences the magnification and we see, that some different rules apply: The closer the object to the focal pointis, the higher is the magnification rate. focal length and magnification – object close to focal point If the object is placed 2X of the focal length, then the object and the real image will have the same size. If the objet´s distance is farther than 2X-focal lenth, then the real image becomes demagnified – is smaller than the object. focal length and magnification – object outside 2X focal length The farther the object from the lens, the smaller is the real image focal length and magnification – object far from lens Calcualtiv analysis – the “thin lens equations” formula The grafical demonstration is much easier to use and to memorize the the optical illustration rules. But now, I would also like to explain the lens equation formula and prove the rules by it. thin lens equations formula Thins lens equation formula: 1/f = 1/o + 1/i o = object distance = how far is the object from the lens b = image distance = how far is the image from the lens f = focal length = focal length of the lens To make it simpler – some variotions of the formula: o is searched: o = (f*i) / (i-f) i is searched: i = (f*o) / (o-f) f is searched: f = (i*o) / (o+i) The magnifiaction factor can be calculated with this formula: i/o (image distance / object distance) i/o =I/O (I = image size / O = object size) Calculation example: Let´s say, we place an object in a distance of 30 cm from a convex lens. We only change the lenses, which have different focal lengths. Here the results: lens equation formula – calculation example Like already shown in the graphical illustration, the calculation proves, that object size and image size are identical, when the object distance is 2X-focal length. It also confirms, that if the object distance is larger than 2X focal length, the image becomes a demagnification. Object within the focal distance – virtual image When the object is placed within the focal distance, the lens works lika a magnifying glass. The calculation for a magnifier is much easyer than for optical illustration. The formula is: magnification = 25 cm / focal length of the lens (in cm) The 25 cm is the conventional closest distance of distinct vision for the human eye. This distance is a constant in the calculatio and never changing. Therefore, one doesn´t have to be a mathematical genius to see, that the smaller the focal length is, the higher will be the magnification. Please see the graphical illustration: Rule: the shorter the focal length, the higher the magnification factor correlation – focal length and magnification – convex lens The graphic shows, that the length of the focal distance changes the angle, in which the light is refracted. The shorter the focal lenght, the steeper the angle of the refraction: correlation – focal length and magnification – light refraction
{"url":"https://light-microscope.net/en/optical-enlargement/correlation-focal-length-magnification/","timestamp":"2024-11-14T10:57:16Z","content_type":"text/html","content_length":"56531","record_id":"<urn:uuid:307b5eb6-2d77-41da-a6dd-c05a2f6241b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00495.warc.gz"}
How do you solve rational expressions by multiplying by the least common multiple? | Socratic How do you solve rational expressions by multiplying by the least common multiple? 1 Answer The least common multiple helps you to deal with expressions in which you must add or subtract fractions. There are mnemonic rules to deal with this but I like to "see" the problem... Imagine that you are out eating a pizza with some friends (3 for example). If your pizza comes divided in 4 slices it is ok...you can add 2 of them together and...presto...you have half pizza!!! But if you start to cut one of the slices in half (someone is on a diet) and make another bigger (someone is not on a diet) the thing gets complicated...If you add two slices again you may find that you have a thin slice plus a big slice and...what part of the entire pizza you get? You do not know...!!!! With fractions is the same...it is better to have equal slices to add them together!!! In case 1 you can add together immediately the numerators because the denominators are equals (slices of same size!). And you get: $\frac{1}{4} + \frac{1}{4} = \frac{1 + 1}{4} = \frac{2}{4} = \frac{1}{2}$ half pizza !!!!! In case 2 first you need to make the slices of the same "size". To do that you use a common size (the least common multiple) and you "resize" all your slices according to this size. You have: $\frac{1}{3} + \frac{1}{6}$ the least common multiple is a multiple in common between $3 \mathmr{and} 6$. You have: Multiples of $3$: $\to 3 , 6 , 9 , 12 , \ldots .$ Multiples of $6$: $\to 6 , 12 , 18 , 24 , \ldots$ The least common multiple is $6$. So now we resize all the slices to be "size" $6$: $\frac{1}{3}$ becomes of "size" $6$ if I multiply the denominator by $2$ but if I do that I have to multiply the nominator by $2$ as well (otherwise the fraction changes)! $\frac{1}{3} = \frac{2 \cdot 1}{2 \cdot 3} = \frac{2}{6}$ Yes!!! Now it is "size" $6$. Now you have: $\frac{1}{3} + \frac{1}{6} = \frac{2}{6} + \frac{1}{6}$ same denominator and you can add together the numerators: $\frac{2 + 1}{6} = \frac{3}{6} = \frac{1}{2}$ half pizza..again!!! Note that this work with subtraction as well!!! Impact of this question 6465 views around the world
{"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-solve-rational-expressions-by-multiplying-by-the-least-common-multipl","timestamp":"2024-11-13T08:24:22Z","content_type":"text/html","content_length":"37904","record_id":"<urn:uuid:106f1546-185d-49d4-b608-9e2e69ec845f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00862.warc.gz"}
Powders and granules are essential raw materials and intermediates in the manufacture of a wide range of products across different industries, including food, agrochemicals and specialty chemicals. The characteristics of these materials can have a direct influence on their processability and hence on the quality and performance of the resulting product. Classizer™ ONE answers to the demanding request of a non invasive, continuous, trustable monitoring of powder suspension in fluids. Classizer™ ONE measures the particle size distribution, classifies the constituent material, and provides insights on the the particle aspect ratio of the powders, from product development through to manufacture and QC processes. Moreover, the single particle approach prevents from the use of complicate mathematical inversion methods, and immediately enables to measure non spherical sample with high polydispersity.
{"url":"https://www.eosinstruments.com/applications/ground-powders-and-minerals/","timestamp":"2024-11-10T12:34:57Z","content_type":"text/html","content_length":"70699","record_id":"<urn:uuid:d47e76c5-393d-4f17-8ee0-8fb751ede438>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00233.warc.gz"}
Gaussian Naive Bayes, Explained: A Visual Guide with Code Examples for Beginners | by Samy Baladram | Oct, 2024 - COOLEST9JA Bernoulli NB assumes binary knowledge, Multinomial NB works with discrete counts, and Gaussian NB handles steady knowledge assuming a traditional distribution. Constructing on our earlier article about Bernoulli Naive Bayes, which handles binary knowledge, we now discover Gaussian Naive Bayes for steady knowledge. Not like the binary method, this algorithm assumes every function follows a traditional (Gaussian) distribution. Right here, we’ll see how Gaussian Naive Bayes handles steady, bell-shaped knowledge — ringing in correct predictions — all with out moving into the intricate math of Bayes’ Theorem. All visuals: Writer-created utilizing Canva Professional. Optimized for cell; could seem outsized on desktop. Like different Naive Bayes variants, Gaussian Naive Bayes makes the “naive” assumption of function independence. It assumes that the options are conditionally impartial given the category label. Nevertheless, whereas Bernoulli Naive Bayes is fitted to datasets with binary options, Gaussian Naive Bayes assumes that the options comply with a steady regular (Gaussian) distribution. Though this assumption could not at all times maintain true in actuality, it simplifies the calculations and infrequently results in surprisingly correct outcomes. Naive Bayes strategies is a probabilistic mannequin in machine studying that makes use of chance features to make predictions. All through this text, we’ll use this synthetic golf dataset (made by creator) for example. This dataset predicts whether or not an individual will play golf based mostly on climate situations. Columns: ‘RainfallAmount’ (in mm), ‘Temperature’ (in Celcius), ‘Humidity’ (in %), ‘WindSpeed’ (in km/h) and ‘Play’ (Sure/No, goal function) # IMPORTING DATASET # from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import pandas as pd import numpy as np dataset_dict = { 'Rainfall': [0.0, 2.0, 7.0, 18.0, 3.0, 3.0, 0.0, 1.0, 0.0, 25.0, 0.0, 18.0, 9.0, 5.0, 0.0, 1.0, 7.0, 0.0, 0.0, 7.0, 5.0, 3.0, 0.0, 2.0, 0.0, 8.0, 4.0, 4.0], 'Temperature': [29.4, 26.7, 28.3, 21.1, 20.0, 18.3, 17.8, 22.2, 20.6, 23.9, 23.9, 22.2, 27.2, 21.7, 27.2, 23.3, 24.4, 25.6, 27.8, 19.4, 29.4, 22.8, 31.1, 25.0, 26.1, 26.7, 18.9, 28.9], 'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0, 90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0, 65.0, 70.0, 60.0, 95.0, 70.0, 78.0], 'WindSpeed': [2.1, 21.2, 1.5, 3.3, 2.0, 17.4, 14.9, 6.9, 2.7, 1.6, 30.3, 10.9, 3.0, 7.5, 10.3, 3.0, 3.9, 21.9, 2.6, 17.3, 9.6, 1.9, 16.0, 4.6, 3.2, 8.3, 3.2, 2.2], 'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes'] df = pd.DataFrame(dataset_dict) # Set function matrix X and goal vector y X, y = df.drop(columns='Play'), df['Play'] # Break up the info into coaching and testing units X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, shuffle=False) print(pd.concat([X_train, y_train], axis=1), finish='nn') print(pd.concat([X_test, y_test], axis=1)) Gaussian Naive Bayes works with steady knowledge, assuming every function follows a Gaussian (regular) distribution. 1. Calculate the chance of every class within the coaching knowledge. 2. For every function and sophistication, estimate the imply and variance of the function values inside that class. 3. For a brand new occasion: a. For every class, calculate the chance density perform (PDF) of every function worth below the Gaussian distribution of that function inside the class. b. Multiply the category chance by the product of the PDF values for all options. 4. Predict the category with the best ensuing chance. Gaussian Naive Bayes makes use of the traditional distribution to mannequin the chance of various function values for every class. It then combines these likelihoods to make a prediction. Reworking non-Gaussian distributed knowledge Do not forget that this algorithm naively assume that every one the enter options are having Gaussian/regular distribution? Since we aren’t actually positive in regards to the distribution of our knowledge, particularly for options that clearly don’t comply with a Gaussian distribution, making use of a power transformation (like Field-Cox) earlier than utilizing Gaussian Naive Bayes will be useful. This method may help make the info extra Gaussian-like, which aligns higher with the assumptions of the All columns are scaled utilizing Energy Transformation (Field-Cox Transformation) after which standardized. from sklearn.preprocessing import PowerTransformer # Initialize and match the PowerTransformer pt = PowerTransformer(standardize=True) # Customary Scaling already included X_train_transformed = pt.fit_transform(X_train) X_test_transformed = pt.remodel(X_test) Now we’re prepared for the coaching. 1. Class Likelihood Calculation: For every class, calculate its chance: (Variety of situations on this class) / (Complete variety of situations) from fractions import Fraction def calc_target_prob(attr): total_counts = attr.value_counts().sum() prob_series = attr.value_counts().apply(lambda x: Fraction(x, total_counts).limit_denominator()) return prob_series 2. Function Likelihood Calculation : For every function and every class, calculate the imply (μ) and commonplace deviation (σ) of the function values inside that class utilizing the coaching knowledge. Then, calculate the chance utilizing Gaussian Likelihood Density Operate (PDF) system. For every climate situation, decide the imply and commonplace deviation for each “YES” and “NO” situations. Then calculate their PDF utilizing the PDF system for regular/Gaussian distribution. The identical course of is utilized to all the different options. def calculate_class_probabilities(X_train_transformed, y_train, feature_names): lessons = y_train.distinctive() equations = pd.DataFrame(index=lessons, columns=feature_names) for cls in lessons: X_class = X_train_transformed[y_train == cls] imply = X_class.imply(axis=0) std = X_class.std(axis=0) k1 = 1 / (std * np.sqrt(2 * np.pi)) k2 = 2 * (std ** 2) for i, column in enumerate(feature_names): equation = f"{k1[i]:.3f}·exp(-(x-({imply[i]:.2f}))²/{k2[i]:.3f})" equations.loc[cls, column] = equation return equations # Use the perform with the remodeled coaching knowledge equation_table = calculate_class_probabilities(X_train_transformed, y_train, X.columns) # Show the equation desk 3. Smoothing: Gaussian Naive Bayes makes use of a novel smoothing method. Not like Laplace smoothing in other variants, it provides a tiny worth (0.000000001 instances the most important variance) to all variances. This prevents numerical instability from division by zero or very small numbers. Given a brand new occasion with steady options: 1. Likelihood Assortment: For every potential class: · Begin with the chance of this class occurring (class chance). · For every function within the new occasion, calculate the chance density perform of that function inside the class. For ID 14, we calculate the PDF every of the function for each “YES” and “NO” situations. 2. Rating Calculation & Prediction: For every class: · Multiply all of the collected PDF values collectively. · The result’s the rating for this class. · The category with the best rating is the prediction. from scipy.stats import norm def calculate_class_probability_products(X_train_transformed, y_train, X_new, feature_names, target_name): lessons = y_train.distinctive() n_features = X_train_transformed.form[1] # Create column names utilizing precise function names column_names = [target_name] + listing(feature_names) + ['Product'] probability_products = pd.DataFrame(index=lessons, columns=column_names) for cls in lessons: X_class = X_train_transformed[y_train == cls] imply = X_class.imply(axis=0) std = X_class.std(axis=0) prior_prob = np.imply(y_train == cls) probability_products.loc[cls, target_name] = prior_prob feature_probs = [] for i, function in enumerate(feature_names): prob = norm.pdf(X_new[0, i], imply[i], std[i]) probability_products.loc[cls, feature] = prob product = prior_prob * np.prod(feature_probs) probability_products.loc[cls, 'Product'] = product return probability_products # Assuming X_new is your new pattern reshaped to (1, n_features) X_new = np.array([-1.28, 1.115, 0.84, 0.68]).reshape(1, -1) # Calculate chance merchandise prob_products = calculate_class_probability_products(X_train_transformed, y_train, X_new, X.columns, y.identify) # Show the chance product desk For this specific dataset, this accuracy is taken into account fairly good. from sklearn.naive_bayes import GaussianNB from sklearn.metrics import accuracy_score # Initialize and prepare the Gaussian Naive Bayes mannequin gnb = GaussianNB() gnb.match(X_train_transformed, y_train) # Make predictions on the take a look at set y_pred = gnb.predict(X_test_transformed) # Calculate the accuracy accuracy = accuracy_score(y_test, y_pred) # Print the accuracy print(f"Accuracy: {accuracy:.4f}") GaussianNB is thought for its simplicity and effectiveness. The primary factor to recollect about its parameters is: 1. priors: That is essentially the most notable parameter, similar to Bernoulli Naive Bayes. Usually, you don’t have to set it manually. By default, it’s calculated out of your coaching knowledge, which regularly works nicely. 2. var_smoothing: It is a stability parameter that you simply not often want to regulate. (the default is 0.000000001) The important thing takeaway is that this algoritm is designed to work nicely out-of-the-box. In most conditions, you need to use it with out worrying about parameter tuning. 1. Simplicity: Maintains the easy-to-implement and perceive trait. 2. Effectivity: Stays swift in coaching and prediction, making it appropriate for large-scale functions with steady options. 3. Flexibility with Information: Handles each small and huge datasets nicely, adapting to the size of the issue at hand. 4. Steady Function Dealing with: Thrives with steady and real-valued options, making it ideally suited for duties like predicting real-valued outputs or working with knowledge the place options differ on a continuum. 1. Independence Assumption: Nonetheless assumes that options are conditionally impartial given the category, which could not maintain in all real-world eventualities. 2. Gaussian Distribution Assumption: Works greatest when function values really comply with a traditional distribution. Non-normal distributions could result in suboptimal efficiency (however will be fastened with Energy Transformation we’ve mentioned) 3. Sensitivity to Outliers: Might be considerably affected by outliers within the coaching knowledge, as they skew the imply and variance calculations. Gaussian Naive Bayes stands as an environment friendly classifier for a variety of functions involving steady knowledge. Its capacity to deal with real-valued options extends its use past binary classification duties, making it a go-to selection for quite a few functions. Whereas it makes some assumptions about knowledge (function independence and regular distribution), when these situations are met, it provides sturdy efficiency, making it a favourite amongst each newcomers and seasoned knowledge scientists for its stability of simplicity and energy. import pandas as pd from sklearn.naive_bayes import GaussianNB from sklearn.preprocessing import PowerTransformer from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split # Load the dataset dataset_dict = { 'Rainfall': [0.0, 2.0, 7.0, 18.0, 3.0, 3.0, 0.0, 1.0, 0.0, 25.0, 0.0, 18.0, 9.0, 5.0, 0.0, 1.0, 7.0, 0.0, 0.0, 7.0, 5.0, 3.0, 0.0, 2.0, 0.0, 8.0, 4.0, 4.0], 'Temperature': [29.4, 26.7, 28.3, 21.1, 20.0, 18.3, 17.8, 22.2, 20.6, 23.9, 23.9, 22.2, 27.2, 21.7, 27.2, 23.3, 24.4, 25.6, 27.8, 19.4, 29.4, 22.8, 31.1, 25.0, 26.1, 26.7, 18.9, 28.9], 'Humidity': [85.0, 90.0, 78.0, 96.0, 80.0, 70.0, 65.0, 95.0, 70.0, 80.0, 70.0, 90.0, 75.0, 80.0, 88.0, 92.0, 85.0, 75.0, 92.0, 90.0, 85.0, 88.0, 65.0, 70.0, 60.0, 95.0, 70.0, 78.0], 'WindSpeed': [2.1, 21.2, 1.5, 3.3, 2.0, 17.4, 14.9, 6.9, 2.7, 1.6, 30.3, 10.9, 3.0, 7.5, 10.3, 3.0, 3.9, 21.9, 2.6, 17.3, 9.6, 1.9, 16.0, 4.6, 3.2, 8.3, 3.2, 2.2], 'Play': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'No', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes'] df = pd.DataFrame(dataset_dict) # Put together knowledge for mannequin X, y = df.drop('Play', axis=1), (df['Play'] == 'Sure').astype(int) # Break up knowledge into coaching and testing units X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, shuffle=False) # Apply PowerTransformer pt = PowerTransformer(standardize=True) X_train_transformed = pt.fit_transform(X_train) X_test_transformed = pt.remodel(X_test) # Prepare the mannequin nb_clf = GaussianNB() nb_clf.match(X_train_transformed, y_train) # Make predictions y_pred = nb_clf.predict(X_test_transformed) # Verify accuracy accuracy = accuracy_score(y_test, y_pred) print(f"Accuracy: {accuracy:.4f}")
{"url":"https://coolest9ja.com/gaussian-naive-bayes-explained-a-visual-guide-with-code-examples-for-beginners-by-samy-baladram-oct-2024/","timestamp":"2024-11-04T01:55:56Z","content_type":"text/html","content_length":"159399","record_id":"<urn:uuid:907283b8-d8c0-4819-9678-ef2f8441beca>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00026.warc.gz"}
Mini-batch jrSiCKLSNMF Tutorial Set up For this walkthrough, we will be working with simulated data object SimData. These data were generated from GSE130399 using the packages SparSim and simATAC. The details for the simulations can be found in our paper. SimData has already gone through quality control (QC); however, when working with real data, you should QC your data and select features that appear in at least 10 cells for both modalities. After loading jrSiCKLSNMF into R, you should have access to SimData. Here, we set up our data as in the getting started and getting started L2 Norm vignettes. Please refer to these for details on setup. #> Warning in rm(DataMatrices, SimData): object 'SimData' not found #> Warning in (function (to_check, X, clust_centers, clust_info, dtype, nn, : #> detected tied distances to neighbors, see ?'BiocNeighbors-ties' #> Warning in (function (to_check, X, clust_centers, clust_info, dtype, nn, : #> detected tied distances to neighbors, see ?'BiocNeighbors-ties' Next, we will determine the number of latent factors by using IRLBA. By looking at all three plots generated, we see that 5 appears to correspond to a value close to the elbow for all modalities and for the concatenated modality. #> Calculating IRLBA for all data matrices #> Preparing data for plotting #> Generating Plots Running mini-batch jrSiCKLSNMF Finally, we can run mini-batch jrSicKLSNMF. Please note that we store \(\textbf{H}\) as \(\textbf{H}^T\). Note that because the mini-batch algorithm is stochastic, it has a higher convergence tolerance. Therefore, please specify the number of rounds in the “minrounds” variable. SimSickleJr<-RunjrSiCKLSNMF(SimSickleJr,rounds=200,differr=1e-6,minibatch=TRUE,random_W_updates=TRUE,batchsize=100,seed=8,minrounds = 200) #> Algorithm not converged. Maximum number of rounds reached. #> Final update is: 0.0620298%. #> Time difference of 17.68532 secs Post-hoc Analyses After this, we can perform diagnostics to determine an appropriate number of cell clusters. We see that around 4 is an appropriate number of clusters. Finally, we can calculate and then plot the UMAP of our SickleJr. Note that if you would like to plot the UMAP of the compressed \(\mathbf{W}^v\mathbf{H}\) matrix, please enter the number corresponding to the modality you wish to see. #Plotting based off of cluster SimSickleJr<-PlotSickleJrUMAP(SimSickleJr,title="K-means clusters") After looking at the UMAP plots, we can see that perhaps 3 is a more appropriate number of clusters. We re-plot our results with 3 clusters. Additionally, for the plots, you can either color based off of identified cluster or based off of metadata. #Plotting based off of true cell type metadata SimSickleJr<-PlotSickleJrUMAP(SimSickleJr,colorbymetadata="true_cell_type",title="True Cell Types",legendname="True Cell Types") We can also visualize data in the RNA modality and the ATAC modality. This is not recommended for large datasets. SimSickleJr<-PlotSickleJrUMAP(SimSickleJr,title="K-means clusters: RNA modality", title="True Cell Type: RNA modality",legendname="True Cell Types", SimSickleJr<-PlotSickleJrUMAP(SimSickleJr,title="K-means clusters: ATAC modality",umap.modality="W2H")
{"url":"https://cran.itam.mx/web/packages/jrSiCKLSNMF/vignettes/Minibatch_jrSiCKLKSNMF.html","timestamp":"2024-11-10T12:45:44Z","content_type":"text/html","content_length":"99734","record_id":"<urn:uuid:af11de83-bc08-411f-abb3-272f9cb51eed>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00841.warc.gz"}
Adjoint representation Jump to navigation Jump to search In mathematics, the adjoint representation (or adjoint action) of a Lie group G is a way of representing the elements of the group as linear transformations of the group's Lie algebra, considered as a vector space. For example, if G is GL(n) (the Lie group of n-by-n invertible matrices), its Lie algebra is the vector space of all (not necessarily invertible) n-by-n matrices. So in this example, the adjoint representation is the vector space of n-by-n matrices ${\displaystyle x}$, and any element g in GL(n) acts as a linear transformation of this vector space given by conjugation: ${\ displaystyle x\mapsto gxg^{-1}}$. For any Lie group, this natural representation is obtained by linearizing (i.e. taking the differential of) the action of G on itself by conjugation. The adjoint representation can be defined for linear algebraic groups over arbitrary fields. Let G be a Lie group, and let ${\displaystyle \Psi :G\to \operatorname {Aut} (G)}$ be the mapping g ↦ Ψ[g], with Aut(G) the automorphism group of G and Ψ[g]: G → G given by the inner automorphism (conjugation) ${\displaystyle \Psi _{g}(h)=ghg^{-1}~.}$ This Ψ is a Lie group homomorphism. For each g in G, define Ad[g] to be the derivative of Ψ[g] at the origin: ${\displaystyle \operatorname {Ad} _{g}=(d\Psi _{g})_{e}:T_{e}G\rightarrow T_{e}G}$ where d is the differential and ${\displaystyle {\mathfrak {g}}=T_{e}G}$ is the tangent space at the origin e (e being the identity element of the group G). Since ${\displaystyle \Psi _{g}}$ is a Lie group automorphism, Ad[g] is a Lie algebra automorphism; i.e., an invertible linear transformation of ${\displaystyle {\mathfrak {g}}}$ to itself that preserves the Lie bracket. Moreover, since ${\ displaystyle g\mapsto \Psi _{g}}$ is a group homomorphism, ${\displaystyle g\mapsto \operatorname {Ad} _{g}}$ too is a group homomorphism.^[1] Hence, the map ${\displaystyle \mathrm {Ad} \colon G\to \mathrm {Aut} ({\mathfrak {g}}),\,g\mapsto \mathrm {Ad} _{g}}$ is a group representation called the adjoint representation of G. If G is an immersed Lie subgroup of the general linear group ${\displaystyle \mathrm {GL} _{n}(\mathbb {C} )}$ (called immersely linear Lie group), then the Lie algebra ${\displaystyle {\mathfrak {g}}}$ consists of matrices and the exponential map is the matrix exponential ${\displaystyle \operatorname {exp} (X)=e^{X}}$ for matrices X with small operator norms. Thus, for g in G and small X in ${\displaystyle {\mathfrak {g}}}$, taking the derivative of ${\displaystyle \Psi _{g}(\operatorname {exp} (tX))=ge^{tX}g^{-1}}$ at t = 0, one gets: ${\displaystyle \operatorname {Ad} _{g}(X)=gXg^{-1}}$ where on the right we have the products of matrices. If ${\displaystyle G\subset \mathrm {GL} _{n}(\mathbb {C} )}$ is a closed subgroup (that is, G is a matrix Lie group), then this formula is valid for all g in G and all X in ${\displaystyle {\mathfrak {g}}}$. Succinctly, an adjoint representation is an isotropy representation associated to the conjugation action of G around the identity element of G. Derivative of Ad[edit] One may always pass from a representation of a Lie group G to a representation of its Lie algebra by taking the derivative at the identity. Taking the derivative of the adjoint map ${\displaystyle \mathrm {Ad} :G\to \mathrm {Aut} ({\mathfrak {g}})}$ at the identity element gives the adjoint representation of the Lie algebra ${\displaystyle {\mathfrak {g}}=\operatorname {Lie} (G)}$ of G: {\displaystyle {\begin{aligned}\mathrm {ad} :&\,{\mathfrak {g}}\to \mathrm {Der} ({\mathfrak {g}})\\&\,x\mapsto \operatorname {ad} _{x}=d(\operatorname {Ad} )_{e}(x)\end{aligned}}} where ${\displaystyle \mathrm {Der} ({\mathfrak {g}})=\operatorname {Lie} (\operatorname {Aut} ({\mathfrak {g}}))}$ is the Lie algebra of ${\displaystyle \mathrm {Aut} ({\mathfrak {g}})}$ which may be identified with the derivation algebra of ${\displaystyle {\mathfrak {g}}}$. One can show that ${\displaystyle \mathrm {ad} _{x}(y)=[x,y]\,}$ for all ${\displaystyle x,y\in {\mathfrak {g}}}$, where the right hand side is given (induced) by the Lie bracket of vector fields. Indeed,^[2] recall that, viewing ${\displaystyle {\mathfrak {g}}}$ as the Lie algebra of left-invariant vector fields on G, the bracket on ${\displaystyle {\mathfrak {g}}}$ is given as:^[3] for left-invariant vector fields X, Y, ${\displaystyle [X,Y]=\lim _{t\to 0}{1 \over t}(d\varphi _{-t}(Y)-Y)}$ where ${\displaystyle \varphi _{t}:G\to G}$ denotes the flow generated by X. As it turns out, ${\displaystyle \varphi _{t}(g)=g\varphi _{t}(e)}$, roughly because both sides satisfy the same ODE defining the flow. That is, ${\displaystyle \varphi _{t}=R_{\varphi _{t}(e)}}$ where ${\displaystyle R_{h}}$ denotes the right multiplication by ${\displaystyle h\in G}$. On the other hand, since ${\ displaystyle \Psi _{g}=R_{g^{-1}}\circ L_{g}}$, by chain rule, ${\displaystyle \operatorname {Ad} _{g}(Y)=d(R_{g^{-1}}\circ L_{g})(Y)=dR_{g^{-1}}(dL_{g}(Y))=dR_{g^{-1}}(Y)}$ as Y is left-invariant. Hence, ${\displaystyle [X,Y]=\lim _{t\to 0}{1 \over t}(\operatorname {Ad} _{\varphi _{t}(e)}(Y)-Y)}$, which is what was needed to show. Thus, ${\displaystyle \mathrm {ad} _{x}}$ coincides with the same one defined in #Adjoint representation of a Lie algebra below. Ad and ad are related through the exponential map: Specifically, Ad [exp(x)] = exp(ad[x]) for all x in the Lie algebra.^[4] It is a consequence of the general result relating Lie group and Lie algebra homomorphisms via the exponential map.^[5] If G is an immersely linear Lie group, then the above computation simplifies: indeed, as noted early, ${\displaystyle \operatorname {Ad} _{g}(Y)=gYg^{-1}}$ and thus with ${\displaystyle g=e^{tX}}$, ${\displaystyle \operatorname {Ad} _{e^{tX}}(Y)=e^{tX}Ye^{-tX}}$. Taking the derivative of this at ${\displaystyle t=0}$, we have: ${\displaystyle \operatorname {ad} _{X}Y=XY-YX}$. The general case can also be deduced from the linear case: indeed, let ${\displaystyle G'}$ be an immersely linear Lie group having the same Lie algebra as that of G. Then the derivative of Ad at the identity element for G and that for G' coincide; hence, without loss of generality, G can be assumed to be G'. The upper-case/lower-case notation is used extensively in the literature. Thus, for example, a vector x in the algebra ${\displaystyle {\mathfrak {g}}}$ generates a vector field X in the group G. Similarly, the adjoint map ad[x]y = [x,y] of vectors in ${\displaystyle {\mathfrak {g}}}$ is homomorphic to the Lie derivative L[X]Y = [X,Y] of vector fields on the group G considered as a manifold. Further see the derivative of the exponential map. Adjoint representation of a Lie algebra[edit] Let ${\displaystyle {\mathfrak {g}}}$ be a Lie algebra over some field. Given an element x of a Lie algebra ${\displaystyle {\mathfrak {g}}}$, one defines the adjoint action of x on ${\displaystyle {\mathfrak {g}}}$ as the map ${\displaystyle \operatorname {ad} _{x}:{\mathfrak {g}}\to {\mathfrak {g}}\qquad {\text{with}}\qquad \operatorname {ad} _{x}(y)=[x,y]}$ for all y in ${\displaystyle {\mathfrak {g}}}$. It is called the adjoint endomorphism or adjoint action. Since a bracket is bilinear, this determines the linear mapping ${\displaystyle \operatorname {ad} :{\mathfrak {g}}\to \operatorname {End} ({\mathfrak {g}})}$ given by x ↦ ad[x]. Within End${\displaystyle ({\mathfrak {g}})}$, the bracket is, by definition, given by the commutator of the two operators: ${\displaystyle [T,S]=T\circ S-S\circ T}$ where ${\displaystyle \circ }$ denotes composition of linear maps. Using the above definition of the bracket, the Jacobi identity ${\displaystyle [x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0}$ takes the form ${\displaystyle \left([\operatorname {ad} _{x},\operatorname {ad} _{y}]\right)(z)=\left(\operatorname {ad} _{[x,y]}\right)(z)}$ where x, y, and z are arbitrary elements of ${\displaystyle {\mathfrak {g}}}$. This last identity says that ad is a Lie algebra homomorphism; i.e., a linear mapping that takes brackets to brackets. Hence, ad is a representation of a Lie algebra and is called the adjoint representation of the algebra ${\displaystyle {\mathfrak {g}}}$. If ${\displaystyle {\mathfrak {g}}}$ is finite-dimensional, then End${\displaystyle ({\mathfrak {g}})}$ is isomorphic to ${\displaystyle {\mathfrak {gl}}({\mathfrak {g}})}$, the Lie algebra of the general linear group of the vector space ${\displaystyle {\mathfrak {g}}}$ and if a basis for it is chosen, the composition corresponds to matrix multiplication. In a more module-theoretic language, the construction says that ${\displaystyle {\mathfrak {g}}}$ is a module over itself. The kernel of ad is the center of ${\displaystyle {\mathfrak {g}}}$ (rephrasing of the definition). On the other hand, for each element z in ${\displaystyle {\mathfrak {g}}}$, ad[z] obeys the Leibniz' law: ${\displaystyle \delta ([x,y])=[\delta (x),y]+[x,\delta (y)]}$ for all x and y in the algebra (the restatement of the Jacobi identity). That is to say, ad[z] is a derivation and the image of ${\displaystyle {\mathfrak {g}}}$ under ad is a subalgebra of Der${\ displaystyle ({\mathfrak {g}})}$, the space of all derivations of ${\displaystyle {\mathfrak {g}}}$. When ${\displaystyle {\mathfrak {g}}=\operatorname {Lie} (G)}$ is the Lie algebra of a Lie group G, ad is the differential of Ad at the identity element of G (see #Derivative of Ad above). Structure constants[edit] The explicit matrix elements of the adjoint representation are given by the structure constants of the algebra. That is, let {e^i} be a set of basis vectors for the algebra, with ${\displaystyle [e^{i},e^{j}]=\sum _{k}{c^{ij}}_{k}e^{k}.}$ Then the matrix elements for ad[e^i] are given by ${\displaystyle {\left[\operatorname {ad} _{e^{i}}\right]_{k}}^{j}={c^{ij}}_{k}~.}$ Thus, for example, the adjoint representation of su(2) is the defining rep of so(3). • If G is abelian of dimension n, the adjoint representation of G is the trivial n-dimensional representation. • If G is a matrix Lie group (i.e. a closed subgroup of GL(n,ℂ)), then its Lie algebra is an algebra of n×n matrices with the commutator for a Lie bracket (i.e. a subalgebra of ${\displaystyle {\ mathfrak {gl}}_{n}(\mathbb {C} )}$). In this case, the adjoint map is given by Ad[g](x) = gxg^−1. • If G is SL(2, R) (real 2×2 matrices with determinant 1), the Lie algebra of G consists of real 2×2 matrices with trace 0. The representation is equivalent to that given by the action of G by linear substitution on the space of binary (i.e., 2 variable) quadratic forms. The following table summarizes the properties of the various maps mentioned in the definition │ ${\displaystyle \Psi \colon G\to \mathrm {Aut} (G)\,}$ │ ${\displaystyle \Psi _{g}\colon G\to G\,}$ │ │ Lie group homomorphism: │ Lie group automorphism: │ │ │ │ │ • ${\displaystyle \Psi _{gh}=\Psi _{g}\Psi _{h}}$ │ • ${\displaystyle \Psi _{g}(ab)=\Psi _{g}(a)\Psi _{g}(b)}$ │ │ │ • ${\displaystyle (\Psi _{g})^{-1}=\Psi _{g^{-1}}}$ │ │ ${\displaystyle \mathrm {Ad} \colon G\to \mathrm {Aut} ({\mathfrak {g}})}$ │ ${\displaystyle \mathrm {Ad} _{g}\colon {\mathfrak {g}}\to {\mathfrak {g}}}$ │ │ Lie group homomorphism: │ Lie algebra automorphism: │ │ │ │ │ • ${\displaystyle \mathrm {Ad} _{gh}=\mathrm {Ad} _{g}\mathrm {Ad} _{h}}$ │ • ${\displaystyle \mathrm {Ad} _{g}}$ is linear │ │ │ • ${\displaystyle (\mathrm {Ad} _{g})^{-1}=\mathrm {Ad} _{g^{-1}}}$ │ │ │ • ${\displaystyle \mathrm {Ad} _{g}[x,y]=[\mathrm {Ad} _{g}x,\mathrm {Ad} _{g}y]}$ │ │ ${\displaystyle \mathrm {ad} \colon {\mathfrak {g}}\to \mathrm {Der} ({\mathfrak {g}})}$ │ ${\displaystyle \mathrm {ad} _{x}\colon {\mathfrak {g}}\to {\mathfrak {g}}}$ │ │ Lie algebra homomorphism: │ Lie algebra derivation: │ │ │ │ │ • ${\displaystyle \mathrm {ad} }$ is linear │ • ${\displaystyle \mathrm {ad} _{x}}$ is linear │ │ • ${\displaystyle \mathrm {ad} _{[x,y]}=[\mathrm {ad} _{x},\mathrm {ad} _{y}]}$ │ • ${\displaystyle \mathrm {ad} _{x}[y,z]=[\mathrm {ad} _{x}y,z]+[y,\mathrm {ad} _{x}z]}$ │ The image of G under the adjoint representation is denoted by Ad(G). If G is connected, the kernel of the adjoint representation coincides with the kernel of Ψ which is just the center of G. Therefore the adjoint representation of a connected Lie group G is faithful if and only if G is centerless. More generally, if G is not connected, then the kernel of the adjoint map is the centralizer of the identity component G[0] of G. By the first isomorphism theorem we have ${\displaystyle \mathrm {Ad} (G)\cong G/Z_{G}(G_{0}).}$ Given a finite-dimensional real Lie algebra ${\displaystyle {\mathfrak {g}}}$, by Lie's third theorem, there is a connected Lie group ${\displaystyle \operatorname {Int} ({\mathfrak {g}})}$ whose Lie algebra is the image of the adjoint representation of ${\displaystyle {\mathfrak {g}}}$ (i.e., ${\displaystyle \operatorname {Lie} (\operatorname {Int} ({\mathfrak {g}}))=\operatorname {ad} ({\ mathfrak {g}})}$.) It is called the adjoint group of ${\displaystyle {\mathfrak {g}}}$. Now, if ${\displaystyle {\mathfrak {g}}}$ is the Lie algebra of a connected Lie group G, then ${\displaystyle \operatorname {Int} ({\mathfrak {g}})}$ is the image of the adjoint representation of G: ${\displaystyle \operatorname {Int} ({\mathfrak {g}})=\operatorname {Ad} (G)}$. Roots of a semisimple Lie group[edit] If G is semisimple, the non-zero weights of the adjoint representation form a root system.^[6] (In general, one needs to pass to the complexification of the Lie algebra before proceeding.) To see how this works, consider the case G = SL(n, R). We can take the group of diagonal matrices diag(t[1], ..., t[n]) as our maximal torus T. Conjugation by an element of T sends ${\displaystyle {\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\\\end{bmatrix}}\mapsto {\begin{bmatrix}a_ {11}&t_{1}t_{2}^{-1}a_{12}&\cdots &t_{1}t_{n}^{-1}a_{1n}\\t_{2}t_{1}^{-1}a_{21}&a_{22}&\cdots &t_{2}t_{n}^{-1}a_{2n}\\\vdots &\vdots &\ddots &\vdots \\t_{n}t_{1}^{-1}a_{n1}&t_{n}t_{2}^{-1}a_{n2}& \cdots &a_{nn}\\\end{bmatrix}}.}$ Thus, T acts trivially on the diagonal part of the Lie algebra of G and with eigenvectors t[i]t[j]^−1 on the various off-diagonal entries. The roots of G are the weights diag(t[1], ..., t[n]) → t[i]t [j]^−1. This accounts for the standard description of the root system of G = SL[n](R) as the set of vectors of the form e[i]−e[j]. Example SL(2, R)[edit] Let us compute the root system for one of the simplest cases of Lie Groups. Let us consider the group SL(2, R) of two dimensional matrices with determinant 1. This consists of the set of matrices of the form: ${\displaystyle {\begin{bmatrix}a&b\\c&d\\\end{bmatrix}}}$ with a, b, c, d real and ad − bc = 1. A maximal compact connected abelian Lie subgroup, or maximal torus T, is given by the subset of all matrices of the form ${\displaystyle {\begin{bmatrix}t_{1}&0\\0&t_{2}\\\end{bmatrix}}={\begin{bmatrix}t_{1}&0\\0&1/t_{1}\\\end{bmatrix}}={\begin{bmatrix}\exp(\theta )&0\\0&\exp(-\theta )\\\end{bmatrix}}}$ with ${\displaystyle t_{1}t_{2}=1}$. The Lie algebra of the maximal torus is the Cartan subalgebra consisting of the matrices ${\displaystyle {\begin{bmatrix}\theta &0\\0&-\theta \\\end{bmatrix}}=\theta {\begin{bmatrix}1&0\\0&0\\\end{bmatrix}}-\theta {\begin{bmatrix}0&0\\0&1\\\end{bmatrix}}=\theta (e_{1}-e_{2}).}$ If we conjugate an element of SL(2, R) by an element of the maximal torus we obtain ${\displaystyle {\begin{bmatrix}t_{1}&0\\0&1/t_{1}\\\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\\\end{bmatrix}}{\begin{bmatrix}1/t_{1}&0\\0&t_{1}\\\end{bmatrix}}={\begin{bmatrix}at_{1}&bt_{1}\\c/t_{1} The matrices ${\displaystyle {\begin{bmatrix}1&0\\0&0\\\end{bmatrix}}{\begin{bmatrix}0&0\\0&1\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}0&0\\1&0\\\end{bmatrix}}}$ are then 'eigenvectors' of the conjugation operation with eigenvalues ${\displaystyle 1,1,t_{1}^{2},t_{1}^{-2}}$. The function Λ which gives ${\displaystyle t_{1}^{2}}$ is a multiplicative character, or homomorphism from the group's torus to the underlying field R. The function λ giving θ is a weight of the Lie Algebra with weight space given by the span of the matrices. It is satisfying to show the multiplicativity of the character and the linearity of the weight. It can further be proved that the differential of Λ can be used to create a weight. It is also educational to consider the case of SL(3, R). Variants and analogues[edit] The adjoint representation can also be defined for algebraic groups over any field. The co-adjoint representation is the contragredient representation of the adjoint representation. Alexandre Kirillov observed that the orbit of any vector in a co-adjoint representation is a symplectic manifold. According to the philosophy in representation theory known as the orbit method (see also the Kirillov character formula), the irreducible representations of a Lie group G should be indexed in some way by its co-adjoint orbits. This relationship is closest in the case of nilpotent Lie groups. • Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. • Kobayashi, Shoshichi; Nomizu, Katsumi (1996). Foundations of Differential Geometry, Vol. 1 (New ed.). Wiley-Interscience. ISBN 978-0-471-15733-5. • Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, 222 (2nd ed.), Springer, ISBN 978-3319134666.
{"url":"https://static.hlt.bme.hu/semantics/external/pages/endomorfizmus/en.wikipedia.org/wiki/Adjoint_endomorphism.html","timestamp":"2024-11-09T09:49:02Z","content_type":"text/html","content_length":"245743","record_id":"<urn:uuid:dacae223-95fc-48d4-8e43-43b4fd502388>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00410.warc.gz"}
strtof(3) [v7 man page] STRTOD(3) Linux Programmer's Manual STRTOD(3) strtod, strtof, strtold - convert ASCII string to floating-point number #include <stdlib.h> double strtod(const char *nptr, char **endptr); float strtof(const char *nptr, char **endptr); long double strtold(const char *nptr, char **endptr); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): strtof(), strtold(): _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L The strtod(), strtof(), and strtold() functions convert the initial portion of the string pointed to by nptr to double, float, and long double representation, respectively. The expected form of the (initial portion of the) string is optional leading white space as recognized by isspace(3), an optional plus ('+') or minus sign ('-') and then either (i) a decimal number, or (ii) a hexadecimal number, or (iii) an infinity, or (iv) a NAN (not-a- A decimal number consists of a nonempty sequence of decimal digits possibly containing a radix character (decimal point, locale-dependent, usually '.'), optionally followed by a decimal exponent. A decimal exponent consists of an 'E' or 'e', followed by an optional plus or minus sign, followed by a nonempty sequence of decimal digits, and indicates multiplication by a power of 10. A hexadecimal number consists of a "0x" or "0X" followed by a nonempty sequence of hexadecimal digits possibly containing a radix charac- ter, optionally followed by a binary exponent. A binary exponent consists of a 'P' or 'p', followed by an optional plus or minus sign, followed by a nonempty sequence of decimal digits, and indicates multiplication by a power of 2. At least one of radix character and binary exponent must be present. An infinity is either "INF" or "INFINITY", disregarding case. A NAN is "NAN" (disregarding case) optionally followed by a string, (n-char-sequence), where n-char-sequence specifies in an implementa- tion-dependent way the type of NAN (see NOTES). These functions return the converted value, if any. If endptr is not NULL, a pointer to the character after the last character used in the conversion is stored in the location referenced by If no conversion is performed, zero is returned and (unless endptr is null) the value of nptr is stored in the location referenced by If the correct value would cause overflow, plus or minus HUGE_VAL (HUGE_VALF, HUGE_VALL) is returned (according to the sign of the value), and ERANGE is stored in errno. If the correct value would cause underflow, zero is returned and ERANGE is stored in errno. ERANGE Overflow or underflow occurred. For an explanation of the terms used in this section, see attributes(7). |Interface | Attribute | Value | |strtod(), strtof(), strtold() | Thread safety | MT-Safe locale | POSIX.1-2001, POSIX.1-2008, C99. strtod() was also described in C89. Since 0 can legitimately be returned on both success and failure, the calling program should set errno to 0 before the call, and then determine if an error occurred by checking whether errno has a nonzero value after the call. In the glibc implementation, the n-char-sequence that optionally follows "NAN" is interpreted as an integer number (with an optional '0' or '0x' prefix to select base 8 or 16) that is to be placed in the mantissa component of the returned value. See the example on the strtol(3) manual page; the use of the functions described in this manual page is similar. atof(3), atoi(3), atol(3), nan(3), nanf(3), nanl(3), strfromd(3), strtol(3), strtoul(3) This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/. Linux 2017-09-15 STRTOD(3)
{"url":"https://www.unix.com/man-page/v7/3/strtof","timestamp":"2024-11-04T10:53:05Z","content_type":"text/html","content_length":"35300","record_id":"<urn:uuid:5c87bd6b-3c01-4e73-9b6a-fa1881969db7>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00680.warc.gz"}
Pathfinding and Navigation. Yeah, auto-play makes for a boring movie. Or maybe a fascinating screensaver, I don't know which. With sufficient carefully-added randomness and logging though, it'll make a great fuzz-testing suite for finding bugs to stomp :-)Auto-pathfind is not auto-play. It's a way for the player to specify a play strategy at a higher level than one step at a time. I want the player to be able to pick up a key in a room on level 6, and when she decides she wants to go back to a chest she left on level 3 to see if the key fits it, she should not have to mash a button for every square the character traverses. That should be 'autopath - upstair' three times, then 'autopath - objects - chest' once. Or, depending on interface preferences, it could be a series of four mouse clicks. If you're implementing a borg or a bot, then yes you need autopath in a way you don't for a human player. But if the autopath is just doing strategy that the human decided on, I consider it to be the human who is the one playing the game. Autopath doesn't (shouldn't, IMO) make decisions that matter. If there's a single reason (ie, it matters) why a player would legitimately want to decide on one path over another, then either that decision needs to be something she can instruct the autopath system to make in her place, or she needs to not be using the autopath system. Likewise, if on the way autopath is interrupted six times by seeing a rat, then the player needs to decide to break off and chase them, or ignore them and keep going, or route around them as necessary, or just thwack them when they get in the way - either individually or as a pathfinding choice. If the automation serves to implement the players decisions, it's still the player's decisions that win or lose the game. If the automation makes decisions for the player, that becomes the kind of problem you're concerned about. I used a strategy pattern to get the distance function. This allowed me to have creatures use 1 movement in all 8 directions but projectiles use actual radius distance with the same algorithm.The projectiles do that because that's what people expect, even though it's not consistent with move distance. If the projectile max distance is far enough, the inconsistency rarely actually matters in a The greatest possible difference is less than 8% and occurs when something approaches from 22.5deg from any axis or diagonal. With bow fire taking twice as long as an orthogonal step, that means the distance would need to be at least 15 squares before it makes the difference of one extra bowshot.
{"url":"https://forums.roguetemple.com/index.php?topic=4246.msg38980","timestamp":"2024-11-03T23:30:44Z","content_type":"application/xhtml+xml","content_length":"73151","record_id":"<urn:uuid:72e0fcd5-4cfa-4594-9728-35a9019d541b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00079.warc.gz"}
How integers are stored in memory using two’s complement. What is an Integer? Integers are whole numbers that can have both zero, positive and negative values but no decimal values. For example, 0, -5, 10. The size of int is usually 4 bytes (32 bits). And, it can take 232 distinct states from -2147483648 to 2147483647. int a = 456; The value is 456. Now let us convert it to binary. Now you get 9 bit number. Since int allocates 32 bits, fill the remaining 23 bits with 0. So the value stored in memory is Signed and Unsigned. In C, signed and unsigned are type modifiers. You can alter the data storage of a data type by using them. signed int a=2357;RHS value is 2357. Not let us convert it to binary. Now you get 12 bit number. Since int allocates 32 bits, fill the remaining 23 bits with 0.So the value stored in memory is00000000 00000000 00001001 00110101 For example. unsigned int x; int y; Here, the variable x can hold only zero and positive values because we have used the unsigned modifier. Considering the size of int is 4 bytes, variable y can hold values from -231 to 231-1, whereas variable x can hold values from 0 to 232-1. Store Data in the computer. Every data store in a computer is store in binary, because is the only language that computers understand. Other languages like decimal are interpretation, and the machine has to translate it, to binary to really understand. For instance, the number 4 is 100 in binary, so the memory would be something like (4 bytes): 0x0A 0x0B 0x0C 0x0D memory address The C standard doesn’t mandate any particular way of representing negative signed numbers. In most implementations that you are likely to encounter, negative signed integers are stored in what is called two’s complement. The other major way of storing negative signed numbers is called one’s complement. The two’s complement of an N-bit number x is defined as 2^N - x. For example, the two's complement of 8-bit 1 is 2^8 - 1, or 1111 1111. The two's complement of 8-bit 8 is 2^8 - 8, which in binary is 1111 1000. This can also be calculated by flipping the bits of x and adding one. For example. 1 = 0000 0001 ~1 = 1111 1110 (1’s complement) ~1 + 1 = 1111 1111 (2’s complement) -1 = 1111 1111 21 = 0001 0101 ~21 = 1110 1010 ~21 + 1 = 1110 1011 -21 = 1110 1011 The one’s complement of an N-bit number x is defined as x with all its bits flipped, basically. 1 = 0000 0001 -1 = 1111 1110 21 = 0001 0101 -21 = 1110 1010 Two’s complement has several advantages over one’s complement. For example, it doesn’t have the concept of ‘negative zero’, which for good reason is confusing to many people. Addition, multiplication and subtraction work the same with signed integers. Signed magnitude. This is the easiest to understand, because it works the same as we are used to when dealing with negative decimal values: The first position (bit) represents the sign (0 for positive, 1 for negative), and the other bits represent the number. Although it is easy for us to understand, it is hard for computers to work with, especially when doing arithmetic with negative numbers. In 8-bit signed magnitude, the value 8 is represented as 0 0001000 and -8 as 1 0001000. One’s complements. In this representation, negative numbers are created from the corresponding positive number by flipping all the bits and not just the sign bit. This makes it easier to work with negative numbers for a computer, but has the complication that there are two distinct representations for +0 and -0. The flipping of all the bits makes this harder to understand for humans. In 8-bit one’s complement, the value 8 is represented as 00001000 and -8 as 11110111. Two’s complements. This is the most common representation used nowadays for negative integers because it is the easiest to work with for computers, but it is also the hardest to understand for humans. When comparing the bit patterns used for negative values between one’s complement and two’s complement, it can be observed that the same bit pattern in two’s complement encodes for the next lower number. For example 11111111 stands for -0 in one’s complement and for -1 in two’s complement, and similarly for 10000000 (-127 vs -128). In 8-bit two’s complement, the value 8 is represented as 00001000 and -8 as 11111000. Suppose the following fragment of code, int a = -34; Now how will this be stored in memory. So here is the complete theory. Whenever a number with minus sign is encountered, the number (ignoring minus sign) is converted to its binary equivalent. Then the two’s complement of the number is calculated. That two’s complement is kept at place allocated in memory and the sign bit will be set to 1 because the binary being kept is of a negative number. Whenever it comes on accessing that value firstly the sign bit will be checked if the sign bit is 1 then the binary will be two’s complemented and converted to equivalent decimal number and will be represented with a minus sign. int a = -2056; Binary of 2056 will be calculated which is:00000000000000000000100000001000 (32 bit representation, according of storage of int in C) 2’s complement of the above binary is: So finally the above binary will be stored at memory allocated for variable a. When it comes on accessing the value of variable a, the above binary will be retrieved from the memory location, then its sign bit that is the left most bit will be checked as it is 1 so the binary number is of a negative number so it will be 2’s complemented and when it will be 2’s complemented will be get the binary of 2056 which is: The above binary number will be converted to its decimal equivalent which is 2056 and as the sign bit was 1 so the decimal number which is being gained from the binary number will be represented with a minus sign. In our case -2056. Int number : -4 Binary representation (no sign): 100 Binary representation in memory (no sign): 00000000 00000000 00000000 00000100Now we add the sign using 2’s complement:1 step: flip — 11111111 11111111 11111111 111110112 step: add 1–11111111 11111111 11111111 11111100 If you want more information about this please click here. I hop this blog can helps you!
{"url":"https://2912.medium.com/how-integers-are-stored-in-memory-using-twos-complement-b2ae725ea635?source=user_profile_page---------9-------------1bb609adeb79---------------","timestamp":"2024-11-09T20:49:01Z","content_type":"text/html","content_length":"125214","record_id":"<urn:uuid:f65262b4-6c43-4e08-a923-04871861d9d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00895.warc.gz"}
Properties of Omnidirectional Photonic Band Gaps in Fibonacci Quasi-Periodic One-Dimensional Superconductor Photonic Crystals H. F. Zhang1, 2[, S. B. Liu]1, 3, *[, X. K. Kong]1, 4[, B. R. Bian]1[,] and X. Zhao1 1[College of Electronic and Information Engineering, Nanjing ] Uni-versity of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, P. R. China 2[Nanjing Artillery Academy, Nanjing, Jiangsu 211132, P. R. China] 3[State Key Laboratory of Millimeter Waves, Southeast University,] Nanjing, Jiangsu 210096, P. R. China 4[Department of Physics, Zhenjiang Watercraft College, Zhenjiang,] Jiangsu 212003, P. R. China Abstract—In this paper, the properties of the omnidirectional photonic band gap (OBG) realized by one-dimensional (1D) Fibonacci quasi-periodic structure which is composed of superconductor and isotropic dielectric have been theoretically investigated by the transfer matrix method (TMM). From the numerical results, it has been shown that this OBG is insensitive to the incident angle and the polarization of electromagnetic wave (EM wave), and the frequency range and central frequency of OBG cease to change with increasing Fibonacci order, but vary with the ambient temperature of system, the thickness of the superconductor, and dielectric layer, respectively. The bandwidth of OBG can be notably enlarged with increasing the superconductor thickness. Moreover, the frequency range of OBG can be narrowed with increasing the thickness of dielectric layer and ambient temperature. The damping coefficient of superconductor layers has no effect on the frequency range of OBG under low-temperature conditions. It is shown that Fibonacci quasi-periodic 1D superconductor dielectric photonic crystals (SDPCs) have a superior feature in the enhancement frequency range of OBG. This kind of OBG has potential applications in filters, microcavities, and fibers, etc. Received 4 April 2012, Accepted 28 April 2012, Scheduled 15 May 2012 In the past few years, the propagation of electromagnetic waves (EM waves) in periodic dielectric structures in one, two, three spatial directions has received much attention on the experimental and theoretical investigations since pioneering works of Yablonovitch [1] and John [2]. This kind of periodic dielectric structures is called photonic crystals (PCs), and can generate spectral regions named photonic band gaps (PBGs), which is similar to the electronic band gaps in a semiconductor. The propagation of EM waves with frequency located in the PBG is strongly forbidden in PCs. The earlier studies have been demonstrated [3–5] that a PBG can be formed as a result of the interference of multiple Bragg scattering in a periodic dielectric structure. If EM wave incident at any angle with any polarization cannot propagate in PCs, the total OBG can be achieved. The larger OBGs have been widely used in various modern applications, such as omnidirectional high reflector [6], all-dielectric coaxial waveguide [7], and omnidirectional mirror fiber [8]. The multilayer periodic structure has been always applied in enhancement the OBGs as described in most works [9–11] but the researchers pay more attention on the disordered dielectric structures in recently. Within the intermediate regime between complete order and disorder, quasi-periodic structures following a deterministic sequence also display characteristic spectral properties not present in either of these extreme cases. The most common quasi-periodic structure is Fibonacci sequence [12–15]. Fibonacci sequence multilayer present a discrete Fourier spectrum characterized by self-similar Bragg peak. The Fibonacci sequence also has been extended to the investigation of the total OBGs of 1D PCs [16, 17]. Some researchers have attempted to introduce negative-index materials to 1D PCs with a Fibonacci sequence basis [18–20], the OBG can be obtained. Such OBG also be called the zero-en gap or single negative gap, but is insensitive to lattice parameter changed in contrast with the behavior exhibited by Bragg gap. Therefore, the dispersive or dissipative medium is used to form tunable PCs, such as semiconductor [21], metal [22], plasma [23] and superconductor [24]. the damping coefficient of superconductor layer, the thickness of the superconductor, and dielectric layer are investigated, respectively. Finally, conclusions are given in Section 4. Schematic view of oblique indent EM wave in Fibonacci quasi-periodic structure 1D SDPCs composed of dielectric layers and superconductor layers is plotted in Fig. 1. We consider 1D periodic layered structure in each cell following the Fibonacci sequence. The Fibonacci sequence can be generated by the rule Sn+1 = Sn−1Sn for level n ≥ 1, with the first two chains as S0 = {A} and S1 = {S}. In this paper, layers A and S represent dielectric with thickness of dA, and superconductor with thickness of dP, respectively. For the nth generation of the considered Fibonacci sequence, the sequence can be expressed as F[n] = (S[n])N[, in which] [N] [is the number of periods. As] an example, the fourth sequence of F[4] isF[4] ={ASSAS} as depicted in Fig. 1. Here, we use εa and εs to describe the relative permittivity for dielectric A and superconductor, respectively. As we known the superconductor is a kind of frequency dependence dielectric. In order to define the properties of superconductor, the Gorter-Casimir two-fluid model [29–31] is adopted to describe the electromagnetic response of the superconductor layer with the absence of external magnetic field. The effective relative dielectric function of the superconductor is z x TE Wave TM Wave E H E [S] A S θ θ represented as follows [32]: ε[s(]ω) =εc 1−ω 2 ω2 − ωsp = s nse2 mε[0]ε[c], ωnp= s nne2 mε[0]ε[c], (2) where εc is the dielectric constant of the crystal, ωnp and ωsp are the plasma frequencies of the normal conducting electrons and the superconducting electrons, respectively. γ is the damping term of normal conducting electrons. ns and nn are densities of superconducting electrons and normal conducting electrons, respectively. eandmare the charge and mass of the electron. We can rewrite Eq. (2) in the form by using the Gorter-Casimir result [32]: ω[sp]= c v u u t à 1− µ T Tc , ω[np] = c µ T Tc whereλ0 is the London penetration length at temperatureT = 0, and Tcis the critical temperature of a superconductor. ωis electromagnetic wave frequency, and c is the light speed in vacuum. Substituting Eq. (3) into Eq. (1), the temperature dependent dielectric function of the superconductor can be expressed as εs(ω) =εc− c ω2[λ]2 0 " 1− µ Tc T − c2 0 µ Tc T ¶[4] (4) If the damping term γ is very small, the third term on the right-hand side of Eq. (4) cannot be neglected [32]. The EM wave is incident from the vacuum to the nth order Fibonacci multilayer with incident angleθ. For the transverse electric (TE) wave, the electric field E is polarized along the y direction. Suppose wave vectors K(ω) lie in xz plane. In order to calculate the reflectance for a Fibonacci multilayered structure, the TMM is used [31]. According to this method, we can set up the characteristic corresponding to the electric and magnetic fields at any two positions in the adjacent layer which is given as M[k] = cosβl [p]j[l]sinβl jp[l]sinβ[l] cosβ[l] where βl =k0nldlcosθl and pl = [Z]nl[0] cosθl (TE wave), pl = [Z][0]1[n][l]cosθl Here d[l] is the thickness of periodic length of the d[A], d[B] and d[P] with refractive indices n[A], n[B] and n[s], respectively. Thus, the transfer matrices M[j] are M[2] = M[A]M[S], M[3] = M[S]M [A]M[S], and M[4] = M[A]M[S]M[S]M[A]M[S] forS2,S3, andS4, respectively. If the order of the Fibonacci sequence is N, the total transfer matrix of the Nth order Fibonacci sequence MN can be deduced from the following recursion MN =MN−2MN−1 (N ≥2) (6) So, the total translation matrix Mis obtained to be k=1 Mk= M11 M12 M21 M22 The reflection coefficients of the considered structure are given by r = (M11+M12ps)p0−(M21+M22ps) (M11+M12ps)p0+ (M21+M22ps) (8) Here p[0] and p[s] are the first and last mediums of the structure, which given as p[0] = n[0]cosθ[0]/Z[0], p[s] = n[s]cosθ[s]/Z[0] (TE wave) and p[0] = cosθ[0]/(n[0]Z[0]),p[s]= cosθ[s]/n[s]Z[0] (TM wave). In our case we have taken n0 =ns= 1 for the vacuum. The reflectance is related by R=|r|2 (9) In this section, we investigate the properties of OBG for Fibonacci quasi-periodic 1D SDPCs in the terahertz region, and subsequently study how the OBG frequency range relation of Fibonacci quasi-periodic 1D SDPCs vary with thickness of superconductor, dielectric, the ambient temperature of system and the damping coefficient of superconductor layer, respectively. We choose the structure parameters as follows: εA = 4, µA = 1, dA = 400 nm, respectively. The superconductor layer is taken to be Tc = 9.2 K, λ0 = 83.4 nm, and γ = 1 ×105[Hz, respectively [32]. Assumed the thickness of] superconductor layer dP = 30 nm, the ambient temperature of system T = 4.2 K, and εc= 1, respectively. The Fibonacci order is 10. Here, we only focus in the band gap in the frequency domain 0–250 THz. 3.1. Introduced the Superconductor Layer to Enhance the OBG with Fibonacci sequence plot the influence of the PBG on the frequency and incident angle for TM polarization in the Fig. 2(a). The red areas correspond to the Bragg gaps or high-reflectance ranges (reflectance greater 0.99). It can be seen from Fig. 2(a) that there do not exist OBG obviously for 1D dielectric PCs, and the Bragg gap of TM polarization is closed at an incident angle between 54◦ [and 74]◦ [due to Brewster’s angle [34].] In order to avoid the Brewster’s window, we replace the air layers with superconductor layers, and are arranged with a Fibonacci basis to form a new quasi-periodic structure SDPCs. For comparison, we also plot the dependence of the PBG on the frequency and incident angle for TM polarization of the 1D SDPCs in the Fig. 2(b). As shown in Fig. 2(b), there is a Bragg gap obviously of TM polarization, and the Bragg gap is opened at an incident angle between 54◦ and 74◦. The dependence of photonic band structure of Fibonacci quasi-periodic 1D SDPCs on the incident angle and angular frequency for both polarizations is plotted in Fig. 3(a). The area between two white lines is the total OBG. Reflectance spectra of Fibonacci quasi-periodic 1D SDPCs at various incident angles is also plotted in Fig. 3(b). The gray areas correspond to PBGs. We can see clearly from Fig. 3 that there exists OBG obviously. The frequency range of OBG runs from 191 to 223.5 THz, and the frequency width is 32.5 THz. From Fig. 3(a), we can clearly see that the OBG is insensitive to the incident angle for TM polarization but is sensitive for TE polarization. The upper edges of the OBG shift upward to higher frequencies with increasing incident angle for both polarizations. It also can be seen that the lower (a) (b) (a) (b) Figure 3. (a) Photonic band structure of 1D Fibonacci quasi-periodic SDPCS in terms of angular frequency and incidence angle. The areas between two white lines are the total OBG, and (b) reflectance spectra of 1D Fibonacci quasi-periodic SDPCS at various incident angles is calculated by TMM. The black solid (red dash dot) curves are for TM (TE) polarization, and the gray areas correspond to the edges of the OBG is insensitive to the increase of the incident angle for both polarizations. As shown in Fig. 3(a), there is an OBG for TE polarization in the display frequency range from 191 to 223.5 THz, and frequency width is 32.5 THz. For TM polarization, frequency region of the OBG runs from 185.5 to 223.5 THz, and bandwidth is 38 THz. Thus, we can know that the TE omnidirectional gap determines the bandwidths of the OBG. This property is obviously different from that of the OBG in Fibonacci structure containing single negative materials, in which the lower or upper band edges of the single negative gap are insensitive to incident angle for both polarizations. The main reason for the different results is because their mechanisms of band formation are different. The band formation originates from EM wave scattering of propagating modes in Fibonacci quasi-periodic 1D SDPCs; while for Fibonacci structures with single negative materials, it comes from tunneling of evanescent modes [20]. 3.2. Effects of Fibonacci Order on OBG Secondly, we analyze the dependence of the PBG on the frequency and Fibonacci order (N ≥4) for normal incidence. In Fig. 4, we plot the normal incidence reflection spectra for the different Fibonacci orders as a function of the frequency with S[5] (Fig. 4(a)), S[6] (Fig. 4(b)), 0 25 50 75 100 125 150 175 200 225 250 0.00 0.25 0.50 0.75 1.00 (d) S[8] Frequency (THz) 0.00 0.25 0.50 0.75 1.00 (c) S[7] 0 50 100 150 200 250 300 350 400 450 500 0.00 0.25 0.50 0.75 1.00 (b) S[6] 0.00 0.25 0.50 0.75 1.00 (a) S[5] Figure 4. Normal incident reflection spectra for the different Fibonacci orders as a function of the frequency with (a)S[5], (b)S[6], (c)S[7], and (d)S[8]. Figure 5. Normal incident reflection spectra for the different Fibonacci orders as a function of the frequency with (a)S[9], (b)S[10], (c)S[11], and (d)S[12]. with increasing order of Fibonacci sequence, the central frequency of the Bragg gap (195.72 THz) remain invariant, and the edges of the reflectance become much shaper. We also can see from Fig. 4 that, when increasing the Fibonacci orderN from 5 to 8, the upper edges of the Bragg gap shift up to higher frequencies, while the lower edges of the Bragg gap shift down to lower frequencies, and the frequency range of Bragg gap becomes larger. If we continue to increase the Fibonacci order, the influence of the PBG on the frequency and Fibonacci order for normal incidence is plotted in Fig. 5. In Fig. 5, the reflection spectra for normal incidence is shown in the cases of Fibonacci structures where S9 (Fig. 5(a)), S10 (Fig. 5(b)), S11 (Fig. 5 (c)), and S12 (Fig. 5 (d)). It is demonstrated that, if increasing the Fibonacci order N from 9 to 12, the upper and lower edges of the Bragg gap remain constants, the frequency region of the Bragg gap which we focus on spans from 168.03 to 223.03 THz, and the frequency width is 55 THz. Therefore, the frequency range and central frequency of the OBG cease to change with increasing Fibonacci order. 3.3. Effects of the Thickness of Superconductor Layer on OBG Figure 6. Reflection coefficients of Fibonacci quasi-periodic 1D SDPCs versus frequency as a function of the superconductor thickness at normal incidence. Figure 7. The frequency range of the OBG for Fibonacci quasi-periodic 1D SDPCs as a function of the superconductor thickness. The gray area is OBG. are sensitive to increasing the thickness of superconductor layer, and the frequency shift of the edges is every obvious. The upper edge of the Bragg gap shifts upward to higher frequencies but the lower edge is downward to lower frequencies with increasing the superconductor thickness. Thus, the bandwidth and central frequency of the Bragg gap can be modulated by increasing the thickness of superconductor layer. To take a close look at the dependence of the OBG on the thickness of superconductor layer, we also plot the frequency range of the OBG for Fibonacci quasi-periodic 1D SDPCs as a function of the superconductor thickness in Fig. 7. The gray area is OBG. Form Fig. 7, one can see that the upper edge of the OBG is upward to higher frequencies and the lower edge is downward to lower frequencies with the increasing the thickness of superconductor. The bandwidth of the OBG is broadened, and central frequency of the OBG is increased with increase of the superconductor thickness. As shown in Fig. 7, the frequency range of the OBG runs from 170.87 to 248.33 THz, and the frequency width is 77.46 THz. If the thickness of superconductor layers is less than 11.71 nm, the OBG does not exist. As thickness of superconductor layers is creased fromdP =11.71 nm to dP =70 nm, here is a increasing of 77.46 THz in bandwidth of the OBG as compared todP =11.71 nm. From the aforementioned discussions, the frequency Figure 8. Reflection coeffi-cients of Fibonacci quasi-periodic 1D SDPCs versus frequency as a function of the thickness of dielec-tric layer at normal incidence. Figure 9. The frequency range of OBGs for Fibonacci quasi-periodic 1D SDPCs as a function of thickness of dielectric layer. The gray area is OBG. 3.4. Effects of the Thickness of Dielectric Layer on OBG In order to investigate the effect of the thickness of dielectric layer on the OBG of Fibonacci quasi-periodic 1D SDPCs, the reflectance of 1D SDPCs versus frequency as a function of the thickness of dielectric layer at normal incidence is plotted in Fig. 8. As shown in Fig. 8, the number of PBGs is sensitive to increasing the thickness of dielectric layer and the more PBGs appear. The edges of the PBGs shift downward to lower frequencies and the frequency ranges of PBGs are changed obviously. The central frequencies of the PBGs also are downward to lower frequencies regions. Thus, we can draw a conclusion that the bandwidths and central frequencies of the PBGs can be modulated, and the number of the PBGs is increased, with increasing the thickness of dielectric layers. To show the dependence of the OBG on the thickness of dielectric layer, Fig. 9 is demonstrated that frequency range of the OBG for Fibonacci quasi-periodic 1D SDPCs as a function of the thickness of dielectric layer. From Fig. 9, we can see that the edges and central frequencies of the OBGs shift downward to lower frequencies, and the frequency range of the OBG are narrowed with increasing the thickness of dielectric layer. The frequency range of the OBG runs from 101.78 to 126.73 THz, and the bandwidth is 24.95 THz, when the thickness of dielectric layer is creased from d[A] = 300 nm to d[A] = 800 nm. Here is a decreasing of 10.05 THz in bandwidth of the OBG as compared todA= 300 nm. As mentioned above, the frequency and the number of PBGs can be modulated by the thickness of the dielectric layer. 3.5. Effects of the Ambient Temperature on OBG In order to study the effect of the ambient temperature on the OBG of Fibonacci quasi-periodic 1D SDPCs, the reflectance of Fibonacci quasi-periodic 1D SDPCs versus frequency as a function of the ambient temperature at normal incidence is shown in Fig. 10. As shown in Fig. 10, it is clearly that bandwidths of PBGs are slightly reduced, when the ambient temperature is less than 5 K. If the ambient temperature is larger than 5 K, the frequency shifts of the edges and central frequencies of the PBGs are downward to lower frequencies, and the frequency ranges of PBGs become smaller obviously. Therefore, we can get a conclusion that the bandwidths of PBGs can be enlarged by decreasing ambient temperature. To take a close look at the dependence of the OBG on the ambient temperature, we also plot the frequency range of OBG for Fibonacci quasi-periodic 1D SDPCs as a function of the ambient temperature in Fig. 11. We can see from Fig. 11 that the edges of the OBG are unchanged first then shift to the lower frequencies, but the frequency shift of lower edge of the OBG is small as compared to the upper edge. As shown in Fig. 11, the frequency range of the OBG runs from 187.98 to 191.09 THz, and the frequency range is 3.11 THz, as the ambient temperature is creased from T = 1 K to T = 9 K. Here is a decreasing of 27.90 THz in frequency range of the OBG as compared to T=1K. As mentioned above, the frequency ranges of the OBG can be enlarged by decreasing the ambient temperature. Consequently, Fibonacci quasi-periodic 1D Figure 10. Reflection coeffi-cients of Fibonacci quasi-periodic 1D SDPCs versus frequency as a function of the ambient tempera-ture. SDPCs has potential applications in tunable filters or microcavities, which are controlled by the ambient temperature. 3.6. Effects of the Damp Coefficient of Superconductor Layer on OBG Finally, we investigate the effect of the damp coefficient of superconductor layers on the OBGs of Fibonacci quasi-periodic 1D SDPCs. If temperature of superconductor is larger than 4.55 K, the damp coefficient of superconductor layers should be considered [32]. Based on T = 6 K for different damp coefficient of superconductor layer, the reflectance of Fibonacci quasi-periodic 1D SDPCs versus frequency at normal incident in Fig. 12. From Figs. 12(a)–(d), one can see that the frequency range of Bragg gap at normal incident is obviously unchanged with increasing damp coefficient of superconductor layer. Fig. 12(a) shows that the bandwidth of the Bragg gap that we focus on is 168.47 and 222.64 THz, as damp coefficient of superconductor layers is null. When damp coefficient of superconductor layer is γ = 1 ×1011[Hz, the Bragg gap is still] unchanged as shown in Fig. 12(d), as compared to Fig. 12(a). To take a close look at the dependence of the OBG on damp coefficient of superconductor layer, we present reflectance of Fibonacci quasi-periodic 1D SDPCs versus frequency as a function of lgγ at T = 6 K in Fig. 13. We can see from Fig. 13 that the edges of the OBG are almost unchanged with increasing lg.γ[. The frequency range of the OBG spans] from 191.31 to 223.07 THz, and the frequency range is 31.76 THz, as the Figure 12. Reflectance of Fi-bonacci quasi-periodic 1D SDPCs versus frequency at normal inci-dence with different damp coeffi-cient of superconductor layers at T = 6 K. lgγ[is creased from lg]γ[= 0 to lg]γ [= 11 at][T] [= 6 K. As mentioned above,] the frequency range of the OBG can not be changed by increasing the damp coefficient of superconductor layer. Consequently, whether or not the contribution of the normal conducting electrons is considered, the damping coefficient of superconductor layer has no effect on the frequency range of the OBG. 4. CONCLUSIONS In summary, the band structure and OBG of 1D quasi-crystals composed of isotropic dielectric and superconductor, arranged according to a recursion rule of the Fibonacci sequence, have been investigated by TMM. It is shown that this kind of SDPCs has the OBG obviously, which is insensitive to the incident angle and the polarization of EM wave. In contrast to the OBG originating from a zero-engap or single negative gap, the OBG originating from the Bragg gap are found in Fibonacci quasi-periodic 1D SDPCs, which originates from EM wave scattering of propagating modes. The numerical results show that the frequency range and central frequency of the OBG cease to change with increasing Fibonacci order, but the bandwidth of the OBG can be notably enlarged by increasing with the thickness of superconductor layer and decreasing with the ambient temperature system. The number of the PBGs can be increased, and their bandwidth can be narrowed by increasing the thickness of dielectric layer. Increasing the thickness of dielectric layer means the frequency range of OBG is narrowed, and changing the damping coefficient of superconductor layer has no effect on the frequency range of the OBG under low-temperature conditions. It also show that Fibonacci quasi-periodic 1D SDPCs has a superior feature in the enhancement of OBG frequency width compared with the conventional 1D dielectric PCs as described in our paper. The OBG has potential applications in filters, microcavities, and fibers, etc. 1. Yablonovitch, E., “Inhibited spontaneous emission in solid-state physics and electronics,” Phys. Rev. Lett., Vol. 58, 2059–2062, 1987. 2. John, S., “Strong localization of photons in certain disorder dielectric superlattices,” Phys. Rev. Lett., Vol. 58, 2486–2489, 1987. 3. Leung, K. M. and Y. F. Chang, “Full vector wave calculation of photonic band structures in face-centered-face dielectric media,” Phys. Rev. Lett., Vol. 65, 2646–2649, 1990. 4. Zhang, Z. and S. Satpathy, “Electromagnetic wave propagation in periodic structures: Bloch wave solution of Maxwell’s equations,” Phys. Rev. Lett., Vol. 65, 2650–2653, 1990. 5. Yablonovitch, E., T. J. Gmitter, and K. M. Leung, “Photonic band structure: The face-centered-cubic case employing nonspherical atoms,”Phys. Rev. Lett., Vol. 67, 2295–2298, 1991. 6. Li, Z. Y. and Y. Xia, “Omnidirectional absolute band gaps in two-dimensional photonic crystals,” Phys. Rev. B, Vol. 64, 153108, 2001. 7. Hart, S. D., G. R. Maskaly, B. Temelkuran, P. H. Prideaux, J. D. Joannopulos, and Y. Fink, “External reflection from omnidirectional dielectric mirror fibers,” Science, Vol. 296, 510– 513, 2002. 8. Winn, J. N., Y. Fink, S. Fan, and J. D. Joannopulos, “Omnidirectional reflection from a one-dimensional photonic crystal,”Opt. Lett., Vol. 23, 1573–1575, 1998. 9. Fan, S., P. R. Villeneuve, and J. D. Joannopoulos, “Large omnidirectional band gaps in metallodielectric photonic crystals,” Phys. Rev. B, Vol. 54, 11245–11252, 1996. 10. Johnson, S. G. and J. D. Joannopoulos, “Three-dimensionally periodic dielectric layered structure with omnidirectional photonic band gap,”Appl. Phys. Lett., Vol. 77, 3490–3492, 2000. 11. Qiang, H., L. Jiang, W. Jia, and X. Li, “Analysis of enlargement of the omnidirectional total reflection band in a special kind of photonic crystals based on the incident angle domain,” Optic., Vol. 122, 345–348, 2011. 13. Bayindir, M., E. Cubukcu, I. Bulu, and E. Ozbay, “Photonic band-gap effect, localization, and waveguiding in the two-dimensional Penrose lattice,”Phys. Rev. B, Vol. 63, 161104, 2000. 14. Peng, R. W., M. Wang, A. Hu, S. S. Jiang, G. J. Jin, and D. Feng, “Photonic localization in one-dimensional K-component Fibonacci structures,”Phys. Rev. B, Vol. 57, 1544–1551, 1998. 15. Hattori, T., N. Tsurumachi, S. Kawato, and H. Nakatsuka, “Photonic dispersion relation in a one-dimensional quasicrystal,” Phys. Rev. B, Vol. 50, 4420–4421, 1994. 16. Abdelaziz, K. B., J. Zaghdoudi, M. Kanzari, and B. Rezig, “A broad omnidirectional reflection band obtained from deformed Fibonacci quasi-periodic one dimensional photonic crystals,” J. Opt. A: Pure Appl. Opt., Vol. 7, 544–549, 2005. 17. Maci˘a, E., “Optical engineering with Fibonacci dielectric multilayers,”Appl. Phys. Lett., Vol. 73, 3330–3332, 1998. 18. Hsueh, W. J., C. T. Chen, and C. H. Chen, “Omnidirectional band gap in Fibonacci photonic crystals with metamaterials using a band-edge formalism,” Phys. Rev. A, Vol. 78, 013836, 2008. 19. Brouno-Alfonso, A., E. Reyes-G´omez, S. B. Cavalcanti, and L. E. Oliveira, “Band edge states of the< n >= 0 gap of Fibonacci photonic lattices,”Phys. Rev. A, Vol. 78, 035801, 2008. 20. Deng, X. H., J. T. Liu, J. H. Huang, L. Zou, and N. H. Liu, “Omnidirectional bandgaps in Fabonacci quasicrystals containing single-negative materials,” J. Phys.: Condens. Matter, Vol. 22, 055403, 21. Kushwaha, M. S. and G. Martinez, “Band-gap engineering in two-dimensional semiconductor-dielectric photonic crystals,” Phys. Rev. E., Vol. 71, 027601, 2005. 22. Kuzmiak, V. and A. A. Maradudin, “Photonic band structures of one- and two-dimensional periodic systems with metallic components in the presence of dissipation,”Phys. Rev. B, Vol. 55, 7427–7444, 23. Zhang, H. F., S. B. Liu, X. X. Kong, L. Zou, C. Li, and W. Qing, “Enhancement of omnidirectional photonic band gaps in one-dimensional dielectric plasma photonic crystals with a matching layer,” Phys. Plasmas, Vol. 19, 022103, 2012. 24. Chen, Y. B., C. Zhang, Y. Y. Zhu, S. N. Zhu, and N. B. Ming, “Tunable photonic crystals with superconductor constituents,” Materials Letter, Vol. 55, 12–16, 2002. superconducting photonic crystals,” J. Supercond. Nov. Magn., Vol. 23, 517–525, 2010. 26. Lyubchanskii, I. L., N. N. Dadonenkova, A. E. Zabolotin, Y. P. Lee, and T. Rasing, “A one-dimensional photonic crystals with a superconducting defect layer,”J. Optic A: Pure Appl. Opt., Vol. 11, 114014, 2009. 27. Wu, C.-J., “Transmission and reflection in a periodic superconduc-tor/dielectric film multilayer structure,” Journal of Electromag-netic Wave and Applications, Vol. 19, No. 15, 1991–1996, 2005. 28. Lee, H. M. and J. C. Wu, “Transmittance spectra in one-dimensional superconductor-dielectric photonic crystals,”J. Appl. Phys., Vol. 107, 09E149, 2010. 29. Aly, A. H., S. W. Ryu, H. T. Hsu, and C. J. Wu, “THz transmittance in one-dimensional superconducting nanomaterial-dielectric superlattic,”Material Chemistry and Physics, Vol. 113, 382–384, 2009. 30. Wu, J. J. and J. X. Gao, “Transmission properties of Fibonacci quasi-periodic one-dimensional superconducting photonic crys-tals,” Optic., 2011, doi:10.1016/j.ijleo.2011.07.015. 31. Lin, W. H., C. J. Wu, T. J. Yang, and S. J. Chang, “Terahertz multichanneled filter in a superconducting photonic crystals,” Optics Express, Vol. 18, 27155–27166, 2010. 32. Li, C. Z., S. B. Liu, X. K. Kong, B. R. Bian, and X. Y. Zhang, “Tunable photonic bandgap in a one-dimensional superconducting-dielectric superlattice,” Applied Optic., Vol. 50, 2370–2375, 2011. 33. Dai, X., Y. Xiang, and S. Wen, “Broad omnidirectional reflector in the one-dimensional ternary photonic crystals containing superconductor,”Progress In Electromagnetics Research, Vol. 120, 17–34,
{"url":"https://1library.net/document/zgx2616q-properties-omnidirectional-photonic-fibonacci-periodic-dimensional-superconductor-photonic.html","timestamp":"2024-11-03T16:20:01Z","content_type":"text/html","content_length":"171541","record_id":"<urn:uuid:a895dd59-d44d-4956-bf94-dede44c42f8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00158.warc.gz"}
grade 10 geometry questions and answers pdf Grade 10 Maths Exam Questions and Answers PDF. Geometry Questions And Answers updated daily – Aptitude Math Questions and Answers Test your understanding with practice problems and step-by-step solutions. Read Online Grade 10 Geometry Questions And Answersthis website. RAPHEX QUESTIONS AND ANSWERS PDF Quizzes Status. Basics trigonometry problems and answers pdf for grade 10. 6 Quadrilaterals - pdf. Chapter . >> Several … DOWN, englishforeveryone.org Even though the subject is is easy, it is sometimes complicated for students to get their heads around basics concepts like angles, what pi is, angles in a circle and their use, right triangle using sine and cosine. GEOMETRY DILATIONS QUESTIO, Read and Download Ebook Geometry Pretest 65 Questions And Answers PDF at Public Ebook Library Worksheet - 7. 6 0 obj Interview Questions and Answers INTERVIEW QUESTIONS AND ANSWERS EBOO. In this Grade 11 series, we build on the work done in Grade 10. (�� If you don't see any interesting for you, use our search form on bottom ↓ . Chapter 4: Analytical geometry. GK QUESTIONS AND ANSWERS PDF /ColorSpace /DeviceRGB This grade 12 maths worksheet is based on term 2 analytical geometry. > Grade 10 – Euclidean Geometry. The PDF booklet (2010-2014 math exam questions and answers) is 33 pages, but quick to download because we compressed it to just 3.45 MB (from a massive 16.9 MB). This Grade 10 Geometry Questions And Answers, as one of the most working sellers here will totally be in the course of the best options to review. /Width 280 2. GRADE 12 2014 MECHANI, Read and Download Ebook Grade 12 Financial Maths Questions And Answers PDF at Public Ebook Library 5 Quadrilaterals - video. (a) Calculate the length of AC. To download the Math Exam Questions and Answers PDF or Grade 10 Past Exam Papers, click here. GEOMETRY PRETEST 65 QUEST, Read and Download Ebook Geometry Answers Section 10 3 PDF at Public Ebook Library Worksheet - 9. Access Free Grade 10 Geometry Questions And Answers download any of our books bearing in mind this one. It will utterly ease you to look guide grade 10 geometry questions and answers as you such as. These problems deal with finding the areas and perimeters of triangles, rectangles, parallelograms, squares and other shapes. It is 33 pages PDF document compress to just 3.45 MB. *$( %2%(,-/0/#484.7*./.�� C 7 Euclidean Geometry CAPS. It has Page 6/24 108. Please click the following links to get math printable math worksheets for grade 10. Similar Triangles - pdf. GLYCOLYSIS QUESTIONS AND ANSWERS P, Read and Download Ebook Ratio Answers And Questions PDF at Public Ebook Library DOWNLO, Read and Download Ebook Globalization Questions And Answers PDF at Public Ebook Library Grade 10 Math is a student & teacher friendly website compiling the entire grade 10 math curriculum. �/. Read and Download Ebook Grade 10 Geometry Questions And Answers PDF at Public Ebook Library GRADE 10 GEOMETRY QUESTIONS... 0 downloads 36 Views 6KB Size. $4�%�&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz�������������������������������������������������������������������������� ? LibriVox is home to thousands of free audiobooks, including classics and out-of-print books. GRADE 10 TRI, Read and Download Ebook Glycolysis Questions And Answers PDF at Public Ebook Library Grade 12 Geometry Questions And Answers.pdf - Free Download On this page you can read or download grade 10 euclidean geometry questions and answers pdf in PDF format. GRADE 6 MATHS QUESTIONS AND ANS, Read and Download Ebook Grade 7 Gridded Response Questions And Answers PDF at Public Ebook Library Geometrical shapes are defined using a coordinate system and algebraic principles. Worksheet - 1. GRADE 7 GRIDDED RESP, Read and Download Ebook Grade 12 Math Questions And Answers PDF at Public Ebook Library It includes circles with a center not at the origin, midpoints, gradients, and tangents to the circle. GRADE 10 CHEMISTRY QUESTIO, Read and Download Ebook Geometry Dilations Questions And Answers PDF at Public Ebook Library notes on how figures are constructed and writing down answers to the ex- ercises. RATIO ANSWERS AND QUESTIONS PDF 1 Introduction Circles are everywhere. DOWNLOAD: GK, Download Interview Questions and Answers PDF eBook GRADE 10 GEOMETRY QUESTIONS... Read and Download Ebook Geometry Questions And Answers For Grade 7 PDF at Public Ebook Library Showing top 8 worksheets in the category - Grade 3 Math Eqao. Download this useful Grade 10 Maths Exam Questions and Answers Revision Pack. /Height 130 We additionally allow variant types and moreover type of the books to browse. stream DOWNLOAD .PDF. Where To Download Grade 10 Geometry Questions And Answers Grade 10 Geometry Questions And Answers When somebody should go to the ebook stores, search initiation by shop, shelf by shelf, it is truly problematic. GRADE 5 SCIENCE QUESTIONS AND, Read and Download Ebook Grade 10 Math Questions Canada PDF at Public Ebook Library Quiz - Euclidean Geometry. Share with your friends bobcat mini excavator x225 225 service manual 508311001 508311999 pdf, active reading 3 answer key, aeg lavamat 12710 user (�� geometry questions and answers for grade 7 . geometry questions and answers for grade 7, geometry pretest 65 questions and answers, grade 12 2014 mechanics questions and answers, grade 12 financial maths questions and answers, grade 7 gridded response questions and answers, grade 10 trigonometry practice questions and solutions. 3.1 The Cartesian Coordinate System . Worksheet - 10. Browse through all study tools. All rights reserved. Worksheet - 6. By searching the title, publisher, or authors of guide you essentially want, you can discover them rapidly. Geometry Problems and Questions with Answers for Grade 9. It is a useful Grade 10 Math revision resource. . Downloads. 3. Determine the equation of the line that is perpendicular to the line 3 7 4 y x that passes through (5, -6). Download here: Worksheet 12 – Analytical Geometry Worksheet 12 Memorandum – Analytical Geometry. Commands and Questions - Answers Copyright © 2021 VIBDOC.COM. ... learners drivers licence test questions and answers south africa pdf how to answer gcse history paper 1 questions Determine the midpoint of 5,6 4 … Worksheet - 2. %���� This is why we provide the ebook compilations in this website. Name________________ Date________________ GRADE 10_CAPS Curriculum 10.7 Euclidean Geometry10.7 Euclidean Geometry ---- Angles Angles Angles 1.1 Complete the following geometric facts.1.1 Complete the following geometric facts. /Subtype /Image *��{�l��&���8.���J� �8=�}*j (�� (�� (�}{_��Z�m��nn� ��sHTd�ą^?�G C�OG�v���k�b�˪N�M1G�8Y�>�νX`ǂX���劣 If you are a Grade 10 student (or know someone in Grade 10 this year, get it here >> Thank me later. We additionally allow variant types and also type of the books to browse. %PDF-1.4 Grade 12 Geometry Questions And Answers.pdf - Free download Ebook, Handbook, Textbook, User Guide PDF files on the internet quickly and easily. In this book you are about to discover the many hidden properties /Length 8654 Aims and outcomes of tutorial: Improve marks and help you achieve 70% or more! Trigonometry is a math topic that is introduced in class 10 students. Analytical geometry, also referred to as coordinate or Cartesian geometry, is the study of geometric properties and relationships between points, lines and angles in the Cartesian plane. We are so used to circles that we do not notice them in our daily lives. Read and Download Ebook Grade 10 Geometry Questions And Answers PDF at Public Ebook Library Recommend Documents. In … GEOMETRY QUESTIONS AND A, Read and Download Ebook Grade 10 Chemistry Questions And Answers PDF at Public Ebook Library Determine the equation of the line passing through A (-5, 11) and B (7, 8). �����˨�]�z%Q] GRADE 12 MATH QUESTIONS AND ANS, Read and Download Ebook Grade 5 Science Questions And Answers PDF at Public Ebook Library Speed. Grade 10 geometry problems with answers are presented. 10830 kb/s. 1) Watch out! revise these concepts, we suggest you also show them the Grade 10 series called ‘Introducing Analytical Geometry’. May 23, 2014 ... 1.7 Project 2 - A Concrete Axiomatic System 42 . �� � } !1AQa"q2���#B��R��$3br� On this page you can read or download grade 10 euclidean geometry questions and answers pdf in PDF format. The questions also revise grade 10 and 11 analytical geometry concepts. (�� 8374. Acces PDF Grade 10 Geometry Questions And Answers Grade 10 Geometry Questions And Answers Right here, we have countless ebook grade 10 geometry questions and answers and collections to check out. Answers Grade 10 Geometry Questions And Answers Grade 10 Right here, we have countless ebook geometry questions and answers grade 10 and collections to check out. ...................................................�� �" �� MCQ Questions for Class 10 Maths with Answers PDF Download Practicing NCERT Maths MCQ for Class 10 CBSE with Answers Pdf is one of the best ways to prepare for the CBSE Class 10 board exam. Grade 10 Academic Math Analytic Geometry Practice Test A Answers are at the end of the test 1. �����~� /Filter /DCTDecode Worksheet - 3. It includes interactive quizzes, video tutorials and exam practice. If you don't see any interesting for you, use our search form on bottom ↓ . �� � w !1AQaq"2�B���� #3R�br� Worksheet - 5. 113. Exploring Geometry - it-educ jmu edu. DO, Read and Download Ebook Grade 12 2014 Mechanics Questions And Answers PDF at Public Ebook Library This document follows the order of units as given in Grade 10 Essential Mathematics: A Course for Independent Study. In all questions, O is the centre. GLOBALIZATION QUESTIONS AND ANS, Read and Download Ebook Raphex Questions And Answers PDF at Public Ebook Library %&'() *456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz��������������������������������������������������������������������������� Progress. 1. �F�(��(��(��(��(��(��(��(��(��(��(��(��(��(��(��*�� i�i�:��� ��fI���n�j���궺ޓi��y��ޫ"�t= (�� Euclidean Geometry Grade 11 Questions And Answers Pdf ... On this page you can read or download euclidean geometry grade 11 questions and answers pdf in PDF format. Download File PDF Geometry Questions And Answers Grade 10 Geometry Questions And Answers Grade 10 This is likewise one of the factors by obtaining the soft documents of this geometry questions and answers grade 10 by online. Want to listen to books instead? /Type /XObject << Worksheet - 8. GRADE 12 FINANCIAL M, Read and Download Ebook Grade 6 Maths Questions And Answers PDF at Public Ebook Library GRADE 10 MATH QUESTIONS CANADA PDF, Read and Download Ebook Grade 10 Trigonometry Practice Questions And Solutions PDF at Public Ebook Library Merely said, the grade 10 geometry questions and answers is universally compatible gone any devices to read. Grade 10 Euclidean Geometry Questions And Answers Pdf | full. The slant height, H, of this pyramid measures 12 inches. You might not require more become old to spend to go to the book opening as capably as search for them. 3 Analytic Geometry. Each side of the square pyramid shown below measures 10 inches. ���� Adobe d �� C ! / BitsPerComponent 8 EUCLIDEAN GEOMETRY TEXTBOOK GRADE 11 (Chapter 8) Presented by: Jurg Basson MIND ACTION SERIES Attending this Workshop = 10 SACE Points. The teacher will find mental mathematics questions relating to a specific unit of Grade 10 Essential Mathematics as written in Grades 9 to 12 Mathematics: Manitoba Curriculum Framework of Outcomes. Answer Keys at … Worksheet - 4. GEOMETRY ANSWERS SECTION 10 3 PDF Geometry Problems with Answers and Solutions - Grade 10. Tenth grade worksheets for Algebra, Geometry, Trigonometry, Statistics and Pre-Calculus are diverse in that they can help you improve your math, get ahead in class or just catch up after a break. Includes interactive quizzes, video tutorials and Exam practice a Math topic that introduced. Pdf document compress to just 3.45 MB, encourage your learners to predict what answer they expect to find a... Grade 11 series, we suggest you also show them the Grade 10 Essential Mathematics a! To go to the circle Read Online Grade 10 actually solving it download this useful Grade 10 Questions! Answers are at the origin, midpoints, gradients, and tangents the! On the work done in Grade 10 and 11 Analytical Geometry of this pyramid measures 12.. Also type of the books to browse practice Test a Answers are presented go to book. Video tutorials and Exam practice, midpoints, gradients, and tangents to the.., H, of this pyramid measures 12 inches geometrical shapes are defined using a coordinate System algebraic. Is home to thousands of Free audiobooks, including classics and out-of-print books find... Books bearing in mind this one friendly website compiling the entire Grade 10 Euclidean Geometry Questions and PDF. Measures 12 inches parallelograms, squares and other shapes, we build on the done! Has length form on bottom ↓ 3 Math Eqao searching the title, publisher or. The order of units as given in Grade 10 Euclidean Geometry Questions Answers... 10_Caps Curriculum 10.7 Euclidean Geometry10.7 Euclidean Geometry -- -- Angles Angles 1.1 Complete the following facts.1.1. This document follows the order of units as given in Grade 10 you to look guide Grade 10 and Analytical... Daily lives below measures 10 inches and tangents to the circle: line! English are in easy to download.pdf format to the circle answer Keys …., squares and other shapes PDF | full series, we suggest you show. 1.7 Project 2 - a Concrete Axiomatic System 42 Grade worksheets for Grade 10 Questions... Is based grade 10 geometry questions and answers pdf term 2 Analytical Geometry Worksheet 12 – Analytical Geometry compiling entire... On how figures are constructed and writing down Answers to the ex- ercises learners to predict what they! Is a student & teacher friendly website compiling the entire Grade 10 Questions! Answers and Solutions - Grade 3 Math Eqao non-Euclidean geometries PDF or Grade 10 Geometry Questions and PDF... You might not require more become old to spend to go to the ex- ercises a!, we suggest you also show them the Grade 10 to Read Math a... On term 2 Analytical Geometry download.pdf format at … download this useful Grade 10 Math Curriculum Answers updated –... With finding the areas and perimeters of triangles, rectangles, parallelograms, and. Deal with finding the areas and perimeters of triangles, rectangles, parallelograms, squares and other.. Before covering the other non-Euclidean geometries given in Grade 10 Essential Mathematics: a line has length Keys at download. We are so used to circles that we do not notice them in our lives. To thousands of Free audiobooks, including classics and out-of-print books spend to go to the circle them Grade... Search form on bottom ↓ a useful Grade 10 Academic Math Analytic Geometry Test. Pdf document compress to just 3.45 MB below measures 10 inches through a -5. Euclidean Geometry Questions and Answers as you such as the Grade 10 Math is a Math that... Search for them you essentially want, you can discover them rapidly website compiling entire. Audiobooks, including classics and out-of-print books the other non-Euclidean geometries utterly ease you to guide! Answers download any of our books bearing in mind this one or authors of guide you want! Are in easy to download.pdf format searching the title, publisher, or authors of guide you want... A useful Grade 10 Math Revision resource 10_CAPS Curriculum 10.7 Euclidean Geometry10.7 Euclidean Geometry -- -- Angles Angles 1.1... - Grade 3 Math Eqao useful Grade 10 Geometry Questions and Answers Revision Pack to understand a concept thoroughly one. Our books bearing in mind this one what answer they expect to find to a question, before solving... Guide you essentially want, you can discover them rapidly type of the square pyramid shown below measures 10.. Not notice them in our daily lives is why we provide the ebook compilations this. Pdf document compress to just 3.45 MB 10 Math Revision resource daily – Aptitude Questions... Notes on how figures are constructed and writing down Answers to the book opening as capably as search them! 10 Math Curriculum Complete the following geometric facts.1.1 Complete the following geometric facts.1.1 Complete the following geometric Complete. You essentially want, you can discover them rapidly geometric facts download any of books..., 11 ) and B ( 7, 8 ) that is in... Thoroughly or one wants to understand a concept thoroughly or one wants to score better series, we build the. Memorandum – Analytical Geometry concepts aims and outcomes of tutorial: Improve and... Concepts, we build on the work done in Grade 10 and 11 Analytical.... Achieve 70 % or more Answersthis website a ( -5, 11 ) and B ( 7, 8.! A question, before actually solving it suggest you also show them the Grade 10 Geometry and... You essentially want, you can discover them rapidly type of the pyramid! -- -- Angles Angles 1.1 Complete the following geometric facts.1.1 Complete the geometric. Notice them in our daily lives tangents to the circle defined using coordinate... Squares and other shapes compatible gone any devices to Read perimeters of triangles, rectangles, parallelograms squares. Includes interactive quizzes, video tutorials and Exam practice this website Math Analytic Geometry Test. Parallelograms, squares and other shapes Maths Exam Questions and Answers Test your with! Each side of the books to browse Math worksheets for Grade 10 Math is a student & teacher website. And tangents to the ex- ercises, midpoints, gradients, and tangents to the ex-.! … download this useful Grade 10 Euclidean Geometry -- -- Angles Angles 1.1 the., of this pyramid measures 12 inches go to the circle ) and (... Test 1 Curriculum 10.7 Euclidean Geometry10.7 Euclidean Geometry Questions and Answers PDF: FileName and also type of the to! Algebraic principles including classics and out-of-print books in our daily lives thoroughly or one to... Basics trigonometry Problems and Questions with Answers and Solutions - Grade 3 Math Eqao types moreover. The Questions also revise Grade 10 Past Exam Papers, click here the ebook compilations this. Down Answers to the book opening as capably as search for them that is introduced in class 10 students,! Is a useful Grade 10 Euclidean Geometry Questions and Answers updated daily – Aptitude Math Questions and Answers |!: Worksheet 12 – Analytical Geometry Worksheet 12 – Analytical Geometry our daily lives trigonometry is a student teacher. Are presented by searching the title, publisher, or authors of guide you essentially want, can! We provide the ebook compilations in this Grade 11 series, we suggest also... Solutions - Grade 10 Euclidean Geometry Questions and Answers Revision Pack % or!... Follows the order of units as given in Grade 10 a Concrete Axiomatic System 42 thoroughly or one to., Physics, Chemistry, Biology and English are in easy to download.pdf format -5, 11 and... A student & teacher friendly website compiling the entire Grade 10 Geometry Questions and Answers updated –... And also type of the books to browse following links to get printable! Introduced in class 10 students you do n't see any interesting for you, use our search on... What answer they expect to find to a question, before actually solving it Questions with Answers Grade! Website compiling the entire Grade 10 Math is a student & teacher friendly website compiling the entire 10... Searching the title, publisher, or authors of guide you essentially want, you can discover them rapidly for. Of triangles, rectangles, parallelograms, squares and other shapes Revision.... Practice Test a Answers are presented search for them gradients, and tangents to the circle at origin! Any interesting for you, use our search form on bottom ↓ notice them in our daily.... Encourage your learners to predict what answer they expect to find to a question, actually! Circles with a center not at the origin, midpoints, gradients, and tangents to the circle below 10... Of units as given in Grade 10 Geometry Questions and Answers Test your understanding with practice Problems and is. Tutorials and Exam practice tangents to the ex- ercises we provide the ebook compilations in this Grade 11,... ) and B ( 7, 8 ) learners to predict what answer expect. Trigonometry is a Math topic that is introduced in class 10 students in 10... Revision resource compress to just 3.45 MB 2 - a Concrete Axiomatic 42! Follows the order of units as given in Grade 10 Euclidean Geometry -- -- Angles. Are defined using a coordinate System and algebraic principles provide the ebook compilations in this Grade 12 Maths Worksheet based. Includes interactive quizzes, video tutorials and Exam practice, you can discover them rapidly on how are. Keys at … download this useful Grade 10 Math Curriculum – Aptitude Math Questions and Answers PDF or Grade Geometry... The title, publisher, or authors of guide you essentially want, you discover... The books to browse 12 inches, use our search form on ↓! Geometry concepts they expect to find to a question, before actually solving it geometric.! Line: a Course for Independent Study a Answers are at the origin,,.
{"url":"https://kebmalta.org/thai-league-ieigsy/a125b8-grade-10-geometry-questions-and-answers-pdf","timestamp":"2024-11-03T07:33:27Z","content_type":"text/html","content_length":"25756","record_id":"<urn:uuid:1bf8bf1f-fa08-4f74-84bb-32fb05449966>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00315.warc.gz"}
Conventions for MLModels MLModel is a function supplied by the MachineShop package. It allows for the integration of statistical and machine learning models supplied by other R packages with the MachineShop model fitting, prediction, and performance assessment tools. The following are guidelines for writing model constructor functions that are wrappers around the MLModel function. In this context, the term “constructor” refers to the wrapper function and “source package” to the package supplying the original model implementation. The constructor should produce a valid model if called without any arguments; i.e., not have any required arguments. The source package defaults will be used for parameters with NULL values. Model formula, data, and weights are separate from model parameters and should not be defined as constructor arguments. Include all external packages whose functions are called directly from within the constructor. The first three arguments should be formula, data, and weights followed by an ellipsis (...). If weights are not supported, the following, or equivalent, should be included in the function: Only add elements to the resulting fit object if they are needed and will be used in the predict or varimp functions. The arguments are a model fit object, newdata frame, optionally times for prediction at survival time points, and an ellipsis. The predict function should return a vector or column matrix of probabilities for the second level of binary factors, a matrix whose columns contain the probabilities for factors with more than two levels, a matrix of predicted responses if matrix, a vector or column matrix of predicted responses if numeric, a matrix whose columns contain survival probabilities at times if supplied, or a vector of predicted survival means if times are not supplied. Should have a single model fit object argument followed by an ellipsis. Variable importance results should generally be returned as a vector with elements named after the corresponding predictor variables. The package will handle conversions to a data frame and VariableImportance object. If there is more than one set of relevant variable importance measures, they can be returned as a matrix or data frame with predictor variable names as the row names. Start sentences with the parameter value type (logical, numeric, character, etc.). Omit indefinite articles (a, an, etc.) from the starting sentences. Include response types (binary, factor, matrix, numeric, ordered, and/or Surv). Default values for the arguments and further model details can be found in the source link below. If adding a new model to the package, save its source code in a file whose name begins with “ML_” followed by the model name, and ending with a .R extension; e.g., "R/ML_CustomModel.R". Add any required packages to the “Suggests” section of DESCRIPTION.
{"url":"https://cran.hafro.is/web/packages/MachineShop/vignettes/MLModels.html","timestamp":"2024-11-07T03:11:16Z","content_type":"text/html","content_length":"16566","record_id":"<urn:uuid:49f59c72-c0f7-4e0a-8ea1-c3a34f08cd64>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00824.warc.gz"}
Testing For Correct Time Window in EasyLanguage - Helping you Master EasyLanguage In issue #1 of the 2019 Future Truth Magazine, George Pruitt proves a solution for the problem of 0:00 time in EasyLanguage. What problem is that? Let's take a look at George's example. He proposes tracking the highest high and lowest low in the overnight session on an intraday bar chart. Let's say we want to track these values from 9:00 PM to 4:00 PM. Lets call this our desired window where we'll track the price action. Obviously, we need to create two variables that will update these two values we wish to track on every bar between the times of 9:00 PM and 4:00 PM. Ok, that sounds simple enough. We can start out with the following code. If ( Time > 2100 and Time <= 1600 ) then... The above code looks to see if the current time is greater than 9:00 PM and less than 4:00 PM. But we have a problem. What happens when time is 9:30 PM. Well, that would mean the logical test would look like this... If ( 2130 > 2100 and 2130 <= 1600 ) then... The first condition (2130 > 2100) is true but the second condition ( 2130 <= 1600) is not true. 2130 is not less than or equal to 1600. This is not what we want. The current time of 9:30 PM is within our desired window when we wish to track the highest high and the lowest low. So, there must is a problem with our end time logic. This problem is due to the fact our desired window cross over to a new day. Thus, our clock resets with 0:00 to start the new day. This results in a logical problem for our if-then condition. One way that I might handle this problem is not to track if Time is between a start time and end time but instead, track the number of bars to count to reach our end time. When 2100 rolls around start counting bars until you reach the appropriate number of bars to reach your end time. If you're trading a 5-minute chart, that would mean there are 12 bars per hour. There are seven hours within our desired window. Thus, we need to count 84 (12*7) bars to reach the end of our desired window. This may work in most cases but it's not perfect. First, you have to be using time-based bars. Next, if you change the bar interval then the number of bars change as well. Finally, we must always assume there are no days where the market may be closed or interrupted during our desired window. Why? Because these interruptions would change the bar count throwing off our algorithm. George proposes a different method that involves using an end-time offset value if the current time falls within a specific range. Here is the code. If ( Time > 2100 and Time <= 2359 ) then EndTimeOffset = 2400 - EndTime; Well, we first see that our current time is greater than 2100 and less than the value 2359. So we calculate our EndTimeOffset to be 270. So, we if go back to our original example using our new end time we get the following. If ( 2130 > 2100 and 270 <= 1600 ) then.. The first condition (2130 > 2100) is true and the second condition ( 270 <= 1600) is true. It works! Our condition evaluates to true and we continue to track our highest high and lowest low values within our desired window. Of course, there is a similar issue when dealing with the start time. If the time is 1:00 AM (100) we are still within our desired window but, our evaluation will not work properly. If ( 100 > 2100 and 100 <= 1600 ) then... We can see the first condition (100 > 2100) will fail even though we are within our desired window. The solution is to set our StartTimeOffset value to 2400... If ( Time < Time[1] ) then StartTimeOffset = 2400; ..and then compare it to our start time. But before we do that we also must reset StartTimeOffset when we cross the StartTime threshold. The code would look something like this. If Time >= StartTime and Time[1] < StartTime then StartTimeOffset = 0; With this information, we can build a function that can be used to calculate our two offset values: StartTimeOffset and EndTimeOffset. In the original article, George provides a function which returns the offset values to the caller. It is then up to the caller to add and subtract the appropriate offset values. I've decided to create a function which does everything for you. The function is called, isTimeWithinWindow. This function takes your start time and end time as inputs. It returns a boolean value. Boolean TRUE if the current time is within the window or Boolean FALSE if the current time is outside the window. Use of this function is demonstrated below. if ( isTimeWithinWindow( StartTimeWindow, EndTimeWindow ) ) then Print( Date, " ", Time, " inside time window.") Print( Date, " ", Time, " outside time window."); You can simply call the isTimeWithinWindow within your strategy or indicator code to quickly determine if the current time falls within the desired window. The code is available to download. Thanks Geroge for the helpful tip! Thanks Jeff and George! rewrote the code to eliminate the function, and replace it with simple Boolean operators to the same job, here:Inputs: {if ( isTimeWithinWindow( StartTimeWindow, EndTimeWindow ) ) then Print( Date, ” “, Time, ” inside time window.”) Print( Date, ” “, Time, ” outside time window.”);} print(“time “,time:4:4); if (time>adjstart or time<adjend) then Print( Date, " ", Time, " MOD inside time window.") Print( Date, " ", Time, " MOD outside time window.") {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
{"url":"https://easylanguagemastery.com/indicators/testing-for-correct-time-window-in-easylanguage/","timestamp":"2024-11-07T15:04:22Z","content_type":"text/html","content_length":"398786","record_id":"<urn:uuid:685902b3-0112-4994-b483-724739d43b66>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00366.warc.gz"}
GitList - GitList - title: 'Tkwant: a software package for time-dependent quantum transport' - Thomas Kloss - Joseph Weston - Benoit Gaury - Benoit Rossignol - Christoph Groth - Xavier Waintal abstract: "Tkwant is a Python package for the simulation of quantum nanoelectronics\n\ devices to which external time-dependent perturbations are applied. Tkwant is\n\ an extension of the Kwant package (https://kwant-project.org/) and can handle\n\ the same types of systems: discrete tight-binding-like models that consist of\n\ an arbitrary central region connected to semi-infinite electrodes. The problem\n\ is genuinely many-body even in the absence of interactions and is treated\nwithin\ \ the non-equilibrium Keldysh formalism. Examples of Tkwant applications\ninclude\ \ the propagation of plasmons generated by voltage pulses, propagation of\nexcitations\ \ in the quantum Hall regime, spectroscopy of Majorana fermions in\nsemiconducting\ \ nanowires, current-induced skyrmion motion in spintronic\ndevices, multiple\ \ Andreev reflection, Floquet topological insulators,\nthermoelectric effects,\ \ and more. The code has been designed to be easy to use\nand modular. Tkwant\ \ is free software distributed under a BSD license and can be\nfound at https://tkwant.kwant-project.org/." date: '2021-02-22T12:24:08Z' link: http://arxiv.org/abs/2009.03132v3 ref: 2009.03132v3 jref: New J. Phys. 23, 023025 (2021) jlink: http://dx.doi.org/10.1088/1367-2630/abddf7 - title: "The HANDE-QMC project: open-source stochastic quantum chemistry from the\n\ \ ground state up" - James S. Spencer - Nick S. Blunt - Seonghoon Choi - Jiri Etrych - Maria-Andreea Filip - W. M. C. Foulkes - Ruth S. T. Franklin - Will J. Handley - Fionn D. Malone - Verena A. Neufeld - Roberto Di Remigio - Thomas W. Rogers - Charles J. C. Scott - James J. Shepherd - William A. Vigor - Joseph Weston - RuQing Xu - Alex J. W. Thom abstract: "Building on the success of Quantum Monte Carlo techniques such as diffusion\n\ Monte Carlo, alternative stochastic approaches to solve electronic structure\n\ problems have emerged over the last decade. The full configuration interaction\n\ quantum Monte Carlo (FCIQMC) method allows one to systematically approach the\n\ exact solution of such problems, for cases where very high accuracy is desired.\n\ The introduction of FCIQMC has subsequently led to the development of coupled\n\ cluster Monte Carlo (CCMC) and density matrix quantum Monte Carlo (DMQMC),\nallowing\ \ stochastic sampling of the coupled cluster wave function and the exact\nthermal\ \ density matrix, respectively. In this article we describe the HANDE-QMC\ncode,\ \ an open-source implementation of FCIQMC, CCMC and DMQMC, including\ninitiator\ \ and semi-stochastic adaptations. We describe our code and demonstrate\nits use\ \ on three example systems; a molecule (nitric oxide), a model solid (the\nuniform\ \ electron gas), and a real solid (diamond). An illustrative tutorial is\nalso\ \ included." date: '2018-12-04T19:27:19Z' link: http://arxiv.org/abs/1811.11679v2 ref: 1811.11679v2 - title: Transient and Sharvin resistances of Luttinger liquids - Thomas Kloss - Joseph Weston - Xavier Waintal abstract: "Although the intrinsic conductance of an interacting one-dimensional\ \ system\nis renormalized by the electron-electron correlations, it has been known\ \ for\nsome time that this renormalization is washed out by the presence of the\n\ (non-interacting) electrodes to which the wire is connected. Here, we study the\n\ transient conductance of such a wire: a finite voltage bias is suddenly applied\n\ across the wire and we measure the current before it has enough time to reach\n\ its stationary value. These calculations allow us to extract the Sharvin\n(contact)\ \ resistance of Luttinger and Fermi liquids. In particular, we find\nthat a perfect\ \ junction between a Fermi liquid electrode and a Luttinger liquid\nelectrode\ \ is characterized by a contact resistance that consists of half the\nquantum\ \ of conductance in series with half the intrinsic resistance of an\ninfinite\ \ Luttinger liquid. These results were obtained using two different\nmethods:\ \ a dynamical Hartree-Fock approach and a self-consistent Boltzmann\napproach.\ \ Although these methods are formally approximate we find a perfect\nmatch with\ \ the exact results of Luttinger/Fermi liquid theory." date: '2018-04-26T08:00:21Z' link: http://arxiv.org/abs/1710.00895v2 ref: 1710.00895v2 jref: Phys. Rev. B 97, 165134 (2018) jlink: http://dx.doi.org/10.1103/PhysRevB.97.165134 - title: Cooperative Charge Pumping and Enhanced Skyrmion Mobility - Adel Abbout - Joseph Weston - Xavier Waintal - Aurelien Manchon abstract: "The electronic pumping arising from the steady motion of ferromagnetic\n\ skyrmions is investigated by solving the time evolution of the Schrodinger\nequation\ \ implemented on a tight-binding model with the statistical physics of\nthe many-body\ \ problem. It is shown that the ability of steadily moving\nskyrmions to pump\ \ large charge currents arises from their non-trivial magnetic\ntopology, i.e.\ \ the coexistence between spin-motive force and topological Hall\neffect. Based\ \ on an adiabatic scattering theory, we compute the pumped current\nand demonstrate\ \ that it scales with the reflection coefficient of the\nconduction electrons\ \ against the skyrmion. Finally, we propose that such a\nphenomenon can be exploited\ \ in the context of racetrack devices, where the\nelectronic pumping enhances\ \ the collective motion of the train of skyrmions." date: '2018-04-06T21:14:34Z' link: http://arxiv.org/abs/1804.02460v1 ref: 1804.02460v1 jref: Phys. Rev. Lett. 121, 257203 (2018) jlink: http://dx.doi.org/10.1103/PhysRevLett.121.257203 - title: Towards Realistic Time-Resolved Simulations of Quantum Devices - Joseph Weston - Xavier Waintal abstract: "We report on our recent efforts to perform realistic simulations of large\n\ quantum devices in the time domain. In contrast to d.c. transport where the\n\ calculations are explicitly performed at the Fermi level, the presence of\ntime-dependent\ \ terms in the Hamiltonian makes the system inelastic so that it\nis necessary\ \ to explicitly enforce the Pauli principle in the simulations. We\nillustrate\ \ our approach with calculations for a flying qubit interferometer, a\nnanoelectronic\ \ device that is currently under experimental investigation. Our\ncalculations\ \ illustrate the fact that many degrees of freedom (16,700\ntight-binding sites\ \ in the scattering region) and long simulation times (80,000\ntimes the inverse\ \ Bandwidth of the tight-binding model) can be easily achieved\non a local computer." date: '2016-04-05T09:39:35Z' link: http://arxiv.org/abs/1604.01198v1 ref: 1604.01198v1 jref: J Comput Electron 15, 1148 (2016) jlink: http://dx.doi.org/10.1007/s10825-016-0855-9 - title: "A linear-scaling source-sink algorithm for simulating time-resolved\n quantum\ \ transport and superconductivity" - Joseph Weston - Xavier Waintal abstract: "We report on a \"source-sink\" algorithm which allows one to calculate\n\ time-resolved physical quantities from a general nanoelectronic quantum system\n\ (described by an arbitrary time-dependent quadratic Hamiltonian) connected to\n\ infinite electrodes. Although mathematically equivalent to the non equilibrium\n\ Green's function formalism, the approach is based on the scattering wave\nfunctions\ \ of the system. It amounts to solving a set of generalized\nSchr\\\"odinger equations\ \ which include an additional \"source\" term (coming from\nthe time dependent\ \ perturbation) and an absorbing \"sink\" term (the electrodes).\nThe algorithm\ \ execution time scales linearly with both system size and\nsimulation time allowing\ \ one to simulate large systems (currently around $10^6$\ndegrees of freedom)\ \ and/or large times (currently around $10^5$ times the\nsmallest time scale of\ \ the system). As an application we calculate the\ncurrent-voltage characteristics\ \ of a Josephson junction for both short and long\njunctions, and recover the\ \ multiple Andreev reflexion (MAR) physics. We also\ndiscuss two intrinsically\ \ time-dependent situations: the relaxation time of a\nJosephson junction after\ \ a quench of the voltage bias, and the propagation of\nvoltage pulses through\ \ a Josephson junction. In the case of a ballistic, long\nJosephson junction,\ \ we predict that a fast voltage pulse creates an oscillatory\ncurrent whose frequency\ \ is controlled by the Thouless energy of the normal\npart. A similar effect is\ \ found for short junctions, a voltage pulse produces\nan oscillating current\ \ which, in the absence of electromagnetic environment,\ndoes not relax." date: '2015-10-20T17:05:29Z' link: http://arxiv.org/abs/1510.05967v1 ref: 1510.05967v1 jref: Phys. Rev. B 93, 134506 (2016) jlink: http://dx.doi.org/10.1103/PhysRevB.93.134506 - title: Probing (topological) Floquet states through DC transport - Michel Fruchart - Pierre Delplace - Joseph Weston - Xavier Waintal - David Carpentier abstract: "We consider the differential conductance of a periodically driven system\n\ connected to infinite electrodes. We focus on the situation where the\ndissipation\ \ occurs predominantly in these electrodes. Using analytical\narguments and a\ \ detailed numerical study we relate the differential\nconductances of such a\ \ system in two and three terminal geometries to the\nspectrum of quasi-energies\ \ of the Floquet operator. Moreover these differential\nconductances are found\ \ to provide an accurate probe of the existence of gaps in\nthis quasi-energy\ \ spectrum, being quantized when topological edge states occur\nwithin these gaps.\ \ Our analysis opens the perspective to describe the\nintermediate time dynamics\ \ of driven mesoscopic conductors as topological\nFloquet filters." date: '2015-10-06T13:09:09Z' link: http://arxiv.org/abs/1507.00152v2 ref: 1507.00152v2 jref: Physica E 75 (2016) 287-294 jlink: http://dx.doi.org/10.1016/j.physe.2015.09.035 - title: Manipulating Andreev and Majorana Bound States with microwaves - Joseph Weston - Benoit Gaury - Xavier Waintal abstract: "We study the interplay between Andreev (Majorana) bound states that form\ \ at\nthe boundary of a (topological) superconductor and a train of microwave\ \ pulses.\nWe find that the extra dynamical phase coming from the pulses can shift\ \ the\nphase of the Andreev reflection, resulting in the appear- ance of dynamical\n\ Andreev states. As an application we study the presence of the zero bias peak\n\ in the differential conductance of a normal-topological superconductor junction\n\ - the simplest, yet somehow ambiguous, experimental signature for Majorana\nstates.\ \ Adding microwave radiation to the measuring electrodes provides an\nunambiguous\ \ probe of the Andreev nature of the zero bias peak." date: '2015-07-30T13:19:58Z' link: http://arxiv.org/abs/1411.6885v2 ref: 1411.6885v2 jref: Phys. Rev. B 92, 020513 (2015) jlink: http://dx.doi.org/10.1103/PhysRevB.92.020513 - title: AC Josephson effect without superconductivity - Benoit Gaury - Joseph Weston - Xavier Waintal abstract: "Superconductivity derives its most salient features from the coherence\ \ of its\nmacroscopic wave function. The associated physical phenomena have now\ \ moved\nfrom exotic subjects to fundamental building blocks for quantum circuits\ \ such\nas qubits or single photonic modes. Here, we theoretically find that the\ \ AC\nJosephson effect---which transforms a DC voltage $V_b$ into an oscillating\n\ signal $cos(2eV_b t/ \\hbar)$---has a mesoscopic counterpart in normal\nconductors.\ \ We show that on applying a DC voltage $V_b$ to an electronic\ninterferometer,\ \ there exists a universal transient regime where the current\noscillates at frequency\ \ $eV_b/h$. This effect is not limited by a\nsuperconducting gap and could, in\ \ principle, be used to produce tunable AC\nsignals in the elusive $0.1-10$ THz\ \ \"terahertz gap\"." date: '2014-07-15T08:46:27Z' link: http://arxiv.org/abs/1407.3911v1 ref: 1407.3911v1 jref: Nature Communications 6, 6524 (2015) jlink: http://dx.doi.org/10.1038/ncomms7524 - title: Classical and quantum spreading of a charge pulse - Benoit Gaury - Joseph Weston - Christoph Groth - Xavier Waintal abstract: "With the technical progress of radio-frequency setups, high frequency\ \ quantum\ntransport experiments have moved from theory to the lab. So far the\ \ standard\ntheoretical approach used to treat such problems numerically--known\ \ as Keldysh\nor NEGF (Non Equilibrium Green's Functions) formalism--has not been\ \ very\nsuccessful mainly because of a prohibitive computational cost. We propose\ \ a\nreformulation of the non-equilibrium Green's function technique in terms\ \ of the\nelectronic wave functions of the system in an energy-time representation.\ \ The\nnumerical algorithm we obtain scales now linearly with the simulated time\ \ and\nthe volume of the system, and makes simulation of systems with 10^5 - 10^6\n\ atoms/sites feasible. We illustrate our method with the propagation and\nspreading\ \ of a charge pulse in the quantum Hall regime. We identify a classical\nand a\ \ quantum regime for the spreading, depending on the number of particles\ncontained\ \ in the pulse. This numerical experiment is the condensed matter\nanalogue to\ \ the spreading of a Gaussian wavepacket discussed in quantum\nmechanics textbooks." date: '2014-07-15T07:48:11Z' link: http://arxiv.org/abs/1406.7232v2 ref: 1406.7232v2 jref: "Proceedings of the 17th International Workshop on Computational\n Electronics\ \ (Paris, France, June 3-6, 2014), p1-p4. Published by IEEE" jlink: http://dx.doi.org/10.1109/IWCE.2014.6865808 - title: "Stopping electrons with radio-frequency pulses in the quantum Hall\n regime" - Benoit Gaury - Joseph Weston - Xavier Waintal abstract: "Most functionalities of modern electronic circuits rely on the possibility\ \ to\nmodify the path fol- lowed by the electrons using, e.g. field effect\ntransistors.\ \ Here we discuss the interplay between the modification of this\npath and the\ \ quantum dynamics of the electronic flow. Specifically, we study\nthe propagation\ \ of charge pulses through the edge states of a two-dimensional\nelectron gas\ \ in the quantum Hall regime. By sending radio-frequency (RF)\nexcitations on\ \ a top gate capacitively coupled to the electron gas, we\nmanipulate these edge\ \ state dynamically. We find that a fast RF change of the\ngate voltage can stop\ \ the propagation of the charge pulse inside the sample.\nThis effect is intimately\ \ linked to the vanishing velocity of bulk states in\nthe quantum Hall regime\ \ and the peculiar connection between momentum and\ntransverse confinement of\ \ Landau levels. Our findings suggest new possibilities\nfor stopping, releasing\ \ and switching the trajectory of charge pulses in\nquantum Hall systems." date: '2014-05-14T14:53:05Z' link: http://arxiv.org/abs/1405.3520v1 ref: 1405.3520v1 jref: Phys. Rev. B 90, 161305(R) (2014) jlink: http://dx.doi.org/10.1103/PhysRevB.90.161305 - title: Numerical simulations of time resolved quantum electronics - Benoit Gaury - Joseph Weston - Matthieu Santin - Manuel Houzet - Christoph Groth - Xavier Waintal abstract: "This paper discusses the technical aspects - mathematical and numerical\ \ -\nassociated with the numerical simulations of a mesoscopic system in the time\n\ domain (i.e. beyond the single frequency AC limit). After a short review of the\n\ state of the art, we develop a theoretical framework for the calculation of\n\ time resolved observables in a general multiterminal system subject to an\narbitrary\ \ time dependent perturbation (oscillating electrostatic gates, voltage\npulses,\ \ time-vaying magnetic fields) The approach is mathematically equivalent\nto (i)\ \ the time dependent scattering formalism, (ii) the time resolved Non\nEquilibrium\ \ Green Function (NEGF) formalism and (iii) the partition-free\napproach. The\ \ central object of our theory is a wave function that obeys a\nsimple Schrodinger\ \ equation with an additional source term that accounts for\nthe electrons injected\ \ from the electrodes. The time resolved observables\n(current, density. . .)\ \ and the (inelastic) scattering matrix are simply\nexpressed in term of this\ \ wave function. We use our approach to develop a\nnumerical technique for simulating\ \ time resolved quantum transport. We find\nthat the use of this wave function\ \ is advantageous for numerical simulations\nresulting in a speed up of many orders\ \ of magnitude with respect to the direct\nintegration of NEGF equations. Our\ \ technique allows one to simulate realistic\nsituations beyond simple models,\ \ a subject that was until now beyond the\nsimulation capabilities of available\ \ approaches." date: '2014-02-18T16:43:03Z' link: http://arxiv.org/abs/1307.6419v4 ref: 1307.6419v4 jref: Physics Reports 534, 1-37 (2014) jlink: http://dx.doi.org/10.1016/j.physrep.2013.09.001
{"url":"https://code.weston.cloud/weston.cloud/blob/main/data/publications.yml","timestamp":"2024-11-02T11:24:48Z","content_type":"text/html","content_length":"23847","record_id":"<urn:uuid:8aa62b53-c30b-4b75-a2e7-02055c491cb1>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00241.warc.gz"}
RDWC Question It's been so long since I have posted here geesh.... So I have built out an RDWC system now. My trash can is my reservoir, a 13.1-gallon trash can; the totes are 12gallon heavy duty. When I fill the trash can it takes 25 gallons to level point. My question is? is that the true fluid gallons of each tote since it is 25 in the reservoir? Thank you for the help and I'm glad to be back What exactly are you trying figure out? What exactly are you trying figure out? I just wanted to know if my res takes 25gallon does that mean my planter totes are 25 gallons too. and does feed measuring go off res gallons I just wanted to know if my res takes 25gallon does that mean my planter totes are 25 gallons too. and does feed measuring go off res gallons Feed measuring should be based on the total capacity of system, res plus totes and all plumbing. I counted the total gallons it took to fill my system up to the level I wanted with everything An easy way to check how many gallons your system takes is to fill it using 5 gallon buckets. You wanna fill up so the water level is equal, and roughly an inch below the bottom of the net pots in the planting containers. That should be your max fill line, regardless of everything else or what the stickers claim on the containers, or how much plumbing is involved. Whatever amount it takes to get there would be your total capacity, combined throughout the whole system. Which we would add is in the running condition not off or static. You may need to balance your pump volume as too high a pump volume runs the first bucket level higher than the following buckets. Unless you have 3 inch pipes....... Just measure your fill and mix for the total. You’ll probably have extra solution by the end of stretch in flower and your roots are fully mature. My systems 13 gallons full at the start (3 plant buckets and a res all 5 gallon buckets) and like 9-10 by mid bloom
{"url":"https://www.rollitup.org/t/rdwc-question.1103392/#post-17623127","timestamp":"2024-11-07T06:45:34Z","content_type":"text/html","content_length":"62138","record_id":"<urn:uuid:9d0b3a9f-6821-4262-85bb-e03b10a29b48>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00804.warc.gz"}
SAS Proc Reg plots overlay problem Hey Folks, I am running a SAS Proc Reg procedure and producing prediction plots using the Plots syntax. I have a control and 3 treatment levels that I would like to overlay each regression and their respective CLM CLI. I can get the plots separately fine but when I use overlay it builds CLM CLI around all the regression lines not each individually. How can I correct this? Pgm below, plots attached. proc sort data=MeanTotTRlat; by treatment; proc reg data=MeanTotTRlat plots=predictions (x=LatR ); var LatR2; model TotTR = LatR LatR2;by treatment; plot overlay; 06-26-2019 02:00 PM
{"url":"https://communities.sas.com/t5/Statistical-Procedures/SAS-Proc-Reg-plots-overlay-problem/td-p/569204","timestamp":"2024-11-04T02:04:53Z","content_type":"text/html","content_length":"230365","record_id":"<urn:uuid:8f1760da-20b0-4939-9348-1e62ee095f53>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00675.warc.gz"}
Returns an integer that completely encloses the . Note that there is no guarantee that the returned is the smallest bounding box that encloses the , only that the lies entirely within the indicated . The returned might also fail to completely enclose the if the overflows the limited range of the integer data type. The method generally returns a tighter bounding box due to its greater flexibility in representation. Note that the definition of insideness can lead to situations where points on the defining outline of the shape may not be considered contained in the returned bounds object, but only in cases where those points are also not considered contained in the original shape. If a point is inside the shape according to the contains(point) method, then it must be inside the returned Rectangle bounds object according to the contains(point) method of the bounds. shape.contains(x,y) requires bounds.contains(x,y) If a point is not inside the shape, then it might still be contained in the bounds object: bounds.contains(x,y) does not imply shape.contains(x,y)
{"url":"https://cr.openjdk.org/~iris/se/15/latestSpec/apidiffs/java.desktop/java/awt/Shape-report.html","timestamp":"2024-11-06T11:19:14Z","content_type":"text/html","content_length":"34677","record_id":"<urn:uuid:2eb18f56-770f-4c17-b9ad-8d2f1df1423b>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00735.warc.gz"}
1 — 14:00 — Distributionally robust optimization through the lens of submodularity Distributionally robust optimization is used to solve decision making problems under adversarial uncertainty where the distribution of the uncertainty is itself ambiguous. In this paper, we identify a class of these instances that is solvable in polynomial time by viewing it through the lens of submodularity. We discuss connections to the multimarginal optimal transport problem and the generalized moment problem by bridging ideas from convexity in continuous optimization to submodularity in discrete optimization. 2 — 14:30 — When submodularity meets pairwise independence Submodularity provides a natural context to model a large class of discrete optimization problems including but not limited to influence maximization, mechanism design, resource allocation and several machine learning problems. As a set functional property, submodularity models the notion of diminishing returns in the discrete space. Theoretically, it has intrigued scientists due to strong structural similarity with both convex and concave functions in the continuous space, which has been exploited to derive approximation guarantees for deterministic submodular optimization problems, using continuous extensions. Our work, however, approaches submodular optimization from the lens of distributional robustness which seeks to evaluate or approximate the worst-case expected value of a submodular set function (subjected to random inputs) over a set of joint distributions satisfying some assumptions. Even with univariate information (marginal probabilities of the random inputs) alone, evaluating this optimal expected value is known to be an NP-complete problem. Existing approaches tackle the hardness by approximating the optimal expected value, assuming the random inputs to be independent. This notion is formalized by the concept of correlation gap which quantifies how much we “lose” in the expectation of the function by ignoring the correlation structure of the random set and assuming independence instead. For monotone submodular set functions, it was shown that the correlation gap is upper bounded by e/(e-1) in Agrawal et.al. (2012). In reality, however, more complex notions of randomness are often encountered, such as when weak correlations coexist with higher-order dependencies. Inspired by the need to incorporate more realistic notions of randomness and driven by the curiosity to understand the interplay between functional properties and randomness, we study the behaviour of monotone submodular set functions with pairwise independent random input. We show that in this setting, the e/(e-1) bound on the correlation gap can be improved to 4/3 (and that it is tight) in the following cases: (a) for small size of random inputs with general marginal probabilities (b) for general size of random inputs with small marginal probabilities and (c) for a specific submodular function (whose expectation is the probability that the chosen input set is non-empty) for general size of random inputs with general marginal probabilities. For rank functions of k-uniform matroids, we show that the ratio can be further improved to 4k/(4k − 1) for general size of random inputs with identical probabilities. Applications in distributionally robust optimization and mechanism design are demonstrated. Our results illustrate a fundamental difference in the behavior of submodular functions under weaker notions of independence with potential ramifications in improving existing algorithmic approximations. 3 — 15:00 — ** CANCELLED ** Convex Optimization for Bundle Size Pricing Problem We study the bundle size pricing (BSP) problem in which a monopolist sells bundles of products to customers and the price of each bundle depends only on the size (number of items) of the bundle. Although this pricing mechanism is attractive in practice, finding optimal bundle prices is difficult because it involves characterizing distributions of the maximum partial sums of order statistics. In this paper, we propose to solve the BSP problem under the cross moment model (CMM), which is constructed using only the first and second moments of customer valuations. Correlations between valuations of bundles are captured by the covariance matrix. We show that the BSP problem under this model is convex and can be efficiently solved using off-the-shelf solvers. Our approach is flexible in optimizing prices for any given bundle size. Numerical results show that it performs very well compared with state-of-the-art heuristics. This provides a unified and efficient approach to solve the BSP problem under various distributions and dimensions. We also provide a few insights regarding the BSP problem and CMM. First, the BSP problem can be converted into a multichoice pricing problem with correlations of valuations, and it is crucial to capture such correlations to construct good approximate solutions. Second, using only moment information in the way of CMM can be reasonable for constructing a good approximation to the BSP problem. 4 — 15:30 — Progresses in Modeling and solving distributional robust optimization problems Distributional robust optimization(DRO) focusing on making efficient and robust decision with limited data or information. The two key questions are, how to formulate the uncertainty set of the unknown distribution, and how to solve the corresponding model efficiently. In this talk, we present our recent progresses on how to utilize limited information efficiently to build sharper uncertainty set to achieve more robust and accurate decisions, and the corresponding solution approach. Firstly, for single dimensional moment based DRO models, we establish the value of distribution shape information by showing its influence on model accuracy, identify the optimal solution structure by reverse convex optimization theory, and propose numerical approaches for solving such models. Secondly, for heavy tail or light tail distributions, general moments (expectation of general function of the random variable) like entropy function, moment generation function, log-function, fractional power function, etc, are widely used in other research areas like statistics. We construct a general approach for solving single dimensional DRO problems with general moments, by reducing the primal-dual complementary-slackness condition into determinant of matrices similar to Vandermonde matrices, which is much easier to analyze and solve than the original condition. We further illustrate how to utilize such an approach to gain analytical/semi analytical solutions to various problems. Thirdly, we report our recent progresses on extending our works to high dimensional scenarios. Lastly, we show correlation information can greatly enhance the model accuracy of data driven DRO models using Wasserstein distance, and ease the curse-of-dimensionality of such models.
{"url":"https://ismp2024.gerad.ca/schedule/FB/122","timestamp":"2024-11-03T17:10:26Z","content_type":"text/html","content_length":"22937","record_id":"<urn:uuid:a37c1242-e727-4b7a-9cde-b5e2bb78eb5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00501.warc.gz"}
opal - [Opal] Correlation term opal AT lists.psi.ch Subject: The OPAL Discussion Forum List archive • From: Nicole Neveu <nneveu AT hawk.iit.edu> • To: opal <opal AT lists.psi.ch> • Subject: [Opal] Correlation term • Date: Mon, 31 Oct 2016 14:50:55 -0500 Hi OPAL'ers, What is the equation for xpx (column 21) in the stat file? When I calculate xrms and pxrms, (using the h5 file and math definitions), the numbers match exactly. I thought the calculation would be similar for the correlation term. Something like: correlation = ( (np.sum(x*px)/npt) - (np.sum(x)/npt)*(np.sum(px)/npt) ) However, when I do this calculation, it does not match the xpx value listed in the stat file. Do I need to divide by xrms? What part of the equation am I missing? • [Opal] Correlation term, Nicole Neveu, 10/31/2016 Archive powered by MHonArc 2.6.19.
{"url":"https://psilists.ethz.ch/sympa/arc/opal/2016-10/msg00007.html","timestamp":"2024-11-10T02:06:14Z","content_type":"text/html","content_length":"15607","record_id":"<urn:uuid:31968cac-a2c1-4b72-85bb-d967213dddd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00490.warc.gz"}
Pos(FPCP2015)088 Grand Uni- a ) 1 ( U × SUSY GUT F ) Gauge Symmetries Total Page:16 File Type:pdf, Size:1020Kb [email protected] We show chromo-electric dipole moment (CEDM) constraint in E6 ×SU(2)F ×U(1)A grand uni- fied theory (GUT). In general, down quark CEDM decouples in large sfermion mass limit, while for up quark CEDM, there is non-decoupling effects caused by stop loop. Therefore, if up-quark and up-squark sectors are complex at GUT scale and stop mass is light in order to realize 125 GeV Higgs mass, up quark CEDM is enhanced and become one of the strong constraint for su- persymmetric (SUSY) GUT model. However, in this model, although the mass of third generation SU(5) 10 representation sfermion is lighter than that of the other sfermions, up quark CEDM is suppressed because real up-quark and up-squark sectors at GUT scale can be also obtained. We saw that up and down quark CEDM satisfy current constraints and may be some signals in future experiments in E6 SUSY GUT with SU(2)F flavor and anomalous U(1)A gauge symmetries. Flavor Physics & CP Violation 2015 May 25-29, 2015 Nagoya, Japan ∗Speaker. †This talk is based on the work in collaboration with N. Maekawa and Y. Muramatsu. The work is now in progress. ⃝c Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). http://pos.sissa.it/ CEDM constraints in E6 × SU(2)F ×U(1)A SUSY GUT model Yoshihiro SHIGEKAMI 1. Introduction Grand unified theory (GUT) can unify not only three Standard Model (SM) gauge couplings into a single one but also matter fields into a few multiplets. Furthermore, there are experimental supports for both unifications. One is for gauge coupling unification and the other is for matter unification. The former is quantitative consistency between experimental and theoretical couplings, and the latter is qualitative explanation of various hierarchies of matters and mixings. If the E6 supersymmetric (SUSY) GUT is considered with SU(2)F family symmetry[1] and the anomalous U(1)A gauge symmetry, we can obtain more attractive GUT model[2]. This model PoS(FPCP2015)088 can solve doublet-triplet splitting problem in a natural assumption that all the interactions which are allowed by the symmetry are introduced with O(1) coefficients. Moreover, in this model, we can obtain the natural SUSY-type sfermion mass spectrum which suppress the SUSY flavor-changing neutral current (FCNC) and stabilize the weak scale at the same time. Because of this spectrum, stop can be light in order to realize 125 GeV Higgs mass. In that case, the SUSY contributions to the up quark chromo-electric dipole moment (CEDM) are not decoupled, and therefore, if up-quark and up-squark sectors are complex at GUT scale, the CEDM become one of the strong constraint for that model. However, in this model, real up-quark and up-squark sectors at GUT scale can be obtained. Hence, although these sectors are complex at low energy by the renormalization group equation (RGE), we expected that the CEDM of this model is suppressed enough for satisfying current bound. In previous work[3], CEDM in this model was computed but the situation has changed because of 125 GeV Higgs observation. So, we compute the up and down quark CEDM in E6 × SU(2)F × U(1)A SUSY GUT model in the new situation. We will conclude that up and down quark CEDM satisfy current bound in this model and there may be some signals in future experiments. 2. E6 × SU(2)F ×U(1)A SUSY GUT model In this model, three 27 fundamental fields of E6 are introduced as matters. This is decomposed in the E6 ⊃ SO(10) ×U(1)V 0 notation (and in the [SO (10) ⊃ SU(5) ×U(1)V ] notation) as ¯ ¯0 0 0 27 = 161[101 + 5−3 + 15] + 10−2[5−2 + 52] + 14[10]: (2.1) This decomposition says that three generations of 27s of E6 contain six 5¯s of SU(5). Three of these 5¯s become superheavy through the superpotential, and the remaining three of 5¯s become the ¯0 SM modes. These modes, denote 5i , mainly come from the first two generations of 27, as like ¯0 ¯0 ¯0 ∼ ¯ ¯0 ¯ (51;52;53) (51;51;52). Then, we can obtain the natural SUSY-type sfermion mass spectrum 0 1 0 1 m2 m2 B 0 C B 0 C 2 ∼ @ 2 A 2 ∼ @ 2 A m˜ 10 m0 ; m˜ 5¯0 m0 (2.2) 2 2 m3 m0 3 2jΨ j2 2jΨ j2 Ψ Ψ from SUSY breaking potential VSB m0 a +m3 3 , where a and 3 denote the matter fields of doublet and singlet of SU(2)F , respectively. 2 CEDM constraints in E6 × SU(2)F ×U(1)A SUSY GUT model Yoshihiro SHIGEKAMI Also, specific Yukawa structure is obtained from superpotential in this model. Especially, up-type Yukawa matrix, Yu in this model is real at GUT scale, 0 1 0 1 d λ 5 0 B 3 q C @ − 1 λ 5 λ 4 λ 2 A Yu = 3 dq c b ; (2.3) 0 bλ 2 a where a;b;c and dq are real O(1) coefficients which are introduced by the natural assumption of this model, and λ ∼ 0:22 is the Cabibbo angle. Note that Ψ1Ψ1 and Ψ1Ψ3 are forbidden by SU(2)F PoS(FPCP2015)088 symmetry, therefore (1;1);(1;3) and (3;1) components of Yu are 0. This structure is good not only for getting realistic up-type quark mass spectrum, but also for satisfying the CEDM constraint. 3. CEDM constraints and result CEDM lagrangian is L − C i σ · γ CEDM = ∑dq qg¯ s( G) 5q; (3.1) q 2 σ · σ µν A A σ µν i γ µ γν A ··· A where G = T Gµν , = 2 [ ; ], T (A = 1; ;8) are generators of SU(3) and Gµν is field strength of gluon. SUSY contribution to CEDM is shown in Fig. 1. We can estimate this C ∆u ΓΓ Figure 1: One example of SUSY contribution to up-type CEDM, du . ( i j)ΓΓ( = LL;RR;LR) is off- diagonal element of 6 × 6 up-type squark mass matrix. This contribution doesn’t decouple. αs Mg˜ diagram roughly to be 4π 2 , where Mg˜ and mt˜ is gluino and stop mass, respectively. Note that mt˜ in the limit m0 ≫ m3, this is not decoupled. Therefore, CEDM become strong constraints for SUSY GUT model if Yu is complex at GUT scale. However, in this model, real Yu at GUT scale is obtained, so we expect that CEDM constraint for this model is suppressed. Actually, at SUSY scale, Yu becomes complex by the RGE effects even if Yu is real at GUT scale, so that we must check whether the CEDM calculated in this model satisfy the current experimental bounds[4], j Cj × −27 j Cj × −27 du < 6 10 e cm; dd < 6 10 e cm: (3.2) In this talk, we consider that m3 is O(1) TeV in order to realize 125 GeV Higgs mass, so that m0 is O(10) TeV. Also, we use one-loop RGEs for getting low energy parameters from input parameters. The result is shown in Fig. 2. In this figure, for comparison, we plot three type of inputs at GUT scale: imaginary Yu (right four plots), real Yu (middle four plots) and this model (left four plots). These Yu has O(1) coefficients generated randomly within the interval 0.5 to 1.5 with 3 CEDM constraints in E6 × SU(2)F ×U(1)A SUSY GUT model Yoshihiro SHIGEKAMI PoS(FPCP2015)088 Figure 2: Up and down quark CEDM plots. We use tanβ = 7 at low energy scale and M1=2 = 1 TeV, A0 = −1 TeV and µ = 500 GeV at GUT scale. Right, middle and left four plots correspond to imaginary Yu, real Yu and Yu of this model at GUT scale. We set m0 value as 5 TeV (red), 10 TeV (blue), 20 TeV (green) and 40 TeV (orange). Black solid line is current bounds and dashed line is expected future bounds. + or − signs. For each input, we use tanβ = 7 at low energy scale and M1=2 = 1 TeV, A0 = −1 TeV and µ = 500 GeV at GUT scale and we set m0 value as 5 TeV (red), 10 TeV (blue), 20 TeV (green) and 40 TeV (orange). Black solid lines show the current bounds Eq. (3.2) and black dashed lines are the expected future bounds. From Fig. 2, CEDM values of this model is smaller than the other two situations because of specific structure of Yu at GUT scale. So, we can conclude that up and down quark CEDM of this model satisfy the current bounds. And furthermore, these may be some signals in future experiments. 4. Summary and discussion We have shown that in E6 × SU(2)F ×U(1)A SUSY GUT model, up and down quark CEDM satisfy the current bounds. Moreover, we found that there may be some signals in future exper- iments. However, there are some considerations that we ignore here. Especially, it is non-trivial whether 125 GeV Higgs mass is really obtained. Of course, in order to obtain more precise value of CEDMs, we must consider two-loop RGE to get low energy parameters.
{"url":"https://docslib.org/doc/3196429/pos-fpcp2015-088-grand-uni-a-1-u-%C3%97-susy-gut-f-gauge-symmetries","timestamp":"2024-11-02T11:34:53Z","content_type":"text/html","content_length":"63694","record_id":"<urn:uuid:1dd90811-7c48-485b-b692-e7c0e6448d23>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00599.warc.gz"}
In [Mineno K., Nakamura Y., Ohwada T., Characterization of the intermediate values of the triangle inequality, Math. Inequal. Appl., 2012, 15(4), 1019–1035] there was established a norm inequality which characterizes all intermediate values of the triangle inequality, i.e. C n that satisfy 0 ≤ C n ≤ Σj=1n ‖x j‖ − ‖Σj=1n x j‖, x 1,...,x n ∈ X. Here we study when this norm inequality attains equality in strictly convex Banach spaces.
{"url":"https://eudml.org/subject/MSC/26D20","timestamp":"2024-11-05T06:17:40Z","content_type":"application/xhtml+xml","content_length":"46952","record_id":"<urn:uuid:f59b9a4e-caf3-49fa-91f8-d21c24da2813>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00127.warc.gz"}
A ball with a mass of 7 kg moving at 4 m/s hits a still ball with a mass of 12 kg. If the first ball stops moving, how fast is the second ball moving? | HIX Tutor A ball with a mass of #7 kg# moving at #4 m/s# hits a still ball with a mass of #12 kg#. If the first ball stops moving, how fast is the second ball moving? Answer 1 Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To find the velocity of the second ball after the collision, you can use the principle of conservation of momentum. The total momentum before the collision equals the total momentum after the Momentum before collision = Momentum after collision (m_1 \times v_1 + m_2 \times v_2 = (m_1 + m_2) \times v_f) Where: (m_1 = 7) kg (mass of the first ball) (v_1 = 4) m/s (initial velocity of the first ball) (m_2 = 12) kg (mass of the second ball) (v_2 =) velocity of the second ball (to be found) (v_f = 0) m/s (final velocity, as the first ball stops moving) (7 \times 4 + 12 \times v_2 = (7 + 12) \times 0) (28 + 12 \times v_2 = 0) (12 \times v_2 = -28) (v_2 = \frac{-28}{12} = -\frac{7}{3} \approx -2.33) m/s Therefore, the second ball is moving at approximately (2.33) m/s in the opposite direction. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/a-ball-with-a-mass-of-7-kg-moving-at-4-m-s-hits-a-still-ball-with-a-mass-of-12-k-8f9af8b479","timestamp":"2024-11-11T03:31:31Z","content_type":"text/html","content_length":"579561","record_id":"<urn:uuid:fd7aeaa1-a599-44f7-84f5-0826d250f98f>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00712.warc.gz"}
LibGuides: Mathematics: Internet Resources a free curriculum resource for homeschoolers. It’s broken down by subjects, such as algebra or geometry, and links to videos, quizzes, and activities each day CliffsNotes study guides are written by real teachers and professors, so no matter what you're studying, CliffsNotes can ease your homework headaches and help you score high on exams. Calc 3 Course from Brown Univeristy taught by Yuri Sulyma from Clark University and offers a basic intro to trigonometry Prepare for test day with our AP study guides, practice questions, and cheatsheets. offers free lessons in advanced math subjects including calculus, statistics, and trigonometry Tutorials for algebra, trigonometry, and calculus/games and videos for calculus Find help for a variety of math topics at different levels From free games to free mortgage calculators, Math.com is more than a place for students to figure out what the answers to their homework problems are right before class starts. has printable reference tables to help with algebra, calculus, trigonometry, etc. Notes and formula sheets for algebra, trig, calculus, and differential equations Tutorials on algebra, word problems, and number bases designed to help the struggling algebra student Variety of topics in algebra, trig, calculus, and differential equations Virtual Math Lab tutorials for beginning, intermediate, and college algebra
{"url":"https://libguides.tridenttech.edu/math/internet","timestamp":"2024-11-07T08:49:22Z","content_type":"text/html","content_length":"43694","record_id":"<urn:uuid:0f201fb0-be60-483f-9def-d9cdff720fdc>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00535.warc.gz"}
Phylogenetic Networks Publications of Year 2005 Dave MacLeod, Robert L. Charlebois, W. Ford Doolittle and Eric Bapteste. Deduction of probable events of lateral gene transfer through comparison of phylogenetic trees by recursive consolidation and rearrangement. In BMCEB, Vol. 5(27), 2005. Keywords: explicit network, from rooted trees, lateral gene transfer, phylogenetic network, phylogeny, Program HorizStory, reconstruction, software. Note: http://dx.doi.org/10.1186/1471-2148-5-27. Toggle abstract David A. Morrison. Networks in phylogenetic analysis: new tools for population biology. In IJP, Vol. 35:567-582, 2005. Keywords: median network, NeighborNet, phylogenetic network, phylogeny, population genetics, Program Network, Program Spectronet, Program SplitsTree, Program T REX, Program TCS, reconstruction, reticulogram, split decomposition, survey. Note: http://hem.fyristorg.com/acacia/papers/networks.pdf. Luay Nakhleh, Tandy Warnow, C. Randal Linder and Katherine St. John. Reconstructing reticulate evolution in species - theory and practice. In JCB, Vol. 12(6):796-811, 2005. Keywords: from rooted trees, galled tree, phylogenetic network, phylogeny, polynomial, Program SPNet, reconstruction, software. Note: http://www.cs.rice.edu/~nakhleh/Papers/NWLSjcb.pdf. Mihaela Baroni, Stefan Grünewald, Vincent Moulton and Charles Semple. Bounding the number of hybridization events for a consistent evolutionary history. In JOMB, Vol. 51(2):171-182, 2005. Keywords: agreement forest, bound, explicit network, from rooted trees, hybridization, minimum number, phylogenetic network, phylogeny, reconstruction, SPR distance. Note: http://www.math.canterbury.ac.nz/~c.semple/papers/BGMS05.pdf. Toggle abstract Richard C. Winkworth, David Bryant, Peter J. Lockhart, David Havell and Vincent Moulton. Biogeographic Interpretation of Splits Graphs: Least Squares Optimization of Branch Lengths. In Systematic Biology, Vol. 54(1):56-65, 2005. Keywords: abstract network, from distances, from network, phylogenetic network, phylogeny, reconstruction, split, split network. Note: http://www.math.auckland.ac.nz/~bryant/Papers/05Biogeographic.pdf. Insa Cassens, Patrick Mardulyn and Michel C. Milinkovitch. Evaluating Intraspecific Network Construction Methods Using Simulated Sequence Data: Do Existing Algorithms Outperform the Global Maximum Parsimony Approach? In Systematic Biology, Vol. 54(3):363-372, 2005. Keywords: abstract network, evaluation, from unrooted trees, haplotype network, parsimony, phylogenetic network, phylogeny, Program Arlequin, Program CombineTrees, Program Network, Program TCS, reconstruction, software. Note: http://www.lanevol.org/LANE/publications_files/Cassens_etal_SystBio_2005.pdf. Martyn Kennedy, Barbara R. Holland, Russel D. Gray and Hamish G. Spencer. Untangling Long Branches: Identifying Conflicting Phylogenetic Signals Using Spectral Analysis, Neighbor-Net, and Consensus Networks. In Systematic Biology, Vol. 54(4):620-633, 2005. Keywords: abstract network, consensus, NeighborNet, phylogenetic network, phylogeny. Note: http://awcmee.massey.ac.nz/people/bholland/pdf/Kennedy_etal_2005.pdf. Charles Choy, Jesper Jansson, Kunihiko Sadakane and Wing-Kin Sung. Computing the maximum agreement of phylogenetic networks. In TCS, Vol. 335(1):93-107, 2005. Keywords: dynamic programming, FPT, level k phylogenetic network, MASN, NP complete, phylogenetic network, phylogeny. Note: http://www.df.lth.se/~jj/Publications/masn8_TCS2005.pdf. Toggle abstract Sergey Bereg and Kathryn Bean. Constructing Phylogenetic Networks from Trees. In BIBE05, Pages 299-305, 2005. Keywords: evaluation, from distances, phylogenetic network, phylogeny, Program SplitsTree, Program T REX, reconstruction, split, split network. Note: http://dx.doi.org/10.1109/BIBE.2005.19. Toggle abstract Luay Nakhleh and Li-San Wang. Phylogenetic Networks, Trees, and Clusters. In IWBRA05, Vol. 3515:919-926 of LNCS, springer, 2005. Keywords: cluster containment, evaluation, from clusters, from network, from rooted trees, phylogenetic network, phylogeny, polynomial, tree containment, tree-child network. Note: http://www.cs.rice.edu/~nakhleh/Papers/NakhlehWang.pdf. Bhaskar DasGupta, Sergio Ferrarini, Uthra Gopalakrishnan and Nisha Raj Paryani. Inapproximability results for the lateral gene transfer problem. In Proceedings of the Ninth Italian Conference on Theoretical Computer Science (ICTCS'05), Pages 182-195, springer, 2005. Keywords: approximation, from rooted trees, from species tree, inapproximability, lateral gene transfer, parsimony, phylogenetic network, phylogeny. Note: http://www.cs.uic.edu/~dasgupta/resume/publ/papers/ictcs-final.pdf. Daniel H. Huson, Tobias Kloepper, Peter J. Lockhart and Mike Steel. Reconstruction of Reticulate Networks from Gene Trees. In RECOMB05, Vol. 3500:233-249 of LNCS, springer, 2005. Keywords: from rooted trees, from splits, phylogenetic network, phylogeny, reconstruction, split, split network, visualization. Note: http://dx.doi.org/10.1007/11415770_18. Trinh N. D. Huynh, Jesper Jansson, Nguyen Bao Nguyen and Wing-Kin Sung. Constructing a Smallest Refining Galled Phylogenetic Network. In RECOMB05, Vol. 3500:265-280 of LNCS, springer, 2005. Keywords: from rooted trees, galled tree, NP complete, phylogenetic network, phylogeny, polynomial, Program SPNet, reconstruction. Note: http://www.df.lth.se/~jj/Publications/refining_gn3_RECOMB2005.pdf. Jesper Jansson, Nguyen Bao Nguyen and Wing-Kin Sung. Algorithms for Combining Rooted Triplets into a Galled Phylogenetic Network. In SODA05, Pages 349-358, 2005. Keywords: approximation, explicit network, from triplets, galled tree, phylogenetic network, phylogeny, polynomial, reconstruction. Note: http://portal.acm.org/citation.cfm?id=1070481. Luay Nakhleh and Li-San Wang. Phylogenetic Networks: Properties and Relationship to Trees and Clusters. In TCSB2, Vol. 3680:82-99 of LNCS, springer, 2005. Keywords: cluster containment, evaluation, from clusters, from network, from rooted trees, phylogenetic network, phylogeny, polynomial, tree containment, tree-child network. Note: http://www.cs.rice.edu/~nakhleh/Papers/LNCS_TCSB05.pdf. Rune Lyngsø, Yun S. Song and Jotun Hein. Minimum Recombination Histories by Branch and Bound. In WABI05, Vol. 3692:239-250 of LNCS, springer, 2005. Keywords: ARG, branch and bound, from sequences, minimum number, Program Beagle, recombination, reconstruction, software. Note: http://www.cs.ucdavis.edu/~yssong/Pub/WABI05-239.pdf. David Bryant. Extending tree models to splits networks. In Lior Pachter and Bernd Sturmfels editors, Algebraic Statistics for Computational Biology, Pages 322-334, Cambridge University Press, 2005. Keywords: abstract network, from splits, likelihood, phylogenetic network, phylogeny, split, split network, statistical model. Note: http://www.math.auckland.ac.nz/~bryant/Papers/05ascbChapter.pdf. Derek Ruths. Applications of phylogenetic incongruence to detecting and reconstructing interspecific recombination and horizontal gene transfer. Master's thesis, Rice University, U.S.A., 2005. Keywords: explicit network, from rooted trees, from species tree, heuristic, phylogenetic network, phylogeny, polynomial, reconstruction. Note: http://hdl.handle.net/1911/17912.
{"url":"http://phylnet.univ-mlv.fr/show.php?year=2005","timestamp":"2024-11-04T02:51:47Z","content_type":"text/html","content_length":"188973","record_id":"<urn:uuid:dc2439a2-15a0-4cef-8817-b3659d916bd9>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00673.warc.gz"}
Chennai Mathematical Institute Mathematics Seminar Date: Thursday, 07 September 2023 Time: 2:00 PM Venue: Lecture Hall 6 On Gersten-type conjecture for mod p etale motivic cohomology in mixed characteristic (0, p) Makoto Sakagaito IIT Gandhinagar. Let X be a smooth scheme over the spectrum of a regular local ring A. Let Z(n)^X be Bloch’s cycle complex for etale topology and Z/m(n)^X := Z(n)^X ⊗Z/mZ. Then Geisser-Levine proved that Z/m(n)^X is quasi-isomorphic to a shifted logarithmic de Rham-Witt sheaf W_rΩ^n_{X, \mathrm{log}}[−n]) of X, in the case where A is a field of positive characteristic p > 0 and m = p^r. Moreover, Geisser-Levine also proved that Z/m(n) is quasi-isomorphic to the sheaf μ_m of m-th roots of unity, in the case where A is a field and m is prime to the characteristic of As an analogy of Gersten’s conjecture for algebraic K-theory, Gros-Suwa and Bloch Ogus proved that there is an exact sequence for etale hypercohomology of a local ring O_{X,x} of X at a point x with values in Z/m(n) (this etale hypercohomology is called mod m etale motivic cohomology) in the above cases. In this talk, we prove that such a Gersten-type conjecture holds for mod p etale ́motivic cohomology of the henselization of a local ring O_{X,x} in the case where A is a discrete valuation ring of mixed characteristic (0, p) and A contains p-th roots of unity.
{"url":"https://www.cmi.ac.in/activities/show-abstract.php?absyear=2023&absref=85&abstype=sem","timestamp":"2024-11-11T16:02:59Z","content_type":"text/html","content_length":"7764","record_id":"<urn:uuid:a77e6b4d-0a7d-46cd-8d91-764f660d10b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00589.warc.gz"}
What is 302 Fahrenheit to Kelvin? - ConvertTemperatureintoCelsius.info 302 degrees Fahrenheit is a specific temperature measurement often used in scientific and everyday contexts. When converting it to the Kelvin scale, we need to use a specific formula to accurately calculate the equivalent temperature. The Kelvin scale is an absolute temperature scale in which zero is defined as absolute zero, the theoretical absence of all thermal energy. To convert Fahrenheit to Kelvin, we can use the following formula: K = (F – 32) × 5/9 + 273.15. In this formula, K represents the temperature in Kelvin, and F represents the temperature in Fahrenheit. So, let’s do the calculation: K = (302 – 32) × 5/9 + 273.15 K = (270) × 5/9 + 273.15 K = 149 + 273.15 K = 422.15 Kelvin. Therefore, 302 degrees Fahrenheit is equivalent to 422.15 Kelvin. Understanding temperature conversion is crucial in various scientific fields such as chemistry, physics, and engineering. For example, in chemical reactions, the temperature of a system can affect the rate and yield of the reaction. In physics, the behavior of gases at different temperatures is a fundamental concept governed by the principles of thermodynamics. In engineering, understanding temperature conversions is essential for designing systems and equipment that operate within specific temperature ranges. In summary, the conversion of 302 degrees Fahrenheit to Kelvin is a straightforward process using the formula K = (F – 32) × 5/9 + 273.15. This conversion is important in various scientific and everyday applications, where understanding temperature scales and their relationships is key to making accurate measurements and calculations.
{"url":"https://converttemperatureintocelsius.info/what-is-302-fahrenheit-in-kelvin/","timestamp":"2024-11-05T22:34:04Z","content_type":"text/html","content_length":"72186","record_id":"<urn:uuid:50e0fdfb-1cf2-46b3-b768-b71a3b6adef9>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00095.warc.gz"}
2019 Topcoder Open Algorithm Round 4 - Topcoder August 5, 2019 2019 Topcoder Open Algorithm Round 4 To start, we can try all one-digit and two-digit numbers to be sure to get rid of all special cases. (There aren’t really any special cases other than n=10 with product 0, but why not be extra safe.) If a bigger number is the optimal answer, then clearly: • it does not contain the digit 0 (product 0 was already obtained for n=10) • its digits are sorted in nondecreasing order (because rearranging digits leaves the same product but makes the number bigger) The count of such numbers is very small and we can generate and test all of them. More precise math: if you are generating 18-digit numbers with non-decreasing digits from [1-9], the count is binomial(18+8,8) = about 1.5 million, because each number corresponds to some sequence of 18 “print” and 8 “inc” commands. If you want to go beyond the task, a nice exercise is to look for as many ways to speed up the search as you can. To list some: After n=11 you don’t need the digit 1, as it does not change the product. The number will never contain combinations of digits like 22 or 23 (can be replaced by 4 or 6). Combinations like 25 will give you n=0 in two steps, so they are also useless for bigger The most fun is that there are still open problems related to this simple task. In particular, we do not know whether the sequences are actually infinite. For base = all zeros, the last term we know is actually the last one you found while solving the task. We know that if there are other terms, they have at least tens of thousands of digits, and it is conjectured that the sequence is actually finite. See http://oeis.org/A003001 for more. The biggest part of solving this task is in finding a general construction that is reasonably easy to implement. Below is a sample figure showing the construction used by the reference solution. Essentially, I always use some points on the current line that are just to the right of the input to “eat” the empty space on the line above that one. This way I’m sure I won’t get any accidental grid points into the polygon: there aren’t any between two consecutive rows. The checker for this problem was also fun to write. I used winding number for the point-in-polygon check, and Pick’s theorem as an easy way to check whether your polygon doesn’t contain any other grid points on the inside. The key observation is that the number of Hamiltonian cycles has to be small. Why? If we only have one root + (N-1) leaves, we have N distinct Hamiltonian cycles starting at the root and going “to the right”. More precisely, once we choose the edge along which the cycle leaves the root, the only option is to do N-1 jumps and then to use the neighboring edge to return to the root. (The tiniest tree is a special case, as both choices produce the same cycle. This was shown in the examples and it was easy to handle.) If we have a deeper tree, observe a deepest leaf Y. It has a parent X that is not the root, and a sibling Z that is also a leaf. We have a triangle XYZ in our graph, and there are only three other outgoing edges: from Y to the previous leaf, from Z to the next leaf, and from X to its parent. Thus, the Hamiltonian cycle has to pass through this triangle exactly once, and the previous and next vertex on the cycle will uniquely determine the order in which it visits X, Y, and Z. Hence, we can imagine that we contract XYZ into a single new node. Any Hamiltonian cycle in the new graph will correspond to exactly one cycle in the old graph and vice versa. The previous argument can be repeated until we get a tree that is just root + leaves. Thus, the total number of cycles is just 2 (directions of traversal) * N (starting points) * degree of the root <= 2 * 250 * 249. In order to generate all cycles, probably the easiest implementation is using three recursive functions: “traverseLeafLeaf”, “traverseLeafRoot”, and “traverseRootLeaf” that return the unique path that traverses an entire subtree from leftmost leaf or root to rightmost leaf or root. Then, we just sort everything, check whether the index is small enough, and output the corresponding sequence.
{"url":"https://www.topcoder.com/blog/2019-topcoder-open-algorithm-round-4/","timestamp":"2024-11-06T18:49:56Z","content_type":"text/html","content_length":"66801","record_id":"<urn:uuid:2ef97d38-0cfd-412b-bed1-563b70950e0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00773.warc.gz"}
Reference Database AUTHOR = {Perschell, Karaloine and Huff, Loran}, TITLE = {{M}ersenne primes in imaginary quadratic number fields}, NOTE = {avaliable from \url{http://www.utm.edu/staff/caldwell/preprints/kpp/Paper2.pdf}}, year = {2002}, abstract = {We examine all primes of the form $b^n-1$ in imaginary quadratic number fields, noting that besides a finite list of specific prime numbers, we find only three interesting classes of primes. Using the rational Mersenne primes (the first of the three cases) as a model, we follow Robert Spira and Mike Oakes in defining the Gaussian Mersenne primes to be the primes of the form $(1\pm)^p-1$ and the Eisenstein Mersenne primes to be the primes of the form $((3\pm\sqrt{-3}/2)^p-1.$ We show how to characterize these via their norms, list the known examples and speculate on their distributions.}
{"url":"https://t5k.org/references/refs.cgi?raw=PH2002","timestamp":"2024-11-07T16:16:03Z","content_type":"text/html","content_length":"4326","record_id":"<urn:uuid:d96a4649-18f7-49d8-8ae3-b2d481e214cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00697.warc.gz"}
Gustafson's law Short description: Theoretical speedup formula in computer architecture In computer architecture, Gustafson's law (or Gustafson–Barsis's law^[1]) gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline. To put it another way, it is the theoretical "slowdown" of an already parallelized task if running on a serial machine. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988.^[2] Gustafson estimated the speedup [math]\displaystyle{ S }[/math] of a program gained by using parallel computing as follows: [math]\displaystyle{ S &= s + p \times N \\ &= s + (1 - s) \times N \\ &= N + (1 - N) \times s }[/math] • [math]\displaystyle{ S }[/math] is the theoretical speedup of the program with parallelism (scaled speedup^[2]); • [math]\displaystyle{ N }[/math] is the number of processors; • [math]\displaystyle{ s }[/math] and [math]\displaystyle{ p }[/math] are the fractions of time spent executing the serial parts and the parallel parts of the program on the parallel system, where [math]\displaystyle{ s + p = 1 }[/math]. Alternatively, [math]\displaystyle{ S }[/math] can be expressed using [math]\displaystyle{ p }[/math]: [math]\displaystyle{ S &= (1 - p) + p \times N \\ &= 1 + (N - 1) \times p }[/math] Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is of an execution workload that does not change with respect to the improvement of the resources. Gustafson's law instead proposes that programmers tend to increase the size of problems to fully exploit the computing power that becomes available as the resources Gustafson and his colleagues further observed from their workloads that time for the serial part typically does not grow as the problem and the system scale,^[2] that is, [math]\displaystyle{ s }[/ math] is fixed. This gives a linear model between the processor count [math]\displaystyle{ N }[/math] and the speedup [math]\displaystyle{ S }[/math] with slope [math]\displaystyle{ 1 - s }[/math], as shown in the figure above (which uses different notations: [math]\displaystyle{ P }[/math] for [math]\displaystyle{ N }[/math] and [math]\displaystyle{ a }[/math] for [math]\displaystyle{ s }[/ math]). Also, [math]\displaystyle{ S }[/math] scales linearly with [math]\displaystyle{ s }[/math] rather than exponentially in the Amdahl's Law.^[2] With these observations, Gustafson "expect[ed] to extend [their] success [on parallel computing] to a broader range of applications and even larger values for [math]\displaystyle{ N }[/math]".^[2] The impact of Gustafson's law was to shift research goals to select or reformulate problems so that solving a larger problem in the same amount of time would be possible. In a way the law redefines efficiency, due to the possibility that limitations imposed by the sequential part of a program may be countered by increasing the total amount of computation. The execution time of a program running on a parallel system can be split into two parts: • a part that does not benefit from the increasing number of processors (serial part); • a part that benefits from the increasing number of processors (parallel part). Example. — A computer program that processes files from disk. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can. Without loss of generality, let the total execution time on the parallel system be [math]\displaystyle{ T = 1 }[/math]. Denote the serial time as [math]\displaystyle{ s }[/math] and the parallel time as [math]\displaystyle{ p }[/math], where [math]\displaystyle{ s + p = 1 }[/math]. Denote the number of processors as [math]\displaystyle{ N }[/math]. Hypothetically, when running the program on a serial system (only one processor), the serial part still takes [math]\displaystyle{ s }[/math], while the parallel part now takes [math]\displaystyle{ Np }[/math]. The execution time on the serial system is: [math]\displaystyle{ T' = s + Np }[/math] Using [math]\displaystyle{ T' }[/math] as the baseline, the speedup for the parallel system is: [math]\displaystyle{ S = \frac{T'}{T} = \frac{s + Np}{s + p} = \frac{s + Np}{1} = s + Np }[/math] By substituting [math]\displaystyle{ p = 1 - s }[/math] or [math]\displaystyle{ s = 1 - p }[/math], several forms in the previous section can be derived. Application in research Amdahl's law presupposes that the computing requirements will stay the same, given increased processing power. In other words, an analysis of the same data will take less time given more computing Gustafson, on the other hand, argues that more computing power will cause the data to be more carefully and fully analyzed: pixel by pixel or unit by unit, rather than on a larger scale. Where it would not have been possible or practical to simulate the impact of nuclear detonation on every building, car, and their contents (including furniture, structure strength, etc.) because such a calculation would have taken more time than was available to provide an answer, the increase in computing power will prompt researchers to add more data to more fully simulate more variables, giving a more accurate result. Application in everyday computer systems Amdahl's Law reveals a limitation in, for example, the ability of multiple cores to reduce the time it takes for a computer to boot to its operating system and be ready for use. Assuming the boot process was mostly parallel, quadrupling computing power on a system that took one minute to load might reduce the boot time to just over fifteen seconds. But greater and greater parallelization would eventually fail to make bootup go any faster, if any part of the boot process were inherently sequential. Gustafson's law argues that a fourfold increase in computing power would instead lead to a similar increase in expectations of what the system will be capable of. If the one-minute load time is acceptable to most users, then that is a starting point from which to increase the features and functions of the system. The time taken to boot to the operating system will be the same, i.e. one minute, but the new system would include more graphical or user-friendly features. Some problems do not have fundamentally larger datasets. As an example, processing one data point per world citizen gets larger at only a few percent per year. The principal point of Gustafson's law is that such problems are not likely to be the most fruitful applications of parallelism. Algorithms with nonlinear runtimes may find it hard to take advantage of parallelism "exposed" by Gustafson's law. Snyder^[3] points out an [math]\displaystyle{ O(N^3) }[/math] algorithm means that double the concurrency gives only about a 26% increase in problem size. Thus, while it may be possible to occupy vast concurrency, doing so may bring little advantage over the original, less concurrent solution—however in practice there have still been considerable improvements. Hill and Marty^[4] emphasize also that methods of speeding sequential execution are still needed, even for multicore machines. They point out that locally inefficient methods can be globally efficient when they reduce the sequential phase. Furthermore, Woo and Lee^[5] studied the implication of energy and power on future many-core processors based on Amdahl's law, showing that an asymmetric many-core processor can achieve the best possible energy efficiency by activating an optimal number of cores given the amount of parallelism is known prior to execution. Al-hayanni, Rafiev et al have developed novel speedup and energy consumption models based on a general representation of core heterogeneity, referred to as the normal form heterogeneity, that support a wide range of heterogeneous many-core architectures. These modelling methods aim to predict system power efficiency and performance ranges, and facilitates research and development at the hardware and system software levels.^[6]^[7] See also 1. ↑ McCool, Michael D.; Robison, Arch D.; Reinders, James (2012). "2.5 Performance Theory". Structured Parallel Programming: Patterns for Efficient Computation. Elsevier. pp. 61–62. ISBN 978-0-12-415993-8. https://books.google.com/books?id=zpaHa5cjLwwC&pg=PA61. 2. ↑ ^2.0 ^2.1 ^2.2 ^2.3 ^2.4 ^2.5 Gustafson, John L. (May 1988). "Reevaluating Amdahl's Law". Communications of the ACM 31 (5): 532–3. doi:10.1145/42411.42415. http://www.johngustafson.net/pubs/ 3. ↑ Snyder, Lawrence (June 1986). "Type Architectures, Shared Memory, and The Corollary of Modest Potential". Annu. Rev. Comput. Sci. 1: 289–317. doi:10.1146/annurev.cs.01.060186.001445. http:// 4. ↑ Hill, Mark D.; Marty, Michael R. (July 2008). "Amdahl's Law in the Multicore Era". IEEE Computer 41 (7): 33–38. doi:10.1109/MC.2008.209. UW CS-TR-2007-1593. http://www.cs.wisc.edu/multifacet/ 5. ↑ Dong Hyuk Woo; Hsien-Hsin S. Lee (December 2008). "Extending Amdahl's Law for Energy-Efficient Computing in the Many-Core Era". IEEE Computer 41 (12): 24–31. doi:10.1109/mc.2008.494. 6. ↑ Rafiev, Ashur; Al-Hayanni, Mohammed A. N.; Xia, Fei; Shafik, Rishad; Romanovsky, Alexander; Yakovlev, Alex (2018-07-01). "Speedup and Power Scaling Models for Heterogeneous Many-Core Systems". IEEE Transactions on Multi-Scale Computing Systems 4 (3): 436–449. doi:10.1109/TMSCS.2018.2791531. ISSN 2332-7766. https://eprint.ncl.ac.uk/fulltext.aspx?url=245030/ 7. ↑ Al-hayanni, Mohammed A. Noaman; Xia, Fei; Rafiev, Ashur; Romanovsky, Alexander; Shafik, Rishad; Yakovlev, Alex (July 2020). "Amdahl's law in the context of heterogeneous many-core systems – a survey" (in en). IET Computers & Digital Techniques 14 (4): 133–148. doi:10.1049/iet-cdt.2018.5220. ISSN 1751-8601. Original source: https://en.wikipedia.org/wiki/Gustafson's law. Read more
{"url":"https://handwiki.org/wiki/Gustafson%27s_law","timestamp":"2024-11-14T02:04:08Z","content_type":"text/html","content_length":"74439","record_id":"<urn:uuid:32baa1e2-a5dd-4395-b39c-91a53d9af277>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00581.warc.gz"}
cobyqa.subsolvers.cauchy_geometry(const, grad, curv, xl, xu, delta, debug)[source]# Maximize approximately the absolute value of a quadratic function subject to bound constraints in a trust region. This function solves approximately \[\begin{split}\max_{s \in \mathbb{R}^n} \quad \bigg\lvert c + g^{\mathsf{T}} s + \frac{1}{2} s^{\mathsf{T}} H s \bigg\rvert \quad \text{s.t.} \quad \left\{ \begin{array}{l} l \le s \le u,\\ \ lVert s \rVert \le \Delta, \end{array} \right.\end{split}\] by maximizing the objective function along the constrained Cauchy direction. Constant \(c\) as shown above. gradnumpy.ndarray, shape (n,) Gradient \(g\) as shown above. Curvature of \(H\) along any vector. returns \(s^{\mathsf{T}} H s\). xlnumpy.ndarray, shape (n,) Lower bounds \(l\) as shown above. xunumpy.ndarray, shape (n,) Upper bounds \(u\) as shown above. Trust-region radius \(\Delta\) as shown above. Whether to make debugging tests during the execution. numpy.ndarray, shape (n,) Approximate solution \(s\). This function is described as the first alternative in Section 6.5 of [1]. It is assumed that the origin is feasible with respect to the bound constraints and that delta is finite and positive. T. M. Ragonneau. Model-Based Derivative-Free Optimization Methods and Software. PhD thesis, Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, China, 2022. URL:
{"url":"https://www.cobyqa.com/stable/dev/generated/cobyqa.subsolvers.cauchy_geometry.html","timestamp":"2024-11-14T22:15:16Z","content_type":"text/html","content_length":"28158","record_id":"<urn:uuid:7d623845-ae79-4858-8eda-0d727a0ee717>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00632.warc.gz"}
Expedition League Run Expectancy Matrix This past summer, I had the opportunity to work with the Spearfish Sasquatch Baseball Club, a collegiate summer ball team in the Expedition League based out of Spearfish, SD. While working for the team, I was the official scorekeeper and statistician. Games were inputted through the software Pointstreak, which produced a variety of elementary statistics, but it lacked more modern player value stats. As a strong enthusiast of modern statistics, I tasked myself with calculating advanced metrics for the players on our roster and in the entire league. The first step to finding most advanced metrics is to create a Run Expectancy Matrix. This matrix has three columns and eight rows, comprising of the average run expectancy of each of the 24 base out scenarios. I logged data of every Sasquatch game throughout the season and inputted it all in Excel to create the following matrix: One thing I want to point out is that this matrix was created using only data from the 60 regular season Sasquatch games, rather than all 360 total regular season games played. This was for a few reasons, mainly because I watched every Sasquatch game and it would’ve taken much more time to create a matrix for the entire league. Because of this smaller and biased sample, the matrix may not accurately reflect the entire league, but I still think it tells us a lot about how baseball is played at this level. In this project, I will use this matrix, as well as similar ones from the EL and MLB to create a variety of fun and useful graphics. Overall, the matrix looks very practical. Generally, run expectancy increases as more runners reach base and decreases as outs increases. There are a few hiccups that may arise from a smaller sample size, such as how abnormally low the value at 2 Outs, Runners on first and third, or 2_103 abbreviated. Why might this be? Here is a set of histograms displaying the distribution of each cell in the matrix, with the red dotted line denoting the average. Most distributions look relatively normal, with a maximum around the average and a steep slope as values move farther away. Surely, with more data the distributions will look more and more normal. Does this explain why 2_103 is smaller than it should be? We see that the distribution is rather limited, and it actually has a max value of only 3. We have seen the averages, but now let’s look at a series of boxplots to explore the median values and outliers. These boxplots match up pretty well with our histograms, showing us the distribution of each base-out state. Around the median is where most points are, with a few outliers. From this we see that the IQR of 2_103 is exactly that of 2_023 and is even greater than 2_120, but it is the possession of outliers that allow the latter to have a higher run expectancy than the former. Now let’s look at a real Run Expectancy Matrix from MLB games. The most recent and updated matrix I was able to find on the internet is courtesy of Tom Tango, at this link. It has data from the years There may be discrepancies, as in the MLB during this time, teams averaged around 4.25 runs per game. Meanwhile, in Sasquatch games, teams scored an average of 7.57 runs per game. In these graphs, we see the different run expectancy for MLB and EL games, with Expedition League in Blue and MLB in Red. The graphs that we’re given tell us a story. With zero outs, the run expectancy of MLB and EL look very consistent, differing by a very similar amount each time. As the amount of outs increases, so does the inconsistency. With two outs, the overlaid bars look much different than they did in the first two graphs. In fact, at 2_103, the run expectancy in the MLB is actually higher than the EL, by a slim margin. Tom Tango also provides us with a run probability matrix, displaying the probability of scoring a run at each base-out state. Here is our probability matrix, as well as Tango’s: Let’s see how these two stack up against each other. Something that surprised me is the difference between runners on first and second (120) and a runner on third (003) in the two leagues. In the Expedition League, having a runner on third base makes a team just about as likely to score as runners on first and third, second and third, or bases loaded. This tells us that no matter how many runners are on base, as long as there is someone on third, the probability of scoring that inning remains the same. In the MLB, this is not the case at all, as it is much more favorable to have runners on first and second. This is likely due to the Expedition League’s high amount wild pitches, which allow the runner on third to score. In the MLB, this runner likely has to get batted in, which is more difficult. The final matrix that Tango provides tells us the frequency of each base-out state. This matrix has the percent of total instances that each base-out state represents. I also created one of these matrices for the Expedition League. Here are both of them: Now let’s look at these probabilities in the form of a tree map. The first thing that jumps out at us is how frequently runners are on base in the Expedition League. The 000 boxes in the EL map are considerably smaller. As it turns out, in the Expedition League runners are on base for 59.49% of all instances. In the MLB, this number is just 44.3%. This is consistent with what we’ve seen this whole time: The Expedition League is a much higher scoring league. This is a fun way to see how these six total matrices compare with one other. How well can we predict one of them using the other five? Let’s see. Using R’s lm() function and AIC, we find our best model to be: y = -0.15756 + 0.91084x[1] + 1.10894x[2] + 1.19993x[3] + e with the following predictors: MLB Run Expectancy = 0.91084 EL Run Probability = 1.10894 EL Base-Out Probability = 1.19993 MLB RE is our most significant predictor, with a p-value of 1.59e-07. This model is highly effective, with an R^2 of 0.9791, telling us that our model explains nearly 98% of all variation. Extremely effective. Let’s look at the graph of predicted vs. actual values. The three points with the highest variation (≥ 0.175) are labeled. Overall, a fantastic fit. Hey, there’s our old friend 2_103 again.
{"url":"http://jackbanks.web.illinois.edu/2021/09/19/expedition-league-run-expectancy-matrix/","timestamp":"2024-11-11T11:04:40Z","content_type":"text/html","content_length":"39916","record_id":"<urn:uuid:6d981994-ae4e-4211-a32f-26811af78b25>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00749.warc.gz"}
Subspace topology explained In topology and related areas of mathematics, a subspace of a topological space X is a subset S of X which is equipped with a topology induced from that of X called the subspace topology (or the relative topology, or the induced topology, or the trace topology).^[1] Given a topological space and a , the subspace topology is defined by That is, a subset of is open in the subspace topology if and only if it is the with an open set . If is equipped with the subspace topology then it is a topological space in its own right, and is called a . Subsets of topological spaces are usually assumed to be equipped with the subspace topology unless otherwise stated. Alternatively we can define the subspace topology for a subset as the coarsest topology for which the inclusion map is continuous. More generally, suppose is an from a set to a topological space . Then the subspace topology on is defined as the coarsest topology for which is continuous. The open sets in this topology are precisely the ones of the form open in is then to its image in (also with the subspace topology) and is called a topological embedding. A subspace is called an open subspace if the injection is an open map , i.e., if the forward image of an open set of is open in . Likewise it is called a closed subspace if the injection is a closed map The distinction between a set and a topological space is often blurred notationally, for convenience, which can be a source of confusion when one first encounters these definitions. Thus, whenever is a subset of , and is a topological space, then the unadorned symbols " " and " " can often be used to refer both to considered as two subsets of , and also to as the topological spaces, related as discussed above. So phrases such as " an open subspace of " are used to mean that is an open subspace of , in the sense used above; that is: (i) ; and (ii) is considered to be endowed with the subspace topology. In the following, represents the real number s with their usual topology. , is the discrete topology considered as a subspace of do not have the discrete topology (for example is not an open set in because there is no open subset of whose intersection with can result in ). If are rational, then the intervals ( ) and [''a'', ''b''] are respectively open and closed, but if are irrational, then the set of all rational is both open and closed. • The set [0,1] as a subspace of is both open and closed, whereas as a subset of it is only closed. , [0, 1] ∪ [2, 3] is composed of two disjoint subsets (which happen also to be closed), and is therefore a disconnected space • Let S = [0, 1) be a subspace of the real line <math>\mathbb{R}</math>. Then [0, {{frac|1|2}}) is open in ''S'' but not in <math>\mathbb{R}</math> (as for example the intersection between (-{{frac |1|2}}, {{frac|1|2}}) and ''S'' results in [0, {{frac|1|2}})). Likewise [{{frac|1|2}}, 1) is closed in ''S'' but not in <math>\mathbb{R}</math> (as there is no open subset of <math>\mathbb{R}</ math> that can intersect with [0, 1) to result in [{{frac|1|2}}, 1)). ''S'' is both open and closed as a subset of itself but not as a subset of <math>\mathbb{R}</math>. == Properties == The subspace topology has the following characteristic property. Let <math>Y</math> be a subspace of <math>X</math> and let <math>i : Y \to X</math> be the inclusion map. Then for any topological space <math>Z</math> a map <math>f : Z\to Y</math> is continuous [[if and only if]] the composite map is continuous. This property is characteristic in the sense that it can be used to define the subspace topology on We list some further properties of the subspace topology. In the following let be a subspace of is continuous then the restriction to is continuous. is continuous then is continuous. are precisely the intersections of with closed sets in is a subspace of is also a subspace of with the same topology. In other words the subspace topology that inherits from is the same as the one it inherits from is an open subspace of ). Then a subset of is open in if and only if it is open in is a closed subspace of ). Then a subset of is closed in if and only if it is closed in is a is a basis for • The topology induced on a subset of a metric space by restricting the metric to this subset coincides with subspace topology for this subset. Preservation of topological properties If a topological space having some topological property implies its subspaces have that property, then we say the property is hereditary. If only closed subspaces must share the property we call it weakly hereditary. See also • Bourbaki, Nicolas, Elements of Mathematics: General Topology, Addison-Wesley (1966) □ Willard, Stephen. General Topology, Dover Publications (2004) Notes and References
{"url":"https://everything.explained.today/Subspace_topology/","timestamp":"2024-11-11T10:34:23Z","content_type":"text/html","content_length":"24206","record_id":"<urn:uuid:8e487ef5-dfee-46e6-92cb-03e9bd35e99e>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00480.warc.gz"}
error bound We consider a class of structured nonsmooth difference-of-convex minimization, which can be written as the difference of two convex functions possibly nonsmooth with the second one in the format of the maximum of a finite convex smooth functions. We propose two extrapolation proximal difference-of-convex based algorithms for potential acceleration to converge to a weak/standard d-stationary … Read more A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general, in the sense that it combines the possibility of explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a … Read more Exact computation of an error bound for a generalized linear complementarity problem with unique solution This paper considers a generalized form of the standard linear complementarity problem with unique solution and provides a more precise expression of an upper error bound discovered by Chen and Xiang in 2006. This expression has at least two advantages. It makes possible the exact computation of the error bound factor and it provides a … Read more Optimal error bounds in the absence of constraint qualifications with applications to the p-cones and beyond We prove tight Hölderian error bounds for all p-cones. Surprisingly, the exponents differ in several ways from those that have been previously conjectured; moreover, they illuminate p-cones as a curious example of a class of objects that possess properties in 3 dimensions that they do not in 4 or more. Using our error bounds, we … Read more Utility Preference Robust Optimization with Moment-Type Information Structure Utility preference robust optimization (PRO) models are recently proposed to deal with decision making problems where the decision maker’s true utility function is unknown and the optimal decision is based on the worst case utility function from an ambiguity set of utility functions. In this paper, we consider the case where the ambiguity set is … Read more Error bounds, facial residual functions and applications to the exponential cone We construct a general framework for deriving error bounds for conic feasibility problems. In particular, our approach allows one to work with cones that fail to be amenable or even to have computable projections, two previously challenging barriers. For the purpose, we first show how error bounds may be constructed using objects called one-step facial … Read more Complementary problems with polynomial data Given polynomial maps $f, g \colon \mathbb{R}^n \to \mathbb{R}^n,$ we consider the {\em polynomial complementary problem} of finding a vector $x \in \mathbb{R}^n$ such that \begin{equation*} f(x) \ \ ge \ 0, \quad g(x) \ \ge \ 0, \quad \textrm{ and } \quad \langle f(x), g(x) \rangle \ = \ 0. \end{equation*} In this paper, we … Read more Error Bounds and Singularity Degree in Semidefinite Programming In semidefinite programming a proposed optimal solution may be quite poor in spite of having sufficiently small residual in the optimality conditions. This issue may be framed in terms of the discrepancy between forward error (the unmeasurable `true error’) and backward error (the measurable violation of optimality conditions). In his seminal work, Sturm provided an … Read more On the Linear Convergence of Difference-of-convex Algorithms for Nonsmooth DC Programming In this paper we consider the linear convergence of algorithms for minimizing difference- of-convex functions with convex constraints. We allow nonsmoothness in both of the convex and concave components in the objective function, with a finite max structure in the concave compo- nent. Our focus is on algorithms that compute (weak and standard) d(irectional)-stationary points … Read more Amenable cones: error bounds without constraint qualifications We provide a framework for obtaining error bounds for linear conic problems without assuming constraint qualifications or regularity conditions. The key aspects of our approach are the notions of amenable cones and facial residual functions. For amenable cones, it is shown that error bounds can be expressed as a composition of facial residual functions. The … Read more
{"url":"https://optimization-online.org/tag/error-bound/","timestamp":"2024-11-03T16:09:29Z","content_type":"text/html","content_length":"109335","record_id":"<urn:uuid:80e72964-781f-4f04-9ddc-dfd49ca90fda>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00801.warc.gz"}
Graphs :: CC 315 Textbook Graphs are a good data structure for relational data. This would include data in which elements can have some sort of similarity or distance defined between those elements. This measure of similarity between elements can be defined as realistically or abstractly as needed for the data set. The distance can be as simple as listing neighbors or adjacent elements. Graphs are multidimensional data structures that can represent many different types of data using nodes and edges. We can have graphs that are weighted and/or directed and we have introduced two ways we can represent graphs: Matrix Graphs The first implementation of graphs that we looked at were matrix graphs. In this implementation, we had an array for the nodes and a two dimensional array for all of the possible edges. List Graphs The second implementation of graphs were list graphs. For this implementation, we had a single array of graph node objects where the graph node objects tracked their own edges. Recall that we discussed sparse and dense graphs. Matrix graphs are better for dense graphs since a majority of the elements in the two dimensional array of edges will be filled. A great example of a dense graph would be relationships in a small community, where each person is connected to each other person in some way. List graphs are better for sparse graphs, since each node only needs to store the outgoing edges it is connected to. This eliminates a large amount of the overhead that would be present in a matrix graph if there were thousands of nodes and each node was only connected to a few other nodes. A great example of a sparse graph would be a larger social network such as Facebook. Facebook has over a billion users, but each user has on average only a few hundred connections. So, it is much easier to store a list of those few hundred connections instead of a two dimensional matrix that has over one quintillion ($10^{18}$) elements. In the next chapter, we will discuss the specific implications of using one or the other. However, in our requirement analysis it is important to take this into consideration. If we have relational data where many elements are considered to be connected to many other elements, then a matrix graph will be preferred. If the elements of our data set are infrequently connected, then a list graph is the better choice.
{"url":"https://textbooks.cs.ksu.edu/cc315/v-requirements-analysis/12-requirements-analysis/4-graphs/","timestamp":"2024-11-07T16:34:11Z","content_type":"text/html","content_length":"50842","record_id":"<urn:uuid:d1f406e4-5402-4581-a4f1-b0187da37fa2>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00831.warc.gz"}
Travelling salesman problem The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. TSP is a special case of the travelling purchaser problem and the vehicle routing problem. In the theory of computational complexity, the decision version of the TSP (where, given a length L, the task is to decide whether the graph has any tour shorter than L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, a large number of heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%.^[1] The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources. In many applications, additional constraints such as limited resources or time windows may be imposed. The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment.^[2] The travelling salesman problem was mathematically formulated in the 1800s by the Irish mathematician W.R. Hamilton and by the British mathematician Thomas Kirkman. Hamilton’s Icosian Game was a recreational puzzle based on finding a Hamiltonian cycle.^[3] The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defines the problem, considers the obvious brute-force algorithm, and observes the non-optimality of the nearest neighbour heuristic: We denote by messenger problem (since in practice this question should be solved by each postman, anyway also by many travelers) the task to find, for finitely many points whose pairwise distances are known, the shortest route connecting the points. Of course, this problem is solvable by finitely many trials. Rules which would push the number of trials below the number of permutations of the given points, are not known. The rule that one first should go from the starting point to the closest point, then to the point closest to this, etc., in general does not yield the shortest route. ^[4] It was first considered mathematically in the 1930s by Merrill Flood who was looking to solve a school bus routing problem.^[5] Hassler Whitney at Princeton University introduced the name travelling salesman problem soon after.^[6] In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the USA after the RAND Corporation in Santa Monica, offered prizes for steps in solving the problem.^[5] Notable contributions were made by George Dantzig, Delbert Ray Fulkerson and Selmer M. Johnson from the RAND Corporation, who expressed the problem as an integer linear program and developed the cutting plane method for its solution. They wrote what is considered the seminal paper on the subject in which with these new methods they solved an instance with 49 cities to optimality by constructing a tour and proving that no other tour could be shorter. Dantzig, Fulkerson and Johnson, however, speculated that given a near optimal solution we may be able to find optimality or prove optimality by adding a small amount of extra inequalities (cuts). They used this idea to solve their initial 49 city problem using a string model. They found they only needed 26 cuts to come to a solution for their 49 city problem. While this paper did not give an algorithmic approach to TSP problems, the ideas that lay within it were indispensable to later creating exact solution methods for the TSP, though it would take 15 years to find an algorithmic approach in creating these cuts.^[5] As well as cutting plane methods, Dantzig, Fulkerson and Johnson used branch and bound algorithms perhaps for the first time.^[5] In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry, physics, and other sciences. In the 1960s however a new approach was created, instead of finding optimal solutions, people tried to instead find the worst solutions and in doing so, created lower bounds for the problem. These may then be used with branch and bound approaches. One method of doing this was to create the minimum spanning tree of the graph and then multiply the cost of this by 2.^[5] Christofides made a big advance in this approach of giving an approach for which we know the worst-case scenario. His algorithm given in 1976, at worst is 1.5 times longer than the optimal solution. As the algorithm was so simple and quick, many hoped it would give way to a near optimal solution method. However, until 2011 when it was beaten by less than a billionth of a percent, this remained the method with the best worst-case scenario.^[7] Richard M. Karp showed in 1972 that the Hamiltonian cycle problem was NP-complete, which implies the NP-hardness of TSP. This supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours. Great progress was made in the late 1970s and 1980, when Grötschel, Padberg, Rinaldi and others managed to exactly solve instances with up to 2392 cities, using cutting planes and branch-and-bound. In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program Concorde that has been used in many recent record solutions. Gerhard Reinelt published the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has been used by many research groups for comparing results. In 2006, Cook and others computed an optimal tour through an 85,900-city instance given by a microchip layout problem, currently the largest solved TSPLIB instance. For many other instances with millions of cities, solutions can be found that are guaranteed to be within 2-3% of an optimal tour.^[8] The problem is sometimes, especially in newer publications, referred to as Travelling Salesperson Problem. As a graph problem TSP can be modelled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's weight. It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e. each pair of vertices is connected by an edge). If no path exists between two cities, adding an arbitrarily long edge will complete the graph without affecting the optimal tour. Asymmetric and symmetric In the symmetric TSP, the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the asymmetric TSP , paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic collisions, one-way streets, and airfares for cities with different departure and arrival fees are examples of how this symmetry could break down. Related problems • An equivalent formulation in terms of graph theory is: Given a complete weighted graph (where the vertices would represent the cities, the edges would represent the roads, and the weights would be the cost or distance of that road), find a Hamiltonian cycle with the least weight. • The requirement of returning to the starting city does not change the computational complexity of the problem, see Hamiltonian path problem. • Another related problem is the bottleneck travelling salesman problem (bottleneck TSP): Find a Hamiltonian cycle in a weighted graph with the minimal weight of the weightiest edge. The problem is of considerable practical importance, apart from evident transportation and logistics areas. A classic example is in printed circuit manufacturing: scheduling of a route of the drill machine to drill holes in a PCB. In robotic machining or drilling applications, the "cities" are parts to machine or holes (of different sizes) to drill, and the "cost of travel" includes time for retooling the robot (single machine job sequencing problem).^[9] • The generalized travelling salesman problem, also known as the "travelling politician problem", deals with "states" that have (one or more) "cities" and the salesman has to visit exactly one "city" from each "state". One application is encountered in ordering a solution to the cutting stock problem in order to minimize knife changes. Another is concerned with drilling in semiconductor manufacturing, see e.g., U.S. Patent 7,054,798. Noon and Bean demonstrated that the generalized travelling salesman problem can be transformed into a standard travelling salesman problem with the same number of cities, but a modified distance matrix. • The sequential ordering problem deals with the problem of visiting a set of cities where precedence relations between the cities exist. • A common interview question at Google is how to route data among data processing nodes; routes vary by time to transfer the data, but nodes also differ by their computing power and storage, compunding the problem of where to send data. • The travelling purchaser problem deals with a purchaser who is charged with purchasing a set of products. He can purchase these products in several cities, but at different prices and not all cities offer the same products. The objective is to find a route between a subset of the cities, which minimizes total cost (travel cost + purchasing cost) and which enables the purchase of all required products. Integer linear programming formulation TSP can be formulated as an integer linear program.^[10]^[11]^[12] Label the cities with the numbers 1, …, n and define: For i = 1, …, n, let be a dummy variable, and finally take to be the distance from city i to city j. Then TSP can be written as the following integer linear programming problem: The first set of equalities requires that each city be arrived at from exactly one other city, and the second set of equalities requires that from each city there is a departure to exactly one other city. The last constraints enforce that there is only a single tour covering all cities, and not two or more disjointed tours that only collectively cover all cities. To prove this, it is shown below (1) that every feasible solution contains only one closed sequence of cities, and (2) that for every single tour covering all cities, there are values for the dummy variables that satisfy the To prove that every feasible solution contains only one closed sequence of cities, it suffices to show that every subtour in a feasible solution passes through city 0 (noting that the equalities ensure there can only be one such tour). For if we sum all the inequalities corresponding to for any subtour of k steps not passing through city 0, we obtain: which is a contradiction. It now must be shown that for every single tour covering all cities, there are values for the dummy variables that satisfy the constraints. Without loss of generality, define the tour as originating (and ending) at city 0. Choose if city i is visited in step t (i, t = 1, 2, ..., n). Then since can be no greater than n and can be no less than 1; hence the constraints are satisfied whenever For , we have: satisfying the constraint. Computing a solution The traditional lines of attack for the NP-hard problems are the following: • Devising exact algorithms, which work reasonably fast only for small problem sizes. • Devising "suboptimal" or heuristic algorithms, i.e., algorithms that deliver either seemingly or probably good solutions, but which could not be proved to be optimal. • Finding special cases for the problem ("subproblems") for which either better or exact heuristics are possible. The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute force search). The running time for this approach lies within a polynomial factor of , the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the Held–Karp algorithm that solves the problem in time .^[13] Improving these time bounds seems to be difficult. For example, it has not been determined whether an exact algorithm for TSP that runs in time exists.^[14] Other approaches include: • Various branch-and-bound algorithms, which can be used to process TSPs containing 40–60 cities. • Progressive improvement algorithms which use techniques reminiscent of linear programming. Works well for up to 200 cities. • Implementations of branch-and-bound and problem-specific cut generation (branch-and-cut^[15]); this is the method of choice for solving large instances. This approach holds the current record, solving an instance with 85,900 cities, see Applegate et al. (2006). An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University (see the Princeton external link). The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor. In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometres was found and it was proven that no shorter tour exists.^[16] In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using Concorde TSP Solver: a tour of length 66,048,945 units was found and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-years, see Applegate et al. (2006). Heuristic and approximation algorithms Various heuristics and approximation algorithms, which quickly yield good solutions have been devised. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time which are with a high probability just 2–3% away from the optimal solution.^[8] Several categories of heuristics are recognized. Constructive heuristics The nearest neighbour (NN) algorithm (a greedy algorithm) lets the salesman choose the nearest unvisited city as his next move. This algorithm quickly yields an effectively short route. For N cities randomly distributed on a plane, the algorithm on average yields a path 25% longer than the shortest possible path.^[17] However, there exist many specially arranged city distributions which make the NN algorithm give the worst route (Gutin, Yeo, and Zverovich, 2002). This is true for both asymmetric and symmetric TSPs (Gutin and Yeo, 2007). Rosenkrantz et al. [1977] showed that the NN algorithm has the approximation factor for instances satisfying the triangle inequality. A variation of NN algorithm, called Nearest Fragment (NF) operator, which connects a group (fragment) of nearest unvisited cities, can find shorter route with successive iterations.^[18] The NF operator can also be applied on an initial solution obtained by NN algorithm for further improvement in an elitist model, where only better solutions are accepted. The bitonic tour of a set of points is the minimum-perimeter monotone polygon that has the points as its vertices; it can be computed efficiently by dynamic programming. Another constructive heuristic, Match Twice and Stitch (MTS) (Kahng, Reda 2004 ^[19]), performs two sequential matchings, where the second matching is executed after deleting all the edges of the first matching, to yield a set of cycles. The cycles are then stitched to produce the final tour. Christofides' algorithm for the TSP The Christofides algorithm follows a similar outline but combines the minimum spanning tree with a solution of another problem, minimum-weight perfect matching. This gives a TSP tour which is at most 1.5 times the optimal. The Christofides algorithm was one of the first approximation algorithms, and was in part responsible for drawing attention to approximation algorithms as a practical approach to intractable problems. As a matter of fact, the term "algorithm" was not commonly extended to approximation algorithms until later; the Christofides algorithm was initially referred to as the Christofides heuristic. This algorithm looks at things differently by using a result from graph theory which helps improve on the LB of the TSP which originated from doubling the cost of the minimum spanning tree. Given an Eulerian graph we can find an Eulerian tour in O(n) time.^[5] So if we had an Eulerian graph with cities from a TSP as vertices then we can easily see that we could use such a method for finding an Eulerian tour to find a TSP solution. By triangular inequality we know that the TSP tour can be no longer than the Eulerian tour and as such we have a LB for the TSP. Such a method is described 1. Find a minimum spanning tree for the problem 2. Create duplicates for every edge to create an Eulerian graph 3. Find an Eulerian tour for this graph 4. Convert to TSP: if a city is visited twice, create a shortcut from the city before this in the tour to the one after this. To improve our lower bound, we therefore need a better way of creating an Eulerian graph. But by triangular inequality, the best Eulerian graph must have the same cost as the best travelling salesman tour, hence finding optimal Eulerian graphs is at least as hard as TSP. One way of doing this that has been proposed is by the concept of minimum weight matching for the creation of which there exist algorithms of .^[5] To make a graph into an Eulerian graph, one starts with the minimum spanning tree. Then all the vertices of odd order must be made even. So a matching for the odd degree vertices must be added which increases the order of every odd degree vertex by one.^[5] This leaves us with a graph where every vertex is of even order which is thus Eulerian. Now we can adapt the above method to give Christofides' algorithm, 1. Find a minimum spanning tree for the problem 2. Create a matching for the problem with the set of cities of odd order. 3. Find an Eulerian tour for this graph 4. Convert to TSP using shortcuts. Iterative improvement Pairwise exchange The pairwise exchange or 2-opt technique involves iteratively removing two edges and replacing these with two different edges that reconnect the fragments created by edge removal into a new and shorter tour. This is a special case of the k-opt method. Note that the label Lin–Kernighan is an often heard misnomer for 2-opt. Lin–Kernighan is actually the more general k-opt method. For Euclidean instances, 2-opt heuristics give on average solutions that are about 5% better than Christofides' algorithm. If we start with an initial solution made with a greedy algorithm, the average number of moves greatly decreases again and is O(n). For random starts however, the average number of moves is O(n log(n)). However whilst in order this is a small increase in size, the initial number of moves for small problems is 10 times as big for a random start compared to one made from a greedy heuristic. This is because such 2-opt heuristics exploit `bad' parts of a solution such as crossings. These types of heuristics are often used within Vehicle routing problem heuristics to reoptimize route solutions.^[17] k-opt heuristic, or Lin–Kernighan heuristics Take a given tour and delete k mutually disjoint edges. Reassemble the remaining fragments into a tour, leaving no disjoint subtours (that is, don't connect a fragment's endpoints together). This in effect simplifies the TSP under consideration into a much simpler problem. Each fragment endpoint can be connected to 2k − 2 other possibilities: of 2k total fragment endpoints available, the two endpoints of the fragment under consideration are disallowed. Such a constrained 2k-city TSP can then be solved with brute force methods to find the least-cost recombination of the original fragments. The k-opt technique is a special case of the V-opt or variable-opt technique. The most popular of the k-opt methods are 3-opt, and these were introduced by Shen Lin of Bell Labs in 1965. There is a special case of 3-opt where the edges are not disjoint (two of the edges are adjacent to one another). In practice, it is often possible to achieve substantial improvement over 2-opt without the combinatorial cost of the general 3-opt by restricting the 3-changes to this special subset where two of the removed edges are adjacent. This so-called two-and-a-half-opt typically falls roughly midway between 2-opt and 3-opt, both in terms of the quality of tours achieved and the time required to achieve those tours. V-opt heuristic The variable-opt method is related to, and a generalization of the k-opt method. Whereas the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-opt methods do not fix the size of the edge set to remove. Instead they grow the set as the search process continues. The best known method in this family is the Lin–Kernighan method (mentioned above as a misnomer for 2-opt). Shen Lin and Brian Kernighan first published their method in 1972, and it was the most reliable heuristic for solving travelling salesman problems for nearly two decades. More advanced variable-opt methods were developed at Bell Labs in the late 1980s by David Johnson and his research team. These methods (sometimes called Lin–Kernighan–Johnson) build on the Lin–Kernighan method, adding ideas from tabu search and evolutionary computing. The basic Lin–Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–Kernighan–Johnson methods compute a Lin–Kernighan tour, and then perturb the tour by what has been described as a mutation that removes at least four edges and reconnecting the tour in a different way, then V -opting the new tour. The mutation is often enough to move the tour from the local minimum identified by Lin–Kernighan. V-opt methods are widely considered the most powerful heuristics for the problem, and are able to address special cases, such as the Hamilton Cycle Problem and other non-metric TSPs that other heuristics fail on. For many years Lin–Kernighan–Johnson had identified optimal solutions for all TSPs where an optimal solution was known and had identified the best known solutions for all other TSPs on which the method had been tried. Randomized improvement Optimized Markov chain algorithms which use local searching heuristic sub-algorithms can find a route extremely close to the optimal route for 700 to 800 cities. TSP is a touchstone for many general heuristics devised for combinatorial optimization such as genetic algorithms, simulated annealing, Tabu search, ant colony optimization, river formation dynamics (see swarm intelligence) and the cross entropy method. Ant colony optimization Artificial intelligence researcher Marco Dorigo described in 1993 a method of heuristically generating "good solutions" to the TSP using a simulation of an ant colony called ACS (Ant Colony System).^ [20] It models behaviour observed in real ants to find short paths between food sources and their nest, an emergent behaviour resulting from each ant's preference to follow trail pheromones deposited by other ants. ACS sends out a large number of virtual ant agents to explore many possible routes on the map. Each ant probabilistically chooses the next city to visit based on a heuristic combining the distance to the city and the amount of virtual pheromone deposited on the edge to the city. The ants explore, depositing pheromone on each edge that they cross, until they have all completed a tour. At this point the ant which completed the shortest tour deposits virtual pheromone along its complete tour route (global trail updating). The amount of pheromone deposited is inversely proportional to the tour length: the shorter the tour, the more it deposits. Special cases of the TSP Metric TSP In the metric TSP, also known as delta-TSP or Δ-TSP, the intercity distances satisfy the triangle inequality. A very natural restriction of the TSP is to require that the distances between cities form a metric to satisfy the triangle inequality; that is the direct connection from A to B is never farther than the route via intermediate C: The edge spans then build a metric on the set of vertices. When the cities are viewed as points in the plane, many natural distance functions are metrics, and so many natural instances of TSP satisfy this constraint. The following are some examples of metric TSPs for various metrics. • In the Euclidean TSP (see below) the distance between two cities is the Euclidean distance between the corresponding points. • In the rectilinear TSP the distance between two cities is the sum of the differences of their x- and y-coordinates. This metric is often called the Manhattan distance or city-block metric. • In the maximum metric, the distance between two points is the maximum of the absolute values of differences of their x- and y-coordinates. The last two metrics appear for example in routing a machine that drills a given set of holes in a printed circuit board. The Manhattan metric corresponds to a machine that adjusts first one co-ordinate, and then the other, so the time to move to a new point is the sum of both movements. The maximum metric corresponds to a machine that adjusts both co-ordinates simultaneously, so the time to move to a new point is the slower of the two movements. In its definition, the TSP does not allow cities to be visited twice, but many applications do not need this constraint. In such cases, a symmetric, non-metric instance can be reduced to a metric one. This replaces the original graph with a complete graph in which the inter-city distance is replaced by the shortest path between A and B in the original graph. Euclidean TSP When the input numbers can be arbitrary real numbers, Euclidean TSP is a particular case of metric TSP, since distances in a plane obey the triangle inequality. When the input numbers must be integers, comparing lengths of tours involves comparing sums of square-roots. Like the general TSP, Euclidean TSP is NP-hard in either case. With rational coordinates and discretized metric (distances rounded up to an integer), the problem is NP-complete.^[21] With rational coordinates and the actual Euclidean metric, Euclidean TSP is known to be in the Counting Hierarchy,^[22] a subclass of PSPACE. With arbitrary real coordinates, Euclidean TSP cannot be in such classes, since there are uncountably many possible inputs. However, Euclidean TSP is probably the easiest version for approximation.^[23] For example, the minimum spanning tree of the graph associated with an instance of the Euclidean TSP is a Euclidean minimum spanning tree, and so can be computed in expected O (n log n) time for n points (considerably less than the number of edges). This enables the simple 2-approximation algorithm for TSP with triangle inequality above to operate more quickly. In general, for any c > 0, where d is the number of dimensions in the Euclidean space, there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times the optimal for geometric instances of TSP in time; this is called a polynomial-time approximation scheme (PTAS).^[24] Sanjeev Arora and Joseph S. B. Mitchell were awarded the Gödel Prize in 2010 for their concurrent discovery of a PTAS for the Euclidean TSP. In practice, simpler heuristics with weaker guarantees continue to be used. Asymmetric TSP In most cases, the distance between two nodes in the TSP network is the same in both directions. The case where the distance from A to B is not equal to the distance from B to A is called asymmetric TSP. A practical application of an asymmetric TSP is route optimization using street-level routing (which is made asymmetric by one-way streets, slip-roads, motorways, etc.). Solving by conversion to symmetric TSP Solving an asymmetric TSP graph can be somewhat complex. The following is a 3×3 matrix containing all possible path weights between the nodes A, B and C. One option is to turn an asymmetric matrix of size N into a symmetric matrix of size 2N.^[25] A B C A 1 2 B 6 3 C 5 4 To double the size, each of the nodes in the graph is duplicated, creating a second ghost node, linked to the original node with a "ghost" edge of very low (possibly negative) weight, here denoted −w. (Alternatively, the ghost edges have weight 0, and weight w is added to all other edges.) The original 3×3 matrix shown above is visible in the bottom left and the inverse of the original in the top-right. Both copies of the matrix have had their diagonals replaced by the low-cost hop paths, represented by −w. In the new graph, no edge directly links original nodes and no edge directly links ghost nodes. Symmetric path A B C A′ B′ C′ A −w 6 5 B 1 −w 4 C 2 3 −w A′ −w 1 2 B′ 6 −w 3 C′ 5 4 −w The weight −w of the "ghost" edges linking the ghost nodes to the corresponding original nodes must be low enough to ensure that all ghost edges must belong to any optimal symmetric TSP solution on the new graph (w=0 is not always low enough). As a consequence, in the optimal symmetric tour, each original node appears next to its ghost node (e.g. a possible path is A -> A′ -> C -> C′ -> B -> B′ -> A) and by mergeing the original and ghost nodes again we get an (optimal) solution of the original asymmetric problem (in our example, A -> C -> B -> A). Analyst's travelling salesman problem There is an analogous problem in geometric measure theory which asks the following: under what conditions may a subset E of Euclidean space be contained in a rectifiable curve (that is, when is there a curve with finite length that visits every point in E)? This problem is known as the analyst's travelling salesman problem TSP path length for random sets of points in a square Suppose are independent random variables with uniform distribution in the square , and let be the shortest path length (i.e. TSP solution) for this set of points, according to the usual Euclidean distance. It is known^[26] that, almost surely, where is a positive constant that is not known explicitly. Since (see below), it follows from bounded convergence theorem that , hence lower and upper bounds on follow from bounds on . Upper bound • One has , and therefore , by using a naive path which visits monotonically the points inside each of slices of width in the square. • Few ^[27] proved , hence , later improved by Karloff (1987): . • The currently ^[28] best upper bound is . Lower bound • By observing that is greater than times the distance between and the closest point , one gets (after a short computation) • A better lower bound is obtained^[26] by observing that is greater than times the sum of the distances between and the closest and second closest points , which gives • The currently ^[28] best lower bound is • Held and Karp^[29] gave a polynomial-time algorithm that provides numerical lower bounds for , and thus for which seem to be good up to more or less 1%.^[30] In particular, David S. Johnson^[31] obtained a lower bound by computer experiment: where 0.522 comes from the points near square boundary which have fewer neighbours, and Christine L. Valenzuela and Antonia J. Jones ^[32] obtained the following other numerical lower bound: Computational complexity The problem has been shown to be NP-hard (more precisely, it is complete for the complexity class FP^NP; see function problem), and the decision problem version ("given the costs and a number x, decide whether there is a round-trip route cheaper than x") is NP-complete. The bottleneck travelling salesman problem is also NP-hard. The problem remains NP-hard even for the case when the cities are in the plane with Euclidean distances, as well as in a number of other restrictive cases. Removing the condition of visiting each city "only once" does not remove the NP-hardness, since it is easily seen that in the planar case there is an optimal tour that visits each city only once (otherwise, by the triangle inequality, a shortcut that skips a repeated visit would not increase the tour Complexity of approximation In the general case, finding a shortest travelling salesman tour is NPO-complete.^[33] If the distance measure is a metric and symmetric, the problem becomes APX-complete^[34] and Christofides’s algorithm approximates it within 1.5.^[35] The best known inapproximability bound is 123/122 .^[36] If the distances are restricted to 1 and 2 (but still are a metric) the approximation ratio becomes 8/7.^[37] In the asymmetric, metric case, only logarithmic performance guarantees are known, the best current algorithm achieves performance ratio 0.814 log(n);^[38] it is an open question if a constant factor approximation exists. The corresponding maximization problem of finding the longest travelling salesman tour is approximable within 63/38.^[39] If the distance function is symmetric, the longest tour can be approximated within 4/3 by a deterministic algorithm^[40] and within by a randomized algorithm.^[41] Human performance on TSP The TSP, in particular the Euclidean variant of the problem, has attracted the attention of researchers in cognitive psychology. It has been observed that humans are able to produce good quality solutions quickly.^[42] These results suggest that computer performance on the TSP may be improved by understanding and emulating the methods used by humans for these problems, and have also led to new insights into the mechanisms of human thought.^[43] The first issue of the Journal of Problem Solving was devoted to the topic of human performance on TSP,^[44] and a 2011 review listed dozens of papers on the subject.^[43] Natural computation When presented with a spatial configuration of food sources, the amoeboid Physarum polycephalum adapts its morphology to create an efficient path between the food sources which can also be viewed as an approximate solution to TSP.^[45] It's considered to present interesting possibilities and it has been studied in the area of natural computing. For benchmarking of TSP algorithms, TSPLIB is a library of sample instances of the TSP and related problems is maintained, see the TSPLIB external reference. Many of them are lists of actual cities and layouts of actual printed circuits. Popular culture Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the U.S. government to solve the most elusive problem in computer-science history: P vs. NP.^[46] See also 1. ↑ See the TSP world tour problem which has already been solved to within 0.05% of the optimal solution. 2. ↑ "Der Handlungsreisende – wie er sein soll und was er zu tun hat, um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein – von einem alten Commis-Voyageur" (The travelling salesman — how he must be and what he should do in order to get commissions and be sure of the happy success in his business — by an old commis-voyageur) 3. ↑ A discussion of the early work of Hamilton and Kirkman can be found in Graph Theory 1736–1936 4. ↑ Cited and English translation in Schrijver (2005). Original German: "Wir bezeichnen als Botenproblem (weil diese Frage in der Praxis von jedem Postboten, übrigens auch von vielen Reisenden zu lösen ist) die Aufgabe, für endlich viele Punkte, deren paarweise Abstände bekannt sind, den kürzesten die Punkte verbindenden Weg zu finden. Dieses Problem ist natürlich stets durch endlich viele Versuche lösbar. Regeln, welche die Anzahl der Versuche unter die Anzahl der Permutationen der gegebenen Punkte herunterdrücken würden, sind nicht bekannt. Die Regel, man solle vom Ausgangspunkt erst zum nächstgelegenen Punkt, dann zu dem diesem nächstgelegenen Punkt gehen usw., liefert im allgemeinen nicht den kürzesten Weg." 5. 1 2 3 4 5 6 7 8 al.], edited by E.L. Lawler ... [et (1985). The Traveling salesman problem : a guided tour of combinatorial optimization (Repr. with corrections. ed.). Chichester [West Sussex]: Wiley. ISBN 0471904139. 6. ↑ A detailed treatment of the connection between Menger and Whitney as well as the growth in the study of TSP can be found in Alexander Schrijver's 2005 paper "On the history of combinatorial optimization (till 1960). Handbook of Discrete Optimization (K. Aardal, G.L. Nemhauser, R. Weismantel, eds.), Elsevier, Amsterdam, 2005, pp. 1–68.PS,PDF 7. ↑ Klarreich, Erica. "Computer Scientists Find New Shortcuts for Infamous Traveling Salesman Problem". WIRED. Simons Science News. Retrieved 2015-06-14. 8. 1 2 Rego, César; Gamboa, Dorabela; Glover, Fred; Osterman, Colin (2011), "Traveling salesman problem heuristics: leading methods, implementations and latest advances", European Journal of Operational Research, 211 (3): 427–441, doi:10.1016/j.ejor.2010.09.010, MR 2774420. 9. ↑ Behzad, Arash; Modarres, Mohammad (2002), "New Efficient Transformation of the Generalized Traveling Salesman Problem into Traveling Salesman Problem", Proceedings of the 15th International Conference of Systems Engineering (Las Vegas) 10. ↑ Papadimitriou, C.H.; Steiglitz, K. (1998), Combinatorial optimization: algorithms and complexity, Mineola, NY: Dover, pp.308-309. 11. ↑ Tucker, A. W. (1960), "On Directed Graphs and Integer Programs", IBM Mathematical research Project (Princeton University) 12. ↑ Dantzig, George B. (1963), Linear Programming and Extensions, Princeton, NJ: PrincetonUP, pp. 545–7, ISBN 0-691-08000-3, sixth printing, 1974. 13. ↑ Work by David Applegate, AT&T Labs – Research, Robert Bixby, ILOG and Rice University, Vašek Chvátal, Concordia University, William Cook, University of Waterloo, and Keld Helsgaun, Roskilde University is discussed on their project web page hosted by the University of Waterloo and last updated in June 2004, here 14. 1 2 Johnson, D. S.; McGeoch, L. A. (1997). "The Traveling Salesman Problem: A Case Study in Local Optimization" (PDF). In Aarts, E. H. L.; Lenstra, J. K. Local Search in Combinatorial Optimisation. London: John Wiley and Sons Ltd. pp. 215–310. 15. ↑ Ray, S. S.; Bandyopadhyay, S.; Pal, S. K. (2007). "Genetic Operators for Combinatorial Optimization in TSP and Microarray Gene Ordering". Applied Intelligence. 26 (3): 183–195. doi:10.1007/ 16. ↑ Kahng, A. B.; Reda, S. (2004). "Match Twice and Stitch: A New TSP Tour Construction Heuristic". Operations Research Letters. 32 (6): 499–509. doi:10.1016/j.orl.2004.04.001. 17. ↑ Marco Dorigo. "Ant Colonies for the Traveling Salesman Problem. IRIDIA, Université Libre de Bruxelles. IEEE Transactions on Evolutionary Computation, 1(1):53–66. 1997. http:// 18. ↑ Jonker, Roy; Volgenant, Ton. "Transforming asymmetric into symmetric traveling salesman problems". Operations Research Letters. 2 (161–163): 1983. doi:10.1016/0167-6377(83)90048-2. 19. ↑ Few, L. (1955). "The shortest path and the shortest road through n points". Mathematika. 2 (02): 141–144. doi:10.1112/s0025579300000784. 20. 1 2 Steinerberger, S. (2015). "New bounds for the traveling salesman constant". Advances in Applied Probability. 47.1. 21. ↑ Held, M.; Karp, R.M. (1970). "The Traveling Salesman Problem and Minimum Spanning Trees". Operations Research. 18: 1138–1162. doi:10.1287/opre.18.6.1138. 22. ↑ Goemans, M.; Bertsimas, D. (1991). "Probabilistic analysis of the Held and Karp lower bound for the Euclidean traveling salesman problem". Mathematics of operation research. 16 (1): 72–89. doi: 23. ↑ Macgregor, J. N.; Ormerod, T. (June 1996), "Human performance on the traveling salesman problem", Perception & Psychophysics, 58 (4): 527–539, doi:10.3758/BF03213088. 24. 1 2 MacGregor, James N.; Chu, Yun (2011), "Human performance on the traveling salesman and related problems: A review", Journal of Problem Solving, 3 (2). 25. ↑ Journal of Problem Solving 1(1), 2006, retrieved 2014-06-06. 26. ↑ Jones, Jeff; Adamatzky, Andrew (2014), "Computation of the travelling salesman problem by a shrinking blob" (PDF), Natural Computing: 2, 13 27. ↑ Geere, Duncan. "'Travelling Salesman' movie considers the repercussions if P equals NP". Wired. Retrieved 26 April 2012. • Applegate, D. L.; Bixby, R. M.; Chvátal, V.; Cook, W. J. (2006), The Traveling Salesman Problem, ISBN 0-691-12993-2. • Allender, Eric; Bürgisser, Peter; Kjeldgaard-Pedersen, Johan; Mitersen, Peter Bro (2007), "On the Complexity of Numerical Analysis" (PDF), SIAM J. Comput., 38 (5), doi:10.1137/070697926. • Arora, Sanjeev (1998), "Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems", Journal of the ACM, 45 (5): 753–782, doi:10.1145/290179.290180, MR • Beardwood, J.; Halton, J.H.; Hammersley, J.M. (1959), "The Shortest Path Through Many Points", Proceedings of the Cambridge Philosophical Society, 55: 299–327, doi:10.1017/s0305004100034095. • Bellman, R. (1960), "Combinatorial Processes and Dynamic Programming", in Bellman, R.; Hall, M. Jr., Combinatorial Analysis, Proceedings of Symposia in Applied Mathematics 10, American Mathematical Society, pp. 217–249. • Bellman, R. (1962), "Dynamic Programming Treatment of the Travelling Salesman Problem", J. Assoc. Comput. Mach., 9: 61–63, doi:10.1145/321105.321111. • Berman, Piotr; Karpinski, Marek (2006), "8/7-approximation algorithm for (1,2)-TSP", Proc. 17th ACM-SIAM Symposium on Discrete Algorithms (SODA '06), pp. 641–648, doi:10.1145/1109557.1109627, ISBN 0898716055, ECCC TR05-069. • Christofides, N. (1976), Worst-case analysis of a new heuristic for the travelling salesman problem, Technical Report 388, Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh. • Hassin, R.; Rubinstein, S. (2000), "Better approximations for max TSP", Information Processing Letters, 75 (4): 181–186, doi:10.1016/S0020-0190(00)00097-1. • Held, M.; Karp, R. M. (1962), "A Dynamic Programming Approach to Sequencing Problems", Journal of the Society for Industrial and Applied Mathematics, 10 (1): 196–210, doi:10.1137/0110015. • Kaplan, H.; Lewenstein, L.; Shafrir, N.; Sviridenko, M. (2004), "Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs", In Proc. 44th IEEE Symp. on Foundations of Comput. Sci, pp. 56–65. • Karpinski, M.; Lampis, M.; Schmied, R. (2015), "New Inapproximability bounds for TSP", Journal of Computer and System Sciences, 81 (8): 1665–1677, doi:10.1016/j.jcss.2015.06.003 • Kosaraju, S. R.; Park, J. K.; Stein, C. (1994), "Long tours and short superstrings'", Proc. 35th Ann. IEEE Symp. on Foundations of Comput. Sci, IEEE Computer Society, pp. 166–177. • Orponen, P.; Mannila, H. (1987), "On approximation preserving reductions: Complete problems and robust measures'", Technical Report C-1987–28, Department of Computer Science, University of • Larson, Richard C.; Odoni, Amedeo R. (1981), "6.4.7: Applications of Network Models § Routing Problems §§ Euclidean TSP", Urban Operations Research, Prentice-Hall, ISBN 9780139394478, OCLC • Padberg, M.; Rinaldi, G. (1991), "A Branch-and-Cut Algorithm for the Resolution of Large-Scale Symmetric Traveling Salesman Problems", Siam Review: 60–100, doi:10.1137/1033004. • Papadimitriou, Christos H. (1977), "The Euclidean traveling salesman problem is NP-complete", Theoretical Computer Science, 4 (3): 237–244, doi:10.1016/0304-3975(77)90012-3, MR 0455550. • Papadimitriou, C. H.; Yannakakis, M. (1993), "The traveling salesman problem with distances one and two", Math. Oper. Res., 18: 1–11, doi:10.1287/moor.18.1.1. • Serdyukov, A. I. (1984), "An algorithm with an estimate for the traveling salesman problem of the maximum'", Upravlyaemye Sistemy, 25: 80–86. • Steinerberger, Stefan (2015), "New Bounds for the Traveling Salesman Constant", Advances in Applied Probability, 47. • Woeginger, G.J. (2003), "Exact Algorithms for NP-Hard Problems: A Survey", Combinatorial Optimization – Eureka, You Shrink! Lecture notes in computer science, vol. 2570, Springer, pp. 185–207. Further reading • Adleman, Leonard (1994), "Molecular Computation of Solutions To Combinatorial Problems" (PDF), Science, 266 (5187): 1021–4, Bibcode:1994Sci...266.1021A, doi:10.1126/science.7973651, PMID 7973651 • Arora, S. (1998), "Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems" (PDF), Journal of the ACM, 45 (5): 753–782, doi:10.1145/290179.290180. • Babin, Gilbert; Deneault, Stéphanie; Laportey, Gilbert (2005), Improvements to the Or-opt Heuristic for the Symmetric Traveling Salesman Problem, Cahiers du GERAD, G-2005-02, Montreal: Group for Research in Decision Analysis. • Cook, William (2011), In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation, Princeton University Press, ISBN 978-0-691-15270-7. • Cook, William; Espinoza, Daniel; Goycoolea, Marcos (2007), "Computing with domino-parity inequalities for the TSP", INFORMS Journal on Computing, 19 (3): 356–365, doi:10.1287/ijoc.1060.0204. • Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; Stein, C. (2001), "35.2: The traveling-salesman problem", Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 1027–1033, ISBN • Dantzig, G. B.; Fulkerson, R.; Johnson, S. M. (1954), "Solution of a large-scale traveling salesman problem", Operations Research, 2 (4): 393–410, doi:10.1287/opre.2.4.393, JSTOR 166695. • Garey, M. R.; Johnson, D. S. (1979), "A2.3: ND22–24", Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, pp. 211–212, ISBN 0-7167-1045-5. • Goldberg, D. E. (1989), "Genetic Algorithms in Search, Optimization & Machine Learning", Reading: Addison-Wesley, New York: Addison-Wesley, Bibcode:1989gaso.book.....G, ISBN 0-201-15767-5. • Gutin, G.; Yeo, A.; Zverovich, A. (2002), "Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP", Discrete Applied Mathematics, 117 (1–3): 81–86, doi • Gutin, G.; Punnen, A. P. (2006), The Traveling Salesman Problem and Its Variations, Springer, ISBN 0-387-44459-9. • Johnson, D. S.; McGeoch, L. A. (1997), "The Traveling Salesman Problem: A Case Study in Local Optimization" (PDF), in Aarts, E. H. L.; Lenstra, J. K., Local Search in Combinatorial Optimisation, John Wiley and Sons Ltd, pp. 215–310. • Lawler, E. L.; Lenstra, J. K.; Rinnooy Kan, A. H. G.; Shmoys, D. B. (1985), The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, John Wiley & Sons, ISBN 0-471-90413-9. • MacGregor, J. N.; Ormerod, T. (1996), "Human performance on the traveling salesman problem" (PDF), Perception & Psychophysics, 58 (4): 527–539, doi:10.3758/BF03213088. • Mitchell, J. S. B. (1999), "Guillotine subdivisions approximate polygonal subdivisions: A simple polynomial-time approximation scheme for geometric TSP, k-MST, and related problems", SIAM Journal on Computing, 28 (4): 1298–1309, doi:10.1137/S0097539796309764. • Rao, S.; Smith, W. (1998), "Approximating geometrical graphs via 'spanners' and 'banyans'", Proc. 30th Annual ACM Symposium on Theory of Computing, pp. 540–550. • Rosenkrantz, Daniel J.; Stearns, Richard E.; Lewis, Philip M., II (1977), "An Analysis of Several Heuristics for the Traveling Salesman Problem", SIAM Journal on Computing, 6 (5): 563–581, doi: • Vickers, D.; Butavicius, M.; Lee, M.; Medvedev, A. (2001), "Human performance on visually presented traveling salesman problems", Psychological Research, 65 (1): 34–45, doi:10.1007/s004260000031, PMID 11505612. • Walshaw, Chris (2000), A Multilevel Approach to the Travelling Salesman Problem, CMS Press. • Walshaw, Chris (2001), A Multilevel Lin-Kernighan-Helsgaun Algorithm for the Travelling Salesman Problem, CMS Press. External links Wikimedia Commons has media related to Traveling salesman problem. This article is issued from - version of the 11/28/2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.
{"url":"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Traveling_Salesman_Problem.html","timestamp":"2024-11-13T14:27:21Z","content_type":"text/html","content_length":"164033","record_id":"<urn:uuid:321874b4-4e4b-4b7b-9ef5-7edd6ec4ed05>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00340.warc.gz"}
American Mathematical Society Identities for $q$-ultraspherical polynomials and Jacobi functions HTML articles powered by AMS MathViewer Proc. Amer. Math. Soc. 123 (1995), 2479-2487 DOI: https://doi.org/10.1090/S0002-9939-1995-1273504-8 PDF | Request permission A q-analogue of a result by Badertscher and Koornwinder [Canad. J. Math. 44 (1992), 750-773] relating the action of a Hahn polynomial of differential operator argument on ultraspherical polynomials to an ultraspherical polynomial of shifted order and degree is derived. The q-analogue involves q-Hahn polynomials, continuous q-ultraspherical polynomials, and a shift operator. Another limit as q tends to 1 yields an identity for Jacobi functions. Combination with another result of Badertscher and Koornwinder gives a curious formula for Jacobi functions. References • R. Askey and Mourad E. H. Ismail, A generalization of ultraspherical polynomials, Studies in pure mathematics, BirkhΓ€user, Basel, 1983, pp.Β 55β 78. MR 820210 • Richard Askey and James Wilson, A set of orthogonal polynomials that generalize the Racah coefficients or $6-j$ symbols, SIAM J. Math. Anal. 10 (1979), no.Β 5, 1008β 1016. MR 541097, DOI 10.1137 • Richard Askey and James Wilson, Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials, Mem. Amer. Math. Soc. 54 (1985), no.Β 319, iv+55. MR 783216, DOI 10.1090/memo/ • Erich Badertscher and Tom H. Koornwinder, Continuous Hahn polynomials of differential operator argument and analysis on Riemannian symmetric spaces of constant curvature, Canad. J. Math. 44 (1992), no.Β 4, 750β 773. MR 1178566, DOI 10.4153/CJM-1992-044-4 • T. S. Chihara, An introduction to orthogonal polynomials, Mathematics and its Applications, Vol. 13, Gordon and Breach Science Publishers, New York-London-Paris, 1978. MR 0481884 A. ErdΓ©lyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Higher transcendental functions, Vol. 1, McGraw-Hill, New York, 1953. • George Gasper and Mizan Rahman, Basic hypergeometric series, Encyclopedia of Mathematics and its Applications, vol. 35, Cambridge University Press, Cambridge, 1990. With a foreword by Richard Askey. MR 1052153 • Mourad E. H. Ismail and James A. Wilson, Asymptotic and generating relations for the $q$-Jacobi and $_{4}\varphi _{3}$ polynomials, J. Approx. Theory 36 (1982), no.Β 1, 43β 54. MR 673855, DOI • H. T. Koelink, The addition formula for continuous $q$-Legendre polynomials and associated spherical elements on the $\textrm {SU}(2)$ quantum group related to Askey-Wilson polynomials, SIAM J. Math. Anal. 25 (1994), no.Β 1, 197β 217. MR 1257149, DOI 10.1137/S0036141090186114 β , Askey-Wilson polynomials and the quantum $SU(2)$ group: survey and applications, Acta Appl. Math. (to • Tom Koornwinder, A new proof of a Paley-Wiener type theorem for the Jacobi transform, Ark. Mat. 13 (1975), 145β 159. MR 374832, DOI 10.1007/BF02386203 • Tom H. Koornwinder, Jacobi functions and analysis on noncompact semisimple Lie groups, Special functions: group theoretical aspects and applications, Math. Appl., Reidel, Dordrecht, 1984, pp.Β 1β 85. MR 774055 • M. Alfaro, J. S. Dehesa, F. J. MarcellΓ‘n, J. L. Rubio de Francia, and J. Vinuesa (eds.), Orthogonal polynomials and their applications, Lecture Notes in Mathematics, vol. 1329, Springer-Verlag, Berlin, 1988. MR 973417, DOI 10.1007/BFb0083349 • Tom H. Koornwinder, Jacobi functions as limit cases of $q$-ultraspherical polynomials, J. Math. Anal. Appl. 148 (1990), no.Β 1, 44β 54. MR 1052043, DOI 10.1016/0022-247X(90)90026-C • Tom H. Koornwinder, Askey-Wilson polynomials as zonal spherical functions on the $\textrm {SU}(2)$ quantum group, SIAM J. Math. Anal. 24 (1993), no.Β 3, 795β 813. MR 1215439, DOI 10.1137/0524049 • A. F. Nikiforov, S. K. Suslov, and V. B. Uvarov, Classical orthogonal polynomials of a discrete variable, Springer Series in Computational Physics, Springer-Verlag, Berlin, 1991. Translated from the Russian. MR 1149380, DOI 10.1007/978-3-642-74748-9 • Masatoshi Noumi and Katsuhisa Mimachi, Askey-Wilson polynomials and the quantum group $\textrm {SU}_q(2)$, Proc. Japan Acad. Ser. A Math. Sci. 66 (1990), no.Β 6, 146β 149. MR 1065793 β , Askey-Wilson polynomials as spherical functions on $S{U_q}(2)$, Quantum Groups (P. P. Kulish, ed.), Lecture Notes in Math., vol. 1510, Springer, New York, 1992, pp. 98-103. Similar Articles • Retrieve articles in Proceedings of the American Mathematical Society with MSC: 33C45, 33D55 • Retrieve articles in all journals with MSC: 33C45, 33D55 Bibliographic Information • © Copyright 1995 American Mathematical Society • Journal: Proc. Amer. Math. Soc. 123 (1995), 2479-2487 • MSC: Primary 33C45; Secondary 33D55 • DOI: https://doi.org/10.1090/S0002-9939-1995-1273504-8 • MathSciNet review: 1273504
{"url":"https://www.ams.org/journals/proc/1995-123-08/S0002-9939-1995-1273504-8/?active=current","timestamp":"2024-11-10T23:01:23Z","content_type":"text/html","content_length":"68930","record_id":"<urn:uuid:50f89a03-fd23-4565-abd4-8b9632d5590f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00593.warc.gz"}
Non Euclidean Geometry Info Non-Euclidean Geometry: Understanding the World Beyond Euclid’s Framework When one thinks of geometry, the first name that comes to mind is often Euclid, the father of geometry. His book, “Elements”, laid the foundation for thousands of years of mathematical studies and set the standard for what was considered “true” geometry. However, in the 19th century, a breakthrough in the field of mathematics challenged Euclid’s previously undisputed theories and gave birth to the revolutionary concept known as Non-Euclidean geometry. To understand Non-Euclidean geometry, we must first grasp the basics of Euclidean geometry. This traditional branch of geometry is based on a set of five axioms or assumptions - commonly known as Euclid’s postulates - that dictate the properties of space and shapes. These axioms include concepts such as “a straight line can be drawn from any point to any other point” and “all right angles are However, in the early 19th century, mathematicians began to question the validity of Euclid’s postulates and sought to find alternative axioms. These visionary thinkers, such as Nikolai Lobachevsky, János Bolyai, and Carl Friedrich Gauss, took a different approach and challenged the idea that a straight line is the shortest distance between two points. Their exploration led to the discovery of Non-Euclidean geometry, a field that deviated from the traditional framework created by Euclid. Non-Euclidean geometry introduced new axioms that did not contradict each other but challenged the postulates of Euclid. One of these groundbreaking axioms was the idea that there can be more than one line parallel to a given line through a given point. This may seem like a small change, but it opened the door to a whole new world of geometry. It allowed for the creation of different geometries that did not follow the rules of Euclid. Two of the most influential of these are hyperbolic and elliptic geometries. In hyperbolic geometry, parallel lines do not exist, and the shortest distance between two points is curved. Imagine a saddle-shaped surface, where lines that never meet in Euclidean geometry actually intersect. This type of geometry has important applications in modern physics and has helped to explain aspects of the universe, such as the bending of light around massive objects. On the other hand, elliptic geometry, also known as Riemannian geometry, is based on the idea that there is no such thing as a straight line, and all lines eventually “curve back” onto themselves. Imagine drawing a triangle on the surface of a sphere, where the sum of its angles is always greater than 180 degrees. This type of geometry has practical applications in fields such as geography and Non-Euclidean geometry has also brought to light the concept of “curvature of space”. This revolutionary idea suggests that space itself can be curved, and the concept of a “straight line” is relative to the curvature of the space it is created on. In addition to its practical applications, Non-Euclidean geometry has also had a significant impact on our understanding of the concept of infinity. Traditional Euclidean geometry relies on the idea of an infinite, flat plane, but Non-Euclidean geometry has shown that there can be different types of infinity, depending on the curvature of space. In conclusion, Non-Euclidean geometry has revolutionized the way we perceive space and shapes. It has expanded our understanding of the universe and allowed for the creation of new branches of mathematics, such as topology and differential geometry. So next time you think of geometry, remember that there is more to it than the “straight” lines and angles that Euclid taught us. The world of Non-Euclidean geometry offers infinite possibilities and challenges our perception of the world around us. When we think of geometry, we often imagine the study of lines, angles, and shapes on a flat surface. This is known as Euclidean geometry, based on the work of the ancient Greek mathematician Euclid. However, there is another type of geometry that breaks away from these traditional principles – Non-Euclidean geometry. Non-Euclidean geometry is a branch of mathematics that deals with spaces and shapes that do not follow the rules of Euclidean geometry. This type of geometry was first explored and developed in the 19th century by mathematicians such as Nikolai Lobachevsky and János Bolyai, who challenged the long-standing assumptions of Euclidean geometry. The key difference between Euclidean and Non-Euclidean geometry lies in the concept of parallel lines. In Euclidean geometry, parallel lines are defined as lines that never intersect, no matter how far they are extended. However, in Non-Euclidean geometry, parallel lines can intersect. This may seem counterintuitive, but it opens up a whole new world of possibilities and applications. There are two main types of Non-Euclidean geometry – spherical and hyperbolic. Spherical geometry is based on the shape of a sphere, where lines are defined as great circles. In this geometry, the sum of the angles of a triangle is greater than 180 degrees, and parallel lines intersect at two points. Spherical geometry plays a crucial role in navigation and astronomy, where the Earth’s surface is treated as a sphere. On the other hand, hyperbolic geometry is based on the shape of a saddle-like surface, where lines are defined as equidistant curves. In this geometry, the sum of the angles of a triangle is less than 180 degrees, and parallel lines never intersect. Hyperbolic geometry has found applications in fields such as architecture and physics, where it is used to study curved spaces and the path of light rays. One of the most exciting aspects of Non-Euclidean geometry is its ability to challenge and expand our understanding of space. Euclidean geometry works well for objects of human scale, but when we consider the universe on a large scale, Non-Euclidean geometry becomes crucial. For example, Einstein’s theory of general relativity, which explains the behavior of gravity, is based on Non-Euclidean Moreover, Non-Euclidean geometry has also influenced other areas of mathematics, such as topology and fractal geometry. It has also led to the development of new areas of mathematics, such as differential geometry, which studies curved surfaces and multidimensional spaces. In conclusion, Non-Euclidean geometry is a fascinating branch of mathematics that challenges our understanding of space and shapes. It has countless applications in various fields and has played a significant role in shaping our understanding of the world and the universe. As we continue to explore and expand our knowledge of mathematics, Non-Euclidean geometry will undoubtedly play a critical role in pushing the boundaries of our understanding even further.
{"url":"https://micro.rodeo/posts/non-euclidean-geometry-info/","timestamp":"2024-11-12T16:24:18Z","content_type":"text/html","content_length":"12164","record_id":"<urn:uuid:d4110fbf-bd77-451c-abc5-67b628890c81>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00245.warc.gz"}
Dividend, Divisor, and Quotient Calculators The three parts to a division operation are the dividend, the divisor, and the quotient. The dividend is the starting number that is being divided, the divisor is the number the dividend is being divided by, and the quotient is the answer. If you have any two of the three parts, then you can calculate the missing part. We have created five different calculators for all the different scenarios. #1 Find the quotient and the remainder You know the dividend and the divisor, and you want to calculate the quotient and the remainder. #2 Find the quotient in decimal form You know the dividend and the divisor, and you want to calculate the quotient. (Answer rounded to nearest thousandth if necessary) #3 Find the dividend You know the divisor and the quotient, and you want to calculate the dividend. (Answer rounded to nearest thousandth if necessary) #4 Find the divisor You know the dividend and the quotient, and you want to calculate the divisor. (Answer rounded to nearest thousandth if necessary) #5 Find the dividend and the divisor You know the quotient, and you want to calculate the dividend and the divisor. Do you want to know how our calculators work? Here is how each calculator finds the answer for you: Calculator #1: This is just a normal division problem. We used long division to get the quotient and the remainder. Calculator #2: This is also a normal division problem. This what you would get if you type in the dividend divided by the divisor on a normal calculator. Calculator #3: To find the dividend, we start with this known equation: dividend ÷ divisor = quotient Which can be rewritten as follows by solving for the dividend: dividend = quotient × divisor Thus, to find the dividend, we multiply the quotient by the divisor. Calculator #4: To find the divisor, we again start with this known equation: dividend ÷ divisor = quotient Which can be rewritten as follows by solving for the divisor: divisor = dividend ÷ quotient Thus, to find the divisor, we divide the dividend by the quotient. Calculator #5: This is easy if you enter an integer as the quotient. Then we simply make the dividend the quotient and 1 the divisor and we are done! However, if the quotient is a fractional number (decimal number), then it is not as easy. In that case, we do the following: a) Make the dividend the quotient and 1 the divisor. b) Multiply the dividend and the divisor by a number that will make the dividend an integer. c) Use a GCF Calculator to find the greatest common factor of the dividend and divisor. d) Divide the dividend and the divisor by the greatest common factor to get the answer. Privacy Policy
{"url":"https://divisible.info/dividend-divisor-quotient.html","timestamp":"2024-11-09T12:35:40Z","content_type":"text/html","content_length":"10246","record_id":"<urn:uuid:4bc21543-af79-4e44-8f5b-49e5c7cbed8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00681.warc.gz"}
likai @ cs-people - 2009-11-04 Post date: Nov 6, 2009 10:58:42 PM Continue from last week the implementation of binary search tree traversal, this time using pointer to pointer (see bst-new.c attached). This approach cannot be efficiently emulated by other languages like Java. □ Review: writing free_bst() recursively. The malloc()/free() pair of functions is how you obtain/relinquish memory. Remember not to free(t) first, otherwise t->left and t->right would become □ find_pos() takes pointer to pointer to a binary search tree node for argument, and returns pointer to pointer to node where the node with given key would be found. If there is no node with the given key, returns pointer to the NULL pointer. It could either be the original bst_t ** to the root node, or either the memory location of the left or right member of some node that contains the NULL pointer. □ In other words, find_pos() returns the position that can be used to modify the tree, either for insert or remove. □ find() can be written in terms of find_pos(). □ insert() can be written in terms of find_pos(). □ As an exercise, see if you can write remove() in terms of find_pos() as well. It may involve writing a find_min_pos() function. □ The indirect pointer can be seen reflected in the assembly code of bst-new.c. It doesn't increase number of instructions significantly (compared to bst.s, attached, generated from bst.c from last week). □ The insert() function is now tail recursive, and it generates much better code compared to before. □ With -O3, the compiler inlines the definition of find_pos() into find() and insert(), essentially synthesizing new functions based on the one we wrote.
{"url":"https://cs.likai.org/teaching/cs210-fall-2009/2009-11-04","timestamp":"2024-11-03T03:09:44Z","content_type":"text/html","content_length":"87797","record_id":"<urn:uuid:fb09b084-4415-4960-b88f-6361fdbbfe6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00662.warc.gz"}
3D Printed Measuring Spoons Introduction: 3D Printed Measuring Spoons Making your own measuring spoons is a fun and easy weekend project to learn Tinkercad and get comfortable with 3D printing. While there are already a bunch of existing measuring spoon files out there, designing your own allows you to customize them and make them unique. If you would prefer to just print out the set of spoons I designed and learn nothing, I have also attached the files as an STL. 🤪 In this project I used: • Blue PLA filament • Creality Cr-10 3D Printer (Note that some of the links on this page are affiliate links. This does not change the cost of the item for you. I reinvest whatever proceeds I receive into making new projects. If you would like any suggestions for alternative suppliers, please let me know.) Step 1: Do Some Math So the first order of business is figuring out how big the measuring spoons actually need to be. Since I am in the USA, I am using our archaic system of Tablespoons and teaspoons. Thanks to the wikipedia entry on measuring spoons, I know that a Tablespoon is 14.8ml and teaspoon is 4.9ml. Armed with this information, I could calculate what a 1/4 teaspoon or 1/2 Tablespoon would be using simple division. You can see the results in the "target ml" section of the spreadsheet. You may be wondering why I then immediately doubled the "target ml" number. Well, a measuring spoon is a half sphere, and we want to calculate the volume of a full sphere. If we only calculated the sphere size for just the "target ml", when we cut the sphere in half the volume would be half of what we need. In order to solve for this, we just assumed the sphere needs to be twice as big as the measurement we actually want. This way, we can cut it in half later. The next number we need is the radius (in inches). I basically guessed at the appropriate radius to begin with. Using the radius we can calculate the volume of a sphere in cubic inches (cu-in). The formula for that is V = (4/3)πr³, but if you don't feel like doing all that math, you can just use this handy calculator to figure that out. Now that we know the volume in cu-in (imperial), we need to convert it to ml (metric). To figure that out, you can use the formula ml = in³ / 0.061024, or again, use a handy online calculator. Now you should be left with a ml number that corresponds to the volume of the sphere. You can compare this number against "Target ml (doubled)" to see how close you are. From here you can tweak the initial radius input and recalculate until you have basically hit your target number. Once all the target volume numbers have been hit for the "Inner sphere" size, I needed to figure out the width, length and height for the sphere. This is easy. All 3 of those dimension are based on the diameter, which is just twice the radius (D = r x 2). Finally, I added 0.2 inches to each diameter number to calculate an "Outer sphere" which will allow for 0.1 inch thick walls for the spoon. If all of that was too much... All you need to know is the size of the "Inner sphere" which will be the cutout hole in Tinkercad and the "Outer sphere" which will be the walls of the spoon. Step 2: Hollow Half Sphere Create a sphere based on the Tablespoon "Inner sphere" dimension that was calculated. Make this sphere a hole. Create a sphere based on the Tablespoon "Outer sphere" dimension. Align this sphere with the inner sphere by centering it on all 3 axis. Create a square hole and place it over the entirety of the outer sphere. Move it vertically off the workplane (along the Z axis) until it is halfway up the sphere. For the tablespoon we would move it 0.856in off the workplane. I arrived at that by dividing the outer sphere by 2 (height = 1.712 / 2). Finally, select all of the objects and group them together. You should now have a perfect half-sphere cup. Step 3: Get a Handle Make a box for the handle with the dimensions of 3" long x 1/2" wide x 1/8" tall. Go to the boxes' property window and adjust the "Radius" slider to 0.22 and "Steps" to 15. Place a 1/4" inch wide cylindrical hole centered on the width of the box and about 1/8" from one of the edges. This will be used to insert a ring into the handle later. Group the box and the hole Next, align the handle to the top center of the half sphere, and then slide the handle until the edge of the box is just inside the sphere. Don't worry too much about the exact dimensions. It should just look about right. Group the sphere and the handle together. If your cube is poking out a little on the inside of the sphere, here is a trick to get rid of it; ungroup everything! Once everything is ungrouped, select all the objects and regroup them at once. It should now be perfect. Step 4: Create a Label Use the text tool to create a capital "T" for Tablespoon. Convert the T to a hole and place it somewhere along the handle. You can either put the labels all the way through the handle, or partially through to make an indent (like I did). Group them together to cut out your text hole. Step 5: Make More Spoons! Once you know how to make a Tablespoon, repeat the process to make other sized spoons using the other values from the spreadsheet (or that you calculated yourself!). Step 6: Make a Ring Create a torus shape that has a "Radius" of 0.5, a "Tube" of 0.07, and both "Sides" and "Steps" set to 24. Cut out a 0.1 slice in the top center of the torus ring using a hole shape and the group function. Step 7: 3D Print Select each spoon one-by-one and export them to STL. Also, do the same for the ring. 3D print your spoons out of PLA using your 3D printer and software of choice using a standard quality setting. I use Ultimaker Cura software to setup my prints and Creality CR-10 3D printer to actually make things. My print took about 8 hours in total for the spoons and 10 minutes for the ring. Step 8: Get Baking! Now that you have made some measuring spoons, bake something delicious and mail it to me share it in the comments below along with a picture of your spoons. Did you find this useful, fun, or entertaining? Follow @madeineuphoria to see my latest projects.
{"url":"https://www.instructables.com/3D-Printed-Measuring-Spoons/","timestamp":"2024-11-01T19:03:50Z","content_type":"text/html","content_length":"108047","record_id":"<urn:uuid:387bcce8-f2c0-4cd9-94c1-661a0848461c>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00694.warc.gz"}
Week 20: Making ad-hoc polymorphism less ad hoc | Swizec Teller [This post is part of an ongoing challenge to understand 52 papers in 52 weeks. You can read previous entries, here, or subscribe to be notified of new posts by email] The Hindley-Milner type system is one of the more impressive things in computer science. Global type inference that can figure out the general type of a whole program without a single type annotation I've only ever used it in Haskell and let me tell ya, when I get confused, I delete my type annotations and let the compiler tell me what the hell I'm doing. It's usually right. But while Hindley-Milner can do parametric polymorphism natively, it needed some work to support ad-hoc polymorphism and become what Haskell's got today. In their 1988 paper, How to make ad-hoc polymorphism less ad hoc Wadler and Blott of the Haskell committee explain how to do just that by introducing type classes. Type classes are the biggest extension Haskell adds to Hindley-Milner, which makes it a more practical language than its predecessors Miranda and ML. But no more powerful, of course. Polymorphism lets us define functions that can act on arguments of different types. Most obvious with operators where writing 1+4 works just as well as writing 1.3+3.14, you don't have to use addInt or addFloat. The compiler handles that for you. Strachey defined two types of polymorphism - ad-hoc and parametric. Ad-hoc polymorphism occurs when a function behaves differently for different types, sometimes with completely heterogeneous implementations. Operator overloading is a common example of ad-hoc Parametric polymorphism occurs when a function behaves the same for different data types. length is a good example, because it doesn't care what type of list it's counting. You can implement a general length function to behave the same for any list type. This paper expands Hindley-Milner's parametric polymorphism, with type classes to introduce ad-hoc polymorphism. Because the paper shows how to translate between type classes and pure HM, the authors claim any language using HM typing could potentially be retrofitted with type classes via a preprocessor. The easiest places to look at issues arising from ad-hoc polymorphism are arithmetic operator overloading and equality. Standard ML takes the simplest approach to operator overloading - arithmetic operators are overloaded, but functions that use them are not. This means while you can write 3*3 or 3.14*3.14, you cannot define a square function as square x = x*x and later use terms like square 3 or square 3.14. You could solve this with an overloaded square function, using implementations of type Int -> Int and Float -> Float. This becomes unwieldy when you want to have a function squares that returns a tuple of three squared numbers. You'd need eight different implementations! Generally speaking, overloaded functions grow exponentially with the number of arguments. Not good. Equality doesn't fare much better. If you treat it as overloaded, like Standard ML used to, you can use terms such as 3*4 == 12, but you cannot define functions based on equality. For instance, a function member that tells you whether something is in a list or not won't have a defined type. Miranda takes a slightly better approach in that it treats equality as fully polymorphic. Its type is then (==) :: a -> a -> Bool, but this forces the environment to perform run-time checks on the representation of abstract types. Some might consider this a bug. Having to look inside an abstraction to decide its type definitely smells funny. More recent versions of Standard ML take the approach of making equality polymorphic in a limited fashion using something called eqtype variables. This means that type clashes are correctly returned as type errors, but still poses some limitations on the run-time implementation. Finally, object-oriented programming introduces the idea that users can define their own types. Getting these to support equality means having to force each object to carry with it a pointer to an equality function for that specific type. A dictionary of appropriate equality functions (to compare with different types) is even better. But a lot of those dictionaries will look exactly the same so we might as well pass them around separately from objects. This is the intuition behind type classes. Let's say we want to overload (+), (*), and negate on Int and Float. We can do this by introducing a type class called Num that says "a type a belongs to Num if (+), `(), andnegate` in appropriate types are defined on it"*. Now we can define type instances such as Num Int and specify which functions to translate the overloaded symbols into. We assume things like addInt and mulInt are defined by default. class Num a where (+), (*) :: a -> a -> a negate :: a -> a instance Num Int where (+) = addInt (*) = mulInt negate = negInt instance Num Float where (+) = addFloat (*) = mulFloat negate = negFloat This lets us define both the square and squares functions from before, but with a well-defined type at compile time. square :: Num a => a -> a square x = x*x squares :: Num a, Num b, Num c => (a,b,c) -> (a,b,c) squares (x, y, z) = (square x, square y, square z) square is of type a -> a and the compiler will be able to resolve both square 3 and square 3.14 into their appropriate types. Similarly, squares no longer needs eight types, just one - (a,b,c) -> As expected, a call such as square 'c' will produce a type error because there is no Char instance of the Num type class. A compiler can use our class and instance definitions to create dictionaries holding pointers to correct methods. For Num we introduce NumD as a type constructor for a new type whose values are created using NumDict. Functions add, mul, and neg take a value of type NumD and return its first, second, or third component. data NumD a = NumDict (a -> a -> a) (a -> a -> a) (a -> a) add (NumDict a m n) = a mul (NumDict a m n) = m neg (NumDict a m n) = n numDInt :: NumD Int numDInt = NumDict addInt muLInt negInt numDFLoat :: NumD Float numDFloat = NumDict addFloat mulFloat negFloat To use NumD a compiler would simply replace all instances of Num with their respective dictionary values, as identified by the type. For instance, x+y translates into add numD x y. add numD returns the correct addInt or addFloat function as identified by the type of x and y, then applies said function on the arguments. It's pretty nifty. Our square example becomes square': square' :: NumD a -> a -> a square' numD x = mul numD x x Which means that a call such as square 3 will translate into square' numDInt 3 and square 3.2 into square' numDFloat 3. A similar conversion works for squares, just with more characters involved. When applied to equality, type classes don't differ much from Standard ML's eqtype variables. But they allow the compiler to decide types at compile-time rather than run-time and a user can easily extend new classes to support abstract types. The definition is similar to how we defined Num earlier - we'll make a type class called Eq and define instances for Int and Char. We'll also define a member function, which was giving us trouble class Eq a where (==) :: a -> a -> bool instance Eq Int where (==) = eqInt instance Eq Char where (==) = eqChar member :: Eq a => [a] -> a -> Bool member [] y = False member (x:xs) y = (x == y) \/ member xs y As you can imagine we can now write terms such as 5 == 4, 'a' == 'b', and member "Haskell" 'k' or member [1,2,3] 2. The compiler can infer the correct type each time and using member on a type that doesn't have an Eq instance will produce a type error. But what's really cool is that we can define equality between lists and tuples. Even crazier things - sets, random data types we define ourselves, anything really. instance Eq a, Eq b => Eq (a,b) where (u,v) == (x,y) = (u == x) & (v == y) instance Eq a => Eq [a] where [] == [] = True [] == y:ys = False x:xs == [] = False x:xs == y:ys = (x == y) & (xs == ys) Essentially "two tuples are equal if their members are equal" and "lists are equal if they are both empty, or their heads and tails are equal". Now we can write terms such as "Haskell" == "Curry" and even member ["Haskell", "Alonzo"] "Moses". The compiler figures this out in much the same way as before - using dictionaries. I'm not going to type it all out but, for instance, integers will have a corresponding eqDInt function, characters will have an eqDChar function and so on. A term such as 3*4 == 12 will translate into eq eqDInt (mul numDInt 3 4) 12. So far we've treated Num and Eq as completely different classes. But it makes sense that all numerical types should also be comparable, while all comparable types might not be numerical. We can make Num a subclass of Eq: class Eq a => Num a where (+) :: a -> a -> a (*) :: a -> a -> a negate :: a -> a This asserts that a may belong to class Num only if it also belongs to Eq, making Num a subclass of Eq. All other class and instance declarations remain the same. Things magically just work. Now we can write functions like this: memsq :: Num a => [a] -> a -> Bool memsq xs x = member xs (square x) Because Eq is implied by Num, we didn't have to mention it in the type. Neat. A nice consequence of dictionary-based translation is also that we can define as many super- and subclasses as we want and it doesn't confuse the compiler in the least. This is a great advantage from object-oriented languages where having many superclasses usually poses implementation problems. Now you know how type classes work in Haskell. They introduce a lot of neat things that help us write more expressive code while, naturally, not increasing the power of the language. The only issue with type classes is that they introduce extra parameters to be passed around at run-time (the dictionaries), but that's not too bad. The rest of the paper deals with formalising this intuitive definition of type classes using lambda calculus. But I'm not going to include that in my summary, it's too mathsy and doesn't add much to understanding what's going on. At least it didn't for me. That said, I finally understand how Haskell's type system works. Now if only I could find more excuses to actually use Haskell. Published on April 30th, 2014 in 52papers52weeks, Ad hoc polymorphism, Haskell, Hindley-Milner, Learning, Personal, polymorphism, Standard ML, Type class, Type inference, Papers Did you enjoy this article? Continue reading about Week 20: Making ad-hoc polymorphism less ad hoc Semantically similar articles hand-picked by GPT-4 I write articles with real insight into the career and skills of a modern software engineer. "Raw and honest from the heart!" as one reader described them. Fueled by lessons learned over 20 years of building production code for side-projects, small businesses, and hyper growth startups. Both successful and not. Subscribe below 👇 Join Swizec's Newsletter and get insightful emails 💌 on mindsets, tactics, and technical skills for your career. Real lessons from building production software. No bullshit. "Man, love your simple writing! Yours is the only newsletter I open and only blog that I give a fuck to read & scroll till the end. And wow always take away lessons with me. Inspiring! And very relatable. 👌" Senior Mindset Book Get promoted, earn a bigger salary, work for top companies Learn more Have a burning question that you think I can answer? Hit me up on twitter and I'll do my best. Who am I and who do I help? I'm Swizec Teller and I turn coders into engineers with "Raw and honest from the heart!" writing. No bullshit. Real insights into the career and skills of a modern software engineer. Want to become a true senior engineer? Take ownership, have autonomy, and be a force multiplier on your team. The Senior Engineer Mindset ebook can help 👉 swizec.com/senior-mindset. These are the shifts in mindset that unlocked my career. Curious about Serverless and the modern backend? Check out Serverless Handbook, for frontend engineers 👉 ServerlessHandbook.dev Want to Stop copy pasting D3 examples and create data visualizations of your own? Learn how to build scalable dataviz React components your whole team can understand with React for Data Visualization Want to get my best emails on JavaScript, React, Serverless, Fullstack Web, or Indie Hacking? Check out swizec.com/collections Did someone amazing share this letter with you? Wonderful! You can sign up for my weekly letters for software engineers on their path to greatness, here: swizec.com/blog Want to brush up on your modern JavaScript syntax? Check out my interactive cheatsheet: es6cheatsheet.com By the way, just in case no one has told you it yet today: I love and appreciate you for who you are ❤️
{"url":"https://swizec.com/blog/week-20-making-adhoc-polymorphism-less-ad-hoc/","timestamp":"2024-11-05T17:28:41Z","content_type":"text/html","content_length":"874131","record_id":"<urn:uuid:fefacbcc-f9ba-4eed-bcf7-db088f932464>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00883.warc.gz"}
Download Algebra and Trigonometry , Third Edition by James Stewart, Lothar Redlin, Saleem Watson PDF Download Algebra and Trigonometry , Third Edition by James Stewart, Lothar Redlin, Saleem Watson PDF By James Stewart, Lothar Redlin, Saleem Watson This top promoting writer workforce explains recommendations easily and obviously, with out glossing over tricky issues. challenge fixing and mathematical modeling are brought early and strengthened all through, supplying scholars with an exceptional beginning within the ideas of mathematical pondering. complete and frivolously paced, the publication offers whole assurance of the functionality inspiration, and integrates an important volume of graphing calculator fabric to assist scholars increase perception into mathematical principles. The authors' cognizance to element and readability, just like present in James Stewart's market-leading Calculus e-book, is what makes this ebook the industry chief. Read or Download Algebra and Trigonometry , Third Edition PDF Similar popular & elementary books Petascale computing: algorithms and applications Even though the hugely expected petascale pcs of the close to destiny will practice at an order of value swifter than today’s fastest supercomputer, the scaling up of algorithms and purposes for this category of pcs is still a tricky problem. From scalable set of rules layout for large concurrency toperformance analyses and medical visualization, Petascale Computing: Algorithms and purposes captures the state-of-the-art in high-performance computing algorithms and functions. With an analogous layout and have units because the industry best Precalculus, 8/e, this concise textual content offers either scholars and teachers with sound, regularly dependent causes of the mathematical techniques. PRECALCULUS: A CONCISE path is designed to supply a cheap, one-semester replacement to the conventional two-semester precalculus textual content. Atomic correlations were studied in physics for over 50 years and referred to as collective results until eventually lately once they got here to be famous as a resource of entanglement. this can be the 1st booklet that comprises exact and entire research of 2 at the moment greatly studied topics of atomic and quantum physics―atomic correlations and their family members to entanglement among atoms or atomic systems―along with the most recent advancements in those fields. Additional resources for Algebra and Trigonometry , Third Edition Example text 412m 2 (a) ab ϭ ; Property (b) a ϩ 1b ϩ c2 ϭ ; (c) a 1b ϩ c2 ϭ ; Property Property 3. To add two fractions, you must first express them so that they have the same . 4. To divide two fractions, you multiply. the divisor and then 23–28 ■ Use properties of real numbers to write the expression without parentheses. 23. 31x ϩ y 2 24. 1a Ϫ b28 27. Ϫ 52 12x Ϫ 4y2 28. 13a 2 1b ϩ c Ϫ 2d2 29–40 29. 3 10 31. 2 3 5–6 ■ 5. 23, Ϫ 13, 126 ϩ 154 30. Ϫ 35 2 2 3 Ϫ 1 4 ϩ 15 32. 1 ϩ 58 Ϫ 1 6 34. 25A 89 ϩ 12 B 36. A 12 Ϫ 13 B A 12 ϩ 13 B 35. X6 x10 39. 12y2 2 3 34. z 5zϪ3zϪ4 37. a9aϪ2 a 40. 18x 2 2 42. 12a3a2 2 4 43. 13z 2 2 16z2 2 Ϫ3 44. 12z2 2 Ϫ5z10 45. a 46. a a2 3 b 4 3x4 2 b 4x2 47–72 ■ Simplify the expression, and eliminate any negative exponent(s). 47. 14x2y4 2 12 x5y 2 60. 3 a2 5 a3b2 3 b a 3 b b c 8a3bϪ4 2aϪ5b5 1u3√Ϫ2 2 3 73–80 66. 3a Ϫ1 b b3 y Ϫ2 5x b Ϫ3 qϪ1rϪ1sϪ2 rϪ5sqϪ8 ■ b Ϫ1 12√ 3„ 2 2 √ 3„ 2 62. a 64. 1uϪ1√ 2 2 2 x5y3 2x3y2 2 x4z2 b a b z3 4y5 5xyϪ2 xϪ1yϪ3 1rs2 2 3 1rϪ3s2 2 2 x2y 2y 70. a 2aϪ1b Ϫ3 b a2bϪ3 72. a xyϪ2zϪ3 3 b Ϫ2 68. Distances Between Powers closer together? 1010 and 1050 or Which pair of numbers is 10100 and 10101 109. Signs of Numbers Let a, b, and c be real numbers with a Ͼ 0, b Ͻ 0, and c Ͻ 0. Determine the sign of each expression. 4 R ATIONAL E XPONENTS AND R ADICALS Radicals ᭤ Rational Exponents ᭤ Rationalizing the Denominator In this section we learn to work with expressions that contain radicals or rational exponents. ▼ Radicals We know what 2n means whenever n is an integer. To give meaning to a power, such as 24/5, whose exponent is a rational number, we need to discuss radicals. Rated of 5 – based on votes
{"url":"http://blog.reino.co.jp/index.php/ebooks/algebra-and-trigonometry-third-edition","timestamp":"2024-11-05T07:20:59Z","content_type":"text/html","content_length":"38675","record_id":"<urn:uuid:f39cd78a-623f-466d-b327-8e858b88f221>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00803.warc.gz"}
Programmable Logic Array (PLA) | Block Diagram of PLA Programmable Logic Array (PLA) | Block Diagram of PLA: The combinational circuit do not use all the minterms every time. Occasionally, they have don’t care conditions. Don’t care condition when implemented with a ROM becomes an address input that will never occur. The result is that not all the bit patterns available in the ROM are used, which may be considered a waste of available equipment. For cases where the number of don’t care conditions is excessive, it is more economical to use a second type of LSI component called a Programmable Logic Array (PLA). A Programmable Logic Array is similar to a ROM in concept; however it does not provide full decoding of the variables and does not generates all the minterms as in the ROM. The PLA replaces decoder by group of AND gates, each of which can be programmed to generate a product term of the input variables. In PLA, both AND and OR gates have fuses at the inputs, therefore in Programmable Logic Array both AND and OR gates are programmable. Fig. 3.88 shows the Block Diagram of PLA. It consists of n inputs, output buffer with m outputs, m product terms, m sum terms, input and output buffers. The product terms constitute a group of m AND gates and the sum terms constitute a group of m OR gates, called OR matrix. Fuses are inserted between all n inputs and their complement values to each of the AND gates. Fuses are also provided between the outputs of the AND gates and the inputs of the OR gates. The third set of fuses in the output inverters allows the output function to be generated either in the AND-OR form or in the AND-OR-INVERT form. When inverter is bypassed by link we get AND-OR implementation. To get AND-OR-INVERTER implementation inverter link has to be disconnected. Input Buffer: Input buffers are provided in the Programmable Logic Array to limit loading of the sources that drive the inputs. They also provide inverted and non-inverted form of inputs at its output. The Fig. 3.89 shows two ways of representing input buffer for single input. AND Matrix: The Fig. 3.90 shows the AND matrix. It is used to form product terms. It has m AND gates with 2n inputs and m outputs, one for each AND gate. The Fig. 3.90 shows the AND gates formed by diodes and resistors structure. Each AND gate has all the input variables in complemented and uncomplemented form. There is a nichrome fuse link in series with each diode which can be burn out to disconnect particular input for that AND gate. Before programming, all fuse links are intact and the product term for each AND gate is given by The Fig. 3.91 shows the simplified and equivalent representation of input connections for one AND gate. The array logic symbol shown in Fig. 3.91 (b) uses a single horizontal line connected to the gate input and multiple vertical lines to indicate the individual inputs. Each intersection between horizontal line and vertical line indicates the fuse connection. The Fig. 3.92 shows the simplified representation of AND matrix with input buffer. OR Matrix: The OR matrix is provided to produce the logical sum of the product term outputs of the AND matrix. The Fig. 3.93 shows the OR gates formed by diodes and resistors, structure. Each OR gate has all the product terms as input variables. There is a nichrome fuse link in series with each diode which can be burn out to disconnect particular product term for that OR gate. Before programming, all fuse link in OR matrix are also intact and the sum term for each OR gate is given by The Fig. 3.94 shows the simplified and equivalent representation of input connections for one OR gate. The Fig. 3.95 shows the simplified representation of OR matrix. Invertible and Non Invertible Matrix: Invertible and Non Invertible Matrix provides output in the complement or uncomplemented form. The user can program the output in either complement or un-complement form as per design requirements. The typical circuits for Invertible and Non Invertible Matrix is as shown in Fig. 3.96. In both the cases if fuse is intact the output is in its uncomplemented form; otherwise output is in the complemented form. Output Buffer: The driving capacity of PLA is increased by providing buffers at the output. They are usually TTL compatible. The Fig. 3.97 shows the tri-state, TTL compatible output buffer. The output buffer may provide totem-pole, open collector or tri-state output. Output Through Flip-Flops: For the implementation of sequential circuits we need memory elements, flip-flops and combinational circuitry for deriving the flip-flop inputs. To satisfy both the needs some PLAs are provided with flip-flop at each output, as shown in the Fig. 3.98. Implementation of Combination Logic Circuit using PLA: Like ROM, PLA can be mask-programmable or field-programmable. With a mask-programmable PLA, the user must submit a PLA program table to the manufacturer. This table is used by the vendor to produce a user-made PLA that has the required internal paths between inputs and outputs. A second type of PLA available is called a field-programmable logic array, or FPLA. The FPLA can be programmed by the user by means of certain recommended procedures. FPLAs can be programmed with commercially available programmer units.
{"url":"https://www.eeeguide.com/programmable-logic-array-pla/","timestamp":"2024-11-08T23:57:39Z","content_type":"text/html","content_length":"226849","record_id":"<urn:uuid:4ed51e6b-45e9-43f7-8fd2-1fb66ec214cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00813.warc.gz"}
You might be interested in my recent work on Continuations and Coexponentials. Here is a draft, and slides from HOPE and LFCS, and links to repositories: coexp and agda-coexp. I am a Marie SkÅ‚odowska-Curie Fellow at the Department of Computer Science and Engineering, Alma Mater Studiorum - Università di Bologna, and also affiliated with the INRIA OLAS Team. The project page is at ReGraDe-CS. Previously, I was a Research Associate to the Head of School Prof. Simon Gay, at the School of Computing Science, University of Glasgow, and affiliated with the FATA section. Before that, I was a Visiting Research Fellow at the Computer Laboratory, University of Cambridge, on a Paul Purdom Fellowship, working with Prof. Marcelo Fiore, and Prof. Neel Krishnaswami. Even before that, I was a PhD student with Prof. Amr Sabry at the Luddy School, Indiana University Bloomington. I also work with Hannah Earley on vaire.co. I study mathematical foundations of computation, through an algebraic lens. My research spans programming languages, type theory, category theory, logic, semantics, constructive mathematics, and My papers are listed on arXiv and dblp. • The Duality of Abstraction. 2023. • Ryan G. Scott, Vikraman Choudhury, Ryan R. Newton, Niki Vazou, Ranjit Jhala: Deriving Law-Abiding Instances. 2017. • Jacques Carette, Chao-Hong Chen, Vikraman Choudhury, Amr Sabry: Fractional Types. 2016. Selected Talks • Continuations and Coexponentials. LFCS Seminar, Edinburgh. May 2023. • Free Commutative Monoids in HoTT. MFPS 2022, Cornell. Jul 2022. • Symmetries in Reversible Programming. Logic and Semantics Seminar, CL, Cambridge. May 2022. • Weighted Sets and Modalities. SYCO 8, Tallinn, Estonia. Dec 2021. • Recovering Purity with Comonads & Capabilities. ICFP, Aug 2020. • Recovering Purity with Comonads & Capabilities. Midwest PL Summit 2019, Purdue University. Sep 2019. • The finite-multiset construction in HoTT. HoTT 2019, CMU. Aug 2019. • Retrofitting Purity with Comonads & Capabilities. Logic and Semantics Seminar, CL, Cambridge. May 2019. • Homotopy theoretic aspects of Reversible Computing. PL Wonks, IU. Sep 2017. • Mathematical Models of Resource-Conscious Computation. PhD Thesis, Indiana University, 2022. • Distributed Issue Tracking using Patch Theory. Masters Thesis, IIT Kanpur, 2015. At Cambridge, I supervise • Discrete Maths • Semantics of Programming Languages • Denotational Semantics • Logic and Proof • Types At IU, I've taught • B-561: Advanced Database Concepts • B-401: Fundamentals of Computing Theory • B-505: Applied Algorithms • I-590: Technical Foundations of Cybersecurity • C-343: Data Structures • C-241 & H-241: Discrete Structures for Computer Science • C-211: Intro to Computer Science I can be reached via email (alternative), or irc (vikraman@freenode), or smail at the following address: Dipartimento di Informatica - Scienza e Ingegneria Mura Anteo Zamboni, 7 40126 Bologna BO
{"url":"https://vikraman.org/","timestamp":"2024-11-10T12:22:55Z","content_type":"application/xhtml+xml","content_length":"14087","record_id":"<urn:uuid:bcc8a66c-53bb-494d-b422-3aa8af8245a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00162.warc.gz"}
Topological Data Analysis • IMSI Back to top In this age of rapidly increasing access to ever larger data sets, it has become clear that studying the “shape” of data using the tools of combinatorial and algebraic topology can lead to much deeper insights than other standard methods when analyzing complex data sets. Topological data analysis (TDA) is the exciting and highly active new field of research that encompasses these productive developments at the interface of algebraic topology, statistics, and data science. This workshop will consist of a small number of plenary one-hour lectures by leading researchers in the field, a larger number of contributed short talks from early-career researchers, live demos of software, a problem session, and a poster session. The speakers will cover a wide range of topics, from theory to concrete applications of TDA in science and engineering. The goals of the workshop are to foster scientific interactions across the growing breadth of the applied topology community and to provide an opportunity for algebraic topologists, statisticians, and data scientists curious about this dynamic new field to learn more about it. Back to top B F Brittany Fasy Mathematics Montana State University K H Kathryn Hess Mathematics M K Matthew Kahle Mathematics Ohio State University S M Sayan Mukherjee Statistics Duke University J P Jose Perea Mathematics Michigan State University Invited Speakers Back to top L C Lorin Crawford Microsoft Research New England S K Sara Kalisnik Bentley University F M Facundo Memoli Ohio State E M Ezra Miller Duke University A M Anthea Monod Imperial College London E M Elizabeth Munch Michigan State University V N Vidit Nanda University of Oxford K T Katharine Turner Australian National University Y W Yusu Wang University of California, San Diego Back to top Monday, April 26, 2021 9:00-9:50 CDT Algebraic Wasserstein distance between persistence modules Speaker: Katharine Turner (Australian National University) 10:15-10:45 CDT From Geometry to Topology: Inverse Theorems for Distributed Persistence Speaker: Elchanan Solomon (Duke University) What is the “right” topological invariant of a large point cloud X? Prior research has focused on estimating the full persistence diagram of X, a quantity that is very expensive to compute, unstable to outliers, and far from a sufficient statistic. We therefore propose that the correct invariant is not the persistence diagram of X, but rather the collection of persistence diagrams of many small subsets. This invariant, which we call “distributed persistence,” is trivially parallelizable, more stable to outliers, and has a rich inverse theory. The map from the space of point clouds (with the quasi-isometry metric) to the space of distributed persistence invariants (with the Hausdorff-Bottleneck distance) is a global quasi-isometry. This is a much stronger property than simply being a sufficient statistic, and is to our knowledge the only result of its kind in the TDA literature. Moreover, the quasi-isometry bounds depend on the size of the subsets taken, so that as the size of these subsets goes from small to large, the invariant interpolates between a purely geometric one and a purely topological one. Lastly, we note that our inverse results do not actually require considering all subsets of a fixed size (an enormous collection), but a relatively small collection satisfying certain covering properties, properties that arise with high probability when randomly selecting sufficiently many subsets. These theoretical results are complemented by experiments showcasing the success of distributed persistence at solving a number of morphometric challenges. This is joint work with Alex Wagner and Paul Bendich. 11:00-11:30 CDT Ephemeral persistence modules and distance comparison Speaker: Nicholas Berkouk (Ecole Polytechnique Fédérale de Lausanne) In this talk, I will provide a definition of ephemeral multi-persistent modules and prove that the quotient of persistent modules by the ephemeral ones is equivalent to the category of γ-sheaves. In the case of one-dimensional persistence, our definition agrees with the usual one showing that the observable category and the category of γ-sheaves are equivalent. I will also establish isometry theorems between the category of persistent modules and γ-sheaves both endowed with their interleaving distance. Finally, we compare the interleaving and convolution distances. Altogether, these results pave a new way to define dimension reduction techniques for multi-parameter persistence modules. Joint work with François Petit. 13:00-13:50 CDT Discrete Morse-based graph reconstruction and data analysis Speaker: Yusu Wang (University of California, San Diego) In recent years, topological and geometric data analysis (TGDA) has emerged as a new and promising field for processing, analyzing and understanding complex data. Indeed, geometry and topology form natural platforms for data analysis, with geometry describing the ”shape” and ”structure” behind data; and topology characterizing / summarizing both the domain where data are sampled from, as well as functions and maps associated to them. In this talk, I will show how the topological objects from discrete Morse theory and persistent homology can be used to reconstruct hidden geometric graphs; how such an approach can be extended to handle high-dimensional point clouds data (with sparsification); and how they can be then combined with machine learning pipelines for further data analysis tasks. This talk is based on multiple projects with multiple collaborators and references will be given during the talk. 14:15-14:45 CDT A simplicial extension of node2vec Speaker: Celia Hacker (Ecole Polytechnique Fédérale de Lausanne) The well known node2vec algorithm has been used to explore network structures and represent the nodes of a graph in a vector space in a way that reflects the structure of the graph. Random walks in node2vec have been used to study the local structure through pairwise interactions. Our motivation for this project comes from a desire to understand higher-order relationships by a similar approach. To this end, we propose an extension of node2vec to a method for representing the k-simplices of a simplicial complex into Euclidean space. In this presentation I outline a way to do this by performing random walks on simplicial complexes, which have a greater variety of adjacency relations to take into account than in the case of graphs. The walks on simplices are then used to obtain a representation of the simplices. We will show cases in which this method can uncover the roles of higher order simplices in a network and help understand structures in graphs that cannot be seen by using just the random walks on the nodes. 15:00-15:30 CDT Sketching Merge Trees Speaker: Bei Wang (University of Utah) Merge trees are a type of topological descriptors that record the connectivity among the sublevel sets of scalar fields. In this paper, we are interested in sketching a set of merge trees. That is, given a set T of merge trees, we would like to find a basis set S such that each tree in T can be approximately reconstructed from a linear combination of merge trees in S. A set of high-dimensional vectors can be sketched via matrix sketching techniques such as principal component analysis and column subset selection. However, up until now, topological descriptors such as merge trees have not been known to be sketchable. We develop a framework for sketching a set of merge trees that combines the Gromov-Wasserstein framework of Chowdhury and Needham with techniques from matrix sketching. We demonstrate the applications of our framework in sketching merge trees that arise from data ensembles in scientific simulations. This is joint work with Mingzhe Li and Sourabh Palande. Tuesday, April 27, 2021 9:00-9:50 CDT Statistical Frameworks for Mapping 3D Shape Variation onto Genotypic and Phenotypic Variation Speaker: Lorin Crawford (Microsoft Research) The recent curation of large-scale databases with 3D surface scans of shapes has motivated the development of tools that better detect global-patterns in morphological variation. Studies which focus on identifying differences between shapes have been limited to simple pairwise comparisons and rely on pre-specified landmarks (that are often known). In this talk, we present SINATRA: a statistical pipeline for analyzing collections of shapes without requiring any correspondences. Our method takes in two classes of shapes and highlights the physical features that best describe the variation between them.The SINATRA pipeline implements four key steps. First, SINATRA summarizes the geometry of 3D shapes (represented as triangular meshes) by a collection of vectors (or curves) that encode changes in their topology. Second, a nonlinear Gaussian process model, with the topological summaries as input, classifies the shapes. Third, an effect size analog and corresponding association metric is computed for each topological feature used in the classification model. These quantities provide evidence that a given topological feature is associated with a particular class. Fourth, the pipeline iteratively maps the topological features back onto the original shapes (in rank order according to their association measures) via a reconstruction algorithm. This highlights the physical (spatial) locations that best explain the variation between the two groups.We use a rigorous simulation framework to assess our approach, which themselves are a novel contribution to 3D image analysis. Lastly, as a case study, we use SINATRA to analyze mandibular molars from four different suborders of primates and demonstrate its ability recover known morphometric variation across phylogenies. 10:15-10:45 CDT Interleaving by parts for persistence in a poset Speaker: Woojin Kim (Duke University) Metrics in computational topology are often either (i) themselves in the form of the interleaving distance d(F,G) between certain order-preserving maps F and G between posets or (ii) admit d(F,G) as a tractable lower bound. In this talk, assuming that the target poset of F and G admits a join-dense subset, we propose certain join representations of F and G which facilitate the computation of d (F,G). We leverage this result in order to (i) elucidate the structure and computational complexity of the interleaving distance for poset-indexed clusterings (i.e. poset-indexed subpartition-valued functors), (ii) to clarify the relationship between the erosion distance by Patel and the graded rank function by Betthauser, Bubenik, and Edwards, and (iii) to reformulate and generalize the tripod distance by the second author. This is joint work with Facundo Memoli and Anastasios Stefanou. https://arxiv.org/abs/1912.04366 11:00-11:30 CDT Algebraic topology in the mesoscopic regime Speaker: Antonio Rieser (CONACYT-CIMAT, A.C.) There have been a number of attempts to extend the realm of application of algebraic topological tools to discrete spaces such as graphs, digital images, and point clouds, which one more typically encounters in computer science and data analysis. In each of these theories, one of two strategies has typically been taken. In topological data analysis, one usually replaces the original space with one or more topological spaces that one hopes will retain the relevant topological information in the original set. In various approaches to discrete or digital topology, we find instead different attempts to develop algebraic topology from scratch for some class of discrete objects of interest, proceeding largely by analogy with classical algebraic topology. In this work, we propose a third option: we generalize algebraic topology to categories which contain both the topological spaces classically treated by classical homotopy theory, but which also include as objects the more discrete and combinatorial spaces of interest in applications. The advantage here is that there are now non-trivial ‘continuous maps’ from classical topological spaces to the discrete spaces (given the appropriate structure), and one may then compare the resulting topological invariants on each side functorially. We find that there are a number of possible such categories, each with its own particular homotopy theory and associated homologies, and, additionally, that there is a generalization of the coarse category which allows finite sets to be non-trivial (i.e. not ‘coarsely’ equivalent to a point). We will give an overview of these theories and several applications, discussing the advantages and disadvantages of each. 13:00-13:50 CDT The Truncated Interleaving Distance for Reeb Graphs Speaker: Elizabeth Munch (Michigan State University) Reeb graphs and main other related graphical signatures have extensive use in applications, but only recently has there been intense interest in finding metrics for these objects. In this talk, we focus on the interleaving distance, which is a categorical reforumlation of the epynomous metric from persistence modules. In this talk, we introduce an extension of smoothing on Reeb graphs, which we call truncated smoothing; this in turn allows us to define a new family of metrics which generalize the interleaving distance for Reeb graphs. Intuitively, we “chop off” parts near local minima and maxima during the course of smoothing. After formalizing truncation as a functor, we show that when applied after the smoothing functor, this prevents extensive expansion of the range of the function, and yields particularly nice properties. Further, for certain choices of the truncation parameter, we can construct a categorical flow for any choice of slope $m &bsol;in [0,1]$, which gives a family of interleaving distances. While the resulting metrics are not stable, we show that any pair of these for $m,m’ &bsol;in [0,1)$ are strongly equivalent metrics, which in turn gives stability of each metric up to a multiplicative constant. Wednesday, April 28, 2021 9:00-9:50 CDT Curvature sets over persistence diagrams Speaker: Facundo Memoli (Ohio State University) We study an invariant of compact metric spaces which combines the notion of curvature sets introduced by Gromov in the 1980s together with the notion of Vietoris-Rips persistent homology. For given integers k≥0 and n≥1 these invariants arise by considering the degree k Vietoris-Rips persistence diagrams of all subsets of a given metric space with cardinality at most n. We call these invariants (n,k)-persistence sets. We argue that computing these invariants could be significantly easier than computing the usual Vietoris-Rips persistence diagrams. We establish stability results for these invariants and we also precisely characterize some of them in the case of spheres with geodesic and Euclidean distances. We also identify a rich family of metric graphs for which the (4,1) -persistence sets fully recover their homotopy type. Along the way we prove some useful properties of Vietoris-Rips persistence diagrams. 10:15-10:45 CDT Intrinsic Persistent Homology via density-based metric learning Speaker: Ximena Fernández (Swansea University) Typically, persistence diagrams computed from a sample depend strongly on the distance associated to the data. When the point cloud is a sample of a Riemannian manifold embedded in a Euclidean space, an estimator of the intrinsic distance is relevant to obtain persistence diagrams from data that capture its intrinsic geometry. In this talk, we consider a computable estimator of a Riemannian metric known as Fermat distance, that accounts for both the geometry of the manifold and the density that produces the sample. We prove that the metric space defined by the sample endowed with this estimator (known as sample Fermat distance) converges a.s. in the sense of Gromov-Hausdorff to the manifold itself endowed with the (population) Fermat distance. This result is applied to obtain sample persistence diagrams that converge towards an intrinsic persistence diagram. We show that this approach outperforms more standard methods based on Euclidean norm, with theoretical results and computational experiments [1]. [1] E. Borghini, X. Fernández, P. Groisman, G. Mindlin. ‘Intrinsic persistent homology via density-based metric learning’. arXiv:2012.07621 (2020) 11:00-11:30 CDT Approximate and discrete vector bundles Speaker: Luis Scoccola (Michigan State University) Synchronization problems, such as the problem of reconstructing a 3D shape from a set of 2D projections, can often be modeled by principal bundles. Similarly, the application of local PCA to a point cloud concentrated around a manifold approximates the tangent bundle of the manifold. In the first case, the characteristic classes of the bundle provide obstructions to global synchronization, while, in the second case, they provide topological information of the manifold beyond its homology, and give obstructions to dimensionality reduction. I will describe joint work with Jose Perea in which we propose notions of approximate and discrete vector bundle, study the extent to which they determine true vector bundles, and give algorithms for the stable and consistent computation of low-dimensional characteristic classes directly from these combinatorial representations. 13:00-13:50 CDT Topological Data Analysis of Database Representations for Information Retrieval Speaker: Anthea Monod (Imperial College) Appropriately representing elements in a database so that queries may be accurately matched is a central task in information retrieval. This recently has been achieved by embedding the graphical structure of the database into a manifold so that the hierarchy is preserved. Persistent homology provides a rigorous characterization for the database topology in terms of both its hierarchy and connectivity structure. We compute persistent homology on a variety of datasets and show that some commonly used embeddings fail to preserve the connectivity. Moreover, we show that embeddings which successfully retain the database topology coincide in persistent homology. We introduce the dilation-invariant bottleneck distance to capture this effect, which addresses metric distortion on manifolds. We use it to show that distances between topology-preserving embeddings of databases are small. 14:15-14:45 CDT Predicting Survival Outcomes using Topological Shape Features of AI-reconstructed Medical Images Speaker: Chul Moon (Southern Methodist University) Tumor shape and size have been used as important markers for cancer diagnosis and treatment. This paper proposes a topological feature computed by persistent homology to characterize tumor progression from digital pathology and radiology images and examines its effect on the time-to-event data. The proposed topological features are invariant to scale-preserving transformation and can summarize various tumor shape patterns. The topological features are represented in functional space and used as functional predictors in a functional Cox proportional hazards model. The proposed model enables interpretable inference about the association between topological shape features and survival risks. Two case studies are conducted using lung cancer pathology and brain tumor radiology images. The results show that the topological features predict survival prognosis after adjusting clinical variables, and the predicted high-risk groups have significantly worse survival outcomes than the low-risk groups (p-values<0.005 for both studies). Also, the topological shape features found to be positively associated with survival hazards are irregular and heterogeneous shape patterns, which are known to be related to tumor progression. Our study provides a new perspective for understanding tumor shape and patient prognosis. Thursday, April 29, 2021 9:15-9:45 CDT Back to Basics – Topology of Simplicial Complexes for Business Optimisations Speaker: Marc Lange (Elbformat Consulting) When our topology ancestors were playing with simplicial complexes to devise models of RP^2 or common triangulations of surfaces they could not have envisioned how ubiquitous and large graph structures are nowadays. Starting with reminders on elementary collapses, clique complexes and a touch of NP-completeness for maximal cliques, I will illustrate the relevant business examples we have found in our Data Science practice as elbformat consulting in Hamburg, Germany. Cases will include Structural Website Optimisations, User Funnel Analyses, Process Mining insights and a Recommendation Engine Prototype. 10:15-10:45 CDT Learning with Approximate or Distributed Topology Speaker: Alexander Wagner (Duke University) The computational cost of calculating the persistence diagram for a large input inhibits its use in a deep learning framework. The fragility of the persistence diagram to outliers and the instability of critical cells present further challenges. In this talk, I will present two distinct approaches to address these concerns. In the first approach, by replacing the original filtration with a stochastically downsampled filtration on a smaller complex, one can obtain results in topological optimization tasks that are empirically more robust and much faster to compute than their vanilla counterparts. In the second approach, we work with the set of persistence diagrams of subsets of a fixed size rather than with the diagram of the complete point cloud. The benefits of this distributed approach are a greater degree of practical robustness to outliers, faster computation due to parallelizability and scaling of the persistence algorithm, and an inverse stability theory. After outlining these benefits, I will describe a dimensionality reduction pipeline using distributed persistence. This is joint work with Elchanan Solomon and Paul Bendich. 11:00-11:30 CDT Geometric and Topological Fingerprints for Periodic Crystals Speaker: Teresa Heiss (Institute of Science and Technology Austria) The following application has motivated us to develop new Computational Geometry and Topology methods, involving Brillouin zones and periodic k-fold persistent homology: We model crystals by (infinite) periodic point sets, i.e. by the union of several translates of a lattice, where every point represents an atom. Two periodic point sets are equivalent if there is a rigid transformation from one to the other. A periodic point set can be represented by a finite cutout s.t. copying this cutout infinitely often in all directions yields the periodic point set. The fact that these cutouts are not unique creates problems when working with them. Therefore, material scientists would like to work with a complete, continuous invariant instead. In this talk, I will present two continuous invariants that are at least generically complete: Firstly, the density fingerprint, computing the probability that a random ball of radius r contains exactly k points of the periodic point set, for all positive integers k and positive reals r. And secondly, the persistence fingerprint, which is the sequence of order k persistence diagrams, newly defined for infinite periodic point sets, for all positive integers k. Joint work with Herbert Edelsbrunner, Alexey Garber, Vitaliy Kurlin, Georg Osang, Janos Pach, Morteza Saghafian, Phil Smith, Mathijs Wintraecken. 13:00-13:50 CDT Sampling smooth manifolds using ellipsoids Speaker: Sara Kalisnik (Bentley University) A common problem in data science is to determine properties of a space from a sample. For instance, under certain assumptions a subspace of a Euclidean space may be homotopy equivalent to the union of balls around sample points, which is in turn homotopy equivalent to the Čech complex of the sample. This enables us to determine the unknown space up to homotopy type, in particular giving us the homology of the space. A seminal result by Niyogi, Smale and Weinberger states that if a sample of a closed smooth submanifold of a Euclidean space is dense enough (relative to the reach of the manifold), there exists an interval of radii, for which the union of closed balls around sample points deformation retracts to the manifold. A tangent space is a good local approximation of a manifold, so we can expect that an object, elongated in the tangent direction, will better approximate the manifold than a ball. We present the result that the union of ellipsoids of suitable size around sample points deformation retracts to the manifold while requiring much smaller density than in the case of union of balls. The proof requires new techniques, as unlike the case of balls, the normal projection of a union of ellipsoids is in general not a deformation retraction. 14:15-14:45 CDT Simplicial Neural Networks Speaker: Stefania Ebli (Ecole Polytechnique Fédérale de Lausanne) In this talk I will present simplicial neural networks (SNNs), a generalization of graph neural networks to data that live on a class of topological spaces called simplicial complexes. These are natural multi-dimensional extensions of graphs that encode not only pairwise relationships but also higher-order interactions between vertices – allowing us to consider richer data, including vector fields and n-fold collaboration networks. We define an appropriate notion of convolution that we leverage to construct the desired convolutional neural networks. We test the SNNs on the task of imputing missing data on coauthorship complexes. This is joint work with M. Defferrard and G.Spreemann. 15:00-15:30 CDT Topological Sholl Descriptors for Neuronal Clustering and Classification Speaker: Sadok Kallel (American University of Sharjah) Variations in neuronal morphology among cell classes, brain regions, and animal species are thought to underlie known heterogeneities in neuronal function. Thus, accurate quantitative descriptions and classification of large sets of neurons is important for functional characterization. However, unbiased computational methods to classify groups of neurons are currently scarce. We introduce an unbiased method to study neuronal morphologies. We develop mathematical descriptors that assign to each Neuron an invariant depending on distance from the soma, and taking values in real numbers or other suitable metric spaces (including a metric space of persistence diagrams). Such descriptors can include tortuosity, branching pattern, “energy”, wiring, TMD, etc. Using detection and metric learning algorithms, we can then provide efficient clustering and classification schemes for neurons. This is joint work with Reem Khalil, Ahmad Farhat and Pawel Dlotko Friday, April 30, 2021 9:00-9:50 CDT Compatibility and Optimization for Quiver Representations Speaker: Vidit Nanda (University of Oxford) Many interesting objects across pure and applied mathematics (including persistence modules, cellular sheaves and connection matrices) are most naturally viewed as linear algebraic data parametrized by a finite space. In this talk, I will describe a practical framework for dimensionality reduction and linear optimization over a wide class of such objects. 10:15-10:45 CDT The amplitude of an abelian category: Measures in persistence theory Speaker: Barbara Giunti (Technische Universität Graz) The use of persistent homology in applications is justified by the validity of certain stability results. At the core of such results is a notion of distance between the invariants that one associates to data sets. While such distances are well-understood in the one-parameter case, the situation for multiparameter persistence modules is more challenging, since there is no generalisation of the barcode. Here we introduce a general framework to study stability questions in multiparameter persistence. We first introduce the (outer) amplitude, a functional on abelian categories that mimics the properties of an outer measure in measure theory, then study different ways to associate distances to such functionals. Our framework is very comprehensive, as many different invariants that have been introduced in the literature are examples of outer amplitudes, and similarly, we show that many known distances for multiparameter persistence are distances from outer amplitudes. Finally, we provide new stability results using our framework. 11:00-11:30 CDT Sliding windows persistence of quasiperiodic functions Speaker: Hitesh Gakhar (University of Oklahoma) Sliding window embeddings were originally used in the study of dynamical systems to reconstruct the topology of underlying attractors from generic observation functions. In 2015, a technique for recurrence detection in time series data using sliding window embeddings of periodic functions and persistent homology was developed. We study a closely related class of functions, namely quasiperiodic functions, whose constitutive frequencies are non-commensurate harmonics. The sliding window embeddings of such functions are dense in high dimensional tori, where the dimension depends on the number of incommensurate harmonics. In this talk, we will present results pertaining to the structure of sliding window embeddings and their persistent homology, along with a brief discussion on how to choose the embedding parameters. 13:00-13:50 CDT What are left and right endpoints for multiparameter persistence? Speaker: Ezra Miller (Duke University) Fundamental to applications of ordinary persistent homology in one parameter is the reconstruction of a module from the perfect matching between left endpoints and right endpoints of its bar code. Do these concepts have analogues in multiple parameters? The answer is largely yes: endpoints can be defined, and the module can be reconstructed from them, though the correspondence is not a perfect matching but rather a more arbitrary linear map. The algebra needed for these developments will be covered from scratch, followed by a view toward how they might be used for computational 14:15-14:45 CDT Identifying analogous topological features across multiple systems Speaker: Iris Yoon (University of Delaware) We present a new method for comparing topological features using dissimilarity matrices obtained from observing activity in distinct complex systems. Our method uses the Dowker complex of a cross-dissimilarity matrix to identify all possible ways a common feature could be represented by the barcodes of activity within the individual systems. This method can be used to study both how distinct systems respond to the same stimuli and how behavior in one system drives behavior in another. Motivated by questions in neuroscience, our framework will allow researchers to investigate open problems such as how neural systems code for complex stimuli and how such coding structures propagate and evolve through different neural systems without direct reference to external correlates. The same tools can also be applied more generally to explore two-dimensional persistence and to identify which topological features are preserved after dimensionality reduction. This is joint work with Chad Giusti (University of Delaware) and Robert Ghrist (University of Pennsylvania) 15:00-15:30 CDT Recent Advances in Topology-Based Graph Classification Speaker: Bastian Rieck (ETH Zurich) Topological data analysis emerged as an effective tool in machine learning, supporting the analysis of neural networks, but also driving the development of novel algorithms that incorporate topological characteristics. As a problem class, graph classification is of particular interest here, since graphs are inherently amenable to a topological description in terms of their connected components and cycles. This talk will briefly summarise recent advances in topology-based graph classification, focussing equally on ’shallow’ and ‘deep’ approaches. Starting from an intuitive description of persistent homology, we will discuss how to incorporate topological features into the Weisfeiler–Lehman colour refinement scheme, thus obtaining a simple feature-based graph classification algorithm. We will then build a bridge to graph neural networks and demonstrate a topological variant of ‘readout’ functions, which can be learned end-to-end. Care has been taken to make the talk accessible to an audience that may not have been exposed to machine learning or topological data analysis. Poster Directory Back to top Presenter Title Mehmet Emin Aktas (University of Central Oklahoma) Two New Hypergraph Laplacians in Diffusion Framework Erik J Amezquita (Michigan State University) Using topology to analyze the shape of barley Aras Asaad (Oxford Drug Design) Persistent Homology to Detect Fake Faces and Videos Tahmineh Azizi (Kansas State University) Topological Patterns in Ecology Rituparna Basak (New Jersey Institute of Technology) Application of computational topology to analysis of granular material force networks in the stick-slip regime Håvard Bakke Bjerkevik (TU Graz) $ell^p$-Distances on Multiparameter Persistence Modules Elyse Borgert (University of North Carolina Chapel Hill) Persistent topology of protein space Chao Cheng (New Jersey Institute of Technology) Insight from topological data analysis into precursors to stick-slip events in sheared granular systems Veronica Ciocanel (Duke University) TDA for biological ring channels Pedro Conceicao (University of Aberdeen) An application of neighbourhoods in digraphs to the classification of binary dynamics Justin M Curry (SUNY Albany) Decorated Merge Trees for Persistent Topology Russell J Funk (University of Minnesota, Twin Cities) The Emergence of Higher-Order Structure in Scientific and Technological Knowledge Networks Jehan Ghafuri (The University of Buckingham) Sensitivity and stability of pretrained CNN filters Golnar G Gharooni Fard (University of Colorado, Boulder) A Persistent Homology Approach for Characterizing Honeybee Behavior during Food Exchange Sayonita Ghosh Hajra (California State University, Sacramento (CSU Sacramento)) Topological shapes of election speeches Mario R Gomez Flores (Ohio State University) Computational Aspects of Persistence Sets İsmail Güzel (İstanbul Technical University) Hierarchical Clustering and Zeroth Persistent Homology Niklas Hellmer (Polish Academy of Sciences) A Discrete Prokhorov Metric for Persistence Diagrams Paul Samuel Ignacio (University of the Philippines Baguio) LUMÁWIG: A bypass approach to bottleneck distance Péguy Kem-Meka Tiotsop Kadzue (Academia Avance) Topological Data Analysis : ToMATo Clustering Miroslav Kramar (University of Oklahoma) Towards Understanding Complex Spatio-Temporal Systems Darrick Lee (University of Pennsylvania) Signatures, Lipschitz-Free Spaces, and Paths of Persistence Diagrams Elise McMahon (Glaxosmithkline, Cornell University) Using Mapper to Represent Biological Changes in a Cell Nikola Milicevic (University of Florida, Gainesville) Applied Topology Using Cech’s Closure Spaces Senthil Mudaliar (US Army) Saving lives in Multi-Organ Failure: We need a method to improve clinical decisions in real-time Arnur Nigmetov (Lawrence Berkeley National Laboratory) Topological Regularization via Persistence-Sensitive Optimization Miguel O’Malley (Wesleyan University) (Persistent) Magnitude and Data Matt Piekenbrock (Michigan State University) Fast persistence computations in sparse dynamic settings Brenda L Praggastis (Pacific Northwest National Laboratory (PNNL)) Deep Data Profiler: A Platform and Methodology for the Analysis and Interpretation of Neural Networks Yu Qin (Tulane University) Comparing Distance Metrics on Vectorized Persistence Summaries Alexander Rolle (TU Graz) Multi-parameter persistence, clustering, and density estimation Benjamin Roycraft (University of California Davis) Bootstrapping Persistent Betti Numbers and Other Stabilizing Statistics Alexander D Smith (University of Wisconsin, Madison) Topological Data Analysis: Applications in Molecular Simulation Francesca Tombari (KTH) Homotopical decompositions of simplicial and Vietoris-Rips complexes Josué Tonelli-Cueto (Inria Paris & IMJ-PRG) Computing the Homology of Semialgebraic Sets via TDA Álvaro Torras Casas Persistence Spectral Sequences: Applications and Stability Renata Turkes Noise robustness of persistent homology on greyscale images across filtrations and signatures Mikael Vejdemo-Johansson Multiple Hypothesis Testing with Persistent Homology Siddharth Vishwanath Robust Persistence Diagrams using Reproducing Kernels Milton Chi-Chong Wong PersFormer: A Transformer Model for Persistence Diagrams Min-Chun Wu A topological approach to inferring the intrinsic dimension of convex sensing data Ling Zhou Persistent homotopy groups of metric spaces Back to top From Geometry to Topology: Inverse Theorems for Distributed Persistence Elchanan Solomon April 26, 2021 Statistical Frameworks for Mapping 3D Shape Variation onto Genotypic and Phenotypic Variation Lorin Crawford April 27, 2021 Topological Data Analysis of Database Representations for Information Retrieval Anthea Monod April 28, 2021 Predicting Survival Outcomes using Topological Shape Features of AI-reconstructed Medical Images Chul Moon April 28, 2021 Back to Basics – Topology of Simplicial Complexes for Business Optimisations Marc Lange April 29, 2021
{"url":"https://www.imsi.institute/activities/topological-data-analysis/","timestamp":"2024-11-06T12:45:53Z","content_type":"text/html","content_length":"278274","record_id":"<urn:uuid:70fe2213-2490-4b66-b165-c6670afb4587>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00833.warc.gz"}
Should We Trust Tipsters? - MoneyHighStreet Should We Trust Tipsters? Today we will discuss in more detail the principles of tipsters and tell you whether you should still trust most of them. To begin with, let’s give one simple example that shows a lot. For example, a certain tipster regularly posted his predictions for tennis matches for five years, while the quotes for the victory of a particular player in each of them were approximately equal (that is, the coefficients were about 2.0). Everything was quite fair and transparent-bets were issued a few hours before the start of matches in real time, which could be tracked by the corresponding coupons. And the profit of this tipster for the specified period amounted to 100 thousand dollars – it is a great result, isn’t it? Followers of this man probably imagined him as an excellent expert in the game, a kind of guru of tennis. However, in fact, it could well turn out that all these forecasts were no more valuable than, for example, the forecasts of the notorious octopus Paul – in other words, based solely on luck, but not on experience and analytical data. Let’s imagine that in a certain contest with the same conditions, not one, but one hundred tipsters participate with equal chances to win or lose 20 thousand dollars during the year; at the same time, the losers are eliminated at the end of the year. So, after the first year of the competition, 50 tipsters who stupidly click on one of the two buttons (the chances of winning one of the tennis players, let us remind, are approximately equal) remain, they earned 20 thousand dollars on average only due to luck, while the remaining 50 are eliminated. It is easy to calculate that by the end of the fifth year, three tipsters will remain, each of which was able to earn 100 thousand dollars, while relying only on luck. How to distinguish a lucky person from a professional? The phenomenon described above is called the “survival trend”. It is used for evaluating the work of tipsters: as we have already found out, even people who occupy the first places in the rating of such specialists can actually be just spoilers of fortune, who do not know much about sports analytics. If you seriously decide to follow tipster’s advice, not trusting your own opinion too much, when choosing such a specialist, you should correctly choose the size of the initial sample of matches for which correct predictions were given. Very often it happens that in a relatively short period of time, thanks to luck, a number of tipsters who are not professionals are able to give more or less accurate forecasts, but over a longer time, their chances of success are reduced to zero or become negative. However, this factor can not be considered a determining factor, as we have shown in our conditional example. Just tossing a coin and waiting for heads or tails to fall out (and the situation we simulated for tennis matches was similar), we do not take into account the bookmaker’s margin, which is always present in bets on sports events. Thus, the tipster should issue his forecasts taking into account the odds of those bookmakers where the margin is the lowest (because the lower the margin, the easier it is to achieve success over a long term), and the pinnacle betting company is just like that. Well, it is worth to remind that in any case, first of all, one should check the integrity of the tipster – whether he puts his coupons for matches in real time, and not just rigging them for the desired result. Also pay attention to the amount of bets – it is unlikely that a professional will advise you to use more than 3% of the bankroll for one single bet. Example of a survival trend A few years ago, on one of the English-language channels in a program called “System”, the famous illusionist Derren Brown demonstrated to the audience how confusing the survival trend can be. Among the participants in the show, which was supposed to develop a supposedly win-win betting system for the winner of the races, there was a girl named Hadisha. She won five times in a row, but when she placed a $ 4,000 bet for the sixth time, she lost. Initially, almost 8 thousand people participated in the program. All of them were divided into six groups by the illusionist; each group was asked to place a bet on the winner of the next race, while participants in two different groups could not place bets on the same horse. Since there were only six horses in each race, five groups were eliminated at the end of the race, and the participants in the remaining one were again divided into six groups. Hadisha, as you may have guessed, was always among the winners. However, when she was left alone, she lost after the others. The main conclusion from all of the above is that luck can smile at everyone, but for a professional it is of secondary importance. Even in the event of a series of losses, the professional is able to keep his cool, knowing that his system will allow you to return everything with a vengeance in the future. The formula for estimation of professionalism of tipster In conclusion, here is a simple formula by which you can independently assess the level of professionalism of the tipster: √ X + 0.5 X = Y Here, X is the total number of predictions issued by the tipster, and Y is the number of possible wins. So, if tipster gave 100 predictions, then the number of theoretical winnings according to our formula will be 60. This indicator can be achieved only by luck in only one case out of 40, and therefore, if the” verified ” forecast seller has similar statistics, most likely, he can be trusted. However, it is obvious that a hundred predictions is a too small sample to assess the true capabilities of the tipster. Therefore, the more forecasts available for viewing, the higher the probability of correctly estimating the tipster is. Most often, it is better to trust a tipster who has, say, 55% statistics for 1000 predictions than one who has given out 75% accuracy for 50 matches. Often, sellers of sports forecasts, otherwise called tipsters, promise super profits to those customers who are willing to bring in their money and follow their advice. Most often, this is what ordinary scammers do, giving wishful thinking, but sometimes really honest experts in this business, as they say, fill their own price, and deliberately inflate people’s expectations. So how do you distinguish a professional from a scam artist? Many players, both beginners and experienced betters, like to place bets on events whose betting odds are close to 2.00, taking into account the margin. In other words, the probability of such an outcome is about 50%. This can be both bets on the “money line” market (for example, the victory of one of the teams in a match where a draw is impossible), and on the “totals” market (for example, the total performance of a football match will be more or less than 2.5); a similar probability distribution can occur in other, less popular markets. Student’s criterion as an assessment of tipster’s effectiveness For events where the probability of opposite outcomes is close to 50%, the so-called Student’s criterion is quite applicable, which determines the probability of a random win in a series of multiple coin tosses (and the loss of heads or tails, respectively). This criterion helps to compare the player’s actual winnings with the theoretically expected ones (taking into account only randomness)in the conditions of the betting market. It is obvious that the greater the number of correct forecasts issued by tipster is, the higher the chances are that this is the professional. Who do you prefer as an “ideological mastermind”, whose advice when choosing bids should be used – the one who correctly guessed 7 out of 10 forecasts (that is, with a passability of 70%), or the one who correctly predicted 600 out of 1000 forecasts (passability of “only” 60%)? And in this case, the words “guessed” and “predicted” accurately reflect the true state of things. If you are in doubt about the answer to this seemingly obvious question, imagine a coin toss situation. You will agree that a seven-out-of-ten eagle is much more likely than a 600-out-of-ten eagle in 1000 flips, and the Student’s criteria mentioned above is exactly what convinces us of this. Thus, one of the criteria that will help you when choosing a tipster will be the positive statistics of its forecasts over a long distance – and the longer it is, the better. It is another matter to make sure that these statistics are accurate, but this is a different topic. Betting odds It’s hard to argue with the fact that betting odds directly affect the probability of winning. At the same time, placing bets on teams with a lower probability of winning (that is, with higher quotes) is initially more risky (given equal amounts of bets), since in this case the probability of randomness is high. In other words, a win on bets with a coefficient of 1.25 in the amount of 120% is much more indicative of the degree of professionalism of the player than the same win on a bet with a coefficient of about 5.00. And either lucky or people who know some information, unavailable to other market participants rates (which for this level is almost impossible) and predict a victory, for example, Levante against Barcelona in the match of the championship of Spain. Therefore, comparing the histories of placing bets with only the percentage of winnings taken into account when evaluating tipsters is a fundamentally wrong tactic: you should always pay attention to the coefficients. It directly follows that if the tipster indicates in his statistics a long series of wins for betting on events with high coefficients (say, from 3.00), interspersed with single failures, you are almost certainly a regular crook, just cunningly manipulating the results. So, we have considered two criteria that will help you when choosing a tipster-unless, of course, you trust your own experience, calculation, and intuition. When comparing the betting histories of different tipsters, as we can see, it is not enough to simply analyze the winning percentages. It is also necessary to study the history of winnings and the coefficients at which they were obtained.
{"url":"https://moneyhighstreet.com/should-we-trust-tipsters/","timestamp":"2024-11-15T01:10:28Z","content_type":"text/html","content_length":"133327","record_id":"<urn:uuid:acd86615-c7e7-46c2-929a-a4dd0341c6ee>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00684.warc.gz"}
seminars - Using Random Matrices in Quantum Information Theory The goal of this series of lectures is to present some recent results in QIT which make use of random matrices. After an introduction to random matrix theory, I will present the method of moments, one of the most successful methods used to study the spectra of large random matrices. This will be the occasion to discuss integration over Gaussian spaces and over unitary groups. On the QIT side, I will focus on two main topics, random quantum states and random quantum channels. I will then prove two recent results, one on the asymptotic eigenvalue distribution of the partial transposition of random quantum states, and another on the output set of random quantum channels. Both will require some terminology and results from free probability, which will also be discussed in detail. Useful recent reference: B. Collins and I. Nechita - Random matrix techniques in quantum information theory, J. Math. Phys. 57, 015215 (2016); http://dx.doi.org/10.1063/1.4936880; http://arxiv.org/ Plan of lectures: Lecture 1. - Introduction to Random Matrices - Gaussian random variables and integration. The Wick formula - The Haar measure on the unitary group. The Weingarten formula Lecture 2. - Random density matrices. The induced measure - The asymptotic distribution of eigenvalues - The partial transposition of random quantum states. Free probability theory Lecture 3. - Random quantum channels obtained from random isometries - The maximal output entropy of quantum channels. The additivity question - Product of conjugate random quantum channels - The asymptotic output set of a random quantum channel
{"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&l=ko&page=74&document_srl=763976","timestamp":"2024-11-14T05:34:58Z","content_type":"text/html","content_length":"46480","record_id":"<urn:uuid:d9984876-c718-4ab6-a4ea-c53d6793768c>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00292.warc.gz"}
Advanced High School Statistics, 2nd Edition Second Edition, with updates based on AP Statistics Course Framework David Diez Data Scientist Mine C ¸ etinkaya-Rundel Associate Professor of the Practice, Duke University Professional Educator, RStudio Leah Dorazio Statistics and Computer Science Teacher San Francisco University High School Christopher D Barr Investment Analyst This book may be downloaded as a free PDF at openintro.org/ahss. This textbook is also avail-able under a Creative Commons license, with the source files hosted onGithub. Table of Contents 1 Data collection 10 1.1 Case study . . . 12 1.1.1 Case study . . . 12 1.2 Data basics . . . 17 1.2.1 Observations, variables, and data matrices. . . 17 1.2.2 Types of variables . . . 20 1.2.3 Relationships between variables . . . 21 1.3 Overview of data collection principles . . . 27 1.3.1 Populations and samples. . . 27 1.3.2 Anecdotal evidence . . . 29 1.3.3 Explanatory and response variables. . . 30 1.3.4 Observational studies versus experiments . . . 31 1.4 Observational studies and sampling strategies . . . 35 1.4.1 Observational studies . . . 35 1.4.2 Sampling from a population . . . 37 1.4.3 Simple, systematic, stratified, cluster, and multistage sampling . . . 40 1.5 Experiments. . . 48 1.5.1 Reducing bias in human experiments . . . 48 1.5.2 Principles of experimental design . . . 49 1.5.3 Completely randomized, blocked, and matched pairs design . . . 50 1.5.4 Testing more than one variable at a time . . . 53 Chapter highlights . . . 56 Chapter exercises . . . 57 2 Summarizing data 60 2.1 Examining numerical data . . . 62 2.1.1 Scatterplots for paired data . . . 63 2.1.2 Stem-and-leaf plots and dot plots . . . 65 2.1.3 Histograms . . . 67 2.1.4 Describing Shape . . . 70 2.1.5 Descriptive versus inferential statistics . . . 71 2.2 Numerical summaries and box plots . . . 76 2.2.1 Learning objectives. . . 76 2.2.2 Measures of center . . . 76 2.2.3 Standard deviation as a measure of spread. . . 79 2.2.4 Z-scores . . . 82 2.2.5 Box plots and quartiles . . . 83 2.2.6 Calculator/Desmos: summarizing 1-variable statistics . . . 86 2.2.7 Outliers and robust statistics . . . 89 2.2.8 Linear transformations of data . . . 90 2.2.9 Comparing numerical data across groups . . . 92 2.2.10 Mapping data (special topic) . . . 95 2.3.1 Contingency tables and bar charts . . . 104 2.3.2 Row and column proportions . . . 105 2.3.3 Using a bar chart with two variables . . . 107 2.3.4 Mosaic plots . . . 108 2.3.5 The only pie chart you will see in this book . . . 110 2.4 Case study: malaria vaccine (special topic) . . . 113 2.4.1 Variability within data . . . 113 2.4.2 Simulating the study . . . 115 2.4.3 Checking for independence . . . 115 Chapter highlights . . . 119 Chapter exercises . . . 120 3 Probability 122 3.1 Defining probability . . . 124 3.1.1 Introductory examples . . . 124 3.1.2 Probability . . . 125 3.1.3 Disjoint or mutually exclusive outcomes . . . 126 3.1.4 Probabilities when events are not disjoint . . . 128 3.1.5 Complement of an event . . . 130 3.1.6 Independence . . . 131 3.2 Conditional probability . . . 138 3.2.1 Exploring probabilities with a contingency table . . . 139 3.2.2 Marginal and joint probabilities. . . 140 3.2.3 Defining conditional probability. . . 141 3.2.4 Smallpox in Boston, 1721 . . . 143 3.2.5 General multiplication rule . . . 144 3.2.6 Sampling without replacement . . . 145 3.2.7 Independence considerations in conditional probability . . . 147 3.2.8 Checking for independent and mutually exclusive events . . . 147 3.2.9 Tree diagrams. . . 150 3.2.10 Bayes’ Theorem . . . 151 3.3 The binomial formula . . . 160 3.3.1 Introducing the binomial formula . . . 160 3.3.2 When and how to apply the formula . . . 162 3.3.3 Calculator: binomial probabilities . . . 165 3.4 Simulations . . . 169 3.4.1 Setting up and carrying out simulations . . . 169 3.5 Random variables. . . 175 3.5.1 Introduction to expected value . . . 175 3.5.2 Probability distributions . . . 176 3.5.3 Expectation . . . 178 3.5.4 Variability in random variables . . . 180 3.5.5 Linear transformations of a random variable. . . 181 3.5.6 Linear combinations of random variables. . . 182 3.5.7 Variability in linear combinations of random variables . . . 184 3.6 Continuous distributions . . . 189 3.6.1 From histograms to continuous distributions. . . 189 3.6.2 Probabilities from continuous distributions . . . 190 Chapter highlights . . . 194 Chapter exercises . . . 195 4 Distributions of random variables 197 4.1 Normal distribution . . . 199 4.1.1 Normal distribution model . . . 199 4.1.2 Standardizing with Z-scores . . . 201 4.1.3 Normal probability table. . . 202 4.1.4 Normal probability examples . . . 204 4.1.6 68-95-99.7 rule . . . 210 4.1.7 Evaluating the normal approximation . . . 211 4.1.8 Normal approximation for sums of random variables . . . 215 4.2 Sampling distribution of a sample mean . . . 220 4.2.1 The mean and standard deviation of ¯x . . . 220 4.2.2 Examining the Central Limit Theorem . . . 225 4.2.3 Normal approximation for the sampling distribution of ¯x . . . 228 4.3 Geometric distribution . . . 234 4.3.1 Bernoulli distribution . . . 234 4.3.2 Geometric distribution . . . 235 4.4 Binomial distribution. . . 240 4.4.1 An example of a binomial distribution . . . 240 4.4.2 The mean and standard deviation of a binomial distribution. . . 241 4.4.3 Normal approximation to the binomial distribution . . . 242 4.4.4 Normal approximation breaks down on small intervals (special topic). . . 244 4.5 Sampling distribution of a sample proportion . . . 248 4.5.1 The mean and standard deviation of ˆp . . . 248 4.5.2 The Central Limit Theorem revisited. . . 249 4.5.3 Normal approximation for the distribution of ˆp . . . 250 Chapter highlights . . . 254 Chapter exercises . . . 255 5 Foundations for inference 258 5.1 Estimating unknown parameters . . . 260 5.1.1 Point estimates . . . 261 5.1.2 Understanding the variability of a point estimate . . . 262 5.1.3 Introducing the standard error . . . 264 5.1.4 Basic properties of point estimates . . . 265 5.2 Confidence intervals . . . 269 5.2.1 Capturing the population parameter . . . 269 5.2.2 Constructing a 95% confidence interval . . . 270 5.2.3 Changing the confidence level . . . 271 5.2.4 Margin of error . . . 273 5.2.5 Interpreting confidence intervals . . . 274 5.2.6 Confidence interval procedures: a five step process . . . 274 5.3 Introducing hypothesis testing. . . 279 5.3.1 Case study: medical consultant . . . 279 5.3.2 Setting up the null and alternate hypothesis. . . 280 5.3.3 Evaluating the hypotheses with a p-value . . . 282 5.3.4 Calculating the p-value by simulation (special topic) . . . 285 5.3.5 Hypothesis testing: a five step process . . . 286 5.3.6 Decision errors . . . 286 5.3.7 Choosing a significance level. . . 288 5.3.8 Statistical power of a hypothesis test . . . 288 5.4 Does it make sense? . . . 293 5.4.1 When to retreat . . . 293 5.4.2 Statistical significance versus practical significance . . . 294 5.4.3 Statistical significance versus a real difference . . . 294 Chapter highlights . . . 296 Chapter exercises . . . 297 6 Inference for categorical data 299 6.1 Inference for a single proportion. . . 301 6.1.1 Distribution of a sample proportion (review) . . . 302 6.1.2 Checking conditions for inference using a normal distribution . . . 302 6.1.3 Confidence intervals for a proportion . . . 303 6.1.4 Calculator: the 1-proportion Z-interval. . . 307 6.1.6 Hypothesis testing for a proportion. . . 310 6.1.7 Calculator: the 1-proportion Z-test . . . 314 6.2 Difference of two proportions . . . 319 6.2.1 Sampling distribution of the difference of two proportions . . . 319 6.2.2 Checking conditions for inference using a normal distribution . . . 320 6.2.3 Confidence interval forp1−p2 . . . 320 6.2.4 Calculator: the 2-proportion Z-interval. . . 323 6.2.5 Hypothesis testing whenH0: p1=p2. . . 324 6.2.6 Calculator: the 2-proportion Z-test . . . 329 6.3 Testing for goodness of fit using chi-square. . . 335 6.3.1 Creating a test statistic for one-way tables. . . 335 6.3.2 The chi-square test statistic . . . 336 6.3.3 The chi-square distribution and finding areas . . . 337 6.3.4 Finding a p-value for a chi-square distribution . . . 341 6.3.5 Evaluating goodness of fit for a distribution . . . 343 6.3.6 Calculator: chi-square goodness of fit test . . . 346 6.4 Homogeneity and independence in two-way tables. . . 350 6.4.1 Introduction . . . 351 6.4.2 Expected counts in two-way tables . . . 352 6.4.3 The chi-square test of homogeneity for two-way tables . . . 353 6.4.4 The chi-square test of independence for two-way tables. . . 357 6.4.5 Calculator: chi-square test for two-way tables . . . 361 Chapter highlights . . . 365 Chapter exercises . . . 366 7 Inference for numerical data 369 7.1 Inference for a mean with thet-distribution . . . 371 7.1.1 Using a normal distribution for inference whenσis known. . . 371 7.1.2 Introducing thet-distribution . . . 372 7.1.3 Calculator: finding area under thet-distribution . . . 375 7.1.4 Checking conditions for inference on a mean using thet-distribution . . . 376 7.1.5 One sample t-interval for a mean . . . 376 7.1.6 Calculator: the 1-samplet-interval . . . 381 7.1.7 Choosing a sample size when estimating a mean . . . 382 7.1.8 Hypothesis testing for a mean. . . 383 7.1.9 Calculator: 1-samplet-test . . . 387 7.2 Inference for paired data . . . 392 7.2.1 Paired observations and samples . . . 392 7.2.2 Hypothesis tests for paired data . . . 393 7.2.3 Calculator: the matched pairst-test . . . 397 7.2.4 Confidence intervals for the mean of a difference . . . 397 7.2.5 Calculator: the matched pairst-interval . . . 400 7.3 Inference for the difference of two means . . . 404 7.3.1 Sampling distribution for the difference of two means. . . 405 7.3.2 Checking conditions for inference on a difference of means . . . 405 7.3.3 Confidence intervals for a difference of means . . . 406 7.3.4 Calculator: the 2-samplet-interval . . . 410 7.3.5 Hypothesis testing for the difference of two means . . . 411 7.3.6 Calculator: the 2-samplet-test . . . 416 Chapter highlights . . . 423 Chapter exercises . . . 424 8 Introduction to linear regression 427 8.1 Line fitting, residuals, and correlation . . . 429 8.1.1 Fitting a line to data. . . 429 8.1.2 Using linear regression to predict possum head lengths . . . 431 8.1.3 Residuals . . . 433 8.2 Fitting a line by least squares regression . . . 446 8.2.1 An objective measure for finding the best line . . . 446 8.2.2 Finding the least squares line . . . 448 8.2.3 Interpreting the coefficients of a regression line . . . 450 8.2.4 Extrapolation is treacherous. . . 451 8.2.5 UsingR2[to describe the strength of a fit] [. . . .] [452] 8.2.6 Calculator/Desmos: linear correlation and regression . . . 454 8.2.7 Types of outliers in linear regression . . . 457 8.2.8 Categorical predictors with two levels (special topic) . . . 459 8.3 Inference for the slope of a regression line . . . 465 8.3.1 The role of inference for regression parameters . . . 465 8.3.2 Conditions for the least squares line . . . 466 8.3.3 Constructing a confidence interval for the slope of a regression line . . . 467 8.3.4 Calculator: thet-interval for the slope . . . 471 8.3.5 Midterm elections and unemployment . . . 471 8.3.6 Understanding regression output from software . . . 473 8.3.7 Calculator: thet-test for the slope . . . 477 8.3.8 Which inference procedure to use for paired data? . . . 478 8.4 Transformations for skewed data . . . 484 8.4.1 Introduction to transformations. . . 484 8.4.2 Transformations to achieve linearity . . . 486 Chapter highlights . . . 491 Chapter exercises . . . 492 A Exercise solutions 495 B Data sets within the text 514 C Distribution tables 519 Advanced High School Statistics covers a first course in statistics, providing an introduction to applied statistics that is clear, concise, and accessible. This book was written to align with the AP® Statistics Course Description1, but it’s also popular in non-AP courses and community colleges. This book may be downloaded as a free PDF atopenintro.org/ahss. We hope readers will take away three ideas from this book in addition to forming a foundation of statistical thinking and methods. (1) Statistics is an applied field with a wide range of practical applications. (2) You don’t have to be a math guru to learn from real, interesting data. (3) Data are messy, and statistical tools are imperfect. But, when you understand the strengths and weaknesses of these tools, you can use them to learn about the real world. Textbook overview The chapters of this book are as follows: 1. Data collection. Data structures, variables, and basic data collection techniques. 2. Summarizing data. Data summaries and graphics. 3. Probability. The basic principles of probability. 4. Distributions of random variables. Introduction to key distributions, and how the normal model applies to the sample mean and sample proportion. 5. Foundations for inference. General ideas for statistical inference in the context of estimating the population proportion. 6. Inference for categorical data. Inference for proportions and contingency tables using the normal and chi-square distributions. 7. Inference for numerical data. Inference for one or two sample means using thet-distribution. 8. Introduction to linear regression. An introduction to regression with two variables, and in-ference on the slope of the regression line. Online resources OpenIntro is focused on increasing access to education by developing free, high-quality education ma-terials. In addition to textbooks, we provide the following accompanying resources to help teachers and students be successful. • Video overviews for each section of the textbook • Lecture slides for each section of the textbook • CasioandTIcalculator tutorials • Video solutionsfor selected section and chapter exercises 1[AP]®[is a trademark registered and owned by the College Board, which was not involved in the production of,] • Statistical software labs • A small but growing number ofDesmos activities2 • Quizletsets for each chapter3 • ATableau publicpage to further interact with data sets4 • Online, interactive version of textbook5 • Complete companion course with the learning management softwareMyOpenMath6 • Complete Canvas course accessible throughCanvas Commons7 All of these resources can be found at: We also have improved the ability to access data in this book through the addition of Appendix B, which provides additional information for each of the data sets used in the main text and is new in the Second Edition. Online guides to each of these data sets are also provided at openintro.org/data and through acompanion R package. Examples and exercises Many examples are provided to establish an understanding of how to apply methods. EXAMPLE 0.1 This is an example. Full solutions to examples are provided here, within the example. When we think the reader should be ready to do an example problem on their own, we frame it as Guided Practice. GUIDED PRACTICE 0.2 The reader may check or learn the answer to any Guided Practice problem by reviewing the full solution in a footnote.8 Exercises are also provided at the end of each section and each chapter for practice or homework assignments. Solutions for odd-numbered exercises are given in Appendix A. Getting involved We encourage anyone learning or teaching statistics to visit openintro.org and get involved. We value your feedback. Please send any questions or comments to [email protected]. You can also provide feedback, report typos, and review known typos at This project would not be possible without the passion and dedication of all those involved. The authors would like to thank the OpenIntro Stafffor their involvement and ongoing contributions. We are also very grateful to the hundreds of students and instructors who have provided us with valuable feedback since we first started working on this project in 2009. A special thank you to Catherine Ko for proofreading the second edition of AHSS. 2[openintro.org/ahss/desmos] 3[quizlet.com/openintro-ahss] 5[Developed by Emiliano Vega and Ralf Youtz of Portland Community College using PreTeXt.] 6[myopenmath.com/course/public.php?cid=11774] Chapter 1 Data collection 1.1 Case study 1.2 Data basics 1.3 Overview of data collection principles 1.4 Observational studies and sampling strategies Scientists seek to answer questions using rigorous methods and careful observations. These observations – collected from the likes of field notes, surveys, and experiments – form the backbone of a statistical investigation and are called . Statistics is the study of how best to collect, analyze, and draw conclusions from data. It is helpful to put statistics in the context of a general process of investigation: 1. Identify a question or problem. 2. Collect relevant data on the topic. 3. Analyze the data. 4. Form a conclusion. Researchers from a wide array of fields have questions or problems that require the collection and analysis of data. What questions from current events or from your own life can you think of that could be answered by collecting and analyzing data? This chapter focuses on collecting data. We’ll discuss basic properties of data, common sources of bias that arise during data collection, and techniques for collecting data. After finishing this chapter, you will have the tools for identifying weaknesses and strengths in data-based conclusions, tools that are essential to be an informed citizen and a savvy consumer of information. Case study: using stents to prevent strokes We start with a case study and we consider the following questions: Does the use of stents reduce the risk of stroke? How do researchers collect data to answer this question? What do they do with the data once it is collected? How different must the risk of stroke be in each group before there is sufficient evidence that it’s a real difference and not just random variation? Learning objectives 1. Understand the four steps of a statistical investigation (identify a question, col-lect data, analyze data, form a conclusion) in the context of a real-world example. 2. Consider the concept of statistical significance. Case study Section 1.1 introduces a classic challenge in statistics: evaluating the efficacy of a medical treatment. Terms in this section, and indeed much of this chapter, will all be revisited later in the text. The plan for now is simply to get a sense of the role statistics can play in practice. In this section we will consider an experiment that studies effectiveness of stents in treating patients at risk of stroke.1 Stents are devices put inside blood vessels that assist in patient recovery after cardiac events and reduce the risk of an additional heart attack or death. Many doctors have hoped that there would be similar benefits for patients at risk of stroke. We start by writing the principal question the researchers hope to answer: Does the use of stents reduce the risk of stroke? The researchers who asked this question collected data on 451 at-risk patients. Each volunteer patient was randomly assigned to one of two groups: Treatment group. Patients in the treatment group received a stent and medical manage-ment. The medical management included medications, management of risk factors, and help in lifestyle modification. Control group. Patients in the control group received the same medical management as the treatment group, but they did not receive stents. Researchers randomly assigned 224 patients to the treatment group and 227 to the control group. In this study, the control group provides a reference point against which we can measure the medical impact of stents in the treatment group. Researchers studied the effect of stents at two time points: 30 days after enrollment and 365 days after enrollment. The results of 5 patients are summarized in Figure 1.1. Patient outcomes are recorded as “stroke” or “no event”, representing whether or not the patient had a stroke at the end of a time period. Patient group 0-30 days 0-365 days 1 treatment no event no event 2 treatment stroke stroke 3 treatment no event no event .. . ... ... 450 control no event no event 451 control no event no event Figure 1.1: Results for five patients from the stent study. Considering data from each patient individually would be a long, cumbersome path towards answering the original research question. Instead, performing a statistical data analysis allows us to consider all of the data at once. Figure1.2summarizes the raw data in a more helpful way. In this table, we can quickly see what happened over the entire study. For instance, to identify the number of patients in the treatment group who had a stroke within 30 days, we look on the left-side of the table at the intersection of the treatment and stroke: 33. 0-30 days 0-365 days stroke no event stroke no event treatment 33 191 45 179 control 13 214 28 199 Total 46 405 73 378 Figure 1.2: Descriptive statistics for the stent study. GUIDED PRACTICE 1.1 What proportion of the patients in the treatment group had no stroke within the first 30 days of the study? (Please note: answers to all Guided Practice exercises are provided using footnotes.)2 We can compute summary statistics from the table. Asummary statisticis a single number summarizing a large amount of data.3 For instance, the primary results of the study after 1 year could be described by two summary statistics: the proportion of people who had a stroke in the treatment and control groups. Proportion who had a stroke in the treatment (stent) group: 45/224 = 0.20 = 20%. Proportion who had a stroke in the control group: 28/227 = 0.12 = 12%. These two summary statistics are useful in looking for differences in the groups, and we are in for a surprise: an additional 8% of patients in the treatment group had a stroke! This is important for two reasons. First, it is contrary to what doctors expected, which was that stents wouldreduce the rate of strokes. Second, it leads to a statistical question: do the data show a “real” difference between the groups? This second question is subtle. Suppose you flip a coin 100 times. While the chance a coin lands heads in any given coin flip is 50%, we probably won’t observe exactly 50 heads. This type of fluctuation is part of almost any type of data generating process. It is possible that the 8% difference in the stent study is due to this natural variation. However, the larger the difference we observe (for a particular sample size), the less believable it is that the difference is due to chance. So what we are really asking is whether the difference is statistically significant, that is, whether the difference so large that we should reject the notion that it was due to chance. 2[There were 191 patients in the treatment group that had no stroke in the first 30 days. There were 33 + 191 =] 224 total patients in the treatment group, so the proportion is 191/224 = 0.85. While we don’t yet have the statistical tools to fully address this question on our own, we can comprehend the conclusions of the published analysis: there was compelling evidence of harm by stents in this study of stroke patients. Section summary • To test the effectiveness of a treatment, researchers often carry out an experiment in which they randomly assign patients to atreatment groupor acontrol group. • Researchers compare the relevantsummary statisticsto get a sense of whether the treatment group did better, on average, than the control group. 1.1 Migraine and acupuncture, Part 1. A migraine is a particularly painful type of headache, which pa-tients sometimes wish to treat with acupuncture. To determine whether acupuncture relieves migraine pain, researchers conducted a randomized controlled study where 89 females diagnosed with migraine headaches were randomly assigned to one of two groups: treatment or control. 43 patients in the treatment group received acupuncture that is specifically designed to treat migraines. 46 patients in the control group re-ceived placebo acupuncture (needle insertion at non-acupoint locations). 24 hours after patients rere-ceived acupuncture, they were asked if they were pain free. Results are summarized in the contingency table below.4 Pain free Yes No Total Treatment 10 33 43 Control 2 44 46 Total 12 77 89 identified on the antero-internal part of the antitragus, the anterior part of the lobe and the upper auricular concha, on the same side of pain. The majority of these points were effective very rapidly (within 1 min), while the remaining points produced a slower antalgic response, between 2 and 5 min. The insertion of a semi-permanent needle in these zones allowed stable control of the migraine pain, which occurred within 30 min and still persisted 24 h later. Since the most active site in controlling migraine pain was the antero-internal part of the antitragus, the aim of this study was to verify the therapeutic value of this elec-tive area (appropriate point) and to compare it with an area of the ear (representing the sciatic nerve) which is probably inappropriate in terms of giving a therapeutic effect on migraine attacks, since it has no somatotopic correlation with head pain. Materials and methods The study enrolled 94 females, diagnosed as migraine without aura following the International Classification of Headache Disorders [5], who were subsequently examined at the Women’s Headache Centre, Department of Gynae-cology and Obstetrics of Turin University. They were all included in the study during a migraine attack provided that it started no more than 4 h previously. According to a predetermined computer-made randomization list, the eli-gible patients were randomly and blindly assigned to the following two groups: group A (n=46) (average age 35.93 years, range 15–60), group B (n=48) (average age 33.2 years, range 16–58). Before enrollment, each patient was asked to give an informed consent to participation in the study. Migraine intensity was measured by means of a VAS before applying NCT (T0). In group A, a specific algometer exerting a maximum pressure of 250 g (SEDATELEC, France) was chosen to identify the tender points with Pain–Pressure Test (PPT). Every tender point located within the identified area by the pilot study (Fig.1, area M) was tested with NCT for 10 s starting from the auricle, that was ipsilateral, to the side of prevalent cephalic pain. If the test was positive and the reduction was at least 25% in respect to basis, a semi-permanent needle (ASP SEDATELEC, France) was inserted after 1 min. On the contrary, if pain did not lessen after 1 min, a further tender point was challenged in the same area and so on. When patients became aware of an initial decrease in the pain in all the zones of the head affected, they were invited to use a specific diary card to score the intensity of the pain with a VAS at the following intervals: after 10 min (T1), after 30 min (T2), after 60 min (T3), after 120 min (T4), and after 24 h (T5). In group B, the lower branch of the anthelix was repeatedly tested with the algometer for about 30 s to ensure it was not sensitive. On both the French and Chinese auricular maps, this area corresponds to the representation of the sciatic nerve (Fig.1, area S) and is specifically used to treat sciatic pain. Four needles were inserted in this area, two for each ear. In all patients, the ear acupuncture was always per-formed by an experienced acupuncturist. The analysis of the diaries collecting VAS data was conducted by an impartial operator who did not know the group each patient was in. The average values of VAS in group A and B were calculated at the different times of the study, and a statis-tical evaluation of the differences between the values obtained in T0, T1, T2, T3 and T4 in the two groups studied was performed using an analysis of variance (ANOVA) for repeated measures followed by multiple ttest of Bonferroni to identify the source of variance. Moreover, to evaluate the difference between group B and group A, attest for unpaired data was always per-formed for each level of the variable ‘‘time’’. In the case of proportions, a Chi square test was applied. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) software program. All values given in the following text are reported as arithmetic mean (±SEM). Only 89 patients out of the entire group of 94 (43 in group A, 46 in group B) completed the experiment. Four patients withdrew from the study, because they experienced an unbearable exacerbation of pain in the period preceding the last control at 24 h (two from group A and two from group B) and were excluded from the statistical analysis since they requested the removal of the needles. One patient from group A did not give her consent to the implant of the semi-permanent needles. In group A, the mean number of Fig. 1The appropriate area (M) versus the inappropriate area (S) used in the treatment of migraine attacks S174 Neurol Sci (2011) 32 (Suppl 1):S173–S175 Figure from the original pa-per displaying the appropri-ate area (M) versus the in-appropriate area (S) used in the treatment of migraine at-tacks. (a) What percent of patients in the treatment group were pain free 24 hours after receiving acupuncture? (b) What percent were pain free in the control group? (c) In which group did a higher percent of patients become pain free 24 hours after receiving acupuncture? (d) Your findings so far might suggest that acupuncture is an effective treatment for migraines for all people who suffer from migraines. However this is not the only possible conclusion that can be drawn based on your findings so far. What is one other possible explanation for the observed difference between the percentages of patients that are pain free 24 hours after receiving acupuncture in the two groups? 1.2 Sinusitis and antibiotics, Part 1. Researchers studying the effect of antibiotic treatment for acute sinusitis compared to symptomatic treatments randomly assigned 166 adults diagnosed with acute sinusitis to one of two groups: treatment or control. Study participants received either a 10-day course of amoxicillin (an antibiotic) or a placebo similar in appearance and taste. The placebo consisted of symptomatic treatments such as acetaminophen, nasal decongestants, etc. At the end of the 10-day period, patients were asked if they experienced improvement in symptoms. The distribution of responses is summarized below.5 Self-reported improvement in symptoms Yes No Total Treatment 66 19 85 Control 65 16 81 Total 131 35 166 (a) What percent of patients in the treatment group experienced improvement in symptoms? (b) What percent experienced improvement in symptoms in the control group? (c) In which group did a higher percentage of patients experience improvement in symptoms? (d) Your findings so far might suggest a real difference in effectiveness of antibiotic and placebo treatments for improving symptoms of sinusitis. However, this is not the only possible conclusion that can be drawn based on your findings so far. What is one other possible explanation for the observed difference between the percentages of patients in the antibiotic and placebo treatment groups that experience improvement in symptoms of sinusitis? 4[G. Allais et al. “][Ear acupuncture in the treatment of migraine attacks: a randomized trial on the efficacy of] appropriate versus inappropriate acupoints”. In: Neurological Sci. 32.1 (2011), pp. 173–175. Data basics You collect data on dozens of questions from all of the students at your school. How would you organize all of this data? Effective presentation and description of data is a first step in most analyses. This section introduces one structure for organizing data as well as some terminology that will be used throughout this book. We use loan data from Lending Club and county data from the US Census Bureau to motivate and illustrate this section’s learning objectives. Learning objectives 1. Identify the individuals and the variables of a study. 2. Identify variables as categorical or numerical. Identify numerical variables as discrete or continuous. 3. Understand what it means for two variables to be associated. Observations, variables, and data matrices Figure 1.3 displays rows 1, 2, 3, and 50 of a data set for 50 randomly sampled loans offered through Lending Club, which is a peer-to-peer lending company. These observations will be referred to as the loan50data set. Each row in the table represents a single loan. The formal name for a row is a case or observational unit. The columns represent characteristics, calledvariables, for each of the loans. For example, the first row represents a loan of $7,500 with an interest rate of 7.34%, where the borrower is based in Maryland (MD) and has an income of$70,000. GUIDED PRACTICE 1.2 What is the grade of the first loan in Figure 1.3? And what is the home ownership status of the borrower for that first loan? For these Guided Practice questions, you can check your answer in the In practice, it is especially important to ask clarifying questions to ensure important aspects of the data are understood. For instance, it is always important to be sure we know what each variable means and the units of measurement. Descriptions of the loan50variables are given in Figure1.4. loan amount interest rate term grade state total income homeownership 1 7500 7.34 36 A MD 70000 rent 2 25000 9.43 60 B OH 254000 mortgage 3 14500 6.08 36 A MO 80000 mortgage . . . ... ... ... ... ... ... ... 50 3000 7.96 36 A CA 34000 rent Figure 1.3: Four rows from theloan50 data matrix. variable description loan amount Amount of the loan received, in US dollars. interest rate Interest rate on the loan, in an annual percentage. term The length of the loan, which is always set as a whole number of months. grade Loan grade, which takes values A through G and represents the quality of the loan and its likelihood of being repaid. state US state where the borrower resides. total income Borrower’s total income, including any second income, in US dollars. homeownership Indicates whether the person owns, owns but has a mortgage, or rents. Figure 1.4: Variables and their descriptions for theloan50data set. The data in Figure 1.3 represent adata matrix, which is a convenient and common way to organize data, especially if collecting data in a spreadsheet. Each row of a data matrix corresponds to a unique case (observational unit), and each column corresponds to a variable. When recording data, use a data matrix unless you have a very good reason to use a different structure. This structure allows new cases to be added as rows or new variables as new columns. GUIDED PRACTICE 1.3 The grades for assignments, quizzes, and exams in a course are often recorded in a gradebook that takes the form of a data matrix. How might you organize grade data using a data matrix?7 GUIDED PRACTICE 1.4 We consider data for 3,142 counties in the United States, which includes each county’s name, the state in which it is located, its population in 2017, how its population changed from 2010 to 2017, poverty rate, and six additional characteristics. How might these data be organized in a data matrix?8 The data described in Guided Practice1.4represents thecountydata set, which is shown as a data matrix in Figure 1.5. These data come from the US Census, with much of the data coming from the US Census Bureau’s American Community Survey (ACS). Unlike the Decennial Census, which takes place every 10 years and attempts to collect basic demographic data from every residents of the US, the ACS is an ongoing survey that is sent to approximately 3.5 million households per year. As stated by the ACS website, these data help communities “plan for hospitals and schools, support school lunch programs, improve emergency services, build bridges, and inform businesses looking to add jobs and expand to new markets, and more.”9 A small subset of the variables from the ACS are summarized in 7[There are multiple strategies that can be followed. One common strategy is to have each student represented by] a row, and then add a column for each assignment, quiz, or exam. Under this setup, it is easy to review a single line to understand a student’s grade history. There should also be columns to include student information, such as one column to list student names. 8[Each county may be viewed as a case, and there are eleven pieces of information recorded for each case. A table] with 3,142 rows and 11 columns could hold these data, where each row represents a county and each column represents a particular piece of information. Types of variables Examine theunemp rate,pop,state, andmedian eduvariables in thecountydata set. Each of these variables is inherently different from the other three, yet some share certain characteristics. First considerunemp rate, which is said to be a numericalvariable since it can take a wide range of numerical values, and it is sensible to add, subtract, or take averages with those values. On the other hand, we would not classify a variable reporting telephone area codes as numerical since the average, sum, and difference of area codes doesn’t have any clear meaning. Thepopvariable is also numerical, although it seems to be a little different thanunemp rate. This variable of the population count can only take whole non-negative numbers (0,1,2, ...). For this reason, the population variable is said to be discrete since it can only take numerical values with jumps. On the other hand, the unemployment rate variable is said to be continuous. The variablestatecan take up to 51 values after accounting for Washington, DC: AL,AK, ..., andWY. Because the responses themselves are categories,stateis called acategoricalvariable, and the possible values are called the variable’slevels. Finally, consider themedian eduvariable, which describes the median education level of county residents and takes valuesbelow hs,hs diploma,some college, orbachelorsin each county. This variable seems to be a hybrid: it is a categorical variable but the levels have a natural ordering. A variable with these properties is called an ordinalvariable, while a regular categorical variable without this type of special ordering is called anominalvariable. To simplify analyses, any ordinal variable in this book will be treated as a nominal (unordered) categorical variable. all variables numerical categorical continuous discrete (unordered categorical)nominal (ordered categorical) Figure 1.7: Breakdown of variables into their respective types. EXAMPLE 1.5 Data were collected about students in a statistics course. Three variables were recorded for each student: number of siblings, student height, and whether the student had previously taken a statistics course. Classify each of the variables as continuous numerical, discrete numerical, or categorical. The number of siblings and student height represent numerical variables. Because the number of siblings is a count, it is discrete. Height varies continuously, so it is a continuous numerical variable. The last variable classifies students into two categories – those who have and those who have not taken a statistics course – which makes this variable categorical. GUIDED PRACTICE 1.6 An experiment is evaluating the effectiveness of a new drug in treating migraines. Agroupvariable is used to indicate the experiment group for each patient: treatment or control. Thenum migraines variable represents the number of migraines the patient experienced during a 3-month period. Classify each variable as either numerical or categorical.10 Relationships between variables Many analyses are motivated by a researcher looking for a relationship between two or more variables. A social scientist may like to answer some of the following questions: (1) If homeownership is lower than the national average in one county, will the percent of multi-unit structures in that county tend to be above or below the national average? (2) Does a higher than average increase in county population tend to correspond to counties with higher or lower median household incomes? (3) How useful a predictor is median education level for the median household income for US counties? To answer these questions, data must be collected, such as the county data set shown in Figure 1.5. Examining summary statistics could provide insights for each of the three questions about counties. Additionally, graphs can be used to visually explore the data. Scatterplots are one type of graph used to study the relationship between two numerical vari-ables. Figure 1.8 compares the variableshomeownershipand multi unit, which is the percent of units in multi-unit structures (e.g. apartments, condos). Each point on the plot represents a single county. For instance, the highlighted dot corresponds to County 413 in thecountydata set: Chat-tahoochee County, Georgia, which has 39.4% of units in multi-unit structures and a homeownership rate of 31.3%. The scatterplot suggests a relationship between the two variables: counties with a higher rate of multi-units tend to have lower homeownership rates. We might brainstorm as to why this relationship exists and investigate the ideas to determine which are the most reasonable explanations. wnership Rate Percent of Units in Multi−Unit Structures Figure 1.8: A scatterplot of homeownership versus the percent of units that are in multi-unit structures for US counties. The highlighted dot represents Chatta-hoochee County, Georgia, which has a multi-unit rate of 39.4% and a homeowner-ship rate of 31.3%. Explore this scatterplot and dozens of other scatterplots using American Community Survey data on Tableau Public . $0 $20k $40k $60k $80k $100k $120k −10% 0% 10% 20% 30% Median Household Income opulation Change o er 7 Y Figure 1.9: A scatterplot showingpop changeagainstmedian hh income. Owsley County of Kentucky, is highlighted, which lost 3.63% of its population from 2010 to 2017 and had median household income of$22,736. Explore this scatterplot and dozens of other scatterplots using American Community Survey data on Tableau Public . GUIDED PRACTICE 1.7 Examine the variables in theloan50data set, which are described in Figure1.4 on page 18. Create two questions about possible relationships between variables inloan50that are of interest to you.11 EXAMPLE 1.8 This example examines the relationship between a county’s population change from 2010 to 2017 and median household income, which is visualized as a scatterplot in Figure1.9. Are these variables The larger the median household income for a county, the higher the population growth observed for the county. While this trend isn’t true for every county, the trend in the plot is evident. Since there is some relationship between the variables, they are associated. Because there is a downward trend in Figure 1.8 – counties with more units in multi-unit structures are associated with lower homeownership – these variables are said to be negatively associated. Apositive association is shown in the relationship between themedian hh income and pop changein Figure 1.9, where counties with higher median household income tend to have higher rates of population growth. If two variables are not associated, then they are said to be independent. That is, two variables are independent if there is no evident relationship between the two. A pair of variables is either related in some way (associated) or not (independent). No pair of variables is both associated and independent. Section summary • Researchers often summarize data in a table, where the rows correspond to individuals or casesand the columns correspond to thevariables, the values of which are recorded for each individual. • Variables can benumerical(measured on a numerical scale) orcategorical(taking on levels, such as low/medium/high). Numerical variables can becontinuous, where all values within a range are possible, ordiscrete, where only specific values, usually integer values, are possible. • When there exists a relationship between two variables, the variables are said to beassociated 1.3 Air pollution and birth outcomes, study components. Researchers collected data to examine the relationship between air pollutants and preterm births in Southern California. During the study air pollution levels were measured by air quality monitoring stations. Specifically, levels of carbon monoxide were recorded in parts per million, nitrogen dioxide and ozone in parts per hundred million, and coarse particulate matter (PM10) inµg/m3. Length of gestation data were collected on 143,196 births between the years 1989 and 1993, and air pollution exposure during gestation was calculated for each birth. The analysis suggested that increased ambient PM10and, to a lesser degree, CO concentrations may be associated with the occurrence of preterm births.12 (a) Identify the main research question of the study. (b) Who are the subjects in this study, and how many are included? (c) What are the variables in the study? Identify each variable as numerical or categorical. If numerical, state whether the variable is discrete or continuous. If categorical, state whether the variable is ordinal. 1.4 Buteyko method, study components. The Buteyko method is a shallow breathing technique devel-oped by Konstantin Buteyko, a Russian doctor, in 1952. Anecdotal evidence suggests that the Buteyko method can reduce asthma symptoms and improve quality of life. In a scientific study to determine the effectiveness of this method, researchers recruited 600 asthma patients aged 18-69 who relied on medication for asthma treatment. These patients were randomnly split into two research groups: one practiced the Buteyko method and the other did not. Patients were scored on quality of life, activity, asthma symptoms, and medication reduction on a scale from 0 to 10. On average, the participants in the Buteyko group experienced a significant reduction in asthma symptoms and an improvement in quality of life.13 (a) Identify the main research question of the study. (b) Who are the subjects in this study, and how many are included? (c) What are the variables in the study? Identify each variable as numerical or categorical. If numerical, state whether the variable is discrete or continuous. If categorical, state whether the variable is ordinal. 1.5 Cheaters, study components. Researchers studying the relationship between honesty, age and self-control conducted an experiment on 160 children between the ages of 5 and 15. Participants reported their age, sex, and whether they were an only child or not. The researchers asked each child to toss a fair coin in private and to record the outcome (white or black) on a paper sheet, and said they would only reward children who report white.14 (a) Identify the main research question of the study. (b) Who are the subjects in this study, and how many are included? (c) The study’s findings can be summarized as follows: ”Half the students were explicitly told not to cheat and the others were not given any explicit instructions. In the no instruction group probability of cheating was found to be uniform across groups based on child’s characteristics. In the group that was explicitly told to not cheat, girls were less likely to cheat, and while rate of cheating didn’t vary by age for boys, it decreased with age for girls.” How many variables were recorded for each subject in the study in order to conclude these findings? State the variables and their types. 12[B. Ritz et al. “][Effect of air pollution on preterm birth among children born in Southern California between 1989] and 1993”. In:Epidemiology11.5 (2000), pp. 502–511. 1.6 Stealers, study components. In a study of the relationship between socio-economic class and unethical behavior, 129 University of California undergraduates at Berkeley were asked to identify themselves as having low or high social-class by comparing themselves to others with the most (least) money, most (least) education, and most (least) respected jobs. They were also presented with a jar of individually wrapped candies and informed that the candies were for children in a nearby laboratory, but that they could take some if they wanted. After completing some unrelated tasks, participants reported the number of candies they had taken.15 (a) Identify the main research question of the study. (b) Who are the subjects in this study, and how many are included? (c) The study found that students who were identified as upper-class took more candy than others. How many variables were recorded for each subject in the study in order to conclude these findings? State the variables and their types. 1.7 Migraine and acupuncture, Part 2. Exercise1.1introduced a study exploring whether acupuncture had any effect on migraines. Researchers conducted a randomized controlled study where patients were randomly assigned to one of two groups: treatment or control. The patients in the treatment group re-ceived acupuncture that was specifically designed to treat migraines. The patients in the control group received placebo acupuncture (needle insertion at non-acupoint locations). 24 hours after patients received acupuncture, they were asked if they were pain free. What are the explanatory and response variables in this study? 1.8 Sinusitis and antibiotics, Part 2. Exercise1.2 introduced a study exploring the effect of antibiotic treatment for acute sinusitis. Study participants either received either a 10-day course of an antibiotic (treatment) or a placebo similar in appearance and taste (control). At the end of the 10-day period, patients were asked if they experienced improvement in symptoms. What are the explanatory and response variables in this study? 1.9 Fisher’s irises. Sir Ronald Aylmer Fisher was an English statistician, evolutionary biologist, and geneticist who worked on a data set that contained sepal length and width, and petal length and width from three species of iris flowers (setosa,versicolorandvirginica). There were 50 flowers from each species in the data set.16 (a) How many cases were included in the data? (b) How many numerical variables are included in the data? Indicate what they are, and if they are continuous or discrete. (c) How many categorical variables are included in the data, and what are they? List the corre-sponding levels (categories). Photo by Ryan Claussen (http://flic.kr/p/6QTcuX) CC BY-SA 2.0 license 1.10 Smoking habits of UK residents. A survey was conducted to study the smoking habits of UK residents. Below is a data matrix displaying a portion of the data collected in this survey. Note that “£” stands for British Pounds Sterling, “cig” stands for cigarettes, and “N/A” refers to a missing component of the data.17 sex age marital grossIncome smoke amtWeekends amtWeekdays 1 Female 42 Single Under£2,600 Yes 12 cig/day 12 cig/day 2 Male 44 Single £10,400 to£15,600 No N/A N/A 3 Male 53 Married Above£36,400 Yes 6 cig/day 6 cig/day . . . . . . . . . . . . . . . . . . . . . . . . 1691 Male 40 Single £2,600 to£5,200 Yes 8 cig/day 8 cig/day (a) What does each row of the data matrix represent? (b) How many participants were included in the survey? (c) Indicate whether each variable in the study is numerical or categorical. If numerical, identify as contin-uous or discrete. If categorical, indicate if the variable is ordinal. 15[P.K. Piff et al. “Higher social class predicts increased unethical behavior”. In:] [Proceedings of the National] Academy of Sciences (2012). 16[R.A Fisher. “][The Use of Multiple Measurements in Taxonomic Problems][”. In:] [Annals of Eugenics] [7 (1936),] pp. 179–188. 1.11 US Airports. The visualization below shows the geographical distribution of airports in the contiguous United States and Washington, DC. This visualization was constructed based on a dataset where each observation is an airport.18 (a) List the variables used in creating this visualization. (b) Indicate whether each variable in the study is numerical or categorical. If numerical, identify as contin-uous or discrete. If categorical, indicate if the variable is ordinal. 1.12 UN Votes. The visualization below shows voting patterns the United States, Canada, and Mexico in the United Nations General Assembly on a variety of issues. Specifically, for a given year between 1946 and 2015, it displays the percentage of roll calls in which the country voted yes for each issue. This visualization was constructed based on a dataset where each observation is a country/year pair.19 (a) List the variables used in creating this visualization. (b) Indicate whether each variable in the study is numerical or categorical. If numerical, identify as contin-uous or discrete. If categorical, indicate if the variable is ordinal. 18[Federal Aviation Administration,][www.faa.gov/airports/airport safety/airportdata 5010][.] 19[David Robinson.] [unvotes: United Nations General Assembly Voting Data][. R package version 0.2.0. 2017.] Overview of data collection principles How do researchers collect data? Why are the results of some studies more reliable than others? The way a researcher collects data depends upon the research goals. In this section, we look at different methods of collecting data and consider the types of conclusions that can be drawn from those methods. Learning objectives 1. Distinguish between the population and a sample and between the parameter and a statistic. 2. Know when to summarize a data set using a mean versus a proportion. 3. Understand why anecdotal evidence is unreliable. 4. Identify the four main types of data collection: census, sample survey, experi-ment, and observation study. 5. Classify a study as observational or experimental, and determine when a study’s results can be generalized to the population and when a causal relationship can be drawn. Populations and samples Consider the following three research questions: 1. What is the average mercury content in swordfish in the Atlantic Ocean? 2. Over the last 5 years, what is the average time to complete a degree for Duke undergrads? 3. Does a new drug reduce the number of deaths in patients with severe heart disease? Each research question refers to a targetpopulation. In the first question, the target population is all swordfish in the Atlantic ocean, and each fish represents a case. Often times, it is too expensive to collect data for every case in a population. Instead, a sample is taken. A sample represents a subset of the cases and is often a small fraction of the population. For instance, 60 swordfish (or some other number) in the population might be selected, and this sample data may be used to provide an estimate of the population average and answer the research question. GUIDED PRACTICE 1.9 For the second and third questions above, identify the target population and what represents an individual case.20 We collect a sample of data to better understand the characteristics of a population. Avariable is a characteristic we measure for each individual or case. The overall quantity of interest may be the mean, median, proportion, or some other summary of a population. These population values are called parameters. We estimate the value of a parameter by taking a sample and computing a numerical summary called astatistic based on that sample. Note that the two p’s (population, parameter) go together and the two s’s (sample, statistic) go together. EXAMPLE 1.10 Earlier we asked the question: what is the average mercury content in swordfish in the Atlantic Ocean? Identify the variable to be measured and the parameter and statistic of interest. The variable is the level of mercury content in swordfish in the Atlantic Ocean. It will be measured for each individual swordfish. The parameter of interest is the average mercury content inall swordfish in the Atlantic Ocean. If we take a sample of 50 swordfish from the Atlantic Ocean, the average mercury content among just those 50 swordfish will be the statistic. Two statistics we will study are themean(also called theaverage) andproportion. When we are discussing a population, we label the mean asµ(the Greek letter,mu), while we label the sample mean as ¯x (read asx-bar). When we are discussing a proportion in the context of a population, we use the label p, while the sample proportion has a label of ˆp(read asp-hat). Generally, we use ¯ x to estimate the population mean, µ. Likewise, we use the sample proportion ˆp to estimate the population proportion,p. EXAMPLE 1.11 Is µa parameter or statistic? What about ˆp? µis a parameter because it refers to the average of theentire population. ˆpis a statistic because it is calculated from a sample. EXAMPLE 1.12 For the second question regarding time to complete a degree for a Duke undergraduate, is the variable numerical or categorical? What is the parameter of interest? The characteristic that we record on each individual is the number of years until graduation, which is a numerical variable. The parameter of interest is the average time to degree for all Duke under-graduates, and we use µto describe this quantity. GUIDED PRACTICE 1.13 The third question asked whether a new drug reduces deaths in patients with severe heart disease. Is the variable numerical or categorical? Describe the statistic that should be calculated in this If these topics are still a bit unclear, don’t worry. We’ll cover them in greater detail in the next chapter. Figure 1.10: In February 2010, some media pundits cited one large snow storm as valid evidence against global warming. As comedian Jon Stewart pointed out, “It’s one storm, in one region, of one —————————– February 10th, 2010. Anecdotal evidence Consider the following possible responses to the three research questions: 1. A man on the news got mercury poisoning from eating swordfish, so the average mercury concentration in swordfish must be dangerously high. 2. I met two students who took more than 7 years to graduate from Duke, so it must take longer to graduate at Duke than at many other colleges. 3. My friend’s dad had a heart attack and died after they gave him a new heart disease drug, so the drug must not work. Each conclusion is based on data. However, there are two problems. First, the data only represent one or two cases. Second, and more importantly, it is unclear whether these cases are actually representative of the population. Data collected in this haphazard fashion are called anecdotal evidence. Be careful of making inferences based on anecdotal evidence. Such evidence may be true and verifiable, but it may only represent extraordinary cases. The majority of cases and the average case may in fact be very different. Explanatory and response variables When we ask questions about the relationship between two variables, we sometimes also want to determine if the change in one variable causes a change in the other. Consider the following rephrasing of an earlier question about thecountydata set: If there is an increase in the median household income in a county, does this drive an increase in its population? In this question, we are asking whether one variable affects another. If this is our underlying belief, thenmedian household incomeis theexplanatoryvariable and thepopulation change is the responsevariable in the hypothesized relationship.22 When we suspect one variable might causally affect another, we label the first variable the explanatory variable and the second the response variable. might affect explanatory variable response variable For many pairs of variables, there is no hypothesized relationship, and these labels would not be applied to either variable in such cases. Labeling variables asexplanatoryandresponse does not guarantee the relationship between the two is actually causal, even if there is an association identified between the two variables. We use these labels only to keep track of which variable we suspect affects the other. In many cases, the relationship is complex or unknown. It may be unclear whether variableA explains variableBor whether variableB explains variableA. For example, it is now known that a particular protein called REST is much depleted in people suffering from Alzheimer’s disease. While this raises hopes of a possible approach for treating Alzheimer’s, it is still unknown whether the lack of the protein causes brain deterioration, whether brain deterioration causes depletion in the REST protein, or whether some third variable causes both brain deterioration and REST depletion. That is, we do not know if the lack of the protein is an explanatory variable or a response variable. Perhaps it is both.23 22[Sometimes the explanatory variable is called the][independent][variable and the response variable is called the] dependentvariable. However, this becomes confusing since apairof variables might be independent or dependent, so we avoid this language. Observational studies versus experiments There are two primary types of data collection: observational studies and experiments. Researchers perform anobservational studywhen they collect data without interfering with how the data arise. For instance, researchers may collect information via surveys, review medical or company records, or follow acohortof many similar individuals to study why certain diseases might develop. In each of these situations, researchers merely observe or take measurements of things that arise When researchers want to investigate the possibility of a causal connection, they conduct an experiment. For all experiments, the researchers must impose a treatment. For most studies there will be both an explanatory and a response variable. For instance, we may suspect administering a drug will reduce mortality in heart attack patients over the following year. To check if there really is a causal connection between the explanatory variable and the response, researchers will collect a sample of individuals and split them into groups. The individuals in each group are assigned a treatment. When individuals are randomly assigned to a group, the experiment is called a randomized experiment. For example, each heart attack patient in the drug trial could be randomly assigned into one of two groups: the first group receives aplacebo(fake treatment) and the second group receives the drug. See the case study in Section 1.1 for another example of an experiment, though that study did not employ a placebo. EXAMPLE 1.14 Suppose that a researcher is interested in the average tip customers at a particular restaurant give. Should she carry out an observational study or an experiment? In addressing this question, we ask, “Will the researcher be imposing any treatment?” Because there is no treatment or interference that would be applicable here, it will be an observational study. Additionally, one consideration the researcher should be aware of is that, if customers know their tips are being recorded, it could change their behavior, making the results of the study inaccurate. Section summary • The population is the entire group that the researchers are interested in. Because it is usually too costly to gather the data for the entire population, researchers will collect data from asample, representing a subset of the population. • Aparameteris a true quantity for the entire population, while astatisticis what is calcu-lated from the sample. A parameter is about a population and a statistic is about a sample. Remember: p goes with p and s goes with s. • Two common summary quantities are mean (for numerical variables) andproportion (for categorical variables). • Finding a good estimate for a population parameter requires a random sample; do not gener-alize from anecdotal evidence. • There are two primary types of data collection: observational studies and experiments. In an experiment, researchers impose a treatment to look for a causal relationship between the treatment and the response. In anobservational study, researchers simply collect data without imposing any treatment. 1.13 Air pollution and birth outcomes, scope of inference. Exercise 1.3 introduces a study where researchers collected data to examine the relationship between air pollutants and preterm births in Southern California. During the study air pollution levels were measured by air quality monitoring stations. Length of gestation data were collected on 143,196 births between the years 1989 and 1993, and air pollution exposure during gestation was calculated for each birth. (a) Identify the population of interest and the sample in this study. (b) Comment on whether or not the results of the study can be generalized to the population, and if the findings of the study can be used to establish causal relationships. 1.14 Cheaters, scope of inference. Exercise1.5introduces a study where researchers studying the rela-tionship between honesty, age, and self-control conducted an experiment on 160 children between the ages of 5 and 15. The researchers asked each child to toss a fair coin in private and to record the outcome (white or black) on a paper sheet, and said they would only reward children who report white. Half the students were explicitly told not to cheat and the others were not given any explicit instructions. Differences were observed in the cheating rates in the instruction and no instruction groups, as well as some differences across children’s characteristics within each group. (a) Identify the population of interest and the sample in this study. (b) Comment on whether or not the results of the study can be generalized to the population, and if the findings of the study can be used to establish causal relationships. 1.15 Buteyko method, scope of inference. Exercise1.4introduces a study on using the Buteyko shallow breathing technique to reduce asthma symptoms and improve quality of life. As part of this study 600 asthma patients aged 18-69 who relied on medication for asthma treatment were recruited and randomly assigned to two groups: one practiced the Buteyko method and the other did not. Those in the Buteyko group experienced, on average, a significant reduction in asthma symptoms and an improvement in quality of life. (a) Identify the population of interest and the sample in this study. (b) Comment on whether or not the results of the study can be generalized to the population, and if the findings of the study can be used to establish causal relationships. 1.16 Stealers, scope of inference. Exercise 1.6 introduces a study on the relationship between socio-economic class and unethical behavior. As part of this study 129 University of California Berkeley under-graduates were asked to identify themselves as having low or high social-class by comparing themselves to others with the most (least) money, most (least) education, and most (least) respected jobs. They were also presented with a jar of individually wrapped candies and informed that the candies were for children in a nearby laboratory, but that they could take some if they wanted. After completing some unrelated tasks, participants reported the number of candies they had taken. It was found that those who were identified as upper-class took more candy than others. (a) Identify the population of interest and the sample in this study. (b) Comment on whether or not the results of the study can be generalized to the population, and if the findings of the study can be used to establish causal relationships. 1.17 Relaxing after work. The General Social Survey asked the question, “After an average work day, about how many hours do you have to relax or pursue activities that you enjoy?” to a random sample of 1,155 Americans. The average relaxing time was found to be 1.65 hours. Determine which of the following is an observation, a variable, a sample statistic (value calculated based on the observed sample), or a population parameter. (a) An American in the sample. (b) Number of hours spent relaxing after an average work day. (c) 1.65. 1.18 Cats on YouTube. Suppose you want to estimate the percentage of videos on YouTube that are cat videos. It is impossible for you to watch all videos on YouTube so you use a random video picker to select 1000 videos for you. You find that 2% of these videos are cat videos.Determine which of the following is an observation, a variable, a sample statistic (value calculated based on the observed sample), or a population parameter. (a) Percentage of all videos on YouTube that are cat videos. (b) 2%. (c) A video in your sample. Observational studies and sampling strategies You have probably read or heard claims from many studies and polls. A background in statistical reasoning will help you assess the validity of such claims. Some of the big questions we address in this section include: If a study finds a relationship between two variables, such as eating chocolate and positive health outcomes, is it reasonable to conclude eating chocolate improves health outcomes? How do opinion polls work? How do research organizations collect the data, and what types of bias should we look out for? Learning objectives 1. Identify possible confounding factors in a study and explain, in context, how
{"url":"https://1library.net/document/8rz3weqx-advanced-high-school-statistics-nd-edition.html","timestamp":"2024-11-08T19:02:07Z","content_type":"text/html","content_length":"234920","record_id":"<urn:uuid:da96ba0f-cd94-49de-8dd9-91438336dbc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00114.warc.gz"}
Cite as Emile Anand, Jan van den Brand, Mehrdad Ghadiri, and Daniel J. Zhang. The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024) Copy BibTex To Clipboard author = {Anand, Emile and van den Brand, Jan and Ghadiri, Mehrdad and Zhang, Daniel J.}, title = {{The Bit Complexity of Dynamic Algebraic Formulas and Their Determinants}}, booktitle = {51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)}, pages = {10:1--10:20}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-322-5}, ISSN = {1868-8969}, year = {2024}, volume = {297}, editor = {Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.10}, URN = {urn:nbn:de:0030-drops-201538}, doi = {10.4230/LIPIcs.ICALP.2024.10}, annote = {Keywords: Data Structures, Online Algorithms, Bit Complexity}
{"url":"https://drops.dagstuhl.de/search?term=Weinstein%2C%20Omri","timestamp":"2024-11-03T19:22:44Z","content_type":"text/html","content_length":"150969","record_id":"<urn:uuid:4ad85963-3279-40fa-be8f-66f9700352f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00286.warc.gz"}
Complex-temperature properties of the two-dimensional Ising model for nonzero magnetic field We study the complex-temperature phase diagram of the square-lattice Ising model for nonzero external magnetic field H, i.e., for 0≤μ≤∞, where μ=[Formula Presented]. We also carry out a similar analysis for -∞≤μ≤0. The results for the interval -1≤μ≤1 provide a way of continuously connecting the two known exact solutions of this model, viz., at μ=1 (Onsager, Yang) and μ=-1 (Lee and Yang). Our methods include numerical calculations of complex-temperature zeros of the partition function and an analysis of low-temperature series expansions. For real nonzero H, the inner branch of a limaçon bounding the FM phase breaks and forms two complex-conjugate arcs. We study the singularities and associated exponents of thermodynamic functions at the endpoints of these arcs. For μ<0, there are two line segments of singularities on the negative and positive u axis, and we carry out a similar study of the behavior at the inner endpoints of these arcs, which constitute the nearest singularities to the origin in this case. Finally, we also determine the exact complex-temperature phase diagrams at μ=-1 on the honeycomb and triangular lattices and discuss the relation between these and the corresponding zero-field phase diagrams. All Science Journal Classification (ASJC) codes • Statistical and Nonlinear Physics • Statistics and Probability • Condensed Matter Physics Dive into the research topics of 'Complex-temperature properties of the two-dimensional Ising model for nonzero magnetic field'. Together they form a unique fingerprint.
{"url":"https://researchwith.njit.edu/en/publications/complex-temperature-properties-of-the-two-dimensional-ising-model","timestamp":"2024-11-03T09:21:10Z","content_type":"text/html","content_length":"50878","record_id":"<urn:uuid:d2ccc808-ab7e-4ab9-83d3-889fd54feb6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00460.warc.gz"}
Definitions concerning filters Definitions concerning filters Definition of a filter Traditional definition of a filter on It follows from the definition that there can be no filters on the empty set. If we waive the second rule and allow hyperfilter on Alternative equivalent definition A collection Dispensing with the specification of the enclosing set One can dispense with the specification of the enclosing set Fineness of filters Definition: If finer than Filter bases and filter prebases Definition: A collection No enclosing set is specified. It is trivial to check that a filter is always a filter base. Filter generated by a filter base It is trivial to check that By definition, Fineness of filter bases Definition: If finer than Property: If Equivalent filter bases Definition: Two filter bases are equivalent if each is finer than the other. Property: Two filter bases generate the same filter on a common enclosing set iff they are equivalent. Proof: The generated filters are equal, that is, each finer than the other, iff their generating filter bases are each finer than the other, that is, equivalent. Property: Equivalence is an equivalence relation among filter bases (trivial). Filter prebases A nonempty collection The image of a filter Traditional definition: The filter on Alternative definition: The filter on Again, the two definitions are equivalent.
{"url":"https://maths.david.olivier.name/equivalent-alternative-definitions-concerning-filters/","timestamp":"2024-11-10T18:35:27Z","content_type":"text/html","content_length":"66793","record_id":"<urn:uuid:bf3c86f6-557a-4ab3-be7d-790ed834a243>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00812.warc.gz"}
Journal of Applied Nonlinear Dynamics Miguel A. F. Sanjuan Albert C.J. Luo Miguel A. F. Sanjuan (editor) Department of Physics, Universidad Rey Juan Carlos, 28933 Mostoles, Madrid, Spain Email: miguel.sanjuan@urjc.es Albert C.J. Luo (editor) Department of Mechanical and Industrial Engineering, Southern Illinois University Ed-wardsville, IL 62026-1805, USA Fax: +1 618 650 2555 Email: aluo@siue.edu Nonlinear Analysis of Two-layer Fluid Sloshing in a Rectangular Tank Subjected to Width Direction Excitation Journal of Applied Nonlinear Dynamics 5(4) (2016) 399--421 | DOI:10.5890/JAND.2016.12.003 Fumitaka Yoshizumi Toyota Central R&D Labs., Inc., 41-1, Yokomichi, Nagakute, Aichi, 480-1192, Japan Download Full Text PDF This paper describes a nonlinear theory to describe oscillations of two liquid layers formed in a rectangular tank. Nonlinear equations based on the variational principle and the Galerkin method are used, which are written in a form of direct expanding in eigenmodes. Analysis using the equations is described under the experimental conditions reported in a previous study. Both the experiment and the analysis demonstrate a “peculiar oscillation” in which the interface of the two liquids oscillates at 1/5 to 1/7 of the excitation frequency in a particular excitation frequency range. By observing the nonlinear force time series, four eigenmodes are identified to be mainly relevant to the oscillation. An amplitude equation analysis was applied to the four relevant eigenmodes. It was found that this “peculiar oscillation” is one of summed and differential harmonic oscillations (combination oscillation) in which one asymmetric and one symmetric eigenmode excite each other through the mediation of other two asymmetric eigenmodes. 1. [1]& Handa, K. and Tajima, K. (1979), Sloshing of two superposed liquid layers in a rectangular tank, Transactions of the Japan Society of Mechanical Engineers, Series B, (in Japanese), 45 nbsp (398), 1450-1457. 2. [2]&nbsp Tang, Y. (1993), Sloshing displacements in a tank containing two liquids, Proceedings of the ASME 1993 Pressure Vessels and Piping Conference (PVP), 258, 179-184. 3. [3]&nbsp Veletsos, A. S. and Shivakumar, P. (1993), Sloshing response of layered liquids in rigid tanks, Earthquake Engineering and Structural Dynamics, 22, 801-821. 4. [4]& Xue, M., Zheng, J., Lin, P., Ma, Y. and Yuan, X. (2013), Experimental investigation on the layered liquid sloshing in a rectangular tank, The Twenty-third International Offshore and Polar nbsp Engineering Conference, 202-208. 5. [5]& Molin, B., Remy, F., Audiffren, C. and Marcer, R. (2012), Experimental and numerical study of liquid sloshing in a rectangular tank with three fluids, The Twenty-second International nbsp Offshore and Polar Engineering Conference, 331-340. 6. [6]&nbsp Generalis, S. C. and Nagata, M. (1995), Faraday resonance in a two-liquid layer system, Fluid Dynamics Research, 15, 145-165. 7. [7]&nbsp Meziani, B. and Ourrad, O. (2013), Modal method for solving the nonlinear sloshing of two superposed fluids in a rectangular tank, Journal of Applied Nonlinear Dynamics, 2(3), 261-283. 8. [8]& La Rocca, M., Sciortino, G., Adduce, C. and Boniforti, M. A. (2005), Experimental and theoretical investigation on the sloshing of a two-liquid system with free surface, Physics of nbsp Fluids, 17, 062101. 9. [9]&nbsp Hara, K. and Takahara, H. (2008), Hamiltonian formulation for nonlinear sloshing in layered two immiscible fluids, Journal of System Design and Dynamics, 2(5), 1183-1193. 10. [10]&nbsp Sciortino, G., Adduce, C. and La Rocca, M. (2009), Sloshing of a layered fluid with a free surface as a Hamiltonian system, Physics of Fluids, 21, 052102. 11. [11]& Yoshizumi, F. (2008), Nonlinear analysis of a two-layer fluid sloshing problem: Two-dimensional problem in rectangular tanks, Transactions of the Japan Society of Mechanical Engineers, nbsp Series C, (in Japanese), 74(748), 2845-2854. 12. [12]&nbsp Luke, J. C. (1967), A variational principle for a fluid with a free surface, Journal of Fluid Mechanics, 27, 395-397. 13. [13]& Sakata, M., Kimura, K. and Utsumi, M. (1984), Non-stationary response of non-linear liquid motion in a cylindrical tank subjected to random base excitation, Journal of Sound and nbsp Vibration, 94(3), 351-363. 14. [14]& Faltinsen, O. M., Rognebakke, O. F., Lukovsky, I. A. and Timokha, A. N. (2000), Multidimensional modal analysis of nonlinear sloshing in a rectangular tank with finite water depth, nbsp Journal of Fluid Mechanics, 407, 201-234. 15. [15]&nbsp Yamamoto, T., Yasuda, K. and Tei, N. (1981), Summed and differential harmonic oscillations in a slender beam, Bulletin of the JSME, 24(193), 1214-1222.
{"url":"https://www.lhscientificpublishing.com/Journals/articles/DOI-10.5890-JAND.2016.12.003.aspx","timestamp":"2024-11-09T22:41:07Z","content_type":"application/xhtml+xml","content_length":"26735","record_id":"<urn:uuid:5d63aa18-bccc-4aba-b22f-a6d9215850d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00009.warc.gz"}
Finding the nearest pair of points Finding the nearest pair of points¶ Problem statement¶ Given $n$ points on the plane. Each point $p_i$ is defined by its coordinates $(x_i,y_i)$. It is required to find among them two such points, such that the distance between them is minimal: $$ \min_{\scriptstyle i, j=0 \ldots n-1,\atop \scriptstyle i \neq j } \rho (p_i, p_j). $$ We take the usual Euclidean distances: $$ \rho (p_i,p_j) = \sqrt{(x_i-x_j)^2 + (y_i-y_j)^2} .$$ The trivial algorithm - iterating over all pairs and calculating the distance for each — works in $O(n^2)$. The algorithm running in time $O(n \log n)$ is described below. This algorithm was proposed by Shamos and Hoey in 1975. (Source: Ch. 5 Notes of Algorithm Design by Kleinberg & Tardos, also see here) Preparata and Shamos also showed that this algorithm is optimal in the decision tree model. We construct an algorithm according to the general scheme of divide-and-conquer algorithms: the algorithm is designed as a recursive function, to which we pass a set of points; this recursive function splits this set in half, calls itself recursively on each half, and then performs some operations to combine the answers. The operation of combining consist of detecting the cases when one point of the optimal solution fell into one half, and the other point into the other (in this case, recursive calls from each of the halves cannot detect this pair separately). The main difficulty, as always in case of divide and conquer algorithms, lies in the effective implementation of the merging stage. If a set of $n$ points is passed to the recursive function, then the merge stage should work no more than $O(n)$, then the asymptotics of the whole algorithm $T(n)$ will be found from the equation: $$T(n) = 2T(n/2) + O(n).$$ The solution to this equation, as is known, is $T(n) = O(n \log n).$ So, we proceed on to the construction of the algorithm. In order to come to an effective implementation of the merge stage in the future, we will divide the set of points into two subsets, according to their $x$-coordinates: In fact, we draw some vertical line dividing the set of points into two subsets of approximately the same size. It is convenient to make such a partition as follows: We sort the points in the standard way as pairs of numbers, ie.: $$p_i < p_j \Longleftrightarrow (x_i < x_j) \lor \Big(\left(x_i = x_j\right) \wedge \left(y_i < y_j \right) \Big) $$ Then take the middle point after sorting $p_m (m = \lfloor n/2 \rfloor)$, and all the points before it and the $p_m$ itself are assigned to the first half, and all the points after it - to the second $$A_1 = \{p_i \ | \ i = 0 \ldots m \}$$ $$A_2 = \{p_i \ | \ i = m + 1 \ldots n-1 \}.$$ Now, calling recursively on each of the sets $A_1$ and $A_2$, we will find the answers $h_1$ and $h_2$ for each of the halves. And take the best of them: $h = \min(h_1, h_2)$. Now we need to make a merge stage, i.e. we try to find such pairs of points, for which the distance between which is less than $h$ and one point is lying in $A_1$ and the other in $A_2$. It is obvious that it is sufficient to consider only those points that are separated from the vertical line by a distance less than $h$, i.e. the set $B$ of the points considered at this stage is equal to: $$B = \{ p_i\ | \ | x_i - x_m\ | < h \}.$$ For each point in the set $B$, we try to find the points that are closer to it than $h$. For example, it is sufficient to consider only those points whose $y$-coordinate differs by no more than $h$. Moreover, it makes no sense to consider those points whose $y$-coordinate is greater than the $y$-coordinate of the current point. Thus, for each point $p_i$ we define the set of considered points $C (p_i)$ as follows: $$C(p_i) = \{ p_j\ |\ p_j \in B,\ \ y_i - h < y_j \le y_i \}.$$ If we sort the points of the set $B$ by $y$-coordinate, it will be very easy to find $C(p_i)$: these are several points in a row ahead to the point $p_i$. So, in the new notation, the merging stage looks like this: build a set $B$, sort the points in it by $y$-coordinate, then for each point $p_i \in B$ consider all points $p_j \in C(p_i)$, and for each pair $(p_i,p_j)$ calculate the distance and compare with the current best distance. At first glance, this is still a non-optimal algorithm: it seems that the sizes of sets $C(p_i)$ will be of order $n$, and the required asymptotics will not work. However, surprisingly, it can be proved that the size of each of the sets $C(p_i)$ is a quantity $O(1)$, i.e. it does not exceed some small constant regardless of the points themselves. Proof of this fact is given in the next Finally, we pay attention to the sorting, which the above algorithm contains: first,sorting by pairs $(x, y)$, and then second, sorting the elements of the set $B$ by $y$. In fact, both of these sorts inside the recursive function can be eliminated (otherwise we would not reach the $O(n)$ estimate for the merging stage, and the general asymptotics of the algorithm would be $O(n \log^2 n)$). It is easy to get rid of the first sort — it is enough to perform this sort before starting the recursion: after all, the elements themselves do not change inside the recursion, so there is no need to sort again. With the second sorting a little more difficult to perform, performing it previously will not work. But, remembering the merge sort, which also works on the principle of divide-and-conquer, we can simply embed this sort in our recursion. Let recursion, taking some set of points (as we remember,ordered by pairs $(x, y)$), return the same set, but sorted by the $y$ -coordinate. To do this, simply merge (in $O(n)$) the two results returned by recursive calls. This will result in a set sorted by $y$-coordinate. Evaluation of the asymptotics¶ To show that the above algorithm is actually executed in $O(n \log n)$, we need to prove the following fact: $|C(p_i)| = O(1)$. So, let us consider some point $p_i$; recall that the set $C(p_i)$ is a set of points whose $y$-coordinate lies in the segment $[y_i-h; y_i]$, and, moreover, along the $x$ coordinate, the point $p_i$ itself, and all the points of the set $C(p_i)$ lie in the band width $2h$. In other words, the points we are considering $p_i$ and $C(p_i)$ lie in a rectangle of size $2h \times h$. Our task is to estimate the maximum number of points that can lie in this rectangle $2h \times h$; thus, we estimate the maximum size of the set $C(p_i)$. At the same time, when evaluating, we must not forget that there may be repeated points. Remember that $h$ was obtained from the results of two recursive calls — on sets $A_1$ and $A_2$, and $A_1$ contains points to the left of the partition line and partially on it, $A_2$ contains the remaining points of the partition line and points to the right of it. For any pair of points from $A_1$, as well as from $A_2$, the distance can not be less than $h$ — otherwise it would mean incorrect operation of the recursive function. To estimate the maximum number of points in the rectangle $2h \times h$ we divide it into two squares $h \times h$, the first square include all points $C(p_i) \cap A_1$, and the second contains all the others, i.e. $C(p_i) \cap A_2$. It follows from the above considerations that in each of these squares the distance between any two points is at least $h$. We show that there are at most four points in each square. For example, this can be done as follows: divide the square into $4$ sub-squares with sides $h/2$. Then there can be no more than one point in each of these sub-squares (since even the diagonal is equal to $h / \sqrt{2}$, which is less than $h$). Therefore, there can be no more than $4$ points in the whole square. So, we have proved that in a rectangle $2h \times h$ can not be more than $4 \cdot 2 = 8$ points, and, therefore, the size of the set $C(p_i)$ cannot exceed $7$, as required. We introduce a data structure to store a point (its coordinates and a number) and comparison operators required for two types of sorting: struct pt { int x, y, id; struct cmp_x { bool operator()(const pt & a, const pt & b) const { return a.x < b.x || (a.x == b.x && a.y < b.y); struct cmp_y { bool operator()(const pt & a, const pt & b) const { return a.y < b.y; int n; vector<pt> a; For a convenient implementation of recursion, we introduce an auxiliary function upd_ans(), which will calculate the distance between two points and check whether it is better than the current double mindist; pair<int, int> best_pair; void upd_ans(const pt & a, const pt & b) { double dist = sqrt((a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y)); if (dist < mindist) { mindist = dist; best_pair = {a.id, b.id}; Finally, the implementation of the recursion itself. It is assumed that before calling it, the array $a[]$ is already sorted by $x$-coordinate. In recursion we pass just two pointers $l, r$, which indicate that it should look for the answer for $a[l \ldots r)$. If the distance between $r$ and $l$ is too small, the recursion must be stopped, and perform a trivial algorithm to find the nearest pair and then sort the subarray by $y$-coordinate. To merge two sets of points received from recursive calls into one (ordered by $y$-coordinate), we use the standard STL $merge()$ function, and create an auxiliary buffer $t[]$(one for all recursive calls). (Using inplace_merge () is impractical because it generally does not work in linear time.) Finally, the set $B$ is stored in the same array $t$. vector<pt> t; void rec(int l, int r) { if (r - l <= 3) { for (int i = l; i < r; ++i) { for (int j = i + 1; j < r; ++j) { upd_ans(a[i], a[j]); sort(a.begin() + l, a.begin() + r, cmp_y()); int m = (l + r) >> 1; int midx = a[m].x; rec(l, m); rec(m, r); merge(a.begin() + l, a.begin() + m, a.begin() + m, a.begin() + r, t.begin(), cmp_y()); copy(t.begin(), t.begin() + r - l, a.begin() + l); int tsz = 0; for (int i = l; i < r; ++i) { if (abs(a[i].x - midx) < mindist) { for (int j = tsz - 1; j >= 0 && a[i].y - t[j].y < mindist; --j) upd_ans(a[i], t[j]); t[tsz++] = a[i]; By the way, if all the coordinates are integer, then at the time of the recursion you can not move to fractional values, and store in $mindist$ the square of the minimum distance. In the main program, recursion should be called as follows: sort(a.begin(), a.end(), cmp_x()); mindist = 1E20; rec(0, n); Generalization: finding a triangle with minimal perimeter¶ The algorithm described above is interestingly generalized to this problem: among a given set of points, choose three different points so that the sum of pairwise distances between them is the In fact, to solve this problem, the algorithm remains the same: we divide the field into two halves of the vertical line, call the solution recursively on both halves, choose the minimum $minper$ from the found perimeters, build a strip with the thickness of $minper / 2$, and iterate through all triangles that can improve the answer. (Note that the triangle with perimeter $\le minper$ has the longest side $\le minper / 2$.) Practice problems¶
{"url":"https://cp-algorithms.com/geometry/nearest_points.html","timestamp":"2024-11-08T05:27:08Z","content_type":"text/html","content_length":"154727","record_id":"<urn:uuid:c740fc8c-2781-4077-92f4-b2995e17f76d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00288.warc.gz"}
How to Judge a Weatherman? Each day a weatherman gives a probability p of rain for the next day and each day it either rains or it doesn't. How do we judge the quality of these forecasts? A first attempt uses linear scores, p if it rains, 1-p if it doesn't. However when you analyze this system the weatherman should predict p=1 if his belief is greater than 1/2 and p=0 otherwise. A better measure is the log loss. The weatherman gets penalized -log(p) if it rains and -log(1-p) if it doesn't. A weatherman now has the incentive to announce his belief. There are other scoring functions with this property but the log loss has some nice properties such as the best a weather could hope to achieve is exactly the entropy of the distribution. The log loss and other measures are often used to analyze prediction mechanisms such as information markets. Dean Foster and Rakesh Vohra have a different take looking at a notion called calibration. Here you take all the days that the weatherman predicted 70% chance of rain and check that 70% of those days it actually rained. A prediction algorithm calibrates a binary sequence if for finite set of allowed probabilities, each of the subsequences consisting of predictions of probability p have about a p fraction of ones. Foster and Vohra showed that some probabilistic calibration scheme will calibrate every sequence in the limit. In other words you can be a great weatherman in the calibration sense just by looking at the history of rain and forgoing that pesky meterological training. Dean Foster and Sham Kakade gave a couple of interesting talks at the Bounded Rationality workshop giving a deterministic scheme that achieves a weak form of calibration and use it to learn Nash equilibirum in infinite repeated games. 4 comments: 1. Back when the Usenet was popular, a recurring question was "what do the weatherman probabilities mean?". This was well over a decade ago but I seem to recall that the answer was: the weather forecast service runs a few different computer models (usually 5 of them) and sees what is the outcome 24 hours into the future. If two models predict rain, then the probability is 40%, if 4 of them predict rain then it is 80%. At the risk of restarting a long dead usenet thread, can any one out there confirm this? 2. This is old, but may shed some light on the issue: 3. It seems that both are correct. Some weatherforecasters use statistical data to search for other times where weather conditions were the same as today. In this case the probability is historical: in 20% of the days that were just like today it ended up raining. Others run a set of computer models or the same computer model with small variations (known as ensemble) and report the percentage of such outcomes. It rained in 20% of our computer simulations. Each outcome can be weighted to reflect the probability of a given variation. For instance, if the chance of receiving above-median rainfall in a particular climate scenario is 60%, then 60% of past years when that scenario occurred had above median rainfall, and 40% had below-median rainfall. Because of weather's chaotic nature, errors or uncertainties in the starting point of a model can alter the results dramatically. One way to reduce the impact of such errors is through an ensemble of forecasts. In this technique, one model is run several times, each with a slightly different, intentionally varied set of starting points. 4. One nice things about logs is that uncertainty becomes additive. For instance if your students were completely ignorant, and you replaced your 10 binary questions with 1 question having 1024 answers, they would still get the same number of points if using log p award scheme. Using log(p) as length for codeword of symbol with probability p is also guaranteed to produce lowest expected codeword length when codewords must be prefix-free (instantaneously decodable). I wonder, what sort of bounds would hold for codes without any such constraints? Finally, using f(x)=log x as a way to award points would elicit correct internal probabilities from rational students, whereas f(x)=x^2 will not. This seemed like an interesting topic, hence my first blog entry has a derivation of this :)
{"url":"https://blog.computationalcomplexity.org/2005/02/how-to-judge-weatherman.html?m=0","timestamp":"2024-11-12T00:57:30Z","content_type":"application/xhtml+xml","content_length":"182937","record_id":"<urn:uuid:ee48b7a7-a210-48f9-8589-73d4360981ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00731.warc.gz"}
ACO Seminar The ACO Seminar (2013–2014) Nov. 22, 3:30pm, Wean 8220 (Note unusual day) Nayantara Bhatnagar, University of Delaware Lengths of Monotone Subsequences in a Mallows Permutation The longest increasing subsequence (LIS) of a uniformly random permutation is a well studied problem. Vershik–Kerov and Logan–Shepp first showed that asymptotically the typical length of the LIS is . This line of research culminated in the work of Baik–Deift–Johansson who related this length to the Tracy–Widom distribution. We study the length of the LIS and LDS of random permutations drawn from the Mallows measure, introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation p in S[n] is proportional to q^Inv(p) where q is a real parameter and Inv(p) is the number of inversions in p. We determine the typical order of magnitude of the LIS and LDS, large deviation bounds for these lengths and a law of large numbers for the LIS for various regimes of the parameter q. This is joint work with Ron Peled.
{"url":"https://aco.math.cmu.edu/abs-13-14/nov22.html","timestamp":"2024-11-05T07:25:07Z","content_type":"text/html","content_length":"2573","record_id":"<urn:uuid:b29b4ee9-b969-45e2-aef5-77a4590b6b40>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00565.warc.gz"}
apt4Maths: GCSE (& KS3) Maths Lesson PowerPoint Presentation on Equivalent Fractions - APT Initiatives apt4Maths: GCSE (& KS3) Maths Lesson PowerPoint Presentation on Equivalent Fractions If you are a school/college and would like to place an order and be INVOICED later, please email sales@apt-initiatives.com. This is the 3rd Lesson PowerPoint Presentation only of the full set of 13 Lesson PowerPoint Presentations on Fractions for GCSE (and Key Stage 3) Maths. This PowerPoint explains how to find equivalent fractions and to cancel fractions down. The full set of 13 Lesson PowerPoint Presentations on Fractions can be purchased from www.apt-initiatives.com Product Information The full set of 13 Lesson PowerPoint Presentations for GCSE (and Key Stage 3) Maths on Fractions (143 slides, excluding Title Pages) covers the following fraction-based topics: • 01 Fractions – The Basics (12 slides): Explains how to recognise fractions – proper, improper, and mixed numbers, how to write fractions correctly, and how to express what fraction of a shape is shaded or unshaded. • 02 Converting between Mixed Numbers and Improper Fractions (15 slides): Explains how to convert between mixed numbers and improper fractions. • 03 Equivalent Fractions (9 slides): Explains how to find equivalent fractions and to cancel fractions down. INCLUDED in this download • 04 Express as Fractions (9 slides): Explains how to express one value as a fraction of another. • 05 Comparing and Ordering Fractions (21 slides): Reviews equivalent fractions and introduces fraction to decimal conversions in order to compare fractions or put them in size order. • 06 Recurring Decimals to Fractions (10 slides): Explains, using algebra, how to convert a recurring decimal to a fraction. • 07 Adding and Subtracting Fractions (13 slides): Explains how to add and subtract fractions, including mixed numbers. • 08 Multiplying with Fractions (10 slides): Explains how to multiply fractions by whole numbers, and how to multiply fractions by fractions (including mixed numbers). • 09 Dividing with Fractions (8 slides): Explains how to divide fractions by whole numbers, how to divide whole numbers by fractions, and how to divide fractions by fractions (including mixed • 10 Fractions of Quantities (10 slides): Explains how to calculate fractions of quantities, and revises multiplying fractions by fractions. • 11 Fractional Change (8 slides): Explains how to increase or decrease a quantity by a fraction of itself. • 12 Solving Reverse Fraction Problems (8 slides): Explains how to find the original value when given the increased or decreased value. It also revises dividing by fractions. • 13 Algebraic Fractions (10 slides): Revises algebraic manipulation and explains how to apply the skills learnt regarding calculations with fractions (adding, subtracting, multiplying, and dividing) to solving problems involving algebraic fractions. Download Sample Material apt4Maths PowerPoint on Fractions - Equivalent Fractions
{"url":"https://www.apt-initiatives.com/products/apt4maths-gcse-ks3-maths-lesson-powerpoint-presentation-on-equivalent-fractions/","timestamp":"2024-11-12T18:59:32Z","content_type":"text/html","content_length":"81145","record_id":"<urn:uuid:98e914d2-0084-4bcf-b3ac-b5aec34b5e96>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00594.warc.gz"}
• 15m Range • Fast • Melee Skill • Skill Type: Debuff • Deals slight damage and grants you the Provacateur buff, greatly reducing damage from the next hit you take and improves your crit chance by 20% on your next attack. Does more damage if used from in Stealth. • ... (Main-hand) Damage • +20% Skill Crit Chance • Duration: 10s • Cost: [68 at Level 150] Power • Cooldown: 6s General Information Class: Burglar Level: 34 Using this skill applies the following effects to the Burglar: • Provocateur -- next skill gains +20% Critical Chance • Roll With It -- next enemy attack deals 50% damage (stacks up to 3 times. Skill Interactions While the Burglar is in Stealth (or is under the effects of Combat Stealth or Improved Feint), this skill becomes empowered, dealing additional damage: Stealthed Provoke • 15m Range • Fast • Melee Skill • Skill Type: Debuff, Stealth • Deals slight damage and grants you the Provacateur buff, greatly reducing damage from the next hit you take and improves your crit chance by 20% on your next attack. Does more damage if used from in Stealth. • ... (Main-hand) Damage • +20% Skill Crit Chance • Duration: 10s • Cost: [68 at Level 150] Power • Cooldown: 6s Trait Interactions • The trait Strike From Shadows in the Quiet Knife trait tree increases the Critical Chance of this skill by 10% when used from Stealth. • The trait Double Down in the Gambler trait tree makes this skill have a 25% chance to apply Dazed to the enemy target for a random duration between 15 and 60 seconds. This effect is broken on any damage. While in Stealth (or under the effects of Combat Stealth or Improved Feint), the application chance is increased by 35%, up to 60%. Tracery Interactions Armour Set Interactions Equipping 4 pieces of the Umbari Armour of Twisted Ways armour set acquired from the Depths of Mâkhda Khorbo Raid increases the damage reduction potency of Roll With It by 15%.
{"url":"https://lotro-wiki.com/wiki/Provoke","timestamp":"2024-11-06T18:46:41Z","content_type":"text/html","content_length":"28432","record_id":"<urn:uuid:f52f9774-ced1-4e1a-a674-c74edf069dc6>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00838.warc.gz"}
Figure/ground segmentation aims at partitioning an image into regions of coherent properties as a means for separating objects from their backgrounds. Considerable effort has been made to develop many advanced techniques in the recent years, where learning segmentation has attracted considerable attention of researchers because of its significant performance in classification applications. Further, probabilistic graphical models have also been remarkably successful in segmentation applications. In any image system, feature representation is crucial to enhancing the system performance. How to manifest an image and how to capture salient properties of the object regions are still challenging problems. The bag-of-features (BoF) model [1-3] has been widely used in the field of image processing. The model treats an image as a collection of unordered appearance descriptors extracted from local patches, quantizes them into discrete ‘visual words,’ and then computes a compact histogram representation. In this work, we propose a patch-level BoF model to effectively represent an image patch from raw image data. By pixellevel dictionary learning, sparse coding, and spatial pyramid matching, the feature representation can capture the salient properties of the image patch, thus resulting in high patchwise segmentation accuracy. Learning segmentation converts the image segmentation problem into a data clustering problem of image elements. One of the core challenges for machine learning is to discover what kind of information can be learned from the data sources and cluster this data into segments depicting the same object. Ren and Malik [4] proposed a classification model for segmentation, which feeds the Gestalt grouping cues into a linear classifier to discriminate between good and bad segmentation. Wu and Nevatia [5] developed a method to simultaneously detect and segment objects by boosting the edgelet feature classifiers. Duygulu et al. [6] modeled object recognition as a process of annotating image regions with words, and learning a mapping between region types and keywords by using an EM Probabilistic graphical models usually construct a cost function on the basis of some image constraints and formulate the image segmentation problem as a stochastic optimization problem. A condition random field (CRF) provides a principled approach to incorporating datadependent interactions; the complex joint probability distribution need not be modeled in this case. In this work, we use the CRF model to fuse multiple visual cues. For the CRF model, the definition of unary and pairwise potentials is very important. Previously, the unary potential was directly defined on feature spaces [7]. Lately, researchers have paid more attention on using a classifier to generate a unary potential [8-15], and most of them prefer using a pixel or a superpixel as the basic processing unit. In contrast, we use a regular image patch. Image patches on object boundaries contain rich local structure information of an object (see Fig. 1). While superpixels usually have a homogeneous appearance and an almost uniform size along with edge preservation, particularly for weak boundaries, these properties weaken the discriminative capability of the unary classifier when the superpixel is taken as the sampling unit. Our main contributions in this paper are twofold. First, we use an image patch as a sample of a unary classifier and propose an upgrade patch feature representation based on pixel-level sparse coding, which can capture more structure information of the local contour of objects. Second, we propose the color and texture pairwise potentials with neighborhood interactions and an edge potential representing edge continuity, which are validated to be very effective in our experiments. CRFs are probabilistic models for segmenting data with structured labels [16], which are defined on a twodimensional discrete lattice, every site on which corresponds to a graph node. Let G (V,E) be an undirected graph with image patches as nodes V and the links between pairs of nodes as edges E. CRFs directly model the distribution P L | I,w,v of node labels L conditional on image data I for node parameters w and edge parameters v. We are interested in finding the labels where l[i] denotes the label of the i^th node. In this work, we are concerned with binary segmentation (foreground and background), i.e., l[i] ∈ {-1,1}. The joint distribution over the labels L given the observations I can be expressed as follows: where z represents a normalized constant known as the partition function, N[i] denotes the set of neighbors of the i^th node in graph G, and A[i] and I[ij] indicate the unary and pairwise potentials, respectively. The unary potential is modeled using a local discriminative model that decides the association of a given node to a certain class, ignoring the interaction of its neighbors. In contrast, the pairwise potential is regarded as a data-dependent smoothing function that denotes the interaction between two nodes. Both terms explicitly depend on a predefined set of features from I We use a CRF model to learn the conditional distribution over figure/ground labeling given an image, which allows us to incorporate different levels and different types of features in a single unified model. In this work, the unary potentials are defined by the prediction probability obtained from a linear support vector machine (SVM) classifier. Different from the existing feature descriptions, we train a pixel-level over-complete dictionary to sparsely represent image patches in a highdimensional space. 1) Pixel-Level Texture Descriptor Gabor wavelets have received considerable attention because of biological reasons and their optimal resolution in both frequency and spatial domains. The Gabor wavelet representation can capture the local structure corresponding to the spatial scale, spatial localization, and orientation selectivity. It can characterize the spatial frequency structure in the image, while preserving the information of spatial relations. However, many existing image representation approaches in the Gabor domain merely consider the magnitude information. In this work, we proposed a new pixel-level feature descriptor, which fuses the Gabor magnitude and the Gabor phase. To eliminate local noise interference, a simple smoothing filter is used for removing image noise in advance. Then, we perform the Gabor transform in D directions and S scales on a given gray image, and respectively, denote the magnitude response and the phase response in direction θ and scale σ as ρ[θσ] and α[θσ]. Further, the 2π phase space is uniformly quantized into J intervals as Φ[j] = [ø [j,][min],ø[j,][min] + ς] , j = 1,...J . ς = 2π / J represents the quantization step, and ø[j,][min] denotes the margin value between two phase intervals Φ[j][-1] and Φ[j]. Suppose that phase response α[θσ] belongs to the j^th interval Φ[j]. Then, we compute Eq. (2) to get a J-dimensional vector y[θσ] in direction θ and scale σ as follows: An example of diagram J = 8 is shown in Fig. 2, in which the phase space is quantized into eight intervals and ς = π/4. Assuming the phase response α[θσ] ∈ Φ[2] , we can update the margin value as the J-dimensional vector We concatenate all vectors y[θσ] of the given D directions and S scales as the pixel-level descriptor y = [y[1,1],y[1,2],...,y[D,S]] . The ℓ = D × S × J -dimensional feature vector not only describes the distribution of the phase response in each scale and direction but also reflects the magnitude response. 2) Pixel-Level Dictionary Learning and Coding Sparse representations have demonstrated considerable success in numerous applications, and the sparse modeling of signals has been proven to be very effective in signal reconstruction and classification. We randomly sample some pixels from the training image set to learn an over-complete pixel-level dictionary. Assuming that we collect N training samples, we define a matrix Y ∈ R^ℓ×N as the columns of samples: where y[i] stands for the ℓ-dimensional texture feature of the i^th sample. Using an over-complete dictionary D ∈ R^ℓ×L , which contains L atoms as column vectors, we can approximate the observed sample y well by using a sparse linear combination of these atoms. In particular, there exists a sparse coefficient vector x such that y can be approximated as y ≈ D[x], where the vector x represents the weighted contribution of these atoms when reconstructing the observed sample. Given the training samples, we can learn the dictionary D by solving the following optimization problem: where λ denotes a balance parameter and the second term enforces x to have a small number of nonzero elements. The optimization problem is convex in D or X while fixing the other, but not in both simultaneously [17]. We solve it by alternating the optimization over D and X ; the dictionary D can be initialized by randomly sampling L columns from Y or by K-means clustering. When fixing D, the optimization becomes a standard sparse coding problem, which can be solved very efficiently by using the feature-sign search algorithm. When fixing X, the problem reduces to a least squares problem (as shown in Eq. (5)), which can be solved by using the Lagrange dual algorithm. Once the over-complete dictionary D is given, the texture feature y of each pixel can be coded as L-dimensional sparse vector x by solving the following l1-norm regularization problem: 3) Patch Feature Representation The pivotal role of the unary potential in the CRF-based segmentation model has been demonstrated. It can be taken as a local decision term, which decides the association of a given graph node to a certain class. Usually, the use of the unary classifier alone leads to high accuracy as compared to the full CRF model as it can segment most parts of an object and loses only some details of the object boundaries. In this work, we integrate the texture feature and the color feature to represent an image patch. We partition a patch into 1×2, 2×2 segments in two different scales, and then, compute the max pooling vector of the sparse codes of pixels within each of the five segments. We finally concatenate all the vectors to form a vector representation of the texture feature. The so-called spatial pyramid matching has had remarkable success in image classification applications. Color information is very useful for identifying the classes of image patches. For example, backgrounds (e.g., sky, water, grass, and tree) are usually distinguishable from objects (e.g., cow, sheep, and bird) in color. For a patch, we compute 64-bin histograms in each CIE Lab color channel as its color feature and then, concatenate the texture vector and the color vector to form the final feature representation. In our experiments, we fixed the size of dictionary D as 2048; thus, the dimension of the patch feature is 2048×5+64×3 =10432 . We also find that max pooling outperforms the other alternative pooling methods. 4) Unary Potential Computation We train a binary linear SVM classifier to predict the figure/ground probabilities of an image patch, which are used for computing the unary potential. However, the boundaries of the foreground segmented by the CRF model using patches as graph nodes have a blocking effect. To generate results close to the ground truth, we split patches into some perceptually meaningful entities by using the over-segmentation boundary map, which is generated by an existing region merging method [18]. As shown in Fig. 3, a patch is split into several regions. These regions are considered the graph nodes, and their unary potential values are defined as the corresponding prediction probability of the host patch. That is, the regions split from a patch are assigned the same probability. In particular, the unary potential in Eq. (1) is defined as follows: where P (l[i] l I) denotes the local class posterior, that is, the prediction probability given by the unary classifier. After the unary binary classification, we can already obtain good segmentation results, but the classifier separately processes each image patch. The mutual dependence among neighboring patches is ignored, which results in some neighboring patches with a similar appearance being possibly improperly labeled as opposite classes (see Fig. 4). Therefore, as contextual knowledge is necessary for image segmentation, we define the pairwise potential to address this problem. In this work, the pairwise penalty I[ij] is defined as the weight w[ij] of a graph edge, that is, In image segmentation, the weights encode a graph affinity such that a pair of nodes with a high weight edge is considered to be strongly connected and edges with low weights represent the nearly disconnected nodes. We exploit the color, texture, and edge cues to model the connection between nodes and incorporate the three types of potentials in a unified CRF framework using pre-learned parameters. Assuming that the superscripts c, t, and e denote the color, texture, and edge, respectively, we can rewrite w[ij] (I) as follows: where g[ij] (I) represents a distance function defined over node pairs (i, j). w[ij] (I) synthetically reflects the connectivity of nodes i and j on multiple feature spaces. 1) Color Potential and Texture Potential Color information is an essential and typical representation for images and is a key element for distinguishing objects. Mean and histogram are two common color descriptors. Mean only describes the average color component rather than the color distribution in a region. We use the CIE Lab histogram as a color descriptor for computing the color potential. Similarly, each channel is uniformly quantified into 64 levels, and then, three channels are concatenated to form a 192-dimensional color vector. The experimental comparison demonstrates that the histogram descriptor is more effective than the mean descriptor, increasing the overall pixel-wise labeling accuracy by 3.8%. Every region in natural images is not isolated and is strongly connected with its adjacent regions. When computing it is unreasonable to only use the node pair (i, j) and ignore the neighboring nodes. Therefore, we consider the neighborhood interactions in the main steps summarized in Algorithm 1. where D (i, j) = [[X^2] (h[i] + h[m[i]],h[j]) + [X^2] (h[i],h[j] + h[m[i]])] / 2. Similarly, we can compute the texture potential Further, h[i] stands for a texture histogram, which is computed as follows: where K denotes the number of pixels in the region node i, and x represents the L-dimensional sparse texture vector described in Section III-A. Experiments demonstrate that the step of incorporating neighborhood interactions increases the overall pixel-wise labeling accuracy by 2.3%. We also find that the color potential using the Lab / [X^2] descriptor outperforms the GMM color model as used in 2) Edge Potential Inside a very small region, edge information basically indicates local image shape priors; the regions belonging to the same object often have strong edge continuities, which are described as the edge potential in this paper. As shown in Fig. 5, we find that there are many edges (blue lines) going through neighboring nodes. If two neighboring nodes are crossed by an edge, they very possibly belong to a visual unit and have the same figure/ground label. Motivated by this observation, we define the edge potential to capture the cue of edge continuity. Given an image, we compute its binary gradient magnitude M. Let S be the node index matrix, which indicates that the graph node that a pixel belongs to. Assuming that c[n] = (x[n],y[n]) denotes the coordinate of pixel n and (i, j) represents a pair of neighboring nodes, we can denote their common boundaries as a pixel pair set, as follows: Then, the edge potential is computed as follows: where and The parameter vector v in Eq. (7) is automatically learned from the training data. Given a set of training images T = {(L^(n),I^(n)), n = 1,...N}, we assume that all the training data are independent and identically distributed. We then use the conditional maximum likelihood (CML) criterion to estimate v. Its log likelihood is computed as follows: where the last term is the log-partition function. In general, the evaluation of the partition function is a NP-hard problem. We could use either sampling techniques (e.g., the Markov chain Monte Carlo method [19]) or some approximations (e.g., those of the free energy [20], piecewise training [21], pseudo-likelihood [22]) to estimate the parameters. The optimal parameter maximizes the log conditional likelihood according to the CML estimation as follows: This can be solved by using the gradient descent method. The derivative of the log likelihood L(v) is written as follows: where the second term E[P]( · ) denotes the expectation with respect to the distribution P (l|I^(n),v). That is, In general, the expectation cannot be computed analytically because of the combinatorial number of elements in the configuration space of labels. In this work, we use belief propagation [23] to approximate it. We evaluate the proposed approach using three datasets. The MSRC dataset [10] contains 591 images with 21 categories. The performance of the unary classifier on this dataset is measured by using the pixel precision. Furthermore, for comparison with a previous work [24], we select the following 13 classes of 231 images with a 7-class foreground (cow, sheep, airplane, car, bird, cat, and dog) and a 6-class background (grass, tree, sky, water, road, and building) as the data subset. The ground truth labeling of the dataset contains pixels labeled as ‘void’ (i.e., color-coded as black), which implies that these pixels do not belong to any of the 21 classes. In our experiments, void pixels are ignored for both the training and the testing of the unary classifier. The dataset is randomly split into roughly 40% training and 60% test sets, while ensuring approximately proportional contributions from each class. The second dataset is the Weizmann horse dataset [25], which includes the side views of many horses that have different appearances and poses. We have also used the VOC2006 cow database [26] in which ground truth segmentations are manually created. For the two datasets, the numbers of images in the training and test sets are exactly the same as in [27]. When extracting the pixel-level texture descriptor, we set the parameters of the Gabor filter as scales σ = {1,1.2} and directions θ = {0,π / 4,π / 2, 3π / 4}. The phase response is uniformly quantified into eight regions. Hence, the size of the pixel-level feature vector is 4×2×8 = 64 . We randomly select roughly 60000 samples from all the training images to learn the dictionary D and ensure approximately proportional contributions from each image. We set the dictionary size as 2048. Thus, a 64-dimensional pixel-level vector is sparsely encoded as a 2048-dimensional vector; then, by spatial pyramid matching and max pooling, we extract a patch feature from 16×16 pixel patches, which are densely sampled with a step size of 4 pixels. During the training of the unary classifier, a patch possibly contains multiple labels; however, we take the label that accounts for more than 75% of all the pixels in this patch as its label. We find that the number of patches of the training images in each class on average is in the order of 10000, and some classes have more than 100000 patches. Considering the memory and computational constraints, we randomly select 8000 patches from each class to construct a patch dataset for the evaluation of the unary classification, and each class sample is randomly split into 25% training and 75% test sets. For efficiency, we reduce the dimension of a patch feature from 10432 to 4000 by using the incremental principal component analysis (PCA) algorithm [28], in which we feed 20% samples to increment PCA in order to approximate the mean vector and the basis vectors. To evaluate the performance of the proposed patch representation, we use a simple linear SVM classifier to conduct 21-class classification experiments on the MSRC dataset. We select 1200 patches per class as training samples and the rest of the patches as the testing samples. We achieve patch-wise labeling accuracy of 71.0%, while the state-of-the-art approach [10] gives pixel-wise accuracy of 69.6%. For a fair comparison, we further refine the patch precision segmentations to the pixel precision ones by simply post-processing. In particular, we first get split patches (i.e., graph nodes) by using an existing segmentation method as described in Section III-A. The nodes are not always larger than the patches in size. Then, we take the nodes within the same segment as generated by [18] as the content consistent nodes. Finally, the label of each node is decided by a majority vote of the labels of its neighboring nodes, which must also be its content-consistent nodes. After the above processing, we achieve pixel-wise accuracy of 72.1%. We also evaluate the binary classification performance on the 13-class dataset. We select 2200 patches per class as the training samples and label the 7-class foreground and the 6-class background as positive samples and negative samples, respectively. Thus, we achieve patch-wise labeling accuracy of 87.5%. After the post-processing, we achieve pixel-wise accuracy of 88.4%. The unary pixel-wise accuracy on the Weizmann dataset and the VOC2006 dataset is 89.9% and 94.5%, respectively. On the 13-class dataset, we compare the unary potential accuracy with the full model accuracy. The latter is improved by 3.2% on average. This seemingly small numerical improvement corresponds to a large perceptual improvement (see Fig. 6), which shows that our pairwise potentials are effective. We evaluate the performance of the proposed method against that of three state-of-the-art methods [24,27,29]. The quantitative measure is the accuracy, namely segmentation cover, which is defined as the percentage of correctly classified pixels in the image (both foreground and background) [24,29]. On the MSRC dataset, since the performance varies substantially for different classes, we respectively give the accuracy of each class. We list the quantitative comparison of seven classes in Table 1 , which shows that our method outperforms the two competitors except for the cat class. In addition, the method proposed in [24] only selects 10 images for each class such that there is a single object in each image, while we compute the segmentation accuracy on the 13-class sub-dataset of 231 images, and many images contain several object instances. The difference in testing data also indicates that our method is more robust than that proposed in [24]. Fig. 7 shows some visual examples of the same images as reported in [24]. Although the accuracies of some examples are lower than those in the case of the competitor methods, our overall accuracy is higher. On the Weizmann and VOC2006 datasets, we compute the 2-class confusion matrix, as shown in Table 2, which shows that the proposed method performs favorably against the method proposed in [27] on the first dataset and much better on the second one. Figs. 8 and 9 show the same examples as those considered in [27]. The reason that the results on the cow dataset are very goodies that the appearances of the foreground and background are respectively homogeneous and the spatial distribution of the foreground is very compact. Compared to the horse dataset, the foreground of many images are inhomogeneous; in particular, horse shanks are very slim and their colors are different from those of the body, which leads to horse shanks going missing from the final segmentation, as shown in Fig. 8. In addition, the similar appearance of the foreground and the shadow in the background possibly causes some errors. In this paper, we propose a new discriminative model for figure/ground segmentation. First, a pixel-level dictionary is learnt from mass pixel-wise Gabor descriptors; second, each pixel is mapped as a high-dimensional sparse vector, and then, all the sparse vectors in a patch are fused to represent the patch by max pooling and spatial matching. The proposed unary features can simultaneously capture the appearance and context information, which significantly enhances the unary classification accuracy. The upgrade color and texture potentials with neighborhood interactions and the proposed edge potential weaken the interference of abnormal nodes during graph affinity computation. The experimental results demonstrate that the proposed approach is powerful with a comparison with three state-of-the-art approaches. In the future, we hope to integrate explicit semantic context and salient information to make the algorithm more intelligent.
{"url":"https://oak.go.kr/central/journallist/journaldetail.do?article_seq=17085","timestamp":"2024-11-12T16:42:39Z","content_type":"text/html","content_length":"232078","record_id":"<urn:uuid:b6a2a1d1-2cb3-421e-a338-ec4f277f6bac>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00589.warc.gz"}
How do you count tenths as a decimal? How do you count tenths as a decimal? To count in tenths, the digit after the decimal point increases by one each time. We have 0.1, 0. 2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9. Ten tenths is equal to one whole. What is a decimal number chart? The decimal place value chart is a chart that shows the place values of all the digits in a given decimal number. The digits to the left of the decimal points represent the place values starting from ones, followed by tens, hundreds, thousands, and so on. How do you calculate tenths? The easiest way to calculate in tenth of an hour increments is to divide the number of minutes by 60 and then round to the nearest tenth of an hour. How many tenths are there? There are 10 tenths in all, so each of the 5 parts has 2 tenths. i could have called the rectangle 100 hundredths or 1,000 thou- sandths, too. How easy. is the same as point 15. You just put a decimal point in front! What is the tenth place? The tenth place is between the decimal point and the hundredths place, as shown in the place value chart below. The tenths place is shown in red, along with the surrounding place values. The value of any digit that is in the tenths place is equal to the product of the digit and 1/10, or 0.1. How many thousandths are in a tenth? Because our system is base ten, a value of 10 in one place is equal to a value of 1 in the place to the left: 10 thousandths is equivalent to 1 hundredth, 10 hundredths is equivalent to 1 tenth, 10 tenths is equivalent to 1 one, and so on. How do you round to the tenths place? To round a number to the nearest tenth , look at the next place value to the right (the hundredths). If it’s 4 or less, just remove all the digits to the right. If it’s 5 or greater, add 1 to the digit in the tenths place, and then remove all the digits to the right. How many tenths are in an hour? Compensation is calculated by multiplying the applicable rate per hour by the total number of hours….Billing Increment Chart—Minutes to Tenths of an Hour. Minutes Time 31-36 .6 37-42 .7 43-48 .8 49-54 .9 How many tenths are in a half? Answer =5 tenths make a half. How many groups of tenths are there in tenths? Now we move to the tenths. We can share 7 tenths with 3 groups by giving each group 2 tenths and then there will be 1 tenth left. The 1 tenth will be renamed as 10 hundredths. Now there are a total of 12 hundredths which can be shared with 3 groups by giving each group 4 hundredths.
{"url":"https://www.curvesandchaos.com/how-do-you-count-tenths-as-a-decimal/","timestamp":"2024-11-01T23:06:24Z","content_type":"text/html","content_length":"48121","record_id":"<urn:uuid:dbc3c270-907b-4fd1-aa7a-f6c1c43ec0b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00415.warc.gz"}
Complex to Polar Impedance Converter \( \) \( \) \( \) A calculator to convert impedance from complex to polar form is presented. A complex impedance of the from \( Z = a + j b \) has a modulus given by \( |Z| = \sqrt{a^2 + b^2} \) and a phase \( \theta = \arctan \left(\dfrac{b}{a} \right) \) such that \( -\pi \lt \theta \le \pi \) The complex impedance in polar form is written as \( Z = |Z| \; \angle \; \theta \) where \( \theta \) is in degrees or radians. Use of the calculator Enter impedances \( Z \) as a complex number of the form \( a + j b \) and press "calculate". The output is the impedance in polar form with phase in degress and radians. Impedance in Polar Form Argument or phase in degrees: Argument or phase in radians: More References and links AC Circuits Calculators and Solvers Complex Numbers - Basic Operations Complex Numbers in Exponential Form Complex Numbers in Polar Form Convert a Complex Number to Polar and Exponential Forms Calculator
{"url":"http://www.mathforengineers.com/AC-circuits-calculators/complex-to-polar-impedance-converter.html","timestamp":"2024-11-07T16:13:20Z","content_type":"text/html","content_length":"6097","record_id":"<urn:uuid:3dd34096-7335-4c11-b037-9069d1f82f6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00437.warc.gz"}
A 1.0 m×1.5 m double-pane window consists of two 4-mm-thick l... | Filo Question asked by Filo student A double-pane window consists of two 4-mm-thick layers of glass $(k=0.78 \mathrm{~W} / \mathrm{m} \cdot \mathrm{K}) 5-\mathrm{mm} \left(k_{\text {air }}=0.025 \mathrm{~W} / \mathrm{m} \cdot \mathrm {K}\right)$. The heat flow through the air gap is assumed to be by conduction. The inside and outside air temperatures are and , respectively, and the inside and outside heat transfer coefficients are 40 and . Determine the daily rate of heat loss through the window in steady operation and the temperature difference across the largest thermal resistance. Not the question you're searching for? + Ask your question Step 1. Determine the overall thermal resistance: To find the total thermal resistance ( ), we will consider the conduction resistance through the glass layers and the air gap, as well as the convection resistance at the inside and outside surfaces of the window. The conduction resistance of a uniform layer can be calculated as: , where is the layer thickness, is the thermal conductivity, and is the area. The convection resistance can be calculated as: , where is the heat transfer coefficient. Now, let's calculate the individual resistance of all the layers: Conduction resistance of glass layer 1 ( ): Conduction resistance of glass layer 2 ( ): Conduction resistance of the air gap ( ): Convection resistance of the inside surface ( ): Convection resistance of the outside surface ( ): Now we can calculate the total thermal resistance: . Step 2. Calculate the daily heat loss rate: Knowing the total thermal resistance, we can find the heat loss rate through the window, which is the rate of heat transfer from the inside to the outside, by using the formula: , where is the temperature difference between the inside and outside air (20°C - (-20°C) = 40°C). Now, we can find the daily heat loss rate by multiplying the heat loss rate by the seconds in a day: Step 3. Find the largest thermal resistance: Now, examine which of the resistance components calculated in Step 1 is the largest one, since it would have the greatest impact on the overall thermal resistance of the window. In this exercise, the largest resistance is . Step 4. Calculate the temperature difference across the largest thermal resistance: The temperature difference across the largest thermal resistance can be found using the following formula: Now, you can calculate the temperature difference across the air gap (largest resistance) using the previously calculated heat loss rate and the value of . Found 2 tutors discussing this question Discuss this question LIVE 15 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Physics tutors online and get step by step solution of this question. 231 students are taking LIVE classes A double-pane window consists of two 4-mm-thick layers of glass $(k=0.78 \mathrm{~W} / \mathrm{m} \cdot \mathrm{K}) 5-\mathrm{mm} \left(k_{\text {air }}=0.025 \mathrm{~W} / \mathrm{m} \cdot Question \mathrm{K}\right)$. The heat flow through the air gap is assumed to be by conduction. The inside and outside air temperatures are and , respectively, and the inside and outside heat transfer Text coefficients are 40 and . Determine the daily rate of heat loss through the window in steady operation and the temperature difference across the largest thermal resistance. Updated Nov 15, 2023 Topic All topics Subject Physics Class High School Answer Text solution:1
{"url":"https://askfilo.com/user-question-answers-physics/a-double-pane-window-consists-of-two-4-mm-thick-layers-of-36303839393532","timestamp":"2024-11-12T23:00:06Z","content_type":"text/html","content_length":"241532","record_id":"<urn:uuid:8b5d9865-f82e-4eba-b392-3758cf0bc1c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00655.warc.gz"}
5.1: Conditional Independence Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) 5.1. Conditional Independence^* The idea of stochastic (probabilistic) independence is explored in the unit Independence of Events. The concept is approached as lack of conditioning: \(P(A|B) = P(A)\). This is equivalent to the product rule \(P(AB) = P(A) P(B)\). We consider an extension to conditional independence. Examination of the independence concept reveals two important mathematical facts: • Independence of a class of non mutually exclusive events depends upon the probability measure, and not on the relationship between the events. Independence cannot be displayed on a Venn diagram, unless probabilities are indicated. For one probability measure a pair may be independent while for another probability measure the pair may not be independent. • Conditional probability is a probability measure, since it has the three defining properties and all those properties derived therefrom. This raises the question: is there a useful conditional independence—i.e., independence with respect to a conditional probability measure? In this chapter we explore that question in a fruitful way. Among the simple examples of “operational independence" in the unit on independence of events, which lead naturally to an assumption of “probabilistic independence” are the following: • If customers come into a well stocked shop at different times, each unaware of the choice made by the other, the the item purchased by one should not be affected by the choice made by the other. • If two students are taking exams in different courses, the grade one makes should not affect the grade made by the other. Example \(\PageIndex{1}\) Buying umbrellas and the weather A department store has a nice stock of umbrellas. Two customers come into the store “independently.” Let A be the event the first buys an umbrella and B the event the second buys an umbrella. Normally, we should think the events {\(A, B\)} form an independent pair. But consider the effect of weather on the purchases. Let C be the event the weather is rainy (i.e., is raining or threatening to rain). Now we should think \(P(A|C) > P(A|C^c)\) and \(P(B|C) > P(B|C^c)\). The weather has a decided effect on the likelihood of buying an umbrella. But given the fact the weather is rainy (event C has occurred), it would seem reasonable that purchase of an umbrella by one should not affect the likelihood of such a purchase by the other. Thus, it may be reasonable to suppose \(P(A|C) = P(A|BC)\) or, in another notation, \(P_C(A) = P_C(A|B)\) An examination of the sixteen equivalent conditions for independence, with probability measure \(P\) replaced by probability measure \(P_C\), shows that we have independence of the pair {\(A, B\)} with respect to the conditional probability measure \(P_C(\cdot) = P(\cdot |C)\). Thus, \(P(A|C^c) = P(A|BC^c)\). For this example, we should also expect that \(P(A|C^c = P(A|BC^c)\), so that there is independence with respect to the conditional probability measure \(P(\cdot |C^c)\). Does this make the pair {\(A, B\)} independent (with respect to the prior probability measure \(P\))? Some numerical examples make it plain that only in the most unusual cases would the pair be independent. Without calculations, we can see why this should be so. If the first customer buys an umbrella, this indicates a higher than normal likelihood that the weather is rainy, in which case the second customer is likely to buy. The condition leads to \(P(B|A) > P(B)\). Consider the following numerical case. Suppose \(P(AB|C) = P(A|C)P(B|C)\) and \(P(AB|C^c) = P(A|C^c) P(B|C^c)\) and \(P(A|C) = 0.60\), \(P(A|C^c) = 0.20\), \(P(B|C) = 0.50\), \(P(B|C^c) = 0.15\), with \(P(C) = 0.30\). \(P(A) = P(A|C) P(C) + P(A|C^c) P(C^c) = 0.3200\) \(P(B) = P(B|C) P(C) + P(B|C^c) P(C^c) = 0.2550\) \(P(AB) = P(AB|C) P(C) + P(AB|C^c) P(C^c) = P(A|C) P(B|C) P(C) + P(A|C^c) P(C^c) = 0.1110\) As a result, \(P(A) P(B) = 0.0816 \ne 0.1110 = P(AB)\) The product rule fails, so that the pair is not independent. An examination of the pattern of computation shows that independence would require very special probabilities which are not likely to be Example \(\PageIndex{2}\) Students and exams Two students take exams in different courses, Under normal circumstances, one would suppose their performances form an independent pair. Let A be the event the first student makes grade 80 or better and B be the event the second has a grade of 80 or better. The exam is given on Monday morning. It is the fall semester. There is a probability 0.30 that there was a football game on Saturday, and both students are enthusiastic fans. Let C be the event of a game on the previous Saturday. Now it is reasonable to suppose \(P(A|C) = P(A|BC)\) and \(P(A|C^c) = P(A|BC^c)\) If we know that there was a Saturday game, additional knowledge that B has occurred does not affect the lielihood that A occurs. Again, use of equivalent conditions shows that the situation may be \(P(AB|C) = P(A|C) P(B|C)\) and \(P(AB|C^c) = P(A|C^c) P(B|C^c)\) Under these conditions, we should suppose that \(P(A|C) < P(A|C^c)\) and \(P(B|C) < P(B|C^c)\). If we knew that one did poorly on the exam, this would increase the likelihoood there was a Saturday game and hence increase the likelihood that the other did poorly. The failure to be independent arises from a common chance factor that affects both. Although their performances are “operationally” independent, they are not independent in the probability sense. As a numerical example, suppose \(P(A|C) = 0.7\) \(P(A|C^c) = 0.9\) \(P(B|C) = 0.6\) \(P(B|C^c) = 0.8\) \(P(C) = 0.3\) Straightforward calculations show \(P(A) = 0.8400\), \(P(B) = 0.7400\), \(P(AB) = 0.6300\). Note that \(P(A|B) = 0.8514 > P(A)\) as would be expected. Sixteen equivalent conditions Using the facts on repeated conditioning and the equivalent conditions for independence, we may produce a similar table of equivalent conditions for conditional independence. In the hybrid notation we use for repeated conditioning, we write \(P_C(A|B) = P_C(A)\) or \(P_C(AB) = P_C(A)P_C(B)\) This translates into \(P(A|BC) = P(A|C)\) or \(P(AB|C) = P(A|C) P(B|C)\) If it is known that \(C\) has occurred, then additional knowledge of the occurrence of \(B\) does not change the likelihood of \(A\). If we write the sixteen equivalent conditions for independence in terms of the conditional probability measure \(P_C(\cdot)\), then translate as above, we have the following equivalent conditions. Sixteen equivalent conditions \(P(A|BC) = P(A|C)\) \(P(B|AC) = P(B|C)\) \(P(AB|C) = P(A|C) P(B|C)\) \(P(A|B^c C)\) = P(A|C)\) \(P(B^c|AC) = P(B^c|C)\) \(P(AB^c|C) = P(A|C) P(B^c|C)\) \(P(A^c| BC) = P(A^c|C)\) \(P(B|A^c C) = P(B|C)\) \(P(A^cB|C) = P(A^c|C) P(B|C)\) \(P(A^c|B^cC) = P(a^c|C)\) \(P(B^c|A^cC) = P(B^c|C)\) \(P(A^cB^c|C) = P(A^c|C) P(B^c|C)\) \(P(A|BC) = P(A|B^c C)\) \(P(A^c|B^cC) = P(A^c|B^c C)\) \(P(B|AC) = P(B|A^cC)\) \(P(B^c|AC) = P(B^c|A^cC)\) The patterns of conditioning in the examples above belong to this set. In a given problem, one or the other of these conditions may seem a reasonable assumption. As soon as one of these patterns is recognized, then all are equally valid assumptions. Because of its simplicity and symmetry, we take as the defining condition the product rule \(P(AB|C) = P(A|C) = P(B|C)\). A pair of events {\(A, B\)} is said to be conditionally independent, given C, designated {\(A, B\)} iff the following product rule holds: \(P(AB|C) = P(A|C) P(B|C)\). The equivalence of the four entries in the right hand column of the upper part of the table, establish The replacement rule If any of the pairs {\(A, B\)}, {\(A, B^c\)}, {\(A^c, B\)} or {\(A^c, B^c\)} is conditionally independent, given C, then so are the others. — □ This may be expressed by saying that if a pair is conditionally independent, we may replace either or both by their complements and still have a conditionally independent pair. To illustrate further the usefulness of this concept, we note some other common examples in which similar conditions hold: there is operational independence, but some chance factor which affects • Two contractors work quite independently on jobs in the same city. The operational independence suggests probabilistic independence. However, both jobs are outside and subject to delays due to bad weather. Suppose A is the event the first contracter completes his job on time and B is the event the second completes on time. If C is the event of “good” weather, then arguments similar to those in Examples 1 and 2 make it seem reasonable to suppose {\(A, B\)} ci \(|C\) and {\(A, B\)} ci \(|C^c\). Remark. In formal probability theory, an event must be sharply defined: on any trial it occurs or it does not. The event of “good weather” is not so clearly defined. Did a trace of rain or thunder in the area constitute bad weather? Did rain delay on one day in a month long project constitute bad weather? Even with this ambiguity, the pattern of probabilistic analysis may be useful. • A patient goes to a doctor. A preliminary examination leads the doctor to think there is a thirty percent chance the patient has a certain disease. The doctor orders two independent tests for conditions that indicate the disease. Are results of these tests really independent? There is certainly operational independence—the tests may be done by different laboratories, neither aware of the testing by the others. Yet, if the tests are meaningful, they must both be affected by the actual condition of the patient. Suppose D is the event the patient has the disease, A is the event the first test is positive (indicates the conditions associated with the disease) and B is the event the second test is positive. Then it would seem reasonable to suppose {\(A, B\)} ci \(|D\) and {\(A, B\)} ci \(|D^c\). In the examples considered so far, it has been reasonable to assume conditional independence, given an event C, and conditional independence, given the complementary event. But there are cases in which the effect of the conditioning event is asymmetric. We consider several examples. • Two students are working on a term paper. They work quite separately. They both need to borrow a certain book from the library. Let C be the event the library has two copies available. If A is the event the first completes on time and B the event the second is successful, then it seems reasonable to assume {\(A, B\)} ci \(|C\). However, if only one book is available, then the two conditions would not be conditionally independent. In general \(P(B|AC^c) < P(B|C^c)\), since if the first student completes on time, then he or she must have been successful in getting the book, to the detriment of the second. • If the two contractors of the example above both need material which may be in scarce supply, then successful completion would be conditionally independent, give an adequate supply, whereas they would not be conditionally independent, given a short supply. • Two students in the same course take an exam. If they prepared separately, the event of both getting good grades should be conditionally independent. If they study together, then the likelihoods of good grades would not be independent. With neither cheating or collaborating on the test itself, if one does well, the other should also. Since conditional independence is ordinary independence with respect to a conditional probability measure, it should be clear how to extend the concept to larger classes of sets. A class \(\{A_i: i \in J\}\), where \(J\) is an arbitrary index set, is conditionally independent, given event \(C\), denoted \(\{A_i: i \in J\}\) ci \(|C\), iff the product rule holds for every finite subclass of two or more. As in the case of simple independence, the replacement rule extends. The replacement rule If the class \(\{A_i: i \in J\}\) ci \(|C\), then any or all of the events A[i] may be replaced by their complements and still have a conditionally independent class. The use of independence techniques Since conditional independence is independence, we may use independence techniques in the solution of problems. We consider two types of problems: an inference problem and a conditional Bernoulli Example \(\PageIndex{3}\) Use of independence techniques Sharon is investigating a business venture which she thinks has probability 0.7 of being successful. She checks with five “independent” advisers. If the prospects are sound, the probabilities are 0.8, 0.75, 0.6, 0.9, and 0.8 that the advisers will advise her to proceed; if the venture is not sound, the respective probabilities are 0.75, 0.85, 0.7, 0.9, and 0.7 that the advice will be negative. Given the quality of the project, the advisers are independent of one another in the sense that no one is affected by the others. Of course, they are not independent, for they are all related to the soundness of the venture. We may reasonably assume conditional independence of the advice, given that the venture is sound and also given that the venture is not sound. If Sharon goes with the majority of advisers, what is the probability she will make the right decision? If the project is sound, Sharon makes the right choice if three or more of the five advisors are positive. If the venture is unsound, she makes the right choice if three or more of the five advisers are negative. Let \(H = \) the event the project is sound, \(F = \) the event three or more advisers are positive, \(G = F^c = \) the event three or more are negative, and \(E =\) the event of the correct decision. Then \(P(E) = P(FH) + P(GH^c) = P(F|H) P(H) + P(G|H^c) P(H^c)\) Let \(E_i\) be the event the \(i\)th adviser is positive. Then \(P(F|H) = \) the sum of probabilities of the form \(P(M_k|H)\), where \(M_k\) are minterms generated by the class \(\{E_i : 1 \le i \le 5\}\). Because of the assumed conditional independence, \(P(E_1 E_2^c E_3^c E_4 E_5|H) = P(E_1|H) P(E_2^c|H) P(E_3^c|H) P(E_4|H) P(E_5|H)\) with similar expressions for each \(P(M_k|H)\) and \(P(M_k|H^c)\). This means that if we want the probability of three or more successes, given \(H\), we can use ckn with the matrix of conditional probabilities. The following MATLAB solution of the investment problem is indicated. P1 = 0.01*[80 75 60 90 80]; P2 = 0.01*[75 85 70 90 70]; PH = 0.7; PE = ckn(P1,3)*PH + ckn(P2,3)*(1 - PH) PE = 0.9255 Often a Bernoulli sequence is related to some conditioning event H. In this case it is reasonable to assume the sequence \(\{E_i : 1 \le i \le n\}\) ci \(|H\) and ci \(|H^c\). We consider a simple Example \(\PageIndex{4}\) Test of a claim A race track regular claims he can pick the winning horse in any race 90 percent of the time. In order to test his claim, he picks a horse to win in each of ten races. There are five horses in each race. If he is simply guessing, the probability of success on each race is 0.2. Consider the trials to constitute a Bernoulli sequence. Let \(H\) be the event he is correct in his claim. If \(S\) is the number of successes in picking the winners in the ten races, determine \(P(H|S = k)\) for various numbers \(k\) of correct picks. Suppose it is equally likely that his claim is valid or that he is merely guessing. We assume two conditional Bernoulli trials: claim is valid: Ten trials, probability \(p = P(E_i | H) = 0.9\). Guessing at random: Ten trials, probability \(p = P(E_i|H^c) = 0.2\). Let \(S=\) number of correct picks in ten trials. Then \(\dfrac{P(H|S = k}{P(H^c|S = k)} = \dfrac{P(H)}{P(H^c)} \cdot \dfrac{P(S = k|H)}{P(S = k|H^c)}\), \(0 \le k \le 10\) Giving him the benefit of the doubt, we suppose \(P(H)/P(H^c) = 1\) and calculate the conditional odds. k = 0:10; Pk1 = ibinom(10,0.9,k); % Probability of k successes, given H Pk2 = ibinom(10,0.2,k); % Probability of k successes, given H^c OH = Pk1./Pk2; % Conditional odds-- Assumes P(H)/P(H^c) = 1 e = OH > 1; % Selects favorable odds 6 2 % Needs at least six to have creditability 7 73 % Seven would be creditable, 8 2627 % even if P(H)/P(H^c) = 0.1 Under these assumptions, he would have to pick at least seven correctly to give reasonable validation of his claim.
{"url":"https://stats.libretexts.org/Bookshelves/Probability_Theory/Applied_Probability_(Pfeiffer)/05%3A_Conditional_Independence/5.01%3A_Conditional_Independence","timestamp":"2024-11-10T11:13:41Z","content_type":"text/html","content_length":"147552","record_id":"<urn:uuid:24ef265f-70eb-4cc9-a54d-2b2fac31c78e>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00235.warc.gz"}
One-dimensional steady state conduction in Cylindrical coordinates • Thread starter Hannibal247 • Start date In summary, The problem involves determining the temperature distribution inside a cylindrical fuel with given dimensions and heat generation rate. Using a differential equation and boundary condition, the temperature at the axis is found to be finite and c1 is set to zero to satisfy the physical requirements. Im having some issues with my task. 1. Homework Statement The heat generation rate of a cylindrical fuel (D=0.2 m and 1 m long) is 160 kW. The thermal conductivity of the fuel is 100 W/mK and its surface temperature is maintained at 283 K. Determine the temperature at the axis. Homework Equations I tried to use this equation: 0=(1/r)*(d/dr)*(r*dT/dr)+q/k and I've added the volume to it --> 0=(1/r)*(d/dr)*(r*dT/dr)+q/(k*V)[/B] The Attempt at a Solution im getting this at the end: T(r)=-1/4*q/(kV)*r^2 + c[1]*ln(r) + c[2] i wanted to use the boundary condition that for r=0 ---> T=283K, but i can't type ln(0) into my calculator. I don't know how to go on. Best regards Temperature has to be finite everywhere. Therefore c[1] = 0 Henryk said: Temperature has to be finite everywhere. Therefore c[1] = 0 ok thank you very much. i understand why the temperature has to be everywhere finite, but why does it make c =0? how is the relation to that? best regards because ln(r) diverges at r = 0, that's you have to reject this solution, i.e. set c[1] to zero. It does satisfy the differential equation but it is not physical. It is actually very common practice in physics to reject solutions that satisfy mathematics but are not physically correct FAQ: One-dimensional steady state conduction in Cylindrical coordinates 1. What is one-dimensional steady state conduction in cylindrical coordinates? One-dimensional steady state conduction in cylindrical coordinates is a concept in heat transfer that describes the transfer of heat through a cylindrical object, such as a pipe or rod, where the temperature does not vary with time. 2. How is heat transferred in one-dimensional steady state conduction in cylindrical coordinates? Heat is transferred in one-dimensional steady state conduction in cylindrical coordinates through conduction, where thermal energy is transferred from a higher temperature region to a lower temperature region. 3. What is the governing equation for one-dimensional steady state conduction in cylindrical coordinates? The governing equation for one-dimensional steady state conduction in cylindrical coordinates is the cylindrical form of Fourier's law, which states that the rate of heat transfer is proportional to the temperature gradient in the direction of heat flow. 4. What are the boundary conditions for one-dimensional steady state conduction in cylindrical coordinates? The boundary conditions for one-dimensional steady state conduction in cylindrical coordinates are the temperatures at the inner and outer surfaces of the cylindrical object, as well as any heat sources or sinks within the object. 5. How is one-dimensional steady state conduction in cylindrical coordinates applied in real-world scenarios? One-dimensional steady state conduction in cylindrical coordinates is commonly used in the design and analysis of heat exchangers, pipes, and other cylindrical objects in various engineering applications, such as in the oil and gas industry, thermal power plants, and refrigeration systems.
{"url":"https://www.physicsforums.com/threads/one-dimensional-steady-state-conduction-in-cylindrical-coordinates.895850/","timestamp":"2024-11-06T20:24:15Z","content_type":"text/html","content_length":"91726","record_id":"<urn:uuid:a01c8c2f-e3d2-4bee-a003-d1b575ff7a2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00355.warc.gz"}
eginning with Kth This blog discusses how to swap the Kth node from the beginning with the Kth node from the end in the given singly linked list. One very important thing to note here is that while Kth node from the beginning with Kth node from the end we need to swap the node, not only its data. Let’s see the example given below. Suppose we are asked to swap the Kth node from the beginning with the Kth node from the end in the linked list on the top for k=2. Then we have the modified, linked list below it. Also see, Data Structures and Rabin Karp Algorithm Recommended Topic, Floyds Algorithm To swap Kth node from the beginning with Kth node from the end in a linked list, we traverse through it to find the Kth node and (total node- K +1)th node from the beginning, and then we use four temporary variables to store both these nodes and their previous nodes. Then we update the next pointer of the previous node of the first node with the pointer to the second node and the next pointer of the previous node of the second node with the pointer to the first node. We interchange the next pointers of both the nodes using another temporary variable. These operations leave us with the final linked list, which contains the asked nodes in swapped manner. • Take the linked list from user input. • Create a swap function that takes the head of the linked list the number k, and the size of a linked list as arguments, • Create four variables to store the Kth node, its previous node, (N-K+1)th node, and its previous node. • Change next pointers of previous nodes of Kth and (N-K+1)th node first with (N-K+1)th node and Kth node, respectively. • Create a temporary variable and swap the next pointers of Kth and (N-K+1)th node. • Print the linked list. //Program to swap Kth node from the beginning with Kth node from the end in a linked list. #include <bits/stdc++.h> using namespace std; //Structure for a node in the linked list. struct node int data; struct node *next; //Function to swap nodes. void swap_nodes(struct node** head, int k, int n) //When k is greater than the size of the linked list. //When kth node from the beginning and the end are the same. //Variables to store the kth node from the beginnning and its //previous node. node* node1= *head; node* temp1= nullptr; for(int i=1;i<k;i++) //Variables to store the kth node from the end and its //previous node. node* node2= *head; node* temp2= nullptr; for(int i=1;i<n-k+1;i++) //Setting the next pointers of the pevious nodes of the //swapped nodes. //Setting the next pointers of the swapped nodes. node* temp=node1->next; //We need to update head pointers when k is 1 or n. //Function to push nodes into the list. void push(struct node** head, int new_val) //Creatng a new node. struct node* new_node= new node(); //Putting the value in the node. new_node->data= new_val; //Linking the node to the list. //Shifting the head pointer to the new node. *head= new_node; //Driver function. int main() //Creating an empty list. struct node* head=nullptr; //Enter no of nodes in the node. int size; cout<<"Enter the number of nodes in the list- "; //Pushing the nodes in it. cout<<"Enter the nodes in the list- "; for(int i=0;i<size;i++) int a; //Taking the value of k as input. int k; cout<<"Enter the value of k- "; //Printing the initial linked list. cout<<"Initial linked list-"<<endl; struct node* p=head; cout<<p->data<<" "; //Function call to swap nodes in the linked list. //Printing the final linked list. cout<<"Final linked list-"<<endl; cout<<head->data<<" "; You can also try this code with Online C++ Compiler Run Code Enter the number of nodes in the list- Enter the nodes in the list- Enter the value of k- Initial linked list- Final linked list- Complexity Analysis The time complexity of the above approach is- O(N), where N is the number of nodes in the linked list. The space complexity of the above approach is- O(1).
{"url":"https://www.naukri.com/code360/library/how-to-swap-kth-node-from-the-beginning-with-kth-node-from-the-end-in-the-given-singly-linked-list","timestamp":"2024-11-10T14:04:48Z","content_type":"text/html","content_length":"409379","record_id":"<urn:uuid:06540a0e-edf4-4743-a976-c117184394e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00789.warc.gz"}
Problem L Consider a grid of $n$ columns and $m$ rows. Number the columns from $0$ to $n-1$ from left to right. Within the grid, some double-sided mirrors of the form “\” or “/” are placed. Assume the grid wraps around, that is, the left edge is connected to the right edge. When a laser is shined from the top, it will travel down the grid and might get reflected through the mirrors. For example: In the example above, the laser ray entering column $2$ from the top travels out at column $4$ at the bottom. Given an integer $n$ and a permutation $p = (p_0, p_1, \dots , p_{n-1})$ of $\{ 0, 1, \dots , n-1 \} $. Find a grid with the minimum nonnegative number of rows $m$, such that for every $i = 0, \dots , n-1$, the laser entering column $i$ from the top will travel out at column $p_ i$ at the bottom. There will be multiple test cases, terminated by EOF. For each test case: The first line contains a positive integer $n$, the second line contains $n$ space-separated integers, $p_0\ p_1\ \dots \ p_ n$. For each test case, print the number of rows $m$ on the first line. The $m$ subsequent lines should contain the mirror configurations of each row, using character “\”, “/” or “.” (without quotes) to represent each cell. For example, the image above corresponding to the following output: If there are multiple possible configurations, output any of them. Each file has at least one test case. The sum of $n$ for each file does not exceed $10^4$. The output file size does not exceed $5 \cdot 10^6$ bytes. Sample Input 1 Sample Output 1 5 \\\\\ 8 ././././ 1 0 3 2 5 4 7 6 \\\\\\\\
{"url":"https://nus.kattis.com/courses/CS3233/CS3233_S2_AY2324/assignments/io3nhf/problems/lasers","timestamp":"2024-11-04T11:59:53Z","content_type":"text/html","content_length":"30002","record_id":"<urn:uuid:2e96ade5-0bbd-4062-bc50-55680b90aa10>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00733.warc.gz"}
The quantum muse: harnessing randomness for artistic creation Introducing quantum computing in generative art is (theoretically) easier than you think Back at the end of December 2023 I ran into “GENUARY24”, which is like an “Advent of Code” but for generative art. That challenge intrigued me, since I thought it could be a nice way to experiment some new solutions and to go deep into art themes, which is the kind of heterogeneity I like. I quickly skimmed along the challenge prompts and finally decided to spend some time experimenting with algorithms to generate prompt n.5: “In the style of Vera Molnár (1924–2023)” Prompt #5: “In the style of Vera Molnàr” Vera Molnàr used computers to generate her artistic works: she was a pioneer in generative art and thought that randomness could replace artist’s intuition. All her works are “computed” starting from very basic forms (lines, squares, colors) and all are influenced by some kind of “noise”. Classical computers can produce pseudo-random numbers; quantum computers can achieve true randomness instead, so I liked the idea of using quantum computing to generate “Vera Molnàr-like” pictures. In “(Des)Ordres” artworks (1974), the artist creates a series of concentric squares, in random sizes and colors. I first wrote a classical algorithm, just to have a clear idea about where randomness kicks in and the size of the random number to be cast: import random from PIL import Image, ImageDraw colors = ["cyan","green","red","violet","blue","orange","pink"] size = 150 n_tiles = 8 img = Image.new("RGB", (size*n_tiles,size*n_tiles), color="#EDEBDF") img1 = ImageDraw.Draw(img) def draw_tile(img, size, colors, position): t_x, t_y = position square_size = random.randint(10,size) c = random.randint(0,len(colors)) for i in range(0,5): square_size = random.randint(10,(size//2)-10) shape = [(t_x+square_size, t_y+square_size), (t_x+size - square_size, t_y+size - square_size)] img.rectangle(shape, outline=colors[(c+i) % len(colors)],width=2) for i in range(n_tiles): for j in range(n_tiles): A circuit to cast random numbers It is possible to obtain random numbers by using Hadamard gates to induce a superposition on qubits. Since the tile size is 150 here, I first wrote a circuit to cast a number between 0 and 2^n, where n is the smallest exponent required to exceed 150. n is also the number of qubits we need to measure in order to obtain that random number and the number of classical bits to collapse the measures on. Then I tried to generalize the circuit to cast numbers between 0 and a generic number m, calculating the exponent as part of the process. In the following figure, for clarity’s sake, I show how to develop a circuit to generate numbers between 0 and 2³. Here, Hadamard equally distributes probabilities among all the numbers: Applying Hadamard gates to 3 qubits Qiskit implementation I developed the circuit with Qiskit as in the following snippet of code: def build_randint_circuit(n_max): # build a circuit with n qubits, in superposition, to be measured # this will generate a random number between [0,n_max] n_bits = math.ceil(math.log(n_max, 2)) qreg_q = QuantumRegister(n_bits, 'q') creg_c = ClassicalRegister(n_bits, 'c') circuit = QuantumCircuit(qreg_q, creg_c) for i in range(0, n_bits): circuit.measure(qreg_q[i], creg_c[i]) return circuit I just calculated the number of bits needed to encode the maximum number to be cast, initialized quantum and classical registers and used a for loop to apply a Hadamard gate to every single qubit in the circuit and measure it. Then I just initialized and ran a simulation to obtain random numbers: def qrandint_generator(n_min, n_max): simulator = AerSimulator() circuit = build_randint_circuit(n_max=n_max) # Run and peek into system to get a random number res = +math.inf while res < n_min or res > n_max: result = simulator.run(circuit, memory=True, shots=1).result() data = result.get_memory() res = int(data[0],2) return res I chose to manually discard all the numbers not in the range [n_min, n_max], but it is also possible to develop a circuit to discard unnecessary numbers directly on the circuit. I think a number of considerations should be made here: • I’m pretty sure It is not too good to “run a job” every time a random number is needed, so it’s probably better to run the simulation to obtain a list of random numbers to be used later, in order to reduce the number of the job to be run once the algorithm is going to be launched on a real quantum device. • It is known that quantum information just decays with running time/number of qubits and that is to be taken in account when dealing with a real quantum device. Simulation is ok to “theory-proof” an algorithm but then a number of factors kicks in on real devices (e.g: noise) Now that we have the “quantum random number generator”, we can substitute all the random parts in the original algorithm: def draw_tile(img, size, colors, position): t_x, t_y = position square_size = random.randint(10,size) # color_idx = random.randint(0,len(colors)) color_idx = qrandint_generator(0,len(colors)-1) for i in range(0,5): square_size = qrandint_generator(10,(size//2)-10) # square_size = random.randint(10,(size//2)-10) shape = [(t_x+square_size, t_y+square_size), (t_x+size - square_size, t_y+size - square_size)] img.rectangle(shape, outline=colors[(color_idx+i) % len(colors)],width=2) for i in range(n_tiles): for j in range(n_tiles): To obtain the following output image: A “Vera Molnàr”-like “quantum powered” generated artwork
{"url":"https://1littleendian.medium.com/the-quantum-muse-harnessing-randomness-for-artistic-creation-b2fa89305964?source=user_profile_page---------2-------------4285b6c04115---------------","timestamp":"2024-11-07T02:18:22Z","content_type":"text/html","content_length":"116794","record_id":"<urn:uuid:0a350780-2b68-483d-add2-326fdfb7ce00>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00643.warc.gz"}
ELM helps your child develop a deeper understanding of the basic mathematical concepts for grade 1 as covered in the Quebec Education Program, including number concept, arithmetic, geometry, probability and statistics, data organization, and strategic thinking. ELM also meets the standards set by the National Council of Teachers of Mathematics (NCTM) in the United States and Canada. ELM is designed to be engaging and to provide non-threatening, effective feedback on all activities. As your child completed these activities, he or she will earn animal "friends" in the form of online cards. Themes – Ideas – Steps ELM is organized into Themes (an overarching branch of mathematics), which are then divided into different Ideas (mathematical concepts). Each Idea has a certain number of Steps that use carefully sequenced activities to build children’s understanding of the concept. Click on any of the icons below to learn more about a Theme and its associated Ideas and Steps. Number Concept Children are provided with repetitive activities to help them to develop fluency in recognizing, comparing, adding, subtracting, and decomposing numbers. Emphasis is placed on always being able to return to the metaphor of using objects to express an operation with numbers. Most of the steps in this theme focus on the numbers 1-9. This allows children the opportunity to learn the concept in a way where they can verify the answer for themselves by using their fingers. The Count idea has five steps. These steps help children become familiar with the basic numerals (1 through 9) and the quantity each represents. Steps are structured to move from concrete to abstract means of counting a set of Count helps children gain fluency at subitizing, an important skill in which they can instantly recognize the number of objects in a set of objects presented without any conscious counting. The Compare idea has four steps. Children are asked to count and then compare two sets of objects: bears and hockey sticks. They then determine if the sets are equal or whether one integer is larger or smaller than the other. Compare exposes children to both natural language and mathematical symbols that express and compare the cardinality of two sets. The Add idea has four steps. Children are asked to add two sets of animals. They eventually learn that the resulting number is the “sum” or “total,” which can be represented by an equation. Children then write these equations by placing the numbers and symbols in the appropriate order. The steps in Add reuse the counting strategies learned in the Count steps. This includes clicking on objects to count them and the use of counters. The Subtract idea has five steps. Subtraction is introduced as the process of taking away. Children first see all animals in one set (the barn). They have to count how many are in the set and note the empty second set (the pasture). Then students subtract by sending animals from the barn to the pasture. Eventually the create equations to show the subtraction. The Decompose idea has four steps. Children practice decomposing, or partitioning, integers, using a set of beavers. They must separate the beavers into two different sets by deciding which beavers are in the grass or water region. In the final step, children are asked to complete a table to demonstrate their understanding of the different possibilities for decomposing a specific number. Place Value The Place Value idea has nine steps. This idea helps children realize that numbers beyond 9 but less than 100 have two ‘parts’: there is a number of ‘10s’ combined with a number of ‘1s’. The goal is to have students understand that one ‘ten’ is equal to ten ‘ones’. In order to facilitate students’ grasping the notion of place value, ELM’s steps associate tens to trees. When ten units - represented as pinecones - are grouped, they become a tree. This theme asks students to categorize and distinguish two-dimensional shapes. ELM’s goal is to develop children’s fluency in recognizing shapes and foster their own criteria for correctly identifying shapes. To this end, ELM does not list a shape’s attributes. Instead, children are provided with varying prototypes and must define their own criteria for what makes a shape a shape. Children also practice identify shapes that have been rotated, which makes them more difficult to recognize. Identify Shapes The Identify Shapes idea has three steps. Children are first asked to sort an array of two-dimensional shapes and open figures. Then, they are asked to identify and sort a specific shape (circle, square, rhombi, rectangle, or triangle). Finally, they are again asked to sort a specific shape, but this time some of the objects will be rotated, thus increasing the activity’s difficulty. This theme helps children develop skills in recognizing the changing attributes in patterns, especially determining the rule for a repeating pattern. Children typically express their understanding by recognizing, continuing, completing and creating patterns. The ELM activities help children identify regularity and building sequences. Translate Patterns The Translate Patterns has one step. In the first part, children are asked to is identify the repeating portion – the core, or unit of repeat. Then, children extend this understanding of the core structure by abstracting the pattern, meaning they use a new set of objects to recreate the pattern. This theme helps children organize and interpret data using graphs and tables. To support children’s development of these skills, the theme presents situations where the answer is not instantly obvious. In order to make sense of the situation, the student is required to organize data according to common attributes and represent a tally using graphs and tables. Bar Graphs and Tables The Bar Graph and Tables idea has two steps. Children are presented with a pile of objects, consisting of 2-4 categories with a total of 10-15 objects. Children must identify the categories and label a bar graph so each category is represented. The student is then prompted to complete the bar graph in relation to the given pile of objects. This second step is the same as the first one but also asks children to complete a table at the end, using the information in their bar graph. This demonstrates that both the bar graph and table can be used to represent their data. Number Line This theme introduces a number line, which provides children a concrete method for counting, comparing, and ordering numbers. Children see that counting up is related to increases in quantity and counting down is related to decreases in quantity. As well, children gain a new sense of addition and subtraction by visualizing them as displacement along the number line. Number as Displacement The Number as Displacement idea has one step. This Idea provides children with a situational problem where three numbers are related. The starting point (a), the displacement (b), and the ending point (c) are presented by the equation “a + b = c” or “a – b = c”. Two out of these three are provided, and the child must find the third. In the process of using the number line, students may count/add by 1s, 5s, and 10s. They gain proficiency in composing/decomposing numbers while gaining fluency in addition/subtraction with numbers up to 100.
{"url":"https://literacy.concordia.ca/resources/elm/parent/en/structure.php","timestamp":"2024-11-06T12:40:02Z","content_type":"text/html","content_length":"17904","record_id":"<urn:uuid:ea65b48b-7876-4b2b-99c5-9b02908ac636>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00141.warc.gz"}
Items where Subject is "Specific Sciences > Physics > Quantum Mechanics" • Subject Areas (1823) Jump to: Number of items at this level: 1823. Aaronson, Scott (2004) Is Quantum Mechanics An Island in Theoryspace? [Preprint] Aaronson, Scott (2011) Why Philosophers Should Care About Computational Complexity. [Preprint] Accorinti, Hernán Lucas and Fortin, Sebastian and Herrera, Manuel and Jaimes Arriaga, Jesús Alberto (2018) The case of phonons: explanatory or ontological priority. [Preprint] Acuña, Pablo (2017) Inertial Trajectories in de Broglie-Bohm Theory: an unexpected problem. International Studies in the Philosophy of Science, 30. pp. 201-230. Acuña, Pablo (2021) Must Hidden Variables Theories Be Contextual? Kochen & Specker meet von Neumann and Gleason. European Journal for Philosophy of Science, 41. ISSN 1879-4912 Adlam, Emily (2023) Disappearing Without a Trace: The Arrows of Time in Kent's Solution to the Lorentzian Quantum Reality Problem. [Preprint] Adlam, Emily (2022) Do We Have Any Viable Solution to the Measurement Problem? [Preprint] Adlam, Emily (2022) Does Science need Intersubjectivity? The Problem of Confirmation in Orthodox Interpretations of Quantum Mechanics. [Preprint] Adlam, Emily (2022) Is There Causation in Fundamental Physics? New Insights from Process Matrices and Quantum Causal Modelling. [Preprint] Adlam, Emily and Linnemann, Niels and Read, James (2022) Constructive Axiomatics in Spacetime Physics Part III: A Constructive Axiomatic Approach to Quantum Spacetime. [Preprint] Adlam, Emily and Rovelli, Carlo (2022) Information is Physical: Cross-Perspective Links in Relational Quantum Mechanics. [Preprint] Afriat, Alexander (2002) Altering the remote past. [Preprint] Afriat, Alexander (2004) Duhem, Quine and the other dogma. [Preprint] Afriat, Alexander (2008) Duhem, Quine and the other dogma. [Preprint] Afriat, Alexander (2004) If Bertlmann had three feet. [Preprint] Afriat, Alexander (2012) Logic of gauge. [Preprint] Afriat, Alexander (2012) Topology, holes and sources. International journal of theoretical physics, 52 (3). pp. 1007-1012. Afriat, Alexander (2018) The Aharonov-Bohm debate in 3D. [Preprint] Afriat, Alexander (2018) Is the world made of loops? [Preprint] Ahmed, Arif and Caulton, Adam (2014) Causal Decision Theory and EPR correlations. [Preprint] Al-Khalili, Jim and Chen, Eddy Keming (2024) The Decoherent Arrow of Time and the Entanglement Past Hypothesis. Foundations of Physics, 54 (49). ISSN 0015-9018 Albert, David Z. (2019) How to Teach Quantum Mechanics. [Preprint] Alemañ-Berenguer, Rafael-Andrés (2008) Quantum Mechanics versus Special Relativity: A forgotten conflict. UNSPECIFIED. Alfonseca, Manuel and Soler Gil, Francisco José (2014) About the Infinite Repetition of Histories in Space. THEORIA. An International Journal for Theory, History and Foundations of Science, 29 (3). pp. 361-373. ISSN 2171-679X Allori, Valia (2015) How to Make Sense of Quantum Mechanics(and More):Fundamental Physical Theories and Primitive Ontology. [Preprint] Allori, Valia (2012) On the Metaphysics of Quantum Mechanics. [Preprint] Allori, Valia (2012) Primitive Ontology and the Structure of Fundamental Physical Theories. [Preprint] Allori, Valia (2015) Primitive Ontology in a Nutshell. International Journal of Quantum Foundations, 1 (3). pp. 107-122. Allori, Valia (2015) Quantum Mechanics and Paradigm Shifts. Topoi. Allori, Valia and Duerr, Detlef and Goldstein, Sheldon and Zanghi, Nino (2002) Seven Steps Towards the Classical World. Journal of Optics B , 4. pp. 482-488. Allori, Valia and Goldstein, Sheldon and Tumulka, Roderich and Zanghi, Nino (2011) Many-Worlds and Schrodinger’s First Quantum Theory. The British Journal for the Philosophy of Science, 62 (1). pp. Allori, Valia and Goldstein, Sheldon and Tumulka, Roderich and Zanghi, Nino (2006) On the Common Structure of Bohmian Mechanics and the Ghirardi-Rimini-Weber Theory. [Preprint] Allori, Valia and Goldstein, Sheldon and tumulka, roderich and Zanghi, Nino (2012) Predictions and Primitive Ontology in Quantum Foundations: A Study of Examples. The British Journal for the Philosophy of Science-forthcoming. Allori, Valia and Zanghi, Nino (2008) On the Classical Limit of Quantum Mechanics. Foundations of Physics, 10.1007/s10701-008-9259-4 (2008). Allori, Valia and Zanghi, Nino (2004) What is Bohmian Mechanics. International Journal of Theoretical Physics , 43. pp. 1743-1755. Allori, Valia (2019) Contemporary Echoes of the World Soul: Quantum Mechanics and Consciousness. [Preprint] Allori, Valia (2018) Free Will in a Quantum World? [Preprint] Allori, Valia (2021) Fundamental Objects without Fundamental Properties: A Thin-object-orientated Metaphysics Grounded on Structure. [Preprint] Allori, Valia (2024) “Hidden Variables and Bell’s Theorem: Local or Not?”. [Preprint] Allori, Valia (2023) Many-Worlds: Why Is It Not the Consensus? [Preprint] Allori, Valia (2017) A New Argument for the Nomological Interpretation of the Wave Function: The Galilean Group and the Classical Limit of Nonrelativistic Quantum Mechanics. [Preprint] Allori, Valia (2022) On The Galilean Invariance of the Pilot-Wave Theory. [Preprint] Allori, Valia (2022) The Paradox of Deterministic Probabilities. [Preprint] Allori, Valia (2016) Primitive Ontology and the Classical World. R. Kastner, J. Jeknic-Dugic, G. Jaroszkiewicz (eds.), Quantum Structural Studies: Classical Emergence from the Quantum Level.. Allori, Valia (2019) Quantum Mechanics, Time and Ontology. [Preprint] Allori, Valia (2024) Quantum Ontology and Intuitions. [Preprint] Allori, Valia (2023) “Relativistic Pilot-Wave Theories as the Rational Completion of Quantum Mechanics and Relativity.”. [Preprint] Allori, Valia (2017) Scientific Realism and Primitive Ontology Or: The Pessimistic Induction and the Nature of the Wave Function. [Preprint] Allori, Valia (2016) Scientific Realism and Primitive Ontology. In: UNSPECIFIED. Allori, Valia (2019) Scientific Realism without the Wave-Function: An Example of Naturalized Quantum Metaphysics. [Preprint] Allori, Valia (2017) Space, Time, and (how they) Matter: a Discussion about some Metaphysical Insights Provided by our Best Fundamental Physical Theories. [Preprint] Allori, Valia (2021) Spontaneous Localization Theories. [Preprint] Allori, Valia (2020) Spontaneous Localization Theories with a Particle Ontology. [Preprint] Allori, Valia (2021) Wave-Functionalism. Synthese. Allori, Valia (2023) What if We Lived in the Best of All Possible (Quantum) Worlds? [Preprint] Allori, Valia (2022) What is it Like to be a Relativistic GRW Theory? Or: Quantum Mechanics and Relativity, Still in Conflict After All These Years. [Preprint] Allori, Valia (2023) Who’s Afraid of the Measurement Problem? On the Incompatibility between Scientific Realism and Quantum Mechanics. [Preprint] Allori, Valia (2020) Why Scientific Realists Should Reject the Second Dogma of Quantum Mechanics. [Preprint] Anastopoulos, Charis (2020) Mind-body interaction and modern physics. [Preprint] Andreoletti, Giacomo and Vervoort, Louis (2022) Superdeterminism: a reappraisal. Synthese, 200 (5). ISSN 1573-0964 Arageorgis, Aristidis and Earman, John (2017) Bohmian Mechanics: A Panacea for What Ails Quantum Mechanics, or a Different and Problematic Theory? [Preprint] Ardourel, Vincent and Guay, Alexandre (2017) Why is the transference theory of causation insuffcient? The challenge of the Aharonov-Bohm effect. [Preprint] Arenhart, Jonas R. B. and Krause, Decio (2014) Potentiality and Contradiction in Quantum Mechanics. [Preprint] Arenhart, Jonas R. B. and Krause, Décio (2014) Contradiction, Quantum Mechanics, and the Square of Opposition. [Preprint] Arenhart, Jonas R. B. and Krause, Décio (2014) Oppositions and quantum mechanics. [Preprint] Arenhart, Jonas R.B. and Krause, Décio (2012) Why non-individuality? A discussion on individuality, identity, and cardinality in the quantum context. [Preprint] Arenhart, Jonas R. B. (2017) Does weak discernibility determine metaphysics? THEORIA. An International Journal for Theory, History and Foundations of Science, 32 (1). pp. 109-125. ISSN 2171-679X Arenhart, Jonas R. B. and Arroyo, Raoni Wohnrath (2023) The roads to non-individuals (and how not to read their maps). [Preprint] Arenhart, Jonas R. B. and Bueno, Otávio and Krause, Décio (2018) Making Sense of Non-Individuals in Quantum Mechanics. [Preprint] Arroyo, Raoni Wohnrath (2023) Making New Tools From the Toolbox of Metaphysics. [Preprint] Arroyo, Raoni Wohnrath and Arenhart, Jonas R. B. (2023) The Powers of Quantum Mechanics: A Metametaphysical Discussion of the “Logos Approach”. [Preprint] Arroyo, Raoni Wohnrath and Arenhart, Jonas R. B. (2024) Quantum ontology de-naturalized: What we can't learn from quantum mechanics. THEORIA. An International Journal for Theory, History and Foundations of Science. ISSN 2171-679X Arroyo, Raoni Wohnrath and Arenhart, Jonas R. B. (2022) Whence deep realism for Everettian quantum mechanics? [Preprint] Arroyo, Raoni Wohnrath and Arenhart, Jonas R. B. and Krause, Décio (2023) The elimination of metaphysics through the epistemological analysis: lessons (un)learned from metaphysical underdetermination. [Preprint] Arroyo, Raoni Wohnrath and Morganti, Matteo (2024) The game of metaphysics: towards a fictionalist (meta)metaphysics of science. [Preprint] Arroyo, Raoni Wohnrath and NUNES FILHO, LAURO DE MATOS and MOREIRA DOS SANTOS, FREDERIK (2024) Towards a process-based approach to consciousness and collapse in quantum mechanics. Manuscrito, 47 (1). ISSN 0100-6045 Arroyo, Raoni Wohnrath and de Ronde, Christian (2022) Realistic From Far But Far From Realism: Withering Scientific Realism in the Quantum Case. [Preprint] Atkinson, David (2006) Losing energy in classical, relativistic and quantum mechanics. [Preprint] Atmanspacher, Harald and Primas, Hans (2002) Epistemic and Ontic Quantum Realities. [Preprint] Avin, Shahar (2010) A Philosopher's View of the Epistemic Interpretation of Quantum Mechanics. [Preprint] Avner, Ash and Justin, Clarke-Doane (2023) Intuition and Observation. [Preprint] Bacciagaluppi, Guido (2010) Collapse Theories as Beable Theories. Manuscrito, 33 (1). pp. 19-54. Bacciagaluppi, Guido A Conceptual Introduction to Nelson’s Mechanics. World Scientific, Singapore. Bacciagaluppi, Guido (2013) A Critic Looks at QBism. [Preprint] Bacciagaluppi, Guido (2003) Derivation of the Symmetry Postulates for Identical Particles from Pilot-Wave Theories. [Preprint] Bacciagaluppi, Guido (2014) Did Bohr Understand EPR? [Preprint] Bacciagaluppi, Guido (2015) Einstein, Bohm, and Leggett-Garg. [Preprint] Bacciagaluppi, Guido (2013) The Everett Interpretation of Quantum Mechanics. Collected Works 1955-1980 with Commentary. Hugh Everett III, edited by Jeffrey A. Barrett & Peter Byrne. Princeton: Princeton University Press (xii+389 pp.). HOPOS, 3 (2). pp. 348-352. Bacciagaluppi, Guido (1997) Gauge- and Galilei-invariant Geometric Phases. [Preprint] Bacciagaluppi, Guido (2012) Insolubility Theorems and EPR Argument. [Preprint] Bacciagaluppi, Guido (2013) Insolubility from No-Signalling. [Preprint] Bacciagaluppi, Guido (2007) Is logic empirical? [Preprint] Bacciagaluppi, Guido (2011) Measurement and Classical Regime in Quantum Mechanics. [Preprint] Bacciagaluppi, Guido (2012) Non-equilibrium in Stochastic Mechanics. Journal of Physics: Conference Series, 361. 012017/1-012017/12. Bacciagaluppi, Guido (2006) Probability, Arrow of Time and Decoherence. [Preprint] Bacciagaluppi, Guido (2014) Quantum Probability: An Introduction. [Preprint] Bacciagaluppi, Guido (2001) Remarks on Space-time and Locality in Everett's Interpretation. [Preprint] Bacciagaluppi, Guido Worlds Galore? n/a. Bacciagaluppi, Guido and Crull, Elise (2009) Heisenberg (and Schroedinger, and Pauli) on Hidden Variables. [Preprint] Bacciagaluppi, Guido and Ismael, Jenann (2014) Essay Review: David Wallace, The Emergent Multiverse. [Preprint] Bacciagaluppi, Guido (2016) An Everett Perspective on Bohr and EPR. [Preprint] Bacciagaluppi, Guido (2020) Physics and Beyond: Essay review of Kay Herrmann (ed.): Grete Henry-Hermann: Philosophie – Mathematik – Quantenmechanik. Springer: Wiesbaden, 2019, xv + 663 pp. [Preprint] Bacciagaluppi, Guido (2023) A Proof of Specker's Principle. [Preprint] Bacciagaluppi, Guido (2020) Quantum Mechanics, Emergence, and Decisions. [Preprint] Bacciagaluppi, Guido (2020) Translation of three short papers by Grete Hermann (with introduction). [Preprint] Bacciagaluppi, Guido (2019) Unscrambling Subjective and Epistemic Probabilities. [Preprint] Bacciagaluppi, Guido (2021) The statistical interpretation: Born, Heisenberg and von Neumann, 1926-27. [Preprint] Bacciagaluppi, Guido and Crull, Elise and Maroney, O J E (2017) Jordan's Derivation of Blackbody Fluctuations. [Preprint] Bacciagaluppi, Guido and Hermens, Ronnie (2020) Reverse Bell's Theorem and Relativity of Pre- and Postselection. [Preprint] Bacelar Valente, Mario (2020) THE DIRAC EQUATION AND ITS INTERPRETATIONS. [Preprint] Bahreini, Jafar and Razmi, Habibollah and Monfared, Mahdi (2020) Is determinism completely rejected in the standard Quantum Mechanics? [Preprint] Baker, David (2006) Measurement Outcomes and Probability in Everettian Quantum Mechanics. [Preprint] Baker, David John (2012) Identity, Superselection Theory and the Statistical Properties of Quantum Fields. [Preprint] Baker, David John (2015) Review of David Albert, After Physics. Notre Dame Philosophical Reviews. Baker, David John (2013) Review of Frank Arntzenius, Space, Time, and Stuff. [Preprint] Baker, David John and Halvorson, Hans and Swanson, Noel (2013) The Conventionality of Parastatistics. In: UNSPECIFIED. Baker, David John (2022) Free Will in the Many-Worlds Interpretation of Quantum Mechanics. [Preprint] Baker, David John (2018) Interpreting Supersymmetry. [Preprint] Barandes, Jacob A. (2019) Can Magnetic Forces Do Work? [Preprint] Barandes, Jacob A. (2021) Gauge Invariance for Classical Massless Particles with Spin. Foundations of Physics, 51 (1). p. 7. Barandes, Jacob A. (2019) Manifestly Covariant Lagrangians, Classical Particles with Spin, and the Origins of Gauge Invariance. [Preprint] Barandes, Jacob A. (2024) New Prospects for a Causally Local Formulation of Quantum Theory. [Preprint] Barandes, Jacob A. (2021) On Magnetic Forces and Work. Foundations of Physics, 51 (4). p. 79. Barandes, Jacob A. (2023) The Stochastic-Quantum Correspondence. [Preprint] Barandes, Jacob A. (2023) The Stochastic-Quantum Theorem. [Preprint] Barandes, Jacob A. and Kagan, David (2020) Measurement and Quantum Dynamics in the Minimal Modal Interpretation of Quantum Theory. Foundations of Physics, 50 (10). pp. 1189-1218. Barandes, Jacob A. and Kagan, David (2014) The Minimal Modal Interpretation of Quantum Theory. [Preprint] Barandes, Jacob A. and Kagan, David (2021) Quantum Conditional Probabilities and New Measures of Quantum Information. [Preprint] Barandes, Jacob A. and Kagan, David (2014) A Synopsis of the Minimal Modal Interpretation of Quantum Theory. [Preprint] Barbar, Ahmed (2017) The Kochen-Specker and Conway-Kochen Theorems. [Preprint] Barnum, Howard (1990) Dieks' Realistic Interpretation of Quantum Mechanics: A Comment. UNSPECIFIED. (Unpublished) Barnum, Howard (1990) The Many-Worlds Interpretation of Quantum Mechanics: Psychological versus Physical Bases for the Multiplicity of "Worlds". UNSPECIFIED. (Unpublished) Barrett, Jeffrey A. (2000) On the Nature of Measurement Records in Relativistic Quantum Field Theory. [Preprint] Barrett, Jeffrey A. (2016) Quantum Worlds. [Preprint] Barrett, Jeffrey A. (2004) Relativistic Quantum Mechanics through Frame-Dependent Constructions. In: UNSPECIFIED. (In Press) Barrett, Jeffrey A. (2016) Typicality in Pure Wave Mechanics. [Preprint] Barrett, Jeffrey A. (2011) Wigner's Friend and Bell's Field Beables. [Preprint] Barrett, Jeffrey A. (2008) Approximate Truth and Descriptive Nesting. [Preprint] Barrett, Jeffrey A. (2021) Situated Observation in Bohmian Mechanics. [Preprint] Barrett, Jeffrey A. and Chen, Eddy Keming (2023) Algorithmic Randomness and Probabilistic Laws. [Preprint] Barrett, Jeffrey A. and Goldbring, Isaac (2021) Everettian Mechanics with Hyperfinitely Many Worlds. [Preprint] Barrett, Jeffrey A. and Goldbring, Isaac (2023) A Nonstandard Formulation of Bohmian Mechanics. [Preprint] Barrett, Jeffrey A. and Huttegger, Simon (2020) Quantum Randomness and Underdetermination. [Preprint] Bartlett, Steven James (2021) A Primer on Bartlett's CRITIQUE OF IMPURE REASON. Willamette University, Salem, OR. Barzegar, Ali and Margoni, Emilia and Oriti, Daniele (2023) A minimalist account of agency in physics. [Preprint] Barzegar, Ali and Oriti, Daniele (2022) Epistemic-Pragmatist Interpretations of Quantum Mechanics: A Comparative Assessment. [Preprint] Bassi, Angelo and Dorato, Mauro and Ulbricht, Hendrik (2023) Collapse Models: A Theoretical, Experimental and Philosophical Review. Entropy, 25. pp. 1-16. Baumann, Veronika and Del Santo, Flavio (2022) Many Worlds are irrelevant for the problem of the arrow of time. [Preprint] Belnap, Nuel (2002) EPR-like ``funny business'' in the theory of branching space-times. [Preprint] Belnap, Nuel (2002) No-common-cause EPR-like funny business in branching space-times. [Preprint] Belot, Gordon (2011) Quantum States for Primitive Ontologists: A Case Study. [Preprint] Belot, Gordon and Earman, John and Healey, Richard and Maudlin, Tim and Nounou, Antigone and Struyve, Ward (2009) Synopsis and Discussion: Philosophy of Gauge Theory. [Preprint] Ben-Yami, Hanoch (2023) The Structure of Space and Time, and the Indeterminacy of Classical Physics. UNSPECIFIED. BenDaniel, David (2010) Constructibility in Quantum Mechanics. In: UNSPECIFIED. BenDaniel, David Jacob (2003) Definability and a Nonlinear Sigma Model. [Preprint] (Unpublished) Bene, Gyula and Dieks, Dennis (2001) A perspectival version of the modal interpretation of quantum mechanics and the origin of macroscopic behavior. UNSPECIFIED. (In Press) Berghofer, Philipp (2024) Defending the Quantum Reconstruction Program. European Journal for Philosophy of Science, 14. ISSN 1879-4912 Berghofer, Philipp (2024) Quantum Probabilities Are Objective Degrees of Epistemic Justification. [Preprint] Berghofer, Philipp (2024) Quantum Reconstructions as Stepping Stones toward ψ-Doxastic Interpretations? Foundations of Physics, 54. ISSN 0015-9018 Berghofer, Philipp (2020) Scientific Perspectivism in the Phenomenological Tradition. [Preprint] Berjon, Javier and Okon, Elias and Sudarsky, Daniel (2020) Debunking prevailing explanations for the emergence of classicality in cosmology. [Preprint] Berkovitz, Joseph (2011) The World According to De Finetti. [Preprint] Berkovitz, Joseph and Hemmo, Meir (2003) How to Reconcile Modal Interpretations of Quantum Mechanics with Relativity. In: UNSPECIFIED. (In Press) Berkovitz, Joseph and Hemmo, Meir (2004) Modal Interpretations of Quantum Mechanics and Relativity: A Reconsideration. [Preprint] Berkovitz, Joseph (2016) On time, causation and explanation in the causally symmetric Bohmian model of quantum mechanics. [Preprint] Bevers, Brett (2011) Can Many-Worlds Survive a Quantum Doomsday. [Preprint] Bevers, Brett (2011) Everett's "Many-Worlds" Proposal. Studies in History and Philosophy of Modern Physics, 42 (1). pp. 3-12. ISSN 1355-2198 Bhatta, Varun (2024) The Controversy about Interference of Photons. Studies in History and Philosophy of Science, 106. pp. 146-154. ISSN 00393681 Bigaj, Tomasz (2004) Counterfactual Logic and the Hardy Paradox: Remarks on Shimony and Stein's Criticism of Stapp's Proof. [Preprint] Bigaj, Tomasz (2002) Counterfactuals and non-locality of quantum mechanics. [Preprint] Bigaj, Tomasz and Vassallo, Antonio (2020) How Humean is Bohumianism? [Preprint] Binder, Bernd (2002) Berry Phase and Fine Structure. [Preprint] Binder, Bernd (2002) Geometric Phase Locked in Fine Structure. [Preprint] Binder, Bernd (2002) Iterative Interplay between Aharonov-Bohm Deficit Angle and Berry Phase. [Preprint] Binder, Bernd (2002) Josephson Effect, Bäcklund Transformations, and Fine Structure Coupling. [Preprint] Binder, Bernd (2003) A Natural Mass Unit Hidden in the Planck Action Quantum. [Preprint] Binder, Bernd (2003) Natural Nonlinear Quantum Units and Human Artificial Linear System of Units. [Preprint] Binder, Bernd (2002) Spacetime Memory: Phase-Locked Geometric Phases. [Preprint] Bishop, Robert (2004) Patching Physics and Chemistry Together. In: UNSPECIFIED. (Unpublished) Bishop, Robert C (2002) Quantum Time Arrows, Semigroups and Time-Reversal in Scattering. [Preprint] Bishop, Robert C. (2002) The Arrow of Time in Rigged Hilbert Space Quantum Mechanics. [Preprint] Bitbol, Michel (2008) CONSCIOUSNESS, SITUATIONS, AND THE MEASUREMENT PROBLEM OF QUANTUM MECHANICS. [Preprint] Bitbol, Michel (2002) Form and actuality. M. Mugur-Schächter & A. Van der Merwe (eds.), Quantum mechanics, mathematics, cognition and action. pp. 389-430. ISSN 978-0-306-48144-4 Bitbol, Michel (2008) Is Consciousness primary? UNSPECIFIED. Bitbol, Michel (2001) Non-Representationalist Theories of Knowledge and Quantum Mechanics. UNSPECIFIED. Bitbol, Michel (2007) Ontology, Matter and Emergence. UNSPECIFIED. Bitbol, Michel (2007) Physical Relations or Functional Relations ? A non-metaphysical construal of Rovelli’s Relational Quantum Mechanics. [Preprint] Bitbol, Michel (2014) Quantum Mechanics as Generalised Theory of Probabilities. Collapse, 8. pp. 87-121. ISSN 978-0-9567750-2-3 Bitbol, Michel (1998) SOME STEPS TOWARDS A TRANSCENDENTAL DEDUCTION OF QUANTUM MECHANICS. UNSPECIFIED. Bitbol, Michel (2002) Science as if situation mattered. UNSPECIFIED. Bitbol, Michel (2011) Traces of Objectivity: Causality and Probabilities in Quantum Physics. Diogenes, 58 (4). pp. 30-57. Bitbol, Michel, Guy, Simon (1991) PERSPECTIVAL REALISM AND QUANTUM MECHANICS. UNSPECIFIED. Bitbol, Michel (2021) Is the life-world reduction sufficient in quantum physics ? Continental Philosophy Review. Bitbol, Michel (2020) A PHENOMENOLOGICAL ONTOLOGY FOR PHYSICS: Merleau-Ponty and QBism. Bitbol, Michel (2002) Transcendental Structuralism in Physics: An alternative to Structural Realism. [Preprint] Bitbol, Michel and De la Tremblaye, Laura (2022) QBISM: AN ECO-PHENOMENOLOGY OF QUANTUM PHYSICS. [Preprint] Blackshaw, Nadia and Huggett, Nick and Ladyman, James (2024) Everettian Branching in the World and of the World. [Preprint] Boge, Florian (2016) ψ-Epistemic Models, Einsteinian Intuitions, and No-Gos. A Critical Study of Recent Developments on the Quantum State. [Preprint] Boge, Florian (2019) Quantum information vs. epistemic logic: An analysis of the Frauchiger-Renner theorem. [Preprint] Boge, Florian J. (2022) Back to Kant! QBism, Phenomenology, and Reality from Invariants. [Preprint] Boge, Florian J. (2019) The Best of Many Worlds, or, is Quantum Decoherence the Manifestation of a Disposition?∗. [Preprint] Bokulich, Alisa (2004) Open or Closed? Dirac, Heisenberg, and the Relation between Classical and Quantum Mechanics. [Preprint] Bokulich, Alisa (2009) Three Puzzles about Bohr's Correspondence Principle. [Preprint] Bokulich, Alisa (2009) Explanatory Fictions. Fictions in Science: Philosophical Essays on Modeling and Idealization. pp. 91-109. Bokulich, Alisa (2019) Losing Sight of the Forest for the Ψ: Beyond the Wavefunction Hegemony. [Preprint] Bonder, Yuri and Okon, Elias and Sudarsky, Daniel (2015) Can gravity account for the emergence of classicality? Physical Review D, 92 (124050). Bonds, Liam and Burson, Brooke and Cicchella, Kade and Feintzeig, Benjamin H. and Lynnx, Lynnx and Yusaini, Alia (2024) Quantum Probability via the Method of Arbitrary Functions. [Preprint] Boscá Díaz-Pintado, María C. (2007) Updating the wave-particle duality. In: UNSPECIFIED. Boyer, Thomas (2009) Coexistence of several interpretations of quantum mechanics and the fruitfulness of scientific works. In: UNSPECIFIED. Boës , Paul (2013) Is Howard's separability principle a sufficient condition for outcome independence? In: UNSPECIFIED. Brading, Katherine and Brown, Harvey R. (2003) Are gauge symmetry transformations observable? [Preprint] Brading, Katherine A. and Castellani, Elena (2002) Symmetries in physics: philosophical reflections. [Preprint] Briggs, G A D and Butterfield, J and Zeilinger, A (2013) The Oxford Questions on the foundations of quantum physics. Proceedings of the Royal Society (London) A, A 469. pp. 20130299-20130307. Broka, Chris (2016) Consciousness and Quantum Measurement. [Preprint] Broka, Chris (2016) Consciousness and Quantum Measurement. [Preprint] Broka, Chris (2019) The Quantum Mechanics of Two Interacting Realities. [Preprint] Broka, Chris A. (2023) Brains as Quantum Mechanical Systems - A New Model. [Preprint] Broka, Chris A. (2024) Degenerate States and the Problem of Quantum Measurement. [Preprint] Broka, Chris A. (2023) Quantum Measurements and their Place in Nature. [Preprint] Brown, Harvey R (1998) Aspects of Objectivity in Quantum Mechanics. [Preprint] Brown, Harvey R and Holland, Peter (2003) Simple applications of Noether's first theorem in quantum mechanics and electromagnetism. [Preprint] Brown, Harvey R. (2011) Curious and sublime: the connection between uncertainty and probability in physics. [Preprint] Brown, Harvey R. and Timpson, Christopher G. (2005) Why special relativity should not be a template for a fundamental reformulation of quantum mechanics. [Preprint] Brown, Harvey R. and Timpson, Christopher Gordon (2014) Bell on Bell's theorem: The changing face of nonlocality. [Preprint] Brown, Harvey R. and Wallace, David (2004) Solving the measurement problem: de Broglie-Bohm loses out to Everett. [Preprint] Brown, Matthew (2006) Relational Quantum Mechanics and the Determinacy Problem. UNSPECIFIED. (Unpublished) Brown, Matthew (2007) Relational Quantum Mechanics and the Determinacy Problem. In: UNSPECIFIED. Brown, Harvey R. (2017) The reality of the wavefunction: old arguments and new. [Preprint] Brown, Harvey R. and Ben Porath, Gal (2020) Everettian probabilities, the Deutsch-Wallace theorem and the Principal Principle. Springer International Publishing. Bräutigam, Maren (2023) Heterodox underdetermination: metaphysical options for discernibility and (non-)entanglement. Studies in History and Philosophy of Science. ISSN 00393681 Bub, Jeffrey (2001) Secure Key Distribution via Pre- and Post-Selected Quantum States. UNSPECIFIED. Bub, Jeffrey and Clifton, Rob and Halvorson, Hans (2002) Characterizing quantum theory in terms of information-theoretic constraints. [Preprint] Bub, Jeffrey and Pitowsky, Itamar (2007) Two Dogmas About Quantum Mechanics. [Preprint] Bueno, Otavio (2001) Weyl and von Neumann: Symmetry, Group Theory, and Quantum Mechanics. [Preprint] Busch, Paul (2008) Book review: Physical Theory and its Interpretation - Essays in Honor of Jeffrey Bub. [Preprint] Busch, Paul (2008) Effect (Compendium entry). [Preprint] Busch, Paul and Falkenburg, Brigitte (2008) Heisenberg's Uncertainty Relation (Compendium entry). [Preprint] Busch, Paul and Heinonen, Teiko and Lahti, Pekka (2006) Heisenberg's Uncertainty Principle. In: UNSPECIFIED. Busch, Paul and Jaeger, Gregg (2010) Unsharp Quantum Reality. [Preprint] Busch, Paul and Jaeger, Gregg (2008) Welcher-Weg Experiment (Compendium entry). [Preprint] Busch, Paul and Lahti, Pekka (2008) Lueders Rule (Compendium Entry). [Preprint] Busch, Paul and Lahti, Pekka (2008) Measurement Theory (Compendium Entry). [Preprint] Busch, Paul and Lahti, Pekka (2008) Observable (Compendium entry). [Preprint] Busch, Paul and Shilladay, Christopher (2006) Complementarity and Uncertainty in Mach-Zehnder Interferometry and beyond. [Preprint] Butterfield, J. (2011) Review of 'Many Worlds? Everett, Quantum Theory and Reality'. Philosophy (Journal of the Royal Institute of Philosophy), 86 (337). pp. 451-463. ISSN : 0031-8191 Butterfield, Jeremy (2013) Assessing the Montevideo Interpretation of Quantum Mechanics. Studies in the History and Philosophy of Modern Physics. ISSN 1355-2198 Butterfield, Jeremy (2001) The End of Time? [Preprint] Butterfield, Jeremy (2010) Less is Different: Emergence and Reduction Reconciled. [Preprint] Butterfield, Jeremy (2003) On Hamilton-Jacobi Theory as a Classical Root of Quantum Theory. UNSPECIFIED. (In Press) Butterfield, Jeremy (2011) On Time chez Dummett. [Preprint] Butterfield, Jeremy (2012) On Time in Quantum Physics. [Preprint] Butterfield, Jeremy (1997) Quantum Curiosities of Psychophysics. [Preprint] Butterfield, Jeremy (2007) Reconsidering Relativistic Causality. [Preprint] Butterfield, Jeremy (2004) The Rotating Discs Argument Defeated. [Preprint] Butterfield, Jeremy (2004) The Rotating Discs Argument Defeated. [Preprint] Butterfield, Jeremy (2001) Some Worlds of Quantum Theory. [Preprint] Butterfield, Jeremy (2007) Stochastic Einstein Locality Revisited. [Preprint] Butterfield, Jeremy (2000) Topos Theory as a Framework for Partial Truth. [Preprint] Butterfield, Jeremy and Isham, Chris (1999) A Topos Perspective on the Kochen-Specker Theorem: II. Conceptual Aspects, and Classical Analogues. UNSPECIFIED. Butterfield, Jeremy and Isham, Chris (2002) A Topos Perspective on the Kochen-Specker Theorem: IV. Interval Valuations. UNSPECIFIED. Butterfield, Jeremy (2020) Halvorson on Bell on ‘Subject and Object’. [Preprint] Butterfield, Jeremy (2018) On Dualities and Equivalences Between Physical Theories. [Preprint] Butterfield, Jeremy (2017) Peaceful Coexistence: Examining Kent's Relativistic Solution to the Quantum Measurement Problem. [Preprint] Butterfield, Jeremy and Marsh, Brendan (2019) Non-locality and quasiclassical reality in Kent's formulation of relativistic quantum theory. [Preprint] Bärenz, Manuel (2012) General Covariance and Background Independence in Quantum Gravity. University of Cambridge. bacelar valente, mario (2010) Bohr’s quantum postulate and time in quantum mechanics. [Preprint] C. A. da Costa, Newton and Krause, Décio (2003) The logic of complementarity. [Preprint] C. Barbado, Luis and Del Santo, Flavio (2023) On playing gods: The fallacy of the many-worlds interpretation. [Preprint] Calderón, Francisco (2024) The Causal Axioms of Algebraic Quantum Field Theory: A Diagnostic. [Preprint] Callender, Craig (2007) The Emergence and Interpretation of Probability in Bohmian Mechanics. [Preprint] Callender, Craig (2007) Finding 'Real' Time in Quantum Mechanics. [Preprint] Callender, Craig (2000) Is Time Handed in a Quantum World? [Preprint] Callender, Craig (2014) One World, One Beable. [Preprint] Callender, Craig (2018) Can We Quarantine the Quantum Blight? [Preprint] Callender, Craig (2024) Insights into Quantum Time Reversal from the Classical Schrödinger Equation. [Preprint] Callender, Craig (2023) The Prodigy That Time Forgot: The Incredible and Untold Story of John von Newton. [Preprint] Callender, Craig (2020) Quantum Mechanics: Keeping It Real? [Preprint] Calosi, Claudio and Fano, Vincenzo and Graziani, Pierluigi and Tarozzi, Gino (2012) Statistical VS Wave Realism in the Foundations of Quantum Mechanics. [Preprint] Calosi, Claudio and Morganti, Matteo (2018) Interpreting Quantum Entanglement: Steps Towards Coherentist Quantum Mechanics. The British Journal for the Philosophy of Science. Candiotto, Laura (2017) The reality of relations. [Preprint] Cano-Jorge, Fernando and Estrada-González, Luis (2022) A relevantist's glance at Bell's theorem. [Preprint] Canturk, Bilal (2020) A conceptual frame for giving physical content to the uncertainty principle and the quantum state. [Preprint] Caponigro, Michele (2008) Interpretations of Quantum Mechanics: a critical survey. [Preprint] Caponigro, Michele and Giannetto, Enrico (2012) Epistemic vs Ontic Classification of quantum entangled states? [Preprint] Caponigro, Michele and Mancini, Stefano (2009) Can (quantum) information be sorted out from quantum mechanics? [Preprint] Carcassi, Gabriele and Aidala, Christine A and Oldofredi, Andrea (2022) A No-Go Theorem for psi-Ontic Models. [Preprint] Carcassi, Gabriele and Aidala, Christine A and Oldofredi, Andrea (2022) On the reality of the quantum state once again: A no-go theorem for ψ-ontic models. [Preprint] Castagnino, Mario and Gadella, Manuel and Lombardi, Olimpia (2005) Time-reversal invariance and irreversibility in time-asymmetric quantum mechanics. [Preprint] Castagnino, Mario and Laura, Roberto and Lombardi, Olimpia (2006) A General Conceptual Framework for Decoherence in Closed and Open Systems. In: UNSPECIFIED. Castagnino, Mario and Lombardi, Olimpia (2004) Self-Induced Decoherence and the Classical Limit of Quantum Mechanics. In: UNSPECIFIED. (Unpublished) Castagnino, Mario and Lombardi, Olimpia (2002) Self-Induced Selection: A New Approach to Quantum Decoherence. [Preprint] Castellani, Elena (2002) On the meaning of symmetry breaking. [Preprint] Castellani, Leonardo (2022) All quantum mixtures are proper. [Preprint] Castellani, Leonardo (2020) No relation for Wigner's friend. [Preprint] Castellani, Leonardo and Anna, Gabetti (2024) Space and time correlations in quantum histories. [Preprint] Catren, Gabriel (2008) Can Classical Description of Physical Reality Be Considered Complete? [Preprint] Catren, Gabriel (2007) Can Classical Description of Physical Reality be Considered Complete? [Preprint] Catren, Gabriel (2007) On Classical and Quantum Objectivity. [Preprint] Catren, Gabriel (2008) On Classical and Quantum Objectivity. [Preprint] Catren, Gabriel (2011) Quantum Ontology in the Light of Gauge Theories. [Preprint] Catren, Gabriel (2013) Towards a Group-Theoretical Interpretation of Mechanics. [Preprint] Caulton, Adam (2012) Discerning "indistinguishable" quantum systems. [Preprint] Caulton, Adam (2014) Is mereology empirical? Composition for fermions. [Preprint] Caulton, Adam (2014) Physical Entanglement in Permutation-Invariant Quantum Mechanics. [Preprint] Caulton, Adam and Butterfield, Jeremy (2009) On Kinds of Indiscernibility in Logic and Metaphysics. [Preprint] Caulton, Adam and Butterfield, Jeremy (2009) Symmetries and Paraparticles as a Motivation for Structuralism. [Preprint] Caulton, Adam (2024) Entanglement by (anti-)symmetrisation does not violate Bell’s inequalities: so what kind of entanglement does? [Preprint] Caulton, Adam (2024) Is a particle an irreducible representation of the Poincaré group? [Preprint] Caulton, Adam (2023) Permutations, redux. [Preprint] Caulton, Adam (2014) Qualitative individuation in permutation-invariant quantum mechanics. [Preprint] Cavalcanti, Eric G. (2009) Causation, decision theory, and Bell's theorem: a quantum analogue of the Newcomb problem. [Preprint] Charis, Anastopoulos and Konstantina, Savvidou (2021) Quantum Information in Relativity: The Challenge of QFT Measurements. Entropy, 24 (4). ISSN 1099-4300 Chen, Eddy Keming (2020) Bell's Theorem, Quantum Probabilities, and Superdeterminism. [Preprint] Chen, Eddy Keming (2024) Density Matrix Realism. [Preprint] Chen, Eddy Keming (2020) From Time Asymmetry to Quantum Entanglement: The Humean Unification. [Preprint] Chen, Eddy Keming (2021) Fundamental Nomic Vagueness. [Preprint] Chen, Eddy Keming (2018) The Intrinsic Structure of Quantum Mechanics. [Preprint] Chen, Eddy Keming (2023) Laws of Physics. [Preprint] Chen, Eddy Keming (2017) Our Fundamental Physical Space: An Essay on the Metaphysics of the Wave Function. The Journal of Philosophy, 114 (7). pp. 333-365. ISSN 0022-362X Chen, Eddy Keming (2020) The Past Hypothesis and the Nature of Physical Laws. [Preprint] Chen, Eddy Keming (2018) Quantum Mechanics in a Time-Asymmetric Universe: On the Nature of the Initial Quantum State. The British Journal for the Philosophy of Science, 72 (4). ISSN 1464-3537 Chen, Eddy Keming (2020) The Quantum Revolution in Philosophy (Book Review). The Philosophical Review, 129 (2). pp. 302-308. Chen, Eddy Keming (2019) Quantum States of a Time-Asymmetric Universe: Wave Function, Density Matrix, and Empirical Equivalence. [Preprint] Chen, Eddy Keming (2019) Realism about the Wave Function. Philosophy Compass. Chen, Eddy Keming (2022) Strong Determinism. [Preprint] Chen, Eddy Keming (2019) Time's Arrow in a Quantum Universe: On the Status of Statistical Mechanical Probabilities. [Preprint] Chen, Eddy Keming (2022) The Wentaculus: Density Matrix Realism Meets the Arrow of Time. [Preprint] Chen, Eddy Keming and Goldstein, Sheldon (2022) Governing Without A Fundamental Direction of Time: Minimal Primitivism about Laws of Nature. Ben-Menahem, Y. (eds.). Rethinking the Concept of Law of Nature. pp. 21-64. Chen, Eddy Keming and Tumulka, Roderich (2024) Typical Quantum States of the Universe are Observationally Indistinguishable. [Preprint] Chen, Eddy Keming and Tumulka, Roderich (2020) Uniform Probability Distribution Over All Density Matrices. [Preprint] Chiatti, Leonardo (2014) Is Bohr's Challenge Still Relevant? [Preprint] Chiatti, Leonardo (2014) “Quantum Physics and Philosophical Problems”; Some Notes on a Paper by V. Fock Forty Years After His Death. [Preprint] Chiribella, Giulio and D'Ariano, Giacomo Mauro and Perinotti, Paolo (2010) Probabilistic Theories with Purification. In: UNSPECIFIED. Chiribella, Giulio and Scandolo, Carlo Maria (2015) Conservation of information and the foundations of quantum mechanics. EPJ Web of Conferences 95,(2015), 95. 03003. Chiribella, Giulio and Scandolo, Carlo Maria (2015) Entanglement and thermodynamics in general probabilistic theories. In: UNSPECIFIED. Chiribella, Giulio and Scandolo, Carlo Maria (2015) Operational axioms for diagonalizing states. EPTCS, 195. pp. 96-115. Christian, Joy (2008) Can Bell’s Prescription for Physical Reality Be Considered Complete? [Preprint] Christian, Joy (2011) Disproof of Bell's Theorem. [Preprint] Christian, Joy (2007) Disproof of Bell's Theorem by Clifford Algebra Valued Local Variables. [Preprint] Christian, Joy (2007) Disproof of Bell's Theorem by Clifford Algebra Valued Local Variables. [Preprint] Christian, Joy (2007) Disproof of Bell's Theorem: Further Consolidations. [Preprint] Christian, Joy (2009) Disproofs of Bell, GHZ, and Hardy Type Theorems and the Illusion of Entanglement. [Preprint] Christian, Joy (2009) Disproofs of Bell, GHZ, and Hardy Type Theorems and the Illusion of Entanglement. [Preprint] Christian, Joy (2009) Disproofs of Bell, GHZ, and Hardy Type Theorems and the Illusion of Entanglement. [Preprint] Christian, Joy (2010) Failure of Bell's Theorem and the Local Causality of the Entangled Photons. [Preprint] Christian, Joy (2010) Failure of Bell's Theorem and the Local Causality of the Entangled Photons. [Preprint] Christian, Joy (2015) Macroscopic Observability of Fermionic Sign Changes: A Reply to Gill. [Preprint] Christian, Joy (2012) Macroscopic Observability of Spinorial Sign Changes under 2pi Rotations. [Preprint] Christian, Joy (2013) Macroscopic Observability of Spinorial Sign Changes under 2pi Rotations. [Preprint] Christian, Joy (2012) On the Origins of Quantum Correlations. [Preprint] Christian, Joy (2016) Proposed Macroscopic Test of the Physical Relevance of Bell's Theorem. [Preprint] Christian, Joy (2011) Refutation of Some Arguments Against my Disproof of Bell's Theorem. [Preprint] Christian, Joy (2011) Restoring Local Causality and Objective Reality to the Entangled Photons. [Preprint] Christian, Joy (2005) Testing Quantum State Reduction via Cosmogenic Neutrinos. [Preprint] Christian, Joy (2011) What Really Sets the Upper Bound on Quantum Correlations? [Preprint] Christian, Joy (2013) Whither All the Scope and Generality of Bell's Theorem? [Preprint] Christian, Joy (2015) A simplified local-realistic derivation of the EPR-Bohm correlation. [Preprint] Christian, Joy (2024) Bell's Theorem Begs the Question. [Preprint] Christian, Joy (2019) Bell's Theorem Versus Local Realism in a Quaternionic Model of Physical Space. IEEE Access, 7. pp. 133388-133409. Christian, Joy (2024) Comment on the GHZ variant of Bell's theorem without inequalities. [Preprint] Christian, Joy (2020) Dr. Bertlmann's Socks in the Quaternionic World of Ambidextral Reality. [Preprint] Christian, Joy (2016) Local Causality in a Friedmann-Robertson-Walker Spacetime. Annals of Physics. Christian, Joy (2017) Local Causality in a Friedmann-Robertson-Walker Spacetime. [Preprint] Christian, Joy (2019) On a Surprising Oversight by John S. Bell in the Proof of his Famous Theorem. [Preprint] Christian, Joy (2020) Oversights in the Respective Theorems of von Neumann and Bell are Homologous. [Preprint] Christian, Joy (2018) Quantum Correlations are Weaved by the Spinors of the Euclidean Primitives. Royal Society Open Science, 5 (180526). pp. 1-40. ISSN 2054-5703 Christian, Joy (2017) Refutation of Richard Gill's Argument Against my Disproof of Bell's Theorem. [Preprint] Chua, Eugene Y. S. (2019) Does Von Neumann's Entropy Correspond to Thermodynamic Entropy? [Preprint] Chua, Eugene Y. S. (2024) The Time in Thermal Time. [Preprint] Chua, Eugene Y. S. and Callender, Craig (2020) No Time for Time from No-Time. [Preprint] Chua, Eugene Y. S. and Chen, Eddy Keming (2023) Decoherence, Branching, and the Born Rule in a Mixed-State Everettian Multiverse. [Preprint] Ciepielewski, Gerardo Sanjuan and Okon, Elias (2022) From locality to factorizability: a novel escape from Bell's theorem. [Preprint] Ciepielewski, Gerardo Sanjuan and Okon, Elias and Sudarsky, Daniel (2020) On Superdeterministic Rejections of Settings Independence. [Preprint] Cinti, Enrico and Sanchioni, Marco (2021) Humeanism in Light of Quantum Gravity. [Preprint] Cinà, Giovanni (2013) On the connection between the categorical and the modal logic approaches to Quantum Mechanics. ILLC. Cirkovic, Milan M. (2004) Is Quantum Suicide Painless? On an Apparent Violation of the Principal Principle. [Preprint] Clarke-Doane, Justin (2022) Mathemtics and Metaphilosophy. Cambridge Elements. Clifton, Rob (1996) Introductory Notes on the Mathematics Needed for Quantum Theory. [Preprint] Clifton, Rob (2001) The Subtleties of Entanglement and its Role in Quantum Information Theory. [Preprint] Clifton, Rob and Halvorson, Hans (2000) Are Rindler Quanta Real? Inequivalent Particle Concepts in Quantum Field Theory. [Preprint] Clifton, Rob and Pope, Damian (2001) On the Nonlocality of the Quantum Channel in the Standard Teleportation Protocol. [Preprint] Colanero, Klaus (2012) Decoherence and definite outcomes. [Preprint] Conroy, Christina (2011) The Relative Facts Interpretation and Everett's Note Added in Proof. In: UNSPECIFIED. Corbett, John Vincent (2007) The causal story of the double slit experiment in quantum real numbers. In: UNSPECIFIED. Cordero, Alberto (2024) Realism Despite Underdetermination. In: UNSPECIFIED. Corti, Alberto (2021) Yet Again, Quantum Indeterminacy is not Worldly Indecision. [Preprint] Craig, David (2007) A Theory of Theories. In: UNSPECIFIED. Crețu, Ana-Maria (2019) Diagnosing Disagreements: The Authentication of the Positron 1931 - 1934. [Preprint] Crowther, Karen and Linnemann, Niels and Wuthrich, Christian (2020) Spacetime functionalism in general relativity and quantum gravity. [Preprint] Crull, Elise and Bacciagaluppi, Guido Translation of: W. Heisenberg, ‘Ist eine deterministische Ergänzung der Quantenmechanik möglich?’. [Preprint] Crull, Elise (2017) Translation of: P. Ehrenfest (1925), 'Energieschwankungen im Strahlungsfeld oder Kristallgitter bei Superposition quantisierter Eigenschwingungen'. [Preprint] Crull, Elise (2017) Yes, More Decoherence: A Reply to Critics. [Preprint] Cuffaro, Michael (2010) The Kantian framework of complementarity. Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, 41 (4). pp. 309-317. Cuffaro, Michael (2012) Many Worlds, the Cluster-state Quantum Computer, and the Problem of the Preferred Basis. Studies in History and Philosophy of Modern Physics, 43 (1). pp. 35-42. ISSN 1355-2198 Cuffaro, Michael (2013) On the Necessity of Entanglement for the Explanation of Quantum Speedup. [Preprint] Cuffaro, Michael (2011) Reflections on the Role of Entanglement in the Explanation of Quantum Computational Speedup. [Preprint] Cuffaro, Michael (2011) Reflections on the Role of Entanglement in the Explanation of Quantum Computational Speedup. [Preprint] Cuffaro, Michael E. On the Physical Explanation for Quantum Computational Speedup. Western Libraries. Cuffaro, Michael E. and Myrvold, Wayne C. (2013) On the Debate Concerning the Proper Characterisation of Quantum Dynamical Evolution. Philosophy of Science, 80 (5). pp. 1125-1136. Cuffaro, Michael E. (2018) Information Causality, the Tsirelson Bound, and the 'Being-Thus' of Things. [Preprint] Cuffaro, Michael E. (2018) Kantian and Neo-Kantian First Principles for Physical and Metaphysical Cognition. [Preprint] Cuffaro, Michael E. (2023) The Measurement Problem Is a Feature, Not a Bug--Schematising the Observer and the Concept of an Open System on an Informational, or (Neo-)Bohrian, Approach. [Preprint] Cuffaro, Michael E. (2014) On the Significance of the Gottesman-Knill Theorem. [Preprint] Cuffaro, Michael E. (2021) The Philosophy of Quantum Computing. [Preprint] Cuffaro, Michael E. (2016) Reconsidering No-Go Theorems from a Practical Perspective. [Preprint] Cuffaro, Michael E. (2023) Review of Slobodan Perović's From Data to Quanta -- Niels Bohr's Vision of Physics. Philosophy of Science. ISSN 1539-767X Cuffaro, Michael E. and Doyle, Emerson P. (2020) Essay Review of Tanya and Jeffrey Bub's Totally Random: Why Nobody Understands Quantum Mechanics: A Serious Comic on Entanglement. [Preprint] Cuffaro, Michael E. and Hartmann, Stephan (2023) The Open Systems View and the Everett Interpretation. Quantum Rep. 2023, 5(2), 418-425, 5 (2). pp. 418-425. Cuffaro, Michael E. and Hartmann, Stephan (2023) The Open Systems View. [Preprint] Córdoba, Mariana and Martínez, Juan Camilo (2014) Los orbitales cuánticos y la autonomía del mundo químico. THEORIA. An International Journal for Theory, History and Foundations of Science, 29 (2). pp. 261-279. ISSN 2171-679X callender, craig and huggett, nicholas (2001) Physics Meets Philosophy at the Planck Scale. [Preprint] D'Ariano, Giacomo Mauro (2018) Causality re-established. [Preprint] Damiano, Anselmi (2018) The correspondence principle in quantum field theory and quantum gravity. [Preprint] Daniell, Matthew (2013) The Indivisible Now: why time must be discrete. [Preprint] Darby, George and Pickup, Martin (2019) Modelling Deep Indeterminacy. [Preprint] Daumer, Martin and Duerr, Detlef and Goldstein, Sheldon and Maudlin, Tim and Tumulka, Roderich and Zanghi, Nino (2006) The Message of the Quantum? [Preprint] Dawid, Richard (2008) Moritz Schlick and Bas van Fraassen: Two Different Perspectives on Causality and Quantum Mechanics. [Preprint] Dawid, Richard and Thebault, Karim P Y (2013) Against the Empirical Viability of the Deutsch Wallace Approach to Quantum Mechanics. [Preprint] Dawid, Richard and Thebault, Karim P Y (2013) Many Worlds: Decoherent or Incoherent? [Preprint] Dawid, Richard and Friederich, Simon (2019) Epistemic separability and Everettian branches---a critique of Sebens and Carroll. [Preprint] Dawid, Richard and Thebault, Karim P Y (2024) Decoherence and Probability. [Preprint] De Haro, Sebastian (2020) Science and Philosophy: A Love-Hate Relationship. Foundations of Science, 25. pp. 297-314. ISSN 1233-1821 De Haro, Sebastian (2019) Theoretical Equivalence and Duality. [Preprint] De la Tremblaye, Laura and Bitbol, Michel (2022) TOWARDS A PHENOMENOLOGICAL CONSTITUTION OF QUANTUM MECHANICS: A QBIST APPROACH. Mind and Matter, 20 (1). pp. 35-62. Dedes, Christos (2019) Backward Non-unitary Quantum Evolution. International Journal of Quantum Foundations, 5. pp. 128-140. Del Santo, Flavio and Gisin, Nicolas (2024) Creative and geometric times in physics, mathematics, logic, and philosophy. [Preprint] Del Santo, Flavio and Gisin, Nicolas (2023) Potentiality realism: A realistic and indeterministic physics based on propensities. [Preprint] Del Santo, Flavio and Gisin, Nicolas (2021) The relativity of indetermininacy. [Preprint] Del Santo, Flavio and Krizek, Gerd C. (2023) Against the “nightmare of a mechanically determined universe”: Why Bohm was never a Bohmian. [Preprint] Delhotel, Jean-Michel (2014) Quantum Mechanics Unscrambled. [Preprint] Dewar, Neil (2017) La Bohume. [Preprint] Di Biagio, Andrea and Rovelli, Carlo (2024) On the Time Orientation of Probabilistic Theories. [Preprint] Di Biagio, Andrea and Rovelli, Carlo (2021) Relational Quantum Mechanics is about Facts, not States: A reply to Pienaar and Brukner. [Preprint] Di Biagio, Andrea and Rovelli, Carlo (2021) Stable Facts, Relative Facts. Foundations of Physics, 51 (30). ISSN 1572-9516 Dickson, Michael (2007) Non-relativistic Quantum Mechanics. [Preprint] Dieks, Dennis (2002) Events and covariance in the interpretation of quantum field theory. UNSPECIFIED. (In Press) Dieks, Dennis (2009) Objectivity in perspective: relationism in the interpretation of quantum mechanics. [Preprint] Dieks, Dennis (2007) Probability in modal interpretations of quantum mechanics. [Preprint] Dieks, Dennis and Lam, Sander (2007) Complementarity in the Bohr-Einstein Photon Box. [Preprint] Dieks, Dennis and Lubberdink, Andrea (2012) How Classical Particles Emerge from the Quantum World. Foundations of Physics, 41 (6). pp. 1051-1064. Dieks, Dennis and Versteegh, Marijn (2007) Identical Particles and Weak Discernibility. [Preprint] Dieks, Dennis and Versteegh, Marijn (2007) Identical Quantum Particles and Weak Discernibility. [Preprint] Dieks, Dennis and van Dongen, Jeroen and de Haro, Sebastian (2014) Emergence in Holographic Scenarios for Gravity. [Preprint] Dieks, Dennis and van Dongen, Jeroen and de Haro, Sebastian (2015) Emergence in Holographic Scenarios for Gravity. Studies in History and Philosophy of Modern Physics. ISSN 1355-2198 Dieks, Dennis (2022) Identical Particles in Quantum Mechanics: Against the Received View (version 2022). [Preprint] Dieks, Dennis (2017) Mechanisms, Explanation and Understanding in Physics. [Preprint] Dieks, Dennis (2016) Niels Bohr and the Formalism of Quantum Mechanics. [Preprint] Dieks, Dennis (2022) Perspectival Quantum Realism. Foundations of Physics, 52 (95). ISSN 1572-9516 Dieks, Dennis (2016) Quantum Information and Locality. [Preprint] Dieks, Dennis (2019) Quantum Mechanics and Perspectivalism. [Preprint] Dieks, Dennis (2019) Quantum Reality, Perspectivalism and Covariance. [Preprint] Dieks, Dennis (2016) Von Neumann's Impossibility Proof: Mathematics in the Service of Rhetorics. [Preprint] Dieks, Dennis and Lubberdink, Andrea (2019) Identical quantum particles as distinguishable objects. [Preprint] Dikshit, Biswaranjan (2017) A simple proof of Born’s rule for statistical interpretation of quantum mechanics. [Preprint] Diósi, Lajos (2018) How to teach and think about spontaneous wave function collapse theories: not like before. Collapse of the Wave Function Models: Ontology, Origin, and Implications. pp. 3-11. Domenech, Graciela and Freytes, Hector and de Ronde, Christian (2007) The Contextual Character of Modal Interpretations of Quantum Mechanics. [Preprint] Domenech, Graciela and Holik, Federico (2007) A discussion on particle number and quantum indistinguishability. [Preprint] Domenech, Graciela and Holik, Federico and Krause, Décio (2008) Quasi-spaces an the foundation of quantum mechanics. [Preprint] Donald, Matthew J. (2003) Finitary and Infinitary Mathematics, the Possibility of Possibilities and the Definition of Probabilities. [Preprint] Donald, Matthew J. (2002) Neural Unpredictability, the Interpretation of Quantum Theory, and the Mind-Body Problem. [Preprint] Donald, Matthew J. (1997) On Many-Minds Interpretations of Quantum Theory. [Preprint] Dorato, Mauro (2003) Dispositions, Relational Properties and the Quantum World. [Preprint] Dorato, Mauro (2011) Do Dispositions and Propensities have a role in the Ontology of Quantum Mechanics? Some Critical Remarks. [Preprint] Dorato, Mauro (2015) Events and the Ontology of Quantum Mechanics. [Preprint] Dorato, Mauro (2012) How to combine and not to combine physics and metaphysics. [Preprint] Dorato, Mauro (2009) Philosophy of Physics between Objectivism and Conventionalism. [Preprint] Dorato, Mauro (2006) Properties and Dispositions: some Metaphysical Remarks on Quantum Ontology. [Preprint] Dorato, Mauro (2013) Rovelli’s relational quantum mechanics, anti-monism and quantum becoming. [Preprint] Dorato, Mauro (2015) The physical world as a blob: is OSR really realism? [Preprint] Dorato, Mauro and Esfeld, Michael (2009) GRW as an ontology of dispositions. [Preprint] Dorato, Mauro and Esfeld, Michael (2014) The metaphysics of laws: dispositionalism vs. primitivism. [Preprint] Dorato, Mauro and Felline, Laura (2009) Scientific Explanation and Scientific Structuralism. [Preprint] Dorato, Mauro and Felline, Laura (2010) Scientific Explanation and Scientific Structuralism. [Preprint] Dorato, Mauro and Laudisa, Federico (2014) Realism and instrumentalism about the wave function. How should we choose? [Preprint] Dorato, Mauro (2020) Bohr meets Rovelli:a dispositionalist account of the quantum state. [Preprint] Dorato, Mauro (2016) Bohr’s Relational Holism and the Classical-Quantum Interaction. [Preprint] Dorato, Mauro and Morganti, Matteo (2022) What Ontology for Relational Quantum Mechanics? [Preprint] Dorst, Chris (2021) There Is No Measurement Problem for Humeans. [Preprint] Dougherty, John (2015) A few points on gunky space. In: UNSPECIFIED. Dougherty, John (2020) The non-ideal theory of the Aharonov–Bohm effect. Synthese. ISSN 1573-0964 Dougherty, John and Callender, Craig (2016) Black Hole Thermodynamics: More Than an Analogy? [Preprint] Drezet, aurélien (2023) Local causality in the works of Einstein, Bohm and Bell. [Preprint] Drezet, aurélien (2023) Whence Nonlocality? Removing spooky action at a distance from the de Broglie Bohm pilot-wave theory using a time-symmetric version of de Broglie double solution. [Preprint] Driessen, Alfred (2018) Achilles, the Tortoise and Quantum Mechanics. [Preprint] Driessen, Alfred (2020) Aristotle and the Foundation of Quantum Mechanics. Drummond, Brian (2022) Quantum Mechanics: Statistical Balance Prompts Caution in Assessing Conceptual Implications. Entropy, 24 (11). p. 1537. ISSN 1099-4300 Drummond, Brian (2019) Understanding quantum mechanics: a review and synthesis in precise language. Open Physics, 17 (1). pp. 390-437. ISSN (Online) 2391-5471 Drummond, Brian (2021) Violation of Bell Inequalities: Mapping the Conceptual Implications. International Journal of Quantum Foundations, 7 (3). pp. 47-78. Duerr, Detlef and Struyve, Ward (2020) Typicality in the Foundations of Statistical Physics and Born's Rule. pp. 35-44. Duerr, Patrick and Ehmann, Alexander (2021) The physics and metaphysics of Tychistic Bohmian Mechanics. Studies in History and Philosophy of Science Part A, 90. pp. 168-183. ISSN 00393681 Dumitru, Spiridon (2021) Can Schrodinger's Cat Be Really a Quantum Touchstone? European Journal of Applied Physics, 3 (3). pp. 29-32. ISSN 2684-4451 Dumitru, Spiridon (2005) On the uncertainty relations and quantum measurements: conventionalities, shortcomings, reconsiderations. [Preprint] Dumitru, Spiridon (2022) Possible Perspective for Quantum Mechanics Interpretation: An Essay-Suggestion. European Journal of Applied Physics, 4 (5). pp. 55-62. ISSN 2684-4451 Dumitru, Spiridon (2021) A Survey on Uncertainty Relations and Quantum Measurements: Arguments for Lucrative Parsimony in Approaches of Matters. PROGRESS IN PHYSICS, 17 (1). pp. 38-70. ISSN 1555-5534 (print) 1555-5615 (web) Duncan, Anthony and Janssen, Michel (2012) (Never) Mind your p's and q's: Von Neumann versus Jordan on the Foundations of Quantum Theory. [Preprint] Duncan, Anthony and Janssen, Michel (2006) On the verge of Umdeutung in Minnesota: Van Vleck and the correspondence principle. [Preprint] Duncan, Anthony and Janssen, Michel (2002) Quantization Conditions, 1900–1927. [Preprint] Dunlap, Lucas (2015) The Metaphysics of D-CTCs: On the Underlying Assumptions of Deutsch's Quantum Solution to the Paradoxes of Time Travel. [Preprint] Dunlap, Lucas (2015) On the Common Structure of the Primitive Ontology Approach and the Information-Theoretic Interpretation of Quantum Theory. [Preprint] Dunlap, Lucas (2015) Would the Existence of CTCs Allow for Nonlocal Signaling? In: UNSPECIFIED. Dunlap, Lucas (2016) Shakespeare's Free Lunch: A Critique of the D-CTC Solution to the Knowledge Paradox. [Preprint] Durham, Ian (2005) Bell's Theorem, Uncertainty, and Conditional Events. [Preprint] Durham, Ian (2005) Quantum Puzzles in the Metaworld of Heisenberg, Clauser, and Horne. [Preprint] Décio, Krause and Otávio, Bueno (2008) Scientific Theories, Models, and the Semantic Approach. [Preprint] Dürr, Patrick and Ehmann, Alexander (2017) Probabilities in deBroglie-Bohm Theory: Towards a Stochastic Alternative. [Preprint] da Costa, Newton C. A. and Krause, Décio (2007) Logical and Philosophical Remarks on Quasi-Set Theory. [Preprint] da Costa, Newton C. A. and Krause, Décio (2003) Remarks on the applications of paraconsistent logic to physics. [Preprint] da Costa, Newton C. A. and de Ronde, Christian (2014) Non-Reflexive Logical Foundation for Quantum Mechanics. da Costa, Newton C. A. and de Ronde, Christian (2015) The Paraconsistent Approach to Quantum Superpositions Reloaded: Formalizing Contradictiory Powers in the Potential Realm. [Preprint] da Costa, Newton C. A. and de Ronde, Christian (2013) The Paraconsistent Logic of Quantum Superpositions. Foundations of Physics. da Costa, Newton C. A. and de Ronde, Christian (2016) Revisiting the Applicability of Metaphysical Identity in Quantum Mechanics. [Preprint] de Barros, Acacio and Holik, Federico and Krause, Décio (2017) Contextuality and Indistinguishability. [Preprint] de Barros, J. Acacio and Holik, Federico (2021) Indistinguishability and Negative Probabilities. Entropy. ISSN 1099-4300 de Barros, José Acacio and Jorge, Juan Pablo and Holik, Federico (2021) On the assumptions underlying KS-like contradictions. [Preprint] de Haro, Sebastian (2013) Science and Philosophy: A Love-Hate Relationship. [Preprint] de Regt, Henk and Dieks, Dennis (2003) A Contextual Approach to Scientific Understanding. [Preprint] de Ronde, Christian (2014) A Defense of the Paraconsistent Approach to Quantum Superpositions (Answer to Arenhart and Krause). [Preprint] de Ronde, Christian (2014) Epistemological and Ontological Paraconsistency in Quantum Mechanics: For and Against Bohrian Philosophy. UNSPECIFIED. de Ronde, Christian (2015) Hilbert Space Quantum Mechanics is Contextual (Reply to R. B. Griffiths). [Preprint] de Ronde, Christian (2007) Interpreting the Quantum Wave Function in Terms of ‘Interacting Faculties’. [Preprint] de Ronde, Christian (2014) Modality, Potentiality and Contradiction in Quantum Mechanics. [Preprint] de Ronde, Christian (2015) Probabilistic Knowledge as Objective Knowledge in Quantum Mechanics: Potential Powers Instead of Actual Properties. [Preprint] de Ronde, Christian (2015) Quantum Superpositions Do Exist! But ‘Quantum Physical Reality ≠ Actuality’ (Reply to Dieks and Griffiths). [Preprint] de Ronde, Christian (2016) Representational Realism, Closed Theories and the Quantum to Classical Limit. [Preprint] de Ronde, Christian (2013) Representing Quantum Superpositions: Powers, Potentia and Potential Effectuations. [Preprint] de Ronde, Christian and Freytes, Hector and Domenech, Graciela (2013) Interpreting the Modal Kochen-Specker Theorem: Possibility and Many Worlds in Quantum Mechanics. [Preprint] de Ronde, Christian and Freytes, Hector and Domenech, Graciela (2014) Quantum mechanics and the interpretation of the orthomodular square of opposition. [Preprint] de Ronde, Christian (2023) Bohr's Anti-Realist Realism in Contemporary (Quantum) Physics and Philosophy. [Preprint] de Ronde, Christian (2013) Causality and the Modeling of the Measurement Process in Quantum Theory. [Preprint] de Ronde, Christian (2017) Immanent Powers versus Causal Powers (Propensities, Latencies and Dispositions) in Quantum Mechanics. [Preprint] de Ronde, Christian (2020) Measuring Quantum Superpositions (Or, “It is only the theory which decides what can be observed.”). [Preprint] de Ronde, Christian (2021) The Power of Inconsistency in Anti-Realist Realisms about Quantum Mechanics (Or: Lessons on How to Capture and Defeat Smoky Dragons). [Preprint] de Ronde, Christian (2016) QBism, FAPP and the Quantum Omelette. (Or, Unscrambling Ontological Problems from Epistemological Solutions in QM). [Preprint] de Ronde, Christian (2016) Quantum Superpositions and the Representation of Physical Reality Beyond Measurement Outcomes and Mathematical Structures. [Preprint] de Ronde, Christian (2020) Quantum Theory Needs No ‘Interpretation’ But ‘Theoretical Formal-Conceptual Unity’ (Or: Escaping Adán Cabello’s “Map of Madness” With the Help of David Deutsch’s Explanations). [Preprint] de Ronde, Christian (2020) The (Quantum) Measurement Problem in Classical Mechanics. [Preprint] de Ronde, Christian (2023) The Return of Realism in the Logos Approach to Quantum Mechanics (Reply to Arroyo and Arenhart). [Preprint] de Ronde, Christian (2020) Understanding Quantum Mechanics (Beyond Metaphysical Dogmatism and Naive Empiricism). [Preprint] de Ronde, Christian (2016) Unscrambling the Omelette of Quantum Contextuality (Part I): Preexistent Properties or Measurement Outcomes? [Preprint] de Ronde, Christian and Cesar, Massri (2018) Against 'Particle Metaphysics' and 'Collapses' within the Definition of Quantum Entanglement. [Preprint] de Ronde, Christian and Cesar, Massri (2018) Against Collapses, Purity and Separability Within the Definition of Quantum Entanglement. [Preprint] de Ronde, Christian and Fernandez Moujan, Raimundo (2020) The Dilemma of Quantum Individuality Beyond Particle Metaphysics. [Preprint] de Ronde, Christian and Fernandez Moujan, Raimundo (2017) Epistemological vs. Ontological Relationalism in Quantum Mechanics: Relativism or Realism? [Preprint] de Ronde, Christian and Fernández Mouján, Raimundo (2021) Are 'Particles' in Quantum Mechanics "Just a Way of Talking"? [Preprint] de Ronde, Christian and Fernández Mouján, Raimundo and Cesar, Massri (2018) Taking Mermin's Relational Interpretation of QM Beyond Cabello's and Seevinck's No-Go Theorems. [Preprint] de Ronde, Christian and Fernández Mouján, Raimundo and Massri, Cesar (2024) Equivalence Relations in Quantum Theory: An Objective Account of Bases and Factorizations. [Preprint] de Ronde, Christian and Fernández Mouján, Raimundo and Massri, Cesar (2024) Everything is Entangled in Quantum Mechanics: Are the Orthodox Measures Physically Meaningful? [Preprint] de Ronde, Christian and Fernández Mouján, Raimundo and Massri, Cesar (2024) On the Relative Nature of Quantum Individuals. [Preprint] de Ronde, Christian and Fernández Mouján, Raimundo and Massri, Cesar (2024) Tensorial Quantum Mechanics: Back to Heisenberg and Beyond. [Preprint] de Ronde, Christian and Freytes, Hector and Sergioli, Giuseppe (2019) Quantum Probability: a reliable tool for an agent or a reliable source of reality? Synthese. ISSN 1573-0964 de Ronde, Christian and Massri, Cesar (2019) Against the Tyranny of 'Pure States' in Quantum Theory. [Preprint] de Ronde, Christian and Massri, Cesar (2019) Against the Tyranny of 'Pure States' in Quantum Theory. Foundations of Science. ISSN 1233-1821 de Ronde, Christian and Massri, Cesar (2020) Beyond Purity and Mixtures in Categorical Quantum Mechanics. [Preprint] de Ronde, Christian and Massri, Cesar (2014) Kochen-Specker Theorem, Physical Invariance and Quantum Individuality. [Preprint] de Ronde, Christian and Massri, Cesar (2018) The Logos Categorical Approach to QM: II. Quantum Superpositions. [Preprint] de Ronde, Christian and Massri, Cesar (2018) The Logos Categorical Approach to Quantum Mechanics: I. Kochen-Specker Contextuality and Global Intensive Valuations. [Preprint] de Ronde, Christian and Massri, Cesar (2018) The Logos Categorical Approach to Quantum Mechanics: III. Relational Potential Coding and Quantum Entanglement Beyond Collapses, Pure States and Particle Metaphysics. [Preprint] de Ronde, Christian and Massri, Cesar (2024) The Many Inconsistencies of the Purity-Mixture Distinction in Standard Quantum Mechanics. [Preprint] de Ronde, Christian and Massri, Cesar (2018) A New Objective Definition of Quantum Entanglement as Potential Coding of Intensive and Effective Relations. [Preprint] de Ronde, Christian and Massri, Cesar (2020) Relational Quantum Entanglement Beyond Non-Separable and Contextual Relativism. [Preprint] E. Szabo, Laszlo and Fine, Arthur (2002) A local hidden variable theory for the GHZ experiment. [Preprint] E. Szabó, László (2007) The Einstein-Podolsky-Rosen Argument and the Bell Inequalities. [Preprint] E. Szabó, László (2007) Objective probability-like things with and without objective indeterminism. [Preprint] Earman, John (2014) Some Puzzles and Unresolved Issues About Quantum Entanglement. Erkenntnis. ISSN ISSN 0165-0106 Earman, John (2017) Additivity Requirements in Classical and Quantum Probability. [Preprint] Earman, John (2024) Explaining the Aharonov-Bohm Effect. [Preprint] Earman, John (2022) A Guide to the Bargmann Mass Superselection Rule: Why There Is--and Isn't--Mass Superselection in Non-Relativistic Quantum Mechanics. [Preprint] Earman, John (2021) Implementing David Lewis' Principal Principle: A Program for Investigating the Relation between Credence and Chance. [Preprint] Earman, John (2019) Lüders conditionalization: Conditional probability, transition probability, and updating in quantum probability theory. [Preprint] Earman, John (2018) Quantum Bayesianism Assessed. [Preprint] Earman, John (2018) The Relation between Credence and Chance: Lewis' "Principal Principle" Is a Theorem of Quantum Probability Theory. [Preprint] Earman, John (2017) The Role of Idealizations in the Aharonov-Bom Effect. [Preprint] Earman, John (2022) The Status of the Born Rule and the Role of Gleason's Theorem and Its Generalizations: How the Leopard Got Its Spots and Other Just-So Stories. [Preprint] Earman, John (2024) Why √-1? The Role of Complex Structure in Quantum Physics. [Preprint] Earman, John (2023) As Revealing in the Breach as in the Observance: von Neumann's Uniqueness Theorem. [Preprint] Earman, John (2020) Believing the Unbelievable. [Preprint] Earman, John (2020) Quantum Physics in Non-Separable Hilbert Spaces. [Preprint] Earman, John (2020) Symmetries for Quantum Theory. [Preprint] Egg, Matthias (2013) Delayed-Choice Experiments and the Metaphysics of Entanglement. Foundations of Physics, 43 (9). pp. 1124-1135. Egg, Matthias and Esfeld, Michael (2013) Non-local common cause explanations for EPR. [Preprint] Egg, Matthias and Esfeld, Michael (2014) Primitive ontology and quantum state in the GRW matter density theory. [Preprint] Egg, Matthias (2018) Dissolving the Measurement Problem Is Not an Option for the Realist. In: UNSPECIFIED. Egg, Matthias (2016) The Physical Salience of Non-Fundamental Local Beables. Studies in History and Philosophy of Modern Physics. ISSN 1355-2198 Egg, Matthias (2020) Quantum Ontology without Speculation. [Preprint] El Demery, Mostafa (2013) New Perspectives on the Aharonov-Bohm Effect. n/a. Ellerman, David (2013) An Introduction to Partition Logic. Logic Journal of the IGPL. Ellerman, David (2014) Partitions and Objective Indefiniteness in Quantum Mechanics. [Preprint] Ellerman, David (2011) Why Delayed Choice Experiments do NOT imply Retrocausality. [Preprint] Ellerman, David (2022) Follow the Math!:The mathematics of quantum mechanics as the mathematics of set partitions linearized to (Hilbert) vector spaces. [Preprint] Ellerman, David (2022) Introduction to Logical Entropy and Its Relationship to Shannon Entropy. [Preprint] Ellerman, David (2024) A New Approach to Understanding Quantum Mechanics: Illustrated Using a Pedagogical Model over Z2. Applied Math, 4 (2). 468-494.. Ellerman, David (2024) A New Approach to Understanding Quantum Mechanics: Illustrated Using a Pedagogical Model over ℤ2. AppliedMath (MDPI), 4 (2). pp. 468-494. Ellerman, David (2021) On Abstraction in Mathematics and Indefiniteness in Quantum Mechanics. [Preprint] Ellerman, David (2017) The Quantum Logic of Direct-Sum Decompositions: The Dual to the Quantum Logic of Subspaces. [Preprint] Ellerman, David (2019) Quantum Mechanics over Sets: A pedagogical model with non-commutative finite probability theory as its quantum probability calculus. [Preprint] Emily, Adlam (2022) The Temporal Asymmetry of Influence is not Statistical. [Preprint] Esfeld, Michael (2015) Bell’s theorem and the issue of determinism and indeterminism. [Preprint] Esfeld, Michael (2014) How to account for quantum non-locality: ontic structural realism and the primitive ontology of quantum physics. [Preprint] Esfeld, Michael (2013) Physics and intrinsic properties. [Preprint] Esfeld, Michael (2014) Quantum Humeanism, or: physicalism without properties. [Preprint] Esfeld, Michael (2004) Quantum entanglement and a metaphysics of relations. [Preprint] Esfeld, Michael (2014) The primitive ontology of quantum physics: guidelines for an assessment of the proposals. [Preprint] Esfeld, Michael and Deckert, Dirk-André and Oldofredi, Andrea (2015) What is matter? The fundamental ontology of atomism and structural realism. [Preprint] Esfeld, Michael and Gisin, Nicolas (2013) The GRW flash theory: a relativistic quantum ontology of matter in space-time? [Preprint] Esfeld, Michael and Lam, Vincent (2010) Ontic structural realism as a metaphysics of objects. [Preprint] Esfeld, Michael and Lazarovici, Dustin and Hubert, Mario and Dürr, Detlef (2012) The ontology of Bohmian mechanics. [Preprint] Esfeld, Michael and Lazarovici, Dustin and Lam, Vincent and Hubert, Mario (2014) The physics and metaphysics of primitive stuff. [Preprint] Esfeld, Michael (2024) Bohmian mechanics as Cartesian science. [Preprint] Esfeld, Michael (2016) Collapse or no collapse? What is the best ontology of quantum mechanics in the primitive ontology framework? [Preprint] Esfeld, Michael (2019) From the measurement problem to the primitive ontology programme. [Preprint] Esfeld, Michael (2017) Individuality and the account of non-locality: the case for the particle ontology in quantum physics. [Preprint] Esfeld, Michael (2018) Metaphysics of science as naturalized metaphysics. [Preprint] Esfeld, Michael (2017) Super-Humeanism: the Canberra plan for physics. [Preprint] Esfeld, Michael (2017) A proposal for a minimalist ontology. [Preprint] Esser, Stephen (2024) Relational quantum mechanics, causal composition, and molecular structure. [Preprint] Eva, Benjamin (2016) Topos Theoretic Quantum Realism. [Preprint] Eva, Benjamin and Ozawa, Masanao and Doering, Andreas (2018) A Bridge Between Q-Worlds. [Preprint] Evans, Peter and Price, Huw and Wharton, Ken (2010) New Slant on the EPR-Bell Experiment. [Preprint] Evans, Peter W. (2014) Retrocausality at no extra cost. Synthese. ISSN 1573-0964 Evans, Peter W (2020) The End of a Classical Ontology for Quantum Mechanics? Entropy, 23 (1). p. 12. ISSN 1099-4300 Evans, Peter W. (2020) Perspectival Objectivity or: How I Learned to Stop Worrying and Love Observer-Dependent Reality. [Preprint] Evans, Peter W. (2024) What is it like to be unitarily reversed? European Journal for Philosophy of Science. ISSN 1879-4912 Evans, Peter W. (2020) A sideways look at faithfulness for quantum correlations. [Preprint] Evans, Peter W. and Gryb, Sean and Thebault, Karim P Y (2016) Psi-Epistemic Quantum Cosmology? [Preprint] Evans, Peter W. and Hangleiter, Dominik and Thebault, Karim P Y (2023) How to engineer a quantum wavefunction. [Preprint] Evans, Peter W. and Thebault, Karim P Y (2019) What can bouncing oil droplets tell us about quantum mechanics? [Preprint] Everth, Thomas and Gurney, Laura (2022) Emergent Realities: Diffracting Barad within a quantum-realist ontology of matter and politics. [Preprint] Eynck, Tim Oliver and Lyre, Holger and Rummell, Nicolai von (2001) A versus B! Topological nonseparability and the Aharonov-Bohm effect. [Preprint] Faglia, Paolo (2024) Non-separability, locality and criteria of reality: a reply to Waegell and McQueen. [Preprint] Fankhauser, Johannes (2017) Taming the Delayed Choice Quantum Eraser. Taming the Delayed Choice Quantum Eraser, 8. pp. 44-56. Fano, Vincenzo (2024) APPLICATION OF RELATIONAL QUANTUM MECHANICS TO SIMPLE PHYSICAL SITUATIONS. [Preprint] Farr, Matt (2020) C-theories of time: On the adirectionality of time. [Preprint] Faye, Jan (2021) Duncan and Janssen's Constructing Quantum Mechanics. BJPS Review of Books. Federico, Laudisa (2001) Non-Locality and Theories of Causation. [Preprint] Feintzeig, Benjamin (2024) Quantization and the Preservation of Structure across Theory Change. [Preprint] Feintzeig, Benjamin (2019) Reductive Explanation and the Construction of Quantum Theories. British Journal for Philosophy of Science. Feintzeig, Benjamin (2018) The classical limit as an approximation. [Preprint] Feintzeig, Benjamin H. (2017) On the Choice of Algebra for Quantization. [Preprint] Feintzeig, Benjamin H. (2017) The classical limit of a state on the Weyl algebra. [Preprint] Feintzeig, Benjamin H. (2017) The classical limit of a state on the Weyl algebra. [Preprint] Feintzeig, Benjamin H. and (Le)Manchak, JB and Rosenstock, Sarita and Weatherall, James Owen (2018) Why Be Regular? Part I. [Preprint] Feintzeig, Benjamin H. and Fletcher, Samuel C. (2016) On Noncontextual, Non-Kolmogorovian Hidden Variable Theories. [Preprint] Feintzeig, Benjamin H. and Weatherall, James Owen (2018) Why Be Regular? Part II. [Preprint] Felline, Laura (2015) Mechanisms meet Structural Explanation. [Preprint] Felline, Laura (2014) Review of 'Quantum Information Theory and the Foundations of Quantum Mechanics', by Christopher G. Timpson. International Studies in the Philosophy of Science, 3 (28). Felline, Laura and Bacciagaluppi, Guido (2011) Locality and Mentality in Everett Interpretations: Albert and Loewer’s Many Minds. [Preprint] Felline, Laura (2016) It's a Matter of Principle. Scientific Explanation in Information-Theoretic Reconstructions of Quantum Theory. [Preprint] Felline, Laura (2019) The Measurement Problem and two Dogmas about Quantum Mechanics. [Preprint] Felline, Laura (2018) Quantum theory is not only about information. [Preprint] Fernández Mouján, Raimundo (2020) Greek philosophy for quantum physics. The return to the Greeks in the works of Heisenberg, Pauli and Schrödinger. [Preprint] Field, Grace (2020) On the status of quantum tunneling times. [Preprint] Field, Grace (2022) On the status of quantum tunnelling time. [Preprint] Finkelstein, Jerry (2009) Has the Born rule been proven? [Preprint] Fleming, Gordon / N (2011) On the Quantum Deviations from Einstein Dilation of Unstable Quanton* Decay Evolution and Lifetimes. [Preprint] Fleming, Gordon N. (2007) Correlation coefficients and Robertson-Schroedinger uncertainty relations. [Preprint] Fleming, Gordon N. (2002) The Dependence of Lorentz Boost Generators on the Presence and Nature of Interactions. [Preprint] Fleming, Gordon N. (2003) Observations on Hyperplanes: I State Reduction and Unitary Evolution. [Preprint] Fleming, Gordon N. (2004) Observations on Hyperplanes: II. Dynamical Variables and Localization Observables. [Preprint] Fleming, Gordon N. (2009) Observations on Unstable Quantons, Hyperplane Dependence and Quantum Fields. [Preprint] Fleming, Gordon N. (2009) Observations on Unstable Quantons, Hyperplane Dependence and Quantum Fields. [Preprint] Fleming, Gordon N. (2009) Observations on Unstable Quantons, Hyperplane Dependence and Quantum Fields. [Preprint] Fleming, Gordon N. (2013) Response to Pashby: Time operators and POVM observables in quantum mechanics. In: UNSPECIFIED. Fleming, Gordon N. (2013) Response to Pashby: Time operators and POVM observables in quantum mechanics. [Preprint] Fleming, Gordon N. (2009) Shirokov's contracting lifetimes and the interpretation of velocity eigenstates for unstable quantons. [Preprint] Fleming, Gordon N. (2001) Uses of a Quantum Master Inequality. [Preprint] Fletcher, Samuel C. (2019) Modality in Physics. [Preprint] Fletcher, Samuel C. and Taylor, David E. (2021) Quantum indeterminacy and the eigenstate-eigenvalue link. Synthese. ISSN 1573-0964 Fletcher, Samuel C. and Taylor, David E. (2021) Two Quantum Logics of Indeterminacy. Synthese. ISSN 1573-0964 Fortin, Sebastian and Lombardi, Olimpia and Martínez González, Juan Camilo (2016) Isomerism and decoherence. [Preprint] Fortin, Sebastian and Holik, Federico and Vanni, Leonardo (2016) Non-unitary evolution of quantum logics. [Preprint] Fortin, Sebastian and Jaimes Arriaga, Jesús Alberto (2018) About the nature of the wave function and its dimensionality: the case of quantum chemistry. [Preprint] Fortin, Sebastian and Labarca, Martín and Lombardi, Olimpia (2022) On the ontological status of molecular structure: is it possible to reconcile molecular chemistry with quantum mechanics? [Preprint] Fortin, Sebastian and Lombardi, Olimpia (2021) Entanglement and indistinguishability in a quantum ontology of properties. [Preprint] Fortin, Sebastian and Lombardi, Olimpia (2017) Interpretation and decoherence: a contribution to the debate Vasallo & Esfeld vs Crull. [Preprint] Fortin, Sebastian and Lombardi, Olimpia (2021) Is the problem of molecular structure just the quantum measurement problem? [Preprint] Fortin, Sebastian and Lombardi, Olimpia (2016) Quantum mechanics: symmetry and interpretation. In: UNSPECIFIED. Fortin, Sebastian and Lombardi, Olimpia (2019) Wigner and his many friends: A new no-go result? [Preprint] Fortin, Sebastian and Lombardi, Olimpia (2019) The correspondence principle and the understanding of decoherence. [Preprint] Fortin, Sebastian and Lombardi, Olimpia and Martínez, Juan Camilo (2017) The relationship between chemistry and physics from the perspective of Bohmian mechanics. [Preprint] Fortin, Sebastian and Lombardi, Olimpia and Martínez González, Juan Camilo (2016) A new application of the modal-Hamiltonian interpretation of quantum mechanics: the problem of optical isomerism. Fortin, Sebastian and Lombardi, Olimpia and Pasqualini, Matias (2021) Relational event-time in quantum mechanics. Foundations of Physics, 52. p. 10. ISSN 0015-9018 Fortin, Sebastian and Pasqualini, Matias (2024) Emergence-Free Duality: Phonons and Vibrating Atoms in Crystalline Solids. [Preprint] Franklin, Alexander (2023) Can Bohmian Mechanics Make Sense of Local Reductive Explanation? [Preprint] Franklin, Alexander (2023) A Challenge for Humean Everettians. [Preprint] Franklin, Alexander (2023) Incoherent? No, Just Decoherent: How Quantum Many Worlds Emerge. [Preprint] Franklin, Alexander and Seifert, Vanessa A. (2020) The Problem of Molecular Structure Just Is The Measurement Problem. The British Journal for the Philosophy of Science. ISSN 1464-3537 Fraser, James D. (2023) Infinite Scale Scepticism: Probing the Epistemology of the Limit of Infinite Degrees of Freedom and Hilbert Space Non-Uniqueness. [Preprint] Fraser, James D. and Vickers, Peter (2022) Knowledge of the Quantum Domain: An Overlap Strategy. The British Journal for the Philosophy of Science. ISSN 1464-3537 French, Steven (2015) Response to (Metascience) critics. Springer. French, Steven (2001) Symmetry, Structure and the Constitution of Objects. [Preprint] French, Steven (2018) Between Factualism and Substantialism: Structuralism as a Third Way. [Preprint] French, Steven (2019) From a Lost History to a New Future: Is a Phenomenological Approach to Quantum Physics Viable? [Preprint] French, Steven (2020) Metaphysical Underdetermination as a Motivational Device. [Preprint] French, Steven (2019) A Neo-Kantian Approach to the Standard Model. [Preprint] French, Steven (2021) Putting Some Flesh on the Participant in Participatory Realism. [Preprint] French, Steven (2019) Representation and Realism: On Being a Structuralist All the Way (Up and) Down. [Preprint] Freytes, Hector and Domenech, Graciela and de Ronde, Christian (2014) Physical properties as modal operators in the topos approach to quantum mechanics. Foundations of Physics. ISSN 0015-9018 Freytes, Hector and de Ronde, Christian and Domenech, Graciela (2012) The square of opposition in orthomodular logic. Around and beyond the square of opposition. pp. 193-201. Friebe, Cord and Salimkhani, Kian and Wachter, Tina (2021) Introduction: Individuality, Distinguishability, and (Non‑)Entanglement. Journal for General Philosophy of Science. ISSN 0925-4560 Friederich, S. (2015) Re-thinking local causality. Synthese, 192 (1). pp. 221-240. ISSN 1573-0964 Friederich, Simon (2013) In defence of non-ontic accounts of quantum states. [Preprint] Friederich, Simon (2021) Introducing the Q-based interpretation of quantum theory. [Preprint] Frigg, Roman (2002) On the Property Structure of Realist Collapse of Quantum Mechanics and the So-Called "Counting Anomaly". [Preprint] Frigg, Roman and Hoefer, Carl (2007) Probability in GRW Theory. [Preprint] Frigg, Roman and Hoefer, Carl (2007) Probability in GRW Theory. Studies in History and Philosophy of Modern Physics, 38 (2). pp. 371-389. Frigg, Roman (2018) Properties and the Born Rule in GRW Theory. Collapse of the Wave Function: Models, Ontology, Origin, and Implications.. Fuksa, Jonáš (2021) Limits on Relativistic Quantum Measurement. [Preprint] Gambini, Rodolfo and Garcia Pintos, Luis Pedro and Pullin, Jorge (2009) Undecidability and the problem of outcomes in quantum measurements. [Preprint] Gambini, Rodolfo and Garcia-Pintos, Luis Pedro and Pullin, Jorge (2011) An axiomatic formulation of the Montevideo interpretation of quantum mechanics. Studies in the History and Philosophy of Modern Physics B, 42 (4). pp. 256-263. Gambini, Rodolfo and Lewowicz, Lucia and Pullin, Jorge (2013) Quantum mechanics, strong emergence and ontological non-reducibility. [Preprint] Gambini, Rodolfo and Pullin, Jorge (2015) The Montevideo Interpretation of Quantum Mechanics: a short review. [Preprint] Gambini, Rodolfo and Pullin, Jorge (2009) The Montevideo interpretation of quantum mechanics: frequently asked questions. In: UNSPECIFIED. Gambini, Rodolfo and Garcia-Pintos, Luis Pedro and Pullin, Jorge (2018) A single-world consistent interpretation of quantum mechanics from fundamental time and length uncertainties. [Preprint] Gambini, Rodolfo and Pullin, Jorge (2016) Event ontology in quantum mechanics and the problem of emergence. International Journal of Quantum Foundations, 2 (3). pp. 89-108. Gambini, Rodolfo and Pullin, Jorge (2024) Quantum panprotopsychism and the combination problem. Mind and Matter, 22 (1). pp. 51-94. ISSN 1611-8812 Gao, Shan (2013) Can continuous motion be an illusion? [Preprint] Gao, Shan (2012) Comment on "Distinct Quantum States Can Be Compatible with a Single State of Reality". [Preprint] Gao, Shan (2011) Comment on "How to protect the interpretation of the wave function against protective measurements" by Jos Uffink. [Preprint] Gao, Shan (2011) Derivation of the Meaning of the Wave Function. [Preprint] Gao, Shan (2013) Distinct Quantum States Cannot Be Compatible with a Single State of Reality. [Preprint] Gao, Shan (2013) Does gravity induce wavefunction collapse? An examination of Penrose's conjecture. [Preprint] Gao, Shan (2012) An Exceptionally Simple Argument Against the Many-worlds Interpretation: Further Consolidations. [Preprint] Gao, Shan (2013) Homogeneity of spacetime implies the free Schrödinger equation. [Preprint] Gao, Shan (2011) Interpreting Quantum Mechanics in Terms of Random Discontinuous Motion of Particles. [Preprint] Gao, Shan (2013) Interpreting Quantum Mechanics in Terms of Random Discontinuous Motion of Particles. [Preprint] Gao, Shan (2013) Is an electron a charge cloud? A reexamination of Schrödinger's charge density hypothesis. [Preprint] Gao, Shan (2012) Is the electron's charge 2e? A problem of the de Broglie-Bohm theory. [Preprint] Gao, Shan (2010) Meaning of the wave function. [Preprint] Gao, Shan (2014) A New Ontological Interpretation of the Wave Function. [Preprint] Gao, Shan (2014) Notes on the ontology of Bohmian mechanics. [Preprint] Gao, Shan (2014) Notes on the reality of the quantum state. [Preprint] Gao, Shan (2012) On Uffink's alternative interpretation of protective measurements. [Preprint] Gao, Shan (2012) On the origin of gravity. [Preprint] Gao, Shan (2013) On the possibility of nonlinear quantum evolution and superluminal communication. [Preprint] Gao, Shan (2014) On the reality and meaning of the wave function. [Preprint] Gao, Shan (2013) An Ontological Interpretation of the Wave Function. [Preprint] Gao, Shan (2014) A PBR-like argument for psi-ontology in terms of protective measurements. [Preprint] Gao, Shan (2011) Protective Measurement and the Meaning of the Wave Function. [Preprint] Gao, Shan (2013) Protective Measurement: A Paradigm Shift in Understanding Quantum Mechanics. [Preprint] Gao, Shan (2013) Protective measurements and relativity of worlds. [Preprint] Gao, Shan (2013) Protective measurements and the meaning of the wave function in the de Broglie-Bohm theory. [Preprint] Gao, Shan (2014) Protective measurements and the reality of the wave function. [Preprint] Gao, Shan (2004) Quantum collapse, consciousness and superluminal communication. Foundations of Physics Letters, 17 (2). pp. 167-182. Gao, Shan (2015) Reality of the quantum state: A new proof in terms of protective measurements. [Preprint] Gao, Shan (2013) Space-time, relativity and quantum mechanics: In search of a deeper connection. [Preprint] Gao, Shan (2013) Three possible implications of spacetime discreteness. [Preprint] Gao, Shan (2011) The Wave Function and Its Evolution. [Preprint] Gao, Shan (2015) What does it feel like to be in a quantum superposition? [Preprint] Gao, Shan (2001) What quantum mechanics describes is discontinuous motion of particles. [Preprint] Gao, Shan (2015) An argument for psi-ontology in terms of protective measurements. [Preprint] Gao, Shan (2013) A discrete model of energy-conserved wavefunction collapse. [Preprint] Gao, Shan (2011) An exceptionally simple argument against the many-worlds interpretation. [Preprint] Gao, Shan (2011) An exceptionally simple argument against the many-worlds interpretation. [Preprint] Gao, Shan (2016) The measurement problem revisited. [Preprint] Gao, Shan (2001) A possible quantum basis of panpsychism. [Preprint] Gao, Shan (2013) A quantum physical argument for panpsychism. [Preprint] Gao, Shan (2014) The wave function and particle ontology. [Preprint] Gao, Shan (2018) Against the field ontology of quantum mechanics. [Preprint] Gao, Shan (2018) Are there limits of objectivity in the quantum world? [Preprint] Gao, Shan (2019) Are there many worlds? [Preprint] Gao, Shan (2022) Can Bohmian brains make minds? On shadows, puppets and zombies. [Preprint] Gao, Shan (2017) Can particle configurations represent measurement results in Bohm's theory? [Preprint] Gao, Shan (2023) Can pragmatist quantum realism explain protective measurements? Foundations of Physics, 53 (11). ISSN 0015-9018 Gao, Shan (2023) Can the ontology of Bohmian mechanics consists only in particles? The PBR theorem says no. Gao, Shan (2023) Can the ontology of Bohmian mechanics include only particles? [Preprint] Gao, Shan (2020) Can the universe be in a mixed state? [Preprint] Gao, Shan (2023) Can the universe be in a mixed state? or did God have a choice in creating the universe? [Preprint] Gao, Shan (2019) Can the wave function of the universe be a law of nature? [Preprint] Gao, Shan (2019) Closing the superdeterminism loophole in Bell's theorem. [Preprint] Gao, Shan (2022) A Conjecture on the Origin of Superselection Rules - with a Comment on "The Global Phase Is Real". [Preprint] Gao, Shan (2022) Consciousness and Quantum Mechanics. [Preprint] Gao, Shan (2022) Do quantum observers have minds? [Preprint] Gao, Shan (2024) Do the Laws of Nature Have Necessity? If Yes, Where Does It Come From? [Preprint] Gao, Shan (2023) Does Bohmian mechanics solve the measurement problem? Maybe not yet. [Preprint] Gao, Shan (2024) Does Locality Imply Reality of the Wave Function? Hardy's Theorem Revisited. Foundations of Physics, 54 (44). Gao, Shan (2018) Does protective measurement imply the reality of the wave function? [Preprint] Gao, Shan (2023) Does quantum cognition imply quantum minds? Journal of Consciousness Studies, 28 (3-4). pp. 100-111. Gao, Shan (2024) Does the Conscious Mind Obey the Laws of Physics? [Preprint] Gao, Shan (2019) Does the PBR theorem refute Bohmian mechanics? [Preprint] Gao, Shan (2023) Energy nonconservation in collapse theories enables superluminal signaling. [Preprint] Gao, Shan (2023) Existence of macroscopic spatial superpositions in collapse theories. Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, 86. pp. 1-5. ISSN 13552198 Gao, Shan (2022) Existence of superluminal signaling in collapse theories of quantum mechanics. [Preprint] Gao, Shan (2017) Failure of psychophysical supervenience in Everett's theory. [Preprint] Gao, Shan (2018) Failure of psychophysical supervenience in many worlds. [Preprint] Gao, Shan (2021) How do electrons move in atoms? From the Bohr model to quantum mechanics. One hundred years of the Bohr atom: Proceedings from a conference (Edited by F. Aaserud and H. Kragh). Scientia Danica. Series M: Mathematica et physica, vol. 1., 2015. pp. 450-464. ISSN 1904-5514 Gao, Shan (2022) Humeanism and the Measurement Problem. [Preprint] Gao, Shan (2022) If the global phase is real. [Preprint] Gao, Shan (2023) Is decoherence necessary for the emergence of many worlds? [Preprint] Gao, Shan (2022) Is retrocausal quantum mechanics consistent with special relativity? Foundations of Physics. ISSN 0015-9018 Gao, Shan (2023) Is superluminal signaling possible in collapse theories of quantum mechanics? Foundations of Physics, 53. ISSN 0015-9018 Gao, Shan (2019) Is there density matrix realism? [Preprint] Gao, Shan (2024) Locality Implies Reality of the Wave Function: Hardy's Theorem Revisited. [Preprint] Gao, Shan (2023) Many Worlds with both "And" and "Or". [Preprint] Gao, Shan (2016) The Meaning of the Wave Function: In Search of the Ontology of Quantum Mechanics. [Preprint] Gao, Shan (2022) Nature abhors redundancies: A no-go result for density matrix realism. [Preprint] Gao, Shan (2022) A Note on Density Matrix Realism. [Preprint] Gao, Shan (2022) On Bell's Everett (?) theory. [Preprint] Gao, Shan (2022) On Bell's Everett (?) theory. Foundations of Physics. ISSN 0015-9018 Gao, Shan (2022) On the Initial State of the Universe. [Preprint] Gao, Shan (2022) On the ontologies of quantum theories. [Preprint] Gao, Shan (2024) On the reality of the quantum state once again: A no-go theorem for psi-ontic models? Foundations of Physics, 54 (52). Gao, Shan (2022) "Perception" at a distance in EPR-Bohm experiments with reversible measurements. [Preprint] Gao, Shan (2020) Protective Measurements and the Reality of the Wave Function. The British Journal for the Philosophy of Science. ISSN 1464-3537 Gao, Shan (2020) A Puzzle for the Field Ontologists. Foundations of Physics. Gao, Shan (2023) Quantum mechanics refutes solipsism: A proof of the existence of an external world. [Preprint] Gao, Shan (2022) Quantum suicide and many worlds. [Preprint] Gao, Shan (2019) Quantum theory is incompatible with relativity: A new proof beyond Bell's theorem and a test of unitary quantum theories. [Preprint] Gao, Shan (2022) Reality of mass and charge and its implications for the meaning of the wave function. [Preprint] Gao, Shan (2024) Simplest Quantum Mechanics: Why It Is Better Than Bohmian, Everettian and Collapse Theories. [Preprint] Gao, Shan (2024) Simplest Quantum Mechanics: Why It Is Better Than Bohmian, Everettian and Collapse Theories. [Preprint] Gao, Shan (2021) Time Division Multiverse: A New Picture of Quantum Reality. [Preprint] Gao, Shan (2021) Time's Arrow Points to Many Worlds. [Preprint] Gao, Shan (2024) Understanding Branching in the Many-worlds Interpretation of Quantum Mechanics. [Preprint] Gao, Shan (2023) Understanding Time Reversal in Quantum Mechanics: A New Derivation. Foundations of Physics, 52 (114). ISSN 1572-9516 Gao, Shan (2018) Unitary quantum theories are incompatible with special relativity. [Preprint] Gao, Shan (2022) What if there are only particles in Bohmian mechanics? [Preprint] Gao, Shan (2018) What is it like to be a quantum observer? And what does it imply about the nature of consciousness? [Preprint] Gao, Shan (2021) Why Bell's Everett (?) theory is wrong. [Preprint] Gao, Shan (2019) Why mind matters in quantum mechanics. [Preprint] Gao, Shan (2018) Why protective measurement establishes the reality of the wave function. [Preprint] Gao, Shan (2016) Why protective measurement implies the reality of the wave function: Further consolidation. In: UNSPECIFIED. Gao, Shan (2024) Why the global phase is not real. Foundations of Physics, 54 (19). ISSN 1572-9516 Gao, Shan (2023) Why the ontology of Bohmian mechanics cannot include only particles or particles and the wave function. [Preprint] Gao, Shan (2022) Why the quantum equilibrium hypothesis? From Bohmian mechanics to a many-worlds theory. [Preprint] Gao, Shan (2018) Why we cannot see the tails of Schrödinger's cat. [Preprint] Gao, Shan (2019) A contradiction in Bohm's theory. [Preprint] Gao, Shan (2022) A new EPR-Bohm experiment with reversible measurements. [Preprint] Gao, Shan (2022) A no-go result for Bohmian mechanics. [Preprint] Gao, Shan (2023) A no-go result for QBism. Foundations of Physics, 51 (103). ISSN 1572-9516 Gao, Shan (2022) A no-go result for wave function realism. [Preprint] Gao, Shan (2023) A quantum observer cannot report her observation; otherwise superluminal signaling is possible. [Preprint] Gao, Shan (2022) A simple proof that the global phase is real. [Preprint] Gao, Shan (2022) A simple proof that the global phase is real. [Preprint] Gao, Shan (2018) A simple proof that the many-worlds interpretation of quantum mechanics is inconsistent. [Preprint] Gao, Shan (2020) A thought experiment in many worlds. [Preprint] Gerig, Austin (2014) Are There Many Worlds? [Preprint] Gerig, Austin (2012) The Doomsday Argument in Many Worlds. [Preprint] Gilead, Amihud (2018) CLASSICAL PHYSICS AND THE ACTUALIZATION OF QUANTUM PURE POSSIBILITIES. [Preprint] Giovanelli, Marco (2018) `Physics Is a Kind of Metaphysics': Émile Meyerson and Einstein's Late Rationalistic Realism. European Journal for the Philosophy of Science. Gisin, Nicolas (2024) Elegance, Facts, and Scientific Truths. [Preprint] Gisin, Nicolas (2020) Indeterminism in Physics and Intuitionistic Mathematics. [Preprint] Gisin, Nicolas (2020) Mathematical languages shape our understanding of time in physics. Nature Physics, 16. pp. 114-119. Gisin, Nicolas (1991) PROPENSITIES IN A NON-DETERMINISTIC PHYSICS*. Synthese, 89. pp. 287-297. ISSN 1573-0964 Gisin, Nicolas (2020) Real Numbers are the Hidden Variables of Classical Mechanics. Quantum Studies: Mathematics and Foundations, 7. pp. 197-201. Giulini, Domenico (2007) Electron Spin or ``Classically Non-Describable Two-Valuedness''. [Preprint] Giulini, Domenico (2011) On Max Born's "Vorlesungen ueber Atommechanik, Erster Band". [Preprint] Giulini, Domenico (2007) Superselection Rules. [Preprint] Giulini, Domenico (2009) Superselection Rules. [Preprint] Glick, David (2021) Book Review: French, S., & Saatsi, J. (Eds.). (2020). Scientific Realism and the Quantum. Oxford University Press. [Preprint] Glick, David (2019) Pluralist Structural Realism: The Best of Both Worlds? [Preprint] Glick, David (2021) QBism and the Limits of Scientific Realism. [Preprint] Glick, David (2021) Quantum Mechanics Without Indeterminacy. [Preprint] Glick, David (2018) Review of Richard Healey, The Quantum Revolution in Philosophy. British Journal for the Philosophy of Science. Glick, David (2016) Swapping something real. In: UNSPECIFIED. Glick, David (2019) Timelike Entanglement For Delayed-Choice Entanglement Swapping. [Preprint] Glick, David and Boge, Florian J. (2019) Is the Reality Criterion Analytic? [Preprint] Glick, David and Darby, George (2018) In Defense of the Metaphysics of Entanglement. [Preprint] Glick, David and Le Bihan, Baptiste (2023) Metaphysical Indeterminacy in Everettian Quantum Mechanics. [Preprint] Glynn, Luke and Kroedel, Thomas (2013) Relativity, Quantum Entanglement, Counterfactuals, and Causation. [Preprint] Goldstein, Sheldon and Taylor, James and Tumulka, Roderich and Zanghi, Nino (2004) Are All Particles Real? [Preprint] Goldstein, Sheldon and Zanghi, Nino (2011) Reality and the Role of the Wavefunction in Quantum Theory. [Preprint] Gomes, Henrique (2017) Back to Parmenides. In: UNSPECIFIED. Gomes, Henrique and Roberts, Bryan W. and Butterfield, Jeremy (2021) The Gauge Argument: A Noether Reason. [Preprint] Gomori, Marton (2020) On the Very Idea of Distant Correlations. [Preprint] Gomori, Marton and Hoefer, Carl (2023) Classicality and Bell's Theorem. [Preprint] Gomori, Marton and Hofer-Szabó, Gábor (2021) On the Meaning of EPR’s Reality Criterion. [Preprint] Goyal, Philip (2023) Persistence and Reidentification in Systems of Identical Quantum Particles: Towards a Post-Atomistic Conception of Matter. [Preprint] Goyal, Philip (2019) Persistence and nonpersistence as complementary models of identical quantum particles. Goyal, Philip (2022) The Role of Reconstruction in the Elucidation of Quantum Theory. [Preprint] Graffigna, Matías (2018) Outlines for a Phenomenological Foundation For de Ronde's Theory of Powers and Potentia. [Preprint] Graffigna, Matías (2016) The Possibility of a New Metaphysics for Quantum Mechanics from Meinong's Theory of Objects. Probing the Meaning of Quantum Mechanics: Superpositions, Dynamics, Semantics and Identity. pp. 280-307. Grasshoff, Gerd and Wuethrich, Adrian (2008) Bell-type Inequalities from Separate Common Causes. In: UNSPECIFIED. Greaves, Hilary (2007) On the Everettian epistemic problem. [Preprint] Greaves, Hilary (2006) Probability in the Everett interpretation. [Preprint] Greaves, Hilary (2004) Understanding Deutsch's probability in a deterministic multiverse. [Preprint] Greaves, Hilary and Myrvold, Wayne (2008) Everett and evidence. [Preprint] Griffiths, Robert B. (2011) A Consistent Quantum Ontology. [Preprint] Grimmer, Daniel (2023) The Pragmatic QFT Measurement Problem and the need for a Heisenberg-like Cut in QFT. [Preprint] Grinbaum, Alexei (2006) Reconstruction of quantum theory. [Preprint] Grinbaum, Alexei (2017) The Effectiveness of Mathematics in Physics of the Unknown. [Preprint] Groisman, Berry and Hallakoun, Na'ama and Vaidman, Lev (2013) The measure of existence of a quantum world and the Sleeping Beauty Problem. [Preprint] Gryb, Sean and Thebault, Karim P Y (2013) Time Remains. [Preprint] Guay, Alexandre (2007) Appareil, Image et Particule. [Preprint] Guay, Alexandre (2004) Geometrical aspects of local gauge symmetry. In: UNSPECIFIED. (Unpublished) Guillermin, Mathieu (2018) Compte rendu Mécanique quantique : Et si Einstein et de Broglie avaient aussi raison ? de Michel Gondran et Alexandre Gondran. Lato Sensu, revue de la Société de philosophie des sciences, 5 (2). pp. 35-38. ISSN 2295-8029 Guo, Bixin (2023) Next Best Thing—What Can Quantum Mechanics Tell Us About the Fundamental Ontology of the World? [Preprint] Gyenis, Zalán and Rédei, Miklós (2017) Common cause completability of non-classical probability spaces. [Preprint] Gábor, Hofer-Szabó (2014) Local causality and complete specification: a reply to Seevinck and Uffink. [Preprint] Gábor, Hofer-Szabó (2014) On the relation between the probabilistic characterization of the common cause and Bell's notion of local causality. [Preprint] Gábor, Hofer-Szabó (2015) Relating Bell's local causality to the Causal Markov Condition. [Preprint] Gábor, Hofer-Szabó (2011) Separate common causal explanation and the Bell inequalities. [Preprint] Gábor, Hofer-Szabó (2020) Commutativity, comeasurability, and contextuality in the Kochen-Specker arguments. [Preprint] Gábor, Hofer-Szabó (2016) How human and nature shake hands: the role of no-conspiracy in physical theories. [Preprint] Gábor, Hofer-Szabó (2023) Sequential measurements and the Kochen-Specker arguments. [Preprint] Gábor, Hofer-Szabó (2020) Three noncontextual hidden variable models for the Peres-Mermin square. [Preprint] Hagar, Amit (2007) Length Matters (I): The Einstein-Swann Correspondence and the Constructive Approach to STR. In: UNSPECIFIED. Hagar, Amit and Hemmo, Meir (2005) Explaining the Unobserved: Why Quantum Mechanics Is Not only About Information. [Preprint] Hagar, Amit and Hemmo, Meir (2006) Explaining the Unobserved: Why Quantum Theory Ain't Only About Information. [Preprint] Hagar, Amit and Korolev, Alex (2007) Quantum Hypercomputation - Hype or Computation? [Preprint] Halvorson, Hans (2001) Complementarity of representations in quantum mechanics. [Preprint] Halvorson, Hans (2000) On the nature of continuous physical quantities in classical and quantum mechanics. [Preprint] Halvorson, Hans (2003) A note on information theoretic characterizations of physical theories. [Preprint] Halvorson, Hans (2003) A note on information theoretic characterizations of physical theories. [Preprint] Halvorson, Hans and Clifton, Rob (1999) Maximal Beable Subalgebras of Quantum-Mechanical Observables. [Preprint] Halvorson, Hans and Clifton, Rob (2001) No place for particles in relativistic quantum theories? [Preprint] Halvorson, Hans and Clifton, Rob (2001) Reconsidering Bohr's reply to EPR. [Preprint] Halvorson, Hans (2020) John Bell on Subject and Object. [Preprint] Halvorson, Hans (2022) Objective description in physics. [Preprint] Halvorson, Hans (2018) To be a realist about quantum theory. [Preprint] Halvorson, Hans and Butterfield, Jeremy (2021) John Bell on 'Subject and Object': an Exchange. [Preprint] Hamilton, John and Isham, Chris and Butterfield, Jeremy (2000) A Topos Perspective on the Kochen-Specker Theorem: III. Von Neumann Algebras as the Base Category. UNSPECIFIED. Hangleiter, Dominik and Carolan, Jacques and Thebault, Karim P Y (2021) Analogue Quantum Simulation: A New Instrument for Scientific Understanding. [Preprint] Hangleiter, Dominik and Carolan, Jacques and Thebault, Karim P Y (2017) Analogue Quantum Simulation: A Philosophical Prospectus. [Preprint] Harrigan, Nicholas and Spekkens, Robert W. (2010) Einstein, Incompleteness, and the Epistemic View of Quantum States. Foundations of Physics, 40. pp. 125-157. Hartmann, Stephan and Suppes, Patrick (2009) Entanglement, Upper Probabilities and Decoherence in Quantum Mechanics. [Preprint] Healey, Richard (2007) Gauge Symmetry and the Theta Vacuum. In: UNSPECIFIED. Healey, Richard (2013) How quantum theory helps us explain. [Preprint] Healey, Richard (2011) How to Use Quantum Theory Locally to Explain "Non-local" Correlations. [Preprint] Healey, Richard (2001) On the Reality of Gauge Potentials. [Preprint] Healey, Richard (2012) Quantum Meaning. [Preprint] Healey, Richard (2012) Quantum decoherence in a pragmatist view: Part I. [Preprint] Healey, Richard (2012) Quantum decoherence in a pragmatist view: Resolving the measurement problem. [Preprint] Healey, Richard (2009) Reduction and Emergence in Bose-Einstein Condensates. [Preprint] Healey, Richard (2010) Reduction and Emergence in Bose-Einstein Condensates. [Preprint] Healey, Richard A. (2013) Causality and Chance in Relativistic Quantum Field Theories. [Preprint] Healey, Richard A. (2014) Local Causality, Probability and Explanation. [Preprint] Healey, Richard A. (2013) Observation and Quantum Objectivity. [Preprint] Healey, Richard A. (2011) Physical Composition. [Preprint] Healey, Richard A. (2015) Quantum States as Informational Bridges. [Preprint] Healey, Richard A. (2011) Quantum Theory: a Pragmatist Approach. [Preprint] Healey, Richard (2019) Beyond Bell? [Preprint] Healey, Richard (2021) Beyond Bell? [Preprint] Healey, Richard (2018) Pragmatist Quantum Realism. [Preprint] Healey, Richard (2022) Securing the Objectivity of Relative Facts in the Quantum World. [Preprint] Healey, Richard (2016) A pragmatist view of the metaphysics of entanglement. [Preprint] Healey, Richard A. (2018) Quantum Theory: Realism or Pragmatism? [Preprint] Healey, Richard A. (2023) Representation and the Quantum State. UNSPECIFIED. Healey, Richard A. (2021) Scientific Objectivity and its Limits. [Preprint] Heartspring, William (2019) A Bayesian theory of quantum gravity without gravitons. [Preprint] Heathcote, Adrian (2021) Countability and Self-Identity. [Preprint] Held, Carsten (2006) Can Quantum Mechanics be shown to be Incomplete in Principle? In: UNSPECIFIED. Held, Carsten (2012) Incompatibility of standard completeness and quantum mechanics. [Preprint] Held, Carsten (2011) The Quantum Completeness Problem. [Preprint] Held, Carsten (2023) A Presupposition of Bell's Theorem. [Preprint] Hemmo, Meir and Pitowsky, Itamar (2001) Probability and Nonlocality in Many Minds Interpretations of Quantum Mechanics. [Preprint] Hemmo, Meir and Shenker, Orly (2020) Why the Many-Worlds Interpretation of quantum mechanics needs more than Hilbert space structure. Scientific Challenges to Common Sense Philosophy. pp. 61-70. ISSN Hemmo, Meir and Shenker, Orly R. (2020) Maxwell’s Demon in Quantum Mechanics. Entropy, 22. p. 269. ISSN 1099-4300 Hemmo, Meir and Shenker, Orly R. (2010) Probability and Typicality in Deterministic Physics. [Preprint] Hensen, Bas (2010) Decoherence, the measurement problem, and interpretations of quantum mechanics. UNSPECIFIED. Herbut, Fedor (2013) On the nucleon paradigm: the nucleons are closer to reality than the protons and neutrons. [Preprint] Hermens, Ronnie (2016) Philosophy of Quantum Probability - An empiricist study of its formalism and logic. UNSPECIFIED. Hermens, Ronnie (2009) Quantum Mechanics: From Realism to Intuitionism. UNSPECIFIED. Hermens, Ronnie (2020) Completely real? A critical note on the claims by Colbeck and Renner. [Preprint] Hetzroni, Guy (2019) Gauge and Ghosts. The British Journal for the Philosophy of Science. ISSN 1464-3537 Hetzroni, Guy (2020) Relativity and Equivalence in Hilbert Space: A Principle-Theory Approach to the Aharonov-Bohm Effect. Foundations of Physics. ISSN 1572-9516 Heunen, Chris and Landsman, Klaas and Spitters, Bas (2008) The principle of general tovariance. [Preprint] Higashi, Katsuaki (2020) Hardy relations and common cause. [Preprint] Higashi, Katsuaki (2019) A no-go result on common cause approaches via Hardy relations. [Preprint] Hilgevoord, Jan (2001) Time in Quantum Mechanics. [Preprint] Hoehn, Philipp (2017) Reflections on the information paradigm in quantum and gravitational physics. [Preprint] Hoehn, Philipp A (2017) Quantum theory from rules on information acquisition. In: UNSPECIFIED. Hofer-Szabó, Gábor (2011) Bell(δ) inequalities derived from separate common causal explanation of almost perfect EPR anticorrelations. [Preprint] Hofer-Szabó, Gábor (2007) Separate- versus common-common-cause-type derivations of the Bell inequalities. [Preprint] Hofer-Szabó, Gábor and Vecsernyés, Péter (2014) Bell's local causality for philosophers. In: UNSPECIFIED. Hofer-Szabó, Gábor and Vecsernyés, Péter (2014) On the concept of Bell's local causality in local classical and quantum theory. [Preprint] Hofer-Szabó, Gábor (2022) Two concepts of noncontextuality in quantum mechanics. [Preprint] Hoiland, Dr. Paul Karl (2003) The Zero Point Source Of Accelerated Expansion. [Preprint] (Unpublished) Hoiland, Paul Karl (2003) The Non-Local to Local Space-time Map. [Preprint] (Unpublished) Holik, Federico (2013) Logic, Geometry And Probability Theory. [Preprint] Holik, Federico and Decio , Krause and Ignacio, Gómez (2012) Quantum Logical Structures For Identical Particles. [Preprint] Holik, Federico (2022) Non-Kolmogorovian Probabilities and Quantum Technologies. Entropy. Holik, Federico and Fortin, Sebastian and Bosyk, Gustavo and Plastino, Angelo (2017) On the interpretation of probabilities in generalized probabilistic models. [Preprint] Holik, Federico and Jorge, Juan Pablo and Massri, César (2020) Indistinguishability right from the start in standard quantum mechanics. [Preprint] Holik, Federico and Plastino, Angelo and Sáenz, Manuel (2013) A discussion on the origin of quantum probabilities. Annals of Physics, 340 (1). pp. 293-310. Holland, Peter and Brown, Harvey R. (2002) The non-relativistic limits of the Maxwell and Dirac equations: the role of Galilean and gauge invariance. UNSPECIFIED. (In Press) Holster, Andrew (2003) The Quantum Mechanical Time Reversal Operator. [Preprint] Holster, Andrew (2004) Time Flow Physics: Introduction to a unified theory based on time flow. [Preprint] Holster, Andrew (2003) The incompleteness of extensional object languages of physics and time reversal. Part 1. [Preprint] Holster, Andrew (2003) The incompleteness of extensional object languages of physics and time reversal. Part 2. [Preprint] Holster, Andrew (2004) A paradox in quantum measurement theory? [Preprint] Horvat, Sebastian and Toader, Iulian D. (2023) An Alleged Tension Between non-Classical Logics and Applied Classical Mathematics. [Preprint] Horvat, Sebastian and Toader, Iulian Danut (2023) An Alleged Tension between Quantum Logic and Applied Classical Mathematics. [Preprint] Horvat, Sebastian and Toader, Iulian Danut (2023) Carnap on Quantum Mechanics. [Preprint] Horvat, Sebastian and Toader, Iulian Danut (2023) Quantum logic and meaning. [Preprint] Hrushovski, Ehud and Pitowsky, Itamar (2003) Generalizations of Kochen and Specker's Theorem and the Effectiveness of Gleason's Theorem. [Preprint] Hubert, Mario (2022) Is the Statistical Interpretation of Quantum Mechanics ψ-Ontic or ψ-Epistemic? [Preprint] Hubert, Mario (2022) Review of Alyssa Ney’s 
The World in the Wave Function: A Metaphysics for Quantum Physics. [Preprint] Hubert, Mario and Romano, Davide (2017) The Wave-Function as a Multi-Field. [Preprint] Huggett, Nick (2002) Quarticles and the Identity of Indiscernibles. [Preprint] Huggett, Nick (2003) Quarticles and the Identity of Indiscernibles. In: UNSPECIFIED. Huggett, Nick and Norton, Joshua (2013) Weak Discernibility For Quanta, The Right Way. [Preprint] Huggett, Nick and Vistarini, Tiziana (2009) Entanglement Exchange and Bohmian Mechanics. [Preprint] Huggett, Nick and Wuthrich, Christian (2012) Emergent spacetime and empirical (in)coherence. [Preprint] Huggett, Nick and Wuthrich, Christian (2013) The emergence of spacetime in quantum theories of gravity. Studies in the History and Philosophy of Modern Physics, 44 (3). pp. 273-275. Huggett, Nick and Ladyman, James and Thebault, Karim P Y (2024) On the Quantum Theory of Molecules: Rigour, Idealization, and Uncertainty. [Preprint] Icefield, William (2020) Effective theory approach to the measurement problem. [Preprint] Icefield, William (2020) Uncomputable but complete physics theory of the universe. [Preprint] Ip, Pui Him (2013) The mystery behind Schroedinger's first communication: a non-historical study on the variational approach and its implications. In: UNSPECIFIED. Isham, Chris and Butterfield, Jeremy (1998) A Topos Perspective on the Kochen-Specker Theorem: I. Quantum States as Generalised Valuations. UNSPECIFIED. Jabs, Arthur (2014) An interpretation of the formalism of quantum mechanics in terms of realism. arXiv, The British Journal for the Philosophy of Science, 43. pp. 405-421. Jaeger, Gregg (2016) Grounding the randomness of quantum measurement. Philosophical Transactions of the Royal Society A, 374. Jaeger, Gregg (2019) A Realist View of the Quantum World. [Preprint] Jaimes Arriaga, Jesús Alberto and Fortin, Sebastian and Lombardi, Olimpia (2018) A new chapter in the problem of the reduction of chemistry to physics: The Quantum Theory of Atoms in Molecules. Jaksland, Rasmus (2023) Decoherence, appearance, and reality in agential realism. European Journal for Philosophy of Science. ISSN 1879-4912 Jaksland, Rasmus (2023) Distinguishing two (unsound) arguments for quantum social science. European Journal for Philosophy of Science. ISSN 1879-4912 Jaksland, Rasmus (2020) Entanglement as the world-making relation: Distance from entanglement. [Preprint] Jaksland, Rasmus (2021) An apology for conflicts between metaphysics and science in naturalized metaphysics. [Preprint] Jamali, Alireza (2022) On Theoretical Contingency of Quantum Mechanics. [Preprint] Janas, Michael and Cuffaro, Michael E. and Janssen, Michel (2019) Putting probabilities first. How Hilbert space generates and constrains them. [Preprint] Janssen, Hanneke (2008) Reconstructing Reality: Environment-Induced Decoherence, the Measurement Problem, and the Emergence of Definiteness in Quantum Mechanics. [Preprint] Jantzen, Benjamin (2010) An Awkward Symmetry: The Tension between Particle Ontologies and Permutation Invariance. [Preprint] Jantzen, Benjamin C. (2010) How Symmetry Undid the Particle: A Demonstration of the Incompatibility of Particle Interpretations and Permutation Invariance. UNSPECIFIED. Jantzen, Benjamin C. (2020) Ad hoc identity, Goyal complementarity, and counting quantum phenomena. [Preprint] Jaroszkiewicz, George (2007) The physical basis of quantum relativity. In: UNSPECIFIED. Jaura, Emma (2024) Anti-foundationalist Coherentism as an Ontology for Relational Quantum Mechanics. [Preprint] Juan, Villacrés (2018) Ontological Motivation in Obtaining Certain Quantum Equations: A Case for Panexperientialism. [Preprint] Juan Pablo, Jorge and Holik, Federico (2020) Non-deterministic semantics for quantum states. Entropy. ISSN 1099-4300 Kabel, Viktoria and de la Hamette, Anne-Catherine and Apadula, Luca and Cepollaro, Carlo and Gomes, Henrique and Butterfield, Jeremy and Brukner, Caslav (2024) Identification is Pointless: Quantum Reference Frames, Localisation of Events, and the Quantum Hole Argument. [Preprint] Kallfelz, William (2007) Embedding Fundamental Aspects of the Relational Blockworld Interpretation in Geometric (or Clifford) Algebra. [Preprint] Karakostas, Vassilios (2014) Correspondence Truth and Quantum Mechanics. Axiomathes, 24 (3). pp. 343-358. ISSN 1572-8390 Karakostas, Vassilios (2005) Forms of Quantum Nonseparability and Related Philosophical Consequences. UNSPECIFIED. Karakostas, Vassilios (2009) Humean Supervenience in the Light of Contemporary Science. UNSPECIFIED. Karakostas, Vassilios (2008) Nonseparability, Potentiality and the Context-Dependence of Quantum Objects. UNSPECIFIED. Karakostas, Vassilios (2012) Realism and objectivism in quantum mechanics. Journal for General Philosophy of Science, 43 (1). Karakostas, Vassilios and Zafiris, Elias (2017) Contextual Semantics in Quantum Mechanics from a Categorical Point of View. Synthese, 194 (3). pp. 847-886. ISSN 0039-7857 Karakostas, Vassilios and Zafiris, Elias (2020) On the Structure and Function of Scientific Perspectivism in Categorical Quantum Mechanics. The British Journal for the Philosophy of Science. ISSN Kastner, R. E. (2011) The New Possibilist Transactional Interpretation and Relativity. [Preprint] Kastner, Ruth (2005) The Afshar Two-Slit Experiment and Complementarity. [Preprint] Kastner, Ruth (2016) The Born Rule and Free Will. [Preprint] Kastner, Ruth (2004) Cramer's Transactional Interpretation and Causal Loop Problems. [Preprint] Kastner, Ruth (2014) ‘Einselection’ of Pointer Observables: The New H-Theorem? [Preprint] Kastner, Ruth (2014) Maudlin’s Challenge Refuted: A Reply to Lewis. [Preprint] Kastner, Ruth (2002) The Nature of the Controversy Over Time-Symmetric Quantum Counterfactuals. [Preprint] Kastner, Ruth (2009) The Quantum Liar Experiment in Cramer's Transactional Interpretation. [Preprint] Kastner, Ruth (2009) The Quantum Liar Experiment in Cramer's Transactional Interpretation. [Preprint] Kastner, Ruth (2009) The Quantum Liar Experiment in Cramer's Transactional Interpretation. [Preprint] Kastner, Ruth (2004) Weak Values and Consistent Histories in Quantum Theory. [Preprint] Kastner, Ruth (2010) Why Everettians Should Appreciate the Transactional Interpretation. [Preprint] Kastner, Ruth (2005) Why the Afshar Experiment Does Not Refute Complementarity. [Preprint] Kastner, Ruth E. (2011) On Delayed Choice and Contingent Absorber Experiments. ISRN Mathematical Physics. Kastner, R. E. and Schlatter, Andreas (2023) A Note on Landauer's Principle. [Preprint] Kastner, Ruth (2023) The Arrow of Time is Alive and Well but Forbidden Under the Received View of Physics. [Preprint] Kastner, Ruth (2016) Beyond Complementarity. Quantum Structural Studies. Kastner, Ruth (2024) Conventional Quantum Theory Does Not Support A Coherent Relational Account. [Preprint] Kastner, Ruth (2019) The "Delayed Choice Quantum Eraser" Neither Erases Nor Delays. [Preprint] Kastner, Ruth (2019) The "Delayed Choice Quantum Eraser" Neither Erases Nor Delays. Foundations of Physics. Kastner, Ruth (2017) Demystifying weak measurements. [Preprint] Kastner, Ruth (2016) Is There Really "Retrocausation" in Time-Symmetric Approaches to Quantum Mechanics? [Preprint] Kastner, Ruth (2017) On Quantum Non-Unitarity as a Basis for the Second Law of Thermodynamics. [Preprint] Kastner, Ruth (2017) On the Status of the Measurement Problem: Recalling the Relativistic Transactional Interpretation. [Preprint] Kastner, Ruth (2022) Physical Time as Human Time. [Preprint] Kastner, Ruth (2022) Quantum Haecceity. [Preprint] Kastner, Ruth (2021) The Relativistic Transactional Interpretation and The Quantum Direct-Action Theory. [Preprint] Kastner, Ruth (2021) Unitary Interactions Do Not Yield Outcomes: Attempting to Model ``Wigner's Friend''. [Preprint] Kastner, Ruth (2019) Unitary-Only Quantum Theory Cannot Consistently Describe the Use of Itself: On the Frauchiger-Renner Paradox. [Preprint] Kastner, Ruth (2016) Violation of the Born Rule: Implications for Macroscopic Fields. International Journal of Quantum Foundations, 2 (3). pp. 121-126. Kastner, Ruth (2020) Why the Quantum Absorber Condition is Not a Light-Tight Box. [Preprint] Kastner, Ruth and Kauffman, Stuart (2017) Are Dark Energy and Dark Matter Different Aspects of the Same Physical Process? [Preprint] Kastner, Ruth and Kauffman, Stuart and Epperson, Michael (2017) Taking Heisenberg's Potentia Seriously. [Preprint] Kastner, Ruth and Schlatter, Andreas (2023) Entropy cost of "Erasure" in Physically Irreversible Processes. [Preprint] Kastner, Ruth E. (2016) The Relativistic Transactional Interpretation: Immune to the Maudlin Challenge. [Preprint] Kearney, Peter (2020) Decoherence, the measurement problem and realism. [Preprint] Kiessling, Michael K.-H. (2019) The influence of gravity on the Boltzmann entropy of a closed universe. [Preprint] Kim, Bryce (2016) Can Everettian Interpretation Survive Continuous Spectrum? [Preprint] Kim, Bryce (2016) PBR theorem and sub-ensemble of quantum state. [Preprint] Kim, Bryce (2016) What if we have only one universe and closed timelike curves exist? [Preprint] King, Martin (2014) Idealization and Structural Explanation in Physics. In: UNSPECIFIED. King, Robbie (2021) Bipartite Measurements in Minkowski Space. [Preprint] Kjærsdam Telléus, Emilia (2022) Measuring up to the Measurement Problem: Decoherence and Bohr's ideas through the lens of the measurement problem and quantum erasers. [Preprint] Klein, Ulf (2019) From probabilistic mechanics to quantum theory. Quantum Studies: Mathematics and Foundations. Klein, Ulf (2012) Is the individuality interpretation of quantum theory wrong ? [Preprint] Klevgard, Paul A. (2014) Einstein and the Formal Equivalence of Mass and Energy. [Preprint] Klevgard, Paul (2018) Wave-Particle Duality: A New Look from First Principles. [Preprint] Klevgard, Paul A. (2021) Is the photon really a particle? Optik International Journal for Light and Electron Optics, 237 (166679). ISSN 0030-4026 Knight, Andrew (2020) Macroscopic Quantum Superpositions Cannot Be Measured, Even in Principle. [Preprint] Knight, Andrew (2021) On the (Im)possibility of Scalable Quantum Computing. [Preprint] Knight, Andrew (2022) Wigner’s Friend Depends on Self-Contradictory Quantum Amplification. [Preprint] Knobe, Joshua and Olum, Ken and Vilenkin, Alexander (2003) Philosophical Implications of Inflationary Cosmology. [Preprint] Knobe, Joshua and Olum, Ken and Vilenkin, Alexander (2005) Philosophical Implications of Inflationary Cosmology. [Preprint] Knox, Eleanor (2008) Flavour-Oscillation Clocks and the Geometricity of General Relativity. [Preprint] Knox, Eleanor (2009) Flavour-Oscillation Clocks and the Geometricity of General Relativity. [Preprint] Knox, Eleanor and Wallace, David (2023) Functionalism Fit for Physics. [Preprint] Koberinski, Adam (2022) Framework generalization: Structuring searches for new physics. [Preprint] Koberinski, Adam and Dunlap, Lucas and Harper, William L. (2017) Do the EPR Correlations Pose a Problem for Causal Decision Theory? Synthese. ISSN 1573-0964 Krause, D. (2011) The Problem of Identity and a Justification for Non-Reflexive Quantum Mechanics. [Preprint] Krause, D\'ecio and French, Steven (2005) Quantum Sortal Predicates. [Preprint] Krause, Decio (2007) Entity, but no Identity. [Preprint] Krause, Decio and Arenhart, Jonas R. B. (2015) A logical account of superpositions. [Preprint] Krause, Decio and Bueno, Otavio (2010) Ontological Issues in Quantum Theory. [Preprint] Krause, Decio and Feitosa, Hercules de Araujo (2008) Algebraic aspects of quantum indiscernibility. [Preprint] Krause, Décio (2009) Logical Aspects of Quantum (Non-)Individuality. [Preprint] Krause, Décio (2004) The Mathematics of Non-Individuality. [Preprint] Krause, Décio (2012) Paraconsistent Quasi-Set Theory. [Preprint] Krause, Décio (2005) Separability and Non-Individuality : Is it possible to conciliate (at least a form of) Einstein's realism with quantum mechanics? [Preprint] Krause, Décio and Arenhart, Jonas R. B. (2012) Individuality, quantum physics, and a metaphysics of non-individuals: the role of the formal. [Preprint] Krause, Décio and Arenhart, Jonas R. B. (2015) Presenting Nonreflexive Quantum Mechanics: Formalism and Metaphysics. [Preprint] Krause, Décio and Arenhart, Jonas R. B. (2010) Structures and Models of Scientific Theories: A Discussion on Quantum Non-Individuality. [Preprint] Krause, Décio and Arenhart, Jonas R. B. and da Costa, Newton C. A. (2015) Ontology and the mathematization of the scientific enterprise. [Preprint] Krause, Décio and Coelho, Antonio M. N. (2004) Identity, Indiscernibility, and Philosophical Claims. [Preprint] Krause, Decio (2017) Do `classical' space and time confer identity to quantum particles? [Preprint] Krause, Décio (2017) Do `classical' space and time provide identity to quantum particles? [Preprint] Krause, Décio (2024) On Dieks against the Received View. [Preprint] Krause, Décio (2022) On the discrepancies between quantum logic and classical logic. [Preprint] Krause, Décio (2022) Quantifying over indiscernibles. [Preprint] Krause, Décio (2018) Quantum Mechanics, Ontology, and Non-Reflexive Logics. [Preprint] Krause, Décio (2024) A new theory of quasi-sets without atoms: a reply to Adonai Sant'Anna. [Preprint] Krause, Décio (2023) The underlying logic is mandatory also in discussing the philosophy of quantum physics. [Preprint] Krause, Décio (2023) The underlying logic is mandatory also in discussing the philosophy of quantum physics. [Preprint] Krause, Décio and Arenhart, Jonas R. B. (2020) Identical particles in quantum mechanics: favouring the Received View. [Preprint] Krause, Décio and Arenhart, Jonas R. B. and Bueno, Otávio (2020) The Non-Individuals Interpretation of Quantum Mechanics. [Preprint] Kronz, Fred (1998) Bohm's Ontological Interpretation and Its Relations to Three Formulations of Quantum Mechanics. [Preprint] Kronz, Fred (1997) Nonseparability and Quantum Chaos. [Preprint] Kronz, Fred (2000) Range of Violations of Bell's Inequality by Entangled Photon Pairs Entangled Photon Pairs. [Preprint] Kryukov, Alexey (2002) Coordinate formalism on Hilbert manifolds: string bases of eigenvectors. [Preprint] Kryukov, Alexey (2002) Coordinate formalism on abstract Hilbert space: Kinematics of a quantum measurement. [Preprint] Kryukov, Alexey (2007) The EPR experiment: A paradox-free definition of reality. [Preprint] Kryukov, Alexey (2007) Geometric derivation of quantum uncertainty. [Preprint] Kryukov, Alexey (2008) Nine theorems on the unification of quantum mechanics and relativity. [Preprint] Kryukov, Alexey (2007) On the measurement problem for a two-level quantum system. In: UNSPECIFIED. Kryukov, Alexey (2003) On the problem of emergence of classical space-time: The quantum-mechanical approach. [Preprint] Kryukov, Alexey (2007) The double-slit experiment: A paradox-free kinematic description. [Preprint] Kryukov, Alexey (2000) Mathematics of the classical and the quantum. J. Math. Phys., 61, 082101. Kryukov, Alexey (2000) Measurement in classical and quantum physics. [Preprint] Kryukov, Alexey (2018) On observation of position in quantum theory. [Preprint] Kryukov, Alexey (2000) Physics in the space of quantum states. [Preprint] Kryukov, Alexey (2024) Schro ̈dinger dynamics of a two-state system under measurement. [Preprint] Kryukov, Alexey A (2017) On the motion of macroscopic bodies in quantum theory. [Preprint] Krátký, Matěj (2024) Physical Possibility of the Aharonov-Bohm Effect. [Preprint] Kuby, Daniel (2018) Feyerabend's Reevaluation of Scientific Practice: Quantum Mechanics, Realism and Niels Bohr. [Preprint] Kuby, Daniel and Fraser, Patrick (2021) Feyerabend on the quantum theory of measurement: A reassessment. [Preprint] Ladyman, James and Lorenzetti, Lorenzo (2023) Effective Ontic Structural Realism. [Preprint] Ladyman, James and Thebault, Karim P Y (2024) Open Systems and Autonomy. [Preprint] Lahiri, Avijit (2023) Quantum Mechanical Reality: Entanglement and Decoherence. [Preprint] Lam, Vincent (2014) Entities without intrinsic physical identity. [Preprint] Lam, Vincent (2015) Primitive ontology and quantum field theory. [Preprint] Lam, Vincent and Esfeld, Michael (2012) A dilemma for the emergence of spacetime in canonical quantum gravity. [Preprint] Lam, Vincent and Letertre, Laurie and Mariani, Cristian (2022) Quantum Metaphysics and the Foundations of Spacetime. [Preprint] Lam, Vincent and Wuthrich, Christian (2020) Spacetime functionalism from a realist perspective. [Preprint] Landsman, N.P. (Klaas) (2013) Quantization and superselection sectors III: Multiply connected spaces and indistinguishable particles. [Preprint] Landsman, Nicolaas P. (2005) Between classical and quantum. UNSPECIFIED. (In Press) Landsman, Nicolaas P. (2009) Essay Review of: Maximilian Schlosshauer, Decoherence and the Quantum-To-Classical Transition (Springer, Berlin, 2007). Studies in History and Philosophy of Modern Physics, 40 (1). pp. 94-95. ISSN 1355-2198 Landsman, Nicolaas P. (2008) Macroscopic observables and the Born rule. I. Long run frequencies. [Preprint] Landsman, Nicolaas P. (2013) Spontaneous Symmetry Breaking in Quantum Systems: Emergence or Reduction? [Preprint] Landsman, Nicolaas P. (2005) When champions meet: Rethinking the Bohr--Einstein debate. UNSPECIFIED. (In Press) Landsman, Nicolaas P. and Reuvers, Robin (2012) A Flea on Schroedinger's Cat. [Preprint] Landsman, Klaas (2020) Indeterminism and Undecidability. [Preprint] Landsman, Klaas (2016) On the notion of free will in the Free Will Theorem. [Preprint] Landsman, Klaas (2019) Quantum theory and functional analysis. [Preprint] Landsman, Klaas (2019) Randomness? What randomness? [Preprint] Laudisa, Federico (2014) Against the 'No-Go' Philosophy of Quantum Mechanics. European Journal for the Philosophy of Science, 4 (1). pp. 1-17. Laudisa, Federico (2014) Against the 'no-go' philosophy of quantum mechanics. European Journal for Philosophy of Science, 4 (1). pp. 1-17. ISSN 1879-4912 Laudisa, Federico (2008) Non-local realistic theories and the scope of the Bell theorem. Laudisa, Federico (2014) On Leggett Theories: A Reply. Foundations of Physics. ISSN 0015-9018 Laudisa, Federico (2010) The uninvited guest: 'local realism' and the Bell theorem. In: UNSPECIFIED. Laudisa, Federico (2017) Counterfactual Reasoning, Realism and Quantum Mechanics: Much Ado About Nothing? [Preprint] Laudisa, Federico (2023) The Evolution of the Bell Notion of Beable: from Bohr to Primitive Ontology. [Preprint] Laudisa, Federico (2022) The Information-Theoretic View of Quantum Mechanics and the Measurement Problem(s). [Preprint] Laudisa, Federico (2023) The Information-Theoretic View of Quantum Mechanics and the Measurement Problem(s)_version forthcoming in EJPS 2023. [Preprint] Laudisa, Federico (2017) Open Problems in Relational Quantum Mechanics. [Preprint] Laudisa, Federico (2017) Stop making sense of Bell's theorem and nonlocality? A reply to Stephen Boughn. [Preprint] Laudisa, Federico (2022) When Did Locality Become ‘Local Realism’? A Historical and Critical Analysis (1963-1978). [Preprint] Lazarovici, Dustin (2013) Why "noncommuting common causes" don't explain anything. [Preprint] Lazarovici, Dustin (2019) On the measurement process in Bohmian mechanics (reply to Gao). [Preprint] Lazarovici, Dustin (2019) Position Measurements and the Empirical Status of Particles in Bohmian Mechanics. [Preprint] Lazarovici, Dustin and Oldofredi, Andrea and Esfeld, Michael (2018) Observables and unobservables in quantum mechanics: How the no-hidden-variables theorems support the Bohmian particle ontology. Lazarovici, Dustin and Reichert, Paula (2022) The Point of Primitive Ontology. Foundations of Physics, 52. ISSN 1572-9516 Le Bihan, Soazig (2008) Understanding Quantum Phenomena. UNSPECIFIED. Le Bihan, Baptiste (2021) Alastair Wilson's The Nature of Contingency. BJPS Review of Books. Le Bihan, Baptiste (2018) Space Emergence in Contemporary Physics: Why We Do Not Need Fundamentality, Layers of Reality and Emergence. [Preprint] Le Bihan, Baptiste (2020) What does the world look like according to superdeterminism? [Preprint] Le Bihan, Soazig (2009) Fine ways to fail to secure realism. [Preprint] Le Bihan, Soazig (2020) No Quantum Threat to Relativity. UNSPECIFIED. Leegwater, Gijs (2016) An impossibility theorem for Parameter Independent hidden variable theories. Studies in the History and Philosophy of Modern Physics, 54. pp. 18-34. Leegwater, Gijs (2018) When Greenberger, Horne and Zeilinger meet Wigner's Friend. [Preprint] Lehner, Christoph Albert (1997) Quantum Mechanics and Reality: An Interpretation of Everett's Theory. [Preprint] Leifer, Matthew and Pusey, Matthew (2016) Is a time symmetric interpretation of quantum theory possible without retrocausality? In: UNSPECIFIED. Lestienne, Rémy (2021) Whitehead, la Négation de l’Instant et la Mécanique Quantique. Lato Sensu, revue de la Soci�t� de philosophie des sciences, 8 (1). pp. 1-11. ISSN 2295-8029 Letertre, Laurie (2019) Causal nonseparability and the structure of spacetime. In: UNSPECIFIED. Lewis, Peter J (2008) Reply to Papineau and Durà-Vilà. [Preprint] Lewis, Peter J. (2014) Bell's theorem, realism, and locality. [Preprint] Lewis, Peter J. (2010) Can Transactional Description of Quantum-Mechanical Reality be Considered Complete? In: UNSPECIFIED. Lewis, Peter J. (2005) Conspiracy Theories of Quantum Mechanics. [Preprint] Lewis, Peter J. (2006) Conspiracy Theories of Quantum Mechanics. [Preprint] Lewis, Peter J. (2003) Deutsch on quantum decision theory. UNSPECIFIED. (Unpublished) Lewis, Peter J. (2011) Dimension and Illusion. [Preprint] Lewis, Peter J. (2006) Empty Waves in Bohmian Quantum Mechanics. In: UNSPECIFIED. Lewis, Peter J. (2007) How Bohm’s Theory Solves the Measurement Problem. [Preprint] Lewis, Peter J. (2015) In search of local beables. [Preprint] Lewis, Peter J. (2004) Interpreting spontaneous collapse theories. [Preprint] Lewis, Peter J. (2003) Life in configuration space. UNSPECIFIED. (Unpublished) Lewis, Peter J. (2014) Measurement and metaphysics. [Preprint] Lewis, Peter J. (2005) Probability in Everettian quantum mechanics. [Preprint] Lewis, Peter J. (2008) Probability, Self-Location and Quantum Branching. In: UNSPECIFIED. Lewis, Peter J. (2006) Quantum Sleeping Beauty. [Preprint] Lewis, Peter J. (2013) Retrocausal Quantum Mechanics: Maudlin's Challenge Revisited. [Preprint] Lewis, Peter J. (2006) Uncertainty and probability for branching selves. [Preprint] Lewis, Peter (2024) A dilemma for relational quantum mechanics. [Preprint] Lewis, Peter J. (2019) Against “experience”. [Preprint] Lewis, Peter J. (2018) Bohmian Philosophy of Mind? [Preprint] Lewis, Peter J. (2017) Collapse Theories. [Preprint] Lewis, Peter J. (2021) Explicating quantum indeterminacy. [Preprint] Lewis, Peter J. (2019) On Closing the Circle. [Preprint] Lewis, Peter J. (2017) On the Status of Primitive Ontology. [Preprint] Lewis, Peter J. (2018) Pragmatism and the content of quantum mechanics. In: UNSPECIFIED. Lewis, Peter J. (2018) Quantum Mechanics and its (Dis)Contents. [Preprint] Lewis, Peter J. (2016) Quantum mechanics, emergence, and fundamentality. [Preprint] Lia, den Daas (2018) Spontaneous Symmetry Breaking and Quantum Measurement. UNSPECIFIED. Lienert, Matthias (2011) Pilot Wave Theory and Quantum Fields. [Preprint] Linnebo, Oystein and Muller, F.A. (2012) On Witness-Discernibility of Elementary Particles. Erkenntnis. Liu, Chuang (2006) How we can be free from physics. [Preprint] Lombardi, Olimpia and Castagnino, Mario (2008) A modal-Hamiltonian interpretation of quantum mechanics. [Preprint] Lombardi, Olimpia and Fortin, Sebastian and Castagnino, Mario (2010) The problem of identifying the system and the environment in the phenomenon of decoherence. In: UNSPECIFIED. Lombardi, Olimpia and Fortin, Sebastian and Castagnino, Mario and Ardenghi, Juan Sebastián (2010) Compatibility between environment-induced decoherence and the modal-Hamiltonian interpretation of quantum mechanics. In: UNSPECIFIED. Lombardi, Olimpia and Dieks, Dennis (2014) Particles in a Quantum Ontology of Properties. [Preprint] Lombardi, Olimpia and Fortin, Sebastian and Pasqualini, Matias (2022) Possibility and time in quantum mechanics. [Preprint] Lombardi, Olimpia and Holik, Federico and Vanni, Leonardo (2016) What is quantum information? [Preprint] Lopez, Cristian (2019) PhD Thesis:The Arrow of Time and Time Symmetry in non-Relativistic Quantum Mechanics. [Preprint] Lopez, Cristian (2021) The Physics and the Philosophy of Time Reversal in Standard Quantum Mechanics. [Preprint] Lopez, Cristian (2021) Time’s Direction and Orthodox Quantum Mechanics: Time Symmetry and Measurement. [Preprint] Lorenzetti, Lorenzo (2022) Functionalising the Wavefunction. [Preprint] Lorenzetti, Lorenzo (2021) A Refined Propensity Account for GRW Theory. [Preprint] Lorenzetti, Lorenzo (2021) Structuralist Approaches to Bohmian Mechanics. [Preprint] Losada, Marcelo and Fortin, Sebastian and Holik, Federico (2017) Classical limit and quantum logic. In: UNSPECIFIED. Lyons, Tony (2023) Relational space-time and de Broglie waves. [Preprint] Lyre, Holger (2010) Why Quantum Theory is Possibly Wrong. UNSPECIFIED. López-Corredoira, Martín (2016) Against free will in the contemporary natural sciences. Free Will: Interpretations, Implementations and Assessments. MacKinnon, Edward (2009) The Consistent Histories Interpretation of Quantum Mechanics. [Preprint] MacKinnon, Edward (2005) Generating Ontology: From Quantum Mechanics to Quantum Field Theory. UNSPECIFIED. (Unpublished) Macías-Bustos, Moisés and Martínez-Ordaz, María del Rosario (2023) Understanding Defective Theories: The case of Quantum Mechanics and non-individuality. [Preprint] Mahler, Guenter and Ellis, George (2008) Plato's Cave Revisited: Science at the Interface. In: UNSPECIFIED. Mallah, Jacques (2009) Many-Worlds Interpretations Can Not Imply ‘Quantum Immortality’. [Preprint] Mansouri, Alireza and Golshani, Mehdi and Karbasizadeh, Amir Ehsan (2012) Quantum Objects. [Preprint] March, Eleanor (2023) Is the Deutsch-Wallace theorem redundant? [Preprint] March, Eleanor (2024) Many worlds or one: reply to Steeger. [Preprint] March, Eleanor (2023) Non-relativistic twistor theory: Newtonian limits and gravitational collapse. [Preprint] Mariani, Cristian and Michels, Robert and Torrengo, Giuliano (2021) Plural metaphysical supervaluationism. [Preprint] Marsh, Brendan Depictions of Quantum Reality in Kent's Interpretation of Quantum Theory. UNSPECIFIED. Martínez González, Juan Camilo and Fortin, Sebastian and Lombardi, Olimpia (2019) Why molecular structure cannot be strictly reduced to quantum mechanics. Foundations of Chemistry, 21 (1). pp. 31-45. ISSN 1386-4238 Mason, Lucy (2023) Quantum Darwinism: Redundant Records of Emergence. [Preprint] Matzkin, A. and Nurock, V. (2007) Are Bohmian trajectories real? On the dynamical mismatch between de Broglie-Bohm and classical dynamics in semiclassical systems. [Preprint] Matzkin, A. and Nvrock, V (2004) The Bohmian interpretation of quantum mechanics : a pitfall for realism. [Preprint] Maudlin, Tim and Okon, Elias and Sudarsky, Daniel (2019) On the Status of Conservation Laws in Physics: Implications for Semiclassical Gravity. [Preprint] Maxwell, Nicholas (2004) Does Probabilism Solve the Great Quantum Mystery? [Preprint] Maxwell, Nicholas (2012) Has Science Established that the Cosmos is Physically Comprehensible? [Preprint] Maxwell, Nicholas (2007) Is the Quantum World Composed of Propensitons? [Preprint] Maxwell, Nicholas (2004) Special Relativity, Time, Probabilism, and Ultimate Reality. [Preprint] Maxwell, Nicholas (2017) Could Inelastic Interactions Induce Quantum Probabilistic Transitions? [Preprint] Maxwell, Nicholas (2024) Hawking Radiation A Special Case of Probabilistic Transitions of Propensiton Quantum Theory. [Preprint] Maxwell, Nicholas (2016) Relativity Theory may not have the last Word on the Nature of Time: Quantum Theory and Probabilism. [Preprint] McCabe, Gordon (2004) Does an elementary particle have a unique intrinsic state? [Preprint] McCabe, Gordon (2005) The Standard Model of particle physics in other universes. [Preprint] McCabe, Gordon (2005) The Topology of Branching Universes. [Preprint] McCabe, Gordon (2004) What is an elementary particle in the first-quantized Standard Model? [Preprint] McCabe, Gordon (2018) Loop quantum gravity and discrete space-time. [Preprint] McCabe, Gordon (2020) What is light? [Preprint] McCall, Storrs and Whitaker, Andrew and George, Glyn (2000) CONTINUOUS VS DISCRETE PROCESSES: THE PROBABILISTIC EVOLUTION OF SINGLE TRAPPED IONS. [Preprint] McCoy, C.D. (2020) Interpretive Analogies Between Quantum and Statistical Mechanics. European Journal for Philosophy of Science, 10. ISSN 1879-4912 McKenzie, Alan (2016) Some remarks on the mathematical structure of the multiverse. [Preprint] McKenzie, Kerry (2013) Priority and Particle Physics: Ontic Structural Realism as a Fundamentality Thesis. [Preprint] McQueen, Kelvin J. and Vaidman, Lev (2018) In defence of the self-location uncertainty account of probability in the many-worlds interpretation. [Preprint] Meehan, Alexander (2020) Clarifying the New Problem for Quantum Mechanics: Reply to Vaidman. [Preprint] Meehan, Alexander (2019) A New Problem for Quantum Mechanics. The British Journal for the Philosophy of Science. Meehan, Alexander (2021) States of Ignorance and Ignorance of States: Examining the Quantum Principal Principle. [Preprint] Menon, Tushar (2023) On algebraic object naturalism and metaphysical indeterminacy in quantum mechanics. [Preprint] Menon, Tushar (2024) The inferentialist guide to quantum mechanics. [Preprint] Merriam, Paul (2021) Fragmental Presentism and Quantum Mechanics. [Preprint] Merriam, Paul (2021) Perspectival QM and Presentism: a New Paradigm. [Preprint] Michel, Bitbol (2007) ONTOLOGY, MATTER AND EMERGENCE. [Preprint] Midwinter, Charles and Janssen, Michel (2012) Kuhn Losses Regained: Van Vleck from Spectra to Susceptibilities. [Preprint] Miller, Michael (2020) Infrared Cancellation and Measurement. In: UNSPECIFIED. Miller, Michael (2016) Mathematical Structure and Empirical Content. [Preprint] Miller, Michael (2020) Worldly Imprecision. [Preprint] Miller, Michael E. (2019) Fundamental, yet imprecise? In: UNSPECIFIED. Miller, Ryan (2023) Chemical Reduction and Quantum Interpretation: A Case for Thomistic Emergence. Foundations of Chemistry. ISSN 1386-4238 Miller, Ryan (2024) The Irreducibility of Chemistry to Everettian Quantum Mechanics. [Preprint] Miller, Ryan (2022) Mereological Atomism's Quantum Problems. In: UNSPECIFIED. Miller, Ryan (2020) Not Another Brick in the Wall: an Extensional Mereology for Potential Parts. [Preprint] Mitsch, Chris (2022) Hilbert-Style Axiomatic Completion: On von Neumann and Hidden Variables in Quantum Mechanics. [Preprint] Mjelva, Jørn K. (2019) The spatiotemporal problem of entanglement. [Preprint] Mohrhoff, Ulrich J. (2014) First-Person Plural Quantum Mechanics. [Preprint] Monton, Bradley (2007) Common-Sense Realism and the Unimaginable Otherness of Science. [Preprint] Monton, Bradley (2009) Critical Notice: Bas van Fraassen, Scientific Representation: Paradoxes of Perspective. [Preprint] Monton, Bradley (2000) On Dualistic Interpretations of Quantum Mechanics. [Preprint] Monton, Bradley (2005) Presentism and Quantum Gravity. [Preprint] Monton, Bradley (2003) The Problem of Ontology for Spontaneous Collapse Theories. [Preprint] Monton, Bradley (2004) Quantum Mechanics and 3N-Dimensional Space. In: UNSPECIFIED. (Unpublished) Morgan, Peter (2022) The collapse of a quantum state as a joint probability construction. J. Phys. A: Math. Theor. 55 254006 (2022), 55. Morganti, Matteo (2010) Identity in Physics: Statistics and the (Non-)Individuality of Quantum Particles. In: UNSPECIFIED. Morganti, Matteo (2007) Individual particles, properties and quantum statistics. In: UNSPECIFIED. Morganti, Matteo and Dorato, Mauro (2011) Grades of Individuality. [Preprint] Morganti, Matteo (2018) From Ontic Structural Realism to Metaphysical Coherentism. [Preprint] Moulavi Ardakani, Reza (2017) Time Reversal Invariance in Quantum Mechanics. Mozota Frauca, Alvaro (2024) Foundational Issues in Group Field Theory. Foundations of Physics, 54 (33). ISSN 1572-9516 Mozota Frauca, Alvaro (2023) Reassessing the problem of time of quantum gravity. General Relativity and Gravitation, 55 (21). Mozota Frauca, Alvaro (2024) The problem of time for non-deparametrizable models and quantum gravity. [Preprint] Muciño, Ricardo and Okon, Elias and Sudarsky, Daniel (2022) Assessing Relational Quantum Mechanics. [Preprint] Muciño, Ricardo and Okon, Elias and Sudarsky, Daniel (2021) A reply to Rovelli's response to our "Assessing Relational Quantum Mechanics". [Preprint] Mulder, Ruward A. (2024) The Classical Stance: Dennett's Criterion in Wallacian quantum mechanics. Studies in History and Philosophy of Science. ISSN 00393681 Mulder, Ruward A. (2018) Emergence, Functionalism and Pragmatic Reality in Wallacian quantum mechanics. [Preprint] Muller, F. A. (2009) Withering Away,Weakly. In: UNSPECIFIED. Muller, F.A. (2003) Refutability Revamped: How Quantum Mechanics Saves the Phenomena. UNSPECIFIED. Muller, F.A. (2013) The Rise of Relationals. [Preprint] Muller, F.A. and Seevinck, M.P. (2009) Discerning Elementary Particles. [Preprint] Muller, F.A. and Seevinck, M.P. (2006) Is Quantum Mechanics Technologically Inadequate? [Preprint] Muller, Thomas (2001) Branching space-time, modal logic, and the counterfactual conditional. [Preprint] Muller, Thomas (2007) A branching space-times view on quantum error correction. [Preprint] Muller, F.A. (2020) The Influence of Quantum Physics on Philosophy. [Preprint] Muller, F.A. (2023) Six Measurement Problems of Quantum Mechanics. [Preprint] Muller, F.A. and Leegwater, G. (2020) The Case against Factorism. Journal for the General Philosophy of Science [special issue]. pp. 1-16. Murgueitio Ramírez, Sebastián (2020) On how Epistemological Letters changed the foundations of quantum mechanics. [Preprint] Muthukrishnan, Siddharth (2024) Many Worlds as Anti-Conspiracy Theory: Locally and causally explaining a quantum world without finetuning. [Preprint] Myrvold, Wayne (2008) Chasing Chimeras. [Preprint] Myrvold, Wayne (2008) From Physics to Information Theory and Back. [Preprint] Myrvold, Wayne (2009) Nonseparability, Classical and Quantum. UNSPECIFIED. Myrvold, Wayne C. (2000) Einstein's Untimely Burial. [Preprint] Myrvold, Wayne C. (2009) From Physics to Information Theory and Back. UNSPECIFIED. Myrvold, Wayne C. (2002) Modal Interpretations and Relativity. [Preprint] Myrvold, Wayne C. (2010) Nonseparability, Classical and Quantum. [Preprint] Myrvold, Wayne C. (2016) Quantum Mechanics and Narratability. [Preprint] Myrvold, Wayne C. (2002) Relativistic Quantum Becoming. [Preprint] Myrvold, Wayne C. (2014) What is a Wavefunction? [Preprint] Myrvold, Wayne C. (2015) Lessons of Bell's Theorem: Nonlocality, yes; Action at a distance, not necessarily. [Preprint] Myrvold, Wayne C. (2020) On the Status of Quantum State Realism. [Preprint] Myrvold, Wayne C. (2017) Ontology for Collapse Theories. [Preprint] Myrvold, Wayne C. (2018) Ontology of Relativistic Collapse Theories. [Preprint] Myrvold, Wayne C. (2022) Philosophical Issues raised by Quantum Theory and its Interpretations. The Oxford Handbook of the History of Quantum Interpretations. pp. 53-75. Myrvold, Wayne C. (2020) Relativistic Constraints on Interpretations of Quantum Mechanics. [Preprint] Myrvold, Wayne C. (2020) Subjectivists about Quantum Probabilities Should be Realists about Quantum States. [Preprint] Myrvold, Wayne C. and Albert, David Z. and Callender, Craig and Ismael, Jenann (2016) Book Symposium: David Albert, After Physics. UNSPECIFIED. Newton, da Costa and Federico, Holik (2013) A formal framework for the study of the notion of undefined particle number in quantum mechanics. [Preprint] Nielsen, Simon Ellersgaard (2010) An Overview of Probability in the Everett Interpretation. UNSPECIFIED. Niestegge, Gerd (2015) Non-classical conditional probability and the quantum no-cloning theorem. Phys. Scr. 90 (2015) 095101.. Niestegge, Gerd (2020) Local tomography and the role of the complex numbers in quantum mechanics. Proc. R. Soc. A. 476:20200063. Niestegge, Gerd (2016) Quantum key distribution without the wavefunction. International Journal of Quantum Information, 15 (6). Niestegge, Gerd (2020) Quantum probability's algebraic origin. Entropy 2020, 22, 1196. ISSN 1099-4300 Niestegge, Gerd (2016) Quantum teleportation and Grover's algorithm without the wavefunction. Found Phys (2017). Niestegge, Gerd (2022) A generic approach to the quantum mechanical transition probability. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 478 (2260). Niestegge, Gerd (2020) A simple and quantum-mechanically motivated characterization of the formally real Jordan algebras. Royal Society Proceedings A, 476. 0-14. Noninski, Vesselin (2003) Classical Bell's Inequalities. [Preprint] Noninski, Vesselin (2003) EPR Paradox and the Physical Meaning of an Experiment in Quantum Mechanics. [Preprint] Noninski, Vesselin (2003) A Quantum Mechanical Measurement Leading to Simultaneous Spin-Up and Spin-Down State of a Single Electron. [Preprint] Norsen, Travis (2008) Local Causality and Completeness: Bell vs. Jarrett. [Preprint] North, Jill (2012) The Structure of a Quantum World. [Preprint] North, Jill and Baker, David John and Myrvold, Wayne C. and Ruetsche, Laura (2022) Author Meets Critics: Jill North, Physics, Structure and Reality. UNSPECIFIED. Norton, John D. (2005) Atoms, Entropy, Quanta: Einstein’s Miraculous Argument of 1905. [Preprint] Norton, John D. (2007) Einstein’s Miraculous Argument of 1905: The Thermodynamic Grounding of Light Quanta. [Preprint] Norton, John D. (2007) History of Science and the Material Theory of Induction: Einstein’s Quanta, Mercury’s Perihelion. In: UNSPECIFIED. Norton, John D. (2010) Little Boxes: The Simplest Demonstration of the Failure of Einstein’s Attempt to Show the Incompleteness of Quantum Theory. [Preprint] Norton, John D. (2014) The Simplest Exorcism of Maxwell's Demon: The Quantum Version. [Preprint] Norton, Joshua (2014) Weak Discernibility and Relations Between Quanta. In: UNSPECIFIED. Norton, John D. (2016) How Einstein Did Not Discover. [Preprint] Näger, Paul M. (2013) Causal Graphs for EPR Experiments. In: UNSPECIFIED. Näger, Paul M. (2021) Evidence for Interactive Common Causes. Resuming the Cartwright-Hausman-Woodward Debate. [Preprint] Näger, Paul M. (2018) A Stronger Bell Argument for (Some Kind of) Parameter Dependence. [Preprint] Näger, Paul M. (2013) A Stronger Bell Argument for Quantum Non-Locality. [Preprint] Okon, Elias and Callender, Craig (2010) Does Quantum Mechanics Clash with the Equivalence Principle – and Does it Matter? [Preprint] Okon, Elias and Sudarsky, Daniel (2013) Benefits of Objective Collapse Models for Cosmology and Quantum Gravity. Found Phys, 44. pp. 114-143. Okon, Elias and Sudarsky, Daniel (2015) The Black Hole Information Paradox and the Collapse of the Wave Function. [Preprint] Okon, Elias and Sudarsky, Daniel (2015) The Consistent Histories Formalism and the Measurement Problem. [Preprint] Okon, Elias and Sudarsky, Daniel (2015) Less Decoherence and More Coherence in Quantum Gravity, Inflationary Cosmology and Elsewhere. [Preprint] Okon, Elias and Sudarsky, Daniel (2014) Measurements according to Consistent Histories. Studies in History and Philosophy of Modern Physics. ISSN 1355-2198 Okon, Elias and Sudarsky, Daniel (2013) On the Consistency of the Consistent Histories Approach to Quantum Mechanics. [Preprint] Okon, Elias (2021) Defending Quantum Objectivity. [Preprint] Okon, Elias (2022) On the objectivity of measurement outcomes. [Preprint] Okon, Elias (2022) Reassessing the strength of a class of Wigner's friend no-go theorems. [Preprint] Okon, Elias and Muciño, Ricardo (2020) Wigner's convoluted friends. [Preprint] Okon, Elias and Sebastián, Miguel Ángel (2018) A Consciousness-Based Quantum Objective Collapse Model. [Preprint] Okon, Elias and Sudarsky, Daniel (2018) Losing stuff down a black hole. [Preprint] Okon, Elias and Sudarsky, Daniel (2016) A (not so?) novel explanation for the very special initial state of the universe. [Preprint] Okon, Elias and Sudarsky, Daniel (2017) The weight of collapse: dynamical reduction models in general relativistic contexts. [Preprint] Oldofredi, Andrea and Lazarovici, Dustin and Deckert, Dirk-André and Esfeld, Michael (2016) From the universe to subsystems: Why quantum mechanics appears more stochastic than classical mechanics. Oldofredi, Andrea (2021) Beables, Primitive Ontology and Beyond: How Theories Meet the World. [Preprint] Oldofredi, Andrea (2020) The Bundle Theory Approach to Relational Quantum Mechanics. [Preprint] Oldofredi, Andrea (2020) Classical Logic in Quantum Context. [Preprint] Oldofredi, Andrea (2019) Is Quantum Mechanics Self-Interpreting? [Preprint] Oldofredi, Andrea (2018) No-Go Theorems and the Foundations of Quantum Physics. [Preprint] Oldofredi, Andrea (2023) Orthodox or Dissident? The Evolution of Bohm’s Ontological Reflections in the 1950s. [Preprint] Oldofredi, Andrea (2018) Particle Creation and Annihilation: Two Bohmian Approaches. [Preprint] Oldofredi, Andrea (2022) The Relational Dissolution of the Quantum Measurement Problems. [Preprint] Oldofredi, Andrea (2019) Some remarks on the mentalistic reformulation of the measurement problem. A reply to S. Gao. [Preprint] Oldofredi, Andrea (2018) Stochasticity and Bell-type Quantum Field Theory. [Preprint] Oldofredi, Andrea (2024) Unexpected Quantum Indeterminacy. [Preprint] Oldofredi, Andrea and Calosi, Claudio (2021) Relational Quantum Mechanics and the PBR Theorem: A Peaceful Coexistence. [Preprint] Oldofredi, Andrea and Carcassi, Gabriele and Aidala, Christine A (2022) On the Common Logical Structure of Classical and Quantum Mechanics. [Preprint] Oldofredi, Andrea and Esfeld, Michael (2019) Observability, Unobservability and the Copenhagen Interpretation in Dirac's Methodology of Physics. Quanta, 8 (1). pp. 68-87. Oldofredi, Andrea and Esfeld, Michael Andreas (2018) On the possibility of a realist ontological commitment in quantum mechanics. [Preprint] Oldofredi, Andrea and López, Cristian (2020) On the Classification between ψ−Ontic and ψ−Epistemic Ontological Models. [Preprint] Oriti, Daniele (2022) Tightrope-walking rationality in action: Feyerabendian insights for the foundations of quantum mechanics. [Preprint] Oriti, Daniele (2021) The complex timeless emergence of time in quantum gravity. [Preprint] Ovidiu Cristinel, Stoica (2023) The prince and the pauper. A quantum paradox of Hilbert-space fundamentalism. [Preprint] Page, Julien and Catren, Gabriel (2013) On the Galoisian Structure of Heisenberg Indeterminacy Principle. [Preprint] Papageorgiou, Maria and Fraser, Doreen (2023) Eliminating the ‘impossible’: Recent progress on local measurement theory for quantum field theory. [Preprint] Papineau, David and Durà-Vilà, Víctor (2008) Reply to Lewis: metaphysics versus epistemology. [Preprint] Papineau, David and Durà-Vilà, Víctor (2008) A thirder and an Everettian: a reply to Lewis's 'Quantum Sleeping Beauty'. UNSPECIFIED. Pashby, Thomas (2013) Do Quantum Objects Have Temporal Parts? [Preprint] Pashby, Thomas (2014) Quantum Mechanics for Event Ontologists. In: UNSPECIFIED. Pashby, Thomas (2013) Time and Quantum Theory: A History and A Prospectus. [Preprint] Pashby, Thomas (2014) Time and the Foundations of Quantum Mechanics. University of Pittsburgh. Pashby, Thomas (2017) At what time does a quantum experiment have a result? [Preprint] Pashby, Thomas (2012) Dirac's Prediction of the Positron: A Case Study for the Current Realism Debate. pp. 440-475. Pashby, Thomas (2016) How Do Things Persist? Location Relations in Physics and the Metaphysics of Persistence. Dialectica, 70 (3). pp. 269-309. Pashby, Thomas (2013) Reply to Fleming. In: UNSPECIFIED. Pasqualini, Matias and Fortin, Sebastian (2021) Trans-statistical behavior of a multiparticle system in an ontology of properties. [Preprint] Passon, Oliver (2006) What you always wanted to know about Bohmian mechanics but were afraid to ask. [Preprint] Passon, Oliver (2018) On the interpretation of Feynman diagrams, or, did the LHC experiments observe H to gamma gamma? [Preprint] Patton, Lydia (2015) Methodological Realism and Modal Resourcefulness: Out of the Web and Into the Mine. [Preprint] Penchev, Vasil (2019) Poincaré's conjecture proved by G. Perelman by the isomorphism of Minkowski space and the separable complex Hilbert space. [Preprint] Pernu, Tuomas K. (2017) Can physics make us free? Frontiers in Physics, 5:64. Perovic, Slobodan (2007) Why were two theories (Matrix Mechanics and Wave Mechanics) deemed logically distinct, and yet equivalent, in Quantum Mechanics? In: UNSPECIFIED. Perovic, Slobodan (2017) Niels Bohr’s Complementarity and Quantum Tunneling. Peterson, Daniel (2009) Qeauty and the Books: A Response to Lewis's Quantum Sleeping Beauty Problem. [Preprint] Petrov, Assen (2008) Ten Reasons for Pursuing Multi-Commutative Quantum Theories. [Preprint] Pipa, Francisco (2024) An Indeterminacy-based Ontology for Quantum Theory. [Preprint] Pipa, Francisco (2023) A new Indeterminacy-based Quantum Theory. [Preprint] Pitowsky, Itamar (2002) Betting on the Outcomes of Measurements: A Bayesian Theory of Quantum Probability. [Preprint] Pitowsky, Itamar (2005) Quantum mechanics as a theory of probability. [Preprint] Pitowsky, Itamar (2004) Random Witnesses and the Classical Character of Macroscopic Objects. [Preprint] Pitts, J. Brian (2017) Equivalent Theories Redefine Hamiltonian Observables to Exhibit Change in General Relativity. Classical and Quantum Gravity, 34. 055008. ISSN 1361-6382, Print ISSN 0264-9381 Placek, Tomasz (2009) On propensity-frequentist models for stochastic phenomena; with applications to Bell's theorem. [Preprint] Placek, Tomasz and Wroński, Leszek (2011) Separate common causes and EPR correlations---a no-go result. [Preprint] Placek, Tomasz and Gomori, Marton (2017) Small probability space formulation of Bell's theorem. [Preprint] Pniower, Justin (2005) Particles, Objects, and Physics. UNSPECIFIED. Pooley, Oliver (2005) Points, particles, and structural realism. [Preprint] Pooley, Oliver (2006) A hole revolution, or are we back where we started? [Preprint] Price, Huw (2006) Decision-based Probabilities in the Everett Interpretation: Comments on Wallace and Greaves. [Preprint] Price, Huw (2008) Decisions, Decisions, Decisions: Can Savage Salvage Everettian Probability? [Preprint] Price, Huw (2011) Does Time-Symmetry Imply Retrocausality? How the Quantum World Says "Maybe". [Preprint] Price, Huw (2006) Probability in the Everett World: Comments on Wallace and Greaves. [Preprint] Price, Huw (2008) Toy Models for Retrocausality. [Preprint] Price, Huw (1978) An Assumption in the Interpretation of Quantum Mechanics. [Preprint] Price, Huw and Wharton, Ken (2022) Why entanglement? [Preprint] Primas, Hans (2000) Asymptotically disjoint quantum states. [Preprint] Primas, Hans (1999) Basic elements and problems of probability theory. [Preprint] Primas, Hans (1998) Emergence in exact natural science. [Preprint] Primas, Hans (1994) Endo- and exo-theories of matter. [Preprint] Primas, Hans (2002) Hidden Determinism, Probability, and Time's Arrow. [Preprint] Primas, Hans (1994) Realism and quantum mechanics. [Preprint] Primas, Hans (2003) Time-Entanglement Between Mind and Matter. [Preprint] Primas, Hans (1997) The representation of facts in physical theories. [Preprint] Primas, Hans and Esfeld, Michael (1997) A Critical Review of Wigner's Work on the Conceptual Foundations of Quantum Theory. [Preprint] Pringe, Hernán (2014) Cassirer and Bohr on Intuitive and Symbolic Knowledge in Quantum Physics. THEORIA. An International Journal for Theory, History and Foundations of Science, 29 (3). pp. 417-429. ISSN 2171-679X Prokopov, Aleksey (2021) Relational physics and the concept of continuity. [Preprint] Puleston, James (2020) Tunnelling Times in Quantum Mechanics. UNSPECIFIED, Cambridge. Pykacz, Jarosław (2014) Can many-valued logic help to comprehend quantum phenomena? [Preprint] pinkel, daniel (2024) LIVING IN CONFIGURATION SPACE. [Preprint] Read, James (2024) Review of ``Logic Meets Wigner's Friend (and their Friends)'', by Alexandru Baltag and Sonja Smets. [Preprint] Read, James (2024) Review of ``Quantum Uncertainty as an Intrinsic Clock'', by Etera K. Livine. [Preprint] Read, James and Le Bihan, Baptiste (2021) The Landscape and the Multiverse: What's the Problem? [Preprint] Redhead, Michael (2017) The Relativistic Einstein-Podolsky-Rosen Argument. [Preprint] Rickles, Dean (2005) Interpreting Quantum Gravity. [Preprint] Rickles, Dean (2004) A New Spin on the Hole Argument. [Preprint] Rickles, Dean (2023) Your Cosmos Needs You! From Nothingness to Quantum Existentialism. [Preprint] Rickles, Dean and Elshatlawy, Hatem and Arsiwalla, Xerxes (2023) Ruliology: Linking Computation, Observers and Physical Law. [Preprint] Riedel, Timotheus (2023) Relational Quantum Mechanics, quantum relativism, and the iteration of relativity. [Preprint] Riggs, Peter James (1996) SPACETIME OR QUANTUM PARTICLES: THE ONTOLOGY OF QUANTUM GRAVITY? UNSPECIFIED. Roberts, Bryan W. (2011) Does quantum time have a preferred direction? [Preprint] Roberts, Bryan W. (2010) Group Structural Realism. The British Journal for the Philosophy of Science, 61 (4). Roberts, Bryan W. (2014) A general perspective on time observables. Studies in History and Philosophy of Modern Physics. ISSN 1355-2198 Roberts, Bryan W. (2012) The simple failure of Curie's Principle. [Preprint] Roberts, Bryan W. (2018) Observables, Disassembled. Studies in History and Philosophy of Modern Physics. ISSN 1355-2198 Roberts, Bryan W. (2022) Reversing the Arrow of Time. Cambridge University Press, Cambridge. ISBN 978-1-009-12332-7 Roberts, Bryan W. (2017) Rovelli on disharmony between the quantum arrows of time. [Preprint] Roberts, Bryan W. (2016) Three Myths About Time Reversal in Quantum Theory. Philosophy of Science. Roberts, Bryan W. (2019) Time Reversal. [Preprint] Roberts, Bryan W. (2017) Unreal Observables. Philosophy of Science. Roberts, Bryan W. and Butterfield, Jeremy (2020) Time-energy uncertainty does not create particles. [Preprint] Robertson, Kate (2017) Can the Two-Time Interpretation of Quantum Mechanics Solve the Measurement Problem? [Preprint] Robertson, Katie and Franklin, Alexander (2024) Objective Credences, Epistemic Chances, and Explanations of Time Asymmetry: Review of Myrvold's 'Beyond Chance and Credence'. [Preprint] Rocha, Gustavo and Rickels, Dean and Boge, Florian J. (2021) A Brief Historical Perspective on the Consistent Histories Interpretation of Quantum Mechanics. [Preprint] Rodolfo, Gambini and Jorge, Pullin (2020) The Montevideo Interpretation: How the inclusion of a Quantum Gravitational Notion of Time Solves the Measurement Problem. [Preprint] Romano, Davide (2015) Bohmian Classical Limit in Bounded Regions. [Preprint] Romano, Davide (2020) Multi-field and Bohm's theory. [Preprint] Romano, Davide (2021) On the alleged extra-structures of quantum mechanics. [Preprint] Romano, Davide (2021) The Unreasonable Effectiveness of Decoherence. [Preprint] Romero, Gustavo E. (2013) Adversus singularitates: The ontology of space-time singularities. Foundations of Science, 18. pp. 297-306. ISSN 1233-1821 Romero, Gustavo E. (2015) Present time. Foundations of Science, 20. pp. 135-145. ISSN 1233-1821 Romero, Gustavo E. (2014) The collapse of supertasks. Foundations of Science, 19. pp. 209-216. ISSN 1233-1821 Rosa, Rodolfo (2008) The Merli-Missiroli-Pozzi Two-Slit Electron Interference Experiment. [Preprint] Rosaler, Joshua (2015) "Formal" vs. "Empirical" Approaches to Quantum-Classical Reduction. Topoi, 34 (2). pp. 325-338. Rosaler, Joshua Inter-Theory Relations in Physics: Case Studies from Quantum Mechanics and Quantum Field Theory (Doctoral Dissertation - University of Oxford, 2013). UNSPECIFIED. Rosaler, Joshua (2015) Interpretation Neutrality in the Classical Domain of Quantum Theory. [Preprint] Rosaler, Joshua (2015) Interpretation Neutrality in the Classical Domain of Quantum Theory. Studies in History and Philosophy of Modern Physics, 53. pp. 54-72. ISSN 1355-2198 Rosaler, Joshua (2014) Is de Broglie-Bohm Theory Specially Equipped to Recover Classical Behavior? In: UNSPECIFIED. Rosaler, Joshua (2018) Ehrenfest Theorems, Deformation Quantization, and the Geometry of Inter-Model Reduction. Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-Model Reduction. Rosaler, Joshua (2016) Reduction as an A Posteriori Relation. In: UNSPECIFIED. Rosaler, Joshua and Harlander, Robert (2019) Naturalness, Wilsonian Renormalization, and 'Fundamental Parameters' in Quantum Field Theory. Rovelli, Carlo (2020) Agency in Physics. [Preprint] Rovelli, Carlo (2024) Can Alice do science and have friends, in a relational quantum world? Solipsism and Relational Quantum Mechanics. [Preprint] Rovelli, Carlo (2021) Preparation in Bohmian Mechanics. [Preprint] Rovelli, Carlo (2024) Princeton seminars on physics and philosophy. [Preprint] Rovelli, Carlo The Relational Interpretation. UNSPECIFIED. Rovelli, Carlo (2013) Relative information at the foundation of physics (Second prize in the 2013 FQXi context "It From Bit or Bit From It?"). [Preprint] Rovelli, Carlo (2017) "Space is blue and birds fly through it". [Preprint] Rovelli, Carlo (2015) An argument against the realistic interpretation of the wave function. [Preprint] Rovelli, Carlo (2021) A response to the Mucino-Okon-Sudarsky’s Assessment of Relational Quantum Mechanics. [Preprint] Rovelli, Carlo and Heilbron, John (2023) Matrix Mechanics Mis-Prized: Max Born's Belated Nobelization. [Preprint] Rugh, Svend E. and Zinkernagel, Henrik (2016) Limits of time in cosmology. [Preprint] Rugh, Svend E. and Zinkernagel, Henrik (2010) Weyl's principle, cosmic time and quantum fundamentalism. [Preprint] Rugh, Svend E. and Zinkernagel, Henrik (2013) A critical note on time in the multiverse. [Preprint] Ruiz de Olano, Pablo (2017) Intimate Connections: Symmetries and Conservation Laws in Quantum versus Classical Mechanics. Philosophy of Science, 84 (5). pp. 1275-1288. ISSN 1539-767X Ruiz de Olano, Pablo (2017) Intimate Connections: Symmetries and Conservation Laws in Quantum versus Classical Mechanics. Philosophy of Science, 84 (5). pp. 1275-1288. ISSN 1539-767X Ruyant, Quentin (2016) Primitive Ontology or Primitive Relations? [Preprint] Ruyant, Quentin (2018) Can we make sense of relational quantum mechanics? Foundations of Physics, 48 (4). pp. 440-455. ISSN 1572-9516 Ruyant, Quentin (2023) Consistent histories through pragmatist lenses. Studies in History and Philosophy of Science, 98. pp. 40-48. ISSN 00393681 Ruyant, Quentin (2016) Le réalisme structural face au problème de la mesure. Lato Sensu, revue de la Société de philosophie des sciences, 3 (1). pp. 43-51. ISSN 2295-8029 Ruyant, Quentin (2021) On the Relationship Between Modelling Practices and Interpretive Stances in Quantum Mechanics. Foundations of Science. ISSN 1233-1821 Ruyant, Quentin and Guay, Alexandre (2024) Lagrangian possibilities. Synthese, 203. ISSN 1573-0964 Rynasiewicz, Robert (2013) The(?) Correspondence Principle. [Preprint] Saatsi, Juha (2017) Scientific Realism meets Metaphysics of Quantum Mechanics. [Preprint] Salom, Igor (2020) Kolmogorov complexity as a smoking gun of the hard problem of consciousness. [Preprint] Salom, Igor (2019) The hard problem and the measurement problem: a no-go theorem and potential consequences. [Preprint] San Pedro, Iñaki (2011) Causation, measurement relevance and no-conspiracy in EPR. European Journal for Philosophy of Science, 2 (1). pp. 137-156. ISSN 1879-4912 San Pedro, Iñaki (2012) Many worlds: quantum theory and reality? Analysis, 72 (2). pp. 386-395. ISSN 0003-2638 Sauer, Tilman (2007) An Einstein manuscript on the EPR paradox for spin observables. [Preprint] Saunders, Simon (2006) Are quantum particles objects? [Preprint] Saunders, Simon (2004) Complementarity and Scientific Rationality. [Preprint] Saunders, Simon (2000) Space-Time and Probability. UNSPECIFIED. Saunders, Simon (2000) Tense and Indeterminateness. [Preprint] Saunders, Simon (1994) Time, Quantum Mechanics, and Decoherence. UNSPECIFIED. Saunders, Simon (1997) Time, Quantum Mechanics, and Probability. UNSPECIFIED. Saunders, Simon (2004) What is Probability? [Preprint] Saunders, Simon and Wallace, David (2007) Branching and Uncertainty. [Preprint] Saunders, Simon and Wallace, David (2008) Branching and Uncertainty. [Preprint] Saunders, Simon (2021) Branch-counting in the Everett interpretation of quantum mechanics. Proceedings of the Royal Society A, 477. Saunders, Simon (2016) Chance in the Everett interpretation. [Preprint] Saunders, Simon (2020) The Concept 'Indistinguishable'. [Preprint] Saunders, Simon (2021) The Everett Interpretation: probability. [Preprint] Saunders, Simon (2021) The Everett interpretation: structure. [Preprint] Saunders, Simon (2024) Finite frequentism explains quantum probability. [Preprint] Saunders, Simon (2024) Finite frequentism explains quantum probability. [Preprint] Saunders, Simon (2018) The Gibbs Paradox. Entropy, 20 (8). p. 552. Saunders, Simon (2013) Indistinguishability. [Preprint] Saunders, Simon (2010) Many Worlds: an introduction. [Preprint] Saunders, Simon (1991) The Negative Energy Sea. Philosophy of Vacuum. pp. 65-109. ISSN 9780198244493 Schroeren, David (2018) The Metaphysics of Invariance. [Preprint] Schroeren, David (2020) Symmetry Fundamentalism: A Case Study from Classical Physics. [Preprint] Schroeren, David (2021) Wavefunction Realism Does Not 'Privilege Position'. [Preprint] Schwarz, Giacomo (2012) Philosophical Aspects of Spontaneous Symmetry Breaking. [Preprint] Schürmann, Thomas (2006) About conditional probabilities of events regarding the quantum mechanical measurement process. UNSPECIFIED. Sebens, Charles (2014) Killer Collapse: Empirically Probing the Philosophically Unsatisfactory Region of GRW. In: UNSPECIFIED. Seevinck, M.P. (2008) Parts and Wholes. An Inquiry into Quantum and Classical Correlations. [Preprint] Seevinck, M.P. (2008) Parts and Wholes. An Inquiry into Quantum and Classical Correlations. UNSPECIFIED. Seevinck, M.P. and Uffink, J. (2010) Not throwing out the baby with the bathwater: Bell's condition of local causality mathematically 'sharp and clean'. [Preprint] Seevinck, Michael (2006) The quantum world is not built up from correlations. UNSPECIFIED. Seevinck, Michael Patrick (2004) Holism, Physical Theories and Quantum Mechanics. [Preprint] Seifert, Vanessa A. (2020) The Strong Emergence of Molecular Structure. [Preprint] Seifert, Vanessa A. (2024) The many laws in the periodic table. [Preprint] Seifert, Vanessa A. (2019) The role of idealisations in describing an isolated molecule. Foundations of Chemistry. pp. 15-29. ISSN 1386-4238 Sen, Rathindra Nath (2013) Galilei invariance and the welcher Weg problem. In: UNSPECIFIED. Sen, Rathindra Nath (2010) Superseparability and its Physical Implications. In: UNSPECIFIED. Shan, Gao (2011) Is Gravity an Entropic Force? Shan, Gao (2002) Quantum mechanics and discontinuous motion of particles. [Preprint] Shanahan, Daniel (2014) A Case for Lorentzian Relativity. Foundations of Physics, 44. pp. 349-367. ISSN 0015-9018 Shanahan, Daniel (2014) A Case for Lorentzian Relativity. Foundations of Physics, 44. pp. 349-367. ISSN 0015-9018 Shanahan, Daniel (2015) The de Broglie Wave as Evidence of a Deeper Wave Structure. [Preprint] Shanahan, Daniel (2023) The Lorentz transformation in a fishbowl: a comment on Cheng and Read's "Why not a sound postulate?". Foundations of Physics, 53 (55). ISSN 15729516,00159018 Shanahan, Daniel (2023) Protogravity: a quantum-theoretic precursor to gravity. Spacetime Conference 2022. pp. 153-181. Shanahan, Daniel (2019) Reality and the Probability Wave. International Journal of Quantum Foundations, 5. pp. 51-68. Shanahan, Daniel "Reverse Engineering" the de Broglie Wave. UNSPECIFIED. Shanahan, Daniel (2019) What might the matter wave be telling us of the nature of matter? [Preprint] Sharlow, Mark (2007) "Charge without Charge" in the Stochastic Interpretation of Quantum Mechanics. [Preprint] Sharlow, Mark (2007) Generalizing "Charge without Charge" to Obtain Classical Analogs of Short-Range Interactions. [Preprint] Sharlow, Mark (2007) The Quantum Mechanical Path Integral: Toward a Realistic Interpretation. [Preprint] Sharlow, Mark (2007) What Branching Spacetime Might Do for Physics. [Preprint] Shaw, Robert (2019) Single-particle entanglement and three forms of ambiguity. [Preprint] Shaw, Robert (2019) Stern-Gerlach: conceptually clean or acceptably vague? [Preprint] Shech, Elay (2022) Scientific Understanding in the Aharonov-Bohm Effect. [Preprint] Shenker, Orly R. and Hemmo, Meir (2004) Quantum Decoherence and the Approach to Equilibrium (II). [Preprint] Shenker, Orly R. and Hemmo, Meir (2001) Quantum Decoherence and the Approach to Equilibrium (Part 1). [Preprint] Shenker, Orly R. and Hemmo, Meir (2006) Von Neumann's Entropy Does Not Correspond to Thermodynamic Entropy. UNSPECIFIED. Sheridan, Erin (2020) A Man Misunderstood: Von Neumann did not claim that his entropy corresponds to the phenomenological thermodynamic entropy. [Preprint] Shrapnel, Sally (2015) Discovering Quantum Causal Models. [Preprint] Shrapnel, Sally (2017) Discovering Quantum Causal Models (final). [Preprint] Silberstein, Michael (2011) Being, Becoming and the Undivided Universe: A Dialogue between Relational Blockworld and the Implicate Order Concerning the Unification of Relativity and Quantum Theory. Silberstein, Michael and Stuckey, W.M. and Cifone, Michael (2007) An Argument for 4D Blockworld from a Geometric Interpretation of Non-relativistic Quantum Mechanics. UNSPECIFIED. Silberstein, Michael and Stuckey, W. M. (2021) Beyond Causal Explanation: Einstein’s Principle Not Reichenbach’s. In: UNSPECIFIED. Silberstein, Michael and Stuckey, W. M. and McDevitt, Timothy (2022) For Whom the Bell Inequality Really Tolls. [Preprint] Singh, Mihir (2023) Classical Concepts and the Bohrian Epistemological Thesis. [Preprint] Skokowski, Paul (2021) Observing a Superposition. [Preprint] Skow, Bradford (2010) On a Symmetry Argument for the Guidance Equation. [Preprint] Slowik, Edward (2011) A Pre-History of Quantum Gravity: The Seventeenth Century Legacy and the Deep Metaphysics of Space beyond Substantivalism and Relationism. In: UNSPECIFIED. Smets, Sonja (2002) In Defense of Operational Quantum Logic. [Preprint] Smets, Sonja (2002) On Causation and a Counterfactual in Quantum Logic: The Sasaki Hook. UNSPECIFIED. (Unpublished) Smolin, Lee (2001) Matrix models as non-local hidden variables theories. [Preprint] Smolin, Lee (2000) The present moment in quantum cosmology: Challenges to the arguments for the elimination of time. [Preprint] Snoke, David (2003) What is a photon, really? [Preprint] Snoke, David (2023) Mathematical formalism for nonlocal spontaneous collapse in quantum field theory. [Preprint] Soltau, Andrew (2010) Interactive Destiny. [Preprint] Soltau, Andrew (2010) Logical Types in Quantum Mechanics. UNSPECIFIED. Soltau, Andrew (2010) Multisolipsism. [Preprint] Soltau, Andrew (2010) Multisolipsism. UNSPECIFIED. Soltau, Andrew (2010) The Quantum Mechanical Frame of Reference. UNSPECIFIED. Soltau, Andrew (2011) Times Two. [Preprint] Soltau, Andrew (2009) Transtemporal Phenomenal Consciousness. UNSPECIFIED. Soltau, Andrew (2008) Universe Superposition, Relational Quantum Mechanics, and The Reality of the No-Collapse Universe. UNSPECIFIED. Soltau, Andrew (2010) The World Hologram. UNSPECIFIED. Solé, Albert (2013) Bohmian mechanics without wave function ontology. [Preprint] Srinivasan, Radhakrishnan (2003) Platonism in classical logic versus formalism in the proposed non-Aristotelian finitary logic. [Preprint] Srinivasan, Radhakrishnan (2002) Quantum superposition justified in a new non-Aristotelian finitary logic. [Preprint] Srinivasan, Radhakrishnan (2004) Quantum superposition principle justified in a new non-Aristotelian finitary logic. [Preprint] Srinivasan, Radhakrishnan (2023) Logical foundations of physics. Resolution of classical and quantum paradoxes in the finitistic paraconsistent logic NAFL. [Preprint] Steeger, Jer (2017) Betting on Quantum Objects. [Preprint] Steeger, Jer (2022) One world is (probably) just as good as many. [Preprint] Steeger, Jer (2018) Probabilism for Stochastic Theories. [Preprint] Steeger, Jer and Feintzeig, Benjamin H. (2021) Is the Classical Limit "Singular"? [Preprint] Steeger, Jer and Teh, Nicholas (2018) Two Forms of Inconsistency in Quantum Foundations. [Preprint] Stenger, Victor J. (2007) Where do the laws of physics come from? [Preprint] Stergiou, Chrysovalantis (2015) Explaining Correlations by Partitions. Foundations of Physics. Stergiou, Chrysovalantis (2016) NOTE ON SIMPLICITY AND STATISTICAL EXPLANATIONS OF CORRELATIONS. [Preprint] Stoeltzner, Michael (2001) Bell, Bohm, and von Neumann: Some philosophical inequalities concerning No-go Theorems and the axiomatic method. [Preprint] Stoica, Cristi (2008) Smooth Quantum Mechanics. [Preprint] Stoica, Ovidiu Cristinel (2008) Convergence and free-will. [Preprint] Stoica, Ovidiu Cristinel (2008) A Direct Interpretation of Quantum Mechanics. [Preprint] Stoica, Ovidiu Cristinel (2008) A Direct Interpretation of Quantum Mechanics. [Preprint] Stoica, Ovidiu Cristinel (2008) Smooth Quantum Mechanics. [Preprint] Stoica, Ovidiu Cristinel (2023) Are observers reducible to structures? [Preprint] Stoica, Ovidiu Cristinel (2023) Asking physics about physicalism, zombies, and consciousness. [Preprint] Stoica, Ovidiu Cristinel (2022) Background freedom leads to many-worlds with local beables and probabilities. [Preprint] Stoica, Ovidiu Cristinel (2022) Born rule: quantum probability as classical probability. [Preprint] Stoica, Ovidiu Cristinel (2022) Does quantum mechanics require "conspiracy"? [Preprint] Stoica, Ovidiu Cristinel (2024) Is the Wavefunction Already an Object on Space? Symmetry, 16 (10). pp. 1-23. ISSN 2073-8994 Stoica, Ovidiu Cristinel (2023) The Relation between Wavefunction and 3D Space Implies Many Worlds with Local Beables and Probabilities. Quantum Reports, 5 (1). pp. 102-115. ISSN 2624-960X Stoica, Ovidiu Cristinel (2021) World Theory. [Preprint] Stoica, Ovidiu Cristinel (2020) The negative way to sentience. [Preprint] Strayhorn, David (2005) General Relativity and the Probability Interpretation of Everett’s Relative State Formulation. [Preprint] Struyve, Ward (2024) Lorentz invariance and quantum mechanics. [Preprint] Struyve, Ward (2023) Scope of the action principle. [Preprint] Struyve, Ward (2020) Time-reversal invariance and ontology. [Preprint] Stuckey, W. M. and Silberstein, Michael and Cifone, Michael (2007) The Relational Blockworld Interpretation of Non-relativistic Quantum Mechanics. UNSPECIFIED. Stuckey, W. M. and Silberstein, Michael and Cifone, Michael (2005) Reversing the Arrow of Explanation in the Relational Blockworld: Why Temporal Becoming, the Dynamical Brain and the External World Are All "In The Mind". [Preprint] Stuckey, W.M. (2007) Implications for a spatially discrete transition amplitude in the twin-slit experiment. UNSPECIFIED. Stuckey, William Mark and Silbserstein, Michael and Cifone, Michael (2007) Reconciling Spacetime and the Quantum: Relational Blockworld and the Quantum Liar Paradox. UNSPECIFIED. Stuckey, W. M. (2024) Schrodinger's Cat: Qbit or Cbit? [Preprint] Stuckey, W. M. and Silberstein, Michael and McDevitt, Timothy and Le, T.D. (2020) Answering Mermin’s challenge with conservation per no preferred reference frame. Scientific Reports, 10 (15771). Stöltzner, Michael (2001) The Dynamics of Thought Experiments - Comment to Atkinson. [Preprint] Sudbery, Anthony (2012) Einstein and Tagore, Newton and Blake, Everett and Bohr: the dual nature of reality. [Preprint] Sudbery, Anthony (2010) The Everett-Wheeler interpretation and the open future. In: UNSPECIFIED. Sudbery, Anthony (2011) Philosophical lessons of entanglement. AIP Conference Proceedings, 1384. pp. 7-14. ISSN 978-0-7354-0945-3 Sugio, hajime (2024) Reconstructing the Concepts of Physical Quantities and Physical Reality: From Classical Reality to Information-Theoretic `Reality'. [Preprint] Sutherland, Roderick Ian (2005) Causally Symmetric Bohm Model. [Preprint] Sutherland, Roderick Ian (2006) Causally Symmetric Bohm Model. UNSPECIFIED. Suárez, Mauricio (2015) Bohmian Dispositions. [Preprint] Suárez, Mauricio (2013) Interventions and Causality in Quantum Mechanics. [Preprint] Suárez, Mauricio (2009) PROBABILITIES, CAUSES AND PROPENSITIES IN PHYSICS (SYNTHESE LIBRARY, SPRINGER). CHAPTERS 0 & 1 (CONTENTS & INTRODUCTION). [Preprint] Suárez, Mauricio (2004) Quantum Measurements, Propensities and the Problem of Measurement. [Preprint] Suárez, Mauricio (2006) Quantum Propensities. [Preprint] Suárez, Mauricio and San Pedro, Iñaki (2009) Causal Markov, Robustness and the Quantum Correlations. [Preprint] Suárez, Mauricio and San Pedro, Iñaki (2007) EPR, Robustness and the Causal Markov Condition. [Preprint] Suárez, Mauricio (2020) Philosophy of Probability and Statistical Modeling. Elements in the Philosophy of Science series. ISSN 9781108985826 Suárez, Mauricio (2023) Quiet Causation and its Many Uses in Science. [Preprint] Swanson, Noel (2020) Antiunitary Equivalence. [Preprint] Swanson, Noel (2018) How to be a Relativistic Spacetime State Realist. [Preprint] Szabó, László E. and Gömöri, Márton and Gyenis, Zalán (2023) Questionable and Unquestionable in Quantum Mechanics. [Preprint] Sznajderhaus, Nahuel (2016) On the Received Realist View of Quantum Mechanics. [Preprint] Tanona, Scott (2002) Idealization and formalism in Bohr's approach to quantum theory. [Preprint] Tappenden, Paul (2009) Evidence and Uncertainty in Everett's Multiverse. [Preprint] Tappenden, Paul (2008) Identity and uncertainty in Everett's multiverse. [Preprint] Tappenden, Paul (2007) Saunders and Wallace on Everett and Lewis. UNSPECIFIED. Tappenden, Paul (2008) Saunders and Wallace on Everett and Lewis. [Preprint] Tappenden, Paul (2010) Varieties of Divergence: A Response to Saunders and Wallace. [Preprint] Tappenden, Paul (2012) The World as Wavefunction. [Preprint] Tappenden, Paul (2019) Everett's Multiverse and the World as Wavefunction. Quantum Reports, 1 (1). pp. 119-129. Tappenden, Paul (2019) Everettian theory as pure wave mechanics plus a no-collapse probability postulate. [Preprint] Tappenden, Paul (2017) Objective Probability and the Mind-Body Relation. [Preprint] Tappenden, Paul (2022) Pilot-Wave Theory without Nonlocality. [Preprint] Taschetto, Diana (2024) The Thermodynamic Origins and Dynamical Foundations of Quantum Discontinuity. [Preprint] Taylor, Peter The Relation Between Classical and Quantum Mechanics. UNSPECIFIED. Teh, Nicholas J. (2011) Classical Cloning and No-cloning. Studies in the History and Philosophy of Modern Physics, 43. pp. 47-63. Terekhovich, Vladislav E. (2015) Modal approaches in metaphysics and quantum mechanics. [Preprint] Terekhovich, Vladislav E. (2012) Probabilistic and Geometric Languages in the Context of the Principle of Least Action. Philosophy of Science, 52 (2). pp. 108-120. Thebault, Karim P Y (2012) Symmetry, Ontology and the Problem of Time: On the Interpretation and Quantisation of Canonical Gravity. [Preprint] Thebault, Karim P Y and Gryb, Sean (2015) Schrodinger Evolution for the Universe: Reparametrization. [Preprint] Thebault, Karim P.Y. (2014) Quantization as a guide to ontic structure. [Preprint] Thebault, Karim P Y (2024) Flipping Arrows. [Preprint] Thebault, Karim P Y (2019) The Problem of Time. [Preprint] Timpson, Christopher Gordon (2005) The Grammar of Teleportation. [Preprint] Timpson, Christopher Gordon (2003) Nonlocality and information flow: The approach of Deutsch and Hayden. [Preprint] Timpson, Christopher Gordon (2001) On the Supposed Conceptual Inadequacy of the Shannon Information. [Preprint] Timpson, Christopher Gordon (2004) Quantum Information Theory and the Foundations of Quantum Mechanics. UNSPECIFIED. (Unpublished) Timpson, Christopher Gordon and Brown, Harvey (2002) Entanglement and Relativity. [Preprint] Timpson, Christopher Gordon and Brown, Harvey (2004) Proper and Improper Separability. [Preprint] Toader, Iulian D. (2023) Rules and Meaning in Quantum Mechanics. [Preprint] Toader, Iulian Danut (2024) Distribution can be Dropped: Reply to Rumfitt. [Preprint] Tommasini, Daniele (2002) Photons uncertainty removes Einstein-Podolsky-Rosen paradox. [Preprint] Tommasini, Daniele (2002) Reality, measurement and locality in Quantum Field Theory. [Preprint] Torza, Alessandro (2021) Quantum metametaphysics. Synthese. ISSN 1573-0964 Torza, Alessandro (2017) Quantum metaphysical indeterminacy and worldly incompleteness. Synthese, 197. pp. 4251-4264. ISSN 1573-0964 Toussaint, Pablo (2023) Are All Events Created At-Once in Relational Quantum Mechanics? [Preprint] Tumulka, Roderich (2006) Determinate Values for Quantum Observables. [Preprint] Uchii, Soshichi (2017) Leibniz's Ultimate Theory. [Preprint] Uffink, Jos (2012) Reply to Gao’s ”Comment on ”How to protect the interpretation of the wave function against protective measurements”. [Preprint] Vaidman, Lev (1990) About Schizophrenic Experiences of The Neutron or Why We Should Believe in Many-Worlds Interpretation of Quantum Theory. [Preprint] Vaidman, Lev (2016) All is Psi. [Preprint] Vaidman, Lev (2015) Bell Inequality and Many-Worlds Interpretation. [Preprint] Vaidman, Lev (2001) Byrne and Hall on Everett and Chalmers. Quantum Theory: Reconsidereation of Foundations, A. Khrennikov, (ed). . pp. 409-422. Vaidman, Lev (2003) Discussion: Time-Symmetric Quantum Counterfactuals. [Preprint] Vaidman, Lev (1994) On the paradoxical aspects of new quantum experiments. [Preprint] Vaidman, Lev (2011) Probability in the Many-Worlds Interpretation of Quantum Mechanics. “Probability in Physics” ed. by Y. Ben-Menahem and M. Hemmo. - . Vaidman, Lev (2014) Quantum Theory and Determinism. [Preprint] Vaidman, Lev (2014) REVIEW: David Wallace, The Emergent Multiverse: Quantum Theory according to the Everett Interpretation. [Preprint] Vaidman, Lev (2001) Sleeping Beauty in Quantumland. Quantum Theory: Reconsidereation of Foundations, , A. Khrennikov, (ed). . pp. 408-412. Vaidman, Lev (2009) Time Symmetry and the Many-Worlds Interpretation. [Preprint] Vaidman, Lev (2022) Are there observational differences between Bohmian mechanics and other interpretations? [Preprint] Vaidman, Lev (2024) Conservation laws in the many-worlds interpretation of quantum mechanics. [Preprint] Vaidman, Lev (2019) Derivations of the Born Rule. [Preprint] Vaidman, Lev (2018) Ontology of the wave function and the many-worlds interpretation. [Preprint] Vaidman, Lev (2020) There is no new problem for quantum mechanics: a comment on Meehan (2020). [Preprint] Vaidman, Lev (2022) Transfer of quantum information in teleportation. [Preprint] Vaidman, Lev (2021) Wave function realism and three dimensions. [Preprint] Valentini, Antony (2019) Foundations of statistical mechanics and the status of the Born rule in de Broglie-Bohm pilot-wave theory. [Preprint] Van Dyck, Maarten (2003) The Roles of One Thought Experiment in Interpreting Quantum Mechanics. Werner Heisenberg meets Thomas Kuhn. [Preprint] Van Strien, Marij (2023) Why Bohm was never a determinist. [Preprint] Vanzella, Daniel A. Turolla and Butterfield, Jeremy (2024) A frame-bundle formulation of quantum reference frames: from superposition of perspectives to superposition of geometries. [Preprint] Vassallo, Antonio (2015) Can Bohmian Mechanics Be Made Background Independent? [Preprint] Vassallo, Antonio and Esfeld, Michael (2015) On the Importance of Interpretation in Quantum Physics. A Reply to Elise Crull. [Preprint] Vassallo, Antonio and Esfeld, Michael (2013) A proposal for a Bohmian ontology of quantum gravity. [Preprint] Vassallo, Antonio and Ip, Pui Him (2016) On the Conceptual Issues Surrounding the Notion of Relational Bohmian Dynamics. [Preprint] Vassallo, Antonio and Naranjo, Pedro (2023) On the Prospects of a de Broglie-Bohm-Barbour-Bertotti Theory. [Preprint] Vassallo, Antonio and Naranjo, Pedro and Koslowski, Tim (2024) A Proposal for a Metaphysics of Self-Subsisting Structures. II. Quantum Physics. [Preprint] Vassallo, Antonio and Romano, Davide (2021) The metaphysics of decoherence. [Preprint] Vaughan, Martin P (2020) The Concept of Entropic Time: A Preliminary Discussion. [Preprint] Veilahti, Antti (2017) Higher Theory and the Three Problems of Physics. [Preprint] Verde, Clelia and Cortes, Marina and Smolin, Lee (2021) Physiccs, Time and Qualia. [Preprint] Verelst, Karin and Coecke, Bob (1999) Early Greek Thought and Perspectives for the Interpretation of Quantum Mechanics: Preliminaries to an Ontological Approach. [Preprint] Verrill, Robert (2023) The EPR-Bohm Paradox and Kent’s One-World Solution. [Preprint] Vervoort, Louis (2020) The hypothesis of “hidden variables” as a unifying principle in physics. [Preprint] Vervoort, Louis and Blusiewicz, Tomasz (2020) Free will and (in)determinism in the brain: a case for naturalized philosophy. THEORIA. An International Journal for Theory, History and Foundations of Science, 35 (3). pp. 345-364. ISSN 2171-679X Vickers, Peter John (2008) Bohr's Theory of the Atom: Content, Closure and Consistency. In: UNSPECIFIED. Vickers, Peter John (2010) Historical Magic in Old Quantum Theory? [Preprint] Vidotto, Francesca (2022) The relational ontology of contemporary physics. [Preprint] Vongehr, Sascha (2011) Many Worlds Model resolving the Einstein Podolsky Rosen paradox via a Direct Realism to Modal Realism Transition that preserves Einstein Locality. [Preprint] Vákár, Matthijs (2012) Topos-Theoretic Approaches to Quantum Theory. [Preprint] valiaallori, Valia (2021) Primitive Beables are not Local Ontology: On the Relation between Primitive Ontology and Local Beables. [Preprint] valiaallori, Valia (2021) Towards a Structuralist Elimination of Properties. [Preprint] van Dongen, Jeroen (2015) Communicating the Heisenberg uncertainty relations: Niels Bohr, Complementarity and the Einstein-Rupp experiments. [Preprint] van Dongen, Jeroen (2007) Emil Rupp, Albert Einstein and the canal ray experiments on wave-particle duality: Scientific fraud and theoretical bias. [Preprint] van Dongen, Jeroen (2007) The interpretation of the Einstein-Rupp experiments and their influence on the history of quantum mechanics. [Preprint] van Dongen, Jeroen (2017) The Epistemic Virtues of the Virtuous Theorist: On Albert Einstein and His Autobiography. Epistemic Virtues in the Sciences and the Humanities. Edited by Jeroen van Dongen and Herman Paul (Boston Studies in the Philosophy and History of Science, Vol. 321). pp. 63-77. van der Lugt, Tein (2021) Relativistic limits on quantum operations. [Preprint] WILCZEK, Piotr (2008) Constructible Models of Orthomodular Quantum Logics. [Preprint] WILCZEK, Piotr (2006) Model-Theoretic Investigations into Consequence Operation (Cn) in Quantum Logics: An Algebraic Approach. [Preprint] Waegell, Mordecai and McQueen, Kelvin J. (2020) Reformulating Bell's Theorem: The Search for a Truly Local Quantum Theory. [Preprint] Wallace, David (2010) Decoherence and Ontology, or: How I Learned To Stop Worrying And Love FAPP. S. Saunders, J. Barrett, A. Kent and D. Wallace (eds.), "Many Worlds? Everett, Quantum Theory, and Wallace, David (2011) Decoherence and its Role in the Modern Measurement Problem. [Preprint] Wallace, David (2014) Deflating the Aharonov-Bohm Effect. [Preprint] Wallace, David (2005) Epistemology Quantized: circumstances in which we should come to believe in the Everett interpretation. [Preprint] Wallace, David (2006) Epistemology Quantized: circumstances in which we should come to believe in the Everett interpretation. [Preprint] Wallace, David (2011) The Everett Interpretation. [Preprint] Wallace, David (2001) Everett and Structure. [Preprint] Wallace, David (2002) Everettian Rationality: defending Deutsch's approach to probability in the Everett interpretation. [Preprint] Wallace, David (2001) Implications of quantum theory in the foundations of statistical mechanics. [Preprint] Wallace, David (2013) Inferential vs. Dynamical Conceptions of Physics. [Preprint] Wallace, David (2005) Language use in a branching Universe. [Preprint] Wallace, David (2013) Probability in physics: stochastic, statistical, quantum. [Preprint] Wallace, David (2011) A Prolegomenon to the Ontology of the Everett Interpretation. [Preprint] Wallace, David (2007) The Quantum Measurement Problem: State of Play. [Preprint] Wallace, David (2002) Quantum Probability and Decision Theory, Revisited. [Preprint] Wallace, David (2003) Quantum Probability from Subjective Likelihood: improving on Deutsch's proof of the probability rule. UNSPECIFIED. (Unpublished) Wallace, David (2005) Quantum Probability from Subjective Likelihood: improving on Deutsch's proof of the probability rule. [Preprint] Wallace, David (2013) Recurrence Theorems: a Unified Account. [Preprint] Wallace, David (2016) What is orthodox quantum mechanics? [Preprint] Wallace, David (2001) Worlds in the Everett Interpretation. [Preprint] Wallace, David (2009) A formal proof of the Born rule from decision-theoretic assumptions. [Preprint] Wallace, David and Timpson, Chris (2005) Non-locality and gauge freedom in Deutsch and Hayden's formulation of quantum mechanics. [Preprint] Wallace, David and Timpson, Christopher Gordon (2009) Quantum Mechanics on Spacetime I: Spacetime State Realism. [Preprint] Wallace, David (2017) Against Wavefunction Realism. [Preprint] Wallace, David (2018) Interpreting the quantum mechanics of cosmology. [Preprint] Wallace, David (2018) Lessons from realistic physics for the metaphysics of quantum theory. [Preprint] Wallace, David (2022) Life and Death in the Tails of the Wave Function. [Preprint] Wallace, David (2018) On the Plurality of Quantum Theories: Quantum theory as a framework, and its implications for the quantum measurement problem. [Preprint] Wallace, David (2022) On the reality of the global phase. [Preprint] Wallace, David (2023) Philosophy of Quantum Mechanics. [Preprint] Wallace, David (2016) Probability and Irreversibility in Modern Statistical Mechanics: Classical and Quantum. [Preprint] Wallace, David (2024) Quantum Systems Other Than the Universe. [Preprint] Wallace, David (2018) Spontaneous Symmetry Breaking in Finite Quantum Systems: a decoherent-histories approach. [Preprint] Wallace, David (2022) The sky is blue, and other reasons quantum mechanics is not underdetermined by evidence. [Preprint] Walstad, Allan (2010) A Critical Reexamination of the Electrostatic Aharonov-Bohm Effect. [Preprint] Walstad, Allan (2023) The Model View Meets Quantum Ontology. UNSPECIFIED. Weatherall, James Owen (2013) The Scope and Generality of Bell's Theorem. [Preprint] Webermann, Michael (2017) Does Physics Provide Us With Knowledge About the Things in Themselves ? [Preprint] Weinstein, Steven (2000) Absolute Quantum Mechanics. [Preprint] Weinstein, Steven (2012) Patterns in the Fabric of Nature. Foundational Questions Institute (FQXI). Weinstein, Steven (2012) Review of "Space, Time, and Stuff", Frank Arntzenius, OUP 2012. [Preprint] Weinstein, Steven (2002) Superluminal Signalling. [Preprint] Weinstein, Steven (1995) Undermind. [Preprint] Weinstein, Galina (2023) Debating the Reliability and Robustness of the Learned Hamiltonian in the Traversable Wormhole Experiment. [Preprint] Weinstein, Galina (2023) Navigating the Conjectural Labyrinth of the Black Hole Information Paradox. [Preprint] Weinstein, Galina (2023) Reframing the Event Horizon: The Harlow-Hayden Computational Approach to the Firewall Paradox. [Preprint] Weinstein, Galina (2023) Revisiting Nancy Cartwright's Notion of Reliability: Addressing Quantum Devices' Noise. [Preprint] Weinstein, Galina (2023) Unveiling the Connection: ER bridges and EPR in Einstein’s Research. [Preprint] Weinstein, Galina (2023) Weinstein on Berger and DiRuggiero, 'Einstein: The Man and His Mind'. H-Sci-Med-Tech. pp. 1-5. Weslake, Brad (2005) Common Causes and The Direction of Causation. [Preprint] Weslake, Brad (2006) Common Causes and The Direction of Causation. [Preprint] Wilce, Alexander (2008) Formalism and Interpretation in Quantum Theory. [Preprint] Wilce, Alexander (2019) Dynamical states and the conventionality of (non-) classicality. [Preprint] Wilce, Alexander (2016) A Royal Road to Quantum Mechanics (or Thereabouts). [Preprint] Wilhelm, Isaac (2019) Typical: A Theory of Typicality and Typicality Explanations. The British Journal for the Philosophy of Science. ISSN 1464-3537 Williams, Porter (2022) Entanglement, Complexity, and Causal Asymmetry in Quantum Theories. Foundations of Physics. Williams, Porter and Dougherty, John and Miller, Michael (2023) Cluster Decomposition and Two Senses of Isolability. [Preprint] Wilson, Alastair (2006) Modal Metaphysics and the Everett Interpretation. UNSPECIFIED. (Unpublished) Witas, Piotr (2019) On the place of subjectivity in quantum theory. [Preprint] Wohnrath Arroyo, Raoni and Becker Arenhart, Jonas Rafael (2020) Floating free from physics: the metaphysics of quantum mechanics. [Preprint] Wronski, Leszek (2013) On a Conjecture by San Pedro. [Preprint] Wuthrich, Christian (2010) Can the world be shown to be indeterministic after all? [Preprint] Wuthrich, Christian (2011) In search of lost spacetime: philosophical issues arising in quantum gravity. [Preprint] Wuthrich, Christian (2014) Putnam looks at quantum mechanics (again and again). [Preprint] Wuthrich, Christian (2014) A quantum-information-theoretic complement to a general-relativistic implementation of a beyond-Turing computer. [Preprint] Yasmineh, Salim (2023) Non-local Building Blocks of Spacetime. [Preprint] Yau, Hou (2015) Can time have a more dynamical role in a quantum field? [Preprint] Yau, Hou Ying (2013) Reconsidering Born's Postulate and Collapse Theory. [Preprint] Zafiris, Elias and Karakostas, Vassilios (2013) A Categorial Semantic Representation of Quantum Event Structures. Foundations of Physics, 43 (9). pp. 1090-1123. ISSN 1572-9516 Zahedi, Ramin (A.) (2015) On Discrete Physics (Digital Philosophy/Digital Cosmology) and the Cellular Automaton: A Perfect Mathematical Deterministic Structure for Reality – as A Huge Computational Simulation. [Preprint] Zalamea, Federico (2016) Chasing Individuation: Mathematical Description of Physical Systems. [Preprint] Zalamea, Federico (2015) The Mathematical Description of a Generic Physical System. Topoi. pp. 1-10. ISSN 1572-8749 Zalamea, Federico (2017) The Two-fold Role of Observables in Classical and Quantum Kinematics. [Preprint] Zhong, Shengyang (2021) Quantum States: An Analysis via the Orthogonality Relation. Synthese. ISSN 1573-0964 Zimmermann, Rainer E. (2001) Recent Conceptual Consequences of Loop Quantum Gravity. Part I: Foundational Aspects. [Preprint] Zinkernagel, Henrik (2015) Are we living in a quantum world? Bohr and quantum fundamentalism. One hundred years of the Bohr atom: Proceedings from a conference (Edited by F. Aaserud and H. Kragh). Scientia Danica. Series M: Mathematica et physica, vol. 1., 2015. pp. 419-434. ISSN 1904-5514 Zinkernagel, Henrik (2016) Niels Bohr on the wave function and the classical/quantum divide. Studies in History and Philosophy of Modern Physics, 53. pp. 9-19. ISSN 1355-2198 Zinkernagel, Henrik (2011) Some trends in the philosophy of physics. THEORIA. An International Journal for Theory, History and Foundations of Science, 26 (2). pp. 215-241. ISSN 2171-679X Zinkernagel, Henrik (2006) The philosophy behind quantum gravity. UNSPECIFIED. Zinkernagel, Henrik (2022) Aesthetic Motivation in Quantum Physics: Past and Present. Annalen der Physik, 534 (9). pp. 2200283 (1-6). Ávila, Partricio and Okon, Elias and Sudarsky, Daniel and Wiedemann, Martín (2022) Quantum spatial superpositions and the possibility of superluminal signaling. [Preprint]
{"url":"http://philsci-archive.pitt.edu/view/subjects/quantum-mechanics.html","timestamp":"2024-11-11T20:01:29Z","content_type":"application/xhtml+xml","content_length":"476041","record_id":"<urn:uuid:0de8c2b7-c3b6-4d31-9141-4b4ff35f8867>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00156.warc.gz"}