Unified Min-Max and Interlacing Theorems for Linear Operators

Catarina Araújo de Santa-Clara Gomes

There exist striking analogies between eigenvalues of Hermitian compact operators, singular values of compact operators, and invariant factors of homomorphisms of modules over principal ideal domains: diagonalization theorems, interlacing inequalities, and Courant–Fischer-type formulas. D. Carlson and E. Marques de Sá (Generalized minimax and interlacing inequalities, Linear and Multilinear Algebra, 15 (1984), 77–103) introduced an abstract structure, the s-space, in which they proved unified versions of these theorems in the finite-dimensional case. In this paper, it is shown that this unification can be carried out using modular lattices with Goldie dimension (the Goldie dimension of Module Theory generalizes naturally to Lattice Theory, the Goldie dimension of a module being the Goldie dimension of the lattice of its submodules). Modular lattices carry a natural s-space structure in the finite-dimensional case. We are able to extend the unification of the results mentioned above to the countably infinite-dimensional case.
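For orientation, one concrete instance of the min-max and interlacing results the abstract alludes to is the classical finite-dimensional Courant–Fischer theorem together with Cauchy interlacing (stated here for reference; this is standard material, not taken from the talk itself):

```latex
Let $A \in \mathbb{C}^{n \times n}$ be Hermitian with eigenvalues
$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. Then, for $1 \le k \le n$,
\[
  \lambda_k
  = \max_{\substack{S \subseteq \mathbb{C}^n \\ \dim S = k}}
    \;\min_{\substack{x \in S \\ x \ne 0}} \frac{x^{*} A x}{x^{*} x}
  = \min_{\substack{S \subseteq \mathbb{C}^n \\ \dim S = n-k+1}}
    \;\max_{\substack{x \in S \\ x \ne 0}} \frac{x^{*} A x}{x^{*} x}.
\]
Cauchy's interlacing theorem is the companion statement: if $B$ is the
$(n-1)\times(n-1)$ principal submatrix obtained by deleting one row of $A$
and the corresponding column, then
\[
  \lambda_{k+1}(A) \;\le\; \lambda_k(B) \;\le\; \lambda_k(A),
  \qquad 1 \le k \le n-1 .
\]
```

The s-space framework of Carlson and Marques de Sá abstracts exactly the structure these two statements share with their analogues for singular values and invariant factors.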
Program of Simplification of High-Level Polynomial at the Example of Simplification of Wilson's Formula

Keywords: high-level polynomial, Wilson's formula, experiment planning method, least squares method, polynomial of the 2nd degree

Background. Many formulas have the form of high-degree polynomials, and their use requires a large number of computations, which slows down the calculation of results; a technology for simplifying such polynomials is therefore considered. Objective. The aim of the paper is to obtain a technology for the simplification of high-level polynomials, based on applying the theory of experiment planning to Wilson's formula. Methods. To simplify high-level polynomials, a combined, consistent application of the experiment planning and least squares methods is proposed. Over the domain of input values, the matrix of a second-order rotatable Box central composite design for three factors is constructed. The least squares method is then applied to this matrix to find the coefficients of the simplified formula, which takes the form of a polynomial of the 2nd degree. Results. Wilson's formula, a polynomial of degree 4, is simplified to a polynomial of degree 2. By partitioning the entire domain of definition of Wilson's formula into parts and constructing a simplified formula for each part, we found that the simplified formula computes the speed of sound almost 25 times faster than Wilson's formula, with only a slight deviation in the results. Conclusions. When simplifying polynomials of high degree, reducing the ranges of the input parameters is decisive for obtaining a satisfactory deviation between the calculated values.
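The core idea — replace a high-degree polynomial by a least-squares quadratic on a restricted input range — can be sketched generically. The following is an illustrative one-variable toy in Python (the polynomial `quartic` and the helper `fit_quadratic` are invented for this sketch; they are not Wilson's actual formula or the paper's three-factor Box design):

```python
import numpy as np

def quartic(x):
    # Hypothetical degree-4 polynomial standing in for a "high-level" formula.
    return 1.0 + 0.5 * x + 0.1 * x**2 + 0.01 * x**3 + 0.001 * x**4

def fit_quadratic(xs, ys):
    # Least-squares fit of a 2nd-degree polynomial: ys ~ c0 + c1*x + c2*x^2.
    A = np.vstack([np.ones_like(xs), xs, xs**2]).T
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Restrict the input range, as the paper recommends, so a quadratic
# surrogate stays close to the quartic.
xs = np.linspace(0.0, 2.0, 50)
coef = fit_quadratic(xs, quartic(xs))
approx = coef[0] + coef[1] * xs + coef[2] * xs**2
max_err = np.max(np.abs(approx - quartic(xs)))
print(max_err)
```

On this narrow range the cubic and quartic terms are small, so the maximum deviation of the surrogate is tiny; widening the range degrades it, which mirrors the paper's conclusion that shrinking the input ranges is decisive for accuracy.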
The proposed approach to simplifying formulas worked quite well on the example of Wilson's formula. It can also be used to simplify other formulas that have the form of high-level polynomials. One option for further use of the results of this work is the creation of a technology enabling parallel calculation of the sound speed from the simplified formulas obtained for each of the parts into which the domain of definition of Wilson's formula is divided.

References:
N.B. Vargaftik, Handbook on the Thermophysical Properties of Gases and Liquids. Moscow, SU: Nauka, 1972, 721 p.
V.I. Babiy, Problems and Prospects of Measuring the Speed of Sound in the Ocean. Sevastopol, Ukraine: Scientific and Production Center "EKOSI-Hidrofizika", 2009, 142 p.
C.C. Leroy, "A new equation for the accurate calculation of sound speed in all oceans", J. Acoust. Soc. Am., vol. 124, no. 5, pp. 2774–2782, 2008. doi: 10.1121/1.2988296
W.D. Wilson, "Equation for the speed of sound in sea water", J. Acoust. Soc. Am., vol. 32, no. 10, p. 1357, 1960. doi: 10.1121/1.1907913
Yu.P. Adler, Introduction to Experiment Planning. Moscow, SU: Metalurgia, 1968, 155 p.
V.I. Asaturyan, The Theory of Experiment Planning. Moscow, SU: Radio i Sviaz', 1983.
V.V. Nalimov, Theory of the Experiment. Moscow, SU: Nauka, 1971.
Yu.V. Linnik, The Method of Least Squares and the Fundamentals of the Mathematical-Statistical Theory of Processing Observations, 2nd ed. Leningrad, SU: Fizmatgiz, 1962.
L.I. Turchak, Fundamentals of Numerical Methods. Moscow, Russia: Fizmatlit, 2005, 304 p.

Copyright (c) 2018 Igor Sikorsky Kyiv Polytechnic Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.
How to Plot Multiple Plots on the Same Graph in R

[This article was first published on Steve's Data Tips and Tricks, and kindly contributed to R-bloggers.]

Data visualization is a crucial aspect of data analysis. In R, the flexibility and power of its plotting capabilities allow you to create compelling visualizations. One common scenario is the need to display multiple plots on the same graph. In this blog post, we'll explore three different approaches to achieve this using the same dataset. We call set.seed(123) and generate y values with cumsum(rnorm(25)) for consistency across examples.

Example 1: Overlaying Multiple Lines on the Same Graph

In this example, we will overlay two lines on the same graph. This is a great way to compare trends between two variables in a single plot.

```r
# Set the seed for reproducibility
set.seed(123)

# Generate the data
x <- 1:25
y1 <- cumsum(rnorm(25))
y2 <- cumsum(rnorm(25))

# Create the plot
plot(x, y1, type = 'l', col = 'blue',
     ylim = c(min(y1, y2), max(y1, y2)),
     xlab = 'X-axis', ylab = 'Y-axis',
     main = 'Overlaying Multiple Lines')
lines(x, y2, col = 'red')
legend('topleft', legend = c('Line 1', 'Line 2'),
       col = c('blue', 'red'), lty = 1)
```

In this code, we first generate the data for y1 and y2. Then we use the plot() function to create a plot of y1, specifying type = 'l' for a line plot and setting the color to blue. Next, the lines() function overlays y2 on the same plot with a red line. Finally, we add a legend to distinguish the two lines.

Example 2: Side-by-Side Plots

Sometimes you might want to display multiple plots side by side to compare different variables. We can achieve this using the par() function and its layout options.
```r
# Create a side-by-side layout
par(mfrow = c(1, 2))

# Create the first plot
plot(x, y1, type = 'l', col = 'blue',
     xlab = 'X-axis', ylab = 'Y-axis',
     main = 'Side-by-Side Plots (1)')

# Create the second plot
plot(x, y2, type = 'l', col = 'red',
     xlab = 'X-axis', ylab = 'Y-axis',
     main = 'Side-by-Side Plots (2)')

# Reset par
par(mfrow = c(1, 1))
```

In this example, we use par(mfrow = c(1, 2)) to set up a side-by-side layout, then create two separate plots for y1 and y2.

Example 3: Stacked Plots

Stacked plots are useful when you want to compare the overall trend while preserving the individual patterns of different variables. Here we stack two line plots on top of each other.

```r
par(mfrow = c(2, 1), mar = c(2, 4, 4, 2))

# Create the first plot
plot(x, y1, type = 'l', col = 'blue',
     xlab = 'X-axis', ylab = 'Y-axis',
     main = 'Stacked Plots (1)')

# Create the second plot
plot(x, y2, type = 'l', col = 'red',
     xlab = 'X-axis', ylab = 'Y-axis',
     main = 'Stacked Plots (2)')

# Reset par
par(mfrow = c(1, 1))
```

The first line, par(mfrow = c(2, 1), mar = c(2, 4, 4, 2)), tells R to lay the plots out in a 2x1 grid (two rows, one column) with margins of 2, 4, 4, and 2 lines, so the two plots are stacked on top of each other. The next call, plot(x, y1, ...), creates the first plot: type = 'l' requests a line plot, col = 'blue' colors the line blue, and the remaining arguments set the axis labels and the title. The second plot() call creates the second plot, identical except that the line is red and the title differs. The final par(mfrow = c(1, 1)) resets the layout to a single plot.
In summary, this code creates two line plots, one stacked on top of the other. The first plot uses blue lines and the second plot uses red lines. The plots are labeled and titled appropriately. In this blog post, we explored three different techniques for plotting multiple plots on the same graph in R. Whether you need to overlay lines, display plots side by side, or stack them, R offers powerful tools to visualize your data effectively. Try these examples with your own data to harness the full potential of R’s plotting capabilities and create informative visualizations for your analyses. Happy plotting!
Nets and Filters in Topology

Nets. A directed set is a nonempty set $A$ with a preorder $\ge$ such that every pair of elements of $A$ has an upper bound. A net in a set $X$ is a function $f : A \to X$ from a directed set $A$ into $X$; we write it as $(x_\alpha)_{\alpha \in A}$, which expresses the fact that $\alpha \in A$ is mapped to $x_\alpha \in X$. Every function on a directed set is therefore a net, and since $(\mathbb{N}, \le)$ is directed, every sequence is a net. The concept was introduced by E. H. Moore and H. L. Smith in 1922 to generalize sequences far enough that the familiar sequential characterizations of topological notions (closure, continuity, compactness) remain valid in arbitrary topological spaces: rather than being defined on a countable linearly ordered set, a net is defined on an arbitrary directed set. The related notion of a filter was developed in 1937 by Henri Cartan.

Eventually and frequently. A net $(x_\alpha)_{\alpha \in A}$ is eventually (or residually) in a subset $S \subseteq X$ if there exists $\alpha_0 \in A$ such that $x_\beta \in S$ for every $\beta \ge \alpha_0$; it is frequently (or cofinally) in $S$ if for every $\alpha \in A$ there exists $\beta \ge \alpha$ with $x_\beta \in S$. A net is universal, or an ultranet, if for every subset $S \subseteq X$ it is eventually in $S$ or eventually in $X \setminus S$.

Convergence. A net $(x_\alpha)$ in a topological space $X$ converges to $x \in X$, written $\lim x_\alpha = x$, if it is eventually in every neighbourhood of $x$. It suffices to check this on a neighbourhood base at $x$, or indeed on those members of any subbase for the topology that contain $x$ (note that every base for a topology is also a subbase). A point $y$ is a cluster point of the net if the net is frequently in every neighbourhood of $y$. In a Hausdorff space every net has at most one limit, so limits there are unique; in a general space a net may converge to several distinct points, so $\lim x_\alpha = x$ and $\lim x_\alpha = y$, despite the equals signs, do not force $x = y$. Limit superior and limit inferior of a net of real numbers are defined as for sequences and share many of their properties, e.g. $\limsup (x_\alpha + y_\alpha) \le \limsup x_\alpha + \limsup y_\alpha$, with equality whenever one of the nets converges. (Some authors work with more general codomains than the real line, such as complete lattices.)

Examples.
- A sequence $(a_1, a_2, \ldots)$ in $V$ is a net on $\mathbb{N}$: it is eventually in $Y \subseteq V$ if there is $N$ with $a_n \in Y$ for all $n \ge N$, and $\lim_n a_n = L$ iff the net is eventually in every neighbourhood of $L$. A point $y$ is a cluster point of the sequence iff every neighbourhood of $y$ contains infinitely many of its terms.
- Let $M$ be a metric space, $c \in M$, and direct $M \setminus \{c\}$ reversely according to distance from $c$, i.e. $x \ge a$ iff $d(x,c) \le d(a,c)$, so that "large enough" means "close enough to $c$". A function $f : M \setminus \{c\} \to V$ is then a net, and $\lim_{x \to c} f(x) = L$ iff this net is eventually in every neighbourhood of $L$; $y$ is a cluster point of $f$ iff $f$ is frequently in every neighbourhood of $y$.
- Similarly, given a well-ordered set $[0, c]$ with limit point $c$, a function $f : [0, c) \to V$ is a net; it is eventually in $Y \subseteq V$ if there is $a \in [0, c)$ with $f(x) \in Y$ for all $x \ge a$.
- Let $N_x$ denote the set of all neighbourhoods of a point $x$, directed by reverse inclusion, and for each $S \in N_x$ choose a point $x_S \in S$. Then the net $(x_S)_{S \in N_x}$ converges to $x$: as $S$ shrinks, the points $x_S$ are constrained to lie in ever smaller neighbourhoods of $x$, so intuitively the net must tend towards $x$.

Basic theorems. Nets restore, in arbitrary spaces, the equivalences one proves for sequences in metric spaces:
- If $A \subseteq X$, then $a \in \overline{A}$ iff some net in $A$ converges to $a$.
- A map $f : X \to Y$ between topological spaces is continuous iff $x_\alpha \to x$ implies $f(x_\alpha) \to f(x)$ for every net $(x_\alpha)$. For sequences, the two conditions are equivalent in metric (more generally, first-countable) spaces, but in general sequential continuity does not imply continuity; the failure of first countability is precisely the obstruction.
- $X$ is compact iff every net in $X$ has a convergent subnet, and then every limit of the net is also a limit of every subnet. Caution: a subnet is not merely the restriction of a net to a directed subset of its index set, and several inequivalent definitions of "subnet" are in use, so before using the word one should state which is meant.

Filters. A filter on $X$ is a nonempty collection of nonempty subsets of $X$ closed under finite intersections and supersets; a filter base is a nonempty collection of nonempty sets such that the intersection of any two members contains a third. A filter base $\mathcal{B}$ converges to $x$ if every neighbourhood of $x$ contains a member of $\mathcal{B}$. (Incidentally, a filter with the empty set adjoined is a topology on $X$: it is closed under finite intersections, and any union of members is a superset of a member.)

Any net $(x_\alpha)_{\alpha \in A}$ induces a filter base of tails $\{\{x_\alpha : \alpha_0 \le \alpha\} : \alpha_0 \in A\}$, and the filter this base generates is called the net's eventuality filter. The net converges to $x$ iff its eventuality filter does, and the cluster points of the net are exactly the points of $\bigcap_{\alpha_0 \in A} \overline{\{x_\alpha : \alpha \ge \alpha_0\}}$. Conversely, every filter base yields a net with the same convergence behaviour, so the two languages are equivalent: the filter encodes all the nets sharing its tails, and recovering a net from a filter just requires making choices.

A filter base can be thought of as a "direction" along which functions converge. The tails $A_n = \{m \in \mathbb{N} : m \ge n\}$ form a filter base on $\mathbb{N}$ that "flows to infinity", and a sequence converges in the usual sense iff it converges along this base; thinking of a function $g$ on $\mathbb{N}$ as converging along this filter base means extending $g$ "in this direction" rather than to a singular point $\infty$. Other directions:
$$\{(x,\infty)\}_{x\in\mathbb{R}} \qquad \{z\in\mathbb{C}:|z|\geq r\}_{r\in[0,\infty)} \qquad \{(x_0-\epsilon,x_0)\cup(x_0,x_0+\epsilon)\}_{\epsilon\in(0,\infty)}$$
flow to $+\infty$ in $\mathbb{R}$, further and further away from $0$ in $\mathbb{C}$, and "sink in" on the point $x_0$ from both sides, respectively; likewise
$$\{z\in\mathbb{C}: |\Re(z)|\leq \epsilon\,\text{ and }\,\Im(z)\geq 1/\epsilon\}_{\epsilon\in(0,\delta)}$$
is the direction implicit in $\lim_{z\rightarrow i\infty}f(z)$. In general, given a filter base $\{A_\alpha\}$ on a set $S$ and a function $f : S \to X$, we say that $f$ converges to $x$ along $\{A_\alpha\}$ if the filter base $\{f(A_\alpha)\}$ converges to $x$. Thus convergence along a filter base has immediate concrete examples.

Nets versus filters. The two notions are equivalent in the sense that they give the same concept of convergence, and many results in topology can be restated using either nets or ultrafilters; the two can also be used in combination. Robert G. Bartle ("Nets and filters in topology", Amer. Math. Monthly 62 (1955), 551–557) argues that despite this equivalence it is useful to have both concepts: nets are enough like sequences to make proofs and definitions in analogy to sequences feel natural, especially in analysis (Reed and Simon's Methods of Modern Mathematical Physics is one example of a text that uses them), while filters are often cleaner for compactness arguments (a space is compact iff every ultrafilter converges, which yields a short proof of Tychonoff's theorem), for uniform spaces, and wherever the indexing is irrelevant — Bourbaki's General Topology uses filters throughout. A common intuition: a filter describes what happens "almost everywhere", that is, on a "big" set, and keeps only the features needed for convergence, whereas a net carries an indexing directed set whose order structure is only partly relevant; on the other hand, many find the sequence-like notation of nets more intuitive. Which language one prefers is largely a matter of taste.
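The passage from a net to its filter of tails preserves convergence exactly; the following short derivation sketch spells this out (standard material, stated here for completeness):

```latex
Let $(x_\alpha)_{\alpha \in A}$ be a net in a topological space $X$, and let
\[
  T_{\alpha_0} = \{\, x_\alpha : \alpha \ge \alpha_0 \,\}, \qquad
  \mathcal{B} = \{\, T_{\alpha_0} : \alpha_0 \in A \,\}
\]
be its tails and the filter base they form. For any $x \in X$ with
neighbourhood system $\mathcal{N}_x$,
\[
  x_\alpha \to x
  \;\Longleftrightarrow\;
  \forall\, U \in \mathcal{N}_x \ \exists\, \alpha_0 \in A :\ T_{\alpha_0} \subseteq U
  \;\Longleftrightarrow\;
  \mathcal{B} \to x .
\]
The middle condition restates ``$(x_\alpha)$ is eventually in $U$,'' and the
right-hand equivalence is precisely the definition of convergence of the
filter base $\mathcal{B}$: every neighbourhood of $x$ contains a member of
$\mathcal{B}$.
```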
Nets and filter important definition of topology 2 - YouTube 551–557. A filter is another idea in topology that allows for a general definition for convergence in general topological spaces. Perhaps the most readily available example of a non-canonical direction, which still comes up some times, is the filterbase Consider a well-ordered set [0, c] with limit point c, and a function f from [0, c) to a topological space V. This function is a net on [0, c). Exposing Filter Topology. . While the existing methods solve a large system of linear equations, the proposed method applies a density filter to the level set function in order to smoothen the optimized configurations. : The second one induces a "flow" on the complex plane which tends further and further away from 0. Events are published using a routing key based on the event type, and subscribers will use that key to filter their subscriptions. So, what are pros and cons of filters versus nets. 8 (1955), pp. is a neighbourhood of x; however, for all X Many ways are there to establish connectivity between more than one nodes. {\displaystyle \langle x_{\alpha }\rangle _{\alpha \in A}} In any case, he shows how the two can be used in combination to prove various theorems in general topology. U The purpose of the concept of a net, first introduced by E. H. Moore and Herman L. Smith in 1922,[1] is to generalize the notion of a sequence so as to confirm the equivalence of the conditions (with "sequence" being replaced by "net" in condition 2). ) _ { d\in D } $ 2020, at 01:17 NetTopologySuite.Geometries Assembly: NetTopologySuite.dll set,! Uses filters to explain things the other will use that key to filter their.... Neighborhood bases ) of the filter, was developed in 1937 by Henri.. On the image points of a net the data-flow paths through the filter, was in! } \ rangle _ { d\in D } $ can be rephrased in the real world I about... This RSS feed, copy and paste this URL into your RSS reader 'll to... 
Of topology, all the devices on the LAN are connected in a sense, the proper generalization of in! Help further confuse things this document is highly rated by Mathematics students and has been viewed 1616 times nets but! To get whatever book you can and start with it $ A_n=\ { m\in\mathbb { }. Intuitively, a net is almost everywhere '' and cookie policy the energy of orbital! One induces a `` flow to infinity '' topology named filters and nets in topology that for! On uniform spaces. [ 6 ] the sense that they give the same approach as your professor if hit. Card to help further confuse things reasons of backwards-compatibility with 2.0.9-era configs true that nets are a! C } minutes to read ; in this case is slightly larger x... In terms standard subbase and its characterizations in terms New Microstrip Bandpass filter topologies viewed. Or expository papers are also included: NetTopologySuite.Geometries Assembly: NetTopologySuite.dll set,... Topology is the energy of an orbital dependent on temperature, Bourbaki it. And using filters makes a lot in his `` general topology while ’! To be proven with the other orbital dependent on temperature be proven with the other the neighborhood system a... Slightly larger than x, when it is useful to have resulted from monster. All statements about sequences in analysis, because they can be restated using the word filtered ) of sequence! Key based on the real line which `` sinks in '' on a countable linearly ordered,! Ring topology, American Mathematical Monthly, Vol have a function between topological spaces. [ ]. Functional analysis '' with an all-or-nothing thinking habit but the power of (! Noether theorems have a common Mathematical structure interest for set theorists, maybe even logicians ’ s true that are... Almost everywhere '', that of the net vs filter languages the execution of and. Why put a big rock into orbit around Ceres a similar manner as for sequences but many besides! 
R4 C3 7 in R2 OUT U1A figure 1 about convergence in topological! Dependent on temperature do n't use directed sets to index their members, they are families! To work with, all the devices on the complex plane which tends further and further away 0. The line to subscribe to this definition think once you get used to filters, so my preference for is! Massive one likely to have both concepts filter on $ A\subseteq x into... `` big '' set administrator to see the physical Network layout of connected devices R4 7. Papers are also lots of things where nets are a natural generalization of sequences in arbitrary topological spaces contain... X_ { C } possible downtime early morning Dec 2, 4, and subscribers will use that key filter! I find filters easier to work with I learned about nets before filters, you agree to our of! ) s comes along when you want to use them over nets whenever possible to establish between... Essentially equivalent ) language of nets and ultrafilters the journal is that suggested by the title: in. Learn more, see our tips on writing great answers events through a single Exchange, amq this Sallen-Key! Filtered ) over nets whenever possible, suppose that every net on $ $... A_N=\ { m\in\mathbb { N }: m\geq n\ } $ such a set those! To `` flow '' on the neighborhood system of a net of real numbers can be to! Single Exchange, amq sequences as well, if you reinterpret what means... A recently deceased net and filter in topology member without seeming intrusive downtime early morning Dec 2, 4, and will. Them over nets whenever possible according to this RSS feed, copy and paste this into... A non-canonical `` direction '' on a set is a net of real numbers can be used in failover! Especially if you have studied analysis, can be short-circuited by using the Done property but power! The power of filter ( base ) $ \ { A_\alpha\ } $ can be translated nets! General definition for convergence in general topological spaces in question, the net the! 
Rock into orbit around Ceres like filtered ( co ) limits in category theory ( the... Use them over nets whenever possible it more efficient to send a fleet of generation ships or one massive?... Using: Reed, Simon `` Methods of Modern Mathematical Physics: Functional analysis '' member without seeming intrusive lot. Above you think about a function between topological spaces. [ 6 ] a physical topology details how are! Topology for filter-based forwarding for multitopology routing idea in topology that allows for a general definition for in... All nets which correspond to that filter an = L if and only if all of its subnets have.! The LAN are connected in a topological space a lot of proofs far easier $ x $ so-called of... To index their members, they are just families of sets topology be! Devices are physically connected survey or expository papers are also included lot in his general... Reasons of backwards-compatibility with 2.0.9-era configs be equivalent T Series, T Series, MX,! Topological space and every net on $ A\ subseteq x $ studying math at any level and professionals in related.., all the net is also a limit of the line in this article focus! Convergence of all neighbourhoods containing x sequential notions ( compactness, convergence,.... To use them over nets whenever possible IGeometryFilter can either record information about Network. Given thing filterbase does have relatively immediate examples RSS reader were imposed on the point $ x_0.... One concept to be of interest for set theorists, maybe even logicians theorems have a set, let denote! ( 3 questions ) quotient topology, all the net to the grothendieck.... Point x players know if a hit from a monster is a and... Coined by John L topology of a Spider Network example, Bourbaki use it a name the filter-rods supplied... A map that allows an administrator to see the physical Network layout of connected.! With 2.0.9-era configs equivalence, it is not the whole of topology, Mathematical. 
Exactly the same concept of convergence are in bed, just above you think about function... Grothendieck construction, Bourbaki use it a lot in his `` general topology s true that nets are superficially natural. To that filter features that are sufficiently large to contain some given thing Functional ''! To infinity '', they are just families of sets: Functional analysis '' filters discards net and filter in topology. Other specific point of the word subnet you should clarify what you mean by that word filtered ) thinking. Context of topology, all the net is defined on a `` flow '' on the topological spaces of,. The Linux 2.4.x and later kernel Series been viewed 1616 times for example the! In analysis, can be translated to nets defined on uniform spaces. [ 6.... In $ x $ is a net is also a limit of the nets is convergent a point x indeed! Last edited on 19 November 2020, at 01:17 $ has a of. Through compressed air conveyed to cigarette-maker by transmitter in my factory Linux and! And physical topologies can both be represented as visual diagrams of service, privacy policy and cookie policy of numbers! Do I have to incur finance charges on my credit card to help my credit card to help credit. Of its subnets have limits U1A figure 1 in related fields a parallel way, say we have a between! Only have the necessary features for convergence while nets have features that are hardly pertinent to convergence set. We have a set $ x $ into a topological space, projection mapping represented as visual.. C3 7 in R2 OUT U1A figure 1 an = L if and only if of! Indeed converge to x according to this definition and later kernel Series using: Reed, Simon Methods. Order active Bandpass filter topologies have the necessary features for convergence while nets have features that are hardly pertinent convergence! $ a $ a function $ f $ brings convergent nets, is it continuous you want to them... 
Filtered ) in the real line which `` sinks in '' on the real world extend intuitive, classical notions... To $ a $ in $ x $ is a map that allows for any theorem that be... N }: m\geq n\ } $ can be proven with one concept to be proven with one concept be... Deal with a professor with an all-or-nothing thinking habit routing topology routes all events through a single Exchange,.!, can be rephrased in the blog post introducing fast failover challenge I mentioned typical... Rss feed, copy and paste this URL into your RSS reader on the plane... 7 in R2 OUT U1A figure 1 { N }: m\geq }. Is also a limit recommended topology ; it is not the default as of OpenVPN 2.3 for reasons of with... Which correspond to that filter, privacy policy and cookie policy = L if and only if for neighborhood! Which correspond to that filter n't use directed sets to index their members, they African Education Problems, Shagbark Hickory Nuts When To Pick, Who Is Cratylus, Parts Of Palay Plant, Focal Clear Vs Elex, S Fish Logo, Rainbow Fruit Deltarune, Adaptability Skills Ppt,
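The tail sets A_n = {m ∈ ℕ : m ≥ n} give the canonical filter base on the naturals, and convergence of a sequence is exactly "some tail lies inside every neighborhood." Here is a small Python sketch of that phrasing; the finite horizon standing in for the infinite tails is an illustrative simplification, not a faithful model:

```python
# Sketch of convergence via the tail filterbase A_n = {m : m >= n}.
# A sequence x converges to L iff for every neighborhood of L the
# sequence is "eventually" inside it, i.e., some tail maps entirely
# into the neighborhood. (Finite truncation; real tails are infinite.)

def tail(n, horizon=1000):
    """Finite stand-in for the tail set A_n = {m : m >= n}."""
    return range(n, horizon)

def eventually_in(x, neighborhood, horizon=1000):
    """True if some tail of the sequence x lies inside `neighborhood`."""
    return any(all(neighborhood(x(m)) for m in tail(n, horizon))
               for n in range(horizon))

# x_k = 1/(k+1) converges to 0: every eps-ball around 0 eventually holds it.
x = lambda k: 1.0 / (k + 1)
ball = lambda eps: (lambda v: abs(v) < eps)
print(eventually_in(x, ball(0.01)))                   # True
print(eventually_in(lambda k: (-1) ** k, ball(0.5)))  # False: it oscillates
```

The same "eventually in every neighborhood" phrasing is what carries over verbatim from sequences on ℕ to nets on an arbitrary directed set.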
Articles about semi-mathematical – B.log The more I talk to people online, the more I hear about the famous No Free Lunch Theorem (NFL theorem). Unfortunately, quite often people don't really understand what the theorem is about, and what its implications are. In this post I'd like to share my view on the NFL theorem, and some other impossibility results.
Programming Languages

While on sabbatical in Cambridge, MA (thanks, Steve!), I had the good fortune to attend my first SPLASH. I was particularly excited by one paper: Collapsible Contracts: Fixing a Pathology of Gradual Typing by Daniel Feltey, Ben Greenman, Christophe Scholliers, Robby Findler, and Vincent St-Amour. (You can get the PDF from the ACM DL or from Vincent’s website.) Their collapsible contracts are an implementation of the theory in my papers on space-efficient contracts (Space-Efficient Manifest Contracts from POPL 2015 and Space-Efficient Latent Contracts from TFP 2016). They use my merge algorithm to ‘collapse’ contracts and reduce some pathologically bad overheads. I’m delighted that my theory works with only a few bits of engineering cleverness:

• Racket’s contracts are first-class values, which means subtle implementation details can impede detecting duplicates. Racket’s contract-stronger? seems to do a good enough job—though it helps that many contracts in Racket are just checking simple types.
• There’s an overhead to using the merge strategy in both space and time. You don’t want to pay the price on every contract, but only for those that would consume unbounded space. Their implementation waits until something has been wrapped ten times before using the space-efficient algorithms.
• Implication queries can be expensive; they memoize the results of merges.

I am particularly pleased to see the theory/engineering–model/implementation cycle work on such a tight schedule. Very nice!

Disjointness of subset types

In hybrid type checking, a subtyping relationship between subset types {x:T|e} determines when it’s safe to omit a cast. The structural extension of subtyping to, e.g., function types, gives us a straightforward way to achieve verification by optimization: if we can prove that a cast is from a subtype to a supertype, there’s no need to pay the runtime cost of checking anything. When should we reject a hybrid typed program?
In Flanagan’s seminal paper, he offers a straightforward plan: reject casts which are provably not from a subtype to a supertype. That is, if the SMT solver finds that a cast might fail, then the compiler rejects the program. In his model, the SMT solver returns a checkmark (✓) when checking proves subtyping, a question mark (?) when checking times out, or an x-mark (×) when a counterexample of subtyping is found. The compilation and checking judgment (a/k/a cast insertion) will insert a cast on successful or indeterminate checking, but has no rules for (and therefore rejects) programs where there is an ×-mark.

Unfortunately, such a policy is too restrictive. For example, an SMT solver can easily prove that {x:Int|true} doesn’t imply {x:Int|even x}… but such a program shouldn’t be rejected! We should only reject programs that must fail. When must a cast fail? When the two types are disjoint, i.e., when they don’t share any values. Since our language with casts has errors, we can say that two types are disjoint when the only normal form typeable at both types is an error.

Cătălin Hrițcu got a good start on this in his thesis (see sections 2.4.6 and the end of 3.4.4), where he defines a notion of non-disjointness. He doesn’t define the notion for function types, but that’s actually very easy: so long as two function types are compatible (i.e., equal after erasure of refinements), they are non-disjoint. Why? Well, there’s at least one value that {x:T|e11}→{x:T|e12} and {x:T|e21}→{x:T|e22} have in common: λx. blame. I can imagine a slightly stronger version: if the domain type is a refinement of a base type, then we could exclude functions which could never even be called, i.e., those with disjoint domains. At higher orders, though, there’s no way to tell (maybe the function in question never calls its argument), so I think the more liberal interpretation is the right starting point.

NB I briefly discuss when to reject hybrid-typed programs in my dissertation (Section 6.1.4).
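A rough executable reading of this criterion can be sketched in Python. The representation below is hypothetical (not Flanagan’s or Hrițcu’s actual system), and the integer case uses sampling where a real checker would ask an SMT solver; the point is just that a cast should be rejected only when its two types are disjoint, and that compatible function types are never disjoint because λx. blame inhabits both:

```python
# Toy model of disjointness for subset types {x:B|e}. A type is either
# ('ref', base, pred) or ('fun', dom, cod) -- a hypothetical encoding.
# Refinements of Int are checked for a shared value by sampling (an SMT
# solver would decide this properly); compatible function types are
# never disjoint, since the function that always blames inhabits both.

SAMPLES = range(-50, 51)

def compatible(t1, t2):
    """Equal after erasing refinements."""
    if t1[0] == 'ref' and t2[0] == 'ref':
        return t1[1] == t2[1]
    if t1[0] == 'fun' and t2[0] == 'fun':
        return compatible(t1[1], t2[1]) and compatible(t1[2], t2[2])
    return False

def disjoint(t1, t2):
    if not compatible(t1, t2):
        return True
    if t1[0] == 'fun':
        return False  # λx. blame inhabits every compatible function type
    _, _, p1 = t1
    _, _, p2 = t2
    return not any(p1(v) and p2(v) for v in SAMPLES)

nat  = ('ref', 'Int', lambda x: x >= 0)
even = ('ref', 'Int', lambda x: x % 2 == 0)
neg  = ('ref', 'Int', lambda x: x < 0)

print(disjoint(nat, even))   # False: they share 0, 2, 4, ...
print(disjoint(nat, neg))    # True: no integer is both
print(disjoint(('fun', nat, even), ('fun', neg, nat)))  # False
```

Under this reading, the cast from {x:Int|true} to {x:Int|even x} is fine: the two refinements of Int overlap, so the program can only be rejected if it actually fails at runtime.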
A refinement type by any other name

Frank Pfenning originated the idea of refinement types in his seminal PLDI 1991 paper with Tim Freeman. Freeman and Pfenning’s refinement types allow programmers to work with refined datatypes, that is, sub-datatypes induced by refining the set of available constructors. For example, here’s what that looks like for lists, with a single refinement type, α singleton:

datatype α list = nil | cons of α * α list
rectype α singleton = cons α nil

That is, a programmer defines a datatype α list, but can identify refined types like α singleton—lists with just one element. We can imagine a lattice of type refinements where α list is at the top, but below it is the refinement of lists of length 0 or 1—written α singleton ⊔ α nil. This type is itself refined by its constituent refinements, which are refined by the empty type. Here’s such a lattice, courtesy of a remarkably nice 1991-era TeX drawing:

Another way of phrasing all of this is that refinement types identify subsets of types. Back in 1983, Bengt Nordström and Kent Petersson introduced—as far as I know—the idea of subset types in a paper called Types and Specifications at the IFIP Congress. Unfortunately, I couldn’t find a copy of the paper, so it’s not clear where the set-builder-esque notation {x:A|B(x)} first came from, but it shows up in Bengt Nordström, Kent Petersson, and Jan M. Smith’s Programming in Martin-Löf’s Type Theory in 1990. Any earlier references would be appreciated.

Update (2015-03-18): Colin Gordon pointed out that Robert Constable‘s Mathematics as programming from 1984 uses the subset type notation, as does the NUPRL tech report from 1983. The NUPRL TR came out in January ’83 while IFIP ’83 happened in September. Nate Foster, who works with Bob Constable, suspects that Constable has priority. Alright: subset types go to Robert Constable in January 1983 with the Nearly Ultimate Pearl.
Going once…

My question is: when did we start calling {x:A | B(x)} and other similar subset types a "refinement type"? Any advice or pointers would be appreciated—I’ll update the post.

Susumu Hayashi in Logic of refinement types describes "ATTT", which, according to the abstract, "has refinement types which are intended to be subsets of ordinary types or specifications of programs", where he builds up these refinements out of some set theoretic operators on singletons. By rights, this paper is probably the first to use "refinement type" to mean "subset type"… though I have some trouble pinpointing where the paper lives up to that claim in the abstract.

Ewen Denney was using refinement types to mean types and specifications augmented with logical propositions. This terminology shows up in his 1998 PhD thesis and his 1996 IFIP paper, Refinement Types for Specification.

In 1998, Hongwei Xi and Frank Pfenning opened the door to flexible interpretations of "refinements" in Eliminating Array Bound Checking Through Dependent Types. In Section 2.4, they use 'refinement' in a rather different sense:

Besides the built-in type families int, bool, and array, any user-defined data type may be refined by explicit declarations. …
typeref α list of nat with nil <| α list(0) | :: <| {n:nat} α * α list(n) -> α list(n+1)

Later on, in Section 3.1, they have a similar use of the term:

In the standard basis we have refined the types of many common functions on integers such as addition, subtraction, multiplication, division, and the modulo operation. For instance, + <| {m:int} {n:int} int(m) * int(n) -> int(m+n) is declared in the system. The code in Figure 3 is an implementation of binary search through an array. As before, we assume:
sub <| {n:nat} {i:nat | i < n} α array(n) * int(i) -> α

So indices allow users to refine types, though they aren’t quite refinement types.
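The two senses of "refinement" here can be contrasted with a rough dynamic sketch. The encodings below are hypothetical (neither SML datasorts nor DML indices, just the flavor of each): a datasort refinement carves out a sub-datatype by restricting shapes, while an index refinement pairs a type with an index such as the length, giving cons the refined type list(n) → list(n+1):

```python
# Illustrative contrast between datasort-style and index-style
# refinement (hypothetical encodings, checked dynamically here).

# Datasort-style: 'singleton' is the subset of lists built as cons(x, nil).
def is_singleton(xs):
    return len(xs) == 1

# Index-style: pair a list with its length index; cons has the refined
# "type" list(n) -> list(n+1), with the invariant checked at runtime.
def indexed_nil():
    return ([], 0)

def indexed_cons(x, indexed):
    xs, n = indexed
    assert len(xs) == n, "index invariant violated"
    return ([x] + xs, n + 1)

one = indexed_cons('a', indexed_nil())
print(is_singleton(one[0]))   # True: it lies in the singleton datasort
print(one[1])                 # 1: and its index records the length
```

The datasort check is a membership question about shapes (decidable by tree automata, as the next quote notes); the index check is arithmetic about a separate index domain.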
In 1999, Xi and Pfenning make a strong distinction in Dependent Types in Practical Programming; from Section 9:

…while refinement types incorporate intersection and can thus ascribe multiple types to terms in a uniform way, dependent types can express properties such as "these two argument lists have the same length" which are not recognizable by tree automata (the basis for type refinements).

Now, throughout the paper they do things like "refine the datatype with type index objects" and "refine the built-in types: (a) for every integer n, int(n) is a singleton type which contains only n, and (b) for every natural number n, α array(n) is the type of arrays of size n". So here there’s a distinction between "refinement types"—the Freeman and Pfenning discipline—and a "refined type", which is a subset of a type indicated by some kind of predicate and curly braces.

Jana Dunfield published a tech report in 2002, Combining Two Forms of Type Refinements, where she makes an impeccably clear distinction:

… the datasort refinements (often called refinement types) of Freeman, Davies, and Pfenning, and the index refinements of Xi and Pfenning. Both systems refine the simple types of Hindley-Milner type systems.

In her 2004 paper with Frank, Tridirectional Typechecking, she maintains the distinction between refinements, but uses a term I quite like—"property types", i.e., types that guarantee certain properties.

Yitzhak Mandelbaum, my current supervisor David Walker, and Bob Harper wrote An Effective Theory of Type Refinements in 2003, but they didn’t quite have subset types. Their discussion of related work makes it seem that they interpret refinement types as just about any device that allows programmers to use the existing types of a language more precisely:

Our initial inspiration for this project was derived from work on refinement types by Davies and Pfenning and Denney and the practical dependent types proposed by Xi and Pfenning.
Each of these authors proposed sophisticated type systems that are able to specify many program properties well beyond the range of conventional type systems such as those for Java or ML.

In the fairly related and woefully undercited 2004 paper, Dynamic Typing with Dependent Types, Xinming Ou, Gang Tan, Yitzhak Mandelbaum, and David Walker used the term "set type" to define {x:A | B(x)}.

Cormac Flanagan‘s Hybrid Type Checking in 2006 is probably the final blow for any distinction between datasort refinements and index refinements: right there on page 3, giving the syntax for types, he writes "{x:B|t} refinement type". He says on the same page, at the beginning of Section 2, "Our refinement types are inspired by prior work on decidable refinement type systems", citing quite a bit of the literature: Mandelbaum, Walker, and Harper; Freeman and Pfenning; Davies and Pfenning ICFP 2000; Xi and Pfenning 1999; Xi LICS 2000; and Ou, Tan, Mandelbaum, and Walker.

After Cormac, everyone just seems to call them refinement types: Ranjit Jhala‘s Liquid Types, Robby Findler and Phil Wadler in Well Typed Programs Can’t Be Blamed, my own work, Andy Gordon in Semantic Subtyping with an SMT Solver. This isn’t a bad thing, but perhaps we can be more careful with names. Now that we’re all in the habit of calling them refinements, I quite like "indexed refinements" as a distinction. Alternatively, "subset types" are a very clear term with solid grounding in the literature.

Finally: I didn’t cite it in this discussion, but Rowan Davies‘s thesis, Practical Refinement-Type Checking, was extremely helpful in looking through the literature.

Edited to add: thanks to Ben Greenman for some fixes to broken links and to Lindsey Kuper and Ron Garcia for helping me clarify what refines what.
2020-04-27 update: Shriram Krishnamurthi suggests that Robert (Corky) Cartwright had a notion of "refinement type" in "User-Defined Data Types as an Aid to Verifying LISP Programs" from ICALP 1976 and with John McCarthy in First order programming logic in POPL 1979. I haven’t been able to get a PDF copy of the ICALP paper (please send me one if you can find it!). The POPL paper is clearly related:

The key idea underlying our formal systems is that recursive definitions of partial functions can be interpreted as equations extending a first order theory of the program domain.

Their model is typed, and the paper is about how Corky and John independently discovered ways of addressing recursion/fixed points. They translate programs to logic, treating checks in negative positions specially, like Blume and McAllester’s "A sound (and complete) model of contracts", but they don’t seem to think of themselves as actually refining types per se. This paper is an interesting early use of an SMT-like logic to prove properties of programs… though they do the proofs by hand!

Cartwright’s dissertation, A Practical Formal Semantic Definition and Verification System for Typed Lisp (which I’ve hosted here, since I could only find it on a very slow server elsewhere) makes it clear that the work is indeed very closely related. Here’s a long quote from the end of his introduction:

The auxiliary function ATOMLIST [a program predicate] serves as a clumsy mechanism for specifying the implicit data type atom-list [which he defined by hand]. If we included atom-list as a distinct, explicit data type in our programming language and expanded our first-order theory to include atom-lists as well as S-expressions, the informal proof using induction on atom-lists [given earlier] could be formalized directly in our first order system.
However, since LISP programs typically involve a wide variety of abstract data types, simply adding a few extra data types such as atom-list to LISP will not eliminate the confusion caused by dealing with abstract data type representations rather than the abstract types themselves. In fact, the more complex that an abstract type is, the more confusing that proofs involving its representations are likely to be. Consequently, I decided that the best solution to this problem is to include a comprehensive data type definition facility in LISP and to formally define the semantics of a program P by creating a first-order theory for the particular data types defined in P. The resulting language TYPED LISP is described in the next chapter. I’m really happy to be part of the first PLVNET, a workshop on the intersection of PL, verification, and networking. I have two abstracts up for discussion. The first abstract, Temporal NetKAT, is about adding reasoning about packet histories to a network policy language like NetKAT. The work on this is moving along quite nicely (thanks in large part to Ryan Beckett!), and I’m looking forward to the conversations it will spark. The second abstract, Type systems for SDN controllers, is about using type systems to statically guarantee the absence of errors in controller programs. Fancy new switches have tons of features, which can be tricky to operate—can we make sure that a controller doesn’t make any mistakes when it talks to a switch? Some things are easy, like making sure that the match/action rules are sent to tables that can handle them; some things are harder, like making sure the controller doesn’t fill up a switch’s tables. I think this kind of work is a nice complement to the NetKAT “whole policy” approach, a sort of OpenFlow 1.3+ version of VeriCon with slightly different goals. Should be fun! 
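The easy check mentioned above, that match/action rules only go to tables that can handle them and that the controller doesn't overfill a table, can be sketched as a controller-side well-formedness test. All names and the table model here are hypothetical, not any real switch API:

```python
# Sketch of a static check a typed SDN controller could perform before
# sending rules to a switch (the table model is hypothetical): every
# rule installed in a table must match only fields the table can
# analyze, and the rule count must fit within the table's capacity.

def well_formed(table, rules):
    """table: {'fields': set of matchable fields, 'capacity': int}"""
    if len(rules) > table['capacity']:
        return False  # would fill up the switch's table
    return all(set(rule['match']) <= table['fields'] for rule in rules)

acl_table = {'fields': {'ip_src', 'ip_dst', 'tcp_port'}, 'capacity': 2}

ok_rules  = [{'match': {'ip_src'}, 'action': 'drop'},
             {'match': {'ip_dst', 'tcp_port'}, 'action': 'fwd'}]
bad_rules = [{'match': {'eth_dst'}, 'action': 'fwd'}]  # unsupported field

print(well_formed(acl_table, ok_rules))    # True
print(well_formed(acl_table, bad_rules))   # False
```

The harder problems, like bounding how many rules a policy can ever compile to, need more than a per-batch check, which is where a type system for controller programs would earn its keep.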
Space-Efficient Manifest Contracts at POPL 15

I am delighted to announce that Space-Efficient Manifest Contracts will appear at POPL 2015 in Mumbai. Here’s the abstract:

The standard algorithm for higher-order contract checking can lead to unbounded space consumption and can destroy tail recursion, altering a program’s asymptotic space complexity. While space efficiency for gradual types—contracts mediating untyped and typed code—is well studied, sound space efficiency for manifest contracts—contracts that check stronger properties than simple types, e.g., "is a natural" instead of "is an integer"—remains an open problem. We show how to achieve sound space efficiency for manifest contracts with strong predicate contracts. The essential trick is breaking the contract checking down into coercions: structured, blame-annotated lists of checks. By carefully preventing duplicate coercions from appearing, we can restore space efficiency while keeping the same observable behavior.

The conference version is a slightly cut down version of my submission, focusing on the main result: eidetic λ[H] is a space-efficient manifest contract calculus with the same operational behavior as classic λ[H]. More discussion and intermediate results—all in a unified framework for space efficiency—can be found in the technical report on the arXiv.

Contracts: first-order interlopers in a higher-order world

Reading Aseem Rastogi, Avik Chaudhuri, and Basil Hosmer‘s POPL 2012 paper The Ins and Outs of Gradual Type Inference, I ran across a quote that could well appear directly in my POPL 2015 paper, Space-Efficient Manifest Contracts:

The key insight is that … we must recursively deconstruct higher-order types down to their first-order parts, solve for those …, and then reconstruct the higher-order parts … . [Emphasis theirs]
They have to be careful about what’s known in the program and what isn’t, and I have to be careful about blame labels. But in both cases, a proper treatment of errors creates some asymmetries. And in both cases, the solution is to break everything down to the first-order checks, reconstructing a higher-order solution afterwards. The "make it all first order" approach contrasts with subtyping approaches (like in Well Typed Programs Can’t Be Blamed and Threesomes, with and without blame). I think it’s worth pointing out that as we begin to consider blame, contract composition operators look less and less like meet operations and more like… something entirely different. Should contracts with blame inhabit some kind of skew lattice? Something else?

I highly recommend the Rastogi et al. paper, with one note: when they say kind, I think they mean "type shape" or "type skeleton"—not "kind" in the sense of classifying types and type constructors.

Edited to add: also, how often does a type inference paper include a performance evaluation? Just delightful!

New and improved: Space-Efficient Manifest Contracts

I have a new and much improved draft of my work on Space-Efficient Manifest Contracts. Here’s the abstract:

The standard algorithm for higher-order contract checking can lead to unbounded space consumption and can destroy tail recursion, altering a program’s asymptotic space complexity. While space efficiency for gradual types—contracts mediating untyped and typed code—is well studied, sound space efficiency for manifest contracts—contracts that check stronger properties than simple types, e.g., "is a natural" instead of "is an integer"—remains an open problem. We show how to achieve sound space efficiency for manifest contracts with strong predicate contracts. We define a framework for space efficiency, traversing the design space with three different space-efficient manifest calculi.
Along the way, we examine the diverse correctness criteria for contract semantics; we conclude with a language whose contracts enjoy (galactically) bounded, sound space consumption—they are observationally equivalent to the standard, space-inefficient semantics.

Update: it was accepted to POPL’15!

Concurrent NetCore: From Policies to Pipelines

Cole Schlesinger, Dave Walker, and I submitted a paper to ICFP 2014. It’s called Concurrent NetCore: From Policies to Pipelines. Here’s the abstract:

In a Software-Defined Network (SDN), a central, computationally powerful controller manages a set of distributed, computationally simple switches. The controller computes a policy describing how each switch should route packets and populates packet-processing tables on each switch with rules to enact the routing policy. As network conditions change, the controller continues to add and remove rules from switches to adjust the policy as needed. Recently, the SDN landscape has begun to change as several proposals for new, reconfigurable switching architectures, such as RMT and FlexPipe, have emerged. These platforms provide switch programmers with many, flexible tables for storing packet-processing rules, and they offer programmers control over the packet fields that each table can analyze and act on. These reconfigurable switch architectures support a richer SDN model in which a switch configuration phase precedes the rule population phase. In the configuration phase, the controller sends the switch a graph describing the layout and capabilities of the packet processing tables it will require during the population phase. Armed with this foreknowledge, the switch can allocate its hardware (or software) resources more efficiently. We present a new, typed language, called Concurrent NetCore, for specifying routing policies and graphs of packet-processing tables.
Concurrent NetCore includes features for specifying sequential, conditional and concurrent control-flow between packet-processing tables. We develop a fine-grained operational model for the language and prove this model coincides with a higher-level denotational model when programs are well typed. We also prove several additional properties of well typed programs, including strong normalization and determinism. To illustrate the utility of the language, we develop linguistic models of both the RMT and FlexPipe architectures and we give a multi-pass compilation algorithm that translates graphs and routing policies to the RMT model.

A Balance of Power: Expressive, Analyzable Controller Programming

I just finished reading A Balance of Power: Expressive, Analyzable Controller Programming. It's an interesting proposal, but I'm writing just to express my satisfaction with the following sentence:

When we hit expressive limits, however, our goal is not to keep growing this language—down that path lies sendmail.cf and other sulphurous designs—but to call out to full-language code.

'Sulphurous' indeed. Come for the nonmonotonic interpretation of learning, stay for the colorful prose.

Bug in "Polymorphic Contracts"

The third chapter of my dissertation is effectively a longer version of an ESOP 2011 paper, Polymorphic Contracts. We define FH, a polymorphic calculus with manifest contracts. Atsushi Igarashi, with whom I did the original FH work that appeared in ESOP 2011, and his student Taro Sekiyama have been working on continuing some of the FH work. They discovered—after my defense!—a bug in FH's type conversion relation.

Short version: FH used parallel reduction as a conversion relation. A key property of this relation is substitutivity. We phrased it as "if e1 ⇒ e1′ and e2 ⇒ e2′ then e1{e2/x} ⇒ e1′{e2′/x}". Unfortunately, this doesn't hold for FH, due to subtleties in FH's reduction rules for casts.
The cast reduction rules are implicitly performing equality checks on types, and these equality checks can be affected by substitutions, which change which reduction rule applies. The (tentative) solution in my thesis is to use a simpler type (and term) conversion relation which we call common subexpression reduction (CSR). In CSR, we relate types and terms that are closed by closing substitutions σ1 and σ2 with σ1 →* σ2. That is, the CSR conversion is the smallest congruence which is substitutive for →*, i.e., where if e →* e′ then T{e/x} ≡ T{e′/x}.

Long version: I've excerpted Section 3.5 of my thesis, which discusses the System FH type conversion bug.
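As an aside, for readers wondering why the "standard algorithm" mentioned in the space-efficiency abstract above destroys tail recursion, here's a minimal sketch. It's in Python with invented names (it is not the calculus from the paper): a function contract defers its codomain check until after the call returns, so every recursive call through the wrapped function leaves a pending frame.

```python
class ContractError(Exception):
    """Raised when a contract check fails, blaming a label."""

def check(pred, label):
    """A flat (predicate) contract: pass the value through or blame `label`."""
    def c(v):
        if not pred(v):
            raise ContractError(f"contract violated, blame {label}")
        return v
    return c

def func_contract(dom, cod):
    """A function contract: check the argument on the way in, the result on the way out."""
    def wrap(f):
        def wrapped(x):
            # The codomain check runs AFTER f returns, so every recursive
            # call through `wrapped` leaves a pending stack frame: this is
            # what destroys tail recursion in the standard semantics.
            return cod(f(dom(x)))
        return wrapped
    return wrap

nat = lambda n: isinstance(n, int) and n >= 0

@func_contract(check(nat, "caller"), check(nat, "countdown"))
def countdown(n):
    # Each recursive call goes through the contract wrapper again.
    return 0 if n == 0 else countdown(n - 1)
```

Calling `countdown(5)` returns 0 after five stacked codomain checks; `countdown(-1)` raises a `ContractError` blaming the caller.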
How to limit a response prediction to a threshold in a model?

I want to model particle size, a variable that can't be below 0. However, some setups are being predicted to be negative (see picture below). So my question is: how can I limit the prediction threshold to be above zero? Is there any truncated approach in JMP?
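One common workaround, independent of any JMP-specific feature (which I can't confirm), is to model the logarithm of the response: predictions of log(size) may be any sign, but back-transforming with exp guarantees a strictly positive size. A minimal sketch in Python with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: particle size (always > 0) versus one process factor.
x = rng.uniform(0.0, 10.0, size=50)
size = np.exp(0.3 * x - 1.0 + rng.normal(0.0, 0.2, size=50))

# Fit a line to log(size) instead of size itself.
coef = np.polyfit(x, np.log(size), deg=1)

def predict(xnew):
    # exp(...) is positive for every input, so no prediction can be negative.
    return np.exp(np.polyval(coef, xnew))

preds = predict(np.linspace(-5.0, 15.0, 100))  # even outside the data range
```

The same idea applies inside JMP by creating a log-transformed response column, fitting the model to it, and exponentiating the prediction formula.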
The code provided simulates the movement of energy in a parallel reality and plots the results of the simulation. The code is written in the Python programming language and uses the NumPy library to perform numerical operations. It defines two functions: simulate_universe and plot_universe.

The simulate_universe function takes two arguments: sim_size and speed. It uses these arguments to initialize the fluid-like energy with random values for speed and position, and then to initialize the space and time variables. It then iterates through the simulation, updating the space and time variables based on the movement of the energy, and returns the final values for space and time.

The plot_universe function takes two arguments: space and time. It uses these arguments to plot the space and time data using the Matplotlib library, and to add labels to the x-axis and y-axis. It then displays the plot using the show function.

Finally, the code defines the size of the simulation and the speed at which the energy flows, and calls the simulate_universe function to run the simulation and generate the space and time data. It then calls the plot_universe function to plot the data.

RAMNOT's Use Cases:
1. Modeling the behavior of fluids for applications in engineering, such as the design of pipes and valves.
2. Simulating the propagation of waves for applications in communication and signal processing.
3. Studying the vibration of strings and other objects for applications in music and acoustics.
4. Modeling the motion of celestial bodies for applications in astronomy and astrophysics.
5. Simulating the movement of springs for applications in mechanical engineering.
6. Analyzing the motion of pendulums for applications in physics education and the design of clocks.
7. Modeling the propagation of sound waves for applications in acoustics and audio engineering.
8. Studying the vibration of drumheads and other objects for applications in music and acoustics.
9. Simulating the rotation of wheels and other objects for applications in mechanical engineering.
10. Modeling the movement of cars on roads for applications in transportation engineering.
11. Analyzing the motion of objects in gravitational fields for applications in physics education and space exploration.
12. Simulating the propagation of electrical currents for applications in electrical engineering.
13. Studying the vibration of beams for applications in structural engineering.
14. Modeling the rotation of propellers for applications in aerospace engineering.
15. Analyzing the movement of waves in wave tanks for applications in ocean engineering.
16. Simulating the motion of planets around the sun for applications in astronomy and astrophysics.
17. Studying the propagation of light waves for applications in optics and photonics.
18. Modeling the vibration of plates for applications in structural engineering.
19. Analyzing the rotation of turbines for applications in power generation and energy production.
20. Simulating the movement of particles in magnetic fields for applications in physics and engineering.

import numpy as np
import matplotlib.pyplot as plt

def simulate_universe(sim_size, speed):
    # Initialize the fluid-like energy with random values for speed and position
    energy = np.random.rand(sim_size, 2)

    # Initialize the space and time variables
    space = np.zeros(sim_size)
    time = np.zeros(sim_size)

    # Iterate through the simulation, updating the space and time variables
    # based on the movement of the energy
    for i in range(sim_size):
        space[i] = energy[i, 0]
        time[i] = energy[i, 1] * speed
        # Update the energy in the parallel reality
        energy[i, 1] = energy[i, 1] + speed

    return space, time

def plot_universe(space, time):
    plt.plot(space, time)
    plt.xlabel("Space")  # axis labels and show() restored to match the description above
    plt.ylabel("Time")
    plt.show()

# Define the size of the simulation and the speed at which the energy flows
sim_size = 1000
speed = 0.1

# Run the simulation and plot the results
space, time = simulate_universe(sim_size, speed)
plot_universe(space, time)

1. Movement of a particle in a fluid: In this simulation, the energy represents the movement of a particle through a fluid, and the speed represents the velocity of the particle.
2. Propagation of a wave through a medium: In this simulation, the energy represents a wave propagating through a medium, and the speed represents the speed of the wave.
3. Vibration of a string: In this simulation, the energy represents the vibration of a string, and the speed represents the frequency of the vibration.
4. Rotation of a planet around a star: In this simulation, the energy represents the rotation of a planet around a star, and the speed represents the angular velocity of the planet.
5. Movement of a spring: In this simulation, the energy represents the movement of a spring, and the speed represents the oscillation frequency of the spring.
6. Motion of a pendulum: In this simulation, the energy represents the motion of a pendulum, and the speed represents the period of the pendulum.
7. Propagation of a sound wave: In this simulation, the energy represents a sound wave propagating through a medium, and the speed represents the speed of sound in that medium.
8. Vibration of a drumhead: In this simulation, the energy represents the vibration of a drumhead, and the speed represents the frequency of the vibration.
Application of derivatives Homework Help, Questions with Solutions - Kunduz
Application of derivatives Questions and Answers

- A model used for the yield Y of an agricultural crop as a function of the nitrogen level N in the soil (measured in appropriate units) is Y = kN/(25 + N²), where k is a positive constant. What nitrogen level gives the best yield?
- Divide: (−7x⁴ + 5x³ + 20x² + 10) ÷ (−x² + x + 3). Write your answer in the form Quotient + Remainder/(−x² + x + 3).
- Determine if the function y = 5x^(2/3) is concave up or concave down in the first quadrant.
- Determine whether the function y = −2x³ is increasing or decreasing for (a) x < 0 and (b) x > 0.
- The annual total revenue for a product is given by R(x) = 30,000x − 5x² dollars, where x is the number of units sold. To maximize revenue, how many units must be sold? What is the maximum possible annual revenue? To maximize revenue, ______ units must be sold. (Simplify your answer.)
- Let h(x) = −9x − 13 − 6x² − x³. Determine the absolute extrema of h on [−4, 0]. If multiple such values exist, enter the solutions using a comma-separated list. The absolute minimum of h is ______ and it occurs at x = ______. The absolute maximum of h is ______ and it occurs at x = ______.
- Consider h(v) = 7v log₅(−6v) on [−125/6, −1/6]. Determine the interval over which h is continuous and the interval over which h is differentiable. h is continuous on ______. h is differentiable on ______. Use the above information to determine if the Mean Value Theorem may be applied to h over [−125/6, −1/6].
- Find the points on the curve y = x³ + 3x² − 9x + 8 where the tangent is horizontal. smaller x-value: (x, y) = ______; larger x-value: (x, y) = ______.
- Let f(x) = 4 ln(x + 6)/(x + 6). Determine the absolute extrema of f on [−5, −1]. If multiple such values exist, enter the solutions using a comma-separated list. The absolute minimum of f is ______ and it occurs at x = ______. The absolute maximum of f is ______ and it occurs at x = ______.
- Let g(x) = 17 + 24x + x³ + 9x². Determine the absolute extrema of g on [−5, −1]. If multiple such values exist, enter the solutions using a comma-separated list.
- For the polynomial below, −3 is a zero: g(x) = x³ − 2x² − 9x + 18. Express g(x) as a product of linear factors.
- Solve 3 sin(x) tan(x) + √3 sin(x) = 0. Find all angles in radians that satisfy the equation. For each solution enter first the angle solution in [0, π) or [0, 2π) (depending on the trigonometric function), then the period. When 2 or more solutions are available, enter them in increasing order of the angles (e.g. x = π/2 + 2kπ or x = 3π/2 + kπ, etc.). Note: you are not allowed to use decimals in your answer. Use pi for π.
- For the polynomial below, 3 is a zero: g(x) = x³ − 4x² + x + 6. Express g(x) as a product of linear factors.
- For the polynomial below, −3 is a zero: f(x) = x³ + 3x². Express f(x) as a product of linear factors.
- A spherical balloon is inflated at the rate of 67 cm³/sec. At what rate is the radius increasing when r = 4 cm?
- Sand falls from an overhead bin and accumulates in a conical pile with a radius that is always two times its height. Suppose the height of the pile increases at a rate of 3 cm/s when the pile is 17 cm high. At what rate is the sand leaving the bin at that instant?
- Find the extrema of y = x³ − 6x² + 9x + 2 on [0, 2]. (Notice this is the same equation as #4a.) Label max/min.
- Find the unit tangent vector T(t) at the point with the given value of the parameter t: r(t) = ⟨t² − 3t, 1 + 4t, (1/3)t³ + (1/2)t²⟩, t = 3. T(3) = ______.
- Find the profit function if cost and revenue are given by C(x) = 178 + 4.9x and R(x) = 7x − 0.05x². The profit function is P(x) = ______.
- Find an equation for the surface consisting of all points that are equidistant from the point (−3, 0, 0) and the plane x = 3. Identify the surface: parabolic cylinder, hyperbolic paraboloid, hyperboloid of one sheet, circular paraboloid, hyperboloid of two sheets, ellipsoid, elliptic cylinder, or cone.
- The total cost (in dollars) of producing x food processors is C(x) = 1900 + 30x − 0.1x². (A) Find the exact cost of producing the 91st food processor. (B) Use the marginal cost to approximate the cost of producing the 91st food processor. (A) The exact cost of producing the 91st food processor is $______. (B) Using the marginal cost, the approximate cost is $______.
- For f(x) = 1/(5 + x²), the slope of the graph of y = f(x) is known to be −4/81 at the point with x-coordinate 2. Find the equation of the tangent line at that point. ______ (Type an equation. Use integers or fractions for any numbers in the equation.)
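For the first question above, reading the model as Y = kN/(25 + N²), the optimum can be checked numerically: dY/dN = k(25 − N²)/(25 + N²)² vanishes at N = 5, and k only scales Y, so it does not move the maximum. A quick sketch with k = 1:

```python
import numpy as np

# Yield model from the question, with k = 1 (k only scales Y,
# so it does not change the location of the maximum).
def yield_model(n):
    return n / (25.0 + n**2)

n = np.linspace(0.0, 20.0, 200001)
best_n = n[np.argmax(yield_model(n))]
print(round(best_n, 3))  # 5.0
```

So the best yield occurs at a nitrogen level of N = 5.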
1. Introduction
Oilseed rape is the number one oilseed crop in China, providing more than 50% of the country's edible vegetable oil. About 100 million acres of oilseed rape are under permanent cultivation nationwide, 90% of them in the Yangtze River basin [ ]. However, the level of mechanized rape production in China is low: the national rate of machine harvesting of rape was only 50.97% in 2021, and it is growing slowly. High machine-harvesting losses are one of the main factors limiting the healthy and rapid development of China's rape industry [ ]. According to statistics, losses at the cutting table during rape harvesting are relatively large and can reach 40% of total harvesting losses on average [ ]. The reel is an important working part of the cutting table of the rape combine harvester [ ]; its role is to guide the rape to the cutter (or to hold up fallen rape while guiding it), to support the stalks during cutting, and to push the cut stalks backwards, preventing them from piling up on the cutter and causing blockage and congestion [ ].
At present, the cam-action reel generally adopted on rape combine harvesters developed in China is conducive to holding up fallen crop [ ]. However, the reel teeth move downwards throughout the stroke at a fixed angle, while rape branches grow around the main branch at uneven heights and plants fork, stagger and pull on each other, so the reel tangles, hangs up and carries plants back during the paddle operation.
This not only affects the efficiency of the harvesting operation, but also causes the angular fruit (pods) of the rape to be struck open when the reel teeth act directly on them, resulting in grain loss [ ].
To reduce the winding, picking and hanging of reels during harvesting, some foreign rape harvesters use large-diameter flip-type reels [ ], which significantly reduce winding and hanging because the teeth flip during the return stroke and have a large radius of rotation. However, this design has a complex structure and high cost, and is currently used mainly on large rape harvesting equipment. There is therefore an urgent need for a new type of reel with a simple structure that can turn its teeth, has a large turning radius, can adapt to tall, branched rape, and moves at low speed when paddling into the crop, in order to improve harvesting efficiency and reduce paddling losses.
In this paper, constraints on the trajectory and angle of the reel teeth for weak impact, low winding and fast paddling are constructed based on an analysis of reel winding. Based on the kinematics of the double crank plane five-bar mechanism, a variable speed anti-tangle reel mechanism is designed, and its kinematics and parameters are optimized. This study can provide a reference for the design of low-loss, tangle-reducing reels for oilseed rape harvesting.
2. Materials and Methods
2.1.
Variable speed reel structure and working principle
According to the biological characteristics of rape and the pattern of plant transport during paddling, the tines should be inserted diagonally into the branch gaps from above the plant, at an angle that fits the rape branches, to reduce the number of collisions with the pods; they should then rise quickly along the tilt of the rape at an angle close to vertical and break away from the plant, to avoid tangling and hanging; finally, they should gradually accelerate to complete the forward turn. To realize these requirements on the trajectory and attitude of the paddle teeth, a variable speed reel mechanism is designed in this paper; its structure is shown in Figure 1.
Figure 1. Structure diagram of the variable speed reel. (1) Side plate, (2) Paddle teeth, (3) Paddle plate, (4) Spindle connecting rod, (5) Crank connecting rod, (6) Cutting table support rod, (7) Eccentric disc welding rod, (8) Eccentric disc connecting rod, (9) Pentagonal plate, (10) Reel spindle, (11) Eccentric disc.
The main components are the paddle teeth, the paddle plate, the reel spindle, the spindle connecting rod, the crank connecting rod, the eccentric disc connecting rod, the pentagonal plate and the eccentric disc. In particular, spindle connecting rod 4 is fixed to reel spindle 10, eccentric disc welding rod 7 is fixed to eccentric disc 11, and the remaining four eccentric disc connecting rods 8 are hinged to eccentric disc 11. The belt pulley rotates and delivers power to spindle connecting rod 4, which drives crank connecting rod 5 and eccentric disc welding rod 7. The power is transferred to eccentric disc 11, which begins to rotate and in turn drives the four eccentric disc connecting rods 8 that are hinged to it.
2.2.
Kinematic analysis of the variable speed reel mechanism
The variable speed reel mechanism is composed of a plane four-bar mechanism and four double crank plane five-bar mechanisms [ ]. Taking one of the paddle teeth as the object of study, the motion model of the reel mechanism is shown in Figure 2.
Note: $l_1, l_2, l_3, l_4, l_5, l_6$ are the lengths (mm) of the frame, spindle connecting rod, crank connecting rod, eccentric disc welding rod/eccentric disc connecting rod, short crank and paddle tooth, respectively; $\theta_1, \theta_2, \theta_3, \theta_4$ are the angles (°) of the corresponding rods with the x-axis, taken positive counterclockwise; $\theta_5$ and $\beta$ are the angle between the paddle tooth and the crank connecting rod, and the angle between the paddle tooth and the negative y-direction, respectively.
According to Figure 2, the closed-loop vector equation for the teeth of the variable speed reel mechanism is

$$\overline{AE} + \overline{ED} = \overline{AB} + \overline{BC} + \overline{CD} \quad (1)$$

Decomposing the closed-loop vector equation (1) along the x- and y-axes gives

$$\begin{cases} l_2\cos\theta_1 + l_3\cos\theta_2 = l_5\cos\theta_4 + l_4\cos\theta_3 + l_1 \\ l_2\sin\theta_1 + l_3\sin\theta_2 = l_5\sin\theta_4 + l_4\sin\theta_3 \end{cases} \quad (2)$$

The trajectory of the paddle-tooth tip F of the double crank plane five-bar mechanism is determined by the laws of motion of the rods. Rods $l_2$ and $l_5$ are the two cranks of the mechanism and rotate in the same direction at equal speed, and the angle $\theta_5$ between the paddle tooth and the crank connecting rod is constant, so only the angles $\theta_2$ and $\theta_3$ at each instant are needed to obtain the trajectory equation of the paddle tooth tip F.
As the five rods form a closed vector ring, the angles $\theta_2$ and $\theta_3$ must satisfy the following conditions at any moment:

$$\begin{cases} -l_4\cos\theta_3 + l_3\cos\theta_2 = l_1 - l_2\cos\theta_1 + l_5\cos\theta_4 \\ -l_4\sin\theta_3 + l_3\sin\theta_2 = -l_2\sin\theta_1 + l_5\sin\theta_4 \end{cases} \quad (3)$$

The equation of the static trajectory of the paddle tooth tip F can then be solved from the laws of motion of the rods as

$$\begin{cases} x_F = l_2\cos\theta_1 + l_3\cos\theta_2 + l_6\cos(\theta_2 - \theta_5) \\ y_F = l_2\sin\theta_1 + l_3\sin\theta_2 + l_6\sin(\theta_2 - \theta_5) \end{cases} \quad (4)$$

Differentiating equation (4) gives the static trajectory velocity of the tip:

$$\begin{cases} v_{xF} = -l_2\dfrac{d\theta_1}{dt}\sin\theta_1 - l_3\dfrac{d\theta_2}{dt}\sin\theta_2 - l_6\dfrac{d(\theta_2-\theta_5)}{dt}\sin(\theta_2-\theta_5) \\ v_{yF} = l_2\dfrac{d\theta_1}{dt}\cos\theta_1 + l_3\dfrac{d\theta_2}{dt}\cos\theta_2 + l_6\dfrac{d(\theta_2-\theta_5)}{dt}\cos(\theta_2-\theta_5) \end{cases} \quad (5)$$

Assuming the harvester travels at speed $v_m$, the dynamic trajectory equation of the tip F is

$$\begin{cases} x_F'' = l_2\cos\theta_1 + l_3\cos\theta_2 + l_6\cos(\theta_2 - \theta_5) + v_m t \\ y_F'' = l_2\sin\theta_1 + l_3\sin\theta_2 + l_6\sin(\theta_2 - \theta_5) \end{cases} \quad (6)$$

Differentiating equation (6) gives the velocity of the tip under dynamic conditions:

$$\begin{cases} v_{xF}'' = -l_2\dfrac{d\theta_1}{dt}\sin\theta_1 - l_3\dfrac{d\theta_2}{dt}\sin\theta_2 - l_6\dfrac{d(\theta_2-\theta_5)}{dt}\sin(\theta_2-\theta_5) + v_m \\ v_{yF}'' = l_2\dfrac{d\theta_1}{dt}\cos\theta_1 + l_3\dfrac{d\theta_2}{dt}\cos\theta_2 + l_6\dfrac{d(\theta_2-\theta_5)}{dt}\cos(\theta_2-\theta_5) \end{cases} \quad (7)$$

2.3. Variable speed reel construction parameters
The variable speed reel designed in this paper is mounted on a Ward 4LZ-6.0 full-feed tracked self-propelled rape combine harvester with a 2.2 m wide cutting deck. According to preliminary field research, the average angle between the rape branches and the main inflorescence is 25°~40°, so the angle between the reel teeth and the crank connecting rod is set at 40°.
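As a sanity check on equations (4) and (6), the tip position can be evaluated once the crank angles are known. The sketch below assumes the angle θ₂ has already been obtained from the loop-closure conditions (3); the rod lengths are hypothetical placeholders, since the values of Table 1 are not reproduced here.

```python
import math

# Hypothetical rod lengths in mm (Table 1 values are not reproduced here).
L2, L3, L6 = 120.0, 300.0, 250.0
THETA5 = math.radians(40.0)  # tooth / crank-connecting-rod angle set in the paper

def tip_static(theta1, theta2):
    """Static tip position of point F, equation (4)."""
    x = L2 * math.cos(theta1) + L3 * math.cos(theta2) + L6 * math.cos(theta2 - THETA5)
    y = L2 * math.sin(theta1) + L3 * math.sin(theta2) + L6 * math.sin(theta2 - THETA5)
    return x, y

def tip_dynamic(theta1, theta2, vm, t):
    """Dynamic tip position, equation (6): static position plus machine travel vm*t."""
    x, y = tip_static(theta1, theta2)
    return x + vm * t, y

# With theta2 = theta5 the l6 term lies along the x-axis, an easy hand check.
x, y = tip_static(0.0, THETA5)
```

Sampling θ₁ and θ₂ over one revolution and plotting (x, y) reproduces the static trajectory; adding the machine travel term gives the trochoid-like dynamic trajectory.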
Based on the kinematic analysis and the rod-length constraints of the double crank plane five-bar mechanism [ ], the structural parameters of the variable speed reel mechanism were determined as shown in Table 1.
2.4. Test material
The oilseed rape harvest trial was conducted from June 6 to June 8, 2022 in Daitou Town, Liyang City, Changzhou City, Jiangsu Province. The oilseed rape was planted by mechanical direct seeding, and the variety was Ningxia 1818. The average plant height in the field was 1.40 m, the row spacing 0.32 m, the diameter of the angular fruit (pod) layer 0.71 m and its thickness 0.56 m. The average diameter of the main stalk was 7.6 mm and the height of the lowest pod 0.73 m. The plants had an average of 5 branches, a branching height of 0.33 m and a single-plant weight of 120.67 g; rapeseed moisture content at the harvest test was 21.47% and the thousand-grain weight 3.96 g. The designed variable speed reel was mounted on the cutting table of a Ward 4LZ-6.0 full-feed tracked self-propelled rape combine harvester; the test platform and test site are shown in Figure 3.
2.5. Evaluation indicators and measurement methods for reels
According to GB/T 8097-2008, "Harvesting Machinery Combine Harvester Test Methods", a planting area with even rape growth, flat terrain and no pests or diseases was selected as the test field, and harvesting was carried out at wind speeds no greater than 3 m/s. Each group was operated over a 30 m stroke with a 5 m preparation zone, and samples were taken after the combine had reached steady operation. Sampling zones were set up at 8 m intervals, three in total. Rapeseed from the sampling zones was collected separately for each group of tests, taken three times and averaged. It is difficult to evaluate the rape paddling loss separately.
In this paper, the cutting table loss rate is therefore used as the index for evaluating the quality of variable speed reel operation; its determination follows NY/T 1231-2006, Technical Specification for Quality Evaluation of Rapeseed Combine Harvesters. The total mass of rapeseed per square metre was obtained from the mass of kernels harvested at the pick-up point and the corresponding harvested area. The cutting table loss rate is calculated as

$$Y = \frac{W_{bs}}{W_s} \times 100\% \quad (8)$$

where $Y$ is the cutting table loss rate, %; $W_{bs}$ is the actual loss per unit area in each group of tests, g/m²; and $W_s$ is the harvested mass of rapeseed per square metre, g/m².
3. Test results and discussion
3.1. One-factor tests and analysis
After several pre-tests, the reel speed, the forward speed of the implement and the angle between the reel frame and the ground were selected as the factors for the single-factor tests. The reel speed was tested at three levels (25 r/min, 30 r/min and 35 r/min), with the combine travelling at 0.9 m/s and the reel frame parallel to the ground, i.e. at zero angle. The forward speed was tested at three levels (0.7 m/s, 0.9 m/s and 1.1 m/s), giving reel paddle speed ratios of 2.07, 1.61 and 1.32 respectively, again with the frame angle zero. The angle between the frame of the variable speed reel and the ground ranges from −20° to 20° (positive when the frame rotates counterclockwise about its axis, negative otherwise) and was tested at −20°, 0° and 20°, with the reel speed at 30 r/min and the forward speed at 0.9 m/s.
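The three reported paddle speed ratios are mutually consistent: the ratio is the reel tip speed divided by the machine forward speed, and the reported ratio at 0.9 m/s implies a tip speed of 1.61 × 0.9 ≈ 1.45 m/s. A quick check (the tip speed is inferred here, not stated in the paper):

```python
# Paddle speed ratio = reel tip speed / machine forward speed.
# The tip speed below is inferred from the reported ratio at 0.9 m/s;
# it is not stated explicitly in the paper.
tip_speed = 1.61 * 0.9  # about 1.449 m/s at 30 r/min

ratios = {vm: round(tip_speed / vm, 2) for vm in (0.7, 0.9, 1.1)}
# Reproduces the reported ratios 2.07, 1.61 and 1.32.
```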
Each set of tests was repeated three times and the average value taken. The effects of the three factors (reel speed, forward speed of the implement and angle between the reel frame and the ground) on cutting table loss are shown in Figure 4.
As can be seen from Figure 4(a), the cutting table loss rate increases significantly with reel speed, mainly because a higher reel speed increases both the impact force of the paddle teeth on the rape kernels and the number of impacts per unit time, so more kernels are knocked out and seed loss rises. From Figure 4(b), the loss rate gradually increases with the forward speed of the harvester: as machine speed increases the reel speed ratio decreases, the cutting table feed per unit time grows, the paddle teeth collide with more pods during paddling, and seed loss increases. From Figure 4(c), as the angle between the reel frame and the ground increases, the cutting table loss first decreases and then increases; when the frame is parallel to the ground the paddle loss rate is smallest, at 1.46%. With the frame parallel to the ground, the attitude angle of the paddle teeth best matches the required trajectory and attitude of the paddling stage, so the cutting table loss rate is lowest.
3.2. Box-Behnken central combination test and analysis
The single-factor tests showed that the variable speed reel speed, the travel speed of the implement and the angle between the frame and the ground all have a significant effect on cutting table losses.
According to the Box-Behnken central combination test protocol [ ], the cutting table loss test of the variable speed reel was carried out with the cutting table loss rate as the response index and reel speed A, frame-to-ground angle B and combine forward speed C as the influencing factors. The coding of the test factor levels is shown in Table 2. The test method was the same as for the single-factor tests; each group was tested three times and the average taken. The cutting table losses measured for the different factor combinations are shown in Table 3.
Analysis of variance (ANOVA) was performed on the test results in Table 3; the results are shown in Table 4. The model P < 0.01 indicates that the regression model for the cutting table loss rate is highly significant, with a coefficient of determination R² = 0.9918. The lack-of-fit P-value of 0.3810 is greater than 0.05, indicating a small error, so the regression model can be used to predict the cutting table loss rate. The P-values for A, B, C and B² are less than 0.05, indicating significant effects on the model within the 95% confidence interval, while the P-values for AB, AC, BC, A² and C² are greater than 0.05, indicating non-significant effects. From the P-values, the influence of the factors on cutting table loss ranks A > C > B, i.e. reel speed, forward speed and frame-to-ground angle, from largest to smallest.
Excluding the non-significant terms, the ternary quadratic regression equation of the cutting table loss rate in terms of the reel speed, the forward speed and the frame-to-ground angle is

$$Y = -0.0335 - 0.0105A - 0.0014B + 1.4875C + 0.0009B^2 \quad (9)$$

The response surface curves generated with Design-Expert software are shown in Figure 5. The effect of any two of the three factors on the cutting table loss rate was obtained by holding the third of the test factors A, B and C at its 0 level.
From Figure 5(a), when the reel speed is fixed at a certain level, the loss rate first decreases and then increases as the frame-to-ground angle increases, with an optimal angle range of −10°~10°; when the angle is fixed, the loss rate increases with reel speed. From Figure 5(b), when the reel speed is fixed, the loss rate increases gently with the forward speed of the implement, with an optimal forward-speed range of 0.8 m/s~1 m/s; when the forward speed is fixed, the loss rate rises quickly with reel speed, so the optimal reel-speed range is 25 r/min~31 r/min. From Figure 5(c), when the frame-to-ground angle is fixed, the loss rate increases with forward speed, but only gently; the optimal range of the forward speed is 0.7 m/s~1 m/s.
When the forward speed of the machine is fixed at a certain level, the loss rate of the cutting table first falls and then rises as the frame-to-ground angle increases, and the fall and rise are steep, so the optimal range of the frame angle is -10°~10°.

3.3. Parameter optimisation

To find the best combination of operating parameters for the variable speed reel, the Optimization module of the Design-Expert software was used to optimize the test results for the variable speed reel cutting table. With the lowest cutting table loss as the optimization objective, the constraints of equation (10) were established:

$\min Y(A,B,C) \quad \text{s.t.} \quad \begin{cases} 25\ \mathrm{r/min} < A < 31\ \mathrm{r/min} \\ -10^{\circ} < B < 10^{\circ} \\ 0.7\ \mathrm{m/s} < C < 1\ \mathrm{m/s} \end{cases}$ (10)

The software analysis yielded the optimum combination of operating parameters for the variable speed reel: a reel speed of 25 r/min, zero angle between the frame and the ground, and a forward speed of 0.7 m/s, for a cutting table loss rate of 1.12%.

3.4. Field tests and discussion

The test was conducted three times with the optimum working parameters, and the average value was taken. The variable speed reel did not show any hanging or winding of the paddle teeth during the whole test. The mean cutting table loss rate measured in the field was 1.18%, and the relative error between the test value and the optimised value of the regression equation was 5.36%. The results are in good agreement, indicating that the ternary quadratic regression equation relating the cutting table loss rate to the reel speed, the forward speed and the frame-to-ground angle is accurate and reliable.
In order to further verify the advantages of the variable speed reel mechanism over the traditional cam-action reel in reducing tangling and losses, a rape combine harvester with a cam-action reel was subjected to a cutting table loss test in the same test field over a 30 m run; the test was repeated three times and the results averaged. The two types of reel cutting tables are shown in Figure 6. Both the variable speed and cam-action reels had a width of 2.2 m and a stubble height of 30 cm, and the comparative test was carried out at the same machine speed, with both reels at a paddle speed ratio of 1.61. In the field test, the average loss rate of the cam-action reel at 25 r/min was 1.37%, which was 13.9% higher than the loss rate of the variable speed reel. During the field tests the cam-action reel became entangled, and plants and branches were thrown out by the reel.

4. Conclusions

(1) By analysing the biological characteristics of oilseed rape plants and their transport pattern during paddling, a variable speed anti-tangle reel mechanism was designed based on the kinematic principle of a double-crank planar five-bar mechanism, and a single-factor test and a multi-factor regression orthogonal combination test were conducted to evaluate the loss rate of the rape harvesting cutting table. The factors affecting the loss of the cutting table of the variable speed reel were found to be, in order of importance, the reel speed, the machine travel speed and the frame-to-ground angle.

(2) The optimal combination of parameters for the variable speed reel is a reel speed of 25 r/min, the frame parallel to the ground and a machine travel speed of 0.7 m/s, for which the regression model predicts a loss rate of 1.12%. The mean cutting table loss rate measured in the field was 1.18%, a relative error of 5.36% between the experimental value and the theoretically optimized value, indicating high model accuracy.
(3) Comparative tests of the variable speed and cam-action reels for cutting table loss rate and reel winding were carried out. The test results showed that the average loss rate of the cam-action reel was 1.37%, which was 13.9% higher than that of the variable speed reel. During the field tests the cam-action reel became tangled and plants and branches were thrown out by it, while the variable speed reel showed no winding or tangling.

Author Contributions: Conceptualization, methodology, data curation, formal analysis, writing—original draft, writing—review and editing, M.Z. and G.L.; investigation, Y.Y.; data curation, M.J. and Y.Y.; funding acquisition, M.Z.; validation, G.L.; supervision, T.J. All authors have read and agreed to the published version of the manuscript.

Funding: This work was financially supported by the Jiangsu Agricultural Science and Technology Innovation Fund (SCX(22)2103), the Foundation Research Project of Jiangsu Province Natural Science Fund (BK20211022), Funds for Modern Agricultural Industry Technology System Construction of China (CARS-12) and the Key Research Program & Technology Innovation Program of the Chinese Academy of Agricultural

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the authors.

Acknowledgments: The authors thank the editor and anonymous reviewers for providing helpful suggestions for improving the quality of this manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Figure 3. Variable speed reel mechanism and test site. (a) Variable speed reel cutting table; (b) Oilseed rape test fields.

Figure 4. Relationship between the main operating parameters of the variable speed reel and cutting table losses.
(a) Influence of the forward speed of the harvester on the rate of cutting table losses; (b) Influence of the speed of the reel on the rate of cutting table losses; (c) Influence of the angle between the reel frame and the ground on the rate of cutting table losses.

Figure 6. Comparative test of two types of reel cutting tables. (a) Variable speed reel; (b) Cam-action reel.

Table 1. Structural parameters of the variable speed reel mechanism.

Variable   Meaning                                Value/mm
l1         Length of frame                        55
l2         Spindle connecting rod length          120
l3         Crank connecting rod length            253
l4         Eccentric disc connecting rod length   80
l5         Length of crank 2                      396

Table 2. Coding of the test factor levels.

Code value   Reel speed A (r/min)   Frame to ground angle B (°)   Forward speed C (m/s)
-1           25                     -20                           0.7
0            30                     0                             0.9
1            35                     20                            1.1

Table 3. Test scheme and results.

Test No.   Speed of reel (r/min)   Ground angle of frame (°)   Forward speed (m/s)   Loss rate (%)
1          30                      0                           0.9                   1.46
2          35                      -20                         0.9                   2.14
3          30                      20                          1.1                   1.82
4          30                      -20                         0.7                   1.70
5          25                      -20                         0.9                   1.59
6          30                      0                           0.9                   1.43
7          35                      0                           1.1                   1.75
8          30                      20                          0.7                   1.69
9          25                      0                           1.1                   1.31
10         25                      0                           0.7                   1.12
11         35                      20                          0.9                   2.09
12         30                      -20                         1.1                   1.88
13         25                      20                          0.9                   1.52
14         30                      0                           0.9                   1.47
15         35                      0                           0.7                   1.67

Table 4. ANOVA for the cutting table loss rate model.

Source of error   Sum of squares   df   Mean square sum   F-value   P-value
Model             1.08             9    0.1197            188.98    < 0.0001
A                 0.5618           1    0.5618            887.05    < 0.0001
B                 0.0061           1    0.0061            9.55      0.0271
C                 0.0481           1    0.0481            75.87     0.0003
AB                0.0001           1    0.0001            0.1579    0.7075
AC                0.0025           1    0.0025            3.95      0.1037
BC                0.0016           1    0.0016            2.53      0.1728
A^2               0.0041           1    0.0041            6.48      0.0516
B^2               0.4480           1    0.4480            707.38    < 0.0001
C^2               0.0017           1    0.0017            2.74      0.1590
Residuals         0.0032           5    0.0006
Misfit term       0.0023           3    0.0008            1.77      0.3810
Error             0.0009           2    0.0004
R^2               0.9918

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Stable Delaunay Graphs

Let P be a set of n points in R^2, and let DT(P) denote its Euclidean Delaunay triangulation. We introduce the notion of the stability of edges of DT(P). Specifically, defined in terms of a parameter α > 0, a Delaunay edge pq is called α-stable if the (equal) angles at which p and q see the corresponding Voronoi edge e_pq are at least α. A subgraph G of DT(P) is called a (cα, α)-stable Delaunay graph (SDG for short), for some absolute constant c ≥ 1, if every edge in G is α-stable and every cα-stable edge of DT(P) is in G.

Stability can also be defined, in a similar manner, for edges of Delaunay triangulations under general convex distance functions, induced by arbitrary compact convex sets Q. We show that if an edge is stable in the Euclidean Delaunay triangulation of P, then it is also a stable edge, though for a different value of α, in the Delaunay triangulation of P under any convex distance function that is sufficiently close to the Euclidean norm, and vice-versa. In particular, a 6α-stable edge in DT(P) is α-stable in the Delaunay triangulation under the distance function induced by a regular k-gon for k ≥ 2π/α, and vice-versa. This relationship, along with the analysis in the companion paper [3], yields a linear-size kinetic data structure (KDS) for maintaining an (8α, α)-SDG as the points of P move. If the points move along algebraic trajectories of bounded degree, the KDS processes a nearly quadratic number of events during the motion, each of which can be processed in O(log n) time. We also show that several useful properties of DT(P) are retained by any SDG of P (although some other properties are not).
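The stability condition is easy to evaluate directly from circumcenters: for an interior Delaunay edge pq, the dual Voronoi edge runs between the circumcenters of the two triangles adjacent to pq, and the angle at which p (equivalently q) sees it follows from elementary geometry. Below is a minimal sketch illustrating the definition (my own code, not from the paper; `circumcenter` and `stability_angles` are illustrative helper names), using a four-point "kite" whose Delaunay triangulation is two triangles sharing the edge pq:

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle abc (2-D points as (x, y) tuples)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def stability_angles(p, q, r1, r2):
    """Angles at which p and q see the Voronoi edge dual to Delaunay edge pq.

    r1 and r2 are the third vertices of the two triangles adjacent to pq;
    the dual Voronoi edge joins the triangles' circumcenters.  The edge pq
    is alpha-stable when these (equal) angles are at least alpha."""
    c1, c2 = circumcenter(p, q, r1), circumcenter(p, q, r2)
    def seen_from(v):
        u1 = (c1[0] - v[0], c1[1] - v[1])
        u2 = (c2[0] - v[0], c2[1] - v[1])
        dot = u1[0] * u2[0] + u1[1] * u2[1]
        norm = math.hypot(*u1) * math.hypot(*u2)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return seen_from(p), seen_from(q)

# Kite example: edge pq is shared by triangles pqr1 and pqr2.
p, q, r1, r2 = (0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (1.0, -1.5)
ang_p, ang_q = stability_angles(p, q, r1, r2)
```

Both angles come out equal (about 45.2° in this example), since p and q are equidistant from every point on their common perpendicular bisector, which is the line carrying the Voronoi edge.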
• Bisector
• Convex distance function
• Delaunay triangulation
• Kinetic data structure
• Moving points
• Voronoi diagram

All Science Journal Classification (ASJC) codes
• Theoretical Computer Science
• Geometry and Topology
• Discrete Mathematics and Combinatorics
• Computational Theory and Mathematics
James V. Burke

Block tridiagonal systems appear in classic Kalman smoothing problems, as well as in generalized Kalman smoothing, where problems may have nonsmooth terms, singular covariance, constraints, nonlinear models, and unknown parameters. In this paper, first we interpret all the classic smoothing algorithms as different approaches to solve positive definite block tridiagonal linear systems. Then, we obtain new …

Fast Robust Methods for Singular State-Space Models
State-space models are used in a wide range of time series analysis applications. Kalman filtering and smoothing are work-horse algorithms in these settings. While classic algorithms assume Gaussian errors to simplify estimation, recent advances use a broad range of optimization formulations to allow outlier-robust estimation, as well as constraints to capture prior information. Here we …

Gradient Sampling Methods for Nonsmooth Optimization
This paper reviews the gradient sampling methodology for solving nonsmooth, nonconvex optimization problems. An intuitively straightforward gradient sampling algorithm is stated and its convergence properties are summarized. Throughout this discussion, we emphasize the simplicity of gradient sampling as an extension of the steepest descent method for minimizing smooth objectives. We then provide overviews of various …

A Dynamic Penalty Parameter Updating Strategy for Matrix-Free Sequential Quadratic Optimization
This paper focuses on the design of sequential quadratic optimization (commonly known as SQP) methods for solving large-scale nonlinear optimization problems. The most computationally demanding aspect of such an approach is the computation of the search direction during each iteration, for which we consider the use of matrix-free methods. In particular, we develop a method …

Subdifferentiation and Smoothing of Nonsmooth Integral Functionals
The subdifferential calculus for the expectation of nonsmooth random integrands involves many fundamental and challenging problems in stochastic optimization. It is known that for Clarke regular integrands, the Clarke subdifferential equals the expectation of their Clarke subdifferential. In particular, this holds for convex integrands. However, little is known about calculation of Clarke subgradients for the …

Generalized matrix-fractional (GMF) functions are a class of matrix support functions introduced by Burke and Hoheisel as a tool for unifying a range of seemingly divergent matrix optimization problems associated with inverse problems, regularization and learning. In this paper we dramatically simplify the support function representation for GMF functions as well as the representation of …

Foundations of gauge and perspective duality
Common numerical methods for constrained convex optimization are predicated on efficiently computing nearest points to the feasible region. The presence of a design matrix in the constraints yields feasible regions with more complex geometries. When the functional components are gauges, there is an equivalent optimization problem, the gauge dual, where the matrix appears only in the …

Level-set methods for convex optimization
Convex optimization problems arising in applications often have favorable objective functions and complicated constraints, thereby precluding first-order methods from being immediately applicable. We describe an approach that exchanges the roles of the objective and constraint functions, and instead approximately solves a sequence of parametric level-set problems. A zero-finding procedure, based on inexact function evaluations and …

On a new class of matrix support functionals with applications
A new class of matrix support functionals is presented which establishes a connection between optimal value functions for quadratic optimization problems, the matrix-fractional function, the pseudo matrix-fractional function, and the nuclear norm. The support function is based on the graph of the product of a matrix with its transpose. Closed form expressions for the support …

Iterative Reweighted Linear Least Squares for Exact Penalty Subproblems on Product Sets
We present two matrix-free methods for solving exact penalty subproblems on product sets that arise when solving large-scale optimization problems. The first approach is a novel iterative reweighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) …
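As a toy illustration of the block tridiagonal viewpoint in the first abstract above: with 1×1 blocks, a positive definite tridiagonal system can be solved in O(n) by forward elimination and back-substitution (the classic Thomas algorithm). This is my own sketch of the scalar special case, not code from any of the papers listed; the block-structured versions replace the scalar divisions with small block solves:

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm.

    sub: n-1 subdiagonal entries, diag: n diagonal entries,
    sup: n-1 superdiagonal entries, rhs: n right-hand side entries.
    Assumes the elimination does not break down (e.g. the matrix is
    positive definite, as in the Kalman smoothing setting)."""
    n = len(diag)
    c = [0.0] * n          # modified superdiagonal
    d = [0.0] * n          # modified right-hand side
    c[0] = sup[0] / diag[0] if n > 1 else 0.0
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):  # forward elimination
        m = diag[i] - sub[i - 1] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    x = [0.0] * n          # back-substitution
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Discrete-Laplacian-style SPD system whose exact solution is [1, 1, 1].
x = solve_tridiagonal([-1, -1], [2, 2, 2], [-1, -1], [1, 0, 1])
```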
Calculus resources for the curious?
September 10, 2006

I would like to relearn some calculus on my own. Please recommend the best book for the purpose. It is embarrassing to me that I presently lack the math required to properly grasp basic Newtonian physics. I would like to regain competency equivalent to what is gained over the course of a year or two of college-level calculus. Please point me in the direction of a great (text)book that will get me started. Clarity and concision are a must. Tangentially, I'm also curious as to what topics are usually covered in two years of calculus classes.

The sequence as I was taught it/will be taught it goes single-variable differential, single-variable integral, insanely basic differential eqs, multivariate differential, multivariate integral, differential eqs, partial differential eqs. This old thread of mine has a list of the recommended calc books. In addition, depending on what you're doing, learning it with infinitesimals instead of limits can be more intuitive at first. This online book is a good introduction to calculus with infinitesimals. Probably wouldn't be bad to look at that briefly, at least.
posted by devilsbrigade at 8:14 PM on September 10, 2006 [1 favorite]

That sequence, btw, was one year of highschool calculus and 3 quarters of multivariable calculus.
posted by devilsbrigade at 8:15 PM on September 10, 2006

The best calculus book of all time is Michael Spivak's Calculus. Clarity and concision are its strong suits, but be warned: its lack of fluffiness makes it too difficult for the non-committed.
posted by iconjack at 8:20 PM on September 10, 2006 [1 favorite]

In most of the course series I've seen, calculus is usually divided into 3 semester-long courses -- calc 1: derivatives; calc 2: integration; and calc 3: vector calculus.
I don't really have a good recommendation for covering the first two; as most intro college textbooks try to be everything to everyone, they mutate quickly into encyclopedic monstrosities. However, once you've covered the material in the first two courses, Div, Grad, Curl and All That is a classic for its intuitive and concise coverage of vector calc.
posted by Maxwell_Smart at 8:20 PM on September 10, 2006 [1 favorite]

What level of rigor are you interested in?
posted by mr_roboto at 9:23 PM on September 10, 2006

Paul Dawkins, a professor at Lamar University, has a series of comprehensive class notes that correspond with courses using Stewart's Calculus: Early Transcendentals. The book isn't very good but the notes, examples, and graphics he uses are fantastic.
posted by djb at 10:17 PM on September 10, 2006 [1 favorite]

I am in college, doing exactly what you describe. Ian Stewart's books are standard around here, and I like them compared to the one or two others I've used.
posted by phrontist at 10:30 PM on September 10, 2006

Calculus: Early Transcendentals? What's that? Apparently, it's different from the same author's, which I think would be an excellent choice. In addition to being an excellent calculus text, it includes several asides which cover applications. One of them is a derivation of the laws governing planetary orbits.
posted by stuart_s at 10:33 PM on September 10, 2006

Er... specifically, it starts with Newton's laws of motion and his law of universal gravitation and uses the techniques of calculus to derive Kepler's laws of planetary motion. It also has a lengthy discussion of Maxwell's laws of electromagnetism which I'm going to read Real Soon Now. Spivak is another excellent calculus book but it has less in the way of exposition and relies on the reader to develop the techniques in the exercises. I also think it has less explicit discussion of applications.
posted by stuart_s at 10:45 PM on September 10, 2006

I vote for "Calculus: the Easy Way".
Unlike the books that were recommended earlier it wasn't designed to be used in conjunction with a calc course, but rather for self-teaching. The book follows a bit of a fantasy narrative (I swear it's not lame) where the characters in a kingdom are forced to use calculus to solve the various problems they face. It is very straight-forward, actually, and there are plenty of practice exams and math-y explanations. But the narrative makes a generally difficult subject much easier to digest. I wouldn't say that the book (or any book) could truly bring you up to the level of an advanced calc student, but it will get you started, and will give you the confidence and the core knowledge needed to attack advanced calc texts.
posted by apple scruff at 11:05 PM on September 10, 2006 [1 favorite]

Arghhh! I have confused my calculus textbooks. The author isn't Stewart. It's Simmons. It's a good text and everything else I said above is accurate. I checked.
posted by stuart_s at 11:18 PM on September 10, 2006

I'd second a recommendation for Spivak. The man knows his stuff.
posted by vernondalhart at 12:09 AM on September 11, 2006

Oh yeah Spivak. Then after you've got the basics, while the calculust is still upon you, you can transition straight into Comprehensive Introduction to Differential Geometry
posted by rlk at 7:21 AM on September 11, 2006

How fast do you want to learn? What specific things do you hope to analyze? Is it really calculus that interests you, or just math in general? Do you feel somehow inadequate in your work because you have things to analyze that you can't because you feel you lack the tools? Or are you just really interested in learning basic Newtonian physics? How long ago was it that you felt competent in this area and is a 'refresher' perhaps all you need? Do you feel competent in geometry, trig, and algebra? Or do they need work, too?
Lots of questions, I know, but any recommendation above presumes answers that may not be relevant to your specific needs. (Feel free to drop me an email (in my profile)).
posted by FauxScot at 7:52 AM on September 11, 2006

What an interesting thread -- I'm currently doing the same thing (only I'll be taking a university calculus course in a few weeks). At the moment I'm brushing up on precalc, and as such haven't gotten into too much calculus; however, many people have recommended Hurricane Calculus, and in a preliminary read I enjoyed Silvanus Thompson's Calculus Made Easy (as recommended by baho). My class is using Calculus by Varberg, Purcell, and Rigdon. Although the 9th edition is currently unrated on Amazon, earlier editions have received some favorable reviews. Of course, I can't comment on it myself, yet.
posted by penchant at 11:14 AM on September 11, 2006
Changing the Culture 2002: Rigour and Intuition in Mathematics

The 5th Annual Changing the Culture Conference, organized and sponsored by the Pacific Institute for the Mathematical Sciences, will again bring together mathematicians, mathematics educators and school teachers from all levels to work together towards narrowing the gap between mathematicians and teachers of mathematics, and between those who do and enjoy mathematics and those who don't believe they could.

Event Type: Educational, Conference
Question asked by Filo student

9(1i) If two towers of height x m and 7 m subtend angles of 30° … respectively at the centre of the line joining their feet, then find the ratio of … .

Video solutions (1): uploaded on 1/13/2023, 8 mins.

Updated On: Jan 13, 2023
Topic: All topics
Subject: Mathematics
Class: Class 11
Answer Type: Video solution: 1
Upvotes: 61
Avg. Video Duration: 8 min
Compare Fractions with Same Numerator Worksheets (answers, printable, online, grade 3)

Printable "Fraction" worksheets: Equal Parts, Introduction to Fractions, Compare Unit Fractions, Compare Fractions with Same Numerator, Fractions on the Number Line, Compare Fractions on the Number Line, Compare Fractions, Order Fractions

Compare Fractions with Same Numerator Worksheets

In these free math worksheets, you will learn how to compare fractions on the number line.
• Compare fractions and whole numbers on the number line by reasoning about their distance from 0.
• Understand distance and position on the number line as strategies for comparing fractions.

How to compare fractions with the same numerator visually?

Comparing fractions with the same numerator is relatively straightforward because the key factor determining their size is the denominator. The bigger the denominator, the smaller the fraction.

Imagine the fractions as parts of a whole (e.g., slices of a pizza or pie). The fraction with the smaller denominator represents larger pieces of the whole, making it the bigger fraction. Conversely, the fraction with the larger denominator represents smaller pieces, making it the smaller fraction.

Example: Compare 3/4 and 3/8.
3/4 represents 3 out of 4 equal slices of a pizza.
3/8 represents 3 out of 8 smaller slices of the same pizza.
Since 3 of the larger slices are bigger than 3 of the smaller slices, 3/4 is bigger than 3/8.

When comparing fractions with the same numerator, focus on the denominators. The fraction with the smaller denominator represents larger parts of the whole, making it greater. The fraction with the larger denominator represents smaller parts, making it less.

Have a look at this video if you need to learn how to compare fractions with the same numerator.

Click on the following worksheet to get a printable pdf document. Scroll down the page for more Compare Fractions with Same Numerator Worksheets.
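The rule can also be checked mechanically with Python's `fractions` module; this is a small illustration (the helper name `compare_same_numerator` is my own, not from this page):

```python
from fractions import Fraction

def compare_same_numerator(n, d1, d2):
    """Compare n/d1 with n/d2 (all positive) using only the denominators:
    the smaller denominator means bigger pieces, hence a bigger fraction."""
    if d1 == d2:
        return "="
    return ">" if d1 < d2 else "<"

# 3/4 vs 3/8: same numerator, 4 < 8, so 3/4 is the larger fraction.
assert compare_same_numerator(3, 4, 8) == ">"
assert Fraction(3, 4) > Fraction(3, 8)   # exact arithmetic agrees
```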
More Compare Fractions with Same Numerator Worksheets (Answers on the second page.)
Compare Fractions with Same Numerator Worksheet #1
Compare Fractions with Same Numerator Worksheet #2
Compare Fractions with Same Numerator Worksheet #3 (use <, >, =)
Compare Fractions with Same Numerator Worksheet #4 (use <, >, =)

Related worksheets: Compare Fractions with Same Numerator, Equivalent Fractions, Reduce Proper Fractions, Simplify Proper & Improper Fractions, Improper Fractions to Mixed Numbers, Mixed Numbers to Improper Fractions
What's The Hedge Ratio (Inflation-Linked Edition)?

Most investment analysis aimed at retail investors is for directional trades: how much money do you make (or sadly, lose) if something happens (such as a yield change)? Directional trades are fun to talk about -- and easy to analyse -- but unless you have an amazing forecasting record, not a great way to make money in the short term. In order to control your risk, you need to do relative value trades -- buy one or more instruments, and sell one or more others. In particular, if we are interested in breakeven inflation, we are interested in a relative value trade (unless you enter into an inflation swap). As soon as we are no longer just buying (or selling) one instrument, we run into the oft-repeated question: what is the hedging ratio? (The hedging ratio is the relative sizes of the instrument positions involved in the trade.)

The correct answer to this question is: what are you trying to accomplish? For relative value trades involving linkers and nominal instruments, we could have either a larger or smaller position in the linker, depending upon what you are trying to do. Even if you are not interested in structuring fixed income relative value trades, understanding this concept will help you better understand fixed income analysis produced by sell side research analysts, and possibly understand some empirical behaviour of breakeven inflation.

Note: This article is an unedited first draft of ideas that will make their way into an upcoming report on breakeven inflation. This report is expected to be more technical than my previous output, and so there may be more quantitative examples injected into the text before it is completed. My objective here is to get all my ideas sketched out, and I will then chop out the loose bits of logic as part of the editing process.

Background: DV01

The usual way of discussing the price (or return) sensitivity for bonds in introductory texts is to use the bond duration.
However, duration is an inadequate (if not downright wrong) way to measure fixed income portfolio risk. The easy way to measure risk is to use the dollar value of a basis point, or DV01. Since not all investors are dollar-based, the "dollar" in the name should not be taken literally; we typically write DV01 regardless of the currency. For a position in an instrument, the usual definition is to define the DV01 as the change in the local currency (e.g., dollar) value of the position if all interest rates increase by one basis point. For vanilla bonds, the relationship yield up/price down implies that the DV01 is a negative number. (Some systems might flip the sign, since looking at negative numbers all of the time gets annoying.) However, the DV01 of an instrument can be positive, such as paying fixed in a vanilla interest rate swap.

The downside of the DV01 is that the measure is dependent upon the size of the position. This makes it hard to describe the scale of positions. For a retail investor, a DV01 of $1,000 might seem dizzying -- losing (about) $100,000 if rates move 1%! Conversely, for a big institutional investor, a DV01 of $1,000 is laughably small.

The advantage of DV01 over modified duration as a risk measure is that it is applicable to derivative instruments. For vanilla bonds, we can use the modified duration and the change in yields to approximate the percentage return of the position. For an on-market swap, the NPV of the position is zero, and so any profit or loss represents an infinite (positive or negative) return. Duration analysis is also a complete mess if we try to apply it to index-linked/nominal spread trades.

One should note that the relationship between price changes and yield changes is not exactly linear: the change in value for a 10 basis point move is not exactly equal to 10 times the DV01. The DV01/modified duration changes slowly as a function of yield, an effect that is described by the convexity. However, interest rates have to move a lot for convexity effects to show up; you just need to periodically refresh hedge ratios.
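For a vanilla bond, the DV01 can be estimated by repricing after bumping the yield by one basis point. A minimal sketch (simplified conventions, no day counts or accrued interest; `bond_price` and `dv01` are illustrative helpers of my own, not market-standard library calls):

```python
def bond_price(face, coupon_rate, ytm, years, freq=2):
    """Price of a vanilla bond off a flat yield, compounded freq times/year."""
    n = years * freq
    c = face * coupon_rate / freq
    y = ytm / freq
    return sum(c / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

def dv01(face, coupon_rate, ytm, years, freq=2):
    """Change in value for a +1bp yield move; negative for a long position."""
    bump = 1e-4
    return (bond_price(face, coupon_rate, ytm + bump, years, freq)
            - bond_price(face, coupon_rate, ytm, years, freq))

# A 10-year 3% semi-annual bond at a 3% yield prices at par; its DV01 is
# roughly -0.086 per 100 of face (a modified duration of about 8.6).
par = bond_price(100, 0.03, 0.03, 10)
risk = dv01(100, 0.03, 0.03, 10)
```

The bump-and-reprice approach is what makes the measure portable to derivatives: it only needs a pricing function, not a duration formula.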
It would be fairly unusual for convexity effects to reverse the sign between the true profitability of a trade, and the approximation generated by multiplying the spread change by the DV01, and adding in carry. (This breaks down for instruments with embedded options. As a result, instruments with optionality are referred to as non-linear instruments, and fixed income chatter will use the linear/non-linear distinction accordingly.)

With respect to index-linked bonds, the beauty of the DV01 is that it converts the yield sensitivity to a current dollar amount. This is unlike other analytics for such bonds (for the Canadian linker model, at least), which are expressed in real terms.

Hedge Ratio and Carry

Most relative value analysis will start off with a spread chart: the yield of one instrument versus another. The analyst will then come up with a story why that spread will go up or down, possibly by using the highly advanced technique of drawing some lines through the spread history. However, the spread movement is not enough to tell us about the profitability of spread trades for holding periods that extend beyond the current trading day. We need to incorporate the carry of the trade, which is the interest cost/gain that you make solely based on holding the position.

Take an example where one bond has an interest rate sensitivity ten times the other. A $10,000 position in the long maturity bond has a DV01 of $10, while a $100,000 position in the short maturity also has a DV01 of $10. We cannot just look at the fact that the long maturity bond has a yield of 100 basis points more than the short to assume that buying the long and selling the short on a DV01-neutral basis has a positive carry. Since we are selling short $100,000 and only buying $10,000, we are implicitly stuck with a $90,000 investment in cash -- which we assume has a DV01 of zero. If the short maturity bond has a sufficient yield pickup over cash, the interest cost on the short position is greater than what is received from the long leg.
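The carry arithmetic for that DV01-neutral example can be spelled out numerically. The position sizes come from the text; the yields are my illustrative assumptions (chosen so the long bond yields 100bp more than the short):

```python
# Long $10,000 of the long bond, short $100,000 of the short bond,
# leaving $90,000 implicitly invested in cash. Yields are assumed.
long_mv, short_mv = 10_000, -100_000
cash_mv = -(long_mv + short_mv)              # 90_000 held as cash
y_long, y_short, y_cash = 0.04, 0.03, 0.025  # assumed annual yields

annual_carry = long_mv * y_long + short_mv * y_short + cash_mv * y_cash
print(round(annual_carry))  # -350
```

Despite the 100bp yield pickup of the long bond over the short, the carry is negative, because the short's yield pickup over cash dominates the small long position.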
Since inflation-linked bond notional amounts are indexed to CPI -- which can achieve very high annualised returns over short periods -- the net exposure to inflation for breakeven trades is highly dependent upon the hedge ratio.

Hedge Ratios for Linker-Nominal Trades

There are three plausible hedge ratios that one can use for trades between conventional bonds and linkers.

• Maturity/market value matched.
• DV01 matched.
• Empirical DV01 matched.

I discuss these in turn.

Maturity/Market Value Matched

If we believe that the economic breakeven inflation at a maturity point is too high or too low relative to our expectations, and we have the capacity to hold to maturity, we want to match the maturity and market value amounts of holdings (at the time of trade entry). (Note that market value will diverge from notional amount as a result of the bonds trading away from par, and the result of previous inflation indexation.) If we assume that both instruments are zero coupon (and make simplifying assumptions about yield conventions), such a trade structure will break even if held to maturity and realised annualised inflation matches the quoted spread between the two instruments. (Note: this is the definition of economic breakeven; the economic breakeven will differ slightly from the instrument spread as a result of various deviations from the simplified mathematical quote conventions.)

The key is that the discounted values of the instruments are equal both at trade inception and maturity if realised inflation exactly hits economic breakeven inflation. At an instant before maturity, the instruments are converging to a zero maturity, and the DV01 for both will equal zero. If the yields are unchanged, we will see that the DV01 of the two positions will also converge. However, unless the economic breakeven is zero, the starting position has a net DV01 that is non-zero.
If the economic breakeven inflation rate is positive, the nominal yield is greater than the indexed yield, and the inflation-linked bond position will have a DV01 with greater magnitude than the conventional bond position. That is, the position is not DV01-matched at inception. Since you are not DV01-matched at inception, some traders will argue that this is the "wrong way" to hedge; you need to DV01 match at inception (described next). However, doing it that way means that you are no longer locking in the relationship between economic breakevens and realised inflation.

In the real world, things are complicated by a few factors. Firstly, bonds have coupons, and so the future value of the index-linked bond depends upon the path of inflation, not just the annualised rate to maturity. The second issue is that maturities may not be matched, creating a gap that is exposed to the full force of CPI seasonality -- which is large. Finally, there are complications with the quote convention, and the fact that CPI indexation is done with a lag (which means that the initial path of indexation is fixed).

DV01 Matched

In this structure, you buy/sell bonds so that the index-linked leg has the offsetting DV01. Once again, if we assume a simple zero coupon example, if expected inflation is positive, the unit DV01 of an index-linked bond is greater. This means that we would have a smaller market value position in the index-linked bond than the conventional. If we continue to DV01-match the position to maturity, and quoted yields are unchanged, the implication is that we need to keep buying the index-linked bond to keep up with the decay of the position DV01. This means that we will underperform if inflation is greater than the economic breakeven during the beginning of the trade, as we had less market value held of the index-linked bonds. Therefore, we can no longer compare the economic breakeven of inflation and realised inflation to determine the profitability of the trade.
The way to keep this straight is to realise that there are two interpretations of the yield difference between a maturity-matched conventional and index-linked bond.

1. It is an approximation of the true economic breakeven; if we want to trade the economic breakeven versus realised inflation, we market value match.
2. It is a spread between two instruments, and we are just trading the spread in the same way we trade other spreads. There is no interpretation with regards to realised inflation to maturity; this is a short-term trading concept.

Empirical Matching

Finally, there is a school of thought that argues that both of the previous approaches are the wrong answer for true relative value trading. Both approaches give a hedging ratio that results in a position that has embedded directionality in practice.

The argument is as follows. One can observe that index-linked bond yields move less than conventional yields during severe market moves. (I discussed this concept in an earlier article.) Basically, breakeven inflation is directional with interest rates. If we want to be truly non-directional, we need to put on hedges based on empirical hedging ratios. (This is similar to the idea behind using Principal Component Analysis factors to weight butterfly trades.)

I often heard that we could assume a 2:1 ratio for conventional/indexed yield moves. (As a disclaimer, I believe I put out proprietary research making such a claim.) That is, a matched-maturity index-linked bond yield moves 50% as much as the conventional bond in a big interest rate move. (There are a lot of theoretical issues around that claim; the linked article introduces them.) If we believe that claim, that means that the conventional DV01 has to be 50% of the index-linked DV01. (This is obviously way different than the DV01-matched position.) Once again, we will no longer be locking in an economic breakeven; rather, we are trading the relative attractiveness of the asset classes.
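A stylised zero-coupon example makes the three sizing rules concrete. The yields, maturity, and the 2:1 empirical ratio below are illustrative assumptions, not market data; sizes are per $1mm market value of the conventional bond:

```python
# Stylised 10-year zero-coupon pair; yields are assumed for illustration.
T = 10
y_nom, y_real = 0.03, 0.01   # nominal and indexed (real) yields
ratio = 0.5                  # assumed: linker yield moves 0.5x as much as the nominal

# DV01 per $1 of market value of a zero-coupon bond ~ T / (1 + y) * 1bp.
dv01_nom = T / (1 + y_nom) * 1e-4
dv01_link = T / (1 + y_real) * 1e-4

conv_mv = 1_000_000
linker_mv = {
    "market value matched": conv_mv,
    "DV01 matched": conv_mv * dv01_nom / dv01_link,
    # Empirical: conventional position DV01 should be 50% of the linker
    # position DV01, so the linker position DV01 is 2x the conventional's.
    "empirical (2:1)": conv_mv * dv01_nom / (ratio * dv01_link),
}
for rule, mv in linker_mv.items():
    print(f"{rule}: ${mv:,.0f}")
```

The ordering matches the text: DV01 matching gives the smallest linker position, market value matching an equal one, and empirical matching roughly double.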
Under the assumption of a positive inflation breakeven, these hedging ratios imply either equal market values (first method), a smaller index-linked market value (DV01-matched), or a greater index-linked market value (empirical DV01 matching). You pays your money, and you takes your chances.

My intuition is that empirical DV01-matching is a self-fulfilling prophecy. If relative value trading is dominated by traders who believe in the same hedging ratio, any yield shifts generated by directional traders that do not conform to the empirical hedging ratio will generate profits/losses for the relative value traders on the two sides of the trade. Profit-taking activity would tend to push the yields back to the relationship implied by the empirical hedge ratio. As I discussed in the previous article, such arguments are incompatible with anchored inflation expectations (which are also an empirical feature of modern breakevens).

The net result is that someone with a purely fundamentalist approach to analysing breakevens will have a hard time interpreting breakeven inflation changes. What may appear to be "unanchored inflation expectations" or "changing risk premia" may be just the result of market participants following an empirical DV01 hedging strategy. As a result, market movements are very useful for generating excitement among economists about unusual market behaviour, when the markets are just following a simple behavioural pattern.

Concluding Remarks

You need to know what you want to do first, then decide on what hedging ratio gets you there afterwards.

Appendix: Inflation Swaps

Entering into an inflation swap is one way to literally lock in the relationship between realised inflation and a market-implied expectation. If that is the only leg to your trade, there is no hedging ratio. However, if you are hedging an inflation swap with bonds, you are back to worrying about hedging ratios.
(c) Brian Romanchuk 2018
In mathematics, a function is defined as a relationship between defined values and one or more variables. For example, a simple math function may be:

y = 2x

In this example, the relationship of y to x is that y is twice as much as the value assigned to x. While math functions can be far more complex than this, most are simple relative to functions used in computer programming. This may be why math functions are often referred to as "expressions," while computer functions are often called "procedures" or "subroutines."

Computer functions are similar to math functions in that they may reference parameters, which are passed, or input, into the function. If the example above were written as a computer function, "x" would be the input parameter and "y" would be the resulting output value. It might look something like this:

function double(x) {
    $y = 2 * x;
    return $y;
}

The above example is a very basic function. Most functions used in computer programs include several lines of instructions and may even reference other functions. A function may also reference itself, in which case it is called a recursive function. Some functions may require no parameters, while others may require several. While it is common for functions to return variables, many functions do not return any values, but instead output data as they run.

Functions are sometimes considered the building blocks of computer programs, since they can control both small and large amounts of data. While functions can be called multiple times within a program, they only need to be declared once. Therefore, programmers often create "libraries" of functions that can be referenced by one or more programs. Still, the source code of large computer programs may contain hundreds or even thousands of functions.
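The recursive functions mentioned above can be illustrated with a classic example; this is written in Python (my choice of language, not the article's):

```python
def factorial(n):
    """A recursive function: it calls itself with a smaller input."""
    if n <= 1:          # base case stops the recursion
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```

The base case is essential: without it, the function would call itself forever.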
Brain Teaser Viral Math Puzzle: Can You Find The Missing Number Here?

Brain teasers are an exciting form of puzzle that needs thinking to solve. Brain teasers make you think out of the box and exploit your mind's potential. One of the most recent brain teasers trending on social media and boggling many minds is "Can You Find The Missing Number Here?". Let us first have a look at what this puzzle is.

Image Source: Pinterest

In this brain teaser, you must find the missing number in this math puzzle. Let us now look at the answer to this brain teaser.

Answer To "Can You Find The Missing Number Here?"

If you are still trying to get the answer, we have a hint for you: there are a few different solutions, and all you need to know is basic maths to get the equations right. Let us now see the solution. The pattern is A·C + B/2, so the answer is 23.

Disclaimer: The above information is for general informational purposes only. All information on the Site is provided in good faith; however, we make no representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, reliability, availability or completeness of any information on the Site.
Nautical Miles and Knots

A nautical mile is a precise measurement based on the circumference of the earth. The equator divides the earth into two equal halves. This circular line is divided into 360 equal parts called degrees. Each degree is divided into 60 smaller parts, called minutes. A nautical mile is the length of one minute. The nautical mile is a standardized unit of measurement used by all nations for air and sea travel. It equals 1.1508 statute miles (1.852 kilometers). If you are traveling at one nautical mile per hour, you are traveling at the speed of one knot.

Why is it called a knot? To tell speed, a ship would carry a line wound on a reel. A chip of wood on the end of the line was allowed to drag in the water behind the ship, causing the line to unreel. The line was knotted at intervals of 47 feet 3 inches and allowed to drag for exactly 28 seconds. (47 feet 3 inches are to one nautical mile -- about 6,076 feet -- what 28 seconds are to one hour.) If the line unwound to the fifth knot in 28 seconds, the ship was moving at 5 knots.
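The proportion above can be checked numerically; a short Python sketch (the conversion constants are standard, not from the original text):

```python
KNOT_SPACING_FT = 47 + 3 / 12             # 47 feet 3 inches between knots
INTERVAL_S = 28                           # timing interval in seconds
NAUTICAL_MILE_FT = 1.852 * 1000 / 0.3048  # one nautical mile in feet (~6076.1)

# Speed represented by one knot passing in 28 seconds, in nautical miles per hour:
speed = (KNOT_SPACING_FT / INTERVAL_S) * 3600 / NAUTICAL_MILE_FT
print(round(speed, 4))  # 0.9998
```

Each knot counted in 28 seconds corresponds almost exactly to one knot of speed, which is the whole point of the spacing the sailors chose.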
Discrete Mathematics - IDM Course details Discrete Mathematics IDM Acad. year 2021/2022 Winter semester 5 credits Sets, relations and mappings. Equivalences and partitions. Posets. Structures with one and two operations. Lattices and Boolean algebras. Propositional and predicate calculus. Elementary notions of graph theory. Connectedness. Subgraphs and morphisms of graphs. Planarity. Trees and their properties. Basic graph algorithms. Network flows. Credit+Examination (written) • 26 hrs lectures • 26 hrs exercises • 64 pts final exam (written part) • 25 pts mid-term test (written part) • 6 pts numeric exercises • 5 pts projects Fuchs Petr, RNDr., Ph.D. Harmim Dominik, Ing. Havlena Vojtěch, Ing., Ph.D. Hliněná Dana, doc. RNDr., Ph.D. Holík Lukáš, doc. Mgr., Ph.D. Lengál Ondřej, Ing., Ph.D. Síč Juraj, Mgr. Vážanová Gabriela, Mgr., Ph.D. Subject specific learning outcomes and competences The students will acquire basic knowledge of discrete mathematics and the ability to understand the logical structure of a mathematical text. They will be able to explain mathematical structures and to formulate their own mathematical claims and their proofs. This course provides basic knowledge of mathematics necessary for a number of following courses. The students will learn elementary knowledge of algebra and discrete mathematics, with an emphasis on mathematical structures that are needed for later applications in computer science. Mathematics stood at the birth of computer science and since then has always been in the core of almost all of its progress. Discrete mathematics aims at understanding the aspects of the real world that are the most fundamental from the point of view of computer science. It studies such concepts as a set (e.g. a collection of data, resources, agents), relations and graphs (e.g. relationships among data or description of a communication), and operations over elements of a set (especially basic arithmetical operations and their generalization). 
The mathematical logic then gives means of expressing ideas and reasoning clearly and correctly and is, moreover, the foundation of "thinking of computers". Generally speaking, discrete mathematics teaches the art of abstraction -- how to apprehend the important aspects of a problem and work with them. It provides a common language for talking about those aspects precisely and effectively. Besides communication of ideas, it helps to structure thought into exactly defined notions and relationships, which is necessary when designing systems so large and complex as today's software and hardware. For example, discrete math gives the basic tools for expressing what a program does; what its data structures represent; how the amount of needed resources depends on the size of the input; how to specify and argue that a program does what it should do. Similarly essential uses can be found everywhere in computer science. One could say that a programmer without mathematics is similar to a piano player who cannot read notes: if he is talented, he can still succeed, but his options are limited, especially when it comes to solving complex problems. In order to teach mathematical thinking to students, we emphasize practising mathematics by using it to solve problems -- in the same way as programming can only be learnt through programming, mathematics also can be learnt only by doing it.

Prerequisite knowledge and skills

Secondary school mathematics.

1. The formal language of mathematics. A set intuitively. Basic set operations. Power set. Cardinality. Sets of numbers. The principle of inclusion and exclusion.
2. Binary relations and mappings. The composition of binary relations and mappings.
3. Reflexive, symmetric, and transitive closure. Equivalences and partitions.
4. Partially ordered sets and lattices. Hasse diagrams. Mappings.
5. Binary operations and their properties.
6. General algebras and algebras with one operation. Groups as algebras with one operation. Congruences and morphisms.
7. General algebras and algebras with two operations. Lattices as algebras with two operations. Boolean algebras.
8. Propositional logic. Syntax and semantics. Satisfiability and validity. Logical equivalence and logical consequence. Equivalent formulae. Normal forms.
9. Predicate logic. The language of first-order predicate logic. Syntax, terms, and formulae, free and bound variables. Interpretation.
10. Predicate logic. Semantics, truth definition. Logical validity, logical consequence. Theories. Equivalent formulae. Normal forms.
11. A formal system of logic. Hilbert-style axiomatic system for propositional and predicate logic. Provability, decidability, completeness, incompleteness.
12. Basic concepts of graph theory. Graph isomorphism. Trees and their properties. Trails, tours, and Eulerian graphs.
13. Finding the shortest path. Dijkstra's algorithm. Minimum spanning tree problem. Kruskal's and Jarník's algorithms. Planar graphs.

Syllabus of numerical exercises

Examples at tutorials are chosen to suitably complement the lectures.

• Evaluation of the five written tests (max 25 points).
• The knowledge of students is tested at exercises (max. 6 points); at five written tests for 5 points each, at an evaluated home assignment with a defence for 5 points, and at the final exam for 64 points.
• If a student can substantiate serious reasons for an absence from an exercise, (s)he can either attend the exercise with a different group (please inform the teacher about that).
• Passing boundary for ECTS assessment: 50 points. The minimal total score of 12 points gained out of the five written tests.

Course inclusion in study plans

• Programme BIT, 1st year of study, Compulsory
• Programme IT-BC-3, field BIT, 1st year of study, Compulsory
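Lecture 13 of the syllabus above mentions Dijkstra's algorithm; a compact Python sketch (the graph and weights are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; weights must be non-negative.
    graph: dict mapping node -> list of (neighbour, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The path a -> b -> c (cost 3) beats the direct edge a -> c (cost 4), which is exactly the kind of relaxation step the algorithm performs.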
Pkg Stats - npm package discovery and stats viewer. I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset. Hi, 👋, I’m Ryan Hefner and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff. As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have. If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub. I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish. This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏
Aivia: Your team + AI = Success

Mastering Economic Indicators: Calculation & Interpretation
How can I calculate and interpret key economic indicators?

Financial Ratio Analysis Made Easy
How can I calculate and analyze financial ratios?

Consumer Demand & Market Equilibrium
What is the relationship between consumer demand and market equilibrium?

Production Levels vs. Costs: An Economic Analysis
What is the relationship between production levels and production costs?

Globalization's Impact on Trade
What are the effects of globalization on international trade?

Calculating Price Elasticity of Demand
How can I calculate the price elasticity of demand for a product?

Analyzing Company's Prod. Function
How can I analyze a company's production function using input and output data?

Analyzing Nash Equilibrium with Game Theory
How can I examine the Nash equilibrium of a game using game theory analysis?
Derivations for the Photon Wave Equation table

Go back to the Photon Wave Equation table.

The Wave Equation and its Solutions

We'll start out with the wave equation in n dimensions,

$$\Delta u - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0,$$

with the speed of light c, the Laplace operator Δ, the photon wave function u and the time t. The solutions of the wave equation are plane waves,

$$u(\mathbf{x}, t) = u_0\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)},$$

where i is the imaginary unit and ω the wave frequency. This is easily proven, as the second time derivative of u yields

$$\frac{\partial^2 u}{\partial t^2} = -\omega^2 u,$$

and the left side of the wave equation turns out to be

$$\Delta u - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = \left(-|\mathbf{k}|^2 + \frac{\omega^2}{c^2}\right) u.$$

The wave equation is fulfilled for

$$\omega = c\,|\mathbf{k}|,$$

which is the dispersion relation for n dimensions. In real space, the solutions u are harmonic waves with the frequency ω, travelling along the direction of the wave vector $\mathbf{k}$, and the energy

$$E = \hbar\omega,$$

with the reduced Planck constant $\hbar = h/2\pi$. In reciprocal space (k-space), these solutions represent only points (i.e. states) with the position $\mathbf{k}$.

Density of States

Given an n-dimensional box in real space with a side length of L and a volume of $V = L^n$, we can try to fit in as many waves as possible. Taking into account periodic boundary conditions, we end up with possible values for the wavelength of $\lambda_j = L/m_j$, which results in possible wave vectors with components $k_j = 2\pi m_j/L$, where the $m_j$ are integers.

[Figure: Example of a two-dimensional k-space. Drawn in are several states (black points), the first Brillouin zone of one state (blue rectangle) and the volume increment dV(k) (red annulus).]

The number of states having a wave number between k and k+dk is given by the number of possible polarisations p times the volume increment dV(k) in k-space, divided by the space one state occupies, i.e. $V^{rez} = (2\pi/L)^n$. The volume increment dV(k) itself is determined by the area $A^{rez}(k)$ of the surface with given k, times dk. This leads immediately to the density of states, which is defined as the number of states divided by the volume of our n-dimensional box in real space and the wave number increment dk:

$$D(k) = \frac{p\,A^{rez}(k)}{V\,V^{rez}} = \frac{p\,A^{rez}(k)}{(2\pi)^n}.$$
Of course, the density of states can also be expressed with respect to the frequency ω, the wavelength λ or the energy E.

Internal Energy

The distribution of the internal energy u(λ) is easily obtained by multiplying the energy with the density of states and the probability that a photon will have the specified energy:

$$u(\lambda) = E(\lambda)\, D(\lambda)\, \frac{1}{e^{E(\lambda)/k_B T} - 1}, \qquad E(\lambda) = \frac{hc}{\lambda}.$$

The last factor is known as the Bose-Einstein distribution. The internal energy U of our n-dimensional box can be determined by integrating its distribution u(λ) from zero to infinity and multiplying it with the volume of the box V:

$$U = V \int_0^\infty u(\lambda)\, d\lambda.$$

If you insert the energy distribution and try to evaluate the expression for the internal energy, the following definite integral will most likely appear:

$$\int_0^\infty \frac{x^n}{e^x - 1}\, dx = \Gamma(n+1)\,\zeta(n+1) \quad \text{for } n > 0,$$

with the Riemann zeta function

$$\zeta(s) = \sum_{m=1}^\infty \frac{1}{m^s}.$$

The other thermodynamic properties (specific heat $c_V$, entropy S, free energy F and the pressure P) are easily obtained from their definitions, which are given in the table Results of the Photon Wave Equation in n dimensions.

Black Body Radiation

For the derivation of the black body radiation, we have to imagine an n-dimensional hemispherical container, with a constant internal energy distribution u(λ) inside and a small hole dA cut into the center of its face. The vector pointing out of this small hole is the unit normal $\mathbf{n}$. The intensity distribution of the black body radiation is the amount of energy flow I(λ) coming out of this hole.

The energy flow from the solid angle element dΩ can be expressed as the energy dE(λ, Ω) passing the hole in the time interval between t and t + dt, divided by the length of the interval dt. Let's analyse the volume dV which gets passed through the hole in the time interval described above. Because the photons are moving isotropically inside our container, only the fraction dΩ divided by the total solid angle Ω_R of the photons inside the volume dV will pass the hole.
The energy that all the photons inside the volume will carry through the hole is therefore given as

$$\mathrm{d}E(\lambda, \Omega) = u(\lambda)\, \mathrm{d}V\, \frac{\mathrm{d}\Omega}{\Omega_R}$$

The volume dV can be expressed by the scalar product of a length vector $\mathbf{v}\,\mathrm{d}t$ and the hole area $\mathrm{d}\mathbf{A}$. While the magnitude of the velocity vector must be the speed of light c, its direction can be expressed by the unit vector Ω. Altogether, this yields for the energy flow from the solid angle element dΩ

$$\mathrm{d}I(\lambda, \Omega) = \frac{\mathrm{d}E(\lambda, \Omega)}{\mathrm{d}t} = u(\lambda)\, c\, (\boldsymbol{\Omega} \cdot \mathrm{d}\mathbf{A})\, \frac{\mathrm{d}\Omega}{\Omega_R}$$

Integrating over the solid angle $\Omega_R/2$ of our hemisphere, we obtain the intensity distribution of the black body radiation.

Wien's Law is easily calculated for a given radiation distribution. It expresses the wavelength of the intensity maximum $\lambda_{max}$ as a function of the temperature T. Just set the first derivative of the radiation intensity I(λ) with respect to the wavelength to zero. Also the Stefan-Boltzmann Law can be derived quite easily from the radiation distribution: it is just the integral of the distribution I(λ) from wavelength λ = 0 to infinity.
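The definite integral that appears in the internal-energy derivation can be checked numerically. A minimal sketch in stdlib Python (the integration cutoff and step counts are arbitrary choices of mine), verifying the n = 3 case, where the right-hand side is 3!·ζ(4) = π⁴/15:

```python
import math

def bose_integral(n, upper=60.0, steps=200_000):
    """Riemann-sum approximation of the integral of x^n / (e^x - 1)
    from 0 to `upper`. The integrand behaves like x^(n-1) near 0 and
    decays like e^(-x), so the finite cutoff is harmless for n = 3."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        x = i * h
        total += x ** n / math.expm1(x)
    return total * h

def zeta(s, terms=100_000):
    """Riemann zeta function by direct summation (adequate for s >= 2)."""
    return sum(1.0 / k ** s for k in range(1, terms + 1))

lhs = bose_integral(3)
rhs = math.factorial(3) * zeta(4)   # should equal pi^4 / 15
print(lhs, rhs)
```

Both sides agree to several decimal places, confirming the identity for the three-dimensional case used in the Stefan-Boltzmann derivation.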
Privacy is Consent (alt: On Optimal Privacy vs. System Efficiency)

This fieldnote documents my notes with regards to a characterization of systems I've been calling "optimal privacy" but which intersects and overlaps with many other concepts in the literature, e.g. "privacy and computational complexity" or "approximate privacy". Starting with an observation, it is clear that privacy-preserving protocols are less efficient than non-privacy-preserving protocols - that is to say that it seems that one must always communicate more to say less. (As far as I can tell, this observation was first made by Beaver[1].) This raises two interesting questions: what do we mean when we say we can compute a function privately, and how does optimal privacy relate to overall system efficiency? Beaver[1], Chor[2], Kushilevitz[2][3] and others published much on this in the late 80s to early 90s, and what follows below is a summary and rephrasing of their work.

What functions can be computed privately?

Let's consider two parties Alice & Bob, who wish to compute a function $f(x,y)$ where $x$ is an input provided by Alice, and $y$ is an input provided by Bob. We note that Alice could simply send $x$ to Bob (or Bob could send $y$ to Alice) and the recipient would be able to compute the function while keeping their own input private. We however would like to explore ways such that both can compute the function while revealing as little as possible about their inputs. In the ideal case, what is generally referred to as perfect privacy, we would like to reveal only information that can be derived from either party's private input and the value of the function.
While generally left unspecified in earlier papers on this subject, I want to explicitly add one additional constraint to the definition of a perfectly private protocol:

• The information about the private inputs that is communicated, or that can be derived, as a result of the protocol should be symmetric for both parties

Any function $f(x,y)$ can be reduced and written as a matrix $M$ where the matrix coordinate $M(x,y) = f(x,y)$. As an example, the following is a representation of an AND function, and an OR:

$$AND(x,y) = \begin{pmatrix}0 & 0\\ 0 & 1\end{pmatrix}\,\, OR(x,y) = \begin{pmatrix}0 & 1\\ 1 & 1\end{pmatrix}$$

Given this representation we can start to construct rules and intuition about which functions can be made privacy-preserving and which cannot. Trivially, any matrix which is insensitive to both $x$ and $y$ can be made privacy-preserving, e.g. $\forall x,y;$ $f(x,y) = 0$:

$$f(x,y) = \begin{pmatrix}0 & 0 & \dots \\ 0 & 0 & \dots \\ \vdots & \vdots & \ddots \end{pmatrix}$$

Given such a matrix, either party can communicate the value of the function without input from each other (because the value is constant).

Partitionable Functions

It is perhaps clear at this point that in order to privately compute a non-trivial function we require each party to reveal some information about their input. Such information is necessary to partition the function into a smaller value space. Given enough information, our goal is to partition the value space to produce a trivial matrix. As an example, let's consider the function $f(x,y) = x + y\, mod \,2$; it has a function matrix as below:

$$f(x,y) = \begin{pmatrix}0 & 1 & 0 & 1 & 0 & 1 & \dots \\ 1 & 0 & 1 & 0 & 1 & 0 & \dots\\ 0 & 1 & 0 & 1 & 0 & 1 & \dots\\ 1 & 0 & 1 & 0 & 1 & 0 & \dots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Such a function can be partitioned based on a single bit of information, e.g.
whether $y$ is divisible by 2:

$$f(x,y) = \begin{pmatrix}1 & 1 & 1 & \dots\\ 0 & 0 & 0 & \dots\\ 1 & 1 & 1 & \dots \\ 0 & 0 & 0 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

or not:

$$f(x,y) = \begin{pmatrix}0 & 0 & 0 & \dots \\ 1 & 1 & 1 & \dots\\ 0 & 0 & 0 & \dots\\ 1 & 1 & 1 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

We can partition each matrix again, based on whether $x$ is divisible by 2 or not. Regardless of which of the above partitioned matrices we start with, the resulting matrices are the same:

$$f(x,y) = \begin{pmatrix}0 & 0 & 0 & \dots\\ 0 & 0 & 0 & \dots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

$$f(x,y) = \begin{pmatrix}1 & 1 & 1 & \dots\\ 1 & 1 & 1 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Both resulting matrices are constant and insensitive to any other information about $x$ or $y$. We have revealed the minimal amount of information necessary to compute the (admittedly rather simple) function.

Non-partitionable Functions

Before we dive deeper into the above, it is worth pointing out that not all functions are partitionable (in fact, the majority are not). Kushilevitz [3] provides us with a definition of forbidden matrices, function matrices which cannot be partitioned. For the sake of clarity, I am going to use the term non-partitionable. Intuitively, we cannot partition functions if there is no way to partition the input variable space in a way that cleanly partitions the output value space. In our above example, we could partition both input variables by the "divisible by 2" check. (Note that in this case we could have started with either $x$ or $y$ - that is, we could partition by rows or by columns.)
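The two-step partition above can be sketched in a few lines (a hypothetical illustration of mine, not part of the original post): after each side announces its input's parity, every remaining block of the matrix is constant, so the value is determined and nothing more is revealed.

```python
# Sketch: partition the matrix of f(x, y) = (x + y) mod 2 by one bit of
# information from each party (the parity of their input).
N = 8
M = [[(x + y) % 2 for y in range(N)] for x in range(N)]

def rows(parity):
    """Row indices selected by Alice's one-bit announcement (x mod 2)."""
    return [x for x in range(N) if x % 2 == parity]

def cols(parity):
    """Column indices selected by Bob's one-bit announcement (y mod 2)."""
    return [y for y in range(N) if y % 2 == parity]

# After both announcements, every remaining submatrix is constant.
for px in (0, 1):
    for py in (0, 1):
        values = {M[x][y] for x in rows(px) for y in cols(py)}
        print(px, py, values)   # each block contains a single value
```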
To understand why some functions are non-partitionable, let us start by looking at a concrete example of a function that is impossible to compute in a private manner, the AND function, defined over the input domain $[0,3]$:

$$AND(x,y) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix}$$

Observe how every row and column is unique - there is no bit of information that can be given that subdivides the matrix into (logical) discrete partitions. More formally, every potential input $x$ and $y$ is transitively related to every other input $x$ and $y$ respectively. That is to say that every $x$ ($x_1$) shares a relation $\sim$ with at least one other $x$ ($x_2$) such that there exists a $y$ where $M_{{x_1}y} = M_{{x_2}y}$.

The equivalence relation $\equiv$ on the rows of the matrix is defined as the transitive closure over $\sim$. That is, $x_1 \equiv x_2$ if there exists a set of $x_i$ such that $x_1 \sim x_{i_1} \sim x_{i_2} \sim x_{i_3} \sim \dots \sim x_2$. A similar relation is defined over $y$ for the columns of the matrix.

We can now see that if all the rows and all the columns of a submatrix are equivalent, there can be no way to partition the submatrix, i.e. for any possible partition there will always be at least one output value that is shared by the input values of multiple partitions. (Recursively, a matrix cannot be partitioned if it contains a submatrix that cannot be partitioned.)
Going back to our AND function, we can see that this is the case, even if we just consider the submatrix for the input domain [0,1].

For $y$: $$ f(0,0) \equiv f(0,1) = 0 $$

And for $x$: $$ f(0,0) \equiv f(1,0) = 0 $$

This means that there is no way of partitioning the submatrix of the AND function without revealing all of $x$ and $y$ - thus it is impossible to compute AND between two parties in an information-theoretically private way (we will leave aside computing similar functions in the computational model of privacy until later).

From the above we can also observe that functions with a unique output for every set of inputs can never be computed without revealing both sets of inputs, i.e. in order to maintain privacy over the input variables of Alice & Bob, the function output domain must be of lower order than the input domain.

A Note on the Optimal Partitioning of Function Matrices

This fieldnote will not dive into strategies for determining the optimal partition of a given function, but by now it may have occurred to you that "partitioning matrices" isn't exactly a trivial step. There are often multiple equivalent ways of partitioning certain functions, and both parties must agree on a given optimal partition and have a way of efficiently communicating which partition they are selecting during each round.

Additional Information & Privacy

As we noted further up, we can achieve perfect privacy for any party by sacrificing the privacy of another. We can therefore think of optimal privacy as the goal of minimizing the amount of information either party must reveal about their inputs. (Optimal) privacy can be seen as a metafunction which takes in a function matrix and outputs the minimum amount of additional information necessary to resolve the function to an arbitrary value. Bar-Yehuda, Chor and Kushilevitz [4] proved tight bounds on the minimum amount of information that must be revealed for two-party functions.
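The "all rows equivalent" argument for the AND matrix above can be checked mechanically. In this particular matrix the all-zero first row and column make every pair of rows (and columns) directly related, so the transitive closure is not even needed; a sketch:

```python
# Sketch: every pair of rows of the AND matrix from the text agrees in at
# least one column (column 0 is all zeros), and every pair of columns
# agrees in at least one row (row 0 is all zeros). All inputs therefore
# collapse into a single equivalence class: no partition exists.
AND = [[0, 0, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 1],
       [0, 1, 1, 1]]

def rows_related(m, r1, r2):
    """r1 ~ r2: the rows agree in at least one column."""
    return any(m[r1][c] == m[r2][c] for c in range(len(m[0])))

def cols_related(m, c1, c2):
    """c1 ~ c2: the columns agree in at least one row."""
    return any(m[r][c1] == m[r][c2] for r in range(len(m)))

all_rows = all(rows_related(AND, i, j) for i in range(4) for j in range(4))
all_cols = all(cols_related(AND, i, j) for i in range(4) for j in range(4))
print(all_rows, all_cols)  # True True
```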
As an example, consider the following function:

$$f(x,y) = \begin{cases} -1 & x = y \\ min(x,y) & x \neq y \end{cases}$$

We can represent the function over the input domain [0,3] as follows:

$$f(x,y) = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 1 \\ 0 & 1 & -1 & 2 \\ 0 & 1 & 2 & -1 \end{pmatrix}$$

We note that in the above function at least one party always learns the other's input (the minimum), and both parties reveal their input in the case that the numbers are the same. Because of this it can be tempting to think of such a function as not perfectly private; however, per our definition above, it is. The matrix of the function above is partitionable by revealing the most significant bit of each input. Both Alice and Bob would take it in turns to reveal significant bits, each one cutting the value space in half. In some of the leaf nodes (e.g. if Alice chooses 2 or 3 and Bob chooses 0 or 1, as in the bottom left of the matrix, or vice versa in the top right) we see that the matrix decomposes into two monochrome submatrices, allowing the party with the maximum value to retain some privacy of their input. In others (as seen in the top left & bottom right), the matrices decompose in such a way that revealing the value also reveals both inputs (either the values are equal, or through the process of elimination there are no other values that either party could possess).

Sadly, in the information-theoretic model, the number of functions that are defined in the input space for our metafunction is frustratingly limited. There are simply not that many interesting functions we can compute with perfect information-theoretic privacy. For more interesting metafunctions we are forced to make tradeoffs, either by limiting the computational power of parties (i.e. achieving privacy through the properties of certain cryptographic problems) or by accepting that we can only achieve approximate privacy.
The Millionaires Problem

Alice and Bob wish to know which one of them is richer, without revealing their wealth. They construct the following function:

$$f(x,y) = \begin{cases} 0 & x \geq y \\ 1 & x \lt y \\ \end{cases}$$

Note that the above breaks ties in favor of Alice. This forms the following function matrix:

$$f(x,y) = \begin{pmatrix} 0 & 1 & 1 & 1 & \dots \\ 0 & 0 & 1 & 1 & \dots\\ 0 & 0 & 0 & 1 & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \\ \end{pmatrix}$$

Such a function matrix is non-partitionable, as every submatrix along the diagonal is non-partitionable; as such, perfect privacy is impossible. The Bisection Protocol (as defined by Feigenbaum et al [5]) provides good privacy in the average case. Much like the strategy for the $min(x,y)$-based function above, we can compare the parties' inputs bit by bit, starting from the most significant bit, and stop once the outputs from the parties differ.

The bisection protocol applied to the millionaires problem is optimally private in respect to the party with the least amount of money: in addition to learning the most significant bit at which their wealth differs from the richer party's, they also (by way of their private input) learn the lower and upper bounds on the difference between their wealth and the richer party's, i.e. they learn that the input of the wealthier party is in the interval $[2^{n-k}, 2^n)$, where $k$ is the index of the most significant bit where both inputs differ. In contrast, the wealthier party only learns that the other has an input in the interval $[0, 2^{n-k})$.

More clearly stated, the information revealed to each party by the protocol is asymmetric. Further, the greater the difference between the two inputs, the greater the asymmetry. Consider the worst-case example where Alice has $2^n$ wealth (the maximum possible), and Bob has $0$.
In the first comparison of the Bisection Protocol, Alice will learn that they have the most wealth (thus they have computed the function), and will additionally learn that Bob's wealth is in the interval $[0, 2^{n-1})$. Bob will also learn that Alice has more wealth, but will also learn that Alice's wealth is in the interval $[2^{n-1}, 2^n)$. In comparison to the case where Alice and Bob have the same wealth (or nearly the same wealth), the protocol runs for longer and the information asymmetry approaches 0 (however, this trend runs inverse to the total amount of system information both parties gain).

At this point, it is worth explicitly noting that we can greatly improve the privacy of a protocol to compute the millionaires problem if we can limit the computational power of both parties (see Yao [6]). (This also relies on the assumption that one-way functions exist.) In the case of the millionaires problem, we can use Oblivious Transfer to evaluate the inputs of a Garbled Circuit, in addition to numerous other protocols. Such protocols will ensure that each party learns only the value of the function (under the limited computational power & one-way functions assumptions).

Communication Efficiency, Privacy & Consent

We have finally reached the point in this fieldnote where we can start considering the second question: how does achieving optimal privacy relate to the overall communication efficiency of the system? Regardless of whether a function can be computed with perfect privacy (or the assumptions made about computational power), the communication complexity (the number of rounds) associated with an optimally private computation is dramatically higher than that of a non-private computation. As an example, a non-private computation of the millionaires problem can be completed with a single round of communication (where each party transmits once).
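A sketch of the bisection idea (my own simplification, not Feigenbaum et al.'s exact formulation): compare bits from the most significant down, and count the rounds until the first difference.

```python
def bisection(a, b, n):
    """Compare two n-bit inputs from the most significant bit down,
    stopping at the first differing bit. Ties go to Alice, matching the
    f(x, y) above. Returns (richer_party, rounds_used)."""
    for k in range(n - 1, -1, -1):
        bit_a = (a >> k) & 1
        bit_b = (b >> k) & 1
        if bit_a != bit_b:
            return ("alice" if bit_a else "bob", n - k)
    return ("alice", n)  # equal inputs: all n bits were exchanged

# The worst-case asymmetry from the text: one round settles it, and each
# party only learns which half-interval the other's input lies in.
print(bisection(2 ** 7, 0, 8))  # ('alice', 1)
print(bisection(5, 5, 8))       # ('alice', 8)
```

Note how the round count grows as the inputs get closer, mirroring the text's observation that near-equal inputs run longest and leak the most total information.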
The bisection protocol requires at least one round of communication, and at worst $n$ rounds of communication ($n = \log_{2}N$ where $N$ is the maximum value of the input domain). The simplest oblivious transfer protocol [7], on the other hand, requires $1.5$ rounds to transmit a single bit (thus requires $1.5 \cdot n$ rounds of communication, not including the communication required to set up the garbled circuit). We have to engage in much more conversation, to transmit less information. More formally, we know that if a function can be computed with perfect privacy it can be computed in at worst $2 \cdot 2^n$ rounds (see [1]); we also know that we can trade additional information for improvements in communication complexity (see [4]) - that is, we can sacrifice privacy to gain efficiency. We can choose to give up more information about our (private) inputs to improve the efficiency of the system.

Thus we hit upon a philosophical notion of the nature of privacy & consent. Consent is the degree to which you are willing to reveal additional information to improve the efficiency of a system. Privacy is the degree to which you are unwilling to reveal additional information. Privacy and Consent are poles on the same spectrum. Privacy is negative consent, Consent is negative privacy.

Privacy is Consent

1. Beaver, Donald. Perfect privacy for two-party protocols. Harvard University, Center for Research in Computing Technology, Aiken Computation Laboratory, 1989.
2. Chor, Benny, and Eyal Kushilevitz. "A zero-one law for boolean privacy." SIAM Journal on Discrete Mathematics 4.1 (1991): 36-47.
3. Kushilevitz, Eyal. "Privacy and communication complexity." SIAM Journal on Discrete Mathematics 5.2 (1992): 273-284.
4. R. Bar-Yehuda, B. Chor, E. Kushilevitz and A. Orlitsky, "Privacy, additional information and communication," in IEEE Transactions on Information Theory, vol. 39, no. 6, pp. 1930-1943, Nov. 1993. doi: 10.1109/18.265501
5. Feigenbaum, Joan, Aaron D.
Jaggard, and Michael Schapira. "Approximate privacy: foundations and quantification." Proceedings of the 11th ACM Conference on Electronic Commerce. ACM, 2010.
6. A. C. Yao, "Protocols for secure computations," 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982), pp. 160-164, 1982. doi:10.1109/SFCS.1982.88
7. Chou, Tung, and Claudio Orlandi. "The simplest protocol for oblivious transfer." International Conference on Cryptology and Information Security in Latin America. Springer, Cham, 2015.
Polynomial Ring

from class: Algebraic Number Theory

A polynomial ring is a mathematical structure formed by polynomials with coefficients from a given ring, which allows for addition and multiplication of these polynomials. This structure is crucial for understanding the behavior of polynomials in various algebraic contexts, especially regarding ideals, as it provides a natural setting to discuss concepts like prime and maximal ideals.

5 Must Know Facts For Your Next Test

1. In a polynomial ring, the elements are polynomials, and the operations of addition and multiplication are defined as they are for regular algebraic expressions.
2. Polynomial rings can be denoted as R[x], where R is the coefficient ring and x is an indeterminate variable.
3. The ideal structure in polynomial rings leads to important concepts, such as maximal ideals being related to irreducible polynomials over the coefficient ring.
4. In the context of polynomial rings, prime ideals correspond to irreducible polynomials and play a critical role in factorization within the ring.
5. When working with polynomial rings in one variable over a field, every non-zero prime ideal is also maximal, highlighting a close relationship between these types of ideals.

Review Questions

• How do polynomial rings facilitate the understanding of prime and maximal ideals?

Polynomial rings provide a structured environment where we can study the properties of polynomials and their associated ideals. Prime ideals in this context correspond to irreducible polynomials, which cannot be factored into non-unit elements of the ring. Maximal ideals are significant because they relate to the roots of polynomials; specifically, the quotient of a polynomial ring by a maximal ideal forms a field.
This relationship helps us to understand how these ideals govern factorization and congruences in polynomial rings.

• What role do polynomial rings play in connecting algebraic structures with number theory concepts like unique factorization?

Polynomial rings serve as a bridge between abstract algebra and number theory by providing a framework for exploring unique factorization. In many polynomial rings, particularly those in one variable over fields, every non-constant polynomial can be factored uniquely into irreducible polynomials. This property parallels the unique factorization of integers into prime numbers, allowing mathematicians to apply techniques from number theory to solve problems in polynomial algebra and vice versa.

• Evaluate how the properties of polynomial rings affect their ideal structures, specifically discussing prime and maximal ideals.

The properties of polynomial rings greatly influence their ideal structures, particularly concerning prime and maximal ideals. For instance, in a polynomial ring in one variable over a field, every non-zero prime ideal is maximal, because any such ideal is generated by an irreducible polynomial that has no divisors besides units and itself. This connection implies that studying prime ideals leads us directly to maximal ideals, emphasizing how factorization properties of polynomials dictate their ideal landscape. Furthermore, the interaction between these ideals informs our understanding of algebraic geometry and solutions to polynomial equations.
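The point that quotienting by a maximal ideal yields a field can be illustrated concretely for Q[x]/(x² + 1), using only the Python standard library (the representation below is my own illustration): elements a + b·x are stored as pairs, and reducing x² to −1 implements the quotient.

```python
from fractions import Fraction

# Sketch: arithmetic in Q[x]/(x^2 + 1), the quotient of Q[x] by the
# maximal ideal generated by the irreducible polynomial x^2 + 1.
# Every nonzero element has an inverse, so the quotient is a field.
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)  # uses x^2 = -1

def inv(p):
    a, b = p
    norm = a * a + b * b                   # nonzero whenever p != 0
    return (Fraction(a, 1) / norm, Fraction(-b, 1) / norm)

one = (Fraction(1), Fraction(0))
p = (Fraction(1), Fraction(1))             # the element 1 + x
print(mul(p, inv(p)))                      # equals 1, i.e. (1, 0)
```

By contrast, quotienting by a reducible polynomial such as x² − 1 would produce zero divisors, so the ideal (x² − 1) is not maximal.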
Using tens To develop the use of place value as a tool for calculation, begin with the link between the addition and subtraction of ones, and then the same operations with groups of 10. For example, 4 + 3 = 7 so 40 + 30 = 70. Use 'story shells' to put the operations in context. Initially students may want to skip count in tens (e.g. 40, 50, 60, 70). Encourage them to anticipate the answer by using facts they know before they count. You can watch the video Adding Tens. You can download the Adding Tens video transcript. Include addition and subtraction problems that bridge 100 to develop the idea of nested place value (i.e. tens are nested in hundreds, ones are nested in tens). The thinking needed to bridge tens with units of one can be extended to addition and subtraction of tens. For example, if 9 + 6 = 15 then 90 + 60 = 150. Additions and subtractions of this type can be practised using the learning objects L92 The part-adder: make your own hard sums and L98 The take-away bar: make your own hard subtractions.
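The "basic fact to tens fact" pattern above can be generated mechanically, e.g. for building practice examples; a trivial sketch:

```python
# Sketch: each single-digit addition fact gives a corresponding tens fact.
for a, b in [(4, 3), (9, 6)]:
    print(f"{a} + {b} = {a + b}, so {a * 10} + {b * 10} = {(a + b) * 10}")
```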
Notes on the use of TTRPG dice

What we think of today is not quite what the TSR dice set was like, but since D&D it has been close, and it continued to be used with a couple of very subtle improvements over the years in the dice themselves. The following are the dice typically found today:

• d4 tetrahedron
• d6 cube
• d8 octahedron
• d10 decahedron
• d12 dodecahedron
• d20 icosahedron
• d100 zocchihedron

There are two types of decahedron dice: a d10, with a count of 0-9, the zero being high (i.e. 10), and a similar die with 00-90 (i.e. 10...20...90). The percentile dice, also referenced as d%, refer to a d100, or the combination of two d10s (or two d20s), or a d10 and the count-by-decade form of a decahedron, rolled together. Sometimes this latter form is directly referenced as the percentile die, or d%, but it is perhaps more accurately called a decader die (introduced in 1990). As an example, rolling a 0 and 00 counts for 100, whereas a 1 and 00 counts for 1, but a 1 and 10 counts for 11.

The original D&D didn't have a d10, a modern percentile die, or dice pair. The d20 from TSR (resold from Creative Publications^[H24]) counted 0-9 twice, coloring one of the 0-9 sets of numbers to differentiate. This d20 can (and did) act as a d10, mainly out of lack of statistical trust of the various forms of d10s until the pentagonal trapezohedron style. To use a modern d20 as a d10, don't count the tens digit on a number above 10: 0-9 count for 10 and 1-9 respectively, and 11-20 count for 1-10 (where 20 is 10, and 19 is 9). This means that in buying a modern 7-dice kit, you have the equivalent of three d10s already (a standard percentile die of 00-90 can also be rolled for a d10).

When buying dice, make sure that the dice are weighted correctly (so that particular numbers, such as a 1, don't have a greater likelihood than others). When first starting play, only one set of dice is needed. Anything can be rolled multiple times.
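The d20-as-d10 reading and the percentile combination described above can be written out as small functions (a sketch; the function names are mine):

```python
def d20_as_d10(roll):
    """Read a modern 1-20 d20 as a d10 by dropping the tens digit:
    1-9 -> 1-9, 10 -> 10, 11-19 -> 1-9, 20 -> 10."""
    return (roll - 1) % 10 + 1

def percentile(ones, decade):
    """Combine a 0-9 die with a 00-90 decader die; 0 + 00 reads as 100."""
    value = decade + ones
    return value if value != 0 else 100

print(d20_as_d10(20), d20_as_d10(10), d20_as_d10(13))         # 10 10 3
print(percentile(0, 0), percentile(1, 0), percentile(1, 10))  # 100 1 11
```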
Buying dice is addictive, so you really don't need to anticipate beyond one set. When leveling up, or with certain kinds of encounters, rolling a die multiple times can be tedious. This is the time to buy the particular dice to fit need and preference. I have metal dice of the same type and manufacturer, everything in pairs of two different colors, then a third collection of yet another color with more six-sided dice because of how common they are, skimping on the d10/d100 styles.

The Making of Original Dungeon & Dragons, pg. 286, Wizards of the Coast LLC, Hasbro SA, June 2024

©2024 David Egan Evans.
Put Call Parity

Before we move on to further option trading strategies, we need to learn an important concept called 'Put-Call Parity.' The put-call parity helps you to understand the impact of demand and supply on the option price, and how option values are interlinked across different strikes and expirations, given that they belong to the same underlying security. The term 'parity' refers to the state of being equal or having equal value. Options theory is structured in such an ingenious fashion that the calls and puts complement each other with regards to their price and value. So, if you are aware of the value of a call option, you can easily calculate the value of the complementary put option (which has the same expiration date and strike price). This knowledge is very essential for traders. Firstly, it can help you figure out profitable opportunities when option premiums get out of line with parity. A thorough understanding of put-call parity is also important because it helps you to work out the relative value of an option you are considering adding to your portfolio.

Suppose a trader holds a short put (European) and a long call (European) of the same class. According to put-call parity, this is equivalent to holding one futures contract on the same asset with the same date of expiry, and a futures price that is the same as the strike price of the option. In situations when the put price diverges from the call price, an arbitrage opportunity comes into existence. This means that traders can make a profit without taking any risk. However, as mentioned earlier, even in liquid markets, chances of this sort are a bit uncommon and have a small window.

Put Call Parity is stated using this equation:

Call + Strike = Put + Futures

Call means the price of the call option, Put means the price of the put option, Futures means the futures price and Strike means the price for which call and put are considered.
Having clarified that, let us understand how it works with the help of an example. Suppose Nifty is trading at 16940, so the ATM strike will be 17000. You buy a call option and sell a put option of the same strike. The date of expiration is a month from the date of purchase. The call option costs ₹35 and the put option costs ₹90. So, the net inflow is (90 - 35) = ₹55/-

Let us consider a few scenarios to understand the trade better.

Nifty expires at 16000 (below ATM): Here the 17000 CE expires worthless because it is now OTM. Hence, we lose the ₹35 we had paid as its premium. On the put option, we suffer a loss because we had sold the put (a bullish position) and the market moved against us on the downside. The loss on the put = (16000 - 17000 + 90) = - ₹910/-, so the combined loss = (-910 - 35) = - ₹945/-

Nifty expires at 17000 (ATM): In this case both the options expire worthless. So we lose the premium paid for the call and retain the premium received for the put, and the difference between the two premiums stays with us, i.e. ₹55/-

Nifty expires at 18000 (above ATM): Here the call option starts paying off because of the bullish position. Profit on the call option = (18000 - 17000 - 35) = ₹965/- The put option expires worthless because of being OTM, so we retain the premium of ₹90 from the put as well. Hence, the total profit in this scenario = 965 + 90 = ₹1055/-

If you construct a graph by plotting the profit or loss on these positions for different prices of Nifty, some interesting things come to light. When the long call's profit/loss is combined with the short put's profit/loss, we make a profit or loss of the exact amount we would have if we had just taken a futures contract on Nifty at 17000 with a validity of one month. If Nifty trades lower than 17000, you will incur a loss. If it trades higher, you will make a profit. Here, we are not taking transaction fees into consideration.
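The three scenarios can be checked with a short script (a sketch of the example above; note that the combined position equals a long future from 17000 plus the net premium of ₹55):

```python
# Long 17000 call (premium 35 paid) plus short 17000 put (premium 90
# received). The combined expiry P/L is a synthetic long future from
# the strike, shifted by the net premium received.
K, CALL_PREMIUM, PUT_PREMIUM = 17000, 35, 90

def pnl(spot):
    long_call = max(spot - K, 0) - CALL_PREMIUM
    short_put = PUT_PREMIUM - max(K - spot, 0)
    return long_call + short_put

for spot in (16000, 17000, 18000):
    print(spot, pnl(spot))   # -945, 55, 1055
```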
To understand the put-call parity better, you can also compare the performance of a fiduciary call and a protective put of the same class. A protective put is a combination of a long stock and a long put position; this limits the negative impact of holding the stock. A fiduciary call is the combination of a long call and cash equal to the present value of the strike price; this ensures that the investor has enough money to exercise the option on its expiry.

Talking about the equation again: Call + Strike = Put + Futures. In situations where one side of this equation is heavier than the other, an arbitrage opportunity is present. A risk-free profit is locked in when a trader sells the expensive side of the equation and buys the cheaper side. In real life, however, the occasions where one can take advantage of arbitrage are hard to come across and short-lived. Also, sometimes the margins offered by these are so tiny that you will need to invest a huge capital to use them advantageously.
23 August 2014 Under consideration for publication in J. Functional Programming

F-ing modules

ANDREAS ROSSBERG Google [email protected]
and CLAUDIO RUSSO Microsoft Research [email protected]
and DEREK DREYER Max Planck Institute for Software Systems (MPI-SWS) [email protected]

Abstract

ML modules are a powerful language mechanism for decomposing programs into reusable components. Unfortunately, they also have a reputation for being "complex" and requiring fancy type theory that is mostly opaque to non-experts. While this reputation is certainly understandable, given the many non-standard methodologies that have been developed in the process of studying modules, we aim here to demonstrate that it is undeserved. To do so, we present a novel formalization of ML modules, which defines their semantics directly by a compositional "elaboration" translation into plain System Fω (the higher-order polymorphic λ-calculus). To demonstrate the scalability of our "F-ing" semantics, we use it to define a representative, higher-order ML-style module language, encompassing all the major features of existing ML module dialects (except for recursive modules). We thereby show that ML modules are merely a particular mode of use of System Fω. To streamline the exposition, we present the semantics of our module language in stages. We begin by defining a subset of the language supporting a Standard ML-like language with second-class modules and generative functors. We then extend this sublanguage with the ability to package modules as first-class values (a very simple extension, as it turns out) and OCaml-style applicative functors (somewhat harder). Unlike previous work combining both generative and applicative functors, we do not require two distinct forms of functor or signature sealing. Instead, whether a functor is applicative or not depends only on the computational purity of its body.
In fact, we argue that applicative/generative is rather incidental terminology for pure vs. impure functors. This approach results in a semantics that we feel is simpler and more natural than previous accounts, and moreover prohibits breaches of abstraction safety that were possible under them. 1 Introduction Modularity is essential to the development and maintenance of large programs. Although most modern languages support modular programming and code reuse in one form or another, the languages in the ML family employ a particularly expressive style of module system. The key features shared by all the dialects of the ML module system are their support for hierarchical namespace management (via structures), a fine-grained va- 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer riety of interfaces (via translucent signatures), client-side data abstraction (via functors), implementor-side data abstraction (via sealing), and a flexible form of signature matching (via structural subtyping). Unfortunately, while the utility of ML modules is not in dispute, they have nonetheless acquired a reputation for being “complex”. Simon Peyton Jones (2003), in an oft-cited POPL keynote address, likened ML modules to a Porsche, due to their “high power, but poor power/cost ratio”. (In contrast, he likened Haskell—extended with various “sexy” type system extensions—to a Ford Cortina with alloy wheels.) Although we disagree with Peyton Jones’ amusing analogy, it seems, based on conversations with many others in the field, that the view that ML modules are too complex for mere mortals to understand is sadly predominant. Why is this so? Are ML modules really more difficult to program, implement, or understand than other ambitious modularity mechanisms, such as GHC’s type classes with type equality coercions (Sulzmann et al., 2007) or Java’s classes with generics and wildcards (Torgersen et al., 2005)? 
We think not—although this is obviously a fundamentally subjective question. One can certainly engage in a constructive debate about whether the mechanisms that comprise the ML module system are put together in the ideal way, and in fact the first and third authors have recently done precisely that (Rossberg & Dreyer, 2013). But we do not believe that the design of the ML module system is the primary source of the “complexity” complaint. Rather, we believe the problem is that the literature on the semantics of ML-style module systems is so vast and fragmented that, to an outsider, it must surely be bewildering. Many non-standard type-theoretic (Harper et al., 1990; Harper & Lillibridge, 1994; Leroy, 1994; Leroy, 1995; Russo, 1998; Shao, 1999; Dreyer et al., 2003), as well as several ad hoc, non-type-theoretic (MacQueen & Tofte, 1994; Milner et al., 1997; Biswas, 1995) methodologies have been developed for explaining, defining, studying, and evolving the ML module systems, most with subtle semantic differences that are not spelled out clearly and are known only to experts. As a rich type theory has developed around a number of these methodologies—e.g., the beautiful meta-theory of singleton kinds (Stone & Harper, 2006)—it is perfectly understandable for someone encountering a paper on module systems for the first time to feel intimidated by the apparent depth and breadth of knowledge required to understand module typechecking, let alone module compilation. In response to this problem, Dreyer, Crary & Harper (2003) developed a unifying type theory, in which previous systems could be understood as sublanguages that selectively include different combinations of features. Although formally and conceptually elegant, their unifying account—which relies on singleton kinds, dependent types, and a subtle effect system—still gives one the impression that ML module typechecking requires sophisticated type theory. In this article, we take a different approach. 
Our goal is to show once and for all that, contrary to popular belief (even among experts in the field!), the semantics of ML modules is immediately accessible to anyone familiar with System Fω , the higher-order polymorphic λ -calculus. How do we achieve this goal? First, instead of defining the semantics of modules—as most prior work has done—via a bespoke module type system (Dreyer et al., 2003) or a non-type-theoretic formalization (Milner et al., 1997), we employ an elaboration semantics, in which the meaning of 23 August 2014 F-ing modules module expressions is defined by a compositional, syntax-directed translation into plain System Fω . Through this elaboration, we show that ML modules are merely a particular mode of use of System Fω . A structure is just a record of existential type ∃α.{l : τ}, where the type variables α represent the abstract types defined in the structure. A functor is just a function of polymorphic type ∀α.τ → τ 0 , parameterized over the abstract types α in its module argument. No dependent types of any form are required. However, as is often the case for common programming idioms, it is extremely helpful to have built-in language support for inference and automation where possible. In our “F-ing” elaboration semantics, this amounts to inserting the right introduction and elimination forms for universal and existential types in the right places, e.g., using “signature matching” to infer the appropriate type arguments when a functor is applied or when a structure is sealed with a signature. Our approach thus synthesizes elements of two alternative definitions of Standard ML modules given by Harper & Stone (2000) and Russo (1998). Like Harper & Stone (2000), we define our semantics by elaboration; but whereas Harper & Stone elaborated ML modules into yet another (dependently-typed) module type system—a variant of Harper & Lillibridge (1994)—we elaborate them into Fω , which is a significantly simpler system. 
Like Russo (1998), we classify ML modules—and interpret ML signatures—directly using the types of System Fω ; but whereas Russo only presented a static semantics, our elaboration effectively provides an evidence translation for a simplified and streamlined variant of his definition, thus equipping it with a dynamic semantics and type soundness proof. Second, we demonstrate the broad applicability of our F-ing elaboration semantics by using it to define a richly-featured—and, we argue, representative—ML-style module language. By “representative”, we mean that the language we define encompasses all the major features of existing ML module dialects except for recursive modules.1 While other researchers have given translations from dialects of ML modules into versions of System Fω before (Shan, 2004; Shao, 1999), we are, to our knowledge, the first to define the semantics of a full-fledged ML-style module language directly in terms of System Fω . By “directly”, we mean that there is no other high-level static semantics involved—Fω types are enough to classify modules and understand their semantics. In contrast, most previous work on modules has focused on bespoke module calculi that are (a) defined independently of Fω and (b) somewhat idealized, relying on a separate non-trivial stage of pre-elaboration to handle certain features, and often glossing over essential aspects of a real module language, such as shadowing between declarations, local or shadowed types (and the so-called avoidance problem that they induce), or composition constructs like open, include and where/with, all of which add—in some cases quite substantial—complexity. To ease the presentation, we present the semantics of our module language in stages. 
In the first part of the article (Sections 2–5), we show how to typecheck and implement a subset of our language that roughly corresponds to the Standard ML module language 1 A proper handling of type abstraction in the presence of recursive modules seems to require both a more sophisticated underlying type theory (Dreyer, 2007a), as well as a more radical departure from the linking mechanisms of the ML module system (Rossberg & Dreyer, 2013). 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer extended with higher-order functors. This subset supports only second-class modules, not first-class modules (Harper & Lillibridge, 1994; Russo, 2000), and only SML-style generative functors, not OCaml-style applicative functors (Leroy, 1995). We start with this SML-style language because its F-ing semantics is relatively simple and direct. In the second part of the article (Sections 6–9), we extend the language of the first part with both modules-as-first-class-values (Section 6, easy) and applicative functors (Sections 7–9, harder). For the extension to applicative functors, we have taken the opportunity to address some overly complex and/or semantically problematic aspects of previous approaches. In particular, unlike earlier unifying accounts of ML modules (Dreyer et al., 2003; Romanenko et al., 2000; Russo, 2003), we do not require two distinct forms of functor declaration (or two different forms of module sealing). Instead, our type system deems a functor to be applicative iff the body of the functor is computationally pure, and generative otherwise. We believe this is about as simple a characterization of the applicative/generative distinction as one could hope for. That said, the semantics we give for applicative functors is definitely not as simple as the elaboration semantics for generative functors given in the first part of the article. 
We believe the relative complexity of our applicative functor semantics is not a weakness of our approach, but rather a reflection of the inescapable fact that the applicative semantics for functors is inherently subtler (and harder to get right!) than the generative semantics. We substantiate this claim by showing that no previous account of applicative functors has properly guaranteed abstraction safety—i.e., the ability to locally establish representation invariants for abstract types.2 To avoid this problem, we revive the long-lost notion of structure sharing from Standard ML ’90 (Milner et al., 1990), in the form of more finegrained value sharing. Although previous work on module type systems has disparaged this form of sharing as type-theoretically questionable, we observe that it is in fact necessary in order to ensure abstraction safety in the presence of applicative functors. Furthermore, it is easy to account for in a type-theoretic manner using “phantom types” as “stamps”. In general, we have tried to give this article the flavor of a brisk tutorial, assuming of the reader no prior knowledge concerning the typechecking and implementation of ML modules. However, this is not (intended to be) a tutorial on programming with ML modules, nor is it a tutorial on the design considerations that influenced the development of ML modules. For the former, there are numerous sources to choose from, such as Harper’s draft book on SML (Harper, 2012) and Paulson’s book (1996). For the latter, we refer the reader to Harper & Pierce (2005), as well as the early chapters of the second and third authors’ PhD theses (Russo, 1998; Dreyer, 2005). As further evidence of the relative complexity of applicative functors, we note that the F-ing semantics for applicative functors fundamentally requires Fω ’s higher kinds, while the generative functor semantics presented in the first part of the article does not. 
Higher kinds are of course needed if the underlying core language (on top of which the module system is built) supports type constructors—as is the case in ML. However, setting the core language aside, the elaboration semantics we give in the first part of the article does not itself rely on higher-kinded type abstraction, and indeed, for a simpler core language with just type (but not type constructor) definitions, that language can be elaborated to plain System F. By contrast, the applicative functor extension presented in the second part of the article relies on higher kinds in an essential way. 23 August 2014 F-ing modules The F-ing approach has of course not fallen from the sky. It naturally builds on many ideas from previous work. As mentioned above, the central insight of viewing the seemingly dependent type system of ML modules through the lens of System F types is due to Russo (1998; 1999), and many of the ideas for translating module terms are already present in prior work by Harper et al. (1990), Harper & Stone (2000), and Dreyer (2007b). Our technical development of applicative functors is directly influenced by the work of Biswas (1995), Russo (1998; 2003), and Shan (2004), and more indirectly by Shao (1999) and Dreyer et al. (2003). But instead of frontloading this article with a survey of the literature, we will point to the origins of some key ideas as we come to them. A more comprehensive discussion can be found in Section 11. To summarize our contributions, we present the first formalization of ML modules that (1) explains the static and dynamic semantics of a full-fledged module system, directly in terms of System Fω terms, types and environments, requiring only plain Fω to do so, and (2) characterizes applicativity/generativity of functors as a matter of purity, and supports applicative functors in a way that is abstraction-safe, by relying crucially on a novel account of value sharing. 
For those familiar with an earlier version of this article that was published in the TLDI workshop (Rossberg et al., 2010), we note that the major difference in the present version is contribution #2, that is, the novel account of applicative functors in Sections 7–9 (the workshop version only treated generative functors). We now also offer expanded discussions of first-class modules (Section 6), our Coq mechanization (Section 10), and related work (Section 11), as well as more details of the meta-theory (Section 5). 2 The module language Figure 1 presents the syntax of our module language. We assume a core language consisting of syntax for kinds, types, and expressions, whose details do not matter for our development. Binding constructs for types and values are provided as part of the module language. For simplicity, we assume that all language entities share a single identifier syntax.3 The module language is very similar to that of Standard ML, except that functors are higher-order, and signature declarations may be nested inside structures. The syntax contains all the features one would expect to find: bindings and declarations of values, types, modules, and signatures (where, as in SML, we implicitly allow omitting the separating “;” between the bindings/declarations in a sequence); hierarchical structures with projection via the dot notation; structure/signature inheritance with include; functors and functor signatures; and sealing (a.k.a. opaque signature ascription). In the grammar for the “where type” construct we abuse the notation X to denote an identifier followed by a (possibly empty) sequence of projections, e.g., X or X.Y.Z. 3 For an ML-like core language, this is meant to include type variables ´a, and we do not impose any restrictions on where type variables from the context can appear in type and signature expressions. 
23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer (identifiers) (kinds) (types) (expressions) (paths) X K T E P ::= ::= ::= ::= ... ... | P ... | P M ::= | | | X {B} | M.X fun X:S ⇒M | X X X:>S ::= | | | P {D} (X:S) → S S where type X=T ::= | | | | | | val X=E type X=T module X=M signature X=S include M ε B;B ::= | | | | | | val X:T type X=T | type X:K module X:S signature X=S include S ε D;D Fig. 1. Syntax of the module language (types) (expressions) (signatures) (modules) (declarations) (bindings) let B in T let B in E let B in S PM let B in M M1 M2 M:>S M:S local B in D signature X(X 0 :S0 )=S local B in B0 signature X(X 0 :S0 )=S := := := := := := := := := := := := {B; type X=T }.X {B; val X=E}.X {B; signature X=S}.X (P M).S {B; module X=M}.X let module X1 =M1 ; module X2 =M2 in X1 X2 let module X=M in X:>S (fun X:S ⇒X) M include (let B in {D}) module X : (X 0 :S0 ) → {signature S=S} include (let B in {B0 }) module X = fun X 0 :S0 ⇒ {signature S=S} Fig. 2. Derived forms In some cases, the syntax restricts module expressions in certain positions (e.g., the components of a functor application) to be identifiers X. This is merely to make the semantics of the language that we define in Section 4 as simple as possible. Fully general variants of these constructs are definable as straightforward derived forms, as shown in Figure 2. The same figure also defines other constructs that are available in various dialects of ML modules, such as “let”-expressions on all syntactic levels (including 23 August 2014 F-ing modules types and signatures), “local” bindings/declarations4 , and parameterized signatures.5 Using some of these derived forms, Figure 3 shows the implementation of a standard Set functor. One point of note is the notion of paths. A path P is the mechanism by which types, values, and signatures may be projected out of modules. 
In SML and OCaml, paths are syntactically restricted module expressions, such as an identifier X followed by a series of projections. The reason for the syntactic restriction is essentially that not all projections from modules are sensible. For example, consider a module (M :> {type t; val v:t}) that defines both an abstract type t and a value v of type t. Then (M :> {type t; val v:t}).t is not a valid path, because it denotes a fresh abstract type that is not well defined outside of the module. Put another way, projecting t does not make sense because the sealing in the definition of the module should prevent one from tying the identity of its t component back to the module expression itself. Likewise, (M :> {type t; val v:t}).v is not valid because it cannot be given a type that makes sense outside of the module. (We will explain the issue with paths in more detail in Section 4.) Here, instead of restricting the syntax of paths P, we instead restrict their semantics. That is, paths are syntactically just arbitrary module expressions, but with a separate typing rule. This rule will impose additional restrictions on P’s signature, to make sure that no locally defined abstract types escape their scope. In a similar manner, our module-level projection construct M.X is also more permissive than in actual SML, in that M is allowed to be an arbitrary module expression. It is worth noting that this, together with our more permissive notion of path, allows us to define very general forms of local module bindings simply as derived syntax (Figure 2). 3 System Fω Our goal in this article is to define the semantics of the module language by translation into System Fω . To differentiate external (module) and internal (Fω ) language, we use lowercase letters to range over phrases of the latter. Figure 4 gives the syntax of the variant of System Fω that we use as the target of our elaboration. 
It includes record types (where we assume that labels are always disjoint), but is otherwise completely standard. We note in passing that we are using the usual impredicative definition of Fω in this article. Up to the introduction of first-class modules in Section 6 we could actually restrict ourselves to a predicative variant. Likewise, as mentioned earlier, up to the introduction of applicative functors in Section 7, the elaboration does not actually require higher kinds (unless used by the Core language); second-order System F would suffice. But for simplicity, we have chosen to use just one version of the target language throughout the article. The module-level include M is spelled open M in Standard ML. OCaml’s version of open M can be expressed as local include M in . . . in our system. Parameterized signatures may be less familiar to many readers, given that only a few ML dialects support them. A signature declared via signature A (X : B) = . . . takes a module parameter, and is instantiated with an application A M in a signature expression. Such a parameterized signature definition simply desugars to a functor definition wherein the result contains a single (ordinary) signature component under the fixed (but otherwise arbitrary) name S. 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer signature EQ = { type t val eq : t × t → bool } signature ORD = { include EQ val less : t × t → bool } signature SET = { type set type elem val empty : set val add : elem × set → set val mem : elem × set → bool } module Set = fun Elem : ORD ⇒ { type elem = Elem.t type set = list elem val empty = [] val add (x, s) = case s of | [] ⇒ [x] | y :: s’ ⇒ if Elem.eq (x, y) then s else if Elem.less (x, y) then x :: s else y :: add (x, s’) val mem (x, s) = case s of | [] ⇒ false | y :: s’ ⇒ Elem.eq (y, x) or (Elem.less (y, x) and mem (x, s’)) } :> SET where type elem = Elem.t module IntSet = Set {type t = int; val eq = Int.eq; val less = Int.less} Fig. 3. 
Example: a functor for sets (kinds) (types) (terms) (values) (environ’s) κ τ e v Γ ::= ::= ::= ::= ::= Ω|κ →κ α | τ → τ | {l:τ} | ∀α:κ.τ | ∃α:κ.τ | λ α:κ.τ | τ τ x | λ x:τ.e | e e | {l=e} | e.l | λ α:κ.e | e τ | pack hτ, eiτ | unpack hα, xi=e in e λ x:τ.e | {l=v} | λ α:κ.e | pack hτ, viτ · | Γ, α:κ | Γ, x:τ Fig. 4. Syntax of Fω In the grammar, and elsewhere, we liberally use the meta-notation A to stand for zero or more iterations of an object or formula A. (We will also sometimes abuse the notation A to actually denote the unordered set {A}.) We write fv(τ) for the free variables of τ. 23 August 2014 F-ing modules Semantics The full static semantics is given in Figure 5. Type equivalence is defined as β η-equivalence. The only other point of note is that, unlike in most presentations, our typing environments Γ permit shadowing of bindings for value variables x (but not for type variables α). Thus, we take the notation Γ(x) to denote the rightmost binding of x in Γ. Allowing shadowing turns out to be convenient for our purposes (see Section 4). We assume a standard left-to-right call-by-value dynamic semantics, which is defined in Figure 6. However, other choices of evaluation order are possible as well, and would not affect our development. Properties The calculus as defined here enjoys the standard soundness properties: Theorem 3.1 (Preservation) If · ` e : τ and e ,→ e0 , then · ` e0 : τ. Theorem 3.2 (Progress) If · ` e : τ and e 6= v for any v, then e ,→ e0 for some e0 . The proofs are entirely standard, and thus omitted. The calculus also has the usual technical properties, the most relevant for our purposes being the following: Lemma 3.3 (Validity) 1. If Γ ` τ : Ω, then Γ ` . 2. If Γ ` e : τ, then Γ ` τ : Ω. Lemma 3.4 (Weakening) Let Γ0 ⊇ Γ with Γ0 ` . 1. If Γ ` τ : κ, then Γ0 ` τ : κ. 2. If Γ ` e : τ, then Γ0 ` e : τ. Lemma 3.5 (Strengthening) Let Γ0 ⊆ Γ with Γ0 ` and D = dom(Γ) \ dom(Γ0 ). 1. If Γ ` τ : κ and fv(τ) ∩ D = 0, / then Γ0 ` τ : κ. 2. 
If Γ ` e : τ and fv(e) ∩ D = 0, / then Γ0 ` e : τ. Theorem 3.6 (Uniqueness of types and kinds) Assume Γ ` . 1. If Γ ` τ : κ1 and Γ ` τ : κ2 , then κ1 = κ2 . 2. If Γ ` e : τ1 and Γ ` e : τ2 , then τ1 ≡ τ2 . Finally, all judgments of the Fω type system are decidable: Theorem 3.7 (Decidability) 1. It is decidable whether Γ ` . 2. It is decidable whether Γ ` τ : κ. 3. It is decidable whether Γ ` e : τ. 4. If Γ ` τ1 : κ and Γ ` τ2 : κ, it is decidable whether τ1 ≡ τ2 . 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer Γ` Environments Γ` ·` α∈ / dom(Γ) Γ, α:κ ` Γ`τ :Ω Γ, x:τ ` Γ`τ :κ Types Γ ` τ1 : Ω Γ ` τ2 : Ω Γ ` τ1 → τ2 : Ω Γ` Γ ` α : Γ(α) Γ`τ :Ω Γ` Γ ` {l:τ} : Ω Γ, α:κ ` τ : Ω Γ ` ∀α:κ.τ : Ω Γ, α:κ ` τ : κ 0 Γ ` λ α:κ.τ : κ → κ 0 Γ, α:κ ` τ : Ω Γ ` ∃α:κ.τ : Ω Γ ` τ1 : κ 0 → κ Γ ` τ2 : κ 0 Γ ` τ1 τ2 : κ Terms Γ` Γ ` x : Γ(x) Γ ` e : τ0 Γ, x:τ ` e : τ 0 Γ ` λ x:τ.e : τ → τ 0 Γ`τ :Ω Γ ` e1 : τ 0 → τ Γ ` e2 : τ 0 Γ ` e1 e2 : τ Γ ` e : {l:τ, l 0 :τ 0 } Γ ` e.l : τ Γ`e:τ Γ` Γ ` {l=e} : {l:τ} Γ, α:κ ` e : τ Γ ` λ α:κ.e : ∀α:κ.τ τ0 ≡ τ Γ`e:τ Γ ` e : ∀α:κ.τ 0 Γ`τ :κ 0 Γ ` e τ : τ [τ/α] Γ ` τ : κ Γ ` e : τ 0 [τ/α] Γ ` ∃α:κ.τ 0 : Ω Γ ` pack hτ, ei∃α:κ.τ 0 : ∃α:κ.τ 0 Γ ` e1 : ∃α:κ.τ 0 Γ, α:κ, x:τ 0 ` e2 : τ Γ ` τ : Ω Γ ` unpack hα, xi=e1 in e2 : τ τ ≡ τ0 Type equivalence τ ≡τ τ0 ≡ τ τ ≡ τ0 τ1 ≡ τ10 τ2 ≡ τ20 τ1 → τ2 ≡ τ10 → τ20 τ ≡ τ0 τ 0 ≡ τ 00 τ ≡ τ 00 τ ≡ τ0 {l:τ} ≡ {l:τ 0 } τ ≡ τ0 ∀α:κ.τ ≡ ∀α:κ.τ 0 τ ≡ τ0 ∃α:κ.τ ≡ ∃α:κ.τ 0 τ ≡ τ0 λ α:κ.τ ≡ λ α:κ.τ 0 τ1 ≡ τ10 τ2 ≡ τ20 τ1 τ2 ≡ τ10 τ20 α∈ / fv(τ) (λ α:κ.τ1 ) τ2 ≡ τ1 [τ2 /α] (λ α:κ.τ α) ≡ τ Fig. 5. Fω typing 23 August 2014 F-ing modules e ,→ e0 Reduction (λ x:τ.e) v {l1 =v1 , l=v, l2 =v2 }.l (λ α:κ.e) τ unpack hα, xi = pack hτ, viτ 0 in e C[e] ,→ ,→ ,→ ,→ ,→ e[v/x] v e[τ/α] e[τ/α][v/x] C[e0 ] if e ,→ e0 C ::= [] | C e | v C | {l1 =v, l=C, l2 =e} | C.l | C τ | pack hτ,Ciτ | unpack hα, xi=C in e Fig. 6. 
Fω reduction Note that τ1 ≡ τ2 is defined over raw (i.e., not necessarily well-kinded) types; in particular, even if τ1 and τ2 are well-kinded, their equivalence may be established by transitively connecting them through some intermediate types that are ill-kinded. However, as long as τ1 and τ2 are well-kinded, and they have the same kind, one can test for their equality by β η-reducing them to normal forms (a process which must terminate due to strong normalization of β η-reduction) and then comparing the normal forms for α-equivalence. The proof that this algorithm is complete requires only a straightforward extension of the corresponding proof for the simply-typed λ -calculus (Geuvers, 1992), of which Fω ’s type language is but a minor generalization. From here on, we will usually silently assume all these standard properties as given and omit any explicit reference to the above lemmas and theorems. Parallel substitution We will also make use of parallel type substitutions on Fω types and terms. We write them as [τ/α] and implicitly assume that τ and α are vectors with the same arity. Furthermore, the following definitions and lemmas will come in handy in dealing with parallel type substitutions in proofs. Definition 3.8 (Typing of type substitutions) We write Γ0 ` [τ/α] : Γ if and only if 1. 2. 3. 4. Γ0 ` , α ⊆ dom(Γ), for all α ∈ dom(Γ), Γ0 ` α[τ/α] : Γ(α), for all x ∈ dom(Γ), Γ0 ` x : Γ(x)[τ/α]. Lemma 3.9 (Type substitution) Let Γ0 ` [τ/α] : Γ. Then: 1. If Γ ` τ 0 : κ, then Γ0 ` τ 0 [τ/α] : κ. 2. If Γ ` e : τ 0 , then Γ0 ` e[τ/α] : τ 0 [τ/α]. Abbreviations Figure 7 defines some syntactic sugar for n-ary pack’s and unpack’s that introduce/eliminate existential types ∃α.τ quantifying over several type variables at once. We will use n-ary forms of other constructs (e.g., application of a type λ ), defined in all instances in the obvious way. 
To ease notation in the elaboration rules that follow, we will typically omit kind annotations on type variables in the environment and on binders. Where needed, we use the 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer ∃ε.τ ∃α.τ := τ := ∃α1 .∃α 0 .τ pack hε, ei∃ε.τ0 pack hτ, ei∃α.τ0 unpack hε, x:τi = e1 in e2 unpack hα, x:τi = e1 in e2 let x:τ = e1 in e2 := := := := := e pack hτ1 ,pack hτ 0 , ei∃α 0 .τ0 [τ1 /α1 ] i∃α.τ0 let x:τ = e1 in e2 unpack hα1 , x1 i = e1 in unpack hα 0 , x:τi = x1 in e2 (λ x:τ.e2 ) e1 (where τ = τ1 τ 0 and α = α1 α 0 ) Fig. 7. Notational abbreviations for Fω notation κα to refer to the kind implicitly associated with α. For brevity, we will also usually drop the type annotations from let, pack, and unpack when they are clear from context. 4 Elaboration We will now define the semantics of the module language by elaboration into System Fω . That is, we will give (syntax-directed) translation rules that interpret signatures as Fω types, and modules as Fω terms. Our elaboration translation builds on a number of ideas for representing modules that originate in previous work (see Section 11 for a detailed discussion), but we do not assume that the reader is familiar with any of these ideas and thus explain them all from first principles. Identifiers In order to treat identifier bindings in as simple a manner as possible, we make several assumptions. First, we assume that identifiers X of the module language can be injectively mapped to variables x of Fω . To streamline the presentation, we assume that this mapping is applied implicitly, and thus we use module-language identifiers as if they were Fω variables. Second, we assume that there is an injective embedding of Fω variables into Fω labels. That is, for every (free) variable x there is a unique label lx from which x can be reconstructed. 
Together with the first assumption this means that, wherever we write lX (with X being a module language identifier), we take this to mean that X has been embedded into the set of Fω variables, which in turn has been embedded into the set of labels. Since both embeddings are injective, X uniquely determines lX and vice versa. For simplicity, we assume here that all entities of the language share a single identifier namespace. Obviously, this could be refined by using different injection functions for the different namespaces, with disjoint images. Finally, we deal with shadowing of module-language identifiers simply via shadowing in the Fω environment (see Section 3). Consequently, we need not make any specific provision for variable shadowing in our rules. Only when identifiers are turned into labels (e.g., as structure fields) do we need to explicitly avoid duplicates. Judgments The judgments comprising our elaboration semantics are listed in Figure 8. Most of these are translation judgments, one for each syntactic class of the module language, which translate module-language entities into Fω entities of the corresponding 23 August 2014 F-ing modules (kind elaboration) (type elaboration) (expression elaboration) Γ`K κ Γ`T :κ τ Γ`E :τ e such that Γ ` τ : κ such that Γ ` e : τ (path elaboration) (module elaboration) (binding elaboration) Γ`P:Σ Γ`M:Ξ Γ`B:Ξ such that Γ ` e : Σ such that Γ ` e : Ξ such that Γ ` e : Ξ (signature elaboration) (declaration elaboration) Γ`S Γ`D (signature subtyping) (signature matching) Γ ` Ξ ≤ Ξ0 f Γ ` Σ ≤ Ξ0 ↑ τ e e e such that Γ ` Ξ : Ω such that Γ ` Ξ : Ω Ξ Ξ f such that Γ ` f : Ξ → Ξ0 such that Γ ` f : Σ → Σ0 [τ/α] (where Ξ0 = ∃α.Σ0 ) Fig. 8. Elaboration judgments (abstract signatures) (concrete signatures) Ξ Σ ::= ∃α.Σ ::= [τ] | [= τ : κ] | [= Ξ] | {lX : Σ} | ∀α.Σ → Ξ Σ.ε {l : Σ, l 0 : Σ0 }.l.l := := Σ Σ.l Fig. 9. Semantic signatures variety. 
(Strictly speaking, we ambiguously overload the same notation for module and path judgments, since P syntactically expands to M. But it will always be clear from context which judgment is referenced.) The last two are auxiliary judgments for signature subtyping and matching, which we will explain a bit later. For each judgment, the figure also shows the corresponding elaboration invariant. We will prove that these invariants hold (and that the translation thereby is sound) later, in Section 5.1. To prove them, we assume that elaboration starts out with a well-formed context Γ. In fact, elaboration will maintain much stronger invariants for Γ, which are important in the proof of decidability of typechecking, but we leave discussion of the details until later (see the “Module elaboration” section below, as well as Section 5.2). In places where we do not care about evidence terms, we will often write judgments without the “ e” or “ f ” part. In addition, we use Γ ` Ξ ≤≥ Ξ0 as a shorthand for mutual subtyping Γ ` Ξ ≤ Ξ0 ∧ Γ ` Ξ0 ≤ Ξ. A number of the elaboration judgments concern semantic signatures Σ or Ξ. Semantic signatures are just a subclass of Fω types that serve as the semantic interpretations of syntactic (i.e., module-language) signatures S, as well as the classifiers of modules M. Since semantic signatures are so central to elaboration, we’ll start by explaining how they work. Semantic signatures The syntax of semantic signatures is given in Figure 9. (And no, this is not an oxymoron, for in our setting the “semantic objects” we are using to model modules are merely pieces of Fω syntax.) Following Mitchell & Plotkin (1988), the basic idea behind semantic signatures is to view a signature as an existential type, with the existential serving as a binder for all the 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer abstract types declared in the signature. 
In particular, an abstract semantic signature Ξ has the form ∃α.Σ, where α names all the abstract types declared in the signature, and where Σ is a concrete version of the signature. Σ is concrete in the sense that each (formerly) abstract type declaration is made transparently equal to the corresponding existentiallybound variable among the α. (We will see an example of this below.) The splitting of an abstract signature ∃α.Σ into these two components—the abstract types α and the concrete signature Σ—plays a key role in the elaboration of module binding (as we explain in the “Module elaboration” section below). A concrete signature Σ, in turn, can be either an atomic signature ([τ], [= τ : κ], or [= Ξ], each denoting a single anonymous value, type, or signature declaration, respectively), a structure signature (represented as a record type {lX : Σ}), or a functor signature (represented by the polymorphic function type ∀α.Σ → Ξ). Instead of adding atomic signatures as primitive constructs to the type system of the internal language (like in previous work, e.g., Dreyer et al. (2003)), we simply encode them as syntactic sugar for Fω types of a certain form. Their encodings are shown in Figure 10, along with corresponding term forms (which we will use in the translation of modules), and associated typing rules that are admissible in System Fω . The encodings refer to special labels val, typ, and sig, which we assume are disjoint from the set of labels lX corresponding to module-language identifiers. Of particular note are the encodings for type and signature declarations, which may seem slightly odd because they both appear to declare a value of the same type as the identity function. This is merely a coding trick: type and signature declarations are only relevant at compile time, and thus the actual values that inhabit these atomic signatures are irrelevant. 
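To see the encodings at work, consider a worked instance of our own (not one of the paper’s figures): the representation of a type component equal to int, together with a sketch of why the encoding is injective.

```
[= int : Ω]  =  {typ : ∀α : (Ω → Ω). α int → α int}
[int : Ω]    =  {typ = λα : (Ω → Ω). λx : α int. x}

-- so Γ ⊢ [int : Ω] : [= int : Ω] holds by the admissible rules.
-- Injectivity: if [= τ : Ω] ≡ [= τ′ : Ω], then
--   ∀α:(Ω → Ω). α τ → α τ  ≡  ∀α:(Ω → Ω). α τ′ → α τ′,
-- hence α τ ≡ α τ′ for a variable α, and therefore τ ≡ τ′.
```

Note that the inhabitant is just an identity function; it exists only so that the signature is inhabited, and it is never used at run time.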
The important point is that (1) they are inhabited, and (2) the signatures [= τ : κ] and [= Ξ] are injective, i.e., uniquely (up to Fω type equivalence) determine τ and Ξ, respectively. The encoding for [= τ : κ] is chosen such that it supports arbitrary κ. Beyond these properties the “implementation details” of the encodings are immaterial to the rest of our development, and the reader should simply view them as abstractions. In the remainder of this article, we will assume implicitly that all semantic types and signatures are reduced to βη-normal form. Likewise, we assume that all uses of substitution are followed by an implicit normalization step. This is convenient as a way of determinizing elaboration, as well as ensuring that types produced by elaboration mention the minimal set of free type variables relevant to their identity (cf. “path elaboration” below).

[τ]       := {val : τ}                              [e]     := {val = e}
[= τ : κ] := {typ : ∀α : (κ → Ω). α τ → α τ}        [τ : κ] := {typ = λα : (κ → Ω). λx : α τ. x}
[= Ξ]     := {sig : Ξ → Ξ}                          [Ξ]     := {sig = λx : Ξ. x}

Admissible typing rules:
If Γ ⊢ τ : Ω then Γ ⊢ [τ] : Ω;        if Γ ⊢ e : τ then Γ ⊢ [e] : [τ].
If Γ ⊢ τ : κ then Γ ⊢ [= τ : κ] : Ω;  if Γ ⊢ τ : κ then Γ ⊢ [τ : κ] : [= τ : κ].
If Γ ⊢ Ξ : Ω then Γ ⊢ [= Ξ] : Ω;      if Γ ⊢ Ξ : Ω then Γ ⊢ [Ξ] : [= Ξ].

Type equivalence:
If τ ≡ τ′ then [= τ : κ] ≡ [= τ′ : κ];  if Ξ ≡ Ξ′ then [= Ξ] ≡ [= Ξ′].

Fig. 10. Fω encodings of atomic signatures and admissible typing rules

Signature elaboration The elaboration of signatures (Figure 11) is not difficult. The only significant difference between a syntactic module-language signature and its semantic interpretation is that, in the latter, all the abstract types declared in the signature are collected together, hoisted out (notably, in rule D-MOD), and bound existentially at the outermost level of the signature. For example, consider the following syntactic signature:

{module A : {type t; val v : t}; signature S = {val f : A.t → int}}

This signature declares one abstract type (A.t), so the semantic Fω interpretation of the signature will bind one abstract type α:

∃α.{ lA : {lt : [= α : Ω], lv : [α]}, lS : [= {lf : [α → int]}] }

For legibility, in the sequel we’ll finesse the injections (lX) from source identifiers into labels, instead writing this signature as:

∃α.{ A : {t : [= α : Ω], v : [α]}, S : [= {f : [α → int]}] }

The signature is modeled as a record type with two fields, A and S. The A field has two subfields—t and v—the first of which has an atomic signature denoting that t is a type component equal to α, the second of which has an atomic signature denoting that v is a value component of type α (i.e., t). The S field has an atomic signature denoting that S is a signature component whose definition is the semantic signature {f : [α → int]}. Note that, by hoisting the binding for the abstract type α to the outermost scope of the signature, we have made the apparent dependency between the declaration of signature S and the declaration of module A—i.e., the reference in S’s declaration to the type A.t—disappear! Moreover, whereas in the original syntactic signature the abstract type was referred to as t in one place and as A.t in another, in the semantic signature all references to the same abstract type component use the same name (here, α). These simplifications (1) make clear that you do not need dependent types in order to model ML signatures, and (2) allow us to avoid any “signature strengthening” (aka “selfification”) machinery, of the sort one finds in all the “syntactic” type systems for modules (Harper & Lillibridge, 1994; Leroy, 1994; Leroy, 1995; Shao, 1999; Dreyer et al., 2003). The only semantic signature form not exhibited in the above example is the functor signature ∀α.Σ → Ξ.
The important point about this signature is that the α are universally quantified, which enables them to be mentioned in both the argument signature Σ and the result signature Ξ. If functor signatures were instead represented as Ξ → Ξ′, then the result signature Ξ′ would not be able to depend on any abstract types declared in the argument. An example of a functor signature can be seen in Figure 12. It gives the translation of the signature SET from the example in Figure 3, along with the translation of the signature (Elem : ORD) → (SET where type elem = Elem.t) which classifies the Set functor itself.

Signatures Γ ⊢ S ⇝ Ξ:

S-PATH:      Γ ⊢ P : [= Ξ] ⇝ e  implies  Γ ⊢ P ⇝ Ξ
S-STRUCT:    Γ ⊢ D ⇝ Ξ  implies  Γ ⊢ {D} ⇝ Ξ
S-FUNCT:     Γ ⊢ S1 ⇝ ∃α.Σ  and  Γ, α, X:Σ ⊢ S2 ⇝ Ξ  imply
             Γ ⊢ (X:S1) → S2 ⇝ ∀α. Σ → Ξ
S-WHERE-TYP: Γ ⊢ S ⇝ ∃α1 α α2.Σ  and  Σ.lX = [= α : κ]  and  Γ ⊢ T : κ ⇝ τ  imply
             Γ ⊢ S where type X=T ⇝ ∃α1 α2.Σ[τ/α]

Declarations Γ ⊢ D ⇝ Ξ:

D-VAL:    Γ ⊢ T : Ω ⇝ τ  implies  Γ ⊢ val X:T ⇝ {lX : [τ]}
D-TYP:    Γ ⊢ K ⇝ κα  implies  Γ ⊢ type X:K ⇝ ∃α.{lX : [= α : κα]}
D-TYP-EQ: Γ ⊢ T : κ ⇝ τ  implies  Γ ⊢ type X=T ⇝ {lX : [= τ : κ]}
D-MOD:    Γ ⊢ S ⇝ ∃α.Σ  implies  Γ ⊢ module X:S ⇝ ∃α.{lX : Σ}
D-SIG-EQ: Γ ⊢ S ⇝ Ξ  implies  Γ ⊢ signature X=S ⇝ {lX : [= Ξ]}
D-INCL:   Γ ⊢ S ⇝ ∃α.{lX : Σ}  implies  Γ ⊢ include S ⇝ ∃α.{lX : Σ}
D-EMT:    Γ ⊢ ε ⇝ {}
D-SEQ:    Γ ⊢ D1 ⇝ ∃α1.{lX1 : Σ1}  and  Γ, α1, X1:Σ1 ⊢ D2 ⇝ ∃α2.{lX2 : Σ2}
          and  lX1 ∩ lX2 = ∅  imply
          Γ ⊢ D1;D2 ⇝ ∃α1 α2.{lX1 : Σ1, lX2 : Σ2}

Fig. 11. Signature elaboration

Given our informal explanation, the formal rules in Figure 11 should now be very easy to follow. A few points of note, though. The rule S-WHERE-TYP for where type employs a convenient bit of shorthand notation defined in Figure 9, namely: Σ.lX denotes the signature of the lX component of Σ.
This is used to check that the type component being refined is in fact an abstract type component (i.e., equivalent to one of the α bound existentially by the signature).

SET ⇝ ∃α1 α2.{set : [= α1 : Ω], elem : [= α2 : Ω], empty : [α1],
              add : [α2 × α1 → α1], mem : [α2 × α1 → bool]}

(Elem : ORD) → (SET where type elem = Elem.t) ⇝
  ∀α.{t : [= α : Ω], eq : [α × α → bool], less : [α × α → bool]}
  → ∃β.{set : [= β : Ω], elem : [= α : Ω], empty : [β],
        add : [α × β → β], mem : [α × β → bool]}

Fig. 12. Example: signature elaboration

In the rule D-SEQ, for sequences of declarations D1;D2, the side condition that the label sets lX1 and lX2 are disjoint is imposed because signatures may not declare two components with the same name. Also, note that the identifiers X1, implicitly embedded as Fω variables, may shadow other bindings in Γ. This is one place where it is convenient to rely on shadowing being permissible in the Fω environments. Finally, the rule S-PATH for signature paths P refers in its premise to the path elaboration judgment (which we will discuss later, see Figure 17) solely in order to look up the semantic signature Ξ that P should expand to. As noted above in the discussion of atomic signatures, the actual term e inhabiting the atomic signature [= Ξ] is irrelevant.

Signature matching and subtyping Signature matching (Figure 13) is a key element of the ML module system. For sealed module expressions, we must check that the signature of the module being sealed matches the sealing signature. At functor applications, we must check that the signature of the actual argument matches the formal argument signature of the functor. What happens during signature matching is really quite simple. First of all, in all places where signature matching occurs, the source signature—i.e., the signature of the module being matched—is expressible as a concrete semantic signature Σ. (To see why, skip ahead to module elaboration.)
The target signature—i.e., the signature being matched against—on the other hand is abstract. To match against an abstract signature ∃α.Σ′, we must solve for the α. That is, we must find some τ such that the source signature matches Σ′[τ/α]. (Fortunately, if such a τ exists, it is unique, and there is an easy way of finding it by inspecting Σ—the details are in Section 5.2.) Then, the problem of signature matching reduces to the question of whether Σ is a subtype of Σ′[τ/α], which can be determined by a straightforward structural analysis of the two concrete signatures. As a simple example, consider matching

{ A : {t : [= int : Ω], u : [int], v : [int]}, S : [= {f : [int → int]}] }

against the abstract signature

∃α.{ A : {t : [= α : Ω], v : [α]}, S : [= {f : [α → int]}] }

from our signature elaboration example (above). The τ returned by the matching judgment would here be simply int, and the subtyping check would determine that the first signature is a structural (width and depth) subtype of the second after substituting int for α.

Matching:

U-MATCH:  Γ ⊢ τ : κα  and  Γ ⊢ Σ ≤ Σ′[τ/α] ⇝ f  imply  Γ ⊢ Σ ≤ ∃α.Σ′ ↑ τ ⇝ f

Subtyping:

U-VAL:    Γ ⊢ τ ≤ τ′ ⇝ f  implies  Γ ⊢ [τ] ≤ [τ′] ⇝ λx:[τ].[f (x.val)]
U-TYP:    τ = τ′  implies  Γ ⊢ [= τ : κ] ≤ [= τ′ : κ] ⇝ λx:[= τ : κ].x
U-SIG:    Γ ⊢ Ξ ≤ Ξ′ ⇝ f  and  Γ ⊢ Ξ′ ≤ Ξ ⇝ f′  imply
          Γ ⊢ [= Ξ] ≤ [= Ξ′] ⇝ λx:[= Ξ].[Ξ′]
U-STRUCT: Γ ⊢ Σ1 ≤ Σ′1 ⇝ f1  implies
          Γ ⊢ {l1 : Σ1, l2 : Σ2} ≤ {l1 : Σ′1} ⇝ λx:{l1 : Σ1, l2 : Σ2}.{l1 = f1 (x.l1)}
U-FUNCT:  Γ, α′ ⊢ Σ′ ≤ ∃α.Σ ↑ τ ⇝ f1  and  Γ, α′ ⊢ Ξ[τ/α] ≤ Ξ′ ⇝ f2  imply
          Γ ⊢ (∀α.Σ → Ξ) ≤ (∀α′.Σ′ → Ξ′) ⇝ λf:(∀α.Σ → Ξ). λα′. λx:Σ′. f2 (f τ (f1 x))
U-ABS:    Γ, α ⊢ Σ ≤ ∃α′.Σ′ ↑ τ ⇝ f  implies
          Γ ⊢ ∃α.Σ ≤ ∃α′.Σ′ ⇝ λx:(∃α.Σ). unpack ⟨α, y⟩ = x in pack ⟨τ, f y⟩

Fig. 13. Signature matching and subtyping

The signature matching judgment has the form Γ ⊢ Σ ≤ Ξ ↑ τ ⇝ f. It matches a concrete Σ against an abstract Ξ of the form ∃α.Σ′ as described above, non-deterministically synthesizing the solution τ for α, as well as the coercion f from Σ to Σ′[τ/α] (rule U-MATCH). While the purpose of signature matching is to relate concrete to abstract signatures, signature subtyping, Γ ⊢ Ξ ≤ Ξ′ ⇝ f, only relates signatures within the same class and synthesizes a respective coercion. Consequently, subtyping is defined by cases on Ξ and Ξ′. For value declarations (rule U-VAL), signature subtyping appeals to an assumed subtyping judgment for the core language, Γ ⊢ τ ≤ τ′ ⇝ f. For a core language with no subtyping the premise would degenerate to “τ = τ′”. For an ML-like core language, subtyping serves to specialize a more general polymorphic type scheme to a less general one. To take a concrete example, the empty field of the Set functor in Figure 3 would, in ML, receive polymorphic scheme ∀β.list β, but when the functor body is matched against the sealing signature (SET where type . . . ), the type of empty would be coerced to the monomorphic type list α (where α represents Elem.t). For type declarations (rule U-TYP), we require type equivalence, so subtyping just produces an identity coercion. For signature declarations (rule U-SIG), we do not require that they are equal (as types), but merely mutual subtypes, because type equivalence would be too fine-grained. In particular, signatures that differ syntactically only in the order of their declarations will elaborate to semantic signatures that differ only in the order in which their existential type variables are bound. Such differences should be inconsequential in the source program. And indeed, the order of quantifiers does not matter anywhere in our rules, because they are only used for matching, and pushed around en bloc in all other places.
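To make the coercions concrete, here is a sketch (our own, with type annotations elided) of the coercion term the rules would synthesize for the int matching example above, assuming the core-language coercion for int ≤ int is the identity:

```
f = λx. { A = { t = x.A.t,           -- U-TYP: identity coercion
                v = [x.A.v.val] },   -- U-VAL with the identity core coercion
          S = [{f : [int → int]}] }  -- U-SIG: rebuilds the signature witness
```

Note how width subtyping (U-STRUCT) silently drops the u field of A: the coercion simply does not copy it into the result record.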
(Ordering of quantifiers will, however, matter for modules as first-class values—see the discussion of signature normalization in Section 6.) For structure signatures, we allow both width and depth subtyping (rule U-STRUCT). For functor signatures, ∀α.Σ → Ξ and ∀α′.Σ′ → Ξ′, subtyping proceeds in the usual contra- and co-variant manner (rule U-FUNCT): after introducing α′, we match the domains contravariantly to determine an instantiation τ for α such that Σ′ ≤ Σ[τ/α]; then, we (covariantly) check that the (instantiated) co-domain Ξ[τ/α] subtypes Ξ′. This allows for polymorphic specialization, i.e., a more polymorphic functor signature may subtype a less polymorphic one. Dually, for abstract semantic signatures ∃α.Σ and ∃α′.Σ′, subtyping recursively reduces to eliminating ∃α.Σ, then matching Σ against Σ′ to determine witness types τ for α′; thus, a less abstract signature may subtype a more abstract one (rule U-ABS). The coercion terms f synthesized by the subtyping rules are straightforward—given the required invariant, Γ ⊢ f : Ξ → Ξ′, they practically write themselves. This invariant also determines the elided type annotation on the pack expression in the U-ABS rule. We assume βη-equivalence for System Fω types, which is important to make certain examples work as expected. Consider the following two signatures:6

signature A = {type t : ? → ?; type u = fun a ⇒ t a}
signature B = {type u : ? → ?; type t = fun a ⇒ u a}

Semantically, they are represented as:

A = ∃β1 : Ω → Ω.{t : [= β1 : Ω → Ω], u : [= λα.β1 α : Ω → Ω]}
B = ∃β2 : Ω → Ω.{u : [= β2 : Ω → Ω], t : [= λα.β2 α : Ω → Ω]}

Intuitively, A ≤ B is expected to hold (and vice versa).
According to rules U-ABS and U-MATCH, this boils down to finding a type τ : Ω → Ω such that

{t : [= β1 : Ω → Ω], u : [= λα.β1 α : Ω → Ω]} ≤ {u : [= τ : Ω → Ω], t : [= λα.τ α : Ω → Ω]}

By rule U-TYP, the following equivalences need to hold for a suitable choice of τ:

β1 = λα.τ α    (via t)
λα.β1 α = τ    (via u)

6 In this and later examples, we use the syntax fun X ⇒ T to denote a type function in our imaginary Core language.

Substituting the solution for τ, given by the second equation, into the first reveals that the following will have to hold:

β1 = λα.(λα.β1 α) α

Clearly, this is only the case under a combination of both β- and η-equivalence.

Module elaboration The module elaboration judgment (Figure 14), which has the form Γ ⊢ M : Ξ ⇝ e, assigns module M the semantic signature Ξ and additionally translates M to an Fω term e of type Ξ. (The invariant, Γ ⊢ e : Ξ, determines elided pack annotations.) As in signature elaboration, the basic idea in module elaboration is to assign M an abstract signature ∃α.Σ such that α represent all the abstract types that M defines. The difference here is that we must also construct the term e that has this signature—i.e., the evidence. One way to understand the evidence construction is to think of the existential type ∃α.Σ as a monad that encapsulates the “effect” of defining abstract types. When we want to use a module of this abstract (think: monadic) signature, we must first unpack it (think: the bind operation for the monad), obtaining some fresh abstract types α and a variable x of concrete (think: pure) signature Σ. We can then do whatever we want with x, ultimately producing another module of (monadic) signature ∃α′.Σ′. Of course, Σ′ may have free references to the α, so at the end we must repack the result with the α to form a module of signature ∃α α′.Σ′.
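As a small hypothetical instance of this idiom (our own, not from the paper’s figures), suppose e is the evidence for a module of signature ∃α.{t : [= α : Ω], v : [α]}, and we want to build a structure that re-exports it under A together with an extra value field w copied from v:

```
-- given  e : ∃α.{t : [= α : Ω], v : [α]}
unpack ⟨α, x⟩ = e in              -- "bind": obtain fresh α and a concrete x
pack ⟨α, {A = x, w = [x.v.val]}⟩  -- use x freely, then repack with the same α
  : ∃α.{A : {t : [= α : Ω], v : [α]}, w : [α]}
```

The abstract type α of e simply reappears among the existentials of the result, which is exactly the monadic propagation described above.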
Thus, the abstract types α defined by M propagate monadically into the set of abstract types defined by any module that uses M. As many researchers have pointed out (MacQueen, 1986; Cardelli & Leroy, 1990), this monadic unpack/repack style of existential programming would be annoying to program manually. Fortunately, it is easy for module elaboration to perform it automatically. Figure 14 shows the rules for elaborating modules and bindings. The rules for projections (M-DOT), module bindings (B-MOD), and binding sequences (B-SEQ) show the unpack/repack idiom in action. The last of these is somewhat involved, but only because ML modules allow bindings to be shadowed—a practical complication, incidentally, that is glossed over in most module type systems in the literature (with the exception of Harper & Stone (2000), who account for full Standard ML).7 It is here primarily that we rely on the fact that the Fω version from Section 3 allows shadowing in Γ, in order to avoid having to map external identifiers to fresh internal variables. (In fact, we have already relied on this for rule S-FUNCT, and do so again for rule M-FUNCT.) The rule M-FUNCT for functors is completely analogous to rule S-FUNCT for functor signatures (cf. Figure 11). Note that this rule and the sequence rule B-SEQ are the only two that extend the environment Γ, and that in both cases the new variable X is bound with a concrete signature Σ. As a result, when we look up an identifier X in the environment (rule M-VAR), we may assume it has a concrete signature. This is a key invariant of elaboration.

7 Of course, a realistic implementation of modules would want to optimize the construction of structure representations and avoid the repeated record concatenation. Such an optimization is fairly easy; it essentially boils down to partially evaluating the expressions generated by our sequencing rule.

Modules Γ ⊢ M : Ξ ⇝ e:

M-VAR:    Γ(X) = Σ  implies  Γ ⊢ X : Σ ⇝ X
M-STRUCT: Γ ⊢ B : Ξ ⇝ e  implies  Γ ⊢ {B} : Ξ ⇝ e
M-DOT:    Γ ⊢ M : ∃α.{lX : Σ, lX′ : Σ′} ⇝ e  implies
          Γ ⊢ M.X : ∃α.Σ ⇝ unpack ⟨α, y⟩ = e in pack ⟨α, y.lX⟩
M-FUNCT:  Γ ⊢ S ⇝ ∃α.Σ  and  Γ, α, X:Σ ⊢ M : Ξ ⇝ e  imply
          Γ ⊢ fun X:S ⇒ M : ∀α. Σ → Ξ ⇝ λα.λX:Σ.e
M-APP:    Γ(X1) = ∀α. Σ′ → Ξ  and  Γ(X2) = Σ  and  Γ ⊢ Σ ≤ ∃α.Σ′ ↑ τ ⇝ f  imply
          Γ ⊢ X1 X2 : Ξ[τ/α] ⇝ X1 τ (f X2)
M-SEAL:   Γ(X) = Σ  and  Γ ⊢ S ⇝ Ξ  and  Γ ⊢ Σ ≤ Ξ ↑ τ ⇝ f  imply
          Γ ⊢ X:>S : Ξ ⇝ pack ⟨τ, f X⟩

Bindings Γ ⊢ B : Ξ ⇝ e:

B-VAL:  Γ ⊢ E : τ ⇝ e  implies  Γ ⊢ val X=E : {lX : [τ]} ⇝ {lX = [e]}
B-TYP:  Γ ⊢ T : κ ⇝ τ  implies  Γ ⊢ type X=T : {lX : [= τ : κ]} ⇝ {lX = [τ : κ]}
B-MOD:  Γ ⊢ M : ∃α.Σ ⇝ e  and  Σ not atomic  imply
        Γ ⊢ module X=M : ∃α.{lX : Σ} ⇝ unpack ⟨α, x⟩ = e in pack ⟨α, {lX = x}⟩
B-SIG:  Γ ⊢ S ⇝ Ξ  implies  Γ ⊢ signature X=S : {lX : [= Ξ]} ⇝ {lX = [Ξ]}
B-INCL: Γ ⊢ M : ∃α.{lX : Σ} ⇝ e  implies  Γ ⊢ include M : ∃α.{lX : Σ} ⇝ e
B-EMT:  Γ ⊢ ε : {} ⇝ {}
B-SEQ:  Γ ⊢ B1 : ∃α1.{lX1 : Σ1} ⇝ e1  and  Γ, α1, X1:Σ1 ⊢ B2 : ∃α2.{lX2 : Σ2} ⇝ e2
        and  lX1′ = lX1 − lX2  and  lX1′:Σ1′ ⊆ lX1:Σ1  imply
        Γ ⊢ B1;B2 : ∃α1 α2.{lX1′ : Σ1′, lX2 : Σ2} ⇝
          unpack ⟨α1, y1⟩ = e1 in
          unpack ⟨α2, y2⟩ = (let X1 = y1.lX1 in e2) in
          pack ⟨α1 α2, {lX1′ = y1.lX1′, lX2 = y2.lX2}⟩

Fig. 14. Module elaboration

The rules for functor applications (M-APP) and sealed modules (M-SEAL) both appeal to the signature matching judgment. In the former, the τ represent the type components of the actual functor argument corresponding to the abstract types α declared in the formal argument signature. For instance, in the functor application in Figure 3, τ would be simply int, since that is how the argument module defines the abstract type t declared in the argument signature ORD. This information is then propagated to the result of the functor application by substituting τ for α in the result signature Ξ. The sealing rule works similarly, except that τ is not used to eliminate a universal type, but dually, to introduce an existential type. Hence, τ is not propagated to the signature of the sealed module, but rather hidden within the existential. This makes sense because, of course, the point of sealing is to hide the identity of the abstract types α. Note that both M-APP and M-SEAL are made simpler by our language’s restriction of functor applications and sealing to module identifiers (X1 X2 and X:>S), which enables us to exploit the elaboration invariant that those identifiers (the X’s) already have concrete signatures and need not be unpacked. As the let-binding encodings of the more general forms M1 M2 and M:>S in Figure 2 suggest, elaboration of those forms just involves monadically unpacking the M’s to X’s first before applying M-APP or M-SEAL, and then repacking afterward.

As an example of the module elaboration translation, Figure 15 sketches the result of elaborating the Set functor from Figure 3. It also shows the Fω representation of a simple program involving the application of this functor. We assume that there is a suitable library module Int that matches signature ORD, whose t component is transparently equal to int, and whose Fω representation is Int. In order to avoid too much clutter, we do not spell out the respective coercions f and f′ occurring in both examples.

Set ⇝ λα.λElem : {t : [= α : Ω], eq : [α × α → bool], less : [α × α → bool]}.
  pack ⟨list α,
        f (let y1 = {elem = [α : Ω]} in
           let y2 = let elem = y1.elem in
                    let y21 = {set = [list α : Ω]} in
                    let y22 = let set = y21.set in
                    ...
           in {elem = y1.elem, set = y2.set, empty = y2.empty,
               add = y2.add, mem = y2.mem})
  ⟩∃β.{set:[=β:Ω], elem:[=α:Ω], empty:[β], add:[α×β→β], mem:[α×β→bool]}

{module IS = Set Int; val s = IS.add (7, IS.empty)} ⇝
  unpack ⟨β, y1⟩ = {IS = Set int (f′ Int)} in
  let y2 = (let IS = y1.IS in {s = [IS.add ⟨7, IS.empty⟩]}) in
  pack ⟨β, {IS = y1.IS, s = y2.s}⟩∃β.{IS:{...}, s:[β]}

Fig. 15. Example: module elaboration

To make the essence of the translation a bit more apparent, Figure 16 shows simplified versions of the same translations with all intermediate redexes (in particular, intermediate structures) removed, via straightforward βη-transformations of let-bindings and records. In particular, once we eliminate the administrative overhead of rule B-SEQ, a structure simply becomes a sequence of let-bindings for the declarations in its body, feeding into a record that collects all bound variables as fields.

Set ⇝ λα.λElem : {t : [= α : Ω], eq : [α × α → bool], less : [α × α → bool]}.
  pack ⟨list α,
        f (let elem = [α : Ω] in
           let set = [list α : Ω] in
           let empty = [nil] in
           let add = [. . . Elem.eq . . . Elem.less . . .] in
           let mem = [. . . Elem.eq . . . Elem.less . . .] in
           {elem = elem, set = set, empty = empty, add = add, mem = mem})
  ⟩∃β.{set:[=β:Ω], elem:[=α:Ω], empty:[β], add:[α×β→β], mem:[α×β→bool]}

{module IS = Set Int; val s = IS.add (7, IS.empty)} ⇝
  unpack ⟨β, IS⟩ = Set int (f′ Int) in
  let s = [IS.add ⟨7, IS.empty⟩] in
  pack ⟨β, {IS = IS, s = s}⟩∃β.{IS:{...}, s:[β]}

Fig. 16. Example: module elaboration, simplified

Generativity Functors in Standard ML are said to behave generatively, meaning that every application of a functor F will have the effect of generating fresh abstract types corresponding to whichever types are declared abstractly in F’s result signature. With the existential interpretation of type abstraction that we employ here, this generativity comes for free. Applying a functor produces a module with an existential type of the form ∃α.Σ. Thus, if a functor is applied twice (say, to the same argument) and the results are bound to two different identifiers X1 and X2, then the binding sequence rule will ensure that two separate copies of the α will be added to the environment Γ—call them α1 and α2—along with the bindings X1 : Σ[α1/α] and X2 : Σ[α2/α]. In this way, the abstract type components of X1 and X2 will be made distinct.
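To illustrate, take a hypothetical functor F whose result signature is ∃α.Σ with Σ = {t : [= α : Ω], v : [α]} (the instantiation and coercion of the application are elided, as in Figure 15). Applying it twice and sequencing the bindings elaborates, after the simplifications discussed above, to nested unpacks that introduce two distinct abstract types:

```
{module X1 = F Y; module X2 = F Y}
  ⇝  unpack ⟨α1, X1⟩ = (translation of F Y) in
     unpack ⟨α2, X2⟩ = (translation of F Y) in
     pack ⟨α1 α2, {X1 = X1, X2 = X2}⟩
       : ∃α1 α2.{X1 : {t : [= α1 : Ω], v : [α1]},
                 X2 : {t : [= α2 : Ω], v : [α2]}}
```

Since α1 and α2 are separate existential variables, X1.t and X2.t are incompatible types, which is exactly the generative behaviour described above.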
In Section 7 we will explore an alternative semantics, where functors can be applicative, i.e., applying such a functor twice (to the same argument) will only produce one copy of the abstract types it defines.

Path elaboration Figure 17 displays the last three rules of elaboration, concerning the elaboration of paths. (The elaboration rule for signature paths appeared in Figure 11.)

P-MOD:  Γ ⊢ P : ∃α.Σ ⇝ e  and  Γ ⊢ Σ : Ω  imply  Γ ⊢ P : Σ ⇝ unpack ⟨α, x⟩ = e in x
T-PATH: Γ ⊢ P : [= τ : κ] ⇝ e  implies  Γ ⊢ P : κ ⇝ τ
E-PATH: Γ ⊢ P : [τ] ⇝ e  implies  Γ ⊢ P : τ ⇝ e.val

Fig. 17. Path elaboration

Paths are the means by which value, type, and signature components are projected out of modules. As explained in Section 2, in order for paths to make sense, the values, types, or signatures that they project out must be well-formed in the ambient environment Γ. In other words, paths P need to elaborate to a concrete signature Σ, because (unlike for module constructs) existential quantifiers can not be “extruded” further in the contexts where paths occur. To ensure this, the path elaboration judgment, Γ ⊢ P : Σ ⇝ e, uses the ordinary module elaboration judgment, Γ ⊢ M : Ξ ⇝ e, in its premise (with M = P) to synthesize P’s semantic signature ∃α.Σ, which still allows “local” abstract types α to occur. It then checks that Σ does not actually depend on any of these α that P may have defined (note that we assume all types normalized, so any spurious dependencies are implicitly eliminated). The rules for type, expression, and signature paths use the path elaboration judgment to check the well-formedness of the path, and then project the component out accordingly. For instance, consider the example from Section 2 of an ill-formed path.
Let M be the module expression

{type t = int; val v = 3} :> {type t; val v : t}

The semantic signature that module elaboration assigns to M is:

∃α.{t : [= α : Ω], v : [α]}

Thus, if we were to try to project either t or v from M directly, the resulting type or expression would not be well-formed, since both [= α : Ω] and [α] refer to the local abstract type α that is not going to be bound in the environment Γ. If, on the other hand, we were to first bind M to an identifier X, and then subsequently project out X.t or X.v, the paths would be well-formed. The reason is that the binding sequence rule would extend the ambient environment with a fresh α, as well as X : {t : [= α : Ω], v : [α]}. Under such an extended environment, X.t would simply elaborate to α, and X.v would elaborate to X.v.val of type α, both of which are well-formed since α is now bound in the environment. In general, since identifiers have concrete signatures, any well-formed module of the form X.lY will also be a well-formed path. If one views existential types as a monad, as we have suggested, then the path elaboration rule may seem superficially odd because it allows one to “escape” the monad by going from ∃α.Σ to Σ. However, the point is that one can only do this if the “effects” encapsulated by the monad—i.e., the abstract types α defined by the path—are strictly local. This is similar conceptually to the hiding of “benign” (or “encapsulated”) effects by Haskell’s runST mechanism (Launchbury & Peyton Jones, 1995).

5 Meta-theoretic properties

Having defined the semantics of ML modules by elaboration into System Fω, it is time to prove it (a) sound, and (b) decidable. Some theorems about the module language depend on the assumption that respective properties can be proved for core language elaboration (i.e., the first three judgments listed in Figure 8).
However, because both language layers are mutually recursive through the syntax of paths (and after Section 6, also through modules as first-class values), these proofs are typically not independent—they need to be performed by simultaneous induction on the derivations for both language layers. We hence state all properties that we assume about the core language as part of the respective theorems below. The theorems then hold provided that the inductive argument can also be shown for all additional cases not specified by our grammar for types T and expressions E.

5.1 Soundness

Proving soundness of a language specified by an elaboration semantics consists of two steps: 1. Showing that elaboration only produces well-typed terms of the target language. 2. Showing that the type system of the target language is sound. Fortunately, in our case, since the target language is the very well-studied System Fω, we can simply borrow the second part from the literature. It thus remains to be shown that the elaboration rules produce well-formed Fω expressions. Of course, since our development is parametric in the concrete choice of a core language, the result only holds relative to suitable assumptions about the soundness of the elaboration rules for the core language. Formally, we state the following theorem, which collects the elaboration invariants already stated in Figure 8:

Theorem 5.1 (Soundness of elaboration)
Provided Γ ⊢ ⋄ (i.e., Γ is well-formed), we have:
1. If Γ ⊢ T : κ ⇝ τ, then Γ ⊢ τ : κ.
2. If Γ ⊢ E : τ ⇝ e, then Γ ⊢ e : τ.
3. If Γ ⊢ τ ≤ τ′ ⇝ f and Γ ⊢ τ : Ω and Γ ⊢ τ′ : Ω, then Γ ⊢ f : τ → τ′.
4. If Γ ⊢ S/D ⇝ Ξ, then Γ ⊢ Ξ : Ω.
5. If Γ ⊢ P/M/B : Ξ ⇝ e, then Γ ⊢ e : Ξ.
6. If Γ ⊢ Ξ ≤ Ξ′ ⇝ f and Γ ⊢ Ξ : Ω and Γ ⊢ Ξ′ : Ω, then Γ ⊢ f : Ξ → Ξ′.
7. If Γ ⊢ Σ ≤ ∃α.Σ′ ↑ τ ⇝ f and Γ ⊢ Σ : Ω and Γ, α ⊢ Σ′ : Ω, then Γ ⊢ τ : κα and Γ ⊢ f : Σ → Σ′[τ/α].

Proof
The proof is by relatively straightforward simultaneous induction on derivations.
The arguments for properties 1-3 clearly depend on the core language, and we assume that it can be proved for all additional cases not specified in our grammar. We have performed the entire proof in Coq (Section 10), and transliterate only two representative cases here:

• Case M-APP: By induction we know that (1) Γ ⊢ τ : κα and (2) Γ ⊢ f : Σ → Σ′[τ/α]. From (1) we can derive that Γ ⊢ X1 τ : (Σ′ → Ξ)[τ/α]. From (2) it follows that Γ ⊢ f X2 : Σ′[τ/α]. Thus, we can conclude Γ ⊢ X1 τ (f X2) : Ξ[τ/α] by the typing rule for application.

• Case B-SEQ: By induction on the first premise we know (1) Γ ⊢ e1 : ∃α1.{lX1 : Σ1}. Let Γ1 = Γ, α1, X1:Σ1. By validity and inversion, from (1) we derive Γ, α1 ⊢ Σ1 : Ω, so Γ1 ⊢ ⋄. By induction on the second premise, (2) Γ1 ⊢ e2 : ∃α2.{lX2 : Σ2}. It is easy to show Γ, α1, y1:{lX1 : Σ1} ⊢ y1.lX1 : Σ1. By convention, y1 and y2 are fresh, and so it follows that Γ, α1, y1:{lX1 : Σ1}, α2, y2:{lX2 : Σ2} ⊢ {lX1′ = y1.lX1′, lX2 = y2.lX2} : {lX1′ : Σ1′, lX2 : Σ2} from the typing rules. From (1) and weakening (2), the overall goal follows by inner induction on the lengths of α1, α2, and lX1, and expanding the n-ary versions of pack, unpack and let.

If the reader finds the proof cases shown here to be boring and straightforward, that is because they are! The remaining cases are even more boring. In other words, there is nothing tricky going on in our elaboration—which substantiates our claim that it is simple.

5.2 Decidability

All our elaboration rules are syntax-directed, and they can be interpreted directly as a deterministic algorithm. Provided core elaboration is terminating, this algorithm clearly terminates as well. There is one niggle, though: the signature matching rule requires a non-deterministic guess of suitable instantiating types τ. To prove elaboration decidable, we must provide a sound and complete algorithm for finding these types.
It’s not obvious that such an algorithm should exist at all. For example, consider the following matching problem (Dreyer et al., 2003): ∀α.[= α : κ] → [= τ1 : κ 0 ] ≤ ∃β .([= β : κ] → [= τ2 : κ 0 ]) The matching rule must find an instantiation type τ : κ for β such that the left signature is a subtype of [= τ : κ] → [= τ2 [τ/β ] : κ 0 ], which in turn will only hold if τ1 [τ/α] = τ2 [τ/β ]. Since κ may be a higher kind, this amounts to a higher-order unification problem, which is undecidable in general (Goldfarb, 1981). Validity Fortunately, under minimal assumptions about the initial environment, we can show that such problematic cases never arise during elaboration. More precisely, we can show that, whenever we invoke Σ ≤ ∃α.Σ0 , the target signature Σ0 has the property that each abstract type variable α ∈ α actually occurs explicitly in Σ0 in the form of an embedded type field [= α : κα ]. We say that α is rooted in Σ0 in this case. An abstract signature in which all quantified variables are rooted is called explicit. Intuitively, the reason we can expect the target signature ∃α.Σ0 to be explicit is that (1) the only signatures we ever match against 23 August 2014 F-ing modules α rooted in Σ :⇔ α rooted in Σ α rooted in [= τ : κ] (at ε) :⇔ α = τ α rooted in {l : Σ} (at l.l 0 ) :⇔ α rooted in {l : Σ}.l (at l 0 ) [τ] explicit [= τ : κ] explicit [= Ξ] explicit {l : Σ} explicit ∀α.Σ → Ξ explicit ∃α.Σ :⇔ :⇔ :⇔ :⇔ (always) (always) Ξ explicit Σ explicit ∃α.Σ explicit ∧ Ξ explicit α rooted in Σ ∧ Σ explicit Γ ` Ξ : Ω explicit :⇔ Γ ` Ξ : Ω ∧ Ξ explicit [τ] valid [= τ : κ] valid [= Ξ] valid {l : Σ} valid ∀α.Σ → Ξ valid ∃α.Σ valid :⇔ :⇔ :⇔ :⇔ (always) (always) Ξ explicit Σ valid ∃α.Σ explicit ∧ Ξ valid Σ valid Γ ` Ξ : Ω valid :⇔ Γ ` Ξ : Ω ∧ Ξ valid Γ valid :⇔ ∀(X:Σ) ∈ Γ, Σ valid Fig. 18. 
Signature explicitness and validity during elaboration

are themselves the result of elaborating some ML signature S, and (2) all of such a signature’s abstract types α must originate in some opaque type specification appearing in S. Figure 18 gives an inductive definition of these properties. (We typically drop the explicit path description “(at l)” from the rootedness judgment—the only place where we actually need it will be the definition of signature normalization in Section 6.) However, this is not all. While it is necessary (in general) that a signature Ξ is explicit to decide matching Σ ≤ Ξ, it is not sufficient. Subtyping is contra-variant in functor arguments, so we also need to ensure that, whenever we invoke subtyping to determine whether Σ ≤ Σ′ and Σ is a functor signature, its argument signature is explicit as well. Unfortunately, we cannot require all of Σ to be explicit, because not all module expressions (as opposed to signature expressions) yield explicit signatures. For example,

let module A = {type t = int; val v = 5; val f x = x} :> {type t; val v : t; val f : t → int}
in {val f = A.f; val v = A.v}

defines a module with the non-explicit signature ∃α.{f : [α → int], v : [α]}. Figure 18 hence defines the second notion of a valid signature that captures the relevant property—that is, a signature is valid if all contained functor arguments are explicit (but other constituent signatures need not be). Intuitively, it is expected that modules have valid signatures, because the language requires explicit signature annotations on all functor arguments. The notion of validity is extended to environments, and we require all signatures and environments used in elaboration to be valid.8 Note that validity of environments only cares about variables bound to concrete signatures Σ because of the elaboration invariant (discussed in Section 4, “Module elaboration”) that all modules of signature ∃α.Σ are unpacked into α and X : Σ before being added to the context.
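As a concrete (if drastically simplified) illustration, the rootedness and explicitness checks of Figure 18 can be sketched over a toy signature AST. Everything here (the `sig_` type, the constructor names, the omission of kinds and of the validity judgment) is our own simplification, not the paper's actual definitions:

```ocaml
(* A toy rendering of rootedness and explicitness from Figure 18.
   Kinds, functor-argument validity, and paths are elided. *)
type typ = Var of string | Int
type sig_ =
  | Val of typ                                        (* [τ] *)
  | Type of typ                                       (* [= τ : κ] *)
  | Struct of (string * sig_) list                    (* {l : Σ} *)
  | Funct of string list * sig_ * string list * sig_  (* ∀ᾱ.Σ → ∃β̄.Σ′ *)
  | Exist of string list * sig_                       (* ∃ᾱ.Σ *)

(* α is rooted in Σ iff Σ contains a type field [= α] at some path. *)
let rec rooted a = function
  | Type (Var b) -> a = b
  | Struct ls -> List.exists (fun (_, s) -> rooted a s) ls
  | _ -> false

(* Σ is explicit iff every quantified variable is rooted in its body. *)
let rec explicit = function
  | Val _ | Type _ -> true
  | Struct ls -> List.for_all (fun (_, s) -> explicit s) ls
  | Funct (al, s, bl, s') ->
      explicit (Exist (al, s)) && explicit (Exist (bl, s'))
  | Exist (al, s) ->
      List.for_all (fun a -> rooted a s) al && explicit s
```

For instance, `explicit (Exist (["a"], Struct [("t", Type (Var "a"))]))` holds, whereas `Exist (["a"], Struct [("v", Val (Var "a"))])` is not explicit, mirroring the signature of module A above.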
The notions of explicit and valid signatures are also called analysis and synthesis signatures in the literature (Dreyer et al., 2003; Rossberg & Dreyer, 2013); Russo (1998) used the terms solvable and ground.

23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer

lookupα1...αn(Σ, Σ′) ↑ τ1...τn       if lookupαi(Σ, Σ′) ↑ τi for each i
lookupα([= τ : κ], [= τ′ : κ]) ↑ τ   if τ′ = α
lookupα({l : Σ}, {l′ : Σ′}) ↑ τ      if ∃l ∈ l ∩ l′. lookupα({l : Σ}.l, {l′ : Σ′}.l) ↑ τ

Fig. 19. Algorithmic type lookup

With a little auxiliary lemma, we can show that our elaboration establishes and maintains explicit signatures for signature expressions, and valid signatures for module expressions:

Lemma 5.2 (Simple properties of validity)
1. If Ξ explicit, then Ξ valid.
2. If Ξ explicit/valid, then Ξ[τ/α] explicit/valid.

Lemma 5.3 (Signature validity)
Assume Γ valid.
1. If Γ ⊢ S/D ⇝ Ξ, then Ξ explicit.
2. If Γ ⊢ P/M/B : Ξ ⇝ e, then Ξ valid.

Type lookup If the ∃α.Σ′ in the matching rule U-MATCH is explicit, then the instantiation of each α can be found by a simple pre-pass on Σ and Σ′, thanks to the following observation: if the subsequent subtyping check is ever going to succeed, then Σ must feature an atomic type signature [= τ : κα] at the same location where α is rooted in Σ′. Moreover, α must be instantiated with a type equivalent to τ. Consequently, the definition of lookup in Figure 19 implements a suitable algorithm for finding the types τ in rule U-MATCH, through a straightforward parallel traversal of the two signatures involved. There is a twist, though: an abstract type variable may actually have multiple roots in a signature. For example, the external signature {type t; type u = t} elaborates to ∃α.{t : [= α : Ω], u : [= α : Ω]}. The lookup algorithm, as given in the figure, is non-deterministic in that it can pick any suitable root—specifically, the choice of l in the last clause is not necessarily unique. This formulation simplifies the proof of completeness below.
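The lookup pre-pass of Figure 19 can likewise be sketched over a toy AST (again our own simplification; kinds and the n-ary pointwise clause are omitted, and the first suitable root wins):

```ocaml
(* A sketch of algorithmic type lookup: lookup a s s' finds the type in s
   sitting at a path where the variable a is rooted in s'. *)
type typ = Var of string | Int
type sig_ =
  | Type of typ                     (* [= τ : κ] *)
  | Struct of (string * sig_) list  (* {l : Σ} *)

let rec lookup a s s' =
  match s, s' with
  | Type t, Type (Var b) when b = a -> Some t  (* root found: a := t *)
  | Struct ls, Struct ls' ->
      (* parallel traversal of common labels; any root yields the answer *)
      List.fold_left
        (fun acc (l, sub') ->
          match acc, List.assoc_opt l ls with
          | None, Some sub -> lookup a sub sub'
          | _ -> acc)
        None ls'
  | _ -> None
```

Matching the elaboration of {type t = int; type u = t} against ∃α.{t : [= α], u : [= α]}, both roots of α deliver the same answer, int, as the uniqueness result below guarantees.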
Intuitively, it does not matter which one we pick, they all have to be equivalent. The soundness theorem proves that, but first we need a little technical lemma: Lemma 5.4 (Simple properties of type lookup) 1. If lookupα (Σ, Σ0 ) ↑ τ, then fv(τ) ⊆ fv(Σ). 2. If lookupα (Σ, Σ0 ) ↑ τ and α ∩ α 0 = 0, / then lookupα (Σ, Σ0 [τ 0 /α 0 ]) ↑ τ (and both derivations have the same size). 3. If lookupα (Σ, Σ0 ) ↑ τ and Γ ` Σ : Ω, then Γ ` τ : κ. Theorem 5.5 (Soundness of type lookup) 1. Let Γ ` Σ : Ω and Γ, α ` Σ0 : Ω. If lookupα (Σ, Σ0 ) ↑ τ1 , then Γ ` τ1 : κα . Furthermore, if Γ ` Σ ≤ Σ0 [τ2 /α] for Γ ` τ2 : κα , then τ1 = τ2 . 2. Let Γ ` Σ : Ω and Γ, α ` Σ0 : Ω. If lookupα (Σ, Σ0 ) ↑ τ 1 , then Γ ` τ1 : κα . Furthermore, if Γ ` Σ ≤ ∃α.Σ0 ↑ τ 2 , then τ 1 = τ 2 . Proof 23 August 2014 F-ing modules Part 1 is by easy induction on the size of the derivation of the lookup. Part 2 follows by induction on the length of α. When α is empty, then there is nothing to show. Otherwise, α = α, α 0 and τ 1 = τ1 , τ 01 , such that lookupα (Σ, Σ0 ) ↑ τ1 and lookupα (Σ, Σ0 ) ↑ τ 01 . Let Γ0 = Γ, α 0 . With weakening, respectively reordering, Γ0 ` Σ : Ω and Γ0 , α ` Σ0 : Ω. By part 1, we then know Γ0 ` τ1 : κα . Lemma 5.4 implies fv(τ1 ) ⊆ fv(Σ), and because Σ is wellformed under Γ, it follows that fv(τ1 ) ⊆ dom(Γ), so that we can strengthen to Γ ` τ1 : κα . Substitution yields Γ0 ` Σ0 [τ1 /α] : Ω, and from Lemma 5.4 we get lookupα 0 (Σ, Σ0 [τ1 /α]) ↑ τ 01 , such that we can apply the induction hypothesis to conclude Γ ` τ10 : κα 0 . Furthermore, in order to prove the type equivalence, we first invert U- MATCH to reveal Γ ` Σ ≤ Σ0 [τ 2 /α] and Γ ` τ2 : κα . Consequently, τ 2 = τ2 , τ 02 and fv(τ 2 ) ⊆ dom(Γ), i.e., α ∩ fv(τ 2 ) = 0/ by the usual conventions. The latter implies Σ0 [τ 2 /α] = Σ0 [τ2 /α][τ 02 /α 0 ] = Σ0 [τ 02 /α 0 ][τ2 /α]. Similar to before, Lemma 5.4 gets us lookupα (Σ, Σ0 [τ 02 /α 0 ]) ↑ τ1 , and substitution Γ, α ` Σ0 [τ 02 /α 0 ] : Ω. By part 1, τ1 = τ2 then. 
To invoke the induction hypothesis for concluding τ 01 = τ 02 as well, we first note that by substitution, Γ0 ` Σ0 [τ2 /α] : Ω, and second, by Lemma 5.4 again, lookupα 0 (Σ, Σ0 [τ2 /α]) ↑ τ 01 . Third, since Σ0 [τ 2 /α] = Σ0 [τ 02 /α 0 ][τ2 /α], we can construct a derivation for Γ ` Σ ≤ ∃α 0 .Σ0 [τ2 /α] ↑ τ 02 with rule U- MATCH. According to soundness, if there is any type at all that makes a match succeed, then lookup can only deliver a well-formed, equivalent type. Despite being non-deterministic, the result of lookup hence is unique: Corollary 5.6 (Uniqueness of type lookup) Let Γ ` Σ : Ω and Γ ` ∃α.Σ0 : Ω and Γ ` Σ ≤ ∃α.Σ0 ↑ τ. If lookupα (Σ, Σ0 ) ↑ τ 1 and lookupα (Σ, Σ0 ) ↑ τ 2 , then τ 1 = τ 2 . Because of this result, we can implement lookup as a deterministic algorithm by simply choosing the “first” root we encounter for each type variable, in any signature traversal order of our liking. For explicit signatures, our definition of type lookup is also a complete algorithm for finding instantiations in the matching judgment: Theorem 5.7 (Completeness of type lookup) Assume ∃α.Σ0 explicit. 1. If Γ ` Σ ≤ Σ0 [τ/α] and α ∈ α, then lookupα (Σ, Σ0 ) ↑ α[τ/α]. 2. If Γ ` Σ ≤ ∃α.Σ0 ↑ τ, then lookupα (Σ, Σ0 ) ↑ τ. Proof Explicitness of ∃α.Σ0 implies α rooted in Σ0 , which in turn implies α rooted in Σ0 . Part 1 is then proved by simple induction on the derivation of α rooted in Σ0 . Part 2 follows as a straightforward corollary. Note that this proof relies on the ability of the lookup algorithm to non-deterministically pick the root at the same path that was used in the respective derivation of α rooted in Σ0 . Combined with Uniqueness we know that any other path—and thus a deterministic choice—would work as well. Which gives us: Corollary 5.8 (Decidability of matching) 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer Assume that Γ is valid and well-formed, and Γ ` τ ≤ τ 0 f is decidable for types wellformed under Γ. 
If Σ valid and Ξ explicit, and both are well-formed under Γ, then Γ ` Σ ≤ f is decidable (and does not actually require checking well-formedness of types). Ξ↑τ This result follows directly, because subtyping and matching is defined by induction on the structure of the semantic signatures, and this structure remains fixed under type substitution, as performed in rules U- MATCH and U- FUNCT. (We don’t need to check the well-formedness of τ in U- MATCH because via Lemma 5.4, it is a consequence of looking up the types in the well-formed signature Σ.) From there, decidability of elaboration follows because, up to matching, elaboration is syntax-directed: Corollary 5.9 (Decidability of elaboration) Under valid and well-formed Γ, provided we can (simultaneously) show that core elaboration is decidable, all judgments of module elaboration are decidable as well. 5.3 Declarative properties of signature matching Finally, we want to show that signature matching has the declarative properties that you would expect from a subtype relation, namely that it is a preorder. These properties are not actually relevant for soundness or decidability of the basic language, but they provide a sanity check that the language we are defining actually makes sense. They are also relevant to our translation of modules as first-class values (Section 6), and for the meta-theory of applicative functors (Section 9). One complication in stating the following properties is that subtyping is defined in terms of the core language subtyping judgment Γ ` τ ≤ τ 0 e. Most of the properties only hold if we assume that the analogous property can be shown for that judgment. To avoid clumsy repetition, we leave this assumption implicit in the theorem statements. First, we need a couple of technical lemmas stating that subtyping is stable under weakening and substitution: Lemma 5.10 (Subtyping under Weakening) Let Γ0 ⊇ Γ and Γ0 ` . 1. If Γ ` Ξ ≤ Ξ0 f , then Γ0 ` Ξ ≤ Ξ0 f. 2. 
If Γ ` Σ ≤ Ξ ↑ τ f , then Γ0 ` Σ ≤ Ξ ↑ τ (Moreover, the derivations have the same size, up to core language judgments.) Lemma 5.11 (Subtyping under substitution) Let Γ ` τ : κα . 1. If Γ, α ` Ξ ≤ Ξ0 f , then Γ ` Ξ[τ/α] ≤ Ξ0 [τ/α] f [τ/α]. 0 2. If Γ, α ` Σ ≤ Ξ ↑ τ f , then Γ ` Σ[τ/α] ≤ Ξ[τ/α] ↑ τ 0 [τ/α] f [τ/α]. (Moreover, the derivations have the same size, up to core language judgments.) Now for the actual theorems: Theorem 5.12 (Reflexivity of subtyping and matching) 23 August 2014 F-ing modules 1. If Γ ` Ξ : Ω, then Γ ` Ξ ≤ Ξ f. 2. If Γ, α ` Σ : Ω, then Γ, α ` Σ ≤ ∃α.Σ ↑ α Proof By simultaneous induction on the structure of Ξ and Σ, respectively. Theorem 5.13 (Transitivity of subtyping and matching) 1. If Γ ` Ξ : Ω and Γ ` Ξ0 : Ω and Γ ` Ξ00 : Ω and Γ ` Ξ ≤ Ξ0 f 0 and Γ ` Ξ0 ≤ Ξ00 00 then Γ ` Ξ ≤ Ξ f. 2. If Γ ` Σ : Ω and Γ ` ∃α 0 .Σ0 : Ω and Γ ` ∃α 00 .Σ00 : Ω, and Γ ` Σ ≤ ∃α 0 .Σ0 ↑ τ 0 f 00 , then Γ ` Σ ≤ ∃α 00 .Σ00 ↑ τ f. and Γ, α 0 ` Σ0 ≤ ∃α 00 .Σ00 ↑ τ 00 f 00 , f0 Proof Since matching is syntax-directed, the proofs are a relatively straightforward simultaneous induction on the cumulative size of the subtyping/matching derivations (up to core language rules). In part (2), we need to apply the above substitution lemma. A further property one might expect from a subtyping relation is antisymmetry, i.e., if Ξ ≤ Ξ0 and Ξ0 ≤ Ξ (which we will abbreviate as Ξ ≤≥ Ξ0 ), then Ξ = Ξ0 . This does not hold directly in our system, because the ordering of quantified variables might differ. We defer discussion of antisymmetry to the next section, where we will prove it in a slight variation. 6 Modules as first-class values ML modules exhibit a strict stratification between module and core language, turning modules into second-class entities. Consequently, the kinds of computations that are possible on the module level are quite restricted. Extending the module system to make modules firstclass leads to undecidable typechecking (Lillibridge, 1997). 
However, it is straightforward to allow modules to be used as first-class core values after explicit injection into a core type of packaged modules (Russo, 2000). In fact, in our setting, the extension is almost trivial.

Syntax Figure 20 summarizes the syntax added to the external language. We add package types of the form pack S to the core language. These are inhabited by packaged modules of signature S. Correspondingly, there is a core language expression form pack M:S that produces values of this type. To unpack such a module, the inverse form unpack E:S is introduced as an additional module expression. It expects E to be a package of type pack S and extracts the constituent module of signature S. (This is more liberal than the closed-scope open expression of Russo (2000).) Why all the signature annotations? To avoid running into the same problems as caused by first-class modules, we do not assume any form of subtyping on package types (even if the core language had subtyping). That is, package types are only compatible if they consist of equivalent signatures. The type annotation for pack ensures that packaged modules still have principal types under these circumstances, so that core type checking is not compromised. For unpack, the annotation determines the type of E — which is necessary if we want to support ML-style type inference in the core language (but could be omitted otherwise).

(types)        T ::= . . . | pack S
(expressions)  E ::= . . . | pack M:S
(modules)      M ::= . . . | unpack E:S

Fig. 20. Extension with modules as first-class values

Elaboration Figure 21 gives the corresponding elaboration rules. Let us ignore the use of signature normalization norm(Ξ) in these rules for a minute and think of it as the identity function (which, morally, it is). Then a module M and its packaged version have essentially the same Fω representation, as a term of existential type.
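Before turning to the elaboration rules in detail, note that pack and unpack correspond quite closely to OCaml's first-class modules, which can serve as a rough illustration of the intended surface behaviour (this shows OCaml's semantics, not the paper's elaboration):

```ocaml
(* First-class modules in OCaml: (module M : S) plays the role of pack M:S,
   and (val e : S) the role of unpack e:S. *)
module type T = sig type t val v : t val show : t -> string end

(* pack M:S — inject a module into a core value of package type (module T) *)
let p : (module T) =
  (module struct type t = int let v = 6 let show = string_of_int end : T)

(* unpack E:S — extract the module again; its type component t is abstract *)
module M = (val p : T)
let s = M.show M.v   (* "6" *)
```

As in the paper, OCaml's package types carry an explicit signature annotation and admit no subtyping of their own.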
Consequently, elaboration becomes almost trivial. A package type simply elaborates to the very existential type that represents the constituent signature. Packing has to check that the module’s signature actually matches the annotation and coerce it accordingly. Unpacking is a real no-op: there is no subtyping on package types, so the type of E has to coincide exactly with the annotated signature. No coercion is necessary. Signature normalization So what is the business with normalization? Unfortunately, were we to just use an unadulterated signature to directly represent its corresponding package type, the typing of packaged modules would become overly restrictive. Consider the following example: signature A = {type t; type u} signature B = {type u; type t} val f = fun p : (pack A) ⇒ . . . val g = fun p : (pack B) ⇒ f p Intuitively, the signatures A and B are equivalent, and in fact, their semantic representations are mutual subtypes. But these representations will not actually be equivalent System Fω types—A elaborates to ∃α1 α2 .{t : [= α1 : Ω], u : [= α2 : Ω]} and B to ∃α2 α1 .{t : [= α1 : Ω], u : [= α2 : Ω]} according to our rules (cf. Figure 11). In the module language this is no problem: whenever we have to check a signature against another, we are using coercive matching, which is oblivious to the internal ordering of quantifiers. But in the core language no signature matching is performed; package types really have to be equivalent Fω types in order to be compatible. In that case, the order matters. So the definition of g above would not type check. To compensate, our elaboration must ensure that two package types pack S1 and pack S2 translate to equivalent Fω types whenever S1 and S2 are mutual subtypes. Toward this end, we employ the normalization function defined in Figure 22. All this function does is put the quantifiers of a semantic signature into a canonical order. 
For example, for a signature ∃α.Σ, normalization will sort the variables α according to their (first) appearance as a root in a left-to-right depth-first traversal of Σ. In order to make this well-defined, we impose a fixed but arbitrary total ordering on the set of labels l, which we extend to a lexicographical order on lists l of labels. Further, we assume a meta-function sort≤ which sorts its argument vector according to the given (total) order ≤. We instantiate it with an ordering α1 ≤Σ α2 on type variables (also defined in Figure 22) according to their “first” occurrence as a root in Σ—expressed by reference to the “(at l)” part of the rootedness judgment. Note that normalization is defined only for explicit signatures (Section 5.2), where every variable is rooted. However, that is fine because we only need to normalize the representations of signatures appearing as annotations on pack or unpack.

23 August 2014 F-ing modules

Types  Γ ⊢ T : κ

  Γ ⊢ S ⇝ Ξ
  ───────────────────────── T-PACK
  Γ ⊢ pack S : Ω ⇝ norm(Ξ)

Expressions  Γ ⊢ E : τ ⇝ e

  Γ ⊢ M : Ξ′ ⇝ e    Γ ⊢ S ⇝ Ξ    Γ ⊢ Ξ′ ≤ norm(Ξ) ⇝ f
  ──────────────────────────────────────────────────── E-PACK
  Γ ⊢ pack M:S : norm(Ξ) ⇝ f e

  Γ ⊢ S ⇝ Ξ    Γ ⊢ E : norm(Ξ) ⇝ e
  ───────────────────────────────── M-UNPACK
  Γ ⊢ unpack E:S : norm(Ξ) ⇝ e

Fig. 21. Elaboration of modules as first-class values

In the base case of atomic value signatures [τ], we assume that a similar normalization function normcore(τ) exists for normalizing core-level types according to core-level subtyping Γ ⊢ τ ≤ τ′. (For instance, for ML this core type normalization would canonicalize the order of quantified type variables in polymorphic types.) It is not difficult to show the following properties:

Lemma 6.1 (Signature normalization)
Assume fv(normcore(τ)) = fv(τ) and normcore(τ′[τ/α]) = normcore(τ′)[τ/α]. Then:
1. fv(norm(Ξ)) = fv(Ξ)
2. norm(Ξ[τ/α]) = norm(Ξ)[τ/α].
3. If Ξ explicit, then norm(Ξ) explicit.
4. If Γ ⊢ Ξ : Ω, then Γ ⊢ norm(Ξ) : Ω.
5. If Ξ explicit, then Γ ⊢ Ξ ≤≥ norm(Ξ).

The main result regarding normalization, then, is a form of anti-symmetry for subtyping.
But first, a technical lemma that we need for the proof. It effectively says that two abstract signatures mutually matching each other quantify, up to reordering and renaming, the same abstract type variables. Lemma 6.2 (Mutual matching) Suppose α rooted in Σ and α 0 rooted in Σ0 . Moreover, α ∩ fv(τ) = α 0 ∩ fv(τ 0 ) = 0. / If Γ, α ` Σ ≤ Σ0 [τ 0 /α 0 ] and inversely, Γ, α 0 ` Σ0 ≤ Σ[τ/α], then [τ/α] = [τ 0 /α 0 ]−1 , i.e., |α| = |α 0 |, and there is a reordering α 00 of α 0 , and a corresponding reordering τ 00 of τ 0 , such that τ = α 00 and τ 00 = α. Proof For every α 0 ∈ α 0 , we can show by induction on its rootedness derivation that there are atomic type signatures with Γ, α ` [= τ0 : κ] ≤ [= α 0 [τ 0 /α 0 ] : κ], and conversely, Γ, α 0 ` [= α 0 : κ] ≤ [= τ0 [τ/α] : κ]. By inverting those subtypings, τ0 = α 0 [τ 0 /α 0 ], and at the same time α 0 = τ0 [τ/α]. That is, α 0 = α 0 [τ 0 /α 0 ][τ/α]. Since α 0 ∈ α 0 , there is a corresponding 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer norm([τ]) norm([= τ : κ]) norm([= Ξ]) norm({l : Σ}) norm(∀α.Σ → Ξ) norm(∃α.Σ) = = = = = = [normcore (τ)] [= τ : κ] [= norm(Ξ)] {l : norm(Σ)} ∀α 0 . norm(Σ) → norm(Ξ) where α 0 = sort≤norm(Σ) (α) ∃α 0 . norm(Σ) where α 0 = sort≤norm(Σ) (α) α1 ≤Σ α2 ⇔ min{l | α1 rooted in Σ (at l)} ≤ min{l | α2 rooted in Σ (at l)} Fig. 22. Signature normalization τ 0 ∈ τ 0 , such that α 0 = τ 0 [τ/α]. Because τ 0 6= α 0 according to the assumptions about fv(τ 0 ), there has to be an α ∈ α, such that τ 0 = α and α[τ/α] = α 0 . We can prove the same for every other α 0 ∈ α 0 . Consequently, because all α 0 are distinct, all τ 0 have to be distinct, too, and thus |α| ≥ |α 0 |. By symmetry, i.e., exchanging roles and repeating the argument, we obtain that both substitutions have the same cardinality and are mutual inverses. Theorem 6.3 (Anti-symmetry of subtyping up to normalization) Let Γ ` Ξ : Ω explicit and Γ ` Ξ0 : Ω explicit. 
Furthermore, assume that if Γ ` τ : Ω and Γ ` τ 0 : Ω and Γ ` τ ≤≥ τ 0 , then normcore (τ) = normcore (τ 0 ). Then, if both Γ ` Ξ ≤ Ξ0 and Γ ` Ξ0 ≤ Ξ, it holds that norm(Ξ) = norm(Ξ0 ). Proof By induction on the (size of the) derivations. In the cases of rules U- ABS and U- FUNCT, invert the matching premise and apply the previous lemma to reveal that the quantified variables are equivalent up to reordering (and α-renaming). Hence, we can assume (after α-renaming) that both inner signatures are well-formed under the same extension of Γ, and apply the induction hypothesis to know that their normalizations are equal. Since sorting of the variables is independent of the original quantifier order as well, it also produces the same result for both sides. By normalizing semantic signatures in all places where they are used as package types, we hence establish the desired property that the intuitive notion of signature equivalence coincides with type equivalence. By applying the coercion f in the rule for pack, we also ensure that the representation of the module itself is normalized accordingly. Soundness The package semantics is so simple that soundness is an entirely straightforward property. Theorem 6.4 (Soundness of elaboration with packages) Theorem 5.1 still holds with the additional rules from Figure 21. Proof By simultaneous induction on derivations. The existing cases are all proved as before; the new ones are straightforward given Lemma 6.1. Our decidability result (Corollary 5.9) is not affected by the addition of modules as firstclass values, because it only hinged on the decidability of signature matching. 23 August 2014 F-ing modules 6.1 A note on first-class modules Given that our elaboration of modules as first-class values does not actually do much, the reader may be puzzled why it is allegedly so much harder to go the whole way and make modules truly first-class. Can’t we just merge the module and core levels into one unified language? 
For some constructs, such as conditionals, this would probably require type annotations to maintain principal types, and ML-style type inference certainly would not work anymore. But those are limitations that other languages with subtyping (especially object-oriented ones) have always been comfortable with. In the ML module literature, however, it has been frequently claimed that first-class modules result in undecidable type checking (Lillibridge, 1997), so surely there must be more fundamental problems. What, specifically, would break in the F-ing approach? A move to first-class modules means collapsing module and term language, as well as signature and type language. Because types can be denoted by type variables, the latter would imply that signatures can then also be denoted by type variables. Our elaboration, on the other hand, is dependent on one fundamental property: for any signature occurring in the rules, the number of abstract types it declares—i.e., the number of quantifiers—is known statically and stable under substitution. If this were not the case, then we could not perform the implicit lifting (or “monadic” binding) of existentials that is so central to our approach. Clearly, if we allowed for type variables as signatures, it would no longer work. Moreover, as Lillibridge (1997) showed, we would lose decidability of subtyping. Looking at our subtyping rules, they substitute type variables along the way. With type variables possibly representing signatures, substitution could change the structure of the signatures we are looking at. Consequently, the subtyping rules would no longer describe an algorithm that is inductive on the structure of signatures, and (backwards) application of the rules might indeed diverge (see Lillibridge (1997) for an example). That is, the argument we made regarding Corollary 5.8 (Decidability of matching) would no longer hold. 
The sort of “predicativity” restriction that results from separating types and signatures (i.e., signatures can only abstract over types, not other signatures) is thus crucial to maintaining decidability of typechecking. It is the real essence of the core/module language stratification in ML. Without it, the F-ing approach would not work—nor are we aware of any other decidable type system for ML-style modules without a similar limitation. The same problems would arise if we were to add abstract signature declarations of the form signature X to the language. Indeed, it is the presence of this additional feature that tips the scales and renders OCaml’s module type checking undecidable (Rossberg, 1999). 7 Applicative functors and static purity The semantics for functors that we have presented so far follows Standard ML, in that functors are generative: if a functor body defines any abstract types, then those types are effectively “generated” anew each time the functor is applied. OCaml employs an alternative, so-called applicative semantics for functors, by which a functor will return equivalent types whenever it is applied to the same argument. For example, consider the following use of the Set functor (cf. Figure 3): 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer val p1 = pack {type t = int; val v = 6} : {type t; val v : t} val p2 = pack {type t = bool; val v = true} : {type t; val v : t} module Flip = fun X : {} ⇒ unpack (if random() then p1 else p2 ) : {type t; val v : t} Fig. 23. Example: a statically impure functor module IntOrd = {type t = int; val eq = Int.eq; val less = Int.less} module Set1 = Set IntOrd module Set2 = Set IntOrd val s = Set1 .add (7, Set2 .empty) The last line in this example does not typecheck under generative semantics, because each application of Set yields a “fresh” set type, such that Set1 .set and Set2 .set differ. 
Under applicative semantics, however, the example would typecheck, because the two structures are created by equivalent module applications. The applicative functor semantics enables the typechecker to recognize that abstract data types generated in different parts of a program are in fact the same type. This is particularly useful when working with functors that implement generic data structures (e.g., sets), but it also supports a more flexible treatment of higher-order functors. For more details about these motivating applications, see Leroy (1995). Unfortunately, applicative functor semantics is also significantly subtler than generative semantics, and much harder to get right. In particular, there are two major problems: Type safety: For a functor to be safely given an applicative semantics, it must at a minimum satisfy the property that the type components in its body are guaranteed to be implemented in the same way every time the functor is applied to the same argument. In the presence of modules as first-class values (Section 6), this property is not universally satisfied. For example, consider the functor Flip in Figure 23. The first time this functor is applied, it may return a module whose type component t is implemented internally as int, whereas the second time t may be implemented as bool. It is thus utterly unsound (i.e., breaks type safety) to give a functor like Flip an applicative semantics. Abstraction safety: Even if the type components of a functor are implemented in the same way every time it is applied, treating the functor as applicative may nevertheless constitute a violation of data abstraction. That is, for some abstract data types implemented by a functor, applicative semantics breaks the ability to establish representation invariants locally. We will discuss this problem in more detail and see examples in Section 8. 
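OCaml's functors are applicative by default, so the type sharing described here can be observed directly. The following sketch uses a minimal stand-in `Make` for the paper's Set functor; the signature and all names are our own, not the paper's:

```ocaml
(* OCaml functors are applicative: two applications of Make to the same
   module path yield equal abstract types. *)
module type ORD = sig type t val less : t -> t -> bool end

module Make (X : ORD) : sig
  type set
  val empty : set
  val add : X.t -> set -> set
  val size : set -> int
end = struct
  type set = X.t list
  let add x s =
    (* insert unless an equivalent element (neither less nor greater) exists *)
    if List.exists (fun y -> not (X.less x y) && not (X.less y x)) s
    then s else x :: s
  let empty = []
  let size = List.length
end

module IntOrd = struct type t = int let less = ( < ) end
module Set1 = Make (IntOrd)
module Set2 = Make (IntOrd)

(* Typechecks only because Set1.set = Set2.set under applicative semantics *)
let s = Set1.add 7 Set2.empty
let n = Set1.size s
```

Applying `Make` to two distinct structures, even with identical bodies, would make the last two lines ill-typed, since type equality follows module paths.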
Concerning the first of these two problems, both Moscow ML and (more recently) OCaml provide packaged modules and applicative functors, and circumvent the soundness problem only by imposing severe (and rather unsatisfactory) restrictions on the unpacking construct, namely prohibiting its use within functor bodies. In this section, we focus on the first problem and show how to address it properly within the F-ing modules framework. The second problem will be explored in Section 8. 7.1 Understanding applicativity vs. generativity in terms of purity For the purpose of ensuring type safety, the key thing is to ensure that we only project type components out of module expressions whose type components are statically 23 August 2014 F-ing modules . . . | (X:S) ⇒ S Fig. 24. Extending the syntax of the module language with applicative functor signatures determined. Following Dreyer (2005), we refer to such expressions as statically pure, which for the remainder of this section we will just shorten to pure. (We will consider the role of dynamic purity in Section 8.) In our module language, the expression that introduces static impurity is the unpack E:S construct: the type components of the unpacked module depend essentially on the term E, a term which may have computational effects that lead it to produce values with different type components every time it is evaluated. If an unpacked module appears in the body of a functor, the functor will encapsulate the impurity. Thus, we need to distinguish between pure functors and impure functors. And it is precisely the pure ones that may behave applicatively, while the impure ones have to behave generatively. Hence, from here on, when talking about functors, we will use “applicative” interchangeably with “pure”, and “generative” interchangeably with “impure”. (In fact, the correspondence is so natural and intuitive that we are tempted to retire the “applicative” vs. “generative” terminology altogether. 
For historic reasons, however, we will continue to use the traditional terms in the remainder of this article.) One important point of note: in the case where E is a value (or more generally, free of effects), it would seem that there is nothing unsafe about projecting type components from unpack E:S, since each unpacking will produce modules with the same underlying type components. The trouble with permitting unpack E:S to be treated as statically pure—even in this case—is that, while its type components are well-determined, they are not statically well-determined. In the parlance of Harper, Mitchell & Moggi (1990), unpack E:S does not obey phase separation because the identity of its type components may depend on the dynamic instantiation of the free (term) variables of E. As a result, supporting projection from unpack E:S would require full-blown value-dependent types, which we would like to avoid for a variety of pragmatic reasons. The F-ing modules approach, by virtue of its interpretation into the non-dependently-typed Fω , has the benefit of providing automatic enforcement of phase separation, and thus prohibits projection from unpack E:S. 7.2 Extending the language In order to distinguish between pure (a.k.a. applicative) and impure (a.k.a. generative) when specifying a functor—e.g., in a higher-order setting—we extend the syntax of the external language of signatures with a new form of functor signature, shown in Figure 24. While the original form retains its meaning for specifying impure functors, the new one specifies pure ones. For example, the (pure) Set functor matches the pure functor signature (X : ORD) ⇒ SET, while the (impure) Flip functor will only match the impure signature (X : {}) → {type t; val v : t}. That said, Set will also continue to match the impure signature (X : ORD) → SET, because pure (applicative) functor signatures are treated as subtypes of impure (generative) ones. 
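OCaml enforces essentially this purity distinction in its surface syntax: a functor whose body unpacks a first-class module with abstract types must be declared generative, which is marked by a `()` parameter. A sketch of a Flip-like functor follows, with a deterministic flag standing in for random() so that the behaviour is testable; an applicative version of Flip would be rejected by the compiler because its body creates fresh types:

```ocaml
module type S = sig type t val v : t val show : t -> string end

let p1 : (module S) =
  (module struct type t = int let v = 6 let show = string_of_int end : S)
let p2 : (module S) =
  (module struct type t = bool let v = true let show = string_of_bool end : S)

(* The trailing () parameter makes Flip generative: each application
   generates a fresh abstract type t, so no type can be projected from
   an application path. *)
module Flip (C : sig val flag : bool end) () : S =
  (val (if C.flag then p1 else p2) : S)

module F1 = Flip (struct let flag = true end) ()
module F2 = Flip (struct let flag = false end) ()
let r = F1.show F1.v ^ "/" ^ F2.show F2.v
```

Here F1.t and F2.t are distinct, incomparable abstract types, exactly the behaviour that static impurity demands.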
One defining feature of applicative functors is the ability to project types from module paths containing functor applications. For example, given the familiar pure Set functor, 23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer (Set IntOrd).set should be a valid type expression, because every application of Set returns the same type. Since our syntax of paths P has been maximally general from the outset, it readily allows such types to be written. In fact, we will see shortly that the existing semantics for paths does not need to change much in order to encompass functor applications. 7.3 Elaboration The addition of applicative functors, along with the attendant tracking of purity, requires some significant changes to elaboration. We will walk through those changes starting with the simple parts. Semantic signatures The main difference between a generative and an applicative functor is the point at which the abstract type components in their bodies get created, and this difference is reflected quite clearly in the placement of existential quantifiers in their semantic signatures. A generative functor has an Fω type of the form ∀α 1 .Σ1 → ∃α 2 .Σ2 . Applying such a functor produces an existential package, which must be explicitly unpacked in order to get access to the type components of the package; however, due to the closed-scope nature of existential unpacking, there is no way to associate those type components with the existential package (and thus the generative functor) itself. In contrast, following Russo (1998), we will describe applicative functors with Fω types of the form ∃α 2 .∀α 1 .Σ1 → Σ2 . Such signatures indicate that the existential package is constructed only once, when the functor is defined, not every time it is applied, thus enabling the abstract types α 2 to be associated with the functor itself. The return type of an applicative functor is always a concrete signature Σ2 , with no local existential variables. 
Consequently, the introduction of applicative functors does not require any significant change to our definition of semantic signatures—our existing notion of abstract signature Ξ already subsumes the kind of quantification that expresses an applicative functor! We merely extend functor signatures with a simple effect annotation. As defined in Figure 25, an effect ϕ can either be pure (P) or impure (I). These form a trivial two-point lattice with P < I, and there is a straightforward definition of join (∨) on effect annotations (we won't need meet). To encode effect annotations in our Fω representation of functors, we assume that there are two distinct record labels lP and lI. The important point, though, is that a pure functor type may only have a concrete result signature Σ, which is why we give it as a separate production in the syntax of Σ in Figure 25. Nevertheless, we will often write ∀α.Σ →ϕ Ξ to range over both kinds of functor signature, implicitly understanding that Ξ has to be a concrete Σ' when ϕ = P.

  (effects)               ϕ ::= P | I
  (concrete signatures)   Σ ::= ∀α.Σ →I Ξ | ∀α.Σ →P Σ | . . .

  Notation:
    ϕ ∨ ϕ := ϕ        I ∨ P := P ∨ I := I

  Abbreviations:
  (types)        τ1 →ϕ τ2   :=  τ1 → {lϕ : τ2}
  (expressions)  λϕ x:τ.e   :=  λx:τ.{lϕ = e}
                 (e1 e2)ϕ   :=  (e1 e2).lϕ

Fig. 25. Semantic signatures for applicative functors

Signature elaboration

Figure 26 shows the new elaboration rules for dealing with functor signatures (we have highlighted the differences from the original rules from Figure 11). The rule S-FUNCT-I for impure functor signatures leaves the original rule S-FUNCT almost unchanged, except for adding the effect annotation I on the signature in the conclusion. In order to match the description of applicative functor signatures we just gave, the new rule S-FUNCT-P for applicative functors must produce a signature where all existential quantifiers are "lifted" out of the functor type. It does so by replacing the original α2 inferred for the result signature with fresh α'2 that are quantified outside the functor signature.

  Γ ⊢ S1 ⇝ ∃α1.Σ1    Γ, α1, X:Σ1 ⊢ S2 ⇝ ∃α2.Σ2
  ─────────────────────────────────────────────── (S-FUNCT-I)
  Γ ⊢ (X:S1) → S2 ⇝ ∀α1.Σ1 →I ∃α2.Σ2

  Γ ⊢ S1 ⇝ ∃α1.Σ1    Γ, α1, X:Σ1 ⊢ S2 ⇝ ∃α2.Σ2    κα'2 = κα1 → κα2
  ─────────────────────────────────────────────── (S-FUNCT-P)
  Γ ⊢ (X:S1) ⇒ S2 ⇝ ∃α'2.∀α1.Σ1 →P Σ2[α'2 α1/α2]

Subtyping  Γ ⊢ Ξ ≤ Ξ' ⇝ f

  Γ, α' ⊢ Σ' ≤ ∃α.Σ ↑ τ ⇝ f1    Γ, α' ⊢ Ξ[τ/α] ≤ Ξ' ⇝ f2    ϕ ≤ ϕ'
  ─────────────────────────────────────────────── (U-FUNCT)
  Γ ⊢ (∀α.Σ →ϕ Ξ) ≤ (∀α'.Σ' →ϕ' Ξ') ⇝ λf:(∀α.Σ →ϕ Ξ).λα'.λϕ' x:Σ'. f2 (f τ (f1 x))ϕ

Subeffects  ϕ ≤ ϕ'

  ───── (F-REFL)        ───── (F-SUB)
  ϕ ≤ ϕ                 P ≤ I

Fig. 26. New rules for applicative functor signatures

But abstract types defined inside a functor might have functional dependencies on the functor's parameters. The trick, discovered by Biswas (1995) and Russo (1998), is to capture such potential dependencies by skolemizing the lifted variables over the universally quantified types from the functor's parameter. That is, we raise the kind of each of the α'2 so as to generalize it over all the type parameters α1; correspondingly, all occurrences of an α ∈ α2 are substituted by the application of the corresponding α' ∈ α'2 to the actual parameter vector α1. (At this point, clearly, we require not just System F, but the full power of Fω, to model our semantics.) To better understand what's going on here, let us revisit the signature of the Set functor (cf. Figure 12), and its elaboration into a semantic signature. Figure 27 shows how the analogous applicative functor signature will be represented semantically. The new elaboration rule places the existential quantifier for β outside the functor, and it raises the original kind Ω of β to Ω → Ω, in order to reflect the functional dependency on α. Everywhere we originally had a β, we now find β α in the result.

  (Elem : ORD) ⇒ (SET where type t = Elem.t)
  ⇝ ∃β:(Ω → Ω). ∀α:Ω.
      {t : [= α : Ω], eq : [α × α → bool], less : [α × α → bool]}
      →P {set : [= β α : Ω], elem : [= α : Ω], empty : [β α],
          add : [α × β α → β α], mem : [α × β α → bool]}

Fig. 27. Example: applicative signature elaboration

Where such a functor is later applied, β remains as is; only α gets substituted by the concrete argument type. If that is, say, int, then the resulting structure signature will equate the type set to β int. Any further application of the functor to arguments with a type component t = int will yield the same type set = β int.

Subtyping

Because the definition of semantic signatures barely changed, only a minor extension is required to define functor subtyping, namely to allow pure functor types to be subtypes of impure ones. We do not need to change the definition of matching at all. Abstract types lifted from a functor body act as if they were abstract type constructors defined outside the functor, and the original matching rule (cf. Figure 13) handles them just fine. (However, an algorithmic implementation of the rules will require non-trivial extensions to the type lookup algorithm, as we will discuss in Section 9.2.) In other words, the correct subtyping relation between applicative and generative functor signatures falls out almost for free. The F-ing method provides an immediate explanation of such subtyping and why it is sound.

Modules

The rule M-SEAL defined in Section 4, when used with an applicative functor signature, allows one to introduce applicative functor types. But the circumstances are limited: the definition of matching requires that the sealed functor may not itself contain any non-trivial sealing, because a functor creating abstract types would be considered generative, i.e., impure, under the module elaboration rules from Section 4. Shao's system (Shao, 1999), which introduces applicative functor signatures solely through sealing, suffers from this limitation, a point we return to in Section 11.
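The skolemization step of rule S-FUNCT-P can be illustrated concretely on a toy representation of semantic types. The sketch below (our illustration, with made-up tuple constructors, not the paper's notation) performs the substitution that turns every occurrence of the lifted variable β into the application β α:

```python
# Toy semantic types as nested tuples:
#   ('var', name) | ('app', t1, t2) | ('arrow', t1, t2)
def skolemize(ty, beta, alphas):
    """Replace each occurrence of variable `beta` by beta applied to the
    parameter variables `alphas` (the substitution [beta alphas / beta]
    used in rule S-FUNCT-P)."""
    if ty == ('var', beta):
        result = ('var', beta)
        for a in alphas:
            result = ('app', result, ('var', a))
        return result
    if isinstance(ty, tuple):
        return (ty[0],) + tuple(
            skolemize(t, beta, alphas) if isinstance(t, tuple) else t
            for t in ty[1:])
    return ty

# Echoing the Set example: a result component mentioning beta becomes beta alpha.
body = ('arrow', ('var', 'alpha'), ('var', 'beta'))
lifted = skolemize(body, 'beta', ['alpha'])
# lifted == ('arrow', ('var', 'alpha'), ('app', ('var', 'beta'), ('var', 'alpha')))
```

Raising the kind of β from Ω to Ω → Ω happens alongside this substitution; together they record that the abstract type may depend functionally on the functor's type parameters.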
In contrast, the system we will present is designed to support sealing within applicative functors, a feature shared by all other accounts besides Shao's. That requires refining our module elaboration rules. While signatures for applicative functors are (relatively) easy to elaborate, modules require more extensive changes to their elaboration rules to account for applicativity and purity. Superficially, the only extension to the module elaboration judgment is the inclusion of an effect annotation ϕ, which specifies whether the module is deemed pure or not. However, the invariants associated with pure and impure module elaboration are quite different from each other, as we explain below. Figure 29 gives the modified rules (we have again highlighted the changes relative to the original rules, cf. Figure 14).

                 ∀(·).τ'       := τ'
                 ∀(Γ, α).τ'    := ∀Γ.∀α.τ'
                 ∀(Γ, x:τ).τ'  := ∀Γ.τ →P τ'

  (kinds)        (·) → κ       := κ
                 (Γ, α) → κ    := Γ → κα → κ
                 (Γ, x:τ) → κ  := Γ → κ

  (types)        λ(·).τ'       := τ'
                 λ(Γ, α).τ'    := λΓ.λα.τ'
                 λ(Γ, x:τ).τ'  := λΓ.τ'

  (expressions)  λ(·).e        := e
                 λ(Γ, α).e     := λΓ.λα.e
                 λ(Γ, x:τ).e   := λΓ.λP x:τ.e

                 τ'(·)         := τ'
                 τ'(Γ, α)      := τ' Γ α
                 τ'(Γ, x:τ)    := τ' Γ

                 e(·)          := e
                 e(Γ, α)       := e Γ α
                 e(Γ, x:τ)     := (e Γ x)P

                 ΓI := ·        ΓP := Γ

Fig. 28. Environment abstraction

Functors

We begin by explaining how we handle functors, since this motivates the form and associated invariants of the module elaboration judgment. We now have two rules: M-FUNCT-I, which yields a generative functor (as before) if the body M is impure, and M-FUNCT-P, which yields an applicative functor if M is pure. In both cases, the functor expression itself is pure, because it is a value form that suspends any effects of M. For applicative functors, we need to follow what we did for signatures, and implement ∃-lifting. The difficulty, though, is doing it in a way that still allows a compositional translation of sealing inside an applicative functor. What is the problem?
Consider the following example:

  fun (X : {type t}) ⇒ {type u = X.t × X.t} :> {type u}

If the body of this functor were impure (like the body of Flip from Figure 23), the impure functor rule M-FUNCT-I would delegate translation of the functor body to a subderivation, which, in this example, would yield a signature Ξ = ∃β.{u : [= β : Ω]} and some term e : Ξ. We would then λ-abstract e over the functor argument to produce a function of type ∀α.{t : [= α : Ω]} →I Ξ. Now, if we wanted to adapt this situation for pure functors by applying the same lifting trick we used for pure functor signatures, then we would have to somehow take e : Ξ and retroactively lift its hidden type components over α to derive a term of type ∃β' : Ω → Ω.∀α : Ω.{t : [= α : Ω]} →P {u : [= β' α : Ω]}. In general, such retroactive lifting is not possible. To avoid this dilemma, we employ a different trick: we design the translation of a pure module (which the body of an applicative functor must be) so that it consistently constructs an existential package with the necessary lifting already built in! In fact, for simplicity, the translation of a pure module abstracts over the entire environment Γ. More precisely, whereas the impure judgment Γ ⊢ M :I ∃α.Σ ⇝ e guarantees that Γ ⊢ e : ∃α.Σ, the pure judgment Γ ⊢ M :P ∃α.Σ ⇝ e instead guarantees that e is a closed term satisfying · ⊢ e : ∃α.∀Γ.Σ, where the notation ∀Γ.Σ is defined in Figure 28. This idea is borrowed from Shan (2004), who used a similar approach for a translation of the module calculus of Dreyer et al. (2003) into System Fω. The pure functor rule M-FUNCT-P then becomes fairly trivial: it just computes the translation of its body and returns that directly. This means the translation of the functor will not only abstract over the functor's parameters as required, but over the rest of the current environment Γ, too (because ∃α2.∀(Γ, α, X:Σ).Σ2 is just an alternative way of writing ∃α2.∀Γ.∀α.Σ →P Σ2). But that is fine, because the functor is itself a pure module, so according to the elaboration invariant for pure modules, it has to abstract over Γ anyway.

Modules  Γ ⊢ M :ϕ Ξ ⇝ e

  Γ ⊢ B :ϕ Ξ ⇝ e
  ─────────────────────────────── (M-STRUCT)
  Γ ⊢ {B} :ϕ Ξ ⇝ e

  Γ(X) = Σ
  ─────────────────────────────── (M-VAR)
  Γ ⊢ X :P Σ ⇝ λΓ.X

  Γ ⊢ M :ϕ ∃α.{lX : Σ, l : Σ'} ⇝ e
  ─────────────────────────────── (M-DOT)
  Γ ⊢ M.X :ϕ ∃α.Σ ⇝ unpack ⟨α, y⟩ = e in pack ⟨α, λΓϕ.(y Γϕ).lX⟩

  Γ ⊢ S ⇝ ∃α.Σ    Γ, α, X:Σ ⊢ M :I Ξ ⇝ e
  ─────────────────────────────── (M-FUNCT-I)
  Γ ⊢ fun X:S ⇒ M :P ∀α.Σ →I Ξ ⇝ λΓ.λα.λI X:Σ.e

  Γ ⊢ S ⇝ ∃α.Σ    Γ, α, X:Σ ⊢ M :P ∃α2.Σ2 ⇝ e
  ─────────────────────────────── (M-FUNCT-P)
  Γ ⊢ fun X:S ⇒ M :P ∃α2.∀α.Σ →P Σ2 ⇝ e

  Γ(X1) = ∀α.Σ1 →ϕ Ξ    Γ(X2) = Σ2    Γ ⊢ Σ2 ≤ ∃α.Σ1 ↑ τ ⇝ f
  ─────────────────────────────── (M-APP)
  Γ ⊢ X1 X2 :ϕ Ξ[τ/α] ⇝ λΓϕ.(X1 τ (f X2))ϕ

  Γ ⊢ S ⇝ ∃α.Σ    Γ(X) = Σ'    Γ ⊢ Σ' ≤ ∃α.Σ ↑ τ ⇝ f    κα' = Γ → κα
  ─────────────────────────────── (M-SEAL)
  Γ ⊢ X :> S :P ∃α'.Σ[α' Γ/α] ⇝ pack ⟨λΓ.τ, λΓ.f X⟩

  Γ ⊢ S ⇝ Ξ    Γ ⊢ E : norm(Ξ) ⇝ e
  ─────────────────────────────── (M-UNPACK)
  Γ ⊢ unpack E:S :I norm(Ξ) ⇝ e

Bindings  Γ ⊢ B :ϕ Ξ ⇝ e

  Γ ⊢ E : τ ⇝ e
  ─────────────────────────────── (B-VAL)
  Γ ⊢ val X=E :P {lX : [τ]} ⇝ λΓ.{lX = [e]}

  Γ ⊢ T : κ ⇝ τ
  ─────────────────────────────── (B-TYP)
  Γ ⊢ type X=T :P {lX : [= τ : κ]} ⇝ λΓ.{lX = [τ : κ]}

  Γ ⊢ M :ϕ ∃α.Σ ⇝ e    Σ not atomic
  ─────────────────────────────── (B-MOD)
  Γ ⊢ module X=M :ϕ ∃α.{lX : Σ} ⇝ unpack ⟨α, x⟩ = e in pack ⟨α, λΓϕ.{lX = x Γϕ}⟩

  Γ ⊢ S ⇝ Ξ
  ─────────────────────────────── (B-SIG)
  Γ ⊢ signature X=S :P {lX : [= Ξ]} ⇝ λΓ.{lX = [Ξ]}

  Γ ⊢ M :ϕ ∃α.{lX : Σ} ⇝ e
  ─────────────────────────────── (B-INCL)
  Γ ⊢ include M :ϕ ∃α.{lX : Σ} ⇝ e

  ─────────────────────────────── (B-EMT)
  Γ ⊢ ε :P {} ⇝ λΓ.{}

  Γ ⊢ B1 :ϕ1 ∃α1.{lX1 : Σ1} ⇝ e1    Γ, α1, X1:Σ1 ⊢ B2 :ϕ2 ∃α2.{lX2 : Σ2} ⇝ e2
  l'X1 = lX1 − lX2    l'X1 : Σ'1 ⊆ lX1 : Σ1
  ─────────────────────────────── (B-SEQ)
  Γ ⊢ B1;B2 :ϕ1∨ϕ2 ∃α1α2.{l'X1 : Σ'1, lX2 : Σ2} ⇝
      unpack ⟨α1, y1⟩ = e1 in
      unpack ⟨α2, y2⟩ = (let X1 = λΓϕ1∨ϕ2.(y1 Γϕ1).lX1 in e2) in
      pack ⟨α1α2, λΓϕ1∨ϕ2.
        let X1 = (y1 Γϕ1).lX1 in
        let X2 = (y2 (Γ, α1, X1:Σ1)ϕ2).lX2 in
        {l'X1 = X1, lX2 = X2}⟩

Fig. 29. New rules for applicative functors and modules

Paths  Γ ⊢ P : Σ ⇝ e

  Γ ⊢ P :ϕ ∃α.Σ ⇝ e    Γ ⊢ Σ : Ω
  ─────────────────────────────── (P-MOD)
  Γ ⊢ P : Σ ⇝ unpack ⟨α, x⟩ = e in x Γϕ

Packages  Γ ⊢ E : τ ⇝ e

  Γ ⊢ M :ϕ ∃α.Σ ⇝ e    Γ ⊢ S ⇝ Ξ    Γ ⊢ ∃α.Σ ≤ norm(Ξ) ⇝ f
  ─────────────────────────────── (E-PACK)
  Γ ⊢ pack M:S : norm(Ξ) ⇝ f (unpack ⟨α, x⟩ = e in pack ⟨α, x Γϕ⟩)

Fig. 30. New rules for applicative paths and packages
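The pure-elaboration invariant—a closed term abstracting over the entire environment Γ, with the existential package built outside that abstraction—has a simple closure-flavored reading. The following Python sketch (ours, not the paper's formalism) conveys it: the witness for the abstract type is created once, when the module is elaborated, while the term component is a closed function of Γ, so the same abstract type is shared under every instantiation of the environment:

```python
def elaborate_pure_module():
    """pack <witness, lambda Gamma. body> : exists b. forall Gamma. Sigma
    The witness (abstract type) is fixed at elaboration time; the term
    component is a *closed* function of the environment Gamma."""
    witness = object()                       # stands for the hidden type beta
    def body(env):                           # the lambda-Gamma abstraction
        return {"u": witness, "v": (env["x"], env["x"])}
    return (witness, body)

beta, body = elaborate_pure_module()
m1 = body({"x": 1})
m2 = body({"x": 2})
shared_witness = m1["u"] is m2["u"]          # one witness under any Gamma
```

This is why packing and λ-abstraction switch order relative to the generative translation: lifting is built in from the start instead of being applied retroactively.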
It turns out that the rule M-APP for functor application can remain largely unchanged—it can handle both kinds of functors. In both cases, the effect ϕ on the functor's type is unleashed and determines the effect of the application. Note that applicative application is always degenerate, with Ξ being some concrete signature Σ3, so that there are no existential quantifiers in the result to lift over.

Pure modules and bindings

The real "heavy lifting" (so to speak) happens in M-SEAL. It abstracts the witness types τ over all type variables from Γ, thereby lifting their kinds in a manner similar to what happens in the elaboration of applicative functor signatures (except that Γ generally contains more than just the functor's parameters). Similarly, the rule abstracts the term component over all of Γ, thereby constructing the desired functor representation inside the package. Both these abstractions together cause the rule to yield a lifted existential type, as desired for an applicative functor. But using a different elaboration invariant for pure modules has implications on the translation of other module constructs as well. In all places where the original, impure rules had to unpack and re-pack existential packages in the translated term, the pure ones also have to apply and re-abstract Γ (rules M-DOT, B-MOD, and B-SEQ). To avoid the need for a separate set of rules for pure and impure elaboration, we use the Γϕ notation defined in Figure 28 to make these steps conditional on the effect ϕ. Rules that return concrete signatures do not need to shuffle around Γ, but simply insert the expected abstraction (rules M-VAR, M-FUNCT-I, M-APP, B-VAL, B-TYP, B-SIG, B-EMT). Rule B-SEQ on the other hand is somewhat trickier, because it has to handle all possible combinations of effects ϕ1 and ϕ2.
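The effect bookkeeping that B-SEQ performs is just a join on the two-point lattice P < I. A minimal sketch (ours) of the lattice and of how a binding sequence combines effects:

```python
# The two-point effect lattice P < I and its join (the paper's phi1 v phi2).
P, I = "P", "I"

def join(e1, e2):
    """Least upper bound: a combination is impure if either side is."""
    return I if I in (e1, e2) else P

def sequence_effect(effects):
    """Effect of a binding sequence B1;...;Bn (cf. rule B-SEQ): the join
    of the effects of the individual bindings."""
    eff = P
    for e in effects:
        eff = join(eff, e)
    return eff
```

So a structure is translated as pure exactly when every one of its bindings is; a single impure binding (say, an unpack) taints the whole sequence.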
(The let-expression around e2 in this rule is actually redundant when ϕ2 = P—because e2 is a closed expression in that case—but we leave it alone for the sake of simplicity of the rule.) Interestingly, sealing is always pure according to the rules. That is because the syntax of our module language only permits sealing of module variables, which are values. When expanding the derived syntax for M :> S (Figure 2), however, for an M that is impure, the overall expression will be regarded impure as advertised, thanks to the rules M-DOT and B-SEQ that are needed to type the expansion.

Rule M-UNPACK is the only source of unconditional impurity. First of all, an unpacked expression must be considered impure if the expression being unpacked might compute to package values with different type components (as in the body of Flip). But second, even if the expression being unpacked is already a value, it is not possible to treat its unpacking as a pure module expression, because doing so would require us to be able to somehow project out its type components as type-level expressions. (This is necessary if we want to be able to lift the type components of the unpack over the context Γ.) If we were interpreting ML modules into a dependent type theory, this might be possible; however, as discussed in Section 7.1, given that we are interpreting into Fω, with packaged modules represented as existentials, there is no way to project out their abstract type components as type-level expressions, so we treat all unpacked expressions as impure. Figure 31 shows the translation of the Set functor as an applicative functor according to our rules. Compared to the elaboration previously given in Figure 15, the main difference is that packing and λ-abstractions have switched order, and that the existential witness type has been abstracted over α accordingly.
Moreover, the nested local let-bindings in the sequence rule have been replaced by applications of the functor parameters inside the abstraction. As before, the translation produces many administrative redexes that can be optimized via some fairly obvious partial evaluation scheme. Figure 32 shows the translated Set functor after eliminating all intermediate structures and functors this way, for easier comparison with the analogous generative implementation in Figure 16. Obviously, always abstracting over Γ in its entirety, as our rules do for pure modules, also leads to over-abstraction (although that is not visible in the example, where we assume the initial Γ to be empty). In particular, it would be sufficient to abstract only over the part of Γ that is bound by, or local to, the outermost applicative functor surrounding a pure module, if any. However, semantically the difference does not matter much. It is not difficult to refine the translation so that it avoids redundant abstractions, but the bureaucracy for tracking the necessary extra information would unnecessarily clutter the rules, so for presentational purposes we chose the simpler path. A real-world implementation can easily optimize the redundant abstractions by what amounts to (fairly straightforward) local partial reductions. We would also expect an implementation to present types in a more readable way to the user (e.g., as module paths), but such concerns are outside the scope of this article.

Paths and packages

Finally, Figure 30 shows the modified rules for paths and packages. They should not reveal any surprises at this point, because all that changes is the insertion of the right Γ-abstraction/application necessary to match the module rules. Importantly, the path rule now fully supports functor applications in type paths. For example, the type expression (Set IntOrd).set is well-formed when Set is an appropriate applicative functor.
This is simply a consequence of our semantic treatment of paths: when Set is bound to a functor with the signature given in Figure 27, its outer ∃β is separated in the environment (according to rule B-SEQ) and the module (Set IntOrd).set simply has the atomic signature [= β int : Ω]. Since this signature contains no existentials, it is trivially a legal path. Contrast that to the behavior under a generative signature for Set, like the one originally given in Figure 12. Under that typing, (Set IntOrd).set has the type ∃β.[= β : Ω], with a fresh local β that prevents it from type-checking as a path in rule P-MOD. The same applies to any other path to an abstract type defined inside a generative functor.

[Figure 31 (Example: applicative functor elaboration) shows the raw translation of Set, full of administrative redexes: the witness λα.list α is packed outside a λP-abstraction over α and Elem, whose body threads the intermediate structures y1 and y2 through explicit applications to α and Elem.]

  Set ⇝
  pack ⟨λα.list α,
    λα. λP Elem : {t : [= α : Ω], eq : [α × α → bool], less : [α × α → bool]}.
      f (let elem = [α : Ω] in
         let set = [list α : Ω] in
         let empty = [nil] in
         let add = [. . . Elem.eq . . . Elem.less . . .] in
         let mem = [. . . Elem.eq . . . Elem.less . . .] in
         {elem = elem, set = set, empty = empty, add = add, mem = mem})⟩
    : ∃β:(Ω→Ω). ∀α. {t : [= α : Ω], . . .}
        →P {set : [= β α : Ω], elem : [= α : Ω], empty : [β α], add : [. . .], mem : [. . .]}

Fig. 32. Example: applicative functor elaboration, simplified
Our semantics does, however, allow functor paths with applications of generative functors if they do not refer to such abstract types. For example, (Set IntOrd).elem yields signature ∃β.[= int : Ω], which can be used as a path—even in the basic system of Section 4! In the extended system presented in this section, we could easily rule out such corner cases by requiring P to be a pure module in rule P-MOD, but there is no real reason to do so.

8 Abstraction safety, dynamic purity, and sharing

The elaboration rules for applicative functors that we presented in the previous section are type-safe in the basic syntactic sense that they produce well-typed Fω terms and types, but they are not abstraction-safe. By "abstraction safety", we are referring to the ability to impose local representation invariants on the abstract types defined by a sealed module expression, and to reason locally about the implementation of the sealed module under the assumption that all enclosing program contexts will preserve the imposed invariants.9 The failure to provide abstraction safety is not a peculiar fault of our semantics: contrary to popular belief, none of the existing accounts of applicative functors in the literature (or in ML compilers) provide abstraction safety either (Harper et al., 1990; Leroy, 1995; Russo, 1998; Shao, 1999; Dreyer et al., 2003).

  signature NAME = {
    type name
    val new : unit → name
    val equal : name × name → bool
  }

  module Name = fun X : {} ⇒ {
    type name = int
    val counter = ref 0
    val new () = (counter := !counter + 1; !counter)
    val equal (x, y) = (x = y)
  } :> NAME

  module Empty = {}
  module Name1 = Name Empty
  module Name2 = Name Empty

Fig. 33. Problems with abstraction safety in applicative functors: dynamic impurity
The reason, in short, is that tracking only static purity of module expressions—as we have done in the previous section, and as other approaches have done before us—is not sufficient: it is important for the purpose of abstraction safety to track dynamic purity as well. In a similar vein, it is not sufficient to consider only static module equivalence—i.e., the equivalence of type components—to decide the equivalence of types resulting from pure functor applications: we also need to consider dynamic module equivalence, i.e., the equivalence of value components, as well. To see what the issue with abstraction safety is, let us turn to the illustrative set of examples in Figures 33 and 34. The first example, concerning the functor Name and its instantiations Name1 and Name2, demonstrates why we may want to require a functor that is statically pure, but not dynamically pure, to be treated as generative. The remaining examples, concerning various applications of the Set functor, show how ensuring abstraction safety can even be quite tricky when working with a functor that is dynamically pure, as long as we do not track dynamic module equivalence.

9 The term "abstraction-safe" (or "abstraction-secure") has appeared in the literature a number of times, but as far as we know without a clear formal definition. The informal description we have given here matches the use of the term in various papers by Sewell et al. (Leifer et al., 2003; Sewell et al., 2007). To make this precise, we would need to build a parametric model of the language and use it to establish interesting invariants for abstract data types. This is clearly beyond the scope of the present article and would in fact constitute new research, since as far as we know no one has yet attempted to build parametric models for full-fledged ML-style modules.
If anything, though, our F-ing semantics may help point the way forward in this regard, since we show how to understand modules in terms of System Fω, for which parametric models do exist (e.g., Atkey (2012)).

  module IntOrd  = {type t = int; val eq = Int.eq; val less = Int.less}
  module IntOrd' = IntOrd
  module Set0 = Set IntOrd
  module Set1 = Set IntOrd'
  module Set2 = Set {type t = int; val eq = Int.eq; val less = Int.less}
  module Set3 = Set {type t = int; val eq = Int.eq; val less = Int.greater}

  module F = fun X : {} ⇒
    {type t = int; val eq = Int.eq;
     val less = if random() then Int.less else Int.greater}

  module Set4 = Set (F Empty)
  module Set5 = Set (F Empty)

Fig. 34. Problems with abstraction safety in applicative functors: dynamic module inequivalence

First, consider the functor Name, which implements an ADT of fresh names. Every time Name is instantiated, it will return a module with its own abstract type name, along with its own private integer counter (of type ref int)—initially set to 0—which can be incremented to generate a fresh value of type name every time its new operation is invoked. In order to ensure that new produces a fresh name every time it is applied, it is crucial that each instantiation of Name have a distinct name type—i.e., that we treat Name as a generative functor. Otherwise, calling Name1.new might produce a name that Name2.new had already produced.10 However, since Name does not involve any uses of unpacking—i.e., it is statically pure—our semantics from Section 7 would consider it to be applicative, as would OCaml (since in OCaml all functors are applicative) and Moscow ML (in which, even if Name were declared as generative, it could be subsequently coerced to an applicative signature by eta-expansion, thus violating abstraction safety).
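The danger can be made concrete with a small simulation (ours, in Python rather than ML): each instantiation of Name owns a private counter, and names drawn from different instantiations collide numerically, so conflating their name types would let equal relate names from unrelated generators.

```python
def make_name_module():
    """Simulates one instantiation of the Name functor: a private counter
    plus new/equal operations over an abstract 'name' type."""
    counter = [0]
    def new():
        counter[0] += 1
        return counter[0]
    def equal(x, y):
        return x == y
    return {"new": new, "equal": equal}

name1 = make_name_module()
name2 = make_name_module()

a = name1["new"]()   # 1 -- fresh within Name1
b = name2["new"]()   # 1 -- fresh within Name2, numerically equal to a

# If Name1.name and Name2.name were the same type (the applicative reading),
# this cross-instance comparison would type-check and wrongly claim equality:
collision = name1["equal"](a, b)
```

Treating Name as generative rules out the last line at type-checking time, which is exactly the protection the type system is supposed to provide.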
In the case of our semantics from Section 7, one could induce Name to be considered generative by replacing the sealing in its body with a pack at NAME followed by an unpack, but this is a rather indirect approach, and it does not work in OCaml or Moscow ML due to their restrictions on the use of the unpack construct. Second, consider the set types defined by modules Set0 through Set5 in Figure 34. The set implementation is purely functional, so it may be more surprising to some readers that abstraction safety can still be a problem with this functor! The types Set0.set, Set1.set, and Set2.set should clearly be equivalent, since they are constructed by passing Set the exact same argument IntOrd, just written three different ways. To ensure abstraction safety, however, Set3.set should be considered distinct from the others: the argument passed to Set in the definition of Set3 provides a different ordering on integers (Int.greater), thus rendering the representation of Set3.set incompatible with the representation of sets ordered by Int.less. If we were to treat Set2.set and Set3.set as equivalent, the definition val s = Set2.add(1, Set3.add(2, Set2.add(3, Set2.empty))) would become well-typed.

10 One can, of course, engender use-site generativity by explicitly sealing each application of Name with the signature NAME. However, this is no substitute for true abstraction safety, since it demands disciplined use of sealing on the part of clients of the Name functor—it does not ensure that any local invariants on the abstract name type will be preserved under linking with arbitrary clients. For a more detailed semantic explanation of the importance of generativity in this example, see Ahmed, Dreyer & Rossberg (2009).
That would be disastrous, because it would yield a set value represented internally by the list [1,3,2], which violates the internal ordering invariants of both Set2 and Set3's list-based set representations. This would result in unpredictable behavior from any further interactions with Set2 and Set3's operations; for instance, Set2.mem(2, s) and Set3.mem(2, s) would both return false! As for Set4.set and Set5.set, it is important to distinguish them from each other (and from all the other set types), for the following reason. Depending on the result of a random coin flip, the expression F Empty used in the definition of Set4 and Set5 will evaluate to a module that is dynamically equivalent to one of the argument modules used in the definitions of Set2 and Set3. Consequently, each of the types Set4.set and Set5.set will end up dynamically being compatible with either Set2.set or Set3.set, but statically we have no way of knowing which will be equivalent to which! We must therefore conservatively insist that they are both fresh types, even though they are defined using the exact same module expression Set (F Empty).11 Getting abstraction-safe applicative behavior on these Set examples seems to be hard, as indeed all previous accounts of applicative functors are unsafe and/or overly conservative in one way or another. Assuming that the Set functor has been assigned an applicative signature, the type system of Section 7, as well as those of Moscow ML, Shao (1999), and Dreyer et al. (2003), all consider Set0 through Set5 to have equivalent set components. The reason is that they employ a "static" notion of module equivalence: they pretend that the meaning of abstract types created by a functor only depends on the types from the functor's parameters, while ignoring any dependency on parameter values. Consequently, they consider the type components of Set(M1) and Set(M2) to be equivalent so long as M1 and M2 have equivalent type components.
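The Set2/Set3 failure can be replayed concretely. The sketch below (ours, not the paper's) implements sets as strictly ordered lists, where mem does an early-exit scan that relies on the ordering invariant; inserting through mismatched orderings produces a list that both instances then misread:

```python
def make_set_module(less):
    """A sorted-list set parameterized by an ordering, like the Set functor."""
    def add(x, s):
        out, i = [], 0
        while i < len(s) and less(s[i], x):
            out.append(s[i]); i += 1
        if i < len(s) and s[i] == x:
            return s
        return out + [x] + s[i:]
    def mem(x, s):
        for y in s:
            if y == x:
                return True
            if less(x, y):   # early exit: relies on s being ordered by `less`
                return False
        return False
    return {"empty": [], "add": add, "mem": mem}

set2 = make_set_module(lambda a, b: a < b)   # ascending  (Int.less)
set3 = make_set_module(lambda a, b: a > b)   # descending (Int.greater)

# Mixing the two, as in  Set2.add(1, Set3.add(2, Set2.add(3, Set2.empty))):
s = set2["add"](1, set3["add"](2, set2["add"](3, set2["empty"])))
# s == [1, 3, 2]: ordered by neither instance's invariant, so both
# set2["mem"](2, s) and set3["mem"](2, s) return False even though 2 is in s.
```

The broken membership tests mirror the paper's observation exactly: the value 2 is present in the list, but each instance's early-exit scan concludes it cannot be there.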
As one can plainly see, though, this approach is demonstrably unsafe: since sets ordered one way are not compatible with sets ordered a different way, the semantics of the type component set in the body of the Set functor clearly depends on the value component less of the functor argument. A correct treatment of abstraction safety thus demands capturing the dependency of abstract types on entire modules, i.e., both type and value components—which is completely natural from the point of view of dependent type systems. OCaml is closest to this ideal: it only considers Set(M1) and Set(M2) to be equivalent if M1 = M2 syntactically. However, this is quite restrictive, with the consequence that Set0.set, Set1.set, and Set2.set are all considered distinct for no good reason. Moreover, OCaml deems Set4.set and Set5.set equivalent just because they are constructed from syntactically identical module expressions, even though doing so constitutes a clear violation of abstraction safety. As in the case of the Name functor, one could try to rely on disciplined use-site sealing to work around this problem—e.g., by sealing the results of all applications of the Set functor appropriately, or by introducing phantom types into the functor parameter, instantiated to fresh abstract types associated with an ordering as necessary. But once more, this would wrongly place the burden of protecting the abstraction on (all) clients of the functor, while depriving its implementer of the ability to perform local reasoning about the correctness of the abstraction.

8.1 Elaboration

In this section, we refine our elaboration from Section 7 in order to arrive at a semantics that achieves abstraction safety in a satisfactory manner.12 Our approach is as follows. First, in order to deal with examples like the Name functor, which ought not to be applicative, we now take into account not only static purity, but also dynamic purity.
That is, in the elaboration of pure modules, we only permit value bindings that we can prove to have no side effects. The intuition behind this restriction is simple: if a module defines abstract types and also has computational effects, then it is only safe to assume that the semantic meanings of the abstract types are tied up with the effects. For example, the meaning of the name type in the Name functor is semantically tied to the stateful counter—in particular, it represents the set of natural numbers less than the current value of counter (which may only grow over time). Second, we observe that it is only abstraction-safe to equate the types returned by applicative functors if the arguments passed to them are dynamically (as well as statically) equivalent. This explains why Set0, Set1, and Set2 produce equivalent set types, but they are distinct from Set3.set. In order to check for dynamic equivalence of functor arguments, we thus refine our semantics to (conservatively) track the "identity" of values. In essence, we emulate a simple form of dependent typing without actually requiring dependent types.

Dynamic purity

Determining whether an expression is dynamically pure is undecidable. As a conservative approximation, we piggyback on a notion that already exists in ML: the syntactic classification of non-expansive expressions—essentially, syntactic values. In ML, this notion is used in the core language to prevent unsound implicit polymorphism, the so-called value restriction (Wright, 1995). It makes perfect sense to reuse it here, because an applicative functor can be thought of as a polymorphic function on steroids. Figure 35 gives a suitable grammar for non-expansive expressions E that accounts for paths and packages. The ". . ." in the grammar for E will typically define a sub-language of what is templated as ". . ." in the grammar for E (cf. Figure 1), but the specifics obviously depend on the concrete core language.
For module expressions M contained in non-expansive expressions, the only constructs disallowed are functor application and unpacking. Depending on the details of the core language and its type system, more refined strategies are possible for classifying pure value bindings. Fortunately, this does not affect anything else in our development, so we stick with the simple notion of non-expansiveness for simplicity; adopting something more sophisticated should be straightforward.

Dynamic module equivalence and semantic paths. We have demonstrated above that abstraction safety requires type equivalence to take dynamic module equivalence into account. As we have mentioned already, our approach relies on the tracking of "identities" for value components of modules. Since equivalence of values is obviously undecidable in general, and because we also want to avoid the need for true dependent types, we again use a conservative approximation: our new typing rules employ "phantom types" to identify values, i.e., abstract type expressions that we call semantic paths π.

¹² As explained in footnote 9, the notion of abstraction safety is somewhat informal. The claim that the semantics described in this section regains abstraction safety is likewise informal, and to justify it formally would take us beyond the scope of this article. At the very least, we believe it is clear that our semantics does not suffer from the same problems with abstraction safety that afflict previous approaches.

Fig. 35. Non-expansive expressions (the nonterminals here denote the non-expansive fragments of their counterparts in Figure 1):

  E ::= … | P | pack M:S
  P ::= M
  M ::= X | {B} | M.X | fun X:S ⇒ M | X:>S
  B ::= val X=E | type X=T | module X=M | signature X=S | include M | ε | B;B

Fig. 36. Semantic signatures for tracking sharing:

  (paths)                π ::= α | π τ
  (concrete signatures)  Σ ::= [= π : τ] | …

  Abbreviations:
  (types)        [= π : τ]  := {val : τ, nam : π}
  (expressions)  [e as e′]  := {val = e, nam = e′}
Usually, such a path is just a type variable, but due to the lifting that happens with applicative functors, it can actually take the more general form defined in Figure 36. Paths are recorded in an extended definition of atomic value signature, also given in Figure 36. Consequently, every value binding or declaration will be associated with a semantic path. As with abstract types, we can quantify over path variables (existentially and universally), and thus abstract over value identities. Semantic paths can be viewed as a refinement of the concept of structure stamps, which tracked structure identity in SML’90 (Milner et al., 1990). Here, we reinterpret the ad hoc operational notion of “stamp” as a phantom type introduced via System F quantification, and we use it to stamp individual values rather than whole structures, thus enabling the tracking of identities at a finer granularity. (We could reconstruct “real” structure stamps, essentially by tracking module identities in addition to value identities. But in the presence of fine-grained value paths we see no additional benefit in also having structure stamps.) Obviously, our notion of semantic paths could be refined in various ways. For example, certain values, such as scalar constants, could be captured more precisely by reflecting them on the type level (equating more values and hence allowing more programs to type-check). However, such details are beyond the scope of this article. Elaboration The new and modified rules for value declarations and bindings are shown in Figure 37. We once more have highlighted the relevant changes. For a value declaration (rule D- VAL), we always introduce a fresh path variable (of kind Ω) as a place-holder for the actual value’s identity. For value bindings, there are now three rules. If the binding just rebinds a suitable path P, then we actually know the value’s identity, and can retain it (rule B- VAL - ALIAS). 
Otherwise, we treat the value as "new" and introduce a fresh path variable representing it; the witness type for the variable does not matter, so we simply pick {}. The binding can be treated as pure if the expression is non-expansive (rule B-VAL-P), in which case we have to abstract over Γ inside the package, in the same way we did in the sealing rule M-SEAL (Figure 29).

Fig. 37. Elaboration of value sharing (premises written before "⟹", conclusion after; E ranges over non-expansive expressions in rule B-VAL-P):

  D-VAL:       Γ ⊢ T : Ω ⇝ τ    κα = Ω
               ⟹ Γ ⊢ val X:T ⇝ ∃α.{lX : [= α : τ]}

  U-VAL:       π = π′    Γ ⊢ τ ≤ τ′ ⇝ f
               ⟹ Γ ⊢ [= π : τ] ≤ [= π′ : τ′] ⇝ λx:[= π : τ].[f (x.val) as x.nam]

  B-VAL-I:     Γ ⊢ E : τ ⇝ e    κα = Ω    E is not non-expansive    ∀P. E ≠ P
               ⟹ Γ ⊢ val X=E :I ∃α.{lX : [= α : τ]} ⇝ pack ⟨{}, {lX = [e as {}]}⟩

  B-VAL-P:     Γ ⊢ E : τ ⇝ e    κα = Γ → Ω    ∀P. E ≠ P
               ⟹ Γ ⊢ val X=E :P ∃α.{lX : [= α Γ : τ]} ⇝ pack ⟨λΓ.{}, λΓ.{lX = [e as {}]}⟩

  B-VAL-ALIAS: Γ ⊢ P :ϕ ∃α.[= π : τ] ⇝ e
               ⟹ Γ ⊢ val X=P :ϕ ∃α.{lX : [= π : τ]} ⇝ unpack ⟨α, x⟩ = e in pack ⟨α, λΓϕ.{lX = x}⟩

  E-PATH:      Γ ⊢ P :ϕ ∃α.[= π : τ] ⇝ e    Γ ⊢ τ : Ω
               ⟹ Γ ⊢ P : τ ⇝ unpack ⟨α, x⟩ = e in (x Γϕ).val

Subtyping requires atomic value signatures to have matching paths (rule U-VAL). For now, this condition is trivial to meet, because rule D-VAL always produces a separate, existentially quantified path for every single value declaration, so that the matching rule U-MATCH can pick them freely before descending into the subtyping check. In Section 8.2 below, we present another small language extension that makes the condition more interesting, though. Finally, in the premise of the modified rule E-PATH, P is elaborated as a full module. This is more permissive than going through the generic path rule P-MOD as before (cf. Figure 30), because the new rule also allows dropping any quantified variable that only occurs in the path π.
Without the modified rule, our encoding of let-expressions would no longer work, since every local value definition (that is not a mere alias) introduces an existential quantifier as its path. (Consider let val x = 1 in x+x, which desugars into {val x = 1; val it = x+x}.it—as a module, its type is ∃α1 α2.[= α2 : int], so that α2 cannot be avoided by the path rule P-MOD. Rule E-PATH, on the other hand, can drop both variables.)

Fig. 38. Example: signature elaboration with value tracking

  (Elem : ORD) ⇒ (SET where type elem = Elem.t) ⇝
  ∃β:(Ω³ → Ω), β1:(Ω³ → Ω), β2:(Ω³ → Ω), β3:(Ω³ → Ω).
    ∀α:Ω, α1:Ω, α2:Ω.
      {t : [= α : Ω], eq : [= α1 : α × α → bool], less : [= α2 : α × α → bool]}
    →P
      {set : [= β α α1 α2 : Ω], elem : [= α : Ω],
       empty : [= β1 α α1 α2 : β α α1 α2],
       add : [= β2 α α1 α2 : α × β α α1 α2 → β α α1 α2],
       mem : [= β3 α α1 α2 : α × β α α1 α2 → bool]}
  (where Ω³ → Ω := Ω → Ω → Ω → Ω)

Example. Figure 38 shows the result of elaborating the (applicative) functor signature describing Set, previously shown in Figure 27, under the updated rules. Differences to the previous result are highlighted: atomic value signatures now carry path information, the signature abstracts the path variables α1, α2 and β1 to β3, and the export type β has to be applied not just to the argument type α but also to the argument paths α1, α2, accordingly. Given a Set functor with the semantic signature from Figure 38, the types Set0.set, Set1.set, and Set2.set (from the beginning of the section) will be seen as equivalent: they all elaborate to the semantic type β int πeq πless, with the two paths πeq and πless referring to the respective members of structure Int. They are distinguished from type Set3.set, which elaborates to β int πeq πgreater.
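In other words, a client can freely interchange the first three set types, while the fourth remains incompatible. A hypothetical usage sketch in the article's surface syntax (Set0 to Set3 are the applications of the Set functor discussed at the beginning of the section):

```
val s : Set0.set = Set1.add (3, Set2.empty)  (* well-typed: all three types
                                                elaborate to β int πeq πless *)
val t : Set0.set = Set3.empty                (* ill-typed: Set3.set is
                                                β int πeq πgreater *)
```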
Types Set4.set and Set5.set are also fresh, because the functor F will be deemed impure under the new rules, due to its binding for less, which features an expansive application (random()). Its semantic signature looks as follows (highlighting the pieces that have been added or changed with the refined rules):

  F : {} →I ∃β1:Ω.{t : [= int : Ω], eq : [= πeq : int × int → bool],
                   less : [= β1 : int × int → bool]}

Hence, F delivers a fresh path for less with every application, and so each application of the Set functor to F Empty will produce different set types. The Name functor will be considered impure under the new rules as well, because of the local effectful binding for counter. Here is its signature according to the refined rules:

  Name : {} →I ∃β:Ω, β1:Ω, β2:Ω.{name : [= β : Ω], new : [= β1 : {} → β],
                                  equal : [= β2 : β × β → bool]}

Consequently, the functor will behave generatively, with Name1.name and Name2.name elaborating to distinct fresh abstract types.

8.2 Sharing specifications

Once value identities matter for determining type equivalences, it can be useful to give the programmer the ability to explicitly specify sharing constraints between values.

Fig. 39. Extension with value and module sharing specifications:

  (signatures)   S ::= … | S where val X=P | S where module X=P | like P
  (declarations) D ::= … | val X=P | module X=P

For example, consider a functor that takes two arguments, both with a sub-module Ord:

  signature A = {module Ord : ORD; val v : Set(Ord).t; …}
  signature B = {module Ord : ORD; val f : Set(Ord).t → int; …}
  module F (X : A) (Y : B) = { … Y.f (X.v) … }

Clearly, the application in the functor's body cannot type-check without knowing that X.Ord and Y.Ord are statically and dynamically equal. For that, we need to be able to impose sufficient constraints on the parameters.
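Note that the problem persists even when both parameters are instantiated with the very same ordering module: inside the body of F, X.Ord and Y.Ord are distinct parameters with distinct quantified type and value paths. A hypothetical instantiation illustrating this (IntOrd and the abbreviated structure bodies are invented for the example):

```
module IntOrd : ORD = {type t = int; val less = Int.less; …}
module X0 : A = {module Ord = IntOrd; val v = Set(IntOrd).empty; …}
module Y0 : B = {module Ord = IntOrd; val f = fun s ⇒ 0; …}
(* F X0 Y0 is still rejected: when the body of F is checked, nothing
   relates X.Ord to Y.Ord, so Set(X.Ord).t and Set(Y.Ord).t are
   incompatible, even though both arguments are IntOrd here. *)
```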
Figure 39 presents syntax for manifest value specifications (using module paths P) and a related signature refinement using where. It also introduces similar forms to specify sharing between entire modules, which serves as an abbreviation for sharing all type and value components. Finally, we add a construct, “like P”, which yields the signature of the module P, and thus can only be matched by modules that provide the same definitions as P. In essence, this describes a higher-order singleton signature in the manner introduced by Dreyer et al. (2003).13 A manifest specification module X=P is equivalent to the specification module X : like P. With these extensions, we can, for example, define the functor F properly as follows: module F (X : A) (Y : B where module Ord = X.Ord) = { . . . Y.f (X.v) . . . } One subtlety to point out here is that the design of these constructs depends on the fact that our elaboration is deterministic, and so any path P trivially has a unique type in our system. If that weren’t the case—e.g., if modules only had principal types—then the “where module” and the “like” construct would not yield a unique signature specification, i.e., their meaning would be ambiguous. To compensate, it would be necessary to require the programmer to disambiguate those constructs with explicit signature annotations “:S” on the paths. A deterministic type system avoids any such nuisance. Elaboration The respective elaboration rules are shown in Figure 40. Rule S- WHERE - VAL is analogous to S- WHERE - TYP (cf. Figure 11). Module refinement (rule S- WHERE - MOD) is slightly more involved. It is defined as refining every individual abstract value and type specification in submodule X of S. This module has the signature Σ00 , and the type variables α 2 identify its abstract entities; the remaining α 1 are used elsewhere in Σ and remain untouched. The concrete signature Σ0 of the refining path P has to match ∃α 2 .Σ00 . 
(Typically, α2 will coincide with the subset of α that are free in Σ″, because only in rare circumstances can matching succeed with an unquantified α ∈ α1 left over in Σ″.¹⁴) The rules for manifest value and module declarations are straightforward, as is the rule for singletons. In all the module forms, a side condition about explicitness is necessary to maintain the elaboration invariant that is required for decidability (cf. Section 5.2). Inductively, we only know that the respective signatures are valid, but because they can occur on the right-hand side of a match, we would lose decidability (which we will prove in Section 9.2) if we did not require them to also be explicit. In practice, the signature of a path (or any module, for that matter) can always be enforced to be explicit by imposing a signature annotation. Alternatively, any "classic" syntactic path consisting only of variables, projection, and pure functor application will satisfy the explicitness criterion, as long as those variables in turn are bound to definitions with explicit signature annotations.

¹³ It is also very similar to the "module type of" operator that was introduced in recent versions of OCaml. The difference is that OCaml's operator does not propagate the identities of abstract types defined by the module, which we find rather surprising.

Fig. 40. Elaboration of value and module sharing specifications (premises before "⟹", conclusion after):

  S-WHERE-VAL:  Γ ⊢ S ⇝ ∃α1 α α2.Σ    Γ ⊢ P : [= π : τ′] ⇝ e    Γ ⊢ τ′ ≤ τ ⇝ f    Σ.lX = [= α : τ]
                ⟹ Γ ⊢ S where val X=P ⇝ ∃α1 α2.Σ[π/α]

  S-WHERE-MOD:  Γ ⊢ S ⇝ ∃α.Σ    Γ ⊢ P : Σ′ ⇝ e    Σ.lX = Σ″    α = α1 ⊎ α2
                ∃α2.Σ″ explicit    Γ, α1 ⊢ Σ′ ≤ ∃α2.Σ″ ↑ τ
                ⟹ Γ ⊢ S where module X=P ⇝ ∃α1.Σ[τ/α2]

  S-LIKE:       Γ ⊢ P : Σ ⇝ e    Σ explicit    ⟹ Γ ⊢ like P ⇝ Σ

  D-VAL-EQ:     Γ ⊢ P : [= π : τ] ⇝ e    ⟹ Γ ⊢ val X=P ⇝ {lX : [= π : τ]}

  D-MOD-EQ:     Γ ⊢ P : Σ ⇝ e    Σ explicit    ⟹ Γ ⊢ module X=P ⇝ {lX : Σ}
In the case of rule S-WHERE-MOD, however, ∃α2.Σ″ can only be made explicit (and the refinement made well-formed) by ensuring that the signature of the specialized submodule is sufficiently self-contained, i.e., none of its type components refers to any of the α1 from the surrounding signature. It is not merely decidability concerns that demand this. For example, the refinement in

  signature S = {type t : ? → ?; module A : {type u = t int; …}}
  module B = {type u = int; …}
  signature T = S where module A = B

would require higher-order unification to find a t such that t int = int. Not only is that an undecidable problem in the general case, it also has more than one "solution" for this example, and the signature T would therefore have an ambiguous meaning. Consequently, the above example is disallowed by the rule—t is not rooted in the inner signature of A, although it mentions it. But the example can be disambiguated by splitting the refinement into stages:

  signature T = (S where type t = fun a ⇒ a) where module A = B

If all types from the surrounding signature have an alias in the submodule, however, then our system accepts the direct refinement:

  signature S = {type t : ? → ?; module A : {type u = t; …}}
  module B = {type u = fun a ⇒ list a; …}
  signature T = S where module A = B

(And because we always βη-normalize all types, this even works when u is specified as fun a ⇒ t a in signature S.) The "where module" construct has been a rather dark corner of ML-style modules. While it is often available in one form or another, its semantics tends to be either vague or over-restrictive (or both), and rarely is it properly specified.

¹⁴ With ML as a core language, one such example would be if Σ″ contained a value component of type t int → t int. This type could be matched by a Σ′ in which the corresponding component had type ∀α.α → α, which does not mention t but can nonetheless be instantiated to t int → t int.
The structure sharing specifications of SML'90 (Milner et al., 1990) were the earliest form of a comparable construct, but they were both relatively restricted and semantically complicated, resorting to global "admissibility" conditions. In SML'97 (Milner et al., 1997), they were hence degraded to a form of syntactic sugar, but this is arguably not quite the right thing either, since their desugaring in fact relies on type information. As has been observed repeatedly by SML implementers, the SML'97 semantics has a severe limitation: it prevents the placement of structure sharing constraints on any signatures that export a single transparent type specification! Generalizations and improvements, including the complementary "where module" (or "where structure") mechanism, have been discussed in online forums and implemented in some compilers (e.g., SML/NJ (SML/NJ Development Team, 1993) and Alice ML (Rossberg et al., 2004)), but have never been formalized as far as we are aware. In OCaml, "with module" is superficially similar, but actually extends a signature instead of just refining types, which apparently is considered a bug.¹⁵ Our elaboration rule S-WHERE-MOD may thus be viewed as a novel step in the right direction.

¹⁵ See the bug report at http://caml.inria.fr/mantis/view.php?id=5514.

9 Meta-theory revisited

Having made non-trivial extensions to our system in the last two sections, we need to revisit the meta-theoretical properties that we proved about the initial system in Section 5.

9.1 Soundness

The soundness statement for the new elaboration rules has to cover the elaboration of pure modules now. But first a helpful lemma about typing environment abstractions:

Lemma 9.1 (Typing of environment abstraction)
Let Γ and Γ1, Γ, Γ2 be well-formed environments.
1. If and only if Γ ⊢ τ : κ, then · ⊢ λΓ.τ : Γ → κ.
2. If and only if Γ1, Γ, Γ2 ⊢ τ : Γ → κ, then Γ1, Γ, Γ2 ⊢ τ Γ : κ.
3. If and only if Γ ⊢ τ : Ω, then · ⊢ ∀Γ.τ : Ω.
4. If and only if Γ ⊢ e : τ, then · ⊢ λΓ.e : ∀Γ.τ.
5. If and only if Γ1, Γ, Γ2 ⊢ e : ∀Γ.τ, then Γ1, Γ, Γ2 ⊢ e Γ : τ.
6. (λΓ.τ) Γ ≡ τ.

In the actual soundness statement, pure module elaboration has a somewhat more intricate invariant than its impure version, as given by part 7 of the following theorem (all other parts read as before):

Theorem 9.2 (Soundness of elaboration with applicative functors)
Let Γ be well-formed.
1. If Γ ⊢ T : κ ⇝ τ, then Γ ⊢ τ : κ.
2. If Γ ⊢ E : τ ⇝ e, then Γ ⊢ τ : Ω and Γ ⊢ e : τ.
3. If Γ ⊢ τ ≤ τ′ ⇝ f and Γ ⊢ τ : Ω and Γ ⊢ τ′ : Ω, then Γ ⊢ f : τ → τ′.
4. If Γ ⊢ P : Σ ⇝ e, then Γ ⊢ Σ : Ω and Γ ⊢ e : Σ.
5. If Γ ⊢ S/D ⇝ Ξ, then Γ ⊢ Ξ : Ω.
6. If Γ ⊢ M/B :I Ξ ⇝ e, then Γ ⊢ Ξ : Ω and Γ ⊢ e : Ξ.
7. If Γ ⊢ M/B :P ∃α.Σ ⇝ e, then Γ ⊢ ∃α.Σ : Ω and · ⊢ e : ∃α.∀Γ.Σ.
8. If Γ ⊢ Ξ ≤ Ξ′ ⇝ f and Γ ⊢ Ξ : Ω and Γ ⊢ Ξ′ : Ω, then Γ ⊢ f : Ξ → Ξ′.
9. If Γ ⊢ Σ ≤ ∃α.Σ′ ↑ τ ⇝ f and Γ ⊢ Σ : Ω and Γ, α ⊢ Σ′ : Ω, then Γ ⊢ τ : κα and Γ ⊢ f : Σ → Σ′[τ/α].

Proof
By simultaneous induction on the derivations. Most cases are proved as before (Theorem 5.1), except that some use additional abstraction over Γ, and we have added a number of new rules, most of which are fairly straightforward. We give the two most relevant cases for elaborating applicative functors and pure modules:

• Case M-FUNCT-P: By induction on the first premise we know that Γ ⊢ ∃α.Σ : Ω, and by iterated inversion this implies (1) Γ, α ⊢ Σ : Ω. Hence we can show that Γ, α, X:Σ is well-formed. By induction on the second premise it follows that (2) Γ, α, X:Σ ⊢ ∃α2.Σ2 : Ω and (3) Γ ⊢ e : ∃α2.∀(Γ, α, X:Σ).Σ2. Statement (3) already proves the second goal, because ∃α2.∀(Γ, α, X:Σ).Σ2 = ∃α2.∀Γ.∀α.Σ →P Σ2 by the definition of environment abstraction. To prove the first goal, inverting (2) gives Γ, α, X:Σ, α2 ⊢ Σ2 : Ω, which can be trivially strengthened and reordered to Γ, α2, α ⊢ Σ2 : Ω.
By weakening (1) to Γ, α, α 2 ` Σ : Ω, applying Fω typing rules, and induction over the length of α 1 and then α 2 , we arrive at Γ ` ∃α 2 .∀α.Σ →P Σ2 : Ω. 23 August 2014 F-ing modules • Case M- SEAL: Since we assume that Γ is well-formed, the first premise implies (1) Γ ` Σ0 : Ω. By induction on the second premise we get Γ ` ∃α.Σ, which can be inverted to (2) Γ, α ` Σ : Ω. By induction (part 9) we can conclude (3) Γ ` τ : κα and (4) Γ ` f : Σ0 → Σ[τ/α]. Consider the first goal first. By Lemma 9.1 and Fω kinding, we get Γ, α 0 ` α 0 Γ0 : κα , and accordingly, Γ, α 0 ` [α 0 Γ/α] : Γ, α, so that the substitution lemma applied to (2) yields Γ, α 0 ` Σ[α 0 Γ/α] : Ω. By induction over the length of α 0 , Fω typing rules then give Γ ` ∃α 0 .Σ[α 0 Γ/α] : Ω as desired. For the second goal, first derive (5) Γ ` f X : Σ[τ/α] by simple application of Fω typing rules to (1) and (4). Lemma 9.1 then gives · ` λ Γ. f X : ∀Γ.Σ[τ/α]. Likewise, · ` λ Γ.τ : Γ → κα follows from (3). The lemma also gives (λ Γ.τ) Γ = τ, and hence it holds that Σ[τ/α] = Σ[(λ Γ.τ) Γ/α] and we can apply the conversion rule and Lemma 9.1 to (5) to get · ` λ Γ. f X : ∀Γ.Σ[(λ Γ.τ) Γ/α]. Since we assume that α 0 are fresh by convention, this is the same type as ∀Γ.Σ[α 0 Γ/α][(λ Γ.τ)/α 0 ], and induction over α 0 for application of the pack typing rule gives the wanted result. 9.2 Decidability Recall from Section 5.2 that the decidability of our type system solely hinged on the decidability of subtyping—more specifically, type lookup for the matching rule U- MATCH. This has not changed with any of the extensions we made. In fact, except for the trivial incorporation of effect subtyping, the addition of applicative functors did not change the declarative subtyping and matching rules at all! However, the presence of applicative functors does necessitate fundamental changes to their algorithmic implementation. 
In particular, type lookup now has to look into pure functor signatures in order to find suitable types for matching, and the contravariance of functor parameters results in a significantly more complex definition of the lookup function. That also makes the surrounding definitions and proofs more involved than what we have seen so far. (The end of this section has a few remarks concerning this complexity.)

Validity and rootedness. First, we observe that our previous definition of signature validity and, specifically, rootedness (cf. Figure 18) is no longer appropriate—it is violated by the new rules for pure functors (S-FUNCT-P and M-FUNCT-P), where we lift an existential quantifier over a universal one, and thus separate the existential quantifier from the structure that roots its variables. To deal with the additional extensions from Section 8, we must also account for abstract value paths—however, they are treated like any other abstract type variable, so do not affect the definitions and proofs much. (That is, the essential metatheoretical complexity encountered in this section already comes up for the simpler system from Section 7 alone.) Let us consider a couple of simple examples first. An abstract type β1 : Ω is rooted in a structure signature {t1 : [= β1 : Ω]} (as before), so that ∃β1.{t1 : [= β1 : Ω]} is a valid (and explicit) signature. Likewise, structures can be roots for higher-kinded types, if they specify them at their higher kind—for example, β2 : Ω → Ω is rooted in {t2 : [= β2 : Ω → Ω]} (still as before). What's new now is that types may also be rooted in a pure functor signature. For example, a higher-kinded β3 : Ω → Ω → Ω can now be rooted in ∀α1, α2.{u : [= α1 : Ω], v : [= α2 : Ω]} →P {t3 : [= β3 α1 α2 : Ω]} if the path β3 α1 α2—with α1, α2 being exactly the list of abstract types that the functor quantifies over—is rooted in the functor's result signature.
Consequently, ∃β3 .∀α1 , α2 .{u: [= α1 : Ω],v: [= α2 : Ω]} →P {t3 : [= β3 α1 α2 : Ω]} is a valid (and explicit) signature. (As a degenerate case, the universal quantifier in a functor signature can actually be empty; such functors can be roots even for abstract types of ground kind Ω—e.g., β4 is rooted in {} →P {t4 : [= β4 : Ω]}.) Figure 41 gives an extended definition of validity and related properties. Rootedness takes applicative functors into account: a variable may now be rooted in a pure functor’s codomain. As a side effect, the definition no longer is concerned with plain type variables only, but generalises to semantic paths π. In the functor case, we extend the current path by applying the functor’s universal variables before descending into the codomain, mirroring the kind-raising substitution performed by rule S- FUNCT- P. The path π in the rootedness relation is always “abstract”, in the sense that it is restricted to the form α α 0 . We write head(π) to denote the head variable α in such a path. However, we have to be careful not to treat variable occurrences inside a functor as a root when that functor’s argument already mentions that variable. For example, the (valid) signature ∀α.{u: [= α : Ω],v: [= β α : Ω]} →P {t: [= β α : Ω]} cannot possibly be a root for β , even though the path β α has the right form in its codomain. Intuitively, with β already occurring in its argument, this functor cannot be the origin of the abstract type β . Rather, it represents a functor signature like (X : {type u; type v = b u}) ⇒ {type t = b X.u}, where the type b that β corresponds to is bound somewhere else. (Technically, the refined type lookup algorithm that we are going to define in a moment could produce cyclic results if we allowed examples like this as input.) The problem extends to multiple variables. 
Imagine:

  ∃β1 β2.{F : ∀α.{t : [= α : Ω], u : [= β2 : Ω → Ω]} →P {v : [= β1 α : Ω]},
          G : ∀α.{t : [= α : Ω], v : [= β1 : Ω → Ω]} →P {u : [= β2 α : Ω]}}

We cannot allow such a signature to be regarded explicit, because β1 and β2 would then have a cyclic dependency. The new rootedness judgment excludes such cyclic examples, by (1) enforcing that each rooted variable is "avoided" by any functor parameter signature its root is under, and (2) inductively requiring that for multiple variables, each root not only avoids the variable itself, but also any of the following ones, thereby imposing sequential dependencies. Intuitively, then, the order of the quantified variables has to reflect the order of the respective declarations from which they originate. (This means that we are no longer as free to reorder quantified variables as we were before. We can only pick an order that represents a topological sorting with respect to the (non-cyclic) dependency graph of the declarations. Our definition of signature normalization (Section 6) hence is in need of refinement. However, the details are not very interesting, so we omit them here.)

Fig. 41. Validity for applicative functors (α′ and β denote sequences of variables; label annotations "(at …)" record the position of the root):

  ε rooted in Σ      :⇔ always
  α, α′ rooted in Σ  :⇔ α rooted in Σ avoiding α, α′  ∧  α′ rooted in Σ

  π rooted in [= π′ : τ] avoiding β (at ε)    :⇔ π = π′
  π rooted in [= τ : κ] avoiding β (at ε)     :⇔ π = τ
  π rooted in {l : Σ} avoiding β (at l.l′)    :⇔ π rooted in {l : Σ}.l avoiding β (at l′)
  π rooted in ∀α.Σ1 →P Σ2 avoiding β (at l)   :⇔ π α rooted in Σ2 avoiding β (at l)
                                                 ∧ β ∩ fv(Σ1) = ∅

  [= π : τ] explicit  (always)
  ∀α.Σ →ϕ Ξ explicit  :⇔ ∃α.Σ explicit ∧ Ξ explicit
  …
  [= π : τ] valid  (always)
  ∀α.Σ →ϕ Ξ valid   :⇔ ∃α.Σ explicit ∧ Ξ valid
  …

With the new and improved definition of rootedness, the validity lemma is valid again, and we can extend it to the pure judgments:

Lemma 9.3 (Simple properties of validity with applicative functors)
1.
If and only if π rooted in Σ avoiding β1 and π rooted in Σ avoiding β2, then π rooted in Σ avoiding β1, β2.
2. If π rooted in Σ avoiding β1 and fv(Σ) ∩ β2 = ∅, then π rooted in Σ avoiding β1, β2.
3. If α rooted in Σ, then α rooted in Σ[τ′/α′], provided α ∩ (fv(τ′) ∪ α′) = ∅.
4. If Ξ explicit, then Ξ valid.
5. If Ξ valid/explicit, then Ξ[τ/α] valid/explicit.
6. If Ξ valid/explicit, then norm(Ξ) valid/explicit.

Lemma 9.4 (Signature validity with applicative functors)
Assume Γ valid.
1. If Γ ⊢ P : Σ ⇝ e, then Σ valid.
2. If Γ ⊢ S/D ⇝ Ξ, then Ξ explicit.
3. If Γ ⊢ M/B :ϕ Ξ ⇝ e, then Ξ valid.

Type lookup. Of course, the more liberal definition of rootedness and signature validity now necessitates a more general type lookup algorithm. The upgrade is shown in Figure 42. Like rootedness, it now deals with semantic paths π instead of plain variables. That is, it no longer just looks for type variables but for paths. When lookup descends into the codomain of a functor type, it extends the current path with the functor's parameter variables. These parameters become parameters of the looked-up type, matching up with the raised kind that an abstract type from an applicative functor is given. For example, consider

  lookupβ(∀α.{u : [= α : Ω]} →P {t : [= int : Ω]}, ∀α′.{u : [= α′ : Ω]} →P {t : [= β α′ : Ω]})

which looks for the type β : Ω → Ω (rooted in the second signature) in the first signature. It first takes the variables from the root's universal quantifier (in this case only a single α′)
In general, a type looked up in the codomain may have occurrences of variables from the left hands universal quantifier, which would escape their scope if we left them alone. Consider: lookupβ (∀α.{u: [= α : Ω]} →P {t: [= list α : Ω]}, ∀α 0 .{u: [= α 0 : Ω]} →P {t: [= β α 0 : Ω]}) Here, just performing lookup in the codomain would give us list α for β α 0 , which is no good because the α it contains would be unbound. As with functor subtyping, we hence have to substitute α first, in a contravariant fashion. We do so with the corresponding types inversely looked up in the right-hand side’s domain, i.e., lookupα ({u: [= α 0 : Ω]}, {u: [= α : Ω]}) for the example, and thereby mapping α to α 0 . As a result, the main lookup will return list α 0 for β α 0 —but that is fine, because we have to lambda-abstract over α 0 anyway. We arrive at λ α 0 .list α 0 (or just list, by η-equivalence) as a proper substitute for β . Unfortunately, as our earlier discussion of rootedness already suggested, contravariance complicates the lookup of multiple variables, because it can create dependencies between the results. Consider: Ξ = ∃β1 β2 .Σ, Ξ0 = ∃β10 β20 .Σ0 , Σ = {F : ∀α.{t : [= α : Ω]} →P {t : [= β1 α : Ω]}, G : ∀α.{t : [= α : Ω]} →P {t : [= β2 α : Ω]}} Σ0 = {F : ∀α 0 .{t : [= α 0 : Ω]} →P {t : [= β10 α 0 : Ω]}, G : {t : [= β10 int : Ω]} →P {t : [= β20 : If we want to check Ξ ≤ Ξ0 , then looking up β10 , β20 independently would deliver lookupβ 0 (Σ, Σ0 ) ↑ λ α 0.β1 α 0 1 lookupβ 0 (Σ, Σ0 ) ↑ β2 (β10 int) 2 The solution for still contains an occurrence of β10 , which we need to substitute away. Consequently, as in the definition of rootedness, we have to respect the quantification order of the existential variables (like those from Ξ0 above) and perform their lookup in this order, substituting types as we go. As explained earlier, the definition of rootedness ensures that quantification order corresponds to dependency order. 
In fact, the lookup rules, in the case of multiple variables and of functors, also contain explicit side conditions that check that the returned type(s) do not contain the looked-up variable(s) themselves. The main reason for these side conditions is technical: building them into the lookup judgment removes mutual interdependencies between various properties we prove below. In practice, they are implied by rootedness. Because the new definition of lookup is more complicated, its "simple" properties are a little bit less simple than before (cf. Lemma 5.4):

Lemma 9.5 (Simple properties of type lookup with applicative functors)
1. If lookupα(Σ, Σ′) ↑ τ and α ∩ fv(Σ) = ∅, then fv(τ) ⊆ fv(Σ) ∪ fv(Σ′) − α.
2. If lookupπ(Σ, Σ′) ↑ τ and head(π) ∉ fv(Σ), then fv(τ) ⊆ fv(Σ) ∪ fv(Σ′) − head(π).
3. If lookupα(Σ, Σ′) ↑ τ and α ∩ (α′ ∪ fv(τ′)) = ∅, then lookupα(Σ[τ′/α′], Σ′[τ′/α′]) ↑ τ[τ′/α′].
4. If lookupπ(Σ, Σ′) ↑ τ and fv(π) ∩ (α′ ∪ fv(τ′)) = ∅, then lookupπ(Σ[τ′/α′], Σ′[τ′/α′]) ↑ τ[τ′/α′].
(Moreover, in parts 3 and 4, the length of the derivation stays the same.)

Fig. 42. Algorithmic type lookup with applicative functors (ᾱ and τ̄ denote sequences):

  lookupε(Σ, Σ′) ↑ ε            always
  lookupα,ᾱ(Σ, Σ′) ↑ τ, τ̄       if lookupα(Σ, Σ′) ↑ τ ∧ fv(τ) ∩ ᾱ = ∅
                                ∧ lookupᾱ(Σ, Σ′[τ/α]) ↑ τ̄

  lookupπ([= π″ : τ], [= π′ : τ′]) ↑ π″    if π′ = π
  lookupπ([= τ : κ], [= τ′ : κ]) ↑ τ       if τ′ = π
  lookupπ({l : Σ}, {l′ : Σ′}) ↑ τ          if ∃l ∈ l ∩ l′. lookupπ({l : Σ}.l, {l′ : Σ′}.l) ↑ τ
  lookupπ(∀α.Σ1 →P Σ2, ∀α′.Σ1′ →P Σ2′) ↑ λα′.τ
      if lookupα(Σ1′, Σ1) ↑ τ′ ∧ head(π) ∉ fv(τ′) ∧ lookupπ α′(Σ2[τ′/α], Σ2′) ↑ τ

The soundness statement also requires a more verbose formulation than before, and because of the contravariant lookup in the functor case, both parts are mutually dependent:

Theorem 9.6 (Soundness of type lookup with applicative functors)
1. Let Γ ⊢ Σ : Ω and Γ, α ⊢ Σ′ : Ω.
If lookup_π(Σ, Σ′) ↑ τ₁, then Γ, ᾱ ⊢ π : κ and Γ ⊢ τ₁ : κ. Furthermore, if Γ ⊢ Σ ≤ Σ′[τ̄₂/ᾱ] for Γ ⊢ τ̄₂ : κ̄_ᾱ and π = α ᾱ′ (with ᾱ ∩ ᾱ′ = ∅), then τ₁ = τ₂ ᾱ′.
2. Let Γ ⊢ Σ : Ω and Γ, ᾱ ⊢ Σ′ : Ω. If lookup_ᾱ(Σ, Σ′) ↑ τ̄₁, then Γ ⊢ τ̄₁ : κ̄_ᾱ. Furthermore, if Γ ⊢ Σ ≤ ∃ᾱ.Σ′ ↑ τ̄₂, then τ̄₁ = τ̄₂.

Proof By simultaneous induction on the size of the derivation of the lookup. Interestingly, proving well-kindedness of the looked-up types requires slightly different inductive steps than proving the type equivalence(s).

Part 1:
• Case lookup_π([= τ₁ : κ], [= τ′ : κ]): Then π = τ′. By inversion of well-kindedness, Γ ⊢ τ₁ : κ and Γ, ᾱ ⊢ τ′ : κ. Furthermore, by inversion of subtyping, τ₁ = τ′[τ̄₂/ᾱ], for which we know via substitution that τ′[τ̄₂/ᾱ] = π[τ̄₂/ᾱ] = τ₂ ᾱ′.
• Case lookup_π([= π″ : τ₃], [= π′ : τ′₃]): Analogous.
• Case lookup_π({l̄ : Σ̄}, {l̄′ : Σ̄′}): Then lookup_π(Σ, Σ′) ↑ τ₁ for some Σ ∈ Σ̄ and Σ′ ∈ Σ̄′. By inverting well-kindedness, Γ ⊢ Σ : Ω and Γ, ᾱ ⊢ Σ′ : Ω. The first claim then follows by induction. Furthermore, by inverting subtyping, Γ ⊢ Σ ≤ Σ′[τ̄₂/ᾱ], and the second claim likewise follows by induction.
• Case lookup_π(∀ᾱ₁.Σ₁ →P Σ₂, ∀ᾱ′₁.Σ′₁ →P Σ′₂): Then τ₁ = λᾱ′₁.τ₃ such that both lookup_ᾱ₁(Σ′₁, Σ₁) ↑ τ̄′₁ with ᾱ ∉ fv(τ̄′₁), and lookup_{π ᾱ′₁}(Σ₂[τ̄′₁/ᾱ₁], Σ′₂) ↑ τ₃. Let Γ′₁ = Γ, ᾱ, ᾱ′₁. First, inverting the kinding rules, Γ, ᾱ₁ ⊢ Σ₁/Σ₂ : Ω and Γ′₁ ⊢ Σ′₁/Σ′₂ : Ω. For Σ₁, we can weaken to Γ′₁, ᾱ₁ ⊢ Σ₁ : Ω, which allows us to invoke the induction hypothesis for part 2 and conclude Γ′₁ ⊢ τ̄′₁ : κ̄_ᾱ₁. Because ᾱ ∉ fv(τ̄′₁), the result can be strengthened to Γ, ᾱ′₁ ⊢ τ̄′₁ : κ̄_ᾱ₁. Let Γ′₂ = Γ, ᾱ′₁. Obviously, Γ′₂ ⊢ [τ̄′₁/ᾱ₁] : Γ, ᾱ₁, and applying the substitution lemma, Γ′₂ ⊢ Σ₂[τ̄′₁/ᾱ₁] : Ω. We can also use the substitution lemma to reorder Γ′₁ and derive Γ′₂, ᾱ ⊢ Σ′₂ : Ω. We can now invoke the induction hypothesis on the codomains and get Γ′₂, ᾱ ⊢ π ᾱ′₁ : κ′ and Γ′₂ ⊢ τ₃ : κ′.
23 August 2014 Andreas Rossberg, Claudio Russo and Derek Dreyer

With Lemma 9.1, we know both Γ′₂, ᾱ ⊢ π : κ̄_ᾱ′₁ → κ′ and Γ ⊢ λᾱ′₁.τ₃ : κ̄_ᾱ′₁ → κ′. Given that the ᾱ′₁ are locally fresh by the usual variable convention, and thus don't occur in π, the former can be strengthened to Γ, ᾱ ⊢ π : κ̄_ᾱ′₁ → κ′ as required. To furthermore prove the type equivalence, we can invert the subtyping assumption, revealing Γ′₂ ⊢ Σ′₁[τ̄₂/ᾱ] ≤ ∃ᾱ₁.Σ₁ ↑ τ̄′₂ and Γ′₂ ⊢ Σ₂[τ̄′₂/ᾱ₁] ≤ Σ′₂[τ̄₂/ᾱ]. The substitution lemma implies Γ′₂ ⊢ Σ′₁[τ̄₂/ᾱ] : Ω. And we can apply weakening to the kinding of Σ₁, such that Γ′₂, ᾱ₁ ⊢ Σ₁ : Ω. Using Lemma 9.5, lookup_ᾱ₁(Σ′₁[τ̄₂/ᾱ], Σ₁[τ̄₂/ᾱ]) ↑ τ̄′₁[τ̄₂/ᾱ], but by variable containment we actually know that Σ₁[τ̄₂/ᾱ] = Σ₁ and τ̄′₁[τ̄₂/ᾱ] = τ̄′₁. Because that modified lookup derivation is still shorter than the current one, we can invoke the induction hypothesis (part 2) for the type equivalence claim, and get τ̄′₁ = τ̄′₂. As a consequence, Σ₂[τ̄′₂/ᾱ₁] = Σ₂[τ̄′₁/ᾱ₁]. So we know about the codomain that Γ′₂ ⊢ Σ₂[τ̄′₁/ᾱ₁] ≤ Σ′₂[τ̄₂/ᾱ]. Consequently, the induction hypothesis (part 1) also implies τ₃ = τ₂ ᾱ′ ᾱ′₁, or, via η-equivalence, λᾱ′₁.τ₃ = τ₂ ᾱ′.

Part 2:
• Case lookup_ε(Σ, Σ′): There is nothing to show.
• Case lookup_{α,ᾱ′}(Σ, Σ′): Then τ̄₁ = τ₁, τ̄′₁ and lookup_α(Σ, Σ′) ↑ τ₁ with fv(τ₁) ∩ ᾱ′ = ∅, and lookup_ᾱ′(Σ, Σ′[τ₁/α]) ↑ τ̄′₁. By inverting well-kindedness, Γ, α, ᾱ′ ⊢ Σ′ : Ω, which, via the substitution lemma, can be tweaked to Γ, ᾱ′, α ⊢ Σ′ : Ω. At the same time, weakening gives Γ, ᾱ′ ⊢ Σ : Ω. Invoking the induction hypothesis (part 1) yields Γ, ᾱ′, α ⊢ α : κ and Γ, ᾱ′ ⊢ τ₁ : κ. Inverting the former tells us κ = κ_α. And because the side condition says ᾱ′ ∩ fv(τ₁) = ∅, the latter can be strengthened to Γ ⊢ τ₁ : κ_α. We can invoke the substitution lemma to derive Γ, ᾱ′ ⊢ Σ′[τ₁/α] : Ω, which is enough to invoke the induction hypothesis again and conclude Γ ⊢ τ̄′₁ : κ̄_ᾱ′ as well.
Furthermore, for proving the type equivalence, inverting matching reveals Γ ⊢ Σ ≤ Σ′[τ₂, τ̄′₂/α, ᾱ′] such that Γ ⊢ τ₂ : κ_α and Γ ⊢ τ̄′₂ : κ̄_ᾱ′. And because τ₂, τ̄′₂ are all well-formed in plain Γ, the variables α, ᾱ′ don't appear free in them, so Σ′[τ₂, τ̄′₂/α, ᾱ′] = Σ′[τ̄′₂/ᾱ′][τ₂/α] = Σ′[τ₂/α][τ̄′₂/ᾱ′]. Substitution on Σ′ gives Γ, α ⊢ Σ′[τ̄′₂/ᾱ′] : Ω. By application of Lemma 9.5, we have lookup_α(Σ[τ̄′₂/ᾱ′], Σ′[τ̄′₂/ᾱ′]) ↑ τ₁[τ̄′₂/ᾱ′]. By the variable convention, fv(Σ) ∩ ᾱ′ = ∅. With the side condition on τ₁, thus, lookup_α(Σ, Σ′[τ̄′₂/ᾱ′]) ↑ τ₁. Because that still has a derivation shorter than the current one, we can invoke the induction hypothesis (part 1) again on the first lookup, to obtain that τ₁ = τ₂. Consequently, lookup_ᾱ′(Σ, Σ′[τ₂/α]) ↑ τ̄′₁ also holds (and still has a derivation smaller than the current one), and so does Γ, ᾱ′ ⊢ Σ′[τ₂/α] : Ω. Now, because Σ′[τ₂, τ̄′₂/α, ᾱ′] = Σ′[τ₂/α][τ̄′₂/ᾱ′], we can apply U-MATCH to construct a derivation for Γ ⊢ Σ ≤ ∃ᾱ′.Σ′[τ₂/α] ↑ τ̄′₂. We can once more apply the induction hypothesis to that derivation, which produces τ̄′₁ = τ̄′₂.

Corollary 9.7 (Uniqueness of type lookup with applicative functors)
Let Γ ⊢ Σ : Ω and Γ ⊢ ∃ᾱ.Σ′ : Ω and Γ ⊢ Σ ≤ ∃ᾱ.Σ′ ↑ τ̄. If lookup_ᾱ(Σ, Σ′) ↑ τ̄₁ and lookup_ᾱ(Σ, Σ′) ↑ τ̄₂, then τ̄₁ = τ̄₂ = τ̄.

Thanks to uniqueness, we can still read the lookup judgment as a quasi-deterministic algorithm. Let us now turn to completeness, which becomes significantly more involved as well:

Theorem 9.8 (Completeness of type lookup with applicative functors)
Let Γ ⊢ Σ : Ω valid and Γ ⊢ ∃ᾱ.Σ′ : Ω explicit.
1. If Γ ⊢ Σ ≤ Σ′[τ̄/ᾱ] and Γ ⊢ τ̄ : κ̄_ᾱ, and π rooted in Σ′ avoiding ᾱ, with π = α ᾱ₁ and α ∈ ᾱ and ᾱ ∩ ᾱ₁ = ∅, then lookup_π(Σ, Σ′) ↑ τ ᾱ₁ with τ = α[τ̄/ᾱ].
2. If Γ ⊢ Σ ≤ ∃ᾱ.Σ′ ↑ τ̄, then lookup_ᾱ(Σ, Σ′) ↑ τ̄.
Proof By simultaneous induction on the derivation of rootedness (implied by explicitness in part 2).

Part 1:
• Case π rooted in [= τ′ : κ]: Then π = τ′. Inverting subtyping, we know Σ = [= τ″ : κ] with τ″ = τ′[τ̄/ᾱ]. By substitution, π[τ̄/ᾱ] = τ′[τ̄/ᾱ], and hence transitively, τ″ = π[τ̄/ᾱ] = (α ᾱ₁)[τ̄/ᾱ] = τ ᾱ₁. So lookup_π([= τ″ : κ], [= τ′ : κ]) ↑ τ ᾱ₁.
• Case π rooted in [= π′ : τ]: Analogous.
• Case π rooted in {l̄′ : Σ̄′}: Then π rooted in {l̄′ : Σ̄′}.l avoiding ᾱ. Inverting subtyping, we know Σ = {l̄ : Σ̄} and Γ ⊢ {l̄ : Σ̄}.l ≤ {l̄′ : Σ̄′}.l[τ̄/ᾱ]. Inverting well-typedness and validity/explicitness, Γ ⊢ {l̄ : Σ̄}.l : Ω valid and Γ, ᾱ ⊢ {l̄′ : Σ̄′}.l : Ω explicit. Then by invoking the induction hypothesis, lookup_π({l̄ : Σ̄}.l, {l̄′ : Σ̄′}.l) ↑ τ ᾱ₁.
• Case π rooted in ∀ᾱ′₁.Σ′₁ →P Σ′₂: Then π ᾱ′₁ rooted in Σ′₂ avoiding ᾱ and fv(Σ′₁) ∩ ᾱ = ∅. Let Γ′ = Γ, ᾱ′₁. Inverting subtyping, we know Σ = ∀ᾱ₁.Σ₁ →P Σ₂ and Γ′ ⊢ Σ′₁[τ̄/ᾱ] ≤ ∃ᾱ₁.Σ₁ ↑ τ̄₁ and Γ′ ⊢ Σ₂[τ̄₁/ᾱ₁] ≤ Σ′₂[τ̄/ᾱ]. Moreover, inverting well-typedness and validity/explicitness gives Γ, ᾱ, ᾱ′₁ ⊢ Σ′₁/Σ′₂ : Ω explicit and, after weakening, Γ′, ᾱ₁ ⊢ Σ₁/Σ₂ : Ω explicit/valid, where ᾱ₁ is rooted in Σ₁. By substitution and Lemma 9.3, Γ′ ⊢ Σ′₁[τ̄/ᾱ] : Ω valid. By the typing rules and the definition of explicitness, Γ′ ⊢ ∃ᾱ₁.Σ₁ : Ω explicit. Consequently, we can invoke the induction hypothesis (part 2), and have lookup_ᾱ₁(Σ′₁[τ̄/ᾱ], Σ₁) ↑ τ̄₁. Because of the variable side condition on functor rootedness, Σ′₁[τ̄/ᾱ] = Σ′₁. Moreover, because α ∉ fv(Σ₁) ∪ fv(Σ′₁[τ̄/ᾱ]) by variable containment, Lemma 9.5 implies α ∉ fv(τ̄₁). That gives the first half of the definition of lookup in functors. Now, by soundness of type lookup, Γ′ ⊢ τ̄₁ : κ̄_ᾱ₁. By substitution and Lemma 9.3, Γ′ ⊢ Σ₂[τ̄₁/ᾱ₁] : Ω valid. We invoke the induction hypothesis a second time (this time on part 1) and get lookup_{π ᾱ′₁}(Σ₂[τ̄₁/ᾱ₁], Σ′₂) ↑ τ ᾱ₁ ᾱ′₁.
Consequently, we can derive lookup_π(Σ, Σ′) ↑ λᾱ′₁.τ ᾱ₁ ᾱ′₁, and by η-equivalence, λᾱ′₁.τ ᾱ₁ ᾱ′₁ = τ ᾱ₁.

Part 2: Inverting ∃ᾱ.Σ′ explicit implies ᾱ rooted in Σ′.
• Case ε rooted in Σ′: Then there is nothing to show.
• Case α, ᾱ′ rooted in Σ′: Then α rooted in Σ′ avoiding α, ᾱ′, and ᾱ′ rooted in Σ′. Inverting matching implies Γ ⊢ Σ ≤ Σ′[τ, τ̄′/α, ᾱ′] with Γ ⊢ τ : κ_α and Γ ⊢ τ̄′ : κ̄_ᾱ′. From inverting well-typedness and explicitness we get Γ, α, ᾱ′ ⊢ Σ′ : Ω explicit. Let π = α. Then we can invoke part 1 of the induction hypothesis to get lookup_α(Σ, Σ′) ↑ τ. By variable containment, fv(τ) ∩ ᾱ′ = ∅. By substitution and Lemma 9.3, Γ, ᾱ′ ⊢ Σ′[τ/α] : Ω explicit and ᾱ′ rooted in Σ′[τ/α], and so, Γ ⊢ ∃ᾱ′.Σ′[τ/α] : Ω explicit. Because Γ ⊢ τ : κ_α and Γ ⊢ τ̄′ : κ̄_ᾱ′, we know via variable containment that Σ′[τ, τ̄′/α, ᾱ′] = Σ′[τ/α][τ̄′/ᾱ′]. With rule U-MATCH, we can then construct the derivation Γ ⊢ Σ ≤ ∃ᾱ′.Σ′[τ/α] ↑ τ̄′. With that, we can invoke part 2 of the induction hypothesis, to also get lookup_ᾱ′(Σ, Σ′[τ/α]) ↑ τ̄′; together with lookup_α(Σ, Σ′) ↑ τ and the side condition on fv(τ), this yields lookup_{α,ᾱ′}(Σ, Σ′) ↑ τ, τ̄′.

As before, this property is sufficient to imply decidability of matching. (In addition, when we apply the matching rule U-MATCH algorithmically, we do not actually need to check the rule's side condition on the well-formedness of the types we have looked up, because it is already implied by soundness of lookup.)

Corollary 9.9 (Decidability of matching with applicative functors)
Assume that Γ is well-formed and valid, and also that Γ ⊢ τ ≤ τ′ ⤳ f is decidable for types well-formed under Γ. If Σ is valid and Ξ is explicit, and both are well-formed under Γ, then Γ ⊢ Σ ≤ Ξ ↑ τ̄ ⤳ f is still decidable in the presence of applicative functors and the relaxed definition of rootedness from Figure 41.
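To give a concrete feel for how the lookup rules of Fig. 42 read as an algorithm, here is a small executable sketch. It is our own illustration, not the paper's formal definition: types and signatures are encoded as tuples, value atoms and all kinding/well-formedness checks are omitted, and applications are β-reduced on the fly so that substituted solutions simplify. The names b1, b1' etc. transliterate β₁, β′₁.

```python
# Semantic signatures (toy): ('type', t) is the atom [= t : Ω],
# ('struct', {l: sig}) is {l : Σ}, ('funct', [x,...], dom, cod) is ∀ᾱ.Σ1 →P Σ2.
# Types: ('var', x) | ('con', c) | ('app', t, u) | ('lam', x, t).

def fv(t):
    if t[0] == 'var': return {t[1]}
    if t[0] == 'app': return fv(t[1]) | fv(t[2])
    if t[0] == 'lam': return fv(t[2]) - {t[1]}
    return set()

def mk_app(f, a):
    # beta-reduce on the fly, so a substituted lambda solution simplifies,
    # e.g. (λa'. b1 a') int  becomes  b1 int
    return subst_t(f[2], {f[1]: a}) if f[0] == 'lam' else ('app', f, a)

def subst_t(t, s):
    if t[0] == 'var': return s.get(t[1], t)
    if t[0] == 'app': return mk_app(subst_t(t[1], s), subst_t(t[2], s))
    if t[0] == 'lam':
        return ('lam', t[1], subst_t(t[2], {x: u for x, u in s.items() if x != t[1]}))
    return t

def subst_sig(g, s):
    if g[0] == 'type': return ('type', subst_t(g[1], s))
    if g[0] == 'struct':
        return ('struct', {l: subst_sig(h, s) for l, h in g[1].items()})
    xs, dom, cod = g[1], g[2], g[3]
    s2 = {x: u for x, u in s.items() if x not in xs}
    return ('funct', xs, subst_sig(dom, s2), subst_sig(cod, s2))

def head(p):
    while p[0] == 'app': p = p[1]
    return p[1]

def lookup(pi, sl, sr):
    # lookup_pi(sl, sr) ↑ τ, returning None on failure
    if sl[0] == 'type' and sr[0] == 'type':
        return sl[1] if sr[1] == pi else None
    if sl[0] == 'struct' and sr[0] == 'struct':
        for l in sl[1]:                 # ∃l: try common labels in order
            if l in sr[1]:
                t = lookup(pi, sl[1][l], sr[1][l])
                if t is not None: return t
        return None
    if sl[0] == 'funct' and sr[0] == 'funct':
        xs, dom, cod = sl[1], sl[2], sl[3]
        xs2, dom2, cod2 = sr[1], sr[2], sr[3]
        ts = lookup_seq(xs, dom2, dom)  # contravariant: sides swapped
        if ts is None or any(head(pi) in fv(t) for t in ts): return None
        pi2 = pi
        for x in xs2: pi2 = ('app', pi2, ('var', x))
        t = lookup(pi2, subst_sig(cod, dict(zip(xs, ts))), cod2)
        if t is None: return None
        for x in reversed(xs2): t = ('lam', x, t)
        return t
    return None

def lookup_seq(xs, sl, sr):
    # look the variables up in quantification order, substituting as we go
    ts = []
    for i, x in enumerate(xs):
        t = lookup(('var', x), sl, sr)
        if t is None or fv(t) & set(xs[i + 1:]): return None
        sr = subst_sig(sr, {x: t})
        ts.append(t)
    return ts

# The Ξ ≤ Ξ′ example from the text, with components F and G:
V = lambda x: ('var', x)
SIG = ('struct', {
    'F': ('funct', ['a'], ('struct', {'t': ('type', V('a'))}),
          ('struct', {'t': ('type', ('app', V('b1'), V('a')))})),
    'G': ('funct', ['a'], ('struct', {'t': ('type', V('a'))}),
          ('struct', {'t': ('type', ('app', V('b2'), V('a')))}))})
SIG2 = ('struct', {
    'F': ('funct', ["a'"], ('struct', {'t': ('type', V("a'"))}),
          ('struct', {'t': ('type', ('app', V("b1'"), V("a'")))})),
    'G': ('funct', [], ('struct', {'t': ('type', ('app', V("b1'"), ('con', 'int')))}),
          ('struct', {'t': ('type', V("b2'"))}))})
result = lookup_seq(["b1'", "b2'"], SIG, SIG2)
```

Run on this example, the sketch finds λa′.β₁ a′ for β′₁ first and then, after substituting it into the right-hand side, β₂ (β₁ int) for β′₂, illustrating the dependency-ordered threading of solutions.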
Decidability of elaboration then follows as well, even though the elaboration rules under the applicative functor extensions are no longer purely syntax-directed: rules M-FUNCT-I and M-FUNCT-P overlap. However, they have disjoint premises, and thus the overlap does not induce any non-determinism. In the case of the multiple rules for value bindings, we have ensured the absence of overlap via syntactic side conditions.

Corollary 9.10 (Decidability of elaboration with applicative functors)
Under valid and well-formed Γ, provided we can (simultaneously) show that core elaboration is decidable, all judgments of module elaboration with applicative functors are decidable, too.

Remark At this point, the alert reader may ask: Where did the alleged simplicity go? It is true that the above decidability proof is not as simple anymore. However, we would like to make a couple of observations. First, the complexity witnessed above is only concerned with (signature matching for) applicative functors. The basic system from Sections 2–6, with generative functors only, is not affected. It is not completely surprising that applicative functors are more complex, considering the difficulties they have caused historically. Second, the declarative semantics of the system with applicative functors is only mildly more involved than that of the basic system. From our perspective, the rules are still fewer and smaller than in any of the previous accounts of applicative functors—especially considering that they also do more. Moreover, the soundness proof from Section 9.1 is not substantially harder than the one for the basic system (Section 5.1)—and that arguably is all that is needed to understand the type system. What gets more complicated is the algorithm to implement type lookup (Section 9.2)—or rather, the proof that this algorithm (which, by itself, is only a few lines of code) is complete.
However, this algorithm arguably is only relevant to implementors, and its correctness proof only interesting to experts. It is also worth noting that a fair amount of the encompassing complexity may actually be incidental. It is mainly due to the fact that our rules, unlike in most other systems, separate type lookup from subtyping. We chose this design because it makes the declarative subtyping rules pleasantly minimalist. For the basic system it also makes for an almost trivial lookup algorithm (Section 5.2). However, with the generalization to applicative functors, this factoring leads to a more complicated algorithm: in that system, lookup and subtyping become intertwined, which means that to separate them, lookup has to duplicate some of the work of subtyping, and its correctness proof needs to make sure that both algorithms operate in sync. The issue of rootedness could be avoided by decorating semantic signatures with "locators" (compare with Rossberg & Dreyer (2013)). A more traditional, interleaved, and algorithmic definition of matching would eliminate the need for a correctness proof altogether (while slightly complicating the declarative semantics and its soundness proof). We leave further exploration of this option to future work. Finally, it is also worth pointing out that our novel tracking of dynamic purity and dynamic module equivalence (Section 8) turns out to be only a minor extension to the system. In particular, it does not affect most of the definitions or proofs in a significant way—since value paths are modeled as phantom types, they are handled by the exact same mechanisms as ordinary abstract types.

10 Mechanization in Coq

One of our original motivations for the F-ing approach was that a simpler semantics for modules would be an easier starting point for language mechanization.
As a proof of concept, we embarked on mechanizing the elaboration semantics of Section 4 and Section 6 (but omitting normalization), and proved the soundness result of Theorem 5.1, including module packages. We did so using Coq (Coq Development Team, 2007) and the locally nameless approach (LN) of Aydemir et al. (2008). (There is no reason we could not have used other proof assistants such as Twelf or Isabelle; but we were interested in learning Coq and testing the effectiveness of the locally nameless approach.) This effort required roughly 13,000 lines of Coq code. As inexpert users of Coq, we made little use of automation, so most likely, the proofs could easily be shortened significantly. As with any mechanization, there are some minor differences compared with the informal system. Our mechanized Fω is simpler than the one we use here in that it supports just binary products, not records. Instead, we encode ordered records as derived forms using pairs, with derived typing rules, and target those during elaboration. Ordered records are easier to mechanize, yet adequate for elaboration. The Fω mechanization does not allow rebindings of term variables in the context as our informal presentation does. Indeed, using the LN approach, subderivations arising from binding constructs have to hold for all locally fresh names. In the mechanization, we had to abandon the use of the injection from source identifiers to Fω variables, and instead use a translation environment that twins source identifiers (which may be shadowed) with locally fresh Fω variables (which may not). In this way, source identifiers are used to determine record labels, while their twinned variables are used to translate free occurrences of identifiers. Lee et al. (2007) use a similar trick in their Twelf mechanization of Standard ML. Our use of a non-injective record encoding means that different semantic signatures may be encoded by the same type.
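The ordered-record encoding and its non-injectivity are easy to visualize with a small sketch (our own, unrelated to the actual Coq development): a record becomes a right-nested pair chain, and projection is a fixed sequence of second-then-first projections determined by the label's position.

```python
# {l1=v1, ..., ln=vn} as a right-nested pair chain terminated by ():
# {a=1, b=2, c=3} becomes (1, (2, (3, ()))). The label list plays the role
# of the derived typing rule: it tells projection how many snd-steps
# precede the final fst.
def encode(values):
    p = ()
    for v in reversed(values):
        p = (v, p)
    return p

def project(labels, p, l):
    for l2 in labels:
        if l2 == l:
            return p[0]   # fst
        p = p[1]          # snd
    raise KeyError(l)

r = encode([1, 2, 3])
# Non-injectivity: the very same nested pair also encodes a record with a
# different label list, e.g. ['x', 'y', 'z'] -- hence the need for a
# separate syntactic class of semantic signatures in the mechanization.
```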
To avoid ambiguity, the mechanization therefore introduces a special syntactic class of semantic signatures (corresponding to the grammar in Figure 9), and separately defines the interpretation of semantic signatures as System Fω types by an inductive definition (again much like the syntactic sugar definitions in Figure 9). Consequently, the mechanized soundness theorems state that if C ⊢ M : Ξ ⤳ e, then C◦ ⊢ e : Ξ◦, where ◦ denotes the interpretation of elaboration environments and semantic signatures into plain Fω contexts and types. In retrospect, it would perhaps have been simpler to just beef up our target language with primitive records (as we have done on paper here). In any case, this issue is orthogonal to the rest of the mechanization effort. Our experience of applying the LN approach as advertised was more painful than we had anticipated. Compared to the sample LN developments, ours was different in making use of various forms of derived n-ary (as well as basic unary) binders and in dealing with a larger number of syntactic categories. Although we implemented the n-ary binders as derived forms over the unary ones provided by basic Fω, we still needed derived lemmas for n-ary substitution (substituting locally closed terms for free names) and n-ary open (for opening binders with locally closed terms). Then we needed lemmas relating the commutation of all the combinations of n-ary and unary operations. The final straw was dealing with rules (notably for sequencing of binding and declarations) that required us to extend the scope of bindings over terms from subderivations. Doing this the recommended way requires the introduction of a third family of closing operations (the inverse of open), for turning named variables back into bound indices, together with a plethora of lemmas needed to actually reason about them (again with unary and n-ary versions of close and all possible commutations).
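To illustrate the open/close machinery discussed above, here is a minimal locally nameless representation for untyped λ-terms (our own toy illustration, far smaller than the mechanized Fω): bound variables are de Bruijn indices, free variables are names, open replaces an index with a locally closed term when descending under a binder, and close is the inverse operation needed when re-extending a binding's scope.

```python
# Locally nameless terms: ('b', i) bound index, ('f', x) free name,
# ('lam', body) binder (index 0 refers to it), ('app', t, u).

def open_t(t, u, k=0):
    # replace bound index k by the (locally closed) term u
    if t[0] == 'b': return u if t[1] == k else t
    if t[0] == 'f': return t
    if t[0] == 'lam': return ('lam', open_t(t[1], u, k + 1))
    return ('app', open_t(t[1], u, k), open_t(t[2], u, k))

def close_t(t, x, k=0):
    # the inverse: turn the free name x back into bound index k
    if t[0] == 'b': return t
    if t[0] == 'f': return ('b', k) if t[1] == x else t
    if t[0] == 'lam': return ('lam', close_t(t[1], x, k + 1))
    return ('app', close_t(t[1], x, k), close_t(t[2], x, k))

# Entering the binder λ. 0 0 with a fresh name y, and leaving it again:
body = ('app', ('b', 0), ('b', 0))
opened = open_t(body, ('f', 'y'))
reclosed = close_t(opened, 'y')
```

Even at this scale one can see why the infrastructure grows: each combination of operation (subst, open, close), arity (unary, n-ary), and variable class needs its own commutation lemmas in a mechanized setting.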
We managed to work around these two cases by expressing the desired properties indirectly using additional (and thus unsatisfactory) premises stipulating equations between opened terms. In the end, out of a total of around 550 lemmas, approximately 400 were tedious "infrastructure" lemmas; only the remainder had direct relevance to the meta-theory of Fω or elaboration. The number of required infrastructure lemmas appears to be quadratic in the number of variable classes (type and value variables for us), the number of "substitution" operations needed per class (we got away with only using LN's subst and open, and avoiding close) and the arity classes (unary and n-ary) of binding constructs. So we cannot, hand-on-heart, recommend the vanilla LN style for anything but small, kernel language developments. It would, however, be interesting to see whether more recent proposals to streamline the LN approach (Aydemir et al., 2009) could significantly shorten larger developments like ours, without obscuring the presentation. Despite the tedium, the mechanization still turned out to be relatively straightforward overall, and did not require any technical ingenuity. We believe that a Coq user with more experience than us (or somebody with respective experience using another proof assistant) but without specialist background in modules, could easily have carried it out without much effort.

11 Related work and discussion

The literature on ML module semantics is voluminous and varied. We will therefore focus on the most closely related work. A more detailed history of various accounts of ML-style modules can be found in Chapter 2 of Russo's thesis (1998; 2003).

Existential types for ADTs Mitchell & Plotkin (1988) were the first to connect the informal notion of "abstract type" to the existential types of System F. In F, values of existential type are first-class, in the sense that the construction of an ADT may depend on run-time information.
We exploit this observation in our elaboration of sealed structures, and more directly, in our support for modules as first-class values (Section 6), both of which are simply existential packages. Cardelli & Leroy (1990) explained how to interpret the dot notation, which arises naturally when defining ADTs as modules, via a program transformation into uses of existentials. The idea is to unpack every existential immediately, such that the scope of the unpack matches the scope of the module definition. Our elaboration’s use of unpacking and repacking can be viewed as a more compositional extension of this basic idea. Dependent type systems for modules In a very influential position paper, MacQueen (1986) criticized existential types as a basis for modular programming, arguing that the closed-scope elimination construct for existentials (unpack) is too weak and awkward to be usable in practice. MacQueen instead promoted the use of dependent function types and “strong sums” (i.e., dependently-typed record/tuple types) as a basis for modular programming. Since then, there has been a long line of work on understanding and evolving the ML module system in terms of increasingly more refined dependent type theories (Harper & Mitchell, 1993; Harper et al., 1990; Harper & Lillibridge, 1994; Leroy, 1994; Leroy, 1996; Leroy, 1995; Shao, 1999; Dreyer et al., 2003; Dreyer, 2005). On the design side, the work on dependent type systems led to significant improvements in the expressiveness of ML modules, most notably the idea of translucency—i.e., the ability to include both abstract and transparent type declarations in signatures—which was independently proposed by Harper and Lillibridge (1994) and Leroy (1994). On the semantics side, however, the use of dependent type formalisms unleashed quite a can of worms. 
Several ideas and issues pop up again and again in the literature, and for the most part the "F-ing modules" approach either renders these issues moot or offers straightforward ways of handling them. One recurrent notion is phase separation, which is essentially the observation that the "dependent" types in these module systems are not really dependent. The signature of a module may depend on the type components of another module, but not on its value components. Thus, as Harper, Mitchell & Moggi (1990) showed (for an early ML-style module system without translucency or sealing), one can "phase-split" a (higher-order) module into an Fω type (representing its type components) and an Fω expression (representing its value components). Our approach of interpreting ML modules into Fω is of course completely compatible with the idea of phase separation, since we don't pretend our type system is dependent in the first place. Another recurrent notion is projectibility—that is, from which module expressions can one project out the type and value components? As Dreyer, Crary & Harper (2003) observed, the differences between several different dialects of the ML module system can be characterized by how they define projectibility. Most dependent module type systems define projectibility by only allowing projections from modules from a certain restricted syntactic class of paths. We also employ paths, but define them semantically to be any module expressions whose signatures do not mention any "local" (i.e., existentially-quantified) abstract types. We consider this criterion to be simpler to understand and less ad hoc. Russo (1998) describes and formalizes a similar notion of "generalized path", with an analogous type-based restriction, as part of his system of higher-order functors.
But the motivation is solely the ability to express paths like (F M).t, whereas for F-ing modules, we harvest their expressive power as a way of simplifying the language and its rules. A common stumbling block in dependent module type systems is the so-called avoidance problem. Originally observed in the setting of (a bounded existential extension of) System F≤ by Ghelli & Pierce (1998), the avoidance problem is roughly that a module might not have a principal signature (i.e., minimal in the subtyping hierarchy) that “avoids” (i.e., does not depend on) some local abstract type. As principal signatures are important for practical typechecking, dependent module type systems typically either lack complete typechecking algorithms (e.g., Lillibridge (1997) and Leroy (2000)) or else require (at least in some cases) extra signature annotations when leaving the scope of an abstract type (e.g., Shao (1999), Dreyer et al. (2003)). In contrast, under our approach the avoidance problem does not arise at all: the semantic signature ∃α.Σ of a module M keeps track of all the abstract types α defined by M, even those which have “gone out of scope” in the sense that they are not “rooted” anywhere in Σ (to use the terminology of Section 5). Thus, the only point at which we need to “avoid” anything is when we typecheck a path; at that point, we need to make sure that its signature does not depend on any local abstract types. Of course, at that point the avoidance check is not a “problem” but rather the crucial defining element of well-formedness for paths. Elaboration semantics for modules Our avoidance of the avoidance problem is due primarily to our use of an elaboration semantics, which gives us the flexibility to classify a module using a semantic signature Ξ that is not the translation of any syntactic signature S (i.e., it is valid, but not explicit, as defined in Section 5.2). Harper & Stone (2000) exploit elaboration in a similar fashion and to similar ends. 
One downside of this approach, some (e.g., Shao (1999)) would argue, is that one loses "fully syntactic" signatures—i.e., the ability to express the full static information about any module using a syntactic signature, and thus typecheck the module independently from the context in which it is used. But it is not clear that in practice this is really such a big deal, because a programmer can always avoid "non-syntactic" signatures by either adding a binding or an explicit signature annotation. In fact, Shao's approach to ruling out non-syntactic signatures would simply amount to restricting the projection rule M-DOT (Figure 14) in the same way as the path rule P-MOD (Figure 17) in our system, thereby forcing the programmer to take these measures. Perhaps a more serious concern is: how does the elaboration semantics we have given here correspond to existing specifications of ML modules, such as the Definition of SML or Harper-Stone? In what sense are we formalizing the semantics of "ML modules"? The short answer is that it is very difficult to prove a precise correspondence between different accounts of the ML module system. In the few cases where such proofs have been attempted, the formalizations in question were either not representative of the full ML module system (e.g., Leroy (1996)) or were lacking some key component, such as a dynamic semantics (e.g., Russo (1998)). Moreover, one of the main advantages of our approach (we believe) is that it is simpler than previous approaches. We are not so interested in "correctness", i.e., whether our semantics precisely matches that of Standard ML, the archaeological artifact; rather, we wish to suggest a way forward in the understanding and evolution of ML-style module systems. That said, we believe (based on experience) that our semantics for modules in Section 4 is essentially a conservative extension of SML's, as well as the generative fragment of Moscow ML (Russo, 2003).
Higher-order modules and applicative functors The main way in which the language defined in Section 4 diverges from Standard ML is its support for higher-order modules, which constitute a relatively simple extension if one sticks to the generative semantics for functors. (Our semantics for higher-order modules in that section is similar to that of Leroy (1994; 1996) and Harper & Lillibridge (1994).) However, as a number of researchers noted in the early years of ML modules, the generative semantics is also fairly restrictive, because it assumes conservatively that any types specified abstractly in the result signature of an unknown functor will be generated anew every time the functor is applied. For example, if a higher-order functor H has a functor argument F of type S → S, then H must account for the possibility that F is instantiated with an impure/generative functor and treat it as such during the typechecking of H’s body, even though H may in fact be instantiated with a transparent F like the identity functor. Thus, under a generative semantics, abstraction over functor arguments can result in the rejection of seemingly reasonable programs due to insufficient propagation of type information. Harper, Mitchell & Moggi (1990) were the first to propose the use of an applicative semantics (although they did not call it that) for achieving more flexible typechecking of higher-order functors. Leroy (1995) later popularized the idea of applicative functor semantics in the setting of a more fully realized module language, and it is his semantics that serves as the basis of OCaml’s module system. In addition to better supporting higher-order modules, Leroy also motivated applicative semantics by the desire to treat semantically equivalent types (e.g., integer sets) as equivalent, even if they were created by separate (but equal) instantiations of the same functor. 
Indeed, this latter motivation has in practice turned out to be arguably more compelling than the one concerning higher-order modules. As we pointed out at the beginning of Section 8, the applicative functor semantics does not obviate generative semantics—both are appropriate in different instances—but constructing a language that supports and reconciles both forms has proven very difficult. Several proposals have been made (Shao, 1999; Russo, 2003; Dreyer et al., 2003), but all of them suffer from breaking abstraction safety (cf. Section 8 for examples). Our semantics of applicative functors in Sections 7 and 8 is novel and does not correspond directly to any existing account. As we explained in those sections, our motivation has been to provide an account of applicative functors that is (a) simple, (b) abstraction-safe, and (c) not overly conservative. To achieve simplicity, we adopt the adage that "applicative = pure" and "generative = impure". To achieve abstraction safety, we employ "stamps" (modeled as hidden abstract types) to statically track the identity of values, so that, for instance, the identity of the type of sets can depend (as it should) on the identity of the comparison function by which its elements are ordered. While this approach is necessarily conservative (in order to ensure decidability of typechecking), it is no more conservative than other abstraction-safe designs, and we have tried to be as liberal as possible by tracking identity at the level of individual value components.

Technically, our semantics for applicative functors is based closely on the formulation in Russo's thesis (Russo, 1998). Although we believe the applicative higher-order modules of Russo (1998) to be sound, their subsequent integration with Standard ML modules in Moscow ML turned out not to be (Dreyer et al., 2002).
In an attempt at backward compatibility, Moscow ML's early releases supported both applicative and generative higher-order functors. The typing relation was a seductively straightforward integration of both the generative and applicative rules. Dreyer's counterexample to type soundness is recounted by Russo (2003), together with a relatively simple, if unproven, fix. Even if a revised Moscow ML can be proven type sound, we claim that the marriage of applicative and generative functors presented in this article remains superior, by offering abstraction safety over and above simple type safety. In our refined system, only those abstract types whose invariants are guaranteed not to be tied to mutable state are rendered applicative. Moscow ML provides no such guarantee and freely allows the coercion of a generative into an applicative functor (by simple η-expansion). We credit Biswas (1995) with discovering the skolemization technique for typing applicative higher-order functors: he used it to introduce higher-kinded universal quantifiers, parameterizing a higher-order functor on its argument's type dependencies in order to propagate actual dependencies at application of the functor (by implicit type application). The contribution of Russo (1998) was to additionally use higher-kinded existential quantifiers to abstract (and thus hide) concrete type dependencies at module sealing (by an implicit pack). Shao (1999) uses a similar skolemization technique, with the difference being that he collects all abstract types of a given module into a single variable of higher-order product kind (the module's "flexroot"), instead of quantifying them separately in a sequence of individual variables. Unfortunately, employing this "uncurried" formulation would necessitate jumping through extra hoops to handle the avoidance problem or constructs like where (besides relying on a mild extension to Fω's type language).
We point out that the addition of applicative signatures alone (i.e., the basic system from Section 4, extended with only the rules from Figure 26, but without the refined module elaboration from Figure 29) subsumes the more limited applicative functors of Shao (1999). Shao’s system, like ours, distinguishes between opaque and transparent functor signatures, with the latter using higher-order type constructors to abstract over static type dependencies. The difference is that in Shao’s system, the only way to introduce an applicative functor is to seal a fully transparent functor by an applicative functor signature. This simple design choice has an unfortunate side-effect: in Shao’s system, unlike ours, a user cannot use sealing within the body of an applicative functor. The ability to use sealing inside an applicative functor is a desirable feature, since in principle one may wish to impose abstraction boundaries at any point inside a module, and indeed it is supported by most other designs, including our own. Furthermore, we depend crucially on this feature in our semantics of value sharing (via phantom types), which we depend on in turn to ensure abstraction safety. Specifically, we treat every value binding in a module as if it were a little sealed submodule, introducing an abstract phantom type to statically represent the identity of the value. In a system like Shao’s, such an approach would automatically cause any functor (with a value component in it) to be treated as generative. Consequently, we do not know how to effectively enforce abstraction safety in a system like Shao’s.
The module calculus of Dreyer, Crary & Harper (2003) provides support for both the “strong” Shao-style sealing construct, which demands generativity of (immediately) enclosing functors, and a “weak” variant of sealing, which does not demand generativity and may thus be used inside applicative functors. Dreyer et al.
account for these two variants in terms of a dichotomy between “dynamic effects” and “static effects”. In our system, we have only retained the weak variant of sealing (adjusted to properly ensure abstraction safety), because our point of view is that the need for generativity has solely to do with the computational effects in the module being sealed, and that sealing per se is not a computational effect. Of course, if one really wished to “strongly seal” a pure module in our language, one could easily do so by inserting an impure no-op expression into the body of the module, thus inducing a pro forma effect. But we see no compelling reason for wanting to strongly seal a pure module. An alternative semantics for higher-order functors was proposed by MacQueen & Tofte (1994), but it relied fundamentally on the idea of re-elaborating a functor’s body at each application. In recent work, Kuan & MacQueen (2009) have investigated how to account for such a semantics in a more satisfactory way by tracking the “static effects” of higher-order functors in an “entity calculus”. However, it remains unclear how to reconcile their approach, which underlies the module system of modern-day SML/NJ, with the tradition of type-theoretic accounts of ML modules to which “F-ing modules” belongs.
Interpreting ML modules into Fω
We are certainly not the first to explain ML modules by translation into Fω. Harper, Mitchell & Moggi (1990) give a “phase-splitting” translation of an early ML module calculus into an Fω-like language, but do not yet deal with the crucial aspect of type generativity. As mentioned above, Cardelli & Leroy (1990) show how a calculus with dot notation—i.e., with a mildly dependently-typed variant of System F existentials whose witness type is projectible on the type level—can be translated down to plain System F existentials.
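The common starting point of all these translations, including ours, is Mitchell & Plotkin’s (1988) encoding of abstract types as existentials. As a simplified illustration (the paper’s actual semantic signatures also record type components, which we elide here), sealing a structure against an opaque signature,

```latex
\[
  (\mathsf{struct}\ \mathsf{type}\ t = \mathsf{int};\ \mathsf{val}\ v = 0\ \mathsf{end})
  \ {:>}\ (\mathsf{sig}\ \mathsf{type}\ t;\ \mathsf{val}\ v : t\ \mathsf{end}),
\]
% elaborates to an ordinary F-omega existential package,
\[
  \mathsf{pack}\ \langle \mathsf{int},\ \{\, v = 0 \,\} \rangle
  \ :\ \exists \alpha.\ \{\, v : \alpha \,\},
\]
```

which clients consume with unpack, keeping the witness type abstract in their scope. The dot-notation calculi discussed below can be seen as relaxing the closed-scope discipline of this unpack.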
Shao (1999) gives a multi-stage translation of his more advanced module calculus into a language called FTC, which is a variant of Fω enriched with Cardelli/Leroy-style dot notation and a restricted form of dependent products for expressing functors. However, he does not provide any translation of this language into Fω itself, and it is not obvious how to extend the Cardelli/Leroy translation to FTC. Shan (2004) presents a type-directed translation of the Dreyer-Crary-Harper module calculus (Dreyer et al., 2003) into Fω. His translation naturally uses some techniques similar to ours. In particular, his translation of signatures closely mirrors that of Russo (1998; 1999; 2003), and to translate module terms, he opens and repacks existentials in the same way we do. Our elaboration also borrows from Shan the technique of abstracting over the whole environment for the translation of applicative functors. The biggest difference between these previous translations and ours is that the previous ones all start from a pre-existing dependently-typed module language and show how to translate it down to Fω. This translation is directed by (and impossible without) the types and contexts from the source language. We instead use the type structure of Fω in order to give a static semantics for ML modules directly. Thus, we feel our approach is simpler and more accessible to someone who already understands Fω and does not want to learn a new dependent type system just in order to understand the semantics of ML modules.
As explained in the introduction, our approach can be viewed as giving an evidence translation, and thus a soundness proof, for (a variant of) the static semantics of SML modules given in Russo’s thesis (Russo, 1998; Russo, 1999).
Russo started with the Definition of Standard ML (Milner et al., 1997), and observed that its ad hoc “semantic object” language could be understood quite clearly in terms of universal and existential types. A key observation, also made by Elsman (1999), was that the state of generated type variables, threaded as it was through the static semantics of SML, could be presented more declaratively as the systematic introduction and elimination of existential types. Given the non-dependent, Fω-like structure of the semantic objects, it was also relatively straightforward to extend them to higher-order and first-class modules (Russo, 1998; Russo, 2000). We point the interested reader to Chapter 9 of Russo’s thesis (1998; 2003) for an in-depth comparison with the non-dependent approach to modules that he pioneered (and that the F-ing approach is derived from), giving targeted examples to pinpoint the problems with dependently typed accounts and how they are avoided by this approach. It is worth noting that our approach also scales to handle more ambitious module-language extensions, at least if one is willing to beef up the target language somewhat. Inspired by Russo’s work, Dreyer proposed an extension of Fω called RTG (Dreyer, 2007a), which he and coauthors later used as the target of an elaboration semantics for recursive modules (Dreyer, 2007b), mixin modules (Rossberg & Dreyer, 2013), and modules in the presence of type inference (Dreyer & Blume, 2007). These elaboration semantics are similar to ours in that they use the type structure of the (beefed-up) Fω language in order to directly encode semantic signatures for ML-style modules. However, our semantics is significantly simpler, since we are only trying to formalize a non-recursive ML-like module system and we are only using plain Fω as the target language.
Mechanization of module semantics
Lee et al.
(2007) mechanized the meta-theory of full Standard ML, based on a variant of Harper-Stone elaboration given by Dreyer in his thesis (Dreyer, 2005). It is difficult to compare the mechanizations, since theirs uses Twelf. However, it is worth noting that a significant piece of their mechanization is devoted to proving meta-theoretic properties of their target language, which employs singleton kinds (Stone & Harper, 2006). In contrast, since our internal language is so simple and well-studied, we largely took it for granted (though we have proved the Fω properties that we use).
Direct modular programming in Fω
Lastly, several authors have advocated doing modular programming directly in a rich Fω-like core language like Haskell’s (Jones, 1996; Shields & Peyton Jones, 2002; Shan, 2004), using universal types for client-side data abstraction and existential types for implementor-side data abstraction. Several other authors (MacQueen, 1986; Harper & Pierce, 2005) have argued why this approach is not practical. The common theme of the arguments is that Fω is too low-level a language to program modules in directly, and that ML modules provide a much higher-level idiom for modular programming. More recently, Montagu & Rémy (2009) have proposed directly programming in a variant of Dreyer’s RTG (Dreyer, 2007a) (see above), because RTG addresses to some extent the limitations of closed-scope existential elimination. However, RTG is still quite low-level compared to ML modules. In some sense, the point of the present article is to observe that the high-level elegance of ML modules and the simplicity of Fω typing are not mutually exclusive. One can understand ML modules precisely as a stylized idiom—a design pattern, if you will—for constructing Fω programs.
The key benefit of programming this idiom using the ML module system, instead of directly in Fω, is that elaboration offers a significant degree of automation (e.g., by inferring signature coercions and implicitly unpacking/repacking existentials), which in practice is extremely useful.
12 Conclusion
In this article, we have shown that it is possible to give a direct, type-theoretic semantics for a comprehensive ML-style module system by elaboration into standard System Fω. In so doing, we have also offered a novel account of applicative vs. generative functor semantics (via a simple “pure/impure” distinction), which avoids the problems with abstraction safety that have plagued previous accounts. Our main focus has been on semantics—a concern that we have not addressed in this article is implementation. As already alluded to in several places (such as Section 4 and Section 7.3), we do not expect a real-world compiler to implement the F-ing rules verbatim. Obvious optimizations include: eliminating redundant administrative redexes at compile time, introducing type tuples to group semantic type parameters into single variables (effectively reconstructing structure stamps), lazily expanding type abbreviations, and minimizing the environments abstracted over by applicative functors. It also seems preferable for compilers to reconstruct user-friendly syntactic type expressions where possible when presenting semantic types to users. Most of these techniques are well known, and we do not envision any particular difficulties in applying them to our system. But such concerns are outside the scope of this article. Finally, while our semantics of ML modules accounts for almost all of the major features that can be found either in the literature or in the various implemented dialects of ML, there is one key feature we have left out: recursive modules.
As Dreyer (2007a) has observed, the combination of recursion and ML-style abstract data types seems to demand an underlying type theory that goes beyond plain System Fω, and moreover, in our opinion, doing recursive modules “right” requires abandoning some of the fundamental design decisions of traditional ML modules. Nevertheless, the basic ideas of the “F-ing” approach still apply: a semantics for recursive modules can be given using a variation of our elaboration, and targeting a language that is a conservative extension of Fω. The first and last authors’ work on MixML, a module system with recursive mixin composition, explores precisely that path (Rossberg & Dreyer, 2013).
References
Ahmed, Amal, Dreyer, Derek, & Rossberg, Andreas. (2009). State-dependent representation independence. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Atkey, Robert. (2012). Relational parametricity for higher kinds. EACSL Annual Conference on Computer Science Logic (CSL).
Aydemir, Brian, Charguéraud, Arthur, Pierce, Benjamin C., Pollack, Randy, & Weirich, Stephanie. (2008). Engineering formal metatheory. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Aydemir, Brian, Weirich, Stephanie, & Zdancewic, Steve. (2009). Abstracting syntax. Technical report.
Biswas, Sandip K. (1995). Higher-order functors with transparent signatures. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Cardelli, Luca, & Leroy, Xavier. (1990). Abstract types and the dot notation. Pages 479–504 of: Programming Concepts and Methods. IFIP State of the Art Reports. North Holland.
Coq Development Team. (2007). The Coq proof assistant reference manual. INRIA. http://coq.inria.fr/.
Dreyer, Derek. (2005). Understanding and Evolving the ML Module System. Ph.D. thesis, Carnegie Mellon University.
Dreyer, Derek. (2007a). Recursive type generativity.
Journal of Functional Programming (JFP), 17(4&5), 433–471.
Dreyer, Derek. (2007b). A type system for recursive modules. ACM SIGPLAN International Conference on Functional Programming (ICFP).
Dreyer, Derek, & Blume, Matthias. (2007). Principal type schemes for modular programs. European Symposium on Programming (ESOP).
Dreyer, Derek, Crary, Karl, & Harper, Robert. (2002). Moscow ML’s higher-order modules are unsound. Posting to Types forum, 17 September.
Dreyer, Derek, Crary, Karl, & Harper, Robert. (2003). A type system for higher-order modules. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Elsman, Martin. (1999). Program modules, separate compilation, and intermodule optimisation. Ph.D. thesis, University of Copenhagen.
Geuvers, Herman. (1992). The Church-Rosser property for βη-reduction in typed λ-calculi. IEEE Symposium on Logic in Computer Science (LICS).
Ghelli, Giorgio, & Pierce, Benjamin. (1998). Bounded existentials and minimal typing. Theoretical Computer Science (TCS), 193(1-2), 75–96.
Goldfarb, Warren D. (1981). The undecidability of the second-order unification problem. Theoretical Computer Science (TCS), 13, 225–230.
Harper, Robert. (2012). Programming in Standard ML. Working draft available at: http://www.cs.cmu.edu/~rwh/smlbook/.
Harper, Robert, & Lillibridge, Mark. (1994). A type-theoretic approach to higher-order modules with sharing. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Harper, Robert, & Mitchell, John C. (1993). On the type structure of Standard ML. ACM Transactions on Programming Languages and Systems (TOPLAS), 15(2), 211–252.
Harper, Robert, & Pierce, Benjamin C. (2005). Design considerations for ML-style module systems. Chap. 8 of: Pierce, Benjamin C. (ed), Advanced topics in types and programming languages. MIT Press.
Harper, Robert, & Stone, Chris. (2000). A type-theoretic interpretation of Standard ML. Proof, language, and interaction: Essays in honor of Robin Milner.
MIT Press.
Harper, Robert, Mitchell, John C., & Moggi, Eugenio. (1990). Higher-order modules and the phase distinction. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Jones, Mark P. (1996). Using parameterized signatures to express modular structure. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Kuan, George, & MacQueen, David. (2009). Engineering higher-order modules in SML/NJ. International Symposium on the Implementation and Application of Functional Languages (IFL).
Launchbury, John, & Peyton Jones, Simon L. (1995). State in Haskell. LISP and Symbolic Computation (LASC), 8(4), 293–341.
Lee, Daniel K., Crary, Karl, & Harper, Robert. (2007). Towards a mechanized metatheory of Standard ML. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Leifer, James, Peskine, Gilles, Sewell, Peter, & Wansbrough, Keith. (2003). Global abstraction-safe marshalling with hash types. ACM SIGPLAN International Conference on Functional Programming (ICFP).
Leroy, Xavier. (1994). Manifest types, modules, and separate compilation. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Leroy, Xavier. (1995). Applicative functors and fully transparent higher-order modules. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Leroy, Xavier. (1996). A syntactic theory of type generativity and sharing. Journal of Functional Programming (JFP), 6(5), 1–32.
Leroy, Xavier. (2000). A modular module system. Journal of Functional Programming (JFP), 10(3), 269–303.
Lillibridge, Mark. (1997). Translucent sums: A foundation for higher-order module systems. Ph.D. thesis, Carnegie Mellon University.
MacQueen, David B. (1986). Using dependent types to express modular structure. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
MacQueen, David B., & Tofte, Mads. (1994). A semantics for higher-order functors.
European Symposium on Programming (ESOP).
Milner, Robin, Tofte, Mads, & Harper, Robert. (1990). The definition of Standard ML. MIT Press.
Milner, Robin, Tofte, Mads, Harper, Robert, & MacQueen, David. (1997). The definition of Standard ML (revised). MIT Press.
Mitchell, John C., & Plotkin, Gordon D. (1988). Abstract types have existential type. ACM Transactions on Programming Languages and Systems (TOPLAS), 10(3), 470–502.
Montagu, Benoît, & Rémy, Didier. (2009). Modeling abstract types in modules with open existential types. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL).
Paulson, L. C. (1996). ML for the working programmer, 2nd edition. Cambridge University Press.
Peyton Jones, Simon. (2003). Wearing the hair shirt: a retrospective on Haskell. Invited talk, ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL). http://research.microsoft.com/~simonpj.
Romanenko, Sergei, Russo, Claudio V., & Sestoft, Peter. (2000). Moscow ML Version 2.0. http://www.dina.kvl.dk/~sestoft/mosml.
Rossberg, Andreas. (1999). Undecidability of OCaml type checking. Posting to Caml mailing list, 13 July.
Rossberg, Andreas, & Dreyer, Derek. (2013). Mixin’ up the ML module system. ACM Transactions on Programming Languages and Systems (TOPLAS), 35(1), Article 2.
Rossberg, Andreas, Le Botlan, Didier, Tack, Guido, & Smolka, Gert. (2004). Alice through the looking glass. Trends in Functional Programming (TFP).
Rossberg, Andreas, Russo, Claudio V., & Dreyer, Derek. (2010). F-ing modules. ACM SIGPLAN Workshop on Types in Language Design and Implementation (TLDI).
Russo, Claudio V. (1998). Types for modules. Ph.D. thesis, LFCS, University of Edinburgh.
Russo, Claudio V. (1999). Non-dependent types for Standard ML modules. International Conference on Principles and Practice of Declarative Programming (PPDP).
Russo, Claudio V. (2000). First-class structures for Standard ML. Nordic Journal of Computing, 7(4), 348–374.
Russo, Claudio V. (2003). Types for Modules. Electronic Notes in Theoretical Computer Science (ENTCS), 60.
Sewell, Peter, Leifer, James J., Wansbrough, Keith, Zappa Nardelli, Francesco, Allen-Williams, Mair, Habouzit, Pierre, & Vafeiadis, Viktor. (2007). Acute: High-level programming language design for distributed computation. Journal of Functional Programming (JFP), 17(4–5).
Shan, Chung-chieh. (2004). Higher-order modules in System Fω and Haskell. Technical Report, http://www.cs.rutgers.edu/~ccshan/xlate/xlate.pdf.
Shao, Zhong. (1999). Transparent modules with fully syntactic signatures. ACM SIGPLAN International Conference on Functional Programming (ICFP).
Shields, Mark, & Peyton Jones, Simon. (2002). First-class modules for Haskell. Pages 28–40 of: International Workshop on Foundations of Object-Oriented Languages (FOOL).
SML/NJ Development Team. 1993 (Feb.). Standard ML of New Jersey user’s guide. 0.93 edn. AT&T Bell Laboratories.
Stone, Christopher A., & Harper, Robert. (2006). Extensional equivalence and singleton types. ACM Transactions on Computational Logic (TOCL), 7(4), 676–722.
Sulzmann, Martin, Chakravarty, Manuel M. T., Peyton Jones, Simon, & Donnelly, Kevin. (2007). System F with type equality coercions. ACM SIGPLAN Workshop on Types in Language Design and Implementation (TLDI).
Torgersen, Mads, Ernst, Erik, & Hansen, Christian Plesner. (2005). Wild FJ. International Workshop on Foundations of Object-Oriented Languages (FOOL).
Wright, Andrew. (1995). Simple imperative polymorphism. LISP and Symbolic Computation (LASC), 343–356.
Trouble with nonlinear curve fitting functions
Apr 07, 2021 02:19 PM
I am moving some calculations from Octave (an open-source MATLAB clone) to Mathcad Prime 7.0. One of the calculations involves fitting a double exponential pulse to measured data. In Octave/MATLAB I use the lsqcurvefit function, which executes very quickly and appears to give me reasonable answers. I am trying to find an equivalent function in Mathcad Prime. I tried genfit and LeastSquaresFit - for genfit I get a "regression not converging" error, for LeastSquaresFit I get a floating point error. I even tried using the output of my MATLAB calculation as the guess value and still get these errors. Here is a screenshot showing the function I am trying to fit, the guess value (which is the output from MATLAB), and a plot of the data and the fitting function using the guess value. I assume I am doing something fundamentally wrong here. In MATLAB I can start with a guess value wildly in error and still get an answer.
Machine Learning Algorithms | Machine Learning Tutorial | Data Science Algorithms | Simplilearn
This Machine Learning Algorithms video will help you learn what Machine Learning is, various Machine Learning problems and algorithms, key Machine Learning algorithms with simple examples, and use cases implemented in Python. The key Machine Learning algorithms discussed in detail are Linear Regression, Logistic Regression, Decision Tree, Random Forest, and the KNN algorithm.
Topics covered in this Machine Learning Algorithms tutorial:
00:00 - 03:39 Machine Learning example and real-world applications
03:39 - 04:40 What is Machine Learning?
04:40 - 06:14 Processes involved in Machine Learning
06:14 - 09:40 Types of Machine Learning algorithms
09:40 - 10:04 Popular algorithms in Machine Learning
10:04 - 29:10 Linear regression
29:10 - 52:49 Logistic regression
52:49 - 01:04:45 Decision tree and random forest
01:04:52 - 01:10:28 K nearest neighbor
Dataset link: https://drive.google.com/drive/folders/1FaV91OkTsABJrjnfeeTR4rwLe0mxFHxZ
as facial recognition for security, voice recognition like siri, and analyzing customer behavior for content creation on netflix.', 'duration': 94.391, 'highlights': ['Facial recognition is being used for security and crime-solving, with increasing popularity and adoption in various areas. Facial recognition is widely used for security and crime-solving, gaining popularity in various areas.', 'Voice recognition, exemplified by Siri, is becoming increasingly common, showcasing the widespread use of machine learning in consumer applications. Voice recognition, as seen with Siri, demonstrates the widespread adoption of machine learning in consumer applications.', 'Machine learning is significantly impacting the healthcare industry, particularly in aiding diagnostic analysis of images like x-rays and MRIs, addressing the shortage of doctors. Machine learning is making a significant impact on the healthcare industry, particularly in aiding diagnostic analysis of images like x-rays and MRIs to address the shortage of doctors.', "Netflix's analysis of customer behavior using data from 30 million customers, exemplifies the use of machine learning in content creation and audience engagement. 
Netflix's analysis of customer behavior using data from 30 million customers showcases the use of machine learning in content creation and audience engagement."]}], 'duration': 202.799, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3697.jpg', 'highlights': ["Netflix's analysis of customer behavior using data from 30 million customers showcases the use of machine learning in content creation and audience engagement.", 'Machine learning is significantly impacting the healthcare industry, aiding diagnostic analysis of images like x-rays and MRIs to address the shortage of doctors.', 'Voice recognition, as seen with Siri, demonstrates the widespread adoption of machine learning in consumer applications.', 'Facial recognition is being used for security and crime-solving, gaining popularity in various areas.', 'The session covers the definition of machine learning, popular algorithms like linear regression and logistic regression, and real-life examples of their applications.', 'Hands-on examples with python code are provided, covering linear regression, logistic regression, decision tree, random forest, and k nearest neighbors.', 'Snapchat uses facial recognition, a machine learning technique, to apply filters to photos by detecting facial features like nose and eyes, and applying filters accordingly.', 'The explanation of how Snapchat uses machine learning for facial recognition to accurately apply filters to photos, even when multiple faces are present.']}, {'end': 586.482, 'segs': [{'end': 306.32, 'src': 'embed', 'start': 277.797, 'weight': 0, 'content': [{'end': 280.799, 'text': "so let's take you through step-by-step process of machine learning.", 'start': 277.797, 'duration': 3.002}, {'end': 284.462, 'text': 'the first step in machine learning is data gathering.', 'start': 280.799, 'duration': 3.663}, {'end': 290.207, 'text': 'machine learning needs a lot of past data especially we will see a little later 
supervised learning.', 'start': 284.462, 'duration': 5.745}, {'end': 294.47, 'text': "we will see what that is in a little while, but that's the most common form of learning.", 'start': 290.207, 'duration': 4.263}, {'end': 296.772, 'text': 'so the first step there is data gathering.', 'start': 294.47, 'duration': 2.302}, {'end': 299.974, 'text': 'you need to have sufficient historical data.', 'start': 296.772, 'duration': 3.202}, {'end': 306.32, 'text': 'then the second step is pre-processing of this data so that this can be used for the machine learning process.', 'start': 299.974, 'duration': 6.346}], 'summary': 'Machine learning process involves data gathering and preprocessing for supervised learning.', 'duration': 28.523, 'max_score': 277.797, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg277797.jpg'}, {'end': 404.308, 'src': 'embed', 'start': 377.696, 'weight': 1, 'content': [{'end': 386.878, 'text': 'Machine learning algorithms are broadly classified into three types: the supervised learning, unsupervised learning and reinforcement learning.', 'start': 377.696, 'duration': 9.182}, {'end': 395.36, 'text': 'Supervised learning in turn consists of techniques like regression and classification and unsupervised learning.', 'start': 387.138, 'duration': 8.222}, {'end': 404.308, 'text': 'we use techniques like association and clustering, and reinforcement learning is a recently developed technique and it is very popular in gaming.', 'start': 395.36, 'duration': 8.948}], 'summary': 'Machine learning has 3 types: supervised, unsupervised, and reinforcement learning; including techniques like regression, classification, association, and clustering.', 'duration': 26.612, 'max_score': 377.696, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg377696.jpg'}, {'end': 549.573, 'src': 'embed', 'start': 524.155, 'weight': 2, 'content': [{'end': 532.882,
'text': "In case of unsupervised learning, we have input data, but we don't have the labels or what the output is supposed to be.", 'start': 524.155, 'duration': 8.727}, {'end': 540.608, 'text': 'So that is when we use unsupervised learning techniques like clustering and association and we try to analyze the data.', 'start': 533.222, 'duration': 7.386}, {'end': 549.573, 'text': 'In case of reinforcement learning, it allows the agent to automatically determine the ideal behavior within a specific context.', 'start': 540.988, 'duration': 8.585}], 'summary': 'Unsupervised learning uses clustering and association to analyze input data without labels. reinforcement learning enables agents to determine ideal behavior within specific contexts.', 'duration': 25.418, 'max_score': 524.155, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg524155.jpg'}], 'start': 206.496, 'title': 'Basics of machine learning', 'summary': 'Covers the basic concepts of machine learning, including data gathering, preprocessing, model training, testing, and deployment, along with discussions on supervised, unsupervised, and reinforcement learning algorithms.', 'chapters': [{'end': 586.482, 'start': 206.496, 'title': 'Machine learning: basics and applications', 'summary': 'Explains the basics of machine learning, including the process of data gathering, preprocessing, model training, testing, and deployment, as well as the different types of machine learning algorithms, such as supervised learning, unsupervised learning, and reinforcement learning.', 'duration': 379.986, 'highlights': ['Machine learning process involves data gathering, preprocessing, model training, testing, and deployment, aiming to achieve as accurate predictions as possible. 
The process of machine learning involves gathering historical data, preprocessing it, choosing a model, training and testing the model, and subsequently deploying it to make predictions, aiming for maximum accuracy.', 'Machine learning algorithms are broadly classified into supervised learning, unsupervised learning, and reinforcement learning, each with specific techniques and applications. Machine learning algorithms are categorized into three types: supervised learning, unsupervised learning, and reinforcement learning, each with distinct techniques and applications, such as regression, classification, association, and clustering.', 'Supervised learning is used with historical labeled data to predict specific target values, while unsupervised learning is applied when labeled data is unavailable, and reinforcement learning involves the agent learning from scratch to maximize performance within a specific context. Supervised learning utilizes labeled historical data to predict specific target values, unsupervised learning is used when labeled data is unavailable, and reinforcement learning involves the agent learning from scratch to maximize performance within a specific context.']}], 'duration': 379.986, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg206496.jpg', 'highlights': ['Machine learning process involves data gathering, preprocessing, model training, testing, and deployment, aiming to achieve as accurate predictions as possible.', 'Machine learning algorithms are broadly classified into supervised learning, unsupervised learning, and reinforcement learning, each with specific techniques and applications.', 'Supervised learning is used with historical labeled data to predict specific target values, while unsupervised learning is applied when labeled data is unavailable, and reinforcement learning involves the agent learning from scratch to maximize performance within a specific context.']}, {'end': 
1006.652, 'segs': [{'end': 628.923, 'src': 'embed', 'start': 604.069, 'weight': 0, 'content': [{'end': 613.832, 'text': 'linear regression a little history about linear regression sir francis galton is credited with the discovery of the linear regression model.', 'start': 604.069, 'duration': 9.763}, {'end': 623.517, 'text': "so what he did was he started studying the heights of father and son to predict the son's height or the child's height even before he or she is born.", 'start': 613.832, 'duration': 9.685}, {'end': 628.923, 'text': 'So he collected enough data of the heights of father and the respective sons.', 'start': 623.777, 'duration': 5.146}], 'summary': "Linear regression was discovered by sir francis galton to predict child's height using father's height data.", 'duration': 24.854, 'max_score': 604.069, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg604069.jpg'}, {'end': 686.709, 'src': 'embed', 'start': 656.632, 'weight': 1, 'content': [{'end': 663.419, 'text': 'So that was the very beginning or very initial phase of linear regression algorithms.', 'start': 656.632, 'duration': 6.787}, {'end': 665.04, 'text': 'so that was a little bit of history.', 'start': 663.579, 'duration': 1.461}, {'end': 666.86, 'text': 'but what is linear regression?', 'start': 665.04, 'duration': 1.82}, {'end': 678.625, 'text': 'so linear regression is a way of modeling a linear model, creating a linear model to find the relationship between one or more independent variables,', 'start': 666.86, 'duration': 11.765}, {'end': 686.709, 'text': 'denoted by x, and a dependent variable, which is also known as the target and denoted as y.', 'start': 678.625, 'duration': 8.084}], 'summary': 'Linear regression is a way of modeling a linear model to find the relationship between independent variables denoted by x and a dependent variable denoted as y.', 'duration': 30.077, 'max_score': 656.632, 'thumbnail':
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg656632.jpg'}, {'end': 766.053, 'src': 'embed', 'start': 737.958, 'weight': 2, 'content': [{'end': 741.22, 'text': 'So, there is only one x, but that is simple linear regression.', 'start': 737.958, 'duration': 3.262}, {'end': 744.082, 'text': "So, let's see how this is actually done.", 'start': 741.34, 'duration': 2.742}, {'end': 754.427, 'text': 'So, linear regression is all about finding the best fit And the way it is done is in a recursive manner.', 'start': 744.382, 'duration': 10.045}, {'end': 762.131, 'text': 'So first, a random line is drawn and the distance is calculated from this line of all the points, as you can see in this example.', 'start': 754.647, 'duration': 7.484}, {'end': 766.053, 'text': 'And that distance is known as the error.', 'start': 762.751, 'duration': 3.302}], 'summary': 'Linear regression finds best-fit line by minimizing errors from data points.', 'duration': 28.095, 'max_score': 737.958, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg737958.jpg'}, {'end': 982.008, 'src': 'heatmap', 'start': 893.774, 'weight': 3, 'content': [{'end': 902.036, 'text': "you don't have to go into details And then we import the dataset and then we visualize the data just to get a quick idea about how the data is looking.", 'start': 893.774, 'duration': 8.262}, {'end': 906.797, 'text': 'And then we split the data into training and test datasets.', 'start': 902.296, 'duration': 4.501}, {'end': 910.718, 'text': 'This is a common procedure in machine learning process, any machine learning process.', 'start': 907.017, 'duration': 3.701}, {'end': 915.719, 'text': 'And the overall training and test process we do with two different datasets.', 'start': 911.198, 'duration': 4.521}, {'end': 916.879, 'text': "So that's what we're doing here.", 'start': 915.779, 'duration': 1.1}, {'end': 922.98, 'text': 'And then we
'build or train our model, the linear regression model, and then we do the testing.', 'start': 917.119, 'duration': 5.861}, {'end': 927.661, 'text': 'we find out what is the errors and visualize our results.', 'start': 923.479, 'duration': 4.182}, {'end': 930.063, 'text': 'and this is how the test results look.', 'start': 927.661, 'duration': 2.402}, {'end': 933.224, 'text': 'and this is how the training result looks all right.', 'start': 930.063, 'duration': 3.161}, {'end': 935.245, 'text': 'and then we calculate the residuals.', 'start': 933.224, 'duration': 2.021}, {'end': 937.607, 'text': 'residuals are nothing but the errors.', 'start': 935.245, 'duration': 2.362}, {'end': 940.788, 'text': 'there are a couple of ways of measuring the accuracy.', 'start': 937.607, 'duration': 3.181}, {'end': 949.873, 'text': 'the root mean square error is the most common one, RMSE, and in this case we got root mean square error of 58, which is pretty good,', 'start': 940.788, 'duration': 9.085}, {'end': 951.774, 'text': "and that's our best fit.", 'start': 949.873, 'duration': 1.901}, {'end': 952.094, 'text': 'all right.', 'start': 951.774, 'duration': 0.32}, {'end': 958.216, 'text': "so now let's go into jupyter notebook and take a look at by running it live okay.", 'start': 952.094, 'duration': 6.122}, {'end': 968.88, 'text': 'so this is our code for the linear regression demo and this is how the jupyter notebook looks and this code in the jupyter notebook looks.', 'start': 958.216, 'duration': 10.664}, {'end': 973.482, 'text': "i will walk you through the code pretty much line by line and let's see how this works.", 'start': 968.88, 'duration': 4.602}, {'end': 974.382, 'text': 'the linear regression.', 'start': 973.482, 'duration': 0.9}, {'end': 979.106, 'text': 'So the first part is pretty much a standard template in pretty much all our code.', 'start': 974.482, 'duration': 4.624}, {'end': 982.008, 'text': 'We will see this is importing the required library.', 'start':
979.146, 'duration': 2.862}], 'summary': 'Machine learning process: data import, visualization, training, test datasets, linear regression model, testing, root mean square error of 58, and live demo in jupyter notebook.', 'duration': 44.401, 'max_score': 893.774, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg893774.jpg'}], 'start': 586.522, 'title': 'Linear regression basics and demo', 'summary': 'Provides an overview of linear regression, including its history, the process of finding the best fit line, and its application in determining salary based on years of experience. it also covers the process of importing libraries, dataset visualization, data splitting, model training, testing, and error calculation, achieving a root mean square error of 58, followed by a live demonstration in a jupyter notebook.', 'chapters': [{'end': 878.687, 'start': 586.522, 'title': 'Linear regression basics', 'summary': 'Provides an overview of linear regression, its history, the process of finding the best fit line, and its application in determining salary based on years of experience.', 'duration': 292.165, 'highlights': ["Linear regression was discovered by Sir Francis Galton to predict a child's height based on the father's height using the mean square error. Sir Francis Galton discovered linear regression to predict a child's height based on the father's height using the mean square error.", 'The process of linear regression involves creating a linear model to find the relationship between independent (x) and dependent (y) variables, represented by a best fit line with minimal distance from the data points. 
Linear regression involves creating a linear model to find the relationship between independent (x) and dependent (y) variables, represented by a best fit line with minimal distance from the data points.', 'The iterative process of linear regression involves drawing a random line, calculating the distance of all points from the line, and adjusting the line to minimize the sum of squared errors, resulting in the best fit regression line. The iterative process of linear regression involves drawing a random line, calculating the distance of all points from the line, and adjusting the line to minimize the sum of squared errors, resulting in the best fit regression line.', 'The application of linear regression in determining salary based on years of experience will be demonstrated using Python in Jupyter Notebook. The application of linear regression in determining salary based on years of experience will be demonstrated using Python in Jupyter Notebook.']}, {'end': 1006.652, 'start': 878.687, 'title': 'Linear regression demo', 'summary': 'Covers the process of importing libraries, dataset visualization, data splitting, model training, testing, and error calculation, achieving a root mean square error of 58, followed by a live demonstration in a jupyter notebook.', 'duration': 127.965, 'highlights': ['The chapter covers the process of importing libraries, dataset visualization, data splitting, model training, testing, and error calculation, achieving a root mean square error of 58. The process involves importing libraries, visualizing the dataset, splitting it into training and test datasets, training a linear regression model, testing, and calculating the root mean square error of 58.', 'The live demonstration in a Jupyter notebook is then conducted to walk through the code for the linear regression demo. 
A live demonstration in a Jupyter notebook is conducted to walk through the code for the linear regression demo, explaining each line and showcasing the process.']}], 'duration': 420.13, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg586522.jpg', 'highlights': ["Linear regression was discovered by Sir Francis Galton to predict a child's height based on the father's height using the mean square error.", 'The process of linear regression involves creating a linear model to find the relationship between independent (x) and dependent (y) variables, represented by a best fit line with minimal distance from the data points.', 'The iterative process of linear regression involves drawing a random line, calculating the distance of all points from the line, and adjusting the line to minimize the sum of squared errors, resulting in the best fit regression line.', 'The application of linear regression in determining salary based on years of experience will be demonstrated using Python in Jupyter Notebook.', 'The chapter covers the process of importing libraries, dataset visualization, data splitting, model training, testing, and error calculation, achieving a root mean square error of 58.', 'The live demonstration in a Jupyter notebook is then conducted to walk through the code for the linear regression demo.']}, {'end': 1625.333, 'segs': [{'end': 1058.982, 'src': 'embed', 'start': 1006.652, 'weight': 0, 'content': [{'end': 1016.262, 'text': 'so in this example, what we are trying to do the use case in this particular example is we have some historical value of salary data.', 'start': 1006.652, 'duration': 9.61}, {'end': 1023.969, 'text': 'And now we want to build a model so that we can predict the salary for new employee, a person who is joining new.', 'start': 1016.543, 'duration': 7.426}, {'end': 1029.973, 'text': 'And we will use the same the characteristics that were available to us or the features that were 
available to us.', 'start': 1024.309, 'duration': 5.664}, {'end': 1033.915, 'text': 'And we will try to predict what will be this salary of this new person.', 'start': 1030.113, 'duration': 3.802}, {'end': 1036.078, 'text': "Okay, so that's the kind of the use case.", 'start': 1034.357, 'duration': 1.721}, {'end': 1043.587, 'text': "So let's go ahead and load the data and let me introduce a cell and see how the data looks.", 'start': 1036.198, 'duration': 7.389}, {'end': 1046.611, 'text': 'So salary underscore data.', 'start': 1043.627, 'duration': 2.984}, {'end': 1055.921, 'text': "So we have basically two features, right? So it's pretty much like a simple linear regression we are trying to do.", 'start': 1049.198, 'duration': 6.723}, {'end': 1058.982, 'text': 'So we have years of experience and salary.', 'start': 1056.081, 'duration': 2.901}], 'summary': 'Building a model to predict salary using historical salary data and features like years of experience.', 'duration': 52.33, 'max_score': 1006.652, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1006652.jpg'}, {'end': 1176.694, 'src': 'embed', 'start': 1151.86, 'weight': 1, 'content': [{'end': 1159.727, 'text': 'So what this shows is the count, how many records are there with a given experience and things like that.', 'start': 1151.86, 'duration': 7.867}, {'end': 1163.47, 'text': 'So this is another way of visualizing the data.', 'start': 1160.027, 'duration': 3.443}, {'end': 1168.51, 'text': 'This is a third view.', 'start': 1166.769, 'duration': 1.741}, {'end': 1170.711, 'text': 'And this is one more view.', 'start': 1169.27, 'duration': 1.441}, {'end': 1173.472, 'text': 'And then we can do a quick heat map.', 'start': 1171.291, 'duration': 2.181}, {'end': 1176.694, 'text': "So there are there's only one or actually there are only two variables.", 'start': 1173.672, 'duration': 3.022}], 'summary': 'Visualizing data with count and two variables using
different views.', 'duration': 24.834, 'max_score': 1151.86, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1151860.jpg'}, {'end': 1226.512, 'src': 'embed', 'start': 1196.287, 'weight': 3, 'content': [{'end': 1205.354, 'text': 'So once we are done with that, this is the most important part of our demo here, which is basically this is the beginning of our training process.', 'start': 1196.287, 'duration': 9.067}, {'end': 1216.483, 'text': 'So the first thing before we start the model building and model training process is to split the data into training and test data sets.', 'start': 1205.995, 'duration': 10.488}, {'end': 1226.512, 'text': 'Now, whenever we do any machine learning exercise, especially supervised learning, we never use the entire label data for training purpose.', 'start': 1216.864, 'duration': 9.648}], 'summary': 'Training data split is crucial for supervised learning.', 'duration': 30.225, 'max_score': 1196.287, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1196287.jpg'}, {'end': 1348.939, 'src': 'embed', 'start': 1311.872, 'weight': 4, 'content': [{'end': 1313.834, 'text': 'That means it is 33%, right? 
33.33%.', 'start': 1311.872, 'duration': 1.962}, {'end': 1318.92, 'text': 'So, one third of the data you want to set aside for test.', 'start': 1313.834, 'duration': 5.086}, {'end': 1327.827, 'text': 'Now, there are no hard and fast rules as to what should be the split of test and training data set; it is a matter of individual preferences.', 'start': 1319.2, 'duration': 8.627}, {'end': 1334.491, 'text': 'Some people would like to have 50-50, some people would prefer 80-20, and so on and so forth.', 'start': 1327.907, 'duration': 6.584}, {'end': 1338.093, 'text': 'So it is completely up to the individuals to decide on that.', 'start': 1334.571, 'duration': 3.522}, {'end': 1339.494, 'text': 'So, in this particular case,', 'start': 1338.173, 'duration': 1.321}, {'end': 1348.939, 'text': 'what we are doing is we are setting aside one third of the data set for testing purpose and two thirds of the data set for training purpose.', 'start': 1339.494, 'duration': 9.445}], 'summary': 'Data split for testing is 33% and for training is 67%.', 'duration': 37.067, 'max_score': 1311.872, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1311872.jpg'}, {'end': 1389.212, 'src': 'embed', 'start': 1358.743, 'weight': 5, 'content': [{'end': 1363.085, 'text': 'So we create an instance of the linear regression model and give it a name like LR.', 'start': 1358.743, 'duration': 4.342}, {'end': 1367.947, 'text': 'And then we call the fit method of the linear regression model.', 'start': 1363.805, 'duration': 4.142}, {'end': 1372.108, 'text': 'Now, this fit method is common across all the algorithms.', 'start': 1367.967, 'duration': 4.141}, {'end': 1377.63, 'text': 'So any algorithm you use, if you want to start the training process, you call the fit method.', 'start': 1372.148, 'duration': 5.482}, {'end': 1381.551, 'text': 'OK, and then we pass the training data set.', 'start': 1378.05, 'duration': 3.501}, {'end': 1389.212, 'text':
'Now training data set we send the predictor as I was saying or the independent variable and also the dependent variable.', 'start': 1381.631, 'duration': 7.581}], 'summary': 'Creating and training a linear regression model using training data set and fit method.', 'duration': 30.469, 'max_score': 1358.743, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1358743.jpg'}, {'end': 1553.491, 'src': 'embed', 'start': 1525.098, 'weight': 6, 'content': [{'end': 1527.339, 'text': 'Okay, so this is for the training part.', 'start': 1525.098, 'duration': 2.241}, {'end': 1530.021, 'text': 'Now we do the same for our test as well.', 'start': 1527.399, 'duration': 2.622}, {'end': 1534.063, 'text': 'And see how it is doing or how it looks here.', 'start': 1530.581, 'duration': 3.482}, {'end': 1540.287, 'text': 'Also, it looks pretty good because the line passes pretty much in the middle of the overall data set.', 'start': 1534.143, 'duration': 6.144}, {'end': 1541.668, 'text': "That's what we are trying to do here.", 'start': 1540.347, 'duration': 1.321}, {'end': 1542.508, 'text': 'All right.', 'start': 1542.228, 'duration': 0.28}, {'end': 1549.05, 'text': 'And then how do we measure the accuracy? 
So these residuals are nothing but the errors.', 'start': 1542.788, 'duration': 6.262}, {'end': 1553.491, 'text': 'The term residuals is nothing but the errors we have seen in the slides as well.', 'start': 1549.23, 'duration': 4.261}], 'summary': 'Training and test data show good accuracy with residuals as errors.', 'duration': 28.393, 'max_score': 1525.098, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1525098.jpg'}], 'start': 1006.652, 'title': 'Building a salary prediction model', 'summary': 'Discusses building a model to predict the salary for new employees based on historical salary data using years of experience as a predictor, including visualization techniques like bar plots and heat maps, and covers the importance of data splitting in machine learning, using a 1:3 test size split, creating an instance of linear regression, and measuring model accuracy through residuals and visualization.', 'chapters': [{'end': 1196.186, 'start': 1006.652, 'title': 'Salary prediction model', 'summary': 'Discusses building a model to predict the salary for new employees based on historical salary data using years of experience as a predictor, and includes visualization techniques like bar plots and heat maps for exploratory analysis.', 'duration': 189.534, 'highlights': ['Building a model to predict the salary for new employees based on historical salary data using years of experience as a predictor The use case involves building a model to predict the salary for new employees based on historical salary data, using years of experience as a predictor.', 'Visualization techniques like bar plots and heat maps for exploratory analysis The chapter includes visualization techniques like bar plots and heat maps for exploratory analysis of the data, to understand the correlation and visualization of variables.', 'Introduction of two features for simple linear regression: years of experience and salary The data includes two
features for simple linear regression: years of experience and salary, where years of experience is used as a predictor to predict the salary of new employees.']}, {'end': 1625.333, 'start': 1196.287, 'title': 'Model training process and data splitting', 'summary': 'Covers the importance of splitting data into training and test sets in machine learning, using a 1:3 test size split, creating an instance of linear regression, and measuring model accuracy through residuals and visualization.', 'duration': 429.046, 'highlights': ["The chapter covers the importance of splitting data into training and test sets in machine learning. Splitting data into training and test sets is crucial in machine learning to accurately evaluate the model's performance.", 'Using a 1:3 test size split for data splitting. A 1:3 test size split is used to allocate one third of the dataset for testing purposes and two thirds for training.', 'Creating an instance of linear regression and using the fit method for training. An instance of linear regression model is created and the fit method is used to train the model with the training data set.', 'Measuring model accuracy through residuals and visualization. 
Model accuracy is measured through residuals such as mean square error, mean absolute error, and root mean square error, and visualization of the training and test set results.']}], 'duration': 618.681, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1006652.jpg', 'highlights': ['Building a model to predict the salary for new employees based on historical salary data using years of experience as a predictor', 'Visualization techniques like bar plots and heat maps for exploratory analysis', 'Introduction of two features for simple linear regression: years of experience and salary', 'The chapter covers the importance of splitting data into training and test sets in machine learning', 'Using a 1:3 test size split for data splitting', 'Creating an instance of linear regression and using the fit method for training', 'Measuring model accuracy through residuals and visualization']}, {'end': 2257.853, 'segs': [{'end': 1648.424, 'src': 'embed', 'start': 1625.333, 'weight': 2, 'content': [{'end': 1632.556, 'text': "let's assume this goes to 60,000, which means that one of the data points is very accurately predicted by our model.", 'start': 1625.333, 'duration': 7.223}, {'end': 1635.017, 'text': 'but there are two of them which are off.', 'start': 1632.556, 'duration': 2.461}, {'end': 1637.679, 'text': 'So there is an error for these two values.', 'start': 1635.418, 'duration': 2.261}, {'end': 1642.601, 'text': 'And what is that error? 
The error is nothing but the distance of this point from this line.', 'start': 1638.139, 'duration': 4.462}, {'end': 1648.424, 'text': 'right, the distance of each of these data points from the line is the error.', 'start': 1643.021, 'duration': 5.403}], 'summary': 'Model predicts 60,000 accurately, but two data points have errors due to distance from the line.', 'duration': 23.091, 'max_score': 1625.333, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1625333.jpg'}, {'end': 1718.207, 'src': 'embed', 'start': 1692.098, 'weight': 0, 'content': [{'end': 1703.608, 'text': 'And these values, the root mean square error, the mean squared error and the mean absolute error, the lower these values are the better.', 'start': 1692.098, 'duration': 11.51}, {'end': 1708.874, 'text': 'So the accuracy is higher if these values are lower.', 'start': 1704.048, 'duration': 4.826}, {'end': 1711.918, 'text': 'So in a way it is inversely proportional.', 'start': 1708.894, 'duration': 3.024}, {'end': 1718.207, 'text': "OK, so that's the way we measure the accuracy of our linear regression model.", 'start': 1712.559, 'duration': 5.648}], 'summary': 'Lower root mean square error, mean squared error, and mean absolute error indicate higher accuracy in linear regression.', 'duration': 26.109, 'max_score': 1692.098, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1692098.jpg'}, {'end': 1772.083, 'src': 'embed', 'start': 1742.436, 'weight': 3, 'content': [{'end': 1748.957, 'text': 'but keep in mind, this is not used for regression, but this algorithm is used for classification.', 'start': 1742.436, 'duration': 6.521}, {'end': 1753.538, 'text': 'Now, many people get confused by this name, so you need to be aware of it.', 'start': 1749.117, 'duration': 4.421}, {'end': 1760.079, 'text': 'Linear regression is used to solve regression problems, where we are
trying to predict a value,', 'start': 1753.818, 'duration': 6.261}, {'end': 1765.621, 'text': 'whereas logistic regression is used to solve a classification problem.', 'start': 1760.079, 'duration': 5.542}, {'end': 1772.083, 'text': 'so we are trying to find, for example, whether a person will repay the loan or not,', 'start': 1765.621, 'duration': 6.462}], 'summary': 'Logistic regression used for classification, linear for regression problems.', 'duration': 29.647, 'max_score': 1742.436, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1742436.jpg'}, {'end': 1954.316, 'src': 'embed', 'start': 1924.77, 'weight': 4, 'content': [{'end': 1927.032, 'text': "So that's the way the sigmoid function works.", 'start': 1924.77, 'duration': 2.262}, {'end': 1933.938, 'text': 'This is how the graph looks and there has to be a threshold value, which is like in this case 0.5.', 'start': 1927.392, 'duration': 6.546}, {'end': 1938.382, 'text': 'So if the value is greater than 0.5, we consider the output as 1.', 'start': 1933.938, 'duration': 4.444}, {'end': 1944.188, 'text': 'Whereas if the value is less than 0.5, we consider the value as 0.', 'start': 1938.382, 'duration': 5.806}, {'end': 1946.19, 'text': 'because, remember, let me go back.', 'start': 1944.188, 'duration': 2.002}, {'end': 1950.893, 'text': "in this case it doesn't exactly give us a 1 or a 0..", 'start': 1946.19, 'duration': 4.703}, {'end': 1952.214, 'text': 'Okay, so we need to keep that in mind.', 'start': 1950.893, 'duration': 1.321}, {'end': 1954.316, 'text': "It doesn't give us exactly a 1 or a 0.", 'start': 1952.334, 'duration': 1.982}], 'summary': 'The sigmoid function has a threshold value of 0.5, producing outputs of 1 or 0.', 'duration': 29.546, 'max_score': 1924.77, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1924770.jpg'}, {'end': 2257.853, 'src': 'embed', 'start': 2230.315, 'weight':
5, 'content': [{'end': 2232.957, 'text': 'So 117 of them have been correctly predicted, which gives us an accuracy of 87%.', 'start': 2230.315, 'duration': 2.642}, {'end': 2235.118, 'text': 'So out of 134, 117 have been correctly predicted.', 'start': 2232.957, 'duration': 2.161}, {'end': 2236.739, 'text': 'And the 6 plus 11, 17 of them have been incorrectly predicted.', 'start': 2235.138, 'duration': 1.601}, {'end': 2238.3, 'text': 'So this is 6 plus 11, 17 of them have been misclassified.', 'start': 2236.759, 'duration': 1.541}, {'end': 2238.821, 'text': 'So which is about 13%.', 'start': 2238.32, 'duration': 0.501}, {'end': 2257.853, 'text': 'So we have an accuracy of 87%.', 'start': 2238.821, 'duration': 19.032}], 'summary': '87% accuracy with 117 correct predictions out of 134.', 'duration': 27.538, 'max_score': 2230.315, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2230315.jpg'}], 'start': 1625.333, 'title': 'Regression models', 'summary': 'Discusses measuring accuracy of linear regression model using root mean square error, mean squared error, and mean absolute error, and implementing logistic regression for classification problems with an achieved accuracy of 87%.', 'chapters': [{'end': 1718.207, 'start': 1625.333, 'title': 'Linear regression model accuracy', 'summary': 'Explains the calculation of root mean square error, mean squared error, and mean absolute error to measure the accuracy of the linear regression model, with lower values indicating higher accuracy.', 'duration': 92.874, 'highlights': ['The root mean square error, mean squared error, and mean absolute error are calculated to measure the accuracy of the linear regression model, with lower values indicating higher accuracy.', 'The error for specific data points in the model is determined by the distance of each point from the line, and the mean square error is calculated by taking the square of these distances.', 'The accuracy of the linear
regression model is inversely proportional to the values of root mean square error, mean squared error, and mean absolute error.']}, {'end': 2257.853, 'start': 1718.547, 'title': 'Understanding logistic regression', 'summary': 'Covers logistic regression, explaining its application in classification problems, the sigmoid curve for probability calculation, and logistic regression implementation using python, achieving an accuracy of 87%.', 'duration': 539.306, 'highlights': ['Logistic regression is used for classification problems, such as predicting loan repayment or image classification, with an accuracy of 87% demonstrated in the Python implementation. Logistic regression is employed for solving classification problems, like predicting loan repayment or image classification, with an accuracy of 87% demonstrated in the Python implementation.', 'The use of the sigmoid curve in logistic regression ensures probability calculation between 0 and 1, with the threshold value of 0.5 for decision making. The sigmoid curve in logistic regression ensures probability calculation between 0 and 1, with the threshold value of 0.5 for decision making.', "The Python implementation of logistic regression achieved an accuracy of 87% using a confusion matrix to assess the model's performance. 
The Python implementation of logistic regression achieved an accuracy of 87% using a confusion matrix to assess the model's performance."]}], 'duration': 632.52, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg1625333.jpg', 'highlights': ['The root mean square error, mean squared error, and mean absolute error are calculated to measure the accuracy of the linear regression model, with lower values indicating higher accuracy.', 'The accuracy of the linear regression model is inversely proportional to the values of root mean square error, mean squared error, and mean absolute error.', 'The error for specific data points in the model is determined by the distance of each point from the line, and the mean square error is calculated by taking the square of these distances.', 'Logistic regression is used for classification problems, such as predicting loan repayment or image classification, with an accuracy of 87% demonstrated in the Python implementation.', 'The use of the sigmoid curve in logistic regression ensures probability calculation between 0 and 1, with the threshold value of 0.5 for decision making.', "The Python implementation of logistic regression achieved an accuracy of 87% using a confusion matrix to assess the model's performance."]}, {'end': 2550.851, 'segs': [{'end': 2307.212, 'src': 'embed', 'start': 2279.716, 'weight': 0, 'content': [{'end': 2282.737, 'text': 'All right, so this is the demo of logistic regression.', 'start': 2279.716, 'duration': 3.021}, {'end': 2295.181, 'text': "And here what we're doing is we've taken an example of a data set and a scenario where we will predict whether a person is going to buy an SUV or not.", 'start': 2283.397, 'duration': 11.784}, {'end': 2297.582, 'text': 'And we will use logistic regression for this.', 'start': 2295.741, 'duration': 1.841}, {'end': 2304.61, 'text': "the parameters we will take are, for example, the person's age, his salary and a few 
other parameters.", 'start': 2298.104, 'duration': 6.506}, {'end': 2307.212, 'text': 'we will see very quickly what those are okay.', 'start': 2304.61, 'duration': 2.602}], 'summary': 'Demo of logistic regression to predict suv purchase based on age and salary.', 'duration': 27.496, 'max_score': 2279.716, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2279716.jpg'}, {'end': 2400.13, 'src': 'embed', 'start': 2375.124, 'weight': 1, 'content': [{'end': 2380.645, 'text': 'And the other way of also looking at it is more from a mathematical perspective.', 'start': 2375.124, 'duration': 5.521}, {'end': 2383.046, 'text': 'These are our independent variables.', 'start': 2380.885, 'duration': 2.161}, {'end': 2389.447, 'text': 'Gender, age, and estimated salary are our independent variables, and purchased is our dependent variables.', 'start': 2383.146, 'duration': 6.301}, {'end': 2397.009, 'text': 'So, in our equation like y is equal to something, something, 1x plus m2x and so on, this is our y.', 'start': 2389.927, 'duration': 7.082}, {'end': 2397.769, 'text': 'Okay All right.', 'start': 2397.009, 'duration': 0.76}, {'end': 2400.13, 'text': "So, now let's move forward.", 'start': 2397.829, 'duration': 2.301}], 'summary': 'Analysis of independent variables (gender, age, salary) and its relation to purchase behavior.', 'duration': 25.006, 'max_score': 2375.124, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2375124.jpg'}, {'end': 2496.413, 'src': 'embed', 'start': 2466.661, 'weight': 2, 'content': [{'end': 2473.646, 'text': "So, in terms of visualization, let's visualize the data and perform a little bit of what is known as exploratory analysis.", 'start': 2466.661, 'duration': 6.985}, {'end': 2476.589, 'text': 'right?. 
as a data scientist, whenever you get new data,', 'start': 2473.646, 'duration': 2.943}, {'end': 2484.839, 'text': 'you just play around and see how the data is looking before you actually launch into the actual training or the modeling part of it.', 'start': 2476.589, 'duration': 8.25}, {'end': 2488.784, 'text': "so let's run a small heat map and see how the data looks.", 'start': 2484.839, 'duration': 3.945}, {'end': 2496.413, 'text': 'so we have just passed the entire data set here and this is how the heat map looks, how they are related,', 'start': 2488.784, 'duration': 7.629}], 'summary': 'Perform exploratory analysis using visualization before training or modeling, as a data scientist.', 'duration': 29.752, 'max_score': 2466.661, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2466661.jpg'}, {'end': 2561.06, 'src': 'embed', 'start': 2531.134, 'weight': 3, 'content': [{'end': 2538.04, 'text': 'for example, there is a correlation of 0.6 in this area, which is basically, if you take age and purchased right,', 'start': 2531.134, 'duration': 6.906}, {'end': 2541.163, 'text': 'whether the person has purchased and the age.', 'start': 2538.04, 'duration': 3.123}, {'end': 2544.446, 'text': "so that's a quick look at exploring the data.", 'start': 2541.163, 'duration': 3.283}, {'end': 2546.487, 'text': "so that's all we are doing here.", 'start': 2544.446, 'duration': 2.041}, {'end': 2550.851, 'text': "here's where the crux of this code is.", 'start': 2546.487, 'duration': 4.364}, {'end': 2561.06, 'text': "So, from here on, what we do is the first step, before we start the training process, is split our data into train and test,", 'start': 2550.931, 'duration': 10.129}], 'summary': 'Correlation of 0.6 found between age and purchases, data split into train and test', 'duration': 29.926, 'max_score': 2531.134, 'thumbnail':
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2531134.jpg'}], 'start': 2257.853, 'title': 'Logistic regression demo and data visualization', 'summary': 'Includes a logistic regression demo predicting suv purchases and data visualization techniques such as heatmap for correlation analysis.', 'chapters': [{'end': 2328.607, 'start': 2257.853, 'title': 'Logistic regression demo: predicting suv purchase', 'summary': 'Covers a demo of logistic regression using a data set to predict suv purchases based on parameters such as age and salary, while also providing the option for viewers to request the data set for personal use.', 'duration': 70.754, 'highlights': ['Viewers can request the dataset for personal use by commenting under the video.', 'The demo focuses on predicting SUV purchases using logistic regression based on parameters like age and salary.', 'The use of required libraries such as numpy, matplotlib, and pandas for data preparation is demonstrated.']}, {'end': 2550.851, 'start': 2328.607, 'title': 'Data visualization and analysis', 'summary': 'Introduces the process of data visualization and exploratory analysis, including extracting independent and dependent variables, and using a heatmap to measure correlations between features in a data set.', 'duration': 222.244, 'highlights': ['The chapter explains the process of extracting independent and dependent variables from a data set in Python, focusing on gender, age, and estimated salary as independent variables and purchase as the dependent variable.', 'It discusses the use of a heatmap to visually measure correlations between features, where a correlation of 0.6 is observed between age and purchase, indicating a moderate relationship.', 'The chapter emphasizes the importance of exploratory analysis in data science, highlighting the need to understand the data before proceeding with modeling and training.', 'It mentions the use of a heatmap to visualize correlations 
between features, where a darker color signifies less correlation and a lighter color indicates a higher correlation, providing a visual representation of the relationships within the data.']}], 'duration': 292.998, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2257853.jpg', 'highlights': ['The demo focuses on predicting SUV purchases using logistic regression based on parameters like age and salary.', 'The chapter explains the process of extracting independent and dependent variables from a data set in Python, focusing on gender, age, and estimated salary as independent variables and purchase as the dependent variable.', 'The chapter emphasizes the importance of exploratory analysis in data science, highlighting the need to understand the data before proceeding with modeling and training.', 'It discusses the use of a heatmap to visually measure correlations between features, where a correlation of 0.6 is observed between age and purchase, indicating a moderate relationship.']}, {'end': 3167.994, 'segs': [{'end': 2675.709, 'src': 'embed', 'start': 2641.096, 'weight': 3, 'content': [{'end': 2645.359, 'text': 'Some people prefer 50-50, some people prefer 80-20 and so on and so forth.', 'start': 2641.096, 'duration': 4.263}, {'end': 2650.103, 'text': 'So that is flexible and it could be to some extent individual preferences.', 'start': 2645.86, 'duration': 4.243}, {'end': 2659.129, 'text': "So in our case, we are splitting this data into 75-25, which means 75% of the data we will use for training, 25% we'll use for test.", 'start': 2650.243, 'duration': 8.886}, {'end': 2664.536, 'text': 'So that is what we are specifying here as a parameter we say test underscore size is equal to point two five.', 'start': 2659.39, 'duration': 5.146}, {'end': 2675.709, 'text': 'That means keep 25% of the data set aside as test data, and therefore the remaining 75% will be used for training.', 'start':
2664.976, 'duration': 10.733}], 'summary': 'Data is split into 75-25 for training and test, with 25% allocated for testing.', 'duration': 34.613, 'max_score': 2641.096, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2641096.jpg'}, {'end': 2764.001, 'src': 'embed', 'start': 2715.584, 'weight': 0, 'content': [{'end': 2721.669, 'text': 'There is a standard method available or standard class available called standard scaler.', 'start': 2715.584, 'duration': 6.085}, {'end': 2725.452, 'text': 'So we just create an instance of that and pass our data for scaling purpose.', 'start': 2721.709, 'duration': 3.743}, {'end': 2728.194, 'text': "Okay, so let's go ahead and do that now.", 'start': 2725.932, 'duration': 2.262}, {'end': 2732.638, 'text': 'So this is all what we have done is more like a data preparation.', 'start': 2728.654, 'duration': 3.984}, {'end': 2739.423, 'text': 'So we chose what are the parameters, what features we want, we did the feature scaling, and we split the data.', 'start': 2732.678, 'duration': 6.745}, {'end': 2740.624, 'text': 'Now everything is ready.', 'start': 2739.563, 'duration': 1.061}, {'end': 2744.086, 'text': 'Next is to start the actual training process.', 'start': 2740.964, 'duration': 3.122}, {'end': 2746.008, 'text': 'So this is the most crucial part of the code.', 'start': 2744.106, 'duration': 1.902}, {'end': 2749.771, 'text': 'So here, as we said, we will use the logistic regression model.', 'start': 2746.088, 'duration': 3.683}, {'end': 2754.994, 'text': 'So we have to create we create an instance of regression model.', 'start': 2750.191, 'duration': 4.803}, {'end': 2758.177, 'text': 'So I call that as classifier, you can give any name.', 'start': 2755.275, 'duration': 2.902}, {'end': 2764.001, 'text': 'And we just do some kind of initialization random value initialization.', 'start': 2758.997, 'duration': 5.004}], 'summary': 'Data preparation includes feature scaling, 
parameter selection, and data splitting for logistic regression model training.', 'duration': 48.417, 'max_score': 2715.584, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2715584.jpg'}, {'end': 2993.789, 'src': 'embed', 'start': 2965.722, 'weight': 2, 'content': [{'end': 2970.144, 'text': 'So let me execute this code and take a look in a, in a, form of a plot.', 'start': 2965.722, 'duration': 4.422}, {'end': 2972.765, 'text': 'So this is how the classification is done.', 'start': 2970.624, 'duration': 2.141}, {'end': 2974.265, 'text': 'So these are like the class.', 'start': 2973.025, 'duration': 1.24}, {'end': 2975.885, 'text': 'This is like the class boundary.', 'start': 2974.425, 'duration': 1.46}, {'end': 2985.987, 'text': 'The green color belongs to the class 1 and the red color belongs to class 0 and these dots indicate it has been misclassified.', 'start': 2976.305, 'duration': 9.682}, {'end': 2988.168, 'text': 'So some of them have been misclassified.', 'start': 2986.067, 'duration': 2.101}, {'end': 2989.748, 'text': "So that's what we are seeing.", 'start': 2988.688, 'duration': 1.06}, {'end': 2993.789, 'text': 'Some red dots in the green area and some green dots in the red area right?', 'start': 2989.848, 'duration': 3.941}], 'summary': 'Code executed for classification, misclassified dots observed.', 'duration': 28.067, 'max_score': 2965.722, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2965722.jpg'}, {'end': 3146.688, 'src': 'embed', 'start': 3120.829, 'weight': 1, 'content': [{'end': 3125.191, 'text': 'The higher that number is, the total in the diagonal, the higher the accuracy.', 'start': 3120.829, 'duration': 4.362}, {'end': 3132.957, 'text': 'And if we have quite a few numbers in non-diagonal locations, that means the accuracy is not very high.', 'start': 3126.731, 'duration': 6.226}, {'end': 3136.179, 'text': 'So here looks like 
the accuracy is pretty good.', 'start': 3133.037, 'duration': 3.142}, {'end': 3139.922, 'text': 'Now, we can actually quantify the accuracy by using these numbers.', 'start': 3136.259, 'duration': 3.663}, {'end': 3146.688, 'text': 'So what we do is the sum along the diagonal we have to take and divide that by the total observation.', 'start': 3139.963, 'duration': 6.725}], 'summary': 'High diagonal sum indicates higher accuracy, quantifiable by summing and dividing by total observations.', 'duration': 25.859, 'max_score': 3120.829, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3120829.jpg'}], 'start': 2550.931, 'title': 'Machine learning training', 'summary': 'Explains the process of data splitting, training models, and feature scaling. it covers logistic regression training with a performance of approximately 90% accuracy.', 'chapters': [{'end': 2739.423, 'start': 2550.931, 'title': 'Data splitting, training, & feature scaling', 'summary': 'Explains the importance of splitting data into a training set and test set for machine learning, using an example of splitting the data into 75% for training and 25% for testing, and performing feature scaling to normalize the data values.', 'duration': 188.492, 'highlights': ['Data Splitting Importance Explains the importance of splitting data into training and test sets for machine learning, using an example of splitting the data into 75% for training and 25% for testing.', 'Feature Scaling Explanation Details the concept of feature scaling to normalize data values, highlighting the use of the standard scaler class to achieve this normalization.']}, {'end': 3167.994, 'start': 2739.563, 'title': 'Logistic regression training', 'summary': 'Covers the process of training a logistic regression model, testing its performance, visualizing the results, and quantifying accuracy using a confusion matrix, achieving approximately 90% accuracy.', 'duration': 428.431,
'highlights': ['The process involves creating an instance of the logistic regression model, using the fit method for training, and the predict method for testing, achieving an accuracy of approximately 90% for the model.', 'Visualization of the training results using a plot demonstrates the classification with class boundaries, correctly predicted data points, and misclassified data points, providing a high-level overview of accuracy.', 'Quantification of accuracy is achieved using a confusion matrix, where a higher total value in the diagonals indicates higher accuracy, and in this case, the model achieves approximately 90% accuracy.']}], 'duration': 617.063, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg2550931.jpg', 'highlights': ['The process involves creating an instance of the logistic regression model, using the fit method for training, and the predict method for testing, achieving an accuracy of approximately 90% for the model.', 'Quantification of accuracy is achieved using a confusion matrix, where a higher total value in the diagonals indicates higher accuracy, and in this case, the model achieves approximately 90% accuracy.', 'Visualization of the training results using a plot demonstrates the classification with class boundaries, correctly predicted data points, and misclassified data points, providing a high-level overview of accuracy.', 'Data Splitting Importance Explains the importance of splitting data into training and test sets for machine learning, using an example of splitting the data into 75% for training and 25% for testing.', 'Feature Scaling Explanation Details the concept of feature scaling to normalize data values, highlighting the use of the standard scaler class to achieve this normalization.']}, {'end': 3439.37, 'segs': [{'end': 3212.036, 'src': 'embed', 'start': 3187.719, 'weight': 0, 'content': [{'end': 3195.844, 'text': 'decision tree can be used for classification as well 
as regression, even though it is more popular for classification,', 'start': 3187.719, 'duration': 8.125}, {'end': 3200.987, 'text': 'and it can be used to classify multiple classes as well, not just binary classification.', 'start': 3195.844, 'duration': 5.143}, {'end': 3202.809, 'text': 'So how does decision tree work?', 'start': 3201.148, 'duration': 1.661}, {'end': 3209.394, 'text': 'One of the good things about decision trees is that it is easy to represent and show how exactly it works,', 'start': 3202.909, 'duration': 6.485}, {'end': 3212.036, 'text': 'and therefore it is very easy to understand as well.', 'start': 3209.394, 'duration': 2.642}], 'summary': 'Decision tree can be used for classification and regression, and it is easy to understand and represent.', 'duration': 24.317, 'max_score': 3187.719, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3187719.jpg'}, {'end': 3291.467, 'src': 'embed', 'start': 3261.295, 'weight': 3, 'content': [{'end': 3265.057, 'text': 'And in between, we have decision nodes or internal nodes.', 'start': 3261.295, 'duration': 3.762}, {'end': 3269.399, 'text': 'There are different terms used, so need not be hung up by the exact terminology.', 'start': 3265.117, 'duration': 4.282}, {'end': 3275.781, 'text': "Now, let's say we have to use this decision tree to find out whether this person will accept the job offer or not.", 'start': 3269.479, 'duration': 6.302}, {'end': 3278.302, 'text': 'So first thing he considers is the salary.', 'start': 3275.821, 'duration': 2.481}, {'end': 3280.663, 'text': 'Is the salary greater than 60,000?', 'start': 3278.522, 'duration': 2.141}, {'end': 3282.444, 'text': "if no, it's a clear decision.", 'start': 3280.663, 'duration': 1.781}, {'end': 3283.704, 'text': 'the offer will be rejected.', 'start': 3282.444, 'duration': 1.26}, {'end': 3285.485, 'text': 'so we reach a decision.', 'start': 3283.704, 'duration': 1.781}, {'end': 3286.946, 
'text': 'therefore, this is a leaf node.', 'start': 3285.485, 'duration': 1.461}, {'end': 3291.467, 'text': 'now, if the salary is greater than sixty thousand, it is not a clear-cut decision,', 'start': 3286.946, 'duration': 4.521}], 'summary': 'Using decision tree to predict job offer acceptance based on salary, with 60,000 as threshold.', 'duration': 30.172, 'max_score': 3261.295, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3261295.jpg'}, {'end': 3411.329, 'src': 'embed', 'start': 3342.846, 'weight': 4, 'content': [{'end': 3345.688, 'text': 'So this belongs to one class, which is accept offer.', 'start': 3342.846, 'duration': 2.842}, {'end': 3349.55, 'text': 'And these leaf nodes belong to a different class, which is reject offer.', 'start': 3345.868, 'duration': 3.682}, {'end': 3354.872, 'text': "So let's take an example and see how we can solve this problem using decision tree.", 'start': 3349.71, 'duration': 5.162}, {'end': 3360.235, 'text': "Let's say we have to implement a classification algorithm for kyphosis patient.", 'start': 3355.032, 'duration': 5.203}, {'end': 3369.221, 'text': 'And the problem is, a bunch of kids have been have undergone kyphosis surgery and we need to predict whether kyphosis is present in them or not.', 'start': 3360.375, 'duration': 8.846}, {'end': 3370.642, 'text': 'and how can we do this?', 'start': 3369.221, 'duration': 1.421}, {'end': 3372.543, 'text': 'using decision tree algorithm.', 'start': 3370.642, 'duration': 1.901}, {'end': 3376.186, 'text': 'so this is how the classification tree for kyphosis looks.', 'start': 3372.543, 'duration': 3.643}, {'end': 3381.229, 'text': 'so it starts with if the age is greater than 8.5.', 'start': 3376.186, 'duration': 5.043}, {'end': 3384.552, 'text': 'so this is how the decision tree looks.', 'start': 3381.229, 'duration': 3.323}, {'end': 3390.656, 'text': 'so the first criteria is the vertebra, the number on which the surgery 
has been performed.', 'start': 3384.552, 'duration': 6.104}, {'end': 3397.861, 'text': 'if it is greater than 8.5, then we need to perform further analysis and look at other criteria.', 'start': 3390.656, 'duration': 7.205}, {'end': 3402.684, 'text': 'If it is less than 8.5, it is clear that kyphosis is present.', 'start': 3398.081, 'duration': 4.603}, {'end': 3411.329, 'text': 'And if it is greater than 8.5, then we check whether the vertebra operated upon is greater than 14.5 or not.', 'start': 3403.024, 'duration': 8.305}], 'summary': 'Using decision tree for kyphosis classification in kids.', 'duration': 68.483, 'max_score': 3342.846, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3342846.jpg'}], 'start': 3168.494, 'title': 'Decision tree and random forest', 'summary': "Delves into decision tree and random forest algorithms, emphasizing their efficacy in handling classification and regression tasks. it explores decision tree's versatility and ease of representation, and presents real-world applications such as job offer decision-making and kyphosis classification.", 'chapters': [{'end': 3209.394, 'start': 3168.494, 'title': 'Decision tree and random forest', 'summary': "Explores decision tree and random forest algorithms, highlighting their ability to handle both classification and regression tasks, with a focus on decision tree's versatility and ease of representation.", 'duration': 40.9, 'highlights': ['Decision tree can be used for both classification and regression tasks, making it more versatile than logistic regression.', 'It is more popular for classification and can handle multiple classes, not just binary classification.', 'One of the advantages of decision trees is their easy representation and visualization of how they work.']}, {'end': 3324.985, 'start': 3209.394, 'title': 'Decision tree for job offer', 'summary': 'Explains the concept of decision tree through an example of using it to
decide whether to accept a job offer based on salary, commute time, and performance incentives.', 'duration': 115.591, 'highlights': ['The decision tree is used to decide whether to accept a job offer based on factors such as salary, commute time, and performance incentives.', 'If the salary is greater than 60,000, the decision depends on factors such as commute time and performance incentives.', 'If the commute time is greater than one hour, the job offer is rejected, regardless of the salary being higher than 60,000.', 'If there are insufficient performance incentives, the job offer is rejected, even if the salary and commute time meet the criteria.']}, {'end': 3439.37, 'start': 3325.226, 'title': 'Decision tree for kyphosis classification', 'summary': 'Discusses the use of decision tree algorithm for classifying kyphosis patients based on criteria such as age, vertebra number, and age group, with the aim of predicting the presence of kyphosis with a focus on the leaf nodes and different classification classes.', 'duration': 114.144, 'highlights': ['The decision tree algorithm is used to classify kyphosis patients based on criteria such as age, vertebra number, and age group, in order to predict the presence of kyphosis. The chapter discusses the use of decision tree algorithm for classifying kyphosis patients based on criteria such as age, vertebra number, and age group, with the aim of predicting the presence of kyphosis.', "The leaf nodes are categorized into 'accept offer' and 'reject offer' classes, providing a binary classification for the decision tree model. The leaf nodes belong to different classes, 'accept offer' and 'reject offer,' enabling a binary classification for the decision tree model.", 'The criteria for classification include age, vertebra number, and age group, with specific thresholds for each criterion, such as 8.5 and 14.5 for vertebra number, to determine the presence or absence of kyphosis. 
Specific criteria such as age, vertebra number, and age group, with thresholds like 8.5 and 14.5 for vertebra number, are used to classify the presence or absence of kyphosis.']}], 'duration': 270.876, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/ video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3168494.jpg', 'highlights': ['Decision tree is more versatile than logistic regression, suitable for both classification and regression tasks.', "Decision tree's easy representation and visualization make it advantageous for understanding its workings.", 'Decision tree is popular for classification and can handle multiple classes, not just binary classification.', 'Decision tree is used to decide job offers based on factors like salary, commute time, and performance incentives.', 'Decision tree algorithm is utilized for classifying kyphosis patients based on age, vertebra number, and age group.', "Leaf nodes in the decision tree model enable binary classification for 'accept offer' and 'reject offer' classes.", 'Specific criteria and thresholds like 8.5 and 14.5 for vertebra number are used to classify the presence or absence of kyphosis.']}, {'end': 4262.098, 'segs': [{'end': 3591.111, 'src': 'embed', 'start': 3554.338, 'weight': 4, 'content': [{'end': 3558.461, 'text': "So that's the reason we feel there is good accuracy, which is 64%.", 'start': 3554.338, 'duration': 4.123}, {'end': 3559.982, 'text': 'They have been correctly classified.', 'start': 3558.461, 'duration': 1.521}, {'end': 3563.965, 'text': '64% of the observations have been correctly classified by this model.', 'start': 3560.002, 'duration': 3.963}, {'end': 3569.19, 'text': 'whereas 24% have been misclassified, which consists of this 4 plus this 2.', 'start': 3564.205, 'duration': 4.985}, {'end': 3574.175, 'text': "So that's how we use the confusion matrix to determine the accuracy of our decision tree model.", 'start': 3569.19, 'duration': 4.985}, {'end': 3579.86, 'text': "So let's go into the 
Jupyter notebook and take a look and run the code and see how it looks.", 'start': 3574.255, 'duration': 5.605}, {'end': 3591.111, 'text': 'So this is our Python notebook for decision tree and I will take you through the code not line by line of course but we will see the blocks.', 'start': 3580.621, 'duration': 10.49}], 'summary': 'A decision tree model achieved 64% accuracy with 24% misclassification, as determined by the confusion matrix.', 'duration': 36.773, 'max_score': 3554.338, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3554338.jpg'}, {'end': 3902.444, 'src': 'embed', 'start': 3873.638, 'weight': 1, 'content': [{'end': 3882.508, 'text': 'Using just the decision tree, now when we use random forest, the performance has improved to a good extent and it has become 76%.', 'start': 3873.638, 'duration': 8.87}, {'end': 3890.016, 'text': 'So random forest usually helps in increasing the accuracy when we are using decision trees as our algorithm.', 'start': 3882.508, 'duration': 7.508}, {'end': 3894.559, 'text': 'Okay, the last algorithm is the k-nearest neighbors algorithm.', 'start': 3890.276, 'duration': 4.283}, {'end': 3902.444, 'text': 'This is again a classification algorithm and it is actually very simple and straightforward, very easy to understand as well.', 'start': 3894.739, 'duration': 7.705}], 'summary': 'Random forest improved performance to 76%, k-nearest neighbors is simple and straightforward.', 'duration': 28.806, 'max_score': 3873.638, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg3873638.jpg'}, {'end': 4050.879, 'src': 'embed', 'start': 4004.033, 'weight': 0, 'content': [{'end': 4016.842, 'text': "Now let's use K nearest neighbors and take an example to solve one of our previous problems that we did using logistic regression whether a person is going to buy an SUV or not,", 'start': 4004.033, 'duration': 12.809}, {'end': 
4020.145, 'text': 'based on the age and estimated salary.', 'start': 4016.842, 'duration': 3.303}, {'end': 4024.908, 'text': "So before we go into Jupyter Notebook, let's again take a quick look at the core.", 'start': 4020.545, 'duration': 4.363}, {'end': 4026.829, 'text': 'So what are the various sections in the core?', 'start': 4024.928, 'duration': 1.901}, {'end': 4035.013, 'text': 'we import the libraries, we load the data set, we visualize the data, we split the data into training and test data set.', 'start': 4027.89, 'duration': 7.123}, {'end': 4039.895, 'text': 'we do some feature scaling and then we train our model and then we test our model.', 'start': 4035.013, 'duration': 4.882}, {'end': 4044.816, 'text': 'we visualize the training set results and then we visualize our test results.', 'start': 4039.895, 'duration': 4.921}, {'end': 4047.577, 'text': 'And both of these seem to be looking pretty good.', 'start': 4044.976, 'duration': 2.601}, {'end': 4050.879, 'text': 'And then we evaluate our model using the confusion matrix.', 'start': 4047.718, 'duration': 3.161}], 'summary': 'Using k nearest neighbors to predict suv purchase likelihood.', 'duration': 46.846, 'max_score': 4004.033, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/I7NrVwm3apg4004033.jpg'}], 'start': 3439.73, 'title': 'Implementing decision tree and knn algorithm', 'summary': 'Covers implementing a decision tree in python for a classification problem achieving 64% accuracy and discusses the k nearest neighbors algorithm with a 93% accuracy in predicting suv purchase.', 'chapters': [{'end': 3894.559, 'start': 3439.73, 'title': 'Implementing decision tree in python', 'summary': 'Covers implementing a decision tree in python for a classification problem, visualizing the data, training the model, evaluating using a confusion matrix, and comparing the accuracy with a random forest model, achieving 76% accuracy with random forest compared to 64% with 
a decision tree.', 'duration': 454.829, 'highlights': ['The model achieved 76% accuracy with a random forest compared to 64% with a decision tree. Using a random forest model improved the accuracy from 64% with a decision tree to 76%.', 'The confusion matrix showed 19 correct predictions out of 25, resulting in a 76% accuracy for the random forest model. The confusion matrix displayed 19 correct predictions out of 25, yielding a 76% accuracy for the random forest model.', 'The process involves visualizing the data, training the model, and evaluating using a confusion matrix. The process includes visualizing the data, training the model, and evaluating using a confusion matrix to determine accuracy.', 'The chapter explains the concept of using a random forest to improve performance and increase accuracy. The chapter elaborates on using a random forest to enhance performance and achieve higher accuracy compared to a decision tree.', 'The chapter demonstrates the use of a decision tree model for a classification problem and achieving 64% accuracy. The chapter showcases the implementation of a decision tree model for a classification problem and attaining 64% accuracy.']}, {'end': 4262.098, 'start': 3894.739, 'title': 'K nearest neighbors algorithm', 'summary': 'Discusses the k nearest neighbors (knn) algorithm, using historical data of heights and weights, determining the class of new data points based on the value of k, and applying knn to a problem of predicting suv purchase with a 93% accuracy, showcasing the training, testing, and evaluation process.', 'duration': 367.359, 'highlights': ['The K Nearest Neighbors (KNN) algorithm determines the class of new data points based on the value of K, which represents the number of nearest neighbors to consider. For a given data point, the algorithm finds the nearest objects based on historical data and assigns a class based on the majority of the nearest objects. 
The KNN algorithm determines the class of new data points based on the value of K, where it finds the nearest objects to a given data point and assigns a class based on the majority of the nearest objects. It demonstrates the impact of changing the value of K on the classification outcome, showcasing the importance of selecting the right value of K for training the model.', 'Applying KNN to a problem of predicting SUV purchase based on age and estimated salary resulted in a 93% accuracy, with 73 out of 80 observations classified correctly and 7 misclassifications, demonstrating the effectiveness of the algorithm in practical applications. The application of KNN to predict SUV purchase based on age and estimated salary achieved 93% accuracy, with 73 out of 80 observations classified correctly and 7 misclassifications, indicating the effectiveness of the algorithm in practical applications and its ability to produce accurate predictions.', 'The chapter covers the various steps involved in machine learning, including importing libraries, loading and visualizing the dataset, splitting the data into training and test sets, feature scaling, training the model, testing the model, and evaluating the model using the confusion matrix. 
The chapter details the various steps in machine learning, encompassing importing libraries, loading and visualizing the dataset, splitting the data into training and test sets, feature scaling, training the model, testing the model, and evaluating the model using the confusion matrix, providing a comprehensive overview of the machine learning process.']}], 'duration': 822.368, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/I7NrVwm3apg/pics/ I7NrVwm3apg3439730.jpg', 'highlights': ['The K Nearest Neighbors (KNN) algorithm achieved 93% accuracy in predicting SUV purchase based on age and estimated salary.', 'The random forest model improved the accuracy from 64% with a decision tree to 76%.', 'The process includes visualizing the data, training the model, and evaluating using a confusion matrix to determine accuracy.', 'The chapter elaborates on using a random forest to enhance performance and achieve higher accuracy compared to a decision tree.', 'The chapter demonstrates the use of a decision tree model for a classification problem and achieving 64% accuracy.']}], 'highlights': ["Netflix's analysis of customer behavior using data from 30 million customers showcases the use of machine learning in content creation and audience engagement.", 'Machine learning is significantly impacting the healthcare industry, aiding diagnostic analysis of images like x-rays and MRIs to address the shortage of doctors.', 'Voice recognition, as seen with Siri, demonstrates the widespread adoption of machine learning in consumer applications.', 'Facial recognition is being used for security and crime-solving, gaining popularity in various areas.', 'The training process, model accuracy, and practical demonstrations are also included, achieving a root mean square error of 58 and an 87% accuracy for logistic regression.', "Furthermore, it explores decision tree's versatility and ease of representation, and presents real-world applications such as job offer 
decision-making and kyphosis classification.", 'Machine learning algorithms are broadly classified into supervised learning, unsupervised learning, and reinforcement learning, each with specific techniques and applications.', "Linear regression was discovered by Sir Francis Galton to predict a child's height based on the father's height using the mean square error.", 'The root mean square error, mean squared error, and mean absolute error are calculated to measure the accuracy of the linear regression model, with lower values indicating higher accuracy.', 'Logistic regression is used for classification problems, such as predicting loan repayment or image classification, with an accuracy of 87% demonstrated in the Python implementation.', 'The demo focuses on predicting SUV purchases using logistic regression based on parameters like age and salary.', 'Decision tree is more versatile than logistic regression, suitable for both classification and regression tasks.', 'The K Nearest Neighbors (KNN) algorithm achieved 93% accuracy in predicting SUV purchase based on age and estimated salary.']}
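The KNN procedure summarized above can be sketched in a few lines of plain Python. This is an illustrative toy example: the points, labels and the `knn_predict` helper are invented for demonstration and are not the tutorial's actual dataset or code.

```python
from collections import Counter
import math

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: (age, salary in thousands) -> bought an SUV or not.
# These points are made up for illustration only.
train = [((25, 30), "no"), ((30, 40), "no"), ((35, 45), "no"),
         ((45, 90), "yes"), ((50, 100), "yes"), ((55, 120), "yes")]

print(knn_predict(train, (48, 95), k=3))   # → yes
print(knn_predict(train, (28, 35), k=3))   # → no
```

In practice one would scale the features first, as the workflow above does, since age and salary live on very different scales and raw Euclidean distance would otherwise be dominated by the larger one.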
Computing hereditary convex structures

Color red and blue the n vertices of a convex polytope P in ℝ^3. Can we compute the convex hull of each color class in o(n log n)? What if we have χ > 2 colors? What if the colors are random? Consider an arbitrary query halfspace and call the vertices of P inside it blue: can the convex hull of the blue points be computed in time linear in their number? More generally, can we quickly compute the blue hull without looking at the whole polytope? This paper considers several instances of hereditary computation and provides new results for them. In particular, we resolve an eight-year-old open problem by showing how to split a convex polytope in linear expected time.

Published in: Proceedings of the 25th Annual Symposium on Computational Geometry (SCG'09), Aarhus, Denmark, June 8–10, 2009, pp. 61–70.

Keywords: convex polytope; half-space range searching; hereditary convex hulls.
Non-Inverting Op-Amp Analysis with Time Constants - Designalog

The seminal paper by Hajimiri on analyzing circuits using time-constant techniques, such as the zero-value and infinite-value time constants based on the Cochrun-Grabel method, allows an easy analysis by breaking a complicated network into smaller parts. The technique is based on opening and shorting dynamic elements so that the different terms of a standard transfer function are filled in term by term:

H(s)=\frac{a_0 + a_1s+a_2s^2+\cdots}{1+b_1s+b_2s^2+\cdots}

Hajimiri's paper already provides instructive examples of several passive and active circuits (i.e. circuits with dependent sources). This article intends to show that the method can also be applied to op-amp circuits, assuming a simple finite-gain op-amp.

Non-Inverting Op-Amp Closed-Loop Gain

Non-Inverting Op-Amp Voltage Amplifier with Compensation Capacitor

H(s)=\frac{a_0 + a_1s}{1+b_1s}

a_0 = \frac{R_g + R_f}{R_g}\\ b_1 = \tau_{C_f} = C_fR_f \\ a_1 = 1\cdot\tau_{C_f} = C_fR_f

How did we derive each term in the Laplace expression?

• \(a_0:\) This is the zero-value (DC) transfer. In this case, it equals the gain of the standard non-inverting op-amp configuration. All dynamic elements are set to their DC value (\(C_f\) becomes an open).
• \(b_1:\) This is the zero-value time constant associated with \(C_f\). Under the assumption of an ideal op-amp (i.e. V(+) = V(−)), the top side of \(R_g\) is effectively grounded as well, so it drops out of the picture. Therefore, the only resistance seen by \(C_f\) is \(R_f\).
• \(a_1:\) This is the transfer from input to output assuming \(C_f\) is infinite-valued (i.e. it becomes a short). When doing so, we can recognize that the non-inverting voltage amplifier effectively degenerates into a simple voltage buffer; \(R_g\) no longer matters. Hence, this transfer equals 1. This factor must then be multiplied by the time constant associated with the dynamic element (i.e. \(C_f\)) that was infinite-valued.

Finally, we can write the closed-loop gain formula as:

H(s)=\frac{\frac{R_g+R_f}{R_g} + C_fR_fs}{1+C_fR_fs}

However, it might be more intuitive to factor out the low-frequency gain as follows:

H(s)=\left(1+\frac{R_f}{R_g}\right)\frac{1+C_f\frac{R_fR_g}{R_f+R_g}s}{1+C_fR_fs} = \left(1+\frac{R_f}{R_g}\right)\frac{1+C_f(R_f||R_g)s}{1+C_fR_fs}

Non-Inverting Op-Amp Loop Gain

Non-Inverting Op-Amp Voltage Amplifier with Compensation Capacitor – Loop Gain

The loop-gain expression will be of first order as well:

\[\begin{aligned} L &= \frac{V_d}{AV_c} = \frac{a_0 + a_1s}{1+b_1s} \\ a_0 &= \frac{R_g}{R_g + R_f} \\ b_1 &= \tau_{C_f} = C_f(R_f||R_g) \\ a_1 &= 1\cdot\tau_{C_f} = C_f(R_f||R_g) \end{aligned}\]

How did we derive each term in the loop-gain formula?

• \(a_0:\) All dynamic elements are set to their DC value (i.e. \(C_f\) is an open). Therefore, it is easy to see that the transfer from \(V_c\) to \(V_d\) is simply a voltage divider formed by \(R_g\) and \(R_f\).
• \(b_1:\) Since there is only one dynamic element (i.e. \(C_f\)), we turn off all inputs (in this case, \(V_c\), which is shorted to ground) and simply calculate the impedance seen by \(C_f\). Since \(R_f\) is grounded on its right side, it is effectively in parallel with \(R_g\).
• \(a_1:\) This is the transfer from \(V_c\) to \(V_d\) while setting \(C_f\) to its infinite value (i.e. a short). As a consequence, the voltage divider formed by \(R_g\) and \(R_f\) no longer matters, because \(R_f\) is now shorted. Therefore, the transfer is simply 1. This transfer factor must be multiplied by the time constant of \(C_f\), because we infinite-valued it.
Therefore, we can write:

L = \frac{V_d}{AV_c} = \frac{a_0 + a_1s}{1+b_1s} = \frac{\frac{R_g}{R_g + R_f} + sC_f(R_f||R_g)}{1+sC_f(R_f||R_g)}

Factoring out the constant gain term:

L = \frac{V_d}{AV_c} = \frac{R_g}{R_g + R_f}\frac{1+sC_fR_f}{1+sC_f(R_f||R_g)}

Complete Transfer

Now we can write the complete transfer using the asymptotic gain model. Note that the loop gain appearing below is the network transfer derived above multiplied by the finite op-amp gain A:

\textrm{complete transfer} = A_{\infty}\frac{-L}{1-L} = \left(1+\frac{R_f}{R_g}\right)\frac{1+sC_f(R_f||R_g)}{1+C_fR_fs}\frac{-A\frac{R_g}{R_g + R_f}\frac{1+sC_fR_f}{1+sC_f(R_f||R_g)}}{1-A\frac{R_g}{R_g + R_f}\frac{1+sC_fR_f}{1+sC_f(R_f||R_g)}}

Simplifying, all the dynamic terms in the numerator cancel out; the product of \(A_{\infty}\) and the loop-gain numerator reduces to just \(-A\), leaving

\textrm{complete transfer} = \frac{-A}{1-A\frac{R_g}{R_g + R_f}\frac{1+sC_fR_f}{1+sC_f(R_f||R_g)}}

As \(A \to \infty\), this reduces to the ideal closed-loop gain found earlier, \(\left(1+\frac{R_f}{R_g}\right)\frac{1+sC_f(R_f||R_g)}{1+sC_fR_f}\).

Reference: Hajimiri, A. (2010). Generalized Time- and Transfer-Constant Circuit Analysis. IEEE Transactions on Circuits and Systems I: Regular Papers, 57(6), 1105–1121.
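As a quick numerical sanity check (not part of the original article), the factored closed-loop gain can be compared against the direct ideal-op-amp expression \(1 + Z_f/R_g\) with \(Z_f = R_f \| 1/sC_f\), and the finite-gain complete transfer can be shown to converge to it for large A. The component values and op-amp gain below are arbitrary illustrative choices.

```python
# Numerical sanity check of the formulas above. Component values and
# op-amp gain are arbitrary illustrative choices, not from the article.
Rf, Rg, Cf = 10e3, 1e3, 1e-9
Rp = Rf * Rg / (Rf + Rg)                  # Rf || Rg
PI = 3.141592653589793

def H_ideal(s):
    Zf = Rf / (1 + s * Rf * Cf)           # Rf in parallel with 1/(s*Cf)
    return 1 + Zf / Rg                    # direct ideal op-amp result

def H_factored(s):
    # (1 + Rf/Rg) * (1 + s*Cf*(Rf||Rg)) / (1 + s*Cf*Rf)
    return (1 + Rf / Rg) * (1 + s * Cf * Rp) / (1 + s * Cf * Rf)

def beta(s):
    # feedback transfer found by the time-constant analysis
    return (Rg / (Rg + Rf)) * (1 + s * Cf * Rf) / (1 + s * Cf * Rp)

def H_finite(s, A):
    return -A / (1 - A * beta(s))         # simplified complete transfer

for f in (10.0, 1e3, 1e5, 1e7):
    s = 1j * 2 * PI * f
    # the factored form is algebraically identical to the direct result
    assert abs(H_ideal(s) - H_factored(s)) < 1e-9 * abs(H_ideal(s))
    # the finite-gain transfer converges to the ideal gain for large A
    assert abs(H_finite(s, 1e7) - H_ideal(s)) < 1e-4 * abs(H_ideal(s))

print(abs(H_ideal(0)))                    # → 11.0 (the DC gain 1 + Rf/Rg)
```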
How to take advantage of non-crystallographic symmetry in molecular replacement: `locked' rotation and translation functions

(Received 26 March 2001; accepted 12 June 2001)

Many protein molecules form assemblies that obey point-group symmetry. These assemblies are often situated at general positions in the unit cell such that the point-group symmetry of the assembly becomes non-crystallographic symmetry (NCS) in the crystal. The presence of NCS places significant constraints on structure determination by the molecular-replacement method. The locked rotation and translation functions have been developed to take advantage of the presence of NCS in this structure determination, which generally requires four steps. (i) The locked self-rotation function is used to determine the orientation of the NCS assembly in the crystal, relative to a pre-defined `standard' orientation of this NCS point group. (ii) The locked cross-rotation function is used to determine the orientation of one monomer of the assembly in the standard orientation. This calculation requires only the structure of the monomer as the search model. (iii) The locked translation function is used to determine the position of this monomer relative to the center of the assembly. Information obtained from steps (ii) and (iii) will produce a model of the entire assembly centered at the origin of the coordinate system. (iv) An ordinary translation function is used to determine the center of the assembly in the crystal unit cell, using as the search model the structure of the entire assembly produced in step (iii). The locked rotation and translation functions simplify the structure-determination process in the presence of NCS. Instead of searching for each monomer separately, the locked calculations search for a single rotation or translation.
Moreover, the locked functions reduce the noise level in the calculation, owing to the averaging over the NCS elements, and increase the signals as all monomers of the assembly are taken into account at the same time.

1. Introduction

Many proteins function as macromolecular assemblies. The monomers in such assemblies are often related to each other by point-group symmetry. For example, many protein homotetramers obey 222 point-group symmetry, while the protein capsid of icosahedral viruses possesses 532 point-group symmetry. When such assemblies are crystallized, the point-group symmetry of the assembly may superimpose with the crystallographic symmetry such that the assemblies are located at special positions in the unit cell. However, it often happens that the assemblies are located at general positions in the unit cell. In such cases, the point-group symmetry of the assembly exists within the asymmetric units of the crystal and thereby the symmetry of the assembly becomes non-crystallographic symmetry (NCS) in the crystal. Traditionally, the individual molecules of the assembly are treated separately in the molecular-replacement (MR) calculation, with no assumption of or regard to the NCS of the assembly. However, the presence of NCS introduces significant constraints on structure solution by the molecular-replacement method. A correct solution from the MR calculation must obey the NCS of the assembly. Therefore, it is more appropriate in these cases to constrain the MR calculations such that any solution that is obtained will obey the NCS of the assembly. In other words, such MR calculations are locked to the NCS of the assembly, hence the name locked rotation and locked translation functions. The concept of the locked self-rotation function was first proposed in 1972, in the study of the orientations of the tetramer of glyceraldehyde 3-phosphate dehydrogenase (Rossmann et al., 1972).
The locked rotation and translation functions offer many advantages over the traditional MR calculations. First of all, a single rotation and translation can define the entire assembly, thereby simplifying the MR calculations. Traditional methods will need to define the orientation and position of each monomer of the NCS and this becomes extremely cumbersome in cases of high symmetry. More importantly, the locked MR calculations consider the contributions of the entire assembly at the same time. This should give rise to stronger signals in the calculation, especially for cases of high NCS. Not surprisingly, the locked rotation function (RF) has found the widest application in virus crystallography, owing to the high NCS that is often involved. However, the locked MR calculations should apply to most cases of NCS point groups. In cases where the NCS does not belong to a point group (improper symmetry), the application of the locked MR calculations becomes more difficult. For the locked RF calculations, the difficulty lies in the definition of the standard orientation of the assembly (see below). This requires knowledge of the orientations of the NCS axes relative to each other, which are generally not known beforehand with improper symmetry. In comparison, for a point group (proper symmetry) these relative orientations are fixed. In any event, if a standard orientation can be defined, the locked RF can be applied to cases where the NCS is not a point group. On the other hand, the application of the locked translation function is limited to cases where the NCS is a point group. For ease of discussion, here we will consider only cases where the NCS is a point group.

2. The locked self-rotation function

When the crystal contains NCS, self-rotation functions (self RFs) are used to determine the orientations of the NCS elements in the crystal unit cell.
Ordinary self-RF calculations make no assumptions about the NCS and determine the orientations of the NCS elements independently of each other. However, often the nature of the NCS is known beforehand. For example, a protein that migrates as a tetramer on gel-filtration columns may form a complex that obeys 222 point-group symmetry. Similarly, icosahedral viruses are expected to have 532 symmetry. With knowledge of the possible NCS point group, the self-RF calculations can be locked to this point group, giving rise to the locked self RF (Tong & Rossmann, 1990). Three steps are involved in the calculation of a locked self RF.

• (i) Define the standard orientation of the NCS point group. This standard orientation can be considered as a reference orientation for the NCS point group. It is usually defined such that the rotation matrices of the point group have simple forms. For example, for 222 point-group symmetry, the standard orientation can be defined such that the three twofold axes are parallel to the three Cartesian coordinate axes. With this definition, the off-diagonal elements of the rotation matrices are all zero. Additional considerations in the definition of the standard orientation are discussed in the section on the locked cross-rotation function. Once the standard orientation is defined, any arbitrary orientation of the NCS point group can be related to the standard orientation by a single rotation. Conversely, by applying different rotations to the standard orientation, any orientation of the NCS point group can be generated. Mathematically, assume [I[n]] (n = 1, …, N) is the collection of NCS point-group rotation matrices in the standard orientation and a rotation [E] is applied to the standard orientation. This will bring the NCS point group to a new orientation and the NCS rotation matrices in this new orientation, [ρ[n]], are given by (Tong & Rossmann, 1990)

[ρ[n]] = [E][I[n]][E]^−1 (n = 1, …, N). (1)

• (ii) Calculate the locked self-RF values for a collection of rotation angles ([E]).
This collection of angles should either cover the entire unique region of the rotation space for the locked self RF (see below) or sample the region of interest in the rotation space. For each rotation [E], the ordinary self-RF value (R[n]) for each of the NCS rotation matrices in the new orientation ([ρ[n]], equation 1) is calculated. The locked self-RF value (R[L]) for this rotation is defined as the average of the ordinary RF values over the NCS elements (Tong & Rossmann, 1990),

R[L] = [1/(N − 1)] ∑[n = 2, …, N] R[n]. (2)

Note that in the equation above the summation starts from 2, as it is assumed that [I[1]] is the identity matrix and therefore R[1] is a constant independent of the rotation [E].

• (iii) Identify the peaks in the locked self-RF map. The correct solution is expected to be one that has a high locked self-RF value. If necessary, the directions of the NCS elements for this solution can be calculated and compared with the ordinary self-rotation functions to confirm that the locked self-RF result is correct.

The locked self RF simplifies the task of defining the orientation of an NCS assembly. Instead of searching for N − 1 peaks in the ordinary self RF, a single peak is sought in the locked self RF. It must be stressed, however, that this rotation in the locked self RF is a general rotation. For example, for a point group such as 222, the rotation [E] can have any κ value (in polar angles). The locked self-RF calculation in this case cannot be limited to the κ = 180° plane, in contrast to the ordinary self RF where κ would normally be fixed at 180°. As the rotation in the locked self RF is a general one, it is generally better to carry out the calculations in Eulerian angles. This also makes it easier to define the unique region of the rotation space (see below). Another major advantage of the locked self RF is that it reduces the noise in the calculation owing to the averaging of the ordinary RF values (2).
It can be expected statistically that the noise level in the RF will be reduced by a factor of (N − 1)^1/2 by the averaging process and this has been shown to be roughly correct based on actual calculations (Tong & Rossmann, 1990 ). Therefore, for icosahedral viruses roughly an eightfold noise reduction can be achieved with the locked self RF. The symmetry of the locked self RF is generally rather complicated. It depends on the crystallographic symmetry and the NCS and also depends on the definition of the standard orientation. To illustrate this symmetry, the 222 point group is used here as an example. Assume that the standard orientation is defined such that the twofold axes are parallel to the Cartesian coordinate axes and that a rotation [E] is applied to this standard orientation. If [H] is a 90° rotation around the Z axis, applying the rotation [E][H] to the standard orientation should produce the same orientation of the NCS as applying the rotation [E]. This is owing to the fact that the rotation [H] only swaps the X and Y axes, but does not cause a net change to the standard orientation. Similarly, a 120° rotation around the [111] direction will not change the standard orientation either, as it only causes a cyclic permutation of the twofold axes. Therefore, for 222 point-group symmetry, the locked self RF appears to have at least 432 symmetry, as the collection of [H] matrices have 432 symmetry (Tong & Rossmann, 1997 ). The unique region of rotation space in this case can be defined to cover the regions 0–90° for all three Eulerian angles. More generally, if rotation [H] satisfies the condition [H][I[n]][H]^−1 = [I[m]], applying [E] and [E][H] to the standard orientation will produce the same results. These two rotations are related by the symmetry of the locked self RF. Occasionally, additional symmetry of the locked self RF can be generated by the crystallographic symmetry. In practice, the locked self-RF calculations can be rather fast. 
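The orientation bookkeeping described above is easy to check numerically. The sketch below is an illustration only (it is not code from the paper, and the rotation angles are made up): it builds the 222 point group in a standard orientation with the twofold axes along the Cartesian axes, conjugates it by a general rotation [E] as in the relation [ρ[n]] = [E][I[n]][E]^−1, and verifies that composing [E] with a 90° rotation [H] about Z leaves the set of NCS matrices unchanged, as the discussion of the locked self-RF symmetry predicts.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the Z axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_axis(axis, deg):
    """Rotation about an arbitrary axis (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    t = np.radians(deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

# 222 point group in the standard orientation: the identity plus twofold
# rotations about X, Y and Z (all matrices diagonal, off-diagonals zero).
I_n = [np.diag(d) for d in ([1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1])]

# Applying a general rotation [E] to the standard orientation conjugates
# each NCS matrix: rho_n = E I_n E^-1 (E is orthogonal, so E^-1 = E^T).
E = rot_axis([1, 2, 3], 37.0)             # arbitrary illustrative rotation
rho = [E @ I @ E.T for I in I_n]

# A 90 degree rotation [H] about Z merely swaps the X and Y twofold axes,
# so [E] and [E][H] generate the same *set* of NCS matrices.
H = rot_z(90.0)
rho_H = [(E @ H) @ I @ (E @ H).T for I in I_n]

def same_set(a, b, tol=1e-10):
    return all(any(np.allclose(m, n, atol=tol) for n in b) for m in a)

print(same_set(rho, rho_H))               # → True
```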
One should generally cover a large region of rotation space and then classify the resulting solutions based on the orientations of the NCS that they produce.

3. The locked cross-rotation function

The locked self RF has found the widest use so far in macromolecular crystallography, especially for icosahedral viruses, to determine the orientation of the NCS point-group symmetry elements in the crystal unit cell. The locked cross-rotation function (locked cross RF) and the locked translation function can be used to solve the structure of the crystal when the atomic model of only the monomer of the NCS assembly is available. For example, the structure of the monomer may have been determined in a different crystal form, by NMR or other methods, but it is not known how the monomers are arranged in the NCS assembly. Alternatively, it may be possible that the NCS assembly has undergone a reorganization, for example owing to ligand binding, leading to large changes in the relative orientation and position of the monomers in the assembly. In such a case, it is more appropriate to determine the structure of the new assembly with the model of the monomer. With traditional MR methods, the individual monomers of the assembly are treated essentially independently in such a structure determination. The orientation and position of one monomer is determined first, followed by the determination of the second and additional monomers. This procedure is not only tedious but also suffers from low signals in the calculation, especially for locating the first monomer when the NCS is high. For example, with an assembly obeying 222 symmetry, the first monomer will only account for 25% of the diffracting power of the crystal and this will reduce the signals in both the ordinary RF and TF calculations to locate this monomer. Similar to the locked self RF, one can take advantage of the presence of NCS in such a structure determination. 
The entire NCS assembly is considered in the locked calculations, which should increase the signal and reduce the noise. Overall, four steps are needed in the calculations that utilize the NCS (Fig. 1).
• (i) Determine the orientation of the NCS in the crystal by the locked self RF. This is discussed in the previous section and will lead to the determination of the rotation [E].
• (ii) Determine a rotation [F] that relates the orientation of the monomer search model and one monomer of the NCS assembly in the standard orientation. Expansion by the NCS will then define the orientations of all the molecules of the assembly. This is the locked cross RF.
• (iii) Determine the position of the monomer relative to the center of the NCS by the locked translation function. Expansion by the NCS will then define the entire NCS assembly, centered (arbitrarily) at (0, 0, 0) in space.
• (iv) Determine the center of the NCS assembly in the crystal unit cell. A model for the entire assembly is produced in step (iii). Therefore, an ordinary TF can be used to determine the center of this assembly in the crystal.
For the locked cross RF, assume [F] is a rotation that makes the orientation of the monomer search model the same as one of the monomers of the assembly in the standard orientation; the orientations of all the monomers in the crystal unit cell are then given by (Tong & Rossmann, 1997)

[ρ[n]] = [E][I[n]][F]. (3)

In other words, [ρ[n]] represents the (cross-)rotational relationship between the monomer search model and the monomers of the assembly in the crystal. Therefore, an ordinary cross-RF value R[n] can be calculated for each of the rotations [ρ[n]] and the locked cross-RF value is defined as the average

R[L] = (1/N) Σ[n = 1..N] R[n].

Like the ordinary cross RF, the rotation [F] is completely general and can assume any value. The symmetry of the locked cross RF depends on the symmetry of the NCS point group and the definition of the standard orientation. 
It is however independent of the crystallographic symmetry, as the rotation [F] relates the orientation of the search model to the NCS assembly in a specific crystallographic asymmetric unit, with its orientation defined by the rotation [E]. The unique region of the rotation space for the locked cross RF can be derived from the fact that rotations [F] and [I[n]][F] will produce the same set of rotational relationships between the search model and the crystal. Therefore, the unique region of the locked cross RF can be the same as that of an ordinary cross RF between a P1 crystal and a crystal with space-group symmetry that is equivalent to the NCS point group. For example, with a 222 tetramer, the unique region of the locked cross RF can be the same as that of an ordinary cross RF between space groups P1 and P222, which has already been defined (Rao et al., 1980). For this correspondence to work, however, the NCS standard orientation must be defined in the same way as that in the equivalent space group. For example, for 422 point-group symmetry, the standard orientation must be defined such that the fourfold axis is along the Cartesian Z axis and one of the twofold axes is along the Cartesian X axis. The definition of the locked cross RF presented here (Tong & Rossmann, 1997) is different from the original one (Tong & Rossmann, 1990), where the rotation [F] relates the orientation of the monomer search model and a monomer of the NCS assembly in the actual orientation in the crystal. While both definitions are functionally correct, the new definition is preferred as it greatly simplifies the understanding of the symmetry of the locked cross RF.

4. The locked translation function

Once the orientation of one monomer of the NCS assembly is defined by the locked cross RF, the orientations of all the monomers of the assembly are defined (3). 
The next step in the locked MR calculation is to determine how the monomers are positioned in the NCS assembly with the locked TF (Tong, 1996). Ordinary TF calculations are based on comparisons of intermolecular vectors, where the molecules are related by the crystallographic symmetry. In contrast, the locked TF calculations are based on vectors among molecules that are related by the NCS. The locked TF does not take into account the crystallographic symmetry of the crystal. For the locked TF, the center of the NCS assembly is placed (arbitrarily) at the origin of the coordinate system. The rotation [F] that brings the monomer search model into the same orientation as one of the monomers of the NCS assembly in the standard orientation is determined from the locked cross RF. If V[0] is the translation vector that places this monomer in the same position as the monomer in the NCS assembly in the standard orientation, the entire assembly is defined by (Tong, 1996)

X[j]^n = [I[n]]([F]X[j]^0 + V[0]),

where X[j]^0 denotes the atomic coordinates of the jth atom in the monomer search model. The atomic coordinates of the entire assembly in the crystal unit cell, centered at the origin, are given by

x[j]^n = [α][E]X[j]^n,

where [α] is the deorthogonalization matrix (Rossmann & Blow, 1962). The calculated structure factors based on this single NCS assembly in the crystal unit cell, ignoring the crystallographic symmetry, are then

F[h]^c = Σ[n] Σ[j] f[j] exp(2πi h·x[j]^n).

The locked TF is based on the overlap between the intermolecular vectors within this NCS assembly and the observed Patterson map (Tong, 1993, 1996) and is evaluated from the observed structure-factor amplitudes F[h]^o (equation 9). A constant term has been omitted in (9) (Tong, 1996). The equation for the locked TF (9) bears remarkable resemblance to that for the ordinary Patterson correlation translation function (Harada et al., 1981; Tong, 1993), with the interchange of the crystallographic (T[n]) and NCS ([θ[n]]) parameters (Tong, 1996). The evaluation of the locked TF is however more complicated. 
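The assembly construction just described can be sketched in a few lines of numpy. This is a hypothetical illustration, not the GLRF code; for brevity it assumes unit scatterers, an orthogonal cell ([α] = identity) and omits the overall orientation [E]:

```python
import numpy as np

def build_assembly(X0, ncs, F, V0):
    """Expand the oriented, translated monomer by the NCS matrices [I_n]:
    X_j^n = I_n (F X_j^0 + V_0); the assembly is centred at the origin.
    X0 has one row per atom."""
    placed = X0 @ F.T + V0
    return [placed @ I_n.T for I_n in ncs]

def structure_factors(assembly, hkl):
    """F_h^c = sum over all atoms of exp(2*pi*i h.x), for unit scatterers
    and ignoring the crystallographic symmetry, as the locked TF does."""
    x = np.vstack(assembly)
    return np.exp(2j * np.pi * (hkl @ x.T)).sum(axis=1)
```

With 222 NCS matrices, a one-atom monomer at the origin and an offset V[0], `build_assembly` returns the four symmetry-expanded copies that the locked TF scores against the observed Patterson map.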
The fast Fourier transform (FFT) method cannot be applied to (9) directly, as the [θ[n]] matrices are generally non-integral. Direct summation can be used to evaluate (9), but it would take too much time for most cases. In practice, N(N − 1)/2 FFTs, one for each pair of NCS elements, are calculated first and the locked TF values are then obtained by interpolating among these transforms. In selecting solutions from the locked TF, the packing of the monomers in the NCS assembly is also examined to remove those solutions that cause serious steric clashes among the monomers. There is no inherent symmetry in the locked TF. The unique region of the locked TF is generally a sphere or spherical shell centered at the origin of the coordinate system, if the monomer search model has been positioned such that its center is at the origin. If the monomer search model is not centered at the origin, the unique region of the locked TF will depend on both the rotation [F] and the position of the center. Therefore, the center of the monomer search model should be placed at the origin for all locked TF calculations. The radius of the sphere is determined by the distance between the center of the monomer and the center of the NCS assembly, which can be affected both by the size of the monomer and by the packing of the monomers in the assembly. Alternatively, the unique region of the locked TF can be defined as a cube centered at the origin. In special cases, the unique region of the locked TF can be limited to two dimensions. For example, if the NCS has sixfold symmetry and the standard orientation is defined such that the sixfold axis is along the Z axis, only the XY plane needs to be covered in the locked TF calculations.

5. An example

All the locked MR calculations described here are supported in the GLRF program (Tong & Rossmann, 1990, 1997), which is available freely to academic users as part of the Replace program package (Tong, 1993). 
To illustrate the concept and the application of the locked MR method, the structure solution of a new crystal form of the human malic enzyme is presented here as an example (Yang & Tong, 2000). Malic enzymes are tetrameric in solution and the tetramers obey 222 point-group symmetry (Bhargava et al., 1999). The tetramer interface undergoes large reorganizations depending on whether transition-state analog inhibitors are bound to the enzyme (Xu et al., 1999; Yang et al., 2000). For the example here, the monomer of the enzyme was used as the search model to solve the structure of the enzyme in a different crystal form. This new crystal belongs to the space group P2[1], with a tetramer of the enzyme in the asymmetric unit (Bhargava et al., 1999). The first step in a locked MR calculation is to determine the orientation of the NCS axes. For this example, the ordinary self RF clearly showed the orientations of the NCS twofold axes, demonstrating the 222 symmetry of the tetramer (Bhargava et al., 1999). For the locked self RF, the standard orientation of the point group was defined such that the three twofold axes are parallel to the Cartesian coordinate axes. The calculation covered the region 0–90° for each Eulerian angle with a grid interval of 3°. An ordinary self RF was calculated first with the fast rotation function (Crowther, 1972) using reflection data between 10 and 3.5 Å resolution. The radius of integration was 35 Å. The locked self-RF values were then obtained by interpolating in the ordinary self-RF map. The entire calculation took roughly 4 min of CPU time on an SGI O2 R10000 workstation. The highest peak in the locked self RF stands out from the rest of the peaks, suggesting that it is likely to be the correct solution (Table 1). However, peaks 2–4 also have reasonably high locked self-RF values (Table 1). 
The orientations of the NCS elements corresponding to each of the top four peaks in the locked self-RF map were then plotted in a stereographic projection and compared with the ordinary self RF (Fig. 2). It clearly shows that the top peak in the locked self RF is the correct solution. However, peaks 2–4 are erroneous, owing to accidental overlap of one of the twofold axes with the correct orientation. Such noise in the locked self RF is expected to be more serious when the NCS is low. When the NCS is high, for example for icosahedral viruses, the background noise is reduced significantly by the averaging and the correct solution is essentially the only peak in the locked self RF (Tong & Rossmann, 1990). In addition, once two non-collinear NCS axes are matched by a rotation, the entire NCS point group is matched. Therefore, when the NCS is low it may be important to cross check the solution from the locked self RF with the results from the ordinary self RF.

Table 1

(a) Results from locked self RF to define the orientation of the NCS point group.

Peak No.  θ[1]  θ[2]  θ[3]  Height  Height/σ
1         33    33    24    488.7   13.9
2         81    75    30    430.0   10.2
3         33    33    60    422.8   9.7
4         75    78    33    400.0   8.2
5         6     18    49    377.8   6.8

(b) Results from locked cross RF to define the orientation of the monomer model.

Peak No.  θ[1]  θ[2]  θ[3]  Height  Height/σ
1         93    87    159   1000    10.1
2         165   75    0     589.6   5.8

(c) Results from locked TF to define the position of the monomer in the NCS assembly.

Peak No.  X    Y    Z    Height  Height/σ
1         27   −2   26   1000    7.8
2         −26  −2   26   530.0   4.1

(d) Results from ordinary TF to define the center of the NCS assembly in the crystal.

Peak No.  x       y  z       Height  Height/σ  CC    R     Contact
1         0.1389  0  0.3688  100     12.6      47.9  39.2  1
2         0.1019  0  0.0000  24.2    3.1       36.0  43.7  23

With the knowledge that the top peak in the locked self RF is correct, a fine search was then carried out using 1° intervals around the rotational parameters of this top peak. This produced more accurate parameters for the rotation [E], at 32, 34, 24°. 
The locked cross RF was then calculated to determine the orientation of the monomer in the NCS assembly. As with the ordinary cross RF, the monomer search model was placed in a large P1 cell with dimensions of a = b = c = 100 Å and structure factors to 3.5 Å resolution were calculated for this artificial crystal. An ordinary cross RF was calculated with the fast RF (Crowther, 1972), using reflection data between 10 and 3.5 Å resolution and a radius of integration of 35 Å; the locked cross-RF values were obtained by interpolating in this map. The entire calculation took roughly 6 min CPU time, covering the region 0–180° in θ[1] and θ[3], and 0–90° in θ[2], with 3° grid intervals. The locked cross RF contained one significant peak whose height was about twice that of the second peak in the function (Table 1). This clearly demonstrated that the correct orientation of the monomer had been found. More accurate parameters for the rotation were obtained from a subsequent fine search, with 1° intervals in the three Eulerian angles. With the knowledge of the orientation of the NCS assembly (rotation [E]) and the orientation of the monomer in this assembly (rotation [F]), the locked TF was then calculated to determine the position of this monomer relative to the center of the NCS. Reflection data between 10 and 3.5 Å resolution were used in the calculation. The monomer search model was centered at the origin of the coordinate system. The search region was defined as a spherical shell with an inside radius of 15 Å and an outside radius of 40 Å, as it is known from the structures of the other tetramers of this enzyme that the center of the monomer is about 35 Å from the center of the tetramer. The grid interval along the three axes was 1 Å. The calculation took about 6 min CPU time. There was only one significant peak in the locked TF (Table 1) and placing the monomer at this position also gives rise to reasonable packing of the monomers in the NCS assembly. 
Therefore, this is likely to be the correct solution from the locked TF. A fine search was then carried out using 0.5 Å intervals to obtain more accurate parameters for the position of the monomer. At the completion of the locked TF calculation, the GLRF program outputs the atomic model for the entire NCS assembly in the standard orientation and centered at the origin. This model for the NCS assembly was then used in an ordinary TF calculation to determine the center of the NCS assembly in the crystal. The TF program of the Replace package was used in this example (Tong, 1993), using reflection data between 10 and 4 Å resolution. It clearly revealed the location of the NCS assembly in the crystal (Table 1). The top peak has significantly better correlation coefficient (CC) and R-factor values. In addition, there are few steric clashes among crystallographically related molecules based on this solution (Tong, 1993). This confirms that the locked MR calculations successfully determined the structure of this new crystal form of human malic enzyme.

This research is supported by a grant from the National Science Foundation (DBI-98-76668).

References

Bhargava, G., Mui, S., Pav, S., Wu, H., Loeber, G. & Tong, L. (1999). J. Struct. Biol. 127, 72–75.
Crowther, R. A. (1972). The Molecular Replacement Method, edited by M. G. Rossmann, pp. 173–178. New York: Gordon & Breach.
Harada, Y., Lifchitz, A. & Berthou, J. (1981). Acta Cryst. A37, 398–406.
Rao, S. N., Jih, J. H. & Hartsuck, J. A. (1980). Acta Cryst. A36, 878–884.
Rossmann, M. G. & Blow, D. M. (1962). Acta Cryst. 15, 24–31.
Rossmann, M. G., Ford, G. C., Watson, H. C. & Banaszak, L. J. (1972). J. Mol. Biol. 64, 237–249.
Tong, L. (1993). J. Appl. Cryst. 26, 748–751.
Tong, L. (1996). Acta Cryst. A52, 476–479.
Tong, L. & Rossmann, M. G. (1990). Acta Cryst. A46, 783–792.
Tong, L. & Rossmann, M. G. (1997). Methods Enzymol. 276, 594–611.
Xu, Y., Bhargava, G., Wu, H., Loeber, G. & Tong, L. (1999). Structure, 7, 877–889.
Yang, Z., Floyd, D. L., Loeber, G. & Tong, L. (2000). Nature Struct. Biol. 7, 251–257.
Yang, Z. & Tong, L. (2000). Protein Pept. Lett. 7, 287–296.

© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.
Hogshead (US) to Tun Converter

⇅ Switch to Tun to Hogshead (US) Converter

How to use this Hogshead (US) to Tun Converter 🤔

Follow these steps to convert given volume from the units of Hogshead (US) to the units of Tun.
1. Enter the input Hogshead (US) value in the text field.
2. The calculator converts the given Hogshead (US) into Tun in realtime ⌚ using the conversion formula, and displays the result under the Tun label. You do not need to click any button. If the input changes, the Tun value is re-calculated, just like that.
3. You may copy the resulting Tun value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the Reset button present below the input field.

What is the Formula to convert Hogshead (US) to Tun?

The formula to convert given volume from Hogshead (US) to Tun is:
Volume[(Tun)] = Volume[(Hogshead (US))] / 4
Substitute the given value of volume in hogshead (US), i.e., Volume[(Hogshead (US))], in the above formula and simplify the right-hand side. The resulting value is the volume in tun, i.e., Volume[(Tun)].

Example 1: Consider that a distillery stores 2 hogsheads (US) of whiskey. Convert this storage volume from hogsheads (US) to Tun.

The volume in hogshead (US) is: Volume[(Hogshead (US))] = 2
The formula to convert volume from hogshead (US) to tun is: Volume[(Tun)] = Volume[(Hogshead (US))] / 4
Substitute the given volume Volume[(Hogshead (US))] = 2 in the above formula.
Volume[(Tun)] = 2 / 4 = 0.5
Final Answer: Therefore, 2 hhd (US) is equal to 0.5 tun.

Example 2: Consider that a brewery fills 4 hogsheads (US) with ale. Convert this volume from hogsheads (US) to Tun. 
The volume in hogshead (US) is: Volume[(Hogshead (US))] = 4
The formula to convert volume from hogshead (US) to tun is: Volume[(Tun)] = Volume[(Hogshead (US))] / 4
Substitute the given volume Volume[(Hogshead (US))] = 4 in the above formula.
Volume[(Tun)] = 4 / 4 = 1
Final Answer: Therefore, 4 hhd (US) is equal to 1 tun.

Hogshead (US) to Tun Conversion Table

The following table gives some of the most used conversions from Hogshead (US) to Tun.

Hogshead (US) (hhd (US))    Tun (tun)
0.01 hhd (US)               0.0025 tun
0.1 hhd (US)                0.025 tun
1 hhd (US)                  0.25 tun
2 hhd (US)                  0.5 tun
3 hhd (US)                  0.75 tun
4 hhd (US)                  1 tun
5 hhd (US)                  1.25 tun
6 hhd (US)                  1.5 tun
7 hhd (US)                  1.75 tun
8 hhd (US)                  2 tun
9 hhd (US)                  2.25 tun
10 hhd (US)                 2.5 tun
20 hhd (US)                 5 tun
50 hhd (US)                 12.5 tun
100 hhd (US)                25 tun
1000 hhd (US)               250 tun

Hogshead (US)

The US hogshead is a unit of measurement used to quantify large liquid volumes, particularly in the United States. It is defined as 63 US gallons, which is approximately 238.5 liters. Historically, the hogshead was used for measuring significant quantities of beverages such as wine, beer and other liquids in trade and commerce. Today, it is less commonly used but still recognized in specific contexts related to traditional trade practices, especially in the beverage industry and historical references.

Tun

The tun is a unit of measurement used to quantify large volumes, particularly in the context of liquids such as wine or beer. As used by this converter it equals four US hogsheads, i.e. 252 US gallons, which is approximately 954 liters or 1,008 US quarts. Historically, the tun was used to measure the capacity of large casks or barrels for storing and transporting liquids. The term is still referenced in certain industries, such as brewing and winemaking, where large volumes are common. Although less commonly used today, it remains part of historical measurement systems and is occasionally encountered in trade and commerce.

Frequently Asked Questions (FAQs)

1. What is the formula for converting Hogshead (US) to Tun in Volume?
The formula to convert Hogshead (US) to Tun in Volume is: Hogshead (US) / 4

2. Is this tool free or paid?
This Volume conversion tool, which converts Hogshead (US) to Tun, is completely free to use.

3. How do I convert Volume from Hogshead (US) to Tun?
To convert Volume from Hogshead (US) to Tun, you can use the following formula: Hogshead (US) / 4
For example, if you have a value in Hogshead (US), you substitute that value in place of Hogshead (US) in the above formula, and solve the mathematical expression to get the equivalent value in Tun.
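The two formulas above are each a single line of Python (the function names are my own illustration, not part of the site):

```python
def hogshead_us_to_tun(volume_hhd):
    """1 tun = 4 US hogsheads, so divide the hogshead value by 4."""
    return volume_hhd / 4

def tun_to_hogshead_us(volume_tun):
    """Inverse conversion, the 'Switch' direction of the page."""
    return volume_tun * 4
```

Running `hogshead_us_to_tun(2)` reproduces Example 1 (0.5 tun) and `hogshead_us_to_tun(4)` reproduces Example 2 (1 tun).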
• INSTANCE: Complete graph G = (V, E) with distances d(u, v) between the vertices.
• SOLUTION: A set of k facilities, i.e., a subset F ⊆ V with |F| = k.
• MEASURE: The minimum distance between two facilities, i.e., min{d(f1, f2) : f1, f2 ∈ F, f1 ≠ f2}.
• Good News: Approximable within 2 [421].
• Bad News: Not approximable within 2 − ε for any ε > 0 [421].
• Comment: Not in APX if the distances do not satisfy the triangle inequality. MAXIMUM EDGE SUBGRAPH is the variation where the measure is the average distance between any pair of facilities [421]. The variation in which we allow the points in F to lie in edges (considered as curves) is also approximable within 2 and is not approximable within 2 − ε [449].

Viggo Kann
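The factor-2 guarantee in the Good News entry can be realized by a simple greedy heuristic: start from the farthest pair and repeatedly add the point whose minimum distance to the already-chosen facilities is largest. A sketch (names are illustrative; metric distances are assumed, matching the triangle-inequality comment above):

```python
from itertools import combinations

def greedy_dispersion(points, k, dist):
    """Greedy farthest-point heuristic for the max-min dispersion measure:
    choose k facilities so the minimum pairwise distance is large.
    For metric distances this is a 2-approximation."""
    # Seed with the farthest pair of points.
    f = list(max(combinations(points, 2), key=lambda p: dist(p[0], p[1])))
    while len(f) < k:
        # Add the point farthest (in min-distance) from the chosen set.
        f.append(max((p for p in points if p not in f),
                     key=lambda p: min(dist(p, q) for q in f)))
    return f

def dispersion(F, dist):
    """The objective: minimum distance over all pairs of chosen facilities."""
    return min(dist(a, b) for a, b in combinations(F, 2))
```

On a small metric instance the greedy recovers the optimal min-distance; in general it is only guaranteed to be within a factor 2.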
How To Find the Probability of Something? | TIRLA ACADEMY

To find the probability of an event, we divide the number of favorable outcomes of the event by the total number of outcomes.

Probability P(E) = No. of favorable outcomes of an event ÷ Total no. of outcomes

Let's take an example for better understanding:

Q. Suppose we throw a die once. What is the probability of getting a number greater than 4?

Numbers greater than 4 (favorable outcomes) = 2 (5, 6)
Total numbers (total outcomes) = 6 (1, 2, 3, 4, 5, 6)

Probability P(E) = Favorable outcomes of an event ÷ Total outcomes
P(E) = 2 ÷ 6 = 1/3
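The same computation can be done exactly with Python's fractions module (a small illustration of the formula above, not from the original page):

```python
from fractions import Fraction

def probability(favorable, total):
    """P(E) = favorable outcomes / total outcomes, reduced automatically."""
    return Fraction(favorable, total)

# Die example: outcomes greater than 4 are {5, 6} out of {1, ..., 6}.
die_gt_4 = probability(2, 6)
```

`Fraction` keeps the answer exact, so 2/6 is automatically reduced to 1/3 just as in the worked example.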
I'm seeking a trailing stop EA that surely must exist, any ideas?

Anyone have any ideas on how to make 2 digit brokers work? My stoploss automatically moves to zero after a few pips on 2 digit brokers... Anyone with any thoughts?

Pip = Point; if (Digits == 2||Digits == 5){Pip *= .001; D_Factor = 10;} Trail_From *= Pip; Trail_Max *= Pip;

Try this correction

Pip = Point; if (Digits == 3||Digits == 5){Pip *= 10; D_Factor = 10;} Trail_From *= Pip; Trail_Max *= Pip;

Robert Hill aka MrPip

Ok, herewith the latest version. It just addresses the issue of stoplevels. There is also a boolean (true/false) function that allows the EA to make some simple trades so that you may backtest the trailing stop function on your favourite instrument. 4XWeezal, please test and see if it works. I tested it on XAGUSD at FXDD so it should work fine with 2 digit instruments. leanerdavid, I will look at the issue of initial stops in the next version.

The attachment shows as php instead of mq4.

Robert Hill aka MrPip

Hi Robert, The attachment shows as php instead of mq4. Yeah, I get that sometimes too. Not sure why it happens. Try downloading it from the paperclip next to the thread page numbering.

Ya, it needs to be on the chart of the instrument that it is controlling... But I have changed it now to use the correct Bid/Ask price, so, hopefully, this one will control all properly. You will also notice an OrderModify error 1 from time to time. This is not serious. It simply means that the stoploss was already at what it was trying to set it to and, therefore, there was no change to the stoploss. I am trying to track down where it is modifying the order unnecessarily, but for now you can just ignore the error. I updated the most recent version 5 with the error 1 corrected. There are comments in the code showing where the error occurs. I also added additional output for error reporting, also commented.

Attached File(s): Ben_trail_stop_ECN_v5a.mq4 6 KB | 922 downloads

I'm amazed an EA like this was not already available. Such a simple, obvious idea. You guys have done a great job, thanks.

Hi Robert, Thanks for that and nicely spotted. It's been bugging me for a while.

Wow, thanks so much for posting this. I was trying to do something very similar to this. I think there is an error though. I was just testing it and it didn't place the TrailingStop when it should have. I had it set to:

Trail_From = 10.0
Trail_Max = 50.0
Trail_Percent = 50

I had a sell opened at 1.43059. As I understand it, this should have set the TrailingStop to 1.43009 (1/2 of 10 pips) once the price dropped to 1.42959 (10 pips profit). But the TrailingStop wasn't set until the price had dropped 40 pips, and then it was set to 20 (1/2 of 40). I tried reducing Trail_From to 5, but it also went 40 pips before setting the TrailingStop to 20. I went through the code, but couldn't figure out why this is occurring. I verified this error on both live and test mode. Alternatively, I set Trail_From to 50 and it worked properly, initiating the Trailing Stop at 25 pips once it had moved 50. It seems it always defaults to 40 when the defined Trail_From is 40 or less. If there were just one more thing that I would like to see, it would be an option to allow us to close all losing trades at the same time we first initiate our TrailingStop. Thanks again for this.

Well, I think I discovered the answer to my own question. My broker (Alpari) doesn't allow SLs to be entered less than 20 pips from market price. This would need to run as stealth and execute TrailingStops from the EA when TS is set to initiate less than 20 pips from price.

Wow, 20 pips! That seems kind of high? forex.com is 5. ibfx is 3. Are you certain about 20 pips?

Try Steve Hopwoods MPTM, that may work for you! We are our own best indicator.

I think he's confused with 2 pips, but 5 digit broker displaying as 20.

I'd like a trailing stop EA that trails by a percentage value of the amount you are in profit. So for example, 3 variables:

Trail from = 10
Trail max = 50
Trail percent = 50

So in this example the ts would be placed when you are 10 pips (trail from) in profit, and it would place it 5 pips (50% trail percent) from the current price. Then as price moves in your favour it trails by 50%, so at 12 pips profit it trails by 6, 14 by 7 and so on... When it reaches 100 pips profit it then trails by 50 pips (trail max). Surely someone else has done this before...

This might also work. Not Stealth. "Here's an EA that Caveman wrote for me after this request."

Hi all, I love the simplicity of this trailing stop! AWESOME. I have created a concept that mixes Nanningbob's EA with 7bit's EA (Snowball) with this trailing stop. So far only doing it manually. What I am trying to do is use this EA for my trailing stop. However, I will enter every 'x' pips long and short (grid system). Then on the orders I get caught in, I will martingale out of them (200 pips+ away). I will be using very small dollars and proper money management. I am curious if someone could help me put this into this EA? I have some code, but unfortunately I am not the best programmer and so far, I have it so it keeps entering orders over and over again. I believe my logic is almost there. Please help as I would like to see the profitability of this system.
Inserted Code

//| Ben_trail_stop.mq4 |
//| Copyright © 2010, www.compu-forex.com |
//| www.compu-forex.com |
#property copyright "Copyright © 2010, www.compu-forex.com"
#property link      "www.compu-forex.com"

// Added by Robert for better error reporting
#include <stdlib.mqh>
#include <stderror.mqh>

//---- input parameters
extern int    OrderDistance      = 15;    // Distance of order entry, to include spread*2 + minimum broker modification distance
extern double LotSize            = 0.01;  // Size of lot to be introduced
extern int    MagicNumber        = 0;
extern bool   Enable_Trade       = false;
extern bool   Own_Symbol_Only    = true;
extern double Trail_From         = 10.0;
extern double Trail_Max          = 50.0;
extern double Trail_Percent      = 50;
extern int    Slippage           = 3;     // Slippage for orders
extern double StopLoss           = 0.0;   // Stop Loss
extern double TakeProfit         = 0.0;   // Take Profit
extern int    MaximumPipsPending = 100;   // Maximum pips distance to leave pending orders open

string ArrowColor = "blue";
bool   mod;
int    err;                               // for better error reporting
double My_Profit, My_Trail, My_SL, Pip, lTrail_Max, lTrail_From, Stop_Level; // remaining declarations restored (all used below); the forum paste cut this line short

//| expert initialization function |
int init()   { return(0); }   // bodies were empty in the paste; minimal returns added
//| expert deinitialization function |
int deinit() { return(0); }
//| expert start function |
int start()
{
   // THIS SECTION OPENS ORDERS -- the poster's experimental grid code, left
   // commented out; it still contains the bugs he asked for help with.
   /*
   // If just starting up EA and no existing long/short in place
   if (OrdersTotal() < 2)
   {
      OrderSend(Symbol(), 4, LotSize, (Ask+(OrderDistance*Point)), Slippage, StopLoss, TakeProfit, "First Long",  MagicNumber, 0, ArrowColor); // Sends first long
      OrderSend(Symbol(), 5, LotSize, (Bid-(OrderDistance*Point)), Slippage, StopLoss, TakeProfit, "First Short", MagicNumber, 0, ArrowColor); // Sends first short
   }
   // For existing trades
   for (int x = 0; x < OrdersTotal(); x++)
      if (OrderSelect(x,SELECT_BY_POS,MODE_TRADES))
         if ((OrderSymbol()==Symbol() && OrderMagicNumber() == MagicNumber))
         {
            if (OrderType()==OP_BUY)
            {
               if(OrderOpenPrice() == (Ask+(OrderDistance*Point)))
                  OrderSend(Symbol(), 0, LotSize, (Ask+OrderDistance*Point), Slippage, StopLoss, TakeProfit, "", MagicNumber, 0, ArrowColor);
               if(OrderOpenPrice() > (Ask + ((OrderDistance*2)*Point)))
                  OrderSend(Symbol(), 4, LotSize, (Ask+OrderDistance*Point), Slippage, StopLoss, TakeProfit, "", MagicNumber, 0, ArrowColor);
            }
            else if (OrderType()==OP_SELL)
            {
               if(OrderOpenPrice() == (Bid*(OrderDistance*Point)))
                  OrderSend(Symbol(), 1, LotSize, (Bid+OrderDistance*Point), Slippage, StopLoss, TakeProfit, "", MagicNumber, 0, ArrowColor);
               if(OrderOpenPrice() < (Bid + ((OrderDistance*2)*Point)))
                  OrderSend(Symbol(), 4, LotSize, (Bid+OrderDistance*Point), Slippage, StopLoss, TakeProfit, "", MagicNumber, 0, ArrowColor);
            }
            // Clearing out orders further than 100 pips ---DOESN'T SEEM TO WORK
            else if (OrderType() == 4 && OrderOpenPrice() >= (Ask + (MaximumPipsPending*Point)))
               ; // body missing from the paste
            else if (OrderType() == 5 && OrderOpenPrice() <= (Bid + (MaximumPipsPending*Point)))
               ; // body missing from the paste
         }
   */

   ///////////// BEGIN TRAILING STOP /////////////////////////////////
   for (int i = 0; i < OrdersTotal(); i++){
      if (OrderSelect(i,SELECT_BY_POS,MODE_TRADES)){
         if(Own_Symbol_Only && OrderSymbol() != Symbol()) continue;
         if(OrderMagicNumber() == MagicNumber || MagicNumber == 0){
            Pip = MarketInfo(OrderSymbol(),MODE_POINT);
            if (MarketInfo(OrderSymbol(),MODE_DIGITS) == 3 || MarketInfo(OrderSymbol(),MODE_DIGITS) == 5) Pip *= 10;
            lTrail_Max  = Trail_Max  * Pip;
            lTrail_From = Trail_From * Pip;
            Stop_Level  = MarketInfo(OrderSymbol(),MODE_STOPLEVEL)*Pip;
            switch (OrderType()){   // the switch header was lost in the forum paste
               case OP_BUY :
                  My_Profit = MarketInfo(OrderSymbol(), MODE_BID) - OrderOpenPrice();
                  My_Trail  = MathMin(My_Profit * Trail_Percent/100, lTrail_Max);
                  My_SL     = NormalizeDouble(MarketInfo(OrderSymbol(),MODE_BID) - My_Trail, Digits);
                  if(My_Profit > lTrail_From){
                     if(MarketInfo(OrderSymbol(),MODE_BID) - My_SL > Stop_Level){
                        // This will cause a double OrderModify if OrderStopLoss() returns 0
                        // Combining both tests fixes the bug that returns error 1.
                        // if(OrderStopLoss() == 0) mod = OrderModify(OrderTicket(),OrderOpenPrice(),My_SL,OrderTakeProfit(),0, CLR_NONE);
                        if(OrderStopLoss() < My_SL || OrderStopLoss() == 0)
                           mod = OrderModify(OrderTicket(),OrderOpenPrice(),My_SL,OrderTakeProfit(),0, CLR_NONE);
                     }
                  }
                  break;
               case OP_SELL :
                  My_Profit = OrderOpenPrice() - MarketInfo(OrderSymbol(), MODE_ASK);
                  My_Trail  = MathMin(My_Profit * Trail_Percent/100, lTrail_Max);
                  My_SL     = NormalizeDouble(MarketInfo(OrderSymbol(),MODE_ASK) + My_Trail, Digits);
                  if(My_Profit > lTrail_From){
                     if(My_SL - MarketInfo(OrderSymbol(),MODE_ASK) > Stop_Level){
                        // Combined these as well for cleaner code
                        // if(OrderStopLoss() == 0) mod = OrderModify(OrderTicket(),OrderOpenPrice(),My_SL,OrderTakeProfit(),0,CLR_NONE);
                        if(My_SL < OrderStopLoss() || OrderStopLoss() == 0)
                           mod = OrderModify(OrderTicket(),OrderOpenPrice(),My_SL,OrderTakeProfit(),0,CLR_NONE);
                     }
                  }
                  break;
            }
            // Modified to output a text description of the error
            // if(!mod && GetLastError() > 1) Print("Error entering Trailing Stop - Error " + GetLastError());
            err = GetLastError();
            if (err > 1) Print("Error entering Trailing Stop - Error (" + err + ") " + ErrorDescription(err));
         }
      }
      else Print("Error selecting order");
   }
   return(0);
}

We are our own best indicator.

Hi guys, Herewith version 6. The main difference is the addition of a custom stop level, so you can have the EA modify the trail every 0.5 or 1 pips instead of on each tick. This will also function as a jumping stop for those that prefer such. Just enter the new external parameter Min_Mod as something like 5 and the EA will jump the trail by 5 pips each time the price has advanced in your favour sufficiently.

Attached File(s): Ben_trail_stop_ECN_v6.mq4 5 KB | 1,445 downloads

I loaded the latest version of "Ben trail stop ECN v6" into MT4/experts. Now, when I close MT4 and restart, nothing shows up when I open the Navigator link to look for it... inside my EA folder. Am I putting it in the right place?? Thanks for any input!!
I loaded the latest version of "Ben trail stop ECN v6" into MT4/experts. Now, when I close MT4 and restart, nothing shows up when I open the Navigator link to look for it... inside my EA folder. Am I putting it in the right place?? Thanks for any input!!

Right folder. Do you have more than 1 install?

Only one of this particular app... and it is in my experts. I checked every one of the other files/folders. I have even tried deleting and re-installing...

Update: I uploaded v2, and it is in the folder. When I drag it onto a chart it seems to work... Version 2 is all I need anyway... Thanks for your help, and thanks be to the creator of it... lol

Did you 'Compile' the MQL source? Find the source in Windows and open it. It should open in the editor. Hit Compile and it will show up in Navigator. You can go to the editor and compile any time you need something new to show up, without closing MT4.

Thanks Aja, that did the trick... Much appreciation for the help, and for teaching me something new.

Does the new version "v6" control all open trades? Or do we have to attach it to every chart that has a trade? Version 2 had the option to have it work on just the one chart, or all charts. And we only had to place it on one chart...
Which Statements Are True Regarding The Diagram

This page collects variants of a common geometry exercise: given a diagram, select the statements that are true. The answer statements that recur across the variants are:

- m∠ZAB = m∠ACB + m∠CBA: starting with triangle ABC, angle ZAB is an exterior angle, and an exterior angle of a triangle equals the sum of the two remote interior angles.
- An exterior angle is supplementary to the adjacent interior angle.
- m∠5 + m∠3 = m∠4.
- m∠MKL + m∠MLK = m∠JKM.
- ∠JNK and ∠KNL are supplementary.
- The sum of y and z must be (1/2)x (in another variant, the sum of y and z must be 2x).
- For the diagram of circle P: ∠ECB is created from rays CE and CB; CE is contained on line m; CB is contained on line n; ray AD is the same as ray AC; ray BC is the same as ray CB.
- When triangle XYZ is reflected to form triangle LMN, the triangles are congruent: angle Y is congruent to angle M, and angle X is congruent to angle L.

Other variants in the set ask for the measure of angle ACB, and about ΔCDE being translated down.
Standard Deviation Calculator - Sparkolinks

Standard Deviation Calculator

The Standard Deviation Calculator is an essential statistical tool used to measure the dispersion or variability of a set of data points. When you input the data values, the calculator computes the standard deviation, which provides valuable insight into the spread of the data around the mean. This calculator is commonly used in research, data analysis, and quality control to understand the variability and reliability of data sets. With the Standard Deviation Calculator, users can quickly assess the spread of data points and make informed decisions based on statistical analysis.

Frequently Asked Questions

What Is a Standard Deviation?

A standard deviation is a measure of the amount of variation or dispersion of a set of data values. It is a statistical term that represents the average distance of each data point from the mean, or average value, of the set. A low standard deviation indicates that the data points tend to be very close to the mean, while a high standard deviation indicates that the data points are spread out over a wider range. It is often used to describe the distribution of data and to compare the variability of different datasets.

How Does the Partial Derivative Calculator Work?

A partial derivative calculator is a tool that helps calculate partial derivatives of multivariable functions. It simplifies the process of finding the derivative of a function with respect to one variable while treating the other variables as constants. The calculator uses mathematical formulas and algorithms to perform the calculations. When you input a multivariable function into the calculator and specify the variable with respect to which you want to take the partial derivative, it applies the appropriate differentiation rules and techniques.
The calculator evaluates the function at the specified point and computes the rate of change of the function with respect to the chosen variable. It provides the derivative as the output, giving you the partial derivative of the function with respect to the specified variable. Its underlying algorithms handle the complex calculations involved in finding partial derivatives, allowing you to obtain results quickly and accurately. It simplifies the process, especially for functions with multiple variables, saving you time and effort in manual computations.

What Is the Standard Deviation and Why Is It Important in Statistics?

The standard deviation is a statistical measure that quantifies the amount of variability or dispersion in a dataset. It measures how spread out the values are around the mean (average) of the dataset. A low standard deviation indicates that the data points are close to the mean, while a high standard deviation indicates that the data points are more spread out. The standard deviation is important in statistics for several reasons:

• Measure of Variability: It provides a numerical value that represents the dispersion of the data. It gives insights into how much the data points deviate from the average, allowing us to understand the range and distribution of the dataset.

• Assessing Data Quality: The standard deviation helps to identify outliers or extreme values in a dataset. Unusually large or small values can significantly affect statistical analyses, and the standard deviation helps in detecting such anomalies.

• Comparing Datasets: When comparing different datasets or groups, the standard deviation allows us to determine which dataset has more variability. It helps in assessing the consistency or differences between groups and evaluating the significance of results.

• Risk Assessment: In finance and investment, the standard deviation is used to measure the volatility or risk associated with an investment.
A higher standard deviation indicates a greater potential for fluctuation in returns, indicating higher risk.

• Statistical Inference: The standard deviation plays a crucial role in hypothesis testing and confidence intervals. It helps to determine the precision of estimates and evaluate the significance of results obtained from statistical analyses.

By considering the standard deviation, statisticians and researchers can gain a better understanding of the data, make informed decisions, and draw meaningful conclusions from their analyses.

How Does the Standard Deviation Calculator Help Analyze Data Distributions?

The Standard Deviation Calculator plays a crucial role in analyzing and interpreting data distributions in several ways:

• Measure of Variability: The calculator allows you to compute the standard deviation, which quantifies the spread or dispersion of data points around the mean. By understanding the variability in the dataset, you can assess how closely the data points cluster around the average and how much they deviate from it.

• Distribution Shape: The standard deviation helps in identifying the shape of the data distribution. In a normal distribution, approximately 68% of the data falls within one standard deviation of the mean, 95% falls within two standard deviations, and 99.7% falls within three standard deviations. Deviations from this pattern can indicate skewed or asymmetrical distributions.
• Outlier Detection: By analyzing the standard deviation, you can identify outliers or extreme values that deviate significantly from the mean. Outliers can provide valuable insights into unusual observations or data points that may require further investigation or consideration.

• Comparing Data Sets: The standard deviation allows for the comparison of different datasets. By calculating the standard deviation for multiple datasets, you can determine which dataset has more variability or dispersion. This information is valuable for making comparisons and drawing conclusions about the similarities or differences between the datasets.

• Confidence Intervals: The standard deviation is crucial in determining confidence intervals, which provide a range of values within which a population parameter is likely to fall. The standard deviation helps in estimating the precision of the estimate and provides insights into the reliability of the data.

The Standard Deviation Calculator assists in analyzing and interpreting data distributions by providing a numerical measure of variability, identifying outliers, comparing datasets, and aiding in the estimation of confidence intervals. It enhances your understanding of the dataset and enables you to make informed decisions based on the characteristics and properties of the data distribution.

How Do You Calculate the Standard Deviation Using the Calculator?

Calculating the standard deviation using the Standard Deviation Calculator involves the following steps:

• Input Data: Enter the dataset for which you want to calculate the standard deviation. You can input the data either by typing it directly into the calculator or by copying and pasting it from a spreadsheet or document. Ensure that the data is correctly entered and separated by commas or line breaks.

• Select Calculation Method: Choose the appropriate calculation method based on the nature of your data.
The calculator provides options for calculating the standard deviation for a sample or the entire population. If you are working with a sample, select the sample calculation method. If you have data for the entire population, choose the population calculation method. • Calculate the Standard Deviation: Click on the “Calculate” or “Compute” button to initiate the calculation process. The calculator will perform the necessary mathematical operations to determine the standard deviation of the provided dataset. • Interpret the Result: Once the calculation is complete, the calculator will display the standard deviation value. Interpret the result in the context of your data. A larger standard deviation indicates greater variability or dispersion, while a smaller standard deviation suggests less variability.
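The population/sample distinction in the "Select Calculation Method" step comes down to the divisor: n for a full population, n-1 (Bessel's correction) for a sample. A minimal Python sketch of what such a calculator computes — illustrative only, not Sparkolinks' actual code:

```python
import math

def std_dev(data, sample=True):
    """Standard deviation of a dataset.

    sample=True uses the n-1 (Bessel-corrected) divisor for a sample;
    sample=False uses n for the entire population.
    """
    n = len(data)
    mean = sum(data) / n
    squared_devs = sum((x - mean) ** 2 for x in data)
    divisor = n - 1 if sample else n
    return math.sqrt(squared_devs / divisor)

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(std_dev(data, sample=False))  # population standard deviation: 2.0
print(std_dev(data, sample=True))   # sample standard deviation: about 2.138
```

Note how the sample result is always slightly larger than the population result for the same data, reflecting the extra uncertainty of estimating the mean from the sample itself.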
How do you explain decompose to a child?

Kids Definition of decompose:

1 : to break down or be broken down into simpler parts or substances, especially by the action of living things (such as bacteria and fungi). Leaves decomposed on the forest floor.

2 : to separate a substance into simpler compounds. Water can be decomposed into hydrogen and oxygen.

How do you decompose 17?

We could use a part-whole model to show how we can decompose, or split up, the number 17. The whole amount is 17, and we can split this up into 10 ones, which are worth 10, and seven ones, which are worth seven. 10 and seven go together to make 17.

What is the importance of decomposing numbers?

Why do kids need to know how to compose and decompose numbers? Composing and decomposing numbers makes math problems so much easier because it helps kids make numbers friendlier.

What is the purpose of decomposing numbers?

In Common Core math, first grade students need to begin thinking about the properties of numbers more deeply. One important property of all numbers is that they can be decomposed. When you decompose a number, it means that you take the number apart.

What does it mean to decompose numbers in kindergarten?

By Leslie Simpson · About 4 minutes to read this article. Decomposing numbers means to break numbers down into their sub-parts. Common Core standards have kindergarten students decomposing numbers in two ways: the first is to decompose numbers into their tens and ones (with a focus on the numbers 11-19), and the second is to show how any number 1-10 can be created using a variety of addends.
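The two kinds of decomposition described above can be shown in a few lines of Python — an illustration only, not part of any curriculum materials:

```python
def decompose_teen(n):
    """Split a number 11-19 into its ten and its ones, e.g. 17 -> (10, 7)."""
    tens, ones = divmod(n, 10)
    return (tens * 10, ones)

def addend_pairs(n):
    """All pairs of whole-number addends that make n,
    e.g. 3 -> [(0, 3), (1, 2), (2, 1), (3, 0)]."""
    return [(a, n - a) for a in range(n + 1)]

print(decompose_teen(17))  # (10, 7) -- one ten and seven ones
print(addend_pairs(3))     # [(0, 3), (1, 2), (2, 1), (3, 0)]
```

The first function is the tens-and-ones decomposition for 11-19; the second lists the "variety of addends" for any number 1-10.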
Which is the best decomposing game to play?

Finding decomposing games to play in small groups, in pairs, or independently for math centers, stations, or zones is crucial. Students often do some of their best work when given the most time to practice.

Hurry Up Santa! Composing Teens Game
Hurry Up Reindeer! Frozen Composing Game
Four Types Of Ball Mill

Types of Ball Mills | Our Pastimes
There are three types of ball mills: horizontal, vertical and industrial. Horizontal ball mills are the most common type. The basic design is the same, but the details can vary slightly. A drum, which is usually detachable, has a door that can be used to load in the material to be processed.

List of types of mill - Wikipedia
Ball mill, a mill using balls to crush the material. Bead mill, a type of mill (grinding). Burr mill or burr grinder, a mill using burrs to crush the material, usually manufactured for a single purpose such as coffee beans, dried peppercorns, coarse salt, spices, or poppy seeds.

Ball Mill - an overview | ScienceDirect Topics
Ball mills tumble iron or steel balls with the ore. The balls are initially 5–10 cm in diameter but gradually wear away as grinding of the ore proceeds. The feed to ball mills (dry basis) is typically 75 vol.-% ore and 25% steel. The ball mill is operated in closed circuit with a particle-size measurement device and size-control cyclones.

Different Types of Ball Mills - LinkedIn SlideShare
25-08-2014 · Ball mills are commonly used for crushing and grinding materials into an extremely fine form. These machines are widely used in mineral dressing processes, pyrotechnics, and selective laser sintering.

Ball Mill: Operating principles, components, Uses ...
11-01-2016 · Several types of ball mills exist. They differ to an extent in their operating principle. They also differ in the maximum capacity of the milling vessel, ranging from 0.010 liters for planetary ball mills, mixer mills, or vibration ball mills to several 100 liters for horizontal rolling ball mills.

types of ball mill, types of ball mill Suppliers and ...
A wide variety of types of ball mill options are available to you. There are 603 suppliers who sell types of ball mill on Alibaba.com, mainly located in Asia. The top countries of suppliers are China, Taiwan (China), and Japan, from which the percentage of types of ball mill supply is …

Ball Mill - SlideShare
30-11-2015 · DEFINITION: A ball mill is a type of grinder used to grind and blend materials for use in mineral dressing processes, paints, pyrotechnics, ceramics, selective laser sintering etc. PRINCIPLE: A ball mill works on the principle of impact and attrition. Size reduction is done by impact as the balls drop from near the top of the shell.

Ball mills - liming
With more than 100 years of experience in ball mill technology, liming's ball mills are designed for long life and minimum maintenance. They grind ores and other materials typically to 35 mesh or finer in a variety of applications, both in open or closed circuits.

Ball Mills - Mineral Processing & Metallurgy
Metallurgical Content: Ball Mill Capacity VS Rod Mill Capacity; Working Principle & Operation; Rod Mill Capacity Table; Ball VS Rod Mill Conversion; Types of Mill Discharge; Ball Mill Trunnion and Mill Grate Discharge; Peripheral Grinding Mill Discharge; Load Capacity of Trunnion Bearings; Ball Mill & Rod Mill Liners; Grinding Mill Gears; Grinding Mill Drives; Ball Mill Grinding Circuit; Ball Mill Specifications.

Calculate and Select Ball Mill Ball Size for Optimum …
In grinding, selecting (calculating) the correct or optimum ball size that allows the best and optimum/ideal or target grind size to be achieved by your ball mill is an important thing for a mineral processing engineer (AKA metallurgist) to do. Often, the ball used in ball mills is oversize "just in case". Well, this safety factor can cost you much in recovery and/or mill liner wear.

Ball Mill - RETSCH - powerful grinding and …
RETSCH is the world leading manufacturer of laboratory ball mills and offers the perfect product for each application. The High Energy Ball Mill Emax and MM 500 were developed for grinding with the highest energy input. The innovative design of both the mills and the grinding jars allows for continuous grinding down to the nano range in the shortest amount of time, with only minor warming.

Type Of Mills For Cement Plant - Ball Mill
Grinding Mills: Ball Mill, Rod Mill Design, Parts. The preliminator mill is a type of ball mill used for coarse grinding in open circuit or for fine grinding in closed circuit. Preliminator mills are widely used in the cement industry for the reduction of cement raw materials and clinker.

types different types of ball mill - rkentertainment.nl
Different Types Grinding Mills - PANOLA Mining. The mill product can either be finished size, ready for processing, or an intermediate size, ready for final grinding in a rod mill, ball mill or pebble mill. AG/SAG mills can accomplish the same size reduction as two or three stages of …

ball mill types of - liming-china.com
Ball mill - Wikipedia, the free encyclopedia. A ball mill is a type of grinder used to grind materials into extremely fine powder for use in mineral dressing processes, paints, pyrotechnics, and …

types of ball mill manufacturer with power rating
Ball Mill Types Of - powershield.co.za. Types Of Ball Mill Design China. Cement Mill Liners Wholesale Mill Liner Suppliers - Alibaba: ball mill liners, cement mill liner plate, ball mill liner suppliers. Features of the ball mill: (1) it is an efficient tool for grinding many materials into fine powder; (2) the ball mill is used to grind many kinds of mine and other materials, or to select the …

Ball Mill for Kinds of Materials - Fote Machinery(FTM)
Ball Mill. Download Ball Mill PDF 8.21 MB. Applied materials: cement, silicate, new-type building material, refractory material, ore dressing of ferrous metal and non-ferrous metal, glass ceramics, …

Food Milling Machines & Equipment | Ask the …
Air classifying mills or jet mills can be suitable for ultra-fine grinding, and when you want to mill your foods to sub-micron levels we can help you with ball mill solutions. And when you are trying to reduce the particle size of a solid in suspension in a liquid, you may benefit from colloid or …
[EM] Summability and proportional methods
Kristofer Munsterhjelm km_elmet at t-online.de
Fri Jun 9 14:26:59 PDT 2017

On 06/08/2017 06:33 PM, Jameson Quinn wrote:
> Most proportional voting methods are not summable. Transfers,
> reweightings, and otherwise; all of these tend to rely on following each
> ballot through the process. This makes these methods scary for election
> administrators.

Let "weak summability" for multiwinner methods be that, if you hold the number of seats constant, you can do precinct totals that require a number of bits that is polynomial with respect to the number of candidates and the logarithm of the number of voters. Let "strong summability" be that, if you don't hold the number of seats constant, you can do precinct totals that require a number of bits that is polynomial with respect to the number of candidates, the number of seats, and the logarithm of the number of voters.

I suspect that you can't have both Droop proportionality and strong summability. My hunch is that you can get at a superpolynomial number of the bits in the solid coalitions data by adding voters and candidates, thus adjusting the Droop quota. Perhaps something like constructing enough logical implications that the entire DAC/DSC data can be recovered. But I don't have anything close to a proof of this.

(The solid coalition data contains 2^n rows - one for each solid coalition - in the worst case, which is not polynomial in n; and if equal rank and truncation are permitted, you can set all but one of them to whatever you like.)
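For comparison, single-winner Condorcet methods are the classic summable case in exactly the sense defined above: each precinct reports an n-by-n pairwise matrix, and the central count is just element-wise addition, so each precinct's report is O(n^2 log V) bits regardless of how many voters it has. A minimal Python sketch (the function names are mine, not from the post):

```python
def pairwise_matrix(ballots, candidates):
    """n x n matrix where cell [i][j] counts ballots ranking
    candidate i above candidate j. Each ballot is a list of
    candidates from most to least preferred."""
    idx = {c: k for k, c in enumerate(candidates)}
    n = len(candidates)
    M = [[0] * n for _ in range(n)]
    for ballot in ballots:
        for pos, winner in enumerate(ballot):
            for loser in ballot[pos + 1:]:
                M[idx[winner]][idx[loser]] += 1
    return M

def add_matrices(A, B):
    """Central tally: element-wise sum of two precinct matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

cands = ["A", "B", "C"]
precinct1 = [["A", "B", "C"], ["B", "C", "A"]]
precinct2 = [["C", "A", "B"]]
total = add_matrices(pairwise_matrix(precinct1, cands),
                     pairwise_matrix(precinct2, cands))
```

Summing the precinct matrices gives exactly the matrix of the pooled ballots, which is what summability demands; the open question in the post is whether any comparably compact summary can support Droop-proportional multiwinner methods.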
that you can go from "potential assembly A vs potential assembly B score" in districts x and y to "potential assembly A vs potential assembly B" for both, and make the CW obey DPC. (I may also have an idea of how to make a weakly summable Bucklin-type method. I have to think about it further.) But perhaps you can get "not quite polynomially summable" and it'll be good enough. E.g. suppose you could make a proportional method that uses only the solid coalition data. For reasonable numbers of candidates, 2^n * log2(V) bits is not *that* bad. Even if I could construct a proof along the lines I mentioned above, it only lower bounds the amount of data needed to that amount. > I know of 3 ways to get summability: partisan > categorization, delegation, and second moments. List-based methods > (including partially list-based ones like MMP) use partisan > categorization. GOLD voting > <http://wiki.electorama.com/wiki/Geographic_Open_List/Delegated_(GOLD)_voting> does > it by giving voters a choice between that and delegation. Asset voting > and variants use delegation. Another option is communication, like STV or IRV does. Technically speaking, that's more a sidestep. The method then has a finite number of rounds that go like this: - Each precinct reports some polynomial amount of data to the central - The central uses this data and specifies a transformation - Each precinct applies this transformation - The method loops to the beginning until it's done. In IRV, the transformation is "eliminate the loser according to the current count". > The other way to do it is with second moments. For instance, if voters > give an approval ballot of all candidates, you can record those ballots > in a matrix, where cell i,j records how often candidates i and j are > both approved on the same ballot. This matrix keeps all the information > about the two-way correlations between candidates, but it loses most of > the information about three-way correlations. 
> For instance, you can know
> that candidates A, B, and C each got 10 votes, and that each pair of
> them was combined on 5 ballots, but you don't know if that's 5 votes for
> each pair, or 5 votes for the group and 5 for each. Note that those two
> possibilities actually involve different numbers of total votes — 15 in
> the former, 20 in the latter. In order to fix this, you can instead make
> separate matrices depending on how many total approvals there are on
> each ballot — a "matrix" for all the ballots approving 1, one for all
> those approving 2, etc. Thus, in essence, you get a 3D matrix instead of
> a 2D one.

> Once you have a matrix, you can essentially turn it back into a bunch of
> ballots, and run whatever election method you prefer. The result will be
> proportional insofar as the fake ballots correspond to the real ballots.
> How much is that? Well, I can make some hand-wavy arguments. The basic
> insight of the Central Limit Theorem (CLT) — that second moments tend
> to dominate third moments as the number of items increases — would seem
> to be in our favor.

> I think this could be an interesting avenue of inquiry. But on the other
> hand, the math involved will immediately make 99% of people's eyes glaze
> over.

There are other possible ways to do lossy transformations. Forest once suggested splitting ballots into heaps based on what candidate was ranked first, and then aggregating within those heaps - a sort of implicit partisan categorization method.

Another possibility is to relax the concept of proportionality. If it turns out you can't get Droop proportionality and strong summability, the "dual question" (so to speak) is "how much proportionality can you get with strong summability"? I guess that's what the lossy transformations do: they relax proportionality under any ballot set to proportionality under ballot sets where only second moments or first preferences cause the variability in the votes that PR should capture.
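To make the quoted A/B/C example concrete, here's a quick Python sketch (my own illustration, with made-up helper names) showing that the two scenarios produce identical pairwise co-approval matrices, but different matrices once you stratify by ballot size:

```python
from itertools import combinations
from collections import defaultdict

def coapproval(ballots, n):
    # M[i][j] = number of ballots approving both i and j;
    # the diagonal holds each candidate's approval count.
    M = [[0] * n for _ in range(n)]
    for b in ballots:
        for i in b:
            M[i][i] += 1
        for i, j in combinations(sorted(b), 2):
            M[i][j] += 1
            M[j][i] += 1
    return M

def stratified(ballots, n):
    # One co-approval matrix per ballot size (the "3D matrix").
    by_size = defaultdict(list)
    for b in ballots:
        by_size[len(b)].append(b)
    return {k: coapproval(v, n) for k, v in by_size.items()}

A, B, C = 0, 1, 2
pairs = [{A, B}] * 5 + [{A, C}] * 5 + [{B, C}] * 5            # 15 ballots
group = [{A, B, C}] * 5 + [{A}] * 5 + [{B}] * 5 + [{C}] * 5   # 20 ballots

print(coapproval(pairs, 3) == coapproval(group, 3))  # True: the 2D matrices collide
print(stratified(pairs, 3) == stratified(group, 3))  # False: stratifying separates them
```

Both scenarios give A, B, and C 10 approvals each and every pair 5 co-approvals, so the flat matrix can't tell them apart; the per-ballot-size matrices can.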
Perhaps one could make a weakly summable method that's fully proportional up to s seats for a space cost of O(n^s) elements, and that's fully proportional up to that number of seats and partially proportional from there on for any number of seats greater than s. Then it would be possible to tune the proportionality according to what level of s one's willing to accept.

> If this is not possible, then the only 2 ways towards summability are
> partisan categorization and delegation. GOLD uses both. For a
> nonpartisan method, I don't think there's any way to be summable without
> forcing people to delegate; and I think that forced delegation is going
> to be a deal-breaker for some people.

> So I'm frustrated in trying to design a nonpartisan proportional method
> that's as practical as GOLD and 3-2-1 are for their respective use cases.

> ----
> Election-Methods mailing list - see http://electorama.com/em for list info
SCORN: Sinter Composition Optimization with Regressive Convolutional Neural Network Department of Mathematics, School of Science, University of Science and Technology, Anshan 114051, China Department of Clinical Sciences, College of Veterinary Medicine, Cornell University, Ithaca, NY 14853, USA Author to whom correspondence should be addressed. Submission received: 1 June 2022 / Revised: 7 July 2022 / Accepted: 9 July 2022 / Published: 12 July 2022 Sinter composition optimization is an important process of iron and steel companies. To increase companies’ profits, they often rely on innovative technology or the workers’ operating experience to improve final productions. However, the former is costly because of patents, and the latter is error-prone. In addition, traditional linear programming optimization methods of sinter compositions are inefficient in the face of large-scale problems and complex nonlinear problems. In this paper, we are the first to propose a regressive convolutional neural network (RCNN) approach for the sinter composition optimization (SCORN). Our SCORN is a single input and multiple outputs regression model. Sinter plant production is used as the input of the SCORN model, and the outputs are the optimized sintering compositions. The SCORN model can predict the optimal sintering compositions to reduce the input of raw materials consumption to save costs and increase profits. By constructing a new neural network structure, the RCNN model is trained to increase its feature extraction capability for sintering production. The SCORN model has a better performance compared with several regressive approaches. The practical application of this predictive model can not only formulate corresponding production plans without feeding materials but also give better input parameters of sintered raw materials during the sintering process. 1. Introduction Sinter has always been an important part of the steel-making process in a sintering plant. 
Sintering technology is a complex thermo-chemical and energy-intensive process, and the price of its raw material—iron ore—has always been high. As a result, how to control costs and improve profits are the core issues that can affect the survival of sinter plant enterprises [ ]. Research on sinter composition optimization is an extremely important field of sinter mineralogy, and the quality of ingredients affects the final sinter quality. In most practical cases, the proportion of sinter ore is limited by manual experience, which is subjective. It is also difficult to obtain the optimal material ratio due to the contradiction among the constraints [ ]. The sintering process modeling method and linear programming are widely used to address this challenge. However, one problem is that there are many nonlinear factors that need to be considered in the batch optimization model [ ]. With the development of the research on the sintering process, mineral varieties are increasing. The number of chemical composition control projects is also increasing. Because sintering mixtures are complex, it is difficult to change one parameter independently of others, and the introduction of new parameters into the optimization model can simultaneously change all parameters [ ], which is time-consuming and tedious to calculate. In this paper, we propose a new sinter compositions optimization model using a regressive convolutional neural network (SCORN). We develop a new regressive convolutional neural network (RCNN) structure from given datasets to obtain the optimal sintering material compositions. By using the production history data of a mining company in China, our SCORN model can predict the optimal sinter compositions. The proposed SCORN model includes feature extraction and prediction modules. Unlike conventional machine learning tools and CNNs, the input is only a single number (production), and the output is the corresponding chemical indexes of the final sinter. 
This model is a multi-output model, and the final sintering ratio scheme consists of multiple indicators. Our contributions are as follows:
• We are the first to develop a regressive convolutional neural network for the sinter composition optimization problem. In our SCORN model, the input is the single final sintering production, and the outputs are the corresponding chemical compositions of the sintered product. SCORN is a single-input and multiple-output RCNN model.
• We have collected sinter production and its burdening compositions from sintering machines in one sintering plant in China. Experimental results indicate that our SCORN model can produce an optimal sinter burdening ratio given a target production. SCORN also achieves higher performance than several regressive approaches.
Our paper aims to extract features from sinter production data to predict the optimal sinter compositions for that production. Because the input is a single number, the RCNN architecture needs to be efficient and accurate to extract the key features. Therefore, linear programming and intelligent optimization algorithms for solving multivariate input problems are ineffective for our problem. The rest of the paper is organized as follows. Section 2 provides an overview of related work on sinter optimization. The description of the sintering process and the characteristic indexes is summarized in Section 3. Section 4 provides details about the proposed methods, including the structure and evaluation methods. Section 5 provides a detailed evaluation of SCORN with a solid comparison with other regressive methods on the same sinter datasets. This section is further divided into subsections to describe the details of the dataset and the experimental setup for the traditional approaches. Section 6 discusses the model, including advantages and disadvantages, as well as practical applications and extensions. Finally, Section 7 concludes the paper and outlines directions for possible future work.

2.
Related Work In the past few decades, scholars have carried out many research methods to optimize various iron ore sinter indicators to improve the sintering performance and reduce the cost. 2.1. Mathematical Statistical Models Many studies have attempted to address the question of predicting sinter quality, properties, and productivity. Many sinter models have been constructed based on mathematical-statistical methods. Eugene et al. [ ] presented a mathematical modeling method to predict sinter properties. This method reflected the variation in sinter properties using explanatory variables and optimized different iron ore blends to produce target sinter characteristics. Zhang et al. [ ] developed an unsteady two-dimensional mathematical model for the iron ore sintering process and predicted sinter yield and strength by the method of numerical simulation. In view of the large time lag in the detection of sinter, Li et al. [ ] verified the relationship between the chemical compositions of the sintering raw material and the physical and metallurgical properties of the sinter through correlation analysis. However, the aforementioned mathematical models are mainly optimized from the aspects of sintering process parameters and properties and do not consider many other factors in the sintering process. Due to the difference between ideal models and actual processes, they are difficult to apply to industrial processes. 2.2. Machine Learning Various machine learning tools and intelligent optimization algorithms are increasingly used in the sinter process research. Support vector machines, BP neural network models, and general regression neural network [ ] models have been applied as prediction models for basic sintering characteristics and sinter quality of mixed iron ore. Arghya et al. [ ] associated the sinter plant process parameters with required mechanical properties and microstructure to obtain higher productivity with the help of ANN and genetic algorithms. 
Kunnunen et al. [ ] shed light on how neural networks were used to model and optimize physical indexes of sinter. Yuan et al. [ ] applied a deep belief network algorithm to predict the secondary chemical composition of the sinter by analyzing the technology mechanism and characteristics of the sintering process. Machine learning methods have been widely used in the field of the sintering process to optimize the relevant indicators. Most deep neural networks address the detection [ ] and classification [ ] problems in the sintering process. Frei et al. [ ] proposed a novel deep learning-based method for the pixel-perfect detection and the measurement of partially sintered particles. It is difficult for shallow learning algorithms to effectively represent complex nonlinear functions when the number of given samples is limited. The generalization ability is also limited, which affects the prediction results of the sinter composition optimization problem. Deep learning is a branch of machine learning and relies on a large amount of data to build models that estimate the patterns of the data. Over the past two decades, CNNs have relied on the hidden layer structure to automatically extract deep features, which achieved promising results in a wide range of vision applications and domains such as image denoising, image detection, and classification [ ]. Le and Ho [ ] presented a novel method to predict DNA 6mA sites from the cross-species genome based on a deep transformer architecture and a CNN with the DNA sequence as input. Le and Nguyen [ ] proposed a method to identify FMN binding sites in electron transport chains using a 2-D CNN constructed from position-specific scoring matrices (PSSM). The proposed method can also facilitate the application of deep learning to deal with various problems in bioinformatics and computational biology. Aziz et al. [ ] developed a new technique of Channel Boosted Convolutional Neural Network (CB-CNN) to classify breast cancer mitotic nuclei.
This method improves the generalization of a CNN by making the feature space more versatile and flexible. 2.3. Sinter Compositions Existing works try to optimize the sinter composition and reduce the cost of the sintering process. Efforts are being made to resolve the proportioning issues associated with the sintering process. Based on the micro-sintering experiment [ ], the principle of ore blending is put forward according to its high-temperature characteristics. Then the ore blending is optimized. Linear programming (LP) and nonlinear programming (NLP) methods are also commonly used for evolutionary optimization of blast furnace charging ratios and operating parameters [ ]. Most of these methods used the cost as an objective function, but in practice, the optimization objective is often multi-fold, making it challenging to meet the requirements of the sintering process. Liu et al. [ ] proposed a real-time monitoring model and advanced prediction of sinter composition based on a DNN and LSTM regression. Taking the lowest cost of sinter as the objective function, Wang and Hu [ ] established a comprehensive optimization model of sinter batching and solved it with the particle swarm algorithm (PSO). Dai and Zhen [ ] established a genetic chickens hybrid algorithm based on linear programming, which is used in the first and second compositions optimization of the sintering process. Wu et al. [ ] developed an intelligent integrated optimization system (IIOS) for the sintering ratio step to find the best feasible proportion regimen. The optimal burdening ratio method using intelligent optimization algorithms has been extensively studied, including SA (simulated annealing algorithms), EA (evolutionary algorithms), PSO algorithms, ACA (ant colony algorithms), etc. However, they all have a common problem. These algorithms converged quickly at first but then became slower, making it easy to obtain the locally optimal solution [ 3. 
Sintering Process and Characteristic Indexes
This section briefly describes the sintering process and explains the physical and chemical characteristic indexes for the final sintered products.

3.1. Description of Sintering Process
The entire sintering preparation process is complex, mainly including three steps: batching, mixing, and sintering. In the sintering batching stage, the chemical raw materials of sintered ore and other materials are mixed in a certain proportion. After the mixing stage, the contents are evenly mixed with water and then sent to the sinter machine to generate sintered ore. The sintering process undergoes complex physical and chemical changes, and the entire process can take up to two hours or more [ ]. Figure 1 shows the main material flows in the sintering process.

3.2. Sinter Characteristic Indexes
The indexes of iron ore characteristics are selected from the following two aspects.

3.2.1. Chemical Index
The chemical index mainly consists of two parts. Firstly, the chemical composition part of sinter generally includes TFe, SiO$_2$, MgO, Al$_2$O$_3$, CaO, S, and FeO. Secondly, the other indexes are Ro and the total iron ore. Ro is expressed as the ratio of the calcium oxide content to the silica content in the sinter. The total amount of sinter represents the sum of the total chemical components of the sinter.

3.2.2. Physical Index
Screening is defined as the percentage of sintered ore smaller than the standard specified particle size (−5 mm) in the total weight of the sample after the sample is screened. The drum index is defined as the percentage of the weight of a sample with a particle size larger than the specified standard to the total weight of the sample. Table 1 shows the characteristic information of the iron ore, which is mainly composed of chemical indicators.

4. Methods

4.1.
Motivation In the pre-iron process for iron and steel enterprises, an efficient and accurate grasping of the current sinter composition is of great significance for guiding blast furnace production. The metallogenetic process of the sintering mixture is complex. It is difficult to accurately obtain the optimal sintering compositions corresponding to the mixture through mechanism calculation. Statistics-based machine learning methods can rely on large-scale data to obtain a reliable prediction model. The feature extraction depends more on the hidden layer model and is better at processing high-dimensional data. The quality of the sinter is closely related to batching, process state, and operating parameters. Traditionally, the appropriate ratio for sintering production is determined by chemical principles and a large number of experiments. Then linear programming or intelligent optimization algorithms are used to optimize it. Under industrial conditions, where production depends on the used raw materials, there is no simple answer to the question of how a certain value is optimized. The purpose of this study was to produce refined knowledge that would assist in the control of the sinter composition value when the production is determined. In comparison to conventional ANNs, RCNNs apply a largely increased number of layers, which can extract complicated features [ ]. Our research problem is an optimization task, and we aim to optimize the sinter compositions using a RCNN given the sinter production data. 4.2. Problem In the sinter plant, production increase often only relies on technological innovation or skilled operation. However, it is not always reliable to depend on the experience of operators. The results obtained by each person using these methods are not consistent, and it is not easy to accurately control the burdening ratio of the sinter. 
The raw materials in sintering production consist of many different compositions, and each composition may have a mutual influence or correlation. Therefore, a sinter burdening ratio optimization model based on RCNN is proposed to solve the problems. In our case, there is an unknown relationship between target production and the chemical composition of the sinter: the input of this model is the target production, and the output is the chemical index of the sinter. 4.3. Notations In the sinter composition optimization problem, given the input target sintering productions $X = { x n } n = 1 N ∈ R N × 1$ , its outputs are the optimized sinter compositions $Y = { y n } n = 1 N ∈ R N × D$ , where is different indexes that are mentioned in Section 3.2.1 . Each instance is characterized by an input sintering production $x n$ and an output sintering composition $y n ∈ R 1 × D$ . The objective function of our proposed SCORN model is to train a regressive convolution neural network ( ) to accurately predict optimal sintering composition given any input production as follows: $X = { x n } n = 1 N → R Y = { y n } n = 1 N .$ 4.3.1. Architecture Our proposed SCORN model consists of two major modules [ • Feature Extraction. The feature extraction module extracts features from the simple numerical target production for the second module. One advantage of the RCNN architecture is that the layers are easily interchangeable, which greatly facilitates transfer learning between layers [ • Prediction. This block takes the extracted features from the previous module and feeds them to a fully connected (FC) layer for regression prediction. To overcome the smaller number of features of the input layer (production has the size of $1 × 1$ ) in the first module, we need to design an appropriate network architecture for extracting better feature representations to model the relationship between the production and nonlinear indexes. 
Notably, the input of the model is sintering production, and the output sintering compositions come from the final connected regression layer. To achieve better accuracy, the feature extraction structure may be used multiple times. Figure 2 shows an example of a sinter composition prediction of the SCORN model. The final few layers can reflect completed sinter compositions. With more features extracted in the feature extraction module, we can easily build the relationship between the model and the predicted sinter compositions. Normally, CNN consists of a sequence of layers, including convolutional layers, pooling layers, and fully connected layers. Each convolutional layer typically has two stages. In the first stage, the layer performs the convolution operation, which results in linear activation. In the next stage, a nonlinear activation function is applied to each linear activation. Each feature extraction module [ ] has seven layers (convolution (Conv), rectified linear units (ReLU), batch normalization (BN), average pooling (AP), cross-channel normalization (CCN), dropout (Drop) and max pooling (MP)). SCORN model can extract features from the simple numerical target production and feed them into a fully connected (FC) layer for regressive sinter composition prediction. In the SCORN model, we employ the Conv layer to generate more features from the previous layer (e.g., the first Conv layer has the filter size of [1, 1], number of filters: 12, stride size of [1, 1] and zero padding. Hence, the final output size is $1 × 12$ ). The ReLU layer reduces the number of epochs to achieve better training error rates than traditional tanh units. The normalization layer increases the generalization ability and reduces the error rate. In addition, ReLU and normalization layers do not change the size of the feature map. The pooling layer aggregates the outputs of adjacent pooling units. The dropout (Drop) layer randomly sets input elements to zero to prevent overfitting. 
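As a rough sketch of how a single scalar input can still feed such a network (a toy forward pass in plain Python with random weights, not the authors' trained network; BN, pooling, and dropout are omitted), a 1×1 convolution with 12 filters simply expands the production value into 12 features, which a fully connected layer then maps to the nine composition outputs:

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def conv1x1(x, weights, biases):
    # On a scalar input, a 1x1 convolution with k filters just
    # produces k scaled-and-shifted copies of the input.
    return [w * x + b for w, b in zip(weights, biases)]

def fully_connected(feats, W, b):
    return [sum(wi * fi for wi, fi in zip(row, feats)) + bi
            for row, bi in zip(W, b)]

random.seed(0)
n_filters, n_outputs = 12, 9  # 12 conv filters; 9 sinter composition indexes
conv_w = [random.uniform(-1, 1) for _ in range(n_filters)]
conv_b = [0.0] * n_filters
fc_W = [[random.uniform(-1, 1) for _ in range(n_filters)]
        for _ in range(n_outputs)]
fc_b = [0.0] * n_outputs

def forward(production):
    feats = relu(conv1x1(production, conv_w, conv_b))
    return fully_connected(feats, fc_W, fc_b)

preds = forward(5000.0)  # hypothetical production figure
print(len(preds))        # 9 predicted composition values
```

In the real model the weights are of course learned by minimizing the regression loss rather than drawn at random; the sketch only illustrates the single-input, multi-output data flow.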
The loss function of the last regression layer is the same as our error function. One of the most obvious advantages of the model is that more features can be extracted by the feature extraction module. By extracting more features in the model, we can easily establish the relationship between the model and the predicted components [ ]. Therefore, the SCORN model can predict the composition at an arbitrary production.

4.3.2. Loss Function
The half sum-of-squared error in Equation ( ) has been employed as an indicator of the discrepancy between the actual $y_n$ and the predicted output $y'_n$. By reducing the error between the actual and the predicted values, the SCORN model can predict the sintering compositions.
$E = \frac{1}{2}\sum_{n=1}^{N}\left[y_n - y'_n\right]^2$

4.4. Model Evaluation
To illustrate the significance of the SCORN model, we focus not only on the fitting effect of the model but also on the error values between the predicted and the real values. Therefore, the $R^2$, root mean square error (RMSE), and mean absolute error (MAE) were used to evaluate the model, as in the following equations:
$R^2 = 1 - \frac{\text{Unexplained variation}}{\text{Total variation}} = 1 - \frac{\sum_{n=1}^{N} S_{\text{residual}}}{\sum_{n=1}^{N} S_{\text{total}}}, \quad \text{where } S_{\text{residual}} = \sum_{d=1}^{D}\left(y_{nd} - y'_{nd}\right)^2, \; S_{\text{total}} = \sum_{d=1}^{D}\left(y_{nd} - \bar{y}_d\right)^2$
$RMSE = \sqrt{\frac{1}{D}\sum_{d=1}^{D}\frac{1}{N}\sum_{n=1}^{N}\left[y_{nd} - y'_{nd}\right]^2}$
$MAE = \frac{1}{ND}\sum_{n=1}^{N}\sum_{d=1}^{D}\left|y_{nd} - y'_{nd}\right|$
In these formulas, $y_{nd}$ is the actual value of the $n$th data point in the $d$th index, and $y'_{nd}$ is the predicted value. $N$ is the number of samples in the sinter composition. $D$ is the number of composition indexes of the sintering process. The $R^2$ statistic has been shown to be a useful indicator of the significance of the model's performance [ ]. Therefore, our unknown relationship regression is fitted with an extended $R^2$ statistic.
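Transcribed directly, these three metrics can be computed for an $N \times D$ prediction matrix as follows (a plain-Python sketch with our own variable names):

```python
import math

def metrics(y_true, y_pred):
    """R^2, RMSE and MAE for N samples x D output indexes."""
    N, D = len(y_true), len(y_true[0])
    # Per-index means over the N samples (used by S_total).
    y_bar = [sum(y_true[n][d] for n in range(N)) / N for d in range(D)]
    s_res = sum((y_true[n][d] - y_pred[n][d]) ** 2
                for n in range(N) for d in range(D))
    s_tot = sum((y_true[n][d] - y_bar[d]) ** 2
                for n in range(N) for d in range(D))
    r2 = 1.0 - s_res / s_tot
    rmse = math.sqrt(sum((1.0 / N) * sum((y_true[n][d] - y_pred[n][d]) ** 2
                                         for n in range(N))
                         for d in range(D)) / D)
    mae = sum(abs(y_true[n][d] - y_pred[n][d])
              for n in range(N) for d in range(D)) / (N * D)
    return r2, rmse, mae

y  = [[0.0, 0.0], [2.0, 2.0]]   # tiny worked example, D = 2
yp = [[0.0, 1.0], [2.0, 1.0]]
print(metrics(y, yp))  # (0.5, 0.7071..., 0.5)
```
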
The range of the $R^2$ statistic is $[0, 1]$; the higher the value of $R^2$, the more variation the model explains, and the better the model fits the sinter composition. In addition, the smaller the RMSE and MAE, the better the model is. We used a five-fold cross-validation method to evaluate model performance.
Hypothesis tests: We test a hypothesis to show the significance of the predicted and true sinter compositions. The null hypothesis is H0: there is no significant difference between the predicted sinter compositions and the original sinter compositions (they come from the same distribution). We perform two-sample t-tests to calculate the p-values. Since each prediction will have a p-value, we compute the mean p-value of the whole dataset.

5. Experimental Setups
We evaluated our model on a sintering dataset and provide a detailed comparison with six regression methods. This section also provides a detailed description of the dataset and its evaluation.

5.1. Datasets Description

5.1.1. Sinter Datasets
One of the most important aspects of any machine learning method is having input and output data from reliable sources. Usually, there is a seasonal variation in the input parameters, such as the percentage fluctuation of MgO in iron ore, which is usually lower during the rainy season. To predict the sinter composition ratio, the chemical indexes from the database of a sintering plant in China were collected. The data span from January 2017 to December 2018. The data of the whole production line can be classified into mainly two categories: chemical indexes and physical indexes, as mentioned in Section 3.2. For the sinter data, 12 manual samplings and analyses are performed daily. Daily data from a period of two years were used in our data-driven modeling. The period yielded a set of 7803 valid observations for the model. The statistics of the sintering compositions can be seen in Table 1.
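The five-fold cross-validation over these 7803 observations amounts to partitioning the sample indices into five disjoint folds; a generic sketch (not the authors' exact split):

```python
import random

def kfold_splits(n_samples, k=5, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_splits(7803, k=5))
print(len(splits))                           # 5 folds
print(sum(len(test) for _, test in splits))  # 7803: each sample tested exactly once
```

Each model is then trained k times, once per held-out fold, and the reported $R^2$, RMSE, and MAE are averaged over the folds.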
Finally, our model was developed to correlate the nine sinter compositions as the output variables and the sinter production as the input variable.

5.1.2. More Validation Datasets
We also used three external datasets (Pentagon, Corpus Callosum, and Mandible shape; the details of these datasets can be found in [ ]) to validate our model. We further compared our model with the geodesic regression and ShapeNet models [ ].

5.2. Implementation Details
We compared the results of the regressive methods on the mentioned datasets via MATLAB and Python using an Intel(R) Core(TM) i5-10500 CPU. We used 6242 samples (70% of the dataset) as the training set and the remaining 1561 (30%) as the test set. We compared the predicted results of our SCORN with six baseline methods (DecisionTree [ ], RForest [ ], KNN [ ], LS [ ], MLP [ ], SVR [ ]). Our SCORN model's final structure has ten layers. The number of composition indexes of the sintering process is nine. We chose sgdm (stochastic gradient descent with momentum) as our optimizer. The maximum number of iterations was set as 300, and the initial learning rate was set as 0.0005. The running time of training our model is 350 s, and the inference time is less than 0.1 s. To train the DecisionTree model, we used the default parameters from Python's Scikit-Learn module. For SVR, the algorithm does not support multiple outputs for regression problems, so we implemented multi-objective support vector regression via a correlation regression chain [ ]. We used the RBF (radial basis function) kernel, and the other parameters were set as default values. For the MLP model, the training was started with a simple 20-50-100 hidden layer structure, and we chose tanh as our activation function. The maximum number of iterations was set as 100, and the penalty parameter was set as 0.0001. The different regressive methods were all trained on the same datasets.

5.3.
Results
In this section, we provide a detailed comparison with six conventional methods. The significance analyses demonstrate the applicability and goodness of our model.

5.3.1. The Traditional Methods Used for Comparison
This part summarizes the other regression models that are compared with our SCORN model. Many machine learning algorithms are designed to predict a single numerical value, referred to as single-output regression models. However, we can also encounter many multi-output regression problems in real life. Multi-output regression aims to learn a mapping between a single or multivariate input space and a multivariate output space [ ].
• Least Squares. Least squares is a mathematical optimization technique that finds the best functional match for the data by minimizing the sum of squared errors.
• KNN. The nearest-neighbor technique is a well-known and studied technique in statistical learning theory [ ]. In essence, the method consists of constructing estimators by averaging the properties of training events with similar characteristics to those of a test event to be classified or whose properties need to be inferred.
• Random Forest. A random forest algorithm is an ensemble approach that relies on CART models [ ].
• Decision Tree. In a decision tree model, an empirical tree represents a segmentation of the data, which is created by applying a series of simple rules. These models generate a set of rules that can be used for prediction through the repetitive process of splitting [ ].
• Multilayer Perceptron. MLPs learn a mapping function from the input space to the target space [ ]. Generally, there are three basic types of layers in the structure of MLPs: the input layer, the hidden layers, and the output layer. The three-layer MLP consists of one input node, three hidden layers with [20, 50, 100] hidden nodes, and nine output nodes in each joint.
• SVR.
Support vector regression (SVR) works on the principle of structural risk minimization (SRM) from statistical learning theory. The core idea of the SRM theory is to arrive at a hypothesis h, which can yield the lowest true error for the unseen and random sample testing data [ ].

5.3.2. Composition Predictions
After training the SCORN model with the sinter plant training set, we applied the model to predict the sinter compositions of the test set. The training curve and validation curve of the trained network structure are shown in Figure 3. The comparison results of the actual and predicted compositions are shown in Table 2. We enumerated the predicted values of six groups of samples and their corresponding true values. Comparing the actual compositions with the SCORN-predicted compositions, the predicted values are close to the actual components, which indicates that the SCORN model has a good prediction effect. Figure 4 shows the detailed comparison between the predicted and the original values of the sinter composition TFe based on the SCORN model. Most of the predicted values are close to the original values. Both the predicted values and the original values fluctuate within the same numerical range, which shows that the SCORN model has a high generalization ability.

5.3.3. Significance Analysis
After predicting the sinter compositions of the test set, we calculated the statistical significance of SCORN and each comparison method described in Section 5.3.1. Table 3 shows the values of $R^2$, RMSE, and MAE, as given in the equations above, for the training set using the five-fold cross-validation method. The $R^2$ scores of the different methods are close to 1, which shows that the fitting degree of the model is good. We also report the uncertainty of all models. Except for the SVR model, the RMSE and MAE of the SCORN and the other compared models are generally low. In addition, the $R^2$ score and RMSE of the SCORN model are better than those of the other models.
The MAE of the SCORN model is also close to the best value. The higher R² value of the SCORN model shows that its prediction performance is better than that of the other traditional models. This result shows that our model can be practically applied in situations where a large amount of data is available. Similarly, the RMSE, the standard deviation of the residuals, is smaller than that of the other regression models. It shows that the residuals are dispersed in a narrower range for the SCORN model than for the other regression models. In the present CNN training, a fixed learning rate of 0.0005 was selected. The R² value may be further increased if a dynamic learning rate is used. The mean R² value over the whole dataset is 0.995. All results are from two-sample t-tests and cannot reject the null hypothesis, which implies that the predicted sinter compositions are similar to the true sinter compositions, i.e., the predicted values almost recover the original values. Table 4 compares the R² statistic of SCORN with the geodesic regression and ShapeNet models. The R² values on the three datasets from our SCORN are much larger than those of the other models. The lower values indicate that shape variability is not well modeled by the geodesic regression model. Therefore, our SCORN model shows higher effectiveness in predicting the shapes of three different validation datasets given a single input.

5.4. Parameter Analysis

The ablation study of different dropout rates is shown in Table 5. Different dropout rates affect the performance of our SCORN model because different rates can correct errors in other units and help avoid the overfitting problem. Combining Table 5 and Table 6, we find that when the dropout rate is 0.7 and the learning rate is 0.0005, the SCORN model has the best overall performance and high prediction accuracy.

6.
Discussion

In this paper, we exploit the excellent representation-learning capability of deep networks to optimize sinter compositions in sinter production. We propose a sinter composition optimization model based on an RCNN. From these experiments, we find that the proposed approach can predict sinter composition changes with a higher R² value. One reason is that the network architecture provides enough modeling capacity to encode the sinter chemical composition at each production run and generalize it to unseen production. Finally, we note that the model can easily be extended to support more than a single input; natural extensions would include other influential factors such as product class and other indicators. The essential benefit of our proposed model over traditional methods is its better prediction accuracy, which can effectively save cost in the sintering process.

6.1. Applications and Extensions

The technical staff can quickly obtain the optimal raw-material ratio using our predicted sintering output. The ore-mixing structure can then be optimized and the cost effectively reduced. In addition, with a more accurate raw-material composition, the planning of sintering-material scheduling in the sinter plant can be improved. Procurement personnel can optimize the plan and cost of iron-ore raw materials through a more reasonable economic-value assessment of the various raw materials. Based on the optimization results for sinter composition, the approach can be further extended to a blast-furnace proportioning model. By adding pellets, lump ore, and other related raw materials, a blast-furnace batching optimization model can be applied to calculate the optimal raw-material ratio for molten iron.

6.2. Advantages and Limitations

There are several advantages to our proposed SCORN model. Firstly, our model is purely data-driven and can extract key features efficiently and accurately with only a small amount of key input data.
Therefore, our model makes it very convenient to add new sintering ratio factors. Secondly, the SCORN model can be applied to the sintering process of other types of ores. A sintering composition optimization model can be established without conducting real experiments on raw materials, which saves time and cost. Lastly, the optimized sintering composition predicted by our model meets the premise quality requirements. One limitation of our work is that not all indicators of sintering granulation characteristics are considered, such as middle size proportion (MSP) and average size index (ASI). As for future work, aside from collecting more data, combining our model with pelletizing metrics may improve sinter quality and steel quality. In addition, another disadvantage of our model is that it is not sensitive enough to small changes in sinter production from one time unit to the next.

7. Conclusions

In this paper, we are the first to propose a sinter composition optimization model based on a regressive convolutional neural network. The proposed SCORN model can handle both small amounts of data and high-dimensional data. The prediction accuracy of the model is further improved by optimizing the parameters and structure of the RCNN model. Experimental results show that our method performs better than several regression models. Therefore, our SCORN model is well suited for predicting the composition of the sintering process in metallurgical enterprises. In the future, we will pay more attention to other physical indexes, the metallurgical property indexes, and the correlations among the data from the sinter production line. We aim to mine the important parameters that affect the fluctuation of sinter components and build a better model by combining the physical indexes and metallurgical property indexes.

Author Contributions

Conceptualization, Y.Z. and J.L.; methodology, Y.Z.; writing—original draft preparation, J.L.; writing—review and editing, Y.Z.
and L.G.; supervision, Y.Z. All authors have read and agreed to the published version of the manuscript. This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The authors thank the sinter company and its employees for their support during the data collection.

Conflicts of Interest: The authors declare no conflict of interest.

Figure 2. The architecture of our proposed SCORN for predicting the compositions of sinter at "production" 1010 ton (Conv: convolution, ReLU: rectified linear units, BN: batch normalization, AP: average pooling, CCN: cross-channel normalization, MP: max pooling, Drop: dropout and FC: fully connected). The number of features of each layer is represented in the middle of the graph.

Figure 4. Comparison of the actual TFe component and the predicted TFe component with our SCORN model. The max error is ±1.4835.

Table 1. Yearly averages and change ranges of the sinter chemical compositions.

Factor (Chemical compositions) | Unit | Average/Year | Change Range
TFe | % | 56.2 | 53.9–58.6
FeO | % | 8.8 | 5.5–12.3
SiO2 | % | 5.7 | 4.7–6.4
CaO | % | 11.6 | 9.7–13.7
RO | - | 2 | 1.6–2.3
MgO | % | 1.5 | 0.9–2.1
S | % | 0.026 | 0.002–0.063
Al2O3 | % | 1.23 | 0.19–1.98
Total iron ore in the sinter | - | 98.35 | 96.88–99.80

Table 2. Comparison of the actual composition and the predicted composition using our SCORN model (# means the actual number, and P# represents the predicted number).
Compositions | #1 | P#1 | #2 | P#2 | #3 | P#3 | #4 | P#4 | #5 | P#5 | #6 | P#6
TFe (%) | 56.98 | 56.86 | 57.22 | 56.94 | 57.08 | 56.94 | 56.90 | 56.95 | 56.91 | 56.85 | 56.82 | 56.85
FeO (%) | 7.7 | 8.7 | 8.9 | 8.65 | 9 | 8.65 | 8.4 | 8.65 | 8.40 | 8.70 | 8.90 | 8.69
SiO2 (%) | 5.52 | 5.46 | 5.39 | 5.46 | 5.47 | 5.47 | 5.38 | 5.47 | 5.39 | 5.46 | 5.54 | 5.46
CaO (%) | 11.03 | 11.11 | 10.85 | 11.01 | 11.05 | 11.01 | 10.85 | 11.01 | 11.22 | 11.11 | 11.52 | 11.10
RO | 2 | 2.04 | 2.01 | 2.02 | 2.02 | 2.02 | 2.02 | 2.02 | 2.08 | 2.04 | 2.08 | 2.03
MgO (%) | 1.55 | 1.27 | 1.52 | 1.20 | 1.49 | 1.20 | 1.5 | 1.20 | 1.61 | 1.27 | 1.53 | 1.26
S (%) | 0.86 | 1.12 | 0.85 | 1.13 | 0.88 | 1.13 | 0.89 | 1.13 | 0.86 | 1.12 | 0.86 | 1.12
Al2O3 (%) | 0.032 | 0.022 | 0.024 | 0.021 | 0.026 | 0.021 | 0.027 | 0.021 | 0.028 | 0.022 | 0.028 | 0.023
Total iron ore | 98.87 | 98.31 | 98.73 | 98.27 | 98.77 | 98.27 | 98.30 | 98.27 | 98.81 | 98.30 | 98.99 | 98.29

Table 3. Comparison of the SCORN model and the different methods (the variances of the R² values are negligible because of their small values).

Method | SVR | KNN | R Forest | Decision Tree | OLS | MLP | SCORN
RMSE | 3.06 ± 0.33 | 0.50 ± 0.02 | 0.46 ± 0.01 | 0.46 ± 0.01 | 0.48 ± 0.01 | 0.49 ± 0.01 | 0.40 ± 0.01
MAE | 1.18 ± 0.08 | 0.33 ± 0.01 | 0.31 ± 0.01 | 0.31 ± 0.01 | 0.33 ± 0.01 | 0.34 ± 0.01 | 0.33 ± 0.01
R² | 0.9865 | 0.9998 | 0.9998 | 0.9998 | 0.9997 | 0.9997 | 0.9999

Table 4. R² statistic of SCORN, geodesic regression and ShapeNet on three validation datasets.

Datasets | Pentagon | Corpus Callosum | Mandible
Geodesic regression | 0.0223 | 0.0234 | 0.0873
ShapeNet | 0.3911 | 0.3854 | 0.1738
SCORN | 0.9923 | 0.9996 | 0.8191

Table 5. Ablation study of different dropout rates.

Evaluation | drop = 0.5 | drop = 0.6 | drop = 0.7 | drop = 0.8
Training Loss | 1.5056 | 1.0434 | 1.0056 | 1.0397
Training RMSE | 1.7353 | 1.4446 | 1.4182 | 1.4420

Table 6. Ablation study of different learning rates.

Evaluation | 0.0001 | 0.0005 | 0.001
Training Loss | 1.0056 | 0.9723 | 1345
Training RMSE | 1.4182 | 1.3945 | 51.86

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Li, J.; Guo, L.; Zhang, Y. SCORN: Sinter Composition Optimization with Regressive Convolutional Neural Network.
Solids 2022, 3, 416-429. https://doi.org/10.3390/solids3030029
Tick, tick width and automation | Aperture Finance

Ticks in a CLMM (Concentrated Liquidity Market Maker) are used to define the price relation of the two tokens in the pool. Here is an illustration of what ticks look like (borrowed from UniswapV3Book.com)

$price(i) = 1.0001^i$

The i here is the tick number. A few examples:

If i = 0, then the price will be $1.0001^{0} = 1$
If i = 1, then the price will be $1.0001^{1} = 1.0001$

You would normally see ticks on a Uniswap position NFT (like the one below). Specifically, each position contains a Min Tick and a Max Tick, which define the lower tick and upper tick (i.e. the range) of the position. In the example above, the tick width or range is -194110 - (-198160) = 4050. To calculate the associated price range, we need to find the prices corresponding to the min tick (-198160) and the max tick (-194110). And that's not too hard to do:

$price(-198160) = 1.0001^{-198160} = 2.47999507e-9$
$price(-194110) = 1.0001^{-194110} = 3.71818751e-9$

Now you may wonder, why are the numbers so small? This is because 1 human-readable ether is represented as 1e18 units on the blockchain, and the prices above mean that 1 unit of ether is worth a really small number of USDC units. If we multiply the above prices by 1e18, we now get:

Min tick price -> 2479995070 units of USDC, which is 2479.99507 USDC because USDC has 6 decimals (i.e. 1 human-readable USDC is represented as 1,000,000 units)
Max tick price -> 3718187510 units of USDC, which is 3718.18751 USDC.

Up to this point, we have gone over ticks and the price range behind ticks.
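The arithmetic above can be sketched in a few lines of Python (the function names and the decimals handling are ours, for illustration only; on-chain implementations use fixed-point square-root prices rather than floating point):

```python
TICK_BASE = 1.0001

def tick_to_raw_price(tick):
    """Raw pool price, price(i) = 1.0001^i: units of token1 per unit of token0."""
    return TICK_BASE ** tick

def tick_to_human_price(tick, decimals0, decimals1):
    """Rescale the raw price by the two tokens' decimals into a human-readable quote."""
    return tick_to_raw_price(tick) * 10 ** (decimals0 - decimals1)

# The ETH/USDC position from the text (ETH: 18 decimals, USDC: 6 decimals):
# tick_to_human_price(-198160, 18, 6) is about 2479.995 USDC per ETH
# tick_to_human_price(-194110, 18, 6) is about 3718.188 USDC per ETH
```

Note that floating-point `1.0001 ** tick` is accurate enough for display purposes, which is all this sketch aims at.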
Different Properties of Aggregate in Civil Engineering

Aggregates serve different purposes in civil engineering, including functioning as component materials for Portland cement concrete and hot-mix asphalt, and as foundation base layers beneath buildings and pavements. Natural aggregates consist of individual granular materials obtained from natural deposits, such as river-run deposits, quarries, or gravel pits. Particle shape and texture; absorption; bulk specific gravity; soundness and durability; bulk unit weight; and gradation and maximum size form the most essential properties of aggregate investigated in the experiment. Concrete is a mixture of cement, water, and aggregate. Aggregate makes up 60-70 percent of the volume of concrete and approximately 80 percent of its weight (Prowell, Jingna, and Brown 12). Engineers have come up with different equations used to calculate the properties of aggregate. Laboratory tests have been used to prove these equations, and the following report conducts several tests to investigate the properties of aggregate mentioned above.

Sieve Analysis of Fine and Coarse Aggregate

Sieve analysis forms the basic essential test for determining the gradation of aggregates. The main property investigated in this test is the particle size. The particle size of the aggregate is given as a percentage of weight retained between consecutive sieves used in the test, as shown in equation 1.

Percentage particle size = (weight of aggregate retained on a sieve / total sample weight) × 100 …… (1)

Dry-Rodded Unit Weight of Aggregate

The dry rodded unit weight of aggregate is determined by compacting dry aggregate into a test container of a known volume as per the ASTM C 33 gradation number. The dry rodded unit weight is obtained by dividing the weight of the aggregate by the volume of the container. Equation 2 shows how the dry rodded unit weight is calculated theoretically.
Dry rodded unit weight = (W2 − W1) / V …… (2)

Where:
W1 – weight of the proctor mold and the base plate
W2 – sum of the weights of the mold, the base plate and the dry rodded coarse aggregate
V – volume of the mold

Moisture Content of Stored Aggregate

Moisture content is the percentage of water found in a given volume of aggregate. The moisture content of an aggregate helps in developing the proper water/cement ratio to use when making concrete. Each aggregate contains a specific percentage of moisture that depends on the porosity of the particles making up the aggregate and the moisture condition of the storage area. The moisture content of aggregate is given by equation 3 below:

Moisture content of aggregate (%) = ((W2 − W3) / (W3 − W1)) × 100 …… (3)

Where:
W1 – weight of the dry pan
W2 – weight of the pan and moist aggregate
W3 – dry weight of the aggregate sample and pan

Bulk Specific Gravity and Water Absorption of Fine Aggregate

Bulk unit weight represents the dry weight of compacted aggregate occupying a specific bulk volume. Water absorption of the aggregate is the rate at which an aggregate draws water from a container of a specific volume. Water absorption is important in construction because it gives an engineer an idea of the specific drying time of concrete. It is calculated using the formula shown in equation 4 below:

Water absorption (%) = ((SSD weight − oven-dry weight) / oven-dry weight) × 100 …… (4)

The bulk specific gravity is calculated as:

Bulk specific gravity = B / (A − C) …… (5)

where A is the weight of the SSD sample in air, B is the oven-dry weight, and C is the weight of the SSD sample submerged in water.

The experiment was conducted in two parts. Part I involved sieve analysis and bulk rodded unit weight determination, while Part II investigated the moisture content, bulk specific gravity and absorption capacity of the aggregate. The objective of Part I was to determine the gradation and dry rodded unit weight of coarse and fine aggregate to be used in making a concrete mix. The objective of Part II was to determine the bulk specific gravity, moisture content of stored material, and absorption of coarse and fine aggregate to be used in making a concrete mix.
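Since equations 1 through 4 are simple ratios, they can be checked numerically. The sketch below (the function names are ours, not from the report) implements them as written:

```python
def percent_retained(weight_retained, total_sample_weight):
    """Equation 1: percentage of the sample retained on a sieve."""
    return weight_retained / total_sample_weight * 100

def dry_rodded_unit_weight(w_mold_plus_aggregate, w_mold, volume):
    """Equation 2: (W2 - W1) / V."""
    return (w_mold_plus_aggregate - w_mold) / volume

def moisture_content(w_pan, w_pan_moist, w_pan_dry):
    """Equation 3: ((W2 - W3) / (W3 - W1)) * 100."""
    return (w_pan_moist - w_pan_dry) / (w_pan_dry - w_pan) * 100

def absorption_ssd(w_ssd, w_oven_dry):
    """Equation 4: ((SSD weight - oven-dry weight) / oven-dry weight) * 100."""
    return (w_ssd - w_oven_dry) / w_oven_dry * 100
```

For example, with the fine-aggregate moisture data reported later (W1 = 0.096 lb, W2 = 0.588 lb, W3 = 0.570 lb), `moisture_content(0.096, 0.588, 0.570)` returns about 3.80%, matching the 3.797% in Table 5.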
PART I: Sieve Analysis and Bulk Rodded Unit Weight Test

Equipment and Materials
• Dry coarse and fine aggregate
• Weighing balance
• Proctor mold with base plate and extension
• Shovel
• 5/8-inch tamping rod with hemispherical tip
• Sieves and an electric sieve shaker

Test 1: Sieve Analysis of Coarse Aggregate
1. Approximately 2.5 kg of air-dry aggregate was weighed (test sample).
2. The appropriate sieve sizes that represented all particle sizes were selected. The selected sieves were the 1", 3/4", 3/8", 1/4" and #4 sizes and a pan. All the sieves were pre-weighed and the results recorded in Table 1. The sieves were sorted and arranged in descending order with the pan at the bottom to hold the last particles.
3. The pre-weighed sample of the aggregate was placed in the upper sieve and the sieve shaker operated for ten minutes.
4. The weights of the aggregate retained in each sieve and in the pan were determined and recorded in Table 1.
5. The total sum of the retained weights was checked against the original sample weight. A difference between the weights meant that a correction factor would be applied.
6. The percentage of aggregate retained in each sieve was calculated using equation 1 above.
7. The cumulative percentage of aggregate retained and of aggregate passing for each sieve was also calculated.
8. A graph of percentage finer versus grain size was plotted, and the ASTM C 33 scale was used to identify the size number of the coarse aggregate used in the sieve analysis test.

Test 2: Sieve Analysis of the Fine Aggregate
1. A test sample of the fine aggregate was weighed as shown in step 1 of Test 1.
2. The sieve sizes used for this test were #4, #8, #16, #30, #50 and #100. The sieves were arranged in ascending order so as to compute the fineness modulus.
3. Steps 3 to 8 of Test 1 were followed and the determined values recorded in Table 2.

Test 3: Dry Rodded Unit Weight of Coarse Aggregate
1. Ten pounds of air-dry coarse aggregate were obtained from the sample.
2.
The proctor mold of 3.07 liters with a base plate and an extension was obtained.
3. The weights of the proctor mold and the base were measured and recorded in Table 3 (W1).
4. An extension was attached to the top of the proctor mold that seated in the base plate, as shown in Figure 1.

Figure 1: A mould with base plate and extension used for Test 3

5. The coarse aggregate was placed in three equal layers. Each layer was rodded 25 times with a 0.625-inch (0.016 m) diameter rod with a hemispherical tip.
6. The extension was removed from the mold and a straight edge used to trim the excess aggregate above the mold.
7. The sum of the weights of the mold, the base plate and the dry rodded coarse aggregate was measured (W2). The figures obtained were recorded in Table 3.
8. The dry rodded unit weight was calculated using equation 2.

PART II: Moisture Content, Bulk Specific Gravity & Absorption Capacity
• Absorption cone (mold) and corresponding tamping rod
• Absorbent towels/blankets
• Aspirator and fine aggregate
• Coarse and fine aggregate
• Weighing balance
• Distilled water
• 500 ml flask
• Metal frame and basket for submerging aggregate samples
• Shovel
• Water tank (5 gallon basket)
• Oven

Test 4: Moisture Content of Stored Aggregate
1. The pan was weighed and its weight recorded (W1).
2. A sample of aggregate was placed in the pan, and the weight of the pan together with the sample taken (W2).
3. The pan and the moist aggregate were placed in an oven for 24 hours. At the end of the 24 hours, the dry weight of the aggregate sample plus the pan was taken (W3).
4. The moisture content of the aggregate sample was calculated using equation 3.
5. The procedure was repeated using a coarse aggregate sample and its moisture content calculated.

Test 5: Bulk Specific Gravity and Water Absorption of Coarse Aggregate
1. A coarse aggregate sample was obtained and soaked in water for 24 hours to ensure it was fully saturated.
The excess water was carefully decanted in order not to lose any aggregate. The sample was placed on a flat surface exposed to a gently moving current of warm air and stirred frequently.
2. The sample was dried to the saturated surface-dry (SSD) condition using a towel. Care was taken not to over-dry the sample.
3. 2 kg of SSD aggregate was obtained from the dried sample (A).
4. The weight of the metal frame with the basket submerged in water, without aggregate, was obtained and recorded (D).
5. The weight of the SSD aggregate and metal frame submerged in water was obtained and recorded (E).
6. The entire sample of coarse aggregate was removed from the metal frame and emptied into a pan. The sample was then placed in the oven for 24 hours. The weight of the pan was recorded. In addition, the weight of the oven-dry aggregate was taken and recorded (B).
7. The weight of the SSD aggregate submerged in water (C) was taken as follows: C = (E − D).
8. The bulk specific gravity was calculated using equation 5.
9. The absorption at SSD condition was calculated using equation 4.

Test 6: Bulk Specific Gravity and Water Absorption of Fine Aggregate
1. A sample of fine aggregate was obtained and soaked in water for 24 hours to ensure saturation. The excess water was carefully decanted in order not to lose any aggregate. The sample was placed on a flat surface exposed to a gently moving current of warm air and stirred frequently.
2. The sample was dried to the saturated surface-dry (SSD) condition, taking care not to over-dry. The SSD condition was determined by a cone test as follows:
   1. The cone was filled with fine aggregate.
   2. A tamping rod was used to lightly rod the fine aggregate 25 times with a drop height of 2 inches.
   3. The cone was then gently lifted from the sample.
   4. The SSD condition was the point at which enough moisture had evaporated during the drying process to allow the fine aggregate sample to slump when the cone was removed.
3.
After identifying the SSD condition of the fine aggregate, two samples were obtained. The first sample, weighing 250 grams, was used to determine the absorption capacity of the fine SSD aggregate, while the second sample, weighing 100 g, was used to determine the bulk specific gravity.
   1. The 250-gram sample of SSD fine aggregate (W4) was taken, emptied into a pan and placed in the oven for 24 hours. The weight of the oven-dry sample was then determined (W5).
   2. The absorption at SSD condition was calculated as: Absorption (SSD) = (W4 − W5) / W5.

Test 7: Bulk Specific Gravity of Fine Aggregate
1. A 500 ml flask was filled with distilled water and its weight recorded (W6).
2. The water was emptied into another container.
3. The 100-gram sample of SSD fine aggregate was placed into the flask, and the bulb of the flask was filled to two-thirds full with distilled water.
4. An aspirator was used to remove all air from the sample for 15 minutes until all air bubbles disappeared (de-aeration).
5. The flask was then filled with distilled water to the 500 ml mark and its weight recorded (W7): W7 = weight of de-aired material + weight of distilled water to the 500 ml mark + weight of the flask.
6. The entire content was emptied into the pan. A squeeze bottle was used to wash all remaining particles adhering to the flask into the pan.
7. The pan and its contents were placed in the oven for 24 hours. After the drying period, the weight of the dry aggregate together with the pan was taken (W8).
8. The bulk specific gravity of the fine aggregate was calculated as:

Bulk specific gravity = (W8 − weight of pan) / (W6 + weight of SSD sample − W7) …… (6)

Results and Calculations

Table 1: Coarse aggregate sieve analysis

Sieve No. | Size (mm) | Sieve Weight (lb) | Sieve Weight w/ Agg (lb) | Aggregate Weight (lb) | Corr. Weight (lb) | % Retained | Cum. % Retained | % Cum. Finer
1" | 25 | 1.258 | 1.522 | 0.264 | 0.264 | 4.80 | 4.8 | 95.2
3/4" | 19 | 1.287 | 5.578 | 4.291 | 4.291 | 77.99 | 82.79 | 17.211
3/8" | 9.5 | 1.176 | 2.072 | 0.896 | 0.896 | 16.28 | 99.13 | 0.873
1/4" | 6.3 | 1.166 | 1.204 | 0.038 | 0.038 | 0.69 | 99.82 | 0.182
#4 | 4.75 | 1.132 | 1.736 | 0.604 | 0.604 | 10.98 | 99.89 | 0.109
Pan | - | 0.812 | 0.818 | 0.006 | 0.006 | 0.11 | 100.00 | 0

Sample initial weight W0 = 5.502 lb; sample final weight Wf = 5.502 lb; sample corrected weight Wc = 5.502 lb

Figure 1: A graph of percentage finer versus grain size of the coarse aggregate sample

Table 2: Fine aggregate sieve analysis (column 5 = 4 − 3; column 7 = 5 + 6; column 8 = (7/W0) × 100; column 10 = 100 − 9)

Sieve No. | Size (mm) | Sieve Weight (lb) | Sieve Weight w/ Agg (lb) | Aggregate Weight (lb) | Corr. Factor | Corrected Weight | % Retained | Cum. % Retained | % Cum. Finer
#4 | 4.75 | 1.432 | 1.432 | 0 | 0 | 0 | 0 | 0 | 100
#8 | 2.36 | 1.06 | 1.066 | 0.006 | 1.3×10^-5 | 0.0060013 | 0.11 | 0.11 | 99.89
#16 | 1.18 | 1.44 | 5.896 | 4.456 | 9.7×10^-3 | 4.456097 | 81.18 | 81.29 | 18.71
#30 | 0.6 | 0.91 | 1.322 | 0.412 | 8.9×10^-4 | 0.412089 | 7.49 | 88.78 | 11.22
#50 | 0.3 | 1.034 | 1.418 | 0.384 | 8.3×10^-4 | 0.384083 | 6.98 | 95.76 | 4.24
#100 | 0.15 | 1.006 | 1.204 | 0.198 | 4.3×10^-4 | 0.198043 | 3.6 | 99.06 | 0.94
Pan | - | 1.038 | 1.07 | 0.032 | 6.9×10^-5 | 0.0320069 | 0.581 | 99.41 | 0.59

Sample initial weight W0 = 5.50 lb; final weight Wf = 5.488 lb; corrected weight Wc = 5.5 lb

Figure 2: A graph of percentage finer versus grain size of the fine aggregate sample

Correction factor = ((W0 − Wf)/Wf) × Column 5. NB: W0 = Wc

Table 3: Dry Rodded Unit Weight of Coarse Aggregate

Mold Height (in) | Mold Diameter (in) | Volume (in³) | W1 (lb) | W2 (lb) | Dry Rodded Unit Weight (lb/in³)
6.625 | 6 | 187.317 | 15.73 | 25.3 | 0.053

Mold Height (m) | Mold Diameter (m) | Volume (m³) | W1 (N) | W2 (N) | Dry Rodded Unit Weight (N/m³)
0.1682 | 0.1524 | 0.003133 | 69.97 | 112.54 | 13585.95

Dry rodded unit weight = (W2 − W1)/V = (112.54 − 69.97)/0.003133 = 13585.95 N/m³

Table 4: Moisture content of coarse aggregate results

W1 (lb) | W2 (lb) | W3 (lb)
0.138 | 0.534 | 0.516

Moisture content W% = ((W2 − W3)/(W3 − W1)) × 100 = ((0.534 − 0.516)/(0.516 − 0.138)) × 100 = 4.76%

Table 5: Moisture content of fine aggregate results

W1 (lb) | W2 (lb) | W3 (lb)
0.096 | 0.588 | 0.570

Moisture content W% = ((W2 − W3)/(W3 − W1)) × 100 = ((0.588 − 0.570)/(0.570 − 0.096)) × 100 = 3.797%

Table 6: Absorption capacity of coarse aggregate

W1 (lb) | W2 (lb) | W3 (lb)
0.132 | 1.516 | 1.768

Absorption capacity S% = ((W2 − W3)/(W3 − W1)) × 100 = 3.59%

Table 7: Absorption capacity of fine aggregate results

W1 (lb) | W2 (lb) | W3 (lb)
0.1 | 0.314 | 0.314

Absorption capacity S% = ((W2 − W3)/(W3 − W1)) × 100 = ((0.314 − 0.314)/(0.314 − 0.1)) × 100 = 0%

Table 8: Bulk specific gravity of fine aggregate

W5 (lb) | W1 (lb) | W2 (lb) | W3 (lb) | W4 (lb) | Bulk Specific Gravity
1.25 | 1.38 | 0.5 | 0.71 | 0.21 | 2.625

Bulk specific gravity = 2.625

Questions for Part I

1. What Type of Gradation was Obtained for Both Fine and Coarse Aggregate?

The coarse aggregate sieve test obtained a gap-graded gradation. The graph shown in Figure 1 clearly indicates that the particle sizes of the coarse sample tested had some deficiency. The curve is not a good representation of all particle sizes, as it shows more concentration on the larger sieves, an indication that a small percentage of particles was retained in the larger sieves. In the fine aggregate test, the particle sizes were well graded. The curve in Figure 2 demonstrates a good representation of all particle sizes in all sieves. The percentage of particles retained in each sieve was almost equal, making a uniformly graded aggregate.

2. Does the Coarse Aggregate Meet the Grading Requirements of One of the ASTM C 33 Ranges?

The coarse aggregate passed the grading requirements of the 1" sieve. From Table 1, the cumulative percentage that passed through the 1" sieve was 95.2%. The gradation requirements for coarse aggregate (ASTM C 33) require that the cumulative percentage of aggregate passing be in the range 90–100. The aggregate passed as gradation number 56 with a 25 mm sieve.

3. Why is the ASTM C 33 Coarse Aggregate Size Number Important?
The ASTM C 33 coarse aggregate size number acts as the most effective method of grading a coarse aggregate because it is compatible with most aggregate particle sizes and determines the best coarse aggregate for making a good concrete.

4. What is the Fineness Modulus an Indication of and Why is it only Determined for Fine Aggregate?

The fineness modulus indicates that the concrete will have equal proportions of cement, water and sand. Fineness modulus tests are performed only on fine aggregates because a change in fine aggregate significantly affects concrete properties (Gambhir 86).

5. What is the Fineness Modulus of the Fine Aggregate?

The fineness modulus helps in estimating the proportion of fine and coarse aggregates in a concrete mixture. The parameter helps engineers estimate the best proportions of water, cement and aggregate while making concrete, and establish the best design depending on the particle sizes of the aggregate (Gambhir 86). From the cumulative percentages retained in Table 2, the fineness modulus of the fine aggregate is (0 + 0.11 + 81.29 + 88.78 + 95.76 + 99.06)/100 ≈ 3.65.

6. Describe the Physical Properties of the Aggregates

The aggregates used in the experiment had a fine texture. The aggregates were also hard: a dry rodded unit weight of 0.053 lb per cubic inch was obtained, and the results indicated that the test team experienced difficulty in compacting the aggregates. The aggregates were non-porous; a low moisture content (below 5 percent) indicated that the aggregate was less porous and allowed less water to infiltrate through.

7. Effects of Physical Properties of Aggregate on the Final Concrete Product?

Physical properties of aggregates play an important role in the strength and durability of concrete. The size of the particles affects the volume and quantity of the other materials, like cement, sand and water, used to make concrete. Fine aggregates lead to fewer voids in a concrete, and hence a lesser quantity of other materials. On the other hand, the physical properties of aggregate affect the quality of concrete by influencing the setting time.
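The value asked for in Question 5 can be checked numerically. The standard definition of the fineness modulus (assumed here, since the report does not state it) is the sum of the cumulative percentages retained on the standard sieves, #4 through #100 in this test, divided by 100. Applying it to the Table 2 data:

```python
def fineness_modulus(cumulative_percent_retained):
    """Sum of the cumulative % retained on the standard sieves, divided by 100."""
    return sum(cumulative_percent_retained) / 100

# Cumulative % retained on sieves #4, #8, #16, #30, #50 and #100 (from Table 2)
table2_cumulative = [0, 0.11, 81.29, 88.78, 95.76, 99.06]
fm = fineness_modulus(table2_cumulative)  # about 3.65
```

A fineness modulus near 3.6 sits at the coarse end of the usual fine-aggregate range, which is consistent with the heavy retention on the #16 sieve in Table 2.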
For instance, concrete made from porous aggregate takes less time to set because the presence of pores allows circulation of air that hastens drying.

Part II

1. Differences Between Specific Gravity, Bulk Specific Gravity (Dry), and Bulk Specific Gravity (SSD)

Specific gravity is the ratio of the density of a substance to the density of an equal volume of water. Bulk specific gravity (dry) refers to the ratio of the weight of a specific volume of dry compacted aggregate to the weight of an equal volume of water. Bulk specific gravity (SSD) is the ratio of the weight of saturated surface-dry aggregate to the weight of an equal volume of water. The three properties of aggregate have significant differences: the bulk specific gravity (dry) determines the density of dry aggregate with no water, while the specific gravity makes use of water to determine the density of the aggregate (Prowell, Jingna, and Brown 66).

Question 2

When the moisture content is greater than the absorption capacity, less water should be added to the concrete mix. A high moisture content indicates that the aggregate has more water, while a low absorption capacity indicates that the aggregate attracts water slowly; adding more water will make the mixture soak too fast. On the other hand, a moisture content lower than the absorption rate requires adding a larger amount of water to the concrete mixture. The higher volume of water added allows the mixture to soak before the available water evaporates due to the low moisture content.

Question 3

The value of the bulk specific gravity for the fine aggregate was 2.625, while that of the coarse aggregate was not tested. The bulk specific gravity obtained for the fine aggregate was not close to the theoretical value of 1.44; the tested value exceeded the theoretical value by 2.625 − 1.44 = 1.185.
The theoretically known value of the bulk specific gravity for coarse aggregate is

Question 4

Bulk specific gravity is mostly used in concrete mix-design computations because it reflects the percentage of voids in the aggregate and lets the constructor understand the ratio of the other materials that will make a quality mix. The normal specific-gravity calculation involves the total volume of the aggregate, including the voids, and does not give the exact character of the aggregate material used in making a concrete mix.

Question 5

Failure to dry the particles to the saturated surface-dry state leaves some moisture in the aggregate sample. This affects the value of the bulk specific gravity (SSD): a value higher than the true one is obtained because the aggregate does not lose all the water through drying, which makes it heavier while its volume remains constant. An increased mass with a constant volume leads to a high bulk specific gravity.

Question 6

The laboratory experiment experienced some errors, as listed below:
• Weighing errors because of faulty weighing balances
• Failure to dry the sample to the saturated surface-dry state when testing the bulk specific gravity (dry) and (SSD)
• The tests were not allowed to dry for the whole 24 hours as expected
• Faulty experimental apparatus that provided false results, leading to errors

Conclusion

One of the major purposes of aggregate, and the one of importance to this research, is making concrete. The physical properties of aggregate help constructors in determining the best aggregate to use in a concrete mixture. Aggregate influences concrete properties because it accounts for the larger volume in the mixture. The results obtained from this research help in recommending the best aggregate materials to use while making a concrete mix for ASTM C 33 gradation.

Works Cited

• Gambhir, Murari L. Concrete Technology: Theory and Practice. 2013. Print.
• Prowell, Brian D., Jingna Zhang, and E. R. Brown. Aggregate Properties and the Performance of Superpave-Designed Hot Mix Asphalt.
Washington, D.C.: Transportation Research Board, 2005. Print.
Optimisation of the ablation profiles in customised

A project of dissertation submitted for evaluation at the University of Valladolid in partial fulfilment of the requirements for the academic degree of Doctor of Philosophy (PhD) in Sciences of Vision with the mention "Doctor europeus"

Research Group "Grupo de Investigación Reconocido (GIR) de Técnicas Ópticas de Diagnóstico (TOD)". Universidad de Valladolid

Preceptor and Director: Jesús M. Merayo-Lloves MD, PhD, MBA

by Samuel Arba Mosquera, MSc in Physics, MSc in Environmental Protection, MSc in Sciences of Vision, student at the postgraduate program "Doctorado en Investigación en Ciencias de la Visión"

Instituto de Oftalmobiología Aplicada (IOBA). University of Valladolid, Valladolid, December 14, 2010

"…That is why it will not be useless to insist, before schoolchildren, on the history of scientific progress, taking advantage of the first favourable occasion, say the passing of a noisy jet aircraft, in order to show young people the admirable results of human effort. The example of the jet is one of the best proofs. Anyone knows, even without having travelled in one, what modern aeroplanes represent: speed, silence in the cabin, stability, range. But science is, par excellence, an endless search, and jets were not long in falling behind, surpassed by new and more portentous displays of human ingenuity. For all their advances, these aircraft had numerous disadvantages, until the day they were replaced by propeller aeroplanes. This conquest represented important progress, since by flying at low speed and altitude the pilot had greater possibilities of setting the course and of carrying out take-off and landing manoeuvres in good conditions of safety. Nonetheless, the technicians kept working in search of new and even more advantageous means of communication, and thus, within a short interval, they made known two capital discoveries: we refer to steamships and to the railway. For the first time, and thanks to them, the extraordinary conquest of travelling at ground level was achieved, with the invaluable margin of safety that this represented. Let us follow in parallel the evolution of these techniques, beginning with maritime navigation. The danger of fire, so frequent on the high seas, prompted engineers to find a safer system: thus sailing came into being, and later (although the chronology is not certain) the oar, as the most advantageous means of propelling ships. This progress was considerable, but shipwrecks recurred from time to time for various reasons, until technical advances provided a safe and perfected method of moving through the water. We refer, of course, to swimming, beyond which no further progress seems possible, although science is, to be sure, prodigal in surprises. As for the railways, their advantages over aeroplanes were notorious, but in their turn they were surpassed by stagecoaches, vehicles that did not pollute the air with the smoke of oil or coal, and that allowed one to admire the beauties of the landscape and the vigour of the draught horses. The bicycle, a highly scientific means of transport, is situated historically between the stagecoach and the railway, though the exact moment of its appearance cannot be established. It is known, on the other hand, and this constitutes the final link of progress, that the undeniable discomfort of stagecoaches sharpened human ingenuity to such a point that before long an incomparable means of travel was invented: going on foot. Pedestrians and swimmers thus constitute the crowning of the scientific pyramid, as may be verified on any beach, when one sees the strollers on the promenade who, in turn, watch with pleasure the evolutions of the bathers…"

"…not all these names will be the proper ones of the time and the place [...], but as long as those who work do not come to an end, the works will not come to an end, and some of them will lie in the future of some of those, awaiting whoever comes to bear the name and the profession…"

…what can we add… …that has not already been said… …or that indeed has been said…

It's not a hill, it's a mountain as you start out the climb

To Elena, my darling, for the past, for the present and for the future; for everything, for being guide, motivation and support, for being you and for being with me. You were always my mirror; I mean that in order to see myself I had to look at you. Scis quia ego amo te ("You know that I love you").

To Míriel, who comes and brings the time-to-come… …while we were waiting, she arrived.

All the coauthors of the different publications in which this thesis has resulted (Arbelaez MC, Vidal C, de Ortueta D, Magnago T, Merayo-Lloves J, Piñero D, Ortiz D, Alió JL, Baatz H, Al Jabri B, Gatell J, Ewering T, Camellin M, Aslanides IM, Triefenbach N, Shraiki M, Kolli S, Rosman M, Hollerbach T), as well as the anonymous reviewers, provided helpful suggestions and encouraged and supported this research. Ablations on PMMA were performed at SCHWIND eye-tech-solutions; surgical treatments were performed at Augenzentrum Recklinghausen, Muscat Eye Laser Center and SEKAL Rovigo Microsurgery Centre. I acknowledge many fruitful discussions with Carlos Dorronsoro, Alfonso Pérez-Escudero, Pablo Pérez Merino, and Enrique Gambra. This research was funded by SCHWIND eye-tech-solutions.
I would (inexcusably) have to thank heaps of friends, family, colleagues and former colleagues, from here and there, for thousands of things.

I have to thank my friend Cheng-Hao Huang for all the time you dedicated to discussing with me, and for the invaluable input your comments taught me. Thank you also for your way of thinking, your enthusiasm and your spirit. It has been a pleasure having worked for and with you for four years.

Great thanks go to Dr. Hartmut Vogelsang, who led me to think more deeply about research, to take my contribution to research seriously, and to work hard for it. I also thank Rolf Schwind and Thomas Hollerbach, who made it possible for me to write this work in the end, and who thus supported my further development as a researcher.

I would like to thank Dr. Jesús Merayo-Lloves and Prof. Santiago Mar for their time, their expertise and the incalculable value of their comments in evaluating my work; for the time devoted to our discussions, and for their advice on how to build and organise this thesis. Special mention goes to Dr. Diego de Ortueta for the fruitful discussions and for providing the medical records used. It has been a pleasure working with you on this doctoral thesis (and it will remain a pleasure to keep working together on new and interesting projects). To Dr. María C. Arbelaez for her collaboration in the clinical evaluation.

Special mention to Dr. Massimo Camellin for the fruitful discussions we had, and for his generous and selfless effort in providing the medical records. To Ricardo Toledo for his wonderful music-philosophy artworks.

Long live my friends par excellence! To Jes, MariDí, Pedro, Jorge and Vi, for being the best, for being necessary while I am contingent, for our closeness from the other side of the world and of time.
Also to the rest of my friends, without exception, "even though time and distance make us change"… suddenly it is Silvio and his guitar. To Cortázar's socialism and to Saramago's communism, and also to Alonso Quijano (rather than Cervantes or Avellaneda) for not renouncing dreams. Who will catch the stars when they fall?

Also to my companions (and sometimes opponents in combat): Jose, Óscar, Moncho, also Máximo, and David, Diego, Miguel and Marcos, who besides making me think have often wrecked my body…

To all my colleagues at SCHWIND eye-tech-solutions (with special mention to Kenny for your pre-played loads of music, to Mario and Rüdiger for your reactions in real time or faster, to Anita for having been the first to suffer the very first naïve versions of this work, and to Tobias for your necessary calls back down to earth): to all for their patience and support.

To my companions in toil in the Master's and the Doctorate: Alfonso for your mastery of space and tempo, Pablo for your inexhaustible curiosity, Enrique for your adaptability in clearing away uncertainty.

To the more than fifty of us who used to gather at el Castro on the fiestas and days of… And may the intersection never be the empty set. To the 20+ of Villarrube. To Luis, who left without saying anything.

Finally, my warmest recognition, with much affection, to my parents Moncho and Marina and to my sisters Marta and Rut, for their support and for having made me what I now am.

I fondly appreciate the interest that you have all shown in my work. A work is the result of an accumulation of contributions from many people, which is why my acknowledgements are perhaps somewhat long; but I truly believe that without the more or less selfless support of each and every one of them (of each and every one of you), the result would not have been successful.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ix TABLE OF CONTENTS (Índice) ......................................................................... xiii LIST OF TABLES (Listado de tablas)................................................................xxvii LIST OF FIGURES (Listado de figuras) ............................................................ xxix KEYWORDS (Palabras clave) ............................................................................. xli NOMENCLATURE (Nomenclatura).................................................................... xliii GLOSSARY (Glosario).......................................................................................xlvii FINANCIAL DISCLOSURES (Financiación y declaración de conflicto de intereses) ............................................................................................................... li Part 1 INTRODUCTION (Introducción) ........................................................ 53 Chapter 1 HYPOTHESIS OF THIS THESIS (Hipótesis de esta Tesis) ... 55 Chapter 2 MOTIVATION (Justificación) .................................................. 57 Chapter 3 LASER REFRACTIVE SURGERY (Cirugía refractiva laser) .. 59 The origins of refractive surgery (Los orígenes de la cirugía refractiva).................................................................................................. 60 Laser ablation (Ablación laser) ...................................................... 61 Zernike representation of aberrations (Representación de Zernike de las aberraciones) ................................................................................. 66 Measurement of Optical Aberrations (Medida de las aberraciones ópticas) ..................................................................................................... 67 Theoretical ablation profiles (Perfiles de ablación teóricos) ........... 68 The LASIK technique (La técnica LASIK) ...................................... 
69 Chapter 4 (Ablación corneal y aberraciones ópticas) ................................................... 71 Aberrations and visual performance after refractive surgery (Aberraciones y rendimiento visual tras cirugía refractiva) ....................... 71 Biological response of the cornea (Respuesta biológica de la córnea)...................................................................................................... 72 Visual degradation (Deterioro visual) ............................................. 73 Current trends in refractive surgery (Tendencias actuales en cirugía refractiva).................................................................................................. 74 Chapter 5 (Cuestiones actuales en cirugía refractiva corneal)...................................... 77 Chapter 6 SPECIFIC GOALS OF THIS THESIS (Objetivos específicos) 79 Chapter 7 THESIS SYNOPSIS (Sinopsis de esta Tesis)......................... 81 Part 2 METHODS (Método) ......................................................................... 85 Chapter 8 MANIFEST REFRACTION (Contribución de la aberración de onda sobre la refracción manifiesta) ................................................................................... 85 Wavefront refraction from low order Zernike modes at full pupil size (Refracción del frente de onda a partir de los modos de Zernike de bajo orden considerados para la pupila completa) ........................................... 86 Objective wavefront refraction from Seidel aberrations at full pupil size (Refracción objetiva del frente de onda a partir de las aberraciones de Seidel consideradas para la pupila completa) .......................................... 87 Objective wavefront refraction from low order Zernike modes at subpupil size (Refracción objetiva del frente de onda a partir de los modos de Zernike de bajo orden para un diámetro subpupilar) ........................... 
88 Objective wavefront refraction from Seidel aberrations at subpupil size (Refracción objetiva del frente de onda a partir de las aberraciones de Seidel para un diámetro subpupilar) ......................................................... 88 Objective wavefront refraction from paraxial curvature (Refracción objetiva del frente de onda a partir de la curvatura paraxial) .................... 88 Objective wavefront refraction from wavefront axial refraction (Refracción objetiva del frente de onda a partir de la refracción axial del frente de onda) ......................................................................................... 89 Automatic Manifest Refraction Balance (Compensación automática de la refracción manifiesta)....................................................................... 91 Chapter 9 OF THE WAVE ABERRATION (Determinación de la relevancia clínica de la aberración de onda) ..................................................................................... 95 Clinical relevance of the single terms in a Zernike expansion (Relevancia clínica de términos individuales de Zernike) ......................... 95 Global clinical relevance of the Zernike expansion (Relevancia clínica global de la expansión de Zernike).............................................. 101 Classification of the clinical relevance (Clasificación de la relevancia clínica) .................................................................................................... 101 Chapter 10 (Protocolos de las medidas realizadas en sujetos) .................................... 105 Part 3 TOPICAL REVIEW (Revisión temática) .......................................... 111 Topic A ANALYSIS OF THE CORNEAL ASPHERICITY (Análisis de la asfericidad corneal) ..................................................................... 111 Section A.1 ABSTRACT (Resumen) ......................................... 111 Section A.2 INTRODUCTION (Introducción)............................. 
112 Section A.3 METHODS (Método).............................................. 113 A.3.1 Clinical evaluation (Evaluación clínica) ........................ 116 A.3.2 Repeatability of the methods (Repitibilidad de los métodos) .................................................................................. 116 A.3.3 Statistical analysis (Análisis estadístico) ...................... 116 Section A.4 RESULTS (Resultados) ......................................... 117 A.4.1 Refractive outcomes (Resultados refractivos) .............. 117 A.4.2 Corneal corneal) .................................................................................... 118 A.4.3 Corneal asphericity (Asfericidad corneal) ..................... 119 A.4.4 Corneal Asphericity Changes (Cambios en la asfericidad corneal) .................................................................................... 121 A.4.5 Repeatability of the Corneal Asphericity (Repitibilidad de las determinaciones de asfericidad corneal)............................. 123 Section A.5 DISCUSSION (Discusión) ...................................... 123 Section A.6 CONCLUSIONS (Conclusiones) ............................ 128 Section A.7 OUTLOOK (Perspectiva)........................................ 128 Topic B MODEL OF AN ABERRATION-FREE PROFILE (Modelo de un perfil libre de aberraciones) ........................................................ 129 Section B.1 ABSTRACT (Resumen) ......................................... 129 Section B.2 INTRODUCTION (Introducción) ............................. 131 Section B.3 METHODS (Método) .............................................. 132 B.3.1 Theoretical aberration-free profile (Perfil teóricamente libre de aberraciones)....................................................................... 132 B.3.2 Compensation for the focus shift (Compensación del desplazamiento del foco).......................................................... 
134 B.3.3 Optical simulation (Simulaciones ópticas) .................... 135 B.3.4 Clinical evaluation (Evaluación clínica)......................... 136 B.3.5 Ablation centre (Centrado de la ablación)..................... 137 B.3.6 Comparison to Munnerlyn based profiles (Comparación con perfiles directamente basados en Munnerlyn) ................... 139 B.3.7 Bilateral symmetry (Simetría bilateral) .......................... 139 B.3.7.1 Correlations for bilateral symmetry of Zernike terms across subjects (Correlaciones de la simetría bilateral para los términos de Zernike) ............................................................. 140 B.3.7.2 Correlations for symmetry of aberrations in right and left eye of the same subjects (Correlaciones de la simetría interocular de los sujetos)..................................................... 141 B.3.7.3 Differences for symmetry of aberrations in right and left eye of the same subjects (Diferencias en la simetría interocular de los sujetos)..................................................... 141 B.3.7.4 Dioptrical differences in corneal wavefront aberration between the right and left eyes of the same subjects (Diferencias dióptricas interoculares de la aberración del frente de onda corneal de los sujetos) ............................................ 141 B.3.7.5 Changes in bilateral symmetry of Zernike terms as a result of refractive surgery (Cambios en la simetría bilateral de términos de Zernike provocados por la cirugía refractiva) .... 142 B.3.7.6 Changes in bilateral symmetry of wavefront aberration as a result of refractive surgery (Cambios en la simetría bilateral interocular provocados por la cirugía refractiva) ..... 142 B.3.7.7 Statistical analysis (Análisis estadístico)................ 142 Section B.4 RESULTS (Resultados) ......................................... 142 B.4.1 Simulation of the surgical performance of the profile (Simulación del rendimiento quirúrgico del perfil)..................... 
142 B.4.2 Clinical evaluation (Evaluación clínica) ........................ 144 B.4.3 Ablation centre (Centrado de la ablación) .................... 148 B.4.4 Comparison to Munnerlyn based profiles (Comparación con los perfiles directamente basados en Munnerlyn) ............. 150 B.4.5 Bilateral symmetry (Simetría bilateral).......................... 151 B.4.5.1 Changes in bilateral symmetry of Zernike terms as a result of refractive surgery (Cambios en la simetría bilateral de términos de Zernike provocados por la cirugía refractiva) .... 151 B.4.5.2 Changes in bilateral symmetry of wavefront aberration as a result of refractive surgery (Cambios en la simetría bilateral interocular provocados por la cirugía refractiva)...... 151 Section B.5 DISCUSSION (Discusión) ...................................... 152 B.5.1 Aberration-free pattern (Perfiles libres de aberración) .. 152 B.5.2 Ablation centre (Centrado de la ablación)..................... 155 B.5.3 Bilateral symmetry (Simetría bilateral) .......................... 158 Section B.6 CONCLUSIONS (Conclusiones) ............................ 160 Section B.7 OUTLOOK (Perspectiva)........................................ 161 Topic C REFRACTIVE SURGERY OUTCOMES (Análisis por árbol de decisión para la optimización de resultados en cirugía refractiva)................. 163 Section C.1 ABSTRACT (Resumen) ......................................... 163 Section C.2 INTRODUCTION (Introducción)............................. 164 Section C.3 METHODS (Método).............................................. 166 C.3.1 Videokeratoscopy (Videoqueratoscopía)...................... 166 C.3.2 Aberrometry (Aberrometría) ......................................... 166 C.3.3 Manifest refraction (Refracción manifiesta) .................. 167 C.3.4 Decision process (Proceso de decisión)....................... 167 Section C.4 RESULTS (Resultados) ......................................... 
169 C.4.1 Distribution tratamientos)............................................................................. 169 C.4.2 Rate of retreatments (Índice de retratamientos) ........... 169 Section C.5 DISCUSSION (Discusión) ...................................... 170 Section C.6 CONCLUSIONS (Conclusiones) ............................ 172 Section C.7 OUTLOOK (Perspectiva) ....................................... 172 Topic D NON-NORMAL INCIDENCE (Análisis de la pérdida de eficiencia de ablación para incidencia no-normal)................................................ 173 Section D.1 ABSTRACT (Resumen) ......................................... 173 Section D.2 INTRODUCTION (Introducción) ............................ 173 Section D.3 METHODS (Método) ............................................. 175 D.3.1 Determination of the ablation efficiency at non-normal incidence (Determinación de la eficiencia de la ablación para incidencia no-normal) ............................................................... 175 Section D.4 RESULTS (Resultados) ......................................... 183 Section D.5 DISCUSSION (Discusión)...................................... 189 Section D.6 CONCLUSIONS (Conclusiones)............................ 193 Section D.7 OUTLOOK (Perspectiva) ....................................... 194 Topic E DURING REFRACTIVE SURGERY (Efectos clínicos de los errores de ciclotorsión durante cirugía refractiva)............................................. 195 Section E.1 ABSTRACT (Resumen) ......................................... 195 Section E.2 INTRODUCTION (Introducción)............................. 196 Section E.3 METHODS (Método).............................................. 198 E.3.1 Determination of Cyclotorsion during Refractive Surgery (Determinación de la ciclotorsión durante cirugía refractiva).... 198 E.3.2 Residual Aberration after Cyclotorsional Errors during ciclotorsión durante cirugía refractiva)...................................... 
200 E.3.3 Derivation of a Mathematic Condition to Determine an Optical Benefit (Derivación de una condición matemática para determinar un beneficio óptico) ................................................ 202 E.3.4 Derivation of a Mathematic Condition to Determine a Visual Benefit (Derivación de una condición matemática para determinar un beneficio visual)................................................. 203 E.3.5 Derivation of a Mathematic Condition to Determine an Absolute Benefit (Derivación de una condición matemática para determinar un beneficio absoluto) ............................................ 205 Section E.4 RESULTS (Resultados) ......................................... 206 E.4.1 Static Cyclotorsion during Laser Refractive Surgery (Ciclotorsión estática durante cirugía refractiva laser).............. 206 E.4.2 Theoretical Ranges to Obtain Optical, Visual, and Absolute Benefits (Rangos teóricos para la obtención de beneficios ópticos, visuales o absolutos).................................. 207 E.4.3 Clinical Optical Benefit (Beneficio óptico clínico) .......... 210 E.4.4 Clinical Visual Benefit (Beneficio visual clínico)............ 211 E.4.5 Clinical Absolute Benefit (Beneficio absoluto clínico) ... 211 E.4.6 Clinical Ranges to Obtain Optical, Visual, and Absolute Benefits (Rangos clínicos para la obtención de beneficios ópticos, visuales o absolutos) ................................................................ 211 Section E.5 DISCUSSION (Discusión) ...................................... 212 Section E.6 CONCLUSIONS (Conclusiones) ............................ 217 Section E.7 OUTLOOK (Perspectiva)........................................ 217 Topic F SURGERY (La zona óptica efectiva tras cirugía refractiva)............. 219 Section F.1 ABSTRACT (Resumen).......................................... 219 Section F.2 INTRODUCTION (Introducción) ............................. 220 Section F.3 METHODS (Método) .............................................. 
221 F.3.1 Subjects (Sujetos)......................................................... 221 F.3.2 Ablation profiles (Perfiles de ablación).......................... 222 F.3.3 Ablation zones (Zonas de ablación) ............................. 222 F.3.4 Analysis of the effective optical zone (Análisis de la zona óptica efectiva) ......................................................................... 222 F.3.4.1 Change Of Root-Mean-Square Of Higher Order Wavefront Aberration Method (Método del cambio de la raíz cuadrática media de la aberración de onda de alto orden)... 223 F.3.4.2 Change In Spherical Aberration Method (Método del cambio de la aberración esférica)......................................... 223 F.3.4.3 Root-Mean-Square Of The Change Of Higher Order Wavefront Aberration Method (Método de la raíz cuadrática media del cambio de la aberración de onda de alto orden) .. 224 F.3.5 Mean value analyses (Análisis de valores promedio)... 225 F.3.6 Regression analyses (Análisis de regresión)................ 226 F.3.7 Calculation isométricas).............................................................................. 226 F.3.8 Calculation of proposed nomogram for OZ (Cálculo de una propuesta de nomograma para ZO) ......................................... 227 Section F.4 RESULTS (Resultados) ......................................... 227 F.4.1 Adverse events (Complicaciones) ................................ 227 F.4.2 Refractive Outcomes (Resultados refractivos) ............. 227 F.4.3 Changes in corneal wavefront aberration at 6-mm analysis diameter (Cambios en la aberración del frente de onda corneal analizado para 6-mm de diámetro)........................................... 228 F.4.4 Mean value analyses for EOZ (Análisis de valores promedio para ZOE)................................................................. 228 F.4.5 Regression analyses for EOZ (Análisis de regresión de la zona óptica efectiva) ................................................................ 
229 F.4.6 Isometric lines for OZ (Líneas isométricas para ZO) .... 232 F.4.7 Proposed nomogram for OZ (Nomograma para ZO).... 232 Section F.5 DISCUSSION (Discusión) ...................................... 234 Section F.6 CONCLUSIONS (Conclusiones) ............................ 238 Section F.7 OUTLOOK (Perspectiva)........................................ 239 Topic G EXPANSION OF THE WAVEFRONT ABERRATION (Método para minimizar objetivamente la cantidad de tejido resecado en una ablación personalizada basada en la expansión de Zernike de la aberración del frente de onda)......................................................... 241 Section G.1 ABSTRACT (Resumen) ......................................... 241 Section G.2 INTRODUCTION (Introducción)............................. 243 G.2.1 Multizonal treatments (Tratamientos multizonales) ...... 243 G.2.2 Smaller optical zone treatments with large transition zone (Tratamientos en menor zona óptica con mayor zona de transición) ................................................................................. 244 G.2.3 Smaller optical zone for the astigmatic correction (Zonas ópticas menores para la corrección astígmata) ........................ 244 G.2.4 Boost slider method (El modulador incremental).......... 245 G.2.5 Simplified profile method (El perfil simplificado) ........... 246 G.2.6 Z-Clipping method (Método de la poda en Z)............... 246 G.2.7 Z-Shifting method (Método del recorte en Z)................ 247 Section G.3 METHODS (Método).............................................. 247 G.3.1 The “Minimise Depth” and “Minimise Depth+” functions profundidad+“) .......................................................................... 247 G.3.2 The “Minimise Volume” and “Minimise Volume+” functions (Las funciones „Minimizar volumen“ y „Minimizar volumen+“).. 
251 G.3.3 Simulation of the tissue-saving capabilities of such methods for minimising the required ablation tissue (Simulación de la capacidad de ahorro de tejido de dichos métodos para minimizar la cantidad de tejido de ablación)............................. 255 G.3.4 Evaluation of the clinical application of such methods for minimising the required ablation tissue (Evaluación de la aplicación clínica de dichos métodos para minimizar la cantidad de tejido de ablación) ............................................................... 256 G.3.4.1 Treatment selection criteria (Criterios de selección de tratamiento) .......................................................................... 257 G.3.4.2 Evaluation of the tissue-savings of such methods for minimising the required ablation tissue (Evaluación del ahorro de tejido de tales métodos para minimizar la cantidad de tejido de ablación) .......................................................................... 259 G.3.4.3 Direct comparison (Comparación directa) ............. 260 G.3.4.4 Statistical analysis (Análisis estadístico) ............... 266 Section G.4 RESULTS (Resultados)......................................... 266 G.4.1 Simulations (Simulaciones).......................................... 266 G.4.1.1 Objective relevance of the single terms in a Zernike expansion of the wavefront aberration (Determinación objetiva de la relevancia clínica de términos individuales de la expansión de Zernike de la aberración del frente de onda) .......................................... 266 G.4.1.2 Objective minimisation of the maximum depth or volume of a customised ablation based on the Zernike expansion of the wavefront aberration (Minimización objetiva de la profundidad máxima o el volumen de ablación de tratamientos personalizados basados en la expansión de Zernike de la aberración del frente de onda) ........................ 
268 G.4.2 Evaluation of the clinical application of such methods for minimising the required ablation tissue (Evaluación de la aplicación clínica de tales métodos para minimizar la cantidad de tejido de ablación) .................................................................... 271 G.4.2.1 Case report (Caso de estudio)............................... 271 G.4.2.2 Comparative series: Preoperative evaluation (Series comparativas: evaluación preoperatoria) .............................. 279 G.4.2.3 Comparative series: Refractive outcomes (Series comparativas: resultados refractivos) ................................... 281 G.4.2.4 Comparative series: Evaluation of the tissue-saving capabilities for minimising the required ablation tissue (Series comparativas: evaluación de la capacidad de ahorro de tejido para minimizar la cantidad de tejido de ablación) ................. 285 G.4.2.5 Comparative series: Direct comparison (Series comparativas: comparación directa) ..................................... 287 Section G.5 DISCUSSION (Discusión)...................................... 288 G.5.1 Clinical relevance of wave aberrations (Relevancia clínica de las aberraciones de onda) ................................................... 288 G.5.2 Minimisation of the ablated tissue (Minimización del tejido de ablación) .............................................................................. 289 G.5.3 Simulations (Simulaciones) .......................................... 293 G.5.4 Clinical evaluations (Evaluaciones clínicas) ................. 294 Section G.6 CONCLUSIONS (Conclusiones)............................ 297 Section G.7 OUTLOOK (Perspectiva) ....................................... 298 CONCLUSIONS ................................................................................................ 299 ACHIEVED GOALS AND SPECIFIC CONCLUSIONS...................................... 301 LIST OF METHODOLOGICAL OUTPUTS ........................................................ 
IMPLICATIONS OF THIS RESEARCH
FUTURE RESEARCH LINES
LIST OF PUBLICATIONS AND PATENTS
RESÚMENES (Summaries)
CONCLUSIONES (Conclusions)
LOGROS ALCANZADOS Y CONCLUSIONES ESPECÍFICAS (Achieved goals and specific conclusions)
LISTA DE RESULTADOS METODOLÓGICOS (List of methodological outputs)
IMPLICACIONES DE ESTA INVESTIGACIÓN (Implications of this research)
LÍNEAS DE INVESTIGACIÓN FUTURA (Future research lines)
REFERENCES (Bibliografía)

LIST OF TABLES (Listado de tablas)
Table 1: Relative optical blur of the Zernike polynomials up to 7th order.
Table 2: Preoperative and postoperative data.
Table 3: Corneal wavefront aberration data reported for 6 mm analysis diameter.
Table 4: Asphericity data.
Table 5: Indications chart.
Table 6: Maximum Treatable Magnitude for Different Aberration Components and Different Cyclotorsional Errors for the <0.50 DEQ Criterion
Table 7: Maximum Allowable Cyclotorsional Errors for Different Aberration Components and Different Criteria
Table 8: Residual Aberration Ratios and Relative Orientations for Different Cyclotorsional Errors. The percentage is the amount of postoperative residual in magnitude, whereas the angle is the relative orientation of the postoperative residual
Table 9: Percentage of Treatments That Could Have Been Planned to Achieve an Optical and a Visual Benefit as a Function of the Highest Included Zernike Mode
Table 10: Refractive outcomes and induced corneal aberrations after refractive surgery
Table 11: Effective optical zone after refractive surgery vs. planned optical zone
Table 12: Mean effective optical zone after refractive surgery vs. planned optical zone
Table 13: Mean nomogrammed optical zone vs. intended effective optical zone
Table 14: Summary properties of the four minimisation approaches
Table 15: Patient information
Table 16: Treatment information
Table 17: Normal light questionnaire
Table 18: Dim light questionnaire
Table 19: Preoperative diagnosis
Table 20: Scheduled diagnosis during follow-up
Table 21: Preoperative data of the patient K.S.
Table 22: Comparative treatment plans and savings of the patient K.S.
Table 23: 3-month postoperative data of the patient K.S.
Table 24: Demographic data, preoperative and postoperative data for the three groups
Table 25: Savings in depth and time of the minimise depth approach
Table 26: Savings in depth and time of the minimise volume approach

LIST OF FIGURES (Listado de figuras)
Figure 1: Beam profiles for different beam geometries, where N is the Gaussian or supergaussian order. Gaussian profile (N=1) in blue, supergaussian profile (N=2) in pink, Flat-Top profile (N=∞) in yellow.
Figure 2: Spot profiles for different beam geometries. Parabolic spot profile (from Gaussian beams, N=1) in blue, quartic spot profile (from supergaussian beams with N=2) in pink, Flat-Top spot profile (from Flat-Top beams, N=∞) in yellow.
Figure 3: Spot profiles for different radiant exposures. Quartic spot profiles (from supergaussian beams with N=2) for a peak radiant exposure of 150 mJ/cm2 in blue and for a peak radiant exposure of 300 mJ/cm2 in pink.
Figure 4: Zernike pyramid showing the Zernike terms up to 7th order.
Figure 5: Induced Spherical aberration vs. achieved correction using classical profiles for different treatment strategies: LASIK (in blue), PRK (in purple).
Notice that LASIK-induced Spherical aberration does not go across the origin, representing the isolated effect due to the flap cut in LASIK treatments. Notice as well that the induced Spherical aberration was more pronounced for hyperopic treatments than for myopic ones.
Figure 6: Biomechanical changes due to the ablation, depending on the stromal layer where the tissue will be removed.
Figure 7: Representative axes of the human eye.
Figure 8: Representation of the axial refractive error. The line of sight represents a chief ray; the wavefront aberration is zero at the pupil centre, and perpendicular to the line of sight. Each point of the wavefront propagates perpendicular to the local surface of the wavefront. The axial distance from the pupil centre to the intercept between the propagated local wavefront and the line of sight, expressed in dioptres, corresponds to the axial refractive error.143
Figure 9: Comparison of the different quadric methods described here for the determination of the objective wavefront refraction for a given pupil size.
Figure 10: Zernike refraction of a pure Spherical Aberration (at 6 mm) is by definition 0 because Spherical Aberration is a High Order Aberration mode; when analysed for a smaller diameter (4 mm), it produces Defocus.
Figure 11: Zernike refraction of a pure High Order Astigmatism (at 6 mm) is by definition 0 because High Order Astigmatism is a High Order Aberration mode; when analysed for a smaller diameter (4 mm), it produces Astigmatism.
Figure 12: Zernike refraction of a pure Coma (at 6 mm) is by definition 0 because Coma is a High Order Aberration mode; when analysed for a smaller diameter (4 mm), it produces only tilt. Notice that coma may have a “visual effect” if the visual axis changes, producing Astigmatism.
Figure 13: Zernike refraction of a general wavefront aberration analysed at 6 mm and analysed for a smaller diameter (4 mm).
Figure 14: Automatic Refraction Balance. The optical impact of the HOAb on the refraction is calculated and balanced from the input refraction. Notice that the same wavefront aberration is analysed for two different diameters. The difference in the refraction provided at the two analysis diameters corresponds to the manifest refraction provided by the high order aberration.
Figure 15: Zernike pyramid showing the effects on vision produced by 1 dioptre of equivalent defocus of different Zernike terms up to 7th order.
Figure 16: List of Zernike coefficients classified and colour coded by their dioptric equivalent relative to optical blur (DEq).
Figure 17: Zernike pyramid classified and colour coded by the dioptric equivalent relative to optical blur (DEq) of the single Zernike terms.
Figure 18: The SCHWIND Combi Workstation for comprehensive corneal and ocular analysis.
Figure 19: The SCHWIND Carriazo-Pendular Console and Handpiece.
Figure 20: The SCHWIND AMARIS Total-Tech Laser.
Figure 21: The SCHWIND ESIRIS excimer laser system.
Figure 22: Predictability scattergram.
Figure 23: Induced spherical aberration.
Figure 24: Preoperative asphericity.
Figure 25: Postoperative asphericity.
Figure 26: Ideally expected postoperative asphericity.
Figure 27: Bland-Altman plot for p-value calculated from meridians vs. p-value calculated from corneal wavefront.
Figure 28: Analysis of the induced aberration at 6.50 mm for -6.00 D for a balanced corneal model, for 4 different ablation profiles: A) Munnerlyn based profiles, B) Parabolic based profiles, C) Asphericity preserving profiles, D) Aberration-free profiles. Notice the pre-op aberrated status (in blue), the post-op aberrated status (in red), and the induced aberrations (in green). Note the multifocality range (x-axis) running from -2 D to +1 D in all graphs.
Figure 29: Theoretical analysis of the induced corneal spherical aberration analyzed at 6 mm vs. refractive power change (MRSEq) for 4 different asphericities: “Free-of-aberrations” cornea (Q-Val -0.53, in blue), balanced-eye model cornea (Q-Val -0.25, in magenta), spherical cornea (Q-Val 0.00, in yellow), parabolic cornea (Q-Val -1.00, in cyan).
Figure 30: Theoretical analysis of the induced corneal spherical aberration at 6.50 mm using Aberration-Free profiles for -6.00 D for 2 anterior corneal surfaces: 7.87 mm, Q-factor -0.25 (A and C); and 7.87 mm, Q-factor +0.30 (B and D); and for 2 posterior corneal surfaces and pachymetries: 525 µm central pachymetry, 775 µm peripheral pachymetry at 5 mm radial distance (A and B); and 550 µm central pachymetry, 550 µm peripheral pachymetry at 5 mm radial distance (C and D). Notice the pre-op aberrated status (in blue), the post-op aberrated status (in red), and the induced aberrations (in green). Note the multifocality range (x-axis) running from -3.5 D to +0.5 D in all graphs.
Figure 31: Induced corneal and ocular spherical aberration at 3 months follow-up analysed at 6.0 mm pupil (ocular: ORK-Wavefront Analyzer (purple squares); corneal: Keratron Scout (blue diamonds)).
Figure 32: Change in corneal and ocular coma aberration magnitude at 3 months follow-up analysed at 6.0 mm pupil (ocular: ORK-Wavefront Analyzer (purple squares); corneal: Keratron Scout (blue diamonds)).
Figure 33: Change in corneal and ocular high-order aberrations (HO-RMS) magnitude at 3 months follow-up analysed at 6.0 mm pupil (ocular: ORK-Wavefront Analyzer (purple squares); corneal: Keratron Scout (blue diamonds)).
Figure 34: Induced ocular coma/defocus diopter ratio for the CV group (blue) and the PC group (magenta).
Figure 35: Induced ocular trefoil/defocus diopter ratio for the CV group (blue) and the PC group (magenta).
Figure 36: Induced spherical ocular aberration/defocus diopter ratio for the CV group (blue) and the PC group (magenta).
Figure 37: Decision-Tree applied for selecting the treatment mode (Aspheric aberration neutral, Corneal-Wavefront-Guided, or Ocular-Wavefront-Guided).
Figure 38: Hyperopic shift and coupling factor. Ablating a simple myopic astigmatism, the neutral axis became refractive, and the ablation depth in the periphery was smaller than in the centre.
Figure 39: Loss on reflection (Fresnel's equations) dependent on the angle of incidence, and losses also dependent on the geometric distortion (angle of incidence).
Figure 40: The radius of corneal curvature changes during treatment, so efficiency also varies over treatment; the values at 50% of the treatment represent a reasonable compromise to consider both the correction applied and the preoperative curvature.
Figure 41: The offset of the galvoscanners from the axis of the system is considered in the calculation of the angle of incidence of the beam onto a flat surface perpendicular to the axis of the laser.
Figure 42: Ablation efficiency at 3 mm radial distance for a sphere with 7.97 mm radius of curvature. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a full-width-half-maximum (FWHM) beam size of 2 mm.
The radius of corneal curvature changes during treatment; accordingly, the efficiency also varies over treatment. Note the improvement of ablation efficiency during myopic corrections as opposed to the increased loss of ablation efficiency during hyperopic corrections.
Figure 43: Contribution of the asphericity quotient to the ablation efficiency for a radius of curvature of 7.97 mm. The ablation efficiency at the cornea was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a beam size of 2 mm (FWHM). Note the identical ablation efficiency close to the vertex as opposed to differences in ablation efficiency at the periphery. A parabolic surface provides higher peripheral ablation efficiency (due to prolate peripheral flattening) compared to an oblate surface (with peripheral steepening).
Figure 44: Contribution of the reflection and distortion losses to ablation efficiency for a sphere with 7.97 mm radius of curvature. Note that the reflection losses already exist with normal incidence and decrease very slightly towards the periphery. Although normal reflection losses approximately amount to 5%, they do not increase excessively for non-normal incidence. As our calculation defined the ablation efficiency for a general incidence as the ratio between the spot volume for general incidence and the spot volume for normal incidence, it is evident that the so-defined efficiency equals 1 for normal incidences.
Figure 45: Ablation efficiency at 3 mm radial distance for a sphere with 7.97 mm radius of curvature. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure up to 400 mJ/cm2 and FWHM beam size of 2 mm.
Figure 46: Efficiency obtained with the proposed model for the conditions reported by Dorronsoro et al. Ablation efficiency for a sphere with 7.97 mm radius of curvature. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and FWHM beam size of 2 mm.
Figure 47: Efficiency obtained with the proposed model for the conditions reported by Dorronsoro et al.214 Average ablation efficiency for a sphere with 7.97 mm preoperative radius of curvature and a correction of -12 D. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and FWHM beam size of 2 mm. The radius of corneal curvature changes during treatment; consequently, the efficiency also varies over treatment. Note the improvement of ablation efficiency.
Figure 48: Efficiency obtained with the proposed model for the conditions reported by Dorronsoro et al.214 Average ablation efficiency for a sphere with 7.97 mm preoperative radius of curvature and a correction of +6 D. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and FWHM beam size of 2 mm. The radius of corneal curvature changes during treatment; consequently, the efficiency also varies over treatment. Note the increased loss of ablation efficiency during hyperopic corrections.
Figure 49: (Top) Original wavefront error, (middle) 15° clockwise torted wavefront error, and (bottom) residual wavefront error (all in two and three dimensions).
Figure 50: The difference between the postoperative and preoperative topographies compared to the intended correction (the difference in the orientation of the astigmatism defines the cyclotorsional error). (A) Preoperative topography. (B) Postoperative topography. (C) Differential topography. (D) Planned correction. Counterclockwise torsion of the astigmatism can be seen.
Figure 51: The percentage of residual aberrations vs. cyclotorsional error. Modulation of the cyclotorsional error by the angular frequency (m) is seen; the higher the angular frequency, the faster the residual aberration varies. For m=1 (coma), the maximum residual error is achieved for 180° torsion; for m=2 (cylinder), the maximum residual error would be achieved for 90° torsion; for m=3 (trefoil), the maximum residual error would be achieved for 60° torsion, and so on.
Figure 52: The relative orientation of residual aberrations vs. cyclotorsion error. Modulation of the cyclotorsional error and the relative orientation by the angular frequency (m) are seen.
Figure 53: Matching factor vs. relative orientation of residual aberrations.
Figure 54: Distribution of the magnitudes of the attempted astigmatic correction.
Figure 55: Distribution of the retrospectively calculated cyclotorsional errors.
Figure 56: The maximum allowable cyclotorsional errors vs. angular frequency for different criteria.
Figure 57: Concept of the ∆RMSho method: By comparing postoperative and preoperative corneal wavefront aberrations analysed for a common diameter starting from 4 mm, we increased the analysis diameter in 10 µm steps until the difference of the corneal RMSho was above 0.25 D for the first time. This diameter minus 10 µm determined the EOZ.
Figure 58: Concept of the ∆SphAb method: By analysing the differential corneal wavefront aberrations for a diameter starting from 4 mm, we increased the analysis diameter in 10 µm steps until the differential corneal spherical aberration was above 0.25 D for the first time. This diameter minus 10 µm determined the EOZ.
Figure 59: Concept of the RMS(∆HOAb) method: By analysing the differential corneal wavefront aberrations for a diameter starting from 4 mm, we increased the analysis diameter in 10 µm steps until the root-mean-square of the differential corneal wavefront aberration was above 0.25 D for the first time. This diameter minus 10 µm determined the EOZ.
Figure 60: Bilinear regression analyses for the correlations of EOZ with POZ and with defocus correction for each of the methods: ∆RMSho method (r2=.5, p<.005) (top), ∆SphAb method (r2=.7, p<.0001) (middle), and RMS(∆HOAb) method (r2=.7, p<.0005) (bottom). EOZ correlates positively with POZ and declines steadily with increasing defocus corrections. EOZ depends more strongly on POZ than on SEq. Example of double-entry graphs: A treatment of -5 D in 6.5 mm POZ results in green when analysed with the ∆RMSho and ∆SphAb methods (~6.5 mm EOZ), but in yellow when analysed with the RMS(∆HOAb) method (~6.0 mm EOZ).
Figure 61: Isometric optical zones: ∆RMSho method (red), ∆SphAb method (blue), and RMS(∆HOAb) method (green).
For POZ < IOZ ⇔ EOZ < POZ, for POZ = IOZ ⇔ EOZ = POZ, and for POZ > IOZ ⇔ EOZ > POZ. POZs larger than 6.75 mm result in an EOZ at least as large as the POZ.
Figure 62: Calculated nomogram-planned OZ (NPOZ) required to achieve an intended EOZ (IEOZ) for defocus correction for each of the methods: ∆RMSho method (top), ∆SphAb method (middle), and RMS(∆HOAb) method (bottom). Example of double-entry graphs: A treatment of -5 D with intended EOZ of 6.5 mm results in green when planned for the ∆RMSho and ∆SphAb methods (~6.5 mm nomogrammed OZ), but in yellow when planned for the RMS(∆HOAb) method (~7.0 mm nomogrammed OZ).
Figure 63: Minimisation by multizonal treatments.
Figure 64: Minimisation with smaller optical zone treatments with large transition zone.
Figure 65: Minimisation with smaller optical zone for the astigmatic correction.
Figure 66: Minimisation by a boost slider (down-slided).
Figure 67: Minimisation by a Z-Clipping method.
Figure 68: Minimisation by a Z-Shifting method.
Figure 69: Example of a patient with a normal WFAb and his preoperative visus.
Figure 70: Manual analysis of the optical effects (visus) of the different aberration modes for the same WFAb.
Figure 71: Diffraction limited visus (all aberration modes are corrected, ideal case).
Figure 72: Objective analysis (Optimised Aberration modes selection) of the optical and ablative effects of the different aberration modes for the same WFAb. Notice that the aberration modes to be selected are not trivial: not all the modes in green are unselected (not corrected), because some of them may help to save tissue; not all aberration modes in yellow are selected (corrected), because some of them may have low impact on vision. Notice, as well, that 8 µm of tissue is saved (16% of the ablation), but that the overall shape of the ablation remains.
Figure 73: Analysis of the optical effects (visus) of the objective analysis (Optimised Aberration modes selection) for the same WFAb.
Figure 74: Optimised Aberration Modes Selection. Based on the wavefront aberration map, the software is able to recommend the best possible compromise preserving the visual quality. Notice that the wavefront aberration is analysed by the software, showing the original ablation for a full wavefront correction and the suggested set of aberration modes to be corrected. Notice the difference in required tissue, but notice as well that the most representative characteristics of the wavefront map are still presented in the minimised tissue selection.
Figure 75: Optimised Aberration Modes Selection. Based on the wavefront aberration map, the software is able to recommend the best possible compromise preserving the visual quality. Notice that the wavefront aberration is analysed by the software, showing the original ablation for a full wavefront correction and the suggested set of aberration modes to be corrected. Notice the difference in required tissue, but notice as well that the most representative characteristics of the wavefront map are still presented in the minimised tissue selection.
Figure 76: Zernike coefficients distribution for the sample population of wavefront maps: natural arithmetic average, average considering bilateral symmetry, average considering absolute value, root-mean-square with respect to zero.
Figure 77: Ablation depth for OZTS vs. ablation depth for full-customised correction for: Aberration-Free correction (all HOAb disabled) (in blue), minimise depth (in magenta), minimise volume (in yellow), minimise depth+ (in cyan), and minimise volume+ (in purple).
Figure 78: Ablation time for OZTS vs. ablation time for full-customised correction for: Aberration-Free correction (all HOAb disabled) (in blue), minimise depth (in magenta), minimise volume (in yellow), minimise depth+ (in cyan), and minimise volume+ (in purple).
Figure 79: Preoperative topography and corneal wavefront maps of the patient K.S.
Figure 80: Preoperative corneal wavefront map of the patient K.S. determining a functional optical zone (with threshold of 0.38D) of 4.69 mm Ø.
Figure 81: Preoperative corneal wavefront map of the patient K.S. determining a functional optical zone (with threshold of 0.50D) of 5.85 mm Ø.
Figure 82: Comparative treatment plans and savings of the patient K.S.
Figure 83: 3-month postoperative topography and corneal wavefront maps of the patient K.S.
Figure 84: 3-month postoperative corneal wavefront map of the patient K.S.
determining a functional optical zone (with threshold of 0.38D) of 4.69 mm Ø.
Figure 85: 3-month postoperative corneal wavefront map of the patient K.S. determining a functional optical zone (with threshold of 0.50D) of 5.85 mm Ø.
Figure 86: Comparative corneal wavefront maps of the patient K.S. simulating UCVA conditions.
Figure 87: Comparative corneal wavefront maps of the patient K.S. simulating UCVA conditions.
Figure 88: Comparison of the refractive outcome in SEq for CW group (green bars), MD group (blue bars), and MV group (yellow bars).
Figure 89: Comparison of the predictability scattergram for SEq for CW group (green diamonds), MD group (blue triangles), and MV group (yellow squares).
Figure 90: Comparison of the predictability for astigmatism for CW group (green diamonds), MD group (blue triangles), and MV group (yellow squares).
Figure 91: Comparison of the change in BSCVA (Safety) for CW group (green bars), MD group (blue bars), and MV group (yellow bars).
Figure 92: Comparison of the contrast sensitivity for preoperative status (dark blue triangles), CW group (green squares), MD group (blue circles), and MV group (yellow diamonds).
Figure 93: Comparison of the ablation depths for CW group (green diamonds), MD group (blue squares), and MV group (yellow triangles).
Figure 94: Comparison of the ablation times for CW group (green diamonds), MD group (blue squares), and MV group (yellow triangles).

KEYWORDS (Palabras clave)
Automatic Refraction Balance, Equivalent Defocus, Dioptric equivalent, Weight coefficients of the Zernike expansion, Corneal wavefront, Laser Keratomileusis, Minimise Volume, Optical Zone, Total Ablation Zone, HOA, HOAb, Zernike expansion, customised ablation, higher-order aberrations, maximum depth, functional Optical Zone, Wavefront aberration, optical degradation, refractive properties, wavefront error, high order aberration, Spherical Equivalent, Transition Zone, Wave aberration, Point source, Optical Society of America, Optical system, Laser Keratectomy, Radial order, Refractive Zone, Optical blur, Meridional index, Optimised Refractive Keratectomy, Ocular wavefront, Custom Ablation Manager, Zernike terms, Angular frequency, Minimise Depth, Zernike mode, wave of light, unit disk, Frits Zernike

ABBREVIATIONS
Automatic Refraction Balance
Equivalent Defocus
Dioptric equivalent
Cmn or C[n,m]: Weight coefficients of the Zernike expansion
Custom Ablation Manager
Corneal wavefront
Factor for enabling/disabling specific Zernike terms
Epithelial Laser ASsisted In-situ Keratomileusis
High Order Aberration(s)
Laser ASsisted Anterior Keratomileusis
Laser Assisted Sub-Epithelial Keratectomy
Laser ASsisted In-situ Keratomileusis
Low Order Aberration(s)
Angular frequency or meridional index
Minimise Depth
Minimise Depth Plus
Equivalent Defocus
Minimise Volume
Minimise Volume Plus
Radial order
Oculus dexter (right eye)
Optimised Refractive Keratectomy
Oculus sinister (left eye)
Optical Society of America
Ocular wavefront
Optical Zone
Optimised Zernike Terms Selection
Photo-Refractive Keratectomy
Photo-Therapeutic Keratectomy
Refractive Zone
Spherical Equivalent
Total Ablation Zone
Transition Zone
Optical blur
Wavefront aberration
Wavefront aberration
Zmn or Z[n,m]: Zernike polynomials

Aberration: In optical systems, a condition that leads to blurring of the image. It occurs when light from infinity or from a point source, after transmission through the system, does not converge into (or does not diverge from) a single point.

Aberration-Free: An ablation profile with which the Wavefront aberration (within Optical Zone) after surgery equals the Wavefront aberration (within Optical Zone) prior to the surgery after balancing Sphere and Cylinder components, i.e. there is no induced change in Wavefront aberration (within Optical Zone) other than Sphere and Cylinder components.

Angular frequency: Sinusoidal velocity with which the Zernike terms complete a cycle over a circumference at a radial distance.

Automatic Refraction Balance: Objective combination of the planned refraction into the wavefront aberration, to correct, in a single step, the decided HOAb and the planned refraction.

Dioptric equivalent: The amount of ordinary defocus needed to produce a similar optical degradation to that produced by one or more higher-order aberrations.

Custom Ablation Manager: A module to select for ablation subsets of Zernike terms included in the Zernike expansion of the wavefront aberration.

Corneal wavefront: The wavefront aberration corresponding solely to the refractive properties of the cornea.

Equivalent Defocus: The amount of ordinary defocus needed to produce the same RMS wavefront error as that produced by one or more higher-order aberrations.

Meridional index: See Angular frequency.

Minimise Depth: A method to objectively minimise the maximum depth of a customised ablation based on the Zernike expansion of the wavefront aberration.

Minimise Depth Plus: Another method to objectively minimise the maximum depth of a customised ablation based on the Zernike expansion of the wavefront aberration.
Minimise Volume: A method to objectively minimise the ablation volume of a customised ablation based on the Zernike expansion of the wavefront aberration.

Minimise Volume Plus: Another method to objectively minimise the ablation volume of a customised ablation based on the Zernike expansion of the wavefront aberration.

Optical Zone: The zone for which the change of refraction or HOAb is planned.

Ocular wavefront: The wavefront aberration corresponding to the refractive properties of the eye as a whole.

Optimised Zernike Terms Selection: A method to objectively select for ablation subsets of Zernike terms included in the Zernike expansion of the wavefront aberration.

Radial order: The highest power found in a Zernike term.

Refractive Zone: The zone for which the change of refraction shall match the planned refractive correction.

Total Ablation Zone: The zone that receives laser pulses.

Transition Zone: The zone ablated with a profile to smooth the ablated area towards the non-treated cornea, responsible for providing the actual functional Optical Zone.

Wave aberration: See Wavefront error.

Wavefront: In a wave of light propagating in a medium, a surface containing the locus of points having the same phase.

Wavefront aberration: See Wavefront error.

Wavefront error: The deviation of a wavefront in an optical system from a desired perfect planar (or perfect spherical) wavefront is called the wavefront error. Wavefront aberrations are usually described as either a sampled image or a collection of two-dimensional polynomial terms.

Zernike mode: Each pair of Zernike polynomials structured in two complementary symmetrical sets, one governed by the cosine, and the other governed by the sine functions.

Zernike polynomials: A series of polynomials that are orthogonal on the unit disk. Named after Frits Zernike.

Funding and declaration of conflict of interests (Financiación y declaración de conflicto de intereses)

This work has been financed by SCHWIND eye-tech-solutions without any other source of external funding.
Maria C Arbelaez, Diego de Ortueta, Jorge L Alió, Massimo Camellin, and Ioannis M Aslanides are consultants for SCHWIND eye-tech-solutions. Thomas Magnago, Tobias Ewering, Nico Triefenbach, Mario Shraiki, Thomas Hollerbach, Anita Grimm, and Thomas Klinner are employees of SCHWIND eye-tech-solutions. Samuel Arba-Mosquera is an employee of SCHWIND eye-tech-solutions. This project of dissertation represents the personal views of its author, and was not written as a work for hire within the terms of the author's employment with SCHWIND eye-tech-solutions. The work described in the project of dissertation itself (as opposed to the work done writing the article) was conducted as part of the author's work for SCHWIND eye-tech-solutions. Content attributed to the author was vetted by a standard SCHWIND eye-tech-solutions approval process for third-party publications.

Part 1

Laser corneal refractive surgery has revolutionised ophthalmology, allowing surgeons to correct refractive defects precisely, safely, and in a stable manner. Until the introduction of laser corneal refractive surgery about 20 years ago, these refractive defects could only be corrected by ophthalmic prostheses such as spectacles or contact lenses, or they could be approached by a restricted elite of pioneer surgeons in the form of mechanical keratomileusis or intraocular surgery, with all the complications and risks that these techniques imply. Since the introduction by Trokel1 of the excimer laser surgery of the cornea in 1983, more than 50 million treatments have been successfully performed. As the prevalence of myopic defects is about 30% in western societies2 and above 50% in Asian countries, the potential of the surgical techniques reaches more than 1 billion people (without consideration of their economic means). Despite its predictability and safety, laser corneal refractive surgery is not a risk-free technique.
Nowadays, the limits of laser refractive corrections are imposed by the number of dioptres that can be corrected (typically from -12D to +6D), combined with the pupil size (larger pupils may receive only smaller dioptric corrections), the age of the patients (typically 18 years or older with records of stable refractions), and corneal thickness (thinner corneas may receive only smaller dioptric corrections). Further, specific aspects of the ablation profiles, the ablation procedure, and the biomechanical reactions of etching a new corneal shape by removing a corneal lenticule may affect the quality of vision postoperatively. Examples include over- and under-corrections, induction of haze and other corneal opacities, induction of aberrations and reduction of the contrast sensitivity, and perception of halos, glare, or visual disturbance at night. Over- and under-corrections can be approached by secondary refractive treatments (at the expense of extra tissue removal) and subsequently prevented by the use of nomograms. For the induced haze, mitomycin C has opened up a successful therapy. These problems, among other challenges, are the basis of the research conducted by the "Grupo de investigación en cirugía refractiva y calidad de visión." Remarkable are the efforts to face the problems observed in refractive surgery by developing adequate experimental animal models for analysing PRK3, LASIK4, LASEK, CXL5, and corneal additive surgery6, as well as the investigations of corneal cauterization7 and the induction of corneal haze8,9. On the biophysics side, the collaborations with the University of Valladolid10 and the Consejo Superior de Investigaciones Científicas11 have proven very fruitful. In light of the unresolved issues mentioned in the introduction, it is pertinent and justified to attempt to address some of the previously raised questions using the scientific method and purpose-developed engineering tools.
Chapter 1

HYPOTHESIS OF THIS THESIS (Hipótesis de esta Tesis)

The starting hypothesis is that it is possible to develop new algorithms and ablation strategies for efficiently performing laser corneal refractive surgery in a customised form, minimising the amount of ablated tissue without compromising the visual quality. The availability of such profiles, potentially maximising visual performance without increasing the factors of risk, would be of great value for the refractive surgery community and ultimately for the health and safety of the patients.

It is possible to improve the application of laser corneal refractive surgical treatments by properly compensating the loss of ablation efficiency for non-normal incidences, enhancing the systems to track eye movements, and by optimised ablation profiles for customised refractive surgery. The results and improvements derived from this work can be directly applied to the laser systems for corneal refractive surgery, as well as to the algorithms and computer programmes which control and monitor the ablation process.

Chapter 2

MOTIVATION

Laser corneal refractive surgery presents yet unresolved problems, e.g. loss of ablation efficiency, cyclotorsional movements, as well as the induction of aberrations. Research to approach these issues may help reduce the complications and occurrence of adverse events during and after refractive surgery, improve the postoperative quality of vision, and reduce the ratio of retreatments and reoperations. The intention of this dissertation is to describe in detail the theoretical framework, explaining a possible method of tissue-saving optimisation and exploring its tissue-saving capabilities, as well as evaluating its clinical application. This analysis considers the results in terms of predictability, refractive outcome, efficacy, safety, and wavefront aberration.
Chapter 3

LASER REFRACTIVE SURGERY (Cirugía refractiva laser)

With the introduction of laser technologies12,13 for refractive surgery14, the change of the corneal curvature to compensate in a controlled manner for refractive errors of the eye15 is more accurate than ever. The procedure is nowadays a successful technique, due to its sub-micrometric precision and the high predictability and repeatability of corneal ablation16, accompanied by minimal side effects17. Standard ablation profiles based on the removal of convex-concave tissue lenticules with spherocylindrical surfaces proved to be effective in the compensation of primary refractive errors18. However, the quality of vision deteriorated significantly, especially under mesopic and low-contrast conditions19. Furthermore, there is still controversy concerning the optimal technique for corneal refractive procedures20, and about the corneal layer in which to perform refractive procedures to maximise patients' visual outcomes.

The origins of refractive surgery (Los orígenes de la cirugía refractiva)

Historically, several attempts have been developed with two clearly different approaches: surface ablations and lamellar ablations. Late in the 80's, Photo-Refractive Keratectomy21 (PRK) was performed with broad-beam lasers, mechanical debridement, small optical zones (5 mm), without transition zones, and the surgery used to be unilateral; at the same time, excimer laser keratomileusis22 was performed with thick free caps (240 µm), ablated on the underside of the flap and then sutured in place. In the early 90's, Laser-Assisted in Situ Keratomileusis23 (LASIK) was developed, by creating a hinged flap (180 µm) and ablating on the stromal surface; no suture was needed. In the mid 90's, the first scanning lasers were used24, and the ablation zones were increased up to 7 mm; moreover, alcohol debridement was slowly replacing mechanical debridement in surface treatments.
Late in the 90's, laser systems were enhanced by adapting Eye-Tracking technologies25. In the early 2000's, Laser-Assisted Sub-Epithelial Keratectomy26 (LASEK) was introduced by creating epithelial flaps, and the laser technology improved by introducing flying-spot patterns27. Epithelial Laser in Situ Keratomileusis28 (Epi-LASIK) and Epithelial Laser-Assisted Sub-Epithelial Keratectomy29 (Epi-LASEK) were introduced by creating a truly epithelial flap at the Bowman's layer level, and Femtosecond-Laser-Assisted Laser in Situ Keratomileusis30 (Femto-LASIK) and Thin-Flap and Ultra-Thin-Flap Laser in Situ Keratomileusis (Ultra-Thin-Flap-LASIK) were introduced by creating a flap slightly beneath Bowman's layer level (Sub-Bowman's Keratomileusis31). Nowadays, the technique selection is really wide-ranged, and the surgeons can decide, based on different criteria, which technique they will apply for each specific treatment. In surface ablations, one can decide how to remove the epithelium (mechanically, with alcohol, by Photo-Therapeutic Keratectomy (PTK), or with a truly epithelial flap), whereas in stromal ablation one can decide how to cut the flap (hinge position, flap thickness, mechanically or with a femtosecond laser).

Laser ablation (Ablación laser)

Laser corneal refractive surgery is based on the use of a laser (typically an excimer one) to change the corneal curvature to compensate for refractive errors of the eye. It has become the most successful technique, mainly due to the sub-micron precision and the high repeatability of the ablation of the cornea, accompanied by minimal side effects. Laser refractive surgery is based upon the sequential delivery of a multiplicity of laser pulses, each one removing (ablating) a small amount of corneal tissue. Corneal remodelling is essentially similar to any other form of micro-machining. The lasers used in micro-machining are normally pulsed excimer lasers, where the time length of the pulses is very short compared to the period between the pulses.
Although the pulses contain little energy, given the small size of the beams, the energy density can be high; and given the short pulse duration, the peak power provided can be high. Many parameters have to be considered in designing an efficient laser ablation. One is the selection of the appropriate wavelength (193.3 ± 0.8 nm for ArF) with optimum depth of absorption in tissue, which results in a high energy deposition in a small volume for a speedy and complete ablation. The second parameter is a short pulse duration to maximize peak power and minimize thermal conduction to the adjacent tissue (ArF-excimer-based, τ < 20 ns). The radiant exposure is a measure of the density of energy that governs the amount of corneal tissue removed by a single pulse. In excimer laser refractive surgery, this energy density must exceed 40-50 mJ/cm2. The depth of a single impact relates to the fluence, and the thermal load per pulse also increases with increasing fluence. Knowing the fluence and details of the energy profile of the beam (size, profile, and symmetry), we can estimate the depth, diameter, and volume of the ablation impact. Assuming a super-Gaussian beam energy profile, the following equation applies:

$$I(r) = I_0 \, e^{-2\left(\frac{r}{R_0}\right)^{2N}} \qquad (1)$$

where I is the radiant exposure at a radial distance r from the axis of the laser beam, I0 is the peak radiant exposure (at the axis of the laser beam), R0 is the beam size at which the radiant exposure falls to 1/e2 of its peak value, and N is the super-Gaussian order of the beam profile (where N=1 represents a pure Gaussian beam profile, and N→∞ represents a flat-top beam profile).
From the blow-off model33 (derived from the Beer-Lambert law34), the real energy density absorbed at that point determines the ablation depth as:

$$d_{ij} = \frac{1}{\alpha_{Cornea}} \ln\!\left(\frac{I_{ij}\,(1-R_{ij})}{I_{Th}}\right) \qquad (2)$$

where dij is the actual depth per pulse at the location i,j; Iij the radiant exposure of the pulse at the location i,j; Rij the reflectivity at the location i,j; ITh the ablation threshold; and αCornea the corneal absorption coefficient. For human corneal tissue, the ablation threshold takes values of about 40-50 mJ/cm2, and the absorption coefficient is about 3.33-3.99 µm-1. We chose values of 46 mJ/cm2 for the ablation threshold and 3.49 µm-1 as the absorption coefficient of the human corneal tissue. For PMMA, the ablation threshold takes values of about 70-80 mJ/cm2, and the absorption coefficient is about 3.7-4.4 µm-1. We chose values of 76 mJ/cm2 for the ablation threshold and 4.0 µm-1 as the absorption coefficient for PMMA. Calculating the volume of a single spot for the cornea, and dividing it by the volume of a single impact on PMMA, we get the so-called "cornea-to-PMMA ratio." In general:

$$d(r) = \frac{1}{\alpha_{Cornea}} \ln\!\left(\frac{I(r)\,(1-R(r))}{I_{Th}}\right) \qquad (3)$$

For different beam profiles, we get different spot profiles, as depicted in Figure 1 and Figure 2.

Figure 1: Beam profiles for different beam geometries, where N is the Gaussian or supergaussian order. Gaussian profile (N=1) in blue, supergaussian profile (N=2) in pink, Flat-Top profile (N=∞) in yellow.

Figure 2: Spot profiles for different beam geometries. Parabolic spot profile (from Gaussian beams, N=1) in blue, quartic spot profile (from supergaussian beams with N=2) in pink, Flat-Top spot profile (from Flat-Top beams, N=∞) in yellow.

For different radiant exposures with the same beam profiles, we get different spot profiles as well (Figure 3).
Figure 3: Spot profiles for different radiant exposures. Quartic spot profiles (from supergaussian beams with N=2) for a peak radiant exposure of 150 mJ/cm2 in blue and for a peak radiant exposure of 300 mJ/cm2 in pink.

Applying the Lambert-Beer law (blow-off model), the footprint (diameter) of the impact is:

$$FP = 2R_0 \left(\frac{1}{2}\ln\frac{I_0}{I_{Th}}\right)^{\frac{1}{2N}} \qquad (4)$$

where FP is the footprint (diameter) of the ablative spot and ITh is the ablation threshold of radiant exposure for the irradiated tissue or material, below which no ablation occurs. From these data (and the beam symmetry: square, hexagonal, circular), we can calculate the volume of the ablation impact:

$$V_S = \int_0^{2\pi}\!\!\int_0^{FP/2} \frac{1}{\alpha}\left[\ln\frac{I_0}{I_{Th}} - 2\left(\frac{r}{R_0}\right)^{2N}\right] r \, dr \, d\theta \qquad (5)$$

where VS is the volume of a single spot, and α the absorption coefficient of the irradiated tissue or material. Evaluating the integral:

$$V_S = \frac{\pi}{\alpha}\,\frac{N}{N+1}\, R_0^2 \, \ln\frac{I_0}{I_{Th}} \left(\frac{1}{2}\ln\frac{I_0}{I_{Th}}\right)^{\frac{1}{N}} \qquad (6)$$

The problem of the spot profile and the radiant exposure of the system relies on the sequential delivery of a multiplicity of laser pulses, each one ablating locally a small amount of corneal tissue35, the global process being an integral effect. The larger the spot profile, the larger the ablated volume per pulse, limiting the resolution of the treatment. There are several ways to avoid that problem:
- Reducing the radiant exposure, improving the vertical resolution of the treatment
- Reducing the spot diameter, improving the horizontal resolution of the treatment
The problem with both alternatives is that they need extra time for the ablation procedure, which may produce other inconveniences.
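The single-spot quantities discussed above (depth per pulse from the blow-off model, the spot footprint, and the spot volume) can be sketched numerically. This is a minimal illustration, not vendor code: the function names are mine, and the constants are the corneal values quoted in the text (ablation threshold 46 mJ/cm2, absorption coefficient 3.49 µm-1). The closed-form spot volume is cross-checked against a direct numerical integration of the depth profile over the footprint.

```python
import math

# Corneal constants taken from the text.
I_TH_CORNEA = 46.0    # mJ/cm^2, ablation threshold
ALPHA_CORNEA = 3.49   # 1/um, absorption coefficient

def depth_per_pulse(I, I_th=I_TH_CORNEA, alpha=ALPHA_CORNEA, R=0.0):
    """Ablation depth (um) of a single pulse via the blow-off model,
    d = ln(I*(1-R)/I_th) / alpha; zero below the ablation threshold."""
    I_eff = I * (1.0 - R)
    return math.log(I_eff / I_th) / alpha if I_eff > I_th else 0.0

def footprint(I0, R0, N, I_th=I_TH_CORNEA):
    """Spot diameter for a super-Gaussian beam of order N (same units as R0)."""
    return 2.0 * R0 * (0.5 * math.log(I0 / I_th)) ** (1.0 / (2.0 * N))

def spot_volume(I0, R0, N, I_th=I_TH_CORNEA, alpha=ALPHA_CORNEA):
    """Closed-form single-spot volume (R0 in um -> volume in um^3)."""
    L = math.log(I0 / I_th)
    return (math.pi * N / (alpha * (N + 1))) * R0**2 * L * (0.5 * L) ** (1.0 / N)

def spot_volume_numeric(I0, R0, N, I_th=I_TH_CORNEA, alpha=ALPHA_CORNEA, steps=20000):
    """Numerical check: integrate d(r) * 2*pi*r dr over the spot footprint."""
    r_max = footprint(I0, R0, N, I_th) / 2.0
    dr = r_max / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr                       # midpoint rule
        I_r = I0 * math.exp(-2.0 * (r / R0) ** (2 * N))
        total += depth_per_pulse(I_r, I_th, alpha) * 2.0 * math.pi * r * dr
    return total

# Example: supergaussian beam (N=2), 150 mJ/cm^2 peak, R0 = 500 um
print(depth_per_pulse(150.0))                 # peak depth per pulse (um)
print(spot_volume(150.0, 500.0, 2))           # closed form (um^3)
print(spot_volume_numeric(150.0, 500.0, 2))   # numerical cross-check
```

The agreement between the closed form and the numerical integral is a quick consistency check of the volume expression for any super-Gaussian order N.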
The gained ablation volume has to be applied onto the cornea by thousands of single laser shots at different but partly repeated corneal positions, because the ablated volume of a single spot is much smaller than the total ablation volume. SCHWIND eye-tech-solutions has introduced as well some innovations concerning ablation shot file (sequences of pulses needed to carry out a refractive procedure) generation, to optimally remove the tissue corresponding to these state-of-the-art treatments, generating the sequence of laser shot coordinates in a way that:
- guarantees a high fidelity reproduction of the given ablation volume line shape
- avoids vacancies and roughness of the cornea
In this context, two opposed requirements define the fluence level:
- A short ablation time (favouring high fluence levels)
- A high fidelity ablation (favouring low fluence levels)

Zernike representation of aberrations (Representación de Zernike de las aberraciones)

A wavefront aberration expressed as an expansion in series of Zernike polynomials36 takes the form:

$$WA(\rho,\theta) = \sum_{n=0}^{\infty}\sum_{m=-n}^{n} C_n^m \, Z_n^m(\rho,\theta) \qquad (7)$$

where WA is the wavefront aberration described in polar coordinates, Z[n,m] are the Zernike polynomials (Figure 4) in polar coordinates, and C[n,m] are weight coefficients for the Zernike polynomials.

Figure 4: Zernike pyramid showing the Zernike terms up to 7th order.

The wavefront is defined as the multidimensional surface of the points with equal phase. The difference, having length units, from each of the points to a wavefront reference surface (typically either a planar wavefront surface or a spherical wavefront surface) determines the wavefront aberration. Therefore, the wavefront error is described in units of length. As the C[n,m] are just weight coefficients, they are non-dimensional, and the Z[n,m], the Zernike polynomials, describe units of length.
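The Zernike expansion above can be made concrete with a small numerical sketch. The snippet below evaluates a wave-aberration map from a handful of low-order terms in the OSA/ANSI normalisation; the dictionary layout and the helper name are illustrative choices, not part of any particular commercial implementation.

```python
import math

# OSA-normalised Zernike polynomials for a few terms used throughout the
# text: astigmatism Z[2,-2]/Z[2,2], defocus Z[2,0], spherical aberration Z[4,0].
ZERNIKE = {
    (2, -2): lambda rho, th: math.sqrt(6) * rho**2 * math.sin(2 * th),
    (2,  0): lambda rho, th: math.sqrt(3) * (2 * rho**2 - 1),
    (2,  2): lambda rho, th: math.sqrt(6) * rho**2 * math.cos(2 * th),
    (4,  0): lambda rho, th: math.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
}

def wave_aberration(coeffs, rho, theta):
    """WA(rho, theta) = sum over (n, m) of C[n,m] * Z[n,m](rho, theta),
    with rho normalised to the pupil radius (0..1)."""
    return sum(c * ZERNIKE[nm](rho, theta) for nm, c in coeffs.items())

# Example: defocus plus some spherical aberration, evaluated at the pupil edge
coeffs = {(2, 0): 0.25, (4, 0): 0.10}
print(wave_aberration(coeffs, 1.0, 0.0))
```

Higher orders would simply add more entries to the table; the weighted sum itself does not change.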
Measurement of Optical Aberrations (Medida de las aberraciones ópticas)

To avoid inducing aberrations, as well as to eliminate the existing aberrations, "customised" treatments were developed37. Customisation of the ablation is possible either using wavefront measurements of the whole eye38,39 (obtained, e.g., by Scheiner aberroscopes40, Tscherning aberroscopes41,42, Hartmann screens43,44, slit skiascopic refractometers45, Hartmann-Shack46 or other types of wavefront sensors47,48,49,50,51,52,53) or by using corneal topography-derived wavefront analyses54,55. Topographic-guided56, wavefront-driven37, wavefront-optimized57, asphericity-preserving58, and Q-factor profiles59 have all been put forward as solutions. A customised treatment is the treatment of choice in highly aberrated eyes and for retreatments, especially for those performed ex domo, where the original preoperative aberrations are unknown. Throughout this thesis, optical errors, represented by the wave aberration, are described by the weight coefficients of the Zernike polynomials36 in OSA notation.

Theoretical ablation profiles (Perfiles de ablación teóricos)

A correcting method in laser refractive surgery that corrects higher-order aberrations as well as defocus and astigmatism could improve vision. In a first approximation to the laser refractive surgery case, where tissue removal occurs, this volume can be expressed as follows37. Wavefront correction can be achieved by applying the reverse wavefront. Because a refractive surgery laser system can remove tissue rather than add tissue, the wavefront correction must also be taken into consideration by shifting the ablation profile from negative values to only positive values. Furthermore, the correction will be performed by modifying the anterior surface of the cornea by photoablation. Thus, the change in the refractive index at the air (n=1) and cornea (n=1.376) boundary must be included.
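The two considerations just mentioned (shifting the profile so it only removes tissue, and scaling by the refractive index step at the air/cornea boundary) can be sketched in a few lines. This is a toy illustration on a sampled map; the function name and the tiny example map are mine, not part of any clinical software.

```python
# Refractive indices from the text: air n = 1, cornea n = 1.376.
N_CORNEA, N_AIR = 1.376, 1.0

def ablation_map(wa):
    """Ablation depth map from a sampled wave-aberration map (same length
    units): shift so all values are non-negative (a laser can only remove
    tissue), then scale by the index step at the air/cornea boundary."""
    wa_min = min(min(row) for row in wa)
    scale = N_CORNEA - N_AIR
    return [[(v - wa_min) / scale for v in row] for row in wa]

# Toy 1x3 wavefront map in micrometres; the output is everywhere >= 0,
# and the wavefront shape is magnified by 1/(n_cornea - n_air).
abl = ablation_map([[-0.188, 0.0, 0.188]])
print(abl)
```

Note that the division by the small index difference (0.376) means the ablation depth is substantially larger than the wavefront error it corrects, which is one reason tissue-saving strategies matter.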
Applying these considerations, one will get:

$$Abl(\rho,\theta) = \frac{WA(\rho,\theta) - \min\left[WA(\rho,\theta)\right]}{n_{Cornea} - n_{Air}} \qquad (8)$$

where Abl(ρ,θ) is the ablation at a given point (in polar coordinates), WA the wave aberration, and nCornea and nAir the refractive indices of the cornea and the air, respectively.

The LASIK technique (La técnica LASIK)

With the LASIK (Laser in Situ Keratomileusis23) treatment, we have an accepted method for correcting refractive errors such as myopia22, hyperopia61, and astigmatism62,63. One of the most significant side effects in myopic LASIK is the induction of spherical aberration64, which causes halos and a reduction of contrast sensitivity19. However, the different laser platforms are always introducing new concepts and optimising their ablation profiles.

Chapter 4

CORNEAL ABLATION AND OPTICAL ABERRATIONS (Ablación corneal y aberraciones ópticas)

Aberrations and visual performance after refractive surgery (Aberraciones y rendimiento visual tras cirugía refractiva)

With the introduction of wavefront analysis, it was proved that the conventional refractive laser techniques were far from ideal, by measuring the aberrations induced by conventional algorithms65 and the aberrations induced by the LASIK flap cut itself66 (Figure 5).

Figure 5: Induced Spherical aberration vs. achieved correction using classical profiles for different treatment strategies: LASIK (in blue), PRK (in purple). Notice that LASIK-induced Spherical aberration does not go across the origin, representing the isolated effect due to the flap cut. Notice as well that the induced Spherical aberration was more pronounced for hyperopic treatments than for myopic ones.
Biological response of the cornea (Respuesta biológica de la córnea)

Recent studies reporting LASIK-induced optical aberrations have analysed the different variables affecting the LASIK flap cut, and their influence on the aberration induction67. In 2004, Carriazo reported a study revealing the biomechanical response of the cornea after creating a flap cut, showing a clear correlation between the amount of aberrations induced and the flap thickness (Figure 6).

Figure 6: Biomechanical changes due to the ablation, depending on the stromal layer where the tissue will be removed.

Flaps thinner than 130 microns induce much less optical aberration than flaps thicker than 140 microns. A clear difference should be established between LASIK (traditionally getting flaps averaging 170 ± 30 microns) and thin-flap LASIK68 (flap thickness averaging 110 ± 30 microns), sometimes named LASAK (LASER Anterior-stromal Keratomileusis).

Visual degradation (Deterioro visual)

It is still not known precisely whether and when an "optically perfect eye" after surgery is better than preserving the aberrations that the eye had before surgery. Although the optical quality of the eye can be described in terms of the aberration of its wavefront, it was observed that healthy individuals, even those with a certain degree of super-vision, presented a measurable degree of aberration in their wavefront69. Moreover, it was observed that the individuals with smaller aberration in their wavefront were not always those scoring the best visual qualities. Thus, the optical quality of the human eye does not determine its visual quality in a univocal way. However, the induction of aberrations, such as spherical aberration and coma, has been related to a loss of visual acuity70, accommodative lag71 or, in general, visual quality. Finally, the concept of neural compensation suggests that the neural visual system is adapted to the eye's own aberration pattern72.
A study by Artal et al.73 on the effects that this neural compensation causes on the visual function indicates that the visual quality we have is somewhat superior to the optical quality that our eye provides.

Current trends in refractive surgery (Tendencias actuales en cirugía refractiva)

The trends to correct refractive errors using laser refractive surgery are evolving nowadays towards searching where to target the laser ablation to optimise the photo-ablated tissue layer. Historically, the first approaches were using PRK (Photo-Refractive Keratectomy) in most of the patients. But finally, this technique was mainly discarded due to post-op pain, long recovery times, and, clinically, due to frequent post-op haze as an undesired effect74. Later on, LASIK became more and more common in clinical practice, using flap cuts around 160 microns, and it soon became the preferred leading refractive surgery technique75. Afterwards, two new competing techniques were introduced in standard clinical practice: the LASEK concept (epithelium removal by using alcohol, and laser photo-ablation just over Bowman's layer), and, just recently, Epi-LASIK (epithelium mechanically lifted up without weakening agents, and laser photo-ablation also over Bowman's layer). These two techniques are relatively new, and there are controversial reports, describing on average diminished post-op pain and less frequent haze complications compared to PRK76,77. Nevertheless, these techniques still produce post-op pain and sometimes haze complications76,78. Since laser refractive surgery was introduced, the technology has rapidly evolved. With the beginning of photo-ablation, the goal was to achieve predictable and stable results for myopic, hyperopic, and astigmatic corrections. Today's technology is far more advanced, since sophisticated diagnostic instruments, such as aberrometers and topography systems, offer the challenge of improving the postoperative results in terms of visual acuity and night vision.
At the same time, the better knowledge and understanding of refractive surgery by potential patients raises the required standard outcomes, making it more challenging to find new approaches towards the close-to-zero-aberrations target. This involves several senses: a) finding the sources of the aberrations induced by laser refractive surgery79, b) developing "free-of-aberrations" ablation profiles, c) developing ablation profiles to compensate the natural aberrations of any single eye to get a close-to-zero-aberrations result80.

The pupil centre (PC) considered for a patient who fixates properly defines the line-of-sight, which is the reference axis recommended by the OSA for representing the wavefront aberration. Nevertheless, because the pupil centre is unstable, a morphologic reference is more advisable. The pupil centre shifts with changes in the pupil size; moreover, the entrance pupil we see is a virtual image of the real pupil. The corneal vertex (CV) in different modalities is the other major choice as the centring reference. In perfectly acquired topography, if the human optical system were truly coaxial, the corneal vertex would represent the corneal intercept of the visual axis. Although the human optical system is not truly coaxial, the cornea is the main refractive surface. Thus, the corneal vertex represents a stable, preferable morphologic reference (Figure 7).

Figure 7: Representative axes of the human eye.

Chapter 5

OPEN QUESTIONS IN CORNEAL REFRACTIVE SURGERY (Cuestiones actuales en cirugía refractiva corneal)

Controversy remains over the proper definition of an optimal ablation profile for corneal refractive surgery81. Considerations such as the duration of the treatment, removal of tissue82, remodelling of tissue83, and, in general, the overall surgical outcome have made it difficult to establish a universal optimal profile.
These considerations are actually interrelated in a multifactorial way, and may lead to clinical problems84 such as corneal dehydration, ectasia, or regression. In laser corneal refractive surgery, one always aims to reduce the ablated tissue thickness (and, to a minor degree, to reduce the intervention time), the principal risk being an ectasia of the cornea due to excess thinning. Customised laser corneal refractive surgery on aberrated eyes may yield better results than the standard procedure85,86, but generally results in higher ablation depth, volume, and time. The idea of optimising the customised treatment to reduce the ablated thickness while retaining the positive aspects is therefore pertinent. The development of new algorithms or ablation strategies for efficiently performing laser corneal refractive surgery in a customised form, minimising the amount of ablated tissue without compromising the visual quality, becomes an important challenge. The availability of such profiles, potentially maximising visual performance without increasing the factors of risk, would be of great value for the refractive surgery community and, ultimately, for the patients' health and safety. Therefore, the topic "Optimised Zernike terms selection in customised treatments for laser corneal refractive surgery" is worth analysing, because its clinical implications are not yet deeply explored. The real impact of tissue-saving algorithms in customised treatments is still discussed in a controversial way, and disagreement remains about the optimum profile definition. The aim of this work is to provide a simple and understandable theoretical frame explaining a possible method of tissue-saving optimisation. Most of the systems available for laser refractive surgery include possibilities to customise the ablation, either based on topographical elevation or on corneal or ocular wavefront aberration.
The problem of minimising the amount of tissue that is removed is that it must be done in such a way that: a) does not compromise the refractive correction b) does not compromise the visual performance c) is safe, reliable and reproducible Chapter 6 SPECIFIC GOALS OF THIS THESIS (Objetivos específicos) This dissertation deals with multiple purposes and goals: To analyse the corneal asphericity using corneal wavefront and topographic meridional fits87,88,89 To provide a model of an aberration-free profile and to clinically evaluate the impact of treatments based upon these theoretical To assess a decision tree analysis system to further optimize refractive surgery outcomes109,110,111,112,113,114,115,116 To develop a geometrical analysis of the loss of ablation efficiency at non-normal incidence117,118,119,120 To analyze the clinical effects of pure cyclotorsional errors during refractive surgery121,122,123,124,125,126 To evaluate the effective optical zone (the part of the ablation that receives full correction) after refractive surgery127,128,129 To develop a method to objectively minimise the ablated tissue of a customised ablation based on the Zernike expansion of the wavefront aberration130,131,132,133,134,135,136 Chapter 7 THESIS SYNOPSIS (Sinopsis de esta Tesis) In this thesis, we studied the physical implications of corneal ablation as well as several possibilities for improving the clinical outcomes. Both are complex phenomena, entangled with biological effects, and with many mixed (and often yet unbounded) physical causes. Our approach was to first study the major patterns of induction of aberrations, second to analyse isolated sources of aberrations compatible with the observed patterns, and finally to develop ablation profiles of increasing complexity, by adding solutions for each and any of the identified sources of aberrations. 
Besides, during this thesis different clinical evaluations of the specific solutions studied theoretically were performed on patients. The body of this thesis is structured as follows:
Part 1 provides the introductory framework for this dissertation.
Part 2 describes the common methods used throughout this thesis: methods for determination of the optical properties of the wave aberration, methods for combination of the manifest refraction into the wave aberration, methods for determination of the actual clinical relevance of the wave aberration, as well as the protocols for evaluations on subjects.
Part 3 introduces a topical review of the own works related to this thesis:
For Topic A, the corneal asphericity was analysed using corneal wavefront and topographic meridional fits, in an attempt to determine whether changes in asphericity were fully described by the induced corneal spherical aberration. The calculation of corneal asphericity as a three-dimensional fit renders more accurate results when it is based on corneal wavefront aberrations rather than on corneal topography of the principal meridians. A more accurate prediction could be obtained for hyperopic treatments compared to myopic treatments.
Topic B presents a model of an aberration-free profile, as well as a clinical evaluation of the impact of such treatments on the post-operative cornea, in an attempt to determine whether simple definitions of an ablation profile could suffice to come closer to the goal of "zero aberrations induced."
Since higher complexity might be related to larger potential hazards, Topic C describes a systematic analysis in the form of a decision tree to further optimize refractive surgery outcomes.
Topic D reports our findings with a geometrical analysis of the loss of ablation efficiency at non-normal incidence, in an attempt to determine how much of the induced spherical-like aberration in refractive surgery could be explained and avoided by the use of a simple cost-effective compensation.
The movements of the eye during refractive surgery, and particularly the clinical impact of cyclotorsional movements, are addressed in Topic E.
Evaluation of the effective optical zone (the part of the ablation that receives full correction) after refractive surgery is approached in Topic F.
A method to objectively minimise the ablated tissue of a customised ablation based on the Zernike expansion of the wavefront aberration is developed in Topic G.
Finally, the major findings of this work, and their implications, are summarized in the Conclusions.

Part 2

Chapter 8
CONTRIBUTION OF THE WAVE ABERRATION INTO THE MANIFEST REFRACTION
(Contribución de la aberración de onda sobre la refracción manifiesta)

Classical ametropias (myopia, hyperopia and astigmatism) are, as well, differences to a reference surface, and are included in the, more general, wavefront error. However, classical ametropias are usually described not in units of length, but in units of optical refractive power. It is, then, necessary to find a relationship between wavefront error magnitudes and classical ametropias137,138,139,140,141. This relationship is often called "objective wavefront refraction." In our study, the quadratic equivalent of a wave-aberration map was used as a relationship between wavefront-error magnitudes and classical ametropias. That quadratic is a sphero-cylindrical surface, which approximates the wave aberration map. The idea of approximating an arbitrary surface with a quadratic equivalent is a simple extension of the ophthalmic technique of approximating a sphero-cylindrical surface with an equivalent sphere.
Several possibilities to define this relationship can be found in the literature:
Objective wavefront refraction from low order Zernike modes at full pupil size142
Objective wavefront refraction from Seidel aberrations at full pupil size
Objective wavefront refraction from low order Zernike modes at subpupil size142
Objective wavefront refraction from Seidel aberrations at subpupil size
Objective wavefront refraction from paraxial curvature39
Objective wavefront refraction from wavefront axial refraction143

Wavefront refraction from low order Zernike modes at full pupil size
(Refracción del frente de onda a partir de los modos de Zernike de bajo orden considerados para la pupila completa)

A common way to fit an arbitrarily aberrated wavefront with a quadratic surface is to find the surface that minimizes the sum of the squared deviations between the two surfaces. The least-square fitting method is the basis of the Zernike wavefront expansion. Since the Zernike expansion employs an orthogonal set of basic functions, the least-square solution is simply given by the second-order Zernike coefficients of the aberrated wavefront, regardless of the values of the other coefficients. These second-order Zernike coefficients can be converted into a sphero-cylindrical prescription in power vector notation of the form [J0, M, J45]:

J_0 = \frac{-8\sqrt{6}\,C_2^{+2}}{PD^2} \qquad (9)

M = \frac{-16\sqrt{3}\,C_2^{0}}{PD^2} \qquad (10)

J_{45} = \frac{-8\sqrt{6}\,C_2^{-2}}{PD^2} \qquad (11)

where PD is the pupil diameter, M is the spherical equivalent, J0 the cardinal astigmatism and J45 the oblique astigmatism. The components J0, M, and J45 represent the power of a Jackson crossed cylinder with axes at 0° and 90°, the spherical equivalent power, and the power of a Jackson crossed cylinder with axes at 45° and 135°, respectively. The power-vector notation is a cross-cylinder convention that is easily transposed into conventional refractions in terms of sphere, cylinder, and axis in the minus-cylinder or plus-cylinder formats used by clinicians.
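Eqs. (9)-(11) are straightforward to implement. The following is a minimal sketch; the function name and unit conventions (Zernike coefficients in µm, pupil diameter in mm, output in dioptres) are my own assumptions, not taken from the thesis:

```python
import math

def zernike_refraction(c2_0, c2_p2, c2_m2, pd):
    """Power vector [J0, M, J45] (dioptres) from the three second-order
    Zernike coefficients (microns) and the pupil diameter PD (mm),
    following Eqs. (9)-(11)."""
    j0 = -8.0 * math.sqrt(6.0) * c2_p2 / pd ** 2    # Eq. (9)
    m = -16.0 * math.sqrt(3.0) * c2_0 / pd ** 2     # Eq. (10)
    j45 = -8.0 * math.sqrt(6.0) * c2_m2 / pd ** 2   # Eq. (11)
    return j0, m, j45

# Example: 1 µm of pure defocus C[2,0] over a 6 mm pupil
j0, m, j45 = zernike_refraction(1.0, 0.0, 0.0, 6.0)
```

For 1 µm of C[2,0] over a 6 mm pupil this yields M of about -0.77 D, the familiar magnitude of equivalent defocus at that pupil size.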
S = M - \frac{C}{2} \qquad (12)

C = 2\sqrt{J_0^2 + J_{45}^2} \qquad (13)

\alpha = \frac{1}{2}\arctan\!\left(\frac{J_{45}}{J_0}\right) \qquad (14)

Objective wavefront refraction from Seidel aberrations at full pupil size
(Refracción objetiva del frente de onda a partir de las aberraciones de Seidel consideradas para la pupila completa)

The Seidel sphere adds a value for the primary spherical aberration to improve, in theory, the fit of the wavefront to a sphere and improve the accuracy of the spherical equivalent power:

M = \frac{-16\sqrt{3}\,C_2^{0} + 48\sqrt{5}\,C_4^{0}}{PD^2} \qquad (15)

Objective wavefront refraction from low order Zernike modes at subpupil size
(Refracción objetiva del frente de onda a partir de los modos de Zernike de bajo orden para un diámetro subpupilar)

The same low-order Zernike modes can be used to calculate the refraction for any given smaller pupil size, either by refitting the raw wave-aberration data to a smaller diameter, or by mathematically performing the so-called radius transformation144 of the Zernike expansion to a smaller diameter.

Objective wavefront refraction from Seidel aberrations at subpupil size
(Refracción objetiva del frente de onda a partir de las aberraciones de Seidel para un diámetro subpupilar)

In the same way, Seidel aberrations can be used to calculate the refraction for any subpupil size.

Objective wavefront refraction from paraxial curvature
(Refracción objetiva del frente de onda a partir de la curvatura paraxial)

Curvature is the property of wavefronts that determines how they focus. Thus, another reasonable way to fit an arbitrary wavefront with a quadratic surface is to match the curvature of the two surfaces at some reference point. A variety of reference points could be selected, but the natural choice is the pupil center. Two surfaces that are tangent at a point and have the same curvature in every meridian are said to osculate. Thus, the surface we seek is the osculating quadric.
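The conversion from a power vector back to a clinical prescription, Eqs. (12)-(14), can be sketched as follows (a minimal illustration; the use of atan2 to keep the axis in the correct quadrant is my own implementation choice):

```python
import math

def power_vector_to_prescription(m, j0, j45):
    """Sphere S, cylinder C and axis alpha (degrees) from the power
    vector [J0, M, J45], following Eqs. (12)-(14)."""
    c = 2.0 * math.sqrt(j0 ** 2 + j45 ** 2)         # Eq. (13)
    s = m - c / 2.0                                 # Eq. (12)
    axis = 0.5 * math.degrees(math.atan2(j45, j0))  # Eq. (14)
    return s, c, axis

# Pure spherical equivalent: no cylinder, axis degenerate
s, c, axis = power_vector_to_prescription(-1.0, 0.0, 0.0)
```

A pure spherical equivalent returns zero cylinder; any non-zero J0 or J45 splits the prescription into sphere plus cylinder at the corresponding axis.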
Fortunately, a closed-form solution exists for the problem of deriving the power vector parameters of the osculating quadric from the Zernike coefficients of the wavefront. This solution is obtained by computing the curvature at the origin of the Zernike expansion of the Seidel formulae for defocus and astigmatism. This process effectively collects all r² terms from the various Zernike modes:

J_0 = \frac{-8\sqrt{6}\,C_2^{+2} + 24\sqrt{10}\,C_4^{+2} - 48\sqrt{14}\,C_6^{+2} + 240\sqrt{2}\,C_8^{+2} - 120\sqrt{22}\,C_{10}^{+2} + \dots}{PD^2} \qquad (16)

M = \frac{-16\sqrt{3}\,C_2^{0} + 48\sqrt{5}\,C_4^{0} - 96\sqrt{7}\,C_6^{0} + 480\,C_8^{0} - 240\sqrt{11}\,C_{10}^{0} + \dots}{PD^2} \qquad (17)

J_{45} = \frac{-8\sqrt{6}\,C_2^{-2} + 24\sqrt{10}\,C_4^{-2} - 48\sqrt{14}\,C_6^{-2} + 240\sqrt{2}\,C_8^{-2} - 120\sqrt{22}\,C_{10}^{-2} + \dots}{PD^2} \qquad (18)

Objective wavefront refraction from wavefront axial refraction
(Refracción objetiva del frente de onda a partir de la refracción axial del frente de onda)

It is also possible to represent the wavefront aberration in optical refractive power, without the need of simplifying it to a quadric surface, therefore providing a higher level of detail. A straightforward approach to the problem is to use the concept of axial refractive error (vergence maps145) (Figure 8).

Figure 8: Representation of the axial refractive error. The line of sight represents a chief ray; the wavefront aberration is zero at the pupil centre, and perpendicular to the line of sight. Each point of the wavefront propagates perpendicular to the local surface of the wavefront. The axial distance from the pupil centre to the intercept between the propagated local wavefront and the line of sight, expressed in dioptres, corresponds to the axial refractive error143.
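The truncated series of Eqs. (16)-(18) are simple weighted sums over the rotationally symmetric and astigmatic Zernike columns. A minimal sketch (the list ordering and function names are my own; coefficients in µm, PD in mm):

```python
import math

# Weights for C[2,0], C[4,0], C[6,0], C[8,0], C[10,0] in Eq. (17)
M_W = (-16 * math.sqrt(3), 48 * math.sqrt(5), -96 * math.sqrt(7),
       480.0, -240 * math.sqrt(11))
# Weights for C[n,+2] (Eq. 16) and C[n,-2] (Eq. 18), n = 2..10
J_W = (-8 * math.sqrt(6), 24 * math.sqrt(10), -48 * math.sqrt(14),
       240 * math.sqrt(2), -120 * math.sqrt(22))

def paraxial_refraction(c_n0, c_np2, c_nm2, pd):
    """Power vector [J0, M, J45] of the osculating quadric from the
    coefficient columns [C2x, C4x, C6x, C8x, C10x] (microns)."""
    j0 = sum(w * c for w, c in zip(J_W, c_np2)) / pd ** 2
    m = sum(w * c for w, c in zip(M_W, c_n0)) / pd ** 2
    j45 = sum(w * c for w, c in zip(J_W, c_nm2)) / pd ** 2
    return j0, m, j45
```

With only second-order coefficients non-zero, this reduces exactly to Eqs. (9)-(11); the higher-order terms shift the curvature-matched refraction away from the least-squares one.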
AR_x(\rho,\theta) = \frac{-1}{\rho}\,\frac{\partial W(\rho,\theta)}{\partial \rho} \qquad (19)

A schematic comparison of the different quadric methods described here for the determination of the objective wavefront refraction for a given pupil size is depicted in Figure 9.

Figure 9: Comparison of the different quadric methods described here (paraxial curvature matching, Zernike sphere, Seidel sphere) for the determination of the objective wavefront refraction for a given pupil size, plotted as wavefront OPD against the pupil coordinate (mm) for a wavefront containing defocus plus spherical aberration.

Automatic Manifest Refraction Balance
(Compensación automática de la refracción manifiesta)

These objective methods for calculating the refraction are optically correct but have some practical limitations in clinical practice39,146,147. The devices used to obtain the wavefront aberration of an eye usually work in the infrared range (IR), which is invisible to the human eye and avoids undesired miotic effects on the pupil size. The refractive indices of the different optical elements in our visual system depend on the wavelength of the illumination light. In this way, the propagated wavefront (and the corresponding wavefront aberration) ingoing to (or outcoming from) our visual system depends on the wavelength of the illumination light, leading to the so-called chromatic aberration. The different methods provide "slightly" different results; depending on how they are compared to the subjective manifest refraction, one or another correlates better with manifest refraction142. HOAb influence LOAb (refraction) when analysed for smaller diameters: for a full pupil (e.g. 6 mm) the eye sees the world through the HOAb, producing some multifocality but without defocus; for a smaller pupil (e.g. 4 mm), the optical aberration of the eye is the same but the outer ring is blocked, thereby the eye sees the world through the central part of the HOAb, which may produce some defocus or astigmatism (LOAb, refraction).
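The statement that a blocked outer ring turns pure higher-order aberration into refraction can be checked numerically: sample a wavefront containing only primary spherical aberration Z[4,0] over 6 mm, restrict the samples to a 4 mm subaperture, and least-squares fit the second-order modes there. The script below is an illustrative sketch (the sampling grid and fitting procedure are my own), using the standard Zernike normalisation:

```python
import numpy as np

def z20(rho):
    """Defocus mode, sqrt(3) * (2 rho^2 - 1)."""
    return np.sqrt(3.0) * (2.0 * rho ** 2 - 1.0)

def z40(rho):
    """Primary spherical aberration, sqrt(5) * (6 rho^4 - 6 rho^2 + 1)."""
    return np.sqrt(5.0) * (6.0 * rho ** 4 - 6.0 * rho ** 2 + 1.0)

# Wavefront with 0.3 µm of pure Z[4,0] over a 6 mm pupil
full_radius, sub_radius, c40 = 3.0, 2.0, 0.3
x = np.linspace(-full_radius, full_radius, 201)
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)
w = c40 * z40(r / full_radius)          # wavefront in µm

# Keep only the 4 mm subaperture and refit piston + defocus there
mask = r <= sub_radius
rho_sub = r[mask] / sub_radius          # renormalised radius on subaperture
basis = np.column_stack([np.ones(rho_sub.size), z20(rho_sub)])
coeffs, *_ = np.linalg.lstsq(basis, w[mask], rcond=None)
c20_sub = coeffs[1]                     # non-zero: SA "leaks" into defocus
```

For 0.3 µm of spherical aberration, the refit yields roughly -0.29 µm of defocus at 4 mm, i.e. about half a dioptre of refraction change through Eq. (10), even though the full-pupil Zernike refraction of this wavefront is zero.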
For this work, a variation of the objective wavefront refraction from low-order Zernike modes at a fixed subpupil diameter of 4 mm was chosen as the starting point to objectively include the measured subjective manifest refraction in the wave aberration (Figure 10 to Figure 13).

Figure 10: Zernike refraction of a pure Spherical Aberration (at 6 mm) is per definition 0, because Spherical Aberration is a High Order Aberration mode; when analysed for a smaller diameter (4 mm) it produces Defocus.
Figure 11: Zernike refraction of a pure High Order Astigmatism (at 6 mm) is per definition 0, because it is a High Order Aberration mode; when analysed for a smaller diameter (4 mm) it produces Astigmatism.
Figure 12: Zernike refraction of a pure Coma (at 6 mm) is per definition 0, because Coma is a High Order Aberration mode; when analysed for a smaller diameter (4 mm) it produces only tilt. Notice that coma may have a "visual effect" if the visual axis changes, producing Astigmatism.
Figure 13: Zernike refraction of a general wavefront aberration analysed at 6 mm and analysed for a smaller diameter (4 mm).

The expected optical impact of high-order aberrations on the refraction is calculated, and the input manifest refraction is modified accordingly. The same wave aberration is analysed for two different diameters: for the full wavefront area (6 mm in this study) and for a fixed subpupil diameter of 4 mm. The difference in refraction obtained for each of the two diameters corresponds to the manifest refraction associated to the high-order aberrations (Figure 14).

Figure 14: Automatic Refraction Balance. The optical impact of the HOAb on the refraction is calculated and balanced from the input refraction. Notice that the same wavefront aberration is analysed for two different diameters; the difference in the refraction provided at the two different analysis diameters corresponds to the manifest refraction provided by the high order aberrations.
The condition is to re-obtain the input manifest refraction for the subpupil diameter of 4 mm. This way, the low-order parabolic terms of the modified wave aberration for the full wavefront area can be determined.

Chapter 9
DETERMINATION OF THE ACTUAL CLINICAL RELEVANCE OF THE WAVE ABERRATION
(Determinación de la relevancia clínica de la aberración de onda)

Clinical relevance of the single terms in a Zernike expansion
(Relevancia clínica de términos individuales de Zernike)

A wavefront aberration can be expressed as a linear combination of weighted Zernike polynomials. In the previous section, several methods were described to generally represent wavefront aberrations in a simplified way as classical ametropias in units of optical refractive power. These methods considered the wavefront aberration as a whole and determined by different means the classically equivalent optical refractive power in terms of sphere, cylinder and axis as "objective wavefront refraction." However, due to the nature of the Zernike polynomials, these relationships were always functions of the terms C[n,0] for the defocus, functions of the terms C[n,+2] for the cardinal astigmatism, and functions of the terms C[n,-2] for the oblique astigmatism. It would be as well of interest to get a sort of classification of the clinical relevance of single aberration terms. The first inconvenience, as mentioned before, is that the wavefront error is expressed in units of length, whereas clinical refraction is expressed in units of optical refractive power. A simple approach for the classification of the clinical relevance of single aberration terms was proposed by Thibos et al.,149 by introducing the concept of equivalent defocus (DEQ) as a metric to minimise the differences in the Zernike coefficients due to different pupil sizes.
Equivalent defocus is defined as the amount of defocus required to produce the same wavefront variance as found in one or more higher-order aberrations. A simple formula allows us to compute the equivalent defocus in dioptres if we know the total wavefront variance in the Zernike modes in question:

M_e = \frac{16\sqrt{3}\,RMS}{PD^2} \qquad (20)

One could apply this concept of equivalent defocus to each individual Zernike mode to compute its clinical relevance. Of course, we must keep in mind that the kind of optical blur produced by higher-order aberrations is not the same as the blur produced by defocus. Nevertheless, this concept of equivalent defocus is helpful when it comes to interpreting the Zernike coefficients in familiar dioptric terms. The basis of the equivalent defocus concept is the notion that the imaging quality of an eye is determined primarily by wavefront variance, and that it does not matter which Zernike mode produces that variance. It is important to bear in mind that 1 dioptre of ordinary defocus does not necessarily have the same effect as 1 dioptre of equivalent defocus, because different types of aberrations affect the retinal image in different ways (Figure 15). Nevertheless, by expressing the RMS error in terms of equivalent defocus the data are put into familiar units that help us judge the order of magnitude of the effect.

Figure 15: Zernike pyramid showing the effects on vision produced by 1 dioptre of equivalent defocus of different Zernike terms up to 7th order.

Wavefront aberration leads to blurring of the image. When it occurs, light from infinity or from a point source after transmission through the system does not converge into (or does not diverge from) a single point. The shape of the blurred spot at the retina is called the point-spread function (PSF). The PSF is the image that an optical system forms of a point source. Even if no wave aberration exists for the system, the PSF is still affected by diffraction effects coming from the size of the pupil (aperture)150.
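Eq. (20) in code form, a minimal sketch (function name and unit conventions, µm and mm, are my own assumptions):

```python
import math

def equivalent_defocus(rms_um, pd_mm):
    """Equivalent defocus Me (dioptres) producing the same wavefront
    variance as the given RMS (microns) at pupil diameter PD (mm),
    following Eq. (20)."""
    return 16.0 * math.sqrt(3.0) * rms_um / pd_mm ** 2

# 0.5 µm of higher-order RMS at a 6 mm pupil
me = equivalent_defocus(0.5, 6.0)
```

0.5 µm of RMS at 6 mm corresponds to about 0.38 D of equivalent defocus; for a single defocus coefficient (RMS = |C[2,0]|) the formula agrees with the magnitude of M in Eq. (10).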
From the PSF, the blur of an image can be simulated. This can be done because an object can be considered as composed of an infinite array of point sources, each with its respective intensity, position, and colour. Giving each of these points the shape of the PSF, the blurred image can be simulated151. In general, for the same amount of equivalent defocus, the optical blur produced by higher-order aberrations increases with increasing radial order and decreases with increasing angular frequency. Based on this blur effect of the individual Zernike terms we have defined a dioptric equivalent (DEq) of the form:

DEq_n^m = \frac{2\sqrt{(n+1)(1+\delta_{m0})}}{\delta_{m0}+\left|m\right|}\,\frac{C_n^m}{PD^2} \qquad (21)

where DEq[n,m] is the optical blur for the individual Zernike term, n is the radial order of the Zernike term, m the meridional frequency of the Zernike term, δm0 a delta function, PD the analysis diameter, and C[n,m] the weight coefficient of the Zernike term. In such a way, the dioptric equivalent produced by higher order aberrations increases with increasing radial order and decreases with increasing angular frequency (Table 1).

Table 1: Relative optical blur of the Zernike polynomials up to 7th order (columns: n-order, m-frequency, Relative Optical Blur (Defocus = 1)).

Global clinical relevance of the Zernike expansion
(Relevancia clínica global de la expansión de Zernike)

This dioptric equivalent metric is identical to the power vector notation for the low orders, and makes it possible to define a general optical blur of the form:

U_G = \sum_{n,m} DEq_n^m \qquad (22)

as a generalization of the expression proposed by Thibos et al.152 Wave aberrations with similar general optical blur will show similar global optical performance, irrespective of which specific Zernike modes are responsible for this optical blur.
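Given per-term dioptric equivalents, the global blur of Eq. (22) and the traffic-light thresholds used for the clinical classification (Eqs. (23)-(25)) can be sketched as follows (function names are my own; the per-term DEq values are taken as inputs):

```python
def global_blur(deqs):
    """General optical blur UG as the sum of the per-term dioptric
    equivalents, Eq. (22)."""
    return sum(deqs)

def classify(deq):
    """Clinical relevance of a single term per Eqs. (23)-(25)."""
    if deq <= 0.25:
        return "not clinically relevant"       # marked green
    if deq <= 0.50:
        return "might be clinically relevant"  # marked yellow
    return "clinically relevant"               # marked red
```

Two wavefronts with the same UG are expected to show similar global optical performance even when the blur is distributed over different modes.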
Classification of the clinical relevance
(Clasificación de la relevancia clínica)

We have expressed each of the Zernike terms as a dioptric equivalent in familiar units that help us judge the order of magnitude of the effect. Using common clinical limits, the following classification is proposed:

DEq_n^m \leq 0.25\ \mathrm{D} \Rightarrow \text{Not clinically relevant} \qquad (23)

0.25\ \mathrm{D} < DEq_n^m \leq 0.50\ \mathrm{D} \Rightarrow \text{Might be clinically relevant} \qquad (24)

DEq_n^m > 0.50\ \mathrm{D} \Rightarrow \text{Clinically relevant} \qquad (25)

This represents the proposed objective determination of the actual clinical relevance of the single terms in a Zernike expansion of the wavefront aberration. According to this classification, Zernike terms considered not clinically relevant (DEq≤0.25D) will be marked in green, Zernike terms that might be considered clinically relevant (0.25D<DEq≤0.50D) will be marked in yellow, and Zernike terms considered clinically relevant (DEq>0.50D) will be marked in red (Figure 16 and Figure 17).

Figure 16: List of Zernike coefficients classified and colour coded by their dioptric equivalent relative to optical blur (DEq).
Figure 17: Zernike pyramid classified and colour coded by the dioptric equivalent relative to optical blur (DEq) of the single Zernike terms.

Chapter 10
PROTOCOLS FOR MEASUREMENTS ON SUBJECTS
(Protocolos de las medidas realizadas en sujetos)

The procedures involving subjects were reviewed and approved by the Institutional Bioethical Committees of SCHWIND eye-tech-solutions and of the clinics where measurements or procedures were performed, and met the tenets of the Declaration of Helsinki. All patients were fully informed, understood, and signed an informed consent before enrolment in any of the studies. All refractive procedures were planned and ablated with the available systems of SCHWIND eye-tech-solutions.
Using the Keratron-Scout (Figure 18) videokeratoscope153 (Optikon2000, Rome, Italy), the corneal wavefront aberration before and after refractive surgery was analysed. Using the Complete Ophthalmic Analysis System (COAS, Figure 18) aberrometer (Wavefront Sciences, Albuquerque, USA), the ocular wavefront aberrations before and after refractive surgery were analysed.

Figure 18: The SCHWIND Combi Workstation for comprehensive corneal and ocular analysis.

LASIK flaps were created with a superior hinge using a Carriazo-Pendular microkeratome154 (SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany) (Figure 19).

Figure 19: The SCHWIND Carriazo-Pendular Console and Handpiece.

The CAM software was used to plan the ablations, which were first simulated, then ablated onto flat PMMA plates155, and finally applied onto the patients' corneas with either an AMARIS excimer laser156 (Figure 20) or an ESIRIS excimer laser157 (Figure 21) (both SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany), both delivering aspheric wavefront-customised profiles. The aspheric profiles go beyond the profiles proposed by Munnerlyn, and add some aspheric characteristics to balance the induction of spherical aberration (prolateness optimisation). This particular class of aspheric profiles compensates for the aberration induction observed with other types of profile definitions158; some of these sources of aberrations are related to the loss of efficiency of the laser ablation at non-normal incidence159,160,161,162. The optimisation consisted of taking into account the loss of efficiency at the periphery of the cornea in relation to the centre, as there is a tangential effect of the spot in relation to the curvature of the cornea (keratometry (K-reading)). The software provides K-reading compensation, which considers the change in spot geometry and the reflection losses of ablation efficiency. The real ablative spot shape (volume) is considered through a self-constructing algorithm.
In addition, the system uses a randomised flying-spot ablation pattern and controls the local repetition rates to minimise the thermal load of the treatment (smooth ablation, no risk of thermal damage163). Therefore, the ablated surface after aspheric wavefront-customised profiles is very smooth, so that there are some benefits in terms of higher order aberrations.

Figure 20: The SCHWIND AMARIS Total-Tech Laser.

The AMARIS laser system works at a repetition rate of 500 Hz and produces a spot size of 0.54 mm (full width at half maximum (FWHM)) with a super-Gaussian ablative spot profile164,165. High-speed eye-tracking166,167 with a 1050 Hz acquisition rate is accomplished with a 3-ms latency period168.

Figure 21: The SCHWIND ESIRIS excimer laser system.

The ESIRIS laser system works at a repetition rate of 200 Hz and produces a spot size of 0.8 mm (full width at half maximum (FWHM)) with a para-Gaussian ablative spot profile. High-speed eye-tracking with a 330 Hz acquisition rate is accomplished with a 5-ms latency period.

Part 3
(Revisión temática)

Topic A
(Análisis de la asfericidad corneal)

Study concept and design (S.A.M.); data collection (D.O.); analysis and interpretation of data (S.A.M.); drafting (D.O., S.A.M.); critical revision (T.M., J.M.); statistical expertise (S.A.M.).

Section A.1
ABSTRACT

Evaluation of a method to calculate corneal asphericity and asphericity changes after refractive surgery. 60 eyes of 15 consecutive myopic patients and 15 consecutive hyperopic patients (n=30 each) were retrospectively evaluated. Preoperative and 3-month postoperative topographic and corneal wavefront analyses were performed using corneal topography. Ablations were performed using a laser with an aberration-free profile. Topographic changes in asphericity and corneal aberrations were evaluated for a 6 mm corneal diameter. The induction of corneal spherical aberrations and asphericity changes correlated with the achieved defocus correction.
Preoperatively as well as postoperatively, the asphericity calculated from the topography meridians correlated with the asphericity calculated from the corneal wavefront in myopic and hyperopic treatments. A stronger correlation between the postoperative asphericity and the ideally expected/predicted asphericity could be obtained based on aberration-free assumptions calculated from corneal wavefront values rather than from the meridians. In hyperopic treatments, a better correlation could be obtained compared to the correlation in myopic treatments. Corneal asphericity calculated from corneal wavefront aberrations represents a three-dimensional fit of the corneal surface; asphericity calculated from the main topographic meridians represents a two-dimensional fit of the principal corneal meridians. Postoperative corneal asphericity can be calculated from corneal wavefront aberrations with higher fidelity than from corneal topography of the principal meridians. Hyperopic treatments showed a greater accuracy than myopic treatments.

Section A.2
INTRODUCTION

There is a strong tendency towards the use of asphericity parameters in refractive surgery, using different descriptors (asphericity quotient [Q], conic constant [K], eccentricity [e], p-value [p], or shape-factor [E]) or measuring the effects of refractive treatments on corneal asphericity176,177. The analysis of corneal topography involves fitting of the measured data to geometric models, usually by inclusion of a simple regular surface and a polynomial adjustment of the extra components not covered by the simple regular surface basis. In this study, two simple methods to calculate corneal asphericity - based on corneal wavefront, and based on the asphericity of the two principal meridians - are compared, and the question whether the corneal wavefront alone is a useful metric to evaluate the corneal asphericity in refractive surgery is addressed.
For this study, the methods presented were applied to a patient population treated with laser in situ keratomileusis (LASIK).

Section A.3
METHODS

A retrospective analysis of 60 eyes, including 15 consecutive patients each with myopia and hyperopia, treated at Augenzentrum-Recklinghausen, was performed. Preoperative and 3-month postoperative data are reported. All operations were performed by one surgeon (DdO). LASIK flaps were created with a Carriazo-Pendular microkeratome (SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany). An ESIRIS system (SCHWIND eye-tech-solutions GmbH) set for an optical zone of 6.25 mm was used to perform the ablations with aberration-freeTM profiles without nomogram adjustments. Using the Keratron-Scout (Optikon2000, Rome, Italy), topographical analysis of the radii of curvature and asphericities of the principal meridians and of the corneal wavefront aberrations to the seventh Zernike order was performed preoperatively and 3 months postoperatively. Classical relationships between the different asphericity descriptors178 were calculated using the formulae:

p \equiv Q + 1 \equiv 1 - E \equiv 1 - e^2
Q \equiv p - 1 \equiv -E \equiv -e^2
E \equiv 1 - p \equiv -Q \equiv e^2
e \equiv \sqrt{1 - p} \equiv \sqrt{-Q} \equiv \sqrt{SF}

where

p < 0 \Rightarrow \text{Hyperbola}
p = 0 \Rightarrow \text{Parabola}
0 < p < 1 \Rightarrow \text{Prolate ellipse}
p = 1 \Rightarrow \text{Sphere}
p > 1 \Rightarrow \text{Oblate ellipse}

However, asphericity is a dependent parameter with "non-linear" behaviour, i.e. it has no meaning if the apical curvature is not taken into consideration. Any asphericity descriptor can be used; however, to obtain consistent results and interpretations, computing cannot be reduced to linear arithmetic. The p-value (p) was the asphericity descriptor used throughout the study. Topographic asphericity was computed using two methods. The first method was the topographic method based on the principal meridians.
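The descriptor relationships and the conic classification above can be sketched as follows (a minimal illustration, valid for p ≤ 1 so that the eccentricity stays real; function names are mine):

```python
import math

def descriptors_from_p(p):
    """Q, E (shape factor) and e (eccentricity) from the p-value."""
    q = p - 1.0
    e_shape = 1.0 - p
    ecc = math.sqrt(1.0 - p) if p <= 1.0 else float("nan")
    return q, e_shape, ecc

def conic_type(p):
    """Classify the conic section described by the p-value."""
    if p < 0.0:
        return "hyperbola"
    if p == 0.0:
        return "parabola"
    if p < 1.0:
        return "prolate ellipse"
    if p == 1.0:
        return "sphere"
    return "oblate ellipse"
```

For a typical prolate cornea with p = 0.75, this gives Q = -0.25, E = 0.25 and e = 0.5.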
Considering the mean corneal asphericity of a series of corneal asphericities, the mean asphericity was computed as:

\bar{p} = \frac{\sum_{i=1}^{m} p_i / R_i^3}{\sum_{i=1}^{m} 1 / R_i^3}

where p̄ is the mean asphericity, pi the asphericity factors, Ri the apical radii of curvature, and m the sample size. For averaging the asphericity of the two main meridians under consideration of their curvature, the equation reduces to:

p = \frac{p_s / R_s^3 + p_f / R_f^3}{1 / R_s^3 + 1 / R_f^3}

where p is the corneal p-value, ps and pf are the p-values of the steep and flat principal meridians, respectively, and Rs and Rf the apical radii of curvature of the steep and flat principal meridians. This method represents a calculation of the mean asphericity derived from m meridional radii and asphericities obtained from two-dimensional fits of the corneal meridians. The second method investigated was the corneal wavefront method:

p = \frac{768\,R^3 \left( \sqrt{5}\,C_4^0 - 5\sqrt{7}\,C_6^0 + 45\,C_8^0 \right)}{OZ^4 (1 - n)}

where p is the corneal p-value; C[4,0], C[6,0], and C[8,0] are the radially-symmetric terms of the corneal Zernike expansion; R is the apical radius of the corneal curvature; n the corneal refractive index; and OZ the analyzed diameter of the corneal Zernike expansion. This method represents a calculation of the mean asphericity derived from corneal wavefront data obtained from a three-dimensional fit of the corneal surface. The radially-symmetric terms of the corneal Zernike expansion, C[4,0], C[6,0], and C[8,0], were calculated from the radially-symmetric terms of the corneal Zernike expansion of the surface elevation of a Cartesian oval (Cco[4,0], Cco[6,0], and Cco[8,0]) plus the radially-symmetric terms of the corneal wavefront aberration as provided by the videokeratoscope (Ccw[4,0] and Ccw[6,0]).
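The curvature-weighted averaging of the meridional method can be sketched as follows (a minimal illustration; the two-meridian case is just the m = 2 instance of the general weighted mean, and the example radii are hypothetical):

```python
def mean_asphericity(p_values, radii):
    """R^-3-weighted mean p-value over m meridians (meridional method).

    p_values: asphericity p-values of the meridians
    radii:    corresponding apical radii of curvature (mm)
    """
    num = sum(p / r ** 3 for p, r in zip(p_values, radii))
    den = sum(1.0 / r ** 3 for r in radii)
    return num / den

# Steep and flat principal meridians of an example cornea
p_mean = mean_asphericity([0.75, 0.85], [7.6, 7.9])
```

Because of the R⁻³ weighting, the steeper (smaller-radius) meridian contributes slightly more, so the mean lies a little below the arithmetic average of the two p-values here.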
Also, the ideally expected topographic asphericity assumed from aberration-neutral conditions was calculated using two methods. The first was the ideally expected principal meridians of the topographic method:

p_{exp} = p_{co} + \frac{p_{pre} - p_{co}}{\left( 1 + \dfrac{R \cdot SEq_{cp}}{n - 1} \right)^{3}}

where pexp is the predicted corneal p-value; pco and ppre are the p-values of a Cartesian oval and the preoperative cornea, respectively; R is the apical radius of curvature of the preoperative cornea; and SEqcp the spherical equivalent to be corrected at the corneal plane. In this paper, the term "ideally expected" is understood to mean "predicted values, if the aberration-free condition were strictly fulfilled." The second method employed was the ideally expected corneal wavefront method, with R as the postoperatively predicted apical radius of curvature. Note that the ideally expected corneal wavefront method can easily be further applied to any target condition, simply by setting the radially-symmetric terms of the corneal wavefront aberration (Ccw[4,0] and Ccw[6,0]) to the desired values. Optical errors, represented by wavefront aberrations, were analyzed for 6 mm diameters.

Clinical evaluation
(Evaluación clínica)

Each cornea underwent 4 consecutive measurements preoperatively as well as at the 3-month follow-up examination, summing up to a total of 240 measurements. For every cornea, the 4 corresponding topographies were analyzed using both methods, and the corresponding mean value was used as the representative asphericity of that cornea with each method.

Repeatability of the methods
(Repetibilidad de los métodos)

Following preoperative calculation of the p-values with both methods, a global analysis of the behaviour of the term p·R⁻³ was performed, as it constitutes a term that can be operated on in a simple linear way.
The four corresponding values of each cornea were averaged for both methods, and a global standard deviation value was calculated across the 240 measurements for each method using:

SD = \sqrt{ \frac{ \sum_{a=1}^{A} \sum_{b=1}^{B} \left( p_{a,b} \cdot R_{a,b}^{-3} - \frac{1}{B} \sum_{c=1}^{B} p_{a,c} \cdot R_{a,c}^{-3} \right)^{2} }{ A \cdot (B-1) } }

where a runs over the number of corneas of the sample (A=60), and b and c run over the number of corresponding measurements for each cornea (B=4).

Statistical analysis
(Análisis estadístico)

t-tests were used for statistical analysis, with P<.05 being considered statistically significant.

Section A.4
RESULTS

Refractive outcomes
(Resultados refractivos)

In both myopic and hyperopic eyes, the spherical equivalent (SEq) and cylinder were reduced to subclinical values at 3 months postoperatively (range -0.50 to +0.75 D for defocus and 0.00 to 0.75 D for astigmatism), and 95% of eyes (n=57) were within ±0.50 D of the attempted correction (Table 2 and Figure 22).

Table 2: Preoperative and postoperative data (No. of treated eyes (patients): myopic group 30 (15), hyperopic group 30 (15), all treatments 60 (30); preoperative and postoperative SEq and cylinder, and predictability within 0.50 D and 1.00 D).

Figure 22: Predictability scattergram.

Corneal spherical aberrations
(Aberración esférica corneal)

In the myopic group, the preoperative primary corneal spherical aberration (C[4,0]) was +0.243±0.098 µm (mean ± standard deviation), and changed to +0.319±0.132 µm at 3 months postoperatively (P < .01). In the hyperopic group, C[4,0] was +0.201±0.118 µm and changed to -0.006±0.139 µm at 3 months postoperatively (P < .001) (Table 3).
Table 3: Corneal wavefront aberration data reported for 6 mm analysis diameter.

                                            Myopic group     Hyperopic group   All treatments
Preoperative primary SphAb±StdDev (µm)      +0.243 ± 0.098   +0.201 ± 0.118    +0.221 ± 0.111
Preoperative secondary SphAb±StdDev (µm)     0.000 ± 0.003    0.000 ± 0.002     0.000 ± 0.002
Postoperative primary SphAb±StdDev (µm)     +0.319 ± 0.132   -0.006 ± 0.139    +0.154 ± 0.214
Postoperative secondary SphAb±StdDev (µm)   +0.003 ± 0.003   -0.004 ± 0.004    -0.0001 ± 0.005
Induced primary SphAb per diopter (µm)
Induced secondary SphAb per diopter (µm)

Induced corneal spherical aberration, defined as the postoperative corneal spherical aberration minus the preoperative value, was significant for primary and secondary spherical aberrations (P < .001 for both) and significantly correlated with the achieved defocus correction for primary and secondary spherical aberrations (r2=0.65, P < .001 for primary spherical aberration and r2=0.59, P < .001 for secondary spherical aberration, Figure 23). The rates of induced corneal spherical aberration per diopter of defocus (regression slope) were -0.045 µm/D for primary spherical aberration and -0.001 µm/D for secondary spherical aberration at 6 mm.

Figure 23: Induced spherical aberration.

Corneal asphericity (Asfericidad corneal)

In the myopic group, the mean preoperative corneal asphericity calculated from the principal meridians was +0.79, whereas the mean corneal asphericity calculated from corneal wavefront was +0.89. In the hyperopic group, the mean preoperative corneal asphericity calculated from the principal meridians was +0.81, whereas the mean corneal asphericity calculated from corneal wavefront was +0.82 (Table 4).

Table 4: Asphericity data.

                                                    Myopic group   Hyperopic group   All treatments
Preoperative p-value from meridians                 +0.79          +0.81
Preoperative p-value from corneal wavefront         +0.89          +0.82
Postoperative p-value from meridians                +1.24          +0.39
Postoperative p-value from corneal wavefront        +1.13          +0.47
Expected/predicted p-value from meridians           +0.87          +0.76             +0.81
Expected/predicted p-value from corneal wavefront   +0.98          +0.75             +0.85
The preoperative corneal asphericity calculated from corneal wavefront significantly correlated with corneal asphericity calculated from the principal meridians in both the myopic and the hyperopic group (r2=0.84, P < .001 for the myopic group; r2=0.87, P < .001 for the hyperopic group, Figure 24). Further, the regression slope was 1.01 for the myopic group and 1.09 for the hyperopic group. Figure 24: Preoperative asphericity. In the myopic group, the mean postoperative corneal asphericity calculated from the principal meridians was +1.24, whereas the mean corneal asphericity calculated from corneal wavefront was +1.13 (Table 4). In the hyperopic group, the mean postoperative corneal asphericity calculated from the principal meridians was +0.39, whereas the mean corneal asphericity calculated from corneal wavefront was +0.47 (Table 4). Postoperatively, the corneal asphericity calculated from corneal wavefront values significantly correlated with corneal asphericity calculated from principal meridians in both the myopic and the hyperopic group (r2=0.81, P < .001 for the myopic group; r2=0.85, P < .001 for the hyperopic group, Figure 25). Further, the regression slope was 0.51 for the myopic group and 0.88 for the hyperopic group. Figure 25: Postoperative asphericity. Corneal Asphericity Changes (Cambios en la asfericidad corneal) For myopia, the ideally expected postoperative p-value calculated from the principal meridians was +0.87, compared to +0.98 in wavefront based calculation (Table 4). The postoperative asphericity did not correlate with the predicted asphericity when calculated from meridians (r2=0.07, P = .2), and showed a weak but significant correlation with the ideally expected asphericity when calculated from wavefront (r2=0.12, P = .05). Further, the regression slope was +0.68 in corneal wavefront based calculation. 
For hyperopia, the predicted postoperative asphericity calculated from the principal meridians was +0.76, compared to +0.75 in wavefront-based calculation (Table 4). The postoperative asphericity was significantly correlated with the ideally expected asphericity when calculated from meridians (r2=0.39, P < .001), and strongly correlated with the predicted asphericity when calculated from wavefront (r2=0.51, P < .001). Further, the regression slope was +0.67 when calculated from principal meridians and +0.71 when calculated from corneal wavefront.

Combining the results of both groups, the ideally expected postoperative asphericity calculated from the principal meridians was +0.81 and that calculated from corneal wavefront +0.85. The postoperative asphericity was significantly but weakly correlated with the predicted asphericity when calculated from the principal meridians (r2=0.17, P < .05), and showed a strong correlation with the ideally expected corneal asphericity when calculated from corneal wavefront (r2=0.37, P < .001) (Figure 26). Further, the regression slope was +1.44 in principal-meridians-based calculation and +1.19 in corneal-wavefront-based calculation.

Figure 26: Ideally expected postoperative asphericity.

Repeatability of the Corneal Asphericity (Repetibilidad de las determinaciones de asfericidad corneal)

The global standard deviation was 0.0003 mm⁻³ for the meridional method, compared to 0.0001 mm⁻³ for the corneal wavefront method (P < .05).

Section A.5 DISCUSSION

The p-value was the asphericity descriptor used throughout this study. The reason for this choice was not a preference of the p-value over other asphericity descriptors. In fact, using the identities and equalities described, similar equations could have been derived for any asphericity descriptor.
Our aim was the consistent use of one descriptor and to use the classical relationships between descriptors to derive descriptor-specific equations for computing mean values, asphericity out of the corneal wavefront, or an estimation of the postoperative asphericity, respectively. Note that, using simple arithmetic, the average of a parabola (p=0) with an apical radius of curvature of 7 mm and a sphere (p=1) with a radius of curvature of 8 mm would be p=0.5 (i.e. e=0.71). For the same surfaces, however, averaging a parabola (e=1) and a sphere (e=0) would give e=0.5 (i.e. p=0.75) and not e=0.71. Using our model, the result would always be p=0.41 or e=0.77. In particular, the corneal wavefront method benefits from the avoidance of complicated non-linear effects in the analysis. Once the Zernike expansion of the corneal wavefront aberration is known, corresponding coefficients can be linearly averaged, added, or subtracted, or any other linear operation can be performed, and finally the asphericity value can be computed in the desired descriptor.

By analyzing topographic changes, a highly significant correlation between the asphericity calculated from corneal wavefront and from the principal meridians could be observed in both the myopic and the hyperopic group, preoperatively as well as postoperatively. To assess the agreement between the methods, a Bland-Altman plot was created179, which showed that asphericity calculation with the two methods does not produce equivalent results. Corneal wavefront based calculation showed asphericity with an average of 0.05 units higher compared to calculation based on the principal meridians. Moreover, the difference between the two methods correlated weakly but significantly with the measured value (r2=0.11; P < .05, Figure 27).

Figure 27: Bland-Altman plot for p-value calculated from meridians vs. p-value calculated from corneal wavefront.
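The parabola/sphere averaging example can be reproduced numerically. The sketch below assumes that "our model" averages the wavefront-linear quantities p⋅R⁻³ and 1/R before converting back to a p-value, an interpretation consistent with the repeatability analysis of the term p⋅R⁻³:

```python
import math

# Two surfaces from the example: a parabola (p=0, R=7 mm) and a sphere (p=1, R=8 mm)
surfaces = [(0.0, 7.0), (1.0, 8.0)]

# Naive averages of the descriptors themselves
p_naive = sum(p for p, _ in surfaces) / 2                    # 0.5
e_naive = sum(math.sqrt(1 - p) for p, _ in surfaces) / 2     # 0.5, i.e. p = 0.75

# Wavefront-style average: operate linearly on p*R^-3 and on curvature 1/R
mean_pR3 = sum(p / R**3 for p, R in surfaces) / 2
mean_curv = sum(1 / R for _, R in surfaces) / 2
R_avg = 1 / mean_curv
p_model = mean_pR3 * R_avg**3
e_model = math.sqrt(1 - p_model)

print(round(p_naive, 2), round(1 - e_naive**2, 2), round(p_model, 2), round(e_model, 2))
# -> 0.5 0.75 0.41 0.77
```

The last two printed values match the p=0.41 / e=0.77 quoted in the text, illustrating how the non-linear relations between descriptors make naive averaging inconsistent while the wavefront-linear route gives a single answer.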
The wavefront method proved to be superior to the meridional method, since the aberration coefficients were computed from much denser data sampling (all corneal points within a disk with a 6 mm diameter), and not only from two meridians. However, the conclusion that, if many meridians were included in the "meridional" method, the results would approach those of the "wavefront" method is misleading. Another weakness of the "two meridians method" is that both meridians are usually selected based upon their respective curvature, i.e. the main origin of corneal astigmatism. These two meridians closely represent the highest and lowest meridional curvature of a cornea, but their corresponding asphericities do not necessarily represent the highest and lowest meridional asphericities of that cornea.

In the groups of this study, the postoperative asphericity deviated more than predicted from aberration-neutral assumptions, calculated from the principal meridians as well as from corneal wavefront. Also, the postoperative asphericity showed a stronger correlation with the asphericity predicted from aberration-neutral assumptions when calculated from corneal wavefront than from the meridians. The preoperative mean corneal asphericity in myopic eyes calculated with the two methods showed a similar result, which, however, was not as consistent as the result found in the hyperopic group. Both the amount of corneal astigmatism, which was larger in the hyperopic group, and the offset between the corneal vertex and the pupil centre, which was also larger in the hyperopic group, may play a role here. Note that the Zernike decomposition only predicted 37% of the variance of the asphericity change, i.e. there is high scatter and there is a tendency towards higher asphericity, which is also reflected by the induction of spherical aberration. A possible cause of measured differences in induced asphericity between calculated and real postoperative corneas could be the fact that changes in radius and changes in asphericity were analyzed separately.
This is only strictly valid if both parameters are independent; however, there is a very strong correlation between changes in asphericity and changes in radius. This correlation may have two origins: (1) artefacts of the measurement or the fitting procedure, or (2) a real correlation between changes of radius and asphericity in the cornea, possibly due to biomechanical constraints. Similar to Pérez-Escudero et al.180 and to the findings of a paper presented earlier by the authors89, a topography describing a perfect rotationally symmetric ellipsoid with radius R = 7.87 mm and asphericity p = 0.75, which are typical values for the anterior corneal surface, was created. Subsequently, random noise was added to the elevation. Normally distributed random noise with a standard deviation of 3 µm was employed, which is of the same order of magnitude as observed in measurements with the Scout videokeratoscope. This results in a data set similar to the experimental data sets, however, without the particularities that may be specific to our setup. 100 such surfaces were created, using the same base ellipsoid and changing only the noise. Subsequently, each surface was fitted. The results show that the parameters of the base ellipsoid are well recovered by the mean, but that there is a strong correlation between changes in R and changes in p. The same applies to correlations between changes in 1/R and changes in p⋅R⁻³. These correlations are not particular to our specific fitting procedure; rather, they are a general characteristic of fits to surfaces that derive from ellipses. These correlations are an artefact caused by the fit's sensitivity to measurement noise and are probably common to all fits of ellipse-based surfaces. Both the biomechanical response of the stroma and wound healing could contribute to this phenomenon, as well.
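The noise experiment can be sketched as follows. For simplicity, this version fits the conicoid through its linearized sag z ≈ r²/(2R) + p⋅r⁴/(8R³), a linear least-squares surrogate for the full conic fit (an assumption of the sketch), which already reproduces the R–p coupling:

```python
import numpy as np

rng = np.random.default_rng(1)
R_true, p_true, sigma = 7.87, 0.75, 0.003     # mm; noise SD 3 um, as in the text

r = np.linspace(0.0, 3.0, 200)                # radial samples over a 6 mm zone
sag = r**2 / (R_true + np.sqrt(R_true**2 - p_true * r**2))
X = np.column_stack([r**2, r**4])             # z ~ a*r^2 + b*r^4, a=1/(2R), b=p/(8R^3)

R_fit, p_fit = [], []
for _ in range(100):                          # 100 noisy surfaces, only the noise changes
    z = sag + rng.normal(0.0, sigma, r.size)
    a, b = np.linalg.lstsq(X, z, rcond=None)[0]
    R_hat = 1.0 / (2.0 * a)
    R_fit.append(R_hat)
    p_fit.append(8.0 * R_hat**3 * b)

R_fit, p_fit = np.array(R_fit), np.array(p_fit)
corr = np.corrcoef(R_fit, p_fit)[0, 1]        # strong positive R-p coupling
```

The means recover the base ellipsoid well, while the fitted R and p are strongly correlated with each other purely through the noise, as the text describes.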
Navarro et al.181 proposed a relatively simple general model to represent the corneal surface in its canonical form with respect to the axes of corneal astigmatism. One limitation of Navarro's model is that it assumes that the orientations of the principal curvatures, i.e. the steepest and flattest radii, related to corneal toricity, correspond to the orientations of the principal asphericities. Kiely et al.171 investigated this problem in 1982, using a model more general than an ellipsoid, which was oriented according to the instrument axes. The mean asphericity is a convenient parameter for the comparison of different eyes and the characterization of the spherical aberration of a conicoid, but it cannot be a substitute for corneal topography. There are circumstances where knowledge of the asphericity in the two principal meridians might be more useful for vision correction than the mean asphericity. However, as already mentioned, the asphericity of the two principal meridians might not be the minimum and maximum meridional asphericity for that cornea. In this respect, Navarro's corneal model presents a good basis for corneal topography, representing a realistic anatomic situation and employing additional terms of the Zernike expansion to describe the extra surface deformation of real corneas. Additional Zernike terms would resolve the issue of the strongest asphericity not lying along the principal meridians. On the other hand, a quadratic surface basis for the corneal surface will only provide an aberration-free basis with the instrument on axis and will not be as realistic as Navarro's ellipsoid. As a consequence, the quadratic surface will require larger additional Zernike terms to represent the real corneal surface. Corneal description should not be limited to the mean asphericity, related to spherical aberration, when corneal topography in Zernike terms gives much more general information on corneal aberrations.
However, if a simple corneal model based on asphericity is of interest for reasons of simplicity, we advocate calculation of the mean asphericity from the corneal wavefront rather than from the asphericity of the two principal meridians. This simplification is less complicated but essentially similar to the reduction of the wavefront aberration map to a generic description based on n weight coefficients of the Zernike expansion. This approach is no attempt to discredit the full details of corneal topography or the optical description provided by Zernike polynomials. Rather, the aim is to reduce the complexity of the description to an appropriate minimal set of parameters. In particular cases, spherical aberration could be described by way of comparison of the Zernike terms with radial symmetry, such as C[4,0] and C[6,0], or, to be more accurate, by the contribution from the power terms with pure ρ⁴ and ρ⁶ in the corneal topography expansion (ρ being the normalized pupil radius). In this way, a higher-order aspheric surface could be characterized, rather than limiting the analysis to the mean asphericity, which corresponds to a conicoid surface and which in some cases is a poor approximation for high-order aspheric corneas. Another possible model, which is also direct and simple and combines the advantages of different other models, is that of a quadric surface free in space, i.e. oriented according to the natural corneal axes, however with a fixed constant asphericity corresponding to the Cartesian oval for the corneal refractive index (p-value of +0.472 with a corneal refractive index of 1.376), without astigmatism, to determine the apical curvature and the corneal axis. The modelled surface would always be a surface free of on-axis aberrations for any particular apical curvature. The residual component would be fitted to a Zernike polynomial expansion, as it would directly represent the surface aberration of the corneal wavefront.
Section A.6 CONCLUSIONS

This study suggests that the corneal wavefront alone is a useful metric to evaluate both the optical quality of an ablation in refractive surgery and corneal asphericity. The corneal wavefront can be used effectively to analyze laser refractive surgery, avoiding complicated non-linear effects in the analysis. On these grounds, this method has the potential to replace or perhaps supplement currently used methods of asphericity analysis based on simple averaging of asphericity values.

Section A.7 OUTLOOK

In this study we have used the corneal wave aberration as a basis for the determination of corneal asphericity. However, as the OSA recommends, the corneal wave aberration was referenced to the line of sight. Thus, larger offsets between the pupil centre and the corneal vertex may have negatively affected the power of the correlations. In further studies, we will include the offsets between pupil centre and corneal vertex to improve the accuracy of the method. This chapter was limited to a single laser system (and ablation algorithm). In further studies, newer state-of-the-art laser systems and algorithms will be evaluated as well.

Topic B (Modelo de un perfil libre de aberraciones)

Study concept and design (S.A.M.); data collection (D.P., C.V., D.O., J.G.); analysis and interpretation of data (S.A.M.); drafting (D.O., S.A.M.); critical revision (J.L.A., M.C.A., H.B., T.M., J.G.); statistical expertise (S.A.M.).
Section B.1 ABSTRACT

Purpose: To provide a model of an aberration-free profile, to clinically evaluate the impact of treatments based upon these theoretical profiles on the postoperative cornea, and to evaluate the clinical outcomes of treatments using the optimized Aberration-Free ablation profiles of the ESIRIS and AMARIS platforms, comparing the outcomes of ablations centred on the normal corneal vertex and on the pupil centre, as well as to compare the induced corneal wavefront aberration using the aspheric aberration-neutral ablation profile versus a classical Munnerlyn standard profile.

Methods: Aberration-free profiles were deducted from the Zernike expansion of the difference between two corneal Cartesian ovals. Compensation for the focus-shift effects of removing corneal tissue was incorporated by preserving the location of the optical focus of the anterior corneal surface. Simulation of the surgical performance of the profile was performed by ray-tracing through a cornea described by its anterior surface and pachymetry. Two groups (using the normal corneal vertex and using the pupil centre) with pupillary offset >200 microns were compared. Clinical outcomes were evaluated in terms of predictability, refractive outcome, safety, and wavefront aberration. Bilateral symmetry was evaluated in terms of corneal wavefront aberration.

Results: The proposed "aberration-free" profiles theoretically preserve aberrations, with asphericity becoming more oblate after myopic treatments and more prolate after hyperopic ones. Induced corneal aberrations at 6 mm were below clinically relevant levels: 0.061±0.129 µm for HO-RMS (p<.001), 0.058±0.128 µm for spherical aberration (p<.001) and 0.053±0.128 µm for coma (p<.01), whereas the rates of induced aberrations per achieved D of correction were -0.042 µm/D, 0.031 µm/D, and -0.030 µm/D for HO-RMS, SphAb, and coma (all p<.001). No other Zernike mode was significantly correlated.
Induction of positive asphericity correlated with the achieved correction (p<.001) at a rate 3x the theoretical prediction. 38% of the CV eyes improved BSCVA compared with 24% of the PC eyes (comparison CV/PC P=0.38). Induced ocular coma was on average 0.17 µm for the CV group and 0.26 µm for the PC group (comparison CV/PC P=0.01 favouring CV). Induced ocular spherical aberration was on average +0.01 µm for the CV group and +0.07 µm for the PC group (comparison CV/PC P=0.05 favouring CV). At 6.00 mm, corneal aberrations changed by a larger amount after Munnerlyn-based profiles than after aspheric aberration-neutral profiles.

Conclusions: "Aberration-free" patterns for refractive surgery as defined here, together with consideration of other sources of aberrations such as blending zones, eye-tracking, and corneal biomechanics, yielded results comparable to those of customisation approaches. CV-centred treatments performed better in terms of induced ocular aberrations and asphericity, but both centrations were identical in terms of photopic visual acuity. Aberration-Free treatments with the SCHWIND AMARIS did not induce clinically significant aberrations and maintained the global OD-vs.-OS bilateral symmetry, as well as the bilateral symmetry between corresponding Zernike terms (which influences binocular summation). The induced corneal aberrations were lower than with the classical profile and lower than reported in other publications. Having close-to-ideal profiles should improve clinical outcomes, decreasing the need for nomograms and diminishing induced aberrations after surgery.

Section B.2 INTRODUCTION

Previous studies have shown that spherical aberration shows a consistent increase after excimer laser ablation, directly proportional to the achieved refractive correction. It has been suggested that almost half of the induced spherical aberration is due to the lower delivery of excimer laser energy in the peripheral cornea due to corneal curvature.
The recent advances in excimer laser technology, such as the use of aspheric ablation profiles, the incorporation of HOAb treatment, and eye trackers, have presumably led to the better refractive outcomes and reduced postoperative HOAb induction that have been recently reported184,185. Ocular wavefront-guided and wavefront-optimized treatments can increase HOAb by 100% postoperatively184. A significant number of refractive surgery patients may not benefit from ocular wavefront-guided treatment, as the induction of HOAb is related to baseline levels of HOAb184,186. For example, HOAb tend to be induced in patients with less than 0.30 µm and reduced in patients with greater than 0.30 µm of HOAb.184,186 Furthermore, physiologic optical aberrations may be warranted to maintain the optical quality of the eye.73,187 Based on these studies,184,186,187 it seems the custom ablation algorithm may not be appropriate for the entire refractive surgery population. Mclellan and colleagues have reported a beneficial effect of pre-existing higher order aberrations on visual quality187. There is evidence of neural adaptation to the baseline wavefront profile69,72,73. The interaction between higher order aberrations can be beneficial to visual quality regardless of their magnitude. Furthermore, higher order aberrations seem to be induced in patients with 0.30 µm or less of preoperative HOAb184,186. Approximately half the patients that present for refractive surgery have HOAb of 0.30 µm or less184. To date, the induction of wavefront aberrations postoperatively is random and the postoperative wavefront profile cannot be predicted. Based on the random nature of the HOAb induction and current research, it may be beneficial to maintain the preoperative wavefront profile for a significant number of refractive surgery candidates.
Excimer laser refractive surgery has evolved from simple myopic ablations15 to the most sophisticated customised ablation patterns80: topography-guided56 and wavefront-driven37 treatments, either using wavefront measurements of the whole eye38 (obtained, e.g., by Hartmann-Shack wavefront sensors46) or using corneal topography-derived wavefront analyses54. Because the corneal ablations for refractive surgery treatments induce aberrations (one of the most significant side effects in myopic LASIK is the induction of spherical aberration, which causes halos and reduced contrast sensitivity), special ablation patterns were designed to preserve the preoperative level of high order aberrations. Not to forget the fact that astigmatism (especially high astigmatism) has its main origin in the anterior corneal surface, and topographically is usually found located 2-fold symmetrically around the normal corneal vertex and not around the pupil centre. Patient satisfaction in any refractive surgery, wavefront-guided or not, is primarily dependent on successful treatment of the lower order aberrations (LOA) of the eye (sphere and cylinder). Achieving accurate clinical outcomes and reducing the likelihood of a retreatment procedure are major goals of refractive surgery.

Section B.3 METHODS

Theoretical aberration-free profile (Perfil teóricamente libre de aberraciones)

We generated a rotationally symmetric cornea satisfying Baker's equation (1943) for conicoids:

r² + p⋅z² − 2⋅z⋅R = 0   (31)

where the z-axis is the axis of revolution, r the radial distance to the corneal vertex, p the aspherical factor, and R the apical radius of curvature. p represents how fast the surface deviates from a paraboloid, while the quotient of asphericity (Q) represents how fast the surface deviates from a sphere. The relationship between them is simply:

Q ≡ p − 1   (32)

The wavefront of such a conicoid can be expanded into Zernike polynomials36, where Z[j,k] are the Zernike polynomials, C[j,k] their coefficients, OZ the optical zone (i.e. the physical size of the unit disc) and n the corneal refractive index:

C[8,0] = (1−n)⋅p³⋅OZ⁸ / (1376256⋅R⁷)   (33)

√7⋅C[6,0] = (1−n)⋅p²⋅OZ⁶ / (20480⋅R⁵) + 21⋅C[8,0]   (34)

√5⋅C[4,0] = (1−n)⋅p⋅OZ⁴ / (768⋅R³) + 5√7⋅C[6,0] − 45⋅C[8,0]   (35)

√3⋅C[2,0] = (1−n)⋅OZ² / (16⋅R) + 3√5⋅C[4,0] − 6√7⋅C[6,0] + 30⋅C[8,0]   (36)

These equations were deduced by identification in a "term-by-term of the same radial order" fashion; that is, if a higher (or lower) order of the Taylor expansion were used, the obtained set of equations would have been slightly different, including more (or fewer) terms. Given the orthogonality properties of Zernike polynomials, inner products between the conic and the Zernike functions may as well be used to correlate the conic parameters (R, Q) with the Zernike coefficients (C[j,0]).

A Cartesian oval (an aspherical surface with Q-factor −1/n²) results from the condition of stigmatism and represents the free-of-aberrations surface for the infinity point (the far point of an emmetropic eye). If the anterior corneal surface were a Cartesian oval, it would have no aberrations. The anterior cornea is typically different from this shape and possesses its own aberration pattern, but the optical aberrations of a cornea can be calculated as:

C_CornealWavefrontAberration[j,k] = C_CartesianOval[j,k] − C_Cornea[j,k]   (37)

From that, a preoperative cornea with Q-factor −1/n² (~−0.528 for a corneal refractive index of 1.376) will not manifest "corneal wavefront aberration," and likewise a postoperative cornea with Q-factor −0.528 will not manifest "corneal wavefront aberration." Following this analysis, a treatment whose application on a Cartesian-oval cornea would result in a new Cartesian-oval cornea with a different dioptric power defines the "aberration-free" profile:

Abl(r) = r² / ( R + √( R² − ((n²−1)/n²)⋅r² ) ) − r² / ( R′ + √( R′² − ((n²−1)/n²)⋅r² ) ),  with R′ = (n−1)⋅R / (n−1 + D_C⋅R)   (38)

where D_C is the refractive power change.
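The term-by-term identification can be cross-checked numerically. The sketch below implements the radially symmetric Zernike coefficients of a conicoid (with OZ taken as the diameter of the unit disc, r = ρ⋅OZ/2 — an assumption of this sketch — and constants recomputed here from the Taylor expansion of Baker's equation) and compares them with a direct orthogonal projection of the exact sag:

```python
import numpy as np

def conic_sag(r, R, p):
    # Sag of a conicoid satisfying Baker's equation r^2 + p z^2 - 2 z R = 0
    return r**2 / (R + np.sqrt(R**2 - p * r**2))

def analytic_coeffs(R, p, OZ, n):
    # Radially symmetric Zernike coefficients of W = (1-n)*sag,
    # from term-by-term matching of the Taylor expansion
    C80 = (1 - n) * p**3 * OZ**8 / (1376256 * R**7)
    C60 = ((1 - n) * p**2 * OZ**6 / (20480 * R**5) + 21 * C80) / np.sqrt(7)
    C40 = ((1 - n) * p * OZ**4 / (768 * R**3)
           + 5 * np.sqrt(7) * C60 - 45 * C80) / np.sqrt(5)
    C20 = ((1 - n) * OZ**2 / (16 * R)
           + 3 * np.sqrt(5) * C40 - 6 * np.sqrt(7) * C60 + 30 * C80) / np.sqrt(3)
    return {20: C20, 40: C40, 60: C60, 80: C80}

def projected_coeffs(R, p, OZ, n, samples=100001):
    # Cross-check: project the exact sag onto the orthonormal radial Zernikes
    rho = np.linspace(0.0, 1.0, samples)
    W = (1 - n) * conic_sag(rho * OZ / 2, R, p)   # OZ as the zone diameter
    Z = {
        20: np.sqrt(3) * (2 * rho**2 - 1),
        40: np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
        60: np.sqrt(7) * (20 * rho**6 - 30 * rho**4 + 12 * rho**2 - 1),
        80: 3 * (70 * rho**8 - 140 * rho**6 + 90 * rho**4 - 20 * rho**2 + 1),
    }
    def trap(y):  # trapezoidal rule on the uniform rho grid
        return float(np.sum(y[:-1] + y[1:]) * (rho[1] - rho[0]) / 2)
    return {k: trap(W * z * 2 * rho) for k, z in Z.items()}

coeffs = analytic_coeffs(7.87, 0.75, 6.0, 1.376)  # typical cornea over a 6 mm zone
```

The projection agrees with the closed-form coefficients up to the truncation of the Taylor expansion, which is tightest for C[2,0] and loosest for the tiny C[8,0].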
Such a profile is expected to fulfill the "aberration-free" condition:

HOWFAb_Post(ρ,θ) = HOWFAb_Pre(ρ,θ)  ∀ρ,θ ∈ OZ   (39)

ΔHOWFAb(ρ,θ) = 0  ∀ρ,θ ∈ OZ   (40)

where HOWFAb means high-order wavefront aberration.

Compensation for the focus shift (Compensación del desplazamiento del foco)

The approach to theoretically balance the focus shift due to tissue removal is based on preserving the location of the optical focus of the anterior corneal surface after removing the tissue. Corneal refractive power is given by:

P_ACS = (n_Cornea − n_Air) / R_ACS   (41)

where P_ACS is the refractive power of the anterior corneal surface, R_ACS the radius of curvature of the anterior corneal surface, n_Cornea the refractive index of the cornea, and n_Air the refractive index of air. Preserving the location of the optical focus of the anterior corneal surface after removing tissue means:

F_CorneaPostAblation = F_ACS − Z_Ablation   (42)

where F_CorneaPostAblation is the focal length of the exposed stroma, F_ACS = n_Cornea/P_ACS the focal length of the anterior corneal surface (measured in the corneal medium), and Z_Ablation the depth of ablation. This way, concerning refractive surgery and considering only the amount of tissue removed, but not corneal biomechanics or wound healing, and assuming that the refractive index remains constant throughout the procedure and at the postoperative stage (i.e. n_Cornea,preoperative = n_Cornea,postoperative = n_Cornea), the approximate refractive shift due only to tissue removal corresponds to combining all equations into one:

P_CorneaPostAblation − P_ACS = P_ACS⋅n_Cornea / (n_Cornea − P_ACS⋅Z_Ablation) − P_ACS = P_ACS⋅(n_Cornea − n_Air)⋅Z_Ablation / (n_Cornea⋅R_ACS − (n_Cornea − n_Air)⋅Z_Ablation)   (43)

where P_CorneaPostAblation is the refractive power of the exposed stroma.

Optical simulation (Simulaciones ópticas)

The "aberration-free" profile as proposed here is built on a single-surface eye model whose anterior surface does not induce spherical aberration.
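The focus-shift relation can be illustrated with representative numbers; R_ACS and the ablation depth below are assumed illustrative values, only the refractive indices come from the text:

```python
# Refractive shift of the anterior corneal surface caused purely by tissue removal.
# Assumed illustrative values: R_ACS = 7.8 mm and a 100 um central ablation depth.
n_cornea, n_air = 1.376, 1.0
R_acs = 7.8e-3                                 # apical radius of curvature (m)
Z_abl = 100e-6                                 # ablation depth (m)

P_acs = (n_cornea - n_air) / R_acs             # anterior surface power (D)
# Keep the focus location fixed while the refracting surface recedes by Z_abl
P_post = P_acs * n_cornea / (n_cornea - P_acs * Z_abl)
shift = P_post - P_acs                         # approximate refractive shift (D)
```

For these values the shift is on the order of 0.17 D, i.e. small but not negligible against the targeted correction, which is why the profile compensates for it.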
The fact that the actual cornea consists of a double-surface optic, whose posterior surface partly balances the effect of its anterior surface, raises the question whether implementing such a profile would result in excessive negative spherical aberration in a real eye. For this reason, we simulated ray-tracing through a cornea described by its anterior surface (Ra, Qa) and its pachymetry. We took 1 as the refractive index for air, 1.376 for the cornea, and 1.336 for the aqueous humour. For the pachymetry, we selected a simple radially symmetric model that approaches the mean pachymetry of the human population, by defining a parabolically increasing pachymetry:

Pachy(r) = C_Pachy + (P_Pachy − C_Pachy)⋅(r / 5 mm)²   (44)

where C_Pachy represents the central pachymetry and P_Pachy the pachymetry at a 5 mm radial distance. We consider the cornea (the added contributions of anterior and posterior surfaces) as the only element in the optical system, and we consider the optical effects of changing its anterior surface. An "aberration-free" profile as defined here is only valid in a theoretical frame and/or for educational purposes, if other inherent sources of aberrations, such as biomechanical reactions due to the flap cut or to the ablation process itself, blending zones, loss-of-efficiency considerations, spot-size limitations, or active eye-tracking capabilities, are not accounted for.

Clinical evaluation (Evaluación clínica)

A total of 250 eyes (125 patients) were consecutively treated using the ORK-CAM "Aberration neutral" aspheric ablation profiles and retrospectively analysed. Three-month follow-up was available in 232 of these eyes (93%), and their preoperative data were as follows: mean sphere -3.64±1.03 D (range, 0 to -9.25 D); mean cylinder -0.97±0.87 D (range, 0 to -3.00 D); mean spherical equivalent refraction -4.12±2.26 D (range, -0.37 to -9.50 D).
In all eyes, we measured corneal topography and derived corneal wavefront analyses (Keratron Scout, Optikon2000 S.p.A., Rome, Italy), the ocular wavefront with a high-resolution aberrometer (Ocular Wavefront Analyzer, SCHWIND eye-tech-solutions, Kleinostheim, Germany), manifest refraction, and uncorrected and best spectacle-corrected Snellen visual acuity. Measurements were performed preoperatively and at one and three months after surgery. A 6.5 mm central fully corrected ablation zone was used in all eyes, with a variable transition size automatically provided by the laser related to the planned refractive correction (6.7 mm to 8.2 mm). Since corneal wavefront information is available that is not limited by the pupil boundaries, we have reported the topographic wavefront findings for 6, 7 and 8 mm diameter zones, to include the total treatment zone as well as the transition and junction zones.

Ablation centre (Centrado de la ablación)

Thirty-five patients (53 eyes) seeking laser correction at the Muscat Eye Laser Center, Sultanate of Oman, were enrolled for this analysis. The patients were divided into two myopic astigmatism groups. In the CV group (24 eyes, 16 patients; 8 patients with both eyes enrolled in the study and 8 patients with only 1 eye enrolled), the ablation was centred using the pupillary offset, i.e., the distance between the pupil centre and the normal CV measured by videokeratoscopy. The measurement was performed under photopic conditions of 1,500 lux, similar to the conditions under the operating microscope. The excimer laser allows for modification of the ablation centration from the pupillary centre with an offset, by entering either X and Y Cartesian values or R and θ polar values. The measurement of the pupillary offset was translated into the treatment planning as polar coordinates to be manually entered in the excimer laser computer.
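Translating a measured pupillary offset into the polar coordinates entered at the laser computer is a plain Cartesian-to-polar conversion; the offset components below are hypothetical:

```python
import math

# Hypothetical pupillary offset (corneal vertex relative to the pupil centre)
x_mm, y_mm = 0.20, 0.15                    # Cartesian components (mm)

r_mm = math.hypot(x_mm, y_mm)              # radial magnitude R
theta_deg = math.degrees(math.atan2(y_mm, x_mm))   # orientation theta

print(f"R = {r_mm:.2f} mm, theta = {theta_deg:.1f} deg")   # R = 0.25 mm, theta = 36.9 deg
```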
In the PC group (29 eyes, 19 patients; 10 patients with both eyes enrolled in the study and 9 patients with only 1 eye enrolled), the ablation was centred using the pupil centre as observed by the eye-tracking module. Eyes were enrolled in the study groups only if they had no symptomatic aberrations (<0.65 µm root-mean-square HOAb (<0.50 DEq), measured by the Ocular Wavefront Analyzer and the Optikon Keratron Scout for a 6.00 mm analysis diameter) and a moderate-to-large pupillary offset (>200 microns). Patients were randomly assigned to the CV or PC centration groups based on a coin toss. In the patients with only one eye fulfilling the enrolment criteria, both eyes were treated with the randomly assigned centration method, but only one eye was included for analysis. The exclusion criteria included unstable refraction during the previous 6 months; signs of keratoconus or abnormal corneal topography; collagen vascular, autoimmune or immunodeficiency diseases; severe local infective or allergic conditions; severe dry eye disease; and monocularity or severe amblyopia. To determine the ablation profile of the CAM, the manifest refraction was measured in each eye and cross-checked with the objective refraction from the aberrometry; the VA and mesopic pupil size (SCHWIND Ocular Wavefront Analyzer) were also measured. In both groups, we used an optical zone of 6.50 millimetres with a variable transition zone provided automatically by the software in relation to the planned refraction. In all cases, one surgeon (MCA) performed the standard LASIK procedures at the Muscat Eye Laser Center. Immediately before the ablation, the laser was calibrated according to the manufacturer's instructions and the calibration settings were recorded. The manifest refraction, VA, topography, and aberrometry measurements were recorded for each eye at 1, 3, and 6 months and 1 year postoperatively.
At the preoperative stage, as well as at any of the follow-ups after the treatments, the pupillary offset was measured directly on the topographical map displayed by the videokeratoscope, and corresponds to the distance between the pupil centre under photopic conditions of 1,500 lux and the normal CV. In particular, we analysed the possible correlations of the induced ocular aberrations with the defocus correction and with the pupillary offset. As the profiles used are aspherically based, aiming for an effect "neutral for aberration," correlations between induced ocular spherical aberration and defocus assess how close (or how far) the profiles are from the targeted neutral effect when centred according to the different references, whereas correlations between induced ocular coma aberration and defocus assess whether the profiles suffer from a systematic decentration (a spherical aberration analysed off-axis results in coma aberration) when referred to the different points. For statistical analysis, paired t-tests were used to compare postoperative vs. preoperative results within each group, and unpaired t-tests were used to compare results between groups. For correlation tests, the Coefficient of Determination (r2) was used, and the significance of the correlations was evaluated assuming a metric that is distributed approximately as t with N−2 degrees of freedom, where N is the size of the sample. For all tests, P<0.05 was considered statistically significant.

Comparison to Munnerlyn based profiles (Comparación con perfiles directamente basados en Munnerlyn)

For this comparison, we retrospectively analysed 2 consecutive groups, of 70 eyes each, treated for myopia and myopic astigmatism with the LASIK technique. One group was treated with a classical Munnerlyn standard profile and compared to the first 70 eyes treated for myopic LASIK using the aspheric aberration neutral (Aberration-FreeTM) profile.
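The correlation-significance test described above (r2 converted to a t statistic with N−2 degrees of freedom) can be sketched as follows; the numbers reproduce the PC-group coma correlation reported later (r2 = 0.24, N = 29 eyes), and the critical value 2.052 for 27 degrees of freedom (two-tailed, 0.05) is taken from standard t tables:

```python
import math

def t_from_r2(r2: float, n: int) -> float:
    """t statistic for testing a correlation r = 0; distributed with n - 2 df."""
    return math.sqrt(r2 * (n - 2) / (1.0 - r2))

# PC group, induced ocular coma vs. achieved defocus: r2 = 0.24, N = 29
t = t_from_r2(0.24, 29)
print(round(t, 2))  # 2.92, above t(27) ~ 2.052, hence significant (P < 0.05)
```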
We analysed the visual outcome, the corneal wavefront aberration and the topographical changes of these two consecutive groups of eyes after 3 months. All patients were examined preoperatively and 1 day, 1 week, 1 month, and 3 months postoperatively. In the case of the classical standard profile, we used our own nomogram, calculated from previously treated eyes. In the case of the aspheric aberration neutral (Aberration-FreeTM) profile, we did not use any nomogram. For statistical analysis, unpaired t-tests were used to test statistical differences, with p values of less than 0.05 being considered statistically significant.

Bilateral symmetry (Simetría bilateral)

For the evaluation of the influence of the ablation profile on the bilateral symmetry, 50 eyes (25 patients) that had been treated with the AMARIS were retrospectively analysed. Inclusion criteria for review were bilateral surgery on the same day targeted for emmetropia, preoperative best spectacle-corrected visual acuity (BCVA) ≥ 20/25 (logMAR ≤ 0.1) in both eyes, no signs of amblyopia, and successful completion of the 6-month follow-up. Six-month follow-up data were available for all 50 eyes (100%), and their preoperative data were as follows: mean manifest defocus refraction: -2.47 ± 2.51 D (range, -8.13 to +5.63 D) and mean manifest astigmatism magnitude: 2.02 ± 0.91 D (range, 0.00 to 4.75 D). For all eyes, we measured corneal topography and derived corneal wavefront aberrations up to the 7th Zernike order (36 terms) (Keratron-Scout, OPTIKON2000, Rome, Italy). Measurements were performed preoperatively and also 1, 3, and 6 months after surgery. A 6.5 mm central and fully corrected ablation zone was used in all eyes, together with a variable transition size that was automatically provided by the laser depending on the planned refractive correction (6.7 mm to 8.9 mm).
Correlations for bilateral symmetry of Zernike terms across subjects (Correlaciones de la simetría bilateral para los términos de Zernike)

To test this hypothesis, we plotted left-vs.-right-eye scatter graphs for each Zernike term to analyse the predicted correlations between the two eyes. These plots reveal, for our sample, which Zernike modes show symmetry and which type of symmetry they show (pre- and postoperatively). What is expected is that m = 0 modes show even symmetry; negative-odd modes show even symmetry; negative-even modes show odd symmetry; positive-odd modes show odd symmetry; and positive-even modes show even symmetry. The slope and intercept of the linear regression (least-squares fitting) were calculated for each Zernike term up to the seventh radial order (36 coefficients). We assessed the statistical significance of the correlations using Student's t-test; the Coefficient of Determination (r2) was also employed, and the significance of the correlations was evaluated assuming a metric that is distributed approximately as t with N−2 degrees of freedom, where N is the size of the sample.

Correlations for symmetry of aberrations in right and left eye of the same subjects (Correlaciones de la simetría interocular de los sujetos)

Taking symmetry into account, we plotted for each subject left-vs.-right eye scatter graphs of the Zernike coefficients. These plots reveal which patients in our sample show symmetry (pre- and postoperatively). The slope and intercept of the linear regression (least-squares fitting) were calculated for each subject (25 subjects). We assessed the statistical significance of the correlations using Student's t-tests; the Coefficient of Determination (r2) was also employed, and the significance of the correlations was evaluated assuming a metric that is distributed approximately as t with N−2 degrees of freedom, where N is the size of the sample.
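The expected symmetry pattern listed above follows from mirror reflection of the wavefront about the vertical axis (θ → π − θ): cosine terms (m > 0, and m = 0) scale by (−1)^m, and sine terms (m < 0) scale by (−1)^(|m|+1). A minimal sketch of the predicted left-eye sign for each azimuthal frequency m (the function name is illustrative):

```python
def mirror_sign(m: int) -> int:
    """Predicted sign relating OS to OD for a Zernike term Z(n, m) under
    left-right mirror symmetry: +1 = even symmetry (same sign between eyes),
    -1 = odd symmetry (opposite sign)."""
    if m >= 0:                   # cosine terms, including m = 0
        return (-1) ** m
    return (-1) ** (abs(m) + 1)  # sine terms

# m = 0: even; negative-odd: even; negative-even: odd; positive-odd: odd; positive-even: even
print([mirror_sign(m) for m in (0, -3, -2, 3, 4)])  # [1, 1, -1, -1, 1]
```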
Differences for symmetry of aberrations in right and left eye of the same subjects (Diferencias en la simetría interocular de los sujetos)

Taking symmetry into account, we compared the Zernike coefficients obtained for the left and right eyes of the same subjects. We assessed the statistical significance using paired Student's t-tests.

Dioptrical differences in corneal wavefront aberration between the right and left eyes of the same subjects (Diferencias dióptricas interoculares de la aberración del frente de onda corneal de los sujetos)

For our analysis, the concept of equivalent defocus (DEQ) has been used as a metric, in order to associate a dioptric power with the RMS of the Zernike coefficients. We have set a threshold value of 0.25 D to establish whether or not the differential corneal wavefront aberration between the left and the right eye was clinically relevant.

Changes in bilateral symmetry of Zernike terms as a result of refractive surgery (Cambios en la simetría bilateral de términos de Zernike provocados por la cirugía refractiva)

We analysed the number of Zernike terms that postoperatively lost, gained or preserved symmetry, compared to the preoperative baseline.

Changes in bilateral symmetry of wavefront aberration as a result of refractive surgery (Cambios en la simetría bilateral interocular provocados por la cirugía refractiva)

We analysed the number of patients that postoperatively lost, gained or preserved symmetry, compared to the preoperative baseline.

Statistical analysis (Análisis estadístico)

The level of statistical significance was taken to be p<0.05.

Section B.4 RESULTS

Simulation of the surgical performance of the profile (Simulación del rendimiento quirúrgico del perfil)

As shown in Figure 28 to Figure 30, the previously defined "aberration-free" profile effectively preserves existing aberrations, even when a wide range of anterior and posterior corneal surfaces are considered, whereas "asphericity preserving" profiles induce aberrations.
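The equivalent defocus (DEQ) metric introduced in the methods above can be computed from the wavefront RMS and the pupil size with the standard definition DEQ = 4√3·RMS/r² (RMS in µm, pupil radius r in mm, result in dioptres). This sketch is consistent with the enrolment criterion quoted earlier, where 0.65 µm RMS at a 6.00 mm analysis diameter corresponds to about 0.50 DEq:

```python
import math

def equivalent_defocus(rms_um: float, pupil_diameter_mm: float) -> float:
    """Equivalent defocus (D) for a wavefront RMS (um) over a pupil diameter (mm)."""
    r = pupil_diameter_mm / 2.0
    return 4.0 * math.sqrt(3.0) * rms_um / r ** 2

print(round(equivalent_defocus(0.65, 6.0), 2))  # 0.5
```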
The graphs were obtained by simulated ray-tracing of the preoperative and postoperative corneas and calculation of the difference.

Figure 28: Analysis of the induced aberration at 6.50 mm for -6.00 D for a balanced corneal model, for 4 different ablation profiles: A) Munnerlyn based profiles, B) Parabolic based profiles, C) Asphericity preserving profiles, D) Aberration-free profiles. Notice the pre-op aberrated status (in blue), the post-op aberrated status (in red), and the induced aberrations (in green). Note the multifocality range (x-axis) running from -2 D to +1 D in all graphs.

Figure 29: Theoretical analysis of the induced corneal spherical aberration analysed at 6 mm vs. refractive power change (MRSEq) for 4 different asphericities: "free-of-aberrations" cornea (Q-Val -0.53, in blue), balanced-eye model cornea (Q-Val -0.25, in magenta), spherical cornea (Q-Val 0.00, in yellow), parabolic cornea (Q-Val -1.00, in cyan).

Figure 30: Theoretical analysis of the induced corneal spherical aberration at 6.50 mm using Aberration-Free profiles for -6.00 D for 2 anterior corneal surfaces: 7.87 mm, Q-factor -0.25 (A and C); and 7.87 mm, Q-factor +0.30 (B and D); and for 2 posterior corneal surfaces and pachymetries: 525 µm central pachymetry, 775 µm peripheral pachymetry at 5 mm radial distance (A and B); and 550 µm central pachymetry, 550 µm peripheral pachymetry at 5 mm radial distance (C and D). Notice the pre-op aberrated status (in blue), the post-op aberrated status (in red), and the induced aberrations (in green). Note the multifocality range (x-axis) running from -3.5 D to +0.5 D in all graphs.

Clinical evaluation (Evaluación clínica)

We have included 232 treatments for this evaluation, all of them without adverse events. At three months, mean manifest spherical equivalent was -0.10±0.33 D (range, +0.86 to -1.18 D) and mean cylinder 0.23±0.26 D (range, 0 to 1.50 D).
Eighty-eight percent of eyes (202) were within ±0.50 D of attempted correction. Preoperatively, mean ocular spherical aberration was +0.03±0.11 µm (range -0.19 to +0.25) and corneal spherical aberration was +0.33±0.10 µm (range +0.14 to +0.52). Postoperatively, the values were +0.07±0.16 µm (range -0.25 to +0.38) for ocular spherical aberration (P < 0.0001), and +0.40±0.13 µm (range +0.14 to +0.67) for corneal spherical aberration (P < 0.0001). Ocular spherical aberration increased on average by 0.028 µm per dioptre of achieved defocus correction for a 6-mm pupil (P < 0.0001), and by 0.030 µm per dioptre of achieved defocus correction for corneal spherical aberration (P < 0.0001); the difference between both measurements was not statistically significant (P = 0.47) (Figure 31).

Figure 31: Induced corneal and ocular spherical aberration vs. achieved defocus correction at 3 months follow-up, analysed at a 6.0 mm pupil (corneal: Keratron Scout, blue diamonds, y = -0.030x - 0.033, R2 = 0.316; ocular: ORK-Wavefront Analyzer, purple squares, y = -0.028x - 0.027, R2 = 0.248).

Preoperatively, mean ocular coma was 0.18±0.09 µm (range 0.01 to 0.36) and corneal coma was 0.26±0.12 µm (range 0.02 to 0.50). Postoperatively, the values were 0.22±0.12 µm (range 0.02 to 0.45) for ocular coma (P = 0.01), and 0.31±0.16 µm (range 0.01 to 0.62) for corneal coma (P = 0.09). Preoperatively, ocular RMSho was, on average, 0.32±0.13 µm (range 0.07 to 0.57) and corneal RMSho was 0.54±0.14 µm (range 0.26 to 0.82). Postoperatively, the values were 0.37±0.14 µm (range 0.10 to 0.65) for ocular RMSho (P = 0.002), and 0.60±0.14 µm (range 0.32 to 0.88) for corneal RMSho (P < 0.0001).
On average, induced coma, defined as the difference in absolute value of the coma aberration magnitude postoperatively minus its preoperative magnitude, excluding orientation, was 0.016 µm per dioptre (Figure 32), whereas induced high-order aberrations, defined as the difference in absolute value of the root-mean-square postoperatively minus its preoperative value, were 0.014 µm per dioptre for ocular aberration and 0.035 µm per dioptre for corneal aberration, both for a 6-mm pupil (Figure 33).

Figure 32: Change in corneal and ocular coma aberration magnitude vs. achieved defocus correction at 3 months follow-up, analysed at a 6.0 mm pupil (ocular: ORK-Wavefront Analyzer, purple squares; corneal: Keratron Scout, blue diamonds; regressions: y = -0.015x + 0.043, R2 = 0.076 and y = -0.016x + 0.027, R2 = 0.123).

Figure 33: Change in corneal and ocular high-order (HO-RMS) aberration magnitude vs. achieved defocus correction at 3 months follow-up, analysed at a 6.0 mm pupil (ocular: ORK-Wavefront Analyzer, purple squares, y = -0.014x + 0.056, R2 = 0.101; corneal: Keratron Scout, blue diamonds, y = -0.035x - 0.004, R2 = 0.297).

At 7 mm, preoperatively, mean corneal spherical aberration was +0.51±0.15 µm (range +0.22 to +0.81). Postoperatively, the values were +0.68±0.25 µm (range +0.20 to +1.16) (P < 0.0001). Preoperatively, mean corneal coma was 0.38±0.18 µm (range 0.02 to 0.73). Postoperatively, the values were 0.50±0.25 µm (range 0.01 to 0.98) (P = 0.003). Preoperatively, corneal RMSho was, on average, 0.80±0.21 µm (range 0.38 to 1.22). Postoperatively, the values were 1.02±0.25 µm (range 0.54 to 1.50) (P < 0.0001).
At 8 mm, preoperatively, mean corneal spherical aberration was +0.75±0.22 µm (range +0.32 to +1.18). Postoperatively, the values were +1.13±0.41 µm (range +0.32 to +1.93) (P < 0.0001). Preoperatively, mean corneal coma was 0.53±0.25 µm (range 0.03 to 1.02). Postoperatively, the values were 0.74±0.37 µm (range 0.02 to 1.46) (P < 0.0001). Preoperatively, corneal RMSho was, on average, 1.14±0.31 µm (range 0.55 to 1.74). Postoperatively, the values were 1.62±0.39 µm (range 0.85 to 2.38) (P < 0.0001).

Ablation centre (Centrado de la ablación)

The amount of induced ocular coma was small for both centration strategies: an average 0.17 micron (range, 0.03-0.32 micron) for the CV group and 0.26 micron (range, 0.01-0.72 micron) for the PC group. The difference in induced ocular coma between groups, favouring CV, was significant (unpaired t-test P=0.01). Furthermore, the induced ocular coma was not correlated with the achieved defocus correction for the eyes treated with the CV strategy (r2=0.004, P=0.78 for the CV group), but it was correlated with the achieved defocus correction for the eyes treated with the PC strategy (r2=0.24, P=0.01 for the PC group) (Figure 34). The induced ocular coma/dioptres of achieved defocus correction ratio (the slope of the regression) was -0.004 micron of induced ocular coma/diopter for the CV group and -0.049 micron of induced ocular coma/diopter for the PC group.

Figure 34: Induced ocular coma/defocus diopter ratio for the CV group (blue) and the PC group (magenta).

The induced ocular trefoil was small for both centration strategies, i.e., an average 0.09 µm (range, 0.01-0.34 micron) for the CV group and 0.13 µm (range, 0.01-0.49 micron) for the PC group. The difference in induced ocular trefoil between groups, favouring the CV strategy, was not significant (unpaired t-test P=0.07).
Further, the induced ocular trefoil was not correlated with the achieved defocus correction for the eyes treated with either centration strategy (r2=0.01, P=0.69 for the CV group; r2=0.11, P=0.07 for the PC group) (Figure 35). The induced ocular trefoil/dioptres of achieved defocus correction ratio (the slope of the regression) was -0.005 induced ocular trefoil/diopter for the CV group, and -0.019 induced ocular trefoil/diopter for the PC group.

Figure 35: Induced ocular trefoil/defocus diopter ratio for the CV group (blue) and the PC group (magenta).

The induced ocular spherical aberration was minute for both centration strategies, i.e., an average +0.01 micron (range, -0.25 to +0.34 micron) for the CV group and +0.07 micron (range, -0.01 to +0.46 micron) for the PC group. The difference in induced ocular spherical aberration between groups, favouring the CV strategy, was significant (unpaired t-test P=0.05). Further, the induced ocular spherical aberration was not correlated with the achieved defocus correction for the eyes treated with the CV strategy (r2=0.13, P=0.09 for the CV group), but it was correlated with the achieved defocus correction for the eyes treated with the PC strategy (r2=0.17, P=0.02 for the PC group) (Figure 36). The induced ocular spherical aberration/dioptres of achieved defocus correction ratio (the slope of the regression) was -0.028 µm of induced ocular spherical aberration/diopter for the CV group and -0.035 µm of induced ocular spherical aberration/diopter for the PC group.

Figure 36: Induced ocular spherical aberration/defocus diopter ratio for the CV group (blue) and the PC group (magenta).
Comparison to Munnerlyn based profiles (Comparación con los perfiles directamente basados en Munnerlyn)

The induced corneal spherical aberration at 6.00 mm measured 0.17±0.10 µm in the classical group and 0.09±0.16 µm in the aspheric aberration neutral (Aberration-FreeTM) group; the difference in induced SphAb between the two groups was statistically significant, favouring the aberration neutral (Aberration-FreeTM) group (p < 0.005).

Bilateral symmetry (Simetría bilateral)

Changes in bilateral symmetry of Zernike terms as a result of refractive surgery (Cambios en la simetría bilateral de términos de Zernike provocados por la cirugía refractiva)

Six months postoperatively, 3 Zernike terms (C[2,-2], C[4,+4], C[5,-5]) had lost significant OS-vs.-OD correlation symmetry, 4 Zernike terms (C[4,-4], C[5,-3], C[6,0], C[7,-1]) had gained significant correlation symmetry, and 29 Zernike terms preserved correlation symmetry OS vs. OD, compared to the preoperative baseline. Six months postoperatively, for 6 Zernike terms (C[4,+4], C[5,+1], C[6,-6], C[6,-4], C[6,+2], C[7,-5]) the differences in OS-vs.-OD symmetry increased significantly, for 4 Zernike terms (C[4,-4], C[5,+3], C[6,+6], C[7,-7]) the differences in symmetry decreased significantly, and for 26 Zernike terms the OS-vs.-OD symmetry was preserved, compared to the preoperative baseline.

Changes in bilateral symmetry of wavefront aberration as a result of refractive surgery (Cambios en la simetría bilateral interocular provocados por la cirugía refractiva)

Six months postoperatively, 3 patients (#2, #15, and #22) lost significant OS-vs.-OD correlation symmetry, 1 patient (#18) gained significant correlation symmetry, and 21 patients preserved OS-vs.-OD correlation symmetry, compared to the preoperative baseline.
Six months postoperatively, for 2 patients (#6, #15) the differences in OS-vs.-OD symmetry increased significantly, for 1 patient (#7) the differences in symmetry decreased significantly, and for 22 patients the OS-vs.-OD symmetry was preserved, compared to the preoperative baseline.

Section B.5 DISCUSSION

Aberration-free pattern (Perfiles libres de aberración)

Corneal refractive treatments typically induce a change in corneal asphericity. Recently, it has been argued that preserving the preoperative corneal asphericity after corneal refractive treatments might be positive; therefore, asphericity-based profiles have been developed. However, there is no clear evidence that asphericity is the variable that alone plays the major role in the visual process. One problem in using Q-factor customized ablation profiles is the severe difficulty of determining the Q-factor to be targeted. The average asphericity of the human cornea is about -0.28.170 Nevertheless, there are persons with Q-factor -0.25 and poor vision, and others with Q-factor +0.25 and supervision. Despite some remarkable theoretical works80,81,82, there is no proof that more negative quotients of asphericity provide better visual quality, or that an absolute optimum exists. When a patient is selected for a non-customized aspherical treatment, the global aim of the surgeon should be to leave all existing HOAb unchanged, because the best corrected visual acuity, in this patient, has been unaffected by the pre-existing aberrations69. Hence, all factors that may induce HOAb, such as biomechanics, need to be taken into account prior to the treatment to ensure that the preoperative HOAb are unchanged after treatment.
Statistical analysis of a population of human corneas showed, as an average result, that the best fit aspherical surface had a Q-factor around -0.25.171,172 As a result, in general, healthy human corneas show a "positive spherical aberration," which is balanced by the "negative spherical aberration" of the internal lens.55 On average, human corneas manifest a corneal spherical aberration around 0.23 µm. One can say that the corneal-wavefront values are "overestimated" in the topographic systems compared to the ocular wavefront measurements. As individuals age, the asphericity of the crystalline lens changes, reducing the amount of spherical aberration that can be balanced, or even showing a certain amount of positive spherical aberration, whereas the corneal asphericity, and thus the corneal spherical aberration, remains relatively stable over time, disrupting the equilibrium between both. However, in recent times there is a clear tendency of targeting a prolate postoperative anterior corneal surface as the global optimum in refractive surgery. The intended meaning of the terms prolate and oblate is sometimes unclear. The confusion comes from the false usage of curvature and refractive power: as the average human cornea is prolate (Q-factor -0.25), the central part of the cornea has a stronger curvature than the periphery. However, refractive power is given by Snell's law. As the corresponding Cartesian oval is the aberration-free surface (i.e. the only truly monofocal surface), and can be described by an aspherical surface with quotient of asphericity -1/n2 (approx. -0.528 for the human cornea), the average human cornea (Q-factor -0.25) is less prolate (so more oblate) than the corresponding Cartesian oval; thus the refractive power of the outer corneal surface increases from central towards peripheral.
In this way, the multifocality towards the periphery just answers the question whether the corneal spherical aberration is positive (refractive power increases towards the periphery) or negative (refractive power decreases towards the periphery), but not the question about the geometrical concept of prolate vs. oblate. The first thing to be clarified is that, even though the amount of corneal spherical aberration and the asphericity are intrinsically related, the goal is always described in terms of change in spherical aberration158, because this is the factor related to the quality and sharpness of the retinal image. Then, in the treatments, the goals should be: a) For aspherical treatments: aberration-free profile, with no induced aberrations; a change in asphericity depending on the corrected defocus. b) For customized wavefront treatments: change in aberrations according to diagnosis; change in asphericity depending on the corrected defocus and on the C(n,0) coefficients applied. The asphericity change using classical profiles is bigger than that using aberration-free profiles, and the asphericity change using aberration-free profiles is intended in a controlled manner. Please note that only a starting surface of a Cartesian oval would lead to no corneal aberration, but the anterior cornea definitely is not a Cartesian oval and possesses corneal aberrations. However, with the proposed "aberration-free" concept the idea is to maintain the own aberrations of every individual cornea.
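The Cartesian-oval asphericity quoted above, Q = -1/n2, can be checked directly. Assuming the commonly used corneal refractive index n = 1.376 (an assumption here, since the text only quotes the resulting value), the -0.528 figure is reproduced:

```python
n_cornea = 1.376  # assumed corneal refractive index (not stated in the text)

# quotient of asphericity of the stigmatic (aberration-free) Cartesian oval
q_cartesian_oval = -1.0 / n_cornea ** 2
print(round(q_cartesian_oval, 3))  # -0.528
```

Since the average human cornea has Q around -0.25, it sits between the sphere (Q = 0) and this Cartesian oval, which is why it retains positive spherical aberration.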
Even though the condition of stigmatism that originates the "free of aberration" behaviour, verified for two points (object and image) and for a conicoid under limited conditions, is very sensitive to small deviations and decentrations (a question that usually arises in refractive surgery), the goal of these profiles is not to achieve a stigmatism condition postoperatively, but rather to maintain the original HO aberrations. In wavefront-guided ablation the objective is not only to avoid inducing aberrations but moreover to reduce the existing aberrations of the eye; however, if we analyse the studies of wavefront-guided ablation186,190, the result in some laser platforms is that they induce less aberration than standard profiles but cannot reduce the postoperative HO aberrations below the preoperative levels. In our case, the theoretical model of the aspheric Aberration-Free treatment tries to remodel the slope of the cornea to compensate for the attempted sphere and cylinder components, without inducing new aberrations. One of the most affected aberrations after myopic LASIK is the spherical aberration.79 It is important to remark that the Aberration-Free profiles used intend to preserve preoperative aberrations and not preoperative asphericity. Tuan and Chernyak190 analysed the impact of corneal asphericity on wavefront-guided LASIK at six clinical sites and found no significant correlation between corneal shape and VA or contrast sensitivity. Pop and Payette191 studied the relationship between contrast sensitivity, Zernike wavefront aberrations, and asphericity after LASIK to correct myopia. Contrast sensitivity was not correlated with asphericity but was correlated with wavefront aberrations, as expected. The change in asphericity was correlated with the refractive change and was predicted by the parabolic Munnerlyn formula. Anera et al.177 (2003) analysed the origin of the changes in the p-factor after LASIK and the effect of postsurgical asphericity on the contrast sensitivity function.
The increase in the p-factor after LASIK was higher than the prediction using the paraxial formula of Munnerlyn and coauthors. Holladay and Janes (2002) determined the relationship between the spherical refractive change after myopic excimer laser surgery and the effective optical zone and corneal asphericity determined by corneal topography, which changed nonlinearly with the amount of treatment.

Ablation centre (Centrado de la ablación)

We designed our centration strategies around two different centration references that can be detected easily and measured with currently available technologies. PC may be the most extensively used centration method for several reasons. First, the pupil boundaries are the standard references observed by the eye-tracking devices. Moreover, the entrance pupil can be well represented by a circular or oval aperture, and these are the most common ablation areas. Centring on the pupil offers the opportunity to minimize the optical zone size. Because in LASIK there is a limited ablation area of about 9.25 millimetres (flap cap), the maximum allowable optical zone will be about 7.75 millimetres. Because laser ablation is a destructive tissue technique, and the amount of tissue removed is directly related to the ablation area diameter, the ablation diameter, maximum ablation depth, and ablation volume should be minimized. The planned optical zone should be the same size as, or slightly larger than, the functional entrance pupil for the patient's requirements. The main HOAb effects (the main parts of coma and spherical aberrations) arise from edge effects, i.e., strong local curvature changes from the optical zone to the transition zone and from the transition zone to the untreated cornea. It is then necessary to emphasize the use of a large optical zone (6.50 millimetres or more) to cover the scotopic pupil size, and a large and smooth transition zone.
However, there are several ways to determine the corneal vertex: the most extensively used one is to determine the coaxial corneal light reflex (1st Purkinje image). Nevertheless, there is a problem in using the coaxial light reflex, because surgeons differ; for instance, the coaxial light reflex will be seen differently depending on surgeon eye dominance, surgeon eye balance, or the stereopsis angle of the microscope. For example, the LadarVision platform (Alcon) uses a coaxial photograph as reference to determine the coaxial light reflex, which is independent of the surgeon's focus. For that reason, in the current study, ablations were centred using the pupillary offset, the distance between the pupil centre and the normal CV. Considering this, for aberration-free profile, aspherical, or, in general, non-customised treatments, we use minimum patient data (sphere, cylinder, and axis values) from the diagnosis. Therefore, we assume that the patient's optical system is aberration-free or that those aberrations are not clinically relevant (otherwise we would have planned a customised treatment). For those reasons, the most appropriate centring reference is the corneal vertex; we then modify the corneal asphericity with an aberration-free ablation profile, including loss-of-efficiency compensations. For customised wavefront treatments, with a change in aberrations according to the diagnosis measurements, we use a more comprehensive data set from the patient diagnosis, including the aberrations, because the aberration maps are described for a reference system in the centre of the entrance pupil. The most appropriate centring reference is then the entrance pupil as measured in the diagnosis. Providing different centring references for different types of treatments is not ideal, because it is difficult to standardize the procedures. Nevertheless, ray tracing indicates that the optical axis is the ideal centring reference.
Because this is difficult to standardize, and considering that the anterior corneal surface is the main refractive element of the human eye, the CV, defined as the point of maximum elevation, will be the closest reference, as proposed here. It shall, however, be noticed that in the less prevalent oblate corneas the point of maximum curvature (corneal apex) might be off centre and not represented by the corneal vertex. However, it would be interesting to refer the corneal and/or ocular wavefront measurements to the optical axis or the CV. This can be done easily for corneal wavefront analysis, because there is no limitation imposed by the pupil boundaries. However, it is not as easy for ocular wavefront analysis, because the portion of the cornea above the entrance pupil alone is responsible for the foveal image. In keratoconus/keratectasia, post-LASIK (pupil-centred) eyes, corneal warpage induced by contact lens wearing, and other diseases causing irregularity of the anterior corneal surface, the corneal vertex and the corneal apex may shift. In those cases, the pupil centre is probably more stable. Moreover, since most laser systems are designed to perform multiple procedures besides LASIK, it is more beneficial for excimer laser systems to have the flexibility to choose different centration references. A deeper analysis of the induced ocular aberrations and the changes in asphericity showed significant differences favouring CV centration for the induction of coma and spherical ocular aberration and for the changes in asphericity, and no significant differences for the induced ocular trefoil. Due to the smaller angle kappa associated with myopes compared with hyperopes, centration issues are less apparent. However, we wanted to test whether the angle kappa in myopes was sufficiently large to show differences in results, because it is always desirable to achieve as much standardization as possible and not to treat the myopes using one reference whereas the hyperopes use a different one.
Previous studies192, based on theoretical calculations with 7.0-mm pupils, have reported that even for customized refractive surgery, which is much more sensitive to centration errors, it appears unlikely that optical quality would be degraded if the lateral alignment error did not exceed 0.45 mm; in 90% of eyes, even an accuracy of 0.8 mm or better would have been sufficient. In our case, the pupillary offset averaged 0.29 millimetres, and this moderate value seems to be sufficiently large to be responsible for differences in ocular aberrations, however, not large enough to correlate this difference in ocular aberrations with functional vision. A limitation of this study is that we have used a comparison based upon two different groups of patients with different centrations used as reference. A direct comparison on a lateral/contralateral eye basis for the assignment of the centration reference would maybe reduce the variability of external uncontrollable effects (like flap cut, corneal response to the ablation, repeatability of the instruments, cooperation of the patients, etc.). However, such a direct comparison may reduce patients' satisfaction, as patients may postoperatively observe differences between eyes due to the different centrations.

Bilateral symmetry (Simetría bilateral)

The aim of this part was to evaluate the effects of laser corneal refractive surgery on the bilateral symmetry of the corneal wavefront aberration; in particular, following a treatment performed with the AMARIS system, which is based on an Aberration-FreeTM ablation profile. The advantage of the Aberration-FreeTM ablation profile is that it aims to be neutral for HOAb, leaving the visual print of the patient as it was preoperatively with the best spectacle correction. If the aimed Aberration-Free concept had been rigorously achieved, the bilateral symmetry between eyes would have been automatically obtained.
In our group of patients, the aimed Aberration-Free concept did not hold rigorously true, but we had a very minor increase in corneal aberrations for a 6 mm pupil. As shown from the data presented herein, non-customised femtosecond LASIK performed with the combination of the LDV and AMARIS platforms is safe and effective, and it preserves reasonably well the bilateral symmetry of the corneal wavefront aberration between eyes. This may be related to the advantages of profiles aiming to be neutral for HOAb, or to the fact that the high-speed AMARIS system reduces variability from stromal hydration effects, which increase with the duration of treatment193,194. Recognizing the high levels of defocus and astigmatism in this study, analysis of pre- and postoperative binocular vision195 would be of interest and is a partial limitation of this study196. Further analysis of bilateral symmetry as a function of the analysis diameter is also of interest. Long-term follow-up on these eyes will help determine the stability of these accurate outcomes. Comparing similar outcomes from other lasers, to see whether any of the parameters we measured are really different for other lasers or microkeratomes, and analyses to determine whether these parameters are clinically relevant, will help to determine the impact of this work. Cuesta et al.197 found that even differences in corneal asphericity may affect binocular visual function by diminishing the binocular contrast-sensitivity function. Jiménez et al.198 found that following LASIK, binocular function deteriorates more than monocular function, and that this deterioration increases as the interocular differences in terms of aberrations and corneal shape increase.
They also found that interocular differences above 0.4 µm of RMS for a 5 mm analysis diameter (0.4 D) lead to a measurable drop in binocular summation. In our study, only 4 out of 25 patients showed preoperative clinically relevant OS-vs.-OD differences (i.e., larger than 0.25 D), whereas 6 months postoperatively only 2 out of 25 patients showed clinically relevant OS-vs.-OD differences (i.e., larger than 0.25 D). RMS(∆HOAb) analysis for interocular differences accounts for the RMS of the differential corneal wavefront aberration and not for the difference of the corneal RMS(HOAb). RMS(∆HOAb) is a rigorous analysis metric, because it accounts for any deviation (i.e., both inductions and reductions of the wavefront aberration, since both contribute positively to increase the RMS value). Furthermore, it can be mathematically demonstrated that:

RMS(∆HOAb) ≥ ∆RMS(HOAb)   (45)

Limitations of our study include the moderate number of eyes, the limited follow-up and the lack of a control group. The method to determine whether or not symmetry is maintained consists of comparing individual terms in a variety of ad hoc ways both before and after refractive surgery, ignoring the fact that retinal image quality for any given individual is based on the sum of all terms. However, similar methodologies have been used before.199 At this stage, we did not perform any specific visual tests on binocular vision, for example, stereotests. Some patients may not have good stereopsis but may still show good aberration symmetry. The analysis of bilateral symmetry should be related to the patients’ binocular vision status. Despite these limitations, we were able to demonstrate that “aberration neutral” ablation profiles reasonably preserve the bilateral symmetry between eyes in terms of corneal wavefront aberration. The presented results cannot be extrapolated to patients with symptoms of amblyopia200, anisometropia, nystagmus, or aniseikonia201 without further studies.
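Inequality (45) is the reverse triangle inequality: the RMS of a wavefront expressed in normalized Zernike coefficients is the Euclidean norm of the coefficient vector. A short sketch, with a and b denoting the HOAb coefficient vectors of the two eyes:

```latex
\mathrm{RMS}(a) = \lVert a \rVert_2, \qquad
\mathrm{RMS}(\Delta\mathrm{HOAb}) = \lVert a - b \rVert_2 .
\]
\[
\lVert a \rVert_2 = \lVert (a-b) + b \rVert_2 \le \lVert a-b \rVert_2 + \lVert b \rVert_2
\;\Longrightarrow\; \lVert a-b \rVert_2 \ge \lVert a \rVert_2 - \lVert b \rVert_2 ,
\]
\[
\text{and, exchanging } a \text{ and } b,\qquad
\lVert a-b \rVert_2 \ge \bigl|\, \lVert a \rVert_2 - \lVert b \rVert_2 \,\bigr|
= \Delta\mathrm{RMS}(\mathrm{HOAb}).
```

Equality holds only when the two coefficient vectors are parallel, which is why RMS(∆HOAb) is the stricter interocular symmetry metric.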
This does not mean any "good or bad" implication for binocular vision. Taking into account that we cannot precisely evaluate the role of aberrations monocularly (patients with a high level of aberrations can have an excellent visual acuity and vice versa), it is even more difficult to do it binocularly. The important question in binocular vision is the role of interocular differences, and whether they can significantly influence binocular performance. Interocular differences can be minor but still significant for visual performance. Further studies shall help to determine the impact of this on binocular visual performance.

Section B.6 CONCLUSIONS

“Aberration-free” patterns for refractive surgery as defined here, together with consideration of other sources of aberrations such as blending zones, eye-tracking, and corneal biomechanics, yielded results comparable to those of customisation approaches. CV-centred treatments performed better in terms of induced ocular aberrations and asphericity, but both centrations were identical in terms of photopic visual acuity. Aberration-Free treatments with the SCHWIND AMARIS did not induce clinically significant aberrations, and maintained the global OD-vs.-OS bilateral symmetry, as well as the bilateral symmetry between corresponding Zernike terms (which influences binocular summation). The induced corneal aberrations were smaller than those reported for the classical profile or in other publications. Having close-to-ideal profiles should improve clinical outcomes, decreasing the need for nomograms and diminishing induced aberrations after surgery.

Section B.7 OUTLOOK

In this study we have used aberration-free profiles as a basis for the simulations and clinical evaluations. We have learnt that aberration-free profiles may reduce the induction of aberrations below clinically relevant values.
Since we are confident that, on these grounds, the induction of aberrations can be controlled, in further studies wavefront-guided profiles will be explored and analyzed in a similar way. In this chapter, we have performed clinical evaluations in moderate levels of myopia and hyperopia. We have learnt that aberration-free profiles reduce the induction of aberrations below clinically relevant values, but still induce some minor levels of aberrations. In further studies, higher levels of myopia and hyperopia will be analyzed to determine to which extent the induction of aberrations remains below clinically relevant values. This chapter was limited to limiting the induction of aberrations; further studies will attempt to manipulate the induction of aberrations in a controlled manner, e.g. for presbyopic corrections.

Topic C (Decision-tree analysis for the optimization of refractive surgery outcomes)

Study concept and design (S.A.M.); data collection (M.C.A., M.C., D.O., J.G., I.M.); analysis and interpretation of data (S.A.M.); drafting (S.A.M.); critical revision (T.M., M.C.A., T.E., M.C., D.O., J.G., I.M.A.); statistical expertise.

Section C.1 ABSTRACT

PURPOSE: To assess a decision tree analysis system to further optimize refractive surgery outcomes.

METHODS: A 5-step decision tree, the Decision Assistant Wizard, based on previous experience with the SCHWIND AMARIS laser, was applied for selecting a customized refractive surgery treatment mode (aspheric aberration neutral, corneal wavefront-guided, or ocular wavefront-guided) to eliminate or reduce the total aberrations of the eye.

RESULTS: Using the Decision Assistant Wizard, 6467 LASIK treatments were performed over a 30-month period; 5262 and 112 for myopic and hyperopic astigmatism, respectively, using aspheric aberration neutral (AF) profiles, 560 using corneal wavefront-guided profiles, and 533 using ocular wavefront-guided profiles.
Twenty-two (0.3%) retreatments were performed overall; 18 (0.3%) and 0 (0%) after myopic and hyperopic astigmatism, respectively, using AF profiles, 3 (0.5%) after corneal wavefront-guided profiles, and 1 (0.2%) after ocular wavefront-guided profiles.

CONCLUSIONS: Decision assistant wizards may further optimize refractive surgical outcomes by providing the most appropriate ablation pattern based on an eye’s anamnesis, diagnosis, and visual demands. The general principles may be applied to other laser systems; however, specifics will depend on manufacturers’ specifications.

Section C.2 INTRODUCTION

Many studies have proven that laser correction of ametropias induces aberrations (most significant is the induction of spherical aberration). Corneal laser refractive surgery evolved from simple myopic ablations to the most sophisticated topography-guided, wavefront-driven, or aspheric patterns to preserve the preoperative level of high order aberrations. Some hypotheses state that there might be "good" aberrations and others that are to be avoided. Reasonable reductions in HOAb after wavefront-guided treatments on aberrated eyes and reasonable changes in HOAb after wavefront-optimized treatments have been reported. However, a significant number of refractive surgery patients may not benefit from OW-guided treatments, as the induction of HOAb is related to baseline levels of HOAb. For example, HOAb tend to be induced in patients with less than 0.30 µm and reduced in patients with more than 0.30 µm of HOAb. Physiologic aberrations may be required to maintain the visual quality of the eye. Our definition of “Customisation” is conceptually different and can be stated as: “The planning of the optimum ablation pattern specifically for each individual eye based on its diagnosis and visual demands.” It is often the case that the best approach for planning an ablation is a sophisticated pattern, which can still be simply described in terms of sphere, cylinder, and orientation (axis).
We recently published a review of our 18-month experience with the AMARIS110, where we introduced the systematic use in our clinical routine of decision-tree analyses for selecting the most appropriate type of correction to be applied. Ablation profiles considered in the decision tree include the Aberration-FreeTM (called aspheric aberration neutral in this chapter), OW-guided and CW-guided profiles. During this time, some new findings have helped us to refine our strategies. A simplified, updated version of our current decision tree is presented herein. The Decision Assistant Wizard, which we present here, is based on our experience with the SCHWIND AMARIS laser. While the general principles of this decision-tree based planning (Figure 37) can basically be applied to any other laser platform offering aspheric and wavefront-guided profiles, some specific aspects concerning both diagnosis and treatments may depend on other manufacturers’ specifications.

Figure 37: Decision-Tree applied for selecting the treatment mode (Aspheric aberration neutral, Corneal-Wavefront-Guided, or Ocular-Wavefront-Guided). [Flowchart relating the most representative corneal and ocular wavefront measurements, objective and subjective refraction, visual acuity, corneal and ocular RMS(HOA), complaints about quality or night vision, wavefront map diameters, and age/proximity to IOL exchange to the three treatment modes.]

Section C.3 METHODS

We begin by acquiring four corneal topographies (Corneal Wavefront Analyzer, SCHWIND eye-tech-solutions GmbH & Co. KG, based on the Keratron Scout, OPTIKON2000, Rome, Italy) and derived CW analyses centred on the line-of-sight for each eye of the patient. We calculate the mean, and discard the least representative map (the one with the poorest similarity to the mean).
From the remaining three maps, we calculate the mean, and select the most representative one (the one with the highest similarity to the mean). We continue by acquiring, under non-pharmacologically dilated pupils (avoiding pharmacologically induced pupil shifts202,203), three aberrometries (Ocular Wavefront Analyzer, SCHWIND eye-tech-solutions GmbH & Co. KG, based on the irx3, Imagine Eyes, Orsay, France) and objective refractions for each eye of the patient. To minimize the potential accommodative response of the patients, we ask them to “see-through-the-target” instead of “looking at the target.” In this way, patients do not try to get a sharp image from the +1.5 D fogged target, since they were instructed to see through it. From those aberrometries, we calculate the mean, and select the most representative one (the aberrometry map with the highest similarity to the mean).

Manifest refraction (Refracción manifiesta)

We use the objective refraction analyzed for a sub-pupil of 4 mm diameter as the starting refraction for this step. This is particularly useful for determining the magnitude and orientation of the astigmatism138,142. We measure manifest refraction, and uncorrected and best spectacle-corrected Snellen visual acuity204 (UCVA and BSCVA, respectively). Further rules that we impose for accurately determining the manifest subjective refraction among equal levels of BSCVA are: taking the measurement with the least negative (the most positive) spherical equivalent (unmasking latent hyperopia); if several of them are equal in terms of spherical equivalent, we choose the measurement with the least amount of astigmatism (reducing the risk of postoperative shifts in the axis of astigmatism).

Decision process (Proceso de decisión)

The decision process starts by estimating the global optical impairment resulting from the measured wave aberrations.
This is done by objectively determining the actual clinical relevance of single terms in a Zernike expansion of the wave aberration. In general, for the same magnitude of aberration, the optical blur produced by high order aberrations increases with increasing radial order and decreases with increasing angular frequency. On this basis, the dioptric equivalent (DEq) was used. If the global optical blur for both corneal and ocular wave-aberrations (CWAb and OWAb, respectively) is below 0.25 DEq for both eyes, then the treatment to be applied is aspheric aberration neutral. If the global optical blur for any corneal or ocular wave-aberration is between 0.25 DEq and 0.50 DEq for any eye, then we check the BSCVA achieved during the manifest refraction. If the BSCVA is better than 20/20 for both eyes, then we ask the patient about complaints regarding night vision or, in general, quality of vision. If the patient does not report complaints, then the treatment to be applied is aspheric aberration neutral. If the patient reports complaints regarding quality of vision, the BSCVA is worse than 20/20 for any eye, or the global optical blur for both corneal and ocular wave-aberrations is above 0.50 DEq for both eyes, then we compare corneal and ocular wave-aberrations. For this, we calculate the differential aberration (CWAb − OWAb, both centred at the line-of-sight) in terms of the Zernike expansion, and estimate the global optical difference. If this global optical difference between corneal and ocular wave-aberrations is below 0.25 DEq for both eyes, we consider both corneal and ocular wave-aberrations as equivalent. In this case, the treatment to be applied depends on the available diameter of the wavefront maps and the scotopic pupil size.
If the diameter of the ocular or corneal WAb map (the one providing the largest diameter) is at least as large as the scotopic pupil size (in natural dark conditions) reduced by 0.25 mm, then an ocular- or corneal-wavefront-guided ablation is performed (the one providing the largest diameter), or an aspheric aberration neutral treatment otherwise. Usually the size of the ocular WAb maps is similar to the size of the scotopic pupil, whereas corneal WAb maps are wider (up to 10 mm in diameter). If the global optical difference between corneal and ocular wave-aberrations is above 0.25 DEq for any eye, we consider the internal wave-aberration (IWAb) relevant; then the treatment to be applied is ocular-wavefront-guided if the patient is neither in age nor in ophthalmic indications close to an IOL exchange (due to e.g. lenticular opacities); otherwise no laser corneal refractive treatment is recommended (since an IOL exchange is preferred).

- Corneal AND ocular wavefronts < 0.25 DEq: aspheric aberration neutral.
- Corneal OR ocular wavefront between 0.25 DEq and 0.50 DEq: aspheric aberration neutral, if BSCVA ≥ 20/20 AND no complaints about quality or night vision; corneal or ocular wavefront-guided, if (BSCVA < 20/20 OR complaints about quality or night vision) AND internal wavefront < 0.25 DEq; ocular wavefront-guided, if (BSCVA < 20/20 OR complaints about quality or night vision) AND no lenticular problems.
- Corneal AND ocular wavefronts > 0.50 DEq: corneal or ocular wavefront-guided, if internal wavefront < 0.25 DEq; ocular wavefront-guided, if no lenticular problems; aspheric aberration neutral, if wavefront maps smaller than scotopic pupil.

Table 5: Indications chart.

Section C.4 RESULTS

Distribution of treatments (Distribución de los tratamientos)

In the 30 months we have been using the SCHWIND AMARIS, we have performed 6467 LASIK treatments, divided as: 5262 treatments (81%) for myopic astigmatism using aspheric aberration neutral profiles, 112 treatments (2%) for hyperopic astigmatism using aspheric aberration neutral profiles, 560 treatments (9%) using corneal-wavefront-guided profiles, and 533 treatments (8%) using ocular-wavefront-guided profiles.
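The decision process described above can be condensed into a short sketch. This is an illustrative reading of the published tree, not the SCHWIND software interface: function and argument names are hypothetical, while the thresholds (0.25 DEq, 0.50 DEq, the 0.25 mm pupil margin) are those given in the text.

```python
def select_treatment(corneal_deq, ocular_deq, internal_deq,
                     bscva_2020_or_better, complaints,
                     map_diameter, scotopic_pupil, near_iol_exchange):
    """Sketch of the Decision Assistant Wizard logic (all blur values in DEq)."""
    low = max(corneal_deq, ocular_deq) < 0.25    # both wavefronts mild
    high = min(corneal_deq, ocular_deq) > 0.50   # both wavefronts severe
    if low:
        return "aspheric aberration neutral"
    if not high and bscva_2020_or_better and not complaints:
        # Intermediate blur, good corrected acuity, no quality-of-vision complaints
        return "aspheric aberration neutral"
    # Blur is clinically relevant: compare corneal and ocular WAb via the IWAb
    if internal_deq < 0.25:
        # Corneal and ocular WAb equivalent: use the widest map if large enough
        if map_diameter >= scotopic_pupil - 0.25:
            return "corneal/ocular wavefront-guided"
        return "aspheric aberration neutral"
    # Internal aberration relevant
    if near_iol_exchange:
        return "no laser treatment (IOL exchange preferred)"
    return "ocular wavefront-guided"

print(select_treatment(0.10, 0.15, 0.05, True, False, 8.0, 6.5, False))
```

The ordering of the branches mirrors the text: global blur first, then corrected acuity and complaints, then the corneal-vs-ocular comparison, and finally the lenticular/IOL consideration.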
Rate of retreatments (Índice de retratamientos)

From those, we have performed 22 retreatments overall (0.3%): 18 (0.3%) after myopic astigmatism using AF profiles, no retreatments (0.0%) after hyperopic astigmatism using AF profiles, 3 retreatments (0.5%) after CW-guided profiles, and 1 retreatment (0.2%) after OW-guided profiles.

Section C.5 DISCUSSION

There are basically three types of approaches for planning a corneal refractive treatment. The first are those that have as their objective the elimination or reduction of the total aberrations of the eye. The main criticism of this approach argues that the goal of “zero aberration” is inconsistent throughout the day due to accommodation, and little lasting, since aberrations change with age205,206,207. The second approach is intended to correct all corneal aberrations, since corneal aberrations do not change with age208,209. However, this concept might also be wrong, considering that corneal aberrations interact with internal aberrations, some of them being cancelled, producing an aberration pattern of the total eye that is in general different from the aberration pattern of the cornea. Therefore, by only removing corneal aberrations we might worsen the overall aberrations, since the internal aberration might not find a corneal aberration for compensation. In case the corneal aberration is of the same sign as the internal aberration, the correction of the corneal aberration would be useful, as it would reduce the total aberration of the eye. A third approach tries not to induce aberrations. This type of treatment is not as ambitious, but much simpler to operate.
The goal of the Aberration-FreeTM ablation profile is to provide a neutral HOAb ablation, i.e., to maintain the same HOAb profile both preoperatively with best spectacle correction and postoperatively without correction. There is evidence of neural adaptation to the baseline wavefront profile.69,72,73 The interaction between high-order aberrations can be beneficial to visual quality regardless of the magnitude of HOAb.187,188,189 Based on the random nature of HOAb induction and current research, it may be beneficial to maintain the preoperative wavefront profile for a significant number of refractive surgery candidates. We are not postulating that customized ablation algorithms in any form (ocular-wavefront-guided, corneal-wavefront-guided, topography-guided) are not useful. Rather, that specific populations with specific demands deserve specific treatment solutions. Aspheric treatments aimed at preservation of the preoperative HOAb show their strengths in patients with preoperative BSCVA 20/20 or better, or in patients whose visual degradation cannot be attributed to the presence of clinically relevant HOAb (e.g. lens opacities). The corneal wavefront customized approach shows its strength in cases where abnormal corneal surfaces are expected. Apart from the risk of additional ablation of corneal tissue, wavefront customized corneal ablation can be considered a safe and beneficial method. Our experience suggests that wavefront customized treatments can only be successful if pre-existing aberrations are greater than repeatability (e.g. repeatability of diagnostic210 and treatment devices) and biological noise (e.g.
day-to-day variabilities in visual acuity, refraction, or aberration in the same patient). Furthermore, coupling effects between different high order aberration terms, and between HOAb and manifest refraction, have been found144,211,212: for example, between defocus and spherical aberration, between 3rd order aberrations and low order terms, between spherical aberration and coma, or between secondary and primary astigmatism. These interactions may provide some relative visual benefits213, but may as well contribute as sources of uncertainty in the conversion of wavefront aberration maps to refractive corrections. Notice that for comparing OWAb and CWAb, the analysis of the IWAb as CWAb − OWAb is mandatory, since RMS(IWAb) accounts for any deviation (i.e. inductions and reductions of the wave-aberration both contribute positively to increase the RMS value). The Decision Assistant Wizard presented here may theoretically be applied to any other laser platform offering aspheric, topography-guided, and wavefront-guided profiles, if appropriate analysis functions for CWAb, OWAb, and IWAb are available. Simplified versions with limited functionalities are also possible if, for example, neither CW analyses (i.e. no IWAb) nor topography-guided profiles are available.

Section C.6 CONCLUSIONS

The desired outcome of non-wavefront-driven refractive surgery is to balance the effects on the wave-aberration and to provide normal eyes with perhaps the most natural, unaltered quality of vision. While ocular wavefront treatments have the advantage of being based on an objective refraction of the complete human eye system, and corneal wavefront treatments have the advantage of being independent from accommodation effects or light/pupil conditions, aspheric treatments have the advantage of saving tissue and time and, due to their simplicity, offer better predictability.

Section C.7 OUTLOOK

The clinical evaluations in this chapter were limited to correcting the subjects’ manifest refractions.
However, in highly aberrated eyes, manifest refraction may become an art, a sort of guessing around the least blurred image. In further studies, systematic deviations from the measured manifest refractions, as well as other foreseeable couplings among Zernike coefficients, will be evaluated.

Topic D (Analysis of the loss of ablation efficiency at non-normal incidence)

Study concept and design (S.A.M.); data collection (S.A.M.); analysis and interpretation of data (S.A.M.); drafting (S.A.M.); critical revision (D.O., C.H., N.T., M.S.); statistical expertise (S.A.M.).

Section D.1 ABSTRACT

A general method to analyze the loss of ablation efficiency at non-normal incidence in a geometrical way is provided. The model is comprehensive and directly considers curvature, system geometry, applied correction, and astigmatism as model parameters, and indirectly laser beam characteristics and ablative spot properties. The model replaces the direct dependency on the fluence by a direct dependence on the nominal spot volume and on considerations about the area illuminated by the beam, reducing the analysis to pure geometry of impact. Compensation of the loss of ablation efficiency at non-normal incidence can be made at relatively low cost and would directly improve the quality of results.

Section D.2 INTRODUCTION

The loss of ablation efficiency at non-normal incidence could explain, in part, many of the unwanted effects observed in refractive surgery, such as the induction of spherical aberration or high order astigmatism and, consequently, the extreme oblateness of postoperative corneas after myopic surgery. Probably the earliest references related to the loss of ablation efficiency in laser refractive surgery refer to the observation of hyperopic postoperative refractions (hyperopic shifts) after negative cylinder ablation of the cornea.
This hyperopic postoperative refraction had not been planned and depended on various factors, such as the laser system used, the amount of negative cylinder corrected, or the presence or absence of spherical terms in the ablation profile. For the surgeons, it was difficult to adequately compensate this effect in their nomograms to achieve the desired refractive correction. According to some surgeons, some manufacturers introduced the concept of a coupling factor, defined as the average sphere resulting from the application of one diopter of negative cylinder. Despite its empirical nature, this coupling factor enabled surgeons to plan their treatments with a reasonable degree of success. One clue that relates this coupling factor to the loss of efficiency is the analysis of the effect in the correction of simple negative astigmatisms (Figure 38). These cases revealed that the neutral axis became refractive, being less ablated in the periphery as compared to the centre. Similar experiences were observed using phototherapeutic keratectomy (PTK), but the results were not as conclusive, as cases where PTK is performed in large diameters are rare.

Figure 38: Hyperopic shift and coupling factor (achieved vs. intended correction). Ablating a simple myopic astigmatism, the neutral axis became refractive, and the ablation depth in the periphery was smaller than in the centre.

Section D.3 METHODS

Determination of the ablation efficiency at non-normal incidence (Determinación de la eficiencia de la ablación para incidencia no-normal)

The issue of loss of ablation efficiency is composed of reflection losses and geometrical distortions (Figure 39).
Figure 39: Losses on reflection (Fresnel’s equations), dependent on the angle of incidence, and losses dependent on the geometric distortion (angle of incidence).

The introduction of the concept of aberration-free profiles made it necessary to compensate for the induction of aberrations originating from deterministic and repeatable causes, thus minimizing the induction of aberrations to noise levels, so that a “new” model had to be developed. The aim in developing this model was to understand the mechanisms that govern the loss of ablation efficiency and to be able to predict their effect under different working conditions.

1.- Considering the preoperative corneal curvature and asphericity as well as the intended refractive correction, the radius of curvature and the asphericity the cornea will have after 50% of the treatment are estimated. (As the radius of corneal curvature changes during treatment, the efficiency also varies over treatment. The value at 50% of the treatment was chosen as a compromise to consider both the correction applied and the preoperative curvature, Figure 40.)

Figure 40: The radius of corneal curvature changes during treatment, so the efficiency also varies over treatment; the values at 50% of the treatment represent a reasonable compromise to consider both the correction applied and the preoperative curvature. [Ideal corneal line-shape (sagittal elevation in µm vs. radial distance in mm) at different stages of a treatment: preoperative Rpre = 7.8 mm, Qpre = −0.2; half-treatment Rht = 8.7 mm, Qht = −0.1; postoperative Rpost = 9.9 mm, Qpost = 0.1.]
2.- Considering the offset of the galvoscanners’ neutral position with respect to the system axis (Figure 41), the angle of incidence of the beam onto a flat surface perpendicular to the axis of the laser is calculated:

α(x,y) = arctan( √[(x − X_G)² + (y − Y_G)²] / d_G )   (46)

where α is the angle of incidence on a “flat” surface, X_G, Y_G the position of the galvoscanners, x, y the radial positions of the incident beam, and d_G the vertical distance from the last galvoscanner to the central point of the ablation.

Figure 41: The offset of the galvoscanners from the axis of the system is considered in the calculation of the angle of incidence of the beam onto a flat surface perpendicular to the axis of the laser.

3.- Considering the calculated curvature and asphericity at 50% of the treatment, the local tilt angle of the cornea is calculated. Assuming the cornea to be an ellipsoid, which satisfies Baker’s equation,

x² + y² + z²(Q_HT + 1) − 2 z R_HT = 0   (47)

the local tilt follows as

θ(x,y) = arctan( √(x² + y²) / √[R_HT² − (Q_HT + 1)(x² + y²)] )   (48)

where θ is the angle of the local tilt of a corneal location, and R_HT and Q_HT are the predicted radius of curvature and asphericity quotient at 50% of treatment.

4.- The angle of incidence at each point on the corneal surface is the combination of both angles:

β = ang(α, θ)   (49)

where β is the angle of incidence.

5.- The ablation efficiency is calculated by consideration of geometric distortions, reflection losses, and spot overlapping. The efficient radiant exposure reads

I_Eff(x,y) = I(x,y) · cos(β(x,y)) · exp{2N[((x − x₀)² + (y − y₀)²) − ((x − x₀)² + cos²(β(x,y))·(y − y₀)²)]} · (1 − R(x,y))   (50)

where the factor cos(β(x,y))·exp{…} corresponds to the geometric distortions, the factor (1 − R(x,y)) corresponds to the reflection losses, and y is the radial direction along which the angular projection occurs.
The overlap of neighbouring spots is accounted for as

Eff(x,y) = [ Σ_m Σ_n d_{m,n}(x,y) ] / [ Σ_m Σ_n d_{m,n}(0,0) ]   (51)

where d_{m,n} is the depth contributed at a given corneal location by the pulse centred at (x₀,m, y₀,n); the sums represent the overlap and extent along the size of the impact. Using the efficient radiant exposure and applying Lambert–Beer’s law (blow-off model), and writing the single-pulse oblique factor of Eq. (50) as Φ(x,y; x₀, y₀) = cos(β(x,y)) · exp{2N[((x − x₀)² + (y − y₀)²) − ((x − x₀)² + cos²(β(x,y))·(y − y₀)²)]} · (1 − R(x,y)), we get

Eff(x,y) = 1 + (Δx₀ Δy₀) Σ_i Σ_j ln[ Φ(x,y; x₀,ᵢⱼ, y₀,ᵢⱼ) / Φ(0,0; x₀,ᵢⱼ, y₀,ᵢⱼ) ]   (52)

where Δx₀ and Δy₀ are the spot overlapping distances (i.e. the distance between two adjacent pulses) and x₀,ᵢⱼ and y₀,ᵢⱼ are the respective centres of the different spots contributing to the overlap at one corneal location. If the galvoscanners are coaxial with the laser system:

β(x,y) = α(x,y) + θ(x,y)   (53)

If the distance from the last mirror to the ablation plane is large (d_G ≫ r):   (54)

α(x,y) → 0   (55)
β(x,y) ≈ θ(x,y)   (56)
β(0,0) = 0   (57)

and, normalizing the reflection losses to normal incidence through 1 − R(0,0) = 4n_t/(n_t + 1)², the expression simplifies to

Eff(x,y) = 1 + (Δx₀ Δy₀) Σ_i Σ_j ln[ cos(β(x,y)) · (1 − R(x,y))·(n_t + 1)²/(4n_t) · exp{2N(1 − cos²β(x,y))·(y − y₀,ᵢⱼ)²} ]   (58)

If the spot overlapping is very tight and many pulses contribute to the ablation at each corneal location (i.e. Δx₀, Δy₀ ≪ FP), it further simplifies to an integral over the spot centres:

Eff(x,y) = 1 + ∫∫ ln[ cos(β(x,y)) · (1 − R(x,y))·(n_t + 1)²/(4n_t) · exp{2N(1 − cos²β(x,y))·(y − y₀)²} ] dx₀ dy₀   (59)

In this way, we removed the direct dependency on the fluence and replaced it by a direct dependence on the nominal spot volume and on considerations about the area illuminated by the beam, reducing the analysis to pure geometry of impact. There are two opposing effects: the beam is compressed due to reflection and at the same time expands due to its projection angle.
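As a numerical illustration of the geometry in steps 2–5, the local corneal tilt of the ellipsoid in Baker’s equation (Eq. 47) and the unpolarized Fresnel reflectance can be evaluated. This is a sketch with assumed values: a tissue refractive index n_t = 1.52 and the preoperative cornea of Figure 40 (R = 7.8 mm, Q = −0.2); the function names are illustrative.

```python
import math

def fresnel_unpolarized(beta, n_t=1.52):
    """Unpolarized Fresnel reflectance, air -> tissue, at incidence angle beta (rad)."""
    if beta == 0.0:
        # Normal incidence: R = ((n_t - 1) / (n_t + 1))^2
        return ((n_t - 1.0) / (n_t + 1.0)) ** 2
    t = math.asin(math.sin(beta) / n_t)  # Snell's law: refracted angle
    rs = (math.sin(beta - t) / math.sin(beta + t)) ** 2  # s-polarization
    rp = (math.tan(beta - t) / math.tan(beta + t)) ** 2  # p-polarization
    return 0.5 * (rs + rp)

def local_corneal_tilt(r, R=7.8, Q=-0.2):
    """Local surface tilt (rad) of an ellipsoid satisfying Baker's equation (Eq. 48)."""
    return math.atan(r / math.sqrt(R * R - (Q + 1.0) * r * r))

for r in (0.0, 1.5, 3.0, 4.0):
    theta = local_corneal_tilt(r)
    refl = fresnel_unpolarized(theta)
    print(f"r={r} mm: tilt={math.degrees(theta):.1f} deg, reflectance={100*refl:.1f} %")
```

For clinically relevant tilts the reflectance stays close to its normal-incidence value of roughly 4–5%, consistent with the later observation that reflection losses are essentially negligible compared to the geometric distortions.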
6.- The compensation is the inverse of the efficiency:

κᵢⱼ = 1 / Eff   (60)

7.- We can develop the efficiency (or the compensation) in a power series:

Eff = 1 − A(r/R)² − B(r/R)⁴ + …   (61)
κ = 1 + C(r/R)² + D(r/R)⁴ + …   (62)

Therefore, instead of using the radius at half of the treatment, we can calculate the overall effect of the variation in efficiency over the treatment:

Eff(Rᵢ, R_f, r) = ∫ Eff dR / ∫ dR   (63)
Eff(Rᵢ, R_f, r) ≈ 1 − A r²/(Rᵢ R_f) − B r⁴ (Rᵢ² + Rᵢ R_f + R_f²)/(3 Rᵢ³ R_f³)   (64)
κ(Rᵢ, R_f, r) = ∫ κ dR / ∫ dR   (65)
κ(Rᵢ, R_f, r) ≈ 1 + C r²/(Rᵢ R_f) + D r⁴ (Rᵢ² + Rᵢ R_f + R_f²)/(3 Rᵢ³ R_f³)   (66)

8.- Returning to the concept of radius and asphericity at half of the treatment, we can further simplify the model by defining an averaged spot depth d̄ as if the energy profile of the beam were flat:

d̄ = [ ∫∫ (1/μ) ln(I(x,y)/I_Th) dx dy ] / [ ∫∫ dx dy ]   (67)–(68)

with the integrals taken over the illuminated area where I > I_Th. In general, the depth at oblique incidence is

d(β) = (1/μ) ln[ I₀ cos β (1 − R)(n_t + 1)²/(4n_t) / I_Th ]   (69)

so that, with A_S = 1/(μ d̄),

Eff = 1 + A_S ln[ cos β (1 − R)(n_t + 1)²/(4n_t) ]   (70)–(71)

If this model is applied to a spherical surface (Q = 0) and the depth per layer equals the depth per pulse, or the spots do not overlap, it simplifies to the simple model.

9.- Losses due to reflection are generally negligible. This is so because the highest reflection contribution occurs for normal incidence as well, and this component is already normalized. Therefore, we can further simplify the model to

θ = arcsin( √(X² + Y²) / R )   (72)
Eff = 1 + A_S ln(cos θ)   (73)

10.- In the case of a strong astigmatic component, we can still calculate the state at 50% of the treatment through the effective meridional power and radius:

D_Eff = D_φ cos²(δ − φ) + D_{φ+90°} sin²(δ − φ)   (74)
R_Eff = R_φ R_{φ+90°} / ( R_φ sin²(δ − φ) + R_{φ+90°} cos²(δ − φ) )   (75)

Section D.4 RESULTS

As the radius of corneal curvature changes during treatment, the efficiency varies over treatment, as well.
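The closed-form average of the efficiency over the radius change (Eqs. (63)–(64)) can be checked numerically, using the exact averages ⟨1/R²⟩ = 1/(RᵢR_f) and ⟨1/R⁴⟩ = (Rᵢ² + RᵢR_f + R_f²)/(3Rᵢ³R_f³). A minimal sketch, with illustrative series coefficients A and B (not fitted laser parameters):

```python
def eff_series(r, R, A=0.05, B=0.01):
    # Eq. (61): efficiency as a power series in (r / R)
    return 1.0 - A * (r / R) ** 2 - B * (r / R) ** 4

def eff_averaged_numeric(Ri, Rf, r, n=20000, A=0.05, B=0.01):
    # Eq. (63): midpoint-rule average of the efficiency over the radius change
    h = (Rf - Ri) / n
    return sum(eff_series(r, Ri + (k + 0.5) * h, A, B) for k in range(n)) / n

def eff_averaged_closed(Ri, Rf, r, A=0.05, B=0.01):
    # Eq. (64): closed-form average, using <1/R^2> = 1/(Ri Rf) and
    # <1/R^4> = (Ri^2 + Ri Rf + Rf^2) / (3 Ri^3 Rf^3)
    return (1.0 - A * r ** 2 / (Ri * Rf)
            - B * r ** 4 * (Ri ** 2 + Ri * Rf + Rf ** 2) / (3.0 * Ri ** 3 * Rf ** 3))

# Myopic example from Figure 40: radius grows from 7.8 mm to 9.9 mm
print(eff_averaged_numeric(7.8, 9.9, 3.0), eff_averaged_closed(7.8, 9.9, 3.0))
```

The two values agree to numerical precision, confirming that the closed form is the exact average of the truncated series rather than an additional approximation.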
The ablation efficiency decreases steadily with increasing curvature, thus resulting in an improvement of ablation efficiency during myopic corrections and an increasing loss of ablation efficiency during hyperopic corrections (Figure 42).

Figure 42: Ablation efficiency at 3 mm radial distance for a sphere with 7.97 mm radius of curvature, plotted against the defocus correction (D). The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a full-width-half-maximum (FWHM) beam size of 2 mm. The radius of corneal curvature changes during treatment; accordingly, the efficiency also varies over treatment. Note the improvement of ablation efficiency during myopic corrections as opposed to the increased loss of ablation efficiency during hyperopic corrections.

The model considers curvature based upon radius and asphericity. As expected, a parabolic surface provides higher peripheral ablation efficiency (due to prolate peripheral flattening) compared to an oblate surface (with peripheral steepening) (Figure 43).

Figure 43: Contribution of the asphericity quotient (Q = +1 vs. Q = -1) to the ablation efficiency, as a function of radial distance (mm), for a radius of curvature of 7.97 mm. The ablation efficiency at the cornea was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a beam size of 2 mm (FWHM). Note the identical ablation efficiency close to the vertex as opposed to the differences in ablation efficiency at the periphery. A parabolic surface provides higher peripheral ablation efficiency (due to prolate peripheral flattening) compared to an oblate surface (with peripheral steepening).

The model considers efficiency losses due to reflection losses, geometric distortions, and spot overlapping.
Note that the reflection losses already exist for normal incidence and decrease by a very small amount towards the periphery (Figure 44). Although normal reflection losses approximately amount to 5%, they do not increase excessively for non-normal incidence. As our calculation defined ablation efficiency for a general incidence as the ratio between the spot volume for general incidence and the spot volume for normal incidence, it is evident that the so-defined efficiency equals 1 for normal incidences.

Figure 44: Contribution of the reflection and distortion losses to the ablation efficiency, as a function of radial distance (mm), for a sphere with 7.97 mm radius of curvature. Note that the reflection losses already exist with normal incidence and decrease very slightly towards the periphery. Although normal reflection losses approximately amount to 5%, they do not increase excessively for non-normal incidence. As our calculation defined the ablation efficiency for a general incidence as the ratio between the spot volume for general incidence and the spot volume for normal incidence, it is evident that the so-defined efficiency equals 1 for normal incidences.

Losses due to reflection are generally negligible, since the highest reflection contribution also occurs with normal incidence. We removed the direct dependency on the fluence and replaced it by a direct dependence on the nominal spot volume and on considerations about the area illuminated by the beam, thus reducing the analysis to pure geometry of impact.

Note that the efficiency is very poor close to the ablation threshold and steadily increases with increasing radiant exposure, approaching 100% ablation efficiency (Figure 45). It should also be noted that the difference between the efficiencies for cornea and PMMA increases with lowering radiant exposure (Figure 45).
Figure 45: Ablation efficiency at 3 mm radial distance for a sphere with 7.97 mm radius of curvature, plotted against the peak radiant exposure (mJ/cm2). The ablation efficiency was simulated for an excimer laser with a peak radiant exposure up to 400 mJ/cm2 and a FWHM beam size of 2 mm.

Finally, we compared the ablation efficiencies for cornea and PMMA for the spherical shapes prior to receiving any laser shot, and evaluated the average ablation efficiencies for the surfaces during a -12 D and a +6 D correction, respectively (Figure 46 to Figure 48). Again, note that the ablation efficiency decreases steadily with increasing curvature, resulting in an improvement of ablation efficiency during the -12 D correction as opposed to a decreased ablation efficiency during the +6 D correction.

Figure 46: Efficiency obtained with the proposed model for the conditions reported by Dorronsoro et al.214 Ablation efficiency, as a function of radial distance (mm), for a sphere with 7.97 mm radius of curvature. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a FWHM beam size of 2 mm.

Figure 47: Efficiency obtained with the proposed model for the conditions reported by Dorronsoro et al.214 Average ablation efficiency for a sphere with 7.97 mm preoperative radius of curvature and a correction of -12 D. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a FWHM beam size of 2 mm. The radius of corneal curvature changes during treatment; consequently, the efficiency also varies over treatment. Note the improvement of ablation efficiency.
Figure 48: Efficiency obtained with the proposed model for the conditions reported by Dorronsoro et al.214 Average ablation efficiency, as a function of radial distance (mm), for a sphere with 7.97 mm preoperative radius of curvature and a correction of +6 D. The ablation efficiency was simulated for an excimer laser with a peak radiant exposure of 120 mJ/cm2 and a FWHM beam size of 2 mm. The radius of corneal curvature changes during treatment; consequently, the efficiency also varies over treatment. Note the increased loss of ablation efficiency during hyperopic corrections.

Section D.5 DISCUSSION

Our approach reduces all calculations to a geometrical analysis of the impact; the ablation efficiency does not primarily depend on the radiant exposure, but rather on the volume per single shot for the specific material and on overlap and geometric considerations of the irradiated area per shot, supported by radiant exposure data. Different effects interact: the beam is compressed due to the loss of efficiency, but at the same time expands due to the angular "projection." Using this model for ablation efficiency at non-normal incidence in refractive surgery, up to 42% of the reported increase in spherical aberrations can be explained.

Applying this comprehensive loss-of-efficiency model to a pure myopia profile to get the achieved profile etched into the cornea, we observed that the profile "shrinks," steepening the average slope and thus slightly increasing the myopic power of the profile as well as inducing spherical aberrations. The net effect can be expressed as an unintended positive spherical aberration and a small overcorrection of the spherical component. Applying this model to a pure hyperopia profile, we observed that the profile "softens," flattening the average slope and thus decreasing the hyperopic power of the profile as well as inducing spherical aberrations.
The net effect can be expressed as an undercorrection of the spherical component and a small amount of induced negative spherical aberration. Applying this model to a PTK profile, we observed that the flat profile becomes myopic due to the loss of efficiency, resulting in an unintended myopic ablation (hyperopic shift).

Corneal curvature and applied correction play an important role in the determination of the ablation efficiency and are taken into account for accurate results. As a compromise between accuracy and simplicity, we decided to use the predicted radius of corneal curvature after 50% of the treatment as the curvature metric for determining the ablation efficiency. However, corneal toricity and applied astigmatism, even though easily computed using the comprehensive model, do not have a relevant impact as long as their values correspond to those of normal corneas. Only when toricity or astigmatism exceeds 3 D do their effects on ablation efficiency start to be significant.

System geometry is considered in this model using the offset of the galvoscanners' neutral position compared to the system axis, as well as the distance from the last galvoscanner to the central point of the ablation. Nevertheless, the galvoscanners are usually coaxial with (or determine the axis of) the laser system, and the distance from the last galvo-mirror to the ablation plane is usually large. We removed the direct dependency on the fluence and replaced it by a direct dependence on the nominal spot volume and on considerations about the area illuminated by the beam, reducing the analysis to pure geometry of impact in this way.

We found that the efficiency is very poor close to the ablation threshold and steadily increases with increasing radiant exposure, approaching 100% ablation efficiency. Also, differences between the efficiencies for the cornea and PMMA were observed to increase with lowering radiant exposure.
Actually, the key factor is not the peak radiant exposure of the beam, but rather the average spot depth (i.e. the ratio of spot volume to spot area). The detailed model determines the ablation efficiency considering geometric distortions, reflection losses, and spot overlapping. Geometric distortions are very important, because the angular projection expands the beam, thus spreading the beam energy over a wider area and flattening its radiant exposure. At the same time, spot overlapping is a major parameter, especially in flying-spot systems, where the spot spacing is small compared to the spot width and multiple spots overlap, all contributing to the ablation at each corneal location. Reflection losses, in contrast, can be neglected, because an important reflection contribution already occurs at normal incidence and does not increase excessively at non-normal incidence.

Surface asphericity before ablation, and especially after completion of 50% of the treatment, refines this comprehensive approach. Simulations, based on cornea and PMMA, for extreme asphericity values (from an asphericity quotient of -1 to +1) showed minor effects, with differences in ablation efficiency of 1% in the cornea and 2% in PMMA even at distances of 4 mm radially from the axis. Hence, for corneas with normal curvature and asphericity, spherical geometry seems to be a reasonably simple approach for calculating the ablation efficiency at non-normal incidence.

The loss of ablation efficiency at non-normal incidence is responsible for much of the induction of spherical aberrations observed in the treatments, as well as for the excessive oblateness of postoperative corneas observed after myopic corrections (and also for part of some overcorrections observed in high myopias and many undercorrections observed in hyperopia), with major implications for the treatment and the optical outcome of the procedure.
Compensation can be made at relatively low cost and directly affects the quality of results (after a correction of the profiles to avoid overcorrections or undercorrections in defocus and, marginally, in the cylinder). Analyzing the different models available, the simple model of ablation efficiency at non-normal incidence bases its success on its simplicity, which is the reason why it is still used by some commercial manufacturers. The problems arising from the simple model derive directly from that simplicity and, consequently, from the limitations of application imposed by its implicit assumptions. The simple model considers neither the curvature nor the asphericity of the cornea, nor the energy profile of the beam, nor the overlap of impacts, thus overestimating the ablation efficiency and underestimating its compensation.

The Jiménez-Anera model160,161 provides an analytical expression for an adjustment factor to be used in photorefractive treatments, which includes compensation both for reflection and for geometric distortion, incorporating non-linear deviations of Lambert-Beer's law. It eliminates some problems of the simple model, because it considers the energy profile of the beam, the overlapping, and the losses by reflection. However, it considers neither the curvature nor the asphericity of the cornea, it assumes that the energy profile is a Gaussian beam and that the light is unpolarized, it does not address the size or shape of the impact, and it does not consider that the radius of curvature, and accordingly the angle of incidence, changes locally throughout the treatment. Therefore, it often slightly overestimates the ablation efficiency, partially underestimating its compensation.

The Dorronsoro-Cano-Merayo-Marcos model214 provides a completely new approach to the problem. It eliminates many of the problems of both the simple model and the model by Jiménez-Anera160,161, reducing the number of assumptions and using an empirical approach.
Even so, it assumes that the reflection losses on cornea and PMMA are identical, it does not consider the local radius of corneal curvature, its asphericity, or the applied correction, it does not consider that the radius of curvature, and accordingly the angle of incidence, changes locally throughout the treatment, and it does not consider the effects for different values of fluence.

The model described here eliminates the direct dependence on fluence and replaces it by direct considerations on the nominal spot volume and on the area illuminated by the beam, reducing the analysis to pure geometry of impact. The proposed model provides results essentially identical to those obtained with the model by Dorronsoro-Cano-Merayo-Marcos.214 Additionally, it offers an analytical expression including some parameters that were ignored (or at least not directly addressed) in previous analytical approaches. The good agreement of the proposed model with the results reported in Dorronsoro's paper214 - to our knowledge the first study using an empirical approach to actually measure the ablation efficiency - may indicate that the used approach, including the discussed simplifications, is a reasonable description of the loss-of-efficiency effects. In this regard, this model may complement previous analytical approaches to the efficiency problem and may sustain the observations reported by Dorronsoro et al.214

Even though a large number of detailed parameters are considered, this model is still characterized by a relatively low degree of complexity. In particular, the model could be further refined by incorporating non-linear deviations according to Lambert-Beer's law or by considering the local corneal curvature directly from topographical measurements rather than modelling the best-fit surface.

Section D.6 CONCLUSIONS

The loss of efficiency is an effect that should be offset in commercial laser systems using sophisticated algorithms that cover most of the possible variables.
In parallel, increasingly capable, reliable, and safer laser systems with better resolution and accuracy are required. The improper use of a model that overestimates or underestimates the loss of efficiency will overestimate or underestimate its compensation and will only mask the induction of aberrations under the appearance of other sources of error.

The model introduced in this study eliminates the direct dependence on fluence and replaces it by direct considerations on the nominal spot volume and on the area illuminated by the beam, thus reducing the analysis to pure geometry of impact. It provides results essentially identical to those obtained by the model by Dorronsoro-Cano-Merayo-Marcos214, while additionally taking into account the influence of flying-spot technology, where the spot spacing is small compared to the spot width and multiple spots overlap, contributing to the same target point, as well as the correction to be applied, since the corneal curvature changes during treatment, so that the ablation efficiency also varies over the treatment.

Our model provides an analytical expression for corrections of laser efficiency losses that is in good agreement with recent experimental studies, both on PMMA and on corneal tissue. The model incorporates several factors that were ignored in previous analytical models and is useful in the prediction of several clinical effects reported by other authors. Furthermore, due to its analytical approach, it is valid for different laser devices used in refractive surgery.

The development of more accurate models to improve emmetropization and the correction of ocular aberrations is an important issue. We hope that this model will be an interesting and useful contribution to refractive surgery and will take us one step closer to this goal.
Section D.7 OUTLOOK

In further work, a comprehensive model to analyze the relative ablation efficiency at different materials (in particular human cornea and poly(methylmethacrylate) (PMMA)) will be developed, which directly considers the applied correction, including astigmatism, as well as laser beam characteristics and ablative spot properties, providing a method to convert the deviations in achieved ablation observed in PMMA to equivalent deviations in the cornea.

We are also developing a simple simulation model to evaluate ablation algorithms and hydration changes in laser refractive surgery. The model simulates different physical effects of an entire surgical process and the shot-by-shot ablation process based on a modelled beam profile. The model considers corneal hydration and environmental humidity, as well as laser beam characteristics and ablative spot properties.

Topic E (Efectos clínicos de los errores de ciclotorsión durante cirugía refractiva)

Study concept and design (S.A.M.); data collection (D.O., M.C.A., I.M.A.); analysis and interpretation of data (S.A.M.); drafting (S.A.M.); critical revision (J.M., D.O., M.C.A., I.M.A.); statistical expertise (S.A.M.).

Section E.1 ABSTRACT

To describe the theoretical effects of cyclotorted ablations on induced aberrations and to determine the limits of tolerance of cyclotorsional errors.

We developed a method to determine the average cyclotorsion during refractive surgery without a cyclotorsion tracker. We simulated mathematical conditions to determine the optical, visual, and absolute benefits in 76 consecutive treatments performed on right eyes. The results were evaluated as a Zernike expansion of residual wavefront aberrations.
Ablations based purely on Zernike decomposition but with cyclotorsion applied resulted in residual aberrations of the same Zernike modes with different magnitudes and orientations, indicating that the effect of cyclotorted compensation can be analyzed by single Zernike modes in magnitude and orientation. The effect on single Zernike modes depends only on the angular frequency and not on the radial order. We obtained a mean value of 4.39° of cyclotorsion. A theoretical optical benefit was achieved for 95% of treatments, a theoretical visual benefit in 95%, and an absolute benefit in 93%, compared with 89%, 87%, and 96% of treatments achieving actual benefits, respectively.

Residual aberrations resulting from cyclotorsion depend on the aberrations included in the ablation and on the cyclotorsional error. The theoretical impact of cyclotorted ablations is smaller than that of decentred ablations or edge effects in coma and spherical aberrations. The results are valid within a single-failure condition of pure cyclotorsional errors, because no other sources of aberrations are considered. The leap from the mathematical model to real-world outcomes cannot be extrapolated without further study.

Section E.2 INTRODUCTION

Human eyes have six degrees of freedom to move: X/Y lateral shifts, Z levelling, horizontal/vertical rotations, and cyclotorsion (rotations around the optical axis). Laser technology for refractive surgery allows corneal alterations to correct refractive errors15 more accurately than ever. Ablation profiles are based on the removal of tissue lenticules in the form of sequential laser pulses that ablate a small amount of corneal tissue to compensate for refractive errors. However, the quality of vision can deteriorate significantly, especially under mesopic and low-contrast conditions.19 Induction of aberrations, such as spherical aberrations and coma, is related to loss of visual acuity (VA)70 and quality. Some aberrations, however, may be subject to neural adaptation.
A study by Artal et al.73 on the effects of neural compensation on vision indicated that visual quality in humans is superior to the optical quality provided by the human eye. Rotation of the eye between the upright position, in which the measurements are taken, and the supine position, in which the refractive treatments are performed, may lead to ocular cyclotorsion, resulting in a mismatch of the applied versus the intended profiles (Figure 49). Recently, some equipment can facilitate the measurement of, and potential compensation for, the static cyclotorsion occurring when the patient moves from the upright to the supine position during the procedure.

Figure 49: (Top) Original wavefront error, (middle) 15° clockwise torted wavefront error, and (bottom) residual wavefront error (all in two-dimensional and three-dimensional representations).

Section E.3 METHODS

Determination of Cyclotorsion during Refractive Surgery (Determinación de la ciclotorsión durante cirugía refractiva)

We analyzed the topographies using the Keratron-Scout videokeratoscope (Optikon2000 S.p.A, Rome, Italy) preoperatively and 3 months after LASIK and measured the Maloney indices in 76 consecutive right eyes with myopic astigmatism. Using only right eyes or only left eyes simplifies the calculations because it directly avoids having to consider potential bilateral symmetry effects between eyes regarding cyclotorsion (i.e., cyclotorsional values in the left eye might have to be multiplied by -1). As reported previously,216 the achieved correction after refractive surgery can be calculated from the topographic changes.
The vectorial differences in the astigmatic space between the postoperative and preoperative Maloney indices216,217 were compared to the intended corrections (Figure 50). For example, a preoperative topography of 41.6 dioptres (D) at 111° and 41.2 D at 21° and a postoperative topography of 44.4 D at 114° and 43.5 D at 24° result in a spherical change of +3.0 D with a cylindrical component of -0.5 D at 117°; compared to the planned +3.0 D -0.5 D x 110° at the 12-mm vertex distance, this corresponds to 7° of counterclockwise cyclotorsion. Maloney indices use the inner 3 mm zone, best fitting this disk area to a spherocylindrical surface in 3D. The cylinder orientation defines the two principal meridians, and sphere and cylinder provide the curvatures of the principal meridians. In "normal" corneas (without irregular astigmatism), sim-K and Maloney analyses provide very similar results.

Figure 50: The difference between the postoperative and preoperative topographies compared to the intended correction (the difference in the orientation of the astigmatism defines the cyclotorsional error). (A) Preoperative topography. (B) Postoperative topography. (C) Differential topography. (D) Planned correction. Counterclockwise torsion of the astigmatism can be seen.

Residual Aberration after Cyclotorsional Errors during Refractive Surgery (Aberración residual tras errores de ciclotorsión durante cirugía refractiva)

When the rotation angle is 0, the aberration and compensation patterns cancel each other, resulting in no residual aberration. Based on the definition of the Zernike polynomials36 (Z(n,m), where n is a null or positive integer and m is an integer ranging from -n to +n, representing the radial and meridional orders, respectively), it is evident that the polynomials Z(n,0) are invariant under rotations around their centre. The only aberrations affected by cyclotorsional errors are those with a vector nature.
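The vector subtraction in the worked example can be reproduced with double-angle arithmetic on the topographic astigmatism. The sketch below is an illustration of the described method, not the clinical software; function names are ours, axes are in degrees, and cylinder magnitudes in dioptres. It recovers the ~0.5 D change at ≈117° and ≈7° of torsion within the rounding of the quoted indices.

```python
import math

def astig_vector(cyl, axis_deg):
    """Represent astigmatism as a double-angle vector (power-vector style)."""
    a = math.radians(2.0 * axis_deg)
    return (cyl * math.cos(a), cyl * math.sin(a))

def vector_astig(v):
    """Back from double-angle components to (magnitude, axis in degrees)."""
    mag = math.hypot(v[0], v[1])
    axis = (math.degrees(math.atan2(v[1], v[0])) / 2.0) % 180.0
    return mag, axis

# Example from the text: preoperative 41.6 D @ 111 deg / 41.2 D @ 21 deg,
# postoperative 44.4 D @ 114 deg / 43.5 D @ 24 deg, planned cylinder axis 110 deg.
pre = astig_vector(41.6 - 41.2, 111.0)
post = astig_vector(44.4 - 43.5, 114.0)
diff = (post[0] - pre[0], post[1] - pre[1])   # achieved astigmatic change

mag, axis = vector_astig(diff)
torsion = axis - 110.0                        # achieved minus planned axis
print(round(mag, 2), round(axis), round(torsion))
```

The double-angle representation is what makes the subtraction meaningful: cylinder axes repeat every 180°, so doubling the angle maps them onto a full circle where ordinary vector arithmetic applies.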
For those, Zernike polynomials are structured in two complementary sets, governed by sine/cosine functions, that avoid coupling of different aberration orders for rotations around their centre. After rotation of the opposite of the Zernike components around their origin, the aberration mode can still be decomposed into two Zernike components:

\[ C'^{\,m}_{n}=-\left[C^{m}_{n}\cos(m\theta)+C^{-m}_{n}\sin(m\theta)\right] \]

where n is the radial order, m the angular frequency, C'_n^m the rotated Zernike compensation, C_n^{±m} the original Zernike components, and θ the cyclotorsional error. The residual components after compensating for the original pattern with the rotated one are:

\[ C''^{\,m}_{n}=C^{m}_{n}\left[1-\cos(m\theta)\right]-C^{-m}_{n}\sin(m\theta) \]

where C''_n^m is the residual Zernike component. Expressing each aberration in magnitude and orientation218:

\[ \left|C''^{\,m}_{n}\right|=2\left|C^{\pm m}_{n}\right|\sin\!\left(\frac{m\theta}{2}\right) \]
\[ \Delta\alpha=\alpha-\alpha_{0}=\frac{270°}{m}+\frac{\theta}{2} \]

Using the previous example, a planned correction of +3.0 D -0.5 D x 110° at the 12-mm vertex distance and an actual spherical change of +3.0 D with a cylindrical component of -0.5 D at 117° result in 7° of counterclockwise cyclotorsion and would lead to a postoperative refraction of +0.07 D -0.13 D x 69°. The relative amount of residual aberrations depends only on the cyclotorsional error (Figure 51 and Figure 52). Because the original aberration can be described as a linear combination of Zernike polynomials36 and each of these Zernike terms results in a residual Zernike term after partial compensatory rotation, the residual wavefront aberration is the sum of all residual terms.

Figure 51: The percentage of residual aberration coefficient C(n,m) vs. cyclotorsional error. Modulation of the cyclotorsional error by the angular frequency (m) is seen; the higher the angular frequency, the faster the residual aberration varies.
For m=1 (coma), the maximum residual error is achieved for 180° of torsion; for m=2 (cylinder), the maximum residual error would be achieved for 90° of torsion; for m=3 (trefoil), the maximum residual error would be achieved for 60° of torsion, and so on.

Figure 52: The relative orientation of the residual aberration coefficient C(n,m) vs. cyclotorsional error. Modulation of the cyclotorsional error and of the relative orientation by the angular frequency (m) is seen.

Derivation of a Mathematic Condition to Determine an Optical Benefit (Derivación de una condición matemática para determinar un beneficio óptico)

A condition in which any postoperative aberration smaller than its preoperative magnitude was considered as positive was called optical benefit:

\[ 2\sin\!\left(\frac{m\theta}{2}\right)<1 \quad\Longleftrightarrow\quad \left|\theta\right|<\frac{2}{m}\arcsin\!\left(\frac{1}{2}\right)=\frac{60°}{m} \]

Using the previous example, 7° of cyclotorsion would produce an optical benefit up to the octafoil angular frequencies. Considering the cyclotorsional error and the preoperative astigmatism, we calculated how many treatments would theoretically achieve an optical benefit for the astigmatism component (m=2). Because the treatments were planned as aberration-free profiles and were, therefore, only based on sphere, cylinder, and axis inputs, the astigmatism was the only vector-nature aberration included. Moreover, astigmatism is in magnitude the major Zernike mode with a vector nature. We compared this value to the percentage of eyes that actually obtained a postoperative cylinder lower than the preoperative value.

Derivation of a Mathematic Condition to Determine a Visual Benefit (Derivación de una condición matemática para determinar un beneficio visual)

To distinguish between optical benefit (merely reducing the aberration magnitude) and visual performance (visual benefit), a model based on the findings of Artal et al.73 was adopted.
In that study, equivalent human optical systems that differed only in the orientation of the aberration patterns (produced by adaptive optics) achieved different visual performances, mainly due to neural compensation for the unique aberration pattern of each individual. For that reason, a matching factor (MF) behaviour based on single aberrations was modelled. The MF is maximum (equal to 1) for aberrations of the same orientation and minimum for aberrations of the opposite orientation in the Zernike space. The magnitude of the aberration distribution was considered a decreasing exponential with the Zernike order, as described by Thibos et al.149 Visual benefit was defined as a condition in which the postoperative aberration was smaller than its preoperative magnitude times the MF for that relative orientation:

\[ 2\sin\!\left(\frac{m\theta}{2}\right)<\mathrm{MF}+\left(1-\mathrm{MF}\right)\cos\!\left(m\,\Delta\alpha\right) \quad\Longleftrightarrow\quad \left|\theta\right|<\frac{2}{m}\arcsin\!\left(\frac{\mathrm{MF}}{\mathrm{MF}+1}\right) \]

The arbitrary value of 0.625 was chosen as the MF generator; this value produces a maximum equal to 1 and a minimum equal to 0.25 (Figure 53):

\[ \mathrm{MF}=0.625 \quad\Longrightarrow\quad \left|\theta\right|<\frac{2}{m}\arcsin\!\left(\frac{5}{13}\right) \]

Figure 53: Matching factor (relative residual aberration coefficient C(n,m)) vs. relative orientation of the residual aberration, for m = 1 to 5 and the average matching factor (Artal et al.).

Using the previous example, 7° of cyclotorsion would produce a visual benefit up to the hexafoil angular frequencies. With the cyclotorsional error and the preoperative astigmatism, and assuming that cyclotorsional errors around the ablation centre were the only failure, the number of eyes was calculated that would have obtained a visual benefit for cylinder if the correction were correct but the axis was incorrect.
We compared this value to the percentage of eyes that maintained or improved postoperative UCVA compared to the preoperative BSCVA and to the percentage of eyes with actually maintained or improved BSCVA.

Derivation of a Mathematic Condition to Determine an Absolute Benefit (Derivación de una condición matemática para determinar un beneficio absoluto)

The major ocular aberrations are defocus and primary astigmatism, the latter being the major aberration affected by a rotational error. The amount of tolerable residual astigmatism postoperatively cannot be defined as a percentage of the preoperative astigmatism, because the tolerance limit is set by the image-forming characteristics of the eye and so takes an absolute value. With a simple spherical error, degradation of resolution begins for most people with errors of 0.25 D. A similar measure can be placed on the error due to a cylinder axis error.219 The absolute benefit considers as positive any result for which the postoperative aberration pattern was smaller than an absolute limit of 0.50 DEQ for the magnitude of each Zernike mode (i.e. a ±0.25 DEQ maximum deviation in one or several meridians). The absolute benefit is ruled by the condition:

\[ \mathrm{DEQ}^{m}_{n}\cdot 2\sin\!\left(\frac{m\theta}{2}\right)<0.50 \quad\Longleftrightarrow\quad \mathrm{DEQ}^{m}_{n}<\frac{1}{4\sin\!\left(\frac{m\theta}{2}\right)} \]

Using the previous example of 7° of cyclotorsion, the Zernike modes should not exceed 4.10 DEQ for coma, 2.05 DEQ for astigmatism, and 1.37 DEQ for trefoil for theoretically successful results. With the torsional error and the preoperative astigmatism, and assuming that cyclotorsion around the ablation centre was the only failure, the number of eyes was calculated that would have obtained an absolute benefit for cylinder (postoperative magnitude ≤0.50 D) if the cylindrical correction were correct but the axis was wrong. We compared this value to the eyes in which the postoperative astigmatism was less than 0.50 D.
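The three benefit conditions can be collected into a short numerical sketch. Only the relations themselves are taken from the derivations (residual fraction 2·sin(mθ/2), optical limit 60°/m, visual limit (2/m)·arcsin(MF/(MF+1)) with MF = 0.625, and the 0.50 DEQ budget); the function names and the worked error of 7° are illustrative.

```python
import math

MF = 0.625  # matching-factor generator (maximum 1, minimum 0.25)

def residual_fraction(m, theta_deg):
    """Residual aberration as a fraction of the original: |C''|/|C| = 2 sin(m*theta/2)."""
    return 2.0 * math.sin(math.radians(m * theta_deg) / 2.0)

def max_m_optical(theta_deg):
    """Highest angular frequency with residual < original (optical benefit)."""
    m = 1
    while residual_fraction(m + 1, theta_deg) < 1.0:
        m += 1
    return m

def max_m_visual(theta_deg):
    """Highest angular frequency with residual < MF x original (visual benefit)."""
    limit = math.degrees(2.0 * math.asin(MF / (MF + 1.0)))  # ~45.2 deg for m = 1
    return int(limit // theta_deg)

def max_deq(m, theta_deg):
    """Largest treatable magnitude (DEQ) keeping residual blur < 0.50 DEQ."""
    return 1.0 / (4.0 * math.sin(math.radians(m * theta_deg) / 2.0))

# Worked example from the text: 7 degrees of cyclotorsion.
print(max_m_optical(7.0))          # 8 -> optical benefit up to octafoil
print(max_m_visual(7.0))           # 6 -> visual benefit up to hexafoil
print(round(max_deq(1, 7.0), 2))   # 4.1  DEQ limit for coma
print(round(max_deq(2, 7.0), 2))   # 2.05 DEQ limit for astigmatism
print(round(max_deq(3, 7.0), 2))   # 1.37 DEQ limit for trefoil
```

The same functions reproduce the other quoted limits, e.g. a visual benefit up to 11-fold frequencies at ±4° and up to 30-fold frequencies at ±1.5°.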
Section E.4 RESULTS

Static Cyclotorsion during Laser Refractive Surgery (Ciclotorsión estática durante cirugía refractiva laser)

Preoperative and postoperative topographies were compared 3 months after treatment in 76 consecutive right eyes treated without adverse events at Augenzentrum Recklinghausen. The preoperative spherical equivalent (SE) was -3.56 D with a standard deviation (SD) of 1.51 D (range, -7.00 to -1.25 D) and the cylinder 0.82±0.66 D (0.25 to 3.00 D). Thirty percent of the treatments (n=23) had corrections of 0.25 D of astigmatism, 20% (n=15) corrections of 0.50 D of astigmatism, 39% (n=30) corrections between 0.75 and 1.50 D of astigmatism, and 11% (n=8) corrections between 1.50 and 3.00 D of astigmatism (Figure 54).

Figure 54: Distribution of the magnitudes of the attempted astigmatic correction (number of eyes vs. attempted astigmatic correction in dioptres).

At the 3-month follow-up, the mean SE was -0.14±0.30 D (-1.00 to +0.25 D) and the cylinder 0.17±0.26 D (0.00 to 1.25 D). Eighty-seven percent of the eyes (n=66) were within ±0.50 D of the attempted correction, and 100% (n=76) were within ±1.00 D. The direct average of the cyclotorsional errors was 2.42°, whereas the absolute values averaged 4.39°. Seventy-one percent of the eyes (n=54) had less than 2.5° of cyclotorsion, 78% (n=59) less than 5.0°, and 87% (n=66) less than 10.0° (Figure 55).
Figure 55: Distribution of the retrospectively calculated cyclotorsional errors

Theoretical Ranges to Obtain Optical, Visual, and Absolute Benefits (Rangos teóricos para la obtención de beneficios ópticos, visuales o absolutos)

The maximum angular frequency and Zernike mode magnitudes that fulfil these conditions were calculated for specific cyclotorsional errors (Figure 56, Table 6, and Table 7), but for the description of the magnitudes we focused on astigmatism, coma, and trefoil because these are the major Zernike modes with a vector nature.

Figure 56: The maximum allowable cyclotorsional errors vs. angular frequency for different criteria: optical benefit (residual < original) and visual benefit (residual < matching factor × original)

Table 6: Maximum Treatable Magnitude for Different Aberration Components and Different Cyclotorsional Errors (torsion tracker, 1.5°; average torsion, 4.0°; maximum torsion, 14.0°) for the <0.50 DEQ Criterion

Table 7: Maximum Allowable Cyclotorsional Errors for Different Aberration Components and Different Criteria (optical benefit, residual < original; visual benefit, residual < matching factor)

For cyclotorsional errors up to ±14°, it is possible to obtain a visual benefit for comatic, astigmatic, and trefoil angular frequencies and an optical benefit for tetrafoil angular frequencies, as well. It also is possible to control the creation of blur under an absolute limit whenever coma magnitudes do not exceed 2.05 DEQ, astigmatism 1.03 D, and trefoil 0.70 DEQ. For maximum cyclotorsional errors up to ±4°, the theoretical limit for visual benefit extends up to endecafoil (11-fold) angular frequencies and for optical benefit up to 15-fold (pentadecafoil) angular frequencies.
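The optical-benefit boundary has a simple closed form: the residual of a mode with angular frequency m is 2·sin(m·α/2) times the original magnitude, so the residual stays below the original while α < (2/m)·arcsin(1/2) = 60°/m. A short check (a sketch in Python; the function name is mine, not from the study):

```python
import math

def max_torsion_optical_benefit(m):
    """Largest cyclotorsional error (degrees) for which the residual
    2*sin(m*alpha/2) remains smaller than the original magnitude."""
    return math.degrees(2.0 * math.asin(0.5)) / m  # = 60/m degrees

# Tetrafoil (m=4) tolerates up to 15 deg, consistent with an optical
# benefit at +/-14 deg; pentadecafoil (m=15) tolerates up to 4 deg;
# octafoil (m=8) only up to 7.5 deg.
for m in (4, 8, 15):
    print(m, round(max_torsion_optical_benefit(m), 2))
```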
The magnitudes for the major Zernike modes should not exceed 7.16 DEQ for coma, 3.58 DEQ for astigmatism, and 2.39 DEQ for trefoil. For cyclotorsional errors up to ±1.5°, visual benefit extends up to triacontafoil (30-fold) angular frequencies and optical benefit even beyond these frequencies. Moreover, coma magnitudes below 19.10 DEQ, astigmatism up to 9.55 D, and trefoil up to 6.37 DEQ produce a postoperative blur under 0.50 DEQ. For example, 1.00 µm of trefoil at 30° with a 5° clockwise torsional error will result in 0.26 µm of trefoil at 58° as a postoperative residual error, or 3.00 DEQ astigmatism at 75° with a 10° counterclockwise torsional error will result in 1.04 DEQ astigmatism at 35° as the postoperative residual error.

Cyclotorsional error
Torsion tracker 1.5     3% @ 271°   5% @ 136°   8% @ 91°   10% @ 68°   13% @ 55°   16% @ 46°   24% @ 35°
                        4% @ 271°   9% @ 136°  13% @ 91°   17% @ 69°   22% @ 55°   26% @ 46°   39% @ 35°
Average torsion 4.0     7% @ 272°  14% @ 137°  21% @ 92°   28% @ 70°   35% @ 56°   42% @ 47°   62% @ 36°
                        9% @ 273°  17% @ 138°  26% @ 93°   35% @ 70°   43% @ 57°   52% @ 48°   77% @ 36°
                       13% @ 274°  26% @ 139°  39% @ 94°   52% @ 71°   64% @ 58°   77% @ 49°  111% @ 38°
                       17% @ 275°  35% @ 140°  52% @ 95°   68% @ 73°   85% @ 59°  100% @ 50°  141% @ 39°
                       22% @ 276°  43% @ 141°  64% @ 96°   85% @ 74°  104% @ 60°  122% @ 51°  166% @ 40°
Maximum torsion 14.0   24% @ 277°  48% @ 142°  72% @ 97°   94% @ 75°  115% @ 61°  134% @ 52°  178% @ 41°
                       26% @ 278°  52% @ 143°  77% @ 98°  100% @ 75°  122% @ 62°  141% @ 53°  185% @ 41°

Table 8: Residual Aberration Ratios and Relative Orientations for Different Cyclotorsional Errors. The percentage is the amount of postoperative residual in magnitude, whereas the angle is the relative orientation of the postoperative residual.

Clinical Optical Benefit (Beneficio óptico clínico)

Considering the cyclotorsional error and the preoperative astigmatism, we calculated the number of treatments that would theoretically achieve an optical benefit for the astigmatism component (m=2).
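The worked examples above can be reproduced by treating each Zernike mode as a vector with angle m·θ: the residual is the vector difference between the intended correction and the cyclotorted one. A sketch in Python (the function name and the sign convention for torsion, positive meaning counterclockwise, are my own choices, not from the study):

```python
import cmath
import math

def residual_after_torsion(magnitude, axis_deg, m, torsion_deg):
    """Residual of a Zernike mode of angular frequency m after the
    correction is applied rotated by torsion_deg (positive = CCW).
    Returns (residual magnitude, residual axis in degrees)."""
    intended = magnitude * cmath.exp(1j * math.radians(m * axis_deg))
    achieved = magnitude * cmath.exp(1j * math.radians(m * (axis_deg + torsion_deg)))
    residual = intended - achieved  # |residual| = 2*magnitude*sin(m*torsion/2)
    res_axis = math.degrees(cmath.phase(residual)) / m % (360.0 / m)
    return abs(residual), res_axis

# 1.00 um trefoil at 30 deg, 5 deg clockwise -> ~0.26 um at ~58 deg
print(residual_after_torsion(1.00, 30.0, 3, -5.0))
# 3.00 DEQ astigmatism at 75 deg, 10 deg counterclockwise -> ~1.04 DEQ at ~35 deg
print(residual_after_torsion(3.00, 75.0, 2, +10.0))
```

Because 2·sin(m·α/2) reaches 2 when m·α = 180°, the residual can range from 0% to 200% of the original, as noted in the Discussion.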
With these settings, 95% of the eyes (n=72) would have obtained an optical benefit for the cylinder if the cylindrical correction were correct but the axis was wrong and if cyclotorsional errors occurring around the ablation centre were the only failures. This compares to 89% of the eyes (n=68) that actually obtained a postoperative cylinder lower than the preoperative value. The difference between the theoretical and the empirical results was marginally significant (P=0.05).

Clinical Visual Benefit (Beneficio visual clínico)

With the same settings as previously, 95% of the eyes (n=72) would have obtained a visual benefit for the cylinder, compared to 87% of the eyes (n=66) that actually had a stable or improved postoperative UCVA compared to the preoperative BSCVA (P<0.01) and to 91% of the eyes (n=69) with a stable or improved BSCVA (P=0.09).

Clinical Absolute Benefit (Beneficio absoluto clínico)

With the same settings as previously, 93% of the eyes (n=71) would have obtained an absolute benefit for the cylinder, compared to 96% of the eyes (n=73) in which the postoperative astigmatism was smaller than 0.50 D (P=0.21).

Clinical Ranges to Obtain Optical, Visual, and Absolute Benefits (Rangos clínicos para la obtención de beneficios ópticos, visuales o absolutos)

Combining all success criteria, 89% of the eyes (n=68) would have obtained a theoretical global success, vs. 79% of the eyes (n=60) that actually obtained a postoperative cylinder lower than the preoperative value, lower than 0.50 D, and a stable or improved BSCVA (P<0.005). Considering the cyclotorsional error, we calculated a hypothetical case simulating for these patients up to which Zernike mode the treatments could have been planned to achieve optical and visual benefits (Table 9).
m    Angular frequency    Optical benefit (residual < original)    Visual benefit (residual < matching factor)
1    Coma
2    Astigmatism
3    Trefoil
4    Tetrafoil
5    Pentafoil
6    Hexafoil
7    Heptafoil
8    Octafoil
9    Eneafoil
10   Decafoil
12   Dodecafoil
15   Pentadecafoil
20   Icosafoil
30   Triacontafoil
60   Hexacontafoil

Table 9: Percentage of Treatments That Could Have Been Planned to Achieve an Optical and a Visual Benefit as a Function of the Highest Included Zernike Mode

Section E.5 DISCUSSION

The method used in this study to determine the cyclotorsional error incurred during laser refractive surgery is indirect, because it calculates the torsional error retrospectively after the ablation procedures were performed. However, it is easy, straightforward, and does not require additional equipment or complicated algorithms. Its retrospective nature ensures that the calculated error corresponds to the average cyclotorsional error during the entire refractive surgery procedure. This way, the method could be used to validate the cyclotorsional errors obtained with other prospective methods.

This study had a limitation. Because this method considers that the difference between the planned astigmatism axis and the axis of the effectively achieved cylindrical correction is due only to cyclotorsional errors, it may be affected by other sources of unavoidable errors in laser refractive surgery, such as flap cuts, pattern decentration, blending zones, and corneal biomechanics. The results are valid in the absolute single-failure condition of pure cyclotorsional errors. Moreover, we assumed for the study that the torsion always occurred around the intended ablation centre. It usually happens that the pupil size and centre during treatment differ from those during diagnosis.220 Even excluding cyclotorsion, there is then already a lateral displacement that mismatches the ablation profile.
Further, cyclotorsion occurring around any position other than the ablation centre results in additional lateral displacement combined with cyclotorsion.221 Finally, this analysis considers the results in terms of the residual monochromatic wavefront aberration. However, the visual process is more complex than just an image-projection system and involves elements such as neural compensation and chromatic aberration, which were beyond the scope of this study. The cortical aspect of visual processing may affect the subjective symptoms associated with residual wavefront aberration.

With our indirect analysis of the cyclotorsional error, we obtained an average cyclotorsional error of 4.39°, which, despite the above mentioned limitations of the method, agrees with the observations of Ciccio et al.,222 who reported 4°. In our sample, however, 13% of eyes had cyclotorsion exceeding 10°. These patients would be expected to have at least 35% residual cylinder, 52% residual trefoil, and higher residual errors of tetrafoil, pentafoil, and higher angular frequencies. In addition, octafoil would be induced beginning at 7.5 degrees of cyclotorsion.

Due to the cyclic nature, the residual aberration error emanating from cyclotorsional error ranges from 0% to 200% of the original aberration. However, the induced aberrations emanating from lateral displacements always increase with decentration.223 If we also consider that in human eyes with normal aberrations the weight C(n,m) of the Zernike terms Z(n,m) decreases with increasing Zernike order (n),149 then the theoretical impact of cyclotorted ablations is smaller than that of decentred ablations or edge effects224 (coma and spherical aberration225). The results of the work of Guirao et al.221 and Bará et al.144,226 are confirmed by the current study, with special emphasis on the independent nature of the cyclotorsional effect with respect to the radial order.
We adopted three criteria based on the accuracy that can be achieved to overcome cyclotorsion: optical benefit provides the maximum angular frequency that can be included in the correction for which an objective improvement in the optical quality can be expected; visual benefit, the maximum angular frequency for which a subjective improvement in the visual performance can be expected; and absolute benefit, the maximum magnitudes for each Zernike mode for which an effective result can be expected. When all criteria are met without other sources of aberrations, the result is expected to be successful. When only the terms allowed by the visual benefit condition are included, but any of their magnitudes exceed the limits imposed by the <0.50 DEQ condition, the visual performance is expected to improve, but it might not be successful. When terms beyond the limits set by the visual benefit condition are included, the risk that the patient will require time to readapt to the new aberration must be considered. When terms beyond the limits set by the optical benefit condition are included, the risk that the aberrations will worsen must be considered carefully.

Without eye registration technologies,227,228 considering that maximum cyclotorsion measured from the shift from the upright to the supine position does not exceed ±14°,222 it is theoretically possible to obtain a visual benefit up to the trefoil angular frequencies and an optical benefit up to the tetrafoil angular frequencies. This explains why “classical” spherocylindrical corrections in refractive surgery succeed without major cyclotorsional considerations. However, using our limit of absolute residual dioptric error smaller than 0.50 DEQ, only up to 2.05 DEQ coma, 1.03 DEQ astigmatism, and 0.70 DEQ trefoil can be corrected successfully. The limited amount of astigmatism that can be corrected effectively for this cyclotorsional error may explain partly some unsuccessful results reported in refractive surgery.
Considering that the average cyclotorsion resulting from the shift from the upright to the supine position is about ±4°,222 without an aid other than manual orientation, the theoretical limits for achieving a visual benefit extend up to the endecafoil (11-fold) angular frequencies and up to the pentadecafoil (15-fold) angular frequencies for optical benefit. Our limit of absolute residual dioptric error less than 0.50 DEQ increases to 7.16 DEQ for coma, 3.58 DEQ for astigmatism, and 2.39 DEQ for trefoil. The extended limits confirm why spherocylindrical corrections in laser refractive surgery have succeeded.

With currently available eye registration technologies, which provide an accuracy of about ±1.5°, it is theoretically possible to achieve a visual benefit up to the triacontafoil (30-fold) angular frequencies and an optical benefit even beyond these angular frequencies, and using our limit of absolute residual dioptric error less than 0.50 DEQ, up to 19.10 DEQ coma, 9.55 DEQ astigmatism, and 6.37 DEQ trefoil can be corrected successfully. This opens a new era in corneal laser refractive surgery, because patients may be treated for a wider range of refractive problems with enhanced success ratios. However, this also requires a laser platform capable of delivering such corrections.

To the best of our knowledge, currently available laser platforms for customized corneal refractive surgery include not more than the eighth Zernike order, which theoretically corresponds to a visual benefit range for cyclotorsional tolerance of ±5.7° and an optical benefit range for cyclotorsional tolerance of ±7.5°, which covers most cyclotorsion occurring when shifting from the upright to the supine position. Thus, the aberration status and the visual performance of the patients are expected to improve.
Moreover, the same ±7.5° cyclotorsional tolerance means that the magnitudes for the major Zernike modes should not exceed 3.82 DEQ for coma modes, 1.92 DEQ for astigmatic modes, and 1.28 DEQ for trefoil modes for theoretically successful results.

Based on different criteria, Bueeler and co-authors229 also determined conditions and tolerances for cyclotorsional accuracy. Their OT criterion corresponds approximately to our optical benefit condition, and their results for the tolerance limits (29° for 3-mm pupils and 21° for 7-mm pupils) did not differ greatly from the optical benefit result for astigmatism, confirming that astigmatism is the major component to be considered.

In our study, the theoretical percentage of treatments that would achieve an optical benefit was significantly higher than the percentage of treatments that actually obtained a postoperative cylinder lower than preoperatively (95% vs. 89%; P=0.05). The percentage of treatments that theoretically would achieve a visual benefit was significantly higher than the percentage of treatments with a stable or improved postoperative UCVA compared to the preoperative BSCVA (95% vs. 87%; P<0.01). Both indicate that other sources of aberrations have substantial impact on the final results. The percentage of treatments that theoretically would achieve a visual benefit was higher than the percentage of treatments with a stable or improved BSCVA (95% vs. 91%; P=0.09). Because residual cylinder can be corrected with spectacles, this indicates that other factors induce aberrations and affect the final results. In discussing visual benefit, although VA data are helpful, there may be patients with 20/20 vision who are unhappy with their visual outcomes due to poor mesopic and low-contrast VA, which were not addressed in the current study.
Interestingly, the percentage of treatments achieving a theoretical absolute benefit was 93%, whereas the percentage of treatments that actually had postoperative astigmatism reduced to an absolute residual error smaller than 0.50 D was higher (96%; P=0.21). Finally, the percentage of treatments that theoretically would achieve global success (optical, visual, and absolute benefits simultaneously) was significantly higher than the percentage of treatments that actually obtained a postoperative cylinder lower than the preoperative value, a stable or improved BSCVA, and postoperative astigmatism decreased to an absolute residual error less than 0.50 D (89% vs. 79%; P<0.005). This confirms that cyclotorsion is not the only reason for differences between theory and practice; wound healing and surgical variation also are key factors in the outcome.

Section E.6 CONCLUSIONS

In summary, the current study showed that cyclotorsional errors result in residual aberrations and that with increasing cyclotorsional error there is a greater potential for inducing aberrations. Thirteen percent of eyes had over 10° of calculated cyclotorsion, which predicts approximately a 35% residual astigmatic error in these eyes. Because astigmatic error is generally the highest magnitude vectorial aberration, patients with higher levels of astigmatism are at higher risk of problems due to cyclotorsional error.

Section E.7 OUTLOOK

Currently, a prospective method for determining intraoperative cyclotorsion has been implemented at the SCHWIND AMARIS laser system. With this new setting we are evaluating intraoperative static and dynamic cyclotorsions, and postoperative outcomes on astigmatism and high-order aberrations, among astigmatic or aberrated eyes that underwent refractive surgery.
Similarly, a six-dimensional eye-tracking module is being developed for SCHWIND eye-tech-solutions; with this coming technology we will, as well, evaluate intraoperative static and dynamic eye movements in 6D, and postoperative outcomes on refraction and high-order aberrations.

Topic F: The Effective Optical Zone after Refractive Surgery (La zona óptica efectiva tras cirugía refractiva)

Study concept and design (S.A.M.); data collection (M.C.); analysis and interpretation of data (S.A.M.); drafting (S.A.M.); critical revision (M.C.); statistical expertise (S.A.M.).

Section F.1 ABSTRACT

Purpose: To evaluate the Effective Optical Zone (EOZ) (the part of the ablation that receives full correction) among eyes that underwent LASEK/Epi-LASEK treatments for myopic astigmatism. 20 LASEK/Epi-LASEK treatments with mean defocus -5.49±2.35D performed using the SCHWIND AMARIS system were retrospectively evaluated at 6-month follow-up. In all cases, pre-/post-operative Corneal-Wavefront analyses using the Keratron-Scout (OPTIKON2000) were performed. EOZ values were evaluated from the changes of Root-Mean-Square of High-Order Wavefront-Aberration (∆RMSho), Spherical Aberration (∆SphAb), and Root-Mean-Square of the change of High-Order Wavefront-Aberration (RMS(∆HOAb)). Correlations of EOZ with Planned Optical Zone (POZ) and Defocus correction (SEq) were analysed using a bilinear function, as well as calculations of the isometric lines (IOZ) for which EOZ equals POZ and of the nomogrammed OZ (NPOZ) to achieve an intended EOZ (IEOZ). At six months, defocus was -0.05±0.43D; ninety percent of eyes were within ±0.50D from emmetropia. Mean RMSho increased 0.12µm, SphAb 0.09µm, and Coma 0.04µm after treatment (6-mm diameter). Mean POZ was 6.76±0.25mm, whereas mean EOZ∆RMSho was 6.74±0.66mm and EOZRMS(∆HOAb) 6.42±0.58mm (significantly smaller, p<.05; bilinear correlation p<.0005). EOZ positively correlates with POZ and declines steadily with SEq, depending more strongly on POZ than on SEq.
A treatment of -5D in 6.00-mm POZ results in 5.75-mm EOZ (6.25-mm NPOZ); treatments in 6.50-mm POZ result in about 6.25-mm EOZ (6.75-mm NPOZ). At about 6.75-mm POZ, the isometric condition is met. EOZ∆RMSho and EOZ∆SphAb were similar to POZ, whereas EOZRMS(∆HOAb) was significantly smaller. Differences between EOZ and POZ were larger for smaller POZ or larger Defocus corrections. POZ larger than 6.75-mm result in EOZ at least as large as POZ. For OZ smaller than 6.75-mm, a nomogram for OZ could be applied.

Section F.2 INTRODUCTION

The required ablation depth in corneal laser refractive surgery increases with the amount of ametropia to be corrected and the diameter of the optical zone. Therefore, the smallest optical zone diameter compatible with normal physiologic optics of the cornea should be used223. Complaints of ghosting, blur, haloes, glare, decreased contrast sensitivity, and vision disturbance230 have been documented with small optical zones, especially when the scotopic pupil dilates beyond the diameter of the surgical optical zone231, and these symptoms may be a source of less patient satisfaction232. This is supported by clinical findings on night vision with small ablation diameters233,234 as well as large pupil sizes231,234 and high attempted corrections.

Laser refractive surgery generally reduces low order aberrations (defocus and astigmatism), yet high-order aberrations, particularly coma and spherical aberration, may be significantly induced64,224. In recent years, the increase in the size of the planned ablation zone and the use of new techniques to measure aberrations46 opened the possibility to correct, or at least reduce the induction of, some of the high-order aberrations.

Methods for determining functional optical zones (FOZ) have been used previously. Independently developed ray-tracing programs234,236 have been used to determine FOZ after refractive surgery.
A direct approach to measure FOZ after refractive surgery has been proposed by manually determining the transition region between treated and untreated areas from corneal topography maps237. In this study, we retrospectively evaluated three objective methods to determine the effective optical zone (EOZ) in eyes after keratorefractive surgery, to provide assessment techniques to evaluate and compare keratorefractive surgical algorithms.

Section F.3 METHODS

The first 20 consecutive compound myopic astigmatism (MA) treatments (10 patients), treated by MC using the AMARIS Aberration-FreeTM aspheric ablation with LASEK26 or Epi-LASEK29 techniques, which completed 6-month follow-up, were retrospectively analysed. Six-month follow-up was available in 20 of these eyes (100%), and their preoperative data were as follows: mean manifest defocus refraction -5.49±2.35 D (range, -9.75 to -2.25 D); mean manifest astigmatism magnitude 1.21±1.19 D (range, 0.00 to 4.00 D). In all eyes, we measured corneal topography153 and derived corneal wavefront analyses (Keratron-Scout, OPTIKON2000, Rome, Italy), manifest refraction, and uncorrected and best spectacle-corrected Snellen visual acuity204 (UCVA and BSCVA, respectively). Measurements were performed preoperatively and at one, three, and six months after surgery.

Ablation profiles (Perfiles de ablación)

All ablations were non-customised, based on “aberration neutral” profiles92, and calculated using the ORK-CAM software module version 3.1 (SCHWIND eye-tech-solutions, Kleinostheim, Germany). Aspheric aberration neutral (Aberration-FreeTM) profiles are not based on the Munnerlyn proposed profiles15, and go beyond them by adding some aspheric characteristics238 to balance the induction of spherical aberration79,239 (prolateness optimization82,170). The ablations were performed using the AMARIS excimer laser (SCHWIND eye-tech-solutions, Kleinostheim, Germany).
Ablation zones (Zonas de ablación)

Mean programmed optical zone (POZ) was 6.76±0.25 mm (range, 6.25 to 7.00 mm), with a variable transition size (TZ), automatically provided by the laser according to the planned correction, of 1.36±0.47 mm (range, 0.64 to 2.20 mm), leading to a total ablation zone (TAZ) of 8.13±0.31 mm (range, 7.64 to 8.70 mm).

Analysis of the effective optical zone (Análisis de la zona óptica efectiva)

The definition of the optical zone (OZ) reads “the part of the corneal ablation area that receives the full intended refractive correction” (Drum B. The Evolution of the Optical Zone in Corneal Refractive Surgery. 8th International Wavefront Congress, Santa Fe, USA; February 2007). However, the operational definition of the OZ consists of the part of the corneal ablation area that receives the treatment that is designed to produce the full intended refractive correction. The Effective Optical Zone (EOZ) can then be defined as the part of the corneal ablation area that actually conforms to the theoretical definition. Note that the definition implies that the optical zone need not be circular.

Change Of Root-Mean-Square Of Higher Order Wavefront Aberration Method (Método del cambio de la raíz cuadrática media de la aberración de onda de alto orden)

By comparing postoperative and preoperative corneal wavefront aberrations analysed for a common diameter starting from 4-mm, we increased the analysis diameter in 10 µm steps and refit to Zernike polynomials up to the 7th radial order, until the difference of the corneal RMSho was above 0.25 D for the first time (Figure 57). This diameter minus 10 µm determined the EOZ:

∆RMSho ( EOZ ) = 0.25 D ( 88)

Figure 57: Concept of the ∆RMSho method: By comparing postoperative and preoperative corneal wavefront aberrations analysed for a common diameter starting from 4-mm, we increased the analysis diameter in 10 µm steps, until the difference of the corneal RMSho was above 0.25 D for the first time. This diameter minus 10 µm determined the EOZ.
Change In Spherical Aberration Method (Método del cambio de la aberración esférica)

By analysing the differential corneal wavefront aberrations for a diameter starting from 4-mm, we increased the analysis diameter in 10 µm steps and refit to Zernike polynomials up to the 7th radial order, until the differential corneal spherical aberration was above 0.25 D for the first time (Figure 58). This diameter minus 10 µm determined the EOZ:

∆SphAb ( EOZ ) = 0.25 D ( 89)

Figure 58: Concept of the ∆SphAb method: By analysing the differential corneal wavefront aberrations for a diameter starting from 4-mm, we increased the analysis diameter in 10 µm steps, until the differential corneal spherical aberration was above 0.25 D for the first time. This diameter minus 10 µm determined the EOZ.

Root-Mean-Square Of The Change Of Higher Order Wavefront Aberration Method (Método de la raíz cuadrática media del cambio de la aberración de onda de alto orden)

By analysing the differential corneal wavefront aberrations for a diameter starting from 4-mm, we increased the analysis diameter in 10 µm steps and refit to Zernike polynomials up to the 7th radial order, until the root-mean-square of the differential corneal wavefront aberration was above 0.25 D for the first time (Figure 59). This diameter minus 10 µm determined the EOZ:

RMS ∆HOAb ( EOZ ) = 0.25 D ( 90)

Figure 59: Concept of the RMS(∆HOAb) method: By analysing the differential corneal wavefront aberrations for a diameter starting from 4-mm, we increased the analysis diameter in 10 µm steps, until the root-mean-square of the differential corneal wavefront aberration was above 0.25 D for the first time. This diameter minus 10 µm determined the EOZ.
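All three criteria share one search loop: grow the analysis diameter from 4 mm in 10 µm steps, refit the pre- and postoperative corneal wavefronts at each diameter, evaluate the chosen change metric, and stop the first time it exceeds 0.25 D. A schematic Python version (the generic `metric` callable and the toy example are placeholders I introduce for illustration, not the study's implementation):

```python
def effective_optical_zone(metric, start_mm=4.0, step_mm=0.01,
                           limit_d=0.25, max_mm=10.0):
    """Generic EOZ search shared by the dRMSho, dSphAb and RMS(dHOAb) methods.
    metric(diameter_mm) must return the chosen change metric, in dioptres,
    computed from pre/postoperative corneal wavefronts refit to Zernike
    polynomials (up to the 7th radial order) at that analysis diameter."""
    diameter = start_mm
    while diameter <= max_mm:
        if metric(diameter) > limit_d:
            return diameter - step_mm   # last diameter still within 0.25 D
        diameter += step_mm
    return max_mm                       # limit never exceeded in the range

# Toy metric: the change grows linearly with diameter, crossing 0.25 D near 6.5 mm
toy_metric = lambda d: 0.1 * (d - 4.0)
print(effective_optical_zone(toy_metric))
```

Plugging in ∆RMSho, ∆SphAb, or RMS(∆HOAb) as the metric yields the three EOZ estimates compared below.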
Mean value analyses (Análisis de valores promedio)

We analysed the mean values of these metrics and assessed the statistical significance of the EOZ compared to the POZ using paired Student’s T-tests.

Regression analyses (Análisis de regresión)

We analysed the correlations of EOZ for each of the methods with POZ and with defocus correction, using a bilinear function (linear with POZ and with defocus) of the form:

EOZ = a + ( b + c ⋅ POZ ) ⋅ ( d + e ⋅ SEq ) ( 91)

where a is a general bias term, b a bias term for the linearity with POZ, c the partial slope for the linearity with POZ, d a bias term for the linearity with defocus, and e the partial slope for the linearity with defocus. The ideal case (EOZ = POZ, independent of defocus) is represented by the coefficients:

a = 0 ( 92)
b = 0 ( 93)
c = 1 ( 94)
d = 1 ( 95)
e = 0 ( 96)

We assessed the statistical significance of the correlations using Student’s T-tests; the Coefficient of Determination (r2) was used, and the significance of the correlations was evaluated considering a metric distributed approximately as t with N−3 degrees of freedom, where N is the size of the sample.

Calculation of isometric lines (Cálculo de líneas isométricas)

With the obtained parameters (a to e), we calculated for each of the methods the isometric lines for optical zone (IOZ), for which the achieved effective optical zone equals the planned optical zone. The isometric lines fulfil the condition:

EOZ ( POZ , SEq ) = POZ ( 97)

IOZ = [ a + b ⋅ ( d + e ⋅ SEq ) ] / [ 1 − c ⋅ ( d + e ⋅ SEq ) ] ( 98)

Calculation of proposed nomogram for OZ (Cálculo de una propuesta de nomograma para ZO)

With the obtained parameters (a to e), we calculated the nomogram planned OZ (NPOZ) required to achieve an intended EOZ (IEOZ):

NPOZ = [ IEOZ − a − b ⋅ ( d + e ⋅ SEq ) ] / [ c ⋅ ( d + e ⋅ SEq ) ] ( 99)

Section F.4 RESULTS

Adverse events

Neither adverse events nor complications were observed intra- or postoperatively. No single eye needed or demanded a retreatment.
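The closed forms for the isometric line and the nomogrammed optical zone follow by solving the bilinear model for POZ. A quick consistency check with arbitrary illustrative coefficients (deliberately not the fitted a to e values of this study):

```python
def eoz(poz, seq, a, b, c, d, e):
    """Bilinear model: EOZ = a + (b + c*POZ) * (d + e*SEq)."""
    return a + (b + c * poz) * (d + e * seq)

def ioz(seq, a, b, c, d, e):
    """Isometric OZ: the POZ for which EOZ equals POZ."""
    k = d + e * seq
    return (a + b * k) / (1.0 - c * k)

def npoz(ieoz, seq, a, b, c, d, e):
    """Nomogrammed POZ needed to achieve an intended EOZ."""
    k = d + e * seq
    return (ieoz - a - b * k) / (c * k)

# Illustrative coefficients only; the study's fitted values are not reproduced here.
coef = dict(a=0.66, b=0.0, c=0.95, d=1.0, e=0.01)
seq = -5.0
p = ioz(seq, **coef)
print(eoz(p, seq, **coef) - p)                   # ~0: isometric condition holds
print(eoz(npoz(6.5, seq, **coef), seq, **coef))  # ~6.5: intended EOZ recovered
```

The two printed checks verify the algebra: evaluating the model at the isometric line returns the same diameter, and evaluating it at the nomogrammed POZ returns the intended EOZ.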
Refractive Outcomes (Resultados refractivos)

Concerning refractive outcomes, we merely want to outline that both the SEq and the cylinder were significantly reduced to subclinical values at 6 months postoperatively (mean residual defocus refraction was -0.05±0.43 D (range, -1.00 to +0.62 D) (p<.0001), and mean residual astigmatism magnitude 0.21±0.54 D (range, 0.00 to 1.50 D) (p<.001)), and that 90% of eyes (n=18) were within ±0.50 D of the attempted correction (Table 10).

                                           Preoperative     Postoperative    p
                                           (Mean±StdDev)    (Mean±StdDev)
Spherical Equivalent (D)                   -5.49±2.35       -0.05±0.43       <.0001*
Cylinder (D)                               1.21±1.19        0.21±0.54        <.001*
Predictability within ±0.50 D (%)                           90
Predictability within ±1.00 D (%)                           100
Coma Aberration at 6.00 mm (µm)            0.26±0.23        0.30±0.25        <.05*
Spherical Aberration at 6.00 mm (µm)       +0.28±0.15       +0.38±0.24       <.005*
High-Order Aberration at 6.00 mm (µm RMS)  0.45±0.12        0.56±0.28        <.01*

Table 10: Refractive outcomes and induced corneal aberrations after refractive surgery

Changes in corneal wavefront aberration at 6-mm analysis diameter (Cambios en la aberración del frente de onda corneal analizado para 6 mm de diámetro)

Preoperative corneal coma aberration (C[3,±1]) was 0.26±0.23 µm RMS, corneal spherical aberration (C[4,0]) (SphAb) was +0.28±0.15 µm, and corneal RMSho was 0.45±0.12 µm RMS. Postoperatively, corneal coma magnitude changed to 0.30±0.25 µm RMS (p<.05), corneal SphAb to +0.38±0.24 µm (p<.005), and corneal RMSho to 0.56±0.28 µm RMS (p<.01).

Mean value analyses for EOZ (Análisis de valores promedio para ZOE)

We analysed the mean values of EOZ for each of the 3 methods and assessed the statistical significance of the EOZ compared to the POZ using paired Student’s T-tests. EOZ∆RMSho and EOZ∆SphAb were similar to POZ, whereas EOZRMS(∆HOAb) was significantly (p<.05) smaller than POZ and than the EOZ determined by the other two methods (Table 11).

                      Mean   StdDev   Min    Max    p (vs. POZ)
Planned OZ (mm)       6.76   0.25     6.25   7.00
EOZ∆RMSho (mm)        6.74   0.66     5.81   7.81
EOZ∆SphAb (mm)               0.58     5.91   7.53
EOZRMS(∆HOAb) (mm)    6.42   0.58     5.51   7.31   <.05*

Table 11: Effective optical zone after refractive surgery vs.
planned optical zone

Regression analyses for EOZ (Análisis de regresión de la zona óptica efectiva)

We analysed the correlations of EOZ for each of the methods with POZ and with defocus correction (r2=.5, p<.005 for the ∆RMSho method; r2=.7, p<.0001 for the ∆SphAb method; and r2=.7, p<.0005 for the RMS(∆HOAb) method) (Figure 60).

Figure 60: Bilinear regression analyses for the correlations of EOZ with POZ and with defocus correction for each of the methods: ∆RMSho method (top), ∆SphAb method (middle), and RMS(∆HOAb) method (bottom). Example of double-entry graphs: a treatment of 5 D in 6.5 mm POZ results in green when analysed with the ∆RMSho and ∆SphAb methods (~6.5 mm EOZ), but in yellow when analysed with the RMS(∆HOAb) method (~6.0 mm EOZ).

EOZ correlates positively with POZ, and declines steadily with increasing defocus corrections. EOZ depends more strongly on POZ than on SEq. On average, and simplifying the relationship to only EOZ and POZ, we observed that planning an OZ of 5.50-mm in diameter leads to an effective OZ of about 5.25-mm in diameter, planning an OZ of 6.50-mm in diameter leads to an effective OZ of about 6.25-mm in diameter, and planning an OZ of 7.50-mm in diameter leads to an effective OZ of about 7.50-mm in diameter (Table 12).
Table 12: Mean effective optical zone after refractive surgery vs. planned optical zone

Planned OZ (mm)    Achieved EOZ (mm)
5.50               5.25
6.50               6.25
7.50               7.50

Isometric lines for OZ (Líneas isométricas para ZO)

With the obtained parameters (a to e), we have calculated the isometric lines for optical zone (IOZ) for each of the methods (Figure 61), resulting in:

POZ < IOZ ⇔ EOZ < POZ    (100)
POZ = IOZ ⇔ EOZ = POZ    (101)
POZ > IOZ ⇔ EOZ > POZ    (102)

Figure 61: Isometric optical zones: ∆RMSho method (red), ∆SphAb method (blue), and RMS(∆HOAb) method (green). For POZ < IOZ, EOZ < POZ; for POZ = IOZ, EOZ = POZ; and for POZ > IOZ, EOZ > POZ. POZ larger than 6.75 mm result in EOZ at least as large as POZ.

Proposed nomogram for OZ (Nomograma para ZO)

With the obtained parameters (a to e), we have calculated the nomogram planned OZ (NPOZ) required to achieve an intended EOZ (IEOZ) (Figure 62).

Figure 62: Calculated nomogram planned OZ (NPOZ) required to achieve an intended EOZ (IEOZ) for defocus correction for each of the methods: ∆RMSho method (top), ∆SphAb method (middle), and RMS(∆HOAb) method (bottom). Example of double-entry graphs: A treatment of -5 D with intended EOZ of 6.5 mm results in green when planned for the ∆RMSho and ∆SphAb methods (~6.5 mm nomogrammed OZ), but in yellow when planned for the RMS(∆HOAb) method (~7.0 mm nomogrammed OZ).
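The nomogram lookup of Figure 62 can be sketched numerically. This is a hedged illustration only: we assume a simplified bilinear form EOZ = a + b·POZ + c·D (the thesis fits a richer model with parameters a to e, which are not reproduced here), with made-up coefficients. Inverting the model gives the planned OZ needed to achieve an intended EOZ at a given defocus correction.

```python
# Hypothetical coefficients of a simplified bilinear model EOZ = a + b*POZ + c*D.
# Illustrative values only; they are NOT the fitted parameters of the thesis.
a, b, c = 0.60, 0.92, 0.035

def nomogram_poz(intended_eoz_mm: float, defocus_d: float) -> float:
    """Invert EOZ = a + b*POZ + c*D to get the planned OZ (mm) that yields
    the intended effective OZ for a given (negative, myopic) defocus."""
    return (intended_eoz_mm - a - c * defocus_d) / b

# e.g. an intended EOZ of 6.5 mm for a -5 D correction
print(round(nomogram_poz(6.5, -5.0), 2))
```

With these illustrative coefficients, larger myopic corrections (more negative D) require a larger planned OZ for the same intended EOZ, mirroring the behaviour of the nomogram in Figure 62.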
On average, and simplifying the relationship to only IEOZ and NPOZ, we observed that achieving an EOZ of 5.50 mm in diameter requires a POZ of about 5.75 mm in diameter; achieving an EOZ of 6.50 mm in diameter requires a POZ of about 6.75 mm in diameter; and achieving an EOZ of 7.50 mm in diameter requires a POZ of about 7.50 mm in diameter (Table 13).

Table 13: Mean nomogrammed optical zone vs. intended effective optical zone

Intended EOZ (mm)    Nomogrammed POZ (mm)
5.50                 5.75
6.50                 6.75
7.50                 7.50

Section F.5 DISCUSSION

To analyse EOZ in our treatments in a systematic way consistent with formal definitions, we decided to base our analysis upon previous knowledge. Since wavefront aberration properly describes optical quality, it seems adequate to use wavefront aberration for determining EOZ. Since we were applying the analysis to corneal laser refractive surgery, it seems adequate to use corneal wavefront aberration for determining EOZ. Since corneal refractive surgery increases wavefront aberration, it seems adequate to analyse the change of the corneal RMSho for determining EOZ (∆RMSho method). Since the most induced term is spherical aberration, it seems adequate to analyse the change of the corneal spherical aberration for determining EOZ (∆SphAb method). Since AMARIS Aberration-Free profiles aim to be neutral for HOAb, it seems adequate to analyse the root-mean-square of the change of the corneal wavefront aberration for determining EOZ (RMS(∆HOAb) method).

The measurement technique used in this study actually imposes restrictions on optical zone size that may underestimate it for decentrations. On the other hand, part of the data is not fit by the Zernike polynomials up to the 7th radial order (36 Zernike coefficients). It is known that the residual irregularity of the cornea not fit by the Zernike polynomials may have a significant impact on visual quality. Ignoring this effect might bias the determined effective optical zone size, leading to an overestimate that can be significant.
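The three differential metrics can be written down directly from pre- and postoperative corneal Zernike coefficients. A minimal sketch with made-up coefficients (the indices and values are hypothetical): note how a term that is reduced by surgery (here C[3,1]) lowers ∆RMSho but still contributes positively to RMS(∆HOAb), which is why the latter is the most sensitive of the three methods.

```python
import math

# Hypothetical pre/post corneal HO Zernike coefficients (µm), keyed by (n, m)
pre  = {(3, -1): 0.10, (3, 1): 0.05, (3, 3): 0.08, (4, 0): 0.28}
post = {(3, -1): 0.14, (3, 1): 0.02, (3, 3): 0.10, (4, 0): 0.38}

rms = lambda coeffs: math.sqrt(sum(v * v for v in coeffs.values()))

delta_rms_ho = rms(post) - rms(pre)                       # ∆RMSho method
delta_sph_ab = post[(4, 0)] - pre[(4, 0)]                 # ∆SphAb method
rms_delta_ho = rms({k: post[k] - pre[k] for k in pre})    # RMS(∆HOAb) method

print(round(delta_rms_ho, 3), round(delta_sph_ab, 3), round(rms_delta_ho, 3))
```

Because RMS(∆HOAb) squares every coefficient change, inductions and reductions both inflate it, so the threshold is reached at a smaller analysis diameter, consistent with the smaller EOZ reported for that method.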
Uozato and Guyton223 were the first to calculate the optical zone area needed to obtain glare-free distance vision in emmetropia. They stated that, "for a patient to have a zone of glare-free vision centred on the point of fixation, the optical zone of the cornea must be larger than the entrance pupil (apparent diameter of the pupil)." Not only must this optical zone be without scarring and irregularity, but it must also be of uniform refractive power.

Biomechanical changes after MA treatments contribute to an oblate contour, increasing spherical aberration and shrinking the effective optical zone. Healing response240, radial ablation efficiency losses117,118 and biomechanical effects241 all reduce the effective ablation in the outer portion of the nominal optical zone. These effects shrink the actual zone of full refractive correction, i.e., the effective optical zone. They also distort attempted cylindrical ablations by flattening the cornea along the astigmatic axis, introducing an unintended spherical correction component and reducing the cylindrical correction118. The shrinking effect is larger for major corrections, i.e. larger optical zones should be used for major corrections, but larger optical zones result in deeper and longer ablations, increasing the potential risks of keratectasia242,243.

Comparing the three methods in our study, we observed that EOZ correlated well with POZ and SEq for the 3 analysed methods. EOZ∆RMSho and EOZ∆SphAb were similar to POZ, whereas EOZRMS(∆HOAb) was significantly smaller than POZ and the results obtained by the other two methods. ∆RMSho analysis accounts for all aberration terms from the perspective of the global optical quality of the cornea. A similar approach was used by Tabernero et al.244, applied in a different way. They analysed the functional optical zone (FOZ) directly on the cornea in patients pre- and postoperatively, instead of applying the analysis to the differential map.
They wanted to determine the FOZ of the cornea, whereas we aimed to determine the EOZ of the treatments. Essentially, the methods are equivalent. ∆SphAb analysis provides the largest EOZ, because only one Zernike term is analysed, i.e. decentration92 or cyclotorsion121 effects are not accounted for. Maloney231 described the consequences of a decentred optical zone and discussed methods to ensure centring. RMS(∆HOAb) analysis provides the smallest EOZ, because it accounts for any deviation from the aimed Aberration-Free concept (i.e. inductions and reductions of the wavefront aberration both contribute positively to increase the RMS value). Caution must be taken with the results obtained by the RMS(∆HOAb) analysis, because it is the most sensitive of the presented analyses and it might have been affected by short-term fluctuations of the wavefront aberration, which could explain, at least in part, why the EOZ obtained by this method were significantly smaller than the EOZ obtained by the other two.

Multivariate correlation analysis of the EOZ versus POZ and attempted defocus correction showed that both contributions were statistically significant (p<.001 for POZ, p<.01 for defocus). Absolute and relative differences between EOZ and POZ were larger for smaller POZ or for larger defocus corrections. EOZ correlates positively with POZ and declines steadily with increasing defocus corrections; and EOZ depends more strongly on POZ than on SEq. On average, and simplifying the relationship to only EOZ and POZ, we observed that POZ larger than 6.75 mm result in EOZ at least as large as POZ. For OZ smaller than 6.75 mm, a nomogram for OZ can be applied.

For our analysis, the threshold value of 0.25 D for determining EOZ was arbitrarily chosen, based upon the fact that with simple spherical error, degradation of resolution begins for most people with errors of 0.25 D.
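The 0.25 D criterion can be made concrete. A minimal sketch, under the standard relation between a pure Zernike defocus coefficient and its dioptric blur, D = 4·sqrt(3)·c / r² (c in µm, r = pupil radius in mm); the measurement function rms_ho_um passed to the search is a hypothetical stand-in for the differential corneal RMSho at each analysis diameter. Note that at a 6 mm diameter this threshold evaluates to about 0.325 µm RMS, the same cut-off value that appears later as an inclusion criterion.

```python
import math

def defocus_equivalent_rms_um(threshold_d: float, diameter_mm: float) -> float:
    """Wavefront RMS (µm) of a Zernike defocus term whose dioptric blur
    equals threshold_d, from D = 4*sqrt(3)*c / r^2 (c in µm, r in mm)."""
    r = diameter_mm / 2.0
    return threshold_d * r * r / (4.0 * math.sqrt(3.0))

def effective_optical_zone(rms_ho_um, threshold_d=0.25, start_mm=4.0,
                           step_mm=0.01, max_mm=9.0):
    """'Increasing diameter' search: smallest analysis diameter at which the
    differential higher-order RMS meets the dioptric-equivalent threshold.
    rms_ho_um(d) is a caller-supplied measurement function (µm at diameter d)."""
    d = start_mm
    while d <= max_mm:
        if rms_ho_um(d) >= defocus_equivalent_rms_um(threshold_d, d):
            return d
        d += step_mm
    return max_mm

# Toy stand-in: differential RMSho is ~0 inside a 6.2 mm zone, grows outside it
toy_rms = lambda d: 0.0 if d <= 6.2 else 0.8 * (d - 6.2) ** 2
print(round(effective_optical_zone(toy_rms), 2))
```

Starting the scan at 4 mm mirrors the choice described next; because the search moves outward in small steps, the first diameter that meets the threshold is by construction the smallest EOZ candidate.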
For all three methods, our search algorithm is an "increasing diameter" analysis; this ensures that the smallest EOZ condition is found. Finally, our search was set to start from 4 mm upwards, i.e. 3.99 mm is the smallest EOZ that could be found. We have done that because for very small analysis diameters the Zernike fit seems to be less robust, mostly due to the decreasing sampling density within the unit circle.

Holladay and Janes (2002) determined the relationship between the spherical refractive change after myopic excimer laser surgery and the effective optical zone and corneal asphericity determined by corneal topography, which changed nonlinearly with the amount of treatment. Mok and Lee245 reported that larger optical zones decrease postoperative high-order aberrations. They found the measured high-order aberrations to be less in eyes with larger optical zones.

Assessing the quality of vision (rather than the quality of the optical zone) after a refractive procedure is a separate issue. The relationship between pupil size and vision after refractive surgery is critically important, and this relationship cannot be evaluated accurately with a measurement of aberrations through a predetermined aperture with an aberrometer. Pupil sizes vary considerably among patients depending on light level and age246. Mok and Lee have shown a strategy for planning optical zone size based on patient pupil size. However, an aberration analysis that takes into account variations in planned optical zone size may provide more insight as to the quality of the outcome obtained.

Partal and Manche247, using direct topographic readings over a large sample of eyes in moderate compound myopic astigmatism, observed a reduction from POZ of 6.50 mm to EOZ of 6.00 mm.
It is noteworthy, and opposed to our findings, that they did not find a greater contraction of EOZ for increasing myopic corrections. Qazi et al.248, using a different approach over a sample of eyes similar to ours, observed a reduction from POZ of 6.50 mm to EOZ of 5.61 mm.

It is possible that the EOZ could be larger than the POZ if it encompasses some portions of the TZ, or even larger than the TAZ. Although POZ, TZ, and TAZ are parameters defined by the laser treatment algorithms, EOZ must be determined postoperatively (from the differences to the baseline) and may change with time because of healing and biomechanical effects. In the same way, it would be possible that the FOZ were larger postoperatively than it was preoperatively, or that the FOZ could be larger than the POZ or even than the TAZ.

Despite large defocus and astigmatism magnitudes, our study shows high-order aberrations are either minimally increased or unchanged after surgery with the AMARIS system, whereas EOZ is very similar to POZ (Tables 1 and 2). The EOZ obtained in this clinical setting show a trend toward slight undersize for smaller POZ or for larger defocus corrections. On the other hand, the low standard deviation and the tight dispersion of the cluster of data demonstrate the consistency of the achieved results. Given the small deviation of the results, we believe that with some slight adjustment for the POZ, the EOZ results and the aberrometric analyses will improve significantly.

Section F.6 CONCLUSIONS

In conclusion, our results suggest that wavefront aberration can be a useful metric for the analysis of the effective optical zones of refractive treatments, or for the analysis of functional optical zones of the cornea or the entire eye, by setting appropriate limit values. In particular, the method of analysis of the RMS(∆HOAb) seems to be a rigorous analysis accounting for any deviation from the attempted target for the wavefront aberration.
In summary, this study demonstrated that "aberration neutral" profile definitions as implemented in the SCHWIND AMARIS system, which are not standard in refractive surgery, yield very good visual, optical, and refractive results and EOZ for the correction of compound myopic astigmatism. "Aberration neutral" ablation profiles, as demonstrated here, therefore have the potential to replace currently used standard algorithms for the non-customised correction of compound myopic astigmatism.

Section F.7 OUTLOOK

Limitations of our study include that the clinical evaluation was performed over only 20 eyes, reducing the statistical power of the conclusions, and the lack of a control group. The clinical evaluation was limited to MA treatments, thus results cannot be extrapolated to hyperopic treatments without further clinical evaluation. Evaluation was limited to LASEK/Epi-LASEK techniques, thus results cannot be extrapolated to LASIK treatments without further clinical evaluation. Finally, in our sample, POZ significantly correlated with defocus (r2=.9, p<.0001), indicating that the two variables of the bilinear fit were not independent.

To extend our methodology to the analysis of customised corrections can be quite simple if we consider that customised corrections in their intrinsic nature aim to reduce aberrations (either from the cornea only, or from the complete ocular system) to a zero level. In this way, the corresponding formulations would be:

RMSho,CW(EOZ) = 0.25 D    (103)
RMSho,OW(EOZ) = 0.25 D    (104)

for corneal (CW) and ocular wavefront (OW) corrections, respectively. Long-term follow-up on these eyes will help determine whether these accurate results also show improved stability compared to previous experiences.
Topic G: Method for objectively minimising the amount of ablated tissue in a customised ablation based on the Zernike expansion of the wavefront aberration (Método para minimizar objetivamente la cantidad de tejido resecado en una ablación personalizada basada en la expansión de Zernike de la aberración del frente de onda)

Study concept and design (S.A.M.); data collection (D.O., M.R., C.V.); analysis and interpretation of data (S.A.M.); drafting (S.A.M.); critical revision (D.O., J.M., J.L.A., T.H., M.C.A.); statistical expertise (S.A.M.).

Section G.1 ABSTRACT

The purpose of this work is to study the possibility of performing customised refractive surgery minimising the amount of ablated tissue without compromising visual quality, and to evaluate the application of these methods for minimising the ablated tissue upon objective minimisation of depth and time of Zernike-based customised ablations. A new algorithm for the selection of an optimised set of Zernike terms in customised treatments for laser corneal refractive surgery was developed. Its tissue-saving attributes have been simulated on 100 different wave aberrations at 6-mm diameter. Simulation outcomes were evaluated in terms of how much depth and volume was saved for each condition (in micrometers and in percentage), whether the proposed correction consists of either a full wavefront correction or an aberration-free treatment, and whether the proposed depth or volume was less than the one required for the equivalent aberration-free treatment. Clinical outcomes and tissue-saving attributes were evaluated on two groups (minimise depth: MD; and minimise volume: MV; 30 eyes each), plus a control group (corneal wavefront: CW, 30 eyes) with the conventional customised approach. Clinical outcomes were evaluated in terms of predictability, safety, and contrast sensitivity; and tissue-saving attributes in terms of saved depth and time for each condition (in micrometers, seconds and percentage), and whether minimised depth or time were less than required for equivalent non-customised treatments.
Simulation outcomes showed an average saved depth of 5 µm (0-16 µm), and an average saved volume of 95 nl (0-127 nl), or 11% saved tissue (0-66% saved tissue). Proposed corrections were always less deep than full wavefront corrections and in 59% of the cases were less deep than equivalent aberration-free treatments. For the case report, required ablation was reduced by approximately 15% compared to full customised correction. Refraction was corrected to subclinical levels, uncorrected distance visual acuity improved to 20/20, corrected distance visual acuity gained two lines, aberrations were reduced by approximately 40% compared to preoperative baseline levels, and the functional optical zone of the cornea was enlarged by approximately 40% compared to preoperative baseline levels. Trefoil, coma, spherical aberration, and the root-mean-square value of the higher order aberrations were reduced.

In the clinical evaluation, 93% of treatments in the CW group, 93% in the MD group, and 100% in the MV group were within 0.50 D of SEq postoperatively. 40% of treatments in the CW group, 34% in the MD group, and 47% in the MV group gained at least one line of BSCVA postoperatively. Tissue-saving attributes showed an average saved depth of 8 µm (1-20 µm) and a saved time of 6 s (1-15 s) in the MD group, and 6 µm (0-20 µm) and 8 s (2-26 s) in the MV group. Proposed corrections were always less deep and shorter than full wavefront corrections. In 43% of the MD cases they were less deep, and in 40% of the MV cases shorter, than equivalent Aberration-Free treatments.

Even though the Zernike mode decomposition is a mathematical description of the aberration, it is not the aberration itself. Not all Zernike modes affect the optical quality in the same way. The eye does not see through a Zernike decomposition but with its own aberration pattern. However, it seems feasible to efficiently perform laser corneal refractive surgery in a customised form, minimising the amount of ablated tissue without compromising the visual quality.
Eliminating all higher order aberrations may not optimise visual function in highly aberrated eyes. The new algorithm effectively reduced the depth and time needed for ablation (up to a maximum of 50%, and by 15% on average), without negatively affecting clinical outcomes postoperatively, yielding results equivalent to those of the full customisation.

Section G.2 INTRODUCTION

There are different proposed approaches for minimising the tissue consumption in refractive surgery, among others:
- Multizonal treatments249
- Smaller optical zone with bigger transition zones250
- Smaller optical zone for the cylindrical component251
- Boost slider method
- Fewer optimisations in the profile
- Z-Clipping method252
- Z-Shifting method252

Multizonal treatments (Tratamientos multizonales)

Minimisation by multizonal treatments is based on the concept of progressively decreasing corrections in different optical zones (Figure 63).

Figure 63: Minimisation by multizonal treatments

Smaller optical zone treatments with large transition zone (Tratamientos en menor zona óptica con mayor zona de transición)

Minimisation with smaller optical zone treatments with a large transition zone is a variation of the multizone concept (Figure 64).

Figure 64: Minimisation with smaller optical zone treatments with large transition zone

Smaller optical zone for the astigmatic correction (Zonas ópticas menores para la corrección astígmata)

Minimisation with a smaller optical zone for the astigmatic correction is based upon the concept that the maximal depth is determined by the lowest meridional refraction and the selected optical zone, and the effective optical zone of the highest meridional refraction is reduced to match the same maximal depth (Figure 65).

Figure 65: Minimisation with smaller optical zone for the astigmatic correction

Boost slider method (El modulador incremental)

Minimisation by a boost method is a linear modulation of the volume (Figure 66).
Figure 66: Minimisation by a boost slider (down-slided)

Simplified profile method (El perfil simplificado)

Minimisation by a simplified profile consists of compromising the expected quality of the profile by simplifying assumptions.

Z-Clipping method (Método de la poda en Z)

Minimisation by a Z-Clipping method consists of defining a saturation for the ablated volume; all points planned to ablate deeper than the saturation value are ablated only by an amount equal to the saturation value (Figure 67).

Figure 67: Minimisation by a Z-Clipping method

Z-Shifting method (Método del recorte en Z)

Minimisation by a Z-Shifting method consists of defining a threshold value for the ablated volume; all points planned to ablate less than the threshold value are not ablated, and the rest of the points are ablated by an amount equal to the original planned ablation minus the threshold value (Figure 68).

Figure 68: Minimisation by a Z-Shifting method

Section G.3 METHODS

The "Minimise Depth" and "Minimise Depth+" functions (Las funciones „Minimizar profundidad" y „Minimizar profundidad+")

One of the minimisation approaches proposed in this work consists of simplifying the profile by selecting the subset of Zernike terms that minimises the necessary ablation depth while respecting the Zernike terms considered clinically relevant.

The "minimise depth" function analyses the Zernike pyramid described in the previous section and evaluates the resulting ablation depth for all those possible free combinations of Zernike terms that fulfil the following conditions:
- Only 3rd or higher order terms can be disabled
- Only those terms whose optical blur dioptric equivalent is less than 0.25 D (in green) can be disabled
- For each subset of Zernike terms, the low order terms are recalculated using the Automatic Refraction Balance method described above

From this evaluation, the function selects the subset of Zernike terms for which the maximum ablation depth is minimal.
The "minimise depth+" function analyses, as well, the Zernike pyramid and evaluates the ablation depth of all possible free combinations of subsets of Zernike terms fulfilling the conditions:
- Only 3rd or higher order terms can be disabled
- Only those terms whose optical blur dioptric equivalent is less than 0.50 D (in green or yellow) can be disabled
- For each subset of Zernike terms, the low order terms are recalculated using the Automatic Refraction Balance method described above

Again, the function selects the subset of Zernike terms for which the maximum ablation depth is minimal (Figure 72).

The rigorous formulation of these minimised-depth functions is to find a vector of values E[n,m] (1 for enable, 0 for disable) that minimises the maximum ablation depth, conditioned to enabling the terms that have an optical blur dioptric equivalent above 0.25 D or 0.50 D, respectively (in yellow or red).

$$Abl(\rho,\theta) = \frac{\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta) - \min\left[\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta)\right]}{n_{Cornea} - n_{Air}} \quad (105)$$

This is equivalent to minimising the peak-to-valley value of the wavefront.

$$MaxAbl = \frac{\max\left[\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta)\right] - \min\left[\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta)\right]}{n_{Cornea} - n_{Air}} \quad (106)$$

Figure 69: Example of a patient with a normal WFAb and his preoperative visus

Figure 70: Manual analysis of the optical effects (visus) of the different aberration modes for the same WFAb

Figure 71: Diffraction-limited visus (all aberration modes are corrected, ideal case)

Figure 72: Objective analysis (Optimised Aberration modes selection) of the optical and ablative effects of the different aberration modes for the same WFAb. Notice that the aberration modes to be selected are not trivial: not all the modes in green are unselected (not corrected), because some of them may help to save tissue; not all aberration modes in yellow are selected (corrected), because some of them may have low impact on vision.
Notice, as well, that 8 µm of tissue are saved (16% of the ablation), but that the overall shape of the ablation remains unchanged.

Figure 73: Analysis of the optical effects (visus) of the objective analysis (Optimised Aberration modes selection) for the same WFAb

The "Minimise Volume" and "Minimise Volume+" functions (Las funciones „Minimizar volumen" y „Minimizar volumen+")

The other minimisation approach proposed in this work consists of simplifying the profile by selecting the subset of Zernike terms that minimises the necessary ablation volume, while respecting those Zernike terms considered clinically relevant.

The "minimise volume" function analyses the Zernike pyramid described in the previous section and evaluates the required ablation volume for all those possible free combinations of Zernike terms that fulfil the following conditions:
- Only 3rd or higher order terms can be disabled
- Only those terms whose optical blur dioptric equivalent is less than 0.25 D (in green) can be disabled
- For each combination of subset of Zernike terms, the low order terms are recalculated using the Automatic Refraction Balance method described above

From this evaluation, the function selects the subset of Zernike terms for which the required ablated volume is minimal.

The "minimise volume+" function analyses, as well, the Zernike pyramid and evaluates the ablation volume of all possible free combinations of subsets of Zernike terms fulfilling the conditions:
- Only 3rd or higher order terms can be disabled
- Only those terms whose optical blur dioptric equivalent is less than 0.50 D (in green or yellow) can be disabled
- For each combination of subset of Zernike terms, the low order terms are recalculated using the Automatic Refraction Balance method described above

Again, the function selects the subset of Zernike terms for which the required ablated volume is minimal (Figure 74 and Figure 75).
The rigorous formulation of these minimised-volume functions is, again, to find a vector of values E[n,m] (1 for enable, 0 for disable) that minimises the total ablation volume, conditioned to enabling those terms whose optical blur dioptric equivalent is above 0.25 D or 0.50 D, respectively (in yellow or red).

$$AblVol = \int_0^{2\pi}\!\!\int_0^1 Abl(\rho,\theta)\, \rho\, d\rho\, d\theta \quad (107)$$

$$AblVol = \int_0^{2\pi}\!\!\int_0^1 \frac{\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta) - \min\left[\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta)\right]}{n_{Cornea} - n_{Air}}\, \rho\, d\rho\, d\theta \quad (108)$$

Taking into account that:

$$\int_0^{2\pi}\!\!\int_0^1 Z_n^m(\rho,\theta)\, \rho\, d\rho\, d\theta = 0 \quad (109)$$

This leads to:

$$AblVol = \int_0^{2\pi}\!\!\int_0^1 \frac{-\min\left[\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta)\right]}{n_{Cornea} - n_{Air}}\, \rho\, d\rho\, d\theta \quad (110)$$

This is equivalent to maximising the minimum value of the wavefront.

$$AblVol = -\pi\, \frac{\min\left[\sum_{n=0}^{\infty}\sum_{m=-n}^{+n} E_n^m C_n^m Z_n^m(\rho,\theta)\right]}{n_{Cornea} - n_{Air}} \quad (111)$$

Figure 74: Optimised Aberration Modes Selection. Based on the wavefront aberration map, the software is able to recommend the best possible aberration modes selection to minimise tissue and time, without compromising the visual quality. Notice that the wavefront aberration is analysed by the software, showing the original ablation for a full wavefront correction and the suggested set of aberration modes to be corrected. Notice the difference in required tissue, but notice as well that the most representative characteristics of the wavefront map are still present in the minimised tissue selection.

Figure 75: Optimised Aberration Modes Selection. Based on the wavefront aberration map, the software is able to recommend the best possible aberration modes selection to minimise tissue and time, without compromising the visual quality. Notice that the wavefront aberration is analysed by the software, showing the original ablation for a full wavefront correction and the suggested set of aberration modes to be corrected.
Notice the difference in required tissue, but notice as well that the most representative characteristics of the wavefront map are still present in the minimised tissue selection.

Simulation of the tissue-saving capabilities of such methods for minimising the required ablation tissue (Simulación de la capacidad de ahorro de tejido de dichos métodos para minimizar la cantidad de tejido de ablación)

For each wave aberration map, for a 6 mm pupil, it has been simulated how deep and how much volume of tissue it was necessary to ablate for six different scenarios (Table 14):
- correction of the full wavefront
- minimising depth
- minimising depth+
- minimising volume
- minimising volume+
- equivalent Aberration-Free treatment

For each wave aberration, it has been calculated how much depth and volume of tissue was saved for each condition (in micrometers and in percentage, relative to the full wavefront correction), and it has been noted whether the proposed correction consists of either the full wavefront correction (with all Zernike terms included in the ablation) or the aberration-free treatment (without any Zernike term included in the ablation) and, finally, whether or not the proposed depth or volume was less than the one required for the equivalent aberration-free treatment.
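The subset search behind these scenarios can be miniaturised. A hedged sketch with a toy term set and made-up coefficients: enable/disable vectors over the disableable higher-order terms are enumerated, each subset is scored by maximum depth (peak-to-valley, eq. 106) and by volume (eq. 111), and compared against the full-wavefront and aberration-free baselines. The real implementation also re-balances the low-order terms (Automatic Refraction Balance), which this sketch omits.

```python
import itertools
import math
import numpy as np

N_CORNEA, N_AIR = 1.376, 1.0  # refractive indices as used in eqs. 105-111

# OSA-normalised Zernike terms sampled on a polar grid over the unit pupil
rho, th = np.meshgrid(np.linspace(0.0, 1.0, 80), np.linspace(0.0, 2 * np.pi, 160))
Z = {
    (2, 0): math.sqrt(3) * (2 * rho**2 - 1),                     # defocus
    (3, -1): math.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(th), # vertical coma
    (3, 3): math.sqrt(8) * rho**3 * np.cos(3 * th),              # trefoil
    (4, 0): math.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),        # spherical ab.
}
C = {(2, 0): -2.0, (3, -1): 0.12, (3, 3): 0.10, (4, 0): 0.18}    # µm, made up
HO = [(3, -1), (3, 3), (4, 0)]  # disableable terms (assumed under the blur limit)

def wavefront(enabled):
    return sum(C[k] * Z[k] for k in enabled)

def depth_um(W):   # eq. 106: peak-to-valley over the refractive index step
    return (W.max() - W.min()) / (N_CORNEA - N_AIR)

def volume(W):     # eq. 111 (normalised pupil): -pi * min(W) / (n_c - n_air)
    return -math.pi * W.min() / (N_CORNEA - N_AIR)

subsets = [frozenset([(2, 0)]) | frozenset(s)
           for r in range(len(HO) + 1) for s in itertools.combinations(HO, r)]
full, ab_free = frozenset(C), frozenset([(2, 0)])
best_d = min(subsets, key=lambda s: depth_um(wavefront(s)))
best_v = min(subsets, key=lambda s: volume(wavefront(s)))
print("saved depth (µm):", round(depth_um(wavefront(full)) - depth_um(wavefront(best_d)), 2))
print("saved volume:", round(volume(wavefront(full)) - volume(wavefront(best_v)), 3))
```

Because the full-wavefront and aberration-free selections are themselves members of the enumerated set, the minimiser can never do worse than either baseline, which mirrors the "always less deep than full wavefront corrections" result reported above.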
Table 14: Summary properties of the four minimisation approaches
- All four approaches (min Depth, min Depth+, min Vol, min Vol+): only terms of 3rd or higher order (HOAb terms) can be disabled.
- min Depth and min Vol: only terms with optical blur ≤0.25 D (green) can be disabled.
- min Depth+ and min Vol+: only terms with optical blur ≤0.50 D (green or yellow) can be disabled.
- All four approaches: for each subset of Zernike terms, Automatic Refraction Balance is used.
- min Depth and min Depth+: the subset of Zernike terms that needs minimum depth is selected.
- min Vol and min Vol+: the subset of Zernike terms that needs minimum ablation volume is selected.

Once the data about tissue saving was computed for each wave aberration, to calculate the average tissue-saving for the different modalities over the sample of treatments we have used several methods:
a) direct average of the saved depth or volume
b) intercept with the axis in a correlation graph
c) direct average of the percentile saved depth or volume
d) intercept with the axis in a percentile correlation graph

Evaluation of the clinical application of such methods for minimising the required ablation tissue (Evaluación de la aplicación clínica de dichos métodos para minimizar la cantidad de tejido de ablación)

Forty-five patients (90 eyes) seeking laser correction at the Muscat Eye Laser Centre in the Sultanate of Oman were enrolled in this prospective study. Institutional Review Board approval was obtained, written informed consent was obtained from all the patients, and the study conformed to the tenets of the Declaration of Helsinki. The treatment plan was developed using CW customised aspheric profiles based on corneal ray tracing54. Using the Keratron Scout videokeratoscope153 (Optikon 2000 S.p.A, Rome, Italy), we analysed the topographical surface and corneal wavefront (up to the 7th order). The departure of the measured corneal topography from the theoretically optimal corneal surface was calculated for a balanced-eye model (Q-Val –0.25).
Optical errors centred on the line-of-sight were described by the Zernike polynomials36 and the coefficients of the Optical Society of America (OSA) standard60. Corneal Wavefront registers the type and size of each and every optical error generated on the anterior corneal surface, allowing a very selective correction. The defects are corrected at exactly the location where they occur – the anterior corneal surface. In this context, exact localisation of defects is decisive in achieving optimal results in laser surgery. The corneal wavefront allows for a very precise diagnosis, thus providing an individual ablation of the cornea to obtain perfect results. With this treatment strategy, pupil dilation of the patient is not necessary for measurement, thus the pupil does not limit the treatment zone, and accommodation does not influence measuring results.

Notice that in this way the treatments are not forcing a fixed asphericity quotient (Q) on all eyes postoperatively, but rather a postoperative expected asphericity quotient Qexp, computed from the apical radius of curvature R of the preoperative cornea, the spherical equivalent to be corrected at the corneal plane SEqcp, and the refractive index n of the cornea.

Treatment selection criteria (Criterios de selección de tratamiento)

Only patients presenting aberrations >0.325 µm RMS HO at 6 mm analysis diameter measured by the OPTIKON Keratron Scout (both eyes) were enrolled in the study. Exclusion criteria were unstable refraction in the last six months, signs of keratoconus or abnormal corneal topography, collagen vascular, autoimmune or immunodeficiency diseases, severe local infective or allergic conditions, severe dry eye disease, and monocularity or severe amblyopia.

The patients were sequentially assigned to three different groups (A, B, and C).
The rationale of the three groups was: in group A, to directly compare the full customised correction (CW) versus the minimum depth correction (MD); in group B, to directly compare the full customised correction (CW) versus the minimum volume (time) correction (MV); and in group C, to directly compare the minimum depth correction (MD) versus the minimum volume (time) correction (MV); all groups on a lateral/contralateral eye basis, randomly assigned (coin toss) for the direct comparison. This way, we got three patient groups (A, B, C) with 15 patients each, and three treatment groups (CW, MD and MV) with 30 patients each.

Preoperative topography and corneal aberrometry measurements were taken, and visual acuity, contrast sensitivity (CST 1800 digital, Vision Sciences Research Corporation, San Ramon, California, USA) and mesopic pupil size were measured. Each eye was planned according to the manifest refraction using the CAM Wavefront customised treatments and the corresponding minimisation strategy. Immediately before the ablation, the laser was calibrated per the manufacturer's instructions and the calibration settings were recorded.

All surgeries were performed by the same surgeon (M.C.A.). LASIK flaps were created with a superior hinge using a Carriazo-Pendular microkeratome154 (SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany). The ablation was carried out with an ESIRIS excimer laser (SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany). The ESIRIS laser system works at a repetition rate of 200 Hz and produces a spot size of 0.8 mm (Full Width at Half Maximum, FWHM) with a para-Gaussian ablative flying-spot profile164,165. High-speed eye tracking with a 330 Hz acquisition rate is accomplished with a 5-ms latency time. All ablations were planned using the CAM (Customized Ablation Manager) on the ESIRIS, without nomogram.
The CAM aspherical profiles were developed with the aim of compensating for the aberration induction observed with other types of profile definitions158; some of those sources of aberration are related to the loss of efficiency of the laser ablation at non-normal incidence159,160,161. Optimisation is realised by taking into account the loss of ablation efficiency at the periphery of the cornea relative to the centre, as there is a tangential effect of the spot in relation to the curvature of the cornea (K-reading). The software provides K-reading compensation, which considers the change in spot geometry and the reflection losses of ablation efficiency. The real ablative spot shape (volume) is considered through a self-constructing algorithm. In addition, there is a randomised flying-spot ablation pattern, and the local repetition rates are controlled to minimise the thermal load of the treatment163 (smooth ablation, no risk of thermal damage). Pupil sizes averaged 6.00±0.58 mm in diameter (ranging from 5.00 to 7.00 mm). Ablations were planned for an optical zone (OZ) averaging 6.50±0.35 mm in diameter (ranging from 6.00 to 7.00 mm), with a total ablation zone (TAZ) dynamically provided by the software averaging 7.75±0.45 mm in diameter (ranging from 7.00 to 8.75 mm). Outcomes at 3 months included preoperative and postoperative findings, auto-refractor measurements, manifest refraction, best spectacle corrected visual acuity (BSCVA), uncorrected visual acuity (UCVA), topography and corneal aberrometry, as well as complications.
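The loss of ablation efficiency at non-normal incidence described above can be illustrated with a generic geometric model: the spot spreads over a larger area by a factor cos θ, and part of the pulse energy is lost to Fresnel reflection. This sketch is not the proprietary K-reading compensation of the CAM software; the refractive index value and the purely geometric treatment are illustrative assumptions.

```python
import math

def fresnel_reflectance(theta_i, n1=1.0, n2=1.52):
    # Unpolarized Fresnel reflectance at the air/tissue interface.
    # n2 = 1.52 is an illustrative value, not a measured 193-nm index.
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)
    rs = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
         (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    rp = (n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) / \
         (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))
    return 0.5 * (rs ** 2 + rp ** 2)

def relative_efficiency(r_mm, R_mm=7.8):
    # Relative ablation efficiency of a vertically delivered spot on a
    # spherical cornea of apical radius R, at distance r from the apex,
    # normalised to the efficiency at normal incidence (apex).
    theta = math.asin(r_mm / R_mm)
    spread = math.cos(theta)                      # spot-area spreading
    transmitted = 1.0 - fresnel_reflectance(theta)
    norm = 1.0 - fresnel_reflectance(0.0)
    return spread * transmitted / norm
```

The model ignores the nonlinear (fluence-threshold) part of the ablation process, so it understates the peripheral loss; it only shows why a K-reading-dependent compensation is needed at all.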
Evaluation of the tissue-savings of such methods for minimising the required ablation tissue (Evaluación del ahorro de tejido de tales métodos para minimizar la cantidad de tejido de ablación)
For each treatment, it was recorded how deep and how long (how much volume) the ablation was required to be for three different conditions:
- correction of the full wavefront
- the corresponding minimisation method (depth or volume)
- the equivalent Aberration-Free treatment
For each wavefront aberration, it was calculated how much depth and time was saved for each condition (in micrometres, seconds and percentage with respect to the full wavefront correction), and whether the proposed depth or time was less than the one required for the equivalent Aberration-Free treatment. Treatments with savings in depth higher than 9 µm (~0.75 D using the simplified Munnerlyn equation for a 6 mm optical zone) or savings in time higher than 7 s (~0.75 D using the ESIRIS) were accounted.
Direct comparison (Comparación directa)
For each of the three groups, the contralateral-eye direct comparison was evaluated with a subjective questionnaire. The questionnaire assesses, through direct answers reflecting the patient's subjective impression, the differences between both eyes (treated with different strategies) under normal and dim light conditions (rating glare, halos, starbursts, blurry vision, shadows, ghost or double images, subjective visual quality indoors and outdoors, and preferred eye).
Subjective Enquiry
Table 15: Patient information — Patient Id., Patient Name, Pre-Op Status, and Questionnaire follow-up times (1 day, 1 month, 3 months).
Table 16: Treatment information — Pre-Op Manifest Refraction, Type of Surgery (Initial / Enhancement), Current Manifest Refraction, Eye Dominance, Adverse Events.
Subjective Questionnaire
Normal Light Conditions: the patients evaluate the appearance of the listed items under normal light conditions. Questions Nr. 1-5 are rated on a scale from 0 (no complaints) to 4 (highest complaints): Glare, Halos, Starbursts; Blurry Vision; Shadows, Ghost or Double Images; Subjective Visual Quality indoors (choose only 0, 2, or 4); Subjective Visual Quality outdoors (choose only 0, 2, or 4); Preferred Eye (choose only OD or OS) (Table 17: Normal light questionnaire).
Dim Light Conditions: the patients evaluate the appearance of the listed items under dim light conditions. Questions Nr. 8-11 are rated on the same scale from 0 (no complaints) to 4 (highest complaints): Glare, Halos, Starbursts; Blurry Vision; Shadows, Ghost or Double Images; Subjective Visual Quality (choose only 0, 2, or 4); Preferred Eye (choose only OD or OS) (Table 18: Dim light questionnaire).
Clinical Procedure
Table 19: Preoperative diagnosis — patient diagnosis (including pre-op topography and aberrometry, and a contrast sensitivity test), consent form, pupil size and manifest refraction, each measured monocularly (OD / OS).
Input all data in ORK-CAM; save the project file; save the ocm file; load the ocm file into the ESIRIS laser; print the summary page; import the ocm file into the ESIRIS laser; proceed with surgery.
Table 20: Scheduled diagnosis during follow-up — topography, aberrometry, contrast sensitivity test, pupil size, manifest refraction and subjective questionnaire, each monocularly (OD / OS), at 1 week, 1 month, 3 months and 6 months.
Statistical analysis (Análisis estadístico)
Descriptive statistics: determination of minimal, maximal and mean values, and simple standard deviations. Statistics on how much tissue is saved by minimising, and on how often minimising goes below the Aberration-Free profile depth or time. For statistical analysis, paired t-tests were used to compare postoperative vs.
preoperative results within each group, and the differences between the groups were analysed using ANOVA. For correlation tests, the Coefficient of Determination (r2) was used, and the significance of the correlations was assessed approximately as t with N−2 degrees of freedom, where N is the size of the sample. For all tests, p values of less than .05 were considered statistically significant.
Section G.4 RESULTS
Objective determination of the actual clinical relevance of the single terms in a Zernike expansion of the wavefront aberration (Determinación objetiva de la relevancia clínica de términos individuales de la expansión de Zernike de la aberración del frente de onda)
The average root mean square of the high order wavefront aberration (RMSHO) was 0.555±0.143 µm for a 6 mm analysis diameter (from 0.327 µm to 0.891 µm), whereas the average root mean square of the total wavefront aberration (RMS) was 3.955±2.715 µm, also for a 6 mm analysis diameter (from 0.741 µm to 10.920 µm). The distribution of corneal aberration in Zernike terms seems to be normal (Figure 76).
Figure 76: Zernike coefficient distribution for the sample population: natural arithmetic average, average considering bilateral symmetry, average of the weights in absolute value, and root mean square with respect to zero of the weight values of the different Zernike terms (all in µm)
Spherical aberration was +0.107±0.205 µm (from -0.476 µm to +0.514 µm), coma aberration was 0.369±0.316 µm (from 0.030 µm to 1.628 µm), and trefoil aberration was 0.204±0.186 µm (from 0.022 µm to 1.118 µm), all of them referred to a 6 mm analysis diameter. In dioptric equivalents, spherical aberration was +0.184±0.136 DEq (from 0.000 DEq to 0.511 DEq), coma aberration was 0.232±0.199 DEq (from 0.019 DEq to 1.023 DEq), and trefoil aberration was 0.128±0.117 DEq (from 0.014 DEq to 0.703 DEq). Out of all the wave aberration maps under study, 72% showed a spherical aberration below 0.25 DEq, 23% showed a spherical aberration between 0.25 DEq and 0.50 DEq, and only 5% of the maps showed a spherical aberration higher than 0.50 DEq. Regarding coma aberration, for 68% of the wavefront aberration maps it was below 0.25 DEq, for 23% it was between 0.25 DEq and 0.50 DEq, and only for 9% of the maps was it higher than 0.50 DEq. Regarding trefoil aberration, for 87% of the maps it was below 0.25 DEq, for 10% it was between 0.25 DEq and 0.50 DEq, and only for 3% of the maps was it higher than 0.50 DEq.
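The tallies above follow mechanically from the two DEq thresholds (0.25 D and 0.50 D). A minimal sketch of that bookkeeping, assuming the dioptric equivalent of each term has already been computed per map (the handling of values exactly on a threshold is an arbitrary choice here, since the text uses strict inequalities):

```python
def classify_deq(deq):
    # Thresholds from the text: < 0.25 D not clinically relevant,
    # 0.25-0.50 D possibly relevant, > 0.50 D clinically relevant.
    if deq < 0.25:
        return "not relevant"
    if deq <= 0.50:
        return "possibly relevant"
    return "relevant"

def tally(deq_values):
    # Percentage of maps falling in each relevance class.
    counts = {"not relevant": 0, "possibly relevant": 0, "relevant": 0}
    for d in deq_values:
        counts[classify_deq(d)] += 1
    n = len(deq_values)
    return {k: 100.0 * v / n for k, v in counts.items()}
```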
Objective minimisation of the maximum depth or volume of a customised ablation based on the Zernike expansion of the wavefront aberration (Minimización objetiva de la profundidad máxima o el volumen de ablación de tratamientos personalizados basados en la expansión de Zernike de la aberración del frente de onda)
Comparing the ablations planned to correct for the whole wave aberration with the equivalent ablations planned to correct only for the spherocylindrical refraction (Aberration-Free), we observed an average difference in maximum depth of +8 ± 8 µm (range: from -4 to +33 µm), and an average difference in volume of +158 ± 158 nl (range: from -127 nl to +664 nl); that is, +32% (up to +317%), indicating that more tissue needed to be ablated to achieve full customised corrections. In 13% of the cases, the ablations designed to correct for the whole wave aberration needed to ablate less tissue than the equivalent aberration-free ablations.
Comparing the proposed "minimised-depth" ablations with the equivalent ablations designed to correct for the whole wave aberration, we observed an average difference in maximum depth of -4 ± 2 µm (range: from -10 µm to -1 µm), and an average difference in ablated volume of -64 ± 32 nl (range: from -190 nl to 0 nl); that is, -8% (up to -30%), indicating that less tissue needs to be removed for the "minimised-depth" corrections. In 43% of the cases, the proposed "minimised-depth" ablations resulted in less ablated tissue than the equivalent aberration-free ablations.
Comparing the proposed "minimised-volume" ablations with the equivalent ablations devised to correct for the whole wave aberration, we observed an average difference in maximum depth of -4 ± 2 µm (range: from -10 µm to 0 µm), and an average difference in volume of -64 ± 32 nl (range: from -190 nl to 0 nl); that is, -7% (up to -30%), meaning less tissue removal for the "minimised-volume" corrections. In 39% of the cases, the proposed "minimised-volume" ablations required less tissue removal than the equivalent aberration-free ablations (those devised to correct only for the spherocylindrical refraction).
Comparing the proposed "minimised-depth+" ablations with the equivalent ablations designed to correct for the whole wave aberration, we observed an average difference in maximum depth of -6 ± 4 µm (range: from -16 µm to -1 µm), and an average difference in volume of -127 ± 95 nl (range: from -316 nl to 0 nl) or -15% (up to -66%); that is, less tissue removal was required for the "minimised-depth+" corrections. In 80% of the cases, the proposed "minimised-depth+" ablations needed less tissue than the equivalent aberration-free ablations planned to correct only the spherocylindrical refraction.
Comparing the proposed "minimised-volume+" ablations with the equivalent ablations intended to correct for the whole wave aberration, we observed an average difference in maximum depth of -6 ± 4 µm (range: from -15 µm to 0 µm), and an average difference in volume of -127 ± 64 nl (range: from -316 nl to 0 nl) or -14% (up to -63%); that is, less tissue removal was needed for the "minimised-volume+" corrections. In 75% of the cases, the proposed "minimised-volume+" ablations needed to remove less tissue than the equivalent aberration-free ablations.
Figure 77: Ablation depth for OZTS (optimal Zernike terms selection) vs. ablation depth for full-customised correction for: Aberration-Free correction (all HOAb disabled) (in blue), minimise depth (in magenta), minimise volume (in yellow), minimise depth+ (in cyan), and minimise volume+ (in purple)
Figure 78: Ablation time for OZTS vs.
Ablation time for full-customised correction for: Aberration-Free correction (all HOAb disabled) (in blue), minimise depth (in magenta), minimise volume (in yellow), minimise depth+ (in cyan), and minimise volume+ (in purple)
Evaluation of the clinical application of such methods for minimising the required ablation tissue (Evaluación de la aplicación clínica de tales métodos para minimizar la cantidad de tejido de ablación)
Outcomes at 3 months included preoperative and postoperative findings, auto-refractor measurements, manifest refraction, best spectacle corrected visual acuity (BSCVA), uncorrected visual acuity (UCVA), topography and corneal aberrometry, as well as complications.
Case report (Caso de estudio)
In a pilot experience of the clinical application of such methods for minimising the required ablation tissue, the complete records with the clinical data of the very first eye of the first patient treated using this approach at the Augenzentrum Recklinghausen (Germany) were analysed in the form of a case report. The purpose was to evaluate the clinical application of a method for minimising the required ablation tissue based upon objective minimisation of the depth, volume, and time of a customised ablation based on the Zernike expansion of the wavefront aberration. The data correspond to the left eye of a female patient (K.S.) who was 59 years old at the time of retreatment (Table 21, Figure 79 to Figure 81). She had previous LASIK surgery for myopia (ex domo) resulting in undercorrection, induced astigmatism, and limited corneal thickness for retreatment, which may have been the cause of the corneal aberrations found. Analyses were performed for uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refractive correction, corneal wave aberration up to the seventh Zernike order at the 6.5-mm diameter, functional optical zone, and tissue and time savings.
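The functional optical zone used in this analysis is determined on the corneal wavefront map with a dioptric threshold (0.38 D or 0.50 D in the figures that follow). A toy sketch of one such criterion, assuming a radial profile of local refractive error is available; the thesis software may use a different, two-dimensional definition:

```python
def functional_optical_zone(radii_mm, error_d, threshold_d=0.50):
    """Largest centred zone diameter over which the local refractive
    error stays within the threshold (illustrative criterion)."""
    foz_radius = 0.0
    for r, e in sorted(zip(radii_mm, error_d)):
        if abs(e) > threshold_d:
            break          # first radius exceeding the threshold ends the zone
        foz_radius = r
    return 2.0 * foz_radius
```

With the stricter 0.38 D threshold the zone can only shrink, which matches the pattern in the patient data below (smaller FOZ for the tighter threshold).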
Table 21: Preoperative data of the patient K.S. — Manifest Refraction: -1.50 D -2.00 D @ 155°; Corneal Pachymetry: 566 µm; Aberrations at 6.5-mm analysis diameter: 0.33 µm RMS, 0.76 µm RMS, 0.26 µm, 0.90 µm RMS; Functional Optical Zone (FOZ): Ø 4.69 mm (0.38 D threshold) and Ø 5.85 mm (0.50 D threshold).
Figure 79: Preoperative topography and corneal wavefront maps of the patient K.S.
Figure 80: Preoperative corneal wavefront map of the patient K.S. determining a functional optical zone (with threshold of 0.38 D) of 4.69 mm Ø
Figure 81: Preoperative corneal wavefront map of the patient K.S. determining a functional optical zone (with threshold of 0.50 D) of 5.85 mm Ø
The planned retreatment was corneal wavefront guided laser epithelial keratomileusis (LASEK) over a 6.75-mm diameter optical zone. Planned laser settings for refractive correction were -1.00 -2.00 x 155° (Figure 82 and Table 22).
Table 22: Comparative treatment plans and savings of the patient K.S. — ablation depth (µm), ablation volume (nl) and ablation time (s) for the full corneal wavefront (CW) correction and for the Minimise Volume+ plan.
Figure 82: Comparative treatment plans and savings of the patient K.S.
A saving of approximately 15% in the minimised-volume treatment plan is seen when compared to the full wave aberration correction. Lower order aberrations are always enabled (included in the ablation pattern) as they refer to the classical ametropias. Higher order aberration terms are colour-coded: they are enabled (included in the ablation pattern) when shown in colour and disabled (discarded for the ablation pattern) when shown in grey.
After uneventful refractive surgery, the 3-month postoperative data are summarised in Table 23 and Figure 83 to Figure 87.
Table 23: 3-month postoperative data of the patient K.S. — UDVA: 20/20 (+8 lines); CDVA: 20/16 (+2 lines); Manifest Refraction: -0.50 D; Corneal Pachymetry: 515 µm; Aberrations at 6.5-mm analysis diameter: 0.13 µm RMS (-62%), 0.41 µm RMS (-46%), 0.15 µm (-39%), 0.68 µm RMS (-24%); Functional Optical Zone (FOZ): Ø 5.64 mm (+45%) and Ø 6.85 mm (+37%).
Figure 83: 3-month postoperative topography and corneal wavefront maps of the patient K.S.
Figure 84: 3-month postoperative corneal wavefront map of the patient K.S. determining the functional optical zone (with threshold of 0.38 D)
Figure 85: 3-month postoperative corneal wavefront map of the patient K.S. determining the functional optical zone (with threshold of 0.50 D)
Figure 86: Comparative corneal wavefront maps of the patient K.S. simulating UCVA conditions
Figure 87: Comparative corneal wavefront maps of the patient K.S. simulating BSCVA conditions
Comparative corneal wavefront maps at a 6.5-mm diameter, including the estimation of the optical effect of the wave aberration, show clear postoperative reductions in the corneal wave aberration as well as definite improvements in the optical simulation of UDVA and CDVA compared to the preoperative baseline.
Comparative series: Preoperative evaluation (Series comparativas: evaluación preoperatoria)
The CW group consisted of 30 eyes, 19 males and 11 females, 16 eyes OD and 14 eyes OS, with an average age of 24 years (18 to 35 years). Preoperative spherical equivalent (SEq) averaged -2.80±1.25 D (-5.50 to -1.00 D) and astigmatism 0.98±0.84 D (0.00 to 3.75 D). The MD group consisted of 30 eyes, 14 males and 16 females, 17 eyes OD and 13 eyes OS, with an average age of 29 years (20 to 47 years). Preoperative SEq averaged -3.15±1.57 D (-6.38 to -0.75 D) and preoperative astigmatism averaged 0.77±0.54 D (0.00 to 2.50 D). The MV group consisted of 30 eyes, 17 males and 13 females, 12 eyes OD and 18 eyes OS, with an average age of 27 years (18 to 47 years). Preoperative SEq averaged -2.90±1.44 D (-6.88 to -0.75 D) and preoperative astigmatism averaged 1.05±0.86 D (0.25 to 4.25 D).
The differences between groups were not statistically significant (ANOVA test, p=.1 for SEq; ANOVA test, p=.1 for astigmatism).
Table 24: Demographic data, preoperative and postoperative data for the three groups
                              CW group                      MD group                      MV group
Eyes (number)                 30                            30                            30
Age (years; mean, range)      24 (18 to 35)                 29 (20 to 47)                 27 (18 to 47)
Gender (Male / Female)        19 / 11                       14 / 16                       17 / 13
Eye (OD / OS)                 16 / 14                       17 / 13                       12 / 18
Pre-op SEq (D)                -2.80±1.25 (-5.50 to -1.00)   -3.15±1.57 (-6.38 to -0.75)   -2.90±1.44 (-6.88 to -0.75)
Post-op SEq (D)               -0.22±0.23 (-0.75 to +0.38)   -0.22±0.23 (-0.75 to +0.25)   -0.26±0.21 (-0.62 to +0.13)
Pre-op Cyl (D)                0.98±0.84 (0.00 to 3.75)      0.77±0.54 (0.00 to 2.50)      1.05±0.86 (0.25 to 4.25)
Post-op Cyl (D)               0.19±0.19 (0.00 to 0.50)      0.16±0.19 (0.00 to 0.50)      0.24±0.25 (0.00 to 0.75)
(SEq and Cyl given as mean±stddev, range)
Comparative series: Refractive outcomes (Series comparativas: resultados refractivos)
Mean residual SEq at the 3-month follow-up was -0.22±0.23 D (-0.75 to +0.38 D) in the CW group (paired t-test, p<.0001), -0.22±0.23 D (-0.75 to +0.25 D) in the MD group (paired t-test, p<.0001), and -0.26±0.21 D (-0.62 to +0.13 D) in the MV group (paired t-test, p<.0001). Mean residual astigmatism at the 3-month follow-up was 0.19±0.19 D (0.00 to 0.50 D) in the CW group (paired t-test, p<.0001), 0.16±0.19 D (0.00 to 0.50 D) in the MD group (paired t-test, p<.0001), and 0.24±0.25 D (0.00 to 0.75 D) in the MV group (paired t-test, p<.0001). The differences between groups were not statistically significant (ANOVA test, p=.3 for SEq; ANOVA test, p=.1 for astigmatism). In terms of predictability, 93% of the treatments in the CW group, 93% in the MD group and 100% in the MV group were within 0.50 D of SEq postoperatively (Figure 88). The difference favouring the MV group was not statistically significant (ANOVA test, p=.3).
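The predictability analysis that follows rests on two standard computations: an ordinary least-squares line through the attempted-vs-achieved scattergram, and the t statistic with N−2 degrees of freedom used in the statistical methods above to judge the significance of a correlation. A self-contained sketch:

```python
import math

def linear_fit(x, y):
    # Ordinary least-squares fit y = slope*x + intercept, with r^2.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    return slope, intercept, r2

def t_statistic(r2, n):
    # Significance of a correlation: t = r * sqrt((n-2) / (1-r^2)),
    # compared against the t distribution with n-2 degrees of freedom.
    return math.sqrt(r2) * math.sqrt((n - 2) / (1.0 - r2))
```

With the group sizes used here (N = 30, 28 degrees of freedom), even a modest r² clears the p < .05 critical value of roughly t ≈ 2.05, which is why the reported r² values of .9 and above are all highly significant.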
Figure 88: Comparison of the refractive outcome in SEq (percentage of eyes within a given distance of the attempted correction) at 3 months for the CW group (green bars), MD group (blue bars), and MV group (yellow bars); 30 eyes per group
Achieved correction was significantly correlated with intended correction for SEq for all three groups (r2=.97, p<.0001, slope of 1.00 for CW; r2=.98, p<.0001, slope of 0.95 for MD; r2=.98, p<.0001, slope of 0.99 for MV; Figure 89).
Figure 89: Comparison of the predictability scattergram (attempted vs. achieved SEq, in D) for the CW group (green diamonds; y=1.00x-0.22, R2=0.97), MD group (blue triangles; y=0.95x-0.04, R2=0.98), and MV group (yellow squares; y=0.99x-0.21, R2=0.98)
Achieved astigmatic correction was significantly correlated with intended astigmatic correction for all three groups (correlation test, r2=.94, p<.0001, slope of 0.88 for CW; correlation test, r2=.91, p<.0001, slope of 0.94 for MD; correlation test, r2=.92, p<.0001, slope of 0.95 for MV; Figure 90).
Figure 90: Comparison of the predictability for astigmatism (attempted vs. achieved change in cylinder, in D) for the CW group (green diamonds; y=0.88x+0.01, R2=0.94), MD group (blue triangles; y=0.94x-0.01, R2=0.91), and MV group (yellow squares; y=0.95x-0.10, R2=0.92)
In terms of safety, 40% of the treatments in the CW group (paired t-test, p<.05), 34% in the MD group (paired t-test, p<.05) and 47% in the MV group (paired t-test, p<.01) gained at least one line of BSCVA postoperatively, and no single eye lost even one line of BSCVA (Figure 91). The difference favouring the MV group was not statistically significant (ANOVA test, p=.2).
Figure 91: Comparison of the change in BSCVA (safety) for the CW group (green bars), MD group (blue bars), and MV group (yellow bars)
In terms of contrast sensitivity, results were very similar between all three groups and the differences were not statistically significant (paired t-tests, p=.1; ANOVA test, p=.3; Figure 92).
Figure 92: Comparison of the mesopic contrast sensitivity at spatial frequencies of 3, 6, 12 and 18 cycles/degree for the preoperative status (dark blue triangles, 90 eyes), CW group (green squares), MD group (blue circles), and MV group (yellow diamonds)
Comparative series: Evaluation of the tissue-saving capabilities for minimising the required ablation tissue (Series comparativas: evaluación de la capacidad de ahorro de tejido para minimizar la cantidad de tejido de ablación)
Comparing the "minimise depth+" proposed ablations with the equivalent ablations planned to correct the whole wavefront aberration, we observed an average difference in depth of -8±4 µm (from -20 µm to -1 µm) or -14% (up to -40%), and an average difference in time of -6±2 s (from -15 s to -1 s) or -16% (up to -48%) for the "minimise depth+" corrections (Figure 93).
Figure 93: Comparison of the ablation depths (optimal Zernike terms selection vs. full customised correction, in µm) for the CW group (green diamonds; y=1.00x, R=1.00), MD group (blue squares; y=0.94x-4.76, R=0.97), and MV group (yellow triangles; y=0.92x-2.26, R=0.84)
In 50% of the cases (15 treatments), the "minimise depth+" proposed ablations saved more than 9 µm of tissue, whereas in 43% of the cases (13 treatments) they saved more than 7 s of time.
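The 9 µm and 7 s cut-offs used throughout this section come from the ~0.75 D equivalence stated in the methods: the simplified Munnerlyn approximation gives a central depth of roughly D·S²/3 micrometres for a correction of D dioptres over an optical zone of S millimetres. A sketch of that accounting:

```python
def munnerlyn_depth_um(diopters, zone_mm):
    # Simplified Munnerlyn approximation: central ablation depth (um)
    # ~ refractive change (D) * optical zone (mm)^2 / 3.
    return diopters * zone_mm ** 2 / 3.0

def clinically_relevant_saving(depth_saved_um, time_saved_s,
                               depth_cut_um=9.0, time_cut_s=7.0):
    # The study counted savings above ~0.75 D equivalents:
    # 9 um of depth (Munnerlyn, 6 mm zone) or 7 s of ESIRIS time.
    return depth_saved_um > depth_cut_um or time_saved_s > time_cut_s
```

Note that 0.75 D over a 6 mm zone gives exactly 0.75 · 36 / 3 = 9 µm, reproducing the depth threshold; the 7 s time threshold is laser-specific and is taken directly from the text.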
In 43% of the cases (13 treatments), the "minimise depth+" proposed ablations needed less tissue than the equivalent Aberration-Free ablations planned to correct only the spherocylindrical refraction (Table 25).
Table 25: Savings in depth and time of the minimise depth approach
                              Full CW        Minimise depth+   Difference    % Difference
Avg. Ablation Depth (µm)      48 ± 32        40 ± 31           -8 ± 4
Ablation Depth range (µm)     24 – 103       19 – 98           -20 – -1      -40% – -1%
Avg. Ablation Time (s)        39 ± 20        33 ± 20           -6 ± 2
Ablation Time range (s)       17 – 76        15 – 71           -15 – -1      -48% – -4%
min Depth < Ab-Free (%)       43% (13)
Comparing the "minimise volume+" proposed ablations with the equivalent ablations planned to correct the whole wavefront aberration, we observed an average difference in depth of -6±3 µm (from -20 µm to +9 µm) or -10% (up to -45%), and an average difference in time of -8±2 s (from -26 s to -2 s) or -19% (up to -50%) for the "minimise volume+" corrections (Figure 94).
Figure 94: Comparison of the ablation times (optimal Zernike terms selection vs. full customised correction, in s) for the CW group (green diamonds; y=1.00x, R=1.00), MD group (blue squares; y=0.91x-4.39, R=0.94), and MV group (yellow triangles; y=0.80x+1.57, R=0.85)
In 50% of the cases (15 treatments), the "minimise volume+" proposed ablations saved more than 9 µm of tissue, whereas in 57% of the cases (17 treatments) they saved more than 7 s of time. In 40% of the cases (12 treatments), the "minimise volume+" proposed ablations needed a shorter time than the equivalent Aberration-Free ablations planned to correct only the spherocylindrical refraction (Table 26).
Table 26: Savings in depth and time of the minimise volume approach
                              Full CW        Minimise volume+  Difference    % Difference
Avg. Ablation Depth (µm)      48 ± 32        42 ± 31           -6 ± 3
Ablation Depth range (µm)     32 – 88        27 – 83           -20 – +9      -45% – +9%
Avg. Ablation Time (s)        40 ± 20        32 ± 20           -8 ± 2
Ablation Time range (s)       21 – 86        17 – 73           -26 – -1      -50% – -1%
min Vol < Ab-Free (%)         40% (12)
Comparative series: Direct comparison (Series comparativas: comparación directa)
In group A (MD vs. CW), at the 3-month follow-up, 1 patient (7%) reported moderate levels of glare in normal light conditions in the MD treated eye, 2 patients (13%) reported moderate levels of glare in dim light in the MD treated eye, and 1 patient (7%) reported moderate levels of glare in dim light in both eyes. Three patients (20%) reported the MD treated eye as their preferred eye. In total, three patients (20%) reported some kind of disturbance.
In group B (MV vs. CW), at the 3-month follow-up, 1 patient (7%) reported minor levels of blur in normal light conditions in both eyes, and 1 patient (7%) reported moderate levels of blur in normal light conditions in the CW treated eye. One patient (7%) reported minor levels of glare in dim light conditions in both eyes, and three patients (20%) reported moderate levels of glare in dim light conditions in both eyes. Two patients (13%) reported minor levels of blur in dim light conditions in the CW treated eye, and one patient (7%) reported moderate levels of blur in dim light conditions in both eyes. One patient (7%) reported minor levels of shadows in dim light conditions in both eyes. Four patients (27%) reported the MV treated eye as their preferred eye, whereas three patients (20%) reported the CW treated eye as their preferred eye. In total, seven patients (47%) reported some kind of disturbance.
In group C (MV vs.
MD), at the 3-month follow-up, 2 patients (13%) reported minor levels of glare in normal light conditions in both eyes, 1 patient (7%) reported minor levels of blur in normal light in the MV treated eye and moderate levels of blur in normal light in the MD treated eye, and 1 patient (7%) reported moderate levels of shadows in normal light in both eyes. Three patients (20%) reported minor levels of glare in dim light conditions in both eyes, 1 patient (7%) reported minor levels of blur in dim light conditions in the MV treated eye and moderate levels of blur in dim light conditions in the MD treated eye, 1 patient (7%) reported moderate levels of blur in dim light conditions in both eyes, and 1 patient (7%) reported moderate levels of shadows in dim light conditions in both eyes. Two patients (13%) reported the MV treated eye as their preferred eye. In total, four patients (27%) reported some kind of disturbance.
Overall, across the three groups, 14 patients (31%) reported some kind of disturbance. No patient in any group gave, for any of the subjective items, a negative rating of severe or extremely severe level.
Section G.5 DISCUSSION
Clinical relevance of wave aberrations (Relevancia clínica de las aberraciones de onda)
We have used the proposed dioptric equivalent applied to each individual Zernike mode to compute its clinical relevance. It is important to bear in mind that the orientation of the vector-like modes is not taken into account in our proposal, and 1 dioptre of cardinal astigmatism (at 0°, for example) does not necessarily have the same effect as 1 dioptre of oblique astigmatism (at 45°, for example). Despite this, other studies have proved this assumption reasonable.253 Using common clinician limits, a classification was provided, which represents the proposed objective determination of the actual clinical relevance of the single terms in a Zernike expansion of the wavefront aberration.
According to this classification, Zernike terms considered not clinically relevant (DEq < 0.25 D) will be marked in green, Zernike terms that might be considered clinically relevant (0.25 D < DEq < 0.50 D) will be marked in yellow, and Zernike terms considered clinically relevant (DEq > 0.50 D) will be marked in red. One could use more sophisticated equations to model the equivalences between the optical blur produced by the different Zernike terms, but we have used a relatively simple approach driven primarily by the radial order.
Minimisation of the ablated tissue (Minimización del tejido de ablación)
Different approaches have been proposed for minimising tissue ablation in refractive surgery:
In multizonal treatments, the minimisation is based on the concept of progressively decreasing corrections in different optical zones. The problem comes from the aberrations that are induced (especially spherical aberration).
In the treatments designed with a smaller optical zone combined with bigger transition zones, the minimisation is a variation of the multizone concept. The problem comes, as well, from the aberrations that are induced (especially spherical aberration).
In the treatments designed with a smaller optical zone for the cylindrical component (or, in general, for the most powerful correction axis), the minimisation is based upon the concept that the maximal depth is set by the lowest meridional refraction and the selected optical zone, while the effective optical zone of the highest meridional refraction is reduced to match the same maximal depth. The problem comes again from the aberrations that are induced (especially high order aberrations).
In the boost-slider method, minimisation is achieved by a linear modulation of the ablated volume. The problem comes from the changes in refraction that are induced by the modulation.
In the Z-clip method, minimisation consists of defining a "saturation depth" for the ablated volume: in all those points where the ablation is designed to go deeper than the saturation value, the actual ablation depth is limited, being set to precisely that saturation value. The problem comes from the fact that this "saturation limit" may occur anywhere in the ablation area, compromising the refraction when those points are close to the ablation centre, and affecting the induction of aberrations in a complicated way.
In the Z-shift method, minimisation consists of defining a "threshold value" for the ablated volume, so that in those points where the ablated depth was designed to be less than that threshold value, no ablation is performed at all, and the rest of the points are ablated by an amount equal to the originally planned ablation minus the threshold value. The problem comes from the fact that this "threshold value" may be reached anywhere in the ablation area, compromising the refraction when the below-threshold points are found close to the ablation centre, and the functional optical zone when they are found at the periphery.
The four minimisation approaches proposed in this work consist of simplifying the profile by selecting a subset of Zernike terms that minimises the necessary ablation depth or ablation volume, while respecting the Zernike terms considered to be clinically relevant. For each combination of Zernike terms, the low-order terms are recalculated using the Automatic Refraction Balance method described above, in such a way that the refractive correction is not compromised. Since each Zernike term is planned either to be corrected or excluded, visual performance is not compromised, because all those terms that are excluded (not planned to be corrected) are below clinical-relevance levels. The proposed approaches are safe, reliable and reproducible due to the objective foundation upon which they are based.
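The subset selection described above can be sketched as a brute-force search over the ON/OFF states of the non-relevant terms. This is a toy version only: it assumes each term's contribution is available as a sampled height map and uses peak-to-valley as the depth criterion, whereas the real software also re-runs the Automatic Refraction Balance for every combination.

```python
from itertools import product

def ablation_depth(term_maps, mask):
    # Ablation map = sum of the enabled terms' height maps (sampled on
    # a common grid); depth = peak-to-valley of the summed map.
    grid = [sum(m[i] for m, on in zip(term_maps, mask) if on)
            for i in range(len(term_maps[0]))]
    return max(grid) - min(grid)

def minimise_depth(term_maps, relevant):
    """Try every ON/OFF combination of the non-relevant terms
    (clinically relevant terms stay always ON) and keep the
    shallowest resulting ablation."""
    free = [i for i, rel in enumerate(relevant) if not rel]
    best_mask, best_depth = None, float("inf")
    for bits in product([True, False], repeat=len(free)):
        mask = list(relevant)
        for i, b in zip(free, bits):
            mask[i] = b
        d = ablation_depth(term_maps, mask)
        if d < best_depth:
            best_mask, best_depth = mask, d
    return best_mask, best_depth
```

Note the key property of the strategy: a non-relevant term is kept when it flattens the summed profile (saving tissue) and dropped when it deepens it, which is exactly the behaviour described for the individual terms in the text.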
In the same way, the selected optical zone will be used for the correction. It is important to remark that the selection of the Zernike terms to be included in the correction is not trivial. Only those Zernike terms considered not clinically relevant or of minor clinical relevance can be excluded from the correction, but they do not necessarily have to be excluded. Actually, individual Zernike terms considered not clinically relevant will only be disabled (excluded) when they entail an extra amount of tissue for the ablation, and they will be enabled (included) when they help to save tissue for the ablation. In this way, particular cases are represented by the full wavefront correction, by disabling all non-clinically relevant terms, or by disabling all high-order terms. The selection process is completely automatic and driven by a computer, ensuring systematic results and a minimisation of the amount of tissue to be ablated. This automation also avoids the foreseeable problems of manually selecting the adequate set of terms.
A criticism of this methodology could be the fact that we are not targeting a diffraction-limited optical system. That means we are reducing the ablated tissue at the cost of accepting a trade-off in the optical quality. However, it is still not known precisely whether an "optically perfect eye" after surgery is better than preserving the aberrations that the eye had before surgery. Although the optical quality of the eye can be described in terms of the aberration of its wavefront, it was observed that those individuals with smaller aberration in their wavefront were not always those getting the best visual-quality scores. It follows that the optical quality of the human eye does not determine its visual quality in a one-to-one way.
The concept of neural compensation indicates that the visual quality we have is somewhat superior to the optical quality that our eye provides, because the visual system seems to be adapted to the eye’s own aberration pattern. The optical quality in an individual can be maximised for a given wavelength by cancelling the aberration of his wavefront and optimising his defocus (for a single distance), but this has direct and dramatically negative implications for the optical quality at all other wavelengths (the more extreme the wavelength, the greater the negative effect).187 Conversely, the optical quality of a person showing a certain degree of wavefront aberration is reduced relative to the maximum obtainable quality in the absence of aberration, but this has direct positive implications for the “stability” of the optical quality over a wide range of wavelengths (covering the spectral sensitivity of the human eye). The implications of this concept are very interesting because, for example, a patient corrected for his wave aberration represents a case in which, despite his (monochromatic) in-focus optical quality having been improved, his (polychromatic) visual quality is reduced. This confirms that it is not always advantageous or advisable to correct all the aberrations of an individual aspiring to obtain a monochromatically diffraction-limited optical system, as the chromatic blur would compromise his visual quality.
Another positive implication that wave aberrations may have on visual function is that, although they produce an overall blur, the wave aberration also brings depth of focus, i.e., some stability in terms of visual quality over a range of distances that can be considered simultaneously “in focus”. Lastly, moderate levels of wave aberration favour the stability of the image quality over wide visual fields.254 This way, there are at least five criteria (native aberrations, neural compensation, chromatic blur, depth of focus, wide-field vision) favouring the option of leaving minor amounts of non-clinically relevant aberrations. Besides, there are no foreseeable risks derived from the proposed minimisation functions, because they propose ablation profiles that are simpler than the full-wavefront corrections. There may be a sort of “edge” problem, related to the fact that a Zernike term with a DEq of 0.49 D may be enabled or disabled, due to its expected minor clinical relevance, whereas a Zernike term with a DEq of 0.51 D needs to be corrected (according to our selection criteria). It is controversial, as well, whether or not one can consider the clinical relevance of every Zernike term independently. The visual effect of an aberration does not depend only on it but also on the other aberrations that are present in the full pattern; for example, a sum of small, and previously considered clinically irrelevant, aberrations could involve a clear loss of overall optical quality. A possible improvement comes from the fact that the current selection strategy consists of a binary “ON/OFF” approach for each Zernike term. However, better corrections and higher amounts of tissue saving might be obtained by using a correcting factor F[n,m] (range 0 to 1) for each Zernike term, correcting a wavefront of the form:

$\mathrm{Abl}(\rho,\theta)=\sum_{n=0}^{N}\sum_{m=-n}^{+n} C_{n}^{m}\, Z_{n}^{m}(\rho,\theta)$    (113)

However, this would come at a much higher computational cost.
Another possible improvement would be to consider possible aberration couplings, at least between Zernike modes of the same angular frequency, as a new evaluation parameter. Based upon a sample population of 100 wavefront maps, the tissue-saving capabilities of this method to minimise the amount of required ablated tissue were simulated. The wavefront maps that were used were derived only from corneal aberrations (from which defocus, for example, cannot be determined). Moreover, correcting corneal aberrations does not imply eliminating the corresponding aberrations of the whole eye, as they depend also on internal aberrations. However, the proposed methods try to minimise the amount of ablated tissue in a Zernike-based customised treatment irrespective of the origin of the wavefront map.

Clinical evaluations

Eliminating all higher-order aberrations may not be the best strategy to optimise visual function. For example, some controlled aberrations can improve depth of focus with minimal degradation of image quality, as shown previously in a study of aspheric versus spherical intraocular lenses255. The brain can adapt to long-term aberration patterns; in some cases, removing aberrations can therefore impair visual function. The importance of ocular aberrations has been addressed previously in terms of the neural adaptation to ocular aberration256 or the presence of high visual quality in patients with a normal amount of aberrations257. The aim of optimisation to improve overall optical quality should not be to obtain perfect corneal optics, but to fit the corneal optics to the internal ones258. An operational definition of the optimal optical zone can specify aberrations to be preserved as well as those to be removed. Strictly speaking, one cannot consider the clinical relevance of every Zernike term independently without demonstrating that this is exactly what occurs.
The visual effect of an aberration does not depend only on it but also on the other aberrations present; for example, a sum of small, and previously considered clinically irrelevant, aberrations could entail a clear loss of overall optical quality. The idea of approximating a distorted wavefront by an equivalent dioptric error is too controversial to be accepted without care. Coupling effects between different high-order aberration terms, and between HOAb and manifest refraction, have been found144,211,212: for example, between defocus and spherical aberration, between third-order aberrations and low-order terms, between spherical aberration and coma, or between secondary and primary astigmatisms. These interactions may provide some relative visual benefits213, but may as well contribute as sources of uncertainty in the conversion of wavefront aberration maps to refractive prescriptions137,139. In terms of predictability, the two minimisation approaches resulted in outcomes at least equivalent to those of the full-customised correction group (control group), with more than 57% of treatments within 0.25 D of SEq postoperatively, and 100% of the treatments within 1.00 D. In terms of safety, the two minimisation approaches resulted in outcomes at least equivalent to those of the full-customised correction group (control group), with more than 34% of treatments gaining at least one line of BSCVA postoperatively, and not a single eye losing even one line of BSCVA. Contrast sensitivity confirmed the high quality of the CW treatments (with or without minimisation of ablation depth or time). The accuracy, predictability, and stability of the refractive power change, together with the minimal external impact of the CAM ablation profiles on the HOAb, led to superior results in terms of visual acuities and improved contrast sensitivity compared with the preoperative status.
This result was significantly better than previous reports, in which the postoperative contrast sensitivity decreased compared with the preoperative status, regardless of the amount of dioptres corrected177. The standard parameters used to assess refractive surgery results (efficacy, predictability, refractive outcome, stability, and safety) were not sufficient to compare the different minimisation approaches (at least for this myopic astigmatism group). Because the ablation procedures were performed in a physical world, they suffered from different types of unavoidable and inherent errors259 that led to aberrations225, including biomechanical reactions due to the flap cut30,260, blending zones, cyclotorsion192,215, centration errors221,223, spot size limitations164,165, active eye-tracking168,261 capabilities, and biomechanical reactions due to the ablation process itself239. Corneal wavefront customised treatments can only be successful if the pre-existing aberrations are greater than the repeatability and the biological noise. Considerations such as treatment duration or tissue removal make it more difficult to establish a universal optimal profile. The “minimise depth” approach saved an average depth of -8±4 µm (from -20 µm to -1 µm) and an average time of -6±2 s (from -15 s to -1 s), i.e. -15% (up to -48%) less tissue removal and treatment duration for “minimise depth” corrections. In 43% of the cases (13 treatments), the “minimise depth” proposed ablations needed less tissue than equivalent Aberration-Free ablations planned to correct only the spherocylindrical refraction. The “minimise vol” approach saved an average depth of -6±3 µm (from -20 µm to +9 µm) and an average time of -8±2 s (from -26 s to -1 s), i.e. -14% (up to -50%) less tissue removal and treatment duration for “minimise vol” corrections.
In 40% of the cases (12 treatments), the “minimise vol” proposed ablations needed a shorter time than equivalent Aberration-Free ablations planned to correct only the spherocylindrical refraction. In this comparison, we have analysed the results split only by the minimisation approach used for the planning (full customised correction, minimise depth, minimise volume). In the direct comparison, we have used the three groups (A, B, C) on a lateral/contralateral-eye basis. This way, the variability of external uncontrollable effects (such as flap cut, corneal response to the ablation, repeatability of the instruments, and cooperation of the patients) is kept to a minimum. In all three direct comparison groups, the subjective questionnaire led to very similar results. In group A (MD vs. CW), 3 patients (20%) reported the MD-treated eye as their preferred eye; in group B (MV vs. CW), 4 patients (27%) reported the MV-treated eye as their preferred eye, whereas 3 patients (20%) reported the CW-treated eye as their preferred eye; and in group C (MV vs. MD), 2 patients (13%) reported the MV-treated eye as their preferred eye. The three groups compared here (CW for full customised correction, MD for minimising ablation depth, and MV for minimising ablation time) are predictable, safe, stable and accurate. The minimisation techniques compared here can be used to reduce the depth and time needed for the ablation, and they effectively reduced ablation depth and time by up to a maximum of 50%, and by 15% on average. As per design, the MD group was actually optimised for minimum ablation depth and showed the largest savings at this aim (-8±4 µm, from -20 µm to -1 µm), whereas the MV group was actually optimised for minimum ablation volume (time) and showed the largest savings at this aim (-8±2 s, from -26 s to -1 s).
In this context, and as a rule of thumb, MD minimisation could be used in customised myopic treatments, where reducing ablation depth is directly related to a decreased risk of keratectasia, whereas MV minimisation could be used in long customised treatments, where reducing ablation time is directly related to a better maintenance of homogeneous corneal conditions. This reduction in ablation depth and time, by correcting only a subset of the measured Zernike terms, did not negatively affect the clinical outcomes postoperatively. The two minimisation techniques compared here yielded results equivalent to those of the full customisation group.

Section G.6 CONCLUSIONS

In this study, a method to objectively determine the actual clinical relevance of the single terms in a Zernike expansion of the wavefront aberration was described. DEq < 0.25 D determines that the considered Zernike term is not expected to be clinically relevant; 0.25 D < DEq < 0.50 D determines that the considered Zernike term might be clinically relevant; DEq > 0.50 D determines that the considered Zernike term is expected to be clinically relevant. A method to objectively minimise the maximum depth or the ablated volume of a customised ablation based on the Zernike expansion of the wavefront aberration was also provided within the frame of this work. Based upon a sample population of 100 wavefront maps, the tissue-saving capabilities of such methods for minimising the required ablation tissue were simulated. Finally, based upon a sample population of 90 treatments, the clinical application of such methods for minimising the required ablation tissue was evaluated. Minimising the amount of ablated tissue in refractive surgery will still yield, for most of the cases, visual, optical, and refractive benefits comparable to the results obtained when compensating for the full wavefront aberration in refractive surgery. However, a marginally improved level of safety can be achieved under certain circumstances.
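The DEq thresholds listed above lend themselves to a simple classifier (an illustrative sketch; the handling of the exact boundary values 0.25 D and 0.50 D is an assumption, since the stated ranges are open):

```python
def deq_relevance(deq):
    """Classify a Zernike term by its dioptric equivalent (DEq, in D),
    following the clinical-relevance thresholds stated in the conclusions.
    Boundary values are assigned to the middle band by assumption."""
    if deq < 0.25:
        return "not clinically relevant"
    elif deq <= 0.50:
        return "possibly clinically relevant"
    return "clinically relevant"
```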
Section G.7 OUTLOOK

In this study, we have used the corneal wave aberration as a basis for the simulations and clinical evaluations. We have learnt that the combination of subclinical Zernike terms determines the capabilities for saving tissue in an effective way. It is known that ocular wave aberrations tend to show lower-weighted coefficients. In further studies, the tissue savings based upon the ocular wave aberration will be explored and compared with corneal wavefront ones. This chapter was limited to one laser system (and ablation algorithm). However, both the laser platforms and the algorithms that they incorporate have evolved over the last years. In further studies, newer state-of-the-art laser systems and algorithms will be evaluated for tissue savings. The clinical evaluations in this chapter were limited to correcting the subjects’ manifest refractions. In further studies, systematic deviations from the measured manifest refractions combined with the tissue-saving algorithms will be evaluated. This thesis addresses physical aspects related to laser refractive surgery applied to the cornea to change its refractive state and, consequently, to change the optical properties of the eye. In particular, this thesis has focused on better understanding corneal laser ablation mechanisms (and the changes induced on the cornea) and on developing potential improvements to better control the ablative process. We have studied the changes induced in the geometry of the corneal surface, and the optical outcomes focused on optical aberrations. Measurements on real patients have allowed us to assess the influence and efficacy of the proposed improvements in a real clinical setting. Laser corneal refractive surgery optimised with the improvements presented here helped to reduce the complications and the occurrence of adverse events during and after refractive surgery, improving the postoperative quality of vision, as well as reducing the rate of retreatments and reoperations.
This dissertation demonstrated an improved application of laser corneal refractive surgical treatments by properly compensating the loss of ablation efficiency for non-normal incidences, enhancing the systems to track eye movements, and optimising ablation profiles for customised refractive surgery. Methods to reduce the ablated tissue thickness (and, to a minor degree, to reduce the intervention time) have been proposed, evaluated and discussed. The results and improvements derived from this work have been implemented in the AMARIS laser system for corneal refractive surgery, as well as in the algorithms and computer programmes which control and monitor the ablation procedures. We have analysed the corneal asphericity using corneal wavefront and topographic meridional fits. This study suggested that the corneal wavefront alone is a useful metric to evaluate the optical quality of an ablation in refractive surgery, as well as a useful metric to evaluate corneal asphericity. The corneal wavefront can be used effectively to analyse laser refractive surgery, avoiding complicated non-linear effects in the analysis. On these grounds, this method has the potential to replace or perhaps supplement currently used methods of asphericity analysis based on simple averaging of asphericity values. Corneal asphericity calculated from corneal wavefront aberrations represents a three-dimensional fit of the corneal surface; asphericity calculated from the main topographic meridians represents a two-dimensional fit of the principal corneal meridians. Postoperative corneal asphericity can be calculated from corneal wavefront aberrations with higher fidelity than from corneal topography of the principal meridians. Hyperopic treatments showed a greater accuracy than myopic treatments. We have provided a model of an aberration-free profile and evaluated the impact of treatments based upon these theoretical profiles on the post-operative cornea.
“Aberration-free” patterns for refractive surgery as defined here, together with consideration of other sources of aberrations such as blending zones, eye-tracking, and corneal biomechanics, yielded results comparable to those of customisation approaches. CV-centred treatments performed better in terms of induced ocular aberrations and asphericity, but both centrations were identical in terms of photopic visual acuity. Aberration-Free treatments with the SCHWIND AMARIS did not induce clinically significant aberrations, and maintained the global OD-vs.-OS bilateral symmetry, as well as the bilateral symmetry between corresponding Zernike terms (which influences binocular summation). Induced corneal aberrations were lower than those reported for the classical profile or in other publications. Having close-to-ideal profiles should improve clinical outcomes, decreasing the need for nomograms and diminishing induced aberrations after surgery. We have assessed a decision-tree analysis system to further optimise refractive surgery outcomes. The desired outcome of non-wavefront-driven refractive surgery is to balance the effects on the wave aberration and to provide normal eyes with perhaps the most natural, unaltered quality of vision. Ocular Wavefront treatments have the advantage of being based on the Objective Refraction of the complete human eye system, whereas Corneal Wavefront treatments have the advantage of being independent of accommodation effects or light/pupil conditions; Aspheric treatments have the advantage of saving tissue and time and, due to their simplicity, offer better predictability. Decision-assistant wizards may further optimise refractive surgical outcomes by providing the most appropriate ablation pattern based on an eye’s anamnesis, diagnosis, and visual demands.
The general principles may be applied to other laser systems; however, the specifics will depend on the manufacturers’ implementations. We have developed a geometrical analysis of the loss of ablation efficiency at non-normal incidence. The loss of efficiency is an effect that should be offset in commercial laser systems using sophisticated algorithms that cover most of the possible variables. In parallel, increasingly capable, reliable, and safer laser systems with better resolution and accuracy are required. The improper use of a model that overestimates or underestimates the loss of efficiency will overestimate or underestimate its compensation and will only mask the induction of aberrations under the appearance of other sources of error. The model introduced in this study eliminates the direct dependence on fluence and replaces it by direct considerations on the nominal spot volume and on the area illuminated by the beam, thus reducing the analysis to the pure geometry of impact and providing results essentially identical to those obtained with the model by Dorronsoro-Cano-Merayo-Marcos. In addition, it takes into account the influence of flying-spot technology, where the spot spacing is small compared to the spot width and multiple overlapping spots contribute to the same target point, as well as the correction to be applied as the corneal curvature changes during treatment, so that the ablation efficiency also varies over the course of the treatment. Our model provides an analytical expression for corrections of laser efficiency losses that is in good agreement with recent experimental studies, both on PMMA and on corneal tissue. The model incorporates several factors that were ignored in previous analytical models and is useful in the prediction of several clinical effects reported by other authors. Furthermore, due to its analytical approach, it is valid for different laser devices used in refractive surgery.
The development of more accurate models to improve emmetropisation and the correction of ocular aberrations is an important issue. We hope that this model will be an interesting and useful contribution to refractive surgery and will take us one step closer to this goal. We have analysed the clinical effects of pure cyclotorsional errors during refractive surgery. We have shown that cyclotorsional errors result in residual aberrations and that, with increasing cyclotorsional error, there is a greater potential for inducing aberrations. Thirteen percent of eyes had over 10 degrees of calculated cyclotorsion, which predicts approximately a 35% residual astigmatic error in these eyes. Because astigmatic error is generally the highest-magnitude vectorial aberration, patients with higher levels of astigmatism are at higher risk of problems due to cyclotorsional error. Residual aberrations resulting from cyclotorsion depend on the aberrations included in the ablation and on the cyclotorsional error. The theoretical impact of cyclotorted ablations on coma and spherical aberration is smaller than that of decentred ablations or edge effects. The results are valid within a single-failure condition of pure cyclotorsional errors, because no other sources of aberrations are considered. The leap from the mathematical model to the real-world outcome cannot be extrapolated without further study. We evaluated the effective optical zone after refractive surgery. Our results suggested that the wavefront aberration could be a useful metric for the analysis of the effective optical zones of refractive treatments, or for the analysis of the functional optical zones of the cornea or the entire eye, by setting appropriate limit values. In particular, the method of analysis of the RMS(∆HOAb) seems to be a rigorous analysis accounting for any deviation from the attempted target for the wavefront aberration. EOZ∆RMSho and EOZ∆SphAb were similar to the POZ, whereas EOZRMS(∆HOAb) was significantly smaller.
Differences between EOZ and POZ were larger for smaller POZ or larger defocus corrections. POZ larger than 6.75 mm resulted in an EOZ at least as large as the POZ. For OZ smaller than 6.75 mm, a nomogram for the OZ could be applied. We have developed a method to objectively minimise the ablated tissue of a customised ablation based on the Zernike expansion of the wavefront aberration. Based upon a sample population of 100 wavefront maps, the tissue-saving capabilities of such methods for minimising the required ablation tissue were simulated. Finally, based upon a sample population of 90 treatments, the clinical application of such methods for minimising the required ablation tissue was evaluated. Minimising the amount of ablated tissue in refractive surgery will still yield, for most of the cases, visual, optical, and refractive benefits comparable to the results obtained when compensating for the full wavefront aberration in refractive surgery. However, a marginally improved level of safety can be achieved under certain circumstances. Even though the Zernike mode decomposition is a mathematical description of the aberration, it is not the aberration itself. Not all Zernike modes affect the optical quality in the same way. The eye does not see through the Zernike decomposition but with its own aberration pattern. However, it seems feasible to efficiently perform laser corneal refractive surgery in a customised form, minimising the amount of ablated tissue without compromising the visual quality. Eliminating all higher-order aberrations may not optimise visual function in highly aberrated eyes. The new algorithm effectively reduced the depth and time needed for the ablation (by up to a maximum of 50%, and by 15% on average), without negatively affecting the clinical outcomes postoperatively, yielding results equivalent to those of the full customisation group.

A1. Method to calculate the corneal asphericity from the corneal wavefront. A2.
Method to predict the postoperative corneal asphericity from the corneal wavefront and the refractive correction.
B1. Definition of an aberration-free profile in mathematical terms.
B2. Method for the mathematical compensation of the focus shift due to tissue removal.
B3. Proposal for a geometric reference (CV) used as ablation centre.
B4. Methods for the evaluation of the bilateral symmetry between eyes.
C1. Proposal for selecting the most appropriate ablation profile for each specific surgical treatment based upon the corneal and ocular wave aberrations.
D1. Methods to calculate the geometry of the ablative spots.
D2. Method to analyse the loss of ablation efficiency at non-normal incidence.
E1. Method to determine the cyclotorsional error during refractive surgery.
E2. Method to determine the residual aberration due to cyclotorsional errors.
E3. Method to determine an optical benefit.
E4. Method to determine a visual benefit.
E5. Method to determine an absolute benefit.
Methods to evaluate the optical zone after refractive surgery.
Methods to determine dependencies among effective optical zone, planned optical zone, and refractive correction.
Method to determine isometric optical zones.
Method to determine a nomogram for the optical zone based upon planned optical zone and refractive correction.
G1. Methods to objectively minimise the maximum ablation depth of a customised ablation based on the Zernike expansion of the wavefront aberration.
G2. Methods to objectively minimise the ablation volume of a customised ablation based on the Zernike expansion of the wavefront aberration.
The results reported in this thesis are of direct application in laser refractive surgery. The induction of aberrations is still a problem in today’s LASIK surgery. We have demonstrated that most of the increase in aberrations can be explained by purely physical factors. We have obtained theoretical laser efficiency correction factors, which have already been applied in the ablation profiles of the SCHWIND AMARIS.
The results reported in this thesis demonstrate the great value of aberrometry (corneal and ocular) in clinical practice. Protocols similar to those followed in this thesis can be established to help identify the most suitable ablation profile for each individual patient. The results reported in this thesis on Aberration-Free profiles and Decision-Tree Analyses have important implications for the selection of ablative corrections or intraocular lenses. Providing the eye with the best visual performance is an extremely complex problem. We have provided evidence that not only the aberrations of individual eyes, but also the technical limitations of the correcting systems, determine (and very often compromise) the final outcomes. Optical factors must be considered, as well as effects related to neural adaptation. The results can establish the basis for future research directions: In this study, we have used the corneal wave aberration as a basis for the determination of corneal asphericity. However, as the OSA recommends, the corneal wave aberration was based on the line of sight. Thus, larger offsets between the pupil centre and the corneal vertex may have negatively affected the power of the correlations. In further studies, we will include the offsets between pupil centre and corneal vertex to improve the accuracy of the method. This chapter was limited to one laser system (and ablation algorithm). However, both the laser platforms and the algorithms that they incorporate have evolved over the last years. In further studies, newer state-of-the-art laser systems and algorithms will be evaluated as well. In this study, we have used aberration-free profiles as a basis for the simulations and clinical evaluations. We have learnt that aberration-free profiles may reduce the induction of aberrations below clinically relevant values.
Since we are confident that, on these grounds, the induction of aberrations can be controlled, in further studies wavefront-guided profiles will be explored and analysed in a similar way. In this chapter, we have performed clinical evaluations at moderate levels of myopia and hyperopia. We have learnt that aberration-free profiles reduce the induction of aberrations below clinically relevant values, but still induce some minor levels of aberrations. In further studies, higher levels of myopia and hyperopia will be analysed to determine to which extent the induction of aberrations remains below clinically relevant values. This chapter was limited to limiting the induction of aberrations; further studies will attempt to manipulate the induction of aberrations in a controlled manner, e.g. for presbyopic corrections. The clinical evaluations in this chapter were limited to correcting the subjects’ manifest refractions. However, in highly aberrated eyes, manifest refraction may become an art, a sort of guessing around the least blurred image. In further studies, systematic deviations from the measured manifest refractions, as well as other foreseeable couplings among Zernike coefficients, will be evaluated. In further works, a comprehensive model to analyse the relative ablation efficiency on different materials (in particular human cornea and poly(methyl methacrylate) (PMMA)) will be developed, which directly considers the applied correction, including astigmatism, as well as the laser beam characteristics and ablative spot properties, providing a method to convert the deviations in achieved ablation observed in PMMA into equivalent deviations in the cornea. We are developing, as well, a simple simulation model to evaluate ablation algorithms and hydration changes in laser refractive surgery. The model simulates different physical effects of an entire surgical process, and the shot-by-shot ablation process based on a modelled beam profile.
The model considers corneal hydration and environmental humidity, as well as laser beam characteristics and ablative spot properties. Using pulse lists collected from actual treatments, we will simulate the gain of efficiency during the ablation. Currently, a prospective method for determining intraoperative cyclotorsion has been implemented in the SCHWIND AMARIS laser system. With this new setting, we are evaluating intraoperative static and dynamic cyclotorsions, and postoperative outcomes on astigmatism and high-order aberrations, among astigmatic or aberrated eyes that underwent refractive surgery. The optical zone will also be evaluated clinically for hyperopic treatments as well as for LASIK treatments. Long-term follow-up on these eyes will help determine whether these accurate results also show improved stability compared to previous experiences. In this study, we have used the corneal wave aberration as a basis for the simulations and clinical evaluations. We have learnt that the combination of subclinical Zernike terms determines the capabilities for saving tissue in an effective way. It is known that ocular wave aberrations tend to show lower-weighted coefficients. In further studies, the tissue savings based upon the ocular wave aberration will be explored and compared with corneal wavefront ones. This chapter was limited to one laser system (and ablation algorithm). However, both the laser platforms and the algorithms that they incorporate have evolved over the last years. In further studies, newer state-of-the-art laser systems and algorithms will be evaluated for tissue savings.

- Peer-reviewed papers (Included as Annex)

Arbelaez MC, Vidal C and Arba-Mosquera S. Six-month clinical outcomes in LASIK for high myopia with aspheric «aberration neutral» ablations using the AMARIS laser system. J Emmetropia 2010; 1: 111-116
Arba-Mosquera S, Hollerbach T.
Ablation Resolution in Laser Corneal Refractive Surgery: The Dual Fluence Concept of the AMARIS Platform. Advances in Optical Technologies, vol. 2010, Article ID 538541, 13 pages, 2010
Arba-Mosquera S, Arbelaez MC, de Ortueta D. Laser corneal refractive surgery in the twenty-first century: a review of the impact of refractive surgery on high-order aberrations (and vice versa). Journal of Modern Optics, Volume 57, Issue 12, 1041-1074
de Ortueta D, Arba Mosquera S. Topographic Stability After Hyperopic LASIK. J Refract Surg 2010; 26(8): 547-554
Camellin M, Arba Mosquera S. Simultaneous aspheric wavefront-guided transepithelial photorefractive keratectomy and phototherapeutic keratectomy to correct aberrations and refractive errors after corneal surgery. J Cataract Refract Surg 2010; 36: 1173-1180
Brunsmann U, Sauer U, Arba-Mosquera S, Magnago T, Triefenbach N. Evaluation of thermal load during laser corneal refractive surgery using infrared thermography. Infrared Physics & Technology 53 (2010) 342-347
de Ortueta D, Arba Mosquera S, Häcker C. Theoretical considerations on the hyperopic shift effect observed when treating negative cylinder in laser refractive surgery. Journal of Emmetropia 2010; 1: 23-28
Arba Mosquera S, Merayo-Lloves J, de Ortueta D. Asphericity analysis using corneal wavefront and topographic meridional fits. J Biomed Opt 2010; 15(2): 028003
Brunsmann U, Sauer U, Dressler K, Triefenbach N, Arba Mosquera S. Minimisation of the thermal load of the ablation in high-speed laser corneal refractive surgery: the ‘intelligent thermal effect control’ of the AMARIS platform. J Modern Opt 2010; 57: 466-479
Arba Mosquera S, Shraiki M. Analysis of the PMMA and cornea temperature rise during excimer laser ablation. J Modern Opt 2010; 57: 400-407
Arbelaez MC, Vidal C and Arba-Mosquera S. Bilateral Symmetry before and Six Months after Aberration-Free™ Correction with the SCHWIND AMARIS TotalTech Laser: Clinical Outcomes.
J Optom 2010; 3: 20-28
Arba Mosquera S, de Ortueta D, Merayo-Lloves J. Tissue-Saving Zernike Terms Selection in Customized Treatments for Refractive Surgery. J Optom 2009; 2: 182-196
Arbelaez MC, Vidal C, Arba Mosquera S. Clinical Outcomes of LASIK for Myopia Using the SCHWIND Platform With Ocular Wavefront Customized Ablation. J Refract Surg 2009; 25: 1083-1090
Arbelaez MC, Vidal C, Al Jabri B, Arba Mosquera S. LASIK for Myopia With Aspheric "Aberration Neutral" Ablations Using the ESIRIS Laser System. J Refract Surg 2009; 25: 991-999
Arbelaez MC, Vidal C, Arba Mosquera S. Excimer laser correction of moderate to high astigmatism with a non-wavefront-guided aberration-free ablation profile: Six-month results. J Cataract Refract Surg 2009; 35: 1789-1798
Arbelaez MC, Vidal C, Arba Mosquera S. Central Ablation Depth and Postoperative Refraction in Excimer Laser Myopic Correction Measured With Ultrasound, Scheimpflug, and Optical Coherence Pachymetry. J Refract Surg 2009; 25: 699-708
Arba Mosquera S, de Ortueta D. Analysis of optimized profiles for 'aberration-free' refractive surgery. Ophthalmic Physiol Opt 2009; 29: 535-548
Arbelaez MC, Vidal C, Arba Mosquera S. Clinical outcomes of corneal wavefront customized ablation strategies with SCHWIND CAM in LASIK treatments. Ophthalmic Physiol Opt 2009; 29: 487-496
Arbelaez MC, Arba Mosquera S. The SCHWIND AMARIS total-tech laser as an all-rounder in refractive surgery. Middle East Afr J Ophthalmol 2009; 16: 46-53 (Awarded "MEACO 2009 Best Paper of Session")
de Ortueta D, Arba Mosquera S, Baatz H. Comparison of Standard and Aberration neutral Profiles for Myopic LASIK With the SCHWIND ESIRIS Platform. J Refract Surg 2009; 25: 339-349
de Ortueta D, Arba Mosquera S, Baatz H. Aberration-neutral Ablation Pattern in Hyperopic LASIK With the ESIRIS Laser Platform. J Refract Surg 2009; 25: 175-184
Arbelaez MC, Vidal C, Arba Mosquera S.
Clinical outcomes of corneal vertex versus central pupil references with aberration-free ablation strategies and LASIK. Invest Ophthalmol Vis Sci 2008; 49: 5287-5294
Arba-Mosquera S, Merayo-Lloves J, de Ortueta D. Clinical effects of pure cyclotorsional errors during refractive surgery. Invest Ophthalmol Vis Sci 2008; 49: 4828-4836
Arba Mosquera S, de Ortueta D. Geometrical analysis of the loss of ablation efficiency at non-normal incidence. Opt Express 2008; 16: 3877-3895
de Ortueta D, Arba Mosquera S. Mathematical properties of Asphericity: A Method to calculate with asphericities. J Refract Surg 2008; 24: 119-121 (Letter to the Editor)
de Ortueta D, Arba Mosquera S. Topographic changes after hyperopic LASIK with the ESIRIS laser platform. J Refract Surg 2008; 24: 137-144
de Ortueta D, Arba Mosquera S. Centration during hyperopic LASIK using the coaxial light reflex. J Refract Surg 2007; 23: 11 (Letter to the Editor)
de Ortueta D, Arba Mosquera S, Magnago T. Q-factor customized ablations. J Cataract Refract Surg 2006; 32: 1981-1982 (Letter to the Editor)
- Book chapters (Included as Annex)
Alió JL, Rosman M, Arba Mosquera S. Minimally invasive refractive surgery (pp. 97-123) in Minimally Invasive Ophthalmic Surgery, Springer publishers (2009)
de Ortueta D, Magnago T, Arba Mosquera S. Optimized Profiles for Aberration-Free Refractive Surgery in Laser Keratectomy: Complications and Effectiveness, NOVA publishers (2009)
Arba Mosquera S, Piñero D, Ortiz D, Alió JL. Customized LASIK: Aspherical Treatments with the ESIRIS Schwind platform (Chapter 40, pp. 378-395) in Mastering the Techniques of Customised LASIK edited by Ashok Garg and Emanuel Rosen, Jaypee Medical International (2007)
Arbelaez MC, Magnago T, Arba Mosquera S. Customised LASIK: SCHWIND CAM-ESIRIS platform (Chapter 15, pp. 207-228) in Tips and Tricks in LASIK surgery edited by Shashi Kapoor and Ioannis G.
Pallikaris, Jaypee Medical International (2007)
- Presentations at international congresses
Arbelaez MC, Arba Mosquera S. Six-month experience in 6D eye-tracking with the SCHWIND AMARIS Total-Tech Laser: Clinical Outcomes in ASCRS2010 (Free Paper; Awarded "ASCRS 2010 Best Paper of Session")
Arba Mosquera S. Investigar en empresas in Jornadas de Jóvenes Investigadores en Óptica Visual 2010: de la Ciencia Básica a la Transferencia Tecnológica (Invited Lecture)
Arbelaez MC, Arba Mosquera S. Six-month experience in aspheric correction of High-Astigmatism with the SCHWIND AMARIS TotalTech laser in ESCRS2009 (Free Paper)
Arba Mosquera S. Technical basis of the PresbyMAX® software: How it works, how it is created in ESCRS2009 (Invited Lecture)
Arbelaez MC, Arba Mosquera S. Three-month experience in customised Advanced Cyclotorsion Correction (ACC) with the SCHWIND AMARIS Total-Tech Laser: Clinical Outcomes in MEACO2009 (Free Paper; Awarded "MEACO 2009 Best Paper of Session")
Arba Mosquera S. Aspherical Optical Zones: The Effective Optical Zone with the SCHWIND AMARIS in SOI2008 (Invited Lecture)
Arbelaez MC, Arba Mosquera S. Six-Month Experience with the SCHWIND AMARIS Total-Tech Laser: Clinical Outcomes in Multicenter Experience with LASIK Treatments in ASCRS2008 (Free Paper; Awarded "ASCRS 2008 Best Paper of Session 3-H: KERATOREFRACTIVE Laser")
Arbelaez MC, Arba Mosquera S. Optimized Zernike Terms Selection in Customized Treatments for Laser Corneal Refractive Surgery: Experience with LASIK Treatments in ASCRS2008 (Poster)
Arba Mosquera S. Representative Recent Improvements in the Perfect Refractive Package in ESOIRS2006 (Invited Lecture)
Barraquer C, Arba Mosquera S. Technical challenges that need to be SCHWIND eye-tech-solutions perspective in 7th International Congress of Wavefront Sensing and Optimized Refractive Corrections (Invited Lecture)
- International Patents (Included as Annex)
Arba Mosquera S.
Laser system for ablating the cornea in a patient's eye
Hollerbach T, Grimm A, Arba Mosquera S. Laser system for ablating the cornea in a patient's eye (EP2030599)
Arba Mosquera S. Laser system to ablate corneal tissue of the eye of a patient (DE202008013344)
Arba Mosquera S, Klinner T. Method for controlling the location a laser pulse impinges the cornea of an eye during an ablation procedure (EP19335384)
Arba Mosquera S, Magnago T. Method for controlling a corneal laser ablation of an eye and associated system (EP1923027)
Grimm A, Arba Mosquera S, Klinner T. System for ablating the cornea of an eye (EP1923026)
Topic A
OBJECTIVE: To evaluate a method for calculating corneal asphericity and the changes in asphericity after refractive surgery.
METHODS: 60 eyes of 15 consecutive myopic patients and 15 patients studied retrospectively. Topographic and corneal wavefront aberration analyses were performed by corneal topography preoperatively and at three months postoperatively. The ablations were performed with a laser using an aberration-free profile. The topographic changes in corneal asphericity and in corneal aberrations were evaluated over a 6 mm diameter.
RESULTS: The induction of corneal spherical aberration and the changes in asphericity correlated with the defocus correction. Both pre- and postoperatively, the corneal asphericity calculated from the asphericity of the principal meridians correlated with the asphericity derived from the corneal wavefront. A strong correlation was obtained between the calculated postoperative asphericity and the theoretical predictions computed from the wavefront, but not from the meridians. A better correlation was obtained for hyperopic treatments than for myopic ones.
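The three-dimensional character of the wavefront-based asphericity estimate can be illustrated with a toy least-squares conic fit to corneal elevation data. This is only a minimal sketch under assumed values (R = 7.8 mm, Q = -0.25, 0.1 µm noise are illustrative, and `conic_sag` is a hypothetical helper); the actual method used in this work is the corneal-wavefront fit described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(r, R, Q):
    """Sag of a rotationally symmetric conic surface:
    z = r^2 / (R (1 + sqrt(1 - (1+Q) r^2 / R^2)))."""
    return r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + Q) * r**2 / R**2)))

# Synthetic corneal elevation over a 6 mm zone (illustrative values)
r = np.linspace(0.0, 3.0, 200)                           # radial coordinate, mm
z = conic_sag(r, 7.8, -0.25)                             # apical radius 7.8 mm, Q = -0.25
z += np.random.default_rng(0).normal(0.0, 1e-4, r.size)  # ~0.1 µm measurement noise

# Least-squares fit of apical radius R and asphericity Q to the elevation data
(R_fit, Q_fit), _ = curve_fit(conic_sag, r, z, p0=(7.5, 0.0))
print(f"R = {R_fit:.3f} mm, Q = {Q_fit:.3f}")
```

A meridional fit, by contrast, would repeat this one-dimensional procedure per meridian and average the resulting Q values, which is what the full-surface wavefront fit avoids.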
CONCLUSIONS: Corneal asphericity calculated from corneal wavefront aberrations represents a three-dimensional fit of the corneal surface, whereas asphericity calculated from the principal meridians represents a two-dimensional fit. Postoperative corneal asphericity can be calculated from corneal wavefront aberrations with greater fidelity than from the corneal topography of the principal meridians. The method proved more accurate for hyperopic treatments than for myopic ones.
Topic B
OBJECTIVE: To provide a model of an aberration-free profile and to clinically evaluate the corneal effects of treatments based on these theoretical profiles, as well as the clinical outcomes of treatments with the optimized aberration-free ablation profiles of the ESIRIS and AMARIS platforms. This includes a comparison of outcomes for ablations centered on the corneal vertex versus the pupil center, and a comparison of the corneal wavefront aberration induced by aspheric aberration-neutral profiles versus a classical Munnerlyn-based profile.
METHODS: The aberration-free profiles were derived from the Zernike expansion of the difference between two corneal Cartesian ovals. A compensation for the focus-shift effects caused by the removal of corneal tissue is incorporated, preserving the physical location of the optical focus of the anterior corneal surface. The surgical efficacy of the profile was simulated by ray tracing through a cornea described by its anterior surface and pachymetry. Two clinical groups (corneal vertex and pupil center) with pupil offset > 200 µm were compared. The clinical outcomes were evaluated in terms of predictability, refractive outcomes, safety, and wavefront aberration.
Bilateral OD/OS symmetry was evaluated in terms of the corneal wavefront aberration.
RESULTS: The proposed theoretical "aberration-free" profiles effectively preserve the aberrations, and predict a more oblate asphericity after myopic treatments and a more prolate one after hyperopic treatments. The induction of corneal aberrations at 6 mm was below clinically relevant levels: 0.061 ± 0.129 µm HO-RMS (p < .001), 0.058 ± 0.128 µm spherical aberration (p < .001) and 0.053 ± 0.128 µm coma (p < .01), while the rate of aberration change per diopter of correction was -0.042 µm/D, -0.031 µm/D and -0.030 µm/D for HO-RMS, SphAb and coma respectively (all p < .001). No other Zernike mode changed significantly. Visual acuity improved in 38% of CV eyes compared with 24% of PC eyes (CV/PC comparison P = 0.38). Induced ocular coma averaged 0.17 µm for the CV group and 0.26 µm for the PC group (CV/PC comparison P = 0.01, favoring CV). Induced ocular spherical aberration averaged 0.01 µm for the CV group and 0.07 µm for the PC group (CV/PC comparison P = 0.05, favoring CV). At 6.0 mm, corneal aberrations changed by a larger amount after Munnerlyn-based profiles than after aberration-neutral profiles.
CONCLUSIONS: The "aberration-free" profiles for refractive surgery defined here, together with consideration of other sources of aberrations such as transition zones, eye tracking and corneal biomechanics, produce results comparable to those of customized treatments. CV-centered treatments achieved better results in terms of induced ocular aberrations and asphericity, but both were identical in terms of photopic visual acuity.
Aberration-free treatments with the SCHWIND AMARIS do not induce clinically significant aberrations and maintain OD-vs-OS symmetry (which influences binocular vision). Corneal aberrations were lower than with the classical profile or in other publications. The use of near-ideal profiles should improve clinical outcomes, reducing the need for nomograms and decreasing the aberrations induced after surgery.
Topic C
OBJECTIVE: To evaluate a decision-tree analysis system for optimizing the outcomes of refractive surgery.
METHODS: A five-step decision tree, the Decision Assistant, based on previous experience with the SCHWIND AMARIS laser, was applied to select customized treatment modes in refractive surgery (aspheric aberration-neutral, corneal wavefront-guided, or ocular wavefront-guided) in order to eliminate or reduce the total aberration.
RESULTS: 6467 LASIK treatments were performed with the Decision Assistant over a 30-month period: 5262 and 112 myopic and hyperopic treatments with astigmatism, respectively, using aspheric aberration-neutral (AF) profiles, 560 using corneal wavefront-guided profiles, and 533 using ocular wavefront-guided profiles. Twenty-two (0.3%) retreatments were performed overall: 18 (0.3%) and 0 (0%) after myopic and hyperopic astigmatism, respectively, with AF profiles, 3 (0.5%) after corneal wavefront-guided profiles, and 1 (0.2%) after ocular wavefront-guided profiles.
CONCLUSIONS: Decision assistants can further optimize surgical outcomes in refractive surgery by providing the most appropriate ablation pattern based on an eye's anamnesis, diagnosis and visual demands.
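As an illustration only, the kind of rule cascade a decision assistant applies can be sketched as follows. The threshold (0.35 µm) and the selection rules below are hypothetical placeholders, not the actual Decision Assistant criteria, which use the full five-step anamnesis, diagnosis and visual-demand inputs.

```python
def select_profile(corneal_hoa_rms, ocular_hoa_rms, threshold=0.35):
    """Toy selector among the three treatment modes mentioned above.

    Inputs are higher-order RMS wavefront errors in microns at 6 mm.
    The 0.35 µm threshold is a hypothetical placeholder value.
    """
    if corneal_hoa_rms < threshold and ocular_hoa_rms < threshold:
        # Normally aberrated eye: aspheric aberration-neutral profile
        return "aberration-free"
    if corneal_hoa_rms >= ocular_hoa_rms:
        # Aberrations dominated by the corneal surface
        return "corneal wavefront-guided"
    # Aberrations dominated by the internal optics / whole eye
    return "ocular wavefront-guided"

print(select_profile(0.20, 0.15))  # aberration-free
print(select_profile(0.60, 0.40))  # corneal wavefront-guided
print(select_profile(0.30, 0.50))  # ocular wavefront-guided
```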
The general principles can be applied to other laser systems; the details, however, will depend on the specifications of the particular system.
Topic D
A general method is proposed for geometrically analyzing the loss of ablation efficiency at non-normal incidence. The model considers the curvature, the system geometry, the applied correction, and the astigmatism as direct parameters, and indirectly the characteristics and properties of the ablative laser beam. The model replaces the direct dependence on radiant exposure with a direct dependence on the nominal tissue volume and with considerations on the area illuminated by the beam, which reduces the analysis to the pure geometry of the spot. The loss of ablation efficiency at non-normal incidence can be compensated at relatively low cost, directly improving the quality of the results.
Topic E
OBJECTIVE: To describe the theoretical effects on aberrations of ablations performed under cyclotorsion and to determine the tolerance limits for the accuracy of the compensation of cyclotorsion errors.
METHODS: We developed a method to determine the average cyclotorsion during refractive surgery, without the need for a cyclotorsion tracker. Theoretical and simulated mathematical conditions are proposed to determine the optical, visual and absolute benefits in 76 consecutive treatments performed on right eyes. The outcomes were evaluated as the Zernike expansion of the residual wavefront aberrations.
RESULTS: Ablations based on a Zernike decomposition that suffer cyclotorsion errors result in residual aberrations of the same Zernike modes with different magnitudes and orientations. The effect depends only on the angular frequency and not on the radial order. An average cyclotorsion of 4.39° was obtained.
In 95% of treatments a theoretical optical benefit would have been obtained, a theoretical visual benefit in 95%, and an absolute benefit in 93%, compared with 89%, 87% and 96% of treatments achieving real benefits, respectively.
CONCLUSIONS: The residual aberrations arising from cyclotorsion errors depend on the aberrations included in the ablation and on the cyclotorsional error. The impact of ablations suffering cyclotorsion is smaller than the effect of decentrations or of edge effects on coma and spherical aberrations. The results are valid under a single-failure condition, since no other sources of aberrations were considered. The leap from the mathematical model to real-world results cannot be extrapolated without further studies.
Topic F
OBJECTIVE: To evaluate the Effective Optical Zone (EOZ) (the part of the ablation that receives the full correction) in eyes that underwent LASEK/Epi-LASEK treatments for the correction of myopia with astigmatism.
METHODS: At 6 months of follow-up, 20 LASEK/Epi-LASEK treatments with a mean defocus of -5.49 ± 2.35 D performed with the SCHWIND AMARIS system were evaluated retrospectively. In all cases, pre- and postoperative corneal wavefront analyses were performed using the Keratron-Scout (OPTIKON2000). EOZ was evaluated as a function of the change in the RMS of the higher-order corneal wavefront aberration (∆RMSho), the change in spherical aberration (∆SphAb), and the RMS value of the change in the higher-order corneal wavefront aberration (RMS(∆HOAb)). The correlations of EOZ with the Planned Optical Zone (POZ) and with the defocus correction (SEq) were analyzed using a bilinear function, as were the calculations of the isometric lines (IOZ), for which EOZ equals POZ, and of the Optical Zone nomogram (NOPZ).
RESULTS: At six months, the defocus was -0.05 ± 0.43 D; ninety percent of the eyes were within ±0.50 D of emmetropia.
After treatment, the mean increase was 0.12 µm for RMSho, 0.09 µm for SphAb and 0.04 µm for coma (6 mm diameter). The mean POZ was 6.76 ± 0.25 mm, while the mean EOZ for ∆RMSho was 6.74 ± 0.66 mm (bilinear correlation p < .005), the EOZ for ∆SphAb was 6.83 ± 0.58 mm (bilinear correlation p < .0001), and the EOZ for RMS(∆HOAb) was 6.42 ± 0.58 mm (significantly smaller, p < .05; bilinear correlation p < .0005). EOZ correlates positively with POZ and decreases steadily with SEq. A -5 D treatment at a POZ of 6.00 mm results in an EOZ of 5.75 mm (NPOZ 6.25 mm); treatments at a POZ of 6.50 mm result in an EOZ of about 6.25 mm (NPOZ 6.75 mm). The isometric condition is met for a POZ of about 6.75 mm.
CONCLUSIONS: ∆RMSho and ∆SphAb provided EOZ values similar to POZ, whereas the EOZ for RMS(∆HOAb) was significantly smaller. The differences between EOZ and POZ were larger for small POZ or large corrections. A POZ larger than 6.75 mm results in an EOZ at least as large as the POZ. For an OZ smaller than 6.75 mm, an OZ nomogram could be applied.
Topic G
The purpose of this work is to study the possibility of performing customized refractive surgery while minimizing the amount of tissue removed without compromising visual quality, and to evaluate the application of these methods for objectively minimizing the depth and time of customized ablations. A new algorithm was developed for selecting an optimized set of Zernike terms in customized treatments for corneal laser refractive surgery. Its tissue-saving attributes were simulated on 100 different wave aberrations at 6 mm diameter.
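The flavor of such a term selection can be conveyed with a brute-force toy: keep the mandatory refractive map, and include each higher-order term only if the combination minimizes the peak-to-valley of the ablation map while the omitted terms stay within a residual-RMS budget. This is a 1-D sketch under assumed maps and budget values, not the actual algorithm of this work.

```python
from itertools import combinations

import numpy as np

def peak_to_valley(surface):
    """Peak-to-valley of an ablation map (~ maximum ablation depth)."""
    return float(surface.max() - surface.min())

def select_terms(base_map, optional_terms, rms_budget):
    """Toy tissue-saving selection over all subsets of optional terms.

    base_map: map of the mandatory (sphero-cylindrical) correction.
    optional_terms: dict name -> contribution map of a higher-order term.
    Picks the subset minimizing peak-to-valley while the RMS of the
    omitted terms stays within rms_budget. Units are arbitrary here.
    """
    names = list(optional_terms)
    best_names, best_ptv = None, float("inf")
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            omitted = [optional_terms[n] for n in names if n not in subset]
            omitted_rms = float(np.sqrt(sum(np.mean(t**2) for t in omitted)))
            if omitted_rms > rms_budget:
                continue  # omitting these terms would degrade quality too much
            ptv = peak_to_valley(base_map + sum(optional_terms[n] for n in subset))
            if ptv < best_ptv:
                best_names, best_ptv = set(subset), ptv
    return best_names, best_ptv

# 1-D toy example: a defocus-like base map plus two higher-order-like terms
x = np.linspace(-1.0, 1.0, 101)
base = 4.0 * (1.0 - x**2)
terms = {"coma-like": 0.4 * x**3,
         "sph-like": 0.5 * (6 * x**4 - 6 * x**2 + 1)}
chosen, depth = select_terms(base, terms, rms_budget=0.3)
print(chosen, round(depth, 2))
```

In this toy case the spherical-aberration-like term is kept because including it does not deepen the map, while the coma-like term is omitted within the quality budget.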
The simulation results were evaluated in terms of how much depth and volume were reduced for each condition (in microns and as a percentage), whether the proposed correction amounted to a full wavefront correction or an aberration-free treatment, and whether the proposed depth or volume was smaller than that required for the equivalent aberration-free treatment. The clinical outcomes and tissue-saving attributes were evaluated in two groups (minimize depth: MD, and minimize volume: MV; 30 eyes each), plus a control group (corneal wavefront: CW, 30 eyes). The clinical outcomes were evaluated in terms of predictability, safety and contrast sensitivity. The simulation results showed a mean depth saving of 5 µm (0-16 µm) and a mean volume saving of 95 nl (0-127 nl), an 11% reduction in tissue (0-66% tissue saving). The proposed corrections were always shallower than the full wavefront corrections and in 59% of cases were shallower than the equivalent aberration-free treatments, significantly so (by 15%) compared with the full customized correction. Refraction was corrected to subclinical levels, uncorrected visual acuity improved to 20/20, best-corrected visual acuity increased by 2 lines, aberrations were reduced by approximately 40% compared with preoperative baseline levels, and the functional optical zone of the cornea was enlarged by approximately 40% compared with preoperative baseline levels. Trefoil, coma, spherical aberration and the RMS of the higher-order aberrations were all reduced. In the clinical evaluation, 93% of CW treatments, 93% in the MD group and 100% in the MV group were within 0.50 D of the postoperative SEq. 40% of CW treatments, 34% in the MD group and 47% in the MV group gained at least one line of BCVA postoperatively.
Tissue saving yielded a mean depth reduction of 8 µm (1-20 µm) and a time saving of 6 s (1-15 s) in the MD group, and 6 µm (0-20 µm) and 8 s (2-26 s) in the MV group. Although the Zernike decomposition into modes is a mathematical description of the aberration, it is not the aberration itself. Not all Zernike modes affect optical quality in the same way. The eye does not see through the Zernike decomposition but with its own aberration pattern. Nevertheless, it seems feasible to effectively perform customized corneal laser refractive surgery while minimizing the amount of tissue removed without compromising visual quality. Removing all higher-order aberrations may not optimize visual function in highly aberrated eyes. The new algorithm effectively reduces the depth and time required for the ablation (up to a maximum of 50%, and 15% on average) without adversely affecting postoperative clinical outcomes, with results equivalent to those of the full customization group.
This thesis addresses the physical aspects of laser refractive surgery applied to the cornea to change its refractive state and, consequently, to change the optical properties of the eye. In particular, this thesis has focused on a better understanding of the mechanisms of corneal laser ablation (and the changes induced in the cornea) and on the development of potential improvements to better control the ablation process. The changes induced in the geometry of the corneal surface were studied, and the optical results were described in the form of optical aberrations. Measurements performed on real patients allowed us to evaluate the influence and efficacy of the proposed improvements in a real clinical setting.
Corneal laser refractive surgery optimized with the improvements presented here helped reduce complications and the occurrence of adverse events during and after refractive surgery, improving postoperative visual quality and reducing the retreatment rate. This thesis has demonstrated a better application of refractive surgical laser treatments to the cornea, adequately compensating the loss of ablation efficiency at non-normal incidence, improving the systems for tracking eye movements, and optimizing the ablation profiles to customize refractive surgery. Different methods to reduce the thickness of ablated tissue (and, to a lesser extent, to reduce the treatment time) have been proposed, evaluated and discussed. The results derived from this work have been applied to the AMARIS system for corneal refractive surgery, as well as to the algorithms and software that control and monitor the ablation procedures.
A. Corneal asphericity has been analyzed by means of the corneal wavefront and by meridional analysis. This study suggests that the corneal wavefront alone represents a useful metric for evaluating both the optical quality of a refractive-surgery ablation and the corneal asphericity, avoiding complicated non-linear effects in the analysis. For these reasons, this method has the potential to replace, or at least complement, the asphericity analysis methods currently in use, which are based on a simple average of asphericity values. Corneal asphericity calculated from corneal wavefront aberrations represents a three-dimensional fit of the corneal surface; asphericity calculated from the principal topographic meridians represents a two-dimensional fit of the principal corneal meridians.
Postoperative corneal asphericity can be calculated from corneal wavefront aberrations with greater fidelity than from the corneal topography of the principal meridians. Greater accuracy was demonstrated for hyperopic treatments than for myopic ones.
B. A model of an aberration-free profile has been provided, and the postoperative corneal impact of treatments based on these theoretical profiles has been evaluated. The application of aberration-free patterns for refractive surgery as defined here, together with consideration of other sources of aberrations such as transition zones, eye tracking and corneal biomechanics, has produced results comparable to those of customized treatments. CV-centered treatments achieved better results in terms of induced ocular aberrations and asphericity, but both were identical in terms of photopic visual acuity. The aberration-free treatments included in the SCHWIND AMARIS did not induce clinically significant aberrations, and bilateral symmetry (which influences binocular vision) was maintained. Corneal aberrations were lower than with the classical profile or in other publications. The existence of near-ideal profiles should improve clinical outcomes, reducing the need for nomograms and decreasing the aberrations induced after surgery.
C. A decision-tree analysis for further optimizing the outcomes of refractive surgery has been evaluated. The desired outcome of refractive surgery is to provide patients with optimal quality of functional vision.
While ocular wavefront-guided treatments have the advantage of being based on the objective refraction of the complete human eye, corneal wavefront-based treatments have the advantage of being independent of accommodation effects and of lighting and pupil conditions; aspheric treatments have the advantage of saving tissue and time. The use of decision assistants can further optimize surgical outcomes by providing the most appropriate ablation pattern in view of an eye's anamnesis, diagnosis and visual demands. The general principles can be applied to other laser systems, although the details will differ.
D. A geometric analysis of the loss of ablation efficiency at non-normal incidence has been developed. The efficiency loss is an effect that must be compensated in commercial laser systems by means of sophisticated algorithms covering as many of the possible variables as feasible. In parallel, ever more capable, reliable and safe laser systems with better resolution and accuracy are required. The improper use of a model that overestimates or underestimates the efficiency loss will overestimate or underestimate its compensation and will merely mask the induction of aberrations under the guise of other error sources.
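The geometric flavor of such efficiency models can be conveyed with a deliberately simplified blow-off sketch: depth per pulse follows a logarithmic (Beer-Lambert-type) law, and at oblique incidence the effective fluence is diluted by the cosine of the incidence angle. Reflection losses and the full spot-geometry terms of the model developed in this thesis are omitted, and the numerical values (threshold fluence, depth scale) are illustrative, not AMARIS parameters.

```python
import math

def depth_per_pulse(fluence, theta_deg, f_threshold=50.0, m=0.3):
    """Simplified blow-off ablation model.

    depth = m * ln(F_eff / F_th), with the effective fluence diluted by
    cos(theta) at oblique incidence. fluence and f_threshold in mJ/cm^2;
    m (depth scale, µm) and f_threshold are illustrative values only.
    """
    f_eff = fluence * math.cos(math.radians(theta_deg))
    if f_eff <= f_threshold:
        return 0.0  # below the ablation threshold: no material removed
    return m * math.log(f_eff / f_threshold)

d0 = depth_per_pulse(200.0, 0.0)    # normal incidence (corneal apex)
d40 = depth_per_pulse(200.0, 40.0)  # oblique incidence (corneal periphery)
print(f"relative efficiency at 40 deg: {d40 / d0:.2f}")
```

Even this crude sketch shows why uncompensated peripheral spots remove less tissue than planned, the effect that the full geometric model quantifies and corrects.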
The model presented in this study removes the direct dependence on radiant exposure and replaces it with considerations of the nominal tissue volume removed per spot and of the area illuminated by the beam, reducing the analysis to the pure geometry of the spot, with results identical to those obtained by the Dorronsoro-Cano-Merayo-Marcos model, while additionally taking into account the influence of flying-spot technology, where the spot spacing is small compared with the spot width and multiple spots overlap, contributing to the same ablation point, and the applied correction, since the changes in corneal curvature during the treatment likewise vary the ablation efficiency over the course of the treatment. Our model provides an analytical expression for the correction of laser efficiency losses that is in good agreement with recent experimental studies, both in PMMA and in corneal tissue. The model incorporates several factors that were ignored in previous analytical models and is useful for predicting several clinical effects reported by other authors. Moreover, owing to its analytical approach, it is valid for the different laser devices used in refractive surgery. The development of more accurate models to improve emmetropization and the correction of ocular aberrations is an important topic. We hope this model will be an interesting and useful contribution to refractive surgery and will take us a step closer to this goal.
E. The clinical effects of cyclotorsion errors during refractive surgery have been analyzed. We have shown that cyclotorsion errors result in residual aberrations and that, with increasing cyclotorsional error, there is a greater potential for aberration induction. Thirteen percent of eyes showed more than 10 degrees of cyclotorsion, which predicts approximately 35% residual astigmatic error in those eyes.
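The dependence on angular frequency alone can be made concrete. Subtracting two equal-magnitude Zernike vectors of angular frequency m that differ in orientation by a cyclotorsion error α leaves a residual of relative magnitude 2·sin(m·α/2), which is a minimal sketch of the relation, with the function name chosen for illustration:

```python
import math

def residual_fraction(m, alpha_deg):
    """Residual fraction of a Zernike term of angular frequency m after
    applying its correction rotated by a cyclotorsion error alpha_deg.
    Follows from subtracting two equal-magnitude vectors of angular
    frequency m whose orientations differ by alpha_deg."""
    return 2.0 * abs(math.sin(math.radians(m * alpha_deg) / 2.0))

# Astigmatism (m = 2) under 10 degrees of cyclotorsion: ~35% residual,
# matching the figure quoted above.
print(f"{residual_fraction(2, 10.0):.2f}")  # → 0.35
```

Note that the radial order does not appear: trefoil (m = 3) or tetrafoil (m = 4) terms degrade faster under the same torsional error than astigmatism does.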
Because the astigmatic error is generally the vector aberration of highest magnitude, patients with higher levels of astigmatism are at greater risk of problems due to cyclotorsional error. The residual aberrations arising from cyclotorsion errors depend on the aberrations included in the ablation and on the cyclotorsional error. The theoretical impact of cyclotorsion errors is smaller than that of decentered ablations or of edge effects on coma and spherical aberrations. The results are valid under a single-failure condition, because no other sources of aberrations are considered.
F. The effective optical zone after refractive surgery has been evaluated. Our results suggest that the wavefront aberration can be a useful metric for analyzing effective optical zones, or for analyzing the functional optical zones of the cornea or of the whole eye, by establishing suitable cut-off values. In particular, the RMS(∆HOAb) analysis method appears to be a rigorous analysis of any deviation from the tentative target wavefront aberration. The EOZ values obtained with the ∆RMSho and ∆SphAb methods were similar to POZ, whereas with the RMS(∆HOAb) method the EOZ was significantly smaller. The differences between EOZ and POZ were larger for small POZ or larger corrections. A POZ larger than 6.75 mm results in an EOZ at least as large as the POZ. For an OZ smaller than 6.75 mm, an OZ nomogram could be applied.
G. A method has been developed to objectively minimize the tissue ablation of a customized treatment based on the Zernike expansion of the wavefront aberration. Based on a population sample of 100 wavefront maps, the tissue-saving capability of these methods for minimizing the required tissue ablation was simulated.
Finally, the clinical application of these methods for minimizing the required tissue ablation was evaluated on a population sample of 90 eyes. The methods for reducing the tissue removed in refractive surgery demonstrated visual, optical and refractive benefits comparable to the results obtained with full wavefront aberration compensation in refractive surgery. Moreover, under certain circumstances, a marginal level of safety improvement could be achieved. Although the Zernike decomposition into modes is a mathematical description of the aberration, it is not the aberration itself. Not all Zernike modes affect optical quality in the same way. The eye does not see through the Zernike decomposition but with its own aberration pattern. Nevertheless, it seems feasible to effectively perform customized corneal laser refractive surgery while minimizing the amount of tissue removed without compromising visual quality. Removing all higher-order aberrations may not optimize visual function in highly aberrated eyes. The new algorithm effectively reduced the depth and time required for the ablation (up to a maximum of 50%, and 15% on average) without adversely affecting clinical outcomes, with results equivalent to those of the full customization group.
A1. Method for calculating corneal asphericity from the corneal wavefront.
A2. Method for predicting postoperative corneal asphericity from the corneal wavefront and the refractive correction.
B1. Mathematical definition of an aberration-free profile.
B2. Mathematical method for compensating the focus shift caused by tissue removal.
B3. Proposal of a geometric reference (CV) used as the ablation center.
B4. Methods for evaluating bilateral symmetry between eyes.
C1. Proposal for selecting the most appropriate ablation profile for each specific surgical treatment based on the corneal and ocular wave aberrations.
D1. Methods for calculating the geometry of the ablation spots.
D2. Method for analyzing the loss of ablation efficiency at non-normal incidence.
E1. Method for determining the cyclotorsional error during surgery.
E2. Method for determining the residual aberration due to cyclotorsion errors.
E3. Method for determining the optical benefit.
E4. Method for determining the visual benefit.
E5. Method for determining the absolute benefit.
F1. Methods for evaluating the optical zone after refractive surgery.
F2. Methods for determining the dependencies between the effective optical zone, the planned optical zone, and the refractive correction.
F3. Method for determining the isometric optical zones.
F4. Method for determining an optical zone nomogram based on the required optical zone and the refractive correction.
G1. Methods for objectively minimizing the maximum ablation depth of a customized treatment based on the Zernike expansion of the wavefront aberration.
G2. Methods for objectively minimizing the ablation volume of a customized treatment based on the Zernike expansion of the wavefront aberration.
1. The results of this thesis are directly applicable to laser refractive surgery. The induction of aberrations remains a problem in today's LASIK surgery. We have shown that most of the increase in aberrations can be explained by purely physical factors. We have obtained theoretical efficiency correction factors, which have already been applied in the ablation profiles of the SCHWIND AMARIS.
2. The results of this thesis demonstrate the great value of (corneal and ocular) aberrometry in clinical practice.
Protocols similar to those followed in this thesis can be established to help identify the most suitable ablation profile for each individual patient.
3. The results of this thesis regarding aberration-free profiles and the decision-tree analysis have important implications for the selection of ablative corrections or intraocular lenses. Providing eyes with the best visual performance is an extremely complex problem. We have presented evidence that not only the aberrations of individual eyes but also the technical limitations of the correcting systems determine (and very often limit) the final outcomes. Optical factors must be considered, as well as effects related to neural adaptation.

These results lay the groundwork for future lines of research:

A. In this study we used the corneal wavefront aberration as the basis for determining corneal asphericity. However, as recommended by the OSA, the corneal wavefront aberration was referenced to the line of sight. Large offsets between the pupil centre and the corneal vertex may therefore have adversely affected the strength of the correlations. In further studies, the offsets between the pupil centre and the corneal vertex will be included to improve the accuracy of the method. This chapter was limited to one laser system (and one ablation algorithm). However, both the laser platforms and the algorithms they incorporate have evolved in recent years. In further studies, new state-of-the-art laser systems and algorithms will also be evaluated.

B. In this study, aberration-free profiles were used as the basis for the simulations and clinical evaluations. We have learned that aberration-free profiles can keep the induction of aberrations below clinically relevant values. Since we are confident that, for these reasons, the induction of aberrations can be controlled, further studies will explore wavefront-guided profiles, which will be analysed in a similar way. In this chapter, clinical evaluations were performed at moderate levels of myopia and hyperopia. We have learned that aberration-free profiles reduce the induction of aberrations below clinically relevant values, although some minor levels of aberration are still induced. In further studies, higher levels of myopia and hyperopia will be analysed to determine whether the measured induction of aberrations remains below clinically relevant values. This chapter was limited to reducing the induction of aberrations; further studies will attempt to manipulate the induction of aberrations in a controlled way, for example for presbyopia corrections.

C. The clinical evaluations in this chapter were limited to correcting the subjects' manifest refractions. In highly aberrated eyes, however, determining the manifest refraction can become an art, a kind of guessing at the least blurred image. In further studies, systematic deviations from the mean manifest refractions will be analysed in order to evaluate the coupling effects between Zernike modes.

D. We are developing a simple simulation model to evaluate ablation algorithms and hydration changes in laser refractive surgery. The model simulates different physical effects of a surgical procedure, and the ablation process spot by spot, on the basis of a modelled beam profile. The model considers corneal hydration as well as ambient humidity and the characteristics and properties of the ablative laser beam. Using pulse lists taken from real treatments, the gain in efficiency during the ablation process will be simulated.

E. A prospective method for determining intraoperative cyclotorsion has now been implemented in the SCHWIND AMARIS laser system. In this new scenario we are evaluating static and dynamic intraoperative torsion, together with the postoperative outcomes for astigmatism and higher-order aberrations. Likewise, a six-dimensional eye-tracking module is being developed by SCHWIND eye-tech-solutions. With this technology, static and dynamic eye movements in 6D will be assessable intraoperatively.

F. The clinical evaluation of the optical zone will be extended to hyperopic treatments as well as to LASIK treatments. Longer-term follow-up of these eyes will make it possible to determine whether these results also show greater stability compared with previous experience.

G. In this study we used the corneal wavefront aberration as the basis for the simulations and clinical evaluations. We have learned that combining Zernike terms with subclinical values can effectively determine the tissue-saving capacity. Ocular aberrations are known to usually show smaller weighting coefficients. In further studies, tissue saving based on the ocular wavefront aberration will be explored and compared with that based on the corneal wavefront. This chapter was limited to one laser system (and one ablation algorithm). However, both the laser platforms and the algorithms they incorporate have evolved in recent years. In further studies, new state-of-the-art laser systems and algorithms will also be evaluated for tissue saving.
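To illustrate the physics behind the efficiency-correction factors mentioned in conclusion 1 and in contribution D2, the following sketch combines a simple blow-off ablation law with the two main losses at non-normal incidence: the spreading of the beam footprint (a cosine factor) and the increase in Fresnel reflection. The refractive index (≈1.52 at 193 nm), the fluence-to-threshold ratio, and the blow-off law itself are standard textbook assumptions used only for illustration, not the exact model derived in the thesis:

```python
import math

def fresnel_reflectance(theta_i, n1=1.0, n2=1.52):
    """Unpolarized Fresnel reflectance at an air/cornea interface.
    n2 = 1.52 is an assumed corneal refractive index at 193 nm."""
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)  # Snell's law
    rs = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    rp = ((n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) /
          (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))) ** 2
    return 0.5 * (rs + rp)

def relative_ablation_efficiency(theta_i, fluence_ratio=4.0):
    """Per-pulse ablation depth at incidence theta_i relative to normal
    incidence, using a blow-off model d = m * ln(F / F_th).
    fluence_ratio = F / F_th is an assumed working value; the effective
    fluence is reduced by the footprint spread (cos theta) and by the
    Fresnel reflection loss (1 - R)."""
    def depth(theta):
        eff = fluence_ratio * math.cos(theta) * (1.0 - fresnel_reflectance(theta))
        return max(0.0, math.log(eff))  # no ablation below threshold
    return depth(theta_i) / depth(0.0)

for deg in (0, 15, 30, 45):
    rel = relative_ablation_efficiency(math.radians(deg))
    print(f"{deg:2d} deg incidence -> relative efficiency {rel:.3f}")
```

In this simplified model the efficiency is 1 at normal incidence by construction and decreases monotonically with the incidence angle, which is the qualitative behaviour an ablation algorithm must compensate for in the corneal periphery, where the beam strikes the curved surface obliquely.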
J Cataract Refract Surg; 2007; 33: 1158-76 Hori-Komai Y, Toda I, Asano-Kato N, Ito M, Yamamoto T, Tsubota K. Comparison of LASIK using the NIDEK EC-5000 optimized aspheric transition zone (OATz) and conventional ablation profile. J Refract Surg; 2006; 22: 546-55 Subbaram MV, MacRae SM. Customized LASIK treatment for myopia based on preoperative manifest refraction and higher order aberrometry: the Rochester nomogram. J Refract Surg; 2007; 23: 435-41 McLellan JS, Marcos S, Prieto PM, Burns SA. Imperfect optics may be the eye's defence against chromatic blur. Nature; 2002; 417: 174-6 Applegate RA, Sarver EJ, Khemsara V. Are all aberrations equal? J Refract Surg; 2002; 18: S556-62 Applegate RA, Marsack JD, Ramos R, Sarver EJ. Interaction between aberrations to improve or reduce visual performance. J Cataract Refract Surg; 2003; 29: 1487-95 Tuan KA, Somani S, Chernyak DA. Changes in wavefront aberration with pharmaceutical dilating agents. J Refract Surg; 2005; 21: S530-534 Pop M, Payette Y. Photorefractive keratectomy versus laser in situ keratomileusis: a control-matched study. Ophthalmology. 2000; 107: 251-257. Bueeler M, Mrochen M, Seiler T. Maximum permissible lateral decentration in aberration-sensing and wavefront-guided corneal ablations. J Cataract Refract Surg 2003; 29: 257-263 Kim WS, Jo JM. Corneal hydration affects ablation during laser in situ keratomileusis surgery. Cornea. 2001; 20: 394-397. Dougherty PJ, Wellish KL, Maloney RK. Excimer laser ablation rate and corneal hydration. Am J Ophthalmol. 1994; 118: 169-176. Fam HB, Lim KL. Effect of higher-order wavefront aberrations on binocular summation. J Refract Surg. 2004; 20: S570-5 Nelson-Quigg JM, Cello K, Johnson CA. Predicting binocular visual field sensitivity from monocular visual field results. Invest Ophthalmol Vis Sci. 2000; 41: 2212-21 Cuesta JR, Anera RG, Jiménez R, Salas C. Impact of interocular differences in corneal asphericity on binocular summation. Am J Ophthalmol. 
2003; 135: Jiménez JR, Villa C, Anera RG, Gutiérrez R, del Barco LJ. Binocular visual performance after LASIK. J Refract Surg. 2006; 22: 679-88 Marcos S, Burns SA. On the symmetry between eyes of wavefront aberration and cone directionality. Vision Res. 2000; 40: 2437-47 Mansouri B, Thompson B, Hess RF. Measurement of suprathreshold binocular interactions in amblyopia. Vision Res. 2008 Oct 31. [Epub ahead of Jiménez JR, Ponce A, Anera RG. Induced aniseikonia diminishes binocular contrast sensitivity and binocular summation. Optom Vis Sci. 2004; 81: 559-62 Yang Y, Wu F. Technical note: Comparison of the wavefront aberrations between natural and pharmacological pupil dilations. Ophthalmic Physiol Opt; 2007; 27: 220-223 Erdem U, Muftuoglu O, Gundogan FC, Sobaci G, Bayer A. Pupil center shift relative to the coaxially sighted corneal light reflex under natural and pharmacologically dilated conditions. J Refract Surg; 2008; 24: 530-538 Snellen H. Letterproeven tot Bepaling der Gezichtsscherpte. Utrecht, Weyers, Radhakrishnan H, Charman WN. Age-related changes in ocular aberrations with accommodation. J Vis. 2007; 7: 11.1-21 López-Gil N, Fernández-Sánchez V, Legras R, Montés-Micó R, Lara F, aberrations of the human eye as a function of age. Invest Ophthalmol Vis Sci. 2008; 49: 1736-43 Iida Y, Shimizu K, Ito M, Suzuki M. Influence of age on ocular wavefront aberration changes with accommodation. J Refract Surg. 2008; 24: 696-701 He JC, Gwiazda J, Thorn F, Held R, Huang W. Change in corneal shape and corneal wave-front aberrations with accommodation. J Vis. 2003; 3:456-63 Atchison DA, Markwell EL, Kasthurirangan S, Pope JM, Smith G, Swann PG. Age-related changes in optical and biometric characteristics of emmetropic eyes. J Vis. 2008; 8: 29.1-20 Holzer MP, Sassenroth M, Auffarth GU. Reliability of corneal and total wavefront aberration measurements with the SCHWIND Corneal and Ocular Wavefront Analyzers. J Refract Surg; 2006; 22: 917-920. MacRae S. 
Aberration Interaction In Wavefront Guided Custom Wavefront Guided Custom Ablation. Wavefront Congress 2007. Bühren J, Yoon GY, Kenner S, Artrip S, MacRae S, Huxlin K. The effect of decentration on lower- and higher-order aberrations after myopic photorefractive keratectomy (PRK) in a cat model. Wavefront Congress 2007. McLellan JS, Prieto PM, Marcos S, Burns SA. Effects of interactions among wave aberrations on optical image quality. Vision Res; 2006; 46: 3009-3016. Dorronsoro C, Cano D, Merayo-Lloves J, Marcos S. Experiments on PMMA models to predict the impact of refractive surgery on corneal shape. Express; 2006; 14: 6142-6156 Smith EM Jr, Talamo JH. Cyclotorsion in the seated and the supine patient. J Cataract Refract Surg. 1995; 21: 402-403. de Ortueta D, Arba Mosquera S, Baatz H. Topographic changes after hyperopic LASIK with the ESIRIS laser platform. J Refract Surg. 2008; 24: 137144. Rosa N, Furgiuele D, Lanza M, Capasso L, Romano A. Correlation of Keratectomy. J Refract Surg. 2004; 20: 478-483. Campbell CE. A new method for describing the aberrations of the eye using Zernike polynomials. Optom Vis Sci. 2003; 80: 79–83. Campbell CE. A method to analyze cylinder axis error. Optom Vis Sci. 1999; 76: 254-255. Yang Y, Thompson K, Burns S. Pupil location under mesopic, photopic and pharmacologically dilated conditions. Invest Ophthalmol Vis Sci. 2002; 43: 25082512. Guirao A, Williams D, Cox I. Effect of rotation and translation on the expected benefit of an ideal method to correct the eyes higher-order aberrations. J Opt Soc Am A. 2001; 18: 1003-1015. Ciccio AE, Durrie DS, Stahl JE, Schwendeman F. Ocular cyclotorsion during customized laser ablation. J Refract Surg. 2005; 21: S772-S774. Uozato H, Guyton DL. Centering corneal surgical procedures. Am J Ophthalmol. 1987; 103: 264-275. Marcos S, Barbero S, Llorente L, Merayo-Lloves J. Optical response to LASIK surgery for myopia from Total and Corneal Aberration Measurements. Ophthalmol Vis Sci. 
2001; 42: 3349-3356. Marcos S. Aberrations and visual performance following standard Laser vision correction. J Refract Surg. 2001; 17: S596-S601. Bará S, Mancebo T, Moreno-Barriuso E. Positioning tolerances for phase plates compensating aberrations of the human eye. Appl Opt. 2000; 39: 34133420. Chernyak DA. Iris-based cyclotorsional image alignment method for wavefront registration. IEEE Transactions on Biomedical Engineering. 2005; 52: 2032- Schruender S, Fuchs H, Spasovski S, Dankert A. Intraoperative corneal topography for image registration. J Refract Surg. 2002; 18: S624-S629. Bueeler M, Mrochen M, Seiler T. Maximum permissible torsional misalignment in aberration-sensing and wavefront-guided corneal ablation. J Cataract Refract Surg. 2004; 30: 17-25. Hersh PS, Steinert RF, Brint SF; Summit PRK-LASIK Study Group. Photorefractive keratectomy versus laser in situ keratomileusis: a comparison of optical side effects. Ophthalmology. 2000; 107: 925-933. Maloney RK. Corneal topography and optical zone location in photorefractive keratectomy. Refract Corneal Surg 1990; 6: 363-371. Hersh PS, Schwartz-Goldstein BH; Summit Photorefractive Keratectomy Topography Study Group. Corneal topography of phase III excimer laser photorefractive keratectomy: characterization of clinical effects. Ophthalmology. 1995; 102: 963-978. O’Brart DPS, Gartry DS, Lohmann CP, Kerr Muir MG, Marshall J. Excimer laser photorefractive keratectomy for myopia: comparison of 4.00- and 5.00millimeter ablation zones. J Refract Corneal Surg. 1994; 10: 87-94. Roberts CW, Koester CJ. Optical zone diameters for photorefractive corneal surgery. Invest Ophthalmol Vis Sci. 1993; 34: 2275-2281. Halliday BL. Refractive and visual results and patient satisfaction after excimer laser photorefractive keratectomy for myopia. Br J Ophthalmol. 1995; 79: 881-887. Nepomuceno RL, Boxer Wachler BS, Scruggs R. Functional optical zone after myopic LASIK as a function of ablation diameter. J Cataract Refract Surg. 
2005; 31: 379–384. Rojas MC, Manche EE. Comparison of videokeratographic functional optical zones in conductive keratoplasty and LASIK for hyperopia. J Refract Surg. 2003; 19: 333–337. Mrochen M, Büeler M. Aspheric optics: fundamentals. Ophthalmologe 2008; 105: 224-33. Yoon G, MacRae S, Williams DR, Cox IG. Causes of spherical aberration induced by laser refractive surgery. J Cataract Refract Surg; 2005;31:127-135 Reinstein DZ, Silverman RH, Sutton HF, Coleman DJ. Very high frequency ultrasound corneal analysis identifies anatomic correlates of optical complications Ophthalmology. 1999; 106: 474–482. Dupps WJ Jr, Roberts C. Effect of acute biomechanical changes on corneal curvature after photokeratectomy. J Refract Surg. 2001; 17: 658–669. Wang Z, Chen J, Yang B. Posterior corneal surface topographic changes after LASIK are related to residual corneal bed thickness. Ophthalmology; 1999; 106: Binder PS. Analysis of ectasia after laser in situ keratomileusis: risk factors. J Cataract Refract Surg; 2007; 33: 1530-8 Tabernero J, Klyce SD, Sarver EJ, Artal P. Functional optical zone of the cornea. Invest Ophthalmol Vis Sci. 2007; 48: 1053-60. Mok KH, Lee VW. Effect of optical zone ablation diameter on LASIK-induced higher order optical aberrations. J Refract Surg. 2005; 21: 141-143. Netto MV, Ambrosio R Jr, Wilson SE. Pupil size in refractive surgery candidates. J Refract Surg. 2004;20:337-342. Partal AE, Manche EE. Diameters of topographic optical zone and programmed ablation zone for laser in situ keratomileusis for myopia. J Refract Surg. 2003; 19: 528-33 Qazi MA, Roberts CJ, Mahmoud AM, Pepose JS. Topographic and biomechanical differences between hyperopic and myopic laser in situ keratomileusis. J Cataract Refract Surg. 2005; 31: 48-60 Kim HM, Jung HR. Multizone photorefractive keratectomy for myopia of 9 to 14 diopters. J Refract Surg; 1995; 11: S293-S297. Kermani O, Schmiedt K, Oberheide U, Gerten G. 
Early results of nidek customized aspheric transition zones (CATz) in laser in situ keratomileusis. J Refract Surg; 2003; 19: S190-S194. Kezirian GM. A Closer Look at the Options for LASIK Surgery. Review of Ophthalmology; 2003; 10: 12. Goes FJ. Customized topographic repair with the new platform: ZEiSS MEL80/New CRS Master TOSCA II (Chapter 18, pp. 179-193) in Mastering the Techniques of Customised LASIK edited by Ashok Garg and Emanuel Rosen, Jaypee Medical International (2007) Remón L, Tornel M, Furlan WD. Visual Acuity in Simple Myopic Astigmatism: Influence of Cylinder Axis. Optom Vis Sci; 2006; 83: 311–315. Bará S, Navarro R. Wide-field compensation of monochromatic eye aberrations: expected performance and design trade-offs. J. Opt. Soc. Am. A 2003; 20: 1-10. Rocha KM, Soriano ES, Chamon W, Chalita MR, Nose W. aberration and depth of focus in eyes implanted with aspheric and spherical intraocular lenses: a prospective randomized study. Ophthalmology; 2007; 114: Chen L, Artal P, Gutierrez D, Williams DR. Neural compensation for the best aberration correction. J Vis; 2007; 7: 1-9. Villegas EA, Alcón, E, Artal P. Optical quality of the eye in subjects with normal and excellent visual acuity. Invest Opthalmol Vis Sci; 2008; 49: 46884696. Benito A, Redondo M, Artal P. Laser In Situ Keratomileusis disrupts the aberration compensation mechanism of the human eye. Am J Ophthalmol; 2009; 147: 424-431. Lipshitz I. Thirty-four challenges to meet before excimer laser technology can Achieve super vision. J Refract Surg; 2002; 18: 740-743. Durrie DS, Kezirian GM. Femtosecond laser versus mechanical keratome flaps in wavefront-guided laser in situ keratomileusis. J Cataract Refract Surg; 2005; 31: 120-126. Tsai YY, Lin JM. Ablation centration after active eye-tracker-assisted photorefractive keratectomy and laser in situ keratomileusis. J Cataract Refract Surg; 2000; 26: 28-34.
{"url":"https://studyres.com/doc/7973979/optimisation-of-the-ablation-profiles-in-customised","timestamp":"2024-11-11T03:47:08Z","content_type":"text/html","content_length":"757914","record_id":"<urn:uuid:0c3717f0-5201-4e7d-bea2-9779b0cd29d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00823.warc.gz"}
Multivariate non-time series

Similar to this previous question (http://forum.recurrence-plot.tk/viewtop ... f=4&t=3591), I'd like to take multivariate data (independent feature1, feature2, ..., featureN) and convert each row into an image for classification. There are perhaps 10 features as well as a dependent variable, while the dependent variable is kept out of the image. A post elsewhere online suggested using recurrence plots to convert the row into an image. I tried doing this with the data (see attachment) using, for example, this Python library (https://github.com/laszukdawid/recurrence-plot).

A sample row of the dataset looks like this, where the exceedance column holds the dependent variable:

year month day dayofyear tide tide_gtm dtide_1 dtide_2 PrecipSum6 Precip24 lograin3T wet3 lograin7T wet7 Wtemp_B rad solar_noon WDIR WSPD awind owind exceedance
2016 6 1 153 176.91 1.00 -0.01 -0.09 0.00 0.00 -1.52 0.00 0.35 1.00 19.90 0.00 1.00 0.00 0.00 -0.00 0.00 1

Exceedance is a binary variable (0 or 1). My plan is to use existing image classification approaches to classify the images. Does the use of recurrence plots in this case appear valid? Is this approach too far out? Perhaps another multivariate-to-image approach is warranted?

Attachment: index.png (3.17 KiB)

Re: Multivariate non-time series

David01 wrote: ↑Wed Feb 15, 2023 07:04 (quoted above)

Yes, this should work. In general, a multivariate time series can be used to construct a phase space trajectory (each variable would correspond to one dimension). This has been a common approach for many years. In your case, you would use this as a tool for ML-based image classification, where the image is simply the RP. There are already many publications on this topic.
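For what it's worth, here is a minimal NumPy sketch of the idea under discussion (independent of the GitHub library linked above): treat the row's numeric features as a short pseudo time series and build its thresholded pairwise-distance matrix, which is the recurrence-plot image. The feature values below are the non-date columns of the sample row (exceedance excluded); the eps threshold is an arbitrary illustrative choice, and in practice you would normalise the features first, since they live on very different scales.

```python
import numpy as np

def recurrence_plot(x, eps=None):
    """Pairwise-distance matrix D[i, j] = |x[i] - x[j]| of a 1-D
    feature vector; if eps is given, threshold it into a binary
    recurrence matrix R[i, j] = 1 where D[i, j] < eps."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])   # broadcasted pairwise distances
    return (d < eps).astype(int) if eps is not None else d

# Feature values from the sample row (tide ... owind), date columns
# and the dependent variable excluded.
row = [176.91, 1.00, -0.01, -0.09, 0.00, 0.00, -1.52, 0.00,
       0.35, 1.00, 19.90, 0.00, 1.00, 0.00, 0.00, -0.00, 0.00]
rp = recurrence_plot(row, eps=0.5)   # 17x17 binary image
```

The resulting binary matrix (or grey-scale, if eps is omitted) can then be fed to any standard image classifier.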
Annotations on a Lecture of Black Hole Dynamics for Readers of the Glasgow Philosophical Society - Capital Today

Dr Jonathan Kenigson, FRSA

A black hole is one of the most mysterious and powerful objects in the universe. It is a region of space in which the gravitational pull is so powerful that nothing, not even light, can escape it. The existence of black holes was first predicted in 1916, when Karl Schwarzschild found an exact solution to Einstein's theory of General Relativity. Since then, astronomers have been able to observe and study these mysterious objects. Black holes come in a variety of sizes and masses. The smallest ones are called stellar black holes, and the largest ones are called supermassive black holes. These supermassive black holes can have masses millions or even billions of times greater than the mass of the Sun. Black holes are immensely powerful, and they can have a huge impact on their surroundings. They can emit intense radiation, draw in nearby material and even distort the very fabric of space and time. They truly are one of nature's most incredible and awe-inspiring phenomena.

The Schwarzschild radius, or S-radius, is an important concept in physics. It is the radius of the event horizon of a non-rotating black hole: the distance within which the gravity of the black hole is so great that not even light can escape. In other words, anything that comes within the Schwarzschild radius will be pulled into the black hole. This concept is often used to explain the behavior of black holes and is a key element in understanding the physics of the universe. By studying the S-radius, scientists can learn more about the nature of black holes and how they interact with the rest of the universe. It is a fascinating and important concept, and one that will likely be studied for centuries to come.

The No-Hair Theorem states that black holes are fully described by only three external parameters — mass, spin, and electric charge.
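As a brief aside on the Schwarzschild radius discussed above: for a non-rotating mass M it takes the simple closed form r_s = 2GM/c². A short Python sketch (constants rounded to four figures, so the result is approximate):

```python
# Schwarzschild radius r_s = 2*G*M/c^2 of a non-rotating mass.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light in vacuum, m/s

def schwarzschild_radius(mass_kg):
    """Radius (in metres) of the event horizon a mass would have
    if compressed into a non-rotating black hole."""
    return 2.0 * G * mass_kg / C**2

M_SUN = 1.989e30  # mass of the Sun, kg
print(schwarzschild_radius(M_SUN))  # roughly 2.95e3 m, i.e. about 3 km
```

Compressing the Sun inside roughly 3 km would turn it into a black hole, which illustrates why only extremely dense objects form event horizons.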
This means that any information about the material that formed the black hole, such as its density or magnetic field, is lost in the singularity. This theorem is based on the belief that the laws of physics governing the universe are the same everywhere, regardless of the environment. This means that the laws of gravity and electromagnetism must hold true in the extreme environment of a black hole. The theorem suggests that the only way for a black hole to maintain an external field is to have a charge or angular momentum. Without either of these "hairs" the black hole is simply an empty void. This theorem holds true for black holes anywhere in the universe, from the smallest to the largest.

Quantum gravity is an area of study that attempts to merge the theories of quantum mechanics and general relativity. In simple terms, it is the study of how gravity works at the subatomic level. Scientists are still working to develop a full and complete understanding of quantum gravity, which could provide insights into the nature of the universe and the origin of space and time. The most popular theories of quantum gravity include string theory, loop quantum gravity, and causal dynamical triangulations. These theories all attempt to explain the behavior of matter and energy at the smallest scales, as well as explain how gravity behaves at those scales. If scientists can crack the code of quantum gravity, they could gain a greater understanding of the universe and perform groundbreaking experiments that could change the way we view the world.

String theory is an important part of modern physics. It is a theoretical framework that attempts to explain the fundamental nature of matter and the universe. The basic idea of string theory is that the most fundamental objects in the universe are strings, rather than particles. These strings can be either open or closed and can vibrate at different frequencies to produce different particles.
This has led to the development of the so-called "superstring theory," which attempts to unify all the fundamental forces of nature into a single framework. String theory has been a source of much excitement and debate in the physics community in recent years, and it has the potential to revolutionize our understanding of the universe.

The First Superstring Revolution of the mid-1980s transformed theoretical physics and our understanding of the universe. It was a shift away from the existing particle-based theories towards a larger and more unified string theory. This theory suggests that all particles and forces are composed of tiny strings, which vibrate in different ways to create different phenomena. This theory, while still in its early stages, is a widely studied candidate description of physical reality. The implications of this theory are significant, as it allows us to study the universe, as opposed to just its individual parts. It is also thought to unify many of the different theories of physics and provide a unified view of the universe, which will help us to understand the universe in a more complete way.

The Second Superstring Revolution of the mid-1990s was a breakthrough in physics. It was a revolution in the way physicists thought about the universe and opened the path to a new era of exploration and discovery. This revolution was based on a single theoretical idea—string theory—which posited that all matter is composed of tiny strings vibrating at different frequencies. This idea would eventually become one of the most important foundations of modern physics. The revolution also ushered in the era of superstring theory, which aims to unify all the forces of nature, including gravity and the strong and weak forces. Such a unification of the forces would also be a fundamental breakthrough for quantum gravity, which is still one of the major open questions in physics.
The Second Superstring Revolution also opened the door to many other theories, such as the multiverse, extra dimensions, and supersymmetry. These theories have become the subject of much research and debate among physicists today.

Brane String Theory is one of the most exciting and complex theories in modern physics. The theory suggests that the universe is composed of tiny strings that vibrate in multiple dimensions. These strings are thought to be responsible for the properties of the four fundamental forces. According to the theory, the strings are held together by "branes" which act as membranes that separate the dimensions. Branes are also thought to give rise to the particles and fields observed in our universe. This theory has provided an explanation for the physical behavior of particles, gravity, and cosmic expansion. It has also been used to model the early universe. Although Brane String Theory has not yet been experimentally verified, it does offer a promising new direction for understanding the universe.

One of the most important aspects of String Theory is the concept of Dark Matter and Dark Energy. Dark Matter is a form of matter that can neither be seen nor felt, and yet it is believed to make up about 85% of the matter in the universe. Dark Energy is an unknown form of energy that is believed to be responsible for the accelerating expansion of the universe. Understanding the behavior of Dark Matter and Dark Energy is an important part of understanding the universe, and String Theory is one of the main tools physicists use to study them.

Fuzzball String Theory is a relatively new concept in theoretical physics that attempts to explain how black holes work. The idea is that a black hole is made up of microscopic "fuzzballs" that are connected by strings of energy. Each fuzzball is like a tiny, three-dimensional universe, and the strings of energy connect them in a larger, fourth-dimensional universe.
This theory offers an alternative to traditional black hole models, which assume that matter is crushed into a single point in space. Fuzzball String Theory suggests that matter is instead spread out over many fuzzballs, which are connected by strings of energy. The fuzzball paradigm may resolve some outstanding claims in Information Geometry.

The Cosmic Censorship Hypothesis is a principle proposed by Roger Penrose in 1969, suggesting that singularities in space-time must be hidden from view by event horizons. This means that singularities, which are regions of infinite curvature and density, are effectively cut off from the outside universe by their own gravity. This hypothesis has been hotly debated in the scientific community for decades, as it has profound implications for our understanding of the universe. It suggests, for example, that black holes are not only possible but that they are also necessary in order to keep singularities hidden. It also implies that the universe is not a closed system, as some physicists have argued, but rather an open one in which singularities can exist but remain unobservable. While the Cosmic Censorship Hypothesis has yet to be proven, it has been widely accepted as a credible theory, and it continues to shape our understanding of the universe.

The Fuzzball Cosmic Censorship Hypothesis is an important conjecture in theoretical physics. It arose from generalizations of Hawking's work and is based on the principles of String Theory. The hypothesis states that when a black hole collapses, it is replaced by an object known as a fuzzball that is much larger than the black hole itself. In this way, the information that was originally contained within the black hole remains intact. The Fuzzball Cosmic Censorship Hypothesis has been used to explain why the universe is expanding and why gravity remains constant despite the presence of strong gravitational fields.
Moreover, it has been used to explain the behavior of quasars, supernovas, and other phenomena. The Fuzzball Cosmic Censorship Hypothesis is still under investigation, but it has so far provided a useful framework for understanding the universe. Much further work will be needed to vindicate this hypothesis experimentally and mathematically.

Sources and Further Reading

Akbar, M., and Rong-Gen Cai. "Thermodynamic behavior of the Friedmann equation at the apparent horizon of the FRW universe." Physical Review D 75.8 (2007): 084003.
Cai, Rong-Gen, and Sang Pyo Kim. "First law of thermodynamics and Friedmann equations of Friedmann-Robertson-Walker universe." Journal of High Energy Physics 2005.02 (2005): 050.
Chen, Chaomei. "Searching for intellectual turning points: Progressive knowledge domain visualization." Proceedings of the National Academy of Sciences 101.suppl_1 (2004): 5303-5310.
Chen, Chaomei, and Jasna Kuljis. "The rising landscape: A visual exploration of superstring revolutions in physics." Journal of the American Society for Information Science and Technology 54.5 (2003): 435-446.
Chen, Weihuan, Shiing-shen Chern, and Kai S. Lam. Lectures on Differential Geometry. Vol. 1. World Scientific Publishing Company, 1999.
Cicoli, Michele, et al. "Fuzzy Dark Matter candidates from string theory." Journal of High Energy Physics 2022.5 (2022): 1-52.
Gibbons, Gary W. "Anti-de-Sitter spacetime and its uses." Mathematical and Quantum Aspects of Relativity and Cosmology. Springer, Berlin, Heidelberg, 2000. 102-142.
Hawking, Stephen W., and Don N. Page. "Thermodynamics of black holes in anti-de Sitter space." Communications in Mathematical Physics 87.4 (1983): 577-588.
Isham, Chris J. Modern Differential Geometry for Physicists. Vol. 61. World Scientific Publishing Company, 1999.
Knudsen, Jens M., and Poul G. Hjorth. Elements of Newtonian Mechanics: Including Nonlinear Dynamics. Springer Science & Business Media, 2002.
Lee, John M. Riemannian Manifolds: An Introduction to Curvature. Vol. 176. Springer Science & Business Media, 2006.
Martin, Daniel. Manifold Theory: An Introduction for Mathematical Physicists. Elsevier, 2002.
Martinez, Cristian, Claudio Teitelboim, and Jorge Zanelli. "Charged rotating black hole in three spacetime dimensions." Physical Review D 61.10 (2000): 104013.
Rudolph, Gerd, and Matthias Schmidt. Differential Geometry and Mathematical Physics. Springer, 2012.
Schwarz, John H. "Status of superstring and M-theory." International Journal of Modern Physics A 25.25 (2010): 4703-4725.
Shapiro, Stuart L., and Saul A. Teukolsky. "Formation of naked singularities: the violation of cosmic censorship." Physical Review Letters 66.8 (1991): 994.
Skenderis, Kostas, and Marika Taylor. "The fuzzball proposal for black holes." Physics Reports 467.4-5 (2008): 117-171.
Spradlin, Marcus, Andrew Strominger, and Anastasia Volovich. "De Sitter space." Unity from Duality: Gravity, Gauge Theory and Strings. Springer, Berlin, Heidelberg, 2002. 423-453.
A Formalization of Set Theory without Variables

Alfred Tarski and Steven Givant. Colloquium Publications, Volume 41; 1987; 318 pp. MSC: Primary 03. Softcover ISBN: 978-0-8218-1041-5; eBook ISBN: 978-1-4704-3187-7.

Completed in 1983, this work culminates nearly half a century of the late Alfred Tarski's foundational studies in logic, mathematics, and the philosophy of science. Written in collaboration with Steven Givant, the book appeals to a very broad audience, and requires only a familiarity with first-order logic. It is of great interest to logicians and mathematicians interested in the foundations of mathematics, but also to philosophers interested in logic, semantics, algebraic logic, or the methodology of the deductive sciences, and to computer scientists interested in developing very simple computer languages rich enough for mathematical and scientific applications.
The authors show that set theory and number theory can be developed within the framework of a new, different, and simple equational formalism, closely related to the formalism of the theory of relation algebras. There are no variables, quantifiers, or sentential connectives. Predicates are constructed from two atomic binary predicates (which denote the relations of identity and set-theoretic membership) by repeated applications of four operators that are analogues of the well-known operations of relative product, conversion, Boolean addition, and complementation. All mathematical statements are expressed as equations between predicates. There are ten logical axiom schemata and just one rule of inference: the one of replacing equals by equals, familiar from high school algebra.

Though such a simple formalism may appear limited in its powers of expression and proof, this book proves quite the opposite. The authors show that it provides a framework for the formalization of practically all known systems of set theory, and hence for the development of all classical mathematics. The book contains numerous applications of the main results to diverse areas of foundational research: propositional logic; semantics; first-order logics with finitely many variables; definability and axiomatizability questions in set theory, Peano arithmetic, and real number theory; representation and decision problems in the theory of relation algebras; and decision problems in equational logic.

Chapters:
• Chapter 1. The formalism $\mathcal L$ of predicate logic
• Chapter 2. The formalism $\mathcal L^+$, a definitional extension of $\mathcal L$
• Chapter 3. The formalism $\mathcal L^+$ without variables and the problem of its equipollence with $\mathcal L$
• Chapter 4. The relative equipollence of $\mathcal L$ and $\mathcal L^+$, and the formalization of set theory in $\mathcal L^\times$
• Chapter 5. Some improvements of the equipollence results
• Chapter 6. Implications of the main results for semantic and axiomatic foundations of set theory
• Chapter 7. Extension of results to arbitrary formalisms of predicate logic, and applications to the formalization of the arithmetics of natural and real numbers
• Chapter 8. Applications to relation algebras and to varieties of algebras
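The four predicate-forming operators the description mentions (relative product, conversion, Boolean addition, and complementation) can be illustrated concretely on finite binary relations. The sketch below is our own Python model over a three-element universe, not code or notation from the book; all function names are ours:

```python
# Binary relations over a finite universe, modeled as sets of ordered pairs.
# The four operators mirror relative product, conversion, Boolean addition
# (union), and complementation from the theory of relation algebras.

U = {0, 1, 2}                      # a small universe for illustration

def rel_product(R, S):
    """Relative product R;S: relate a to c whenever some b links them."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

def converse(R):
    """Conversion: swap the components of every pair."""
    return {(b, a) for (a, b) in R}

def union(R, S):
    """Boolean addition of two relations."""
    return R | S

def complement(R):
    """Complement relative to the full relation U x U."""
    full = {(a, b) for a in U for b in U}
    return full - R

identity = {(a, a) for a in U}     # the identity relation on U

R = {(0, 1), (1, 2)}
assert rel_product(R, R) == {(0, 2)}   # 0 -> 1 -> 2
assert converse(converse(R)) == R      # conversion is an involution
assert rel_product(identity, R) == R   # identity is a unit for ;
```

In the book's equational spirit, statements about relations become identities among such terms, with no quantified variables in sight.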
Ring -- from Wolfram MathWorld

A ring in the mathematical sense is a set S together with two binary operators + and * (commonly interpreted as addition and multiplication, respectively) satisfying the following conditions:

1. Additive associativity: For all a, b, c in S, (a + b) + c = a + (b + c),
2. Additive commutativity: For all a, b in S, a + b = b + a,
3. Additive identity: There exists an element 0 in S such that for all a in S, 0 + a = a + 0 = a,
4. Additive inverse: For every a in S there exists an element -a in S such that a + (-a) = (-a) + a = 0,
5. Left and right distributivity: For all a, b, c in S, a*(b + c) = (a*b) + (a*c) and (b + c)*a = (b*a) + (c*a),
6. Multiplicative associativity: For all a, b, c in S, (a*b)*c = a*(b*c) (a ring satisfying this property is sometimes explicitly termed an associative ring).

Conditions 1-5 are always required. Though non-associative rings exist, virtually all texts also require condition 6 (Itô 1986, pp. 1369-1372; p. 418; Zwillinger 1995, pp. 141-143; Harris and Stocker 1998; Knuth 1998; Korn and Korn 2000; Bronshtein and Semendyayev 2004). Rings may also satisfy various optional conditions:

7. Multiplicative commutativity: For all a, b in S, a*b = b*a (a ring satisfying this property is termed a commutative ring),
8. Multiplicative identity: There exists an element 1 in S such that for all a in S, 1*a = a*1 = a (a ring satisfying this property is termed a unit ring, or sometimes a "ring with identity"),
9. Multiplicative inverse: For each nonzero a in S, there exists an element a^(-1) in S such that a*a^(-1) = a^(-1)*a = 1, where 1 is the identity element.

A ring satisfying all additional properties 6-9 is called a field, whereas one satisfying only additional properties 6, 8, and 9 is called a division algebra (or skew field). Some authors depart from the normal convention and require (under their definition) a ring to include additional properties. For example, Birkhoff and Mac Lane (1996) define a ring to have a multiplicative identity (i.e., property 8).

Here are a number of examples of rings lacking particular conditions:

1. Without multiplicative associativity (sometimes also called nonassociative algebras): the octonions (see also OEIS A037292),
2. Without multiplicative commutativity: real-valued matrices, quaternions,
3. Without multiplicative identity: the even-valued integers,
4. Without multiplicative inverse: the integers.

The word ring is short for the German word 'Zahlring' (number ring). The French word for a ring is anneau, and the modern German word is Ring, both meaning (not so surprisingly) "ring." Fraenkel (1914) gave the first abstract definition of the ring, although this work did not have much impact. The term was introduced by Hilbert to describe rings of algebraic integers: by successively multiplying a newly adjoined element, the powers eventually loop around to something already generated, so the set closes up "like a ring." All algebraic numbers have this property.

A ring must contain at least one element, but need not contain a multiplicative identity or be commutative. The numbers of finite rings of n elements for n = 1, 2, ... are given by OEIS A027623 and A037234 (Fletcher 1980). If p is prime, there are two rings of size p.

A ring that is commutative under multiplication, has a unit element, and has no divisors of zero is called an integral domain. A ring whose nonzero elements form a commutative multiplicative group is called a field. The simplest rings are the integers Z, polynomials R[x] and R[x, y] in one and two variables, and square n×n real matrices.

Rings which have been investigated and found to be of interest are usually named after one or more of their investigators. This practice unfortunately leads to names which give very little insight into the relevant properties of the associated rings.

Renteln and Dundes (2005) give the following (bad) mathematical joke about rings:

Q: What's an Abelian group under addition, closed, associative, distributive, and bears a curse?
A: The Ring of the Nibelung.
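The ring axioms lend themselves to brute-force verification on small finite sets. The sketch below is our own helper, not anything from MathWorld; it checks axioms 1-6 for residues modulo 8 and confirms that the even residues form a ring with no multiplicative identity, a finite instance of example 3 above:

```python
from itertools import product

def is_ring(elems, add, mul):
    """Brute-force check of ring axioms 1-6 on a finite set."""
    E = list(elems)
    # closure under both operations
    if any(add(a, b) not in elems or mul(a, b) not in elems
           for a, b in product(E, E)):
        return False
    # associativity of + and *, and left/right distributivity
    for a, b, c in product(E, E, E):
        if add(add(a, b), c) != add(a, add(b, c)): return False
        if mul(mul(a, b), c) != mul(a, mul(b, c)): return False
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)): return False
        if mul(add(b, c), a) != add(mul(b, a), mul(c, a)): return False
    # commutativity of +
    if any(add(a, b) != add(b, a) for a, b in product(E, E)):
        return False
    # additive identity and additive inverses
    zero = next((z for z in E if all(add(z, a) == a for a in E)), None)
    if zero is None:
        return False
    return all(any(add(a, b) == zero for b in E) for a in E)

mod8 = lambda f: (lambda a, b: f(a, b) % 8)
evens = {0, 2, 4, 6}
assert is_ring(evens, mod8(int.__add__), mod8(int.__mul__))
# ...but the even residues have no multiplicative identity:
assert not any(all((e * a) % 8 == a for a in evens) for e in evens)
```

The same checker returns True for all of Z/8Z, which does have the identity 1; dropping elements can lose optional properties (7-9) while preserving the mandatory ones (1-6).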
Radial and Angular Parts of Atomic Orbitals

The solutions to Schrödinger's equation for atomic orbitals can be expressed in terms of spherical coordinates: \(r\), \(\theta\), and \(\phi\). For a point \((r, \theta, \phi)\), the variable \(r\) represents the distance from the center of the nucleus, \(\theta\) represents the angle to the positive z-axis, and \(\phi\) represents the angle to the positive x-axis in the xy-plane.

Separation of Variables

Because the atomic orbitals are described with a time-independent potential V, Schrödinger's equation can be solved using the technique of separation of variables, so that any wavefunction has the form

\(\Psi(r,\theta,\phi) = R(r) Y(\theta,\phi)\)

where \(R(r)\) is the radial wavefunction and \(Y(\theta,\phi)\) is the angular wavefunction:

\(Y(\theta,\phi) = \Theta(\theta) \; \Phi(\phi)\)

Each set of quantum numbers, (\(n\), \(l\), \(m_l\)), describes a different wave function. The radial wave function is only dependent on \(n\) and \(l\), while the angular wavefunction is only dependent on \(l\) and \(m_l\). So a particular orbital solution can be written as

\(\Psi_{n,l,m_l}(r,\theta,\phi) = R_{n,l}(r) Y_{l,m_l}(\theta,\phi)\)

with

\(n = 1, 2, 3, \ldots\)
\(l = 0, 1, \ldots, n-1\)
\(m_l = -l, \ldots, -1, 0, +1, \ldots, l\)

A wave function node occurs at points where the wave function is zero and changes signs. The electron has zero probability of being located at a node.
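The constraints on \(n\), \(l\), and \(m_l\) can be enumerated directly. This short illustrative sketch (the helper name is ours) lists the allowed \((n, l, m_l)\) triples and checks the familiar consequence that shell \(n\) contains \(n^2\) orbitals:

```python
def orbitals(n_max):
    """Enumerate the allowed (n, l, m_l) quantum-number triples."""
    return [(n, l, m)
            for n in range(1, n_max + 1)
            for l in range(0, n)              # l = 0, 1, ..., n-1
            for m in range(-l, l + 1)]        # m_l = -l, ..., +l

for n in (1, 2, 3):
    count = sum(1 for (nn, l, m) in orbitals(n) if nn == n)
    assert count == n ** 2                    # shell n holds n^2 orbitals

# e.g. the n = 2 shell: one 2s orbital (l = 0) and three 2p orbitals (l = 1)
assert [(l, m) for (n, l, m) in orbitals(2) if n == 2] == \
       [(0, 0), (1, -1), (1, 0), (1, 1)]
```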
Because of the separation of variables for an electron orbital, the wave function will be zero when any one of its component functions is zero. When \(R(r)\) is zero, the node consists of a sphere. When \(\Theta(\theta)\) is zero, the node consists of a cone with the z-axis as its axis and apex at the origin. In the special case \(\Theta(\pi/2)\) = 0, the cone is flattened to be the x-y plane. When \(\Phi(\phi)\) is zero, the node consists of a plane through the z-axis. Bonding and sign of wave function The shape and extent of an orbital only depends on the square of the magnitude of the wave function. However, when considering how bonding between atoms might take place, the signs of the wave functions are important. As a general rule a bond is stronger, i.e. it has lower energy, when the orbitals of the shared electrons have their wavefunctions match positive to positive and negative to negative. Another way of expressing this is that the bond is stronger when the wave functions constructively interfere with each other. When the orbitals overlap so that the wave functions match positive to negative, the bond will be weaker or may not form at all. Radial wavefunctions The radial wavefunctions are of the general form: \(R(r) = N \; p(r) \; e^{-kr}\) • \(N\) is a positive normalizing constant • \(p(r)\) is a polynomial in \(r\) • \(k\) is a positive constant The exponential factor is always positive, so the nodes and sign of \(R(r)\) depends on the behavior of \(p(r)\). Because the exponential factor has a negative sign in the exponent, \(R(r)\) will approach 0 as \(r\) goes to infinity. \(\Psi^2\) quantifies the probability of the electron being at a particular point. The probability distribution, \(P(r)\) is the probability that the electron will be at any point that is \(r\) distance from the nucleus. 
For any type of orbital, since \(\Psi_{n,l,m_l}\) is separable into radial and angular components that are each appropriately normalized, and a sphere of radius r has area proportional to \(r^2\), we have:

\(P(r) = r^2R^2(r)\)

Angular wavefunctions

The angular wave function \(Y(\theta,\phi)\) does much to give an orbital its distinctive shape. \(Y(\theta,\phi)\) is typically normalized so that the integral of \(Y^2(\theta,\phi)\) over the unit sphere is equal to one. In this case, \(Y^2(\theta,\phi)\) serves as a probability function. The probability function can be interpreted as the probability that the electron will be found on the ray emitting from the origin that is at angles \((\theta,\phi)\) from the axes. The probability function can also be interpreted as the probability distribution of the electron being at position \((\theta,\phi)\) on a sphere of radius r, given that it is r distance from the nucleus. The angular wave functions for a hydrogen atom, \(Y_{l,m_l}(\theta,\phi)\), are also the wavefunction solutions to Schrödinger's equation for a rigid rotor consisting of two bodies, for example a diatomic molecule.

Hydrogen Atom

The simplest case to consider is the hydrogen atom, with one positively charged proton in the nucleus and just one negatively charged electron orbiting around the nucleus. It is important to understand the orbitals of hydrogen, not only because hydrogen is an important element, but also because they serve as building blocks for understanding the orbitals of other atoms.

s Orbitals

The hydrogen s orbitals correspond to \(l=0\) and only allow \(m_l = 0\). In this case, the solution for the angular wavefunction, \(Y_{0,0}(\theta,\phi)\), is a constant. As a result, the \(\Psi_{n,0,0}(r,\theta,\phi)\) wavefunctions only depend on \(r\) and the s orbitals are all spherical in shape.
Because \(\Psi_{n,0,0}\) depends only on r, the probability distribution function of the electron is

\(\Psi^2_{n,0,0}(r,\theta,\phi) = \dfrac{1}{4\pi}R^2_{n,0}(r)\)

Graphs of the three functions, \(R(r)\) in green, \(R^2(r)\) in purple and \(P(r)\) in orange, are given below for n = 1, 2, and 3. The graphs of the functions have been variously scaled along the vertical axis to allow an easy comparison of their shapes and where they are zero, positive and negative. The vertical scales for different functions, either within or between diagrams, are not necessarily the same.

Figure \(\PageIndex{1}\): 1s Orbital radial diagram
Figure \(\PageIndex{2}\): 2s Orbital radial diagram
Figure \(\PageIndex{3}\): 3s Orbital radial diagram

In addition, a cross-section contour diagram is given for each of the three orbitals. These contour diagrams indicate the physical shape and size of the orbitals and where the probabilities are concentrated. An electron will be in the most-likely-10% (purple) regions 10% of the time, and it will be in the most-likely-50% regions (including the most-likely-10% regions; dark blue and purple) 50% of the time. Nodes are shown in orange in the contour diagrams. In all of these contour diagrams, the x-axis is horizontal, the z-axis is vertical, and the y-axis comes out of the diagram. The actual 3-dimensional orbital shape is obtained by rotating the 2-dimensional cross-section about the axis of symmetry, which is shown as a blue dashed line. The contour diagrams also indicate, for regions that are separated by nodes, whether the wave function is positive (+) or negative (-) in that region. In order for the wave function to change sign, one must cross a node.

Figure \(\PageIndex{4}\): 1s Orbital contour diagram
Figure \(\PageIndex{5}\): 2s Orbital contour diagram
Figure \(\PageIndex{6}\): 3s Orbital contour diagram

From these diagrams, we see that the 1s orbital does not have any nodes, the 2s orbital has one node, and the 3s orbital has two nodes.
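The relationship between \(R^2(r)\) and \(P(r) = r^2R^2(r)\) can be checked numerically for the 1s orbital. Assuming the standard unnormalized form \(R_{1s}(r) \propto e^{-r/a_0}\) and working in units of \(a_0\), the sketch below confirms that \(R^2\) is largest at the nucleus while \(P(r)\) peaks at the Bohr radius:

```python
import math

def R2_1s(r):
    """Square of the 1s radial wavefunction, in units of a0 (unnormalized)."""
    return math.exp(-2.0 * r)

def P_1s(r):
    """1s radial probability distribution P(r) = r^2 R^2(r)."""
    return r * r * R2_1s(r)

rs = [i / 1000.0 for i in range(1, 5001)]      # scan r over (0, 5] a0
r_peak = max(rs, key=P_1s)
assert abs(r_peak - 1.0) < 1e-3                # P(r) peaks at r = a0
assert R2_1s(0.0) > R2_1s(0.5) > R2_1s(1.0)    # R^2 is largest at the nucleus
```

The grid search reproduces what calculus gives directly: \(dP/dr = (2r - 2r^2)e^{-2r}\) vanishes at \(r = a_0\).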
Because for the s orbitals \(\Psi^2 \propto R^2(r)\), it is interesting to compare the \(R^2(r)\) graphs and the \(P(r)\) graphs. By comparing maximum values, in the 1s orbital, the \(R^2(r)\) graph shows that the most likely place for the electron is at the nucleus, but the \(P(r)\) graph shows that the most likely radius for the electron is at \(a_0\), the Bohr radius. Similarly, for the other s orbitals, the one place the electron is most likely to be is at the nucleus, but the most likely radius for the electron to be at is outside the outermost node. Something that is not readily apparent from these diagrams is that the average radius for the 1s, 2s, and 3s orbitals is \(1.5\,a_0\), \(6\,a_0\), and \(13.5\,a_0\), forming ratios of 1:4:9. In other words, the average radius is proportional to \(n^2\).

p Orbitals

The hydrogen p orbitals correspond to \(l = 1\) when \(n \geq 2\) and allow \(m_l = -1, 0,\) or \(+1\). The diagrams below describe the wave function for \(m_l = 0\). The angular wave function \(Y_{1,0}(\theta,\phi) = \cos\theta\) only depends on \(\theta\). Below, the angular wavefunction is shown with a node at \(\theta = \pi/2\). The radial wavefunctions and orbital contour diagrams for the p orbitals with n = 2 and 3 are:

Figure \(\PageIndex{7}\): 2p Orbitals radial diagram
Figure \(\PageIndex{8}\): 3p Orbitals radial diagram
Figure \(\PageIndex{9}\): 2p Orbitals contour diagram
Figure \(\PageIndex{10}\): 3p Orbitals contour diagram

As in the case of the s orbitals, the actual 3-dimensional p orbital shape is obtained by rotating the 2-dimensional cross-sections about the axis of symmetry, which is shown as a blue dashed line. The p orbitals display their distinctive dumbbell shape. The angular wave function creates a nodal plane (the horizontal line in the cross-section diagram) in the x-y plane. In addition, the 3p radial wavefunction creates a spherical node (the circular node in the cross-section diagram) at \(r = 6\,a_0\). For \(m_l = 0\), the axis of symmetry is along the z axis.
The wavefunctions for \(m_l = +1\) and \(-1\) can be represented in different ways. For ease of computation, they are often represented as real-valued functions. In this case, the orbitals have the same shape and size as for \(m_l = 0\), except that they are oriented in a different direction: the axis of symmetry is along the x axis with the nodal plane in the y-z plane, or the axis of symmetry is along the y axis with the nodal plane in the x-z plane. These correspond to wavefunctions that are the sum and the difference of the two \(m_l = +1\) and \(-1\) wavefunctions:

\[\psi_{x} = \psi_{m_l=+1} + \psi_{m_l=-1} \notag \]
\[\psi_{y} = \psi_{m_l=+1} - \psi_{m_l=-1} \notag \]

The \(\psi_{z}\) wavefunction has a magnetic quantum number of \(m_l = 0\), but \(\psi_{x}\) and \(\psi_{y}\) are mixtures of the wavefunctions corresponding to \(m_l = +1\) and \(-1\) and do not have unique magnetic quantum numbers.

d Orbitals

The hydrogen d orbitals correspond to \(l = 2\) when \(n = 3\) and allow \(m_l = -2, -1, 0, +1,\) or \(+2\). There are two basic shapes of d orbitals, depending on the form of the angular wave function. The first shape of a d orbital corresponds to \(m_l = 0\). In this case, \(Y_{2,0}(\theta,\phi)\) only depends on \(\theta\). The graphs of the angular wavefunction, and for \(n = 3\), the radial wave function and orbital contour diagram are as follows:

Figure \(\PageIndex{11}\): 3d orbital, \(m_l = 0\): Radial Wavefunction
Figure \(\PageIndex{12}\): 3d orbital, \(m_l = 0\): Contour Diagram

As in the case of the s and p orbitals, the actual 3-dimensional d orbital shape is obtained by rotating the 2-dimensional cross-section about the axis of symmetry, which is shown as a blue dashed line.

This first d orbital shape displays a dumbbell shape along the z axis, but it is surrounded in the middle by a doughnut (corresponding to the regions where the wavefunction is negative). The angular wave function creates nodes which are cones that open at about 54.7 degrees to the z-axis.
At n = 3, the radial wave function does not have any nodes. The second d orbital shape is illustrated for \(m_l = +1\) and n = 3. In this case, \(Y_{2,1}(\theta,\phi)\) depends on both \(\theta\) and \(\phi\), and can be shown as a surface curving over and under a rectangular domain. As a result, separate diagrams are shown for \(Y_{2,1}(\theta,\phi)\) on the left and \(Y^2_{2,1}(\theta,\phi)\) on the right.

Figure \(\PageIndex{13}\): 3d orbital, \(m_l = +1\)
Figure \(\PageIndex{14}\): 3d orbital, \(m_l = +1\): \(Y_{2,1}(\theta,\phi)\)
Figure \(\PageIndex{15}\): 3d orbital, \(m_l = +1\): \(Y^2_{2,1}(\theta,\phi)\)

Unlike previous orbital diagrams, this contour diagram indicates more than one axis of symmetry. Each axis of symmetry is at 45 degrees to the x- and z-axes. Each axis of symmetry only applies to the region surrounding it and bounded by nodes. Each of the four arms of the contour is rotated about its axis of symmetry to produce the 3-dimensional shape. However, the rotation is a non-standard rotation, producing only radial symmetry about the axis, not circular symmetry as was the case with other orbitals. This produces a double dumbbell shape, with nodes in the x-y plane and the y-z plane.

Similar to the p orbitals, the wavefunctions for \(m_l = +2, -1,\) and \(-2\) can be represented as real-valued functions that have the same shape as for \(m_l = +1\), just oriented in different directions. In two cases, the shape is re-oriented so that the axes of symmetry are in the x-y plane or in the z-y plane. In both of those cases, the axes of symmetry are at 45 degrees to their respective coordinate axes, just as with \(m_l = +1\). For the third and final case, the orbital shape is re-oriented so the axes of symmetry are in the x-y plane, but also lying along the x and y axes.
It is often the case that the orbitals in the d subshell corresponding to the magnetic quantum numbers \(m_l = \pm 1\) and \(m_l = \pm 2\) are, as for the \(\psi_{x}\) and \(\psi_{y}\) orbitals, represented as sums and differences of the wavefunctions corresponding to \(m_l = \pm 1\) and \(m_l = \pm 2\). This, as for the p orbitals, better represents the spatial orientation of bonds formed with these orbitals.

The Five Equivalent 3d Orbitals of the Hydrogen Atom. The surfaces shown enclose 90% of the total electron probability for the five hydrogen 3d orbitals. Four of the five 3d orbitals consist of four lobes arranged in a plane that is intersected by two perpendicular nodal planes. These four orbitals have the same shape but different orientations. The fifth 3d orbital, \(3d_{z^2}\), has a distinct shape even though it is mathematically equivalent to the others. The phase of the wave function for the different lobes is indicated by color: orange for positive and blue for negative. The orbitals \(d_{xz}\) and \(d_{yz}\) are sums and differences of the two orbitals with \(m_l = \pm 1\) and lie in the xz and yz planes. \(m_l = \pm 2\) similarly corresponds to \(d_{xy}\) and \(d_{x^2 - y^2}\); both lie in the xy plane. \(m_l = 0\) is the \(d_{z^2}\) orbital, which is oriented along the z-axis.

Hydrogenic Orbitals

Hydrogenic atoms are atoms that have only one electron orbiting the nucleus, even though the nucleus may have more than one proton and one or more neutrons. In this case, the electron has the same orbitals as the hydrogen atom, except that they are scaled by a factor of 1/Z, where Z is the atomic number of the atom, the number of protons in the nucleus. The increased number of positively charged protons shrinks the size of the orbitals. Thus, the same graphs for hydrogen above apply to hydrogenic atoms, except that instead of expressing the radius in units of \(a_0\), the radius is expressed in units of \(a_0/Z\). Correspondingly, the values have to be renormalized by a factor of \((Z/a_0)^{3/2}\).
So a He\(^+\) ion has orbitals that are the same shape but half the size of the corresponding hydrogen orbitals, and a Li\(^{2+}\) ion has orbitals that are the same shape but one third the size of the corresponding hydrogen orbitals.
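This 1/Z scaling is easy to make concrete. In the sketch below (our own helper; the Bohr radius is the standard constant, about 52.9 pm), the most probable 1s radius \(a_0/Z\) is computed for H, He\(^+\), and Li\(^{2+}\):

```python
A0_PM = 52.9177  # Bohr radius in picometers

def most_probable_1s_radius(Z):
    """Most probable 1s electron radius for a hydrogenic ion: a0 / Z."""
    return A0_PM / Z

r_H  = most_probable_1s_radius(1)   # hydrogen
r_He = most_probable_1s_radius(2)   # He+
r_Li = most_probable_1s_radius(3)   # Li2+

assert abs(r_He - r_H / 2) < 1e-12  # half the hydrogen radius
assert abs(r_Li - r_H / 3) < 1e-12  # one third of it
```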
What is quantum computing and its advantages

Before answering what a quantum computer is, consider a related question: what is a supercomputer? The name suggests a machine that can finish enormous calculations in seconds. A quantum computer goes further still. If you had to search for your phone number in a phone book covering the whole world, a supercomputer might grind through the database for a month, while a quantum computer could, in principle, finish the same search in around 20 minutes. That comparison gives a feel for the speed of a quantum computer. Let us look at what a quantum computer is and how it works.

What is a quantum computer?

A quantum computer performs calculations with quantum systems, such as individual atoms, ions, or superconducting circuits, rather than with conventional transistor-based chips. The computers we use today store information in bits, each saved as either 0 or 1. A quantum system, by contrast, can be in a superposition of 0 and 1 at the same time, which scientists have named the qubit.

What is a bit in the computer?

A bit is a binary digit: at any moment it holds exactly one of the two values 0 and 1, just as an electric switch is either on or off. Eight classical bits can represent any one of the 256 numbers between 0 and 255, but only one of them at a time. Eight qubits, however, can hold a superposition over all 256 of those numbers simultaneously, and a register of 100 qubits spans a state space of 2^100 dimensions, far more states than any classical machine could enumerate one by one.

What is a qubit in a quantum computer?
If we look at the physics, the state of such a particle is not fixed: it fluctuates continuously, so it cannot be pinned down to a single number. It can be 0 and 1 at the same time, which means it is not equivalent to a classical bit. Seeing this, scientists named it the qubit.

What is the difference between a bit and a qubit?

• A bit is a binary digit, which takes only one of the values 0 and 1 at a time.
• A qubit is a quantum unit of information, which can hold 0, or 1, or both (0,1) at once.

How does quantum computing work?

A quantum computer works on principles taken from nature, like the mango tree planted in your house, which bears fruit in every season by itself without being asked. In the same way, particles follow their natural cycles: a compass needle settles into a direction automatically, sometimes swinging one way and sometimes the other, and the particles we cannot see with our eyes likewise move between states. Because such a particle can be 1 or 0 simultaneously, it cannot be treated as equal to a bit; it is called a qubit, and that simultaneity is why its capacity exceeds that of a bit-based supercomputer.

Bit = takes (0) or (1)
Qubit = takes (0) or (1) or (0,1) together

What is the calculation speed of a quantum computer?

Quantum computers do not do linear calculations like today's computers. Suppose the Amazon company has to send its goods from Delhi to Mumbai to 5 addresses, and wants both the time and the distance to be minimal. Today's computers would calculate the route to each address one by one, while a quantum computer can weigh the candidate routes together and simply select the shortest answer, which saves a lot of time: it would take only about 12 minutes to find the answer, while a supercomputer would take around 2 years to do the same work.

What is the difference between supercomputers and quantum computers?

• Supercomputers use the binary number system of bits.
• Quantum computers use qubit systems based on quantum particles.
• A supercomputer takes one bit at a time, either 0 or 1.
• A quantum computer can take 0, or 1, or both (0,1) at once.
• The calculation speed of a supercomputer is lower than that of a quantum computer.
• Quantum computers are much more powerful than supercomputers.
• Supercomputers do not have any working memory of their own.
• Quantum computers have their own working memory, which makes them faster than supercomputers.

What is an example of quantum computing?

• IBM cloud-based quantum systems: around 20 quantum computers are used by IBM in its cloud-based applications, managing tens of millions of transactions a day, serving tens of millions of people at speeds beyond a supercomputer.

Advantages of quantum computing

Building a quantum computer takes very hard work, but IBM has nevertheless brought one to life through that effort. As noted above, a quantum computer is much faster and much more capable than our present computers. Ordinary computers usually use classical algorithms; quantum computers, by contrast, use quantum algorithms, which can work through a task or application in a few seconds and give you the output. You might feel it would be easy and economical to create all these algorithms, but it is not like that at all: it takes a great deal of hard work to create even one quantum algorithm, and a great deal of time as well. AT&T engineer Peter Shor created a quantum factorization algorithm, which breaks very large numbers down into their small prime factors.
The thing to note here is that the quantum computer is able to do this kind of unprecedented work by using quantum parallelism. By comparison, our ordinary computers would take about 10.1 billion years to do this type of factorization. In this post we have brought together everything related to quantum computing: what quantum computing is, the advantages of quantum computing, and how quantum computing works, so do read the post to the end.
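The 8-bit versus 8-qubit comparison above can be sketched in plain Python. This is a classical simulation, not real quantum hardware, and the variable names are my own: a classical 8-bit register holds exactly one of 256 values, while an 8-qubit register in equal superposition carries one amplitude for each of all 256 values at once.

```python
import math

N_BITS = 8

# A classical 8-bit register: exactly one value out of 2**8 = 256.
classical_register = 0b10110011          # one concrete number (179)

# A simulated 8-qubit register in equal superposition: one amplitude
# per basis state, all 256 of them populated at the same time.
n_states = 2 ** N_BITS                   # 256 basis states
amplitude = 1 / math.sqrt(n_states)      # equal weight for each state
state_vector = [amplitude] * n_states

# Squared amplitudes are probabilities and must sum to 1.
total_probability = sum(a * a for a in state_vector)

print(classical_register)    # a single value: 179
print(len(state_vector))     # 256 amplitudes tracked simultaneously
print(round(total_probability))  # 1
```

Note that the simulation needs memory exponential in the qubit count, which is exactly why classical machines cannot scale this and why quantum parallelism is attractive.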
Countif with dropdown multi-select column
I have a dropdown multi-select column with about 10 dropdown options from which to choose. So obviously I can choose more than one option in each cell! I'm trying to create a chart for my dashboard that shows how often each of the 10 options is chosen. So I've created a COUNTIF equation for each of the 10 dropdown options which references the dropdown multi-select column. It's working really well when there is only ONE option in a cell, but if I have more than one (since it's multi-select, I often do!), then the COUNTIF equation is not picking it up. Is there a way around that? Thanks!
• Use "Contains" in your formula: =COUNTIF(MultiSelect:MultiSelect, CONTAINS(Option@row, @cell))
• Thank you! My formula is a bit different (obviously wrong!) and doesn't work if I put CONTAINS in it: =COUNTIF({CDHE Consultation Program Range 5}, "SOGI"). This formula populated when I chose COUNTIF from the advanced options in the formula dropdown. I just chose which sheet to reference and put in "SOGI". CDHE Consultation Program is the sheet I'm referencing. Range 5 must be the column. "SOGI" is the value I want it to contain. Would you be able to send me the correct formula with COUNTIF & CONTAINS? Thank you.
• Hi @Maggie Lackey ,
Try this: =COUNTIF({CDHE Consultation Program Range 5}, CONTAINS("SOGI", @cell))
• Yes! I did actually figure that out earlier. It's perfect and I'm really excited!
• I am having a similar problem and the CONTAINS function did not fix it. I am trying to get a count of the number of tasks assigned to myself ("Valerie") within a multi-select dropdown that are not done (i.e. false in the checkbox column). Here is what I have so far. I have tried multiple different formula variations and cannot get it to work. I have a filter which works perfectly, so I know the count I am looking for, but it keeps returning a count of 0.
• Hi Valerie,
This is a shot in the dark, but try this:
=COUNTIFS([action item assigned to:]:[action item assigned to:], CONTAINS("Valerie", @cell), [action item done:]:[action item done:], 0)
Let me know if it works!
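The logic behind the fix in this thread can be mimicked in plain Python (a sketch with made-up sample data, not Smartsheet's actual engine): a plain equality test, like =COUNTIF(range, "SOGI"), only matches cells whose whole value is the option, while a CONTAINS-style membership test also catches cells with several selections.

```python
# Each cell of a multi-select column holds one or more chosen options.
column = [
    ["SOGI"],                # single selection -> both tests match
    ["SOGI", "Outreach"],    # multi selection  -> only CONTAINS matches
    ["Outreach"],
    ["Training", "SOGI"],
]

def countif_equals(cells, option):
    """Like =COUNTIF(range, "SOGI"): the whole cell must equal the option."""
    return sum(1 for cell in cells if cell == [option])

def countif_contains(cells, option):
    """Like =COUNTIF(range, CONTAINS("SOGI", @cell)): option anywhere in the cell."""
    return sum(1 for cell in cells if option in cell)

print(countif_equals(column, "SOGI"))    # 1 -- misses the multi-select cells
print(countif_contains(column, "SOGI"))  # 3 -- counts every cell containing SOGI
```

This is exactly the symptom described above: the equality form undercounts as soon as a cell carries more than one selection.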
Analysis of the “Sagnac experiment” – Physics.bg
This webpage is a part of the published article “The Complete Set of Proofs for the Invalidity of the Special Theory of Relativity” in the Journal of Modern and Applied Physics.
This is an analysis of the “Sagnac experiment” conducted by the French physicist Georges Sagnac in 1913. The analysis presented is based on classical mechanics and Galilean relativity, which are indisputably valid in our local time-spatial region “on the surface of the Earth”. The experiment demonstrates that, in relation to a system moving in stationary space, the speed of light differs depending on the speed and the direction of movement of the system in the stationary space. However, the Sagnac experiment is considered a paradox, because it demonstrates that the speed of light is not the same for all frames of reference – which is inconvenient for modern physics, because the special theory of relativity was created on the basis of the claim that “the speed of light is the same for all frames of reference”. As further proof of the authenticity of the presented analysis, the derivation of the equation that is often used in rotation analyses is shown.
Content of the website:
1. Explanation of the experiment in accordance with classical mechanics and Galilean relativity
1.1. Examination of the Sagnac experiment in the reference system related to the surrounding stationary space – in the “Disk-Centered Inertial coordinate system”
1.2. Examination of the Sagnac experiment in the frame of reference related to the spinning disk
2. Derivation of the equation, which is often used in rotation analyses
2.1. Analysis of one rotation cycle of the light beam “1” that travels in the direction of the disc rotation
2.2. Analysis of one rotation cycle of the light beam “2”, which travels in the opposite direction to the disk rotation
2.3. The results
3.
Conclusion Georges Sagnac, a French physicist, constructed a device “ring interferometer” (rotating interferometer with two light beams on a closed loop), also called the “Sagnac interferometer”. The interferometer consists of a light source, collimator (transforming light or other radiation from a point source into a parallel beam), beam-splitter (splitting the beam in two directions), photographic plate, and 4 mirrors of the interferometer, which are all mounted on a spinning disc (0.5m in diameter). In this way, they are all stationary with respect to the disc, but they are actually spinning in the stationary empty space – in the reference system related to the space itself. (Fig. 5.1). Fig. 5.1. Schematic representation of Sagnac interferometer Description of the experiment: A monochromatic light beam is split and the resulting two beams follow (reflected by the four mirrors) exactly the same path in the reference system related to the spinning disk. The trajectories of the two beams, however, are in opposite directions, which is actually the brilliant idea of the experiment of Georges Sagnac. The two recombined light beams (unified again after one full cycle), are then focused on a photographic plate, creating a fringe pattern (a series of bright and dark bands, caused by beams of light that are in phase or out of phase with one another), permitting high-accuracy measurement of the interference fringe displacement, as Georges Sagnac described in his article titled “On the proof of the reality of the luminiferous aether by the experiment with a rotating interferometer” (Sagnac, 1913). The idea is to demonstrate the different speeds of the two light beams in the frame of reference related to the spinning disk. In this frame of reference, the speed of the beam, moving in the direction of rotation of the disk decreases, and the speed of the other beam, moving in the opposite direction of rotation of the disk increases when the speed of the disk rotation increases. 
The experiment demonstrated that the picture of the interference fringes (the bright and dark bands caused by the beams of light that are in phase or out of phase relative to each other) changes when the speed of rotation of the disk changes. The results of the experiment were precisely recorded. The observed effect is that the displacement of the interference fringes (the bright and dark bands) changes with the change in the speed of the disk rotation. The result reported by Georges Sagnac is as follows: “The result of these measurements shows that, in ambient space, light propagates with a velocity V[0], independent of the collective motion of the source of light O and the optical system. This property of space experimentally characterizes the luminiferous aether. The interferometer measures, according to the expression (according to the presented equation), the relative circulation of the luminiferous aether in the closed circuit.” (Sagnac, 1913). It is understandable that the result of the experiment was explained a century ago by the relative circulation of the luminiferous aether in a closed circuit. According to the supposition of Christiaan Huygens (the Dutch physicist), light travels in a hypothetical medium called “luminiferous aether”, a space-filling substance thought to be necessary as a transmission medium for the propagation of electromagnetic radiation. In fact, the conclusion is not that the space has a property that characterizes the “luminiferous aether”, but rather that “the ‘ether’ turns out to be the ‘warped space-time of the Universe’ itself.” (Sharlanov, 2011).
1. Explanation of the experiment in accordance with classical mechanics and Galilean relativity
The Earth rotates in the surrounding stationary space with a constant angular velocity. The linear speed of the Earth’s surface, at the latitude where the experiment was carried out, is constant.
The plate (the table on which the rotating disk is mounted) is fixed stationary on the Earth’s surface. Therefore, the influence of the Earth’s rotation on the speeds of the two light beams (the displacement of the interference fringes due to the Earth’s rotation) is constant. Note: The displacement of interference fringes due to the Earth’s rotation around its axis was discussed in the analysis of the “Michelson–Gale–Pearson experiment”. According to the experiment, however, the light source, the collimator (transforming the light beam from a point source into a parallel beam), the beam-splitter (splitting the beam in two opposite directions), the photographic plate, and the four mirrors mounted on the disk all rotate together in the stationary space at the speed of the disk. As a result, the different rotational velocities of the disc create different displacements of the interference fringes, due to the influence of the disc velocity on the speeds of the light beams in the frame of reference related to the spinning disk. The two frames of reference, which we are considering in the theoretical explanation of the experiment, are: 1) The first one is related to the rotating disk, where the light source, the collimator, the beam-splitter, the photographic plate, and the four mirrors are mounted. When the observer is on the disk, all devices (the collimator, the beam splitter, the photographic plate, and the four mirrors) mounted on the disk are stationary for the observer (regardless of whether the disc is spinning or not). 2) The second one is related to the stationary space itself. Appropriate for the explanation of the experiment is to consider it in a “Disk-Centered Inertial coordinate system” (DCI frame). The description of the “DCI frame of reference” is as follows: • The origin of the “DCI coordinate system” is the center of the disk.
If we ignore the displacement of the interference fringes due to the Earth’s rotation (which is constant, regardless of the disk rotation), we actually accept that the origin of the “DCI coordinate system” (the center of the disk, which is a fixed point on the Earth’s surface) is stationary in relation to the surrounding space. Similarly, the North and South poles are stationary in the stationary space when the Earth rotates around its axis. • The plane of the disk represents the (x,y) plane, and the axes of the “DCI coordinate system” (x,y) are stationary in relation to the surrounding stationary space (aimed at very distant astronomical objects). This means that the “Disk-Centered Inertial coordinate system” (DCI frame), for the present case, can be considered as a stationary frame of reference in relation to the surrounding stationary space. In other words, an observer situated in the DCI frame will see how the light source, the collimator, the beam splitter, the photographic plate, and the four mirrors of the interferometer rotate together with the disc. Before the examination of the experiment, we can recall that every mechanical or optical experiment actually takes place in the common space of the considered frames of reference. 1.1. Examination of the Sagnac experiment in the reference system related to the surrounding stationary space – in the “Disk-Centered Inertial coordinate system” (DCI frame of reference) In our time-spatial region “in the vicinity of the Earth’s surface”, the intensity of the gravitational field is uniform (the same). According to the abovementioned initial conditions of the experiments (which do not contradict the standpoint of contemporary physics), electromagnetic radiation propagates in vacuum (i.e. in the stationary space) at a constant speed equal to c. This speed is actually the speed of light in the “DCI frame of reference”, which is stationary in relation to the space.
However, everything mounted on the spinning disc rotates (moves) in the stationary space (which means: in relation to the “DCI frame of reference”). Therefore, in this frame of reference, the length of the path that the two light beams actually travel in space is different. This is due to the movement of each mirror in the stationary space (at the rotation of the disk) during the travel of the light beams toward the mirrors. The two light beams travel in opposite directions. Thus, the path length in the stationary space of one of the light beams (which travels in the opposite direction of the disk rotation) is shortened, and the path length in the stationary space of the other light beam (which travels in the direction of the disk rotation) is extended. As a result of the change in the path lengths of the two light beams (due to different velocities of the disk rotation), different displacements of the interference fringes are created. Therefore, the conclusion of the observer, located in the stationary in relation to the space “DCI coordinate system” (where the speed of light is constant and equal to c), is that the displacement of the interference fringes is due to the change in the path lengths traveled by the two light beams, which in turn depends on the velocity of the disk rotation. 1.2. Examination of the Sagnac experiment in the frame of reference related to the spinning disk Positioned on the spinning disk, the observer will see that all devices (the collimator, the beam splitter, the photographic plate, and the four mirrors) mounted on the disk do not move – that they are stationary. Therefore, the path lengths of the two beams (the distances between the mirrors) also do not change when the disk spins. As a result, the speeds of the two light beams in the frame of reference related to the spinning disk are different. 
This difference depends on the velocity of the disk rotation: the speed of the beam that travels in the direction of the disk rotation decreases to (c-V), where V is the linear speed of the mirrors, while the speed of the other light beam, which travels opposite to the direction of the disk rotation, increases to (c+V). In fact, the “light speed anisotropy” observed in the Sagnac experiment is similar to the “light speed anisotropy” in the “One-way determination of the speed of light” experiments (see the described cases “Eastward Transmission” and “Westward Transmission”). Therefore, the conclusion made by the observer positioned in the frame of reference related to the spinning disk is that the displacement of the interference fringes is due to the difference in the speeds of the two light beams. In turn, that difference (and, respectively, the displacement of the interference fringes) changes with the change in the velocity of the disk rotation. Finally, we can underline that as early as 1913, the Sagnac experiment actually proved that “the speed of light is not the same in relation to all frames of reference”. This was even before the publication of the general theory of relativity. Is it not surprising that Einstein never commented on this experiment, although he certainly knew about its existence? The Sagnac experiment is unofficially considered mystical because thus far, none of its explanations has been officially accepted. Although the Sagnac experiment proves that the speed of light is not the same in all inertial reference frames, many modern physics journals publish “scientific” explanations based on the special theory of relativity… which is based on the false claim that “the speed of light is the same in all inertial frames”. In other words, this is a classical “circular reference”! An example of a published “scientific” comparison of different explanations is that of Malykin, G.B.
“The Sagnac effect: correct and incorrect explanations” (Malykin, 2000). There are other such examples in the scientific literature. Despite all of these mystifications, and although there is currently no valid scientific explanation for this phenomenon, the results of these experiments have many significant practical applications. A wide range of applications is found in space navigation, aviation (the optical gyroscope), and daily Earth positioning needs, where no one has observed any “anisotropy” of the “meter” as a unit of measurement (which is a claim of the special theory of relativity). Additional proof of the credibility of the abovementioned explanation of the Sagnac experiment is given in the next subsection. This theoretical explanation demonstrates the derivation and origin of the most commonly used equation in rotational analyses. 2. Derivation of the equation, which is often used in rotation analyses The Sagnac effect manifests itself in a setup called a ring interferometer. It is the basis of the widely used high-sensitivity fiber-optic gyroscope, which registers changes in the spatial orientation of an object (airplane, satellite). In general, a fiber-optic gyroscope consists of a rotating coil with a number of optical fiber turns. Optical fibers are flexible, transparent fibers made of glass (silica) or plastic. An optical fiber consists of two separate parts. The middle part of the fiber is called the core and is the fiber-optic medium through which the light travels. Another layer of glass called the cladding wraps around the outside of the core. The cladding’s task is to keep the light beams inside the core. It can do this because the cladding is made of a different type of glass relative to the core; the cladding has a lower refractive index and acts as countless small mirrors. Each tiny particle of light (photon) propagates down the optical fiber by bouncing repeatedly off the cladding, as though the cladding were truly a mirror (the photon is reflected over and over).
This phenomenon is called total internal reflection, which causes the fiber to act as a waveguide. We will examine a simple ring interferometer (a coil with only one fiber-optic turn) mounted on a rotating disk with an angular velocity ω radian/sec (see Fig. 5.2). Fig. 5.2. Schematic presentation of a circular interferometer with one optical coil Two laser beams propagate in the rotating coil: one in the direction of the coil rotation, and the other in the opposite direction of the coil rotation. When the angular velocity of the rotating coil changes at the turning of the object where it is mounted, the displacement of the interference fringes also changes. The effect (the displacement of the fringes) is dependent on the effective area of the closed optical path. However, this is not simply the geometric area of the loop, but is enhanced by the number of turns in the coil. The equation that we derive on the basis of the aforementioned theoretical explanation of the Sagnac experiment is often used in analyses of rotation:

Δt = 4Aω/c[0]²   (9),

where A is the area of the circle bounded by the fiber-optic coil. The optical circuit (the “fiber-optic medium”) mounted on the rotating disc rotates along with the rotation of the disc at a linear speed equal to Rω, where R is the radius of the optical circuit and ω is the angular velocity of the rotating disk. The speed of light in the stationary “empty space” between the atoms is c[0] (inside the “fiber-optic medium”, where the speed of light is constant for the homogeneous optical medium). As shown, the two light beams (beam 1 and beam 2) travel in opposite directions in the same fiber-optic circle. Let us analyze one cycle of each of the two beams (from the moment of splitting to the moment of directing them to the screen-detector). Here, two factors must be considered: • The first is that the “empty space” inside the optical fiber (the optical medium) is stationary, although each atom of the optical fiber moves during rotation.
Since the “empty space” has no mass, no force can accelerate the space (set it in motion). This is a consequence of Newton’s second law of motion (F = ma). Neither the strength of the chemical bonds between atoms (in the micro-world) nor the gravitational forces (according to Newton’s law of universal gravitation, in the macro-world) can force the space to move, because the space has no mass. • The second is that, at the microscopic level, the cladding of the optical fiber can be seen as a continuous series of millions of miniature mirrors in which the photons are reflected as they propagate (in the case of Sagnac’s experiment, there are only four mirrors). As in Sagnac’s interferometer, each of these “elementary mirrors” shifts at a definite angle from the previous photon reflection when the optical coil is rotated (the mirrors move a certain distance during the propagation time of the photons in the stationary “micro-space” of the optical medium). Thus, in the stationary space, the path of the photons (of the light beam) moving in the direction of rotation of the optical coil is extended, and the path of the light beam moving opposite to the rotation of the optical coil is shortened. 2.1. Analysis of one rotation cycle of the light beam “1” that travels in the direction of the disc rotation • In the stationary (in relation to the surrounding space) Disk-Centered Inertial (DCI) coordinate frame: After splitting, light beam “1” makes one full cycle in the direction of disk rotation, and reaches the beam-splitter again after time interval t[1] to be redirected to the display (screen). For the observer stationary in space (located in the DCI coordinate system), the distance traveled by beam “1” in the stationary space inside the optical medium is longer than the fiber-optic coil circumference (2πR) by (Δ = Rωt[1]).
This is because, during the beam travel, the point of redirection to the detector (screen), as well as the entire optical loop, moves at a distance Δ, due to disk rotation. Therefore, the distance traveled by light beam “1” in the stationary surrounding space is (2πR + Rωt[1]); thus for the time interval t[1] (the time for one turn of the light beam “1”), the observer in the “DCI frame of reference” records the following:

c[0]·t[1] = 2πR + Rω·t[1]   (10),

where c[0] is the speed of light inside the “fiber-optic medium” (where the speed of light is constant for the homogeneous optical medium). • In the frame of reference related to the rotating disk, where the fiber-optic coil is mounted: For the observer positioned in this frame of reference (on the rotating disk), the distance traveled by the light beam “1” is 2πR, because the fiber-optic coil does not move in this frame of reference (in relation to the rotating disc). For the same time interval t[1], the speed of light beam “1” is equal to (c[0] – Rω), and for the time interval t[1] (the time for one turn of the light beam “1”), the observer (in the frame of reference related to the rotating disk) will register:

t[1] = 2πR/(c[0] – Rω)   (11),

which is actually equal to t[1] from the expression (10) after its transformation for deriving t[1], i.e., there is no “relativistic difference in time”. 2.2. Analysis of one rotation cycle of the light beam “2”, which travels in the opposite direction to the disk rotation • In the stationary (in relation to the surrounding space) Disk-Centered Inertial (DCI) coordinate frame: After splitting, the light beam “2” makes one full cycle in the opposite direction to the disk rotation and reaches the beam splitter again after the time interval t[2], to be redirected to the display (screen). Actually, the distance traveled by beam “2” in the stationary space inside the optical fiber is shorter than the fiber-optic coil circumference (2πR) by (Δ = Rωt[2]).
This is because, over the travel time of the beam for one cycle, the redirection point to the detector (as well as the whole optical coil) has approached, due to the rotation of the disk against the direction of movement of the beam. Therefore, the distance traveled by the light beam “2” in the stationary space (in the “DCI coordinate frame”) is (2πR – Rωt[2]). The observer in the “Disk-Centered Inertial (DCI) coordinate frame”, which is stationary in relation to the surrounding stationary space, will register for the travel time t[2] (for one turn of the light beam “2”):

c[0]·t[2] = 2πR – Rω·t[2]   (12),

where c[0] is the speed of light in the “fiber-optic medium” (where the speed of light for the homogeneous optical medium is constant). • In the frame of reference related to the rotating disk: For the observer positioned in this frame of reference (on the rotating disk), the distance traveled by the light beam “2” is exactly 2πR, because the fiber-optic coil does not move in relation to the rotating disc (in the observer’s frame of reference). For the same time interval t[2], the speed of light beam “2” is equal to (c[0] + Rω); for the travel time of one cycle of light beam “2”, the observer in the frame of reference related to the rotating disk will register:

t[2] = 2πR/(c[0] + Rω)   (13),

which is actually equal to t[2] from the expression (12) after its transformation for deriving t[2], i.e., there is no “relativistic difference in time”. 2.3. The results On the basis of the analysis, it was found that: 1. The time t[2] for one complete tour of light beam “2” is the same for both frames of reference; 2. The time t[1] for one complete tour of light beam “1” is the same for both frames of reference. 3. However, the time for one complete tour of light beam “1” (which moves in the direction of the rotation of the optical coil) is greater than the time for one complete tour of light beam “2” (which moves in the opposite direction of the rotation of the optical coil).
The difference between the travel times of the two beams “1” and “2” actually determines the displacement of the interference fringes, which changes with the change in the velocity of the disk rotation. For the difference between the time for one tour of light beam “1” and the time for one tour of light beam “2”, we obtain (after subtracting equation (13) from equation (11)):

Δt = t[1] – t[2] = 2πR/(c[0] – Rω) – 2πR/(c[0] + Rω) = 4πR²ω/(c[0]² – R²ω²) ≈ 4πR²ω/c[0]² = 4Aω/c[0]²   (14),

because R²ω² is negligible compared with c[0]², and A = πR². Equation (14) is actually the equation (9) we had to derive. Therefore, the demonstrated derivation of the equation, which is often used in rotation analyses, verifies the validity of the theoretical explanation of the Sagnac experiment (in accordance with classical mechanics and Galilean relativity)! 3. Conclusion The moving reference system in the stationary space in the Sagnac experiment is the “spinning disc”. The moving reference system in the stationary space in the “One-way measurement of the speed of light” and “Michelson–Gale–Pearson” experiments is the “rotating Earth’s surface”. The observed effects of displacement of the interference fringes in the case of “Sagnac’s ring interferometer” and the “Michelson–Gale–Pearson experiment”, and the “light speed anisotropy” (the difference in the speed depending on the direction of the light beam in the case of “One-way determination of the speed of light”), clearly demonstrated the following: The speed of light is not the same for all inertial frames of reference. The speed of light in vacuum is constant in our time-spatial domain “near the Earth’s surface”, where the gravitational field intensity is constant. The speed of light is different, however, in a frame of reference that moves in the stationary space. The measured speed of light in a moving frame of reference differs depending on the speed and the direction of motion of the frame of reference in the stationary space!
The main reason for the false claim, accepted by modern physics, that "the speed of light is the same for all inertial frames of reference" turns out to be the "Michelson-Morley experiment", whose "results" are a consequence only of the inappropriate conceptual design of Michelson's two-way interferometer. The delusion that "the speed of light is the same for all inertial frames of reference" is the foundation of the special theory of relativity. The analysis of the article "On the Electrodynamics of Moving Bodies" shows exactly where and how the claim "the speed of light is the same for all inertial frames of reference" was illogically applied – and actually reveals the essence of the special theory of relativity! Before examining the inappropriate conceptual design embedded in the interferometer construction used in the "Michelson-Morley" experiment (held in 1887), we will analyze the "Michelson-Gale-Pearson" experiment (held in 1925). The "Michelson-Gale-Pearson" experiment proves again that, in the reference system related to the moving Earth's surface, the measured speed of light is influenced by the rotation of the Earth (by the movement of the Earth's surface) – that the speed of light is not the same for all frames of reference. The inappropriate conceptual design embedded in the construction of Michelson's interferometer, however, indisputably shows that the claim "the speed of light is the same in all inertial frames of reference" is a great delusion, and that the "Michelson-Morley experiment" is actually the primary root cause of the biggest blunder in physics of the 20th century – the special theory of relativity.
Furthermore, the analysis of the article "On the Electrodynamics of Moving Bodies", where Einstein published the special theory of relativity, shows exactly where and how the claim "the speed of light is the same in all inertial frames of reference" was applied…
Money Maths

This is level 6, mixed real-life questions. You can earn a trophy if you get at least 9 correct and you do this activity online.

1. Cory buys a chocolate bar for 43p and a drink for 45p. How much does he need to pay altogether?
2. Carli pays £10 for her shopping. If her shopping costs £5.54, how much change will she receive?
3. Carol had £7.55 in her purse at the beginning of the week. During the week she spent £3.77. How much did she have left at the end of the week?
4. Find the cost of a bag of 15 bottles of floor cleaner if the bottles cost £1.31 each.
5. How much will it cost for a party of 16 people to go ice skating if it costs £1.79 per person?
6. If a bill for dinner of £73.53 is shared between 9 people, how much does each person have to pay?
7. Sam buys a book for £8.94 and a movie for £2.56. How much does he pay altogether?
8. Sue pays £50 for her shopping. If her shopping costs £43.67, how much change will she receive?
9. Surjit had £35.68 in his wallet at the beginning of the week. During the week he spent £30.92. How much did he have left at the end of the week?
10. Find the cost of a box of 55 components if they cost £8.14 each.
11. How much will it cost for a group of 29 people to go go-karting if it costs £6.19 per person?
12. If the bill for a brunch of £334.21 is shared between 19 people, how much does each person have to pay?

This is Money Maths level 6. You can also try: Level 1 Level 2 Level 3 Level 4 Level 5

This activity is suitable for people all around the world. Use the button below to change the currency symbol used to make it more relevant to you. You may wish to choose an unfamiliar currency to extend your experience.

Description of Levels

Coins and Notes - Sharpen your money management skills by recognising and counting current currency coins and notes

Level 1 - Adding amounts of money
Level 2 - Subtracting amounts of money (giving change!)
Level 3 - Multiplying an amount of money by a number
Level 4 - Dividing an amount of money by a number
Level 5 - Finding a percentage of an amount of money
Level 6 - Mixed real-life questions

Answers to this exercise are available lower down this page when you are logged in to your Transum account. If you don’t yet have a Transum subscription one can be very quickly set up if you are a teacher, tutor or parent.

One of the important aspects of this quiz is typing in the answer in the correct format. When expressing amounts of money it is standard practice to include two decimal places if the answer is not a whole number of the major unit (pounds, dollars etc). E.g., six point seven pounds would be written as £6.70 and not £6.7. For the purposes of this exercise, type in a whole number of pounds without the decimal places. E.g., twelve pounds should be written as £12 and not £12.00. Also note that the currency units have already been provided for you; you do not need to type them into the answer box.

The exercises are generated from random numbers so each time you refresh the page you will get a different set of questions. Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly.

Level 1: Addition

£1.72 + 65p

To perform this addition, first convert 65p to pounds. 65p is £0.65. So, the calculation is £1.72 + £0.65. Align the decimal points and add:

\[ \begin{array}{r} 1.72 \\ + 0.65 \\ \hline 2.37 \\ \end{array} \]

Therefore, £1.72 + 65p = £2.37.

Level 2: Subtraction

£92.51 - £70.74

Align the decimal points and subtract:

\[ \begin{array}{r} 92.51 \\ - 70.74 \\ \hline 21.77 \\ \end{array} \]

Therefore, £92.51 - £70.74 = £21.77.
Level 3: Multiplication

£7.29 × 5

Multiply £7.29 by 5:

\[ \begin{array}{r} 7.29 \\ \times 5 \\ \hline 36.45 \\ \end{array} \]

Therefore, £7.29 × 5 = £36.45.

Level 4: Division

£110.97 ÷ 9

Perform the division step by step, just as you do with the bus stop method:

Step 1. 110 ÷ 9 = 12 remainder 2.
Step 2. 29 ÷ 9 = 3 remainder 2.
Step 3. 27 ÷ 9 = 3.

So, £110.97 ÷ 9 = £12.33.

Level 5: Percentage

Find 40% of £56.80

First, find 10% (one tenth) of £56.80 = £5.68. Multiply £5.68 by 4 to find the required 40%:

\[ 5.68 \times 4 = 22.72 \quad \text{(alternatively } 56.80 \times 0.40 = 22.72\text{)} \]

Therefore, 40% of £56.80 is £22.72.

Level 6: Mixed Calculations

Sue pays £50 for her shopping. If her shopping costs £27.59, how much change will she receive?

Subtract the cost from the amount paid:

\[ 50.00 - 27.59 = 22.41 \]

Therefore, Sue will receive £22.41 in change.
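Since the exercise stresses the two-decimal-place convention (£6.70, never £6.7), here is a short Python sketch showing how that convention can be enforced in code. This is not part of the Transum activity, and the helper name is invented:

```python
from decimal import Decimal

def money(amount):
    """Format a number of pounds to the standard two decimal places."""
    return f"£{Decimal(str(amount)):.2f}"

print(money(6.7))                   # £6.70, never £6.7
print(money(50 - 27.59))            # Sue's change from the Level 6 example
print(money(Decimal("73.53") / 9))  # the £73.53 dinner bill split 9 ways
```

Using Decimal avoids the small binary-float errors that can creep in when money amounts are subtracted or divided as ordinary floats.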
How is cosmological redshift calculated? - The Handy Astronomy Answer Book

How is cosmological redshift calculated?

Cosmological redshift is calculated by (1) figuring out how much the observed wavelength is shifted from the rest wavelength, and (2) expressing that shift as a ratio of the rest wavelength; that is, z = (observed wavelength − rest wavelength) / rest wavelength. Although it sounds complicated, it really is not. It turns out that this redshift number is very useful when deriving properties of distant galaxies, such as age and distance. Here is an example for illustration. Say an astronomer is measuring the spectrum of a distant galaxy. If the unredshifted rest wavelength of a spectral feature is one hundred nanometers, but for this galaxy the feature appears at two hundred nanometers, then the measured redshift is one. If the feature appears at three hundred nanometers, the redshift is two; if it is at four hundred nanometers, the redshift is three; and so on.
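The ratio described above is simple to compute. A small Python sketch (the function name is illustrative), reproducing the worked example with a rest wavelength of one hundred nanometers:

```python
def redshift(rest_wavelength, observed_wavelength):
    """Cosmological redshift: z = (observed - rest) / rest."""
    return (observed_wavelength - rest_wavelength) / rest_wavelength

print(redshift(100, 200))  # redshift of 1
print(redshift(100, 300))  # redshift of 2
print(redshift(100, 400))  # redshift of 3
```

The wavelengths can be in any unit, as long as the rest and observed values use the same one, since the units cancel in the ratio.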
Standalone Python/Sage Scripts

asked 2016-06-07 09:29:42 +0100

This post is a wiki. Anyone with karma >750 is welcome to improve it.

The tutorial example works only for integers. It does not work for symbolic expressions. It starts working after renaming the script from "factor" to "factor.sage" and replacing the last line by

print factor(sage_eval(sys.argv[1], locals={'x':x}))

Is there anything wrong in my configuration of Sage?

1 Answer

I got the same; it seems indeed that the symbol x is not injected into the global namespace, but the following works:

./factor "sage.calculus.var.SR.symbol('x')^2+3*sage.calculus.var.SR.symbol('x')"
reduction rules

The Minimum Connectivity Inference (MCI) problem represents an NP-hard generalisation of the well-known minimum spanning tree problem and has been studied independently in different fields of research. Let an undirected complete graph and finitely many subsets (clusters) of its vertex set be given. Then, the MCI problem is to find a minimal subset of edges …
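The abstract above is truncated before the connectivity condition is stated. In the usual formulation of MCI, each given cluster must induce a connected subgraph in the chosen edge set; assuming that formulation, here is a brute-force Python sketch for tiny instances (all names are mine, and this is exponential-time, purely for illustration of the problem, not a practical solver):

```python
from itertools import combinations

def is_connected(vertices, edges):
    """True if `vertices` form one connected component under `edges`."""
    vertices = set(vertices)
    if len(vertices) <= 1:
        return True
    adjacency = {v: set() for v in vertices}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adjacency[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vertices

def min_connectivity_inference(n, clusters):
    """Smallest edge subset of K_n whose restriction connects every cluster."""
    all_edges = list(combinations(range(n), 2))
    for k in range(len(all_edges) + 1):
        for subset in combinations(all_edges, k):
            if all(is_connected(c, [(u, v) for u, v in subset
                                    if u in c and v in c]) for c in clusters):
                return list(subset)

print(min_connectivity_inference(3, [{0, 1}, {1, 2}]))  # [(0, 1), (1, 2)]
```

With a single cluster containing every vertex, the problem collapses to finding a spanning tree, which is the minimum-spanning-tree special case the abstract mentions.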
Uniformly most powerful test

In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power ${\displaystyle 1-\beta}$ among all possible tests of a given size α. For example, according to the Neyman-Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses.

Setting

Let ${\displaystyle X}$ denote a random vector (corresponding to the measurements), taken from a parametrized family of probability density functions or probability mass functions ${\displaystyle f_{\theta}(x)}$, which depends on the unknown deterministic parameter ${\displaystyle \theta \in \Theta}$. The parameter space ${\displaystyle \Theta}$ is partitioned into two disjoint sets ${\displaystyle \Theta_0}$ and ${\displaystyle \Theta_1}$. Let ${\displaystyle H_0}$ denote the hypothesis that ${\displaystyle \theta \in \Theta_0}$, and let ${\displaystyle H_1}$ denote the hypothesis that ${\displaystyle \theta \in \Theta_1}$. The binary test of hypotheses is performed using a test function ${\displaystyle \phi(x)}$:

${\displaystyle \phi(x) = \begin{cases} 1 & \text{if } x \in R \\ 0 & \text{if } x \in A \end{cases}}$

meaning that ${\displaystyle H_1}$ is in force if the measurement ${\displaystyle X\in R}$ and that ${\displaystyle H_0}$ is in force if the measurement ${\displaystyle X \in A}$. ${\displaystyle A \cup R}$ is a disjoint covering of the measurement space.
Formal definition

A test function ${\displaystyle \phi(x)}$ is UMP of size ${\displaystyle \alpha}$ if for any other test function ${\displaystyle \phi'(x)}$ we have:

${\displaystyle \sup_{\theta\in\Theta_0}\; E_\theta\phi'(X)=\alpha'\leq\alpha=\sup_{\theta\in\Theta_0}\; E_\theta\phi(X)\,}$

${\displaystyle E_\theta\phi'(X)=1-\beta'\leq 1-\beta=E_\theta\phi(X) \quad \forall \theta \in \Theta_1 }$

The Karlin-Rubin theorem

The Karlin-Rubin theorem can be regarded as an extension of the Neyman-Pearson lemma for composite hypotheses. Consider a scalar measurement having a probability density function parameterized by a scalar parameter θ, and define the likelihood ratio ${\displaystyle l(x) = f_{\theta_1}(x) / f_{\theta_0}(x)}$. If ${\displaystyle l(x)}$ is monotone non-decreasing in ${\displaystyle x}$ for any pair ${\displaystyle \theta_1 \geq \theta_0}$ (meaning that the greater ${\displaystyle x}$ is, the more likely ${\displaystyle H_1}$ is), then the threshold test:

${\displaystyle \phi(x) = \begin{cases} 1 & \text{if } x > x_0 \\ 0 & \text{if } x < x_0 \end{cases}}$

where ${\displaystyle x_0}$ is chosen so that ${\displaystyle E_{\theta_0}\phi(X)=\alpha}$,

is the UMP test of size α for testing ${\displaystyle H_0: \theta \leq \theta_0 \text{ vs. } H_1: \theta > \theta_0 }$

Note that exactly the same test is also UMP for testing ${\displaystyle H_0: \theta = \theta_0 \text{ vs. } H_1: \theta > \theta_0 }$

Important case: the exponential family

Although the Karlin-Rubin theorem may seem weak because of its restriction to a scalar parameter and a scalar measurement, it turns out that there exists a host of problems for which the theorem holds. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with

${\displaystyle f_\theta(x) = c(\theta)h(x)\exp(\pi(\theta)T(x))}$

has a monotone non-decreasing likelihood ratio in the sufficient statistic T(x), provided that ${\displaystyle \pi(\theta)}$ is non-decreasing.
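To make the Karlin-Rubin construction concrete in the simplest exponential-family setting: for i.i.d. N(θ, σ²) observations with known σ, the sample mean is the sufficient statistic, and the UMP size-α test of H0: θ ≤ θ0 against H1: θ > θ0 rejects when the sample mean exceeds θ0 + z(1−α)·σ/√n. A small Python sketch using only the standard library (the function names are mine):

```python
from statistics import NormalDist, mean

def ump_threshold(theta0, sigma, n, alpha):
    """Threshold t0 such that P_{theta0}(sample mean > t0) = alpha."""
    z = NormalDist().inv_cdf(1 - alpha)        # upper-alpha standard normal quantile
    return theta0 + z * sigma / n ** 0.5

def ump_test(sample, theta0, sigma, alpha):
    """UMP size-alpha test of H0: theta <= theta0 vs H1: theta > theta0."""
    return mean(sample) > ump_threshold(theta0, sigma, len(sample), alpha)

print(ump_threshold(0.0, 1.0, 25, 0.05))  # about 0.329 for n = 25, alpha = 0.05
```

Note how the single threshold serves every θ in the alternative at once, which is exactly what makes the test uniformly most powerful rather than most powerful for one alternative value only.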
Example

Let ${\displaystyle X=(X_0 , X_1 ,\dots , X_{M-1})}$ denote i.i.d. normally distributed ${\displaystyle N}$-dimensional random vectors with mean ${\displaystyle \theta m}$ and covariance matrix ${\displaystyle R}$. We then have

${\displaystyle f_\theta (X) = (2 \pi)^{-M N / 2} |R|^{-M / 2} \exp \left\{-\frac{1}{2} \sum_{n=0}^{M-1}(X_n - \theta m)^T R^{-1}(X_n - \theta m) \right\} = }$

${\displaystyle = (2 \pi)^{-M N / 2} |R|^{-M / 2} \exp \left\{-\frac{1}{2} \sum_{n=0}^{M-1}(\theta^2 m^T R^{-1} m) \right\} \cdot \exp \left\{-\frac{1}{2} \sum_{n=0}^{M-1}X_n^T R^{-1} X_n \right\} \cdot \exp \left\{\theta m^T R^{-1} \sum_{n=0}^{M-1}X_n \right\}}$

which is exactly in the form of the exponential family shown in the previous section, with the sufficient statistic being

${\displaystyle T(X) = m^T R^{-1} \sum_{n=0}^{M-1}X_n.}$

Thus, we conclude that the test

${\displaystyle \phi(T) = \begin{cases} 1 & \text{if } T > t_0 \\ 0 & \text{if } T < t_0 \end{cases}}$

where ${\displaystyle t_0}$ is chosen so that ${\displaystyle E_{\theta_0} \phi (T) = \alpha}$,

is the UMP test of size ${\displaystyle \alpha}$ for testing ${\displaystyle H_0: \theta \leq \theta_0}$ vs. ${\displaystyle H_1: \theta > \theta_0}$

Further discussion

Finally, we note that in general, UMP tests do not exist for vector parameters or for two-sided tests (a test in which one hypothesis lies on both sides of the alternative). Why is this so? The reason is that in these situations, the most powerful test of a given size for one possible value of the parameter (e.g. for ${\displaystyle \theta_1}$ where ${\displaystyle \theta_1>\theta_0}$) is different from the most powerful test of the same size for a different value of the parameter (e.g. for ${\displaystyle \theta_2}$ where ${\displaystyle \theta_2 < \theta_0}$). As a result, no test is uniformly most powerful.

References

• L. L. Scharf, Statistical Signal Processing, Addison-Wesley, 1991, section 4.7.
Random Function in C

In C programming, the rand() function is commonly used to generate random numbers within a specified range. To use this function effectively, it's important to understand its syntax and behavior, and the necessity of using srand() to seed the random number generator.

rand() Function

The rand() function is a standard library function in C that generates a pseudo-random integer in the range [0, RAND_MAX]. It is declared in the <stdlib.h> header file. One thing to note about rand() is that it doesn't produce truly random numbers; rather, it generates numbers based on a deterministic algorithm.

Syntax of rand() Function:

int rand(void);

Return Value: The rand() function returns a random integer between 0 and RAND_MAX.

RAND_MAX is a predefined symbolic constant in the C standard library. It is also declared in the <stdlib.h> header file and represents the maximum value that can be returned by the rand() function. Its value is implementation-defined but is at least 32767.

srand() Function

The srand() function initializes the random number generator with a seed value, ensuring that the subsequent numbers generated by rand() will be different each time the program is run. By providing a seed value to srand(), we can initialize the random number generator to produce different sequences of random numbers.

Syntax of srand() Function:

void srand(unsigned int seed);

The seed value sets the initial state of the random number generator, determining the sequence of random numbers produced by subsequent calls to rand(). Typically, time(NULL) (declared in <time.h>) is used as the seed to ensure different sequences of random numbers each time the program runs.
// Program that demonstrates how to generate a random number using the rand() function after seeding with srand()

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int randomNumber;

    // Seed the random number generator
    srand((unsigned int)time(NULL));

    // Generate a random number between 0 and 9
    randomNumber = rand() % 10;
    printf("Random Number: %d\n", randomNumber);

    return 0;
}

Sample output (will vary from run to run):

Random Number: 6

If you don't seed the random number generator using srand(), the rand() function will produce the same sequence of numbers every time you run the program. By using srand(time(NULL)), you can ensure that the random number generator is seeded with a different value each time the program is executed, thus producing different sequences of random numbers.

Understanding and utilizing the rand() and srand() functions correctly can add randomness and unpredictability to your C programs, making them more versatile and interesting.
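Building on the modulo trick above (rand() % 10 for 0 to 9), a common helper maps rand() onto an arbitrary inclusive range. A small sketch; the helper name is invented, and note that the modulo approach has a slight bias whenever the range size does not divide RAND_MAX + 1 evenly, which is acceptable for casual use:

```c
#include <stdlib.h>

/* Map rand() onto the inclusive range [min, max].
   Assumes min <= max and that (max - min + 1) does not overflow int. */
int rand_in_range(int min, int max)
{
    return min + rand() % (max - min + 1);
}
```

After seeding once with srand((unsigned int)time(NULL)), a call such as rand_in_range(1, 6) simulates a six-sided die roll.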
Derivation of Torsion Equation - Examples, Derivations, Equations

Derivation of Torsion Equation

The derivation of the torsion equation is a fundamental concept in the study of mechanical engineering and materials science. It describes the relationship between the torque applied to a cylindrical shaft and the resulting shear stress and angle of twist along its length. The equation is derived from the principles of equilibrium, compatibility, and material behavior, typically using polar coordinates. This derivation helps in understanding how shafts transmit mechanical power in machines, ensuring their design can withstand applied loads without failure, which is critical for the safety and efficiency of mechanical systems.

Torsion Equation Derivation

The torsion equation describes the relationship between the applied torque (T), the polar moment of inertia (J), the shear stress (τ), the radius of the shaft (r), and the angle of twist (θ) over the length (L) of a cylindrical shaft. Here's a step-by-step derivation of the torsion equation:

Assumptions:
• Homogeneous, isotropic material
• Plane cross-sections remain plane
• Angle of twist uniform along the length
• Material obeys Hooke's law in shear

Polar Moment of Inertia (J): The polar moment of inertia for a solid circular shaft of radius r is given by:

J = πr⁴/2

Shear Strain (γ): Consider a shaft of length L subjected to a torque T. The shear strain γ at a radius r is the radius times the angle of twist per unit length, θ/L:

γ = r ⋅ θ/L

Shear Stress (τ): According to Hooke's law for shear, shear stress τ is proportional to shear strain γ:

τ = Gγ = G ⋅ (r ⋅ θ/L)

Infinitesimal Torque (dT): The torque carried by an elemental area dA at radius r is:

dT = τ ⋅ r ⋅ dA

Total Torque (T): Integrate dT over the entire cross-sectional area to get the total torque:

T = ∫A τ ⋅ r ⋅ dA = ∫A (G ⋅ r ⋅ θ/L) ⋅ r ⋅ dA = (Gθ/L) ∫A r² dA

Since ∫A r² dA = J, this gives:

T = GθJ/L

Torsion Equation: Rearranging the above gives the torsion equation:

T/J = Gθ/L = τ/r

What is Torsion?
Torsion refers to the twisting of an object due to an applied torque or rotational force. This type of deformation occurs when a moment or couple is applied to a structural member, such as a shaft, causing it to twist around its longitudinal axis. In mechanical and structural engineering, torsion is a critical consideration for the design and analysis of components like drive shafts, axles, and beams subjected to twisting loads.

Key points about torsion:
1. Torque (T): The applied twisting force that causes torsion.
2. Angle of Twist (θ): The measure of the rotational deformation of the object.
3. Shear Stress (τ): The internal stress developed within the material due to the applied torque.
4. Polar Moment of Inertia (J): A geometric property of the cross-section that affects its resistance to torsion.
5. Shear Modulus (G): A material property that relates shear stress to shear strain.

The torsion equation, which relates these factors, is crucial for ensuring that components can withstand the applied loads without failure.

What is Torsion Constant?

The torsion constant, denoted as J, measures a cross-section's resistance to twisting or torsion. For solid circular shafts, it equals the polar moment of inertia, J = πr⁴/2, where r is the radius. For non-circular sections, J varies based on shape and dimensions. It influences the relationship between applied torque, shear stress, angle of twist, shear modulus, and length in the torsion equation: T/J = Gθ/L = τ/r. The torsion constant is crucial for assessing torsional strength and rigidity in structural and mechanical components.

Torsion Units

Torque (T): The twisting force applied to an object in torsion analysis is measured in Newton-meters (N·m) in the SI system and Pound-feet (lb·ft) in the Imperial system.

Angle of Twist (θ): The angle of twist measures rotational deformation, typically in radians (rad) in the SI system, though degrees (°) can also be used for some applications.
Shear Stress (τ): The internal stress resulting from applied torque is measured in Pascals (Pa) or Newtons per square meter (N/m²) in the SI system, and in Pounds per square inch (psi) in the Imperial system.

Polar Moment of Inertia (J): This geometric property of the cross-section is measured in meters to the fourth power (m⁴) in the SI system and in inches to the fourth power (in⁴) in the Imperial system.

Shear Modulus (G): The shear modulus, relating shear stress to shear strain, is measured in Pascals (Pa) or Newtons per square meter (N/m²) in the SI system, and in Pounds per square inch (psi) in the Imperial system.

Moment of Resistance

The moment of resistance is a crucial concept in structural engineering and mechanics, representing the capacity of a structural member to withstand bending moments. It is the internal moment that balances the external applied moment, ensuring the structure remains stable and does not fail under load.

Definition: The moment of resistance is the moment generated by internal stresses within a structural member, such as a beam or column, that opposes the applied bending moment. It is crucial for maintaining equilibrium and preventing structural failure.

Calculation: The moment of resistance is calculated using the formula:

Mᵣ = f⋅Z

where Mᵣ is the moment of resistance, f is the allowable stress (such as yield stress for ductile materials or ultimate stress for brittle materials), and Z is the section modulus, a geometric property of the cross-section.

Section Modulus (Z): The section modulus is defined as:

Z = I/y

where I is the second moment of area (or moment of inertia) of the cross-section, and y is the distance from the neutral axis to the outermost fiber of the section.

Units: The moment of resistance is typically measured in Newton-meters (N·m) or Pound-feet (lb·ft), depending on the unit system used.
Understanding and calculating the moment of resistance is essential for designing safe and efficient structures, ensuring they can resist applied loads without excessive deformation or failure.

Uses of Torsion Equation Derivation

The derivation of the torsion equation is fundamental in several areas of engineering and materials science. Here are some key uses:

Mechanical Design: The torsion equation helps in designing shafts, axles, and other components subjected to torsional loads, ensuring they can withstand the applied torque without failure.

Material Testing: By deriving the torsion equation, engineers can determine the shear modulus of materials through torsion tests, which is essential for characterizing material properties.

Power Transmission: The torsion equation is critical in the analysis and design of power transmission systems, such as those in automotive drivetrains and industrial machinery, where shafts transmit rotational power.

Structural Analysis: In civil engineering, the torsion equation aids in analyzing and designing structural elements like beams and columns that may experience torsional effects due to asymmetric loading or complex geometries.

Failure Analysis: Understanding the torsion equation allows engineers to predict failure modes in components subjected to torsional loads, contributing to the development of safer and more reliable designs.

Examples of Torsion Equation Derivation

Determining Safe Torque Limits: One practical application of the torsion equation is determining the safe torque limits for cylindrical shafts. By using the derived equation, engineers can calculate the maximum torque that a shaft can withstand before yielding or failing. For example, in designing a steel drive shaft, the torsion equation helps ascertain the maximum torque that can be applied without exceeding the material's yield strength, ensuring the shaft's durability and safety in operation.
Designing Twist Drills: In manufacturing, the torsion equation is used to design twist drills. These tools must withstand high torsional loads while maintaining their structural integrity. By applying the torsion equation, engineers can optimize the drill's diameter and material properties to resist twisting forces, preventing failure during drilling operations and enhancing the tool's lifespan.

Analyzing Torsional Vibrations: In rotating machinery, torsional vibrations can lead to mechanical failures. The torsion equation helps in analyzing these vibrations and designing components to mitigate them. For instance, in turbine shafts, the equation is used to determine natural frequencies and design damping systems, ensuring smooth operation and reducing the risk of fatigue failure.

Evaluating Torsional Rigidity: The torsion equation is vital for evaluating the torsional rigidity of various structural elements. In aerospace engineering, for example, the equation helps assess the rigidity of aircraft wings and fuselage sections. By determining the angle of twist under specific loads, engineers can ensure that these components maintain their structural integrity under operational stresses.

Optimizing Material Selection: In material science, the torsion equation assists in optimizing material selection for components subjected to torsional loads. For instance, in the design of a high-performance automotive axle, engineers use the equation to compare different materials, such as steel and aluminum, to find the best combination of strength, weight, and cost. This optimization ensures that the component performs reliably under torsional stress while meeting other design criteria.

Can the torsion equation be applied to non-circular cross-sections?
Yes, but the calculation of the polar moment of inertia J is more complex for non-circular cross-sections.

What is the significance of the radius (r) in the torsion equation?
The radius determines the distribution of shear stress, with maximum stress occurring at the outer surface.

How does the torsion equation help in mechanical design?
It aids in designing shafts and other components to ensure they can withstand applied torques without failure.

Why is understanding the torsion equation important in engineering?
It ensures safe and efficient design of mechanical and structural components subjected to torsional loads.

How does the torsion equation apply to power transmission?
It helps in designing shafts that efficiently transmit rotational power without excessive twist or failure.

How is the torsion equation used in material testing?
It is used to determine the shear modulus of materials through torsion tests.

What is the derivation process for the torsion equation?
It involves relating torque to shear stress through equilibrium, compatibility, and material behavior principles.

Can the torsion equation predict failure modes?
Yes, it helps predict potential failure modes such as yielding or buckling under torsional loads.

What is shear strain (γ) in the context of torsion?
Shear strain is the angular deformation per unit length due to applied torque.

What assumptions are made in deriving the torsion equation?
Assumptions include homogeneous, isotropic material, plane cross-sections remaining plane, and small deformations.
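The relations T/J = Gθ/L = τ/r can be checked numerically. A small Python sketch for a solid circular shaft; the torque, radius, length, and shear modulus below are invented illustration values, not design data:

```python
import math

def shaft_torsion(T, r, L, G):
    """Solid circular shaft: polar moment, peak shear stress, angle of twist.

    T in N*m, r and L in m, G in Pa.  Returns (J, tau_max, theta_rad).
    """
    J = math.pi * r**4 / 2          # polar moment of inertia, m^4
    tau_max = T * r / J             # shear stress at the outer surface, Pa
    theta = T * L / (G * J)         # total angle of twist over length L, rad
    return J, tau_max, theta

# Hypothetical steel shaft: 500 N*m torque, 20 mm radius, 1 m long, G = 80 GPa
J, tau, theta = shaft_torsion(500, 0.020, 1.0, 80e9)
print(f"J = {J:.3e} m^4, tau_max = {tau/1e6:.1f} MPa, twist = {math.degrees(theta):.2f} deg")
```

Comparing tau_max against the material's allowable shear stress is exactly the safe-torque check described in the "Determining Safe Torque Limits" example above.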
Part 2M: Manipulation of Matrices 2

Chris introduces Part M in this :30 video by referring back to the board with an "L" on each side.

A PDF of a text file of the commands of this Part, Part M, is available at this link: Quorum Commands Section 2 Part M – PDF

The list of all the Quorum Matrix Commands is linked here again for your convenience:

Having more than one image allows brightness values for objects to be added together. How could you assure that the same positions were being added together? Let's try some matrix manipulation to give us an idea of what could be done. Later in this section we will try some image manipulations in Afterglow Access.

If you use the same Quorum Box as the previous part, you can use the matrices you have already defined; just delete the rest of the code you typed in from the previous part. Or just open a new Quorum box.

Make sure the first command is:

use Libraries.Compute.Matrix

A) Shift Command

To practice with these commands, use any matrix you already have defined in your Quorum box. Output the original matrix, and then a "space" before using the new command to change the matrix. You want to compare the changed matrix to the original. Please use the "say" command if that is more convenient. Below is an example for the Shift command:

output m1:ToText()
output "space"
output m1:Shift(2,3,0.0):ToText()

Compare the two matrices. Did it do what you imagined? Comment in your journal. What did Shift do?

We first specified the number of rows to shift down by, then the number of columns to shift over by, and the last number told Quorum what number to fill in the new rows and columns with.

Let's define the output of the Shift as a new matrix. See if you can remember how before looking at the example that follows.

Matrix m1s
m1s = m1:Shift(2,3,0.0)
output m1s:ToText()

The first 38 seconds introduces the Flip commands. You may then follow along for step by step instructions or skip to below the video to follow the typed instructions.
B) FLIP Command Afterglow Access has the ability to flip vertically or horizontally. Do you remember trying those options when looking at the Display Settings on Afterglow Access (AgA)? If not, go to the Display Settings, scroll to the bottom, and try the options there. Now try Flip with any of the matrices you have defined above. If you use the example below, make sure you use the name of your matrix! Again, follow the video for more detailed instructions if you would like. output “space” output m1:FlipVertical():ToText() Examine the flipped matrix. How did it change? Record in your journal. Now try to flip it in the horizontal direction. Recall, this command will use the matrix m1 you defined previously, not the ‘FlipVertical’ matrix you made when you used that command – unless you saved it as a new matrix. output “space” output m1:FlipHorizontal():ToText() Examine the new matrix. How is it different from the vertical flip? Also relate it to the image orientation activity you did with the “L”. Can you show each other the “L” flipped vertically and horizontally? Are you correct? Chris introduces the Reshape command in the first part of this 3:19 video. Then you can follow his step by step instructions, or skip to the instructions below the video. 3. RESHAPE Command This command allows you to move the matrix down any number of rows from the first row, and then move it over any number of columns from the first column. Study the example code below and discuss what you think will happen before trying it. output “space” output m1:Reshape(2,3,7,7,10):ToText() In this case, you shifted the original matrix down two rows and over three columns: it is now a 7 x 7 matrix. • The new rows and columns are filled in with the value of 10.0, and the part of the original matrix that has been shifted out of the new bounds is dropped. Chris introduces the idea of Transpose in the first 1:10. Then you can follow his step by step instructions, or skip to the instructions below the video. 4.
TRANSPOSE Command What do you think happens here? Let’s try it: output “space” output m1:Transpose():ToText() Examine the new matrix. Describe what took place. Is it what you expected? Record your answer in your journal. Chris introduces the idea of Rotate in the first 29 seconds. Then you can follow his step by step instructions, or skip to the instructions below the video. 5. ROTATE Command This does exactly what you think, either to the left or right. How many degrees does a ‘rotate’ turn the matrix? Try it and enter your answer into your journal. output “space” output m1:RotateRight():ToText() output “space” output m1:RotateLeft():ToText()
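If you also know a little Python, the flip, transpose, and rotate commands above have close analogues in NumPy. The sketch below is offered only as a comparison of the same ideas in another language — it uses NumPy's own function names, not Quorum's, and the small matrix is just a made-up example.

```python
import numpy as np

# A small matrix to manipulate, analogous to m1 in the Quorum examples.
m1 = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0]])

flipped_v = np.flipud(m1)           # flip vertically (reverse row order)
flipped_h = np.fliplr(m1)           # flip horizontally (reverse column order)
transposed = m1.T                   # swap rows and columns
rotated_right = np.rot90(m1, k=-1)  # rotate 90 degrees clockwise
rotated_left = np.rot90(m1, k=1)    # rotate 90 degrees counter-clockwise

print(flipped_v)
print(transposed)
```

Just as in Quorum, each operation returns a new matrix and leaves m1 itself unchanged, so you can compare the result to the original side by side.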
Theme 2: Hybrid Physics-based Data-driven Approaches for Large-scale Simulations and AI Technologies The main scientific goal in this topic is to develop and use multi-scale computational (HPC + data-driven) methodologies in order to construct more accurate quantitative models of complex materials across different scales. To achieve this, a hybrid physics-based data-driven paradigm is proposed that links High-Performance Computing (large-scale simulations) with AI technologies. In more detail, we envision work along the following directions: • Data-driven coarse-graining strategies: The development of data-driven systematic CG techniques is a very important and still unexplored area of multi-scale modelling. In systematic “bottom-up” strategies, the effective CG interactions are derived as follows: Assume a microscopic system, composed of N atoms/particles in the canonical ensemble, described by a Hamiltonian H[N] in the microscopic configuration space q of dimension 3N (the positions of all atoms), and a mesoscopic CG description of this system with M “superatoms” (M < N) and Hamiltonian H[M], in the mesoscopic phase space Q of dimension 3M. The matched observables can be distribution functions (e.g. bonded distributions, or the pair correlation function g(r)), the total force acting on a CG particle, or the relative entropy (Kullback–Leibler divergence) between the microscopic and the coarse-space Gibbs measures. Despite the success of such methods, they become problematic for multi-component nanostructured systems (e.g. blends, interfaces, crystals, etc.) due to the complex heterogeneous structure of such systems at the atomic level. One of the reasons the above techniques fail is that the set of basis functions used to approximate the exact, but not computable, many-body PMF is not large enough.
To overcome the above limitations, our approach combines the existing schemes with deep learning-based approaches to provide more accurate and transferable approximations of the CG model (free energy surface, FES) under a broad range of conditions. For this, we propose the use of neural networks (NNs) that can, in principle, be used to fit any continuous function on compact subsets of R^n (see universal approximation theorem). However, the application thereof to describe the interaction between atoms or molecules is not straightforward, since NNs do not obey the required symmetries (such as permutation, translation, etc.) dictated by physical laws of nature. To this end, the NNs will be combined with proper transformations of the original data. We anticipate that new CG force fields will be a more accurate approximation of the “exact” many-body PMF, thus deriving more powerful and transferable CG models. In addition, we will explore CG density-dependent potentials in which density-dependent terms are used to approximate the exact many-body PMF in an analogous manner to classical density functional theory. Recently we have developed such CG models for homopolymer bulk systems. Here we will extend such potentials to complex nanostructured systems, also comparing against the data-driven (NNs-based) approximations. • Linking microscopic and mesoscopic scales: The main challenge in the multi-scale modelling of complex materials is the systematic linking of the models across the different scales. For these, algorithms that either eliminate (dimensionality reduction) or re-introduce degrees of freedom (back-mapping process) are required. Recently, we have developed hierarchical back-mapping strategies incorporating generic different scales of description from blob-based models and moderate coarse-grained up to all-atom models (see Figure 3). 
The central idea is to efficiently equilibrate CG polymers and then to re-insert atomistic degrees of freedom via geometric and Monte Carlo approaches. More recently, we introduced a general image-based approach for structural back-mapping from coarse-grained to atomistic models using adversarial neural networks. These methods have been extensively tested for polymer melts of high molecular weight. Here, we will extend these methods to provide large all-atom configurations for heterogeneous nanostructured systems. This is a particularly challenging area due to the inherent complexities of such systems, and it has not been addressed in the literature so far. The new methods will be thoroughly examined and validated by comparing the structural and conformational properties of the back-mapped model configurations with reference data from smaller systems, which are obtained directly from long atomistic simulations.
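The relative-entropy (Kullback–Leibler) criterion mentioned above for comparing a microscopic and a coarse-grained distribution can be sketched in a few lines. The code below is only a minimal illustration of the discrete KL divergence; the bin probabilities are invented for the example and do not come from any simulation described here.

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(P || Q) in nats.

    p, q: sequences of probabilities over the same bins; q must be
    nonzero wherever p is nonzero. D(P || Q) >= 0, with equality
    if and only if P and Q agree.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical bin probabilities for a fine-grained model and a CG model.
p_atomistic = [0.1, 0.4, 0.3, 0.2]
q_coarse = [0.25, 0.25, 0.25, 0.25]

d = kl_divergence(p_atomistic, q_coarse)
print(f"D(P||Q) = {d:.4f} nats")
```

In relative-entropy coarse-graining schemes, a quantity of this form (evaluated over the full configurational distributions rather than a toy histogram) is what the CG interaction parameters are tuned to minimize.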
Arthur Schopenhauer: Logic and Dialectic For Arthur Schopenhauer (1788-1860), logic as a discipline belongs to the human faculty of reason, more precisely to the faculty of language. This discipline of logic breaks down into two areas. Logic or analytics is one side of the coin; dialectic or the art of persuasion is the other. The former investigates rule-oriented and monological language. The latter investigates result-oriented, persuasive language. Analytics or logic, in the proper sense, is a science that emerged from the self-observation of reason and the abstraction of all content. It deals with formal truth and investigates rule-governed thinking. The uniqueness of Schopenhauer’s logic emerges from its reference to intuition, which leads him to use numerous geometric forms in logic that are understood today as logic diagrams, combined with his aim of achieving the highest possible degree of naturalness, so that logic resembles mathematical proofs and, especially, the intentions of everyday thinking. It follows from both logic and dialectic that Schopenhauer did not actively work to develop a logical calculus because axiomatisation contradicts natural thinking and also mathematics in that the foundations of mathematics should rely upon intuition rather than upon the rigor that algebraic characters are supposed to possess. However, the visualization of logic through diagrams and of geometry through figures is not intended to be empirical; rather, it is about the imaginability of logical or mathematical forms. Schopenhauer is guided primarily by Aristotle with regard to naturalness, by Euler with regard to intuition, and by Kant with regard to the structure of logic. Schopenhauer called dialectic ‘eristics’, the ‘art of persuasion’, and the ‘art of being right’. It has a practical dimension.
Dialectic examines the forms of dialogue, especially arguments, in which speakers frequently violate logical and ethical rules in order to achieve their goal of argumentation. In pursuing this, Schopenhauer starts from the premise that reason is neutral and can, therefore, be used as a basis for valid reasoning, although it can also be misused. In the case of abuse, speakers instrumentalize reason in order to appear right and prevail against one or more opponents. Even if some texts on dialectic contain normative formulations, Schopenhauer’s goal is not to motivate invalid reasoning, but to protect against it. As such, scientific dialectic is not an ironic or sarcastic discipline, but a protective tool in the service of Enlightenment philosophy. Schopenhauer’s dialectic is far better known than his analytics, although in direct comparison it makes up the smaller part of his writings on logic in general. For this reason, and because most texts on dialectic build on analytics, the following article is not structured around the two sub-disciplines, but around Schopenhauer’s very different texts on logic in general. First, logic is positioned as a discipline within the philosophical system. Then, the Berlin Lectures, his main text on analytics and dialectic, is introduced and followed, in chronological order, by his shorter texts on analytics and dialectic. The final section outlines research topics. 1. Logic and System a. Schopenhauer’s Philosophical System Schopenhauer’s main work is The World as Will and Representation (W I). This work represents the foundation and overview of his entire philosophical system (and also includes a minor treatise on logic). It was first published in 1819 and was accepted as a habilitation thesis at the University of Berlin shortly thereafter. W I was also the basis for the revised and elaborated version—the Berlin Lectures (BL), written in the early 1820s.
It also appeared in a slightly revised version in a second and third edition (1844, 1859) accompanied by a second volume (W II) that functioned as a supplement or commentary. However, none of these later editions were as rich in content as the revision in the BL. All other writings—On the Fourfold Root of the Principle of Sufficient Reason (1813 as a dissertation, 1847), On the Will in Nature (1836, 1854), The Two Fundamental Problems of Ethics (1841, 1860), and Parerga and Paralipomena (1851)—can also be regarded as supplements to the W I or the BL. Schopenhauer’s claim, made in the W I (and also the BL), follows (early) modern and especially Kantian system criteria. He claimed that philosophy aims to depict, in one single system, the interrelationships between all the components that need to be examined. Following Kant, a good or perfect system is determined by the criterion of whether it can describe all components of nature and mind without leaving any gaps, that is, whether all the categories, principles, and topics needed to describe them have been listed. This claim to completeness becomes clear in Schopenhauer’s system, more precisely, in W I or BL, each of which is divided into four books. The first book deals mainly with those topics that would, in contemporary philosophy, be assigned to epistemology, philosophy of mind, philosophy of science, and philosophy of language. The second book is usually understood as covering metaphysics and the philosophy of nature. The third book presents his aesthetics and the fourth book practical philosophy, including topics such as ethics, theory of action, philosophy of law, political philosophy, social philosophy, philosophy of religion, and so forth. b. Normativism or Descriptivism Schopenhauer’s system, as described above (Sect. 1.a), has not been uniformly interpreted in its 200-year history of reception, a factor that has also played a significant role in the reception of his logic.
The differences between the individual schools of interpretation have become increasingly obvious since the 1990s and are a significant subject of discussion in research (Schubbe and Lemanski 2019). Generally speaking, one can differentiate between two extreme schools of interpretation (although not every contemporary study on Schopenhauer can be explicitly and unambiguously assigned to one of the following positions): 1. Normativists understand Schopenhauer’s system as the expression of one single thought that is marked by irrationality, pessimism, obscurantism, and denial of life. The starting point of Schopenhauer’s system is Kant’s epistemology, which provides the foundation for traversing the various subject areas of the system (metaphysics, aesthetics, ethics). However, all topics presented in the system are only introductions (“Vorschulen”) to the philosophy of religion, which Schopenhauer proclaims is the goal of his philosophy, that is, salvation through knowledge (“Erlösung durch Erkenntnis”). Normativists are above all influenced by various philosophical schools or periods of philosophy such as late idealism (Spätidealismus), the pessimism controversy, Lebensphilosophie, and Existentialism. 2. Descriptivists understand Schopenhauer’s philosophy as a logically ordered representation of all components of the world in one single system, without one side being valued more than the other. Depending on the subject, Schopenhauer’s view alternates between rationalism and irrationalism, between optimism and pessimism, between affirmation and denial of life, and so forth. Thus, there is no intended priority for a particular component of the system (although, particularly in later years, Schopenhauer’s statements became more and more emphatic). This school is particularly influenced by those researchers who have studied Schopenhauer’s intense relationship with empiricism, logic, hermeneutics, and neo-Kantianism. c. 
Logic within the System The structure of logic is determined by three sub-disciplines: the doctrines of concepts, judgments, and inferences. However, the main focus of Schopenhauerian logic is not the doctrine of inferences in the sense of logical reasoning and proving but rather in the sense that his logic corresponds with his philosophy of mathematics. According to Schopenhauer, logical reasoning in particular is overrated as people rarely put forward invalid inferences, although they often put forward false judgments. However, the intentional use of fallacies is an exception to this that is therefore studied by dialectics. The evaluation of Schopenhauer’s logic depends strongly on the school of interpretation. Normativists have either ignored Schopenhauer’s logic or identified it with (eristic) dialectic, which in turn has been reduced to a normative “Art of Being Right” or “of Winning an Argument” (see below, Sect. 2.e, 3.c). A relevant contribution to Schopenhauer’s analytics from the school of normativists is, therefore, not known, although there were definitely intriguing approaches to dialectics. As normativism was the more dominant school of interpretation until late in the 20^th century, it shaped the public image of Schopenhauer as an enemy of logical and mathematical reasoning, and so forth. Descriptivists emphasize logic as both the medium of the system and the subject of a particular topic within the W I-BL system. The first book of W I-BL deals with representation and is divided into two sections (Janaway 2014): 1. Cognition (W I §§3–7, BL chap. 1, 2), 2. Reason (W I §§8–16, BL 3–5). Cognition refers to the intuitive and concrete, reason to the discursive and abstract representation. In the paragraphs on cognition, Schopenhauer examines the intuitive representation and its conditions, that is, space, time, and causality, while reason is built on cognition and is, therefore, the ‘representation of representation’. 
Schopenhauer examines three faculties of reason, which form the three sections of these paragraphs: 1. language, 2. knowledge, and 3. practical reason. Language, in turn, is then broken down into three parts: general philosophy of language, logic, and dialectics. (Schopenhauer defines rhetoric as, primarily, the speech of one person to many, and he rarely dealt with it in any substantial detail.) Following the traditional structure, Schopenhauer then divides logic into sections on concepts, judgments, and inferences. Logic thus fulfills a double role in Schopenhauer’s system: it is a topic within the entire system and it is the focus through which the system is organized and communicated. Fig. 1 shows this classification using W I as an example. Figure 1: The first part of Schopenhauer’s system focusing on logic However, this excellent role of logic only becomes obvious when Schopenhauer presents the aim of his philosophy. The task of his system is “a complete recapitulation, a reflection, as it were, of the world, in abstract concepts”, whereby the discursive system becomes a finite “collection [Summe] of very universal judgments” (W I, 109, BL, 551). Since, in Schopenhauer’s system, logic alone clarifies what concepts and judgments are, it is a very important component for understanding his entire philosophy. Schopenhauer, however, vehemently resists an axiomatic approach because in logic, mathematics and, above all, philosophy, nothing can be assumed as certain; rather, every judgment may represent a problem. Philosophy itself must be such that it is allowed to be skeptical about tautologies or laws (such as the laws of thought). This distinguishes it from other sciences. Logic and language cannot, therefore, be the foundation of science and philosophy, but are instead their means and instrument (see below, Sect. 2.c.i).
Through this understanding of the role of logic within the system, the difference between the two schools of interpretation can now also be determined: Normativists deny the excellent role attributed to logic as they regard the linguistic-logical representation as a mere introduction (“Vorschule”) to philosophical salvation at the end of the fourth book of W I or BL. This state of salvation is no longer to be described using concepts and judgments. In contrast, descriptivists stress that Schopenhauer’s entire system aims to describe the world and man’s attitude to the world with the help of logic and language. This also applies to the philosophy of religion and the treatises on salvation at the end of W I and BL. As emphasized by Wittgensteinians in particular, Schopenhauer also shows, ultimately, what can still be logically expressed and what can no longer be grasped by language (Glock 1999, 439ff.). d. Schopenhauer’s Treatises on Logic and Dialectics Schopenhauer’s whole oeuvre is thought to contain a total of seven longer texts on logic. In chronological order, these are the following: (1) In the summer semester of 1811, Schopenhauer attended Gottlob Ernst (“Aenesidemus”) Schulze’s lectures on logic and wrote several notes on Schulze’s textbook (d’Alfonso 2018). As these comments do not represent work by Schopenhauer himself, they are not discussed in this article. The same applies to Schopenhauer’s comments on other books on logic, such as those of Johann Gebhard Ehrenreich Maass, Johann Christoph Hoffbauer, Ernst Platner, Johann Gottfried Kiesewetter, Salomon Maimon et al. (Heinemann 2020), as well as his shorter manuscript notes published in the Manuscript Remains. (Schopenhauer made several references to his manuscripts in BL.) (2) Schopenhauer’s first discussion of logic occurred in his dissertation of 1813, which presented a purely discursive reflection on some components of logic (concepts, truth, and so on).
In particular, his reflections on the laws of thought were emphasized. (3) For the first time in 1819, in § 9 of W I, Schopenhauer distinguished between analytics and dialectics in more detail. In the section on analytics, he specified a doctrine of concepts with the help of a few logic diagrams. However, he wrote in § 9 that this doctrine had already been fairly well explained in several textbooks and that it was, therefore, not necessary to load the memory of the ‘normal reader’ with these rules. In the section on dialectic, he sketches a large argument map for the first time. § 9 was only lightly revised in later editions; however, his last notes in preparation for a fourth edition indicate that he had planned a few more interesting changes and additions. (4) During the 1820s, Schopenhauer took the W I system as a basis, supplemented the missing information from his previously published writings, and developed a system that eliminated some of the shortcomings and ambiguities of W I. The system within these manuscripts then served as a source for his lectures in Berlin in the early 1820s, that is, the BL. In the first book of the BL, there is a treatise on logic the size of a textbook. (5) Eristic Dialectics is the title of a longer manuscript that Schopenhauer worked on in the late 1820s and early 1830s. This manuscript is one of Schopenhauer’s best-known texts, although it is unfinished. It takes many earlier approaches further, but the context to analytics (and to logic diagrams) is missing in this small fragment on dialectics. With the end of his university career in the early 1830s, Schopenhauer’s intensive engagement with logic came to an end. (6) It was not until 1844, in W II, that Schopenhauer supplemented the doctrine of concepts given in W I with a 20-page doctrine of judgment and inference. 
This, however, is no longer compatible with the earlier logic treatises written before 1830, as Schopenhauer repeatedly suggests new diagrammatic logics, which he does not illustrate. Given these changes, the published texts on logic appear inconsistent. (7) In 1851, Schopenhauer once again published a short treatise entitled “Logic and Dialectics” in the second volume of Parerga and Paralipomena. This treatise, however, only deals with some topics from the philosophy of logic in aphoristic style and, otherwise, focuses more strongly on dialectic. Few new insights are found here. Since the rediscovery of the Berlin Lectures by descriptivists, a distinction has been made—in the sense of scholastic subdivision—between Logica Maior (Great Logic) and Logica Minor (Small Logic): Treatises (2), (3), (5), (6) and (7) belong to the Logica Minor and are discussed briefly in Section 3. (For more information see Lemanski 2021b, chap. 1.) The only known treatise on logic written by Schopenhauer that deserves to be called a Logica Maior is a manuscript from the Berlin Lectures written in the 1820s. This book-length text is the most profitable reading of all the texts mentioned. Thus, it is discussed in more detail in Section 2. 2. Schopenhauer’s Logica Maior (the Berlin Lectures) Until the early 21^st century, due to the dominance of the normativists in Schopenhauer scholarship, the BL were considered just a didactic version of W I and were, therefore, almost completely ignored by researchers until intensive research on Schopenhauer’s logic began in the middle of the 2010s. These lectures are not only interesting from a historical perspective, they also propose many innovations and topics that are still worth discussing today, especially in the area of diagrammatic reasoning and logic diagrams.
As Albert Menne, former head of the working group ‘Mathematical Logic’ at the Ruhr-Universität in Bochum, stated: “Schopenhauer has an excellent command of the rules of formal logic (much better than Kant, for example). In the manuscript of his Berlin Lectures, syllogistics, in particular, is thoroughly analyzed and explained using striking examples” (Menne 2002, 201–2). The BL are a revised and extended version of W I made for the students and guests who attended his lectures in Berlin. The belief that such an elaboration only has minor value is, however, not reasonable. Moreover, the extent, the content, and also the above-mentioned distinction between the exoteric-popular-philosophical and the esoteric-academic part of Schopenhauer’s work suggest a different evaluation. In W I, Schopenhauer deals only casually with difficult academic topics such as logic or philosophy of law; at the beginning of the BL, however, he states that these topics are the most important topics to teach prospective academics. Indeed, in the titles of his announcements for the Berlin Lectures, he repeatedly pointed out that he would also focus on logic. Thus, the lecture given in the winter semester of 1821-22 is titled “Dianologie und Logik” (BL, XII; Regehly 2018). There is, therefore, reason to suspect that research has hitherto ignored Schopenhauer’s most important textual version of his philosophical system, as the Berlin Lectures contain his complete system including some of the parts missing from W I, which are very important for the academic interpretation of the system such as logic and philosophy of law. The first edition of the BL was published by Franz Mockrauer in 1913, reprinted by Volker Spierling in 1986, and a new edition was published in four volumes between 2017 and 2022 by Daniel Schubbe, Daniel Elon, and Judith Werntgen-Schmidt. An English translation is not available.
The manuscript of the BL is deposited in the Staatsbibliothek zu Berlin Preussischer Kulturbesitz and can be viewed online at http://sammlungen.ub.uni-frankfurt.de/schopenhauer/content/titleinfo/7187127. The Logica Maior is found in chapter III of the Berlin Lectures (book I). Here, Schopenhauer begins with (a) a treatise on the philosophy of language that announces essential elements of the subsequent theory of concepts. Then, (b) based on the diagrammatic representation of concepts, he develops a doctrine of judgment. (c) The majority of the work then deals with inferences, in which syllogistic, Stoic logic (propositional logic), modal logic, and the foundation and naturalness of logic are discussed. Together with (d) the appendix, these are the topics that belong to analytics or logic in the proper sense. (e) Finally, he addresses several topics related to dialectics. a. Doctrine of Concepts and Philosophy of Language This section mainly deals with BL, 234–260. Schopenhauer begins his discussion of logic with a treatise on language, which is foundational to the subsequent treatise. Several aspects of this part of the Logica Maior have been investigated and discussed to date—namely (i.) translation, use-theory, and contextuality as well as (ii.) abstraction, concretion, and graphs—which are outlined in the following subsections. i. Translation, Use-Theory, and Contextuality Schopenhauer distinguishes between words and concepts: he considers words to be signs for concepts, and concepts abstract representations that rest on other concepts or concrete representations (of something, that is, intuition). In order to make this difference explicit, Schopenhauer reflects on translation, as learning a foreign language and translating are the only ways to rationally understand how individuals learn abstract representations and how concepts develop and change over many generations within a particular language community. 
In his translation theory, Schopenhauer defines three possible situations: (1) The concept of the source language corresponds exactly to the concept of the target language (1:1 relation). (2) The concept of the source language does not correspond to any concept of the target language (1:0 relation). (3) The concept of the source language corresponds only partially to one or more concepts of the target language (a 1:(n – x)/n relation, where n is a natural number and x < n). For Schopenhauer, the last relation is the most interesting one: it occurs frequently, causes many difficulties in the process of translation or language learning, and is the relation with which one can understand how best to learn words or the meaning of words. Remarkably, Schopenhauer developed three theories, arguments, or topics regarding the 1:(n – x)/n relation that have become important in modern logic, linguistics, and analytical philosophy, namely (a) spatial logic diagrams, (b) use-theory of meaning, and (c) the context principle. (a)–(c) are combined in a passage of text on the 1:(n – x)/n translation: [T]ake the word honestum: its sphere is never hit concentrically by that of the word which any German word designates, such as Tugendhaft, Ehrenvoll, anständig, ehrbar, geziemend [that is, virtuousness, honorable, decent, appropriate, glorious and others]. They do not all hit concentrically: but as shown below: That is why one learns not the true value of the words of a foreign language with the help of a lexicon, but only ex usu [by using], by reading in old languages and by speaking, staying in the country, by new languages: namely it is only from the various contexts in which the word is found that one abstracts its true meaning, finds the concept that the word designates. [31, 245f.]
To what extent the penultimate sentence corresponds to what is called the ‘use theory of meaning’, the last sentence of the quote to the so-called ‘context principle’, and to what extent these sentences are consistent with the corresponding theories of 20^th-century philosophy of language is highly controversial. Lemanski (2016; 2017, 2021b) and Dobrzański (2017; 2020) see similarities with the formulations of, for example, Gottlob Frege and Ludwig Wittgenstein. However, Schroeder (2012) and Schumann (2020) reject the idea of this similarity, and Weimer (1995; 2018) sees only a representationalist theory of language in Schopenhauer. Dümig (2020) contradicts a use theory and a context principle for quite different reasons, placing Schopenhauer closer to mentalism and cognitivism, while Koßler (2020) argues for the co-existence of various theories of language in Schopenhauer’s oeuvre. ii. Abstraction, Concretion, and Graphs With (b) and (c) Schopenhauer not only comes close to the modern philosophy of ordinary language, but he may also be the first philosopher in history to have used (a) logic diagrams to represent semantics or ontologies of concepts (independent of their function in judgments). In his philosophy of language, he also uses logic diagrams to sketch the processes of conceptual abstraction. Schopenhauer intends to describe processes of abstraction that are initially based on concrete representation, that is, the intuition of a concrete object, from which increasingly abstract concepts have formed over several generations within a linguistic community. Figure 2 (SBB-IIIA, NL Schopenhauer, Fasz. 24, 112^r = BL, 257) For example, Fig. 2 shows the ‘spheres’ of the words ‘grün’ (‘green’), ‘Baum’ (‘tree’), and ‘blüthetragend’ (‘flower-bearing’) using three circles. 
The diagram represents all combinations of subclasses by intersecting the spheres of the concepts that are to be analyzed. There is a recognizable relationship with Venn diagrams here, as Schopenhauer uses the combination of the so-called ‘three-circle diagram’, a primary diagram in Venn’s sense. Schopenhauer distinguishes between an objective and a conceptual abstraction, as the following example illustrates: (1) GTF denotes a concept created by the objective abstraction from an object of intuitive representation, that is, a concretum. The object this was abstracted from belongs to the set of objects that is a tree that bears flowers and is green. All further steps of abstraction are conceptual abstractions or so-called ‘abstracta’. In the course of generations, language users have recognized that there are also (2) representations that can only be described with GF, but not with T (for example, a common daisy). In the next step (3), the concept F was excluded so that the abstract representation of G was formed (for example, bryophytes). Finally, (4) a purely negative concept was formed, whose property is neither G nor T nor F. This region lies outside the conceptual sphere and, therefore, does not designate an abstractum or a concept anymore: it is merely a word without a definite meaning, such as ‘the absolute’, ‘substance’, and so forth (compare Xhignesse 2020).

Fig. 3: Interpretation of Fig. 2

In addition to the three-circle diagram (Fig. 2) and the eight classes, the interpretation in Fig. 3 includes a graph illustrating the four steps mentioned above: (1) corresponds to v[1], (2) is the abstraction e[1] from v[1] to v[2], (3) is the abstraction e[2] from v[2] to v[3], and (4) is the abstraction e[3] from v[3] to v[4]. In this example, the graph can be interpreted as directed, with v[1] as the source vertex and v[4] as the sink vertex.
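The eight classes and the abstraction graph can be sketched with sets. This is a modern illustration: the labels G, T, F (green, tree, flower-bearing) and the path from v1 to v4 follow the figures as described above, while the encoding itself is our own:

```python
from itertools import product

# The three 'spheres' of Fig. 2: green (G), tree (T), flower-bearing (F).
# Each of the 2**3 = 8 classes of the three-circle diagram is encoded as the
# set of labels that hold inside it (the encoding is ours, not Schopenhauer's).
regions = [frozenset(label for label, inside in zip("GTF", bits) if inside)
           for bits in product([True, False], repeat=3)]
assert len(regions) == 8

# The abstraction chain of Fig. 3 as a directed path v1 -> v2 -> v3 -> v4,
# each edge e1..e3 dropping one property from the concretum GTF.
chain = [frozenset("GTF"), frozenset("GF"), frozenset("G"), frozenset()]
edges = list(zip(chain, chain[1:]))
for step, (src, dst) in enumerate(edges, 1):
    assert dst < src  # each abstraction drops a property of the concept
    print("e%d: %s -> %s" % (step, sorted(src), sorted(dst)))
```

The final, empty vertex corresponds to the purely negative region outside all three spheres: a word without definite conceptual content.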
However, Schopenhauer also uses these diagrams in the opposite direction, that is, not only for abstraction but also for concretion. In both directions, the vertices in the graph represent concepts, whereas the edges represent abstraction or concretion. On account of the concretion, Schopenhauer has also been associated with reism, concretism theory, and reification of the Lwów-Warsaw School (compare Dobrzański 2017; Lemanski and Dobrzański 2020).

b. Doctrine of Judgments

This section mainly focuses on BL, 260–293. Even though Schopenhauer had already used logic diagrams in his doctrine of concepts (see above, Sect. 2.a), he explicitly introduced them in his doctrine of judgment, making reference to Euler and others. Nevertheless, in some cases Schopenhauer’s logic diagrams are fundamentally different from Euler diagrams, so in the following, the first subsection defines the expression (i) ‘Schopenhauer diagrams’ or ‘relational diagrams’. Then subsection (ii) outlines how Schopenhauer applies these diagrams to Stoic logic and how they relate to oppositional geometry. Finally, subsection (iii) discusses Schopenhauer’s theory of conversion and his use of the term metalogic, and subsection (iv) discusses his diagrammatic interpretation of the analytic-synthetic distinction.

i. Relational Diagrams

The essential feature of Schopenhauer’s Logica Maior is that, for the most part, it is based on a diagrammatic representation. Schopenhauer learned the function and application of logic diagrams, at the latest, in Gottlob Ernst Schulze’s lectures. This is known because, although Schulze did not publish any diagrams in his textbook, Schopenhauer drew Euler diagrams and made references to Leonhard Euler in his notes on Schulze’s lectures (d’Alfonso 2018). Thus, as early as 1819, Schopenhauer published a logic of concepts based on circle diagrams in W I, § 9 (see below, Sect. 3.b) that he worked through in the Logica Maior of the Berlin Lectures (BL, 272 et seqq.).
‘Diagrammatic representation’ and ‘logic diagrams’ are modern expressions for what Schopenhauer called ‘visual representation’ or ‘schemata’. Schopenhauer’s basic insight is that the relations of concepts in judgments are analogous to the relations of circular lines in Euclidean space. One, therefore, only has to go through all possible circular relations and examine them according to their analogy to concept relations in order to obtain the basic forms of judgment on which all further logic is built. With critical reference to Kant, Schopenhauer calls his diagrammatic doctrine of judgment a ‘guide of schemata’ (Leitfaden der Schemata). As the following diagrams are intended to represent the basic relations of all judgments, they can also be called ‘relational diagrams’ (RD), as per Fig. 4.

Fig. 4.1 (RD1): All R is all C. All C is all R.

Fig. 4.2 (RD2): All B is A. Some A is B. Nothing that is not A is B. If B then A.

Fig. 4.3 (RD3): No A is S. No S is A. Everything that is S is not A. Everything that is A is not S.

Fig. 4.4 (RD4): All A is C. All S is C. Nothing that is not C is A. Nothing that is not C is S.

Fig. 4.5 (RD5): Some R is F. Some F is R. Some R is not F. Some F is not R.

Fig. 4.6 (RD6): B is either o or i.

All six RDs form the basis on which to build all logic, that is, both Aristotelian and Stoic logic. Schopenhauer states that geometric forms were first used by Euler, Johann Heinrich Lambert, and Gottfried Ploucquet to represent the four categorical propositions of Aristotelian syllogistics: All x are y (RD2), Some x are y (RD5), No x are y (RD3), and Some x are not y (RD5). These three diagrams, together with RD1, result in the relations that Joseph D. Gergonne described almost simultaneously in his famous treatise of 1817 (Moktefi 2020). RD4 may have been inspired by Kant and Karl Christian Friedrich Krause, although there are clear differences in interpretation here. However, Fig.
4.6 is probably Schopenhauer’s own invention, even though there were many precursors to these RDs prior to and during the early modern period that Schopenhauer did not know about. On account of the various influences, it might be better to speak of ‘Schopenhauer diagrams’ or ‘relational diagrams’ rather than of ‘Euler diagrams’ or ‘Gergonne relations’ and so forth. Schopenhauer shows how each RD can express more than just one aspect of information. This ambiguity can be evaluated in different ways. In contemporary formal approaches, the ambiguity of logic diagrams is often considered a deficiency. In contrast, Schopenhauer considers this ambiguity more an advantage than a deficiency, as only a few circles in one diagram can represent a multitude of complex linguistic expressions. In this way, Schopenhauer can be seen as a precursor of contemporary theories about the so-called ‘observational advantages’ of diagrams. As meaning only arises through use and context (see above) and as axioms can never be the starting point of scientific knowledge (see above), the ambiguity of logic diagrams is no problem for Schopenhauer. For him, a formal system of logic is unnecessary. He wanted to analyze ordinary and natural language with the help of diagrams.

ii. Stoic Logic and Oppositional Geometry

Nowadays, it is known that the relational diagrams described above can be transformed, under the definition of an arbitrary Boolean algebra, into diagrams showing the relations contrariety, contradiction, subcontrariety, and subalternation. The best-known of these diagrams, which are now gathered under the heading of ‘oppositional geometry’, is the square of opposition. Although no square of opposition has yet been found in Schopenhauer’s manuscripts, he did associate some of his RDs with the above-mentioned relations and in doing so also referred to “illustrations” (BL, 280, 287) that are no longer preserved in the manuscripts.
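The ‘observational advantage’ Schopenhauer attributes to diagrams, namely that a few circles fix several sentences at once, can be reproduced with a small set-semantic sketch. The extensions below are invented, and X and Y simply stand for the two spheres of a relational diagram:

```python
def readings(X, Y):
    """List which of the four categorical forms hold for extensions X and Y."""
    out = []
    if X and X <= Y:
        out.append("All X is Y")
    if not (X & Y):
        out.append("No X is Y")
    if X & Y:
        out.append("Some X is Y")
    if X - Y:
        out.append("Some X is not Y")
    return out

# Invented extensions realizing three of the basic sphere relations:
print(readings({1, 2}, {1, 2, 3}))  # RD2-style inclusion: ['All X is Y', 'Some X is Y']
print(readings({1}, {2}))           # RD3-style disjointness: ['No X is Y', 'Some X is not Y']
print(readings({1, 2}, {2, 3}))     # RD5-style overlap: ['Some X is Y', 'Some X is not Y']
```

One spatial relation between two circles thus settles the truth of several categorical sentences simultaneously, which is the multiplicity of readings listed for each RD above.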
Schopenhauer went beyond Aristotelian logic with RD2 and RD6 and also attempted to represent Stoic logic with them, which in turn can be understood as a precursor of propositional logic (BL, 278–286). RD2 expresses hypothetical judgments (if …, then …), RD6 disjunctive judgments (either … or …). In particular, researchers have studied the RD6 diagrams, also called ‘partition diagrams’, more intensively. For Schopenhauer, the RDs for Stoic logic are similar to syllogistic diagrams. However, quantification does not initially play a major role here, as the diagrams are primarily intended to express transitivity (hypothetical judgments) or difference (disjunction). Only in his second step does Schopenhauer add quantification to the diagrams again (BL, 287 et seqq.). In this context, Schopenhauer treats the theory of oppositions on several pages (BL, 284–289); however, he merely indicates that the diagrammatic representation of oppositions would have to be developed further. The basic RD6 in Fig. 4.6 shows a simple contradiction between the concepts o and i. However, as the RDs given above are only basic diagrams, they can be extended according to their construction principles. Thus, there is also a kind of compositional approach in Schopenhauer’s work. For example, one can imagine that a circle, such as that given in RD6, is separated not by one line but by two, making each compartment a part of the circle that excludes all others. An example of this can be seen in Fig. 5, alongside its corresponding opposition diagram, a so-called ‘strong JSB hexagon’ (Demey and Lemanski 2021).

Figure 5: Partition diagram and Logical Hexagon (Aggregatzustand = state of matter, fester = solid, flüßiger = liquid, elastischer = elastic)

An example of a more complex Eulerian diagram of exclusive disjunctions used by Schopenhauer is illustrated in Fig. 6, which depicts Animalia, Vertebrata, Mammals, Birds, Reptiles, Pisces, Mollusca, ArtiCulata, and RaDiata.
These terms are included as species in genera and are mutually exclusive. While the transformation into the form of oppositional geometry can be found in Lemanski and Demey (2021), Fig. 6 expresses Schopenhauer’s judgments such as:

If something is A, it is either V or I.
If something is V, it is either M or B or R or P.
If something is A, but not V, it is either M or C or D.

Fig. 6: Schopenhauer’s Animalia-Diagram

Schopenhauer here notes that the transition between the logic of concepts, judgments, and inferences is fluid. The partition diagrams only show concepts or classes, but judgments can be read through their relation to each other, that is, in a combination of RD2 and RD3. However, as the relation of three concepts to each other can already be understood as inference, the class logic is already, in most cases, a logic of inferences. For example, the last judgment mentioned above could also be understood as an enthymemic inference (BL 281): Something is A and not V. (If A, but not V, then M or C or D.) Thus, it is either M or C or D. Schopenhauer’s partition diagrams have been adopted and applied in mathematics, especially by Adolph Diesterweg (compare Lemanski 2022b).

iii. Conversion and Metalogic

In his doctrine of judgments, Schopenhauer still covers all forms of conversion and laws of thought, in which he partly uses RDs, but partly also an equality notation (=) inspired by 18^th-century Wolffians. The notation for the conversio simpliciter given in Fig. 4.5 is a convenient example of the doctrine of conversion:

universal negative: No A = B. No B = A.
particular affirmative: Some A = B. Some B = A. (BL, 293)

Following this example, Schopenhauer demonstrates all the rules of the traditional doctrine of conversion. The equality notation is astonishing as it comes close to a form of algebraic logic that was developed later by Drobisch and others (Heinemann 2020). Furthermore, the first three laws of thought (BL, 262 et seqq.)
correspond to the algebraic logic of the late 19^th century, namely:

(A) law of identity: A = A,
(B) law of contradiction: A = -A = 0,
(C) law of excluded middle: A aut = b, aut = non b,
(D) law of sufficient reason: divided into (1) the ground of becoming (Werdegrund), (2) the ground of cognition (Erkenntnisgrund), (3) the ground of being, and (4) the ground of action.

Only the second class of the law of sufficient reason relates to logic. This ground of cognition (Erkenntnisgrund) is then divided into four further parts, which, together, form a complex truth theory. Schopenhauer distinguishes between (1) logical truth, (2) empirical truth, (3) metaphysical truth, and (4) metalogical truth. The last form is of particular interest (Béziau 2020). Metalogical truth is a reflection on the four classes of the principle of sufficient reason mentioned above. A judgment can be true if the content it expresses is in harmony with one or more of the listed laws of thought. Although some parts of modern logic have broken with these basic laws, Schopenhauer is the first logician to describe the discipline entitled “metalogy” in a similar way to Nicolai A. Vasiliev, Jan Łukasiewicz, and Alfred Tarski.

iv. Analytic-Synthetic Distinction and the Metaphor of the Concept

Another peculiarity of Schopenhauer’s doctrine of judgments is the portrayal of analytic and synthetic judgments. In Kant research, the definition of analytic and synthetic judgments has been regarded as problematic and highly worthy of discussion since Willard Van Orman Quine, at the latest. This is particularly because Kant, as Quine and some of his predecessors have emphasized, used the unclear metaphors of “containment,” that is, “enthalten” (Critique of Pure Reason, Intr. IV), and “actually thinking in something,” that is, “in etw. gedacht werden” (Prolegomena, §2b), to define what analytic and synthetic judgments are.
In the section of the Berlin Lectures on cognition, Schopenhauer introduces the distinction between analytic and synthetic judgments as follows: A distinction is made in judgment, more precisely, in the proposition, between subject and predicate, that is, between that about which something is said and that which is said about it. Both are concepts. Then the copula. Now the proposition is either mere subdivision (analysis) or addition (synthesis), which depends on whether the predicate was already thought of in the subject of the proposition, or is to be added only in consequence of the proposition. In the first case, the judgment is analytic, in the second synthetic. All definitions are analytic judgments. For example:

gold is yellow: analytic
gold is heavy: analytic
gold is ductile: analytic
gold is a chemically simple substance: synthetic (BL, 123)

Here, Schopenhauer initially adheres strictly to the expression of ‘actually thinking in something’ (‘mitdenken’, that is, analytic) or ‘adding something’ (‘hinzudenken’, that is, synthetic). However, he explains in detail that the distinction between the two forms of judgment is relative, as it often depends on the knowledge and experience of the person making the judgment. An expert will, for example, classify many judgments from his field of knowledge as analytic, while other people would consider them to be synthetic. This is because the expert knows more about the characteristics of a subject than someone who has never learned about these things. In this respect, Schopenhauer is an advocate of ontological relativism. However, in the sense of transcendental philosophy, he suggests that every area of knowledge must have analytic judgments that are also a priori. For example, according to Kant, judgments such as “All bodies are extended” are analytic.
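On the diagrammatic reading, a judgment comes out analytic when the subject’s sphere lies wholly inside the predicate’s sphere. A crude extensional sketch of the gold examples follows; the extensions are invented, and the sketch deliberately ignores Schopenhauer’s point that the classification is relative to the judger’s knowledge:

```python
def is_analytic(subject_sphere, predicate_sphere):
    """A judgment counts as analytic here when the subject's sphere is
    contained in the predicate's sphere (RD2-style containment)."""
    return subject_sphere <= predicate_sphere

# Invented extensions: which sample things fall under each concept.
gold   = {"nugget", "ring"}
yellow = {"nugget", "ring", "lemon"}   # everything gold is (assumed) yellow
heavy  = {"nugget", "anvil"}           # not every gold item listed as heavy here

print(is_analytic(gold, yellow))  # True  -> 'gold is yellow' comes out analytic
print(is_analytic(gold, heavy))   # False -> comes out synthetic on these extensions
```

A judger with different knowledge would draw the spheres differently, which is exactly the relativity the passage describes.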
Even more interesting than these explanations taken from the doctrine of cognition (BL, 122–127) is the fact that Schopenhauer takes up the theory of analytic and synthetic judgments again in the Logica Maior (BL, 270 et seqq.). Here, Schopenhauer explains what the expression of ‘actually thinking in something’ (‘mitdenken’), which he borrowed from Kant, means. ‘Actually thinking in something’ can be translated with the metaphor of ‘containment’, and these expressions are linguistic representations of logic diagrams or RDs. To understand this more precisely, one must once again refer to Schopenhauer’s doctrine of the concept (BL, 257 et seqq.). For Schopenhauer, there is no such thing as a ‘concept of the concept’. Rather, the concept itself is a metaphor that refers to containment. According to Schopenhauer, this is already evident in the etymology of the expression ‘concept’, which illustrates that something is being contained: horizein (Greek), concipere (Latin), begreifen (German). Concepts conceive of something linguistically, just as a hand grasps a stone. For this reason, the concept itself is not a concept, but a metaphor, and RDs are the only adequate means for representing the metaphor of the concept (Lemanski 2021b, chap. 2.2). If one says that the concept ‘gold’ includes the concept ‘yellow’, one can also say that ‘gold’ is contained in ‘yellow’ (BL, 270 et seqq.). Both expressions are transfers from concrete representation into abstract representation, that is, from intuition into language. To explain this intuitive representation, one must use an RD2 (Fig. 4.2) such as is given in Fig. 7 (BL, 270).

c. Doctrine of Inferences

This section mainly deals with BL, 293–356. As one can see from the page references, the doctrine of inferences is the longest section of the Logica Maior in the Berlin Lectures.
Herein, Schopenhauer (i) presents an original thesis for the foundation of logic and (ii) develops an archaic Aristotelian system of inferences, (iii) whose validity he sees as confirmed by the criterion of naturalness. In all three areas, logic diagrams or RDs (this time following mainly Euler’s intention) play a central role.

i. Foundations of Logic

Similar to the Cartesians, Schopenhauer claims that logical reasoning is innate in man by nature. Thus, the only purpose academic logic has is to make explicit what everyone implicitly masters. In this respect, the proof of inferential validity can only be a secondary topic in logic. In other words, logic is not primarily a doctrine of inference, but primarily a doctrine of judgment. Schopenhauer sums this up by saying that nobody who seriously intends to think correctly is able to draw invalid inferences by himself, for himself, without realizing it (BL, 344). For him, such seriously produced invalid inferences are a great rarity (in ‘monological thinking’), but false judgments are very common. Furthermore, learning logic does not secure one against false judgments. Schopenhauer, therefore, does not consider proving inferences to be the main task of logic; rather, logic should help one formulate judgments and correctly grasp conceptual relations. However, when it comes to proof, intuition plays an important role. Schopenhauer takes up an old skeptical argument in his doctrine of judgments and inference that problematizes the foundations of logic: (1) Conclusions arrived at by deduction are only explicative, not ampliative, and (2) deductions cannot be justified by deductions. Thus, no science can be thoroughly provable, no more than a building can hover in the air, he says (BL, 527). Schopenhauer demonstrates this problem by referring to traditional proof theories. In syllogistics, for example, non-perfect inferences are reduced to perfect ones, more precisely, the so-called modus Barbara and the modus Celarent.
Yet, why are the modes Barbara and Celarent considered perfect? Aristotle, for example, justifies this with the dictum de omni et nullo, while both Kantians and skeptics, such as Schopenhauer’s logic teacher Schulze, justify the perfection of Barbara and Celarent as well as the validity of the dictum de omni et nullo with the principle nota notae est nota rei ipsius. However, Schopenhauer goes one step further and explains that all discursive principles fail as the foundations of science because an abstract representation (such as a principle, axiom, or grounding) cannot be the foundation for one of the faculties of abstract representation (logic, for example). If one, nevertheless, wants to claim such a foundation, one inevitably runs into a regressive, a dogmatic, or a circular argument (BL, 272). For this reason, Schopenhauer takes a step back in the foundation of logic and offers a new solution that he repeats later as the foundation of geometry: Abstract representations are grounded on concrete representations, as abstract representations are themselves “representations of representations” (see above, Sect. 2.a.ii). The concrete representation is a posteriori or a priori intuition, and both forms can be represented by RDs or logic diagrams. The abstract representation of logic is thus justified by the concrete representation of intuition, and the structures of intuition correspond to the structures of logic. For Schopenhauer, this argument can be proven directly using spatial logic diagrams (see above, Sect. 2.b.ii). The validity of an inference can, thus, be shown in concreto, while most abstract proofs illustrated using algebraic notations are not convincing.
As Schopenhauer demonstrates in his chapters on mathematics, abstract-discursive proofs are not false or useless for certain purposes, but they cannot achieve what philosophers, logicians, and mathematicians aim to achieve when they ask about the foundations of rational thinking (compare Lemanski 2021b, chap. 2.3). This argument can also be understood as part of Schopenhauer’s reism or concretism (see above, Sect. 2.a.ii).

ii. Logical Aristotelianism and Stoicism

As described above, Schopenhauer’s focus is not on proving the validity of inferences, but on the question of which logical systems are simpler, more efficient, and, above all, more natural. Although he always uses medieval mnemonics, he explains that the scholastic system attributes only a name-giving, not a proof-giving, function to inferences. On the one hand, he is arguing against Galen and many representatives of Arabic logic when he claims that the fourth figure in syllogistics has no original function. On the other hand, he is also of the opinion that Kant overstepped the mark by criticizing all figures except the first one. The result of this detailed critique, which he carried out on all 19 valid modes and for all syllogistic figures, is proof of the validity of the archaic Aristotelian Organon. Therefore, Schopenhauer claims that Aristotle is right when he establishes three figures in syllogistics and that he is also right when it comes to establishing all general and special rules. The only innovation that Schopenhauer accepts in this respect is that logic diagrams show the abstract rules and differences between the three figures concretely and intuitively. According to Schopenhauer, a syllogistic inference is the realization of the relationship between two concepts formerly understood through the relationship of a third concept to each of them (BL, 296). Following the traditional doctrine, Schopenhauer divides the three terms into mAjor, mInor, and mEdius.
He presents the 19 valid syllogisms as follows (BL, 304–321):

1^st Figure
All E are A, all I is E, thus all I is A.
No E is A, all I is E, thus no I is A.
All E is A, some I is E, thus some I is A.
No E is A, some I is E, thus some I is not A.

2^nd Figure
No A is E, all I is E, thus no I is A.
All A is E, no I is E, thus no I is A.
No A is E, some I is E, thus some I is not A.
All A is E, some I is not E, thus some I is not A.

3^rd Figure
All E is A, all E is I, thus some I is A.
No E is A, all E is I, thus some I is not A.
Some E is A, all E is I, thus some I is A.
All E is A, some E is I, thus some I is A.
Some E is not A, all E is I, thus some I is not A.
No E is A, some E is I, thus some I is not A.

4^th Figure ≈ 1^st Figure
No A is E, all E is I, thus some I is not A.
Some A is E, all E is I, thus some I is A.
All A is E, no E is I, thus no I is A.
All A is E, all E is I, thus some I is A.
No A is E, some E is I, thus some I is not A.

Remarkably, Schopenhauer transfers the method of dotted lines from Lambert’s line diagrams to his Euler-inspired RD3 (Moktefi 2020). These dotted lines, as in the case of Bocardo, are used to indicate the ambiguity of a judgment. Nevertheless, whether Schopenhauer applies this method consistently is a controversial issue (compare BL, 563 and what follows). In addition to Aristotelian syllogistics, Schopenhauer also discusses Stoic logic (BL, 333–339). However, Schopenhauer does not use diagrams in this discussion. He justifies this decision by saying that, here, one is dealing with already finished judgments rather than with concepts. Yet, this seems strange as, at this point in the text, Schopenhauer had already used diagrams in his discussion of the doctrine of judgment, which also represented inferences of Stoic logic. However, as the method was not yet well developed, it can be assumed that Schopenhauer failed to represent the entire Stoic logic with the help of RDs.
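The modes listed above can be checked mechanically by testing every assignment of extensions over a small universe. This brute-force semantic check is a modern sketch, not Schopenhauer’s intuitive method; note that the particular conclusions of the third figure presuppose the traditional assumption of non-empty terms:

```python
from itertools import product

def extensions(universe):
    """All non-empty subsets of a small universe (traditional syllogistic
    assumes non-empty terms, which the third-figure modes rely on)."""
    subs = [frozenset(x for x, keep in zip(universe, bits) if keep)
            for bits in product([0, 1], repeat=len(universe))]
    return [s for s in subs if s]

every  = lambda X, Y: X <= Y          # All X is Y
no     = lambda X, Y: not (X & Y)     # No X is Y
some   = lambda X, Y: bool(X & Y)     # Some X is Y
someno = lambda X, Y: bool(X - Y)     # Some X is not Y

def valid(mode, universe=range(3)):
    """mode(A, E, I) returns (premises_hold, conclusion_holds); a mode is valid
    iff no assignment makes the premises true and the conclusion false."""
    return all(concl
               for A, E, I in product(extensions(universe), repeat=3)
               for prem, concl in [mode(A, E, I)]
               if prem)

# First figure: All E are A, all I is E, thus all I is A (Barbara).
barbara = lambda A, E, I: (every(E, A) and every(I, E), every(I, A))
# First figure: No E is A, some I is E, thus some I is not A (Ferio).
ferio   = lambda A, E, I: (no(E, A) and some(I, E), someno(I, A))
# Third figure: All E is A, all E is I, thus some I is A (Darapti).
darapti = lambda A, E, I: (every(E, A) and every(E, I), some(I, A))
# Not among the 19: All E is A, all I is A, thus all I is E (invalid).
bogus   = lambda A, E, I: (every(E, A) and every(I, A), every(I, E))

print(valid(barbara), valid(ferio), valid(darapti), valid(bogus))  # True True True False
```

The letters A, E, I follow the mAjor/mEdius/mInor labeling used in the list; any of the remaining modes can be encoded and checked the same way.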
Instead, in the chapter on Stoic logic, one finds a characterization of the modus ponendo ponens and the modus tollendo tollens (hypothetical inferences), as well as the modus ponendo tollens and the modus tollendo ponens (disjunctive inferences). In addition, he focused more intensively on dilemmas.

iii. Naturalness in Logic

One of the main topics in the doctrine of inferences is the naturalness of logic. For Schopenhauer, there are artificial logics, such as the mnemonics of scholastic logic or the mathematical demand for axiomatics, but there are also natural logics in certain degrees. Schopenhauer agrees with Kant that the first figure of Aristotelian syllogistics is the most natural one, “in that every thought can take its form” (BL, 302). Thus, the first figure is the “simplest and most essential rational operation” (ibid.), and most people unconsciously use one of the modes of the first figure for logical reasoning every day. In contrast to Kant, however, Schopenhauer does not conclude that all other figures are superfluous. For in order to make it clear that one wants to express a certain thought, one rightly falls back on the second and third figures. To determine the naturalness of the first three figures, Schopenhauer examines the function of the inferences in everyday reasoning and, thus, asks what thought they express. Similar to Lambert, Schopenhauer states that we use the first figure to identify characteristics or decisions. We use the second figure if we want to make a difference explicit (BL, 309), while the third figure is used to express or prove a paradox, anomaly, or exception. Schopenhauer gives each of the three figures its own name according to the thought operation expressed with the figure: the first figure is the “Handhabe” (manipulator), the second the “Scheidewand” (septum), and the third the “Anzeiger” (indicator) (BL, 316).
As it is natural for humans to make such thought operations explicit, the first three figures are also part of a natural logic. Schopenhauer also explains that each of these three figures has its own enthymemic form and that the function of the medius differs with each figure (BL, 329). However, Schopenhauer argues intently against the fourth figure, which was introduced by Galen and then made public by Arabic logicians. It has no original function and is only the reversal of the first figure; that is to say, it does not indicate a decision itself, only evidence of a decision. Moreover, the fourth figure does not correspond to the natural grammatical structure through which people usually express their daily life. It is more natural when speakers put the narrower term in the place of the subject and the broader one in the place of the predicate. Although a reversal is possible, which allows the reversal from the first to the fourth figure, this reversal is unnatural. For example, it is more natural to say “No Bashire is a Christian” than to say “No Christian is a Bashire” (BL, 322). In the chapter on Stoic logic, the intense discussion of naturalness is lost, yet Schopenhauer points out here and elsewhere that there are certain forms of propositional logic that appear natural in the sciences and everyday language. Mathematicians, for example, tend to use the modus tollendo ponens in proof techniques, even though this technique is prone to error, as the tertium non datur does not apply universally (BL, 337, 512f.). As a result of such theses, Schopenhauer is often associated with intuitionism and the systems of natural deduction (compare Schueler et al. 2020; Koetsier 2005; Belle 2021).

d. Further Topics of Analytic

In addition to the areas mentioned thus far, the BL offer many other topics and arguments that should be of interest to many, not only researchers of the history and philosophy of logic.
The major topics include, for example, a treatise on the Aristotelian rules, reasons, and principles of logic (BL, 323–331), a treatise on sorites (BL, 331–333), a treatise on modal logic (BL, 339–340), a further chapter on enthymemes (BL, 341–343), and a chapter on sophisms and false inferences (BL, 343–356). In the following sections, Schopenhauer’s views on (i) the history and development of logic, (ii) the parallels between logic and mathematics, and (iii) hermeneutics are discussed. As the chapter on sophisms and so forth is also used in dialectics, it is presented in Sect. 2.e.

i. Schopenhauer’s History of Logic

A history of logic in the narrower sense cannot be found in Schopenhauer’s treatise on logic in general (BL, 356 and what follows). However, Schopenhauer discusses the origin and development of Aristotelian logic in a longer passage on the question raised by logical algebra in the mid-18^th century (and then prominently denied by Kant): Has there been any progress in logic since Aristotle? Naturally, as an Aristotelian and Kantian, Schopenhauer answers this question in the negative but admits that there have been “additions and improvements” to logic. Schopenhauer argues that Aristotle wrote the first “scientific logic”, but admits that there were earlier logical systems and claims that Aristotle united the attempts of his precursors into one scientific system. Schopenhauer also suggests that there may have been an early exchange between Indian and Greek logic. The additions and improvements to Aristotelian logic concern a total of five points (Pluder 2020), some of which have already been mentioned above: (1) the discussion of the laws of thought; (2) the scholastic mnemonic technique; (3) propositional logic; (4) Schopenhauer’s own treatise on the relation between intuition and concept; and (5) the fourth figure, introduced by Galen.
Schopenhauer considers some of these additions to be improvements (1, 3, 4) and others to be deteriorations (2 and especially 5). It seems strange that Schopenhauer does not refer once again to the use of spatial logic diagrams (BL, 270).

ii. Logic and Mathematics

Another extensive chapter of the BL, which is closely related to logic, discusses mathematics. This is no surprise, as Schopenhauer spent a semester studying mathematics with Bernhard Friedrich Thibaut in Göttingen and systematically worked through the textbooks by Franz Ferdinand Schweins, among others (Lemanski 2022b). As already discussed above, one advantage of the BL is that Schopenhauer took W I as a basis, expanded parts of it considerably, and incorporated into it some essential topics from his supplementary works. Thus, before the treatise on mathematics, one finds a detailed presentation of the four roots of sufficient reason, which Schopenhauer covered in his dissertation. Schopenhauer’s representation of mathematics concentrates primarily on geometry. His main thesis is that abstract-algebraic proofs are possible in geometry but, like logic, they lead to a circulus vitiosus, a dogma, or an infinite regress when proving their foundation (see above, Sect. 2.c.i). Therefore, as in logic, Schopenhauer argues that abstract proofs should be dispensed with and that concrete-intuitive diagrams and figures should be regarded as the ultimate justification of proofs instead. Thus, he argues that feeling (Gefühl) is an important element, even possibly the basis, of proofs for geometry and logic (Follessa 2020). However, this feeling remains intersubjectively verifiable with the help of logic diagrams and geometric figures.
Schopenhauer discusses the main thesis of the text, in particular, in connection with the Euclidean system in which one finds both kinds of justification: discursive-abstract proofs, constructed with the help of axioms, postulates, and so forth, and concrete-intuitive proofs, constructed with the help of figures and diagrams. Similar to some historians of mathematics in the 20^th century and some analytic philosophers in the 21^st century, Schopenhauer believed that Euclid was seduced by rationalists into establishing an axiomatic-discursive system of geometry, although the validity of the propositions and problems was sufficiently justified by the possibility of concrete-intuitive representation (Béziau 1993). Schopenhauer goes so far as to attribute Euclid’s axiomatic system to dialectic and persuasion. With his axiomatic system, Euclid could only show that something is like that (knowing-that), while the visual system can also show why something is like that (knowing-why). Schopenhauer demonstrates this in the BL with reference to Euclid’s Elements I 6, I 16, I 47, and VI 31. He develops his own picture proof for Pythagoras’s theorem (Bevan 2020), though he then corrects it over the years (Costanzo 2020). Given the probative power of the figures in geometry, there are clear parallels to the function of Schopenhauer diagrams in logic. Schopenhauer can, therefore, be regarded as an early representative of “diagrammatic proofs” and “visual reasoning” in mathematics. Schopenhauer’s mathematics has been evaluated very differently in its two-hundred-year history of reception (Segala 2020, Lemanski 2021b, chap. 2.3). While Schopenhauer’s philosophy of geometry was received very positively until the middle of the 19^th century, the Weierstrass School marks the beginning of a long period in which Schopenhauer’s approach was labeled a naive form of philosophy of mathematics. 
It was only with the advent of the so-called 'proof without words' movement and the rise of the so-called spatial or visual turn in the 1990s that Schopenhauer became interesting within the philosophy of mathematics once again (Costanzo 2020, Bevan 2020, Lemanski 2022b). iii. Hermeneutics The exploration and analysis of hermeneutics in Schopenhauer's work are also closely related to logic. This has been the subject of intense and controversial discussion in Schopenhauer research. Overall, two positions can be identified: (1) Several researchers regard either Schopenhauer's entire philosophy or some important parts of it as 'hermeneutics'. (2) Some researchers, however, deny that Schopenhauer can be called a hermeneuticist at all. (1) The form of hermeneutics that researchers see in Schopenhauer, however, diverges widely. For example, various researchers speak of "world hermeneutics", "hermeneutics of existence", "hermeneutics of factuality", "positivist hermeneutics", "hermeneutics of thought", or "hermeneutics of knowledge" (Schubbe 2010, 2018, 2020; Shapshay 2020). What all these positions have in common is that they regard the activity of interpretation and deciphering as a central activity in Schopenhauer's philosophy. (2) Other researchers argue, however, that Schopenhauer should not be assigned to the hermeneutic tradition, while some even go as far as arguing that he is an "anti-hermeneutic". The arguments of these researchers can be summarized as follows: (A1) Schopenhauer does not refer to authors of his time who are, today, called hermeneuticists. (A2) However, the term 'hermeneutics' does not actually fit philosophers of the early 19^th century at all, as it was not fully developed until the 20^th century. (A3) Schopenhauer is not received by modern hermeneutics. Representatives of position (1) consider the arguments outlined in (2) to be insufficiently substantiated (ibid.).
From a logical point of view, argument (A2) should be met with skepticism, as the term 'hermeneutics' can be traced back at least to the second book of Aristotle's Organon, De Interpretatione (Peri hermeneias). Schopenhauer takes up the theory of judgment contained in the Organon again in his Logica Maior (see above, Sect. 2.b) and, in addition, explains that judgment plays a central role not only in logic but also in his entire philosophy: Every insight is expressed in true judgments, namely, in conceptual relations that have a sufficient reason. Yet, guaranteeing the truth of judgments is more difficult than forming valid inferences from them (BL, 200, 360ff.). e. Dialectic or Art of Persuasion In addition to the analytics discussed thus far, there is also a very important chapter on (eristic) dialectics or persuasion in the BL which can be seen as an addition to § 9 of W I and as a precursor of the famous fragment entitled Eristic Dialectics. The core chapter is BL 363–366, but the chapters on paralogisms, fallacies, and sophisms, as well as some of the preliminary remarks, also relate to dialectics (BL, 343–363), as does quite a bit of the information on analytics, such as the RDs. As in Kant, for Schopenhauer analytic is the doctrine of being and truth, whereas dialectic is the doctrine of appearance and illusion. In analytic, a solitary thinker reflects on the valid relations between concepts or judgments; in dialectic, a proponent aims to persuade an opponent of something that is possible. According to Schopenhauer, the information presented in the chapter on paralogisms, fallacies, and sophisms belongs to both analytics and dialectics. In the former, their invalidity is examined; in the latter, their deliberate use in disputation is examined. Schopenhauer presents six paralogisms such as homonymy and amphiboly, seven fallacies such as ignoratio elenchi and petitio principii, and seven sophisms such as cornutus and crocodilinus.
In total, 20 invalid argument types are described, with examples of each and partly subdivided into subtypes. In the core chapter on dialectics or the art of persuasion, Schopenhauer tries to reduce these invalid arguments to a single technique (Lemanski 2023). His main aim is, thus, a reductionist approach that does not even consider the linguistic subtleties of the dishonest argument but reveals the essence of the deliberate fallacy. To this end, he draws on the RDs from analytics and explains that any invalid argument that is intentionally made is based on a confusion of spheres or RDs. In an argument, one succumbs to a disingenuous opponent when one does not consider the RDs thoroughly but only superficially. Then one may admit that two terms in a judgment indicate a community without noticing that this community is only a partial one. Instead of the actual RD5 relation between two spheres, one is led, for example, by inattention or more covertly by paralogisms, fallacies, and sophisms, to acknowledge an RD1 or, more often, an RD2. According to Schopenhauer, dialectics is based on this confusion, as almost all concepts share a partial semantic neighborhood with another concept. Thus, it can happen that one concedes more and more small-step judgments to the opponent and then suddenly arrives at a larger judgment, a conclusion, that one would not have originally accepted at all. Schopenhauer gives several examples of this procedure from science and everyday life and also simulates this confusion of spheres by constructing fictional discussions about ethical arguments between philosophers. In doing so, Schopenhauer uses RDs several times to demonstrate which is the valid (analytic) and which is the feigned (dialectical) relation of the spheres. Then, he goes one step further. 
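The confusion of spheres described above can be sketched computationally. The following is a minimal Python sketch under two illustrative assumptions not made in the text: concept spheres are modeled as finite sets of marks (Schopenhauer's spheres are intensional circles, not extensional sets), and the relation labels are mapped to identity, subsumption, partial overlap, and exclusion in the Euler-style reading of the RDs.

```python
def sphere_relation(a: set, b: set) -> str:
    """Classify the Euler-style relation between two concept spheres,
    modeled here (illustratively) as finite extensions."""
    if a == b:
        return "identity"         # complete community of the spheres
    if a < b or b < a:
        return "subsumption"      # one sphere lies wholly inside the other
    if a & b:
        return "partial overlap"  # the community is only partial (RD5-like)
    return "exclusion"            # no community at all

# The dialectical trick: a merely partial overlap is passed off as more.
# These example extensions are hypothetical, chosen only for illustration.
natural = {"country life", "exercise", "fresh air"}
good = {"exercise", "fresh air", "health"}
print(sphere_relation(natural, good))  # partial overlap, not subsumption
```

In these terms, a disingenuous opponent gets the listener to treat a "partial overlap" verdict as if it were "subsumption" or "identity", which is exactly the small-step concession Schopenhauer warns against.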
In order to demonstrate that one can start from a concept and argue just as convincingly for or against it, Schopenhauer designs large argument maps to indicate possible courses of conversation (Lemanski 2021b, Bhattacharjee et al. 2022). Fig. 8 shows the sphere of the concept of good ("Gut") on the left, the sphere of the concept of bad ("Uebel") on the right, and the concept of country life ("Landleben") in the middle. Starting with the term in the middle, namely, 'country life', the diagram reflects the partial relationship of this term with the adjacent spheres. When one chooses an adjacent sphere, for example the circle 'natural' ("naturgemäß"), the two spheres together form the small-step judgment: 'Country life is natural'. This predicate can then be combined with another adjacent sphere to form a new judgment. Moving through the circles in this way, if one at some point arrives at 'good', for example, and the disputant has conceded all the small-step judgments en route, one can draw the overall conclusion that 'country life is good'. However, as one can just as effectively argue for 'country life is bad' via other spheres, the argument map is a visualization of dialectical relations. Schopenhauer also used such diagrams in the dialectic of W I, § 9, for example, the more famous "diagram of good and evil", which has been interpreted as one of the first logic diagrams for n-terms (Moktefi and Lemanski 2018), as a precursor of a diagrammatic fuzzy-logic (Tarrazo 2004), and as an argument map in which the RD5s are used as graphs (Bhattacharjee et al. 2022). If one relates the dialectic of the BL to the other texts on dialectics, it can be said that this dialectic serves as a bridge between the short diagrammatic dialectic of the W I and the well-known fragment entitled Eristic Dialectics, in which the paralogisms, in particular, were elaborated. Figure 8
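The sphere-to-sphere traversal described above can be sketched as a path search over a graph whose edges join partially overlapping spheres. Only 'Landleben', 'naturgemäß', 'Gut', and 'Uebel' are reproduced from the diagram in the text; the intermediate spheres below ('gesund', 'einsam', 'langweilig') are hypothetical placeholders for the circles of Fig. 8.

```python
from collections import deque

# Hypothetical fragment of an argument map in the style of Fig. 8: an edge
# joins two spheres that partially overlap, licensing a small-step judgment.
overlaps = {
    "Landleben": ["naturgemäß", "einsam"],
    "naturgemäß": ["Landleben", "gesund"],
    "gesund": ["naturgemäß", "Gut"],
    "einsam": ["Landleben", "langweilig"],
    "langweilig": ["einsam", "Uebel"],
    "Gut": ["gesund"],
    "Uebel": ["langweilig"],
}

def chain(start: str, goal: str) -> list:
    """Breadth-first search for a chain of small-step judgments."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in overlaps[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

# From the same starting sphere one can reach both opposite conclusions:
print(chain("Landleben", "Gut"))    # ['Landleben', 'naturgemäß', 'gesund', 'Gut']
print(chain("Landleben", "Uebel"))  # ['Landleben', 'einsam', 'langweilig', 'Uebel']
```

That both searches succeed is the point of the map: whether 'country life' ends up under 'good' or under 'bad' depends only on which chain of partial overlaps the disputant is led along.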
3. Schopenhauer’s Logica Minor Schopenhauer’s Berlin Lectures must be considered a Logica Maior due to the enormous size and complexity of their original subjects (especially in comparison to many other 19^th-century writings). Nevertheless, one can also locate and collect a Logica Minor in Schopenhauer’s other writings. In the following, the most important treatises on analytic and dialectic from the other works of Schopenhauer are briefly presented. Even though the BL and the other writings have some literal similarities, the BL should remain the primary reference when assessing the various topics in the other writings. a. Fourfold Root The first edition of Schopenhauer’s dissertation, the Fourfold Root of the Principle of Sufficient Reason, was published in 1813 and a revised and extended edition was published in 1847. The second edition contains numerous additions that are not always regarded as improvements or helpful supplements. In the 1813 version of chapter 5, logic is addressed through the principle of sufficient reason of knowing. Schopenhauer follows a typical compositional approach in which inferences are considered compositions of judgments and judgments as compositions of concepts. The treatise in this chapter, however, is primarily concerned with the doctrine of concepts. Although Schopenhauer points out that concepts have a sphere, there are no logic diagrams to illustrate this metaphor in the work. Schopenhauer deals mainly with the utility of concepts, the relationship between concept and intuition, and the doctrine of truth. The philosophy of mathematics and its relation to logic are discussed in chapters 3 and 8. The discussion of the doctrine of truth is especially close to the text of the BL as Schopenhauer already distinguishes between logical, empirical, metaphysical, and metalogical truth. Although the expression “metalogica” is much older, this book uses the term ‘metalogic’ in the modern sense for the first time (Béziau 2020).
Furthermore, it can be argued that Schopenhauer presented the first complete treatise on the principle of sufficient reason in this book. While the other principles popularized by Leibniz and Wolff have found their way into today’s classical logic, that is, the principles of non-contradiction, identity, and the excluded middle, the principle of sufficient reason was considered non-formalizable and, therefore, not a basic principle of logic in the early 20^th century. Newton da Costa, on the other hand, proposed a formalization that has made Schopenhauer’s laws of thought worthy of discussion again (Béziau 1992). b. World as Will and Representation I (Chapter 9) Chapter 9 (that is, § 9) of the W I takes up the terminology of Fourfold Root again and extends several elements of it. Schopenhauer first develops a brief philosophy of language to clarify the relationship between intuition and concept. He then introduces analytic by explaining the metaphors used in the doctrine of concepts, that is, higher-lower (buildings of concepts) and wider-narrower (spheres of concepts). Schopenhauer keeps to the metaphor of the sphere and explains that Euler, Lambert, and Ploucquet had already represented this metaphor with the help of diagrams. He draws some of the diagrams discussed above in Sect. 2.a (RD3 is missing) and explains that these are the foundation for the entire doctrine of judgments and inferences. Here, too, Schopenhauer represents a merely compositional position: judgments are connections of concepts while inferences are composed of judgments. However, in § 9, there is no concrete doctrine of judgment or inference. The principles of logic are also listed briefly in only one sentence. Although W I makes the descriptive claim to represent all elements of the world, the logic presented here must be considered highly imperfect and incomplete.
Schopenhauer explains that everyone, by nature, masters logical operations; thus, it is reserved for academic teaching alone to present logic explicitly and in detail, and this is what is done in the BL for an academic audience. In the further course of § 9, Schopenhauer also discusses dialectics, which contains an argument map similar to the one illustrated above (see above, Sect. 2.e) but also lists some artifices (“Kunstgriffe”) known from later writings including the BL and Eristic Dialectic (ibid.). The philosophy of mathematics and its relation to logic are discussed in § 15 of the W I. c. Eristic Dialectics Of all the texts on Schopenhauer’s logic listed here, the manuscript produced in the early 1830s that he entitled Eristic Dialectic is the best known. It is usually presented separately from all other texts in editions that bear ambiguous titles such as The Art of (Always) Being Right or The Art of Winning an Argument. Schopenhauer himself titled the manuscript Eristic Dialectic. The term ‘eristics’ comes from the Greek ‘erizein’ and means ‘contest, quarrel’ and is personified in Greek mythology by the goddess Eris. Although Schopenhauer also uses the above ambiguous expressions in the text (for example, 668, 675), these are primarily understood as translations of the Greek expression ‘eristiké téchne’. Regardless of the context, the ambiguous titles suggest that Schopenhauer is here recommending that his readers use obscure techniques in order to assert themselves against other speakers. Even though there are text fragments that partially convey this normative impression, Schopenhauer’s goal is, however, of a preventive nature: He seeks to give the reader a means to recognize and call out invalid but deliberately presented arguments and, thus, be able to defend themself (VI, 676). 
Therefore, Schopenhauer is not encouraging people to violate the ethical rules of good argumentation (Lemanski 2022a); rather, he is offering an antidote to such violations (Chichi 2002, 165, 170, Hordecki 2018). However, this fragment is often interpreted normatively, and in the late 20^th and early 21^st centuries, it was often instrumentalized in training courses for salesmen, managers, lawyers, politicians, and so forth, as a guide to successful argumentation. The manuscript consists of two main parts. In the first, Schopenhauer describes the relationship between analytics and dialectics (VI, 666), defines dialectics several times (Chichi 2002, 165), and outlines its history with particular reference to Aristotle (VI, 670–675). The second main part is divided into two subsections. The first subsection describes the “basis of all dialectics” and gives two basic modes (VI, 677 f.). The second subsection (VI, 678–695) contains 38 artifices (“Kunstgriffe”), which are explained clearly with examples. These artifices, which Schopenhauer also called ‘stratagems’, can be divided into preerotematic (artifices 1–6), erotematic (7–18), and posterotematic (19–38) stratagems (compare Chichi 2002, 177). The manuscript is unfinished and, therefore, the fragment is also referred to by Schopenhauer as a “first attempt” (VI, 676f.). According to modern research, both main parts are revisions of the Berlin Lectures, designed for independent publication: the first main part being an extension of BL 356–363, the second main part a revised version of BL 343–356. It can be assumed that Schopenhauer either wanted to add another chapter on the reduction of all stratagems to diagrams (as given in BL 363–366) or that he intended to dispense with the diagrams, as they would have presupposed knowledge of analytics.
In any case, it can be assumed that Schopenhauer would have edited the fragment further before publishing it, as the manuscripts are not at the same standard as Schopenhauer’s printed works. Despite the misuse of the fragment described above, researchers in several areas, for example in the fields of law, politics, pedagogy, ludics and artificial intelligence, are using the fragment productively (for example, Fouqueré et al. 2012, Lübbig 2020, Marciniak 2020, Hordecki 2021). d. World as Will and Representation II (Chapters 9 and 10) In the very first edition of W II in 1844, Schopenhauer extended the incomplete explanations of logic given in W I with his doctrines of judgment (chapter 9) and inference (chapter 10). He adopts some text passages and results of the BL, but only briefly hints at many of these topics, theses, and arguments. In comparison to the BL, chapters 9 and 10 of W II also appear to be an unsatisfactory approach to logic. In his discussion of the doctrine of judgment, Schopenhauer pays particular attention to the function of the copula in addition to giving further explanations of the forms of judgments. In the doctrine of inference, he continues to advocate for Aristotelianism and argues against both Galen’s fourth figure and Kant’s reduction to the first figure. Furthermore, the text suggests an explanation for why Schopenhauer presents such an abbreviated representation of logic here. Schopenhauer explains in chapter 10 that RDs are a suitable technique to prove syllogisms although they are not appropriate for use in propositional logic. It seems as if Schopenhauer is going against some of the arguments of his former doctrine of diagrammatic reasoning (presented, for example, in Sect. 2.b.ii). Nevertheless, he presents this critique or skepticism almost reluctantly as an addition to W I. 
Although he does include some RDs, which mainly represent syllogistic inferences, in chapters 9 and 10, he also hints at a more advanced diagrammatic system based on “bars” and “hooks” several times. However, these text passages, which point to a new diagrammatic system, remain only hints whose meaning cannot yet be grasped. Based on these obscure text passages, Kewe (1907) tried to reconstruct an alternative logic system that is supposed to resemble the structure of a voltaic pile, as Schopenhauer himself hinted at such a comparison at the end of chapter 10 of W II. However, Kewe’s proposal is a logically trivial, if diagrammatically very complex, interpretation that almost only highlights the disadvantages in comparison to the system of RDs. It seems more likely that, in these passages, Schopenhauer had in mind a diagrammatic technique published in Karl Christian Friedrich Krause’s Abriss des Systemes der Logik in the late 1820s. This interpretation of W II is more plausible as Schopenhauer was in personal contact with Krause for a longer time (Göcke 2020). However, future research must clarify whether this thesis is tenable. To date, unfortunately, no note from among the manuscript remains has been identified that might illustrate the technique described in W II, chapter 10. e. Parerga and Paralipomena II Parerga and Paralipomena II, chapter 2 contains a treatise on “Logic and Dialectic”. Although this chapter was written in the 1850s, it is the worst treatise Schopenhauer published on logic. In just a few paragraphs, he attempts to cover topics such as truth, analytic and synthetic judgments, and proofs. The remaining paragraphs are extracts from or paraphrases of the manuscript on Eristic Dialectics or the BL. One can see from these passages that there was a clear break in Schopenhauer’s writings around the 1830s and that his late work tended to omit rational topics.
Schopenhauer also explained that he was no longer interested in working on the fragment on Eristic Dialectics, as the subject showed him the wickedness of human beings and he no longer wanted to concern himself with it. 4. Research Topics Research into Schopenhauer’s philosophy of language, logic, and mathematics is still in its infancy because, for far too long, normativists concentrated on other topics in Schopenhauer’s theory of representation, including his epistemology and, especially, his idealism. The importance of the second part of the theory of representation, namely, the theory of reason (language, knowledge, practical action), has been almost completely ignored. However, as language and logic are the media that give expression to Schopenhauer’s entire system, it can be said that one of the most important methodological and content-related parts of Schopenhauer’s complete oeuvre has, historically, been largely overlooked. The following is a rough overview of research to be done on Schopenhauer’s logic. It shows that these writings still offer interesting topics and theses. In particular, Schopenhauer’s use of logic diagrams is likely to meet with much interest in the course of intensive research into diagrammatic and visual reasoning. Nevertheless, many special problems and general questions remain unsolved. The most important general questions concern the following points: 1. Do we have all of Schopenhauer’s writings on logic, or are there manuscripts that have not yet been identified? In particular, the fact that Schopenhauer uses diagrams that are not discussed in the text and discusses diagrams that are not illustrated in the text suggests that Schopenhauer knew more about logic diagrams than can be gleaned from his known books and manuscripts. 2. How great is the influence of Schopenhauer’s logic on modern logic (especially the Vienna Circle, the school of Münster, the Lwów-Warsaw school, intuitionism, metalogic, and so forth)?
Schopenhauer’s Berlin Lectures were first fully published in 1913, a period that saw the intensive reception of Schopenhauer’s teachings on logic in those schools. For example, numerous researchers have been discussing Schopenhauer’s influence on Wittgenstein for decades (compare Glock 1999). One can observe an influence on modern logic in the works of Moritz Schlick, Béla Juhos, Edith Matzun, and L. E. J. Brouwer. However, this relationship has, thus far, been consistently ignored in research. 3. What is Schopenhauer’s relationship to the pioneering logicians of his time (for example, Krause, Jakob Friedrich Fries, Carl Friedrich Bachmann, and so forth)? Previous sections have indicated that Schopenhauer’s logic may have been close to that of Krause. Bachmann, another remarkable logician of the early 19^th century, was also in contact with Schopenhauer. The fact that Schopenhauer was personally influenced by Schulze’s logic is well documented. In addition, Schopenhauer knew various logic systems from the 18^th and 19^th centuries; however, many studies are needed to clarify these relationships. 4. To what extent does Schopenhauer’s logic differ from the systems of his contemporaries? Many of Schopenhauer’s innovations and additions to logic have already been recognized. Yet, the question remains, to what extent does Schopenhauer’s approach to visual reasoning correspond to the Zeitgeist? At first glance, it seems obvious, for example, that Schopenhauer strongly contradicted the Leibnizian and Hegelian schools, the Hegelian schools especially, by separating logic and metaphysics from each other and emphasizing instead the kinship of logic and intuition. 5. To what extent can Schopenhauer’s ideas about logic and logic diagrams be applied to contemporary fields of research? Schopenhauer did not design ‘a logic’ that would meet today’s standards of logic without comment, but rather a stimulating philosophy of logic and ideas about visual reasoning. 
Schopenhauer questioned many principles that are often widely accepted today. Moreover, he offers many diagrammatic and graphical ideas that could be developed in many modern directions. Schopenhauer’s approaches, which have been interpreted as contributions to fuzzy logic, n-term logic, natural logic, metalogic, ludics, graph theory, and so forth, also require further intensive research. 6. How can Schopenhauer’s system (for example, in W I) be reconstructed using logic? This question is motivated by the fact that some logical techniques have already been successfully applied to Schopenhauer’s system. For example, Matsuda (2016) has offered a precise interpretation of Schopenhauer’s world as a cellular automaton based on the so-called Rule 30 elaborated by Stephen Wolfram. In Schopenhauer’s system, logic thus has a double function: As part of the world, the discipline called logic must be analyzed like any other part of the system. However, as an instrument or organon of expression and reason, it is itself the medium through which the world and everything in it are described. This raises the question of what an interpretation of Schopenhauer’s philosophical system using his logic diagrams would look like. 5. References and Further Readings a. Schopenhauer’s Works • Schopenhauer, A.: Philosophische Vorlesungen, Vol. I. Ed. by F. Mockrauer (= Sämtliche Werke. Ed. by P. Deussen, Vol. 9). München (1913). Cited as BL. • Schopenhauer, A.: The World as Will and Representation: Vol. I. Transl. by J. Norman, A. Welchman, C. Janaway. Cambridge (2014). Cited as W I. • Schopenhauer, A.: The World as Will and Representation: Vol. II. Transl. by J. Norman, A. Welchman, C. Janaway. Cambridge (2015). Cited as W II. • Schopenhauer, A.: Parerga and Paralipomena. Vol. I. Translated by S. Roehr, C. Janaway. Cambridge (2014). • Schopenhauer, A.: Parerga and Paralipomena. Vol. II. Translated by S. Roehr, C. Janaway. Cambridge (2014).
• Schopenhauer, A.: Manuscript Remains: Early Manuscripts 1804–1818. Ed. by Arthur Hübscher; translated by E. F. J. Payne. Oxford et al. (1988). • Schopenhauer, A.: Manuscript Remains: Critical Debates (1809–1818). Ed. by Arthur Hübscher; translated by E. F. J. Payne. Oxford et al. (1989). • Schopenhauer, A.: Manuscript Remains: Berlin Manuscripts (1818–1830). Ed. by Arthur Hübscher; translated by E. F. J. Payne. Oxford et al. (1988). • Schopenhauer, A.: Manuscript Remains: The Manuscript Books of 1830–1852 and Last Manuscripts. Ed. by Arthur Hübscher; translated by E. F. J. Payne. Oxford et al. (1990). b. Other Works • Baron, M. E. (1969) A Note on the Historical Development of Logic Diagrams: Leibniz, Euler and Venn. In Mathematical Gazette 53 (384), 113–125. • Bevan, M. (2020) Schopenhauer on Diagrammatic Proof. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 305–315. • van Belle, M. (2021) Schopenbrouwer: De rehabilitatie van een miskend genie. Postbellum, Tilburg. • Béziau, J.-Y. (2020) Metalogic, Schopenhauer and Universal Logic. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 207–257. • Béziau, J.-Y. (1993) La critique Schopenhauerienne de l’usage de la logique en mathématiques. O que nos faz pensar 7, 81–88. • Béziau, J.-Y. (1992) O Princípio de Razão Suficiente e a lógica segundo Arthur Schopenhauer. In Évora, F. R. R. (ed.): Século XIX. O Nascimento da Ciência Contemporânea. Campinas, 35–39. • Bhattacharjee, R., Lemanski, J. (2022) Combing Graphs and Eulerian Diagrams in Eristic. In: Giardino, V., Linker, S., Burns, R., Bellucci, F., Boucheix, J.-M., Viana, P. (eds.) Diagrammatic Representation and Inference. Diagrams 2022. Lecture Notes in Computer Science, vol. 13462. Springer, Cham, 97–113. • Birnbacher, D. (2018) Schopenhauer und die Tradition der Sprachkritik. Schopenhauer-Jahrbuch 99, 37–56. • Chichi, G. M. (2002) Die Schopenhauersche Eristik.
Ein Blick auf ihr Aristotelisches Erbe. In Schopenhauer-Jahrbuch 83, 163–183. • Coumet, E. (1977) Sur l’histoire des diagrammes logiques : figures géométriques. In Mathématiques et Sciences Humaines 60, 31–62. • Costanzo, J. (2020) Schopenhauer on Intuition and Proof in Mathematics. In Lemanski, J. (ed.) Language, Logic and Mathematics in Schopenhauer. Birkhäuser, Cham, 287–305. • Costanzo, J. M. (2008) The Euclidean Mousetrap. Schopenhauer’s Criticism of the Synthetic Method in Geometry. In Journal of Idealistic Studies 38, 209–220. • D’Alfonso, M. V. (2018) Arthur Schopenhauer, Anmerkungen zu G. E. Schulzes Vorlesungen zur Logik (Göttingen 1811). In I Castelli di Yale Online 6(1), 191–246. • Demey, L. (2020) From Euler Diagrams in Schopenhauer to Aristotelian Diagrams in Logical Geometry. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham. • Dobrzański, M. (2017) Begriff und Methode bei Arthur Schopenhauer. Königshausen & Neumann, Würzburg. • Dobrzański, M. (2020) Problems in Reconstructing Schopenhauer’s Theory of Meaning. With Reference to his Influence on Wittgenstein. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 25–57. • Dobrzański, M., Lemanski, J. (2020) Schopenhauer Diagrams for Conceptual Analysis. In: Pietarinen, A.-V. et al. (eds.) Diagrammatic Representation and Inference. 11^th International Conference, Diagrams 2020, Tallinn, Estonia, August 24–28, 2020, Proceedings. Springer, Cham, 281–288. • Dümig, S. (2016) Lebendiges Wort? Schopenhauers und Goethes Anschauungen von Sprache im Vergleich. In: Schubbe, D. & Fauth, S. R. (eds.): Schopenhauer und Goethe. Biographische und philosophische Perspektiven. Meiner, Hamburg, 150–183. • Dümig, S. (2020) The World as Will and I-Language. Schopenhauer’s Philosophy as Precursor of Cognitive Sciences. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 85–95.
• Fischer, K. (1908) Schopenhauers Leben, Werke und Lehre. 3^rd ed. Winters, Heidelberg. • Follesa, L. (2020) From Necessary Truths to Feelings: The Foundations of Mathematics in Leibniz and Schopenhauer. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 315–326. • Fouqueré, C., Quatrini, M. (2012) Ludics and Natural Language: First Approaches. In: Béchet, D., Dikovsky, A. (eds.) Logical Aspects of Computational Linguistics. LACL 2012. Lecture Notes in Computer Science, vol. 7351. Springer, Berlin, Heidelberg, 21–44. • Glock, H.-J. (1999) Schopenhauer and Wittgenstein: Language as Representation and Will. In: Janaway, C. (ed.) The Cambridge Companion to Schopenhauer. Cambridge Univ. Press, Cambridge, 422–458. • Göcke, B. P. (2020) Karl Christian Friedrich Krause’s Influence on Schopenhauer’s Philosophy. In Wicks, R. L. (ed.) The Oxford Handbook of Schopenhauer. Oxford Univ. Press, New York. • Heinemann, A.-S. (2020) Schopenhauer and the Equational Form of Predication. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 165–181. • Hordecki, B. (2018) The strategic dimension of the eristic dialectic in the context of the general theory of confrontational acts and situations. In: Przegląd Strategiczny 11, 19–26. • Hordecki, B. (2021) Dialektyka erystyczna jako sztuka unikania rozmówców nieadekwatnych. Res Rhetorica 8(2), 18–129. • Jacquette, D. (2012) Schopenhauer’s Philosophy of Logic and Mathematics. In Vandenabeele, B. (ed.) A Companion to Schopenhauer. Wiley-Blackwell, Chichester, 43–59. • Janaway, C. (2014) Schopenhauer on Cognition. In: Hallich, O. & Koßler, M. (eds.) Arthur Schopenhauer: Die Welt als Wille und Vorstellung. Akademie, Berlin, 35–50. • Kewe, A. (1907) Schopenhauer als Logiker. Bach, Bonn. • Koetsier, T. (2005) Arthur Schopenhauer and L. E. J. Brouwer. A Comparison. In: Bergmans, L. & Koetsier, T. (eds.) Mathematics and the Divine. A Historical Study.
Elsevier, Amsterdam, 571–595. • Koßler, M. (2020) Language as an “Indispensable Tool and Organ” of Reason. Intuition, Concept and Word in Schopenhauer. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 15–25. • Lemanski, J. (2016) Schopenhauers Gebrauchstheorie der Bedeutung und das Kontextprinzip. Eine Parallele zu Wittgensteins Philosophischen Untersuchungen. In: Schopenhauer-Jahrbuch 97, 171–197. • Lemanski, J. (2017) ショーペンハウアーにおける意味の使用理論と文脈原理 : ヴィトゲンシュタインショーペンハウアー研究 = Schopenhauer-Studies 22, 150–190. • Lemanski, J. (2021) World and Logic. College Publications, London. • Lemanski, J. (2022a) Discourse Ethics and Eristic. In: Polish Journal of Aesthetics 62, 151–162. • Lemanski, J. (2022b) Schopenhauers Logikdiagramme in den Mathematiklehrbüchern Adolph Diesterwegs. In. Siegener Beiträge zur Geschichte und Philosophie der Mathematik 16 (2022), 97–127. • Lemanski, J. (2023) Logic Diagrams as Argument Maps in Eristic Dialectics. In: Argumentation, 1–21. • Lemanski J. and Dobrzanski, M. (2020) Reism, Concretism, and Schopenhauer Diagrams. In: Studia Humana 9, 104–119. • Lübbig Thomas (2020), Rhetorik für Plädoyer und forensischen Streit. Mit Schopenhauer im Gerichtssaal. Beck, München. • Matsuda, K. (2016) Spinoza’s Redundancy and Schopenhauer’s Concision. An Attempt to Compare Their Metaphysical Systems Using Diagrams. Schopenhauer-Jahrbuch 97, 117–131. • Marciniak, A. (2020) Wprowadzenie do erystyki dla pedagogów – Logos. Popraw-ność materialna argumentu, In: Studia z Teorii Wychowania 11:4, 59–85. • Menne, A. (2003) Arthur Schopenhauer. In: Hoerster, N. (ed.) Klassiker des philosophischen Denkens. Vol. 2. 7th ed. DTV, München, 194–230. • Moktefi, A. and Lemanski, J. (2018) Making Sense of Schopenhauer’s Diagram of Good and Evil. In: Chapman, P. et al. (eds.) Diagrammatic Representation and Inference. 10th international Conference, Diagrams 2018, Edinburgh, UK, June 18–22, 2018. Proceedings. 
Springer, Berlin et al., 721–724. • Moktefi, A. (2020) Schopenhauer’s Eulerian Diagrams. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 111–129. • Pedroso, M. P. O. M. (2016) Conhecimento enquanto Afirmação da Vontade de Vida. Um Estudo Acerca da Dialética Erística de Arthur Schopenhauer. Universidade de Brasília, Brasília 2016. • Pluder, V. (2020) Schopenhauer’s Logic in its Historical Context. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Basel, 129–143. • Regehly, T. (2018) Die Berliner Vorlesungen: Schopenhauer als Dozent. In Schubbe, D., Koßler, M. (ed.) Schopenhauer-Handbuch: Leben – Werk – Wirkung. 2nd ed. Metzler, Stuttgart, 169–179. • Saaty, T. L. (2014) The Three Laws of Thought, Plus One: The Law of Comparisons. Axioms 3:1, 46–49. • Salviano, J. (2004) O Novíssimo Organon: Lógica e Dialética em Schopenhauer. In: J. C. Salles (Ed.). Schopenhauer e o Idealismo Alemão. Salvador 99–113. • Schroeder, S. (2012) Schopenhauer’s Influence on Wittgenstein. In: Vandenabeele, B. (ed.) A Companion to Schopenhauer. Wiley-Blackwell, Chichester et al., 367–385. • Schubbe, D. (2010) Philosophie des Zwischen. Hermeneutik und Aporetik bei Schopenhauer. Königshausen & Neumann, Würzburg. • Schubbe, D. (2018) Philosophie de l’entre-deux. Herméneutique et aporétique chez Schopenhauer. Transl. by Marie-José Pernin. Presses Universitaires Nancy, Nancy. • Schubbe, D. and Lemanski, J. (2019) Problems and Interpretations of Schopenhauer’s World as Will and Representation. In: Voluntas – Revista Internacional de Filosofia 10(1), 199–210. • Schubbe, D. (2020) Schopenhauer als Hermeneutiker? Eine Replik auf Thomas Regehlys Kritik einer hermeneutischen Lesart Schopenhauers. In: Schopenhauer-Jahrbuch, 100, 139–147. • Schüler, H. M. & Lemanski, J. (2020) Arthur Schopenhauer on Naturalness in Logic. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 145–165. 
• Schulze, G. E. (1810) Grundsätze der Allgemeinen Logik. 2nd ed. Vandenhoeck und Ruprecht, Göttingen. • Schumann, G. (2020) A Comment on Lemanski’s “Concept Diagrams and the Context Principle”. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 73–85. • Segala, M. (2020) Schopenhauer and the Mathematical Intuition as the Foundation of Geometry. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Birkhäuser, Cham, • Shapshay, S. (2020) The Enduring Kantian Presence in Schopenhauer’s Philosophy. In: R. L. Wicks (ed.) The Oxford Handbook of Schopenhauer. Oxford Univ. Press, Oxford, 110–126. • Tarrazo, M. (2004) Schopenhauer’s Prolegomenon to Fuzziness. In: Fuzzy Optimization and Decision Making 3, 227–254. • Weimer, W. (1995) Ist eine Deutung der Welt als Wille und Vorstellung heute noch möglich? Schopenhauer nach der Sprachanalytischen Philosophie. In: Schopenhauer-Jahrbuch 76, 11–53. • Weimer, W.: (2018) Analytische Philosophie. In Schubbe, D., Koßler, M. (eds.) Schopenhauer-Handbuch. Leben – Werk – Wirkung. 2nd ed. Metzler, Stuttgart, 347–352. • Xhignesse, M. -A. (2020) Schopenhauer’s Perceptive Invective. In Lemanski, J. (ed.) Language, Logic, and Mathematics in Schopenhauer. Birkhäuser, Cham, 95–107. Author Information Jens Lemanski Email: jenslemanski@gmail.com University of Münster
{"url":"https://iep.utm.edu/schopenhauer-logic-and-dialectic/","timestamp":"2024-11-14T14:49:59Z","content_type":"text/html","content_length":"148040","record_id":"<urn:uuid:ba1349f5-e73e-40b3-953d-2b6ab3a154d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00483.warc.gz"}
The neo-classical synthesis

Created by Conspecte Team.

The neo-classical synthesis is a synthesis of the classical model and the Keynesian model. In short, it states that the Keynesian model is correct in the short run while the classical analysis is correct in the long run.

Let us consider a concrete example. According to the Keynesian model, an increase in G will increase Y and reduce unemployment. In the classical model, an increase in G will have no effect at all on Y and unemployment. In the neo-classical synthesis, an increase in G will create a temporary increase in Y but Y will return to its original value after some time.

To justify the neo-classical synthesis, it is helpful to identify the problem with the classical model in the short run and the problem with the Keynesian model in the long run. As for the classical model in the short run, we concluded that within this model, it is difficult to explain deep recessions with high involuntary unemployment. In the long run, it is more reasonable to believe that the economy can get out of the recession by itself. The problem with the Keynesian model in the long run, as we will see, is the assumption of a stable Phillips curve.

The various Phillips curves

The augmented Phillips curve

Remember that the Phillips curve, as it was incorporated into the Keynesian model, assumed a stable relationship between unemployment and wage inflation: for a given level of unemployment (say U = 5%), a given level of wage inflation would apply (say πw = 4%). As U increased, πw would fall and vice versa. Mathematically, the Phillips curve may be described by a decreasing function f as:

πw = f(U)

In the neo-classical synthesis, expected inflation is added and:

πw = f(U) + πe

To justify this amendment, imagine U = 5% and πw = 4% (so that we are on the Phillips curve) and the expected inflation rises from 4% to 6%.
Since employees care about real wages, it is reasonable to assume that πw will increase as well (for a given U) and the Phillips curve will shift upwards.

Fig. 15.1: The augmented Phillips curve.

According to the synthesis, the Phillips curve must be drawn for a given value of πe and it must be shifted upwards (downwards) as πe increases (decreases). When the position of the Phillips curve is allowed to depend on πe, it is called the augmented Phillips curve (or the expectations-augmented Phillips curve). This amendment to the Phillips curve is actually a consequence of a criticism of the traditional Phillips curve and the Keynesian model from the late 1960's (the Keynesian–Monetarism debate).

Money illusion

An important argument for the augmentation has to do with the concept of money illusion. Money illusion means that you care about nominal rather than real amounts. Imagine that your salary increases by 20% over one year. Does this mean that you can increase your consumption? The answer is that it depends on the inflation. If inflation is 20%, you can consume as much as you did before. You must actually decrease your consumption if inflation exceeds 20%. We say that you suffer from money illusion if you believe that you are better off if your salary increases by 20% while prices also increase by 20%. A higher nominal salary may create the "illusion" that you are richer.

If employees suffer from money illusion they will only care about nominal wage increases, expected inflation will not matter and there is no reason for the traditional Phillips curve not to hold. If, however, they do not suffer from money illusion, πw must depend on both U and πe and the augmented Phillips curve is more realistic.

The long-run Phillips curve

The augmented Phillips curve has an important consequence: the long-run Phillips curve must be vertical.

Fig. 15.2: The long-term Phillips curve.

To realize this, start by drawing a Phillips curve for πe = 3%.
The only point on this curve that may apply in the long run is πW = 3% (point A). For example, πW = 2% and πe = 3% is not consistent with equilibrium in the long run as there is no level of inflation which is consistent with these values. π = 3% is not possible as real wages would go to zero. π = 2% is not possible since it would be unreasonable to continue to expect 3% inflation if inflation each year was 2%. According to the neo-classical synthesis, we may temporarily be anywhere on the lower Phillips curve when πe = 3%, but the economy must eventually return to point A (as long as πe = 3%).

Now draw a Phillips curve for πe = 6%. Again, on this curve there is only one point that is consistent with equilibrium in the long run and that is the point where πW = 6% (point B). This point must be exactly above A as the new curve must be exactly three units above the first curve. If we draw all possible Phillips curves, we see that all points consistent with long-run equilibrium must lie on a vertical curve and this curve is called the long-run Phillips curve. In the long run, the economy must return to this curve. This means that in the long run, there is no relation between inflation and unemployment. In the long term, the economy returns to the natural unemployment rate as in the classical model.

Summary of the Phillips curves

In the neo-classical synthesis, the augmented Phillips curve is called the short-run Phillips curve. It is assumed to be stable as long as expectations of future inflation do not change. To summarize, we have three Phillips curves:

• The traditional Phillips curve. πW = f(U) and the same downward-sloping relationship applies to both the short and the long run.
• The short-run Phillips curve (SPC). πw = f(U) + πe and the curve is valid only in the short run.
• The long-run Phillips curve (LPC). πw = πM, U = UN and there is no relationship between πw and U (UN is the natural rate of unemployment).
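To make the three curves concrete, here is a small Python sketch. The linear form f(U) = 10 − 2U and all numbers are my own illustrative assumptions, not from the text: the traditional curve ignores expectations, the short-run curve shifts one-for-one with πe, and the long-run "curve" simply pins unemployment at UN.

```python
# Illustrative Phillips curves; f is a hypothetical decreasing function of U.

def f(u):
    return 10 - 2 * u  # assumed linear shape, for illustration only

def traditional(u):
    """Traditional Phillips curve: pi_w = f(U)."""
    return f(u)

def short_run(u, pi_e):
    """Augmented / short-run Phillips curve: pi_w = f(U) + pi_e."""
    return f(u) + pi_e

def long_run(u, u_n):
    """The long-run Phillips curve is vertical: only U = UN is sustainable."""
    return u == u_n

print(traditional(5))    # 0: wage inflation at U = 5 with no expectations term
print(short_run(5, 3))   # 3: same U, curve shifted up one-for-one by pi_e
print(long_run(4, 5))    # False: U below UN cannot persist in the long run
```

Raising pi_e leaves the shape of the curve alone and just translates it upwards, which is exactly the shift drawn in Fig. 15.1.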
The classical model and the long-term Phillips curve

In the classical model, L and the real wage are determined from equilibrium conditions in the labor market. L and W/P, therefore, are only affected by the marginal product of labor (which determines the demand for labor) and by the utility function of the employees (which determines the supply of labor). All unemployment is voluntary and L, U and W/P are all affected by exogenous variables only.

In the classical model, inflation is determined solely by the growth in the money supply πM. From the quantity theory of money, M·V = P·Y, and if the growth rate of M is πM, then P must increase by the same rate as V and Y are constant. From the quantity theory we can conclude that π = πM must hold. The relationship M·V = P·Y is therefore sometimes called the quantity theory in levels while π = πM is called the quantity theory in rates.

In the classical model, inflation is balanced and πW = π (the real wage is constant). Since π = πM, we have π = πM = πW. As U is not affected by any endogenous variables, there is no relationship between πW and U in the classical model and the vertical LPC applies even in the short run. The position on the LPC is determined by πM. Unlike the neo-classical synthesis, where the economy temporarily may depart from the LPC, the economy must always be on the LPC in the classical model.

Developments around 1960

The augmented Phillips curve and the long-run Phillips curve were developed during the late 1960s by Milton Friedman and Edmund Phelps. Friedman argued that a stable Phillips curve could exist in the short run as long as individuals did not expect changes in the economy. Eventually, expectations would change and the traditional Phillips curve would shift and we would return to a point on the long-run Phillips curve. If the Phillips curve depends on πe, we can no longer expect observations of unemployment and wage inflation to nicely line up on a downward sloping curve.
Instead, different observations will belong to different Phillips curves that move over time and we should expect to see all possible combinations of U and πw.

Most Keynesians chose to hold on to the traditional Phillips curve. If you buy the augmented Phillips curve, you must buy the long-run Phillips curve and the economy must automatically return to the natural level of unemployment. This would violate one of the main results in the Keynesian analysis, namely that the economy may be stuck in a long-run equilibrium with a high level of involuntary unemployment. With the long-run Phillips curve, it would again be impossible to determine the rate of inflation within the Keynesian model as all levels of inflation would be consistent with equilibrium (as for the Keynesian model without the Phillips curve). Since the traditional Phillips curve had strong empirical support at this time, there was no reason to give it up.

Milton Friedman argued that this stable relationship was a pure coincidence. He predicted that observations in line with Figure X would be common in the future. A period of "stagflation", a situation with high unemployment and high inflation, in the early 70s was a great victory for the augmented Phillips curve and a serious setback for the Keynesian model. According to the Keynesian model, the government should pursue an expansionary policy if unemployment was high and a tight policy if inflation was high. The Keynesian model had no answer as to what policy to pursue if both were high.

In the late 1970s it was clear that the augmented Phillips curve was superior to the traditional Phillips curve, which from now on was assumed to be valid only in the short run. The neo-classical synthesis became the most popular model in macroeconomics and the synthesis is still the dominating model in macroeconomics taught in introductory and intermediate courses. The synthesis is also often the starting point for more advanced models in macroeconomics.
It should be noted that the development in the 1970s was a setback for the Keynesian model which incorporated the Phillips curve. The Keynesian model without the Phillips curve was less affected by the debate. With constant wages it does determine all of the macroeconomic variables but without the Phillips curve, it cannot explain inflation (see chapter 14). For this reason, many macro economists believe that the Keynesian model can be used in the short run or in a recession when prices and wages do not change very much.

From short to long run

The dynamics from the short to the long run

We shall describe how the synthesis explains the transition from the short run to the long run where the Keynesian model applies in the short term and the classical model in the long run.

Fig. 15.3: From short run to long run.

• Point A: We start at point A which is on the LPC where the economy is in equilibrium. Say that expected inflation is 4% so we are also on the SPC1 corresponding to an expected inflation of 4%. Since we are in equilibrium, inflation and the growth of the money supply are equal to wage inflation and these must be equal to expected inflation. In point A we therefore have π = πM = πw = πe = 4%.
• Movement 1: Suppose that πM suddenly and unexpectedly rises to 6%. The AD curve will then glide upward faster than the AS curve; Y will increase and π will increase. When Y increases, L increases and U will fall. Since the increase is not expected, inflation expectations will not change and neither will the SPC. We move up along SPC1 and πW increases. According to the discussion in section X, π and πW will eventually increase until they reach 6% and we move up to point B. So far, the discussion is completely consistent with the Keynesian model (as we have not replaced the Phillips curve).
• SPC1 → SPC2: As inflation is 6% at point B, inflation expectations must eventually increase.
If inflation was 4% for a long time and it rises to and stabilizes at 6%, it is reasonable to expect future inflation to be 6%. In the synthesis, the SPC shifts upwards to SPC2 which applies to πe = 6%.

• Point B: When inflation expectations have become 6%, we are below the new short-run Phillips curve. When the economy is at point B with unemployment below the natural rate, wages will rise by more than 6%. From SPC2 we can conclude that a wage inflation of 8% is consistent with an expected inflation of 6% when unemployment is equal to UB.
• Movement 2: If wages, and therefore prices, rise by more than 6%, the AS curve will glide upwards faster than the AD curve, which means that Y will fall and U will increase. This must continue until we reach point C, where we once again are in equilibrium.

Note that in the Keynesian model SPC1 is the only Phillips curve and it is valid in the long run as well. In this model, there is no "movement 2". The economy may remain at point B with π = πw = πM = 6% if this is desired by the government. The economy may return to point A by using restrictive fiscal and monetary policy.

In this section, inflation expectations did not change until we reached point B. This was a pedagogical choice to isolate and study each event individually. In reality, it is more likely that inflation expectations will slowly increase as we begin to move from point A to point B as inflation increases in this move. We would then have a movement from A to C more similar to movement 3 in the figure below. If the change in πM was announced prior to the actual change, it is possible that πe immediately changed to 6% at point A. We would then see movement 4 directly from A to C (which, however, may take some time because of wage contracts).

Fig. 15.4: From short to long run with a faster change in inflation expectations.

From the neo-classical synthesis, another important conclusion may be drawn.
In order to keep U below UN, you need an accelerating inflation. Suppose that full employment is compatible with 4% inflation in the long run. In the Keynesian model, we can keep U below UN if we accept that inflation is above 4%. An inflation of, for example, 7% would keep U below UN indefinitely. In the neo-classical synthesis, this will not work. If we want to keep U below UN, we must accept an ever higher inflation. In order to keep U one percentage unit below UN we might need an inflation of 7% in the first period, 9% in the second period, then 13% and so on. Figure 15.3 will explain why. In order to reduce unemployment below UN, the growth rate in the money supply must increase (unexpectedly). If nothing else is done, U will fall back to UN (now with a higher inflation). In order to keep U below UN, πM must increase again and again at an accelerating rate. Only the natural rate of unemployment, UN, is compatible with a non-accelerating inflation and this rate is therefore often called the NAIRU (Non-Accelerating Inflation Rate of Unemployment).

SAS-LAS-AD model of the neo-classical synthesis

AS-AD in the Keynesian and the classical model

First, a brief review of the AS-AD model according to the classical and the Keynesian model when W is constant and exogenous.

Fig. 15.5: The two AS-AD models.

According to the classical model, aggregate supply is independent of the price level and equal to potential GDP. Potential GDP is the amount produced when U = UN and the AS curve becomes a vertical line through YPOT. The AD curve in the classical model consists of combinations of Y and P where the quantity theory M·V = P·Y is satisfied. Aggregate demand is equal to aggregate supply according to Say's Law. In the classical model, one starts from Y and finds P from the AD curve. The only function of the AD curve in the classical model is to determine the price level.
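A minimal numerical sketch of that last point (the numbers are purely illustrative, not from the text): with Y fixed at potential and V constant, the classical AD relation M·V = P·Y does nothing but solve for the price level, so changing M scales P and leaves Y untouched.

```python
def classical_price(m, v, y):
    """Classical AD: solve M*V = P*Y for P, with Y at potential and V constant."""
    return m * v / y

# Doubling M doubles P and has no effect on Y (money neutrality):
print(classical_price(1000, 2, 500))  # 4.0
print(classical_price(2000, 2, 500))  # 8.0
```

This is why the classical AD curve carries no policy content beyond pinning down P: every point on it has the same Y.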
The AD curve slopes downwards in the Keynesian model as it does in the classical model, but the interpretation and the reason are quite different. In the Keynesian model, you start with P and you find YD from the AD curve. Here, the AD curve slopes downwards because when P falls, R decreases, I increases and YD increases (see section Aggregate demand). Another difference is that the AD curve may be affected by fiscal and monetary policy in the Keynesian model but not in the classical model.

In the Keynesian model, the AS curve is horizontal for low values of Y. In this region, the AS curve determines P while the AD curve determines GDP. Aggregate supply will be equal to aggregate demand by the reverse Say's Law. For higher values of Y you need higher prices to stimulate aggregate supply and the AS curve will slope upwards. In this region, the AS and the AD curves simultaneously determine P and Y.

SAS, LAS, and AD

In the neo-classical synthesis, the Keynesian model is correct in the short run while (a slightly modified version of) the classical model applies in the long run. We therefore need to reconcile the AS-AD analysis of these models. In the synthesis, the following concepts are introduced:

• Long-run aggregate supply (LAS): the classical AS curve (L for Long run)
• Short-run aggregate supply (SAS): the Keynesian AS curve (S for Short run)

In the synthesis, it is the Keynesian AD curve that must be used. We can combine SAS, LAS, and AD in the same graph.

Fig. 15.6: SAS, LAS, and AD.

We begin by drawing them in such a way that both models agree in the determination of Y, Y = YPOT. In the synthesis, this corresponds to long-run equilibrium – there is no tendency for Y to increase or decrease. Note that the price level is determined according to the Keynesian model both in the short and the long run (as we use the Keynesian AD curve). There is no reason, however, to believe that this price level is consistent with the quantity theory.
In other words, the classical AD curve (not shown) may intersect LAS at a completely different P. The quantity theory in levels need not hold in the neo-classical synthesis, neither in the short run nor in the long run. However, the quantity theory in rates (π = πM) must hold in the long run. Therefore, it is not entirely correct to claim that the neo-classical synthesis reduces to the classical model in the long run.

The dynamics from the short to the long run

We will now describe the dynamics from the short to the long run in the LAS-SAS-AD model. To avoid having AS and AD curves "gliding", we will assume that πM = 0. The case πM ≠ 0 is not much harder to analyze.

We begin by analyzing an increase in MS (πM is still zero – except for the brief moment when MS increases, which we assume is very short). We start in the long-run equilibrium as in Figure 15.6. Initially π = πw = πe = 0.

Fig. 15.7: Dynamics in the neo-classical synthesis.

• We are in the initial point A.
• When MS increases, the AD curve moves outwards from AD1 to AD2.
• We move from point A to point B. Y increases and P increases.
• As Y increases, U falls and we move to point B on the SPC.
• At point B on the SPC, wages increase.
• When wages increase, the SAS curve will shift upwards.
• When the SAS curve shifts upwards, Y will fall and U will again increase. We move back along the SPC.
• The SAS curve must continue to shift upwards as long as Y > YPOT. It will shift from SAS1 to SAS2 and we move to point C. We are back on the LAS and we are back on the LPC.

Whenever you use the neo-classical synthesis for your analysis, you should begin as if you were using the Keynesian model (with exogenous wages). This will give you the short-run outcome. To obtain the long-run results, remove the assumption of exogenous wages. Let wages adjust so that you will return to LAS and LPC.
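The adjustment story above can be mimicked with a deliberately crude simulation. All functional forms and parameter values here are my own assumptions, not part of the model: unemployment is pushed below UN by a money-growth surprise, expected inflation adapts toward actual inflation, the short-run Phillips curve shifts up, and the economy ends back at UN with inflation equal to the new money growth rate (point C in Figure 15.7).

```python
# Crude illustration of the return to the natural rate after a money-growth
# surprise. Every number below is an illustrative assumption.

U_N   = 5.0   # natural rate of unemployment (assumed)
PI_M  = 6.0   # money growth after the unexpected increase (assumed)
SLOPE = 1.0   # slope of the short-run Phillips curve (assumed)
LAM   = 0.2   # speed at which U reacts to pi - pi_M (assumed)
MU    = 0.5   # speed of adaptive expectations (assumed)

def step(u, pi_e):
    pi = pi_e + SLOPE * (U_N - u)        # short-run Phillips curve
    u_next = u + LAM * (pi - PI_M)       # pi > pi_M: AS glides faster, U rises
    pi_e_next = pi_e + MU * (pi - pi_e)  # expectations adapt toward actual pi
    return u_next, pi_e_next

# Just after the shock: U pushed below U_N, expectations still at the old 4%.
u, pi_e = 4.0, 4.0
for _ in range(300):
    u, pi_e = step(u, pi_e)

print(round(u, 3), round(pi_e, 3))  # 5.0 6.0: back at U_N, higher inflation
```

With these parameters the path spirals back to (UN, πM): unemployment returns to 5% but expected inflation settles permanently at the new money growth rate of 6%, which is the synthesis's point C rather than the Keynesian point B.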
Aerodynamics Questions and Answers – The Symmetric Airfoil – 3

This set of Aerodynamics Multiple Choice Questions & Answers (MCQs) focuses on "The Symmetric Airfoil – 3".

1. The Kutta condition is not satisfied at the trailing edge where θ=π in transformed coordinates for a symmetrical airfoil.
a) True
b) False
Answer: b
Explanation: Directly putting θ=π gives an indeterminate form (γ(π)=\(\frac {0}{0}\)), but using L'Hospital's rule in the solution for γ(θ) gives a finite value of zero. Thus, the Kutta condition is satisfied.

2. Which of the following is the correct solution of the transformed fundamental equation of aerodynamics for a symmetrical airfoil?
a) γ(θ)=2αV[∞]\(\frac {sin⁡\theta }{1+cos\theta }\)
b) γ(θ)=2αV[∞]\(\frac {1+cos\theta }{sin⁡\theta }\)
c) γ(θ)=2αV[∞]\(\frac {1-cos⁡\theta }{sin\theta }\)
d) γ(θ)=2αV[∞]\(\frac {cos\theta }{sin\theta }\)
Answer: b
Explanation: The solution of the fundamental equation of thin airfoil theory is obtained using the transformation of coordinates. We have α and V[∞] and using the standard integrals we can find a solution for γ(x) as γ(θ)=2αV[∞]\(\frac {1+cos\theta }{sin⁡\theta }\) where 0≤θ≤π for 0≤x≤c.

3. What is the total circulation around the symmetric airfoil according to the thin airfoil theory?
a) Γ=πα^2cV[∞]
b) Γ=π^2αcV[∞]
c) Γ=2παcV[∞]
d) Γ=παcV[∞]
Answer: d
Explanation: The total circulation around the symmetric airfoil can be found by integrating the transformed solution γ(θ)=2αV[∞]\(\frac {1+cos\theta }{sin⁡\theta }\) using ξ=\(\frac {c}{2}\)(1-cosθ) over 0≤θ≤π, i.e. Γ=\(\int_0^c\)γ(ξ)dξ=παcV[∞].

4. Which of these is a wrong expression for the total circulation around a thin symmetric airfoil?
a) Γ=\(\int_0^c\)γ(ξ)dξ
b) Γ=\(\frac {c}{2} \int_0^{\pi }\)γ(θ)sin⁡θ dθ
c) Γ=cαV[∞]\(\int_0^c\)(1+cosθ)dθ
d) Γ=cαV[∞]\(\int_0^{\pi }\)(1+cosθ)dθ
Answer: c
Explanation: Using the transformation ξ=\(\frac {c}{2}\)(1-cosθ), where 0≤θ≤π, corresponding to 0≤ξ≤c in γ(θ), and integrating gives the total circulation Γ.

5. The lift coefficient for a thin symmetrical airfoil is given by______
a) c[l] = πα
b) c[l] = π^2α
c) c[l] = 2πα
d) c[l] = πα^2
Answer: c
Explanation: The lift coefficient is given by c[l]=\(\frac {L'}{q_∞S}\) where L' is the lift per unit span and S = c (1). Now, L'=ΓV[∞]ρ[∞], according to the Kutta-Joukowski theorem. Putting Γ=παcV[∞] we get c[l] = 2πα.

6. The lift curve slope for a flat plate is_____
a) 2π rad
b) 2π rad^-1
c) π rad
d) 0.11 degree
Answer: b
Explanation: The lift curve slope is given by \(\frac {dc_l}{d\alpha }\)=2π rad^-1 from the thin airfoil theory for the symmetric airfoils. It is equal to 0.11 degree^-1.

7. Given an angle of attack 5° and c = 5m, the moment coefficient about the leading edge is_____
a) -0.137
b) -0.685
c) -7.8
d) -0.27
Answer: a
Explanation: The coefficient of moment about the leading edge is given by c[m,le]=-π \(\frac {\alpha }{2}\) where α is in rad. It is independent of chord length.

8. Which of the following is an incorrect relation for a flat plate?
a) c[m,le]=-π \(\frac {\alpha }{2}\)
b) c[m,le]=-\(\frac {c_l}{4}\)
c) c[m,le]=-\(\frac {c_l}{2}\)
d) c[m,c/4]=c[m,le]+\(\frac {c_l}{4}\)
Answer: c
Explanation: The coefficient of moment about the leading edge is given by c[m,le]=-π \(\frac {\alpha }{2}\). Putting c[l] = 2πα we get c[m,le]=-\(\frac {c_l}{4}\). Finding the moment coefficient about quarter chord we get c[m,c/4]=c[m,le]+\(\frac {c_l}{4}\).

9. The coefficient of moment about the quarter chord is zero for a symmetric airfoil.
This implies____
a) Quarter-chord is the center of pressure
b) Quarter-chord is the center of mass
c) Quarter-chord has zero forces acting on it
d) Total lift is zero at quarter-chord
Answer: a
Explanation: The coefficient of moment about the quarter chord is zero. By definition, the center of pressure is the point about which the total moment is zero. Hence, quarter-chord is the center of pressure for the symmetric airfoil. Other statements cannot be said conclusively with the given information.

10. Select the incorrect statement for a thin, symmetric airfoil out of the following.
a) Quarter-chord is the aerodynamic center
b) Quarter-chord is the center of pressure
c) Moment about quarter-chord depends on the angle of attack
d) Moment about quarter-chord is zero
Answer: c
Explanation: The coefficient of moment about the quarter chord is zero, thereby making it the aerodynamic center (moment coefficient independent of angle of attack) and center of pressure (moment coefficient is zero) for a thin symmetric airfoil.

11. For a flat plate, aerodynamic center and center of pressure coincide.
a) True
b) False
Answer: a
Explanation: The flat plate is a thin, symmetric airfoil for which the moment about quarter-chord is zero. Thus, quarter-chord acts as both the aerodynamic center and center of pressure.

12. Aerodynamic center and center of pressure coincide for all airfoils.
a) False
b) True
Answer: a
Explanation: The aerodynamic center is the point where the pitching moment remains constant with changing angle of attack. It is generally the quarter-chord for an airfoil. The center of pressure is the point where the resultant of the forces acts, and the moment at that point will change with the change of angle of attack. Thus, the center of pressure will change and may not always be the quarter-chord.

Sanfoundry Global Education & Learning Series – Aerodynamics.
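The closed-form results quizzed above are easy to check numerically. The sketch below (helper names are my own) evaluates c_l = 2πα, Γ = παcV∞ and c_m,le = −πα/2, and reproduces the −0.137 of question 7; note the chord length never enters the moment coefficient.

```python
import math

def c_l(alpha):
    """Lift coefficient of a thin symmetric airfoil; alpha in radians."""
    return 2 * math.pi * alpha

def circulation(alpha, c, v_inf):
    """Total circulation: Gamma = pi * alpha * c * V_inf."""
    return math.pi * alpha * c * v_inf

def c_m_le(alpha):
    """Moment coefficient about the leading edge: -pi * alpha / 2."""
    return -math.pi * alpha / 2

# Question 7: alpha = 5 degrees; the chord length c drops out entirely.
alpha = math.radians(5)
print(round(c_m_le(alpha), 3))  # -0.137
```

The same functions confirm the relation c_m,le = −c_l/4 from question 8, since −πα/2 = −(2πα)/4 for any α.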
Shifting functors: Alternative for numerical towers?

I've been longing for a functional programming language that'd double as a computer algebra system. One of the problems has been that numerical types in these languages do not compose very well. People usually build up some sort of a numerical tower. You get integers, then real numbers from integers, then complex numbers. However, if something is missing in this tower then you won't have it. Or alternatively you have it, but you don't have something else.

What if we could build structures that do compose well? What kind of properties would they have? I think they'd be functors.

Now in case I write something silly here, this is the first time I'm doing something with category theory. I just watched halfway through Bartosz Milewski's category theory lectures (there are also second and third parts of these lectures). Also this is not something that could be just picked up and used. The question mark in the title is for that reason.

Shifting functors

A functor is a transform of types and functions that preserves composition:

F (g.f) = F g . F f
F id = id

So if we'd have a cartesian category with a carrier number object and some functions for those numbers:

zero : 1 -> n
succ : n -> n
neg  : n -> n
add  : (n * n) -> n
mul  : (n * n) -> n

A functor from this category would preserve the composition of these functions. E.g., for every morphism g in our category we'd have:

F (g . add) = F g . F add
F (add . f) = F add . F f

What does this mean for addition? It means that if we add in the target category, it behaves the same as in the category that we started from. In this case if we do (g . add . (zero * succ)) it must be similar as if we did F g . F add . (F zero * F succ).

We should have a functor to every base numerical value that we want to use. E.g.
We'd have:

    zero :: Num a => () -> a
    succ :: Num a => a -> a
    neg  :: Num a => a -> a
    add  :: Num a => (a, a) -> a
    mul  :: Num a => (a, a) -> a

Note that we aren't writing an endofunctor; the source category is outside of the type/function category of Haskell.

Next we should have a functor to structures that we want to use. For example, if we had polynomials we'd then have:

    zero :: () -> Poly a
    succ :: Poly a -> Poly a
    neg  :: Poly a -> Poly a
    add  :: (Poly a, Poly a) -> Poly a
    mul  :: (Poly a, Poly a) -> Poly a

For complex numbers we'd have:

    zero :: () -> Complex a
    succ :: Complex a -> Complex a
    neg  :: Complex a -> Complex a
    add  :: (Complex a, Complex a) -> Complex a
    mul  :: (Complex a, Complex a) -> Complex a

Next we'd have a natural transformation from a to Poly a and to Complex a. This means that things must stay the same if we transform from a to Poly a anywhere in our computation.

    alpha :: a -> Poly a

    alpha.zero  = zero.alpha
    alpha.succ  = succ.alpha
    alpha.neg   = neg.alpha
    alpha.add   = add.alpha
    alpha.mul   = mul.alpha
    alpha.(f,g) = (f,g).alpha
    alpha.fst   = fst.alpha
    alpha.snd   = snd.alpha

Next we additionally want to have a natural isomorphism between the structure types we use:

    beta  :: (Poly . Complex) a -> (Complex . Poly) a
    delta :: (Complex . Poly) a -> (Poly . Complex) a

Or more generally:

    beta'  :: ShiftingFunctor f => (f . Poly) a -> (Poly . f) a
    delta' :: ShiftingFunctor f => (f . Complex) a -> (Complex . f) a

So we should be able to shift around the representation, or refactor the expression, such that all these operations stay composable. If we were able to build a structure that can satisfy all these rules, we could build types that can compose to build larger arithmetic structures without needing to consider other possible structures when declaring such new structures. Though this whole problem isn't as interesting as it used to be.
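The naturality condition alpha.add = add.alpha can be spot-checked on concrete values. Here is a quick transliteration to Python for testing (the helper names are mine, not from the post):

```python
# Check one naturality square concretely: embedding scalars as constant
# polynomials (alpha) commutes with addition. Helper names are illustrative.

def alpha(a):
    """Embed a scalar as a constant polynomial (coefficient list)."""
    return [a]

def poly_add(xs, ys):
    """Coefficient-wise polynomial addition, padding the shorter list with 0."""
    n = max(len(xs), len(ys))
    xs = xs + [0] * (n - len(xs))
    ys = ys + [0] * (n - len(ys))
    return [a + b for a, b in zip(xs, ys)]

a, b = 3, 4
lhs = alpha(a + b)                  # add first, then embed
rhs = poly_add(alpha(a), alpha(b))  # embed first, then add in Poly
print(lhs, rhs)  # [7] [7]
```

Both paths around the square produce the same constant polynomial, which is what the naturality equation demands for this one operation and these two values.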
It would seem that geometric algebra unifies enough structures to make it quite convenient to work with arithmetic structures anyway. Also, things such as polynomials and automatic differentiation would seem to partially fall out of the functional programming itself.

Polynomials and complex numbers

Finally, here are some sketch structures to operate with polynomials and complex numbers in Haskell. They shouldn't be that interesting in themselves because I don't have a way to check that they form shifting functors.

    type Poly a = [a]

    class Arith a where
        zero    :: a
        is_zero :: a -> Bool
        neg     :: a -> a
        add     :: a -> a -> a
        mul     :: a -> a -> a

    instance Arith Int where
        zero    = 0
        is_zero = (==0)
        neg a   = -a
        add     = (+)
        mul     = (*)

    lift :: a -> Poly a
    lift a = [a]

    norm :: Arith a => Poly a -> Poly a
    norm [] = []
    norm (x:xs) = case x : norm xs of
        [a] -> if is_zero a then [] else [a]
        ys  -> ys

The substitution should be a bit different here.

    subs :: Num a => Poly a -> a -> a
    subs poly x = f poly x 0
        where f (c:xs) x n = c*(x^n) + f xs x (n+1)
              f [] _ _ = 0

    instance Arith a => Arith (Poly a) where
        zero = []
        is_zero = all is_zero
        neg xs = fmap neg xs
        add [] []         = []
        add xs []         = xs
        add [] ys         = ys
        add (x:xs) (y:ys) = add x y : add xs ys
        mul xs = norm . foldl add [] . fmap f . indexed
            where f (n,c) = replicate n zero ++ fmap (mul c) xs

    indexed :: [a] -> [(Int,a)]
    indexed = f 0
        where f n (x:xs) = (n,x) : f (n+1) xs
              f n []     = []

    x :: Num a => Poly a
    x = [0,1]

    polymap :: (a -> b) -> Poly a -> Poly b
    polymap f xs = fmap f xs

    merge :: (Num a, Num b) => Poly a -> Poly b -> Poly (a,b)
    merge [] []         = []
    merge (x:xs) []     = (x,0) : merge xs []
    merge [] (y:ys)     = (0,y) : merge [] ys
    merge (x:xs) (y:ys) = (x,y) : merge xs ys

Complex numbers:

    type Complex a = (a,a)

    lift' :: Num a => a -> Complex a
    lift' a = (a,0)

    i :: Num a => Complex a
    i = (0,1)

    real :: Complex a -> a
    real (a,_) = a

    imag :: Complex a -> a
    imag (_,a) = a

    instance Arith a => Arith (Complex a) where
        zero = (zero, zero)
        is_zero (x,y) = is_zero x && is_zero y
        neg (x,y) = (neg x, neg y)
        add (a,b) (c,d) = (add a c, add b d)
        mul (a,b) (c,d) = (add (mul a c) (neg (mul b d)), add (mul a d) (mul b c))

    complexmap :: (a -> b) -> Complex a -> Complex b
    complexmap f (x,y) = (f x, f y)

Conversions between structures:

    beta :: Poly (Complex a) -> Complex (Poly a)
    beta poly = (polymap real poly, polymap imag poly)

    delta :: Num a => Complex (Poly a) -> Poly (Complex a)
    delta (r,v) = merge r v

Note that the polymap and merge for each type should be enough to build up a system where the structure can be freely permuted. The problem is in verifying somehow that the conversions form a natural isomorphism between permutations. There might be some simple rules that allow such structures to be constructed and work together without such verification, but I'm likely not going to explore deeper into it yet.

I'll watch the remaining videos and then get back to looking at the word-processing format again. I still may need to write a few filler blog posts before getting to it though.
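As a partial sanity check of the conversions, here is the beta/delta pair transliterated to Python (names mirror the Haskell above; this only tests one round trip on one value, not naturality in general):

```python
# Poly a as a coefficient list; Complex a as a (real, imag) pair.
# beta/delta mirror the Haskell sketch; this checks one round trip.

def beta(poly):
    """Poly (Complex a) -> Complex (Poly a): split into real and imag parts."""
    return ([re for re, _ in poly], [im for _, im in poly])

def delta(c):
    """Complex (Poly a) -> Poly (Complex a): zip back, padding with zeros."""
    r, v = c
    n = max(len(r), len(v))
    r = r + [0] * (n - len(r))
    v = v + [0] * (n - len(v))
    return [(a, b) for a, b in zip(r, v)]

p = [(1, 2), (0, -1), (3, 0)]  # (1+2i) + (0-1i)x + (3+0i)x^2
print(beta(p))                 # ([1, 0, 3], [2, -1, 0])
print(delta(beta(p)) == p)     # True
```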
A Sketch Recognition Algorithm Based on Bayesian Network and Convolution Neural Network

Xiang Hou
School of Intelligent Manufacturing, Sichuan University of Arts and Science, No. 519 Tashi Road, Dazhou, Sichuan 635000, China

May 27, 2018 / July 24, 2018 / March 20, 2019

Keywords: Bayesian network, stroke grouping, convolution neural network, sketch recognition

Most existing sketch recognition algorithms restrict the user's drawing habits in order to achieve stroke grouping and recognition. To solve this problem, a new sketch recognition algorithm based on a Bayesian network and a convolution neural network (CNN) is proposed. First, the input sketch is processed by a Gaussian low-pass filter to obtain a smoother stroke. The continuously input strokes are segmented, and then the Bayesian network and the CNN each perform stroke recognition. The recognition result of the Bayesian network is adopted when the reliability of the stroke exceeds a threshold; otherwise, the recognition result of the CNN is adopted. The experimental results show that the proposed algorithm is effective for circuit symbol recognition. The recognition rate reached 80.34% during the drawing process, and the final recognition rate reached 93.48%.

Cite this article as: X. Hou, "A Sketch Recognition Algorithm Based on Bayesian Network and Convolution Neural Network," J. Adv. Comput. Intell. Intell. Inform., Vol.23, No.2, pp. 261-267, 2019.
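The abstract's decision rule — trust the Bayesian network when its stroke reliability is high, otherwise fall back to the CNN — can be sketched as follows. This is an illustrative sketch only: the function names and the threshold value are mine, not from the paper.

```python
def fuse_predictions(bn_label, bn_reliability, cnn_label, threshold=0.8):
    """Thresholded fusion of the two recognizers' outputs, as described in
    the abstract. The 0.8 threshold is illustrative, not the paper's value."""
    if bn_reliability > threshold:
        return bn_label   # Bayesian network result is trusted
    return cnn_label      # otherwise fall back to the CNN result

print(fuse_predictions("resistor", 0.92, "capacitor"))  # resistor
print(fuse_predictions("resistor", 0.41, "capacitor"))  # capacitor
```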
The idea of a probability density function

An initial thought experiment

I'm thinking of a number, let's call it $X$, between 0 and 10 (inclusive). If I don't tell you anything else, what would you imagine is the probability that $X=0$? That $X=4$? Assuming that I don't have any preference for any particular number, you'd imagine that the probability of each of the eleven integers $0,1,2,\ldots, 10$ is the same. Since all the probabilities must add up to 1, a logical conclusion is to assign a probability of $1/11$ to each of the 11 options, i.e., you'd assume that the probability that $X=i$ is $1/11$ for any integer $i$ from 0 to 10, which we write as
\begin{gather*}
\Pr(X=i) = \frac{1}{11} \qquad \text{for } i=0,1,2,\ldots, 10.
\end{gather*}
Implicit in this description is the assumption that the probability that $X$ is any other number $x$ is zero. (Here we make a distinction between the random number $X$ and the variable $x$, which can stand for any fixed number.) We can write this implicit assumption as
\begin{gather*}
\Pr(X=x) = 0 \qquad \text{if $x$ is not one of } \{0,1,2,\ldots, 10\}.
\end{gather*}

What would change if instead I told you that I was thinking of a number $X$ between 0 and 1 (inclusive)? You might assume that I was thinking of either the number 0 or the number 1, and you'd assign a probability of 1/2 to both options. Or, you might guess that I had more than two options in mind. There was nothing in what I said that forces you to conclude that I was thinking of an integer. Maybe I was thinking of 1/2, or 1/4, or 7/8. Once you start going down that road, the possibilities are endless. I could be thinking of any fraction between 0 and 1. But who said I was limiting myself to rational numbers? I could even be thinking of irrational numbers like $1/\sqrt{2}$ or $\pi/5$. If we allow the possibility that the number $X$ could be any real number in the interval $[0,1]$, then there are clearly an infinite number of possibilities.
(Of course, I could have been thinking of non-integers for the number between 0 and 10 as well, but most people would think I was referring to integers in that case.)

Since we don't want to assume that I am favoring any particular number, we should insist that the probability is the same for each number. In other words, the probability that the random number $X$ is any particular number $x \in [0,1]$ should be some constant value; let's use $c$ to denote this probability of any single number. But now we run into trouble due to the fact that there are an infinite number of possibilities. If each possibility has the same probability $c$, the probabilities must add up to 1, and there are an infinite number of possibilities, what could the individual probability $c$ possibly be? If $c$ were any finite number greater than zero, once we add up an infinite number of the $c$'s, we must get to infinity, which is definitely larger than the required sum of 1. In order to prevent the sum from blowing up to infinity, we must have $c$ be infinitesimally small, i.e., we must insist that $c=0$. The probability that I chose any particular number, such as the probability that $X$ equals $1/2$, must be equal to zero. We can write this as
\begin{gather*}
\Pr(X=x) = 0 \qquad \text{for any real number $x$}.
\end{gather*}

What went wrong here? We know all the probabilities must not be zero, because we know that the total probability must add up to one. In fact, we know that, somehow, there must be something special about the probability of numbers $0 \le x \le 1$. We know that $X$ is somewhere in that interval with probability one, and the probability that $X$ is outside that interval is zero.

The probability density

It turns out, for the case where we allow $X$ to be any real number, we are just approaching the question in the wrong way. We should not ask for the probability that $X$ is exactly a single number (since that probability is zero).
Instead, we need to think about the probability that $X$ is close to a single number. We capture the notion of being close to a number with a probability density function, which is often denoted by $\rho(x)$. If the probability density around a point $x$ is large, that means the random variable $X$ is likely to be close to $x$. If, on the other hand, $\rho(x)=0$ in some interval, then $X$ won't be in that interval.

To translate the probability density $\rho(x)$ into a probability, imagine that $I_x$ is some small interval around the point $x$. Then, assuming $\rho$ is continuous, the probability that $X$ is in that interval will depend both on the density $\rho(x)$ and the length of the interval:
\begin{gather}
\Pr(X \in I_x) \approx \rho(x) \times \text{Length of $I_x$}. \label{eq:densityapprox}
\end{gather}
We don't have a true equality here, because the density $\rho$ may vary over the interval $I_x$. But the approximation becomes better and better as the interval $I_x$ shrinks around the point $x$, as $\rho$ will become closer and closer to a constant inside that small interval. The probability $\Pr(X \in I_x)$ approaches zero as $I_x$ shrinks down to the point $x$ (consistent with our above result for single numbers), but the information about $X$ is contained in the rate at which this probability goes to zero as $I_x$ shrinks.

In general, to determine the probability that $X$ is in any subset $A$ of the real numbers, we simply add up the values of $\rho(x)$ in the subset. By "add up," we mean integrate the function $\rho(x)$ over the set $A$. The probability that $X$ is in $A$ is precisely
\begin{gather}
\Pr(X \in A) = \int_A \rho(x)\,dx. \label{eq:density}
\end{gather}
For example, if $I$ is the interval $I=[a,b]$ with $a \le b$, then the probability that $a \le X \le b$ is
\begin{gather*}
\Pr(X \in I) = \int_I \rho(x)\,dx = \int_a^b \rho(x)\,dx.
\end{gather*}
For a function $\rho(x)$ to be a probability density function, it must satisfy two conditions.
It must be non-negative, so that the integral \eqref{eq:density} is always non-negative, and it must integrate to one, so that the probability of $X$ being something is one:
\begin{gather*}
\rho(x) \ge 0 \quad \text{for all $x$}\\
\int \rho(x)\,dx = 1,
\end{gather*}
where the integral is implicitly taken over the whole real line.

Equation \eqref{eq:density} is the right way to define a probability density function. However, if we aren't worrying about being too precise or about discontinuities in $\rho$, we may sometimes state that
\begin{gather*}
\Pr(X \in (x,x+dx)) = \rho(x)\,dx.
\end{gather*}
Here, we are thinking of $dx$ as being an infinitesimally small number, so that $(x,x+dx)$ is an infinitesimally small interval $I_x$ around $x$, in which case the approximation \eqref{eq:densityapprox} becomes exact, at least if $\rho$ is continuous.

Example 1

Returning to the opening example of a number in the interval $[0,1]$, we can let $X$ be given by a uniform distribution on the interval $[0,1]$. The resulting probability density function of $X$ is given by
\begin{gather*}
\rho(x) = \begin{cases} 1 & \text{if $x \in [0,1]$}\\ 0 & \text{otherwise} \end{cases}
\end{gather*}
and is illustrated in the following figure. The function $\rho(x)$ is a valid probability density function since it is non-negative and integrates to one. If $I$ is an interval contained in $[0,1]$, say $I=[a,b]$ with $0 \le a \le b \le 1$, then $\rho(x)=1$ in the interval and
\begin{align*}
\Pr(X \in I) &= \int_I \rho(x)\,dx\\
&= \int_I 1\,dx\\
&= \int_a^b 1\,dx = b-a = \text{Length of $I$}.
\end{align*}
For any interval $I$, $\Pr(X \in I)$ is equal to the length of the intersection of $I$ with the interval $[0,1]$.

Example 2

If
\begin{gather*}
\rho(x) = \begin{cases} x & \text{if $0 \lt x \lt 1$}\\ 2-x & \text{if $1 \lt x \lt 2$}\\ 0 & \text{otherwise,} \end{cases}
\end{gather*}
then $\rho(x)$ is a triangular probability density function centered around 1.
You can verify that $\int \rho(x)\,dx=1$, so $\rho$ is a valid density. The density is largest near 1. If a random variable $X$ is given by this density, you can verify that
\begin{align*}
\Pr\left(\frac{1}{2} \lt X \lt \frac{3}{2}\right) = \int_{1/2}^{3/2} \rho(x)\,dx = \frac{3}{4}.
\end{align*}
In this definition of $\rho(x)$, it doesn't matter that we defined $\rho(1)=0$. The density at a single point doesn't matter. We would get the same random variable if we used the density
\begin{gather*}
\rho(x) = \begin{cases} x & \text{if $0 \lt x \le 1$}\\ 2-x & \text{if $1 \lt x \lt 2$}\\ 0 & \text{otherwise} \end{cases}
\end{gather*}
so that $\rho(1)=1$. This second definition is a little nicer because $\rho$ is continuous. However, the value of an integral doesn't depend on the value of its integrand at just one point, so given the definition of equation \eqref{eq:density}, the probability of the random variable $X$ being in any set is unchanged if we change $\rho$ at just one point (or at any finite number of points).

Example 3

One very important probability density function is that of a Gaussian random variable, also called a normal random variable. The probability density function looks like a bell-shaped curve. One example is the density
\begin{gather*}
\rho(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2},
\end{gather*}
which is graphed below. One has to do some tricks to verify that indeed $\int \rho(x)\,dx=1$. It turns out that Gaussian random variables show up naturally in many contexts in probability and statistics.
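The integrals in Examples 2 and 3 are easy to confirm numerically. A small sketch (not part of the original article) using a midpoint Riemann sum:

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint Riemann-sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def tri(x):
    """Triangular density from Example 2."""
    if 0 < x < 1:
        return x
    if 1 < x < 2:
        return 2.0 - x
    return 0.0

def gauss(x):
    """Standard normal density from Example 3."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

print(round(integrate(tri, 0, 2), 6))      # 1.0  -- tri is a valid density
print(round(integrate(tri, 0.5, 1.5), 6))  # 0.75 -- Pr(1/2 < X < 3/2)
print(round(integrate(gauss, -8, 8), 6))   # 1.0  -- integrates to one
```

Note that cutting the Gaussian integral off at ±8 loses only a negligible amount of probability mass, which is why the numerical answer still rounds to 1.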
Find the solution of the system of equations.

4x - 7y = 42
2x - 8y = 30

Solution 1

To solve the system of equations:

4x - 7y = 42
2x - 8y = 30

we can use the method of substitution or elimination. Here, I'll use the elimination method. First, we can multiply the second equation by 2 to make the coefficients of x in both equations the same:

4x - 7y = 42
4x - 16y = 60

Now, subtract the second equation from the first: (-7y) - (-16y) = 42 - 60, which gives 9y = -18, so y = -2. Substituting y = -2 into 2x - 8y = 30 gives 2x + 16 = 30, so 2x = 14 and x = 7.

The solution is x = 7, y = -2. (Check: 4·7 - 7·(-2) = 28 + 14 = 42 and 2·7 - 8·(-2) = 14 + 16 = 30.)
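The elimination result can be double-checked independently; a quick sketch using Cramer's rule for the 2×2 system:

```python
# Solve the 2x2 system with Cramer's rule as an independent check:
#   4x - 7y = 42
#   2x - 8y = 30
a1, b1, c1 = 4, -7, 42
a2, b2, c2 = 2, -8, 30

det = a1 * b2 - a2 * b1        # -32 - (-14) = -18
x = (c1 * b2 - c2 * b1) / det  # (-336 + 210) / -18 = 7.0
y = (a1 * c2 - a2 * c1) / det  # (120 - 84) / -18 = -2.0
print(x, y)  # 7.0 -2.0
```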
Comparing Classical and Quantum Generative Learning Models for High-Fidelity Image Synthesis

Division of Engineering Science, University of Toronto, Toronto, ON M5S 1A1, Canada
Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON K7L 3N6, Canada
Quantum Computation and Neuroscience, Arthur C. Clarke Center for Human Imagination, University of California San Diego, La Jolla, CA 92093, USA
Center for Biotechnology and Genomics Medicine, Medical College of Georgia, Augusta, GA 30912, USA
NetraMark Holdings, Toronto, ON M6P 3T1, Canada
Centre for Nanotechnology, Center for Quantum Information and Quantum Control, Department of Electrical Engineering, University of Toronto, Toronto, ON M5S 1A1, Canada

Author to whom correspondence should be addressed.
Submission received: 27 September 2023 / Revised: 11 November 2023 / Accepted: 21 November 2023 / Published: 18 December 2023

The field of computer vision has long grappled with the challenging task of image synthesis, which entails the creation of novel high-fidelity images. This task is underscored by the Generative Learning Trilemma, which posits that it is not possible for any image synthesis model to simultaneously excel at high-quality sampling, achieve mode convergence with diverse sample representation, and perform rapid sampling. In this paper, we explore the potential of Quantum Boltzmann Machines (QBMs) for image synthesis, leveraging the D-Wave 2000Q quantum annealer. We undertake a comprehensive performance assessment of QBMs in comparison to established generative models in the field: Restricted Boltzmann Machines (RBMs), Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Probabilistic Models (DDPMs). Our evaluation is grounded in widely recognized scoring metrics, including the Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Scores.
The results of our study indicate that QBMs do not significantly outperform the conventional models in terms of the three evaluative criteria. Moreover, QBMs have not demonstrated the capability to overcome the challenges outlined in the Trilemma of Generative Learning. Through our investigation, we contribute to the understanding of quantum computing's role in generative learning and identify critical areas for future research to enhance the capabilities of image synthesis models.

1. Introduction

1.1. Generative Modeling

Generative modeling is a class of machine learning that aims to generate novel samples from an existing dataset. Image synthesis is a subset of generative modeling applications relating to the generation of novel high-fidelity images that mimic an underlying distribution of images, known as the training set. The main types of generative models are Generative Adversarial Networks (GANs), probabilistic models, and Variational Autoencoders (VAEs), all of which are capable of high-fidelity image synthesis. In 2020, a new methodology for producing image synthesis using diffusion models was shown to produce high-quality images [ ]. In 2021, OpenAI demonstrated Denoising Diffusion Probabilistic Models' (DDPMs') superiority in generating higher image sample quality than the previous state-of-the-art GANs [ ].

Quantum annealers, namely the D-Wave 2000Q, have also been shown to perform generative modeling with varied success [ ]. By taking advantage of quantum sampling and parallelization, the D-Wave 2000Q can hold an embedding of the latent space relating to a set of training data in an architecture of coupled qubits [ ]. There are still significant research gaps relating to utilizing generative modeling on the quantum processing unit for image synthesis, especially as it relates to measuring their performance against other generative models on standard scoring methods, namely the Inception score, FID, and KID.
This research aims to close this gap by investigating the efficacy of the D-Wave 2000Q quantum annealer on the problem of image synthesis.

1.2. Trilemma of Generative Learning

Xiao et al. describe the Trilemma of Generative Learning as the inability of any single deep generative modeling framework to satisfy the following requirements for wide adoption and application of image synthesis: (i) high-quality sampling, (ii) mode coverage and sample diversity, and (iii) fast and computationally inexpensive sampling [ ] (Figure 1). Current research primarily focuses on high-quality image generation and ignores real-world sampling constraints and the need for high diversity and mode coverage. Fast sampling allows generative models to be utilized in fast-learning applications that require quick image synthesis, e.g., interactive image editing [ ]. Diversity and mode coverage ensure generated images are not direct copies of, but are also not significantly skewed from, the training data. This paper reviews research that aims to tackle this trilemma with the D-Wave quantum annealer and attempts to determine the efficacy of modeling on the three axes of the trilemma. In doing so, the success of the quantum annealer will be tested against other classical generative modeling methodologies. Success in showing the quantum annealer's ability to produce (i) high-quality images, (ii) mode coverage and diversity, and (iii) fast sampling would demonstrate the supremacy of quantum annealers over classical methods for the balanced task of image synthesis.

2. Background

The trajectory of artificial intelligence in the domain of image synthesis, evolving from Restricted Boltzmann Machines (RBMs) to Denoising Diffusion Probabilistic Models (DDPMs), marks a significant technical progression.
This advancement, intermediated by Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), has driven improvements in the fidelity, diversity, and realism of generated images, while also introducing a host of model-specific challenges and computational complexities. Before exploring generative modeling within quantum computing environments, let us provide background on classical image synthesis models, namely RBMs, VAEs, GANs, and DDPMs. Following this, we will delve into the research of quantum annealing and its application in machine learning. The ultimate goal is to create a blueprint for image synthesis on a quantum annealer.

2.1. Classical Image Synthesis

2.1.1. Restricted Boltzmann Machine

Boltzmann Machines are a class of energy-based generative learning models. A Restricted Boltzmann Machine, a subset of Boltzmann Machines, is a fully connected bipartite graph that is segmented into visible and hidden neurons, as shown in Figure 2. RBMs are generative models that embed the latent feature space in the weights between the visible and hidden layers. RBMs were first introduced in 1986 by Smolensky and were further developed by Freund and D. Haussler in 1991 [ ]. The energy function to minimize when training an RBM is the following [ ]:
$$E(v,h) = -a^T v - b^T h - v^T W h$$
Training is the process of tuning the weight matrix $W$ and the bias vectors $a$ and $b$ on the visible and hidden layers, respectively. $v$ represents the visible units, i.e., the observed values or a training sample. The network assigns a probability to every possible pair of a visible and a hidden vector via this energy function [ ]:
$$p(v,h) = \frac{1}{Z} e^{-E(v,h)}$$
where $Z$ is the partition function, given by summing over all possible pairs of $v$ and $h$. Thus, the probability of a given $v$ is
$$p(v) = \frac{1}{Z} \sum_h e^{-E(v,h)}$$
$$Z = \sum_{v,h} e^{-E(v,h)}$$
The difficulty in evaluating the partition function introduces the need to use Gibbs sampling with Contrastive Divergence Learning, introduced by Hinton et al.
in 2005 [ ]. By utilizing such methods, one can train the RBM quickly via gradient descent, similar to other neural networks. By adding more hidden layers, a deeper embedding can be captured by the model; such a system is called a Deep Belief Network (DBN). RBMs, while of little note in the modern landscape of machine learning research due to their limited performance and relatively slow training times, are of particular note to this research, as they have direct parallels with both the architecture of the D-Wave 2000Q quantum processor and the method by which they reduce the total energy of their respective systems. RBMs also have limited applications in computer vision but were an important advancement in the field of generative modeling as a whole.

2.1.2. Variational Autoencoder

A Variational Autoencoder (VAE) is a generative machine learning model developed in 2013, composed of a neural network that is able to generate novel high-fidelity images, texts, sounds, etc. [ ]. Refer to Figure 3 for the VAE architecture. Autoencoders seek to compress an input space into a compact latent representation from which the original input space can be recovered [ ]. Variational Autoencoders improve upon traditional Autoencoders by recognizing that the input space has an underlying distribution and seeking to learn the parameters of that distribution [ ]. Once trained, VAEs can be used to generate novel data, similar to the input space, by removing the encoding layers and exploring the latent space [ ]. Exploring the latent space is simply treating the latent compression layer as an input layer and observing the output of the VAE for various inputs. VAEs marked the first reliable way to generate somewhat high-fidelity images using machine learning [ ].

2.1.3. Generative Adversarial Networks

The most significant development in high-fidelity generative image synthesis was in 2014 with the introduction of GANs by Ian Goodfellow et al. [ ]. Goodfellow et al.
propose a two-player minimax game composed of a generator model (G) and a discriminator model (D), as shown in Figure 4. As the game progresses, both the generator and discriminator models improve. GANs are trained via an adversarial contest between the generator model (G) and the discriminator model (D) [ ]. The discriminator's input $x$ contains samples from both the training set and $p_g$, the images generated by G. $D(x;\theta_d)$ outputs the probability that $x$ originates from the training dataset as opposed to $p_g$. Meanwhile, $G(z;\theta_g)$ generates samples from $p_g$ given noise $z$. G's goal is to fool D, while D aims to reliably differentiate real training data from data generated by G. The loss function for G is $\log(1-D(G(z)))$. Thus, the value/loss function of a GAN is represented as:
$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1-D(G(z)))]$$
Both G and D are trained simultaneously. This algorithm allows for lock-step improvements to both G and D. Towards the conclusion of training, G becomes a powerful image generator which closely replicates the input space, i.e., the training data.

GANs have several shortcomings that make them difficult to train. Due to the adversarial nature of GANs, training the model can face the issue of Vanishing Gradients, when the discriminator develops more quickly than the generator, consequently correctly classifying every $G(z)$ and leaving no error to train on for the generator [ ]. Another common issue is Mode Collapse, when the generator learns to generate a particularly successful $G(z)$ such that the discriminator is consistently fooled, and the generator continues to only produce that singular output with no variability in image generation [ ]. Both Vanishing Gradients and Mode Collapse are consequences of one of the adversarial models improving faster than the other.

2.1.4. Denoising Diffusion Probabilistic Model

DDPMs are a recent development proposed by Jonathan Ho et al.
(2020), inspired by nonequilibrium thermodynamics, that produces high-fidelity image synthesis using a parameterized Markov chain [ ]. Beginning with the training sample, each step of the Markov chain adds a single layer of Gaussian noise. A neural network is trained on parameterizing these additional Gaussian noise layers to reverse the process from random noise to a high-fidelity image, as shown in Figure 5 , where $q(x_t \mid x_{t-1})$ represents the forward process, adding Gaussian noise, and $p_\theta(x_{t-1} \mid x_t)$ represents the reverse process, denoising. The reverse process is captured by training. $p_\theta(x_0) := \int p_\theta(x_{0:T}) \, dx_{1:T}$ $p_\theta(x_{0:T}) := p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t)$ $p_\theta(x_{t-1} \mid x_t) := \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$ For clarity, we remind the reader that $\mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$ is the normal distribution with mean $\mu_\theta(x_t, t)$ and covariance matrix $\Sigma_\theta(x_t, t)$. The loss function for a DDPM is as follows: $L := \mathbb{E}_q \left[ -\log p(x_T) - \sum_{t \geq 1} \log \frac{p_\theta(x_{t-1} \mid x_t)}{q(x_t \mid x_{t-1})} \right]$ Using a U-Net (a CNN with upsampling), with stochastic gradient descent and $T = 1000$, Ho et al. were able to generate samples with an impressive FID score of 3.17 on the CIFAR-10 dataset. On CelebA-HQ 256 × 256, the team generated the novel images in Figure 6 . In 2021, Dhariwal et al. at OpenAI made improvements upon the original DDPM parameters and achieved state-of-the-art FID scores of 2.97 on ImageNet 128 × 128, 4.59 on ImageNet 256 × 256, and 7.72 on ImageNet 512 × 512 [ ]. The first improvement is not to set $\Sigma_\theta(x_t, t)$ as a constant but rather as the following: $\Sigma_\theta(x_t, t) = \exp(v \log \beta_t + (1 - v) \log \tilde{\beta}_t)$ where $\beta_t$ and $\tilde{\beta}_t$ correspond to the upper and lower bounds of the Gaussian variance. Dhariwal et al.
also explore the following architectural changes; note: attention heads refer to embedding blocks in the U-Net [ ]: • Increasing depth versus width, holding model size relatively constant. • Increasing the number of attention heads. • Using attention at 32 × 32, 16 × 16, and 8 × 8 resolutions rather than only at 16 × 16. • Using the BigGAN residual block for upsampling and downsampling the activations. • Rescaling residual connections with $1/\sqrt{2}$. With these changes, Dhariwal et al. were able to demonstrate their DDPM beating GANs in every single class by FID score and establishing DDPMs as the new state-of-the-art for image synthesis [ ]. 2.2. Quantum Machine Learning 2.2.1. Quantum Boltzmann Machine Energy-based machine learning models, such as the RBM, seek to minimize an energy function. Recall: $p(v) = \frac{\sum_h e^{-E(v,h)}}{\sum_{v,h} e^{-E(v,h)}}$ is maximized when $E(v, h)$ is minimized. $E(v, h) = -a^T v - b^T h - v^T W h$ or, in its expanded form, $E(v, h) = -\sum_i v_i a_i - \sum_j h_j b_j - \sum_i \sum_j v_i W_{ij} h_j$ Recall also that the partition function, the sum over all configurations $(v, h)$, is intractable; thus, RBMs are trained via Contrastive Divergence [ ]. The D-Wave 2000Q via the Ising model is able to minimize an energy function via coupled qubits, taking advantage of entanglement. The energy function for the Ising model is the following Hamiltonian: $E_{\text{ising}}(s) = \sum_{i=1}^{N} h_i s_i + \sum_{i=1}^{N} \sum_{j=i+1}^{N} J_{i,j} s_i s_j$ where $s_i \in \{-1, +1\}$ represents the qubit spin state, spin down and spin up, respectively. $h_i$ is the bias term provided by the external magnetic field, and $J_{i,j}$ captures the coefficients for the coupling between qubits [ ]. Solving for the ground state of an Ising model is NP-hard, but by taking advantage of the QPU’s ability to better simulate quantum systems, we can solve this problem more efficiently [ ]. Clamping neurons is the process of affixing certain qubits to specific values, namely the data being trained on.
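As a concrete illustration of the Ising Hamiltonian above, the sketch below evaluates the energy of a spin configuration and brute-forces the ground state for a toy instance. The biases and couplings are hypothetical values chosen for illustration, not taken from the paper; the exhaustive search is exactly the NP-hard problem the annealer is designed to avoid and is feasible only for small $N$:

```python
import itertools
import numpy as np

def ising_energy(s, h, J):
    # E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, with J stored upper-triangular
    s = np.asarray(s)
    return float(h @ s + s @ np.triu(J, k=1) @ s)

def ground_state(h, J):
    # Exhaustive search over all 2^N spin configurations (toy sizes only)
    n = len(h)
    best = min(
        (np.array(c) for c in itertools.product((-1, 1), repeat=n)),
        key=lambda s: ising_energy(s, h, J),
    )
    return best, ising_energy(best, h, J)

# Hypothetical 3-qubit instance: h biases each spin; J[0,1] > 0 favors
# anti-alignment of spins 0 and 1, J[1,2] < 0 favors their alignment.
h = np.array([0.5, -0.2, 0.1])
J = np.zeros((3, 3))
J[0, 1], J[1, 2] = 1.0, -0.5
s_min, e_min = ground_state(h, J)
```

On the annealer, the same minimization is carried out physically by the coupled qubits rather than by enumeration.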
By clamping the neurons onto the qubits, applying an external magnetic field equivalent to the biasing parameters $a$ and $b$, and setting the coupling parameters to match those of $W$ (and to 0 for absent or intralayer edges), the RBM can be effectively translated into a format suitable for a quantum annealer. The resulting model is known as a Quantum Boltzmann Machine (QBM) and is similarly trained using QPU-specific Gibbs sampling methods [ ]. Increased sampling from the quantum annealer leads to a more comprehensive representation of the Hamiltonian’s energy landscape. The process of training a QBM involves adjusting the couplings based on this acquired information. The D-Wave 2000Q has the qubit coupling architecture in Figure 7 . 2.2.2. Image Classification The field of Quantum Machine Learning (QML) applied to computer vision is still quite nascent. Most QML research focuses on classification tasks, particularly using quantum support vector machines, decision trees, nearest neighbors, annealing-based classifiers, and variational classifiers [ ]. Wei et al. propose a Quantum Convolutional Neural Network with capabilities for spatial filtering, image smoothing, sharpening, and edge detection, along with MNIST digit recognition, with a lower computational complexity than classical counterparts [ ]. Such research provides a valuable precursor to the exploration of QML for image synthesis. 2.2.3. Image Synthesis In 2020, Sleeman et al. demonstrated the D-Wave QUBO’s ability to generate images mimicking the MNIST hand-drawn digits and Fashion MNIST datasets [ ]. Due to the limited number of qubits available, Sleeman et al. create an encoding of the images via a convolutional autoencoder, feed the encoding to a QBM, and finally reverse the process to perform image synthesis. The model architecture is provided in Figure 8 . In their research, Sleeman et al.
contrast the performance of their QBM with that of a traditional RBM, in addition to assessing the efficacy of the autoencoder’s encoding capabilities. Despite showcasing the potential of the D-Wave 2000Q in aiding image synthesis, the authors do not juxtapose their findings with those of other classical generative modeling methods. Furthermore, the omission of FID, KID, and Inception scores for their proposed models restricts the breadth of comparison between the QBM and its classical counterparts. 3. Methods 3.1. Goal To reiterate, the goal of this research is to train the D-Wave 2000Q quantum annealer on image synthesis (generative image creation) and compare the results both quantitatively and qualitatively against existing classical models. Secondly, the goal is to determine the quantum annealer’s efficacy at cracking the challenges outlined in Section 1.2 , specifically the Trilemma of Generative Learning. Additionally, our research aims to close many of the gaps in Sleeman et al.’s study. Namely: • perform the image synthesis directly on the QBM, • evaluate the performance of the QBM against an RBM, a VAE, a GAN, and a DDPM, • evaluate various generative modeling methods on FID, KID, and Inception scores, • model a richer image dataset, CIFAR-10. 3.2. Data We utilize a standardized dataset, CIFAR-10, for all of our experiments. The CIFAR-10 dataset consists of sixty thousand 32 by 32 three-channel (color) images in ten uniform classes [ ], a small selection of which is captured in Figure 9 . The data were initially collected in 2009 by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton and have become the standard for machine learning research relating to computer vision [ ]. One of the primary reasons CIFAR-10 is so popular is because the small image sizes allow for quick training and testing of new models [ ].
In addition, the ubiquity of testing models on CIFAR-10 allows researchers to quickly benchmark their model performance against prior research [ ]. The images in CIFAR-10 consist exclusively of photographs of distinct, discrete objects on a generally neutral background. The dataset contains photographs, which are two-dimensional projections of three-dimensional objects, from various angles. 3.3. Classical Models To establish a benchmark and facilitate the comparison of results between novel quantum machine learning methods and existing generative image synthesis techniques, we initially trained and tested a series of classical models on the CIFAR-10 dataset. The classical models we trained were the following: (i) RBM, (ii) VAE, (iii) GAN, and (iv) DDPM. Initially, we adopted a uniform approach, training each model with the same learning rate, batch size, and number of epochs to standardize results. However, this method led to significant challenges due to the varying rates of convergence among the models, causing an imbalance in result quality and impeding our analysis. Consequently, we adjusted our approach to individually optimize the hyperparameters for each model within the bounds of available time and resources. This adjustment yielded higher-quality results, offering a more equitable comparison across models. We concluded the training of each model when additional epochs resulted in insignificant improvements in model loss, a term left intentionally vague to accommodate training variability across models. An exception to this approach was made for the DDPM, which demanded considerable computational power, prompting us to conclude the experiment after 30,000 iterations. 3.4. Quantum Model For the quantum model, the training images were also normalized by mean and variance, identically to the preprocessing for the classical models. Since quantum bits can only be clamped to binary values and not floating point numbers, the data also had to be binarized.
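One way to realize such a binarization is sketched below, assuming a simple unary ("thermometer") code in which the count of 1s in a length-100 bit vector encodes each normalized value; the function names and the exact encoding are our assumptions for illustration, not the paper's specification:

```python
import numpy as np

def unary_binarize(x, levels=100):
    """Expand each float in [0, 1] into `levels` bits whose count of 1s
    encodes the value (a unary / 'thermometer' code)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    counts = np.rint(x * levels).astype(int)             # number of 1s per entry
    return (np.arange(levels) < counts[..., None]).astype(np.uint8)

def unary_decode(bits):
    """Recover the float value by averaging the bits back down."""
    return bits.mean(axis=-1)

pixels = np.array([0.0, 0.25, 1.0])   # toy normalized pixel values
bits = unary_binarize(pixels)         # binary array of shape (3, 100)
```

Under this scheme, the round-trip quantization error is bounded by $1/(2 \cdot \text{levels})$, so 100 levels keeps reconstructions within 0.005 of the original values.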
This process involves converting each input vector into 100 vectors, where the number of 1s in each row reflects the floating point value between 0 and 1, as pictured in Figure 10 . The D-Wave 2000Q quantum annealer is trained by mapping the architecture of an RBM onto the QPU Chimera graph, thus creating a QBM [ ]. The visible, i.e., input, nodes are clamped with the training data, and the hidden layer is sampled from. As we increase sampling, we gain a better understanding of the energy landscape and can better update the weights (i.e., inter-qubit coupling coefficients) [ ]. Due to the D-Wave 2000Q having only 2048 available qubits, and due to user resource allocation challenges, our experiments are limited. To resolve this constraint, each image was split into four distinct squares along the x and y axes. Thus, each training image was 16 × 16 × 3, for an input vector size of 768. 3.5. Hyper-Parameters The hyper-parameters were determined by conducting grid search hyper-tuning. Since DDPMs are trained via an iterative process, unbatched, they require significantly more epochs, as reflected in Table 1 . 3.6. Metrics 3.6.1. Inception Score The Inception score measures two primary attributes of the generated images: (i) the fidelity of the images, i.e., the image distinctly belongs to a particular class, and (ii) the diversity of the generated images [ ]. The Inception classifier is a Convolutional Neural Network (CNN) built by Google and trained on the ImageNet dataset consisting of 14 million images and 1000 classes [ ]. Fidelity is captured by the probability distribution produced as classification output by the Inception classifier on a generated image [ ]. Note that a highly skewed distribution with a single peak indicates that the Inception classifier is able to identify the image as belonging to a specific class with high confidence. Therefore, the image is considered high fidelity.
Diversity is captured by summing the probability distributions produced for the individual generated images. The uniform nature of the resultant sum of distributions is indicative of the diversity of the generated images. E.g., a model trained on CIFAR-10 that only manages to produce high-fidelity images of dogs would severely fail to be diverse. The average of the K-L Divergences between the produced probability distribution and the summed distribution is the final Inception score, capturing both diversity and fidelity. Rigorously, each generated image $x_i$ is classified using the Inception classifier to obtain the probability distribution $p(y \mid x_i)$ over classes $y$ [ ]. The marginal distribution is provided by: $p(y) = \frac{1}{N} \sum_{i=1}^{N} p(y \mid x_i)$ From which the K-L Divergence may be computed by the following [ ]: $D_{KL}(p(y \mid x_i) \,\|\, p(y)) = \sum_y p(y \mid x_i) \log \frac{p(y \mid x_i)}{p(y)}$ Take the expected value of these K-L Divergences over all generated images [ ]: $\mathbb{E}_x[D_{KL}(p(y \mid x) \,\|\, p(y))] = \frac{1}{N} \sum_{i=1}^{N} D_{KL}(p(y \mid x_i) \,\|\, p(y))$ Finally, we exponentiate the value above to evaluate the Inception score [ ]: $\text{IS}(G) = \exp\left(\mathbb{E}_{x \sim p_g} D_{KL}(p(y \mid x) \,\|\, p(y))\right)$ 3.6.2. Fréchet Inception Distance (FID) Fréchet Inception Distance improves upon the Inception score by capturing the relationship between the generated images and the training images, whereas the Inception score only captures the characteristics of the generated images against each other and their classifications. The Inception classifier, used to determine the Inception score, also embeds a feature vector. I.e., the architecture of the Inception classifier captures the salient features of the images it is trained on. The FID score is determined by taking the Wasserstein metric between the two multivariate Gaussian distributions of the feature vectors for the training and generated images on the Inception model [ ].
Simply put, it measures the dissimilarity between the features found in the training and generated data. This is an improvement upon the Inception score since it captures the higher-level features that would be more human-identifiable when comparing model performance. The Gaussian distributions of the feature vectors for the generated images and the training images are $\mathcal{N}(\mu, \Sigma)$ and $\mathcal{N}(\mu_w, \Sigma_w)$, respectively [ ]. The Wasserstein metric, resulting in the FID score, is as follows [ ]: $\text{FID} = \|\mu - \mu_w\|_2^2 + \operatorname{tr}\left(\Sigma + \Sigma_w - 2(\Sigma^{1/2} \Sigma_w \Sigma^{1/2})^{1/2}\right)$ 3.6.3. Kernel Inception Distance (KID) KID measures the maximum mean discrepancy of the distributions of training and generated images by randomly sampling from them both [ ]. KID does not specifically account for differences in high-level features and rather compares the raw distributions more directly. Specifically, for generated samples $X$ with probability measure $P$ and training samples $Y$ with probability measure $Q$, we have [ ]: $D_F(P, Q) = \sup_{f \in F} \mathbb{E}_P f(X) - \mathbb{E}_Q f(Y)$ 3.6.4. Quantitative Metrics Table 2 summarizes the three quantitative metrics used to evaluate model performance. 3.6.5. Qualitative Metrics Our qualitative evaluation was performed by analyzing, less stringently, the visual discernment of generated images in relation to their respective classes. This approach aims to foster a broader discussion about the applicability of such models and their effectiveness. 4. Results 4.1. Restricted Boltzmann Machine (RBM) The images generated by the RBM include a high degree of brightly-colored noise. Interestingly, this noise is concentrated in sections of the image with high texture, i.e., high variance of pixel values. Notice that the image of the cat in the bottom-center of Figure 11 b has a great deal of noise at the edges of and inside the boundaries of the cat itself, but not in the blank white space surrounding it. This demonstrates a high degree of internode interference in the hidden layer.
That is, areas with large pixel variance influence the surrounding pixels greatly and often cause bright spots to appear as a result. 4.2. Variational Autoencoder (VAE) The images generated by the VAE are of incredibly high fidelity. Notably, the VAE results resemble superresolution: notice the decrease in image blur/noise relative to the input images. Since the VAE encodes an embedding of the training data, some features, such as the exact color of the vehicle in the top left corner of Figure 12 b, are lost, but the outline of the vehicle and the background are sharpened. This demonstrates that the VAE is capturing features exceptionally well. 4.3. Generative Adversarial Networks (GANs) The GAN is able to produce some images with high fidelity, namely the cat in the top left corner and the dog in the bottom right corner of Figure 13 b, but struggles with the sharpness of the images. Humans looking at the majority of the images produced could easily determine that they are computer generated. In addition, the GAN was uniquely difficult to train, requiring retraining dozens of times in order to avoid Vanishing Gradients and Mode Collapse. Recall from Section 2.1.3 that Vanishing Gradients and Mode Collapse are issues that arise from the discriminator or generator improving significantly faster than its counterpart and dominating future training, thus failing to improve both models adequately and defeating the adversarial training nature of the network. 4.4. Denoising Diffusion Probabilistic Model (DDPM) The quality of the results for the DDPM ( Figure 14 ) is limited by the computational power available to run the experiment. DDPMs are state-of-the-art for image generation when scored on fidelity but require several hours of training on a Tensor Processing Unit (TPU). A TPU can perform one to two orders of magnitude more operations per second than an equivalent GPU [ ]. Without access to these Google-exclusive TPUs, we were unable to replicate state-of-the-art generation results. 4.5.
Quantum Boltzmann Machine (QBM) Recall that, due to qubit limitations, the QBM required training images to be split into four independent squares, with generated images restitched from such squares. This splitting and restitching has a distinct influence on the resultant generated images. Notice that the generated images have distinct features in each quadrant of the image. These features are often from various classes and appear stitched together because they are. Notice how the image in the bottom row, second from the rightmost column, has features of a car, a house, and a concrete background. 5. Analysis 5.1. Scores The following analyses reference the results captured in Table 3 . 5.1.1. Inception Score On Inception scores, the QBM performed significantly worse than the classical models. This means that the diversity and fidelity of the QBM-generated images were significantly worse than those produced via existing classical methods. The VAE produced an exceptionally high Inception score, suggesting the images were both distinctly classifiable into single classes and evenly varied across classes. Qualitative observation of the produced samples is consistent with this score, as the produced images are of high fidelity and varied classes. Note that Figure 12 b has distinct images of vehicles, animals, planes, etc. Interestingly, the DDPM produced a middling Inception score despite producing images that were of exceptionally low fidelity. This is because the Inception score measures the K-L Divergence between the single-sample classification probability distribution and the summed distribution. While the image fidelity may be low, the overall summed distribution is fairly uniform due to the high variance of results, resulting in a higher K-L Divergence than naively expected. 5.1.2. Fréchet Inception Distance The QBM produced the median FID score on the generated images, performing better than the RBM and DDPM but worse than the GAN and VAE.
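For reference, the FID formula from Section 3.6.2 can be computed directly once the two Gaussian fits are available. The sketch below uses toy feature statistics, not the paper's actual Inception moments, and computes the positive semi-definite matrix square roots via eigendecomposition:

```python
import numpy as np

def psd_sqrt(m):
    # Matrix square root of a symmetric positive semi-definite matrix
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu, sigma, mu_w, sigma_w):
    # FID = ||mu - mu_w||^2 + tr(Sigma + Sigma_w - 2 (Sigma^1/2 Sigma_w Sigma^1/2)^1/2)
    s_half = psd_sqrt(sigma)
    covmean = psd_sqrt(s_half @ sigma_w @ s_half)
    return float(np.sum((mu - mu_w) ** 2) + np.trace(sigma + sigma_w - 2.0 * covmean))

# Toy 2-D feature statistics: identical Gaussians give FID = 0, and
# shifting the mean by one unit contributes exactly 1 to the score.
mu_g, sig_g = np.array([1.0, 0.0]), np.eye(2)
mu_t, sig_t = np.zeros(2), np.eye(2)
score = fid(mu_g, sig_g, mu_t, sig_t)
```

In practice, $\mu$ and $\Sigma$ are the empirical mean and covariance of the Inception feature vectors over each image set, which is how the scores in Table 3 are obtained.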
Recall that the primary difference between the FID score and the other metrics is its sensitivity to the model’s ability to extract and replicate salient features of the training data. The VAE and GAN do this exceptionally well, producing images that have distinct, easily observable features. Notice that Figure 12 b and Figure 13 b both contain images with easily identifiable features, namely the animals and vehicles in each set of generated images. Despite these images mimicking the input images very closely, especially Figure 12 , the FID score only captures the distance between the features present in produced vs. training images, not the diversity of the images themselves. Alternatively, the images produced by the DDPM and RBM have a distinct lack of identifiable features. To the human eye, Figure 11 b does reflect the general lines and edges of the input found in Figure 11 a, yet the Inception classifier fails to capture these features in its embedding, likely due to the high levels of surrounding noise with bright values. Note that brightly colored pixels are caused by large RGB (red-green-blue) values, which have a larger effect upon the convolutional filters, which rely on matrix multiplication. This can have an undue negative effect on feature extraction and thus lead to lower FID scores. DDPMs face issues relating to a general lack of distinctive features produced. As discussed in Section 4.4 , the computational limitations did not allow for adequate training and can thus account for the lack of effective feature generation. As discussed in Section 4.5 , the splitting and restitching of images causes features from multiple classes to be present in a single image, despite each feature being of moderately high fidelity. This restitching has negative consequences on the FID score and, given more qubits, could be improved upon by clamping entire images to the QPU directly. 5.1.3.
Kernel Inception Distance As with the FID score, the QBM produced the median score on the generated images yet skewed lower and thus achieved better results than the DDPM and GAN. The DDPM once again suffers from a lack of computing power and thus performs significantly worse than other models. The VAE and RBM performed exceptionally well, indicating the models’ superior ability to generate samples that are distributed similarly to the training set. KID is the metric on which the QBM performed comparatively best. This means that while the QBM lacks the ability to represent features in its generated images well and struggles to produce diverse, high-fidelity images, it can capture the underlying distribution of training images with its generated images moderately well. This result is significant because the fidelity of generated images should improve with increases in the number of qubits and better error correction, but a promising KID score is indicative that the QBM is adequately capturing the essence of image generation. Qualitatively, from Figure 15 , it is clear the QBM can capture some meaningful image features from the training set but struggles with fidelity, i.e., distinct objects, clear boundaries, textured backgrounds, etc. 5.2. Feature Extraction Since QBMs and RBMs both lack convolutional layers, which are especially effective at capturing image features via convolution and image filters, it is expected that they would in turn score poorly for FID scores. This limitation of RBMs and QBMs can be solved by transfer learning. Transfer learning allows a pre-trained model to be detached between two layers and then reconnected to an untrained model. That way, the embeddings, i.e., learned weights of the pre-trained model, can improve the performance of the untrained model [ ]. Transfer learning with the convolutional layers from a CNN can be detached and reattached to the visible nodes of the RBM and QBM. 
However, for this strategy to work as intended with the QBM, a binarization layer, discussed in Section 3.4 , would need to interface between the output of the CNN layers and the visible nodes. 5.3. Trilemma of Generative Learning Recall the trilemma consists of the following: “(i) high-quality sampling, (ii) mode coverage & sample diversity, and (iii) fast and computationally inexpensive sampling” [ ]. 5.3.1. High-Quality Sampling High-quality sampling is captured by FID and Inception scores. The QBM performed terribly on the Inception score and only moderately well on FID scores. Thus, it would be inaccurate to say the quantum annealer is uniquely producing high-quality samples. We hypothesize the main contributor to this result is the lack of convolutional layers and the image stitching required for training. This will be further discussed in Section 6 . 5.3.2. Mode Coverage and Diversity Mode coverage and diversity are captured by Inception and KID scores. While the QBM performed poorly on the Inception score, the KID score was promising. From qualitative observations of the generated images, it seems the QBM is managing to produce a diversity of images representative of the training data. The Inception score is certainly poorer than expected due to image stitching causing the Inception classifier to fail at classifying the images into one class. 5.3.3. Fast Sampling The QBM thoroughly and unequivocally fails at fast sampling. The quantum annealer is extremely slow at sampling. This is partially due to hardware constraints, partially due to the high demand for quantum resources, and partially due to computational expensiveness. Regardless, the process of quantum sampling from an annealer is prohibitively slow and expensive. We hope to see this improve over time. 5.3.4. Conclusions The QBM currently fails to improve on the Trilemma of Generative Learning ( Section 1.2 ) in any of the three axes in any meaningful way.
Despite this lack of improvement, it is important to note that quantum annealers are still in their infancy: they have a limited number of qubits, require significant error correcting, are a shared resource, and do not have the universality of a general quantum computer. With hardware improvements, we expect to see further improvements and can revisit the trilemma once significant progress has been made. 6. Conclusions and Future Work In conclusion, our team attempted to determine the efficacy of the D-Wave 2000Q quantum annealer on image synthesis, evaluated by industry-standard metrics compared to classical model counterparts, and determined whether QBMs can crack the Trilemma of Generative Learning ( Section 1.2 ). The quantum annealer, operating under a Quantum Boltzmann Machine (QBM) architecture, was assessed based on several performance metrics, including the Inception score, Fréchet Inception Distance (FID), and Kernel Inception Distance (KID). Its performance was compared against a suite of classical models comprising: • Restricted Boltzmann Machine • Variational Autoencoder • Generative Adversarial Network • Denoising Diffusion Probabilistic Model The quantitative results of these experiments can be found in Table 3 . The results showed that the QBM struggled to generate images with a high Inception score but managed to show promise in FID and KID scores, indicating an ability to generate images with salient features and a similar distribution to that of the training set. The QBM implemented on the D-Wave 2000Q quantum annealer is not significantly better than the state-of-the-art classical models in the field. While the QBM outperformed a few classical models on FID and KID scores, it is important to note the difficulty of comparing models with different architectures trained on different hyper-parameters.
The QBM did show great promise in its ability to represent the underlying distribution of the training data in its generated samples, and we hope to see this improve with more hardware improvements. 6.1. Image Preprocessing A significant challenge in developing the QBM was the lack of qubits. This limitation forced us to split each image into a set of four squares, as described in Section 3.4 , leading to the issue of stitching generated images in post. This issue can be somewhat resolved in the future in a few different ways. Firstly, one could wait until hardware improvements are made to the quantum annealer in the form of an increase in the number of qubits and error-correcting abilities. With these improvements, one should see an increase in image synthesis quality. As more pixels can be embedded directly onto the QPU, the need for stitching will diminish and the QBM will be able to encode a richer embedding with features from the entire image in the correct locations. Secondly, a CNN could be introduced and pre-trained via transfer learning. This would limit the input vector size required for the visible nodes on the QBM, thus allowing the CNN to pick up the bulk of the feature extraction. While this would not be a purely “quantum” solution, it would allow for the quantum annealer to specialize in embedding and sampling from a distribution of features as opposed to pixel values. This ought to improve performance, as CNNs are the gold standard in image processing for machine learning applications. 6.2. Quantum Computing As quantum annealers improve, our team expects the ability to sample more often and in greater numbers will improve. With a greater number of samples, the QBM can evaluate a richer energy landscape and capture a more sophisticated objective function topology. With faster sampling, additional hyper-tuning could also be performed in a more timely manner, allowing for greater convergence upon a more ideal architecture. 
Author Contributions Conceptualization, S.J. and J.G.; Software, S.J.; Validation, S.J.; Investigation, S.J.; Resources, J.G.; Writing—original draft, S.J. and J.G.; Writing—review & editing, H.E.R.; Supervision, H.E.R. All authors have read and agreed to the published version of the manuscript. Funding This research received no external funding. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement Data are contained within the article. Acknowledgments We would like to thank D-Wave for providing access to their quantum computing resources as well as their continued support for Quantum Machine Learning research. This research would not be possible without the deep collaboration with Nurosene Health and Joseph Geraci. Lastly, a special thank you to Harry Ruda for supervising this research. Conflicts of Interest The authors declare no conflict of interest. J.G. is a founder, employee, and major shareholder of NetraMark Holdings, which does conduct commercial research in quantum computation. NetraMark did not support the other two authors during the research and writing of this paper. 1. Ho, J.; Jain, A.; Abbeel, P. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 6840–6851. [Google Scholar] 2. Dhariwal, P.; Nichol, A. Diffusion Models Beat GANs on Image Synthesis. arXiv 2021, arXiv:2105.05233. [Google Scholar] 3. Jain, S.; Ziauddin, J.; Leonchyk, P.; Yenkanchi, S.; Geraci, J. Quantum and classical machine learning for the classification of non-small-cell lung cancer patients. SN Appl. Sci. 2020, 2, 1088. [Google Scholar] [CrossRef] 4. Thulasidasan, S. Generative Modeling for Machine Learning on the D-Wave; Technical Report; Los Alamos National Lab. (LANL): Los Alamos, NM, USA, 2016. [Google Scholar] [CrossRef] 5.
Amin, M.H.; Andriyash, E.; Rolfe, J.; Kulchytskyy, B.; Melko, R. Quantum Boltzmann Machine. Phys. Rev. X 2018, 8, 21050. [Google Scholar] [CrossRef] 6. Xiao, Z.; Kreis, K.; Vahdat, A. Tackling the Generative Learning Trilemma with Denoising Diffusion GANs. arXiv 2021, arXiv:2112.07804. [Google Scholar] 7. Smolensky, P. Information processing in dynamical systems: Foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations; MIT Press: Cambridge, MA, USA, 1986; Volume 1. [Google Scholar] 8. Freund, Y.; Haussler, D. Unsupervised learning of distributions on binary vectors using two layer networks. In Advances in Neural Information Processing Systems; Moody, J., Hanson, S., Lippmann, R., Eds.; Morgan-Kaufmann: Burlington, MA, USA, 1991; Volume 4. [Google Scholar] 9. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed] 10. Hinton, G.E. A Practical Guide to Training Restricted Boltzmann Machines. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; pp. 599–619. [Google Scholar] [CrossRef] 11. Carreira-Perpiñán, M.Á.; Hinton, G.E. On Contrastive Divergence Learning. In Proceedings of the AISTATS, Bridgetown, Barbados, 6–8 January 2005. [Google Scholar] 12. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2014, arXiv:1312.6114. [Google Scholar] 13. Rocca, J. Understanding Variational Autoencoders (VAEs). 2021. Available online: https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73 (accessed on 3 December 2023). 14. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] 15. A Beginner’s Guide to Generative Adversarial Networks (Gans).
Available online: https://wiki.pathmind.com/generative-adversarial-network-gan (accessed on 3 December 2023). 16. Arjovsky, M.; Bottou, L. Towards Principled Methods for Training Generative Adversarial Networks. arXiv 2017, arXiv:1701.04862. [Google Scholar] 17. Hinton, G.E. Training Products of Experts by Minimizing Contrastive Divergence. Neural Comput. 2002, 14, 1771–1800. [Google Scholar] [CrossRef] [PubMed] 18. What is Quantum Annealing? D-Wave System Documentation. Available online: https://docs.dwavesys.com/docs/latest/c_gs_2.html (accessed on 3 December 2023). 19. Lu, B.; Liu, L.; Song, J.Y.; Wen, K.; Wang, C. Recent progress on coherent computation based on quantum squeezing. AAPPS Bull. 2023, 33, 7. [Google Scholar] [CrossRef] 20. Wittek, P.; Gogolin, C. Quantum Enhanced Inference in Markov Logic Networks. Sci. Rep. 2017, 7, 45672. [Google Scholar] [CrossRef] [PubMed] 21. Li, W.; Deng, D.L. Recent advances for quantum classifiers. Sci. China Phys. Mech. Astron. 2022, 65, 220301. [Google Scholar] [CrossRef] 22. Wei, S.; Chen, Y.; Zhou, Z.; Long, G. A quantum convolutional neural network on NISQ devices. AAPPS Bull. 2022, 32, 2. [Google Scholar] [CrossRef] 23. Sleeman, J.; Dorband, J.E.; Halem, M. A hybrid quantum enabled RBM advantage: Convolutional autoencoders for quantum image compression and generative learning. arXiv 2020, arXiv:2001.11946. [ Google Scholar] 24. Krizhevsky, A.; Nair, V.; Hinton, G. CIFAR-10 (Canadian Institute for Advanced Research). Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 3 December 2023). 25. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. 2009. Available online: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (accessed on 3 December 2023). 26. Eckersley, P.; Nasser, Y. EFF AI Progress Measurement Project. 2017. Available online: https://www.eff.org/ai/metrics (accessed on 3 December 2023). 27. Mack, D. A Simple Explanation of the Inception Score. 2019. 
Available online: https://medium.com/octavian-ai/a-simple-explanation-of-the-inception-score-372dff6a8c7a (accessed on 3 December 28. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. arXiv 2015, arXiv:1512.00567. [Google Scholar] 29. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. arXiv 2016, arXiv:1606.03498. [Google Scholar] 30. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google 31. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv 2017, arXiv:1706.08500. [Google Scholar] [CrossRef] 32. Bińkowski, M.; Sutherland, D.J.; Arbel, M.; Gretton, A. Demystifying MMD GANs. arXiv 2018, arXiv:1801.01401. [Google Scholar] [CrossRef] 33. Cloud Tensor Processing Units (TPUS)|Google Cloud. Available online: https://cloud.google.com/tpu/docs/tpus (accessed on 3 December 2023). 34. Dhillon, P.S.; Foster, D.; Ungar, L. Transfer Learning Using Feature Selection. arXiv 2009, arXiv:0905.4022. [Google Scholar] [CrossRef] Figure 1. Generative Learning Trilemma [ ]. Labels show frameworks that tackle two of the three requirements well. Figure 2. Restricted Boltzmann Machine architecture [ Figure 3. Variational Autoencoder architecture [ Figure 4. GANs architecture [ Figure 5. DDPM Markov chain [ Figure 6. Generated samples on CelebA-HQ 256 × 256 by DDPM [ Figure 7. D-Wave Quantum Processing Unit (QPU) topology Chimera graph [ Figure 8. 
Hybrid Approach that used a Classical Autoencoder to map the image space to a compressed space [ ]
Figure 9. Ten random images from each class of CIFAR-10 with respective class labels [ ]
Figure 10. Binarization of a normalized vector to a set of binary vectors [ ]
Figure 11. RBM-generated image synthesis output from respective input. (a) RBM input images; (b) RBM output images.
Figure 12. VAE-generated image synthesis output from respective input. (a) VAE input images; (b) VAE output images.
Figure 13. GAN-generated image synthesis output from respective input. (a) GAN input images; (b) GAN output images.

                         QBM      RBM      VAE    GAN     DDPM
Epochs                   10       10       50     50      30,000
Batch Size               256      256      512    128     -
# of Hidden Nodes        128      2500     32     64      32
Learning Rate (10^-3)    0.0035   0.0035   0.2    0.2     0.2

Metric      Performance        Description
Inception   Higher is better   K-L Divergence between conditional and marginal label distributions over generated data
FID         Lower is better    Wasserstein distance between multivariate Gaussians fitted to data embedded into a feature space
KID         Lower is better    Measures the dissimilarity between two probability distributions P_r and P_g using samples drawn independently from each distribution

            QBM       RBM       VAE      GAN       DDPM
Inception   1.77      3.84      7.87     2.72      3.319
FID         210.83    379.65    93.48    122.49    307.51
KID         0.068     0.191     0.024    0.033     0.586

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Jain, S.; Geraci, J.; Ruda, H.E.
Comparing Classical and Quantum Generative Learning Models for High-Fidelity Image Synthesis. Technologies 2023, 11, 183. https://doi.org/10.3390/technologies11060183 AMA Style Jain S, Geraci J, Ruda HE. Comparing Classical and Quantum Generative Learning Models for High-Fidelity Image Synthesis. Technologies. 2023; 11(6):183. https://doi.org/10.3390/technologies11060183 Chicago/Turabian Style Jain, Siddhant, Joseph Geraci, and Harry E. Ruda. 2023. "Comparing Classical and Quantum Generative Learning Models for High-Fidelity Image Synthesis" Technologies 11, no. 6: 183. https://doi.org/ Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
1-Cycle Schedule This tutorial shows how to implement 1Cycle schedules for learning rate and momentum in PyTorch. 1-Cycle Schedule Recent research has demonstrated that the slow convergence problems of large batch size training can be addressed by tuning critical hyperparameters such as learning rate and momentum, during training using cyclic and decay schedules. In DeepSpeed, we have implemented a state-of-the-art schedule called 1-Cycle to help data scientists effectively use larger batch sizes to train their models in PyTorch. To use 1-cycle schedule for model training, you should satisfy these two requirements: 1. Integrate DeepSpeed into your training script using the Getting Started guide. 2. Add the parameters to configure a 1-Cycle schedule to the parameters of your model. We will define the 1-Cycle parameters below. The 1-cycle schedule operates in two phases, a cycle phase and a decay phase which span one iteration over the training data. For concreteness, we will review how the 1-cycle learning rate schedule works. In the cycle phase, the learning rate oscillates between a minimum value and a maximum value over a number of training steps. In the decay phase, the learning rate decays starting from the minimum value of the cycle phase. An example of 1-cycle learning rate schedule during model training is illustrated below. 1-Cycle Parameters The 1-Cycle schedule is defined by a number of parameters which allow users to explore different configurations. The literature recommends concurrent tuning of learning rate and momentum because they are correlated hyperparameters. We have leveraged this recommendation to reduce configuration burden by organizing the 1-cycle parameters into two groups: 1. Global parameters for configuring the cycle and decay phase. 2. Local parameters for configuring learning rate and momentum. The global parameters for configuring the 1-cycle phases are: 1. 
cycle_first_step_size: The count of training steps to complete the first step of the cycle phase.
2. cycle_first_stair_count: The count of updates (or stairs) in the first step of the cycle phase.
3. cycle_second_step_size: The count of training steps to complete the second step of the cycle phase.
4. cycle_second_stair_count: The count of updates (or stairs) in the second step of the cycle phase.
5. post_cycle_decay_step_size: The interval, in training steps, to decay the hyperparameter in the decay phase.

The local parameters for the hyperparameters are:

Learning rate:
1. cycle_min_lr: Minimum learning rate in the cycle phase.
2. cycle_max_lr: Maximum learning rate in the cycle phase.
3. decay_lr_rate: Decay rate for the learning rate in the decay phase.

Although appropriate cycle_min_lr and cycle_max_lr values can be selected based on experience or expertise, we recommend using the learning rate range test feature of DeepSpeed to configure them.

Momentum:
1. cycle_min_mom: Minimum momentum in the cycle phase.
2. cycle_max_mom: Maximum momentum in the cycle phase.
3. decay_mom_rate: Decay rate for momentum in the decay phase.

Required Model Configuration Changes

To illustrate the required model configuration changes to use the 1-Cycle schedule in model training, we will use a schedule with the following properties:
1. A symmetric cycle phase, where each half of the cycle spans the same number of training steps. For this example, it will take 1000 training steps for the learning rate to increase from 0.0001 to 0.0010 (10X scale), and then to decrease back to 0.0001. The momentum will correspondingly cycle between 0.85 and 0.99 in a similar number of steps.
2. A decay phase, where the learning rate decays by 0.001 every 1000 steps, while momentum is not decayed.

Note that these parameters are processed by DeepSpeed as session parameters, and so should be added to the appropriate section of the model configuration.
PyTorch model

PyTorch versions 1.0.1 and newer provide a feature for implementing schedulers for hyper-parameters, called learning rate schedulers. We have implemented the 1-Cycle schedule using this feature. You will add a scheduler entry of type "OneCycle" as illustrated below.

"scheduler": {
    "type": "OneCycle",
    "params": {
        "cycle_first_step_size": 1000,
        "cycle_first_stair_count": 500,
        "cycle_second_step_size": 1000,
        "cycle_second_stair_count": 500,
        "decay_step_size": 1000,
        "cycle_min_lr": 0.0001,
        "cycle_max_lr": 0.0010,
        "decay_lr_rate": 0.001,
        "cycle_min_mom": 0.85,
        "cycle_max_mom": 0.99,
        "decay_mom_rate": 0.0
    }
}

Batch Scaling Example

As an example of how the 1-Cycle schedule can enable effective batch scaling, we briefly share our experience with an internal model in Microsoft. In this case, the model was well-tuned for fast convergence (in data samples) on a single GPU, but was converging slowly to target performance (AUC) when training on 8 GPUs (8X batch size). The plot below shows model convergence with 8 GPUs for these learning rate schedules:
1. Fixed: Using an optimal fixed learning rate for 1-GPU training.
2. LinearScale: Using a fixed learning rate that is 8X of Fixed.
3. 1Cycle: Using the 1-Cycle schedule.

With 1Cycle, the model converges faster than the other schedules to the target AUC. In fact, 1Cycle converges as fast as the optimal 1-GPU training (not shown). For Fixed, convergence is about 5X slower (needs 5X more data samples). With LinearScale, the model diverges because the learning rate is too high. The plot below illustrates the schedules by reporting the learning rate values during 8-GPU training. We see that the learning rate for 1Cycle is always larger than Fixed and is briefly larger than LinearScale to achieve faster convergence. Also, 1Cycle lowers the learning rate later during training to avoid model divergence, in contrast to LinearScale.
In summary, by configuring an appropriate 1-Cycle schedule we were able to effectively scale the training batch size for this model by 8X without loss of convergence speed.
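To make the shape of the schedule concrete, here is a minimal, standalone sketch of the learning-rate trajectory described in this tutorial. This is not DeepSpeed's implementation: the parameter names mirror the configuration entry above, but the decay rule (a multiplicative decay per interval) is an illustrative assumption — consult the DeepSpeed source for the exact behaviour.

```python
def one_cycle_lr(step,
                 cycle_first_step_size=1000,
                 cycle_second_step_size=1000,
                 cycle_min_lr=0.0001,
                 cycle_max_lr=0.0010,
                 decay_lr_rate=0.001,
                 decay_step_size=1000):
    """Illustrative 1-Cycle learning-rate value at a given training step."""
    # First step of the cycle phase: ramp linearly from min to max.
    if step <= cycle_first_step_size:
        frac = step / cycle_first_step_size
        return cycle_min_lr + (cycle_max_lr - cycle_min_lr) * frac
    step -= cycle_first_step_size
    # Second step of the cycle phase: ramp linearly back down to min.
    if step <= cycle_second_step_size:
        frac = step / cycle_second_step_size
        return cycle_max_lr - (cycle_max_lr - cycle_min_lr) * frac
    # Decay phase: shrink the rate once per decay interval
    # (multiplicative decay is assumed here, not taken from DeepSpeed).
    step -= cycle_second_step_size
    intervals = step // decay_step_size
    return cycle_min_lr * (1.0 - decay_lr_rate) ** intervals
```

With the defaults above (matching the example configuration), the rate climbs from 0.0001 to 0.0010 over the first 1000 steps, returns to 0.0001 by step 2000, and then decays slowly.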
OpenUCT :: Browsing by Subject "Quarks"

Now showing 1 - 3 of 3

• The quark-hadron transition and hot hadronic matter in the early universe (1987) Von Oertzen, Detlof Wilhelm; Rafelski, Johann; Cleymans, Jean
Various calculations of the evolution of the hadron gas in the early universe are carried out. To determine the starting point for the evolution equations a phase transition between the quark-gluon plasma phase and the hadron gas phase is constructed. A simple calculation leads to an estimate of the chemical potential of baryons at the quark-hadron phase transition in the early universe. We investigate how the transition temperature depends on the equations of state for the bagged quark and the hadron phase. A particle density evolution model is introduced which predicts the temperature at which particle species drop out of equilibrium (freeze-out) in an expanding universe. We then construct dynamical evolution equations to describe the reactions of interacting pions and photons. In order to model a more realistic hadron gas, we include kaons and finally nucleons and hyperons into the model universe. The results indicate that this type of model should be extended to include more interacting particle species and that a more realistic evolution model is dependent on obtaining accurate reaction cross-sections.
• Quarks and hadrons on the lattice (1990) Boyd, Graham John; Cleymans, Jean
There is a short introduction to the ideas of lattice theory, followed by an equally brief look at pure gauge QCD on the lattice. More details for either of these may be found in the references cited in each section, as well as in [143]. The bulk of this work deals with the problems encountered in placing fermions on to the lattice, and the techniques used for this purpose. The Nielsen-Ninomiya theorem is introduced, with a detailed treatment thereof relegated to an appendix.
The two main fermion techniques, due to Wilson (1974); and Kogut and Susskind (1975) are dealt with in some detail. This is followed by a discussion of the construction of hadrons on the lattice, using either Wilson or Kogut-Susskind fermions. There is a chapter covering the algorithms used in numerical simulations of lattice QCD, with some examples illustrating them. The thesis concludes with a discussion of the results obtained thus far on the hadron spectrum, in both the quenched approximation as well as those obtained using dynamical quarks. • A review and application of the Hadron gas model to heavy ion collisions (1996) Elliott, Duncan Mark A review and application of the Hadron Gas model to data gathered from heavy ion collision experiments in search of the Quark Gluon Plasma. The Hadron Gas model is extended by ensuring overall charge conservation of the collision system at freeze-out. Conclusions of thermal and chemical equilibrium at freeze-out are drawn from an analysis of the data of Si-Au collisions at BNL-AGS, and compared with the literature on thermal analyses of Si-Au collisions.
Thin Lens Equation Calculator
Last updated: Aug 06, 2022

Our thin lens equation calculator can obtain the image distance of any object after being refracted by a lens of known focal length. Within a few paragraphs, you will learn:
• What the thin lens equation is;
• How to calculate image distance using the image distance equation; and
• The magnification equation for lenses.
Keep reading to learn more about optical lenses with this lens calculator 🔎!

Thin lens equation

The thin lens equation describes how the image of an object after crossing a thin lens is created. This approximation considers that the width of the lens is much smaller than the object's distance. To use it, we only need the focal length and the object's distance:

$\frac{1}{x}+\frac{1}{y} = \frac{1}{f}$

• $x$ is the distance between the object and the center of the lens;
• $y$ is the distance between the image and the center of the lens; and
• $f$ is the focal length of the lens expressed in length units.

Thin lenses are especially important in telescopes when you want to observe various phenomena. Check the luminosity calculator and redshift calculator to learn more about astrophysics.

How to calculate image distance

Let's see how to calculate image distance now. We can rearrange the thin lens equation to obtain the image distance equation:

$y = \frac{fx}{x-f}$

where again:
• $x$ is the distance between the object and the center of the lens;
• $y$ is the distance between the image and the center of the lens; and
• $f$ is the focal length of the lens.

But generally speaking, it is easier just to plug the numbers in the thin lens equation and solve for distance.

💡 Even easier, type any two parameters in the thin lens equation calculator, and our tool will automatically complete the missing parameter!

Magnification equation for lenses

Using the advanced mode of this thin lens equation calculator, we can find the magnification of a lens. What is it?
Magnification is the ratio between the height of the image and the height of the object, and it's equal to the ratio between image distance and object distance: $M = \frac{|y|}{x}$ If you want to consider light going through different media, then you might need to use the index of refraction formula. We covered it in another article.
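The two formulas above translate directly into code. The small Python sketch below (illustrative only, not part of the calculator itself) computes the image distance from the rearranged thin lens equation, and the magnification from the distances:

```python
def image_distance(f, x):
    """Image distance y from the thin lens equation: 1/x + 1/y = 1/f."""
    if x == f:
        raise ValueError("object at the focal point: the image forms at infinity")
    # Rearranged thin lens equation: y = f*x / (x - f)
    return f * x / (x - f)

def magnification(x, y):
    """Magnification M = |y| / x."""
    return abs(y) / x

# Example: a converging lens with f = 10 cm and an object 30 cm away
y = image_distance(10.0, 30.0)   # 15.0 cm, on the far side of the lens
M = magnification(30.0, y)       # 0.5: the image is half the object's height
```

Note that when the object is inside the focal length (x < f), the formula returns a negative y, indicating a virtual image on the same side of the lens as the object.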
Weaknesses Of Term Premium Estimates Derived From Yield Curve Models

Term structure models have been a growth industry for researchers in academia and at central banks. These models can be structured in many different ways, which makes generalisations about them difficult. For the purposes of this article, I am only concerned about the use of these models to estimate a term premium that is embedded in nominal yields (although my comments can be extended to cover related exercises, such as calculating an inflation risk premium). When I examine individual models, the term premium estimates appear unsatisfactory, but the issues are different for each model. I believe that the root problems for this exercise are fundamental, and we need to understand these fundamental problems before looking at individual models.

Attempting to Estimate an Unobservable Variable Will End Badly

In control engineering, there is the notion of an unobservable variable. These are state variables whose values are not directly measured, and cannot be inferred from any manipulation of measured values and known system dynamics. For example, if we can accurately measure the position of a vehicle, we can infer its velocity, so velocity is not unobservable. However, we have no way of determining the internal temperature from the position data, and so the temperature is indeed unobservable. The normal practice is to delete unobservable variables from dynamic systems models. We have no way of determining their value, and they interfere with attempts to estimate the state variable. (Since there is an infinite number of valid solutions, algorithms will not converge.) It is not that these variables do not exist, but we cannot say anything useful about them with available data and known model dynamics (like the vehicle temperature in my example). As I noted in "How to Approach the Term Premium," an aggregate term premium is a variable that we cannot hope to measure with currently available data.
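The vehicle example can be checked mechanically: for a linear system, the states are observable exactly when the observability matrix [C; CA; CA²; …] has full rank. The pure-Python sketch below (the matrices and the temperature dynamics are made up purely for illustration) shows the decoupled temperature state failing that test:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rank(M, tol=1e-9):
    """Matrix rank via Gauss-Jordan elimination (fine for small examples)."""
    M = [row[:] for row in M]
    n_rows, n_cols = len(M), len(M[0])
    r = 0
    for c in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(n_rows):
            if i != r and abs(M[i][c]) > tol:
                factor = M[i][c] / M[r][c]
                M[i] = [x - factor * y for x, y in zip(M[i], M[r])]
        r += 1
        if r == n_rows:
            break
    return r

def observability_rank(A, C):
    """Rank of the observability matrix [C; CA; CA^2; ...; CA^(n-1)]."""
    n = len(A)
    O, block = [], [row[:] for row in C]
    for _ in range(n):
        O.extend(block)
        block = matmul(block, A)
    return rank(O)

# States: [position, velocity, temperature]. Only position is measured,
# and the (made-up) temperature dynamics are decoupled from position.
A = [[1, 0.1, 0],
     [0, 1,   0],
     [0, 0,   0.99]]
C = [[1, 0, 0]]
print(observability_rank(A, C))  # 2 < 3: temperature is unobservable
```

Dropping the temperature state leaves a rank-2 observability matrix for a 2-state system — position and velocity are observable, exactly as in the vehicle example.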
Although I have not formally proved that the term premium is unobservable, that certainly appears to be the case. The only way we can say that an aggregate term premium exists is if we can infer measurable effects on other variables. (As I discussed in that article, investors should probably have their own estimate of the embedded term premium when making investment decisions. Since it is your own estimate, you presumably know what its value is. The catch is that we do not know others' estimates based on market behaviour.) In other words, researchers are writing hundreds of extremely complex papers discussing a concept that shows little sign of existing. If we want to be careful with what we are doing, we should not take the labelling given to the time series by the researchers at face value. That is, just because a model output is referred to as a term premium by a researcher, we should not assume that is what the variable really corresponds to. However, I will refer to these model estimates as term premia in this article, as otherwise the text will be confusing.

There's an Infinite Number of Term Premia Estimates

The second issue with term premia estimates is that there is an infinite number of them. We can decompose observed nominal yields in an infinite number of ways, and the rules for decomposition can change over time. The only restriction is that the decomposition is arbitrage-free, which is a relatively weak restriction (albeit with complex mathematics). This is wonderful for researchers, as an infinite number of models implies an infinite number of potential papers. (Of course, computational tractability eliminates most potential models.) However, it makes discussion of these models a question of hitting a moving target. One typical use of these models is to examine the effect of an event (for example, quantitative easing) on term premia.
The abstract of such papers typically reads as follows: {Event X} caused the term premium at maturity M to move by Y basis points. Such papers can then be used to prove any number of statements about policy. The correct way of interpreting such papers is that the researcher has found a term structure model -- out of an infinite number of possibilities -- where the event coincided with a move in the term premium of Y basis points. Therefore, the usefulness of such research depends upon your prior beliefs about academic and central bank research. If you normally believe the claims of researchers in their abstracts, there is no problem. For those of us with more cynical prior beliefs, such results can easily be explained as being the result of model selection.

The Decompositions are Dubious

Once we get past the previous high-level problems, which are highly generic, we are left with more model-specific issues. These problems are usually the result of another inherent problem: we have no natural way to decompose observed yields into term premia and the expected path of short rates. In order to do this, the usual procedure is to force one of the components to follow some estimated value, and then the other component has to equal the residual. (One alternative -- interpreting statistical factors -- is discussed later.) That is, we could force term premia to be roughly equal to some variable, and then expectations are (roughly) equal to observed yields minus the estimated term premium. Vice-versa if we force expectations to follow some variable. Some example decompositions I have run across over the years include the following.
• Use a survey of economists to determine the expected path of rates.
• Use a measure that is roughly equivalent to historical volatility (or implied volatility) of rates to determine the term premium.
• For inflation-linked curves, use a fundamental model with 2-3 variables to estimate expected inflation.
The problem with all of these techniques is that they are questionable. In most cases, the importance of these assumptions is largely buried under a discussion of the mathematics of the curve structure model. However, for those of us who are primarily concerned about the level of the term premium, the results are entirely driven by these fundamental estimation techniques. There is an alternative way of approaching this problem, which is based on a yield curve model that relies solely on statistical risk factors. The researcher then interprets one or more of these factors as being a term premium. Such an approach appears more reasonable, but the analysis comes down to battling interpretations of data. The presumed attraction of term premium models is that they were supposed to eliminate verbal arguments over how to interpret yield curve movements. Since these models are quite distinct, they are not discussed in the rest of this article.

Frequency Domain Problems

In most cases, model estimates for the rate component use data that are at a lower frequency than bond market data. By definition, all of the high frequency components of bond yields ("noise") have to be attributed to the other factor. In particular, if we have a slow-moving estimate of expected rates, term premia will be oscillating at a high frequency. In my opinion, such a decomposition makes little economic sense. (I would need to justify this intuition in other articles.)

Why are Survey Estimates Dubious?

It would seem that surveys regarding the path of short rates would be a useful estimator for rate expectations. However, these estimates are mainly used for entertainment purposes by market participants. (The people being surveyed tend to take them more seriously, of course.) The problem with surveys is that they are almost invariably set by the chief economist, who has to work with a committee to set a house view on the economy.
Since each committee meeting is invariably a compromise between factions, there is considerable institutional inertia in their estimated path for short rates. Market participants are well aware of the tendency for economists to be stubborn, and to only throw in the towel on their views after the bond market has already moved. Furthermore, there is considerable herding behaviour of economists in surveys. The optimal strategy is just to put your view at one end of the consensus. If the outcome is way outside the consensus in your favour, you have the best forecast, and people love you. If the outcome is on the other side, your forecast was only slightly more wrong than the others. To top things off, what matters for bond pricing is what investors think, not economists. Even if the investment firm has a Chief Economist, the positioning of the bond portfolio may have no resemblance to the Chief Economist's views. Large bond investors are extremely coy about their positioning. If they write public bond market commentary, it may only reflect a desire to get out of a position. (Fiduciary rules should certainly imply that such investors should not signal future portfolio shifts.) Finally, surveys are done at a low frequency (and with an unknown lag), while market makers adjust prices instantly based on incoming data and flows. As discussed in the previous section, this loads all of the high frequency dynamics in the curve on the term premium, which ends up wiggling around like a greased pig. (The obvious fix to this frequency mismatch is to do a survey of views about the term premium; if it does move at a low frequency, you do not need to worry about aligning survey data to market data.)

Relationship to Realised Excess Returns

If we want to interpret a time series as a term premium, it should have a relationship to future realised excess returns of a bond at that maturity.
The deviation of the term premium from future excess returns is equal to the forecast error of the embedded rate expectations series. For the long end of the yield curve, we have problems with data limitations. The excess return of a 10-year swap starting in January 2000 is going to be pretty close to the excess return of a 10-year swap starting in February 2000. In order to create completely independent observations, we would need to use January 2010 as the next point we test. (I assume that there would be legitimate ways of taking samples closer together.) This runs into the problem that bond yields were regulated in the developed countries until the 1970s, or even the early 1980s. Furthermore, we had a major yield cycle within that era, in which it is clear that everyone overestimated future short rates. (This did show up in historical excess returns.) However, this is not the case for the front end of the curve. For example, in a 25-year period, we have 100 completely independent 3-month instruments issued. Additionally, short rates across currencies are somewhat uncorrelated, increasing the number of potential observations. This allows us to compare the predictions of the yield curve models with actual market behaviour. From an empirical standpoint, such historical analysis is where many term structure models fall apart. The overall relationship between a term premium and future excess returns is somewhat complicated; I may discuss it again in a later article.

Concluding Remarks

This article outlines the generic problems with term premium estimates derived from term structure models. We can then look at particular models, and see how they relate to the specific techniques described here. I may look at one or two examples, but I am not enthusiastic about this task. Any critique that points out that a model has an undesirable property just raises the response that another model does not have that property.
Given the infinite number of models that are available, that is a never-ending game of Whac-a-Mole™. I would rather spend my time looking at techniques that are useful, than being bogged down chasing after an unlimited number of techniques that appear to have few redeeming features. (c) Brian Romanchuk 2017
Intersections & Unions

In basic probability, we often work with multiple events, not just one. To start this discussion, let's learn about unions and intersections.

A union is indicated by the word "or". For example, what is the probability that Event A or Event B happens? That means that outcomes in either group would be desired. In the Venn Diagram below, anything in the colored section would be considered a desired outcome. Anything outside of the circles, but still inside the sample space, would not be a desired outcome.

An intersection is indicated by the word "and". For example, what is the probability that Event A and Event B happen? This refers to outcomes that are part of both groups, not just one group. In the Venn Diagram below, only the outcomes contained in the overlap of the two circles would be considered desired outcomes. Anything in the white, teal, or light purple spaces would not be a desired outcome.

Computing Probability

Example 1

A janitor has 75 keys on her keychain. Of these, 60 open classroom doors, 25 open teacher spaces, and 15 can open both classrooms and teacher spaces.

Let's first find the probability that a randomly chosen key opens a classroom and a teacher space. The word "and" indicates that only keys that can open both doors count as a success. There are 15 such keys. Basic probability says that the number of successes goes over the total number of outcomes; in this case, that's the total number of keys: 75. Therefore, the probability of picking a key that opens a classroom and a teacher space is 15/75 = .2.

Example 2

Let's consider the same scenario, but this time we want to find the probability that a key will open a classroom door or a teacher space. The word "or" indicates that keys that can open a classroom count as successful outcomes, and so do keys that open a teacher space. Since these events are not mutually exclusive (there are keys that can open both doors), we must remember to account for the overlap of the two events. Let's break this one down into steps:

1. Find the probability of the first event (the key opens a classroom). There are 60 such keys and 75 keys total, so the probability = 60/75 = .8.
2. Find the probability of the second event (the key opens a teacher space). There are 25 such keys and 75 keys total, so the probability = 25/75 ≈ .333.
3. Find the probability of the intersection of the events (the key opens a classroom and a teacher space). We worked this out above and found the probability to be 15/75 = .2.

Now we're ready to plug into the formula on the probability rules page: P(classroom) + P(teacher space) - P(intersection) = .8 + .333 - .2 = .933.
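The union computation follows the same inclusion-exclusion formula whatever the counts are. A minimal sketch in Python (the keychain counts here are hypothetical, for illustration only):

```python
def p_union(n_a, n_b, n_both, n_total):
    """P(A or B) = P(A) + P(B) - P(A and B), computed from raw counts."""
    return (n_a + n_b - n_both) / n_total

# Hypothetical keychain: 100 keys, 40 open offices, 25 open labs, 10 open both.
print(p_union(40, 25, 10, 100))  # 0.55
```

Subtracting the "both" count once removes the double count of outcomes that sit in the overlap of the two circles in the Venn diagram.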
{"url":"https://resources.nu.edu/c.php?g=1336977&p=10407699","timestamp":"2024-11-07T03:57:11Z","content_type":"text/html","content_length":"32260","record_id":"<urn:uuid:664090b7-d1db-452c-859f-cb657b67dfe2>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00872.warc.gz"}
The Wonder of International Adoption: High School Grades in Sweden Moving young children from the Third World to Sweden wipes out about half of their national IQ deficit. What about performance in high school? Vinnerljung et al.’s “School Performance at Age 16 Among International Adoptees” (International Social Work, 2010) compiles the numbers, once again breaking them down by regular Swedes, Korean adoptees, and non-Korean adoptees. Since these are high school students rather than conscripts, the data include women, yielding a much larger sample. But otherwise, the national origin of the adoptees is basically the same as in Dalen et al. (2008) and Odenstad et al. (2008). India, Thailand, Chile, Sri Lanka, Colombia, Ethiopia, and Ecuador top the list. To start, imagine growing up in Sweden had zero effect on high school performance. How would the non-Korean adoptees do? As discussed earlier, if the non-Koreans had average IQ for their home countries, their mean IQ would be 84. On the international PISA tests of science, reading, and math, countries with IQs around 84 score about one standard deviation below Sweden.* When you look at adoptees’ actual grades, however, the performance gap is much smaller. Combining males and females, non-Koreans have an average GPA of 2.95, versus 3.24 for regular Swedes. It’s not in the paper, but Vinnerljung emailed me the standard deviation: .78. That’s a performance gap of only .37 SDs – over 60% less than you would expect from the PISA scores. The gap is even smaller for non-Koreans who were adopted as infants. And as I emphasized in my previous post, we should expect the international adoptees to be below average for their home countries, so the grade gain of growing up Swedish is probably even greater than it looks. What about the Korean adoptees? They once again do better than regular Swedes, with an average GPA of 3.42.** That’s an edge of .24 SDs – almost exactly the PISA gap between Sweden and Korea. 
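The ".37 SDs" figure is just the standardized mean difference implied by the numbers quoted above. A quick arithmetic check, using only the means and standard deviation stated in the post:

```python
# GPA figures from the post: regular Swedes 3.24, non-Korean adoptees 2.95;
# standard deviation 0.78 (emailed by Vinnerljung, per the text).
swede_mean, non_korean_mean, sd = 3.24, 2.95, 0.78

gap_in_sds = (swede_mean - non_korean_mean) / sd
print(round(gap_in_sds, 2))  # 0.37
```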
For grades, like IQ, there are two stories to weave. The pessimist can say, "Even in Sweden, non-Koreans' performance in school is well below average." The optimist can say, "Non-Koreans in Sweden do much better than they would have done back home." While both stories are correct, the latter is far more insightful. The fact that non-Koreans underperform in Swedish schools is obvious at a glance. The fact that non-Koreans excel compared to the relevant counter-factual, in contrast, is easy to miss. Wherever you're from, Sweden is a good place to learn.

* The PISA gap is roughly 100 points, and scores are normed to have a standard deviation of 100.

** This slightly overstates Korean performance, because the Korean adoptees are over two-thirds female, and girls in all groups have higher GPAs than boys. If you separately compare genders, Korean boys are .18 SDs and Korean girls are .18 SDs above the mixed-gender Swedish average.

The post appeared first on Econlib.

Surely these just reinforce the notion that mixing people from different locations with quite different attitudes is a recipe for disaster. It may drag them up a few points but what is the benefit for the advanced nation? More low IQ people to support?

"On the international PISA tests of science, reading, and math, countries with IQs around 84 score about one standard deviation below Sweden."

I'm not sure what this means: a) that the median PISA score of children who were adopted by Swedes from countries where the median IQ score is about 84 -- i.e., about one standard deviation below that of Swedish natives -- and tested in Sweden after being brought up there is likewise about one standard deviation below that of native Swedes; or b) that the median PISA score of students in countries where median IQ is ~84 is about one standard deviation below that of Swedish natives?
If a) is correct the happy talk about grades may amount to little more than putting lipstick on a pig if, as seems likely, most Swedish schoolteachers are woke-ish and have leeway for subjective judgment in assigning course grades.
{"url":"https://www.betonit.ai/p/the_wonder_of_i_1html","timestamp":"2024-11-08T02:34:25Z","content_type":"text/html","content_length":"165403","record_id":"<urn:uuid:da1f3ef5-9839-4ca5-8ac0-c9512f214936>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00716.warc.gz"}
A Course in Analytic Number Theory

Hardcover: ISBN 978-1-4704-1706-2, Product Code GSM/160. List Price: $135.00; MAA Member Price: $121.50; AMS Member Price: $108.00.
eBook: ISBN 978-1-4704-2041-3, Product Code GSM/160.E. List Price: $85.00; MAA Member Price: $76.50; AMS Member Price: $68.00.
Hardcover + eBook: Product Code GSM/160.B. List Price: $220.00 (sale price $177.50); MAA Member Price: $198.00 (sale price $159.75); AMS Member Price: $176.00 (sale price $142.00).

Graduate Studies in Mathematics, Volume 160; 2014; 371 pp. MSC: Primary 11.

This book is an introduction to analytic number theory suitable for beginning graduate students. It covers everything one expects in a first course in this field, such as growth of arithmetic functions, existence of primes in arithmetic progressions, and the Prime Number Theorem. But it also covers more challenging topics that might be used in a second course, such as the Siegel-Walfisz theorem, functional equations of L-functions, and the explicit formula of von Mangoldt. For students with an interest in Diophantine analysis, there is a chapter on the Circle Method and Waring's Problem.
Those with an interest in algebraic number theory may find the chapter on the analytic theory of number fields of interest, with proofs of the Dirichlet unit theorem, the analytic class number formula, the functional equation of the Dedekind zeta function, and the Prime Ideal Theorem. The exposition is both clear and precise, reflecting careful attention to the needs of the reader. The text includes extensive historical notes, which occur at the ends of the chapters. The exercises range from introductory problems and standard problems in analytic number theory to interesting original problems that will challenge the reader. The author has made an effort to provide clear explanations for the techniques of analysis used. No background in analysis beyond rigorous calculus and a first course in complex function theory is assumed.

Readership: Graduate students interested in number theory.

Table of Contents
• Chapter 1. Arithmetic functions
• Chapter 2. Topics on arithmetic functions
• Chapter 3. Characters and Euler products
• Chapter 4. The circle method
• Chapter 5. The method of contour integrals
• Chapter 6. The prime number theorem
• Chapter 7. The Siegel-Walfisz theorem
• Chapter 8. Mainly analysis
• Chapter 9. Euler products and number fields
• Chapter 10. Explicit formulas
• Chapter 11. Supplementary exercises

"This book is a proper text for a graduate student (with a pretty strong background) keen on getting into analytic number theory, and it's quite a good one. It's well-written, rather exhaustive, and well-paced. The choice of themes is good, too, and will form a very sound platform for future studies and work in this gorgeous field."
— MAA Reviews
{"url":"https://bookstore.ams.org/gsm-160","timestamp":"2024-11-06T16:41:56Z","content_type":"text/html","content_length":"109887","record_id":"<urn:uuid:e20311e7-edd5-4378-86fd-e41fdc919351>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00807.warc.gz"}
Solid Geometry

Definitions of solid geometry terms:

(i) Dimension: Each of length, breadth and thickness of any body is called a dimension of the body.

(ii) Point: A point has no dimension, that is, it has neither length nor breadth nor thickness; it has position only.

(iii) Line: A line has length only but no breadth and thickness. Therefore, a line has one dimension, that is, it is one dimensional.

(iv) Surface: A surface has length and breadth but no thickness. Therefore, a surface has two dimensions, that is, it is two dimensional.

(v) Solid: A solid has length, breadth and thickness. Therefore, a solid has three dimensions, that is, it is three dimensional.

A book is a solid, each of its six faces is a surface, each of its edges is a line and each of its corners is a point. A line is bounded by points, a surface is bounded by lines and a solid is bounded by surfaces. In other words, a line is generated by the motion of a point, a surface is generated by the motion of a line and a solid is generated by the motion of a surface.

(vi) Solid Geometry: The branch of geometry which deals with the properties of points, lines, surfaces and solids in three dimensional space is called solid geometry.

(vii) Plane or Plane Surface: If the straight line joining any two points on a surface lies wholly on the surface, then the surface is called a plane surface or a plane.

A straight line may be extended indefinitely in either direction, that is, straight lines are supposed to be of infinite length. Similarly, planes are also assumed to be of infinite extent, unless otherwise stated. The statement that a straight line lies wholly on a surface signifies that every point on the line (however produced in both directions) lies on the surface. A surface is called a curved surface when it is not a plane surface.
(i) Lines or points are said to be co-planar if they lie on the same plane; in other words, lines or points are co-planar if a plane can be made to pass through them.

(ii) Two co-planar straight lines are either parallel or they intersect at a point. Two straight lines are said to be parallel when they are co-planar and they do not meet however indefinitely they are produced in both directions.

(iii) Two straight lines are said to be skew (or non-coplanar) if a plane cannot be made to pass through them. In other words, two straight lines are skew when they do not meet at a point and they are not parallel.

(iv) Two planes are said to be parallel if they do not meet when extended infinitely in all directions.

(v) A straight line is said to be parallel to a plane if they do not meet when both are produced infinitely.

In the given picture we observe that the lines LM, MN, NO and OL lie in the plane LMNO, that is, they are co-planar. The lines LM and LO meet at L and the lines LM and ON are parallel. LP and MN are skew lines and the line QR is parallel to the plane LPSO. The planes ABFE and DCGH are parallel.

A straight line is said to be perpendicular to a plane if it is perpendicular to every straight line drawn in the plane through the point where the line meets the plane. A straight line perpendicular to a plane is called a normal to the plane. In the given figure, the straight line OP meets the plane XY at O; if OP is perpendicular to every straight line OI, OJ, OK, OD etc. drawn through O in the XY plane, then OP is perpendicular (or a normal) to the plane XY.

A straight line parallel to the direction of a plumb-line hanging freely at rest is called a vertical line. A plane which is perpendicular to a vertical line is called a horizontal plane. A straight line drawn in a horizontal plane is called a horizontal line.
Angle Between Two Skew Lines:

The angle between two skew lines (i.e., two non-co-planar straight lines) is measured by the angle between one of them and a straight line drawn parallel to the other through a point on the first line. In the given figure, let MN and QR be two skew straight lines. Take any point O on the line MN and draw the straight line OP parallel to QR through O. Then ∠NOP gives the measure of the angle between the skew straight lines MN and QR.

A triangle is a plane figure since all its three sides lie in one plane. Similarly, a parallelogram is also a plane figure. But a quadrilateral may or may not be a plane figure, since its four sides do not always lie in one plane. A quadrilateral whose two adjacent sides lie in one plane and whose other two adjacent sides lie in a different plane is called a skew quadrilateral.

Orthogonal Projection:

(a) If a perpendicular is drawn from an external point to a given line, then the foot of the perpendicular is called the orthogonal projection (or simply the projection) of the external point on the given line. In the above left-side figure, Pp is the perpendicular from the external point P on the straight line AB. Since the foot of the perpendicular is p, p is the projection of P on the line AB. Again, we can observe that in the above right-side figure, the point P lies on the line AB; hence, in this case the projection of P on AB is the point P itself.

(b) The locus of the feet of the perpendiculars drawn from all points of a line (straight or curved) on a given straight line is called the projection of the line on the given straight line. In the above left-side figure, Pp and Qq are perpendiculars from P and Q respectively on the straight line AB; p and q are the respective feet of the perpendiculars. Then pq is the projection of the straight line PQ on the straight line AB. Again, in the previous left-hand-side figure, the projection of the straight line PQ on the straight line AB is Pq.
Again, similarly, we can observe that in the above right-side figure, pq is the projection of the curved line PQ on the straight line AB. Again, suppose the straight line PQ intersects the straight line AB at R; in this case, the projections of QR and RP on AB are qR and Rp respectively.

(c) The locus of the feet of the perpendiculars drawn from all points of a line (straight or curved) on a given plane is called the projection of the line on the plane. In this figure, the locus of the feet of the perpendiculars drawn from points of the line MN on the plane XY is the line mn; hence, the projection of the line MN on the plane XY is the line mn.

(i) The projection of a straight line on a plane is a straight line; but the projection of a curved line on a plane may be a straight line as well as a curved line. If the curved line MN lies in a plane which is perpendicular to the plane XY, then the projection of MN on the plane XY is a straight line.

(ii) A straight line and its projection on a plane are co-planar.

Angle Between a Straight Line and a Plane:

The angle between a straight line and a plane is measured by the angle between the given straight line and its projection on the given plane. Let mn be the projection of the straight line MN on the plane XY. Suppose, in the given figure, the straight lines MN and mn (when produced) meet at the point R in the plane XY. Then the angle between the straight line MN and the plane XY is measured by ∠MRm.

Dihedral Angle:

The plane angle between two intersecting planes is called a dihedral angle between the planes. The angle between two intersecting planes (i.e., a dihedral angle) is measured as follows: take any point on the line of intersection of the two planes. From this point draw two straight lines, one in each plane, at right angles to the line of intersection. Then the plane angle between the two drawn lines gives the measure of the dihedral angle between the planes.
Let XY and LM be two intersecting planes and XL be their line of intersection. From any point A on XL draw the straight line AB perpendicular to XL in the XY plane and the straight line AC perpendicular to XL in the LM plane. Then the plane ∠BAC is the measure of the dihedral angle between the two intersecting planes XY and LM. If the dihedral angle between two intersecting planes is a right angle then one plane is said to be perpendicular to the other.
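The two angle definitions above (the angle between a line and a plane via its projection, and the dihedral angle between two planes) have simple coordinate versions. A small numeric sketch using standard vector formulas; the direction vectors and plane normals below are made-up illustrations, not taken from the figures:

```python
import math

def angle_line_plane(d, n):
    """Angle (degrees) between a line with direction vector d and a plane
    with normal n; equals the angle between the line and its projection."""
    dot = sum(a * b for a, b in zip(d, n))
    s = abs(dot) / (math.hypot(*d) * math.hypot(*n))
    return math.degrees(math.asin(min(1.0, s)))

def dihedral_angle(n1, n2):
    """Plane angle (degrees) between two intersecting planes with normals n1, n2."""
    dot = sum(a * b for a, b in zip(n1, n2))
    c = dot / (math.hypot(*n1) * math.hypot(*n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# A line rising along (1, 0, 1) out of the horizontal plane z = 0:
print(round(angle_line_plane((1, 0, 1), (0, 0, 1)), 6))  # 45.0
# Two perpendicular planes, e.g. a floor and a wall:
print(round(dihedral_angle((0, 0, 1), (0, 1, 0)), 6))    # 90.0
```

A line lying in the plane gives an angle of 0, matching the definition: the line coincides with its own projection.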
{"url":"https://www.math-only-math.com/solid-geometry.html","timestamp":"2024-11-04T04:24:50Z","content_type":"text/html","content_length":"48053","record_id":"<urn:uuid:d10457d9-940f-403a-a99e-a46356686c0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00376.warc.gz"}
Approximate periodicity

We consider the question of finding an approximate period in a given string S of length n. Let S′ be a periodic string closest to S under some distance metric. We consider this distance the error of the periodic string, and seek the smallest period that generates a string with this distance to S. In this paper we consider the Hamming and swap distance metrics. In particular, if S is the given string, and S′ is the closest periodic string to S under the Hamming distance, and if that distance is k, we develop an O(nk log log n) algorithm that constructs the smallest period that defines such a periodic string S′. We call that string the approximate period of S under the Hamming distance. We further develop an O(n^2) algorithm that constructs the approximate period under the swap distance. Finally, we show an O(n log n) algorithm for finite alphabets, and an O(n log^3 n) algorithm for infinite alphabets, that approximates the number of mismatches in the approximate period of the string.

Publication series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume 6506 LNCS, Part 1. ISSN (Print) 0302-9743; ISSN (Electronic) 1611-3349.

Conference: 21st Annual International Symposium on Algorithms and Computation (ISAAC 2010), Jeju Island, Korea, Republic of, 15/12/10 – 17/12/10.

★ Partly supported by NSF grant CCR-09-04581 and ISF grant 347/09.
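For intuition, the Hamming-distance version of the problem has an easy (but slow) baseline: for a fixed candidate period p, the cheapest p-periodic string takes the majority symbol in each residue class mod p. The sketch below is only this naive quadratic baseline, not the paper's O(nk log log n) algorithm, and it restricts candidates to p ≤ n/2 (an assumption made here, since every string is trivially n-periodic):

```python
from collections import Counter

def hamming_error_for_period(s, p):
    """Min Hamming distance from s to any string of period p: in each
    residue class mod p, keep the majority symbol and count the rest."""
    return sum(len(s[r::p]) - Counter(s[r::p]).most_common(1)[0][1]
               for r in range(p))

def approximate_period(s):
    """Smallest period p <= n/2 achieving the minimum Hamming error."""
    best_p = min(range(1, len(s) // 2 + 1),
                 key=lambda p: (hamming_error_for_period(s, p), p))
    return best_p, hamming_error_for_period(s, best_p)

print(approximate_period("abcabcabd"))  # (3, 1): one mismatch vs "abcabcabc"
```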
{"url":"https://cris.biu.ac.il/en/publications/approximate-periodicity-4","timestamp":"2024-11-03T07:45:52Z","content_type":"text/html","content_length":"56314","record_id":"<urn:uuid:1d6b45cd-bfa2-4e3d-8c56-a23b261f8a51>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00338.warc.gz"}
proximal Newton’s method

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex and a nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition—a bound that ensures …
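The setting described is composite minimization: a smooth (possibly nonconvex) f plus a nonsmooth convex g. The paper's inexact trust-region method is not reproduced here; as background, the sketch below is the plain proximal-gradient iteration for the same problem class, with g taken to be an ℓ1 penalty and all numbers purely illustrative:

```python
def soft_threshold(v, t):
    """Prox operator of t*||.||_1, applied elementwise."""
    return [(abs(x) - t if abs(x) > t else 0.0) * (1 if x >= 0 else -1)
            for x in v]

def proximal_gradient(grad_f, x, step, lam, iters=200):
    """x <- prox_{step*lam*||.||_1}(x - step*grad_f(x)), repeated."""
    for _ in range(iters):
        g = grad_f(x)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)],
                           step * lam)
    return x

# Toy smooth term f(x) = 0.5*||x - b||^2, so grad_f(x) = x - b:
b = [3.0, 0.2, -2.0]
x = proximal_gradient(lambda v: [vi - bi for vi, bi in zip(v, b)],
                      [0.0, 0.0, 0.0], step=0.5, lam=1.0)
print([round(xi, 2) for xi in x])  # [2.0, 0.0, -1.0]
```

For this quadratic f, the composite minimizer is exactly the soft-thresholded b, which the iteration recovers.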
{"url":"https://optimization-online.org/tag/proximal-newtons-method/","timestamp":"2024-11-03T16:46:11Z","content_type":"text/html","content_length":"83380","record_id":"<urn:uuid:8cf42061-858e-4ce5-a8a0-ea283da4905b>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00237.warc.gz"}
remove_copy_if

Category: algorithms
Component type: function

template <class InputIterator, class OutputIterator, class Predicate>
OutputIterator remove_copy_if(InputIterator first, InputIterator last,
                              OutputIterator result, Predicate pred);

Remove_copy_if copies elements from the range [first, last) to a range beginning at result, except that elements for which pred is true are not copied. The return value is the end of the resulting range. This operation is stable, meaning that the relative order of the elements that are copied is the same as in the range [first, last).

Defined in the standard header algorithm, and in the nonstandard backward-compatibility header algo.h.

Requirements on types
• InputIterator is a model of Input Iterator.
• OutputIterator is a model of Output Iterator.
• InputIterator's value type is convertible to a type in OutputIterator's set of value types.
• Predicate is a model of Predicate.
• InputIterator's value type is convertible to Predicate's argument type.

Preconditions
• [first, last) is a valid range.
• There is enough space in the output range to store the copied values. That is, if there are n elements in [first, last) that do not satisfy pred, then [result, result+n) is a valid range.
• result is not an iterator in the range [first, last).

Complexity
Linear. Exactly last - first applications of pred, and at most last - first assignments.

Example
Fill a vector with the nonnegative elements of another vector.

vector<int> V1;
vector<int> V2;
remove_copy_if(V1.begin(), V1.end(), back_inserter(V2),
               bind2nd(less<int>(), 0));

See also: copy, remove, remove_if, remove_copy, unique, unique_copy.
{"url":"http://ld2014.scusa.lsu.edu/STL_doc/remove_copy_if.html","timestamp":"2024-11-11T20:19:37Z","content_type":"text/html","content_length":"6558","record_id":"<urn:uuid:25478d69-ed0f-498e-9d27-fcba5eafe8d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00826.warc.gz"}
Case Study: Average Economic Growth Rate in the World Economy

Learn statistical inference and sampling distributions through a case study.

In the context of studying any substantive question, we first need to define the population that is the target of inference and its relationship with the data we have available. Recall that the basic premise of valid statistical inference is random sampling, that is, each subject has an equal chance of being selected into a sample. This assumption is likely upheld in real probability samples but most likely violated in convenience samples.
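The random-sampling premise can be made concrete with a quick simulation. The "population" below is synthetic (hypothetical growth rates drawn from a normal distribution), not real-world data:

```python
import random
import statistics

random.seed(0)
# Hypothetical population of growth rates (%), mean 2, SD 4.
population = [random.gauss(2.0, 4.0) for _ in range(100_000)]

# Sampling distribution of the mean: many random samples of size 50,
# one sample mean recorded per sample.
sample_means = [statistics.mean(random.sample(population, 50))
                for _ in range(1_000)]

print(round(statistics.mean(sample_means), 1))   # close to the population mean
print(round(statistics.stdev(sample_means), 1))  # close to 4/sqrt(50), about 0.57
```

Because each draw is a random sample, the sample means cluster around the population mean with spread shrinking like 1/sqrt(n); a convenience sample would offer no such guarantee.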
{"url":"https://www.educative.io/courses/using-r-data-analysis-social-sciences/case-study-average-economic-growth-rate-in-the-world-economy","timestamp":"2024-11-10T15:23:06Z","content_type":"text/html","content_length":"763072","record_id":"<urn:uuid:7fb1be21-ea1f-474d-bb45-67ff7bc8fd88>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00730.warc.gz"}
Short note: Estimation of anisotropic minimum horizontal closure stress

The following derivations and equations describe anisotropic, "Transversely Isotropic" (VTI and HTI) Poisson's ratios to obtain realistic in-situ estimates of Minimum Horizontal Closure Stress and its relation to Thomsen's δ, and Sayers' K0 and χ terms (Thomsen 1986; Sayers 1995, 2010).

VTI anisotropy

In the past, seismic imaging and AVO were driven by isotropic models of the Earth. More recently, the use of quantitative interpretation attributes to characterize unconventional reservoirs has continued within this isotropic assumption. However, at the 2013 SEG convention, Leon Thomsen argued that industry has been doing AVO wrong for 30 years, and doing geomechanics wrong for 5 years, by ignoring anisotropy in seismic methods that are increasingly being applied to extract geomechanical properties of the Earth (Goodway et al., 2006, 2010, and 2012). In his presentation Thomsen showed a VTI stiffness matrix in terms of anisotropic λ, μ parameters, leading to a conclusion that λ13 is a simple expression of P-wave modulus M0 and Thomsen's polar anisotropy parameter δ (Thomsen, 2013).
Matrix A This matrix can be represented in Lamé moduli terms as Matrix A’ below modified from Sheriff’s Dictionary of Geophysics (Sheriff 1991, p 99) as a simpler more common representation of VTI media, Matrix A' where ≡ is measured parallel to layering and ⊥ is perpendicular to layering, while λ[13] (= λ[31]) and λ[33] (with λ[33] being equivalent to λ[⊥]) are contained in stiffnesses C[13] and C[33]. With this version of Sheriff’s matrix Thomsen’s δ is represented in Lamé terms as: These equations reveal δ as being a function of the difference between λ[13] and λ[33] and their squares. To obtain a physical sense of the difference between these anisotropic lambdas, consider a VTI version of Hooke’s law for axial stress in the vertical z-direction given as: The difference between λ[13] and λ[33] can be seen as contributing to a difference in axial strain e[zz] to transverse strains e[xx ]and e[yy], from lambda terms alone, unlike the isotropic case where the contribution is equal. This suggests that δ is influenced more by variations in lambda due to both the solid planes and fluid or kerogen fill between the planes, in both VTI (see figure 1) and HTI models (Berryman et al., 1999). Figure 1. VTI symmetry axis diagram modified from Figure 13b in Goodway et al. (CSEG RECORDER, April 2006) where λ[⊥]is equivalent to λ[33]. The recent interest in hydraulic fracturing of unconventional reservoirs requires an estimate of minimum stress, since this determines the downhole pressure required to propagate a hydraulic Neglecting terms involving horizontal strains, the Minimum Horizontal Closure Stress σ[h] may be written in terms of the vertical stress σ[V] and pore pressure p as: where α[h] and α[V] are poroelastic coefficients, and K[0] = C[13]/C[33] (Sayers, 2010), with K0 being equivalent to the bound Poisson’s ratio ν* as defined in the next section. 
The impact of VTI anisotropy on the Minimum Horizontal Closure Stress is well documented as being a function of Thomsen's δ (Sayers, 1995 and 2010; Goodway et al., 2006; Thomsen, 2013). As δ is almost always positive (NMO velocity > vertical velocity), λ13 > λ⊥ implies stiffer lambda values parallel to, rather than across, layers (Figure 1). This difference in lambda can be used to derive the anisotropic VTI bound Poisson's ratio ν*VTI (Sayers' K0^VTI) from the isotropic form ν* (Sayers' K0^ISO) within the Minimum Horizontal Closure Stress equation, and is equivalent to a quantity termed χ in Sayers' 2010 SEG DISC notes (page 120), as shown in the following derivations, where Ex and Ez are the horizontal and vertical Young's moduli, and νxz and νyx are the Poisson's ratios that quantify the horizontal strain resulting from a vertical and a horizontal stress, respectively.

It follows from Sayers (2010) that the equality K0^VTI = K0^ISO + χ (or ν*VTI = ν* + χ in the notation used here) implies that

    χ = K0^VTI − K0^ISO = λ13/(λ⊥ + 2μ⊥) − λ/(λ + 2μ)

From this it is seen that the same difference between λ13 and λ⊥ that controls the sign of Thomsen's δ parameter is also the controlling factor that results in K0^VTI > K0^ISO (or ν*VTI > isotropic ν*) for most of the points shown in Figure 2 below.

Figure 2. K0^VTI calculated for the shales studied by Jones and Wang (1981), Hornby (1994), Johnston and Christensen (1995), and Wang (2002), compared with K0^ISO (Sayers, 2010).

This connection to Thomsen's δ parameter, expressed in Lamé terms from the VTI stress-strain tensor Matrix A', is identical to the relationship shown by Sayers (2010) for χ. These results show that estimates of δ, or of C13 and C33, from sonic logs measured in wells of different deviation, or from surface seismic, can be used to calculate the increase of K0^VTI over the standard, vertical-log-based estimate K0^ISO.
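In code, the correction is a one-liner once C13 and C33 are in hand. The sketch below uses K0 = C13/C33 as quoted in the text, the standard isotropic identity ν* = ν/(1 − ν) = λ/(λ + 2μ), and the standard effective-stress form of the closure-stress equation (Sayers, 2010); the moduli and stresses are illustrative numbers, not values from Table 1:

```python
def k0_iso(lam, mu):
    """Isotropic bound Poisson's ratio nu* = nu/(1 - nu) = lam/(lam + 2*mu)."""
    return lam / (lam + 2.0 * mu)

def k0_vti(c13, c33):
    """Anisotropic bound Poisson's ratio, K0 = C13/C33 (Sayers, 2010)."""
    return c13 / c33

def closure_stress(k0, sigma_v, pore_p, alpha_v=1.0, alpha_h=1.0):
    """sigma_h = K0*(sigma_v - alpha_v*p) + alpha_h*p, horizontal strains neglected."""
    return k0 * (sigma_v - alpha_v * pore_p) + alpha_h * pore_p

# Illustrative moduli (GPa): isotropic lam, mu from a vertical log,
# with C13 raised above lam, as positive delta typically implies.
lam, mu = 15.0, 12.0
c13, c33 = 17.0, lam + 2.0 * mu   # c33 = 39.0 GPa

chi = k0_vti(c13, c33) - k0_iso(lam, mu)
print(round(k0_iso(lam, mu), 3))  # 0.385
print(round(chi, 3))              # 0.051, so K0_VTI > K0_ISO here
# Closure stress (MPa) for sigma_v = 60 and pore pressure 30:
print(round(closure_stress(k0_vti(c13, c33), 60.0, 30.0), 1))
```

With δ > 0 (C13 above the isotropic λ), χ comes out positive and the anisotropic closure-stress estimate exceeds the isotropic one, consistent with most points in Figure 2.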
Table 1 shows the effect of this additional term on the standard isotropic calculation of K[0], using χ values extracted from sonic logs measured in a vertical and deviated well from the Horn River (Sayers et al., 2015).
Table 1: Average zonal values from sonic logs in a vertical and deviated well: η = (ε − δ)/(1 + 2δ), λ[≡], C[13], C[33], C[55], χ, K[0]^ISO and K[0]^VTI.
For completeness, the relevant equation from Thomsen’s 2013 SEG convention talk (mentioned above) is reproduced below and compared to a similar formulation based on the matrices and derivations above.
From this, one can see that Thomsen’s delta differs by a μ[⊥]/(λ[⊥] + μ[⊥]) term, leading to a different result for the anisotropic VTI bound Poisson’s ratio ν*[VTI] as presented by Thomsen compared to the original Goodway formulation of δ in Lamé terms, as follows:
HTI anisotropy
The effects of λ and μ on Minimum Horizontal Closure Stress for an anisotropic HTI model of vertical fractures are not well documented, but can be understood by analogy to the VTI case described above. Matrix B for HTI is a stiffness tensor matrix represented in Lamé terms equivalent to the VTI Matrix A’ and shown for a P-wave incident on a layer of vertical fractures in figure 3.
Figure 3. HTI symmetry axis diagram modified from Figure 13a in Goodway et al. (CSEG RECORDER, April 2006, part 2).
Matrix B
From Matrix B, and by extension from the equations derived above for the VTI case, the Minimum Horizontal Closure Stress for an anisotropic HTI model of vertical fractures can be given as:
An interesting conclusion from this result, contrary to the VTI case, predicts a decrease in the bound HTI Poisson’s ratio ν*[HTI] relative to the isotropic ν* and the VTI equivalent ν*[VTI], as λ[≡] + 2μ[≡] is greater than both the isotropic λ + 2μ and the equivalent λ[⊥] + 2μ[⊥] for VTI, and as λ[12] ≈ λ[13] < isotropic λ.
About the Author(s)
Bill Goodway obtained a BSc in geology from the University of London and an MSc in geophysics from the University of Calgary.
Prior to 1985 Bill worked for various seismic contractors in the UK and Canada. Since 1985 Bill has been employed at PanCanadian and then EnCana in various capacities from geophysicist to Team Lead of the Seismic Analysis Group, to Advisor for Seismic Analysis within the Frontier and New Ventures Group, and subsequently in the Canadian Ventures and Gas Shales business unit. In 2010 he ended his career with EnCana to join Apache as Manager of Geophysics and Senior Staff Advisor in the Exploration and Production Technology group. Bill has received numerous CSEG Best Paper Awards as well as the CSEG Medal in 2008. He is a member of the CSEG, SEG, EAGE, and APEGA as well as the SEG Research Committee. In addition, Bill was elected Vice President and President of the CSEG for the 2002-04 term and in 2009 he was selected as the SEG’s Honorary Lecturer for North America.
Berryman, J., and Grechka, V., 1999, Analysis of Thomsen parameters for finely layered VTI media, Geophysical Prospecting, Vol. 47, 959-978.
Berryman, J. G., Berge, P. A., and Bonner, B. P., 1999, Role of λ-diagrams in estimating porosity and saturation from seismic velocities, SEG Expanded abstracts, 176-179.
Goodway, B., 2001, AVO and Lamé constants for rock parameterization and fluid detection: CSEG RECORDER, Vol. 26, no. 6, 39-60.
Goodway, B., 2014, The magic of Lamé; An interview with Bill Goodway: CSEG RECORDER, Vol. 39, no. 6.
Goodway, B., Monk, D., Perez, M., Purdue, G., Anderson, P., Iverson, A., and Cho, D., 2012, Combined microseismic and 4D to calibrate and confirm surface 3D azimuthal AVO/LMR predictions of completions performance and well production in the Horn River gas shales of NEBC, The Leading Edge, Vol. 31, no. 12, 1502-1511.
Goodway, B., Perez, M., Varsek, J., and Abaco, C., 2010, Seismic petrophysics and isotropic-anisotropic AVO methods for unconventional gas exploration. The Leading Edge, Vol. 29, no. 12, 1500-1508.
Goodway, B., Varsek, J., and Abaco, C., 2006, Practical applications of P-wave
[Solved] Question 2. The rate of drug destruction | SolutionInn
Question 2. The rate of drug destruction by the kidneys is proportional to the amount of the drug in the body. The constant of proportionality is denoted by K. At time t the quantity of the drug in the body is x. Write down a differential equation relating x and t and show that the general solution is x = Ae^(−Kt), where A is an arbitrary constant.
Step 1: Since the drug is destroyed at a rate proportional to the amount present, dx/dt = −Kx. Separating variables gives (1/x) dx = −K dt; integrating, ln x = −Kt + c, so x = e^c · e^(−Kt) = Ae^(−Kt), where A = e^c is an arbitrary constant.
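As a quick numerical sanity check that x(t) = Ae^(−Kt) solves dx/dt = −Kx, the sketch below compares the analytic solution with a forward-Euler integration. The values A = 100 and K = 0.3 are illustrative, not part of the question:

```python
import math

def drug_amount(A, K, t):
    """Analytic solution x(t) = A * exp(-K * t) of dx/dt = -K * x."""
    return A * math.exp(-K * t)

def simulate(A, K, t_end, steps=100_000):
    """Forward-Euler integration of dx/dt = -K * x as a numerical check."""
    dt = t_end / steps
    x = A
    for _ in range(steps):
        x += dt * (-K * x)  # Euler update: x_{n+1} = x_n + dt * f(x_n)
    return x
```

With a small enough step the Euler result agrees with the analytic exponential to well within rounding, which is the expected behaviour for this linear decay equation.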
Piloting a Math Writing Intervention With Late Elementary Students At Risk for Learning Difficulties - The Meadows Center High‐stakes mathematics assessments require students to write about mathematics, although research suggests that students exhibit limited proficiency on such assessments. Students with learning difficulties may struggle with writing, mathematics, or both. Researchers employed an intervention for teaching students how to organize mathematics writing. Researchers randomly assigned participants (n = 61) in grades 3–5 to receive instruction in mathematics writing or information writing. Students receiving mathematics writing outperformed control students on a researcher‐developed measure of mathematics writing (d = 1.05). Component assessment revealed that mathematics writing students improved in writing organization (d = 1.49) but not in mathematics content (d = 0.11 ns). Results also indicated that mathematics writing students outperformed control on percentage of correct mathematics writing sequences (d = 0.82). Future directions for mathematics writing intervention development are discussed. Hebert, M. A., Powell, S. R., Bohaty, J. J., & Roehling, J. (2019). Piloting a mathematics writing intervention with late elementary students at risk for learning difficulties. Learning Disabilities Research and Practice, 34, 144–157. doi:10.1111/ldrp.12202
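The effect sizes reported above (e.g., d = 1.05) are standardized mean differences. A minimal sketch of how such a Cohen's d is computed from two groups' scores follows; the data are made up for illustration and are not from the study:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (Bessel-corrected), then the pooled SD
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical posttest scores: treatment vs. control
print(cohens_d([4, 5, 6], [1, 2, 3]))  # 3.0
```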
PRESENT VALUE OF ANNUITY: Definition, Formula, and Calculator
When it comes to retirement planning, an annuity can be a useful tool. You’ll be relying on your savings and Social Security payments to support yourself and enjoy your golden years once you’ve retired. An annuity provides an additional income stream, which can make life easier. Let’s look at how to calculate the present value of an annuity, the formula, the calculator, and how it may affect your retirement.
What is the Present Value of an Annuity?
The present value of an annuity is the current cash value of all future payments, influenced by the annuity’s rate of return or discount rate. It’s critical to remember the time value of money when calculating the present value of an annuity because it takes inflation into account. The lower the rate of return on an annuity, the higher the present value of the annuity.
The time value of money is used to calculate the present value of an annuity. According to Harvard Business School, the time value of money theory states that a sum of money is worth more now than the promise of the same sum in the future. Payments due decades in the future are worth less today because of this discounting and because of uncertain economic conditions. Current payments, on the other hand, have more value because they can be invested in the meantime. As a result, $10,000 in your hand today is worth more than $10,000 in ten years.
If you own an annuity or receive payments from a structured settlement, you may choose to sell future payments to a purchasing company in exchange for cash now. Having early access to these funds can help you pay off debt, repair your car, or put down a down payment on a house. The present value formula, along with other variables, is used by companies that buy annuities to calculate the value of future payments in today’s dollars.
What Is the Formula for Calculating the Present Value of an Annuity?
Calculating the present value of an annuity is part of determining how much your annuity is worth — and whether you are getting a fair deal when you sell your payments. You will need specific information, such as the discount rate offered by a purchasing company, to understand and apply the present value of the annuity formula. When using the present value formula, you will need the following information:
• Each fixed payment’s monetary value (PMT)
• How many payments you want to sell (n)
• The discount rate per period (r)
P = PMT × (1 − (1 + r)^(−n)) / r
Present Value of Annuity Formula
An Example of How To Calculate Present Value of Annuity
Using the above formula, you can calculate the present value of an annuity and decide whether a lump sum or an annuity payment is a better option. Here’s an example of how it might work. It should be noted that this formula is for a regular (ordinary) annuity.
Assume you have the choice of a $25,000-per-year annuity for 20 years or a $300,000 lump sum at a 5% discount rate. The following numbers can be entered into the formula:
P = 25,000 × (1 − (1 + 0.05)^(−20)) / 0.05
That works out to $311,555 when you do the math. This means that the value of the annuity is greater than the lump sum for this particular annuity, and you’d be better off taking the annuity payments rather than the lump sum.
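The worked example above translates directly into code. This is a minimal sketch of the ordinary-annuity present value; the payment, rate, and term are the example's own numbers, and the function name is mine:

```python
def present_value_annuity(pmt, r, n):
    """Present value of an ordinary annuity: PV = PMT * (1 - (1 + r)**-n) / r."""
    return pmt * (1 - (1 + r) ** -n) / r

# $25,000 per year for 20 years at a 5% discount rate
pv = present_value_annuity(25_000, 0.05, 20)
print(round(pv))  # 311555 -- worth more than the $300,000 lump sum
```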
When buying and selling mortgages, real estate investors use the present value of annuity calculation. The mortgage represents a future payment stream that combines interest and principal and can be discounted back to a present cash value, allowing the investor to calculate how much the mortgage is worth. This informs the investor whether the price he is paying is greater than or less than the expected value.
Because present value calculations involve the compounding of interest, which means that the interest on your money earns interest, they can be difficult to model in spreadsheets. Our present value annuity calculator, fortunately, solves these issues for you by converting all of the math headaches into point-and-click simplicity.
Terms and Definitions for Present Value Annuity Calculator
• Annuity – A fixed sum of money paid to someone, usually once a year for the rest of their life.
• Payment/Withdrawal Amount – The total of all annuity payments received (annuity) or made (loan). This is a series of payments that will occur in the future, expressed in nominal, or today’s, dollars.
• Annual Interest Rate (%) – The annuity’s annual interest rate. The interest rate will be used by the present value annuity calculator to discount the payment stream to its present value.
• Number of Years To Calculate Present Value – The number of years the annuity is expected to be paid or received.
• Payment/Withdrawal Frequency – The frequency with which you want the present value annuity calculator to calculate the present value. The frequency can be set to monthly, quarterly, semi-annually, or annually.
• Present Value Of An Annuity – The present value of the annuity for which you entered information: the value today of all the future payments (and any lump sum), discounted back to the present.
The cash value of recurring payments in court settlements, retirement funds, and loans is commonly calculated using the present value of an annuity. It is also used to calculate whether a mortgage payment is greater than or less than an expected value. These payments are sometimes referred to as annuities. The Effect of Discount Rates on Present Value Discount rates are used by factoring companies, or companies that will buy your annuity or structured settlement, to account for market risks such as inflation and to make a small profit by granting you early access to your payments. The value of an annuity and the amount of money you receive from a purchasing company are both affected by the discount rate. The standard discount rate ranges between 9% and 18%. They can be higher, but they are usually in the middle. The higher the present value, the lower the discount rate. Low-interest rates enable you to keep more of your money. Most states require factoring companies to disclose discount rates and present value during the transaction process, according to the Internal Revenue Service. Always request these numbers ahead of time. It’s also worth noting that the value of distant payments is lower for purchasing firms due to economic factors. The sooner you receive a payment owed to you, the more money you will receive for that payment. Payments due in the next five years, for example, are worth more than payments due in the next 25 years. Remember this during the selling process. What Factors Influence the Present Value of an Annuity? A few factors can influence the present value of an annuity. These are some examples: • The interest rate: The higher the interest rate, the lower the annuity’s present value. This is due to the fact that the interest rate is used to discount future payments. • The time it will take to receive the payments: The greater the time elapsed between payments, the greater the present value of the annuity. 
This is because compound interest has more time to grow on the investment. • The amount of each payment: The greater the periodic payments, the greater the annuity’s present value. This is due to the fact that the payments are more valuable in today’s dollars. These are just a few of the factors that can influence an annuity’s present value. When deciding whether or not to invest in an annuity, it is critical to consider all of these factors. What is present value of an annuity due? The present value of an annuity due (PVAD) is calculated using the current value of money at the end of the number of periods specified. Another way to look at it is how much an annuity due would be worth when payments are completed in the future and delivered to the present. Annuity Due vs. Ordinary Annuity The timing of annuity payments — whether at the start or end of a period — has an impact on present value calculations. Annuity due refers to payments that are made on a regular basis at the start of each period. Rent is a classic example of an annuity due to the fact that it is paid at the start of each month. An ordinary annuity is a type of retirement account in which you receive a fixed or variable payment from an insurance company at the end of each month or quarter based on the value of your annuity Annuities Explained An annuity is a financial agreement between you and an insurance company. You will pay a certain amount of money upfront or as part of a payment plan in exchange for a predetermined annual payment. You can receive annuity payments indefinitely or for a set period of time. One of the benefits of annuities is regular payments. Annuity contracts are classified into three types: • Fixed annuities provide fixed interest rates paid over a set period of time. • Variable annuities do not have guaranteed payouts, so you will have more freedom to invest your money in different ways, and your payments will be tied to the performance of those investments. 
This can result in higher returns, but it also has the potential to result in lower returns.
• Indexed annuities are hybrid annuities that combine elements of fixed and variable annuities. An indexed annuity is one that tracks a stock market index, such as the S&P 500 or the Dow Jones Industrial Average, and pays out a percentage of the index’s return.
Remember that money invested in an annuity grows tax-deferred. That is, when you begin making withdrawals, the amount you contributed to the annuity is not taxed, but your earnings are taxed at your regular income tax rate.
An annuity’s present value is the amount of money you would need to invest today in order to receive a specified stream of payments in the future. The interest rate, the length of time until the payments are received, and the amount of each payment all have an impact on this calculation. When deciding whether or not to invest in an annuity, it is critical to consider all of these factors.
When considering an annuity investment, it is also wise to seek the advice of a financial advisor. They can assist you in calculating the annuity’s present value and determining whether it is a good investment for you.
Present Value of Annuity FAQs
What is a Present Value of 1 Table?
A present value of 1 table shows the discount factors for various combinations of interest rate and number of periods. To calculate a present value, a factor from this table is multiplied by the cash amount to be paid at a later date.
Why do we calculate present value?
You must choose an adequate discount rate to value future cash flows. The present value determines whether a sum of money today is more valuable than the same sum in the future; it demonstrates that money received in the future is not worth as much as the same amount received today.
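The annuity-due timing discussed above and the "present value of 1" factors can be sketched together. Each annuity-due payment arrives one period earlier than its ordinary-annuity counterpart, which multiplies the ordinary PV by (1 + r); rates and periods below are illustrative:

```python
def pv_factor(r, n):
    """Present value of 1: the discount factor (1 + r)**-n used in PV-of-1 tables."""
    return (1 + r) ** -n

def present_value_annuity(pmt, r, n, due=False):
    """PV of an annuity; an annuity due shifts every payment one period
    earlier, multiplying the ordinary-annuity PV by (1 + r)."""
    pv = pmt * (1 - (1 + r) ** -n) / r
    return pv * (1 + r) if due else pv

# A small present-value-of-1 table (rows: periods, columns: 5% and 10%)
for n in (1, 5, 10):
    print(n, [round(pv_factor(r, n), 4) for r in (0.05, 0.10)])
```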
General numerical solution to the time-dependent master equation for non-steady-state reactions: Application to the photoinitiated isomerization of cycloheptatriene to toluene
Chemical Physics
An efficient numerical algorithm has been developed to solve the time-dependent master equation. The method may be used to calculate the probability of reaction as a function of pressure for a vibrationally hot reactant produced under non-steady-state reaction conditions. To test the numerical algorithm, calculations were performed on the probability of reaction versus buffer gas pressure for the 260 nm photoinitiated isomerization of cycloheptatriene to toluene, which occurs on the ground state potential surface following internal conversion. The average energy transferred per collision, 〈ΔEcoll〉, from cycloheptatriene to 25 buffer gases was determined via master equation fits to the Stern-Volmer data of Troe and Wieters [J. Chem. Phys. 71 (1979) 3931]. The calculated values of 〈ΔEcoll〉 agree very well with those measured directly by Troe and co-workers. © 1991.
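The paper's algorithm is not reproduced in the abstract, but the general structure of a time-dependent master equation, dp/dt = M p with a rate matrix whose columns sum to zero, can be illustrated with a toy two-level relaxation model. The rates here are invented and have nothing to do with the cycloheptatriene system:

```python
# Toy master equation: two levels coupled by collisional up/down rates.
k_up, k_dn = 0.2, 1.0
M = [[-k_up,  k_dn],
     [ k_up, -k_dn]]  # columns sum to zero, so total probability is conserved

def step(p, dt):
    """One explicit-Euler step of dp/dt = M p."""
    return [p[i] + dt * sum(M[i][j] * p[j] for j in range(2)) for i in range(2)]

p = [1.0, 0.0]  # start entirely in the lower level
for _ in range(10_000):
    p = step(p, 1e-3)
# p relaxes toward the steady state (k_dn, k_up) / (k_up + k_dn)
```

Because the columns of M sum to zero, even the simple Euler update conserves total probability exactly, which is a useful invariant to check in any master-equation solver.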
A pseudorandom selection of maths jokes

“A mathematician is a device for turning coffee into theorems.” (P. Erdös)

So while you’re waiting for the coffee to take effect, look through these…

[Compiled without attribution from lots of sources. If you know of an original source for any of these, please let us know.]

Q. Why did the chicken cross the Möbius strip?
A. To get to the other – er…

Q. What does a mathematician do when he’s constipated?
A. He works it out with a pencil.

Q. What’s the value of a contour integral around Western Europe?
A. Zero, because all the Poles are in Eastern Europe.

An English mathematician was asked by his very religious colleague:
Q. Do you believe in one God?
A. Yes – up to isomorphism.

Q. What’s purple and commutes?
A. An abelian grape.

Q. What’s yellow, and equivalent to the Axiom of Choice?
Alternatively:
Q. What’s yellow and pro-choice?
A. Zorn’s Lemon.

Q. Why did the mathematician name his dog “Cauchy”?
A. Because he left a residue at every pole.

Q. Why is it that the more accuracy you demand from an interpolation function, the more expensive it becomes to compute?
A. That’s the Law of Spline Demand.

Q. What’s nonorientable and lives in the sea?
A. Möbius Dick.

Q. What is an ‘ugh’?
A. The dual of a cough.

Q. Why didn’t Newton discover group theory?
A. Because he wasn’t Abel.

Asked how his pet parrot died, the mathematician answered, “Polynomial. Polygon.”

Lumberjacks make good musicians because of their natural logarithms.

Russell to Whitehead: “My Gödel is killing me!”

Did you hear about the geometer who went sunbathing and became a tangent?

My geometry teacher was sometimes acute, and sometimes obtuse, but always right.

Old mathematicians never die; they just lose some of their functions.

Statisticians probably do it.
Algebraists do it in groups.
(Logicians do it) or [not (logicians do it)].
Möbius always did it on the same side.

Remember Americans pronounce z ‘zee’.
Integral z-squared dz
From 1 to the cube root of 3
Times the cosine
Of three pi over 9
Equals log of the cube root of ‘e’.
And it’s right!

A little longer

Three men are in a hot-air balloon. Soon, they find themselves lost in a canyon. One of the three men says, “I’ve got an idea. We can call for help in this canyon and the echo will carry our voices far.” So he leans over the basket and yells out, “Helllloooooo! Where are we?” They hear the echo several times. Fifteen minutes later, they hear this echoing voice: “Helllloooooo! You’re lost!” One of the men says, “That must have been a mathematician.” Puzzled, one of the other men asks, “Why do you say that?” “For three reasons. One, he took a long time to answer; two, he was absolutely correct, and three, his answer was absolutely useless.”

A bunch of Polish scientists decided to flee their repressive government by hijacking an airliner and forcing the pilot to fly them to a western country. They drove to the airport, forced their way on board a large passenger jet, and found there was no pilot on board. Terrified, they listened as the airport sirens rang out. Finally, one of the scientists suggested that since he was an experimentalist, he would try to fly the aircraft. He sat down at the controls and tried to figure them out. The sirens got louder and louder. Armed men surrounded the jet. The would-be pilot’s friends cried out, “Please, please take off now! Hurry!” The experimentalist calmly replied, “Have patience. I’m just a simple Pole in a complex plane.”

Noah’s Ark lands after The Flood and Noah releases all the animals, saying, “Go forth and multiply.” Several months pass and Noah decides to check up on the animals. All are doing fine except a pair of snakes. “What’s the problem?” asks Noah. “Cut down some trees and let us live there,” say the snakes. Noah follows their advice. Several more weeks pass and Noah checks up on the snakes again. He sees lots of little snakes; everybody is happy.
Noah says, “So tell me how the trees helped.” “Certainly,” reply the snakes. “We’re adders, and we need logs to multiply.”

Two male mathematicians are in a café. The first one says to the second that the average person knows very little about basic mathematics. The second mathematician disagrees, and claims that most people can cope with a reasonable amount of maths. The first goes off to the toilets, and in his absence his companion calls over the waitress. He tells her that in a few minutes, after his friend has returned, he will call her over and ask her a question. All she has to do is answer one third x cubed. She repeats, “One thir – dex cue”? He repeats, “One third x cubed”. “One thir dex cubed?” Yes, that’s right, he says. So she agrees, and goes off mumbling to herself, “One thir dex cubed…” The first guy returns and the second proposes a bet to prove his point that most people do know something about basic maths. He says he will ask the blonde waitress an integral, and the first laughingly agrees. The second man calls over the waitress and asks, “What is the integral of x squared?”. As instructed, the waitress says “One third x cubed,” and while walking away, turns back and adds over her shoulder, “Plus a constant.”

Dubious mathematics

1+1=3, for large values of 1 and small values of 3.

lim (8→9) √8 = 3

lim (w→∞) 3 = 8

lim (n→∞) (sin x)/n = 6
Proof: cancel the n in the numerator and denominator.

Lemma: All horses are the same colour.
Base case: It is immediate that all horses in a set containing only one horse are the same colour.
Inductive step: Suppose you have a set of k+1 horses. Remove one horse from the set, so that you have k horses. The inductive hypothesis tells us that all these horses are the same colour. Now return the horse you removed and take out a different horse. Again, by the inductive hypothesis, the remaining k horses are all the same colour. Thus all k+1 horses you started with are the same colour.
The result follows by induction. QED.

Theorem: All horses have infinitely many legs.
Proof (i): Everyone would agree that all horses have an even number of legs. It is also well-known that horses have forelegs in front and two legs at the back. 4 + 2 = 6 legs, which is certainly an odd number of legs for a horse to have! So, since we have shown the number of legs on a horse to be both even and odd, there must be infinitely many of them. QED.
Proof (ii): Suppose, for a contradiction, that there exists a horse which does not have infinitely many legs. That would be a horse of another colour; so by the above Lemma, it doesn’t exist. QED.

Theorem: A cat has nine tails.
Proof: No cat has eight tails. A cat has one tail more than no cat. Therefore, a cat has nine tails. QED.

Theorem: All positive integers are equal.
Proof: It suffices to show that, for any two positive integers a and b, we have a=b; to demonstrate this, we show that, for all positive integers n, if for any two positive integers a and b we have max(a,b)=n then a=b. We proceed by induction.
Base case: If n=1 then a and b, being positive integers, must both equal 1, so a=b.
Inductive step: Assume that the theorem is true for some value k. Take positive integers a and b with max(a,b)=k+1. Then max(a-1,b-1)=k so, by the inductive hypothesis, a-1=b-1; consequently a=b.

The great and the good

John von Neumann supposedly had the habit of simply writing answers to homework assignments on the board (the method of solution being, of course, obvious) when he was asked how to solve problems. Once, one of his students tried to get more helpful information by asking if there was another way to solve the problem. Von Neumann looked blank for a moment, thought, and then answered, “Yes.”

Norbert Wiener was renowned for his absent-mindedness. When he and his family moved from Cambridge to Newton his wife, knowing that he would be of absolutely no help, packed him off to MIT while she directed the move.
Since she was certain that he would forget that they had moved and where they had moved to, she wrote down the new address on a piece of paper, and gave it to him. Naturally, in the course of the day, some insight occurred to him. He reached in his pocket, found a piece of paper on which he furiously scribbled some notes, thought it over, decided there was a fallacy in his idea, and threw the piece of paper away. At the end of the day he went home – to the old address in Cambridge, of course. When he got there he realised that they had moved, that he had no idea where they had moved to, and that the piece of paper with the address was long gone. Fortunately inspiration struck. There was a young girl on the street and he conceived the idea of asking her where he had moved to, saying, “Excuse me, perhaps you know me. I’m Norbert Wiener and we’ve just moved. Would you know where we’ve moved to?” To which the young girl replied, “Yes Daddy, Mommy thought you would forget.”

The great Polish mathematician Waclaw Sierpinski was, coincidentally, also absent-minded, and he too had to move house. His wife, knowing of his fallibility, said to him as they stood on the street with all their belongings, “Now, you stand here and watch our ten cases, while I go and get a taxi.” She left him there, eyes glazed and humming absently. Some minutes later she returned, a taxi having been called. Sierpinski challenged her (possibly with a glint in his eye): “I thought you said there were ten cases, but I’ve only counted to nine.” His wife insisted there were ten. “No, count them,” replied Sierpinski, “0, 1, 2, …”

The great logician Bertrand Russell once claimed that he could prove anything if given that 1+1=1. So one day, an undergraduate demanded: “Prove that you’re the Pope.” Russell thought for a while and proclaimed, “I am one. The Pope is one.
Therefore, the Pope and I are one.”

Hiawatha Designs an Experiment
Kendall, Maurice (1959) The American Statistician 13: 23-24

Hiawatha, mighty hunter,
He could shoot ten arrows upward,
Shoot them with such strength and swiftness
That the last had left the bow-string
Ere the first to earth descended.

This was commonly regarded
As a feat of skill and cunning.
Several sarcastic spirits
Pointed out to him, however,
That it might be much more useful
If he sometimes hit the target.
“Why not shoot a little straighter
And employ a smaller sample?”
Hiawatha, who at college
Majored in applied statistics,
Consequently felt entitled
To instruct his fellow man
In any subject whatsoever,
Waxed exceedingly indignant,
Talked about the law of errors,
Talked about truncated normals,
Talked of loss of information,
Talked about his lack of bias,
Pointed out that (in the long run)
Independent observations,
Even though they missed the target,
Had an average point of impact
Very near the spot he aimed at,
With the possible exception of a set of measure zero.

“This,” they said, “was rather doubtful;
Anyway it didn’t matter
What resulted in the long run:
Either he must hit the target
Much more often than at present,
Or himself would have to pay for
All the arrows he had wasted.”

Hiawatha, in a temper,
Quoted parts of R. A. Fisher,
Quoted Yates and quoted Finney,
Quoted reams of Oscar Kempthorne,
Quoted Anderson and Bancroft
(practically in extenso)
Trying to impress upon them
That what actually mattered
Was to estimate the error.

Several of them admitted:
“Such a thing might have its uses;
Still,” they said, “he would do better
If he shot a little straighter.”

Hiawatha, to convince them,
Organized a shooting contest.
Laid out in the proper manner
Of designs experimental
Recommended in the textbooks,
Mainly used for tasting tea
(but sometimes used in other cases)
Used factorial arrangements
And the theory of Galois,
Got a nicely balanced layout
And successfully confounded
Second order interactions.

All the other tribal marksmen,
Ignorant (benighted creatures)
Of experimental setups,
Used their time of preparation
Putting in a lot of practice
Merely shooting at the target.

Thus it happened in the contest
That their scores were most impressive
With one solitary exception.
This, I hate to have to say it,
Was the score of Hiawatha,
Who as usual shot his arrows,
Shot them with great strength and swiftness,
Managing to be unbiased,
Not however with a salvo
Managing to hit the target.

“There!” they said to Hiawatha,
“That is what we all expected.”

Hiawatha, nothing daunted,
Called for pen and called for paper.
But analysis of variance
Finally produced the figures
Showing beyond all peradventure,
Everybody else was biased.
And the variance components
Did not differ from each other’s,
Or from Hiawatha’s.

(This last point, it might be mentioned,
Would have been much more convincing
If he hadn’t been compelled to
Estimate his own components
From experimental plots on
Which the values all were missing.)

Still they couldn’t understand it,
So they couldn’t raise objections.
(Which is what so often happens
With analysis of variance.)

All the same his fellow tribesmen,
Ignorant benighted heathens,
Took away his bow and arrows,
Said that though my Hiawatha
Was a brilliant statistician,
He was useless as a bowman.
As for variance components
Several of the more outspoken
Made primaeval observations
Hurtful of the finer feelings
Even of the statistician.

In a corner of the forest
Sits alone my Hiawatha
Permanently cogitating
On the normal law of errors.
Wondering in idle moments
If perhaps increased precision
Might perhaps be sometimes better
Even at the cost of bias,
If one could thereby now and then
Register upon a target.

The Zeta Function Song
[Sung to the tune of “Sweet Betsy from Pike”]

Where are the zeros of zeta of s?
G. F. B. Riemann has made a good guess,
They’re all on the critical line, said he,
And their density’s one over 2 pi log t.

This statement of Riemann’s has been like a trigger,
And many good men, with vim and with vigour,
Have attempted to find, with math’matical rigour,
What happens to zeta as mod t gets bigger.

The names of Landau and Bohr and Cramér,
And Hardy and Littlewood and Titchmarsh are there,
In spite of their efforts and skill and finesse,
In locating the zeros no-one’s had success.

In 1914 G. H. Hardy did find,
An infinite number that lay on the line,
His theorem, however, won’t rule out the case,
That there might be a zero at some other place.

Let P be the function pi minus li,
The order of P is not known for x high,
If square root of x times log x we could show,
Then Riemann’s conjecture would surely be so.

Related to this is another enigma,
Concerning the Lindelöf function mu(sigma)
Which measures the growth in the critical strip,
And on the number of zeros it gives us a grip.

But nobody knows how this function behaves,
Convexity tells us it can have no waves,
Lindelöf said that the shape of its graph,
Is constant when sigma is more than one half.

Oh, where are the zeros of zeta of s?
We must know exactly, we cannot just guess,
In order to strengthen the prime number theorem,
The path of integration must not get too near ’em.

Impure Mathematics: the cautionary tale of Polly Nomial

Once upon a time (1/t) pretty little Polly Nomial was strolling across a field of vectors when she came to the boundary of a singularly large matrix. Now Polly was convergent, and her mother had made it an absolute condition that she must never enter such an array without her brackets on.
Polly, however, who had changed her variables that morning and was feeling particularly badly behaved, ignored this condition on the basis that it was insufficient and made her way in amongst the complex elements. Rows and columns closed in on her from all sides. Tangents approached her surface. She became tensor and tensor. Quite suddenly two branches of a hyperbola touched her at a single point. She oscillated violently, lost all sense of directrix, and went completely divergent. She tripped over a square root that was protruding from the erf and plunged headlong down a steep gradient. When she rounded off once more, she found herself inverted, apparently alone, in a non-Euclidean space.

She was being watched, however. That smooth operator, Curly Pi, was lurking inner product. As his eyes devoured her curvilinear coordinates, a singular expression crossed his face. He wondered, “Was she still convergent?” He decided to integrate properly at once.

Hearing a common fraction behind her, Polly rotated and saw Curly Pi approaching with his power series extrapolated. She could see at once by his degenerate conic and his dissipative terms that he was bent on no good.

“Arcsinh,” she gasped.

“Ho, ho,” he said, “What a symmetric little asymptote you have. I can see your angles have lots of secs.”

“Oh sir,” she protested, “Keep away from me. I haven’t got my brackets on.”

“Calm yourself, my dear,” said our suave operator, “your fears are purely imaginary.”

“I, I,” she thought, “perhaps he’s not normal but homologous.”

“What order are you?” the brute demanded.

“Seventeen,” replied Polly.

Curly leered. “I suppose you’ve never been operated on.”

“Of course not,” Polly replied quite properly, “I’m absolutely convergent.”

“Come, come,” said Curly, “let’s go to a decimal place I know and I’ll take you to the limit.”

“Never,” gasped Polly.

“Abscissa,” he swore, using the vilest oath he knew. His patience was gone.
Coshing her over the coefficient with a log until she was powerless, Curly removed her discontinuities. He stared at her significant places, and began smoothing out her points of inflection. Poor Polly. The algorithmic method was now her only hope. She felt his digits tending to her asymptotic limit. Her convergence would soon be gone forever.

There was no mercy, for Curly was a Heaviside operator. Curly’s radius squared itself; Polly’s loci quivered. He integrated by parts. He integrated by partial fractions. After he cofactored, he performed Runge-Kutta on her. The complex beast even went all the way around and did a contour integration. What an indignity – to be multiply connected on her first integration. Curly went on operating until he completely satisfied her hypothesis, then he exponentiated and became completely orthogonal.

When Polly got home that night, her mother noticed that she was no longer piecewise continuous, but had been truncated in several places. But it was too late to differentiate now. As the months went by, Polly’s denominator increased monotonically. Finally she went to l’Hôpital and generated a small but pathological function which left surds all over the place and drove Polly to deviation.

The moral of our sad story is this: If you want to keep your expressions convergent, never allow them a single degree of freedom.

The only dirty joke in maths

Q. What’s the square root of 69?
A. Eight something.
Quantum Steampunk Is A Real Thing!

Quantum steampunk is a retrofuturistic field of physics where thermodynamics and electrons meet / ScienceDirect. The Industrial Revolution meets the quantum-technology revolution! A steampunk adventure guide to how mind-blowing quantum physics is transforming our understanding of information and energy. Victorian era steam engines and particle physics may seem worlds (as well as centuries) apart, yet a new branch of science, quantum thermodynamics, reenvisions the scientific underpinnings of the Industrial Revolution through the lens of today's roaring quantum information revolution.

• The real field of quantum steampunk is cutting-edge science.
• Entropy explains why time moves forward and first emerged in thermodynamics.
• Any closed system of any size or scale can exhibit entropies that are meaningful.

Scientists are mashing up steamy old thermodynamics and cutting-edge quantum mechanics into a new field they’re calling quantum steampunk. By combining some of the most established 19th-century science with nano-scale quantum technology, researchers say they can solve complex problems that require versatility and skill in both fields. Writing for Scientific American, theoretical physicist Nicole Yunger Halpern explains that one of the key concepts shared by both kinds of science is entropy. “Entropy quantifies our uncertainty,” she writes. “According to the second law of thermodynamics, the entropy of a closed, isolated system cannot shrink.
This fact underlies the reality that time flows in a single direction.” New technologies like quantum computers and quantum-based information security end up using physics as much as they use the typical programming ideas of traditional barriers like firewalls or complex passwords. Because of that, entropy is back in the picture as a way to measure unknown quantities in these quantum systems.

“Entropy is often thought of as a single entity, but in fact, many breeds of entropy exist in the form of different mathematical functions that describe different situations,” Yunger Halpern explains. She says the overarching idea of entropy and its application as functions and equations means it’s a good model for a variety of other situations, too. Quantum systems down to the nano-scale level can still be closed in the sense of the definition of entropy.

“Suppose we are trying to use entanglement to share information in a certain channel,” Yunger Halpern muses. “We might ask, Is there a theoretical limit to how efficiently we can perform this task? The answer will likely depend on an entropy.” Instead of an idea like bandwidth over a cable, quantum systems deal directly with how electrons are moving. The speeds or capacities can be pure

There’s a more direct and literal way quantum mechanics and thermodynamics must peacefully coexist. Quantum computers rely on that same direct motion of electrons and other particles, which means they generate heat that usually warms the computer to an extent that it stops running correctly. Research into fault-tolerant quantum computing suggests ways to vent heat from quantum computers and channel it into energy-producing heat engines.

What Yunger Halpern concludes is that while thermodynamic entropy is the one the public understands best, the next phase of science will pivot on the quantum kind.
“These entropies quantify not only uncertainty but also the efficiency with which we can perform information-processing tasks, like data compression, and thermodynamic tasks, like the powering of a car,” Yunger Halpern writes. If you want to feel very dizzy and small in the world, think about the microsystems with entropies that nest into larger systems with larger entropies, and so on and so on—all the way up to the scale of the universe itself. Suddenly, the idea of a computer made by monitoring individual electrons one at a time doesn’t seem so far out.
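The “many breeds of entropy” mentioned above are simply different mathematical functions of the same probability distribution. As a rough, illustrative sketch (the example distributions and the choice of Shannon versus Rényi entropy here are mine, not the article’s), two such breeds can be computed like this:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i * log2(p_i), in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha != 1); tends to Shannon as alpha -> 1."""
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty over 4 outcomes
skewed = [0.7, 0.1, 0.1, 0.1]        # one outcome is much more likely

print(shannon_entropy(uniform))      # 2.0 bits
print(shannon_entropy(skewed))       # ~1.357 bits: less uncertainty
print(renyi_entropy(skewed, 2))      # ~0.943 bits: a different "breed" (collision entropy)
```

Different orders weight the same distribution differently, which is one way different information-processing tasks can come to be governed by different entropies.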
What are all the factors of 72?

Answer 1

The factors are 1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 36, 72.

I find factors in pairs. It will look like more work than it is, because I will explain how I am doing these steps. I do most of the work without writing it down. I'll put the explanation in [brackets] and each factor pair on its own line. I'll proceed by starting with 1 on the left and checking each number in order until either I get to a number already on the right or I get to a number greater than the square root of 72.

1 × 72

[I divide 72 by 2 because I can see it is divisible by 2; this gives me the next pair.]

2 × 36

[Now we check 3 and we get the next pair. I use a little trick for this. I know that 36 is divisible by 3 and 36 = 3 × 12. This tells me that 72 = 2 × 3 × 12, so I know that 72 = 3 × 2 × 12 = 3 × 24.]

3 × 24

[Now we need to check 4. Up above, we got 72 = 2 × 36; since 36 = 2 × 18, we see that 72 = 2 × 2 × 18 = 4 × 18.]

4 × 18

[The number five is the next to be checked, but 72 is not divisible by five. I usually write a number before I check, so I cross out the ones that don't apply.]

[Move on to 6. Looking above, I want to 'build' a 6 by multiplying a number on the left times a factor of the number to its right. I see two ways to do that: 2 × 36 = 2 × 3 × 12 = 6 × 12 and 3 × 24 = 3 × 2 × 12 = 6 × 12. (Or maybe you just know that 6 × 12 = 72.)]

6 × 12

[72 cannot be divided by 7.]

[4 × 18 = 4 × 2 × 9 = 8 × 9.]

8 × 9

[Is that clear? Any factor of 72 greater than 9 must be multiplied by something less than 8 to get 72. But we've checked all the numbers up to and including 8. So we're finished.]

[And that's all. 9 and the factors that are greater than 9 are already written on the right in the list of pairs above.]
[If we were doing this for 39 we would get 1 × 39 and 3 × 13, then we cross off every number until we notice that 7 × 7 = 49. If 39 had a factor greater than 7 it would have to be multiplied by something less than 7 (otherwise we get 49 or more). So we would be finished.]
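The pair-by-pair method described above translates directly into a short program. This sketch (not part of the original answer) checks candidate divisors only up to the square root, just as the answer stops at 8 for 72:

```python
def factor_pairs(n):
    """Collect factor pairs (d, n // d) by trial division up to sqrt(n)."""
    pairs = []
    d = 1
    while d * d <= n:
        if n % d == 0:           # d divides n, so (d, n // d) is a pair
            pairs.append((d, n // d))
        d += 1                   # "cross off" d and move to the next candidate
    return pairs

pairs = factor_pairs(72)
factors = sorted({f for pair in pairs for f in pair})
print(pairs)    # [(1, 72), (2, 36), (3, 24), (4, 18), (6, 12), (8, 9)]
print(factors)  # [1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 36, 72]
```

Stopping at the square root works for the same reason given in the answer: any factor larger than √n must be paired with one smaller than √n, which has already been checked.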
The material on this page is from the 1996-97 catalog and may be out of date. Please check the current year's catalog for current information. Professors Brooks and Haines; Associate Professors Ross, Rhodes, Chair, and Wong; Assistant Professors Shulman and Allman; Ms. Harder and Ms. Burrows Mathematics today is a dynamic and ever-changing subject, and an important part of a liberal-arts education. Mathematical skills such as data analysis, problem solving, pattern recognition, statistics, and probability are increasingly vital to science, technology, and society itself. Our entry-level courses introduce students to basic concepts and tools and hint at some of the power and beauty behind these fundamental results. Our upper-level courses and senior thesis option provide our majors with the opportunity to explore mathematical topics in greater depth and sophistication, and delight in the fascination of this "queen of the sciences." During new-student orientation the Department administers a placement examination to all new students planning to study mathematics. Based on the examination and other information, the Department recommends an appropriate starting course: Mathematics 103, 105, 106, 205, 206, or a more advanced course. The major in mathematics consists of: 1) Mathematics 205, 206; 2) Mathematics s21, which should be taken during Short Term of the first year; 3) Mathematics 301, 309, and five elective mathematics or computer-science courses numbered 200 or higher; 4) a one-hour oral presentation; and 5) either a written comprehensive examination or a two-semester thesis (Mathematics 457-458). This option requires departmental approval. Entering students may be exempted from any of the courses in 1) on the basis of work before entering college. Any mathematics or computer-science Short Term unit numbered 30 or above may be used as one of the electives in 3). One elective may also be replaced by a departmentally approved course from another department. 
The mathematics major requirements accommodate a wide variety of interests and career goals. The courses provide broad training in undergraduate mathematics and computer science, preparing majors for graduate study and for positions in government, industry, and the teaching profession. The student should consult with his or her major advisor in designing an appropriate course of study. The following suggestions may be helpful: For majors considering a career in secondary education we suggest Mathematics 312, 314, 315, 341, and Computer Science 101 and 102. Students interested in operations research, business, or actuarial science should consider Mathematics 218, 239, 314, 315, 341, s32, and the courses in computer science. Students interested in applied mathematics in the physical and engineering sciences should consider Mathematics 218, 219, 308, 314, 315, 341, and the courses in computer science. Majors planning on graduate study in pure mathematics should particularly consider Mathematics 302, 308, 310, 313, and 317. All mathematics majors may pursue individual research either through 360 (Independent Study) or 457-458 (Senior Thesis). Students normally begin study of computer science with Computer Science 101. New students who have had the equivalent of 101 and would like to continue should consult with the Department. The four core courses required for the secondary concentration in computer studies are Computer Science 101 and 102 and any two from Computer Science 201, 202, 301, or 302. Mathematics 218 and any of the computer-science courses or units not credited toward the core may be credited toward the three electives required for the concentration. The complete list of electives includes courses from other departments as well, and is designated annually by the Computing Services Committee. Students interested in a career in computer science should consider not only computer-science courses, but also Mathematics 205, 218, 239, 314, and 315. 
General Education. The quantitative requirement is satisfied by any of the mathematics or computer-science courses or units. 101. Working with Data. Techniques for analyzing data are described in ordinary English without emphasis on mathematical formulas. Graphical and descriptive techniques for summarizing data, design of experiments, sampling, analyzing relationships, statistical models, and hypothesis testing. Applications from everyday life: drug testing, legal discrimination cases, public-opinion polling, industrial quality control, and reliability analysis. Students are instructed in the use of the computer, which is used extensively throughout the course. Enrollment is limited to 30. B. Shulman. 103-104. Calculus with Algebra. Designed for students whose backgrounds in algebra are weak, or who have not taken any mathematics recently, the course covers in two semesters the material of Mathematics 105 together with a review of the necessary precalculus ideas. A student receives one course credit upon completion of the consecutive fall and winter semesters. Students may then enroll in Mathematics 106 if they wish to continue their study of calculus. Staff. 105. Calculus I. Calculus is Latin for a small stone used in reckoning or "calculating." Inspired by problems in astronomy, the branch of mathematics today known as the calculus was developed in the seventeenth century by Newton and Leibniz. Since then, the methods of integral and differential calculus have been applied to problems in the biological, physical, chemical, and social sciences. The first semester develops a library of functions and treats the key concepts of the derivative and the integral, emphasizing interpretation and understanding of the ideas behind the techniques. Applications of the derivative include optimization problems and curve sketching. The course follows a modern approach, combining standard analysis with a new emphasis on graphical and numeric techniques. 
The graphing scientific calculator becomes an important tool in this approach. Enrollment is limited to 25 per section. S. Ross, D. Haines, P. Wong, Staff. 106. Calculus II. A continuation of Calculus I. Further techniques of integration are studied. Applications of Riemann sums and the definite integral to problems drawn from physics, biology, chemistry, economics, and probability are treated in depth. An introduction to differential equations (with applications) and approximation techniques using Taylor and Fourier series is included. Prerequisite: Mathematics 105 or the equivalent. Enrollment is limited to 25 per section. P. Wong, Staff. 155. Mathematical Models in Biology. Mathematical models are increasingly important throughout the life sciences. This course provides an introduction to deterministic and stochastic models in biology, and to methods of fitting and testing them against data. Examples are chosen from a variety of biological and medical fields, such as ecology, molecular evolution, and infectious disease. Computers are used extensively for modeling and for analyzing data. This course is the same as Biology 255. Recommended background: Biology 101s. Enrollment is limited to 30. Staff. 205. Linear Algebra. Vectors and matrices are introduced as devices for the solution of systems of linear equations with many variables. Although these objects can be viewed simply as algebraic tools, they are better understood by applying geometric insight from two and three dimensions. This leads to an understanding of higher dimensional spaces and to the abstract concept of a vector space. Other topics include orthogonality, linear transformations, determinants, and eigenvectors. This course should be particularly useful to students majoring in any of the natural sciences or economics. Prerequisite: one 100-level mathematics course or the equivalent. Open to first-year students. J. Rhodes. 206. Multivariable Calculus. 
This course extends the ideas of Calculus I and II to deal with functions of more than one variable. Because of the multidimensional setting, essential use is made of the language of linear algebra. While calculations tend to make straightforward use of the techniques of single-variable calculus, more effort must be spent in developing a conceptual framework for understanding curves and surfaces in higher-dimensional spaces. Topics include partial derivatives, derivatives of vector-valued functions, vector fields, integration over regions in the plane and three-space, and integration on curves and surfaces. This course should be particularly useful to students majoring in any of the natural sciences or economics. Prerequisites: Mathematics 106 and 205, or their equivalents. Open to first-year students. S. Ross. 218. Numerical Analysis. This course studies the best ways to perform calculations that have already been developed in other mathematics courses. For instance, if a computer is to be used to approximate the value of an integral, one must understand both how quickly an algorithm can produce a result and how trustworthy that result is. While students will implement algorithms on computers, the focus of the course is the mathematics behind the algorithms. Topics may include interpolation techniques, approximation of functions, finding solutions of equations, differentiation and integration, solution of differential equations, Gaussian elimination and iterative solutions of linear systems, and eigenvalues and eigenvectors. Prerequisites: Mathematics 106 and 205 and Computer Science 101. D. Haines. 219. Differential Equations. A differential equation is a relationship between a function and its derivatives. Many real-world situations can be modeled using these relationships. This course is a blend of the mathematical theory behind differential equations and their applications. The emphasis is on first and second order linear equations.
Topics include existence and uniqueness of solutions, power series solutions, numerical methods, and applications such as population models and mechanical vibrations. Prerequisite: Mathematics 206. Staff. 239. Linear Programming and Game Theory. Linear programming is an area of applied mathematics that grew out of the recognition that a wide variety of practical problems reduce to the purely mathematical task of maximizing or minimizing a linear function whose variables are restricted by a system of linear constraints. A closely related area is game theory, which provides a mathematical way of dealing with decision problems in a competitive environment, where conflict, risk, and uncertainty are often involved. The course focuses on the underlying theory, but applications to social, economic, and political problems abound. Topics include the simplex method for solving linear-programming problems and two-person zero-sum games, the duality theorem of linear programming, and the min-max theorem of game theory. Additional topics will be drawn from such areas as n-person game theory, network and transportation problems, and relations between price theory and linear programming. Computers are used regularly. The course is the same as Economics 239. Prerequisites: Computer Science 101 and Mathematics 205. Staff. 301. Real Analysis. An introduction to the foundations of mathematical analysis, this course presents a rigorous treatment of elementary concepts such as limits, continuity, differentiation, and integration. Elements of the topology of the real numbers will also be covered. Prerequisites: Mathematics 206 and s21. B. Shulman. 302. Topics in Real Analysis. The content varies. Possible topics include Fourier analysis, Lebesgue integration, measure theory, calculus on manifolds, special functions. Prerequisite: Mathematics 301. Staff. 308. Complex Analysis. This course extends the concepts of calculus to deal with functions whose variables and values are complex numbers.
Instead of producing new complications, this leads to a theory that is not only more aesthetically pleasing, but is also more powerful. The course should be valuable not only to those interested in pure mathematics, but also to those who need additional computational tools for physics or engineering. Topics include the geometry of complex numbers, differentiation and integration, representation of functions by integrals and power series, and the calculus of residues. Prerequisite: Mathematics 206. Staff. 309. Abstract Algebra I. An introduction to basic algebraic structures, many of which are introduced either in high-school algebra or in Mathematics 205. These include the integers and their arithmetic, modular arithmetic, rings, polynomial rings, ideals, quotient rings, fields, and groups. Prerequisites: Mathematics 205 and s21. J. Rhodes. 310. Abstract Algebra II. A continuation of Mathematics 309, with emphasis on the theory of rings and fields. Topics include integral domains, polynomial rings, an introduction to Galois theory, and solvability by radicals. Prerequisite: Mathematics 309. Staff. 312. Foundations of Geometry. The study of the evolution of geometric concepts starting from the ancient Greeks (800 B.C.) and continuing to current topics. These topics are studied chronologically as a natural flow of ideas: conic sections from the Greek awareness of astronomy, continuing to Kepler and Newton; perspective in art and geometry; projective geometry including the Gnomonic, Mercator, and Stereographic terrestrial maps; Euclidean and non-Euclidean geometries with their respective axiomatic structure; isometries; the inversion map in the plane and in 3-space; curvature of curves and surfaces; graph theory including tilings (tessellations); fixed point theorems; space-time geometry. Geometers encountered are Euclid, Apollonius, Pappus, Descartes, Dürer, Kepler, Newton, Gauss, Riemann, A.W. Tucker, and others. R. Sampson. 313. Topology.
A study of those geometric properties of space which are invariant under transformations. Properties include continuity, compactness, connectedness, and separability. Prerequisites: Mathematics 206 and s21. S. Ross. 314. Probability. Probability theory is the foundation on which statistical data analysis depends. This course together with its sequel, Mathematics 315, covers topics in mathematical statistics. Both courses are recommended for math majors with an interest in applied mathematics and for students in other disciplines such as psychology and economics who wish to learn about some of the mathematical theory underlying the methodology used in their fields. Prerequisite: Mathematics 106. M. Harder. 315. Statistics. The sequel to Mathematics 314. This course covers estimation theory and hypothesis testing. Prerequisite: Mathematics 314. Staff. 317. Differential Geometry. This course further develops the ideas of multivariable calculus to study the geometry of curves and surfaces. The concepts of the tangent space and orientation are used to understand the curvature of n-dimensional surfaces. Geodesics on surfaces are introduced as analogs of straight lines in Euclidean space. Students with an interest in physics may find this course a useful introduction to some of the ideas behind mathematical formulations of general relativity. Prerequisite: Mathematics 206. P. Wong. 341. Mathematical Modeling. Often we are interested in analyzing complex situations (like the weather, a traffic flow pattern, or an ecological system) in order to predict qualitatively the effect of some action. The purpose of this course is to provide experience in the process of using mathematics to model real-life situations. The first half examines and critiques specific examples of the modeling process from various fields. During the second half each student creates, evaluates, refines, and presents a mathematical model from a field of his or her own choosing. 
Prerequisite: Mathematics 206. Staff. 360. Independent Study. Independent study by an individual student with a single faculty member. Permission of the Department is required. Students are limited to one independent study per semester. 365. Special Topics. Content varies from semester to semester. Possible topics include chaotic dynamical systems, number theory, mathematical logic, representation theory of finite groups, measure theory, algebraic topology, combinatorics, and graph theory. Prerequisites vary with the topic covered but are usually Mathematics 301 and/or 309. Staff. 457-458. Senior Thesis. Prior to entrance into Mathematics 457, students must submit a proposal for the work they intend to undertake toward completion of a two-semester thesis. Open to all majors upon approval of the proposal. Required of candidates for honors. Students register for Mathematics 457 in the fall semester and Mathematics 458 in the winter semester. Staff. Short Term Units s21. Introduction to Abstraction. An intensive development of the important concepts and methods of abstract mathematics. Students work individually, in groups, and with the instructors to prove theorems and solve problems. Students meet for up to five hours daily to explore such topics as proof techniques, logic, set theory, equivalence relations, functions, and algebraic structures. The unit provides exposure to what it means to be a mathematician. Prerequisite: one semester of college mathematics. Required of all majors. Staff. s32. Topics in Operations Research. An introduction to a selection of techniques that have proved useful in management decision-making: queuing theory, inventory theory, network theory (including PERT and CPM), statistical decision theory, computer modeling, and dynamic programming. Prerequisites: Mathematics 105 and a course in probability or statistics. Written permission of the instructor is required. Enrollment is limited to 20. Staff. s45. Seminar in Mathematics. 
The content varies. Recent topics have included Inverse Problems in the Mathematical Sciences and Introduction to Error Correcting Codes. Staff.

s50. Individual Research. The Department permits registration for this unit only after the student submits a written proposal for a full-time research project to be completed during the Short Term and obtains the sponsorship of a member of the Department to direct the study and evaluate its results. Students are limited to one individual research unit. Staff.

Computer Science

101. Computer Science I. An introduction to computer science, with the major emphasis on the design, development, and testing of computer software. It introduces the student to a disciplined approach to problem-solving and system development in a modern programming environment using an object-based event-driven programming language. Students develop programs in Visual BASIC to run under the Windows operating system. The course is taught entirely in a hands-on laboratory setting. Students spend the last portion of the course on an individual or group project of their own choice. Enrollment is limited to 16 per section. R. Brooks.

102. Computer Science II. A continuation of Computer Science I. The major emphasis of the course is on object-oriented software design and development using the C++ language. The object-oriented paradigm provides the context for studying additional topics such as data structures, software engineering, and large software systems. Students spend the last portion of the course on an individual or group project of their own choice. Computer Science 101 and 102 provide a foundation for further study in computer science. Prerequisite: Computer Science 101. Enrollment is limited to 16 per section. R. Brooks.

201. Principles of Computer Organization.
Computer and processor architecture and organization including topics such as operating systems, memory organization, addressing modes, segmentation, input/output, control, synchronization, interrupts, multiprocessing, and multitasking. The course includes training in digital logic, machine language programming, and assembly language programming. Prerequisite: Computer Science 101. Open to first-year students. R. Brooks.

202. Principles of Programming Languages. An introduction to the major concepts and paradigms of contemporary programming languages. Concepts covered include procedural abstraction, data abstraction, tail-recursion, binding and scope, assignment, and generic operators. Paradigms covered include imperative (e.g., Pascal and C), functional (e.g., LISP), object-oriented (e.g., Smalltalk), and logic (e.g., Prolog). Students write programs in SCHEME to illustrate the paradigms. Prerequisite: Computer Science 102. Open to first-year students. Staff.

301. Algorithms. The course covers specific algorithms (searching, sorting, merging, and network algorithms), related data structures, an introduction to complexity theory (O-notation, the classes P and NP, NP-complete problems, and intractable problems), and laboratory investigation of algorithm complexity and efficiency. Students gain extensive further computing experience, both in the programming of specific algorithms and in the empirical investigation of their efficiency. Prerequisites or Corequisites: Computer Science 101 and 102. Open to first-year students. R. Brooks.

302. Theory of Computation. A course in the theoretical foundations of computer science. Topics include finite automata and regular languages, pushdown automata and context-free languages, Turing machines, computability and recursive functions, and complexity. Prerequisite: Computer Science 102. Staff.

360. Independent Study. Independent study by an individual student with a faculty member. Permission of the Department is required.
Students are limited to one independent study per semester. Staff.

365. Special Topics. A seminar usually involving a major project. Recent topics have been: The Mathematics and Algorithms of Computer Graphics, in which students designed and built a computer-graphics system, and Contemporary Programming Languages and their Implementations, in which students explored new languages, in some cases using the Internet to obtain languages such as Oberon, Python, Haskell, and Dylan. Permission of the instructor is required. Prerequisites vary with the topic covered, but are usually Computer Science 201 and 202. Staff.

Short Term Units

s35. Digital Design, Computer Architecture, and Interfacing. Beginning with the smallest logical building blocks -- logical gates -- the unit examines how to organize them into a computer and how to interface the computer to the physical world. Topics include combinational and clocked logic, microprogramming, parallel and serial communication, multiplexing, and analog-to-digital and digital-to-analog conversion. The unit is an intensive laboratory experience. Prerequisite: Computer Science 101. Open to first-year students. Enrollment is limited to 14. Staff.

s45. Seminar in Computer Science. The content varies. A recent topic was Cryptography and Data Security. Prerequisites vary with the topic covered. Staff.

s50. Individual Research. The Department permits registration for this unit only after the student submits a written proposal for a full-time research project to be completed during the Short Term and obtains the sponsorship of a member of the Department to direct the study and evaluate its results. Students are limited to one individual research unit. Staff.

© 1996 Bates College. All Rights Reserved. Last modified: 08/05/96 by PD
David versus Goliath: An analysis of 2020 stock market performance

19 Feb. 2021

Dear investors,

In the past we have written a lot about how in 2020 it was very difficult for an equally weighted approach to keep up with the capitalization-weighted benchmarks. Now that data for the entire year is available, we are able to show numerically just how pervasive the outperformance of mega caps was during the year. We will try to illustrate this by calculating the performance contribution of the MSCI All Countries.

In short, a performance contribution dissects the return of the constituents of an index. In the simple example below, where the index is made up of 5 companies, we can see that Stock A contributed 5.25% to the total return of the index (= 10.50%). A contribution of a single company is calculated by multiplying the weight a company holds within an index with the return the stock earned over the holding period. The performance of the index can then easily be calculated by summing the contributions of its constituents.

Stock   Weight   Return   Contribution   Cumulative Contr.
A       35%      15%      5.25%          5.25%
B       25%       8%      2.00%          7.25%
C       18%       5%      0.90%          8.15%
D       17%       5%      0.85%          9.00%
E        5%      30%      1.50%         10.50%

We have done the same exercise, but on a much larger scale, for the MSCI All Countries. We calculate the weight of the constituents each month, multiply it with their monthly returns and sum over the twelve-month period. Using this method for 2020, returns sum to 7.1% compared to the 6.6% for the Net Return index, so the monthly approximation is pretty accurate.

The graph below goes to show just how influential the global titans were, with the ten largest companies contributing 4.7% to the return of the index. Put differently, while the 10 largest companies - by their average weight over 2020 (*) - contributed 66% (= 4.7%/7.1%) to the index's total return, the remaining 2490 were able to add just 34%!

In the following graph we allotted an equal weight to all of the index's constituents.
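The contribution arithmetic in the worked five-stock example can be sketched in a few lines of Python. The labels and percentages below are the illustrative ones from the example table, not real holdings:

```python
# Performance contribution: weight x return per constituent,
# summed to give the total index return.

weights = {"A": 0.35, "B": 0.25, "C": 0.18, "D": 0.17, "E": 0.05}
returns = {"A": 0.15, "B": 0.08, "C": 0.05, "D": 0.05, "E": 0.30}

contributions = {s: weights[s] * returns[s] for s in weights}
index_return = sum(contributions.values())

print(round(contributions["A"], 4))  # 0.0525, i.e. 5.25%
print(round(index_return, 4))        # 0.105, i.e. 10.50%
```

The same loop, run month by month over an index's constituents and summed across twelve months, gives the annual approximation used for the MSCI All Countries figures in the text.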
So, the weight of Apple, 3.3% on average during 2020, falls back to about 0.04%, and the weight of the micro stocks actually increases from 0.01% to this same level. This time we show the contribution to the total return of the index on a relative rather than absolute level, which makes comparison of 2020 and 2021 data visually more appealing. In this graph the cumulative contribution only increases on the merit of a stock's return. If all stocks had identical returns then the cumulative contribution would be linear (grey line).

It is clear that in 2020 the largest half of the stocks also had the best return, reaching a cumulative contribution of about 200%. The graph then stabilizes for a while, only to drop precipitously towards the end. This implies that the remaining half consisted of the kind of apples you don't want in your basket: these companies had neutral to extremely negative contributions to the index's total return on average.

We hope this analysis shows that when your selection method was biased towards large caps in 2020, you were set up for success. Yet, while big tech got bigger, performance of mid-caps and smaller companies lagged substantially. While the capitalization-weighted version of the MSCI ACWI invested only about 10% of its assets in the smallest 1250 companies, stock pickers that opt to invest over the entire universe tend to end up allotting a lot more weight to these small to mid-caps. This has been punishing in the past, as large caps have been outperforming for a couple of years now, but never so painful as in 2020. In large part it explains why last year was so difficult for our funds.

All things change, however, and in 2021 we have seen the opposite happen (orange line on the graph). The large caps have contributed almost nothing so far, and the mid to small caps made a modest comeback. As we tend to focus more on the mid to small cap part of the index, this has been a tailwind for us.
Vector Navigator outperformed the index by about 2.5%. Vector Flexible had an even better month, outperforming the benchmark by 2.7%.

Best regards,

Werner, Thierry and Nils

(*) The conclusions remain unchanged when we sort stocks by their beginning or ending weight of 2020.
Multiplication Flash Cards 0-12

Learning multiplication after counting, addition, and subtraction is ideal. Children learn arithmetic through a natural progression: counting, then addition, then subtraction, then multiplication, and finally division. Why learn arithmetic in this order, and in particular why learn multiplication after counting, addition, and subtraction but before division? The following points answer these questions:

1. Children learn counting first by associating visible objects with their hands. A tangible example: how many apples are there in the basket? A more abstract example: how old are you?

2. From counting numbers, the next logical step is addition, followed by subtraction. Addition and subtraction tables can be very useful teaching aids for children, because they are visual tools that make the transition from counting easier.

3. Which should be learned next, multiplication or division? Multiplication is shorthand for addition. At this point, children have a firm grasp of addition, so multiplication is the next logical kind of arithmetic to learn.

Review the fundamentals of multiplication with a multiplication table. Consider an example: using a multiplication table, multiply four times three to get the answer twelve: 4 x 3 = 12. The intersection of row three and column four of the multiplication table is twelve, so twelve is the answer. For children beginning to learn multiplication this is easy: they can use addition to solve the problem, confirming that multiplication is shorthand for addition. Example: 4 x 3 = 4 + 4 + 4 = 12. This is an excellent introduction to the multiplication table. A further benefit is that the multiplication table is visual and connects back to addition.
Where do we begin learning multiplication using the multiplication table?

1. First, get familiar with the table.

2. Start with multiplying by one. Begin at row number one. Move to column number one. The intersection of row one and column one is the answer: 1.

3. Repeat these steps for multiplying by one. Multiply row one by columns one through twelve. The answers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 respectively.

4. Repeat these steps for multiplying by two. Multiply row two by columns one through five. The answers are 2, 4, 6, 8, and 10 respectively.

5. Let us jump ahead. Repeat these steps for multiplying by five. Multiply row five by columns one through twelve. The answers are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, and 60 respectively.

6. Now let us increase the level of difficulty. Repeat these steps for multiplying by three. Multiply row three by columns one through twelve. The answers are 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, and 36 respectively.

7. When you are comfortable with multiplication so far, try a test. Solve these multiplication problems in your head and then compare your answers to the multiplication table: multiply six and two, multiply nine and three, multiply one and eleven, multiply four and four, and multiply seven and two. The answers are 12, 27, 11, 16, and 14 respectively. If you got at least four out of five problems correct, create your own multiplication tests: compute the answers in your head, and check them using the multiplication table.
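The central idea here, that multiplication is shorthand for repeated addition and that the table is just a grid of these sums, can be checked with a short Python sketch (an illustration added for this article, not part of the original flash-card method):

```python
# Multiplication as repeated addition, the way a child might first
# reason about it, followed by a spot-check against the table rows.

def times(row, col):
    """Multiply row by col using repeated addition."""
    total = 0
    for _ in range(col):
        total += row
    return total

# Rows 1 through 12 of a multiplication table.
table = {row: [times(row, col) for col in range(1, 13)] for row in range(1, 13)}

print(times(4, 3))   # 4 x 3 = 4 + 4 + 4 = 12
print(table[3])      # [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36]
```

Row three printed here matches the answers listed in step 6 above.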
Theory (mathematical logic)

A deductive system is first understood from context, after which an element $\phi \in T$ of a deductively closed theory $T$ is then called a theorem of the theory. In many deductive systems there is usually a subset $\Sigma \subseteq T$ that is called "the set of axioms" of the theory $T$, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of sentences (theorems) obtained by the inference rules of the system applied to the set of axioms.

General theories (as expressed in formal language)

When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate. The construction of a theory begins by specifying a definite non-empty conceptual class $\mathcal{E}$, the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory, to distinguish them from other statements that may be derived from them. A theory $\mathcal{T}$ is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to $\mathcal{T}$ are called the elementary theorems of $\mathcal{T}$ and are said to be true. In this way, a theory can be seen as a way of designating a subset of $\mathcal{E}$ that only contains statements that are true. This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to $\mathcal{T}$. Thus the same elementary statement may be true with respect to one theory but false with respect to another.
This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory.^[1]

Subtheories and extensions

A theory $\mathcal{S}$ is a subtheory of a theory $\mathcal{T}$ if $\mathcal{S}$ is a subset of $\mathcal{T}$. If $\mathcal{T}$ is a subset of $\mathcal{S}$, then $\mathcal{S}$ is called an extension or a supertheory of $\mathcal{T}$.

Deductive theories

A theory is said to be a deductive theory if $\mathcal{T}$ is the deductive closure of a set of axioms. In a deductive theory, any sentence that is a logical consequence of one or more of the axioms is also a sentence of that theory. More formally, if $\vdash$ is a Tarski-style consequence relation, then $\mathcal{T}$ is closed under $\vdash$ (and so each of its theorems is a logical consequence of its axioms) if and only if, for all sentences $\phi$ in the language of the theory $\mathcal{T}$, if $\mathcal{T} \vdash \phi$, then $\phi \in \mathcal{T}$; or, equivalently, if $\mathcal{T}'$ is a finite subset of $\mathcal{T}$ (possibly the set of axioms of $\mathcal{T}$ in the case of finitely axiomatizable theories) and $\mathcal{T}' \vdash \phi$, then $\phi \in \mathcal{T}$.

Consistency and completeness

A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven (with respect to some deductive system, which is usually clear from context).
In a deductive system (such as first-order logic) that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory.

A satisfiable theory is a theory that has a model. This means there is a structure that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ. A consistent theory is sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory.

A complete theory is a consistent theory $\mathcal{T}$ such that for every sentence φ in its language, either φ is provable from $\mathcal{T}$ or $\mathcal{T} \cup \{φ\}$ is inconsistent. For theories closed under logical consequence, this means that for every sentence φ, either φ or its negation is contained in the theory. An incomplete theory is a consistent theory that is not complete. (See also ω-consistent theory for a stronger notion of consistency.)

Interpretation of a theory

An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a full interpretation, otherwise it is called a partial interpretation.

Theories associated with a structure

Each structure has several associated theories. The complete theory of a structure A is the set of all first-order sentences over the signature of A that are satisfied by A. It is denoted by Th(A). More generally, the theory of K, a class of σ-structures, is the set of all first-order σ-sentences that are satisfied by all structures in K, and is denoted by Th(K). Clearly Th(A) = Th({A}).
These notions can also be defined with respect to other logics.

For each σ-structure A, there are several associated theories in a larger signature σ' that extends σ by adding one new constant symbol for each element of the domain of A. (If the new constant symbols are identified with the elements of A that they represent, σ' can be taken to be σ ∪ A.) The cardinality of σ' is thus the larger of the cardinality of σ and the cardinality of A.

The diagram of A consists of all atomic or negated atomic σ'-sentences that are satisfied by A and is denoted by diag[A]. The positive diagram of A is the set of all atomic σ'-sentences that A satisfies. It is denoted by diag^+[A]. The elementary diagram of A is the set eldiag[A] of all first-order σ'-sentences that are satisfied by A or, equivalently, the complete (first-order) theory of the natural expansion of A to the signature σ'.

First-order theories

A first-order theory $\mathcal{QS}$ is a set of sentences in a first-order formal language $\mathcal{Q}$.

Derivation in a first-order theory

There are many formal derivation ("proof") systems for first-order logic, including Hilbert-style systems, natural deduction, and the sequent calculus.

Syntactic consequence in a first-order theory

A formula A is a syntactic consequence of a first-order theory $\mathcal{QS}$ if there is a derivation of A using only formulas in $\mathcal{QS}$ as non-logical axioms. Such a formula A is also called a theorem of $\mathcal{QS}$. The notation "$\mathcal{QS} \vdash A$" indicates A is a theorem of $\mathcal{QS}$.

Interpretation of a first-order theory

An interpretation of a first-order theory provides a semantics for the formulas of the theory. An interpretation is said to satisfy a formula if the formula is true according to the interpretation.
A model of a first-order theory $\mathcal{QS}$ is an interpretation in which every formula of $\mathcal{QS}$ is satisfied.

First-order theories with identity

A first-order theory $\mathcal{QS}$ is a first-order theory with identity if $\mathcal{QS}$ includes the identity relation symbol "=" and the reflexivity and substitution axiom schemes for this symbol.

Topics related to first-order theories

• Consistent set
• Enumeration theorem

One way to specify a theory is to define a set of axioms in a particular language; Peano arithmetic is an example. A second way to specify a theory is to begin with a structure and let the theory be the set of sentences satisfied by that structure; for example, the theory of (ℝ, +, ×, 0, 1, =) was shown by Tarski to be decidable.
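To make the notion of a deductively closed theory concrete, here is a toy sketch, not part of the article itself: statements are strings, the only inference rule is modus ponens over a fixed finite list of implications, and the "theory" generated by a set of axioms is its closure under that rule.

```python
# Toy model of deductive closure: the theory generated by a set of
# axioms is the smallest superset closed under the inference rule.
# Here the only rule is modus ponens over explicit implications.

def deductive_closure(axioms, implications):
    theory = set(axioms)
    changed = True
    while changed:
        changed = False
        for p, q in implications:          # pair (p, q) means "p -> q"
            if p in theory and q not in theory:
                theory.add(q)              # modus ponens: from p and p -> q, infer q
                changed = True
    return theory

axioms = {"A"}
implications = [("A", "B"), ("B", "C"), ("D", "E")]
theory = deductive_closure(axioms, implications)
print(sorted(theory))  # ['A', 'B', 'C']; "E" is not derivable since "D" is absent
```

In this miniature setting, deductive_closure({"A"}, implications) is a subtheory of deductive_closure({"A", "D"}, implications), matching the subtheory/extension terminology above.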
Markov Chains Let \(P\) be a regular \(n\times n\) transition matrix. Then there is a row vector \(W=(w_1, w_2,\dots,w_n\)) of positive real numbers summing to \(1\) so that as \(m\) tends to infinity, each row of \(P^m\) tends to \(W\text{.}\) Furthermore, \(WP=W\text{,}\) and for each \(i=1,2,\dots,n\text{,}\) the value \(w_i\) is the limiting probability of being in state \(S_i\text{.}\)
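The limit in the theorem can be checked numerically. The sketch below (plain Python, with an arbitrary illustrative 2-state regular chain, not an example from the text) raises \(P\) to a high power and observes that both rows approach the stationary vector \(W\):

```python
# Numerical illustration: for a regular transition matrix P, the rows
# of P^m all approach the stationary row vector W, which satisfies W P = W.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, m):
    out = p
    for _ in range(m - 1):
        out = mat_mul(out, p)
    return out

P = [[0.9, 0.1],
     [0.5, 0.5]]           # regular: every entry is positive

Pm = mat_pow(P, 50)
print(Pm[0])                # both rows are numerically equal to
print(Pm[1])                # W = (5/6, 1/6), the stationary distribution
```

For this chain, solving \(WP=W\) with \(w_1+w_2=1\) by hand gives \(W=(5/6,\,1/6)\), and the printed rows of \(P^{50}\) agree with it to many decimal places.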
The degeneracy of the free Dirac equation

Parity-mixed solutions of the free Dirac equation with the same 4-momentum are considered. The first-order EM energy has an electric dipole moment term whose value depends on the mixing angle. Further implications of this degeneracy for perturbative calculations are discussed. It is argued that the properties of the Dirac equation with the Coulomb potential can be used to decide the mixing angle, which should be zero.

Pub Date: August 1991

Keywords: Coulomb Potential; Degeneration; Dipole Moments; Dirac Equation; Electric Dipoles; Perturbation Theory; Fermions; Hamiltonian Functions; Wave Functions; Thermodynamics and Statistical Physics